Sample records for evolved numerous times

  1. A Direct Numerical Simulation of a Temporally Evolving Liquid-Gas Turbulent Mixing Layer

    NASA Astrophysics Data System (ADS)

    Vu, Lam Xuan; Chiodi, Robert; Desjardins, Olivier

    2017-11-01

    Air-blast atomization occurs when streams of co-flowing high-speed gas and low-speed liquid shear to form drops. Air-blast atomization has numerous industrial applications, from jet combustion engines to sprays used for medical coatings. The high Reynolds number and dynamic pressure ratio of a realistic air-blast atomization case require large eddy simulation and the use of multiphase sub-grid scale (SGS) models. A direct numerical simulation (DNS) of a temporally evolving mixing layer is presented to be used as a base case from which future multiphase SGS models can be developed. To construct the liquid-gas mixing layer, half of a channel flow from Kim et al. (JFM, 1987) is placed on top of a static liquid layer that then evolves over time. The DNS is performed using a conservative finite volume incompressible multiphase flow solver in which phase tracking is handled with a discretely conservative volume-of-fluid method. This study presents statistics on velocity and volume fraction at different Reynolds and Weber numbers.

  2. Inference of Time-Evolving Coupled Dynamical Systems in the Presence of Noise

    NASA Astrophysics Data System (ADS)

    Stankovski, Tomislav; Duggento, Andrea; McClintock, Peter V. E.; Stefanovska, Aneta

    2012-07-01

    A new method is introduced for analysis of interactions between time-dependent coupled oscillators, based on the signals they generate. It distinguishes unsynchronized dynamics from noise-induced phase slips and enables the evolution of the coupling functions and other parameters to be followed. It is based on phase dynamics, with Bayesian inference of the time-evolving parameters achieved by shaping the prior densities to incorporate knowledge of previous samples. The method is tested numerically and applied to reveal and quantify the time-varying nature of cardiorespiratory interactions.
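
    A minimal sketch of the model class behind this record (and records 4 and 8 below); the basis expansion and the propagation rule shown here are illustrative assumptions, not details taken from the paper. Each phase is modelled as

        \dot{\phi}_i = \omega_i + \sum_k c_k^{(i)} \, \Phi_k(\phi_i, \phi_j) + \xi_i(t),

    where the \Phi_k form a (for example, Fourier) basis for the coupling function and \xi_i is white noise. Bayesian inference returns a posterior over the parameters c_k^{(i)} for each data window, and their time evolution is followed by carrying that posterior forward as the next window's prior with an inflated covariance, e.g. \Sigma^{\mathrm{prior}}_{n+1} = \Sigma^{\mathrm{post}}_n + \rho^2 \, \mathrm{diag}(\Sigma^{\mathrm{post}}_n).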

  3. Numerical Simulation of a Spatially Evolving Supersonic Turbulent Boundary Layer

    NASA Technical Reports Server (NTRS)

    Gatski, T. B.; Erlebacher, G.

    2002-01-01

    The results from direct numerical simulations of a spatially evolving, supersonic, flat-plate turbulent boundary-layer flow, with a free-stream Mach number of 2.25, are presented. The simulated flow field extends from a transition region, initiated by wall suction and blowing near the inflow boundary, into the fully turbulent regime. Distributions of mean and turbulent flow quantities are obtained and an analysis of these quantities is performed at a downstream station corresponding to Re_x = 5.548 × 10⁶ based on distance from the leading edge.

  4. Dynamical Bayesian inference of time-evolving interactions: from a pair of coupled oscillators to networks of oscillators.

    PubMed

    Duggento, Andrea; Stankovski, Tomislav; McClintock, Peter V E; Stefanovska, Aneta

    2012-12-01

    Living systems have time-evolving interactions that, until recently, could not be identified accurately from recorded time series in the presence of noise. Stankovski et al. [Phys. Rev. Lett. 109, 024101 (2012)] introduced a method based on dynamical Bayesian inference that facilitates the simultaneous detection of time-varying synchronization, directionality of influence, and coupling functions. It can distinguish unsynchronized dynamics from noise-induced phase slips. The method is based on phase dynamics, with Bayesian inference of the time-evolving parameters being achieved by shaping the prior densities to incorporate knowledge of previous samples. We now present the method in detail using numerically generated data, data from an analog electronic circuit, and cardiorespiratory data. We also generalize the method to encompass networks of interacting oscillators and thus demonstrate its applicability to small-scale networks.

  5. Hydrodynamic characteristics of the two-phase flow field at gas-evolving electrodes: numerical and experimental studies

    NASA Astrophysics Data System (ADS)

    Liu, Cheng-Lin; Sun, Ze; Lu, Gui-Min; Yu, Jian-Guo

    2018-05-01

    The gas-evolving vertical electrode system is a typical industrial electrochemical reactor. Gas bubbles are released from the surfaces of the anode and affect the electrolyte flow pattern and even the cell performance. In the current work, the hydrodynamics induced by the air bubbles in a cold model was experimentally and numerically investigated. Particle image velocimetry and volumetric three-component velocimetry techniques were applied to experimentally visualize the hydrodynamic characteristics and flow fields in a two-dimensional (2D) plane and a three-dimensional (3D) space, respectively. Measurements were performed at different gas rates. Furthermore, the corresponding mathematical model was developed under identical conditions for the qualitative and quantitative analyses. The experimental measurements were compared with the numerical results based on the mathematical model. The study of the time-averaged flow field, three velocity components, instantaneous velocity and turbulent intensity indicates that the numerical model qualitatively reproduces the liquid motion. The 3D model predictions capture the flow behaviour more accurately than the 2D model in this study.

  6. Hydrodynamic characteristics of the two-phase flow field at gas-evolving electrodes: numerical and experimental studies.

    PubMed

    Liu, Cheng-Lin; Sun, Ze; Lu, Gui-Min; Yu, Jian-Guo

    2018-05-01

    The gas-evolving vertical electrode system is a typical industrial electrochemical reactor. Gas bubbles are released from the surfaces of the anode and affect the electrolyte flow pattern and even the cell performance. In the current work, the hydrodynamics induced by the air bubbles in a cold model was experimentally and numerically investigated. Particle image velocimetry and volumetric three-component velocimetry techniques were applied to experimentally visualize the hydrodynamic characteristics and flow fields in a two-dimensional (2D) plane and a three-dimensional (3D) space, respectively. Measurements were performed at different gas rates. Furthermore, the corresponding mathematical model was developed under identical conditions for the qualitative and quantitative analyses. The experimental measurements were compared with the numerical results based on the mathematical model. The study of the time-averaged flow field, three velocity components, instantaneous velocity and turbulent intensity indicates that the numerical model qualitatively reproduces the liquid motion. The 3D model predictions capture the flow behaviour more accurately than the 2D model in this study.

  7. Hydrodynamic characteristics of the two-phase flow field at gas-evolving electrodes: numerical and experimental studies

    PubMed Central

    Lu, Gui-Min; Yu, Jian-Guo

    2018-01-01

    The gas-evolving vertical electrode system is a typical industrial electrochemical reactor. Gas bubbles are released from the surfaces of the anode and affect the electrolyte flow pattern and even the cell performance. In the current work, the hydrodynamics induced by the air bubbles in a cold model was experimentally and numerically investigated. Particle image velocimetry and volumetric three-component velocimetry techniques were applied to experimentally visualize the hydrodynamic characteristics and flow fields in a two-dimensional (2D) plane and a three-dimensional (3D) space, respectively. Measurements were performed at different gas rates. Furthermore, the corresponding mathematical model was developed under identical conditions for the qualitative and quantitative analyses. The experimental measurements were compared with the numerical results based on the mathematical model. The study of the time-averaged flow field, three velocity components, instantaneous velocity and turbulent intensity indicates that the numerical model qualitatively reproduces the liquid motion. The 3D model predictions capture the flow behaviour more accurately than the 2D model in this study. PMID:29892347

  8. Dynamical Bayesian inference of time-evolving interactions: From a pair of coupled oscillators to networks of oscillators

    NASA Astrophysics Data System (ADS)

    Duggento, Andrea; Stankovski, Tomislav; McClintock, Peter V. E.; Stefanovska, Aneta

    2012-12-01

    Living systems have time-evolving interactions that, until recently, could not be identified accurately from recorded time series in the presence of noise. Stankovski et al. [Phys. Rev. Lett. 109, 024101 (2012)] introduced a method based on dynamical Bayesian inference that facilitates the simultaneous detection of time-varying synchronization, directionality of influence, and coupling functions. It can distinguish unsynchronized dynamics from noise-induced phase slips. The method is based on phase dynamics, with Bayesian inference of the time-evolving parameters being achieved by shaping the prior densities to incorporate knowledge of previous samples. We now present the method in detail using numerically generated data, data from an analog electronic circuit, and cardiorespiratory data. We also generalize the method to encompass networks of interacting oscillators and thus demonstrate its applicability to small-scale networks.

  9. Tracking Time Evolution of Collective Attention Clusters in Twitter: Time Evolving Nonnegative Matrix Factorisation.

    PubMed

    Saito, Shota; Hirata, Yoshito; Sasahara, Kazutoshi; Suzuki, Hideyuki

    2015-01-01

    Micro-blogging services, such as Twitter, offer opportunities to analyse user behaviour. Discovering and distinguishing behavioural patterns in micro-blogging services is valuable. However, it is challenging to distinguish users and to track the temporal development of collective attention within distinct user groups in Twitter. In this paper, we formulate this problem as tracking matrices decomposed by Nonnegative Matrix Factorisation for time-sequential matrix data, and propose a novel extension of Nonnegative Matrix Factorisation, which we refer to as Time Evolving Nonnegative Matrix Factorisation (TENMF). In our method, we describe the users and the words they post in some time interval by a matrix, and use several such matrices as time-sequential data. Subsequently, we apply Time Evolving Nonnegative Matrix Factorisation to these time-sequential matrices. TENMF can decompose time-sequential matrices and track the connection among the decomposed matrices, whereas standard NMF decomposes each matrix into two lower-dimensional matrices arbitrarily, which might lose the time-sequential connection. Our proposed method performs well on artificial data. Moreover, we present several results and insights from experiments using real data from Twitter.
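
    The sketch below illustrates only the underlying idea of tracking factorisations across time-sequential matrices; it is not the authors' TENMF algorithm. It uses scikit-learn's ordinary NMF (a tool choice assumed here) and keeps consecutive decompositions connected by warm-starting each time slice from the previous factors; the dimensions and random data are placeholders.

      # Illustrative sketch only: ordinary NMF applied to time-sequential user-word
      # matrices, with consecutive factorisations linked by warm-starting from the
      # previous factors. This mimics the idea of tracking decomposed matrices over
      # time; it is not the authors' TENMF algorithm.
      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(0)
      T, n_users, n_words, k = 5, 100, 50, 4                      # assumed toy dimensions
      X_seq = [rng.random((n_users, n_words)) for _ in range(T)]  # stand-in data matrices

      W_prev = H_prev = None
      factors = []
      for X in X_seq:
          if W_prev is None:
              model = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
              W = model.fit_transform(X)
          else:
              # warm-start from the previous slice so clusters stay aligned in time
              model = NMF(n_components=k, init="custom", max_iter=500)
              W = model.fit_transform(X, W=W_prev.copy(), H=H_prev.copy())
          H = model.components_
          factors.append((W, H))
          W_prev, H_prev = W, H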

  10. Fast Numerical Solution of the Plasma Response Matrix for Real-time Ideal MHD Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glasser, Alexander; Kolemen, Egemen; Glasser, Alan H.

    To help effectuate near real-time feedback control of ideal MHD instabilities in tokamak geometries, a parallelized version of A.H. Glasser's DCON (Direct Criterion of Newcomb) code is developed. To motivate the numerical implementation, we first solve DCON's δW formulation with a Hamilton-Jacobi theory, elucidating analytical and numerical features of the ideal MHD stability problem. The plasma response matrix is demonstrated to be the solution of an ideal MHD Riccati equation. We then describe our adaptation of DCON with numerical methods natural to solutions of the Riccati equation, parallelizing it to enable its operation in near real-time. We replace DCON's serial integration of perturbed modes—which satisfy a singular Euler-Lagrange equation—with a domain-decomposed integration of state transition matrices. Output is shown to match results from DCON with high accuracy, and with computation time < 1 s. Such computational speed may enable active feedback ideal MHD stability control, especially in plasmas whose ideal MHD equilibria evolve with inductive timescale τ ≳ 1 s—as in ITER. Further potential applications of this theory are discussed.

  11. Fast Numerical Solution of the Plasma Response Matrix for Real-time Ideal MHD Control

    DOE PAGES

    Glasser, Alexander; Kolemen, Egemen; Glasser, Alan H.

    2018-03-26

    To help effectuate near real-time feedback control of ideal MHD instabilities in tokamak geometries, a parallelized version of A.H. Glasser's DCON (Direct Criterion of Newcomb) code is developed. To motivate the numerical implementation, we first solve DCON's δW formulation with a Hamilton-Jacobi theory, elucidating analytical and numerical features of the ideal MHD stability problem. The plasma response matrix is demonstrated to be the solution of an ideal MHD Riccati equation. We then describe our adaptation of DCON with numerical methods natural to solutions of the Riccati equation, parallelizing it to enable its operation in near real-time. We replace DCON's serial integration of perturbed modes—which satisfy a singular Euler-Lagrange equation—with a domain-decomposed integration of state transition matrices. Output is shown to match results from DCON with high accuracy, and with computation time < 1 s. Such computational speed may enable active feedback ideal MHD stability control, especially in plasmas whose ideal MHD equilibria evolve with inductive timescale τ ≳ 1 s—as in ITER. Further potential applications of this theory are discussed.
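
    Schematically, and only to illustrate how such a Riccati equation arises (the reduced Euler-Lagrange form below suppresses the cross-coupling terms of DCON's full δW formulation, so treat it as an assumption), write the second-order system in first-order form with u the perturbed displacement harmonics and v its conjugate:

        u' = F^{-1} v, \qquad v' = K u.

    The plasma response matrix P defined by v = P u then satisfies the matrix Riccati equation

        P' = K - P F^{-1} P,

    whose solution over each subdomain can be propagated with the state transition matrix of the underlying linear system, consistent with the domain-decomposed integration described above.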

  12. Singular perturbation of smoothly evolving Hele-Shaw solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siegel, M.; Tanveer, S.

    1996-01-01

    We present analytical scaling results, confirmed by accurate numerics, to show that there exists a class of smoothly evolving zero-surface-tension solutions to the Hele-Shaw problem that are significantly perturbed by an arbitrarily small amount of surface tension in order-one time. © 1996 The American Physical Society.

  13. Scalability, Timing, and System Design Issues for Intrinsic Evolvable Hardware

    NASA Technical Reports Server (NTRS)

    Hereford, James; Gwaltney, David

    2004-01-01

    In this paper we address several issues pertinent to intrinsic evolvable hardware (EHW). The first issue is scalability; namely, how the design space scales as the programming string for the programmable device gets longer. We develop a model for population size and the number of generations as a function of the programming string length, L, and show that the number of circuit evaluations is an O(L²) process. We compare our model to several successful intrinsic EHW experiments and discuss the many implications of our model. The second issue that we address is the timing of intrinsic EHW experiments. We show that the processing time is a small part of the overall time to derive or evolve a circuit and that major improvements in processor speed alone will have only a minimal impact on improving the scalability of intrinsic EHW. The third issue we consider is the system-level design of intrinsic EHW experiments. We review what other researchers have done to break the scalability barrier and contend that the type of reconfigurable platform and the evolutionary algorithm are tied together and impose limits on each other.

  14. How Physician Perspectives on E-Prescribing Evolve over Time

    PubMed Central

    Patel, Vaishali; Pfoh, Elizabeth R.; Kaushal, Rainu

    2016-01-01

    Background: Physicians are expending tremendous resources transitioning to new electronic health records (EHRs), with electronic prescribing as a key functionality of most systems. Physician dissatisfaction post-transition can be quite marked, especially initially. However, little is known about how physicians' experiences using new EHRs for e-prescribing evolve over time. We previously published a qualitative case study about the early physician experience transitioning from an older to a newer, more robust EHR in the outpatient setting, focusing on their perceptions of the electronic prescribing functionality. Objective: Our current objective was to examine how perceptions about using the new EHR evolved over time, again with a focus on electronic prescribing. Methods: We interviewed thirteen internists at an academic medical center-affiliated ambulatory care clinic who had transitioned to the new EHR two years prior. We used a grounded theory approach to analyze semi-structured interviews and generate key themes. Results: We identified five themes: efficiency and usability, effects on safety, ongoing training requirements, customization, and competing priorities for the EHR. We found that, even for experienced e-prescribers, achieving prior levels of perceived prescribing efficiency took nearly two years. Despite the fact that speed in performing prescribing-related tasks was highly important, most were still not utilizing system shortcuts or customization features designed to maximize efficiency. Alert fatigue remained common. However, direct transmission of prescriptions to pharmacies was highly valued and its benefits generally outweighed the other features considered poorly designed for physician workflow. Conclusions: Ensuring that physicians are able to do key prescribing tasks efficiently is critical to the perceived value of e-prescribing applications. However, successful transitions may take longer than expected and e-prescribing system features that

  15. Quantifying selection in evolving populations using time-resolved genetic data

    NASA Astrophysics Data System (ADS)

    Illingworth, Christopher J. R.; Mustonen, Ville

    2013-01-01

    Methods which uncover the molecular basis of the adaptive evolution of a population address some important biological questions. For example, the problem of identifying genetic variants which underlie drug resistance, a question of importance for the treatment of pathogens, and of cancer, can be understood as a matter of inferring selection. One difficulty in the inference of variants under positive selection is the potential complexity of the underlying evolutionary dynamics, which may involve an interplay between several contributing processes, including mutation, recombination and genetic drift. A source of progress may be found in modern sequencing technologies, which confer an increasing ability to gather information about evolving populations, granting a window into these complex processes. One particularly interesting development is the ability to follow evolution as it happens, by whole-genome sequencing of an evolving population at multiple time points. We here discuss how to use time-resolved sequence data to draw inferences about the evolutionary dynamics of a population under study. We begin by reviewing our earlier analysis of a yeast selection experiment, in which we used a deterministic evolutionary framework to identify alleles under selection for heat tolerance, and to quantify the selection acting upon them. Considering further the use of advanced intercross lines to measure selection, we here extend this framework to cover scenarios of simultaneous recombination and selection, and of two driver alleles with multiple linked neutral, or passenger, alleles, where the driver pair evolves under an epistatic fitness landscape. We conclude by discussing the limitations of the approach presented and outlining future challenges for such methodologies.

  16. Landlab: A numerical modeling framework for evolving Earth surfaces from mountains to the coast

    NASA Astrophysics Data System (ADS)

    Gasparini, N. M.; Adams, J. M.; Tucker, G. E.; Hobley, D. E. J.; Hutton, E.; Istanbulluoglu, E.; Nudurupati, S. S.

    2016-02-01

    Landlab is an open-source, user-friendly, component-based modeling framework for exploring the evolution of Earth's surface. Landlab itself is not a model. Instead, it is a computational framework that facilitates the development of numerical models of coupled earth surface processes. The Landlab Python library includes a gridding engine and process components, along with support functions for tasks such as reading in DEM data and input variables, setting boundary conditions, and plotting and outputting data. Each user of Landlab builds his or her own unique model. The first step in building a Landlab model is generally initializing a grid, either regular (raster) or irregular (e.g. delaunay or radial), and process components. This initialization process involves reading in relevant parameter values and data. The process components act on the grid to alter grid properties over time. For example, a component exists that can track the growth, death, and succession of vegetation over time. There are also several components that evolve surface elevation, through processes such as fluvial sediment transport and linear diffusion, among others. Users can also build their own process components, taking advantage of existing functions in Landlab such as those that identify grid connectivity and calculate gradients and flux divergence. The general nature of the framework makes it applicable to diverse environments - from bedrock rivers to a pile of sand - and processes acting over a range of spatial and temporal scales. In this poster we illustrate how a user builds a model using Landlab and propose a number of ways in which Landlab can be applied in coastal environments - from dune migration to channelization of barrier islands. We seek input from the coastal community as to how the process component library can be expanded to explore the diverse phenomena that act to shape coastal environments.
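
    Landlab is the open-source Python library named in this record, so a minimal model of the kind described (initialize a grid, attach a process component, step it through time) can be sketched directly; exact argument names vary between Landlab versions, and the grid size, diffusivity and time step below are assumed values.

      # Minimal Landlab-style model: a raster grid whose surface evolves by linear
      # diffusion. Component and field names follow Landlab conventions, but exact
      # signatures vary between Landlab versions, and the grid size, diffusivity and
      # time step are assumed values.
      import numpy as np
      from landlab import RasterModelGrid
      from landlab.components import LinearDiffuser

      grid = RasterModelGrid((50, 80), xy_spacing=10.0)           # 50 x 80 nodes, 10 m spacing
      z = grid.add_zeros("topographic__elevation", at="node")     # elevation field on nodes
      z += np.random.default_rng(1).random(z.size)                # small initial roughness (m)

      diffuser = LinearDiffuser(grid, linear_diffusivity=0.01)    # hillslope diffusivity (m^2/yr)

      dt = 100.0                                                  # years per step
      for _ in range(1000):
          diffuser.run_one_step(dt)                               # advance the surface in time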

  17. Time-evolving genetic networks reveal a NAC troika that negatively regulates leaf senescence in Arabidopsis.

    PubMed

    Kim, Hyo Jung; Park, Ji-Hwan; Kim, Jingil; Kim, Jung Ju; Hong, Sunghyun; Kim, Jeongsik; Kim, Jin Hee; Woo, Hye Ryun; Hyeon, Changbong; Lim, Pyung Ok; Nam, Hong Gil; Hwang, Daehee

    2018-05-22

    Senescence is controlled by time-evolving networks that describe the temporal transition of interactions among senescence regulators. Here, we present time-evolving networks for NAM/ATAF/CUC (NAC) transcription factors in Arabidopsis during leaf aging. The most evident characteristic of these time-dependent networks was a shift from positive to negative regulation among NACs at a presenescent stage. ANAC017, ANAC082, and ANAC090, referred to as a "NAC troika," govern the positive-to-negative regulatory shift. Knockout of the NAC troika accelerated senescence and the induction of other NACs, whereas overexpression of the NAC troika had the opposite effects. Transcriptome and molecular analyses revealed shared suppression of senescence-promoting processes by the NAC troika, including salicylic acid (SA) and reactive oxygen species (ROS) responses, but with predominant regulation of SA and ROS responses by ANAC090 and ANAC017, respectively. Our time-evolving networks provide a unique regulatory module of presenescent repressors that direct the timely induction of senescence-promoting processes at the presenescent stage of leaf aging. Copyright © 2018 the Author(s). Published by PNAS.

  18. Time-evolving genetic networks reveal a NAC troika that negatively regulates leaf senescence in Arabidopsis

    PubMed Central

    Kim, Hyo Jung; Park, Ji-Hwan; Kim, Jingil; Kim, Jung Ju; Hong, Sunghyun; Kim, Jin Hee; Woo, Hye Ryun; Lim, Pyung Ok; Nam, Hong Gil; Hwang, Daehee

    2018-01-01

    Senescence is controlled by time-evolving networks that describe the temporal transition of interactions among senescence regulators. Here, we present time-evolving networks for NAM/ATAF/CUC (NAC) transcription factors in Arabidopsis during leaf aging. The most evident characteristic of these time-dependent networks was a shift from positive to negative regulation among NACs at a presenescent stage. ANAC017, ANAC082, and ANAC090, referred to as a “NAC troika,” govern the positive-to-negative regulatory shift. Knockout of the NAC troika accelerated senescence and the induction of other NACs, whereas overexpression of the NAC troika had the opposite effects. Transcriptome and molecular analyses revealed shared suppression of senescence-promoting processes by the NAC troika, including salicylic acid (SA) and reactive oxygen species (ROS) responses, but with predominant regulation of SA and ROS responses by ANAC090 and ANAC017, respectively. Our time-evolving networks provide a unique regulatory module of presenescent repressors that direct the timely induction of senescence-promoting processes at the presenescent stage of leaf aging. PMID:29735710

  19. Real-time visualization of soliton molecules with evolving behavior in an ultrafast fiber laser

    NASA Astrophysics Data System (ADS)

    Liu, Meng; Li, Heng; Luo, Ai-Ping; Cui, Hu; Xu, Wen-Cheng; Luo, Zhi-Chao

    2018-03-01

    Ultrafast fiber lasers have been demonstrated to be great platforms for the investigation of soliton dynamics. Soliton molecules, as one of the most fascinating nonlinear phenomena, have been a hot topic in the field of nonlinear optics in recent years. Herein, we experimentally observed the real-time evolving behavior of soliton molecules in an ultrafast fiber laser by using the dispersive Fourier transform technique. Several types of evolving soliton molecules were obtained in our experiments, such as soliton molecules with monotonically or chaotically evolving phase, and with flipping and hopping phase. These results would be helpful to the communities interested in soliton nonlinear dynamics as well as ultrafast laser technologies.

  20. The evolving block universe and the meshing together of times.

    PubMed

    Ellis, George F R

    2014-10-01

    It has been proposed that spacetime should be regarded as an evolving block universe, bounded to the future by the present time, which continually extends to the future. This future boundary is defined at each time by measuring proper time along Ricci eigenlines from the start of the universe. A key point, then, is that physical reality can be represented at many different scales: hence, the passage of time may be seen as different at different scales, with quantum gravity determining the evolution of spacetime itself at the Planck scale, but quantum field theory and classical physics determining the evolution of events within spacetime at larger scales. The fundamental issue then arises as to how the effective times at different scales mesh together, leading to the concepts of global and local times. © 2014 New York Academy of Sciences.

  1. Time-evolving bubbles in two-dimensional stokes flow

    NASA Technical Reports Server (NTRS)

    Tanveer, Saleh; Vasconcelos, Giovani L.

    1994-01-01

    A general class of exact solutions is presented for a time-evolving bubble in a two-dimensional slow viscous flow in the presence of surface tension. These solutions can describe a bubble in a linear shear flow as well as an expanding or contracting bubble in an otherwise quiescent flow. In the case of expanding bubbles, the solutions have a simple behavior in the sense that for essentially arbitrary initial shapes the bubble will asymptote to an expanding circle. Contracting bubbles, on the other hand, can develop narrow structures ('near-cusps') on the interface and may undergo 'break up' before all the bubble fluid is completely removed. The mathematical structure underlying the existence of these exact solutions is also investigated.

  2. Proximate effects of temperature versus evolved intrinsic constraints for embryonic development times among temperate and tropical songbirds

    USGS Publications Warehouse

    Ton, Riccardo; Martin, Thomas E.

    2017-01-01

    The relative importance of intrinsic constraints imposed by evolved physiological trade-offs versus the proximate effects of temperature for interspecific variation in embryonic development time remains unclear. Understanding this distinction is important because slow development due to evolved trade-offs can yield phenotypic benefits, whereas slow development from low temperature can yield costs. We experimentally increased embryonic temperature in free-living tropical and north temperate songbird species to test these alternatives. Warmer temperatures consistently shortened development time without costs to embryo mass or metabolism. However, proximate effects of temperature played an increasingly stronger role than intrinsic constraints for development time among species with colder natural incubation temperatures. Long development times of tropical birds have been thought to primarily reflect evolved physiological trade-offs that facilitate their greater longevity. In contrast, our results indicate a much stronger role of temperature in embryonic development time than currently thought.

  3. Maxwell's demons everywhere: evolving design as the arrow of time.

    PubMed

    Bejan, Adrian

    2014-02-10

    Science holds that the arrow of time in nature is imprinted on one-way (irreversible) phenomena, and is accounted for by the second law of thermodynamics. Here I show that the arrow of time is painted much more visibly on another self-standing phenomenon: the occurrence and change (evolution in time) of flow organization throughout nature, animate and inanimate. This other time arrow has been present in science but not recognized as such since the birth of thermodynamics. It is Maxwell's demon. Translated in macroscopic terms, this is the physics of the phenomenon of design, which is the universal natural tendency of flow systems to evolve into configurations that provide progressively greater access over time, and is summarized as the constructal law of design and evolution in nature. Knowledge is the ability to effect design changes that facilitate human flows on the landscape. Knowledge too flows.

  4. Generalized Robertson-Walker Space-Time Admitting Evolving Null Horizons Related to a Black Hole Event Horizon.

    PubMed

    Duggal, K L

    2016-01-01

    A new technique is used to study a family of time-dependent null horizons, called "Evolving Null Horizons" (ENHs), of generalized Robertson-Walker (GRW) space-time (M̄, ḡ) such that the metric ḡ satisfies a kinematic condition. This work is different from our early papers on the same issue, where we used (1 + n)-splitting space-time, but only some special subcases of GRW space-time have this formalism. Also, in contrast to previous work, we have proved that each member of the ENHs is totally umbilical in (M̄, ḡ). Finally, we show that there exists an ENH which is always a null horizon evolving into a black hole event horizon, and we suggest some open problems.

  5. Real time wave forecasting using wind time history and numerical model

    NASA Astrophysics Data System (ADS)

    Jain, Pooja; Deo, M. C.; Latha, G.; Rajendran, V.

    Operational activities in the ocean, like planning for structural repairs or fishing expeditions, require real-time prediction of waves over typical time durations of, say, a few hours. Such predictions can be made by using a numerical model or a time series model employing continuously recorded waves. This paper presents another option to do so, based on a different time series approach in which the input is in the form of preceding wind speed and wind direction observations. This would be useful for those stations where costly wave buoys are not deployed and instead only meteorological buoys measuring wind are moored. The technique employs the alternative artificial intelligence approaches of an artificial neural network (ANN), genetic programming (GP) and model tree (MT) to carry out the time series modeling of wind to obtain waves. Wind observations at four offshore sites along the east coast of India were used. For calibration purposes the wave data were generated using a numerical model. The predicted waves obtained using the proposed time series models, when compared with the numerically generated waves, showed good resemblance in terms of the selected error criteria. Large differences across the chosen techniques of ANN, GP and MT were not noticed. Wave hindcasting at the same time step and predictions over shorter lead times were better than predictions over longer lead times. The proposed method is a cost-effective and convenient option when site-specific information is desired.

  6. Generalized Robertson-Walker Space-Time Admitting Evolving Null Horizons Related to a Black Hole Event Horizon

    PubMed Central

    2016-01-01

    A new technique is used to study a family of time-dependent null horizons, called “Evolving Null Horizons” (ENHs), of generalized Robertson-Walker (GRW) space-time (M¯,g¯) such that the metric g¯ satisfies a kinematic condition. This work is different from our early papers on the same issue where we used (1 + n)-splitting space-time but only some special subcases of GRW space-time have this formalism. Also, in contrast to previous work, we have proved that each member of ENHs is totally umbilical in (M¯,g¯). Finally, we show that there exists an ENH which is always a null horizon evolving into a black hole event horizon and suggest some open problems. PMID:27722202

  7. Time Evolving Fission Chain Theory and Fast Neutron and Gamma-Ray Counting Distributions

    DOE PAGES

    Kim, K. S.; Nakae, L. F.; Prasad, M. K.; ...

    2015-11-01

    Here, we solve a simple theoretical model of time-evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, the accumulation of fissions in time, and the accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time-evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting. Explicit formulas for all correlated moments are given up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for the probabilities of time-dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time-tagged data for neutron and gamma-ray counting, and from these data the counting distributions.
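
    The abstract notes that the time-dependent fission-chain probability equations have a simple Monte Carlo realization. The toy sketch below is a generic branching-process realization of that idea, not the paper's code: the reaction rate, fission probability and neutron multiplicity distribution are illustrative assumptions chosen to keep the chains subcritical.

      # Toy Monte Carlo realisation of a time-evolving fission chain (illustrative
      # only; the rate, fission probability and multiplicity distribution are
      # assumed, not taken from the paper). Each neutron lives an exponential time,
      # then either leaks or induces a fission that emits nu new neutrons.
      import heapq
      import random

      LAMBDA = 1.0e6                 # total reaction rate per neutron, 1/s (assumed)
      P_FISSION = 0.4                # probability a reaction is a fission (assumed, subcritical)
      NU_PMF = {0: 0.05, 1: 0.2, 2: 0.35, 3: 0.3, 4: 0.1}   # assumed multiplicity pmf

      rng = random.Random(0)

      def simulate_chain(t_max=1.0e-4):
          """Return (fission times, leak times) for one chain started by a single neutron."""
          events = [rng.expovariate(LAMBDA)]   # heap of pending neutron "death" times
          fissions, leaks = [], []
          while events:
              t = heapq.heappop(events)
              if t > t_max:                    # censor anything beyond the counting gate
                  continue
              if rng.random() < P_FISSION:
                  fissions.append(t)
                  nu = rng.choices(list(NU_PMF), weights=list(NU_PMF.values()))[0]
                  for _ in range(nu):
                      heapq.heappush(events, t + rng.expovariate(LAMBDA))
              else:
                  leaks.append(t)
          return fissions, leaks

      chains = [simulate_chain() for _ in range(1000)]
      mean_fissions = sum(len(f) for f, _ in chains) / len(chains)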

  8. Time's arrow: A numerical experiment

    NASA Astrophysics Data System (ADS)

    Fowles, G. Richard

    1994-04-01

    The dependence of time's arrow on initial conditions is illustrated by a numerical example in which plane waves produced by an initial pressure pulse are followed as they are multiply reflected at internal interfaces of a layered medium. Wave interactions at interfaces are shown to be analogous to the retarded and advanced waves of point sources. The model is linear and the calculation is exact and demonstrably time reversible; nevertheless the results show most of the features expected of a macroscopically irreversible system, including the approach to the Maxwell-Boltzmann distribution, ergodicity, and concomitant entropy increase.

  9. The inverse Numerical Computer Program FLUX-BOT for estimating Vertical Water Fluxes from Temperature Time-Series.

    NASA Astrophysics Data System (ADS)

    Trauth, N.; Schmidt, C.; Munz, M.

    2016-12-01

    Heat as a natural tracer to quantify water fluxes between groundwater and surface water has evolved into a standard hydrological method. Typically, time series of temperatures in the surface water and in the sediment are observed and are subsequently evaluated with a vertical 1D representation of heat transport by advection and dispersion. Several analytical solutions, as well as their implementations in user-friendly software, exist to estimate water fluxes from the observed temperatures. Analytical solutions can be easily implemented, but assumptions on the boundary conditions have to be made a priori, e.g. a sinusoidal upper temperature boundary. Numerical models offer more flexibility and can handle temperature data characterized by irregular variations, such as storm-event-induced temperature changes, which cannot readily be incorporated in analytical solutions. This also reduces the effort of data preprocessing, such as the extraction of the diurnal temperature variation. We developed a software tool to estimate water FLUXes Based On Temperatures: FLUX-BOT. FLUX-BOT is a numerical code written in MATLAB which is intended to calculate vertical water fluxes in saturated sediments, based on the inversion of measured temperature time series observed at multiple depths. It applies a cell-centered Crank-Nicolson implicit finite difference scheme to solve the one-dimensional heat advection-conduction equation. Besides its core inverse numerical routines, FLUX-BOT includes functions for visualizing the results and for performing uncertainty analysis. We provide applications of FLUX-BOT to generic as well as to measured temperature data to demonstrate its performance.
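
    FLUX-BOT itself is a MATLAB inverse code; the sketch below shows only its forward building block as described in the abstract, a cell-centered Crank-Nicolson step for the one-dimensional heat advection-conduction equation, with grid spacing, time step, diffusivity, velocity and boundary temperatures as assumed values.

      # Forward-model sketch only (FLUX-BOT itself is an inverse MATLAB code): a
      # Crank-Nicolson solver for the 1-D heat advection-conduction equation
      #   dT/dt = kappa * d2T/dz2 - v * dT/dz
      # on a uniform grid with fixed (Dirichlet) top and bottom temperatures.
      # All parameter values below are illustrative assumptions.
      import numpy as np

      nz, dz, dt = 51, 0.01, 60.0        # nodes, grid spacing (m), time step (s)
      kappa, v = 1.0e-7, 1.0e-6          # effective diffusivity (m^2/s), downward velocity (m/s)

      r = kappa * dt / dz**2
      c = v * dt / (2.0 * dz)

      # Build (I - dt/2 L) T_new = (I + dt/2 L) T_old for the interior nodes
      A = np.eye(nz) * (1.0 + r)
      B = np.eye(nz) * (1.0 - r)
      for i in range(1, nz - 1):
          A[i, i - 1], A[i, i + 1] = -(r + c) / 2.0, -(r - c) / 2.0
          B[i, i - 1], B[i, i + 1] = (r + c) / 2.0, (r - c) / 2.0
      for M in (A, B):                   # Dirichlet boundaries: identity rows at top and bottom
          M[0, :], M[-1, :] = 0.0, 0.0
          M[0, 0], M[-1, -1] = 1.0, 1.0

      T = np.full(nz, 10.0)              # initial sediment temperature (deg C)
      T[0] = 15.0                        # surface-water temperature at the top boundary

      for _ in range(24 * 60):           # one simulated day of 60 s steps
          T = np.linalg.solve(A, B @ T)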

  10. Using a numerical model to understand the connection between the ocean and acoustic travel-time measurements.

    PubMed

    Powell, Brian S; Kerry, Colette G; Cornuelle, Bruce D

    2013-10-01

    Measurements of acoustic ray travel-times in the ocean provide synoptic integrals of the ocean state between source and receiver. It is known that the ray travel-time is sensitive to variations in the ocean at the transmission time, but the sensitivity of the travel-time to spatial variations in the ocean prior to the acoustic transmission has not been quantified. This study examines the sensitivity of ray travel-time to the temporally and spatially evolving ocean state in the Philippine Sea using the adjoint of a numerical model. A one-year series of five-day backward integrations of the adjoint model quantifies the sensitivity of travel-times to varying dynamics that can alter the travel-time of a 611 km ray by 200 ms. The early evolution of the sensitivities reveals high-mode internal waves that dissipate quickly, leaving the lowest three modes, providing a connection to variations in the internal tide generation prior to the sample time. They are also strongly sensitive to advective effects that alter density along the ray path. These sensitivities reveal how travel-time measurements are affected by both nearby and distant waters. Temporal nonlinearity of the sensitivities suggests that prior knowledge of the ocean state is necessary to exploit the travel-time observations.

  11. Line of duty firefighter fatalities: an evolving trend over time.

    PubMed

    Kahn, Steven A; Woods, Jason; Rae, Lisa

    2015-01-01

    Between 1990 and 2012, 2775 firefighters were killed in the line of duty. Myocardial infarction (MI) was responsible for approximately 40% of these mortalities, followed by mechanical trauma, asphyxiation, and burns. Protective gear, safety awareness, medical care, and the age of the workforce have evolved since 1990, possibly affecting the nature of mortality during this 22-year time period. The purpose of this study is to determine whether the causes of firefighter mortality have changed over time to allow a targeted focus in prevention efforts. The U.S. Fire Administration fatality database was queried for all-cause on-duty mortality for 1990 to 2000 and 2002 to 2012. The year 2001 was excluded due to the inability to eliminate the 347 deaths that occurred on September 11. Data collected included age range at the time of fatality (exact age not included in the report), type of duty (on-scene fire, responding, training, and returning), incident type (structure fire, motor vehicle crash, etc.), and nature of fatality (MI, trauma, asphyxiation, cerebrovascular accident [CVA], and burns). Data were compared between the two time periods with a χ² test. Between 1990 and 2000, 1140 firefighters sustained a fatal injury while on duty, and 1174 were killed during 2002 to 2012. MI has increased from 43% to 46.5% of deaths (P = .012) between the 2 decades. CVA has increased from 1.6% to 3.7% of deaths (P = .002). Asphyxiation has decreased from 12.1% to 7.9% (P = .003) and burns have decreased from 7.7% to 3.9% (P = .0004). Electrocution is down from 1.8% to 0.5% (P = .004). Death from trauma was unchanged (27.8% to 29.6%, P = .12). The percentage of fatalities of firefighters over age 40 years has increased from 52% to 65% (P = .0001). Fatality by sex was constant at 3% female. Fatalities during training have increased from 7.3% to 11.2% of deaths (P = .00001). The nature of firefighter mortality has evolved over time. In the current decade, line-of-duty mortality is more

  12. Complex network view of evolving manifolds

    NASA Astrophysics Data System (ADS)

    da Silva, Diamantino C.; Bianconi, Ginestra; da Costa, Rui A.; Dorogovtsev, Sergey N.; Mendes, José F. F.

    2018-03-01

    We study complex networks formed by triangulations and higher-dimensional simplicial complexes representing closed evolving manifolds. In particular, for triangulations, the set of possible transformations of these networks is restricted by the condition that at each step, all the faces must be triangles. Stochastic application of these operations leads to random networks with different architectures. We perform extensive numerical simulations and explore the geometries of growing and equilibrium complex networks generated by these transformations and their local structural properties. This characterization includes the Hausdorff and spectral dimensions of the resulting networks, their degree distributions, and various structural correlations. Our results reveal a rich zoo of architectures and geometries of these networks, some of which appear to be small worlds while others are finite dimensional with Hausdorff dimension equal or higher than the original dimensionality of their simplices. The range of spectral dimensions of the evolving triangulations turns out to be from about 1.4 to infinity. Our models include simplicial complexes representing manifolds with evolving topologies, for example, an h-holed torus with a progressively growing number of holes. This evolving graph demonstrates features of a small-world network and has a particularly heavy-tailed degree distribution.

  13. A Polynomial Time, Numerically Stable Integer Relation Algorithm

    NASA Technical Reports Server (NTRS)

    Ferguson, Helaman R. P.; Bailey, Daivd H.; Kutler, Paul (Technical Monitor)

    1998-01-01

    Let x = (x₁, x₂, …, xₙ) be a vector of real numbers. x is said to possess an integer relation if there exist integers aᵢ, not all zero, such that a₁x₁ + a₂x₂ + … + aₙxₙ = 0. Beginning in 1977, several algorithms (with proofs) have been discovered to recover the aᵢ given x. The most efficient of these existing integer relation algorithms (in terms of run time and the precision required of the input) has the drawback of being very unstable numerically. It often requires a numeric precision level in the thousands of digits to reliably recover relations in modest-sized test problems. We present here a new algorithm for finding integer relations, which we have named the "PSLQ" algorithm. It is proved in this paper that the PSLQ algorithm terminates with a relation in a number of iterations that is bounded by a polynomial in n. Because this algorithm employs a numerically stable matrix reduction procedure, it is free from the numerical difficulties that plague other integer relation algorithms. Furthermore, its stability admits an efficient implementation with lower run times on average than other algorithms currently in use. Finally, this stability can be used to prove that relation bounds obtained from computer runs using this algorithm are numerically accurate.
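
    PSLQ is widely implemented; the example below uses the independent implementation shipped with the Python mpmath library (not the authors' code) to recover the relation 1 + φ - φ² = 0 satisfied by the golden ratio.

      # PSLQ in practice, using the independent implementation shipped with the
      # Python mpmath library (not the authors' code): recover the integer relation
      # 1 + phi - phi^2 = 0 satisfied by the golden ratio phi.
      from mpmath import mp, mpf, sqrt, pslq

      mp.dps = 50                              # working precision, in decimal digits
      phi = (1 + sqrt(5)) / 2
      relation = pslq([mpf(1), phi, phi**2])
      print(relation)                          # e.g. [1, 1, -1]: 1*1 + 1*phi - 1*phi^2 = 0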

  14. Evolving Digital Ecological Networks

    PubMed Central

    Wagner, Aaron P.; Ofria, Charles

    2013-01-01

    “It is hard to realize that the living world as we know it is just one among many possibilities” [1]. Evolving digital ecological networks are webs of interacting, self-replicating, and evolving computer programs (i.e., digital organisms) that experience the same major ecological interactions as biological organisms (e.g., competition, predation, parasitism, and mutualism). Despite being computational, these programs evolve quickly in an open-ended way, and starting from only one or two ancestral organisms, the formation of ecological networks can be observed in real-time by tracking interactions between the constantly evolving organism phenotypes. These phenotypes may be defined by combinations of logical computations (hereafter tasks) that digital organisms perform and by expressed behaviors that have evolved. The types and outcomes of interactions between phenotypes are determined by task overlap for logic-defined phenotypes and by responses to encounters in the case of behavioral phenotypes. Biologists use these evolving networks to study active and fundamental topics within evolutionary ecology (e.g., the extent to which the architecture of multispecies networks shape coevolutionary outcomes, and the processes involved). PMID:23533370

  15. Spin-orbit coupling for tidally evolving super-Earths

    NASA Astrophysics Data System (ADS)

    Rodríguez, A.; Callegari, N.; Michtchenko, T. A.; Hussmann, H.

    2012-12-01

    We investigate the spin behaviour of close-in rocky planets and the implications for their orbital evolution. Considering that the planet rotation evolves under simultaneous actions of the torque due to the equatorial deformation and the tidal torque, both raised by the central star, we analyse the possibility of temporary captures in spin-orbit resonances. The results of the numerical simulations of the exact equations of motions indicate that, whenever the planet rotation is trapped in a resonant motion, the orbital decay and the eccentricity damping are faster than the ones in which the rotation follows the so-called pseudo-synchronization. Analytical results obtained through the averaged equations of the spin-orbit problem show a good agreement with the numerical simulations. We apply the analysis to the cases of the recently discovered hot super-Earths Kepler-10 b, GJ 3634 b and 55 Cnc e. The simulated dynamical history of these systems indicates the possibility of capture in several spin-orbit resonances; particularly, GJ 3634 b and 55 Cnc e can currently evolve under a non-synchronous resonant motion for suitable values of the parameters. Moreover, 55 Cnc e may avoid a chaotic rotation behaviour by evolving towards synchronization through successive temporary resonant trappings.

  16. A mapping closure for turbulent scalar mixing using a time-evolving reference field

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.

    1992-01-01

    A general mapping-closure approach for modeling scalar mixing in homogeneous turbulence is developed. This approach is different from previous methods in that the reference field also evolves according to the same equations as the physical scalar field. The use of a time-evolving Gaussian reference field results in a model that is similar to the mapping closure model of Pope (1991), which is based on the methodology of Chen et al. (1989). Both models yield identical relationships between the scalar variance and higher-order moments, which are in good agreement with heat conduction simulation data and can be consistent with any type of ε_φ evolution. The present methodology can be extended to any reference field whose behavior is known. The possibility of a beta-PDF reference field is explored. The shortcomings of the mapping closure methods are discussed, and the limit at which the mapping becomes invalid is identified.

  17. Nanocalorimetry-coupled time-of-flight mass spectrometry: identifying evolved species during high-rate thermal measurements.

    PubMed

    Yi, Feng; DeLisio, Jeffery B; Zachariah, Michael R; LaVan, David A

    2015-10-06

    We report on measurements integrating a nanocalorimeter sensor into a time-of-flight mass spectrometer (TOFMS) for simultaneous thermal and speciation measurements at high heating rates. The nanocalorimeter sensor was incorporated into the extraction region of the TOFMS system to provide sample heating and thermal information essentially simultaneously with the evolved species identification. This approach can be used to measure chemical reactions and evolved species for a variety of materials. Furthermore, since the calorimetry is conducted within the same proximal volume as ionization and ion extraction, evolved species are detected in a collision-free environment, and thus the possibility exists to interrogate intermediate and radical species. We present measurements showing the decomposition of ammonium perchlorate, copper oxide nanoparticles, and sodium azotetrazolate. The rapid, controlled, and quantifiable heating rate capabilities of the nanocalorimeter coupled with the 0.1 ms temporal resolution of the TOFMS provide a new measurement capability and insight into high-rate reactions, such as those seen with reactive and energetic materials, and adsorption/desorption measurements, critical for understanding surface chemistry and accelerating catalyst selection.

  18. Changes of scaling relationships in an evolving population: The example of "sedimentary" stylolites

    NASA Astrophysics Data System (ADS)

    Peacock, D. C. P.; Korneva, I.; Nixon, C. W.; Rotevatn, A.

    2017-03-01

    Bed-parallel ("sedimentary") stylolites are used as an example of a population that evolves by the addition of new components, their growth and their merger. It is shown that this style of growth controls the changes in the scaling relationships of the population. Stylolites tend to evolve in carbonate rocks through time, for example by compaction during progressive burial. The evolution of a population of stylolites, and their likely effects on porosity, are demonstrated using simple numerical models. Starting with a power-law distribution, the adding of new stylolites, the increase in their amplitudes and their merger decrease the slope of magnitude versus cumulative frequency of the population. The population changes to a non-power-law distribution as smaller stylolites merge to form larger stylolites. The results suggest that other populations can be forward- or backward-modelled, such as fault lengths, which also evolve by the addition of components, their growth and merger. Consideration of the ways in which populations change improves understanding of scaling relationships and vice versa, and would assist in the management of geofluid reservoirs.
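
    The toy model below is a hedged illustration of the add/grow/merge mechanism described above, not the paper's model: all rates and distributions are assumed, and the point is simply that the cumulative-frequency curve of amplitudes drifts away from its initial power law as the population evolves.

      # Toy realisation of the add/grow/merge mechanism described above (all rates
      # and distributions are assumed, not the paper's model): the cumulative
      # frequency of amplitudes drifts away from its initial power law as the
      # population evolves.
      import numpy as np

      rng = np.random.default_rng(42)
      amps = rng.pareto(1.5, size=200) + 0.1                        # initial amplitudes (mm)

      for _ in range(100):
          amps = amps * 1.02                                        # growth of existing stylolites
          amps = np.append(amps, rng.pareto(1.5, 5) * 0.1 + 0.05)   # addition of new, small ones
          if rng.random() < 0.5:                                    # occasional merger of two stylolites
              i, j = rng.choice(len(amps), size=2, replace=False)
              amps = np.append(np.delete(amps, [i, j]), amps[i] + amps[j])

      # cumulative-frequency curve: number of stylolites with amplitude >= a,
      # and a rough log-log slope of its upper tail
      a_sorted = np.sort(amps)[::-1]
      cum_freq = np.arange(1, a_sorted.size + 1)
      slope = np.polyfit(np.log(a_sorted[:100]), np.log(cum_freq[:100]), 1)[0]
      print(a_sorted.size, slope)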

  19. Numerical bifurcation analysis of immunological models with time delays

    NASA Astrophysics Data System (ADS)

    Luzyanina, Tatyana; Roose, Dirk; Bocharov, Gennady

    2005-12-01

    In recent years, a large number of mathematical models that are described by delay differential equations (DDEs) have appeared in the life sciences. To analyze the models' dynamics, numerical methods are necessary, since analytical studies can only give limited results. In turn, the availability of efficient numerical methods and software packages encourages the use of time delays in mathematical modelling, which may lead to more realistic models. We outline recently developed numerical methods for bifurcation analysis of DDEs and illustrate the use of these methods in the analysis of a mathematical model of human hepatitis B virus infection.

  20. Evolving from Planning and Scheduling to Real-Time Operations Support: Design Challenges

    NASA Technical Reports Server (NTRS)

    Marquez, Jessica J.; Ludowise, Melissa; McCurdy, Michael; Li, Jack

    2010-01-01

    Versions of Scheduling and Planning Interface for Exploration (SPIFe) have supported a variety of mission operations across NASA. This software tool has evolved and matured over several years, assisting planners who develop intricate schedules. While initially conceived for surface Mars missions, SPIFe has been deployed in other domains, where people rather than robotic explorers, execute plans. As a result, a diverse set of end-users has compelled growth in a new direction: supporting real-time operations. This paper describes the new needs and challenges that accompany this development. Among the key features that have been built for SPIFe are current time indicators integrated into the interface and timeline, as well as other plan attributes that enable execution of scheduled activities. Field tests include mission support for the Lunar CRater Observation and Sensing Satellite (LCROSS), NASA Extreme Environment Mission Operations (NEEMO) and Desert Research and Technology Studies (DRATS) campaigns.

  1. Long-range correlations in time series generated by time-fractional diffusion: A numerical study

    NASA Astrophysics Data System (ADS)

    Barbieri, Davide; Vivoli, Alessandro

    2005-09-01

    Time series models showing power-law tails in autocorrelation functions are common in econometrics. A special non-Markovian model for this kind of time series is provided by the random walk introduced by Gorenflo et al. as a discretization of time-fractional diffusion. The time series so obtained are analyzed here from a numerical point of view in terms of autocorrelations and covariance matrices.

  2. Efficiency analysis of numerical integrations for finite element substructure in real-time hybrid simulation

    NASA Astrophysics Data System (ADS)

    Wang, Jinting; Lu, Liqiao; Zhu, Fei

    2018-01-01

    The finite element (FE) method is a powerful tool and has been applied by investigators to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integrations in solving the FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of the FE numerical substructure. In this way, the task execution time (TET) decreases such that the scale of the numerical substructure model can increase. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving the FE numerical substructure. CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influence of time delay on the displacement response becomes obvious as the mass ratio increases, and delay compensation methods may reduce the relative error of the displacement peak value to less than 5% even with a large time step and a large time delay.
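
    As a concrete reference for the comparison above, the sketch below implements the central difference method for a small linear system M·a + C·v + K·u = f(t); the two-degree-of-freedom matrices, load and time step are toy assumptions, not the paper's substructure model.

      # Sketch of the central difference method (CDM) discussed above for a linear
      # system M*a + C*v + K*u = f(t); the two-degree-of-freedom matrices, load and
      # time step are toy assumptions, not the paper's substructure model.
      import numpy as np

      M = np.diag([2.0, 1.0])                          # mass (kg)
      C = np.diag([0.4, 0.2])                          # diagonal damping (the case favouring CDM)
      K = np.array([[400.0, -200.0],
                    [-200.0, 200.0]])                  # stiffness (N/m)

      def f(t):
          return np.array([0.0, 10.0 * np.sin(20.0 * t)])   # external load (N)

      dt, n_steps = 1.0e-3, 5000
      u_prev = np.zeros(2)                             # u at t - dt (system starts at rest)
      u = np.zeros(2)                                  # u at t
      A = M / dt**2 + C / (2.0 * dt)                   # effective system matrix (diagonal here)
      history = []

      for i in range(n_steps):
          rhs = f(i * dt) - (K - 2.0 * M / dt**2) @ u - (M / dt**2 - C / (2.0 * dt)) @ u_prev
          u_next = np.linalg.solve(A, rhs)             # cheap: A is diagonal when M and C are
          history.append(u_next)
          u_prev, u = u, u_next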

  3. New strategy to identify radicals in a time evolving EPR data set by multivariate curve resolution-alternating least squares.

    PubMed

    Fadel, Maya Abou; de Juan, Anna; Vezin, Hervé; Duponchel, Ludovic

    2016-12-01

    Electron paramagnetic resonance (EPR) spectroscopy is a powerful technique that is able to characterize radicals formed in kinetic reactions. However, spectral characterization of individual chemical species is often limited or even unmanageable due to the severe kinetic and spectral overlap among species in kinetic processes. Therefore, we applied, for the first time, the multivariate curve resolution-alternating least squares (MCR-ALS) method to time-evolving EPR data sets to model and characterize the different constituents in a kinetic reaction. Here we demonstrate the advantage of multivariate analysis in the investigation of the radicals formed along the kinetic process of hydroxycoumarin in alkaline medium. Multiset analysis of several EPR-monitored kinetic experiments performed under different conditions revealed the individual paramagnetic centres as well as their kinetic profiles. The results obtained by the MCR-ALS method demonstrate its prominent potential in the analysis of time-evolved EPR spectra. Copyright © 2016 Elsevier B.V. All rights reserved.
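
    The sketch below is a bare-bones version of the MCR-ALS idea on the bilinear model D = C·Sᵀ, using simple non-negativity clipping and synthetic two-component kinetic data; the full method applied in the paper uses additional constraints and multiset augmentation, so treat this only as an illustration of the alternating least-squares core.

      # Bare-bones MCR-ALS sketch for the bilinear model D = C @ S.T (rows of D are
      # spectra recorded along the kinetic run, C holds concentration profiles, S the
      # pure spectra). Only non-negativity is imposed, via clipping; the synthetic
      # two-component data and all sizes are assumptions made for illustration.
      import numpy as np

      rng = np.random.default_rng(3)
      n_times, n_channels, n_comp = 120, 300, 2

      # synthetic "true" system: one decaying and one growing species
      t = np.linspace(0.0, 1.0, n_times)
      C_true = np.column_stack([np.exp(-3.0 * t), 1.0 - np.exp(-3.0 * t)])
      S_true = rng.random((n_channels, n_comp))
      D = C_true @ S_true.T + 0.01 * rng.standard_normal((n_times, n_channels))

      # alternating least squares with non-negativity, starting from random spectra
      S = rng.random((n_channels, n_comp))
      for _ in range(200):
          C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0.0, None)       # concentration step
          C /= np.maximum(C.max(axis=0, keepdims=True), 1e-12)          # fix intensity ambiguity
          S = np.clip(D.T @ C @ np.linalg.pinv(C.T @ C), 0.0, None)     # spectra step

      relative_residual = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)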

  4. A numerical study of adaptive space and time discretisations for Gross–Pitaevskii equations

    PubMed Central

    Thalhammer, Mechthild; Abhau, Jochen

    2012-01-01

    As a basic principle, benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as an enhancement of the reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross–Pitaevskii equation arising in the description of Bose–Einstein condensates in dilute gases. The performance of the Fourier-pseudo spectral method constrained to uniform meshes versus the locally adaptive finite element method and of higher-order exponential operator splitting methods with variable time stepsizes is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations with an enhancement either in efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross–Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter 0<ε≪1, especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space constricts the feasibility of numerical computations for both the Fourier pseudo-spectral and the finite element methods. Nevertheless, for smaller parameter values locally adaptive time discretisations facilitate choosing time stepsizes sufficiently small in order that the

  5. A numerical study of adaptive space and time discretisations for Gross-Pitaevskii equations.

    PubMed

    Thalhammer, Mechthild; Abhau, Jochen

    2012-08-15

    As a basic principle, benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as an enhancement of the reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross-Pitaevskii equation arising in the description of Bose-Einstein condensates in dilute gases. The performance of the Fourier-pseudo spectral method constrained to uniform meshes versus the locally adaptive finite element method and of higher-order exponential operator splitting methods with variable time stepsizes is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations with an enhancement either in efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross-Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter 0<ε≪1, especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space constricts the feasibility of numerical computations for both the Fourier pseudo-spectral and the finite element methods. Nevertheless, for smaller parameter values locally adaptive time discretisations facilitate choosing time stepsizes sufficiently small in order that
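
    A generic sketch of local-error-based time step control by step doubling, applicable to any one-step propagator; the a posteriori error estimators and embedded splitting pairs used in the study are more sophisticated, so this is only illustrative and the toy propagator in the usage line is hypothetical.

    ```python
    import numpy as np

    def adaptive_stepper(phi, y0, t0, t_end, dt0, tol, order):
        """Step-doubling local-error control for a one-step propagator y_{n+1} = phi(y_n, t_n, dt)."""
        t, y, dt = t0, np.asarray(y0, dtype=complex), dt0
        while t < t_end:
            dt = min(dt, t_end - t)
            y_big = phi(y, t, dt)
            y_small = phi(phi(y, t, dt / 2), t + dt / 2, dt / 2)
            err = np.linalg.norm(y_small - y_big)
            if err <= tol:                       # accept the more accurate two-half-step result
                t, y = t + dt, y_small
            dt *= min(2.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** (1.0 / (order + 1))))
        return y

    # Toy usage: a forward-Euler propagator for dy/dt = -y (hypothetical test problem).
    phi = lambda y, t, dt: y + dt * (-y)
    y_end = adaptive_stepper(phi, [1.0], 0.0, 2.0, 0.1, 1e-6, order=1)
    ```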

  6. NUMERICAL METHODS FOR SOLVING THE MULTI-TERM TIME-FRACTIONAL WAVE-DIFFUSION EQUATION.

    PubMed

    Liu, F; Meerschaert, M M; McGough, R J; Zhuang, P; Liu, Q

    2013-03-01

    In this paper, the multi-term time-fractional wave-diffusion equations are considered. The multi-term time fractional derivatives are defined in the Caputo sense, whose orders belong to the intervals [0,1], [1,2), [0,2), [0,3), [2,3) and [2,4), respectively. Some computationally effective numerical methods are proposed for simulating the multi-term time-fractional wave-diffusion equations. The numerical results demonstrate the effectiveness of theoretical analysis. These methods and techniques can also be extended to other kinds of the multi-term fractional time-space models with fractional Laplacian.

  7. NUMERICAL METHODS FOR SOLVING THE MULTI-TERM TIME-FRACTIONAL WAVE-DIFFUSION EQUATION

    PubMed Central

    Liu, F.; Meerschaert, M.M.; McGough, R.J.; Zhuang, P.; Liu, Q.

    2013-01-01

    In this paper, the multi-term time-fractional wave-diffusion equations are considered. The multi-term time fractional derivatives are defined in the Caputo sense, whose orders belong to the intervals [0,1], [1,2), [0,2), [0,3), [2,3) and [2,4), respectively. Some computationally effective numerical methods are proposed for simulating the multi-term time-fractional wave-diffusion equations. The numerical results demonstrate the effectiveness of theoretical analysis. These methods and techniques can also be extended to other kinds of the multi-term fractional time-space models with fractional Laplacian.

  8. Numerical simulation of the generation, propagation, and diffraction of nonlinear waves in a rectangular basin: A three-dimensional numerical wave tank

    NASA Astrophysics Data System (ADS)

    Darwiche, Mahmoud Khalil M.

    The research presented herein is a contribution to the understanding of the numerical modeling of fully nonlinear, transient water waves. The first part of the work involves the development of a time-domain model for the numerical generation of fully nonlinear, transient waves by a piston type wavemaker in a three-dimensional, finite, rectangular tank. A time-domain boundary-integral model is developed for simulating the evolving fluid field. A robust nonsingular, adaptive integration technique for the assembly of the boundary-integral coefficient matrix is developed and tested. A parametric finite-difference technique for calculating the fluid-particle kinematics is also developed and tested. A novel compatibility and continuity condition is implemented to minimize the effect of the singularities that are inherent at the intersections of the various Dirichlet and/or Neumann subsurfaces. Results are presented which demonstrate the accuracy and convergence of the numerical model. The second portion of the work is a study of the interaction of the numerically-generated, fully nonlinear, transient waves with a bottom-mounted, surface-piercing, vertical, circular cylinder. The numerical model developed in the first part of this dissertation is extended to include the presence of the cylinder at the centerline of the basin. The diffraction of the numerically generated waves by the cylinder is simulated, and the particle kinematics of the diffracted flow field are calculated and reported. Again, numerical results showing the accuracy and convergence of the extended model are presented.

  9. Numerical solution of the two-dimensional time-dependent incompressible Euler equations

    NASA Technical Reports Server (NTRS)

    Whitfield, David L.; Taylor, Lafayette K.

    1994-01-01

    A numerical method is presented for solving the artificial compressibility form of the 2D time-dependent incompressible Euler equations. The approach is based on using an approximate Riemann solver for the cell face numerical flux of a finite volume discretization. Characteristic variable boundary conditions are developed and presented for all boundaries and in-flow out-flow situations. The system of algebraic equations is solved using the discretized Newton-relaxation (DNR) implicit method. Numerical results are presented for both steady and unsteady flow.

  10. Compact configurations within small evolving groups of galaxies

    NASA Astrophysics Data System (ADS)

    Mamon, G. A.

    Small virialized groups of galaxies are evolved with a gravitational N-body code, where the galaxies and a diffuse background are treated as single particles, but with mass and luminosity profiles attached, which enables the estimation of parameters such as internal energies, half-mass radii, and the softened potential energies of interaction. The numerical treatment includes mergers, collisional stripping, tidal limitation by the mean-field of the background (evaluated using a combination of instantaneous and impulsive formulations), galaxy heating from collisions, and background heating from dynamical friction. The groups start out either as dense as those in Hickson's (1982) catalog or as loose as those in Turner and Gott's (1976a) catalog, and they are simulated many times (usually 20) with different initial positions and velocities. Dense groups of galaxies with massive dark haloes coalesce into a single galaxy and lose their compact group appearance in approximately 3 group half-mass crossing times, while dense groups of galaxies without massive haloes survive the merger instability for 15 half-mass crossing times (in a more massive background to keep the same total group mass).

  11. PyEvolve: a toolkit for statistical modelling of molecular evolution.

    PubMed

    Butterfield, Andrew; Vedagiri, Vivek; Lang, Edward; Lawrence, Cath; Wakefield, Matthew J; Isaev, Alexander; Huttley, Gavin A

    2004-01-05

    Examining the distribution of variation has proven an extremely profitable technique in the effort to identify sequences of biological significance. Most approaches in the field, however, evaluate only the conserved portions of sequences - ignoring the biological significance of sequence differences. A suite of sophisticated likelihood based statistical models from the field of molecular evolution provides the basis for extracting the information from the full distribution of sequence variation. The number of different problems to which phylogeny-based maximum likelihood calculations can be applied is extensive. Available software packages that can perform likelihood calculations suffer from a lack of flexibility and scalability, or employ error-prone approaches to model parameterisation. Here we describe the implementation of PyEvolve, a toolkit for the application of existing, and development of new, statistical methods for molecular evolution. We present the object architecture and design schema of PyEvolve, which includes an adaptable multi-level parallelisation schema. The approach for defining new methods is illustrated by implementing a novel dinucleotide model of substitution that includes a parameter for mutation of methylated CpG's, which required 8 lines of standard Python code to define. Benchmarking was performed using either a dinucleotide or codon substitution model applied to an alignment of BRCA1 sequences from 20 mammals, or a 10 species subset. Up to five-fold parallel performance gains over serial were recorded. Compared to leading alternative software, PyEvolve exhibited significantly better real world performance for parameter rich models with a large data set, reducing the time required for optimisation from approximately 10 days to approximately 6 hours. PyEvolve provides flexible functionality that can be used either for statistical modelling of molecular evolution, or the development of new methods in the field. The toolkit can be used

  12. Lower mass limit of an evolving interstellar cloud and chemistry in an evolving oscillatory cloud

    NASA Technical Reports Server (NTRS)

    Tarafdar, S. P.

    1986-01-01

    Simultaneous solution of the equation of motion, the equation of state, and the energy equation, including heating and cooling processes for the interstellar medium, gives for a collapsing cloud a lower mass limit that is significantly smaller than the Jeans mass for the same initial density. Clouds more massive than this limit collapse, whereas clouds below the critical mass pass through a maximum central density, giving apparently similar clouds (i.e., the same Av, size, and central density) at two different phases of their evolution (i.e., with different lifetimes). Preliminary results of chemistry in such an evolving oscillatory cloud show significant differences in the abundances of some molecules in two physically similar clouds with different lifetimes. The problems of depletion and the short lifetime of evolving clouds appear to be less severe in such an oscillatory cloud.

  13. Boom and bust in continuous time evolving economic model

    NASA Astrophysics Data System (ADS)

    Mitchell, L.; Ackland, G. J.

    2009-08-01

    We show that a simple model of a spatially resolved evolving economic system, which has a steady state under simultaneous updating, shows stable oscillations in price when updated asynchronously. The oscillations arise from a gradual decline of the mean price due to competition among sellers for the same resource. This lowers profitability and hence population, but is followed by a sharp rise as speculative sellers invade the large uninhabited areas. This cycle then begins again.

  14. Travel Time Estimation Using Freeway Point Detector Data Based on Evolving Fuzzy Neural Inference System.

    PubMed

    Tang, Jinjun; Zou, Yajie; Ash, John; Zhang, Shen; Liu, Fang; Wang, Yinhai

    2016-01-01

    Travel time is an important measurement used to evaluate the extent of congestion within road networks. This paper presents a new method to estimate the travel time based on an evolving fuzzy neural inference system. The input variables in the system are traffic flow data (volume, occupancy, and speed) collected from loop detectors located at points both upstream and downstream of a given link, and the output variable is the link travel time. A first order Takagi-Sugeno fuzzy rule set is used to complete the inference. For training the evolving fuzzy neural network (EFNN), two learning processes are proposed: (1) a K-means method is employed to partition input samples into different clusters, and a Gaussian fuzzy membership function is designed for each cluster to measure the membership degree of samples to the cluster centers. As the number of input samples increases, the cluster centers are modified and membership functions are also updated; (2) a weighted recursive least squares estimator is used to optimize the parameters of the linear functions in the Takagi-Sugeno type fuzzy rules. Testing datasets consisting of actual and simulated data are used to test the proposed method. Three common criteria including mean absolute error (MAE), root mean square error (RMSE), and mean absolute relative error (MARE) are utilized to evaluate the estimation performance. Estimation results demonstrate the accuracy and effectiveness of the EFNN method through comparison with existing methods including: multiple linear regression (MLR), instantaneous model (IM), linear model (LM), neural network (NN), and cumulative plots (CP).

  15. Travel Time Estimation Using Freeway Point Detector Data Based on Evolving Fuzzy Neural Inference System

    PubMed Central

    Tang, Jinjun; Zou, Yajie; Ash, John; Zhang, Shen; Liu, Fang; Wang, Yinhai

    2016-01-01

    Travel time is an important measurement used to evaluate the extent of congestion within road networks. This paper presents a new method to estimate the travel time based on an evolving fuzzy neural inference system. The input variables in the system are traffic flow data (volume, occupancy, and speed) collected from loop detectors located at points both upstream and downstream of a given link, and the output variable is the link travel time. A first order Takagi-Sugeno fuzzy rule set is used to complete the inference. For training the evolving fuzzy neural network (EFNN), two learning processes are proposed: (1) a K-means method is employed to partition input samples into different clusters, and a Gaussian fuzzy membership function is designed for each cluster to measure the membership degree of samples to the cluster centers. As the number of input samples increases, the cluster centers are modified and membership functions are also updated; (2) a weighted recursive least squares estimator is used to optimize the parameters of the linear functions in the Takagi-Sugeno type fuzzy rules. Testing datasets consisting of actual and simulated data are used to test the proposed method. Three common criteria including mean absolute error (MAE), root mean square error (RMSE), and mean absolute relative error (MARE) are utilized to evaluate the estimation performance. Estimation results demonstrate the accuracy and effectiveness of the EFNN method through comparison with existing methods including: multiple linear regression (MLR), instantaneous model (IM), linear model (LM), neural network (NN), and cumulative plots (CP).
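
    A rough sketch of the inference side of a first-order Takagi-Sugeno model with Gaussian memberships, assuming the cluster centers are already given (e.g., from K-means); a batch weighted least squares fit stands in for the weighted recursive estimator described above, and the one-dimensional usage data are hypothetical.

    ```python
    import numpy as np

    def ts_fuzzy_fit_predict(X, y, centers, sigma):
        """First-order Takagi-Sugeno model: Gaussian memberships to given cluster
        centers; consequent parameters fitted by (batch) weighted least squares."""
        n, d = X.shape
        k = centers.shape[0]
        # membership of each sample to each rule, normalised per sample
        dist2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        w = np.exp(-dist2 / (2.0 * sigma ** 2))
        w /= w.sum(axis=1, keepdims=True)
        Xa = np.hstack([X, np.ones((n, 1))])                 # affine consequents
        theta = np.empty((k, d + 1))
        for j in range(k):
            W = np.diag(w[:, j])
            theta[j] = np.linalg.solve(Xa.T @ W @ Xa + 1e-8 * np.eye(d + 1),
                                       Xa.T @ W @ y)
        y_hat = (w * (Xa @ theta.T)).sum(axis=1)             # weighted rule outputs
        return theta, y_hat

    # Hypothetical usage with two clusters in one input dimension.
    X = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
    y = 2.0 * X[:, 0] + 1.0
    theta, y_hat = ts_fuzzy_fit_predict(X, y, centers=np.array([[0.25], [0.75]]), sigma=0.2)
    ```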

  16. The evolving energy budget of accretionary wedges

    NASA Astrophysics Data System (ADS)

    McBeck, Jessica; Cooke, Michele; Maillot, Bertrand; Souloumiac, Pauline

    2017-04-01

    The energy budget of evolving accretionary systems reveals how deformational processes partition energy as faults slip, topography uplifts, and layer-parallel shortening produces distributed off-fault deformation. The energy budget provides a quantitative framework for evaluating the energetic contribution or consumption of diverse deformation mechanisms. We investigate energy partitioning in evolving accretionary prisms by synthesizing data from physical sand accretion experiments and numerical accretion simulations. We incorporate incremental strain fields and cumulative force measurements from two suites of experiments to design numerical simulations that represent accretionary wedges with stronger and weaker detachment faults. One suite of the physical experiments includes a basal glass bead layer and the other does not. Two physical experiments within each suite implement different boundary conditions (stable base versus moving base configuration). Synthesizing observations from the differing base configurations reduces the influence of sidewall friction because the force vector produced by sidewall friction points in opposite directions depending on whether the base is fixed or moving. With the numerical simulations, we calculate the energy budget at two stages of accretion: at the maximum force preceding the development of the first thrust pair, and at the minimum force following the development of the pair. To identify the appropriate combination of material and fault properties to apply in the simulations, we systematically vary the Young's modulus and the fault static and dynamic friction coefficients in numerical accretion simulations, and identify the set of parameters that minimizes the misfit between the normal force measured on the physical backwall and the numerically simulated force. Following this derivation of the appropriate material and fault properties, we calculate the components of the work budget in the numerical simulations and in the

  17. Ranking in evolving complex networks

    NASA Astrophysics Data System (ADS)

    Liao, Hao; Mariani, Manuel Sebastian; Medo, Matúš; Zhang, Yi-Cheng; Zhou, Ming-Yang

    2017-05-01

    Complex networks have emerged as a simple yet powerful framework to represent and analyze a wide range of complex systems. The problem of ranking the nodes and the edges in complex networks is critical for a broad range of real-world problems because it affects how we access online information and products, how success and talent are evaluated in human activities, and how scarce resources are allocated by companies and policymakers, among others. This calls for a deep understanding of how existing ranking algorithms perform, and which are their possible biases that may impair their effectiveness. Many popular ranking algorithms (such as Google's PageRank) are static in nature and, as a consequence, they exhibit important shortcomings when applied to real networks that rapidly evolve in time. At the same time, recent advances in the understanding and modeling of evolving networks have enabled the development of a wide and diverse range of ranking algorithms that take the temporal dimension into account. The aim of this review is to survey the existing ranking algorithms, both static and time-aware, and their applications to evolving networks. We emphasize both the impact of network evolution on well-established static algorithms and the benefits from including the temporal dimension for tasks such as prediction of network traffic, prediction of future links, and identification of significant nodes.
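
    As a concrete example of the static algorithms the review discusses, a standard PageRank power iteration can be sketched as follows; the time-aware variants surveyed in the review are not shown.

    ```python
    import numpy as np

    def pagerank(adj, damping=0.85, tol=1e-10, max_iter=1000):
        """Power-iteration PageRank on a dense adjacency matrix (adj[i, j] = 1 for an edge i -> j)."""
        n = adj.shape[0]
        out_deg = adj.sum(axis=1)
        # dangling nodes distribute their score uniformly
        P = np.where(out_deg[:, None] > 0, adj / np.maximum(out_deg, 1)[:, None], 1.0 / n)
        r = np.full(n, 1.0 / n)
        for _ in range(max_iter):
            r_new = damping * (P.T @ r) + (1.0 - damping) / n
            if np.abs(r_new - r).sum() < tol:
                break
            r = r_new
        return r_new

    # Tiny example graph (hypothetical): 0 -> 1, 1 -> 2, 2 -> 0, 2 -> 1.
    adj = np.array([[0, 1, 0], [0, 0, 1], [1, 1, 0]], dtype=float)
    print(pagerank(adj))
    ```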

  18. Trees, bialgebras and intrinsic numerical algorithms

    NASA Technical Reports Server (NTRS)

    Crouch, Peter; Grossman, Robert; Larson, Richard

    1990-01-01

    Preliminary work about intrinsic numerical integrators evolving on groups is described. Fix a finite dimensional Lie group G; let g denote its Lie algebra, and let Y_1, ..., Y_N denote a basis of g. A class of numerical algorithms is presented that approximate solutions to differential equations evolving on G of the form: ẋ(t) = F(x(t)), x(0) = p ∈ G. The algorithms depend upon constants c_i and c_ij, for i = 1, ..., k and j < i. The algorithms have the property that if the algorithm starts on the group, then it remains on the group. In addition, they also have the property that if G is the abelian group R^N, then the algorithm becomes the classical Runge-Kutta algorithm. The Cayley algebra generated by labeled, ordered trees is used to generate the equations that the coefficients c_i and c_ij must satisfy in order for the algorithm to yield an rth-order numerical integrator and to analyze the resulting algorithms.
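
    A minimal example of an integrator that stays on a Lie group by construction, here a first-order Lie-Euler step on SO(3) using the matrix exponential (SciPy's expm); the higher-order tree-based schemes and coefficient conditions of the paper are not reproduced.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def hat(w):
        """so(3) hat map: R^3 -> 3x3 skew-symmetric matrix."""
        return np.array([[0.0, -w[2], w[1]],
                         [w[2], 0.0, -w[0]],
                         [-w[1], w[0], 0.0]])

    def lie_euler(R0, omega_of, h, n_steps):
        """First-order Lie-Euler integrator on SO(3): R_{n+1} = expm(h * hat(omega(R_n, t_n))) @ R_n.
        The update remains on the group because it is a product of rotations."""
        R = R0.copy()
        for n in range(n_steps):
            R = expm(h * hat(omega_of(R, n * h))) @ R
        return R

    # Constant body rate: the result stays orthogonal to round-off accuracy.
    R = lie_euler(np.eye(3), lambda R, t: np.array([0.0, 0.0, 1.0]), h=0.01, n_steps=500)
    print(np.linalg.norm(R.T @ R - np.eye(3)))
    ```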

  19. Have plants evolved to self-immolate?

    PubMed Central

    Bowman, David M. J. S.; French, Ben J.; Prior, Lynda D.

    2014-01-01

    By definition fire prone ecosystems have highly combustible plants, leading to the hypothesis, first formally stated by Mutch in 1970, that community flammability is the product of natural selection of flammable traits. However, proving the “Mutch hypothesis” has presented an enormous challenge for fire ecologists given the difficulty in establishing cause and effect between landscape fire and flammable plant traits. Individual plant traits (such as leaf moisture content, retention of dead branches and foliage, oil rich foliage) are known to affect the flammability of plants but there is no evidence these characters evolved specifically to self-immolate, although some of these traits may have been secondarily modified to increase the propensity to burn. Demonstrating individual benefits from self-immolation is extraordinarily difficult, given the intersection of the physical environmental factors that control landscape fire (fuel production, dryness and ignitions) with community flammability properties that emerge from numerous traits of multiple species (canopy cover and litter bed bulk density). It is more parsimonious to conclude plants have evolved mechanisms to tolerate, but not promote, landscape fire.

  20. SALT Spectroscopy of Evolved Massive Stars

    NASA Astrophysics Data System (ADS)

    Kniazev, A. Y.; Gvaramadze, V. V.; Berdnikov, L. N.

    2017-06-01

    Long-slit spectroscopy with the Southern African Large Telescope (SALT) of central stars of mid-infrared nebulae detected with the Spitzer Space Telescope and Wide-Field Infrared Survey Explorer (WISE) led to the discovery of numerous candidate luminous blue variables (cLBVs) and other rare evolved massive stars. With the recent advent of the SALT fiber-fed high-resolution echelle spectrograph (HRS), a new perspective for the study of these interesting objects has appeared. Using the HRS, we obtained spectra of a dozen newly identified massive stars. Some results on the recently identified cLBV Hen 3-729 are presented.

  1. Evolvability Is an Evolved Ability: The Coding Concept as the Arch-Unit of Natural Selection.

    PubMed

    Janković, Srdja; Ćirković, Milan M

    2016-03-01

    Physical processes that characterize living matter are qualitatively distinct in that they involve encoding and transfer of specific types of information. Such information plays an active part in the control of events that are ultimately linked to the capacity of the system to persist and multiply. This algorithmicity of life is a key prerequisite for its Darwinian evolution, driven by natural selection acting upon stochastically arising variations of the encoded information. The concept of evolvability attempts to define the total capacity of a system to evolve new encoded traits under appropriate conditions, i.e., the accessible section of total morphological space. Since this is dependent on previously evolved regulatory networks that govern information flow in the system, evolvability itself may be regarded as an evolved ability. The way information is physically written, read and modified in living cells (the "coding concept") has not changed substantially during the whole history of the Earth's biosphere. This biosphere, be it alone or one of many, is, accordingly, itself a product of natural selection, since the overall evolvability conferred by its coding concept (nucleic acids as information carriers with the "rulebook of meanings" provided by codons, as well as all the subsystems that regulate various conditional information-reading modes) certainly played a key role in enabling this biosphere to survive up to the present, through alterations of planetary conditions, including at least five catastrophic events linked to major mass extinctions. We submit that, whatever the actual prebiotic physical and chemical processes may have been on our home planet, or may, in principle, occur at some time and place in the Universe, a particular coding concept, with its respective potential to give rise to a biosphere, or class of biospheres, of a certain evolvability, may itself be regarded as a unit (indeed the arch-unit) of natural selection.

  2. Finite-Time and -Size Scalings in the Evaluation of Large Deviation Functions. Numerical Analysis in Continuous Time

    NASA Astrophysics Data System (ADS)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provide a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to a selection rule that favors the rare trajectories of interest. Such algorithms, though, are plagued by finite-simulation-time and finite-population-size effects that can render their use delicate. Using the continuous-time cloning algorithm, we analyze the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of the rare trajectories. We use these scalings in order to propose a numerical approach which allows one to extract the infinite-time and infinite-size limit of these estimators.

  3. Chronic subdural haematoma evolving from traumatic subdural hydroma.

    PubMed

    Wang, Yaodong; Wang, Chuanwei; Liu, Yuguang

    2015-01-01

    This study aimed to investigate the incidence and clinical characteristics of chronic subdural haematoma (CSDH) evolving from traumatic subdural hydroma (TSH). The clinical characteristics of 44 patients with CSDH evolving from TSH were analysed retrospectively and the relevant literature was reviewed. In 22.6% of patients, TSH evolved into CSDH. The time required for this evolution was 14-100 days after injury. All patients were cured with haematoma drainage. TSH is one possible origin of CSDH. The clinical characteristics of TSH evolving into CSDH include polarization of patient age and chronic small effusion. The injuries usually occur during deceleration and are accompanied by mild cerebral damage.

  4. Toward Scientific Numerical Modeling

    NASA Technical Reports Server (NTRS)

    Kleb, Bil

    2007-01-01

    Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and verifying that numerical models are translated into code correctly, however, are necessary first steps toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. To address these two shortcomings, two proposals are offered: (1) an unobtrusive mechanism to document input parameter uncertainties in situ and (2) an adaptation of the Scientific Method to numerical model development and deployment. Because these two steps require changes in the computational simulation community to bear fruit, they are presented in terms of the Beckhard-Harris-Gleicher change model.

  5. Distributed numerical controllers

    NASA Astrophysics Data System (ADS)

    Orban, Peter E.

    2001-12-01

    While the basic principles of Numerical Controllers (NC) have not changed much over the years, the implementation of NCs has changed tremendously. NC equipment has evolved from yesterday's hard-wired specialty control apparatus to today's graphics-intensive, networked, increasingly PC-based open systems, controlling a wide variety of industrial equipment with positioning needs. One of the newest trends in NC technology is the distributed implementation of the controllers. Distributed implementation promises to offer robustness, lower implementation costs, and a scalable architecture. Historically, partitioning has been done along hierarchical levels, moving individual modules into self-contained units. The paper discusses various NC architectures, the underlying technology for distributed implementation, and relevant design issues. First, the functional requirements of individual NC modules are analyzed. Module functionality, cycle times, and data requirements are examined. Next, the infrastructure for distributed node implementation is reviewed. Various communication protocols and distributed real-time operating system issues are investigated and compared. Finally, a different, vertical system partitioning, offering true scalability and reconfigurability, is presented.

  6. Analysis of Time Filters in Multistep Methods

    NASA Astrophysics Data System (ADS)

    Hurl, Nicholas

    Geophysical flow simulations have evolved sophisticated implicit-explicit time stepping methods (based on fast-slow wave splittings) followed by time filters to control any unstable models that result. Time filters are modular and parallel. Their effect on the stability of the overall process has been tested in numerous simulations, but never analyzed. Stability is proven herein, by energy methods, for the Crank-Nicolson Leapfrog (CNLF) method with the Robert-Asselin (RA) time filter and for the CNLF method with the Robert-Asselin-Williams (RAW) time filter applied to systems. We derive an equivalent multistep method for CNLF+RA and CNLF+RAW, and stability regions are obtained. The time step restriction for energy stability of CNLF+RA is smaller than that of CNLF, and the CNLF+RAW time step restriction is smaller still. Numerical tests find that RA and RAW add numerical dissipation. This thesis also shows that all modes of the CNLF method are asymptotically stable under the standard timestep condition.
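
    A small sketch of leapfrog time stepping with the Robert-Asselin filter; the filter is shown here on a plain, fully explicit leapfrog rather than the CNLF implicit-explicit scheme analyzed in the thesis, and the filter parameter value is only an assumption.

    ```python
    import numpy as np

    def leapfrog_ra(f, u0, dt, n_steps, nu=0.1):
        """Leapfrog time stepping for du/dt = f(u) with the Robert-Asselin filter."""
        u_prev = np.asarray(u0, dtype=float)
        u_curr = u_prev + dt * f(u_prev)          # first step: forward Euler
        history = [u_prev.copy(), u_curr.copy()]
        for _ in range(n_steps - 1):
            u_next = u_prev + 2.0 * dt * f(u_curr)
            # RA filter damps the computational (odd-even) mode
            u_curr_filtered = u_curr + nu * (u_prev - 2.0 * u_curr + u_next)
            u_prev, u_curr = u_curr_filtered, u_next
            history.append(u_next.copy())
        return np.array(history)

    # Harmonic oscillator test: u = (x, v), dx/dt = v, dv/dt = -x.
    traj = leapfrog_ra(lambda u: np.array([u[1], -u[0]]), [1.0, 0.0], dt=0.05, n_steps=400)
    ```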

  7. Control of reaction-diffusion equations on time-evolving manifolds.

    PubMed

    Rossi, Francesco; Duteil, Nastassia Pouradier; Yakoby, Nir; Piccoli, Benedetto

    2016-12-01

    Among the main actors of organism development are morphogens, signaling molecules that diffuse in the developing organism and act on cells to produce local responses. Growth is thus determined by the distribution of this signal. Meanwhile, the diffusion of the signal is itself affected by changes in the shape and size of the organism. In other words, there is a complete coupling between the diffusion of the signal and the change of shape. In this paper, we introduce a mathematical model to investigate this coupling. The shape is given by a manifold that varies in time as the result of a deformation given by a transport equation. The signal is represented by a density diffusing on the manifold via a diffusion equation. We show the non-commutativity of the transport and diffusion evolutions by introducing a new concept of Lie bracket between the diffusion and transport operators. We also provide numerical simulations showing this phenomenon.

  8. Numerical simulations to the nonlinear model of interpersonal relationships with time fractional derivative

    NASA Astrophysics Data System (ADS)

    Gencoglu, Muharrem Tuncay; Baskonus, Haci Mehmet; Bulut, Hasan

    2017-01-01

    The main aim of this manuscript is to obtain numerical solutions for the nonlinear model of interpersonal relationships with a time-fractional derivative. The variational iteration method is formulated and applied numerically to yield the desired solutions. Numerical simulations of these solutions are plotted using Wolfram Mathematica 9.

  9. How Hierarchical Topics Evolve in Large Text Corpora.

    PubMed

    Cui, Weiwei; Liu, Shixia; Wu, Zhuofeng; Wei, Hao

    2014-12-01

    Using a sequence of topic trees to organize documents is a popular way to represent hierarchical and evolving topics in text corpora. However, following evolving topics in the context of topic trees remains difficult for users. To address this issue, we present an interactive visual text analysis approach to allow users to progressively explore and analyze the complex evolutionary patterns of hierarchical topics. The key idea behind our approach is to exploit a tree cut to approximate each tree and allow users to interactively modify the tree cuts based on their interests. In particular, we propose an incremental evolutionary tree cut algorithm with the goal of balancing 1) the fitness of each tree cut and the smoothness between adjacent tree cuts; 2) the historical and new information related to user interests. A time-based visualization is designed to illustrate the evolving topics over time. To preserve the mental map, we develop a stable layout algorithm. As a result, our approach can quickly guide users to progressively gain profound insights into evolving hierarchical topics. We evaluate the effectiveness of the proposed method on Amazon's Mechanical Turk and real-world news data. The results show that users are able to successfully analyze evolving topics in text data.

  10. Resiliently evolving supply-demand networks

    NASA Astrophysics Data System (ADS)

    Rubido, Nicolás; Grebogi, Celso; Baptista, Murilo S.

    2014-01-01

    The ability to design a transport network such that commodities are brought from suppliers to consumers in a steady, optimal, and stable way is of great importance for distribution systems nowadays. In this work, by using the circuit laws of Kirchhoff and Ohm, we provide the exact capacities of the edges that an optimal supply-demand network should have to operate stably under perturbations, i.e., without overloading. The perturbations we consider are the evolution of the connecting topology, the decentralization of hub sources or sinks, and the intermittence of supplier and consumer characteristics. We analyze these conditions and the impact of our results, both on the current United Kingdom power-grid structure and on numerically generated evolving archetypal network topologies.

  11. A higher order numerical method for time fractional partial differential equations with nonsmooth data

    NASA Astrophysics Data System (ADS)

    Xing, Yanyuan; Yan, Yubin

    2018-03-01

    Gao et al. [11] (2014) introduced a numerical scheme to approximate the Caputo fractional derivative with the convergence rate O(k^(3-α)), 0 < α < 1, by directly approximating the integer-order derivative with some finite difference quotients in the definition of the Caputo fractional derivative, see also Lv and Xu [20] (2016), where k is the time step size. Under the assumption that the solution of the time fractional partial differential equation is sufficiently smooth, Lv and Xu [20] (2016) proved by using the energy method that the corresponding numerical method for solving the time fractional partial differential equation has the convergence rate O(k^(3-α)), 0 < α < 1, uniformly with respect to the time variable t. However, in general the solution of the time fractional partial differential equation has low regularity, and in this case the numerical method fails to have the convergence rate O(k^(3-α)), 0 < α < 1, uniformly with respect to the time variable t. In this paper, we first obtain a similar approximation scheme to the Riemann-Liouville fractional derivative with the convergence rate O(k^(3-α)), 0 < α < 1, as in Gao et al. [11] (2014) by approximating the Hadamard finite-part integral with piecewise quadratic interpolation polynomials. Based on this scheme, we introduce a time discretization scheme to approximate the time fractional partial differential equation and show by using Laplace transform methods that the time discretization scheme has the convergence rate O(k^(3-α)), 0 < α < 1, for any fixed t_n > 0 for smooth and nonsmooth data in both homogeneous and inhomogeneous cases. Numerical examples are given to show that the theoretical results are consistent with the numerical results.
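
    For orientation, the standard L1 approximation of the Caputo derivative (accuracy O(k^(2-α)), i.e. one order lower than the schemes discussed above) can be sketched and checked against the known Caputo derivative of t²; this is not the paper's higher-order quadratic-interpolation scheme.

    ```python
    import numpy as np
    from math import gamma

    def caputo_l1(f_vals, k, alpha):
        """Standard L1 approximation of the Caputo derivative of order 0 < alpha < 1
        on a uniform grid with step k."""
        n = len(f_vals) - 1
        b = (np.arange(1, n + 1) ** (1 - alpha)) - (np.arange(0, n) ** (1 - alpha))
        df = np.diff(f_vals)                       # f(t_{j+1}) - f(t_j)
        out = np.zeros(n + 1)
        for m in range(1, n + 1):
            # weights b_j pair with the increments counted backwards from t_m
            out[m] = (b[:m][::-1] * df[:m]).sum() / (gamma(2 - alpha) * k ** alpha)
        return out

    alpha, k = 0.5, 0.01
    t = np.arange(0.0, 1.0 + k, k)
    approx = caputo_l1(t ** 2, k, alpha)
    exact = gamma(3) / gamma(3 - alpha) * t ** (2 - alpha)
    print(np.max(np.abs(approx - exact)))
    ```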

  12. Time-domain simulation of damped impacted plates. II. Numerical model and results.

    PubMed

    Lambourg, C; Chaigne, A; Matignon, D

    2001-04-01

    A time-domain model for the flexural vibrations of damped plates was presented in a companion paper [Part I, J. Acoust. Soc. Am. 109, 1422-1432 (2001)]. In this paper (Part II), the damped-plate model is extended to impact excitation, using Hertz's law of contact, and is solved numerically in order to synthesize sounds. The numerical method is based on the use of a finite-difference scheme of second order in time and fourth order in space. As a consequence of the damping terms, the stability and dispersion properties of this scheme are modified, compared to the undamped case. The numerical model is used for the time-domain simulation of vibrations and sounds produced by impact on isotropic and orthotropic plates made of various materials (aluminum, glass, carbon fiber and wood). The efficiency of the method is validated by comparisons with analytical and experimental data. The sounds produced show a high degree of similarity with real sounds and allow a clear recognition of each constitutive material of the plate without ambiguity.

  13. Evolvable synthetic neural system

    NASA Technical Reports Server (NTRS)

    Curtis, Steven A. (Inventor)

    2009-01-01

    An evolvable synthetic neural system includes an evolvable neural interface operably coupled to at least one neural basis function. Each neural basis function includes an evolvable neural interface operably coupled to a heuristic neural system to perform high-level functions and an autonomic neural system to perform low-level functions. In some embodiments, the evolvable synthetic neural system is operably coupled to one or more evolvable synthetic neural systems in a hierarchy.

  14. A time-spectral approach to numerical weather prediction

    NASA Astrophysics Data System (ADS)

    Scheffel, Jan; Lindvall, Kristoffer; Yik, Hiu Fai

    2018-05-01

    Finite difference methods are traditionally used for modelling the time domain in numerical weather prediction (NWP). Time-spectral solution is an attractive alternative for reasons of accuracy and efficiency and because time step limitations associated with causal CFL-like criteria, typical for explicit finite difference methods, are avoided. In this work, the Lorenz 1984 chaotic equations are solved using the time-spectral algorithm GWRM (Generalized Weighted Residual Method). Comparisons of accuracy and efficiency are carried out for both explicit and implicit time-stepping algorithms. It is found that the efficiency of the GWRM compares well with these methods, in particular at high accuracy. For perturbative scenarios, the GWRM was found to be as much as four times faster than the finite difference methods. A primary reason is that the GWRM time intervals are typically two orders of magnitude larger than those of the finite difference methods. The GWRM has the additional advantage of producing analytical solutions in the form of Chebyshev series expansions. The results are encouraging for pursuing further studies, including spatial dependence, of the relevance of time-spectral methods for NWP modelling.
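
    A sketch of the Lorenz 1984 system integrated with a classical finite-difference-style method (RK4), the kind of time stepping the GWRM is compared against; the GWRM itself, being a Chebyshev time-spectral method, is not reproduced here, and the parameter values are the commonly used ones rather than those quoted in the paper.

    ```python
    import numpy as np

    def lorenz84(u, a=0.25, b=4.0, F=8.0, G=1.0):
        x, y, z = u
        return np.array([-y**2 - z**2 - a * x + a * F,
                         x * y - b * x * z - y + G,
                         b * x * y + x * z - z])

    def rk4(f, u0, dt, n_steps):
        """Classical fourth-order Runge-Kutta time stepping."""
        u = np.asarray(u0, dtype=float)
        traj = [u.copy()]
        for _ in range(n_steps):
            k1 = f(u); k2 = f(u + 0.5 * dt * k1)
            k3 = f(u + 0.5 * dt * k2); k4 = f(u + dt * k3)
            u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
            traj.append(u.copy())
        return np.array(traj)

    traj = rk4(lorenz84, [1.0, 0.0, 0.0], dt=0.01, n_steps=5000)
    ```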

  15. Ancient numerical daemons of conceptual hydrological modeling: 1. Fidelity and efficiency of time stepping schemes

    NASA Astrophysics Data System (ADS)

    Clark, Martyn P.; Kavetski, Dmitri

    2010-10-01

    A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This substantially degrades model predictions, but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations but were nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.
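
    A sketch of an adaptive explicit Heun step with an embedded forward-Euler error estimate and a standard step-size controller, in the spirit of the adaptive explicit schemes evaluated above; the study's tolerances, controller details, and rainfall-runoff models are not reproduced, and the toy storage ODE is hypothetical.

    ```python
    import numpy as np

    def adaptive_heun(f, y0, t0, t_end, dt0, tol):
        """Adaptive explicit Heun method; the embedded forward-Euler solution
        supplies the local error estimate."""
        t, y, dt = t0, np.asarray(y0, dtype=float), dt0
        ts, ys = [t], [y.copy()]
        while t < t_end:
            dt = min(dt, t_end - t)
            k1 = f(t, y)
            k2 = f(t + dt, y + dt * k1)
            y_heun = y + 0.5 * dt * (k1 + k2)           # second-order update
            err = np.linalg.norm(0.5 * dt * (k2 - k1))  # Heun minus Euler
            if err <= tol:
                t, y = t + dt, y_heun
                ts.append(t); ys.append(y.copy())
            dt *= min(4.0, max(0.2, 0.9 * np.sqrt(tol / max(err, 1e-15))))
        return np.array(ts), np.array(ys)

    # Toy "conceptual bucket" model dS/dt = P - k * S (hypothetical parameters).
    ts, ys = adaptive_heun(lambda t, S: 2.0 - 0.5 * S, np.array([1.0]), 0.0, 50.0, 0.5, 1e-4)
    ```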

  16. Direct numerical simulations and modeling of a spatially-evolving turbulent wake

    NASA Technical Reports Server (NTRS)

    Cimbala, John M.

    1994-01-01

    Understanding of turbulent free shear flows (wakes, jets, and mixing layers) is important, not only for scientific interest, but also because of their appearance in numerous practical applications. Turbulent wakes, in particular, have recently received increased attention by researchers at NASA Langley. The turbulent wake generated by a two-dimensional airfoil has been selected as the test-case for detailed high-resolution particle image velocimetry (PIV) experiments. This same wake has also been chosen to enhance NASA's turbulence modeling efforts. Over the past year, the author has completed several wake computations, while visiting NASA through the 1993 and 1994 ASEE summer programs, and also while on sabbatical leave during the 1993-94 academic year. These calculations have included two-equation (K-omega and K-epsilon) models, algebraic stress models (ASM), full Reynolds stress closure models, and direct numerical simulations (DNS). Recently, there has been mutually beneficial collaboration of the experimental and computational efforts. In fact, these projects have been chosen for joint presentation at the NASA Turbulence Peer Review, scheduled for September 1994. DNS calculations are presently underway for a turbulent wake at Re_theta = 1000 and at a Mach number of 0.20. (Theta is the momentum thickness, which remains constant in the wake of a two-dimensional body.) These calculations utilize a compressible DNS code written by M. M. Rai of NASA Ames, and modified for the wake by J. Cimbala. The code employs fifth-order accurate upwind-biased finite differencing for the convective terms, fourth-order accurate central differencing for the viscous terms, and an iterative-implicit time-integration scheme. The computational domain for these calculations starts at x/theta = 10, and extends to x/theta = 610. Fully developed turbulent wake profiles, obtained from experimental data from several wake generators, are supplied at the computational inlet, along with

  17. A Review of Numerical Simulation and Analytical Modeling for Medical Devices Safety in MRI

    PubMed Central

    Kabil, J.; Belguerras, L.; Trattnig, S.; Pasquier, C.; Missoffe, A.

    2016-01-01

    Objectives: To review past and present challenges and ongoing trends in numerical simulation for MRI (Magnetic Resonance Imaging) safety evaluation of medical devices. Methods: A wide literature review of numerical and analytical simulation of simple and complex medical devices in MRI electromagnetic fields shows the evolution of the field through time and a growing concern for MRI safety over the years. Major issues and achievements are described, as well as current trends and perspectives in this research field. Results: Numerical simulation of medical devices is constantly evolving, supported by now well-established calculation methods. Implants with simple geometry can often be simulated in a computational human model, but one remaining issue is the experimental validation of these human models. A major concern is assessing RF heating on implants too complex to be simulated traditionally, such as pacemaker leads. Thus, ongoing research focuses on alternative hybrid methods, both numerical and experimental, for example a transfer function method. For the static and gradient fields, analytical models can be used for dimensioning simple implant shapes, but they are limited for complex geometries that cannot be studied with simplifying assumptions. Conclusions: Numerical simulation is an essential tool for MRI safety testing of medical devices. The main issues remain the accuracy of simulations compared with real life and the study of complex devices; but as the research field is constantly evolving, some promising ideas are now under investigation to take up these challenges.

  18. Hidden Markov induced Dynamic Bayesian Network for recovering time evolving gene regulatory networks

    NASA Astrophysics Data System (ADS)

    Zhu, Shijia; Wang, Yadong

    2015-12-01

    Dynamic Bayesian Networks (DBNs) have been widely used to recover gene regulatory relationships from time-series data in computational systems biology. Their standard assumption is ‘stationarity’, and therefore several approaches have recently been proposed to relax this restriction. However, those methods suffer from three challenges: long running time, low accuracy, and reliance on parameter settings. To address these problems, we propose a novel non-stationary DBN model by extending each hidden node of a Hidden Markov Model into a DBN (called HMDBN), which properly handles the underlying time-evolving networks. Correspondingly, an improved structural EM algorithm is proposed to learn the HMDBN. It dramatically reduces the search space, thereby substantially improving computational efficiency. Additionally, we derive a novel generalized Bayesian Information Criterion under the non-stationary assumption (called BWBIC), which helps significantly improve reconstruction accuracy and largely reduces over-fitting. Moreover, the re-estimation formulas for all parameters of our model are derived, enabling us to avoid reliance on parameter settings. Compared to state-of-the-art methods, experimental evaluation of our proposed method on both synthetic and real biological data demonstrates consistently high prediction accuracy and significantly improved computational efficiency, even with no prior knowledge and parameter settings.

  19. Two-fluid Numerical Simulations of Solar Spicules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuźma, Błażej; Murawski, Kris; Kayshap, Pradeep

    2017-11-10

    We aim to study the formation and evolution of solar spicules by means of numerical simulations of the solar atmosphere. With the use of the newly developed JOANNA code, we numerically solve two-fluid (for ions + electrons and neutrals) equations in 2D Cartesian geometry. We follow the evolution of a spicule triggered by the time-dependent signal in ion and neutral components of gas pressure launched in the upper chromosphere. We use the potential magnetic field, which evolves self-consistently, but mainly plays a passive role in the dynamics. Our numerical results reveal that the signal is steepened into a shock that propagates upward into the corona. The chromospheric cold and dense plasma lags behind this shock and rises into the corona with a mean speed of 20–25 km s^−1. The formed spicule exhibits upflow/downfall of plasma during its total lifetime of around 3–4 minutes, and it follows the typical characteristics of a classical spicule, which is modeled by magnetohydrodynamics. The simulated spicule consists of a dense and cold core that is dominated by neutrals. The general dynamics of ion and neutral spicules are very similar to each other. Minor differences in those dynamics result in different widths of both spicules with increasing rarefaction of the ion spicule in time.

  20. JavaGenes: Evolving Graphs with Crossover

    NASA Technical Reports Server (NTRS)

    Globus, Al; Atsatt, Sean; Lawton, John; Wipke, Todd

    2000-01-01

    Genetic algorithms usually use string or tree representations. We have developed a novel crossover operator for a directed and undirected graph representation, and used this operator to evolve molecules and circuits. Unlike strings or trees, a single point in the representation cannot divide every possible graph into two parts, because graphs may contain cycles. Thus, the crossover operator is non-trivial. A steady-state, tournament selection genetic algorithm code (JavaGenes) was written to implement and test the graph crossover operator. All runs were executed by cycle-scavenging on networked workstations using the Condor batch processing system. The JavaGenes code has evolved pharmaceutical drug molecules and simple digital circuits. Results to date suggest that JavaGenes can evolve moderate-sized drug molecules and very small circuits in reasonable time. The algorithm has greater difficulty with somewhat larger circuits, suggesting that directed graphs (circuits) are more difficult to evolve than undirected graphs (molecules), although necessary differences in the crossover operator may also explain the results. In principle, JavaGenes should be able to evolve other graph-representable systems, such as transportation networks, metabolic pathways, and computer networks. However, large graphs evolve significantly slower than smaller graphs, presumably because the space-of-all-graphs explodes combinatorially with graph size. Since the representation strongly affects genetic algorithm performance, adding graphs to the evolutionary programmer's bag-of-tricks should be beneficial. Also, since graph evolution operates directly on the phenotype, the genotype-phenotype translation step, common in genetic algorithm work, is eliminated.

  1. Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baudron, Anne-Marie; Lautard, Jean-Jacques

    2014-12-15

    In this paper we present a time-parallel algorithm for the 3D neutrons calculation of a transient model in a nuclear reactor core. The neutrons calculation consists in numerically solving the time dependent diffusion approximation equation, which is a simplified transport equation. The numerical resolution is done with the finite elements method based on a tetrahedral meshing of the computational domain, representing the reactor core, and time discretization is achieved using a θ-scheme. The transient model presents moving control rods during the time of the reaction. Therefore, cross-sections (piecewise constants) are taken into account by interpolations with respect to the velocity of the control rods. Parallelism across time is achieved by applying the parareal-in-time algorithm to the problem at hand. This parallel method is a predictor-corrector scheme that iteratively combines the use of two kinds of numerical propagators, one coarse and one fine. Our method is made efficient by means of a coarse solver defined with a large time step and a fixed-position control-rod model, while the fine propagator is assumed to be a high-order numerical approximation of the full model. The parallel implementation of our method provides a good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch–Maurer–Werner benchmark.
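
    A serial sketch of the parareal predictor-corrector iteration on a toy scalar ODE, with backward-Euler propagators standing in for the coarse and fine diffusion solvers of the paper; in a real implementation the fine solves within each iteration run in parallel across time slices.

    ```python
    import numpy as np

    def parareal(F, G, u0, n_slices, n_iters):
        """Parareal iteration U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) - G(U_n^k),
        emulated serially."""
        U = [np.asarray(u0, dtype=float)]
        for n in range(n_slices):                           # initial coarse prediction
            U.append(G(U[n], n))
        for _ in range(n_iters):
            F_old = [F(U[n], n) for n in range(n_slices)]   # parallelisable stage
            G_old = [G(U[n], n) for n in range(n_slices)]
            U_new = [U[0]]
            for n in range(n_slices):
                U_new.append(G(U_new[n], n) + F_old[n] - G_old[n])
            U = U_new
        return U

    # Toy problem du/dt = -u on [0, 1]: coarse = one backward-Euler step per slice,
    # fine = many backward-Euler sub-steps (stand-ins for the diffusion solvers).
    lam, n_slices, dT = -1.0, 10, 0.1
    G = lambda u, n: u / (1.0 - lam * dT)
    def F(u, n, m=100):
        for _ in range(m):
            u = u / (1.0 - lam * dT / m)
        return u
    U = parareal(F, G, np.array([1.0]), n_slices, n_iters=3)
    print(U[-1], np.exp(lam * 1.0))
    ```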

  2. Numerical relativity for D dimensional axially symmetric space-times: Formalism and code tests

    NASA Astrophysics Data System (ADS)

    Zilhão, Miguel; Witek, Helvi; Sperhake, Ulrich; Cardoso, Vitor; Gualtieri, Leonardo; Herdeiro, Carlos; Nerozzi, Andrea

    2010-04-01

    The numerical evolution of Einstein’s field equations in a generic background has the potential to answer a variety of important questions in physics: from applications to the gauge-gravity duality, to modeling black hole production in TeV gravity scenarios, to analysis of the stability of exact solutions, and to tests of cosmic censorship. In order to investigate these questions, we extend numerical relativity to more general space-times than those investigated hitherto, by developing a framework to study the numerical evolution of D dimensional vacuum space-times with an SO(D-2) isometry group for D≥5, or SO(D-3) for D≥6. Performing a dimensional reduction on a (D-4) sphere, the D dimensional vacuum Einstein equations are rewritten as a 3+1 dimensional system with source terms, and presented in the Baumgarte, Shapiro, Shibata, and Nakamura formulation. This allows the use of existing 3+1 dimensional numerical codes with small adaptations. Brill-Lindquist initial data are constructed in D dimensions and a procedure to match them to our 3+1 dimensional evolution equations is given. We have implemented our framework by adapting the Lean code and perform a variety of simulations of nonspinning black hole space-times. Specifically, we present a modified moving puncture gauge, which facilitates long-term stable simulations in D=5. We further demonstrate the internal consistency of the code by studying convergence and comparing numerical versus analytic results in the case of geodesic slicing for D=5, 6.

  3. Apology and forgiveness evolve to resolve failures in cooperative agreements.

    PubMed

    Martinez-Vaquero, Luis A; Han, The Anh; Pereira, Luís Moniz; Lenaerts, Tom

    2015-06-09

    Making agreements on how to behave has been shown to be an evolutionarily viable strategy in one-shot social dilemmas. However, in many situations agreements aim to establish long-term mutually beneficial interactions. Our analytical and numerical results reveal for the first time under which conditions revenge, apology and forgiveness can evolve and deal with mistakes within ongoing agreements in the context of the Iterated Prisoner's Dilemma. We show that, when the agreement fails, participants prefer to take revenge by defecting in the subsisting encounters. Incorporating costly apology and forgiveness reveals that, even when mistakes are frequent, there exists a sincerity threshold for which mistakes will not lead to the destruction of the agreement, inducing even higher levels of cooperation. In short, even when to err is human, revenge, apology and forgiveness are evolutionarily viable strategies which play an important role in inducing cooperation in repeated dilemmas.

  4. Apology and forgiveness evolve to resolve failures in cooperative agreements

    PubMed Central

    Martinez-Vaquero, Luis A.; Han, The Anh; Pereira, Luís Moniz; Lenaerts, Tom

    2015-01-01

    Making agreements on how to behave has been shown to be an evolutionarily viable strategy in one-shot social dilemmas. However, in many situations agreements aim to establish long-term mutually beneficial interactions. Our analytical and numerical results reveal for the first time under which conditions revenge, apology and forgiveness can evolve and deal with mistakes within ongoing agreements in the context of the Iterated Prisoner's Dilemma. We show that, when the agreement fails, participants prefer to take revenge by defecting in the subsisting encounters. Incorporating costly apology and forgiveness reveals that, even when mistakes are frequent, there exists a sincerity threshold for which mistakes will not lead to the destruction of the agreement, inducing even higher levels of cooperation. In short, even when to err is human, revenge, apology and forgiveness are evolutionarily viable strategies which play an important role in inducing cooperation in repeated dilemmas. PMID:26057819

  5. Modelling cell motility and chemotaxis with evolving surface finite elements

    PubMed Central

    Elliott, Charles M.; Stinner, Björn; Venkataraman, Chandrasekhar

    2012-01-01

    We present a mathematical and a computational framework for the modelling of cell motility. The cell membrane is represented by an evolving surface, with the movement of the cell determined by the interaction of various forces that act normal to the surface. We consider external forces such as those that may arise owing to inhomogeneities in the medium and a pressure that constrains the enclosed volume, as well as internal forces that arise from the reaction of the cells' surface to stretching and bending. We also consider a protrusive force associated with a reaction–diffusion system (RDS) posed on the cell membrane, with cell polarization modelled by this surface RDS. The computational method is based on an evolving surface finite-element method. The general method can account for the large deformations that arise in cell motility and allows the simulation of cell migration in three dimensions. We illustrate applications of the proposed modelling framework and numerical method by reporting on numerical simulations of a model for eukaryotic chemotaxis and a model for the persistent movement of keratocytes in two and three space dimensions. Movies of the simulated cells can be obtained from http://homepages.warwick.ac.uk/∼maskae/CV_Warwick/Chemotaxis.html. PMID:22675164

  6. Time-resolved vibrational spectroscopy detects protein-based intermediates in the photosynthetic oxygen-evolving cycle.

    PubMed

    Barry, Bridgette A; Cooper, Ian B; De Riso, Antonio; Brewer, Scott H; Vu, Dung M; Dyer, R Brian

    2006-05-09

    Photosynthetic oxygen production by photosystem II (PSII) is responsible for the maintenance of aerobic life on earth. The production of oxygen occurs at the PSII oxygen-evolving complex (OEC), which contains a tetranuclear manganese (Mn) cluster. Photo-induced electron transfer events in the reaction center lead to the accumulation of oxidizing equivalents on the OEC. Four sequential photooxidation reactions are required for oxygen production. The oxidizing complex cycles among five oxidation states, called the S(n) states, where n refers to the number of oxidizing equivalents stored. Oxygen release occurs during the S(3)-to-S(0) transition from an unstable intermediate, known as the S(4) state. In this report, we present data providing evidence for the production of an intermediate during each S state transition. These protein-derived intermediates are produced on the microsecond to millisecond time scale and are detected by time-resolved vibrational spectroscopy on the microsecond time scale. Our results suggest that a protein-derived conformational change or proton transfer reaction precedes Mn redox reactions during the S(2)-to-S(3) and S(3)-to-S(0) transitions.

  7. Numerical Estimation of the Outer Bank Resistance Characteristics in AN Evolving Meandering River

    NASA Astrophysics Data System (ADS)

    Wang, D.; Konsoer, K. M.; Rhoads, B. L.; Garcia, M. H.; Best, J.

    2017-12-01

    Few studies have examined the three-dimensional flow structure and its interaction with bed morphology within elongate loops of large meandering rivers. The present study uses a numerical model to simulate the flow pattern and sediment transport, especially the flow close to the outer bank, at two elongate meander loops in the Wabash River, USA. The numerical grid for the model is based on a combination of airborne LIDAR data on the floodplains and multibeam data within the river channel. A Finite Element Method (FEM) is used to solve the non-hydrostatic RANS equations using a k-epsilon turbulence closure scheme. High-resolution topographic data allow detailed numerical simulation of flow patterns along the outer bank, and model calibration involves comparing simulated velocities to ADCP measurements at 41 cross sections near this bank. Results indicate that flow along the outer bank is strongly influenced by large resistance elements, including woody debris, large erosional scallops within the bank face, and outcropping bedrock. In general, patterns of bank migration conform to zones of high near-bank velocity and shear stress. Using the existing model, different virtual events can be simulated to explore the impacts of different resistance characteristics on patterns of flow, sediment transport, and bank erosion.

  8. Emergence of bursts and communities in evolving weighted networks.

    PubMed

    Jo, Hang-Hyun; Pan, Raj Kumar; Kaski, Kimmo

    2011-01-01

    Understanding the patterns of human dynamics and social interaction and the way they lead to the formation of an organized and functional society are important issues especially for techno-social development. Addressing these issues of social networks has recently become possible through large scale data analysis of mobile phone call records, which has revealed the existence of modular or community structure with many links between nodes of the same community and relatively few links between nodes of different communities. The weights of links, e.g., the number of calls between two users, and the network topology are found correlated such that intra-community links are stronger compared to the weak inter-community links. This feature is known as Granovetter's "The strength of weak ties" hypothesis. In addition to this inhomogeneous community structure, the temporal patterns of human dynamics turn out to be inhomogeneous or bursty, characterized by the heavy-tailed distribution of the time interval between two consecutive events, i.e., the inter-event time. In this paper, we study how the community structure and the bursty dynamics emerge together in a simple evolving weighted network model. The principal mechanisms behind these patterns are social interaction by cyclic closure, i.e., links to friends of friends, and by focal closure, i.e., links to individuals sharing similar attributes or interests, and human dynamics by a task-handling process. These three mechanisms have been implemented as a network model with local attachment, global attachment, and priority-based queuing processes. By comprehensive numerical simulations we show that the interplay of these mechanisms leads to the emergence of a heavy-tailed inter-event time distribution and the evolution of Granovetter-type community structure. Moreover, the numerical results are found to be in qualitative agreement with empirical analysis results from a mobile phone call dataset.
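
    The following is a rough editorial sketch of the local-attachment (cyclic closure) and global-attachment (focal closure) mechanisms mentioned above, written with networkx; the priority-based task queue that produces burstiness is omitted, and the update loop, reinforcement increment and parameter values are illustrative assumptions rather than the authors' model.

```python
import random
import networkx as nx

def evolve_step(G, p_global=0.05, delta=1.0):
    """One update sweep: local attachment (close a triangle via a friend of a
    friend, chosen proportionally to link weight) plus occasional global
    attachment to a random node. Parameter values are illustrative."""
    for i in list(G.nodes):
        nbrs = list(G.neighbors(i))
        if nbrs and random.random() > p_global:
            # Local attachment / cyclic closure.
            w = [G[i][j]['weight'] for j in nbrs]
            j = random.choices(nbrs, weights=w)[0]
            second = [k for k in G.neighbors(j) if k != i]
            if second:
                k = random.choice(second)
                if G.has_edge(i, k):
                    G[i][k]['weight'] += delta      # reinforce existing tie
                else:
                    G.add_edge(i, k, weight=1.0)    # new intra-community tie
            G[i][j]['weight'] += delta
        else:
            # Global attachment / focal closure: link to a random node.
            k = random.choice([n for n in G.nodes if n != i])
            if not G.has_edge(i, k):
                G.add_edge(i, k, weight=1.0)

G = nx.cycle_graph(50)
nx.set_edge_attributes(G, 1.0, 'weight')
for _ in range(200):
    evolve_step(G)
print(G.number_of_edges(), max(d for _, d in G.degree()))
```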

  9. A History of Computer Numerical Control.

    ERIC Educational Resources Information Center

    Haggen, Gilbert L.

    Computer numerical control (CNC) has evolved from the first significant counting method--the abacus. Babbage had perhaps the greatest impact on the development of modern day computers with his analytical engine. Hollerith's functioning machine with punched cards was used in tabulating the 1890 U.S. Census. In order for computers to become a…

  10. Categorization of First-Year University Students' Interpretations of Numerical Linear Distance-Time Graphs

    ERIC Educational Resources Information Center

    Wemyss, Thomas; van Kampen, Paul

    2013-01-01

    We have investigated the various approaches taken by first-year university students (n = 550) when asked to determine the direction of motion, the constancy of speed, and a numerical value of the speed of an object at a point on a numerical linear distance-time graph. We investigated the prevalence of various well-known general…

  11. The General Evolving Model for Energy Supply-Demand Network with Local-World

    NASA Astrophysics Data System (ADS)

    Sun, Mei; Han, Dun; Li, Dandan; Fang, Cuicui

    2013-10-01

    In this paper, two general bipartite network evolving models for energy supply-demand networks with local-world are proposed. The node weight distribution, the "shifting coefficient" and the scaling exponent of two different kinds of nodes are derived by mean-field theory. The numerical results for the node weight distribution and the edge weight distribution are also investigated. The shifted power law (SPL) distribution of coal enterprises' production and the distribution of power plants' installed capacity in the US are obtained from empirical analysis. Numerical simulations and empirical results are given to verify the theoretical results.

  12. Numeral size, spacing between targets, and exposure time in discrimination by elderly people using an LCD monitor.

    PubMed

    Huang, Kuo-Chen; Yeh, Po-Chan

    2007-04-01

    The present study investigated the effects of numeral size, spacing between targets, and exposure time on the discrimination performance by elderly and younger people using a liquid crystal display screen. Analysis showed size of numerals significantly affected discrimination, which increased with increasing numeral size. Spacing between targets also had a significant effect on discrimination, i.e., the larger the space between numerals, the better their discrimination. When the spacing between numerals increased to 4 or 5 points, however, discrimination did not increase beyond that for 3-point spacing. Although performance increased with increasing exposure time, the difference in discrimination at an exposure time of 0.8 vs 1.0 sec. was not significant. The accuracy by the elderly group was less than that by younger subjects.

  13. Numerical modeling of wind turbine aerodynamic noise in the time domain.

    PubMed

    Lee, Seunghoon; Lee, Seungmin; Lee, Soogab

    2013-02-01

    Aerodynamic noise from a wind turbine is numerically modeled in the time domain. An analytic trailing edge noise model is used to determine the unsteady pressure on the blade surface. The far-field noise due to the unsteady pressure is calculated using the acoustic analogy theory. By using a strip theory approach, the two-dimensional noise model is applied to rotating wind turbine blades. The numerical results indicate that, although the operating and atmospheric conditions are identical, the acoustical characteristics of wind turbine noise can be quite different with respect to the distance and direction from the wind turbine.

  14. A new evolutionary system for evolving artificial neural networks.

    PubMed

    Yao, X; Liu, Y

    1997-01-01

    This paper presents a new evolutionary system, i.e., EPNet, for evolving artificial neural networks (ANNs). The evolutionary algorithm used in EPNet is based on Fogel's evolutionary programming (EP). Unlike most previous studies on evolving ANNs, this paper puts its emphasis on evolving ANNs' behaviors. Five mutation operators proposed in EPNet reflect such an emphasis on evolving behaviors. Close behavioral links between parents and their offspring are maintained by various mutations, such as partial training and node splitting. EPNet evolves ANNs' architectures and connection weights (including biases) simultaneously in order to reduce the noise in fitness evaluation. The parsimony of evolved ANNs is encouraged by preferring node/connection deletion to addition. EPNet has been tested on a number of benchmark problems in machine learning and ANNs, such as the parity problem, the medical diagnosis problems, the Australian credit card assessment problem, and the Mackey-Glass time series prediction problem. The experimental results show that EPNet can produce very compact ANNs with good generalization ability in comparison with other algorithms.

  15. 3D level set methods for evolving fronts on tetrahedral meshes with adaptive mesh refinement

    DOE PAGES

    Morgan, Nathaniel Ray; Waltz, Jacob I.

    2017-03-02

    The level set method is commonly used to model dynamically evolving fronts and interfaces. In this work, we present new methods for evolving fronts with a specified velocity field or in the surface normal direction on 3D unstructured tetrahedral meshes with adaptive mesh refinement (AMR). The level set field is located at the nodes of the tetrahedral cells and is evolved using new upwind discretizations of Hamilton–Jacobi equations combined with a Runge–Kutta method for temporal integration. The level set field is periodically reinitialized to a signed distance function using an iterative approach with a new upwind gradient. We discuss the details of these level set and reinitialization methods. Results from a range of numerical test problems are presented.
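
    As a concrete illustration of the two ingredients named above, upwind advection of the level set field and iterative reinitialization to a signed distance function, here is a minimal 1D editorial sketch; the authors' method is an unstructured 3D tetrahedral AMR scheme, so the grid, the simplified (non-Godunov) gradient used in the reinitialization, and all parameters below are assumptions made only for illustration.

```python
import numpy as np

def advect_upwind(phi, vel, dx, dt):
    """First-order upwind update of d(phi)/dt + v * d(phi)/dx = 0 on a
    periodic 1D grid."""
    dpdx_minus = (phi - np.roll(phi, 1)) / dx     # backward difference
    dpdx_plus = (np.roll(phi, -1) - phi) / dx     # forward difference
    dpdx = np.where(vel > 0, dpdx_minus, dpdx_plus)
    return phi - dt * vel * dpdx

def reinitialize(phi, dx, n_iter=20):
    """Relax phi toward a signed distance function by iterating
    d(phi)/dtau + sign(phi0) * (|grad phi| - 1) = 0 (crude 1D gradient,
    for illustration only)."""
    sign0 = phi / np.sqrt(phi**2 + dx**2)
    dtau = 0.5 * dx
    for _ in range(n_iter):
        grad = np.maximum(np.abs((phi - np.roll(phi, 1)) / dx),
                          np.abs((np.roll(phi, -1) - phi) / dx))
        phi = phi - dtau * sign0 * (grad - 1.0)
    return phi

x = np.linspace(0.0, 1.0, 200, endpoint=False)
dx = x[1] - x[0]
phi = np.abs(x - 0.3) - 0.1            # zero contours at x = 0.2 and x = 0.4
vel = np.full_like(x, 0.5)
dt = 0.5 * dx / 0.5                    # CFL-limited time step
for _ in range(100):
    phi = advect_upwind(phi, vel, dx, dt)
phi = reinitialize(phi, dx)
```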

  16. Methods of sequential estimation for determining initial data in numerical weather prediction. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Cohn, S. E.

    1982-01-01

    Numerical weather prediction (NWP) is an initial-value problem for a system of nonlinear differential equations, in which initial values are known incompletely and inaccurately. Observational data available at the initial time must therefore be supplemented by data available prior to the initial time, a problem known as meteorological data assimilation. A further complication in NWP is that solutions of the governing equations evolve on two different time scales, a fast one and a slow one, whereas fast-scale motions in the atmosphere are not reliably observed. This leads to the so-called initialization problem: initial values must be constrained to result in a slowly evolving forecast. The theory of estimation of stochastic dynamic systems provides a natural approach to such problems. For linear stochastic dynamic models, the Kalman-Bucy (KB) sequential filter is the optimal data assimilation method; for such models, the optimal combined data assimilation-initialization method is a modified version of the KB filter.
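
    To make the role of the Kalman-Bucy-type filter concrete, here is a minimal editorial sketch of one forecast/analysis cycle of a discrete Kalman filter on a toy two-variable linear model; the matrices, the observation operator, and the data values are illustrative assumptions and are not taken from the thesis.

```python
import numpy as np

def kalman_step(x, P, M, Q, H, R, y):
    """One forecast/analysis cycle of the discrete Kalman filter.
    x, P  -- state estimate and its error covariance
    M, Q  -- linear model operator and model-error covariance
    H, R  -- observation operator and observation-error covariance
    y     -- observations available at the new time."""
    # Forecast step
    x_f = M @ x
    P_f = M @ P @ M.T + Q
    # Analysis (update) step
    S = H @ P_f @ H.T + R
    K = P_f @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_a = x_f + K @ (y - H @ x_f)
    P_a = (np.eye(len(x)) - K @ H) @ P_f
    return x_a, P_a

# Toy two-variable system observed only in its first component.
M = np.array([[0.99, 0.10], [0.00, 0.95]])
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
x, P = np.zeros(2), np.eye(2)
for y in (1.0, 1.1, 0.9, 1.2):
    x, P = kalman_step(x, P, M, Q, H, R, np.array([y]))
print(x)
```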

  17. A numerical code for a three-dimensional magnetospheric MHD equilibrium model

    NASA Technical Reports Server (NTRS)

    Voigt, G.-H.

    1992-01-01

    Two-dimensional and three-dimensional MHD equilibrium models were begun for Earth's magnetosphere. The original proposal was motivated by realizing that global, purely data-based models of Earth's magnetosphere are inadequate for studying the underlying plasma physical principles according to which the magnetosphere evolves on the quasi-static convection time scale. Complex numerical grid generation schemes were established for a 3-D Poisson solver, and a robust Grad-Shafranov solver was coded for high-beta MHD equilibria. The effects of both the magnetopause geometry and the boundary conditions on the magnetotail current distribution were then calculated.

  18. Analysis of real-time numerical integration methods applied to dynamic clamp experiments.

    PubMed

    Butera, Robert J; McCarthy, Maeve L

    2004-12-01

    Real-time systems are frequently used as an experimental tool, whereby simulated models interact in real time with neurophysiological experiments. The most demanding of these techniques is known as the dynamic clamp, where simulated ion channel conductances are artificially injected into a neuron via intracellular electrodes for measurement and stimulation. Methodologies for implementing the numerical integration of the gating variables in real time typically employ first-order numerical methods, either Euler or exponential Euler (EE). EE is often used for rapidly integrating ion channel gating variables. We find via simulation studies that for small time steps, both methods are comparable, but at larger time steps, EE performs worse than Euler. We derive error bounds for both methods, and find that the error can be characterized in terms of two ratios: time step over time constant, and voltage measurement error over the slope factor of the steady-state activation curve of the voltage-dependent gating variable. These ratios reliably bound the simulation error and yield results consistent with the simulation analysis. Our bounds quantitatively illustrate how measurement error restricts the accuracy that can be obtained by using smaller step sizes. Finally, we demonstrate that Euler can be computed with identical computational efficiency as EE.
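
    The two first-order update rules compared above can be written down compactly. The sketch below applies forward Euler and exponential Euler to a generic gating variable dm/dt = (m_inf(V) - m)/tau_m(V) at a fixed, exactly known membrane voltage; the sigmoid and time-constant parameters are illustrative assumptions, and the fixed-voltage setting deliberately ignores the voltage-measurement-error effect that drives the paper's error bounds.

```python
import numpy as np

def m_inf(V):            # steady-state activation (illustrative sigmoid)
    return 1.0 / (1.0 + np.exp(-(V + 40.0) / 9.0))

def tau_m(V):            # voltage-dependent time constant in ms (illustrative)
    return 0.5 + 4.0 * np.exp(-((V + 40.0) / 30.0) ** 2)

def euler_step(m, V, dt):
    """Forward Euler update of dm/dt = (m_inf - m) / tau_m."""
    return m + dt * (m_inf(V) - m) / tau_m(V)

def exp_euler_step(m, V, dt):
    """Exponential Euler: exact integration over dt if V is held constant."""
    return m_inf(V) + (m - m_inf(V)) * np.exp(-dt / tau_m(V))

V, T = -20.0, 5.0                                  # fixed voltage (mV), horizon (ms)
exact = m_inf(V) * (1.0 - np.exp(-T / tau_m(V)))   # analytic solution, m(0) = 0
for dt in (0.01, 0.1, 1.0):
    m_e = m_ee = 0.0
    for _ in range(int(round(T / dt))):
        m_e = euler_step(m_e, V, dt)
        m_ee = exp_euler_step(m_ee, V, dt)
    print(dt, abs(m_e - exact), abs(m_ee - exact))
```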

  19. Dynamic facial expressions of emotion transmit an evolving hierarchy of signals over time.

    PubMed

    Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G

    2014-01-20

    Designed by biological and social evolutionary pressures, facial expressions of emotion comprise specific facial movements to support a near-optimal system of signaling and decoding. Although facial expressions are highly dynamic, little is known about the form and function of their temporal dynamics. Do facial expressions transmit diagnostic signals simultaneously to optimize categorization of the six classic emotions, or sequentially to support a more complex communication system of successive categorizations over time? Our data support the latter. Using a combination of perceptual expectation modeling, information theory, and Bayesian classifiers, we show that dynamic facial expressions of emotion transmit an evolving hierarchy of "biologically basic to socially specific" information over time. Early in the signaling dynamics, facial expressions systematically transmit few, biologically rooted face signals supporting the categorization of fewer elementary categories (e.g., approach/avoidance). Later transmissions comprise more complex signals that support categorization of a larger number of socially specific categories (i.e., the six classic emotions). Here, we show that dynamic facial expressions of emotion provide a sophisticated signaling system, questioning the widely accepted notion that emotion communication is composed of six basic (i.e., psychologically irreducible) categories, and instead suggesting four.

  20. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time.

    PubMed

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.

  1. Numerical study of time domain analogy applied to noise prediction from rotating blades

    NASA Astrophysics Data System (ADS)

    Fedala, D.; Kouidri, S.; Rey, R.

    2009-04-01

    Aeroacoustic formulations in the time domain are frequently used to model the aerodynamic sound of airfoils, the time data being more accessible. Formulation 1A developed by Farassat, an integral solution of the Ffowcs Williams and Hawkings equation, holds great interest because of its ability to handle surfaces in arbitrary motion. The aim of this work is to study the numerical sensitivity of this model to specified parameters used in the calculation. The numerical algorithms, spatial and time discretizations, and approximations used for far-field acoustic simulation are presented. An approach for quantifying the numerical errors resulting from the implementation of formulation 1A is carried out based on Isom's and Tam's test cases. A helicopter blade airfoil, as defined by Farassat to investigate Isom's case, is used in this work. According to Isom, the acoustic response of a dipole source with a constant aerodynamic load, ρ_0 c_0^2, is equal to the thickness noise contribution. Discrepancies are observed when the two contributions are computed numerically. In this work, variations of these errors, which depend on the temporal resolution, Mach number, source-observer distance, and interpolation algorithm type, are investigated. The results show that the spline interpolation algorithm gives the minimum error. The analysis is then extended to Tam's test case. Tam's test case has the advantage of providing an analytical solution for the first harmonic of the noise produced by a specific force distribution.

  2. On the numeric integration of dynamic attitude equations

    NASA Technical Reports Server (NTRS)

    Crouch, P. E.; Yan, Y.; Grossman, Robert

    1992-01-01

    We describe new types of numerical integration algorithms developed by the authors. The main aim of the algorithms is to numerically integrate differential equations which evolve on geometric objects, such as the rotation group. The algorithms provide iterates which lie on the prescribed geometric object, either exactly, or to some prescribed accuracy, independent of the order of the algorithm. This paper describes applications of these algorithms to the evolution of the attitude of a rigid body.
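
    A common way to keep the iterates exactly on the rotation group, in the spirit of the algorithms described above, is to update the attitude with the matrix exponential of the scaled body angular velocity via the Rodrigues formula. The short editorial sketch below shows this idea for a constant angular velocity; it is an illustration of group-preserving integration under assumed values, not the authors' specific algorithms.

```python
import numpy as np

def hat(w):
    """Map an angular-velocity vector to its skew-symmetric matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(w, dt):
    """Rodrigues formula: exact exponential of hat(w)*dt, always in SO(3)."""
    theta = np.linalg.norm(w) * dt
    if theta < 1e-12:
        return np.eye(3)
    K = hat(w / np.linalg.norm(w))
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# Attitude update R_{n+1} = R_n @ exp(hat(omega) * dt): the iterate stays
# orthogonal to machine precision, unlike a plain Euler step R + dt*R*hat(w).
R = np.eye(3)
omega = np.array([0.1, 0.2, -0.05])       # body angular velocity (rad/s), assumed
dt = 0.01
for _ in range(10000):
    R = R @ expm_so3(omega, dt)
print(np.linalg.norm(R.T @ R - np.eye(3)))   # orthogonality error stays tiny
```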

  3. A Fourier collocation time domain method for numerically solving Maxwell's equations

    NASA Technical Reports Server (NTRS)

    Shebalin, John V.

    1991-01-01

    A new method for solving Maxwell's equations in the time domain for arbitrary values of permittivity, conductivity, and permeability is presented. Spatial derivatives are found by a Fourier transform method and time integration is performed using a second order, semi-implicit procedure. Electric and magnetic fields are collocated on the same grid points, rather than on interleaved points, as in the Finite Difference Time Domain (FDTD) method. Numerical results are presented for the propagation of a 2-D Transverse Electromagnetic (TEM) mode out of a parallel plate waveguide and into a dielectric and conducting medium.
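
    The core of a Fourier collocation scheme is the evaluation of spatial derivatives by transforming to wavenumber space, multiplying by ik, and transforming back; the time integration (a second-order semi-implicit step in the paper) is a separate ingredient. Here is a minimal editorial sketch of that spatial-derivative step on a periodic grid, with an arbitrary test field chosen only for illustration.

```python
import numpy as np

def spectral_derivative(f, L):
    """d/dx of a periodic, grid-sampled field via the FFT (Fourier collocation)."""
    n = f.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

L = 2.0 * np.pi
x = np.linspace(0.0, L, 64, endpoint=False)
f = np.sin(3 * x)
err = np.max(np.abs(spectral_derivative(f, L) - 3 * np.cos(3 * x)))
print(err)    # spectrally accurate: error at machine-precision level
```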

  4. Numerical modeling of an enhanced very early time electromagnetic (VETEM) prototype system

    USGS Publications Warehouse

    Cui, T.J.; Chew, W.C.; Aydiner, A.A.; Wright, D.L.; Smith, D.V.; Abraham, J.D.

    2000-01-01

    In this paper, two numerical models are presented to simulate an enhanced very early time electromagnetic (VETEM) prototype system, which is used for buried-object detection and environmental problems. Usually, the VETEM system contains a transmitting loop antenna and a receiving loop antenna, which run over a lossy ground to detect buried objects. In the first numerical model, the loop antennas are accurately analyzed using the Method of Moments (MoM) for wire antennas above or buried in lossy ground. Then, Conjugate Gradient (CG) methods, with the use of the fast Fourier transform (FFT) or MoM, are applied to investigate the scattering from buried objects. Reflected and scattered magnetic fields are evaluated at the receiving loop to calculate the output electric current. However, the working frequency of the VETEM system is usually low and, hence, two magnetic dipoles are used to replace the transmitter and receiver in the second numerical model. Comparing the two models, the second is simpler but only valid for low frequencies or small loops, while the first is more general. In this paper, all computations are performed in the frequency domain, and the FFT is used to obtain the time-domain responses. Numerical examples show that simulation results from the two models agree very well when the frequency ranges from 10 kHz to 10 MHz, and both results are close to the measured data.

  5. Laplacian Estrada and normalized Laplacian Estrada indices of evolving graphs.

    PubMed

    Shang, Yilun

    2015-01-01

    Large-scale time-evolving networks have been generated by many natural and technological applications, posing challenges for computation and modeling. Thus, it is of theoretical and practical significance to develop mathematical tools tailored for evolving networks. In this paper, building on the dynamic Estrada index, we study the dynamic Laplacian Estrada index and the dynamic normalized Laplacian Estrada index of evolving graphs. Using linear algebra techniques, we establish general upper and lower bounds for these graph-spectrum-based invariants in terms of a couple of intuitive graph-theoretic measures, including the number of vertices or edges. Synthetic random evolving small-world networks are employed to show the relevance of the proposed dynamic Estrada indices. It is found that neither the static snapshot graphs nor the aggregated graph can approximate the evolving graph itself, indicating the fundamental difference between the static and dynamic Estrada indices.
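
    For a single snapshot graph, the static Laplacian Estrada index is simply the sum of exponentials of the Laplacian eigenvalues, and the normalized variant uses the normalized Laplacian spectrum instead. The sketch below computes these quantities with networkx and numpy for a few synthetic small-world snapshots; how the dynamic indices aggregate the snapshots is specific to the paper and is not reproduced here.

```python
import numpy as np
import networkx as nx

def laplacian_estrada_index(G):
    """LEE(G) = sum_i exp(mu_i), with mu_i the Laplacian eigenvalues."""
    mu = np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray().astype(float))
    return np.exp(mu).sum()

def normalized_laplacian_estrada_index(G):
    """Same construction with the normalized Laplacian spectrum."""
    lam = np.linalg.eigvalsh(nx.normalized_laplacian_matrix(G).toarray())
    return np.exp(lam).sum()

# Compare the index across snapshots of an evolving small-world graph.
snapshots = [nx.watts_strogatz_graph(100, 4, p) for p in (0.01, 0.1, 0.5)]
print([round(laplacian_estrada_index(G), 1) for G in snapshots])
print([round(normalized_laplacian_estrada_index(G), 1) for G in snapshots])
```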

  6. Tracking vortices in superconductors: Extracting singularities from a discretized complex scalar field evolving in time

    DOE PAGES

    Phillips, Carolyn L.; Guo, Hanqi; Peterka, Tom; ...

    2016-02-19

    In type-II superconductors, the dynamics of magnetic flux vortices determine their transport properties. In the Ginzburg-Landau theory, vortices correspond to topological defects in the complex order parameter field. Earlier, we introduced a method for extracting vortices from the discretized complex order parameter field generated by a large-scale simulation of vortex matter. With this method, at a fixed time step, each vortex [simplistically, a one-dimensional (1D) curve in 3D space] can be represented as a connected graph extracted from the discretized field. Here we extend this method as a function of time as well. A vortex now corresponds to a 2D space-time sheet embedded in 4D space-time that can be represented as a connected graph extracted from the discretized field over both space and time. Vortices that interact by merging or splitting correspond to the disappearance and appearance of holes in the connected graph in the time direction. This method of tracking vortices, which makes no assumptions about the scale or behavior of the vortices, can track them with a resolution as good as the discretization of the temporally evolving complex scalar field. In addition, even details of the trajectory between time steps can be reconstructed from the connected graph. With this form of vortex tracking, the details of vortex dynamics in a model of a superconducting material can be understood in greater detail than previously possible.

  7. Assembler: Efficient Discovery of Spatial Co-evolving Patterns in Massive Geo-sensory Data.

    PubMed

    Zhang, Chao; Zheng, Yu; Ma, Xiuli; Han, Jiawei

    2015-08-01

    Recent years have witnessed the wide proliferation of geo-sensory applications wherein a bundle of sensors are deployed at different locations to cooperatively monitor the target condition. Given massive geo-sensory data, we study the problem of mining spatial co-evolving patterns (SCPs), i.e., groups of sensors that are spatially correlated and co-evolve frequently in their readings. SCP mining is of great importance to various real-world applications, yet it is challenging because (1) the truly interesting evolutions are often flooded by numerous trivial fluctuations in the geo-sensory time series; and (2) the pattern search space is extremely large due to the spatiotemporal combinatorial nature of SCP. In this paper, we propose a two-stage method called Assembler. In the first stage, Assembler filters trivial fluctuations using wavelet transform and detects frequent evolutions for individual sensors via a segment-and-group approach. In the second stage, Assembler generates SCPs by assembling the frequent evolutions of individual sensors. Leveraging the spatial constraint, it conceptually organizes all the SCPs into a novel structure called the SCP search tree, which facilitates the effective pruning of the search space to generate SCPs efficiently. Our experiments on both real and synthetic data sets show that Assembler is effective, efficient, and scalable.

  8. Surgical timing of treating injured extremities: an evolving concept of urgency.

    PubMed

    Crist, Brett D; Ferguson, Tania; Murtha, Yvonne M; Lee, Mark A

    2013-01-01

    The management of some orthopaedic extremity injuries has changed over the past decade because of changing resource availability and the risks of complications. It is helpful to review the current literature regarding orthopaedic extremity emergencies and urgencies. Damage control orthopaedic techniques and the concept of the orthopaedic trauma room have also affected the management of these injuries. The available literature indicates that the remaining true orthopaedic extremity emergencies include compartment syndrome and vascular injuries associated with fractures and dislocations. Orthopaedic urgencies include open fracture management, femoral neck fractures in young patients treated with open reduction and internal fixation, and talus fractures that are open or those with impending skin compromise. Deciding when the definitive management of orthopaedic extremity injuries will occur has evolved as the concept of damage control orthopaedics has become more commonly accepted. Patient survival rates have improved with current resuscitative protocols. Definitive fixation of extremity injuries should be delayed until the patient's physiologic and extremity soft-tissue status allows for appropriate definitive management while minimizing the risks of complications. In patients with semiurgent orthopaedic injuries, the use of an orthopaedic trauma room has led to more efficient care of patients, fewer complications, and better time management for surgeons who perform on-call service for patients with traumatic orthopaedic injuries.

  9. Robustness to Faults Promotes Evolvability: Insights from Evolving Digital Circuits

    PubMed Central

    Nolfi, Stefano

    2016-01-01

    We demonstrate how the need to cope with operational faults enables evolving circuits to find more fit solutions. The analysis of the results obtained in different experimental conditions indicates that, in the absence of faults, evolution tends to select circuits that are small and have low phenotypic variability and evolvability. The need to face operational faults, instead, drives evolution toward the selection of larger circuits that are truly robust with respect to genetic variations and that have a greater level of phenotypic variability and evolvability. Overall, our results indicate that the need to cope with operational faults leads to the selection of circuits that have a greater probability to generate better circuits as a result of genetic variation with respect to a control condition in which circuits are not subjected to faults. PMID:27409589

  10. Numerical simulation of time delay Interferometry for LISA with one arm dysfunctional

    NASA Astrophysics Data System (ADS)

    Ni, Wei-Tou; Dhurandhar, Sanjeev V.; Nayak, K. Rajesh; Wang, Gang

    In order to attain the requisite sensitivity for LISA, laser frequency noise must be suppressed below the secondary noises such as the optical path noise, acceleration noise, etc. In a previous paper (a), we found an infinite family of second-generation analytic solutions of time delay interferometry and estimated the laser noise due to residual time delay semi-analytically from orbit perturbations due to the Earth. Since other planets and solar-system bodies also perturb the orbits of the LISA spacecraft and affect the time delay interferometry, we simulate the time delay numerically in this paper. To conform to the actual LISA planning, we have worked out a set of 10-year optimized mission orbits of the LISA spacecraft using the CGC3 ephemeris framework (b). Here we use this numerical solution to calculate the residual errors in the second-generation solutions up to n = 3 of our previous paper, and compare them with the semi-analytic error estimate. The accuracy of this calculation is better than 1 m (or 30 ns). (a) S. V. Dhurandhar, K. Rajesh Nayak and J.-Y. Vinet, Time delay interferometry for LISA with one arm dysfunctional. (b) W.-T. Ni and G. Wang, Orbit optimization for 10-year LISA mission orbit starting at 21 June, 2021 using CGC3 ephemeris framework.

  11. Evolving molecular era of childhood medulloblastoma: time to revisit therapy.

    PubMed

    Khatua, Soumen

    2016-01-01

    Currently medulloblastoma is treated with a uniform therapeutic approach based on histopathology and clinico-radiological risk stratification, resulting in unpredictable treatment failure and relapses. Improved understanding of the biological, molecular and genetic make-up of these tumors now clearly identifies it as a compendium of four distinct subtypes (WNT, SHH, group 3 and 4). Advances in utilization of the genomic and epigenomic machinery have now delineated genetic aberrations and epigenetic perturbations in each subgroup as potential druggable targets. This has resulted in endeavors to profile targeted therapy. The challenge and future of medulloblastoma therapeutics will be to keep pace with the evolving novel biological insights and translating them into optimal targeted treatment regimens.

  12. Numerical simulation of electromagnetic waves in Schwarzschild space-time by finite difference time domain method and Green function method

    NASA Astrophysics Data System (ADS)

    Jia, Shouqing; La, Dongsheng; Ma, Xuelian

    2018-04-01

    The finite difference time domain (FDTD) algorithm and Green function algorithm are implemented into the numerical simulation of electromagnetic waves in Schwarzschild space-time. FDTD method in curved space-time is developed by filling the flat space-time with an equivalent medium. Green function in curved space-time is obtained by solving transport equations. Simulation results validate both the FDTD code and Green function code. The methods developed in this paper offer a tool to solve electromagnetic scattering problems.
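
    The "equivalent medium" idea reduces, in the simplest setting, to a standard FDTD update with spatially varying material coefficients. The following is a minimal 1D Yee-scheme editorial sketch with an arbitrary permittivity jump standing in for the Schwarzschild-derived medium; the grid, source, and profile are assumptions made for illustration only.

```python
import numpy as np

# 1D Yee/FDTD update for Ez and Hy (normalized units, mu_r = 1) with a
# spatially varying relative permittivity eps_r(x) acting as the medium.
n = 400
eps_r = np.ones(n)
eps_r[200:] = 4.0                    # arbitrary inhomogeneity (assumption)
c, dx = 1.0, 1.0
dt = 0.5 * dx / c                    # CFL-stable time step

Ez = np.zeros(n)
Hy = np.zeros(n - 1)
for step in range(600):
    Hy += (dt / dx) * (Ez[1:] - Ez[:-1])                         # update H
    Ez[1:-1] += (dt / (dx * eps_r[1:-1])) * (Hy[1:] - Hy[:-1])   # update E
    Ez[50] += np.exp(-((step - 60) / 15.0) ** 2)                 # soft Gaussian source
print(float(np.max(np.abs(Ez))))
```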

  13. Efficient Radiative Transfer for Dynamically Evolving Stratified Atmospheres

    NASA Astrophysics Data System (ADS)

    Judge, Philip G.

    2017-12-01

    We present a fast multi-level and multi-atom non-local thermodynamic equilibrium radiative transfer method for dynamically evolving stratified atmospheres, such as the solar atmosphere. The preconditioning method of Rybicki & Hummer (RH92) is adopted. However, driven by the need for speed and stability, a “second-order escape probability” scheme is implemented within the framework of the RH92 method, in which frequency- and angle-integrals are carried out analytically. This minimizes the computational work needed, but comes at the expense of numerical accuracy. The iteration scheme is local; the formal solutions for the intensities are the only non-local component. At present the methods have been coded for vertical transport, applicable to atmospheres that are highly stratified. The probabilistic method seems adequately fast, stable, and sufficiently accurate for exploring dynamical interactions between the evolving MHD atmosphere and radiation using current computer hardware. Current 2D and 3D dynamics codes do not include this interaction as consistently as the current method does. The solutions generated may ultimately serve as initial conditions for dynamical calculations including full 3D radiative transfer. The National Center for Atmospheric Research is sponsored by the National Science Foundation.

  14. Evolving a Behavioral Repertoire for a Walking Robot.

    PubMed

    Cully, A; Mouret, J-B

    2016-01-01

    Numerous algorithms have been proposed to allow legged robots to learn to walk. However, most of these algorithms are devised to learn walking in a straight line, which is not sufficient to accomplish any real-world mission. Here we introduce the Transferability-based Behavioral Repertoire Evolution algorithm (TBR-Evolution), a novel evolutionary algorithm that simultaneously discovers several hundreds of simple walking controllers, one for each possible direction. By taking advantage of solutions that are usually discarded by evolutionary processes, TBR-Evolution is substantially faster than independently evolving each controller. Our technique relies on two methods: (1) novelty search with local competition, which searches for both high-performing and diverse solutions, and (2) the transferability approach, which combines simulations and real tests to evolve controllers for a physical robot. We evaluate this new technique on a hexapod robot. Results show that with only a few dozen short experiments performed on the robot, the algorithm learns a repertoire of controllers that allows the robot to reach every point in its reachable space. Overall, TBR-Evolution introduced a new kind of learning algorithm that simultaneously optimizes all the achievable behaviors of a robot.

  15. Exploring the evolutionary mechanism of complex supply chain systems using evolving hypergraphs

    NASA Astrophysics Data System (ADS)

    Suo, Qi; Guo, Jin-Li; Sun, Shiwei; Liu, Han

    2018-01-01

    A new evolutionary model is proposed to describe the characteristics and evolution pattern of supply chain systems using evolving hypergraphs, in which nodes represent enterprise entities while hyperedges represent the relationships among diverse trades. The nodes arrive at the system in accordance with a Poisson process, with the evolving process incorporating the addition of new nodes, linking of old nodes, and rewiring of links. Grounded in the Poisson process theory and continuum theory, the stationary average hyperdegree distribution is shown to follow a shifted power law (SPL), and the theoretical predictions are consistent with the results of numerical simulations. Testing the impact of parameters on the model yields a positive correlation between hyperdegree and degree. The model also uncovers macro characteristics of the relationships among enterprises due to the microscopic interactions among individuals.

  16. Social networks: Evolving graphs with memory dependent edges

    NASA Astrophysics Data System (ADS)

    Grindrod, Peter; Parsons, Mark

    2011-10-01

    The plethora of digital communication technologies, and their mass take up, has resulted in a wealth of interest in social network data collection and analysis in recent years. Within many such networks the interactions are transient: thus those networks evolve over time. In this paper we introduce a class of models for such networks using evolving graphs with memory dependent edges, which may appear and disappear according to their recent history. We consider time discrete and time continuous variants of the model. We consider the long term asymptotic behaviour as a function of parameters controlling the memory dependence. In particular we show that such networks may continue evolving forever, or else may quench and become static (containing immortal and/or extinct edges). This depends on the existence or otherwise of certain infinite products and series involving age dependent model parameters. We show how to differentiate between the alternatives based on a finite set of observations. To test these ideas we show how model parameters may be calibrated based on limited samples of time dependent data, and we apply these concepts to three real networks: summary data on mobile phone use from a developing region; online social-business network data from China; and disaggregated mobile phone communications data from a reality mining experiment in the US. In each case we show that there is evidence for memory dependent dynamics, such as that embodied within the class of models proposed here.

  17. Probe measurements and numerical model predictions of evolving size distributions in premixed flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Filippo, A.; Sgro, L.A.; Lanzuolo, G.

    2009-09-15

    Particle size distributions (PSDs), measured with a dilution probe and a Differential Mobility Analyzer (DMA), and numerical predictions of these PSDs, based on a model that includes only coagulation or alternatively inception and coagulation, are compared to investigate particle growth processes and possible sampling artifacts in the post-flame region of a C/O = 0.65 premixed laminar ethylene-air flame. Inputs to the numerical model are the PSD measured early in the flame (the initial condition for the aerosol population) and the temperature profile measured along the flame's axial centerline. The measured PSDs are initially unimodal, with a modal mobility diameter of 2.2 nm, and become bimodal later in the post-flame region. The smaller mode is best predicted with a size-dependent coagulation model, which allows some fraction of the smallest particles to escape collisions without resulting in coalescence or coagulation through the size-dependent coagulation efficiency (γ_SD). Instead, when γ = 1 and the coagulation rate is equal to the collision rate for all particles regardless of their size, the coagulation model significantly under-predicts the number concentration of both modes and over-predicts the size of the largest particles in the distribution compared to the measured size distributions at various heights above the burner. The coagulation (γ_SD) model alone is unable to reproduce well the larger particle mode (mode II). Combining persistent nucleation with size-dependent coagulation brings the predicted PSDs to within experimental error of the measurements, which seems to suggest that surface growth processes are relatively insignificant in these flames. Shifting measured PSDs a few mm closer to the burner surface, generally adopted to correct for probe perturbations, does not produce a better matching between the experimental and the numerical results.

  18. Stochastic Formal Correctness of Numerical Algorithms

    NASA Technical Reports Server (NTRS)

    Daumas, Marc; Lester, David; Martin-Dorel, Erik; Truffert, Annick

    2009-01-01

    We provide a framework to bound the probability that accumulated errors were never above a given threshold in numerical algorithms. Such algorithms are used, for example, in aircraft and nuclear power plants. This report contains simple formulas based on Lévy's and Markov's inequalities, and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit in our framework and cover the common practices of systems that evolve for a long time. We compute the number of bits that remain continuously significant in the first two applications with a probability of failure around one out of a billion, where worst-case analysis considers that no significant bit remains. We are using PVS, as such formal tools force explicit statement of all hypotheses and prevent incorrect uses of theorems.
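
    As a toy illustration of the kind of bound described, one can apply Markov's inequality to the square of an accumulated error modeled as a sum of N independent, zero-mean roundoff terms, each uniform on [-u, u], which gives P(|S_N| >= a) <= N u^2 / (3 a^2). The error model and the numbers below are editorial assumptions; the report itself derives its results formally in PVS with Lévy- and Markov-type inequalities.

```python
# Markov-type bound on accumulated rounding error (toy model, not the
# report's formal development): each of n_ops errors is zero-mean and
# uniform in [-u, u], so Var(S_N) = n_ops * u**2 / 3, and
#     P(|S_N| >= a) <= Var(S_N) / a**2.

def accumulated_error_bound(n_ops, unit_roundoff, threshold):
    variance = n_ops * unit_roundoff**2 / 3.0
    return min(1.0, variance / threshold**2)

u = 2.0**-53                # double-precision unit roundoff
p = accumulated_error_bound(n_ops=10**9, unit_roundoff=u, threshold=1e-6)
print(p)                    # probability bound of order 1e-12
```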

  19. Numerical methods for large-scale, time-dependent partial differential equations

    NASA Technical Reports Server (NTRS)

    Turkel, E.

    1979-01-01

    A survey of numerical methods for time dependent partial differential equations is presented. The emphasis is on practical applications to large scale problems. A discussion of new developments in high order methods and moving grids is given. The importance of boundary conditions is stressed for both internal and external flows. A description of implicit methods is presented including generalizations to multidimensions. Shocks, aerodynamics, meteorology, plasma physics and combustion applications are also briefly described.

  20. A time-implicit numerical method and benchmarks for the relativistic Vlasov–Ampere equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrie, Michael; Shadwick, B. A.

    2016-01-04

    Here, we present a time-implicit numerical method to solve the relativistic Vlasov–Ampere system of equations on a two-dimensional phase space grid. The time-splitting algorithm we use allows the generalization of the work presented here to higher dimensions while keeping the linear aspect of the resulting discrete set of equations. The implicit method is benchmarked against linear theory results for relativistic Landau damping, for which analytical expressions using the Maxwell–Jüttner distribution function are derived. We note that, independently from the shape of the distribution function, the relativistic treatment features collective behaviors that do not exist in the non-relativistic case. The numerical study of the relativistic two-stream instability completes the set of benchmarking tests.

  1. Active printed materials for complex self-evolving deformations.

    PubMed

    Raviv, Dan; Zhao, Wei; McKnelly, Carrie; Papadopoulou, Athina; Kadambi, Achuta; Shi, Boxin; Hirsch, Shai; Dikovsky, Daniel; Zyracki, Michael; Olguin, Carlos; Raskar, Ramesh; Tibbits, Skylar

    2014-12-18

    We propose a new design of complex self-evolving structures that vary over time due to environmental interaction. In conventional 3D printing systems, materials are meant to be stable rather than active and fabricated models are designed and printed as static objects. Here, we introduce a novel approach for simulating and fabricating self-evolving structures that transform into a predetermined shape, changing property and function after fabrication. The new locally coordinated bending primitives combine into a single system, allowing for a global deformation which can stretch, fold and bend given environmental stimulus.

  2. Active Printed Materials for Complex Self-Evolving Deformations

    PubMed Central

    Raviv, Dan; Zhao, Wei; McKnelly, Carrie; Papadopoulou, Athina; Kadambi, Achuta; Shi, Boxin; Hirsch, Shai; Dikovsky, Daniel; Zyracki, Michael; Olguin, Carlos; Raskar, Ramesh; Tibbits, Skylar

    2014-01-01

    We propose a new design of complex self-evolving structures that vary over time due to environmental interaction. In conventional 3D printing systems, materials are meant to be stable rather than active and fabricated models are designed and printed as static objects. Here, we introduce a novel approach for simulating and fabricating self-evolving structures that transform into a predetermined shape, changing property and function after fabrication. The new locally coordinated bending primitives combine into a single system, allowing for a global deformation which can stretch, fold and bend given environmental stimulus. PMID:25522053

  3. Direct numerical simulation of transition and turbulence in a spatially evolving boundary layer

    NASA Technical Reports Server (NTRS)

    Rai, Man M.; Moin, Parviz

    1991-01-01

    A high-order-accurate finite-difference approach to direct simulations of transition and turbulence in compressible flows is described. Attention is given to the high-free-stream disturbance case in which transition to turbulence occurs close to the leading edge. In effect, computation requirements are reduced. A method for numerically generating free-stream disturbances is presented.

  4. Netgram: Visualizing Communities in Evolving Networks

    PubMed Central

    Mall, Raghvendra; Langone, Rocco; Suykens, Johan A. K.

    2015-01-01

    Real-world complex networks are dynamic in nature and change over time. The change is usually observed in the interactions within the network over time. Complex networks exhibit community-like structures. A key feature of the dynamics of complex networks is the evolution of communities over time. Several methods have been proposed to detect and track the evolution of these groups over time. However, there is no generic tool which visualizes all the aspects of group evolution in dynamic networks including birth, death, splitting, merging, expansion, shrinkage and continuation of groups. In this paper, we propose Netgram: a tool for visualizing evolution of communities in time-evolving graphs. Netgram maintains evolution of communities over two consecutive time stamps in tables which are used to create a query database using the SQL outer-join operation. It uses a line-based visualization technique which adheres to certain design principles and aesthetic guidelines. Netgram uses a greedy solution to order the initial community information provided by the evolutionary clustering technique such that we have fewer line cross-overs in the visualization. This makes it easier to track the progress of individual communities in time-evolving graphs. Netgram is a generic toolkit which can be used with any evolutionary community detection algorithm as illustrated in our experiments. We use Netgram for visualization of topic evolution in the NIPS conference over a period of 11 years and observe the emergence and merging of several disciplines in the field of information processing systems. PMID:26356538
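
    The table-maintenance step described above can be pictured with a small example: community labels at two consecutive time stamps are combined with an outer join so that unmatched rows correspond to births and deaths, while cross-tabulating the matched labels reveals continuation, splits and merges. The sketch below uses pandas as a stand-in for the SQL outer-join operation; the node ids and community labels are invented for illustration and are not Netgram's actual data.

```python
import pandas as pd

# Community membership at two consecutive time stamps (illustrative labels).
t1 = pd.DataFrame({'node': [1, 2, 3, 4, 5, 6],
                   'community_t1': ['A', 'A', 'A', 'B', 'B', 'C']})
t2 = pd.DataFrame({'node': [1, 2, 3, 4, 5, 7],
                   'community_t2': ['A', 'A', 'D', 'D', 'B', 'E']})

# Outer join on node id (pandas analogue of a SQL OUTER JOIN): rows with NaN
# on one side are births/deaths; the cross-tabulation of matched rows shows
# continuation, splitting and merging between the two time stamps.
joined = t1.merge(t2, on='node', how='outer')
transitions = pd.crosstab(joined['community_t1'].fillna('(new)'),
                          joined['community_t2'].fillna('(gone)'))
print(transitions)
```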

  5. Visualizing Time-Varying Phenomena In Numerical Simulations Of Unsteady Flows

    NASA Technical Reports Server (NTRS)

    Lane, David A.

    1996-01-01

    Streamlines, contour lines, vector plots, and volume slices (cutting planes) are commonly used for flow visualization. These techniques are sometimes referred to as instantaneous flow visualization techniques because calculations are based on an instant of the flowfield in time. Although instantaneous flow visualization techniques are effective for depicting phenomena in steady flows, they sometimes do not adequately depict time-varying phenomena in unsteady flows. Streaklines and timelines are effective visualization techniques for depicting vortex shedding, vortex breakdown, and shock waves in unsteady flows. These techniques are examples of time-dependent flow visualization techniques, which are based on many instants of the flowfields in time. This paper describes the algorithms for computing streaklines and timelines. Using numerically simulated unsteady flows, streaklines and timelines are compared with streamlines, contour lines, and vector plots. It is shown that streaklines and timelines reveal vortex shedding and vortex breakdown more clearly than instantaneous flow visualization techniques.
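
    The streakline algorithm mentioned above amounts to releasing a new particle from the seed location at every time step and advecting all previously released particles with the time-varying velocity field. Below is a minimal editorial sketch using forward Euler advection and an analytic unsteady velocity field as a stand-in for simulation data; the field, the seed, and the step sizes are illustrative assumptions.

```python
import numpy as np

def velocity(p, t):
    """Toy unsteady 2D velocity field (stands in for the CFD flowfield)."""
    x = p[..., 0]
    u = 1.0 + 0.3 * np.sin(2.0 * np.pi * t)
    v = 0.3 * np.cos(2.0 * np.pi * (x - t))
    return np.stack([np.broadcast_to(u, x.shape), v], axis=-1)

def streakline(seed, t_end, dt):
    """Release one particle per step at `seed` and advect all of them."""
    particles = np.empty((0, 2))
    t = 0.0
    while t < t_end:
        particles = np.vstack([particles, seed])              # new release
        particles = particles + dt * velocity(particles, t)   # advect all
        t += dt
    return particles

line = streakline(seed=np.array([[0.0, 0.0]]), t_end=5.0, dt=0.05)
print(line.shape)      # one streakline point per release time
```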

  6. The Dynamical Classification of Centaurs which Evolve into Comets

    NASA Astrophysics Data System (ADS)

    Wood, Jeremy R.; Horner, Jonathan; Hinse, Tobias; Marsden, Stephen; Swinburne University of Technology

    2016-10-01

    Centaurs are small Solar system bodies with semi-major axes between Jupiter and Neptune and perihelia beyond Jupiter. Centaurs can be further subclassified into two dynamical categories - random walk and resonance hopping. Random walk Centaurs have mean square semi-major axes ⟨a²⟩ which vary in time according to a generalized diffusion equation where ⟨a²⟩ ~ t^(2H). H is the Hurst exponent with 0 < H < 1, and t is time. The behavior of ⟨a²⟩ for resonance hopping Centaurs is not well described by generalized diffusion. The aim of this study is to determine which dynamical type of Centaur is most likely to evolve into each class of comet. 31,722 fictional massless test particles were integrated for 3 Myr in the 6-body problem (Sun, Jovian planets, test particle). Initially each test particle was a member of one of four groups. The semi-major axes of all test particles in a group were clustered within 0.27 au of a first-order, interior Mean Motion resonance of Neptune. The resonances were centered at 18.94 au, 22.95 au, 24.82 au and 28.37 au. If the perihelion of a test particle reached < 4 au then the test particle was considered to be a comet and classified as either a random walk or resonance hopping Centaur. The results showed that over 4,000 test particles evolved into comets within 3 Myr. 59% of these test particles were random walk and 41% were resonance hopping. The behavior of the semi-major axis in time was usually well described by generalized diffusion for random walk Centaurs (r_avg = 0.98) and poorly described for resonance hopping Centaurs (r_avg = 0.52). The average Hurst exponent was 0.48 for random walk Centaurs and 0.20 for resonance hopping Centaurs. Random walk Centaurs were more likely to evolve into short period comets while resonance hopping Centaurs were more likely to evolve into long period comets. For each initial cluster, resonance hopping Centaurs took longer to evolve into comets than random walk Centaurs. Overall the population of

  7. The development of efficient numerical time-domain modeling methods for geophysical wave propagation

    NASA Astrophysics Data System (ADS)

    Zhu, Lieyuan

    This Ph.D. dissertation focuses on the numerical simulation of geophysical wave propagation in the time domain, including elastic waves in solid media, acoustic waves in fluid media, and electromagnetic waves in dielectric media. This thesis shows that a linear system model can accurately describe the physical processes of geophysical wave propagation and can be used as a sound basis for modeling geophysical wave propagation phenomena. The generalized stability condition for numerical modeling of wave propagation is therefore discussed in the context of linear system theory. The efficiency of a series of different time-domain numerical algorithms for modeling geophysical wave propagation is discussed and compared. These algorithms include the finite-difference time-domain method, the pseudospectral time-domain method, and the alternating direction implicit (ADI) finite-difference time-domain method. The advantages and disadvantages of these numerical methods are discussed and the specific stability condition for each modeling scheme is carefully derived in the context of linear system theory. Based on the review and discussion of these existing approaches, the split-step ADI pseudospectral time-domain (SS-ADI-PSTD) method is developed and tested for several cases. Moreover, the state-of-the-art stretched-coordinate perfectly matched layer (SCPML) has also been implemented in the SS-ADI-PSTD algorithm as the absorbing boundary condition for truncating the computational domain and absorbing the artificial reflection from the domain boundaries. After algorithmic development, a few case studies serve as real-world examples to verify the capacities of the numerical algorithms and to understand the capabilities and limitations of geophysical methods for detection of subsurface contamination. The first case is a study using ground penetrating radar (GPR) amplitude variation with offset (AVO) for subsurface non-aqueous-phase liquid (NAPL) contamination. The

  8. A fast-evolving luminous transient discovered by K2/Kepler

    NASA Astrophysics Data System (ADS)

    Rest, A.; Garnavich, P. M.; Khatami, D.; Kasen, D.; Tucker, B. E.; Shaya, E. J.; Olling, R. P.; Mushotzky, R.; Zenteno, A.; Margheim, S.; Strampelli, G.; James, D.; Smith, R. C.; Förster, F.; Villar, V. A.

    2018-04-01

    For decades, optical time-domain searches have been tuned to find ordinary supernovae, which rise and fall in brightness over a period of weeks. Recently, supernova searches have improved their cadences and a handful of fast-evolving luminous transients have been identified [1-5]. These have peak luminosities comparable to type Ia supernovae, but rise to maximum in less than ten days and fade from view in less than one month. Here we present the most extreme example of this class of object thus far: KSN 2015K, with a rise time of only 2.2 days and a time above half-maximum of only 6.8 days. We show that, unlike type Ia supernovae, the light curve of KSN 2015K was not powered by the decay of radioactive elements. We further argue that it is unlikely that it was powered by continuing energy deposition from a central remnant (a magnetar or black hole). Using numerical radiation hydrodynamical models, we show that the light curve of KSN 2015K is well fitted by a model where the supernova runs into external material presumably expelled in a pre-supernova mass-loss episode. The rapid rise of KSN 2015K therefore probes the venting of photons when a hypersonic shock wave breaks out of a dense extended medium.

  9. Time-Accurate Numerical Simulations of Synthetic Jet Quiescent Air

    NASA Technical Reports Server (NTRS)

    Rupesh, K-A. B.; Ravi, B. R.; Mittal, R.; Raju, R.; Gallas, Q.; Cattafesta, L.

    2007-01-01

    The unsteady evolution of a three-dimensional synthetic jet into quiescent air is studied by time-accurate numerical simulations using a second-order accurate mixed explicit-implicit fractional step scheme on Cartesian grids. Both two-dimensional and three-dimensional calculations of the synthetic jet are carried out at a Reynolds number (based on the average velocity during the discharge phase of the cycle, V_j, and the jet width, d) of 750 and a Stokes number of 17.02. The results obtained are assessed against PIV and hotwire measurements provided for the NASA LaRC workshop on CFD validation of synthetic jets.
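
    For reference, the two nondimensional parameters quoted above are the jet Reynolds number Re = V_j d / ν and the Stokes number, commonly defined as S = sqrt(2πf d²/ν), where f is the actuation frequency and ν the kinematic viscosity. The dimensional values in the sketch below are illustrative assumptions chosen to reproduce Re = 750 and S = 17.02; they are not the workshop operating conditions.

    ```python
    import numpy as np

    nu = 1.5e-5                                 # kinematic viscosity of air, m^2/s (assumed)
    d = 1.0e-3                                  # slot width, m (assumed)
    Vj = 11.25                                  # average discharge velocity, m/s (assumed)
    f = 17.02**2 * nu / (2.0 * np.pi * d**2)    # frequency implied by S = sqrt(2*pi*f*d^2/nu)

    Re = Vj * d / nu                            # jet Reynolds number
    S = np.sqrt(2.0 * np.pi * f * d**2 / nu)    # Stokes number
    print(f"Re = {Re:.0f}, Stokes number = {S:.2f}, frequency = {f:.1f} Hz")
    ```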

  10. Direct Numerical Simulation of a Temporally Evolving Incompressible Plane Wake: Effect of Initial Conditions on Evolution and Topology

    NASA Technical Reports Server (NTRS)

    Sondergaard, R.; Cantwell, B.; Mansour, N.

    1997-01-01

    Direct numerical simulations have been used to examine the effect of the initial disturbance field on the development of three-dimensionality and the transition to turbulence in the incompressible plane wake. The simulations were performed using a new numerical method for solving the time-dependent, three-dimensional, incompressible Navier-Stokes equations in flows with one infinite and two periodic directions. The method uses standard Fast Fourier Transforms and is applicable to cases where the vorticity field is compact in the infinite direction. Initial disturbance fields examined were combinations of two-dimensional waves and symmetric pairs of 60 deg oblique waves at the fundamental, subharmonic, and sub-subharmonic wavelengths. The results of these simulations indicate that the presence of 60 deg disturbances at the subharmonic streamwise wavelength results in the development of strong coherent three-dimensional structures. The resulting strong three-dimensional rate-of-strain triggers the growth of intense fine scale motions. Wakes initiated with 60 deg disturbances at the fundamental streamwise wavelength develop weak coherent streamwise structures, and do not develop significant fine scale motions, even at high Reynolds numbers. The wakes which develop strong three-dimensional structures exhibit growth rates on par with experimentally observed turbulent plane wakes. Wakes which develop only weak three-dimensional structures exhibit significantly lower late time growth rates. Preliminary studies of wakes initiated with an oblique fundamental and a two-dimensional subharmonic, which develop asymmetric coherent oblique structures at the subharmonic wavelength, indicate that significant fine scale motions only develop if the resulting oblique structures are above an angle of approximately 45 deg.

  11. Numerical Simulation of Black Holes

    NASA Astrophysics Data System (ADS)

    Teukolsky, Saul

    2003-04-01

    Einstein's equations of general relativity are prime candidates for numerical solution on supercomputers. There is some urgency in being able to carry out such simulations: Large-scale gravitational wave detectors are now coming on line, and the most important expected signals cannot be predicted except numerically. Problems involving black holes are perhaps the most interesting, yet also particularly challenging computationally. One difficulty is that inside a black hole there is a physical singularity that cannot be part of the computational domain. A second difficulty is the disparity in length scales between the size of the black hole and the wavelength of the gravitational radiation emitted. A third difficulty is that all existing methods of evolving black holes in three spatial dimensions are plagued by instabilities that prohibit long-term evolution. I will describe the ideas that are being introduced in numerical relativity to deal with these problems, and discuss the results of recent calculations of black hole collisions.

  12. On the time to steady state: insights from numerical modeling

    NASA Astrophysics Data System (ADS)

    Goren, L.; Willett, S.; McCoy, S. W.; Perron, J.

    2013-12-01

    How fast do fluvial landscapes approach steady state after a tectonic or climatic perturbation? While theory and some numerical models predict that the celerity of the advective wave (knickpoint) controls the response time for perturbations, experiments and other landscape evolution models demonstrate that the time to steady state is much longer than the theoretically predicted response time. We posit that the longevity of transient features and the time to steady state are controlled by the stability of the topology and geometry of channel networks. Evolution of a channel network occurs by a combination of discrete capture events and continuous migration of water divides, processes that are difficult to represent accurately in landscape evolution models. We therefore address the question of the time to steady state using the DAC landscape evolution model, which solves accurately for the location of water divides using a combination of an analytical solution for hillslopes and low-order channels with a numerical solution for higher-order channels. DAC also includes an explicit capture criterion. We have tested fundamental predictions from DAC and show that modeled networks reproduce natural network characteristics such as Hack's exponent and coefficient and the fractal dimension. We define two steady-state criteria: a topographic steady state, defined by global, pointwise steady elevation, and a topological steady state, defined as the state in which no further reorganization of the drainage network takes place. Analyzing block uplift simulations, we find that the time to achieve either topographic or topological steady state exceeds by an order of magnitude the theoretical response time of the fluvial network. The longevity of the transient state is the result of the area feedback, by which migration of a divide changes the local contributing area. This change propagates downstream as a slope adjustment, forcing further divide migrations

  13. Numerical Solution of Time-Dependent Problems with a Fractional-Power Elliptic Operator

    NASA Astrophysics Data System (ADS)

    Vabishchevich, P. N.

    2018-03-01

    A time-dependent problem in a bounded domain for a fractional diffusion equation is considered. The first-order evolution equation involves a fractional-power second-order elliptic operator with Robin boundary conditions. A finite-element spatial approximation with an additive approximation of the operator of the problem is used. The time approximation is based on a vector scheme. The transition to a new time level is ensured by solving a sequence of standard elliptic boundary value problems. Numerical results obtained for a two-dimensional model problem are presented.
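
    To make the setting concrete, the sketch below steps du/dt + A^α u = 0 for a 1-D Laplacian with implicit Euler, forming A^α directly from a dense eigendecomposition. This brute-force construction is only a reference illustration (with Dirichlet rather than Robin boundary conditions, and arbitrary parameters); the point of the paper's vector scheme is precisely to avoid forming the fractional power and instead solve a sequence of standard elliptic problems.

    ```python
    import numpy as np

    n, alpha, dt, nsteps = 100, 0.5, 1e-3, 50
    h = 1.0 / (n + 1)
    # 1-D finite-difference Laplacian with Dirichlet boundary conditions
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2

    w, V = np.linalg.eigh(A)                   # A = V diag(w) V^T, all w > 0
    A_alpha = (V * w**alpha) @ V.T             # fractional power via spectral calculus

    x = np.linspace(h, 1.0 - h, n)
    u = np.sin(np.pi * x)                      # smooth initial condition
    propagator = np.linalg.inv(np.eye(n) + dt * A_alpha)   # implicit Euler step
    for _ in range(nsteps):
        u = propagator @ u
    print(f"max(u) after {nsteps} steps: {u.max():.4f}")
    ```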

  14. Vortex locking in direct numerical simulations of quantum turbulence.

    PubMed

    Morris, Karla; Koplik, Joel; Rouson, Damian W I

    2008-07-04

    Direct numerical simulations are used to examine the locking of quantized superfluid vortices and normal fluid vorticity in evolving turbulent flows. The superfluid is driven by the normal fluid, which undergoes either a decaying Taylor-Green flow or a linearly forced homogeneous isotropic turbulent flow, although the back reaction of the superfluid on the normal fluid flow is omitted. Using correlation functions and wavelet transforms, we present numerical and visual evidence for vortex locking on length scales above the intervortex spacing.

  15. Evolving virtual creatures and catapults.

    PubMed

    Chaumont, Nicolas; Egli, Richard; Adami, Christoph

    2007-01-01

    We present a system that can evolve the morphology and the controller of virtual walking and block-throwing creatures (catapults) using a genetic algorithm. The system is based on Sims' work, implemented as a flexible platform with an off-the-shelf dynamics engine. Experiments aimed at evolving Sims-type walkers resulted in the emergence of various realistic gaits while using fairly simple objective functions. Due to the flexibility of the system, drastically different morphologies and functions evolved with only minor modifications to the system and objective function. For example, various throwing techniques evolved when selecting for catapults that propel a block as far as possible. Among the strategies and morphologies evolved, we find the drop-kick strategy, as well as the systematic invention of the principle behind the wheel, when allowing mutations to the projectile.
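
    The evolutionary loop behind such experiments is a standard genetic algorithm: evaluate each genome with the physics engine, keep the fittest, and refill the population by crossover and mutation. The sketch below shows that loop in bare-bones form with a toy fitness function standing in for the dynamics-engine evaluation of walking or throwing distance; all names and parameter values are illustrative, not those used in the paper.

    ```python
    import random

    GENES, POP, GENERATIONS, MUT_RATE = 16, 60, 100, 0.05

    def fitness(genome):
        # placeholder: a real system would simulate the creature in the
        # dynamics engine and return e.g. distance walked or block throw distance
        return sum(g * (i + 1) for i, g in enumerate(genome))

    def mutate(genome):
        return [g + random.gauss(0.0, 0.1) if random.random() < MUT_RATE else g
                for g in genome]

    def crossover(a, b):
        cut = random.randrange(1, GENES)       # one-point crossover
        return a[:cut] + b[cut:]

    pop = [[random.uniform(-1.0, 1.0) for _ in range(GENES)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: POP // 4]              # truncation selection
        pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                         for _ in range(POP - len(parents))]
    print(round(fitness(pop[0]), 2))
    ```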

  16. Time and length scales within a fire and implications for numerical simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    TIESZEN,SHELDON R.

    2000-02-02

    A partial non-dimensionalization of the Navier-Stokes equations is used to obtain order of magnitude estimates of the rate-controlling transport processes in the reacting portion of a fire plume as a function of length scale. Over continuum length scales, buoyant time scales vary as the square root of the length scale; advection time scales vary as the length scale; and diffusion time scales vary as the square of the length scale. Due to the variation with length scale, each process is dominant over a given range. The relationship of buoyancy and baroclinic vorticity generation is highlighted. For numerical simulation, a first-principles solution of fire problems is not possible with foreseeable computational hardware in the near future. Filtered transport equations with subgrid modeling will be required, as only two to three decades of length scale are captured by solution of discretized conservation equations. By whatever filtering process one employs, one must have humble expectations for the accuracy obtainable by numerical simulation for practical fire problems that contain important multi-physics/multi-length-scale coupling with up to 10 orders of magnitude in length scale.
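
    The scalings stated above are easy to tabulate: a buoyant time scale varying as the square root of the length scale (here sqrt(L/g)), an advective time scale L/U, and a diffusive time scale L²/D. The short script below prints them across several length scales; the velocity and diffusivity values are illustrative assumptions, not values from the report.

    ```python
    import numpy as np

    g = 9.81          # gravitational acceleration, m/s^2
    U = 1.0           # characteristic advection velocity, m/s (assumed)
    D = 1.0e-5        # characteristic diffusivity, m^2/s (assumed)

    for L in np.logspace(-3, 1, 5):            # length scales from 1 mm to 10 m
        t_buoy = np.sqrt(L / g)                # buoyant time scale ~ sqrt(L)
        t_adv = L / U                          # advective time scale ~ L
        t_diff = L**2 / D                      # diffusive time scale ~ L^2
        print(f"L = {L:8.3f} m   t_buoy = {t_buoy:.2e} s   "
              f"t_adv = {t_adv:.2e} s   t_diff = {t_diff:.2e} s")
    ```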

  17. Use of simulated satellite radiances from a mesoscale numerical model to understand kinematic and dynamic processes

    NASA Technical Reports Server (NTRS)

    Kalb, Michael; Robertson, Franklin; Jedlovec, Gary; Perkey, Donald

    1987-01-01

    Techniques by which mesoscale numerical weather prediction model output and radiative transfer codes are combined to simulate the radiance fields that a given passive temperature/moisture satellite sensor would see if viewing the evolving model atmosphere are introduced. The goals are to diagnose the dynamical atmospheric processes responsible for recurring patterns in observed satellite radiance fields, and to develop techniques to anticipate the ability of satellite sensor systems to depict atmospheric structures and provide information useful for numerical weather prediction (NWP). The concept of linking radiative transfer and dynamical NWP codes is demonstrated with time sequences of simulated radiance imagery in the 24 TIROS vertical sounder channels derived from model integrations for March 6, 1982.

  18. Ancient numerical daemons of conceptual hydrological modeling: 2. Impact of time stepping schemes on model analysis and prediction

    NASA Astrophysics Data System (ADS)

    Kavetski, Dmitri; Clark, Martyn P.

    2010-10-01

    Despite the widespread use of conceptual hydrological models in environmental research and operations, they remain frequently implemented using numerically unreliable methods. This paper considers the impact of the time stepping scheme on model analysis (sensitivity analysis, parameter optimization, and Markov chain Monte Carlo-based uncertainty estimation) and prediction. It builds on the companion paper (Clark and Kavetski, 2010), which focused on numerical accuracy, fidelity, and computational efficiency. Empirical and theoretical analysis of eight distinct time stepping schemes for six different hydrological models in 13 diverse basins demonstrates several critical conclusions. (1) Unreliable time stepping schemes, in particular, fixed-step explicit methods, suffer from troublesome numerical artifacts that severely deform the objective function of the model. These deformations are not rare isolated instances but can arise in any model structure, in any catchment, and under common hydroclimatic conditions. (2) Sensitivity analysis can be severely contaminated by numerical errors, often to the extent that it becomes dominated by the sensitivity of truncation errors rather than the model equations. (3) Robust time stepping schemes generally produce "better behaved" objective functions, free of spurious local optima, and with sufficient numerical continuity to permit parameter optimization using efficient quasi Newton methods. When implemented within a multistart framework, modern Newton-type optimizers are robust even when started far from the optima and provide valuable diagnostic insights not directly available from evolutionary global optimizers. (4) Unreliable time stepping schemes lead to inconsistent and biased inferences of the model parameters and internal states. (5) Even when interactions between hydrological parameters and numerical errors provide "the right result for the wrong reason" and the calibrated model performance appears adequate, unreliable
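
    The kind of artifact described in conclusion (1) can be reproduced with a single linear reservoir, dS/dt = P - kS. The toy comparison below integrates it with a fixed step using explicit and implicit Euler; for k·dt > 2 the explicit scheme is unstable, illustrating how a fixed-step explicit method can corrupt the simulated response. This is only a schematic illustration with assumed parameter values, not one of the paper's six hydrological models.

    ```python
    # Toy linear reservoir dS/dt = P - k*S with a fixed time step.
    k, P, dt, nsteps = 2.5, 1.0, 1.0, 30           # note k*dt = 2.5 > 2: explicit Euler unstable
    S_exp = S_imp = 0.0
    for _ in range(nsteps):
        S_exp = S_exp + dt * (P - k * S_exp)       # fixed-step explicit Euler
        S_imp = (S_imp + dt * P) / (1.0 + dt * k)  # fixed-step implicit Euler
    print(f"explicit: {S_exp:.3e}   implicit: {S_imp:.3f}   exact steady state: {P / k:.3f}")
    ```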

  19. Hybridization Reveals the Evolving Genomic Architecture of Speciation

    PubMed Central

    Kronforst, Marcus R.; Hansen, Matthew E.B.; Crawford, Nicholas G.; Gallant, Jason R.; Zhang, Wei; Kulathinal, Rob J.; Kapan, Durrell D.; Mullen, Sean P.

    2014-01-01

    The rate at which genomes diverge during speciation is unknown, as are the physical dynamics of the process. Here, we compare full genome sequences of 32 butterflies, representing five species from a hybridizing Heliconius butterfly community, to examine genome-wide patterns of introgression and infer how divergence evolves during the speciation process. Our analyses reveal that initial divergence is restricted to a small fraction of the genome, largely clustered around known wing-patterning genes. Over time, divergence evolves rapidly, due primarily to the origin of new divergent regions. Furthermore, divergent genomic regions display signatures of both selection and adaptive introgression, demonstrating the link between microevolutionary processes acting within species and the origin of species across macroevolutionary timescales. Our results provide a uniquely comprehensive portrait of the evolving species boundary due to the role that hybridization plays in reducing the background accumulation of divergence at neutral sites. PMID:24183670

  20. Maintaining evolvability.

    PubMed

    Crow, James F

    2008-12-01

    Although molecular methods, such as QTL mapping, have revealed a number of loci with large effects, it is still likely that the bulk of quantitative variability is due to multiple factors, each with small effect. Typically, these have a large additive component. Conventional wisdom argues that selection, natural or artificial, uses up additive variance and thus depletes its supply. Over time, the variance should be reduced, and at equilibrium be near zero. This is especially expected for fitness and traits highly correlated with it. Yet, populations typically have a great deal of additive variance, and do not seem to run out of genetic variability even after many generations of directional selection. Long-term selection experiments show that populations continue to retain seemingly undiminished additive variance despite large changes in the mean value. I propose that there are several reasons for this. (i) The environment is continually changing so that what was formerly most fit no longer is. (ii) There is an input of genetic variance from mutation, and sometimes from migration. (iii) As intermediate-frequency alleles increase in frequency towards one, producing less variance (as p --> 1, p(1 - p) --> 0), others that were originally near zero become more common and increase the variance. Thus, a roughly constant variance is maintained. (iv) There is always selection for fitness and for characters closely related to it. To the extent that the trait is heritable, later generations inherit a disproportionate number of genes acting additively on the trait, thus increasing genetic variance. For these reasons a selected population retains its ability to evolve. Of course, genes with large effect are also important. Conspicuous examples are the small number of loci that changed teosinte to maize, and major phylogenetic changes in the animal kingdom. The relative importance of these along with duplications, chromosome rearrangements, horizontal transmission and polyploidy
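
    Point (iii) can be made concrete with the textbook expression for additive variance, V_A = Σ 2p(1 - p)a², summed over loci. The toy script below pushes allele frequencies up by directional selection with a trickle of mutation and prints V_A over time, showing how the variance lost at loci nearing fixation can be offset by loci rising from low frequency. It is a deterministic illustration with arbitrary parameter values, not a model taken from the article.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    L, s, mu, gens = 500, 0.02, 1e-4, 400      # loci, selection, mutation rate, generations
    a = rng.exponential(0.1, L)                # additive effect sizes
    p = rng.uniform(0.001, 0.999, L)           # starting allele frequencies

    for g in range(gens + 1):
        if g % 100 == 0:
            VA = np.sum(2.0 * p * (1.0 - p) * a**2)
            print(f"generation {g:3d}: V_A = {VA:.3f}")
        p += s * a * p * (1.0 - p)             # response to directional selection
        p += mu * (1.0 - p) - mu * p           # symmetric mutational input
        p = np.clip(p, 0.0, 1.0)
    ```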

  1. Evolving Systems and Adaptive Key Component Control

    NASA Technical Reports Server (NTRS)

    Frost, Susan A.; Balas, Mark J.

    2009-01-01

    We propose a new framework called Evolving Systems to describe the self-assembly, or autonomous assembly, of actively controlled dynamical subsystems into an Evolved System with a higher purpose. An introduction to Evolving Systems and an exploration of the essential topics of their control and stability properties are provided. This chapter defines a framework for Evolving Systems, develops theory and control solutions for fundamental characteristics of Evolving Systems, and provides illustrative examples of Evolving Systems and their control with adaptive key component controllers.

  2. Transient deformation from daily GPS displacement time series: postseismic deformation, ETS and evolving strain rates

    NASA Astrophysics Data System (ADS)

    Bock, Y.; Fang, P.; Moore, A. W.; Kedar, S.; Liu, Z.; Owen, S. E.; Glasscoe, M. T.

    2016-12-01

    underlying physical mechanisms. (3) We present evolving strain dilatation and shear rates based on the SESES velocities for regional subnetworks as a metric for assigning earthquake probabilities and detection of possible time-dependent deformation related to underlying physical processes.

  3. Numerical simulation for solution of space-time fractional telegraphs equations with local fractional derivatives via HAFSTM

    NASA Astrophysics Data System (ADS)

    Pandey, Rishi Kumar; Mishra, Hradyesh Kumar

    2017-11-01

    In this paper, a semi-analytic numerical technique for the solution of the time-space fractional telegraph equation is applied. The technique is based on the coupling of the homotopy analysis method and the Sumudu transform. It shows clear advantages over mesh-based methods such as the finite difference method, and also over polynomial methods such as the perturbation and Adomian decomposition methods. It readily transforms the complex fractional-order derivatives into the simple time domain and allows the results to be interpreted there in the same sense.

  4. Numerical time-domain electromagnetics based on finite-difference and convolution

    NASA Astrophysics Data System (ADS)

    Lin, Yuanqu

    Time-domain methods possess a number of advantages over their frequency-domain counterparts for the solution of wideband, nonlinear, and time-varying electromagnetic scattering and radiation phenomena. Time domain integral equation (TDIE)-based methods, which incorporate the beneficial properties of the integral equation method, are thus well suited for solving broadband scattering problems for homogeneous scatterers. Widespread adoption of TDIE solvers has been retarded relative to other techniques by their inefficiency, inaccuracy and instability. Moreover, two-dimensional (2D) problems are especially problematic, because 2D Green's functions have infinite temporal support, exacerbating these difficulties. This thesis proposes a finite difference delay modeling (FDDM) scheme for the solution of the integral equations of 2D transient electromagnetic scattering problems. The method discretizes the integral equations temporally using first- and second-order finite differences to map Laplace-domain equations into the Z domain before transforming to the discrete time domain. The resulting procedure is unconditionally stable because of the nature of the Laplace- to Z-domain mapping. The first FDDM method developed in this thesis uses second-order Lagrange basis functions with Galerkin's method for spatial discretization. The second application of the FDDM method discretizes the space using a locally-corrected Nystrom method, which accelerates the precomputation phase and achieves high order accuracy. The Fast Fourier Transform (FFT) is applied to accelerate the marching-on-time process in both methods. While FDDM methods demonstrate impressive accuracy and stability in solving wideband scattering problems for homogeneous scatterers, they still have limitations in analyzing interactions between several inhomogeneous scatterers. Therefore, this thesis devises a multi-region finite-difference time-domain (MR-FDTD) scheme based on domain-optimal Green's functions for solving

  5. Numerical simulation of the early-time high altitude electromagnetic pulse

    NASA Astrophysics Data System (ADS)

    Meng, Cui; Chen, Yu-Sheng; Liu, Shun-Kun; Xie, Qin-Chuan; Chen, Xiang-Yue; Gong, Jian-Cheng

    2003-12-01

    In this paper, the finite difference method is used to develop the Fortran software MCHII. The physical process in which the electromagnetic signal is generated by the interaction of nuclear-explosion-induced Compton currents with the geomagnetic field is numerically simulated. The electromagnetic pulse waveforms below the burst point are investigated. The effects of the height of burst, yield and the time-dependence of gamma-rays are calculated by using the MCHII code. The results agree well with those obtained by using the code CHAP.

  6. Robust numerical simulation of porosity evolution in chemical vapor infiltration III: three space dimension

    NASA Astrophysics Data System (ADS)

    Jin, Shi; Wang, Xuelei

    2003-04-01

    The chemical vapor infiltration (CVI) process is an important technology for fabricating ceramic matrix composites (CMCs). In this paper, a three-dimensional numerical model is presented to describe pore microstructure evolution during the CVI process. We extend the two-dimensional model proposed in [S. Jin, X.L. Wang, T.L. Starr, J. Mater. Res. 14 (1999) 3829; S. Jin, X.L. Wang, T.L. Starr, X.F. Chen, J. Comp. Phys. 162 (2000) 467], where the fiber surface is modeled as an evolving interface, to three space dimensions. The 3D method retains all the virtues of the 2D model: robust numerical capturing of topological changes of the interface, such as merging, and fast detection of inaccessible pores. For models in the kinetic limit, where the moving speed of the interface is constant, numerical examples are presented to show that this three-dimensional model effectively tracks the change of porosity and the close-off time, location and shape of all pores.

  7. Functional connectivity dynamically evolves on multiple time-scales over a static structural connectome: Models and mechanisms.

    PubMed

    Cabral, Joana; Kringelbach, Morten L; Deco, Gustavo

    2017-10-15

    Over the last decade, we have observed a revolution in brain structural and functional connectomics. On one hand, we have an ever-more detailed characterization of the brain's white matter structural connectome. On the other, we have a repertoire of consistent functional networks that form and dissipate over time during rest. Despite the evident spatial similarities between structural and functional connectivity, understanding how different time-evolving functional networks spontaneously emerge from a single structural network requires analyzing the problem from the perspective of complex network dynamics and dynamical systems theory. In that direction, bottom-up computational models are useful tools to test theoretical scenarios and depict the mechanisms at the genesis of resting-state activity. Here, we provide an overview of the different mechanistic scenarios proposed over the last decade via computational models. Importantly, we highlight the need to incorporate additional model constraints considering the properties observed at finer temporal scales with MEG and the dynamical properties of FC in order to refresh the list of candidate scenarios. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Natural selection promotes antigenic evolvability.

    PubMed

    Graves, Christopher J; Ros, Vera I D; Stevenson, Brian; Sniegowski, Paul D; Brisson, Dustin

    2013-01-01

    The hypothesis that evolvability - the capacity to evolve by natural selection - is itself the object of natural selection is highly intriguing but remains controversial due in large part to a paucity of direct experimental evidence. The antigenic variation mechanisms of microbial pathogens provide an experimentally tractable system to test whether natural selection has favored mechanisms that increase evolvability. Many antigenic variation systems consist of paralogous unexpressed 'cassettes' that recombine into an expression site to rapidly alter the expressed protein. Importantly, the magnitude of antigenic change is a function of the genetic diversity among the unexpressed cassettes. Thus, evidence that selection favors among-cassette diversity is direct evidence that natural selection promotes antigenic evolvability. We used the Lyme disease bacterium, Borrelia burgdorferi, as a model to test the prediction that natural selection favors amino acid diversity among unexpressed vls cassettes and thereby promotes evolvability in a primary surface antigen, VlsE. The hypothesis that diversity among vls cassettes is favored by natural selection was supported in each B. burgdorferi strain analyzed using both classical (dN/dS ratios) and Bayesian population genetic analyses of genetic sequence data. This hypothesis was also supported by the conservation of highly mutable tandem-repeat structures across B. burgdorferi strains despite a near complete absence of sequence conservation. Diversification among vls cassettes due to natural selection and mutable repeat structures promotes long-term antigenic evolvability of VlsE. These findings provide a direct demonstration that molecular mechanisms that enhance evolvability of surface antigens are an evolutionary adaptation. The molecular evolutionary processes identified here can serve as a model for the evolution of antigenic evolvability in many pathogens which utilize similar strategies to establish chronic infections.

  10. Real-time 3-D space numerical shake prediction for earthquake early warning

    NASA Astrophysics Data System (ADS)

    Wang, Tianyun; Jin, Xing; Huang, Yandan; Wei, Yongxiang

    2017-12-01

    In earthquake early warning systems, real-time shake prediction through wave propagation simulation is a promising approach. Compared with traditional methods, it does not suffer from inaccurate estimation of source parameters. For computational efficiency, waves are assumed to propagate on the 2-D surface of the earth in these methods. In fact, since seismic waves propagate in the 3-D sphere of the earth, 2-D modeling of wave propagation results in inaccurate wave estimation. In this paper, we propose a 3-D space numerical shake prediction method, which simulates wave propagation in 3-D space using radiative transfer theory and incorporates a data assimilation technique to estimate the distribution of wave energy. The 2011 Tohoku earthquake is studied as an example to show the validity of the proposed model. The 2-D and 3-D space models are compared in this article, and the prediction results show that numerical shake prediction based on the 3-D space model can estimate real-time ground motion precisely, and that overprediction is alleviated when the 3-D space model is used.

  11. Exact numerical calculation of fixation probability and time on graphs.

    PubMed

    Hindersin, Laura; Möller, Marius; Traulsen, Arne; Bauer, Benedikt

    2016-12-01

    The Moran process on graphs is a popular model to study the dynamics of evolution in a spatially structured population. Exact analytical solutions for the fixation probability and time of a new mutant have been found for only a few classes of graphs so far. Simulations are time-expensive and many realizations are necessary, as the variance of the fixation times is high. We present an algorithm that numerically computes these quantities for arbitrary small graphs by an approach based on the transition matrix. The advantage over simulations is that the calculation has to be executed only once. Building the transition matrix is automated by our algorithm. This enables a fast and interactive study of different graph structures and their effect on fixation probability and time. We provide a fast implementation in C with this note (Hindersin et al., 2016). Our code is very flexible, as it can handle two different update mechanisms (Birth-death or death-Birth), as well as arbitrary directed or undirected graphs. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
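
    For the special case of a complete (well-mixed) graph, the state space collapses to the number of mutants and the transition-matrix approach reduces to a small linear solve. The sketch below computes the fixation probability of a single mutant that way and checks it against the classical Moran-process formula; it is only an illustration of the linear-algebra idea, not the authors' C implementation, which handles arbitrary graph structures and both update rules.

    ```python
    import numpy as np

    N, r = 20, 1.1                       # population size and mutant fitness (assumed values)
    P = np.zeros((N + 1, N + 1))         # states = number of mutants, 0..N (Birth-death updating)
    for i in range(1, N):
        up = (r * i / (r * i + N - i)) * (N - i) / N     # mutant reproduces, replaces a resident
        down = ((N - i) / (r * i + N - i)) * i / N       # resident reproduces, replaces a mutant
        P[i, i + 1], P[i, i - 1] = up, down
        P[i, i] = 1.0 - up - down
    P[0, 0] = P[N, N] = 1.0              # absorbing states: extinction and fixation

    # Fixation probabilities phi of the transient states solve (I - Q) phi = b,
    # where Q is the transient block of P and b holds one-step jumps into state N.
    Q, b = P[1:N, 1:N], P[1:N, N]
    phi = np.linalg.solve(np.eye(N - 1) - Q, b)
    print(f"numerical fixation probability of one mutant: {phi[0]:.4f}")
    print(f"closed-form Moran result:                     {(1 - 1/r) / (1 - r**-N):.4f}")
    ```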

  12. A fully covariant mean-field dynamo closure for numerical 3 + 1 resistive GRMHD

    NASA Astrophysics Data System (ADS)

    Bucciantini, N.; Del Zanna, L.

    2013-01-01

    The powerful high-energy phenomena typically encountered in astrophysics invariably involve physical engines, like neutron stars and black hole accretion discs, characterized by a combination of highly magnetized plasmas, strong gravitational fields and relativistic motions. In recent years, numerical schemes for general relativistic magnetohydrodynamics (GRMHD) have been developed to model the multidimensional dynamics of such systems, including the possibility of evolving space-time. Such schemes have also been extended beyond the ideal limit, including the effects of resistivity, in an attempt to model dissipative physical processes acting on small scales (subgrid effects) over the global dynamics. Along the same lines, the magnetic field could be amplified by the presence of turbulent dynamo processes, as often invoked to explain the high values of magnetization required in accretion discs and neutron stars. Here we present, for the first time, a further extension to include the possibility of a mean-field dynamo action within the framework of numerical 3 + 1 (resistive) GRMHD. A fully covariant dynamo closure is proposed, in analogy with the classical theory, assuming a simple α-effect in the comoving frame. Its implementation into a finite-difference scheme for GRMHD in dynamical space-times (the x-echo code by Bucciantini & Del Zanna) is described, and a set of numerical tests is presented and compared with analytical solutions wherever possible.

  13. Finite-time mixed outer synchronization of complex networks with coupling time-varying delay.

    PubMed

    He, Ping; Ma, Shu-Hua; Fan, Tao

    2012-12-01

    This article is concerned with the problem of finite-time mixed outer synchronization (FMOS) of complex networks with coupling time-varying delay. FMOS is a recently developed generalized synchronization concept in which different state variables of the corresponding nodes can evolve into finite-time complete synchronization, finite-time anti-synchronization, and even finite-time amplitude death simultaneously for an appropriate choice of the controller gain matrix. Some novel stability criteria for the synchronization between drive and response complex networks with coupling time-varying delay are derived using Lyapunov stability theory and linear matrix inequalities, and a simple linear state-feedback synchronization controller is designed as a result. Numerical simulations for two coupled networks of modified Chua's circuits are then provided to demonstrate the effectiveness and feasibility of the proposed control and synchronization schemes, and the results are compared with previous schemes for accuracy.

  14. Numerical study on response time of a parallel plate capacitive polyimide humidity sensor based on microhole upper electrode

    NASA Astrophysics Data System (ADS)

    Zhou, Wenhe; He, Xuan; Wu, Jianyun; Wang, Liangbi; Wang, Liangcheng

    2017-07-01

    The parallel plate capacitive humidity sensor based on a grid upper electrode is considered a promising design in fields that require a humidity sensor with better dynamic characteristics. To strengthen the structure and balance the electric charge of the grid upper electrode, a strip is needed. However, it is this strip that keeps the dynamic characteristics of the sensor from being further improved. Numerical methods are time- and cost-saving, but numerical studies of the sensor's response time remain piecemeal, and the models presented in those studies did not consider the effect of the polymer film's porosity on the dynamic characteristics. To overcome the defect of the grid upper electrode, this paper first proposes a new upper-electrode structure and then presents and validates a model that accounts for the porosity effects of the polymer film on the dynamic characteristics. Finally, with the help of the software FLUENT, parameter effects on the response time of the humidity sensor based on the microhole upper electrode are studied numerically. The numerical results show that the response time of the microhole upper electrode sensor is 86% better than that of the grid upper electrode sensor, and that the response time can be further improved by reducing the hole spacing, increasing the aperture, reducing the film thickness, and moderately enlarging the porosity of the film.

  15. Scale-Limited Lagrange Stability and Finite-Time Synchronization for Memristive Recurrent Neural Networks on Time Scales.

    PubMed

    Xiao, Qiang; Zeng, Zhigang

    2017-10-01

    Existing results on Lagrange stability and finite-time synchronization for memristive recurrent neural networks (MRNNs) are scale-free in time, and some restrictions arise naturally. In this paper, two novel scale-limited comparison principles are established by means of inequality techniques and the induction principle on time scales. Results concerning Lagrange stability and global finite-time synchronization of MRNNs on time scales are then obtained. Scale-limited Lagrange stability criteria are derived in detail via nonsmooth analysis and the theory of time scales. Moreover, novel criteria for achieving global finite-time synchronization are acquired. In addition, the derived method can also be used to study global finite-time stabilization. The proposed results extend or improve existing ones in the literature. Two numerical examples are chosen to show the effectiveness of the obtained results.

  16. A numerical study of the thermal stability of low-lying coronal loops

    NASA Technical Reports Server (NTRS)

    Klimchuk, J. A.; Antiochos, S. K.; Mariska, J. T.

    1986-01-01

    The nonlinear evolution of loops that are subjected to a variety of small but finite perturbations was studied. Only low-lying loops are considered. The analysis was performed numerically using a one-dimensional hydrodynamical model developed at the Naval Research Laboratory. The computer codes solve the time-dependent equations for mass, momentum, and energy transport. The primary interest is in active region filaments; hence a geometry appropriate to those structures was considered. The static solutions were subjected to a moderate-sized perturbation and allowed to evolve. The results suggest that both hot and cool loops of the geometry considered are thermally stable against amplitude perturbations of all kinds.

  17. Evaluating time-lapse ERT for monitoring DNAPL remediation via numerical simulation

    NASA Astrophysics Data System (ADS)

    Power, C.; Karaoulis, M.; Gerhard, J.; Tsourlos, P.; Giannopoulos, A.

    2012-12-01

    Dense non-aqueous phase liquids (DNAPLs) remain a challenging geoenvironmental problem in the near subsurface. Numerous thermal, chemical, and biological treatment methods are being applied at sites but without a non-destructive, rapid technique to map the evolution of DNAPL mass in space and time, the degree of remedial success is difficult to quantify. Electrical resistivity tomography (ERT) has long been presented as highly promising in this context but has not yet become a practitioner's tool due to challenges in interpreting the survey results at real sites where the initial condition (DNAPL mass, DNAPL distribution, subsurface heterogeneity) is typically unknown. Recently, a new numerical model was presented that couples DNAPL and ERT simulation at the field scale, providing a tool for optimizing ERT application and interpretation at DNAPL sites (Power et al., 2011, Fall AGU, H31D-1191). The objective of this study is to employ this tool to evaluate the effectiveness of time-lapse ERT to monitor DNAPL source zone remediation, taking advantage of new inversion methodologies that exploit the differences in the target over time. Several three-dimensional releases of chlorinated solvent DNAPLs into heterogeneous clayey sand at the field scale were generated, varying in the depth and complexity of the source zone (target). Over time, dissolution of the DNAPL in groundwater was simulated with simultaneous mapping via periodic ERT surveys. Both surface and borehole ERT surveys were conducted for comparison purposes. The latest four-dimensional ERT inversion algorithms were employed to generate time-lapse isosurfaces of the DNAPL source zone for all cases. This methodology provided a qualitative assessment of the ability of ERT to track DNAPL mass removal for complex source zones in realistically heterogeneous environments. In addition, it provided a quantitative comparison between the actual DNAPL mass removed and that interpreted by ERT as a function of depth below

  18. Investigation of flow-induced numerical instability in a mixed semi-implicit, implicit leapfrog time discretization

    NASA Astrophysics Data System (ADS)

    King, Jacob; Kruger, Scott

    2017-10-01

    Flow can impact the stability and nonlinear evolution of a range of instabilities (e.g. RWMs, NTMs, sawteeth, locked modes, PBMs, and high-k turbulence), and thus robust numerical algorithms for simulations with flow are essential. Recent simulations of DIII-D QH-mode [King et al., Phys. Plasmas and Nucl. Fus. 2017] with flow have been restricted to smaller time-step sizes than corresponding computations without flow. These computations use a mixed semi-implicit, implicit leapfrog time discretization as implemented in the NIMROD code [Sovinec et al., JCP 2004]. While prior analysis has shown that this algorithm is unconditionally stable with respect to the effect of large flows on the MHD waves in slab geometry [Sovinec et al., JCP 2010], our present Von Neumann stability analysis shows that a flow-induced numerical instability may arise when ad-hoc cylindrical curvature is included. Computations with the NIMROD code in cylindrical geometry with rigid rotation and without free-energy drive from current or pressure gradients qualitatively confirm this analysis. We explore potential methods to circumvent this flow-induced numerical instability, such as using a semi-Lagrangian formulation instead of time-centered implicit advection and/or modifications to the semi-implicit operator. This work is supported by the DOE Office of Science (Office of Fusion Energy Sciences).

  19. A time step criterion for the stable numerical simulation of hydraulic fracturing

    NASA Astrophysics Data System (ADS)

    Juan-Lien Ramirez, Alina; Löhnert, Stefan; Neuweiler, Insa

    2017-04-01

    The process of propagating or widening cracks in rock formations by means of fluid flow, known as hydraulic fracturing, has been gaining attention in the last couple of decades, and there is growing interest in its numerical simulation for making predictions. Due to the complexity of the processes taking place, e.g. solid deformation, fluid flow in an open channel, fluid flow in a porous medium and crack propagation, this is a challenging task. Hydraulic fracturing has been numerically simulated for some years now [1], and new methods have been developed in recent years to take more of its processes into account (increasing accuracy) while modeling efficiently (lower computational effort). An example is the use of the Extended Finite Element Method (XFEM), whose application originated within the framework of solid mechanics but which is now seen as an effective method for the simulation of discontinuities with no need for re-meshing [2]. While more focus has been put on the correct coupling of the processes mentioned above, less attention has been paid to the stability of the model. When using a quasi-static approach for the simulation of hydraulic fracturing, choosing an adequate time step is not trivial. This is particularly true if the equations are solved in a staggered way. The difficulty lies in the inconsistency between the static behavior of the solid and the dynamic behavior of the fluid. It has been shown that too-small time steps may lead to instabilities early in the simulation [3]. While the solid reaches a stationary state instantly, the fluid is not able to achieve equilibrium with its new surroundings immediately. A time step criterion has therefore been developed to quantify the instability of the model with respect to the time step. The presented results were created with a 2D poroelastic model, using the XFEM for both the solid and the fluid phases. An embedded crack propagates following the energy release rate criterion when the fluid pressure

  20. The Evolvement of Numeracy and Mathematical Literacy Curricula and the Construction of Hierarchies of Numerate or Mathematically Literate Subjects

    ERIC Educational Resources Information Center

    Jablonka, Eva

    2015-01-01

    This contribution briefly sketches the evolvement of numeracy or mathematical literacy as models for mathematics curricula, which will be described as driven by a weakening of the insulation between discourses, that is, as a process of "declassification". The question then arises as to whether and how coherence of new forms of initially…

  1. Large-eddy simulation of a spatially-evolving turbulent mixing layer

    NASA Astrophysics Data System (ADS)

    Capuano, Francesco; Catalano, Pietro; Mastellone, Andrea

    2015-11-01

    Large-eddy simulations of a spatially-evolving turbulent mixing layer have been performed. The flow conditions correspond to those of a documented experimental campaign (Delville, Appl. Sci. Res. 1994). The flow evolves downstream of a splitter plate separating two fully turbulent boundary layers, with Reθ = 2900 on the high-speed side and Reθ = 1200 on the low-speed side. The computational domain starts at the trailing edge of the splitter plate, where experimental mean velocity profiles are prescribed; white-noise perturbations are superimposed to mimic turbulent fluctuations. The fully compressible Navier-Stokes equations are solved by means of a finite-volume method implemented into the in-house code SPARK-LES. The results are mainly checked in terms of the streamwise evolution of the vorticity thickness and averaged velocity profiles. The combined effects of inflow perturbations, numerical accuracy and subgrid-scale model are discussed. It is found that excessive levels of dissipation may damp inlet fluctuations and delay the virtual origin of the turbulent mixing layer. On the other hand, non-dissipative, high-resolution computations provide results that are in much better agreement with experimental data.
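
    The vorticity thickness used above as the growth measure is δ_ω = (U1 - U2) / max|d⟨u⟩/dy|, evaluated from the mean streamwise velocity profile at each station. The snippet below computes it for a synthetic tanh profile standing in for LES data; the stream velocities and profile width are illustrative assumptions, not Delville's measured values.

    ```python
    import numpy as np

    U1, U2 = 40.0, 22.0                        # high- and low-speed stream velocities, m/s (assumed)
    y = np.linspace(-0.1, 0.1, 401)            # cross-stream coordinate, m
    u_mean = 0.5 * (U1 + U2) + 0.5 * (U1 - U2) * np.tanh(y / 0.01)   # synthetic mean profile

    dudy = np.gradient(u_mean, y)              # mean shear d<u>/dy
    delta_omega = (U1 - U2) / np.max(np.abs(dudy))
    print(f"vorticity thickness = {1000 * delta_omega:.1f} mm")
    ```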

  2. An investigation of several numerical procedures for time-asymptotic compressible Navier-Stokes solutions

    NASA Technical Reports Server (NTRS)

    Rudy, D. H.; Morris, D. J.; Blanchard, D. K.; Cooke, C. H.; Rubin, S. G.

    1975-01-01

    The status of an investigation of four numerical techniques for the time-dependent compressible Navier-Stokes equations is presented. Results for free shear layer calculations in the Reynolds number range from 1000 to 81000 indicate that a sequential alternating-direction implicit (ADI) finite-difference procedure requires longer computing times to reach steady state than a low-storage hopscotch finite-difference procedure. A finite-element method with cubic approximating functions was found to require excessive computer storage and computation times. A fourth method, an alternating-direction cubic spline technique which is still being tested, is also described.

  3. Delineating slowly and rapidly evolving fractions of the Drosophila genome.

    PubMed

    Keith, Jonathan M; Adams, Peter; Stephen, Stuart; Mattick, John S

    2008-05-01

    Evolutionary conservation is an important indicator of function and a major component of bioinformatic methods to identify non-protein-coding genes. We present a new Bayesian method for segmenting pairwise alignments of eukaryotic genomes while simultaneously classifying segments into slowly and rapidly evolving fractions. We also describe an information criterion similar to the Akaike Information Criterion (AIC) for determining the number of classes. Working with pairwise alignments enables detection of differences in conservation patterns among closely related species. We analyzed three whole-genome and three partial-genome pairwise alignments among eight Drosophila species. Three distinct classes of conservation level were detected. Sequences comprising the most slowly evolving component were consistent across a range of species pairs, and constituted approximately 62-66% of the D. melanogaster genome. Almost all (>90%) of the aligned protein-coding sequence is in this fraction, suggesting much of it (comprising the majority of the Drosophila genome, including approximately 56% of non-protein-coding sequences) is functional. The size and content of the most rapidly evolving component was species dependent, and varied from 1.6% to 4.8%. This fraction is also enriched for protein-coding sequence (while containing significant amounts of non-protein-coding sequence), suggesting it is under positive selection. We also classified segments according to conservation and GC content simultaneously. This analysis identified numerous sub-classes of those identified on the basis of conservation alone, but was nevertheless consistent with that classification. Software, data, and results available at www.maths.qut.edu.au/-keithj/. Genomic segments comprising the conservation classes available in BED format.

  4. Evolving the machine

    NASA Astrophysics Data System (ADS)

    Bailey, Brent Andrew

    Structural designs by humans and nature are wholly distinct in their approaches. Engineers model components to verify that all mechanical requirements are satisfied before assembling a product. Nature, on the other hand, creates holistically: each part evolves in conjunction with the others. The present work is a synthesis of these two design approaches; namely, spatial models that evolve. Topology optimization determines the amount and distribution of material within a model, which corresponds to the optimal connectedness and shape of a structure. Smooth designs are obtained by using higher-order B-splines in the definition of the material distribution. Higher fidelity is achieved using adaptive meshing techniques at the interface between solid and void. Nature is an exemplary basis for mass minimization, as processing material requires both resources and energy. Topology optimization techniques were originally formulated as the maximization of the structural stiffness subject to a volume constraint. This research inverts the optimization problem: the mass is minimized subject to deflection constraints. Active materials allow a structure to interact with its environment in a manner similar to muscles and sensory organs in animals. By specifying the material properties and design requirements, adaptive structures with integrated sensors and actuators can evolve.

  5. Dynamical Approach Study of Spurious Steady-State Numerical Solutions of Nonlinear Differential Equations. 2; Global Asymptotic Behavior of Time Discretizations; 2. Global Asymptotic Behavior of time Discretizations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.

    1995-01-01

    The global asymptotic nonlinear behavior of 11 explicit and implicit time discretizations for four 2 x 2 systems of first-order autonomous nonlinear ordinary differential equations (ODEs) is analyzed. The objectives are to gain a basic understanding of the difference in the dynamics of numerics between scalar equations and systems of nonlinear autonomous ODEs, and to set a baseline global asymptotic solution behavior of these schemes for practical computations in computational fluid dynamics. We show how 'numerical' basins of attraction can complement the bifurcation diagrams in gaining more detailed global asymptotic behavior of time discretizations for nonlinear differential equations (DEs). We show how, in the presence of spurious asymptotes, the basins of the true stable steady states can be segmented by the basins of the spurious stable and unstable asymptotes. One major consequence of this phenomenon which is not commonly known is that this spurious behavior can result in a dramatic distortion and, in most cases, a dramatic shrinkage and segmentation of the basin of attraction of the true solution for finite time steps. Such distortion, shrinkage and segmentation of the numerical basins of attraction will occur regardless of the stability of the spurious asymptotes, and will occur even for unconditionally stable implicit linear multistep methods. In other words, for the same (common) steady-state solution the associated basin of attraction of the DE might be very different from its discretized counterparts, and the numerical basin of attraction can differ from numerical method to numerical method. The results can be used as an explanation for possible causes of error, slow convergence and nonconvergence of steady-state numerical solutions when using the time-dependent approach for nonlinear hyperbolic or parabolic PDEs.
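
    A scalar analogue (the paper itself treats 2 x 2 systems) makes the notion of spurious asymptotes concrete: explicit Euler applied to u' = u(1 - u) has the true steady state u = 1, but beyond the linearized stability limit dt = 2 the map settles onto a spurious period-2 orbit, and initial conditions that would reach u = 1 for the ODE can instead diverge. The step sizes and initial conditions below are illustrative.

    ```python
    import numpy as np

    def euler_orbit(u0, dt, nsteps=2000):
        """Iterate explicit Euler for u' = u(1 - u); return the final iterate, or None if it diverges."""
        u = u0
        for _ in range(nsteps):
            u = u + dt * u * (1.0 - u)
            if abs(u) > 1e6:
                return None
        return u

    for dt in (0.5, 2.3):                      # below and above the stability limit dt = 2
        finals = [euler_orbit(u0, dt) for u0 in np.linspace(0.1, 1.9, 7)]
        labels = ["diverged" if f is None else f"{f:.3f}" for f in finals]
        print(f"dt = {dt}: asymptotic iterates = " + ", ".join(labels))
    ```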

  6. Vector Potential Generation for Numerical Relativity Simulations

    NASA Astrophysics Data System (ADS)

    Silberman, Zachary; Faber, Joshua; Adams, Thomas; Etienne, Zachariah; Ruchlin, Ian

    2017-01-01

    Many different numerical codes are employed in studies of highly relativistic magnetized accretion flows around black holes. Based on the formalisms each uses, some codes evolve the magnetic field vector B, while others evolve the magnetic vector potential A, the two being related by the curl: B=curl(A). Here, we discuss how to generate vector potentials corresponding to specified magnetic fields on staggered grids, a surprisingly difficult task on finite cubic domains. The code we have developed solves this problem in two ways: a brute-force method, whose scaling is nearly linear in the number of grid cells, and a direct linear algebra approach. We discuss the success both algorithms have in generating smooth vector potential configurations and how both may be extended to more complicated cases involving multiple mesh-refinement levels. NSF ACI-1550436
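
    In two dimensions the task admits a simple brute-force construction: for B = Bz(x, y) z_hat, the gauge choice A_x = A_z = 0 with A_y(x, y) equal to the line integral of Bz along x satisfies (curl A)_z = Bz. The sketch below carries this out by a trapezoidal line integral on a collocated grid and checks the curl; it is only in the spirit of the "brute-force" method mentioned above and is not the staggered-grid, mesh-refinement-capable code described in the abstract.

    ```python
    import numpy as np

    nx, ny = 128, 128
    x = np.linspace(0.0, 1.0, nx)
    y = np.linspace(0.0, 1.0, ny)
    dx, dy = x[1] - x[0], y[1] - y[0]
    X, Y = np.meshgrid(x, y, indexing="ij")

    Bz = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)            # test field
    Ay = np.zeros_like(Bz)
    Ay[1:] = np.cumsum(0.5 * (Bz[1:] + Bz[:-1]), axis=0) * dx     # A_y = integral of Bz along x
    Ax = np.zeros_like(Ay)

    # consistency check: (curl A)_z = dAy/dx - dAx/dy should reproduce Bz
    Bz_rec = np.gradient(Ay, dx, axis=0) - np.gradient(Ax, dy, axis=1)
    err = np.max(np.abs(Bz_rec[1:-1] - Bz[1:-1]))
    print(f"max interior reconstruction error: {err:.2e}")
    ```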

  7. A Quantitative Approach to Assessing System Evolvability

    NASA Technical Reports Server (NTRS)

    Christian, John A., III

    2004-01-01

    When selecting a system from multiple candidates, the customer seeks the one that best meets his or her needs. Recently the desire for evolvable systems has become more important and engineers are striving to develop systems that accommodate this need. In response to this search for evolvability, we present a historical perspective on evolvability, propose a refined definition of evolvability, and develop a quantitative method for measuring this property. We address this quantitative methodology from both a theoretical and practical perspective. This quantitative model is then applied to the problem of evolving a lunar mission to a Mars mission as a case study.

  8. Numerical Simulations of Dynamical Mass Transfer in Binaries

    NASA Astrophysics Data System (ADS)

    Motl, P. M.; Frank, J.; Tohline, J. E.

    1999-05-01

    We will present results from our ongoing research project to simulate dynamically unstable mass transfer in near contact binaries with mass ratios different from one. We employ a fully three-dimensional self-consistent field technique to generate synchronously rotating polytropic binaries. With our self-consistent field code we can create equilibrium binaries where one component is, by radius, within about 99% of filling its Roche lobe, for example. These initial configurations are evolved using a three-dimensional, Eulerian hydrodynamics code. We make no assumptions about the symmetry of the subsequent flow, and the entire binary system is evolved self-consistently under the influence of its own gravitational potential. For a given mass ratio and polytropic index for the binary components, mass transfer via Roche lobe overflow can be predicted to be stable or unstable through simple theoretical arguments. The validity of the approximations made in the stability calculations is tested against our numerical simulations. We acknowledge support from the U.S. National Science Foundation through grants AST-9720771, AST-9528424, and DGE-9355007. This research has been supported, in part, by grants of high-performance computing time on NPACI facilities at the San Diego Supercomputer Center, the Texas Advanced Computing Center and through the PET program of the NAVOCEANO DoD Major Shared Resource Center in Stennis, MS.

  9. Adaptive inferential sensors based on evolving fuzzy models.

    PubMed

    Angelov, Plamen; Kordon, Arthur

    2010-04-01

    A new technique for the design and use of inferential sensors in the process industry is proposed in this paper, which is based on the recently introduced concept of evolving fuzzy models (EFMs). They address the challenge that the modern process industry faces today, namely, to develop adaptive and self-calibrating online inferential sensors that reduce maintenance costs while keeping high precision and interpretability/transparency. The proposed new methodology makes it possible for inferential sensors to recalibrate automatically, which significantly reduces the life-cycle effort for their maintenance. This is achieved by the adaptive and flexible open-structure EFM used. The novelty of this paper lies in the following: (1) the overall concept of inferential sensors with evolving and self-developing structure from the data streams; (2) the new methodology for online automatic selection of input variables that are most relevant for the prediction; (3) the technique to detect automatically a shift in the data pattern using the age of the clusters (and fuzzy rules); (4) the online standardization technique used by the learning procedure of the evolving model; and (5) the application of this innovative approach to several real-life industrial processes from the chemical industry (evolving inferential sensors, namely, eSensors, were used for predicting the chemical properties of different products in The Dow Chemical Company, Freeport, TX). It should be noted, however, that the methodology and conclusions of this paper are valid for the broader area of chemical and process industries in general. The results demonstrate that well-interpretable inferential sensors with a simple structure can be designed automatically from the data stream in real time to predict various process variables of interest. The proposed approach can be used as a basis for the development of a new generation of adaptive and evolving inferential sensors that can address the
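
    The following toy sketch illustrates one ingredient of such evolving structures, namely spawning clusters from a data stream and tracking their age; it is only a schematic stand-in, not the published eSensors/EFM algorithm, and all names and parameters are invented for illustration.

```python
import numpy as np

class EvolvingClusters:
    """Toy evolving-clustering sketch: spawn a cluster when a sample is far
    from all existing focal points, otherwise update the nearest one, and
    track each cluster's 'age' (samples since it was last supported)."""

    def __init__(self, radius=1.0):
        self.radius = radius
        self.centers, self.counts, self.last_seen = [], [], []
        self.t = 0

    def update(self, x):
        self.t += 1
        x = np.asarray(x, dtype=float)
        if self.centers:
            d = [np.linalg.norm(x - c) for c in self.centers]
            k = int(np.argmin(d))
            if d[k] < self.radius:                      # support an existing cluster
                self.counts[k] += 1
                self.centers[k] += (x - self.centers[k]) / self.counts[k]
                self.last_seen[k] = self.t
                return k
        self.centers.append(x.copy())                   # spawn a new cluster/rule
        self.counts.append(1)
        self.last_seen.append(self.t)
        return len(self.centers) - 1

    def ages(self):
        # stale (old) clusters are one possible signal of a shift in the data pattern
        return [self.t - ls for ls in self.last_seen]
```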

  10. Numerical simulation of pseudoelastic shape memory alloys using the large time increment method

    NASA Astrophysics Data System (ADS)

    Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad

    2017-04-01

    The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process presents little or no hardening. In these situations, a slight stress variation in a load increment can result in large variations of strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solves the global equilibrium and local consistency conditions sequentially for the entire loading path. The achieved solution must satisfy the conditions of static and kinematic admissibility and consistency simultaneously after several iterations. A 3D numerical implementation is accomplished using an implicit algorithm and is then used for finite element simulation using the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted to those obtained using step-by-step incremental integration.

  11. Use of the parameterised finite element method to robustly and efficiently evolve the edge of a moving cell.

    PubMed

    Neilson, Matthew P; Mackenzie, John A; Webb, Steven D; Insall, Robert H

    2010-11-01

    In this paper we present a computational tool that enables the simulation of mathematical models of cell migration and chemotaxis on an evolving cell membrane. Recent models require the numerical solution of systems of reaction-diffusion equations on the evolving cell membrane and then the solution state is used to drive the evolution of the cell edge. Previous work involved moving the cell edge using a level set method (LSM). However, the LSM is computationally very expensive, which severely limits the practical usefulness of the algorithm. To address this issue, we have employed the parameterised finite element method (PFEM) as an alternative method for evolving a cell boundary. We show that the PFEM is far more efficient and robust than the LSM. We therefore suggest that the PFEM potentially has an essential role to play in computational modelling efforts towards the understanding of many of the complex issues related to chemotaxis.

  12. Comprehensive Numerical Analysis of Finite Difference Time Domain Methods for Improving Optical Waveguide Sensor Accuracy

    PubMed Central

    Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly

    2016-01-01

    This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time savings in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties, which should be calculated as in the previous method. Generally, a small number of arithmetic processes, which result in a shorter simulation time, are desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.

  13. Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks.

    PubMed

    Rangan, Aaditya V; Cai, David

    2007-02-01

    We discuss numerical methods for simulating large-scale, integrate-and-fire (I&F) neuronal networks. Important elements in our numerical methods are (i) a neurophysiologically inspired integrating factor which casts the solution as a numerically tractable integral equation, and allows us to obtain stable and accurate individual neuronal trajectories (i.e., voltage and conductance time-courses) even when the I&F neuronal equations are stiff, such as in strongly fluctuating, high-conductance states; (ii) an iterated process of spike-spike corrections within groups of strongly coupled neurons to account for spike-spike interactions within a single large numerical time-step; and (iii) a clustering procedure of firing events in the network to take advantage of localized architectures, such as spatial scales of strong local interactions, which are often present in large-scale computational models-for example, those of the primary visual cortex. (We note that the spike-spike corrections in our methods are more involved than the correction of single neuron spike-time via a polynomial interpolation as in the modified Runge-Kutta methods commonly used in simulations of I&F neuronal networks.) Our methods can evolve networks with relatively strong local interactions in an asymptotically optimal way such that each neuron fires approximately once in [Formula: see text] operations, where N is the number of neurons in the system. We note that quantifications used in computational modeling are often statistical, since measurements in a real experiment to characterize physiological systems are typically statistical, such as firing rate, interspike interval distributions, and spike-triggered voltage distributions. We emphasize that it takes much less computational effort to resolve statistical properties of certain I&F neuronal networks than to fully resolve trajectories of each and every neuron within the system. For networks operating in realistic dynamical regimes, such as
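
    The simplest relative of such an integrating-factor update is the exponential step for a single conductance-based integrate-and-fire neuron sketched below; the parameter names and values are illustrative, and the actual solver additionally handles time-varying conductances and the spike-spike corrections described above.

```python
import numpy as np

# Minimal exponential (integrating-factor style) update for one conductance-based
# integrate-and-fire neuron; units and parameter values are purely illustrative.
def if_step(V, gE, dt, gL=0.05, EL=-65.0, EE=0.0, V_thresh=-55.0, V_reset=-65.0):
    g_tot = gL + gE                        # total conductance, assumed frozen over dt
    V_inf = (gL * EL + gE * EE) / g_tot    # steady-state voltage for this conductance
    V_new = V_inf + (V - V_inf) * np.exp(-g_tot * dt)   # exact solution under frozen g
    spiked = V_new >= V_thresh
    if spiked:
        V_new = V_reset                    # reset after threshold crossing
    return V_new, spiked
```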

  14. Numerical Investigations of Capabilities and Limits of Photospheric Data Driven Magnetic Flux Emergence

    NASA Astrophysics Data System (ADS)

    Linton, Mark; Leake, James; Schuck, Peter W.

    2016-05-01

    The magnetic field of the solar atmosphere is the primary driver of solar activity. Understanding the magnetic state of the solar atmosphere is therefore of key importance to predicting solar activity. One promising means of studying the magnetic atmosphere is to dynamically build up and evolve this atmosphere from the time evolution of the magnetic field at the photosphere, where it can be measured with current solar vector magnetograms at high temporal and spatial resolution. We report here on a series of numerical experiments investigating the capabilities and limits of magnetohydrodynamical simulations of such a process, where a magnetic corona is dynamically built up and evolved from a time series of synthetic photospheric data. These synthetic data are composed of photospheric slices taken from self-consistent convection zone to corona simulations of flux emergence. The driven coronae are then quantitatively compared against the coronae of the original simulations. We investigate and report on the fidelity of these driven simulations, both as a function of the emergence timescale of the magnetic flux, and as a function of the driving cadence of the input data. This work was supported by the Chief of Naval Research and the NASA Living with a Star and Heliophysics Supporting Research programs.

  15. Real time control and numerical simulation of pipeline subjected to landslide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cuscuna, S.; Giusti, G.; Gramola, C.

    1984-06-01

    This paper describes SNAM research activity in the study of the behaviour and real-time control of pipelines in landslide areas. The subject can be dealt with by considering three different aspects: 1. Geotechnical characterization of unstable soils. The mechanical parameters of the soil and the landslide types are defined; 2. Structural analysis of the pipe-soil system. By means of a finite element program it is possible to study the pipe-soil interaction; in this numerical code the soil parameters enter through the non-linear elastic behaviour of the pipe restraints. The results of this analysis are the location of the expected most stressed sections of the pipe and the global behaviour of the pipe inside the soil. 3. Instrumental control. The adoption of a suitable arrangement of vibrating-wire strain gauges allows the strain of the pipe to be monitored in time. The aim is to make timely interventions possible in order to guarantee the safety of the installation.

  16. Marketing Time: Evolving Timescapes in Academia

    ERIC Educational Resources Information Center

    Guzmán-Valenzuela, Carolina; Barnett, Ronald

    2013-01-01

    In countries such as Chile in which a neoliberal economic approach is predominant, higher education systems are characterized by productivity, competition for resources and income generation, all of which have impact on academics' experiences of time. Through a qualitative approach in which 20 interviews and two focus groups were conducted, this…

  17. Integrating the Human Sciences to Evolve Effective Policies

    PubMed Central

    Biglan, Anthony; Cody, Christine

    2012-01-01

    This paper describes an evolutionary perspective on human development and wellbeing and contrasts it with the model of self-interest that is prominent in economics. The two approaches have considerably different implications for how human wellbeing might be improved. Research in psychology, prevention science, and neuroscience is converging on an evolutionary account of the importance of two contrasting suites of social behavior—prosociality vs. antisocial behaviors (crime, drug abuse, risky sexual behavior) and related problems such as depression. Prosociality of individuals and groups evolves in environments that minimize toxic biological and social conditions, promote and richly reinforce prosocial behavior and attitudes, limit opportunities for antisocial behavior, and nurture the pursuit of prosocial values. Conversely, antisocial behavior and related problems emerge in environments that are high in threat and conflict. Over the past 30 years, randomized trials have shown numerous family, school, and community interventions to prevent most problem behaviors and promote prosociality. Research has also shown that poverty and economic inequality are major risk factors for the development of problem behaviors. The paper describes policies that can reduce poverty and benefit youth development. Although it is clear that the canonical economic model of rational self-interest has made a significant contribution to the science of economics, the evidence reviewed here shows that it must be reconciled with an evolutionary perspective on human development and wellbeing if society is going to evolve public policies that advance the health and wellbeing of the entire population. PMID:23833332

  18. Combining numerical simulations with time-domain random walk for pathogen risk assessment in groundwater

    NASA Astrophysics Data System (ADS)

    Cvetkovic, V.; Molin, S.

    2012-02-01

    We present a methodology that combines numerical simulations of groundwater flow and advective transport in heterogeneous porous media with analytical retention models for computing the infection risk probability from pathogens in aquifers. The methodology is based on the analytical results presented in [1,2] for utilising the colloid filtration theory in a time-domain random walk (TDRW) framework. It is shown that in uniform flow, the numerical simulations of advection yield results comparable to those of the analytical TDRW model for generating advection segments. It is shown that spatial variability of the attachment rate may be significant; however, it appears to affect risk differently depending on whether the flow is uniform or radially converging. In spite of the fact that numerous issues remain open regarding pathogen transport in aquifers on the field scale, the methodology presented here may be useful for screening purposes, and may also serve as a basis for future studies that would include greater complexity.
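
    A stripped-down version of the chain described, advective travel-time segments, first-order attachment along each segment, and a dose-response step, might look like the sketch below; all rates and parameters are invented for illustration, and the analytical retention models of [1,2] are considerably richer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch: sample advective travel times as sums of segment times,
# attenuate pathogen numbers by first-order (filtration-style) attachment along
# the path, and convert the delivered dose to an infection risk.
def pathogen_risk(n_particles=10_000, n_segments=20, mean_seg_time=1.0,
                  k_att=0.3, N0=1e4, r_dose=1e-4):
    seg_times = rng.exponential(mean_seg_time, size=(n_particles, n_segments))
    survival = np.exp(-k_att * seg_times.sum(axis=1))   # first-order attachment decay
    dose = N0 * survival.mean()                         # expected pathogens arriving
    return 1.0 - np.exp(-r_dose * dose)                 # exponential dose-response
```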

  19. Disgust: Evolved Function and Structure

    ERIC Educational Resources Information Center

    Tybur, Joshua M.; Lieberman, Debra; Kurzban, Robert; DeScioli, Peter

    2013-01-01

    Interest in and research on disgust has surged over the past few decades. The field, however, still lacks a coherent theoretical framework for understanding the evolved function or functions of disgust. Here we present such a framework, emphasizing 2 levels of analysis: that of evolved function and that of information processing. Although there is…

  20. Disaggregating soil erosion processes within an evolving experimental landscape

    USDA-ARS?s Scientific Manuscript database

    Soil-mantled landscapes subjected to rainfall, runoff events, and downstream base level adjustments will erode and evolve in time and space. Yet the precise mechanisms for soil erosion also will vary, and such variations may not be adequately captured by soil erosion prediction technology. This st...

  1. Strange mode instabilities and mass loss in evolved massive primordial stars

    NASA Astrophysics Data System (ADS)

    Yadav, Abhay Pratap; Kühnrich Biavatti, Stefan Henrique; Glatzel, Wolfgang

    2018-04-01

    A linear stability analysis of models for evolved primordial stars with masses between 150 and 250 M⊙ is presented. Strange mode instabilities with growth rates in the dynamical range are identified for stellar models with effective temperatures below log Teff = 4.5. For selected models, the final fate of the instabilities is determined by numerical simulation of their evolution into the non-linear regime. As a result, the instabilities lead to finite amplitude pulsations. Associated with them are acoustic energy fluxes capable of driving stellar winds with mass-loss rates in the range between 7.7 × 10-7 and 3.5 × 10-4 M⊙ yr-1.

  2. Time domain numerical calculations of unsteady vortical flows about a flat plate airfoil

    NASA Technical Reports Server (NTRS)

    Hariharan, S. I.; Yu, Ping; Scott, J. R.

    1989-01-01

    A time domain numerical scheme is developed to solve for the unsteady flow about a flat plate airfoil due to imposed upstream, small amplitude, transverse velocity perturbations. The governing equation for the resulting unsteady potential is a homogeneous, constant coefficient, convective wave equation. Accurate solution of the problem requires the development of approximate boundary conditions which correctly model the physics of the unsteady flow in the far field. A uniformly valid far field boundary condition is developed, and numerical results are presented using this condition. The stability of the scheme is discussed, and the stability restriction for the scheme is established as a function of the Mach number. Finally, comparisons are made with the frequency domain calculation by Scott and Atassi, and the relative strengths and weaknesses of each approach are assessed.

  3. Spatio-Temporal Data Model for Integrating Evolving Nation-Level Datasets

    NASA Astrophysics Data System (ADS)

    Sorokine, A.; Stewart, R. N.

    2017-10-01

    Ability to easily combine data from diverse sources in a single analytical workflow is one of the greatest promises of Big Data technologies. However, such integration is often challenging, as datasets originate from different vendors, governments, and research communities, which results in multiple incompatibilities including data representations, formats, and semantics. Semantic differences are the hardest to handle: different communities often use different attribute definitions and associate the records with different sets of evolving geographic entities. Analysis of global socioeconomic variables across multiple datasets over prolonged time periods is often complicated by differences in how boundaries and histories of countries or other geographic entities are represented. Here we propose an event-based data model for depicting and tracking the histories of evolving geographic units (countries, provinces, etc.) and their representations in disparate data. The model addresses the semantic challenge of preserving the identity of geographic entities over time by defining criteria for the entity's existence, a set of events that may affect its existence, and rules for mapping between different representations (datasets). The proposed model is used for maintaining an evolving compound database of global socioeconomic and environmental data harvested from multiple sources. A practical implementation of our model is demonstrated using a PostgreSQL object-relational database with temporal, geospatial, and NoSQL database extensions.
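
    A minimal, purely illustrative rendering of such an event-based model (not the schema of the PostgreSQL implementation described above) could look like:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class GeoEntity:
    name: str
    valid_from: int                   # e.g., year the entity came into existence
    valid_to: Optional[int] = None    # None = still exists

@dataclass
class Event:
    year: int
    kind: str                         # e.g. "split", "merge", "rename"
    sources: List[GeoEntity]          # entities affected by the event
    results: List[GeoEntity]          # entities existing after the event

def entities_at(year: int, entities: List[GeoEntity]) -> List[GeoEntity]:
    """Geographic units in existence during a given year."""
    return [e for e in entities
            if e.valid_from <= year and (e.valid_to is None or year < e.valid_to)]
```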

  4. Adaptive control of dynamical synchronization on evolving networks with noise disturbances

    NASA Astrophysics Data System (ADS)

    Yuan, Wu-Jie; Zhou, Jian-Fang; Sendiña-Nadal, Irene; Boccaletti, Stefano; Wang, Zhen

    2018-02-01

    In real-world networked systems, the underlying structure is often affected by external and internal unforeseen factors, making its evolution typically inaccessible. An adaptive strategy was introduced for maintaining synchronization on unpredictably evolving networks [Sorrentino and Ott, Phys. Rev. Lett. 100, 114101 (2008), 10.1103/PhysRevLett.100.114101], which, however, does not consider the noise disturbances widely present in network environments. We provide here strategies to control dynamical synchronization on slowly and unpredictably evolving networks subjected to noise disturbances, which are observed at the node and at the communication-channel level. With our strategy, the nodes' coupling strength is adaptively adjusted with the aim of controlling synchronization, according only to their received signal and noise disturbances. We first provide a theoretical analysis of the control scheme by introducing an error potential function to seek the minimization of the synchronization error. Then, we show numerical experiments which verify our theoretical results. In particular, it is found that our adaptive strategy is effective even for the case in which the dynamics of the uncontrolled network would be explosive (i.e., the states of all the nodes would diverge to infinity).

  5. The Evolving Doorframe.

    ERIC Educational Resources Information Center

    Wiens, Janet

    2000-01-01

    Discusses decision making factors when choosing doorframes for educational facilities. Focus is placed on how doorframes have evolved over the years in ways that offer new choice options to consider. (GR)

  6. Human-computer interfaces applied to numerical solution of the Plateau problem

    NASA Astrophysics Data System (ADS)

    Elias Fabris, Antonio; Soares Bandeira, Ivana; Ramos Batista, Valério

    2015-09-01

    In this work we present a code in Matlab to solve the Problem of Plateau numerically, and the code will include a human-computer interface. The Problem of Plateau has applications in areas of knowledge such as, for instance, Computer Graphics. The solution method will be the same as that of the Surface Evolver, but the difference will be a complete graphical interface with the user. This will enable us to implement other kinds of interface such as ocular mouse, voice, touch, etc. To date, Evolver does not include any graphical interface, which restricts its use by the scientific community. In particular, its use is practically impossible for most of the Physically Challenged People.

  7. Velocity-gauge real-time TDDFT within a numerical atomic orbital basis set

    NASA Astrophysics Data System (ADS)

    Pemmaraju, C. D.; Vila, F. D.; Kas, J. J.; Sato, S. A.; Rehr, J. J.; Yabana, K.; Prendergast, David

    2018-05-01

    The interaction of laser fields with solid-state systems can be modeled efficiently within the velocity-gauge formalism of real-time time dependent density functional theory (RT-TDDFT). In this article, we discuss the implementation of the velocity-gauge RT-TDDFT equations for electron dynamics within a linear combination of atomic orbitals (LCAO) basis set framework. Numerical results obtained from our LCAO implementation, for the electronic response of periodic systems to both weak and intense laser fields, are compared to those obtained from established real-space grid and Full-Potential Linearized Augmented Planewave approaches. Potential applications of the LCAO based scheme in the context of extreme ultra-violet and soft X-ray spectroscopies involving core-electronic excitations are discussed.

  8. Numerical study of water residence time in the Yueqing Bay based on the eulerian approach

    NASA Astrophysics Data System (ADS)

    Ying, Chao; Li, Xinwen; Liu, Yong; Yao, Wenwei; Li, Ruijie

    2018-05-01

    Yueqing Bay is a semi-enclosed bay located in the southeast of Zhejiang Province, China. Due to substantial anthropogenic influences since 1964, the water quality in the bay has deteriorated seriously, and urgent measures should be taken to protect the water body. In this study, a numerical model was calibrated for water surface elevation and tidal current from August 14 to August 26, 2011. Comparisons of observed and simulated data showed that the model reproduced the tidal range and phase and the variations of the current at different periods fairly well. The calibrated model was then applied to investigate the spatial flushing pattern of the bay by calculation of residence time. The results obtained from a series of model experiments demonstrated that the residence time increased from 10 days at the bay mouth to more than 70 days in the upper bay. The average residence time over the whole bay was 49.5 days. In addition, the flushing homogeneity curve showed that the residence time in the bay varied smoothly. This study provides a numerical tool to quantify the transport timescale in Yueqing Bay and supports adaptive management of the bay by local authorities.

  9. Numerical modeling of the acoustic wave propagation across a homogenized rigid microstructure in the time domain

    NASA Astrophysics Data System (ADS)

    Lombard, Bruno; Maurel, Agnès; Marigo, Jean-Jacques

    2017-04-01

    Homogenization of a thin micro-structure yields effective jump conditions that incorporate the geometrical features of the scatterers. These jump conditions apply across a thin but nonzero-thickness interface whose interior is disregarded. This paper aims (i) to propose a numerical method able to handle the jump conditions in order to simulate the homogenized problem in the time domain, and (ii) to inspect the validity of the homogenized problem when compared to the real one. For this purpose, we adapt the Explicit Simplified Interface Method originally developed for standard jump conditions across a zero-thickness interface. Doing so allows us to handle arbitrarily shaped interfaces on a Cartesian grid with the same efficiency and accuracy of the numerical scheme as those obtained in a homogeneous medium. Numerical experiments are performed to test the properties of the numerical method and to inspect the validity of the homogenization problem.

  10. Numerical approximation for the infinite-dimensional discrete-time optimal linear-quadratic regulator problem

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Rosen, I. G.

    1986-01-01

    An abstract approximation framework is developed for the finite and infinite time horizon discrete-time linear-quadratic regulator problem for systems whose state dynamics are described by a linear semigroup of operators on an infinite-dimensional Hilbert space. The schemes included in the framework yield finite-dimensional approximations to the linear state feedback gains which determine the optimal control law. Convergence arguments are given. Examples involving hereditary and parabolic systems and the vibration of a flexible beam are considered. Spline-based finite element schemes for these classes of problems, together with numerical results, are presented and discussed.
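
    In the finite-dimensional case, the optimal feedback gains that such approximations converge to are given by the standard backward Riccati recursion, sketched below for reference (the record's contribution is the infinite-dimensional approximation and convergence theory, not this elementary step).

```python
import numpy as np

# Standard finite-horizon discrete-time LQR gains via the backward Riccati
# recursion: minimize sum_k x_k' Q x_k + u_k' R u_k for x_{k+1} = A x_k + B u_k.
def dlqr_finite_horizon(A, B, Q, R, N):
    P = Q.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # feedback gain K_k
        P = Q + A.T @ P @ (A - B @ K)                        # Riccati backward step
        gains.append(K)
    return gains[::-1]   # gains ordered forward in time; u_k = -K_k x_k
```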

  11. Hybrid asymptotic-numerical approach for estimating first-passage-time densities of the two-dimensional narrow capture problem.

    PubMed

    Lindsay, A E; Spoonmore, R T; Tzou, J C

    2016-10-01

    A hybrid asymptotic-numerical method is presented for obtaining an asymptotic estimate for the full probability distribution of capture times of a random walker by multiple small traps located inside a bounded two-dimensional domain with a reflecting boundary. As motivation for this study, we calculate the variance in the capture time of a random walker by a single interior trap and determine this quantity to be comparable in magnitude to the mean. This implies that the mean is not necessarily reflective of typical capture times and that the full density must be determined. To solve the underlying diffusion equation, the method of Laplace transforms is used to obtain an elliptic problem of modified Helmholtz type. In the limit of vanishing trap sizes, each trap is represented as a Dirac point source that permits the solution of the transform equation to be represented as a superposition of Helmholtz Green's functions. Using this solution, we construct asymptotic short-time solutions of the first-passage-time density, which captures peaks associated with rapid capture by the absorbing traps. When numerical evaluation of the Helmholtz Green's function is employed followed by numerical inversion of the Laplace transform, the method reproduces the density for larger times. We demonstrate the accuracy of our solution technique with a comparison to statistics obtained from a time-dependent solution of the diffusion equation and discrete particle simulations. In particular, we demonstrate that the method is capable of capturing the multimodal behavior in the capture time density that arises when the traps are strategically arranged. The hybrid method presented can be applied to scenarios involving both arbitrary domains and trap shapes.
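
    The discrete particle simulations used for comparison can be mimicked with a simple, admittedly slow, Brownian-walker sketch such as the one below (unit disk, reflecting boundary, one small absorbing trap; all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def capture_times(n_walkers=500, dt=2e-4, D=1.0, trap_center=(0.3, 0.0),
                  trap_radius=0.05, t_max=10.0):
    """Monte Carlo capture times of Brownian walkers started at the origin.
    Keep the step length sqrt(2*D*dt) well below trap_radius, otherwise
    walkers can hop over the trap."""
    c = np.array(trap_center)
    times = []
    for _ in range(n_walkers):
        x = np.zeros(2)
        t = 0.0
        while t < t_max:
            x = x + np.sqrt(2.0 * D * dt) * rng.standard_normal(2)
            r = np.linalg.norm(x)
            if r > 1.0:                      # reflect radially off the unit circle
                x *= (2.0 - r) / r
            t += dt
            if np.linalg.norm(x - c) < trap_radius:
                times.append(t)
                break
    return np.array(times)                   # histogram to estimate the FPT density
```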

  12. Probing Dust Formation Around Evolved Stars with Near-Infrared Interferometry

    NASA Astrophysics Data System (ADS)

    Sargent, B.; Srinivasan, S.; Riebel, D.; Meixner, M.

    2014-09-01

    Near-infrared interferometry holds great promise for advancing our understanding of the formation of dust around evolved stars. For example, the Magdalena Ridge Observatory Interferometer (MROI), which will be an optical/near-infrared interferometer with down to submilliarcsecond resolution, includes studying stellar mass loss as being of interest to its Key Science Mission. With facilities like MROI, many questions relating to the formation of dust around evolved stars may be probed. How close to an evolved star such as an asymptotic giant branch (AGB) or red supergiant (RSG) star does a dust grain form? Over what temperature ranges will such dust form? How does dust formation temperature and distance from star change as a function of the dust composition (carbonaceous versus oxygen-rich)? What are the ranges of evolved star dust shell geometries, and does dust shell geometry for AGB and RSG stars correlate with dust composition, similar to the correlation seen for planetary nebula outflows? At what point does the AGB star become a post-AGB star, when dust formation ends and the dust shell detaches? Currently we are conducting studies of evolved star mass loss in the Large Magellanic Cloud using photometry from the Surveying the Agents of a Galaxy's Evolution (SAGE; PI: M. Meixner) Spitzer Space Telescope Legacy program. We model this mass loss using the radiative transfer program 2Dust to create our Grid of Red supergiant and Asymptotic giant branch ModelS (GRAMS). For simplicity, we assume spherical symmetry, but 2Dust does have the capability to model axisymmetric, non-spherically-symmetric dust shell geometries. 2Dust can also generate images of models at specified wavelengths. We discuss possible connections of our GRAMS modeling using 2Dust of SAGE data of evolved stars in the LMC and also other data on evolved stars in the Milky Way's Galactic Bulge to near-infrared interferometric studies of such stars. By understanding the origins of dust around evolved

  13. Tensions inherent in the evolving role of the infection preventionist.

    PubMed

    Conway, Laurie J; Raveis, Victoria H; Pogorzelska-Maziarz, Monika; Uchida, May; Stone, Patricia W; Larson, Elaine L

    2013-11-01

    The role of infection preventionists (IPs) is expanding in response to demands for quality and transparency in health care. Practice analyses and survey research have demonstrated that IPs spend a majority of their time on surveillance and are increasingly responsible for prevention activities and management; however, deeper qualitative aspects of the IP role have rarely been explored. We conducted a qualitative content analysis of in-depth interviews with 19 IPs at hospitals throughout the United States to describe the current IP role, specifically the ways that IPs effect improvements and the facilitators and barriers they face. The narratives document that the IP role is evolving in response to recent changes in the health care landscape and reveal that this progression is associated with friction and uncertainty. Tensions inherent in the evolving role of the IP emerged from the interviews as 4 broad themes: (1) expanding responsibilities outstrip resources, (2) shifting role boundaries create uncertainty, (3) evolving mechanisms of influence involve trade-offs, and (4) the stress of constant change is compounded by chronic recurring challenges. Advances in implementation science, data standardization, and training in leadership skills are needed to support IPs in their evolving role. Copyright © 2013 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.

  14. The urban watershed continuum: evolving spatial and temporal dimensions

    Treesearch

    Sujay S. Kaushal; Kenneth T. Belt

    2012-01-01

    Urban ecosystems are constantly evolving, and they are expected to change in both space and time with active management or degradation. An urban watershed continuum framework recognizes a continuum of engineered and natural hydrologic flowpaths that expands hydrologic networks in ways that are seldom considered. It recognizes that the nature of hydrologic connectivity...

  15. A delta-rule model of numerical and non-numerical order processing.

    PubMed

    Verguts, Tom; Van Opstal, Filip

    2014-06-01

    Numerical and non-numerical order processing share empirical characteristics (distance effect and semantic congruity), but there are also important differences (in size effect and end effect). At the same time, models and theories of numerical and non-numerical order processing developed largely separately. Currently, we combine insights from 2 earlier models to integrate them in a common framework. We argue that the same learning principle underlies numerical and non-numerical orders, but that environmental features determine the empirical differences. Implications for current theories on order processing are pointed out. PsycINFO Database Record (c) 2014 APA, all rights reserved.
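
    As a reminder of the learning principle involved, a bare-bones delta-rule sketch that maps one-hot item codes onto positions along an ordered continuum is given below; it illustrates only the update rule, not the published model's architecture.

```python
import numpy as np

# Minimal delta-rule sketch: one-hot item codes are mapped to a scalar "position"
# output, and weights are updated with w += lr * (target - output) * input.
def train_order(n_items=7, lr=0.1, n_epochs=200, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(n_items)
    targets = np.linspace(-1.0, 1.0, n_items)    # ordinal positions as targets
    for _ in range(n_epochs):
        for i in rng.permutation(n_items):
            x = np.zeros(n_items)
            x[i] = 1.0                           # one-hot input for item i
            y = w @ x
            w += lr * (targets[i] - y) * x       # delta-rule update
    return w   # |w[i] - w[j]| grows with rank distance, giving a distance effect
```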

  16. Velocity-gauge real-time TDDFT within a numerical atomic orbital basis set

    DOE PAGES

    Pemmaraju, C. D.; Vila, F. D.; Kas, J. J.; ...

    2018-02-07

    The interaction of laser fields with solid-state systems can be modeled efficiently within the velocity-gauge formalism of real-time time dependent density functional theory (RT-TDDFT). In this article, we discuss the implementation of the velocity-gauge RT-TDDFT equations for electron dynamics within a linear combination of atomic orbitals (LCAO) basis set framework. Numerical results obtained from our LCAO implementation, for the electronic response of periodic systems to both weak and intense laser fields, are compared to those obtained from established real-space grid and Full-Potential Linearized Augmented Planewave approaches. Potential applications of the LCAO based scheme in the context of extreme ultra-violet and soft X-ray spectroscopies involving core-electronic excitations are discussed.

  17. Velocity-gauge real-time TDDFT within a numerical atomic orbital basis set

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pemmaraju, C. D.; Vila, F. D.; Kas, J. J.

    The interaction of laser fields with solid-state systems can be modeled efficiently within the velocity-gauge formalism of real-time time dependent density functional theory (RT-TDDFT). In this article, we discuss the implementation of the velocity-gauge RT-TDDFT equations for electron dynamics within a linear combination of atomic orbitals (LCAO) basis set framework. Numerical results obtained from our LCAO implementation, for the electronic response of periodic systems to both weak and intense laser fields, are compared to those obtained from established real-space grid and Full-Potential Linearized Augmented Planewave approaches. Potential applications of the LCAO based scheme in the context of extreme ultra-violet and soft X-ray spectroscopies involving core-electronic excitations are discussed.

  18. A numerical simulation of magnetic reconnection and radiative cooling in line-tied current sheets

    NASA Technical Reports Server (NTRS)

    Forbes, T. G.; Malherbe, J. M.

    1991-01-01

    Radiative MHD equations are used for an optically thin plasma to carry out a numerical experiment related to the formation of 'postflare' loops. The numerical experiment starts with a current sheet that is in mechanical and thermal equilibrium but is unstable to both tearing-mode and thermal-condensation instabilities. The current sheet is line-tied at one end to a photospheric-like boundary and evolves asymmetrically. The effects of thermal conduction, resistivity variation, and gravity are ignored. In general, reconnection in the nonlinear stage of the tearing-mode instability can strongly affect the onset of condensations unless the radiative-cooling time scale is much smaller than the tearing-mode time scale. When the ambient plasma beta is less than 0.2, the reconnection enters a regime where the outflow from the reconnection region is supermagnetosonic with respect to the fast-mode wave speed. In the supermagnetosonic regime the most rapidly condensing regions occur downstream of a fast-mode shock that forms where the outflow impinges on closed loops attached to the photospheric-like boundary. A similar shock-induced condensation might occur during the formation of 'postflare' loops.

  19. Where did the time go? Friction evolves with slip following large velocity steps, normal stress steps, and (?) during long holds

    NASA Astrophysics Data System (ADS)

    Rubin, A. M.; Bhattacharya, P.; Tullis, T. E.; Okazaki, K.; Beeler, N. M.

    2016-12-01

    The popular constitutive formulations of rate-and-state friction offer two end-member views on whether friction evolves only with slip (Slip law state evolution) or with time even without slip (Aging law state evolution). While rate stepping experiments show support for the Slip law, laboratory-observed frictional behavior of initially bare rock surfaces near zero slip rate has traditionally been interpreted to show support for time-dependent evolution of frictional strength. Such laboratory-derived support for time-dependent evolution has been one of the motivations behind the Aging law being widely used to model earthquake cycles on natural faults. Through a combination of theoretical results and new experimental data on initially bare granite, we show stronger support for the other end-member view, i.e. that friction under a wide range of sliding conditions evolves only with slip. Our dataset is unique in that it combines up to 3.5 orders of magnitude rate steps, sequences of holds up to 10000s, and 5% normal stress steps at order of magnitude different sliding rates during the same experimental run. The experiments were done on the Brown rotary shear apparatus using servo feedback, making the machine stiff enough to provide very large departures from steady-state while maintaining stable, quasi-static sliding. Across these diverse sliding conditions, and in particular for both large velocity step decreases and the longest holds, the data are much more consistent with the Slip law version of slip-dependence than the time-dependence formulated in the Aging law. The shear stress response to normal stress steps is also consistently better explained by the Slip law when paired with the Linker-Dieterich type response to normal stress perturbations. However, the remarkable symmetry and slip-dependence of the normal stress step increases and decreases suggest deficiencies in the Linker-Dieterich formulation that we will probe in future experiments. High quality
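
    For reference, the two end-member state-evolution laws contrasted here are usually written as follows (standard rate-and-state notation with state variable theta, slip rate V, and characteristic slip distance D_c; this is the textbook form, not necessarily the exact parameterization used in these experiments):

```latex
% Aging law: state evolves even at V = 0 (time-dependent healing)
\frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c}

% Slip law: state evolves only with slip (no evolution when V = 0)
\frac{d\theta}{dt} = -\frac{V\theta}{D_c}\,\ln\!\left(\frac{V\theta}{D_c}\right)
```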

  20. On numerical model of time-dependent processes in three-dimensional porous heat-releasing objects

    NASA Astrophysics Data System (ADS)

    Lutsenko, Nickolay A.

    2016-10-01

    The gas flows in the gravity field through porous objects with heat-releasing sources are investigated when the self-regulation of the flow rate of the gas passing through the porous object takes place. Such objects can appear after various natural or man-made disasters (like the exploded unit of the Chernobyl NPP). The mathematical model and the original numerical method, based on a combination of explicit and implicit finite difference schemes, are developed for investigating the time-dependent processes in 3D porous energy-releasing objects. The advantage of the numerical model is its ability to describe unsteady processes under both natural convection and forced filtration. The gas cooling of 3D porous objects with different distribution of heat sources is studied using computational experiment.

  1. Communicability across evolving networks.

    PubMed

    Grindrod, Peter; Parsons, Mark C; Higham, Desmond J; Estrada, Ernesto

    2011-04-01

    Many natural and technological applications generate time-ordered sequences of networks, defined over a fixed set of nodes; for example, time-stamped information about "who phoned who" or "who came into contact with who" arise naturally in studies of communication and the spread of disease. Concepts and algorithms for static networks do not immediately carry through to this dynamic setting. For example, suppose A and B interact in the morning, and then B and C interact in the afternoon. Information, or disease, may then pass from A to C, but not vice versa. This subtlety is lost if we simply summarize using the daily aggregate network given by the chain A-B-C. However, using a natural definition of a walk on an evolving network, we show that classic centrality measures from the static setting can be extended in a computationally convenient manner. In particular, communicability indices can be computed to summarize the ability of each node to broadcast and receive information. The computations involve basic operations in linear algebra, and the asymmetry caused by time's arrow is captured naturally through the noncommutativity of matrix-matrix multiplication. Illustrative examples are given for both synthetic and real-world communication data sets. We also discuss the use of the new centrality measures for real-time monitoring and prediction.
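
    A compact sketch of this kind of computation, accumulating walk counts across an ordered sequence of adjacency matrices via a product of matrix resolvents, is given below; the normalization and exact definition in the published work may differ, so treat this as schematic.

```python
import numpy as np

# Schematic dynamic-communicability computation over an ordered sequence of
# adjacency matrices A[0], A[1], ...  Because matrix products do not commute,
# Q respects time's arrow: walks must traverse edges in chronological order.
def dynamic_communicability(adjacency_seq, a=0.1):
    n = adjacency_seq[0].shape[0]
    Q = np.eye(n)
    for A in adjacency_seq:
        Q = Q @ np.linalg.inv(np.eye(n) - a * A)   # requires a < 1/rho(A) at each step
    broadcast = Q.sum(axis=1)    # row sums: ability of each node to send information
    receive = Q.sum(axis=0)      # column sums: ability of each node to receive
    return Q, broadcast, receive
```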

  2. Network Analysis of Earth's Co-Evolving Geosphere and Biosphere

    NASA Astrophysics Data System (ADS)

    Hazen, R. M.; Eleish, A.; Liu, C.; Morrison, S. M.; Meyer, M.; Consortium, K. D.

    2017-12-01

    A fundamental goal of Earth science is the deep understanding of Earth's dynamic, co-evolving geosphere and biosphere through deep time. Network analysis of geo- and bio-'big data' provides an interactive, quantitative, and predictive visualization framework to explore complex and otherwise hidden high-dimensional features of diversity, distribution, and change in the evolution of Earth's geochemistry, mineralogy, paleobiology, and biochemistry [1]. Networks also facilitate quantitative comparison of different geological time periods, tectonic settings, and geographical regions, as well as different planets and moons, through network metrics, including density, centralization, diameter, and transitivity. We render networks by employing data related to geographical, paragenetic, environmental, or structural relationships among minerals, fossils, proteins, and microbial taxa. An important recent finding is that the topography of many networks reflects parameters not explicitly incorporated in constructing the network. For example, networks for minerals, fossils, and protein structures reveal embedded qualitative time axes, with additional network geometries possibly related to extinction and/or other punctuation events (see Figure). Other axes related to chemical activities and volatile fugacities, as well as pressure and/or depth of formation, may also emerge from network analysis. These patterns provide new insights into the way planets evolve, especially Earth's co-evolving geosphere and biosphere. 1. Morrison, S.M. et al. (2017) Network analysis of mineralogical systems. American Mineralogist 102, in press. Figure Caption: A network of Phanerozoic Era fossil animals from the past 540 million years includes blue, red, and black circles (nodes) representing family-level taxa and grey lines (links) between coexisting families. Age information was not used in the construction of this network; nevertheless an intrinsic timeline is embedded in the network topology.

  3. Time Variations of the ENA Flux Observed by IBEX: Is the Outer Heliosphere Evolving?

    NASA Astrophysics Data System (ADS)

    McComas, D. J.; Bzowski, M.; Clark, G.; Crew, G. B.; Demajistre, R.; Funsten, H. O.; Fuselier, S. A.; Gruntman, M.; Janzen, P.; Livadiotis, G.; Moebius, E.; Reisenfeld, D. B.; Roelof, E. C.; Schwadron, N. A.

    2009-12-01

    The Interstellar Boundary Explorer (IBEX) mission has just provided the first global observations of the heliosphere's interstellar interaction [McComas et al., 2009 and other papers in the IBEX special issue of Science]. IBEX all-sky maps and energy spectra provide detailed information about this interaction. Because of the way IBEX collects its observations, each swath of the sky is revisited every six months, with the winter viewing, when IBEX's orbit is largely sunward of the Earth, providing significantly cleaner measurements than the summer season, when IBEX's orbit rotates through the Earth's magnetosheath and magnetotail. Very limited initial overlapping data showed that the observed structure was largely stable over the first six months of observations; however, it also suggested the tantalizing possibility that there could be some temporal evolution. By the time of the Fall AGU meeting, much of the sky will be imaged a second time. This study will provide a comparison of these sets of observations, especially at higher energies where the statistics are better, and directly address the question of whether the outer heliosphere is evolving over timescales on the order of half a year. Reference: McComas, D.J., F. Allegrini, P. Bochsler, M. Bzowski, E.R. Christian, G.B. Crew, R. DeMajistre, H. Fahr, H. Fichtner, P.C. Frisch, H.O. Funsten, S. A. Fuselier, G. Gloeckler, M. Gruntman, J. Heerikhuisen, V. Izmodenov, P. Janzen, P. Knappenberger, S. Krimigis, H. Kucharek, M. Lee, G. Livadiotis, S. Livi, R.J. MacDowall, D. Mitchell, E. Möbius, T. Moore, N.V. Pogorelov, D. Reisenfeld, E. Roelof, L. Saul, N.A. Schwadron, P.W. Valek, R. Vanderspek, P. Wurz, G.P. Zank, First Global Observations of the Interstellar Interaction from the Interstellar Boundary Explorer, submitted to Science, 2009.

  4. A new numerical framework for solving conservation laws: The method of space-time conservation element and solution element

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; To, Wai-Ming

    1991-01-01

    A new numerical framework for solving conservation laws is being developed. It employs: (1) a nontraditional formulation of the conservation laws in which space and time are treated on the same footing, and (2) a nontraditional use of discrete variables such that numerical marching can be carried out by using a set of relations that represents both local and global flux conservation.

  5. Evolving non-thermal electrons in simulations of black hole accretion

    NASA Astrophysics Data System (ADS)

    Chael, Andrew A.; Narayan, Ramesh; Saḑowski, Aleksander

    2017-09-01

    Current simulations of hot accretion flows around black holes assume either a single-temperature gas or, at best, a two-temperature gas with thermal ions and electrons. However, processes like magnetic reconnection and shocks can accelerate electrons into a non-thermal distribution, which will not quickly thermalize at the very low densities found in many systems. Such non-thermal electrons have been invoked to explain the infrared and X-ray spectra and strong variability of Sagittarius A* (Sgr A*), the black hole at the Galactic Center. We present a method for self-consistent evolution of a non-thermal electron population in the general relativistic magnetohydrodynamic code koral. The electron distribution is tracked across Lorentz factor space and is evolved in space and time, in parallel with thermal electrons, thermal ions and radiation. In this study, for simplicity, energy injection into the non-thermal distribution is taken as a fixed fraction of the local electron viscous heating rate. Numerical results are presented for a model with a low mass accretion rate similar to that of Sgr A*. We find that the presence of a non-thermal population of electrons has negligible effect on the overall dynamics of the system. Due to our simple uniform particle injection prescription, the radiative power in the non-thermal simulation is enhanced at large radii. The energy distribution of the non-thermal electrons shows a synchrotron cooling break, with the break Lorentz factor varying with location and time, reflecting the complex interplay between the local viscous heating rate, magnetic field strength and fluid velocity.

  6. Numerical Investigations of Capabilities and Limits of Photospheric Data Driven Magnetic Flux Emergence

    NASA Astrophysics Data System (ADS)

    Linton, M.; Leake, J. E.; Schuck, P. W.

    2016-12-01

    The magnetic field of the solar atmosphere is the primary driver of solar activity. Understanding the magnetic state of the solar atmosphere is therefore of key importance to predicting solar activity. One promising means of studying the magnetic atmosphere is to dynamically build up and evolve this atmosphere from the time evolution of emerging magnetic field at the photosphere, where it can be measured with current solar vector magnetograms at high temporal and spatial resolution. We report here on a series of numerical experiments investigating the capabilities and limits of magnetohydrodynamical simulations of such a process, where a magnetic corona is dynamically built up and evolved from a time series of synthetic photospheric data. These synthetic data are composed of photospheric slices taken from self consistent convection zone to corona simulations of flux emergence. The driven coronae are then quantitatively compared against the coronae of the original simulations. We investigate and report on the fidelity of these driven simulations, both as a function of the emergence timescale of the magnetic flux, and as a function of the driving cadence of the input data. These investigations will then be used to outline future prospects and challenges for using observed photospheric data to drive such solar atmospheric simulations. This work was supported by the Chief of Naval Research and the NASA Living with a Star and Heliophysics Supporting Research programs.

  7. Research at the Crossroads: How Intellectual Initiatives across Disciplines Evolve

    ERIC Educational Resources Information Center

    Frost, Susan H.; Jean, Paul M.; Teodorescu, Daniel; Brown, Amy B.

    2004-01-01

    How do intellectual initiatives across disciplines evolve? This qualitative case study of 11 interdisciplinary research initiatives at Emory University identifies key factors in their development: the passionate commitments of scholarly leaders, the presence of strong collegial networks, access to timely and multiple resources, flexible practices,…

  8. Examining faculty awards for gender equity and evolving values.

    PubMed

    Abbuhl, Stephanie; Bristol, Mirar N; Ashfaq, Hera; Scott, Patricia; Tuton, Lucy Wolf; Cappola, Anne R; Sonnad, Seema S

    2010-01-01

    Awards given to medical school faculty are one important mechanism for recognizing what is valued in academic medicine. There have been concerns expressed about the gender distribution of awards, and there is also a growing appreciation for the evolving accomplishments and talents that define academic excellence in the 21st century and that should be considered worthy of award recognition. Examine faculty awards at our institution for gender equity and evolving values. Recipient data were collected on awards from 1996 to 2007 inclusively at the University of Pennsylvania School of Medicine (SOM). Descriptions of each award also were collected. The female-to-male ratio of award recipients over the time span was reviewed for changes and trends. The title and text of each award announcement were reviewed to determine if the award represented a traditional or a newer concept of excellence in academic medicine. There were 21 annual awards given to a total of 59 clinical award recipients, 60 research award recipients, and 154 teaching award recipients. Women received 28% of research awards, 29% of teaching awards and 10% of clinical awards. Gender distribution of total awards was similar to that of SOM full-time faculty except in the clinical awards category. Only one award reflected a shift in the culture of individual achievement to one of collaboration and team performance. Examining both the recipients and content of awards is important to assure they reflect the current composition of diverse faculty and the evolving ideals of leadership and excellence in academic medicine.

  9. Examining Faculty Awards for Gender Equity and Evolving Values

    PubMed Central

    Abbuhl, Stephanie; Bristol, Mirar N.; Ashfaq, Hera; Scott, Patricia; Tuton, Lucy Wolf; Cappola, Anne R.

    2009-01-01

    ABSTRACT BACKGROUND Awards given to medical school faculty are one important mechanism for recognizing what is valued in academic medicine. There have been concerns expressed about the gender distribution of awards, and there is also a growing appreciation for the evolving accomplishments and talents that define academic excellence in the 21st century and that should be considered worthy of award recognition. OBJECTIVE Examine faculty awards at our institution for gender equity and evolving values. METHODS Recipient data were collected on awards from 1996 to 2007 inclusively at the University of Pennsylvania School of Medicine (SOM). Descriptions of each award also were collected. The female-to-male ratio of award recipients over the time span was reviewed for changes and trends. The title and text of each award announcement were reviewed to determine if the award represented a traditional or a newer concept of excellence in academic medicine. MAIN RESULTS There were 21 annual awards given to a total of 59 clinical award recipients, 60 research award recipients, and 154 teaching award recipients. Women received 28% of research awards, 29% of teaching awards and 10% of clinical awards. Gender distribution of total awards was similar to that of SOM full-time faculty except in the clinical awards category. Only one award reflected a shift in the culture of individual achievement to one of collaboration and team performance. CONCLUSION Examining both the recipients and content of awards is important to assure they reflect the current composition of diverse faculty and the evolving ideals of leadership and excellence in academic medicine. PMID:19727968

  10. Resumming the large-N approximation for time evolving quantum systems

    NASA Astrophysics Data System (ADS)

    Mihaila, Bogdan; Dawson, John F.; Cooper, Fred

    2001-05-01

    In this paper we discuss two methods of resumming the leading and next to leading order in 1/N diagrams for the quartic O(N) model. These two approaches have the property that they preserve both boundedness and positivity for expectation values of operators in our numerical simulations. These approximations can be understood either in terms of a truncation to the infinitely coupled Schwinger-Dyson hierarchy of equations, or by choosing a particular two-particle irreducible vacuum energy graph in the effective action of the Cornwall-Jackiw-Tomboulis formalism. We confine our discussion to the case of quantum mechanics where the Lagrangian is $L(x,\dot{x}) = \frac{1}{2}\sum_{i=1}^{N}\dot{x}_i^2 - \frac{g}{8N}\left[\sum_{i=1}^{N}x_i^2 - r_0^2\right]^2$. The key to these approximations is to treat both the $x$ propagator and the $x^2$ propagator on a similar footing, which leads to a theory whose graphs have the same topology as QED with the $x^2$ propagator playing the role of the photon. The bare vertex approximation is obtained by replacing the exact vertex function by the bare one in the exact Schwinger-Dyson equations for the one- and two-point functions. The second approximation, which we call the dynamic Debye screening approximation, makes the further approximation of replacing the exact $x^2$ propagator by its value at leading order in the 1/N expansion. These two approximations are compared with exact numerical simulations for the quantum roll problem. The bare vertex approximation captures the physics at large and modest N better than the dynamic Debye screening approximation.

  11. What Technology? Reflections on Evolving Services

    ERIC Educational Resources Information Center

    Collins, Sharon

    2009-01-01

    Each year, the members of the EDUCAUSE Evolving Technologies Committee identify and research the evolving technologies that are having--or are predicted to have--the most direct impact on higher education institutions. The committee members choose the relevant topics, write white papers, and present their findings at the EDUCAUSE annual…

  12. Evolving discriminators for querying video sequences

    NASA Astrophysics Data System (ADS)

    Iyengar, Giridharan; Lippman, Andrew B.

    1997-01-01

    In this paper we present a framework for content based query and retrieval of information from large video databases. This framework enables content based retrieval of video sequences by characterizing the sequences using motion, texture and colorimetry cues. This characterization is biologically inspired and results in a compact parameter space where every segment of video is represented by an 8-dimensional vector. Searching and retrieval is done in real time with accuracy in this parameter space. Using this characterization, we then evolve a set of discriminators using Genetic Programming. Experiments indicate that these discriminators are capable of analyzing and characterizing video. The VideoBook is able to search and retrieve video sequences with 92% accuracy in real time. Experiments thus demonstrate that the characterization is capable of extracting higher level structure from raw pixel values.
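
    A minimal sketch of the evolutionary-search idea behind such discriminators, operating on 8-dimensional feature vectors like those described above. For brevity it evolves a linear-threshold discriminator with a simple (1+lambda) hill-climber rather than full genetic programming, and the feature data and labels are synthetic placeholders, not the VideoBook descriptors:

        import numpy as np

        rng = np.random.default_rng(0)

        def fitness(w, b, feats, labels):
            """Classification accuracy of a linear-threshold discriminator."""
            return np.mean((feats @ w + b > 0.0) == labels)

        def evolve_discriminator(feats, labels, generations=200, offspring=20, sigma=0.1):
            """(1 + lambda) evolutionary search over the 8-D feature space."""
            w, b = rng.normal(size=feats.shape[1]), 0.0
            best = fitness(w, b, feats, labels)
            for _ in range(generations):
                for _ in range(offspring):
                    w_new = w + sigma * rng.normal(size=w.size)
                    b_new = b + sigma * rng.normal()
                    f_new = fitness(w_new, b_new, feats, labels)
                    if f_new >= best:
                        w, b, best = w_new, b_new, f_new
            return w, b, best

        # Placeholder data standing in for motion/texture/colorimetry descriptors.
        feats = rng.normal(size=(500, 8))
        labels = feats[:, 0] + 0.5 * feats[:, 3] > 0
        _, _, acc = evolve_discriminator(feats, labels)
        print(f"evolved discriminator accuracy: {acc:.2f}")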

  13. Fast Algorithms for Mining Co-evolving Time Series

    DTIC Science & Technology

    2011-09-01

    Keogh et al., 2001, 2004] and (b) forecasting, like an autoregressive integrated moving average model (ARIMA) and related methods [Box et al., 1994...computing hardware? We develop models to mine time series with missing values, to extract compact representation from time sequences, to segment the...sequences, and to do forecasting. For large scale data, we propose algorithms for learning time series models, in particular, including Linear Dynamical
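
    The record names Linear Dynamical System models and forecasting over time series with missing values. A minimal sketch of how an LDS handles gaps, using a Kalman filter that performs a predict-only step whenever an observation is missing; the model matrices below are toy assumptions, not those of the report:

        import numpy as np

        def kalman_filter(y, A, C, Q, R, x0, P0):
            """Kalman filter for x_t = A x_{t-1} + w, y_t = C x_t + v.
            Missing observations (NaN) trigger a predict-only step."""
            x, P = x0, P0
            out = []
            for obs in y:
                # predict
                x = A @ x
                P = A @ P @ A.T + Q
                # update only when the observation is present
                if not np.any(np.isnan(obs)):
                    S = C @ P @ C.T + R
                    K = P @ C.T @ np.linalg.inv(S)
                    x = x + K @ (obs - C @ x)
                    P = (np.eye(len(x)) - K @ C) @ P
                out.append(x.copy())
            return np.array(out)

        # Toy 1-D local-level model with gaps in the observations
        A = np.array([[1.0]]); C = np.array([[1.0]])
        Q = np.array([[0.01]]); R = np.array([[0.1]])
        y = np.array([[1.0], [1.1], [np.nan], [1.3], [np.nan], [1.5]])
        states = kalman_filter(y, A, C, Q, R, x0=np.array([0.0]), P0=np.eye(1))
        print(states.ravel())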

  14. Real time numerical shake prediction incorporating attenuation structure: a case for the 2016 Kumamoto Earthquake

    NASA Astrophysics Data System (ADS)

    Ogiso, M.; Hoshiba, M.; Shito, A.; Matsumoto, S.

    2016-12-01

    Needless to say, heterogeneous attenuation structure is important for ground motion prediction, including earthquake early warning, that is, real time ground motion prediction. Hoshiba and Ogiso (2015, AGU Fall meeting) showed that the heterogeneous attenuation and scattering structure will lead to earlier and more accurate ground motion prediction in the numerical shake prediction scheme proposed by Hoshiba and Aoki (2015, BSSA). Hoshiba and Ogiso (2015) used an assumed heterogeneous structure; here we discuss its effect in the case of the 2016 Kumamoto Earthquake, using heterogeneous structure estimated from actual observation data. We applied Multiple Lapse Time Window Analysis (Hoshiba, 1993, JGR) to the seismic stations located in the western part of Japan to estimate heterogeneous attenuation and scattering structure. The characteristics are similar to the previous work of Carcole and Sato (2010, GJI), e.g. strong intrinsic and scattering attenuation around the volcanoes in the central part of Kyushu, and relatively weak heterogeneities in the other areas. Real time ground motion prediction simulation for the 2016 Kumamoto Earthquake was conducted using the numerical shake prediction scheme with 474 strong ground motion stations. Comparing snapshots of the predicted and observed wavefields showed a tendency for underprediction around the volcanic area in spite of the heterogeneous structure. These facts indicate the necessity of improving the heterogeneous structure for the numerical shake prediction scheme. In this study, we used the waveforms of Hi-net, K-NET, KiK-net stations operated by the NIED for estimating structure and conducting ground motion prediction simulation. Part of this study was supported by the Earthquake Research Institute, the University of Tokyo cooperative research program and JSPS KAKENHI Grant Number 25282114.

  15. Distinct developmental genetic mechanisms underlie convergently evolved tooth gain in sticklebacks

    PubMed Central

    Ellis, Nicholas A.; Glazer, Andrew M.; Donde, Nikunj N.; Cleves, Phillip A.; Agoglia, Rachel M.; Miller, Craig T.

    2015-01-01

    Teeth are a classic model system of organogenesis, as repeated and reciprocal epithelial and mesenchymal interactions pattern placode formation and outgrowth. Less is known about the developmental and genetic bases of tooth formation and replacement in polyphyodonts, which are vertebrates with continual tooth replacement. Here, we leverage natural variation in the threespine stickleback fish Gasterosteus aculeatus to investigate the genetic basis of tooth development and replacement. We find that two derived freshwater stickleback populations have both convergently evolved more ventral pharyngeal teeth through heritable genetic changes. In both populations, evolved tooth gain manifests late in development. Using pulse-chase vital dye labeling to mark newly forming teeth in adult fish, we find that both high-toothed freshwater populations have accelerated tooth replacement rates relative to low-toothed ancestral marine fish. Despite the similar evolved phenotype of more teeth and an accelerated adult replacement rate, the timing of tooth number divergence and the spatial patterns of newly formed adult teeth are different in the two populations, suggesting distinct developmental mechanisms. Using genome-wide linkage mapping in marine-freshwater F2 genetic crosses, we find that the genetic basis of evolved tooth gain in the two freshwater populations is largely distinct. Together, our results support a model whereby increased tooth number and an accelerated tooth replacement rate have evolved convergently in two independently derived freshwater stickleback populations using largely distinct developmental and genetic mechanisms. PMID:26062935

  16. Dynamical Approach Study of Spurious Steady-State Numerical Solutions of Nonlinear Differential Equations. Part 2; Global Asymptotic Behavior of Time Discretizations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.

    1995-01-01

    The global asymptotic nonlinear behavior of 11 explicit and implicit time discretizations for four 2 x 2 systems of first-order autonomous nonlinear ordinary differential equations (ODEs) is analyzed. The objectives are to gain a basic understanding of the difference in the dynamics of numerics between the scalars and systems of nonlinear autonomous ODEs and to set a baseline global asymptotic solution behavior of these schemes for practical computations in computational fluid dynamics. We show how 'numerical' basins of attraction can complement the bifurcation diagrams in gaining more detailed global asymptotic behavior of time discretizations for nonlinear differential equations (DEs). We show how in the presence of spurious asymptotes the basins of the true stable steady states can be segmented by the basins of the spurious stable and unstable asymptotes. One major consequence of this phenomenon which is not commonly known is that this spurious behavior can result in a dramatic distortion and, in most cases, a dramatic shrinkage and segmentation of the basin of attraction of the true solution for finite time steps. Such distortion, shrinkage and segmentation of the numerical basins of attraction will occur regardless of the stability of the spurious asymptotes, and will occur for unconditionally stable implicit linear multistep methods. In other words, for the same (common) steady-state solution the associated basin of attraction of the DE might be very different from the discretized counterparts and the numerical basin of attraction can be very different from numerical method to numerical method. The results can be used as an explanation for possible causes of error, and slow convergence and nonconvergence of steady-state numerical solutions when using the time-dependent approach for nonlinear hyperbolic or parabolic PDEs.
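
    A minimal sketch of the "numerical basin of attraction" idea for a scalar example (not the 2 x 2 systems studied here): the explicit Euler map for u' = u(1 - u) is scanned over initial conditions at several time steps, and for large steps some initial conditions are captured by spurious asymptotes of the discretization rather than by the true steady state:

        import numpy as np

        def euler_map(u, dt):
            """One explicit-Euler step for u' = u(1 - u)."""
            return u + dt * u * (1.0 - u)

        def asymptote(u0, dt, n_iter=2000):
            """Classify the long-time behaviour of the discretized system."""
            u = u0
            for _ in range(n_iter):
                u = euler_map(u, dt)
                if not np.isfinite(u) or abs(u) > 1e6:
                    return "diverges"
            if abs(u - 1.0) < 1e-6:
                return "true steady state u=1"
            if abs(u) < 1e-6:
                return "steady state u=0"
            return "spurious asymptote (e.g. numerical periodic orbit)"

        for dt in (0.5, 1.9, 2.3, 2.7):
            outcomes = {}
            for u0 in np.linspace(-0.5, 1.5, 201):
                label = asymptote(u0, dt)
                outcomes[label] = outcomes.get(label, 0) + 1
            print(f"dt = {dt}: {outcomes}")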

  17. Massive black hole and gas dynamics in galaxy nuclei mergers - I. Numerical implementation

    NASA Astrophysics Data System (ADS)

    Lupi, Alessandro; Haardt, Francesco; Dotti, Massimo

    2015-01-01

    Numerical effects are known to plague adaptive mesh refinement (AMR) codes when treating massive particles, e.g. representing massive black holes (MBHs). In an evolving background, they can experience strong, spurious perturbations and then follow unphysical orbits. We study by means of numerical simulations the dynamical evolution of a pair of MBHs in the rapidly and violently evolving gaseous and stellar background that follows a galaxy major merger. We confirm that spurious numerical effects alter the MBH orbits in AMR simulations, and show that numerical issues are ultimately due to a drop in the spatial resolution during the simulation, drastically reducing the accuracy in the gravitational force computation. We therefore propose a new refinement criterion suited for massive particles, able to solve in a fast and precise way for their orbits in highly dynamical backgrounds. The new refinement criterion we designed enforces the region around each massive particle to remain at the maximum resolution allowed, independently of the local gas density. Such maximally resolved regions then follow the MBHs along their orbits, and effectively avoid all spurious effects caused by resolution changes. Our suite of high-resolution, AMR hydrodynamic simulations, including different prescriptions for the sub-grid gas physics, shows that the new refinement implementation has the advantage of not altering the physical evolution of the MBHs, accounting for all the non-trivial physical processes taking place in violent dynamical scenarios, such as the final stages of a galaxy major merger.
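
    A minimal sketch of the kind of refinement flag described: every cell within a chosen radius of a massive particle is forced to the maximum refinement level, independently of the local gas density. The cell/particle representation and names here are illustrative assumptions, not the paper's implementation:

        import numpy as np

        def refinement_levels(cell_centers, cell_levels, bh_positions, r_max, level_max):
            """Return target refinement levels: any cell within r_max of a massive
            particle is forced to the maximum level, regardless of the local gas
            density; all other cells keep their current level."""
            target = cell_levels.copy()
            for pos in bh_positions:
                d = np.linalg.norm(cell_centers - pos, axis=1)
                target[d < r_max] = level_max
            return target

        # Toy example: 1000 random cells, two massive black holes
        rng = np.random.default_rng(1)
        centers = rng.uniform(0.0, 1.0, size=(1000, 3))
        levels = rng.integers(4, 10, size=1000)
        bhs = np.array([[0.3, 0.3, 0.3], [0.7, 0.6, 0.5]])
        new_levels = refinement_levels(centers, levels, bhs, r_max=0.05, level_max=12)
        print((new_levels == 12).sum(), "cells forced to the maximum level")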

  18. Evolved atmospheric entry corridor with safety factor

    NASA Astrophysics Data System (ADS)

    Liang, Zixuan; Ren, Zhang; Li, Qingdong

    2018-02-01

    Atmospheric entry corridors were established in previous research based on the equilibrium glide condition, which assumes the flight-path angle to be zero. To get a better understanding of the highly constrained entry flight, an evolved entry corridor that considers the exact flight-path angle is developed in this study. Firstly, the conventional corridor in the altitude vs. velocity plane is extended into a three-dimensional one in the space of altitude, velocity, and flight-path angle. The three-dimensional corridor is generated by a series of constraint boxes. Then, based on a simple mapping method, an evolved two-dimensional entry corridor with safety factor is obtained. The safety factor is defined to describe the flexibility of the flight-path angle for a state within the corridor. Finally, the evolved entry corridor is simulated for the Space Shuttle and the Common Aero Vehicle (CAV) to demonstrate the effectiveness of the corridor generation approach. Compared with the conventional corridor, the evolved corridor is much wider and provides additional information. Therefore, the evolved corridor would be of greater benefit to entry trajectory design and analysis.

  19. Evolvable Neural Software System

    NASA Technical Reports Server (NTRS)

    Curtis, Steven A.

    2009-01-01

    The Evolvable Neural Software System (ENSS) is composed of sets of Neural Basis Functions (NBFs), which can be totally autonomously created and removed according to the changing needs and requirements of the software system. The resulting structure is both hierarchical and self-similar in that a given set of NBFs may have a ruler NBF, which in turn communicates with other sets of NBFs. These sets of NBFs may function as nodes to a ruler node, which are also NBF constructs. In this manner, the synthetic neural system can exhibit the complexity, three-dimensional connectivity, and adaptability of biological neural systems. An added advantage of ENSS over a natural neural system is its ability to modify its core genetic code in response to environmental changes as reflected in needs and requirements. The neural system is fully adaptive and evolvable and is trainable before release. It continues to rewire itself while on the job. The NBF is a unique, bilevel intelligence neural system composed of a higher-level heuristic neural system (HNS) and a lower-level, autonomic neural system (ANS). Taken together, the HNS and the ANS give each NBF the complete capabilities of a biological neural system to match sensory inputs to actions. Another feature of the NBF is the Evolvable Neural Interface (ENI), which links the HNS and ANS. The ENI solves the interface problem between these two systems by actively adapting and evolving from a primitive initial state (a Neural Thread) to a complicated, operational ENI and successfully adapting to a training sequence of sensory input. This simulates the adaptation of a biological neural system in a developmental phase. Within the greater multi-NBF and multi-node ENSS, self-similar ENIs provide the basis for inter-NBF and inter-node connectivity.

  20. Numerical Analysis of an H^1-Galerkin Mixed Finite Element Method for Time Fractional Telegraph Equation

    PubMed Central

    Wang, Jinfeng; Zhao, Meng; Zhang, Min; Liu, Yang; Li, Hong

    2014-01-01

    We discuss and analyze an H^1-Galerkin mixed finite element (H^1-GMFE) method to find the numerical solution of the time fractional telegraph equation. We introduce an auxiliary variable to reduce the original equation into lower-order coupled equations and then formulate an H^1-GMFE scheme with two important variables. We discretize the Caputo time fractional derivatives using finite difference methods and approximate the spatial direction by applying the H^1-GMFE method. Based on the theoretical error analysis in the L^2-norm for the scalar unknown and its gradient in the one-dimensional case, we obtain the optimal order of convergence in the space-time direction. Further, we also derive the optimal error results for the scalar unknown in the H^1-norm. Moreover, we derive and analyze the stability of the H^1-GMFE scheme and give the results of a priori error estimates in two- or three-dimensional cases. In order to verify our theoretical analysis, we give some results of numerical calculation by using a Matlab procedure. PMID:25184148
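
    The abstract states that the Caputo time-fractional derivatives are discretized by finite differences without naming the scheme; the widely used L1 approximation for a Caputo derivative of order 0 < alpha < 1 (the reduced, lower-order equations involve derivatives of this kind) reads, as an illustrative assumption:

        \[
          {}^{C}D_t^{\alpha} u(t_n) \;\approx\;
          \frac{\Delta t^{-\alpha}}{\Gamma(2-\alpha)}
          \sum_{k=0}^{n-1} b_k \left( u^{\,n-k} - u^{\,n-k-1} \right),
          \qquad
          b_k = (k+1)^{1-\alpha} - k^{1-\alpha},
        \]

    with truncation error of order Delta t^(2 - alpha).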

  1. Time-dependent behavior of porcine periodontal ligament: A combined experimental, numeric in-vitro study.

    PubMed

    Knaup, Thomas Johannes; Dirk, Cornelius; Reimann, Susanne; Keilig, Ludger; Eschbach, Meike; Korbmacher-Steiner, Heike; Bourauel, Christoph

    2018-01-01

    The aim of this study was to analyze the time-dependent in-vitro behavior of the periodontal ligament (PDL) by determining the material parameters using specimens of porcine jawbone. Time-dependent material parameters to be determined were expected to complement the results from earlier biomechanical studies. Five mandibular deciduous porcine premolars were analyzed in a combined experimental-numeric study. After selecting suitable specimens (excluding root resorption) and preparing the measurement system, the specimens were deflected by a distance of 0.2 mm at loading times of 0.2, 0.5, 1, 2, 5, 10, and 60 seconds. The deflection of the teeth was determined via a laser optical system, and the resulting forces and torques were measured. To create the finite element models, a microcomputed tomography scanner was used to create 3-dimensional x-ray images of the samples. The individual structures (tooth, PDL, bone) of the jaw segments were reconstructed using a self-developed reconstruction program. A comparison between experiment and simulation was conducted using the results from finite element simulations. Via iterative parameter adjustments, the material parameters (Young's modulus and Poisson's ratio) of the PDL were assessed at different loading velocities. The clinically observed effect of a distinct increase in force during very short periods of loading was confirmed. Thus, a force of 2.6 N (±1.5 N) was measured at the shortest stress duration of 0.2 seconds, and a force of 1.0 N (±0.5 N) was measured at the longest stress duration of 60 seconds. The numeric determination of the material parameters showed bilinear behavior with a median value of the first Young's modulus between 0.06 MPa (2 seconds) and 0.04 MPa (60 seconds), and the second Young's modulus between 0.30 MPa (10 seconds) and 0.20 MPa (60 seconds). The ultimate strain marking the transition from the first to the second Young's modulus remained almost unchanged with a median

  2. Evolving phenotypic networks in silico.

    PubMed

    François, Paul

    2014-11-01

    Evolved gene networks are constrained by natural selection. Their structures and functions are consequently far from being random, as exemplified by the multiple instances of parallel/convergent evolution. One can thus ask if features of actual gene networks can be recovered from evolutionary first principles. I review a method for in silico evolution of small models of gene networks aiming at performing predefined biological functions. I summarize the current implementation of the algorithm, insisting on the construction of a proper "fitness" function. I illustrate the approach on three examples: biochemical adaptation, ligand discrimination and vertebrate segmentation (somitogenesis). While the structure of the evolved networks is variable, dynamics of our evolved networks are usually constrained and present many similar features to actual gene networks, including properties that were not explicitly selected for. In silico evolution can thus be used to predict biological behaviours without a detailed knowledge of the mapping between genotype and phenotype. Copyright © 2014 The Author. Published by Elsevier Ltd. All rights reserved.

  3. A numerical framework for the direct simulation of dense particulate flow under explosive dispersal

    NASA Astrophysics Data System (ADS)

    Mo, H.; Lien, F.-S.; Zhang, F.; Cronin, D. S.

    2018-05-01

    In this paper, we present a Cartesian grid-based numerical framework for the direct simulation of dense particulate flow under explosive dispersal. This numerical framework is established through the integration of the following numerical techniques: (1) operator splitting for partitioned fluid-solid interaction in the time domain, (2) the second-order SSP Runge-Kutta method and third-order WENO scheme for temporal and spatial discretization of governing equations, (3) the front-tracking method for evolving phase interfaces, (4) a field function proposed for low-memory-cost multimaterial mesh generation and fast collision detection, (5) an immersed boundary method developed for treating arbitrarily irregular and changing boundaries, and (6) a deterministic multibody contact and collision model. Employing the developed framework, this paper further studies particle jet formation under explosive dispersal by considering the effects of particle properties, particulate payload morphologies, and burster pressures. By the simulation of the dispersal processes of dense particle systems driven by pressurized gas, in which the driver pressure reaches 1.01325 × 10^10 Pa (10^5 times the ambient pressure) and particles are impulsively accelerated from stationary to a speed that is more than 12000 m/s within 15 μs, it is demonstrated that the presented framework is able to effectively resolve coupled shock-shock, shock-particle, and particle-particle interactions in complex fluid-solid systems with shocked flow conditions, arbitrarily irregular particle shapes, and realistic multibody collisions.
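
    A minimal sketch of the second-order SSP Runge-Kutta step listed among the framework's ingredients, applied to a toy upwind-discretized advection problem standing in for the WENO-discretized spatial operator (illustrative only, not the paper's solver):

        import numpy as np

        def ssp_rk2_step(u, dt, rhs):
            """Standard two-stage, second-order SSP Runge-Kutta step:
               u1      = u + dt * L(u)
               u_{n+1} = 1/2 * u + 1/2 * (u1 + dt * L(u1))
            """
            u1 = u + dt * rhs(u)
            return 0.5 * u + 0.5 * (u1 + dt * rhs(u1))

        # Toy right-hand side: linear advection on a periodic grid with a simple
        # first-order upwind difference (placeholder for the WENO operator).
        def rhs(u, a=1.0, dx=0.01):
            return -a * (u - np.roll(u, 1)) / dx

        x = np.linspace(0.0, 1.0, 100, endpoint=False)
        u = np.exp(-200.0 * (x - 0.5) ** 2)
        initial_mass = u.sum()
        dt = 0.5 * 0.01                      # CFL number 0.5 for this toy problem
        for _ in range(200):
            u = ssp_rk2_step(u, dt, rhs)
        print("mass conserved to round-off:", np.isclose(u.sum(), initial_mass))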

  4. Numerical Simulation of Transit-Time Ultrasonic Flowmeters by a Direct Approach.

    PubMed

    Luca, Adrian; Marchiano, Regis; Chassaing, Jean-Camille

    2016-06-01

    This paper deals with the development of a computational code for the numerical simulation of wave propagation through domains with a complex geometry consisting of both solids and moving fluids. The emphasis is on the numerical simulation of ultrasonic flowmeters (UFMs) by modeling the wave propagation in solids with the equations of linear elasticity (ELE) and in fluids with the linearized Euler equations (LEEs). This approach requires high performance computing because of the high number of degrees of freedom and the long propagation distances. Therefore, the numerical method should be chosen with care. In order to minimize the numerical dissipation which may occur in this kind of configuration, the numerical method employed here is the nodal discontinuous Galerkin (DG) method. Also, this method is well suited for parallel computing. To speed up the code, almost all the computational stages have been implemented to run on a graphical processing unit (GPU) by using the compute unified device architecture (CUDA) programming model from NVIDIA. This approach has been validated and then used for the two-dimensional simulation of gas UFMs. The large contrast of acoustic impedance characteristic of gas UFMs makes their simulation a real challenge.

  5. Nonlinear Dynamics in Gene Regulation Promote Robustness and Evolvability of Gene Expression Levels.

    PubMed

    Steinacher, Arno; Bates, Declan G; Akman, Ozgur E; Soyer, Orkun S

    2016-01-01

    Cellular phenotypes underpinned by regulatory networks need to respond to evolutionary pressures to allow adaptation, but at the same time be robust to perturbations. This creates a conflict in which mutations affecting regulatory networks must both generate variance but also be tolerated at the phenotype level. Here, we perform mathematical analyses and simulations of regulatory networks to better understand the potential trade-off between robustness and evolvability. Examining the phenotypic effects of mutations, we find an inverse correlation between robustness and evolvability that breaks only with nonlinearity in the network dynamics, through the creation of regions presenting sudden changes in phenotype with small changes in genotype. For genotypes embedding low levels of nonlinearity, robustness and evolvability correlate negatively and almost perfectly. By contrast, genotypes embedding nonlinear dynamics allow expression levels to be robust to small perturbations, while generating high diversity (evolvability) under larger perturbations. Thus, nonlinearity breaks the robustness-evolvability trade-off in gene expression levels by allowing disparate responses to different mutations. Using analytical derivations of robustness and system sensitivity, we show that these findings extend to a large class of gene regulatory network architectures and also hold for experimentally observed parameter regimes. Further, the effect of nonlinearity on the robustness-evolvability trade-off is ensured as long as key parameters of the system display specific relations irrespective of their absolute values. We find that within this parameter regime genotypes display low and noisy expression levels. Examining the phenotypic effects of mutations, we find an inverse correlation between robustness and evolvability that breaks only with nonlinearity in the network dynamics. Our results provide a possible solution to the robustness-evolvability trade-off, suggest an explanation for

  6. A simple numerical model for predicting organic matter decomposition in a fed-batch composting operation.

    PubMed

    Nakasaki, Kiyohiko; Ohtaki, Akihito

    2002-01-01

    Using dog food as a model of the organic waste that comprises composting raw material, the degradation pattern of organic materials was examined by continuously measuring the quantity of CO2 evolved during the composting process in both batch and fed-batch operations. A simple numerical model was made on the basis of three suppositions for describing the organic matter decomposition in the batch operation. First, a certain quantity of carbon in the dog food was assumed to be recalcitrant to degradation in the composting reactor within the retention time allowed. Second, it was assumed that the decomposition rate of carbon is proportional to the quantity of easily degradable carbon, that is, the carbon recalcitrant to degradation was subtracted from the total carbon remaining in the dog food. Third, a certain lag time is assumed to occur before the start of active decomposition of organic matter in the dog food; this lag corresponds to the time required for microorganisms to proliferate and become active. It was then ascertained that the decomposition pattern for the organic matter in the dog food during the fed-batch operation could be predicted by the numerical model with the parameters obtained from the batch operation. This numerical model was modified so that the change in dry weight of composting materials could be obtained. The modified model was found suitable for describing the organic matter decomposition pattern in an actual fed-batch composting operation of the garbage obtained from a restaurant, approximately 10 kg d(-1) loading for 60 d.
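
    A minimal numerical sketch of the three suppositions described: an inert recalcitrant carbon fraction, first-order decay of the easily degradable carbon, and a lag time before active decomposition. Parameter values are illustrative rather than the fitted ones, and the fed-batch curve is built by superposing the batch response of each daily feeding, consistent with the model described:

        import numpy as np

        def co2_evolved(t, c_total, f_recalcitrant, k, t_lag):
            """Cumulative carbon evolved as CO2 at times t: first-order decay of the
            easily degradable carbon after a lag time; recalcitrant carbon is inert."""
            c_easy = c_total * (1.0 - f_recalcitrant)
            t_active = np.clip(np.asarray(t, dtype=float) - t_lag, 0.0, None)
            return c_easy * (1.0 - np.exp(-k * t_active))

        # Illustrative parameters (not the values fitted in the study)
        t = np.linspace(0.0, 20.0, 201)                       # days
        batch = co2_evolved(t, c_total=100.0, f_recalcitrant=0.4, k=0.35, t_lag=1.5)

        # Fed-batch: superpose the batch response of each daily feeding
        feed_times = np.arange(0.0, 10.0, 1.0)
        fed_batch = sum(co2_evolved(t - t0, 10.0, 0.4, 0.35, 1.5) for t0 in feed_times)
        print(f"batch CO2-C after 20 d: {batch[-1]:.1f} (of 60.0 degradable)")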

  7. LABORATORY AND NUMERICAL INVESTIGATIONS OF RESIDENCE TIME DISTRIBUTION OF FLUIDS IN LAMINAR FLOW STIRRED ANNULAR PHOTOREACTOR

    EPA Science Inventory

    Laboratory and Numerical Investigations of Residence Time Distribution of Fluids in Laminar Flow Stirred Annular Photoreactor

    E. Sahle-Demessie1, Siefu Bekele2, U. R. Pillai1

    1U.S. EPA, National Risk Management Research Laboratory
    Sustainable Technology Division,...

  8. Evolving user needs and late-mover advantage

    PubMed Central

    Querbes, Adrien; Frenken, Koen

    2016-01-01

    We propose a generalized NK-model of late-mover advantage where late-mover firms leapfrog first-mover firms as user needs evolve over time. First movers face severe trade-offs between the provision of functionalities in which their products already excel and the additional functionalities requested by users later on. Late movers, by contrast, start searching when more functionalities are already known and typically come up with superior product designs. We also show that late-mover advantage is more probable for more complex technologies. Managerial implications follow. PMID:28596705

  9. The influence of time units on the flexibility of the spatial numerical association of response codes effect.

    PubMed

    Zhao, Tingting; He, Xianyou; Zhao, Xueru; Huang, Jianrui; Zhang, Wei; Wu, Shuang; Chen, Qi

    2018-05-01

    The Spatial Numerical/Temporal Association of Response Codes (SNARC/STEARC) effects are considered evidence of the association between number or time and space, respectively. Since the SNARC effect was proposed by Dehaene, Bossini, and Giraux in 1993, several studies have suggested that different tasks and cultural factors can affect the flexibility of the SNARC effect. This study explored the influence of time units on the flexibility of the SNARC effect via materials with Arabic numbers, which were suffixed with time units and subjected to magnitude comparison tasks. Experiment 1 replicated the SNARC effect for numbers and the STEARC effect for time units. Experiment 2 explored the flexibility of the SNARC effect when numbers were attached to time units, which either conflicted with the numerical magnitude or in which the time units were the same or different. Experiment 3 explored whether the SNARC effect of numbers was stable when numbers were near the transition of two adjacent time units. The results indicate that the SNARC effect was flexible when the numbers were suffixed with time units: Time units influenced the direction of the SNARC effect in a way which could not be accounted for by the mathematical differences between the time units and numbers. This suggests that the SNARC effect is not obligatory and can be easily adapted or inhibited based on the current context. © 2017 The Authors. British Journal of Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.

  10. Evolving Gravitationally Unstable Disks over Cosmic Time: Implications for Thick Disk Formation

    NASA Astrophysics Data System (ADS)

    Forbes, John; Krumholz, Mark; Burkert, Andreas

    2012-07-01

    Observations of disk galaxies at z ~ 2 have demonstrated that turbulence driven by gravitational instability can dominate the energetics of the disk. We present a one-dimensional simulation code, which we have made publicly available, that economically evolves these galaxies from z ~ 2 to z ~ 0 on a single CPU in a matter of minutes, tracking column density, metallicity, and velocity dispersions of gaseous and multiple stellar components. We include an H2-regulated star formation law and the effects of stellar heating by transient spiral structure. We use this code to demonstrate a possible explanation for the existence of a thin and thick disk stellar population and the age-velocity-dispersion correlation of stars in the solar neighborhood: the high velocity dispersion of gas in disks at z ~ 2 decreases along with the cosmological accretion rate, while at lower redshift the dynamically colder gas forms the low velocity dispersion stars of the thin disk.

  11. A time-implicit numerical method and benchmarks for the relativistic Vlasov–Ampere equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrié, Michael, E-mail: mcarrie2@unl.edu; Shadwick, B. A., E-mail: shadwick@mailaps.org

    2016-01-15

    We present a time-implicit numerical method to solve the relativistic Vlasov–Ampere system of equations on a two-dimensional phase-space grid. The time-splitting algorithm we use allows the generalization of the work presented here to higher dimensions keeping the linear aspect of the resulting discrete set of equations. The implicit method is benchmarked against linear theory results for the relativistic Landau damping for which analytical expressions using the Maxwell-Jüttner distribution function are derived. We note that, independently of the shape of the distribution function, the relativistic treatment features collective behaviours that do not exist in the nonrelativistic case. The numerical study of the relativistic two-stream instability completes the set of benchmarking tests.

  12. Dynamics of Large Systems of Nonlinearly Evolving Units

    NASA Astrophysics Data System (ADS)

    Lu, Zhixin

    The dynamics of large systems of many nonlinearly evolving units is a general research area that has great importance for many areas in science and technology, including biology, computation by artificial neural networks, statistical mechanics, flocking in animal groups, the dynamics of coupled neurons in the brain, and many others. While universal principles and techniques are largely lacking in this broad area of research, there is still one particular phenomenon that seems to be broadly applicable. In particular, this is the idea of emergence, by which is meant macroscopic behaviors that "emerge" from a large system of many "smaller or simpler entities such that...large entities" [i.e., macroscopic behaviors] arise which "exhibit properties the smaller/simpler entities do not exhibit." In this thesis we investigate mechanisms and manifestations of emergence in four dynamical systems consisting many nonlinearly evolving units. These four systems are as follows. (a) We first study the motion of a large ensemble of many noninteracting particles in a slowly changing Hamiltonian system that undergoes a separatrix crossing. In such systems, we find that separatrix-crossing induces a counterintuitive effect. Specifically, numerical simulation of two sets of densely sprinkled initial conditions on two energy curves appears to suggest that the two energy curves, one originally enclosing the other, seemingly interchange their positions. This, however, is topologically forbidden. We resolve this paradox by introducing a numerical simulation method we call "robust" and study its consequences. (b) We next study the collective dynamics of oscillatory pacemaker neurons in Suprachiasmatic Nucleus (SCN), which, through synchrony, govern the circadian rhythm of mammals. We start from a high-dimensional description of the many coupled oscillatory neuronal units within the SCN. This description is based on a forced Kuramoto model. We then reduce the system dimensionality by using

  13. Numerically Exact Long Time Magnetization Dynamics Near the Nonequilibrium Kondo Regime

    NASA Astrophysics Data System (ADS)

    Cohen, Guy; Gull, Emanuel; Reichman, David; Millis, Andrew; Rabani, Eran

    2013-03-01

    The dynamical and steady-state spin response of the nonequilibrium Anderson impurity model to magnetic fields, bias voltages, and temperature is investigated by a numerically exact method which allows access to unprecedentedly long times. The method is based on using real, continuous time bold Monte Carlo techniques--quantum Monte Carlo sampling of diagrammatic corrections to a partial re-summation--in order to compute the kernel of a memory function, which is then used to determine the reduced density matrix. The method owes its effectiveness to the fact that the memory kernel is dominated by relatively short-time properties even when the system's dynamics are long-ranged. We make predictions regarding the non-monotonic temperature dependence of the system at high bias voltage and the oscillatory quench dynamics at high magnetic fields. We also discuss extensions of the method to the computation of transport properties and correlation functions, and its suitability as an impurity solver free from the need for analytical continuation in the context of dynamical mean field theory. This work is supported by the US Department of Energy under grant DE-SC0006613, by NSF-DMR-1006282 and by the US-Israel Binational Science Foundation. GC is grateful to the Yad Hanadiv-Rothschild Foundation for the award of a Rothschild Fellowship.

  14. Evolving Choice Inconsistencies in Choice of Prescription Drug Insurance

    PubMed Central

    ABALUCK, JASON

    2017-01-01

    We study choice over prescription insurance plans by the elderly using government administrative data to evaluate how these choices evolve over time. We find large “foregone savings” from not choosing the lowest cost plan that has grown over time. We develop a structural framework to decompose the changes in “foregone welfare” from inconsistent choices into choice set changes and choice function changes from a fixed choice set. We find that foregone welfare increases over time due primarily to changes in plan characteristics such as premiums and out-of-pocket costs; we estimate little learning at either the individual or cohort level. PMID:29104294

  15. Visualization of evolving laser-generated structures by frequency domain tomography

    NASA Astrophysics Data System (ADS)

    Chang, Yenyu; Li, Zhengyan; Wang, Xiaoming; Zgadzaj, Rafal; Downer, Michael

    2011-10-01

    We introduce frequency domain tomography (FDT) for single-shot visualization of time-evolving refractive index structures (e.g. laser wakefields, nonlinear index structures) moving at light-speed. Previous researchers demonstrated single-shot frequency domain holography (FDH), in which a probe-reference pulse pair co-propagates with the laser-generated structure, to obtain snapshot-like images. However, in FDH, information about the structure's evolution is averaged. To visualize an evolving structure, we use several frequency domain streak cameras (FDSCs), in each of which a probe-reference pulse pair propagates at an angle to the propagation direction of the laser-generated structure. The combination of several FDSCs constitutes the FDT system. We will present experimental results for a 4-probe FDT system that has imaged the whole-beam self-focusing of a pump pulse propagating through glass in a single laser shot. Combining temporal and angle multiplexing methods, we successfully processed data from four probe pulses in one spectrometer in a single shot. The output of data processing is a multi-frame movie of the self-focusing pulse. Our results promise the possibility of visualizing evolving laser wakefield structures that underlie laser-plasma accelerators used for multi-GeV electron acceleration.

  16. Effortful Control, Explicit Processing, and the Regulation of Human Evolved Predispositions

    ERIC Educational Resources Information Center

    MacDonald, Kevin B.

    2008-01-01

    This article analyzes the effortful control of automatic processing related to social and emotional behavior, including control over evolved modules designed to solve problems of survival and reproduction that were recurrent over evolutionary time. The inputs to effortful control mechanisms include a wide range of nonrecurrent…

  17. Numerical classification of coding sequences

    NASA Technical Reports Server (NTRS)

    Collins, D. W.; Liu, C. C.; Jukes, T. H.

    1992-01-01

    DNA sequences coding for protein may be represented by counts of nucleotides or codons. A complete reading frame may be abbreviated by its base count, e.g. A76C158G121T74, or with the corresponding codon table, e.g. (AAA)0(AAC)1(AAG)9 ... (TTT)0. We propose that these numerical designations be used to augment current methods of sequence annotation. Because base counts and codon tables do not require revision as knowledge of function evolves, they are well-suited to act as cross-references, for example to identify redundant GenBank entries. These descriptors may be compared, in place of DNA sequences, to extract homologous genes from large databases. This approach permits rapid searching with good selectivity.
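
    The proposed descriptors are straightforward to compute; a short sketch that produces the base-count abbreviation and a codon table for a reading frame (toy sequence, illustrative only):

        from collections import Counter

        def base_count(seq):
            """Abbreviate a reading frame by its base count, e.g. 'A76C158G121T74'."""
            counts = Counter(seq.upper())
            return "".join(f"{b}{counts.get(b, 0)}" for b in "ACGT")

        def codon_table(seq):
            """Count codon usage over a complete reading frame."""
            seq = seq.upper()
            codons = [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]
            return Counter(codons)

        cds = "ATGGCTAAGGAACTGTAA"          # toy coding sequence
        print(base_count(cds))              # prints A7C2G5T4
        print(codon_table(cds).most_common())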

  18. Numerical method for accessing the universal scaling function for a multiparticle discrete time asymmetric exclusion process

    NASA Astrophysics Data System (ADS)

    Chia, Nicholas; Bundschuh, Ralf

    2005-11-01

    In the universality class of the one-dimensional Kardar-Parisi-Zhang (KPZ) surface growth, Derrida and Lebowitz conjectured the universality of not only the scaling exponents, but of an entire scaling function. Since Derrida and Lebowitz’s original publication [Phys. Rev. Lett. 80, 209 (1998)], this universality has been verified for a variety of continuous-time, periodic-boundary systems in the KPZ universality class. Here, we present a numerical method for directly examining the entire particle flux of the asymmetric exclusion process (ASEP), thus providing an alternative to more difficult cumulant ratio studies. Using this method, we find that the Derrida-Lebowitz scaling function (DLSF) properly characterizes the large-system-size limit (N→∞) of a single-particle discrete time system, even in the case of very small system sizes (N⩽22). This fact allows us to not only verify that the DLSF properly characterizes multiple-particle discrete-time asymmetric exclusion processes, but also provides a way to numerically solve for quantities of interest, such as the particle hopping flux. This method can thus serve to further increase the ease and accessibility of studies involving even more challenging dynamics, such as the open-boundary ASEP.
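
    A minimal sketch of the quantity of interest, the particle-hopping flux of a discrete-time ASEP on a ring with a parallel update (each particle hops right with probability p when its right neighbour is empty). This illustrates the model itself, not the authors' method for extracting the Derrida-Lebowitz scaling function:

        import numpy as np

        rng = np.random.default_rng(2)

        def asep_flux(n_sites, n_particles, p, n_steps):
            """Average particle flux of a discrete-time ASEP on a ring
            (parallel update: each particle hops right with prob. p if empty)."""
            occ = np.zeros(n_sites, dtype=bool)
            occ[rng.choice(n_sites, size=n_particles, replace=False)] = True
            hops = 0
            for _ in range(n_steps):
                right = np.roll(occ, -1)
                movable = occ & ~right                      # empty right neighbour
                move = movable & (rng.random(n_sites) < p)  # hops attempted this step
                occ = (occ & ~move) | np.roll(move, 1)      # vacate origin, fill target
                hops += move.sum()
            return hops / (n_steps * n_sites)

        print(asep_flux(n_sites=200, n_particles=100, p=0.75, n_steps=5000))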

  19. Mining the HST Treasury: The ASTRAL Reference Spectra for Evolved M Stars

    NASA Technical Reports Server (NTRS)

    Carpenter, K. G.; Ayres, T.; Harper, G.; Kober, G.; Wahlgren, G. M.

    2012-01-01

    The "Advanced Spectral Library (ASTRAL) Project: Cool Stars" (PI = T. Ayres) is an HST Cycle 18 Treasury Program designed to collect a definitive set of representative, high-resolution (R greater than 100,000) and high signal/noise (S/N greater than 100) UV spectra of eight F-M evolved cool stars. These extremely high-quality STIS UV echelle spectra are available from the HST archive and through the University of Colorado (http://casa.colorado.edu/ayres/ASTRAL/) portal and will enable investigations of a broad range of problems -- stellar, interstellar. and beyond -- for many years. In this current paper, we concentrate on producing a roadrnap to the very rich spectra of the two evolved M stars in the sample, the M3.4 giant Gamma Crucis (GaCrux) and the M2Iab supergiant Alpha Orionis (Betelgeuse) and illustrate the huge increase in coverage and quality that these spectra provide over that previously available from IUE and earlier HST observations. These roadmaps will facilitate the study of the spectra, outer atmospheres, and winds of not only these stars. but also numerous other cool, low-gravity stars and make a very interesting comparison to the already-available atlases of the K2III giant Arcturus.

  20. Helicopter time-domain electromagnetic numerical simulation based on Leapfrog ADI-FDTD

    NASA Astrophysics Data System (ADS)

    Guan, S.; Ji, Y.; Li, D.; Wu, Y.; Wang, A.

    2017-12-01

    We present a three-dimensional (3D) leapfrog alternating-direction-implicit finite-difference time-domain (leapfrog ADI-FDTD) method for the simulation of helicopter time-domain electromagnetic (HTEM) detection. This method differs from the traditional explicit FDTD and from ADI-FDTD. Compared with explicit FDTD, the leapfrog ADI-FDTD algorithm is no longer limited by the Courant-Friedrichs-Lewy (CFL) condition, so the time step can be longer. Compared with ADI-FDTD, we reduce the number of equations from 12 to 6, and the leapfrog ADI-FDTD method is easier to apply in general simulations. First, we determine initial conditions, which are adopted from the existing method presented by Wang and Tripp (1993). Second, we derive the Maxwell equations using a new finite-difference formulation based on the leapfrog ADI-FDTD method; the purpose is to eliminate the sub-time step while retaining unconditional stability. Third, we add the convolutional perfectly matched layer (CPML) absorbing boundary condition to the leapfrog ADI-FDTD simulation and study the absorbing effect of different parameters; since different absorbing parameters affect the absorbing ability, we find suitable parameters after many numerical experiments. Fourth, we compare the response with a 1-D numerical result for a homogeneous half-space to verify the correctness of our algorithm. When the model contains 107*107*53 grid points and the conductivity is 0.05 S/m, the results show that leapfrog ADI-FDTD needs less simulation time and computer storage space than ADI-FDTD: the computation time decreases nearly fourfold and memory occupation decreases by about 32.53%. Thus, this algorithm is more efficient than the conventional ADI-FDTD method for HTEM detection, and is more precise than explicit FDTD at late times.
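
    For reference, the stability limit that constrains explicit FDTD on a uniform 3-D grid, and from which ADI-type schemes are free, is the familiar CFL condition (not stated in the abstract):

        \[
          \Delta t \;\le\; \frac{1}{c\,\sqrt{\dfrac{1}{\Delta x^{2}} + \dfrac{1}{\Delta y^{2}} + \dfrac{1}{\Delta z^{2}}}},
        \]

    where c is the maximum wave speed in the model.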

  1. Revisiting Robustness and Evolvability: Evolution in Weighted Genotype Spaces

    PubMed Central

    Partha, Raghavendran; Raman, Karthik

    2014-01-01

    Robustness and evolvability are highly intertwined properties of biological systems. The relationship between these properties determines how biological systems are able to withstand mutations and show variation in response to them. Computational studies have explored the relationship between these two properties using neutral networks of RNA sequences (genotype) and their secondary structures (phenotype) as a model system. However, these studies have assumed every mutation to a sequence to be equally likely; the differences in the likelihood of the occurrence of various mutations, and the consequence of probabilistic nature of the mutations in such a system have previously been ignored. Associating probabilities to mutations essentially results in the weighting of genotype space. We here perform a comparative analysis of weighted and unweighted neutral networks of RNA sequences, and subsequently explore the relationship between robustness and evolvability. We show that assuming an equal likelihood for all mutations (as in an unweighted network), underestimates robustness and overestimates evolvability of a system. In spite of discarding this assumption, we observe that a negative correlation between sequence (genotype) robustness and sequence evolvability persists, and also that structure (phenotype) robustness promotes structure evolvability, as observed in earlier studies using unweighted networks. We also study the effects of base composition bias on robustness and evolvability. Particularly, we explore the association between robustness and evolvability in a sequence space that is AU-rich – sequences with an AU content of 80% or higher, compared to a normal (unbiased) sequence space. We find that evolvability of both sequences and structures in an AU-rich space is lesser compared to the normal space, and robustness higher. We also observe that AU-rich populations evolving on neutral networks of phenotypes, can access less phenotypic variation compared to

  2. Long-Time Numerical Integration of the Three-Dimensional Wave Equation in the Vicinity of a Moving Source

    NASA Technical Reports Server (NTRS)

    Ryabenkii, V. S.; Turchaninov, V. I.; Tsynkov, S. V.

    1999-01-01

    We propose a family of algorithms for solving numerically a Cauchy problem for the three-dimensional wave equation. The sources that drive the equation (i.e., the right-hand side) are compactly supported in space for any given time; they, however, may actually move in space with a subsonic speed. The solution is calculated inside a finite domain (e.g., sphere) that also moves with a subsonic speed and always contains the support of the right-hand side. The algorithms employ a standard consistent and stable explicit finite-difference scheme for the wave equation. They allow one to calculate the solution for arbitrarily long time intervals without error accumulation and with the fixed non-growing amount of CPU time and memory required for advancing one time step. The algorithms are inherently three-dimensional; they rely on the presence of lacunae in the solutions of the wave equation in odd-dimensional spaces. The methodology presented in the paper is, in fact, a building block for constructing the nonlocal highly accurate unsteady artificial boundary conditions to be used for the numerical simulation of waves propagating with finite speed over unbounded domains.
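
    The abstract refers to "a standard consistent and stable explicit finite-difference scheme for the wave equation" without specifying it; the canonical second-order centered (leapfrog) discretization of u_tt = c^2 Laplacian(u) + f on a uniform grid, assumed here only for illustration, is

        \[
          u^{n+1}_{ijk} = 2u^{n}_{ijk} - u^{n-1}_{ijk}
          + c^{2}\Delta t^{2}\,\Delta_h u^{n}_{ijk} + \Delta t^{2} f^{n}_{ijk},
          \qquad
          \Delta_h u_{ijk} = \frac{u_{i+1,jk} - 2u_{ijk} + u_{i-1,jk}}{\Delta x^{2}} + \cdots,
        \]

    which is stable for equal spacings when c Delta t sqrt(3) / Delta x <= 1.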

  3. Numerical weather prediction in low latitudes

    NASA Technical Reports Server (NTRS)

    Krishnamurti, T. N.

    1985-01-01

    Based on the results of a number of numerical prediction experiments, the differential heating between land and ocean is an important and critical factor for investigation of phenomena such as the onset of monsoons over the Indian subcontinent. The pre-onset period during the month of May shows a rather persistent flow field in the monsoon region. At low levels the circulation exhibits anticyclonic excursions over the Arabian Sea, flowing essentially parallel to the west coast of India from the north. Over the Indian subcontinent the major feature is a shallow heat low over northern India. As the heat sources commence a rapid northwestward movement toward the southern edge of the Tibetan Plateau, an interesting configuration of the large-scale divergent circulation occurs. A favorable configuration for a rapid exchange of energy from the divergent to the rotational kinetic energy develops. Strong low-level monsoonal circulations evolve and, attendant with that, the onset of monsoon rains occurs. In order to test this observational sequence, a series of short-range numerical prediction experiments were initiated to define the initial heat sources.

  4. Origins of multicellular evolvability in snowflake yeast

    PubMed Central

    Ratcliff, William C.; Fankhauser, Johnathon D.; Rogers, David W.; Greig, Duncan; Travisano, Michael

    2015-01-01

    Complex life has arisen through a series of ‘major transitions’ in which collectives of formerly autonomous individuals evolve into a single, integrated organism. A key step in this process is the origin of higher-level evolvability, but little is known about how higher-level entities originate and gain the capacity to evolve as an individual. Here we report a single mutation that not only creates a new level of biological organization, but also potentiates higher-level evolvability. Disrupting the transcription factor ACE2 in Saccharomyces cerevisiae prevents mother–daughter cell separation, generating multicellular ‘snowflake’ yeast. Snowflake yeast develop through deterministic rules that produce geometrically defined clusters that preclude genetic conflict and display a high broad-sense heritability for multicellular traits; as a result they are preadapted to multicellular adaptation. This work demonstrates that simple microevolutionary changes can have profound macroevolutionary consequences, and suggests that the formation of clonally developing clusters may often be the first step to multicellularity. PMID:25600558

  5. Vibrationally excited water emission at 658 GHz from evolved stars

    NASA Astrophysics Data System (ADS)

    Baudry, A.; Humphreys, E. M. L.; Herpin, F.; Torstensson, K.; Vlemmings, W. H. T.; Richards, A. M. S.; Gray, M. D.; De Breuck, C.; Olberg, M.

    2018-01-01

    Context. Several rotational transitions of ortho- and para-water have been identified toward evolved stars in the ground vibrational state as well as in the first excited state of the bending mode (v2 = 1 in (0, 1, 0) state). In the latter vibrational state of water, the 658 GHz J = 11,0-10,1 rotational transition is often strong and seems to be widespread in late-type stars. Aims: Our main goals are to better characterize the nature of the 658 GHz emission, compare the velocity extent of the 658 GHz emission with SiO maser emission to help locate the water layers and, more generally, investigate the physical conditions prevailing in the excited water layers of evolved stars. Another goal is to identify new 658 GHz emission sources and contribute in showing that this emission is widespread in evolved stars. Methods: We have used the J = 11,0-10,1 rotational transition of water in the (0, 1, 0) vibrational state nearly 2400 K above the ground-state to trace some of the physical conditions of evolved stars. Eleven evolved stars were extracted from our mini-catalog of existing and potential 658 GHz sources for observations with the Atacama Pathfinder EXperiment (APEX) telescope equipped with the SEPIA Band 9 receiver. The 13CO J = 6-5 line at 661 GHz was placed in the same receiver sideband for simultaneous observation with the 658 GHz line of water. We have compared the ratio of these two lines to the same ratio derived from HIFI earlier observations to check for potential time variability in the 658 GHz line. We have compared the 658 GHz line properties with our H2O radiative transfer models in stars and we have compared the velocity ranges of the 658 GHz and SiO J = 2-1, v = 1 maser lines. Results: Eleven stars have been extracted from our catalog of known or potential 658 GHz evolved stars. All of them show 658 GHz emission with a peak flux density in the range ≈50-70 Jy (RU Hya and RT Eri) to ≈2000-3000 Jy (VY CMa and W Hya). Five Asymptotic Giant Branch (AGB

  6. Separation of spatial-temporal patterns ('climatic modes') by combined analysis of really measured and generated numerically vector time series

    NASA Astrophysics Data System (ADS)

    Feigin, A. M.; Mukhin, D.; Volodin, E. M.; Gavrilov, A.; Loskutov, E. M.

    2013-12-01

    A new method for decomposing the Earth's climate system into well separated spatial-temporal patterns ('climatic modes') is discussed. The method is based on: (i) a generalization of MSSA (Multichannel Singular Spectrum Analysis) [1] for expanding vector (space-distributed) time series in a basis of spatial-temporal empirical orthogonal functions (STEOF), which makes allowance for delayed correlations of the processes recorded at spatially separated points; (ii) expanding both real SST data and numerically generated SST data several times longer in the STEOF basis; (iii) use of the numerically produced STEOF basis to exclude 'too slow' (and thus not correctly represented) processes from the real data. Applying the method to vector time series generated numerically by the INM RAS Coupled Climate Model [2] allows two climatic modes with noticeably different time scales (3-5 and 9-11 years) to be separated from real SST anomaly data [3]. Relations of the separated modes to ENSO and PDO are investigated. Possible applications of the spatial-temporal climatic pattern concept to prognosis of climate-system evolution are discussed. 1. Ghil, M., R. M. Allen, M. D. Dettinger, K. Ide, D. Kondrashov, et al. (2002) "Advanced spectral methods for climatic time series", Rev. Geophys. 40(1), 3.1-3.41. 2. http://83.149.207.89/GCM_DATA_PLOTTING/GCM_INM_DATA_XY_en.htm 3. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/

  7. Investigating flow sensitivity of Greenland outlet glaciers using a time-evolving calving model in Elmer FEM.

    NASA Astrophysics Data System (ADS)

    Todd, Joe; Christoffersen, Poul

    2013-04-01

    It is becoming increasingly evident that the marine margins of the Greenland Ice Sheet (GIS) are highly sensitive to local and regional scale climate change, with significant changes in mass balance occurring on sub-decadal timescales. The majority of this mass loss is hypothesised to have been triggered at the termini of calving glaciers. Recent studies suggest that increased calving rate is being driven through some combination of increased submarine undercutting, increased surface hydrofracturing, and changes in the strength and seasonal duration of sikussak. This project aims to improve understanding of these physical processes, in order to better predict how the GIS will respond to future climate change. Two glaciers in the Uummannaq region, Store Gletscher and Rink Isbræ, have been modelled in 2D using the Finite Element modelling package "Elmer FEM". The model produces a time-evolving solution to the coupled Navier-Stokes/heat equations; this allows the dynamic response of these glaciers to external forcing at their termini to be investigated. Furthermore, the model includes a water-depth calving criterion, and is able to simulate realistic calving events, and the subsequent stress/dynamic response of the glacier. Preliminary results suggest that both sikussak backstress and submarine undercutting may represent significant factors in calving terminus stability.

  8. Sensitivity of geomagnetic reversal rate on core evolution from numerical dynamos

    NASA Astrophysics Data System (ADS)

    Driscoll, P. E.; Davies, C. J.

    2017-12-01

    The paleomagnetic record indicates the geodynamo has evolved from frequently reversing to non-reversing (superchron) magnetic states several times over the Phanerozoic. Previous theoretical studies demonstrated a positive correlation between magnetic reversal rate and core-mantle boundary heat flux. However, attempts to identify such a correlation between reversal rates and proxies for internal cooling rate, such as plume events, superchron cycles, and subduction rates, have been inconclusive. Here we revisit the magnetic reversal occurrence rate in numerical dynamos at low Ekman numbers (faster rotation) and high magnetic Prandtl numbers (ratio of viscous and magnetic diffusivities). We focus on how the correlation between reversal rate and convective power depends on the core evolution rate and on other factors, such as Ek, Pm, and thermal boundary conditions. We apply our results to the seafloor reversal record in an attempt to infer the energetic evolution of the lower mantle and core over that period.

  9. The genotype-phenotype map of an evolving digital organism.

    PubMed

    Fortuna, Miguel A; Zaman, Luis; Ofria, Charles; Wagner, Andreas

    2017-02-01

    To understand how evolving systems bring forth novel and useful phenotypes, it is essential to understand the relationship between genotypic and phenotypic change. Artificial evolving systems can help us understand whether the genotype-phenotype maps of natural evolving systems are highly unusual, and it may help create evolvable artificial systems. Here we characterize the genotype-phenotype map of digital organisms in Avida, a platform for digital evolution. We consider digital organisms from a vast space of 10^141 genotypes (instruction sequences), which can form 512 different phenotypes. These phenotypes are distinguished by different Boolean logic functions they can compute, as well as by the complexity of these functions. We observe several properties with parallels in natural systems, such as connected genotype networks and asymmetric phenotypic transitions. The likely common cause is robustness to genotypic change. We describe an intriguing tension between phenotypic complexity and evolvability that may have implications for biological evolution. On the one hand, genotypic change is more likely to yield novel phenotypes in more complex organisms. On the other hand, the total number of novel phenotypes reachable through genotypic change is highest for organisms with simple phenotypes. Artificial evolving systems can help us study aspects of biological evolvability that are not accessible in vastly more complex natural systems. They can also help identify properties, such as robustness, that are required for both human-designed artificial systems and synthetic biological systems to be evolvable.

  10. The genotype-phenotype map of an evolving digital organism

    PubMed Central

    Zaman, Luis; Wagner, Andreas

    2017-01-01

    To understand how evolving systems bring forth novel and useful phenotypes, it is essential to understand the relationship between genotypic and phenotypic change. Artificial evolving systems can help us understand whether the genotype-phenotype maps of natural evolving systems are highly unusual, and it may help create evolvable artificial systems. Here we characterize the genotype-phenotype map of digital organisms in Avida, a platform for digital evolution. We consider digital organisms from a vast space of 10^141 genotypes (instruction sequences), which can form 512 different phenotypes. These phenotypes are distinguished by different Boolean logic functions they can compute, as well as by the complexity of these functions. We observe several properties with parallels in natural systems, such as connected genotype networks and asymmetric phenotypic transitions. The likely common cause is robustness to genotypic change. We describe an intriguing tension between phenotypic complexity and evolvability that may have implications for biological evolution. On the one hand, genotypic change is more likely to yield novel phenotypes in more complex organisms. On the other hand, the total number of novel phenotypes reachable through genotypic change is highest for organisms with simple phenotypes. Artificial evolving systems can help us study aspects of biological evolvability that are not accessible in vastly more complex natural systems. They can also help identify properties, such as robustness, that are required for both human-designed artificial systems and synthetic biological systems to be evolvable. PMID:28241039

  11. The Effects of Insulator Wall Material on Hall Thruster Discharges: A Numerical Study

    DTIC Science & Technology

    2001-01-03

    An investigation was undertaken to determine how the choice of insulator wall material inside a Hall thruster discharge channel might affect thruster operation. In order to study this, an evolved hybrid particle-in-cell (PIC) numerical Hall thruster model, HPHall, was used. HPHall solves a set of quasi-one-dimensional fluid equations for electrons and tracks heavy particles using a PIC method.

  12. Numerical Analysis and Improved Algorithms for Lyapunov-Exponent Calculation of Discrete-Time Chaotic Systems

    NASA Astrophysics Data System (ADS)

    He, Jianbin; Yu, Simin; Cai, Jianping

    2016-12-01

    The Lyapunov exponent is an important index for describing the behavior of chaotic systems, and the largest Lyapunov exponent can be used to determine whether a system is chaotic. For discrete-time dynamical systems, the Lyapunov exponents are calculated by an eigenvalue method. In theory, the eigenvalue method yields increasingly accurate Lyapunov exponents as the number of iterations grows, and the exponents converge to well-defined limits. In practice, however, the finite precision of computer arithmetic and other factors can cause numeric overflow, unrecognizable output, or inaccurate results: (1) the number of iterations cannot be too large, otherwise the simulation returns an error value of NaN or Inf; (2) even when NaN or Inf does not appear, all computed Lyapunov exponents drift toward the largest Lyapunov exponent as the iterations increase, which makes the results inaccurate; (3) conversely, if the number of iterations is too small, the results are also inaccurate. Based on this analysis of Lyapunov-exponent calculation in discrete-time systems, this paper develops two improved algorithms, based on QR orthogonal decomposition and SVD orthogonal decomposition, to resolve the above-mentioned problems. Finally, examples are given to illustrate the feasibility and effectiveness of the improved algorithms.
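
    A minimal Python sketch of the QR-based idea described above (not the authors' code): repeated QR factorisation keeps the tangent vectors orthonormal, which avoids both the overflow and the collapse of all exponents onto the largest one. The Henon map is used here purely as an illustrative test system.

    import numpy as np

    a, b = 1.4, 0.3

    def henon(x):
        # Henon map: (x, y) -> (1 - a*x^2 + b*y, x)
        return np.array([1.0 - a * x[0] ** 2 + b * x[1], x[0]])

    def jacobian(x):
        return np.array([[-2.0 * a * x[0], b],
                         [1.0, 0.0]])

    x = np.array([0.1, 0.1])
    for _ in range(1000):            # discard transient
        x = henon(x)

    Q = np.eye(2)
    log_r = np.zeros(2)
    n_iter = 100_000
    for _ in range(n_iter):
        Q, R = np.linalg.qr(jacobian(x) @ Q)   # re-orthonormalise the tangent vectors
        log_r += np.log(np.abs(np.diag(R)))    # accumulate stretching factors
        x = henon(x)

    print("Lyapunov exponents:", log_r / n_iter)   # approx. (0.42, -1.62) for the Henon map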

  13. Reconstructing the Morphology of an Evolving Coronal Mass Ejection

    DTIC Science & Technology

    2009-01-01

    B. E. Wood, R. A. Howard, D. G. Socker, Naval Research Laboratory, Space Science ... mission, we empirically reconstruct the time-dependent three-dimensional morphology of a coronal mass ejection (CME) from 2008 June 1, which exhibits

  14. Apollo 16 Evolved Lithology Sodic Ferrogabbro

    NASA Technical Reports Server (NTRS)

    Zeigler, Ryan; Jolliff, B. L.; Korotev, R. L.

    2014-01-01

    Evolved lunar igneous lithologies, often referred to as the alkali suite, are a minor but important component of the lunar crust. These evolved samples are rich in incompatible elements and are, not surprisingly, most common at the Apollo sites in (or near) the incompatible-element-rich region of the Moon known as the Procellarum KREEP Terrane (PKT). The most commonly occurring lithologies are granites (A12, A14, A15, A17), monzogabbro (A14, A15), alkali anorthosites (A12, A14), and KREEP basalts (A15, A17). The Feldspathic Highlands Terrane is not entirely devoid of evolved lithologies, and rare clasts of alkali gabbronorite and sodic ferrogabbro (SFG) have been identified in Apollo 16 station 11 breccias 67915 and 67016. Curiously, nearly all pristine evolved lithologies have been found as small clasts or soil particles, the exceptions being KREEP basalts 15382/6 and granitic sample 12013 (which is itself a breccia). Here we reexamine the petrography and geochemistry of two SFG-like particles found in a survey of Apollo 16 2-4 mm particles from the Cayley Plains, 62283,7-15 and 62243,10-3 (hereafter 7-15 and 10-3, respectively). We will compare these to previously reported SFG samples, including recent analyses of the type specimen of SFG from lunar breccia 67915.

  15. The Numerical Simulation of Time Dependent Flow Structures Over a Natural Gravel Surface.

    NASA Astrophysics Data System (ADS)

    Hardy, R. J.; Lane, S. N.; Ferguson, R. I.; Parsons, D. R.

    2004-05-01

    Research undertaken over the last few years has demonstrated the importance of the structure of gravel river beds for understanding the interaction between fluid flow and sediment transport processes. This includes the observation of periodic high-speed fluid wedges interconnected by low-speed flow regions. Our understanding of these flows has been enhanced significantly through a series of laboratory experiments and supported by field observations. However, the potential of high resolution three dimensional Computational Fluid Dynamics (CFD) modeling has yet to be fully developed. This is largely a result of the difficulty of designing numerically stable meshes for complex bed topographies and of the reliance on Reynolds-averaged turbulence schemes. This paper develops two novel techniques for dealing with these issues. The first is the development and validation of a method for representing the complex surface topography of gravel-bed rivers in high resolution three-dimensional computational fluid dynamic models. This is based upon a porosity treatment with a regular structured grid and the application of a porosity modification to the mass conservation equation in which: fully blocked cells are assigned a porosity of zero; fully unblocked cells are assigned a porosity of one; and partly blocked cells are assigned a porosity of between 0 and 1, according to the percentage of the cell volume that is blocked. The second is the application of Large Eddy Simulation (LES), which enables time dependent flow structures to be numerically predicted over the complex bed topographies. The regular structured grid with the embedded porosity algorithm maintains a constant grid cell size throughout the domain, implying a constant filter scale for the LES simulation. This enables the prediction of coherent structures, repetitive quasi-cyclic large-scale turbulent motions, over the gravel surface which are of a similar magnitude and frequency to those previously observed in
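
    As an illustration of the porosity rule described above (fully blocked = 0, fully open = 1, partly blocked = open volume fraction), the following Python sketch assigns a porosity field on a regular structured grid from a gridded bed-elevation surface. The function and variable names, and the synthetic surface, are illustrative assumptions, not the authors' code.

    import numpy as np

    def porosity_field(zb, z_faces):
        """Porosity of each cell of a regular structured grid.

        zb      : (ny, nx) bed-elevation surface
        z_faces : (nz + 1,) elevations of the horizontal cell faces
        Returns phi with 0 = fully blocked, 1 = fully open, and the open
        volume fraction for cells cut by the bed surface.
        """
        nz = len(z_faces) - 1
        phi = np.empty((nz,) + zb.shape)
        for k in range(nz):
            z_lo, z_hi = z_faces[k], z_faces[k + 1]
            phi[k] = np.clip((z_hi - zb) / (z_hi - z_lo), 0.0, 1.0)
        return phi

    # synthetic rough gravel surface: 1 cm horizontal cells, 5 mm vertical cells
    rng = np.random.default_rng(0)
    zb = 0.02 * rng.random((50, 80))
    phi = porosity_field(zb, z_faces=np.linspace(0.0, 0.05, 11))
    print(phi.shape, phi.min(), phi.max())   # phi then scales the cell volumes in mass conservation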

  16. Parallel spatial direct numerical simulations on the Intel iPSC/860 hypercube

    NASA Technical Reports Server (NTRS)

    Joslin, Ronald D.; Zubair, Mohammad

    1993-01-01

    The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube is documented. The direct numerical simulation approach is used to compute spatially evolving disturbances associated with the laminar-to-turbulent transition in boundary-layer flows. The feasibility of using the PSDNS on the hypercube to perform transition studies is examined. The results indicate that the direct numerical simulation approach can effectively be parallelized on a distributed-memory parallel machine. By increasing the number of processors, nearly ideal linear speedups are achieved with nonoptimized routines; slower-than-linear speedups are achieved with optimized (machine-dependent library) routines. This slower-than-linear speedup results because the Fast Fourier Transform (FFT) routine dominates the computational cost and itself exhibits less-than-ideal speedup. However, with the machine-dependent routines the total computational cost decreases by a factor of 4 to 5 compared with standard FORTRAN routines. The computational cost increases linearly with spanwise, wall-normal, and streamwise grid refinements. The hypercube with 32 processors was estimated to require approximately twice the amount of Cray supercomputer single-processor time to complete a comparable simulation; however, it is estimated that a subgrid-scale model, which reduces the required number of grid points and turns the computation into a large-eddy simulation (PSLES), would reduce the computational cost and memory requirements by a factor of 10 over the PSDNS. This PSLES implementation would enable transition simulations on the hypercube at a reasonable computational cost.

  17. The Evolvement of Automobile Steering System Based on TRIZ

    NASA Astrophysics Data System (ADS)

    Zhao, Xinjun; Zhang, Shuang

    Like living organisms, products and techniques pass through a process of birth, growth, maturity, and death before leaving the stage, and their development conforms to certain evolutionary rules. If designers know and apply these rules, they can design new kinds of products and forecast product development trends; enterprises can thereby anticipate the future technical direction of their products and pursue product and technique innovation. Based on TRIZ theory, this paper analyzes the evolution of the mechanism, the function, and the appearance of the automobile steering system and puts forward new ideas about future automobile steering systems.

  18. A numerical determination of the evolution of cloud drop spectra due to condensation on natural aerosol particles

    NASA Technical Reports Server (NTRS)

    Lee, I. Y.; Haenel, G.; Pruppacher, H. R.

    1980-01-01

    The time variation in size of aerosol particles growing by condensation is studied numerically by means of an air parcel model which allows entrainment of air and aerosol particles. Particles of four types of aerosols typically occurring in atmospheric air masses were considered. The present model circumvents any assumption about the size distribution and chemical composition of the aerosol particles by basing the aerosol particle growth on actually observed size distributions and on observed amounts of water taken up under equilibrium by a deposit of the aerosol particles. Characteristic differences in the drop size distribution, liquid water content and supersaturation were found for the clouds which evolved from the four aerosol types considered.

  19. Numerical simulation of time delay interferometry for a LISA-like mission with the simplification of having only one interferometer

    NASA Astrophysics Data System (ADS)

    Dhurandhar, S. V.; Ni, W.-T.; Wang, G.

    2013-01-01

    In order to attain the requisite sensitivity for LISA, laser frequency noise must be suppressed below the secondary noises such as the optical path noise, acceleration noise, etc. In a previous paper (Dhurandhar, S.V., Nayak, K.R., Vinet, J.-Y. Time delay interferometry for LISA with one arm dysfunctional. Class. Quantum Grav. 27, 135013, 2010), we have found a large family of second-generation analytic solutions of time delay interferometry with one arm dysfunctional, and we also estimated the laser noise due to residual time-delay semi-analytically from orbit perturbations due to Earth. Since other planets and solar-system bodies also perturb the orbits of LISA spacecraft and affect the time delay interferometry (TDI), we simulate the time delay numerically in this paper for all solutions with the generation number n ⩽ 3. We have worked out a set of 3-year optimized mission orbits of LISA spacecraft starting at January 1, 2021 using the CGC2.7 ephemeris framework. We then use this numerical solution to calculate the residual optical path differences in the second-generation solutions of our previous paper, and compare with the semi-analytic error estimate. The accuracy of this calculation is better than 1 cm (or 30 ps). The maximum path length difference, for all configurations calculated, is below 1 m (3 ns). This is well below the limit under which the laser frequency noise is required to be suppressed. The numerical simulation in this paper can be applied to other space-borne interferometers for gravitational wave detection with the simplification of having only one interferometer.

  20. NASA's Space Launch System: An Evolving Capability for Exploration

    NASA Technical Reports Server (NTRS)

    Creech, Stephen D.; Crumbly, Christopher M.; Robinson, Kimerly F.

    2016-01-01

    A foundational capability for international human deep-space exploration, NASA's Space Launch System (SLS) vehicle represents a new spaceflight infrastructure asset, creating opportunities for mission profiles and space systems that cannot currently be executed. While the primary purpose of SLS, which is making rapid progress towards initial launch readiness in two years, will be to support NASA's Journey to Mars, discussions are already well underway regarding other potential utilization of the vehicle's unique capabilities. In its initial Block 1 configuration, capable of launching 70 metric tons (t) to low Earth orbit (LEO), SLS can propel the Orion crew vehicle to cislunar space, while also delivering small CubeSat-class spacecraft to deep-space destinations. With the addition of a more powerful upper stage, the Block 1B configuration of SLS will be able to deliver 105 t to LEO and enable more ambitious human missions into the proving ground of space. This configuration offers opportunities for launching co-manifested payloads with the Orion crew vehicle, and a class of secondary payloads larger than today's CubeSats. Further upgrades to the vehicle, including advanced boosters, will evolve its performance to 130 t in its Block 2 configuration. Both Block 1B and Block 2 also offer the capability to carry 8.4- or 10-m payload fairings, larger than any contemporary launch vehicle. With unmatched mass-lift capability, payload volume, and C3, SLS not only enables spacecraft or mission designs currently impossible with contemporary EELVs, it also offers enhancing benefits, such as reduced risk, lower operational cost and complexity, shorter transit times to destination, and the ability to launch large systems either monolithically or in fewer components. This paper will discuss both the performance and capabilities of the Space Launch System as it evolves, and the current state of SLS utilization planning.

  1. Highly evolved rhyolitic glass compositions from the Toba Caldera, Sumatra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chesner, C.A.

    1985-01-01

    The quartz latite to rhyolitic ash flow tuffs erupted from the Toba Caldera, perhaps the largest caldera on Earth (100 by 30 km), provide a unique opportunity to study a highly differentiated liquid in equilibrium with numerous mineral phases. Not only are the rocks very crystal rich (30-50%), but at present a minimum of 15 co-existing mineral phases have been identified. Both whole-rock and glass analyses were made by XRF techniques, providing data on both major and trace elements. Whole rock chemistry of individual pumices from the youngest eruption at Toba (75,000 years ago) is suggestive of the eruption of two magma compositions across a boundary layer in the magma chamber. Glass chemistry of the pumices also shows two distinct liquid compositions. The more silicic pumices, which have the most evolved glass compositions, are similar to the whole rock chemistry of the few aplitic pumices and cognate granitic xenoliths that were collected. This highly evolved composition resulted from the removal of up to 15 mineral phases and may be a fractionation-buffered, univariant composition. The glasses from the less silicic pumices are similar to the whole rock chemistry of the more silicic pumices, thus falling nicely on a fractionation trend towards the univariant composition for these rocks. This set of glass compositions allows an independent test for the origin of distal ashes thought to have erupted from Toba and deposited in Malaysia, the Indian Ocean, and as far away as India.

  2. Mentoring: An Evolving Relationship.

    PubMed

    Block, Michelle; Florczak, Kristine L

    2017-04-01

    The column concerns itself with mentoring as an evolving relationship between mentor and mentee. The collegiate mentoring model, the transformational transcendence model, and the humanbecoming mentoring model are considered in light of a dialogue with mentors at a Midwest university and conclusions are drawn.

  3. Who is teaching what, when? An evolving online tool to manage dental curricula.

    PubMed

    Walton, Joanne N

    2014-03-01

    There are numerous issues in the documentation and ongoing development of health professions curricula. It seems that curriculum information falls quickly out of date between accreditation cycles, while students and faculty members struggle in the meantime with the "hidden curriculum" and unintended redundancies and gaps. Beyond knowing what is in the curriculum lies the frustration of timetabling learning in a transparent way while allowing for on-the-fly changes and improvements. The University of British Columbia Faculty of Dentistry set out to develop a curriculum database to answer the simple but challenging question "who is teaching what, when?" That tool, dubbed "OSCAR," has evolved to not only document the dental curriculum, but as a shared instrument that also holds the curricula and scheduling detail of the dental hygiene degree and clinical graduate programs. In addition to providing documentation ranging from reports for accreditation to daily information critical to faculty administrators and staff, OSCAR provides faculty and students with individual timetables and pushes updates via text, email, and calendar changes. It incorporates reminders and session resources for students and can be updated by both faculty members and staff. OSCAR has evolved into an essential tool for tracking, scheduling, and improving the school's curricula.

  4. A similarity hypothesis for the two-point correlation tensor in a temporally evolving plane wake

    NASA Technical Reports Server (NTRS)

    Ewing, D. W.; George, W. K.; Moser, R. D.; Rogers, M. M.

    1995-01-01

    The analysis demonstrated that the governing equations for the two-point velocity correlation tensor in the temporally evolving wake admit similarity solutions, which include the similarity solutions for the single-point moments as a special case. The resulting equations for the similarity solutions include two constants, beta and Re(sub sigma), that are ratios of three characteristic time scales of processes in the flow: a viscous time scale, a time scale characteristic of the spread rate of the flow, and a characteristic time scale of the mean strain rate. The values of these ratios depend on the initial conditions of the flow and are most likely measures of the coherent structures in the initial conditions. The occurrence of these constants in the governing equations for the similarity solutions indicates that these solutions, in general, will only be the same for two flows if the two constants are equal (and hence the coherent structures in the flows are related). The comparisons between the predictions of the similarity hypothesis and the data presented here and elsewhere indicate that the similarity solutions for the two-point correlation tensors provide a good approximation of the measures of those motions that are not significantly affected by the boundary conditions caused by the finite extent of real flows. Thus, the two-point similarity hypothesis provides a useful tool for both numerical and physical experimentalists that can be used to examine how the finite extent of real flows affects the evolution of the different scales of motion in the flow.

  5. Benchmark problems for numerical implementations of phase field models

    DOE PAGES

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; ...

    2016-10-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
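
    To give a flavour of what such a benchmark exercise involves, the Python sketch below advances a simple two-dimensional Cahn-Hilliard (spinodal decomposition) model with an explicit scheme on a periodic grid. The free-energy form and all parameter values are illustrative assumptions and should not be read as the CHiMaD/NIST benchmark specification.

    import numpy as np

    # free energy f(c) = rho*(c - ca)^2*(c - cb)^2, mobility M, gradient energy kappa
    N, L = 128, 200.0
    dx = L / N
    kappa, M, rho = 2.0, 5.0, 5.0
    ca, cb = 0.3, 0.7
    dt = 1e-3                                  # explicit update -> small time step

    rng = np.random.default_rng(0)
    c = 0.5 + 0.01 * rng.standard_normal((N, N))   # perturbed mean composition

    def lap(a):
        # 5-point periodic Laplacian
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a) / dx**2

    for step in range(5000):
        dfdc = 2.0 * rho * (c - ca) * (c - cb) * (2.0 * c - ca - cb)   # f'(c)
        mu = dfdc - kappa * lap(c)             # chemical potential
        c += dt * M * lap(mu)                  # Cahn-Hilliard update dc/dt = M lap(mu)

    print("composition range after coarsening:", c.min(), c.max())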

  6. A study of volatile organic compounds evolved from the decaying human body.

    PubMed

    Statheropoulos, M; Spiliopoulou, C; Agapiou, A

    2005-10-29

    Two men were found dead near the island of Samos, Greece, in the Mediterranean Sea. The estimated time since death for both victims was 3-4 weeks. Autopsy revealed no remarkable external injuries or acute poisoning. The exact cause of death remained unclear because the bodies were in an advanced state of decomposition. Volatile organic compounds (VOCs) evolved from these two corpses were determined by thermal desorption/gas chromatography/mass spectrometry (TD/GC/MS) analysis. Over 80 substances have been identified and quantified. The most prominent among them were dimethyl disulfide (13.39 nmol/L), toluene (10.11 nmol/L), hexane (5.58 nmol/L), 1,2,4-trimethylbenzene (4.04 nmol/L), 2-propanone (3.84 nmol/L), and 3-pentanone (3.59 nmol/L). Qualitative and quantitative differences among the evolved VOCs and the mean CO2 concentration values might indicate different rates of decomposition between the two bodies. The study of the evolved VOCs appears to be a promising adjunct for the forensic pathologist, as they may offer important information that can be used in the final evaluation.

  7. Virtual photons in imaginary time: Computing exact Casimir forces via standard numerical electromagnetism techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodriguez, Alejandro; Ibanescu, Mihai; Joannopoulos, J. D.

    2007-09-15

    We describe a numerical method to compute Casimir forces in arbitrary geometries, for arbitrary dielectric and metallic materials, with arbitrary accuracy (given sufficient computational resources). Our approach, based on well-established integration of the mean stress tensor evaluated via the fluctuation-dissipation theorem, is designed to directly exploit fast methods developed for classical computational electromagnetism, since it only involves repeated evaluation of the Green's function for imaginary frequencies (equivalently, real frequencies in imaginary time). We develop the approach by systematically examining various formulations of Casimir forces from the previous decades and evaluating them according to their suitability for numerical computation. We illustrate our approach with a simple finite-difference frequency-domain implementation, test it for known geometries such as a cylinder and a plate, and apply it to new geometries. In particular, we show that a pistonlike geometry of two squares sliding between metal walls, in both two and three dimensions with both perfect and realistic metallic materials, exhibits a surprising nonmonotonic "lateral" force from the walls.

  8. Regimes of stability and scaling relations for the removal time in the asteroid belt: a simple kinetic model and numerical tests

    NASA Astrophysics Data System (ADS)

    Cubrovic, Mihailo

    2005-02-01

    We report on our theoretical and numerical results concerning the transport mechanisms in the asteroid belt. We first derive a simple kinetic model of chaotic diffusion and show how it gives rise to some simple correlations (but not laws) between the removal time (the time for an asteroid to experience a qualitative change of dynamical behavior and enter a wide chaotic zone) and the Lyapunov time. The correlations are shown to arise in two different regimes, characterized by exponential and power-law scalings. We also show how the so-called "stable chaos" (exponential regime) is related to anomalous diffusion. Finally, we check our results numerically and discuss their possible applications in analyzing the motion of particular asteroids.

  9. Numerical investigation of coupled density-driven flow and hydrogeochemical processes below playas

    NASA Astrophysics Data System (ADS)

    Hamann, Enrico; Post, Vincent; Kohfahl, Claus; Prommer, Henning; Simmons, Craig T.

    2015-11-01

    Numerical modeling approaches of varying complexity were explored to investigate coupled groundwater flow and geochemical processes in saline basins. Long-term model simulations of a playa system provide insights into the complex feedback mechanisms between density-driven flow and the spatiotemporal patterns of precipitating evaporites and evolving brines. Using a reactive multicomponent transport modeling approach, the simulations reproduced, for the first time in a numerical study, the evaporite precipitation sequences frequently observed in saline basins ("bull's eyes"). Playa-specific flow, evapoconcentration, and chemical divides were found to be the primary controls on the location of the evaporites formed and on the resulting brine chemistry. Comparative simulations with the computationally far less demanding surrogate single-species transport models showed that these were still able to replicate the major flow patterns obtained by the more complex reactive transport simulations. However, the simulated degree of salinization was clearly lower than in the reactive multicomponent transport simulations. For example, in the late stages of the simulations, when the brine becomes halite-saturated, the nonreactive simulation overestimated the solute mass by almost 20%. The simulations highlight the importance of considering reactive transport processes for understanding and quantifying geochemical patterns, concentrations of individual dissolved solutes, and evaporite evolution.

  10. Experimental and numerical investigations of temporally and spatially periodic modulated wave trains

    NASA Astrophysics Data System (ADS)

    Houtani, H.; Waseda, T.; Tanizawa, K.

    2018-03-01

    A number of studies on steep nonlinear waves were conducted experimentally with the temporally periodic and spatially evolving (TPSE) wave trains and numerically with the spatially periodic and temporally evolving (SPTE) ones. The present study revealed that, in the vicinity of their maximum crest height, the wave profiles of TPSE and SPTE modulated wave trains resemble each other. From the investigation of the Akhmediev-breather solution of the nonlinear Schrödinger equation (NLSE), it is revealed that the dispersion relation deviated from the quadratic dependence of frequency on wavenumber and became linearly dependent instead. Accordingly, the wave profiles of TPSE and SPTE breathers agree. The range of this agreement is within the order of one wave group of the maximum crest height and persists during the long-term evolution. The findings extend well beyond the NLSE regime and can be applied to modulated wave trains that are highly nonlinear and broad-banded. This was demonstrated from the numerical wave tank simulations with a fully nonlinear potential flow solver based on the boundary element method, in combination with the nonlinear wave generation method based on the prior simulation with the higher-order spectral model. The numerical wave tank results were confirmed experimentally in a physical wave tank. The findings of this study unravel the fundamental nature of the nonlinear wave evolution. The deviation of the dispersion relation of the modulated wave trains occurs because of the nonlinear phase variation due to quasi-resonant interaction, and consequently, the wave geometry of temporally and spatially periodic modulated wave trains coincides.
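
    For readers who want to reproduce the qualitative behaviour of such modulated wave trains, the Python sketch below integrates the focusing nonlinear Schrödinger equation, i u_t + 0.5 u_xx + |u|^2 u = 0, with a standard split-step Fourier scheme, starting from a weakly modulated plane wave so that a breather-like modulation grows. The normalization, modulation wavenumber, and seed amplitude are illustrative assumptions, not the parameters of the study.

    import numpy as np

    N, L, dt = 512, 20.0 * np.pi, 1e-3
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
    K = 0.4                                        # modulation wavenumber (< 2 -> unstable sideband)
    u = (1.0 + 0.01 * np.cos(K * x)).astype(complex)

    for _ in range(12000):                         # integrate to t = 12
        u = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(u))  # linear (dispersive) step in wave space
        u *= np.exp(1j * np.abs(u)**2 * dt)                          # nonlinear phase step in physical space

    print("max |u| (plane-wave background is 1):", np.abs(u).max())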

  11. Time domain nonlinear SMA damper force identification approach and its numerical validation

    NASA Astrophysics Data System (ADS)

    Xin, Lulu; Xu, Bin; He, Jia

    2012-04-01

    Most of the currently available vibration-based identification approaches for structural damage detection are based on eigenvalues and/or eigenvectors extracted from vibration measurements and, strictly speaking, are only suitable for linear systems. However, the initiation and development of damage in engineering structures under severe dynamic loadings is a typically nonlinear process. Studies on the identification of the restoring force, which is a direct indicator of the extent of the nonlinearity, have received increasing attention in recent years. In this study, a data-based time domain identification approach for general nonlinear systems was developed. The applied excitation and the corresponding response time series of the structure were used for identification by means of standard least-squares techniques and a power series polynomial model (PSPM), which was utilized to model the nonlinear restoring force (NRF). The feasibility and robustness of the proposed approach were verified with a two-degree-of-freedom (DOF) lumped-mass numerical model equipped with a shape memory alloy (SMA) damper mimicking nonlinear behavior. The results show that the proposed data-based time domain method is capable of identifying the NRF in engineering structures without any assumptions on the mass distribution and the topology of the structure, and provides a promising way for damage detection in the presence of structural nonlinearities.
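
    A minimal Python sketch of the data-based idea, assuming a single-degree-of-freedom Duffing-type oscillator in place of the paper's two-DOF SMA-damper model: the restoring force follows from Newton's second law applied to the "measured" excitation and acceleration, and its power-series coefficients are recovered by ordinary least squares. All system parameters and the polynomial basis are illustrative.

    import numpy as np
    from scipy.integrate import solve_ivp

    m, c, k, k3 = 1.0, 0.2, 10.0, 50.0                    # assumed "true" system
    def excitation(t):
        return 5.0 * np.sin(2.0 * t) + 3.0 * np.sin(5.3 * t)

    def rhs(t, y):
        x, v = y
        return [v, (excitation(t) - c * v - k * x - k3 * x**3) / m]

    t = np.linspace(0.0, 50.0, 5001)
    sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0], t_eval=t, rtol=1e-8)
    x, v = sol.y
    a = (excitation(t) - c * v - k * x - k3 * x**3) / m   # synthetic "measured" acceleration

    # restoring force from Newton's second law, then a power-series least-squares fit
    r = excitation(t) - m * a                              # equals c*v + k*x + k3*x^3
    basis = np.column_stack([x, v, x**3, v**3, x * v])     # candidate polynomial terms
    coef, *_ = np.linalg.lstsq(basis, r, rcond=None)
    print("identified coefficients:", np.round(coef, 3))   # approx. [10, 0.2, 50, 0, 0]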

  12. Orbital Decay in Binaries with Evolved Stars

    NASA Astrophysics Data System (ADS)

    Sun, Meng; Arras, Phil; Weinberg, Nevin N.; Troup, Nicholas; Majewski, Steven R.

    2018-01-01

    Two mechanisms are often invoked to explain tidal friction in binary systems. The "dynamical tide" is the resonant excitation of internal gravity waves by the tide, and their subsequent damping by nonlinear fluid processes or thermal diffusion. The "equilibrium tide" refers to non-resonant excitation of fluid motion in the star's convection zone, with damping by interaction with the turbulent eddies. There have been numerous studies of these processes in main sequence stars, but fewer on the subgiant and red giant branches. Motivated by the newly discovered close binary systems in the Apache Point Observatory Galactic Evolution Experiment (APOGEE-1), we have performed calculations of both the dynamical and equilibrium tide processes for stars over a range of masses as they cease core hydrogen burning and evolve to shell burning. Even for stars which had a radiative core on the main sequence, the dynamical tide may have very large amplitude in the newly radiative core in the post-main sequence, giving rise to wave breaking. The resulting large dynamical tide dissipation rate is compared to the equilibrium tide, and the range of secondary masses and orbital periods over which rapid orbital decay may occur will be discussed, as well as applications to close APOGEE binaries.

  13. Cyberspace Operations: Influence Upon Evolving War Theory

    DTIC Science & Technology

    2011-03-18

    Strategy Research Project: Cyberspace Operations: Influence Upon Evolving War Theory, by Colonel Kristin Baker, United States ...

  14. Methods Evolved by Observation

    ERIC Educational Resources Information Center

    Montessori, Maria

    2016-01-01

    Montessori's idea of the child's nature and the teacher's perceptiveness begins with amazing simplicity, and when she speaks of "methods evolved," she is unveiling a methodological system for observation. She begins with the early childhood explosion into writing, which is a familiar child phenomenon that Montessori has written about…

  15. Insights into the Sulfur Mineralogy of Martian Soil at Rocknest, Gale Crater, Enabled by Evolved Gas Analyses

    NASA Technical Reports Server (NTRS)

    McAdam, A.; Franz, H.; Archer, P., Jr.; Freissinet, C.; Sutter, B.; Glavin, D.; Eigenbrode, J.; Bower, H.; Stern, J.; Mahaffy, P.; ...

    2013-01-01

    The first solid samples analysed by the Chemistry and Mineralogy (CheMin) instrument and the Sample Analysis at Mars (SAM) instrument suite on the Mars Science Laboratory (MSL) consisted of < 150 μm fines sieved from aeolian bedform material at a site named Rocknest. All four samples of this material analyzed by SAM's evolved gas analysis mass spectrometry (EGA-MS) released H2O, CO2, O2, and SO2 (Fig. 1), as well as H2S and possibly NO. This is the first time evolved SO2 (and evolved H2S) has been detected from thermal analysis of martian materials. The identity of these evolved gases and the temperature (T) of evolution can support mineral detection by CheMin and place constraints on trace volatile-bearing phases present below the CheMin detection limit or difficult to characterize with XRD (e.g., X-ray amorphous phases). Constraints on phases responsible for evolved CO2 and O2 are detailed elsewhere [1,2,3]. Here, we focus on potential constraints on phases that evolved SO2, H2S, and H2O during thermal analysis.

  16. Effects of numerical dissipation and unphysical excursions on scalar-mixing estimates in large-eddy simulations

    NASA Astrophysics Data System (ADS)

    Sharan, Nek; Matheou, Georgios; Dimotakis, Paul

    2017-11-01

    Artificial numerical dissipation decreases dispersive oscillations and can play a key role in mitigating unphysical scalar excursions in large eddy simulations (LES). Its influence on scalar mixing can be assessed through the resolved-scale scalar, Z , its probability density function (PDF), variance, spectra, and the budget of the horizontally averaged equation for Z2. LES of incompressible temporally evolving shear flow enabled us to study the influence of numerical dissipation on unphysical scalar excursions and mixing estimates. Flows with different mixing behavior, with both marching and non-marching scalar PDFs, are studied. Scalar fields for each flow are compared for different grid resolutions and numerical scalar-convection term schemes. As expected, increasing numerical dissipation enhances scalar mixing in the development stage of shear flow characterized by organized large-scale pairings with a non-marching PDF, but has little influence in the self-similar stage of flows with marching PDFs. Flow parameters and regimes sensitive to numerical dissipation help identify approaches to mitigate unphysical excursions while minimizing dissipation.

  17. Transient ensemble dynamics in time-independent galactic potentials

    NASA Astrophysics Data System (ADS)

    Mahon, M. Elaine; Abernathy, Robert A.; Bradley, Brendan O.; Kandrup, Henry E.

    1995-07-01

    This paper summarizes a numerical investigation of the short-time, possibly transient, behaviour of ensembles of stochastic orbits evolving in fixed non-integrable potentials, with the aim of deriving insights into the structure and evolution of galaxies. The simulations involved three different two-dimensional potentials, quite different in appearance. However, despite these differences, ensembles in all three potentials exhibit similar behaviour. This suggests that the conclusions inferred from the simulations are robust, relying only on basic topological properties, e.g., the existence of KAM tori and cantori. Generic ensembles of initial conditions, corresponding to stochastic orbits, exhibit a rapid coarse-grained approach towards a near-invariant distribution on a time-scale short compared with t_H, with a rate, Lambda, that exhibits a direct correlation with the value of the Liapounov exponent, chi. However, this near-invariant distribution does not correspond to the true invariant measure. If this distribution is evolved for much longer time-scales, one sees systematic evolutionary effects associated with diffusion through cantori, which on short time-scales divide stochastic orbits into two distinct classes, namely confined and unconfined. For the deterministic simulations described herein, the time-scale for this diffusion is much longer than t_H, although perturbations associated with external and/or internal irregularities can drastically accelerate this process. A principal tool in the analysis is the notion of a local Liapounov exponent, which provides a statistical characterization of the overall instability of stochastic orbits over finite time intervals. In particular, there is a precise sense in which confined stochastic orbits are less unstable, with smaller local Liapounov exponents, than are unconfined stochastic orbits.
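
    The notion of a local (finite-time) Liapounov exponent can be illustrated with the standard two-orbit renormalisation estimate. The Python sketch below uses the Henon-Heiles potential as a stand-in for the paper's two-dimensional potentials; the initial condition, window length, and separation are illustrative assumptions.

    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, s):
        # Henon-Heiles: Phi = 0.5*(x^2 + y^2) + x^2*y - y^3/3
        x, y, px, py = s
        return [px, py, -(x + 2.0 * x * y), -(y + x * x - y * y)]

    def local_exponents(s0, d0=1e-8, window=5.0, n_windows=100):
        a = np.array(s0, dtype=float)
        b = a + np.array([d0, 0.0, 0.0, 0.0])
        exps = []
        for _ in range(n_windows):
            a = solve_ivp(rhs, (0.0, window), a, rtol=1e-10, atol=1e-12).y[:, -1]
            b = solve_ivp(rhs, (0.0, window), b, rtol=1e-10, atol=1e-12).y[:, -1]
            d = np.linalg.norm(b - a)
            exps.append(np.log(d / d0) / window)   # log-growth over one finite window
            b = a + (b - a) * (d0 / d)             # renormalise the separation
        return np.array(exps)

    # initial condition near the escape energy (E ~ 0.164), where much of phase space is stochastic
    chi_local = local_exponents([0.0, -0.15, 0.55, 0.0])
    print("mean local exponent:", chi_local.mean(), " spread:", chi_local.std())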

  18. Regolith Evolved Gas Analyzer

    NASA Technical Reports Server (NTRS)

    Hoffman, John H.; Hedgecock, Jud; Nienaber, Terry; Cooper, Bonnie; Allen, Carlton; Ming, Doug

    2000-01-01

    The Regolith Evolved Gas Analyzer (REGA) is a high-temperature furnace and mass spectrometer instrument for determining the mineralogical composition and reactivity of soil samples. REGA provides key mineralogical and reactivity data that is needed to understand the soil chemistry of an asteroid, which then aids in determining in-situ which materials should be selected for return to earth. REGA is capable of conducting a number of direct soil measurements that are unique to this instrument. These experimental measurements include: (1) Mass spectrum analysis of evolved gases from soil samples as they are heated from ambient temperature to 900 C; and (2) Identification of liberated chemicals, e.g., water, oxygen, sulfur, chlorine, and fluorine. REGA would be placed on the surface of a near earth asteroid. It is an autonomous instrument that is controlled from earth but does the analysis of regolith materials automatically. The REGA instrument consists of four primary components: (1) a flight-proven mass spectrometer, (2) a high-temperature furnace, (3) a soil handling system, and (4) a microcontroller. An external arm containing a scoop or drill gathers regolith samples. A sample is placed in the inlet orifice where the finest-grained particles are sifted into a metering volume and subsequently moved into a crucible. A movable arm then places the crucible in the furnace. The furnace is closed, thereby sealing the inner volume to collect the evolved gases for analysis. Owing to the very low g forces on an asteroid compared to Mars or the moon, the sample must be moved from inlet to crucible by mechanical means rather than by gravity. As the soil sample is heated through a programmed pattern, the gases evolved at each temperature are passed through a transfer tube to the mass spectrometer for analysis and identification. Return data from the instrument will lead to new insights and discoveries including: (1) Identification of the molecular masses of all of the gases

  19. Evolving artificial metalloenzymes via random mutagenesis

    NASA Astrophysics Data System (ADS)

    Yang, Hao; Swartz, Alan M.; Park, Hyun June; Srivastava, Poonam; Ellis-Guardiola, Ken; Upp, David M.; Lee, Gihoon; Belsare, Ketaki; Gu, Yifan; Zhang, Chen; Moellering, Raymond E.; Lewis, Jared C.

    2018-03-01

    Random mutagenesis has the potential to optimize the efficiency and selectivity of protein catalysts without requiring detailed knowledge of protein structure; however, introducing synthetic metal cofactors complicates the expression and screening of enzyme libraries, and activity arising from free cofactor must be eliminated. Here we report an efficient platform to create and screen libraries of artificial metalloenzymes (ArMs) via random mutagenesis, which we use to evolve highly selective dirhodium cyclopropanases. Error-prone PCR and combinatorial codon mutagenesis enabled multiplexed analysis of random mutations, including at sites distal to the putative ArM active site that are difficult to identify using targeted mutagenesis approaches. Variants that exhibited significantly improved selectivity for each of the cyclopropane product enantiomers were identified, and higher activity than previously reported ArM cyclopropanases obtained via targeted mutagenesis was also observed. This improved selectivity carried over to other dirhodium-catalysed transformations, including N-H, S-H and Si-H insertion, demonstrating that ArMs evolved for one reaction can serve as starting points to evolve catalysts for others.

  20. Coevolution of bed surface patchiness and channel morphology: 2. Numerical experiments

    USGS Publications Warehouse

    Nelson, Peter A.; McDonald, Richard R.; Nelson, Jonathan M.; Dietrich, William E.

    2015-01-01

    In gravel bed rivers, bed topography and the bed surface grain size distribution evolve simultaneously, but it is not clear how feedbacks between topography and grain sorting affect channel morphology. In this, the second of a pair of papers examining interactions between bed topography and bed surface sorting in gravel bed rivers, we use a two-dimensional morphodynamic model to perform numerical experiments designed to explore the coevolution of both free and forced bars and bed surface patches. Model runs were carried out on a computational grid simulating a 200 m long, 2.75 m wide, straight, rectangular channel, with an initially flat bed at a slope of 0.0137. Over five numerical experiments, we varied (a) whether an obstruction was present, (b) whether the sediment was a gravel mixture or a single size, and (c) whether the bed surface grain size feeds back on the hydraulic roughness field. Experiments with channel obstructions developed a train of alternate bars that became stationary and were connected to the obstruction. Freely migrating alternate bars formed in the experiments without channel obstructions. Simulations incorporating roughness feedbacks between the bed surface and flow field produced flatter, broader, and longer bars than simulations using constant roughness or uniform sediment. Our findings suggest that patches are not simply a by-product of bed topography, but they interact with the evolving bed and influence morphologic evolution.

  1. The evolving Planck mass in classically scale-invariant theories

    NASA Astrophysics Data System (ADS)

    Kannike, K.; Raidal, M.; Spethmann, C.; Veermäe, H.

    2017-04-01

    We consider classically scale-invariant theories with non-minimally coupled scalar fields, where the Planck mass and the hierarchy of physical scales are dynamically generated. The classical theories possess a fixed point, where scale invariance is spontaneously broken. In these theories, however, the Planck mass becomes unstable in the presence of explicit sources of scale invariance breaking, such as non-relativistic matter and cosmological constant terms. We quantify the constraints on such classical models from Big Bang Nucleosynthesis that lead to an upper bound on the non-minimal coupling and require trans-Planckian field values. We show that quantum corrections to the scalar potential can stabilise the fixed point close to the minimum of the Coleman-Weinberg potential. The time-averaged motion of the evolving fixed point is strongly suppressed, thus the limits on the evolving gravitational constant from Big Bang Nucleosynthesis and other measurements do not presently constrain this class of theories. Field oscillations around the fixed point, if not damped, contribute to the dark matter density of the Universe.

  2. Observations and Numerical Modeling of the Jovian Ribbon

    NASA Technical Reports Server (NTRS)

    Cosentino, R. G.; Simon, A.; Morales-Juberias, R.; Sayanagi, K. M.

    2015-01-01

    Multiple wavelength observations made by the Hubble Space Telescope in early 2007 show the presence of a wavy, high-contrast feature in Jupiter's atmosphere near 30 degrees North. The "Jovian Ribbon," best seen at 410 nanometers, irregularly undulates in latitude and is time-variable in appearance. A meridional intensity gradient algorithm was applied to the observations to track the Ribbon's contour. Spectral analysis of the contour revealed that the Ribbon's structure is a combination of several wavenumbers ranging from k = 8-40. The Ribbon is a dynamic structure that has been observed to have spectral power for dominant wavenumbers which vary over a time period of one month. The presence of the Ribbon correlates with periods when the velocity of the westward jet at the same location is highest. We conducted numerical simulations to investigate the stability of westward jets of varying speed, vertical shear, and background static stability to different perturbations. A Ribbon-like morphology was best reproduced with a 35 m/s westward jet that decreases in amplitude for pressures greater than 700 hectopascals and a background static stability of N = 0.005 per second, perturbed by heat pulses constrained to latitudes south of 30 degrees North. Additionally, the simulated feature had wavenumbers that qualitatively matched observations and evolved throughout the simulation, reproducing the Jovian Ribbon's dynamic structure.
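
    A minimal Python sketch of the meridional-gradient contour tracking and wavenumber analysis described above, run on synthetic data. The function names, latitude band, and synthetic "ribbon" image are illustrative assumptions, not the authors' pipeline.

    import numpy as np

    def ribbon_contour(image, lats, lat_min=25.0, lat_max=35.0):
        """Latitude of the strongest meridional intensity gradient at each longitude."""
        band = (lats >= lat_min) & (lats <= lat_max)
        grad = np.gradient(image[band, :], lats[band], axis=0)
        return lats[band][np.argmax(np.abs(grad), axis=0)]

    def zonal_power(contour):
        """Power spectrum of the contour versus zonal wavenumber k (cycles per 360 deg)."""
        return np.abs(np.fft.rfft(contour - contour.mean())) ** 2

    # synthetic test: a brightness boundary near 30 deg N undulating with wavenumber k = 12
    lats = np.linspace(20.0, 40.0, 201)                    # 0.1 deg spacing
    lons = np.linspace(0.0, 360.0, 360, endpoint=False)
    boundary = 30.0 + 1.5 * np.sin(np.radians(12.0 * lons))
    image = 1.0 / (1.0 + np.exp((lats[:, None] - boundary[None, :]) / 0.5))

    contour = ribbon_contour(image, lats)
    print("dominant wavenumber:", np.argmax(zonal_power(contour)[1:]) + 1)   # -> 12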

  3. Polarization and studies of evolved star mass loss

    NASA Astrophysics Data System (ADS)

    Sargent, Benjamin; Srinivasan, Sundar; Riebel, David; Meixner, Margaret

    2012-05-01

    Polarization studies of astronomical dust have proven very useful in constraining its properties. Such studies are used to constrain the spatial arrangement, shape, composition, and optical properties of astronomical dust grains. Here we explore possible connections between astronomical polarization observations and our studies of mass loss from evolved stars. We are studying evolved star mass loss in the Large Magellanic Cloud (LMC) by using photometry from the Surveying the Agents of a Galaxy's Evolution (SAGE; PI: M. Meixner) Spitzer Space Telescope Legacy program. We use the radiative transfer program 2Dust to create our Grid of Red supergiant and Asymptotic giant branch ModelS (GRAMS), in order to model this mass loss. To model the emission of polarized light from evolved stars, however, we appeal to other radiative transfer codes. We probe how polarization observations might be used to constrain the dust shell and dust grain properties of the samples of evolved stars we are studying.

  4. Numerical studies of the KP line-solitons

    NASA Astrophysics Data System (ADS)

    Chakravarty, S.; McDowell, T.; Osborne, M.

    2017-03-01

    The Kadomtsev-Petviashvili (KP) equation admits a class of solitary wave solutions localized along distinct rays in the xy-plane, called the line-solitons, which describe the interaction of shallow water waves on a flat surface. These wave interactions have been observed on long, flat beaches, as well as have been recreated in laboratory experiments. In this paper, the line-solitons are investigated via direct numerical simulations of the KP equation, and the interactions of the evolved solitary wave patterns are studied. The objective is to obtain greater insight into solitary wave interactions in shallow water and to determine the extent the KP equation is a good model in describing these nonlinear interactions.

  5. The evolution of resource adaptation: how generalist and specialist consumers evolve.

    PubMed

    Ma, Junling; Levin, Simon A

    2006-07-01

    Why and how specialist and generalist strategies evolve are important questions in evolutionary ecology. In this paper, with the method of adaptive dynamics and evolutionary branching, we identify conditions that select for specialist and generalist strategies. Generally, generalist strategies evolve if there is a switching benefit; specialists evolve if there is a switching cost. If the switching cost is large, specialists always evolve. If the switching cost is small, even though the consumer will first evolve toward a generalist strategy, it will eventually branch into two specialists.

  6. Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation

    NASA Astrophysics Data System (ADS)

    Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab

    2015-05-01

    The 3D Poisson equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from the FDM is solved iteratively using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared to examine the differences between them. The main objective was to analyze the computational time required by both methods for different grid sizes and to parallelize the Jacobi method in order to reduce the computational time. In general, the SGS method converges faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of using a parallel Jacobi (PJ) method is assessed relative to the SGS method. The MATLAB Parallel/Distributed computing environment is used, and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may take prohibitively long to converge. Yet, the PJ method reduces the computational time to some extent for large grid sizes.
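
    A minimal Python sketch of the two iterations compared above (the study itself used MATLAB), for a Dirichlet 3D Poisson problem on a cubic grid; the grid size, source term, and number of sweeps are illustrative. The Jacobi sweep updates every interior node from the previous iterate (and so parallelizes naturally), while the Gauss-Seidel sweep uses new values as soon as they are available.

    import numpy as np

    def jacobi_sweep(phi, f, h):
        new = phi.copy()
        new[1:-1, 1:-1, 1:-1] = (phi[:-2, 1:-1, 1:-1] + phi[2:, 1:-1, 1:-1] +
                                 phi[1:-1, :-2, 1:-1] + phi[1:-1, 2:, 1:-1] +
                                 phi[1:-1, 1:-1, :-2] + phi[1:-1, 1:-1, 2:] -
                                 h * h * f[1:-1, 1:-1, 1:-1]) / 6.0
        return new

    def gauss_seidel_sweep(phi, f, h):
        n = phi.shape[0] - 2
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                for k in range(1, n + 1):
                    phi[i, j, k] = (phi[i - 1, j, k] + phi[i + 1, j, k] +
                                    phi[i, j - 1, k] + phi[i, j + 1, k] +
                                    phi[i, j, k - 1] + phi[i, j, k + 1] -
                                    h * h * f[i, j, k]) / 6.0
        return phi

    n = 20                                     # interior nodes per direction
    h = 1.0 / (n + 1)
    f = np.full((n + 2, n + 2, n + 2), -1.0)   # uniform source, phi = 0 on the boundary

    for sweep, name in [(jacobi_sweep, "Jacobi"), (gauss_seidel_sweep, "Gauss-Seidel")]:
        phi = np.zeros((n + 2, n + 2, n + 2))
        for it in range(200):
            phi = sweep(phi, f, h)
        print(name, "centre value:", phi[n // 2 + 1, n // 2 + 1, n // 2 + 1])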

  7. How People Interact in Evolving Online Affiliation Networks

    NASA Astrophysics Data System (ADS)

    Gallos, Lazaros K.; Rybski, Diego; Liljeros, Fredrik; Havlin, Shlomo; Makse, Hernán A.

    2012-07-01

    The study of human interactions is of central importance for understanding the behavior of individuals, groups, and societies. Here, we observe the formation and evolution of networks by monitoring the addition of all new links, and we analyze quantitatively the tendencies used to create ties in these evolving online affiliation networks. We show that an accurate estimation of these probabilistic tendencies can be achieved only by following the time evolution of the network. Inferences about the reason for the existence of links using statistical analysis of network snapshots must therefore be made with great caution. Here, we start by characterizing every single link when the tie was established in the network. This information allows us to describe the probabilistic tendencies of tie formation and extract meaningful sociological conclusions. We also find significant differences in behavioral traits in the social tendencies among individuals according to their degree of activity, gender, age, popularity, and other attributes. For instance, in the particular data sets analyzed here, we find that women reciprocate connections 3 times as much as men and that this difference increases with age. Men tend to connect with the most popular people more often than women do, across all ages. On the other hand, triangular tie tendencies are similar, independent of gender, and show an increase with age. These results require further validation in other social settings. Our findings can be useful to build models of realistic social network structures and to discover the underlying laws that govern establishment of ties in evolving social networks.

  8. Children’s Mapping between Non-Symbolic and Symbolic Numerical Magnitudes and Its Association with Timed and Untimed Tests of Mathematics Achievement

    PubMed Central

    Brankaer, Carmen; Ghesquière, Pol; De Smedt, Bert

    2014-01-01

    The ability to map between non-symbolic numerical magnitudes and Arabic numerals has been put forward as a key factor in children’s mathematical development. This mapping ability has been mainly examined indirectly by looking at children’s performance on a symbolic magnitude comparison task. The present study investigated mapping in a more direct way by using a task in which children had to choose which of two choice quantities (Arabic digits or dot arrays) matched the target quantity (dot array or Arabic digit), thereby focusing on small quantities ranging from 1 to 9. We aimed to determine the development of mapping over time and its relation to mathematics achievement. Participants were 36 first graders (M = 6 years 8 months) and 46 third graders (M = 8 years 8 months) who all completed mapping tasks, symbolic and non-symbolic magnitude comparison tasks and standardized timed and untimed tests of mathematics achievement. Findings revealed that children are able to map between non-symbolic and symbolic representations and that this mapping ability develops over time. Moreover, we found that children’s mapping ability is related to timed and untimed measures of mathematics achievement, over and above the variance accounted for by their numerical magnitude comparison skills. PMID:24699664

  9. The space-time solution element method: A new numerical approach for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Scott, James R.; Chang, Sin-Chung

    1995-01-01

    This paper is one of a series of papers describing the development of a new numerical method for the Navier-Stokes equations. Unlike conventional numerical methods, the current method concentrates on the discrete simulation of both the integral and differential forms of the Navier-Stokes equations. Conservation of mass, momentum, and energy in space-time is explicitly provided for through a rigorous enforcement of both the integral and differential forms of the governing conservation laws. Using local polynomial expansions to represent the discrete primitive variables on each cell, fluxes at cell interfaces are evaluated and balanced using exact functional expressions. No interpolation or flux limiters are required. Because of the generality of the current method, it applies equally to the steady and unsteady Navier-Stokes equations. In this paper, we generalize and extend the authors' 2-D, steady state implicit scheme. A general closure methodology is presented so that all terms up through a given order in the local expansions may be retained. The scheme is also extended to nonorthogonal Cartesian grids. Numerous flow fields are computed and results are compared with known solutions. The high accuracy of the scheme is demonstrated through its ability to accurately resolve developing boundary layers on coarse grids. Finally, we discuss applications of the current method to the unsteady Navier-Stokes equations.

  10. Direct numerical simulation of sheared turbulent flow

    NASA Technical Reports Server (NTRS)

    Harris, Vascar G.

    1994-01-01

    The summer assignment to study sheared turbulent flow was divided into three phases: (1) literature survey, (2) computational familiarization, and (3) pilot computational studies. The governing equations of fluid dynamics, the Navier-Stokes equations, describe the velocity, pressure, and density as functions of position and time. In principle, when combined with conservation equations for mass and energy and the thermodynamic state of the fluid, a determinate system can be obtained. In practice, the Navier-Stokes equations have not been solved in general because of their nonlinear nature and complexity. Consequently, experiments have remained essential for gaining insight into the physics of the problem. Reasonable computer simulations of the problem have become possible as the speed and storage of computers have evolved. The importance of the microstructure of the turbulence dictates the need for high-resolution grids to extract solutions that contain the physical mechanisms essential to a successful simulation. The recognized breakthrough occurred with the pioneering work of Orszag and Patterson, in which the Navier-Stokes equations were solved numerically using a time-saving technique of toggling between physical and wave space, known as a spectral method. An equally intractable analytical problem, with the same quasi-chaotic nature as turbulence, is the three-body problem, which was studied computationally as a first step this summer. This study was followed by computations of a two-dimensional (2D) free shear layer.

  11. Evolving fuzzy rules for relaxed-criteria negotiation.

    PubMed

    Sim, Kwang Mong

    2008-12-01

    In the literature on automated negotiation, very few negotiation agents are designed with the flexibility to slightly relax their negotiation criteria to reach a consensus more rapidly and with more certainty. Furthermore, these relaxed-criteria negotiation agents were not equipped with the ability to enhance their performance by learning and evolving their relaxed-criteria negotiation rules. The impetus of this work is designing market-driven negotiation agents (MDAs) that not only have the flexibility of relaxing bargaining criteria using fuzzy rules, but can also evolve their structures by learning new relaxed-criteria fuzzy rules to improve their negotiation outcomes as they participate in negotiations in more e-markets. To this end, an evolutionary algorithm for adapting and evolving relaxed-criteria fuzzy rules was developed. Implementing the idea in a testbed, two kinds of experiments for evaluating and comparing EvEMDAs (MDAs with relaxed-criteria rules that are evolved using the evolutionary algorithm) and EMDAs (MDAs with relaxed-criteria rules that are manually constructed) were carried out through stochastic simulations. Empirical results show that: 1) EvEMDAs generally outperformed EMDAs in different types of e-markets and 2) the negotiation outcomes of EvEMDAs generally improved as they negotiated in more e-markets.

  12. A Matter of Timing: Identifying Significant Multi-Dose Radiotherapy Improvements by Numerical Simulation and Genetic Algorithm Search

    PubMed Central

    Angus, Simon D.; Piotrowska, Monika Joanna

    2014-01-01

    Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for relatively coarse search; well beyond the capacity of traditional in-vitro methods. In contrast, high fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model a constrained, non-linear search for better-performing candidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average of 9.4% (max benefit 16.5%) and 7.1% (13.3%) improvement (reduction) in tumor cell count compared to the two benchmarks, respectively. Noticing that a convergent phenomenon of the top performing protocols was their temporal synchronicity, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17–18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile, and highly cost

  13. A matter of timing: identifying significant multi-dose radiotherapy improvements by numerical simulation and genetic algorithm search.

    PubMed

    Angus, Simon D; Piotrowska, Monika Joanna

    2014-01-01

    Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for relatively coarse search; well beyond the capacity of traditional in-vitro methods. In contrast, high fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model a constrained, non-linear search for better-performing candidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average of 9.4% (max benefit 16.5%) and 7.1% (13.3%) improvement (reduction) in tumor cell count compared to the two benchmarks, respectively. Noticing that a convergent phenomenon of the top performing protocols was their temporal synchronicity, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17-18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile, and highly cost-effective means
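
    A minimal genetic-algorithm sketch over inter-fraction time gaps, in the spirit of the protocol search described above. The fitness function here is only a toy surrogate (it rewards gaps near a hypothetical 17.5 h optimum); the paper's fitness comes from the calibrated EMT6/Ro spheroid simulation, which is not reproduced here.

    ```python
    import random

    N_FRACTIONS, POP, GENS = 10, 40, 60
    GAP_MIN, GAP_MAX = 6.0, 30.0          # allowed inter-fraction gaps (hours)

    def toy_fitness(gaps):
        # Surrogate objective: penalise deviation from 17.5 h periodicity.
        return -sum((g - 17.5) ** 2 for g in gaps)

    def random_protocol():
        return [random.uniform(GAP_MIN, GAP_MAX) for _ in range(N_FRACTIONS - 1)]

    def mutate(gaps, sigma=2.0):
        return [min(GAP_MAX, max(GAP_MIN, g + random.gauss(0, sigma))) for g in gaps]

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    pop = [random_protocol() for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=toy_fitness, reverse=True)
        parents = pop[: POP // 2]                     # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP - len(parents))]
        pop = parents + children

    best = max(pop, key=toy_fitness)
    print("best gaps (h):", [round(g, 1) for g in best])
    ```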

  14. Surrogate-assisted identification of influences of network construction on evolving weighted functional networks

    NASA Astrophysics Data System (ADS)

    Stahn, Kirsten; Lehnertz, Klaus

    2017-12-01

    We aim at identifying factors that may affect the characteristics of evolving weighted networks derived from empirical observations. To this end, we employ various chains of analysis that are often used in field studies for a data-driven derivation and characterization of such networks. As an example, we consider fully connected, weighted functional brain networks before, during, and after epileptic seizures that we derive from multichannel electroencephalographic data recorded from epilepsy patients. For these evolving networks, we estimate clustering coefficient and average shortest path length in a time-resolved manner. Lastly, we make use of surrogate concepts that we apply at various levels of the chain of analysis to assess to what extent network characteristics are dominated by properties of the electroencephalographic recordings and/or the evolving weighted networks, which may be accessible more easily. We observe that characteristics are differently affected by the unavoidable referencing of the electroencephalographic recording, by the time-series-analysis technique used to derive the properties of network links, and whether or not networks were normalized. Importantly, for the majority of analysis settings, we observe temporal evolutions of network characteristics to merely reflect the temporal evolutions of mean interaction strengths. Such a property of the data may be accessible more easily, which would render the weighted network approach—as used here—as an overly complicated description of simple aspects of the data.
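
    A minimal sketch of one analysis chain of the kind examined above: derive a fully connected weighted network per time window from multichannel signals (synthetic data below stands in for EEG), then track clustering coefficient and average shortest path length over time. The window length and the use of absolute Pearson correlation as the link-strength estimator are illustrative choices, not the authors' settings.

    ```python
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)
    n_channels, n_samples, win = 8, 4000, 500
    data = rng.standard_normal((n_channels, n_samples))   # stand-in for EEG

    clustering, path_length = [], []
    for start in range(0, n_samples - win + 1, win):
        seg = data[:, start:start + win]
        w = np.abs(np.corrcoef(seg))                      # link weights in [0, 1]
        np.fill_diagonal(w, 0.0)
        G = nx.from_numpy_array(w)
        clustering.append(nx.average_clustering(G, weight="weight"))
        # Convert strengths to distances so "shortest" means "strongest coupling".
        for _, _, d in G.edges(data=True):
            d["dist"] = 1.0 / max(d["weight"], 1e-12)
        path_length.append(nx.average_shortest_path_length(G, weight="dist"))

    print(clustering[:3], path_length[:3])
    ```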

  15. Numerical Methods for the Analysis of Power Transformer Tank Deformation and Rupture Due to Internal Arcing Faults

    PubMed Central

    Yan, Chenguang; Hao, Zhiguo; Zhang, Song; Zhang, Baohui; Zheng, Tao

    2015-01-01

    Power transformer rupture and fire resulting from an arcing fault inside the tank usually leads to significant security risks and serious economic loss. In order to reveal the essence of tank deformation or explosion, this paper presents a 3-D numerical computational tool to simulate the structural dynamic behavior due to overpressure inside the transformer tank. To illustrate the effectiveness of the proposed method, 17.3 MJ and 6.3 MJ arcing faults were simulated on a real full-scale 360 MVA/220 kV oil-immersed transformer model. By employing the finite element method, the transformer internal overpressure distribution, wave propagation and von Mises stress were solved. The numerical results indicate that the pressure increase and the mechanical stress distribution are non-uniform, and that the stress tends to concentrate on the connecting parts of the tank as the fault evolves in time. Given this feature, it becomes possible to reduce the risk of transformer tank rupture through limiting the fault energy and enhancing the mechanical strength of the local stress-concentration areas. The theoretical model and numerical simulation method proposed in this paper can be used as a substitute for risky and costly field tests in fault overpressure analysis and tank mitigation design of transformers. PMID:26230392

  16. Numerical Methods for the Analysis of Power Transformer Tank Deformation and Rupture Due to Internal Arcing Faults.

    PubMed

    Yan, Chenguang; Hao, Zhiguo; Zhang, Song; Zhang, Baohui; Zheng, Tao

    2015-01-01

    Power transformer rupture and fire resulting from an arcing fault inside the tank usually leads to significant security risks and serious economic loss. In order to reveal the essence of tank deformation or explosion, this paper presents a 3-D numerical computational tool to simulate the structural dynamic behavior due to overpressure inside the transformer tank. To illustrate the effectiveness of the proposed method, 17.3 MJ and 6.3 MJ arcing faults were simulated on a real full-scale 360 MVA/220 kV oil-immersed transformer model. By employing the finite element method, the transformer internal overpressure distribution, wave propagation and von Mises stress were solved. The numerical results indicate that the pressure increase and the mechanical stress distribution are non-uniform, and that the stress tends to concentrate on the connecting parts of the tank as the fault evolves in time. Given this feature, it becomes possible to reduce the risk of transformer tank rupture through limiting the fault energy and enhancing the mechanical strength of the local stress-concentration areas. The theoretical model and numerical simulation method proposed in this paper can be used as a substitute for risky and costly field tests in fault overpressure analysis and tank mitigation design of transformers.

  17. Nonlinear dynamics and numerical uncertainties in CFD

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.

    1996-01-01

    The application of nonlinear dynamics to improve the understanding of numerical uncertainties in computational fluid dynamics (CFD) is reviewed. Elementary examples in the use of dynamics to explain the nonlinear phenomena and spurious behavior that occur in numerics are given. The role of dynamics in the understanding of long time behavior of numerical integrations and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in CFD is explained. The study is complemented with spurious behavior observed in CFD computations.

  18. Seasonal variation of residence time in spring and groundwater evaluated by CFCs and numerical simulation in mountainous headwater catchment

    NASA Astrophysics Data System (ADS)

    Tsujimura, Maki; Watanabe, Yasuto; Ikeda, Koichi; Yano, Shinjiro; Abe, Yutaka

    2016-04-01

    Headwater catchments in mountainous regions are the most important recharge areas for surface and subsurface waters, and time information on the water is essential for understanding hydrological processes in these catchments. However, there has been little research evaluating the variation of subsurface-water residence time in time and space at mountainous headwaters, especially those with steep slopes. We investigated the temporal variation of the residence time of spring and groundwater, together with tracing of hydrological flow processes, in mountainous catchments underlain by granite in Yamanashi Prefecture, central Japan. We conducted intensive hydrological monitoring and water sampling of spring, stream and ground waters in high-flow and low-flow seasons from 2008 through 2013 in the River Jingu watershed, underlain by granite, with an area of approximately 15 km^2 and elevation ranging from 950 m to 2000 m. CFC concentrations, stable isotopic ratios of oxygen-18 and deuterium, and inorganic solute concentrations were determined for all water samples. Also, a numerical simulation was conducted to reproduce the average residence times of the spring and groundwater. The residence time of the spring water estimated from the CFC concentrations ranged from 10 years to 60 years across the watershed, and it was higher (older) during the low-flow season and lower (younger) during the high-flow season. We tried to reproduce the seasonal change of the residence time of the spring water by numerical simulation, and the calculated residence time of the spring water and the discharge of the stream agreed well with the observed values. The groundwater level was higher during the high-flow season and the groundwater flowed dominantly through the weathered granite with higher permeability, whereas the level was lower during the low-flow season and the groundwater flowed dominantly through the fresh granite with lower permeability. This caused the seasonal variation of the residence time of the spring

  19. A NUMERICAL SCHEME FOR SPECIAL RELATIVISTIC RADIATION MAGNETOHYDRODYNAMICS BASED ON SOLVING THE TIME-DEPENDENT RADIATIVE TRANSFER EQUATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohsuga, Ken; Takahashi, Hiroyuki R.

    2016-02-20

    We develop a numerical scheme for solving the equations of fully special relativistic radiation magnetohydrodynamics (MHD), in which the frequency-integrated, time-dependent radiation transfer equation is solved to calculate the specific intensity. The radiation energy density, the radiation flux, and the radiation stress tensor are obtained by the angular quadrature of the intensity. In the present method, conservation of total mass, momentum, and energy of the radiation magnetofluids is guaranteed. We treat not only isotropic scattering but also Thomson scattering. The MHD part of the numerical method is the same as in our previous work. The advection terms are explicitly solved, and the source terms, which describe the gas–radiation interaction, are implicitly integrated. Our code is suitable for massively parallel computing. We show that our code gives reasonable results in several numerical tests of propagating radiation and radiation hydrodynamics. In particular, the correct solution is obtained even in the optically very thin or moderately thin regimes, and the special relativistic effects are nicely reproduced.
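
    A minimal sketch of the angular-quadrature step described above: given the frequency-integrated specific intensity sampled on a set of directions with quadrature weights, form the radiation energy density, flux and stress tensor. The six-direction quadrature and the intensity values below are crude illustrative choices, not the scheme's actual angular grid.

    ```python
    import numpy as np

    c = 2.99792458e10                      # speed of light [cm/s]

    # Crude quadrature: 6 coordinate directions, equal weights summing to 4*pi.
    dirs = np.array([[1, 0, 0], [-1, 0, 0],
                     [0, 1, 0], [0, -1, 0],
                     [0, 0, 1], [0, 0, -1]], dtype=float)
    weights = np.full(6, 4.0 * np.pi / 6.0)

    # Hypothetical frequency-integrated intensities along each direction.
    I = np.array([1.0, 1.0, 0.5, 0.5, 0.2, 0.2])

    E_rad = np.sum(weights * I) / c                                  # energy density
    F_rad = np.einsum("a,a,ai->i", weights, I, dirs)                 # flux vector
    P_rad = np.einsum("a,a,ai,aj->ij", weights, I, dirs, dirs) / c   # stress tensor

    print(E_rad, F_rad, np.trace(P_rad) / E_rad)                     # trace(P)/E -> 1
    ```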

  20. Another self-similar blast wave: Early time asymptote with shock heated electrons and high thermal conductivity

    NASA Technical Reports Server (NTRS)

    Cox, D. P.; Edgar, R. J.

    1982-01-01

    Accurate approximations are presented for the self-similar structures of nonradiating blast waves with adiabatic ions, isothermal electrons, and equal ion and electron temperatures at the shock. The cases considered evolve in cavities with power law ambient densities (including the uniform density case) and have negligible external pressure. The results provide the early time asymptote for systems with shock heating of electrons and strong thermal conduction. In addition, they provide analytical results against which two-fluid numerical hydrodynamic codes can be checked.

  1. Numerical Tests of the Cosmic Censorship Conjecture with Collisionless Matter Collapse

    NASA Astrophysics Data System (ADS)

    Okounkova, Maria; Hemberger, Daniel; Scheel, Mark

    2016-03-01

    We present our results of numerical tests of the weak cosmic censorship conjecture (CCC), which states that generically, singularities of gravitational collapse are hidden within black holes, and the hoop conjecture, which states that black holes form when and only when a mass M gets compacted into a region whose circumference in every direction is C <= 4πM. We built a smooth particle methods module in SpEC, the Spectral Einstein Code, to simultaneously evolve spacetime and collisionless matter configurations. We monitor R_{abcd}R^{abcd} for singularity formation, and probe for the existence of apparent horizons. We include in our simulations the prolate spheroid configurations considered in Shapiro and Teukolsky's 1991 numerical study of the CCC. This research was partially supported by the Dominic Orr Fellowship at Caltech.

  2. Numerical models of jet disruption in cluster cooling flows

    NASA Technical Reports Server (NTRS)

    Loken, Chris; Burns, Jack O.; Roettiger, Kurt; Norman, Mike

    1993-01-01

    We present a coherent picture for the formation of the observed diverse radio morphological structures in dominant cluster galaxies based on the jet Mach number. Realistic, supersonic, steady-state cooling flow atmospheres are evolved numerically and then used as the ambient medium through which jets of various properties are propagated. Low Mach number jets effectively stagnate due to the ram pressure of the cooling flow atmosphere while medium Mach number jets become unstable and disrupt in the cooling flow to form amorphous structures. High Mach number jets manage to avoid disruption and are able to propagate through the cooling flow.

  3. Direct Numerical Simulation of Incompressible Pipe Flow Using a B-Spline Spectral Method

    NASA Technical Reports Server (NTRS)

    Loulou, Patrick; Moser, Robert D.; Mansour, Nagi N.; Cantwell, Brian J.

    1997-01-01

    A numerical method based on b-spline polynomials was developed to study incompressible flows in cylindrical geometries. A b-spline method has the advantages of possessing spectral accuracy and the flexibility of standard finite element methods. Using this method it was possible to ensure regularity of the solution near the origin, i.e. smoothness and boundedness. Because b-splines have compact support, it is also possible to remove b-splines near the center to alleviate the constraint placed on the time step by an overly fine grid. Using the natural periodicity in the azimuthal direction and approximating the streamwise direction as periodic, so-called time evolving flow, greatly reduced the cost and complexity of the computations. A direct numerical simulation of pipe flow was carried out using the method described above at a Reynolds number of 5600 based on diameter and bulk velocity. General knowledge of pipe flow and the availability of experimental measurements make pipe flow the ideal test case with which to validate the numerical method. Results indicated that high flatness levels of the radial component of velocity in the near wall region are physical; regions of high radial velocity were detected and appear to be related to high speed streaks in the boundary layer. Budgets of Reynolds stress transport equations showed close similarity with those of channel flow. However contrary to channel flow, the log layer of pipe flow is not homogeneous for the present Reynolds number. A topological method based on a classification of the invariants of the velocity gradient tensor was used. Plotting iso-surfaces of the discriminant of the invariants proved to be a good method for identifying vortical eddies in the flow field.
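
    A minimal sketch of the building block behind the method above: a clamped B-spline basis on a radial grid, evaluated here with SciPy's BSpline (an assumption of this sketch; the paper's own implementation is not specified). Removing interior knots near the axis, as mentioned above, simply coarsens the basis there; the knot vector below is an illustrative choice.

    ```python
    import numpy as np
    from scipy.interpolate import BSpline

    k = 3                                             # cubic B-splines
    interior = np.linspace(0.1, 0.9, 9)               # interior knots (illustrative)
    t = np.concatenate(([0.0] * (k + 1), interior, [1.0] * (k + 1)))
    n_basis = len(t) - k - 1                          # number of basis functions

    r = np.linspace(0.0, 1.0, 200)                    # radial grid
    basis = np.empty((n_basis, r.size))
    for i in range(n_basis):
        coeffs = np.zeros(n_basis)
        coeffs[i] = 1.0
        basis[i] = BSpline(t, coeffs, k, extrapolate=False)(r)

    # Partition of unity on [0, 1]: the basis functions sum to one everywhere.
    print(np.nanmax(np.abs(np.nansum(basis, axis=0) - 1.0)))
    ```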

  4. VINE-A NUMERICAL CODE FOR SIMULATING ASTROPHYSICAL SYSTEMS USING PARTICLES. I. DESCRIPTION OF THE PHYSICS AND THE NUMERICAL METHODS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetzstein, M.; Nelson, Andrew F.; Naab, T.

    2009-10-01

    We present a numerical code for simulating the evolution of astrophysical systems using particles to represent the underlying fluid flow. The code is written in Fortran 95 and is designed to be versatile, flexible, and extensible, with modular options that can be selected either at the time the code is compiled or at run time through a text input file. We include a number of general purpose modules describing a variety of physical processes commonly required in the astrophysical community and we expect that the effort required to integrate additional or alternate modules into the code will be small. In its simplest form the code can evolve the dynamical trajectories of a set of particles in two or three dimensions using a module which implements either a Leapfrog or Runge-Kutta-Fehlberg integrator, selected by the user at compile time. The user may choose to allow the integrator to evolve the system using individual time steps for each particle or with a single, global time step for all. Particles may interact gravitationally as N-body particles, and all or any subset may also interact hydrodynamically, using the smoothed particle hydrodynamic (SPH) method by selecting the SPH module. A third particle species can be included with a module to model massive point particles which may accrete nearby SPH or N-body particles. Such particles may be used to model, e.g., stars in a molecular cloud. Free boundary conditions are implemented by default, and a module may be selected to include periodic boundary conditions. We use a binary 'Press' tree to organize particles for rapid access in gravity and SPH calculations. Modules implementing an interface with special purpose 'GRAPE' hardware may also be selected to accelerate the gravity calculations. If available, forces obtained from the GRAPE coprocessors may be transparently substituted for those obtained from the tree, or both tree and GRAPE may be used as a combination GRAPE/tree code. The code may be run without

  5. Vine—A Numerical Code for Simulating Astrophysical Systems Using Particles. I. Description of the Physics and the Numerical Methods

    NASA Astrophysics Data System (ADS)

    Wetzstein, M.; Nelson, Andrew F.; Naab, T.; Burkert, A.

    2009-10-01

    We present a numerical code for simulating the evolution of astrophysical systems using particles to represent the underlying fluid flow. The code is written in Fortran 95 and is designed to be versatile, flexible, and extensible, with modular options that can be selected either at the time the code is compiled or at run time through a text input file. We include a number of general purpose modules describing a variety of physical processes commonly required in the astrophysical community and we expect that the effort required to integrate additional or alternate modules into the code will be small. In its simplest form the code can evolve the dynamical trajectories of a set of particles in two or three dimensions using a module which implements either a Leapfrog or Runge-Kutta-Fehlberg integrator, selected by the user at compile time. The user may choose to allow the integrator to evolve the system using individual time steps for each particle or with a single, global time step for all. Particles may interact gravitationally as N-body particles, and all or any subset may also interact hydrodynamically, using the smoothed particle hydrodynamic (SPH) method by selecting the SPH module. A third particle species can be included with a module to model massive point particles which may accrete nearby SPH or N-body particles. Such particles may be used to model, e.g., stars in a molecular cloud. Free boundary conditions are implemented by default, and a module may be selected to include periodic boundary conditions. We use a binary "Press" tree to organize particles for rapid access in gravity and SPH calculations. Modules implementing an interface with special purpose "GRAPE" hardware may also be selected to accelerate the gravity calculations. If available, forces obtained from the GRAPE coprocessors may be transparently substituted for those obtained from the tree, or both tree and GRAPE may be used as a combination GRAPE/tree code. The code may be run without
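
    A minimal kick-drift-kick leapfrog N-body sketch, illustrating the simplest of the integrator options described above (global time step, direct-summation gravity with a softening length). This is not VINE's implementation; it has no tree, SPH or GRAPE support, and all parameters are illustrative.

    ```python
    import numpy as np

    def accelerations(pos, mass, G=1.0, soft=1e-2):
        d = pos[None, :, :] - pos[:, None, :]               # pairwise separations
        r2 = np.sum(d * d, axis=-1) + soft**2
        inv_r3 = r2 ** -1.5
        np.fill_diagonal(inv_r3, 0.0)                       # no self-force
        return G * np.einsum("ij,j,ijk->ik", inv_r3, mass, d)

    def leapfrog_kdk(pos, vel, mass, dt, n_steps):
        acc = accelerations(pos, mass)
        for _ in range(n_steps):
            vel += 0.5 * dt * acc        # kick
            pos += dt * vel              # drift
            acc = accelerations(pos, mass)
            vel += 0.5 * dt * acc        # kick
        return pos, vel

    # Two-body circular orbit as a quick sanity check.
    pos = np.array([[0.5, 0.0, 0.0], [-0.5, 0.0, 0.0]])
    vel = np.array([[0.0, 0.5, 0.0], [0.0, -0.5, 0.0]])
    mass = np.array([0.5, 0.5])
    pos, vel = leapfrog_kdk(pos, vel, mass, dt=1e-3, n_steps=10_000)
    print(np.linalg.norm(pos[0] - pos[1]))   # separation stays close to 1
    ```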

  6. Numerical simulations of relativistic heavy-ion reactions

    NASA Astrophysics Data System (ADS)

    Daffin, Frank Cecil

    Bulk quantities of nuclear matter exist only in the compact bodies of the universe. There the crushing gravitational forces overcome the Coulomb repulsion in massive stellar collapses. Nuclear matter is subjected to high pressures and temperatures as shock waves propagate and burn their way through stellar cores. The bulk properties of nuclear matter are important parameters in the evolution of these collapses, some of which lead to nucleosynthesis. The nucleus is rich in physical phenomena. Above the Coulomb barrier, complex interactions lead to the distortion of, and as collision energies increase, the destruction of the nuclear volume. Of critical importance to the understanding of these events is an understanding of the aggregate microscopic processes which govern them. In an effort to understand relativistic heavy-ion reactions, the Boltzmann-Uehling-Uhlenbeck (Ueh33) (BUU) transport equation is used as the framework for a numerical model. In the years since its introduction, the numerical model has been instrumental in providing a coherent, microscopic, physical description of these complex, highly non-linear events. This treatise describes the background leading to the creation of our numerical model of the BUU transport equation, details of its numerical implementation, its application to the study of relativistic heavy-ion collisions, and some of the experimental observables used to compare calculated results to empirical results. The formalism evolves the one-body Wigner phase-space distribution of nucleons in time under the influence of a single-particle nuclear mean field interaction and a collision source term. This is essentially the familiar Boltzmann transport equation whose source term has been modified to address the Pauli exclusion principle. Two elements of the model allow extrapolation from the study of nuclear collisions to bulk quantities of nuclear matter: the modification of nucleon scattering cross sections in nuclear matter, and the

  7. TTLEM: Open access tool for building numerically accurate landscape evolution models in MATLAB

    NASA Astrophysics Data System (ADS)

    Campforts, Benjamin; Schwanghart, Wolfgang; Govers, Gerard

    2017-04-01

    Despite a growing interest in LEMs, accuracy assessment of the numerical methods they are based on has received little attention. Here, we present TTLEM, an open-access landscape evolution package designed for developing and testing your own scenarios and hypotheses. TTLEM uses a higher-order flux-limiting finite-volume method to simulate river incision and tectonic displacement. We show that this scheme significantly influences the evolution of simulated landscapes and the spatial and temporal variability of erosion rates. Moreover, it allows the simulation of lateral tectonic displacement on a fixed grid. Through a simple GUI, the software produces visual output of the evolving landscape during the model run. In this contribution, we illustrate numerical landscape evolution through a set of movies spanning different spatial and temporal scales. We focus on the erosional domain and use both spatially constant and variable input values for uplift, lateral tectonic shortening, erodibility and precipitation. Moreover, we illustrate the relevance of a stochastic approach for realistic hillslope response modelling. TTLEM is a fully open source software package, written in MATLAB and based on the TopoToolbox platform (topotoolbox.wordpress.com). Installation instructions can be found on this website and in the dedicated GitHub repository.
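
    TTLEM itself is a MATLAB/TopoToolbox package; the sketch below is only a schematic Python illustration of the governing detachment-limited stream-power law, dz/dt = U - K * A^m * S^n, advanced with a simple first-order upwind step on a 1-D river profile (TTLEM uses a higher-order flux-limiting scheme). All parameter values are illustrative.

    ```python
    import numpy as np

    nx, dx, dt = 200, 500.0, 500.0         # nodes, spacing [m], time step [yr]
    U, K, m, n = 1e-3, 1e-5, 0.5, 1.0      # uplift [m/yr], erodibility, exponents
    hack_c, hack_h = 6.7, 1.67             # Hack's-law drainage area A = c * x^h

    x = np.arange(nx) * dx                 # distance from the divide
    area = hack_c * np.maximum(x, dx) ** hack_h
    z = np.linspace(1000.0, 0.0, nx)       # initial profile; z[-1] is base level

    for _ in range(20_000):
        slope = np.maximum((z[:-1] - z[1:]) / dx, 0.0)   # downstream slope
        erosion = K * area[:-1] ** m * slope ** n
        z[:-1] += dt * (U - erosion)
        z[-1] = 0.0                                      # fixed base level

    print("steady-state relief [m]:", round(z[0] - z[-1], 1))
    ```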

  8. Numerical analysis of the photo-injection time-of-flight curves in molecularly doped polymers

    NASA Astrophysics Data System (ADS)

    Tyutnev, A. P.; Ikhsanov, R. Sh.; Saenko, V. S.; Nikerov, D. V.

    2018-03-01

    We have performed a numerical analysis of charge carrier transport in a specific molecularly doped polymer using the multiple trapping model. The computations covered a wide range of applied electric fields, temperatures and, most importantly, initial energies of the photo-injected one-sign carriers (in our case, holes). Special attention has been given to the comparison of time-of-flight curves measured by the photo-injection and radiation-induced techniques, which has led to a problematic situation concerning the interpretation of the experimental data. Computational results have been compared with both analytical and experimental results available in the literature.
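
    A minimal Monte Carlo sketch of the multiple-trapping picture used above: a photo-injected carrier alternates between free drift and thermal release from traps drawn from an exponential density of states. All parameter values are illustrative placeholders, not fitted to any molecularly doped polymer, and the full current transient would additionally require the time-resolved free-carrier density.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    kT = 0.025          # eV
    E0 = 0.08           # eV, width of the exponential trap distribution
    nu0 = 1e12          # attempt-to-escape frequency [1/s]
    nu_trap = 1e9       # trapping rate while free [1/s]
    v_free = 1e3        # free drift velocity at the applied field [cm/s]
    L = 1e-4            # sample thickness [cm]
    n_carriers = 1000

    transit = np.empty(n_carriers)
    for i in range(n_carriers):
        t, x = 0.0, 0.0
        while x < L:
            t_free = rng.exponential(1.0 / nu_trap)       # time until next trapping
            x_step = v_free * t_free
            if x + x_step >= L:                           # exits before being trapped
                t += (L - x) / v_free
                break
            x += x_step
            t += t_free
            E = rng.exponential(E0)                       # trap depth from exp. DOS
            t += rng.exponential(np.exp(E / kT) / nu0)    # thermally activated release
        transit[i] = t

    print("median transit time [s]:", np.median(transit))
    print("dispersion parameter alpha = kT/E0 =", round(kT / E0, 2))
    ```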

  9. An artemisinin-mediated ROS evolving and dual protease light-up nanocapsule for real-time imaging of lysosomal tumor cell death.

    PubMed

    Huang, Liwei; Luo, Yingping; Sun, Xian; Ju, Huangxian; Tian, Jiangwei; Yu, Bo-Yang

    2017-06-15

    Lysosomes are critical organelles for cellular homeostasis and can be used as potential targets to kill tumor cells from inside. Many photo-therapeutic methods have been developed to overproduce reactive oxygen species (ROS) to trigger the lysosomal membrane permeabilization (LMP)-associated cell death pathway. However, these technologies rely on external irradiation to activate the photosensitizers, which limits their application to deep-seated tumors and widespread metastatic lesions. This work reports a multifunctional nanocapsule that achieves targeted lysosomal tumor cell death without irradiation, together with real-time monitoring of the drug effect, by encapsulating artemisinin and a dual protease light-up nanoprobe in a folate-functionalized liposome. The nanocapsule can be specifically taken up by tumor cells via folate receptor-mediated endocytosis to enter lysosomes, in which artemisinin reacts with ferrous iron to generate ROS for LMP-associated cell death. By virtue of confocal fluorescence imaging, the artemisinin location in lysosomes, ROS-triggered LMP and ultimate cell apoptosis can be visualized with the cathepsin B and caspase-3 activatable nanoprobe. Notably, artemisinin-mediated ROS evolution for tumor therapy and real-time therapeutic monitoring were successfully demonstrated by live imaging in tumor-bearing mice, which broadens the applicability of the nanocapsule to in vivo theranostics and may offer new opportunities for precision medicine. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Evolving phage vectors for cell targeted gene delivery.

    PubMed

    Larocca, David; Burg, Michael A; Jensen-Pergakes, Kristen; Ravey, Edward Prenn; Gonzalez, Ana Maria; Baird, Andrew

    2002-03-01

    We adapted filamentous phage vectors for targeted gene delivery to mammalian cells by inserting a mammalian reporter gene expression cassette (GFP) into the vector backbone and fusing the pIII coat protein to a cell targeting ligand (i.e. FGF2, EGF). Like transfection with animal viral vectors, targeted phage gene delivery is concentration, time, and ligand dependent. Importantly, targeted phage particles are specific for the appropriate target cell surface receptor. Phage have distinct advantages over existing gene therapy vectors because they are simple, economical to produce at high titer, have no intrinsic tropism for mammalian cells, and are relatively simple to genetically modify and evolve. Initially, transduction by targeted phage particles was low, resulting in foreign gene expression in 1-2% of transfected cells. We increased transduction efficiency by modifying both the transfection protocol and vector design. For example, we stabilized the display of the targeting ligand to create multivalent phagemid-based vectors with transduction efficiencies of up to 45% in certain cell lines when combined with genotoxic treatment. Taken together, these studies establish that the efficiency of phage-mediated gene transfer can be significantly improved through genetic modification. We are currently evolving phage vectors with enhanced cell targeting, increased stability, reduced immunogenicity and other properties suitable for gene therapy.

  11. Variations of characteristic time scales in rotating stratified turbulence using a large parametric numerical study.

    PubMed

    Rosenberg, D; Marino, R; Herbert, C; Pouquet, A

    2016-01-01

    We study rotating stratified turbulence (RST) making use of numerical data stemming from a large parametric study varying the Reynolds, Froude and Rossby numbers, Re, Fr and Ro in a broad range of values. The computations are performed using periodic boundary conditions on grids of 1024^3 points, with no modeling of the small scales, no forcing and with large-scale random initial conditions for the velocity field only, and there are altogether 65 runs analyzed in this paper. The buoyancy Reynolds number defined as R_B = Re Fr^2 varies from negligible values to ≈ 10^5, approaching atmospheric or oceanic regimes. This preliminary analysis deals with the variation of characteristic time scales of RST with dimensionless parameters, focusing on the role played by the partition of energy between the kinetic and potential modes, as a key ingredient for modeling the dynamics of such flows. We find that neither rotation nor the ratio of the Brunt-Väisälä frequency to the inertial frequency seem to play a major role in the absence of forcing in the global dynamics of the small-scale kinetic and potential modes. Specifically, in these computations, mostly in regimes of wave turbulence, characteristic times based on the ratio of energy to dissipation of the velocity and temperature fluctuations, T_V and T_P, vary substantially with parameters. Their ratio γ = T_V/T_P follows roughly a bell-shaped curve in terms of Richardson number Ri. It reaches a plateau - on which time scales become comparable, γ ≈ 0.6 - when the turbulence has significantly strengthened, leading to numerous destabilization events together with a tendency towards an isotropization of the flow.

  12. A Markovian model of evolving world input-output network

    PubMed Central

    Isacchini, Giulio

    2017-01-01

    The initial theoretical connections between Leontief input-output models and Markov chains were established back in the 1950s. However, considering the wide variety of mathematical properties of Markov chains, so far there has not been a full investigation of evolving world economic networks with Markov chain formalism. In this work, using the recently available world input-output database, we investigated the evolution of the world economic network from 1995 to 2011 through analysis of a time series of finite Markov chains. We assessed different aspects of this evolving system via different known properties of the Markov chains such as mixing time, Kemeny constant, steady state probabilities and perturbation analysis of the transition matrices. First, we showed how the time series of mixing times and Kemeny constants could be used as an aggregate index of globalization. Next, we focused on the steady state probabilities as a measure of the structural power of the economies, comparable to GDP shares as the traditional index of economic welfare. Further, we introduced two measures of systemic risk, called systemic influence and systemic fragility, where the former is the ratio of the number of influenced nodes to the total number of nodes, caused by a shock in the activity of a node, and the latter is based on the number of times a specific economic node is affected by a shock in the activity of any of the other nodes. Finally, focusing on the Kemeny constant as a global indicator of monetary flow across the network, we showed that there is a paradoxical effect of a change in activity levels of economic nodes on the overall flow of the world economic network. While the economic slowdown of the majority of nodes with high structural power results in a slower average monetary flow over the network, there are some nodes whose slowdowns improve the overall quality of the network in terms of connectivity and the average flow of money. PMID:29065145
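
    A minimal sketch of two of the Markov-chain summaries used above, the steady-state probabilities and the Kemeny constant, for a toy transition matrix standing in for one year's normalised world input-output table. Conventions for the Kemeny constant differ by an additive 1 between references; the eigenvalue form below is one common choice.

    ```python
    import numpy as np

    # Toy 4-economy transition matrix (rows sum to 1); real inputs would be built
    # from the world input-output database for each year.
    P = np.array([[0.70, 0.10, 0.10, 0.10],
                  [0.05, 0.80, 0.10, 0.05],
                  [0.10, 0.20, 0.60, 0.10],
                  [0.20, 0.10, 0.10, 0.60]])

    # Steady state: left eigenvector of P for eigenvalue 1, normalised to sum to 1.
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    pi /= pi.sum()

    # Kemeny constant from the non-unit eigenvalues of P.
    lam = np.linalg.eigvals(P)
    lam = lam[np.argsort(-np.abs(lam))][1:]          # drop the unit eigenvalue
    kemeny = np.real(np.sum(1.0 / (1.0 - lam)))

    print("steady state:", np.round(pi, 3))
    print("Kemeny constant:", round(kemeny, 3))
    ```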

  13. Clear: Composition of Likelihoods for Evolve and Resequence Experiments.

    PubMed

    Iranmehr, Arya; Akbari, Ali; Schlötterer, Christian; Bafna, Vineet

    2017-06-01

    The advent of next generation sequencing technologies has made whole-genome and whole-population sampling possible, even for eukaryotes with large genomes. With this development, experimental evolution studies can be designed to observe molecular evolution "in action" via evolve-and-resequence (E&R) experiments. Among other applications, E&R studies can be used to locate the genes and variants responsible for genetic adaptation. Most existing literature on time-series data analysis assumes large population size, accurate allele frequency estimates, or wide time spans. These assumptions do not hold in many E&R studies. In this article, we propose a method, composition of likelihoods for evolve-and-resequence experiments (Clear), to identify signatures of selection in small population E&R experiments. Clear takes whole-genome sequences of pools of individuals as input, and properly addresses heterogeneous ascertainment bias resulting from uneven coverage. Clear also provides unbiased estimates of model parameters, including population size, selection strength, and dominance, while being computationally efficient. Extensive simulations show that Clear achieves higher power in detecting and localizing selection over a wide range of parameters, and is robust to variation of coverage. We applied the Clear statistic to multiple E&R experiments, including data from a study of adaptation of Drosophila melanogaster to alternating temperatures and a study of outcrossing yeast populations, and identified multiple regions under selection with genome-wide significance. Copyright © 2017 by the Genetics Society of America.

  14. Effects of evolving quality of landfill leachate on microbial fuel cell performance.

    PubMed

    Li, Simeng; Chen, Gang

    2018-01-01

    The microbial fuel cell (MFC) is a novel technology for landfill leachate treatment with simultaneous electric power generation. In recent years, more and more modern landfills are operating as bioreactors to shorten the time required for landfill stabilization and improve the leachate quality. For landfills to operate as biofilters, leachate is recirculated back to the landfill, during which time the organics of the leachate can be decomposed. Continuous recirculation typically results in evolving leachate quality, which chronologically corresponds to evolution stages such as hydrolysis, acidogenesis, acetogenesis, methanogenesis, and maturation. In this research, variable power generation (160 to 230 mW m^-2) by the MFC was observed when leachate of various evolutionary stages was used as the feed. The power density followed a Monod-type kinetic model with the chemical oxygen demand (COD) equivalent of the volatile fatty acids (VFAs) (p < 0.001). The coulombic efficiency decreased from 20% to 14% as the leachate evolved towards maturation. The maximum power density linearly decreased with the increase of internal resistance, resulting from the change of the conductivity of the solution. The decreased conductivity boosted the internal resistance and consequently limited the power generation. COD removal as high as 90% could be achieved with leachate extracted from appropriate evolutionary stages, with a maximum energy yield of 0.9 kWh m^-3 of leachate. This study demonstrated the importance of the evolving leachate quality in different evolutionary stages for the performance of leachate-fed MFCs. The leachate extracted from the acidogenesis and acetogenesis stages was optimal for both COD reduction and energy production in MFCs.
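
    A minimal sketch of fitting the Monod-type relation reported above, P = P_max * S / (K_s + S), between power density and the VFA fraction of the COD. The data points below are made up for illustration only, not taken from the study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def monod(S, P_max, K_s):
        return P_max * S / (K_s + S)

    # Hypothetical (VFA-COD [mg/L], power density [mW/m^2]) pairs.
    S = np.array([100.0, 300.0, 600.0, 1200.0, 2500.0, 5000.0])
    P = np.array([60.0, 130.0, 175.0, 205.0, 222.0, 229.0])

    (P_max, K_s), _ = curve_fit(monod, S, P, p0=(230.0, 500.0))
    print(f"P_max = {P_max:.1f} mW/m^2, K_s = {K_s:.0f} mg/L")
    ```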

  15. A slowly evolving host moves first in symbiotic interactions

    NASA Astrophysics Data System (ADS)

    Damore, James; Gore, Jeff

    2011-03-01

    Symbiotic relationships, both parasitic and mutualistic, are ubiquitous in nature. Understanding how these symbioses evolve, from bacteria and their phages to humans and our gut microflora, is crucial in understanding how life operates. Often, symbioses consist of a slowly evolving host species with each host only interacting with its own sub-population of symbionts. The Red Queen hypothesis describes coevolutionary relationships as constant arms races with each species rushing to evolve an advantage over the other, suggesting that faster evolution is favored. Here, we use a simple game theoretic model of host-symbiont coevolution that includes population structure to show that if the symbionts evolve much faster than the host, the equilibrium distribution is the same as it would be if it were a sequential game where the host moves first against its symbionts. For the slowly evolving host, this will prove to be advantageous in mutualisms and a handicap in antagonisms. The model allows for symbiont adaptation to its host, a result that is robust to changes in the parameters and generalizes to continuous and multiplayer games. Our findings provide insight into a wide range of symbiotic phenomena and help to unify the field of coevolutionary theory.

  16. On time discretizations for spectral methods. [numerical integration of Fourier and Chebyshev methods for dynamic partial differential equations

    NASA Technical Reports Server (NTRS)

    Gottlieb, D.; Turkel, E.

    1980-01-01

    New methods are introduced for the time integration of the Fourier and Chebyshev methods of solution for dynamic differential equations. These methods are unconditionally stable, even though no matrix inversions are required. Time steps are chosen by accuracy requirements alone. For the Fourier method both leapfrog and Runge-Kutta methods are considered. For the Chebyshev method only Runge-Kutta schemes are tested. Numerical calculations are presented to verify the analytic results. Applications to the shallow water equations are presented.
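
    A minimal sketch combining a Fourier spectral discretisation in space with a classical fourth-order Runge-Kutta time integrator, applied to the 1-D linear advection equation u_t + c u_x = 0 on a periodic domain. The test problem and the particular RK variant are illustrative stand-ins for the shallow-water applications and schemes analysed above.

    ```python
    import numpy as np

    N, c, dt, n_steps = 128, 1.0, 1e-3, 2000
    x = 2.0 * np.pi * np.arange(N) / N
    k = np.fft.fftfreq(N, d=1.0 / N) * 1j           # i*k wavenumbers on [0, 2*pi)

    def rhs(u):
        return -c * np.real(np.fft.ifft(k * np.fft.fft(u)))   # -c * du/dx

    def rk4_step(u, dt):
        k1 = rhs(u)
        k2 = rhs(u + 0.5 * dt * k1)
        k3 = rhs(u + 0.5 * dt * k2)
        k4 = rhs(u + dt * k3)
        return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    u = np.exp(np.sin(x))                           # smooth periodic initial data
    for _ in range(n_steps):
        u = rk4_step(u, dt)

    exact = np.exp(np.sin(x - c * dt * n_steps))    # advected initial profile
    print("max error:", np.max(np.abs(u - exact)))  # spectral in space, O(dt^4) in time
    ```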

  17. IRAS observations of extended dust envelopes around evolved stars

    NASA Technical Reports Server (NTRS)

    Hawkins, George

    1990-01-01

    Deconvolved IRAS profiles, with resolution 2-3 times better than the detector sizes of 1.5 and 3 arcmin at 60 and 100 microns, are presented for a number of evolved stars with extended emission. These include VY UMa, Mu Cep, S Sct, U Hya, Y CVn, U Ant, alpha Ori, Y Pav, UU Aur, IRC + 10216, RZ Sgr, and R Lyr. Simple models suggest that extended IRAS emission results from stars which had greater mass-loss rates in the past, rather than from stars with large current mass-loss rates.

  18. Highly-evolved stars

    NASA Technical Reports Server (NTRS)

    Heap, S. R.

    1981-01-01

    The ways in which the IUE has proved useful in studying highly evolved stars are reviewed. The importance of high dispersion spectra for abundance analyses of the sdO stars and for studies of the wind from the central star of NGC 6543 and the wind from the O-type component of Vela X-1 is shown. Low dispersion spectra are used for absolute spectrophotometry of the dwarf nova EX Hya. Angular resolution is important for detecting and locating UV sources in globular clusters.

  19. Smart signal processing for an evolving electric grid

    NASA Astrophysics Data System (ADS)

    Silva, Leandro Rodrigues Manso; Duque, Calos Augusto; Ribeiro, Paulo F.

    2015-12-01

    Electric grids are interconnected complex systems consisting of generation, transmission, distribution, and active loads, recently called prosumers as they produce and consume electric energy. Additionally, these encompass a vast array of equipment such as machines, power transformers, capacitor banks, power electronic devices, motors, etc. that are continuously evolving in their demand characteristics. Given these conditions, signal processing is becoming an essential assessment tool to enable the engineer and researcher to understand, plan, design, and operate the complex and smart electronic grid of the future. This paper focuses on recent developments associated with signal processing applied to power system analysis in terms of characterization and diagnostics. The following techniques are reviewed and their characteristics and applications discussed: active power system monitoring, sparse representation of power system signal, real-time resampling, and time-frequency (i.e., wavelets) applied to power fluctuations.

  20. Evolving mobile robots able to display collective behaviors.

    PubMed

    Baldassarre, Gianluca; Nolfi, Stefano; Parisi, Domenico

    2003-01-01

    We present a set of experiments in which simulated robots are evolved for the ability to aggregate and move together toward a light target. By developing and using quantitative indexes that capture the structural properties of the emerged formations, we show that evolved individuals display interesting behavioral patterns in which groups of robots act as a single unit. Moreover, evolved groups of robots with identical controllers display primitive forms of situated specialization and play different behavioral functions within the group according to the circumstances. Overall, the results presented in the article demonstrate that evolutionary techniques, by exploiting the self-organizing behavioral properties that emerge from the interactions between the robots and between the robots and the environment, are a powerful method for synthesizing collective behavior.

  1. Executive Function Effects and Numerical Development in Children: Behavioural and ERP Evidence from a Numerical Stroop Paradigm

    ERIC Educational Resources Information Center

    Soltesz, Fruzsina; Goswami, Usha; White, Sonia; Szucs, Denes

    2011-01-01

    Most research on numerical development in children is behavioural, focusing on accuracy and response time in different problem formats. However, Temple and Posner (1998) used ERPs and the numerical distance task with 5-year-olds to show that the development of numerical representations is difficult to disentangle from the development of the…

  2. Holographic Imaging of Evolving Laser-Plasma Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Downer, Michael; Shvets, G.

    In the 1870s, English photographer Eadweard Muybridge captured motion pictures within one cycle of a horse’s gallop, which settled a hotly debated question of his time by showing that the horse became temporarily airborne. In the 1940s, Manhattan project photographer Berlin Brixner captured a nuclear blast at a million frames per second, and resolved a dispute about the explosion’s shape and speed. In this project, we developed methods to capture detailed motion pictures of evolving, light-velocity objects created by a laser pulse propagating through matter. These objects include electron density waves used to accelerate charged particles, laser-induced refractive index changes used for micromachining, and ionization tracks used for atmospheric chemical analysis, guide star creation and ranging. Our “movies”, like Muybridge’s and Brixner’s, are obtained in one shot, since the laser-created objects of interest are insufficiently repeatable for accurate stroboscopic imaging. Our high-speed photographs have begun to resolve controversies about how laser-created objects form and evolve, questions that previously could be addressed only by intensive computer simulations based on estimated initial conditions. Resolving such questions helps develop better tabletop particle accelerators, atmospheric ranging devices and many other applications of laser-matter interactions. Our photographic methods all begin by splitting one or more “probe” pulses from the laser pulse that creates the light-speed object. A probe illuminates the object and obtains information about its structure without altering it. We developed three single-shot visualization methods that differ in how the probes interact with the object of interest or are recorded. (1) Frequency-Domain Holography (FDH). In FDH, there are 2 probes, like “object” and “reference” beams in conventional holography. Our “object” probe surrounds the light-speed object, like fleas swarming

  3. Short-time dynamics of molecular junctions after projective measurement

    NASA Astrophysics Data System (ADS)

    Tang, Gaomin; Xing, Yanxia; Wang, Jian

    2017-08-01

    In this work, we study the short-time dynamics of a molecular junction described by the Anderson-Holstein model using full-counting statistics after a projective measurement. The coupling between the central quantum dot (QD) and the two leads was turned on in the remote past, and the system has evolved to a steady state at time t = 0, when we perform the projective measurement in one of the leads. The generating function for the charge transfer is expressed as a Fredholm determinant in terms of the Keldysh nonequilibrium Green's function in the time domain. It is found that the current is not constant at short times, indicating that the measurement does perturb the system. We numerically compare the current behaviors after the projective measurement with those in the transient regime where the subsystems are connected at t = 0. The universal scaling for high-order cumulants is observed for the case with zero QD occupation due to the unidirectional transport at short times. The influences of the electron-phonon interaction on the short-time dynamics of the electric current, shot noise, and differential conductance are analyzed.

  4. Direct Numerical Simulation of a Weakly Stratified Turbulent Wake

    NASA Technical Reports Server (NTRS)

    Redford, J. A.; Lund, T. S.; Coleman, Gary N.

    2014-01-01

    Direct numerical simulation (DNS) is used to investigate a time-dependent turbulent wake evolving in a stably stratified background. A large initial Froude number is chosen to allow the wake to become fully turbulent and axisymmetric before stratification affects the spreading rate of the mean defect. The uncertainty introduced by the finite sample size associated with gathering statistics from a simulation of a time-dependent flow is reduced, compared to earlier simulations of this flow. The DNS reveals the buoyancy-induced changes to the turbulence structure, as well as to the mean-defect history and the terms in the mean-momentum and turbulence-kinetic-energy budgets, that characterize the various states of this flow - namely the three-dimensional (essentially unstratified), non-equilibrium (or 'wake-collapse') and quasi-two-dimensional (or 'two-component') regimes observed elsewhere for wakes embedded in both weakly and strongly stratified backgrounds. The wake-collapse regime is not accompanied by transfer (or 'reconversion') of the potential energy of the turbulence to the kinetic energy of the turbulence, implying that this is not an essential feature of stratified-wake dynamics. The dependence upon Reynolds number of the duration of the wake-collapse period is demonstrated, and the effect of the details of the initial/near-field conditions of the wake on its subsequent development is examined.

  5. Molecular abundances and C/O ratios in chemically evolving planet-forming disk midplanes

    NASA Astrophysics Data System (ADS)

    Eistrup, Christian; Walsh, Catherine; van Dishoeck, Ewine F.

    2018-05-01

    Context. Exoplanet atmospheres are thought be built up from accretion of gas as well as pebbles and planetesimals in the midplanes of planet-forming disks. The chemical composition of this material is usually assumed to be unchanged during the disk lifetime. However, chemistry can alter the relative abundances of molecules in this planet-building material. Aims: We aim to assess the impact of disk chemistry during the era of planet formation. This is done by investigating the chemical changes to volatile gases and ices in a protoplanetary disk midplane out to 30 AU for up to 7 Myr, considering a variety of different conditions, including a physical midplane structure that is evolving in time, and also considering two disks with different masses. Methods: An extensive kinetic chemistry gas-grain reaction network was utilised to evolve the abundances of chemical species over time. Two disk midplane ionisation levels (low and high) were explored, as well as two different makeups of the initial abundances ("inheritance" or "reset"). Results: Given a high level of ionisation, chemical evolution in protoplanetary disk midplanes becomes significant after a few times 10^5 yr, and is still ongoing by 7 Myr between the H2O and the O2 icelines. Inside the H2O iceline, and in the outer, colder regions of the disk midplane outside the O2 iceline, the relative abundances of the species reach (close to) steady state by 7 Myr. Importantly, the changes in the abundances of the major elemental carbon and oxygen-bearing molecules imply that the traditional "step function" for the C/O ratios in gas and ice in the disk midplane (as defined by sharp changes at icelines of H2O, CO2 and CO) evolves over time, and cannot be assumed fixed, with the C/O ratio in the gas even becoming smaller than the C/O ratio in the ice. In addition, at lower temperatures (<29 K), gaseous CO colliding with the grains gets converted into CO2 and other more complex ices, lowering the CO gas abundance between

  6. Two-actor conflict with time delay: A dynamical model

    NASA Astrophysics Data System (ADS)

    Qubbaj, Murad R.; Muneepeerakul, Rachata

    2012-11-01

    Recent mathematical dynamical models of the conflict between two different actors, be they nations, groups, or individuals, have been developed that are capable of predicting various outcomes depending on the chosen feedback strategies, initial conditions, and the previous states of the actors. In addition to these factors, this paper examines the effect of time delayed feedback on the conflict dynamics. Our analysis shows that under certain initial and feedback conditions, a stable neutral equilibrium of conflict may destabilize for some critical values of time delay, and the two actors may evolve to new emotional states. We investigate the results by constructing critical delay surfaces for different sets of parameters and analyzing results from numerical simulations. These results provide new insights regarding conflict and conflict resolution and may help planners in adjusting and assessing their strategic decisions.
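
    A minimal sketch of a two-actor model with delayed feedback, integrated by a simple Euler scheme with a history buffer. The linear feedback form and every parameter value are illustrative placeholders, not the model analysed above.

    ```python
    import numpy as np

    a = -0.2                  # self-regulation of each actor
    b, c = 0.8, -0.8          # delayed cross-feedback (a negative feedback loop)
    tau, dt, T = 0.2, 0.01, 60.0
    lag, n = int(round(tau / dt)), int(round(T / dt))

    x, y = np.zeros(n + 1), np.zeros(n + 1)
    x[0], y[0] = 1.0, -0.5    # initial states; the history is taken as constant

    def delayed(z, i):
        return z[i - lag] if i >= lag else z[0]

    for i in range(n):
        x[i + 1] = x[i] + dt * (a * x[i] + b * delayed(y, i))
        y[i + 1] = y[i] + dt * (a * y[i] + c * delayed(x, i))

    print("final states:", round(x[-1], 4), round(y[-1], 4))
    # For these parameters the equilibrium is stable; raising tau past a critical
    # value (about 0.33 here) turns the decay into growing oscillations, the
    # delay-induced destabilisation discussed above.
    ```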

  7. Towards Evolving Electronic Circuits for Autonomous Space Applications

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Haith, Gary L.; Colombano, Silvano P.; Stassinopoulos, Dimitris

    2000-01-01

    The relatively new field of Evolvable Hardware studies how simulated evolution can reconfigure, adapt, and design hardware structures in an automated manner. Space applications, especially those requiring autonomy, are potential beneficiaries of evolvable hardware. For example, robotic drilling from a mobile platform requires high-bandwidth controller circuits that are difficult to design. In this paper, we present automated design techniques based on evolutionary search that could potentially be used in such applications. First, we present a method of automatically generating analog circuit designs using evolutionary search and a circuit construction language. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. Using a parallel genetic algorithm, we present experimental results for five design tasks. Second, we investigate the use of coevolution in automated circuit design. We examine fitness evaluation by comparing the effectiveness of four fitness schedules. The results indicate that solution quality is highest with static and co-evolving fitness schedules as compared to the other two dynamic schedules. We discuss these results and offer two possible explanations for the observed behavior: retention of useful information, and alignment of problem difficulty with circuit proficiency.

  8. The evolvability of programmable hardware.

    PubMed

    Raman, Karthik; Wagner, Andreas

    2011-02-06

    In biological systems, individual phenotypes are typically adopted by multiple genotypes. Examples include protein structure phenotypes, where each structure can be adopted by a myriad individual amino acid sequence genotypes. These genotypes form vast connected 'neutral networks' in genotype space. The size of such neutral networks endows biological systems not only with robustness to genetic change, but also with the ability to evolve a vast number of novel phenotypes that occur near any one neutral network. Whether technological systems can be designed to have similar properties is poorly understood. Here we ask this question for a class of programmable electronic circuits that compute digital logic functions. The functional flexibility of such circuits is important in many applications, including applications of evolutionary principles to circuit design. The functions they compute are at the heart of all digital computation. We explore a vast space of 10^45 logic circuits ('genotypes') and 10^19 logic functions ('phenotypes'). We demonstrate that circuits that compute the same logic function are connected in large neutral networks that span circuit space. Their robustness or fault-tolerance varies very widely. The vicinity of each neutral network contains circuits with a broad range of novel functions. Two circuits computing different functions can usually be converted into one another via few changes in their architecture. These observations show that properties important for the evolvability of biological systems exist in a commercially important class of electronic circuitry. They also point to generic ways to generate fault-tolerant, adaptable and evolvable electronic circuitry.

  9. The evolvability of programmable hardware

    PubMed Central

    Raman, Karthik; Wagner, Andreas

    2011-01-01

    In biological systems, individual phenotypes are typically adopted by multiple genotypes. Examples include protein structure phenotypes, where each structure can be adopted by a myriad individual amino acid sequence genotypes. These genotypes form vast connected ‘neutral networks’ in genotype space. The size of such neutral networks endows biological systems not only with robustness to genetic change, but also with the ability to evolve a vast number of novel phenotypes that occur near any one neutral network. Whether technological systems can be designed to have similar properties is poorly understood. Here we ask this question for a class of programmable electronic circuits that compute digital logic functions. The functional flexibility of such circuits is important in many applications, including applications of evolutionary principles to circuit design. The functions they compute are at the heart of all digital computation. We explore a vast space of 10^45 logic circuits (‘genotypes’) and 10^19 logic functions (‘phenotypes’). We demonstrate that circuits that compute the same logic function are connected in large neutral networks that span circuit space. Their robustness or fault-tolerance varies very widely. The vicinity of each neutral network contains circuits with a broad range of novel functions. Two circuits computing different functions can usually be converted into one another via few changes in their architecture. These observations show that properties important for the evolvability of biological systems exist in a commercially important class of electronic circuitry. They also point to generic ways to generate fault-tolerant, adaptable and evolvable electronic circuitry. PMID:20534598

  10. An evolving Mars telecommunications network to enable exploration and increase science data return

    NASA Technical Reports Server (NTRS)

    Edwards, Chad; Komarek, Tomas A.; Noreen, Gary K.; Wilson, Gregory R.

    2003-01-01

    The coming decade of Mars exploration involves a variety of unique telecommunications challenges. The increasing spatial and spectral resolution of in situ science instruments drives the need for increased bandwidth. At the same time, many innovative and low-cost in situ mission concepts are enabled by energy-efficient relay communications. In response to these needs, the Mars Exploration Program has established a plan for an evolving orbital infrastructure that can provide enhancing and enabling telecommunications services to future Mars missions. We will present the evolving capabilities of this network over the coming decade in terms of specific quantitative metrics such as data volume per sol and required lander energy per Gb of returned data for representative classes of Mars exploration spacecraft.

  11. Evolved dispersal strategies at range margins

    PubMed Central

    Dytham, Calvin

    2009-01-01

    Dispersal is a key component of a species's ecology and will be under different selection pressures in different parts of the range. For example, a long-distance dispersal strategy suitable for continuous habitat at the range core might not be favoured at the margin, where the habitat is sparse. Using a spatially explicit, individual-based, evolutionary simulation model, the dispersal strategies of an organism that has only one dispersal event in its lifetime, such as a plant or sessile animal, are considered. Within the model, removing habitat, increasing habitat turnover, increasing the cost of dispersal, reducing habitat quality or altering vital rates imposes range limits. In most cases, there is a clear change in the dispersal strategies across the range, although increasing death rate towards the margin has little impact on evolved dispersal strategy across the range. Habitat turnover, reduced birth rate and reduced habitat quality all increase evolved dispersal distances at the margin, while increased cost of dispersal and reduced habitat density lead to lower evolved dispersal distances at the margins. As climate change shifts suitable habitat poleward, species ranges will also start to shift, and it will be the dispersal capabilities of marginal populations, rather than core populations, that will influence the rate of range shifting. PMID:19324810

  12. Addendum to `numerical modeling of an enhanced very early time electromagnetic (VETEM) prototype system'

    USGS Publications Warehouse

    Cui, T.J.; Chew, W.C.; Aydiner, A.A.; Wright, D.L.; Smith, D.V.; Abraham, J.D.

    2000-01-01

    Two numerical models to simulate an enhanced very early time electromagnetic (VETEM) prototype system that is used for buried-object detection and environmental problems are presented. In the first model, the transmitting and receiving loop antennas are accurately analyzed using the method of moments (MoM), and then conjugate gradient (CG) methods with the fast Fourier transform (FFT) are utilized to investigate the scattering from buried conducting plates. In the second model, two magnetic dipoles are used to replace the transmitter and receiver. Both the theory and formulation are correct and the simulation results for the primary magnetic field and the reflected magnetic field are accurate.

  13. Virtual Microscopy: A Useful Tool for Meeting Evolving Challenges in the Veterinary Medical Curriculum

    ERIC Educational Resources Information Center

    Kogan, Lori R.; Dowers, Kristy L.; Cerda, Jacey R.; Schoenfeld-Tacher, Regina M.; Stewart, Sherry M.

    2014-01-01

    Veterinary schools, similar to many professional health programs, face a myriad of evolving challenges in delivering their professional curricula including expansion of class size, costs to maintain expensive laboratories, and increased demands on veterinary educators to use curricular time efficiently and creatively. Additionally, exponential…

  14. Generative Representations for Evolving Families of Designs

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.

    2003-01-01

    Since typical evolutionary design systems encode only a single artifact with each individual, each time the objective changes a new set of individuals must be evolved. When this objective varies in a way that can be parameterized, a more general method is to use a representation in which a single individual encodes an entire class of artifacts. In addition to saving time by preventing the need for multiple evolutionary runs, the evolution of parameter-controlled designs can create families of artifacts with the same style and a reuse of parts between members of the family. In this paper an evolutionary design system is described which uses a generative representation to encode families of designs. Because a generative representation is an algorithmic encoding of a design, its input parameters are a way to control aspects of the design it generates. By evaluating individuals multiple times with different input parameters the evolutionary design system creates individuals in which the input parameter controls specific aspects of a design. This system is demonstrated on two design substrates: neural-networks which solve the 3/5/7-parity problem and three-dimensional tables of varying heights.

  15. Evolvable mathematical models: A new artificial Intelligence paradigm

    NASA Astrophysics Data System (ADS)

    Grouchy, Paul

    We develop a novel Artificial Intelligence paradigm to autonomously generate artificial agents as mathematical models of behaviour. Agent/environment inputs are mapped to agent outputs via equation trees which are evolved in a manner similar to Symbolic Regression in Genetic Programming. Equations comprise only the four basic mathematical operators (addition, subtraction, multiplication and division), as well as input and output variables and constants. From these operations, equations can be constructed that approximate any analytic function. These Evolvable Mathematical Models (EMMs) are tested and compared to their Artificial Neural Network (ANN) counterparts on two benchmarking tasks: the double-pole balancing without velocity information benchmark and the challenging discrete Double-T Maze experiments with homing. The results from these experiments show that EMMs are capable of solving tasks typically solved by ANNs, and that they have the ability to produce agents that demonstrate learning behaviours. To further explore the capabilities of EMMs, as well as to investigate the evolutionary origins of communication, we develop NoiseWorld, an Artificial Life simulation in which interagent communication emerges and evolves from initially noncommunicating EMM-based agents. Agents develop the capability to transmit their x and y position information over a one-dimensional channel via a complex, dialogue-based communication scheme. These evolved communication schemes are analyzed and their evolutionary trajectories examined, yielding significant insight into the emergence and subsequent evolution of cooperative communication. Evolved agents from NoiseWorld are successfully transferred onto physical robots, demonstrating the transferability of EMM-based AIs from simulation into physical reality.
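
    A minimal illustration of the equation-tree idea, under the assumption that trees are nested tuples over the four basic operators with protected division, is sketched below; it shows the general symbolic-regression style of representation, not the EMM implementation from this thesis.

      # Sketch: evaluating and mutating a small equation tree built from (+, -, *, /),
      # input variables and constants, in the spirit of symbolic regression.
      import random

      OPS = {"+": lambda a, b: a + b,
             "-": lambda a, b: a - b,
             "*": lambda a, b: a * b,
             "/": lambda a, b: a / b if abs(b) > 1e-9 else 1.0}  # protected division

      def evaluate(node, inputs):
          """node is either (op, left, right), ('var', name) or ('const', value)."""
          kind = node[0]
          if kind == "var":
              return inputs[node[1]]
          if kind == "const":
              return node[1]
          return OPS[kind](evaluate(node[1], inputs), evaluate(node[2], inputs))

      def mutate(node, rate=0.2):
          """Randomly perturb constants; recurse into operator nodes."""
          if node[0] == "const" and random.random() < rate:
              return ("const", node[1] + random.gauss(0.0, 0.5))
          if node[0] in OPS:
              return (node[0], mutate(node[1], rate), mutate(node[2], rate))
          return node

      # (x * 2.0) + (y - 0.5)
      tree = ("+", ("*", ("var", "x"), ("const", 2.0)), ("-", ("var", "y"), ("const", 0.5)))
      print(evaluate(tree, {"x": 1.5, "y": 0.25}))
      print(evaluate(mutate(tree), {"x": 1.5, "y": 0.25}))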

  16. Functional modules of sigma factor regulons guarantee adaptability and evolvability

    PubMed Central

    Binder, Sebastian C.; Eckweiler, Denitsa; Schulz, Sebastian; Bielecka, Agata; Nicolai, Tanja; Franke, Raimo; Häussler, Susanne; Meyer-Hermann, Michael

    2016-01-01

    The focus of modern molecular biology turns from assigning functions to individual genes towards understanding the expression and regulation of complex sets of molecules. Here, we provide evidence that alternative sigma factor regulons in the pathogen Pseudomonas aeruginosa largely represent insulated functional modules which provide a critical level of biological organization involved in general adaptation and survival processes. Analysis of the operational state of the sigma factor network revealed that transcription factors functionally couple the sigma factor regulons and significantly modulate the transcription levels in the face of challenging environments. The threshold quality of newly evolved transcription factors was reached faster and more robustly in in silico testing when the structural organization of sigma factor networks was taken into account. These results indicate that the modular structures of alternative sigma factor regulons provide P. aeruginosa with a robust framework to function adequately in its environment and at the same time facilitate evolutionary change. Our data support the view that widespread modularity guarantees robustness of biological networks and is a key driver of evolvability. PMID:26915971

  17. Predicting evolutionary rescue via evolving plasticity in stochastic environments

    PubMed Central

    Baskett, Marissa L.

    2016-01-01

    Phenotypic plasticity and its evolution may help evolutionary rescue in a novel and stressful environment, especially if environmental novelty reveals cryptic genetic variation that enables the evolution of increased plasticity. However, the environmental stochasticity ubiquitous in natural systems may alter these predictions, because high plasticity may amplify phenotype–environment mismatches. Although previous studies have highlighted this potential detrimental effect of plasticity in stochastic environments, they have not investigated how it affects extinction risk in the context of evolutionary rescue and with evolving plasticity. We investigate this question here by integrating stochastic demography with quantitative genetic theory in a model with simultaneous change in the mean and predictability (temporal autocorrelation) of the environment. We develop an approximate prediction of long-term persistence under the new pattern of environmental fluctuations, and compare it with numerical simulations for short- and long-term extinction risk. We find that reduced predictability increases extinction risk and reduces persistence because it increases stochastic load during rescue. This understanding of how stochastic demography, phenotypic plasticity, and evolution interact when evolution acts on cryptic genetic variation revealed in a novel environment can inform expectations for invasions, extinctions, or the emergence of chemical resistance in pests. PMID:27655762
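
    The role of environmental predictability can be sketched by generating an AR(1) environment whose autocorrelation sets predictability and letting a plastic phenotype track last generation's environment as a cue; a less predictable environment then produces a larger average phenotype-environment mismatch. The model form, parameter values and mismatch measure below are illustrative assumptions, not the authors' quantitative genetic model.

      # Sketch: phenotype-environment mismatch under an AR(1) environment.
      # Higher autocorrelation rho means a more predictable environment and a
      # smaller average mismatch for a cue-following plastic phenotype.
      import random

      def mean_squared_mismatch(rho, plasticity=0.8, sigma=1.0, steps=20000, seed=1):
          random.seed(seed)
          env = 0.0
          total = 0.0
          for _ in range(steps):
              cue = env                                  # cue is last generation's environment
              env = rho * env + random.gauss(0.0, sigma) # AR(1) update
              phenotype = plasticity * cue               # plastic response to the cue
              total += (phenotype - env) ** 2            # stochastic load proxy
          return total / steps

      for rho in (0.0, 0.5, 0.9):
          print(f"rho = {rho}: mean squared mismatch = {mean_squared_mismatch(rho):.2f}")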

  18. The Numerical Electromagnetics Code (NEC) - A Brief History

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, G J; Miller, E K; Poggio, A J

    The Numerical Electromagnetics Code, NEC as it is commonly known, continues to be one of the more widely used antenna modeling codes in existence. With several versions in use that reflect different levels of capability and availability, there are now 450 copies of NEC4 and 250 copies of NEC3 that have been distributed by Lawrence Livermore National Laboratory to a limited class of qualified recipients, and several hundred copies of NEC2 that had a recorded distribution by LLNL. These numbers do not account for numerous copies (perhaps 1000s) that were acquired through other means capitalizing on the open source code, the absence of distribution controls prior to NEC3 and the availability of versions on the Internet. In this paper we briefly review the history of the code that is concisely displayed in Figure 1. We will show how it capitalized on the research of prominent contributors in the early days of computational electromagnetics, how a combination of events led to the tri-service-supported code development program that ultimately led to NEC and how it evolved to the present day product. The authors apologize that space limitations do not allow us to provide a list of references or to acknowledge the numerous contributors to the code both of which can be found in the code documents.

  19. Time evolving multi-city dependencies and robustness tradeoffs for risk-based portfolios of conservation, transfers, and cooperative water supply infrastructure development pathways

    NASA Astrophysics Data System (ADS)

    Trindade, B. C.; Reed, P. M.; Zeff, H. B.; Characklis, G. W.

    2016-12-01

    Water scarcity in historically water-rich regions such as the southeastern United States is becoming a more prevalent concern. It has been shown that cooperative short-term planning that relies on conservation and transfers of existing supplies amongst communities can be used by water utilities to mitigate the effects of water scarcity in the near future. However, in the longer term, infrastructure expansion is likely to be necessary to address imbalances between growing water demands and the available supply capacity. This study seeks to better diagnose and avoid candidate modes for system failure. Although it is becoming more common for water utilities to evaluate the robustness of their water supply, defined as the insensitivity of their systems to errors in deeply uncertain projections or assumptions, defining robustness is particularly challenging in multi-stakeholder regional contexts for decisions that encompass short-term management actions and long-term infrastructure planning. Planning and management decisions are highly interdependent and strongly shape how a region's infrastructure itself evolves. This research advances the concept of system robustness by treating it as evolving over time rather than static, so that it is applicable to an adaptive system and therefore better suited to combined short- and long-term planning efforts. The test case for this research is the Research Triangle area of North Carolina, where the cities of Raleigh, Durham, Cary and Chapel Hill are experiencing rapid population growth and increasing concerns over drought. This study is facilitating their engagement in cooperative and robust regional water portfolio planning. The insights from this work have general merit for regions where adjacent municipalities can benefit from improving cooperative infrastructure investments and more efficient resource management strategies.

  20. Relativistic numerical cosmology with silent universes

    NASA Astrophysics Data System (ADS)

    Bolejko, Krzysztof

    2018-01-01

    Relativistic numerical cosmology is most often based either on the exact solutions of the Einstein equations, or perturbation theory, or weak-field limit, or the BSSN formalism. The silent universe provides an alternative approach to investigate relativistic evolution of cosmological systems. The silent universe is based on the solution of the Einstein equations in 1+3 comoving coordinates with additional constraints imposed. These constraints include: the gravitational field is sourced by dust and cosmological constant only, both rotation and magnetic part of the Weyl tensor vanish, and the shear is diagonalisable. This paper describes the code simsilun (free software distributed under the terms of the GNU General Public License), which implements the equations of the silent universe. The paper also discusses applications of the silent universe and it uses the Millennium simulation to set up the initial conditions for the code simsilun. The simulation obtained this way consists of 16 777 216 worldlines, which are evolved from z = 80 to z = 0. Initially, the mean evolution (averaged over the whole domain) follows the evolution of the background ΛCDM model. However, once the evolution of cosmic structures becomes nonlinear, the spatial curvature evolves from Ω_K = 0 to Ω_K ≈ 0.1 at the present day. The emergence of the spatial curvature is associated with Ω_M and Ω_Λ being smaller by approximately 0.05 compared to the ΛCDM model.

  1. Analytical and numerical treatment of the heat conduction equation obtained via time-fractional distributed-order heat conduction law

    NASA Astrophysics Data System (ADS)

    Želi, Velibor; Zorica, Dušan

    2018-02-01

    A generalization of the heat conduction equation is obtained by considering a system of equations consisting of the energy balance equation and a fractional-order constitutive heat conduction law, assumed in the form of the distributed-order Cattaneo type. The Cauchy problem for the system of the energy balance equation and the constitutive heat conduction law is treated analytically through Fourier and Laplace integral transform methods, as well as numerically by the method of finite differences, using Adams-Bashforth and Grünwald-Letnikov schemes to approximate derivatives in the temporal domain and the leapfrog scheme for spatial derivatives. Numerical examples, showing the time evolution of temperature and heat-flux spatial profiles, demonstrate the applicability and good agreement of both methods in cases of multi-term and power-type distributed-order heat conduction laws.
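
    For readers unfamiliar with the temporal discretisation mentioned above, the sketch below shows the standard Grünwald-Letnikov approximation of a fractional derivative of order alpha on a uniform grid; it is a generic illustration and not the authors' full distributed-order scheme.

      # Sketch: Grunwald-Letnikov approximation of the fractional derivative
      # D^alpha f(t) ~ h^(-alpha) * sum_{k=0..n} w_k f(t - k h), with weights from
      # the recurrence w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1)/k).
      import math

      def gl_weights(alpha, n):
          w = [1.0]
          for k in range(1, n + 1):
              w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
          return w

      def gl_derivative(f, t, alpha, h=1e-3):
          n = int(t / h)
          w = gl_weights(alpha, n)
          return sum(w[k] * f(t - k * h) for k in range(n + 1)) / h**alpha

      # Check against the exact result D^alpha t^2 = 2 t^(2 - alpha) / Gamma(3 - alpha).
      alpha, t = 0.5, 1.0
      exact = 2.0 * t**(2 - alpha) / math.gamma(3 - alpha)
      print(gl_derivative(lambda s: s * s, t, alpha), exact)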

  2. Recommendation in evolving online networks

    NASA Astrophysics Data System (ADS)

    Hu, Xiao; Zeng, An; Shang, Ming-Sheng

    2016-02-01

    A recommender system is an effective tool for finding the most relevant information for online users. By analyzing the historical selection records of users, a recommender system predicts the most likely future links in the user-item network and accordingly constructs a personalized recommendation list for each user. So far, the recommendation process has mostly been investigated in static user-item networks. In this paper, we propose a model which allows us to examine the performance of state-of-the-art recommendation algorithms in evolving networks. We find that the recommendation accuracy in general decreases with time if the evolution of the online network fully depends on the recommendation. Interestingly, some randomness in users' choice can significantly improve the long-term accuracy of the recommendation algorithm. When a hybrid recommendation algorithm is applied, we find that the optimal parameter gradually shifts towards the diversity-favoring recommendation algorithm, indicating that recommendation diversity is essential to maintain high long-term recommendation accuracy. Finally, we confirm our conclusions by studying recommendation on networks with real evolution data.
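
    The feedback loop between recommendation and network evolution can be mimicked with a toy simulation in which each user repeatedly either accepts a popularity-based recommendation or explores a random item; the model below is a deliberately simplified stand-in for the algorithms studied in the paper, with arbitrary sizes and parameters.

      # Sketch: a toy evolving user-item network in which, each round, a user either
      # follows a most-popular-item recommendation or explores a random item.
      # More exploration (randomness) keeps item diversity from collapsing.
      import random
      from collections import Counter

      def simulate(randomness, n_users=200, n_items=100, rounds=50, seed=0):
          rng = random.Random(seed)
          degree = Counter()                         # item popularity
          chosen = {u: set() for u in range(n_users)}
          for _ in range(rounds):
              for u in range(n_users):
                  candidates = [i for i in range(n_items) if i not in chosen[u]]
                  if not candidates:
                      continue
                  if rng.random() < randomness:
                      item = rng.choice(candidates)  # random exploration
                  else:                              # popularity-based recommendation
                      item = max(candidates, key=lambda i: (degree[i], -i))
                  chosen[u].add(item)
                  degree[item] += 1
          return sum(1 for i in range(n_items) if degree[i] > 0)

      for p in (0.0, 0.2, 0.5):
          print(f"randomness = {p}: distinct items ever chosen = {simulate(p)}")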

  3. Small Private Online Research: A Proposal for A Numerical Methods Course Based on Technology Use and Blended Learning

    ERIC Educational Resources Information Center

    Cepeda, Francisco Javier Delgado

    2017-01-01

    This work presents a proposed model in blended learning for a numerical methods course evolved from traditional teaching into a research lab in scientific visualization. The blended learning approach sets a differentiated and flexible scheme based on a mobile setup and face to face sessions centered on a net of research challenges. Model is…

  4. Calibrating a numerical model's morphology using high-resolution spatial and temporal datasets from multithread channel flume experiments.

    NASA Astrophysics Data System (ADS)

    Javernick, L.; Bertoldi, W.; Redolfi, M.

    2017-12-01

    Accessing or acquiring high-quality, low-cost topographic data has never been easier, owing to recent developments in the photogrammetric technique of Structure-from-Motion (SfM). Researchers can acquire the necessary SfM imagery with various platforms, capturing millimetre resolution and accuracy, or covering large-scale areas with the help of unmanned platforms. Such datasets, in combination with numerical modelling, have opened up new opportunities to study the physical and ecological relationships of river environments. While a numerical model's overall predictive accuracy is most influenced by topography, proper model calibration requires hydraulic and morphological data; however, rich hydraulic and morphological datasets remain scarce. This lack of field and laboratory data has limited model advancement through the inability to properly calibrate models, assess their sensitivity, and validate their performance. However, new time-lapse imagery techniques have shown success in identifying instantaneous sediment transport in flume experiments and in improving hydraulic model calibration. With new capabilities to capture high-resolution spatial and temporal datasets of flume experiments, there is a need to further assess model performance. To address this demand, this research used braided-river flume experiments and captured time-lapse observations of sediment transport and repeat SfM elevation surveys to provide unprecedented spatial and temporal datasets. Through newly created metrics that quantified observed and modeled activation, deactivation, and bank erosion rates, the numerical model Delft3D was calibrated. These data, combining high-resolution time series with long-term temporal coverage, significantly improved the calibration routines and refined the calibration parameterization. Model results show that there is a trade-off between achieving quantitative statistical and qualitative morphological representations. Specifically, statistical

  5. Project Evolve User-Adopter Manual.

    ERIC Educational Resources Information Center

    Joiner, Lee M.

    An adult basic education (ABE) program for mentally retarded young adults between the ages of 14 and 26 years, Project Evolve can provide education agencies for educationally handicapped children with detailed information concerning an innovative program. The manual format was developed through interviews with professional educators concerning the…

  6. The evolving role of telecommunications switching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Personick, S.D.

    1993-01-01

    There are many forces impacting on the evolution of switching vis-a-vis its role in telecommunications/information networking. Many of the technologies that in the past 15 years have enabled the cost reductions the industry has experienced in digital switches, and the emergence of intelligent networks are now also enabling a wide range of new end-user applications. Many of these applications are rapidly emerging and evolving to meet the, as yet, uncertain needs of the marketplace. There is an explosion of new ideas for applications involving personalized, nomadic communications, multimedia communications, and information access. Some of these will succeed in the marketplace and some will not. There is a continuing emergence of new and improved underlying electronic and photonic technologies and, most recently, the emergence of reliable, secure distributed computing, communications, and management environments. End-user CPE and servers have become increasingly powerful and cost effective as places to locate session (call) management and session enabling objects such as user-interfaces, directories, agents, multimedia bridges, and storage/server subsystems. Not only are dramatically new paradigms for building networks to support existing applications possible, but there is a pressing need to support the emerging and evolving new applications in a timely way. Competition is accelerating the rate of introduction of new technologies, architectures, and telecommunication services. Every aspect of the business is being reexamined to find better ways of meeting customers' needs more efficiently. Meanwhile, as new applications become deployed, there are increasing pressures to provide for security, privacy, and network integrity. This article reviews the author's personal views (many of which are widely shared by others) of the implications of all of these forces on what we traditionally call telecommunications switching. 10 refs.

  7. Numerical study and ex vivo assessment of HIFU treatment time reduction through optimization of focal point trajectory

    NASA Astrophysics Data System (ADS)

    Grisey, A.; Yon, S.; Pechoux, T.; Letort, V.; Lafitte, P.

    2017-03-01

    Treatment time reduction is a key issue in expanding the use of high intensity focused ultrasound (HIFU) surgery, especially for benign pathologies. This study aims at quantitatively assessing the potential reduction of the treatment time arising from moving the focal point during long pulses. In this context, the optimization of the focal point trajectory is crucial to achieve a uniform thermal dose repartition and avoid boiling. At first, a numerical optimization algorithm was used to generate efficient trajectories. Thermal conduction was simulated in 3D with a finite difference code and damage to the tissue was modeled using the thermal dose formula. Given an initial trajectory, the thermal dose field was first computed, then, making use of Pontryagin's maximum principle, the trajectory was iteratively refined. Several initial trajectories were tested. Then, an ex vivo study was conducted in order to validate the efficiency of the resulting optimized strategies. Single pulses were performed at 3 MHz on fresh veal liver samples with an Echopulse and the size of each unitary lesion was assessed by cutting each sample along three orthogonal planes and measuring the dimension of the whitened area based on photographs. We propose a promising approach to significantly shorten HIFU treatment time: the numerical optimization algorithm was shown to provide reliable insight into trajectories that can improve treatment strategies. The model must now be improved to take in vivo conditions into account, and extensively validated.
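
    The "thermal dose formula" referred to above is commonly the cumulative equivalent minutes at 43 °C (CEM43) expression; the sketch below accumulates it along a simulated temperature history. The temperature profile and the often-quoted 240-minute lesioning threshold are generic assumptions for illustration, not values taken from this study.

      # Sketch: cumulative equivalent minutes at 43 C (CEM43) thermal dose,
      # CEM43 = sum over time steps of R^(43 - T) * dt, with R = 0.5 above 43 C
      # and R = 0.25 below. A dose of ~240 min is a commonly used lesioning threshold.
      def cem43(temperatures_c, dt_minutes):
          dose = 0.0
          for t in temperatures_c:
              r = 0.5 if t >= 43.0 else 0.25
              dose += (r ** (43.0 - t)) * dt_minutes
          return dose

      # Illustrative 10 s exposure sampled every 0.1 s: heat to 58 C, then cool.
      dt = 0.1 / 60.0                      # minutes
      history = [37.0 + 21.0 * min(i / 30.0, 1.0) for i in range(60)] + \
                [58.0 - 0.5 * i for i in range(40)]
      print(f"CEM43 = {cem43(history, dt):.0f} equivalent minutes")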

  8. The Evolving Market Structure of the U.S. Residential Solar PV Installation

    Science.gov Websites

    The Evolving Market Structure of the U.S. Residential Solar PV Installation Industry, 2000-2016: from 2000 to 2016, the U.S. residential solar photovoltaic (PV) installation industry became more concentrated over time.

  9. How Life and Rocks Have Co-Evolved

    NASA Astrophysics Data System (ADS)

    Hazen, R.

    2014-04-01

    The near-surface environment of terrestrial planets and moons evolves as a consequence of selective physical, chemical, and biological processes - an evolution that is preserved in the mineralogical record. Mineral evolution begins with approximately 12 different refractory minerals that form in the cooling envelopes of exploding stars. Subsequent aqueous and thermal alteration of planetesimals results in the approximately 250 minerals now found in unweathered lunar and meteorite samples. Following Earth's accretion and differentiation, mineral evolution resulted from a sequence of geochemical and petrologic processes, which led to perhaps 1500 mineral species. According to some origin-of-life scenarios, a planet must progress through at least some of these stages of chemical processing as a prerequisite for life. Once life emerged, mineralogy and biology co-evolved and dramatically increased Earth's mineral diversity to >4000 species. Sequential stages of a planet's near-surface evolution arise from three primary mechanisms: (1) the progressive separation and concentration of the elements from their original relatively uniform distribution in the presolar nebula; (2) the increase in range of intensive variables such as pressure, temperature, and volatile activities; and (3) the generation of far-from-equilibrium conditions by living systems. Remote observations of the mineralogy of other terrestrial bodies may thus provide evidence for biological influences beyond Earth. Recent studies of mineral diversification through time reveal striking correlations with major geochemical, tectonic, and biological events, including large changes in ocean chemistry, the supercontinent cycle, the increase of atmospheric oxygen, and the rise of the terrestrial biosphere.

  10. Evolving Concepts of Asthma

    PubMed Central

    Ray, Anuradha; Wenzel, Sally E.

    2015-01-01

    Our understanding of asthma has evolved over time from a singular disease to a complex of various phenotypes, with varied natural histories, physiologies, and responses to treatment. Early therapies treated most patients with asthma similarly, with bronchodilators and corticosteroids, but these therapies had varying degrees of success. Similarly, despite initial studies that identified an underlying type 2 inflammation in the airways of patients with asthma, biologic therapies targeted toward these type 2 pathways were unsuccessful in all patients. These observations led to increased interest in phenotyping asthma. Clinical approaches, both biased and later unbiased/statistical approaches to large asthma patient cohorts, identified a variety of patient characteristics, but they also consistently identified the importance of age of onset of disease and the presence of eosinophils in determining clinically relevant phenotypes. These paralleled molecular approaches to phenotyping that developed an understanding that not all patients share a type 2 inflammatory pattern. Using biomarkers to select patients with type 2 inflammation, repeated trials of biologics directed toward type 2 cytokine pathways saw newfound success, confirming the importance of phenotyping in asthma. Further research is needed to clarify additional clinical and molecular phenotypes, validate predictive biomarkers, and identify new areas for possible interventions. PMID:26161792

  11. Minority games, evolving capitals and replicator dynamics

    NASA Astrophysics Data System (ADS)

    Galla, Tobias; Zhang, Yi-Cheng

    2009-11-01

    We discuss a simple version of the minority game (MG) in which agents hold only one strategy each, but in which their capitals evolve dynamically according to their success and in which the total trading volume varies in time accordingly. This feature is known to be crucial for MGs to reproduce stylized facts of real market data. The stationary states and phase diagram of the model can be computed, and we show that the ergodicity breaking phase transition common for MGs, and marked by a divergence of the integrated response, is present also in this simplified model. An analogous majority game turns out to be relatively void of interesting features, and the total capital is found to diverge in time. Introducing a restraining force leads to a model akin to the replicator dynamics of evolutionary game theory, and we demonstrate that here a different type of phase transition is observed. Finally we briefly discuss the relation of this model with one strategy per player to more sophisticated minority games with dynamical capitals and several trading strategies per agent.
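
    For orientation, a bare-bones minority game with one fixed strategy per agent and multiplicatively evolving capitals can be coded in a few lines; the toy version below only echoes the flavour of the model (capital-weighted attendance, minority payoff) and does not reproduce the authors' exact update rules or phase-transition analysis.

      # Sketch: a toy minority game in which each agent holds one fixed strategy
      # (a map from the recent outcome history to +1/-1) and a capital that grows
      # when the agent ends up in the minority. Trading volume is capital-weighted.
      import random

      random.seed(3)
      N, MEMORY, ROUNDS = 101, 3, 2000
      strategies = [{h: random.choice((-1, 1)) for h in range(2 ** MEMORY)}
                    for _ in range(N)]
      capital = [1.0] * N
      history = 0                                    # last MEMORY outcomes packed in bits

      for _ in range(ROUNDS):
          actions = [strategies[i][history] for i in range(N)]
          attendance = sum(capital[i] * actions[i] for i in range(N))
          minority = -1 if attendance > 0 else 1
          for i in range(N):
              # Capital grows slightly for minority agents and shrinks otherwise.
              capital[i] *= 1.01 if actions[i] == minority else 0.99
          history = ((history << 1) | (1 if minority == 1 else 0)) % (2 ** MEMORY)

      print("total capital:", round(sum(capital), 2))
      print("largest / smallest capital:", round(max(capital) / min(capital), 1))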

  12. Numerical processing efficiency improved in children using mental abacus: ERP evidence utilizing a numerical Stroop task

    PubMed Central

    Yao, Yuan; Du, Fenglei; Wang, Chunjie; Liu, Yuqiu; Weng, Jian; Chen, Feiyan

    2015-01-01

    This study examined whether long-term abacus-based mental calculation (AMC) training improved numerical processing efficiency and at what stage of information processing the effect appeared. Thirty-three children participated in the study and were randomly assigned to two groups at primary school entry, matched for age, gender and IQ. All children went through the same curriculum except that the abacus group received 2 h per week of AMC training, while the control group did traditional numerical practice for a similar amount of time. After 2 years of training, they were tested with a numerical Stroop task. Electroencephalographic (EEG) and event-related potential (ERP) recording techniques were used to monitor the temporal dynamics during the task. Children were required to determine the numerical magnitude (NC task) or the physical size (PC task) of two numbers presented simultaneously. In the NC task, the AMC group showed faster response times but similar accuracy compared to the control group. In the PC task, the two groups exhibited the same speed and accuracy. The saliency of numerical information relative to physical information was greater in the AMC group. With regard to ERP results, the AMC group displayed congruity effects in both the earlier (N1) and later (N2 and LPC, late positive component) time windows, while the control group only displayed congruity effects for LPC. In the left parietal region, LPC amplitudes were larger for the AMC than the control group. Individual differences for LPC amplitudes over the left parietal area showed a positive correlation with RTs in the NC task in both congruent and neutral conditions. After controlling for the N2 amplitude, this correlation also became significant in the incongruent condition. Our results suggest that AMC training can strengthen the relationship between symbolic representation and numerical magnitude so that numerical information processing becomes quicker and more automatic in AMC children. PMID:26042012

  13. A Course Evolves-Physical Anthropology.

    ERIC Educational Resources Information Center

    O'Neil, Dennis

    2001-01-01

    Describes the development of an online physical anthropology course at Palomar College (California) that evolved from online tutorials. Discusses the ability to update materials on the Web more quickly than in traditional textbooks; creating Web pages that are readable by most Web browsers; test security issues; and clarifying ownership of online…

  14. Time-resolved spectroscopy at surfaces and adsorbate dynamics: Insights from a model-system approach

    NASA Astrophysics Data System (ADS)

    Boström, Emil; Mikkelsen, Anders; Verdozzi, Claudio

    2016-05-01

    We introduce a model description of femtosecond laser induced desorption at surfaces. The substrate part of the system is taken into account as a (possibly semi-infinite) linear chain. Here, being especially interested in the early stages of dissociation, we consider a finite-size implementation of the model (i.e., a finite substrate), for which an exact numerical solution is possible. By time-evolving the many-body wave function, and also using results from a time-dependent density functional theory description for electron-nuclear systems, we analyze the competition between several surface-response mechanisms and electronic correlations in the transient and longer time dynamics under the influence of dipole-coupled fields. Our model allows us to explore how coherent multiple-pulse protocols can impact desorption in a variety of prototypical experiments.

  15. On numerical model of one-dimensional time-dependent gas flows through bed of encapsulated phase change material

    NASA Astrophysics Data System (ADS)

    Lutsenko, N. A.; Fetsov, S. S.

    2017-10-01

    A mathematical model and a numerical method are proposed for investigating one-dimensional time-dependent gas flows through a packed bed of encapsulated Phase Change Material (PCM). The model is based on the assumption of interacting interpenetrating continua and includes equations of state, continuity, momentum conservation and energy for the PCM and the gas. The advantage of the method is that it does not require predicting the location of the phase transition zone and can define it automatically, as in a usual shock-capturing method. One of the applications of the developed numerical model is the simulation of a novel Adiabatic Compressed Air Energy Storage (A-CAES) system with a Thermal Energy Storage (TES) subsystem based on encapsulated PCM in a packed bed. Preliminary test calculations give hope that the method can be effectively applied in the future for modelling the charge and discharge processes in such a TES with PCM.

  16. Signing Apes and Evolving Linguistics.

    ERIC Educational Resources Information Center

    Stokoe, William C.

    Linguistics retains from its antecedents, philology and the study of sacred writings, some of their apologetic and theological bias. Thus it has not been able to face squarely the question how linguistic function may have evolved from animal communication. Chimpanzees' use of signs from American Sign Language forces re-examination of language…

  17. The Evolving Demand for Skills.

    ERIC Educational Resources Information Center

    Greenspan, Alan

    From a macroeconomic perspective, the evolving demand for skills in the United States has been triggered by the accelerated expansion of computer and information technology, which has, in turn, brought significant changes to the workplace. Technological advances have made some wholly manual jobs obsolete. But even for many other workers, a rapidly…

  18. Numerical simulation of granular flows : comparison with experimental results

    NASA Astrophysics Data System (ADS)

    Pirulli, M.; Mangeney-Castelnau, A.; Lajeunesse, E.; Vilotte, J.-P.; Bouchut, F.; Bristeau, M. O.; Perthame, B.

    2003-04-01

    Granular avalanches such as rock or debris flows regularly cause extensive human and material damage. Numerical simulation of granular avalanches should provide a useful tool for investigating, within realistic geological contexts, the dynamics of these flows and of their arrest phase and for improving the risk assessment of such natural hazards. Validation of a debris-avalanche numerical model against granular experiments on an inclined plane is performed here. The comparison is performed by simulating the granular flow of glass beads from a reservoir through a gate down an inclined plane. This unsteady situation evolves toward the steady state observed in the laboratory. Furthermore, the simulation exactly reproduces the arrest phase obtained by suddenly closing the gate of the reservoir once a thick flow has developed. The spreading of a granular mass released from rest at the top of a rough inclined plane is also investigated. The evolution of the avalanche shape, the velocity and the characteristics of the arrest phase are compared with experimental results, and the forces involved are analysed for various flow laws.

  19. Nonlinear modelling in time domain numerical analysis of stringed instrument dynamics

    NASA Astrophysics Data System (ADS)

    Bielski, Paweł; Kujawa, Marcin

    2017-03-01

    Musical instruments vary widely in sound quality, with their timbre shaped by materials and geometry. The impact of materials is commonly treated by musicians as the dominant one, although it is unclear whether this is true. The research proposed in this study focuses on determining the influence of both factors on sound quality based on their impact on harmonic composition. A numerical approach was chosen to allow independent manipulation of geometrical and material parameters, as opposed to an experimental study subject to the natural randomness of instrument construction. A distinctive element of this research is the precise modelling of the whole instrument, treating it as one large vibrating system instead of performing modal analysis on an isolated part. A finite-element model of a stringed instrument has been built, and a series of nonlinear time-domain dynamic analyses were executed to obtain displacement signals and perform subsequent spectral analysis. The precision of the computations appears sufficient to determine the influence of the instrument's macroscopic mechanical parameters on timbre. Further research should focus on the implementation of an acoustic medium in an attempt to include dissipation and synchronization mechanisms. Outside the musical field, this kind of research could potentially be useful in noise reduction problems.

  20. Simulating spontaneous aseismic and seismic slip events on evolving faults

    NASA Astrophysics Data System (ADS)

    Herrendörfer, Robert; van Dinther, Ylona; Pranger, Casper; Gerya, Taras

    2017-04-01

    Plate motion along tectonic boundaries is accommodated by different slip modes: steady creep, seismic slip and slow slip transients. Due to mainly indirect observations and difficulties to scale results from laboratory experiments to nature, it remains enigmatic which fault conditions favour certain slip modes. Therefore, we are developing a numerical modelling approach that is capable of simulating different slip modes together with the long-term fault evolution in a large-scale tectonic setting. We extend the 2D, continuum mechanics-based, visco-elasto-plastic thermo-mechanical model that was designed to simulate slip transients in large-scale geodynamic simulations (van Dinther et al., JGR, 2013). We improve the numerical approach to accurately treat the non-linear problem of plasticity (see also EGU 2017 abstract by Pranger et al.). To resolve a wide slip rate spectrum on evolving faults, we develop an invariant reformulation of the conventional rate-and-state dependent friction (RSF) and adapt the time step (Lapusta et al., JGR, 2000). A crucial part of this development is a conceptual ductile fault zone model that relates slip rates along discrete planes to the effective macroscopic plastic strain rates in the continuum. We test our implementation first in a simple 2D setup with a single fault zone that has a predefined initial thickness. Results show that deformation localizes in case of steady creep and for very slow slip transients to a bell-shaped strain rate profile across the fault zone, which suggests that a length scale across the fault zone may exist. This continuum length scale would overcome the common mesh-dependency in plasticity simulations and question the conventional treatment of aseismic slip on infinitely thin fault zones. We test the introduction of a diffusion term (similar to the damage description in Lyakhovsky et al., JMPS, 2011) into the state evolution equation and its effect on (de-)localization during faster slip events. We compare
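
    For reference, the conventional rate-and-state friction (RSF) law that the authors reformulate in invariant form is usually written, with the ageing form of the state evolution, as below; V is the slip rate, \theta the state variable, and \mu_0, a, b, V_0, D_c the usual RSF constants. The last line only sketches, schematically, how a diffusion term of the kind mentioned at the end of the abstract could enter the state equation; the diffusivity D is an assumed symbol for illustration, not a parameter from this work.

      \mu(V, \theta) = \mu_0 + a \ln\left(\frac{V}{V_0}\right) + b \ln\left(\frac{V_0 \theta}{D_c}\right)
      \frac{d\theta}{dt} = 1 - \frac{V \theta}{D_c}                         \quad \text{(ageing law)}
      \frac{d\theta}{dt} = 1 - \frac{V \theta}{D_c} + D \nabla^2 \theta     \quad \text{(schematic state-diffusion variant)}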

  1. Transistor Level Circuit Experiments using Evolvable Hardware

    NASA Technical Reports Server (NTRS)

    Stoica, A.; Zebulum, R. S.; Keymeulen, D.; Ferguson, M. I.; Daud, Taher; Thakoor, A.

    2005-01-01

    The Jet Propulsion Laboratory (JPL) performs research in fault-tolerant, long-life, and space-survivable electronics for the National Aeronautics and Space Administration (NASA). With that focus, JPL has been involved in Evolvable Hardware (EHW) technology research for the past several years. We have advanced the technology not only by simulation and evolution experiments, but also by designing, fabricating, and evolving a variety of transistor-based analog and digital circuits at the chip level. EHW refers to self-configuration of electronic hardware by evolutionary/genetic search mechanisms, thereby maintaining existing functionality in the presence of degradations due to aging, temperature, and radiation. In addition, EHW has the capability to reconfigure itself for new functionality when required for mission changes or encountered opportunities. Evolution experiments are performed using a genetic algorithm running on a DSP as the reconfiguration mechanism and controlling the evolvable hardware mounted on a self-contained circuit board. Rapid reconfiguration allows convergence to circuit solutions on the order of seconds. The paper illustrates hardware evolution results for electronic circuits and their ability to perform at temperatures of 230 °C as well as under radiation doses of up to 250 krad.

  2. Understanding light scattering by a coated sphere part 2: time domain analysis.

    PubMed

    Laven, Philip; Lock, James A

    2012-08-01

    Numerical computations were made of scattering of an incident electromagnetic pulse by a coated sphere that is large compared to the dominant wavelength of the incident light. The scattered intensity was plotted as a function of the scattering angle and delay time of the scattered pulse. For fixed core and coating radii, the Debye series terms that most strongly contribute to the scattered intensity in different regions of scattering angle-delay time space were identified and analyzed. For a fixed overall radius and an increasing core radius, the first-order rainbow was observed to evolve into three separate components. The original component faded away, while the two new components eventually merged together. The behavior of surface waves generated by grazing incidence at the core/coating and coating/exterior interfaces was also examined and discussed.

  3. Dark soliton beats in the time-varying background of Bose-Einstein condensates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu Lei; Li Lu; Zhang Jiefang

    2009-07-15

    We investigate the dynamics of dark solitons in one-dimensional Bose-Einstein condensates. In the large particle limit, by introducing the lens-type transformation, we find that the macroscopic wave function evolves self-similarly when its initial profile strays from that of the equilibrium state, which provides a time-varying background for the propagation of dark solitons. The interaction of dark solitons with this kind of background is studied both analytically and numerically. We find that the center-of-mass motion of the dark soliton is deeply affected by the time-varying background, and the beating phenomena of the dark soliton emerge when the intrinsic frequency of the dark soliton approaches that of the background. Lastly, we investigate the propagation of dark solitons in the freely expanding background.

  4. New approach for segmentation and recognition of handwritten numeral strings

    NASA Astrophysics Data System (ADS)

    Sadri, Javad; Suen, Ching Y.; Bui, Tien D.

    2004-12-01

    In this paper, we propose a new system for segmentation and recognition of unconstrained handwritten numeral strings. The system uses a combination of foreground and background features for segmentation of touching digits. The method introduces new algorithms for traversing the top/bottom-foreground-skeletons of the touched digits, and for finding feature points on these skeletons, and matching them to build all the segmentation paths. For the first time a genetic representation is used to show all the segmentation hypotheses. Our genetic algorithm tries to search and evolve the population of candidate segmentations and finds the one with the highest confidence for its segmentation and recognition. We have also used a new method for feature extraction which lowers the variations in the shapes of the digits, and then a MLP neural network is utilized to produce the labels and confidence values for those digits. The NIST SD19 and CENPARMI databases are used for evaluating the system. Our system can get a correct segmentation-recognition rate of 96.07% with rejection rate of 2.61% which compares favorably with those that exist in the literature.

  5. New approach for segmentation and recognition of handwritten numeral strings

    NASA Astrophysics Data System (ADS)

    Sadri, Javad; Suen, Ching Y.; Bui, Tien D.

    2005-01-01

    In this paper, we propose a new system for segmentation and recognition of unconstrained handwritten numeral strings. The system uses a combination of foreground and background features for segmentation of touching digits. The method introduces new algorithms for traversing the top/bottom-foreground-skeletons of the touched digits, and for finding feature points on these skeletons, and matching them to build all the segmentation paths. For the first time a genetic representation is used to show all the segmentation hypotheses. Our genetic algorithm tries to search and evolve the population of candidate segmentations and finds the one with the highest confidence for its segmentation and recognition. We have also used a new method for feature extraction which lowers the variations in the shapes of the digits, and then a MLP neural network is utilized to produce the labels and confidence values for those digits. The NIST SD19 and CENPARMI databases are used for evaluating the system. Our system can get a correct segmentation-recognition rate of 96.07% with rejection rate of 2.61% which compares favorably with those that exist in the literature.

  6. Evolving epidemiology of HIV-associated malignancies.

    PubMed

    Shiels, Meredith S; Engels, Eric A

    2017-01-01

    The purpose of this review is to describe the epidemiology of cancers that occur at an elevated rate among people with HIV infection in the current treatment era, including discussion of the cause of these cancers, as well as changes in cancer incidence and burden over time. Rates of Kaposi sarcoma, non-Hodgkin lymphoma and cervical cancer have declined sharply in developed countries during the highly active antiretroviral therapy era, but remain elevated 800-fold, 10-fold and four-fold, respectively, compared with the general population. Most studies have reported significant increases in liver cancer rates and decreases in lung cancer over time. Although some studies have reported significant increases in anal cancer rates and declines in Hodgkin lymphoma rates, others have shown stable incidence. Declining mortality among HIV-infected individuals has resulted in the growth and aging of the HIV-infected population, causing an increase in the number of non-AIDS-defining cancers diagnosed each year in HIV-infected people. The epidemiology of cancer among HIV-infected people has evolved since the beginning of the HIV epidemic with particularly marked changes since the introduction of modern treatment. Public health interventions aimed at prevention and early detection of cancer among HIV-infected people are needed.

  7. Queues on a Dynamically Evolving Graph

    NASA Astrophysics Data System (ADS)

    Mandjes, Michel; Starreveld, Nicos J.; Bekker, René

    2018-04-01

    This paper considers a population process on a dynamically evolving graph, which can be alternatively interpreted as a queueing network. The queues are of infinite-server type, entailing that at each node all customers present are served in parallel. The links that connect the queues have the special feature that they are unreliable, in the sense that their status alternates between `up' and `down'. If a link between two nodes is down, with a fixed probability each of the clients attempting to use that link is lost; otherwise the client remains at the origin node and reattempts using the link (and jumps to the destination node when it finds the link restored). For these networks we present the following results: (a) a system of coupled partial differential equations that describes the joint probability generating function corresponding to the queues' time-dependent behavior (and a system of ordinary differential equations for its stationary counterpart), (b) an algorithm to evaluate the (time-dependent and stationary) moments, and procedures to compute user-perceived performance measures which facilitate the quantification of the impact of the links' outages, (c) a diffusion limit for the joint queue length process. We include explicit results for a series of relevant special cases, such as tandem networks and symmetric fully connected networks.
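
    A discrete-time toy version of such a network, with infinite-server nodes and a link that flips between up and down, can be simulated directly; the topology, transition probabilities and loss probability below are arbitrary illustrative choices, not quantities from the paper.

      # Sketch: two infinite-server queues joined by an unreliable link. Each step,
      # customers arrive at node 0 and each one attempts the link with some probability;
      # if the link is down, each attempt is lost with probability LOSS_Q, otherwise the
      # customer stays and retries later. Customers at node 1 depart independently.
      import random

      random.seed(7)
      ARRIVAL, ATTEMPT, DEPART, LOSS_Q = 5, 0.3, 0.2, 0.5
      P_UP_TO_DOWN, P_DOWN_TO_UP = 0.05, 0.2

      queue = [0, 0]
      link_up = True
      lost = 0
      for _ in range(10000):
          link_up = (random.random() >= P_UP_TO_DOWN) if link_up \
                    else (random.random() < P_DOWN_TO_UP)
          queue[0] += ARRIVAL
          moving = sum(1 for _ in range(queue[0]) if random.random() < ATTEMPT)
          if link_up:
              queue[0] -= moving
              queue[1] += moving
          else:
              losses = sum(1 for _ in range(moving) if random.random() < LOSS_Q)
              queue[0] -= losses
              lost += losses
          queue[1] -= sum(1 for _ in range(queue[1]) if random.random() < DEPART)

      print("queue lengths:", queue, "customers lost:", lost)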

  8. Japanese experience of evolving nurses' roles in changing social contexts.

    PubMed

    Kanbara, S; Yamamoto, Y; Sugishita, T; Nakasa, T; Moriguchi, I

    2017-06-01

    To discuss the evolving roles of Japanese nurses in meeting the goals and concerns of ongoing global sustainable development. Japanese nurses' roles have evolved as the needs of the country and the communities they served changed over time. The comprehensive public healthcare services in Japan were provided by the cooperation of hospitals and public health nurses. The nursing profession is exploring ways to identify and systemize nursing skills and competencies that address global health initiatives for sustainable development goals. This paper is based on the summary of a symposium (part of the 2015 annual meeting of the Japan Association for International Health), with panel members including experts from Japan's Official Development Assistance. The evolving role of nurses in response to national and international needs is illustrated by nursing practices from Japan. Japanese public health nurses have also assisted overseas healthcare plans. In recent catastrophes, Japanese nurses assumed the roles of community health coordinators for restoration and maintenance of public health. The Japanese experience shows that nursing professionals are best placed to work with community health issues, high-risk situations and vulnerable communities. Their cooperation can address current social needs and help global communities to transform our world. Nurses have tremendous potential to make transformative changes in health and bring about the necessary paradigm shift. They must be involved in global sustainable development goals, health policies and disaster risk management. A mutual understanding between global citizens and nurses will help to renew and strengthen their capacities. Nursing professionals can contribute effectively to achieving national and global health goals and make transformative changes. © 2017 International Council of Nurses.

  9. Link Prediction in Evolving Networks Based on Popularity of Nodes.

    PubMed

    Wang, Tong; He, Xing-Sheng; Zhou, Ming-Yang; Fu, Zhong-Qian

    2017-08-02

    Link prediction aims to uncover the underlying relationships behind networks, which can be used to predict missing edges or identify spurious ones. The key issue in link prediction is to estimate the likelihood of potential links in a network. Most classical methods based on static structure ignore the temporal aspects of networks; limited by these time-varying features, such approaches perform poorly in evolving networks. In this paper, we propose the hypothesis that the ability of each node to attract links depends not only on its structural importance, but also on its current popularity (activeness), since active nodes are much more likely to attract future links. A novel approach named the popularity based structural perturbation method (PBSPM), together with a fast algorithm, is then proposed to characterize the likelihood of an edge from both the existing connectivity structure and the current popularity of its two endpoints. Experiments on six evolving networks show that the proposed methods outperform state-of-the-art methods in accuracy and robustness. Besides, visual results and statistical analysis reveal that the proposed methods are inclined to predict future edges between active nodes, rather than edges between inactive nodes.
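
    The exact PBSPM formulation is given in the paper; as a hedged sketch of the general idea, the snippet below simply blends a structural similarity score (common neighbours) with a popularity term built from each endpoint's recent activity. The weighting scheme and toy matrices are illustrative only.

```python
import numpy as np

def popularity_weighted_scores(A_old, A_recent, alpha=0.5):
    """Illustrative popularity-weighted link scoring (not the exact PBSPM).

    A_old    : adjacency matrix of the full observed network (0/1, symmetric)
    A_recent : adjacency restricted to recently formed edges ("activeness" source)
    alpha    : trade-off between structural similarity and current popularity
    """
    structural = A_old @ A_old                    # common-neighbour counts
    activity = A_recent.sum(axis=1)               # recent degree = node activeness
    popularity = activity[:, None] + activity[None, :]
    score = (1 - alpha) * structural + alpha * popularity
    score[A_old > 0] = -np.inf                    # do not re-predict existing edges
    np.fill_diagonal(score, -np.inf)
    return score

# toy usage: rank the candidate (missing) edges by score
A_old = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
A_recent = np.array([[0, 0, 1, 0],
                     [0, 0, 0, 0],
                     [1, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=float)
s = popularity_weighted_scores(A_old, A_recent)
i, j = np.unravel_index(np.argmax(s), s.shape)
print("top predicted edge:", (int(i), int(j)))
```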

  10. Numerical simulations of tropical cyclones with assimilation of satellite, radar and in-situ observations: lessons learned from recent field programs and real-time experimental forecasts

    NASA Astrophysics Data System (ADS)

    Pu, Z.; Zhang, L.

    2010-12-01

    The impact of data assimilation on the predictability of tropical cyclones is examined with cases from recent field programs and real-time hurricane forecast experiments. Mesoscale numerical simulations are performed to simulate major typhoons during the T-PARC/TCS08 field campaign with the assimilation of satellite, radar and in-situ observations. Results confirm that data assimilation has indeed resulted in improved numerical simulations of tropical cyclones. However, positive impacts from the satellite and radar data depend strongly on the quality of these data. Specifically, it is found that the overall impact of assimilating AIRS-retrieved atmospheric temperature and moisture profiles on numerical simulations of tropical cyclones is very sensitive to the bias corrections of the data. For instance, dry biases in the moisture profiles can cause the decay of tropical cyclones in the numerical simulations. In addition, the quality of airborne Doppler radar data has a strong influence on numerical simulations of tropical cyclones in terms of their track, intensity and precipitation structures. Outcomes from assimilating radar data with various quality thresholds suggest that a trade-off between the quality and areal coverage of the radar data is necessary in practice. Some of the experience gained from the field case studies is applied to the near-real-time experimental hurricane forecasts during the 2010 hurricane season. Results and issues raised from the case studies and real-time experiments will be discussed.

  11. Relation of exact Gaussian basis methods to the dephasing representation: Theory and application to time-resolved electronic spectra

    NASA Astrophysics Data System (ADS)

    Sulc, Miroslav; Hernandez, Henar; Martinez, Todd J.; Vanicek, Jiri

    2014-03-01

    We recently showed that the Dephasing Representation (DR) provides an efficient tool for computing ultrafast electronic spectra and that cellularization yields further acceleration [M. Šulc and J. Vaníček, Mol. Phys. 110, 945 (2012)]. Here we focus on increasing its accuracy by first implementing an exact Gaussian basis method (GBM) combining the accuracy of quantum dynamics and efficiency of classical dynamics. The DR is then derived together with ten other methods for computing time-resolved spectra with intermediate accuracy and efficiency. These include the Gaussian DR (GDR), an exact generalization of the DR, in which trajectories are replaced by communicating frozen Gaussians evolving classically with an average Hamiltonian. The methods are tested numerically on time correlation functions and time-resolved stimulated emission spectra in the harmonic potential, pyrazine S0/S1 model, and quartic oscillator. Both the GBM and the GDR are shown to increase the accuracy of the DR. Surprisingly, in chaotic systems the GDR can outperform the presumably more accurate GBM, in which the two bases evolve separately. This research was supported by the Swiss NSF Grant No. 200021_124936/1 and NCCR Molecular Ultrafast Science & Technology (MUST), and by the EPFL.
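
    For orientation, the DR itself evaluates the fidelity amplitude as a phase average, f(t) ≈ N⁻¹ Σⱼ exp(-i ΔSⱼ(t)/ħ), over classical trajectories of the average Hamiltonian, where ΔSⱼ accumulates the difference ΔV between the two potential surfaces along trajectory j. The sketch below is a minimal 1-D toy under those assumptions (harmonic average potential, invented ΔV, Wigner sampling of a ground-state Gaussian); it is not the authors' code and omits the cellularization and Gaussian-basis machinery discussed in the abstract.

```python
import numpy as np

# Minimal 1-D sketch of the dephasing representation (hbar = 1):
#   f(t) ~ < exp(-i * Integral_0^t dV(q_tau) dtau) >,
# averaged over phase-space points sampled from a Wigner distribution, with the
# trajectories propagated by the *average* Hamiltonian (here: harmonic).
hbar, m, w = 1.0, 1.0, 1.0
dV = lambda q: 0.05 * q**2          # toy difference between the two surfaces

rng = np.random.default_rng(1)
N = 2000
q = rng.normal(0.0, np.sqrt(hbar / (2 * m * w)), N)   # Wigner sampling of the
p = rng.normal(0.0, np.sqrt(hbar * m * w / 2), N)     # harmonic ground state

dt, nsteps = 0.01, 1000
phase = np.zeros(N)
for _ in range(nsteps):
    # velocity-Verlet step on the average (harmonic) Hamiltonian
    p -= 0.5 * dt * m * w**2 * q
    q += dt * p / m
    p -= 0.5 * dt * m * w**2 * q
    phase += dV(q) * dt                               # accumulated action difference

print("|f(T)| =", abs(np.mean(np.exp(-1j * phase / hbar))))
```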

  12. φ-evo: A program to evolve phenotypic models of biological networks.

    PubMed

    Henry, Adrien; Hemery, Mathieu; François, Paul

    2018-06-01

    Molecular networks are at the core of most cellular decisions, but are often difficult to comprehend. Reverse engineering of network architecture from their functions has proved fruitful to classify and predict the structure and function of molecular networks, suggesting new experimental tests and biological predictions. We present φ-evo, an open-source program to evolve in silico phenotypic networks performing a given biological function. We include implementations for evolution of biochemical adaptation, adaptive sorting for immune recognition, metazoan development (somitogenesis, hox patterning), as well as Pareto evolution. We detail the program architecture based on C, Python 3, and a Jupyter interface for project configuration and network analysis. We illustrate the predictive power of φ-evo by first recovering the asymmetrical structure of the lac operon regulation from an objective function with symmetrical constraints. Second, we use the problem of hox-like embryonic patterning to show how a single effective fitness can emerge from multi-objective (Pareto) evolution. φ-evo provides an efficient approach and user-friendly interface for the phenotypic prediction of networks and the numerical study of evolution itself.

  13. LES of Temporally Evolving Mixing Layers by Three High Order Schemes

    NASA Astrophysics Data System (ADS)

    Yee, H.; Sjögreen, B.; Hadjadj, A.

    2011-10-01

    The performance of three high order shock-capturing schemes is compared for large eddy simulations (LES) of temporally evolving mixing layers at different convective Mach numbers (Mc), ranging from the quasi-incompressible regime to the highly compressible supersonic regime. The considered high order schemes are fifth-order WENO (WENO5), seventh-order WENO (WENO7), and the associated eighth-order central spatial base scheme with the dissipative portion of WENO7 as a nonlinear post-processing filter step (WENO7fi). This high order nonlinear filter method (Yee & Sjögreen 2009) is designed for accurate and efficient simulations of shock-free compressible turbulence, turbulence with shocklets and turbulence with strong shocks with minimum tuning of scheme parameters. The LES results by WENO7fi using the same scheme parameter agree well with the experimental results of Barone et al. (2006), and with published direct numerical simulations (DNS) by Rogers & Moser (1994) and Pantano & Sarkar (2002), whereas results by WENO5 and WENO7 compare poorly with the experimental data and DNS computations.
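
    For reference, the reconstruction step that these schemes share can be sketched compactly. Below is the standard Jiang-Shu fifth-order WENO (WENO5) reconstruction of the left-biased interface value from cell data; the smoothness indicators, linear weights and epsilon are the textbook values, not necessarily the scheme parameters used in the cited simulations.

```python
import numpy as np

def weno5_reconstruct(f, eps=1e-6):
    """Left-biased fifth-order WENO reconstruction of f at interfaces i+1/2.

    f : 1-D array of cell values; returns reconstructed values at the interior
    interfaces, using the standard Jiang-Shu smoothness indicators and weights.
    """
    fm2, fm1, f0, fp1, fp2 = f[:-4], f[1:-3], f[2:-2], f[3:-1], f[4:]

    # candidate third-order reconstructions on the three sub-stencils
    q0 = (2*fm2 - 7*fm1 + 11*f0) / 6.0
    q1 = (-fm1 + 5*f0 + 2*fp1) / 6.0
    q2 = (2*f0 + 5*fp1 - fp2) / 6.0

    # smoothness indicators
    b0 = 13/12*(fm2 - 2*fm1 + f0)**2 + 0.25*(fm2 - 4*fm1 + 3*f0)**2
    b1 = 13/12*(fm1 - 2*f0 + fp1)**2 + 0.25*(fm1 - fp1)**2
    b2 = 13/12*(f0 - 2*fp1 + fp2)**2 + 0.25*(3*f0 - 4*fp1 + fp2)**2

    # nonlinear weights built from the optimal linear weights (0.1, 0.6, 0.3)
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    s = a0 + a1 + a2
    return (a0*q0 + a1*q1 + a2*q2) / s

x = np.linspace(0, 2*np.pi, 64, endpoint=False)
u = np.sin(x)
print(weno5_reconstruct(u)[:5])   # smooth data: high-order accurate interface values
```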

  14. A multistage stochastic programming model for a multi-period strategic expansion of biofuel supply chain under evolving uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Fei; Huang, Yongxi

    Here, we develop a multistage, stochastic mixed-integer model to support biofuel supply chain expansion under evolving uncertainties. By utilizing the block-separable recourse property, we reformulate the multistage program in an equivalent two-stage program and solve it using an enhanced nested decomposition method with maximal non-dominated cuts. We conduct extensive numerical experiments and demonstrate the application of the model and algorithm in a case study based on the South Carolina settings. The value of multistage stochastic programming method is also explored by comparing the model solution with the counterparts of an expected value based deterministic model and a two-stage stochastic model.

  15. A multistage stochastic programming model for a multi-period strategic expansion of biofuel supply chain under evolving uncertainties

    DOE PAGES

    Xie, Fei; Huang, Yongxi

    2018-02-04

    Here, we develop a multistage, stochastic mixed-integer model to support biofuel supply chain expansion under evolving uncertainties. By utilizing the block-separable recourse property, we reformulate the multistage program in an equivalent two-stage program and solve it using an enhanced nested decomposition method with maximal non-dominated cuts. We conduct extensive numerical experiments and demonstrate the application of the model and algorithm in a case study based on the South Carolina settings. The value of multistage stochastic programming method is also explored by comparing the model solution with the counterparts of an expected value based deterministic model and a two-stage stochastic model.
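
    The full model above is a multistage mixed-integer program solved by nested decomposition; as a much smaller illustration of the underlying idea, the sketch below writes a toy two-stage capacity-expansion problem as its deterministic equivalent (scenario expansion) and solves it with SciPy. All costs, demands and probabilities are invented for the example and have nothing to do with the South Carolina case study.

```python
import numpy as np
from scipy.optimize import linprog

# Toy two-stage stochastic program (deterministic equivalent / scenario expansion):
#   min  c*x + sum_s p_s * q * y_s
#   s.t. x + y_s >= d_s  for each demand scenario s,   x, y_s >= 0
# x = first-stage capacity built now, y_s = (more expensive) recourse under scenario s.
c, q = 2.0, 5.0                         # build cost vs. recourse cost per unit
d = np.array([10.0, 20.0, 35.0])        # demand scenarios
p = np.array([0.3, 0.5, 0.2])           # scenario probabilities
S = len(d)

cost = np.concatenate(([c], p * q))                 # objective over [x, y_1, ..., y_S]
A_ub = np.hstack([-np.ones((S, 1)), -np.eye(S)])    # -x - y_s <= -d_s
b_ub = -d
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (S + 1))
print("first-stage capacity:", res.x[0], " expected total cost:", res.fun)
```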

  16. Ring-fault activity at subsiding calderas studied from analogue experiments and numerical modeling

    NASA Astrophysics Data System (ADS)

    Liu, Y. K.; Ruch, J.; Vasyura-Bathke, H.; Jonsson, S.

    2017-12-01

    Several subsiding calderas, such as those in the Galápagos archipelago and the Axial Seamount in the Pacific Ocean, have shown a complex but similar ground deformation pattern, composed of a broad deflation signal affecting the entire volcanic edifice and a localized subsidence signal focused within the caldera. However, it is still debated how deep processes at subsiding calderas, including magmatic pressure changes, source locations and ring-faulting, relate to this observed surface deformation pattern. We combine analogue sandbox experiments with numerical modeling to study the processes involved from the initial subsidence to the later collapse of calderas. The sandbox apparatus is composed of a motor-driven subsiding half-piston connected to the bottom of a glass box. During the experiments, observations are made with five digital cameras photographing from various perspectives. We use Photoscan, a photogrammetry package, and PIVLab, a time-resolved digital image correlation tool, to retrieve time series of digital elevation models and velocity fields from the acquired photographs. This setup allows us to track the processes acting both at depth and at the surface, and to assess their relative importance as the subsidence evolves to a collapse. We also use the Boundary Element Method to build a numerical model of the experimental setup, which comprises a contracting sill-like source interacting with a ring-fault in an elastic half-space. We then compare our results from these two approaches with the examples observed in nature. Our preliminary experimental and numerical results show that at the initial stage of magmatic withdrawal, when the ring-fault is not yet well formed, broad and smooth deflation dominates at the surface. As the withdrawal increases, a narrower subsidence bowl develops, accompanied by the upward propagation of the ring-faulting. This indicates that the broad deflation, affecting the entire volcano edifice, is primarily driven by the contraction of the

  17. High accuracy mantle convection simulation through modern numerical methods - II: realistic models and problems

    NASA Astrophysics Data System (ADS)

    Heister, Timo; Dannberg, Juliane; Gassmöller, Rene; Bangerth, Wolfgang

    2017-08-01

    Computations have helped elucidate the dynamics of Earth's mantle for several decades already. The numerical methods that underlie these simulations have greatly evolved within this time span, and today include dynamically changing and adaptively refined meshes, sophisticated and efficient solvers, and parallelization to large clusters of computers. At the same time, many of the methods - discussed in detail in a previous paper in this series - were developed and tested primarily using model problems that lack many of the complexities that are common to the realistic models our community wants to solve today. With several years of experience solving complex and realistic models, we here revisit some of the algorithm designs of the earlier paper and discuss the incorporation of more complex physics. In particular, we re-consider time stepping and mesh refinement algorithms, evaluate approaches to incorporate compressibility, and discuss dealing with strongly varying material coefficients, latent heat, and how to track chemical compositions and heterogeneities. Taken together and implemented in a high-performance, massively parallel code, the techniques discussed in this paper then allow for high resolution, 3-D, compressible, global mantle convection simulations with phase transitions, strongly temperature dependent viscosity and realistic material properties based on mineral physics data.

  18. Photometric Study of Massive Evolved Galaxies in the CANDELS GOODS-S at z>3

    NASA Astrophysics Data System (ADS)

    Nayyeri, Hooshang; Mobasher, B.; Ferguson, H. C.; Wiklind, T.; Hemmati, S.; De Barros, S.; Fontana, A.; Dahlen, T.; Koekemoer, A. M.

    2014-01-01

    According to hierarchical models, galaxies assemble their mass over time, with the most massive and evolved systems found at more recent times and in the most massive dark matter halos. Understanding the evolution of mass assembly with cosmic time plays a central role in observational astronomy. Here, we use the very deep near-infrared HST/WFC3 observations from the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) to study passively evolving, old and massive systems at high redshifts. For this we utilize the pronounced Balmer break (an age-dependent diagnostic at rest-frame 3648 Å) in post-starburst galaxies to devise a Balmer Break Galaxy (BBG) selection. We use the CANDELS WFC3 1.6 μm selected catalog in GOODS-S, generated with the TFIT algorithm suitable for mixed-resolution data sets, to select the candidates. We identified 24 sources as candidates for evolved systems in the redshift 3.5

  19. Numerical study of heterogeneous mean temperature and shock wave in a resonator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yano, Takeru

    2015-10-28

    When the frequency of gas oscillation in an acoustic resonator is sufficiently close to one of the resonant frequencies of the resonator, the amplitude of the gas oscillation becomes large and hence the nonlinear effect manifests itself. Then, if the dissipation effects due to viscosity and thermal conductivity of the gas are sufficiently small, the gas oscillation may evolve into an acoustic shock wave, in the so-called consonant resonators. At the shock front, the kinetic energy of the gas oscillation is converted into heat by the dissipation process inside the shock layer, and therefore the temperature of the gas in the resonator rises. Since the acoustic shock wave travels in the resonator repeatedly over and over again, the temperature rise becomes noticeable in due course of time even if the shock wave is weak. We numerically study the gas oscillation with shock wave in a resonator of square cross section by solving the initial and boundary value problem of the system of three-dimensional Navier-Stokes equations with a finite difference method. In this case, the heat conduction across the boundary layer on the wall of the resonator causes a spatially heterogeneous distribution of the mean (time-averaged) gas temperature.

  20. Concurrent approach for evolving compact decision rule sets

    NASA Astrophysics Data System (ADS)

    Marmelstein, Robert E.; Hammack, Lonnie P.; Lamont, Gary B.

    1999-02-01

    The induction of decision rules from data is important to many disciplines, including artificial intelligence and pattern recognition. To improve the state of the art in this area, we introduced the genetic rule and classifier construction environment (GRaCCE). It was previously shown that GRaCCE consistently evolved decision rule sets from data that were significantly more compact than those produced by other methods (such as decision tree algorithms). The primary disadvantage of GRaCCE, however, is its relatively poor run-time execution performance. In this paper, a concurrent version of the GRaCCE architecture is introduced, which improves the efficiency of the original algorithm. A prototype of the algorithm is tested on an in-house parallel processor configuration and the results are discussed.

  1. An Evolving Asymmetric Game for Modeling Interdictor-Smuggler Problems

    DTIC Science & Technology

    2016-06-01

    Master's thesis by Richard J. Allain, June 2016; thesis advisor: David L. Alderson; second reader: W... An evolving asymmetric game for modeling interdictor-smuggler problems, using incomplete feedback and allowing two-sided adaptive play. Combining these aspects in an evolving game, we use optimization, simulation, and

  2. Numerical Simulation of Two Phase Flows

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing

    2001-01-01

    Two-phase flows are found in a broad range of situations in nature, biology, and industrial devices, and can involve diverse and complex mechanisms. While the physical models may be specific to certain situations, the mathematical formulation and numerical treatment for solving the governing equations can be general. Hence, we require not only information concerning each individual phase, as needed for a single phase, but also the interactions between them. These interaction terms, however, pose additional numerical challenges because they lie beyond the basis on which modern numerical schemes are constructed, namely the hyperbolicity of the equations. Moreover, due to disparate differences in time scales, fluid compressibility and nonlinearity become acute, further complicating the numerical procedures. In this paper, we show the ideas and the procedure by which the AUSM-family schemes are extended for solving two-phase flow problems. Specifically, both phases are assumed to be in thermodynamic equilibrium, namely, the time scales involved in phase interactions are extremely short in comparison with those of fluid speeds and pressure fluctuations. Details of the numerical formulation and the issues involved are discussed, and the effectiveness of the method is demonstrated for several industrial examples.
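
    As background for that extension, the sketch below implements the basic single-phase AUSM splitting (Liou-Steffen) for the 1-D Euler equations: a convective flux upwinded by a split interface Mach number plus a split interface pressure. It only illustrates the AUSM family; the paper's two-phase, thermodynamic-equilibrium formulation is not reproduced here, and the interface states in the example are arbitrary.

```python
import numpy as np

def ausm_flux(rhoL, uL, pL, rhoR, uR, pR, gamma=1.4):
    """Basic AUSM flux for the 1-D Euler equations (single phase)."""
    aL, aR = np.sqrt(gamma * pL / rhoL), np.sqrt(gamma * pR / rhoR)
    HL = gamma / (gamma - 1) * pL / rhoL + 0.5 * uL**2   # specific total enthalpy
    HR = gamma / (gamma - 1) * pR / rhoR + 0.5 * uR**2
    ML, MR = uL / aL, uR / aR

    def M_plus(M):  return 0.25*(M + 1)**2 if abs(M) <= 1 else 0.5*(M + abs(M))
    def M_minus(M): return -0.25*(M - 1)**2 if abs(M) <= 1 else 0.5*(M - abs(M))
    def p_plus(M):  return 0.25*(M + 1)**2*(2 - M) if abs(M) <= 1 else float(M > 0)
    def p_minus(M): return 0.25*(M - 1)**2*(2 + M) if abs(M) <= 1 else float(M < 0)

    m_half = M_plus(ML) + M_minus(MR)              # split interface Mach number
    p_half = p_plus(ML)*pL + p_minus(MR)*pR        # split interface pressure

    # convective part: upwind the state according to the sign of m_half
    phiL = np.array([rhoL*aL, rhoL*aL*uL, rhoL*aL*HL])
    phiR = np.array([rhoR*aR, rhoR*aR*uR, rhoR*aR*HR])
    conv = m_half * (phiL if m_half >= 0 else phiR)
    return conv + np.array([0.0, p_half, 0.0])

# Sod-like interface states (density, velocity, pressure) on each side
print(ausm_flux(1.0, 0.0, 1.0, 0.125, 0.0, 0.1))
```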

  3. The void spectrum in two-dimensional numerical simulations of gravitational clustering

    NASA Technical Reports Server (NTRS)

    Kauffmann, Guinevere; Melott, Adrian L.

    1992-01-01

    An algorithm for deriving a spectrum of void sizes from two-dimensional high-resolution numerical simulations of gravitational clustering is tested, and it is verified that it produces the correct results where those results can be anticipated. The method is used to study the growth of voids as clustering proceeds. It is found that the most stable indicator of the characteristic void 'size' in the simulations is the mean fractional area covered by voids of diameter d, in a density field smoothed at its correlation length. Very accurate scaling behavior is found in power-law numerical models as they evolve. Eventually, this scaling breaks down as the nonlinearity reaches larger scales. It is shown that this breakdown is a manifestation of the undesirable effect of boundary conditions on simulations, even with the very large dynamic range possible here. A simple criterion is suggested for deciding when simulations with modest large-scale power may systematically underestimate the frequency of larger voids.

  4. A review of numerical models to predict the atmospheric dispersion of radionuclides.

    PubMed

    Leelőssy, Ádám; Lagzi, István; Kovács, Attila; Mészáros, Róbert

    2018-02-01

    The field of atmospheric dispersion modeling has evolved together with nuclear risk assessment and emergency response systems. Atmospheric concentration and deposition of radionuclides originating from an unintended release provide the basis of dose estimations and countermeasure strategies. To predict the atmospheric dispersion and deposition of radionuclides, several numerical models are available, coupled with numerical weather prediction (NWP) systems. This work provides a review of the main concepts and different approaches of atmospheric dispersion modeling. Key processes of the atmospheric transport of radionuclides are emission, advection, turbulent diffusion, dry and wet deposition, radioactive decay and other physical and chemical transformations. A wide range of modeling software is available to simulate these processes with different physical assumptions, numerical approaches and implementations. The most appropriate modeling tool for a specific purpose can be selected based on the spatial scale and the complexity of meteorology, land surface and physical and chemical transformations, also considering the available data and computational resources. For most regulatory and operational applications, offline coupled NWP-dispersion systems are used, either with a local-scale Gaussian, or a regional- to global-scale Eulerian or Lagrangian approach. The dispersion model results show large sensitivity to the accuracy of the coupled NWP model, especially through the description of planetary boundary layer turbulence, deep convection and wet deposition. Improvement of dispersion predictions can be achieved by online coupling of mesoscale meteorology and atmospheric transport models. The 2011 Fukushima event was the first large-scale nuclear accident where real-time prognostic dispersion modeling provided decision support. Dozens of dispersion models with different approaches were used for prognostic and retrospective simulations of the Fukushima release. An unknown
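
    The local-scale Gaussian approach mentioned in the review reduces, in its simplest steady-state form, to the Gaussian plume formula with a mirror source for ground reflection. A small sketch follows; the dispersion coefficients are passed in directly rather than computed from a stability-class parameterization, and the numbers in the example are arbitrary.

```python
import numpy as np

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration with ground reflection.

    Q       : source strength (e.g. Bq/s)
    u       : mean wind speed (m/s); x is the downwind direction
    y, z    : crosswind and vertical receptor coordinates (m)
    H       : effective release height (m)
    sigma_y, sigma_z : dispersion coefficients at the receptor's downwind distance (m)
    """
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2)) +
                np.exp(-(z + H)**2 / (2 * sigma_z**2)))   # mirror-source term
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# ground-level centreline concentration for an illustrative release
print(gaussian_plume(Q=1e9, u=5.0, y=0.0, z=0.0, H=50.0, sigma_y=80.0, sigma_z=40.0))
```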

  5. Evolving Systems: Adaptive Key Component Control and Inheritance of Passivity and Dissipativity

    NASA Technical Reports Server (NTRS)

    Frost, S. A.; Balas, M. J.

    2010-01-01

    We propose a new framework called Evolving Systems to describe the self-assembly, or autonomous assembly, of actively controlled dynamical subsystems into an Evolved System with a higher purpose. Autonomous assembly of large, complex flexible structures in space is a target application for Evolving Systems. A critical requirement for autonomous assembling structures is that they remain stable during and after assembly. The fundamental topic of inheritance of stability, dissipativity, and passivity in Evolving Systems is the primary focus of this research. In this paper, we develop an adaptive key component controller to restore stability in Nonlinear Evolving Systems that would otherwise fail to inherit the stability traits of their components. We provide sufficient conditions for the use of this novel control method and demonstrate its use on an illustrative example.

  6. Selective Attention and Control of Action: Comparative Psychology of an Artificial, Evolved Agent and People

    ERIC Educational Resources Information Center

    Ward, Robert; Ward, Ronnie

    2008-01-01

    This study examined the selective attention abilities of a simple, artificial, evolved agent and considered implications of the agent's performance for theories of selective attention and action. The agent processed two targets in continuous time, catching one and then the other. This task required many cognitive operations, including prioritizing…

  7. Numerical simulation of the control of the three-dimensional transition process in boundary layers

    NASA Technical Reports Server (NTRS)

    Kral, L. D.; Fasel, H. F.

    1990-01-01

    Surface heating techniques to control the three-dimensional laminar-turbulent transition process are numerically investigated for a water boundary layer. The Navier-Stokes and energy equations are solved using a fully implicit finite difference/spectral method. The spatially evolving boundary layer is simulated. Results of both passive and active methods of control are shown for small amplitude two-dimensional and three-dimensional disturbance waves. Control is also applied to the early stages of the secondary instability process using passive or active control techniques.

  8. Evolving neural networks through augmenting topologies.

    PubMed

    Stanley, Kenneth O; Miikkulainen, Risto

    2002-01-01

    An important question in neuroevolution is how to gain an advantage from evolving neural network topologies along with weights. We present a method, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixed-topology method on a challenging benchmark reinforcement learning task. We claim that the increased efficiency is due to (1) employing a principled method of crossover of different topologies, (2) protecting structural innovation using speciation, and (3) incrementally growing from minimal structure. We test this claim through a series of ablation studies that demonstrate that each component is necessary to the system as a whole and to each other. What results is significantly faster learning. NEAT is also an important contribution to GAs because it shows how it is possible for evolution to both optimize and complexify solutions simultaneously, offering the possibility of evolving increasingly complex solutions over generations, and strengthening the analogy with biological evolution.
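
    The speciation step that protects structural innovation in NEAT is usually described through a compatibility distance combining excess genes, disjoint genes, and the average weight difference of matching genes. The sketch below implements that distance and a greedy species assignment for genomes represented simply as maps from innovation number to connection weight; the coefficients and threshold are illustrative defaults, not tuned values from the paper.

```python
def compatibility(genome_a, genome_b, c1=1.0, c2=1.0, c3=0.4):
    """NEAT-style compatibility distance between two genomes.

    A genome is a dict: innovation_number -> connection weight. Excess and
    disjoint genes are the non-matching ones (split at the smaller genome's
    maximum innovation number); matching genes contribute their mean
    absolute weight difference.
    """
    inn_a, inn_b = set(genome_a), set(genome_b)
    matching = inn_a & inn_b
    cutoff = min(max(inn_a, default=0), max(inn_b, default=0))
    mismatched = inn_a ^ inn_b
    excess = sum(1 for i in mismatched if i > cutoff)
    disjoint = len(mismatched) - excess
    W = (sum(abs(genome_a[i] - genome_b[i]) for i in matching) / len(matching)
         if matching else 0.0)
    N = max(len(genome_a), len(genome_b), 1)
    return c1 * excess / N + c2 * disjoint / N + c3 * W

def assign_species(genomes, threshold=3.0):
    """Greedy speciation: each genome joins the first species whose
    representative lies within the compatibility threshold."""
    species = []   # list of lists of genomes; the first member is the representative
    for g in genomes:
        for s in species:
            if compatibility(g, s[0]) < threshold:
                s.append(g)
                break
        else:
            species.append([g])
    return species

genomes = [{1: 0.5, 2: -0.3}, {1: 0.6, 2: -0.2, 5: 1.0}, {7: 2.0, 8: -1.5}]
print(len(assign_species(genomes, threshold=1.5)), "species")
```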

  9. Genetic programming for evolving due-date assignment models in job shop environments.

    PubMed

    Nguyen, Su; Zhang, Mengjie; Johnston, Mark; Tan, Kay Chen

    2014-01-01

    Due-date assignment plays an important role in scheduling systems and strongly influences the delivery performance of job shops. Because of the stochastic and dynamic nature of job shops, the development of general due-date assignment models (DDAMs) is complicated. In this study, two genetic programming (GP) methods are proposed to evolve DDAMs for job shop environments. The experimental results show that the evolved DDAMs can make more accurate estimates than other existing dynamic DDAMs with promising reusability. In addition, the evolved operation-based DDAMs show better performance than the evolved DDAMs employing aggregate information of jobs and machines.

  10. The evolving quality of frictional contact with graphene.

    PubMed

    Li, Suzhi; Li, Qunyang; Carpick, Robert W; Gumbsch, Peter; Liu, Xin Z; Ding, Xiangdong; Sun, Jun; Li, Ju

    2016-11-24

    -slip behaviour. While the quantity of atomic-scale contacts (true contact area) evolves, the quality (in this case, the local pinning state of individual atoms and the overall commensurability) also evolves in frictional sliding on graphene. Moreover, the effects can be tuned by pre-wrinkling. The evolving contact quality is critical for explaining the time-dependent friction of configurationally flexible interfaces.

  11. Coherent diffractive imaging of time-evolving samples with improved temporal resolution

    DOE PAGES

    Ulvestad, A.; Tripathi, A.; Hruszkewycz, S. O.; ...

    2016-05-19

    Bragg coherent x-ray diffractive imaging is a powerful technique for investigating dynamic nanoscale processes in nanoparticles immersed in reactive, realistic environments. Its temporal resolution is limited, however, by the oversampling requirements of three-dimensional phase retrieval. Here, we show that incorporating the entire measurement time series, which is typically a continuous physical process, into phase retrieval allows the oversampling requirement at each time step to be reduced, leading to a subsequent improvement in the temporal resolution by a factor of 2-20 times. The increased time resolution will allow imaging of faster dynamics and of radiation-dose-sensitive samples. Furthermore, this approach, which we call "chrono CDI," may find use in improving the time resolution in other imaging techniques.

  12. Maintaining Quality and Confidence in Open-Source, Evolving Software: Lessons Learned with PFLOTRAN

    NASA Astrophysics Data System (ADS)

    Frederick, J. M.; Hammond, G. E.

    2017-12-01

    Software evolution in an open-source framework poses a major challenge to a geoscientific simulator, but when properly managed, the pay-off can be enormous for both the developers and the community at large. Developers must juggle implementing new scientific process models, adopting increasingly efficient numerical methods and programming paradigms, and changing funding sources (or a total lack of funding), while also ensuring that legacy code remains functional and reported bugs are fixed in a timely manner. With robust software engineering and a plan for long-term maintenance, a simulator can evolve over time, incorporating and leveraging many advances in the computational and domain sciences. In this positive light, what practices in software engineering and code maintenance can be employed within open-source development to maximize the positive aspects of software evolution and community contributions while minimizing its negative side effects? This presentation discusses steps taken in the development of PFLOTRAN (www.pflotran.org), an open source, massively parallel subsurface simulator for multiphase, multicomponent, and multiscale reactive flow and transport processes in porous media. As PFLOTRAN's user base and development team continue to grow, it has become increasingly important to implement strategies which ensure sustainable software development while maintaining software quality and community confidence. In this presentation, we will share our experiences and "lessons learned" within the context of our open-source development framework and community engagement efforts. Topics discussed will include how we've leveraged both standard software engineering principles, such as coding standards, version control, and automated testing, as well as unique advantages of object-oriented design in process model coupling, to ensure software quality and confidence. We will also be prepared to discuss the major challenges faced by most open-source software teams, such

  13. Partitioning the Fitness Components of RNA Populations Evolving In Vitro

    PubMed Central

    Díaz Arenas, Carolina; Lehman, Niles

    2013-01-01

    All individuals in an evolving population compete for resources, and their performance is measured by a fitness metric. The performance of the individuals is relative to their abilities and to the biotic surroundings – the conditions under which they are competing – and involves many components. Molecules evolving in a test tube can also face complex environments and dynamics, and their fitness measurements should reflect the complexity of various contributing factors as well. Here, the fitnesses of a set of ligase ribozymes evolved by the continuous in vitro evolution system were measured. During these evolution cycles there are three different catalytic steps, ligation, reverse transcription, and forward transcription, each with a potential differential influence on the total fitness of each ligase. For six distinct ligase ribozyme genotypes that resulted from continuous evolution experiments, the rates of reaction were measured for each catalytic step by tracking the kinetics of enzymes reacting with their substrates. The reaction products were analyzed for the amount of product formed per time. Each catalytic step of the evolution cycle was found to have a differential incidence in the total fitness of the ligases, and therefore the total fitness of any ligase cannot be inferred from only one catalytic step of the evolution cycle. Generally, the ribozyme-directed ligation step tends to impart the largest effect on overall fitness. Yet it was found that the ligase genotypes have different absolute fitness values, and that they exploit different stages of the overall cycle to gain a net advantage. This is a new example of molecular niche partitioning that may allow for coexistence of more than one species in a population. The dissection of molecular events into multiple components of fitness provides new insights into molecular evolutionary studies in the laboratory, and has the potential to explain heretofore counterintuitive findings. PMID:24391957

  14. Differential Scanning Calorimetry and Evolved Gas Analysis at Mars Ambient Conditions Using the Thermal Evolved Gas Analyzer (TEGA)

    NASA Technical Reports Server (NTRS)

    Musselwhite, D. S.; Boynton, W. V.; Ming, Douglas W.; Quadlander, G.; Kerry, K. E.; Bode, R. C.; Bailey, S. H.; Ward, M. G.; Pathare, A. V.; Lorenz, R. D.

    2000-01-01

    Differential Scanning Calorimetry (DSC) combined with evolved gas analysis (EGA) is a well developed technique for the analysis of a wide variety of sample types, with broad application in material and soil sciences. However, the use of the technique for samples under conditions of pressure and temperature as found on other planets is an area of current development and cutting-edge research. The Thermal Evolved Gas Analyzer (TEGA), which was designed, built and tested at the University of Arizona's Lunar and Planetary Lab (LPL), utilizes DSC/EGA. TEGA, which was sent to Mars on the ill-fated Mars Polar Lander, was to be the first application of DSC/EGA on the surface of Mars as well as the first direct measurement of the volatile-bearing mineralogy in martian soil.

  15. Work Optimization Predicts Accretionary Faulting: An Integration of Physical and Numerical Experiments

    NASA Astrophysics Data System (ADS)

    McBeck, Jessica A.; Cooke, Michele L.; Herbert, Justin W.; Maillot, Bertrand; Souloumiac, Pauline

    2017-09-01

    We employ work optimization to predict the geometry of frontal thrusts at two stages of an evolving physical accretion experiment. Faults that produce the largest gains in efficiency, or change in external work per new fault area, ΔWext/ΔA, are considered most likely to develop. The predicted thrust geometry matches within 1 mm of the observed position and within a few degrees of the observed fault dip, for both the first forethrust and backthrust when the observed forethrust is active. The positions of the second backthrust and forethrust that produce >90% of the maximum ΔWext/ΔA also overlap the observed thrusts. The work optimal fault dips are within a few degrees of the fault dips that maximize the average Coulomb stress. Slip gradients along the detachment produce local elevated shear stresses and high strain energy density regions that promote thrust initiation near the detachment. The mechanical efficiency (Wext) of the system decreases at each of the two simulated stages of faulting and resembles the evolution of experimental force. The higher ΔWext/ΔA due to the development of the first pair relative to the second pair indicates that the development of new thrusts may lead to diminishing efficiency gains as the wedge evolves. The numerical estimates of work consumed by fault propagation overlap the range calculated from experimental force data and crustal faults. The integration of numerical and physical experiments provides a powerful approach that demonstrates the utility of work optimization to predict the development of faults.
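
    The selection rule itself is simple to state: among candidate fault geometries, choose the one maximizing the efficiency gain ΔWext/ΔA. The sketch below shows only that search loop; the function returning external work for a candidate fault is a placeholder toy standing in for the boundary-element calculations used in the study, and the sign convention (gain taken as the drop in external work per unit of new fault area) is one plausible reading of the abstract.

```python
import numpy as np

def external_work_with_fault(position, dip):
    """Placeholder for the mechanical model. In the study this would be a
    boundary-element calculation of the external work for a wedge containing
    a candidate thrust at `position` with dip `dip` (degrees); here it is an
    arbitrary smooth toy function so that the search loop below runs."""
    return 10.0 + (position - 3.0)**2 + 0.02 * (dip - 30.0)**2

def new_fault_area(dip, layer_thickness=1.0, width=1.0):
    """Fault-plane area added by a thrust cutting through the whole layer."""
    return layer_thickness / np.sin(np.radians(dip)) * width

W_intact = 12.0   # toy external work of the unfaulted wedge (same arbitrary units)

best = None
for position in np.linspace(0.0, 6.0, 61):      # candidate fault positions
    for dip in np.linspace(15.0, 60.0, 46):     # candidate fault dips (degrees)
        # efficiency gain: drop in external work per unit of new fault area
        gain = (W_intact - external_work_with_fault(position, dip)) / new_fault_area(dip)
        if best is None or gain > best[0]:
            best = (gain, position, dip)

print("max dWext/dA = %.3f at position %.1f, dip %.1f deg" % best)
```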

  16. Final five-year clinical outcomes in the EVOLVE trial: a randomised evaluation of a novel bioabsorbable polymer-coated, everolimus-eluting stent.

    PubMed

    Meredith, Ian T; Verheye, Stefan; Dubois, Christophe; Dens, Joseph; Farah, Bruno; Carrié, Didier; Walsh, Simon; Oldroyd, Keith; Varenne, Olivier; El-Jack, Seif; Moreno, Raul; Christen, Thomas; Allocco, Dominic J

    2018-04-20

    Long-term data on bioabsorbable polymer-coated everolimus-eluting stents (BP-EES) are limited. The EVOLVE trial compared the safety and efficacy of two dose formulations of the SYNERGY BP-EES with the permanent polymer-coated PROMUS Element EES (PE). The EVOLVE study was a prospective, multicentre, non-inferiority trial that randomised 291 patients with de novo coronary lesions (length: ≤28 mm; diameter: ≥2.25 to ≤3.5 mm) to receive PE (n=98), SYNERGY (n=94), or SYNERGY half-dose (n=99). At five years, there were no significant differences in the rates of TLF or individual components between groups. TLR rates trended lower in both SYNERGY arms than in the PE arm (TLR: 1.1% SYNERGY and 1.0% SYNERGY half-dose vs. 6.1% PE; p=0.07 and p=0.06, respectively). TVR was numerically lower in the SYNERGY arms compared to the PE arm (TVR: 3.3% SYNERGY and 4.2% SYNERGY half-dose vs. 10.2% PE; p=0.06 and p=0.11, respectively). No incidence of stent thrombosis was reported in any arm up to five years. The EVOLVE trial represents the longest-term follow-up of the SYNERGY stent available to date, demonstrating its continued safety and efficacy for the treatment of selected de novo atherosclerotic lesions up to five years.

  17. Exploring Evolving Media Discourse Through Event Cueing.

    PubMed

    Lu, Yafeng; Steptoe, Michael; Burke, Sarah; Wang, Hong; Tsai, Jiun-Yi; Davulcu, Hasan; Montgomery, Douglas; Corman, Steven R; Maciejewski, Ross

    2016-01-01

    Online news, microblogs and other media documents all contain valuable insight regarding events and responses to events. Underlying these documents is the concept of framing, a process in which communicators act (consciously or unconsciously) to construct a point of view that encourages facts to be interpreted by others in a particular manner. As media discourse evolves, how topics and documents are framed can undergo change, shifting the discussion to different viewpoints or rhetoric. What causes these shifts can be difficult to determine directly; however, by linking secondary datasets and enabling visual exploration, we can enhance the hypothesis generation process. In this paper, we present a visual analytics framework for event cueing using media data. As discourse develops over time, our framework applies a time series intervention model which tests to see if the level of framing is different before or after a given date. If the model indicates that the times before and after are statistically significantly different, this cues an analyst to explore related datasets to help enhance their understanding of what (if any) events may have triggered these changes in discourse. Our framework consists of entity extraction and sentiment analysis as lenses for data exploration and uses two different models for intervention analysis. To demonstrate the usage of our framework, we present a case study on exploring potential relationships between climate change framing and conflicts in Africa.
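
    A minimal stand-in for the intervention test is a level-shift regression of the framing series on a post-event dummy: if the dummy coefficient is significant, the candidate date is cued for further exploration. The sketch below uses statsmodels OLS on synthetic data; the framework's actual intervention models may be richer (for example, ARIMA-based), so this is illustrative only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# synthetic daily "framing level" series with a level shift at day 120
n, event_day = 240, 120
framing = 0.4 + 0.002 * np.arange(n) + rng.normal(0, 0.05, n)
framing[event_day:] += 0.15              # the shift the intervention test should find

post = (np.arange(n) >= event_day).astype(float)     # intervention dummy
X = sm.add_constant(np.column_stack([np.arange(n), post]))   # const, trend, shift
model = sm.OLS(framing, X).fit()

print("estimated level shift:", model.params[2])
print("p-value:", model.pvalues[2])      # significant -> cue the analyst to this date
```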

  18. Evolving gene regulation networks into cellular networks guiding adaptive behavior: an outline how single cells could have evolved into a centralized neurosensory system

    PubMed Central

    Fritzsch, Bernd; Jahan, Israt; Pan, Ning; Elliott, Karen L.

    2014-01-01

    Understanding the evolution of the neurosensory system of man, able to reflect on its own origin, is one of the major goals of comparative neurobiology. Details of the origin of neurosensory cells, their aggregation into central nervous systems and associated sensory organs, their localized patterning into remarkably different cell types aggregated into variably sized parts of the central nervous system begin to emerge. Insights at the cellular and molecular level begin to shed some light on the evolution of neurosensory cells, partially covered in this review. Molecular evidence suggests that high mobility group (HMG) proteins of pre-metazoans evolved into the definitive Sox [SRY (sex determining region Y)-box] genes used for neurosensory precursor specification in metazoans. Likewise, pre-metazoan basic helix-loop-helix (bHLH) genes evolved in metazoans into the group A bHLH genes dedicated to neurosensory differentiation in bilaterians. Available evidence suggests that the Sox and bHLH genes evolved a cross-regulatory network able to synchronize expansion of precursor populations and their subsequent differentiation into novel parts of the brain or sensory organs. Molecular evidence suggests metazoans evolved patterning gene networks early and not dedicated to neuronal development. Only later in evolution were these patterning gene networks tied into the increasing complexity of diffusible factors, many of which were already present in pre-metazoans, to drive local patterning events. It appears that the evolving molecular basis of neurosensory cell development may have led, in interaction with differentially expressed patterning genes, to local network modifications guiding unique specializations of neurosensory cells into sensory organs and various areas of the central nervous system. PMID:25416504

  19. Evolving gene regulatory networks into cellular networks guiding adaptive behavior: an outline how single cells could have evolved into a centralized neurosensory system.

    PubMed

    Fritzsch, Bernd; Jahan, Israt; Pan, Ning; Elliott, Karen L

    2015-01-01

    Understanding the evolution of the neurosensory system of man, able to reflect on its own origin, is one of the major goals of comparative neurobiology. Details of the origin of neurosensory cells, their aggregation into central nervous systems and associated sensory organs and their localized patterning leading to remarkably different cell types aggregated into variably sized parts of the central nervous system have begun to emerge. Insights at the cellular and molecular level have begun to shed some light on the evolution of neurosensory cells, partially covered in this review. Molecular evidence suggests that high mobility group (HMG) proteins of pre-metazoans evolved into the definitive Sox [SRY (sex determining region Y)-box] genes used for neurosensory precursor specification in metazoans. Likewise, pre-metazoan basic helix-loop-helix (bHLH) genes evolved in metazoans into the group A bHLH genes dedicated to neurosensory differentiation in bilaterians. Available evidence suggests that the Sox and bHLH genes evolved a cross-regulatory network able to synchronize expansion of precursor populations and their subsequent differentiation into novel parts of the brain or sensory organs. Molecular evidence suggests metazoans evolved patterning gene networks early, which were not dedicated to neuronal development. Only later in evolution were these patterning gene networks tied into the increasing complexity of diffusible factors, many of which were already present in pre-metazoans, to drive local patterning events. It appears that the evolving molecular basis of neurosensory cell development may have led, in interaction with differentially expressed patterning genes, to local network modifications guiding unique specializations of neurosensory cells into sensory organs and various areas of the central nervous system.

  20. Time: The Biggest Pattern in Natural History Research

    NASA Astrophysics Data System (ADS)

    Gontier, Nathalie

    2016-10-01

    We distinguish between four cosmological transitions in the history of Western intellectual thought, and focus on how these cosmologies differentially define matter, space and time. We demonstrate that how time is conceptualized significantly impacts a cosmology's notion on causality, and hone in on how time is conceptualized differentially in modern physics and evolutionary biology. The former conflates time with space into a single space-time continuum and focuses instead on the movement of matter, while the evolutionary sciences have a tradition to understand time as a given when they cartography how organisms change across generations over or in time, thereby proving the phenomenon of evolution. The gap becomes more fundamental when we take into account that phenomena studied by chrono-biologists demonstrate that numerous organisms, including humans, have evolved a "sense" of time. And micro-evolutionary/genetic, meso-evolutionary/developmental and macro-evolutionary phenomena including speciation and extinction not only occur by different evolutionary modes and at different rates, they are also timely phenomena that follow different periodicities. This article focusses on delineating the problem by finding its historical roots. We conclude that though time might be an obsolete concept for the physical sciences, it is crucial for the evolutionary sciences where evolution is defined as the change that biological individuals undergo in/over or through time.

  1. Evolving MEMS Resonator Designs for Fabrication

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.; Kraus, William F.; Lohn, Jason D.

    2008-01-01

    Because of their small size and high reliability, microelectromechanical (MEMS) devices have the potential to revolutionize many areas of engineering. As with conventionally sized engineering design, there is likely to be a demand for the automated design of MEMS devices. This paper describes our current status as we progress toward our ultimate goal of using an evolutionary algorithm and a generative representation to produce designs of a MEMS device and successfully demonstrate its transfer to an actual chip. To produce designs that are likely to transfer to reality, we present two ways to modify the evaluation of designs. The first is to add location noise, differences between the actual dimensions of the design and the design blueprint, which is a technique we have used in our work on evolving antennas and robots. The second method is to add prestress to model the warping that occurs during the extreme heat of fabrication. In the future we expect to fabricate and test some MEMS resonators that are evolved in this way.

  2. Stability and the Evolvability of Function in a Model Protein

    PubMed Central

    Bloom, Jesse D.; Wilke, Claus O.; Arnold, Frances H.; Adami, Christoph

    2004-01-01

    Functional proteins must fold with some minimal stability to a structure that can perform a biochemical task. Here we use a simple model to investigate the relationship between the stability requirement and the capacity of a protein to evolve the function of binding to a ligand. Although our model contains no built-in tradeoff between stability and function, proteins evolved function more efficiently when the stability requirement was relaxed. Proteins with both high stability and high function evolved more efficiently when the stability requirement was gradually increased than when there was constant selection for high stability. These results show that in our model, the evolution of function is enhanced by allowing proteins to explore sequences corresponding to marginally stable structures, and that it is easier to improve stability while maintaining high function than to improve function while maintaining high stability. Our model also demonstrates that even in the absence of a fundamental biophysical tradeoff between stability and function, the speed with which function can evolve is limited by the stability requirement imposed on the protein. PMID:15111394

  3. A Fast-Evolving, Luminous Transient Discovered by K2/Kepler

    NASA Astrophysics Data System (ADS)

    Rest, Armin; Garnavich, Peter; Khatami, David; Kasen, Daniel; Tucker, Brad; Shaya, Edward; Olling, Robert; Mushotzky, Richard; Zenteno, Alfredo; Margheim, Steven; Strampelli, Giovanni Maria; James, David; Smith, Chris; Forster, Francisco; Villar, Ashley

    2018-01-01

    For decades optical time-domain searches have been tuned to find ordinary supernovae, which rise and fall in brightness over a period of weeks. Recently, supernova searches have improved their cadences and a handful of fast-evolving luminous transients (FELTs) have been identified. FELTs have peak luminosities comparable to type Ia supernovae, but rise to maximum in <10 days and fade from view in <30 days. Here we present the most extreme example of this class thus far, KSN2015K, with a rise time of only 2.2 days and a time above half-maximum of only 6.8 days. Possible energy sources for KSN2015K are the decay of radioactive elements, a central engine powered by accretion/magnetic fields, or hydrodynamic shock. We show that KSN2015K's luminosity makes it unlikely to be powered by radioactive isotopes, and we find that the shock breakout into a dense wind most likely energized the transient.

  4. A global time-dependent model of thunderstorm electricity. I - Mathematical properties of the physical and numerical models

    NASA Technical Reports Server (NTRS)

    Browning, G. L.; Tzur, I.; Roble, R. G.

    1987-01-01

    A time-dependent model is introduced that can be used to simulate the interaction of a thunderstorm with its global electrical environment. The model solves the continuity equation of the Maxwell current, which is assumed to be composed of the conduction, displacement, and source currents. Boundary conditions which can be used in conjunction with the continuity equation to form a well-posed initial-boundary value problem are determined. Properties of various components of solutions of the initial-boundary value problem are analytically determined. The results indicate that the problem has two time scales, one determined by the background electrical conductivity and the other by the time variation of the source function. A numerical method for obtaining quantitative results is introduced, and its properties are studied. Some simulation results on the evolution of the displacement and conduction currents during the electrification of a storm are presented.

  5. Evolving paradigms in multifocal breast cancer.

    PubMed

    Salgado, Roberto; Aftimos, Philippe; Sotiriou, Christos; Desmedt, Christine

    2015-04-01

    The 7th edition of the TNM defines multifocal breast cancer as multiple simultaneous ipsilateral and synchronous breast cancer lesions, provided they are macroscopically distinct and measurable using current traditional pathological and clinical tools. According to the College of American Pathologists (CAP), the characterization of only the largest lesion is considered sufficient, unless the grade and/or histology are different between the lesions. Here, we review three potentially clinically relevant aspects of multifocal breast cancers: first, the importance of a different intrinsic breast cancer subtype of the various lesions; second, the emerging awareness of inter-lesion heterogeneity; and last but not least, the potential introduction of bias in clinical trials due to the unrecognized biological diversity of these cancers. Although the current strategy to assess the lesion with the largest diameter has clearly its advantages in terms of costs and feasibility, this recommendation may not be sustainable in time and might need to be adapted to be compliant with new evolving paradigms in breast cancer. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Evolving Systems: An Outcome of Fondest Hopes and Wildest Dreams

    NASA Technical Reports Server (NTRS)

    Frost, Susan A.; Balas, Mark J.

    2012-01-01

    New theory is presented for evolving systems, which are autonomously controlled subsystems that self-assemble into a new evolved system with a higher purpose. Evolving systems of aerospace structures often require additional control when assembling to maintain stability during the entire evolution process. This is the concept of Adaptive Key Component Control that operates through one specific component to maintain stability during the evolution. In addition, this control must often overcome persistent disturbances that occur while the evolution is in progress. Theoretical results will be presented for Adaptive Key Component control for persistent disturbance rejection. An illustrative example will demonstrate the Adaptive Key Component controller on a system composed of rigid body and flexible body modes.

  7. Duration and numerical estimation in right brain-damaged patients with and without neglect: Lack of support for a mental time line.

    PubMed

    Masson, Nicolas; Pesenti, Mauro; Dormal, Valérie

    2016-08-01

    Previous studies have shown that left neglect patients are impaired when they have to orient their attention leftward relative to a standard in numerical comparison tasks. This finding has been accounted for by the idea that numerical magnitudes are represented along a spatial continuum oriented from left to right with small magnitudes on the left and large magnitudes on the right. Similarly, it has been proposed that duration could be represented along a mental time line that shares the properties of the number continuum. By comparing directly duration and numerosity processing, this study investigates whether or not the performance of neglect patients supports the hypothesis of a mental time line. Twenty-two right brain-damaged patients (11 with and 11 without left neglect), as well as 11 age-matched healthy controls, had to judge whether a single dot presented visually lasted shorter or longer than 500 ms and whether a sequence of flashed dots was smaller or larger than 5. Digit spans were also assessed to measure verbal working memory capacities. In duration comparison, no spatial-duration bias was found in neglect patients. Moreover, a significant correlation between verbal working memory and duration performance was observed in right brain-damaged patients, irrespective of the presence or absence of neglect. In numerical comparison, only neglect patients showed an enhanced distance effect for numerical magnitude smaller than the standard. These results do not support the hypothesis of the existence of a mental continuum oriented from left to right for duration. We discuss an alternative account to explain the duration impairment observed in right brain-damaged patients. © 2015 The British Psychological Society.

  8. Evolving Organizational Structures in Special Education.

    ERIC Educational Resources Information Center

    McCarthy, Eileen F., Ed.; Sage, Daniel D., Ed.

    The monograph addresses evolving organizational structures in special education from the perspectives of theory and practice. The initial paper, "Issues in Organizational Structure" (D. Sage), focuses on how the multiple units and operations of the special education system should be related and how the management authority and responsibility for…

  9. The Evolving Office of the Registrar

    ERIC Educational Resources Information Center

    Pace, Harold L.

    2011-01-01

    A healthy registrar's office will continue to evolve as it considers student, faculty, and institutional needs; staff talents and expectations; technological opportunities; economic realities; space issues; work environments; and where the strategic plan is taking the institution in support of the mission. Several recognized leaders in the field…

  10. Scalability study of parallel spatial direct numerical simulation code on IBM SP1 parallel supercomputer

    NASA Technical Reports Server (NTRS)

    Hanebutte, Ulf R.; Joslin, Ronald D.; Zubair, Mohammad

    1994-01-01

    The implementation and the performance of a parallel spatial direct numerical simulation (PSDNS) code are reported for the IBM SP1 supercomputer. The spatially evolving disturbances that are associated with laminar-to-turbulent transition in three-dimensional boundary-layer flows are computed with the PSDNS code. By remapping the distributed data structure during the course of the calculation, optimized serial library routines can be utilized that substantially increase the computational performance. Although the remapping incurs a high communication penalty, the parallel efficiency of the code remains above 40% for all performed calculations. By using appropriate compile options and optimized library routines, the serial code achieves 52-56 Mflops on a single node of the SP1 (45% of theoretical peak performance). The actual performance of the PSDNS code on the SP1 is evaluated with a 'real world' simulation that consists of 1.7 million grid points. One time step of this simulation is calculated on eight nodes of the SP1 in the same time as required by a Cray Y/MP for the same simulation. The scalability information provides estimated computational costs that match the actual costs relative to changes in the number of grid points.

  11. Differential Scanning Calorimetry and Evolved Gas Analysis at Mars Ambient Conditions Using the Thermal Evolved Gas Analyser (TEGA)

    NASA Technical Reports Server (NTRS)

    Musselwhite, D. S.; Boynton, W. V.; Ming, D. W.; Quadlander, G.; Kerry, K. E.; Bode, R. C.; Bailey, S. H.; Ward, M. G.; Pathare, A. V.; Lorenz, R. D.

    2000-01-01

    Differential Scanning Calorimetry (DSC) combined with evolved gas analysis (EGA) is a well-developed technique for the analysis of a wide variety of sample types with broad application in material and soil sciences. However, the use of the technique for samples under conditions of pressure and temperature as found on other planets is an area of current development and cutting-edge research. The Thermal Evolved Gas Analyzer (TEGA), which was designed, built and tested at the University of Arizona's Lunar and Planetary Lab (LPL), utilizes DSC/EGA. TEGA, which was sent to Mars on the ill-fated Mars Polar Lander, was to be the first application of DSC/EGA on the surface of Mars as well as the first direct measurement of the volatile-bearing mineralogy in martian soil. Additional information is available in the original extended abstract.

  12. Numerical solution of the time fractional reaction-diffusion equation with a moving boundary

    NASA Astrophysics Data System (ADS)

    Zheng, Minling; Liu, Fawang; Liu, Qingxia; Burrage, Kevin; Simpson, Matthew J.

    2017-06-01

    A fractional reaction-diffusion model with a moving boundary is presented in this paper. An efficient numerical method is constructed to solve this moving boundary problem. Our method makes use of a finite difference approximation for the temporal discretization, and spectral approximation for the spatial discretization. The stability and convergence of the method are studied, and the errors of both the semi-discrete and fully-discrete schemes are derived. Numerical examples, motivated by problems from developmental biology, show a good agreement with the theoretical analysis and illustrate the efficiency of our method.
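
    The abstract describes a finite difference discretization in time combined with a spectral discretization in space on a moving domain, without giving details. As a rough, self-contained illustration of one common finite-difference-in-time approach for such problems (not necessarily the paper's), the sketch below applies the standard L1 approximation of a Caputo time-fractional derivative to a plain fractional diffusion equation on a fixed interval; the fractional order, diffusivity, grid sizes and boundary conditions are assumed values.

      # Sketch: L1 time-stepping for a time-fractional diffusion equation
      #   D_t^alpha u = kappa * u_xx on (0, 1), u(0, t) = u(1, t) = 0,
      # using the standard L1 approximation of the Caputo derivative in time
      # and second-order central differences in space (implicit in space).
      # Illustration only: not the moving-boundary spectral method of the
      # paper; alpha, kappa and the grid sizes are assumed.
      import numpy as np
      from math import gamma

      alpha, kappa = 0.7, 1.0            # fractional order and diffusivity (assumed)
      nx, nt, T = 101, 200, 0.5          # grid sizes and final time (assumed)
      dx, dt = 1.0 / (nx - 1), T / nt
      x = np.linspace(0.0, 1.0, nx)
      history = [np.sin(np.pi * x)]      # initial condition; past states kept for memory term

      # L1 weights b_j = (j+1)^(1-alpha) - j^(1-alpha)
      b = np.arange(1, nt + 1) ** (1 - alpha) - np.arange(0, nt) ** (1 - alpha)

      # implicit matrix (I - mu * Laplacian) on the interior nodes
      mu = gamma(2 - alpha) * dt ** alpha * kappa / dx ** 2
      n_in = nx - 2
      A = ((1 + 2 * mu) * np.eye(n_in)
           - mu * np.eye(n_in, k=1) - mu * np.eye(n_in, k=-1))

      for n in range(1, nt + 1):
          # memory term: sum_{j=1}^{n-1} b_j * (u^{n-j} - u^{n-j-1})
          mem = np.zeros(nx)
          for j in range(1, n):
              mem += b[j] * (history[n - j] - history[n - j - 1])
          rhs = history[n - 1][1:-1] - mem[1:-1]
          u_new = np.zeros(nx)               # Dirichlet boundaries stay at zero
          u_new[1:-1] = np.linalg.solve(A, rhs)
          history.append(u_new)

      print("max |u| at t = %.2f: %.4f" % (T, np.abs(history[-1]).max()))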

  13. An algorithm for the numerical evaluation of the associated Legendre functions that runs in time independent of degree and order

    NASA Astrophysics Data System (ADS)

    Bremer, James

    2018-05-01

    We describe a method for the numerical evaluation of normalized versions of the associated Legendre functions P_ν^{-μ} and Q_ν^{-μ} of degrees 0 ≤ ν ≤ 1,000,000 and orders -ν ≤ μ ≤ ν for arguments in the interval (-1, 1). Our algorithm, which runs in time independent of ν and μ, is based on the fact that while the associated Legendre functions themselves are extremely expensive to represent via polynomial expansions, the logarithms of certain solutions of the differential equation defining them are not. We exploit this by numerically precomputing the logarithms of carefully chosen solutions of the associated Legendre differential equation and representing them via piecewise trivariate Chebyshev expansions. These precomputed expansions, which allow for the rapid evaluation of the associated Legendre functions over a large swath of the parameter domain mentioned above, are supplemented with asymptotic and series expansions in order to cover it entirely. The results of numerical experiments demonstrating the efficacy of our approach are presented, and our code for evaluating the associated Legendre functions is publicly available.

  14. Cooperative behavior and phase transitions in co-evolving stag hunt game

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Li, Y. S.; Xu, C.; Hui, P. M.

    2016-02-01

    Cooperative behavior and different phases in a co-evolving network dynamics based on the stag hunt game are studied. The dynamical processes are parameterized by a payoff r that tends to promote non-cooperative behavior and a probability q for a rewiring attempt that could isolate the non-cooperators. The interplay between the parameters leads to different phases. Detailed simulations and a mean field theory are employed to reveal the properties of different phases. For small r, the cooperators are the majority and form a connected cluster while the non-cooperators increase with q but remain isolated over the whole range of q, and it is a static phase. For sufficiently large r, cooperators disappear in an intermediate range q_L ≤ q ≤ q_U and a dynamical all-non-cooperators phase results. For q > q_U, a static phase results again. A mean field theory based on how the link densities change in time under the co-evolving dynamics is constructed. The theory gives a phase diagram in the q-r parameter space that is qualitatively in agreement with simulation results. The sources of discrepancies between theory and simulations are discussed.
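
    As a concrete picture of the kind of co-evolving dynamics described (a payoff r that favours defection and a rewiring probability q that tends to isolate non-cooperators), the sketch below runs a bare-bones agent-based version on a random graph. The payoff values, update rule and parameter settings are simplified stand-ins chosen for illustration, not the exact rules of the paper.

      # Sketch: co-evolving stag hunt on a network with probabilistic rewiring.
      # Simplified payoffs: C-C pays 1 to each, a defector gets r regardless of
      # its partner, a cooperator meeting a defector gets 0.  These rules and
      # all parameter values are illustrative assumptions.
      import random
      import networkx as nx

      def payoff(s_self, s_other, r):
          if s_self == 'C':
              return 1.0 if s_other == 'C' else 0.0
          return r                            # defection guarantees r

      def step(G, strat, r, q):
          # 1) accumulate payoffs over all current links
          score = {v: sum(payoff(strat[v], strat[u], r) for u in G[v]) for v in G}
          # 2) strategy update: copy a random neighbour if it scored higher
          for v in G:
              nbrs = list(G[v])
              if nbrs:
                  u = random.choice(nbrs)
                  if score[u] > score[v]:
                      strat[v] = strat[u]
          # 3) rewiring: with probability q a cooperator cuts a C-D link
          for v, u in list(G.edges()):
              if strat[v] == 'C' and strat[u] == 'D':
                  coop, defe = v, u
              elif strat[u] == 'C' and strat[v] == 'D':
                  coop, defe = u, v
              else:
                  continue
              if random.random() < q:
                  G.remove_edge(coop, defe)
                  candidates = [w for w in G if w != coop and w not in G[coop]]
                  if candidates:
                      G.add_edge(coop, random.choice(candidates))

      random.seed(1)
      N, k, r, q = 200, 6, 0.4, 0.3           # assumed population and parameters
      G = nx.erdos_renyi_graph(N, k / (N - 1), seed=1)
      strat = {v: random.choice('CD') for v in G}
      for _ in range(300):
          step(G, strat, r, q)
      print("fraction of cooperators:", sum(s == 'C' for s in strat.values()) / N)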

  15. TIME after TIMED - A perspective on Thermosphere-Ionosphere Mesosphere science and future observational needs after the TIMED mission epoch

    NASA Astrophysics Data System (ADS)

    Mlynczak, M. G.; Russell, J. M., III; Hunt, L. A.; Christensen, A. B.; Paxton, L. J.; Woods, T. N.; Niciejewski, R.; Yee, J. H.

    2016-12-01

    The past 40 years have been a true golden age for space-based observations of the Earth's middle atmosphere (stratosphere to thermosphere). Numerous instruments and missions have been developed and flown to explore the thermal structure, chemical composition, and energy budget of the middle atmosphere. A primary motivation for these observations was the need to understand the photochemistry of stratospheric ozone and its potential depletion by anthropogenic means. As technology evolved, observations were extended higher and higher, into regions previously unobserved from space by optical remote sensing techniques. In the 1990s, NASA initiated the Thermosphere-Ionosphere-Mesosphere Energetics and Dynamics (TIMED) mission to explore one of the last frontiers of the atmosphere - the region between 60 and 180 km - then referred to as "the ignorosphere." Today, we have 15 years of detailed observations from this remarkable satellite and its 4 instruments, and are recognizing rapid climate change that is occurring above 60 km. The upcoming ICON and GOLD missions will afford new opportunities for scientific discovery by combining data from all three missions. However, it has become clear that continued observations beyond TIMED are required to understand the upper atmosphere as a system that is fully coupled from the edge of Space to the surface of the Earth. In this talk we will review the current status of knowledge of the basic state properties of the thermosphere-ionosphere-mesosphere (TIME) system and will discuss future observations that are required to obtain a comprehensive understanding of the entire TIME system, especially the effects of long term change that are already underway.

  16. Quantum Bose-Hubbard model with an evolving graph as a toy model for emergent spacetime

    NASA Astrophysics Data System (ADS)

    Hamma, Alioscia; Markopoulou, Fotini; Lloyd, Seth; Caravelli, Francesco; Severini, Simone; Markström, Klas

    2010-05-01

    We present a toy model for interacting matter and geometry that explores quantum dynamics in a spin system as a precursor to a quantum theory of gravity. The model has no a priori geometric properties; instead, locality is inferred from the more fundamental notion of interaction between the matter degrees of freedom. The interaction terms are themselves quantum degrees of freedom so that the structure of interactions and hence the resulting local and causal structures are dynamical. The system is a Hubbard model where the graph of the interactions is a set of quantum evolving variables. We show entanglement between spatial and matter degrees of freedom. We study numerically the quantum system and analyze its entanglement dynamics. We analyze the asymptotic behavior of the classical model. Finally, we discuss analogues of trapped surfaces and gravitational attraction in this simple model.

  17. Probabilistic numerics and uncertainty in computations

    PubMed Central

    Hennig, Philipp; Osborne, Michael A.; Girolami, Mark

    2015-01-01

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321

  18. Probabilistic numerics and uncertainty in computations.

    PubMed

    Hennig, Philipp; Osborne, Michael A; Girolami, Mark

    2015-07-08

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.
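
    A classic concrete instance of a probabilistic numerical method is Bayesian quadrature: a Gaussian-process prior is placed on the integrand, and conditioning on a few evaluations yields both an estimate of the integral and an uncertainty for it. The sketch below does this for a toy integrand on [0, 1] with a squared-exponential kernel; the kernel length-scale, jitter and test function are illustrative assumptions, not taken from the paper.

      # Sketch: Bayesian quadrature for integral_0^1 f(x) dx.  A GP prior with
      # a squared-exponential kernel is conditioned on a few evaluations of f,
      # giving a posterior mean and standard deviation for the integral.
      import numpy as np
      from scipy.special import erf

      def bayesian_quadrature(f, x, ell=0.2, jitter=1e-10):
          y = f(x)
          # kernel matrix between the evaluation points
          K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell ** 2)
          K += jitter * np.eye(len(x))
          # z_i = integral_0^1 k(s, x_i) ds (closed form for the SE kernel)
          z = ell * np.sqrt(np.pi / 2) * (erf((1 - x) / (np.sqrt(2) * ell))
                                          + erf(x / (np.sqrt(2) * ell)))
          # double integral of the kernel over [0, 1]^2
          zz = (2 * ell * np.sqrt(np.pi / 2) * erf(1 / (np.sqrt(2) * ell))
                + 2 * ell ** 2 * (np.exp(-0.5 / ell ** 2) - 1))
          mean = z @ np.linalg.solve(K, y)
          var = zz - z @ np.linalg.solve(K, z)
          return mean, np.sqrt(max(var, 0.0))

      f = lambda x: np.sin(3 * x) + 0.5            # toy integrand (assumed)
      x = np.linspace(0.05, 0.95, 8)               # a handful of evaluations
      mean, std = bayesian_quadrature(f, x)
      exact = (1 - np.cos(3)) / 3 + 0.5
      print("BQ estimate: %.5f +/- %.5f   exact: %.5f" % (mean, std, exact))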

  19. The solar exposure time required for vitamin D3 synthesis in the human body estimated by numerical simulation and observation in Japan.

    PubMed

    Miyauchi, Masaatsu; Hirai, Chizuko; Nakajima, Hideaki

    2013-01-01

    Although the importance of solar radiation for vitamin D3 synthesis in the human body is well known, the solar exposure time required to prevent vitamin D deficiency has not been determined in Japan. This study attempted to identify the time of solar exposure required for vitamin D3 synthesis in the body by season, time of day, and geographic location (Sapporo, Tsukuba, and Naha) using both numerical simulations and observations. According to the numerical simulation for Tsukuba at noon in July under a cloudless sky, 3.5 min of solar exposure are required to produce 5.5 μg vitamin D3 per 600 cm2 skin corresponding to the area of a face and the back of a pair of hands without ingestion from foods. In contrast, it took 76.4 min to produce the same quantity of vitamin D3 at Sapporo in December, at noon under a cloudless sky. The necessary exposure time varied considerably with the time of the day. For Tsukuba at noon in December, 22.4 min were required, but 106.0 min were required at 09:00 and 271.3 min were required at 15:00 for the same meteorological conditions. Naha receives high levels of ultraviolet radiation allowing vitamin D3 synthesis almost throughout the year.

  20. Inheritance of evolved resistance to a novel herbicide (pyroxasulfone).

    PubMed

    Busi, Roberto; Gaines, Todd A; Vila-Aiub, Martin M; Powles, Stephen B

    2014-03-01

    Agricultural weeds have rapidly adapted to intensive herbicide selection and resistance to herbicides has evolved within ecological timescales. Yet, the genetic basis of broad-spectrum generalist herbicide resistance is largely unknown. This study aims to determine the genetic control of non-target-site herbicide resistance trait(s) that rapidly evolved under recurrent selection of the novel lipid biosynthesis inhibitor pyroxasulfone in Lolium rigidum. The phenotypic segregation of pyroxasulfone resistance in parental, F1 and back-cross (BC) families was assessed in plants exposed to a gradient of pyroxasulfone doses. The inheritance of resistance to chemically dissimilar herbicides (cross-resistance) was also evaluated. Evolved resistance to the novel selective agent (pyroxasulfone) is explained by Mendelian segregation of a single semi-dominant allele whose frequency in the progeny was incrementally increased by herbicide selection. In BC families, cross-resistance is conferred by an incompletely dominant single major locus. This study confirms that herbicide resistance can rapidly evolve to any novel selective herbicide agent through continuous and repeated herbicide use. The results imply that the combination of herbicide options (rotation, mixtures or combinations) to exploit incomplete dominance can provide acceptable control of broad-spectrum generalist resistance-endowing monogenic traits. Herbicide diversity within a set of integrated management tactics can be one important component to reduce the herbicide selection intensity. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  1. Bridging Numerical and Analytical Models of Transient Travel Time Distributions: Challenges and Opportunities

    NASA Astrophysics Data System (ADS)

    Danesh Yazdi, M.; Klaus, J.; Condon, L. E.; Maxwell, R. M.

    2017-12-01

    precipitation shifts the scale of TTD towards younger (older) travel times, while the shape of the TTD remains untouched. This work constitutes a first step in linking a numerical transport model and analytical solutions of TTD to study their assumptions and limitations, providing physical inferences for empirical parameters.

  2. LES of Temporally Evolving Mixing Layers by an Eighth-Order Filter Scheme

    NASA Technical Reports Server (NTRS)

    Hadjadj, A; Yee, H. C.; Sjogreen, B.

    2011-01-01

    An eighth-order filter method for a wide range of compressible flow speeds (H.C. Yee and B. Sjogreen, Proceedings of ICOSAHOM09, June 22-26, 2009, Trondheim, Norway) is employed for large eddy simulations (LES) of temporally evolving mixing layers (TML) for different convective Mach numbers (Mc) and Reynolds numbers. The high order filter method is designed for accurate and efficient simulations of shock-free compressible turbulence, turbulence with shocklets and turbulence with strong shocks with minimum tuning of scheme parameters. The values of Mc considered for the TML range from the quasi-incompressible regime to the highly compressible supersonic regime. The three main characteristics of compressible TML (the self similarity property, compressibility effects and the presence of large-scale structure with shocklets for high Mc) are considered for the LES study. The LES results using the same scheme parameters for all studied cases agree well with the experimental results of Barone et al. (2006) and with published direct numerical simulation (DNS) work of Rogers & Moser (1994) and Pantano & Sarkar (2002).

  3. Numerical simulation of conservation laws

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; To, Wai-Ming

    1992-01-01

    A new numerical framework for solving conservation laws is being developed. This new approach differs substantially from the well established methods, i.e., finite difference, finite volume, finite element and spectral methods, in both concept and methodology. The key features of the current scheme include: (1) direct discretization of the integral forms of conservation laws, (2) treating space and time on the same footing, (3) flux conservation in space and time, and (4) unified treatment of the convection and diffusion fluxes. The model equation considered in the initial study is the standard one dimensional unsteady constant-coefficient convection-diffusion equation. In a stability study, it is shown that the principal and spurious amplification factors of the current scheme, respectively, are structurally similar to those of the leapfrog/DuFort-Frankel scheme. As a result, the current scheme has no numerical diffusion in the special case of pure convection and is unconditionally stable in the special case of pure diffusion. Assuming smooth initial data, it will be shown theoretically and numerically that, by using an easily determined optimal time step, the accuracy of the current scheme may reach a level which is several orders of magnitude higher than that of the MacCormack scheme, with virtually identical operation count.
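
    For readers unfamiliar with the leapfrog/DuFort-Frankel reference point mentioned above, the sketch below implements the classical DuFort-Frankel scheme for one-dimensional pure diffusion, which is likewise unconditionally stable in that special case. It is included only as background: it is not the space-time conservation-law scheme described in the abstract, and the diffusivity and grid sizes are assumed values.

      # Sketch: the classical DuFort-Frankel scheme for u_t = nu * u_xx on (0, 1)
      # with homogeneous Dirichlet boundaries, bootstrapped by one forward-Euler
      # step.  Parameters are assumed; this is background for the comparison
      # made in the abstract, not the scheme proposed there.
      import numpy as np

      nu, nx, nt, T = 0.1, 101, 2000, 1.0
      dx, dt = 1.0 / (nx - 1), T / nt
      s = nu * dt / dx ** 2
      x = np.linspace(0.0, 1.0, nx)

      u_old = np.sin(np.pi * x)                     # u at time level n-1
      u_cur = u_old.copy()                          # one forward-Euler step -> level n
      u_cur[1:-1] += s * (u_old[2:] - 2 * u_old[1:-1] + u_old[:-2])

      for n in range(1, nt):
          u_new = u_old.copy()
          u_new[1:-1] = ((1 - 2 * s) * u_old[1:-1]
                         + 2 * s * (u_cur[2:] + u_cur[:-2])) / (1 + 2 * s)
          u_old, u_cur = u_cur, u_new

      exact = np.exp(-nu * np.pi ** 2 * T) * np.sin(np.pi * x)
      print("max error vs exact heat-equation solution:", np.abs(u_cur - exact).max())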

  4. Comparison of four measures in reducing length of stay in burns: An Asian centre's evolved multimodal burns protocol.

    PubMed

    Chong, Si Jack; Kok, Yee Onn; Choke, Abby; Tan, Esther W X; Tan, Kok Chai; Tan, Bien-Keem

    2017-09-01

    Multidisciplinary burns care is constantly evolving to improve outcomes given the numerous modalities available. We examine the use of Biobrane, micrografting, early renal replacement therapy and a strict target time of surgery within 24h of burns on improving outcomes of length of stay, duration of surgery, mean number of surgeries and number of positive tissue cultures in a tertiary burns centre. A post-implementation prospective cohort of inpatient burns patients from 2014 to 2015 (n=137) was compared against a similar pre-implementation cohort from 2013 to 2014 (n=93) using REDCap, an electronic database. There was no statistically significant difference for comorbidities, age and percentage (%) TBSA between the new protocol and control groups. The protocol group had shorter mean time to surgery (23.5 vs 38.5 h) (p<0.002), 0.63 fewer operative sessions, shorter mean length of stay (11.8 vs 16.8 days) (p<0.04), and fewer positive tissue cultures (0.59 vs 1.28) (p<0.03). The 4 measures of the new burns protocol improved burns care and validated the collective effort of a multi-disciplinary, multipronged burns management supported by surgeons, anesthetists, renal physicians, emergency physicians, nurses, and allied healthcare providers. Biobrane, single stage onlay micrograft/allograft, early CRRT and surgery within 24h were successfully introduced. These are useful adjuncts in the armamentarium to be considered for any burns centre. Copyright © 2017. Published by Elsevier Ltd.

  5. A screen for immunity genes evolving under positive selection in Drosophila.

    PubMed

    Jiggins, F M; Kim, K W

    2007-05-01

    Genes involved in the immune system tend to have higher rates of adaptive evolution than other genes in the genome, probably because they are coevolving with pathogens. We have screened a sample of Drosophila genes to identify those evolving under positive selection. First, we identified rapidly evolving immunity genes by comparing 140 loci in Drosophila erecta and D. yakuba. Secondly, we resequenced 23 of the fastest evolving genes from the independent species pair D. melanogaster and D. simulans, and identified those under positive selection using a McDonald-Kreitman test. There was strong evidence of adaptive evolution in two serine proteases (persephone and spirit) and a homolog of the Anopheles serpin SRPN6, and weaker evidence in another serine protease and the death domain protein dFADD. These results add to mounting evidence that immune signalling pathway molecules often evolve rapidly, possibly because they are sites of host-parasite coevolution.
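
    The McDonald-Kreitman test mentioned above reduces to a 2x2 contingency test on counts of nonsynonymous and synonymous variation that is polymorphic within species versus fixed between species. The sketch below shows the calculation with invented counts, not data from the study.

      # Sketch: a McDonald-Kreitman test on a 2x2 table of nonsynonymous (n)
      # and synonymous (s) sites that are divergent (D) or polymorphic (P).
      # The counts are invented for illustration.
      from scipy.stats import fisher_exact

      Dn, Ds = 30, 12      # fixed differences between species (assumed counts)
      Pn, Ps = 10, 20      # polymorphisms within species (assumed counts)

      odds, p_value = fisher_exact([[Dn, Ds], [Pn, Ps]])

      # Neutrality index < 1 (excess nonsynonymous divergence) suggests positive
      # selection; alpha estimates the fraction of adaptive substitutions.
      NI = (Pn / Ps) / (Dn / Ds)
      alpha = 1.0 - NI
      print("Fisher exact p = %.4f, NI = %.2f, alpha = %.2f" % (p_value, NI, alpha))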

  6. Accuracy and Numerical Stability Analysis of Lattice Boltzmann Method with Multiple Relaxation Time for Incompressible Flows

    NASA Astrophysics Data System (ADS)

    Pradipto; Purqon, Acep

    2017-07-01

    The lattice Boltzmann method (LBM) is a relatively new method for simulating fluid dynamics. Nowadays, applications of LBM range from incompressible flows and flows in porous media to microflows. The common collision model of LBM is BGK with a constant single relaxation time τ. However, BGK suffers from numerical instabilities. These instabilities can be reduced by implementing LBM with multiple relaxation times (MRT). Both schemes were implemented for the incompressible two-dimensional lid-driven cavity. The stability analysis was done by finding the maximum Reynolds number and velocity for which the simulations converge. The accuracy analysis was done by comparing the velocity profiles with the benchmark results of Ghia et al. and calculating the net velocity flux. The tests concluded that LBM with MRT is more stable than BGK and has similar accuracy. The maximum Reynolds numbers for which the simulations converge are 3200 for BGK and 7500 for MRT.
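
    For orientation, the sketch below is a minimal D2Q9 single-relaxation-time (BGK) lattice Boltzmann solver for the lid-driven cavity benchmark used in the study, with half-way bounce-back on the three fixed walls and a moving-wall bounce-back on the lid (corners are handled only approximately). It is a plain BGK illustration without MRT, and the lid speed, Reynolds number, grid size and iteration count are assumed values rather than those of the paper.

      # Sketch: minimal D2Q9 BGK lattice Boltzmann lid-driven cavity.
      import numpy as np

      nx = ny = 100
      U, Re = 0.1, 400.0                       # lid speed, Reynolds number (assumed)
      nu = U * (nx - 1) / Re                   # lattice viscosity
      tau = 3.0 * nu + 0.5                     # BGK relaxation time

      cx = np.array([0, 1, 0, -1, 0, 1, -1, -1, 1])
      cy = np.array([0, 0, 1, 0, -1, 1, 1, -1, -1])
      w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

      f = np.ones((9, nx, ny)) * w[:, None, None]   # rho = 1, u = 0 initially

      for it in range(5000):
          rho = f.sum(axis=0)
          ux = (f * cx[:, None, None]).sum(axis=0) / rho
          uy = (f * cy[:, None, None]).sum(axis=0) / rho
          # BGK collision toward the local equilibrium
          usq = ux ** 2 + uy ** 2
          fpost = np.empty_like(f)
          for i in range(9):
              cu = 3.0 * (cx[i] * ux + cy[i] * uy)
              feq = w[i] * rho * (1.0 + cu + 0.5 * cu ** 2 - 1.5 * usq)
              fpost[i] = f[i] - (f[i] - feq) / tau
          # streaming (periodic roll; boundaries corrected below)
          for i in range(9):
              f[i] = np.roll(np.roll(fpost[i], cx[i], axis=0), cy[i], axis=1)
          # half-way bounce-back on the bottom, left and right walls
          f[2, :, 0], f[5, :, 0], f[6, :, 0] = fpost[4, :, 0], fpost[7, :, 0], fpost[8, :, 0]
          f[1, 0, :], f[5, 0, :], f[8, 0, :] = fpost[3, 0, :], fpost[7, 0, :], fpost[6, 0, :]
          f[3, -1, :], f[6, -1, :], f[7, -1, :] = fpost[1, -1, :], fpost[8, -1, :], fpost[5, -1, :]
          # moving lid on the top row: bounce-back with momentum correction
          rt = rho[:, -1]
          f[4, :, -1] = fpost[2, :, -1]
          f[7, :, -1] = fpost[5, :, -1] - 6.0 * w[5] * rt * U
          f[8, :, -1] = fpost[6, :, -1] + 6.0 * w[6] * rt * U

      rho = f.sum(axis=0)
      ux = (f * cx[:, None, None]).sum(axis=0) / rho
      print("u_x at the cavity centre after 5000 steps: %.4f" % ux[nx // 2, ny // 2])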

  7. Regulatory mechanisms link phenotypic plasticity to evolvability

    PubMed Central

    van Gestel, Jordi; Weissing, Franz J.

    2016-01-01

    Organisms have a remarkable capacity to respond to environmental change. They can either respond directly, by means of phenotypic plasticity, or they can slowly adapt through evolution. Yet, how phenotypic plasticity links to evolutionary adaptability is largely unknown. Current studies of plasticity tend to adopt a phenomenological reaction norm (RN) approach, which neglects the mechanisms underlying plasticity. Focusing on a concrete question – the optimal timing of bacterial sporulation – we here also consider a mechanistic approach, the evolution of a gene regulatory network (GRN) underlying plasticity. Using individual-based simulations, we compare the RN and GRN approach and find a number of striking differences. Most importantly, the GRN model results in a much higher diversity of responsive strategies than the RN model. We show that each of the evolved strategies is pre-adapted to a unique set of unseen environmental conditions. The regulatory mechanisms that control plasticity therefore critically link phenotypic plasticity to the adaptive potential of biological populations. PMID:27087393

  8. Consensus in evolving networks of mobile agents

    NASA Astrophysics Data System (ADS)

    Baronchelli, Andrea; Díaz-Guilera, Albert

    2012-02-01

    Populations of mobile and communicating agents describe a vast array of technological and natural systems, ranging from sensor networks to animal groups. Here, we investigate how a group-level agreement may emerge in the continuously evolving networks defined by the local interactions of the moving individuals. We adopt a general scheme of motion in two dimensions and we let the individuals interact through the minimal naming game, a prototypical scheme to investigate social consensus. We distinguish different regimes of convergence determined by the emission range of the agents and by their mobility, and we identify the corresponding scaling behaviors of the consensus time. In the same way, we also rationalize the behavior of the maximum memory used during the convergence process, which determines the minimum cognitive/storage capacity needed by the individuals. Overall, we believe that the simple and general model presented in this talk can represent a helpful reference for a better understanding of the behavior of populations of mobile agents.
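
    The minimal naming game referred to above has a very small rule set: a speaker utters a word from its inventory (inventing one if the inventory is empty); on success both agents collapse their inventories to that word, on failure the hearer adds it. The sketch below runs a toy version with agents performing random walks in a unit box and interacting within an emission range; the population size, range, step size and number of interactions are assumed values.

      # Sketch: minimal naming game among randomly moving agents that interact
      # only when closer than an emission range d.  All parameters are assumed.
      import random

      random.seed(0)
      N, d, step, T = 100, 0.08, 0.02, 100000
      pos = [(random.random(), random.random()) for _ in range(N)]
      inventory = [[] for _ in range(N)]          # words known by each agent
      next_word = 0

      def move(p):
          x, y = p
          x = min(1.0, max(0.0, x + random.uniform(-step, step)))
          y = min(1.0, max(0.0, y + random.uniform(-step, step)))
          return (x, y)

      for t in range(T):
          pos = [move(p) for p in pos]
          s = random.randrange(N)                 # speaker
          near = [h for h in range(N) if h != s and
                  (pos[h][0] - pos[s][0]) ** 2 + (pos[h][1] - pos[s][1]) ** 2 < d * d]
          if not near:
              continue
          h = random.choice(near)                 # hearer
          if not inventory[s]:
              inventory[s].append(next_word)      # invent a brand-new word
              next_word += 1
          word = random.choice(inventory[s])
          if word in inventory[h]:                # success: both collapse
              inventory[s] = [word]
              inventory[h] = [word]
          else:                                   # failure: hearer learns the word
              inventory[h].append(word)

      n_words = len({wd for inv in inventory for wd in inv})
      print("distinct words remaining after", T, "interactions:", n_words)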

  9. Perturbation propagation in random and evolved Boolean networks

    NASA Astrophysics Data System (ADS)

    Fretter, Christoph; Szejka, Agnes; Drossel, Barbara

    2009-03-01

    In this paper, we investigate the propagation of perturbations in Boolean networks by evaluating the Derrida plot and its modifications. We show that even small random Boolean networks agree well with the predictions of the annealed approximation, but nonrandom networks show a very different behaviour. We focus on networks that were evolved for high dynamical robustness. The most important conclusion is that the simple distinction between frozen, critical and chaotic networks is no longer useful, since such evolved networks can display the properties of all three types of networks. Furthermore, we evaluate a simplified empirical network and show how its specific state space properties are reflected in the modified Derrida plots.
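
    A Derrida plot is built by taking pairs of network states at a prescribed Hamming distance, advancing each state one synchronous update, and recording the average distance afterwards. The sketch below does this for a purely random Boolean network with K inputs per node and random update tables; the network size, connectivity and sample counts are assumed, and no evolved or empirical network is included.

      # Sketch: Derrida plot for a random Boolean network (N nodes, K inputs
      # each, random truth tables).  Sizes and sample counts are assumed.
      import numpy as np

      rng = np.random.default_rng(0)
      N, K, samples = 200, 2, 200

      inputs = rng.integers(0, N, size=(N, K))        # K input nodes per node
      truth = rng.integers(0, 2, size=(N, 2 ** K))    # random Boolean functions

      def update(state):
          # build, for every node, the truth-table index from its K input bits
          idx = np.zeros(N, dtype=int)
          for k in range(K):
              idx = (idx << 1) | state[inputs[:, k]]
          return truth[np.arange(N), idx]

      print(" h(t)   <h(t+1)>")
      for h in range(0, N + 1, 20):
          d_next = 0.0
          for _ in range(samples):
              s1 = rng.integers(0, 2, size=N)
              s2 = s1.copy()
              flip = rng.choice(N, size=h, replace=False)   # perturb h bits
              s2[flip] ^= 1
              d_next += np.sum(update(s1) != update(s2))
          print("%5d   %8.2f" % (h, d_next / samples))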

  10. Evolving the Reuse Process at the Flight Dynamics Division (FDD) Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Condon, S.; Seaman, C.; Basili, Victor; Kraft, S.; Kontio, J.; Kim, Y.

    1996-01-01

    This paper presents the interim results from the Software Engineering Laboratory's (SEL) Reuse Study. The team conducting this study has, over the past few months, been studying the Generalized Support Software (GSS) domain asset library and architecture, and the various processes associated with it. In particular, we have characterized the process used to configure GSS-based attitude ground support systems (AGSS) to support satellite missions at NASA's Goddard Space Flight Center. To do this, we built detailed models of the tasks involved, the people who perform these tasks, and the interdependencies and information flows among these people. These models were based on information gleaned from numerous interviews with people involved in this process at various levels. We also analyzed effort data in order to determine the cost savings in moving from actual development of AGSSs to support each mission (which was necessary before GSS was available) to configuring AGSS software from the domain asset library. While characterizing the GSS process, we became aware of several interesting factors which affect the successful continued use of GSS. Many of these issues fall under the subject of evolving technologies, which were not available at the inception of GSS, but are now. Some of these technologies could be incorporated into the GSS process, thus making the whole asset library more usable. Other technologies are being considered as an alternative to the GSS process altogether. In this paper, we outline some of the issues we will be considering in our continued study of GSS and the impact of evolving technologies.

  11. An Evolving Joint Acquisition Force

    DTIC Science & Technology

    2004-03-19

    Author: Theodore Jennings. Performing organization: U.S. Army War College, Carlisle Barracks, Carlisle, PA 17013-5050. (No abstract was captured for this record; it contained only standard report documentation form fields.)

  12. Evolvable Hardware for Space Applications

    NASA Technical Reports Server (NTRS)

    Lohn, Jason; Globus, Al; Hornby, Gregory; Larchev, Gregory; Kraus, William

    2004-01-01

    This article surveys the research of the Evolvable Systems Group at NASA Ames Research Center. Over the past few years, our group has developed the ability to use evolutionary algorithms in a variety of NASA applications, ranging from spacecraft antenna design and fault tolerance for programmable logic chips to atomic force field parameter fitting, analog circuit design, and Earth-observing satellite scheduling. In some of these applications, evolutionary algorithms match or improve on human performance.

  13. Foundations of children's numerical and mathematical skills: the roles of symbolic and nonsymbolic representations of numerical magnitude.

    PubMed

    Lyons, Ian M; Ansari, Daniel

    2015-01-01

    Numerical and mathematical skills are critical predictors of academic success. The last three decades have seen a substantial growth in our understanding of how the human mind and brain represent and process numbers. In particular, research has shown that we share with animals the ability to represent numerical magnitude (the total number of items in a set) and that preverbal infants can process numerical magnitude. Further research has shown that similar processing signatures characterize numerical magnitude processing across species and developmental time. These findings suggest that an approximate system for nonsymbolic (e.g., dot arrays) numerical magnitude representation serves as the basis for the acquisition of cultural, symbolic (e.g., Arabic numerals) representations of numerical magnitude. This chapter explores this hypothesis by reviewing studies that have examined the relation between individual differences in nonsymbolic numerical magnitude processing and symbolic math abilities (e.g., arithmetic). Furthermore, we examine the extent to which the available literature provides strong evidence for a link between symbolic and nonsymbolic representations of numerical magnitude at the behavioral and neural levels of analysis. We conclude that claims that symbolic number abilities are grounded in the approximate system for the nonsymbolic representation of numerical magnitude are not strongly supported by the available evidence. Alternative models and future research directions are discussed. © 2015 Elsevier Inc. All rights reserved.

  14. Approaches to Numerical Relativity

    NASA Astrophysics Data System (ADS)

    d'Inverno, Ray

    2005-07-01

    Introduction Ray d'Inverno; Preface C. J. S. Clarke; Part I. Theoretical Approaches: 1. Numerical relativity on a transputer array Ray d'Inverno; 2. Some aspects of the characteristic initial value problem in numerical relativity Nigel Bishop; 3. The characteristic initial value problem in general relativity J. M. Stewart; 4. Algebraic approaches to the characteristic initial value problem in general relativity Jörg Frauendiener; 5. On hyperboloidal hypersurfaces Helmut Friedrich; 6. The initial value problem on null cones J. A. Vickers; 7. Introduction to dual-null dynamics S. A. Hayward; 8. On colliding plane wave space-times J. B. Griffiths; 9. Boundary conditions for the momentum constraint Niall O Murchadha; 10. On the choice of matter model in general relativity A. D. Rendall; 11. A mathematical approach to numerical relativity J. W. Barrett; 12. Making sense of the effects of rotation in general relativity J. C. Miller; 13. Stability of charged boson stars and catastrophe theory Franz E. Schunck, Fjodor V. Kusmartsev and Eckehard W. Mielke; Part II. Practical Approaches: 14. Numerical asymptotics R. Gómez and J. Winicour; 15. Instabilities in rapidly rotating polytropes Scott C. Smith and Joan M. Centrella; 16. Gravitational radiation from coalescing binary neutron stars Ken-Ichi Oohara and Takashi Nakamura; 17. 'Critical' behaviour in massless scalar field collapse M. W. Choptuik; 18. Godunov-type methods applied to general relativistic gravitational collapse José Ma. Ibáñez, José Ma. Martí, Juan A. Miralles and J. V. Romero; 19. Astrophysical sources of gravitational waves and neutrinos Silvano Bonazzola, Eric Gourgoulhon, Pawel Haensel and Jean-Alain Marck; 20. Gravitational radiation from triaxial core collapse Jean-Alain Marck and Silvano Bonazzola; 21. A vacuum fully relativistic 3D numerical code C. Bona and J. Massó; 22. Solution of elliptic equations in numerical relativity using multiquadrics M. R. Dubal, S. R. Oliveira and R. A. Matzner; 23

  15. Evolving Educational Techniques in Surgical Training.

    PubMed

    Evans, Charity H; Schenarts, Kimberly D

    2016-02-01

    Training competent and professional surgeons efficiently and effectively requires innovation and modernization of educational methods. Today's medical learner is quite adept at using multiple platforms to gain information, providing surgical educators with numerous innovative avenues to promote learning. With the growth of technology, and the restriction of work hours in surgical education, there has been an increase in use of simulation, including virtual reality, robotics, telemedicine, and gaming. The use of simulation has shifted the learning of basic surgical skills to the laboratory, reserving limited time in the operating room for the acquisition of complex surgical skills. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Dynamics of Numerics & Spurious Behaviors in CFD Computations. Revised

    NASA Technical Reports Server (NTRS)

    Yee, Helen C.; Sweby, Peter K.

    1997-01-01

    The global nonlinear behavior of finite discretizations for constant time steps and fixed or adaptive grid spacings is studied using tools from dynamical systems theory. Detailed analysis of commonly used temporal and spatial discretizations for simple model problems is presented. The role of dynamics in the understanding of long time behavior of numerical integration and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in computational fluid dynamics (CFD) is explored. The study is complemented with examples of spurious behavior observed in steady and unsteady CFD computations. The CFD examples were chosen to illustrate non-apparent spurious behavior that was difficult to detect without extensive grid and temporal refinement studies and some knowledge from dynamical systems theory. Studies revealed the various possible dangers of misinterpreting numerical simulation of realistic complex flows that are constrained by available computing power. In large scale computations where the physics of the problem under study is not well understood and numerical simulations are the only viable means of solution, extreme care must be taken in both computation and interpretation of the numerical data. The goal of this paper is to explore the important role that dynamical systems theory can play in the understanding of the global nonlinear behavior of numerical algorithms and to aid the identification of the sources of numerical uncertainties in CFD.

  17. Revealing evolved massive stars with Spitzer

    NASA Astrophysics Data System (ADS)

    Gvaramadze, V. V.; Kniazev, A. Y.; Fabrika, S.

    2010-06-01

    Massive evolved stars lose a large fraction of their mass via copious stellar wind or instant outbursts. During certain evolutionary phases, they can be identified by the presence of their circumstellar nebulae. In this paper, we present the results of a search for compact nebulae (reminiscent of circumstellar nebulae around evolved massive stars) using archival 24-μm data obtained with the Multiband Imaging Photometer for Spitzer. We have discovered 115 nebulae, most of which bear a striking resemblance to the circumstellar nebulae associated with luminous blue variables (LBVs) and late WN-type (WNL) Wolf-Rayet (WR) stars in the Milky Way and the Large Magellanic Cloud (LMC). We interpret this similarity as an indication that the central stars of detected nebulae are either LBVs or related evolved massive stars. Our interpretation is supported by follow-up spectroscopy of two dozen of these central stars, most of which turn out to be either candidate LBVs (cLBVs), blue supergiants or WNL stars. We expect that the forthcoming spectroscopy of the remaining objects from our list, accompanied by the spectrophotometric monitoring of the already discovered cLBVs, will further increase the known population of Galactic LBVs. This, in turn, will have profound consequences for better understanding the LBV phenomenon and its role in the transition between hydrogen-burning O stars and helium-burning WR stars. We also report on the detection of an arc-like structure attached to the cLBV HD 326823 and an arc associated with the LBV R99 (HD 269445) in the LMC.

  18. Interactive numerals

    PubMed Central

    2017-01-01

    Although Arabic numerals (like ‘2016’ and ‘3.14’) are ubiquitous, we show that in interactive computer applications they are often misleading and surprisingly unreliable. We introduce interactive numerals as a new concept and show that, like Roman numerals and Arabic numerals, interactive numerals introduce another way of using and thinking about numbers. Properly understanding interactive numerals is essential for all computer applications that involve numerical data entered by users, including finance, medicine, aviation and science. PMID:28484609

  19. Artificial selection on relative brain size in the guppy reveals costs and benefits of evolving a larger brain.

    PubMed

    Kotrschal, Alexander; Rogell, Björn; Bundsen, Andreas; Svensson, Beatrice; Zajitschek, Susanne; Brännström, Ioana; Immler, Simone; Maklakov, Alexei A; Kolm, Niclas

    2013-01-21

    The large variation in brain size that exists in the animal kingdom has been suggested to have evolved through the balance between selective advantages of greater cognitive ability and the prohibitively high energy demands of a larger brain (the "expensive-tissue hypothesis"). Despite over a century of research on the evolution of brain size, empirical support for the trade-off between cognitive ability and energetic costs is based exclusively on correlative evidence, and the theory remains controversial. Here we provide experimental evidence for costs and benefits of increased brain size. We used artificial selection for large and small brain size relative to body size in a live-bearing fish, the guppy (Poecilia reticulata), and found that relative brain size evolved rapidly in response to divergent selection in both sexes. Large-brained females outperformed small-brained females in a numerical learning assay designed to test cognitive ability. Moreover, large-brained lines, especially males, developed smaller guts, as predicted by the expensive-tissue hypothesis, and produced fewer offspring. We propose that the evolution of brain size is mediated by a functional trade-off between increased cognitive ability and reproductive performance and discuss the implications of these findings for vertebrate brain evolution. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Toe-to-hand transfer: Evolving Indications and Relevant Outcomes

    PubMed Central

    Waljee, Jennifer F.; Chung, Kevin C.

    2014-01-01

    In the late 19th century, the first toe to hand transfer was performed in Vienna, Austria, as a staged procedure by Nicoladoni.(1) Since that time, the advent of microsurgery has revolutionized toe to hand transfers. In 1966, Buncke performed the first microvascular toe to thumb transfer in a rhesus monkey.(2) The first toe to thumb transfer using microsurgical techniques in humans was performed by Cobbett in 1969, followed shortly thereafter by the first transfer of a second toe to the thumb position.(3,4) Today, due to expanding microsurgical techniques and surgeon innovation, the indications and techniques for toe-to-hand transfer procedures continue to evolve and now encompass patients with a variety of acquired and congenital hand defects.(5) PMID:23790426

  1. Excel spreadsheet in teaching numerical methods

    NASA Astrophysics Data System (ADS)

    Djamila, Harimi

    2017-09-01

    One of the important objectives in teaching numerical methods to undergraduate students is to build comprehension of numerical algorithms. Although manual calculation is important for understanding a procedure, it is time consuming and prone to error. This is especially the case for the iterative procedures used in many numerical methods. Currently, many commercial programs, such as Matlab, Maple, and Mathematica, are useful in teaching numerical methods. These are usually not user-friendly for the uninitiated. An Excel spreadsheet offers an initial level of programming and can be used either on or off campus. The students will not be distracted by writing code. It must be emphasized that general commercial software should be introduced later for more elaborate problems. This article reports on a strategy for teaching numerical methods in undergraduate engineering programs. It is directed to students, lecturers and researchers in the engineering field.
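
    As an example of the kind of iteration table a student would lay out column by column in a spreadsheet, the sketch below tabulates Newton-Raphson iterations in Python for a simple root-finding problem; the function and starting guess are illustrative choices, not examples taken from the article.

      # Sketch: Newton-Raphson iteration table for f(x) = x**3 - 2,
      # i.e. computing the cube root of 2.  Function and starting guess
      # are our own illustrative choices.
      def f(x):
          return x ** 3 - 2.0

      def df(x):
          return 3.0 * x ** 2

      x = 1.5                          # initial guess (assumed)
      print("iter        x               f(x)")
      for n in range(8):
          print("%4d  %.12f  % .3e" % (n, x, f(x)))
          x = x - f(x) / df(x)         # Newton-Raphson update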

  2. Numerical Analysis of Flow Evolution in a Helium Jet Injected into Ambient Air

    NASA Technical Reports Server (NTRS)

    Satti, Rajani P.; Agrawal, Ajay K.

    2005-01-01

    A computational model to study the stability characteristics of an evolving buoyant helium gas jet in ambient air environment is presented. Numerical formulation incorporates a segregated approach to solve for the transport equations of helium mass fraction coupled with the conservation equations of mixture mass and momentum using a staggered grid method. The operating parameters correspond to the Reynolds number varying from 30 to 300 to demarcate the flow dynamics in oscillating and non-oscillating regimes. Computed velocity and concentration fields were used to analyze the flow structure in the evolving jet. For Re=300 case, results showed that an instability mode that sets in during the evolution process in Earth gravity is absent in zero gravity, signifying the importance of buoyancy. Though buoyancy initiates the instability, below a certain jet exit velocity, diffusion dominates the entrainment process to make the jet non-oscillatory as observed for the Re=30 case. Initiation of the instability was found to be dependent on the interaction of buoyancy and momentum forces along the jet shear layer.

  3. VizieR Online Data Catalog: PTPS stars. III. The evolved stars sample (Niedzielski+, 2016)

    NASA Astrophysics Data System (ADS)

    Niedzielski, A.; Deka-Szymankiewicz, B.; Adamczyk, M.; Adamow, M.; Nowak, G.; Wolszczan, A.

    2015-11-01

    We present basic atmospheric parameters (Teff, logg, vt and [Fe/H]), rotation velocities and absolute radial velocities as well as luminosities, masses, ages and radii for 402 stars (including 11 single-lined spectroscopic binaries), mostly subgiants and giants. For 272 of them we present parameters for the first time. For another 53 stars we present estimates of Teff and log g based on photometric calibrations. We also present basic properties of the complete list of 744 stars that form the PTPS evolved stars sample. We examined stellar masses for 1255 stars in five other planet searches and found some of them likely to be significantly overestimated. Applying our uniformly determined stellar masses we confirm the apparent increase of companions masses for evolved stars, and we explain it, as well as lack of close-in planets with limited effective radial velocity precision for those stars due to activity. (5 data files).

  4. Primordial evolvability: Impasses and challenges.

    PubMed

    Vasas, Vera; Fernando, Chrisantha; Szilágyi, András; Zachár, István; Santos, Mauro; Szathmáry, Eörs

    2015-09-21

    While it is generally agreed that some kind of replicating non-living compounds were the precursors of life, there is much debate over their possible chemical nature. Metabolism-first approaches propose that mutually catalytic sets of simple organic molecules could be capable of self-replication and rudimentary chemical evolution. In particular, the graded autocatalysis replication domain (GARD) model, depicting assemblies of amphiphilic molecules, has received considerable interest. The system propagates compositional information across generations and is suggested to be a target of natural selection. However, evolutionary simulations indicate that the system lacks selectability (i.e. selection has negligible effect on the equilibrium concentrations). We elaborate on the lessons learnt from the example of the GARD model and, more widely, on the issue of evolvability, and discuss the implications for similar metabolism-first scenarios. We found that simple incorporation-type chemistry based on non-covalent bonds, as assumed in GARD, is unlikely to result in alternative autocatalytic cycles when catalytic interactions are randomly distributed. An even more serious problem stems from the lognormal distribution of catalytic factors, causing inherent kinetic instability of such loops, due to the dominance of efficiently catalyzed components that fail to return catalytic aid. Accordingly, the dynamics of the GARD model is dominated by strongly catalytic, but not auto-catalytic, molecules. Without effective autocatalysis, stable hereditary propagation is not possible. Many repetitions and different scaling of the model come to no rescue. Despite all attempts to show the contrary, the GARD model is not evolvable, in contrast to reflexively autocatalytic networks, complemented by rare uncatalyzed reactions and compartmentation. The latter networks, resting on the creation and breakage of chemical bonds, can generate novel ('mutant') autocatalytic loops from a given set of

  5. Experimental localization of an acoustic sound source in a wind-tunnel flow by using a numerical time-reversal technique.

    PubMed

    Padois, Thomas; Prax, Christian; Valeau, Vincent; Marx, David

    2012-10-01

    The possibility of using the time-reversal technique to localize acoustic sources in a wind-tunnel flow is investigated. While the technique is widespread, it has scarcely been used in aeroacoustics up to now. The proposed method consists of two steps: in a first experimental step, the acoustic pressure fluctuations are recorded over a linear array of microphones; in a second numerical step, the experimental data are time-reversed and used as input data for a numerical code solving the linearized Euler equations. The simulation achieves the back-propagation of the waves from the array to the source and takes into account the effect of the mean flow on sound propagation. The ability of the method to localize a sound source in a typical wind-tunnel flow is first demonstrated using simulated data. A generic experiment is then set up in an anechoic wind tunnel to validate the proposed method with a flow at Mach number 0.11. Monopolar sources, either monochromatic or with narrow- or wide-band frequency content, are considered first. The source position is estimated well, with an error smaller than the wavelength. An application to a dipolar sound source shows that this type of source is also very satisfactorily characterized.

  6. Numerical approach to model independently reconstruct f(R) functions through cosmographic data

    NASA Astrophysics Data System (ADS)

    Pizza, Liberato

    2015-06-01

    The challenging issue of determining the correct f(R) among several possibilities is revised here by means of numerical reconstructions of the modified Friedmann equations around the redshift interval z ∈ [0, 1]. Frequently, a severe degeneracy between f(R) approaches occurs, since different paradigms correctly explain present time dynamics. To set the initial conditions on the f(R) functions, we involve the use of the so-called cosmography of the Universe, i.e., the technique of fixing constraints on the observable Universe by comparing expanded observables with current data. This powerful approach is essentially model independent, and correspondingly we got a model-independent reconstruction of f(R(z)) classes within the interval z ∈ [0, 1]. To allow the Hubble rate to evolve around z ≤ 1, we considered three relevant frameworks of effective cosmological dynamics, i.e., the ΛCDM model, the Chevallier-Polarski-Linder parametrization, and a polynomial approach to dark energy. Finally, cumbersome algebra permits passing from f(z) to f(R), and the general outcome of our work is the determination of a viable f(R) function, which effectively describes the observed Universe dynamics.

  7. A One Dimensional, Time Dependent Inlet/Engine Numerical Simulation for Aircraft Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Garrard, Doug; Davis, Milt, Jr.; Cole, Gary

    1999-01-01

    The NASA Lewis Research Center (LeRC) and the Arnold Engineering Development Center (AEDC) have developed a closely coupled computer simulation system that provides a one dimensional, high frequency inlet/engine numerical simulation for aircraft propulsion systems. The simulation system, operating under the LeRC-developed Application Portable Parallel Library (APPL), closely coupled a supersonic inlet with a gas turbine engine. The supersonic inlet was modeled using the Large Perturbation Inlet (LAPIN) computer code, and the gas turbine engine was modeled using the Aerodynamic Turbine Engine Code (ATEC). Both LAPIN and ATEC provide a one dimensional, compressible, time dependent flow solution by solving the one dimensional Euler equations for the conservation of mass, momentum, and energy. Source terms are used to model features such as bleed flows, turbomachinery component characteristics, and inlet subsonic spillage while unstarted. High frequency events, such as compressor surge and inlet unstart, can be simulated with a high degree of fidelity. The simulation system was exercised using a supersonic inlet with sixty percent of the supersonic area contraction occurring internally, and a GE J85-13 turbojet engine.

  8. No apparent cost of evolved immune response in Drosophila melanogaster.

    PubMed

    Gupta, Vanika; Venkatesan, Saudamini; Chatterjee, Martik; Syed, Zeeshan A; Nivsarkar, Vaishnavi; Prasad, Nagaraj G

    2016-04-01

    Maintenance and deployment of the immune system are costly and are hence predicted to trade-off with other resource-demanding traits, such as reproduction. We subjected this longstanding idea to a test using a laboratory experimental evolution approach. In the present study, replicate populations of Drosophila melanogaster were subjected to three selection regimes: I (infection with Pseudomonas entomophila), S (sham infection with MgSO4), and U (unhandled control). After 30 generations of selection, flies from the I regime had evolved better survivorship upon infection with P. entomophila compared to flies from U and S regimes. However, contrary to expectations and previous reports, we did not find any evidence of trade-offs between immunity and other life history related traits, such as longevity, fecundity, egg hatchability, or development time. After 45 generations of selection, the selection was relaxed for a set of populations. Even after 15 generations, the postinfection survivorship of populations under the relaxed selection regime did not decline. We speculate that either there is a negligible cost to the evolved immune response or that trade-offs occur on traits such as reproductive behavior or other immune mechanisms that we have not investigated in this study. Our research suggests that at least under certain conditions, life-history trade-offs might play little role in maintaining variation in immunity. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.

  9. Thermal and Evolved Gas Analysis of "Nanophase" Carbonates: Implications for Thermal and Evolved Gas Analysis on Mars Missions

    NASA Technical Reports Server (NTRS)

    Lauer, Howard V., Jr.; Archer, P. D., Jr.; Sutter, B.; Niles, P. B.; Ming, Douglas W.

    2012-01-01

    Data collected by the Mars Phoenix Lander's Thermal and Evolved Gas Analyzer (TEGA) suggested the presence of calcium-rich carbonates as indicated by a high temperature CO2 release while a low temperature (approx.400-680 C) CO2 release suggested possible Mg- and/or Fe-carbonates [1,2]. Interpretation of the data collected by Mars remote instruments is done by comparing the mission data to a database on the thermal properties of well-characterized Martian analog materials collected under reduced and Earth ambient pressures [3,4]. We are proposing that "nano-phase" carbonates may also be contributing to the low temperature CO2 release. The objectives of this paper are to (1) characterize the thermal and evolved gas properties of carbonates of varying particle size, (2) evaluate the CO2 releases from CO2 treated CaO samples and (3) examine the secondary CO2 release from reheated calcite of varying particle size.

  10. Advances in numerical and applied mathematics

    NASA Technical Reports Server (NTRS)

    South, J. C., Jr. (Editor); Hussaini, M. Y. (Editor)

    1986-01-01

    This collection of papers covers some recent developments in numerical analysis and computational fluid dynamics. Some of these studies are of a fundamental nature. They address basic issues such as intermediate boundary conditions for approximate factorization schemes, existence and uniqueness of steady states for time dependent problems, and pitfalls of implicit time stepping. The other studies deal with modern numerical methods such as total variation diminishing schemes, higher order variants of vortex and particle methods, spectral multidomain techniques, and front tracking techniques. There is also a paper on adaptive grids. The fluid dynamics papers treat the classical problems of incompressible flows in helically coiled pipes, vortex breakdown, and transonic flows.

  11. Assessment of Evolving TRMM-Based Real-Time Precipitation Estimation Methods and Their Impacts on Hydrologic Prediction in a High-Latitude Basin

    NASA Technical Reports Server (NTRS)

    Yong, Bin; Hong, Yang; Ren, Li-Liang; Gourley, Jonathan; Huffman, George J.; Chen, Xi; Wang, Wen; Khan, Sadiq I.

    2013-01-01

    The real-time availability of satellite-derived precipitation estimates provides hydrologists an opportunity to improve current hydrologic prediction capability for medium to large river basins. Due to the availability of new satellite data and upgrades to the precipitation algorithms, the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis real-time estimates (TMPA-RT) have been undergoing several important revisions over the past ten years. In this study, the changes in the relative accuracy and hydrologic potential of TMPA-RT estimates over its three major evolving periods were evaluated and inter-compared at daily, monthly and seasonal scales in the high-latitude Laohahe basin in China. Assessment results show that the performance of TMPA-RT in terms of precipitation estimation and streamflow simulation was significantly improved after 3 February 2005. Overestimation during winter months was noteworthy and consistent, which is suggested to be a consequence of interference by snow cover with the passive microwave retrievals. Rainfall estimated by the new version 6 of TMPA-RT starting from 1 October 2008 to present has higher correlations with independent gauge observations and tends to perform better in detecting rain compared to the prior periods, although it suffers from larger mean error and relative bias. After a simple bias correction, this latest dataset of TMPA-RT exhibited the best capability in capturing hydrologic response among the three tested periods. In summary, this study demonstrated that there is an increasing potential in the use of TMPA-RT in hydrologic streamflow simulations over its three algorithm upgrade periods, but still with significant challenges during winter snow events.
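
    The abstract does not specify the form of its "simple bias correction"; one minimal and common choice is a multiplicative adjustment that forces the satellite total to match the gauge total over a calibration period. The sketch below illustrates that idea with synthetic daily series; the data and the roughly 30% imposed bias are invented for illustration.

      # Sketch: multiplicative bias correction of a synthetic satellite rainfall
      # series against gauge observations.  All data here are invented.
      import numpy as np

      rng = np.random.default_rng(1)
      gauge = rng.gamma(shape=0.8, scale=4.0, size=365)            # daily gauge (mm)
      satellite = np.clip(1.3 * gauge + rng.normal(0.0, 1.0, 365), 0.0, None)

      scale = gauge.sum() / satellite.sum()     # single bias-correction factor
      corrected = scale * satellite

      for name, series in [("raw", satellite), ("corrected", corrected)]:
          bias = (series.sum() - gauge.sum()) / gauge.sum() * 100.0
          print("%-9s relative bias: %+6.1f %%" % (name, bias))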

  12. Evolving Strategies for Cancer and Autoimmunity: Back to the Future

    PubMed Central

    Lane, Peter J. L.; McConnell, Fiona M.; Anderson, Graham; Nawaf, Maher G.; Gaspal, Fabrina M.; Withers, David R.

    2014-01-01

    Although current thinking has focused on genetic variation between individuals and environmental influences as underpinning susceptibility to both autoimmunity and cancer, an alternative view is that human susceptibility to these diseases is a consequence of the way the immune system evolved. It is important to remember that the immunological genes that we inherit and the systems that they control were shaped by the drive for reproductive success rather than for individual survival. It is our view that human susceptibility to autoimmunity and cancer is the evolutionarily acceptable side effect of the immune adaptations that evolved in early placental mammals to accommodate a fundamental change in reproductive strategy. Studies of immune function in mammals show that high affinity antibodies and CD4 memory, along with its regulation, co-evolved with placentation. By dissection of the immunologically active genes and proteins that evolved to regulate this step change in the mammalian immune system, clues have emerged that may reveal ways of de-tuning both effector and regulatory arms of the immune system to abrogate autoimmune responses whilst preserving protection against infection. Paradoxically, it appears that such a detuned and deregulated immune system is much better equipped to mount anti-tumor immune responses against cancers. PMID:24782861

  13. Intelligent reservoir operation system based on evolving artificial neural networks

    NASA Astrophysics Data System (ADS)

    Chaves, Paulo; Chang, Fi-John

    2008-06-01

    We propose a novel intelligent reservoir operation system based on an evolving artificial neural network (ANN). Here, "evolving" means that the parameters of the ANN model are identified by a genetic algorithm (GA), so that the ANN comes to represent the operational strategies of the reservoir. The main advantages of the Evolving ANN Intelligent System (ENNIS) are as follows: (i) only a small number of parameters need to be optimized, even for long optimization horizons; (ii) multiple decision variables are handled easily; and (iii) the operation model combines straightforwardly with other prediction models. The developed intelligent system was applied to the operation of the Shihmen Reservoir in northern Taiwan to investigate its applicability and practicability. The proposed method was first applied to a simple formulation of the Shihmen Reservoir operation, with a single objective and a single decision variable, and its results were compared to those obtained by dynamic programming. The constructed network proved to be a good operational strategy. The method was then extended and applied to the reservoir with multiple (five) decision variables. The results demonstrate that the evolved neural networks improved the operation performance of the reservoir compared to its current operational strategy. The system handled the various decision variables simultaneously and provided reasonable and suitable decisions.
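
    The abstract above describes a GA-tuned neural network policy. As a purely illustrative sketch of that idea (not the authors' ENNIS code), the following Python toy optimizes the weights of a tiny release-policy network with a plain genetic algorithm on a synthetic single-reservoir mass balance; all names, network sizes and parameter values are assumptions.

        # Illustrative sketch only: a tiny feedforward release policy whose weights are
        # tuned by a simple real-coded genetic algorithm, mimicking the "evolving ANN"
        # idea. The reservoir, inflow series and GA settings are invented placeholders.
        import numpy as np

        rng = np.random.default_rng(0)
        T = 120
        inflow = 50 + 20 * np.sin(np.arange(T) / 6.0) + rng.normal(0, 5, T)   # synthetic inflow
        demand, capacity, s0 = 45.0, 500.0, 250.0

        def policy(params, storage, inflow_t):
            """2-input, 3-hidden-unit, 1-output network; params is a flat weight vector."""
            W1, b1 = params[:6].reshape(3, 2), params[6:9]
            W2, b2 = params[9:12], params[12]
            x = np.array([storage / capacity, inflow_t / 100.0])
            h = np.tanh(W1 @ x + b1)
            return 0.1 * capacity * (np.tanh(W2 @ h + b2) + 1.0)   # release in [0, 0.2*capacity]

        def fitness(params):
            s, cost = s0, 0.0
            for t in range(T):
                r = min(policy(params, s, inflow[t]), s + inflow[t])
                s = min(s + inflow[t] - r, capacity)               # excess is spilled
                cost += (r - demand) ** 2                          # penalize deviation from demand
            return -cost

        pop = rng.normal(0, 1, (40, 13))
        for gen in range(200):                                     # tournament selection + mutation
            scores = np.array([fitness(ind) for ind in pop])
            new_pop = [pop[scores.argmax()].copy()]                # elitism
            while len(new_pop) < len(pop):
                i, j = rng.integers(0, len(pop), 2)
                a = pop[i] if scores[i] > scores[j] else pop[j]
                i, j = rng.integers(0, len(pop), 2)
                b = pop[i] if scores[i] > scores[j] else pop[j]
                new_pop.append(0.5 * (a + b) + rng.normal(0, 0.1, 13))
            pop = np.array(new_pop)
        print("best fitness:", fitness(pop[0]))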

  14. Numerical modeling of field-assisted ion-exchanged channel waveguides by the explicit consideration of space-charge buildup.

    PubMed

    Mrozek, Piotr

    2011-08-01

    A numerical model explicitly considering the space-charge density evolved both under the mask and in the region of optical structure formation was used to predict the profiles of Ag concentration during field-assisted Ag(+)-Na(+) ion exchange channel waveguide fabrication. The influence of the unequal values of diffusion constants and mobilities of incoming and outgoing ions, the value of a correlation factor (Haven ratio), and particularly space-charge density induced during the ion exchange, on the resulting profiles of Ag concentration was analyzed and discussed. It was shown that the incorporation into the numerical model of a small quantity of highly mobile ions other than exclusively Ag(+) and Na(+) may considerably affect the range and shape of calculated Ag profiles in the multicomponent glass. The Poisson equation was used to predict the electric field spread evolution in the glass substrate. The results of the numerical analysis were verified by the experimental data of Ag concentration in a channel waveguide fabricated using a field-assisted process.
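
    As a rough, hedged illustration of the kind of transport calculation involved (much simpler than the paper's explicit space-charge model), the sketch below advances a 1-D field-assisted Ag+/Na+ exchange profile assuming local electroneutrality, a concentration-dependent interdiffusion coefficient and a constant drift term standing in for the applied field; every parameter value is a placeholder.

        # Much simplified 1-D sketch of field-assisted ion exchange (illustrative only):
        # local electroneutrality is assumed instead of the paper's explicit space-charge
        # treatment, and a constant drift velocity stands in for the applied field.
        import numpy as np

        L, N = 20e-6, 400                  # domain depth [m], grid points (assumed)
        dx = L / N
        x = (np.arange(N) + 0.5) * dx
        D_Ag, D_Na = 2e-15, 8e-15          # tracer diffusivities [m^2/s] (assumed)
        alpha = 1.0 - D_Ag / D_Na          # interdiffusion mixing parameter (Haven ratio neglected)
        v_drift = 2e-9                     # field-induced drift velocity [m/s] (assumed)
        dt = 0.2 * dx**2 / D_Na            # explicit stability margin
        c = np.zeros(N)                    # normalized Ag concentration, c = 1 at the surface

        def step(c):
            cb = np.concatenate(([1.0], c, [c[-1]]))          # Dirichlet surface, zero-gradient depth
            cf = 0.5 * (cb[1:] + cb[:-1])                     # face-centered concentrations
            D_face = D_Ag / (1.0 - alpha * cf)                # nonlinear interdiffusion coefficient
            flux = -D_face * np.diff(cb) / dx + v_drift * cf  # diffusive + drift flux at faces
            return c - dt * np.diff(flux) / dx

        for _ in range(20000):
            c = step(c)

        depth_half = x[np.searchsorted(-c, -0.5)] * 1e6       # depth where c drops to 0.5
        print("Ag penetration depth ~", depth_half, "microns (c = 0.5 point)")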

  15. Our evolving universe

    NASA Astrophysics Data System (ADS)

    Longair, Malcolm S.

    Our Evolving Universe is a lucid, non-technical and infectiously enthusiastic introduction to current astronomy and cosmology. Highly illustrated throughout with the latest colour images from the world's most advanced telescopes, it also provides a colourful view of our Universe. Malcolm Longair takes us on a breathtaking tour of the most dramatic recent results astronomers have obtained on the birth of stars, the hunt for black holes and dark matter, gravitational lensing, and the latest tests of the Big Bang. He leads the reader right up to the key questions that future research in astronomy and cosmology must answer. A clear and comprehensive glossary of technical terms is also provided. For the general reader, student or professional wishing to understand the key questions today's astronomers and cosmologists are trying to answer, this is an invaluable and inspiring read.

  16. How Mentoring Relationships Evolve: A Longitudinal Study of Academic Pediatricians in a Physician Educator Faculty Development Program

    ERIC Educational Resources Information Center

    Balmer, Dorene; D'Alessandro, Donna; Risko, Wanessa; Gusic, Maryellen E.

    2011-01-01

    Introduction: Mentoring is increasingly recognized as central to career development. Less attention has been paid, however, to how mentoring relationships evolve over time. To provide a more complete picture of these complex relationships, the authors explored mentoring from a mentee's perspective within the context of a three-year faculty…

  17. Numerical orbit generators of artificial earth satellites

    NASA Astrophysics Data System (ADS)

    Kugar, H. K.; Dasilva, W. C. C.

    1984-04-01

    A numerical orbit integrator is presented that contains updates and improvements relative to the previous versions used by the Departamento de Mecanica Espacial e Controle (DMC) of INPE, and that incorporates newer models resulting from experience acquired over time. Flexibility and modularity were taken into account in order to allow future extensions and modifications. Numerical accuracy, processing speed, memory savings and usability aspects were also considered. A user's handbook, the complete program listing, and qualitative analyses of accuracy, processing time and orbit perturbation effects are included as well.

  18. Numerical Modeling of the 2014 Oso, Washington, Landslide.

    NASA Astrophysics Data System (ADS)

    George, D. L.; Iverson, R. M.

    2014-12-01

    Numerical simulations of alternative scenarios that could have transpired during the Oso, Washington, landslide of 22 March 2014 provide insight into factors responsible for the landslide's devastating high-speed runout. We performed these simulations using D-Claw, a numerical model we recently developed to simulate landslide and debris-flow motion from initiation to deposition. D-Claw solves a hyperbolic system of five partial differential equations that describe simultaneous evolution of the thickness, solid volume fraction, basal pore-fluid pressure, and two components of momentum of the moving mass. D-Claw embodies the concept of state-dependent dilatancy, which causes the solid volume fraction m to evolve toward a value that is equilibrated to the ambient stress state and shear rate. As the value of m evolves, basal pore-fluid pressure coevolves, and thereby causes an evolution in frictional resistance to motion. Our Oso simulations considered alternative scenarios in which values of all model parameters except the initial solid volume fraction m0 were held constant. These values were: basal friction angle = 36°; static critical-state solid volume fraction = 0.64; initial sediment permeability = 10⁻⁸ m²; pore-fluid density = 1100 kg/m³; sediment grain density = 2700 kg/m³; pore-fluid viscosity = 0.005 Pa·s; and dimensionless sediment compressibility coefficient = 0.03. Simulations performed using these values and m0 = 0.62 produced widespread landslide liquefaction, runaway acceleration, and landslide runout distances, patterns and speeds similar to those observed or inferred for the devastating Oso event. Alternative simulations that used m0 = 0.64 produced a much slower landslide that did not liquefy and that traveled only about 100 m before stopping. This relatively benign behavior is similar to that of several landslides at the Oso site prior to 2014. Our findings illustrate a behavioral bifurcation that is highly sensitive to the initial solid volume fraction.

  19. Evidence for a high mutation rate at rapidly evolving yeast centromeres.

    PubMed

    Bensasson, Douda

    2011-07-18

    Although their role in cell division is essential, centromeres evolve rapidly in animals, plants and yeasts. Unlike the complex centromeres of plants and animals, the point centromeres of Saccharomyces yeasts can be readily sequenced to distinguish amongst the possible explanations for fast centromere evolution. Using DNA sequences of all 16 centromeres from 34 strains of Saccharomyces cerevisiae and population genomic data from Saccharomyces paradoxus, I show that centromeres in both species evolve three times more rapidly than even selectively unconstrained DNA. Exceptionally high levels of polymorphism seen in multiple yeast populations suggest that rapid centromere evolution does not result from the repeated selective sweeps expected under meiotic drive. I further show that there is little evidence for crossing-over or gene conversion within centromeres, although there is clear evidence for recombination in their immediate vicinity. Finally, I show that the mutation spectrum at centromeres is consistent with the pattern of spontaneous mutation elsewhere in the genome. These results indicate that rapid centromere evolution is a common phenomenon in yeast species. Furthermore, they suggest that rapid centromere evolution does not result from the mutagenic effect of gene conversion, but from a generalised increase in the mutation rate, perhaps arising from the unusual chromatin structure at centromeres in yeast and other eukaryotes.

  20. Composite body movements modulate numerical cognition: evidence from the motion-numerical compatibility effect

    PubMed Central

    Cheng, Xiaorong; Ge, Hui; Andoni, Deljfina; Ding, Xianfeng; Fan, Zhao

    2015-01-01

    A recent hierarchical model of numerical processing, initiated by Fischer and Brugger (2011) and Fischer (2012), suggested that situated factors, such as different body postures and body movements, can influence the magnitude representation and bias numerical processing. Indeed, Loetscher et al. (2008) found that participants’ behavior in a random number generation task was biased by head rotations. More small numbers were reported after leftward than after rightward head turns, i.e., a motion-numerical compatibility effect. Here, by carrying out two experiments, we explored whether similar motion-numerical compatibility effects exist for movements of other important body components, e.g., arms, and for composite body movements as well, which are the basis for complex human activities in many ecologically meaningful situations. In Experiment 1, a motion-numerical compatibility effect was observed for lateral rotations of two body components, i.e., the head and arms. Relatively large numbers were reported after making rightward compared to leftward movements for both lateral head and arm turns. The motion-numerical compatibility effect was observed again in Experiment 2 when participants were asked to perform composite body movements of congruent movement directions, e.g., simultaneous head left turns and arm left turns. However, it disappeared when the movement directions were incongruent, e.g., simultaneous head left turns and arm right turns. Taken together, our results extend Loetscher et al.’s (2008) finding by demonstrating that their effect is effector-general and exists for arm movements. Moreover, our study reveals for the first time that the impacts of spatial information on numerical processing induced by each of the two sensorimotor-based situated factors, e.g., a lateral head turn and a lateral arm turn, can cancel each other out. PMID:26594188

  1. Intrinsic immunogenicity of rapidly-degradable polymers evolves during degradation.

    PubMed

    Andorko, James I; Hess, Krystina L; Pineault, Kevin G; Jewell, Christopher M

    2016-03-01

    Recent studies reveal many biomaterial vaccine carriers are able to activate immunostimulatory pathways, even in the absence of other immune signals. How the changing properties of polymers during biodegradation impact this intrinsic immunogenicity is not well studied, yet this information could contribute to rational design of degradable vaccine carriers that help direct immune response. We use degradable poly(beta-amino esters) (PBAEs) to explore intrinsic immunogenicity as a function of the degree of polymer degradation and polymer form (e.g., soluble, particles). PBAE particles condensed by electrostatic interaction to mimic a common vaccine approach strongly activate dendritic cells, drive antigen presentation, and enhance T cell proliferation in the presence of antigen. Polymer molecular weight strongly influences these effects, with maximum stimulation at short degradation times--corresponding to high molecular weight--and waning levels as degradation continues. In contrast, free polymer is immunologically inert. In mice, PBAE particles increase the numbers and activation state of cells in lymph nodes. Mechanistic studies reveal that this evolving immunogenicity occurs as the physicochemical properties and concentration of particles change during polymer degradation. This work confirms the immunological profile of degradable, synthetic polymers can evolve over time and creates an opportunity to leverage this feature in new vaccines. Degradable polymers are increasingly important in vaccination, but how the inherent immunogenicity of polymers changes during degradation is poorly understood. Using common rapidly-degradable vaccine carriers, we show that the activation of immune cells--even in the absence of other adjuvants--depends on polymer form (e.g., free, particulate) and the extent of degradation. These changing characteristics alter the physicochemical properties (e.g., charge, size, molecular weight) of polymer particles, driving changes in

  2. A Hyperbolic Solver for Black Hole Initial Data in Numerical Relativity

    NASA Astrophysics Data System (ADS)

    Babiuc, Maria

    2016-03-01

    Numerical relativity is essential to the effort to detect gravitational waves emitted at the inspiral and merger of binary black holes. The first requirement for the generation of reliable gravitational wave templates is an accurate method of constructing initial data (ID). The standard approach is to solve the constraint equations for general relativity by formulating them as an elliptic system. A shortcoming of the ID constructed this way is an initial burst of spurious unphysical radiation (junk radiation). Recently, Racz and Winicour formulated the constraints as a hyperbolic problem, requiring boundary conditions only on a large sphere surrounding the system, where the physical behavior of the gravitational field is well understood. We investigate the applicability of this new approach by developing a new 4th-order numerical code that implements the fully nonlinear constraint equations on a two-dimensional stereographic foliation and evolves them radially inward using a Runge-Kutta integrator. The tensorial quantities are written as spin-weighted fields and the angular derivatives are replaced with ``eth'' operators. We present here results for the simulation of nonlinear perturbations to Schwarzschild ID in Kerr-Schild coordinates. The code shows stability and convergence at both large and small radii. Our long-term goal is to develop this new approach into a numerical scheme for generating ID for binary black holes and to analyze its performance in eliminating the junk radiation.
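
    The radially inward Runge-Kutta march described above can be illustrated generically; the sketch below (not the authors' constraint solver) integrates a placeholder ODE system from a large outer radius inward with a classical fourth-order step.

        # Generic illustration: marching an ODE system radially inward from a large outer
        # radius with a classical RK4 step. The right-hand side is a placeholder, not the
        # nonlinear constraint equations for the spin-weighted fields.
        import numpy as np

        def rhs(r, y):
            # Stand-in right-hand side; the exact solution of dy/dr = -y/r is y ~ 1/r.
            return -y / r

        def rk4_inward(rhs, r_outer, r_inner, y0, n_steps):
            h = (r_inner - r_outer) / n_steps      # negative step: marching inward
            r, y = r_outer, np.asarray(y0, float)
            for _ in range(n_steps):
                k1 = rhs(r, y)
                k2 = rhs(r + 0.5 * h, y + 0.5 * h * k1)
                k3 = rhs(r + 0.5 * h, y + 0.5 * h * k2)
                k4 = rhs(r + h, y + h * k3)
                y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
                r += h
            return y

        # Boundary data prescribed on the outer sphere, evolved inward toward the strong-field region.
        y_outer = np.array([1.0, 2.0])
        print(rk4_inward(rhs, r_outer=100.0, r_inner=2.0, y0=y_outer, n_steps=4000))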

  3. Real-Time Estimation of Volcanic ASH/SO2 Cloud Height from Combined Uv/ir Satellite Observations and Numerical Modeling

    NASA Astrophysics Data System (ADS)

    Vicente, Gilberto A.

    An efficient iterative method has been developed to estimate the vertical profile of SO2 and ash clouds from volcanic eruptions by comparing near real-time satellite observations with numerical modeling outputs. The approach uses UV-based SO2 concentration and IR-based ash cloud images, the volcanic ash transport model PUFF, and wind speed, height and direction information to find the best match between the simulated and the observed displays. The method is computationally fast and is being implemented for operational use at the NOAA Volcanic Ash Advisory Center (VAAC) in Washington, DC, USA, to support the Federal Aviation Administration (FAA) effort to detect, track and measure volcanic ash cloud heights for air traffic safety and management. The presentation will show the methodology, results, statistical analysis, and the SO2 and Aerosol Index input products derived from the Ozone Monitoring Instrument (OMI) onboard the NASA EOS/Aura research satellite and from the Global Ozone Monitoring Experiment-2 (GOME-2) instrument on MetOp-A. The volcanic ash products are derived from AVHRR instruments on NOAA POES-16, 17, 18 and 19, as well as MetOp-A. The presentation will also show how a VAAC volcanic ash analyst interacts with the system, providing initial-condition inputs such as the location and time of the volcanic eruption, followed by automatic real-time tracking of all available satellite data, subsequent activation of the iterative approach, and the data/product delivery process in numerical and graphical format for operational applications.

  4. Real-time decay of a highly excited charge carrier in the one-dimensional Holstein model

    NASA Astrophysics Data System (ADS)

    Dorfner, F.; Vidmar, L.; Brockt, C.; Jeckelmann, E.; Heidrich-Meisner, F.

    2015-03-01

    We study the real-time dynamics of a highly excited charge carrier coupled to quantum phonons via a Holstein-type electron-phonon coupling. This is a prototypical example for the nonequilibrium dynamics in an interacting many-body system where excess energy is transferred from electronic to phononic degrees of freedom. We use diagonalization in a limited functional space (LFS) to study the nonequilibrium dynamics on a finite one-dimensional chain. This method agrees with exact diagonalization and the time-evolving block-decimation method, in both the relaxation regime and the long-time stationary state, and among these three methods it is the most efficient and versatile one for this problem. We perform a comprehensive analysis of the time evolution by calculating the electron, phonon and electron-phonon coupling energies, and the electronic momentum distribution function. The numerical results are compared to analytical solutions for short times, for a small hopping amplitude and for a weak electron-phonon coupling. In the latter case, the relaxation dynamics obtained from the Boltzmann equation agrees very well with the LFS data. We also study the time dependence of the eigenstates of the single-site reduced density matrix, which defines the so-called optimal phonon modes. We discuss their structure in nonequilibrium and the distribution of their weights. Our analysis shows that the structure of optimal phonon modes contains very useful information for the interpretation of the numerical data.

  5. The Evolving Understanding of Recovery: What the Sociology of Mental Health has to Offer*

    PubMed Central

    Watson, Dennis P.

    2012-01-01

    The meaning of recovery from serious mental illness (SMI) has evolved over time. Whereas it was not even considered a primary goal of treatment thirty years ago, it is the main focus of mental health policy today. These changes are partially the result of the work of sociologists who were studying mental health during the era of institutional treatment and the early stages of community-based care. Despite these early influences, the sociology of mental health has largely overlooked the explicit study of recovery. This is because sociologists began shifting their focus from the study of SMI to the study of less severe mental health problems beginning in the 1970s. In this paper I (a) discuss the evolving history of mental health recovery; (b) describe how recovery is defined today in policy, practice, and research; and (c) present an argument for why sociological perspectives and methods can help shed light on the tensions between these definitions while helping to develop a better understanding of the recovery process. In this argument I place particular attention on qualitative social psychological perspectives and methods because they hold the most potential for addressing some of the central concerns in the area of recovery research. PMID:23483849

  6. On the equivalence of dynamically orthogonal and bi-orthogonal methods: Theory and numerical simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Minseok; Sapsis, Themistoklis P.; Karniadakis, George Em, E-mail: george_karniadakis@brown.edu

    2014-08-01

    The Karhunen–Loève (KL) decomposition provides a low-dimensional representation for random fields as it is optimal in the mean square sense. Although for many stochastic systems of practical interest, described by stochastic partial differential equations (SPDEs), solutions possess this low-dimensional character, they also have a strongly time-dependent form, and to this end a fixed-in-time basis may not describe the solution in an efficient way. Motivated by this limitation of the standard KL expansion, Sapsis and Lermusiaux (2009) [26] developed the dynamically orthogonal (DO) field equations, which allow for the simultaneous evolution of both the spatial basis where uncertainty ‘lives’ and the stochastic characteristics of uncertainty. Recently, Cheng et al. (2013) [28] introduced an alternative approach, the bi-orthogonal (BO) method, which performs the exact same tasks, i.e. it evolves the spatial basis and the stochastic characteristics of uncertainty. In the current work we examine the relation of the two approaches and we prove theoretically and illustrate numerically their equivalence, in the sense that one method is an exact reformulation of the other. We show this by deriving a linear and invertible transformation matrix, described by a matrix differential equation, that connects the BO and the DO solutions. We also examine a pathology of the BO equations that occurs when two eigenvalues of the solution cross, resulting in an instantaneous, infinite-speed, internal rotation of the computed spatial basis. We demonstrate that despite the instantaneous duration of the singularity this has important implications for the numerical performance of the BO approach. On the other hand, it is observed that the BO is more stable in nonlinear problems involving a relatively large number of modes. Several examples, linear and nonlinear, are presented to illustrate the DO and BO methods as well as their equivalence.
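
    For readers unfamiliar with the fixed-in-time KL basis that the DO and BO equations generalize, the following small numerical illustration (an assumption-laden toy, not taken from the paper) extracts the dominant spatial modes of an ensemble of random field realizations with an SVD.

        # Minimal illustration of a fixed-in-time Karhunen-Loeve (POD) basis: the dominant
        # spatial modes of an ensemble of random field realizations are obtained from an
        # SVD of the mean-removed snapshot matrix. Purely illustrative.
        import numpy as np

        rng = np.random.default_rng(1)
        n_x, n_samples = 256, 500
        x = np.linspace(0, 1, n_x)

        # Synthetic random field: two deterministic spatial structures with random amplitudes.
        xi = rng.normal(size=(n_samples, 2))
        fields = (xi[:, :1] * np.sin(2 * np.pi * x) + 0.3 * xi[:, 1:] * np.sin(4 * np.pi * x)
                  + 0.01 * rng.normal(size=(n_samples, n_x)))

        mean = fields.mean(axis=0)
        U, s, Vt = np.linalg.svd(fields - mean, full_matrices=False)
        energy = s**2 / np.sum(s**2)
        print("energy captured by first two KL modes:", energy[:2].sum())   # close to 1 here

        # In the DO/BO setting this basis (rows of Vt) would itself evolve in time together
        # with the stochastic coefficients, rather than being computed once and frozen.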

  7. The Mars Phoenix Thermal Evolved-Gas Analysis: The Role of an Organic Free Blank in the Search for Organics

    NASA Technical Reports Server (NTRS)

    Lauer, H. V., Jr.; Ming, Douglas W.; Sutter, B.; Golden, D. C.; Morris, Richard V.; Boynton, W. V.

    2008-01-01

    The Thermal Evolved-Gas Analyzer (TEGA) instrument onboard the 2007 Phoenix Lander will perform differential scanning calorimetry (DSC) and evolved-gas analysis of soil samples collected from the surface. Data from the instrument will be compared with Mars analog mineral standards, collected under TEGA Mars-like conditions, to identify the volatile-bearing mineral phases [1] (e.g., Fe-oxyhydroxides, phyllosilicates, carbonates, and sulfates) found in the Martian soil. Concurrently, the instrument will be looking for indications of organics that might also be present in the soil. Organic molecules are necessary building blocks for life, although their presence in the ice or soil does not indicate life itself. The spacecraft will certainly bring organic contaminants to Mars even though numerous steps were taken to minimize contamination during spacecraft assembly and testing. It will be essential to distinguish possible Mars organics from terrestrial contamination when the TEGA instrument begins analyzing icy soils. To address the above, an Organic Free Blank (OFB) was designed, built, tested, and mounted on the Phoenix spacecraft, providing a baseline for distinguishing Mars organics from terrestrial organic contamination. Our objective in this report is to describe some of the considerations used in selecting the OFB material and then report on the processing and analysis of the final candidate material.

  8. Knowledge extraction from evolving spiking neural networks with rank order population coding.

    PubMed

    Soltic, Snjezana; Kasabov, Nikola

    2010-12-01

    This paper demonstrates how knowledge can be extracted from evolving spiking neural networks with rank order population coding. Knowledge discovery is a very important feature of intelligent systems. Yet, a disproportionately small amount of research is centered on the issue of knowledge extraction from spiking neural networks, which are considered to be the third generation of artificial neural networks. The lack of knowledge representation compatibility is becoming a major detriment to end users of these networks. We show that high-level knowledge can be obtained from evolving spiking neural networks. More specifically, we propose a method for fuzzy rule extraction from an evolving spiking network with rank order population coding. The proposed method was used for knowledge discovery on two benchmark taste recognition problems where the knowledge learnt by an evolving spiking neural network was extracted in the form of zero-order Takagi-Sugeno fuzzy IF-THEN rules.

  9. Frontiers in Numerical Relativity

    NASA Astrophysics Data System (ADS)

    Evans, Charles R.; Finn, Lee S.; Hobill, David W.

    2011-06-01

    Preface; Participants; Introduction; 1. Supercomputing and numerical relativity: a look at the past, present and future David W. Hobill and Larry L. Smarr; 2. Computational relativity in two and three dimensions Stuart L. Shapiro and Saul A. Teukolsky; 3. Slowly moving maximally charged black holes Robert C. Ferrell and Douglas M. Eardley; 4. Kepler's third law in general relativity Steven Detweiler; 5. Black hole spacetimes: testing numerical relativity David H. Bernstein, David W. Hobill and Larry L. Smarr; 6. Three dimensional initial data of numerical relativity Ken-ichi Oohara and Takashi Nakamura; 7. Initial data for collisions of black holes and other gravitational miscellany James W. York, Jr.; 8. Analytic-numerical matching for gravitational waveform extraction Andrew M. Abrahams; 9. Supernovae, gravitational radiation and the quadrupole formula L. S. Finn; 10. Gravitational radiation from perturbations of stellar core collapse models Edward Seidel and Thomas Moore; 11. General relativistic implicit radiation hydrodynamics in polar sliced space-time Paul J. Schinder; 12. General relativistic radiation hydrodynamics in spherically symmetric spacetimes A. Mezzacappa and R. A. Matzner; 13. Constraint preserving transport for magnetohydrodynamics John F. Hawley and Charles R. Evans; 14. Enforcing the momentum constraints during axisymmetric spacelike simulations Charles R. Evans; 15. Experiences with an adaptive mesh refinement algorithm in numerical relativity Matthew W. Choptuik; 16. The multigrid technique Gregory B. Cook; 17. Finite element methods in numerical relativity P. J. Mann; 18. Pseudo-spectral methods applied to gravitational collapse Silvano Bonazzola and Jean-Alain Marck; 19. Methods in 3D numerical relativity Takashi Nakamura and Ken-ichi Oohara; 20. Nonaxisymmetric rotating gravitational collapse and gravitational radiation Richard F. Stark; 21. Nonaxisymmetric neutron star collisions: initial results using smooth particle hydrodynamics

  10. Time-Accurate Local Time Stepping and High-Order Time CESE Methods for Multi-Dimensional Flows Using Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary

    2013-01-01

    With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most currently available algorithms and computational fluid dynamics codes can provide. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across the cells with different marching time steps. This approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.

  11. Recent advances in numerical PDEs

    NASA Astrophysics Data System (ADS)

    Zuev, Julia Michelle

    In this thesis, we investigate four neighboring topics, all in the general area of numerical methods for solving Partial Differential Equations (PDEs). Topic 1. Radial Basis Functions (RBF) are widely used for multi-dimensional interpolation of scattered data. This methodology offers smooth and accurate interpolants, which can be further refined, if necessary, by clustering nodes in select areas. We show, however, that local refinements with RBF (in a constant shape parameter ε regime) may lead to the oscillatory errors associated with the Runge phenomenon (RP). RP is best known in the case of high-order polynomial interpolation, where its effects can be accurately predicted via the Lebesgue constant L (which is based solely on the node distribution). We study the RP and the applicability of the Lebesgue constant (as well as other error measures) in RBF interpolation. Mainly, we allow for a spatially variable shape parameter, and demonstrate how it can be used to suppress RP-like edge effects and to improve the overall stability and accuracy. Topic 2. Although not as versatile as RBFs, cubic splines are useful for interpolating grid-based data. In 2-D, we consider a patch representation via Hermite basis functions s_ij(u, v) = Σ_mn h_mn H_m(u) H_n(v), as opposed to the standard bicubic representation. Stitching requirements for the rectangular non-equispaced grid yield a 2-D tridiagonal linear system AX = B, where X represents the unknown first derivatives. We discover that the standard methods for solving this N×M system do not take advantage of the spline-specific format of the matrix B. We develop an alternative approach using this specialization of the RHS, which allows us to pre-compute coefficients only once, instead of N times. A MATLAB implementation of our fast 2-D cubic spline algorithm is provided. We confirm analytically and numerically that for large N (N > 200), our method is at least 3 times faster than the
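
    A minimal sketch of the Gaussian RBF interpolation discussed in Topic 1 is given below, with an optionally node-dependent shape parameter ε; the test function and the particular spatial variation of ε are arbitrary choices for illustration, not the thesis' tuned values.

        # Sketch of Gaussian RBF interpolation with a per-center shape parameter.
        # The test function and parameter values are placeholders for illustration.
        import numpy as np

        def rbf_interpolate(x_nodes, f_nodes, eps, x_eval):
            """Gaussian RBF interpolant; eps may be a scalar or one value per node."""
            eps = np.broadcast_to(np.asarray(eps, float), x_nodes.shape)
            A = np.exp(-(eps[None, :] * (x_nodes[:, None] - x_nodes[None, :]))**2)
            coeffs = np.linalg.solve(A, f_nodes)
            B = np.exp(-(eps[None, :] * (x_eval[:, None] - x_nodes[None, :]))**2)
            return B @ coeffs

        f = lambda x: 1.0 / (1.0 + 25 * x**2)          # Runge's function
        x_nodes = np.linspace(-1, 1, 21)
        x_eval = np.linspace(-1, 1, 501)

        err_const = np.max(np.abs(rbf_interpolate(x_nodes, f(x_nodes), 3.0, x_eval) - f(x_eval)))
        # An ad hoc spatial variation of the shape parameter, for illustration only; the
        # thesis studies how such variation can suppress Runge-phenomenon-like edge effects.
        eps_var = 3.0 + 2.0 * x_nodes**2
        err_var = np.max(np.abs(rbf_interpolate(x_nodes, f(x_nodes), eps_var, x_eval) - f(x_eval)))
        print(f"max error, constant eps: {err_const:.3e}; variable eps: {err_var:.3e}")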

  12. Robustness, evolvability, and the logic of genetic regulation.

    PubMed

    Payne, Joshua L; Moore, Jason H; Wagner, Andreas

    2014-01-01

    In gene regulatory circuits, the expression of individual genes is commonly modulated by a set of regulating gene products, which bind to a gene's cis-regulatory region. This region encodes an input-output function, referred to as signal-integration logic, that maps a specific combination of regulatory signals (inputs) to a particular expression state (output) of a gene. The space of all possible signal-integration functions is vast and the mapping from input to output is many-to-one: For the same set of inputs, many functions (genotypes) yield the same expression output (phenotype). Here, we exhaustively enumerate the set of signal-integration functions that yield identical gene expression patterns within a computational model of gene regulatory circuits. Our goal is to characterize the relationship between robustness and evolvability in the signal-integration space of regulatory circuits, and to understand how these properties vary between the genotypic and phenotypic scales. Among other results, we find that the distributions of genotypic robustness are skewed, so that the majority of signal-integration functions are robust to perturbation. We show that the connected set of genotypes that make up a given phenotype are constrained to specific regions of the space of all possible signal-integration functions, but that as the distance between genotypes increases, so does their capacity for unique innovations. In addition, we find that robust phenotypes are (i) evolvable, (ii) easily identified by random mutation, and (iii) mutationally biased toward other robust phenotypes. We explore the implications of these latter observations for mutation-based evolution by conducting random walks between randomly chosen source and target phenotypes. We demonstrate that the time required to identify the target phenotype is independent of the properties of the source phenotype.

  13. Robustness, Evolvability, and the Logic of Genetic Regulation

    PubMed Central

    Moore, Jason H.; Wagner, Andreas

    2014-01-01

    In gene regulatory circuits, the expression of individual genes is commonly modulated by a set of regulating gene products, which bind to a gene’s cis-regulatory region. This region encodes an input-output function, referred to as signal-integration logic, that maps a specific combination of regulatory signals (inputs) to a particular expression state (output) of a gene. The space of all possible signal-integration functions is vast and the mapping from input to output is many-to-one: for the same set of inputs, many functions (genotypes) yield the same expression output (phenotype). Here, we exhaustively enumerate the set of signal-integration functions that yield identical gene expression patterns within a computational model of gene regulatory circuits. Our goal is to characterize the relationship between robustness and evolvability in the signal-integration space of regulatory circuits, and to understand how these properties vary between the genotypic and phenotypic scales. Among other results, we find that the distributions of genotypic robustness are skewed, such that the majority of signal-integration functions are robust to perturbation. We show that the connected set of genotypes that make up a given phenotype are constrained to specific regions of the space of all possible signal-integration functions, but that as the distance between genotypes increases, so does their capacity for unique innovations. In addition, we find that robust phenotypes are (i) evolvable, (ii) easily identified by random mutation, and (iii) mutationally biased toward other robust phenotypes. We explore the implications of these latter observations for mutation-based evolution by conducting random walks between randomly chosen source and target phenotypes. We demonstrate that the time required to identify the target phenotype is independent of the properties of the source phenotype. PMID:23373974

  14. Fractional Diffusion Processes: Probability Distributions and Continuous Time Random Walk

    NASA Astrophysics Data System (ADS)

    Gorenflo, R.; Mainardi, F.

    A physical-mathematical approach to anomalous diffusion may be based on generalized diffusion equations (containing derivatives of fractional order in space and/or time) and related random walk models. By the space-time fractional diffusion equation we mean an evolution equation obtained from the standard linear diffusion equation by replacing the second-order space derivative with a Riesz-Feller derivative of order α ∈ (0, 2] and skewness θ (|θ| ≤ min{α, 2 − α}), and the first-order time derivative with a Caputo derivative of order β ∈ (0, 1]. The fundamental solution (for the Cauchy problem) of the fractional diffusion equation can be interpreted as a probability density evolving in time of a peculiar self-similar stochastic process. We view it as a generalized diffusion process that we call the fractional diffusion process, and present an integral representation of the fundamental solution. A more general approach to anomalous diffusion is, however, known to be provided by the master equation for a continuous time random walk (CTRW). We show how this equation reduces to our fractional diffusion equation by a properly scaled passage to the limit of compressed waiting times and jump widths. Finally, we describe a method of simulation and display (via graphics) results of a few numerical case studies.
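
    The CTRW simulation mentioned in the closing sentence can be sketched in a few lines for one special case; the toy below (an assumption: α = 2 Gaussian jumps, Pareto waiting times with tail exponent β) checks the subdiffusive scaling of the mean squared displacement.

        # Minimal Monte-Carlo sketch of a continuous-time random walk in the subdiffusive
        # special case alpha = 2 (Gaussian jumps) and 0 < beta < 1: waiting times follow a
        # Pareto law with tail exponent beta, which lies in the domain of attraction of the
        # scaling limit described in the text. Everything here is illustrative.
        import numpy as np

        rng = np.random.default_rng(2)
        beta = 0.6
        n_walkers = 20000

        def ctrw_positions(t_obs):
            """Walker positions at observation time t_obs."""
            pos = np.zeros(n_walkers)
            t = np.zeros(n_walkers)
            active = np.ones(n_walkers, dtype=bool)
            while active.any():
                # Pareto waiting times: w = u**(-1/beta), heavy tail ~ w**(-1-beta)
                w = (1.0 - rng.random(active.sum())) ** (-1.0 / beta)
                t_new = t[active] + w
                jump = rng.normal(0.0, 1.0, active.sum())
                still = t_new <= t_obs                 # only jumps completed before t_obs count
                idx = np.flatnonzero(active)
                pos[idx[still]] += jump[still]
                t[idx] = t_new
                active[idx[~still]] = False
            return pos

        for t_obs in (10.0, 100.0, 1000.0):
            msd = np.mean(ctrw_positions(t_obs) ** 2)
            print(f"t = {t_obs:7.1f}   MSD = {msd:8.2f}   MSD/t^beta = {msd / t_obs**beta:.3f}")
        # For subdiffusion the ratio MSD / t**beta should be roughly constant across decades.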

  15. Sexual regret: evidence for evolved sex differences.

    PubMed

    Galperin, Andrew; Haselton, Martie G; Frederick, David A; Poore, Joshua; von Hippel, William; Buss, David M; Gonzaga, Gian C

    2013-10-01

    Regret and anticipated regret enhance decision quality by helping people avoid making and repeating mistakes. Some of people's most intense regrets concern sexual decisions. We hypothesized evolved sex differences in women's and men's experiences of sexual regret. Because of women's higher obligatory costs of reproduction throughout evolutionary history, we hypothesized that sexual actions, particularly those involving casual sex, would be regretted more intensely by women than by men. In contrast, because missed sexual opportunities historically carried higher reproductive fitness costs for men than for women, we hypothesized that poorly chosen sexual inactions would be regretted more by men than by women. Across three studies (Ns = 200, 395, and 24,230), we tested these hypotheses using free responses, written scenarios, detailed checklists, and Internet sampling to achieve participant diversity, including diversity in sexual orientation. Across all data sources, results supported predicted psychological sex differences and these differences were localized in casual sex contexts. These findings are consistent with the notion that the psychology of sexual regret was shaped by recurrent sex differences in selection pressures operating over deep time.

  16. A Riccati solution for the ideal MHD plasma response with applications to real-time stability control

    NASA Astrophysics Data System (ADS)

    Glasser, Alexander; Kolemen, Egemen; Glasser, A. H.

    2018-03-01

    Active feedback control of ideal MHD stability in a tokamak requires rapid plasma stability analysis. Toward this end, we reformulate the δW stability method with a Hamilton-Jacobi theory, elucidating analytical and numerical features of the generic tokamak ideal MHD stability problem. The plasma response matrix is demonstrated to be the solution of an ideal MHD matrix Riccati differential equation. Since Riccati equations are prevalent in the control theory literature, such a shift in perspective brings to bear a range of numerical methods that are well-suited to the robust, fast solution of control problems. We discuss the usefulness of Riccati techniques in solving the stiff ordinary differential equations often encountered in ideal MHD stability analyses—for example, in tokamak edge and stellarator physics. We demonstrate the applicability of such methods to an existing 2D ideal MHD stability code—DCON [A. H. Glasser, Phys. Plasmas 23, 072505 (2016)]—enabling its parallel operation in near real-time, with wall-clock time ≪ 1 s. Such speed may help enable active feedback ideal MHD stability control, especially in tokamak plasmas whose ideal MHD equilibria evolve with inductive timescale τ ≳ 1 s—as in ITER.
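
    As a generic illustration of the matrix Riccati differential equations referred to above (with arbitrary stand-in coefficient matrices, not the ideal-MHD operators of the paper), the sketch below integrates a small Riccati ODE with an explicit RK4 step.

        # Illustrative integration of a small matrix Riccati differential equation of the
        # generic form dP/ds = A + B P + P B^T + P C P with an RK4 step; the coefficient
        # matrices here are arbitrary stand-ins, not the ideal-MHD operators of the paper.
        import numpy as np

        A = np.array([[1.0, 0.2], [0.2, 0.5]])
        B = np.array([[-0.5, 0.1], [0.0, -0.8]])
        C = np.array([[-0.3, 0.0], [0.0, -0.2]])

        def riccati_rhs(P):
            return A + B @ P + P @ B.T + P @ C @ P

        def integrate(P0, s_end, n_steps):
            h = s_end / n_steps
            P = P0.copy()
            for _ in range(n_steps):
                k1 = riccati_rhs(P)
                k2 = riccati_rhs(P + 0.5 * h * k1)
                k3 = riccati_rhs(P + 0.5 * h * k2)
                k4 = riccati_rhs(P + h * k3)
                P = P + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
            return P

        P = integrate(np.zeros((2, 2)), s_end=5.0, n_steps=2000)
        print(P)   # with these coefficients P settles toward a steady solution
        # Stiff cases would call for the specialized Riccati/implicit methods discussed in the paper.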

  17. An evolving model for the lodging-service network in a tourism destination

    NASA Astrophysics Data System (ADS)

    Hernández, Juan M.; González-Martel, Christian

    2017-09-01

    Tourism is a complex dynamic system that includes multiple actors related to each other, composing an evolving social network. This paper presents a growth model that explains how part of the supply components in a tourism system forms a social network. Specifically, the lodgings and services in a destination are the network nodes, and a link between them appears if a representative tourist hosted in the lodging visits/consumes the service during his/her stay. The specific links between the two categories are determined by a combination of random and preferential attachment rules. The analytic results show that the long-term degree distribution of services follows a shifted power-law distribution. The numerical simulations show slight disagreements with the theoretical results in the case of the one-mode degree distribution of services, due to the low order of convergence to zero of X-motifs. The model predictions are compared with real data from a popular tourist destination in Gran Canaria, Spain, showing good agreement between analytical and empirical data for the degree distribution of services. The theoretical model was validated assuming four types of perturbations in the real data.
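
    A toy version of the growth rule described above can be simulated directly; in the sketch below each new lodging attaches to a fixed number of services either uniformly at random or preferentially by degree, and the service-degree distribution develops the heavy tail the paper derives analytically. All parameter values are assumptions.

        # Toy bipartite growth model: new lodgings attach to services either uniformly at
        # random (probability p_random) or proportionally to current service degree.
        # Parameters and the virtual seed link are invented for illustration.
        import numpy as np
        from collections import Counter

        rng = np.random.default_rng(3)
        n_services, n_lodgings, links_per_lodging, p_random = 200, 5000, 3, 0.3
        service_degree = np.ones(n_services)          # start with one "virtual" link each

        for _ in range(n_lodgings):
            chosen = set()
            while len(chosen) < links_per_lodging:
                if rng.random() < p_random:
                    s = rng.integers(n_services)                       # uniform attachment
                else:
                    probs = service_degree / service_degree.sum()
                    s = rng.choice(n_services, p=probs)                # preferential attachment
                chosen.add(int(s))
            for s in chosen:
                service_degree[s] += 1

        # The service-degree distribution should develop a heavy (power-law-like) tail,
        # qualitatively consistent with the shifted power law derived in the paper.
        counts = Counter(service_degree.astype(int))
        for k in sorted(counts)[:5] + sorted(counts)[-5:]:
            print(f"degree {k:4d}: {counts[k]} services")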

  18. A numerical study of attraction/repulsion collective behavior models: 3D particle analyses and 1D kinetic simulations

    NASA Astrophysics Data System (ADS)

    Vecil, Francesco; Lafitte, Pauline; Rosado Linares, Jesús

    2013-10-01

    We study, at the particle and kinetic levels, a collective behavior model based on three phenomena: self-propulsion, friction (Rayleigh effect) and an attractive/repulsive (Morse) potential, rescaled so that the total mass of the system remains constant independently of the number of particles N. In the first part of the paper, we introduce the particle model: the agents are numbered and described by their position and velocity. We identify five parameters that govern the possible asymptotic states for this system (clumps, spheres, dispersion, mills, rigid-body rotation, flocks) and perform a numerical analysis in the 3D setting. Then, in the second part of the paper, we describe the kinetic system derived as the limit of the particle model as N tends to infinity; we propose, in 1D, a numerical scheme for the simulations, and perform a numerical analysis aimed at recovering, asymptotically, patterns similar to those emerging from the equivalent particle systems when the particles initially evolve on a circle.
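
    A minimal 3D particle-level sketch of this model (self-propulsion, Rayleigh friction and a Morse potential rescaled by 1/N) is given below; parameter values are illustrative and are not the specific regimes mapped in the paper.

        # Minimal 3-D particle sketch: self-propulsion, Rayleigh friction and a Morse
        # attraction/repulsion potential rescaled by 1/N. Parameter values are placeholders.
        import numpy as np

        rng = np.random.default_rng(4)
        N, dt, n_steps = 100, 0.01, 2000
        alpha, beta = 1.5, 0.5                      # self-propulsion / Rayleigh friction
        C_a, l_a, C_r, l_r = 0.5, 2.0, 1.0, 0.5     # Morse potential: attraction and repulsion

        x = rng.normal(0, 1, (N, 3))
        v = rng.normal(0, 1, (N, 3))

        def morse_force(x):
            diff = x[:, None, :] - x[None, :, :]            # pairwise displacement vectors
            r = np.linalg.norm(diff, axis=-1) + np.eye(N)   # avoid division by zero on diagonal
            # dU/dr for U(r) = -C_a exp(-r/l_a) + C_r exp(-r/l_r); force = -grad U, rescaled by 1/N.
            # The diagonal contributes zero force because diff[i, i] = 0.
            dU = (C_a / l_a) * np.exp(-r / l_a) - (C_r / l_r) * np.exp(-r / l_r)
            f = -dU[:, :, None] * diff / r[:, :, None]
            return f.sum(axis=1) / N

        for _ in range(n_steps):                            # simple forward-Euler update
            speed2 = np.sum(v**2, axis=1, keepdims=True)
            a = (alpha - beta * speed2) * v + morse_force(x)
            v += dt * a
            x += dt * v

        # In the collective steady states the speed relaxes to roughly sqrt(alpha/beta).
        print("mean speed:", np.mean(np.linalg.norm(v, axis=1)))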

  19. Numerical optix: A time-domain simulator of fluorescent light diffusion in turbid medium

    NASA Astrophysics Data System (ADS)

    Ma, Guobin; Delorme, Jean-François; Guilman, Olga; Leblond, Frédéric; Khayat, Mario

    2007-02-01

    The interest in fluorescence imaging has increased steadily in the last decade. Using fluorescence techniques, it is feasible to visualize and quantify the function of genes and the expression of enzymes and proteins deep inside tissues. When applied to small animal research, optical imaging based on fluorescent marker probes can provide valuable information on the specificity and efficacy of drugs at reduced cost and with greater efficiency. Meanwhile, fluorescence techniques represent an important class of optical methods being applied to in vitro and in vivo biomedical diagnostics, towards noninvasive clinical applications, such as detecting and monitoring specific pathological and physiological processes. ART has developed a time domain in vivo small animal fluorescence imaging system, eXplore Optix. Using the measured time-resolved fluorescence signal, fluorophore location and concentration can be quickly estimated. Furthermore, the 3D distribution of fluorophore can be obtained by fluorescent diffusion tomography. To accurately analyze and interpret the measured fluorescent signals from tissue, complex theoretical models and algorithms are employed. We present here a numerical simulator of eXplore Optix. It generates virtual data under well-controlled conditions that enable us to test, verify, and improve our models and algorithms piecewise separately. The theoretical frame of the simulator is an analytical solution of the fluorescence diffusion equation. Compared to existing models, the coupling of fluorophores with finite volume size is taken into consideration. Also, the influences of fluorescent inclusions to excitation and emission light are both accounted for. The output results are compared to Monte-Carlo simulations.

  20. Efficient adaptive pseudo-symplectic numerical integration techniques for Landau-Lifshitz dynamics

    NASA Astrophysics Data System (ADS)

    d'Aquino, M.; Capuano, F.; Coppola, G.; Serpico, C.; Mayergoyz, I. D.

    2018-05-01

    Numerical time integration schemes for Landau-Lifshitz magnetization dynamics are considered. Such dynamics preserves the magnetization amplitude and, in the absence of dissipation, also implies the conservation of the free energy. This property is generally lost when time discretization is performed for the numerical solution. In this work, explicit numerical schemes based on Runge-Kutta methods are introduced. The schemes are termed pseudo-symplectic in that they are accurate to order p, but preserve magnetization amplitude and free energy to order q > p. An effective strategy for adaptive time-stepping control is discussed for schemes of this class. Numerical tests against analytical solutions for the simulation of fast precessional dynamics are performed in order to point out the effectiveness of the proposed methods.
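
    As context for the invariants mentioned above, the sketch below integrates a single-spin Landau-Lifshitz equation with a plain (non-pseudo-symplectic) RK4 step and monitors the drift in magnetization amplitude and, for zero damping, in the free energy; field and step-size choices are placeholders.

        # Illustrative single-spin Landau-Lifshitz integration with a standard RK4 step,
        # monitoring the quantities the paper's pseudo-symplectic schemes are designed to
        # preserve to higher order: |m| and, for zero damping, the free energy.
        import numpy as np

        alpha_damp = 0.0                                  # zero damping: energy should be conserved
        h_eff = np.array([0.0, 0.0, 1.0])                 # constant effective field (assumed)

        def ll_rhs(m):
            """Landau-Lifshitz right-hand side dm/dt = -m x h - alpha m x (m x h)."""
            mxh = np.cross(m, h_eff)
            return -mxh - alpha_damp * np.cross(m, mxh)

        def rk4_step(m, dt):
            k1 = ll_rhs(m)
            k2 = ll_rhs(m + 0.5 * dt * k1)
            k3 = ll_rhs(m + 0.5 * dt * k2)
            k4 = ll_rhs(m + dt * k3)
            return m + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

        m = np.array([1.0, 0.0, 0.0])
        e0 = -np.dot(m, h_eff)                            # initial free energy of the single spin
        dt, n_steps = 0.05, 20000
        for _ in range(n_steps):
            m = rk4_step(m, dt)

        print("amplitude drift:", abs(np.linalg.norm(m) - 1.0),
              " energy drift:", abs(-np.dot(m, h_eff) - e0))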

  1. Direct numerical simulation of transitional and turbulent flow over a heated flat plate using finite-difference schemes

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    1995-01-01

    This report deals with the direct numerical simulation of transitional and turbulent flow at low Mach numbers using high-order-accurate finite-difference techniques. A computation of transition to turbulence of the spatially-evolving boundary layer on a heated flat plate in the presence of relatively high freestream turbulence was performed. The geometry and flow conditions were chosen to match earlier experiments. The development of the momentum and thermal boundary layers was documented. Velocity and temperature profiles, as well as distributions of skin friction, surface heat transfer rate, Reynolds shear stress, and turbulent heat flux, were shown to compare well with experiment. The results indicate that the essential features of the transition process have been captured. The numerical method used here can be applied to complex geometries in a straightforward manner.

  2. The 'E' factor -- evolving endodontics.

    PubMed

    Hunter, M J

    2013-03-01

    Endodontics is a constantly developing field, with new instruments, preparation techniques and sealants competing with trusted and traditional approaches to tooth restoration. Thus general dental practitioners must question and understand the significance of these developments before adopting new practices. In view of this, the aim of this article, and the associated presentation at the 2013 British Dental Conference & Exhibition, is to provide an overview of endodontic methods and constantly evolving best practice. The presentation will review current preparation techniques, comparing rotary versus reciprocation, and question current trends in restoration of the endodontically treated tooth.

  3. Salt tolerance evolves more frequently in C4 grass lineages.

    PubMed

    Bromham, L; Bennett, T H

    2014-03-01

    Salt tolerance has evolved many times in the grass family, and yet few cereal crops are salt tolerant. Why has it been so difficult to develop crops tolerant of saline soils when salt tolerance has evolved so frequently in nature? One possible explanation is that some grass lineages have traits that predispose them to developing salt tolerance and that without these background traits, salt tolerance is harder to achieve. One candidate background trait is photosynthetic pathway, which has also been remarkably labile in grasses. At least 22 independent origins of the C4 photosynthetic pathway have been suggested to occur within the grass family. It is possible that the evolution of C4 photosynthesis aids exploitation of saline environments, because it reduces transpiration, increases water-use efficiency and limits the uptake of toxic ions. But the observed link between the evolution of C4 photosynthesis and salt tolerance could simply be due to biases in phylogenetic distribution of halophytes or C4 species. Here, we use a phylogenetic analysis to investigate the association between photosynthetic pathway and salt tolerance in the grass family Poaceae. We find that salt tolerance is significantly more likely to occur in lineages with C4 photosynthesis than in C3 lineages. We discuss the possible links between C4 photosynthesis and salt tolerance and consider the limitations of inferring the direction of causality of this relationship. © 2014 The Authors. Journal of Evolutionary Biology © 2014 European Society For Evolutionary Biology.

  4. A group evolving-based framework with perturbations for link prediction

    NASA Astrophysics Data System (ADS)

    Si, Cuiqi; Jiao, Licheng; Wu, Jianshe; Zhao, Jin

    2017-06-01

    Link prediction is a ubiquitous application in many fields, which uses partially observed information to predict the absence or presence of links between node pairs. The study of group evolution provides reasonable explanations of the behaviors of nodes, the relations between nodes, and community formation in a network. Possible events in group evolution include continuing, growing, splitting, forming and so on. The changes discovered in networks are to some extent the result of these events. In this work, we present a group-evolution-based characterization of nodes' behavioral patterns, via which we can estimate the probability that they will interact. In general, the primary aim of this paper is to offer a minimal toy model to detect missing links based on the evolution of groups, and to give a simple explanation of the rationality of the model. We first introduce perturbations into networks to obtain stable cluster structures, and the stable clusters determine the stability of each node. Fluctuations, another node behavior, are then characterized by the participation of each node in its own group. Finally, we demonstrate that such characteristics allow us to predict link existence and propose a model for link prediction which outperforms many classical methods with reduced computational time at large scales. Encouraging experimental results obtained on real networks show that our approach can effectively predict missing links in a network; even when nearly 40% of the edges are missing, it retains stable performance.

  5. Improving BeiDou real-time precise point positioning with numerical weather models

    NASA Astrophysics Data System (ADS)

    Lu, Cuixian; Li, Xingxing; Zus, Florian; Heinkelmann, Robert; Dick, Galina; Ge, Maorong; Wickert, Jens; Schuh, Harald

    2017-09-01

    Precise positioning with the current Chinese BeiDou Navigation Satellite System has proven to be of accuracy comparable to the Global Positioning System, which is at centimeter level for the horizontal components and sub-decimeter level for the vertical component. However, BeiDou precise point positioning (PPP) is limited by a relatively long convergence time. In this study, we develop a numerical weather model (NWM) augmented PPP processing algorithm to improve BeiDou precise positioning. Tropospheric delay parameters, i.e., zenith delays, mapping functions, and horizontal delay gradients, derived from short-range forecasts of the Global Forecast System of the National Centers for Environmental Prediction (NCEP), are applied in BeiDou real-time PPP. Observational data from stations capable of tracking the BeiDou constellation from the International GNSS Service (IGS) Multi-GNSS Experiments network are processed with both the introduced NWM-augmented PPP and the standard PPP processing. The accuracy of tropospheric delays derived from NCEP is assessed against the IGS final tropospheric delay products. The positioning results show that an improvement in convergence time of up to 60.0 and 66.7% for the east and vertical components, respectively, can be achieved with the NWM-augmented PPP solution compared to the standard PPP solutions, while only slight improvement in the solution convergence can be found for the north component. A positioning accuracy of 5.7 and 5.9 cm for the east component is achieved with the standard PPP that estimates gradients and the one that estimates no gradients, respectively, in comparison to 3.5 cm for the NWM-augmented PPP, showing an improvement of 38.6 and 40.1%. Compared to the accuracy of 3.7 and 4.1 cm for the north component derived from the two standard PPP solutions, the accuracy of the NWM-augmented PPP solution is improved to 2.0 cm, by about 45.9 and 51.2%. The positioning accuracy for the up component

  6. Supercontinents, mantle dynamics and plate tectonics: A perspective based on conceptual vs. numerical models

    NASA Astrophysics Data System (ADS)

    Yoshida, Masaki; Santosh, M.

    2011-03-01

    assembly which erodes the continental crust. Ongoing subduction erosion also occurs at the leading edges of dispersing plates, which also contributes to crustal destruction, although this is only a temporary process. The previous numerical studies of mantle convection suggested that there is a significant feedback between mantle convection and continental drift. The process of assembly of supercontinents induces a temperature increase beneath the supercontinent due to the thermal insulating effect. Such thermal insulation leads to a planetary-scale reorganization of mantle flow and results in longest-wavelength thermal heterogeneity in the mantle, i.e., degree-one convection in three-dimensional spherical geometry. The formation of degree-one convection seems to be integral to the emergence of periodic supercontinent cycles. The rifting and breakup of supercontinental assemblies may be caused by either tensional stress due to the thermal insulating effect, or large-scale partial melting resulting from the flow reorganization and consequent temperature increase beneath the supercontinent. Supercontinent breakup has also been correlated with the temperature increase due to upwelling plumes originating from the deeper lower mantle or CMB as a return flow of plate subduction occurring at supercontinental margins. The active mantle plumes from the CMB may disrupt the regularity of supercontinent cycles. Two end-member scenarios can be envisaged for the mantle convection cycle. One is that mantle convection with dispersing continental blocks has a short-wavelength structure, or close to degree-two structure as the present Earth, and when a supercontinent forms, mantle convection evolves into degree-one structure. Another is that mantle convection with dispersing continental blocks has a degree-one structure, and when a supercontinent forms, mantle convection evolves into degree-two structure. In the case of the former model, it would take longer time to form a supercontinent

  7. Eruption cycles in a basaltic andesite system: insights from numerical modeling

    NASA Astrophysics Data System (ADS)

    Smekens, J. F.; Clarke, A. B.; De'Michieli Vitturi, M.

    2015-12-01

    Persistently active explosive volcanoes are characterized by short explosive bursts, which often occur at periodic intervals numerous times per day, spanning years to decades. Many of these systems present relatively evolved compositions (andesite to rhyolite), and their cyclic activity has been the subject of extensive work (e.g., Soufriere Hills Volcano, Montserrat). However, the same periodic behavior can also be observed at open systems of more mafic compositions, such as Semeru in Indonesia or Karymsky in Kamchatka for example. In this work, we use DOMEFLOW, a 1D transient numerical model of magma ascent, to identify the conditions that lead to and control periodic eruptions in basaltic andesite systems, where the viscosity of the liquid phase can be drastically lower. Periodic behavior occurs for a very narrow range of conditions, for which the mass balance between magma flux and open-system gas escape repeatedly generates a viscous plug, pressurizes the magma beneath the plug, and then explosively disrupts it. The characteristic timescale and magnitude of the eruptive cycles are controlled by the overall viscosity of the magmatic mixture, with higher viscosities leading to longer cycles and lower flow rates at the top of the conduit. Cyclic eruptions in basaltic andesite systems are observed for higher crystal contents, smaller conduit radii, and over a wider range of chamber pressures than the andesitic system, all of which are the direct consequence of a decrease in viscosity of the melt phase, and in turn in the intensity of the viscous forces generated by the system. Results suggest that periodicity can exist in more mafic systems with relatively lower chamber pressures than andesite and rhyolite systems, and may explain why more mafic magmas sometimes remain active for decades.

  8. Quantum games on evolving random networks

    NASA Astrophysics Data System (ADS)

    Pawela, Łukasz

    2016-09-01

    We study the advantages of quantum strategies in evolutionary social dilemmas on evolving random networks. We focus on three two-player games: the prisoner's dilemma, snowdrift, and stag-hunt games. The results show the benefits of quantum strategies for the prisoner's dilemma game. For the other two games, we obtain regions of parameters where the quantum strategies dominate, as well as regions where the classical strategies coexist.
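
    The sketch below is a purely classical baseline of this setting (the paper's quantum strategies are not implemented, and the network size, payoffs and rewiring rate are invented): an evolutionary prisoner's dilemma on an Erdős–Rényi network that is rewired each generation, with players imitating their best-scoring neighbour.

      import numpy as np

      rng = np.random.default_rng(0)

      # Classical baseline only: prisoner's dilemma on an evolving random network.
      n, p_edge, rewire_frac, generations = 100, 0.05, 0.1, 50
      R, S, T, P = 3.0, 0.0, 5.0, 1.0              # standard PD payoffs (assumed)
      payoff = np.array([[R, S], [T, P]])          # rows: own move (0 = C, 1 = D)

      adj = np.triu(rng.random((n, n)) < p_edge, 1)
      adj = adj | adj.T                            # symmetric, no self-loops
      strategy = rng.integers(0, 2, n)             # 0 = cooperate, 1 = defect

      for _ in range(generations):
          score = np.array([payoff[strategy[i], strategy[adj[i]]].sum()
                            for i in range(n)])
          new_strategy = strategy.copy()
          for i in range(n):                       # imitate best-scoring neighbour
              group = np.append(np.flatnonzero(adj[i]), i)
              new_strategy[i] = strategy[group[np.argmax(score[group])]]
          strategy = new_strategy

          edges = np.argwhere(np.triu(adj, 1))     # evolve the network: rewire edges
          k = max(1, int(rewire_frac * len(edges)))
          for i, j in edges[rng.choice(len(edges), size=k, replace=False)]:
              adj[i, j] = adj[j, i] = False
          added = 0
          while added < k:
              i, j = rng.integers(0, n, 2)
              if i != j and not adj[i, j]:
                  adj[i, j] = adj[j, i] = True
                  added += 1

      print(f"final cooperation fraction: {1.0 - strategy.mean():.2f}")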

  9. Closed loop deep brain stimulation: an evolving technology.

    PubMed

    Hosain, Md Kamal; Kouzani, Abbas; Tye, Susannah

    2014-12-01

    Deep brain stimulation is an effective and safe medical treatment for a variety of neurological and psychiatric disorders, including Parkinson's disease, essential tremor, dystonia, and treatment-resistant obsessive compulsive disorder. A closed loop deep brain stimulation (CLDBS) system automatically adjusts stimulation parameters in real time according to the brain's response. CLDBS continues to evolve with advances in brain stimulation technologies. This paper provides a survey of existing CLDBS systems. It highlights the issues associated with CLDBS systems, including feedback signal recording and processing, stimulation parameter setting, control algorithms, wireless telemetry, size, and power consumption. The benefits and limitations of the existing CLDBS systems are also presented. Whilst robust clinical proof of the benefits of the technology remains to be achieved, it has the potential to offer several advantages over open loop DBS. CLDBS can improve the efficiency and efficacy of therapy, eliminate the lengthy start-up period for programming and adjustment, provide personalized treatment, and make parameter setting automatic and adaptive.
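
    As a minimal illustration of the closed-loop idea (hypothetical signal, gains and limits; not a clinical device or any algorithm from the paper), the sketch below nudges the stimulation amplitude at each step in proportion to the error between a feedback biomarker and its target, an integral-style controller.

      import numpy as np

      rng = np.random.default_rng(1)

      target = 1.0                   # desired biomarker level (arbitrary units)
      gain = 0.5                     # controller gain (hypothetical)
      amp_min, amp_max = 0.0, 3.0    # mA, allowed stimulation amplitude range

      amplitude, biomarker = 0.0, 2.0          # start in a high-biomarker state
      for step in range(50):
          error = biomarker - target
          # accumulate the error into the amplitude (integral-style action)
          amplitude = float(np.clip(amplitude + gain * error, amp_min, amp_max))
          # toy "plant": stimulation suppresses the biomarker, plus sensor noise
          biomarker = max(0.0, 2.0 - 0.6 * amplitude
                          + 0.05 * rng.standard_normal())

      print(f"settled at amplitude {amplitude:.2f} mA, biomarker {biomarker:.2f}")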

  10. Building Blocks for Reliable Complex Nonlinear Numerical Simulations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.

    2005-01-01

    This chapter describes some of the building blocks to ensure a higher level of confidence in the predictability and reliability (PAR) of numerical simulation of multiscale complex nonlinear problems. The focus is on relating PAR of numerical simulations with complex nonlinear phenomena of numerics. To isolate sources of numerical uncertainties, the possible discrepancy between the chosen partial differential equation (PDE) model and the real physics and/or experimental data is set aside. The discussion is restricted to how well numerical schemes can mimic the solution behavior of the underlying PDE model for finite time steps and grid spacings. The situation is complicated by the fact that the available theory for the understanding of nonlinear behavior of numerics is not at a stage to fully analyze the nonlinear Euler and Navier-Stokes equations. The discussion is based on the knowledge gained for nonlinear model problems with known analytical solutions to identify and explain the possible sources and remedies of numerical uncertainties in practical computations.
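
    A classical illustration of the kind of numerical uncertainty discussed here (a standard model-problem example, not taken from this chapter) is explicit Euler applied to the logistic equation du/dt = u(1 - u): for small time steps the scheme reaches the true steady state u = 1, but for larger time steps it settles on spurious periodic or chaotic states that the underlying ODE never exhibits.

      def euler_logistic(dt, steps=2000, u0=0.1):
          """Explicit Euler for du/dt = u*(1 - u); all exact solutions with
          0 < u0 < 1 approach u = 1 monotonically."""
          u = u0
          for _ in range(steps):
              u = u + dt * u * (1.0 - u)
          return u

      # dt = 0.5 and 1.9 converge to the correct steady state u = 1;
      # dt = 2.3 locks onto a spurious period-2 orbit; dt = 2.7 is chaotic.
      for dt in (0.5, 1.9, 2.3, 2.7):
          print(f"dt = {dt:>3}: u after 2000 steps = {euler_logistic(dt):.4f}")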

  11. Rings and arcs around evolved stars - I. Fingerprints of the last gasps in the formation process of planetary nebulae

    NASA Astrophysics Data System (ADS)

    Ramos-Larios, G.; Santamaría, E.; Guerrero, M. A.; Marquez-Lugo, R. A.; Sabin, L.; Toalá, J. A.

    2016-10-01

    Evolved stars such as asymptotic giant branch (AGB) stars, post-AGB stars, proto-planetary nebulae (proto-PNe), and planetary nebulae (PNe) show rings and arcs around them and their nebular shells. We have searched for these morphological features in optical Hubble Space Telescope and mid-infrared Spitzer Space Telescope images of ~650 proto-PNe and PNe and discovered them in 29 new sources. Adding those to previous detections, we derive a frequency of occurrence ≃8 per cent. All images have been processed to remove the underlying envelope emission and enhance faint outer structures in order to investigate the spacing between rings and arcs and their number. The average time lapse between consecutive rings and arcs is estimated to be in the range 500-1200 yr. The spacing between them is found to be essentially constant for each source, suggesting that the mechanism responsible for the formation of these structures in the final stages of stellar evolution is stable over time periods of the order of the total duration of the ejection. In our sample, this period spans ≤4500 yr.
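
    The time-lapse estimate can be reproduced to order of magnitude with a simple kinematic calculation (the spacing, distance and wind speed below are illustrative values, not measurements from the paper): the interval between consecutive ejections is the linear spacing between arcs divided by the envelope expansion velocity.

      AU_KM = 1.495978707e8        # km per astronomical unit
      YEAR_S = 3.15576e7           # seconds per year

      spacing_arcsec = 2.0         # angular spacing between consecutive arcs (assumed)
      distance_pc = 1000.0         # distance to the source (assumed)
      v_exp_km_s = 15.0            # typical AGB envelope expansion speed (assumed)

      spacing_au = spacing_arcsec * distance_pc     # small-angle: 1" at 1 pc = 1 AU
      time_lapse_yr = spacing_au * AU_KM / v_exp_km_s / YEAR_S
      print(f"time lapse between arcs ~ {time_lapse_yr:.0f} yr")   # ~630 yr here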

  12. Biodiversity and ecosystem functioning in evolving food webs.

    PubMed

    Allhoff, K T; Drossel, B

    2016-05-19

    We use computer simulations in order to study the interplay between biodiversity and ecosystem functioning (BEF) during both the formation and the ongoing evolution of large food webs. A species in our model is characterized by its own body mass, its preferred prey body mass and the width of its potential prey body mass spectrum. On an ecological time scale, population dynamics determines which species are viable and which ones go extinct. On an evolutionary time scale, new species emerge as modifications of existing ones. The network structure thus emerges and evolves in a self-organized manner. We analyse the relation between functional diversity and five community level measures of ecosystem functioning. These are the metabolic loss of the predator community, the total biomasses of the basal and the predator community, and the consumption rates on the basal community and within the predator community. Clear BEF relations are observed during the initial build-up of the networks, or when parameters are varied, causing bottom-up or top-down effects. However, ecosystem functioning measures fluctuate only very little during long-term evolution under constant environmental conditions, despite changes in functional diversity. This result supports the hypothesis that trophic cascades are weaker in more complex food webs. © 2016 The Author(s).
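
    The species representation described above can be sketched as follows (the Gaussian kernel and all trait values are illustrative assumptions, not the paper's exact functional forms): each species carries its own body mass, a preferred prey body mass and a niche width, and the strength with which it feeds on a potential prey falls off with the distance between the prey's body mass and the predator's preferred prey mass.

      import math
      from dataclasses import dataclass

      @dataclass
      class Species:
          log_mass: float          # log10 of own body mass
          log_prey_mass: float     # log10 of preferred prey body mass
          niche_width: float       # width of the potential prey spectrum

      def feeding_strength(predator: Species, prey: Species) -> float:
          """Gaussian feeding kernel on log body mass (illustrative form)."""
          d = prey.log_mass - predator.log_prey_mass
          return math.exp(-0.5 * (d / predator.niche_width) ** 2)

      # A new species emerges as a modification of an existing one.
      parent = Species(log_mass=3.0, log_prey_mass=1.0, niche_width=0.5)
      mutant = Species(log_mass=parent.log_mass + 0.2,
                       log_prey_mass=parent.log_prey_mass + 0.1,
                       niche_width=parent.niche_width)
      basal = Species(log_mass=0.0, log_prey_mass=-2.0, niche_width=0.5)

      print(f"parent -> basal: {feeding_strength(parent, basal):.3f}")
      print(f"mutant -> basal: {feeding_strength(mutant, basal):.3f}")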

  13. Cis-Lunar Reusable In-Space Transportation Architecture for the Evolvable Mars Campaign

    NASA Technical Reports Server (NTRS)

    McVay, Eric S.; Jones, Christopher A.; Merrill, Raymond G.

    2016-01-01

    Human exploration missions to Mars or other destinations in the solar system require large quantities of propellant to enable the transportation of the required elements from Earth's sphere of influence to Mars. Current and proposed launch vehicles are incapable of launching all of the requisite mass on a single vehicle; hence, multiple launches and in-space aggregation are required to perform a Mars mission. This study examines the potential of reusable chemical propulsion stages based in cis-lunar space to meet the transportation objectives of the Evolvable Mars Campaign and identifies cis-lunar propellant supply requirements. These stages could be supplied with fuel and oxidizer delivered to cis-lunar space, either launched from Earth or obtained from other inner solar system sources such as the Moon or near-Earth asteroids. The effects of uncertainty in the model parameters are evaluated through sensitivity analysis of key parameters, including the liquid propellant combination, the inert mass fraction of the vehicle, the change-in-velocity margin, and changes in the payload masses. The outcomes of this research include a description of the transportation elements, the architecture that they enable, and an option for a campaign that meets the objectives of the Evolvable Mars Campaign. This provides a more complete understanding of the propellant quantities, as a function of time, that must be delivered to cis-lunar space. Over the selected sensitivity ranges, for the payload and schedule requirements of the 2016 point of departure of the Evolvable Mars Campaign destination systems, the resulting propellant delivery quantities are between 34 and 61 tonnes per year of hydrogen and oxygen propellant, between 53 and 76 tonnes per year of methane and oxygen propellant, or between 74 and 92 tonnes per year of hypergolic propellant. These estimates can guide future analyses of propellant manufacture and/or delivery architectures.
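
    The propellant bookkeeping behind such estimates follows from the rocket equation; the short sketch below (the payload, delta-v, specific impulse and inert fraction are illustrative values, not figures from the study) computes the propellant a stage needs to push a payload through a given delta-v when its inert mass is a fixed fraction of the stage mass.

      import math

      def propellant_tonnes(payload_t, dv_m_s, isp_s, inert_fraction):
          """Propellant (t) to push payload_t through dv_m_s with a stage whose
          inert mass is inert_fraction of its (inert + propellant) mass."""
          g0 = 9.80665                                  # m/s^2
          mr = math.exp(dv_m_s / (g0 * isp_s))          # Tsiolkovsky mass ratio m0/mf
          k = inert_fraction / (1.0 - inert_fraction)   # inert mass per unit propellant
          denom = 1.0 - (mr - 1.0) * k
          if denom <= 0.0:
              raise ValueError("stage cannot deliver this delta-v")
          return (mr - 1.0) * payload_t / denom

      # Illustrative case: 40 t payload, a 4 km/s leg, a hydrogen/oxygen stage
      # at Isp = 450 s, and a 15% inert mass fraction.
      print(f"{propellant_tonnes(40.0, 4000.0, 450.0, 0.15):.0f} t of propellant")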

  14. Numerical simulations of the Cordilleran ice sheet through the last glacial cycle

    NASA Astrophysics Data System (ADS)

    Seguinot, Julien; Rogozhina, Irina; Stroeven, Arjen P.; Margold, Martin; Kleman, Johan

    2016-03-01

    After more than a century of geological research, the Cordilleran ice sheet of North America remains among the least understood in terms of its former extent, volume, and dynamics. Because of the mountainous topography on which the ice sheet formed, geological studies have often had only local or regional relevance and shown such a complexity that ice-sheet-wide spatial reconstructions of advance and retreat patterns are lacking. Here we use a numerical ice sheet model calibrated against field-based evidence to attempt a quantitative reconstruction of the Cordilleran ice sheet history through the last glacial cycle. A series of simulations is driven by time-dependent temperature offsets from six proxy records located around the globe. Although this approach reveals large variations in model response to evolving climate forcing, all simulations produce two major glaciations during marine oxygen isotope stages 4 (62.2-56.9 ka) and 2 (23.2-16.9 ka). The timing of glaciation is better reproduced using temperature reconstructions from Greenland and Antarctic ice cores than from regional oceanic sediment cores. During most of the last glacial cycle, the modelled ice cover is discontinuous and restricted to high mountain areas. However, widespread precipitation over the Skeena Mountains favours the persistence of a central ice dome throughout the glacial cycle. It acts as a nucleation centre before the Last Glacial Maximum and hosts the last remains of Cordilleran ice until the middle Holocene (6.7 ka).
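
    The time-dependent temperature-offset driving can be sketched as follows (the synthetic proxy, its scaling and the glacial offset are illustrative assumptions, not the study's forcing): a normalized proxy record is mapped onto a temperature-offset time series that is added uniformly to a present-day climatology.

      import numpy as np

      t_ka = np.linspace(-120.0, 0.0, 1201)          # model time, ka before present
      proxy = np.sin(2.0 * np.pi * t_ka / 100.0)     # stand-in for an ice-core record
      # 0 at the proxy maximum (warmest), 1 at the proxy minimum (coldest)
      frac = (proxy.max() - proxy) / (proxy.max() - proxy.min())
      dT_glacial = -7.0                              # K, assumed offset at the minimum
      delta_T = dT_glacial * frac                    # K, offset applied to climatology

      # Surface forcing at each model time: T(x, y, t) = T_present(x, y) + delta_T(t)
      i = int(delta_T.argmin())
      print(f"coldest offset {delta_T[i]:.1f} K at t = {t_ka[i]:.0f} ka")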

  15. Applications of Evolving Robotic Technology for Head and Neck Surgery.

    PubMed

    Sharma, Arun; Albergotti, W Greer; Duvvuri, Umamaheswar

    2016-03-01

    Assess the use and potential benefits of a new robotic system for transoral radical tonsillectomy, transoral supraglottic laryngectomy, and retroauricular thyroidectomy in a cadaver dissection. Three previously described robotic procedures (transoral radical tonsillectomy, transoral supraglottic laryngectomy, and retroauricular thyroidectomy) were performed in a cadaver using the da Vinci Xi Surgical System. Surgical exposure and access, operative time, and number of collisions were examined objectively. The new robotic system was used to perform transoral radical tonsillectomy with dissection and preservation of glossopharyngeal nerve branches, transoral supraglottic laryngectomy, and retroauricular thyroidectomy. There was excellent exposure without any difficulties in access. Robotic operative times (excluding set-up and docking times) for the 3 procedures in the cadaver were 12.7, 14.3, and 21.2 minutes (excluding retroauricular incision and subplatysmal elevation), respectively. No robotic arm collisions were noted during these 3 procedures. The retroauricular thyroidectomy was performed using 4 robotic ports, each with 8 mm instruments. The use of updated and evolving robotic technology improves the ease of previously described robotic head and neck procedures and may allow surgeons to perform increasingly complex surgeries. © The Author(s) 2015.

  16. Direct Numerical Simulations of Multiphase Flows

    NASA Astrophysics Data System (ADS)

    Tryggvason, Gretar

    2013-03-01

    Many natural and industrial processes, such as rain and gas exchange between the atmosphere and oceans, boiling heat transfer, atomization, and chemical reactions in bubble columns, involve multiphase flows. Often the mixture can be described as a disperse flow where one phase consists of bubbles or drops. Direct numerical simulations (DNS) of disperse flows have recently been used to study the dynamics of multiphase flows with a large number of bubbles and drops, often showing that the collective motion results in relatively simple large-scale structure. Here we review simulations of bubbly flows in vertical channels where the flow direction, as well as the bubble deformability, has profound implications for the flow structure and the total flow rate. Results obtained so far are summarized and open questions identified. The resolution for DNS of multiphase flows is usually determined by a dominant scale, such as the average bubble or drop size, but in many cases much smaller scales are also present. These scales often consist of thin films, threads, or tiny drops appearing during coalescence or breakup, or are due to the presence of additional physical processes that operate on a very different time scale than the fluid flow. The presence of these small-scale features demands excessive resolution for conventional numerical approaches. However, at small flow scales the effects of surface tension are generally strong, so the interface geometry is simple, and viscous forces dominate and keep the flow simple as well. These are exactly the conditions under which analytical models can be used, and we will discuss efforts to combine a semi-analytical description of the small-scale processes with a fully resolved simulation of the rest of the flow. We will, in particular, present an embedded analytical description to capture the mass transfer from bubbles in liquids where the diffusion of mass is much slower than the diffusion of momentum. This results in very thin mass
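
    The scale-separation argument behind embedding an analytical description can be illustrated with standard estimates (the bubble size, rise speed and fluid properties below are assumed values, and the correlation used is classical penetration theory, not the authors' embedded model): when mass diffuses much more slowly than momentum, the concentration boundary layer is thinner than the momentum layer by roughly the square root of the Schmidt number, and a Higbie-type estimate gives the liquid-side transfer coefficient.

      import math

      d_bubble = 1.0e-3   # m, bubble diameter (assumed)
      u_rise = 0.2        # m/s, rise velocity (assumed)
      nu = 1.0e-6         # m^2/s, kinematic viscosity of water
      D = 2.0e-9          # m^2/s, diffusivity of the dissolved gas (assumed)

      Sc = nu / D                                        # Schmidt number (~500)
      thickness_ratio = Sc ** -0.5                       # delta_c / delta_u ~ Sc^(-1/2)
      t_contact = d_bubble / u_rise                      # Higbie exposure time
      k_L = 2.0 * math.sqrt(D / (math.pi * t_contact))   # liquid-side coefficient, m/s
      Sh = k_L * d_bubble / D                            # Sherwood number

      print(f"Sc = {Sc:.0f}, delta_c/delta_u ~ {thickness_ratio:.3f}, "
            f"k_L ~ {k_L:.2e} m/s, Sh ~ {Sh:.0f}")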

  17. TEMPEST: A three-dimensional time-dependent computer program for hydrothermal analysis: Volume 1, Numerical methods and input instructions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trent, D.S.; Eyler, L.L.; Budden, M.J.

    This document describes the numerical methods, current capabilities, and the use of the TEMPEST (Version L, MOD 2) computer program. TEMPEST is a transient, three-dimensional, hydrothermal computer program that is designed to analyze a broad range of coupled fluid dynamic and heat transfer systems of particular interest to the Fast Breeder Reactor thermal-hydraulic design community. The full three-dimensional, time-dependent equations of motion, continuity, and heat transport are solved for either laminar or turbulent fluid flow, including heat diffusion and generation in both solid and liquid materials. 10 refs., 22 figs., 2 tabs.
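
    The class of problem such a code solves can be illustrated with a minimal transient conduction example (this is not TEMPEST's numerical method; the grid, properties and source term are arbitrary): a one-dimensional explicit step for the heat equation with volumetric generation, dT/dt = alpha * d2T/dx2 + q/(rho*cp).

      import numpy as np

      nx, L = 51, 0.1            # grid points, domain length (m)
      alpha = 1.0e-5             # m^2/s, thermal diffusivity (assumed)
      q_vol = 1.0e6              # W/m^3, volumetric heat generation (assumed)
      rho_cp = 4.0e6             # J/(m^3 K), volumetric heat capacity (assumed)

      dx = L / (nx - 1)
      dt = 0.4 * dx * dx / alpha            # explicit stability limit is 0.5
      T = np.full(nx, 300.0)                # K; both ends held at 300 K

      for _ in range(5000):
          lap = (T[:-2] - 2.0 * T[1:-1] + T[2:]) / dx**2
          T[1:-1] += dt * (alpha * lap + q_vol / rho_cp)

      print(f"peak temperature after {5000 * dt:.0f} s: {T.max():.1f} K")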

  18. The Advantage of Playing Home in NBA: Microscopic, Team-Specific and Evolving Features

    PubMed Central

    Ribeiro, Haroldo V.; Mukherjee, Satyam; Zeng, Xiao Han T.

    2016-01-01

    The idea that the success rate of a team increases when playing home is broadly accepted and documented for a wide variety of sports. Investigations of the so-called “home advantage phenomenon” date back to the 1970s and have attracted the attention of scholars and sport enthusiasts ever since. These studies have mainly focused on identifying the phenomenon and trying to correlate it with external factors such as crowd noise and referee bias. Much less is known about the effects of home advantage on the “microscopic” dynamics of the game (within the game) or about possible team-specific and evolving features of this phenomenon. Here we present a detailed study of these features in the National Basketball Association (NBA). By analyzing play-by-play events of more than sixteen thousand games that span thirteen NBA seasons, we have found that home advantage affects the microscopic dynamics of the game by increasing the scoring rates and decreasing the time intervals between scores of teams playing home. We verified that these two features differ among the NBA teams; for instance, the scoring rate of the Cleveland Cavaliers is increased by ≈0.16 points per minute (on average over the seasons 2004–05 to 2013–14) when playing home, whereas for the New Jersey Nets (now the Brooklyn Nets) this rate increases by only ≈0.04 points per minute. We further observed that these microscopic features have evolved over time in a non-trivial manner when analyzing the results team by team. However, after averaging over all teams some regularities emerge; in particular, we noticed that the average differences in the scoring rates and in the characteristic times (related to the time intervals between scores) have slightly decreased over time, suggesting a weakening of the phenomenon. This study thus adds evidence of the home advantage phenomenon and contributes to a deeper understanding of this effect over the course of games. PMID:27015636

  19. The Advantage of Playing Home in NBA: Microscopic, Team-Specific and Evolving Features.

    PubMed

    Ribeiro, Haroldo V; Mukherjee, Satyam; Zeng, Xiao Han T

    2016-01-01

    The idea that the success rate of a team increases when playing home is broadly accepted and documented for a wide variety of sports. Investigations of the so-called "home advantage phenomenon" date back to the 1970s and have attracted the attention of scholars and sport enthusiasts ever since. These studies have mainly focused on identifying the phenomenon and trying to correlate it with external factors such as crowd noise and referee bias. Much less is known about the effects of home advantage on the "microscopic" dynamics of the game (within the game) or about possible team-specific and evolving features of this phenomenon. Here we present a detailed study of these features in the National Basketball Association (NBA). By analyzing play-by-play events of more than sixteen thousand games that span thirteen NBA seasons, we have found that home advantage affects the microscopic dynamics of the game by increasing the scoring rates and decreasing the time intervals between scores of teams playing home. We verified that these two features differ among the NBA teams; for instance, the scoring rate of the Cleveland Cavaliers is increased by ≈0.16 points per minute (on average over the seasons 2004-05 to 2013-14) when playing home, whereas for the New Jersey Nets (now the Brooklyn Nets) this rate increases by only ≈0.04 points per minute. We further observed that these microscopic features have evolved over time in a non-trivial manner when analyzing the results team by team. However, after averaging over all teams some regularities emerge; in particular, we noticed that the average differences in the scoring rates and in the characteristic times (related to the time intervals between scores) have slightly decreased over time, suggesting a weakening of the phenomenon. This study thus adds evidence of the home advantage phenomenon and contributes to a deeper understanding of this effect over the course of games.
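
    A toy version of the scoring-rate comparison described in the two records above (the play-by-play events below are invented, not the study's data set) illustrates the computation: sum a team's points over its home and away games and divide by the minutes played in each case.

      # Each game is a list of (minute_of_game, points_scored) events for one team.
      home_games = [
          [(1.5, 2), (3.0, 3), (5.2, 2), (8.8, 2), (11.0, 3)],
          [(0.9, 2), (4.1, 2), (7.7, 3), (10.2, 2)],
      ]
      away_games = [
          [(2.2, 2), (6.5, 2), (12.0, 3)],
          [(1.1, 3), (9.4, 2)],
      ]
      GAME_MINUTES = 48.0

      def scoring_rate(games):
          points = sum(p for game in games for _, p in game)
          return points / (GAME_MINUTES * len(games))

      home_rate, away_rate = scoring_rate(home_games), scoring_rate(away_games)
      print(f"home {home_rate:.3f} pts/min, away {away_rate:.3f} pts/min, "
            f"difference {home_rate - away_rate:+.3f} pts/min")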

  20. Satellite transitions acquired in real time by magic angle spinning (STARTMAS): ``Ultrafast'' high-resolution MAS NMR spectroscopy of spin I =3/2 nuclei

    NASA Astrophysics Data System (ADS)

    Thrippleton, Michael J.; Ball, Thomas J.; Wimperis, Stephen

    2008-01-01

    The satellite transitions acquired in real time by magic angle spinning (STARTMAS) NMR experiment combines a train of pulses with sample rotation at the magic angle to refocus the first- and second-order quadrupolar broadening of spin I =3/2 nuclei in a series of echoes, while allowing the isotropic chemical and quadrupolar shifts to evolve. The result is real-time isotropic NMR spectra at high spinning rates using conventional MAS equipment. In this paper we describe in detail how STARTMAS data can be acquired and processed with ease on commercial equipment. We also discuss the advantages and limitations of the approach and illustrate the discussion with numerical simulations and experimental data from four different powdered solids.
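
    A generic sketch of the echo-train processing idea (this is not the authors' acquisition or processing scheme; the synthetic signal, dwell time and shift are invented) is given below: because the anisotropic broadening is refocused at each echo top while the isotropic shift keeps evolving, sampling the echo maxima and Fourier transforming that series recovers an isotropic-dimension spectrum.

      import numpy as np

      rng = np.random.default_rng(2)
      dwell = 5.0e-6          # s, dwell time within each echo (assumed)
      n_per_echo = 200        # points per rotor-synchronized echo (assumed)
      n_echoes = 128
      iso_hz = 312.5          # isotropic frequency to recover (assumed)

      t_fast = (np.arange(n_per_echo) - n_per_echo // 2) * dwell
      echo_period = n_per_echo * dwell
      signal = np.concatenate([
          np.exp(2j * np.pi * iso_hz * k * echo_period)        # isotropic evolution
          * np.exp(-(t_fast / (20 * dwell)) ** 2)              # refocused echo shape
          + 0.01 * (rng.standard_normal(n_per_echo)
                    + 1j * rng.standard_normal(n_per_echo))
          for k in range(n_echoes)
      ])

      echo_tops = signal.reshape(n_echoes, n_per_echo)[:, n_per_echo // 2]
      spectrum = np.abs(np.fft.fftshift(np.fft.fft(echo_tops)))
      freqs = np.fft.fftshift(np.fft.fftfreq(n_echoes, d=echo_period))
      print(f"recovered isotropic shift ~ {freqs[spectrum.argmax()]:.1f} Hz")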