The Monash University Interactive Simple Climate Model
NASA Astrophysics Data System (ADS)
Dommenget, D.
2013-12-01
The Monash University interactive simple climate model is a web-based interface that allows students and the general public to explore the physical simulation of the climate system with a real global climate model. It is based on the Globally Resolved Energy Balance (GREB) model, a climate model published by Dommenget and Floeter [2011] in the peer-reviewed journal Climate Dynamics. The model simulates most of the main physical processes in the climate system in a very simplistic way and therefore allows very fast and simple climate model simulations on a normal PC. Despite its simplicity, the model simulates the climate response to external forcings, such as a doubling of the CO2 concentration, very realistically (similar to state-of-the-art climate models). The Monash simple climate model web interface allows you to study the results of more than 2000 different model experiments in an interactive way, to work through a number of tutorials on the interactions of physical processes in the climate system, and to solve some puzzles. By switching physical processes on and off you can deconstruct the climate and learn how all the different processes interact to generate the observed climate and how the processes interact to generate the IPCC-predicted climate change for an anthropogenic CO2 increase. The presentation will illustrate how this web-based tool works and what the possibilities for teaching students with this tool are.
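For readers who want a feel for what even a drastically reduced energy-balance model can do, the following Python sketch integrates a zero-dimensional global energy balance with a prescribed CO2-doubling forcing. It is only a toy with a single Planck feedback, not the GREB model (which resolves the globe on a grid and includes hydrology, sea ice, and transport); all parameter values are illustrative.

```python
sigma = 5.67e-8                       # Stefan-Boltzmann constant [W m^-2 K^-4]
S, albedo, eps = 1361.0, 0.30, 0.62   # solar constant, planetary albedo, effective emissivity
C = 4.0e8                             # heat capacity of a ~100 m ocean mixed layer [J m^-2 K^-1]
F_2xCO2 = 3.7                         # radiative forcing of doubled CO2 [W m^-2]

def equilibrium_temperature(forcing, years=100, dt=86400.0):
    """Step C*dT/dt = S*(1-albedo)/4 - eps*sigma*T**4 + forcing to equilibrium."""
    T = 288.0
    for _ in range(int(years * 365)):
        net = S * (1.0 - albedo) / 4.0 - eps * sigma * T**4 + forcing
        T += net / C * dt
    return T

dT = equilibrium_temperature(F_2xCO2) - equilibrium_temperature(0.0)
print(f"equilibrium warming for doubled CO2: {dT:.2f} K")
```

Because this toy has no amplifying feedbacks, its doubled-CO2 warming comes out near 1 K, well below the response of GREB or IPCC-class models.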
A modeling paradigm for interdisciplinary water resources modeling: Simple Script Wrappers (SSW)
NASA Astrophysics Data System (ADS)
Steward, David R.; Bulatewicz, Tom; Aistrup, Joseph A.; Andresen, Daniel; Bernard, Eric A.; Kulcsar, Laszlo; Peterson, Jeffrey M.; Staggenborg, Scott A.; Welch, Stephen M.
2014-05-01
Holistic understanding of a water resources system requires tools capable of model integration. This team has developed an adaptation of the OpenMI (Open Modelling Interface) that allows easy exchange of the data passed between models. Capabilities have been developed to allow programs written in common languages such as Matlab, Python, and Scilab to share their data with other programs and accept other programs' data. We call this interface the Simple Script Wrapper (SSW). An implementation of SSW is shown that integrates groundwater, economic, and agricultural models in the High Plains region of Kansas. Output from these models illustrates the interdisciplinary discovery facilitated through use of SSW-implemented models. Reference: Bulatewicz, T., A. Allen, J.M. Peterson, S. Staggenborg, S.M. Welch, and D.R. Steward, The Simple Script Wrapper for OpenMI: Enabling interdisciplinary modeling studies, Environmental Modelling & Software, 39, 283-294, 2013. http://dx.doi.org/10.1016/j.envsoft.2012.07.006 http://code.google.com/p/simple-script-wrapper/
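The wrapper idea can be illustrated in a few lines of Python: each participating model exposes the same minimal lifecycle (initialize / update / get_value / set_value) so a controller can shuttle data between them every time step. This is only a sketch of the concept with invented toy models and method names; the actual SSW/OpenMI interface is richer (exchange items, units, time horizons).

```python
class GroundwaterToy:
    """Toy aquifer whose head declines in proportion to pumping."""
    def initialize(self):
        self.head, self.pumping = 100.0, 0.0   # water-table head [m]
    def update(self, dt):
        self.head -= 0.002 * dt * self.pumping
    def set_value(self, name, value):
        if name == "pumping": self.pumping = value
    def get_value(self, name):
        if name == "head": return self.head

class FarmToy:
    """Toy economic model: deeper water table -> costlier pumping -> lower demand."""
    def initialize(self):
        self.demand = 1.0
    def update(self, dt):
        pass
    def get_value(self, name):
        if name == "pumping": return self.demand
    def set_value(self, name, value):
        if name == "head": self.demand = max(0.2, 1.0 - 0.01 * (100.0 - value))

gw, farm = GroundwaterToy(), FarmToy()
gw.initialize(); farm.initialize()
for day in range(365):                          # controller passes data both ways each step
    gw.set_value("pumping", farm.get_value("pumping"))
    gw.update(1.0)
    farm.set_value("head", gw.get_value("head"))
    farm.update(1.0)
print(round(gw.get_value("head"), 2), round(farm.get_value("pumping"), 3))
```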
Stimulation from Simulation? A Teaching Model of Hillslope Hydrology for Use on Microcomputers.
ERIC Educational Resources Information Center
Burt, Tim; Butcher, Dave
1986-01-01
The design and use of a simple computer model that simulates hillslope hydrology is described in a teaching context. The model shows that a relatively complex environmental system can be constructed on the basis of a simple but realistic theory, thus allowing students to simulate the hydrological response of real hillslopes. (Author/TRS)
Simple animal models for amyotrophic lateral sclerosis drug discovery.
Patten, Shunmoogum A; Parker, J Alex; Wen, Xiao-Yan; Drapeau, Pierre
2016-08-01
Simple animal models have enabled great progress in uncovering the disease mechanisms of amyotrophic lateral sclerosis (ALS) and are helping in the selection of therapeutic compounds through chemical genetic approaches. Within this article, the authors provide a concise overview of simple model organisms, C. elegans, Drosophila and zebrafish, which have been employed to study ALS and discuss their value to ALS drug discovery. In particular, the authors focus on innovative chemical screens that have established simple organisms as important models for ALS drug discovery. There are several advantages of using simple animal model organisms to accelerate drug discovery for ALS. It is the authors' particular belief that the amenability of simple animal models to various genetic manipulations, the availability of a wide range of transgenic strains for labelling motoneurons and other cell types, combined with live imaging and chemical screens should allow for new detailed studies elucidating early pathological processes in ALS and subsequent drug and target discovery.
A simple model of hohlraum power balance and mitigation of SRS
Albright, Brian J.; Montgomery, David S.; Yin, Lin; ...
2016-04-01
A simple energy balance model has been obtained for laser-plasma heating in indirect-drive hohlraum plasmas that allows rapid estimation of how the temperature scales and evolves with parameters such as plasma density and composition. Furthermore, this model enables assessment of the effects on plasma temperature of, e.g., adding a high-Z dopant to the gas fill or applying magnetic fields.
Beyond harmonic sounds in a simple model for birdsong production.
Amador, Ana; Mindlin, Gabriel B
2008-12-01
In this work we present an analysis of the dynamics displayed by a simple bidimensional model of labial oscillations during birdsong production. We show that the same model capable of generating tonal sounds can present, for a wide range of parameters, solutions which are spectrally rich. The role of physiologically sensible parameters is discussed in each oscillatory regime, allowing us to interpret previously reported data.
Action Centered Contextual Bandits.
Greenewald, Kristjan; Tewari, Ambuj; Klasnja, Predrag; Murphy, Susan
2017-12-01
Contextual bandits have become popular as they offer a middle ground between very simple approaches based on multi-armed bandits and very complex approaches using the full power of reinforcement learning. They have demonstrated success in web applications and have a rich body of associated theoretical guarantees. Linear models are well understood theoretically and preferred by practitioners because they are not only easily interpretable but also simple to implement and debug. Furthermore, if the linear model is true, we get very strong performance guarantees. Unfortunately, in emerging applications in mobile health, the time-invariant linear model assumption is untenable. We provide an extension of the linear model for contextual bandits that has two parts: baseline reward and treatment effect. We allow the former to be complex but keep the latter simple. We argue that this model is plausible for mobile health applications. At the same time, it leads to algorithms with strong performance guarantees as in the linear model setting, while still allowing for complex nonlinear baseline modeling. Our theory is supported by experiments on data gathered in a recently concluded mobile health study.
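A small simulation illustrates the core idea of separating a complex baseline reward from a simple (linear) treatment effect. The sketch below randomizes actions with a fixed probability and regresses the reward on the centered-action features (a − p)·x, which isolates the linear treatment effect without any model of the nonlinear baseline. It is a hand-rolled illustration under these stated assumptions, not the authors' full action-centered algorithm, which additionally adapts the action probabilities and comes with regret guarantees.

```python
import numpy as np

rng = np.random.default_rng(3)
d, T, p = 5, 20_000, 0.5
theta = np.array([0.5, -0.4, 0.3, 0.0, 0.2])      # true linear treatment effect (assumed)

X = rng.normal(size=(T, d))
baseline = np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2   # complex, nonlinear baseline reward
actions = rng.binomial(1, p, T)                   # actions randomized with probability p
rewards = baseline + actions * (X @ theta) + rng.normal(0.0, 0.5, T)

# regressing the reward on the centered action (a - p) times the context isolates the
# treatment effect without modeling the baseline at all
Z = (actions - p)[:, None] * X
theta_hat = np.linalg.lstsq(Z, rewards, rcond=None)[0]
print(np.round(theta_hat, 2))                     # close to theta despite the nonlinear baseline
```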
Coulomb explosion of uniformly charged spheroids
NASA Astrophysics Data System (ADS)
Grech, M.; Nuter, R.; Mikaberidze, A.; di Cintio, P.; Gremillet, L.; Lefebvre, E.; Saalmann, U.; Rost, J. M.; Skupin, S.
2011-11-01
A simple, semianalytical model is proposed for nonrelativistic Coulomb explosion of a uniformly charged spheroid. This model allows us to derive the time-dependent particle energy distributions. Simple expressions are also given for the characteristic explosion time and maximum particle energies in the limits of extreme prolate and oblate spheroids as well as for the sphere. Results of particle simulations are found to be in remarkably good agreement with the model.
Hanson, Sonya M.; Ekins, Sean; Chodera, John D.
2015-01-01
All experimental assay data contain error, but the magnitude, type, and primary origin of this error are often not obvious. Here, we describe a simple set of assay modeling techniques based on the bootstrap principle that allow sources of error and bias to be simulated and propagated into assay results. We demonstrate how deceptively simple operations—such as the creation of a dilution series with a robotic liquid handler—can significantly amplify imprecision and even contribute substantially to bias. To illustrate these techniques, we review an example of how the choice of dispensing technology can impact assay measurements, and show how large contributions to discrepancies between assays can be easily understood and potentially corrected for. These simple modeling techniques—illustrated with an accompanying IPython notebook—can allow modelers to understand the expected error and bias in experimental datasets, and even help experimentalists design assays to more effectively reach accuracy and imprecision goals. PMID:26678597
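The flavor of these bootstrap-style simulations can be reproduced in a few lines: draw many virtual repeats of a serial dilution with a small systematic bias and random imprecision on each transfer, and look at the distribution of the final concentration. The imprecision and bias numbers below are invented for illustration and are not taken from the paper or its notebook.

```python
import numpy as np

rng = np.random.default_rng(0)

n_boot, n_steps = 10_000, 8                  # virtual repeats of a 1:2 serial dilution
cv_transfer, bias_transfer = 0.02, -0.01     # 2% imprecision, -1% systematic bias per transfer
cv_diluent = 0.01                            # 1% imprecision on the diluent volume

conc = np.full(n_boot, 1.0)                  # starting concentration (arbitrary units)
for _ in range(n_steps):
    v_t = 100.0 * (1 + bias_transfer) * (1 + rng.normal(0, cv_transfer, n_boot))
    v_d = 100.0 * (1 + rng.normal(0, cv_diluent, n_boot))
    conc = conc * v_t / (v_t + v_d)          # concentration after this dilution step

nominal = 0.5 ** n_steps
print(f"nominal {nominal:.5f}, mean {conc.mean():.5f}, CV {conc.std()/conc.mean():.1%}")
```

Even 1-2% per-step errors compound into a several-percent bias and spread after eight dilution steps, which is the kind of amplification the authors describe.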
Development of a Training Model for Laparoscopic Common Bile Duct Exploration
Rodríguez, Omaira; Benítez, Gustavo; Sánchez, Renata; De la Fuente, Liliana
2010-01-01
Background: Training and experience of the surgical team are fundamental for the safety and success of complex surgical procedures, such as laparoscopic common bile duct exploration. Methods: We describe an inert, simple, very low-cost, and readily available training model. Created using a “black box” and basic medical and surgical material, it allows training in the fundamental steps necessary for laparoscopic biliary tract surgery, namely (1) intraoperative cholangiography, (2) transcystic exploration, and (3) laparoscopic choledochotomy and T-tube insertion. Results: The proposed model has allowed for the development of the skills necessary for performing these procedures, contributing to their refinement and diminishing surgery time as the trainee advances along the learning curve. Further studies are directed towards objectively determining the impact of the model on skill acquisition. Conclusion: The described model is simple and readily available, allowing for accurate reproduction of the main steps and maneuvers that take place during laparoscopic common bile duct exploration, with the purpose of reducing failure and complications. PMID:20529526
Tracking trade transactions in water resource systems: A node-arc optimization formulation
NASA Astrophysics Data System (ADS)
Erfani, Tohid; Huskova, Ivana; Harou, Julien J.
2013-05-01
We formulate and apply a multicommodity network flow node-arc optimization model capable of tracking trade transactions in complex water resource systems. The model uses a simple node-to-node network connectivity matrix and does not require preprocessing of all possible flow paths in the network. We compare the proposed node-arc formulation with an existing arc-path (flow path) formulation and explain the advantages and difficulties of both approaches. We verify the proposed formulation on a hypothetical water distribution network. Results indicate the arc-path model solves the problem with fewer constraints, but the proposed formulation allows using a simple network connectivity matrix, which simplifies modeling large or complex networks. The proposed algorithm allows converting existing node-arc hydroeconomic models that broadly represent water trading to ones that also track individual supplier-receiver relationships (trade transactions).
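In a node-arc formulation the allocation problem reduces to a linear program whose equality constraints are the node-arc incidence matrix times the arc-flow vector, set equal to the nodal supplies and demands. The sketch below solves a tiny hypothetical five-arc network with SciPy; the network, costs, and capacities are invented, and the real model adds commodities (trade transactions) and hydrologic detail on top of this structure.

```python
import numpy as np
from scipy.optimize import linprog

# hypothetical 4-node, 5-arc network: (tail, head, unit cost, capacity)
arcs = [(0, 1, 2.0, 10.0), (0, 2, 4.0, 8.0), (1, 2, 1.0, 5.0),
        (1, 3, 6.0, 10.0), (2, 3, 3.0, 10.0)]
n_nodes = 4

A_eq = np.zeros((n_nodes, len(arcs)))      # node-arc incidence matrix
for j, (tail, head, _, _) in enumerate(arcs):
    A_eq[tail, j] = 1.0                    # flow leaving the tail node
    A_eq[head, j] = -1.0                   # flow entering the head node

b_eq = np.array([12.0, 0.0, 0.0, -12.0])   # 12 units supplied at node 0, demanded at node 3
cost = np.array([c for _, _, c, _ in arcs])
bounds = [(0.0, u) for _, _, _, u in arcs]

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(res.x, res.fun)                      # optimal arc flows and total cost
```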
Fürst, Rafael Vilhena de Carvalho; Polimanti, Afonso César; Galego, Sidnei José; Bicudo, Maria Claudia; Montagna, Erik; Corrêa, João Antônio
2017-03-01
To present a simple and affordable model able to properly simulate an ultrasound-guided venous access. The simulation was made using a latex balloon tube filled with water and dye solution implanted in a thawed chicken breast with bones. The presented model allows the simulation of all implant stages of a central catheter. The obtained echogenicity is similar to that observed in human tissue, and the ultrasound identification of the tissues, balloon, needle, wire guide and catheter is feasible and reproducible. The proposed model is simple, economical, easy to manufacture and capable of realistically and effectively simulating an ultrasound-guided venous access.
The chicken foot digital replant training model.
Athanassopoulos, Thanassi; Loh, Charles Yuen Yung
2015-01-01
A simple, readily available digital replantation model in the chicken foot is described. This high fidelity model will hopefully allow trainees in hand surgery to gain further experience in replant surgery prior to clinical application.
Modeling shared resources with generalized synchronization within a Petri net bottom-up approach.
Ferrarini, L; Trioni, M
1996-01-01
This paper proposes a simple and effective way to represent shared resources in manufacturing systems within a previously developed Petri net model. Such a model relies on a bottom-up, modular approach to synthesis and analysis. The designer may define elementary tasks and then connect them with one another using three kinds of connections: self-loops, inhibitor arcs and simple synchronizations. A theoretical framework has been established for the analysis of liveness and reversibility of such models. The generalized synchronization, formalized here, represents an extension of the simple synchronization, allowing the merging of suitable subnets among elementary tasks. It is proved that under suitable, but not restrictive, hypotheses a generalized synchronization may be replaced by a simple one, and is thus compatible with the theoretical body already developed.
Transcription, intercellular variability and correlated random walk.
Müller, Johannes; Kuttler, Christina; Hense, Burkhard A; Zeiser, Stefan; Liebscher, Volkmar
2008-11-01
We develop a simple model for the random distribution of a gene product. It is assumed that the only source of variance is due to switching transcription on and off by a random process. Under the condition that the transition rates between on and off are constant we find that the amount of mRNA follows a scaled Beta distribution. Additionally, a simple positive feedback loop is considered. The simplicity of the model allows for an explicit solution also in this setting. These findings in turn allow, e.g., for easy parameter scans. We find that bistable behavior translates into bimodal distributions. These theoretical findings are in line with experimental results.
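The model is easy to simulate directly: the gene switches between on and off with constant rates, and the mRNA amount relaxes deterministically toward s/d while on and toward 0 while off. The sketch below does this with a simple Euler scheme and rescales the samples to [0, 1]; the paper's analytical result is that the stationary density of the rescaled amount is a Beta(k_on/d, k_off/d) distribution. All rate values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
k_on, k_off = 0.6, 0.4        # off->on and on->off switching rates; illustrative
s, d = 10.0, 1.0              # transcription and degradation rates; illustrative
dt, n_steps = 0.01, 1_000_000

g, m, samples = 0, 0.0, []
for step in range(n_steps):
    if rng.random() < (k_on if g == 0 else k_off) * dt:
        g = 1 - g                       # random telegraph switching of the gene state
    m += (s * g - d * m) * dt           # dm/dt = s*g - d*m between switches
    if step % 100 == 0:
        samples.append(m * d / s)       # rescale so the support is [0, 1]

samples = np.array(samples)
# compare with the Beta(k_on/d, k_off/d) prediction, whose mean is k_on/(k_on+k_off)
print(round(samples.mean(), 3), round(k_on / (k_on + k_off), 3))
```

With both shape parameters below one (as here), the Beta density piles up near 0 and 1, which is the bimodality the abstract associates with bistable behavior.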
Mathematical neuroscience: from neurons to circuits to systems.
Gutkin, Boris; Pinto, David; Ermentrout, Bard
2003-01-01
Applications of mathematics and computational techniques to our understanding of neuronal systems are provided. Reduction of membrane models to simplified canonical models demonstrates how neuronal spike-time statistics follow from simple properties of neurons. Averaging over space allows one to derive a simple model for the whisker barrel circuit and use this to explain and suggest several experiments. Spatio-temporal pattern formation methods are applied to explain the patterns seen in the early stages of drug-induced visual hallucinations.
The Storm Water Management Model Climate Adjustment Tool (SWMM-CAT) is a simple-to-use software utility that allows future climate change projections to be incorporated into the Storm Water Management Model (SWMM).
Simple construction and performance of a conical plastic cryocooler
NASA Technical Reports Server (NTRS)
Lambert, N.
1985-01-01
Low power cryocoolers with conical displacers offer several advantages over stepped displacers. The described fabrication process allows quick and reproducible manufacturing of plastic conical displacer units. This could be of commercial interest, but it also makes systematic optimization feasible by allowing a number of different models to be constructed. The process allows for a wide range of displacer profiles. Low-temperature performance is dominated by regenerator losses, and several loss effects are discussed. A simple device is described which controls gas flow during expansion.
The fluid trampoline: droplets bouncing on a soap film
NASA Astrophysics Data System (ADS)
Bush, John; Gilet, Tristan
2008-11-01
We present the results of a combined experimental and theoretical investigation of droplets falling onto a horizontal soap film. Both static and vertically vibrated soap films are considered. A quasi-static description of the soap film shape yields a force-displacement relation that provides excellent agreement with experiment, and allows us to model the film as a nonlinear spring. This approach yields an accurate criterion for the transition between droplet bouncing and crossing on the static film; moreover, it allows us to rationalize the observed constancy of the contact time and scaling for the coefficient of restitution in the bouncing states. On the vibrating film, a variety of bouncing behaviours were observed, including simple and complex periodic states, multiperiodicity and chaos. A simple theoretical model is developed that captures the essential physics of the bouncing process, reproducing all observed bouncing states. Quantitative agreement between model and experiment is deduced for simple periodic modes, and qualitative agreement for more complex periodic and chaotic bouncing states.
Inexpensive Laboratory Model with Many Applications.
ERIC Educational Resources Information Center
Archbold, Norbert L.; Johnson, Robert E.
1987-01-01
Presents a simple, inexpensive and realistic model which allows introductory geology students to obtain subsurface information through a simulated drilling experience. Offers ideas on additional applications to a variety of geologic situations. (ML)
Energy economy in the actomyosin interaction: lessons from simple models.
Lehman, Steven L
2010-01-01
The energy economy of the actomyosin interaction in skeletal muscle is both scientifically fascinating and practically important. This chapter demonstrates how simple cross-bridge models have guided research regarding the energy economy of skeletal muscle. Parameter variation on a very simple two-state strain-dependent model shows that early events in the actomyosin interaction strongly influence energy efficiency, and late events determine maximum shortening velocity. Addition of a weakly-bound state preceding force production allows weak coupling of cross-bridge mechanics and ATP turnover, so that a simple three-state model can simulate the velocity-dependence of ATP turnover. Consideration of the limitations of this model leads to a review of recent evidence regarding the relationship between ligand binding states, conformational states, and macromolecular structures of myosin cross-bridges. Investigation of the fine structure of the actomyosin interaction during the working stroke continues to inform fundamental research regarding the energy economy of striated muscle.
A comprehensive surface-groundwater flow model
NASA Astrophysics Data System (ADS)
Arnold, Jeffrey G.; Allen, Peter M.; Bernhardt, Gilbert
1993-02-01
In this study, a simple groundwater flow and height model was added to an existing basin-scale surface water model. The linked model is: (1) watershed scale, allowing the basin to be subdivided; (2) designed to accept readily available inputs to allow general use over large regions; (3) continuous in time to allow simulation of land management, including such factors as climate and vegetation changes, pond and reservoir management, groundwater withdrawals, and stream and reservoir withdrawals. The model is described, and is validated on a 471 km2 watershed near Waco, Texas. This linked model should provide a comprehensive tool for water resource managers in development and planning.
Bidault, Xavier; Chaussedent, Stéphane; Blanc, Wilfried
2015-10-21
A simple transferable adaptive model is developed that allows, for the first time, molecular dynamics simulation of the separation of large phases in the MgO-SiO2 binary system, as observed experimentally and as predicted by the phase diagram, meaning that the separated phases have various compositions. This is a real improvement over fixed-charge models, which are often limited to an interpretation involving the formation of pure clusters, or involving the modified random network model. Our adaptive model, which efficiently reproduces known crystalline and glassy structures, allows us to track the formation of large amorphous Mg-rich Si-poor nanoparticles in an Mg-poor Si-rich matrix from a 0.1MgO-0.9SiO2 melt.
Simple Spectral Lines Data Model Version 1.0
NASA Astrophysics Data System (ADS)
Osuna, Pedro; Salgado, Jesus; Guainazzi, Matteo; Dubernet, Marie-Lise; Roueff, Evelyne; Osuna, Pedro; Salgado, Jesus
2010-12-01
This document presents a Data Model to describe Spectral Line Transitions in the context of the Simple Line Access Protocol defined by the IVOA (cf. Ref [13], IVOA Simple Line Access Protocol). The main objective of the model is to integrate with and support the Simple Line Access Protocol, with which it forms a compact unit. This integration allows seamless access to Spectral Line Transitions available worldwide in the VO context. This model does not provide a complete description of Atomic and Molecular Physics, whose scope is outside this document. In the astrophysical sense, a line is considered the result of a transition between two energy levels. On the basis of this assumption, a set of objects and attributes has been derived to properly define the information necessary to describe lines appearing in astrophysical contexts. The document has been written taking into account available information from many different line data providers (see the acknowledgments section).
[A new model for the evaluation of measurements of the neurocranium].
Seidler, H; Wilfing, H; Weber, G; Traindl-Prohazka, M; zur Nedden, D; Platzer, W
1993-12-01
A simple and user-friendly model for trigonometric description of the neurocranium based on newly defined points of measurement is presented. This model not only provides individual description, but also allows for an evaluation of developmental and phylogenetic aspects.
A-Priori Tuning of Modified Magnussen Combustion Model
NASA Technical Reports Server (NTRS)
Norris, A. T.
2016-01-01
In the application of CFD to turbulent reacting flows, one of the main limitations to predictive accuracy is the chemistry model. Using a full or skeletal kinetics model may provide good predictive ability, however, at considerable computational cost. Adding the ability to account for the interaction between turbulence and chemistry improves the overall fidelity of a simulation but adds to this cost. An alternative is the use of simple models, such as the Magnussen model, which has negligible computational overhead, but lacks general predictive ability except for cases that can be tuned to the flow being solved. In this paper, a technique will be described that allows the tuning of the Magnussen model for an arbitrary fuel and flow geometry without the need to have experimental data for that particular case. The tuning is based on comparing the results of the Magnussen model and full finite-rate chemistry when applied to perfectly and partially stirred reactor simulations. In addition, a modification to the Magnussen model is proposed that allows the upper kinetic limit for the reaction rate to be set, giving better physical agreement with full kinetic mechanisms. This procedure allows a simple reacting model to be used in a predictive manner, and affords significant savings in computational costs for simulations.
Effects of host social hierarchy on disease persistence.
Davidson, Ross S; Marion, Glenn; Hutchings, Michael R
2008-08-07
The effects of social hierarchy on population dynamics and epidemiology are examined through a model which contains a number of fundamental features of hierarchical systems, but is simple enough to allow analytical insight. In order to allow for differences in birth rates, contact rates and movement rates among different sets of individuals, the population is first divided into subgroups representing levels in the hierarchy. Movement, representing dominance challenges, is allowed between any two levels, giving a completely connected network. The model includes hierarchical effects by introducing a set of dominance parameters which affect birth rates in each social level and movement rates between social levels, dependent upon their rank. Although natural hierarchies vary greatly in form, the skewing of contact patterns, introduced here through non-uniform dominance parameters, has marked effects on the spread of disease. A simple homogeneous mixing differential equation model of a disease with SI dynamics in a population subject to a simple birth and death process is presented, and it is shown that the hierarchical model tends to this as certain parameter regions are approached. Outside of these parameter regions, correlations within the system give rise to deviations from the simple theory. A Gaussian moment closure scheme is developed which extends the homogeneous model in order to take account of correlations arising from the hierarchical structure, and it is shown that the results are in reasonable agreement with simulations across a range of parameters. This approach helps to elucidate the origin of hierarchical effects and shows that it may be straightforward to relate the correlations in the model to measurable quantities which could be used to determine the importance of hierarchical corrections. Overall, hierarchical effects decrease the levels of disease present in a given population compared to a homogeneous unstructured model, but show higher levels of disease than structured models with no hierarchy. The separation between these three models is greatest when the rate of dominance challenges is low, reducing mixing, and when the disease prevalence is low. This suggests that these effects will often need to be considered in models being used to examine the impact of control strategies where the low disease prevalence behaviour of a model is critical.
NASA Astrophysics Data System (ADS)
Wong, Tony E.; Bakker, Alexander M. R.; Ruckert, Kelsey; Applegate, Patrick; Slangen, Aimée B. A.; Keller, Klaus
2017-07-01
Simple models can play pivotal roles in the quantification and framing of uncertainties surrounding climate change and sea-level rise. They are computationally efficient, transparent, and easy to reproduce. These qualities also make simple models useful for the characterization of risk. Simple model codes are increasingly distributed as open source, as well as actively shared and guided. Alas, computer codes used in the geosciences can often be hard to access, run, modify (e.g., with regards to assumptions and model components), and review. Here, we describe the simple model framework BRICK (Building blocks for Relevant Ice and Climate Knowledge) v0.2 and its underlying design principles. The paper adds detail to an earlier published model setup and discusses the inclusion of a land water storage component. The framework largely builds on existing models and allows for projections of global mean temperature as well as regional sea levels and coastal flood risk. BRICK is written in R and Fortran. BRICK gives special attention to the model values of transparency, accessibility, and flexibility in order to mitigate the above-mentioned issues while maintaining a high degree of computational efficiency. We demonstrate the flexibility of this framework through simple model intercomparison experiments. Furthermore, we demonstrate that BRICK is suitable for risk assessment applications by using a didactic example in local flood risk management.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alberti, Tommaso; Carbone, Vincenzo; Lepreti, Fabio
The recent discovery of the planetary system hosted by the ultracool dwarf star TRAPPIST-1 could open new paths for investigations of the planetary climates of Earth-sized exoplanets, their atmospheres, and their possible habitability. In this paper, we use a simple climate-vegetation energy-balance model to study the climate of the seven TRAPPIST-1 planets and the climate dependence on various factors: the global albedo, the fraction of vegetation that could cover their surfaces, and the different greenhouse conditions. The model allows us to investigate whether liquid water could be maintained on the planetary surfaces (i.e., by defining a “surface water zone (SWZ)”) in different planetary conditions, with or without the presence of a greenhouse effect. It is shown that planet TRAPPIST-1d seems to be the most stable from an Earth-like perspective, since it resides in the SWZ for a wide range of reasonable values of the model parameters. Moreover, according to the model, outer planets (f, g, and h) cannot host liquid water on their surfaces, even with Earth-like conditions, entering a snowball state. Although very simple, the model allows us to extract the main features of the TRAPPIST-1 planetary climates.
A simple model for indentation creep
NASA Astrophysics Data System (ADS)
Ginder, Ryan S.; Nix, William D.; Pharr, George M.
2018-03-01
A simple model for indentation creep is developed that allows one to directly convert creep parameters measured in indentation tests to those observed in uniaxial tests through simple closed-form relationships. The model is based on the expansion of a spherical cavity in a power law creeping material modified to account for indentation loading in a manner similar to that developed by Johnson for elastic-plastic indentation (Johnson, 1970). Although only approximate in nature, the simple mathematical form of the new model makes it useful for general estimation purposes or in the development of other deformation models in which a simple closed-form expression for the indentation creep rate is desirable. Comparison to a more rigorous analysis which uses finite element simulation for numerical evaluation shows that the new model predicts uniaxial creep rates within a factor of 2.5, and usually much better than this, for materials creeping with stress exponents in the range 1 ≤ n ≤ 7. The predictive capabilities of the model are evaluated by comparing it to the more rigorous analysis and several sets of experimental data in which both the indentation and uniaxial creep behavior have been measured independently.
Implications of Biospheric Energization
NASA Astrophysics Data System (ADS)
Budding, Edd; Demircan, Osman; Gündüz, Güngör; Emin Özel, Mehmet
2016-07-01
Our physical model relating to the origin and development of lifelike processes from very simple beginnings is reviewed. This molecular ('ABC') process is compared with the chemoton model, noting the role of the autocatalytic tuning to the time-dependent source of energy. This substantiates a Darwinian character to evolution. The system evolves from very simple beginnings to a progressively more highly tuned, energized and complex responding biosphere, that grows exponentially; albeit with a very low net growth factor. Rates of growth and complexity in the evolution raise disturbing issues of inherent stability. Autocatalytic processes can include a fractal character to their development allowing recapitulative effects to be observed. This property, in allowing similarities of pattern to be recognized, can be useful in interpreting complex (lifelike) systems.
Constructing a simple parametric model of shoulder from medical images
NASA Astrophysics Data System (ADS)
Atmani, H.; Fofi, D.; Merienne, F.; Trouilloud, P.
2006-02-01
The modelling of the shoulder joint is an important step in setting up a Computer-Aided Surgery System for shoulder prosthesis placement. Our approach mainly concerns the bone structures of the scapulo-humeral joint. Our goal is to develop a tool that allows the surgeon to extract morphological data from medical images in order to interpret the biomechanical behaviour of a prosthesised shoulder for preoperative and peroperative virtual surgery. To provide a light and easy-handling representation of the shoulder, a geometrical model composed of quadrics, planes and other simple forms is proposed.
A Simple Analytic Model for Estimating Mars Ascent Vehicle Mass and Performance
NASA Technical Reports Server (NTRS)
Woolley, Ryan C.
2014-01-01
The Mars Ascent Vehicle (MAV) is a crucial component in any sample return campaign. In this paper we present a universal model for a two-stage MAV along with the analytic equations and simple parametric relationships necessary to quickly estimate MAV mass and performance. Ascent trajectories can be modeled as two-burn transfers from the surface with appropriate loss estimations for finite burns, steering, and drag. Minimizing lift-off mass is achieved by balancing optimized staging and an optimized path-to-orbit. This model allows designers to quickly find optimized solutions and to see the effects of design choices.
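The staging trade at the heart of such a model can be captured with the rocket equation and a structural-mass fraction per stage: for a required total ∆v (orbital ∆v plus estimated finite-burn, steering, and drag losses), sweep how the ∆v is split between the two stages and keep the split that minimizes gross lift-off mass. The sketch below does exactly that with invented Isp, structural-fraction, loss, and payload numbers; it is not the paper's calibrated parametric model.

```python
import numpy as np

g0 = 9.80665
dv_total = 4000.0 + 350.0     # orbital dv plus gravity/steering/drag losses [m/s]; illustrative
isp = (290.0, 320.0)          # stage 1 and stage 2 specific impulses [s]; illustrative
eps = (0.12, 0.15)            # stage structural mass fractions; illustrative
m_pay = 20.0                  # payload mass [kg], e.g., a sample container; illustrative

def liftoff_mass(f1):
    """Gross mass of a two-stage vehicle when stage 1 supplies a fraction f1 of dv_total."""
    m = m_pay
    # build the stack from the top (stage 2) down to the bottom (stage 1)
    for dv, isp_s, e in ((dv_total * (1 - f1), isp[1], eps[1]),
                         (dv_total * f1, isp[0], eps[0])):
        r = np.exp(dv / (g0 * isp_s))            # stage mass ratio (rocket equation)
        if e * r >= 1.0:
            return np.inf                        # this structural fraction cannot deliver dv
        m = m * r * (1.0 - e) / (1.0 - e * r)    # gross mass of stage plus everything above it
    return m

fracs = np.linspace(0.2, 0.8, 121)
masses = np.array([liftoff_mass(f) for f in fracs])
best = fracs[masses.argmin()]
print(f"optimal stage-1 dv fraction ~ {best:.2f}, lift-off mass ~ {masses.min():.1f} kg")
```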
A simple model for the evolution of melt pond coverage on permeable Arctic sea ice
NASA Astrophysics Data System (ADS)
Popović, Predrag; Abbot, Dorian
2017-05-01
As the melt season progresses, sea ice in the Arctic often becomes permeable enough to allow for nearly complete drainage of meltwater that has collected on the ice surface. Melt ponds that remain after drainage are hydraulically connected to the ocean and correspond to regions of sea ice whose surface is below sea level. We present a simple model for the evolution of melt pond coverage on such permeable sea ice floes in which we allow for spatially varying ice melt rates and assume the whole floe is in hydrostatic balance. The model is represented by two simple ordinary differential equations, where the rate of change of pond coverage depends on the pond coverage. All the physical parameters of the system are summarized by four strengths that control the relative importance of the terms in the equations. The model both fits observations and allows us to understand the behavior of melt ponds in a way that is often not possible with more complex models. Examples of insights we can gain from the model are that (1) the pond growth rate is more sensitive to changes in bare sea ice albedo than changes in pond albedo, (2) ponds grow slower on smoother ice, and (3) ponds respond strongest to freeboard sinking on first-year ice and sidewall melting on multiyear ice. We also show that under a global warming scenario, pond coverage would increase, decreasing the overall ice albedo and leading to ice thinning that is likely comparable to thinning due to direct forcing. Since melt pond coverage is one of the key parameters controlling the albedo of sea ice, understanding the mechanisms that control the distribution of pond coverage will help improve large-scale model parameterizations and sea ice forecasts in a warming climate.
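To make the structure concrete, the sketch below integrates a single ODE in which the pond fraction p grows over the remaining bare ice and shrinks by drainage, each term controlled by one "strength" parameter. The right-hand side and the parameter values are purely illustrative placeholders and are not the equations or strengths derived in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

s_grow, s_shrink = 0.15, 0.25        # per day; hypothetical "strength" parameters

def dpdt(t, p):
    # illustrative stand-in only: growth over bare ice minus drainage loss
    return [s_grow * (1.0 - p[0]) - s_shrink * p[0]]

sol = solve_ivp(dpdt, (0.0, 60.0), [0.05], dense_output=True)
days = np.linspace(0.0, 60.0, 7)
print(np.round(sol.sol(days)[0], 3))  # approaches s_grow / (s_grow + s_shrink) = 0.375
```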
2012-11-29
of localized states extending into the gap. We also introduced a simple model allowing estimates of the upper limit of the intra-grain mobility in...well as to pentacene, and DATT. This research will be described below. In addition to our work on the electronic structure and charge mobility, we have...stacking distance gives rise to a tail of localized states which act as traps for electrons and holes. We introduced a simple effective Hamiltonian model
Two simple models of classical heat pumps.
Marathe, Rahul; Jayannavar, A M; Dhar, Abhishek
2007-03-01
Motivated by recent studies of models of particle and heat quantum pumps, we study similar simple classical models and examine the possibility of heat pumping. Unlike many of the usual ratchet models of molecular engines, the models we study do not have particle transport. We consider a two-spin system and a coupled oscillator system which exchange heat with multiple heat reservoirs and which are acted upon by periodic forces. The simplicity of our models allows accurate numerical and exact solutions and unambiguous interpretation of results. We demonstrate that while both our models seem to be built on similar principles, one is able to function as a heat pump (or engine) while the other is not.
Modelling melting in crustal environments, with links to natural systems in the Nepal Himalayas
NASA Astrophysics Data System (ADS)
Isherwood, C.; Holland, T.; Bickle, M.; Harris, N.
2003-04-01
Melt bodies of broadly granitic character occur frequently in mountain belts such as the Himalayan chain, which exposes leucogranitic intrusions along its entire length (e.g. Le Fort, 1975). The genesis and disposition of these bodies have considerable implications for the development of tectonic evolution models for such mountain belts. However, melting processes and melt migration behaviour are influenced by many factors (Hess, 1995; Wolf & McMillan, 1995) which are as yet poorly understood. Recent improvements in internally consistent thermodynamic datasets have allowed the modelling of simple granitic melt systems (Holland & Powell, 2001) at pressures below 10 kbar, of which Himalayan leucogranites provide a good natural example. Model calculations such as these have been extended to include an asymmetrical melt-mixing model based on the Van Laar approach, which uses volumes (or pseudovolumes) for the different end-members in a mixture to control the asymmetry of non-ideal mixing. This asymmetrical formalism has been used in conjunction with several different entropy of mixing assumptions in an attempt to find the closest fit to available experimental data for melting in simple binary and ternary haplogranite systems. The extracted mixing data are extended to more complex systems and allow the construction of phase relations in NKASH necessary to model simple haplogranitic melts involving albite, K-feldspar, quartz, sillimanite and H2O. The models have been applied to real bulk composition data from Himalayan leucogranites.
Teaching Mathematical Modelling: Demonstrating Enrichment and Elaboration
ERIC Educational Resources Information Center
Warwick, Jon
2015-01-01
This paper uses a series of models to illustrate one of the fundamental processes of model building--that of enrichment and elaboration. The paper describes how a problem context is given which allows a series of models to be developed from a simple initial model using a queuing theory framework. The process encourages students to think about the…
A Simple Interactive Introduction to Teaching Genetic Engineering
ERIC Educational Resources Information Center
Child, Paula
2013-01-01
In the UK, at key stage 4, students aged 14-15 studying GCSE Core Science or Unit 1 of the GCSE Biology course are required to be able to describe the process of genetic engineering to produce bacteria that can produce insulin. The simple interactive introduction described in this article allows students to consider the problem, devise a model and…
Baum, Rex L.; Savage, William Z.; Godt, Jonathan W.
2008-01-01
The Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Model (TRIGRS) is a Fortran program designed for modeling the timing and distribution of shallow, rainfall-induced landslides. The program computes transient pore-pressure changes, and attendant changes in the factor of safety, due to rainfall infiltration. The program models rainfall infiltration, resulting from storms that have durations ranging from hours to a few days, using analytical solutions for partial differential equations that represent one-dimensional, vertical flow in isotropic, homogeneous materials for either saturated or unsaturated conditions. Use of step-function series allows the program to represent variable rainfall input, and a simple runoff routing model allows the user to divert excess water from impervious areas onto more permeable downslope areas. The TRIGRS program uses a simple infinite-slope model to compute factor of safety on a cell-by-cell basis. An approximate formula for effective stress in unsaturated materials aids computation of the factor of safety in unsaturated soils. Horizontal heterogeneity is accounted for by allowing material properties, rainfall, and other input values to vary from cell to cell. This command-line program is used in conjunction with geographic information system (GIS) software to prepare input grids and visualize model results.
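The cell-by-cell stability calculation rests on the classical infinite-slope factor of safety, which in the form commonly used with transient pressure heads reads FS = tanφ/tanβ + [c − ψ γw tanφ] / (γs z sinβ cosβ). The sketch below evaluates it for one cell as the pressure head ψ rises during a storm; the soil parameters are illustrative, not values recommended for TRIGRS.

```python
import numpy as np

def factor_of_safety(slope_deg, z, psi, c=4.0e3, phi_deg=32.0,
                     gamma_s=20.0e3, gamma_w=9.81e3):
    """Infinite-slope factor of safety with pressure head psi [m] at depth z [m].

    c [Pa] cohesion, phi [deg] friction angle, gamma_s/gamma_w unit weights [N/m^3].
    All values are illustrative, not calibrated to any site.
    """
    beta, phi = np.radians(slope_deg), np.radians(phi_deg)
    return (np.tan(phi) / np.tan(beta)
            + (c - psi * gamma_w * np.tan(phi)) / (gamma_s * z * np.sin(beta) * np.cos(beta)))

# rising pore pressure during a storm drives FS toward (and below) 1
for psi in (0.0, 0.5, 1.0):
    print(psi, round(factor_of_safety(35.0, z=2.0, psi=psi), 2))
```

FS dropping below 1 as ψ increases is the failure criterion the program maps across the grid.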
Estimating linear temporal trends from aggregated environmental monitoring data
Erickson, Richard A.; Gray, Brian R.; Eager, Eric A.
2017-01-01
Trend estimates are often used as part of environmental monitoring programs. These trends inform managers (e.g., are desired species increasing or undesired species decreasing?). Data collected from environmental monitoring programs is often aggregated (i.e., averaged), which confounds sampling and process variation. State-space models allow sampling variation and process variations to be separated. We used simulated time-series to compare linear trend estimations from three state-space models, a simple linear regression model, and an auto-regressive model. We also compared the performance of these five models to estimate trends from a long term monitoring program. We specifically estimated trends for two species of fish and four species of aquatic vegetation from the Upper Mississippi River system. We found that the simple linear regression had the best performance of all the given models because it was best able to recover parameters and had consistent numerical convergence. Conversely, the simple linear regression did the worst job estimating populations in a given year. The state-space models did not estimate trends well, but estimated population sizes best when the models converged. We found that a simple linear regression performed better than more complex autoregression and state-space models when used to analyze aggregated environmental monitoring data.
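The basic issue can be reproduced in a few lines: a latent population trend with year-to-year process noise is observed only through the yearly average of a handful of noisy site samples, which mixes sampling and process variation. The sketch below generates such a series and recovers the trend with ordinary least squares, the approach that performed best in the study; the trend value and all variances are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n_years, n_sites = 20, 12
true_trend = -0.05
t = np.arange(n_years)

# latent (process) state: linear trend plus year-to-year process noise
state = 2.0 + true_trend * t + rng.normal(0.0, 0.10, n_years)
# aggregated observation: mean of noisy site-level samples each year
obs = np.array([rng.normal(s, 0.40, n_sites).mean() for s in state])

# simple linear regression on the aggregated series
slope, intercept = np.polyfit(t, obs, 1)
print(f"true trend {true_trend:+.3f}, OLS estimate {slope:+.3f}")
```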
A Vernacular for Linear Latent Growth Models
ERIC Educational Resources Information Center
Hancock, Gregory R.; Choi, Jaehwa
2006-01-01
In its most basic form, latent growth modeling (latent curve analysis) allows an assessment of individuals' change in a measured variable X over time. For simple linear models, as with other growth models, parameter estimates associated with the a construct (amount of X at a chosen temporal reference point) and b construct (growth in X per unit…
Simple Model of Macroscopic Instability in XeCl Discharge Pumped Lasers
NASA Astrophysics Data System (ADS)
Ahmed, Belasri; Zoheir, Harrache
2003-10-01
The aim of this work is to study the development of macroscopic non-uniformity of the electron density in high-pressure discharges for excimer lasers, and eventually its propagation, due to the kinetic phenomena of the medium. This study is carried out using a transverse one-dimensional model in which the plasma is represented by a set of resistances in parallel. The model was implemented in a numerical code including three strongly coupled parts: electric circuit equations, the electron Boltzmann equation, and kinetics equations (a chemical kinetics model). The time variations of the electron density in each plasma element are obtained by solving a set of ordinary differential equations describing the plasma kinetics and the external circuit. The present model allows a good understanding of the halogen depletion phenomenon, which is the principal cause of laser pulse termination, and allows a simple study of large-scale non-uniformity in the preionization density and its effects on the electrical and chemical properties of the plasma. The results indicate clearly that about 50% of the halogen is consumed by the end of the pulse. KEY WORDS: Excimer laser, XeCl, Modeling, Cold plasma, Kinetics, Halogen depletion, Macroscopic instability.
ERIC Educational Resources Information Center
Rossi, Sergio; Benaglia, Maurizio; Brenna, Davide; Porta, Riccardo; Orlandi, Manuel
2015-01-01
A simple procedure to convert Protein Data Bank files (.pdb) into stereolithography files (.stl) using the VMD software (Visual Molecular Dynamics) is reported. This tutorial allows generating, with a very simple protocol, three-dimensional customized structures that can be printed by a low-cost 3D printer and used for teaching chemical education…
NASA Astrophysics Data System (ADS)
Knipp, D.; Kilcommons, L. M.; Damas, M. C.
2015-12-01
We have created a simple and user-friendly web application to visualize output from empirical atmospheric models that describe the lower atmosphere and the Space-Atmosphere Interface Region (SAIR). The Atmospheric Model Web Explorer (AtModWeb) is a lightweight, multi-user, Python-driven application which uses standard web technology (jQuery, HTML5, CSS3) to give an in-browser interface that can produce plots of modeled quantities such as temperature and the individual-species and total densities of the neutral and ionized upper atmosphere. Output may be displayed as: 1) a contour plot over a map projection, 2) a pseudo-color plot (heatmap) which allows visualization of a variable as a function of two spatial coordinates, or 3) a simple line plot of one spatial coordinate versus any number of desired model output variables. The application is designed around an abstraction of an empirical atmospheric model, essentially treating the model code as a black box, which makes it simple to add additional models without modifying the main body of the application. Currently implemented are the Naval Research Laboratory NRLMSISE-00 model for the neutral atmosphere and the International Reference Ionosphere (IRI). These models are relevant to the Low Earth Orbit environment and the SAIR. The interface is simple and usable, allowing users (students and experts) to specify time and location, and choose between historical (i.e. the values for the given date) or manual specification of whichever solar or geomagnetic activity drivers are required by the model. We present a number of use-case examples from research and education: 1) How does atmospheric density between the surface and 1000 km vary with time of day, season and solar cycle? 2) How do ionospheric layers change with the solar cycle? 3) How does the composition of the SAIR vary between day and night at a fixed altitude?
The Diffusion Simulator - Teaching Geomorphic and Geologic Problems Visually.
ERIC Educational Resources Information Center
Gilbert, R.
1979-01-01
Describes a simple hydraulic simulator based on more complex models long used by engineers to develop approximate solutions. It allows students to visualize non-steady transfer, to apply a model to solve a problem, and to compare experimentally simulated information with calculated values. (Author/MA)
Application of simple negative feedback model for avalanche photodetectors investigation
NASA Astrophysics Data System (ADS)
Kushpil, V. V.
2009-10-01
A simple negative feedback model based on Miller's formula is used to investigate the properties of Avalanche Photodetectors (APDs). The proposed method can be applied to study classical APDs as well as a new type of device operating in the Internal Negative Feedback (INF) regime. The method shows good sensitivity to technological APD parameters, making it possible to use it as a tool to analyse various APD parameters. It also allows better understanding of APD operation conditions. Simulations and experimental data analysis for different types of APDs are presented.
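Miller's formula gives the multiplication gain as M = 1/(1 − (V/V_br)^n). A crude way to see how a negative feedback loop tames this gain is to let the photocurrent drop part of the bias across a series resistance and solve the resulting fixed point, as in the sketch below. The series-resistance feedback and all parameter values are illustrative stand-ins, not the internal-negative-feedback model of the paper.

```python
def gain(v_bias, i_photo, r_fb, v_br=150.0, n=3.0, tol=1e-9):
    """Avalanche gain from Miller's formula, M = 1/(1 - (V/V_br)**n), where the junction
    voltage is reduced by the feedback drop M*i_photo*r_fb across a series resistance.
    All parameter values are illustrative."""
    m = 1.0
    for _ in range(200):                       # damped fixed-point iteration
        v_junction = v_bias - m * i_photo * r_fb
        m_new = 1.0 / (1.0 - (v_junction / v_br) ** n)
        if abs(m_new - m) < tol:
            break
        m = 0.5 * (m + m_new)
    return m

for r in (0.0, 1e5, 1e6):                      # feedback resistance [ohm]
    print(r, round(gain(v_bias=148.0, i_photo=1e-6, r_fb=r), 1))
```

Increasing the feedback resistance pulls the gain down from its open-loop value and makes it far less sensitive to the bias, which is the qualitative effect such feedback analyses are after.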
Simple inflationary quintessential model. II. Power law potentials
NASA Astrophysics Data System (ADS)
de Haro, Jaume; Amorós, Jaume; Pan, Supriya
2016-09-01
The present work is a sequel to our previous work [Phys. Rev. D 93, 084018 (2016)], which presented a simple version of an inflationary quintessential model whose inflationary stage was described by a Higgs-type potential and whose quintessential phase arose from an exponential potential. Additionally, the model predicted a universe that is nonsingular in the past but geodesically past incomplete. It was also found that the model agrees with the Planck 2013 data when running is allowed. However, the model provides a theoretical value of the running that is far smaller than the central value of the best fit in the ns, r, αs ≡ dns/dlnk parameter space, where ns, r, and αs respectively denote the spectral index, tensor-to-scalar ratio, and running of the spectral index associated with any inflationary model; consequently, to analyze the viability of the model one has to focus on the two-dimensional marginalized confidence level in the allowed domain of the (ns, r) plane without taking the running into account. Unfortunately, such an analysis shows that this model does not pass this test. In this sequel we therefore propose a family of models, parameterized by a single parameter α ∈ [0, 1], that yields another inflationary quintessential model in which the inflation and quintessence regimes are described by a power law potential and a cosmological constant, respectively. The model is also nonsingular, although geodesically past incomplete as in the cited model. Moreover, the present one is simpler than the previous model and is in excellent agreement with the observational data. In fact, we note that, unlike the previous model, a large number of the models in this family with α ∈ [0, 1/2) match both the Planck 2013 and Planck 2015 data without allowing for running. Thus, the properties of the current family of models, compared with its predecessor, justify it as a better cosmological model in light of the successive improvement of the observational data.
On a simple molecular–statistical model of a liquid-crystal suspension of anisometric particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zakhlevnykh, A. N., E-mail: anz@psu.ru; Lubnin, M. S.; Petrov, D. A.
2016-11-15
A molecular–statistical mean-field theory is constructed for suspensions of anisometric particles in nematic liquid crystals (NLCs). The spherical approximation, well known in the physics of ferromagnetic materials, is considered that allows one to obtain an analytic expression for the free energy and simple equations for the orientational state of a suspension that describe the temperature dependence of the order parameters of the suspension components. The transition temperature from ordered to isotropic state and the jumps in the order parameters at the phase-transition point are studied as a function of the anchoring energy of dispersed particles to the matrix, the concentration of the impurity phase, and the size of particles. The proposed approach allows one to generalize the model to the case of biaxial ordering.
Meningomyelocele Simulation Model: Pre-surgical Management–Technical Report
Angert, Robert M
2018-01-01
This technical report describes the creation of a myelomeningocele model of a newborn baby. This is a simple, low-cost, and easy-to-assemble model that allows the medical team to practice the delivery room management of a newborn with myelomeningocele. The report includes scenarios and a suggested checklist with which the model can be employed. PMID:29713576
Slow-Slip Phenomena Represented by the One-Dimensional Burridge-Knopoff Model of Earthquakes
NASA Astrophysics Data System (ADS)
Kawamura, Hikaru; Yamamoto, Maho; Ueda, Yushi
2018-05-01
Slow-slip phenomena, including afterslips and silent earthquakes, are studied using a one-dimensional Burridge-Knopoff model that obeys the rate-and-state dependent friction law. By varying only a few model parameters, this simple model allows reproducing a variety of seismic slips within a single framework, including main shocks, precursory nucleation processes, afterslips, and silent earthquakes.
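A feel for how rate-and-state friction produces slow, afterslip-like transients can be had from a single quasi-static spring-block slider, a drastic reduction of the one-dimensional chain used in the paper. The sketch below applies a sudden stress step to a velocity-strengthening block governed by the aging law and integrates the slowly decaying slip transient; every parameter value is illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, dc = 0.012, 0.010, 1e-4    # rate-and-state friction parameters [-, -, m]; illustrative
mu0, v0 = 0.6, 1e-6              # reference friction coefficient and slip rate [m/s]
sigma, k = 50e6, 1e8             # normal stress [Pa] and spring stiffness [Pa/m]
v_load, d_tau = 1e-9, 3e5        # plate (loading) velocity [m/s] and applied stress step [Pa]

tau_ss = sigma * (mu0 + (a - b) * np.log(v_load / v0))   # steady-state stress at v_load

def slip_rate(tau, theta):
    # invert tau = sigma*(mu0 + a*ln(v/v0) + b*ln(v0*theta/dc)) for the slip rate v
    return v0 * np.exp((tau / sigma - mu0 - b * np.log(v0 * theta / dc)) / a)

def rhs(t, y):
    x, theta = y
    tau = tau_ss + d_tau + k * (v_load * t - x)   # spring reloads at v_load, unloads with slip
    v = slip_rate(tau, theta)
    return [v, 1.0 - v * theta / dc]              # slip and aging-law state evolution

sol = solve_ivp(rhs, (0.0, 2e6), [0.0, dc / v_load], max_step=5e3, rtol=1e-8)
v = slip_rate(tau_ss + d_tau + k * (v_load * sol.t - sol.y[0]), sol.y[1])
print(f"slip rate peaks at {v.max()/v_load:.0f}x the plate rate and decays to {v[-1]/v_load:.1f}x")
```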
A simple model is presented that allows the pressure difference in a subslab aggregate layer to be estimated as a function of radial distance from the central suction point of an active subslab depressurization system by knowing the average size, thickness, porosity, and permeabi...
The Effect of Error Correlation on Interfactor Correlation in Psychometric Measurement
ERIC Educational Resources Information Center
Westfall, Peter H.; Henning, Kevin S. S.; Howell, Roy D.
2012-01-01
This article shows how interfactor correlation is affected by error correlations. Theoretical and practical justifications for error correlations are given, and a new equivalence class of models is presented to explain the relationship between interfactor correlation and error correlations. The class allows simple, parsimonious modeling of error…
A SCREENING MODEL FOR SIMULATING DNAPL FLOW AND TRANSPORT IN POROUS MEDIA: THEORETICAL DEVELOPMENT
There exists a need for a simple tool that will allow us to analyze a DNAPL contamination scenario from free-product release to transport of soluble constituents to downgradient receptor wells. The objective of this manuscript is to present the conceptual model and formulate the ...
Synapse fits neuron: joint reduction by model inversion.
van der Scheer, H T; Doelman, A
2017-08-01
In this paper, we introduce a novel simplification method for dealing with physical systems that can be thought to consist of two subsystems connected in series, such as a neuron and a synapse. The aim of our method is to help find a simple, yet convincing model of the full cascade-connected system, assuming that a satisfactory model of one of the subsystems, e.g., the neuron, is already given. Our method allows us to validate a candidate model of the full cascade against data at a finer scale. In our main example, we apply our method to part of the squid's giant fiber system. We first postulate a simple, hypothetical model of cell-to-cell signaling based on the squid's escape response. Then, given a FitzHugh-type neuron model, we derive the verifiable model of the squid giant synapse that this hypothesis implies. We show that the derived synapse model accurately reproduces synaptic recordings, hence lending support to the postulated, simple model of cell-to-cell signaling, which thus, in turn, can be used as a basic building block for network models.
ALC: automated reduction of rule-based models
Koschorreck, Markus; Gilles, Ernst Dieter
2008-01-01
Background: Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction since the association of a few proteins can give rise to an enormous amount of feasible protein complexes. The layer-based approach is an approximative, but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results: ALC (Automated Layer Construction) is a computer program that highly simplifies the building of reduced modular models, according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion: ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files. PMID:18973705
Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.
Chatzis, Sotirios P; Andreou, Andreas S
2015-11-01
Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows for better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.
McKenna, James E.
2000-01-01
Although perceiving genetic differences and their effects on fish population dynamics is difficult, simulation models offer a means to explore and illustrate these effects. I partitioned the intrinsic rate of increase parameter of a simple logistic-competition model into three components, allowing specification of effects of relative differences in fitness and mortality, as well as finite rate of increase. This model was placed into an interactive, stochastic environment to allow easy manipulation of model parameters (FITPOP). Simulation results illustrated the effects of subtle differences in genetic and population parameters on total population size, overall fitness, and sensitivity of the system to variability. Several consequences of mixing genetically distinct populations were illustrated. For example, behaviors such as depression of population size after initial introgression and extirpation of native stocks due to continuous stocking of genetically inferior fish were reproduced. It was also shown that carrying capacity relative to the amount of stocking had an important influence on population dynamics. Uncertainty associated with parameter estimates reduced confidence in model projections. The FITPOP model provided a simple tool to explore population dynamics, which may assist in formulating management strategies and identifying research needs.
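To make the partitioning described above concrete, the following is a minimal Python sketch of a two-stock logistic-competition simulation in which the intrinsic rate of increase is split into a finite rate of increase, a relative-fitness factor, and a survival factor. All parameter names and values are illustrative and are not taken from FITPOP itself.

import numpy as np

def simulate(n_steps=200, K=1000.0, seed=0):
    """Two competing stocks (native, stocked) under stochastic logistic competition.

    The intrinsic rate of increase of each stock is expressed as the product of a
    finite rate of increase, a relative-fitness factor, and a survival
    (1 - mortality) factor -- an illustrative partition, not FITPOP's exact one.
    """
    rng = np.random.default_rng(seed)
    growth  = np.array([0.8, 0.8])    # finite rate of increase
    fitness = np.array([1.0, 0.9])    # relative fitness (stocked fish less fit)
    surv    = np.array([0.95, 0.95])  # 1 - baseline mortality
    r = growth * fitness * surv
    alpha = np.array([[1.0, 1.0],     # competition coefficients
                      [1.0, 1.0]])
    N = np.array([500.0, 50.0])       # initial native and stocked population sizes
    stocking = 20.0                   # stocked fish added each step
    history = [N.copy()]
    for _ in range(n_steps):
        env = rng.normal(1.0, 0.05, size=2)   # stochastic environmental factor
        crowding = alpha @ N / K
        N = N + r * env * N * (1.0 - crowding)
        N[1] += stocking
        N = np.maximum(N, 0.0)
        history.append(N.copy())
    return np.array(history)

if __name__ == "__main__":
    traj = simulate()
    print("final sizes (native, stocked):", traj[-1].round(1))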
Low Reynolds number two-equation modeling of turbulent flows
NASA Technical Reports Server (NTRS)
Michelassi, V.; Shih, T.-H.
1991-01-01
A k-epsilon model that accounts for viscous and wall effects is presented. The proposed formulation does not contain the local wall distance, which makes its application to complex geometries very simple. The formulation is based on an existing k-epsilon model that has been shown to agree very well with direct numerical simulation results. The new form is compared with nine different two-equation models and with direct numerical simulation for a fully developed channel flow at Re = 3300. The simple flow configuration allows a comparison free from numerical inaccuracies. The computed results show that only a few of the considered forms exhibit satisfactory agreement with the channel flow data. The proposed model shows an improvement with respect to the existing formulations.
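For reference, the generic low-Reynolds-number k-epsilon transport equations that such formulations modify can be written as below; the specific damping functions f_mu, f_1, f_2 of the model above are not reproduced here, the point of that model being that they are built from local turbulence quantities rather than from the wall distance.

\[
\frac{Dk}{Dt} = \frac{\partial}{\partial x_j}\left[\left(\nu + \frac{\nu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right] + P_k - \varepsilon ,
\]
\[
\frac{D\varepsilon}{Dt} = \frac{\partial}{\partial x_j}\left[\left(\nu + \frac{\nu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right] + C_{\varepsilon 1}\, f_1\, \frac{\varepsilon}{k}\, P_k - C_{\varepsilon 2}\, f_2\, \frac{\varepsilon^2}{k} ,
\qquad
\nu_t = C_\mu\, f_\mu\, \frac{k^2}{\varepsilon} .
\]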
Gironés, Xavier; Carbó-Dorca, Ramon; Ponec, Robert
2003-01-01
A new approach allowing the theoretical modeling of the electronic substituent effect is proposed. The approach is based on the use of fragment Quantum Self-Similarity Measures (MQS-SM) calculated from domain-averaged Fermi holes as new theoretical descriptors that allow the replacement of Hammett sigma constants in QSAR models. To demonstrate the applicability of this new approach, its formalism was applied to the description of the substituent effect on the dissociation of a broad series of meta- and para-substituted benzoic acids. The accuracy and predictive power of this new approach were tested by comparison with a recent exhaustive study by Sullivan et al. It has been shown that the accuracy and predictive power of both procedures are comparable, but, in contrast to the five-parameter correlation equation necessary to describe the data in that study, our approach is simpler: only a one-parameter correlation equation is required.
Assimilation of satellite color observations in a coupled ocean GCM-ecosystem model
NASA Technical Reports Server (NTRS)
Sarmiento, Jorge L.
1992-01-01
Monthly average coastal zone color scanner (CZCS) estimates of chlorophyll concentration were assimilated into an ocean global circulation model (GCM) containing a simple model of the pelagic ecosystem. The assimilation was performed in the simplest possible manner, to allow the assessment of whether there were major problems with the ecosystem model or with the assimilation procedure. The current ecosystem model performed well in some regions, but failed in others to assimilate chlorophyll estimates without disrupting important ecosystem properties. This experiment gave insight into those properties of the ecosystem model that must be changed to allow data assimilation to be generally successful, while raising other important issues about the assimilation procedure.
Szilágyi, N; Kovács, R; Kenyeres, I; Csikor, Zs
2013-01-01
Biofilm development in a fixed bed biofilm reactor system performing municipal wastewater treatment was monitored aiming at accumulating colonization and maximum biofilm mass data usable in engineering practice for process design purposes. Initially a 6 month experimental period was selected for investigations where the biofilm formation and the performance of the reactors were monitored. The results were analyzed by two methods: for simple, steady-state process design purposes the maximum biofilm mass on carriers versus influent load and a time constant of the biofilm growth were determined, whereas for design approaches using dynamic models a simple biofilm mass prediction model including attachment and detachment mechanisms was selected and fitted to the experimental data. According to a detailed statistical analysis, the collected data have not allowed us to determine both the time constant of biofilm growth and the maximum biofilm mass on carriers at the same time. The observed maximum biofilm mass could be determined with a reasonable error and ranged between 438 gTS/m² carrier surface and 843 gTS/m², depending on influent load and hydrodynamic conditions. The parallel analysis of the attachment-detachment model showed that the experimental data set allowed us to determine the attachment rate coefficient, which was in the range of 0.05-0.4 m d⁻¹ depending on influent load and hydrodynamic conditions.
Simplified aeroelastic modeling of horizontal axis wind turbines
NASA Technical Reports Server (NTRS)
Wendell, J. H.
1982-01-01
Certain aspects of the aeroelastic modeling and behavior of the horizontal axis wind turbine (HAWT) are examined. Two simple three degree of freedom models are described in this report, and tools are developed which allow other simple models to be derived. The first simple model developed is an equivalent hinge model to study the flap-lag-torsion aeroelastic stability of an isolated rotor blade. The model includes nonlinear effects, preconing, and noncoincident elastic axis, center of gravity, and aerodynamic center. A stability study is presented which examines the influence of key parameters on aeroelastic stability. Next, two general tools are developed to study the aeroelastic stability and response of a teetering rotor coupled to a flexible tower. The first of these tools is an aeroelastic model of a two-bladed rotor on a general flexible support. The second general tool is a harmonic balance solution method for the resulting second order system with periodic coefficients. The second simple model developed is a rotor-tower model which serves to demonstrate the general tools. This model includes nacelle yawing, nacelle pitching, and rotor teetering. Transient response time histories are calculated and compared to a similar model in the literature. Agreement between the two is very good, especially considering how few harmonics are used. Finally, a stability study is presented which examines the effects of support stiffness and damping, inflow angle, and preconing.
Use of paired simple and complex models to reduce predictive bias and quantify uncertainty
NASA Astrophysics Data System (ADS)
Doherty, John; Christensen, Steen
2011-12-01
Modern environmental management and decision-making is based on the use of increasingly complex numerical models. Such models have the advantage of allowing representation of complex processes and heterogeneous system property distributions inasmuch as these are understood at any particular study site. The latter are often represented stochastically, this reflecting knowledge of the character of system heterogeneity at the same time as it reflects a lack of knowledge of its spatial details. Unfortunately, however, complex models are often difficult to calibrate because of their long run times and sometimes questionable numerical stability. Analysis of predictive uncertainty is also a difficult undertaking when using models such as these. Such analysis must reflect a lack of knowledge of spatial hydraulic property details. At the same time, it must be subject to constraints on the spatial variability of these details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often promulgates good calibration and ready implementation of sophisticated methods of calibration-constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology that attempts to overcome the problems associated with complex models on the one hand and simple models on the other hand, while allowing access to the benefits each of them offers. It provides a theoretical analysis of the simplification process from a subspace point of view, this yielding insights into the costs of model simplification, and into how some of these costs may be reduced. It then describes a methodology for paired model usage through which predictive bias of a simplified model can be detected and corrected, and postcalibration predictive uncertainty can be quantified. The methodology is demonstrated using a synthetic example based on groundwater modeling environments commonly encountered in northern Europe and North America.
Goal programming for land use planning.
Enoch F. Bell
1976-01-01
A simple transformation of the linear programing model used in land use planning to a goal programing model allows the multiple goals implied by multiple use management to be explicitly recognized. This report outlines the procedure for accomplishing the transformation and discusses problems with use of goal programing. Of particular concern are the expert opinions...
The PMDB Protein Model Database
Castrignanò, Tiziana; De Meo, Paolo D'Onorio; Cozzetto, Domenico; Talamo, Ivano Giuseppe; Tramontano, Anna
2006-01-01
The Protein Model Database (PMDB) is a public resource aimed at storing manually built 3D models of proteins. The database is designed to provide access to models published in the scientific literature, together with validating experimental data. It is a relational database and it currently contains >74 000 models for ∼240 proteins. The system is accessible at and allows predictors to submit models along with related supporting evidence and users to download them through a simple and intuitive interface. Users can navigate in the database and retrieve models referring to the same target protein or to different regions of the same protein. Each model is assigned a unique identifier that allows interested users to directly access the data. PMID:16381873
SIMPL Systems, or: Can We Design Cryptographic Hardware without Secret Key Information?
NASA Astrophysics Data System (ADS)
Rührmair, Ulrich
This paper discusses a new cryptographic primitive termed SIMPL system. Roughly speaking, a SIMPL system is a special type of Physical Unclonable Function (PUF) which possesses a binary description that allows its (slow) public simulation and prediction. Besides this public key like functionality, SIMPL systems have another advantage: No secret information is, or needs to be, contained in SIMPL systems in order to enable cryptographic protocols - neither in the form of a standard binary key, nor as secret information hidden in random, analog features, as it is the case for PUFs. The cryptographic security of SIMPLs instead rests on (i) a physical assumption on their unclonability, and (ii) a computational assumption regarding the complexity of simulating their output. This novel property makes SIMPL systems potentially immune against many known hardware and software attacks, including malware, side channel, invasive, or modeling attacks.
NASA Astrophysics Data System (ADS)
Kelleher, Christa A.; Shaw, Stephen B.
2018-02-01
Recent research has found that hydrologic modeling over decadal time periods often requires time variant model parameters. Most prior work has focused on assessing time variance in model parameters conceptualizing watershed features and functions. In this paper, we assess whether adding a time variant scalar to potential evapotranspiration (PET) can be used in place of time variant parameters. Using the HBV hydrologic model and four different simple but common PET methods (Hamon, Priestley-Taylor, Oudin, and Hargreaves), we simulated 60+ years of daily discharge on four rivers in New York state. Allowing all ten model parameters to vary in time achieved good model fits in terms of daily NSE and long-term water balance. However, allowing single model parameters to vary in time - including a scalar on PET - achieved nearly equivalent model fits across PET methods. Overall, varying a PET scalar in time is likely more physically consistent with known biophysical controls on PET as compared to varying parameters conceptualizing innate watershed properties related to soil properties such as wilting point and field capacity. This work suggests that the seeming need for time variance in innate watershed parameters may be due to overly simple evapotranspiration formulations that do not account for all factors controlling evapotranspiration over long time periods.
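A minimal sketch of the central idea above is simply to apply a slowly varying multiplicative scalar to the PET series before it is supplied to the conceptual model; HBV itself is not reproduced here, and the linear-in-time scalar and its end values are purely illustrative (a real application might instead calibrate one scalar per decade).

import numpy as np

def scaled_pet(pet_daily, years, scalar_start=1.0, scalar_end=0.85):
    """Apply a slowly varying multiplicative scalar to a daily PET series.

    pet_daily : array of daily PET values (mm/day), e.g. from Hamon or Oudin
    years     : decimal year of each daily value
    The scalar varies linearly in time between scalar_start and scalar_end
    (illustrative choice only).
    """
    frac = (years - years.min()) / (years.max() - years.min())
    scalar = scalar_start + (scalar_end - scalar_start) * frac
    return scalar * pet_daily

if __name__ == "__main__":
    days = np.arange(0, 365 * 60)                          # 60 years of daily values
    years = 1950 + days / 365.25
    pet = 2.0 + 1.5 * np.sin(2 * np.pi * days / 365.25)    # synthetic seasonal PET series
    pet_adj = scaled_pet(pet, years)
    print(round(float(pet.mean()), 2), round(float(pet_adj.mean()), 2))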
NASA Technical Reports Server (NTRS)
Randall, David A.
1990-01-01
A bulk planetary boundary layer (PBL) model was developed with a simple internal vertical structure and a simple second-order closure, designed for use as a PBL parameterization in a large-scale model. The model allows the mean fields to vary with height within the PBL, and so must address the vertical profiles of the turbulent fluxes, going beyond the usual mixed-layer assumption that the fluxes of conservative variables are linear with height. This is accomplished using the same convective mass flux approach that has also been used in cumulus parameterizations. The purpose is to show that such a mass flux model can include, in a single framework, the compensating subsidence concept, downgradient mixing, and well-mixed layers.
Plant Intellectual Property Transfer Mechanisms at US Universities.
ERIC Educational Resources Information Center
Price, Steven C.; Renk, Bryan Z.
2000-01-01
U.S. colleges of agriculture and technology transfer offices have historically been in conflict over the management of plant varieties. A simple model that would allow these competing systems to become integrated uses a decision tree. (Author/JOW)
Kinetics of DSB rejoining and formation of simple chromosome exchange aberrations
NASA Technical Reports Server (NTRS)
Cucinotta, F. A.; Nikjoo, H.; O'Neill, P.; Goodhead, D. T.
2000-01-01
PURPOSE: To investigate the role of kinetics in the processing of DNA double strand breaks (DSB), and the formation of simple chromosome exchange aberrations following X-ray exposures to mammalian cells based on an enzymatic approach. METHODS: Using computer simulations based on a biochemical approach, rate-equations that describe the processing of DSB through the formation of a DNA-enzyme complex were formulated. A second model that allows for competition between two processing pathways was also formulated. The formation of simple exchange aberrations was modelled as misrepair during the recombination of single DSB with undamaged DNA. Non-linear coupled differential equations corresponding to biochemical pathways were solved numerically by fitting to experimental data. RESULTS: When mediated by a DSB repair enzyme complex, the processing of single DSB showed a complex behaviour that gives the appearance of fast and slow components of rejoining. This is due to the time-delay caused by the action time of enzymes in biomolecular reactions. It is shown that the kinetic- and dose-responses of simple chromosome exchange aberrations are well described by a recombination model of DSB interacting with undamaged DNA when aberration formation increases with linear dose-dependence. Competition between two or more recombination processes is shown to lead to the formation of simple exchange aberrations with a dose-dependence similar to that of a linear quadratic model. CONCLUSIONS: Using a minimal number of assumptions, the kinetics and dose response observed experimentally for DSB rejoining and the formation of simple chromosome exchange aberrations are shown to be consistent with kinetic models based on enzymatic reaction approaches. A non-linear dose response for simple exchange aberrations is possible in a model of recombination of DNA containing a DSB with undamaged DNA when two or more pathways compete for DSB repair.
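The rate-equation structure described above can be illustrated with a minimal mass-action scheme in which DSBs bind a limited pool of repair enzyme to form a complex that is then resolved into rejoined DNA; the rate constants and initial conditions below are illustrative stand-ins, not the paper's fitted values.

import numpy as np
from scipy.integrate import odeint

def dsb_kinetics(y, t, k_on, k_off, k_rejoin):
    """Rate equations for DSB + repair enzyme <-> complex -> rejoined DNA.

    y = [B, E, C, R]: free DSBs, free enzyme, DSB-enzyme complex, rejoined breaks.
    Illustrative mass-action scheme; enzyme is released when a break is rejoined.
    """
    B, E, C, R = y
    form = k_on * B * E
    dB = -form + k_off * C
    dE = -form + (k_off + k_rejoin) * C
    dC = form - (k_off + k_rejoin) * C
    dR = k_rejoin * C
    return [dB, dE, dC, dR]

if __name__ == "__main__":
    t = np.linspace(0.0, 8.0, 200)            # hours after exposure
    y0 = [40.0, 10.0, 0.0, 0.0]               # ~40 DSBs per cell, limited enzyme pool
    sol = odeint(dsb_kinetics, y0, t, args=(0.05, 0.01, 0.5))
    unrejoined = sol[:, 0] + sol[:, 2]
    print("fraction unrejoined at 2 h:",
          round(unrejoined[t.searchsorted(2.0)] / 40.0, 3))

Because the enzyme pool is limited, the time course of unrejoined breaks produced by such a scheme is not a single exponential and gives the appearance of fast and slow rejoining components, which is the behavior noted in the abstract.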
Collins, Anne G. E.; Frank, Michael J.
2012-01-01
Instrumental learning involves corticostriatal circuitry and the dopaminergic system. This system is typically modeled in the reinforcement learning (RL) framework by incrementally accumulating reward values of states and actions. However, human learning also implicates prefrontal cortical mechanisms involved in higher level cognitive functions. The interaction of these systems remains poorly understood, and models of human behavior often ignore working memory (WM) and therefore incorrectly assign behavioral variance to the RL system. Here we designed a task that highlights the profound entanglement of these two processes, even in simple learning problems. By systematically varying the size of the learning problem and delay between stimulus repetitions, we separately extracted WM-specific effects of load and delay on learning. We propose a new computational model that accounts for the dynamic integration of RL and WM processes observed in subjects' behavior. Incorporating capacity-limited WM into the model allowed us to capture behavioral variance that could not be captured in a pure RL framework even if we (implausibly) allowed separate RL systems for each set size. The WM component also allowed for a more reasonable estimation of a single RL process. Finally, we report effects of two genetic polymorphisms having relative specificity for prefrontal and basal ganglia functions. Whereas the COMT gene coding for catechol-O-methyl transferase selectively influenced model estimates of WM capacity, the GPR6 gene coding for G-protein-coupled receptor 6 influenced the RL learning rate. Thus, this study allowed us to specify distinct influences of the high-level and low-level cognitive functions on instrumental learning, beyond the possibilities offered by simple RL models. PMID:22487033
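The interaction described above can be caricatured by mixing a capacity-limited working-memory module with an incremental Q-learning module; the sketch below is a schematic illustration only, not the authors' published model, and the weighting rule, capacity, and learning parameters are assumed for the example.

import numpy as np

def simulate_block(set_size, n_actions=3, n_trials=150, capacity=3,
                   alpha=0.1, beta=8.0, rho=0.9, seed=0):
    """Choices from a weighted mix of RL (Q-learning) and a capacity-limited WM.

    WM stores the last rewarded action for at most `capacity` stimuli and its
    influence is scaled by rho * min(1, capacity / set_size); RL learns slowly
    but without a capacity limit.  Schematic only.
    """
    rng = np.random.default_rng(seed)
    correct = rng.integers(0, n_actions, size=set_size)  # hidden correct action per stimulus
    Q = np.full((set_size, n_actions), 1.0 / n_actions)
    wm = {}                                              # stimulus -> remembered action
    w = rho * min(1.0, capacity / set_size)
    n_correct = 0.0
    for _ in range(n_trials):
        s = rng.integers(0, set_size)
        p_rl = np.exp(beta * Q[s]); p_rl /= p_rl.sum()
        if s in wm:
            p_wm = np.full(n_actions, 1e-3); p_wm[wm[s]] = 1.0; p_wm /= p_wm.sum()
        else:
            p_wm = np.full(n_actions, 1.0 / n_actions)
        p = w * p_wm + (1.0 - w) * p_rl
        a = rng.choice(n_actions, p=p)
        r = 1.0 if a == correct[s] else 0.0
        Q[s, a] += alpha * (r - Q[s, a])
        if r == 1.0:
            wm[s] = a
            if len(wm) > capacity:                       # forget the oldest item
                wm.pop(next(iter(wm)))
        n_correct += r
    return n_correct / n_trials

if __name__ == "__main__":
    for ns in (2, 3, 6):
        print("set size", ns, "accuracy", round(simulate_block(ns), 2))

With such a mixture, accuracy degrades with set size mainly through the WM term, which is the kind of load effect the task above was designed to isolate.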
Learning Orthographic Structure With Sequential Generative Neural Networks.
Testolin, Alberto; Stoianov, Ivilin; Sperduti, Alessandro; Zorzi, Marco
2016-04-01
Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine (RBM), a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual information in the form of internal, distributed representations. We assessed whether this type of network can extract the orthographic structure of English monosyllables by learning a generative model of the letter sequences forming a word training corpus. We show that the network learned an accurate probabilistic model of English graphotactics, which can be used to make predictions about the letter following a given context as well as to autonomously generate high-quality pseudowords. The model was compared to an extended version of simple recurrent networks, augmented with a stochastic process that allows autonomous generation of sequences, and to non-connectionist probabilistic models (n-grams and hidden Markov models). We conclude that sequential RBMs and stochastic simple recurrent networks are promising candidates for modeling cognition in the temporal domain. Copyright © 2015 Cognitive Science Society, Inc.
Modeling Studies of the Effects of Winds and Heat Flux on the Tropical Oceans
NASA Technical Reports Server (NTRS)
Seager, R.
1999-01-01
Over a decade ago, funding from this NASA grant supported the development of the Cane-Zebiak ENSO prediction model which remains in use to this day. It also supported our work developing schemes for modeling the air-sea heat flux in ocean models used for studying climate variability. We introduced a succession of simple boundary layer models that allow the fluxes to be computed internally in the model and avoid the need to specify the atmospheric thermodynamic state. These models have now reached a level of generality that allows modeling of the global, rather than just tropical, ocean, including sea ice cover. The most recent versions of these boundary layer models have been widely distributed around the world and are in use by many ocean modeling groups.
Fitting neuron models to spike trains.
Rossant, Cyrille; Goodman, Dan F M; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K; Brette, Romain
2011-01-01
Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.
Application of Influence Diagrams in Identifying Soviet Satellite Missions
1990-12-01
Influence diagramming is a method which allows the simple construction of a model to illustrate the interrelationships which exist among variables.
Rubber Bands as Model Polymers in Couette Flow
ERIC Educational Resources Information Center
Dunstan, Dave E.
2008-01-01
We present a simple device for demonstrating the essential aspects of polymers in flow in the classroom. Rubber bands are used as a macroscopic model of polymers to allow direct visual observation of the flow-induced changes in orientation and conformation. A transparent Perspex Couette cell, constructed from two sections of a tube, is used to…
NASA Astrophysics Data System (ADS)
Wilds, Roy; Kauffman, Stuart A.; Glass, Leon
2008-09-01
We study the evolution of complex dynamics in a model of a genetic regulatory network. The fitness is associated with the topological entropy in a class of piecewise linear equations, and the mutations are associated with changes in the logical structure of the network. We compare hill climbing evolution, in which only mutations that increase the fitness are allowed, with neutral evolution, in which mutations that leave the fitness unchanged are allowed. The simple structure of the fitness landscape enables us to estimate analytically the rates of hill climbing and neutral evolution. In this model, allowing neutral mutations accelerates the rate of evolutionary advancement for low mutation frequencies. These results are applicable to evolution in natural and technological systems.
Goychuk, I
2001-08-01
Stochastic resonance in a simple model of information transfer is studied for sensory neurons and ensembles of ion channels. An exact expression for the information gain is obtained for the Poisson process with the signal-modulated spiking rate. This result allows one to generalize the conventional stochastic resonance (SR) problem (with periodic input signal) to the arbitrary signals of finite duration (nonstationary SR). Moreover, in the case of a periodic signal, the rate of information gain is compared with the conventional signal-to-noise ratio. The paper establishes the general nonequivalence between both measures notwithstanding their apparent similarity in the limit of weak signals.
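The setting described above, a Poisson spike train whose rate is weakly modulated by a signal, is easy to reproduce numerically; the sketch below generates such a train by thinning, with illustrative rate and modulation parameters (the information-gain expression itself is not reproduced here).

import numpy as np

def modulated_poisson_spikes(T=10.0, r0=20.0, a=0.2, f=2.0, seed=0):
    """Spike times of a Poisson process with rate r(t) = r0 * (1 + a*sin(2*pi*f*t)).

    Generated by thinning: candidate events are drawn at the maximal rate
    r0*(1+|a|) and each is kept with probability r(t)/r_max.
    """
    rng = np.random.default_rng(seed)
    r_max = r0 * (1.0 + abs(a))
    n_candidates = rng.poisson(r_max * T)
    t_cand = np.sort(rng.uniform(0.0, T, n_candidates))
    rate = r0 * (1.0 + a * np.sin(2.0 * np.pi * f * t_cand))
    keep = rng.uniform(0.0, 1.0, n_candidates) < rate / r_max
    return t_cand[keep]

if __name__ == "__main__":
    spikes = modulated_poisson_spikes()
    print(len(spikes), "spikes in 10 s; mean rate",
          round(len(spikes) / 10.0, 1), "Hz")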
In vivo neuronal calcium imaging in C. elegans.
Chung, Samuel H; Sun, Lin; Gabel, Christopher V
2013-04-10
The nematode worm C. elegans is an ideal model organism for relatively simple, low cost neuronal imaging in vivo. Its small transparent body and simple, well-characterized nervous system allows identification and fluorescence imaging of any neuron within the intact animal. Simple immobilization techniques with minimal impact on the animal's physiology allow extended time-lapse imaging. The development of genetically-encoded calcium sensitive fluorophores such as cameleon and GCaMP allow in vivo imaging of neuronal calcium relating both cell physiology and neuronal activity. Numerous transgenic strains expressing these fluorophores in specific neurons are readily available or can be constructed using well-established techniques. Here, we describe detailed procedures for measuring calcium dynamics within a single neuron in vivo using both GCaMP and cameleon. We discuss advantages and disadvantages of both as well as various methods of sample preparation (animal immobilization) and image analysis. Finally, we present results from two experiments: 1) Using GCaMP to measure the sensory response of a specific neuron to an external electrical field and 2) Using cameleon to measure the physiological calcium response of a neuron to traumatic laser damage. Calcium imaging techniques such as these are used extensively in C. elegans and have been extended to measurements in freely moving animals, multiple neurons simultaneously and comparison across genetic backgrounds. C. elegans presents a robust and flexible system for in vivo neuronal imaging with advantages over other model systems in technical simplicity and cost.
A Simple Model of the Pulmonary Circulation for Hemodynamic Study and Examination.
ERIC Educational Resources Information Center
Gaar, Kermit A., Jr.
1983-01-01
Describes a computer program allowing students to study such circulatory variables as venous return, cardiac output, mean circulatory filling pressure, resistance to venous return, and equilibrium point. Documentation for this Applesoft program (or diskette) is available from author. (JM)
Time-dependent inhomogeneous jet models for BL Lac objects
NASA Technical Reports Server (NTRS)
Marlowe, A. T.; Urry, C. M.; George, I. M.
1992-01-01
Relativistic beaming can explain many of the observed properties of BL Lac objects (e.g., rapid variability, high polarization, etc.). In particular, the broadband radio through X-ray spectra are well modeled by synchrotron self-Compton emission from an inhomogeneous relativistic jet. We have done a uniform analysis on several BL Lac objects using a simple but plausible inhomogeneous jet model. For all objects, we found that the assumed power-law distribution of the magnetic field and the electron density can be adjusted to match the observed BL Lac spectrum. While such models are typically unconstrained, consideration of spectral variability strongly restricts the allowed parameters, although to date the sampling has generally been too sparse to constrain the current models effectively. We investigate the time evolution of the inhomogeneous jet model for a simple perturbation propagating along the jet. The implications of this time evolution model and its relevance to observed data are discussed.
Time-dependent inhomogeneous jet models for BL Lac objects
NASA Astrophysics Data System (ADS)
Marlowe, A. T.; Urry, C. M.; George, I. M.
1992-05-01
Relativistic beaming can explain many of the observed properties of BL Lac objects (e.g., rapid variability, high polarization, etc.). In particular, the broadband radio through X-ray spectra are well modeled by synchrotron self-Compton emission from an inhomogeneous relativistic jet. We have done a uniform analysis on several BL Lac objects using a simple but plausible inhomogeneous jet model. For all objects, we found that the assumed power-law distribution of the magnetic field and the electron density can be adjusted to match the observed BL Lac spectrum. While such models are typically unconstrained, consideration of spectral variability strongly restricts the allowed parameters, although to date the sampling has generally been too sparse to constrain the current models effectively. We investigate the time evolution of the inhomogeneous jet model for a simple perturbation propagating along the jet. The implications of this time evolution model and its relevance to observed data are discussed.
Estimation of kinematic parameters in CALIFA galaxies: no-assumption on internal dynamics
NASA Astrophysics Data System (ADS)
García-Lorenzo, B.; Barrera-Ballesteros, J.; CALIFA Team
2016-06-01
We propose a simple approach to homogeneously estimate kinematic parameters for a broad variety of galaxies (ellipticals, spirals, irregulars, or interacting systems). This methodology avoids the use of any kinematic model or any assumption on internal dynamics. This simple but novel approach allows us to determine the frequency of kinematic distortions, the systemic velocity, the kinematic center, and the kinematic position angles, which are measured directly from the two-dimensional distributions of radial velocities. We test our analysis tools using data from the CALIFA Survey.
Soldering to a single atomic layer
NASA Astrophysics Data System (ADS)
Girit, Çağlar Ö.; Zettl, A.
2007-11-01
The standard technique to make electrical contact to nanostructures is electron beam lithography. This method has several drawbacks including complexity, cost, and sample contamination. We present a simple technique to cleanly solder submicron sized, Ohmic contacts to nanostructures. To demonstrate, we contact graphene, a single atomic layer of carbon, and investigate low- and high-bias electronic transport. We set lower bounds on the current carrying capacity of graphene. A simple model allows us to obtain device characteristics such as mobility, minimum conductance, and contact resistance.
Soldering to a single atomic layer
NASA Astrophysics Data System (ADS)
Girit, Caglar; Zettl, Alex
2008-03-01
The standard technique to make electrical contact to nanostructures is electron beam lithography. This method has several drawbacks including complexity, cost, and sample contamination. We present a simple technique to cleanly solder submicron sized, Ohmic contacts to nanostructures. To demonstrate, we contact graphene, a single atomic layer of carbon, and investigate low- and high-bias electronic transport. We set lower bounds on the current carrying capacity of graphene. A simple model allows us to obtain device characteristics such as mobility, minimum conductance, and contact resistance.
A simple nonlocal damage model for predicting failure of notched laminates
NASA Technical Reports Server (NTRS)
Kennedy, T. C.; Nahan, M. F.
1995-01-01
The ability to predict failure loads in notched composite laminates is a requirement in a variety of structural design circumstances. A complicating factor is the development of a zone of damaged material around the notch tip. The objective of this study was to develop a computational technique that simulates progressive damage growth around a notch in a manner that allows the prediction of failure over a wide range of notch sizes. This was accomplished through the use of a relatively simple, nonlocal damage model that incorporates strain-softening. This model was implemented in a two-dimensional finite element program. Calculations were performed for two different laminates with various notch sizes under tensile loading, and the calculations were found to correlate well with experimental results.
Simple Climate Model Evaluation Using Impulse Response Tests
NASA Astrophysics Data System (ADS)
Schwarber, A.; Hartin, C.; Smith, S. J.
2017-12-01
Simple climate models (SCMs) are central tools used to incorporate climate responses into human-Earth system modeling. SCMs are computationally inexpensive, making them an ideal tool for a variety of analyses, including consideration of uncertainty. Despite their wide use, many SCMs lack rigorous testing of their fundamental responses to perturbations. Here, following recommendations of a recent National Academy of Sciences report, we compare several SCMs (Hector-deoclim, MAGICC 5.3, MAGICC 6.0, and the IPCC AR5 impulse response function) to diagnose model behavior and understand the fundamental system responses within each model. We conduct stylized perturbations (emissions and forcing/concentration) of three different chemical species: CO2, CH4, and BC. We find that all 4 models respond similarly in terms of overall shape, however, there are important differences in the timing and magnitude of the responses. For example, the response to a BC pulse differs over the first 20 years after the pulse among the models, a finding that is due to differences in model structure. Such perturbation experiments are difficult to conduct in complex models due to internal model noise, making a direct comparison with simple models challenging. We can, however, compare the simplified model response from a 4xCO2 step experiment to the same stylized experiment carried out by CMIP5 models, thereby testing the ability of SCMs to emulate complex model results. This work allows an assessment of how well current understanding of Earth system responses are incorporated into multi-model frameworks by way of simple climate models.
Design and Training of Limited-Interconnect Architectures
1991-07-16
and signal processing. Neuromorphic (brain-like) models allow an alternative for achieving real-time operation for such tasks, while having a...compact and robust architecture. Neuromorphic models consist of interconnections of simple computational nodes. In this approach, each node computes a...operational performance. II. Research Objectives. The research objectives were: 1. Development of on-chip local training rules specifically designed for
Using Confidence as Feedback in Multi-Sized Learning Environments
ERIC Educational Resources Information Center
Hench, Thomas L.
2014-01-01
This paper describes the use of existing confidence and performance data to provide feedback by first demonstrating the data's fit to a simple linear model. The paper continues by showing how the model's use as a benchmark provides feedback to allow current or future students to infer either the difficulty or the degree of under or over…
Context-dependent decision-making: a simple Bayesian model
Lloyd, Kevin; Leslie, David S.
2013-01-01
Many phenomena in animal learning can be explained by a context-learning process whereby an animal learns about different patterns of relationship between environmental variables. Differentiating between such environmental regimes or ‘contexts’ allows an animal to rapidly adapt its behaviour when context changes occur. The current work views animals as making sequential inferences about current context identity in a world assumed to be relatively stable but also capable of rapid switches to previously observed or entirely new contexts. We describe a novel decision-making model in which contexts are assumed to follow a Chinese restaurant process with inertia and full Bayesian inference is approximated by a sequential-sampling scheme in which only a single hypothesis about current context is maintained. Actions are selected via Thompson sampling, allowing uncertainty in parameters to drive exploration in a straightforward manner. The model is tested on simple two-alternative choice problems with switching reinforcement schedules and the results compared with rat behavioural data from a number of T-maze studies. The model successfully replicates a number of important behavioural effects: spontaneous recovery, the effect of partial reinforcement on extinction and reversal, the overtraining reversal effect, and serial reversal-learning effects. PMID:23427101
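The ingredients named above can be caricatured in a few lines: per-context Beta posteriors over the reward probability of each alternative, Thompson sampling for action selection, and a crude single-hypothesis context tracker that opens a new context when recent outcomes become very unlikely. This sketch stands in for, and greatly simplifies, the paper's Chinese-restaurant-process inference; all thresholds and parameters are assumed for illustration.

import numpy as np

def run(n_trials=400, reversal_at=200, seed=0):
    """Two-alternative choice with Thompson sampling and a crude context tracker."""
    rng = np.random.default_rng(seed)
    true_p = np.array([0.8, 0.2])                 # reward probability per arm; reversed mid-run
    alpha = {0: np.ones(2)}                       # Beta successes per context
    beta  = {0: np.ones(2)}                       # Beta failures per context
    context, recent_errors, choices = 0, [], []
    for t in range(n_trials):
        if t == reversal_at:
            true_p = true_p[::-1]                 # reversal of the reinforcement schedule
        theta = rng.beta(alpha[context], beta[context])   # Thompson sample per arm
        a = int(np.argmax(theta))
        r = rng.random() < true_p[a]
        alpha[context][a] += r
        beta[context][a] += 1 - r
        recent_errors.append(0 if r else 1)
        # crude context-change detector: a run of mostly unrewarded choices
        if len(recent_errors) > 10 and np.mean(recent_errors[-10:]) > 0.8:
            context += 1
            alpha[context], beta[context] = np.ones(2), np.ones(2)
            recent_errors = []
        choices.append(a)
    return np.array(choices)

if __name__ == "__main__":
    c = run()
    print("P(arm 0) before reversal:", round(1 - c[:200].mean(), 2),
          " after reversal:", round(1 - c[200:].mean(), 2))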
Context-dependent decision-making: a simple Bayesian model.
Lloyd, Kevin; Leslie, David S
2013-05-06
Many phenomena in animal learning can be explained by a context-learning process whereby an animal learns about different patterns of relationship between environmental variables. Differentiating between such environmental regimes or 'contexts' allows an animal to rapidly adapt its behaviour when context changes occur. The current work views animals as making sequential inferences about current context identity in a world assumed to be relatively stable but also capable of rapid switches to previously observed or entirely new contexts. We describe a novel decision-making model in which contexts are assumed to follow a Chinese restaurant process with inertia and full Bayesian inference is approximated by a sequential-sampling scheme in which only a single hypothesis about current context is maintained. Actions are selected via Thompson sampling, allowing uncertainty in parameters to drive exploration in a straightforward manner. The model is tested on simple two-alternative choice problems with switching reinforcement schedules and the results compared with rat behavioural data from a number of T-maze studies. The model successfully replicates a number of important behavioural effects: spontaneous recovery, the effect of partial reinforcement on extinction and reversal, the overtraining reversal effect, and serial reversal-learning effects.
Metallurgical Plant Optimization Through the use of Flowsheet Simulation Modelling
NASA Astrophysics Data System (ADS)
Kennedy, Mark William
Modern metallurgical plants typically have complex flowsheets and operate on a continuous basis. Real time interactions within such processes can be complex and the impacts of streams such as recycles on process efficiency and stability can be highly unexpected prior to actual operation. Current desktop computing power, combined with state-of-the-art flowsheet simulation software like Metsim, allow for thorough analysis of designs to explore the interaction between operating rate, heat and mass balances and in particular the potential negative impact of recycles. Using plant information systems, it is possible to combine real plant data with simple steady state models, using dynamic data exchange links to allow for near real time de-bottlenecking of operations. Accurate analytical results can also be combined with detailed unit operations models to allow for feed-forward model-based-control. This paper will explore some examples of the application of Metsim to real world engineering and plant operational issues.
Study of the Local Horizon. (Spanish Title: Estudio del Horizonte Local.) Estudo do Horizonte Local
NASA Astrophysics Data System (ADS)
Ros, Rosa M.
2009-12-01
The study of the horizon is fundamental to facilitate the first observations by students at any education center. A simple model, to be developed for each center, makes it easier to study and understand the rudiments of astronomy. The constructed model also serves as a simple equatorial clock, and other models (horizontal and vertical) may be constructed starting from it.
Scherzinger, William M.
2016-05-01
The numerical integration of constitutive models in computational solid mechanics codes allows for the solution of boundary value problems involving complex material behavior. Metal plasticity models, in particular, have been instrumental in the development of these codes. Here, most plasticity models implemented in computational codes use an isotropic von Mises yield surface. The von Mises, or J2, yield surface has a simple predictor-corrector algorithm - the radial return algorithm - to integrate the model.
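For J2 plasticity with linear isotropic hardening, the radial return mentioned above has a compact closed form; a minimal small-strain sketch follows, with illustrative material constants (not necessarily the models treated in the paper).

import numpy as np

def radial_return(strain_inc, eps_e_old, ep_old,
                  E=200e3, nu=0.3, sigma_y0=250.0, H=1000.0):
    """One elastic-predictor / radial-return-corrector step for J2 plasticity.

    strain_inc : total strain increment (3x3, small strain)
    eps_e_old  : elastic strain at the start of the step (3x3)
    ep_old     : accumulated equivalent plastic strain
    Linear isotropic hardening: sigma_y = sigma_y0 + H * ep.  Units: MPa.
    """
    G = E / (2.0 * (1.0 + nu))
    K = E / (3.0 * (1.0 - 2.0 * nu))
    eps_e = eps_e_old + strain_inc                    # elastic predictor
    vol = np.trace(eps_e)
    dev = eps_e - vol / 3.0 * np.eye(3)
    s_trial = 2.0 * G * dev                           # trial deviatoric stress
    q_trial = np.sqrt(1.5 * np.tensordot(s_trial, s_trial))
    f = q_trial - (sigma_y0 + H * ep_old)
    if f <= 0.0:                                      # elastic step
        ep_new, s = ep_old, s_trial
    else:                                             # return radially to the yield surface
        dgamma = f / (3.0 * G + H)
        ep_new = ep_old + dgamma
        s = (1.0 - 3.0 * G * dgamma / q_trial) * s_trial
    sigma = s + K * vol * np.eye(3)
    eps_e_new = s / (2.0 * G) + vol / 3.0 * np.eye(3)
    return sigma, eps_e_new, ep_new

if __name__ == "__main__":
    d_eps = np.diag([2e-3, -0.6e-3, -0.6e-3])         # roughly uniaxial stretch increment
    sigma, eps_e, ep = radial_return(d_eps, np.zeros((3, 3)), 0.0)
    print("axial stress:", round(sigma[0, 0], 1), "MPa, plastic strain:", round(ep, 5))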
NASA Astrophysics Data System (ADS)
Starosolski, Zbigniew; Ezon, David S.; Krishnamurthy, Rajesh; Dodd, Nicholas; Heinle, Jeffrey; Mckenzie, Dean E.; Annapragada, Ananth
2017-03-01
We developed a technology that allows a simple desktop 3D printer with a dual extruder to fabricate flexible 3D models of Major AortoPulmonary Collateral Arteries (MAPCAs). The study was designed to assess whether the flexible 3D printed models could help during the surgical planning phase. Simple FDM 3D printers are inexpensive, versatile, and easy to maintain, but complications arise when the designed model is complex and has tubular structures with diameters of less than 2 mm. The advantages of FDM printers are cost and simplicity of use. We use precisely selected materials to overcome the obstacles listed above. The dual extruder allows two different materials to be used while printing, which is especially important in the case of fragile structures like pulmonary vessels and their supporting structures; the latter should not be removed by hand, to avoid truncation of the model. We utilize water-soluble PVA as the supporting structure and Poro-Lay filament for the flexible model of the aortopulmonary collateral arteries. Poro-Lay filament differs from other flexible, polymer-based filaments: it is rigid while printing, which allows printing of structures that are small in diameter, and it becomes flexible, soft to the touch, and gelatinous after the printed model is washed with water. Using both PVA and Poro-Lay gives a major advantage, allowing the supporting structures to be washed out and flexibility to be achieved in one washing operation, saving time and avoiding human error in cleaning the model. We evaluated 6 models for the MAPCA surgical planning study. This approach is also cost-effective - the average cost of materials per print is less than $15, and models are printed in-house without any delays. The flexibility of the 3D printed models approximates soft tissue properly, mimicking aortopulmonary collateral arteries. A second use of the models is educational, for both residents and patients' families. Simplification of the flexible 3D printing process could help with other models of soft tissue pathologies such as aneurysms, ventricular septal defects, and other vascular anomalies.
Extended inflation from higher dimensional theories
NASA Technical Reports Server (NTRS)
Holman, Richard; Kolb, Edward W.; Vadas, Sharon L.; Wang, Yun
1990-01-01
The possibility is considered that higher dimensional theories may, upon reduction to four dimensions, allow extended inflation to occur. Two separate models are analyzed. One is a very simple toy model consisting of higher dimensional gravity coupled to a scalar field whose potential allows for a first-order phase transition. The other is a more sophisticated model incorporating the effects of non-trivial field configurations (monopole, Casimir, and fermion bilinear condensate effects) that yield a non-trivial potential for the radius of the internal space. It was found that extended inflation does not occur in these models. It was also found that the bubble nucleation rate in these theories is time dependent, unlike the case in the original version of extended inflation.
NASA Astrophysics Data System (ADS)
Pelamatti, Alice; Goiffon, Vincent; Chabane, Aziouz; Magnan, Pierre; Virmontois, Cédric; Saint-Pé, Olivier; de Boisanger, Michel Breart
2016-11-01
The charge transfer time represents the bottleneck in terms of temporal resolution in Pinned Photodiode (PPD) CMOS image sensors. This work focuses on the modeling and estimation of this key parameter. A simple numerical model of charge transfer in PPDs is presented. The model is based on a Monte Carlo simulation and takes into account both charge diffusion in the PPD and the effect of potential obstacles along the charge transfer path. This work also presents a new experimental approach for the estimation of the charge transfer time, called the pulsed Storage Gate (SG) method. This method, which allows reproduction of a "worst-case" transfer condition, is based on dedicated SG pixel structures and is particularly suitable for comparing transfer efficiency performance across different pixel geometries.
Modeling gene expression measurement error: a quasi-likelihood approach
Strimmer, Korbinian
2003-01-01
Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also improved the power of tests to identify differential expression. PMID:12659637
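The quasi-likelihood construction referred to above needs only the mean-variance relationship. For a single observation y with mean mu, dispersion phi, and variance function V, the standard (Wedderburn) quasi-likelihood is

\[
Q(y;\mu) = \int_{y}^{\mu} \frac{y - t}{\phi\, V(t)}\, dt ,
\]

and a quadratic variance structure of the kind postulated for gene expression intensities can be written, for example, as V(mu) = sigma_eps^2 + sigma_eta^2 * mu^2 (an additive plus multiplicative error component); the exact parameterization used in the paper may differ.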
Principles of protein folding--a perspective from simple exact models.
Dill, K. A.; Bromberg, S.; Yue, K.; Fiebig, K. M.; Yee, D. P.; Thomas, P. D.; Chan, H. S.
1995-01-01
General principles of protein structure, stability, and folding kinetics have recently been explored in computer simulations of simple exact lattice models. These models represent protein chains at a rudimentary level, but they involve few parameters, approximations, or implicit biases, and they allow complete explorations of conformational and sequence spaces. Such simulations have resulted in testable predictions that are sometimes unanticipated: The folding code is mainly binary and delocalized throughout the amino acid sequence. The secondary and tertiary structures of a protein are specified mainly by the sequence of polar and nonpolar monomers. More specific interactions may refine the structure, rather than dominate the folding code. Simple exact models can account for the properties that characterize protein folding: two-state cooperativity, secondary and tertiary structures, and multistage folding kinetics--fast hydrophobic collapse followed by slower annealing. These studies suggest the possibility of creating "foldable" chain molecules other than proteins. The encoding of a unique compact chain conformation may not require amino acids; it may require only the ability to synthesize specific monomer sequences in which at least one monomer type is solvent-averse. PMID:7613459
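Because such lattice models are exactly enumerable for short chains, their "simple exact" character is easy to demonstrate directly; the sketch below enumerates all self-avoiding walks on the square lattice for a short hydrophobic/polar (H/P) sequence and reports the maximum number of H-H contacts and its degeneracy. The sequence and the symmetry handling are illustrative choices.

def hh_contacts(path, seq, moves=((1, 0), (-1, 0), (0, 1), (0, -1))):
    """Count non-bonded H-H contacts of a conformation on the square lattice."""
    pos = {p: i for i, p in enumerate(path)}
    contacts = 0
    for (x, y), i in pos.items():
        if seq[i] != 'H':
            continue
        for dx, dy in moves:
            j = pos.get((x + dx, y + dy))
            if j is not None and j > i + 1 and seq[j] == 'H':
                contacts += 1
    return contacts

def ground_states(seq, moves=((1, 0), (-1, 0), (0, 1), (0, -1))):
    """Enumerate self-avoiding walks for a short H/P sequence and return the
    maximal H-H contact number and how many conformations achieve it.
    Feasible only for short chains (the first bond is fixed to reduce symmetry)."""
    best, count = -1, 0
    def extend(path):
        nonlocal best, count
        if len(path) == len(seq):
            e = hh_contacts(path, seq)
            if e > best:
                best, count = e, 1
            elif e == best:
                count += 1
            return
        x, y = path[-1]
        for dx, dy in moves:
            nxt = (x + dx, y + dy)
            if nxt not in path:
                extend(path + [nxt])
    extend([(0, 0), (1, 0)])
    return best, count

if __name__ == "__main__":
    print(ground_states("HPHPPHHPHH"))   # (max H-H contacts, number of such conformations)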
NASA Technical Reports Server (NTRS)
Thanedar, B. D.
1972-01-01
A simple repetitive calculation was used to investigate what happens to the field in terms of the signal paths of disturbances originating from the energy source. The computation allowed the field to be reconstructed as a function of space and time on a statistical basis. The suggested Monte Carlo method is in response to the need for a numerical method to supplement analytical methods of solution which are only valid when the boundaries have simple shapes, rather than for a medium that is bounded. For the analysis, a suitable model was created from which was developed an algorithm for the estimation of acoustic pressure variations in the region under investigation. The validity of the technique was demonstrated by analysis of simple physical models with the aid of a digital computer. The Monte Carlo method is applicable to a medium which is homogeneous and is enclosed by either rectangular or curved boundaries.
Sukumaran, Jeet; Knowles, L Lacey
2018-06-01
The development of process-based probabilistic models for historical biogeography has transformed the field by grounding it in modern statistical hypothesis testing. However, most of these models abstract away biological differences, reducing species to interchangeable lineages. We present here the case for reintegration of biology into probabilistic historical biogeographical models, allowing a broader range of questions about biogeographical processes beyond ancestral range estimation or simple correlation between a trait and a distribution pattern, as well as allowing us to assess how inferences about ancestral ranges themselves might be impacted by differential biological traits. We show how new approaches to inference might cope with the computational challenges resulting from the increased complexity of these trait-based historical biogeographical models. Copyright © 2018 Elsevier Ltd. All rights reserved.
Hierarchical modeling for reliability analysis using Markov models. B.S./M.S. Thesis - MIT
NASA Technical Reports Server (NTRS)
Fagundo, Arturo
1994-01-01
Markov models represent an extremely attractive tool for the reliability analysis of many systems. However, Markov model state space grows exponentially with the number of components in a given system. Thus, for very large systems Markov modeling techniques alone become intractable in both memory and CPU time. Often a particular subsystem can be found within some larger system where the dependence of the larger system on the subsystem is of a particularly simple form. This simple dependence can be used to decompose such a system into one or more subsystems. A hierarchical technique is presented which can be used to evaluate these subsystems in such a way that their reliabilities can be combined to obtain the reliability for the full system. This hierarchical approach is unique in that it allows the subsystem model to pass multiple aggregate state information to the higher level model, allowing more general systems to be evaluated. Guidelines are developed to assist in the system decomposition. An appropriate method for determining subsystem reliability is also developed. This method gives rise to some interesting numerical issues. Numerical error due to roundoff and integration are discussed at length. Once a decomposition is chosen, the remaining analysis is straightforward but tedious. However, an approach is developed for simplifying the recombination of subsystem reliabilities. Finally, a real world system is used to illustrate the use of this technique in a more practical context.
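The kind of subsystem evaluation discussed above reduces, for a small continuous-time Markov chain, to a matrix-exponential calculation; the sketch below evaluates the reliability of a duplex subsystem with repair, with illustrative failure and repair rates.

import numpy as np
from scipy.linalg import expm

def reliability(t, lam=1e-3, mu=1e-1):
    """Reliability of a duplex subsystem modeled as a three-state Markov chain.

    States: 0 = both units up, 1 = one up / one under repair, 2 = system failed
    (absorbing).  lam = per-unit failure rate, mu = repair rate (per hour).
    Returns P(system has not failed by time t).
    """
    Q = np.array([[-2 * lam, 2 * lam, 0.0],
                  [mu, -(mu + lam), lam],
                  [0.0, 0.0, 0.0]])
    p0 = np.array([1.0, 0.0, 0.0])
    p_t = p0 @ expm(Q * t)
    return 1.0 - p_t[2]

if __name__ == "__main__":
    for hours in (10.0, 100.0, 1000.0):
        print(f"R({hours:g} h) = {reliability(hours):.6f}")

In a hierarchical analysis of the type described, such subsystem reliabilities (or richer aggregate state probabilities) would then be passed to the higher-level model rather than expanding the full joint state space.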
A simple generative model of collective online behavior.
Gleeson, James P; Cellai, Davide; Onnela, Jukka-Pekka; Porter, Mason A; Reed-Tsochas, Felix
2014-07-22
Human activities increasingly take place in online environments, providing novel opportunities for relating individual behaviors to population-level outcomes. In this paper, we introduce a simple generative model for the collective behavior of millions of social networking site users who are deciding between different software applications. Our model incorporates two distinct mechanisms: one is associated with recent decisions of users, and the other reflects the cumulative popularity of each application. Importantly, although various combinations of the two mechanisms yield long-time behavior that is consistent with data, the only models that reproduce the observed temporal dynamics are those that strongly emphasize the recent popularity of applications over their cumulative popularity. This demonstrates--even when using purely observational data without experimental design--that temporal data-driven modeling can effectively distinguish between competing microscopic mechanisms, allowing us to uncover previously unidentified aspects of collective online behavior.
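A minimal simulation of the two mechanisms named above, recent popularity versus cumulative popularity, can be written as follows; the mixing weight, window length, and pseudo-counts are illustrative and are not the fitted values from the paper.

import numpy as np

def simulate_adoptions(n_apps=50, n_steps=20000, window=100, w_recent=0.9, seed=0):
    """Users sequentially install one of n_apps; choice probabilities mix the
    installs observed in the last `window` steps with cumulative installs."""
    rng = np.random.default_rng(seed)
    cumulative = np.ones(n_apps)        # pseudo-count so every app can be chosen initially
    recent = []                         # indices of the last `window` choices
    for _ in range(n_steps):
        recent_counts = np.bincount(np.asarray(recent, dtype=int),
                                    minlength=n_apps) + 1.0
        p = (w_recent * recent_counts / recent_counts.sum()
             + (1.0 - w_recent) * cumulative / cumulative.sum())
        choice = rng.choice(n_apps, p=p)
        cumulative[choice] += 1.0
        recent.append(choice)
        if len(recent) > window:
            recent.pop(0)
    return cumulative - 1.0

if __name__ == "__main__":
    installs = simulate_adoptions()
    top = np.sort(installs)[::-1][:5]
    print("top-5 install counts:", top.astype(int))

Setting w_recent near 1 corresponds to the recency-dominated regime that the paper finds necessary to reproduce the observed temporal dynamics.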
A simple generative model of collective online behavior
Gleeson, James P.; Cellai, Davide; Onnela, Jukka-Pekka; Porter, Mason A.; Reed-Tsochas, Felix
2014-01-01
Human activities increasingly take place in online environments, providing novel opportunities for relating individual behaviors to population-level outcomes. In this paper, we introduce a simple generative model for the collective behavior of millions of social networking site users who are deciding between different software applications. Our model incorporates two distinct mechanisms: one is associated with recent decisions of users, and the other reflects the cumulative popularity of each application. Importantly, although various combinations of the two mechanisms yield long-time behavior that is consistent with data, the only models that reproduce the observed temporal dynamics are those that strongly emphasize the recent popularity of applications over their cumulative popularity. This demonstrates—even when using purely observational data without experimental design—that temporal data-driven modeling can effectively distinguish between competing microscopic mechanisms, allowing us to uncover previously unidentified aspects of collective online behavior. PMID:25002470
Fitting Neuron Models to Spike Trains
Rossant, Cyrille; Goodman, Dan F. M.; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K.; Brette, Romain
2011-01-01
Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input–output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model. PMID:21415925
Tanaka, Takuma; Aoyagi, Toshio; Kaneko, Takeshi
2012-10-01
We propose a new principle for replicating receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network, which maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to fire at any given time (resulting in population sparseness). Our learning rule also sets the firing rates of the output neurons at each time step to near-maximum or near-minimum levels, resulting in neuronal reliability. The learning rule is simple enough to be written in spatially and temporally local forms. After the learning stage is performed using input image patches of natural scenes, output neurons in the model network are found to exhibit simple-cell-like receptive field properties. When the output of these simple-cell-like neurons are input to another model layer using the same learning rule, the second-layer output neurons after learning become less sensitive to the phase of gratings than the simple-cell-like input neurons. In particular, some of the second-layer output neurons become completely phase invariant, owing to the convergence of the connections from first-layer neurons with similar orientation selectivity to second-layer neurons in the model network. We examine the parameter dependencies of the receptive field properties of the model neurons after learning and discuss their biological implications. We also show that the localized learning rule is consistent with experimental results concerning neuronal plasticity and can replicate the receptive fields of simple and complex cells.
Model for Predicting Passage of Invasive Fish Species Through Culverts
NASA Astrophysics Data System (ADS)
Neary, V.
2010-12-01
Conservation efforts to promote or inhibit fish passage include the application of simple fish passage models to determine whether an open channel flow allows passage of a given fish species. Derivations of simple fish passage models for uniform and nonuniform flow conditions are presented. For uniform flow conditions, a model equation is developed that predicts the mean-current velocity threshold in a fishway, or velocity barrier, which causes exhaustion at a given maximum distance of ascent. The derivation of a simple expression for this exhaustion-threshold (ET) passage model is presented using kinematic principles coupled with fatigue curves for threatened and endangered fish species. Mean current velocities at or above the threshold predict failure to pass. Mean current velocities below the threshold predict successful passage. The model is therefore intuitive and easily applied to predict passage or exclusion. The ET model’s simplicity comes with limitations, however, including its application only to uniform flow, which is rarely found in the field. This limitation is addressed by deriving a model that accounts for nonuniform conditions, including backwater profiles and drawdown curves. Comparison of these models with experimental data from volitional swimming studies of fish indicates reasonable performance, but limitations are still present due to the difficulty in predicting fish behavior and passage strategies that can vary among individuals and different fish species.
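A hedged sketch of an exhaustion-threshold style passage check for uniform flow, assuming a log-linear fatigue curve with placeholder species coefficients (not values from the paper): passage is predicted when the maximum attainable distance of ascent exceeds the culvert length.

```python
import numpy as np

def max_ascent(u_water, a=2.0, b=0.3):
    """Maximum distance gained against a uniform current before exhaustion,
    assuming endurance t = 10**(a - b*U_swim) seconds (placeholder values)."""
    u_swim = np.linspace(u_water + 0.01, 3.0, 300)   # candidate swim speeds, m/s
    endurance = 10.0 ** (a - b * u_swim)             # s
    distance = (u_swim - u_water) * endurance        # m
    return distance.max()

def passes(u_water, culvert_length):
    return max_ascent(u_water) >= culvert_length

print(passes(u_water=0.8, culvert_length=30.0))
```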
Hansen, D J; Toy, V M; Deininger, R A; Collopy, T K
1983-06-01
Three of the most popular microcomputers, the TRS-80 Model I, the APPLE II+, and the IBM Personal Computer, were connected to a spirometer for data acquisition and analysis. Simple programs were written which allow the collection, analysis and storage of the data produced during spirometry. Three examples demonstrate the relative ease of automating spirometers.
Modeling Protein Domain Function
ERIC Educational Resources Information Center
Baker, William P.; Jones, Carleton "Buck"; Hull, Elizabeth
2007-01-01
This simple but effective laboratory exercise helps students understand the concept of protein domain function. They use foam beads, Styrofoam craft balls, and pipe cleaners to explore how domains within protein active sites interact to form a functional protein. The activity allows students to gain content mastery and an understanding of the…
A Lesson on Evolution & Natural Selection
ERIC Educational Resources Information Center
Curtis, Anthony D.
2010-01-01
I describe three activities that allow students to explore the ideas of evolution, natural selection, extinction, mass extinction, and rates of evolutionary change by engaging a simple model using paper, pens, chalk, and a chalkboard. As a culminating activity that supports expository writing in the sciences, the students write an essay on mass…
A Classroom Simulation of Water-Rock Interaction for Upper-Level Geochemistry Courses.
ERIC Educational Resources Information Center
Cercone, Karen Rose
1988-01-01
Describes a simple hands-on model of water-rock interaction that can be constructed in the classroom using styrofoam bowls and foil-wrapped candies. This interactive simulation allows students to vary the factors which control water-rock interaction and to obtain immediate results. (Author/CW)
2016-06-01
widely in literature, limiting comparisons. Methods: Yorkshire-cross swine were anesthetized, instrumented, and splenectomized. A simple liver...applicable injury in swine. Use of the tourniquet allowed for consistent liver injury and precise control over hemorrhage.
A simple vibrating sample magnetometer for macroscopic samples
NASA Astrophysics Data System (ADS)
Lopez-Dominguez, V.; Quesada, A.; Guzmán-Mínguez, J. C.; Moreno, L.; Lere, M.; Spottorno, J.; Giacomone, F.; Fernández, J. F.; Hernando, A.; García, M. A.
2018-03-01
We here present a simple model of a vibrating sample magnetometer (VSM). The system allows recording magnetization curves at room temperature with a resolution of the order of 0.01 emu and is appropriate for macroscopic samples. The setup can be mounted in different configurations depending on the requirements of the sample to be measured (mass, saturation magnetization, saturation field, etc.). We also include examples of curves obtained with our setup and comparison curves measured with a standard commercial VSM, which confirm the reliability of our device.
Free-energy functional of the Debye-Hückel model of simple fluids
NASA Astrophysics Data System (ADS)
Piron, R.; Blenski, T.
2016-12-01
The Debye-Hückel approximation to the free energy of a simple fluid is written as a functional of the pair correlation function. This functional can be seen as the Debye-Hückel equivalent to the functional derived in the hypernetted chain framework by Morita and Hiroike, as well as by Lado. It allows one to obtain the Debye-Hückel integral equation through a minimization with respect to the pair correlation function, leads to the correct form of the internal energy, and fulfills the virial theorem.
The coefficient of restitution of pressurized balls: a mechanistic model
NASA Astrophysics Data System (ADS)
Georgallas, Alex; Landry, Gaëtan
2016-01-01
Pressurized, inflated balls used in professional sports are regulated so that their behaviour upon impact can be anticipated and allow the game to have its distinctive character. However, the dynamics governing the impacts of such balls, even on stationary hard surfaces, can be extremely complex. The energy transformations, which arise from the compression of the gas within the ball and from the shear forces associated with the deformation of the wall, are examined in this paper. We develop a simple mechanistic model of the dependence of the coefficient of restitution, e, upon both the gauge pressure, P_G, of the gas and the shear modulus, G, of the wall. The model is validated using the results from a simple series of experiments using three different sports balls. The fits to the data are extremely good for P_G > 25 kPa, and consistent values of G are obtained for the wall material. As far as the authors can tell, this simple, mechanistic model of the pressure dependence of the coefficient of restitution is the first in the literature.
Soulis, Konstantinos X; Valiantzas, John D; Ntoulas, Nikolaos; Kargas, George; Nektarios, Panayiotis A
2017-09-15
In spite of the well-known green roof benefits, their widespread adoption in the management practices of urban drainage systems requires the use of adequate analytical and modelling tools. In the current study, green roof runoff modeling was accomplished by developing, testing, and jointly using a simple conceptual model and a physically based numerical simulation model utilizing HYDRUS-1D software. This approach combines the advantages of the conceptual model, namely simplicity, low computational requirements, and ability to be easily integrated in decision support tools, with the capacity of the physically based simulation model to be easily transferred to conditions and locations other than those used for calibrating and validating it. The proposed approach was evaluated with an experimental dataset that included various green roof covers (either succulent plants - Sedum sediforme, or xerophytic plants - Origanum onites, or bare substrate without any vegetation) and two substrate depths (either 8 cm or 16 cm). Both the physically based and the conceptual models matched the observed hydrographs very closely. In general, the conceptual model performed better than the physically based simulation model, but the overall performance of both models was sufficient in most cases, as indicated by Nash-Sutcliffe efficiency values generally greater than 0.70. Finally, it was shown how a physically based model and a simple conceptual model can be used jointly, extending the applicability of the conceptual model to a wider range of conditions than covered by the available experimental data and supporting green roof design. Copyright © 2017 Elsevier Ltd. All rights reserved.
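As an illustration of how lightweight such a conceptual model can be, the sketch below implements a generic single-bucket substrate store (an assumption for illustration; the paper's conceptual model may differ in structure and parameters).

```python
def green_roof_runoff(rain_mm, et_mm, s_max=30.0, s0=10.0):
    """Bucket-type sketch: rainfall fills a substrate store of capacity s_max,
    evapotranspiration empties it, and runoff is whatever exceeds capacity."""
    storage, runoff = s0, []
    for p, et in zip(rain_mm, et_mm):
        storage = max(storage + p - et, 0.0)
        q = max(storage - s_max, 0.0)   # overflow becomes runoff (mm)
        storage -= q
        runoff.append(q)
    return runoff

print(green_roof_runoff([0, 12, 35, 5], [2, 2, 2, 2]))
```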
NASA Astrophysics Data System (ADS)
Brunger, M. J.; Thorn, P. A.; Campbell, L.; Kato, H.; Kawahara, H.; Hoshino, M.; Tanaka, H.; Kim, Y.-K.
2008-05-01
We consider the efficacy of the BEf-scaling approach, in calculating reliable integral cross sections for electron impact excitation of dipole-allowed electronic states in molecules. We will demonstrate, using specific examples in H2, CO and H2O, that this relatively simple procedure can generate quite accurate integral cross sections which compare well with available experimental data. Finally, we will briefly consider the ramifications of this to atmospheric and other types of modelling studies.
Conceptual uncertainty in crystalline bedrock: Is simple evaluation the only practical approach?
Geier, J.; Voss, C.I.; Dverstorp, B.
2002-01-01
A simple evaluation can be used to characterize the capacity of crystalline bedrock to act as a barrier to releases of radionuclides from a nuclear waste repository. Physically plausible bounds on groundwater flow and an effective transport-resistance parameter are estimated based on fundamental principles and idealized models of pore geometry. Application to an intensively characterized site in Sweden shows that, due to high spatial variability and uncertainty regarding properties of transport paths, the uncertainty associated with the geological barrier is too high to allow meaningful discrimination between good and poor performance. Application of more complex (stochastic-continuum and discrete-fracture-network) models does not yield a significant improvement in the resolution of geological barrier performance. Comparison with seven other less intensively characterized crystalline study sites in Sweden leads to similar results, raising a question as to what extent the geological barrier function can be characterized by state-of-the-art site investigation methods prior to repository construction. A simple evaluation provides a robust, practical approach for inclusion in performance assessment.
Conceptual uncertainty in crystalline bedrock: Is simple evaluation the only practical approach?
Geier, J.; Voss, C.I.; Dverstorp, B.
2002-01-01
A simple evaluation can be used to characterise the capacity of crystalline bedrock to act as a barrier to releases of radionuclides from a nuclear waste repository. Physically plausible bounds on groundwater flow and an effective transport-resistance parameter are estimated based on fundamental principles and idealised models of pore geometry. Application to an intensively characterised site in Sweden shows that, due to high spatial variability and uncertainty regarding properties of transport paths, the uncertainty associated with the geological barrier is too high to allow meaningful discrimination between good and poor performance. Application of more complex (stochastic-continuum and discrete-fracture-network) models does not yield a significant improvement in the resolution of geologic-barrier performance. Comparison with seven other less intensively characterised crystalline study sites in Sweden leads to similar results, raising a question as to what extent the geological barrier function can be characterised by state-of-the art site investigation methods prior to repository construction. A simple evaluation provides a simple and robust practical approach for inclusion in performance assessment.
A Data-driven Approach for Forecasting Next-day River Discharge
NASA Astrophysics Data System (ADS)
Sharif, H. O.; Billah, K. S.
2017-12-01
This study focuses on evaluating the performance of the Soil and Water Assessment Tool (SWAT) eco-hydrological model, a simple Auto-Regressive with eXogenous input (ARX) model, and a Gene Expression Programming (GEP)-based model in one-day-ahead forecasting of discharge of a subtropical basin (the upper Kentucky River Basin). The three models were calibrated with daily flow at a US Geological Survey (USGS) stream gauging station not affected by flow regulation for the period 2002-2005. The calibrated models were then validated at the same gauging station as well as at another USGS gauge 88 km downstream for the period 2008-2010. The results suggest that simple models outperform a sophisticated hydrological model, with GEP having the advantage of being able to generate functional relationships that allow scientific investigation of the complex nonlinear interrelationships among input variables. Unlike SWAT, the GEP and, to some extent, ARX models are less sensitive to the length of the calibration time series and do not require a spin-up period.
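A minimal ARX-style one-day-ahead forecast can be sketched with ordinary least squares; the model orders and the choice of rainfall as the exogenous input below are assumptions for illustration, not the study's exact configuration.

```python
import numpy as np

def fit_arx(q, rain, na=2, nb=2):
    """Regress Q[t+1] on the last na discharges and nb rainfall values."""
    rows, target = [], []
    start = max(na, nb)
    for t in range(start, len(q) - 1):
        rows.append(np.r_[q[t - na + 1:t + 1], rain[t - nb + 1:t + 1], 1.0])
        target.append(q[t + 1])
    coeffs, *_ = np.linalg.lstsq(np.array(rows), np.array(target), rcond=None)
    return coeffs

def forecast_next(q, rain, coeffs, na=2, nb=2):
    """One-day-ahead forecast from the most recent observations."""
    x = np.r_[q[-na:], rain[-nb:], 1.0]
    return float(x @ coeffs)
```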
Architecture with GIDEON, A Program for Design in Structural DNA Nanotechnology
Birac, Jeffrey J.; Sherman, William B.; Kopatsch, Jens; Constantinou, Pamela E.; Seeman, Nadrian C.
2012-01-01
We present geometry based design strategies for DNA nanostructures. The strategies have been implemented with GIDEON – a Graphical Integrated Development Environment for OligoNucleotides. GIDEON has a highly flexible graphical user interface that facilitates the development of simple yet precise models, and the evaluation of strains therein. Models are built on a simple model of undistorted B-DNA double-helical domains. Simple point and click manipulations of the model allow the minimization of strain in the phosphate-backbone linkages between these domains and the identification of any steric clashes that might occur as a result. Detailed analysis of 3D triangles yields clear predictions of the strains associated with triangles of different sizes. We have carried out experiments that confirm that 3D triangles form well only when their geometrical strain is less than 4% deviation from the estimated relaxed structure. Thus geometry-based techniques alone, without energetic considerations, can be used to explain general trends in DNA structure formation. We have used GIDEON to build detailed models of double crossover and triple crossover molecules, evaluating the non-planarity associated with base tilt and junction mis-alignments. Computer modeling using a graphical user interface overcomes the limited precision of physical models for larger systems, and the limited interaction rate associated with earlier, command-line driven software. PMID:16630733
Strategy Space Exploration of a Multi-Agent Model for the Labor Market
NASA Astrophysics Data System (ADS)
de Grande, Pablo; Eguia, Manuel
We present a multi-agent system where typical labor market mechanisms emerge. Based on a few simple rules, our model allows for different interpretative paradigms to be represented and for different scenarios to be tried out. We thoroughly explore the space of possible strategies both for those unemployed and for companies and analyze the trade-off between these strategies regarding global social and economical indicators.
Animal Models of Corneal Injury
Chan, Matilda F.; Werb, Zena
2015-01-01
The cornea is an excellent model system to use for the analysis of wound repair because of its accessibility, lack of vascularization, and simple anatomy. Corneal injuries may involve only the superficial epithelial layer or may penetrate deeper to involve both the epithelial and stromal layers. Here we describe two well-established in vivo corneal wound models: a mechanical wound model that allows for the study of re-epithelialization and a chemical wound model that may be used to study stromal activation in response to injury (Stepp et al., 2014; Carlson et al., 2003). PMID:26191536
Optimal current waveforms for brushless permanent magnet motors
NASA Astrophysics Data System (ADS)
Moehle, Nicholas; Boyd, Stephen
2015-07-01
In this paper, we give energy-optimal current waveforms for a permanent magnet synchronous motor that result in a desired average torque. Our formulation generalises previous work by including a general back-electromotive force (EMF) wave shape, voltage and current limits, an arbitrary phase winding connection, a simple eddy current loss model, and a trade-off between power loss and torque ripple. Determining the optimal current waveforms requires solving a small convex optimisation problem. We show how to use the alternating direction method of multipliers to find the optimal current in milliseconds or hundreds of microseconds, depending on the processor used, which allows the possibility of generating optimal waveforms in real time. This allows us to adapt in real time to changes in the operating requirements or in the model, such as a change in resistance with winding temperature, or even gross changes like the failure of one winding. Suboptimal waveforms are available in tens or hundreds of microseconds, allowing for quick response after abrupt changes in the desired torque. We demonstrate our approach on a simple numerical example, in which we give the optimal waveforms for a motor with a sinusoidal back-EMF, and for a motor with a more complicated, nonsinusoidal waveform, in both the constant-torque region and constant-power region.
Meteorological adjustment of yearly mean values for air pollutant concentration comparison
NASA Technical Reports Server (NTRS)
Sidik, S. M.; Neustadter, H. E.
1976-01-01
Using multiple linear regression analysis, models which estimate mean concentrations of Total Suspended Particulate (TSP), sulfur dioxide, and nitrogen dioxide as a function of several meteorologic variables, two rough economic indicators, and a simple trend in time are studied. Meteorologic data were obtained and do not include inversion heights. The goodness of fit of the estimated models is partially reflected by the squared coefficient of multiple correlation which indicates that, at the various sampling stations, the models accounted for about 23 to 47 percent of the total variance of the observed TSP concentrations. If the resulting model equations are used in place of simple overall means of the observed concentrations, there is about a 20 percent improvement in either: (1) predicting mean concentrations for specified meteorological conditions; or (2) adjusting successive yearly averages to allow for comparisons devoid of meteorological effects. An application to source identification is presented using regression coefficients of wind velocity predictor variables.
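The regression setup can be illustrated with a short sketch; the predictor names below are placeholders standing in for the meteorological variables, economic indicators, and time trend mentioned above (inputs assumed to be NumPy arrays).

```python
import numpy as np

def fit_tsp_model(tsp, wind_speed, humidity, econ1, econ2, t):
    """Linear model for mean TSP concentration with an intercept term."""
    X = np.column_stack([wind_speed, humidity, econ1, econ2, t,
                         np.ones_like(t, dtype=float)])
    beta, *_ = np.linalg.lstsq(X, tsp, rcond=None)
    fitted = X @ beta
    r2 = 1 - np.sum((tsp - fitted) ** 2) / np.sum((tsp - tsp.mean()) ** 2)
    return beta, r2   # r2 plays the role of the 23-47% of variance reported
```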
A Simplified Model for Detonation Based Pressure-Gain Combustors
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.
2010-01-01
A time-dependent model is presented which simulates the essential physics of a detonative or otherwise constant volume, pressure-gain combustor for gas turbine applications. The model utilizes simple, global thermodynamic relations to determine an assumed instantaneous and uniform post-combustion state in one of many envisioned tubes comprising the device. A simple, second order, non-upwinding computational fluid dynamic algorithm is then used to compute the (continuous) flowfield properties during the blowdown and refill stages of the periodic cycle which each tube undergoes. The exhausted flow is averaged to provide mixed total pressure and enthalpy which may be used as a cycle performance metric for benefits analysis. The simplicity of the model allows for nearly instantaneous results when implemented on a personal computer. The results compare favorably with higher resolution numerical codes which are more difficult to configure, and more time consuming to operate.
Six-quark decays of the Higgs boson in supersymmetry with R-parity violation.
Carpenter, Linda M; Kaplan, David E; Rhee, Eun-Jung
2007-11-23
Both electroweak precision measurements and simple supersymmetric extensions of the standard model prefer a mass of the Higgs boson less than the experimental lower limit (on a standard-model-like Higgs boson) of 114 GeV. We show that supersymmetric models with R parity violation and baryon-number violation have a significant range of parameter space in which the Higgs boson dominantly decays to six jets. These decays are much more weakly constrained by current CERN LEP analyses and would allow for a Higgs boson mass near that of the Z. In general, lighter scalar quark and other superpartner masses are allowed. The Higgs boson would potentially be discovered at hadron colliders via the appearance of new displaced vertices.
Probing the exchange statistics of one-dimensional anyon models
NASA Astrophysics Data System (ADS)
Greschner, Sebastian; Cardarelli, Lorenzo; Santos, Luis
2018-05-01
We propose feasible scenarios for revealing the modified exchange statistics in one-dimensional anyon models in optical lattices based on an extension of the multicolor lattice-depth modulation scheme introduced in [Phys. Rev. A 94, 023615 (2016), 10.1103/PhysRevA.94.023615]. We show that the fast modulation of a two-component fermionic lattice gas in the presence of a magnetic field gradient, in combination with additional resonant microwave fields, allows for the quantum simulation of hardcore anyon models with periodic boundary conditions. Such a semisynthetic ring setup allows for realizing an interferometric arrangement sensitive to the anyonic statistics. Moreover, we also show that simple expansion experiments may reveal the formation of anomalously bound pairs resulting from the anyonic exchange.
Learning Abstract Physical Concepts from Experience: Design and Use of an RC Circuit
NASA Astrophysics Data System (ADS)
Parra, Alfredo; Ordenes, Jorge; de la Fuente, Milton
2018-05-01
Science learning for undergraduate students requires grasping a great number of theoretical concepts in a rather short time. In our experience, this is especially difficult when students are required to simultaneously use abstract concepts, mathematical reasoning, and graphical analysis, such as occurs when learning about RC circuits. We present a simple experimental model in this work that allows students to easily design, build, and analyze RC circuits, thus providing an opportunity to test personal ideas, build graphical descriptions, and explore the meaning of the respective mathematical models, ultimately gaining a better grasp of the concepts involved. The result suggests that the simple setup indeed helps untrained students to visualize the essential points of this kind of circuit.
A simple model for the evolution of a non-Abelian cosmic string network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cella, G.; Pieroni, M., E-mail: giancarlo.cella@pi.infn.it, E-mail: mauro.pieroni@apc.univ-paris7.fr
2016-06-01
In this paper we present the results of numerical simulations intended to study the behavior of non-Abelian cosmic string networks. In particular we are interested in discussing the variations in the asymptotic behavior of the system as we vary the number of generators for the topological defects. A simple model which allows for cosmic strings is presented and its lattice discretization is discussed. The evolution of the generated cosmic string networks is then studied for different values of the number of generators for the topological defects. A scaling solution appears to be approached in most cases, and we present an argument to justify the lack of scaling in the residual cases.
The Identities Hidden in the Matching Laws, and Their Uses
ERIC Educational Resources Information Center
Thorne, David R.
2010-01-01
Various theoretical equations have been proposed to predict response rate as a function of the rate of reinforcement. If both the rate and probability of reinforcement are considered, a simple identity, defining equation, or "law" holds. This identity places algebraic constraints on the allowable forms of our mathematical models and can help…
Developing a "Productive" Account of Young People's Transition Perspectives
ERIC Educational Resources Information Center
Vaughan, Karen; Roberts, Josie
2007-01-01
This article draws on the first two years of a longitudinal study of young people's pathway and career-related experiences and perspectives. It argues for a richer conceptualisation of young people's transition to study, training and employment than what simple school-to-labour market models allow. We present four clusters of young people's…
An Educational Model for Disruption of Bacteria for Protein Studies.
ERIC Educational Resources Information Center
Bhaduri, Saumya; Demchick, Paul H.
1984-01-01
A simple, rapid, and safe method has been developed for disrupting bacterial cells for protein studies. The method involved stepwise treatment of cells with acetone and with sodium dodecyl sulfate solution to allow extraction of cellular proteins for analysis by polyacrylamide gel electrophoresis. Applications for instructional purposes are noted.…
Earth Model with Laser Beam Simulating Seismic Ray Paths.
ERIC Educational Resources Information Center
Ryan, John Arthur; Handzus, Thomas Jay, Jr.
1988-01-01
Described is a simple device, that uses a laser beam to simulate P waves. It allows students to follow ray paths, reflections and refractions within the earth. Included is a set of exercises that lead students through the steps by which the presence of the outer and inner cores can be recognized. (Author/CW)
The Systemic Vision of the Educational Learning
ERIC Educational Resources Information Center
Lima, Nilton Cesar; Penedo, Antonio Sergio Torres; de Oliveira, Marcio Mattos Borges; de Oliveira, Sonia Valle Walter Borges; Queiroz, Jamerson Viegas
2012-01-01
As the sophistication of technology is increasing, also increased the demand for quality in education. The expectation for quality has promoted broad range of products and systems, including in education. These factors include the increased diversity in the student body, which requires greater emphasis that allows a simple and dynamic model in the…
Character expansion methods for matrix models of dually weighted graphs
NASA Astrophysics Data System (ADS)
Kazakov, Vladimir A.; Staudacher, Matthias; Wynter, Thomas
1996-04-01
We consider generalized one-matrix models in which external fields allow control over the coordination numbers on both the original and dual lattices. We rederive in a simple fashion a character expansion formula for these models originally due to Itzykson and Di Francesco, and then demonstrate how to take the large N limit of this expansion. The relationship to the usual matrix model resolvent is elucidated. Our methods give as a by-product an extremely simple derivation of the Migdal integral equation describing the large N limit of the Itzykson-Zuber formula. We illustrate and check our methods by analysing a number of models solvable by traditional means. We then proceed to solve a new model: a sum over planar graphs possessing even coordination numbers on both the original and the dual lattice. We conclude by formulating the equations for the case of arbitrary sets of even, self-dual coupling constants. This opens the way for studying the deep problems of phase transitions from random to flat lattices.
Theoretical aspects of tidal and planetary wave propagation at thermospheric heights
NASA Technical Reports Server (NTRS)
Volland, H.; Mayr, H. G.
1977-01-01
A simple semiquantitative model is presented which allows analytic solutions of tidal and planetary wave propagation at thermospheric heights. This model is based on perturbation approximation and mode separation. The effects of viscosity and heat conduction are parameterized by Rayleigh friction and Newtonian cooling. Because of this simplicity, one gains a clear physical insight into basic features of atmospheric wave propagation. In particular, we discuss the meridional structures of pressure and horizontal wind (the solutions of Laplace's equation) and their modification due to dissipative effects at thermospheric heights. Furthermore, we solve the equations governing the height structure of the wave modes and arrive at a very simple asymptotic solution valid in the upper part of the thermosphere. That 'system transfer function' of the thermosphere allows one to estimate immediately the reaction of the thermospheric wave mode parameters such as pressure, temperature, and winds to an external heat source of arbitrary temporal and spatial distribution. Finally, the diffusion effects of the minor constituents due to the global wind circulation are discussed, and some results of numerical calculations are presented.
Simple cellular automaton model for traffic breakdown, highway capacity, and synchronized flow.
Kerner, Boris S; Klenov, Sergey L; Schreckenberg, Michael
2011-10-01
We present a simple cellular automaton (CA) model for two-lane roads explaining the physics of traffic breakdown, highway capacity, and synchronized flow. The model consists of the rules "acceleration," "deceleration," "randomization," and "motion" of the Nagel-Schreckenberg CA model as well as "overacceleration through lane changing to the faster lane," "comparison of vehicle gap with the synchronization gap," and "speed adaptation within the synchronization gap" of Kerner's three-phase traffic theory. We show that these few rules of the CA model can appropriately simulate fundamental empirical features of traffic breakdown and highway capacity found in traffic data measured over years in different countries, like characteristics of synchronized flow, the existence of the spontaneous and induced breakdowns at the same bottleneck, and associated probabilistic features of traffic breakdown and highway capacity. Single-vehicle data derived in model simulations show that synchronized flow first occurs and then self-maintains due to a spatiotemporal competition between speed adaptation to a slower speed of the preceding vehicle and passing of this slower vehicle. We find that the application of simple dependences of randomization probability and synchronization gap on driving situation allows us to explain the physics of moving synchronized flow patterns and the pinch effect in synchronized flow as observed in real traffic data.
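The Nagel-Schreckenberg core of such a model, on a single circular lane, can be sketched as follows; the two-lane rules, synchronization gap, and overacceleration through lane changing described above are omitted from this illustration.

```python
import random

def nasch_step(pos, vel, road_len, v_max=5, p_slow=0.3):
    """One parallel update of the single-lane Nagel-Schreckenberg CA."""
    order = sorted(range(len(pos)), key=lambda i: pos[i])
    new_pos, new_vel = list(pos), list(vel)
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % len(order)]
        gap = (pos[ahead] - pos[i] - 1) % road_len
        v = min(vel[i] + 1, v_max)            # acceleration
        v = min(v, gap)                       # deceleration (no collisions)
        if v > 0 and random.random() < p_slow:
            v -= 1                            # randomization
        new_vel[i] = v
        new_pos[i] = (pos[i] + v) % road_len  # motion
    return new_pos, new_vel
```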
Engineering model for ultrafast laser microprocessing
NASA Astrophysics Data System (ADS)
Audouard, E.; Mottay, E.
2016-03-01
Ultrafast laser micro-machining relies on complex laser-matter interaction processes, leading to virtually athermal laser ablation. The development of industrial ultrafast laser applications benefits from a better understanding of these processes. To this end, a number of sophisticated scientific models have been developed, providing valuable insights into the physics of the interaction. Yet, from an engineering point of view, they are often difficult to use and require a number of adjustable parameters. We present a simple engineering model for ultrafast laser processing, applied to various real-life applications: percussion drilling, line engraving, and non-normal incidence trepanning. The model requires only two global parameters. Analytical results are derived for single-pulse percussion drilling and single-pass engraving. Simple assumptions allow us to predict the effect of non-normal incident beams and to obtain key parameters for trepanning drilling. The model is compared to experimental data on stainless steel over a wide range of laser characteristics (pulse duration, repetition rate, pulse energy) and machining conditions (sample or beam speed). Ablation depth and volume ablation rate are modeled for pulse durations from 100 fs to 1 ps. A trepanning time of 5.4 s with a conicity of 0.15° is obtained for a hole of 900 μm depth and 100 μm diameter.
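The paper's two global parameters are not specified here, but a common two-parameter engineering form is the logarithmic ablation law sketched below (an assumed form for illustration, with placeholder parameter values).

```python
import math

def depth_per_pulse(fluence, f_th=0.1, delta=20e-9):
    """Depth removed per pulse: delta * ln(F / F_th) above threshold fluence.
    fluence and f_th in J/cm^2, delta (effective penetration depth) in m."""
    return delta * math.log(fluence / f_th) if fluence > f_th else 0.0

def percussion_drill_depth(fluence, n_pulses, **params):
    """Percussion drilling estimate: depth accumulates pulse by pulse."""
    return n_pulses * depth_per_pulse(fluence, **params)

print(percussion_drill_depth(fluence=2.0, n_pulses=5000))
```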
NASA Astrophysics Data System (ADS)
Valencia, Hubert; Kangawa, Yoshihiro; Kakimoto, Koichi
2015-12-01
GaAs(100) c(4×4) surfaces were examined by ab initio calculations under mixed As2, H2 and N2 gas conditions as a model for GaAs1-xNx vapor-phase epitaxy (VPE) on GaAs(100). Using a simple model consisting of As2 and H2 molecule adsorptions and As/N atom substitutions, it was shown to be possible to examine the crystal growth behavior by considering the relative stability of the resulting surfaces against the chemical potentials of the As2, H2 and N2 gases. Such a simple model allows us to draw a picture of the temperature and pressure stability domains for each surface, which can be directly linked to specific growth conditions. We found that, using this simple model, it is possible to explain the different N-incorporation regimes observed experimentally at different temperatures and to predict the transition temperature between these regimes. Additionally, a rational explanation of the N-incorporation ratio for each of these regimes is provided. Our model should thus lead to a better understanding and control of the experimental conditions needed to realize high-quality VPE of GaAs1-xNx.
A minimalist feedback-regulated model for galaxy formation during the epoch of reionization
NASA Astrophysics Data System (ADS)
Furlanetto, Steven R.; Mirocha, Jordan; Mebane, Richard H.; Sun, Guochao
2017-12-01
Near-infrared surveys have now determined the luminosity functions of galaxies at 6 ≲ z ≲ 8 to impressive precision and identified a number of candidates at even earlier times. Here, we develop a simple analytic model to describe these populations that allows physically motivated extrapolation to earlier times and fainter luminosities. We assume that galaxies grow through accretion on to dark matter haloes, which we model by matching haloes at fixed number density across redshift, and that stellar feedback limits the star formation rate. We allow for a variety of feedback mechanisms, including regulation through supernova energy and momentum from radiation pressure. We show that reasonable choices for the feedback parameters can fit the available galaxy data, which in turn substantially limits the range of plausible extrapolations of the luminosity function to earlier times and fainter luminosities: for example, the global star formation rate declines rapidly (by a factor of ∼20 from z = 6 to 15 in our fiducial model), but the bright galaxies accessible to observations decline even faster (by a factor ≳ 400 over the same range). Our framework helps us develop intuition for the range of expectations permitted by simple models of high-z galaxies that build on our understanding of 'normal' galaxy evolution. We also provide predictions for galaxy measurements by future facilities, including James Webb Space Telescope and Wide-Field Infrared Survey Telescope.
Modeling of two-phase porous flow with damage
NASA Astrophysics Data System (ADS)
Cai, Z.; Bercovici, D.
2009-12-01
Two-phase dynamics has been broadly studied in Earth science in convective systems. We investigate the basic physics of compaction with damage theory and present preliminary results for both steady-state and time-dependent transport when melt migrates through a porous medium. In our simple 1-D model, damage plays an important role when we consider the ascent of a melt-rich mixture at constant velocity. Melt segregation becomes more difficult, so that porosity in the steady-state compaction profile is larger than in simple compaction. A scaling analysis of the compaction equation is performed to predict the behavior of melt segregation with damage. The time-dependent behavior of the compacting system is investigated by looking at solitary wave solutions to the two-phase model. We assume that additional melt is injected into the fractured material through a single pulse with prescribed shape and velocity. The presence of damage allows the pulse to travel further than in simple compaction. Therefore more melt can be injected into the two-phase mixture, and future applications such as carbon dioxide injection are proposed.
Correlation of spacecraft thermal mathematical models to reference data
NASA Astrophysics Data System (ADS)
Torralbo, Ignacio; Perez-Grande, Isabel; Sanz-Andres, Angel; Piqueras, Javier
2018-03-01
Model-to-test correlation is a frequent problem in spacecraft thermal control design. The idea is to determine the values of the parameters of the thermal mathematical model (TMM) that allow a good fit between the TMM results and test data, in order to reduce the uncertainty of the mathematical model. Quite often this task is performed manually, mainly because good engineering knowledge and experience are needed to reach a successful compromise, but the use of a mathematical tool could facilitate this work. The correlation process can be considered as the minimization of the error of the model results with regard to the reference data. In this paper, a simple method is presented, suitable for solving the TMM-to-test correlation problem, using a Jacobian matrix formulation and the Moore-Penrose pseudo-inverse, generalized to include several load cases. In addition, in simple cases, this method allows analytical solutions to be obtained, which helps to analyze some problems that appear when the Jacobian matrix is singular. To show the implementation of the method, two problems have been considered: one more academic, and the other the TMM of an electronic box of the PHI instrument of the ESA Solar Orbiter mission, to be flown in 2019. The use of singular value decomposition of the Jacobian matrix to analyze and reduce these models is also shown. The error in parameter space is used to assess the quality of the correlation results in both models.
Luminance-model-based DCT quantization for color image compression
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Peterson, Heidi A.
1992-01-01
A model is developed to approximate visibility thresholds for discrete cosine transform (DCT) coefficient quantization error based on the peak-to-peak luminance of the error image. Experimentally measured visibility thresholds for R, G, and B DCT basis functions can be predicted by a simple luminance-based detection model. This model allows DCT coefficient quantization matrices to be designed for display conditions other than those of the experimental measurements: other display luminances, other veiling luminances, and other spatial frequencies (different pixel spacings, viewing distances, and aspect ratios).
Nonlinear field equations for aligning self-propelled rods.
Peshkov, Anton; Aranson, Igor S; Bertin, Eric; Chaté, Hugues; Ginelli, Francesco
2012-12-28
We derive a set of minimal and well-behaved nonlinear field equations describing the collective properties of self-propelled rods from a simple microscopic starting point, the Vicsek model with nematic alignment. Analysis of their linear and nonlinear dynamics shows good agreement with the original microscopic model. In particular, we derive an explicit expression for density-segregated, banded solutions, allowing us to develop a more complete analytic picture of the problem at the nonlinear level.
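The microscopic starting point, a Vicsek-type model with nematic alignment, can be sketched as a single update step (parameters arbitrary; this is an illustration, not the authors' simulation code).

```python
import numpy as np

def vicsek_nematic_step(x, theta, L=20.0, r=1.0, v0=0.3, eta=0.2, rng=None):
    """One step: align nematically (mod pi) with neighbors within radius r,
    add angular noise, then move at constant speed v0 on a periodic box."""
    rng = rng or np.random.default_rng()
    new_theta = np.empty_like(theta)
    for i in range(len(x)):
        d = x - x[i]
        d -= L * np.round(d / L)                  # minimum-image distances
        nb = np.where((d ** 2).sum(axis=1) < r ** 2)[0]
        # nematic mean orientation: angles equivalent modulo pi
        avg = 0.5 * np.arctan2(np.sin(2 * theta[nb]).mean(),
                               np.cos(2 * theta[nb]).mean())
        if np.cos(theta[i] - avg) < 0:            # keep heading closest to current
            avg += np.pi
        new_theta[i] = avg + eta * rng.uniform(-np.pi, np.pi)
    x_new = (x + v0 * np.c_[np.cos(new_theta), np.sin(new_theta)]) % L
    return x_new, new_theta
```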
Gravitational wave, collider and dark matter signals from a scalar singlet electroweak baryogenesis
Beniwal, Ankit; Lewicki, Marek; Wells, James D.; ...
2017-08-23
We analyse a simple extension of the SM with just an additional scalar singlet coupled to the Higgs boson. Here, we discuss the possible probes for electroweak baryogenesis in this model including collider searches, gravitational wave and direct dark matter detection signals. We show that a large portion of the model parameter space exists where the observation of gravitational waves would allow detection while the indirect collider searches would not.
Gravitational wave, collider and dark matter signals from a scalar singlet electroweak baryogenesis
NASA Astrophysics Data System (ADS)
Beniwal, Ankit; Lewicki, Marek; Wells, James D.; White, Martin; Williams, Anthony G.
2017-08-01
We analyse a simple extension of the SM with just an additional scalar singlet coupled to the Higgs boson. We discuss the possible probes for electroweak baryogenesis in this model including collider searches, gravitational wave and direct dark matter detection signals. We show that a large portion of the model parameter space exists where the observation of gravitational waves would allow detection while the indirect collider searches would not.
Gravitational wave, collider and dark matter signals from a scalar singlet electroweak baryogenesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beniwal, Ankit; Lewicki, Marek; Wells, James D.
We analyse a simple extension of the SM with just an additional scalar singlet coupled to the Higgs boson. Here, we discuss the possible probes for electroweak baryogenesis in this model including collider searches, gravitational wave and direct dark matter detection signals. We show that a large portion of the model parameter space exists where the observation of gravitational waves would allow detection while the indirect collider searches would not.
Modification of the Simons model for calculation of nonradial expansion plumes
NASA Technical Reports Server (NTRS)
Boyd, I. D.; Stark, J. P. W.
1989-01-01
The Simons model is a simple model for calculating the expansion plumes of rockets and thrusters and is a widely used engineering tool for the determination of spacecraft impingement effects. The model assumes that the density of the plume decreases radially from the nozzle exit. Although a high degree of success has been achieved in modeling plumes with moderate Mach numbers, the accuracy obtained under certain conditions is unsatisfactory. A modification made to the model that allows effective description of nonradial behavior in plumes is presented, and the conditions under which its use is preferred are prescribed.
Computer memory management system
Kirk, III, Whitson John
2002-01-01
A computer memory management system utilizing a memory structure system of "intelligent" pointers in which information related to the use status of the memory structure is designed into the pointer. Through this pointer system, the present invention provides essentially automatic memory management (often referred to as garbage collection) by allowing relationships between objects to have definite memory management behavior through a coding protocol which describes when relationships should be maintained and when they should be broken. In one aspect, the present invention allows automatic breaking of strong links to facilitate object garbage collection, coupled with relationship adjectives which define deletion of associated objects. In another aspect, the present invention includes simple-to-use infinite undo/redo functionality, in that it has the capability, through a simple function call, to undo all of the changes made to a data model since the previous `valid state` was noted.
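The strong/weak link idea can be illustrated with a generic Python analogy using weak references (not the patented implementation): parents keep strong links to children, back-links are weak, so dropping the parent lets it be garbage-collected automatically.

```python
import weakref

class Node:
    """Parent holds strong references to children; the back-link is weak."""
    def __init__(self, name, parent=None):
        self.name = name
        self.children = []                                       # strong links
        self._parent = weakref.ref(parent) if parent else None   # weak link

    @property
    def parent(self):
        return self._parent() if self._parent else None

root = Node("model")
child = Node("mesh", parent=root)
root.children.append(child)
print(child.parent.name)   # "model" while root is alive
del root                   # drops the only strong reference to root,
print(child.parent)        # so root is collected and the weak link returns None
```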
Vertical and pitching resonance of train cars moving over a series of simple beams
NASA Astrophysics Data System (ADS)
Yang, Y. B.; Yau, J. D.
2015-02-01
The resonant response, including both vertical and pitching motions, of an undamped sprung mass unit moving over a series of simple beams is studied by a semi-analytical approach. For a sprung mass that is very small compared with the beam, we first simplify the sprung mass as a constant moving force and obtain the response of the beam in closed form. With this, we then solve for the response of the sprung mass passing over a series of simple beams, and validate the solution by an independent finite element analysis. To evaluate the pitching resonance, we consider the cases of a two-axle model and a coach model traveling over rough rails supported by a series of simple beams. The resonance of a train car is characterized by the fact that its response continues to build up, as it travels over more and more beams. For train cars with long axle intervals, the vertical acceleration induced by pitching resonance dominates the peak response of the train traveling over a series of simple beams. The present semi-analytical study allows us to grasp the key parameters involved in the primary/sub-resonant responses. Other phenomena of resonance are also discussed in the exemplar study.
Analytical model for minority games with evolutionary learning
NASA Astrophysics Data System (ADS)
Campos, Daniel; Méndez, Vicenç; Llebot, Josep E.; Hernández, Germán A.
2010-06-01
In a recent work [D. Campos, J.E. Llebot, V. Méndez, Theor. Popul. Biol. 74 (2009) 16] we have introduced a biological version of the Evolutionary Minority Game that tries to reproduce the intraspecific competition for limited resources in an ecosystem. In comparison with the complex decision-making mechanisms used in standard Minority Games, only two extremely simple strategies (juveniles and adults) are accessible to the agents. Complexity is introduced instead through an evolutionary learning rule that allows younger agents to learn taking better decisions. We find that this game shows many of the typical properties found for Evolutionary Minority Games, like self-segregation behavior or the existence of an oscillation phase for a certain range of the parameter values. However, an analytical treatment becomes much easier in our case, taking advantage of the simple strategies considered. Using a model consisting of a simple dynamical system, the phase diagram of the game (which differentiates three phases: adults crowd, juveniles crowd and oscillations) is reproduced.
[Modelling of phosphorus transfers during haemodialysis].
Chazot, Guillaume; Lemoine, Sandrine; Juillard, Laurent
2017-04-01
Chronic kidney disease causes hyperphosphatemia, which is associated with increased cardiovascular risk and mortality. In patients with end-stage renal disease, haemodialysis allows the control of hyperphosphatemia. During a 4-h haemodialysis session, between 600 and 700 mg of phosphate are extracted from the plasma, whereas the latter contains only 90 mg of inorganic phosphate. The precise origin of these phosphates remains unknown. The modelling of phosphorus transfers makes it possible to predict the outcome of changes in dialysis prescription (duration, frequency) with simple two-compartment models and to describe the transfers between the different body compartments with more complex models. Work using 31P nuclear magnetic resonance spectroscopy in animals showed an increase in intracellular phosphate concentration and a decrease in intracellular ATP during a haemodialysis session, suggesting an intracellular origin of the phosphates. Copyright © 2017. Published by Elsevier Masson SAS.
Molecular graph convolutions: moving beyond fingerprints
NASA Astrophysics Data System (ADS)
Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick
2016-08-01
Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.
Molecular graph convolutions: moving beyond fingerprints.
Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick
2016-08-01
Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph-atoms, bonds, distances, etc.-which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.
Maximum entropy production allows a simple representation of heterogeneity in semiarid ecosystems.
Schymanski, Stanislaus J; Kleidon, Axel; Stieglitz, Marc; Narula, Jatin
2010-05-12
Feedbacks between water use, biomass and infiltration capacity in semiarid ecosystems have been shown to lead to the spontaneous formation of vegetation patterns in a simple model. The formation of patterns permits the maintenance of larger overall biomass at low rainfall rates compared with homogeneous vegetation. This results in a bias of models run at larger scales neglecting subgrid-scale variability. In the present study, we investigate the question whether subgrid-scale heterogeneity can be parameterized as the outcome of optimal partitioning between bare soil and vegetated area. We find that a two-box model reproduces the time-averaged biomass of the patterns emerging in a 100 x 100 grid model if the vegetated fraction is optimized for maximum entropy production (MEP). This suggests that the proposed optimality-based representation of subgrid-scale heterogeneity may be generally applicable to different systems and at different scales. The implications for our understanding of self-organized behaviour and its modelling are discussed.
ViSimpl: Multi-View Visual Analysis of Brain Simulation Data
Galindo, Sergio E.; Toharia, Pablo; Robles, Oscar D.; Pastor, Luis
2016-01-01
After decades of independent morphological and functional brain research, a key point in neuroscience nowadays is to understand the combined relationships between the structure of the brain and its components and their dynamics on multiple scales, ranging from circuits of neurons at micro or mesoscale to brain regions at macroscale. With such a goal in mind, there is a vast amount of research focusing on modeling and simulating activity within neuronal structures, and these simulations generate large and complex datasets which have to be analyzed in order to gain the desired insight. In such context, this paper presents ViSimpl, which integrates a set of visualization and interaction tools that provide a semantic view of brain data with the aim of improving its analysis procedures. ViSimpl provides 3D particle-based rendering that allows visualizing simulation data with their associated spatial and temporal information, enhancing the knowledge extraction process. It also provides abstract representations of the time-varying magnitudes supporting different data aggregation and disaggregation operations and giving also focus and context clues. In addition, ViSimpl tools provide synchronized playback control of the simulation being analyzed. Finally, ViSimpl allows performing selection and filtering operations relying on an application called NeuroScheme. All these views are loosely coupled and can be used independently, but they can also work together as linked views, both in centralized and distributed computing environments, enhancing the data exploration and analysis procedures. PMID:27774062
ViSimpl: Multi-View Visual Analysis of Brain Simulation Data.
Galindo, Sergio E; Toharia, Pablo; Robles, Oscar D; Pastor, Luis
2016-01-01
After decades of independent morphological and functional brain research, a key point in neuroscience nowadays is to understand the combined relationships between the structure of the brain and its components and their dynamics on multiple scales, ranging from circuits of neurons at micro or mesoscale to brain regions at macroscale. With such a goal in mind, there is a vast amount of research focusing on modeling and simulating activity within neuronal structures, and these simulations generate large and complex datasets which have to be analyzed in order to gain the desired insight. In such context, this paper presents ViSimpl, which integrates a set of visualization and interaction tools that provide a semantic view of brain data with the aim of improving its analysis procedures. ViSimpl provides 3D particle-based rendering that allows visualizing simulation data with their associated spatial and temporal information, enhancing the knowledge extraction process. It also provides abstract representations of the time-varying magnitudes supporting different data aggregation and disaggregation operations and giving also focus and context clues. In addition, ViSimpl tools provide synchronized playback control of the simulation being analyzed. Finally, ViSimpl allows performing selection and filtering operations relying on an application called NeuroScheme. All these views are loosely coupled and can be used independently, but they can also work together as linked views, both in centralized and distributed computing environments, enhancing the data exploration and analysis procedures.
Accidental inflation from Kähler uplifting
NASA Astrophysics Data System (ADS)
Ben-Dayan, Ido; Jing, Shenglin; Westphal, Alexander; Wieck, Clemens
2014-03-01
We analyze the possibility of realizing inflation with a subsequent dS vacuum in the Kähler uplifting scenario. The inclusion of several quantum corrections to the 4d effective action evades previous no-go theorems and allows for the construction of simple and successful models of string inflation. The predictions of several benchmark models are in accord with current observations, i.e., a red spectral index, negligible non-gaussianity, and spectral distortions similar to the simplest models of inflation. A particularly interesting subclass of models are "left-rolling" ones, where the overall volume of the compactified dimensions shrinks during inflation. We call this phenomenon "inflation by deflation" (IBD), where deflation refers to the internal manifold. This subclass has the appealing features of being insensitive to initial conditions, avoiding the overshooting problem, and allowing for observable running α ~ 0.012 and an enhanced tensor-to-scalar ratio r ~ 10^-5. The latter results differ significantly from many string inflation models.
Shell model for drag reduction with polymer additives in homogeneous turbulence.
Benzi, Roberto; De Angelis, Elisabetta; Govindarajan, Rama; Procaccia, Itamar
2003-07-01
Recent direct numerical simulations of the finite-extensibility nonlinear elastic dumbbell model with the Peterlin approximation of non-Newtonian hydrodynamics revealed that the phenomenon of drag reduction by polymer additives exists (albeit in reduced form) also in homogeneous turbulence. We use here a simple shell model for homogeneous viscoelastic flows, which recaptures the essential observations of the full simulations. The simplicity of the shell model allows us to offer a transparent explanation of the main observations. It is shown that the mechanism for drag reduction operates mainly on large scales. Understanding the mechanism allows us to predict how the amount of drag reduction depends on the various parameters in the model. The main conclusion is that drag reduction is not a universal phenomenon; it peaks in a window of parameters such as the Reynolds number and the relaxation rate of the polymer.
In silico method for modelling metabolism and gene product expression at genome scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lerman, Joshua A.; Hyduke, Daniel R.; Latif, Haythem
2012-07-03
Transcription and translation use raw materials and energy generated metabolically to create the macromolecular machinery responsible for all cellular functions, including metabolism. A biochemically accurate model of molecular biology and metabolism will facilitate comprehensive and quantitative computations of an organism's molecular constitution as a function of genetic and environmental parameters. Here we formulate a model of metabolism and macromolecular expression. Prototyping it using the simple microorganism Thermotoga maritima, we show our model accurately simulates variations in cellular composition and gene expression. Moreover, through in silico comparative transcriptomics, the model allows the discovery of new regulons and the improvement of the genome and transcription unit annotations. Our method presents a framework for investigating molecular biology and cellular physiology in silico and may allow quantitative interpretation of multi-omics data sets in the context of an integrated biochemical description of an organism.
DEAN: A program for dynamic engine analysis
NASA Technical Reports Server (NTRS)
Sadler, G. G.; Melcher, K. J.
1985-01-01
The Dynamic Engine Analysis program, DEAN, is a FORTRAN code implemented on the IBM/370 mainframe at NASA Lewis Research Center for digital simulation of turbofan engine dynamics. DEAN is an interactive program which allows the user to simulate engine subsystems as well as full engine systems with relative ease. The nonlinear first order ordinary differential equations which define the engine model may be solved by one of four integration schemes: a second order Runge-Kutta, a fourth order Runge-Kutta, an Adams Predictor-Corrector, or Gear's method for stiff systems. The numerical data generated by the model equations are displayed at specified intervals, between which the user may choose to modify various parameters affecting the model equations and transient execution. Following the transient run, versatile graphics capabilities allow close examination of the data. DEAN's modeling procedure and capabilities are demonstrated by generating a model of a simple compressor rig.
Using topographic networks to build a representation of consciousness.
Tinsley, Chris J
2008-04-01
The subject of consciousness has intrigued both psychologists and neuroscientists for many years. Following recent advances in the emerging field of cognitive neuroscience, there is the possibility that this fundamental process may soon be explained. In particular, there have been dramatic insights gained into the mechanisms of attention, cognition and perception in recent decades. Here, simple network models are proposed which are used to create a representation of consciousness. The models are inspired by the structure of the thalamus and all incorporate topographic layers in their structure. Operation of the models allows filtering of the information reaching the representation according to (1) modality and/or (2) sub-modality; in addition, several of the models allow filtering at the topographic level. The models presented have different structures and employ different integrative mechanisms to produce gating or amplification at different levels; the resultant representations of consciousness are discussed.
NASA Astrophysics Data System (ADS)
Donker, N. H. W.
2001-01-01
A hydrological model (YWB, yearly water balance) has been developed to model the daily rainfall-runoff relationship of the 202 km² Teba river catchment, located in semi-arid south-eastern Spain. The period of available data (1976-1993) includes some very rainy years with intensive storms (responsible for flooding parts of the town of Malaga) and also some very dry years. The YWB model is in essence a simple tank model in which the catchment is subdivided into a limited number of meaningful hydrological units. Instead of generating per unit surface runoff resulting from infiltration excess, runoff has been made the result of storage excess. Actual evapotranspiration is obtained by means of curves, included in the software, representing the relationship between the ratio of actual to potential evapotranspiration as a function of soil moisture content for three soil texture classes. The total runoff generated is split between base flow and surface runoff according to a given baseflow index. The two components are routed separately and subsequently joined. A large number of sequential years can be processed, and the results of each year are summarized by a water balance table and a daily based rainfall runoff time series. An attempt has been made to restrict the amount of input data to the minimum. Interactive manual calibration is advocated in order to allow better incorporation of field evidence and the experience of the model user. Field observations allowed for an approximate calibration at the hydrological unit level.
Using McStas for modelling complex optics, using simple building bricks
NASA Astrophysics Data System (ADS)
Willendrup, Peter K.; Udby, Linda; Knudsen, Erik; Farhi, Emmanuel; Lefmann, Kim
2011-04-01
The McStas neutron ray-tracing simulation package is a versatile tool for producing accurate neutron simulations, extensively used for design and optimization of instruments, virtual experiments, data analysis and user training. In McStas, component organization and simulation flow is intrinsically linear: the neutron interacts with the beamline components in a sequential order, one by one. Historically, a beamline component with several parts had to be implemented with a complete, internal description of all these parts, e.g. a guide component including all four mirror plates and the logic required to allow scattering between the mirrors. For quite a while, users have requested the ability to allow "components inside components", or meta-components, which combine the functionality of several simple components to achieve more complex behaviour, e.g. four single mirror plates together defining a guide. We will here show that it is now possible to define meta-components in McStas, and present a set of detailed, validated examples including a guide with an embedded, wedged, polarizing mirror system of the Helmholtz-Zentrum Berlin type.
NASA Technical Reports Server (NTRS)
Hoffler, Keith D.; Fears, Scott P.; Carzoo, Susan W.
1997-01-01
A generic airplane model concept was developed to allow configurations with various agility, performance, handling qualities, and pilot-vehicle interfaces to be generated rapidly for piloted simulation studies. The simple concept allows stick shaping and various stick command types or modes to drive an airplane with both linear and nonlinear components. Output from the stick shaping goes to linear models or a series of linear models that can represent an entire flight envelope. The generic model also has provisions for control power limitations, a nonlinear feature. Therefore, departures from controlled flight are possible. Note that only loss of control is modeled; the generic airplane does not accurately model post-departure phenomena. The model concept is presented herein, along with four example airplanes. Agility was varied across the four example airplanes without altering specific excess energy or significantly altering handling qualities. A new feedback scheme to provide angle-of-attack cueing to the pilot, while using a pitch rate command system, was implemented and tested.
Constraints on genes shape long-term conservation of macro-synteny in metazoan genomes.
Lv, Jie; Havlak, Paul; Putnam, Nicholas H
2011-10-05
Many metazoan genomes conserve chromosome-scale gene linkage relationships ("macro-synteny") from the common ancestor of multicellular animal life [1-4], but the biological explanation for this conservation is still unknown. Double cut and join (DCJ) is a simple, well-studied model of neutral genome evolution amenable to both simulation and mathematical analysis [5], but as we show here, it is not sufficient to explain long-term macro-synteny conservation. We examine a family of simple (one-parameter) extensions of DCJ to identify models and choices of parameters consistent with the levels of macro- and micro-synteny conservation observed among animal genomes. Our software implements a flexible strategy for incorporating various types of genomic context into the DCJ model ("DCJ-[C]"), and is available as open source software from http://github.com/putnamlab/dcj-c. A simple model of genome evolution, in which DCJ moves are allowed only if they maintain chromosomal linkage among a set of constrained genes, can simultaneously account for the level of macro-synteny conservation and for correlated conservation among multiple pairs of species. Simulations under this model indicate that a constraint on approximately 7% of metazoan genes is sufficient to constrain genome rearrangement to an average rate of 25 inversions and 1.7 translocations per million years.
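To make the constrained-move idea above concrete, the following is a minimal sketch, not the DCJ-[C] software itself: genomes are represented as lists of signed genes, a random inversion or translocation is proposed, and the move is accepted only if the set of constrained genes keeps its chromosomal linkage. The function names and the reduced move repertoire are illustrative simplifications.

    import random

    def linkage_signature(genome, constrained):
        """Record which constrained genes currently share a chromosome."""
        return frozenset(
            frozenset(abs(g) for g in chrom if abs(g) in constrained)
            for chrom in genome
        )

    def random_move(genome):
        """Propose a simple rearrangement: an inversion or a reciprocal translocation."""
        genome = [list(chrom) for chrom in genome]            # work on a copy
        if len(genome) > 1 and random.random() < 0.5:         # translocation
            a, b = random.sample(range(len(genome)), 2)
            i = random.randrange(len(genome[a]) + 1)
            j = random.randrange(len(genome[b]) + 1)
            genome[a], genome[b] = (genome[a][:i] + genome[b][j:],
                                    genome[b][:j] + genome[a][i:])
        else:                                                 # inversion
            c = random.randrange(len(genome))
            if len(genome[c]) >= 2:
                i, j = sorted(random.sample(range(len(genome[c]) + 1), 2))
                genome[c][i:j] = [-g for g in reversed(genome[c][i:j])]
        return genome

    def constrained_dcj_step(genome, constrained):
        """Accept the proposed move only if constrained genes keep their linkage."""
        proposal = random_move(genome)
        if linkage_signature(proposal, constrained) == linkage_signature(genome, constrained):
            return proposal
        return genome

Counting accepted inversions and translocations per unit of simulated time would then give rearrangement rates comparable to those quoted in the abstract, under these simplifying assumptions.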
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mellors, R J
The Comprehensive Nuclear Test Ban Treaty (CTBT) includes provisions for an on-site inspection (OSI), which allows the use of specific techniques to detect underground anomalies including cavities and rubble zones. One permitted technique is active seismic surveys such as seismic refraction or reflection. The purpose of this report is to conduct some simple modeling to evaluate the potential use of seismic reflection in detecting cavities and to test the use of open-source software in modeling possible scenarios. It should be noted that OSI inspections are conducted under specific constraints regarding duration and logistics. These constraints are likely to significantly impact active seismic surveying, as a seismic survey typically requires considerable equipment, effort, and expertise. For the purposes of this study, which is a first-order feasibility study, these issues will not be considered. This report provides a brief description of the seismic reflection method along with some commonly used software packages. This is followed by an outline of a simple processing stream based on a synthetic model, along with results from a set of models representing underground cavities. A set of scripts used to generate the models are presented in an appendix. We do not consider detection of underground facilities in this work, and the geologic setting used in these tests is an extremely simple one.
A simple model of the effect of ocean ventilation on ocean heat uptake
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nadiga, Balasubramanya T.; Urban, Nathan Mark
Presentation includes slides on Earth System Models vs. Simple Climate Models; A Popular SCM: Energy Balance Model of Anomalies; On calibrating against one ESM experiment, the SCM correctly captures that ESM's surface warming response with other forcings; Multi-Model Analysis: Multiple ESMs, Single SCM; Posterior Distributions of ECS; However In Excess of 90% of TOA Energy Imbalance is Sequestered in the World Oceans; Heat Storage in the Two Layer Model; Heat Storage in the Two Layer Model; Including TOA Rad. Imbalance and Ocean Heat in Calibration Improves Repr., but Significant Errors Persist; Improved Vertical Resolution Does Not Fix Problem; A Series of Expts. Confirms That Anomaly-Diffusing Models Cannot Properly Represent Ocean Heat Uptake; Physics of the Thermocline; Outcropping Isopycnals and Horizontally-Averaged Layers; Local interactions between outcropping isopycnals leads to non-local interactions between horizontally-averaged layers; Both Surface Warming and Ocean Heat are Well Represented With Just 4 Layers; A Series of Expts. Confirms That When Non-Local Interactions are Allowed, the SCMs Can Represent Both Surface Warming and Ocean Heat Uptake; and Summary and Conclusions.
Single Axis Attitude Control and DC Bus Regulation with Two Flywheels
NASA Technical Reports Server (NTRS)
Kascak, Peter E.; Jansen, Ralph H.; Kenny, Barbara; Dever, Timothy P.
2002-01-01
A computer simulation of a flywheel energy storage single axis attitude control system is described. The simulation models hardware which will be experimentally tested in the future. This hardware consists of two counter rotating flywheels mounted to an air table. The air table allows one axis of rotational motion. An inertia DC bus coordinator is set forth that allows the two control problems, bus regulation and attitude control, to be separated. Simulation results are presented with a previously derived flywheel bus regulator and a simple PID attitude controller.
ERIC Educational Resources Information Center
Art, Albert
2006-01-01
A model lift containing a figure of Albert Einstein is released from the side of a tall building and its free fall is arrested by elastic ropes. This arrangement allows four simple experiments to be conducted in the lift to demonstrate the effects of free fall and show how they can lead to the concept of the equivalence of inertial and…
Structural stocking guides: a new look at an old friend
Jeffrey H. Gove
2004-01-01
A parameter recovery-based model is developed that allows the incorporation of diameter distribution information directly into stocking guides. The method is completely general in applicability across different guides and forest types and could be adapted to other systems such as density management diagrams. It relies on a simple measure of diameter distribution shape...
Allele-sharing models: LOD scores and accurate linkage tests.
Kong, A; Cox, N J
1997-11-01
Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested.
A study of stiffness, residual strength and fatigue life relationships for composite laminates
NASA Technical Reports Server (NTRS)
Ryder, J. T.; Crossman, F. W.
1983-01-01
Qualitative and quantitative exploration of the relationship between stiffness, strength, fatigue life, residual strength, and damage of unnotched graphite/epoxy laminates subjected to tension loading is presented. Clarification of the mechanics of tension loading is intended to explain previous contradictory observations and hypotheses, to develop a simple procedure to anticipate strength, fatigue life, and stiffness changes, and to provide reasons for the study of more complex cases of compression, notches, and spectrum fatigue loading. Mathematical models were developed based upon analysis of the damage states, using laminate analysis, free-body modeling, or strain energy release rates. Enough understanding of the tension-loaded case is developed to allow the development of a proposed, simple procedure for calculating strain to failure, stiffness, strength, data scatter, and the shape of the stress-life curve for unnotched laminates subjected to tension load.
Cohen, Timothy; Craig, Nathaniel; Knapen, Simon
2016-03-15
We propose a simple model of split supersymmetry from gauge mediation. This model features gauginos that are parametrically a loop factor lighter than scalars, accommodates a Higgs boson mass of 125 GeV, and incorporates a simple solution to the μ–Bμ problem. The gaugino mass suppression can be understood as resulting from collective symmetry breaking. Imposing collider bounds on μ and requiring viable electroweak symmetry breaking implies small A-terms and small tan β; the stop mass ranges from 10^5 to 10^8 GeV. In contrast with models with anomaly + gravity mediation (which also predict a one-loop suppression for gaugino masses), our gauge-mediated scenario predicts aligned squark masses and a gravitino LSP. Gluinos, electroweakinos and Higgsinos can be accessible at the LHC and/or future colliders for a wide region of the allowed parameter space.
Kim, Y S; Balland, V; Limoges, B; Costentin, C
2017-07-21
Cyclic voltammetry is a particularly useful tool for characterizing charge accumulation in conductive materials. A simple model is presented to evaluate proton transport effects on charge storage in conductive materials associated with a redox process coupled with proton insertion into the bulk material from an aqueous buffered solution, a situation frequently encountered in metal oxide materials. The interplay between proton transport inside and outside the material is described using a formulation of the problem in terms of dimensionless variables, which allows the minimum number of parameters governing the cyclic voltammetric response to be defined, together with a simple description of the system geometry. This approach is illustrated by analysis of proton insertion in a mesoporous TiO2 film.
Large deviation analysis of a simple information engine
NASA Astrophysics Data System (ADS)
Maitland, Michael; Grosskinsky, Stefan; Harris, Rosemary J.
2015-11-01
Information thermodynamics provides a framework for studying the effect of feedback loops on entropy production. It has enabled the understanding of novel thermodynamic systems such as the information engine, which can be seen as a modern version of "Maxwell's Dæmon," whereby a feedback controller processes information gained by measurements in order to extract work. Here, we analyze a simple model of such an engine that uses feedback control based on measurements to obtain negative entropy production. We focus on the distribution and fluctuations of the information obtained by the feedback controller. Significantly, our model allows an analytic treatment for a two-state system with exact calculation of the large deviation rate function. These results suggest an approximate technique for larger systems, which is corroborated by simulation data.
Glassy Behavior due to Kinetic Constraints: from Topological Foam to Backgammon
NASA Astrophysics Data System (ADS)
Sherrington, David
A study is reported of a series of simple model systems with only non-interacting Hamiltonians, and hence simple equilibrium thermodynamics, but with constrained kinetics of a type initially suggested by topological considerations of foams and two-dimensional covalent glasses. It is demonstrated that macroscopic dynamical features characteristic of real glasses, such as two-time decays in energy and auto-correlation functions, arise and may be understood in terms of annihilation-diffusion concepts and theory. This recognition leads to a sequence of further models which (i) encapsulate the essence but are more readily simulated and open to easier analytic study, and (ii) allow generalization and extension to higher dimension. Fluctuation-dissipation relations are also considered and show novel aspects. The comparison is with strong glasses.
NASA Technical Reports Server (NTRS)
Metscher, Jonathan F.; Lewandowski, Edward J.
2013-01-01
A simple model of the Advanced Stirling Convertors (ASC) linear alternator and an AC bus controller has been developed and combined with a previously developed thermodynamic model of the convertor for a more complete simulation and analysis of the system performance. The model was developed using Sage, a 1-D thermodynamic modeling program that now includes electro-magnetic components. The convertor, consisting of a free-piston Stirling engine combined with a linear alternator, has sufficiently sinusoidal steady-state behavior to allow for phasor analysis of the forces and voltages acting in the system. A MATLAB graphical user interface (GUI) has been developed to interface with the Sage software for simplified use of the ASC model, calculation of forces, and automated creation of phasor diagrams. The GUI allows the user to vary convertor parameters while fixing different input or output parameters and observe the effect on the phasor diagrams or system performance. The new ASC model and GUI help create a better understanding of the relationship between the electrical component voltages and mechanical forces. This allows better insight into the overall convertor dynamics and performance.
Futamure, Sumire; Bonnet, Vincent; Dumas, Raphael; Venture, Gentiane
2017-11-07
This paper presents a method allowing a simple and efficient sensitivity analysis of the dynamic parameters of a complex whole-body human model. The proposed method is based on the ground reaction and joint moment regressor matrices, developed initially in robotics system identification theory, and involved in the equations of motion of the human body. The regressor matrices are linear with respect to the segment inertial parameters, allowing us to use simple sensitivity analysis methods. The sensitivity analysis method was applied to gait dynamics and kinematics data of nine subjects, using a 15-segment 3D model of the locomotor apparatus. According to the proposed sensitivity indices, 76 segment inertial parameters out of the 150 of the mechanical model were considered not influential for gait. The main findings were that the segment masses were influential and that, with the exception of the trunk, the moments of inertia were not influential for the computation of the ground reaction forces and moments and the joint moments. The same method also shows numerically that at least 90% of the lower-limb joint moments during the stance phase can be estimated only from a force plate and kinematics data, without knowing any of the segment inertial parameters. Copyright © 2017 Elsevier Ltd. All rights reserved.
Glueball spectra from a matrix model of pure Yang-Mills theory
NASA Astrophysics Data System (ADS)
Acharyya, Nirmalendu; Balachandran, A. P.; Pandey, Mahul; Sanyal, Sambuddha; Vaidya, Sachindeo
2018-05-01
We present variational estimates for the low-lying energies of a simple matrix model that approximates SU(3) Yang-Mills theory on a three-sphere of radius R. By fixing the ground state energy, we obtain the (integrated) renormalization group (RG) equation for the Yang-Mills coupling g as a function of R. This RG equation allows us to estimate the masses of other glueball states, which we find to be in excellent agreement with lattice simulations.
NASA Technical Reports Server (NTRS)
Englert, G. W.
1971-01-01
A model of the random walk is formulated to allow a simple computing procedure to replace the difficult problem of solving the Fokker-Planck equation. The step sizes and probabilities of taking steps in the various directions are expressed in terms of Fokker-Planck coefficients. Application is made to many-particle systems with Coulomb interactions. The relaxation of a highly peaked velocity distribution of particles to equilibrium conditions is illustrated.
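As a one-dimensional illustration of this idea (not the report's multi-dimensional Coulomb-collision scheme), a hedged sketch follows: the walker's step size is set by the diffusion coefficient and the direction probabilities by the drift coefficient, so its statistics reproduce the Fokker-Planck evolution in the small-step limit. The drift and diffusion functions below are placeholders.

    import math, random

    def fp_random_walk(v0, drift, diffusion, dt, n_steps):
        """1-D walk whose mean step is A(v) dt and whose step variance is 2 D(v) dt."""
        v = v0
        for _ in range(n_steps):
            step = math.sqrt(2.0 * diffusion(v) * dt)        # step size from D(v)
            p_plus = 0.5 * (1.0 + drift(v) * dt / step)      # directional bias from A(v)
            p_plus = min(max(p_plus, 0.0), 1.0)              # clip to a valid probability
            v += step if random.random() < p_plus else -step
        return v

    # Relaxation of an initially peaked velocity distribution toward equilibrium.
    samples = [fp_random_walk(v0=5.0, drift=lambda v: -v, diffusion=lambda v: 1.0,
                              dt=1e-3, n_steps=5000)
               for _ in range(1000)]

The correspondence is direct: the mean displacement per step equals drift(v)·dt and its variance is approximately 2·diffusion(v)·dt, which is exactly how the walk parameters encode the Fokker-Planck coefficients.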
NASA Astrophysics Data System (ADS)
Holway, Kevin; Thaxton, Christopher S.; Calantoni, Joseph
2012-11-01
Morphodynamic models of coastal evolution require relatively simple parameterizations of sediment transport for application over larger scales. Calantoni and Thaxton (2008) [6] presented a transport parameterization for bimodal distributions of coarse quartz grains derived from detailed boundary layer simulations for sheet flow and near sheet flow conditions. The simulation results, valid over a range of wave forcing conditions and large- to small-grain diameter ratios, were successfully parameterized with a simple power law that allows for the prediction of the transport rates of each size fraction. Here, we have applied the simple power law to a two-dimensional cellular automaton to simulate sheet flow transport. Model results are validated with experiments performed in the small oscillating flow tunnel (S-OFT) at the Naval Research Laboratory at Stennis Space Center, MS, in which sheet flow transport was generated with a bed composed of a bimodal distribution of non-cohesive grains. The work presented suggests that, under the conditions specified, algorithms that incorporate the power law may correctly reproduce laboratory bed surface measurements of bimodal sheet flow transport while inherently incorporating vertical mixing by size.
Using simple agent-based modeling to inform and enhance neighborhood walkability.
Badland, Hannah; White, Marcus; Macaulay, Gus; Eagleson, Serryn; Mavoa, Suzanne; Pettit, Christopher; Giles-Corti, Billie
2013-12-11
Pedestrian-friendly neighborhoods with proximal destinations and services encourage walking and decrease car dependence, thereby contributing to more active and healthier communities. Proximity to key destinations and services is an important aspect of the urban design decision making process, particularly in areas adopting a transit-oriented development (TOD) approach to urban planning, whereby densification occurs within walking distance of transit nodes. Modeling destination access within neighborhoods has been limited to circular catchment buffers or more sophisticated network-buffers generated using geoprocessing routines within geographical information systems (GIS). Both circular and network-buffer catchment methods are problematic. Circular catchment models do not account for street networks, thus do not allow exploratory 'what-if' scenario modeling; and network-buffering functionality typically exists within proprietary GIS software, which can be costly and requires a high level of expertise to operate. This study sought to overcome these limitations by developing an open-source simple agent-based walkable catchment tool that can be used by researchers, urban designers, planners, and policy makers to test scenarios for improving neighborhood walkable catchments. A simplified version of an agent-based model was ported to a vector-based open source GIS web tool using data derived from the Australian Urban Research Infrastructure Network (AURIN). The tool was developed and tested with end-user stakeholder working group input. The resulting model has proven to be effective and flexible, allowing stakeholders to assess and optimize the walkability of neighborhood catchments around actual or potential nodes of interest (e.g., schools, public transport stops). Users can derive a range of metrics to compare different scenarios modeled. These include: catchment area versus circular buffer ratios; mean number of streets crossed; and modeling of different walking speeds and wait time at intersections. The tool has the capacity to influence planning and public health advocacy and practice, and by using open-access source software, it is available for use locally and internationally. There is also scope to extend this version of the tool from a simple to a complex model, which includes agents (i.e., simulated pedestrians) 'learning' and incorporating other environmental attributes that enhance walkability (e.g., residential density, mixed land use, traffic volume).
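One of the metrics listed above, the ratio of the street-network catchment to the circular buffer, can be sketched with the open-source networkx package; the 'length' edge attribute, the node-spacing area proxy and the 800 m walking distance are illustrative assumptions, not the AURIN tool's API.

    import math
    import networkx as nx

    def catchment_ratio(G, origin, walk_dist=800.0, node_area=50.0 * 50.0):
        """Rough ratio of the network walkable catchment to the circular buffer."""
        # Nodes reachable within walk_dist metres along the street network.
        reachable = nx.single_source_dijkstra_path_length(
            G, origin, cutoff=walk_dist, weight="length")
        network_area = len(reachable) * node_area            # crude area proxy
        buffer_area = math.pi * walk_dist ** 2                # circular catchment area
        return network_area / buffer_area

In practice the catchment polygon would be built from the reachable edges rather than from counted nodes, but the ratio behaves the same way: values well below 1 flag poorly connected street layouts around the node of interest.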
Simple Scaling of Multi-Stream Jet Plumes for Aeroacoustic Modeling
NASA Technical Reports Server (NTRS)
Bridges, James
2016-01-01
When creating simplified, semi-empirical models for the noise of simple single-stream jets near surfaces it has proven useful to be able to generalize the geometry of the jet plume. Having a model that collapses the mean and turbulent velocity fields for a range of flows allows the problem to become one of relating the normalized jet field and the surface. However, most jet flows of practical interest involve jets of two or more coannular flows for which standard models for the plume geometry do not exist. The present paper describes one attempt to relate the mean and turbulent velocity fields of multi-stream jets to that of an equivalent single-stream jet. The normalization of single-stream jets is briefly reviewed, from the functional form of the flow model to the results of the modeling. Next, PIV data from a number of multi-stream jets is analyzed in a similar fashion. The results of several single-stream approximations of the multi-stream jet plume are demonstrated, with a best approximation determined and the shortcomings of the model highlighted.
Simple Scaling of Multi-Stream Jet Plumes for Aeroacoustic Modeling
NASA Technical Reports Server (NTRS)
Bridges, James
2015-01-01
When creating simplified, semi-empirical models for the noise of simple single-stream jets near surfaces it has proven useful to be able to generalize the geometry of the jet plume. Having a model that collapses the mean and turbulent velocity fields for a range of flows allows the problem to become one of relating the normalized jet field and the surface. However, most jet flows of practical interest involve jets of two or more co-annular flows for which standard models for the plume geometry do not exist. The present paper describes one attempt to relate the mean and turbulent velocity fields of multi-stream jets to that of an equivalent single-stream jet. The normalization of single-stream jets is briefly reviewed, from the functional form of the flow model to the results of the modeling. Next, PIV (Particle Image Velocimetry) data from a number of multi-stream jets is analyzed in a similar fashion. The results of several single-stream approximations of the multi-stream jet plume are demonstrated, with a 'best' approximation determined and the shortcomings of the model highlighted.
Overview and extensions of a system for routing directed graphs on SIMD architectures
NASA Technical Reports Server (NTRS)
Tomboulian, Sherryl
1988-01-01
Many problems can be described in terms of directed graphs that contain a large number of vertices where simple computations occur using data from adjacent vertices. A method is given for parallelizing such problems on an SIMD machine model that uses only nearest neighbor connections for communication and has no facility for local indirect addressing. Each vertex of the graph will be assigned to a processor in the machine. Rules for a labeling are introduced that support the use of a simple algorithm for movement of data along the edges of the graph. Additional algorithms are defined for addition and deletion of edges. Modifying or adding a new edge takes the same time as a parallel traversal. This combination of architecture and algorithms defines a system that is relatively simple to build and can do fast graph processing. All edges can be traversed in parallel in time O(T), where T is empirically proportional to the average path length in the embedding times the average degree of the graph. Additionally, an extension to the above method is presented which enhances performance by allowing some broadcasting capabilities.
NASA Astrophysics Data System (ADS)
Laje, Rodrigo; Mindlin, Gabriel B.
2002-12-01
We present a model for the activities of neural circuits in a nucleus found in the brains of songbirds: the robust nucleus of the archistriatum (RA). This is a forebrain song control nucleus responsible for the phasic and precise neural signals driving vocal and respiratory motor neurons during singing. Driving a physical model of the avian vocal organ with the signals generated by the neural model, we produce synthetic songs. This allows us to show that certain connectivity architectures in the RA give rise to a wide range of different vocalizations under simple excitatory instructions.
Simple cellular automaton model for traffic breakdown, highway capacity, and synchronized flow
NASA Astrophysics Data System (ADS)
Kerner, Boris S.; Klenov, Sergey L.; Schreckenberg, Michael
2011-10-01
We present a simple cellular automaton (CA) model for two-lane roads explaining the physics of traffic breakdown, highway capacity, and synchronized flow. The model consists of the rules “acceleration,” “deceleration,” “randomization,” and “motion” of the Nagel-Schreckenberg CA model as well as “overacceleration through lane changing to the faster lane,” “comparison of vehicle gap with the synchronization gap,” and “speed adaptation within the synchronization gap” of Kerner's three-phase traffic theory. We show that these few rules of the CA model can appropriately simulate fundamental empirical features of traffic breakdown and highway capacity found in traffic data measured over years in different countries, like characteristics of synchronized flow, the existence of the spontaneous and induced breakdowns at the same bottleneck, and associated probabilistic features of traffic breakdown and highway capacity. Single-vehicle data derived in model simulations show that synchronized flow first occurs and then self-maintains due to a spatiotemporal competition between speed adaptation to a slower speed of the preceding vehicle and passing of this slower vehicle. We find that the application of simple dependences of randomization probability and synchronization gap on driving situation allows us to explain the physics of moving synchronized flow patterns and the pinch effect in synchronized flow as observed in real traffic data.
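For readers unfamiliar with the Nagel-Schreckenberg rules that the model above builds on, a minimal single-lane sketch of those base rules follows; it omits the overacceleration, synchronization-gap and lane-changing rules of the three-phase extension, and the parameter values are conventional choices rather than the paper's.

    import random

    def nasch_step(pos, vel, road_len, v_max=5, p_slow=0.3):
        """One synchronous update of the Nagel-Schreckenberg CA on a ring road."""
        order = sorted(range(len(pos)), key=lambda i: pos[i])
        new_pos, new_vel = list(pos), list(vel)
        for k, i in enumerate(order):
            ahead = order[(k + 1) % len(order)]
            gap = (pos[ahead] - pos[i] - 1) % road_len        # empty cells to the leader
            v = min(vel[i] + 1, v_max)                        # acceleration
            v = min(v, gap)                                   # deceleration (no collisions)
            if v > 0 and random.random() < p_slow:            # randomization
                v -= 1
            new_vel[i] = v
            new_pos[i] = (pos[i] + v) % road_len              # motion
        return new_pos, new_vel

The cited model replaces the single randomization probability with situation-dependent values and adds the synchronization-gap comparison and lane changing, which is what produces the synchronized-flow phase described in the abstract.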
Huang, Kuan-Chun; White, Ryan J
2013-08-28
We develop a random walk model to simulate the Brownian motion and the electrochemical response of a single molecule confined to an electrode surface via a flexible molecular tether. We use our simple model, which requires no prior knowledge of the physics of the molecular tether, to predict and better understand the voltammetric response of surface-confined redox molecules when motion of the redox molecule becomes important. The single molecule is confined to a hemispherical volume with a maximum radius determined by the flexible molecular tether (5-20 nm) and is allowed to undergo true three-dimensional diffusion. Distance- and potential-dependent electron transfer probabilities are evaluated throughout the simulations to generate cyclic voltammograms of the model system. We find that at sufficiently slow cyclic voltammetric scan rates the electrochemical reaction behaves like an adsorbed redox molecule with no mass transfer limitation; thus, the peak current is proportional to the scan rate. Conversely, at faster scan rates the diffusional motion of the molecule limits the simulated peak current, which exhibits a linear dependence on the square root of the scan rate. The switch between these two limiting regimes occurs when the diffusion layer thickness, (2Dt)^(1/2), is ~10 times the tether length. Finally, we find that our model predicts the voltammetric behavior of a redox-active methylene blue tethered to an electrode surface via short flexible single-stranded, polythymine DNAs, allowing the estimation of diffusion coefficients for the end-tethered molecule.
Vantourout, Julien C; Miras, Haralampos N; Isidro-Llobet, Albert; Sproules, Stephen; Watson, Allan J B
2017-04-05
We report an investigation of the Chan-Lam amination reaction. A combination of spectroscopy, computational modeling, and crystallography has identified the structures of key intermediates and allowed a complete mechanistic description to be presented, including off-cycle inhibitory processes, the source of amine and organoboron reactivity issues, and the origin of competing oxidation/protodeboronation side reactions. Identification of key mechanistic events has allowed the development of a simple solution to these issues: manipulating Cu(I) → Cu(II) oxidation and exploiting three synergistic roles of boric acid has allowed the development of a general catalytic Chan-Lam amination, overcoming long-standing and unsolved amine and organoboron limitations of this valuable transformation.
Simple Model for Identifying Critical Regions in Atrial Fibrillation
NASA Astrophysics Data System (ADS)
Christensen, Kim; Manani, Kishan A.; Peters, Nicholas S.
2015-01-01
Atrial fibrillation (AF) is the most common abnormal heart rhythm and the single biggest cause of stroke. Ablation, destroying regions of the atria, is applied largely empirically and can be curative but with a disappointing clinical success rate. We design a simple model of activation wave front propagation on an anisotropic structure mimicking the branching network of heart muscle cells. This integration of phenomenological dynamics and pertinent structure shows how AF emerges spontaneously when the transverse cell-to-cell coupling decreases, as occurs with age, beyond a threshold value. We identify critical regions responsible for the initiation and maintenance of AF, the ablation of which terminates AF. The simplicity of the model allows us to calculate analytically the risk of arrhythmia and express the threshold value of transversal cell-to-cell coupling as a function of the model parameters. This threshold value decreases with increasing refractory period by reducing the number of critical regions which can initiate and sustain microreentrant circuits. These biologically testable predictions might inform ablation therapies and arrhythmic risk assessment.
A simple mathematical model to predict sea surface temperature over the northwest Indian Ocean
NASA Astrophysics Data System (ADS)
Noori, Roohollah; Abbasi, Mahmud Reza; Adamowski, Jan Franklin; Dehghani, Majid
2017-10-01
A novel and simple mathematical model was developed in this study to enhance the capacity of a reduced-order model based on eigenvectors (RMEV) to predict sea surface temperature (SST) in the northwest portion of the Indian Ocean, including the Persian and Oman Gulfs and the Arabian Sea. Developed using only the first two of 12,416 possible modes, the enhanced RMEV closely matched observed daily optimum interpolation SST (DOISST) values. The spatial distribution of the first mode indicated that the greatest variations in DOISST occurred in the Persian Gulf. Also, the slightly increasing trend in the temporal component of the first mode observed in the study area over the last 34 years properly reflected the impact of climate change and rising DOISST. Given its simplicity and high level of accuracy, the enhanced RMEV can be applied to forecast DOISST in oceans, something that the poor forecasting performance and large computational time of other numerical models may not allow.
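The eigenvector-based reduction described above can be illustrated with a short sketch (not the authors' exact RMEV formulation): an SVD of the SST anomaly matrix yields spatial modes and temporal coefficients, and keeping only the two leading modes gives the low-order representation. The synthetic data and variable names are placeholders.

    import numpy as np

    def leading_modes(sst, n_modes=2):
        """sst: (n_times, n_points) matrix. Returns temporal coefficients, spatial modes, mean."""
        mean_field = sst.mean(axis=0)
        u, s, vt = np.linalg.svd(sst - mean_field, full_matrices=False)
        coeffs = u[:, :n_modes] * s[:n_modes]      # time-varying mode amplitudes
        modes = vt[:n_modes]                       # spatial patterns
        return coeffs, modes, mean_field

    # Synthetic example: 1000 daily fields on 500 grid points.
    sst = np.random.rand(1000, 500)
    coeffs, modes, mean_field = leading_modes(sst, n_modes=2)
    reconstruction = coeffs @ modes + mean_field   # low-order approximation of the field

Forecasting would then operate on the low-dimensional coefficient time series rather than on the full gridded field, which is what keeps the computational cost small.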
Limit sets for natural extensions of Schelling’s segregation model
NASA Astrophysics Data System (ADS)
Singh, Abhinav; Vainchtein, Dmitri; Weiss, Howard
2011-07-01
Thomas Schelling developed an influential demographic model that illustrated how, even with relatively mild assumptions on each individual's nearest neighbor preferences, an integrated city would likely unravel to a segregated city, even if all individuals prefer integration. Individuals in Schelling's model cities are divided into two groups of equal number and each individual is "happy" or "unhappy" when the number of similar neighbors cross a simple threshold. In this manuscript we consider natural extensions of Schelling's original model to allow the two groups have different sizes and to allow different notions of happiness of an individual. We observe that differences in aggregation patterns of majority and minority groups are highly sensitive to the happiness threshold; for low threshold, the differences are small, and when the threshold is raised, striking new patterns emerge. We also observe that when individuals strongly prefer to live in integrated neighborhoods, the final states exhibit a new tessellated-like structure.
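A minimal sketch of the kind of dynamics described above, assuming a toroidal grid, two groups of possibly unequal size, and a single happiness threshold on the fraction of like neighbours; the move rule (an unhappy agent jumps to a random empty cell) and all parameter values are illustrative.

    import random

    def happy(grid, i, j, n, threshold):
        """An agent is happy if the fraction of like neighbours meets the threshold."""
        me = grid[i][j]
        nbrs = [grid[(i + di) % n][(j + dj) % n]
                for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
        occupied = [x for x in nbrs if x is not None]
        if not occupied:
            return True
        return sum(1 for x in occupied if x == me) / len(occupied) >= threshold

    def schelling_step(grid, n, threshold):
        """Move one unhappy agent to a random empty cell; return False if none moved."""
        agents = [(i, j) for i in range(n) for j in range(n) if grid[i][j] is not None]
        empty = [(i, j) for i in range(n) for j in range(n) if grid[i][j] is None]
        random.shuffle(agents)
        for i, j in agents:
            if empty and not happy(grid, i, j, n, threshold):
                k = random.randrange(len(empty))
                ni, nj = empty[k]
                grid[ni][nj], grid[i][j] = grid[i][j], None
                empty[k] = (i, j)
                return True
        return False

    # Unequal groups: 45% 'A', 30% 'B', 25% empty cells on a 30 x 30 torus.
    n = 30
    cells = ['A'] * 405 + ['B'] * 270 + [None] * 225
    random.shuffle(cells)
    grid = [cells[r * n:(r + 1) * n] for r in range(n)]
    for _ in range(200000):
        if not schelling_step(grid, n, threshold=0.5):
            break

Raising the threshold toward the values discussed in the abstract is where the differences between majority and minority aggregation patterns become visible in the final configurations.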
'Home made' model to study the greenhouse effect and global warming
NASA Astrophysics Data System (ADS)
Onorato, P.; Mascheretti, P.; DeAmbrosis, A.
2011-03-01
In this paper a simplified two-parameter model of the greenhouse effect on the Earth is developed, starting from the well known two-layer model. It allows both the analysis of the temperatures of the inner planets, by focusing on the role of the greenhouse effect, and a comparison between the temperatures the planets should have in the absence of greenhouse effect and their actual ones. It may also be used to predict the average temperature of the Earth surface in the future, depending on the variations of the concentration of greenhouse gases in the atmosphere due to human activities. This model can promote an elementary understanding of global warming since it allows a simple formalization of the energy balance for the Earth in the stationary condition, in the presence of greenhouse gases. For these reasons it can be introduced in courses for undergraduate physics students and for teacher preparation.
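As a hedged, even simpler companion to the two-layer model described above, the following one-layer grey-atmosphere energy balance shows the kind of calculation involved; the emissivity value is a tuning parameter chosen here only to land near the observed mean surface temperature, and the constants are standard values.

    SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
    S0 = 1361.0        # solar constant, W m^-2

    def surface_temperature(albedo=0.3, eps=0.78):
        """Equilibrium temperatures with a single grey, partially absorbing layer."""
        absorbed = S0 * (1.0 - albedo) / 4.0            # global-mean absorbed solar flux
        t_eff = (absorbed / SIGMA) ** 0.25              # no-greenhouse effective temperature
        t_surf = t_eff * (2.0 / (2.0 - eps)) ** 0.25    # greenhouse-enhanced surface value
        return t_eff, t_surf

    # eps = 0 recovers roughly 255 K; eps near 0.78 gives roughly 288 K for the Earth.

The article's two-parameter version refines this balance with a second layer, but the stationary-state bookkeeping, absorbed solar flux against emitted infrared, is the same.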
An improved model for teaching use of electronic apex locators.
Tchorz, J P; Hellwig, E; Altenburger, M J
2012-04-01
To develop a simple, practical and inexpensive model, which enables the use of electronic apex locators (EALs) during pre-clinical and continuing education. Extracted teeth were placed in a mould and embedded in acrylic resin. The resin was applied in two consecutive steps to form a cavity around the root apices. A closable plastic tube serves as a valve, and a steel wire connects to the EAL. With its semi-closed reservoir for conductive fluids surrounding the root apices, the new model enables working length measurements of root canals using EALs. The model simulates the clinical situation for endodontic teaching purposes, as it allows working length determination of root canals as recommended. The measuring results of the EAL can be verified by radiography. At the same time, the roots are not directly visible and accessible to the user, allowing a precise evaluation and grading of the treatment. © 2011 International Endodontic Journal.
Simulator of human visual perception
NASA Astrophysics Data System (ADS)
Bezzubik, Vitalii V.; Belashenkov, Nickolai R.
2016-04-01
A Difference of Circs (DoC) model, which simulates the responses of neurons (ganglion cells) to stimuli, is presented and studied in relation to the representation of receptive fields in the human retina. According to this model the response of neurons is reduced to the execution of simple arithmetic operations, and the results of these calculations correlate well with experimental data over a wide range of stimulus parameters. The simplicity of the model and the reliability with which it reproduces responses allow us to propose the concept of a device which can simulate the signals generated by ganglion cells as a reaction to presented stimuli. The signals produced according to the DoC model are considered as a result of primary processing of information received from receptors independently of their type and may be sent to higher levels of the nervous system of living creatures for subsequent processing. Such a device may be used as a prosthesis for a disabled organ.
NASA Astrophysics Data System (ADS)
Priego-Roche, Luz-María; Rieu, Dominique; Front, Agnès
Nowadays, organizations aiming to be successful in an increasingly competitive market tend to group together into virtual organizations. Designing the information system (IS) of such a virtual organization on the basis of the ISs of the participating organizations is a real challenge. The IS of a virtual organization plays an important role in the collaboration and cooperation of the participating organizations and in reaching the common goal. This article proposes criteria allowing virtual organizations to be identified and classified at an intentional level, as well as the information necessary for designing the organizations' IS. Instantiation of the criteria for a specific virtual organization and its participants allows simple graphical models to be generated in a modelling tool. The models will be used as bases for the IS design at organizational and operational levels. The approach is illustrated by the example of the virtual organization UGRT (a regional stockbreeders union in Tabasco, Mexico).
Revel8or: Model Driven Capacity Planning Tool Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Liming; Liu, Yan; Bui, Ngoc B.
2007-05-31
Designing complex multi-tier applications that must meet strict performance requirements is a challenging software engineering problem. Ideally, the application architect could derive accurate performance predictions early in the project life-cycle, leveraging initial application design-level models and a description of the target software and hardware platforms. To this end, we have developed a capacity planning tool suite for component-based applications, called Revel8or. The tool adheres to the model driven development paradigm and supports benchmarking and performance prediction for J2EE, .Net and Web services platforms. The suite is composed of three different tools: MDAPerf, MDABench and DSLBench. MDAPerf allows annotation of design diagrams and derives performance analysis models. MDABench allows a customized benchmark application to be modeled in the UML 2.0 Testing Profile and automatically generates a deployable application, with measurement automatically conducted. DSLBench allows the same benchmark modeling and generation to be conducted using a simple performance engineering Domain Specific Language (DSL) in Microsoft Visual Studio. DSLBench integrates with Visual Studio and reuses its load testing infrastructure. Together, the tool suite can assist capacity planning across platforms in an automated fashion.
Low Order Modeling Tools for Preliminary Pressure Gain Combustion Benefits Analyses
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.
2012-01-01
Pressure gain combustion (PGC) offers the promise of higher thermodynamic cycle efficiency and greater specific power in propulsion and power systems. This presentation describes a model, developed under a cooperative agreement between NASA and AFRL, for preliminarily assessing the performance enhancement and preliminary size requirements of PGC components either as stand-alone thrust producers or coupled with surrounding turbomachinery. The model is implemented in the Numerical Propulsion Simulation System (NPSS) environment allowing various configurations to be examined at numerous operating points. The validated model is simple, yet physics-based. It executes quickly in NPSS, yet produces realistic results.
Okada, Morihiro; Miller, Thomas C; Roediger, Julia; Shi, Yun-Bo; Schech, Joseph Mat
2017-09-01
Various animal models are indispensible in biomedical research. Increasing awareness and regulations have prompted the adaptation of more humane approaches in the use of laboratory animals. With the development of easier and faster methodologies to generate genetically altered animals, convenient and humane methods to genotype these animals are important for research involving such animals. Here, we report skin swabbing as a simple and noninvasive method for extracting genomic DNA from mice and frogs for genotyping. We show that this method is highly reliable and suitable for both immature and adult animals. Our approach allows a simpler and more humane approach for genotyping vertebrate animals.
Das, Rudra Narayan; Roy, Kunal; Popelier, Paul L A
2015-11-01
The present study explores the chemical attributes of diverse ionic liquids responsible for their cytotoxicity in a rat leukemia cell line (IPC-81) by developing predictive classification as well as regression-based mathematical models. Simple and interpretable descriptors derived from a two-dimensional representation of the chemical structures along with quantum topological molecular similarity indices have been used for model development, employing unambiguous modeling strategies that strictly obey the guidelines of the Organization for Economic Co-operation and Development (OECD) for quantitative structure-activity relationship (QSAR) analysis. The structure-toxicity relationships that emerged from both classification and regression-based models were in accordance with the findings of some previous studies. The models suggested that the cytotoxicity of ionic liquids is dependent on the cationic surfactant action, long alkyl side chains, cationic lipophilicity as well as aromaticity, the presence of a dialkylamino substituent at the 4-position of the pyridinium nucleus and a bulky anionic moiety. The models have been transparently presented in the form of equations, thus allowing their easy transferability in accordance with the OECD guidelines. The models have also been subjected to rigorous validation tests proving their predictive potential and can hence be used for designing novel and "greener" ionic liquids. The major strength of the present study lies in the use of a diverse and large dataset, use of simple reproducible descriptors and compliance with the OECD norms. Copyright © 2015 Elsevier Ltd. All rights reserved.
Equivalent circuit models for interpreting impedance perturbation spectroscopy data
NASA Astrophysics Data System (ADS)
Smith, R. Lowell
2004-07-01
As in-situ structural integrity monitoring disciplines mature, there is a growing need to process sensor/actuator data efficiently in real time. Although smaller, faster embedded processors will contribute to this, it is also important to develop straightforward, robust methods to reduce the overall computational burden for practical applications of interest. This paper addresses the use of equivalent circuit modeling techniques for inferring structure attributes monitored using impedance perturbation spectroscopy. In pioneering work about ten years ago significant progress was associated with the development of simple impedance models derived from the piezoelectric equations. Using mathematical modeling tools currently available from research in ultrasonics and impedance spectroscopy is expected to provide additional synergistic benefits. For purposes of structural health monitoring the objective is to use impedance spectroscopy data to infer the physical condition of structures to which small piezoelectric actuators are bonded. Features of interest include stiffness changes, mass loading, and damping or mechanical losses. Equivalent circuit models are typically simple enough to facilitate the development of practical analytical models of the actuator-structure interaction. This type of parametric structure model allows raw impedance/admittance data to be interpreted optimally using standard multiple, nonlinear regression analysis. One potential long-term outcome is the possibility of cataloging measured viscoelastic properties of the mechanical subsystems of interest as simple lists of attributes and their statistical uncertainties, whose evolution can be followed in time. Equivalent circuit models are well suited for addressing calibration and self-consistency issues such as temperature corrections, Poisson mode coupling, and distributed relaxation processes.
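A minimal sketch of the regression step mentioned above, fitting a series R-L-C equivalent circuit to impedance-magnitude data with SciPy's nonlinear least squares; the circuit topology, parameter values and synthetic data are illustrative and are not the paper's specific piezoelectric actuator model.

    import numpy as np
    from scipy.optimize import curve_fit

    def z_series_rlc(freq, R, L, C):
        """Impedance magnitude (ohm) of a series R-L-C branch at frequency freq (Hz)."""
        w = 2.0 * np.pi * freq
        return np.abs(R + 1j * w * L + 1.0 / (1j * w * C))

    # Synthetic sweep standing in for measured actuator impedance data.
    freq = np.linspace(1e3, 1e5, 200)
    z_meas = z_series_rlc(freq, 50.0, 1e-3, 1e-8) * (1 + 0.02 * np.random.randn(freq.size))

    popt, pcov = curve_fit(z_series_rlc, freq, z_meas, p0=[40.0, 8e-4, 2e-8])
    perr = np.sqrt(np.diag(pcov))      # statistical uncertainties of R, L and C

Tracking popt and perr over repeated impedance sweeps of a bonded actuator is one way to realize the catalog of mechanical attributes and their statistical uncertainties sketched in the abstract.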
Xiaoqiu Zuo; Urs Buehlmann; R. Edward Thomas
2004-01-01
Solving the least-cost lumber grade mix problem allows dimension mills to minimize the cost of dimension part production. This problem, due to its economic importance, has attracted much attention from researchers and industry in the past. Most solutions used linear programming models and assumed that a simple linear relationship existed between lumber grade mix and...
Computing diffusivities from particle models out of equilibrium
NASA Astrophysics Data System (ADS)
Embacher, Peter; Dirr, Nicolas; Zimmer, Johannes; Reina, Celia
2018-04-01
A new method is proposed to numerically extract the diffusivity of a (typically nonlinear) diffusion equation from underlying stochastic particle systems. The proposed strategy requires the system to be in local equilibrium and have Gaussian fluctuations but it is otherwise allowed to undergo arbitrary out-of-equilibrium evolutions. This could be potentially relevant for particle data obtained from experimental applications. The key idea underlying the method is that finite, yet large, particle systems formally obey stochastic partial differential equations of gradient flow type satisfying a fluctuation-dissipation relation. The strategy is here applied to three classic particle models, namely independent random walkers, a zero-range process and a symmetric simple exclusion process in one space dimension, to allow the comparison with analytic solutions.
Molecular graph convolutions: moving beyond fingerprints
Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick
2016-01-01
Molecular “fingerprints” encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement. PMID:27558503
Sound transmission through lightweight double-leaf partitions: theoretical modelling
NASA Astrophysics Data System (ADS)
Wang, J.; Lu, T. J.; Woodhouse, J.; Langley, R. S.; Evans, J.
2005-09-01
This paper presents theoretical modelling of the sound transmission loss through double-leaf lightweight partitions stiffened with periodically placed studs. First, by assuming that the effect of the studs can be replaced with elastic springs uniformly distributed between the sheathing panels, a simple smeared model is established. Second, periodic structure theory is used to develop a more accurate model taking account of the discrete placing of the studs. Both models treat incident sound waves in the horizontal plane only, for simplicity. The predictions of the two models are compared, to reveal the physical mechanisms determining sound transmission. The smeared model predicts relatively simple behaviour, in which the only conspicuous features are associated with coincidence effects with the two types of structural wave allowed by the partition model, and internal resonances of the air between the panels. In the periodic model, many more features are evident, associated with the structure of pass- and stop-bands for structural waves in the partition. The models are used to explain the effects of incidence angle and of the various system parameters. The predictions are compared with existing test data for steel plates with wooden stiffeners, and good agreement is obtained.
On Cellular Darwinism: Mitochondria.
Bull, Larry
2016-01-01
The significant role of mitochondria within cells is becoming increasingly clear. This letter uses the NKCS model of coupled fitness landscapes to explore aspects of organelle-nucleus coevolution. The phenomenon of mitochondrial diversity is allowed to emerge under a simple intracellular evolutionary process, including varying the relative rate of evolution by the organelle. It is shown how the conditions for the maintenance of more than one genetic variant of mitochondria are similar to those previously suggested as needed for the original symbiotic origins of the relationship using the NKCS model.
NASA Astrophysics Data System (ADS)
Tejeda, E.
2018-04-01
We present a simple, analytic model of an incompressible fluid accreting onto a moving gravitating object. This solution allows us to probe the highly subsonic regime of wind accretion. Moreover, it corresponds to the Newtonian limit of a previously known relativistic model of a stiff fluid accreting onto a black hole. Besides filling this blank in the literature, the new solution should be useful as a benchmark test for numerical hydrodynamics codes. Given its simplicity, it can also be used as an illustrative example in a gas dynamics course.
2017-02-08
cost benefit of the technology. 7.1 COST MODEL A simple cost model for the technology is presented so that a remediation professional can understand...reporting costs. The benefit of the qPCR analyses is that they allow the user to determine if aerobic cometabolism is possible. Because the PHE and...
NASA Astrophysics Data System (ADS)
Follum, Michael L.; Niemann, Jeffrey D.; Parno, Julie T.; Downer, Charles W.
2018-05-01
Frozen ground can be important to flood production and is often heterogeneous within a watershed due to spatial variations in the available energy, insulation by snowpack and ground cover, and the thermal and moisture properties of the soil. The widely used continuous frozen ground index (CFGI) model is a degree-day approach and identifies frozen ground using a simple frost index, which varies mainly with elevation through an elevation-temperature relationship. Similarly, snow depth and its insulating effect are also estimated based on elevation. The objective of this paper is to develop a model for frozen ground that (1) captures the spatial variations of frozen ground within a watershed, (2) allows the frozen ground model to be incorporated into a variety of watershed models, and (3) allows application in data sparse environments. To do this, we modify the existing CFGI method within the gridded surface subsurface hydrologic analysis watershed model. Among the modifications, the snowpack and frost indices are simulated by replacing air temperature (a surrogate for the available energy) with a radiation-derived temperature that aims to better represent spatial variations in available energy. Ground cover is also included as an additional insulator of the soil. Furthermore, the modified Berggren equation, which accounts for soil thermal conductivity and soil moisture, is used to convert the frost index into frost depth. The modified CFGI model is tested by application at six test sites within the Sleepers River experimental watershed in Vermont. Compared to the CFGI model, the modified CFGI model more accurately captures the variations in frozen ground between the sites, inter-annual variations in frozen ground depths at a given site, and the occurrence of frozen ground.
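The degree-day bookkeeping behind a CFGI-type index is compact enough to sketch. The recursion below follows a commonly published form (a decayed accumulation of freezing degree-days damped by snowpack insulation); the decay factor, snow coefficient and the daily inputs are illustrative assumptions rather than values from this study, which replaces air temperature with a radiation-derived temperature.

```python
import math

def cfgi_series(air_temp_c, snow_depth_cm, A=0.97, K=0.5):
    """Degree-day continuous frozen ground index (CFGI).

    A minimal sketch of a Molnau-Bissell style recursion:
        CFGI_t = A * CFGI_{t-1} - T_t * exp(-0.4 * K * D_t)
    where T_t is mean daily air temperature (deg C) and D_t is snow depth;
    the exponential term represents snowpack insulation.  Parameter values
    and the non-negativity clamp are illustrative assumptions only.
    """
    cfgi = 0.0
    series = []
    for T, D in zip(air_temp_c, snow_depth_cm):
        cfgi = max(0.0, A * cfgi - T * math.exp(-0.4 * K * D))
        series.append(round(cfgi, 2))
    return series

# Ten cold days with a thickening snowpack: the index climbs more slowly
# once the snow insulates the ground.
temps = [-5, -8, -2, -10, -6, -4, -7, -3, -9, -5]
snow  = [0, 0, 2, 2, 5, 5, 5, 8, 8, 8]
print(cfgi_series(temps, snow))
```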
Can we estimate the cellular phone RF peak output power with a simple experiment?
NASA Astrophysics Data System (ADS)
Fioreze, Maycon; dos Santos Junior, Sauli; Goncalves Hönnicke, Marcelo
2016-07-01
Cellular phones are becoming increasingly useful tools for students. Since cell phones operate in the microwave bandwidth, they can be used to motivate students to demonstrate and better understand the properties of electromagnetic waves. However, since these waves operate at higher frequencies (L-band, from 800 MHz to 2 GHz), they are not simple to detect. Usually, expensive real-time high-frequency oscilloscopes are required. Indirect measurements are also possible through heat-based and diode-detector-based radio-frequency (RF) power sensors. Another didactic and intuitive way is to explore a simple and inexpensive detection system, based on the interference effect caused in the electronic circuits of TV and PC loudspeakers, and to investigate different properties of the cell phones' RF electromagnetic waves, such as their power and modulation frequency. This manuscript proposes a way to quantify these measurements, based on a simple Friis equation model and the time constant of the circuit used in the detection system, in order to present them didactically to the students and even allow the students to explore such a simple detection system at home.
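The link-budget step of such an estimate is a one-line application of the Friis free-space equation, which can be inverted for the transmitted power once a received power has been inferred from the detection circuit. The sketch below shows only that inversion; the antenna gains, distance and assumed received power are illustrative placeholders, and the time-constant analysis of the detection circuit is not reproduced.

```python
import math

def friis_received_power(p_tx_w, f_hz, d_m, g_tx=1.0, g_rx=1.0):
    """Free-space Friis equation: P_rx = P_tx * G_tx * G_rx * (lambda / (4*pi*d))**2."""
    lam = 3.0e8 / f_hz
    return p_tx_w * g_tx * g_rx * (lam / (4 * math.pi * d_m)) ** 2

def peak_tx_power_from_rx(p_rx_w, f_hz, d_m, g_tx=1.0, g_rx=1.0):
    """Invert the Friis equation to estimate the phone's peak output power."""
    lam = 3.0e8 / f_hz
    return p_rx_w / (g_tx * g_rx * (lam / (4 * math.pi * d_m)) ** 2)

# Illustrative numbers only: a 900 MHz handset 0.5 m from the pickup, with
# an assumed received power of 1 microwatt at the detector.
print(peak_tx_power_from_rx(1e-6, 900e6, 0.5))   # estimated P_tx in watts
```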
What's Next: Recruitment of a Grounded Predictive Body Model for Planning a Robot's Actions.
Schilling, Malte; Cruse, Holk
2012-01-01
Even comparatively simple, reactive systems are able to control complex motor tasks, such as hexapod walking on unpredictable substrate. The capability of such a controller can be improved by introducing internal models of the body and of parts of the environment. Such internal models can be applied as inverse models, as forward models or to solve the problem of sensor fusion. Usually, separate models are used for these functions. Furthermore, separate models are used to solve different tasks. Here we concentrate on internal models of the body as the brain considers its own body the most important part of the world. The model proposed is formed by a recurrent neural network with the property of pattern completion. The model shows a hierarchical structure but nonetheless comprises a holistic system. One and the same model can be used as a forward model, as an inverse model, for sensor fusion, and, with a simple expansion, as a model to internally simulate (new) behaviors to be used for prediction. The model embraces the geometrical constraints of a complex body with many redundant degrees of freedom, and allows finding geometrically possible solutions. To control behavior such as walking, climbing, or reaching, this body model is complemented by a number of simple reactive procedures together forming a procedural memory. In this article, we illustrate the functioning of this network. To this end we present examples for solutions of the forward function and the inverse function, and explain how the complete network might be used for predictive purposes. The model is assumed to be "innate," so learning the parameters of the model is not (yet) considered.
Development of the Concept of Energy Conservation using Simple Experiments for Grade 10 Students
NASA Astrophysics Data System (ADS)
Rachniyom, S.; Toedtanya, K.; Wuttiprom, S.
2017-09-01
The purpose of this research was to develop students’ concept of and retention rate in relation to energy conservation. Activities included simple and easy experiments that considered energy transformation from potential to kinetic energy. The participants were 30 purposively selected grade 10 students in the second semester of the 2016 academic year. The research tools consisted of learning lesson plans and a learning achievement test. Results showed that the experiments worked well and were appropriate as learning activities. The students’ achievement scores increased significantly at the .05 statistical level, the students’ retention rates were at a high level, and learning behaviour was at a good level. These simple experiments allowed students to learn by demonstrating to their peers and encouraged them to use familiar models to explain phenomena in daily life.
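The energy transformation the activities rely on reduces to equating potential and kinetic energy. A minimal worked example, with illustrative drop heights and friction neglected, is:

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def speed_from_drop(height_m):
    """Ideal energy conservation: m*g*h = 0.5*m*v**2  =>  v = sqrt(2*g*h)."""
    return math.sqrt(2 * g * height_m)

# Illustrative drop heights; air resistance and friction are neglected.
for h in (0.5, 1.2, 2.0):
    print(f"h = {h:4.1f} m  ->  v = {speed_from_drop(h):.2f} m/s")
```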
HIA, the next step: Defining models and roles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Putters, Kim
If HIA is to be an effective instrument for optimising health interests in the policy making process it has to recognise the different contexts in which policy is made and the relevance of both technical rationality and political rationality. Policy making may adopt a rational perspective in which there is a systematic and orderly progression from problem formulation to solution, or a network perspective in which there are multiple interdependencies, extensive negotiation and compromise, and the steps from problem to solution are not followed sequentially or in any particular order. Policy problems may be simple with clear causal pathways and responsibilities or complex with unclear causal pathways and disputed responsibilities. Network analysis is required to show which stakeholders are involved, their support for health issues and the degree of consensus. From this analysis three models of HIA emerge. The first is the phases model, which is fitted to simple problems and a rational perspective of policymaking. This model involves following structured steps. The second model is the rounds (Echternach) model, which is fitted to complex problems and a network perspective of policymaking. This model is dynamic and concentrates on network solutions, taking these steps in no particular order. The final model is the 'garbage can' model, fitted to contexts which combine simple and complex problems. In this model HIA functions as a problem solver and signpost, keeping all possible solutions and stakeholders in play and allowing solutions to emerge over time. HIA models should be the beginning rather than the conclusion of discussion between the worlds of HIA and policymaking.
NASA Astrophysics Data System (ADS)
Aronica, G. T.; Candela, A.
2007-12-01
In this paper a Monte Carlo procedure for deriving frequency distributions of peak flows using a semi-distributed stochastic rainfall-runoff model is presented. The rainfall-runoff model used here is a very simple one, with a limited number of parameters, and practically does not require any calibration, resulting in a robust tool for those catchments which are partially or poorly gauged. The procedure is based on three modules: a stochastic rainfall generator module, a hydrologic loss module and a flood routing module. In the rainfall generator module the rainfall storm, i.e. the maximum rainfall depth for a fixed duration, is assumed to follow the two components extreme value (TCEV) distribution whose parameters have been estimated at regional scale for Sicily. The catchment response has been modelled by using the Soil Conservation Service-Curve Number (SCS-CN) method, in a semi-distributed form, for the transformation of total rainfall to effective rainfall, and a simple form of IUH for the flood routing. Here, the SCS-CN method is implemented in probabilistic form with respect to prior-to-storm conditions, allowing the classical iso-frequency assumption between rainfall and peak flow to be relaxed. The procedure is tested on six practical case studies where synthetic FFCs (flood frequency curves) were obtained starting from model variable distributions by simulating 5000 flood events, combining 5000 values of total rainfall depth for the storm duration with antecedent moisture conditions (AMC). The application of this procedure showed how the Monte Carlo simulation technique can reproduce the observed flood frequency curves with reasonable accuracy over a wide range of return periods using a simple and parsimonious approach, limited data input and without any calibration of the rainfall-runoff model.
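The core of such a derived flood frequency procedure is easy to illustrate: draw storm depths and antecedent conditions, convert each storm to effective rainfall with the SCS-CN relation, and rank the outcomes to form a frequency curve. The sketch below uses placeholder distributions (a Gumbel storm-depth law and a discrete curve-number mixture) instead of the regional TCEV parameters and probabilistic AMC treatment of the paper, and it omits the IUH routing step.

```python
import numpy as np

def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """SCS Curve Number effective rainfall (mm):
    S = 25400/CN - 254,  Ia = ia_ratio*S,  Q = (P-Ia)**2 / (P-Ia+S) for P > Ia.
    """
    s = 25400.0 / cn - 254.0
    ia = ia_ratio * s
    p = np.asarray(p_mm, dtype=float)
    return np.where(p > ia, (p - ia) ** 2 / (p - ia + s), 0.0)

rng = np.random.default_rng(1)
n_events = 5000
# Placeholder marginal distributions (the paper uses a regional TCEV law for
# rainfall and a probabilistic antecedent-moisture treatment of CN).
storm_depth = rng.gumbel(loc=40.0, scale=15.0, size=n_events)            # mm
curve_number = rng.choice([60, 75, 90], p=[0.3, 0.5, 0.2], size=n_events)

runoff = scs_cn_runoff(storm_depth, curve_number)
# Empirical frequency curve of effective rainfall: sort descending and attach
# Weibull plotting-position return periods.
sorted_q = np.sort(runoff)[::-1]
return_period = (n_events + 1) / (np.arange(n_events) + 1)
print(list(zip(return_period[:5].round(0), sorted_q[:5].round(1))))
```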
Gravitational decoupled anisotropies in compact stars
NASA Astrophysics Data System (ADS)
Gabbanelli, Luciano; Rincón, Ángel; Rubio, Carlos
2018-05-01
Simple generic extensions of isotropic Durgapal-Fuloria stars to the anisotropic domain are presented. These anisotropic solutions are obtained by guided minimal deformations over the isotropic system. When the anisotropic sector interacts in a purely gravitational manner, the conditions to decouple both sectors by means of the minimal geometric deformation approach are satisfied. Hence the anisotropic field equations are isolated, resulting in a more tractable set. The simplicity of the equations allows one to manipulate the anisotropies, which can be implemented in a systematic way to obtain different realistic models for anisotropic configurations. Later on, observational effects of such anisotropies when measuring the surface redshift are discussed. To conclude, the consistency of the application of the method over the obtained anisotropic configurations is shown. In this manner, different anisotropic sectors can be isolated from each other and modeled in a simple and systematic way.
Characterization of biofilms with a fiber optic spectrometer
NASA Astrophysics Data System (ADS)
Krautwald, S.; Tonyali, A.; Fellerhoff, B.; Franke, Hilmar; Tamachkiarov, A.; Griebe, T.; Flemming, H. C.
2000-12-01
Optical sensing is one promising approach to monitor biofilms at an early stage. Generally, natural biofilms are quite inhomogeneous; therefore we start the investigation with suspensions of dead bacteria in water as a simple model for a biofilm. An experimental arrangement based on a white-light fiber optic spectrometer is used for measuring the density of a thin film with a local resolution of the order of several micrometres. The method is applied to model biofilms. In a computer-controlled procedure, reflectance spectra may be recorded at different positions in the x-y plane. Scanning through thin suspension regions of bacteria between glass plates allows an estimation of the refractive index of the bacteria. Taking advantage of the light-collecting property of the glass substrate, a simple measurement of the fluorescence with local resolution is demonstrated as well.
Simulation studies of self-organization of microtubules and molecular motors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jian, Z.; Karpeev, D.; Aranson, I. S.
We perform Monte Carlo type simulation studies of self-organization of microtubules interacting with molecular motors. We model microtubules as stiff polar rods of equal length exhibiting anisotropic diffusion in the plane. The molecular motors are implicitly introduced by specifying certain probabilistic collision rules resulting in realignment of the rods. This approximation of the complicated microtubule-motor interaction by a simple instant collision allows us to bypass the 'computational bottlenecks' associated with the details of the diffusion and the dynamics of motors and the reorientation of microtubules. Consequently, we are able to perform simulations of large ensembles of microtubules and motors on a very large time scale. This simple model reproduces all important phenomenology observed in in vitro experiments: formation of vortices for low motor density and raylike asters and bundles for higher motor density.
Deciphering mRNA Sequence Determinants of Protein Production Rate
NASA Astrophysics Data System (ADS)
Szavits-Nossan, Juraj; Ciandrini, Luca; Romano, M. Carmen
2018-03-01
One of the greatest challenges in biophysical models of translation is to identify coding sequence features that affect the rate of translation and therefore the overall protein production in the cell. We propose an analytic method to solve a translation model based on the inhomogeneous totally asymmetric simple exclusion process, which allows us to unveil simple design principles of nucleotide sequences determining protein production rates. Our solution shows an excellent agreement when compared to numerical genome-wide simulations of S. cerevisiae transcript sequences and predicts that the first 10 codons, which is the ribosome footprint length on the mRNA, together with the value of the initiation rate, are the main determinants of protein production rate under physiological conditions. Finally, we interpret the obtained analytic results based on the evolutionary role of the codons' choice for regulating translation rates and ribosome densities.
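The lattice model underlying this analysis is well defined, so a minimal kinetic Monte Carlo version of an open-boundary exclusion process may help fix ideas. The sketch uses a uniform elongation rate rather than the codon-dependent (inhomogeneous) rates treated analytically in the paper; the lattice length and rates are illustrative assumptions.

```python
import random

def tasep_rate(L=50, alpha=0.3, beta=0.8, steps=200000, seed=0):
    """Kinetic Monte Carlo of an open-boundary TASEP with uniform hopping.

    Sites stand for codons and particles for ribosomes; alpha is the
    initiation (entry) rate, beta the termination (exit) rate, and the
    internal hopping rate is 1.  Returns the simulated protein production
    rate (exits per unit time, i.e. per sweep of L+1 attempted moves).
    """
    random.seed(seed)
    lattice = [0] * L
    exits = 0
    for _ in range(steps):
        i = random.randrange(L + 1)            # pick entry, exit or a bond
        if i == 0:                             # initiation
            if lattice[0] == 0 and random.random() < alpha:
                lattice[0] = 1
        elif i == L:                           # termination
            if lattice[-1] == 1 and random.random() < beta:
                lattice[-1] = 0
                exits += 1
        else:                                  # elongation with exclusion
            if lattice[i - 1] == 1 and lattice[i] == 0:
                lattice[i - 1], lattice[i] = 0, 1
    return exits * (L + 1) / steps

# In the low-density phase (alpha < beta, alpha < 1/2) the current should
# approach roughly alpha * (1 - alpha) = 0.21 for these illustrative rates.
print(tasep_rate())
```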
Model of Pressure Distribution in Vortex Flow Controls
NASA Astrophysics Data System (ADS)
Mielczarek, Szymon; Sawicki, Jerzy M.
2015-06-01
Vortex valves belong to the category of hydrodynamic flow controls. They are important and theoretically interesting devices, so complex from a hydraulic point of view that, probably for this reason, no rational concept of their operation has been proposed so far. In consequence, the functioning of vortex valves is described by CFD methods (computer-aided simulation of technical objects) or by means of simple empirical relations (using a discharge coefficient or a hydraulic loss coefficient). Such a rational model of the considered device is proposed in the paper. It has a simple algebraic form, but is well grounded physically. The basic quantitative relationship describing the valve operation, i.e. the dependence between the flow discharge and the circumferential pressure head caused by the rotation, has been verified empirically. Conformity between calculated and measured parameters of the device allows for acceptance of the proposed concept.
Evolutionary synthesis of simple stellar populations. Colours and indices
NASA Astrophysics Data System (ADS)
Kurth, O. M.; Fritze-v. Alvensleben, U.; Fricke, K. J.
1999-07-01
We construct evolutionary synthesis models for simple stellar populations using the evolutionary tracks from the Padova group (1993, 1994), theoretical colour calibrations from Lejeune et al. (1997, 1998) and fit functions for stellar atmospheric indices from Worthey et al. (1994). A Monte-Carlo technique allows us to obtain a smooth time evolution of both broad band colours in UBVRIK and a series of stellar absorption features for Single Burst Stellar Populations (SSPs). We present colours and indices for SSPs with ages from 1 × 10^9 yr to 1.6 × 10^10 yr and metallicities [M/H] = -2.3, -1.7, -0.7, -0.4, 0.0 and 0.4. Model colours and indices at an age of about a Hubble time are in good agreement with observed colours and indices of the Galactic and M 31 GCs.
Optimized theory for simple and molecular fluids.
Marucho, M; Montgomery Pettitt, B
2007-03-28
An optimized closure approximation for both simple and molecular fluids is presented. A smooth interpolation between Percus-Yevick and hypernetted chain closures is optimized by minimizing the free energy self-consistently with respect to the interpolation parameter(s). The molecular version is derived from a refinement of the method for simple fluids. In doing so, a method is proposed which appropriately couples an optimized closure with the variant of the diagrammatically proper integral equation recently introduced by this laboratory [K. M. Dyer et al., J. Chem. Phys. 123, 204512 (2005)]. The simplicity of the expressions involved in this proposed theory has allowed the authors to obtain an analytic expression for the approximate excess chemical potential. This is shown to be an efficient tool to estimate, from first principles, the numerical value of the interpolation parameters defining the aforementioned closure. As a preliminary test, representative models for simple fluids and homonuclear diatomic Lennard-Jones fluids were analyzed, obtaining site-site correlation functions in excellent agreement with simulation data.
Emergence of power-law in a market with mixed models
NASA Astrophysics Data System (ADS)
Ali Saif, M.; Gade, Prashant M.
2007-10-01
We investigate the problem of wealth distribution from the viewpoint of asset exchange. The robust nature of Pareto's law across economies, ideologies and nations suggests that this could be an outcome of trading strategies. However, the simple asset exchange models fail to reproduce this feature. A Yardsale (YS) model, in which the amount put on the bet is a fraction of the minimum of the two players' wealths, leads to condensation of wealth in the hands of a single agent, while the theft and fraud (TF) model, in which the amount to be exchanged is a fraction of the loser's wealth, leads to an exponential distribution of wealth. We show that if we allow a few agents to follow a different model than the others, i.e., there are some agents following the TF model while the rest follow the YS model, this leads to a distribution with power-law tails. A similar effect is observed when one carries out transactions for a fraction of one's wealth using the TF model while the YS model is used for the rest. We also observe a power-law tail in the wealth distribution if we allow the agents to follow either of the models with some probability.
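Both exchange rules are simple enough to simulate directly. The sketch below mixes the two rules by letting a small minority of agents always trade by the TF rule (any pairing involving such an agent uses TF) while the remainder use YS; this pairing convention and all parameter values are assumptions for illustration, not the exact protocol of the paper.

```python
import random

def simulate_market(n_agents=1000, n_tf=50, frac=0.1, steps=200000, seed=0):
    """Asset-exchange simulation mixing Yardsale (YS) and theft-and-fraud (TF) rules.

    YS: the stake is a fraction of the poorer player's wealth.
    TF: the stake is a fraction of the loser's wealth.
    A small minority of agents (n_tf of them) always trades by the TF rule;
    the rest use YS.  Parameter values are illustrative.
    """
    random.seed(seed)
    wealth = [1.0] * n_agents
    is_tf = [i < n_tf for i in range(n_agents)]
    for _ in range(steps):
        i, j = random.sample(range(n_agents), 2)
        winner, loser = (i, j) if random.random() < 0.5 else (j, i)
        if is_tf[i] or is_tf[j]:
            stake = frac * wealth[loser]              # TF rule
        else:
            stake = frac * min(wealth[i], wealth[j])  # YS rule
        wealth[winner] += stake
        wealth[loser] -= stake
    return sorted(wealth, reverse=True)

top = simulate_market()[:10]
print([round(w, 2) for w in top])   # a heavy (power-law-like) tail is expected
```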
NASA Astrophysics Data System (ADS)
Gwiazda, A.; Banas, W.; Sekala, A.; Foit, K.; Hryniewicz, P.; Kost, G.
2015-11-01
The process of workcell design is limited by different constructional requirements. They are related to the technological parameters of the manufactured element, to the specifications of purchased elements of a workcell and to the technical characteristics of the workcell scene. This shows the complexity of the design-construction process itself. The result of such an approach is an individually designed workcell suited to a specific location and a specific production cycle. Changing these parameters, one must rebuild the whole configuration of the workcell. Taking this into consideration, it is important to elaborate a base of typical elements of a robot kinematic chain that could be used as a tool for such building. Virtual modelling of kinematic chains of industrial robots requires several preparatory phases. Firstly, it is important to create a database of elements that will serve as models of industrial robot arms. These models could be described as functional primitives that represent the components of the kinematic pairs and the structural members of industrial robots. A database with the following elements is created: the base of kinematic pairs, the base of robot structural elements and the base of robot work scenes. The first of these databases includes kinematic pairs, being the key components of the manipulator actuator modules. Accordingly, as mentioned previously, in the first stage it includes the rotary pair of the fifth class. This type of kinematic pair was chosen because it occurs most frequently in the structures of industrial robots. The second base consists of structural robot elements and therefore allows the conversion of schematic structures of kinematic chains into the structural elements of the arms of industrial robots. It contains, inter alia, structural elements such as the base and stiff members - simple or angular units. They allow the recorded schematic elements to be converted into three-dimensional ones. The last database is a database of scenes. It includes elements both simple and complex: simple models of technological equipment, conveyor models, models of obstacles and the like. Using these elements, various production spaces (robotized workcells) can be formed, in which it is possible to virtually track the operation of an industrial robot arm modelled in the system.
NASA Astrophysics Data System (ADS)
Gerhard, J.; Zanoni, M. A. B.; Torero, J. L.
2017-12-01
Smouldering (i.e., flameless combustion) underpins the technology Self-sustaining Treatment for Active Remediation (STAR). STAR achieves the in situ destruction of nonaqueous phase liquids (NAPLs) by generating a self-sustained smouldering reaction that propagates through the source zone. This research explores the nature of the travelling reaction and the influence of key in situ and engineered characteristics. A novel one-dimensional numerical model was developed (in COMSOL) to simulate the smouldering remediation of bitumen-contaminated sand. This model was validated against laboratory column experiments. Achieving model validation depended on correctly simulating the energy balance at the reaction front, including properly accounting for heat transfer, smouldering kinetics, and heat losses. Heat transfer between soil and air was demonstrated to be generally not at equilibrium. Moreover, existing heat transfer correlations were found to be inappropriate for the low air flow Reynolds numbers (Re < 30) relevant in this and similar thermal remediation systems. Therefore, a suite of experiments was conducted to generate a new heat transfer correlation, which generated correct simulations of convective heat flow through soil. Moreover, it was found that, for most cases of interest, a simple two-step pyrolysis/oxidation set of kinetic reactions was sufficient. Arrhenius parameters, calculated independently from thermogravimetric experiments, allowed the reaction kinetics to be validated in the smouldering model. Furthermore, a simple heat loss term sufficiently accounted for radial heat losses from the column. Altogether, these advances allow this simple model to reasonably predict the self-sustaining process including the peak reaction temperature, the reaction velocity, and the complete destruction of bitumen behind the front. Simulations with the validated model revealed numerous unique insights, including how the system inherently recycles energy, how air flow rate and NAPL saturation dictate contaminant destruction rates, and the extremes that lead to extinction. Overall, this research provides unique insights into the complex interplay of thermochemical processes that govern the success of smouldering as well as other thermal remediation approaches.
Wardlow, Nathan; Polin, Chris; Villagomez-Bernabe, Balder; Currell, Fred
2015-11-01
We present a simple model for a component of the radiolytic production of any chemical species due to electron emission from irradiated nanoparticles (NPs) in a liquid environment, provided the expression for the G value for product formation is known and is reasonably well characterized by a linear dependence on beam energy. This model takes nanoparticle size, composition, density and a number of other readily available parameters (such as X-ray and electron attenuation data) as inputs and therefore allows for the ready determination of this contribution. Several approximations are used, thus this model provides an upper limit to the yield of chemical species due to electron emission, rather than a distinct value, and this upper limit is compared with experimental results. After the general model is developed we provide details of its application to the generation of HO• through irradiation of gold nanoparticles (AuNPs), a potentially important process in nanoparticle-based enhancement of radiotherapy. This model has been constructed with the intention of making it accessible to other researchers who wish to estimate chemical yields through this process, and is shown to be applicable to NPs of single elements and mixtures. The model can be applied without the need to develop additional skills (such as using a Monte Carlo toolkit), providing a fast and straightforward method of estimating chemical yields. A simple framework for determining the HO• yield for different NP sizes at constant NP concentration and initial photon energy is also presented.
OBSIFRAC: database-supported software for 3D modeling of rock mass fragmentation
NASA Astrophysics Data System (ADS)
Empereur-Mot, Luc; Villemin, Thierry
2003-03-01
Under stress, fractures in rock masses tend to form fully connected networks. The mass can thus be thought of as a 3D series of blocks produced by fragmentation processes. A numerical model has been developed that uses a relational database to describe such a mass. The model, which assumes the fractures to be plane, allows data from natural networks to test theories concerning fragmentation processes. In the model, blocks are bordered by faces that are composed of edges and vertices. A fracture can originate from a seed point, its orientation being controlled by the stress field specified by an orientation matrix. Alternatively, it can be generated from a discrete set of given orientations and positions. Both kinds of fracture can occur together in a model. From an original simple block, a given fracture produces two simple polyhedral blocks, and the original block becomes compound. Compound and simple blocks created throughout fragmentation are stored in the database. Several fragmentation processes have been studied. In one scenario, a constant proportion of blocks is fragmented at each step of the process. The resulting distribution appears to be fractal, although seed points are random in each fragmented block. In a second scenario, division affects only one random block at each stage of the process, and gives a Weibull volume distribution law. This software can be used for a large number of other applications.
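The first fragmentation scenario (a constant proportion of blocks split at each step) can be mimicked with simple volume bookkeeping, without the relational database or the polyhedral geometry of OBSIFRAC. The split-fraction rule and parameter values below are illustrative assumptions.

```python
import random

def fragment(n_steps=12, frac=0.5, seed=0):
    """Volume bookkeeping for the first scenario: at each step a constant
    proportion `frac` of the current blocks is selected at random and each
    selected block is split into two pieces at a random position.
    Only volumes are tracked; geometry (faces, edges, vertices) is omitted.
    """
    random.seed(seed)
    volumes = [1.0]                               # start from a unit block
    for _ in range(n_steps):
        n_split = max(1, int(frac * len(volumes)))
        for idx in random.sample(range(len(volumes)), n_split):
            v = volumes[idx]
            cut = random.uniform(0.2, 0.8)        # random internal seed point
            volumes[idx] = v * cut
            volumes.append(v * (1.0 - cut))
    return sorted(volumes, reverse=True)

vols = fragment()
print(len(vols), "blocks; largest/smallest volume:",
      round(vols[0], 4), "/", round(vols[-1], 6))
```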
Hidden Markov models and neural networks for fault detection in dynamic systems
NASA Technical Reports Server (NTRS)
Smyth, Padhraic
1994-01-01
Neural networks plus hidden Markov models (HMM) can provide excellent detection and false alarm rate performance in fault detection applications, as shown in this viewgraph presentation. Modified models allow for novelty detection. Key contributions of neural network models are: (1) excellent nonparametric discrimination capability; (2) a good estimator of posterior state probabilities, even in high dimensions, and thus can be embedded within overall probabilistic model (HMM); and (3) simple to implement compared to other nonparametric models. Neural network/HMM monitoring model is currently being integrated with the new Deep Space Network (DSN) antenna controller software and will be on-line monitoring a new DSN 34-m antenna (DSS-24) by July, 1994.
NASA Astrophysics Data System (ADS)
Armstrong, Robert A.
2003-11-01
Phytoplankton species interact through competition for light and nutrients; they also interact through grazers they hold in common. Both interactions are expected to be size-dependent: smaller phytoplankton species will be at an advantage when nutrients are scarce due to surface/volume considerations, while species that are similar in size are more likely to be consumed by grazers held in common than are species that differ greatly in size. While phytoplankton competition for nutrients and light has been extensively characterized, size-based interaction through shared grazers has not been represented systematically. The latter situation is particularly unfortunate because small changes in community structure can give rise to large changes in ecosystem dynamics and, in inverse modeling, to large changes in estimated parameter values. A simple, systematic way to represent phytoplankton interaction through shared grazers, one resistant to unintended idiosyncrasy of model construction yet capable of representing scientifically justifiable idiosyncrasy, would aid greatly in the modeling process. Here I develop a model structure that allows systematic representation of plankton interaction. In this model, the zooplankton community is represented as a continuous size spectrum, while phytoplankton species can be represented individually. The mechanistic basis of the model is a shift in the zooplankton community from carnivory to omnivory to herbivory as phytoplankton density increases. I discuss two limiting approximations in some detail, and fit both to data from the IronEx II experiment. The first limiting case represents a community with no grazer-based interaction among phytoplankton species; this approximation illuminates the general structure of the model. In particular, the zooplankton spectrum can be viewed as the analog of a control rod in a nuclear reactor, which prevents (or fails to prevent) an exponential bloom of phytoplankton. A second, more complex limiting case allows more general interaction of phytoplankton species along a size axis. This latter case would be suitable for describing competition among species with distinct biogeochemical roles, or between species that cause harmful algal blooms and those that do not. The model structure as a whole is therefore simple enough to guide thinking, yet detailed enough to allow quantitative prediction.
Bieri, Michael; d'Auvergne, Edward J; Gooley, Paul R
2011-06-01
Investigation of protein dynamics on the ps-ns and μs-ms timeframes provides detailed insight into the mechanisms of enzymes and the binding properties of proteins. Nuclear magnetic resonance (NMR) is an excellent tool for studying protein dynamics at atomic resolution. Analysis of relaxation data using model-free analysis can be a tedious and time consuming process, which requires good knowledge of scripting procedures. The software relaxGUI was developed for fast and simple model-free analysis and is fully integrated into the software package relax. It is written in Python and uses wxPython to build the graphical user interface (GUI) for maximum performance and multi-platform use. This software allows the analysis of NMR relaxation data with ease and the generation of publication quality graphs as well as color coded images of molecular structures. The interface is designed for simple data analysis and management. The software was tested and validated against the command line version of relax.
NASA Technical Reports Server (NTRS)
Kraft, R. E.; Yu, J.; Kwan, H. W.
1999-01-01
The primary purpose of this study is to develop improved models for the acoustic impedance of treatment panels at high frequencies, for application to subscale treatment designs. Effects that cause significant deviation of the impedance from simple geometric scaling are examined in detail, an improved high-frequency impedance model is developed, and the improved model is correlated with high-frequency impedance measurements. Only single-degree-of-freedom honeycomb sandwich resonator panels with either perforated sheet or "linear" wiremesh faceplates are considered. The objective is to understand those effects that cause the simple single-degree-of-freedom resonator panels to deviate at the higher-scaled frequency from the impedance that would be obtained at the corresponding full-scale frequency. This will allow the subscale panel to be designed to achieve a specified impedance spectrum over at least a limited range of frequencies. An advanced impedance prediction model has been developed that accounts for some of the known effects at high frequency that have previously been ignored as a small source of error for full-scale frequency ranges.
Steady flow model user's guide
NASA Astrophysics Data System (ADS)
Doughty, C.; Hellstrom, G.; Tsang, C. F.; Claesson, J.
1984-07-01
Sophisticated numerical models that solve the coupled mass and energy transport equations for nonisothermal fluid flow in a porous medium have been used to match analytical results and field data for aquifer thermal energy storage (ATES) systems. As an alternative for the ATES problem, the Steady Flow Model (SFM), a simplified but fast numerical model, was developed. A steady, purely radial flow field is prescribed in the aquifer and incorporated into the heat transport equation, which is then solved numerically. While the radial flow assumption limits the range of ATES systems that can be studied using the SFM, it greatly simplifies use of this code. The preparation of input is quite simple compared to that for a sophisticated coupled mass and energy model, and the cost of running the SFM is far lower. The simple flow field allows use of a special calculational mesh that eliminates the numerical dispersion usually associated with the numerical solution of convection problems. The problem is defined, the algorithms used to solve it are outlined, and the input and output for the SFM are described.
Hecht, Steven A
2006-01-01
We used the choice/no-choice methodology in two experiments to examine patterns of strategy selection and execution in groups of undergraduates. Comparisons between choice and no-choice trials revealed three groups. Some participants (good retrievers) were consistently able to use retrieval to solve almost all arithmetic problems. Other participants (perfectionists) successfully used retrieval substantially less often in choice-allowed trials than when strategy choices were prohibited. Not-so-good retrievers retrieved correct answers less often than the other participants in both the choice-allowed and no-choice conditions. No group differences emerged with respect to time needed to search and access answers from long-term memory; however, not-so-good retrievers were consistently slower than the other subgroups at executing fact-retrieval processes that are peripheral to memory search and access. Theoretical models of simple arithmetic, such as the Strategy Choice and Discovery Simulation (Shrager & Siegler, 1998), should be updated to include the existence of both perfectionist and not-so-good retriever adults.
Testing stellar evolution models with detached eclipsing binaries
NASA Astrophysics Data System (ADS)
Higl, J.; Weiss, A.
2017-12-01
Stellar evolution codes, as all other numerical tools, need to be verified. Among the standard stellar objects that allow stringent tests of stellar evolution theory and models are detached eclipsing binaries. We have used 19 such objects to test our stellar evolution code, in order to see whether standard methods and assumptions suffice to reproduce the observed global properties. In this paper we concentrate on three effects that each carry a specific uncertainty: atomic diffusion as used for standard solar model calculations, overshooting from convective regions, and a simple model for the effect of stellar spots on stellar radius, which is one of the possible solutions for the radius problem of M dwarfs. We find that in general old systems need diffusion to allow for, or at least improve, an acceptable fit, and that systems with convective cores indeed need overshooting. Only one system (AI Phe) requires its absence for a successful fit. To match stellar radii for very low-mass stars, the spot model proved to be an effective approach, but depending on model details, it requires a high percentage of the surface to be covered by spots. We briefly discuss improvements needed to further reduce the freedom in modelling and to allow an even more restrictive test by using these objects.
Accidental inflation from Kähler uplifting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ben-Dayan, Ido; Westphal, Alexander; Wieck, Clemens
2014-03-01
We analyze the possibility of realizing inflation with a subsequent dS vacuum in the Kähler uplifting scenario. The inclusion of several quantum corrections to the 4d effective action evades previous no-go theorems and allows for the construction of simple and successful models of string inflation. The predictions of several benchmark models are in accord with current observations, i.e., a red spectral index, negligible non-gaussianity, and spectral distortions similar to the simplest models of inflation. A particularly interesting subclass of models are "left-rolling" ones, where the overall volume of the compactified dimensions shrinks during inflation. We call this phenomenon "inflation by deflation" (IBD), where deflation refers to the internal manifold. This subclass has the appealing features of being insensitive to initial conditions, avoiding the overshooting problem, and allowing for an observable running α ∼ 0.012 and an enhanced tensor-to-scalar ratio r ∼ 10^-5. The latter results differ significantly from many string inflation models.
Variational Approach in the Theory of Liquid-Crystal State
NASA Astrophysics Data System (ADS)
Gevorkyan, E. V.
2018-03-01
The variational calculus of Leonhard Euler is a basis for modern mathematics and theoretical physics. The efficiency of the variational approach in the statistical theory of the liquid-crystal state, and more generally in condensed-state theory, is shown. The developed approach in particular allows us to correctly introduce effective pair interactions and to optimize simple models of liquid crystals with the help of realistic intermolecular potentials.
Revising Hydrology of a Land Surface Model
NASA Astrophysics Data System (ADS)
Le Vine, Nataliya; Butler, Adrian; McIntyre, Neil; Jackson, Christopher
2015-04-01
Land Surface Models (LSMs) are key elements in guiding adaptation to the changing water cycle and the starting points to develop a global hyper-resolution model of the terrestrial water, energy and biogeochemical cycles. However, before this potential is realised, there are some fundamental limitations of LSMs related to how meaningfully hydrological fluxes and stores are represented. An important limitation is the simplistic or non-existent representation of the deep subsurface in LSMs; and another is the lack of connection of LSM parameterisations to relevant hydrological information. In this context, the paper uses a case study of the JULES (Joint UK Land Environmental Simulator) LSM applied to the Kennet region in Southern England. The paper explores the assumptions behind JULES hydrology, adapts the model structure and optimises the coupling with the ZOOMQ3D regional groundwater model. The analysis illustrates how three types of information can be used to improve the model's hydrology: a) observations, b) regionalized information, and c) information from an independent physics-based model. It is found that: 1) coupling to the groundwater model allows realistic simulation of streamflows; 2) a simple dynamic lower boundary improves upon JULES' stationary unit gradient condition; 3) a 1D vertical flow in the unsaturated zone is sufficient; however there is benefit in introducing a simple dual soil moisture retention curve; 4) regionalized information can be used to describe soil spatial heterogeneity. It is concluded that relatively simple refinements to the hydrology of JULES and its parameterisation method can provide a substantial step forward in realising its potential as a high-resolution multi-purpose model.
NASA Astrophysics Data System (ADS)
de Villiers, Jason; Jermy, Robert; Nicolls, Fred
2014-06-01
This paper presents a system to determine the photogrammetric parameters of a camera. The lens distortion, focal length and camera six degree of freedom (DOF) position are calculated. The system caters for cameras of different sensitivity spectra and fields of view without any mechanical modifications. The distortion characterization, a variant of Brown's classic plumb line method, allows for many radial and tangential distortion coefficients and finds the optimal principal point. Typical values are 5 radial and 3 tangential coefficients. These parameters are determined stably and demonstrably produce superior results to low-order models, despite popular and prevalent misconceptions to the contrary. The system produces coefficients to model both the distorted to undistorted pixel coordinate transformation (e.g. for target designation) and the inverse transformation (e.g. for image stitching and fusion), allowing deterministic rates far exceeding real time. The focal length is determined to minimise the error in absolute photogrammetric positional measurement for both multi-camera and monocular (e.g. helmet tracker) systems. The system determines the 6 DOF position of the camera in a chosen coordinate system. It can also determine the 6 DOF offset of the camera relative to its mechanical mount. This allows faulty cameras to be replaced without requiring a recalibration of the entire system (such as an aircraft cockpit). Results from two simple applications of the calibration results are presented: stitching and fusion of the images from a dual-band visual/LWIR camera array, and a simple laboratory optical helmet tracker.
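The distortion parameterisation referred to here is the classic Brown radial-plus-tangential polynomial. A minimal forward-distortion sketch is given below with two radial and two tangential coefficients and an arbitrary principal point; the paper's system typically fits higher orders (around 5 radial and 3 tangential terms) and also solves the inverse mapping, which is not shown.

```python
import numpy as np

def distort(xy, k, p, cxy=(0.0, 0.0)):
    """Apply a Brown-style radial/tangential distortion to normalised
    image coordinates.

    xy  : (N, 2) undistorted points
    k   : radial coefficients (k1, k2, ...)
    p   : tangential coefficients (p1, p2)
    cxy : principal point (distortion centre)
    """
    x = xy[:, 0] - cxy[0]
    y = xy[:, 1] - cxy[1]
    r2 = x * x + y * y
    radial = np.ones_like(r2)
    for i, ki in enumerate(k, start=1):
        radial += ki * r2 ** i                     # 1 + k1*r^2 + k2*r^4 + ...
    xt = 2 * p[0] * x * y + p[1] * (r2 + 2 * x * x)
    yt = p[0] * (r2 + 2 * y * y) + 2 * p[1] * x * y
    xd = x * radial + xt + cxy[0]
    yd = y * radial + yt + cxy[1]
    return np.column_stack([xd, yd])

# Illustrative coefficients only.
pts = np.array([[0.1, 0.2], [0.4, -0.3], [-0.5, 0.5]])
print(distort(pts, k=(-0.2, 0.05), p=(1e-3, -5e-4)))
```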
A simple shear limited, single size, time dependent flocculation model
NASA Astrophysics Data System (ADS)
Kuprenas, R.; Tran, D. A.; Strom, K.
2017-12-01
This research focuses on the modeling of flocculation of cohesive sediment due to turbulent shear, specifically investigating the dependency of flocculation on the concentration of cohesive sediment. Flocculation is important in larger sediment transport models as cohesive particles can create aggregates which are orders of magnitude larger than their unflocculated state. As the settling velocity of each particle is determined by the sediment size, density, and shape, accounting for this aggregation is important in determining where the sediment is deposited. This study provides a new formulation for flocculation of cohesive sediment by modifying the Winterwerp (1998) flocculation model (W98) so that it limits floc size to the Kolmogorov micro length scale. The W98 model is a simple approach that calculates the average floc size as a function of time. Because of its simplicity, the W98 model is ideal for implementing into larger sediment transport models; however, the model tends to over predict the dependency of the floc size on concentration. It was found that modification of the coefficients within the original model did not allow the model to capture the dependency on concentration. Therefore, a new term was added within the breakup kernel of the W98 formulation. The new formulation results in a single-size, shear-limited, and time-dependent flocculation model that is able to effectively capture the dependency of the equilibrium floc size on both suspended sediment concentration and the time to equilibrium. The overall behavior of the new model is explored and shown to align well with other studies on flocculation. Winterwerp, J. C. (1998). A simple model for turbulence induced flocculation of cohesive sediment. Journal of Hydraulic Research, 36(3):309-326.
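The structure of such a single-size balance is a growth term and a breakup term integrated in time, with the floc diameter capped at the Kolmogorov microscale. The kernels below are simple placeholders in the spirit of the W98 family, not the calibrated terms of either the original or the modified model; only the Kolmogorov relation eta = sqrt(nu/G) is standard.

```python
import numpy as np

def floc_size_series(c=0.5, G=20.0, nu=1.0e-6, D0=2.0e-5,
                     ka=0.05, kb=2.0e-5, dt=0.1, t_end=60.0):
    """Sketch of a single-size, time-dependent flocculation balance.

    dD/dt = growth(c, G, D) - breakup(G, D), with the floc diameter capped
    at the Kolmogorov microscale eta = sqrt(nu / G).  The growth and breakup
    terms below are placeholder kernels, NOT the calibrated W98 terms.
    """
    eta = np.sqrt(nu / G)                        # Kolmogorov micro length scale
    times = np.arange(0.0, t_end + dt, dt)
    D = np.empty_like(times)
    D[0] = D0
    for i in range(1, len(times)):
        growth = ka * c * G * D[i - 1]           # aggregation by turbulent shear
        breakup = kb * G ** 1.5 * D[i - 1] ** 2  # shear-induced breakup
        D[i] = min(D[i - 1] + dt * (growth - breakup), eta)
    return times, D

t, D = floc_size_series()
print(f"Kolmogorov limit: {np.sqrt(1e-6 / 20.0) * 1e6:.0f} um, "
      f"final floc size: {D[-1] * 1e6:.1f} um")
```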
A step-by-step solution for embedding user-controlled cines into educational Web pages.
Cornfeld, Daniel
2008-03-01
The objective of this article is to introduce a simple method for embedding user-controlled cines into a Web page using a simple JavaScript. Step-by-step instructions are included and the source code is made available. This technique allows the creation of portable Web pages that allow the user to scroll through cases as if seated at a PACS workstation. A simple JavaScript allows scrollable image stacks to be included on Web pages. With this technique, you can quickly and easily incorporate entire stacks of CT or MR images into online teaching files. This technique has the potential for use in case presentations, online didactics, teaching archives, and resident testing.
Simplicity and efficiency of integrate-and-fire neuron models.
Plesser, Hans E; Diesmann, Markus
2009-02-01
Lovelace and Cios (2008) recently proposed a very simple spiking neuron (VSSN) model for simulations of large neuronal networks as an efficient replacement for the integrate-and-fire neuron model. We argue that the VSSN model falls behind key advances in neuronal network modeling over the past 20 years, in particular, techniques that permit simulators to compute the state of the neuron without repeated summation over the history of input spikes and to integrate the subthreshold dynamics exactly. State-of-the-art solvers for networks of integrate-and-fire model neurons are substantially more efficient than the VSSN simulator and allow routine simulations of networks of some 10(5) neurons and 10(9) connections on moderate computer clusters.
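The efficiency argument rests on propagating the subthreshold state in closed form between events instead of summing over the input history. A minimal leaky integrate-and-fire sketch with delta synapses is shown below; parameter values are illustrative and the exact-integration machinery for richer synapse dynamics is not reproduced.

```python
import math

def lif_exact(input_spikes, dt=0.1, t_end=100.0, tau_m=10.0, r_m=10.0,
              v_th=15.0, v_reset=0.0, i_ext=1.2, w=2.0):
    """Leaky integrate-and-fire neuron advanced with the closed-form solution
    of the subthreshold dynamics between grid points:

        V(t+dt) = V_inf + (V(t) - V_inf) * exp(-dt / tau_m),  V_inf = R * I_ext,

    plus an instantaneous jump of w mV per input spike (delta synapses).
    No repeated summation over the spike history is needed.  All parameter
    values (ms, mV, Mohm, nA) are illustrative.
    """
    decay = math.exp(-dt / tau_m)
    v_inf = r_m * i_ext
    spikes_in = set(round(t / dt) for t in input_spikes)
    v, out = v_reset, []
    for step in range(int(t_end / dt)):
        v = v_inf + (v - v_inf) * decay          # exact subthreshold update
        if step in spikes_in:
            v += w                               # delta-synapse jump
        if v >= v_th:                            # threshold crossing
            out.append(step * dt)
            v = v_reset
    return out

print(lif_exact(input_spikes=[5.0, 6.0, 7.0, 40.0, 41.0]))  # output spike times
```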
Monte Carlo based statistical power analysis for mediation models: methods and software.
Zhang, Zhiyong
2014-12-01
The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
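The proposed procedure is straightforward to mimic outside the bmem package: simulate data from the mediation model, bootstrap the indirect effect in each replication, and count how often the percentile interval excludes zero. The sketch below is a compact Python analogue with normal errors and illustrative settings, not the package's implementation or defaults.

```python
import numpy as np

def mediation_power(n=100, a=0.3, b=0.3, c=0.0, n_rep=200, n_boot=200, seed=1):
    """Monte Carlo power of the percentile-bootstrap test of the indirect
    effect a*b in a simple mediation model
        M = a*X + e1,   Y = c*X + b*M + e2.
    Normal errors are used here for brevity; the method itself also accepts
    non-normal data.  All settings are illustrative.
    """
    rng = np.random.default_rng(seed)

    def ab_hat(x, m, y):
        a_hat = np.polyfit(x, m, 1)[0]                     # slope of M on X
        X2 = np.column_stack([np.ones_like(x), x, m])
        b_hat = np.linalg.lstsq(X2, y, rcond=None)[0][2]   # slope of Y on M
        return a_hat * b_hat

    hits = 0
    for _ in range(n_rep):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = c * x + b * m + rng.normal(size=n)
        boots = np.empty(n_boot)
        for k in range(n_boot):
            idx = rng.integers(0, n, n)                    # resample rows
            boots[k] = ab_hat(x[idx], m[idx], y[idx])
        lo, hi = np.percentile(boots, [2.5, 97.5])
        hits += (lo > 0) or (hi < 0)                       # CI excludes zero
    return hits / n_rep

print(mediation_power())   # estimated power to detect the indirect effect
```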
Analysis of lithology: Vegetation mixes in multispectral images
NASA Technical Reports Server (NTRS)
Adams, J. B.; Smith, M.; Adams, J. D.
1982-01-01
Discrimination and identification of lithologies from multispectral images is discussed. Rock/soil identification can be facilitated by removing the component of the signal in the images that is contributed by the vegetation. Mixing models were developed to predict the spectra of combinations of pure end members, and those models were refined using laboratory measurements of real mixtures. Models in use include a simple linear (checkerboard) mix, granular mixing, semi-transparent coatings, and combinations of the above. The use of interactive computer techniques that allow quick comparison of the spectrum of a pixel stack (in a multiband set) with laboratory spectra is discussed.
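The simple linear (checkerboard) mixing model amounts to expressing a pixel spectrum as a weighted sum of end-member spectra and solving for the weights, after which the vegetation fraction can be estimated and removed. The least-squares sketch below uses invented four-band spectra and does not enforce non-negativity, unlike operational unmixing.

```python
import numpy as np

def unmix(pixel_spectrum, endmembers):
    """Linear mixing model: solve for the end-member fractions that best
    explain the pixel, then normalise the fractions to sum to one.
    Non-negativity is not enforced in this minimal sketch.
    """
    E = np.asarray(endmembers, dtype=float).T           # bands x endmembers
    f, *_ = np.linalg.lstsq(E, np.asarray(pixel_spectrum, dtype=float),
                            rcond=None)
    return f / f.sum()

# Illustrative 4-band reflectance spectra for rock, soil and vegetation.
rock = [0.30, 0.32, 0.35, 0.36]
soil = [0.20, 0.25, 0.30, 0.40]
veg  = [0.05, 0.08, 0.06, 0.45]
pixel = 0.6 * np.array(rock) + 0.1 * np.array(soil) + 0.3 * np.array(veg)
print(unmix(pixel, [rock, soil, veg]).round(2))          # ~ [0.6, 0.1, 0.3]
```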
Cluster kinetics model of particle separation in vibrated granular media.
McCoy, Benjamin J; Madras, Giridhar
2006-01-01
We model the Brazil-nut effect (BNE) by hypothesizing that granules form clusters that fragment and aggregate. This provides a heterogeneous medium in which the immersed intruder particle rises (BNE) or sinks (reverse BNE) according to relative convection currents and buoyant and drag forces. A simple relationship proposed for viscous drag in terms of the vibrational intensity and the particle to grain density ratio allows simulation of published experimental data for rise and sink times as functions of particle radius, initial depth of the particle, and particle-grain density ratio. The proposed model correctly describes the experimentally observed maximum in risetime.
A Mueller matrix model of Haidinger's brushes.
Misson, Gary P
2003-09-01
Stokes vectors and Mueller matrices are used to model the polarisation properties (birefringence, dichroism and depolarisation) of any optical system, in particular the human eye. An explanation of the form and behaviour of the entoptic phenomenon of Haidinger's brushes is derived that complements and expands upon a previous study. The relationship between the appearance of Haidinger's brushes and intrinsic ocular retardation is quantified and the model allows prediction of the effect of any retarder of any orientation placed between a source of polarised light and the eye. The simple relationship of minimum contrast of Haidinger's brushes to the cosine of total retardation is derived.
Stressed Oxidation Life Prediction for C/SiC Composites
NASA Technical Reports Server (NTRS)
Levine, Stanley R.
2004-01-01
The residual strength and life of C/SiC is dominated by carbon interface and fiber oxidation if seal coat and matrix cracks are open to allow oxygen ingress. Crack opening is determined by the combination of thermal, mechanical and thermal expansion mismatch induced stresses. When cracks are open, life can be predicted by simple oxidation based models with reaction controlled kinetics at low temperature, and by gas phase diffusion controlled kinetics at high temperatures. Key life governing variables in these models include temperature, stress, initial strength, oxygen partial pressure, and total pressure. These models are described in this paper.
On Global Optimal Sailplane Flight Strategy
NASA Technical Reports Server (NTRS)
Sander, G. J.; Litt, F. X.
1979-01-01
The derivation and interpretation of the necessary conditions that a sailplane cross-country flight has to satisfy to achieve the maximum global flight speed is considered. Simple rules are obtained for two specific meteorological models. The first one uses concentrated lifts of various strengths at unequal distances. The second one takes into account finite, nonuniform space amplitudes for the lifts and therefore allows for dolphin-style flight. In both models, altitude constraints consisting of upper and lower limits are shown to be essential to model realistic problems. Numerical examples illustrate the difference with existing techniques based on local optimality conditions.
Prediction of power requirements for a longwall armored face conveyor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broadfoot, A.R.; Betz, R.E.
1995-12-31
Longwall armored face conveyors (AFCs) have traditionally been designed using a combination of heuristics and simple models. However, as longwalls increase in length these design procedures are proving to be inadequate. The result has either been costly loss of production due to AFC stalling or component failure, or larger than necessary capital investment due to overdesign. In order to allow accurate estimation of the power requirements for an AFC this paper develops a comprehensive model of all the friction forces associated with the AFC. Power requirement predictions obtained from these models are then compared with measurements from two mine faces.
A simplified lumped model for the optimization of post-buckled beam architecture wideband generator
NASA Astrophysics Data System (ADS)
Liu, Weiqun; Formosa, Fabien; Badel, Adrien; Hu, Guangdi
2017-11-01
Buckled beam structures are a classical kind of bistable energy harvester which attracts more and more interest because of their capability to scavenge energy over a large frequency band in comparison with linear generators. The usual modeling approach uses the Galerkin mode discretization method with relatively high complexity, while the simplification to a single-mode solution lacks accuracy. The design rests on optimizing the features of the energy potential in order to finally define the physical and geometrical parameters. Therefore, in this paper, a simple lumped model with an explicit relationship between the potential shape and the parameters is proposed to allow efficient design of bistable-beam-based generators. The accuracy of the approximate model is studied and the effectiveness of its application analyzed. Moreover, an important fact is found: with a low buckling level and the sectional area determined, the bending stiffness has little influence on the potential shape. This feature extends the applicable range of the model by permitting designs with a high moment of inertia. Numerical investigations demonstrate that the proposed model is a simple and reliable tool for design. An optimization example using the proposed model is demonstrated with satisfactory performance.
NASA Astrophysics Data System (ADS)
Milecki, Andrzej; Pelic, Marcin
2016-10-01
This paper presents the results of studies of an application of a new method of modelling piezo bender actuators. A special hysteresis simulation model was developed and is presented. The model is based on a geometrical deformation of the main hysteresis loop. The piezoelectric effect is described and the history of hysteresis modelling is briefly reviewed. Firstly, a simple model for main loop modelling is proposed. Then, a geometrical description of the non-saturated hysteresis is presented and its modelling method is introduced. The modelling makes use of functions describing the geometrical shape of the two main hysteresis curves, which can be defined theoretically or obtained by measurement. These main curves are stored in memory and transformed geometrically in order to obtain the minor curves. Such a model was prepared in the Matlab-Simulink software, but can be easily implemented using any programming language and applied in an on-line controller. In comparison to other known simulation methods, the one presented in the paper is easy to understand and uses simple arithmetical equations, allowing the inverse model of the hysteresis to be obtained quickly. The inverse model was then used for compensation of the non-saturated hysteresis of the piezo bender actuator, and the results are also presented in the paper.
NASA Astrophysics Data System (ADS)
Dehotin, Judicaël; Breil, Pascal; Braud, Isabelle; de Lavenne, Alban; Lagouy, Mickaël; Sarrazin, Benoît
2015-06-01
Surface runoff is one of the hydrological processes involved in floods, pollution transfer, soil erosion and mudslides. Many models allow the simulation and mapping of surface runoff and erosion hazards. Field observations of this hydrological process are not common, although they are crucial to evaluate surface runoff models and to investigate or assess different kinds of hazards linked to this process. In this study, a simple field monitoring network is implemented to assess the relevance of a surface runoff susceptibility mapping method. The network is based on spatially distributed observations (nine different locations in the catchment) of soil water content and rainfall events. These data are analyzed to determine if surface runoff occurs. Two surface runoff mechanisms are considered: surface runoff by saturation of the soil surface horizon and surface runoff by infiltration excess (also called Hortonian runoff). The monitoring strategy includes continuous records of soil surface water content and rainfall with a 5 min time step. Soil infiltration capacity time series are calculated using field soil water content and in situ measurements of soil hydraulic conductivity. Comparison of soil infiltration capacity and rainfall intensity time series allows detecting the occurrence of surface runoff by infiltration excess. Comparison of surface soil water content with saturated water content values allows detecting the occurrence of surface runoff by saturation of the soil surface horizon. Automatic records were complemented with direct field observations of surface runoff in the experimental catchment after each significant rainfall event. The presented observation method allows the identification of fast and short-lived surface runoff processes at fine spatial and temporal resolution in natural conditions. The results also highlight the relationship between surface runoff and factors usually integrated in surface runoff mapping, such as topography, rainfall parameters, soil or land cover. This study opens interesting prospects for the use of spatially distributed measurements for surface runoff detection, and for the implementation and validation of spatially distributed hydrological models, at a reasonable cost.
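The detection logic described here reduces to two comparisons per record: rainfall intensity against the current infiltration capacity, and surface water content against saturation. A minimal sketch of that classification is given below; the tolerance and the example records are illustrative, not values from the study.

```python
def classify_runoff(rain_intensity, infil_capacity, theta, theta_sat, eps=0.01):
    """Classify the runoff mechanism for one 5-min record.

    - infiltration-excess (Hortonian) runoff when rainfall intensity exceeds
      the current soil infiltration capacity;
    - saturation-excess runoff when the surface soil water content reaches
      its saturated value (within a tolerance eps).
    Thresholding details (eps) are illustrative, not those of the study.
    """
    if rain_intensity > infil_capacity:
        return "infiltration-excess runoff"
    if theta >= theta_sat - eps:
        return "saturation-excess runoff"
    return "no surface runoff"

# Illustrative 5-min records: (rain mm/h, infiltration capacity mm/h,
# theta, theta_sat)
records = [(12.0, 30.0, 0.250, 0.42),
           (45.0, 30.0, 0.300, 0.42),
           (8.0, 30.0, 0.415, 0.42)]
for rec in records:
    print(rec, "->", classify_runoff(*rec))
```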
NASA Astrophysics Data System (ADS)
Chouika, N.; Mezrag, C.; Moutarde, H.; Rodríguez-Quintero, J.
2018-05-01
A systematic approach for the model building of Generalized Parton Distributions (GPDs), based on their overlap representation within the DGLAP kinematic region and a further covariant extension to the ERBL one, is applied to the valence-quark pion's case, using light-front wave functions inspired by the Nakanishi representation of the pion Bethe-Salpeter amplitudes (BSA). This simple but fruitful pion GPD model illustrates the general model building technique and, in addition, allows for the ambiguities related to the covariant extension, grounded on the Double Distribution (DD) representation, to be constrained by requiring a soft-pion theorem to be properly observed.
Space debris characterization in support of a satellite breakup model
NASA Technical Reports Server (NTRS)
Fortson, Bryan H.; Winter, James E.; Allahdadi, Firooz A.
1992-01-01
The Space Kinetic Impact and Debris Branch began an ambitious program to construct a fully analytical model of the breakup of a satellite under hypervelocity impact. In order to provide empirical data with which to substantiate the model, debris from hypervelocity experiments conducted in a controlled laboratory environment was characterized to provide information on its mass, velocity, and ballistic coefficient distributions. Data on the debris were collected in one master data file, and a simple FORTRAN program allows users to describe the debris from any subset of these experiments that may be of interest to them. A statistical analysis was performed, allowing users to determine the precision of the velocity measurements for the data. Attempts are being made to include and correlate other laboratory data, as well as data obtained from the explosion or collision of spacecraft in low earth orbit.
A logical foundation for representation of clinical data.
Campbell, K E; Das, A K; Musen, M A
1994-01-01
OBJECTIVE: A general framework for representation of clinical data that provides a declarative semantics of terms and that allows developers to define explicitly the relationships among both terms and combinations of terms. DESIGN: Use of conceptual graphs as a standard representation of logic and of an existing standardized vocabulary, the Systematized Nomenclature of Medicine (SNOMED International), for lexical elements. Concepts such as time, anatomy, and uncertainty must be modeled explicitly in a way that allows relation of these foundational concepts to surface-level clinical descriptions in a uniform manner. RESULTS: The proposed framework was used to model a simple radiology report, which included temporal references. CONCLUSION: Formal logic provides a framework for formalizing the representation of medical concepts. Actual implementations will be required to evaluate the practicality of this approach. PMID:7719805
Hsieh, Chih-Chen; Jain, Semant; Larson, Ronald G
2006-01-28
A very stiff finitely extensible nonlinear elastic (FENE)-Fraenkel spring is proposed to replace the rigid rod in the bead-rod model. This allows the adoption of a fast predictor-corrector method so that large time steps can be taken in Brownian dynamics (BD) simulations without over- or understretching the stiff springs. In contrast to the simple bead-rod model, BD simulations with beads and FENE-Fraenkel (FF) springs yield a random-walk configuration at equilibrium. We compare the simulation results of the free-draining bead-FF-spring model with those for the bead-rod model in relaxation, start-up of uniaxial extensional, and simple shear flows, and find that both methods generate nearly identical results. The computational cost per time step for a free-draining BD simulation with the proposed bead-FF-spring model is about twice as high as the traditional bead-rod model with the midpoint algorithm of Liu [J. Chem. Phys. 90, 5826 (1989)]. Nevertheless, computations with the bead-FF-spring model are as efficient as those with the bead-rod model in extensional flow because the former allows larger time steps. Moreover, the Brownian contribution to the stress for the bead-FF-spring model is isotropic and therefore simplifies the calculation of the polymer stresses. In addition, hydrodynamic interaction can more easily be incorporated into the bead-FF-spring model than into the bead-rod model since the metric force arising from the non-Cartesian coordinates used in bead-rod simulations is absent from bead-spring simulations. Finally, with our newly developed bead-FF-spring model, existing computer codes for the bead-spring models can trivially be converted to ones for effective bead-rod simulations merely by replacing the usual FENE or Cohen spring law with a FENE-Fraenkel law, and this convertibility provides a very convenient way to perform multiscale BD simulations.
Understanding hind limb lameness signs in horses using simple rigid body mechanics.
Starke, S D; May, S A; Pfau, T
2015-09-18
Hind limb lameness detection in horses relies on the identification of movement asymmetry, which can be based on multiple pelvic landmarks. This study explains the poorly understood relationship between hind limb lameness pointers, related to the tubera coxae and sacrum, based on experimental data in the context of a simple rigid body model. Vertical displacement of the tubera coxae and sacrum was quantified experimentally in 107 horses with varying lameness degrees. A geometrical rigid-body model of pelvis movement during lameness was created in Matlab. Several asymmetry measures were calculated and contrasted. Results showed that model predictions for tubera coxae asymmetry during lameness matched experimental observations closely. Asymmetry for sacrum and comparative tubera coxae movement showed a strong association both empirically (R² ≥ 0.92) and theoretically. We did not find empirical or theoretical evidence for a systematic, pronounced adaptation in the pelvic rotation pattern with increasing lameness. The model showed that the overall range of movement between tubera coxae does not allow the appreciation of asymmetry changes beyond mild lameness. When evaluating movement relative to the stride cycle, we did find empirical evidence for asymmetry being slightly more visible when comparing tubera coxae amplitudes rather than sacrum amplitudes, although variation exists for mild lameness. In conclusion, the rigidity of the equine pelvis results in tightly linked movement trajectories of different pelvic landmarks. The model allows empirical observations to be explained in the context of the underlying mechanics, helping to identify potentially limited assessment choices when evaluating gait.
[A simple and efficient method for establishing a mouse model of orthotopic MB49 bladder cancer].
Liang, Zhong-kun; Zhang, Lin; Hu, Zhi-ming; Chen, Zhong; Huang, Xin; Shi, Xiang-hua; Tan, Wan-long; Gao, Ji-min
2009-04-01
To establish a simple and efficient method for establishing a mouse model of orthotopic superficial bladder cancer, C57BL/6 mice were anesthetized with sodium pentobarbital and catheterized with a modified IV catheter (24 G). The mice were intravesically pretreated with HCl and then with NaOH, and after washing the bladders with phosphate-buffered saline (PBS), 100 μl (1 × 10⁷) of MB49 cells were infused and allowed to incubate in the bladder for 2 h, followed by intravesical mitomycin C (MMC) administration. The tumor formation rate, survival, gross hematuria, and bladder weight were determined as the outcome variables, and the pathology of the bladders was observed. Instillation of MB49 tumor cells resulted in a tumor formation rate of 100% in all the pretreated groups versus 0% in the control group without pretreatment. MMC significantly reduced the bladder weight as compared to PBS. We have successfully established a stable, reproducible, and reliable orthotopic bladder cancer model in mice.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gavignet, A.A.; Wick, C.J.
In current practice, pressure drops in the mud circulating system and the settling velocity of cuttings are calculated with simple rheological models and simple equations. Wellsite computers now allow more sophistication in drilling computations. In this paper, experimental results on the settling velocity of spheres in drilling fluids are reported, along with rheograms done over a wide range of shear rates. The flow curves are fitted to polynomials and general methods are developed to predict friction losses and settling velocities as functions of the polynomial coefficients. These methods were incorporated in a software package that can handle any rig configuration system, including riser booster. Graphic displays show the effect of each parameter on the performance of the circulating system.
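A minimal, hedged sketch of the polynomial flow-curve idea: fit a rheogram with a polynomial and evaluate the apparent viscosity at a given shear rate. The data and the degree of the fit are invented, and the authors' actual friction-loss and settling-velocity correlations are not reproduced here.

```python
import numpy as np

# Hypothetical rheogram of a drilling fluid: shear rate (1/s) vs shear stress (Pa).
shear_rate   = np.array([5.1, 10.2, 170.0, 340.0, 511.0, 1022.0])
shear_stress = np.array([3.2, 4.1, 14.5, 22.0, 28.3, 43.6])

# Fit the flow curve with a polynomial (3rd order here) instead of assuming a
# simple Bingham or power-law model.
coeffs = np.polyfit(shear_rate, shear_stress, deg=3)
flow_curve = np.poly1d(coeffs)

def apparent_viscosity(gamma_dot):
    """Apparent viscosity (Pa.s) at a given shear rate from the fitted polynomial."""
    return flow_curve(gamma_dot) / gamma_dot

# The apparent viscosity at the shear rate prevailing in the annulus could then
# feed a friction-loss or settling-velocity correlation (not reproduced here).
print(apparent_viscosity(300.0))
```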
Ground temperature measurement by PRT-5 for MAPS experiment
NASA Technical Reports Server (NTRS)
Gupta, S. K.; Tiwari, S. N.
1978-01-01
A simple algorithm and computer program were developed for determining the actual surface temperature from the effective brightness temperature as measured remotely by a radiation thermometer called PRT-5. This procedure allows the computation of atmospheric correction to the effective brightness temperature without performing detailed radiative transfer calculations. Model radiative transfer calculations were performed to compute atmospheric corrections for several values of the surface and atmospheric parameters individually and in combination. Polynomial regressions were performed between the magnitudes or deviations of these parameters and the corresponding computed corrections to establish simple analytical relations between them. Analytical relations were also developed to represent combined correction for simultaneous variation of parameters in terms of their individual corrections.
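The regression step can be pictured with a small sketch: fit a low-order polynomial between a parameter deviation and the corresponding computed correction, then apply it to a measured brightness temperature. The parameter, the values and the polynomial order are placeholders, not those of the PRT-5 procedure.

```python
import numpy as np

# Hypothetical training set from model radiative-transfer runs: deviation of a
# single atmospheric parameter (e.g. column water vapour, g/cm^2) from its
# reference value, and the computed correction (K) to the brightness temperature.
dw = np.array([-1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0])
correction = np.array([-0.9, -0.5, 0.0, 0.6, 1.3, 2.1, 3.0])

# Fit a low-order polynomial relating the correction to the parameter deviation.
p = np.poly1d(np.polyfit(dw, correction, deg=2))

def surface_temperature(t_brightness, dw_obs):
    """Actual surface temperature = measured brightness temperature + correction,
    with the correction taken from the fitted analytical relation."""
    return t_brightness + p(dw_obs)

print(surface_temperature(288.4, dw_obs=0.8))
```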
NASA Astrophysics Data System (ADS)
Durand-Smet, P.; Gauquelin, E.; Chastrette, N.; Boudaoud, A.; Asnacios, A.
2017-10-01
While plant growth is well known to rely on turgor pressure, it is challenging to quantify the contribution of turgor pressure to plant cell rheology. Here we used a custom-made micro-rheometer to quantify the viscoelastic behavior of isolated plant cells while varying their internal turgor pressure. To get insight into how plant cells adapt their internal pressure to the osmolarity of their medium, we compared the mechanical behavior of single plant cells to that of a simple, passive, pressurized shell: a soccer ball. While both systems exhibited the same qualitative behavior, a simple mechanical model allowed us to quantify turgor pressure regulation at the single cell scale.
Simple heuristics and rules of thumb: where psychologists and behavioural biologists might meet.
Hutchinson, John M C; Gigerenzer, Gerd
2005-05-31
The Centre for Adaptive Behaviour and Cognition (ABC) has hypothesised that much human decision-making can be described by simple algorithmic process models (heuristics). This paper explains this approach and relates it to research in biology on rules of thumb, which we also review. As an example of a simple heuristic, consider the lexicographic strategy of Take The Best for choosing between two alternatives: cues are searched in turn until one discriminates, then search stops and all other cues are ignored. Heuristics consist of building blocks, and building blocks exploit evolved or learned abilities such as recognition memory; it is the complexity of these abilities that allows the heuristics to be simple. Simple heuristics have an advantage in making decisions fast and with little information, and in avoiding overfitting. Furthermore, humans are observed to use simple heuristics. Simulations show that the statistical structures of different environments affect which heuristics perform better, a relationship referred to as ecological rationality. We contrast ecological rationality with the stronger claim of adaptation. Rules of thumb from biology provide clearer examples of adaptation because animals can be studied in the environments in which they evolved. The range of examples is also much more diverse. To investigate them, biologists have sometimes used similar simulation techniques to ABC, but many examples depend on empirically driven approaches. ABC's theoretical framework can be useful in connecting some of these examples, particularly the scattered literature on how information from different cues is integrated. Optimality modelling is usually used to explain less detailed aspects of behaviour but might more often be redirected to investigate rules of thumb.
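A minimal Python sketch of the lexicographic Take The Best rule described above; the cue order, cue values and option names are invented purely for illustration.

```python
def take_the_best(option_a, option_b, cues):
    """Lexicographic Take The Best: go through cues in order of (assumed known)
    validity; the first cue that discriminates decides, and all remaining cues
    are ignored. Returns the chosen option, or None if no cue discriminates."""
    for cue in cues:
        va, vb = cue(option_a), cue(option_b)
        if va != vb:
            return option_a if va > vb else option_b
    return None

# Toy example: which of two cities is larger? Cues (ordered by validity) are
# binary indicators; the data are invented for illustration only.
cities = {
    "A": {"recognised": 1, "capital": 0, "has_airport": 1},
    "B": {"recognised": 1, "capital": 1, "has_airport": 0},
}
cues = [lambda c: cities[c]["recognised"],
        lambda c: cities[c]["capital"],
        lambda c: cities[c]["has_airport"]]

print(take_the_best("A", "B", cues))   # "B": the first discriminating cue decides
```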
Rodriguez-Saona, L E; Koca, N; Harper, W J; Alvarez, V B
2006-05-01
There is a need for rapid and simple techniques that can be used to predict the quality of cheese. The aim of this research was to develop a simple and rapid screening tool for monitoring Swiss cheese composition by using Fourier transform infrared spectroscopy. Twenty Swiss cheese samples from different manufacturers and degrees of maturity were evaluated. Direct measurements of Swiss cheese slices (approximately 0.5 g) were made using a MIRacle 3-reflection diamond attenuated total reflectance (ATR) accessory. Reference methods for moisture (vacuum oven), protein content (Kjeldahl), and fat (Babcock) were used. Calibration models were developed based on cross-validated (leave-one-out approach) partial least squares regression. The information-rich infrared spectral range for Swiss cheese samples was from 3,000 to 2,800 cm⁻¹ and 1,800 to 900 cm⁻¹. The performance statistics for the cross-validated models gave estimates of the standard error of cross-validation of 0.45, 0.25, and 0.21% for moisture, protein, and fat, respectively, and correlation coefficients r > 0.96. Furthermore, the ATR infrared protocol allowed for the classification of cheeses according to manufacturer and aging based on unique spectral information, especially of carbonyl groups, probably due to their distinctive lipid composition. Attenuated total reflectance infrared spectroscopy allowed for the rapid (approximately 3-min analysis time) and accurate analysis of the composition of Swiss cheese. This technique could contribute to the development of simple and rapid protocols for monitoring complex biochemical changes, and predicting the final quality of the cheese.
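For readers who want to reproduce the general workflow (not the study's data), a hedged sketch of a leave-one-out cross-validated PLS calibration with scikit-learn might look as follows; the spectra and reference values below are random placeholders, and the number of latent variables is an arbitrary choice.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# X: ATR-FTIR absorbances restricted to the information-rich ranges
# (3,000-2,800 and 1,800-900 cm^-1); y: reference moisture (vacuum oven).
# Both arrays are placeholders here, not the study's data.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 300))                  # 20 cheese samples x 300 wavenumbers
y = rng.normal(loc=38.0, scale=1.0, size=20)    # reference moisture, %

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()

# Standard error of cross-validation and correlation, as reported in the abstract.
secv = np.sqrt(np.mean((y - y_cv) ** 2))
r = np.corrcoef(y, y_cv)[0, 1]
print(f"SECV = {secv:.2f} %, r = {r:.2f}")
```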
Falter, Christian; Ellinger, Dorothea; von Hülsen, Behrend; Heim, René; Voigt, Christian A.
2015-01-01
The outwardly directed cell wall and associated plasma membrane of epidermal cells represent the first layers of plant defense against intruding pathogens. Cell wall modifications and the formation of defense structures at sites of attempted pathogen penetration are decisive for plant defense. A precise isolation of these stress-induced structures would allow a specific analysis of regulatory mechanisms and cell wall adaptation. However, methods for large-scale epidermal tissue preparation from the model plant Arabidopsis thaliana, which would allow proteome and cell wall analysis of complete, laser-microdissected epidermal defense structures, have not been provided. We developed the adhesive tape – liquid cover glass technique (ACT) for simple leaf epidermis preparation from A. thaliana, which is also applicable to grass leaves. This method is compatible with subsequent staining techniques to visualize stress-related cell wall structures, which were precisely isolated from the epidermal tissue layer by laser microdissection (LM) coupled to laser pressure catapulting. We successfully demonstrated that these specific epidermal tissue samples could be used for quantitative downstream proteome and cell wall analysis. The development of the ACT for simple leaf epidermis preparation and its compatibility with LM and downstream quantitative analysis open new possibilities in the precise examination of stress- and pathogen-related cell wall structures in epidermal cells. Because the developed tissue processing is applicable to the well-established A. thaliana model pathosystems, including the interaction with powdery mildews, these can be studied to determine principal regulatory mechanisms in plant–microbe interaction, with their potential outreach into crop breeding. PMID:25870605
Harpold, Adrian A.; Burns, Douglas A.; Walter, M.T.; Steenhuis, Tammo S.
2013-01-01
Describing the distribution of aquatic habitats and the health of biological communities can be costly and time-consuming; therefore, simple, inexpensive methods to scale observations of aquatic biota to watersheds that lack data would be useful. In this study, we explored the potential of a simple “hydrogeomorphic” model to predict the effects of acid deposition on macroinvertebrate, fish, and diatom communities in 28 sub-watersheds of the 176-km² Neversink River basin in the Catskill Mountains of New York State. The empirical model was originally developed to predict stream-water acid neutralizing capacity (ANC) using the watershed slope and drainage density. Because ANC is known to be strongly related to aquatic biological communities in the Neversink, we speculated that the model might correlate well with biotic indicators of ANC response. The hydrogeomorphic model was strongly correlated with several measures of macroinvertebrate and fish community richness and density, but less strongly correlated with diatom acid tolerance. The model was also strongly correlated with biological communities in 18 sub-watersheds that were independent of the model development, with the linear correlation capturing the strongly acidic nature of small upland watersheds (2). Overall, we demonstrated the applicability of geospatial data sets and a simple hydrogeomorphic model for estimating aquatic biological communities in areas with stream-water acidification, allowing estimates where no direct field observations are available. Similar modeling approaches have the potential to complement or refine expensive and time-consuming measurements of aquatic biota populations and to aid in regional assessments of aquatic health.
A simple method for long-term biliary access in large animals.
Andrews, J C; Knutsen, C; Smith, P; Prieskorn, D; Crudip, J; Klevering, J; Ensminger, W D
1988-07-01
A simple method to obtain long-term access to the biliary tree in dogs and pigs is presented. In ten dogs and four pigs, a cholecystectomy was performed, the cystic duct isolated, and a catheter inserted into the cut end of the cystic duct. The catheter was connected to a subcutaneous infusion port, producing a closed, internal system to allow long-term access. The catheter placement was successful in three of the pigs and all of the dogs. Thirty-five cholangiograms were obtained in the 13 subjects by accessing the port with a 20 gauge Huber needle and injecting small amounts (4-10 mL) of contrast under fluoroscopic control. Cholangiograms were obtained up to four months after catheter placement without evidence for catheter failure or surgically induced changes in the biliary tree. This model provides a simple, reliable means to obtain serial cholangiograms in a research setting.
Reduced modeling of signal transduction – a modular approach
Koschorreck, Markus; Conzelmann, Holger; Ebert, Sybille; Ederer, Michael; Gilles, Ernst Dieter
2007-01-01
Background Combinatorial complexity is a challenging problem in detailed and mechanistic mathematical modeling of signal transduction. This subject has been discussed intensively and a lot of progress has been made within the last few years. A software tool (BioNetGen) was developed which allows an automatic rule-based set-up of mechanistic model equations. In many cases these models can be reduced by an exact domain-oriented lumping technique. However, the resulting models can still consist of a very large number of differential equations. Results We introduce a new reduction technique, which allows modularized and highly reduced models to be built. Compared to existing approaches, further reduction of signal transduction networks is possible. The method also provides a new modularization criterion, which allows the model to be dissected into smaller modules, called layers, that can be modeled independently. Hallmarks of the approach are conservation relations within each layer and connection of layers by signal flows instead of mass flows. The reduced model can be formulated directly without previous generation of detailed model equations. It can be understood and interpreted intuitively, as model variables are macroscopic quantities that are converted by rates following simple kinetics. The proposed technique is applicable without using complex mathematical tools and even without detailed knowledge of the mathematical background. However, we provide a detailed mathematical analysis to show the performance and limitations of the method. For physiologically relevant parameter domains, the transient as well as the stationary errors caused by the reduction are negligible. Conclusion The new layer-based reduced modeling method allows modularized and strongly reduced models of signal transduction networks to be built. Reduced model equations can be formulated directly and are intuitively interpretable. Additionally, the method provides very good approximations, especially for macroscopic variables. It can be combined with existing reduction methods without any difficulties. PMID:17854494
Learning to represent spatial transformations with factored higher-order Boltzmann machines.
Memisevic, Roland; Hinton, Geoffrey E
2010-06-01
To allow the hidden units of a restricted Boltzmann machine to model the transformation between two successive images, Memisevic and Hinton (2007) introduced three-way multiplicative interactions that use the intensity of a pixel in the first image as a multiplicative gain on a learned, symmetric weight between a pixel in the second image and a hidden unit. This creates cubically many parameters, which form a three-dimensional interaction tensor. We describe a low-rank approximation to this interaction tensor that uses a sum of factors, each of which is a three-way outer product. This approximation allows efficient learning of transformations between larger image patches. Since each factor can be viewed as an image filter, the model as a whole learns optimal filter pairs for efficiently representing transformations. We demonstrate the learning of optimal filter pairs from various synthetic and real image sequences. We also show how learning about image transformations allows the model to perform a simple visual analogy task, and we show how a completely unsupervised network trained on transformations perceives multiple motions of transparent dot patterns in the same way as humans.
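A small sketch of the factored three-way interaction described above: instead of a full cubic tensor, each image is projected onto a common set of factors and the element-wise product of the projections drives the hidden units. The array sizes and random weights are assumptions for illustration; in practice the filter pairs are learned from image sequences.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hidden_given_images(x, y, Wx, Wy, Wh, b_h):
    """Hidden-unit activation probabilities of a factored gated model: the full
    three-way tensor w_ijk is replaced by a sum of rank-one factors, so the
    gating reduces to projecting each image onto the factors and multiplying.
    Shapes: x (n_x,), y (n_y,), Wx (n_x, F), Wy (n_y, F), Wh (n_h, F)."""
    fx = x @ Wx                  # factor responses to the first image
    fy = y @ Wy                  # factor responses to the second image
    return sigmoid(Wh @ (fx * fy) + b_h)

# Random toy instance (weights would normally be learned from image pairs).
rng = np.random.default_rng(1)
n_x = n_y = 64                   # 8x8 image patches, flattened
n_h, F = 32, 50                  # hidden units and factors
Wx = rng.normal(scale=0.1, size=(n_x, F))
Wy = rng.normal(scale=0.1, size=(n_y, F))
Wh = rng.normal(scale=0.1, size=(n_h, F))
b_h = np.zeros(n_h)

x = rng.random(n_x)              # "before" patch
y = rng.random(n_y)              # "after"  patch
print(hidden_given_images(x, y, Wx, Wy, Wh, b_h).shape)   # (32,)
```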
Reciprocal-space mapping of epitaxic thin films with crystallite size and shape polydispersity.
Boulle, A; Conchon, F; Guinebretière, R
2006-01-01
A development is presented that allows the simulation of reciprocal-space maps (RSMs) of epitaxic thin films exhibiting fluctuations in the size and shape of the crystalline domains over which diffraction is coherent (crystallites). Three different crystallite shapes are studied, namely parallelepipeds, trigonal prisms and hexagonal prisms. For each shape, two cases are considered. Firstly, the overall size is allowed to vary but with a fixed thickness/width ratio. Secondly, the thickness and width are allowed to vary independently. The calculations are performed assuming three different size probability density functions: the normal distribution, the lognormal distribution and a general histogram distribution. In all cases considered, the computation of the RSM only requires a two-dimensional Fourier integral and the integrand has a simple analytical expression, i.e. there is no significant increase in computing times by taking size and shape fluctuations into account. The approach presented is compatible with most lattice disorder models (dislocations, inclusions, mosaicity, ...) and allows a straightforward account of the instrumental resolution. The applicability of the model is illustrated with the case of an yttria-stabilized zirconia film grown on sapphire.
A general consumer-resource population model
Lafferty, Kevin D.; DeLeo, Giulio; Briggs, Cheryl J.; Dobson, Andrew P.; Gross, Thilo; Kuris, Armand M.
2015-01-01
Food-web dynamics arise from predator-prey, parasite-host, and herbivore-plant interactions. Models for such interactions include up to three consumer activity states (questing, attacking, consuming) and up to four resource response states (susceptible, exposed, ingested, resistant). Articulating these states into a general model allows for dissecting, comparing, and deriving consumer-resource models. We specify this general model for 11 generic consumer strategies that group mathematically into predators, parasites, and micropredators and then derive conditions for consumer success, including a universal saturating functional response. We further show how to use this framework to create simple models with a common mathematical lineage and transparent assumptions. Underlying assumptions, missing elements, and composite parameters are revealed when classic consumer-resource models are derived from the general model.
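One simple consumer-resource instance that such a general framework reduces to is the classic Rosenzweig-MacArthur predator-prey model with a saturating (Holling type II) functional response. The sketch below, with illustrative parameter values, shows the kind of derived model the paper targets rather than the general model itself.

```python
from scipy.integrate import solve_ivp

def rosenzweig_macarthur(t, z, r=1.0, K=10.0, a=1.2, h=0.5, e=0.4, m=0.3):
    """Logistic resource growth plus a saturating (Holling type II) functional
    response a*R/(1 + a*h*R). Parameter values are illustrative only."""
    R, C = z
    intake = a * R / (1.0 + a * h * R)        # saturating functional response
    dR = r * R * (1.0 - R / K) - intake * C   # resource dynamics
    dC = e * intake * C - m * C               # consumer dynamics
    return [dR, dC]

sol = solve_ivp(rosenzweig_macarthur, (0.0, 200.0), [5.0, 1.0], rtol=1e-8)
print(sol.y[:, -1])    # resource and consumer densities at t = 200
```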
Secular Orbit Evolution in Systems with a Strong External Perturber—A Simple and Accurate Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrade-Ines, Eduardo; Eggl, Siegfried, E-mail: eandrade.ines@gmail.com, E-mail: siegfried.eggl@jpl.nasa.gov
We present a semi-analytical correction to the seminal solution for the secular motion of a planet's orbit under the gravitational influence of an external perturber derived by Heppenheimer. A comparison between analytical predictions and numerical simulations allows us to determine corrective factors for the secular frequency and forced eccentricity in the coplanar restricted three-body problem. The correction is given in the form of a polynomial function of the system's parameters that can be applied to first-order forced eccentricity and secular frequency estimates. The resulting secular equations are simple, straightforward to use, and improve the fidelity of Heppenheimer's solution well beyond higher-order models. The quality and convergence of the corrected secular equations are tested for a wide range of parameters and limits of their applicability are given.
Simple spatial scaling rules behind complex cities.
Li, Ruiqi; Dong, Lei; Zhang, Jiang; Wang, Xinran; Wang, Wen-Xu; Di, Zengru; Stanley, H Eugene
2017-11-28
Although most wealth and innovation have been the result of human interaction and cooperation, we are not yet able to quantitatively predict the spatial distributions of the three main elements of cities: population, roads, and socioeconomic interactions. Using a simple model based mainly on spatial attraction and matching growth mechanisms, we reveal that the spatial scaling rules of these three elements fit within a consistent framework, which allows us to use any single observation to infer the others. All numerical and theoretical results are consistent with empirical data from ten representative cities. In addition, our model can also provide a general explanation of the origins of the universal super- and sub-linear aggregate scaling laws and accurately predict kilometre-level socioeconomic activity. Our work opens a new avenue for uncovering the evolution of cities in terms of the interplay among urban elements, and it has a broad range of applications.
NASA Astrophysics Data System (ADS)
Zieliński, Tomasz G.
2017-11-01
The paper proposes and investigates computationally efficient microstructure representations for sound absorbing fibrous media. Three-dimensional volume elements involving non-trivial periodic arrangements of straight fibres are examined, as well as simple two-dimensional cells. It has been found that a simple 2D quasi-representative cell can provide similar predictions to a volume element which is, in general, much more geometrically accurate for typical fibrous materials. The multiscale modelling allowed the effective speeds and damping of acoustic waves propagating in such media to be determined, which brings up a discussion on the correlation between the speed, penetration range and attenuation of sound waves. Original experiments on manufactured copper-wire samples are presented and the microstructure-based calculations of acoustic absorption are compared with the corresponding experimental results. In fact, the comparison suggested microstructure modifications leading to representations with non-uniformly distributed fibres.
Sensory Perception and Aging in Model Systems: From the Outside In
Linford, Nancy J.; Kuo, Tsung-Han; Chan, Tammy P.; Pletcher, Scott D.
2014-01-01
Sensory systems provide organisms from bacteria to human with the ability to interact with the world. Numerous senses have evolved that allow animals to detect and decode cues from sources in both their external and internal environments. Recent advances in understanding the central mechanisms by which the brains of simple organisms evaluate different cues and initiate behavioral decisions, coupled with observations that sensory manipulations are capable of altering organism lifespan, have opened the door for powerful new research into aging. While direct links between sensory perception and aging have been established only recently, here we discuss these initial discoveries and evaluate the potential for different forms of sensory processing to modulate lifespan across taxa. Harnessing the neurobiology of simple model systems to study the biological impact of sensory experiences will yield insights into the broad influence of sensory perception in mammals and may help uncover new mechanisms of healthy aging. PMID:21756108
Sensory perception and aging in model systems: from the outside in.
Linford, Nancy J; Kuo, Tsung-Han; Chan, Tammy P; Pletcher, Scott D
2011-01-01
Sensory systems provide organisms from bacteria to humans with the ability to interact with the world. Numerous senses have evolved that allow animals to detect and decode cues from sources in both their external and internal environments. Recent advances in understanding the central mechanisms by which the brains of simple organisms evaluate different cues and initiate behavioral decisions, coupled with observations that sensory manipulations are capable of altering organismal lifespan, have opened the door for powerful new research into aging. Although direct links between sensory perception and aging have been established only recently, here we discuss these initial discoveries and evaluate the potential for different forms of sensory processing to modulate lifespan across taxa. Harnessing the neurobiology of simple model systems to study the biological impact of sensory experiences will yield insights into the broad influence of sensory perception in mammals and may help uncover new mechanisms of healthy aging.
NASA Astrophysics Data System (ADS)
Lenderink, Geert; Attema, Jisk
2015-08-01
Scenarios of future changes in small-scale precipitation extremes for the Netherlands are presented. These scenarios are based on a new approach whereby changes in precipitation extremes are set proportional to the change in water vapor amount near the surface, as measured by the 2 m dew point temperature. This simple scaling framework allows the integration of information derived from: (i) observations, (ii) a new, unprecedentedly large 16-member ensemble of simulations with the regional climate model RACMO2 driven by EC-Earth, and (iii) short-term integrations with the non-hydrostatic model Harmonie. Scaling constants are based on subjective weighting (expert judgement) of the three different information sources, also taking into account previously published work. In all scenarios local precipitation extremes increase with warming, yet with broad uncertainty ranges expressing incomplete knowledge of how convective clouds and the atmospheric mesoscale circulation will react to climate change.
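The scaling idea itself is easy to state in code: a present-day extreme is multiplied by a fixed fractional rate per degree of dew point change. The 7% per °C used below is only a commonly quoted Clausius-Clapeyron-like placeholder, not one of the scenario constants derived in the study.

```python
def scaled_extreme(p_now_mm, dtd_degC, rate_per_degC=0.07):
    """Scale a present-day precipitation extreme (mm) with the projected change
    in 2 m dew point temperature (degC), using a fixed fractional rate per
    degree. The 7 %/degC value is an illustrative placeholder only."""
    return p_now_mm * (1.0 + rate_per_degC) ** dtd_degC

print(scaled_extreme(25.0, dtd_degC=2.0))   # ~28.6 mm for +2 degC dew point
```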
Role of large-scale velocity fluctuations in a two-vortex kinematic dynamo.
Kaplan, E J; Brown, B P; Rahbarnia, K; Forest, C B
2012-06-01
This paper presents an analysis of the Dudley-James two-vortex flow, which inspired several laboratory-scale liquid-metal experiments, in order to better demonstrate its relation to astrophysical dynamos. A coordinate transformation splits the flow into components that are axisymmetric and nonaxisymmetric relative to the induced magnetic dipole moment. The reformulation gives the flow the same dynamo ingredients as are present in more complicated convection-driven dynamo simulations. These ingredients are currents driven by the mean flow and currents driven by correlations between fluctuations in the flow and fluctuations in the magnetic field. The simple model allows us to isolate the dynamics of the growing eigenvector and trace them back to individual three-wave couplings between the magnetic field and the flow. This simple model demonstrates the necessity of poloidal advection in sustaining the dynamo and points to the effect of large-scale flow fluctuations in exciting a dynamo magnetic field.
Using simple agent-based modeling to inform and enhance neighborhood walkability
2013-01-01
Background Pedestrian-friendly neighborhoods with proximal destinations and services encourage walking and decrease car dependence, thereby contributing to more active and healthier communities. Proximity to key destinations and services is an important aspect of the urban design decision making process, particularly in areas adopting a transit-oriented development (TOD) approach to urban planning, whereby densification occurs within walking distance of transit nodes. Modeling destination access within neighborhoods has been limited to circular catchment buffers or more sophisticated network-buffers generated using geoprocessing routines within geographical information systems (GIS). Both circular and network-buffer catchment methods are problematic. Circular catchment models do not account for street networks, thus do not allow exploratory ‘what-if’ scenario modeling; and network-buffering functionality typically exists within proprietary GIS software, which can be costly and requires a high level of expertise to operate. Methods This study sought to overcome these limitations by developing an open-source simple agent-based walkable catchment tool that can be used by researchers, urban designers, planners, and policy makers to test scenarios for improving neighborhood walkable catchments. A simplified version of an agent-based model was ported to a vector-based open source GIS web tool using data derived from the Australian Urban Research Infrastructure Network (AURIN). The tool was developed and tested with end-user stakeholder working group input. Results The resulting model has proven to be effective and flexible, allowing stakeholders to assess and optimize the walkability of neighborhood catchments around actual or potential nodes of interest (e.g., schools, public transport stops). Users can derive a range of metrics to compare different scenarios modeled. These include: catchment area versus circular buffer ratios; mean number of streets crossed; and modeling of different walking speeds and wait time at intersections. Conclusions The tool has the capacity to influence planning and public health advocacy and practice, and by using open-access source software, it is available for use locally and internationally. There is also scope to extend this version of the tool from a simple to a complex model, which includes agents (i.e., simulated pedestrians) ‘learning’ and incorporating other environmental attributes that enhance walkability (e.g., residential density, mixed land use, traffic volume). PMID:24330721
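To illustrate the kind of computation behind a walkable-catchment metric (not the AURIN tool itself), the hedged sketch below builds a toy street grid, finds all intersections reachable within a walking-distance cutoff, and compares the result with a naive circular buffer; the network, the barrier and all distances are invented.

```python
import networkx as nx

# Toy street network: a 20 x 20 grid of intersections 100 m apart, with a few
# edges removed to mimic a barrier. Real inputs would come from street data.
G = nx.grid_2d_graph(20, 20)
for u, v in [((5, j), (6, j)) for j in range(3, 17)]:
    G.remove_edge(u, v)                       # a long barrier (e.g. a rail line)
for u, v in G.edges:
    G.edges[u, v]["length"] = 100.0           # metres

origin = (10, 10)                             # e.g. a train station
cutoff = 800.0                                # walkable distance in metres

# Network catchment: all intersections reachable within the cutoff distance.
dist = nx.single_source_dijkstra_path_length(G, origin, cutoff=cutoff,
                                             weight="length")
reachable = set(dist)

# Compare with a naive circular catchment of the same radius (straight-line).
def euclid(a, b):
    return 100.0 * ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

circle = {n for n in G if euclid(n, origin) <= cutoff}
print(f"catchment ratio = {len(reachable) / len(circle):.2f}")
```

The printed ratio corresponds to the catchment-area-versus-circular-buffer metric mentioned above; a low value flags poorly connected surroundings.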
Virumbrales-Muñoz, María; Ayuso, José María; Olave, Marta; Monge, Rosa; de Miguel, Diego; Martínez-Lostao, Luis; Le Gac, Séverine; Doblare, Manuel; Ochoa, Ignacio; Fernandez, Luis J
2017-09-20
The tumour microenvironment is very complex, and essential in tumour development and drug resistance. The endothelium is critical in the tumour microenvironment: it provides nutrients and oxygen to the tumour and is essential for systemic drug delivery. Therefore, we report a simple, user-friendly microfluidic device for co-culture of a 3D breast tumour model and a 2D endothelium model for cross-talk and drug delivery studies. First, we demonstrated the endothelium was functional, whereas the tumour model exhibited in vivo features, e.g., oxygen gradients and preferential proliferation of cells with better access to nutrients and oxygen. Next, we observed the endothelium structure lost its integrity in the co-culture. Following this, we evaluated two drug formulations of TRAIL (TNF-related apoptosis inducing ligand): soluble and anchored to a LUV (large unilamellar vesicle). Both diffused through the endothelium, LUV-TRAIL being more efficient in killing tumour cells, showing no effect on the integrity of endothelium. Overall, we have developed a simple capillary force-based microfluidic device for 2D and 3D cell co-cultures. Our device allows high-throughput approaches, patterning different cell types and generating gradients without specialised equipment. We anticipate this microfluidic device will facilitate drug screening in a relevant microenvironment thanks to its simple, effective and user-friendly operation.
A simple integrated assessment approach to global change simulation and evaluation
NASA Astrophysics Data System (ADS)
Ogutu, Keroboto; D'Andrea, Fabio; Ghil, Michael
2016-04-01
We formulate and study the Coupled Climate-Economy-Biosphere (CoCEB) model, which constitutes the basis of our idealized integrated assessment approach to simulating and evaluating global change. CoCEB is composed of a physical climate module, based on Earth's energy balance, and an economy module that uses endogenous economic growth with physical and human capital accumulation. A biosphere model is likewise under study and will be coupled to the existing two modules. We concentrate on the interactions between the two subsystems: the effect of climate on the economy, via damage functions, and the effect of the economy on climate, via a control of the greenhouse gas emissions. Simple functional forms of the relation between the two subsystems permit simple interpretations of the coupled effects. The CoCEB model is used to make hypotheses on the long-term effect of investment in emission abatement, and on the comparative efficacy of different approaches to abatement, in particular by investing in low carbon technology, in deforestation reduction or in carbon capture and storage (CCS). The CoCEB model is very flexible and transparent, and it allows one to easily formulate and compare different functional representations of climate change mitigation policies. Using different mitigation measures and their cost estimates, as found in the literature, one is able to compare these measures in a coherent way.
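As a purely illustrative sketch of how such a coupled loop can be organised (capital drives output, output drives emissions, emissions drive warming, warming damages output), the code below uses invented functional forms and numbers; it is not the CoCEB calibration, only a toy of the same structure.

```python
import numpy as np

def run_coupled_sketch(years=200, abatement=0.0):
    """Toy climate-economy loop: capital accumulation, a damage function and
    abatement costs on output, emissions raising CO2, and a one-box energy
    balance relaxing temperature toward its equilibrium response. Every
    functional form and number is an illustrative assumption."""
    K, T, co2 = 300.0, 0.0, 400.0        # capital, temperature anomaly (K), CO2 (ppm)
    s, delta = 0.25, 0.05                # savings rate, capital depreciation
    lam, tau = 3.0, 40.0                 # warming per CO2 doubling (K), response time (yr)
    for _ in range(years):
        damage = 1.0 / (1.0 + 0.003 * T ** 2)        # fraction of output kept
        cost = 0.02 * abatement ** 2                 # abatement cost share
        Y = K ** 0.75 * damage * (1.0 - cost)        # gross output
        co2 += 0.02 * Y * (1.0 - abatement)          # emissions, ppm/yr
        co2 -= 0.005 * (co2 - 280.0)                 # slow decay toward pre-industrial
        T += (lam * np.log2(co2 / 280.0) - T) / tau  # relax toward equilibrium warming
        K += s * Y - delta * K                       # capital accumulation
    return Y, T

print(run_coupled_sketch(abatement=0.0))   # no abatement: more warming
print(run_coupled_sketch(abatement=0.5))   # abatement lowers emissions and warming
```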
Analyst-centered models for systems design, analysis, and development
NASA Technical Reports Server (NTRS)
Bukley, A. P.; Pritchard, Richard H.; Burke, Steven M.; Kiss, P. A.
1988-01-01
Much has been written about the possible use of Expert Systems (ES) technology for strategic defense system applications, particularly for battle management algorithms and mission planning. It is proposed that ES (or more accurately, Knowledge Based System (KBS)) technology can be used in situations for which no human expert exists, namely to create design and analysis environments that allow an analyst to rapidly pose many different possible problem resolutions in a game-like fashion and to then work through the solution space in search of the optimal solution. Portions of such an environment exist for expensive AI hardware/software combinations such as the Xerox LOOPS and Intellicorp KEE systems. Efforts are discussed to build an analyst-centered model (ACM) using an ES programming environment, ExperOPS5, for a simple missile system tradeoff study. By analyst-centered, it is meant that the focus of learning is for the benefit of the analyst, not the model. The model's environment allows the analyst to pose a variety of what-if questions without resorting to programming changes. Although not an ES per se, the ACM would allow for a design and analysis environment that is much superior to that of current technologies.
Mathematical modelling of risk reduction in reinsurance
NASA Astrophysics Data System (ADS)
Balashov, R. B.; Kryanev, A. V.; Sliva, D. E.
2017-01-01
The paper presents a mathematical model of efficient portfolio formation in the reinsurance markets. The presented approach provides the optimal trade-off between the expected return and the risk of the yield falling below a certain level. The uncertainty in the return values stems from the use of expert evaluations and preliminary calculations, which result in expected return values and the corresponding risk levels. The proposed method allows the implementation of computationally simple schemes and algorithms for the numerical calculation of the structure of efficient portfolios of reinsurance contracts for a given insurance company.
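A hedged sketch of one way such a mean-risk portfolio could be computed from scenario returns: maximise expected return minus a penalty on downside semideviation below a target yield, under the constraint that contract shares sum to one. The scenario matrix, target and risk-aversion coefficient are invented, and the paper's exact optimisation criterion may differ.

```python
import numpy as np
from scipy.optimize import minimize

# Scenario returns for four hypothetical reinsurance contracts (rows = scenarios,
# e.g. from expert evaluation and preliminary loss modelling); purely illustrative.
R = np.array([[0.12, 0.05, 0.20, 0.03],
              [0.10, 0.06, -0.30, 0.04],
              [-0.40, 0.07, 0.15, 0.05],
              [0.15, 0.04, 0.18, 0.02],
              [0.11, 0.05, -0.25, 0.04]])
target = 0.05            # yield level below which returns count as "risk"
risk_aversion = 3.0

def objective(w):
    port = R @ w
    shortfall = np.clip(target - port, 0.0, None)      # below-target part only
    risk = np.mean(shortfall ** 2) ** 0.5               # downside semideviation
    return -(np.mean(port) - risk_aversion * risk)      # mean-risk trade-off

n = R.shape[1]
res = minimize(objective, x0=np.full(n, 1.0 / n),
               bounds=[(0.0, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
               method="SLSQP")
print(np.round(res.x, 3))      # share of each contract in the efficient portfolio
```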
Multi-hole pressure probes to wind tunnel experiments and air data systems
NASA Astrophysics Data System (ADS)
Shevchenko, A. M.; Shmakov, A. S.
2017-10-01
The problems of developing a multi-hole pressure system to measure flow angularity, Mach number and dynamic head for wind tunnel experiments or air data systems are discussed. A simple analytical model with separation of variables is derived for the multi-hole spherical pressure probe. The proposed model is uniform for small subsonic and supersonic speeds. An error analysis was performed. The error functions are obtained, allowing the influence of the Mach number, the pitch angle and the location of the pressure ports on the uncertainty of the determined flow parameters to be estimated.
NASA Astrophysics Data System (ADS)
Kočí, Jan; Maděra, Jiří; Kočí, Václav; Hlaváčová, Zuzana; Černý, Robert
2017-11-01
A simple laboratory experiment for determining the thermal response of a studied sample during thawing is described in the paper. The sample, made of autoclaved aerated concrete, was partially water saturated and frozen. The temperature development during thawing was then recorded, allowing the time scale of the phase change process taking place inside the sample to be identified. The experimental data were then used in an inverse analysis in order to find the unknown parameters of the smoothed effective specific heat capacity model.
Relaxational effects in radiating stellar collapse
NASA Astrophysics Data System (ADS)
Govender, Megan; Maartens, Roy; Maharaj, Sunil D.
1999-12-01
Relaxational effects in stellar heat transport can in many cases be significant. Relativistic Fourier-Eckart theory is inherently quasi-stationary and cannot incorporate these effects. The effects are naturally accounted for in causal relativistic thermodynamics, which provides an improved approximation to kinetic theory. Recent results, based on perturbations of a static star, show that relaxation effects can produce a significant increase in the central temperature and temperature gradient for a given luminosity. We use a simple stellar model that allows for non-perturbative deviations from staticity, and confirm qualitatively the predictions of the perturbative models.
Turbulent shear layers in confining channels
NASA Astrophysics Data System (ADS)
Benham, Graham P.; Castrejon-Pita, Alfonso A.; Hewitt, Ian J.; Please, Colin P.; Style, Rob W.; Bird, Paul A. D.
2018-06-01
We present a simple model for the development of shear layers between parallel flows in confining channels. Such flows are important across a wide range of topics from diffusers, nozzles and ducts to urban air flow and geophysical fluid dynamics. The model approximates the flow in the shear layer as a linear profile separating uniform-velocity streams. Both the channel geometry and wall drag affect the development of the flow. The model shows good agreement with both particle image velocimetry experiments and computational turbulence modelling. The simplicity and low computational cost of the model allows it to be used for benchmark predictions and design purposes, which we demonstrate by investigating optimal pressure recovery in diffusers with non-uniform inflow.
Observation Data Model Core Components, its Implementation in the Table Access Protocol Version 1.1
NASA Astrophysics Data System (ADS)
Louys, Mireille; Tody, Doug; Dowler, Patrick; Durand, Daniel; Michel, Laurent; Bonnarel, François; Micol, Alberto; IVOA DataModel Working Group
2017-05-01
This document defines the core components of the Observation data model that are necessary to perform data discovery when querying data centers for astronomical observations of interest. It exposes the use cases to be carried out, explains the model and provides guidelines for its implementation as a data access service based on the Table Access Protocol (TAP). It aims at providing a simple model, easy to understand and to implement by data providers that wish to publish their data into the Virtual Observatory. This interface integrates data modeling and data access aspects in a single service and is named ObsTAP. It will be referenced as such in the IVOA registries. In this document, the Observation Data Model Core Components (ObsCoreDM) defines the core components of queryable metadata required for global discovery of observational data. It is meant to allow a single query to be posed to TAP services at multiple sites to perform global data discovery without having to understand the details of the services present at each site. It defines a minimal set of basic metadata and thus allows for a reasonable cost of implementation by data providers. The combination of the ObsCoreDM with TAP is referred to as an ObsTAP service. As with most of the VO Data Models, ObsCoreDM makes use of STC, Utypes, Units and UCDs. The ObsCoreDM can be serialized as a VOTable. ObsCoreDM can make reference to more complete data models such as the Characterisation DM, Spectrum DM or Simple Spectral Line Data Model (SSLDM). ObsCore shares a large set of common concepts with the DataSet Metadata Data Model (Cresitello-Dittmar et al. 2016), which binds together most of the data model concepts from the above models in a comprehensive and more general framework. The current specification, by contrast, provides guidelines for implementing these concepts using the TAP protocol and answering ADQL queries, and is dedicated to global discovery.
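For illustration, a global-discovery query against an ObsTAP service might be issued as follows with the pyvo client; the service URL is a placeholder and the selection criteria are arbitrary, but the column names are standard ObsCore ones.

```python
import pyvo

# Hypothetical ObsTAP endpoint; any data centre publishing an ivoa.ObsCore table
# through TAP could be queried the same way.
service = pyvo.dal.TAPService("https://example.org/tap")

# ADQL query against the ObsCore columns: calibrated spectra overlapping a sky
# position in a given wavelength range.
query = """
SELECT obs_collection, obs_id, access_url
FROM ivoa.obscore
WHERE dataproduct_type = 'spectrum'
  AND calib_level >= 2
  AND CONTAINS(POINT('ICRS', 202.48, 47.23), s_region) = 1
  AND em_min < 6.5e-7 AND em_max > 4.5e-7
"""
results = service.search(query)
for row in results:
    print(row["obs_collection"], row["access_url"])
```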
Karev, Georgy P; Wolf, Yuri I; Koonin, Eugene V
2003-10-12
The distributions of many genome-associated quantities, including the membership of paralogous gene families, can be approximated with power laws. We are interested in developing mathematical models of genome evolution that adequately account for the shape of these distributions and describe the evolutionary dynamics of their formation. We show that simple stochastic models of genome evolution lead to power-law asymptotics of the protein domain family size distribution. These models, called Birth, Death and Innovation Models (BDIM), represent a special class of balanced birth-and-death processes, in which domain duplication and deletion rates are asymptotically equal up to the second order. The simplest, linear BDIM shows an excellent fit to the observed distributions of domain family size in diverse prokaryotic and eukaryotic genomes. However, the stochastic version of the linear BDIM explored here predicts that the actual size of large paralogous families is reached on an unrealistically long timescale. We show that the introduction of non-linearity, which might be interpreted as interaction of a particular order between individual family members, allows the model to achieve genome evolution rates that are much more compatible with current estimates of the rates of individual duplication/loss events.
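A minimal stochastic sketch of a (near-)balanced linear BDIM: members duplicate and are deleted at almost equal per-member rates, and new single-member families appear at an innovation rate. The event count and rate values are illustrative only; the paper's analytical treatment and the non-linear variants are not reproduced here.

```python
import numpy as np

def simulate_bdim(events=100_000, birth=1.0, death=1.01, innovation=0.5, seed=0):
    """Linear birth-death-innovation sketch: each family member duplicates at
    rate `birth` and is deleted at rate `death` (families reaching size 0
    vanish); new single-member families arise at rate `innovation`. Duplication
    and deletion are kept nearly balanced, as in a balanced BDIM."""
    rng = np.random.default_rng(seed)
    families = [1]
    for _ in range(events):
        sizes = np.array(families, dtype=float)
        total_dup, total_del = birth * sizes.sum(), death * sizes.sum()
        r = rng.random() * (total_dup + total_del + innovation)
        if r < innovation:
            families.append(1)                                 # innovation: new family
            continue
        i = rng.choice(len(families), p=sizes / sizes.sum())   # member-proportional pick
        if r < innovation + total_dup:
            families[i] += 1                                   # domain duplication
        else:
            families[i] -= 1                                   # domain deletion
            if families[i] == 0:
                families.pop(i)
    return np.array(families)

sizes = simulate_bdim()
# Family-size distribution, to be compared with a power-law-like tail.
print(len(sizes), sizes.max(), np.bincount(sizes)[:6])
```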
Using Natural Language to Enable Mission Managers to Control Multiple Heterogeneous UAVs
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.; Puig-Navarro, Javier; Mehdi, S. Bilal; Mcquarry, A. Kyle
2016-01-01
The availability of highly capable, yet relatively cheap, unmanned aerial vehicles (UAVs) is opening up new areas of use for hobbyists and for commercial activities. This research is developing methods beyond classical control-stick pilot inputs, to allow operators to manage complex missions without in-depth vehicle expertise. These missions may entail several heterogeneous UAVs flying coordinated patterns or flying multiple trajectories deconflicted in time or space to predefined locations. This paper describes the functionality and preliminary usability measures of an interface that allows an operator to define a mission using speech inputs. With a defined and simple vocabulary, operators can input the vast majority of mission parameters using simple, intuitive voice commands. Although the operator interface is simple, it is based upon autonomous algorithms that allow the mission to proceed with minimal input from the operator. This paper also describes these underlying algorithms that allow an operator to manage several UAVs.
A Dynamic Approach to Monitoring Particle Fallout in a Cleanroom Environment
NASA Technical Reports Server (NTRS)
Perry, Radford L., III
2010-01-01
This slide presentation discusses a mathematical model for monitoring particle fallout in a cleanroom. "Cleanliness levels" do not lead to increases with regard to cleanroom type or time because the levels are not linear. Activity level impacts the cleanroom class. The numerical method presented leads to a simple Class-hour formulation that allows dynamic monitoring of particle fallout using a standard air particle counter.
Using Bayesian Networks and Decision Theory to Model Physical Security
2003-02-01
Home automation technologies allow a person to monitor and control various activities within a home or office setting. Cameras, sensors and other...components used along with the simple rules in the home automation software provide an environment where the lights, security and other appliances can be...monitored and controlled. These home automation technologies, however, lack the power to reason under uncertain conditions and thus the system can
Solar Thermal Propulsion for Microsatellite Manoeuvring
2004-09-01
of 14-cm and 56-cm diameter solar concentrating mirrors has clearly validated initial optical ray trace modelling and suggests that there is...concentrating mirror’s focus, permitting multiple mirror inputs to heat a single receiver and allowing the receiver to be placed anywhere on the host...The STE is conceptually simple, relying on a mirror or lens assembly to collect and concentrate incident solar radiation. This energy is focused, by
NASA Astrophysics Data System (ADS)
Tufillaro, Nicholas B.; Abbott, Tyler A.; Griffiths, David J.
1984-10-01
We examine the motion of an Atwood's Machine in which one of the masses is allowed to swing in a plane. Computer studies reveal a rich variety of trajectories. The orbits are classified (bounded, periodic, singular, and terminating), and formulas for the critical mass ratios are developed. Perturbative techniques yield good approximations to the computer-generated trajectories. The model constitutes a simple example of a nonlinear dynamical system with two degrees of freedom.
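The equations of motion of the swinging Atwood's machine are simple enough to integrate directly; the sketch below follows the standard Lagrangian form with mass ratio mu = M/m and stops the integration if the orbit terminates at the pulley. The initial conditions and the mass ratio are arbitrary choices for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def swinging_atwood(t, y, mu=3.0, g=9.81):
    """Swinging Atwood's machine with mass ratio mu = M/m: M hangs straight down
    on one side of the pulley, m swings in a plane on the other; r is the length
    of the swinging arm and theta its angle from the downward vertical."""
    r, r_dot, th, th_dot = y
    r_ddot = (r * th_dot ** 2 - g * (mu - np.cos(th))) / (mu + 1.0)
    th_ddot = -(2.0 * r_dot * th_dot + g * np.sin(th)) / r
    return [r_dot, r_ddot, th_dot, th_ddot]

def reaches_pulley(t, y):
    """Stop the integration if the swinging mass is pulled up to the pulley
    (a 'terminating' orbit in the classification above)."""
    return y[0] - 1e-3
reaches_pulley.terminal = True

# Example trajectory: released from rest at r = 1 m, theta = 0.5 rad, mu = 3.
sol = solve_ivp(swinging_atwood, (0.0, 20.0), [1.0, 0.0, 0.5, 0.0],
                max_step=0.01, events=reaches_pulley)
r, th = sol.y[0], sol.y[2]
x, z = r * np.sin(th), -r * np.cos(th)      # Cartesian path of the swinging mass
print(sol.status, r.min(), r.max())
```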
Multidirectional Scanning Model, MUSCLE, to Vectorize Raster Images with Straight Lines
Karas, Ismail Rakip; Bayram, Bulent; Batuk, Fatmagul; Akay, Abdullah Emin; Baz, Ibrahim
2008-01-01
This paper presents a new model, MUSCLE (Multidirectional Scanning for Line Extraction), for automatic vectorization of raster images with straight lines. The algorithm of the model implements the line thinning and the simple neighborhood methods to perform vectorization. The model allows users to define specified criteria which are crucial for acquiring the vectorization process. In this model, various raster images can be vectorized such as township plans, maps, architectural drawings, and machine plans. The algorithm of the model was developed by implementing an appropriate computer programming and tested on a basic application. Results, verified by using two well known vectorization programs (WinTopo and Scan2CAD), indicated that the model can successfully vectorize the specified raster data quickly and accurately. PMID:27879843
Made-to-measure modelling of observed galaxy dynamics
NASA Astrophysics Data System (ADS)
Bovy, Jo; Kawata, Daisuke; Hunt, Jason A. S.
2018-01-01
Amongst dynamical modelling techniques, the made-to-measure (M2M) method for modelling steady-state systems is amongst the most flexible, allowing non-parametric distribution functions in complex gravitational potentials to be modelled efficiently using N-body particles. Here, we propose and test various improvements to the standard M2M method for modelling observed data, illustrated using the simple set-up of a one-dimensional harmonic oscillator. We demonstrate that nuisance parameters describing the modelled system's orientation with respect to the observer - e.g. an external galaxy's inclination or the Sun's position in the Milky Way - as well as the parameters of an external gravitational field can be optimized simultaneously with the particle weights. We develop a method for sampling from the high-dimensional uncertainty distribution of the particle weights. We combine this in a Gibbs sampler with samplers for the nuisance and potential parameters to explore the uncertainty distribution of the full set of parameters. We illustrate our M2M improvements by modelling the vertical density and kinematics of F-type stars in Gaia DR1. The novel M2M method proposed here allows full probabilistic modelling of steady-state dynamical systems, allowing uncertainties on the non-parametric distribution function and on nuisance parameters to be taken into account when constraining the dark and baryonic masses of stellar systems.
NASA Astrophysics Data System (ADS)
Perez, R. J.; Shevalier, M.; Hutcheon, I.
2004-05-01
Gas solubility is of considerable interest, not only for the theoretical understanding of vapor-liquid equilibria, but also due to extensive applications in combined geochemical, engineering, and environmental problems, such as greenhouse gas sequestration. Reliable models for gas solubility calculations in salt waters and hydrocarbons are also valuable when evaluating fluid inclusions saturated with gas components. We have modeled the solubility of methane, ethane, hydrogen, carbon dioxide, hydrogen sulfide, and five other gases in a water-brine-hydrocarbon system by solving a non-linear system of equations composed of modified Henry's Law Constants (HLC), gas fugacities, and the assumption of binary mixtures. HLCs are a function of pressure, temperature, brine salinity, and hydrocarbon density. Experimental data of vapor pressures and mutual solubilities of binary mixtures provide the basis for the calibration of the proposed model. It is demonstrated that, by using the Setchenow equation, only a relatively simple modification of the pure-water model is required to assess the solubility of gases in brine solutions. Henry's Law constants for gases in hydrocarbons are derived using regular solution theory and Ostwald coefficients available from the literature. We present a set of two-parameter polynomial expressions, which allow simple computation and formulation of the model. Our calculations show that solubility predictions using modified HLCs are acceptable from 0 to 250 °C, 1 to 150 bar, salinities up to 5 molar, and gas concentrations up to 4 molar. Our model is currently being used in the IEA Weyburn CO2 monitoring and storage project.
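The Setchenow correction mentioned in this abstract lends itself to a short numerical sketch. Only the functional form, log10(S0/S) = k_s * m_salt, is taken from the text; the Henry's law constant, Setchenow coefficient, and conditions below are illustrative placeholders, not values from the paper.

```python
def henry_solubility_pure_water(kh_mol_per_l_bar: float, fugacity_bar: float) -> float:
    """Dissolved gas concentration (mol/L) in pure water from a Henry's law constant."""
    return kh_mol_per_l_bar * fugacity_bar

def setchenow_correction(solubility_pure: float, k_s: float, salt_molality: float) -> float:
    """Apply the Setchenow equation: log10(S0/S) = k_s * m_salt (salting-out)."""
    return solubility_pure * 10.0 ** (-k_s * salt_molality)

# Placeholder numbers: a CO2-like gas at 10 bar in a 2 molal brine
s0 = henry_solubility_pure_water(kh_mol_per_l_bar=0.034, fugacity_bar=10.0)
s_brine = setchenow_correction(s0, k_s=0.12, salt_molality=2.0)
print(f"pure water: {s0:.3f} mol/L, brine: {s_brine:.3f} mol/L")
```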
Development of inexpensive prosthetic feet for high-heeled shoes using simple shoe insole model.
Meier, Margrit R; Tucker, Kerice A; Hansen, Andrew H
2014-01-01
The large majority of prosthetic feet are aimed at low-heeled shoes, with a few models allowing a heel height of up to 5 cm. However, a survey by the American Podiatric Medical Association indicates that most women wear heels over 5 cm; thus, current prosthetic feet limit most female prosthesis users in their choice. Some prosthetic foot components are heel-height adjustable; however, their plantar surface shapes do not change to match the insole shapes of the shoes with different heel heights. The aims of the study were therefore (1) to develop a model that allows prediction of insole shape for various heel height shoes in combination with different shoe sizes and (2) to develop and field-test low-cost prototypes of prosthetic feet whose insole shapes were based on the new model. An equation was developed to calculate insole shapes independent of shoe size. Field testing of prototype prosthetic feet fabricated based on the equation was successful and demonstrated the utility of the equation.
Truong, Dennis Q; Hüber, Mathias; Xie, Xihe; Datta, Abhishek; Rahman, Asif; Parra, Lucas C; Dmochowski, Jacek P; Bikson, Marom
2014-01-01
Computational models of brain current flow during transcranial electrical stimulation (tES), including transcranial direct current stimulation (tDCS) and transcranial alternating current stimulation (tACS), are increasingly used to understand and optimize clinical trials. We propose that broad dissemination requires a simple graphical user interface (GUI) software that allows users to explore and design montages in real-time, based on their own clinical/experimental experience and objectives. We introduce two complementary open-source platforms for this purpose: BONSAI and SPHERES. BONSAI is a web (cloud) based application (available at neuralengr.com/bonsai) that can be accessed through any flash-supported browser interface. SPHERES (available at neuralengr.com/spheres) is a stand-alone GUI application that allows consideration of arbitrary montages on a concentric sphere model by leveraging an analytical solution. These open-source tES modeling platforms are designed to be upgraded and enhanced. Trade-offs between open-access approaches that balance ease of access, speed, and flexibility are discussed. Copyright © 2014 Elsevier Inc. All rights reserved.
[Is there life beyond SPSS? Discover R].
Elosua Oliden, Paula
2009-11-01
R is a GNU statistical and programming environment with very high graphical capabilities. It is very powerful for research purposes, but it is also an exceptional tool for teaching. R is composed of more than 1400 packages that allow it to be used for simple statistics as well as for the most complex and most recent formal models. Graphical interfaces such as the Rcommander package permit working in user-friendly environments similar to the graphical environment used by SPSS. This last characteristic allows non-statisticians to overcome the obstacle of accessibility, and it makes R the best tool for teaching. Is there anything better? Open, free, affordable, accessible and always on the cutting edge.
Aoi, Shinya; Nachstedt, Timo; Manoonpong, Poramate; Wörgötter, Florentin; Matsuno, Fumitoshi
2018-01-01
Insects have various gaits with specific characteristics and can change their gaits smoothly in accordance with their speed. These gaits emerge from the embodied sensorimotor interactions that occur between the insect’s neural control and body dynamic systems through sensory feedback. Sensory feedback plays a critical role in coordinated movements such as locomotion, particularly in stick insects. While many previously developed insect models can generate different insect gaits, the functional role of embodied sensorimotor interactions in the interlimb coordination of insects remains unclear because of their complexity. In this study, we propose a simple physical model that is amenable to mathematical analysis to explain the functional role of these interactions clearly. We focus on a foot contact sensory feedback called phase resetting, which regulates leg retraction timing based on touchdown information. First, we used a hexapod robot to determine whether the distributed decoupled oscillators used for legs with the sensory feedback generate insect-like gaits through embodied sensorimotor interactions. The robot generated two different gaits and one had similar characteristics to insect gaits. Next, we proposed the simple model as a minimal model that allowed us to analyze and explain the gait mechanism through the embodied sensorimotor interactions. The simple model consists of a rigid body with massless springs acting as legs, where the legs are controlled using oscillator phases with phase resetting, and the governed equations are reduced such that they can be explained using only the oscillator phases with some approximations. This simplicity leads to analytical solutions for the hexapod gaits via perturbation analysis, despite the complexity of the embodied sensorimotor interactions. This is the first study to provide an analytical model for insect gaits under these interaction conditions. Our results clarified how this specific foot contact sensory feedback contributes to generation of insect-like ipsilateral interlimb coordination during hexapod locomotion. PMID:29489831
Gas network model allows full reservoir coupling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Methnani, M.M.
The gas-network flow model (Gasnet), developed for and added to an existing Qatar General Petroleum Corp. (QGPC) in-house reservoir simulator, allows improved modeling of the interaction among the reservoir, wells, and pipeline networks. Gasnet is a three-phase model that is modified to handle gas-condensate systems. The numerical solution is based on a control volume scheme that uses the concept of cells and junctions, whereby pressure and phase densities are defined in cells, while phase flows are defined at junction links. The model features common numerical equations for the reservoir, the well, and the pipeline components and an efficient state-variable solution method in which all primary variables including phase flows are solved directly. Both steady-state and transient flow events can be simulated with the same tool. Three test cases show how the model runs. One case simulates flow redistribution in a simple two-branch gas network. The second simulates a horizontal gas well in a waterflooded gas reservoir. The third involves an export gas pipeline coupled to a producing reservoir.
Introduction to the thermodynamic Bethe ansatz
NASA Astrophysics Data System (ADS)
van Tongeren, Stijn J.
2016-08-01
We give a pedagogical introduction to the thermodynamic Bethe ansatz, a method that allows us to describe the thermodynamics of integrable models whose spectrum is found via the (asymptotic) Bethe ansatz. We set the stage by deriving the Fermi-Dirac distribution and associated free energy of free electrons, and then in a similar though technically more complicated fashion treat the thermodynamics of integrable models, focusing first on the one-dimensional Bose gas with delta function interaction as a clean pedagogical example, secondly the XXX spin chain as an elementary (lattice) model with prototypical complicating features in the form of bound states, and finally the SU(2) chiral Gross-Neveu model as a field theory example. Throughout this discussion we emphasize the central role of particle and hole densities, whose relations determine the model under consideration. We then discuss tricks that allow us to use the same methods to describe the exact spectra of integrable field theories on a circle, in particular the chiral Gross-Neveu model. We moreover discuss the simplification of TBA equations to Y systems, including the transition back to integral equations given sufficient analyticity data, in simple examples.
Quantum integrability and functional equations
NASA Astrophysics Data System (ADS)
Volin, Dmytro
2010-03-01
In this thesis a general procedure to represent the integral Bethe Ansatz equations in the form of the Riemann-Hilbert problem is given. This allows us to study integrable spin chains in the thermodynamic limit in a simple way. Based on the functional equations, we give a procedure that allows finding the subleading orders in the solution of various integral equations solved to the leading order by the Wiener-Hopf technique. The integral equations are studied in the context of the AdS/CFT correspondence, where their solution allows verification of the integrability conjecture up to two loops of the strong coupling expansion. In the context of two-dimensional sigma models we analyze the large-order behavior of the asymptotic perturbative expansion. The experience gained with the functional representation of the integral equations also allowed us to solve explicitly the crossing equations that appear in the AdS/CFT spectral problem.
Spectrum-doubled heavy vector bosons at the LHC
Appelquist, Thomas; Bai, Yang; Ingoldby, James; ...
2016-01-19
We study a simple effective field theory incorporating six heavy vector bosons together with the standard-model field content. The new particles preserve custodial symmetry as well as an approximate left-right parity symmetry. The enhanced symmetry of the model allows it to satisfy precision electroweak constraints and bounds from Higgs physics in a regime where all the couplings are perturbative and where the amount of fine-tuning is comparable to that in the standard model itself. We find that the model could explain the recently observed excesses in di-boson processes at invariant mass close to 2 TeV from LHC Run 1 for a range of allowed parameter space. The masses of all the particles differ by no more than roughly 10%. In a portion of the allowed parameter space only one of the new particles has a production cross section large enough to be detectable with the energy and luminosity of Run 1, both via its decay to WZ and to Wh, while the others have suppressed production rates. Furthermore, the model can be tested at the higher-energy and higher-luminosity run of the LHC even for an overall scale of the new particles higher than 3 TeV.
Predicting tidal currents in San Francisco Bay using a spectral model
Burau, Jon R.; Cheng, Ralph T.
1988-01-01
This paper describes the formulation of a spectral (or frequency based) model which solves the linearized shallow water equations. To account for highly variable basin bathymetry, spectral solutions are obtained using the finite element method which allows the strategic placement of the computation points in the specific areas of interest or in areas where the gradients of the dependent variables are expected to be large. Model results are compared with data using simple statistics to judge overall model performance in the San Francisco Bay estuary. Once the model is calibrated and verified, prediction of the tides and tidal currents in San Francisco Bay is accomplished by applying astronomical tides (harmonic constants deduced from field data) at the prediction time along the model boundaries.
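The prediction step described in the last sentence, summing harmonic constituents deduced from field data, can be sketched in a few lines. The constituent speeds below are standard astronomical values; the station amplitudes and phases are made-up placeholders rather than San Francisco Bay constants.

```python
import numpy as np

# Constituent speeds in degrees per hour (standard astronomical values)
SPEEDS_DEG_PER_HR = {"M2": 28.9841, "S2": 30.0000, "K1": 15.0411, "O1": 13.9430}

def tidal_elevation(hours: np.ndarray, constituents: dict) -> np.ndarray:
    """Sum of harmonic constituents: eta(t) = sum_i A_i * cos(omega_i * t - phi_i)."""
    eta = np.zeros_like(hours, dtype=float)
    for name, (amplitude_m, phase_deg) in constituents.items():
        omega = np.deg2rad(SPEEDS_DEG_PER_HR[name])   # radians per hour
        eta += amplitude_m * np.cos(omega * hours - np.deg2rad(phase_deg))
    return eta

# Placeholder harmonic constants (amplitude in metres, phase lag in degrees)
station = {"M2": (0.58, 220.0), "S2": (0.14, 240.0), "K1": (0.37, 105.0), "O1": (0.23, 90.0)}
t = np.arange(0.0, 48.0, 0.5)          # two days at half-hour steps
print(tidal_elevation(t, station)[:4])
```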
Consistent calculation of the screening and exchange effects in allowed β- transitions
NASA Astrophysics Data System (ADS)
Mougeot, X.; Bisch, C.
2014-07-01
The atomic exchange effect has previously been demonstrated to have a great influence at low energy on the Pu241 β- transition. The screening effect has been given as a possible explanation for a remaining discrepancy. Improved calculations have been made to consistently evaluate these two atomic effects, compared here to the recent high-precision measurements of Pu241 and Ni63 β spectra. In this paper a screening correction has been defined to account for the spatial extension of the electron wave functions. Excellent overall agreement of about 1% from 500 eV to the end-point energy has been obtained for both β spectra, which demonstrates that a rather simple β decay model for allowed transitions, including atomic effects within an independent-particle model, is sufficient to describe well the current most precise measurements.
NASA Astrophysics Data System (ADS)
Yoshida, Mari; Reyes, Sabrina Galiñanes; Tsuda, Soichiro; Horinouchi, Takaaki; Furusawa, Chikara; Cronin, Leroy
2017-06-01
Multi-drug strategies have been attempted to prolong the efficacy of existing antibiotics, but with limited success. Here we show that the evolution of multi-drug-resistant Escherichia coli can be manipulated in vitro by administering pairs of antibiotics and switching between them in an ON/OFF manner. Using a multiplexed cell culture system, we find that switching between certain combinations of antibiotics completely suppresses the development of resistance to one of the antibiotics. Using these data, we develop a simple deterministic model, which allows us to predict the fate of multi-drug evolution in this system. Furthermore, we are able to reverse established drug resistance based on the model prediction by modulating antibiotic selection stresses. Our results support the idea that the development of antibiotic resistance may be potentially controlled via continuous switching of drugs.
Proposal for an integrated evaluation model for the study of whole systems health care in cancer.
Jonas, Wayne B; Beckner, William; Coulter, Ian
2006-12-01
For more than 200 years, biomedicine has approached the treatment of disease by studying disease processes (pathogenesis), inferring causal connections and developing specific approaches for therapeutically interfering with those processes. This pathogenic approach has been highly successful in acute and traumatic disease but less successful in chronic disease, primarily because of the complex, multi-factorial nature of most chronic disease, which does not allow for simple causal inference or for simple therapeutic interventions. This article suggests that chronic disease is best approached by enhancing healing processes (salutogenesis) as a whole system. Because of the nature of complex systems in chronic disease, an evaluation model based on integrative medicine is felt to be more appropriate than a disease model. The authors propose and describe an integrated model for the evaluation of healing (IMEH) that collects multilevel "thick case" observational data in assessing complex practices for chronic disease. If successful, this approach could become a blueprint for studying healing capacity in whole medical systems, including complementary medicine, traditional medicine, and conventional primary care. In addition, streamlining data collection and applying rapid informatics management might allow for such data to be used in guiding clinical practice. The IMEH involves collection, integration, and potentially feedback of relevant variables in the following areas: (1) sociocultural, (2) psychological and behavioral, (3) clinical (diagnosis based), and (4) biological. Evaluation and integration of these components would involve specialized research teams that feed their data into a single data management and information analysis center. These data can then be subjected to descriptive and pathway analysis providing "bench and bedside" information.
THE RESPONSE OF DRUG EXPENDITURE TO NON-LINEAR CONTRACT DESIGN: EVIDENCE FROM MEDICARE PART D*
Einav, Liran; Finkelstein, Amy; Schrimpf, Paul
2016-01-01
We study the demand response to non-linear price schedules using data on insurance contracts and prescription drug purchases in Medicare Part D. We exploit the kink in individuals’ budget set created by the famous “donut hole,” where insurance becomes discontinuously much less generous on the margin, to provide descriptive evidence of the drug purchase response to a price increase. We then specify and estimate a simple dynamic model of drug use that allows us to quantify the spending response along the entire non-linear budget set. We use the model for counterfactual analysis of the increase in spending from “filling” the donut hole, as will be required by 2020 under the Affordable Care Act. In our baseline model, which considers spending decisions within a single year, we estimate that “filling” the donut hole will increase annual drug spending by about $150, or about 8 percent. About one-quarter of this spending increase reflects “anticipatory” behavior, coming from beneficiaries whose spending prior to the policy change would leave them short of reaching the donut hole. We also present descriptive evidence of cross-year substitution of spending by individuals who reach the kink, which motivates a simple extension to our baseline model that allows – in a highly stylized way – for individuals to engage in such cross year substitution. Our estimates from this extension suggest that a large share of the $150 drug spending increase could be attributed to cross-year substitution, and the net increase could be as little as $45 per year. PMID:26769984
Prediction of power requirements for a longwall armored face conveyor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broadfoot, A.R.; Betz, R.E.
1997-01-01
Longwall armored face conveyors (AFCs) have traditionally been designed using a combination of heuristics and simple models. However, as longwalls increase in length, these design procedures are proving to be inadequate. The result has either been a costly loss of production due to AFC stalling or component failure, or larger than necessary capital investment due to overdesign. In order to allow accurate estimation of the power requirements for an AFC, this paper develops a comprehensive model of all the friction forces associated with the AFC. Power requirement predictions obtained from these models are then compared with measurements from two mine faces.
Consistency of the free-volume approach to the homogeneous deformation of metallic glasses
NASA Astrophysics Data System (ADS)
Blétry, Marc; Thai, Minh Thanh; Champion, Yannick; Perrière, Loïc; Ochin, Patrick
2014-05-01
One of the most widely used approaches to modelling the high-temperature homogeneous deformation of metallic glasses is the free-volume theory, developed by Cohen and Turnbull and extended by Spaepen. A simple elastoviscoplastic formulation has been proposed that allows one to determine various parameters of such a model. This approach is applied here to the results obtained by de Hey et al. on a Pd-based metallic glass. In their study, de Hey et al. were able to determine some of the parameters used in the elastoviscoplastic formulation through DSC modeling coupled with mechanical tests, and the consistency of the two viewpoints was assessed.
Mager, P P; Rothe, H
1990-10-01
Multicollinearity of physicochemical descriptors leads to serious consequences in quantitative structure-activity relationship (QSAR) analysis, such as incorrect estimators and test statistics of the regression coefficients of the ordinary least-squares (OLS) model usually applied to QSARs. Besides the diagnosis of simple collinearity, principal component regression analysis (PCRA) also allows the diagnosis of various types of multicollinearity. Only if the absolute values of the PCRA estimators are order statistics that decrease monotonically can the effects of multicollinearity be circumvented. Otherwise, obscure phenomena may be observed, such as good data recognition but low predictive power of a QSAR model.
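A generic sketch of principal component regression on deliberately collinear descriptors may help illustrate the idea; this is not the authors' code, and the synthetic data below stand in for real physicochemical descriptors and activities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, deliberately collinear descriptor matrix (n compounds x p descriptors)
n, p = 40, 5
base = rng.normal(size=(n, 2))
X = np.column_stack([base[:, 0], base[:, 1],
                     base[:, 0] + 0.01 * rng.normal(size=n),   # near-duplicate of column 0
                     rng.normal(size=n),
                     base[:, 1] - base[:, 0]])                  # exact linear combination
y = 1.5 * base[:, 0] - 0.8 * base[:, 1] + 0.1 * rng.normal(size=n)

# Principal component regression: regress y on the leading principal components of X
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2                                   # number of retained components
scores = Xc @ Vt[:k].T                  # principal component scores
gamma, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
beta_pcr = Vt[:k].T @ gamma             # coefficients back in descriptor space
print("PCR coefficients:", np.round(beta_pcr, 3))
```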
Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang
2010-07-01
We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root- n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided.
Randomized shortest-path problems: two related models.
Saerens, Marco; Achbany, Youssef; Fouss, François; Yen, Luh
2009-08-01
This letter addresses the problem of designing the transition probabilities of a finite Markov chain (the policy) in order to minimize the expected cost for reaching a destination node from a source node while maintaining a fixed level of entropy spread throughout the network (the exploration). It is motivated by the following scenario. Suppose you have to route agents through a network in some optimal way, for instance, by minimizing the total travel cost; nothing particular up to now, you could use a standard shortest-path algorithm. Suppose, however, that you want to avoid pure deterministic routing policies in order, for instance, to allow some continual exploration of the network, avoid congestion, or avoid complete predictability of your routing strategy. In other words, you want to introduce some randomness or unpredictability in the routing policy (i.e., the routing policy is randomized). This problem, which will be called the randomized shortest-path problem (RSP), is investigated in this work. The global level of randomness of the routing policy is quantified by the expected Shannon entropy spread throughout the network and is provided a priori by the designer. Then, necessary conditions to compute the optimal randomized policy (minimizing the expected routing cost) are derived. Iterating these necessary conditions, reminiscent of Bellman's value iteration equations, allows computing an optimal policy, that is, a set of transition probabilities in each node. Interestingly and surprisingly enough, this first model, while formulated in a totally different framework, is equivalent to Akamatsu's model (1996), appearing in transportation science, for a special choice of the entropy constraint. We therefore revisit Akamatsu's model by recasting it into a sum-over-paths statistical physics formalism allowing easy derivation of all the quantities of interest in an elegant, unified way. For instance, it is shown that the unique optimal policy can be obtained by solving a simple linear system of equations. This second model is therefore more convincing because of its computational efficiency and soundness. Finally, simulation results obtained on simple, illustrative examples show that the models behave as expected.
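The value-iteration-like recursion alluded to above can be illustrated with an entropy-regularized ("soft") Bellman sketch. This is a generic illustration in the spirit of randomized shortest paths, not the authors' exact algorithm; the cost matrix and the value of theta are arbitrary.

```python
import numpy as np

def soft_routing_policy(cost, dest, theta, n_iter=200):
    """Entropy-regularized ('soft') value iteration over an edge-cost matrix.

    cost[i, j] is the cost of moving i -> j (np.inf where no edge exists).
    Large theta gives a near-deterministic shortest-path policy; small theta
    spreads probability over more paths (more exploration).
    """
    n = cost.shape[0]
    v = np.zeros(n)
    for _ in range(n_iter):
        q = -theta * (cost + v[None, :])               # soft "cost-to-go" per edge
        v = -np.log(np.exp(q).sum(axis=1)) / theta
        v[dest] = 0.0                                   # staying at the destination is free
    q = -theta * (cost + v[None, :])
    policy = np.exp(q - q.max(axis=1, keepdims=True))
    return policy / policy.sum(axis=1, keepdims=True)   # transition probabilities per node

inf = np.inf
C = np.array([[inf, 1.0, 4.0, inf],
              [inf, inf, 1.0, 5.0],
              [inf, inf, inf, 1.0],
              [inf, inf, inf, 0.0]])   # absorbing self-loop at the destination (node 3)
print(np.round(soft_routing_policy(C, dest=3, theta=2.0), 2))
```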
Nonlinear multiplicative dendritic integration in neuron and network models
Zhang, Danke; Li, Yuanqing; Rasch, Malte J.; Wu, Si
2013-01-01
Neurons receive inputs from thousands of synapses distributed across dendritic trees of complex morphology. It is known that dendritic integration of excitatory and inhibitory synapses can be highly non-linear in reality and can heavily depend on the exact location and spatial arrangement of inhibitory and excitatory synapses on the dendrite. Despite this known fact, most neuron models used in artificial neural networks today still only describe the voltage potential of a single somatic compartment and assume a simple linear summation of all individual synaptic inputs. We here suggest a new biophysically motivated derivation of a single compartment model that integrates the non-linear effects of shunting inhibition, where an inhibitory input on the route of an excitatory input to the soma cancels or "shunts" the excitatory potential. In particular, our integration of non-linear dendritic processing into the neuron model follows a simple multiplicative rule, suggested recently by experiments, and allows for strict mathematical treatment of network effects. Using our new formulation, we further devised a spiking network model where inhibitory neurons act as global shunting gates, and show that the network exhibits persistent activity in a low firing regime. PMID:23658543
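A minimal point-model sketch of a multiplicative (divisive) shunting rule of the kind described above. The specific functional form 1/(1 + k * g_i) and the example conductances are illustrative assumptions, not the equations of the paper; the point is only that the same total inhibition produces different somatic drive depending on where it sits.

```python
import numpy as np

def somatic_drive(exc, inh, shunt_gain=1.0):
    """Total somatic drive when each excitatory input is divisively shunted.

    exc[i] is the excitatory conductance on branch i; inh[i] is the inhibitory
    conductance lying on that branch's path to the soma (illustrative form).
    """
    exc = np.asarray(exc, dtype=float)
    inh = np.asarray(inh, dtype=float)
    return float(np.sum(exc / (1.0 + shunt_gain * inh)))

# Same total excitation and inhibition, different spatial arrangement:
print(somatic_drive([1.0, 1.0], [2.0, 0.0]))   # inhibition concentrated on one branch
print(somatic_drive([1.0, 1.0], [1.0, 1.0]))   # inhibition spread across both branches
```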
Numerical model for the thermal behavior of thermocline storage tanks
NASA Astrophysics Data System (ADS)
Ehtiwesh, Ismael A. S.; Sousa, Antonio C. M.
2018-03-01
Energy storage is a critical factor in the advancement of solar thermal power systems for the sustained delivery of electricity. In addition, the incorporation of thermal energy storage into the operation of concentrated solar power systems (CSPs) offers the potential of delivering electricity without fossil-fuel backup even during peak demand, independent of weather conditions and daylight. Despite this potential, some areas of the design and performance of thermocline systems still require further attention for future incorporation in commercial CSPs, particularly their operation and control. The present study therefore develops a simple but efficient numerical model for the comprehensive analysis of thermocline storage systems, aiming at a better understanding of their dynamic temperature response. The validation results, despite the simplifying assumptions of the numerical model, agree well with the experiments for the time evolution of the thermocline region. Three different cases are considered to test the versatility of the numerical model; for the particular type of storage tank with a top round impingement inlet, a simple analytical model was developed to take into consideration the increased turbulence level in the mixing region. The numerical predictions for the three cases are in good overall agreement with the experimental results.
NASA Astrophysics Data System (ADS)
Osman, Yassin Z.; Bruen, Michael P.
2002-07-01
Seepage from a stream, which partially penetrates an unconfined alluvial aquifer, is studied for the case when the water table falls below the streambed level. Inadequacies are identified in current modelling approaches to this situation. A simple and improved method of incorporating such seepage into groundwater models is presented. This considers the effect on seepage flow of suction in the unsaturated part of the aquifer below a disconnected stream and allows for the variation of seepage with water table fluctuations. The suggested technique is incorporated into the saturated code MODFLOW and is tested by comparing its predictions with those of SWMS_2D, a widely used variably saturated model simulating water flow and solute transport in two-dimensional variably saturated media. Comparisons are made of both seepage flows and local mounding of the water table. The suggested technique compares very well with the results of the variably saturated model simulations. Most currently used approaches are shown to underestimate the seepage and associated local water table mounding, sometimes substantially. The proposed method is simple, easy to implement and requires only a small amount of additional data about the aquifer hydraulic properties.
Scalar-tensor extension of the ΛCDM model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Algoner, W.C.; Velten, H.E.S.; Zimdahl, W., E-mail: w.algoner@cosmo-ufes.org, E-mail: velten@pq.cnpq.br, E-mail: winfried.zimdahl@pq.cnpq.br
2016-11-01
We construct a cosmological scalar-tensor-theory model in which the Brans-Dicke type scalar Φ enters the effective (Jordan-frame) Hubble rate as a simple modification of the Hubble rate of the ΛCDM model. This allows us to quantify differences between the background dynamics of scalar-tensor theories and general relativity (GR) in a transparent and observationally testable manner in terms of one single parameter. Problems of the mapping of the scalar-field degrees of freedom on an effective fluid description in a GR context are discussed. Data from supernovae, the differential age of old galaxies and baryon acoustic oscillations are shown to strongly limit potential deviations from the standard model.
Terakado, Shingo; Glass, Thomas R; Sasaki, Kazuhiro; Ohmura, Naoya
2014-01-01
A simple new model for estimating the screening performance (false positive and false negative rates) of a given test for a specific sample population is presented. The model is shown to give good results on a test population, and is used to estimate the performance on a sampled population. Using the model developed in conjunction with regulatory requirements and the relative costs of the confirmatory and screening tests allows evaluation of the screening test's utility in terms of cost savings. Testers can use the methods developed to estimate the utility of a screening program using available screening tests with their own sample populations.
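The cost-utility comparison described here can be sketched with expected-value arithmetic. The prevalence, error rates, and costs below are placeholders, and the rule of confirming every screening positive is an assumption for illustration, not the paper's protocol.

```python
def screening_cost_per_sample(prevalence, sensitivity, specificity,
                              cost_screen, cost_confirm):
    """Expected per-sample cost when positives from a screening test are confirmed.

    Assumes every screening positive (true or false) is sent to the confirmatory test.
    """
    p_screen_positive = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
    return cost_screen + p_screen_positive * cost_confirm

# Placeholder numbers: screening is cheap, confirmation is expensive, positives are rare
with_screening = screening_cost_per_sample(prevalence=0.02, sensitivity=0.95,
                                            specificity=0.90, cost_screen=5.0,
                                            cost_confirm=80.0)
confirm_everything = 80.0
print(f"screen-then-confirm: {with_screening:.2f}  confirm all: {confirm_everything:.2f}")
# The false-negative rate (missed positives) under screening is prevalence * (1 - sensitivity).
```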
NASA Astrophysics Data System (ADS)
Malard, J. J.; Baig, A. I.; Hassanzadeh, E.; Adamowski, J. F.; Tuy, H.; Melgar-Quiñonez, H.
2016-12-01
Model coupling is a crucial step to constructing many environmental models, as it allows for the integration of independently-built models representing different system sub-components to simulate the entire system. Model coupling has been of particular interest in combining socioeconomic System Dynamics (SD) models, whose visual interface facilitates their direct use by stakeholders, with more complex physically-based models of the environmental system. However, model coupling processes are often cumbersome and inflexible and require extensive programming knowledge, limiting their potential for continued use by stakeholders in policy design and analysis after the end of the project. Here, we present Tinamit, a flexible Python-based model-coupling software tool whose easy-to-use API and graphical user interface make the coupling of stakeholder-built SD models with physically-based models rapid, flexible and simple for users with limited to no coding knowledge. The flexibility of the system allows end users to modify the SD model as well as the linking variables between the two models themselves with no need for recoding. We use Tinamit to couple a stakeholder-built socioeconomic model of soil salinization in Pakistan with the physically-based soil salinity model SAHYSMOD. As climate extremes increase in the region, policies to slow or reverse soil salinity buildup are increasing in urgency and must take both socioeconomic and biophysical spheres into account. We use the Tinamit-coupled model to test the impact of integrated policy options (economic and regulatory incentives to farmers) on soil salinity in the region in the face of future climate change scenarios. Use of the Tinamit model allowed for rapid and flexible coupling of the two models, allowing the end user to continue making model structure and policy changes. In addition, the clear interface (in contrast to most model coupling code) makes the final coupled model easily accessible to stakeholders with limited technical background.
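Conceptually, the coupling loop described above alternates between the socioeconomic and physical models each time step, exchanging the linking variables. The toy classes below are purely illustrative: they are not Tinamit's API, and the relationships and coefficients are invented; they only show the shape of such an exchange.

```python
class SDModel:
    """Toy socioeconomic stock-and-flow model: farmers adjust irrigation to salinity."""
    def __init__(self):
        self.irrigation = 1.0
    def step(self, salinity):
        # Illustrative rule: cut irrigation as salinity rises, down to a floor
        self.irrigation = max(0.2, 1.0 - 0.05 * salinity)
        return self.irrigation

class SalinityModel:
    """Toy physical model: salinity builds up with irrigation and leaches otherwise."""
    def __init__(self):
        self.salinity = 4.0
    def step(self, irrigation):
        self.salinity += 0.8 * irrigation - 0.5
        return self.salinity

# Coupling loop: each step, pass salinity to the SD model and irrigation back
sd, phys = SDModel(), SalinityModel()
for year in range(10):
    irrigation = sd.step(phys.salinity)
    salinity = phys.step(irrigation)
    print(f"year {year}: irrigation={irrigation:.2f}, salinity={salinity:.2f}")
```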
ECOLOGICAL THEORY. A general consumer-resource population model.
Lafferty, Kevin D; DeLeo, Giulio; Briggs, Cheryl J; Dobson, Andrew P; Gross, Thilo; Kuris, Armand M
2015-08-21
Food-web dynamics arise from predator-prey, parasite-host, and herbivore-plant interactions. Models for such interactions include up to three consumer activity states (questing, attacking, consuming) and up to four resource response states (susceptible, exposed, ingested, resistant). Articulating these states into a general model allows for dissecting, comparing, and deriving consumer-resource models. We specify this general model for 11 generic consumer strategies that group mathematically into predators, parasites, and micropredators and then derive conditions for consumer success, including a universal saturating functional response. We further show how to use this framework to create simple models with a common mathematical lineage and transparent assumptions. Underlying assumptions, missing elements, and composite parameters are revealed when classic consumer-resource models are derived from the general model. Copyright © 2015, American Association for the Advancement of Science.
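One classic model that such a framework recovers as a special case is the Rosenzweig-MacArthur consumer-resource system with a saturating (Holling type II) functional response. The sketch below uses arbitrary illustrative parameters and is not taken from the paper.

```python
import numpy as np

def rosenzweig_macarthur(state, r=1.0, K=3.0, a=1.0, h=0.5, e=0.6, m=0.3):
    """Classic consumer-resource ODE with a saturating (Holling type II) functional response."""
    R, C = state
    feeding = a * R / (1.0 + a * h * R)           # per-consumer intake saturates in R
    dR = r * R * (1.0 - R / K) - feeding * C      # logistic resource minus consumption
    dC = e * feeding * C - m * C                  # conversion efficiency minus mortality
    return np.array([dR, dC])

# Simple forward-Euler integration with illustrative parameters
state = np.array([2.0, 0.5])
dt = 0.01
for _ in range(int(200 / dt)):
    state = state + dt * rosenzweig_macarthur(state)
print("resource, consumer after 200 time units:", np.round(state, 3))
```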
Kurosaki, Masayuki; Hiramatsu, Naoki; Sakamoto, Minoru; Suzuki, Yoshiyuki; Iwasaki, Manabu; Tamori, Akihiro; Matsuura, Kentaro; Kakinuma, Sei; Sugauchi, Fuminaka; Sakamoto, Naoya; Nakagawa, Mina; Izumi, Namiki
2012-03-01
Assessment of the risk of hepatocellular carcinoma (HCC) development is essential for formulating personalized surveillance or antiviral treatment plan for chronic hepatitis C. We aimed to build a simple model for the identification of patients at high risk of developing HCC. Chronic hepatitis C patients followed for at least 5 years (n=1003) were analyzed by data mining to build a predictive model for HCC development. The model was externally validated using a cohort of 1072 patients (472 with sustained virological response (SVR) and 600 with nonSVR to PEG-interferon plus ribavirin therapy). On the basis of factors such as age, platelet, albumin, and aspartate aminotransferase, the HCC risk prediction model identified subgroups with high-, intermediate-, and low-risk of HCC with a 5-year HCC development rate of 20.9%, 6.3-7.3%, and 0-1.5%, respectively. The reproducibility of the model was confirmed through external validation (r(2)=0.981). The 10-year HCC development rate was also significantly higher in the high-and intermediate-risk group than in the low-risk group (24.5% vs. 4.8%; p<0.0001). In the high-and intermediate-risk group, the incidence of HCC development was significantly reduced in patients with SVR compared to those with nonSVR (5-year rate, 9.5% vs. 4.5%; p=0.040). The HCC risk prediction model uses simple and readily available factors and identifies patients at a high risk of HCC development. The model allows physicians to identify patients requiring HCC surveillance and those who benefit from IFN therapy to prevent HCC. Copyright © 2011 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
Bridging the scales in a eulerian air quality model to assess megacity export of pollution
NASA Astrophysics Data System (ADS)
Siour, G.; Colette, A.; Menut, L.; Bessagnet, B.; Coll, I.; Meleux, F.
2013-08-01
In Chemistry Transport Models (CTMs), spatial scale interactions are often represented through off-line coupling between large and small scale models. However, those nested configurations cannot account for the impact of the local scale on its surroundings. This issue can be critical in areas exposed to air mass recirculation (sea breeze cells) or around regions with sharp pollutant emission gradients (large cities). Such phenomena can still be captured by means of adaptive gridding, two-way nesting or model nudging, but these approaches remain relatively costly. We present here the development and the results of a simple alternative multi-scale approach making use of a horizontally stretched grid in the Eulerian CTM CHIMERE. This method, called "stretching" or "zooming", consists in the introduction of local zooms in a single chemistry-transport simulation. It allows bridging online the spatial scales from the city (∼1 km resolution) to the continental area (∼50 km resolution). The CHIMERE model was run over a continental European domain, zoomed over the BeNeLux (Belgium, Netherlands and Luxembourg) area. We demonstrate that, compared with one-way nesting, the zooming method allows the expression of a significant feedback of the refined domain towards the large scale: around the city cluster of BeNeLux, NO2 and O3 scores are improved. NO2 variability around BeNeLux is also better accounted for, and the net primary pollutant flux transported back towards BeNeLux is reduced. Although the results could not be validated for ozone over BeNeLux, we show that the zooming approach provides a simple and immediate way to better represent scale interactions within a CTM, and constitutes a useful tool for apprehending the hot topic of megacities within their continental environment.
Hanson, A A; Moon, R D; Wright, R J; Hunt, T E; Hutchison, W D
2015-08-01
Western bean cutworm, Striacosta albicosta (Smith) (Lepidoptera: Noctuidae), is a native, univoltine pest of corn and dry beans in North America. The current degree-day model for predicting a specified percentage of yearly moth flight involves heat unit accumulation above 10°C after 1 May. However, because the moth's observed range has expanded into the northern and eastern United States, there is concern that suitable temperatures before May could allow for significant S. albicosta development. Daily blacklight moth catch and temperature data from four Nebraska locations were used to construct degree-day models using simple or sine-wave methods, starting dates between 1 January and 1 May, and lower (-5 to 15°C) and upper (20 to 43.3°C) developmental thresholds. Predicted dates of flight from these models were compared with observed flight dates using independent datasets to assess model performance. Model performance was assessed with the concordance correlation coefficient to concurrently evaluate precision and accuracy. The best model for predicting timing of S. albicosta flight used simple degree-day calculations beginning on 1 March, a 3.3°C (38°F) lower threshold, and a 23.9°C (75°F) upper threshold. The revised cumulative flight model indicated field scouting to estimate moth egg density at the time of 25% flight should begin when 1,432 degree-days (2,577 degree-days °F) have accumulated. These results underscore the importance of assessing multiple parameters in phenological models and utilizing appropriate assessment methods, which in this case may allow for improved timing of field scouting for S. albicosta. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
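The simple (averaging) degree-day calculation with the 3.3 °C lower and 23.9 °C upper thresholds reported above can be sketched as follows. The horizontal-cutoff variant used here and the synthetic temperature series are illustrative assumptions, not the authors' data or exact method.

```python
def simple_degree_days(tmin_c, tmax_c, lower=3.3, upper=23.9):
    """Daily degree-days by the simple averaging method with horizontal cutoffs.

    Temperatures above the upper threshold are capped; days whose clamped mean
    falls below the lower threshold contribute zero.
    """
    tmax_c = min(tmax_c, upper)                      # cap at the upper developmental threshold
    tmin_c = min(max(tmin_c, lower), upper)          # clamp into the developmental window
    return max((tmin_c + tmax_c) / 2.0 - lower, 0.0)

# Accumulate from a synthetic daily series starting 1 March until 1432 DD is reached
daily = [(2.0 + 0.08 * d, 12.0 + 0.1 * d) for d in range(250)]   # (tmin, tmax) in deg C
total, day = 0.0, 0
while total < 1432.0 and day < len(daily):
    total += simple_degree_days(*daily[day])
    day += 1
print(f"25% flight predicted about day {day} after 1 March ({total:.0f} degree-days)")
```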
NASA Astrophysics Data System (ADS)
Hornung, Thomas; Simon, Kai; Lausen, Georg
Combining information from different Web sources often results in a tedious and repetitive process; for example, even simple information requests might require iterating over the result list of one Web query and using each single result as input for a subsequent query. One approach to such chained queries is data-centric mashups, which allow the data flow to be modelled visually as a graph, where the nodes represent the data sources and the edges the data flow.
Seismic Interface Waves in Coastal Waters: A Review
1980-11-15
Being at the low-frequency end of classical sonar activity and at the high-frequency end of seismic research, the propagation of infrasonic energy...water areas. Certainly this and other seismic detection methods will never replace the highly-developed sonar techniques but in coastal waters they...for many sonar purposes [5, 85 to 90] shows that very simple bottom models may already be sufficient to make allowance for the influence of the sea
Sampling and position effects in the Electronically Steered Thinned Array Radiometer (ESTAR)
NASA Technical Reports Server (NTRS)
Katzberg, Stephen J.
1993-01-01
A simple engineering level model of the Electronically Steered Thinned Array Radiometer (ESTAR) is developed that allows an identification of the major effects of the sampling process involved with this technique. It is shown that the ESTAR approach is sensitive to aliasing and has a highly non-uniform sensitivity profile. It is further shown that the ESTAR approach is strongly sensitive to position displacements of the low-density sampling antenna elements.
Fadel, Ali; Lemaire, Bruno J; Vinçon-Leite, Brigitte; Atoui, Ali; Slim, Kamal; Tassin, Bruno
2017-09-01
Many freshwater bodies worldwide that suffer from harmful algal blooms would benefit for their management from a simple ecological model that requires few field data, e.g. for early warning systems. Beyond a certain degree, adding processes to ecological models can reduce model predictive capabilities. In this work, we assess whether a simple ecological model without nutrients is able to describe the succession of cyanobacterial blooms of different species in a hypereutrophic reservoir and help understand the factors that determine these blooms. In our study site, Karaoun Reservoir, Lebanon, cyanobacteria Aphanizomenon ovalisporum and Microcystis aeruginosa alternatively bloom. A simple configuration of the model DYRESM-CAEDYM was used; both cyanobacteria were simulated, with constant vertical migration velocity for A. ovalisporum, with vertical migration velocity dependent on light for M. aeruginosa and with growth limited by light and temperature and not by nutrients for both species. The model was calibrated on two successive years with contrasted bloom patterns and high variations in water level. It was able to reproduce the measurements; it showed a good performance for the water level (root-mean-square error (RMSE) lower than 1 m, annual variation of 25 m), water temperature profiles (RMSE of 0.22-1.41 °C, range 13-28 °C) and cyanobacteria biomass (RMSE of 1-57 μg Chl a L⁻¹, range 0-206 μg Chl a L⁻¹). The model also helped understand the succession of blooms in both years. The model results suggest that the higher growth rate of M. aeruginosa during favourable temperature and light conditions allowed it to outgrow A. ovalisporum. Our results show that simple model configurations can be sufficient not only for theoretical works when few major processes can be identified but also for operational applications. This approach could be transposed on other hypereutrophic lakes and reservoirs to describe the competition between dominant phytoplankton species, contribute to early warning systems or be used for management scenarios.
NASA Astrophysics Data System (ADS)
Steckloff, Jordan; Lindell, Rebecca
2016-10-01
Teaching science by having students manipulate real data is a popular trend in astronomy and planetary science education. However, many existing activities simply couple this data with traditional "cookbook" style verification labs. As with most topics within science, this instructional technique does not enhance the average students' understanding of the phenomena being studied. Here we present a methodology for developing "science by doing" activities that incorporate the latest discoveries in planetary science with up-to-date constructivist pedagogy to teach advanced concepts in Physics and Astronomy. In our methodology, students are first guided to understand, analyze, and plot real raw scientific data; develop and test physical and computational models to understand and interpret the data; and finally use their models to make predictions about the topic being studied and test them with real data. To date, two activities have been developed according to this methodology: Understanding Asteroids through their Light Curves (hereafter "Asteroid Activity"), and Understanding Exoplanetary Systems through Simple Harmonic Motion (hereafter "Exoplanet Activity"). The Asteroid Activity allows students to explore light curves available on the Asteroid Light Curve Database (ALCDB) to discover general properties of asteroids, including their internal structure, strength, and mechanism of asteroid moon formation. The Exoplanet Activity allows students to investigate the masses and semi-major axes of exoplanets in a system by comparing the radial velocity motion of their host star to that of a coupled simple harmonic oscillator. Students then explore how noncircular orbits lead to deviations from simple harmonic motion. These activities will be field tested during the Fall 2016 semester in advanced undergraduate mechanics and astronomy courses at a large Midwestern STEM-focused university. We will present the development methodologies for these activities, description of the activities, and results from the pre-tests.
Targeting excited states in all-trans polyenes with electron-pair states.
Boguslawski, Katharina
2016-12-21
Wavefunctions restricted to electron pair states are promising models for strongly correlated systems. Specifically, the pair Coupled Cluster Doubles (pCCD) ansatz allows us to accurately describe bond dissociation processes and heavy-element containing compounds with multiple quasi-degenerate single-particle states. Here, we extend the pCCD method to model excited states using the equation of motion (EOM) formalism. As the cluster operator of pCCD is restricted to electron-pair excitations, EOM-pCCD allows us to target excited electron-pair states only. To model singly excited states within EOM-pCCD, we modify the configuration interaction ansatz of EOM-pCCD to contain also single excitations. Our proposed model represents a simple and cost-effective alternative to conventional EOM-CC methods to study singly excited electronic states. The performance of the excited state models is assessed against the lowest-lying excited states of the uranyl cation and the two lowest-lying excited states of all-trans polyenes. Our numerical results suggest that EOM-pCCD including single excitations is a good starting point to target singly excited states.
Simple and Effective Algorithms: Computer-Adaptive Testing.
ERIC Educational Resources Information Center
Linacre, John Michael
Computer-adaptive testing (CAT) allows improved security, greater scoring accuracy, shorter testing periods, quicker availability of results, and reduced guessing and other undesirable test behavior. Simple approaches can be applied by the classroom teacher, or other content specialist, who possesses simple computer equipment and elementary…
Wang, Yi-Shan; Potts, Jonathan R
2017-03-07
Recent advances in animal tracking have allowed us to uncover the drivers of movement in unprecedented detail. This has enabled modellers to construct ever more realistic models of animal movement, which aid in uncovering detailed patterns of space use in animal populations. Partial differential equations (PDEs) provide a popular tool for mathematically analysing such models. However, their construction often relies on simplifying assumptions which may greatly affect the model outcomes. Here, we analyse the effect of various PDE approximations on the analysis of some simple movement models, including a biased random walk, central-place foraging processes and movement in heterogeneous landscapes. Perhaps the most commonly-used PDE method dates back to a seminal paper of Patlak from 1953. However, our results show that this can be a very poor approximation in even quite simple models. On the other hand, more recent methods, based on transport equation formalisms, can provide more accurate results, as long as the kernel describing the animal's movement is sufficiently smooth. When the movement kernel is not smooth, we show that both the older and newer methods can lead to quantitatively misleading results. Our detailed analysis will aid future researchers in the appropriate choice of PDE approximation for analysing models of animal movement. Copyright © 2017 Elsevier Ltd. All rights reserved.
Castagné, Vincent; Moser, Paul; Roux, Sylvain; Porsolt, Roger D
2011-04-01
The development of antidepressants requires simple rodent behavioral tests for initial screening before undertaking more complex preclinical tests and clinical evaluation. Presented in the unit are two widely used screening tests used for antidepressants, the forced swim (also termed behavioral despair) test in the rat and mouse, and the tail suspension test in the mouse. These tests have good predictive validity and allow rapid and economical detection of substances with potential antidepressant-like activity. The behavioral despair and the tail suspension tests are based on the same principle: measurement of the duration of immobility when rodents are exposed to an inescapable situation. The majority of clinically used antidepressants decrease the duration of immobility. Antidepressants also increase the latency to immobility, and this additional measure can increase the sensitivity of the behavioral despair test in the mouse for certain classes of antidepressant. Testing of new substances in the behavioral despair and tail suspension tests allows a simple assessment of their potential antidepressant activity by the measurement of their effect on immobility. © 2011 by John Wiley & Sons, Inc.
NASA Technical Reports Server (NTRS)
Caulfield, John; Crosson, William L.; Inguva, Ramarao; Laymon, Charles A.; Schamschula, Marius
1998-01-01
This is a follow-up to the preceding presentation by Crosson and Schamschula. The grid size for remote microwave measurements is much coarser than the hydrological model computational grids. To validate the hydrological models with measurements, we propose mechanisms to disaggregate the microwave measurements to allow comparison with outputs from the hydrological models. Weighted interpolation and Bayesian methods are proposed to facilitate the comparison. While remote measurements occur at a large scale, they reflect underlying small-scale features. We can give continuing estimates of the small-scale features by correcting the simple zeroth-order estimates of each small-scale model with each large-scale measurement, using a straightforward method based on Kalman filtering.
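A scalar sketch of a Kalman-type correction of fine-scale model estimates with one coarse areal-mean observation. The observation operator (a simple mean over the fine cells), the variances, and the soil-moisture numbers are illustrative assumptions, not the authors' scheme.

```python
import numpy as np

def kalman_disaggregate(fine_estimates, fine_var, coarse_obs, coarse_var):
    """Correct fine-scale model estimates with one coarse (areal-mean) observation.

    The coarse footprint is assumed to be the simple mean of the fine cells; the
    innovation is distributed over the cells by a Kalman-type gain (diagonal prior).
    """
    x = np.asarray(fine_estimates, dtype=float)
    n = x.size
    h = np.full(n, 1.0 / n)                     # observation operator: areal mean
    innovation = coarse_obs - h @ x
    s = h @ (fine_var * h) + coarse_var         # innovation variance
    gain = (fine_var * h) / s                   # Kalman gain, one column
    return x + gain * innovation

model_cells = np.array([0.21, 0.28, 0.33, 0.26])   # e.g. soil moisture from the model
updated = kalman_disaggregate(model_cells, fine_var=0.02**2,
                              coarse_obs=0.30, coarse_var=0.01**2)
print(np.round(updated, 3))
```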
Quantum decay model with exact explicit analytical solution
NASA Astrophysics Data System (ADS)
Marchewka, Avi; Granot, Er'El
2009-01-01
A simple decay model is introduced. The model comprises a point potential well, which experiences an abrupt change. Due to the temporal variation, the initial quantum state can either escape from the well or stay localized as a new bound state. The model allows for an exact analytical solution while having the necessary features of a decay process. The results show that the decay is never exponential, as classical dynamics predicts. Moreover, at short times the decay has a fractional power law, which differs from perturbation quantum method predictions. At long times the decay includes oscillations with an envelope that decays algebraically. This is a model where the final state can be either continuous or localized, and that has an exact analytical solution.
Carrió, Pau; López, Oriol; Sanz, Ferran; Pastor, Manuel
2015-01-01
Computational models based on Quantitative Structure-Activity Relationship (QSAR) methodologies are widely used tools for predicting the biological properties of new compounds. In many instances, such models are used routinely in industry (e.g. the food, cosmetic or pharmaceutical industries) for the early assessment of the biological properties of new compounds. However, most of the tools currently available for developing QSAR models are not well suited for supporting the whole QSAR model life cycle in production environments. We have developed eTOXlab, an open source modeling framework designed to be used at the core of a self-contained virtual machine that can be easily deployed in production environments, providing predictions as web services. eTOXlab consists of a collection of object-oriented Python modules with methods mapping common tasks of standard modeling workflows. This framework allows building and validating QSAR models as well as predicting the properties of new compounds using either a command line interface or a graphical user interface (GUI). Simple models can be easily generated by setting a few parameters, while more complex models can be implemented by overriding pieces of the original source code. eTOXlab benefits from the object-oriented capabilities of Python for providing high flexibility: any model implemented using eTOXlab inherits the features implemented in the parent model, like common tools and services or the automatic exposure of the models as prediction web services. The particular eTOXlab architecture as a self-contained, portable prediction engine allows building models with confidential information within corporate facilities, which can be safely exported and used for prediction without disclosing the structures of the training series. The software presented here provides full support for the specific needs of users that want to develop, use and maintain predictive models in corporate environments. The technologies used by eTOXlab (web services, VM, object-oriented programming) provide an elegant solution to common practical issues; the system can be installed easily in heterogeneous environments and integrates well with other software. Moreover, the system provides a simple and safe solution for building models with confidential structures that can be shared without disclosing sensitive information.
Backup key generation model for one-time password security protocol
NASA Astrophysics Data System (ADS)
Jeyanthi, N.; Kundu, Sourav
2017-11-01
The use of one-time passwords (OTP) has ushered new life into the existing authentication protocols used by the software industry. It introduced a second layer of security to traditional username-password authentication, thus coining the term two-factor authentication. One of the drawbacks of this protocol is the unreliability of the hardware token at the time of authentication. This paper proposes a simple backup key model that can be associated with a real-world application's user database, which would allow a user to circumvent the second authentication stage in the event of unavailability of the hardware token.
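As a rough illustration of the idea (a hypothetical scheme, not the paper's exact protocol), a server could store hashes of a few single-use backup keys alongside the user record and accept one of them when the hardware token is unavailable:

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch of a backup-key fallback for OTP: at enrolment the
# server stores hashes of a few single-use backup keys next to the user
# record; if the hardware token is unavailable, one backup key is accepted
# instead of the OTP and is then invalidated.

def enrol_backup_keys(n=5):
    keys = [secrets.token_hex(10) for _ in range(n)]           # shown to the user once
    stored = {hashlib.sha256(k.encode()).hexdigest() for k in keys}
    return keys, stored

def redeem_backup_key(stored_hashes, candidate):
    digest = hashlib.sha256(candidate.encode()).hexdigest()
    for h in stored_hashes:
        if hmac.compare_digest(h, digest):                     # constant-time compare
            stored_hashes.remove(h)                            # single use only
            return True
    return False

user_keys, db_hashes = enrol_backup_keys()
print(redeem_backup_key(db_hashes, user_keys[0]))   # True, key consumed
print(redeem_backup_key(db_hashes, user_keys[0]))   # False on reuse
```

Storing only hashes and invalidating each key after use keeps the fallback from weakening the second factor more than necessary.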
Generic model of morphological changes in growing colonies of fungi
NASA Astrophysics Data System (ADS)
López, Juan M.; Jensen, Henrik J.
2002-02-01
Fungal colonies are able to exhibit different morphologies depending on the environmental conditions. This allows them to cope with and adapt to external changes. When grown in solid or semisolid media the bulk of the colony is compact and several morphological transitions have been reported to occur as the external conditions are varied. Here we show how a unified simple mathematical model, which includes the effect of the accumulation of toxic metabolites, can account for the morphological changes observed. Our numerical results are in excellent agreement with experiments carried out with the fungus Aspergillus oryzae on solid agar.
Ultrastrong Coupling Few-Photon Scattering Theory
NASA Astrophysics Data System (ADS)
Shi, Tao; Chang, Yue; García-Ripoll, Juan José
2018-04-01
We study the scattering of individual photons by a two-level system ultrastrongly coupled to a waveguide. The scattering is elastic for a broad range of couplings and can be described with an effective U(1)-symmetric Hamiltonian. This simple model allows the prediction of scattering resonance line shapes, validated up to α = 0.3, and close to the Toulouse point α = 1/2, where inelastic scattering becomes relevant. Our predictions model experiments with superconducting circuits [P. Forn-Díaz et al., Nat. Phys. 13, 39 (2017), 10.1038/nphys3905] and can be extended to study multiphoton scattering.
Radiatively induced neutrino mass model with flavor dependent gauge symmetry
NASA Astrophysics Data System (ADS)
Lee, SangJong; Nomura, Takaaki; Okada, Hiroshi
2018-06-01
We study a radiative seesaw model at the one-loop level with a flavor-dependent gauge symmetry U(1)μ-τ, in which we consider bosonic dark matter. We also analyze the constraints from lepton flavor violation, the muon g-2, the relic density of dark matter, and collider physics, and carry out a numerical analysis to search for the allowed parameter regions which satisfy all the constraints and to investigate some predictions. Furthermore, we find that a simple but ad hoc hypothesis induces a specific two-zero texture in the inverse mass matrix, which provides several predictions such as a specific pattern of the Dirac CP phase.
NASA Technical Reports Server (NTRS)
Gorski, Krzysztof M.
1993-01-01
Simple, easy-to-implement elementary-function approximations are introduced for the spectral window functions needed in calculations of model predictions of the cosmic microwave background (CMB) anisotropy. These approximations allow the investigator to obtain model delta T/T predictions in terms of single integrals over the power spectrum of cosmological perturbations and to avoid the necessity of performing additional integrations. The high accuracy of these approximations is demonstrated here for CDM-theory-based calculations of the expected delta T/T signal in several experiments searching for the CMB anisotropy.
NASA Astrophysics Data System (ADS)
Doulamis, A.; Doulamis, N.; Ioannidis, C.; Chrysouli, C.; Grammalidis, N.; Dimitropoulos, K.; Potsiou, C.; Stathopoulou, E.-K.; Ioannides, M.
2015-08-01
Outdoor large-scale cultural sites are mostly sensitive to environmental, natural and human-made factors, implying an imminent need for a spatio-temporal assessment to identify regions of potential cultural interest (material degradation, structuring, conservation). On the other hand, quite different actors are involved in Cultural Heritage research (archaeologists, curators, conservators, simple users), each with diverse needs. All these statements advocate that 5D modelling (3D geometry plus time plus levels of detail) is ideally required for the preservation and assessment of outdoor large-scale cultural sites, which is currently implemented as a simple aggregation of 3D digital models at different times and levels of detail. The main bottleneck of such an approach is its complexity, making 5D modelling impossible to validate in real-life conditions. In this paper, a cost-effective and affordable framework for 5D modelling is proposed, based on a spatial-temporal dependent aggregation of 3D digital models, incorporating a predictive assessment procedure to indicate which regions (surfaces) of an object should be reconstructed at higher levels of detail at subsequent time instances and which at lower ones. In this way, dynamic change history maps are created, indicating the spatial probabilities of regions needing further 3D modelling at forthcoming instances. Using these maps, a predictive assessment can be made, that is, surfaces within the objects can be localized where a high-accuracy reconstruction process needs to be activated at the forthcoming time instances. The proposed 5D Digital Cultural Heritage Model (5D-DCHM) is implemented using open interoperable standards based on the CityGML framework, which also allows the description of additional semantic metadata information. Visualization aspects are also supported to allow easy manipulation, interaction and representation of the 5D-DCHM geometry and the respective semantic information. The open source 3DCityDB incorporating a PostgreSQL geo-database is used to manage and manipulate 3D data and their semantics.
Open EFTs, IR effects & late-time resummations: systematic corrections in stochastic inflation
Burgess, C. P.; Holman, R.; Tasinato, G.
2016-01-26
Though simple inflationary models describe the CMB well, their corrections are often plagued by infrared effects that obstruct a reliable calculation of late-time behaviour. Here we adapt to cosmology tools designed to address similar issues in other physical systems, with the goal of making reliable late-time inflationary predictions. The main such tool is Open EFTs, which reduce in the inflationary case to Stochastic Inflation plus calculable corrections. We apply this to a simple inflationary model that is complicated enough to have dangerous IR behaviour yet simple enough to allow the inference of late-time behaviour. We find corrections to standard Stochastic Inflationary predictions for the noise and drift, and we find these corrections ensure the IR finiteness of both these quantities. The late-time probability distribution, P(Φ), for super-Hubble field fluctuations is obtained as a function of the noise and drift, and so it too is IR finite. We compare our results to other methods (such as large-N models) and find they agree when these models are reliable. In all cases we can explore in detail, we find that IR secular effects describe the slow accumulation of small perturbations to give a big effect: a significant distortion of the late-time probability distribution for the field. But the energy density associated with this is only of order H^4 at late times and so does not generate a dramatic gravitational back-reaction.
Can simple rules control development of a pioneer vertebrate neuronal network generating behavior?
Roberts, Alan; Conte, Deborah; Hull, Mike; Merrison-Hort, Robert; al Azad, Abul Kalam; Buhl, Edgar; Borisyuk, Roman; Soffe, Stephen R
2014-01-08
How do the pioneer networks in the axial core of the vertebrate nervous system first develop? Fundamental to understanding any full-scale neuronal network is knowledge of the constituent neurons, their properties, synaptic interconnections, and normal activity. Our novel strategy uses basic developmental rules to generate model networks that retain individual neuron and synapse resolution and are capable of reproducing correct, whole animal responses. We apply our developmental strategy to young Xenopus tadpoles, whose brainstem and spinal cord share a core vertebrate plan, but at a tractable complexity. Following detailed anatomical and physiological measurements to complete a descriptive library of each type of spinal neuron, we build models of their axon growth controlled by simple chemical gradients and physical barriers. By adding dendrites and allowing probabilistic formation of synaptic connections, we reconstruct network connectivity among up to 2000 neurons. When the resulting "network" is populated by model neurons and synapses, with properties based on physiology, it can respond to sensory stimulation by mimicking tadpole swimming behavior. This functioning model represents the most complete reconstruction of a vertebrate neuronal network that can reproduce the complex, rhythmic behavior of a whole animal. The findings validate our novel developmental strategy for generating realistic networks with individual neuron- and synapse-level resolution. We use it to demonstrate how early functional neuronal connectivity and behavior may in life result from simple developmental "rules," which lay out a scaffold for the vertebrate CNS without specific neuron-to-neuron recognition.
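As a rough illustration of probabilistic connectivity generation of the kind described above (a generic sketch, not the authors' measured growth rules), neurons can be placed along a one-dimensional axis, axons given finite extents, and synapses formed wherever an axon overlaps another cell's dendritic field with some fixed probability; all numbers below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generic sketch of probabilistic synapse formation from axon-dendrite overlap
# (illustrative parameters, not the paper's anatomical measurements).

n = 200
soma = rng.uniform(0.0, 2000.0, n)                  # rostro-caudal positions, micrometres (assumed)
axon_len = rng.gamma(shape=4.0, scale=150.0, size=n)
axon_end = soma + axon_len                           # axons grow caudally (simplification)
dend_radius = 60.0                                   # dendritic field half-width (assumed)
p_contact = 0.3                                      # synapse probability per overlap (assumed)

adjacency = np.zeros((n, n), dtype=bool)
for pre in range(n):
    overlaps = (soma > soma[pre] - dend_radius) & (soma < axon_end[pre] + dend_radius)
    overlaps[pre] = False                            # no self-connections
    adjacency[pre] = overlaps & (rng.random(n) < p_contact)

print("mean out-degree:", adjacency.sum(axis=1).mean())
```

The resulting adjacency matrix is the kind of "anatomical" connectivity that can then be populated with model neurons and synapses for functional simulation.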
Contact resonances of U-shaped atomic force microscope probes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rezaei, E.; Turner, J. A., E-mail: jaturner@unl.edu
Recent approaches used to characterize the elastic or viscoelastic properties of materials with nanoscale resolution have focused on the contact resonances of atomic force microscope (CR-AFM) probes. The experiments for these CR-AFM methods involve measurement of several contact resonances from which the resonant frequency and peak width are found. The contact resonance values are then compared with the noncontact values in order for the sample properties to be evaluated. The data analysis requires vibration models associated with the probe during contact in order for the beam response to be deconvolved from the measured spectra. To date, the majority of CR-AFM research has used rectangular probes that have a relatively simple vibration response. Recently, U-shaped AFM probes have created much interest because they allow local sample heating. However, the vibration response of these probes is much more complex such that CR-AFM is still in its infancy. In this article, a simplified analytical model of U-shaped probes is evaluated for contact resonance applications relative to a more complex finite element (FE) computational model. The tip-sample contact is modeled using three orthogonal Kelvin-Voigt elements such that the resonant frequency and peak width of each mode are functions of the contact conditions. For the purely elastic case, the frequency results of the simple model are within 8% of the FE model for the lowest six modes over a wide range of contact stiffness values. Results for the viscoelastic contact problem, for which the quality factor of the lowest six modes is compared, show agreement to within 13%. These results suggest that this simple model can be used effectively to evaluate CR-AFM experimental results during AFM scanning such that quantitative mapping of viscoelastic properties may be possible using U-shaped probes.
SBMLeditor: effective creation of models in the Systems Biology Markup Language (SBML)
Rodriguez, Nicolas; Donizelli, Marco; Le Novère, Nicolas
2007-01-01
Background The need to build a tool to facilitate the quick creation and editing of models encoded in the Systems Biology Markup Language (SBML) has been growing with the number of users and the increased complexity of the language. SBMLeditor tries to answer this need by providing a very simple, low-level editor of SBML files. Users can create and remove all the necessary bits and pieces of SBML in a controlled way that maintains the validity of the final SBML file. Results SBMLeditor is written in JAVA using JCompneur, a library providing interfaces to easily display an XML document as a tree. This decreases dramatically the development time for a new XML editor. The possibility to include custom dialogs for different tags allows a lot of freedom for the editing and validation of the document. In addition to Xerces, SBMLeditor uses libSBML to check the validity and consistency of SBML files. A graphical equation editor allows easy manipulation of MathML. SBMLeditor can be used as a module of the Systems Biology Workbench. Conclusion SBMLeditor contains many improvements compared to a generic XML editor, and allows users to create an SBML model quickly and without syntactic errors. PMID:17341299
Khatri, Bhavin S.; Goldstein, Richard A.
2015-01-01
Speciation is fundamental to understanding the huge diversity of life on Earth. Although still controversial, empirical evidence suggests that the rate of speciation is larger for smaller populations. Here, we explore a biophysical model of speciation by developing a simple coarse-grained theory of transcription factor-DNA binding and how their co-evolution in two geographically isolated lineages leads to incompatibilities. To develop a tractable analytical theory, we derive a Smoluchowski equation for the dynamics of binding energy evolution that accounts for the fact that natural selection acts on phenotypes, but variation arises from mutations in sequences; the Smoluchowski equation includes selection due to both gradients in fitness and gradients in sequence entropy, which is the logarithm of the number of sequences that correspond to a particular binding energy. This simple consideration predicts that smaller populations develop incompatibilities more quickly in the weak mutation regime; this trend arises as sequence entropy poises smaller populations closer to incompatible regions of phenotype space. These results suggest a generic coarse-grained approach to evolutionary stochastic dynamics, allowing realistic modelling at the phenotypic level. PMID:25936759
A predictive analytic model for the solar modulation of cosmic rays
Cholis, Ilias; Hooper, Dan; Linden, Tim
2016-02-23
An important factor limiting our ability to understand the production and propagation of cosmic rays pertains to the effects of heliospheric forces, commonly known as solar modulation. The solar wind is capable of generating time- and charge-dependent effects on the spectrum and intensity of low-energy (≲10 GeV) cosmic rays reaching Earth. Previous analytic treatments of solar modulation have utilized the force-field approximation, in which a simple potential is adopted whose amplitude is selected to best fit the cosmic-ray data taken over a given period of time. Making use of recently available cosmic-ray data from the Voyager 1 spacecraft, along with measurements of the heliospheric magnetic field and solar wind, we construct a time-, charge- and rigidity-dependent model of solar modulation that can be directly compared to data from a variety of cosmic-ray experiments. Here, we provide a simple analytic formula that can be easily utilized in a variety of applications, allowing us to better predict the effects of solar modulation and reduce the number of free parameters involved in cosmic-ray propagation models.
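For context, the classic force-field approximation that this work improves upon can be written in a few lines; the sketch below uses a made-up power-law local interstellar spectrum and an assumed modulation potential, purely for illustration.

```python
import numpy as np

# Sketch of the classic force-field approximation referred to above. The local
# interstellar spectrum j_lis is a toy power law; phi is an assumed modulation
# potential, not a fitted value from the paper.

M_P = 0.938                      # proton rest-mass energy, GeV

def j_lis(T):
    """Toy local interstellar proton spectrum vs kinetic energy T [GeV]."""
    return 1.0e4 * (T + M_P) ** -2.7

def force_field(T, phi, Z=1, A=1):
    """Modulated spectrum at Earth for modulation potential phi [GV]."""
    Phi = (abs(Z) / A) * phi                      # energy loss per nucleon, GeV
    num = T * (T + 2.0 * M_P)
    den = (T + Phi) * (T + Phi + 2.0 * M_P)
    return j_lis(T + Phi) * num / den

T = np.logspace(-1, 1, 5)                         # 0.1-10 GeV
print(force_field(T, phi=0.5))                    # phi ~ 0.5 GV (assumed)
```

The single parameter phi is exactly the quantity the abstract criticizes as being fit period by period; the paper's formula replaces it with a time-, charge- and rigidity-dependent description.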
NASA Astrophysics Data System (ADS)
Guha, Anirban
2017-11-01
Theoretical studies of linear shear instabilities as well as different kinds of wave interactions often use simple velocity and/or density profiles (e.g. constant, piecewise) to obtain good qualitative and quantitative predictions of the initial disturbances. Moreover, such simple profiles provide a minimal model for a mechanistic understanding of shear instabilities. Here we have extended this minimal paradigm into the nonlinear domain using the vortex method. Making use of the unsteady Bernoulli equation in the presence of linear shear, and extending the Birkhoff-Rott equation to multiple interfaces, we have numerically simulated the interaction between multiple fully nonlinear waves. This methodology is quite general, and has allowed us to simulate diverse problems that can be essentially reduced to the minimal system of interacting waves, e.g. spilling and plunging breakers, stratified shear instabilities (Holmboe, Taylor-Caulfield, stratified Rayleigh), jet flows, and even wave-topography interaction problems like Bragg resonance. We found that the minimal models capture key nonlinear features (e.g. wave-breaking features like cusp formation and roll-ups) which are observed in experiments and/or extensive simulations with smooth, realistic profiles.
Luminance, Colour, Viewpoint and Border Enhanced Disparity Energy Model
Martins, Jaime A.; Rodrigues, João M. F.; du Buf, Hans
2015-01-01
The visual cortex is able to extract disparity information through the use of binocular cells. This process is reflected by the Disparity Energy Model, which describes the role and functioning of simple and complex binocular neuron populations, and how they are able to extract disparity. This model uses explicit cell parameters to mathematically determine preferred cell disparities, like spatial frequencies, orientations, binocular phases and receptive field positions. However, the brain cannot access such explicit cell parameters; it must rely on cell responses. In this article, we implemented a trained binocular neuronal population, which encodes disparity information implicitly. This allows the population to learn how to decode disparities, in a similar way to how our visual system could have developed this ability during evolution. At the same time, responses of monocular simple and complex cells can also encode line and edge information, which is useful for refining disparities at object borders. The brain should then be able, starting from a low-level disparity draft, to integrate all information, including colour and viewpoint perspective, in order to propagate better estimates to higher cortical areas. PMID:26107954
Simple biophysical model of tumor evasion from immune system control
NASA Astrophysics Data System (ADS)
D'Onofrio, Alberto; Ciancio, Armando
2011-09-01
The competitive nonlinear interplay between a tumor and the host's immune system is not only very complex but is also time-changing. A fundamental aspect of this issue is the ability of the tumor to slowly carry out processes that gradually allow it to become less harmed and less susceptible to recognition by the immune system effectors. Here we propose a simple epigenetic escape mechanism that adaptively depends on the interactions per time unit between cells of the two systems. From a biological point of view, our model is based on the concept that a tumor cell that has survived an encounter with a cytotoxic T-lymphocyte (CTL) has an information gain that it transmits to the other cells of the neoplasm. The consequence of this information increase is a decrease in both the probabilities of being killed and of being recognized by a CTL. We show that the mathematical model of this mechanism is formally equal to an evolutionary imitation game dynamics. Numerical simulations of transitory phases complement the theoretical analysis. Implications of the interplay between the above mechanisms and the delivery of immunotherapies are also illustrated.
Fish robotics and hydrodynamics
NASA Astrophysics Data System (ADS)
Lauder, George
2010-11-01
Studying the fluid dynamics of locomotion in freely-swimming fishes is challenging due to difficulties in controlling fish behavior. To provide better control over fish-like propulsive systems we have constructed a variety of fish-like robotic test platforms that range from highly biomimetic models of fins, to simple physical models of body movements during aquatic locomotion. First, we have constructed a series of biorobotic models of fish pectoral fins with 5 fin rays that allow detailed study of fin motion, forces, and fluid dynamics associated with fin-based locomotion. We find that by tuning fin ray stiffness and the imposed motion program we can produce thrust both on the fin outstroke and instroke. Second, we are using a robotic flapping foil system to study the self-propulsion of flexible plastic foils of varying stiffness, length, and trailing edge shape as a means of investigating the fluid dynamic effect of simple changes in the properties of undulating bodies moving through water. We find unexpected non-linear, stiffness-dependent effects of changing foil length on self-propelled speed, as well as significant effects of trailing edge shape on foil swimming speed.
Basic elements of light water reactor fuel rod design. [FUELROD code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weisman, J.; Eckart, R.
1981-06-01
Basic design techniques and equations are presented to allow students to understand and perform preliminary fuel design for normal reactor conditions. Each of the important design considerations is presented and discussed in detail. These include the interaction between fuel pellets and cladding and the changes in fuel and cladding that occur during the operating lifetime of the fuel. A simple, student-oriented, fuel rod design computer program, called FUELROD, is described. The FUELROD program models the in-pile pellet cladding interaction and allows a realistic exploration of the effect of various design parameters. By use of FUELROD, the student can gain an appreciation of the fuel rod design process. 34 refs.
Observation of optically induced feshbach resonances in collisions of cold atoms
Fatemi; Jones; Lett
2000-11-20
We have observed optically induced Feshbach resonances in a cold (<1 mK) sodium vapor. The optical coupling of the ground and excited-state potentials changes the scattering properties of an ultracold gas in much the same way as recently observed magnetically induced Feshbach resonances, but allows for some experimental conveniences associated with using lasers. The scattering properties can be varied by changing either the intensity or the detuning of a laser tuned near a photoassociation transition to a molecular state in the dimer. In principle this method allows the scattering length of any atomic species to be altered. A simple model is used to fit the dispersive resonance line shapes.
Cosmic Star Formation: A Simple Model of the SFRD(z)
NASA Astrophysics Data System (ADS)
Chiosi, Cesare; Sciarratta, Mauro; D’Onofrio, Mauro; Chiosi, Emanuela; Brotto, Francesca; De Michele, Rosaria; Politino, Valeria
2017-12-01
We investigate the evolution of the cosmic star formation rate density (SFRD) from redshift z = 20 to z = 0 and compare it with the observational one by Madau and Dickinson derived from recent compilations of ultraviolet (UV) and infrared (IR) data. The theoretical SFRD(z) and its evolution are obtained using a simple model that folds together the star formation histories of prototype galaxies that are designed to represent real objects of different morphological type along the Hubble sequence and the hierarchical growing of structures under the action of gravity from small perturbations to large-scale objects in Λ-CDM cosmogony, i.e., the number density of dark matter halos N(M,z). Although the overall model is very simple and easy to set up, it provides results that mimic results obtained from highly complex large-scale N-body simulations well. The simplicity of our approach allows us to test different assumptions for the star formation law in galaxies, the effects of energy feedback from stars to interstellar gas, the efficiency of galactic winds, and also the effect of N(M,z). The result of our analysis is that in the framework of the hierarchical assembly of galaxies, the so-called time-delayed star formation under plain assumptions mainly for the energy feedback and galactic winds can reproduce the observational SFRD(z).
A methodological approach for using high-level Petri Nets to model the immune system response.
Pennisi, Marzio; Cavalieri, Salvatore; Motta, Santo; Pappalardo, Francesco
2016-12-22
Mathematical and computational models have proved to be very important support tools for understanding the immune system response against pathogens. Models and simulations have allowed researchers to study immune system behavior, to test biological hypotheses about disease and infection dynamics, and to improve and optimize novel and existing drugs and vaccines. Continuous models, mainly based on differential equations, usually allow a qualitative study of the system but lack descriptive detail; conversely, discrete models, such as agent-based models and cellular automata, describe the properties of entities in detail at the cost of losing most qualitative analyses. Petri Nets (PN) are a graphical modeling tool developed to model concurrency and synchronization in distributed systems. Their use has grown steadily, thanks also to the introduction over the years of many features and extensions which led to the birth of "high-level" PN. We propose a novel methodological approach based on high-level PN, and in particular on Colored Petri Nets (CPN), that can be used to model the immune system response at the cellular scale. To demonstrate the potential of the approach we provide a simple model of the humoral immune system response that is able to reproduce some of the most complex well-known features of the adaptive response, such as memory and specificity. The methodology we present combines advantages of the two classical approaches based on continuous and discrete models, since it achieves a good level of granularity in the description of cell behavior without losing the possibility of qualitative analysis. Furthermore, the presented methodology based on CPN allows the adoption of the same graphical modeling technique well known to life scientists who use PN for the modeling of signaling pathways. Finally, such an approach may open the floodgates to the realization of multi-scale models that integrate both signaling pathway (intracellular) models and cellular (population) models built upon the same technique and software.
Fiske, Ian J.; Royle, J. Andrew; Gross, Kevin
2014-01-01
Ecologists and wildlife biologists increasingly use latent variable models to study patterns of species occurrence when detection is imperfect. These models have recently been generalized to accommodate both a more expansive description of state than simple presence or absence, and Markovian dynamics in the latent state over successive sampling seasons. In this paper, we write these multi-season, multi-state models as hidden Markov models to find both maximum likelihood estimates of model parameters and finite-sample estimators of the trajectory of the latent state over time. These estimators are especially useful for characterizing population trends in species of conservation concern. We also develop parametric bootstrap procedures that allow formal inference about latent trend. We examine model behavior through simulation, and we apply the model to data from the North American Amphibian Monitoring Program.
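As a minimal illustration of the hidden-Markov formulation described here (a two-state sketch with illustrative parameter values, not the authors' multi-state estimator), the likelihood of a single site's detection history can be evaluated with the standard forward recursion:

```python
import numpy as np

# Two latent states (0 = unoccupied, 1 = occupied), imperfect detection.
# Parameter values are assumptions for illustration only.

psi1 = 0.6                      # initial occupancy probability
gamma, eps = 0.2, 0.3           # colonisation and extinction probabilities
p = 0.5                         # detection probability given occupancy

trans = np.array([[1 - gamma, gamma],
                  [eps, 1 - eps]])

def emission(n_detections, n_surveys):
    """P(detection summary | latent state) for each state (binomial coefficient omitted)."""
    p_unocc = 1.0 if n_detections == 0 else 0.0
    p_occ = (p ** n_detections) * ((1 - p) ** (n_surveys - n_detections))
    return np.array([p_unocc, p_occ])

def forward_loglik(detections_per_season, surveys_per_season):
    alpha = np.array([1 - psi1, psi1]) * emission(detections_per_season[0],
                                                  surveys_per_season[0])
    for d, k in zip(detections_per_season[1:], surveys_per_season[1:]):
        alpha = (alpha @ trans) * emission(d, k)      # propagate, then condition on data
    return np.log(alpha.sum())

print(forward_loglik([1, 0, 2], [3, 3, 3]))
```

Maximizing this log-likelihood over the parameters, and resampling data from the fitted model, is the basic machinery behind the maximum-likelihood and parametric-bootstrap procedures the abstract describes.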
Large-scale coastal and fluvial models constrain the late Holocene evolution of the Ebro Delta
NASA Astrophysics Data System (ADS)
Nienhuis, Jaap H.; Ashton, Andrew D.; Kettner, Albert J.; Giosan, Liviu
2017-09-01
The distinctive plan-view shape of the Ebro Delta coast reveals a rich morphologic history. The degree to which the form and depositional history of the Ebro and other deltas represent autogenic (internal) dynamics or allogenic (external) forcing remains a prominent challenge for paleo-environmental reconstructions. Here we use simple coastal and fluvial morphodynamic models to quantify paleo-environmental changes affecting the Ebro Delta over the late Holocene. Our findings show that these models are able to broadly reproduce the Ebro Delta morphology, with simple fluvial and wave climate histories. Based on numerical model experiments and the preserved and modern shape of the Ebro Delta plain, we estimate that a phase of rapid shoreline progradation began approximately 2100 years BP, requiring approximately a doubling in coarse-grained fluvial sediment supply to the delta. River profile simulations suggest that an instantaneous and sustained increase in coarse-grained sediment supply to the delta requires a combined increase in both flood discharge and sediment supply from the drainage basin. The persistence of rapid delta progradation throughout the last 2100 years suggests an anthropogenic control on sediment supply and flood intensity. Using proxy records of the North Atlantic Oscillation, we do not find evidence that changes in wave climate aided this delta expansion. Our findings highlight how scenario-based investigations of deltaic systems using simple models can assist first-order quantitative paleo-environmental reconstructions, elucidating the effects of past human influence and climate change, and allowing a better understanding of the future of deltaic landforms.
Xu, Chonggang; Gertner, George
2013-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
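To make the classical FAST main-effect calculation concrete, the following sketch samples three parameters along a periodic search curve and reads each partial variance off the Fourier harmonics of its assigned frequency; the frequency set and toy model are illustrative choices, not the paper's experimental design.

```python
import numpy as np

# Sketch of search-curve-based FAST main-effect estimation (illustrative
# frequencies and test function).

omegas = np.array([11, 35, 71])          # one distinct frequency per parameter (assumed)
N = 2 * 4 * max(omegas) + 1              # sample size, keeping 4 harmonics
s = np.linspace(-np.pi, np.pi, N, endpoint=False)

# Search-curve sampling: each x_i traverses (0, 1) periodically
X = 0.5 + np.arcsin(np.sin(np.outer(omegas, s))) / np.pi

def model(x1, x2, x3):                   # toy model with unequal main effects
    return x1 + 2.0 * x2 + 0.5 * x3 ** 2

y = model(*X)

# Fourier coefficients of the output along the curve
j = np.arange(1, N // 2 + 1)
A = (y[None, :] * np.cos(np.outer(j, s))).mean(axis=1)
B = (y[None, :] * np.sin(np.outer(j, s))).mean(axis=1)
spectrum = A ** 2 + B ** 2

total = 2.0 * spectrum.sum()             # ~ Var(y)
for i, w in enumerate(omegas):
    harmonics = w * np.arange(1, 5)      # first 4 harmonics of omega_i
    partial = 2.0 * spectrum[harmonics - 1].sum()
    print(f"parameter {i + 1}: first-order index ~ {partial / total:.2f}")
```

Interaction contributions, which the paper shows can also be extracted, would appear at linear combinations of the assigned frequencies rather than at their pure harmonics.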
Measuring and modeling the oxygen profile in a nitrifying Moving Bed Biofilm Reactor.
Masić, Alma; Bengtsson, Jessica; Christensson, Magnus
2010-09-01
In this paper we determine the oxygen profile in a biofilm on suspended carriers in two ways: firstly by microelectrode measurements and secondly by a simple mathematical model. The Moving Bed Biofilm Reactor is well established for wastewater treatment, where bacteria grow as a biofilm on the protective surfaces of suspended carriers. The flat-shaped BiofilmChip P was developed to allow good conditions for the transport of substrates into the biofilm. The oxygen profile was measured in situ in the nitrifying biofilm with a microelectrode and was simulated with a one-dimensional mathematical model. We extended the model by adding a CSTR equation, to connect the reactor to the biofilm through the boundary conditions. We showed the dependence of the thickness of the mass-transfer boundary layer on the bulk flow rate. Finally, we estimated the erosion parameter lambda to increase the concordance between the measured and simulated profiles. This led to a simple empirical relationship between lambda and the flow rate. The data gathered by in situ microelectrode measurements can, together with the mathematical model, be used in predictive modeling and give more insight into the design of new carriers, with the ambition of making process operation more energy efficient. Copyright 2010 Elsevier Inc. All rights reserved.
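To illustrate the kind of one-dimensional model the abstract refers to, here is a minimal reaction-diffusion sketch of an oxygen profile with Monod-type consumption; all parameter values are placeholders, not the paper's calibrated ones, and the bulk concentration is simply imposed at the biofilm surface rather than coupled through a CSTR balance.

```python
import numpy as np

# Illustrative 1-D steady-state oxygen profile in a biofilm: Monod consumption,
# fixed bulk concentration at the surface, no-flux condition at the carrier wall.

L = 400e-6            # biofilm thickness [m] (assumed)
n = 100
dz = L / (n - 1)
D = 1.7e-9            # effective O2 diffusivity [m^2/s] (assumed)
k_max = 3.0e-3        # maximum volumetric uptake rate [kg/m^3/s] (assumed)
K_o = 5.0e-4          # half-saturation constant [kg/m^3] (assumed)
C_bulk = 6.0e-3       # O2 at the biofilm surface [kg/m^3] (assumed)

C = np.full(n, C_bulk)
dt = 0.2 * dz ** 2 / D                      # stable explicit time step

for _ in range(200000):                     # march to steady state
    lap = np.zeros_like(C)
    lap[1:-1] = (C[2:] - 2 * C[1:-1] + C[:-2]) / dz ** 2
    C[1:-1] += dt * (D * lap[1:-1] - k_max * C[1:-1] / (K_o + C[1:-1]))
    C[0] = C_bulk                           # surface held at bulk value
    C[-1] = C[-2]                           # zero flux at the substratum

print("O2 at the substratum:", C[-1])
```

Coupling the surface boundary condition to a reactor mass balance, as the paper does, would let the bulk concentration respond to the flux into the biofilm instead of being prescribed.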
Creep and stress relaxation modeling of polycrystalline ceramic fibers
NASA Technical Reports Server (NTRS)
Dicarlo, James A.; Morscher, Gregory N.
1994-01-01
A variety of high performance polycrystalline ceramic fibers are currently being considered as reinforcement for high temperature ceramic matrix composites. However, under mechanical loading above 800 C, these fibers display creep-related instabilities which can result in detrimental changes in composite dimensions, strength, and internal stress distributions. As a first step toward understanding these effects, this study examines the validity of a mechanism-based empirical model which describes primary stage tensile creep and stress relaxation of polycrystalline ceramic fibers as independent functions of time, temperature, and applied stress or strain. To verify these functional dependencies, a simple bend test is used to measure stress relaxation for four types of commercial ceramic fibers for which direct tensile creep data are available. These fibers include both nonoxide (SCS-6, Nicalon) and oxide (PRD-166, FP) compositions. The results of the Bend Stress Relaxation (BSR) test not only confirm the stress, time, and temperature dependencies predicted by the model, but also allow measurement of model empirical parameters for the four fiber types. In addition, comparison of model tensile creep predictions based on the BSR test results with the literature data shows good agreement, supporting both the predictive capability of the model and the use of the BSR test as a simple method for parameter determination for other fibers.
Modeling and experimental characterization of electromigration in interconnect trees
NASA Astrophysics Data System (ADS)
Thompson, C. V.; Hau-Riege, S. P.; Andleigh, V. K.
1999-11-01
Most modeling and experimental characterization of interconnect reliability is focussed on simple straight lines terminating at pads or vias. However, laid-out integrated circuits often have interconnects with junctions and wide-to-narrow transitions. In carrying out circuit-level reliability assessments it is important to be able to assess the reliability of these more complex shapes, generally referred to as 'trees'. An interconnect tree consists of continuously connected high-conductivity metal within one layer of metallization. Trees terminate at diffusion barriers at vias and contacts, and, in the general case, can have more than one terminating branch when they include junctions. We have extended the understanding of 'immortality' demonstrated and analyzed for straight stud-to-stud lines, to trees of arbitrary complexity. This leads to a hierarchical approach for identifying immortal trees for specific circuit layouts and models for operation. To complete a circuit-level reliability analysis, it is also necessary to estimate the lifetimes of the mortal trees. We have developed simulation tools that allow modeling of stress evolution and failure in arbitrarily complex trees. We are testing our models and simulations through comparisons with experiments on simple trees, such as lines broken into two segments with different currents in each segment. Models, simulations and early experimental results on the reliability of interconnect trees are shown to be consistent.
Creep and stress relaxation modeling of polycrystalline ceramic fibers
NASA Technical Reports Server (NTRS)
Dicarlo, James A.; Morscher, Gregory N.
1991-01-01
A variety of high performance polycrystalline ceramic fibers are currently being considered as reinforcement for high temperature ceramic matrix composites. However, under mechanical loading above 800 C, these fibers display creep-related instabilities which can result in detrimental changes in composite dimensions, strength, and internal stress distributions. As a first step toward understanding these effects, this study examines the validity of a mechanistic-based empirical model which describes primary stage tensile creep and stress relaxation of polycrystalline ceramic fibers as independent functions of time, temperature, and applied stress or strain. To verify these functional dependencies, a simple bend test is used to measure stress relaxation for four types of commercial ceramic fibers for which direct tensile creep data are available. These fibers include both nonoxide (SCS-6, Nicalon) and oxide (PRD-166, FP) compositions. The results of the bend stress relaxation (BSR) test not only confirm the stress, time, and temperature dependencies predicted by the model but also allow measurement of model empirical parameters for the four fiber types. In addition, comparison of model predictions and BSR test results with the literature tensile creep data shows good agreement, supporting both the predictive capability of the model and the use of the BSR test as a simple method for parameter determination for other fibers.
A simple experimental method to study depigmenting agents.
Abella, M L; de Rigal, J; Neveux, S
2007-08-01
The first objective of the study was to verify that a controlled UV exposure of four areas of the forearms, together with randomized product application, made it possible to compare treatment efficacy; the second was to compare the depigmenting efficacy of different products with a simple experimental method. Sixteen volunteers received 0.7 minimal erythemal dose for four consecutive days. The products tested were ellagic acid (0.5%), vitamin C (5%) and C8-LHA (2%). Product application started 72 h after the last exposure and was repeated for 42 days, the control zone being exposed but not treated. Colour measurements included Chromameter, Chromasphere, spectro-colorimeter and visual assessment. Comparison of colour values at day 1 and at day 7 showed that all zones were comparably tanned, allowing a rigorous comparison of the treatments. We report a new simple experimental model, which enables the rapid comparison of different depigmenting products. The efficacy and good tolerance of C8-LHA make it an excellent candidate for the treatment of hyperpigmentary disorders.
An Experimental Realization of a Chaos-Based Secure Communication Using Arduino Microcontrollers.
Zapateiro De la Hoz, Mauricio; Acho, Leonardo; Vidal, Yolanda
2015-01-01
Security and secrecy are some of the important concerns in the communications world. In recent years, several encryption techniques have been proposed in order to improve the secrecy of the information transmitted. Chaos-based encryption techniques are being widely studied as part of the problem because of the highly unpredictable and random-looking nature of chaotic signals. In this paper we propose a digital communication system that uses the logistic map, which is a mathematically simple model that is chaotic under certain conditions. The input message signal is modulated using a simple Delta modulator and encrypted using a logistic map. The key signal is also encrypted using the same logistic map with different initial conditions. On the receiver side, the binary-coded message is decrypted using the encrypted key signal that is sent through one of the communication channels. The proposed scheme is experimentally tested using Arduino shields, which are simple yet powerful development kits that allow the implementation of the communication system for testing purposes.
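The general idea can be sketched in a few lines (a software illustration of logistic-map keystream encryption of a delta-modulated signal, not the authors' exact Arduino scheme; initial condition, parameter and step size are assumptions):

```python
import numpy as np

# Delta-modulate an analog message into bits, then XOR those bits with a
# keystream derived from the logistic map x_{n+1} = r x_n (1 - x_n), which is
# chaotic for r = 4. A receiver sharing (x0, r) regenerates the keystream.

def logistic_bits(n, x0=0.61, r=4.0):
    x, bits = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        bits.append(1 if x > 0.5 else 0)      # threshold the chaotic orbit to bits
    return np.array(bits, dtype=np.uint8)

def delta_modulate(signal, step=0.1):
    est, bits = 0.0, []
    for s in signal:
        bit = 1 if s > est else 0             # 1: estimate steps up, 0: steps down
        est += step if bit else -step
        bits.append(bit)
    return np.array(bits, dtype=np.uint8)

t = np.linspace(0, 1, 400)
message = np.sin(2 * np.pi * 5 * t)

m_bits = delta_modulate(message)
key = logistic_bits(len(m_bits))              # shared secret: (x0, r)
cipher = m_bits ^ key                         # encrypt
recovered = cipher ^ key                      # decrypt at the receiver

print("bits recovered exactly:", np.array_equal(recovered, m_bits))
```

Because the logistic map is sensitive to initial conditions, even a tiny mismatch in x0 at the receiver produces an uncorrelated keystream and garbled output, which is the property such schemes rely on.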
Rotation and plasma stability in the Fitzpatrick-Aydemir model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pustovitov, V. D.
2007-08-15
The rotational stabilization of the resistive wall modes (RWMs) is analyzed within the single-mode cylindrical Fitzpatrick-Aydemir model [R. Fitzpatrick, Phys. Plasmas 9, 3459 (2002)]. Here, the consequences of the Fitzpatrick-Aydemir dispersion relation are derived in terms of the observable growth rate and toroidal rotation frequency of the mode, which allows easy interpretation of the results and comparison with experimental observations. It is shown that this model, developed for the plasma with weak dissipation, predicts the rotational destabilization of RWM in the typical range of the RWM rotation. The model predictions are compared with those obtained in a similar model, but with the Boozer boundary conditions at the plasma surface [A. H. Boozer, Phys. Plasmas 11, 110 (2004)]. Simple experimental tests of the model are proposed.
Simulation of semi-explicit mechanisms of SOA formation from glyoxal in a 3D model
NASA Astrophysics Data System (ADS)
Knote, C. J.; Hodzic, A.; Jimenez, J. L.; Volkamer, R.; Orlando, J. J.; Baidar, S.; Brioude, J. F.; Fast, J. D.; Gentner, D. R.; Goldstein, A. H.; Hayes, P. L.; Knighton, W. B.; Oetjen, H.; Setyan, A.; Stark, H.; Thalman, R. M.; Tyndall, G. S.; Washenfelder, R. A.; Waxman, E.; Zhang, Q.
2013-12-01
Formation of secondary organic aerosols (SOA) through multi-phase processing of glyoxal has been proposed recently as a relevant contributor to SOA mass. Glyoxal has both anthropogenic and biogenic sources, and readily partitions into the aqueous phase of cloud droplets and aerosols. Both reversible and irreversible chemistry in the liquid phase has been observed. A recent laboratory study indicates that the presence of salts in the liquid phase strongly enhances the Henry's law constant of glyoxal, allowing for much more effective multi-phase processing. In our work we investigate the contribution of glyoxal to SOA formation on the regional scale. We employ the regional chemistry transport model WRF-Chem with MOZART gas-phase chemistry and MOSAIC aerosols, which we both extended to improve the description of glyoxal formation in the gas phase, and its interactions with aerosols. The detailed description of aerosols in our setup allows us to compare very simple (uptake coefficient) parameterizations of SOA formation from glyoxal, as has been used in previous modeling studies, with much more detailed descriptions of the various pathways postulated based on laboratory studies. Measurements taken during the CARES and CalNex campaigns in California in summer 2010 allowed us to constrain the model, including the major direct precursors of glyoxal. Simulations at convection-permitting resolution over a 2-week period in June 2010 have been conducted to assess the effect of the different ways to parameterize SOA formation from glyoxal and investigate its regional variability. We find that depending on the parameterization used the contribution of glyoxal to SOA is between 1 and 15% in the LA basin during this period, and that simple parameterizations based on uptake coefficients derived from box model studies lead to higher contributions (15%) than parameterizations based on lab experiments (1%). A kinetic limitation found in experiments hinders substantial contribution of volume-based pathways to total SOA formation from glyoxal. Once removed, 5% of total SOA can be formed from glyoxal through these channels. Results from a year-long simulation over the continental US will give a broader picture of the contribution of glyoxal to SOA formation.
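For orientation, the "simple" uptake-coefficient parameterization mentioned above treats glyoxal loss to the aerosol as a first-order process with rate k = γ·v̄·A/4; the sketch below uses assumed values for the uptake coefficient and aerosol surface area, not the study's configuration.

```python
import numpy as np

# First-order heterogeneous uptake of glyoxal on aerosol surfaces:
# k = gamma * v_mean * A / 4, with v_mean the gas-phase mean molecular speed
# and A the aerosol surface-area density. Values below are assumptions.

R = 8.314            # J mol^-1 K^-1
M = 0.058            # kg mol^-1, glyoxal molar mass
T = 298.0            # K

v_mean = np.sqrt(8.0 * R * T / (np.pi * M))        # mean molecular speed, m s^-1
gamma = 2.9e-3                                     # uptake coefficient (assumed)
A = 300e-6                                         # aerosol surface area, m^2 per m^3 air (assumed)

k = 0.25 * gamma * v_mean * A                      # s^-1
c0 = 1.0                                           # arbitrary initial glyoxal mixing ratio
hours = np.arange(0, 13)
c = c0 * np.exp(-k * hours * 3600.0)               # decay over 12 hours

print("lifetime against uptake: %.1f h" % (1.0 / k / 3600.0))
print("fraction remaining after 12 h: %.2f" % (c[-1] / c0))
```

The more detailed aqueous-phase pathways compared in the study resolve what happens after uptake (reversible versus irreversible chemistry), which is why they can give much smaller SOA yields than a single effective uptake coefficient.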
NASA Astrophysics Data System (ADS)
Strassmann, Kuno M.; Joos, Fortunat
2018-05-01
The Bern Simple Climate Model (BernSCM) is a free open-source re-implementation of a reduced-form carbon cycle-climate model which has been used widely in previous scientific work and IPCC assessments. BernSCM represents the carbon cycle and climate system with a small set of equations for the heat and carbon budget, the parametrization of major nonlinearities, and the substitution of complex component systems with impulse response functions (IRFs). The IRF approach allows cost-efficient yet accurate substitution of detailed parent models of climate system components with near-linear behavior. Illustrative simulations of scenarios from previous multimodel studies show that BernSCM is broadly representative of the range of the climate-carbon cycle response simulated by more complex and detailed models. Model code (in Fortran) was written from scratch with transparency and extensibility in mind, and is provided open source. BernSCM makes scientifically sound carbon cycle-climate modeling available for many applications. Supporting up to decadal time steps with high accuracy, it is suitable for studies with high computational load and for coupling with integrated assessment models (IAMs), for example. Further applications include climate risk assessment in a business, public, or educational context and the estimation of CO2 and climate benefits of emission mitigation options.
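As a generic illustration of the impulse-response-function substitution that BernSCM builds on (the time scales and amplitudes below are made-up placeholders, not BernSCM's calibrated response functions), the temperature response to a forcing history is just a convolution:

```python
import numpy as np

# Temperature response as a convolution of a radiative-forcing history with a
# sum-of-exponentials impulse response function (illustrative coefficients).

dt = 1.0                                   # yearly time step
t = np.arange(0, 200, dt)

taus = np.array([8.0, 300.0])              # fast and slow response time scales [yr] (assumed)
amps = np.array([0.45, 0.55]) * 0.8 / taus # scaled so equilibrium sensitivity ~ 0.8 K per W m^-2
irf = (amps[:, None] * np.exp(-t[None, :] / taus[:, None])).sum(axis=0)

forcing = np.where(t >= 10.0, 3.7, 0.0)    # step forcing ~ CO2 doubling after year 10

dT = np.convolve(forcing, irf)[: len(t)] * dt   # discrete convolution
print("warming after 200 yr:", round(dT[-1], 2), "K")
```

Because the response functions are near-linear substitutes for the parent models, such a scheme stays cheap even at decadal time steps, which is what makes it attractive for coupling to integrated assessment models.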
Model-based Systems Engineering: Creation and Implementation of Model Validation Rules for MOS 2.0
NASA Technical Reports Server (NTRS)
Schmidt, Conrad K.
2013-01-01
Model-based Systems Engineering (MBSE) is an emerging modeling application that is used to enhance the system development process. MBSE allows for the centralization of project and system information that would otherwise be stored in extraneous locations, yielding better communication, expedited document generation and increased knowledge capture. Based on MBSE concepts and the employment of the Systems Modeling Language (SysML), extremely large and complex systems can be modeled from conceptual design through all system lifecycles. The Operations Revitalization Initiative (OpsRev) seeks to leverage MBSE to modernize the aging Advanced Multi-Mission Operations Systems (AMMOS) into the Mission Operations System 2.0 (MOS 2.0). The MOS 2.0 will be delivered in a series of conceptual and design models and documents built using the modeling tool MagicDraw. To ensure model completeness and cohesiveness, it is imperative that the MOS 2.0 models adhere to the specifications, patterns and profiles of the Mission Service Architecture Framework, thus leading to the use of validation rules. This paper outlines the process by which validation rules are identified, designed, implemented and tested. Ultimately, these rules provide the ability to maintain model correctness and synchronization in a simple, quick and effective manner, thus allowing the continuation of project and system progress.
The History and Implications of Design Standards for Underwater Breathing Apparatus - 1954 to 2015
2015-02-11
respiratory loading using both simple models of fluid mechanics and experimental evidence. An understanding of the influence of both respiratory ventilatory... fluid dynamics of flow in divers' airways. It allows testing laboratories to make maximum use of all of their testing data, and to present that data in... tireless efforts of numerous military divers at the Navy Experimental Diving Unit in Panama City, FL and the Naval Medical Research Institute, Bethesda, MD
Fire Safety Analysis of the Polar Icebreaker Replacement Design. Volume 2
1987-10-01
report. Note: At the time of the incident only five or six men were aboard; therefore, they could not attempt to attack a fire of this intensity themselves... fire extinguisher (PKP). AUTOMATIC: A1301 — Halon 1301 total flooding system, remotely actuated; AF — AFFF (3%) sprinkler system, remotely actuated; AFM... simulate wind effects; we have found that its judicious use along with the vent and shaft routines allows for the modelling of simple HVAC systems
Interactive multimedia demonstrations for teaching fluid dynamics
NASA Astrophysics Data System (ADS)
Rowley, Clarence
2008-11-01
We present a number of multimedia tools, developed by undergraduates, for teaching concepts from introductory fluid mechanics. Short movies are presented, illustrating concepts such as hydrostatic pressure, the no-slip condition, boundary layers, and surface tension. In addition, we present a number of interactive demonstrations, which allow the user to interact with a simple model of a given concept via a web browser, and compare with experimental data. In collaboration with Mack Pasqual and Lindsey Brown, Princeton University.
Improved color constancy in honey bees enabled by parallel visual projections from dorsal ocelli.
Garcia, Jair E; Hung, Yu-Shan; Greentree, Andrew D; Rosa, Marcello G P; Endler, John A; Dyer, Adrian G
2017-07-18
How can a pollinator, like the honey bee, perceive the same colors on visited flowers, despite continuous and rapid changes in ambient illumination and background color? A hundred years ago, von Kries proposed an elegant solution to this problem, color constancy, which is currently incorporated in many imaging and technological applications. However, empirical evidence on how this method can operate on animal brains remains tenuous. Our mathematical modeling proposes that the observed spectral tuning of simple ocellar photoreceptors in the honey bee allows for the necessary input for an optimal color constancy solution to most natural light environments. The model is fully supported by our detailed description of a neural pathway allowing for the integration of signals originating from the ocellar photoreceptors to the information processing regions in the bee brain. These findings reveal a neural implementation to the classic color constancy problem that can be easily translated into artificial color imaging systems.
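For reference, the von Kries correction the abstract invokes amounts to rescaling each receptor channel by its response to the illuminant; the sketch below uses invented channel values purely to show that the corrected signals are illumination-independent.

```python
import numpy as np

# Minimal sketch of a von Kries-type correction: divide each channel by the
# (estimated) illuminant response in that channel. All values are illustrative.

def von_kries_correct(receptor_signals, illuminant_estimate):
    return receptor_signals / illuminant_estimate

# Rows: samples (e.g. flower patches); columns: receptor channels (e.g. UV, blue, green)
flower_under_sun = np.array([[0.30, 0.55, 0.70]])
sun = np.array([0.9, 1.0, 1.1])                      # channel responses to sunlight (assumed)
shade = np.array([1.2, 1.0, 0.7])                    # channel responses to shade light (assumed)

# The same surface viewed under shade: signals scale with the illuminant
flower_under_shade = flower_under_sun * (shade / sun)

print(von_kries_correct(flower_under_sun, sun))      # the two corrected outputs agree,
print(von_kries_correct(flower_under_shade, shade))  # i.e. colour constancy holds here
```

The paper's proposal is essentially that the ocellar photoreceptors supply the illuminant estimate needed for this kind of normalization.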
High-Speed Video Analysis in a Conceptual Physics Class
NASA Astrophysics Data System (ADS)
Desbien, Dwain M.
2011-09-01
The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software.2,3 Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting and also allows real-world situations to be analyzed. Many motions are too fast to be easily captured at the standard video frame rate of 30 frames per second (fps) employed by most video cameras. This paper will discuss using a consumer camera that can record high-frame-rate video in a college-level conceptual physics class. In particular this will involve the use of model rockets to determine the acceleration during the boost period right at launch and compare it to a simple model of the expected acceleration.
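A minimal sketch of the analysis step follows; the frame rate, boost acceleration and tracking noise are assumed values standing in for positions clicked frame by frame in video-analysis software.

```python
import numpy as np

# Estimate the boost acceleration from high-frame-rate position data by
# fitting a quadratic to height vs time (synthetic "tracked" data below).

fps = 240                                  # high-speed frame rate [frames/s] (assumed)
dt = 1.0 / fps
t = np.arange(0, 0.25, dt)                 # first 0.25 s after launch

a_true = 60.0                              # assumed boost acceleration [m/s^2]
y = 0.5 * a_true * t ** 2                  # ideal tracked height [m]
y += np.random.default_rng(1).normal(0, 0.002, y.size)   # ~2 mm tracking noise

coeffs = np.polyfit(t, y, 2)               # fit y = c2 t^2 + c1 t + c0
print("estimated acceleration:", round(2 * coeffs[0], 1), "m/s^2")
```

Fitting a quadratic rather than double-differencing the positions keeps the noise from being amplified, which matters at 240 fps where frame-to-frame displacements are tiny.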
PRINT: A Protein Bioconjugation Method with Exquisite N-terminal Specificity
Sur, Surojit; Qiao, Yuan; Fries, Anja; O’Meally, Robert N.; Cole, Robert N.; Kinzler, Kenneth W.; Vogelstein, Bert; Zhou, Shibin
2015-01-01
Chemical conjugation is commonly used to enhance the pharmacokinetics, biodistribution, and potency of protein therapeutics, but often leads to non-specific modification or loss of bioactivity. Here, we present a simple, versatile and widely applicable method that allows exquisitely N-terminal-specific modification of proteins. Combining reversible side-chain blocking and protease-mediated cleavage of a commonly used His tag appended to a protein, we generate, with high yield and purity, exquisitely site-specific and selective bioconjugates of TNF-α using amine-reactive NHS ester chemistry. We confirm the N-terminal selectivity and specificity using mass spectral analyses and show near-complete retention of the biological activity of our model protein both in vitro and in murine models in vivo. We believe that this methodology is applicable to a variety of potentially therapeutic proteins, and the specificity afforded by this technique would allow for the rapid generation of novel biologics. PMID:26678960
Vertical-probe-induced asymmetric dust oscillation in complex plasma.
Harris, B J; Matthews, L S; Hyde, T W
2013-05-01
A complex plasma vertical oscillation experiment which modifies the bulk is presented. Spherical, micron-sized particles within a Coulomb crystal levitated in the sheath above the powered lower electrode in a GEC reference cell are perturbed using a probe attached to a Zyvex S100 Nanomanipulator. By oscillating the probe potential sinusoidally, particle motion is found to be asymmetric, exhibiting superharmonic response in one case. Using a simple electric field model for the plasma sheath, including a nonzero electric field at the sheath edge, dust particle charges are found by employing a balance of relevant forces and emission analysis. Adjusting the parameters of the electric field model allowed the change predicted in the levitation height to be compared with experiment. A discrete oscillator Green's function is applied using the derived force, which accurately predicts the particle's motion and allows the determination of the electric field at the sheath edge.
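For readers unfamiliar with the approach, a continuous-time analogue of the oscillator Green's-function idea can be sketched as follows; the oscillator parameters and probe force are arbitrary placeholders, not the values derived for the dust particle.

```python
import numpy as np

# Response of a damped oscillator (natural frequency w0, damping g) to an
# arbitrary force F(t) as the convolution of F with the impulse response G(t).
m, w0, g = 1.0, 2 * np.pi * 10.0, 3.0           # kg, rad/s, 1/s (illustrative)
wd = np.sqrt(w0**2 - (g / 2)**2)                # damped angular frequency

dt = 1e-4
t = np.arange(0, 2.0, dt)
G = np.exp(-g * t / 2) * np.sin(wd * t) / (m * wd)   # impulse response
F = 1e-3 * np.sin(2 * np.pi * 9.0 * t)               # sinusoidal probe force, N

x = np.convolve(F, G)[:len(t)] * dt                   # predicted displacement, m
print(x.max())
```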
Brunelle, Marie-Noëlle; Brakier-Gingras, Léa; Lemay, Guy
2003-01-01
Retroviruses use unusual recoding strategies to synthesize the Gag-Pol polyprotein precursor of viral enzymes. In human immunodeficiency virus, ribosomes translating full-length viral RNA can shift back by 1 nucleotide at a specific site defined by the presence of both a slippery sequence and a downstream stimulatory element made of an extensive secondary structure. This so-called frameshift mechanism could become a target for the development of novel antiviral strategies. A different recoding strategy is used by other retroviruses, such as murine leukemia viruses, to synthesize the Gag-Pol precursor; in this case, a stop codon is suppressed in a readthrough process, again due to the presence of a specific structure adopted by the mRNA. Development of antiframeshift agents will greatly benefit from the availability of a simple animal and virus model. For this purpose, the murine leukemia virus readthrough region was rendered inactive by mutagenesis and the frameshift region of human immunodeficiency virus was inserted to generate a chimeric provirus. This substitution of readthrough by frameshift allows the synthesis of viral proteins, and the chimeric provirus sequence was found to generate infectious viruses. This system could be a most interesting alternative to study ribosomal frameshift in the context of a virus amenable to the use of a simple animal model. PMID:12584361
Simple and Flexible Self-Reproducing Structures in Asynchronous Cellular Automata and Their Dynamics
NASA Astrophysics Data System (ADS)
Huang, Xin; Lee, Jia; Yang, Rui-Long; Zhu, Qing-Sheng
2013-03-01
Self-reproduction on asynchronous cellular automata (ACAs) has attracted wide attention due to the evident artifacts induced by synchronous updating. Asynchronous updating, which allows cells to undergo transitions independently at random times, might be more compatible with the natural processes occurring at micro-scale, but the price is an increase in the complexity an ACA needs in order to accomplish stable self-reproduction. This paper proposes a novel model of self-timed cellular automata (STCAs), a special type of ACAs, where unsheathed loops are able to duplicate themselves reliably in parallel. The removal of the sheath not only allows various loops with more flexible and compact structures to replicate themselves, but also reduces the number of cell states of the STCA compared to the previous model adopting sheathed loops [Y. Takada, T. Isokawa, F. Peper and N. Matsui, Physica D 227, 26 (2007)]. The lack of a sheath, on the other hand, often tends to cause much more complicated interactions among loops, when all of them struggle independently to stretch out their constructing arms at the same time. In particular, such intense collisions may even cause the emergence of a mess of twisted constructing arms in the cellular space. By using a simple and natural method, our self-reproducing loops (SRLs) are able to retract their arms successively, thereby disentangling from the mess successfully.
Surface acoustic wave (SAW) vibration sensors.
Filipiak, Jerzy; Solarz, Lech; Steczko, Grzegorz
2011-01-01
In the paper a feasibility study on the use of surface acoustic wave (SAW) vibration sensors for electronic warning systems is presented. The system is assembled from concatenated SAW vibration sensors based on a SAW delay line manufactured on a surface of a piezoelectric plate. Vibrations of the plate are transformed into electric signals that allow identification of the sensor and localization of a threat. The theoretical study of sensor vibrations leads us to the simple isotropic model with one degree of freedom. This model allowed an explicit description of the sensor plate movement and identification of the vibrating sensor. Analysis of frequency response of the ST-cut quartz sensor plate and a damping speed of its impulse response has been conducted. The analysis above was the basis to determine the ranges of parameters for vibrating plates to be useful in electronic warning systems. Generally, operation of electronic warning systems with SAW vibration sensors is based on the analysis of signal phase changes at the working frequency of delay line after being transmitted via two circuits of concatenated four-terminal networks. Frequencies of phase changes are equal to resonance frequencies of vibrating plates of sensors. The amplitude of these phase changes is proportional to the amplitude of vibrations of a sensor plate. Both pieces of information may be sent and recorded jointly by a simple electrical unit.
Evaluation of on-board hydrogen storage methods for hypersonic vehicles
NASA Technical Reports Server (NTRS)
Akyurtlu, Ates; Akyurtlu, J. F.; Adeyiga, A. A.; Perdue, Samara; Northam, G. B.
1989-01-01
Hydrogen is the foremost candidate as a fuel for use in high-speed transport. Since any aircraft moving at hypersonic speeds must have a very slender body, means of decreasing the storage volume requirements below that for liquid hydrogen are needed. The total performance of the hypersonic plane needs to be considered for the evaluation of candidate fuel and storage systems. To accomplish this, a simple model for the performance of a hypersonic plane is presented. To allow for the use of different engines and fuels during different phases of flight, the total trajectory is divided into three phases: subsonic-supersonic, hypersonic and rocket propulsion phase. The fuel fraction for the first phase is found by a simple energy balance using an average thrust-to-drag ratio for this phase. The hypersonic flight phase is investigated in more detail by taking small altitude increments. This approach allowed the use of flight profiles other than the constant dynamic pressure flight. The effect of fuel volume on drag, structural mass and tankage mass was introduced through simplified equations involving the characteristic dimension of the plane. The propellant requirement for the last phase is found by employing the basic rocket equations. The candidate fuel systems, such as the cryogenic fuel combinations and solid and liquid endothermic hydrogen generators, are first screened thermodynamically with respect to their energy densities and cooling capacities and then evaluated using the above model.
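The "basic rocket equation" step can be sketched as follows; the velocity increment and specific impulse are illustrative numbers, not values from the study.

```python
import math

def propellant_fraction(delta_v, isp, g0=9.80665):
    """Ideal (Tsiolkovsky) rocket equation: fraction of the initial mass that
    must be propellant to achieve a velocity increment delta_v (m/s)."""
    return 1.0 - math.exp(-delta_v / (isp * g0))

# Hypothetical final-phase burn: 2.5 km/s on a LOX/LH2 engine with Isp = 450 s
print(propellant_fraction(2500.0, 450.0))   # ≈ 0.43
```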
A Numerical and Experimental Study of Damage Growth in a Composite Laminate
NASA Technical Reports Server (NTRS)
McElroy, Mark; Ratcliffe, James; Czabaj, Michael; Wang, John; Yuan, Fuh-Gwo
2014-01-01
The present study has three goals: (1) perform an experiment where a simple laminate damage process can be characterized in high detail; (2) evaluate the performance of existing commercially available laminate damage simulation tools by modeling the experiment; (3) observe and understand the underlying physics of damage in a composite honeycomb sandwich structure subjected to low-velocity impact. A quasi-static indentation experiment has been devised to provide detailed information about a simple mixed-mode damage growth process. The test specimens consist of an aluminum honeycomb core with a cross-ply laminate facesheet supported on a stiff uniform surface. When the sample is subjected to an indentation load, the honeycomb core provides support to the facesheet resulting in a gradual and stable damage growth process in the skin. This enables real time observation as a matrix crack forms, propagates through a ply, and then causes a delamination. Finite element analyses were conducted in ABAQUS/Explicit™ 6.13 that used continuum and cohesive modeling techniques to simulate facesheet damage and a geometric and material nonlinear model to simulate core crushing. The high fidelity of the experimental data allows a detailed investigation and discussion of the accuracy of each numerical modeling approach.
NMR signals within the generalized Langevin model for fractional Brownian motion
NASA Astrophysics Data System (ADS)
Lisý, Vladimír; Tóthová, Jana
2018-03-01
The methods of Nuclear Magnetic Resonance belong to the best developed and often used tools for studying random motion of particles in different systems, including soft biological tissues. In the long-time limit the current mathematical description of the experiments allows proper interpretation of measurements of normal and anomalous diffusion. The shorter-time dynamics is however correctly considered only in a few works that do not go beyond the standard memoryless Langevin description of the Brownian motion (BM). In the present work, the attenuation function S(t) for an ensemble of spin-bearing particles in a magnetic-field gradient, expressed in a form applicable for any kind of stationary stochastic dynamics of spins with or without a memory, is calculated in the frame of the model of fractional BM. The solution of the model for particles trapped in a harmonic potential is obtained in an exceedingly simple way and used for the calculation of S(t). In the limit of free particles coupled to a fractal heat bath, the results compare favorably with experiments acquired in human neuronal tissues. The effect of the trap is demonstrated by introducing a simple model for the generalized diffusion coefficient of the particle.
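As a sanity check of what S(t) represents, the memoryless (ordinary Brownian motion) limit can be simulated directly and compared with the standard free-diffusion result for a constant gradient; all parameter values below are arbitrary illustrative choices, and the fractional, memory-bearing case treated in the paper is not reproduced here.

```python
import numpy as np

# S(t) = |<exp(i*gamma*g*integral x dt')>| for ordinary (memoryless) Brownian
# motion in a constant gradient g -- the simplest limiting case of the model.
rng = np.random.default_rng(0)
gamma, g, D = 2.675e8, 0.05, 2e-9        # rad/s/T, T/m, m^2/s (water-like D)
n_paths, n_steps, dt = 5000, 200, 1e-4

steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_paths, n_steps))
x = np.cumsum(steps, axis=1)                      # positions along the gradient
phase = gamma * g * np.cumsum(x, axis=1) * dt     # accumulated precession phase
S_numeric = np.abs(np.exp(1j * phase).mean(axis=0))

t = dt * np.arange(1, n_steps + 1)
S_analytic = np.exp(-(gamma * g) ** 2 * D * t**3 / 3.0)
print(S_numeric[-1], S_analytic[-1])              # both ≈ 0.39 with these values
```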
Spatial Evolution of Human Dialects
NASA Astrophysics Data System (ADS)
Burridge, James
2017-07-01
The geographical pattern of human dialects is a result of history. Here, we formulate a simple spatial model of language change which shows that the final result of this historical evolution may, to some extent, be predictable. The model shows that the boundaries of language dialect regions are controlled by a length minimizing effect analogous to surface tension, mediated by variations in population density which can induce curvature, and by the shape of coastline or similar borders. The predictability of dialect regions arises because these effects will drive many complex, randomized early states toward one of a smaller number of stable final configurations. The model is able to reproduce observations and predictions of dialectologists. These include dialect continua, isogloss bundling, fanning, the wavelike spread of dialect features from cities, and the impact of human movement on the number of dialects that an area can support. The model also provides an analytical form for Séguy's curve giving the relationship between geographical and linguistic distance, and a generalization of the curve to account for the presence of a population center. A simple modification allows us to analytically characterize the variation of language use by age in an area undergoing linguistic change.
Using energy budgets to combine ecology and toxicology in a mammalian sentinel species
NASA Astrophysics Data System (ADS)
Desforges, Jean-Pierre W.; Sonne, Christian; Dietz, Rune
2017-04-01
Process-driven modelling approaches can resolve many of the shortcomings of traditional descriptive and non-mechanistic toxicology. We developed a simple dynamic energy budget (DEB) model for the mink (Mustela vison), a sentinel species in mammalian toxicology, which coupled animal physiology, ecology and toxicology, in order to mechanistically investigate the accumulation and adverse effects of lifelong dietary exposure to persistent environmental toxicants, most notably polychlorinated biphenyls (PCBs). Our novel mammalian DEB model accurately predicted, based on energy allocations to the interconnected metabolic processes of growth, development, maintenance and reproduction, lifelong patterns in mink growth, reproductive performance and dietary accumulation of PCBs as reported in the literature. Our model results were consistent with empirical data from captive and free-ranging studies in mink and other wildlife and suggest that PCB exposure can have significant population-level impacts resulting from targeted effects on fetal toxicity, kit mortality and growth and development. Our approach provides a simple and cross-species framework to explore the mechanistic interactions of physiological processes and ecotoxicology, thus allowing for a deeper understanding and interpretation of stressor-induced adverse effects at all levels of biological organization.
NASA Astrophysics Data System (ADS)
Lechner, H. N.; Waite, G. P.; Wauthier, D. C.; Escobar-Wolf, R. P.; Lopez-Hetland, B.
2017-12-01
Geodetic data from an eight-station GPS network at Pacaya volcano, Guatemala, allow us to produce a simple analytical model of deformation sources associated with the 2010 eruption and the eruptive period in 2013-2014. Deformation signals for both eruptive time-periods indicate downward vertical and outward horizontal motion at several stations surrounding the volcano. The objective of this research was to better understand the magmatic plumbing system and sources of this deformation. Because this down-and-out displacement is difficult to explain with a single source, we chose a model that includes a combination of a dike and a spherical source. Our modelling suggests that deformation is dominated by the inflation of a shallow dike seated high within the volcanic edifice and deflation of a deeper, spherical source below the SW flank of the volcano. The source parameters for the dike feature are in good agreement with the observed orientation of recent vent emplacements on the edifice as well as the horizontal displacement, while the parameters for the deeper spherical source accommodate the downward vertical motion. This study presents GPS observations at Pacaya dating back to 2009 and provides a glimpse of simple models of possible deformation sources.
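A common way to sketch the spherical-source part of such a model is the point ("Mogi") source approximation; the volume change, depth and station distances below are invented for illustration and are not the Pacaya source parameters.

```python
import numpy as np

def mogi_displacement(r, depth, dV, nu=0.25):
    """Surface displacement from a point ('Mogi') pressure source with volume
    change dV (m^3) at the given depth (m), at radial distance r (m)."""
    R3 = (r**2 + depth**2) ** 1.5
    u_r = (1 - nu) * dV * r / (np.pi * R3)       # horizontal (radial), m
    u_z = (1 - nu) * dV * depth / (np.pi * R3)   # vertical (up for dV > 0), m
    return u_r, u_z

# Hypothetical deflating source: -2e6 m^3 at 5 km depth, stations 1-8 km away
r = np.array([1e3, 2e3, 4e3, 8e3])
u_r, u_z = mogi_displacement(r, 5e3, -2e6)
print(u_r * 1e3, u_z * 1e3)   # millimetres; negative u_z means subsidence
```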
Thermal dark matter through the Dirac neutrino portal
NASA Astrophysics Data System (ADS)
Batell, Brian; Han, Tao; McKeen, David; Haghi, Barmak Shams Es
2018-04-01
We study a simple model of thermal dark matter annihilating to standard model neutrinos via the neutrino portal. A (pseudo-)Dirac sterile neutrino serves as a mediator between the visible and the dark sectors, while an approximate lepton number symmetry allows for a large neutrino Yukawa coupling and, in turn, efficient dark matter annihilation. The dark sector consists of two particles, a Dirac fermion and complex scalar, charged under a symmetry that ensures the stability of the dark matter. A generic prediction of the model is a sterile neutrino with a large active-sterile mixing angle that decays primarily invisibly. We derive existing constraints and future projections from direct detection experiments, colliders, rare meson and tau decays, electroweak precision tests, and small scale structure observations. Along with these phenomenological tests, we investigate the consequences of perturbativity and scalar mass fine tuning on the model parameter space. A simple, conservative scheme to confront the various tests with the thermal relic target is outlined, and we demonstrate that much of the cosmologically-motivated parameter space is already constrained. We also identify new probes of this scenario such as multibody kaon decays and Drell-Yan production of W bosons at the LHC.
A simple microviscometric approach based on Brownian motion tracking.
Hnyluchová, Zuzana; Bjalončíková, Petra; Karas, Pavel; Mravec, Filip; Halasová, Tereza; Pekař, Miloslav; Kubala, Lukáš; Víteček, Jan
2015-02-01
Viscosity, an integral property of a liquid, is traditionally determined by mechanical instruments. The most pronounced disadvantage of such an approach is the requirement of a large sample volume, which poses a serious obstacle, particularly in biology and biophysics when working with limited samples. Scaling down the required volume by means of microviscometry based on tracking the Brownian motion of particles can provide a reasonable alternative. In this paper, we report a simple microviscometric approach which can be conducted with common laboratory equipment. The core of this approach consists of a freely available standalone script to process particle trajectory data based on a Newtonian model. In our study, this setup allowed the sample to be scaled down to 10 μl. The utility of the approach was demonstrated using model solutions of glycerine, hyaluronate, and mouse blood plasma. Therefore, this microviscometric approach based on a newly developed freely available script can be suggested for determination of the viscosity of small biological samples (e.g., body fluids).
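A minimal sketch of the Newtonian-model estimate (viscosity from the mean-squared displacement of tracked particles via the Stokes-Einstein relation); the synthetic tracks below stand in for real trajectory data.

```python
import numpy as np

K_B = 1.380649e-23  # J/K

def viscosity_from_tracks(tracks_m, dt, radius_m, temperature_k=298.15):
    """Estimate viscosity from 2D Brownian particle tracks (Newtonian model).
    tracks_m: array (n_particles, n_frames, 2) of positions in metres."""
    disp = tracks_m[:, 1:, :] - tracks_m[:, :-1, :]
    msd_per_step = np.mean(np.sum(disp**2, axis=-1))   # <r^2> over one frame
    D = msd_per_step / (4.0 * dt)                      # 2D diffusion coefficient
    return K_B * temperature_k / (6.0 * np.pi * D * radius_m)

# Synthetic check: 0.5 um radius particles diffusing in a 1 mPa.s liquid
rng = np.random.default_rng(1)
eta_true, r_p, dt = 1e-3, 0.5e-6, 0.05
D_true = K_B * 298.15 / (6 * np.pi * eta_true * r_p)
steps = rng.normal(0, np.sqrt(2 * D_true * dt), size=(200, 400, 2))
tracks = np.cumsum(steps, axis=1)
print(viscosity_from_tracks(tracks, dt, r_p))   # should be close to 1e-3 Pa.s
```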
Integrating individual movement behaviour into dispersal functions.
Heinz, Simone K; Wissel, Christian; Conradt, Larissa; Frank, Karin
2007-04-21
Dispersal functions are an important tool for integrating dispersal into complex models of population and metapopulation dynamics. Most approaches in the literature are very simple, with the dispersal functions containing only one or two parameters which summarise all the effects of movement behaviour, such as different movement patterns or different perceptual abilities. The summarising nature of these parameters makes assessing the effect of one particular behavioural aspect difficult. We present a way of integrating movement behavioural parameters into a particular dispersal function in a simple way. Using a spatial individual-based simulation model for simulating different movement behaviours, we derive fitting functions for the functional relationship between the parameters of the dispersal function and several details of movement behaviour. This is done for three different movement patterns (loops, Archimedean spirals, random walk). Additionally, we provide measures which characterise the shape of the dispersal function and are interpretable in terms of landscape connectivity. This allows an ecological interpretation of the relationships found.
Exploring the neural bases of goal-directed motor behavior using fully resolved simulations
NASA Astrophysics Data System (ADS)
Patel, Namu; Patankar, Neelesh A.
2016-11-01
Undulatory swimming is an ideal problem for understanding the neural architecture for motor control and movement; a vertebrate's robust morphology and adaptive locomotive gait allows the swimmer to navigate complex environments. Simple mathematical models for neurally activated muscle contractions have been incorporated into a swimmer immersed in fluid. Muscle contractions produce bending moments which determine the swimming kinematics. The neurobiology of goal-directed locomotion is explored using fast, efficient, and fully resolved constraint-based immersed boundary simulations. Hierarchical control systems tune the strength, frequency, and duty cycle for neural activation waves to produce multifarious swimming gaits or synergies. Simulation results are used to investigate why the basal ganglia and other control systems may command a particular neural pattern to accomplish a task. Using simple neural models, the effect of proprioceptive feedback on refining the body motion is demonstrated. Lastly, the ability for a learned swimmer to successfully navigate a complex environment is tested. This work is supported by NSF CBET 1066575 and NSF CMMI 0941674.
El Rassy, H; Perrard, A; Pierre, A C
2003-03-03
Highly porous silica aerogels with differing balances of hydrophobic and hydrophilic functionalities were studied as a new immobilization medium for enzymes. Two types of lipases from Candida rugosa and Burkholderia cepacia were homogeneously dispersed in wet gel precursors before gelation. The materials obtained were compared in a simple model reaction: transesterification of vinyl laurate by 1-octanol. To allow a better comparison of the hydrophobic/hydrophilic action of the solid, very open aerogel networks with traditional organic hydrophobic/hydrophilic liquid solvents, this reaction was studied in mixtures containing different proportions of 2-methyl-2-butanol, isooctane, and water. The results are discussed in relation to the porous and hydrophobic nature of aerogels, characterized by nitrogen adsorption. It was found that silica aerogels can be considered as "solid" solvents for the enzymes, able to provide hydrophobic/hydrophilic characteristics different from those prevailing in the liquid surrounding the aerogels. A simple mechanism of action for these aerogel networks is proposed.
Unifying Model-Based and Reactive Programming within a Model-Based Executive
NASA Technical Reports Server (NTRS)
Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)
1999-01-01
Real-time, model-based deduction has recently emerged as a vital component in AI's toolbox for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.
2014-01-01
Background The spread of an infectious disease is determined by biological and social factors. Models based on cellular automata are adequate to describe such natural systems consisting of a massive collection of simple interacting objects. They characterize the time evolution of the global system as the emergent behaviour resulting from the interaction of the objects, whose behaviour is defined through a set of simple rules that encode the individual behaviour and the transmission dynamic. Methods An epidemic is characterized through an individual-based model built upon cellular automata. In the proposed model, each individual of the population is represented by a cell of the automata. This way of modeling an epidemic situation makes it possible to define the characteristics of each individual, establish different scenarios and implement control strategies. Results A cellular automata model to study the time evolution of a heterogeneous population through the various stages of disease was proposed, allowing the inclusion of individual heterogeneity, geographical characteristics and social factors that determine the dynamics of the disease. Different assumptions made to build the classical models were evaluated, leading to the following results: i) for low contact rates (as in quarantine processes or low-density population areas) the number of infective individuals is lower than in areas where the contact rate is higher, and ii) different initial spatial distributions of infected individuals yield different epidemic dynamics, due to their influence on the transition rate and the reproductive ratio of the disease. Conclusions The contact rate and spatial distribution have a central role in the spread of a disease. For low-density populations the spread is very slow and the number of infected individuals is lower than in highly populated areas. The spatial distribution of the population and the disease focus, as well as the geographical characteristics of the area, play a central role in the dynamics of the disease. PMID:24725804
López, Leonardo; Burguerner, Germán; Giovanini, Leonardo
2014-04-12
The spread of an infectious disease is determined by biological and social factors. Models based on cellular automata are adequate to describe such natural systems consisting of a massive collection of simple interacting objects. They characterize the time evolution of the global system as the emergent behaviour resulting from the interaction of the objects, whose behaviour is defined through a set of simple rules that encode the individual behaviour and the transmission dynamic. An epidemic is characterized through an individual-based model built upon cellular automata. In the proposed model, each individual of the population is represented by a cell of the automata. This way of modeling an epidemic situation makes it possible to define the characteristics of each individual, establish different scenarios and implement control strategies. A cellular automata model to study the time evolution of a heterogeneous population through the various stages of disease was proposed, allowing the inclusion of individual heterogeneity, geographical characteristics and social factors that determine the dynamics of the disease. Different assumptions made to build the classical models were evaluated, leading to the following results: i) for low contact rates (as in quarantine processes or low-density population areas) the number of infective individuals is lower than in areas where the contact rate is higher, and ii) different initial spatial distributions of infected individuals yield different epidemic dynamics, due to their influence on the transition rate and the reproductive ratio of the disease. The contact rate and spatial distribution have a central role in the spread of a disease. For low-density populations the spread is very slow and the number of infected individuals is lower than in highly populated areas. The spatial distribution of the population and the disease focus, as well as the geographical characteristics of the area, play a central role in the dynamics of the disease.
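A minimal sketch of an epidemic cellular automaton of this general kind (not the authors' model): a stochastic SIR rule on a grid with periodic boundaries and a single initial focus; all rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, gamma, steps = 0.3, 0.1, 100       # per-neighbour infection prob., recovery prob.
grid = np.zeros((100, 100), dtype=int)   # 0 = susceptible, 1 = infected, 2 = recovered
grid[50, 50] = 1                         # single initial focus

for _ in range(steps):
    inf = (grid == 1).astype(int)
    # infected von Neumann neighbours of every cell (periodic boundaries)
    n_inf = (np.roll(inf, 1, 0) + np.roll(inf, -1, 0) +
             np.roll(inf, 1, 1) + np.roll(inf, -1, 1))
    p_infect = 1.0 - (1.0 - beta) ** n_inf
    new_cases = (grid == 0) & (rng.random(grid.shape) < p_infect)
    recoveries = (grid == 1) & (rng.random(grid.shape) < gamma)
    grid[new_cases] = 1
    grid[recoveries] = 2

print("infected:", (grid == 1).sum(), "recovered:", (grid == 2).sum())
```

Lowering beta (for instance to mimic quarantine) or seeding several distant foci changes the resulting epidemic curve, which is the kind of scenario comparison described above.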
NASA Astrophysics Data System (ADS)
Dannberg, J.; Heister, T.; Grove, R. R.; Gassmoeller, R.; Spiegelman, M. W.; Bangerth, W.
2017-12-01
Earth's surface shows many features whose genesis can only be understood through the interplay of geodynamic and thermodynamic models. This is particularly important in the context of melt generation and transport: mantle convection determines the distribution of temperature and chemical composition, while the melting process itself is controlled by the thermodynamic relations and in turn influences the properties and the transport of melt. Here, we present our extension of the community geodynamics code ASPECT, which solves the equations of coupled magma/mantle dynamics and allows the integration of different parametrizations of reactions and phase transitions: they may alternatively be implemented as simple analytical expressions, look-up tables, or computed by a thermodynamics software package. As ASPECT uses a variety of numerical methods and solvers, this also gives us the opportunity to compare different approaches to modelling the melting process. In particular, we will elaborate on the spatial and temporal resolution that is required to accurately model phase transitions, and show the potential of adaptive mesh refinement when applied to melt generation and transport. We will assess the advantages and disadvantages of iterating between fluid dynamics and chemical reactions derived from thermodynamic models within each time step, or decoupling them, allowing for different time step sizes. Beyond that, we will expand on the functionality required for an interface between computational thermodynamics and fluid dynamics models from the geodynamics side. Finally, using a simple example of melting of a two-phase, two-component system, we compare different time-stepping and solver schemes in terms of accuracy and efficiency, depending on the relative time scales of fluid flow and chemical reactions. Our software provides a framework to integrate thermodynamic models in high-resolution, 3D simulations of coupled magma/mantle dynamics, and can be used as a tool to study links between physical processes and geochemical signals in the Earth.
A simple technique of laparoscopic port closure allowing wound extension.
Christey, G R; Poole, G
2002-04-01
Reliable and safe access to the abdominal cavity and efficient removal of the resected gallbladder are essential to laparoscopic cholecystectomy. The unpredictable size of the cholecystectomy specimen can sometimes lead to frustration at the time of removal. A simple technique has been developed that allows for tissue extraction and easy fascial closure regardless of the size of the specimen. This is achieved by using a four bite "U-shaped" purse string at the time of Hasson insertion, with cephalad advancement of the proximal two bites. This allows for variable wound extension and secure closure, without the need for additional sutures.
McNamara, C; Naddy, B; Rohan, D; Sexton, J
2003-10-01
The Monte Carlo computational system for stochastic modelling of dietary exposure to food chemicals and nutrients is presented. This system was developed through a European Commission-funded research project. It is accessible as a Web-based application service. The system allows and supports very significant complexity in the data sets used as the model input, but provides a simple, general purpose, linear kernel for model evaluation. Specific features of the system include the ability to enter (arbitrarily) complex mathematical or probabilistic expressions at each and every input data field, automatic bootstrapping on subjects and on subject food intake diaries, and custom kernels to apply brand information such as market share and loyalty to the calculation of food and chemical intake.
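A toy version of such a stochastic exposure calculation might look as follows; the foods, distributions and parameter values are invented placeholders, and the real system's bootstrapping and brand-loyalty kernels are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim = 100_000

# Illustrative two-food example: daily intake (g/day) and chemical
# concentration (mg/kg), both drawn from lognormal distributions.
foods = {
    "bread":  {"intake": (4.5, 0.4), "conc": (-2.0, 0.5)},
    "cheese": {"intake": (3.0, 0.6), "conc": (-1.2, 0.7)},
}

exposure = np.zeros(n_sim)   # mg/day of the chemical
for food, p in foods.items():
    intake_g = rng.lognormal(*p["intake"], n_sim)
    conc_mg_per_kg = rng.lognormal(*p["conc"], n_sim)
    exposure += intake_g / 1000.0 * conc_mg_per_kg

print(np.percentile(exposure, [50, 95, 99]))   # median and upper-tail exposure
```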
The viscosity of magmatic silicate liquids: A model for calculation
NASA Technical Reports Server (NTRS)
Bottinga, Y.; Weill, D. F.
1971-01-01
A simple model has been designed to allow reasonably accurate calculations of viscosity as a function of temperature and composition. The problem of predicting viscosities of anhydrous silicate liquids has been investigated since such viscosity numbers are applicable to many extrusive melts and to nearly dry magmatic liquids in general. The fluidizing action of water dissolved in silicate melts is well recognized and it is now possible to predict the effect of water content on viscosity in a semiquantitative way. Water was not incorporated directly into the model. Viscosities of anhydrous compositions were calculated and, where necessary, the effect of added water was estimated. The model can be easily modified to incorporate the effect of water whenever sufficient additional data are accumulated.
Soft modes in the perceptron model for jamming.
NASA Astrophysics Data System (ADS)
Franz, Silvio
I will show how a well-known neural network model, the perceptron, provides a simple solvable model of glassy behavior and jamming. The glassy minima of the energy function of this model can be studied in full analytic detail. This allows the identification of two kinds of soft modes: the first associated with the existence of a marginal glass phase and a hierarchical structure of the energy landscape, the second associated with the isostaticity and marginality of jamming. These results highlight the universality of the spectrum of normal modes in disordered systems, and open the way toward a detailed analytical understanding of the vibrational spectrum of low-temperature glasses. This work was supported by a Grant from the Simons Foundation (454941 to Silvio Franz).
Theoretical study of reactive and nonreactive turbulent coaxial jets
NASA Technical Reports Server (NTRS)
Gupta, R. N.; Wakelyn, N. T.
1976-01-01
The hydrodynamic properties and the reaction kinetics of axisymmetric coaxial turbulent jets having steady mean quantities are investigated. From the analysis, limited to free turbulent boundary layer mixing of such jets, it is found that the two-equation model of turbulence is adequate for most nonreactive flows. For the reactive flows, where an allowance must be made for second order correlations of concentration fluctuations in the finite rate chemistry for initially inhomogeneous mixture, an equation similar to the concentration fluctuation equation of a related model is suggested. For diffusion limited reactions, the eddy breakup model based on concentration fluctuations is found satisfactory and simple to use. The theoretical results obtained from these various models are compared with some of the available experimental data.
Vapor mediated droplet interactions - models and mechanisms (Part 2)
NASA Astrophysics Data System (ADS)
Benusiglio, Adrien; Cira, Nate; Prakash, Manu
2014-11-01
When deposited on clean glass a two-component binary mixture of propylene glycol and water is energetically inclined to spread, as both pure liquids do. Instead the mixture forms droplets stabilized by evaporation induced surface tension gradients, giving them unique properties such as negligible hysteresis. When two of these special droplets are deposited several radii apart they attract each other. The vapor from one droplet destabilizes the other, resulting in an attraction force which brings both droplets together. We present a flux-based model for droplet stabilization and a model which connects the vapor profile to net force. These simple models capture the static and dynamic experimental trends, and our fundamental understanding of these droplets and their interactions allowed us to build autonomous fluidic machines.
Electro-Optic Quantum Memory for Light Using Two-Level Atoms
NASA Astrophysics Data System (ADS)
Hétet, G.; Longdell, J. J.; Alexander, A. L.; Lam, P. K.; Sellars, M. J.
2008-01-01
We present a simple quantum memory scheme that allows for the storage of a light field in an ensemble of two-level atoms. The technique is analogous to the NMR gradient echo for which the imprinting and recalling of the input field are performed by controlling a linearly varying broadening. Our protocol is perfectly efficient in the limit of high optical depths and the output pulse is emitted in the forward direction. We provide a numerical analysis of the protocol together with an experiment performed in a solid state system. In close agreement with our model, the experiment shows a total efficiency of up to 15%, and a recall efficiency of 26%. We suggest simple realizable improvements for the experiment to surpass the no-cloning limit.
Versatile microrobotics using simple modular subunits
NASA Astrophysics Data System (ADS)
Cheang, U. Kei; Meshkati, Farshad; Kim, Hoyeon; Lee, Kyoungwoo; Fu, Henry Chien; Kim, Min Jun
2016-07-01
The realization of reconfigurable modular microrobots could aid drug delivery and microsurgery by allowing a single system to navigate diverse environments and perform multiple tasks. So far, microrobotic systems are limited by insufficient versatility; for instance, helical shapes commonly used for magnetic swimmers cannot effectively assemble and disassemble into different sizes and shapes. Here, by using microswimmers with simple geometries constructed of spherical particles, we show how magnetohydrodynamics can be used to assemble and disassemble modular microrobots with different physical characteristics. We develop a mechanistic physical model that we use to improve assembly strategies. Furthermore, we experimentally demonstrate the feasibility of dynamically changing the physical properties of microswimmers through assembly and disassembly in a controlled fluidic environment. Finally, we show that different configurations have different swimming properties by examining swimming speed dependence on configuration size.
Versatile microrobotics using simple modular subunits
Cheang, U Kei; Meshkati, Farshad; Kim, Hoyeon; Lee, Kyoungwoo; Fu, Henry Chien; Kim, Min Jun
2016-01-01
The realization of reconfigurable modular microrobots could aid drug delivery and microsurgery by allowing a single system to navigate diverse environments and perform multiple tasks. So far, microrobotic systems are limited by insufficient versatility; for instance, helical shapes commonly used for magnetic swimmers cannot effectively assemble and disassemble into different sizes and shapes. Here, by using microswimmers with simple geometries constructed of spherical particles, we show how magnetohydrodynamics can be used to assemble and disassemble modular microrobots with different physical characteristics. We develop a mechanistic physical model that we use to improve assembly strategies. Furthermore, we experimentally demonstrate the feasibility of dynamically changing the physical properties of microswimmers through assembly and disassembly in a controlled fluidic environment. Finally, we show that different configurations have different swimming properties by examining swimming speed dependence on configuration size. PMID:27464852
Electro-optic quantum memory for light using two-level atoms.
Hétet, G; Longdell, J J; Alexander, A L; Lam, P K; Sellars, M J
2008-01-18
We present a simple quantum memory scheme that allows for the storage of a light field in an ensemble of two-level atoms. The technique is analogous to the NMR gradient echo for which the imprinting and recalling of the input field are performed by controlling a linearly varying broadening. Our protocol is perfectly efficient in the limit of high optical depths and the output pulse is emitted in the forward direction. We provide a numerical analysis of the protocol together with an experiment performed in a solid state system. In close agreement with our model, the experiment shows a total efficiency of up to 15%, and a recall efficiency of 26%. We suggest simple realizable improvements for the experiment to surpass the no-cloning limit.
Onion-shell model for cosmic ray electrons and radio synchrotron emission in supernova remnants
NASA Technical Reports Server (NTRS)
Beck, R.; Drury, L. O.; Voelk, H. J.; Bogdan, T. J.
1985-01-01
The spectrum of cosmic ray electrons, accelerated in the shock front of a supernova remnant (SNR), is calculated in the test-particle approximation using an onion-shell model. Particle diffusion within the evolving remnant is explicitly taken into account. The particle spectrum becomes steeper with increasing radius as well as SNR age. Simple models of the magnetic field distribution allow a prediction of the intensity and spectrum of radio synchrotron emission and their radial variation. The agreement with existing observations is satisfactory in several SNRs but fails in other cases. Radiative cooling may be an important effect, especially in SNRs exploding in a dense interstellar medium.
Capillarity theory for the fly-casting mechanism
Trizac, Emmanuel; Levy, Yaakov; Wolynes, Peter G.
2010-01-01
Biomolecular folding and function are often coupled. During molecular recognition events, one of the binding partners may transiently or partially unfold, allowing more rapid access to a binding site. We describe a simple model for this fly-casting mechanism based on the capillarity approximation and polymer chain statistics. The model shows that fly casting is most effective when the protein unfolding barrier is small and the part of the chain which extends toward the target is relatively rigid. These features are often seen in known examples of fly casting in protein–DNA binding. Simulations of protein–DNA binding based on well-funneled native-topology models with electrostatic forces confirm the trends of the analytical theory. PMID:20133683
MI-Sim: A MATLAB package for the numerical analysis of microbial ecological interactions.
Wade, Matthew J; Oakley, Jordan; Harbisher, Sophie; Parker, Nicholas G; Dolfing, Jan
2017-01-01
Food webs and other classes of ecological network motifs are a means of describing feeding relationships between consumers and producers in an ecosystem. They have application across scales, where they differ only in the underlying characteristics of the organisms and substrates describing the system. Mathematical modelling, using mechanistic approaches to describe the dynamic behaviour and properties of the system through sets of ordinary differential equations, has been used extensively in ecology. Models allow simulation of the dynamics of the various motifs, and their numerical analysis provides a greater understanding of the interplay between the system components and their intrinsic properties. We have developed the MI-Sim software for use with MATLAB to allow a rigorous and rapid numerical analysis of several common ecological motifs. MI-Sim contains a series of the most commonly used motifs such as cooperation, competition and predation. It does not require detailed knowledge of mathematical analytical techniques and is offered as a single graphical user interface containing all input and output options. The tools available in the current version of MI-Sim include model simulation, steady-state existence and stability analysis, and basin of attraction analysis. The software includes seven ecological interaction motifs and seven growth function models. Unlike other system analysis tools, MI-Sim is designed as a simple and user-friendly tool specific to ecological population-type models, allowing for rapid assessment of their dynamical and behavioural properties.
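MI-Sim itself is a MATLAB package; purely as an illustration of the kind of motif it analyses, here is a sketch (in Python) of two consumers competing for one substrate with Monod growth, using invented parameter values.

```python
import numpy as np
from scipy.integrate import solve_ivp

def competition(t, y, D=0.1, S_in=10.0, mu=(0.5, 0.4), K=(1.0, 0.5), Y=(0.4, 0.4)):
    """Chemostat competition motif: one substrate S, two consumers x1, x2."""
    S, x1, x2 = y
    g1 = mu[0] * S / (K[0] + S)          # Monod growth rates
    g2 = mu[1] * S / (K[1] + S)
    dS = D * (S_in - S) - g1 * x1 / Y[0] - g2 * x2 / Y[1]
    dx1 = (g1 - D) * x1
    dx2 = (g2 - D) * x2
    return [dS, dx1, dx2]

sol = solve_ivp(competition, (0, 500), [10.0, 0.1, 0.1])
print(sol.y[:, -1])   # long-run state
```

With these values the consumer with the lower break-even substrate concentration excludes the other, the classic competitive-exclusion outcome such motif analyses probe.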
Human-machine interaction to disambiguate entities in unstructured text and structured datasets
NASA Astrophysics Data System (ADS)
Ward, Kevin; Davenport, Jack
2017-05-01
Creating entity network graphs is a manual, time-consuming process for an intelligence analyst. Beyond the traditional big data problems of information overload, individuals are often referred to by multiple names and shifting titles as they advance in their organizations over time, which quickly makes simple string or phonetic alignment methods for entities insufficient. Conversely, automated methods for relationship extraction and entity disambiguation typically produce questionable results with no way for users to vet results, correct mistakes or influence the algorithm's future results. We present an entity disambiguation tool, DRADIS, which aims to bridge the gap between human-centric and machine-centric methods. DRADIS automatically extracts entities from multi-source datasets and models them as a complex set of attributes and relationships. Entities are disambiguated across the corpus using a hierarchical model executed in Spark, allowing it to scale to operational-sized data. Resolution results are presented to the analyst complete with sourcing information for each mention and relationship, allowing analysts to quickly vet the correctness of results as well as correct mistakes. Corrected results are used by the system to refine the underlying model, allowing analysts to optimize the general model to better deal with their operational data. Providing analysts with the ability to validate and correct the model to produce a system they can trust enables them to better focus their time on producing higher quality analysis products.
Laser Scanning System for Pressure and Temperature Paints
NASA Technical Reports Server (NTRS)
Sullivan, John
1997-01-01
Acquiring pressure maps of aerodynamic surfaces is very important for improving and validating the performance of aerospace vehicles. Traditional pressure measurements are taken with pressure taps embedded in the model surface that are connected to transducers. While pressure taps allow highly accurate measurements to be acquired, they do have several drawbacks. Pressure taps do not give good spatial resolution due to the need for individual pressure tubes, compounded by limited space available inside models. Also, building a model proves very costly if taps are needed because of the large amount of labor necessary to drill, connect and test each one. The typical cost to install one tap is about $200. Recently, a new method for measuring pressure on aerodynamic surfaces has been developed utilizing a technology known as pressure sensitive paints (PSP). Using PSP, pressure distributions can be acquired optically with high spatial resolution and simple model preparation. Flow structures can be easily visualized using PSP, but are missed using low spatial resolution arrays of pressure taps. PSP even allows pressure distributions to be found on rotating machinery where previously this has been extremely difficult or even impossible. The goal of this research is to develop a laser scanning system for use with pressure sensitive paints that allows accurate pressure measurements to be obtained on various aerodynamic surfaces ranging from wind tunnel models to high speed jet engine compressor blades.
Neutral null models for diversity in serial transfer evolution experiments.
Harpak, Arbel; Sella, Guy
2014-09-01
Evolution experiments with microorganisms coupled with genome-wide sequencing now allow for the systematic study of population genetic processes under a wide range of conditions. In learning about these processes in natural, sexual populations, neutral models that describe the behavior of diversity and divergence summaries have played a pivotal role. It is therefore natural to ask whether neutral models, suitably modified, could be useful in the context of evolution experiments. Here, we introduce coalescent models for polymorphism and divergence under the most common experimental evolution assay, a serial transfer experiment. This relatively simple setting allows us to address several issues that could affect diversity patterns in evolution experiments, whether selection is operating or not: the transient behavior of neutral polymorphism in an experiment beginning from a single clone, the effects of randomness in the timing of cell division and noisiness in population size in the dilution stage. In our analyses and discussion, we emphasize the implications for experiments aimed at measuring diversity patterns and making inferences about population genetic processes based on these measurements.
Multi-Agent Market Modeling of Foreign Exchange Rates
NASA Astrophysics Data System (ADS)
Zimmermann, Georg; Neuneier, Ralph; Grothmann, Ralph
A market mechanism is basically driven by a superposition of decisions of many agents optimizing their profit. The economic price dynamics are a consequence of the cumulated excess demand/supply created on this micro level. The behavior analysis of a small number of agents is well understood through game theory. In the case of a large number of agents one may use the limiting case that an individual agent does not have an influence on the market, which allows the aggregation of agents by statistical methods. In contrast to this restriction, we can omit the assumption of an atomic market structure if we model the market through a multi-agent approach. The contribution of the mathematical theory of neural networks to market price formation is mostly seen on the econometric side: neural networks allow the fitting of high-dimensional nonlinear dynamic models. Furthermore, in our opinion, there is a close relationship between economics and the modeling ability of neural networks, because a neuron can be interpreted as a simple model of decision making. With this in mind, a neural network models the interaction of many decisions and, hence, can be interpreted as the price formation mechanism of a market.
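A toy numerical reading of the "neuron as decision maker" idea (not the authors' model): each agent maps recent returns to a bounded buy/sell decision, and the cumulated excess demand moves the price; all weights and coefficients are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
n_agents, n_lags, n_steps = 200, 5, 300
weights = rng.normal(0, 1, size=(n_agents, n_lags))   # one "decision neuron" per agent

prices = [1.0] * (n_lags + 1)
for _ in range(n_steps):
    returns = np.diff(np.log(prices[-(n_lags + 1):]))   # last n_lags log-returns
    decisions = np.tanh(weights @ returns)               # in (-1, 1): sell ... buy
    excess_demand = decisions.mean()
    # price responds to cumulated excess demand plus a small exogenous noise term
    prices.append(prices[-1] * np.exp(0.01 * excess_demand + 0.002 * rng.normal()))

print(prices[-5:])
```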
Colliders as a simultaneous probe of supersymmetric dark matter and Terascale cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barenboim, Gabriela; /Valencia U.; Lykken, Joseph D.
2006-08-01
Terascale supersymmetry has the potential to provide a natural explanation of the dominant dark matter component of the standard ΛCDM cosmology. However once we impose the constraints on minimal supersymmetry parameters from current particle physics data, a satisfactory dark matter abundance is no longer prima facie natural. This Neutralino Tuning Problem could be a hint of nonstandard cosmology during and/or after the Terascale era. To quantify this possibility, we introduce an alternative cosmological benchmark based upon a simple model of quintessential inflation. This benchmark has no free parameters, so for a given supersymmetry model it allows an unambiguous prediction of the dark matter relic density. As an example, we scan over the parameter space of the CMSSM, comparing the neutralino relic density predictions with the bounds from WMAP. We find that the WMAP-allowed regions of the CMSSM are an order of magnitude larger if we use the alternative cosmological benchmark, as opposed to ΛCDM. Initial results from the CERN Large Hadron Collider will distinguish between the two allowed regions.
Colliders as a simultaneous probe of supersymmetric dark matter and Terascale cosmology
NASA Astrophysics Data System (ADS)
Barenboim, Gabriela; Lykken, Joseph D.
2006-12-01
Terascale supersymmetry has the potential to provide a natural explanation of the dominant dark matter component of the standard ΛCDM cosmology. However once we impose the constraints on minimal supersymmetry parameters from current particle physics data, a satisfactory dark matter abundance is no longer prima facie natural. This Neutralino Tuning Problem could be a hint of nonstandard cosmology during and/or after the Terascale era. To quantify this possibility, we introduce an alternative cosmological benchmark based upon a simple model of quintessential inflation. This benchmark has no free parameters, so for a given supersymmetry model it allows an unambiguous prediction of the dark matter relic density. As an example, we scan over the parameter space of the CMSSM, comparing the neutralino relic density predictions with the bounds from WMAP. We find that the WMAP-allowed regions of the CMSSM are an order of magnitude larger if we use the alternative cosmological benchmark, as opposed to ΛCDM. Initial results from the CERN Large Hadron Collider will distinguish between the two allowed regions.
HIV Treatment and Prevention: A Simple Model to Determine Optimal Investment.
Juusola, Jessie L; Brandeau, Margaret L
2016-04-01
To create a simple model to help public health decision makers determine how to best invest limited resources in HIV treatment scale-up and prevention. A linear model was developed for determining the optimal mix of investment in HIV treatment and prevention, given a fixed budget. The model incorporates estimates of secondary health benefits accruing from HIV treatment and prevention and allows for diseconomies of scale in program costs and subadditive benefits from concurrent program implementation. Data sources were published literature. The target population was individuals infected with HIV or at risk of acquiring it. Illustrative examples of interventions include preexposure prophylaxis (PrEP), community-based education (CBE), and antiretroviral therapy (ART) for men who have sex with men (MSM) in the US. Outcome measures were incremental cost, quality-adjusted life-years gained, and HIV infections averted. Base case analysis indicated that it is optimal to invest in ART before PrEP and to invest in CBE before scaling up ART. Diseconomies of scale reduced the optimal investment level. Subadditivity of benefits did not affect the optimal allocation for relatively low implementation levels. The sensitivity analysis indicated that investment in ART before PrEP was optimal in all scenarios tested. Investment in ART before CBE became optimal when CBE reduced risky behavior by 4% or less. Limitations of the study are that dynamic effects are approximated with a static model. Our model provides a simple yet accurate means of determining optimal investment in HIV prevention and treatment. For MSM in the US, HIV control funds should be prioritized on inexpensive, effective programs like CBE, then on ART scale-up, with only minimal investment in PrEP. © The Author(s) 2015.
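A stripped-down version of such a linear allocation can be written as a small linear program; the costs, benefits, coverage caps and budget below are invented placeholders, not the study's estimates.

```python
import numpy as np
from scipy.optimize import linprog

# Toy allocation: choose coverage levels for three programmes (CBE, ART
# scale-up, PrEP) to maximise QALYs gained subject to a fixed budget.
qaly_per_person = np.array([0.02, 0.15, 0.05])       # QALYs per person covered
cost_per_person = np.array([50.0, 8000.0, 10000.0])  # $ per person covered per year
max_coverage = np.array([500_000, 60_000, 60_000])
budget = 400e6

res = linprog(
    c=-qaly_per_person,                       # maximise => minimise the negative
    A_ub=[cost_per_person], b_ub=[budget],    # total spending within budget
    bounds=[(0, m) for m in max_coverage],
    method="highs",
)
print(res.x, -res.fun)    # people covered per programme, total QALYs gained
```

With these placeholder numbers the solver fills the cheapest, most cost-effective programme first and leaves PrEP unfunded, mirroring the prioritisation described above.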
Hertäg, Loreen; Hass, Joachim; Golovko, Tatiana; Durstewitz, Daniel
2012-01-01
For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean-input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating ("in vivo-like") input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a "high-throughput" model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.
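A sketch of the closed-form f-I fitting idea, using the simpler leaky integrate-and-fire rate expression rather than the AdEx approximation used in the paper; the data points, initial guesses and bounds are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def lif_rate(I, tau_m, R, V_th, t_ref, E_L=-70.0, V_reset=-70.0):
    """Closed-form f-I curve of a leaky integrate-and-fire neuron (Hz).
    I in pA, R in GOhm, tau_m and t_ref in s, voltages in mV."""
    V_inf = E_L + R * I
    rate = np.zeros_like(I, dtype=float)
    supra = V_inf > V_th
    isi = t_ref + tau_m * np.log((V_inf[supra] - V_reset) / (V_inf[supra] - V_th))
    rate[supra] = 1.0 / isi
    return rate

# Hypothetical f-I data (pA, Hz) as would come from current-clamp step protocols
I_data = np.array([0, 50, 100, 150, 200, 250, 300], dtype=float)
f_data = np.array([0, 0, 19.0, 39.0, 55.0, 68.0, 81.0])

popt, _ = curve_fit(lif_rate, I_data, f_data,
                    p0=[0.02, 0.2, -55.0, 0.002],
                    bounds=([0.005, 0.05, -60.0, 0.0], [0.1, 1.0, -40.0, 0.01]))
print(dict(zip(["tau_m", "R", "V_th", "t_ref"], popt)))
```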
Learn about Physical Science: Simple Machines. [CD-ROM].
ERIC Educational Resources Information Center
2000
This CD-ROM, designed for students in grades K-2, explores the world of simple machines. It allows students to delve into the mechanical world and learn the ways in which simple machines make work easier. Animated demonstrations are provided of the lever, pulley, wheel, screw, wedge, and inclined plane. Activities include practical matching and…
Using a crowdsourced approach for monitoring water level in a remote Kenyan catchment
NASA Astrophysics Data System (ADS)
Weeser, Björn; Jacobs, Suzanne; Rufino, Mariana; Breuer, Lutz
2017-04-01
Hydrological models or effective water management strategies only succeed if they are based on reliable data. Decreasing costs of technical equipment lower the barrier to creating comprehensive monitoring networks and allow measurements at high spatial and temporal resolution. However, these networks depend on specialised equipment, supervision and maintenance, resulting in high running costs. This becomes particularly challenging for remote areas, and low-income countries often do not have the capacity to run such networks. Delegating simple measurements to citizens living close to relevant monitoring points may reduce costs and increase public awareness. Here we present our experiences of using a crowdsourced approach for monitoring water levels in remote catchments in Kenya. We established a low-cost system consisting of thirteen simple water level gauges and a Raspberry Pi based SMS server for data handling. Volunteers determine the water level and transmit their records using a simple text message. These messages are automatically processed and real-time feedback on the data quality is given. During the first year, more than 1200 valid, high-quality records were collected. In summary, the simple techniques for collecting, transmitting and processing data created an open platform with the potential to reach volunteers without the need for special equipment. Even though the temporal resolution of measurements cannot be controlled and peak flows might be missed, these data can still be considered a valuable enhancement for developing management strategies or for hydrological modelling.
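A sketch of how such a text message could be parsed and given immediate quality feedback; the message format, gauge identifiers and plausibility ranges are assumptions for illustration, not the deployed system's conventions.

```python
import re

# Hypothetical message format "GAUGE_ID WATER_LEVEL_CM", e.g. "K07 134".
GAUGE_RANGES_CM = {"K07": (0, 400), "K12": (0, 250)}   # per-gauge plausible range
MSG_RE = re.compile(r"^\s*([A-Z]\d{2})\s+(\d{1,4})\s*$")

def parse_reading(sms_text):
    """Return (gauge_id, level_cm) or None, plus a feedback string for the sender."""
    m = MSG_RE.match(sms_text.upper())
    if not m:
        return None, "Format not recognised. Please send: GAUGE_ID LEVEL_CM"
    gauge, level = m.group(1), int(m.group(2))
    if gauge not in GAUGE_RANGES_CM:
        return None, f"Unknown gauge {gauge}."
    lo, hi = GAUGE_RANGES_CM[gauge]
    if not lo <= level <= hi:
        return None, f"Level {level} cm outside plausible range {lo}-{hi} cm."
    return (gauge, level), "Thank you, reading recorded."

print(parse_reading("k07 134"))
print(parse_reading("K07 999"))
```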
Simple explanations and reasoning: From philosophy of science to expert systems
NASA Technical Reports Server (NTRS)
Rochowiak, Daniel
1988-01-01
A preliminary prototype of a simple explanation system was constructed. Although the system, based on the idea of storytelling, did not incorporate all of the principles of simple explanation, it did demonstrate the potential of the approach. The system incorporated a hypertext system, an inference engine, and facilities for constructing contrast-type explanations. The continued development of such a system should prove to be valuable. By extending the resources of the expert system paradigm, the knowledge engineer is not forced to learn a new set of skills, and the domain knowledge already acquired is not lost. Further, both the beginning user and the more advanced user can be accommodated. For the beginning user, corrective explanations and ES explanations provide facilities for more clearly understanding the way in which the system is functioning. For the more advanced user, the instance and state explanations allow a focus on the issues at hand. The simple model of explanation attempts to exploit and show how the why and how facilities of the expert system paradigm can be extended by attending to the pragmatics of explanation and adding texture to the ordinary pattern of reasoning in a rule-based system.
Natural electroweak breaking from a mirror symmetry.
Chacko, Z; Goh, Hock-Seng; Harnik, Roni
2006-06-16
We present "twin Higgs models," simple realizations of the Higgs boson as a pseudo Goldstone boson that protect the weak scale from radiative corrections up to scales of order 5-10 TeV. In the ultraviolet these theories have a discrete symmetry which interchanges each standard model particle with a corresponding particle which transforms under a twin or a mirror standard model gauge group. In addition, the Higgs sector respects an approximate global symmetry. When this global symmetry is broken, the discrete symmetry tightly constrains the form of corrections to the pseudo Goldstone Higgs potential, allowing natural electroweak symmetry breaking. Precision electroweak constraints are satisfied by construction. These models demonstrate that, contrary to the conventional wisdom, stabilizing the weak scale does not require new light particles charged under the standard model gauge groups.
Chan, H W; Unsworth, J
1989-01-01
A theoretical model is presented for combining parameters of 1-3 ultrasonic composite materials in order to predict ultrasonic characteristics such as velocity, acoustic impedance, electromechanical coupling factor, and piezoelectric coefficients. Hence, the model allows the estimation of resonance frequencies of 1-3 composite transducers. The model has been extended to cover more material parameters, and its predictions are compared with experimental results up to a PZT volume fraction ν of 0.8. The model covers calculation of the piezoelectric charge constants d(33) and d(31). Values are found to be in good agreement with experimental results obtained for PZT 7A/Araldite D 1-3 composites. The acoustic velocity, acoustic impedance, and electromechanical coupling factor are predicted and found to be close to the values determined experimentally.
Mendyk, Aleksander; Güres, Sinan; Szlęk, Jakub; Wiśniowska, Barbara; Kleinebudde, Peter
2015-01-01
The purpose of this work was to develop a mathematical model of the drug dissolution (Q) from solid lipid extrudates based on an empirical approach. Artificial neural networks (ANNs) and genetic programming (GP) tools were used. Sensitivity analysis of the ANNs provided a reduction of the original input vector. GP allowed creation of the mathematical equation in two major approaches: (1) direct modeling of Q versus extrudate diameter (d) and the time variable (t) and (2) indirect modeling through the Weibull equation. The ANNs also provided information about the minimum achievable generalization error and a way to enhance the original dataset used for adjustment of the equations' parameters. Two inputs were found to be important for the drug dissolution: d and t. The extrudate length (L) was found to be unimportant. Both GP modeling approaches allowed creation of relatively simple equations with predictive performance comparable to the ANNs (root mean squared error (RMSE) from 2.19 to 2.33). The direct mode of GP modeling of Q versus d and t resulted in the most robust model. The idea of how to combine ANNs and GP in order to escape the ANNs' black-box drawback without losing their superior predictive performance was demonstrated. Open Source software was used to deliver the state-of-the-art models and modeling strategies. PMID:26101544
Mendyk, Aleksander; Güres, Sinan; Jachowicz, Renata; Szlęk, Jakub; Polak, Sebastian; Wiśniowska, Barbara; Kleinebudde, Peter
2015-01-01
The purpose of this work was to develop a mathematical model of the drug dissolution (Q) from solid lipid extrudates based on an empirical approach. Artificial neural networks (ANNs) and genetic programming (GP) tools were used. Sensitivity analysis of the ANNs provided a reduction of the original input vector. GP allowed creation of the mathematical equation in two major approaches: (1) direct modeling of Q versus extrudate diameter (d) and the time variable (t) and (2) indirect modeling through the Weibull equation. The ANNs also provided information about the minimum achievable generalization error and a way to enhance the original dataset used for adjustment of the equations' parameters. Two inputs were found to be important for the drug dissolution: d and t. The extrudate length (L) was found to be unimportant. Both GP modeling approaches allowed creation of relatively simple equations with predictive performance comparable to the ANNs (root mean squared error (RMSE) from 2.19 to 2.33). The direct mode of GP modeling of Q versus d and t resulted in the most robust model. The idea of how to combine ANNs and GP in order to escape the ANNs' black-box drawback without losing their superior predictive performance was demonstrated. Open Source software was used to deliver the state-of-the-art models and modeling strategies.
NASA Astrophysics Data System (ADS)
Malard, J. J.; Rojas, M.; Adamowski, J. F.; Anandaraja, N.; Tuy, H.; Melgar-Quiñonez, H.
2016-12-01
While several well-validated crop growth models are currently widely used, very few crop pest models of the same caliber have been developed or applied, and pest models that take trophic interactions into account are even rarer. This may be due to several factors, including 1) the difficulty of representing complex agroecological food webs in a quantifiable model, and 2) the general belief that pesticides effectively remove insect pests from immediate concern. However, pests currently claim a substantial amount of harvests every year (and account for additional control costs), and the impact of insects and of their trophic interactions on agricultural crops cannot be ignored, especially in the context of changing climates and increasing pressures on crops across the globe. Unfortunately, most integrated pest management frameworks rely on very simple models (if at all), and most examples of successful agroecological management remain more anecdotal than scientifically replicable. In light of this, there is a need for validated and robust agroecological food web models that allow users to predict the response of these webs to changes in management, crops or climate, both in order to predict future pest problems under a changing climate as well as to develop effective integrated management plans. Here we present Tiko'n, a Python-based software whose API allows users to rapidly build and validate trophic web agroecological models that predict pest dynamics in the field. The programme uses a Bayesian inference approach to calibrate the models according to field data, allowing for the reuse of literature data from various sources and reducing the need for extensive field data collection. We apply the model to the coconut black-headed caterpillar (Opisina arenosella) and associated parasitoid data from Sri Lanka, showing how the modeling framework can be used to rapidly develop, calibrate and validate models that elucidate how the internal structures of food webs determine their behaviour and allow users to evaluate different integrated management options.
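Tiko'n's own API is not shown in the abstract; the following minimal sketch only illustrates the underlying idea of Bayesian calibration of a population model against field counts, using a generic logistic pest model, made-up weekly counts, and a random-walk Metropolis sampler. Everything here is an illustrative assumption, not the package's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic_growth(r, K, n0=10.0, weeks=12):
    """Simple discrete logistic pest-population model (purely illustrative)."""
    n = [n0]
    for _ in range(weeks - 1):
        n.append(n[-1] + r * n[-1] * (1.0 - n[-1] / K))
    return np.array(n)

# hypothetical weekly field counts
observed = np.array([10, 14, 20, 29, 41, 55, 70, 83, 92, 97, 99, 100], dtype=float)

def log_posterior(theta):
    r, K = theta
    if not (0.0 < r < 2.0 and 10.0 < K < 500.0):   # flat priors with hard bounds
        return -np.inf
    pred = logistic_growth(r, K)
    sigma = 5.0                                     # assumed observation noise
    return -0.5 * np.sum((observed - pred) ** 2) / sigma ** 2

# random-walk Metropolis sampler
theta = np.array([0.3, 150.0])
logp = log_posterior(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(scale=[0.05, 5.0])
    logp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < logp_prop - logp:    # accept/reject step
        theta, logp = prop, logp_prop
    samples.append(theta.copy())

samples = np.array(samples[5000:])                  # discard burn-in
print("posterior mean r, K:", samples.mean(axis=0))
```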
NASA Astrophysics Data System (ADS)
Asinari, P.
2011-03-01
The Boltzmann equation is one of the most powerful paradigms for explaining transport phenomena in fluids. Since the early fifties it has received a lot of attention, driven by aerodynamic requirements for high-altitude vehicles, vacuum technology requirements and, nowadays, micro-electro-mechanical systems (MEMS). Because of the intrinsic mathematical complexity of the problem, Boltzmann himself started his work by considering first the case when the distribution function does not depend on space (the homogeneous case), but only on time and the magnitude of the molecular velocity (isotropic collisional integral). The interest in the homogeneous isotropic Boltzmann equation goes beyond simple dilute gases. In so-called econophysics, a Boltzmann-type model is sometimes introduced for studying the distribution of wealth in a simple market. Another recent application of the homogeneous isotropic Boltzmann equation is given by opinion formation modeling in quantitative sociology, also called socio-dynamics or sociophysics. The present work [1] aims to improve the deterministic method for solving the homogeneous isotropic Boltzmann equation proposed by Aristov [2] with two ideas: (a) the homogeneous isotropic problem is reformulated first in terms of particle kinetic energy (this allows one to ensure exact particle number and energy conservation during microscopic collisions) and (b) a DVM-like correction (where DVM stands for Discrete Velocity Model) is adopted for improving the relaxation rates (this allows one to satisfy the conservation laws exactly at the macroscopic level, which is particularly important for describing the late dynamics in the relaxation towards equilibrium).
NASA Technical Reports Server (NTRS)
Connolly, Joseph W.; Friedlander, David; Kopasakis, George
2015-01-01
This paper covers the development of an integrated nonlinear dynamic simulation for a variable cycle turbofan engine and nozzle that can be integrated with an overall vehicle Aero-Propulso-Servo-Elastic (APSE) model. A previously developed variable cycle turbofan engine model is used for this study and is enhanced here to include variable guide vanes allowing for operation across the supersonic flight regime. The primary focus of this study is to improve the fidelity of the model's thrust response by replacing the simple choked flow equation convergent-divergent nozzle model with a MacCormack method based quasi-1D model. The dynamic response of the nozzle model using the MacCormack method is verified by comparing it against a model of the nozzle using the conservation element/solution element method. A methodology is also presented for the integration of the MacCormack nozzle model with the variable cycle engine.
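The MacCormack scheme mentioned above is a standard explicit predictor-corrector method. As a hedged illustration of its structure only, the sketch below applies it to the inviscid Burgers equation, which stands in here for the quasi-1D nozzle flow equations actually used in the paper; grid size, time step, and initial condition are illustrative.

```python
import numpy as np

# MacCormack predictor-corrector for the inviscid Burgers equation u_t + (u^2/2)_x = 0.
nx, nt = 200, 300
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.4 * dx                     # CFL-limited time step for |u| <= ~1
u = np.where(x < 0.5, 1.0, 0.0)   # step initial condition

def flux(u):
    return 0.5 * u ** 2

for _ in range(nt):
    f = flux(u)
    # predictor: forward differences
    u_star = u.copy()
    u_star[:-1] = u[:-1] - dt / dx * (f[1:] - f[:-1])
    # corrector: backward differences on the predicted state
    f_star = flux(u_star)
    u_new = u.copy()
    u_new[1:] = 0.5 * (u[1:] + u_star[1:] - dt / dx * (f_star[1:] - f_star[:-1]))
    u = u_new                      # boundary values are simply held fixed

print("solution after %d steps:" % nt, u[::40])
```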
NASA Technical Reports Server (NTRS)
Connolly, Joseph W.; Friedlander, David; Kopasakis, George
2014-01-01
This paper covers the development of an integrated nonlinear dynamic simulation for a variable cycle turbofan engine and nozzle that can be integrated with an overall vehicle Aero-Propulso-Servo-Elastic (APSE) model. A previously developed variable cycle turbofan engine model is used for this study and is enhanced here to include variable guide vanes allowing for operation across the supersonic flight regime. The primary focus of this study is to improve the fidelity of the model's thrust response by replacing the simple choked flow equation convergent-divergent nozzle model with a MacCormack method based quasi-1D model. The dynamic response of the nozzle model using the MacCormack method is verified by comparing it against a model of the nozzle using the conservation element/solution element method. A methodology is also presented for the integration of the MacCormack nozzle model with the variable cycle engine.
A powerful and flexible approach to the analysis of RNA sequence count data.
Zhou, Yi-Hui; Xia, Kai; Wright, Fred A
2011-10-01
A number of penalization and shrinkage approaches have been proposed for the analysis of microarray gene expression data. Similar techniques are now routinely applied to RNA sequence transcriptional count data, although the value of such shrinkage has not been conclusively established. If penalization is desired, the explicit modeling of mean-variance relationships provides a flexible testing regimen that 'borrows' information across genes, while easily incorporating design effects and additional covariates. We describe BBSeq, which incorporates two approaches: (i) a simple beta-binomial generalized linear model, which has not been extensively tested for RNA-Seq data, and (ii) an extension of an expression mean-variance modeling approach to RNA-Seq data, involving modeling of the overdispersion as a function of the mean. Our approaches are flexible, allowing for general handling of discrete experimental factors and continuous covariates. We report comparisons with alternative methods for handling RNA-Seq data. Although penalized methods have advantages for very small sample sizes, the beta-binomial generalized linear model, combined with simple outlier detection and testing approaches, appears to have favorable characteristics in power and flexibility. An R package containing examples and sample datasets is available at http://www.bios.unc.edu/research/genomic_software/BBSeq. Contact: yzhou@bios.unc.edu; fwright@bios.unc.edu. Supplementary data are available at Bioinformatics online.
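BBSeq itself is an R package; purely as an illustration of the beta-binomial likelihood involved, the following Python sketch fits a beta-binomial model to hypothetical counts for a single gene by direct maximization. The counts, totals, and the mean/overdispersion parameterization are assumptions for illustration, not BBSeq's implementation.

```python
import numpy as np
from scipy.special import gammaln, betaln
from scipy.optimize import minimize

# hypothetical per-sample read counts for one gene and per-sample library totals
k = np.array([12, 18, 9, 30, 25, 41], dtype=float)        # gene counts
n = np.array([1e5, 1.2e5, 0.9e5, 2.0e5, 1.5e5, 2.4e5])     # total counts

def neg_log_lik(params):
    """Beta-binomial negative log-likelihood, parameterized by a mean
    proportion p (logit scale) and an overdispersion parameter phi (log scale)."""
    p = 1.0 / (1.0 + np.exp(-params[0]))
    phi = np.exp(params[1])
    a, b = p / phi, (1.0 - p) / phi                        # beta shape parameters
    log_binom = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
    ll = log_binom + betaln(k + a, n - k + b) - betaln(a, b)
    return -np.sum(ll)

res = minimize(neg_log_lik, x0=[np.log(1e-4 / (1 - 1e-4)), np.log(0.01)],
               method="Nelder-Mead")
p_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
print("estimated mean proportion:", p_hat, "overdispersion:", np.exp(res.x[1]))
```

In a GLM setting the logit of p would be replaced by a linear predictor over design covariates; the single-gene fit above only shows the likelihood machinery.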
A complex speciation–richness relationship in a simple neutral model
Desjardins-Proulx, Philippe; Gravel, Dominique
2012-01-01
Speciation is the “elephant in the room” of community ecology. As the ultimate source of biodiversity, its integration in ecology's theoretical corpus is necessary to understand community assembly. Yet, speciation is often completely ignored or stripped of its spatial dimension. Recent approaches based on network theory have allowed ecologists to effectively model complex landscapes. In this study, we use this framework to model allopatric and parapatric speciation in networks of communities. We focus on the relationship between speciation, richness, and the spatial structure of communities. We find a strong opposition between speciation and local richness, with speciation being more common in isolated communities and local richness being higher in more connected communities. Unlike previous models, we also find a transition to a positive relationship between speciation and local richness when dispersal is low and the number of communities is small. We use several measures of centrality to characterize the effect of network structure on diversity. The degree, the simplest measure of centrality, is the best predictor of local richness and speciation, although it loses some of its predictive power as connectivity grows. Our framework shows how a simple neutral model can be combined with network theory to reveal complex relationships between speciation, richness, and the spatial organization of populations. PMID:22957181
SIGKit: Software for Introductory Geophysics Toolkit
NASA Astrophysics Data System (ADS)
Kruse, S.; Bank, C. G.; Esmaeili, S.; Jazayeri, S.; Liu, S.; Stoikopoulos, N.
2017-12-01
The Software for Introductory Geophysics Toolkit (SIGKit) affords students the opportunity to create model data and perform simple processing of field data for various geophysical methods. SIGkit provides a graphical user interface built with the MATLAB programming language, but can run even without a MATLAB installation. At this time SIGkit allows students to pick first arrivals and match a two-layer model to seismic refraction data; grid total-field magnetic data, extract a profile, and compare this to a synthetic profile; and perform simple processing steps (subtraction of a mean trace, hyperbola fit) to ground-penetrating radar data. We also have preliminary tools for gravity, resistivity, and EM data representation and analysis. SIGkit is being built by students for students, and the intent of the toolkit is to provide an intuitive interface for simple data analysis and understanding of the methods, and act as an entrance to more sophisticated software. The toolkit has been used in introductory courses as well as field courses. First reactions from students are positive. Think-aloud observations of students using the toolkit have helped identify problems and helped shape it. We are planning to compare the learning outcomes of students who have used the toolkit in a field course to students in a previous course to test its effectiveness.
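SIGKit itself is MATLAB-based; as a language-neutral illustration of the two-layer refraction matching exercise it supports, the sketch below computes the standard direct-wave and head-wave travel-time curves and the crossover distance for illustrative velocities and depth.

```python
import numpy as np

def two_layer_traveltimes(x, v1, v2, h):
    """First-arrival travel times (s) for the classic two-layer refraction model:
    direct wave in a layer of velocity v1 over a half-space of velocity v2 at depth h."""
    t_direct = x / v1
    t_head = x / v2 + 2.0 * h * np.sqrt(v2 ** 2 - v1 ** 2) / (v1 * v2)
    return np.minimum(t_direct, t_head)      # first arrival is whichever is earlier

# illustrative geometry: geophones every 2 m, 400 m/s over 1500 m/s at 3 m depth
x = np.arange(2.0, 60.0, 2.0)
t = two_layer_traveltimes(x, v1=400.0, v2=1500.0, h=3.0)

# crossover distance where the head wave overtakes the direct wave
x_cross = 2.0 * 3.0 * np.sqrt((1500.0 + 400.0) / (1500.0 - 400.0))
print("crossover distance (m):", round(x_cross, 1))
```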
Reverse Aging of Composite Materials for Aeronautical Applications
NASA Astrophysics Data System (ADS)
Iannone, Michele
2008-08-01
Hygro-thermal ageing of polymer matrix composite materials is a major issue for all aeronautical structures. For the carbon-epoxy composites generally used in aeronautical applications, the major effect of ageing is humidity absorption, which induces a plasticization effect, generally decreasing Tg and elastic moduli, and ultimately the design allowables. A thermodynamic and kinetic study has been performed, aimed at establishing a program of periodic heating of the composite part able to reverse the ageing effect by inducing water desorption. The study was founded on a simple model based on Fick's law, coupled with the concept of a "relative saturation coefficient" depending on the different temperatures of the composite part and the environment. The behaviour of some structures exposed to humidity and "reverse aged" by heating has been virtually tested. The conclusions of the study led to a specific patent application for aeronautical structures designed on the basis of a "humidity free" concept, which allows the use of higher design allowables and yields, as a final result, lighter composite structures with a simplified certification process.
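A minimal sketch of the one-dimensional Fickian absorption solution on which such a model would typically be built is given below; the diffusivity, laminate thickness, and saturation level are illustrative assumptions, not values from the study.

```python
import numpy as np

def fickian_uptake_fraction(t, D, h, n_terms=50):
    """Fractional moisture uptake M(t)/M_inf for one-dimensional Fickian diffusion
    into a plate of thickness h (both faces exposed) with diffusivity D."""
    n = np.arange(n_terms)
    terms = 8.0 / ((2 * n + 1) ** 2 * np.pi ** 2) * \
            np.exp(-((2 * n + 1) ** 2) * np.pi ** 2 * D * t / h ** 2)
    return 1.0 - terms.sum()

# illustrative values: D in mm^2/day, laminate thickness in mm, saturation content in wt%
D, h, M_sat = 4e-3, 4.0, 1.4
for days in (1, 10, 100, 1000):
    frac = fickian_uptake_fraction(days, D, h)
    print(f"day {days:5d}: absorbed {frac * M_sat:.2f} wt% of a {M_sat} wt% saturation level")
```

Desorption during a heating step follows the same series with the sign of the gradient reversed, which is the mechanism the "reverse ageing" program relies on.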
Proton-deuteron double scattering
NASA Technical Reports Server (NTRS)
Wilson, J. W.
1974-01-01
A simple but accurate form for the proton-deuteron elastic double scattering amplitude, which includes both projectile and target recoil motion and is applicable at all momentum transfer, is derived by taking advantage of the restricted range of Fermi momentum allowed by the deuteron wave function. This amplitude can be directly compared to approximations which have neglected target recoil or are limited to small momentum transfer; the target recoil and large momentum transfer effects are evaluated explicitly within the context of a Gaussian model.
Feedforward operation of a lens setup for large defocus and astigmatism correction
NASA Astrophysics Data System (ADS)
Verstraete, Hans R. G. W.; Almasian, Mitra; Pozzi, Paolo; Bilderbeek, Rolf; Kalkman, Jeroen; Faber, Dirk J.; Verhaegen, Michel
2016-04-01
In this manuscript, we present a lens setup for large defocus and astigmatism correction. A deformable defocus lens and two rotational cylindrical lenses are used to control the defocus and astigmatism. The setup is calibrated using a simple model that allows the calculation of the lens inputs so that a desired defocus and astigmatism are actuated on the eye. The setup is tested by determining the feedforward prediction error, imaging a resolution target, and removing introduced aberrations.
Single-Molecule Test for Markovianity of the Dynamics along a Reaction Coordinate.
Berezhkovskii, Alexander M; Makarov, Dmitrii E
2018-05-03
In an effort to answer the much-debated question of whether the time evolution of common experimental observables can be described as one-dimensional diffusion in the potential of mean force, we propose a simple criterion that allows one to test whether the Markov assumption is applicable to a single-molecule trajectory x(t). This test does not involve fitting of the data to any presupposed model and can be applied to experimental data with relatively low temporal resolution.
Building Flexible User Interfaces for Solving PDEs
NASA Astrophysics Data System (ADS)
Logg, Anders; Wells, Garth N.
2010-09-01
FEniCS is a collection of software tools for the automated solution of differential equations by finite element methods. In this note, we describe how FEniCS can be used to solve a simple nonlinear model problem with varying levels of automation. At one extreme, FEniCS provides tools for the fully automated and adaptive solution of nonlinear partial differential equations. At the other extreme, FEniCS provides a range of tools that allow the computational scientist to experiment with novel solution algorithms.
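For orientation, a minimal sketch of the fully automated end of this spectrum in the legacy FEniCS/DOLFIN Python interface follows; the particular nonlinear Poisson problem and its coefficients are illustrative choices, not necessarily the model problem of the note.

```python
# Nonlinear Poisson example in the legacy FEniCS (dolfin) Python interface:
# -div((1 + u^2) grad u) = f on the unit square, u = 1 on the boundary.
from dolfin import (UnitSquareMesh, FunctionSpace, Function, TestFunction,
                    DirichletBC, Constant, Expression, inner, grad, dx, solve)

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "Lagrange", 1)

u = Function(V)            # unknown (nonlinear problem, so a Function, not a TrialFunction)
v = TestFunction(V)
f = Expression("x[0]*sin(x[1])", degree=2)
bc = DirichletBC(V, Constant(1.0), "on_boundary")

F = inner((1 + u**2) * grad(u), grad(v)) * dx - f * v * dx
solve(F == 0, u, bc)       # FEniCS assembles the Jacobian and runs Newton automatically

print("max nodal value:", u.vector().max())
```

At the other extreme described in the note, the same problem can be solved by hand-written Newton iterations and custom assembly built from the lower-level FEniCS components.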
Self-teaching neural network learns difficult reactor control problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jouse, W.C.
1989-01-01
A self-teaching neural network used as an adaptive controller quickly learns to control an unstable reactor configuration. The network models the behavior of a human operator. It is trained by allowing it to operate the reactivity control impulsively. It is punished whenever either the power or fuel temperature stray outside technical limits. Using a simple paradigm, the network constructs an internal representation of the punishment and of the reactor system. The reactor is constrained to small power orbits.
Interactive Reliability Model for Whisker-toughened Ceramics
NASA Technical Reports Server (NTRS)
Palko, Joseph L.
1993-01-01
Wider use of ceramic matrix composites (CMC) will require the development of advanced structural analysis technologies. The use of an interactive model to predict the time-independent reliability of a component subjected to multiaxial loads is discussed. The deterministic, three-parameter Willam-Warnke failure criterion serves as the theoretical basis for the reliability model. The strength parameters defining the model are assumed to be random variables, thereby transforming the deterministic failure criterion into a probabilistic criterion. The ability of the model to account for multiaxial stress states with the same unified theory is an improvement over existing models. The new model was coupled with a public-domain finite element program through an integrated design program. This allows a design engineer to predict the probability of failure of a component. A simple structural problem is analyzed using the new model, and the results are compared to existing models.
Network model of project "Lean Production"
NASA Astrophysics Data System (ADS)
Khisamova, E. D.
2018-05-01
Lean production implies, above all, new approaches to the culture of management and the organization of production, and offers a set of tools and techniques that allow losses to be reduced significantly and processes to be made cheaper and faster. Lean production tools are simple solutions that allow one to see opportunities for improvement in all aspects of the business, to reduce losses significantly, to constantly improve the whole spectrum of business processes, to increase significantly the transparency and manageability of the organization, to take advantage of the potential of each employee of the company, to increase competitiveness, and to obtain significant economic benefits without making large financial expenditures. Each lean production tool solves a specific part of the problem, and only the application of a combination of tools will solve the problem or reduce it to acceptable values. The study of the governance process of the "Lean Production" project permitted examining the methods and tools of lean production and developing measures for their improvement.
Detection of hepatocarcinoma in rats by integration of the fluorescence spectrum: Experimental model
NASA Astrophysics Data System (ADS)
Marcassa, J. C.; Ferreira, J.; Zucoloto, S.; Castro E Silva, O., Jr.; Marcassa, L. G.; Bagnato, V. S.
2006-05-01
The incorporation of spectroscopic techniques into diagnostic procedures may greatly improve the chances for precise diagnostics. One promising technique is fluorescence spectroscopy, which has recently been used to detect many different types of diseases. In this work, we use laser-induced tissue fluorescence to detect hepatocarcinoma in rats using excitation light at wavelengths of 443 and 532 nm. Hepatocarcinoma was induced chemically in Wistar rats. The collected fluorescence spectrum ranges from the excitation wavelength up to 850 nm. A mathematical procedure carried out on the spectrum determines a figure of merit value, which allows the detection of hepatocarcinoma. The figure of merit involves a procedure which evaluates the ratio between the backscattered excitation wavelength and the broad emission fluorescence band. We demonstrate that a normalization allowed by integration of the fluorescence spectra is a simple operation that may allow the detection of hepatocarcinoma.
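The exact band limits and normalization used by the authors are not given in the abstract; the sketch below shows one generic way such a figure of merit could be computed, integrating an assumed backscattered-excitation band and the broad emission band and taking their ratio. The band widths and the synthetic spectrum are illustrative assumptions.

```python
import numpy as np

def figure_of_merit(wavelength, intensity, exc_nm, exc_halfwidth=10.0):
    """Ratio of the integrated backscattered-excitation peak to the integrated
    broad fluorescence band; band limits here are illustrative assumptions."""
    wl = np.asarray(wavelength, dtype=float)
    I = np.asarray(intensity, dtype=float)
    exc_band = (wl > exc_nm - exc_halfwidth) & (wl < exc_nm + exc_halfwidth)
    emit_band = (wl >= exc_nm + exc_halfwidth) & (wl <= 850.0)
    backscatter = np.trapz(I[exc_band], wl[exc_band])
    emission = np.trapz(I[emit_band], wl[emit_band])
    return backscatter / emission

# synthetic spectrum for a 532 nm excitation, purely for illustration
wl = np.linspace(520.0, 850.0, 600)
spectrum = 5.0 * np.exp(-0.5 * ((wl - 532.0) / 3.0) ** 2) \
         + 1.0 * np.exp(-0.5 * ((wl - 640.0) / 60.0) ** 2)
print("figure of merit:", round(figure_of_merit(wl, spectrum, exc_nm=532.0), 3))
```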
Three-wave mixing in conjugated polymer solutions: Two-photon absorption in polydiacetylenes
NASA Astrophysics Data System (ADS)
Chance, R. R.; Shand, M. L.; Hogg, C.; Silbey, R.
1980-10-01
Three-wave-mixing spectroscopy is used to determine the dispersive and absorptive parts of a strongly allowed two-photon transition in a series of polydiacetylene solutions. The data analysis yields the energy, width, symmetry assignment, and oscillator strength for the two-photon transition. The data conclusively demonstrate that strong two-photon absorption is a fundamental property of the polydiacetylene backbone. The remarkably large two-photon absorption coefficients are explained by large oscillator strengths for both transitions involved in the two-photon absorption combined with strong one-photon resonance effects. The experimental results are shown to be consistent with a simple theoretical model for the energies and oscillator strengths of the one- and two-photon-allowed transitions.
Engineering design aspects of the heat-pipe power system
NASA Technical Reports Server (NTRS)
Capell, B. M.; Houts, M. G.; Poston, D. I.; Berte, M.
1997-01-01
The Heat-pipe Power System (HPS) is a near-term, low-cost space power system designed at Los Alamos that can provide up to 1,000 kWt for many space nuclear applications. The design of the reactor is simple, modular, and adaptable. The basic design allows for the use of a variety of power conversion systems and reactor materials (including the fuel, clad, and heat pipes). This paper describes a project that was undertaken to develop a database supporting many engineering aspects of the HPS design. The specific tasks discussed in this paper are: the development of an HPS materials database, the creation of finite element models that will allow a wide variety of investigations, and the verification of past calculations.
Lung Ultrasound in the Critically Ill Neonate
Lichtenstein, Daniel A; Mauriat, Philippe
2012-01-01
Critical ultrasound is a new tool for first-line physicians, including neonate intensivists. The consideration of the lung as one major target allows to redefine the priorities. Simple machines work better than up-to-date ones. We use a microconvex probe. Ten standardized signs allow a majority of uses: the bat sign (pleural line), lung sliding and the A-line (normal lung surface), the quad sign and sinusoid sign indicating pleural effusion regardless its echogenicity, the tissue-like sign and fractal sign indicating lung consolidation, the B-line artifact and lung rockets (indicating interstitial syndrome), abolished lung sliding with the stratosphere sign, suggesting pneumothorax, and the lung point, indicating pneumothorax. Other signs are used for more sophisticated applications (distinguishing atelectasis from pneumonia for instance...). All these disorders were assessed in the adult using CT as gold standard with sensitivity and specificity ranging from 90 to 100%, allowing to consider ultrasound as a reasonable bedside gold standard in the critically ill. The same signs are found, with no difference in the critically ill neonate. Fast protocols such as the BLUE-protocol are available, allowing immediate diagnosis of acute respiratory failure using seven standardized profiles. Pulmonary edema e.g. yields anterior lung rockets associated with lung sliding, making the B-profile. The FALLS-protocol, inserted in a Limited Investigation including a simple model of heart and vessels, assesses acute circulatory failure using lung artifacts. Interventional ultrasound (mainly, thoracocenthesis) provides maximal safety. Referrals to CT can be postponed. CEURF proposes personnalized bedside trainings since 1990. Lung ultrasound opens physicians to a visual medicine. PMID:23255876
Lung Ultrasound in the Critically Ill Neonate.
Lichtenstein, Daniel A; Mauriat, Philippe
2012-08-01
Critical ultrasound is a new tool for first-line physicians, including neonate intensivists. The consideration of the lung as one major target allows to redefine the priorities. Simple machines work better than up-to-date ones. We use a microconvex probe. Ten standardized signs allow a majority of uses: the bat sign (pleural line), lung sliding and the A-line (normal lung surface), the quad sign and sinusoid sign indicating pleural effusion regardless its echogenicity, the tissue-like sign and fractal sign indicating lung consolidation, the B-line artifact and lung rockets (indicating interstitial syndrome), abolished lung sliding with the stratosphere sign, suggesting pneumothorax, and the lung point, indicating pneumothorax. Other signs are used for more sophisticated applications (distinguishing atelectasis from pneumonia for instance...). All these disorders were assessed in the adult using CT as gold standard with sensitivity and specificity ranging from 90 to 100%, allowing to consider ultrasound as a reasonable bedside gold standard in the critically ill. The same signs are found, with no difference in the critically ill neonate. Fast protocols such as the BLUE-protocol are available, allowing immediate diagnosis of acute respiratory failure using seven standardized profiles. Pulmonary edema e.g. yields anterior lung rockets associated with lung sliding, making the B-profile. The FALLS-protocol, inserted in a Limited Investigation including a simple model of heart and vessels, assesses acute circulatory failure using lung artifacts. Interventional ultrasound (mainly, thoracocenthesis) provides maximal safety. Referrals to CT can be postponed. CEURF proposes personnalized bedside trainings since 1990. Lung ultrasound opens physicians to a visual medicine.
Differential equation models for sharp threshold dynamics.
Schramm, Harrison C; Dimitrov, Nedialko B
2014-01-01
We develop an extension to differential equation models of dynamical systems to allow us to analyze probabilistic threshold dynamics that fundamentally and globally change system behavior. We apply our novel modeling approach to two cases of interest: a model of infectious disease modified for malware where a detection event drastically changes dynamics by introducing a new class in competition with the original infection; and the Lanchester model of armed conflict, where the loss of a key capability drastically changes the effectiveness of one of the sides. We derive and demonstrate a step-by-step, repeatable method for applying our novel modeling approach to an arbitrary system, and we compare the resulting differential equations to simulations of the system's random progression. Our work leads to a simple and easily implemented method for analyzing probabilistic threshold dynamics using differential equations. Published by Elsevier Inc.
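A hedged sketch of the malware-style example in spirit is given below; the equations, threshold time, and parameters are illustrative stand-ins, not the authors' model. A detection event at a threshold time switches on a "patched" class that competes with the original infection.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative susceptible/infected/patched dynamics with a sharp detection threshold.
beta, gamma, t_detect, patch_rate = 0.4, 0.05, 25.0, 0.2

def rhs(t, y):
    S, I, R = y
    detected = t >= t_detect                  # threshold event switches patching on
    dS = -beta * S * I - (patch_rate * S if detected else 0.0)
    dI = beta * S * I - gamma * I - (patch_rate * I if detected else 0.0)
    dR = gamma * I + (patch_rate * (S + I) if detected else 0.0)
    return [dS, dI, dR]

sol = solve_ivp(rhs, (0.0, 100.0), [0.99, 0.01, 0.0], max_step=0.5)
peak = sol.t[np.argmax(sol.y[1])]
print("infection peaks at t =", round(peak, 1))
```

The paper's contribution is to treat the threshold itself probabilistically within the differential equations rather than as the fixed switching time used in this sketch.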
A flowgraph model for bladder carcinoma
2014-01-01
Background: Superficial bladder cancer has been the subject of numerous studies for many years, but the evolution of the disease still remains not well understood. After the tumor has been surgically removed, it may reappear at a similar level of malignancy or progress to a higher level. The process may be reasonably modeled by means of a Markov process. However, in order to model the evolution of the disease more completely, this approach is insufficient. The semi-Markov framework allows a more realistic approach, but calculations frequently become intractable. In this context, flowgraph models provide an efficient approach to successfully manage the evolution of superficial bladder carcinoma. Our aim is to test this methodology in this particular case. Results: We have built a successful model for a simple but representative case. Conclusion: The flowgraph approach is suitable for modeling of superficial bladder cancer. PMID:25080066
Human mobility in a continuum approach.
Simini, Filippo; Maritan, Amos; Néda, Zoltán
2013-01-01
Human mobility is investigated using a continuum approach that allows one to calculate the probability of observing a trip to any arbitrary region, and the fluxes between any two regions. The considered description offers a general and unified framework, in which previously proposed mobility models such as the gravity model, the intervening opportunities model, and the recently introduced radiation model naturally result as special cases. A new form of the radiation model is derived and its validity is investigated using observational data offered by commuting trips obtained from the United States census data set, and the mobility fluxes extracted from mobile phone data collected in a western European country. The new modeling paradigm offered by this description suggests that the complex topological features observed in large mobility and transportation networks may be the result of a simple stochastic process taking place on an inhomogeneous landscape.
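One of the special cases mentioned, the radiation model, has a well-known closed-form average flux; a small sketch with hypothetical population values follows.

```python
def radiation_flux(T_i, m_i, n_j, s_ij):
    """Average commuter flux from location i to j under the radiation model:
    T_ij = T_i * m_i * n_j / ((m_i + s_ij) * (m_i + n_j + s_ij)),
    where m_i and n_j are the source and destination populations, s_ij is the
    population inside a circle of radius r_ij centred on i (excluding i and j),
    and T_i is the total number of commuters leaving location i."""
    return T_i * m_i * n_j / ((m_i + s_ij) * (m_i + n_j + s_ij))

# hypothetical populations (in thousands) and 40,000 commuters leaving location i
print(radiation_flux(T_i=40_000, m_i=120.0, n_j=80.0, s_ij=300.0))
```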
Human Mobility in a Continuum Approach
Simini, Filippo; Maritan, Amos; Néda, Zoltán
2013-01-01
Human mobility is investigated using a continuum approach that allows one to calculate the probability of observing a trip to any arbitrary region, and the fluxes between any two regions. The considered description offers a general and unified framework, in which previously proposed mobility models such as the gravity model, the intervening opportunities model, and the recently introduced radiation model naturally result as special cases. A new form of the radiation model is derived and its validity is investigated using observational data offered by commuting trips obtained from the United States census data set, and the mobility fluxes extracted from mobile phone data collected in a western European country. The new modeling paradigm offered by this description suggests that the complex topological features observed in large mobility and transportation networks may be the result of a simple stochastic process taking place on an inhomogeneous landscape. PMID:23555885
DOE Office of Scientific and Technical Information (OSTI.GOV)
Passamai, V.; Saravia, L.
1997-05-01
Drying of red pepper under solar radiation was investigated, and a simple model related to water evaporation was developed. Drying experiments at constant laboratory conditions were undertaken where solar radiation was simulated by a 1,000 W lamp. In this first part of the work, water evaporation under radiation is studied and laboratory experiments are presented with two objectives: to verify Penman's model of evaporation under radiation, and to validate the laboratory experiments. Modifying Penman's model of evaporation by introducing two drying conductances as a function of water content allows the development of a drying model under solar radiation. In the second part of this paper, the model is validated by applying it to red pepper open-air solar drying experiments.
Unstable spiral modes in disk-shaped galaxies
Lau, Y. Y.; Lin, C. C.; Mark, James W.-K.
1976-01-01
The mechanisms for the maintenance and the excitation of trailing spiral modes of density waves in disk-shaped galaxies, as proposed by Lin in 1969 and by Mark recently, are substantiated by an analysis of the gas-dynamical model of the galaxy. The self-excitation of the unstable mode is caused by waves propagating outwards from the corotation circle, which carry away angular momentum of a sign opposite to that contained in the wave system inside that circle. Specifically, a simple dispersion relationship is given as a definite integral, which allows the immediate determination of the pattern frequency and the amplification rate, once the basic galactic model is known. PMID:16592313
Mutual Comparative Filtering for Change Detection in Videos with Unstable Illumination Conditions
NASA Astrophysics Data System (ADS)
Sidyakin, Sergey V.; Vishnyakov, Boris V.; Vizilter, Yuri V.; Roslov, Nikolay I.
2016-06-01
In this paper we propose a new approach for change detection and moving object detection in videos with unstable, abrupt illumination changes. This approach is based on mutual comparative filters and background normalization. We give the definitions of mutual comparative filters and outline their strong advantage for change detection purposes. The presented approach allows us to deal with changing illumination conditions in a simple and efficient way and does not have the drawbacks that exist in models assuming particular color transformation laws. The proposed procedure can be used to improve a number of background modelling methods that are not specifically designed to work under illumination changes.
Macro- and micro-chaotic structures in the Hindmarsh-Rose model of bursting neurons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrio, Roberto, E-mail: rbarrio@unizar.es; Serrano, Sergio; Angeles Martínez, M.
2014-06-01
We study a plethora of chaotic phenomena in the Hindmarsh-Rose neuron model with the use of several computational techniques including the bifurcation parameter continuation, spike-quantification, and evaluation of Lyapunov exponents in bi-parameter diagrams. Such an aggregated approach allows for detecting regions of simple and chaotic dynamics, and demarcating borderlines—exact bifurcation curves. We demonstrate how the organizing centers—points corresponding to codimension-two homoclinic bifurcations—along with fold and period-doubling bifurcation curves structure the biparametric plane, thus forming macro-chaotic regions of onion bulb shapes and revealing spike-adding cascades that generate micro-chaotic structures due to the hysteresis.
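For readers who want a starting point, the standard three-variable Hindmarsh-Rose equations can be integrated directly; the sketch below uses one illustrative parameter point and does not reproduce the bifurcation-continuation, spike-quantification, or Lyapunov analysis of the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hindmarsh_rose(t, y, I=3.0, r=0.005, s=4.0, x_rest=-1.6):
    """Standard three-variable Hindmarsh-Rose bursting-neuron equations."""
    x, yv, z = y
    dx = yv + 3.0 * x ** 2 - x ** 3 - z + I     # membrane potential
    dy = 1.0 - 5.0 * x ** 2 - yv                # fast recovery variable
    dz = r * (s * (x - x_rest) - z)             # slow adaptation current
    return [dx, dy, dz]

sol = solve_ivp(hindmarsh_rose, (0.0, 2000.0), [-1.5, 0.0, 2.0], max_step=0.05)

# crude spike count: upward crossings of x through 1.0
x = sol.y[0]
spikes = np.sum((x[:-1] < 1.0) & (x[1:] >= 1.0))
print("spikes in 2000 time units:", int(spikes))
```

Sweeping I and r over a grid and recording spike counts per burst is the simplest route toward the kind of bi-parameter diagrams discussed in the abstract.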
Hidden order and unconventional superconductivity in URu2Si2
NASA Astrophysics Data System (ADS)
Rau, Jeffrey; Kee, Hae-Young
2012-02-01
The nature of the so-called hidden order in URu2Si2 and the subsequent superconducting phase have remained a puzzle for over two decades. Motivated by evidence for rotational symmetry breaking seen in recent magnetic torque measurements [Okazaki et al. Science 331, 439 (2011)], we derive a simple tight-binding model consistent with experimental Fermi surface probes and ab-initio calculations. From this model we use mean-field theory to examine the variety of hidden orders allowed by existing experimental results, including the torque measurements. We then construct a phase diagram in temperature and pressure and discuss relevant experimental consequences.
IMAGINE: Interstellar MAGnetic field INference Engine
NASA Astrophysics Data System (ADS)
Steininger, Theo
2018-03-01
IMAGINE (Interstellar MAGnetic field INference Engine) performs inference on generic parametric models of the Galaxy. The modular open source framework uses highly optimized tools and technology such as the MultiNest sampler (ascl:1109.006) and the information field theory framework NIFTy (ascl:1302.013) to create an instance of the Milky Way based on a set of parameters for physical observables, using Bayesian statistics to judge the mismatch between measured data and model prediction. The flexibility of the IMAGINE framework allows for simple refitting for newly available data sets and makes state-of-the-art Bayesian methods easily accessible particularly for random components of the Galactic magnetic field.
Modeling non-locality of plasmonic excitations with a fictitious film
NASA Astrophysics Data System (ADS)
Kong, Jiantao; Shvonski, Alexander; Kempa, Krzysztof
Non-local effects, requiring a wavevector (q) dependent dielectric response, are becoming increasingly important in studies of plasmonic and metamaterial structures. The phenomenological hydrodynamic approximation (HDA) is the simplest, and most often used, model, but it often fails. We show that the d-function formalism, exact to first order in q, is a powerful and simple-to-use alternative. Recently, we developed a mapping of the d-function formalism into a purely local fictitious film. This geometric mapping allows for non-local extensions of any local calculation scheme, including FDTD. We demonstrate here that such a mapped FDTD simulation of metallic nanoclusters agrees very well with various experiments.
Bell's Theorem and Einstein's "Spooky Actions" from a Simple Thought Experiment
ERIC Educational Resources Information Center
Kuttner, Fred; Rosenblum, Bruce
2010-01-01
In 1964 John Bell proved a theorem allowing the experimental test of whether what Einstein derided as "spooky actions at a distance" actually exist. We will see that they "do". Bell's theorem can be displayed with a simple, nonmathematical thought experiment suitable for a physics course at "any" level. And a simple, semi-classical derivation of…
Parallel computing for automated model calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.
2002-07-29
Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, the magnitude and timing of stream flow peaks). An automated calibration process that allows real-time updating of data and models, allowing scientists to focus effort on improving models, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null-cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, need only a small amount of input data, and output only a small amount of statistical information for each calibration run. A typical auto-calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two auto-calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed-computing, cross-platform environment. They allow the incorporation of "smart" calibration parameter generation (using artificial intelligence processing techniques). Null-cycle computing similar to SETI@home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
Oxide films state analysis by IR spectroscopy based on the simple oscillator approximation
NASA Astrophysics Data System (ADS)
Volkov, N. V.; Yakutkina, T. V.; Karpova, V. V.
2017-05-01
Stabilization of the structure-phase state over a wide temperature range is one of the most important problems in improving the properties of oxide compounds. As such, the search for new effective methods of obtaining metal oxides with desired physico-chemical, electro-physical and thermal properties, and of controlling them, is important and relevant. The aim of this work is to identify features of the state of oxide films of several metals (Be, Al, Fe, Cu, Zr) on the surfaces of polycrystalline metal samples by infrared spectroscopy. To identify the resonance emission bands, an algorithm for IR-spectra processing was developed and implemented in the spreadsheet processor EXCEL-2010, which allows characteristic resonance bands to be revealed and inorganic chemical compounds to be identified. Within the frame of a simple oscillator model, the resonance frequencies of normal vibrations of water and some inorganic compounds (the metal oxides of Be, Al, Fe, Cu and Zr) were calculated, and characteristic frequencies for different states (aggregate, deformation, phase) were specified. By means of IR spectroscopy, the fundamental possibility of revealing the state of oxide films on a metal substrate is shown, which allows the development and optimization of technologies for producing oxide films with desired properties.
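In the simple (harmonic) oscillator approximation used here, a vibrational wavenumber follows from a force constant and a reduced mass; a minimal sketch, where the force constant value is an assumption purely for illustration:

```python
import numpy as np

C_CM_PER_S = 2.998e10          # speed of light in cm/s
AMU_KG = 1.66054e-27           # atomic mass unit in kg

def wavenumber_cm1(k_N_per_m, m1_amu, m2_amu):
    """Harmonic-oscillator vibrational wavenumber (cm^-1) for a diatomic unit
    with force constant k and atomic masses m1, m2."""
    mu = (m1_amu * m2_amu) / (m1_amu + m2_amu) * AMU_KG   # reduced mass
    return np.sqrt(k_N_per_m / mu) / (2.0 * np.pi * C_CM_PER_S)

# illustrative force constant for a metal-oxygen stretch (value is an assumption)
print("approx. Al-O stretch:", round(wavenumber_cm1(400.0, 26.98, 16.00), 0), "cm^-1")
```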
NASA Astrophysics Data System (ADS)
Bakker, Alexander; Louchard, Domitille; Keller, Klaus
2016-04-01
Sea-level rise threatens many coastal areas around the world. The integrated assessment of potential adaptation and mitigation strategies requires a sound understanding of the upper tails and the major drivers of the uncertainties. Global warming causes sea-level to rise, primarily due to thermal expansion of the oceans and mass loss of the major ice sheets, smaller ice caps and glaciers. These components show distinctly different responses to temperature changes with respect to response time, threshold behavior, and local fingerprints. Projections of these different components are deeply uncertain. Projected uncertainty ranges strongly depend on (necessary) pragmatic choices and assumptions; e.g. on the applied climate scenarios, which processes to include and how to parameterize them, and on error structure of the observations. Competing assumptions are very hard to objectively weigh. Hence, uncertainties of sea-level response are hard to grasp in a single distribution function. The deep uncertainty can be better understood by making clear the key assumptions. Here we demonstrate this approach using a relatively simple model framework. We present a mechanistically motivated, but simple model framework that is intended to efficiently explore the deeply uncertain sea-level response to anthropogenic climate change. The model consists of 'building blocks' that represent the major components of sea-level response and its uncertainties, including threshold behavior. The framework's simplicity enables the simulation of large ensembles allowing for an efficient exploration of parameter uncertainty and for the simulation of multiple combined adaptation and mitigation strategies. The model framework can skilfully reproduce earlier major sea level assessments, but due to the modular setup it can also be easily utilized to explore high-end scenarios and the effect of competing assumptions and parameterizations.
ERIC Educational Resources Information Center
Sawicki, Charles A.
1996-01-01
Describes a simple, inexpensive system that allows students to have hands-on contact with simple experiments involving forces generated by induced currents. Discusses the use of a dynamic force sensor in making quantitative measurements of the forces generated. (JRH)
Can we close the Bohr-Einstein quantum debate?
NASA Astrophysics Data System (ADS)
Kupczynski, Marian
2017-10-01
Recent experiments allow one to conclude that Bell-type inequalities are indeed violated; thus, it is important to understand what this means and how we can explain the existence of strong correlations between outcomes of distant measurements. Do we have to announce that Einstein was wrong, Nature is non-local and non-local correlations are produced due to quantum magic and emerge, somehow, from outside space-time? Fortunately, such conclusions are unfounded because, if supplementary parameters describing measuring instruments are correctly incorporated in a theoretical model, then Bell-type inequalities may not be proved. We construct a simple probabilistic model allowing these correlations to be explained in a locally causal way. In our model, measurement outcomes are neither predetermined nor produced in an irreducibly random way. We explain why, contrary to the general belief, the introduction of setting-dependent parameters does not restrict experimenters' freedom of choice. Since the violation of Bell-type inequalities does not allow the conclusion that Nature is non-local and that quantum theory is complete, the Bohr-Einstein quantum debate may not be closed. The continuation of this debate is important not only for a better understanding of Nature but also for various practical applications of quantum phenomena. This article is part of the themed issue `Second quantum revolution: foundational questions'.
Can we close the Bohr-Einstein quantum debate?
Kupczynski, Marian
2017-11-13
Recent experiments allow one to conclude that Bell-type inequalities are indeed violated; thus, it is important to understand what this means and how we can explain the existence of strong correlations between outcomes of distant measurements. Do we have to announce that Einstein was wrong, Nature is non-local and non-local correlations are produced due to quantum magic and emerge, somehow, from outside space-time? Fortunately, such conclusions are unfounded because, if supplementary parameters describing measuring instruments are correctly incorporated in a theoretical model, then Bell-type inequalities may not be proved. We construct a simple probabilistic model allowing these correlations to be explained in a locally causal way. In our model, measurement outcomes are neither predetermined nor produced in an irreducibly random way. We explain why, contrary to the general belief, the introduction of setting-dependent parameters does not restrict experimenters' freedom of choice. Since the violation of Bell-type inequalities does not allow the conclusion that Nature is non-local and that quantum theory is complete, the Bohr-Einstein quantum debate may not be closed. The continuation of this debate is important not only for a better understanding of Nature but also for various practical applications of quantum phenomena.This article is part of the themed issue 'Second quantum revolution: foundational questions'. © 2017 The Author(s).
Using computational modeling of river flow with remotely sensed data to infer channel bathymetry
Nelson, Jonathan M.; McDonald, Richard R.; Kinzel, Paul J.; Shimizu, Y.
2012-01-01
As part of an ongoing investigation into the use of computational river flow and morphodynamic models for the purpose of correcting and extending remotely sensed river datasets, a simple method for inferring channel bathymetry is developed and discussed. The method is based on an inversion of the equations expressing conservation of mass and momentum to develop equations that can be solved for depth given known values of vertically averaged velocity and water-surface elevation. The ultimate goal of this work is to combine imperfect remotely sensed data on river planform, water-surface elevation and water-surface velocity in order to estimate depth and other physical parameters of river channels. In this paper, the technique is examined using synthetic data sets that are developed directly from the application of forward two- and three-dimensional flow models. These data sets are constrained to satisfy conservation of mass and momentum, unlike typical remotely sensed field data sets. This provides a better understanding of the process and also allows assessment of how simple inaccuracies in remotely sensed estimates might propagate into depth estimates. The technique is applied to three simple cases: first, depth is extracted from a synthetic dataset of vertically averaged velocity and water-surface elevation; second, depth is extracted from the same data set but with a normally distributed random error added to the water-surface elevation; third, depth is extracted from a synthetic data set for the same river reach using computed water-surface velocities (in place of depth-integrated values) and water-surface elevations. In each case, the extracted depths are compared to the actual measured depths used to construct the synthetic data sets (with two- and three-dimensional flow models). Even very small errors in water-surface elevation and velocity degrade the depth estimates in a way that cannot be recovered. Errors in depth estimates associated with assuming water-surface velocities equal to depth-integrated velocities are substantial, but can be reduced with simple corrections.
Fabrication of dielectric elastomer stack transducers (DEST) by liquid deposition modeling
NASA Astrophysics Data System (ADS)
Klug, Florian; Solano-Arana, Susana; Mößinger, Holger; Förster-Zügel, Florentine; Schlaak, Helmut F.
2017-04-01
Established fabrication methods for dielectric elastomer stack transducers (DEST) are mostly based on two-dimensional thin-film technology. Because of this, DEST are based on simple two-dimensionally structured shapes. For certain applications, like valves or Braille displays, these structures are suited well enough. However, a more flexible fabrication method allows for more complex actuator designs, which would otherwise require extra processing steps. Fabrication methods with the possibility of three-dimensional structuring allow, e.g., the integration of electrical connections, cavities, channels, sensors and other structural elements during fabrication. This opens up new applications, as well as the opportunity for faster prototype production of individually designed DEST for a given application. In this work, a manufacturing system allowing three-dimensional structuring is described. It enables the production of multilayer and three-dimensionally structured DEST by liquid deposition modelling. The system is based on a custom-made dual extruder connected to a commercial three-axis positioning system. It allows computer-controlled liquid deposition of two materials. After tuning the manufacturing parameters, the production of thin layers with a thickness of less than 50 μm, as well as stacking of electrode and dielectric materials, is feasible. With this setup a first DEST with a dielectric layer thickness of less than 50 μm is built successfully and its performance is evaluated.
Hernández, Oscar E; Zurek, Eduardo E
2013-05-15
We present a software tool called SENB, which allows the geometric and biophysical neuronal properties in a simple computational model of a Hodgkin-Huxley (HH) axon to be changed. The aim of this work is to develop a didactic and easy-to-use computational tool in the NEURON simulation environment, which allows graphical visualization of both the passive and active conduction parameters and the geometric characteristics of a cylindrical axon with HH properties. The SENB software offers several advantages for teaching and learning electrophysiology. First, SENB offers ease and flexibility in determining the number of stimuli. Second, SENB allows immediate and simultaneous visualization, in the same window and time frame, of the evolution of the electrophysiological variables. Third, SENB calculates parameters such as time and space constants, stimuli frequency, cellular area and volume, sodium and potassium equilibrium potentials, and propagation velocity of the action potentials. Furthermore, it allows the user to see all this information immediately in the main window. Finally, with just one click SENB can save an image of the main window as evidence. The SENB software is didactic and versatile, and can be used to improve and facilitate the teaching and learning of the underlying mechanisms in the electrical activity of an axon using the biophysical properties of the squid giant axon.
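SENB itself is a GUI layer on top of NEURON; a minimal script-level sketch of the kind of HH axon it wraps, using NEURON's Python interface with illustrative geometry and stimulus values, is shown below.

```python
from neuron import h
h.load_file("stdrun.hoc")            # load the standard run system (finitialize/continuerun)

# cylindrical axon with squid-like Hodgkin-Huxley membrane (illustrative geometry)
axon = h.Section(name="axon")
axon.L = 5000.0                      # length (um)
axon.diam = 500.0                    # diameter (um)
axon.nseg = 101
axon.insert("hh")                    # built-in Hodgkin-Huxley channels

# current-clamp stimulus near one end
stim = h.IClamp(axon(0.05))
stim.delay, stim.dur, stim.amp = 1.0, 0.5, 50.0   # ms, ms, nA (illustrative)

# record time and membrane potential at the far end
t_vec = h.Vector().record(h._ref_t)
v_far = h.Vector().record(axon(0.95)._ref_v)

h.finitialize(-65.0)
h.continuerun(20.0)

print("peak V at far end (mV):", round(v_far.max(), 1))
```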
Shell models of magnetohydrodynamic turbulence
NASA Astrophysics Data System (ADS)
Plunian, Franck; Stepanov, Rodion; Frick, Peter
2013-02-01
Shell models of hydrodynamic turbulence originated in the seventies. Their main aim was to describe the statistics of homogeneous and isotropic turbulence in spectral space, using a simple set of ordinary differential equations. In the eighties, shell models of magnetohydrodynamic (MHD) turbulence emerged, based on the same principles as their hydrodynamic counterpart but also incorporating interactions between magnetic and velocity fields. In recent years, significant improvements have been made, such as the inclusion of non-local interactions and appropriate definitions for helicities. Though shell models cannot account for the spatial complexity of MHD turbulence, their dynamics are not oversimplified and do reflect those of real MHD turbulence, including intermittency or chaotic reversals of large-scale modes. Furthermore, these models use realistic values for dimensionless parameters (high kinetic and magnetic Reynolds numbers, low or high magnetic Prandtl number), allowing an extended inertial range and accurate dissipation rate. Using modern computers it is difficult to attain an inertial range of three decades with direct numerical simulations, whereas eight are possible using shell models. In this review we set up a general mathematical framework allowing the description of any MHD shell model. The variety of the latter, with their advantages and weaknesses, is introduced. Finally we consider a number of applications, dealing with free-decaying MHD turbulence, dynamo action, Alfvén waves and the Hall effect.
NASA Astrophysics Data System (ADS)
Zaccone, Alessio; Gentili, Daniele; Wu, Hua; Morbidelli, Massimo
2010-04-01
The aggregation of interacting Brownian particles in sheared concentrated suspensions is an important issue in colloid and soft matter science per se. Also, it serves as a model to understand biochemical reactions occurring in vivo where both crowding and shear play an important role. We present an effective medium approach within the Smoluchowski equation with shear which allows one to calculate the encounter kinetics through a potential barrier under shear at arbitrary colloid concentrations. Experiments on a model colloidal system in simple shear flow support the validity of the model in the concentration range considered. By generalizing Kramers' rate theory to the presence of shear and collective hydrodynamics, our model explains the significant increase in the shear-induced reaction-limited aggregation kinetics upon increasing the colloid concentration.
Value of the distant future: Model-independent results
NASA Astrophysics Data System (ADS)
Katz, Yuri A.
2017-01-01
This paper shows that the model-independent account of correlations in an interest rate process or a log-consumption growth process leads to declining long-term tails of discount curves. Under the assumption of an exponentially decaying memory in fluctuations of risk-free real interest rates, I derive the analytical expression for an apt value of the long-run discount factor and provide a detailed comparison of the obtained result with the outcome of the benchmark risk-free interest rate models. Utilizing the standard consumption-based model with an isoelastic power utility of the representative economic agent, I derive the non-Markovian generalization of the Ramsey discounting formula. The obtained analytical results, which allow simple calibration, may augment rigorous cost-benefit and regulatory impact analyses of long-term environmental and infrastructure projects.
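The qualitative effect is easy to reproduce numerically. The sketch below (a generic illustration, not the paper's closed-form result) simulates a persistent, mean-reverting (Ornstein-Uhlenbeck) real interest rate with assumed parameters and shows that the certainty-equivalent discount rate R(t) = -ln E[exp(-∫ r ds)]/t at a long horizon falls below the mean short rate, the more so the more persistent the fluctuations.

```python
# Monte Carlo illustration of declining certainty-equivalent discount rates
# under a persistent interest-rate process. Parameters are assumed.
import numpy as np

rng = np.random.default_rng(0)
r_bar, kappa, sigma = 0.03, 0.05, 0.01   # mean rate, mean-reversion speed, volatility (assumed)
dt, T, n_paths = 0.25, 400.0, 20000      # years
n_steps = int(T / dt)

r = np.full(n_paths, r_bar)
integral = np.zeros(n_paths)             # accumulates the integral of r along each path
for _ in range(n_steps):
    integral += r * dt
    r += kappa * (r_bar - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

D = np.exp(-integral).mean()             # certainty-equivalent discount factor at horizon T
print(f"effective discount rate at {T:.0f} y: {-np.log(D)/T:.4f} vs short rate {r_bar}")
# By Jensen's inequality the effective rate falls below r_bar, and the decline is
# stronger the more persistent (smaller kappa) the rate fluctuations are.
```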
NASA Technical Reports Server (NTRS)
Smialek, James L.
2002-01-01
An equation has been developed to model the iterative scale growth and spalling process that occurs during cyclic oxidation of high-temperature materials. Parabolic scale growth and spalling of a constant surface area fraction have been assumed. Interfacial spallation of only the thickest segments was also postulated. This simplicity allowed for representation by a simple deterministic summation series. Inputs are the parabolic growth rate constant, the spall area fraction, oxide stoichiometry, and cycle duration. Outputs include the net weight change behavior, as well as the total amount of oxygen and metal consumed, the total amount of oxide spalled, and the mass fraction of oxide spalled. The outputs all follow typical well-behaved trends with the inputs and are in good agreement with previous interfacial models.
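The assumptions listed above are simple enough that the cycle-by-cycle bookkeeping can be reproduced directly. The sketch below is a crude per-cycle simulation in that spirit, not the author's closed-form summation series; the rate constant, spall fraction, oxygen mass fraction and cycle count are illustrative assumptions.

```python
# Per-cycle re-implementation of the stated assumptions: parabolic growth, then
# spallation of a fixed area fraction from the thickest segments. Illustration only.
import numpy as np

kp = 0.01        # parabolic rate constant, (mg/cm^2)^2 per hour (assumed)
fa = 0.02        # area fraction spalled each cycle (assumed)
dt = 1.0         # hot-dwell duration per cycle, hours (assumed)
f_O = 0.47       # oxygen mass fraction of the oxide, e.g. ~0.47 for Al2O3
n_seg, n_cycles = 1000, 500

oxide = np.zeros(n_seg)          # retained oxide mass per segment, mg/cm^2
spalled_total = 0.0
net_weight = []

for _ in range(n_cycles):
    oxide = np.sqrt(oxide**2 + kp * dt)            # parabolic growth during the hot dwell
    idx = np.argsort(oxide)[-int(fa * n_seg):]     # thickest segments spall on cooling
    spalled_total += oxide[idx].sum() / n_seg
    oxide[idx] = 0.0
    retained = oxide.sum() / n_seg
    # specimen weight change = oxygen in all oxide ever formed - weight of spalled oxide
    net_weight.append(f_O * retained - (1 - f_O) * spalled_total)

print(f"final net weight change: {net_weight[-1]:+.3f} mg/cm^2")
```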
Tracing the Rationale Behind UML Model Change Through Argumentation
NASA Astrophysics Data System (ADS)
Jureta, Ivan J.; Faulkner, Stéphane
Neglecting traceability—i.e., the ability to describe and follow the life of a requirement—is known to entail misunderstanding and miscommunication, leading to the engineering of poor-quality systems. Following the simple principles that (a) changes to UML model instances ought to be justified to the stakeholders, (b) justification should proceed in a structured manner to ensure rigor in discussions, critique, and revisions of model instances, and (c) the concept of argument instantiated in a justification process ought to be well defined and understood, the present paper introduces the UML Traceability through Argumentation Method (UML-TAM) to enable the traceability of design rationale in UML while allowing the appropriateness of model changes to be checked by analysis of the structure of the arguments provided to justify such changes.
NASA Astrophysics Data System (ADS)
Li, Jie; Zippilli, Stefano; Zhang, Jing; Vitali, David
2016-05-01
Collapse models postulate the existence of intrinsic noise which modifies quantum mechanics and is responsible for the emergence of macroscopic classicality. Assessing the validity of these models is extremely challenging because it is nontrivial to discriminate their presence unambiguously in experiments where other, hardly controllable, sources of noise contribute to the overall decoherence. Here we provide a simple procedure that is able to probe the hypothetical presence of the collapse noise with a levitated nanosphere in a Fabry-Pérot cavity. We show that the stationary state of the system is particularly sensitive, under specific experimental conditions, to the interplay between the trapping frequency, the cavity size, and the momentum diffusion induced by the collapse models, allowing one to detect them even in the presence of standard environmental noises.
NASA Astrophysics Data System (ADS)
Mazilu, Traian
2010-09-01
This paper describes the interaction between a simple moving vehicle and an infinite, periodically supported rail, in order to highlight the basic features of vehicle/track vibration behaviour in general, and of wheel/rail vibration in particular. The rail is modelled as an infinite Timoshenko beam resting on semi-sleepers via three-directional rail pads and ballast. The time-domain analysis was performed by applying the Green's matrix method for the track. This method allows the nonlinearities of the wheel/rail contact and the Doppler effect to be taken into account. The numerical analysis is dedicated to the wheel/rail response due to two types of excitation: the steady-state interaction and rail irregularities. The study points out certain aspects regarding the parametric resonance, the amplitude-modulated vibration due to corrugation and the Doppler effect.
Analysis and design of a standardized control module for switching regulators
NASA Astrophysics Data System (ADS)
Lee, F. C.; Mahmoud, M. F.; Yu, Y.; Kolecki, J. C.
1982-07-01
Three basic switching regulators (buck, boost, and buck/boost) employing a multiloop standardized control module (SCM) were characterized by a common small-signal block diagram. Employing the unified model, regulator performances such as stability, audiosusceptibility, output impedance, and step-load transient response are analyzed, and key performance indexes are expressed in simple analytical forms. More importantly, the performance characteristics of all three regulators are shown to enjoy common properties due to the unique SCM control scheme, which nullifies the positive zero and provides adaptive compensation to the moving poles of the boost and buck/boost converters. This allows a simple unified design procedure to be devised for selecting the key SCM control parameters for an arbitrarily given power stage configuration and parameter values, such that all regulator performance specifications can be met and optimized concurrently in a single design attempt.
NASA Astrophysics Data System (ADS)
George, D. S.; Onischenko, A.; Holmes, A. S.
2004-03-01
Focused laser ablation by single laser pulses at varying angles of incidence is studied in two materials of interest: a sol-gel (Ormocer 4) and a polymer (SU8). For a range of angles (up to 70° from normal), and for low-energy (<20 μJ), 40 ns pulses at 266 nm wavelength, the ablation depth along the direction of the incident laser beam is found to be independent of the angle of incidence. This allows the crater profiles at oblique incidence to be generated directly from the crater profiles at normal incidence by a simple coordinate transformation. This result is of use in the development of simulation tools for direct-write laser ablation. A simple model based on the moving ablation front approach is shown to be consistent with the observed behavior.
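One way such a transformation can be written, under the stated observation that the depth removed along the beam is angle-independent, is sketched below: each point of the normal-incidence crater is displaced along the tilted beam direction rather than along the surface normal. The Gaussian profile, its dimensions and the 45° angle are illustrative assumptions, and the paper's exact transformation may differ in detail.

```python
# Illustrative coordinate transformation from a normal-incidence crater profile
# to an oblique-incidence one. Profile shape and parameters are assumptions.
import numpy as np

def normal_incidence_depth(x, w=5.0, d0=2.0):
    """Assumed Gaussian crater depth (um) vs position (um) at normal incidence."""
    return d0 * np.exp(-(x / w) ** 2)

def oblique_crater(x_surface, theta_deg):
    """Crater surface points (x, z) in the plane of incidence at angle theta."""
    th = np.radians(theta_deg)
    u = x_surface * np.cos(th)              # coordinate transverse to the beam axis
    d = normal_incidence_depth(u)           # depth along the beam is angle-independent
    x = x_surface + d * np.sin(th)          # displace each point along the beam direction
    z = -d * np.cos(th)
    return x, z

xs = np.linspace(-15, 15, 201)
x45, z45 = oblique_crater(xs, 45.0)
print(f"max vertical depth at 45 deg: {-z45.min():.2f} um (2.00 um at normal incidence)")
```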
Samiee, K. T.; Foquet, M.; Guo, L.; Cox, E. C.; Craighead, H. G.
2005-01-01
Fluorescence correlation spectroscopy (FCS) has demonstrated its utility for measuring transport properties and kinetics at low fluorophore concentrations. In this article, we demonstrate that simple optical nanostructures, known as zero-mode waveguides, can be used to significantly reduce the FCS observation volume. This, in turn, allows FCS to be applied to solutions with significantly higher fluorophore concentrations. We derive an empirical FCS model accounting for one-dimensional diffusion in a finite tube with a simple exponential observation profile. This technique is used to measure the oligomerization of the bacteriophage λ repressor protein at micromolar concentrations. The results agree with previous studies utilizing conventional techniques. Additionally, we demonstrate that the zero-mode waveguides can be used to assay biological activity by measuring changes in diffusion constant as a result of ligand binding. PMID:15613638
Solares, Santiago D.
2015-11-26
This study introduces a quasi-3-dimensional (Q3D) viscoelastic model and software tool for use in atomic force microscopy (AFM) simulations. The model is based on a 2-dimensional array of standard linear solid (SLS) model elements. The well-known 1-dimensional SLS model is a textbook example in viscoelastic theory but is relatively new in AFM simulation. It is the simplest model that offers a qualitatively correct description of the most fundamental viscoelastic behaviors, namely stress relaxation and creep. However, this simple model does not reflect the correct curvature in the repulsive portion of the force curve, so its application in the quantitative interpretation of AFM experiments is relatively limited. In the proposed Q3D model the use of an array of SLS elements leads to force curves that have the typical upward curvature in the repulsive region, while still offering a very low computational cost. Furthermore, the use of a multidimensional model allows for the study of AFM tips having non-ideal geometries, which can be extremely useful in practice. Examples of typical force curves are provided for single- and multifrequency tapping-mode imaging, for both of which the force curves exhibit the expected features. Lastly, a software tool to simulate amplitude and phase spectroscopy curves is provided, which can be easily modified to implement other control schemes in order to aid in the interpretation of AFM experiments.
Solares, Santiago D
2015-01-01
This paper introduces a quasi-3-dimensional (Q3D) viscoelastic model and software tool for use in atomic force microscopy (AFM) simulations. The model is based on a 2-dimensional array of standard linear solid (SLS) model elements. The well-known 1-dimensional SLS model is a textbook example in viscoelastic theory but is relatively new in AFM simulation. It is the simplest model that offers a qualitatively correct description of the most fundamental viscoelastic behaviors, namely stress relaxation and creep. However, this simple model does not reflect the correct curvature in the repulsive portion of the force curve, so its application in the quantitative interpretation of AFM experiments is relatively limited. In the proposed Q3D model the use of an array of SLS elements leads to force curves that have the typical upward curvature in the repulsive region, while still offering a very low computational cost. Furthermore, the use of a multidimensional model allows for the study of AFM tips having non-ideal geometries, which can be extremely useful in practice. Examples of typical force curves are provided for single- and multifrequency tapping-mode imaging, for both of which the force curves exhibit the expected features. Finally, a software tool to simulate amplitude and phase spectroscopy curves is provided, which can be easily modified to implement other control schemes in order to aid in the interpretation of AFM experiments.
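The sketch below illustrates the general idea of the approach described above, not the authors' code: a 2-D array of independent standard-linear-solid elements is indented by a paraboloidal tip during an approach-retract cycle, and the tip-sample force is the sum of the element forces. All material and geometry parameters are assumed, and detached elements are simply reset, a simplification the actual model may not make.

```python
# 2-D array of SLS elements under a paraboloidal tip; illustrative parameters.
import numpy as np

nx = ny = 41
dx = 1.0                                   # element spacing, nm
X, Y = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dx, indexing="ij")
X -= X.mean(); Y -= Y.mean()

R_tip = 10.0                               # tip radius, nm (assumed)
k_inf, k1, eta = 0.05, 0.2, 0.5            # SLS: parallel spring, Maxwell-arm spring and dashpot

dt = 1e-3
z_tip = np.concatenate([np.linspace(2.0, -3.0, 2500),   # approach (negative = indentation)
                        np.linspace(-3.0, 2.0, 2500)])  # retract

eps = np.zeros((nx, ny))                   # element compression
F_m = np.zeros((nx, ny))                   # force carried by each Maxwell arm
forces = []
for z in z_tip:
    surf = z + (X**2 + Y**2) / (2.0 * R_tip)            # paraboloidal tip profile
    new_eps = np.maximum(0.0, -surf)                    # element compression (no adhesion)
    deps = (new_eps - eps) / dt
    F_m += dt * (k1 * deps - (k1 / eta) * F_m)          # Maxwell arm: dF/dt = k1*deps - (k1/eta)*F
    F_m[new_eps == 0.0] = 0.0                           # detached elements reset (simplification)
    eps = new_eps
    forces.append(np.sum(k_inf * eps + F_m))            # total tip-sample force

print(f"peak repulsive force (arbitrary units): {max(forces):.2f}")
```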
Three-Dimensional Online Visualization and Engagement Tools for the Geosciences
NASA Astrophysics Data System (ADS)
Cockett, R.; Moran, T.; Pidlisecky, A.
2013-12-01
Educational tools often sacrifice interactivity in favour of scalability so they can reach more users. This compromise leads to tools that may be viewed as second tier when compared to more engaging activities performed in a laboratory; however, the resources required to deliver laboratory exercises at scale are often impractical. Geoscience education is well situated to benefit from interactive online learning tools that allow users to work in a 3D environment. Visible Geology (http://3ptscience.com/visiblegeology) is an innovative web-based application designed to enable visualization of geologic structures and processes through the use of interactive 3D models. The platform allows users to conceptualize difficult, yet important geologic principles in a scientifically accurate manner by developing unique geologic models. The environment allows students to interactively practice their visualization and interpretation skills by creating and interacting with their own models and terrains. Visible Geology has been designed from a user-centric perspective, resulting in a simple and intuitive interface. The platform directs students to build their own geologic models by adding beds and creating geologic events such as tilting, folding, or faulting. The level of ownership and interactivity encourages engagement, leading learners to discover geologic relationships on their own, in the context of guided assignments. In January 2013, an interactive geologic history assignment was developed for a 700-student introductory geology class at The University of British Columbia. The assignment required students to distinguish the relative age of geologic events to construct a geologic history. Traditionally this type of exercise has been taught through the use of simple geologic cross-sections showing crosscutting relationships; from these cross-sections students infer the relative age of geologic events. In contrast, the Visible Geology assignment offers students a unique experience where they first create their own geologic events, allowing them to see directly how the timing of a geologic event manifests in the model and resulting cross-sections. By creating each geologic event in the model themselves, the students gain a deeper understanding of the processes and relative order of events. The resulting models can be shared amongst students, and provide instructors with a basis for guiding inquiry to address misconceptions. The ease of use of the assignment, including automatic assessment, made this tool practical for deployment in this 700-person class. The outcome of this type of large-scale deployment is that students, who would normally not experience a lab exercise, gain exposure to interactive 3D thinking. Engaging tools and software that put the user in control of their learning experience are critical for moving to scalable, yet engaging, online learning environments.
Fluctuation-Driven Neural Dynamics Reproduce Drosophila Locomotor Patterns
Cruchet, Steeve; Gustafson, Kyle; Benton, Richard; Floreano, Dario
2015-01-01
The neural mechanisms determining the timing of even simple actions, such as when to walk or rest, are largely mysterious. One intriguing, but untested, hypothesis posits a role for ongoing activity fluctuations in neurons of central action selection circuits that drive animal behavior from moment to moment. To examine how fluctuating activity can contribute to action timing, we paired high-resolution measurements of freely walking Drosophila melanogaster with data-driven neural network modeling and dynamical systems analysis. We generated fluctuation-driven network models whose outputs—locomotor bouts—matched those measured from sensory-deprived Drosophila. From these models, we identified those that could also reproduce a second, unrelated dataset: the complex time-course of odor-evoked walking for genetically diverse Drosophila strains. Dynamical models that best reproduced both Drosophila basal and odor-evoked locomotor patterns exhibited specific characteristics. First, ongoing fluctuations were required. In a stochastic resonance-like manner, these fluctuations allowed neural activity to escape stable equilibria and to exceed a threshold for locomotion. Second, odor-induced shifts of equilibria in these models caused a depression in locomotor frequency following olfactory stimulation. Our models predict that activity fluctuations in action selection circuits cause behavioral output to more closely match sensory drive and may therefore enhance navigation in complex sensory environments. Together these data reveal how simple neural dynamics, when coupled with activity fluctuations, can give rise to complex patterns of animal behavior. PMID:26600381
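A toy illustration of the fluctuation-driven mechanism described above (not the authors' fitted network model) is sketched below: a single rate variable with two stable equilibria is driven by ongoing noise, and locomotor bouts are read out as excursions above a threshold. The drift function, noise level and threshold are all assumed.

```python
# Toy bistable unit with noise: fluctuations let activity escape the rest state
# and cross a locomotion threshold, producing bout-like episodes. Assumptions only.
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-3, 200.0
n = int(T / dt)
x = 0.0
noise_sigma, threshold = 0.35, 0.6

def drift(x):
    # double-well drift with stable points near x = -0.7 (rest) and x = +0.7 (walk)
    return x - 2.0 * x**3

walking = np.zeros(n, bool)
for i in range(n):
    x += drift(x) * dt + noise_sigma * np.sqrt(dt) * rng.standard_normal()
    walking[i] = x > threshold

bout_onsets = np.flatnonzero(np.diff(walking.astype(int)) == 1)
print(f"{bout_onsets.size} locomotor bouts in {T:.0f} s of simulated time")
```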
A GIS-based atmospheric dispersion model for pollutants emitted by complex source areas.
Teggi, Sergio; Costanzini, Sofia; Ghermandi, Grazia; Malagoli, Carlotta; Vinceti, Marco
2018-01-01
Gaussian dispersion models are widely used to simulate the concentrations and deposition fluxes of pollutants emitted by source areas. Very often, calculation time limits the number of sources and receptors, and the geometry of the sources must be simple and without holes. This paper presents CAREA, a new GIS-based Gaussian model for complex source areas. CAREA was coded in the Python language and is largely based on a simplified formulation of the widely used and well-recognized AERMOD model. The model allows users to define in a GIS environment thousands of gridded or scattered receptors and thousands of complex sources with hundreds of vertices and holes. CAREA computes ground-level, or near-ground-level, concentrations and dry deposition fluxes of pollutants. The input/output and the runs of the model can be managed entirely in a GIS environment (e.g. inside a GIS project). The paper presents the CAREA formulation and its applications to very complex test cases. The tests show that processing times are satisfactory and that defining sources and receptors and retrieving output are straightforward in a GIS environment. CAREA and AERMOD are compared using simple and reproducible test cases. The comparison shows that CAREA satisfactorily reproduces AERMOD simulations and is considerably faster than AERMOD.
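For orientation, the sketch below evaluates the textbook Gaussian plume kernel for a single point source with ground reflection; it is not CAREA's AERMOD-derived formulation, and the power-law dispersion coefficients (a rough rural, neutral-stability parameterization) are assumptions. An area source would typically be handled by integrating such a kernel over the source polygon.

```python
# Textbook Gaussian plume kernel for a point source with ground reflection.
# Dispersion-coefficient curves and all parameters are assumptions.
import numpy as np

def gaussian_plume(x, y, z, Q=1.0, u=3.0, H=10.0):
    """Concentration (g/m^3) at downwind x, crosswind y, height z (m) for emission Q (g/s)."""
    x = np.maximum(x, 1.0)                        # avoid the singularity at the source
    sigma_y = 0.08 * x / np.sqrt(1 + 0.0001 * x)  # assumed rural, neutral-class curves
    sigma_z = 0.06 * x / np.sqrt(1 + 0.0015 * x)
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2)) +
                np.exp(-(z + H)**2 / (2 * sigma_z**2)))   # reflection at the ground
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# ground-level concentration 500 m downwind on the plume centreline
print(gaussian_plume(np.array([500.0]), 0.0, 1.5))
```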
Low-angle detachment origin for the Red Sea Rift System?
NASA Astrophysics Data System (ADS)
Voggenreiter, W.; Hötzl, H.; Mechie, J.
1988-07-01
The tectonic and magmatic history of the Jizan coastal plain (Tihama Asir, southwest Arabia) suggests a two-stage evolution. A first stage of extension began during the Oligocene and ended with uplift of the Arabian graben shoulder which began about 14 Ma ago. It was followed by a period of approximately 10 Ma characterized by magmatic and tectonic quiescence. A second stage of extension began roughly contemporaneously with the onset of seafloor spreading in the southern Red Sea some 4-5 Ma ago and is still active today. The geometry of faulting in the Jizan area supports a Wernicke model of simple shear for the development of the southern Red Sea. Regional asymmetries of the Red Sea area, such as the distribution of volcanism, the marginal topography and asymmetries in the geophysical signatures are consistent with such a model. Available seismic profiles allow a rough estimate for β-values of the Arabian Red Sea margin and were used to simulate subsidence history and heat flow of the Red Sea for "classical" two-layer stretching models. Neither finite uniform nor finite non-uniform stretching models can account for observed subsidence and heat flow data. Thus, two model scenarios of whole-lithosphere normal simple-shear are presented for the geological history of the southwestern Arabian margin of the Red Sea. These models are limited because of the Serravallian rearrangement in the kinematics of the Red Sea.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, Shih-Miao; Hwang, Ho-Ling
2007-01-01
This paper describes the development of national freight demand models for 27 industry sectors covered by the 2002 Commodity Flow Survey. It postulates that national freight demands are consistent with U.S. business patterns. Furthermore, the study hypothesizes that the flow of goods, which makes up the national production processes of industries, is coherent with the information described in the 2002 Annual Input-Output Accounts developed by the Bureau of Economic Analysis. The model estimation framework hinges largely on the assumption that a relatively simple relationship exists between freight production/consumption and business patterns for each industry defined by the three-digit North American Industry Classification System (NAICS) codes. The national freight demand model for each selected industry sector consists of two models: a freight generation model and a freight attraction model. Thus, a total of 54 simple regression models were estimated under this study. Preliminary results indicated promising freight generation and freight attraction models. Among all models, only four had an R2 value lower than 0.70. With additional modeling efforts, these freight demand models could be enhanced to allow transportation analysts to assess regional economic impacts associated with temporary loss of transportation services on U.S. transportation network infrastructures. Using such freight demand models and available U.S. business forecasts, future national freight demands could be forecast with a certain degree of accuracy. These freight demand models could also enable transportation analysts to further disaggregate the CFS state-level origin-destination tables to county or zip code level.
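The per-industry models described above are simple regressions of freight generated (or attracted) on business-pattern covariates. The sketch below fits one such regression to synthetic stand-in data; the covariate, coefficients and data are hypothetical, and the study's actual specification may differ.

```python
# Ordinary least squares fit of freight tonnage vs a business-pattern covariate.
# Data are synthetic stand-ins; the study's actual variables may differ.
import numpy as np

rng = np.random.default_rng(7)
employment = rng.gamma(shape=2.0, scale=3000.0, size=300)        # hypothetical county employment
tons = 50.0 + 0.8 * employment + rng.normal(0, 800, size=300)    # hypothetical freight generated

X = np.column_stack([np.ones_like(employment), employment])
beta, *_ = np.linalg.lstsq(X, tons, rcond=None)                  # ordinary least squares
r2 = 1 - ((tons - X @ beta) ** 2).sum() / ((tons - tons.mean()) ** 2).sum()
print(f"tons = {beta[0]:.1f} + {beta[1]:.3f} * employment   (R^2 = {r2:.2f})")
```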
NASA Astrophysics Data System (ADS)
Franta, Daniel; Nečas, David; Giglia, Angelo; Franta, Pavel; Ohlídal, Ivan
2017-11-01
Optical characterization of magnesium fluoride thin films is performed in a wide spectral range from the far infrared to the extreme ultraviolet (0.01-45 eV) utilizing the universal dispersion model. Two film defects, i.e. random roughness of the upper boundaries and a defect transition layer at the lower boundary, are taken into account. An extension of the universal dispersion model, consisting in expressing the excitonic contributions as linear combinations of Gaussian and truncated Lorentzian terms, is introduced. The spectral dependencies of the optical constants are presented in graphical form and by the complete set of dispersion parameters, which allows tabulated optical constants to be generated with the required range and step using a simple utility in the newAD2 software package.
The topology of card transaction money flows
NASA Astrophysics Data System (ADS)
Zanin, Massimiliano; Papo, David; Romance, Miguel; Criado, Regino; Moral, Santiago
2016-11-01
Money flow models are essential tools to understand different economical phenomena, like saving propensities and wealth distributions. In spite of their importance, most of them are based on synthetic transaction networks with simple topologies, e.g. random or scale-free ones, as the characterisation of real networks is made difficult by the confidentiality and sensitivity of money transaction data. Here, we present an analysis of the topology created by real credit card transactions from one of the biggest world banks, and show how different distributions, e.g. number of transactions per card or amount, have nontrivial characteristics. We further describe a stochastic model to create transactions data sets, feeding from the obtained distributions, which will allow researchers to create more realistic money flow models.
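A minimal version of such a generator is sketched below: the number of transactions per card and the transaction amounts are drawn from heavy-tailed distributions (here lognormals with assumed parameters, standing in for the empirical distributions measured from the real data) and assigned to merchants at random. The real model would feed from the published distributions and preserve further dependencies.

```python
# Synthetic card-transaction generator fed by assumed heavy-tailed distributions.
import numpy as np

rng = np.random.default_rng(42)
n_cards, n_merchants = 10000, 500

tx_per_card = np.maximum(1, rng.lognormal(mean=1.5, sigma=1.0, size=n_cards).astype(int))
edges = []
for card, n_tx in enumerate(tx_per_card):
    merchants = rng.integers(0, n_merchants, size=n_tx)          # random merchant assignment
    amounts = rng.lognormal(mean=3.0, sigma=1.2, size=n_tx)      # transaction amounts (assumed)
    edges.extend(zip([card] * n_tx, merchants, amounts))

total = sum(a for _, _, a in edges)
print(f"{len(edges)} synthetic transactions, total volume {total:,.0f}")
```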
Evaluation of Genetic Algorithm Concepts using Model Problems. Part 1; Single-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic-algorithm-based optimization approach is described and evaluated using a simple hill-climbing model problem. The model problem utilized herein allows for the broad specification of a large number of search spaces, including spaces with an arbitrary number of genes or decision variables and an arbitrary number of hills or modes. In the present study, only single-objective problems are considered. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all problems attempted. The most difficult problems - those with large hyper-volumes and multi-mode search spaces containing a large number of genes - require a large number of function evaluations for GA convergence, but they always converge.
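The pairing of a configurable multi-hill model problem with a GA is easy to sketch. The code below (an illustration, not the authors' implementation) builds a landscape with a chosen number of decision variables and Gaussian hills and runs a small real-coded GA with tournament selection, uniform crossover and mutation; all operator settings are assumptions.

```python
# Real-coded GA on a configurable multi-hill landscape. Illustrative settings.
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_hills = 4, 6
centres = rng.uniform(0, 1, size=(n_hills, n_genes))      # random hill locations
heights = rng.uniform(0.5, 1.0, size=n_hills)

def fitness(x):
    d2 = ((x - centres) ** 2).sum(axis=1)
    return (heights * np.exp(-40.0 * d2)).max()            # value of the tallest nearby hill

pop = rng.uniform(0, 1, size=(60, n_genes))
for gen in range(200):
    f = np.array([fitness(ind) for ind in pop])
    # tournament selection between random pairs
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    parents = np.where((f[i] > f[j])[:, None], pop[i], pop[j])
    # uniform crossover with a shifted copy of the parent pool, then Gaussian mutation
    mask = rng.random(pop.shape) < 0.5
    children = np.where(mask, parents, np.roll(parents, 1, axis=0))
    children += rng.normal(0, 0.02, pop.shape) * (rng.random(pop.shape) < 0.2)
    pop = np.clip(children, 0, 1)

best = max(pop, key=fitness)
print(f"best fitness after 200 generations: {fitness(best):.3f}")
```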
New statistical scission-point model to predict fission fragment observables
NASA Astrophysics Data System (ADS)
Lemaître, Jean-François; Panebianco, Stefano; Sida, Jean-Luc; Hilaire, Stéphane; Heinrich, Sophie
2015-09-01
The development of high performance computing facilities makes possible a massive production of nuclear data in a full microscopic framework. Taking advantage of the individual potential calculations of more than 7000 nuclei, a new statistical scission-point model, called SPY, has been developed. It gives access to the absolute available energy at the scission point, which allows the use of a parameter-free microcanonical statistical description to calculate the distributions and the mean values of all fission observables. SPY uses the richness of microscopy in a rather simple theoretical framework, without any parameter except the scission-point definition, to draw clear answers based on perfect knowledge of the ingredients involved in the model, with very limited computing cost.
Milky Way Mass Models and MOND
NASA Astrophysics Data System (ADS)
McGaugh, Stacy S.
2008-08-01
Using the Tuorla-Heidelberg model for the mass distribution of the Milky Way, I determine the rotation curve predicted by MOND (modified Newtonian dynamics). The result is in good agreement with the observed terminal velocities interior to the solar radius and with estimates of the Galaxy's rotation curve exterior thereto. There are no fit parameters: given the mass distribution, MOND provides a good match to the rotation curve. The Tuorla-Heidelberg model does allow for a variety of exponential scale lengths; MOND prefers short scale lengths in the range 2.0 kpc ≲ Rd ≲ 2.5 kpc. The favored value of Rd depends somewhat on the choice of interpolation function. There is some preference for the "simple" interpolation function as found by Famaey & Binney. I introduce an interpolation function that shares the advantages of the simple function on galaxy scales while having a much smaller impact in the solar system. I also solve the inverse problem, inferring the surface mass density distribution of the Milky Way from the terminal velocities. The result is a Galaxy with "bumps and wiggles" in both its luminosity profile and rotation curve that are reminiscent of those frequently observed in external galaxies.
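The mechanics of such a calculation are easy to sketch. With the "simple" interpolation function μ(x) = x/(1+x), the MOND acceleration follows from the Newtonian one in closed form, g = [g_N + (g_N² + 4 g_N a₀)^(1/2)]/2. The code below applies this to a bare exponential disk treated with a crude spherical-mass approximation; the disk mass and scale length are assumed, and the Galaxy model is nothing like the Tuorla-Heidelberg one, so the numbers are only indicative.

```python
# MOND rotation curve with the "simple" interpolation function, applied to a
# crude exponential-disk toy Galaxy. All parameters are assumptions.
import numpy as np

G = 4.30e-6          # kpc (km/s)^2 / Msun
a0 = 3.7e3           # ~1.2e-10 m/s^2 expressed in (km/s)^2 / kpc
M_disk, R_d = 5e10, 2.2                               # Msun, kpc (assumed)

r = np.linspace(0.5, 25, 100)                         # galactocentric radius, kpc
M_enc = M_disk * (1 - (1 + r / R_d) * np.exp(-r / R_d))   # enclosed mass of an exponential disk
gN = G * M_enc / r**2                                 # crude (spherical) Newtonian acceleration

# With mu(x) = x/(1+x):  g^2/(g + a0) = gN  =>  g = (gN + sqrt(gN^2 + 4*gN*a0)) / 2
g = 0.5 * (gN + np.sqrt(gN**2 + 4 * gN * a0))
v_mond = np.sqrt(g * r)
print(f"rotation speed at 8 kpc: {np.interp(8.0, r, v_mond):.0f} km/s")
```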
A simple theory of molecular organization in fullerene-containing liquid crystals
NASA Astrophysics Data System (ADS)
Peroukidis, S. D.; Vanakaras, A. G.; Photinos, D. J.
2005-10-01
Systematic efforts to synthesize fullerene-containing liquid crystals have produced a variety of successful model compounds. We present a simple molecular theory, based on the interconverting shape approach [Vanakaras and Photinos, J. Mater. Chem. 15, 2002 (2005)], that relates the self-organization observed in these systems to their molecular structure. The interactions are modeled by dividing each molecule into a number of submolecular blocks to which specific interactions are assigned. Three types of blocks are introduced, corresponding to fullerene units, mesogenic units, and nonmesogenic linkage units. The blocks are constrained to move on a cubic three-dimensional lattice and molecular flexibility is allowed by retaining a number of representative conformations within the block representation of the molecule. Calculations are presented for a variety of molecular architectures including twin mesogenic branch monoadducts of C60, twin dendromesogenic branch monoadducts, and conical (badminton shuttlecock) multiadducts of C60. The dependence of the phase diagrams on the interaction parameters is explored. In spite of its many simplifications and the minimal molecular modeling used (three types of chemically distinct submolecular blocks with only repulsive interactions), the theory accounts remarkably well for the phase behavior of these systems.
Mazoure, Bogdan; Caraus, Iurie; Nadon, Robert; Makarenkov, Vladimir
2018-06-01
Data generated by high-throughput screening (HTS) technologies are prone to spatial bias. Traditionally, bias correction methods used in HTS assume either a simple additive or, more recently, a simple multiplicative spatial bias model. These models do not, however, always provide an accurate correction of measurements in wells located at the intersection of rows and columns affected by spatial bias. The measurements in these wells depend on the nature of interaction between the involved biases. Here, we propose two novel additive and two novel multiplicative spatial bias models accounting for different types of bias interactions. We describe a statistical procedure that allows for detecting and removing different types of additive and multiplicative spatial biases from multiwell plates. We show how this procedure can be applied by analyzing data generated by the four HTS technologies (homogeneous, microorganism, cell-based, and gene expression HTS), the three high-content screening (HCS) technologies (area, intensity, and cell-count HCS), and the only small-molecule microarray technology available in the ChemBank small-molecule screening database. The proposed methods are included in the AssayCorrector program, implemented in R, and available on CRAN.
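As a baseline for the family of additive bias models the paper generalises, the sketch below removes row and column effects from a single synthetic plate with Tukey median polish; AssayCorrector's own detection and correction procedure is more elaborate, so this is only an illustration of the simple additive case.

```python
# Additive-model bias removal on one plate via median polish. Synthetic data.
import numpy as np

def median_polish(plate, n_iter=10):
    """Estimate additive row and column biases: x_ij ~ mu + r_i + c_j + residual."""
    resid = plate.astype(float)
    row_eff = np.zeros(plate.shape[0])
    col_eff = np.zeros(plate.shape[1])
    overall = 0.0
    for _ in range(n_iter):
        rm = np.median(resid, axis=1); row_eff += rm; resid -= rm[:, None]
        cm = np.median(resid, axis=0); col_eff += cm; resid -= cm[None, :]
        m = np.median(row_eff); overall += m; row_eff -= m
        m = np.median(col_eff); overall += m; col_eff -= m
    return overall, row_eff, col_eff, resid

rng = np.random.default_rng(3)
true = rng.normal(100, 5, size=(16, 24))                       # 384-well plate, unbiased signal
biased = true + np.linspace(0, 8, 16)[:, None] + np.linspace(0, 4, 24)[None, :]

mu, r_eff, c_eff, resid = median_polish(biased)
corrected = mu + resid                                         # measurements with row/column bias removed
print(f"residual row-bias range after correction: {np.ptp(corrected.mean(axis=1)):.2f}")
```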
A simple model of solvent-induced symmetry-breaking charge transfer in excited quadrupolar molecules
NASA Astrophysics Data System (ADS)
Ivanov, Anatoly I.; Dereka, Bogdan; Vauthey, Eric
2017-04-01
A simple model has been developed to describe the symmetry breaking of the electronic distribution of AL-D-AR type molecules in the excited state, where D is an electron donor and AL and AR are identical acceptors. The origin of this process is usually associated with the interaction between the molecule and the solvent polarization that stabilizes an asymmetric and dipolar state, with a larger charge transfer on one side than on the other. An additional symmetry-breaking mechanism involving the direct Coulomb interaction of the charges on the acceptors is proposed. At the same time, the electronic coupling between the two degenerate states, which correspond to the transferred charge being localised either on AL or AR, favours a quadrupolar excited state with equal amounts of charge transfer on both sides. Because of these counteracting effects, symmetry breaking is only feasible when the electronic coupling remains below a threshold value, which depends on the solvation energy and the Coulomb repulsion energy between the charges located on AL and AR. This model reproduces the solvent-polarity dependence of the symmetry breaking recently reported using time-resolved infrared spectroscopy.
New simple A₄ neutrino model for nonzero θ₁₃ and large δ_CP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ishimori, Hajime
In a new simple application of the non-Abelian discrete symmetry A₄ to charged-lepton and neutrino mass matrices, we show that for the current experimental central value of sin²2θ₁₃ ≈ 0.1, leptonic CP violation is necessarily large, i.e. |tan δ_CP| > 1.3. We also consider a T₇ model with one parameter taken to be complex, thus allowing for one Dirac CP phase δ_CP and two Majorana CP phases α₁,₂. We find a slight modification to this correlation as a function of δ_CP. For a given set of input values of Δm²₂₁, Δm²₃₂, θ₁₂, and θ₁₃, we obtain sin²2θ₂₃ and m_ee (the effective Majorana neutrino mass in neutrinoless double beta decay) as functions of tan δ_CP. We find that the structure of this model always yields small |tan δ_CP|.