Sample records for simple physically-based model

  1. Simulation of green roof runoff under different substrate depths and vegetation covers by coupling a simple conceptual and a physically based hydrological model.

    PubMed

    Soulis, Konstantinos X; Valiantzas, John D; Ntoulas, Nikolaos; Kargas, George; Nektarios, Panayiotis A

    2017-09-15

    In spite of the well-known green roof benefits, their widespread adoption in the management practices of urban drainage systems requires the use of adequate analytical and modelling tools. In the current study, green roof runoff modelling was accomplished by developing, testing, and jointly using a simple conceptual model and a physically based numerical simulation model utilizing the HYDRUS-1D software. Such an approach combines the advantages of the conceptual model, namely simplicity, low computational requirements, and the ability to be easily integrated into decision support tools, with the capacity of the physically based simulation model to be readily transferred to conditions and locations other than those used for calibrating and validating it. The proposed approach was evaluated with an experimental dataset that included various green roof covers (either succulent plants - Sedum sediforme, or xerophytic plants - Origanum onites, or bare substrate without any vegetation) and two substrate depths (either 8 cm or 16 cm). Both the physically based and the conceptual models matched the observed hydrographs very closely. In general, the conceptual model performed better than the physically based simulation model, but the overall performance of both models was sufficient in most cases, as revealed by the Nash-Sutcliffe efficiency index, which was generally greater than 0.70. Finally, it was showcased how a physically based and a simple conceptual model can be used jointly, so that the simple conceptual model can be applied to a wider set of conditions than the available experimental data in order to support green roof design. Copyright © 2017 Elsevier Ltd. All rights reserved.
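
    Both models in this record are scored with the Nash-Sutcliffe efficiency (NSE). As a quick illustration of that metric, the sketch below computes NSE for a pair of made-up hydrographs; the data values are invented and are not from the study.

    ```python
    import numpy as np

    def nash_sutcliffe(observed, simulated):
        """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model is
        no better than simply predicting the mean of the observations."""
        observed = np.asarray(observed, dtype=float)
        simulated = np.asarray(simulated, dtype=float)
        return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

    # Illustrative (made-up) runoff hydrographs in mm per time step.
    obs = np.array([0.0, 0.4, 1.6, 2.9, 2.1, 1.0, 0.4, 0.1])
    sim = np.array([0.0, 0.3, 1.4, 3.1, 2.3, 0.9, 0.3, 0.1])
    print(f"NSE = {nash_sutcliffe(obs, sim):.3f}")  # > 0.70 would meet the threshold quoted above
    ```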

  2. The Monash University Interactive Simple Climate Model

    NASA Astrophysics Data System (ADS)

    Dommenget, D.

    2013-12-01

    The Monash University Interactive Simple Climate Model is a web-based interface that allows students and the general public to explore the physical simulation of the climate system with a real global climate model. It is based on the Globally Resolved Energy Balance (GREB) model, a climate model published by Dommenget and Floeter [2011] in the peer-reviewed journal Climate Dynamics. The model simulates most of the main physical processes in the climate system in a very simplistic way and therefore allows very fast and simple climate model simulations on an ordinary PC. Despite its simplicity, the model simulates the climate response to external forcings, such as a doubling of the CO2 concentration, very realistically (similar to state-of-the-art climate models). The Monash simple climate model web interface allows users to study the results of more than 2000 different model experiments in an interactive way, to work through a number of tutorials on the interactions of physical processes in the climate system, and to solve some puzzles. By switching physical processes off and on, users can deconstruct the climate and learn how the different processes interact to generate the observed climate, and how they interact to generate the IPCC-predicted climate change for an anthropogenic CO2 increase. The presentation will illustrate how this web-based tool works and what the possibilities for teaching students with it are.
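
    GREB itself is a gridded model; the minimal sketch below only illustrates the underlying energy-balance idea with a zero-dimensional, one-layer grey atmosphere. The solar constant, albedo, and emissivity values are generic textbook numbers, not GREB parameters.

    ```python
    # Zero-dimensional energy-balance sketch (not the GREB model itself):
    # absorbed solar radiation balances outgoing long-wave radiation, reduced
    # by an effective atmospheric emissivity acting as a crude greenhouse.
    SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
    S0 = 1361.0        # solar constant, W m^-2
    ALBEDO = 0.30      # planetary albedo (textbook value)

    def equilibrium_temperature(eps_atm):
        """Surface temperature for a one-layer grey atmosphere with emissivity eps_atm."""
        absorbed = (1.0 - ALBEDO) * S0 / 4.0
        return (absorbed / (SIGMA * (1.0 - eps_atm / 2.0))) ** 0.25

    for eps in (0.0, 0.78, 0.80):
        print(f"eps = {eps:.2f} -> T_surface = {equilibrium_temperature(eps) - 273.15:.1f} C")
    ```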

  3. Application of experiential learning model using simple physical kit to increase attitude toward physics student senior high school in fluid

    NASA Astrophysics Data System (ADS)

    Johari, A. H.; Muslim

    2018-05-01

    An experiential learning model using a simple physics kit has been implemented to obtain a picture of how senior high school students' attitudes toward physics improve in the topic of fluids. This study aims to obtain a description of the increase in senior high school students' attitudes toward physics. The research method used was a quasi-experiment with a non-equivalent pretest-posttest control group design. Two tenth-grade classes were involved in this research, with 28 students in the experimental class and 26 in the control class. The increase in students' attitudes toward physics was measured using an attitude scale consisting of 18 questions. In the experimental class the average was 86.5%, meaning that almost all students showed an increase, while in the control class it was 53.75%, corresponding to about half of the students. This result shows that an experiential learning model using a simple physics kit can improve attitudes toward physics compared to experiential learning without the kit.

  4. A simple physical model for forest fire spread

    Treesearch

    E. Koo; P. Pagni; J. Woycheese; S. Stephens; D. Weise; J. Huff

    2005-01-01

    Based on energy conservation and detailed heat transfer mechanisms, a simple physical model for fire spread is presented for the limit of one-dimensional steady-state contiguous spread of a line fire in a thermally-thin uniform porous fuel bed. The solution for the fire spread rate is found as an eigenvalue from this model with appropriate boundary conditions through a...

  5. Perspective: Sloppiness and emergent theories in physics, biology, and beyond.

    PubMed

    Transtrum, Mark K; Machta, Benjamin B; Brown, Kevin S; Daniels, Bryan C; Myers, Christopher R; Sethna, James P

    2015-07-07

    Large scale models of physical phenomena demand the development of new statistical and computational tools in order to be effective. Many such models are "sloppy," i.e., exhibit behavior controlled by a relatively small number of parameter combinations. We review an information theoretic framework for analyzing sloppy models. This formalism is based on the Fisher information matrix, which is interpreted as a Riemannian metric on a parameterized space of models. Distance in this space is a measure of how distinguishable two models are based on their predictions. Sloppy model manifolds are bounded with a hierarchy of widths and extrinsic curvatures. The manifold boundary approximation can extract the simple, hidden theory from complicated sloppy models. We attribute the success of simple effective models in physics to the same mechanism: they likewise emerge from complicated processes exhibiting a low effective dimensionality. We discuss the ramifications and consequences of sloppy models for biochemistry and science more generally. We suggest that our complex world is understandable for the same fundamental reason: simple theories of macroscopic behavior are hidden inside complicated microscopic processes.
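
    The sloppiness described here can be seen in a toy least-squares model. The sketch below builds the Fisher information matrix J^T J for a sum of two nearly degenerate exponentials (a standard illustrative example, not one of the models analyzed in the paper) and shows that its eigenvalues span several orders of magnitude.

    ```python
    import numpy as np

    # Toy "sum of two exponentials" model, a classic sloppy-model example:
    # y(t; k1, k2) = exp(-k1 t) + exp(-k2 t).  For least-squares fitting with
    # unit noise, the Fisher information matrix is J^T J, with J the Jacobian
    # of the model predictions with respect to the parameters.
    t = np.linspace(0.0, 5.0, 50)
    k1, k2 = 1.0, 1.2   # nearly degenerate rates -> one stiff, one sloppy direction

    # Analytic derivatives of the model with respect to k1 and k2.
    J = np.column_stack([-t * np.exp(-k1 * t), -t * np.exp(-k2 * t)])
    fim = J.T @ J

    eigvals = np.linalg.eigvalsh(fim)
    print("FIM eigenvalues:", eigvals)
    print(f"stiffest / sloppiest eigenvalue ratio: {eigvals[-1] / eigvals[0]:.1e}")
    ```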

  6. Matter Gravitates, but Does Gravity Matter?

    ERIC Educational Resources Information Center

    Groetsch, C. W.

    2011-01-01

    The interplay of physical intuition, computational evidence, and mathematical rigor in a simple trajectory model is explored. A thought experiment based on the model is used to elicit student conjectures on the influence of a physical parameter; a mathematical model suggests a computational investigation of the conjectures, and rigorous analysis…

  7. Analysis of pre-service physics teacher skills designing simple physics experiments based technology

    NASA Astrophysics Data System (ADS)

    Susilawati; Huda, C.; Kurniawan, W.; Masturi; Khoiri, N.

    2018-03-01

    Pre-service physics teachers' skill in designing simple experiment sets is very important for deepening students' conceptual understanding and practicing scientific skills in the laboratory. This study describes the skills of physics students in designing simple technology-based experiments. The experimental design stages include simple tool design and sensor modification. The research method used is descriptive, with a sample of 25 students and 5 variations of simple physics experimental design. Based on the results of interviews and observations, the pre-service physics teachers' skill in designing simple technology-based physics experiments is good. Based on the observation results, their skill in designing simple experiments is good, while their skill in modifying and applying sensors is still lacking. This suggests that pre-service physics teachers still need a lot of practice in designing physics experiments that use sensor modifications. Based on the interview results, students are highly motivated to take an active part in laboratory activities and have a strong curiosity to become skilled at making simple practicum tools for physics experiments.

  8. Physical models of collective cell motility: from cell to tissue

    NASA Astrophysics Data System (ADS)

    Camley, B. A.; Rappel, W.-J.

    2017-03-01

    In this article, we review physics-based models of collective cell motility. We discuss a range of techniques at different scales, ranging from models that represent cells as simple self-propelled particles to phase field models that can represent a cell’s shape and dynamics in great detail. We also extensively review the ways in which cells within a tissue choose their direction, the statistics of cell motion, and some simple examples of how cell-cell signaling can interact with collective cell motility. This review also covers in more detail selected recent works on collective cell motion of small numbers of cells on micropatterns, in wound healing, and the chemotaxis of clusters of cells.

  9. A Computational Efficient Physics Based Methodology for Modeling Ceramic Matrix Composites (Preprint)

    DTIC Science & Technology

    2011-11-01

    elastic range, and with some simple forms of progressing damage. However, a general physics-based methodology to assess the initial and lifetime... damage evolution in the RVE for all possible load histories. Microstructural data on initial configuration and damage progression in CMCs were... the damaged elements will have changed, hence, a progressive damage model. The crack opening for each crack type in each element is stored as a

  10. Simple model to estimate the contribution of atmospheric CO2 to the Earth's greenhouse effect

    NASA Astrophysics Data System (ADS)

    Wilson, Derrek J.; Gea-Banacloche, Julio

    2012-04-01

    We show how the CO2 contribution to the Earth's greenhouse effect can be estimated from relatively simple physical considerations and readily available spectroscopic data. In particular, we present a calculation of the "climate sensitivity" (that is, the increase in temperature caused by a doubling of the concentration of CO2) in the absence of feedbacks. Our treatment highlights the important role played by the frequency dependence of the CO2 absorption spectrum. For pedagogical purposes, we provide two simple models to visualize different ways in which the atmosphere might return infrared radiation back to the Earth. The more physically realistic model, based on the Schwarzschild radiative transfer equations, uses as input an approximate form of the atmosphere's temperature profile, and thus includes implicitly the effect of heat transfer mechanisms other than radiation.
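
    A rough companion to the calculation described here is the widely quoted no-feedback estimate obtained from the logarithmic CO2 forcing fit and the Planck response. The sketch below uses those generic approximations, not the paper's spectroscopic treatment.

    ```python
    import math

    # Back-of-the-envelope no-feedback climate sensitivity (not the paper's own
    # line-by-line calculation).  Uses the commonly quoted logarithmic forcing
    # approximation dF = 5.35 ln(C/C0) W/m^2 and the Planck-only response.
    SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
    T_EFF = 255.0         # effective emission temperature of the Earth, K

    def forcing_co2(c_ratio):
        """Radiative forcing for a CO2 concentration ratio C/C0 (Myhre et al. fit)."""
        return 5.35 * math.log(c_ratio)

    def planck_response():
        """Change in outgoing long-wave radiation per kelvin, d(sigma T^4)/dT."""
        return 4.0 * SIGMA * T_EFF ** 3

    dF = forcing_co2(2.0)                 # doubling of CO2
    dT = dF / planck_response()           # no-feedback warming
    print(f"forcing ~ {dF:.2f} W/m^2, no-feedback sensitivity ~ {dT:.2f} K")
    ```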

  11. Developing model asphalt systems using molecular simulation : final model.

    DOT National Transportation Integrated Search

    2009-09-01

    Computer based molecular simulations have been used towards developing simple mixture compositions whose physical properties resemble those of real asphalts. First, Monte Carlo simulations with the OPLS all-atom force field were used to predict t...

  12. A Physics-Based Engineering Approach to Predict the Cross Section for Advanced SRAMs

    NASA Astrophysics Data System (ADS)

    Li, Lei; Zhou, Wanting; Liu, Huihua

    2012-12-01

    This paper presents a physics-based engineering approach to estimate the heavy ion induced upset cross section for 6T SRAM cells from layout and technology parameters. The new approach calculates the effects of radiation with a junction photocurrent, which is derived based on device physics. The new and simple approach handles the problem using simple SPICE simulations. First, the approach uses a standard SPICE program on a typical PC to predict the SPICE-simulated curve of the collected charge vs. its affected distance from the drain-body junction with the derived junction photocurrent. Then, the SPICE-simulated curve is used to calculate the heavy ion induced upset cross section with a simple model, which considers that the SEU cross section of a SRAM cell is more related to a “radius of influence” around a heavy ion strike than to the physical size of a diffusion node in the layout for advanced SRAMs in nano-scale process technologies. The calculated upset cross section based on this method is in good agreement with the test results for 6T SRAM cells processed using 90 nm process technology.

  13. A simple and fast physics-based analytical method to calculate therapeutic and stray doses from external beam, megavoltage x-ray therapy

    PubMed Central

    Wilson, Lydia J; Newhauser, Wayne D

    2015-01-01

    State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 minutes. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models. PMID:26040833

  14. A simple and fast physics-based analytical method to calculate therapeutic and stray doses from external beam, megavoltage x-ray therapy.

    PubMed

    Jagetic, Lydia J; Newhauser, Wayne D

    2015-06-21

    State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 min. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models.

  15. A physics-based algorithm for real-time simulation of electrosurgery procedures in minimally invasive surgery.

    PubMed

    Lu, Zhonghua; Arikatla, Venkata S; Han, Zhongqing; Allen, Brian F; De, Suvranu

    2014-12-01

    High-frequency electricity is used in the majority of surgical interventions. However, modern computer-based training and simulation systems rely on physically unrealistic models that fail to capture the interplay of the electrical, mechanical and thermal properties of biological tissue. We present a real-time and physically realistic simulation of electrosurgery by modelling the electrical, thermal and mechanical properties as three iteratively solved finite element models. To provide subfinite-element graphical rendering of vaporized tissue, a dual-mesh dynamic triangulation algorithm based on isotherms is proposed. The block compressed row storage (BCRS) structure is shown to be critical in allowing computationally efficient changes in the tissue topology due to vaporization. We have demonstrated our physics-based electrosurgery cutting algorithm through various examples. Our matrix manipulation algorithms designed for topology changes have shown low computational cost. Our simulator offers substantially greater physical fidelity compared to previous simulators that use simple geometry-based heat characterization. Copyright © 2013 John Wiley & Sons, Ltd.

  16. A simple model for the critical mass of a nuclear weapon

    NASA Astrophysics Data System (ADS)

    Reed, B. Cameron

    2018-07-01

    A probability-based model for estimating the critical mass of a fissile isotope is developed. The model requires introducing some concepts from nuclear physics and incorporating some approximations, but gives results correct to about a factor of two for uranium-235 and plutonium-239.
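
    For comparison with the probability-based argument, a common textbook route to the same order-of-magnitude answer is one-group diffusion theory for a bare sphere. The sketch below uses rough fast-spectrum cross sections (illustrative values only, not the paper's inputs) and, as expected for this level of approximation, lands within roughly a factor of two of the accepted value.

    ```python
    import math

    # One-group diffusion estimate of a bare-sphere critical radius and mass.
    # This is a textbook companion to the paper's probability-based argument,
    # not the model from the paper; cross sections are rough fast-spectrum values.
    BARN = 1.0e-24                 # cm^2
    rho = 18.7                     # g/cm^3, uranium metal density
    A = 235.0                      # g/mol
    N_A = 6.022e23
    n = rho * N_A / A              # atoms per cm^3

    sigma_f, sigma_t, nu = 1.24 * BARN, 6.8 * BARN, 2.6   # rough U-235 fast values
    sigma_a = 1.4 * BARN           # absorption (fission + capture), rough

    Sigma_f, Sigma_t, Sigma_a = n * sigma_f, n * sigma_t, n * sigma_a
    D = 1.0 / (3.0 * Sigma_t)                      # diffusion coefficient, cm
    B = math.sqrt((nu * Sigma_f - Sigma_a) / D)    # material buckling, 1/cm
    R_c = math.pi / B                              # bare-sphere critical radius, cm

    mass = rho * 4.0 / 3.0 * math.pi * R_c ** 3 / 1000.0
    # The accepted bare-sphere value is ~50 kg; this crude estimate is high by ~2x.
    print(f"critical radius ~ {R_c:.1f} cm, critical mass ~ {mass:.0f} kg")
    ```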

  17. Analytic expressions for the black-sky and white-sky albedos of the cosine lobe model.

    PubMed

    Goodin, Christopher

    2013-05-01

    The cosine lobe model is a bidirectional reflectance distribution function (BRDF) that is commonly used in computer graphics to model specular reflections. The model is both simple and physically plausible, but physical quantities such as albedo have not been related to the parameterization of the model. In this paper, analytic expressions for calculating the black-sky and white-sky albedos from the cosine lobe BRDF model with integer exponents will be derived, to the author's knowledge for the first time. These expressions for albedo can be used to place constraints on physics-based simulations of radiative transfer such as high-fidelity ray-tracing simulations.
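
    The closed-form albedos derived in the paper can be cross-checked numerically. The sketch below integrates a cosine-lobe BRDF over the outgoing hemisphere to obtain the black-sky (directional-hemispherical) albedo; the lobe exponent and normalization constant are arbitrary illustrative choices and may not match the paper's parameterization.

    ```python
    import numpy as np

    # Numerical black-sky albedo of a cosine-lobe BRDF,
    # f_r(w_i, w_o) = k * max(0, R.w_o)^n, where R is the mirror reflection of w_i.
    # The normalization k and exponent n are illustrative, not the paper's values.
    def black_sky_albedo(theta_i, n=10, k=1.0, n_theta=400, n_phi=400):
        # mirror-reflection direction of the incident ray about the surface normal
        R = np.array([-np.sin(theta_i), 0.0, np.cos(theta_i)])
        theta = np.linspace(0.0, np.pi / 2.0, n_theta)
        phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
        T, P = np.meshgrid(theta, phi, indexing="ij")
        wo = np.stack([np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)], axis=-1)
        cos_alpha = np.clip(wo @ R, 0.0, None)
        integrand = k * cos_alpha ** n * np.cos(T) * np.sin(T)   # BRDF * cos(theta_o) * Jacobian
        return integrand.sum() * (theta[1] - theta[0]) * (phi[1] - phi[0])

    for deg in (0.0, 30.0, 60.0):
        print(f"theta_i = {deg:4.1f} deg -> albedo = {black_sky_albedo(np.radians(deg)):.4f}")
    ```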

  18. Inversion of Attributes and Full Waveforms of Ground Penetrating Radar Data Using PEST

    NASA Astrophysics Data System (ADS)

    Jazayeri, S.; Kruse, S.; Esmaeili, S.

    2015-12-01

    We seek to establish a method, based on freely available software, for inverting GPR signals for the underlying physical properties (electrical permittivity, magnetic permeability, target geometries). Such a procedure should be useful for classroom instruction and for analyzing surface GPR surveys over simple targets. We explore the applicability of the PEST parameter estimation software package for GPR inversion (www.pesthomepage.org). PEST is designed to invert data sets with large numbers of parameters, and offers a variety of inversion methods. Although primarily used in hydrogeology, the code has been applied to a wide variety of physical problems. The PEST code requires forward model input; the forward modelling of the GPR signal is done with the GPRMax package (www.gprmax.com). The problem of extracting the physical characteristics of a subsurface anomaly from GPR data is highly nonlinear. For synthetic models of simple targets in homogeneous backgrounds, we find PEST's nonlinear Gauss-Marquardt-Levenberg algorithm is preferred. This method requires an initial model, for which the weighted differences between model-generated data and those of the "true" synthetic model (the objective function) are calculated. To do this, the Jacobian matrix (the derivatives of the observed data with respect to the model parameters) is computed using a finite-difference method. Next, an iterative process of building new models by updating the initial values is started in order to minimize the objective function. Another measure of the goodness of the final acceptable model is the correlation coefficient, which is calculated based on the method of Cooley and Naff. An accepted final model satisfies both of these conditions. Models to date show that physical properties of simple isolated targets against homogeneous backgrounds can be obtained from multiple traces from common-offset surface surveys. Ongoing work examines the inversion capabilities with more complex target geometries and heterogeneous soils.
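
    The update PEST applies here is essentially a damped Gauss-Newton (Levenberg-Marquardt) step built from a finite-difference Jacobian. The sketch below runs that loop on a toy decaying-cosine forward model; the model, parameter values, and noise level are invented, and a real GPR inversion would replace forward() with a call to GPRMax.

    ```python
    import numpy as np

    # Minimal Levenberg-Marquardt loop with a finite-difference Jacobian, the
    # kind of update PEST performs.  The forward model is a toy decaying cosine.
    def forward(p, t):
        amp, decay = p
        return amp * np.exp(-decay * t) * np.cos(2.0 * np.pi * t)

    def jacobian_fd(p, t, h=1e-6):
        J = np.zeros((t.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p); dp[j] = h
            J[:, j] = (forward(p + dp, t) - forward(p - dp, t)) / (2.0 * h)
        return J

    t = np.linspace(0.0, 3.0, 60)
    p_true = np.array([2.0, 0.8])
    rng = np.random.default_rng(0)
    data = forward(p_true, t) + 0.02 * rng.standard_normal(t.size)

    p, lam = np.array([1.0, 0.3]), 1e-2          # initial model and damping factor
    for _ in range(30):
        r = data - forward(p, t)                  # residuals (objective = r.r)
        J = jacobian_fd(p, t)
        step = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
        if np.sum((data - forward(p + step, t)) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5          # accept step, relax damping
        else:
            lam *= 10.0                           # reject step, increase damping
    print("estimated parameters:", p)
    ```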

  19. Experimental investigation and numerical simulation of 3He gas diffusion in simple geometries: implications for analytical models of 3He MR lung morphometry.

    PubMed

    Parra-Robles, J; Ajraoui, S; Deppe, M H; Parnell, S R; Wild, J M

    2010-06-01

    Models of lung acinar geometry have been proposed to analytically describe the diffusion of (3)He in the lung (as measured with pulsed gradient spin echo (PGSE) methods) as a possible means of characterizing lung microstructure from measurement of the (3)He ADC. In this work, major limitations in these analytical models are highlighted in simple diffusion weighted experiments with (3)He in cylindrical models of known geometry. The findings are substantiated with numerical simulations based on the same geometry using finite difference representation of the Bloch-Torrey equation. The validity of the existing "cylinder model" is discussed in terms of the physical diffusion regimes experienced and the basic reliance of the cylinder model and other ADC-based approaches on a Gaussian diffusion behaviour is highlighted. The results presented here demonstrate that physical assumptions of the cylinder model are not valid for large diffusion gradient strengths (above approximately 15 mT/m), which are commonly used for (3)He ADC measurements in human lungs. (c) 2010 Elsevier Inc. All rights reserved.

  20. Examination of multi-model ensemble seasonal prediction methods using a simple climate system

    NASA Astrophysics Data System (ADS)

    Kang, In-Sik; Yoo, Jin Ho

    2006-02-01

    A simple climate model was designed as a proxy for the real climate system, and a number of prediction models were generated by slightly perturbing the physical parameters of the simple model. A set of long (240 years) historical hindcast predictions were performed with various prediction models, which are used to examine various issues of multi-model ensemble seasonal prediction, such as the best ways of blending multi-models and the selection of models. Based on these results, we suggest a feasible way of maximizing the benefit of using multi models in seasonal prediction. In particular, three types of multi-model ensemble prediction systems, i.e., the simple composite, superensemble, and the composite after statistically correcting individual predictions (corrected composite), are examined and compared to each other. The superensemble has more of an overfitting problem than the others, especially for the case of small training samples and/or weak external forcing, and the corrected composite produces the best prediction skill among the multi-model systems.
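
    The difference between the simple composite and the corrected composite can be illustrated on synthetic data: each model is first bias- and amplitude-corrected against a training period, then averaged. The correction below is a plain least-squares regression, an assumed stand-in for whatever correction the study actually applied.

    ```python
    import numpy as np

    # Synthetic illustration of two of the multi-model combinations discussed:
    # the simple composite (plain mean of raw predictions) and the corrected
    # composite (each model regressed against training data before averaging).
    rng = np.random.default_rng(1)
    n_train, n_test, n_models = 160, 80, 5

    truth = rng.standard_normal(n_train + n_test)
    # each "model" sees the truth with its own bias, amplitude error, and noise
    preds = np.stack([0.5 * m + (0.6 + 0.2 * m) * truth
                      + 0.8 * rng.standard_normal(truth.size)
                      for m in range(n_models)])

    train, test = slice(0, n_train), slice(n_train, None)

    simple_composite = preds[:, test].mean(axis=0)

    corrected = []
    for k in range(n_models):
        # least-squares correction truth ~ a * prediction + b, fitted on training period
        a, b = np.polyfit(preds[k, train], truth[train], 1)
        corrected.append(a * preds[k, test] + b)
    corrected_composite = np.mean(corrected, axis=0)

    def rmse(x): return np.sqrt(np.mean((x - truth[test]) ** 2))
    print(f"simple composite RMSE:    {rmse(simple_composite):.3f}")
    print(f"corrected composite RMSE: {rmse(corrected_composite):.3f}")
    ```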

  1. Multicomponent ensemble models to forecast induced seismicity

    NASA Astrophysics Data System (ADS)

    Király-Proag, E.; Gischig, V.; Zechar, J. D.; Wiemer, S.

    2018-01-01

    In recent years, human-induced seismicity has become a more and more relevant topic due to its economic and social implications. Several models and approaches have been developed to explain underlying physical processes or forecast induced seismicity. They range from simple statistical models to coupled numerical models incorporating complex physics. We advocate the need for forecast testing as currently the best method for ascertaining whether or not models are capable of reasonably accounting for the key governing physical processes. Moreover, operational forecast models are of great interest for supporting on-site decision-making in projects entailing induced earthquakes. We previously introduced a standardized framework following the guidelines of the Collaboratory for the Study of Earthquake Predictability, the Induced Seismicity Test Bench, to test, validate, and rank induced seismicity models. In this study, we describe how to construct multicomponent ensemble models based on Bayesian weightings that deliver more accurate forecasts than individual models in the case of the Basel 2006 and Soultz-sous-Forêts 2004 enhanced geothermal stimulation projects. For this, we examine five calibrated variants of two significantly different model groups: (1) Shapiro and Smoothed Seismicity, based on the seismogenic index, a simple modified Omori-law-type seismicity decay, and temporally weighted smoothed seismicity; (2) Hydraulics and Seismicity, based on numerically modelled pore pressure evolution that triggers seismicity using the Mohr-Coulomb failure criterion. We also demonstrate how the individual and ensemble models would perform as part of an operational Adaptive Traffic Light System. Investigating seismicity forecasts based on a range of potential injection scenarios, we use forecast periods of different durations to compute the occurrence probabilities of seismic events M ≥ 3. We show that in the case of the Basel 2006 geothermal stimulation the models forecast hazardous levels of seismicity days before the occurrence of felt events.
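
    One simple way to realize Bayesian model weighting is to make each model's weight proportional to its likelihood on past observations and combine the forecast rates accordingly. The sketch below illustrates that idea; the numbers are invented, and the study's actual weighting scheme may differ in detail.

    ```python
    import numpy as np

    # Sketch of Bayesian-style ensemble weighting: each model's weight is
    # proportional to its likelihood on past observations, and the ensemble
    # forecast is the weighted mean of the individual forecast rates.
    # All numbers below are invented for illustration.
    log_likelihoods = np.array([-120.4, -118.9, -119.6, -125.0, -121.3])

    # subtract the maximum before exponentiating for numerical stability
    w = np.exp(log_likelihoods - log_likelihoods.max())
    weights = w / w.sum()

    # forecast rates of M >= 3 events per model over some forecast window
    rates = np.array([0.8, 1.5, 1.1, 0.3, 0.9])
    ensemble_rate = np.sum(weights * rates)

    # Poisson probability of at least one M >= 3 event in the window
    p_event = 1.0 - np.exp(-ensemble_rate)
    print("weights:", np.round(weights, 3))
    print(f"ensemble rate = {ensemble_rate:.2f}, P(at least one M>=3) = {p_event:.2f}")
    ```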

  2. What Can We Learn from a Simple Physics-Based Earthquake Simulator?

    NASA Astrophysics Data System (ADS)

    Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele

    2018-03-01

    Physics-based earthquake simulators are becoming a popular tool to investigate the earthquake occurrence process. So far, the development of earthquake simulators has commonly been led by the approach "the more physics, the better". However, this approach may hamper the comprehension of the outcomes of the simulator; in fact, within complex models, it may be difficult to understand which physical parameters are the most relevant to the features of the seismic catalog in which we are interested. For this reason, here, we take an opposite approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple simulator may be more informative than a complex one for some specific scientific objectives, because it is more understandable. Our earthquake simulator has three main components: the first one is a realistic tectonic setting, i.e., a fault data set of California; the second is the application of quantitative laws for earthquake generation on each single fault, and the last is the fault interaction modeling through the Coulomb Failure Function. The analysis of this simple simulator shows that: (1) the short-term clustering can be reproduced by a set of faults with an almost periodic behavior, which interact according to a Coulomb failure function model; (2) a long-term behavior showing supercycles of the seismic activity exists only in a markedly deterministic framework, and quickly disappears when a small degree of stochasticity is introduced in the recurrence of earthquakes on a fault; (3) faults that are strongly coupled in terms of the Coulomb failure function model are synchronized in time only in a markedly deterministic framework and, as before, such synchronization disappears when a small degree of stochasticity is introduced in the recurrence of earthquakes on a fault. Overall, the results show that even in a simple and perfectly known earthquake occurrence world, introducing a small degree of stochasticity may blur most of the deterministic time features, such as the long-term trend and the synchronization among nearby coupled faults.

  3. New approach in the quantum statistical parton distribution

    NASA Astrophysics Data System (ADS)

    Sohaily, Sozha; Vaziri (Khamedi), Mohammad

    2017-12-01

    An attempt to find simple parton distribution functions (PDFs) based on a quantum statistical approach is presented. The PDFs described by the statistical model have very interesting physical properties which help in understanding the structure of partons. The longitudinal portions of the distribution functions are given by applying the maximum entropy principle. An interesting and simple approach to determining the statistical variables exactly, without fitting or fixing parameters, is surveyed. Analytic expressions for the x-dependent PDFs are obtained in the whole x region [0, 1], and the computed distributions are consistent with the experimental observations. The agreement with experimental data provides a robust confirmation of the simple statistical model presented.

  4. Investigating the Effect of Damage Progression Model Choice on Prognostics Performance

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Roychoudhury, Indranil; Narasimhan, Sriram; Saha, Sankalita; Saha, Bhaskar; Goebel, Kai

    2011-01-01

    The success of model-based approaches to systems health management depends largely on the quality of the underlying models. In model-based prognostics, it is especially the quality of the damage progression models, i.e., the models describing how damage evolves as the system operates, that determines the accuracy and precision of remaining useful life predictions. Several common forms of these models are generally assumed in the literature, but are often not supported by physical evidence or physics-based analysis. In this paper, using a centrifugal pump as a case study, we develop different damage progression models. In simulation, we investigate how model changes influence prognostics performance. Results demonstrate that, in some cases, simple damage progression models are sufficient. But, in general, the results show a clear need for damage progression models that are accurate over long time horizons under varied loading conditions.

  5. A new physically-based model considered antecedent rainfall for shallow landslide

    NASA Astrophysics Data System (ADS)

    Luo, Yu; He, Siming

    2017-04-01

    Rainfall is the most significant factor causing landslides, especially shallow landslides. In previous studies, rainfall intensity and duration have been included in physically based models for determining the occurrence of rainfall-induced landslides, but antecedent rainfall has seldom been considered. In this study, antecedent rainfall is taken into account to derive a new physically based model for predicting shallow-landslide-prone areas at the basin scale. Rosso's equation for seepage flow, which considers antecedent rainfall, is used to construct the hillslope hydrology model, and infinite slope stability theory is then used to construct the slope stability model. Finally, the model is applied to the Baisha river basin of Chengdu, Sichuan, China, and the results are compared with those obtained without considering antecedent rainfall. The results show that the model is simple but has the capability to include antecedent rainfall in the triggering mechanism of shallow landslides. Antecedent rainfall can have an obvious effect on shallow landslides, so its influence cannot be ignored in shallow landslide hazard assessment.
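
    The slope-stability component rests on infinite-slope theory. The sketch below evaluates the standard infinite-slope factor of safety as pore pressure rises (as it would after antecedent rainfall); the soil parameters are illustrative, not values calibrated for the Baisha river basin.

    ```python
    import math

    # Standard infinite-slope factor of safety with pore-water pressure u at the
    # slip surface; parameter values are illustrative, not the Baisha basin values.
    def factor_of_safety(c, phi_deg, gamma, z, beta_deg, u):
        """c: cohesion [kPa], phi: friction angle, gamma: unit weight [kN/m^3],
        z: slip depth [m], beta: slope angle, u: pore pressure [kPa]."""
        phi, beta = math.radians(phi_deg), math.radians(beta_deg)
        resisting = c + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
        driving = gamma * z * math.sin(beta) * math.cos(beta)
        return resisting / driving

    # rising pore pressure (e.g. after antecedent rainfall) drives FS toward 1 and below
    for u in (0.0, 5.0, 10.0, 15.0):
        fs = factor_of_safety(c=5.0, phi_deg=32.0, gamma=19.0, z=1.5, beta_deg=35.0, u=u)
        print(f"u = {u:4.1f} kPa -> FS = {fs:.2f}")
    ```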

  6. Introducing Multisensor Satellite Radiance-Based Evaluation for Regional Earth System Modeling

    NASA Technical Reports Server (NTRS)

    Matsui, T.; Santanello, J.; Shi, J. J.; Tao, W.-K.; Wu, D.; Peters-Lidard, C.; Kemp, E.; Chin, M.; Starr, D.; Sekiguchi, M.

    2014-01-01

    Earth System modeling has become more complex, and its evaluation using satellite data has also become more difficult due to model and data diversity. Therefore, the fundamental methodology of using satellite direct measurements with instrumental simulators should be addressed especially for modeling community members lacking a solid background of radiative transfer and scattering theory. This manuscript introduces principles of multisatellite, multisensor radiance-based evaluation methods for a fully coupled regional Earth System model: NASA-Unified Weather Research and Forecasting (NU-WRF) model. We use a NU-WRF case study simulation over West Africa as an example of evaluating aerosol-cloud-precipitation-land processes with various satellite observations. NU-WRF-simulated geophysical parameters are converted to the satellite-observable raw radiance and backscatter under nearly consistent physics assumptions via the multisensor satellite simulator, the Goddard Satellite Data Simulator Unit. We present varied examples of simple yet robust methods that characterize forecast errors and model physics biases through the spatial and statistical interpretation of various satellite raw signals: infrared brightness temperature (Tb) for surface skin temperature and cloud top temperature, microwave Tb for precipitation ice and surface flooding, and radar and lidar backscatter for aerosol-cloud profiling simultaneously. Because raw satellite signals integrate many sources of geophysical information, we demonstrate user-defined thresholds and a simple statistical process to facilitate evaluations, including the infrared-microwave-based cloud types and lidar/radar-based profile classifications.

  7. Family practitioners' diagnostic decision-making processes regarding patients with respiratory tract infections: an observational study.

    PubMed

    Fischer, Thomas; Fischer, Susanne; Himmel, Wolfgang; Kochen, Michael M; Hummers-Pradier, Eva

    2008-01-01

    The influence of patient characteristics on family practitioners' (FPs') diagnostic decision making has mainly been investigated using indirect methods such as vignettes or questionnaires. Direct observation, borrowed from social and cultural anthropology, may be an alternative method for describing FPs' real-life behavior and may help in gaining insight into how FPs diagnose respiratory tract infections, which are frequent in primary care. To clarify FPs' diagnostic processes when treating patients suffering from symptoms of respiratory tract infection. This direct observation study was performed in 30 family practices using a checklist for patient complaints, history taking, physical examination, and diagnoses. The influence of patients' symptoms and complaints on the FPs' physical examination and diagnosis was calculated by logistic regression analyses. Dummy variables based on combinations of symptoms and complaints were constructed and tested against saturated (full) and backward regression models. In total, 273 patients (median age 37 years, 51% women) were included. The median number of symptoms described was 4 per patient, and most information was provided at the patients' own initiative. Multiple logistic regression analysis showed a strong association between patients' complaints and the physical examination. Frequent diagnoses were upper respiratory tract infection (URTI)/common cold (43%), bronchitis (26%), sinusitis (12%), and tonsillitis (11%). There were no significant statistical differences between "simple heuristic" models and saturated regression models in the diagnoses of bronchitis, sinusitis, and tonsillitis, indicating that simple heuristics are probably used by the FPs, whereas "URTI/common cold" was better explained by the full model. FPs tended to make their diagnosis based on a few patient symptoms and a limited physical examination. Simple heuristic models were almost as powerful in explaining most diagnoses as saturated models. Direct observation allowed for the study of decision making under real conditions, yielding both quantitative data and "qualitative" information about the FPs' performance. It is important for investigators to be aware of the specific disadvantages of the method (e.g., a possible observer effect).

  8. Learning from physics-based earthquake simulators: a minimal approach

    NASA Astrophysics Data System (ADS)

    Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele

    2017-04-01

    Physics-based earthquake simulators aim to generate synthetic seismic catalogs of arbitrary length, accounting for fault interaction, elastic rebound, realistic fault networks, and some simple earthquake nucleation process such as rate-and-state friction. Through comparison of synthetic and real catalogs, seismologists can gain insights into the earthquake occurrence process. Moreover, earthquake simulators can be used to infer some aspects of the statistical behavior of earthquakes within the simulated region by analyzing timescales not accessible through observations. The development of earthquake simulators is commonly led by the approach "the more physics, the better", pushing seismologists toward ever more earth-like simulators. However, despite its immediate attractiveness, we argue that this kind of approach makes it more and more difficult to understand which physical parameters are really relevant to describing the features of the seismic catalog in which we are interested. For this reason, here we take an opposite, minimal approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple model may be more informative than a complex one for some specific scientific objectives, because it is more understandable. The model has three main components: the first is a realistic tectonic setting, i.e., a fault dataset of California; the other two are quantitative laws for earthquake generation on each single fault and the Coulomb Failure Function for modeling fault interaction. The final goal of this work is twofold. On one hand, we aim to identify the minimum set of physical ingredients that can satisfactorily reproduce the features of the real seismic catalog, such as short-term seismic clusters, and to investigate the hypothetical long-term behavior and fault synchronization. On the other hand, we want to investigate the limits of predictability of the model itself.

  9. Nondeducibility-Based Analysis of Cyber-Physical Systems

    NASA Astrophysics Data System (ADS)

    Gamage, Thoshitha; McMillin, Bruce

    Controlling information flow in a cyber-physical system (CPS) is challenging because cyber domain decisions and actions manifest themselves as visible changes in the physical domain. This paper presents a nondeducibility-based observability analysis for CPSs. In many CPSs, the capacity of a low-level (LL) observer to deduce high-level (HL) actions ranges from limited to none. However, a collaborative set of observers strategically located in a network may be able to deduce all the HL actions. This paper models a distributed power electronics control device network using a simple DC circuit in order to understand the effect of multiple observers in a CPS. The analysis reveals that the number of observers required to deduce all the HL actions in a system increases linearly with the number of configurable units. A simple definition of nondeducibility based on the uniqueness of low-level projections is also presented. This definition is used to show that a system with two security domain levels could be considered “nondeducibility secure” if no unique LL projections exist.

  10. Manipulators with flexible links: A simple model and experiments

    NASA Technical Reports Server (NTRS)

    Shimoyama, Isao; Oppenheim, Irving J.

    1989-01-01

    A simple dynamic model proposed for flexible links is briefly reviewed and experimental control results are presented for different flexible systems. A simple dynamic model is useful for rapid prototyping of manipulators and their control systems, for possible application to manipulator design decisions, and for real time computation as might be applied in model based or feedforward control. Such a model is proposed, with the further advantage that clear physical arguments and explanations can be associated with its simplifying features and with its resulting analytical properties. The model is mathematically equivalent to Rayleigh's method. Taking the example of planar bending, the approach originates in its choice of two amplitude variables, typically chosen as the link end rotations referenced to the chord (or the tangent) motion of the link. This particular choice is key in establishing the advantageous features of the model, and it was used to support the series of experiments reported.

  11. Monostatic Radar Cross Section Estimation of Missile Shaped Object Using Physical Optics Method

    NASA Astrophysics Data System (ADS)

    Sasi Bhushana Rao, G.; Nambari, Swathi; Kota, Srikanth; Ranga Rao, K. S.

    2017-08-01

    Stealth technology manages many target signatures; most radar systems use the radar cross section (RCS) to discriminate targets and classify them with regard to stealth. In wartime, a target's RCS has to be very small to make the target invisible to enemy radar. In this study, the radar cross section of perfectly conducting objects such as a cylinder, a truncated cone (frustum), and a circular flat plate is estimated with respect to parameters such as size, frequency, and aspect angle. Because of the difficulty of predicting RCS exactly, approximate methods become the alternative. The majority of approximate methods are valid in the optical region, which has its own strengths and weaknesses. Therefore, the analysis given in this study is purely based on far-field monostatic RCS measurements in the optical region. Computation is done using the Physical Optics (PO) method to determine the RCS of simple models. In this study, not only the RCS of simple models but also that of missile-shaped and rocket-shaped models, obtained by cascading the simple objects, has been computed with backscatter using Matlab simulation. Rectangular plots of RCS in dBsm versus aspect angle are obtained for the simple and missile-shaped objects using Matlab simulation. The treatment of RCS in this study is based on narrowband signals.
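
    For a flat rectangular plate, physical optics gives a well-known closed-form monostatic RCS in the principal plane. The sketch below evaluates it versus aspect angle for an illustrative plate size and frequency; these are not the paper's missile or rocket geometries.

    ```python
    import numpy as np

    # Physical-optics monostatic RCS of a rectangular flat plate (a x b), with
    # incidence in the principal plane containing side a.  Plate dimensions and
    # frequency are illustrative, not the paper's geometries.
    def rcs_plate(theta, a=0.3, b=0.2, freq=10e9):
        lam = 3e8 / freq
        k = 2.0 * np.pi / lam
        lobe = np.sinc(k * a * np.sin(theta) / np.pi)   # np.sinc(x) = sin(pi x)/(pi x)
        return 4.0 * np.pi * (a * b) ** 2 / lam ** 2 * (np.cos(theta) * lobe) ** 2

    theta = np.radians(np.arange(0.0, 61.0, 10.0))
    for t, s in zip(theta, rcs_plate(theta)):
        print(f"aspect {np.degrees(t):4.0f} deg -> RCS = {10 * np.log10(s):6.1f} dBsm")
    ```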

  12. A simple quantum mechanical treatment of scattering in nanoscale transistors

    NASA Astrophysics Data System (ADS)

    Venugopal, R.; Paulsson, M.; Goasguen, S.; Datta, S.; Lundstrom, M. S.

    2003-05-01

    We present a computationally efficient, two-dimensional quantum mechanical simulation scheme for modeling dissipative electron transport in thin body, fully depleted, n-channel, silicon-on-insulator transistors. The simulation scheme, which solves the nonequilibrium Green's function equations self consistently with Poisson's equation, treats the effect of scattering using a simple approximation inspired by the "Büttiker probes," often used in mesoscopic physics. It is based on an expansion of the active device Hamiltonian in decoupled mode space. Simulation results are used to highlight quantum effects, discuss the physics of scattering and to relate the quantum mechanical quantities used in our model to experimentally measured low field mobilities. Additionally, quantum boundary conditions are rigorously derived and the effects of strong off-equilibrium transport are examined. This paper shows that our approximate treatment of scattering, is an efficient and useful simulation method for modeling electron transport in nanoscale, silicon-on-insulator transistors.

  13. Cloud fluid models of gas dynamics and star formation in galaxies

    NASA Technical Reports Server (NTRS)

    Struck-Marcell, Curtis; Scalo, John M.; Appleton, P. N.

    1987-01-01

    The large dynamic range of star formation in galaxies, and the apparently complex environmental influences involved in triggering or suppressing star formation, challenge our understanding. The key to this understanding may be the detailed study of simple physical models for the dominant nonlinear interactions in interstellar cloud systems. One such model, a generalized Oort-model cloud fluid, is described, and two simple applications of it are explored. The first of these is the relaxation of an isolated volume of cloud fluid following a disturbance. Though very idealized, this closed-box study suggests a physical mechanism for starbursts, which is based on the approximate commensurability of massive cloud lifetimes and cloud collisional growth times. The second application is the modeling of colliding ring galaxies. In this case, the driving processes operating on a dynamical timescale interact with the local cloud processes operating on the above timescale. The result is a variety of interesting nonequilibrium behaviors, including spatial variations of star formation that do not depend monotonically on gas density.

  14. Simple Harmonics Motion experiment based on LabVIEW interface for Arduino

    NASA Astrophysics Data System (ADS)

    Tong-on, Anusorn; Saphet, Parinya; Thepnurat, Meechai

    2017-09-01

    In this work, we developed an affordable, modern, and innovative physics lab apparatus. An ultrasonic sensor is used to measure the position of a mass attached to a spring as a function of time. The data acquisition system and control device were developed based on the LabVIEW interface for Arduino UNO R3. The experiment was designed to explain wave propagation, which is modeled by simple harmonic motion. The simple harmonic system (mass and spring) was observed, and the motion was analyzed by curve fitting to the wave equation in Mathematica. We found that the spring constants provided by Hooke’s law and by the wave-equation fit are 9.9402 and 9.1706 N/m, respectively.
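
    The spring constant follows from fitting the recorded position-time data to x(t) = A cos(wt + phi) + x0 and using k = m w^2. The sketch below does the fit with SciPy on synthetic data; the mass and the data are invented, and in the actual experiment the positions would come from the ultrasonic sensor (with the fit done in Mathematica).

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Fit position-vs-time data from a mass on a spring to the SHM solution
    # x(t) = A cos(w t + phi) + x0, then recover k = m w^2.  The data here are
    # synthetic stand-ins for the ultrasonic-sensor measurements.
    def shm(t, amp, omega, phi, x0):
        return amp * np.cos(omega * t + phi) + x0

    m = 0.20                                   # attached mass in kg (illustrative)
    t = np.linspace(0.0, 5.0, 250)
    rng = np.random.default_rng(2)
    x = shm(t, 0.05, np.sqrt(9.5 / m), 0.3, 0.40) + 0.002 * rng.standard_normal(t.size)

    popt, _ = curve_fit(shm, t, x, p0=[0.04, 6.0, 0.0, 0.4])
    k = m * popt[1] ** 2
    print(f"fitted omega = {popt[1]:.3f} rad/s -> spring constant k = {k:.2f} N/m")
    ```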

  15. A charge-based model of Junction Barrier Schottky rectifiers

    NASA Astrophysics Data System (ADS)

    Latorre-Rey, Alvaro D.; Mudholkar, Mihir; Quddus, Mohammed T.; Salih, Ali

    2018-06-01

    A new charge-based model of the electric field distribution for Junction Barrier Schottky (JBS) diodes is presented, based on the description of the charge-sharing effect between the vertical Schottky junction and the lateral pn-junctions that constitute the active cell of the device. In our model, the inherently 2-D problem is transformed into a simple but accurate 1-D problem which has a closed analytical solution that captures the reshaping and reduction of the electric field profile responsible for the improved electrical performance of these devices, while preserving physically meaningful expressions that depend on relevant device parameters. The validation of the model is performed by comparing calculated electric field profiles with drift-diffusion simulations of a JBS device, showing good agreement. Even though other fully 2-D models already available provide higher accuracy, they lack physical insight, making the proposed model a useful tool for device design.

  16. Physics Based Modeling and Rendering of Vegetation in the Thermal Infrared

    NASA Technical Reports Server (NTRS)

    Smith, J. A.; Ballard, J. R., Jr.

    1999-01-01

    We outline a procedure for rendering physically-based thermal infrared images of simple vegetation scenes. Our approach incorporates the biophysical processes that affect the temperature distribution of the elements within a scene. Computer graphics plays a key role in two respects. First, in computing the distribution of scene shaded and sunlit facets and, second, in the final image rendering once the temperatures of all the elements in the scene have been computed. We illustrate our approach for a simple corn scene where the three-dimensional geometry is constructed based on measured morphological attributes of the row crop. Statistical methods are used to construct a representation of the scene in agreement with the measured characteristics. Our results are quite good. The rendered images exhibit realistic behavior in directional properties as a function of view and sun angle. The root-mean-square error in measured versus predicted brightness temperatures for the scene was 2.1 deg C.

  17. An unexpected way forward: towards a more accurate and rigorous protein-protein binding affinity scoring function by eliminating terms from an already simple scoring function.

    PubMed

    Swanson, Jon; Audie, Joseph

    2018-01-01

    A fundamental and unsolved problem in biophysical chemistry is the development of a computationally simple, physically intuitive, and generally applicable method for accurately predicting and physically explaining protein-protein binding affinities from protein-protein interaction (PPI) complex coordinates. Here, we propose that the simplification of a previously described six-term PPI scoring function to a four term function results in a simple expression of all physically and statistically meaningful terms that can be used to accurately predict and explain binding affinities for a well-defined subset of PPIs that are characterized by (1) crystallographic coordinates, (2) rigid-body association, (3) normal interface size, and hydrophobicity and hydrophilicity, and (4) high quality experimental binding affinity measurements. We further propose that the four-term scoring function could be regarded as a core expression for future development into a more general PPI scoring function. Our work has clear implications for PPI modeling and structure-based drug design.

  18. Physics-Based Modeling of Electric Operation, Heat Transfer, and Scrap Melting in an AC Electric Arc Furnace

    NASA Astrophysics Data System (ADS)

    Opitz, Florian; Treffinger, Peter

    2016-04-01

    Electric arc furnaces (EAF) are complex industrial plants whose actual behavior depends upon numerous factors. Due to its energy intensive operation, the EAF process has always been subject to optimization efforts. For these reasons, several models have been proposed in the literature to analyze and predict different modes of operation. Most of these models have focused on the processes inside the vessel itself. The present paper introduces a dynamic, physics-based model of a complete EAF plant which consists of the four subsystems vessel, electric system, electrode regulation, and off-gas system. Furthermore, the solid phase is not treated as homogeneous; instead, a simple spatial discretization is employed. Hence it is possible to simulate the energy input by electric arcs and fossil fuel burners depending on the state of the melting progress. The model is implemented in the object-oriented, equation-based language Modelica. The simulation results are compared to literature data.

  19. SPARK: A Framework for Multi-Scale Agent-Based Biomedical Modeling.

    PubMed

    Solovyev, Alexey; Mikheev, Maxim; Zhou, Leming; Dutta-Moscato, Joyeeta; Ziraldo, Cordelia; An, Gary; Vodovotz, Yoram; Mi, Qi

    2010-01-01

    Multi-scale modeling of complex biological systems remains a central challenge in the systems biology community. A method of dynamic knowledge representation known as agent-based modeling enables the study of higher level behavior emerging from discrete events performed by individual components. With the advancement of computer technology, agent-based modeling has emerged as an innovative technique to model the complexities of systems biology. In this work, the authors describe SPARK (Simple Platform for Agent-based Representation of Knowledge), a framework for agent-based modeling specifically designed for systems-level biomedical model development. SPARK is a stand-alone application written in Java. It provides a user-friendly interface, and a simple programming language for developing Agent-Based Models (ABMs). SPARK has the following features specialized for modeling biomedical systems: 1) continuous space that can simulate real physical space; 2) flexible agent size and shape that can represent the relative proportions of various cell types; 3) multiple spaces that can concurrently simulate and visualize multiple scales in biomedical models; 4) a convenient graphical user interface. Existing ABMs of diabetic foot ulcers and acute inflammation were implemented in SPARK. Models of identical complexity were run in both NetLogo and SPARK; the SPARK-based models ran two to three times faster.

  20. Nonequilibrium Langevin dynamics: A demonstration study of shear flow fluctuations in a simple fluid

    NASA Astrophysics Data System (ADS)

    Belousov, Roman; Cohen, E. G. D.; Rondoni, Lamberto

    2017-08-01

    The present paper is based on a recent success of the second-order stochastic fluctuation theory in describing time autocorrelations of equilibrium and nonequilibrium physical systems. In particular, it was shown to yield values of the related deterministic parameters of the Langevin equation for a Couette flow in a microscopic molecular dynamics model of a simple fluid. In this paper we find all the remaining constants of the stochastic dynamics, which then is simulated numerically and compared directly with the original physical system. By using these data, we study in detail the accuracy and precision of a second-order Langevin model for nonequilibrium physical systems theoretically and computationally. We find an intriguing relation between an applied external force and cumulants of the resulting flow fluctuations. This is characterized by a linear dependence of an athermal cumulant ratio, an apposite quantity introduced here. In addition, we discuss how the order of a given Langevin dynamics can be raised systematically by introducing colored noise.
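
    A minimal numerical counterpart to a second-order Langevin description is an underdamped Langevin equation integrated with an Euler-Maruyama scheme, as sketched below; the coefficients are arbitrary illustrative values rather than the ones fitted to the molecular-dynamics data.

    ```python
    import numpy as np

    # Euler-Maruyama integration of a generic second-order (underdamped) Langevin
    # equation, x'' = -w0^2 x - gamma x' + sqrt(2 D) xi(t).  Coefficients are
    # arbitrary illustrative values, not those calibrated in the paper.
    rng = np.random.default_rng(3)
    w0, gamma, D = 2.0, 0.5, 0.2
    dt, n_steps = 1e-3, 100_000

    x, v = 0.0, 0.0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        noise = np.sqrt(2.0 * D * dt) * rng.standard_normal()
        v += (-w0 ** 2 * x - gamma * v) * dt + noise
        x += v * dt
        samples[i] = v

    print("mean velocity     :", samples.mean())
    print("velocity variance :", samples.var())   # compare with D / gamma = 0.4
    ```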

  1. Development of vehicle model test-bending of a simple structural surfaces model for automotive vehicle sedan

    NASA Astrophysics Data System (ADS)

    Nor, M. K. Mohd; Noordin, A.; Ruzali, M. F. S.; Hussen, M. H.; Mustapa@Othman, N.

    2017-04-01

    The Simple Structural Surfaces (SSS) method is offered as a means of organizing the process of rationalizing the load paths in a basic vehicle body structure. The application of this simplified approach is highly beneficial in the development of modern passenger car structure design. In Malaysia, the SSS topic has been widely adopted and is essentially compulsory in automotive programs related to vehicle structures at many higher education institutions. However, no real physical SSS model has been available to give considerable insight into, and understanding of, the function of each major subassembly in the whole vehicle structure. Motivated by this, a real physical SSS model of a sedan and the corresponding bending tests of the vehicle model are proposed in this work. The proposed approach is relatively easy to understand compared to the Finite Element Method (FEM). The results show that the proposed vehicle model test is useful for physically demonstrating the importance of providing a continuous load path through the necessary structural components within the vehicle structure. It is clearly observed that the global bending stiffness reduces significantly as more panels are removed from the complete SSS model. The analysis shows that the front parcel shelf is an important subassembly for sustaining bending load.

  2. Architecture with GIDEON, A Program for Design in Structural DNA Nanotechnology

    PubMed Central

    Birac, Jeffrey J.; Sherman, William B.; Kopatsch, Jens; Constantinou, Pamela E.; Seeman, Nadrian C.

    2012-01-01

    We present geometry based design strategies for DNA nanostructures. The strategies have been implemented with GIDEON – a Graphical Integrated Development Environment for OligoNucleotides. GIDEON has a highly flexible graphical user interface that facilitates the development of simple yet precise models, and the evaluation of strains therein. Models are built on a simple model of undistorted B-DNA double-helical domains. Simple point and click manipulations of the model allow the minimization of strain in the phosphate-backbone linkages between these domains and the identification of any steric clashes that might occur as a result. Detailed analysis of 3D triangles yields clear predictions of the strains associated with triangles of different sizes. We have carried out experiments that confirm that 3D triangles form well only when their geometrical strain is less than 4% deviation from the estimated relaxed structure. Thus geometry-based techniques alone, without energetic considerations, can be used to explain general trends in DNA structure formation. We have used GIDEON to build detailed models of double crossover and triple crossover molecules, evaluating the non-planarity associated with base tilt and junction mis-alignments. Computer modeling using a graphical user interface overcomes the limited precision of physical models for larger systems, and the limited interaction rate associated with earlier, command-line driven software. PMID:16630733

  3. Correlation Imaging Reveals Specific Crowding Dynamics of Kinesin Motor Proteins

    NASA Astrophysics Data System (ADS)

    Miedema, Daniël M.; Kushwaha, Vandana S.; Denisov, Dmitry V.; Acar, Seyda; Nienhuis, Bernard; Peterman, Erwin J. G.; Schall, Peter

    2017-10-01

    Molecular motor proteins fulfill the critical function of transporting organelles and other building blocks along the biopolymer network of the cell's cytoskeleton, but crowding effects are believed to crucially affect this motor-driven transport due to motor interactions. Physical transport models, like the paradigmatic, totally asymmetric simple exclusion process (TASEP), have been used to predict these crowding effects based on simple exclusion interactions, but verifying them in experiments remains challenging. Here, we introduce a correlation imaging technique to precisely measure the motor density, velocity, and run length along filaments under crowding conditions, enabling us to elucidate the physical nature of crowding and test TASEP model predictions. Using the kinesin motor proteins kinesin-1 and OSM-3, we identify crowding effects in qualitative agreement with TASEP predictions, and we achieve excellent quantitative agreement by extending the model with motor-specific interaction ranges and crowding-dependent detachment probabilities. These results confirm the applicability of basic nonequilibrium models to the intracellular transport and highlight motor-specific strategies to deal with crowding.

  4. A pore-pressure diffusion model for estimating landslide-inducing rainfall

    USGS Publications Warehouse

    Reid, M.E.

    1994-01-01

    Many types of landslide movement are induced by large rainstorms, and empirical rainfall intensity/duration thresholds for initiating movement have been determined for various parts of the world. In this paper, I present a simple pressure diffusion model that provides a physically based hydrologic link between rainfall intensity/duration at the ground surface and destabilizing pore-water pressures at depth. The model approximates rainfall infiltration as a sinusoidally varying flux over time and uses physical parameters that can be determined independently. Using a comprehensive data set from an intensively monitored landslide, I demonstrate that the model is capable of distinguishing movement-inducing rainstorms. -Author

  5. Generalized fractional diffusion equations for subdiffusion in arbitrarily growing domains

    NASA Astrophysics Data System (ADS)

    Angstmann, C. N.; Henry, B. I.; McGann, A. V.

    2017-10-01

    The ubiquity of subdiffusive transport in physical and biological systems has led to intensive efforts to provide robust theoretical models for these phenomena. These models often involve fractional derivatives. The important physical extension of this work to processes occurring in growing materials has proven highly nontrivial. Here we derive evolution equations for modeling subdiffusive transport in a growing medium. The derivation is based on a continuous-time random walk. The concise formulation of these evolution equations requires the introduction of a new, comoving, fractional derivative. The implementation of the evolution equation is illustrated with a simple model of subdiffusing proteins in a growing membrane.

  6. Experimental and numerical modeling of shrub crown fire initiation

    Treesearch

    Watcharapong Tachajapong; Jesse Lozano; Shakar Mahalingam; Xiangyang Zhou; David Weise

    2009-01-01

    The transition of fire from dry surface fuels to wet shrub crown fuels was studied using laboratory experiments and a simple physical model to gain a better understanding of the transition process. In the experiments, we investigated the effects of varying vertical distances between surface and crown fuels (crown base height), and of the wind speed on crown fire...

  7. Improvements to Fidelity, Generation and Implementation of Physics-Based Lithium-Ion Reduced-Order Models

    NASA Astrophysics Data System (ADS)

    Rodriguez Marco, Albert

    Battery management systems (BMS) require computationally simple but highly accurate models of the battery cells they are monitoring and controlling. Historically, empirical equivalent-circuit models have been used, but increasingly researchers are focusing their attention on physics-based models due to their greater predictive capabilities. These models are of high intrinsic computational complexity and so must undergo some kind of order-reduction process to make their use by a BMS feasible: we favor methods based on a transfer-function approach to battery cell dynamics. In prior works, transfer functions have been found from full-order PDE models via two simplifying assumptions: (1) a linearization assumption--which is a fundamental necessity in order to make transfer functions--and (2) an assumption made out of expedience that decouples the electrolyte-potential and electrolyte-concentration PDEs in order to make it possible to solve for the transfer functions from the PDEs. This dissertation improves the fidelity of physics-based models by eliminating the need for the second assumption and by linearizing nonlinear dynamics around different constant currents. Electrochemical transfer functions are infinite-order and cannot be expressed as a ratio of polynomials in the Laplace variable s. Thus, for practical use, these systems need to be approximated using reduced-order models that capture the most significant dynamics. This dissertation improves the generation of physics-based reduced-order models by introducing different realization algorithms, which produce a low-order model from the infinite-order electrochemical transfer functions. Physics-based reduced-order models are linear and describe cell dynamics if operated near the setpoint at which they have been generated. Hence, multiple physics-based reduced-order models need to be generated at different setpoints (i.e., state-of-charge, temperature and C-rate) in order to extend the cell operating range. This dissertation improves the implementation of physics-based reduced-order models by introducing different blending approaches that combine the pre-computed models generated (offline) at different setpoints in order to produce good electrochemical estimates (online) along the cell state-of-charge, temperature and C-rate range.
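
    As a rough illustration of the blending idea mentioned above, the sketch below linearly interpolates the outputs of reduced-order models pre-computed at a few state-of-charge setpoints; the interpolation scheme, the function name blend_rom_outputs, and all numbers are hypothetical stand-ins for the dissertation's actual blending approaches.

      def blend_rom_outputs(soc, setpoints, outputs):
          # Linearly interpolate the outputs of reduced-order models pre-computed
          # at discrete state-of-charge setpoints to the current SOC (a generic
          # blending scheme, not the specific approaches of the dissertation).
          pts = sorted(zip(setpoints, outputs))
          if soc <= pts[0][0]:
              return pts[0][1]
          if soc >= pts[-1][0]:
              return pts[-1][1]
          for (s0, y0), (s1, y1) in zip(pts, pts[1:]):
              if s0 <= soc <= s1:
                  w = (soc - s0) / (s1 - s0)
                  return (1.0 - w) * y0 + w * y1

      # Example: hypothetical ROM voltage estimates generated offline at 20%, 50%, 80% SOC
      print(blend_rom_outputs(0.35, [0.2, 0.5, 0.8], [3.45, 3.62, 3.90]))   # ~3.535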

  8. Foreshock and aftershocks in simple earthquake models.

    PubMed

    Kazemian, J; Tiampo, K F; Klein, W; Dominguez, R

    2015-02-27

    Many models of earthquake faults have been introduced that connect Gutenberg-Richter (GR) scaling to triggering processes. However, natural earthquake fault systems are composed of a variety of different geometries and materials and the associated heterogeneity in physical properties can cause a variety of spatial and temporal behaviors. This raises the question of how the triggering process and the structure interact to produce the observed phenomena. Here we present a simple earthquake fault model based on the Olami-Feder-Christensen and Rundle-Jackson-Brown cellular automata models with long-range interactions that incorporates a fixed percentage of stronger sites, or asperity cells, into the lattice. These asperity cells are significantly stronger than the surrounding lattice sites but eventually rupture when the applied stress reaches their higher threshold stress. The introduction of these spatial heterogeneities results in temporal clustering in the model that mimics that seen in natural fault systems along with GR scaling. In addition, we observe sequences of activity that start with a gradually accelerating number of larger events (foreshocks) prior to a main shock that is followed by a tail of decreasing activity (aftershocks). This work provides further evidence that the spatial and temporal patterns observed in natural seismicity are strongly influenced by the underlying physical properties and are not solely the result of a simple cascade mechanism.

  9. Patterns of Response Times and Response Choices to Science Questions: The Influence of Relative Processing Time

    ERIC Educational Resources Information Center

    Heckler, Andrew F.; Scaife, Thomas M.

    2015-01-01

    We report on five experiments investigating response choices and response times to simple science questions that evoke student "misconceptions," and we construct a simple model to explain the patterns of response choices. Physics students were asked to compare a physical quantity represented by the slope, such as speed, on simple physics…

  10. A Physically Based Coupled Chemical and Physical Weathering Model for Simulating Soilscape Evolution

    NASA Astrophysics Data System (ADS)

    Willgoose, G. R.; Welivitiya, D.; Hancock, G. R.

    2015-12-01

    A critical missing link in existing landscape evolution models is a dynamic soil evolution model in which soils co-evolve with the landform. Work by the authors over the last decade has demonstrated a computationally manageable model for soil profile evolution (soilscape evolution) based on physical weathering. For chemical weathering it is clear that full geochemistry models such as CrunchFlow and PHREEQC are too computationally intensive to be coupled to existing soilscape and landscape evolution models. This paper presents a simplification of CrunchFlow chemistry and physics that makes the task feasible, and generalises it for hillslope geomorphology applications. Results from this simplified model will be compared with field data for soil pedogenesis. Other researchers have previously proposed a number of very simple weathering functions (e.g. exponential, humped, reverse exponential) as conceptual models of the in-profile weathering process. The paper will show that all of these functions are possible for specific combinations of in-soil environmental, geochemical and geologic conditions, and the presentation will outline the key variables controlling which of these conceptual models can be realistic models of in-profile processes and under what conditions. The presentation will finish by discussing the coupling of this model with a physical weathering model, and will show sample results from our SSSPAM soilscape evolution model to illustrate the implications of including chemical weathering in the soilscape evolution model.

  11. Teaching Einsteinian physics at schools: part 1, models and analogies for relativity

    NASA Astrophysics Data System (ADS)

    Kaur, Tejinder; Blair, David; Moschilla, John; Stannard, Warren; Zadnik, Marjan

    2017-11-01

    The Einstein-First project aims to change the paradigm of school science teaching through the introduction of modern Einsteinian concepts of space and time, gravity and quanta at an early age. These concepts are rarely taught to school students despite their central importance to modern science and technology. The key to implementing the Einstein-First curriculum is the development of appropriate models and analogies. This paper is the first part of a three-paper series. It presents the conceptual foundation of our approach, based on simple physical models and analogies, followed by a detailed description of the models and analogies used to teach concepts of general and special relativity. Two accompanying papers address the teaching of quantum physics (Part 2) and research outcomes (Part 3).

  12. Action at a Distance in the Cell's Nucleus

    NASA Astrophysics Data System (ADS)

    Kondev, Jane

    Various functions performed by chromosomes involve long-range communication between DNA sequences that are tens of thousands of bases apart along the genome, and microns apart in the nucleus. In this talk I will discuss experiments and theory relating to two distinct modes of long-range communication in the nucleus, chromosome looping and protein hopping along the chromosome, both in the context of DNA-break repair in yeast. Yeast is an excellent model system for studies that link chromosome conformations to their function as there is ample experimental evidence that yeast chromosome conformations are well described by a simple, random-walk polymer model. Using a combination of polymer physics theory and experiments on yeast cells, I will demonstrate that loss of polymer entropy due to chromosome looping is the driving force for homology search during repair of broken DNA by homologous recombination. I will also discuss the spread of histone modifications along the chromosome and away from the DNA break point in the context of simple physics models based on chromosome looping and kinase hopping, and show how combining physics theory and cell-biology experiment can be used to dissect the molecular mechanism of the spreading process. These examples demonstrate how combined theoretical and experimental studies can reveal physical principles of long-range communication in the nucleus, which play important roles in regulation of gene expression, DNA recombination, and chromatin modification. This work was supported by the NSF DMR-1206146.

  13. Physics-based interactive volume manipulation for sharing surgical process.

    PubMed

    Nakao, Megumi; Minato, Kotaro

    2010-05-01

    This paper presents a new set of techniques by which surgeons can interactively manipulate patient-specific volumetric models for sharing surgical process. To handle physical interaction between the surgical tools and organs, we propose a simple surface-constraint-based manipulation algorithm to consistently simulate common surgical manipulations such as grasping, holding and retraction. Our computation model is capable of simulating soft-tissue deformation and incision in real time. We also present visualization techniques in order to rapidly visualize time-varying, volumetric information on the deformed image. This paper demonstrates the success of the proposed methods in enabling the simulation of surgical processes, and the ways in which this simulation facilitates preoperative planning and rehearsal.

  14. An investigation of crown fuel bulk density effects on the dynamics of crown fire initiation in shrublands

    Treesearch

    Watcharapong Tachajapong; Jesse Lozano; Shankar Mahalingam; Xiangyang Zhou; David R. Weise

    2008-01-01

    Crown fire initiation is studied using simple experiments and detailed physical modeling based on Large Eddy Simulation (LES). Experiments conducted thus far reveal that crown fuel ignition via surface fire occurs when the crown base is within the continuous flame region and does not occur when the crown base is located in the hot plume gas region of the surface...

  15. Development of Computer-Based Experiment Set on Simple Harmonic Motion of Mass on Springs

    ERIC Educational Resources Information Center

    Musik, Panjit

    2017-01-01

    The development of computer-based experiment set has become necessary in teaching physics in schools so that students can learn from their real experiences. The purpose of this study is to create and to develop the computer-based experiment set on simple harmonic motion of mass on springs for teaching and learning physics. The average period of…

  16. Endogenous Crisis Waves: Stochastic Model with Synchronized Collective Behavior

    NASA Astrophysics Data System (ADS)

    Gualdi, Stanislao; Bouchaud, Jean-Philippe; Cencetti, Giulia; Tarzia, Marco; Zamponi, Francesco

    2015-02-01

    We propose a simple framework to understand commonly observed crisis waves in macroeconomic agent-based models, which is also relevant to a variety of other physical or biological situations where synchronization occurs. We compute exactly the phase diagram of the model and the location of the synchronization transition in parameter space. Many modifications and extensions can be studied, confirming that the synchronization transition is extremely robust against various sources of noise or imperfections.

  17. A Comprehensive Physical Impedance Model of Polymer Electrolyte Fuel Cell Cathodes in Oxygen-free Atmosphere.

    PubMed

    Obermaier, Michael; Bandarenka, Aliaksandr S; Lohri-Tymozhynsky, Cyrill

    2018-03-21

    Electrochemical impedance spectroscopy (EIS) is an indispensable tool for non-destructive operando characterization of Polymer Electrolyte Fuel Cells (PEFCs). However, in order to interpret the PEFC's impedance response and understand the phenomena revealed by EIS, numerous semi-empirical or purely empirical models are used. In this work, a relatively simple model for PEFC cathode catalyst layers in the absence of oxygen has been developed, in which all the equivalent circuit parameters have an entirely physical meaning. It is based on: (i) experimental quantification of the catalyst layer pore radii, (ii) application of De Levie's analytical formula to calculate the response of a single pore, (iii) approximating the ionomer distribution within every pore, (iv) accounting for the specific adsorption of sulfonate groups and (v) accounting for a small H2 crossover through ~15 μm ionomer membranes. The derived model has effectively only 6 independent fitting parameters, each of which has a clear physical meaning. It was used to investigate the cathode catalyst layer and the double layer capacitance at the interface between the ionomer/membrane and Pt-electrocatalyst. The model has demonstrated excellent results in fitting and interpretation of the impedance data under different relative humidities. A simple script enabling fitting of impedance data is provided as supporting information.
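
    To make the single-pore building block concrete, the following sketch evaluates a De Levie-type transmission-line impedance for one cylindrical pore; the per-unit-length parameters and frequencies are invented examples, and the paper's additional ingredients (ionomer distribution, sulfonate adsorption, H2 crossover) are not reproduced.

      import cmath, math

      def pore_impedance(omega, r_ion, c_dl, length):
          # Transmission-line (De Levie-type) impedance of a single cylindrical pore:
          #   r_ion = ionic resistance per unit length (ohm/m)
          #   c_dl  = double-layer capacitance per unit length (F/m)
          #   Z(omega) = sqrt(r/(j*w*c)) * coth(L * sqrt(j*w*r*c))
          z_char = cmath.sqrt(r_ion / (1j * omega * c_dl))
          gamma = cmath.sqrt(1j * omega * r_ion * c_dl)
          return z_char / cmath.tanh(gamma * length)

      # Example sweep over frequency for one pore with made-up per-length parameters
      for f_hz in (0.1, 1.0, 10.0, 100.0):
          z = pore_impedance(2.0 * math.pi * f_hz, r_ion=1.0e9, c_dl=1.0e-3, length=10.0e-6)
          print(f_hz, round(z.real, 2), round(z.imag, 2))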

  18. Health monitoring system for transmission shafts based on adaptive parameter identification

    NASA Astrophysics Data System (ADS)

    Souflas, I.; Pezouvanis, A.; Ebrahimi, K. M.

    2018-05-01

    A health monitoring system for a transmission shaft is proposed. The solution is based on the real-time identification of the physical characteristics of the transmission shaft, i.e. stiffness and damping coefficients, by using a physically oriented model and linear recursive identification. The efficacy of the suggested condition monitoring system is demonstrated on a prototype transient engine testing facility equipped with a transmission shaft capable of varying its physical properties. Simulation studies reveal that coupling shaft faults can be detected and isolated using the proposed condition monitoring system. In addition, the performance of various recursive identification algorithms is addressed. The results of this work suggest that the health status of engine dynamometer shafts can be monitored using a simple lumped-parameter shaft model and a linear recursive identification algorithm, which makes the concept practically viable.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, Philip LaRoche

    At the end of his life, Stephen Jay Kline, longtime professor of mechanical engineering at Stanford University, completed a book on how to address complex systems. The title of the book is 'Conceptual Foundations of Multi-Disciplinary Thinking' (1995), but the topic of the book is systems. Kline first establishes certain limits that are characteristic of our conscious minds. Kline then establishes a complexity measure for systems and uses that complexity measure to develop a hierarchy of systems. Kline then argues that our minds, due to their characteristic limitations, are unable to model the complex systems in that hierarchy. Computers are of no help to us here. Our attempts at modeling these complex systems are based on the way we successfully model some simple systems, in particular, 'inert, naturally-occurring' objects and processes, such as what is the focus of physics. But complex systems overwhelm such attempts. As a result, the best we can do in working with these complex systems is to use a heuristic, what Kline calls the 'Guideline for Complex Systems.' Kline documents the problems that have developed due to 'oversimple' system models and from the inappropriate application of a system model from one domain to another. One prominent such problem is the Procrustean attempt to make the disciplines that deal with complex systems be 'physics-like.' Physics deals with simple systems, not complex ones, using Kline's complexity measure. The models that physics has developed are inappropriate for complex systems. Kline documents a number of the wasteful and dangerous fallacies of this type.

  20. Synapse fits neuron: joint reduction by model inversion.

    PubMed

    van der Scheer, H T; Doelman, A

    2017-08-01

    In this paper, we introduce a novel simplification method for dealing with physical systems that can be thought to consist of two subsystems connected in series, such as a neuron and a synapse. The aim of our method is to help find a simple, yet convincing model of the full cascade-connected system, assuming that a satisfactory model of one of the subsystems, e.g., the neuron, is already given. Our method allows us to validate a candidate model of the full cascade against data at a finer scale. In our main example, we apply our method to part of the squid's giant fiber system. We first postulate a simple, hypothetical model of cell-to-cell signaling based on the squid's escape response. Then, given a FitzHugh-type neuron model, we derive the verifiable model of the squid giant synapse that this hypothesis implies. We show that the derived synapse model accurately reproduces synaptic recordings, hence lending support to the postulated, simple model of cell-to-cell signaling, which thus, in turn, can be used as a basic building block for network models.

  1. Management system of simple rental flats study based on technical aspect and health in Medan city

    NASA Astrophysics Data System (ADS)

    Novrial; Indra Cahaya, S.

    2018-03-01

    Medan is a metropolitan city in Sumatera that contains slum areas. Simple rental flats have been built to overcome this problem. However, a preliminary survey showed that the management of the physical and non-physical environment of these simple rental flats is very poor. This study was conducted in 3 simple rental flats by observing their environment and interviewing occupants and related agencies. The results showed that, by the largest percentage, the occupants are Javanese, their latest education is senior high school, they are self-employed, and their average income is Rp 1,000,000 – Rp 2,500,000. Waste collection fees are paid for cleaning services, except at the Amplas simple rental flats, where the waste management system is not run properly and garbage is left scattered. The number of family members at the Wisma Labuhan and Amplas simple rental flats exceeds the regulated number of occupants, so the flats are crowded and noisy. The physical condition of the Amplas simple rental flats is bad: the septic tank is full and has not been emptied. Clean water is sourced from wells and an artesian well and is vulnerable to contamination by pollutants such as leachate, resulting in poor water quality. It is necessary to improve the physical facilities, basic sanitation, and occupant guidance within the management system of the simple rental flats.

  2. Model for intensity calculation in electron guns

    NASA Astrophysics Data System (ADS)

    Doyen, O.; De Conto, J. M.; Garnier, J. P.; Lefort, M.; Richard, N.

    2007-04-01

    The calculation of the current in an electron gun structure is one of the main investigations involved in understanding electron gun physics. In particular, various simulation codes exist but often present important discrepancies with experiments. Moreover, those differences cannot be reduced because of the lack of physical information in these codes. We present a simple physical three-dimensional model, valid for all kinds of gun geometries. This model offers better precision than the other simulation codes and models encountered and allows a real understanding of electron gun physics. It is based only on the calculation of the Laplace electric field at the cathode, the use of the classical Child-Langmuir current density, and a geometrical correction to this law. Finally, the intensity versus voltage characteristic curve can be precisely described with only a few physical parameters. Indeed, we have shown that the electron gun current generation is governed mainly by the shape of the electric field at the cathode without beam and by the distance of an equivalent infinite planar diode gap.
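
    A minimal numerical sketch of the kind of estimate this model refines, assuming an ideal planar Child-Langmuir diode with an equivalent gap distance and a single dimensionless geometric correction factor (both illustrative stand-ins for the paper's actual correction):

      import math

      # Physical constants (SI units)
      EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
      E_CHARGE = 1.602176634e-19   # elementary charge, C
      M_E = 9.1093837015e-31       # electron mass, kg

      def child_langmuir_current_density(voltage, gap):
          # Space-charge-limited current density (A/m^2) of an ideal planar diode:
          # J = (4*eps0/9) * sqrt(2e/m) * V^(3/2) / d^2
          return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) * voltage**1.5 / gap**2

      def gun_current(voltage, d_eq, cathode_area, geom_correction=1.0):
          # Illustrative gun current: Child-Langmuir density over an equivalent planar
          # gap d_eq, scaled by a geometric correction factor (assumed form, not the
          # specific correction derived in the paper).
          return geom_correction * cathode_area * child_langmuir_current_density(voltage, d_eq)

      # Example: 10 kV across an equivalent 5 mm gap, 1 cm^2 cathode
      print(gun_current(10e3, 5e-3, 1e-4))   # current in amperes, ~9 A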

  3. Tuning a physically-based model of the air-sea gas transfer velocity

    NASA Astrophysics Data System (ADS)

    Jeffery, C. D.; Robinson, I. S.; Woolf, D. K.

    Air-sea gas transfer velocities are estimated for one year using a 1-D upper-ocean model (GOTM) and a modified version of the NOAA-COARE transfer velocity parameterization. Tuning parameters are evaluated with the aim of bringing the physically based NOAA-COARE parameterization in line with current estimates, based on simple wind-speed dependent models derived from bomb-radiocarbon inventories and deliberate tracer release experiments. We suggest that A = 1.3 and B = 1.0, for the sub-layer scaling parameter and the bubble-mediated exchange, respectively, are consistent with the global average CO2 transfer velocity k. Using these parameters and a simple 2nd order polynomial approximation, with respect to wind speed, we estimate a global annual average k for CO2 of 16.4 ± 5.6 cm h⁻¹ when using global mean winds of 6.89 m s⁻¹ from the NCEP/NCAR Reanalysis 1 1954-2000. The tuned model can be used to predict the transfer velocity of any gas, with appropriate treatment of the dependence on molecular properties including the strong solubility dependence of bubble-mediated transfer. For example, an initial estimate of the global average transfer velocity of DMS (a relatively soluble gas) is only 11.9 cm h⁻¹ whilst for less soluble methane the estimate is 18.0 cm h⁻¹.
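
    For orientation, a minimal sketch of a quadratic wind-speed parameterization of the transfer velocity with Schmidt-number scaling; the coefficient follows the widely used Wanninkhof-style convention and is an assumption, not the tuned NOAA-COARE parameters of this study:

      def transfer_velocity(u10, schmidt, a=0.31):
          # Gas transfer velocity k (cm/h) from 10-m wind speed u10 (m/s).
          # Quadratic wind-speed dependence with (Sc/660)^-0.5 scaling; the
          # coefficient a = 0.31 is the classic Wanninkhof (1992) value and is
          # only a stand-in for the tuned parameterization of the abstract.
          return a * u10**2 * (schmidt / 660.0) ** -0.5

      # Example: global-mean wind of 6.89 m/s, CO2 in seawater near 20 C (Sc ~ 660)
      print(transfer_velocity(6.89, 660.0))   # ~14.7 cm/h, same order as the quoted 16.4 cm/h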

  4. Probabilistic short-term forecasting of eruption rate at Kīlauea Volcano using a physics-based model

    NASA Astrophysics Data System (ADS)

    Anderson, K. R.

    2016-12-01

    Deterministic models of volcanic eruptions yield predictions of future activity conditioned on uncertainty in the current state of the system. Physics-based eruption models are well-suited for deterministic forecasting as they can relate magma physics with a wide range of observations. Yet, physics-based eruption forecasting is strongly limited by an inadequate understanding of volcanic systems, and the need for eruption models to be computationally tractable. At Kīlauea Volcano, Hawaii, episodic depressurization-pressurization cycles of the magma system generate correlated, quasi-exponential variations in ground deformation and surface height of the active summit lava lake. Deflations are associated with reductions in eruption rate, or even brief eruptive pauses, and thus partly control lava flow advance rates and associated hazard. Because of the relatively well-understood nature of Kīlauea's shallow magma plumbing system, and because more than 600 of these events have been recorded to date, they offer a unique opportunity to refine a physics-based effusive eruption forecasting approach and apply it to lava eruption rates over short (hours to days) time periods. A simple physical model of the volcano ascribes observed data to temporary reductions in magma supply to an elastic reservoir filled with compressible magma. This model can be used to predict the evolution of an ongoing event, but because the mechanism that triggers events is unknown, event durations are modeled stochastically from previous observations. A Bayesian approach incorporates diverse data sets and prior information to simultaneously estimate uncertain model parameters and future states of the system. Forecasts take the form of probability distributions for eruption rate or cumulative erupted volume at some future time. Results demonstrate the significant uncertainties that still remain even for short-term eruption forecasting at a well-monitored volcano - but also the value of a physics-based, mixed deterministic-probabilistic eruption forecasting approach in reducing and quantifying these uncertainties.

  5. Simulation of snow and soil water content as a basis for satellite retrievals

    USDA-ARS?s Scientific Manuscript database

    It is not yet possible to determine whether the snow has changed over time despite collection of passive microwave data for more than thirty years. Physically-based, but computationally simple snow and soil models have been coupled to form the basis of a data assimilation system for retrievals of sn...

  6. Tree Hydraulics: How Sap Rises

    ERIC Educational Resources Information Center

    Denny, Mark

    2012-01-01

    Trees transport water from roots to crown--a height that can exceed 100 m. The physics of tree hydraulics can be conveyed with simple fluid dynamics based upon the Hagen-Poiseuille equation and Murray's law. Here the conduit structure is modelled as conical pipes and as branching pipes. The force required to lift sap is generated mostly by…
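
    As a concrete illustration of the two relations named in the abstract, a short sketch of Hagen-Poiseuille flow through a single conduit and a Murray's-law check at a branching point (conduit dimensions are invented example values):

      import math

      def hagen_poiseuille_flow(radius, length, delta_p, viscosity=1.0e-3):
          # Volumetric flow rate (m^3/s) through a cylindrical conduit:
          # Q = pi * r^4 * dP / (8 * mu * L); default viscosity ~ water at 20 C.
          return math.pi * radius**4 * delta_p / (8.0 * viscosity * length)

      def murray_residual(parent_radius, child_radii):
          # Murray's law: the cube of the parent radius equals the sum of the
          # cubed daughter radii at a branching point; returns the mismatch.
          return parent_radius**3 - sum(r**3 for r in child_radii)

      # Example: a 20-micrometre-radius conduit, 1 m long, under a 0.1 MPa pressure drop
      print(hagen_poiseuille_flow(20e-6, 1.0, 1.0e5))
      # A symmetric branching that satisfies Murray's law exactly
      print(murray_residual(1.0, [2.0 ** (-1.0 / 3.0)] * 2))   # ~0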

  7. Rocket Engine Oscillation Diagnostics

    NASA Technical Reports Server (NTRS)

    Nesman, Tom; Turner, James E. (Technical Monitor)

    2002-01-01

    Rocket engine oscillating data can reveal many physical phenomena ranging from unsteady flow and acoustics to rotordynamics and structural dynamics. Because of this, engine diagnostics based on oscillation data should employ both signal analysis and physical modeling. This paper describes an approach to rocket engine oscillation diagnostics, types of problems encountered, and example problems solved. Determination of design guidelines and environments (or loads) from oscillating phenomena is required during initial stages of rocket engine design, while the additional tasks of health monitoring, incipient failure detection, and anomaly diagnostics occur during engine development and operation. Oscillations in rocket engines are typically related to flow driven acoustics, flow excited structures, or rotational forces. Additional sources of oscillatory energy are combustion and cavitation. Included in the example problems is a sampling of signal analysis tools employed in diagnostics. The rocket engine hardware includes combustion devices, valves, turbopumps, and ducts. Simple models of an oscillating fluid system or structure can be constructed to estimate pertinent dynamic parameters governing the unsteady behavior of engine systems or components. In the example problems it is shown that simple physical modeling when combined with signal analysis can be successfully employed to diagnose complex rocket engine oscillatory phenomena.

  8. CADDIS Volume 2. Sources, Stressors and Responses: Simple and Detailed Conceptual Model Diagram Downloads

    EPA Pesticide Factsheets

    Simple and detailed conceptual model diagrams and associated narratives for ammonia, dissolved oxygen, flow alteration, herbicides, insecticides, ionic strength, metals, nutrients, pH, physical habitat, sediments, temperature, and unspecified toxic chemicals.

  9. Representing ductile damage with the dual domain material point method

    DOE PAGES

    Long, C. C.; Zhang, D. Z.; Bronkhorst, C. A.; ...

    2015-12-14

    In this study, we incorporate a ductile damage material model into a computational framework based on the Dual Domain Material Point (DDMP) method. As an example, simulations of a flyer plate experiment involving ductile void growth and material failure are performed. The results are compared with experiments performed on high purity tantalum. We also compare the numerical results obtained from the DDMP method with those obtained from the traditional Material Point Method (MPM). Effects of an overstress model, artificial viscosity, and physical viscosity are investigated. Our results show that a physical bulk viscosity and overstress model are important in this impact and failure problem, while physical shear viscosity and artificial shock viscosity have negligible effects. A simple numerical procedure with guaranteed convergence is introduced to solve for the equilibrium plastic state from the ductile damage model.

  10. Discrete-time modelling of musical instruments

    NASA Astrophysics Data System (ADS)

    Välimäki, Vesa; Pakarinen, Jyri; Erkut, Cumhur; Karjalainen, Matti

    2006-01-01

    This article describes physical modelling techniques that can be used for simulating musical instruments. The methods are closely related to digital signal processing. They discretize the system with respect to time, because the aim is to run the simulation using a computer. The physics-based modelling methods can be classified as mass-spring, modal, wave digital, finite difference, digital waveguide and source-filter models. We present the basic theory and a discussion on possible extensions for each modelling technique. For some methods, a simple model example is chosen from the existing literature demonstrating a typical use of the method. For instance, in the case of the digital waveguide modelling technique a vibrating string model is discussed, and in the case of the wave digital filter technique we present a classical piano hammer model. We tackle some nonlinear and time-varying models and include new results on the digital waveguide modelling of a nonlinear string. Current trends and future directions in physical modelling of musical instruments are discussed.
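
    To make the digital waveguide idea concrete, a minimal Karplus-Strong-style plucked-string loop, i.e. a delay line fed back through a simple loss filter; this is a generic textbook sketch, not the article's nonlinear string model:

      import random

      def plucked_string(frequency=220.0, duration=0.5, sample_rate=44100, loss=0.996):
          # Delay line one period long, excited with a noise burst ("pluck") and fed
          # back through a two-point average that acts as a gentle lowpass/loss filter.
          n_delay = int(sample_rate / frequency)
          delay = [random.uniform(-1.0, 1.0) for _ in range(n_delay)]
          out = []
          for _ in range(int(sample_rate * duration)):
              sample = loss * 0.5 * (delay[0] + delay[1])
              out.append(sample)
              delay.pop(0)          # shift the delay line by one sample
              delay.append(sample)  # and feed the filtered sample back in
          return out

      samples = plucked_string()
      print(len(samples), max(abs(s) for s in samples))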

  11. Measurement of Pressure Responses in a Physical Model of a Human Head with High Shape Fidelity Based on CT/MRI Data

    NASA Astrophysics Data System (ADS)

    Miyazaki, Yusuke; Tachiya, Hiroshi; Anata, Kenji; Hojo, Akihiro

    This study discusses the head injury mechanism of a human head subjected to impact, based on the results of impact experiments using a physical model of a human head with high shape fidelity. The physical model was constructed using rapid prototyping technology from three-dimensional CAD data obtained from CT/MRI images of a subject's head. In the experiments, positive pressure responses occurred at the impacted site, whereas negative pressure responses occurred opposite the impacted site. Moreover, the absolute maximum pressure occurring at the frontal region of the intracranial space of the head model was equal to or higher than that at the occipital site, whether the impact force was imposed on the frontal or the occipital region. This result has not been shown in other studies using simple-shaped physical models, and it corresponds with clinical evidence that brain contusion occurs mainly at the frontal part for either impact direction. Thus, a physical model with an accurate skull shape is needed to clarify the mechanism of brain contusion.

  12. Thermal stability of static coronal loops: Part 1: Effects of boundary conditions

    NASA Technical Reports Server (NTRS)

    Antiochos, S. K.; Shoub, E. C.; An, C. H.; Emslie, A. G.

    1985-01-01

    The linear stability of static coronal-loop models undergoing thermal perturbations was investigated. The effect of conditions at the loop base on the stability properties of the models was considered in detail. The question of appropriate boundary conditions at the loop base was considered and it was concluded that the most physical assumptions are that the temperature and density (or pressure) perturbations vanish there. However, if the base is taken to be sufficiently deep in the chromosphere, either several chromospheric scale heights or several coronal loop lengths in depth, then the effect of the boundary conditions on loop stability becomes negligible so that all physically acceptable conditions are equally appropriate. For example, one could as well assume that the velocity vanishes at the base. The growth rates and eigenmodes of static models in which gravity is neglected and in which the coronal heating is a relatively simple function, either constant per-unit mass or per-unit volume were calculated. It was found that all such models are unstable with a growth rate of the order of the coronal cooling time. The physical implications of these results for the solar corona and transition region are discussed.

  13. Asymptotic formulae for likelihood-based tests of new physics

    NASA Astrophysics Data System (ADS)

    Cowan, Glen; Cranmer, Kyle; Gross, Eilam; Vitells, Ofer

    2011-02-01

    We describe likelihood-based statistical tests for use in high energy physics for the discovery of new phenomena and for construction of confidence intervals on model parameters. We focus on the properties of the test procedures that allow one to account for systematic uncertainties. Explicit formulae for the asymptotic distributions of test statistics are derived using results of Wilks and Wald. We motivate and justify the use of a representative data set, called the "Asimov data set", which provides a simple method to obtain the median experimental sensitivity of a search or measurement as well as fluctuations about this expectation.
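
    A small worked sketch of the asymptotic machinery for a single-bin counting experiment: the discovery test statistic and its median ("Asimov") significance. The formulas are the standard asymptotic results; the signal and background numbers are invented for illustration:

      import math

      def discovery_significance(n_obs, b):
          # Z = sqrt(q0) for a single Poisson bin with known background b,
          # where q0 = 2 * (n * ln(n/b) - (n - b)) for n > b, and 0 otherwise.
          if n_obs <= b:
              return 0.0
          q0 = 2.0 * (n_obs * math.log(n_obs / b) - (n_obs - b))
          return math.sqrt(q0)

      def asimov_significance(s, b):
          # Median expected discovery significance using the Asimov data set n = s + b:
          # Z_A = sqrt(2 * ((s + b) * ln(1 + s/b) - s)).
          return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

      # Example: expected signal s = 10 on background b = 100
      print(asimov_significance(10.0, 100.0))     # ~0.98 sigma
      print(discovery_significance(130, 100.0))   # observed excess of 30 events, ~2.9 sigma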

  14. Evaluation of SCS-CN method using a fully distributed physically based coupled surface-subsurface flow model

    NASA Astrophysics Data System (ADS)

    Shokri, Ali

    2017-04-01

    The hydrological cycle contains a wide range of linked surface and subsurface flow processes. In spite of natural connections between surface water and groundwater, historically, these processes have been studied separately. The current trend in hydrological distributed physically based model development is to combine distributed surface water models with distributed subsurface flow models. This combination results in a better estimation of the temporal and spatial variability of the interaction between surface and subsurface flow. On the other hand, simple lumped models such as the Soil Conservation Service Curve Number (SCS-CN) are still quite common because of their simplicity. In spite of the popularity of the SCS-CN method, there have always been concerns about the ambiguity of the SCS-CN method in explaining the physical mechanism of rainfall-runoff processes. The aim of this study is to minimize this ambiguity by establishing a method to find an equivalence of the SCS-CN solution to the DrainFlow model, which is a fully distributed physically based coupled surface-subsurface flow model. In this paper, two hypothetical v-catchment tests are designed and the direct runoff from a storm event is calculated by both the SCS-CN and DrainFlow models. To find a comparable solution to runoff prediction through SCS-CN and DrainFlow, the variance between runoff predictions by the two models is minimized by changing the Curve Number (CN) and initial abstraction (Ia) values. Results of this study have led to a set of lumped model parameters (CN and Ia) for each catchment that is comparable to a set of physically based parameters including hydraulic conductivity, Manning roughness coefficient, ground surface slope, and specific storage. Considering that the lack of physical interpretation of CN and Ia is often argued to be a weakness of the SCS-CN method, the novel method in this paper gives a physical explanation of CN and Ia.
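
    For reference, a compact sketch of the standard SCS-CN runoff computation that the study evaluates (metric form, with the usual Ia = 0.2 S convention; the CN value in the example is arbitrary):

      def scs_cn_runoff(rainfall_mm, cn, ia_ratio=0.2):
          # Direct runoff Q (mm) from event rainfall P (mm) using the SCS-CN method:
          #   S  = 25400 / CN - 254          (potential maximum retention, mm)
          #   Ia = ia_ratio * S              (initial abstraction, conventionally 0.2 S)
          #   Q  = (P - Ia)^2 / (P - Ia + S) for P > Ia, otherwise 0
          s = 25400.0 / cn - 254.0
          ia = ia_ratio * s
          if rainfall_mm <= ia:
              return 0.0
          return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

      # Example: an 80 mm storm on a catchment with CN = 75
      print(scs_cn_runoff(80.0, 75))   # ~27 mm of direct runoff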

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snelson, C. M., Chipman, V. D., White, R. L., Emmitt, R. F., Townsend, M. J., Barker, D., Lee, P.

    Understanding the changes in seismic energy as it travels from the near field to the far field is the ultimate goal in monitoring for explosive events of interest. This requires a clear understanding of explosion phenomenology as it relates to seismic, infrasound, and acoustic signals. Although there has been much progress in modeling these phenomena, this has been primarily based in the empirical realm. As a result, the logical next step in advancing the seismic monitoring capability of the United States is to conduct field tests that can expand the predictive capability of the physics-based modeling currently under development. The Source Physics Experiment at the Nevada National Security Site (SPE-N) is the first step in this endeavor to link the empirically based with the physics-based modeling. This is a collaborative project between National Security Technologies (NSTec), Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), Sandia National Laboratories (SNL), the Defense Threat Reduction Agency (DTRA), and the Air Force Technical Applications Center (AFTAC). The test series require both the simple and complex cases to fully characterize the problem, which is to understand the transition of seismic energy from the near field to the far field; to understand the development of S-waves in explosives sources; and how anisotropy controls seismic energy transmission and partitioning. The current series is being conducted in a granite body called the Climax Stock. This location was chosen for several reasons, including the fairly homogenous granite; the location of previous nuclear tests in the same rock body; and generally the geology has been well characterized. The simple geology series is planned for 7 shots using conventional explosives in the same shot hole surrounded by Continuous Reflectometry for Radius vs. Time Experiment (CORRTEX), Time of Arrival (TOA), Velocity of Detonation (VOD), down-hole accelerometers, surface accelerometers, infrasound, and a suite of seismic sensors of various frequency bands from the near field to the far field. This allows for the use of a single test bed in the simple geology case instead of multiple test beds to obtain the same results. The shots are planned at various depths to obtain a Green's function, scaled-depth of burial data, nominal depth of burial data and damage zone data. SPE1-N was conducted in May 2011 as a 220 lb (100 kg) TNT equivalent calibration shot at a depth of 180 ft (55 m). SPE2-N was conducted in October 2011 as a 2200 lb (1000 kg) TNT equivalent calibration shot at a depth of 150 ft (46 m). SPE3-N was conducted in July 2012 as a 2200 lb (1000 kg) TNT equivalent calibration shot at a depth of 150 ft (46 m) in the damaged zone. Over 400 data channels were recorded for each of these shots and data recovery was about 95% with high signal to noise ratio. Once the simple geology site data has been utilized, a new test bed will be developed in a complex geology site to test these physics based models. Ultimately, the results from this project will provide the next advances in the science of monitoring to enable a physics-based predictive capability.

  16. Estimation of vegetation cover at subpixel resolution using LANDSAT data

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Eagleson, Peter S.

    1986-01-01

    The present report summarizes the various approaches relevant to estimating canopy cover at subpixel resolution. The approaches are based on physical models of radiative transfer in non-homogeneous canopies and on empirical methods. The effects of vegetation shadows and topography are examined. Simple versions of the model are tested, using the Taos, New Mexico Study Area database. Emphasis has been placed on using relatively simple models requiring only one or two bands. Although most methods require some degree of ground truth, a two-band method is investigated whereby the percent cover can be estimated without ground truth by examining the limits of the data space. Future work is proposed which will incorporate additional surface parameters into the canopy cover algorithm, such as topography, leaf area, or shadows. The method involves deriving a probability density function for the percent canopy cover based on the joint probability density function of the observed radiances.

  17. GRIPs (Group Investigation Problems) for Introductory Physics

    NASA Astrophysics Data System (ADS)

    Moore, Thomas A.

    2006-12-01

    GRIPs lie somewhere between homework problems and simple labs: they are open-ended questions that require a mixture of problem-solving skills and hands-on experimentation to solve practical puzzles involving simple physical objects. In this talk, I will describe three GRIPs that I developed for a first-semester introductory calculus-based physics course based on the "Six Ideas That Shaped Physics" text. I will discuss the design of the three GRIPs we used this past fall, our experience in working with students on these problems, and students' response as reported on course evaluations.

  18. The Source Physics Experiments (SPE) at the Nevada National Security Site (NNSS): An Overview

    NASA Astrophysics Data System (ADS)

    Snelson, C. M.; Chipman, V.; White, R. L.; Emmitt, R.; Townsend, M.; Barker, D.; Lee, P.

    2012-12-01

    Understanding the changes in seismic energy as it travels from the near field to the far field is the ultimate goal in monitoring for explosive events of interest. This requires a clear understanding of explosion phenomenology as it relates to seismic, infrasound, and acoustic signals. Although there has been much progress in modeling these phenomena, this has been primarily based in the empirical realm. As a result, the logical next step in advancing the seismic monitoring capability of the United States is to conduct field tests that can expand the predictive capability of the physics-based modeling currently under development. The Source Physics Experiment at the Nevada National Security Site (SPE) is the first step in this endeavor to link the empirically based with the physics-based modeling. This is a collaborative project between National Security Technologies (NSTec), Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), Sandia National Laboratories (SNL), the Defense Threat Reduction Agency (DTRA), and the Air Force Technical Applications Center (AFTAC). The test series require both the simple and complex cases to fully characterize the problem, which is to understand the transition of seismic energy from the near field to the far field; to understand the development of S-waves in explosives sources; and how anisotropy controls seismic energy transmission and partitioning. The current series is being conducted in a granite body called the Climax Stock. This location was chosen for several reasons, including the fairly homogenous granite; the location of previous nuclear tests in the same rock body; and generally the geology has been well characterized. The simple geology series is planned for 7 shots using conventional explosives in the same shot hole surrounded by Continuous Reflectometry for Radius vs. Time Experiment (CORRTEX), Time of Arrival (TOA), Velocity of Detonation (VOD), down-hole accelerometers, surface accelerometers, infrasound, and a suite of seismic sensors of various frequency bands from the near field to the far field. This allows for the use of a single test bed in the simple geology case instead of multiple test beds to obtain the same results. The shots are planned at various depths to obtain a Green's function, scaled-depth of burial data, nominal depth of burial data and damage zone data. SPE1 was conducted in May 2011 as a 220 lb (100 kg) TNT equivalent calibration shot at a depth of 180 ft (55 m). SPE2 was conducted in October 2011 as a 2200 lb (1000 kg) TNT equivalent calibration shot at a depth of 150 ft (46 m). SPE3 was conducted in July 2012 as a 2200 lb (1000 kg) TNT equivalent calibration shot at a depth of 150 ft (46 m) in the damaged zone. Over 400 data channels were recorded for each of these shots and data recovery was about 95% with high signal to noise ratio. Once the simple geology site data has been utilized, a new test bed will be developed in a complex geology site to test these physics based models. Ultimately, the results from this project will provide the next advances in the science of monitoring to enable a physics-based predictive capability. This work was done by National Security Technologies, LLC, under Contract No. DE-AC52-06NA25946 with the U.S. Department of Energy. DOE/NV/25946--1584

  19. Photoresist and stochastic modeling

    NASA Astrophysics Data System (ADS)

    Hansen, Steven G.

    2018-01-01

    Analysis of physical modeling results can provide unique insights into extreme ultraviolet stochastic variation, which augment, and sometimes refute, conclusions based on physical intuition and even wafer experiments. Simulations verify the primacy of "imaging critical" counting statistics (photons, electrons, and net acids) and the image/blur-dependent dose sensitivity in describing the local edge or critical dimension variation. But the failure of simple counting when resist thickness is varied highlights a limitation of this exact analytical approach, so a calibratable empirical model offers useful simplicity and convenience. Results presented here show that a wide range of physical simulation results can be well matched by an empirical two-parameter model based on blurred image log-slope (ILS) for lines/spaces and normalized ILS for holes. These results are largely consistent with a wide range of published experimental results; however, there is some disagreement with the recently published dataset of De Bisschop. The present analysis suggests that the origin of this model failure is an unexpected blurred ILS:dose-sensitivity relationship failure in that resist process. It is shown that a photoresist mechanism based on high photodecomposable quencher loading and high quencher diffusivity can give rise to pitch-dependent blur, which may explain the discrepancy.

  20. Material model for physically based rendering

    NASA Astrophysics Data System (ADS)

    Robart, Mathieu; Paulin, Mathias; Caubet, Rene

    1999-09-01

    In computer graphics, a complete knowledge of the interactions between light and a material is essential to obtain photorealistic pictures. Physical measurements allow us to obtain data on the material response, but are limited to industrial surfaces and depend on measurement conditions. Analytic models do exist, but they are often inadequate for common use: the empirical ones are too simple to be realistic, and the physically based ones are often too complex or too specialized to be generally useful. Therefore, we have developed a multiresolution virtual material model that not only describes the surface of a material, but also its internal structure, thanks to distribution functions of microelements arranged in layers. Each microelement possesses its own response to incident light, from an elementary reflection to a complex response provided by its inner structure, taking into account the geometry, energy, polarization, etc., of each light ray. This model is virtually illuminated in order to compute its response to an incident radiance. This directional response is stored in a compressed data structure using spherical wavelets, and is intended to be used in a rendering model such as directional radiosity.

  1. Rocket exhaust ground cloud/atmospheric interactions

    NASA Technical Reports Server (NTRS)

    Hwang, B.; Gould, R. K.

    1978-01-01

    An attempt to identify and minimize the uncertainties and potential inaccuracies of the NASA Multilayer Diffusion Model (MDM) is performed using data from selected Titan 3 launches. The study is based on detailed parametric calculations using the MDM code and a comparative study of several other diffusion models, the NASA measurements, and the MDM. The results are discussed and evaluated. In addition, the physical/chemical processes taking place during the rocket cloud rise are analyzed. The exhaust properties and the deluge water effects are evaluated. A time-dependent model for two aerosol coagulations is developed and documented. Calculations using this model for dry deposition during cloud rise are made. A simple model for calculating physical properties such as temperature and air mass entrainment during cloud rise is also developed and incorporated with the aerosol model.

  2. NREL: Renewable Resource Data Center - SMARTS

    Science.gov Websites

    The Simple Model of the Atmospheric Radiative Transfer of Sunshine (SMARTS), hosted at NREL's Renewable Resource Data Center, predicts clear-sky spectral solar irradiance; application areas include architecture, atmospheric science, photobiology, and health physics. SMARTS is a complex model that requires ...

  3. Quantifying the Effect of Soil Water Repellency on Infiltration Parameters Using a Dry Sand

    NASA Astrophysics Data System (ADS)

    Shillito, R.; Berli, M.; Ghezzehei, T. A.; Kaminski, E.

    2017-12-01

    Water infiltration into less than perfectly wettable soils has usually been considered an exceptional case—in fact, it may be the rule. Infiltration into soils exhibiting some degree of water repellency has important implications in agricultural irrigation, post-fire runoff, golf course and landscape management, and spill and contaminant mitigation. Beginning from fundamental principles, we developed a physically-based model to quantify the effect of water repellency on infiltration parameters. Experimentally, we used a dry silica sand and treated it to achieve various known degrees of water repellency. The model was verified using data gathered from multiple upward infiltration (wicking) experiments using the treated sand. The model also allowed us to explore the effect of initial soil moisture conditions on infiltration into water-repellent soils, and the physical interpretation of the simple water drop penetration time test. These results provide a fundamental step in the physically-based understanding of how water infiltrates into a less than perfectly wettable porous medium.

  4. Measuring memory with the order of fractional derivative

    NASA Astrophysics Data System (ADS)

    Du, Maolin; Wang, Zaihua; Hu, Haiyan

    2013-12-01

    Fractional derivative has a history as long as that of classical calculus, but it is much less popular than it should be. What is the physical meaning of fractional derivative? This is still an open problem. In modeling various memory phenomena, we observe that a memory process usually consists of two stages. One is short with permanent retention, and the other is governed by a simple model of fractional derivative. Using the numerical least-squares method, we show that the fractional model perfectly fits the test data of memory phenomena in different disciplines, not only in mechanics, but also in biology and psychology. Based on this model, we find that a physical meaning of the fractional order is an index of memory.
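
    To illustrate how the fractional order acts as a memory index, a short Grünwald-Letnikov sketch: the weight sequence shows how strongly past values contribute, and the example differentiates f(t) = t to order 0.5 (step size and series are invented):

      def gl_weights(alpha, n):
          # First n Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k);
          # the slow decay of |w_k| for small alpha encodes long memory.
          w = [1.0]
          for k in range(1, n):
              w.append(w[-1] * (k - 1 - alpha) / k)
          return w

      def gl_fractional_derivative(f_values, alpha, h):
          # Grunwald-Letnikov estimate of the order-alpha derivative at the last
          # sample of a uniformly spaced series f_values with step h.
          w = gl_weights(alpha, len(f_values))
          return sum(wk * fk for wk, fk in zip(w, reversed(f_values))) / h**alpha

      # Example: half-order derivative of f(t) = t on [0, 1]
      h = 0.01
      f = [i * h for i in range(101)]
      print(gl_fractional_derivative(f, 0.5, h))   # close to the exact value 2/sqrt(pi) ~ 1.128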

  5. A Simple Model of Global Aerosol Indirect Effects

    NASA Technical Reports Server (NTRS)

    Ghan, Steven J.; Smith, Steven J.; Wang, Minghuai; Zhang, Kai; Pringle, Kirsty; Carslaw, Kenneth; Pierce, Jeffrey; Bauer, Susanne; Adams, Peter

    2013-01-01

    Most estimates of the global mean indirect effect of anthropogenic aerosol on the Earth's energy balance are from simulations by global models of the aerosol lifecycle coupled with global models of clouds and the hydrologic cycle. Extremely simple models have been developed for integrated assessment models, but lack the flexibility to distinguish between primary and secondary sources of aerosol. Here a simple but more physically based model expresses the aerosol indirect effect (AIE) using analytic representations of cloud and aerosol distributions and processes. Although the simple model is able to produce estimates of AIEs that are comparable to those from some global aerosol models using the same global mean aerosol properties, the estimates by the simple model are sensitive to preindustrial cloud condensation nuclei concentration, preindustrial accumulation mode radius, width of the accumulation mode, size of primary particles, cloud thickness, primary and secondary anthropogenic emissions, the fraction of the secondary anthropogenic emissions that accumulates on the coarse mode, the fraction of the secondary mass that forms new particles, and the sensitivity of liquid water path to droplet number concentration. Estimates of present-day AIEs as low as -5 W/sq m and as high as -0.3 W/sq m are obtained for plausible sets of parameter values. Estimates are surprisingly linear in emissions. The estimates depend on parameter values in ways that are consistent with results from detailed global aerosol-climate simulation models, which adds to understanding of the dependence of AIE uncertainty on uncertainty in parameter values.

  6. A Geostationary Earth Orbit Satellite Model Using Easy Java Simulation

    ERIC Educational Resources Information Center

    Wee, Loo Kang; Goh, Giam Hwee

    2013-01-01

    We develop an Easy Java Simulation (EJS) model for students to visualize geostationary orbits near Earth, modelled using a Java 3D implementation of the EJS 3D library. The simplified physics model is described and simulated using a simple constant angular velocity equation. We discuss four computer model design ideas: (1) a simple and realistic…

  7. Rainfall thresholds for the initiation of debris flows at La Honda, California

    USGS Publications Warehouse

    Wilson, R.C.; Wieczorek, G.F.

    1995-01-01

    A simple numerical model, based on the physical analogy of a leaky barrel, can simulate significant features of the interaction between rainfall and shallow-hillslope pore pressures. The leaky-barrel-model threshold is consistent with, but slightly higher than, an earlier, purely empirical, threshold. The number of debris flows triggered by a storm can be related to the time and amount by which the leaky-barrel-model response exceeded the threshold during the storm. -from Authors
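
    A minimal sketch of a leaky-barrel reservoir of the kind described (all parameter values are hypothetical, not those calibrated for La Honda): storage rises with rainfall, drains at a rate proportional to the current level, and debris-flow triggering is flagged when the level exceeds a threshold.

```python
import numpy as np

def leaky_barrel(rain_mm_per_hr, dt_hr=1.0, k_per_hr=0.05, threshold_mm=150.0):
    """Leaky-barrel response: dS/dt = P(t) - k*S, integrated with forward Euler.

    rain_mm_per_hr : array of rainfall intensities for each time step
    k_per_hr       : drainage ("leak") rate constant
    threshold_mm   : storage level above which debris flows are assumed triggered
    Returns the storage series and the number of hours above the threshold.
    """
    storage = np.zeros(len(rain_mm_per_hr) + 1)
    for i, p in enumerate(rain_mm_per_hr):
        storage[i + 1] = storage[i] + dt_hr * (p - k_per_hr * storage[i])
    exceed_hours = dt_hr * np.sum(storage[1:] > threshold_mm)
    return storage, exceed_hours

# Example: a 48-hour storm with a 12-hour burst of intense rain
rain = np.concatenate([np.full(18, 2.0), np.full(12, 15.0), np.full(18, 2.0)])
s, hours = leaky_barrel(rain)
print(f"peak storage {s.max():.0f} mm, {hours:.0f} h above threshold")
```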

  8. A simple integrated assessment approach to global change simulation and evaluation

    NASA Astrophysics Data System (ADS)

    Ogutu, Keroboto; D'Andrea, Fabio; Ghil, Michael

    2016-04-01

    We formulate and study the Coupled Climate-Economy-Biosphere (CoCEB) model, which constitutes the basis of our idealized integrated assessment approach to simulating and evaluating global change. CoCEB is composed of a physical climate module, based on Earth's energy balance, and an economy module that uses endogenous economic growth with physical and human capital accumulation. A biosphere model is likewise under study and will be coupled to the existing two modules. We concentrate on the interactions between the two subsystems: the effect of climate on the economy, via damage functions, and the effect of the economy on climate, via a control of the greenhouse gas emissions. Simple functional forms of the relation between the two subsystems permit simple interpretations of the coupled effects. The CoCEB model is used to make hypotheses on the long-term effect of investment in emission abatement, and on the comparative efficacy of different approaches to abatement, in particular by investing in low carbon technology, in deforestation reduction or in carbon capture and storage (CCS). The CoCEB model is very flexible and transparent, and it allows one to easily formulate and compare different functional representations of climate change mitigation policies. Using different mitigation measures and their cost estimates, as found in the literature, one is able to compare these measures in a coherent way.
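
    As a schematic of the coupling structure only (generic forms common to simple integrated assessment models; CoCEB's exact functional choices may differ), the two-way interaction can be written as

    \[
    C\,\frac{dT}{dt} = F_{2\times}\,\log_2\!\frac{[\mathrm{CO_2}](t)}{[\mathrm{CO_2}]_{pre}} - \lambda T,
    \qquad
    Y_{net}(t) = \bigl[1 - D(T)\bigr]\,Y(t), \quad D(T) = \theta_1 T + \theta_2 T^2,
    \]

    where the economy drives the radiative forcing through emissions (reduced by abatement investment) and the climate feeds back on economic output through the damage function D(T).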

  9. Simple model dielectric functions for insulators

    NASA Astrophysics Data System (ADS)

    Vos, Maarten; Grande, Pedro L.

    2017-05-01

    The Drude dielectric function is a simple way of describing the dielectric function of free electron materials, which have a uniform electron density, in a classical way. The Mermin dielectric function describes a free electron gas, but is based on quantum physics. More complex metals have varying electron densities and are often described by a sum of Drude dielectric functions, the weight of each function being taken proportional to the volume with the corresponding density. Here we describe a slight variation on the Drude dielectric function that describes insulators in a semi-classical way and a form of the Levine-Louie dielectric function including a relaxation time that does the same within the framework of quantum physics. In the optical limit the semi-classical description of an insulator and the quantum physics description coincide, in the same way as the Drude and Mermin dielectric function coincide in the optical limit for metals. There is a simple relation between the coefficients used in the classical and quantum approaches, a relation that ensures that the obtained dielectric function corresponds to the right static refractive index. For water we give a comparison of the model dielectric function at non-zero momentum with inelastic X-ray measurements, both at relatively small momenta and in the Compton limit. The Levine-Louie dielectric function including a relaxation time describes the spectra at small momentum quite well, but in the Compton limit there are significant deviations.
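
    For reference, the classical Drude dielectric function mentioned here, for plasmon frequency ω_p and damping (inverse relaxation time) γ, is

    \[
    \varepsilon_D(\omega) = 1 - \frac{\omega_p^2}{\omega\,(\omega + i\gamma)},
    \]

    and, roughly speaking, the insulator variants discussed in the abstract modify this form so that there is no absorption below the band gap while the correct static refractive index is still recovered in the optical limit.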

  10. A Simple Mechanical Model for the Isotropic Harmonic Oscillator

    ERIC Educational Resources Information Center

    Nita, Gelu M.

    2010-01-01

    A constrained elastic pendulum is proposed as a simple mechanical model for the isotropic harmonic oscillator. The conceptual and mathematical simplicity of this model recommends it as an effective pedagogical tool in teaching basic physics concepts at advanced high school and introductory undergraduate course levels. (Contains 2 figures.)

  11. Implementing Computer Based Laboratories

    NASA Astrophysics Data System (ADS)

    Peterson, David

    2001-11-01

    Physics students at Francis Marion University will complete several required laboratory exercises utilizing computer-based Vernier probes. The simple pendulum, the acceleration due to gravity, simple harmonic motion, radioactive half lives, and radiation inverse square law experiments will be incorporated into calculus-based and algebra-based physics courses. Assessment of student learning and faculty satisfaction will be carried out by surveys and test results. Cost effectiveness and time effectiveness assessments will be presented. Majors in Computational Physics, Health Physics, Engineering, Chemistry, Mathematics and Biology take these courses, and assessments will be categorized by major. To enhance the computer skills of students enrolled in the courses, MAPLE will be used for further analysis of the data acquired during the experiments. Assessment of these enhancement exercises will also be presented.

  12. Examining the Crossover from the Hadronic to Partonic Phase in QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu Mingmei; Yu Meiling; Liu Lianshou

    2008-03-07

    A mechanism, consistent with color confinement, for the transition between perturbative and physical vacua during the gradual crossover from the hadronic to partonic phase is proposed. The essence of this mechanism is the appearance and growth of grape-shaped regions of perturbative vacuum inside the physical one. A percolation model based on simple dynamics for parton delocalization is constructed to exhibit this mechanism. The crossover from hadronic matter to sQGP (strongly coupled quark-gluon plasma), as well as the transition from sQGP to weakly coupled quark-gluon plasma with increasing temperature, is successfully described by using this model.

  13. Accuracy analysis of automodel solutions for Lévy flight-based transport: from resonance radiative transfer to a simple general model

    NASA Astrophysics Data System (ADS)

    Kukushkin, A. B.; Sdvizhenskii, P. A.

    2017-12-01

    The results of accuracy analysis of automodel solutions for Lévy flight-based transport on a uniform background are presented. These approximate solutions have been obtained for Green's function of the following equations: the non-stationary Biberman-Holstein equation for three-dimensional (3D) radiative transfer in plasma and gases, for various (Doppler, Lorentz, Voigt and Holtsmark) spectral line shapes, and the 1D transport equation with a simple long-tailed step-length probability distribution function with various power-law exponents. The results suggest the possibility of substantial extension of the developed method of automodel solution to other fields far beyond physics.
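
    The long-tailed step-length distribution underlying such Lévy-flight transport has the generic power-law form (notation ours)

    \[
    p(\ell) \propto \ell^{-(1+\alpha)}, \qquad \ell \to \infty, \quad 0 < \alpha < 2,
    \]

    for which the step-length variance diverges, so the Green's function never approaches an ordinary diffusive (Gaussian) limit and instead develops the superdiffusive, self-similar (automodel) scaling exploited by the approximate solutions.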

  14. A simple physical model for X-ray burst sources

    NASA Technical Reports Server (NTRS)

    Joss, P. C.; Rappaport, S.

    1977-01-01

    In connection with information considered by Illarionov and Sunyaev (1975) and van den Heuvel (1975), a simple physical model for an X-ray burst source in the galactic disk is proposed. The model includes an unevolved OB star with a relatively weak stellar wind and a compact object in a close binary system. For some reason, the stellar wind from the OB star is unable to accrete steadily onto the compact object. When the stellar wind is sufficiently weak, the compact object accretes irregularly, leading to X-ray bursts.

  15. Synthetic Earthquake Statistics From Physical Fault Models for the Lower Rhine Embayment

    NASA Astrophysics Data System (ADS)

    Brietzke, G. B.; Hainzl, S.; Zöller, G.

    2012-04-01

    As of today, seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they fail to provide a link between the observed seismicity and the underlying physical processes. Solving a state-of-the-art, fully dynamic description of all relevant physical processes in an earthquake fault system is likely not useful, since it comes with a large number of degrees of freedom, poor constraints on its model parameters and a huge computational effort. Here, quasi-static and quasi-dynamic physical fault simulators provide a compromise between physical completeness and computational affordability, and aim at providing a link between basic physical concepts and the statistics of seismicity. Within the framework of quasi-static and quasi-dynamic earthquake simulators we investigate a model of the Lower Rhine Embayment (LRE) that is based upon seismological and geological data. We present and discuss statistics of the spatio-temporal behavior of the generated synthetic earthquake catalogs with respect to simplification (e.g. simple two-fault cases) as well as to complication (e.g. hidden faults, geometric complexity, heterogeneities of constitutive parameters).

  16. Design of Soil Salinity Policies with Tinamit, a Flexible and Rapid Tool to Couple Stakeholder-Built System Dynamics Models with Physically-Based Models

    NASA Astrophysics Data System (ADS)

    Malard, J. J.; Baig, A. I.; Hassanzadeh, E.; Adamowski, J. F.; Tuy, H.; Melgar-Quiñonez, H.

    2016-12-01

    Model coupling is a crucial step to constructing many environmental models, as it allows for the integration of independently-built models representing different system sub-components to simulate the entire system. Model coupling has been of particular interest in combining socioeconomic System Dynamics (SD) models, whose visual interface facilitates their direct use by stakeholders, with more complex physically-based models of the environmental system. However, model coupling processes are often cumbersome and inflexible and require extensive programming knowledge, limiting their potential for continued use by stakeholders in policy design and analysis after the end of the project. Here, we present Tinamit, a flexible Python-based model-coupling software tool whose easy-to-use API and graphical user interface make the coupling of stakeholder-built SD models with physically-based models rapid, flexible and simple for users with limited to no coding knowledge. The flexibility of the system allows end users to modify the SD model as well as the linking variables between the two models themselves with no need for recoding. We use Tinamit to couple a stakeholder-built socioeconomic model of soil salinization in Pakistan with the physically-based soil salinity model SAHYSMOD. As climate extremes increase in the region, policies to slow or reverse soil salinity buildup are increasing in urgency and must take both socioeconomic and biophysical spheres into account. We use the Tinamit-coupled model to test the impact of integrated policy options (economic and regulatory incentives to farmers) on soil salinity in the region in the face of future climate change scenarios. Use of Tinamit allowed for rapid and flexible coupling of the two models and enables the end user to continue making model structure and policy changes. In addition, the clear interface (in contrast to most model coupling code) makes the final coupled model easily accessible to stakeholders with limited technical background.
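
    A minimal sketch of the coupling pattern described, as a generic time-stepped exchange of linking variables (the class and method names below are hypothetical illustrations, not Tinamit's actual API, and the rules and numbers are purely invented):

```python
class SDModel:
    """Stand-in for a stakeholder-built system dynamics model."""
    def __init__(self):
        self.irrigation_policy = 0.8   # hypothetical policy lever (0..1)

    def step(self, salinity):
        # Farmers reduce irrigation as salinity rises (illustrative rule only)
        self.irrigation_policy = max(0.2, 0.8 - 0.05 * salinity)
        return self.irrigation_policy


class PhysicalModel:
    """Stand-in for a physically based soil salinity model."""
    def __init__(self):
        self.salinity = 4.0            # hypothetical initial salinity (dS/m)

    def step(self, irrigation):
        # Salinity builds up faster under heavier irrigation (illustrative)
        self.salinity += 0.3 * irrigation - 0.1
        return self.salinity


# Time-stepped exchange of the linking variables between the two sub-models
sd, phys = SDModel(), PhysicalModel()
salinity = phys.salinity
for year in range(10):
    irrigation = sd.step(salinity)       # socioeconomic response
    salinity = phys.step(irrigation)     # biophysical response
    print(f"year {year}: irrigation={irrigation:.2f}, salinity={salinity:.2f}")
```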

  17. Exploring Physics with Computer Animation and PhysGL

    NASA Astrophysics Data System (ADS)

    Bensky, T. J.

    2016-10-01

    This book shows how the web-based PhysGL programming environment (http://physgl.org) can be used to teach and learn elementary mechanics (physics) using simple coding exercises. The book's theme is that the lessons encountered in such a course can be used to generate physics-based animations, providing students with compelling and self-made visuals to aid their learning. Topics presented are parallel to those found in a traditional physics text, making for straightforward integration into a typical lecture-based physics course. Users will appreciate the ease with which compelling OpenGL-based graphics and animations can be produced using PhysGL, as well as its clean, simple language constructs. The author argues that coding should be a standard part of lower-division STEM courses, and provides many anecdotal experiences and observations, including the observed benefits of the coding work.

  18. Modeling of dynamic effects of a low power laser beam

    NASA Technical Reports Server (NTRS)

    Lawrence, George N.; Scholl, Marija S.; Khatib, AL

    1988-01-01

    Methods of modeling some of the dynamic effects involved in laser beam propagation through the atmosphere are addressed with emphasis on the development of simple but accurate models which are readily implemented in a physical optics code. A space relay system with a ground based laser facility is considered as an example. The modeling of such characteristic phenomena as laser output distribution, flat and curved mirrors, diffraction propagation, atmospheric effects (aberration and wind shear), adaptive mirrors, jitter, and time integration of power on target, is discussed.

  19. Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cetiner, Mustafa Sacit; none,; Flanagan, George F.

    2014-07-30

    An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster than real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C+, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.

  20. Physically based model for extracting dual permeability parameters using non-Newtonian fluids

    NASA Astrophysics Data System (ADS)

    Abou Najm, M. R.; Basset, C.; Stewart, R. D.; Hauswirth, S.

    2017-12-01

    Dual permeability models are effective for the assessment of flow and transport in structured soils with two dominant structures. The major challenge for those models remains the ability to determine appropriate and unique parameters through affordable, simple, and non-destructive methods. This study investigates the use of water and a non-Newtonian fluid in saturated flow experiments to derive physically-based parameters required for improved flow predictions using dual permeability models. We assess the ability of these two fluids to accurately estimate the representative pore sizes in dual-domain soils, by determining the effective pore sizes of macropores and micropores. We developed two sub-models that solve for the effective macropore size assuming either cylindrical (e.g., biological pores) or planar (e.g., shrinkage cracks and fissures) pore geometries, with the micropores assumed to be represented by a single effective radius. Furthermore, the model solves for the percent contribution to flow (w_i) corresponding to the representative macro and micro pores. A user-friendly solver was developed to numerically solve the system of equations, given that relevant non-Newtonian viscosity models lack forms conducive to analytical integration. The proposed dual-permeability model is a unique attempt to derive physically based parameters capable of measuring dual hydraulic conductivities, and therefore may be useful in reducing parameter uncertainty and improving hydrologic model predictions.
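
    A sketch of the underlying idea under simplifying assumptions of ours (laminar flow in cylindrical pores and a power-law, Ostwald-de Waele rheology for the non-Newtonian fluid; not the authors' full two-domain formulation): because the per-pore flux scales as R^4 for a Newtonian fluid but as R^(3+1/n) for a power-law fluid, measurements with the two fluids weight pore radii differently, which is what makes the effective radii separable. The numbers below are illustrative only.

```python
import numpy as np
from scipy.optimize import brentq

def q_newtonian(R, dPdL, mu):
    """Hagen-Poiseuille flux through one cylindrical pore of radius R."""
    return np.pi * R**4 * dPdL / (8.0 * mu)

def q_powerlaw(R, dPdL, K, n):
    """Flux of a power-law fluid (consistency K, index n) through one pore."""
    return np.pi * R**3 * (n / (3.0 * n + 1.0)) * (dPdL * R / (2.0 * K))**(1.0 / n)

# Hypothetical "measured" fluxes, generated here from a known macropore radius
# and number density so the inversion can be checked (illustrative SI values).
mu, K, n, dPdL = 1e-3, 0.5, 0.6, 1e4
R_true, N_true = 0.5e-3, 2.0e4               # radius (m), pores per m^2
Q_water = N_true * q_newtonian(R_true, dPdL, mu)
Q_poly  = N_true * q_powerlaw(R_true, dPdL, K, n)

# The flux ratio is independent of the pore density, so solve 1D for the radius
def radius_residual(R):
    return q_newtonian(R, dPdL, mu) / q_powerlaw(R, dPdL, K, n) - Q_water / Q_poly

R_est = brentq(radius_residual, 1e-6, 1e-2)          # bracket in metres
N_est = Q_water / q_newtonian(R_est, dPdL, mu)
print(f"recovered macropore radius {R_est*1e3:.2f} mm, density {N_est:.0f} /m^2")
```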

  1. SIGNUM: A Matlab, TIN-based landscape evolution model

    NASA Astrophysics Data System (ADS)

    Refice, A.; Giachetta, E.; Capolongo, D.

    2012-08-01

    Several numerical landscape evolution models (LEMs) have been developed to date, and many are available as open source codes. Most are written in efficient programming languages such as Fortran or C, but often require additional code efforts to plug in to more user-friendly data analysis and/or visualization tools to ease interpretation and scientific insight. In this paper, we present an effort to port a common core of accepted physical principles governing landscape evolution directly into a high-level language and data analysis environment such as Matlab. SIGNUM (acronym for Simple Integrated Geomorphological Numerical Model) is an independent and self-contained Matlab, TIN-based landscape evolution model, built to simulate topography development at various space and time scales. SIGNUM is presently capable of simulating hillslope processes such as linear and nonlinear diffusion, fluvial incision into bedrock, spatially varying surface uplift which can be used to simulate changes in base level, thrust and faulting, as well as effects of climate changes. Although based on accepted and well-known processes and algorithms in its present version, it is built with a modular structure, which makes it easy to modify and upgrade the simulated physical processes to suit virtually any user's needs. The code is conceived as an open-source project, and is thus an ideal tool for both research and didactic purposes, thanks to the high-level nature of the Matlab environment and its popularity among the scientific community. In this paper the simulation code is presented together with some simple examples of surface evolution, and guidelines for development of new modules and algorithms are proposed.

  2. A Ball Pool Model to Illustrate Higgs Physics to the Public

    ERIC Educational Resources Information Center

    Organtini, Giovanni

    2017-01-01

    A simple model is presented to explain Higgs boson physics to the grand public. The model consists of a children's ball pool representing a Universe filled with a certain amount of the Higgs field. The model is suitable for usage as a hands-on tool in scientific exhibits and provides a clear explanation of almost all the aspects of the physics of…

  3. Soil mechanics: breaking ground.

    PubMed

    Einav, Itai

    2007-12-15

    In soil mechanics, student's models are classified as simple models that teach us unexplained elements of behaviour; examples are the Cam clay constitutive models of critical state soil mechanics (CSSM). 'Engineer's models' are models that elaborate the theory to fit more behavioural trends; this is usually done by adding fitting parameters to the student's models. Can currently unexplained behavioural trends of soil be explained without adding fitting parameters to CSSM models, by developing alternative student's models based on modern theories? Here I apply an alternative theory to CSSM, called 'breakage mechanics', and develop a simple student's model for sand. Its unique and distinctive feature is the use of an energy balance equation that connects grain size reduction to consumption of energy, which enables us to predict how grain size distribution (gsd) evolves, an unprecedented capability in constitutive modelling. With only four parameters, the model is physically clarifying what CSSM cannot for sand: the dependency of yielding and critical state on the initial gsd and void ratio.

  4. Mathematical prediction of core body temperature from environment, activity, and clothing: The heat strain decision aid (HSDA).

    PubMed

    Potter, Adam W; Blanchard, Laurie A; Friedl, Karl E; Cadarette, Bruce S; Hoyt, Reed W

    2017-02-01

    Physiological models provide useful summaries of complex interrelated regulatory functions. These can often be reduced to simple input requirements and simple predictions for pragmatic applications. This paper demonstrates this modeling efficiency by tracing the development of one such simple model, the Heat Strain Decision Aid (HSDA), originally developed to address Army needs. The HSDA, which derives from the Givoni-Goldman equilibrium body core temperature prediction model, uses 16 inputs from four elements: individual characteristics, physical activity, clothing biophysics, and environmental conditions. These inputs are used to mathematically predict core temperature (T_c) rise over time and can estimate water turnover from sweat loss. Based on a history of military applications such as derivation of training and mission planning tools, we conclude that the HSDA model is a robust integration of physiological rules that can guide a variety of useful predictions. The HSDA model is limited to generalized predictions of thermal strain and does not provide individualized predictions that could be obtained from physiological sensor data-driven predictive models. This fully transparent physiological model should be improved and extended with new findings and new challenging scenarios. Published by Elsevier Ltd.

  5. Development of a new flux splitting scheme

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Steffen, Christopher J., Jr.

    1991-01-01

    The use of a new splitting scheme, the advection upstream splitting method, for model aerodynamic problems where Van Leer and Roe schemes had failed previously is discussed. The present scheme is based on splitting in which the convective and pressure terms are separated and treated differently depending on the underlying physical conditions. The present method is found to be both simple and accurate.
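
    A minimal sketch of the kind of splitting described, as a basic AUSM-type 1D flux in Python (the split-Mach-number and split-pressure polynomials follow the commonly published form and may differ in detail from the scheme as originally presented):

```python
import numpy as np

def ausm_flux(rhoL, uL, pL, rhoR, uR, pR, gamma=1.4):
    """1D AUSM-type interface flux: convective terms upwinded with a split
    interface Mach number, pressure term split separately."""
    aL, aR = np.sqrt(gamma * pL / rhoL), np.sqrt(gamma * pR / rhoR)
    ML, MR = uL / aL, uR / aR

    # Split Mach numbers (subsonic: quadratic polynomials; supersonic: pure upwind)
    Mp = 0.25 * (ML + 1.0) ** 2 if abs(ML) <= 1.0 else 0.5 * (ML + abs(ML))
    Mm = -0.25 * (MR - 1.0) ** 2 if abs(MR) <= 1.0 else 0.5 * (MR - abs(MR))
    M_half = Mp + Mm

    # Split pressures
    pp = 0.25 * pL * (ML + 1.0) ** 2 * (2.0 - ML) if abs(ML) <= 1.0 \
        else pL * (ML + abs(ML)) / (2.0 * ML)
    pm = 0.25 * pR * (MR - 1.0) ** 2 * (2.0 + MR) if abs(MR) <= 1.0 \
        else pR * (MR - abs(MR)) / (2.0 * MR)
    p_half = pp + pm

    # Advected quantities rho*a*(1, u, H), with H the total enthalpy
    HL = gamma / (gamma - 1.0) * pL / rhoL + 0.5 * uL ** 2
    HR = gamma / (gamma - 1.0) * pR / rhoR + 0.5 * uR ** 2
    phiL = np.array([rhoL * aL, rhoL * aL * uL, rhoL * aL * HL])
    phiR = np.array([rhoR * aR, rhoR * aR * uR, rhoR * aR * HR])

    # Upwind the convective flux on the sign of the interface Mach number
    conv = M_half * (phiL if M_half >= 0.0 else phiR)
    return conv + np.array([0.0, p_half, 0.0])

# Example: flux across a Sod-like interface
print(ausm_flux(1.0, 0.0, 1.0, 0.125, 0.0, 0.1))
```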

  6. Development of a new flux splitting scheme

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Steffen, Christopher J., Jr.

    1991-01-01

    The successful use of a novel splitting scheme, the advection upstream splitting method, for model aerodynamic problems where Van Leer and Roe schemes had failed previously is discussed. The present scheme is based on splitting in which the convective and pressure terms are separated and treated differently depending on the underlying physical conditions. The present method is found to be both simple and accurate.

  7. Physics in the Ionosphere.

    ERIC Educational Resources Information Center

    Murket, A. J.

    1979-01-01

    Develops a simple model of radio wave propagation and illustrates how basic physical concepts such as refractive index, refraction, reflection and dispersion can be applied to a situation normally not met in introductory physics courses. (Author/GA)

  8. Expectations for inflationary observables: simple or natural?

    NASA Astrophysics Data System (ADS)

    Musoke, Nathan; Easther, Richard

    2017-12-01

    We describe the general inflationary dynamics that can arise with a single, canonically coupled field where the inflaton potential is a 4th-order polynomial. This scenario yields a wide range of combinations of the empirical spectral observables, n_s, r and α_s. However, not all combinations are possible and next-generation cosmological experiments have the ability to rule out all inflationary scenarios based on this potential. Further, we construct inflationary priors for this potential based on physically motivated choices for its free parameters. These can be used to determine the degree of tuning associated with different combinations of n_s, r and α_s and will facilitate treatments of the inflationary model selection problem. Finally, we comment on the implications of these results for the naturalness of the overall inflationary paradigm. We argue that ruling out all simple, renormalizable potentials would not necessarily imply that the inflationary paradigm itself was unnatural, but that this eventuality would increase the importance of building inflationary scenarios in the context of broader paradigms of ultra-high energy physics.
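
    For a single canonically coupled field, the observables quoted here follow from the potential through the standard slow-roll relations (textbook results, not specific to this paper):

    \[
    \epsilon = \frac{M_{\mathrm{Pl}}^2}{2}\left(\frac{V'}{V}\right)^2, \qquad
    \eta = M_{\mathrm{Pl}}^2\,\frac{V''}{V}, \qquad
    \xi^2 = M_{\mathrm{Pl}}^4\,\frac{V'\,V'''}{V^2},
    \]
    \[
    n_s \simeq 1 - 6\epsilon + 2\eta, \qquad r \simeq 16\epsilon, \qquad
    \alpha_s \simeq -24\epsilon^2 + 16\epsilon\eta - 2\xi^2,
    \]

    evaluated where the pivot scale leaves the horizon; scanning the coefficients of the quartic potential then maps out the attainable region in (n_s, r, α_s).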

  9. Linear stiff string vibrations in musical acoustics: Assessment and comparison of models.

    PubMed

    Ducceschi, Michele; Bilbao, Stefan

    2016-10-01

    Strings are amongst the most common elements found in musical instruments and an appropriate physical description of string dynamics is essential to modelling, analysis, and simulation. For linear vibration in a single polarisation, the most common model is based on the Euler-Bernoulli beam equation under tension. In spite of its simple form, such a model gives unbounded phase and group velocities at large wavenumbers, and such behaviour may be interpreted as unphysical. The Timoshenko model has, therefore, been employed in more recent works to overcome this shortcoming. This paper presents a third model based on the shear beam equations. The three models are here assessed and compared with regard to the perceptual considerations in musical acoustics.
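
    For reference, the Euler-Bernoulli stiff string under tension referred to here reads (standard form, our notation)

    \[
    \rho A\,\frac{\partial^2 y}{\partial t^2} = T\,\frac{\partial^2 y}{\partial x^2} - E I\,\frac{\partial^4 y}{\partial x^4},
    \qquad
    \omega^2(k) = \frac{T}{\rho A}\,k^2 + \frac{E I}{\rho A}\,k^4 ,
    \]

    so the phase velocity ω/k grows roughly as k√(EI/ρA) at large wavenumber, which is the unbounded behaviour that motivates the Timoshenko and shear-beam alternatives.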

  10. Intra-individual reaction time variability and all-cause mortality over 17 years: a community-based cohort study.

    PubMed

    Batterham, Philip J; Bunce, David; Mackinnon, Andrew J; Christensen, Helen

    2014-01-01

    Very few studies have examined the association between intra-individual reaction time variability and subsequent mortality. Furthermore, the ability of simple measures of variability to predict mortality has not been compared with that of more complex measures. In a prospective cohort study, 896 community-based Australian adults aged 70+ were interviewed up to four times from 1990 to 2002, with vital status assessed until June 2007. From this cohort, 770-790 participants were included in Cox proportional hazards regression models of survival. Vital status and time in study were used to conduct survival analyses. The mean reaction time and three measures of intra-individual reaction time variability were calculated separately across 20 trials of simple and choice reaction time tasks. Models were adjusted for a range of demographic, physical health and mental health measures. Greater intra-individual simple reaction time variability, as assessed by the raw standard deviation (raw SD), coefficient of variation (CV) or the intra-individual standard deviation (ISD), was strongly associated with an increased hazard of all-cause mortality in adjusted Cox regression models. The mean reaction time had no significant association with mortality. Intra-individual variability in simple reaction time appears to have a robust association with mortality over 17 years. Health professionals such as neuropsychologists may benefit in their detection of neuropathology by supplementing neuropsychiatric testing with the straightforward process of testing simple reaction time and calculating the raw SD or CV.
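
    The two simplest variability measures named here are straightforward to compute; a minimal sketch with illustrative data (the ISD used in such studies is typically computed after removing systematic trends such as practice effects, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical simple reaction times (ms) for one participant over 20 trials
rt = rng.normal(loc=300.0, scale=40.0, size=20)

mean_rt = rt.mean()
raw_sd = rt.std(ddof=1)        # raw standard deviation across trials
cv = raw_sd / mean_rt          # coefficient of variation

print(f"mean={mean_rt:.0f} ms, raw SD={raw_sd:.0f} ms, CV={cv:.3f}")
```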

  11. Phonon scattering in nanoscale systems: lowest order expansion of the current and power expressions

    NASA Astrophysics Data System (ADS)

    Paulsson, Magnus; Frederiksen, Thomas; Brandbyge, Mads

    2006-04-01

    We use the non-equilibrium Green's function method to describe the effects of phonon scattering on the conductance of nano-scale devices. Useful and accurate approximations are developed that both provide (i) computationally simple formulas for large systems and (ii) simple analytical models. In addition, the simple models can be used to fit experimental data and provide physical parameters.

  12. Anticipatory Cognitive Systems: a Theoretical Model

    NASA Astrophysics Data System (ADS)

    Terenzi, Graziano

    This paper deals with the problem of understanding anticipation in biological and cognitive systems. It is argued that a physical theory can be considered as biologically plausible only if it incorporates the ability to describe systems which exhibit anticipatory behaviors. The paper introduces a cognitive level description of anticipation and provides a simple theoretical characterization of anticipatory systems on this level. Specifically, a simple model of a formal anticipatory neuron and a model (i.e. the τ-mirror architecture) of an anticipatory neural network which is based on the former are introduced and discussed. The basic feature of this architecture is that a part of the network learns to represent the behavior of the other part over time, thus constructing an implicit model of its own functioning. As a consequence, the network is capable of self-representation; anticipation, on a macroscopic level, is nothing but a consequence of anticipation on a microscopic level. Some learning algorithms are also discussed together with related experimental tasks and possible integrations. The outcome of the paper is a formal characterization of anticipation in cognitive systems which aims at being incorporated in a comprehensive and more general physical theory.

  13. Experimental Control of Simple Pendulum Model

    ERIC Educational Resources Information Center

    Medina, C.

    2004-01-01

    This paper conveys information about a Physics laboratory experiment for students with some theoretical knowledge about oscillatory motion. Students construct a simple pendulum that behaves as an ideal one, and analyze the influence of the model assumptions on its period. The following aspects are quantitatively analyzed: vanishing friction, small amplitude,…

  14. Role of conceptual models in a physical therapy curriculum: application of an integrated model of theory, research, and clinical practice.

    PubMed

    Darrah, Johanna; Loomis, Joan; Manns, Patricia; Norton, Barbara; May, Laura

    2006-11-01

    The Department of Physical Therapy, University of Alberta, Edmonton, Alberta, Canada, recently implemented a Master of Physical Therapy (MPT) entry-level degree program. As part of the curriculum design, two models were developed, a Model of Best Practice and the Clinical Decision-Making Model. Both models incorporate four key concepts of the new curriculum: 1) the concept that theory, research, and clinical practice are interdependent and inform each other; 2) the importance of client-centered practice; 3) the terminology and philosophical framework of the World Health Organization's International Classification of Functioning, Disability, and Health; and 4) the importance of evidence-based practice. In this article the general purposes of models for learning are described; the two models developed for the MPT program are described; and examples of their use with curriculum design and teaching are provided. Our experiences with both the development and use of models of practice have been positive. The models have provided both faculty and students with a simple, systematic structured framework to organize teaching and learning in the MPT program.

  15. A dual theory of price and value in a meso-scale economic model with stochastic profit rate

    NASA Astrophysics Data System (ADS)

    Greenblatt, R. E.

    2014-12-01

    The problem of commodity price determination in a market-based, capitalist economy has a long and contentious history. Neoclassical microeconomic theories are based typically on marginal utility assumptions, while classical macroeconomic theories tend to be value-based. In the current work, I study a simplified meso-scale model of a commodity capitalist economy. The production/exchange model is represented by a network whose nodes are firms, workers, capitalists, and markets, and whose directed edges represent physical or monetary flows. A pair of multivariate linear equations with stochastic input parameters represent physical (supply/demand) and monetary (income/expense) balance. The input parameters yield a non-degenerate profit rate distribution across firms. Labor time and price are found to be eigenvector solutions to the respective balance equations. A simple relation is derived relating the expected value of commodity price to commodity labor content. Results of Monte Carlo simulations are consistent with the stochastic price/labor content relation.

  16. ecode - Electron Transport Algorithm Testing v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian C.; Olson, Aaron J.; Bruss, Donald Eugene

    2016-10-05

    ecode is a Monte Carlo code used for testing algorithms related to electron transport. The code can read basic physics parameters, such as energy-dependent stopping powers and screening parameters. The code permits simple planar geometries of slabs or cubes. Parallelization consists of domain replication, with work distributed at the start of the calculation and statistical results gathered at the end of the calculation. Some basic routines (such as input parsing, random number generation, and statistics processing) are shared with the Integrated Tiger Series codes. A variety of algorithms for uncertainty propagation are incorporated based on the stochastic collocation and stochastic Galerkin methods. These permit uncertainty only in the total and angular scattering cross sections. The code contains algorithms for simulating stochastic mixtures of two materials. The physics is approximate, ranging from mono-energetic and isotropic scattering to screened Rutherford angular scattering and Rutherford energy-loss scattering (simple electron transport models). No production of secondary particles is implemented, and no photon physics is implemented.
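
    As an example of the "simple electron transport models" mentioned, a polar scattering-angle cosine can be drawn from a screened Rutherford cross section by direct inversion of its analytic cumulative distribution (a standard sampling formula; the screening parameter value below is arbitrary):

```python
import numpy as np

def sample_mu_screened_rutherford(eta, rng):
    """Sample cos(theta) from f(mu) proportional to 1/(1 + 2*eta - mu)^2 on [-1, 1].

    eta is the (energy-dependent) screening parameter; the inversion
    mu = 1 - 2*eta*xi / (1 + eta - xi) follows from the analytic CDF.
    """
    xi = rng.random()
    return 1.0 - 2.0 * eta * xi / (1.0 + eta - xi)

rng = np.random.default_rng(1)
eta = 1e-3                                   # arbitrary illustrative value
mus = np.array([sample_mu_screened_rutherford(eta, rng) for _ in range(100_000)])
print(f"mean cos(theta) = {mus.mean():.4f}, min = {mus.min():.4f}")
```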

  17. Effects of capillarity and microtopography on wetland specific yield

    USGS Publications Warehouse

    Sumner, D.M.

    2007-01-01

    Hydrologic models aid in describing water flows and levels in wetlands. Frequently, these models use a specific yield conceptualization to relate water flows to water level changes. Traditionally, a simple conceptualization of specific yield is used, composed of two constant values for above- and below-surface water levels and neglecting the effects of soil capillarity and land surface microtopography. The effects of capillarity and microtopography on specific yield were evaluated at three wetland sites in the Florida Everglades. The effect of capillarity on specific yield was incorporated based on the fillable pore space within a soil moisture profile at hydrostatic equilibrium with the water table. The effect of microtopography was based on areal averaging of topographically varying values of specific yield. The results indicate that a more physically-based conceptualization of specific yield incorporating capillary and microtopographic considerations can be substantially different from the traditional two-part conceptualization, and from simpler conceptualizations incorporating only capillarity or only microtopography. For the sites considered, traditional estimates of specific yield could under- or overestimate the more physically based estimates by a factor of two or more. The results suggest that consideration of both capillarity and microtopography is important to the formulation of specific yield in physically based hydrologic models of wetlands. © 2007, The Society of Wetland Scientists.
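
    A minimal sketch of the two corrections described, under assumptions of ours (a van Genuchten retention curve and hydrostatic equilibrium, for which the capillarity-corrected specific yield at water-table depth d reduces to θ_s − θ(h = d); all parameter values are illustrative, not those of the Everglades sites):

```python
import numpy as np

def theta_vg(h, theta_r=0.05, theta_s=0.45, alpha=2.0, n=1.6):
    """van Genuchten water content at suction head h (m), h >= 0."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

def specific_yield(depth_to_water, theta_s=0.45):
    """Capillarity-corrected Sy = theta_s - theta(h = depth to water)."""
    d = np.asarray(depth_to_water, dtype=float)
    sy = theta_s - theta_vg(np.maximum(d, 0.0), theta_s=theta_s)
    return np.where(d <= 0.0, 1.0, sy)   # open water above the land surface

# Microtopography: areal average over a distribution of land-surface elevations
rng = np.random.default_rng(2)
surface = rng.normal(loc=0.0, scale=0.10, size=10_000)   # elevations (m, datum)
water_level = -0.20                                      # water level (m, datum)
sy_area = specific_yield(surface - water_level).mean()
print(f"areally averaged specific yield = {sy_area:.2f}")
```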

  18. Gain degradation and amplitude scintillation due to tropospheric turbulence

    NASA Technical Reports Server (NTRS)

    Theobold, D. M.; Hodge, D. B.

    1978-01-01

    It is shown that a simple physical model is adequate for the prediction of the long term statistics of both the reduced signal levels and increased peak-to-peak fluctuations. The model is based on conventional atmospheric turbulence theory and incorporates both amplitude and angle of arrival fluctuations. This model predicts the average variance of signals observed under clear air conditions at low elevation angles on earth-space paths at 2, 7.3, 20 and 30 GHz. Design curves based on this model for gain degradation, realizable gain, amplitude fluctuation as a function of antenna aperture size, frequency, and either terrestrial path length or earth-space path elevation angle are presented.

  19. Physical activity promotion in business and industry: evidence, context, and recommendations for a national plan.

    PubMed

    Pronk, Nicolaas P

    2009-11-01

    The contemporary workplace setting is in need of interventions that effectively promote higher levels of occupational and habitual physical activity. It is the purpose of this paper to outline an evidence-based approach to promote physical activity in the business and industry sector in support of a National Physical Activity Plan. Comprehensive literature searches identified systematic reviews, comprehensive reviews, and consensus documents on the impact of physical activity interventions in the business and industry sector. A framework for action and priority recommendations for practice and research were generated. Comprehensive, multicomponent work-site programs that include physical activity components generate significant improvements in health, reduce absenteeism and sick leave, and can generate a positive financial return. Specific evidence-based physical activity interventions are presented. Recommendations for practice include implementing comprehensive, multicomponent programs that make physical activity interventions possible, simple, rewarding and relevant in the context of a social-ecological model. The business and industry sector has significant opportunities to improve physical activity among employees, their dependents, and the community at-large and to reap important benefits related to worker health and business performance.

  20. Physical Activity Promotion in Business and Industry: Evidence, Context, and Recommendations for a National Plan.

    PubMed

    Pronk, Nicolaas P

    2009-11-01

    The contemporary workplace setting is in need of interventions that effectively promote higher levels of occupational and habitual physical activity. It is the purpose of this paper to outline an evidence-based approach to promote physical activity in the business and industry sector in support of a National Physical Activity Plan. Comprehensive literature searches identified systematic reviews, comprehensive reviews, and consensus documents on the impact of physical activity interventions in the business and industry sector. A framework for action and priority recommendations for practice and research were generated. Comprehensive, multicomponent worksite programs that include physical activity components generate significant improvements in health, reduce absenteeism and sick leave, and can generate a positive financial return. Specific evidence-based physical activity interventions are presented. Recommendations for practice include implementing comprehensive, multicomponent programs that make physical activity interventions possible, simple, rewarding and relevant in the context of a social-ecological model. The business and industry sector has significant opportunities to improve physical activity among employees, their dependents, and the community at-large and to reap important benefits related to worker health and business performance.

  1. A geostationary Earth orbit satellite model using Easy Java Simulation

    NASA Astrophysics Data System (ADS)

    Wee, Loo Kang; Hwee Goh, Giam

    2013-01-01

    We develop an Easy Java Simulation (EJS) model for students to visualize geostationary orbits near Earth, modelled using a Java 3D implementation of the EJS 3D library. The simplified physics model is described and simulated using a simple constant angular velocity equation. We discuss four computer model design ideas: (1) a simple and realistic 3D view and associated learning in the real world; (2) comparative visualization of permanent geostationary satellites; (3) examples of non-geostationary orbits of different rotation senses, periods and planes; and (4) an incorrect physics model for conceptual discourse. General feedback from the students has been relatively positive, and we hope teachers will find the computer model useful in their own classes.
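
    The geostationary radius behind the constant-angular-velocity model follows from equating gravitational and centripetal acceleration at the sidereal rotation rate; a quick check with standard constants:

```python
import math

GM = 3.986004418e14        # Earth's gravitational parameter (m^3/s^2)
T_SIDEREAL = 86164.0905    # sidereal day (s)
R_EARTH = 6.371e6          # mean Earth radius (m)

omega = 2.0 * math.pi / T_SIDEREAL          # constant angular velocity (rad/s)
r_geo = (GM / omega**2) ** (1.0 / 3.0)      # orbital radius from GM/r^2 = omega^2 * r

print(f"omega = {omega:.3e} rad/s")
print(f"geostationary radius   = {r_geo / 1e3:.0f} km")
print(f"geostationary altitude = {(r_geo - R_EARTH) / 1e3:.0f} km")
```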

  2. Single-particle dynamics of the Anderson model: a local moment approach

    NASA Astrophysics Data System (ADS)

    Glossop, Matthew T.; Logan, David E.

    2002-07-01

    A non-perturbative local moment approach to single-particle dynamics of the general asymmetric Anderson impurity model is developed. The approach encompasses all energy scales and interaction strengths. It captures thereby strong coupling Kondo behaviour, including the resultant universal scaling behaviour of the single-particle spectrum; as well as the mixed valence and essentially perturbative empty orbital regimes. The underlying approach is physically transparent and innately simple, and as such is capable of practical extension to lattice-based models within the framework of dynamical mean-field theory.

  3. A Simplified Model for Detonation Based Pressure-Gain Combustors

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.

    2010-01-01

    A time-dependent model is presented which simulates the essential physics of a detonative or otherwise constant volume, pressure-gain combustor for gas turbine applications. The model utilizes simple, global thermodynamic relations to determine an assumed instantaneous and uniform post-combustion state in one of many envisioned tubes comprising the device. A simple, second order, non-upwinding computational fluid dynamic algorithm is then used to compute the (continuous) flowfield properties during the blowdown and refill stages of the periodic cycle which each tube undergoes. The exhausted flow is averaged to provide mixed total pressure and enthalpy which may be used as a cycle performance metric for benefits analysis. The simplicity of the model allows for nearly instantaneous results when implemented on a personal computer. The results compare favorably with higher resolution numerical codes which are more difficult to configure, and more time consuming to operate.
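
    The "simple, global thermodynamic relations" for the assumed uniform constant-volume combustion state can be illustrated for a calorically perfect gas, where heat addition q at constant volume raises the temperature by q/c_v and the pressure rises in proportion (a sketch with illustrative numbers, not the NPSS model itself):

```python
# Ideal constant-volume heat addition for a calorically perfect gas
gamma = 1.4
R = 287.0                     # gas constant, J/(kg K)
cv = R / (gamma - 1.0)        # ~717.5 J/(kg K)

T1, p1 = 800.0, 400e3         # pre-combustion state (K, Pa), illustrative
q = 800e3                     # heat released per unit mass (J/kg), illustrative

T2 = T1 + q / cv              # constant-volume temperature rise
p2 = p1 * T2 / T1             # ideal gas at fixed density: p proportional to T

print(f"T2 = {T2:.0f} K, p2/p1 = {p2 / p1:.2f}")
# A constant-pressure (Brayton) combustor with the same q would give p2/p1 = 1,
# which is the origin of the "pressure gain" benefit.
```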

  4. VLP Simulation: An Interactive Simple Virtual Model to Encourage Geoscience Skill about Volcano

    NASA Astrophysics Data System (ADS)

    Hariyono, E.; Liliasari; Tjasyono, B.; Rosdiana, D.

    2017-09-01

    The purpose of this study was to describe physics students' predicting skills after geoscience learning with the VLP (Volcano Learning Project) simulation. The research was conducted with 24 physics students at a state university in East Java, Indonesia. The method used is descriptive analysis based on students' answers related to predicting skills about volcanic activity. The results showed that learning with the VLP simulation has great potential to develop physics students' predicting skills. Students were able to explain volcanic activity logically and to predict potential eruptions based on visualization of real data. It can be concluded that the VLP simulation is well suited to physics students' needs in developing geoscience skills and is recommended as an alternative medium for educating society about volcanic phenomena.

  5. Non-traditional Physics-based Inverse Approaches for Determining a Buried Object’s Location

    DTIC Science & Technology

    2008-09-01

    parameterization of its time-decay curve) in dipole models (Pasion and Oldenburg, 2001) or the amplitudes of responding magnetic sources in the NSMS ... commonly in use. According to the simple dipole model (Pasion and Oldenburg, 2001), the secondary magnetic field due to the dipole m is $\mathbf{B} = \frac{\mu_0}{4\pi r^3}\left[3(\mathbf{m}\cdot\hat{\mathbf{r}})\,\hat{\mathbf{r}} - \mathbf{m}\right]$. ... L. R. Pasion and D. W. Oldenburg (2001), "A discrimination algorithm for UXO using time domain electromagnetics," J. Environ

  6. Non-compact Groups, Coherent States, Relativistic Wave Equations and the Harmonic Oscillator II: Physical and Geometrical Considerations

    NASA Astrophysics Data System (ADS)

    Cirilo-Lombardo, Diego Julio

    2009-04-01

    The physical meaning of the particularly simple non-degenerate supermetric, introduced in the previous part by the authors, is elucidated and the possible connection with processes of topological origin in high energy physics is analyzed and discussed. A new possible mechanism for the localization of the fields in a particular sector of the supermanifold is proposed, and the similarities and differences with a 5-dimensional warped model are shown. The relation with gauge theories of supergravity based on the OSP(1/4) group is explicitly given and the possible original action is presented. We also show that in this non-degenerate super-model the physical states, in contrast with the basic states, are observables and can be interpreted as tomographic projections or generalized representations of operators belonging to the metaplectic group Mp(2). The advantage of geometrical formulations based on non-degenerate super-manifolds over degenerate ones is pointed out, and the description and the analysis of some interesting aspects of the simplest Riemannian superspaces are presented from the point of view of the possible vacuum solutions.

  7. An Emerging Role for Numerical Modelling in Wildfire Behavior Research: Explorations, Explanations, and Hypothesis Development

    NASA Astrophysics Data System (ADS)

    Linn, R.; Winterkamp, J.; Canfield, J.; Sauer, J.; Dupuy, J. L.; Finney, M.; Hoffman, C.; Parsons, R.; Pimont, F.; Sieg, C.; Forthofer, J.

    2014-12-01

    The human capacity for altering the water cycle has been well documented. Given the expected changes due to population, income growth, biofuels, climate, and associated land use change, there remains great uncertainty both in the degree of increased pressure on land and water resources and in our ability to adapt to these changes. Alleviating regional shortages in water supply can be carried out in a spatial hierarchy through i) direct trade of water between all regions, ii) development of infrastructure to improve water availability within regions (e.g. impounding rivers), iii) inter-basin hydrological transfer between neighboring regions and iv) virtual water trade. These adaptation strategies can be managed via market trade in water and commodities to identify those strategies most likely to be adopted. This work combines the physically-based University of New Hampshire Water Balance Model (WBM) with the macro-scale Purdue University Simplified International Model of agricultural Prices Land use and the Environment (SIMPLE) to explore the interaction of supply and demand for fresh water globally. In this work we use a newly developed grid cell-based version of SIMPLE to achieve a more direct connection between the two modeling paradigms of physically-based models and the optimization-driven approaches characteristic of economic models. We explore questions related to the global and regional impact of water scarcity and water surplus on the ability of regions to adapt to future change. Allowing for a variety of adaptation strategies, such as direct trade of water and expanding the built water infrastructure, as well as indirect trade in commodities, will reduce overall global water stress and, in some regions, significantly reduce their vulnerability to these future changes.

  8. A student-centered approach for developing active learning: the construction of physical models as a teaching tool in medical physiology.

    PubMed

    Rezende-Filho, Flávio Moura; da Fonseca, Lucas José Sá; Nunes-Souza, Valéria; Guedes, Glaucevane da Silva; Rabelo, Luiza Antas

    2014-09-15

    Teaching physiology, a complex and constantly evolving subject, is not a simple task. A considerable body of knowledge about cognitive processes and teaching and learning methods has accumulated over the years, helping teachers to determine the most efficient way to teach, and highlighting students' active participation as a means to improve learning outcomes. In this context, this paper describes and qualitatively analyzes an experience of a student-centered teaching-learning methodology based on the construction of physiological-physical models, focusing on their possible application in the practice of teaching physiology. After attending physiology classes and reviewing the literature, students, divided into small groups, built physiological-physical models predominantly using low-cost materials, for studying different topics in physiology. Groups were followed by monitors and guided by teachers during the whole process, finally presenting the results in a Symposium on Integrative Physiology. Throughout the proposed activities, students were capable of efficiently creating physiological-physical models (118 in total) highly representative of different physiological processes. The implementation of the proposal indicated that students successfully achieved active learning and meaningful learning in physiology while addressing multiple learning styles. The proposed method has proved to be an attractive, accessible and relatively simple approach to facilitate the physiology teaching-learning process, while facing difficulties imposed by recent requirements, especially those relating to the use of experimental animals and professional training guidelines.

  9. Rolling friction—models and experiment. An undergraduate student project

    NASA Astrophysics Data System (ADS)

    Vozdecký, L.; Bartoš, J.; Musilová, J.

    2014-09-01

    In this paper the rolling friction (rolling resistance) model is studied theoretically and experimentally at the level of undergraduate fundamental general physics courses. Rolling motions of a cylinder along horizontal or inclined planes are studied by simple experiments, measuring deformations of the underlay or of the rolling body. The rolling of a hard cylinder on a soft underlay as well as of a soft cylinder on a hard underlay is studied. The experimental data are treated by the open source software Tracker, appropriate for use at the undergraduate level of physics. Interpretation of the results is based on elementary considerations comprehensible to beginning university students. It appears that the commonly accepted model of rolling resistance, based on the idea of a warp (little bulge) on the underlay in front of the rolling body, does not correspond with the experimental results even for the soft underlay and hard rolling body. An alternative model of rolling resistance, in agreement with experiment, is suggested and the corresponding concept of the rolling resistance coefficient is presented. In addition to the obtained results we can conclude that the project can be used as a task for students in practical exercises of fundamental general physics undergraduate courses. Projects of similar type effectively contribute to the development of the physical thinking of students.
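
    The rolling resistance coefficient referred to at the end of the abstract is conventionally defined through the forward offset of the resultant normal force at the contact (a textbook definition; the paper's alternative model refines the physical picture behind it):

    \[
    M_r = \xi\,N, \qquad F_r = \frac{\xi}{R}\,N,
    \]

    where N is the normal load, ξ (a length) is the rolling resistance coefficient, M_r the resisting torque about the contact, and F_r the equivalent horizontal force resisting steady rolling of a wheel of radius R; the dimensionless coefficient often quoted is ξ/R.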

  10. Use of physically-based models and Soil Taxonomy to identify soil moisture classes: Problems and proposals

    NASA Astrophysics Data System (ADS)

    Bonfante, A.; Basile, A.; de Mascellis, R.; Manna, P.; Terribile, F.

    2009-04-01

    Soil classification according to Soil Taxonomy includes, as a fundamental feature, the estimation of the soil moisture regime. The term soil moisture regime refers to the "presence or absence either of ground water or of water held at a tension of less than 1500 kPa in the soil or in specific horizons during periods of the year". In the classification procedure, defining the soil moisture control section is the primary step in obtaining the soil moisture regime classification. Currently, the estimation of soil moisture regimes is carried out through simple calculation schemes, such as the Newhall and Billaux models, and only in a few cases have authors suggested the use of more complex models (e.g., EPIC). In fact, in the Soil Taxonomy, the definition of the soil moisture control section is based on the wetting front position in two different conditions: the upper boundary is the depth to which a dry soil will be moistened by 2.5 cm of water within 24 hours and the lower boundary is the depth to which a dry soil will be moistened by 7.5 cm of water within 48 hours. The Newhall, Billaux and EPIC models do not use physical laws to describe soil water flow; they use a simple bucket-like scheme in which the soil is divided into several compartments and water moves instantly, and only downward, once field capacity is reached. On the other hand, a large number of one-dimensional hydrological simulation models (SWAP, CropSyst, Hydrus, MACRO, etc.) are available, tested and successfully used. The flow is simulated according to pressure head gradients through the numerical solution of the Richards equation. These simulation models can be fruitfully used to improve the study of soil moisture regimes. The aims of this work are: (i) analysis of the soil moisture control section concept by a physically based model (SWAP); (ii) comparison of the classifications obtained in five different Italian pedoclimatic conditions (Mantova and Lodi in northern Italy; Salerno, Benevento and Caserta in southern Italy) applying the classical models (Newhall and Billaux) and the physically-based models (CropSyst and SWAP). The results have shown that the Soil Taxonomy scheme for the definition of the soil moisture regime is unrealistic for the considered Mediterranean soil hydrological conditions. In fact, the same classifications arise irrespective of the soil type. In this respect, some suggestions on how to modify the control section boundaries were formulated. Keywords: Soil moisture regimes, Newhall, SWAP, Soil Taxonomy

  11. Hybrid modeling of nitrate fate in large catchments using fuzzy-rules

    NASA Astrophysics Data System (ADS)

    van der Heijden, Sven; Haberlandt, Uwe

    2010-05-01

    Especially for nutrient balance simulations, physically based ecohydrological modeling needs an abundance of measured data and model parameters, which for large catchments all too often are not available in sufficient spatial or temporal resolution or are simply unknown. For efficient large-scale studies it is thus beneficial to have methods at one's disposal which are parsimonious concerning the number of model parameters and the necessary input data. One such method is fuzzy-rule based modeling, which compared to other machine-learning techniques has the advantages of producing models (the fuzzy rules) that are physically interpretable to a certain extent, and of allowing the explicit introduction of expert knowledge through pre-defined rules. The study focuses on the application of fuzzy-rule based modeling for nitrate simulation in large catchments, in particular concerning decision support. Fuzzy-rule based modeling enables the generation of simple, efficient, easily understandable models with nevertheless satisfactory accuracy for problems of decision support. The chosen approach encompasses a hybrid metamodeling, which includes the generation of fuzzy rules with data originating from physically based models as well as a coupling with a physically based water balance model. The ecohydrological model SWAT is employed both to generate the needed training data and as the coupled water balance model. The conceptual model divides the nitrate pathway into three parts. The first fuzzy-module calculates nitrate leaching with the percolating water from the soil surface to groundwater, the second module simulates groundwater passage, and the final module replaces the in-stream processes. The aim of this modularization is to create flexibility for using each of the modules on its own, and for changing or completely replacing it. For fuzzy-rule based modeling this explicitly means that re-training one of the modules with newly available data is possible without problems, while the module assembly does not have to be modified. Apart from the concept of hybrid metamodeling, first results are presented for the fuzzy-module for nitrate passage through the unsaturated zone.
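
    A minimal sketch of the kind of fuzzy-rule module described, as a tiny zero-order Sugeno-style inference with triangular membership functions (the variables, rules and numbers are purely illustrative, not rules trained from SWAT output):

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def nitrate_leaching(percolation_mm, n_surplus_kg_ha):
    """Fuzzy estimate of nitrate leaching (kg N/ha), illustrative rules only."""
    # Fuzzify the two inputs
    perc = {"low": tri(percolation_mm, -1, 0, 150),
            "high": tri(percolation_mm, 50, 300, 600)}
    surp = {"low": tri(n_surplus_kg_ha, -1, 0, 60),
            "high": tri(n_surplus_kg_ha, 20, 120, 250)}
    # Rule base: (percolation, surplus) -> crisp consequent (kg N/ha)
    rules = [("low", "low", 2.0), ("low", "high", 10.0),
             ("high", "low", 8.0), ("high", "high", 40.0)]
    # AND via min, then weighted average of the consequents (Sugeno defuzzification)
    weights = [(min(perc[p], surp[s]), out) for p, s, out in rules]
    total = sum(w for w, _ in weights)
    return sum(w * out for w, out in weights) / total if total > 0 else 0.0

print(nitrate_leaching(percolation_mm=200.0, n_surplus_kg_ha=80.0))
```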

  12. A Prototype Physical Database for Passive Microwave Retrievals of Precipitation over the US Southern Great Plains

    NASA Technical Reports Server (NTRS)

    Ringerud, S.; Kummerow, C. D.; Peters-Lidard, C. D.

    2015-01-01

    An accurate understanding of the instantaneous, dynamic land surface emissivity is necessary for a physically based, multi-channel passive microwave precipitation retrieval scheme over land. In an effort to assess the feasibility of the physical approach for land surfaces, a semi-empirical emissivity model is applied for calculation of the surface component in a test area of the US Southern Great Plains. A physical emissivity model, using land surface model data as input, is used to calculate emissivity at the 10GHz frequency, combining contributions from the underlying soil and vegetation layers, including the dielectric and roughness effects of each medium. An empirical technique is then applied, based upon a robust set of observed channel covariances, extending the emissivity calculations to all channels. For calculation of the hydrometeor contribution, reflectivity profiles from the Tropical Rainfall Measurement Mission Precipitation Radar (TRMM PR) are utilized along with coincident brightness temperatures (Tbs) from the TRMM Microwave Imager (TMI), and cloud-resolving model profiles. Ice profiles are modified to be consistent with the higher frequency microwave Tbs. Resulting modeled top of the atmosphere Tbs show correlations to observations of 0.9, biases of 1K or less, root-mean-square errors on the order of 5K, and improved agreement over the use of climatological emissivity values. The synthesis of these models and data sets leads to the creation of a simple prototype Tb database that includes both dynamic surface and atmospheric information physically consistent with the land surface model, emissivity model, and atmospheric information.
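
    As a reading aid only, a zeroth-order (non-scattering) sketch of how a surface emissivity enters a modelled brightness temperature; the study itself combines a physical emissivity model with cloud-resolving-model hydrometeor profiles, so the simple two-stream form and the numbers below are assumptions for illustration.

```python
# Zeroth-order, non-scattering brightness-temperature estimate over land.
# Illustrative values only; not the database-generation procedure of the paper.

def brightness_temperature(emissivity, t_surface_k, tau, t_up_k, t_down_k, t_cosmic_k=2.7):
    """Tb = upwelling + transmitted surface emission + reflected downwelling."""
    surface = emissivity * t_surface_k
    reflected = (1.0 - emissivity) * (t_down_k + tau * t_cosmic_k)
    return t_up_k + tau * (surface + reflected)

# Example at ~10 GHz over a grassland-like surface (assumed values).
print(brightness_temperature(emissivity=0.93, t_surface_k=295.0,
                             tau=0.95, t_up_k=8.0, t_down_k=10.0))
```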

  13. Low Order Modeling Tools for Preliminary Pressure Gain Combustion Benefits Analyses

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.

    2012-01-01

    Pressure gain combustion (PGC) offers the promise of higher thermodynamic cycle efficiency and greater specific power in propulsion and power systems. This presentation describes a model, developed under a cooperative agreement between NASA and AFRL, for preliminarily assessing the performance enhancement and preliminary size requirements of PGC components either as stand-alone thrust producers or coupled with surrounding turbomachinery. The model is implemented in the Numerical Propulsion Simulation System (NPSS) environment allowing various configurations to be examined at numerous operating points. The validated model is simple, yet physics-based. It executes quickly in NPSS, yet produces realistic results.

  14. A Bézier-Spline-based Model for the Simulation of Hysteresis in Variably Saturated Soil

    NASA Astrophysics Data System (ADS)

    Cremer, Clemens; Peche, Aaron; Thiele, Luisa-Bianca; Graf, Thomas; Neuweiler, Insa

    2017-04-01

    Most transient variably saturated flow models neglect hysteresis in the p_c-S relationship (Beven, 2012). Such models tend to inadequately represent the matric potential and saturation distribution; thereby, when simulating flow and transport processes, fluid and solute fluxes might be overestimated (Russo et al., 1989). In this study, we present a simple, computationally efficient and easily applicable model that makes it possible to adequately describe hysteresis in the p_c-S relationship for variably saturated flow. This model can be seen as an extension of the existing play-type model (Beliaev and Hassanizadeh, 2001), in which scanning curves are simplified as vertical lines between the main imbibition and main drainage curves. In our model, we use continuous linear and Bézier-spline-based functions. We show the successful validation of the model by numerically reproducing a physical experiment by Gillham, Klute and Heermann (1976) describing primary drainage and imbibition in a vertical soil column. With a deviation of 3%, the simple Bézier-spline-based model performs significantly better than the play-type approach, which deviates by 30% from the experimental results. Finally, we discuss the realization of physical experiments in order to extend the model to secondary scanning curves and to determine scanning curve steepness. References: Beven, K.J. (2012). Rainfall-Runoff Modelling: The Primer. John Wiley and Sons. Russo, D., Jury, W. A., & Butters, G. L. (1989). Numerical analysis of solute transport during transient irrigation: 1. The effect of hysteresis and profile heterogeneity. Water Resources Research, 25(10), 2109-2118. https://doi.org/10.1029/WR025i010p02109. Beliaev, A.Y. & Hassanizadeh, S.M. (2001). A Theoretical Model of Hysteresis and Dynamic Effects in the Capillary Relation for Two-phase Flow in Porous Media. Transport in Porous Media 43: 487. doi:10.1023/A:1010736108256. Gillham, R., Klute, A., & Heermann, D. (1976). Hydraulic properties of a porous medium: Measurement and empirical representation. Soil Science Society of America Journal, 40(2), 203-207.
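
    A minimal sketch of the play-type idea the paper extends: capillary pressure follows the main drainage curve while the soil drains and the main imbibition curve while it wets, so on a reversal p_c moves vertically between the two main curves at fixed saturation. The van Genuchten-style main curves and their parameters below are illustrative assumptions, not the calibrated curves or the Bézier-spline extension of the study.

```python
# Play-type hysteresis sketch: on a reversal the capillary pressure moves
# vertically (at fixed saturation) between the main drainage and main
# imbibition curves. Van Genuchten parameters below are illustrative.
import numpy as np

def van_genuchten_pc(s_eff, alpha, n):
    """Capillary pressure [cm of water] from effective saturation."""
    s_eff = np.clip(s_eff, 1e-6, 1.0 - 1e-9)
    m = 1.0 - 1.0 / n
    return (s_eff ** (-1.0 / m) - 1.0) ** (1.0 / n) / alpha

def pc_play_type(s_path):
    """Return capillary pressure along a saturation path (list of S_eff)."""
    pc = []
    for i, s in enumerate(s_path):
        draining = i > 0 and s < s_path[i - 1]
        if draining:                       # main drainage curve (assumed params)
            pc.append(van_genuchten_pc(s, alpha=0.02, n=2.0))
        else:                              # main imbibition curve (assumed params)
            pc.append(van_genuchten_pc(s, alpha=0.04, n=2.0))
    return pc

# Drain from near saturation, then rewet: p_c jumps between the two main curves.
path = list(np.linspace(0.95, 0.4, 6)) + list(np.linspace(0.4, 0.9, 6))
for s, p in zip(path, pc_play_type(path)):
    print(f"S_eff = {s:.2f}  p_c = {p:6.1f} cm")
```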

  15. Using High Speed Smartphone Cameras and Video Analysis Techniques to Teach Mechanical Wave Physics

    ERIC Educational Resources Information Center

    Bonato, Jacopo; Gratton, Luigi M.; Onorato, Pasquale; Oss, Stefano

    2017-01-01

    We propose the use of smartphone-based slow-motion video analysis techniques as a valuable tool for investigating physics concepts ruling mechanical wave propagation. The simple experimental activities presented here, suitable for both high school and undergraduate students, allow one to measure, in a simple yet rigorous way, the speed of pulses…

  16. Qualitative methods in quantum theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Migdal, A.B.

    The author feels that the solution of most problems in theoretical physics begins with the application of qualitative methods - dimensional estimates and estimates made from simple models, the investigation of limiting cases, the use of the analytic properties of physical quantities, etc. This book proceeds in this spirit, rather than in a formal, mathematical way with no traces of the sweat involved in the original work left to show. The chapters are entitled Dimensional and model approximations, Various types of perturbation theory, The quasi-classical approximation, Analytic properties of physical quantities, Methods in the many-body problem, and Qualitative methods in quantum field theory. Each chapter begins with a detailed introduction, in which the physical meaning of the results obtained in that chapter is explained in a simple way. 61 figures. (RWR)

  17. Optimization of the ANFIS using a genetic algorithm for physical work rate classification.

    PubMed

    Habibi, Ehsanollah; Salehi, Mina; Yadegarfar, Ghasem; Taheri, Ali

    2018-03-13

    Recently, a new method was proposed for physical work rate classification based on an adaptive neuro-fuzzy inference system (ANFIS). This study aims to present a genetic algorithm (GA)-optimized ANFIS model for a highly accurate classification of physical work rate. Thirty healthy men participated in this study. Directly measured heart rate and oxygen consumption of the participants in the laboratory were used for training the ANFIS classifier model in MATLAB version 8.0.0 using a hybrid algorithm. A similar process was done using the GA as an optimization technique. The accuracy, sensitivity and specificity of the ANFIS classifier model were increased successfully. The mean accuracy of the model was increased from 92.95 to 97.92%. Also, the calculated root mean square error of the model was reduced from 5.4186 to 3.1882. The maximum estimation error of the optimized ANFIS during the network testing process was ± 5%. The GA can be effectively used for ANFIS optimization and leads to an accurate classification of physical work rate. In addition to high accuracy, simple implementation and inter-individual variability consideration are two other advantages of the presented model.

  18. Conceptual uncertainty in crystalline bedrock: Is simple evaluation the only practical approach?

    USGS Publications Warehouse

    Geier, J.; Voss, C.I.; Dverstorp, B.

    2002-01-01

    A simple evaluation can be used to characterize the capacity of crystalline bedrock to act as a barrier to releases of radionuclides from a nuclear waste repository. Physically plausible bounds on groundwater flow and an effective transport-resistance parameter are estimated based on fundamental principles and idealized models of pore geometry. Application to an intensively characterized site in Sweden shows that, due to high spatial variability and uncertainty regarding properties of transport paths, the uncertainty associated with the geological barrier is too high to allow meaningful discrimination between good and poor performance. Application of more complex (stochastic-continuum and discrete-fracture-network) models does not yield a significant improvement in the resolution of geological barrier performance. Comparison with seven other less intensively characterized crystalline study sites in Sweden leads to similar results, raising a question as to what extent the geological barrier function can be characterized by state-of-the-art site investigation methods prior to repository construction. A simple evaluation provides a simple and robust practical approach for inclusion in performance assessment.

  19. Conceptual uncertainty in crystalline bedrock: Is simple evaluation the only practical approach?

    USGS Publications Warehouse

    Geier, J.; Voss, C.I.; Dverstorp, B.

    2002-01-01

    A simple evaluation can be used to characterise the capacity of crystalline bedrock to act as a barrier to releases of radionuclides from a nuclear waste repository. Physically plausible bounds on groundwater flow and an effective transport-resistance parameter are estimated based on fundamental principles and idealised models of pore geometry. Application to an intensively characterised site in Sweden shows that, due to high spatial variability and uncertainty regarding properties of transport paths, the uncertainty associated with the geological barrier is too high to allow meaningful discrimination between good and poor performance. Application of more complex (stochastic-continuum and discrete-fracture-network) models does not yield a significant improvement in the resolution of geologic-barrier performance. Comparison with seven other less intensively characterised crystalline study sites in Sweden leads to similar results, raising a question as to what extent the geological barrier function can be characterised by state-of-the-art site investigation methods prior to repository construction. A simple evaluation provides a simple and robust practical approach for inclusion in performance assessment.

  20. On nonlocally interacting metrics, and a simple proposal for cosmic acceleration

    NASA Astrophysics Data System (ADS)

    Vardanyan, Valeri; Akrami, Yashar; Amendola, Luca; Silvestri, Alessandra

    2018-03-01

    We propose a simple, nonlocal modification to general relativity (GR) on large scales, which provides a model of late-time cosmic acceleration in the absence of the cosmological constant and with the same number of free parameters as in standard cosmology. The model is motivated by adding to the gravity sector an extra spin-2 field interacting nonlocally with the physical metric coupled to matter. The form of the nonlocal interaction is inspired by the simplest form of the Deser-Woodard (DW) model, αR(1/□)R, with one of the Ricci scalars being replaced by a constant m², and gravity is therefore modified in the infrared by adding a simple term of the form m²(1/□)R to the Einstein-Hilbert term. We study cosmic expansion histories, and demonstrate that the new model can provide background expansions consistent with observations if m is of the order of the Hubble expansion rate today, in contrast to the simple DW model with no viable cosmology. The model is best fit by w0 ≈ -1.075 and wa ≈ 0.045. We also compare the cosmology of the model to that of Maggiore and Mancarella (MM), m²R(1/□²)R, and demonstrate that the viable cosmic histories follow the standard-model evolution more closely compared to the MM model. We further demonstrate that the proposed model possesses the same number of physical degrees of freedom as in GR. Finally, we discuss the appearance of ghosts in the local formulation of the model, and argue that they are unphysical and harmless to the theory, keeping the physical degrees of freedom healthy.

  1. Simple robot suggests physical interlimb communication is essential for quadruped walking

    PubMed Central

    Owaki, Dai; Kano, Takeshi; Nagasawa, Ko; Tero, Atsushi; Ishiguro, Akio

    2013-01-01

    Quadrupeds have versatile gait patterns, depending on the locomotion speed, environmental conditions and animal species. These locomotor patterns are generated via the coordination between limbs and are partly controlled by an intraspinal neural network called the central pattern generator (CPG). Although this forms the basis for current control paradigms of interlimb coordination, the mechanism responsible for interlimb coordination remains elusive. By using a minimalistic approach, we have developed a simple-structured quadruped robot, with the help of which we propose an unconventional CPG model that consists of four decoupled oscillators with only local force feedback in each leg. Our robot exhibits good adaptability to changes in weight distribution and walking speed simply by responding to local feedback, and it can mimic the walking patterns of actual quadrupeds. Our proposed CPG-based control method suggests that physical interaction between legs during movements is essential for interlimb coordination in quadruped walking. PMID:23097501
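
    A minimal sketch assuming the decoupled-oscillator rule reported for this robot, dφ_i/dt = ω − σ N_i cos φ_i, where N_i is the local leg load: each leg's phase oscillator receives only its own force feedback, with no explicit coupling to the other legs. The mapping from phase to loading and the weight-distribution values below are simplified assumptions, not the robot's controller.

```python
# Four decoupled phase oscillators with only local load feedback:
#   dphi_i/dt = omega - sigma * N_i * cos(phi_i)
# The load model (more load when a leg is "in stance", here sin(phi) < 0,
# scaled by an assumed weight distribution) is a crude stand-in.
import numpy as np

OMEGA = 2.0 * np.pi          # intrinsic frequency [rad/s]
SIGMA = 8.0                  # local feedback gain
DT, STEPS = 0.001, 20000

weight_share = np.array([0.3, 0.3, 0.2, 0.2])   # LF, RF, LH, RH (assumed)
phi = np.random.default_rng(6).uniform(0, 2 * np.pi, 4)

for _ in range(STEPS):
    stance = np.clip(-np.sin(phi), 0.0, None)    # crude "leg loaded" indicator
    load = weight_share * stance                 # local ground reaction proxy N_i
    phi += DT * (OMEGA - SIGMA * load * np.cos(phi))

rel = (phi - phi[0]) % (2 * np.pi)
print("relative phases (rad):", np.round(rel, 2))
```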

  2. Simple robot suggests physical interlimb communication is essential for quadruped walking.

    PubMed

    Owaki, Dai; Kano, Takeshi; Nagasawa, Ko; Tero, Atsushi; Ishiguro, Akio

    2013-01-06

    Quadrupeds have versatile gait patterns, depending on the locomotion speed, environmental conditions and animal species. These locomotor patterns are generated via the coordination between limbs and are partly controlled by an intraspinal neural network called the central pattern generator (CPG). Although this forms the basis for current control paradigms of interlimb coordination, the mechanism responsible for interlimb coordination remains elusive. By using a minimalistic approach, we have developed a simple-structured quadruped robot, with the help of which we propose an unconventional CPG model that consists of four decoupled oscillators with only local force feedback in each leg. Our robot exhibits good adaptability to changes in weight distribution and walking speed simply by responding to local feedback, and it can mimic the walking patterns of actual quadrupeds. Our proposed CPG-based control method suggests that physical interaction between legs during movements is essential for interlimb coordination in quadruped walking.

  3. Enhanced vacuum laser-impulse coupling by volume absorption at infrared wavelengths

    NASA Astrophysics Data System (ADS)

    Phipps, C. R., Jr.; Harrison, R. F.; Shimada, T.; York, G. W.; Turner, R. F.

    1990-03-01

    This paper reports measurements of vacuum laser impulse coupling coefficients as large as 90 dyne/W, obtained with single microsec-duration CO2 laser pulses incident on a volume-absorbing, cellulose-nitrate-based plastic. This result is the largest coupling coefficient yet reported at any wavelength for a simple, planar target in vacuum, and partly results from expenditure of internal chemical energy in this material. Enhanced coupling was also observed in several other target materials that are chemically passive, but absorb light in depth at 10- and 3-micron wavelengths. The physical distinctions are discussed between this important case and that of simple, planar surface absorbers (such as metals) which were studied in the same experimental series, in light of the predictions of a simple theoretical model.

  4. Basic research on design analysis methods for rotorcraft vibrations

    NASA Technical Reports Server (NTRS)

    Hanagud, S.

    1991-01-01

    The objective of the present work was to develop a method for identifying physically plausible finite element system models of airframe structures from test data. The assumed models were based on linear elastic behavior with general (nonproportional) damping. Physical plausibility of the identified system matrices was ensured by restricting the identification process to designated physical parameters only and not simply to the elements of the system matrices themselves. For example, in a large finite element model the identified parameters might be restricted to the moduli for each of the different materials used in the structure. In the case of damping, a restricted set of damping values might be assigned to finite elements based on the material type and on the fabrication processes used. In this case, different damping values might be associated with riveted, bolted and bonded elements. The method itself is developed first, and several approaches are outlined for computing the identified parameter values. The method is applied first to a simple structure for which the 'measured' response is actually synthesized from an assumed model. Both stiffness and damping parameter values are accurately identified. The true test, however, is the application to a full-scale airframe structure. In this case, a NASTRAN model and actual measured modal parameters formed the basis for the identification of a restricted set of physically plausible stiffness and damping parameters.

  5. A Simple Climate Model Program for High School Education

    NASA Astrophysics Data System (ADS)

    Dommenget, D.

    2012-04-01

    The future climate change projections of the IPCC AR4 are based on GCM simulations, which give a distinct global warming pattern, with an arctic winter amplification, an equilibrium land-sea contrast and an inter-hemispheric warming gradient. While these simulations are the most important tool of the IPCC predictions, a conceptual understanding of these predicted structures of climate change is very difficult to reach if it is based only on these highly complex GCM simulations, and they are not accessible to ordinary people. In the study presented here we introduce a very simple gridded, globally resolved energy balance model based on strongly simplified physical processes, which is capable of simulating the main characteristics of global warming. The model is intended to bridge the gap between the 1-dimensional energy balance models and the fully coupled 4-dimensional complex GCMs. It runs on standard PC computers, computing globally resolved climate simulations at 2 years per second, or 100,000 years per day. The program can compute typical global warming scenarios in a few minutes on a standard PC. The computer code is only 730 lines long, with very simple formulations that high school students should be able to understand. The simple model's climate sensitivity and the spatial structure of its warming pattern are within the uncertainties of the IPCC AR4 model simulations. It is capable of simulating the arctic winter amplification, the equilibrium land-sea contrast and the inter-hemispheric warming gradient in good agreement with the IPCC AR4 models in both amplitude and structure. The program can be used for sensitivity studies in which students change something (e.g. reduce the solar radiation, take away the clouds or make snow black) and see how it affects the climate or the climate response to changes in greenhouse gases. The program is available to everyone and could form the basis for high school education. Partners for a high school project are wanted!
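
    For orientation only, a zero-dimensional energy-balance sketch of the kind of physics such a model resolves on a grid; the GREB-type model itself includes many more processes (transport, hydrology, sea ice), so the single-box form and the parameter values below are textbook-style assumptions rather than the program's code.

```python
# Zero-dimensional energy balance: C dT/dt = (1 - albedo) * S0/4 - eps * sigma * T^4.
# Parameter values are standard textbook numbers used here for illustration only.
SIGMA = 5.670e-8        # Stefan-Boltzmann constant [W m-2 K-4]
S0 = 1361.0             # solar constant [W m-2]
ALBEDO = 0.30           # planetary albedo
EPS = 0.61              # effective emissivity (crude greenhouse effect)
C = 4.0e8               # heat capacity of a ~100 m ocean mixed layer [J m-2 K-1]

def integrate(t_start_k=273.0, years=200, dt_days=1.0):
    dt = dt_days * 86400.0
    t = t_start_k
    for _ in range(int(years * 365 / dt_days)):
        net = (1.0 - ALBEDO) * S0 / 4.0 - EPS * SIGMA * t ** 4
        t += dt * net / C
    return t

print(f"equilibrium-ish temperature: {integrate():.1f} K")
```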

  6. Trade in water and commodities as adaptations to global change

    NASA Astrophysics Data System (ADS)

    Lammers, R. B.; Hertel, T. W.; Prousevitch, A.; Baldos, U. L. C.; Frolking, S. E.; Liu, J.; Grogan, D. S.

    2015-12-01

    The human capacity for altering the water cycle has been well documented and given the expected change due to population, income growth, biofuels, climate, and associated land use change, there remains great uncertainty in both the degree of increased pressure on land and water resources and in our ability to adapt to these changes. Alleviating regional shortages in water supply can be carried out in a spatial hierarchy through i) direct trade of water between all regions, ii) development of infrastructure to improve water availability within regions (e.g. impounding rivers), iii) via inter-basin hydrological transfer between neighboring regions and, iv) via virtual water trade. These adaptation strategies can be managed via market trade in water and commodities to identify those strategies most likely to be adopted. This work combines the physically-based University of New Hampshire Water Balance Model (WBM) with the macro-scale Purdue University Simplified International Model of agricultural Prices Land use and the Environment (SIMPLE) to explore the interaction of supply and demand for fresh water globally. In this work we use a newly developed grid cell-based version of SIMPLE to achieve a more direct connection between the two modeling paradigms of physically-based models with optimization-driven approaches characteristic of economic models. We explore questions related to the global and regional impact of water scarcity and water surplus on the ability of regions to adapt to future change. Allowing for a variety of adaptation strategies such as direct trade of water and expanding the built water infrastructure, as well as indirect trade in commodities, will reduce overall global water stress and, in some regions, significantly reduce their vulnerability to these future changes.

  7. A methodology for physically based rockfall hazard assessment

    NASA Astrophysics Data System (ADS)

    Crosta, G. B.; Agliardi, F.

    Rockfall hazard assessment is not simple to achieve in practice, and sound, physically based assessment methodologies are still missing. The mobility of rockfalls implies a more difficult hazard definition with respect to other slope instabilities with minimal runout. Rockfall hazard assessment involves complex definitions for "occurrence probability" and "intensity". This paper is an attempt to evaluate rockfall hazard using the results of 3-D numerical modelling on a topography described by a DEM. Maps portraying the maximum frequency of passages, velocity and height of blocks at each model cell are easily combined in a GIS in order to produce physically based rockfall hazard maps. Different methods are suggested and discussed for rockfall hazard mapping at regional and local scales, both along linear features and within exposed areas. An objective approach based on three-dimensional matrices providing both a positional "Rockfall Hazard Index" and a "Rockfall Hazard Vector" is presented. The opportunity of combining different parameters in the 3-D matrices has been evaluated to better express the relative increase in hazard. Furthermore, the sensitivity of the hazard index with respect to the included variables and their combinations is preliminarily discussed in order to constrain assessment criteria that are as objective as possible.
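
    A minimal, hypothetical sketch of the kind of GIS map algebra implied above: per-cell rasters of transit frequency, velocity and bounce height are classified into a few levels and combined into a relative hazard index. The class breaks and the three-digit combination rule are assumptions for illustration, not the matrices proposed in the paper.

```python
# Combine per-cell rockfall simulation outputs into a relative hazard index.
# Class breaks and the combination rule are illustrative assumptions.
import numpy as np

def classify(raster, breaks):
    """Return classes 1..len(breaks)+1 according to the given break values."""
    return np.digitize(raster, breaks) + 1

def hazard_index(count, velocity, height):
    c = classify(count,    [1, 10])       # passages per cell: low / medium / high
    v = classify(velocity, [5.0, 15.0])   # m/s
    h = classify(height,   [1.0, 3.0])    # m
    # Simple positional index: frequency weighted most, then energy-related terms.
    return 100 * c + 10 * v + h

rng = np.random.default_rng(0)
count = rng.poisson(5, size=(4, 4)).astype(float)
velocity = rng.uniform(0, 25, size=(4, 4))
height = rng.uniform(0, 5, size=(4, 4))
print(hazard_index(count, velocity, height))
```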

  8. Assessing opportunities for physical activity in the built environment of children: interrelation between kernel density and neighborhood scale.

    PubMed

    Buck, Christoph; Kneib, Thomas; Tkaczick, Tobias; Konstabel, Kenn; Pigeot, Iris

    2015-12-22

    Built environment studies provide broad evidence that urban characteristics influence physical activity (PA). However, findings are still difficult to compare, due to inconsistent measures assessing urban point characteristics and varying definitions of spatial scale. Both were found to influence the strength of the association between the built environment and PA. We simultaneously evaluated the effect of kernel approaches and network-distances to investigate the association between urban characteristics and physical activity depending on spatial scale and intensity measure. We assessed urban measures of point characteristics such as intersections, public transit stations, and public open spaces in ego-centered network-dependent neighborhoods based on geographical data of one German study region of the IDEFICS study. We calculated point intensities using the simple intensity and kernel approaches based on fixed bandwidths, cross-validated bandwidths including isotropic and anisotropic kernel functions and considering adaptive bandwidths that adjust for residential density. We distinguished six network-distances from 500 m up to 2 km to calculate each intensity measure. A log-gamma regression model was used to investigate the effect of each urban measure on moderate-to-vigorous physical activity (MVPA) of 400 2- to 9.9-year old children who participated in the IDEFICS study. Models were stratified by sex and age groups, i.e. pre-school children (2 to <6 years) and school children (6-9.9 years), and were adjusted for age, body mass index (BMI), education and safety concerns of parents, season and valid weartime of accelerometers. Association between intensity measures and MVPA strongly differed by network-distance, with stronger effects found for larger network-distances. Simple intensity revealed smaller effect estimates and smaller goodness-of-fit compared to kernel approaches. Smallest variation in effect estimates over network-distances was found for kernel intensity measures based on isotropic and anisotropic cross-validated bandwidth selection. We found a strong variation in the association between the built environment and PA of children based on the choice of intensity measure and network-distance. Kernel intensity measures provided stable results over various scales and improved the assessment compared to the simple intensity measure. Considering different spatial scales and kernel intensity methods might reduce methodological limitations in assessing opportunities for PA in the built environment.
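
    A minimal sketch contrasting the simple intensity (a straight count of point features within the buffer, here simplified to a Euclidean radius) with a fixed-bandwidth Gaussian kernel intensity that down-weights features far from the residence. The kernel form, bandwidth and distances are illustrative assumptions, not the study's network-based, cross-validated computation.

```python
# Simple count intensity vs. fixed-bandwidth Gaussian kernel intensity
# around one residence. Euclidean distances stand in for network distances.
import numpy as np

def simple_intensity(distances_m, scale_m):
    """Number of opportunities (e.g. playgrounds) within the buffer."""
    return float(np.sum(distances_m <= scale_m))

def kernel_intensity(distances_m, scale_m, bandwidth_m=250.0):
    """Gaussian-kernel-weighted sum; nearby points count more than distant ones."""
    d = distances_m[distances_m <= scale_m]
    return float(np.sum(np.exp(-0.5 * (d / bandwidth_m) ** 2)))

rng = np.random.default_rng(1)
dist = rng.uniform(0, 2000, size=40)          # distances to point features [m]
for scale in (500, 1000, 2000):               # spatial scales, cf. 500 m - 2 km
    print(scale, simple_intensity(dist, scale), round(kernel_intensity(dist, scale), 2))
```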

  9. How to Make Our Models More Physically-based

    NASA Astrophysics Data System (ADS)

    Savenije, H. H. G.

    2016-12-01

    Models that are generally called "physically-based" unfortunately only have a partial view of the physical processes at play in hydrology. Although the coupled partial differential equations in these models reflect the water balance equations and the flow descriptors at laboratory scale, they miss essential characteristics of what determines the functioning of catchments. The most important active agent in catchments is the ecosystem (and sometimes people). What these agents do is manipulate the substrate in a way that it supports the essential functions of survival and productivity: infiltration of water, retention of moisture, mobilization and retention of nutrients, and drainage. Ecosystems do this in the most efficient way, in agreement with the landscape, and in response to climatic drivers. In brief, our hydrological system is alive and has a strong capacity to adjust to prevailing and changing circumstances. Although most physically based models take Newtonian theory to heart, as best they can, what they generally miss is Darwinian thinking on how an ecosystem evolves and adjusts its environment to maintain crucial hydrological functions. If this active agent is not reflected in our models, then they miss essential physics. Through a Darwinian approach, we can determine the root zone storage capacity of ecosystems, as a crucial component of hydrological models, determining the partitioning of fluxes and the conservation of moisture to bridge periods of drought. Another crucial element of physical systems is the evolution of drainage patterns, both on and below the surface. On the surface, such patterns facilitate infiltration or surface drainage with minimal erosion; in the unsaturated zone, patterns facilitate efficient replenishment of moisture deficits and preferential drainage when there is excess moisture; in the groundwater, patterns facilitate the efficient and gradual drainage of groundwater, resulting in linear reservoir recession. Models that do not incorporate these patterns are not physical. The parameters in the equations may be adjusted to compensate for the lack of patterns, but this involves scale-dependent calibration. In contrast to what is widely believed, relatively simple conceptual models can accommodate these physical processes accurately and very efficiently.

  10. Spectrum simulation in DTSA-II.

    PubMed

    Ritchie, Nicholas W M

    2009-10-01

    Spectrum simulation is a useful practical and pedagogical tool. Particularly with complex samples or trace constituents, a simulation can help to understand the limits of the technique and the instrument parameters for the optimal measurement. DTSA-II, software for electron probe microanalysis, provides both easy to use and flexible tools for simulating common and less common sample geometries and materials. Analytical models based on φ(ρz) curves provide quick simulations of simple samples. Monte Carlo models based on electron and X-ray transport provide more sophisticated models of arbitrarily complex samples. DTSA-II provides a broad range of simulation tools in a framework with many different interchangeable physical models. In addition, DTSA-II provides tools for visualizing, comparing, manipulating, and quantifying simulated and measured spectra.

  11. How Computer-Assisted Teaching in Physics Can Enhance Student Learning

    ERIC Educational Resources Information Center

    Karamustafaoglu, O.

    2012-01-01

    Simple harmonic motion (SHM) is an important topic for physics or science students and has wide applications all over the world. Computer simulations are applications of special interest in physics teaching because they support powerful modeling environments involving physics concepts. This article aims to compare the effect of…

  12. Combining Statistics and Physics to Improve Climate Downscaling

    NASA Astrophysics Data System (ADS)

    Gutmann, E. D.; Eidhammer, T.; Arnold, J.; Nowak, K.; Clark, M. P.

    2017-12-01

    Getting useful information from climate models is an ongoing problem that has plagued climate science and hydrologic prediction for decades. While it is possible to develop statistical corrections for climate models that mimic current climate almost perfectly, this does not necessarily guarantee that future changes are portrayed correctly. In contrast, convection permitting regional climate models (RCMs) have begun to provide an excellent representation of the regional climate system purely from first principles, providing greater confidence in their change signal. However, the computational cost of such RCMs prohibits the generation of ensembles of simulations or long time periods, thus limiting their applicability for hydrologic applications. Here we discuss a new approach combining statistical corrections with physical relationships for a modest computational cost. We have developed the Intermediate Complexity Atmospheric Research model (ICAR) to provide a climate and weather downscaling option that is based primarily on physics for a fraction of the computational requirements of a traditional regional climate model. ICAR also enables the incorporation of statistical adjustments directly within the model. We demonstrate that applying even simple corrections to precipitation while the model is running can improve the simulation of land atmosphere feedbacks in ICAR. For example, by incorporating statistical corrections earlier in the modeling chain, we permit the model physics to better represent the effect of mountain snowpack on air temperature changes.

  13. Comparative evaluation of features and techniques for identifying activity type and estimating energy cost from accelerometer data

    PubMed Central

    Kate, Rohit J.; Swartz, Ann M.; Welch, Whitney A.; Strath, Scott J.

    2016-01-01

    Wearable accelerometers can be used to objectively assess physical activity. However, the accuracy of this assessment depends on the underlying method used to process the time series data obtained from accelerometers. Several methods have been proposed that use these data to identify the type of physical activity and estimate its energy cost. Most of the newer methods employ some machine learning technique along with suitable features to represent the time series data. This paper experimentally compares several of these techniques and features on a large dataset of 146 subjects doing eight different physical activities while wearing an accelerometer on the hip. Besides features based on statistics, distance-based features and simple discrete features taken straight from the time series were also evaluated. On the physical activity type identification task, the results show that using more features significantly improves results. The choice of machine learning technique was also found to be important. However, on the energy cost estimation task, the choice of features and machine learning technique were found to be less influential. On that task, separate energy cost estimation models trained specifically for each type of physical activity were found to be more accurate than a single model trained for all types of physical activities. PMID:26862679
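
    A minimal sketch of the kind of feature extraction such comparisons rest on: the accelerometer time series is cut into fixed-length windows and each window is summarized by simple statistics that a classifier or regression model can consume. The window length, sampling rate and feature set are illustrative assumptions, not the paper's exact feature definitions.

```python
# Window an accelerometer magnitude signal and compute simple statistical
# features per window. Sampling rate, window length and features are assumptions.
import numpy as np

FS_HZ = 30          # assumed sampling rate
WINDOW_S = 10       # assumed window length

def window_features(signal):
    n = FS_HZ * WINDOW_S
    feats = []
    for start in range(0, len(signal) - n + 1, n):
        w = signal[start:start + n]
        feats.append({
            "mean": float(np.mean(w)),
            "std": float(np.std(w)),
            "p10": float(np.percentile(w, 10)),
            "p90": float(np.percentile(w, 90)),
            "lag1_autocorr": float(np.corrcoef(w[:-1], w[1:])[0, 1]),
        })
    return feats

# Synthetic 1-minute magnitude trace standing in for hip accelerometer counts.
rng = np.random.default_rng(2)
trace = np.abs(rng.normal(1.0, 0.3, size=FS_HZ * 60))
print(window_features(trace)[0])
```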

  14. Joining the yellow hub: Uses of the Simple Application Messaging Protocol in Space Physics analysis tools

    NASA Astrophysics Data System (ADS)

    Génot, V.; André, N.; Cecconi, B.; Bouchemit, M.; Budnik, E.; Bourrel, N.; Gangloff, M.; Dufourg, N.; Hess, S.; Modolo, R.; Renard, B.; Lormant, N.; Beigbeder, L.; Popescu, D.; Toniutti, J.-P.

    2014-11-01

    The interest in data communication between analysis tools in planetary sciences and space physics is illustrated in this paper via several examples of the uses of SAMP. The Simple Application Messaging Protocol was developed in the frame of the IVOA from an earlier protocol called PLASTIC. SAMP enables easy communication and interoperability between astronomy software, stand-alone and web-based; it is now increasingly adopted by the planetary sciences and space physics community. Its attractiveness is based, on one hand, on the use of common file formats for exchange and, on the other hand, on established messaging models. Examples of uses at the CDPP and elsewhere are presented. The CDPP (Centre de Données de la Physique des Plasmas, http://cdpp.eu/), the French data center for plasma physics, has been engaged for more than a decade in the archiving and dissemination of data products from space missions and ground observatories. Besides these activities, the CDPP has developed services like AMDA (Automated Multi Dataset Analysis, http://amda.cdpp.eu/), which enables in-depth analysis of large amounts of data through dedicated functionalities such as visualization, conditional search and cataloging. Besides AMDA, the 3DView (http://3dview.cdpp.eu/) tool provides immersive visualizations and is being further developed to include simulation and observational data. These tools and their interactions with each other, notably via SAMP, are presented via science cases of interest to the planetary sciences and space physics communities.

  15. Attraction of swimming microorganisms by solid surfaces

    NASA Astrophysics Data System (ADS)

    Lauga, Eric; Berke, Allison; Turner, Linda; Berg, Howard

    2007-11-01

    Swimming microorganisms such as spermatozoa or bacteria are usually observed to accumulate near surfaces. Here, we report on an experiment aiming at measuring the distribution of smooth-swimming E. coli when moving in a density-matched fluid and between two glass plates. The distribution for the bacteria concentration is found to peak near the glass plates, in agreement with a simple physical model based on the far-field hydrodynamics of swimming cells.

  16. NDE Research At Nondestructive Measurement Science At NASA Langley

    DTIC Science & Technology

    1989-06-01

    our staff include: ultrasonics, nonlinear acoustics, thermal acoustics and diffusion, magnetics, fiber optics, and x-ray tomography. We have a...based on the simple assumption that acoustic waves interact with the sample and reveal "important" properties. In practice, such assumptions have...between the acoustic wave and the media. The most useful models can generally be inverted to determine the physical properties or geometry of the

  17. A simple model of hysteresis behavior using spreadsheet analysis

    NASA Astrophysics Data System (ADS)

    Ehrmann, A.; Blachowicz, T.

    2015-01-01

    Hysteresis loops occur in many scientific and technical problems, especially as the field-dependent magnetization of ferromagnetic materials, but also as stress-strain curves of materials measured by tensile tests including thermal effects, liquid-solid phase transitions, in cell biology or economics. While several mathematical models exist which aim to calculate hysteresis energies and other parameters, here we offer a simple model for a general hysteretic system, showing different hysteresis loops depending on the defined parameters. The calculation, which is based on basic spreadsheet analysis plus an easy macro code, can be used by students to understand how these systems work and how the parameters influence the reactions of the system to an external field. Importantly, in the step-by-step mode, each change of the system state, compared to the last step, becomes visible. The simple program can be developed further by several changes and additions, enabling the building of a tool which is capable of answering real physical questions in the broad field of magnetism as well as in other scientific areas in which similar hysteresis loops occur.
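
    In the same step-by-step spirit, a minimal scripted sketch (Python here rather than a spreadsheet macro) of a generic hysteretic response: the state follows a saturating curve whose argument is shifted by a coercive-field-like offset whose sign depends on the sweep direction. The tanh shape and all parameter values are illustrative assumptions, not the authors' spreadsheet model.

```python
# Generic hysteresis loop: the response follows tanh((H -/+ Hc)/w) depending on
# whether the external field is increasing or decreasing. Values are illustrative.
import math

def hysteresis_loop(h_max=3.0, h_coercive=0.8, width=0.6, n=40):
    up = [h_max * (2 * i / (n - 1) - 1) for i in range(n)]   # increasing branch
    down = list(reversed(up))                                 # decreasing branch
    loop = []
    for h in up:
        loop.append((h, math.tanh((h - h_coercive) / width)))
    for h in down:
        loop.append((h, math.tanh((h + h_coercive) / width)))
    return loop

for h, m in hysteresis_loop()[::8]:   # print every 8th point of the loop
    print(f"H = {h:+.2f}   M = {m:+.3f}")
```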

  18. A Selected Library of Transport Coefficients for Combustion and Plasma Physics Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cloutman, L.D.

    2000-08-01

    COYOTE and similar combustion programs based on the multicomponent Navier-Stokes equations require the mixture viscosity, thermal conductivity, and species transport coefficients as input. This report documents a model of these molecular transport coefficients that is simpler than the general theory, but which provides adequate accuracy for many purposes. This model leads to a computationally convenient, self-contained, and easy-to-use source of such data in a format suitable for use by such programs. We present the data for various neutral species in two forms. The first form is a simple functional fit to the transport coefficients. The second form is the use of tabulated Lennard-Jones parameters in simple theoretical expressions for the gas-phase transport coefficients. The model then is extended to the case of a two-temperature plasma. Lennard-Jones parameters are given for a number of chemical species of interest in combustion research.
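
    As an example of the second form, a sketch of the standard Chapman-Enskog dilute-gas viscosity evaluated from tabulated Lennard-Jones parameters, using a common Neufeld-type fit for the collision integral; the σ and ε/k values below are typical textbook numbers for N2 and are assumptions here, not the report's tables.

```python
# Chapman-Enskog dilute-gas viscosity from Lennard-Jones parameters:
#   mu [Pa s] = 2.6693e-6 * sqrt(M*T) / (sigma^2 * Omega_mu)
# with M in g/mol, T in K, sigma in Angstrom. Species data are illustrative.
import math

def omega_mu(t_star):
    """Neufeld-type fit to the (2,2) collision integral."""
    return (1.16145 / t_star ** 0.14874
            + 0.52487 * math.exp(-0.77320 * t_star)
            + 2.16178 * math.exp(-2.43787 * t_star))

def viscosity(molar_mass_g_mol, sigma_angstrom, eps_over_k_K, temperature_K):
    t_star = temperature_K / eps_over_k_K
    return 2.6693e-6 * math.sqrt(molar_mass_g_mol * temperature_K) / (
        sigma_angstrom ** 2 * omega_mu(t_star))

# Nitrogen-like parameters (assumed): sigma = 3.798 A, eps/k = 71.4 K.
print(f"mu(N2, 300 K) ~ {viscosity(28.013, 3.798, 71.4, 300.0):.2e} Pa s")
```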

  19. SIMPL Systems, or: Can We Design Cryptographic Hardware without Secret Key Information?

    NASA Astrophysics Data System (ADS)

    Rührmair, Ulrich

    This paper discusses a new cryptographic primitive termed SIMPL system. Roughly speaking, a SIMPL system is a special type of Physical Unclonable Function (PUF) which possesses a binary description that allows its (slow) public simulation and prediction. Besides this public key like functionality, SIMPL systems have another advantage: No secret information is, or needs to be, contained in SIMPL systems in order to enable cryptographic protocols - neither in the form of a standard binary key, nor as secret information hidden in random, analog features, as it is the case for PUFs. The cryptographic security of SIMPLs instead rests on (i) a physical assumption on their unclonability, and (ii) a computational assumption regarding the complexity of simulating their output. This novel property makes SIMPL systems potentially immune against many known hardware and software attacks, including malware, side channel, invasive, or modeling attacks.

  20. Relating the Stored Magnetic Energy of a Parallel-Plate Inductor to the Work of External Forces

    ERIC Educational Resources Information Center

    Gauthier, N.

    2007-01-01

    Idealized models are often used in introductory physics courses. For one, such models involve simple mathematics, which is a definite plus since complex mathematical manipulations quickly become an obstacle rather than a tool for a beginner. Idealized models facilitate a student's understanding and grasp of a given physical phenomenon, yet they…

  1. From Random Walks to Brownian Motion, from Diffusion to Entropy: Statistical Principles in Introductory Physics

    NASA Astrophysics Data System (ADS)

    Reeves, Mark

    2014-03-01

    Entropy changes underlie the physics that dominates biological interactions. Indeed, introductory biology courses often begin with an exploration of the qualities of water that are important to living systems. However, one idea that is not explicitly addressed in most introductory physics or biology textbooks is the dominant contribution of entropy in driving important biological processes towards equilibrium. From diffusion to cell-membrane formation, to electrostatic binding in protein folding, to the functioning of nerve cells, entropic effects often act to counterbalance deterministic forces such as electrostatic attraction and, in so doing, allow for effective molecular signaling. A small group of biology, biophysics and computer science faculty have worked together for the past five years to develop curricular modules (based on SCALEUP pedagogy) that enable students to create models of stochastic and deterministic processes. Our students are first-year engineering and science students in the calculus-based physics course and they are not expected to know biology beyond the high-school level. In our class, they learn to reduce seemingly complex biological processes and structures to tractable models that include deterministic processes and simple probabilistic inference. The students test these models in simulations and in laboratory experiments that are biologically relevant. The students are challenged to bridge the gap between statistical parameterization of their data (mean and standard deviation) and simple model-building by inference. This allows the students to quantitatively describe realistic cellular processes such as diffusion, ionic transport, and ligand-receptor binding. Moreover, the students confront "random" forces and traditional forces in problems, simulations, and in laboratory exploration throughout the year-long course as they move from traditional kinematics through thermodynamics to electrostatic interactions. This talk will present a number of these exercises, with particular focus on the hands-on experiments done by the students, and will give examples of the tangible material that our students work with throughout the two-semester sequence of their course on introductory physics with a bio focus. Supported by NSF DUE.
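
    A minimal sketch of the kind of stochastic model-building described above: an ensemble of unbiased 1-D random walkers whose position variance grows linearly in time, which students can compare with the diffusive prediction ⟨x²⟩ = 2Dt, where D = δ²/(2Δt). Step size, time step and walker count are illustrative assumptions.

```python
# Ensemble of 1-D random walkers: check that <x^2> grows linearly in time,
# consistent with diffusion, <x^2> = 2 D t with D = step^2 / (2 dt).
import numpy as np

N_WALKERS = 5000
N_STEPS = 500
STEP = 1.0e-6      # 1 micron per step (assumed)
DT = 1.0e-3        # 1 ms per step (assumed)

rng = np.random.default_rng(3)
steps = rng.choice([-STEP, STEP], size=(N_WALKERS, N_STEPS))
positions = np.cumsum(steps, axis=1)

D = STEP ** 2 / (2 * DT)
for k in (50, 200, 500):
    msd = float(np.mean(positions[:, k - 1] ** 2))
    print(f"t = {k * DT:.3f} s   <x^2> = {msd:.3e}   2Dt = {2 * D * k * DT:.3e}")
```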

  2. A Simple Double-Source Model for Interference of Capillaries

    ERIC Educational Resources Information Center

    Hou, Zhibo; Zhao, Xiaohong; Xiao, Jinghua

    2012-01-01

    A simple but physically intuitive double-source model is proposed to explain the interferogram of a laser-capillary system, where two effective virtual sources are used to describe the rays reflected by and transmitted through the capillary. The locations of the two virtual sources are functions of the observing positions on the target screen. An…

  3. A Backward-Lagrangian-Stochastic Footprint Model for the Urban Environment

    NASA Astrophysics Data System (ADS)

    Wang, Chenghao; Wang, Zhi-Hua; Yang, Jiachuan; Li, Qi

    2018-02-01

    Built terrains, with their complexity in morphology, high heterogeneity, and anthropogenic impact, impose substantial challenges in Earth-system modelling. In particular, estimation of the source areas and footprints of atmospheric measurements in cities requires realistic representation of the landscape characteristics and flow physics in urban areas, but has hitherto been heavily reliant on large-eddy simulations. In this study, we developed physical parametrization schemes for estimating urban footprints based on the backward-Lagrangian-stochastic algorithm, with the built environment represented by street canyons. The vertical profile of mean streamwise velocity is parametrized for the urban canopy and boundary layer. Flux footprints estimated by the proposed model show reasonable agreement with analytical predictions over flat surfaces without roughness elements, and with experimental observations over sparse plant canopies. Furthermore, comparisons of canyon flow and turbulence profiles and the subsequent footprints were made between the proposed model and large-eddy simulation data. The results suggest that the parametrized canyon wind and turbulence statistics, based on the simple similarity theory used, need to be further improved to yield more realistic urban footprint modelling.

  4. Fluctuations in the DNA double helix

    NASA Astrophysics Data System (ADS)

    Peyrard, M.; López, S. C.; Angelov, D.

    2007-08-01

    DNA is not the static entity suggested by the famous double helix structure. It shows large fluctuational openings, in which the bases, which contain the genetic code, are temporarily open. Therefore it is an interesting system in which to study the effect of nonlinearity on the physical properties of a system. A simple model for DNA, at a mesoscopic scale, can be investigated by computer simulation, in the same spirit as the original work of Fermi, Pasta and Ulam. These calculations raise fundamental questions in statistical physics because they show a temporary breaking of equipartition of energy, regions with large amplitude fluctuations being able to coexist with regions where the fluctuations are very small, even when the model is studied in the canonical ensemble. This phenomenon can be related to nonlinear excitations in the model. The ability of the model to describe the actual properties of DNA is discussed by comparing theoretical and experimental results for the probability that base pairs open at a given temperature in specific DNA sequences. These studies give us indications on the proper description of the effect of the sequence in the mesoscopic model.
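
    The abstract does not spell the model out; as a reading aid, the sketch below uses the Peyrard-Bishop form commonly used for such mesoscopic DNA models, a Morse on-site potential for each base-pair opening plus harmonic stacking between neighbours, with literature-style parameter values that are assumptions here.

```python
# Peyrard-Bishop-style mesoscopic DNA energy: Morse on-site potential for each
# base-pair opening y_n plus harmonic stacking between neighbours.
# Parameter values are typical literature-style numbers, used as assumptions.
import numpy as np

D_MORSE = 0.04    # eV, pair dissociation energy
A_MORSE = 4.45    # 1/Angstrom, inverse width of the Morse well
K_STACK = 0.06    # eV/Angstrom^2, stacking coupling

def energy(y):
    """Total potential energy [eV] of a configuration of openings y (Angstrom)."""
    morse = D_MORSE * (np.exp(-A_MORSE * y) - 1.0) ** 2
    stack = 0.5 * K_STACK * (y[1:] - y[:-1]) ** 2
    return float(np.sum(morse) + np.sum(stack))

# A mostly closed strand with a local fluctuational opening ("bubble").
y = np.zeros(50)
y[20:25] = 2.0     # a few base pairs opened by ~2 Angstrom
print(f"energy of the bubble configuration: {energy(y):.3f} eV")
```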

  5. The harmonic oscillator and nuclear physics

    NASA Technical Reports Server (NTRS)

    Rowe, D. J.

    1993-01-01

    The three-dimensional harmonic oscillator plays a central role in nuclear physics. It provides the underlying structure of the independent-particle shell model and gives rise to the dynamical group structures on which models of nuclear collective motion are based. It is shown that the three-dimensional harmonic oscillator features a rich variety of coherent states, including vibrations of the monopole, dipole, and quadrupole types, and rotations of the rigid flow, vortex flow, and irrotational flow types. Nuclear collective states exhibit all of these flows. It is also shown that the coherent state representations, which have their origins in applications to the dynamical groups of the simple harmonic oscillator, can be extended to vector coherent state representations with a much wider range of applicability. As a result, coherent state theory and vector coherent state theory become powerful tools in the application of algebraic methods in physics.

  6. DIY Soundcard Based Temperature Logging System. Part II: Applications

    ERIC Educational Resources Information Center

    Nunn, John

    2016-01-01

    This paper demonstrates some simple applications of how temperature logging systems may be used to monitor simple heat experiments, and how the data obtained can be analysed to get some additional insight into the physical processes. [For "DIY Soundcard Based Temperature Logging System. Part I: Design," see EJ1114124.]

  7. Rising tides, cumulative impacts and cascading changes to estuarine ecosystem functions.

    PubMed

    O'Meara, Theresa A; Hillman, Jenny R; Thrush, Simon F

    2017-08-31

    In coastal ecosystems, climate change affects multiple environmental factors, yet most predictive models are based on simple cause-and-effect relationships. Multiple stressor scenarios are difficult to predict because they can create a ripple effect through networked ecosystem functions. Estuarine ecosystem function relies on an interconnected network of physical and biological processes. Estuarine habitats play critical roles in service provision and represent global hotspots for organic matter processing, nutrient cycling and primary production. Within these systems, we predicted functional changes in the impacts of land-based stressors, mediated by changing light climate and sediment permeability. Our in-situ field experiment manipulated sea level, nutrient supply, and mud content. We used these stressors to determine how interacting environmental stressors influence ecosystem function and compared results with data collected along elevation gradients to substitute space for time. We show non-linear, multi-stressor effects deconstruct networks governing ecosystem function. Sea level rise altered nutrient processing and impacted broader estuarine services ameliorating nutrient and sediment pollution. Our experiment demonstrates how the relationships between nutrient processing and biological/physical controls degrade with environmental stress. Our results emphasise the importance of moving beyond simple physically-forced relationships to assess consequences of climate change in the context of ecosystem interactions and multiple stressors.

  8. Physics-model-based nonlinear actuator trajectory optimization and safety factor profile feedback control for advanced scenario development in DIII-D

    DOE PAGES

    Barton, Justin E.; Boyer, Mark D.; Shi, Wenyu; ...

    2015-07-30

    DIII-D experimental results are reported to demonstrate the potential of physics-model-based safety factor profile control for robust and reproducible sustainment of advanced scenarios. In the absence of feedback control, variability in wall conditions and plasma impurities, as well as drifts due to external disturbances, can limit the reproducibility of discharges with simple pre-programmed scenario trajectories. The control architecture utilized is a feedforward + feedback scheme where the feedforward commands are computed off-line and the feedback commands are computed on-line. In this work, firstly a first-principles-driven (FPD), physics-based model of the q profile and normalized beta (βN) dynamics is embedded into a numerical optimization algorithm to design feedforward actuator trajectories that steer the plasma through the tokamak operating space to reach a desired stationary target state that is characterized by the achieved q profile and βN. Good agreement between experimental results and simulations demonstrates the accuracy of the models employed for physics-model-based control design. Secondly, a feedback algorithm for q profile control is designed following a FPD approach, and the ability of the controller to achieve and maintain a target q profile evolution is tested in DIII-D high confinement (H-mode) experiments. The controller is shown to be able to effectively control the q profile when βN is relatively close to the target, indicating the need for integrated q profile and βN control to further enhance the ability to achieve robust scenario execution. Furthermore, the ability of an integrated q profile + βN feedback controller to track a desired target is demonstrated through simulation.

  9. Text Based Analogy in Overcoming Student Misconception on Simple Electricity Circuit Material

    NASA Astrophysics Data System (ADS)

    Hesti, R.; Maknun, J.; Feranie, S.

    2017-09-01

    Some researchers have found that the use of analogy in learning and teaching physics is effective in conveying comprehension of complicated physics concepts such as electrical circuits. Meanwhile, misconceptions are a main cause of student failure when learning physics. To teach physics effectively, these misconceptions should be resolved. Using Text Based Analogy (TBA) is one way of identifying misconceptions, and it is sufficient to assist teachers in conveying scientific truths in order to overcome them. The purpose of the study is to investigate the use of text based analogy in overcoming students' misconceptions about simple electrical circuits. The sample of this research consisted of 28 junior high school students taken purposively from one school in South Jakarta. The method used in this research is pre-experimental, with a one-shot case study design. The participating students' misconceptions about electrical circuits were identified using the Diagnostic Test of Simple Electricity Circuits. The results show that TBA can replace students' misconceptions with scientific truths conveyed in the text in an easily understood way, so TBA is strongly recommended for use with other physics materials.

  10. Rotating states of self-propelling particles in two dimensions.

    PubMed

    Chen, Hsuan-Yi; Leung, Kwan-Tai

    2006-05-01

    We present particle-based simulations and a continuum theory for steady rotating flocks formed by self-propelling particles (SPPs) in two-dimensional space. Our models include realistic but simple rules for the self-propelling, drag, and interparticle interactions. Among other coherent structures, in particle-based simulations we find steady rotating flocks when the velocity of the particles lacks long-range alignment. Physical characteristics of the rotating flock are measured and discussed. We construct a phenomenological continuum model and seek steady-state solutions for a rotating flock. We show that the velocity and density profiles become simple in two limits. In the limit of weak alignment, we find that all particles move with the same speed and the density of particles vanishes near the center of the flock due to the divergence of centripetal force. In the limit of strong body force, the density of particles within the flock is uniform and the velocity of the particles close to the center of the flock becomes small.
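
    A schematic sketch (not the authors' exact rules) of a particle-based SPP update with the three ingredients named above: a self-propulsion/drag term that relaxes each speed toward a preferred value, and a pairwise interaction with short-range repulsion and longer-range attraction. All coefficients are illustrative assumptions; with suitable values and initial conditions such systems can settle into rotating (milling) flocks, which the angular-momentum diagnostic at the end is meant to detect.

```python
# Minimal self-propelling-particle update: self-propulsion/drag plus a
# pairwise interaction (short-range repulsion, longer-range attraction).
# Coefficients and initial conditions are illustrative assumptions.
import numpy as np

N, DT, STEPS = 60, 0.02, 2000
V0, ALPHA = 1.0, 2.0                  # preferred speed and propulsion/drag strength
CA, LA, CR, LR = 0.5, 2.0, 1.0, 0.5   # attraction/repulsion amplitudes and ranges

rng = np.random.default_rng(4)
pos = rng.uniform(-1, 1, (N, 2))
vel = rng.normal(0, 0.5, (N, 2))

for _ in range(STEPS):
    dx = pos[:, None, :] - pos[None, :, :]          # pairwise displacement
    r = np.linalg.norm(dx, axis=-1) + np.eye(N)     # avoid division by zero on diagonal
    mag = (CR * np.exp(-r / LR) - CA * np.exp(-r / LA)) / r
    np.fill_diagonal(mag, 0.0)
    f_pair = np.sum(mag[:, :, None] * dx, axis=1)   # net pairwise force on each particle
    speed = np.linalg.norm(vel, axis=1, keepdims=True) + 1e-9
    f_self = ALPHA * (V0 - speed) * vel / speed     # relax speed toward V0
    vel += DT * (f_self + f_pair)
    pos += DT * vel

# Angular momentum about the centre of mass: large |L| indicates a rotating flock.
rel = pos - pos.mean(axis=0)
L = np.mean(rel[:, 0] * vel[:, 1] - rel[:, 1] * vel[:, 0])
print(f"mean angular momentum about the flock centre: {L:+.3f}")
```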

  11. Lebedev acceleration and comparison of different photometric models in the inversion of lightcurves for asteroids

    NASA Astrophysics Data System (ADS)

    Lu, Xiao-Ping; Huang, Xiang-Jie; Ip, Wing-Huen; Hsia, Chi-Hao

    2018-04-01

    In the lightcurve inversion process, in which an asteroid's physical parameters such as rotational period, pole orientation and overall shape are searched for, numerical calculations of the synthetic photometric brightness based on different shape models are frequently carried out. Lebedev quadrature is an efficient method for numerically calculating surface integrals on the unit sphere. By transforming the surface integral over the Cellinoid shape model into an integral over the unit sphere, the lightcurve inversion process based on the Cellinoid shape model can be remarkably accelerated. Furthermore, Matlab codes of the lightcurve inversion process based on the Cellinoid shape model are available on Github for free downloading. The photometric models, i.e., the scattering laws, also play an important role in the lightcurve inversion process, although the shape variations of asteroids dominate the morphologies of the lightcurves. Derived from radiative transfer theory, the Hapke model can describe light reflectance behavior from the viewpoint of physics, while there are also many empirical models used in numerical applications. Numerical simulations are carried out to compare the Hapke model with three other models: the Lommel-Seeliger, Minnaert, and Kaasalainen models. The results show that the numerical models with simple function expressions can fit the synthetic lightcurves generated with the Hapke model well; this good fit implies that they can be adopted in the lightcurve inversion process for asteroids to improve numerical efficiency and derive results similar to those of the Hapke model.
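
    A minimal sketch of the simplest of the scattering laws named above, the Lommel-Seeliger law, in which the reflected contribution of a surface facet scales as μ0·μ/(μ0 + μ), with μ0 and μ the cosines of the incidence and emission angles; summing over facets that are both illuminated and visible gives one synthetic photometric point. The random facet geometry below is an arbitrary illustration, not the Cellinoid shape model.

```python
# Lommel-Seeliger facet brightness: contribution ~ mu0 * mu / (mu0 + mu) per unit
# facet area, summed over facets that are illuminated (mu0 > 0) and visible (mu > 0).
import numpy as np

def synthetic_brightness(normals, areas, sun_dir, obs_dir):
    sun = sun_dir / np.linalg.norm(sun_dir)
    obs = obs_dir / np.linalg.norm(obs_dir)
    mu0 = normals @ sun          # cosine of incidence angle per facet
    mu = normals @ obs           # cosine of emission angle per facet
    ok = (mu0 > 0) & (mu > 0)    # illuminated and visible facets only
    return float(np.sum(areas[ok] * mu0[ok] * mu[ok] / (mu0[ok] + mu[ok])))

# Arbitrary illustrative facets (unit normals) and equal areas.
rng = np.random.default_rng(5)
normals = rng.normal(size=(200, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
areas = np.full(200, 1.0 / 200)
print(synthetic_brightness(normals, areas, sun_dir=np.array([1.0, 0.0, 0.2]),
                           obs_dir=np.array([0.9, 0.3, 0.1])))
```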

  12. An investigation of the effect of instruction in physics on the formation of mental models for problem-solving in the context of simple electric circuits

    NASA Astrophysics Data System (ADS)

    Beh, Kian Lim

    2000-10-01

    This study was designed to explore the effect of a typical traditional method of instruction in physics on the formation of useful mental models among college students for problem-solving, using simple electric circuits as a context. The study was also aimed at providing a comprehensive description of the understanding of electric circuits among novices and experts. In order to achieve these objectives, the following two research approaches were employed: (1) a student survey to collect data from 268 physics students; and (2) an interview protocol to collect data from 23 physics students and 24 experts (including 10 electrical engineering graduates, 4 practicing electrical engineers, 2 secondary school physics teachers, 8 physics lecturers, and 4 electrical engineers). Among the major findings are: (1) Most students do not possess accurate models of simple electric circuits as presented implicitly in physics textbooks; (2) Most students display good procedural understanding for solving simple problems concerning electric circuits but have no in-depth conceptual understanding in terms of practical knowledge of current, voltage, resistance, and circuit connections; (3) Most students encounter difficulty in discerning parallel connections that are drawn in a non-conventional format; (4) After a year of college physics, students show significant improvement in areas including practical knowledge of current and voltage, ability to compute effective resistance and capacitance, ability to identify circuit connections, and ability to solve problems; however, no significant improvement was found in practical knowledge of resistance or in the ability to connect circuits; and (5) The differences and similarities between the physics students and the experts include: (a) Novices perceive parallel circuits more in terms of 'branch', 'current', and 'resistors with the same resistance', while experts perceive parallel circuits more in terms of 'node', 'voltage', and 'less resistance'; and (b) Both novices and experts use phrases such as 'side-by-side' and 'one on top of the other', which emphasize the geometry of the standard circuit drawing, when describing parallel resistors.

  13. Analytical solution for shear bands in cold-rolled 1018 steel

    NASA Astrophysics Data System (ADS)

    Voyiadjis, George Z.; Almasri, Amin H.; Faghihi, Danial; Palazotto, Anthony N.

    2012-06-01

    Cold-rolled 1018 (CR-1018) carbon steel has been well known for its susceptibility to adiabatic shear banding under dynamic loadings. Analysis of these localizations highly depends on the selection of the constitutive model. To deal with this issue, a constitutive model that takes temperature and strain rate effect into account is proposed. The model is motivated by two physical-based models: the Zerilli and Armstrong and the Voyiadjis and Abed models. This material model, however, incorporates a simple softening term that is capable of simulating the softening behavior of CR-1018 steel. Instability, localization, and evolution of adiabatic shear bands are discussed and presented graphically. In addition, the effect of hydrostatic pressure is illustrated.

  14. Investigation of shear damage considering the evolution of anisotropy

    NASA Astrophysics Data System (ADS)

    Kweon, S.

    2013-12-01

    The damage that occurs in shear deformations in view of anisotropy evolution is investigated. It is widely believed in the mechanics research community that damage (or porosity) does not evolve (increase) in shear deformations since the hydrostatic stress in shear is zero. This paper proves that the above statement can be false in large deformations of simple shear. The simulation using the proposed anisotropic ductile fracture model (macro-scale) in this study indicates that hydrostatic stress becomes nonzero and (thus) porosity evolves (increases or decreases) in the simple shear deformation of anisotropic (orthotropic) materials. The simple shear simulation using a crystal plasticity based damage model (meso-scale) shows the same physics as manifested in the above macro-scale model that porosity evolves due to the grain-to-grain interaction, i.e., due to the evolution of anisotropy. Through a series of simple shear simulations, this study investigates the effect of the evolution of anisotropy, i.e., the rotation of the orthotropic axes onto the damage (porosity) evolution. The effect of the evolutions of void orientation and void shape onto the damage (porosity) evolution is investigated as well. It is found out that the interaction among porosity, the matrix anisotropy and void orientation/shape plays a crucial role in the ductile damage of porous materials.

  15. Investigating decoherence in a simple system

    NASA Technical Reports Server (NTRS)

    Albrecht, Andreas

    1991-01-01

    The results of some simple calculations designed to study quantum decoherence are presented. The physics of quantum decoherence are briefly reviewed, and a very simple 'toy' model is analyzed. Exact solutions are found using numerical techniques. The type of incoherence exhibited by the model can be changed by varying a coupling strength. The author explains why the conventional approach to studying decoherence by checking the diagonality of the density matrix is not always adequate. Two other approaches, the decoherence functional and the Schmidt paths approach, are applied to the toy model and contrasted to each other. Possible problems with each are discussed.

  16. PODIO: An Event-Data-Model Toolkit for High Energy Physics Experiments

    NASA Astrophysics Data System (ADS)

    Gaede, F.; Hegner, B.; Mato, P.

    2017-10-01

    PODIO is a C++ library that supports the automatic creation of event data models (EDMs) and efficient I/O code for HEP experiments. It is developed as a new EDM Toolkit for future particle physics experiments in the context of the AIDA2020 EU programme. Experience from LHC and the linear collider community shows that existing solutions partly suffer from overly complex data models with deep object-hierarchies or unfavorable I/O performance. The PODIO project was created in order to address these problems. PODIO is based on the idea of employing plain-old-data (POD) data structures wherever possible, while avoiding deep object-hierarchies and virtual inheritance. At the same time it provides the necessary high-level interface towards the developer physicist, such as the support for inter-object relations and automatic memory-management, as well as a Python interface. To simplify the creation of efficient data models PODIO employs code generation from a simple yaml-based markup language. In addition, it was developed with concurrency in mind in order to support the use of modern CPU features, for example giving basic support for vectorization techniques.

  17. Semantic Information Processing of Physical Simulation Based on Scientific Concept Vocabulary Model

    NASA Astrophysics Data System (ADS)

    Kino, Chiaki; Suzuki, Yoshio; Takemiya, Hiroshi

    Scientific Concept Vocabulary (SCV) has been developed to realize the Cognitive methodology based Data Analysis System (CDAS), which supports researchers in analyzing large-scale data efficiently and comprehensively. SCV is an information model for processing semantic information in physics and engineering. In the SCV model, all semantic information is related to concrete data and algorithms. Consequently, SCV enables a data analysis system to recognize the meaning of execution results output from a numerical simulation. This method has allowed a data analysis system to extract important information from a scientific viewpoint. Previous research has shown that SCV is able to describe simple scientific indices and scientific perceptions. However, it is difficult to describe complex scientific perceptions with the currently proposed SCV. In this paper, a new data structure for SCV is proposed in order to describe scientific perceptions in more detail. Additionally, a prototype of the new model has been constructed and applied to actual data from a numerical simulation. The results indicate that the new SCV is able to describe more complex scientific perceptions.

  18. Welding arc plasma physics

    NASA Technical Reports Server (NTRS)

    Cain, Bruce L.

    1990-01-01

    The problems of weld quality control and weld process dependability continue to be relevant issues in modern metal welding technology. These become especially important for NASA missions which may require the assembly or repair of larger orbiting platforms using automatic welding techniques. To extend present welding technologies for such applications, NASA/MSFC's Materials and Processes Lab is developing physical models of the arc welding process with the goal of providing both a basis for improved design of weld control systems and a better understanding of how arc welding variables influence final weld properties. The physics of the plasma arc discharge is reasonably well established in terms of transport processes occurring in the arc column itself, although recourse to sophisticated numerical treatments is normally required to obtain quantitative results. Unfortunately, the rigor of these numerical computations often obscures the physics of the underlying model due to its inherent complexity. In contrast, this work has focused on a relatively simple physical model of the arc discharge to describe the gross features observed in welding arcs. Emphasis was placed on deriving analytic expressions for the voltage along the arc axis as a function of known or measurable arc parameters. The model retains the essential physics for a straight-polarity, diffusion-dominated free-burning arc in argon, with major simplifications of collisionless sheaths and simple energy balances at the electrodes.

  19. Field-Scale Evaluation of Infiltration Parameters From Soil Texture for Hydrologic Analysis

    NASA Astrophysics Data System (ADS)

    Springer, Everett P.; Cundy, Terrance W.

    1987-02-01

    Recent interest in predicting soil hydraulic properties from simple physical properties such as texture has major implications in the parameterization of physically based models of surface runoff. This study was undertaken to (1) compare, on a field scale, soil hydraulic parameters predicted from texture to those derived from field measurements and (2) compare simulated overland flow response using these two parameter sets. The parameters for the Green-Ampt infiltration equation were obtained from field measurements and using texture-based predictors for two agricultural fields, which were mapped as single soil units. Results of the analyses were that (1) the mean and variance of the field-based parameters were not preserved by the texture-based estimates, (2) spatial and cross correlations between parameters were induced by the texture-based estimation procedures, (3) the overland flow simulations using texture-based parameters were significantly different than those from field-based parameters, and (4) simulations using field-measured hydraulic conductivities and texture-based storage parameters were very close to simulations using only field-based parameters.

  20. Quantitative critical thinking: Student activities using Bayesian updating

    NASA Astrophysics Data System (ADS)

    Warren, Aaron R.

    2018-05-01

    One of the central roles of physics education is the development of students' ability to evaluate proposed hypotheses and models. This ability is important not just for students' understanding of physics but also to prepare students for future learning beyond physics. In particular, it is often hoped that students will better understand the manner in which physicists leverage the availability of prior knowledge to guide and constrain the construction of new knowledge. Here, we discuss how the use of Bayes' Theorem to update the estimated likelihood of hypotheses and models can help achieve these educational goals through its integration with evaluative activities that use hypothetico-deductive reasoning. Several types of classroom and laboratory activities are presented that engage students in the practice of Bayesian likelihood updating on the basis of either consistency with experimental data or consistency with pre-established principles and models. This approach is sufficiently simple for introductory physics students while offering a robust mechanism to guide relatively sophisticated student reflection concerning models, hypotheses, and problem-solutions. A quasi-experimental study utilizing algebra-based introductory courses is presented to assess the impact of these activities on student epistemological development. The results indicate gains on the Epistemological Beliefs Assessment for Physical Science (EBAPS) at a minimal cost of class-time.
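
    As a concrete illustration of the classroom procedure described above, the short Python sketch below (with hypothetical numbers, not taken from the study) updates prior credences in two competing hypotheses using the likelihood of an observed result under each.

      import numpy as np

      def bayes_update(priors, likelihoods):
          # Posterior is proportional to prior * likelihood, renormalized (Bayes' theorem).
          priors = np.asarray(priors, dtype=float)
          likelihoods = np.asarray(likelihoods, dtype=float)
          unnormalized = priors * likelihoods
          return unnormalized / unnormalized.sum()

      # Example: the data are judged three times more probable under model A
      # than under model B (illustrative likelihoods).
      posterior = bayes_update(priors=[0.5, 0.5], likelihoods=[0.6, 0.2])
      print(posterior)   # [0.75 0.25]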

  1. Optimization of GATE and PHITS Monte Carlo code parameters for uniform scanning proton beam based on simulation with FLUKA general-purpose code

    NASA Astrophysics Data System (ADS)

    Kurosu, Keita; Takashina, Masaaki; Koizumi, Masahiko; Das, Indra J.; Moskvin, Vadim P.

    2014-10-01

    Although the three general-purpose Monte Carlo (MC) simulation tools Geant4, FLUKA and PHITS have been used extensively, differences in calculation results have been reported. The major causes are the implementation of the physical models, the preset value of the ionization potential, and the definition of the maximum step size. In order to achieve artifact-free MC simulation, an optimized parameter list for each simulation system is required. Several authors have already proposed optimized lists, but those studies were performed with a simple system such as a water phantom only. Since particle beams undergo transport, interaction and electromagnetic processes during beam delivery, establishing an optimized parameter list for the whole beam delivery system is therefore of major importance. The purpose of this study was to determine the optimized parameter lists for GATE and PHITS using a proton treatment nozzle computational model. The simulation was performed with the broad scanning proton beam. The influence of the customizing parameters on the percentage depth dose (PDD) profile and the proton range was investigated by comparison with the results of FLUKA, and the optimal parameters were then determined. The PDD profile and the proton range obtained from our optimized parameter lists showed different characteristics from the results obtained with the simple system. This led to the conclusion that the physical model, particle transport mechanics and different geometry-based descriptions need accurate customization when planning computational experiments for artifact-free MC simulation.

  2. Vertical cultural transmission effects on demic front propagation: theory and application to the Neolithic transition in Europe.

    PubMed

    Fort, Joaquim

    2011-05-01

    It is shown that Lotka-Volterra interaction terms are not appropriate to describe vertical cultural transmission. Appropriate interaction terms are derived and used to compute the effect of vertical cultural transmission on demic front propagation. They are also applied to a specific example, the Neolithic transition in Europe. In this example, it is found that the effect of vertical cultural transmission can be important (about 30%). On the other hand, simple models based on differential equations can lead to large errors (above 50%). Further physical, biophysical, and cross-disciplinary applications are outlined. © 2011 American Physical Society

  3. Physical and Hydrological Meaning of the Spectral Information from Hydrodynamic Signals at Karst Springs

    NASA Astrophysics Data System (ADS)

    Dufoyer, A.; Lecoq, N.; Massei, N.; Marechal, J. C.

    2017-12-01

    Physics-based modeling of karst systems remains almost impossible without sufficiently accurate information about the inner physical characteristics. Usually, the only available hydrodynamic information is the flow rate at the karst outlet. Numerous works in the past decades have used and proven the usefulness of time-series analysis and spectral techniques applied to spring flow, precipitation, or even physico-chemical parameters for interpreting karst hydrological functioning. However, identifying or interpreting the physical features of karst systems that control the statistical or spectral characteristics of spring flow variations is still challenging, not to say sometimes controversial. The main objective of this work is to determine how the statistical and spectral characteristics of the hydrodynamic signal at karst springs can be related to inner physical and hydraulic properties. In order to address this issue, we undertake an empirical approach based on the use of both distributed and physics-based models and on synthetic system responses. The first step of the research is to conduct a sensitivity analysis of time-series/spectral methods to karst hydraulic and physical properties. For this purpose, forward modeling of flow through several simple, constrained, synthetic cases in response to precipitation is undertaken. It allows us to quantify how the statistical and spectral characteristics of flow at the outlet are sensitive to changes (i) in conduit geometries and (ii) in hydraulic parameters of the system (matrix/conduit exchange rate, matrix hydraulic conductivity and storativity). The flow differential equations are solved by MARTHE, a computer code developed by BRGM that allows karst conduit modeling. From signal processing of the simulated spring responses, we hope to determine whether specific frequencies are always modified, using Fourier series and multi-resolution analysis. We also hope to quantify which parameters have the strongest effect using auto-correlation analysis: first results seem to show larger variations due to conduit conductivity than due to the matrix/conduit exchange rate. Future steps will use another computer code, based on a double-continuum approach and allowing turbulent conduit flow, and will model a natural system.

  4. DIY soundcard based temperature logging system. Part II: applications

    NASA Astrophysics Data System (ADS)

    Nunn, John

    2016-11-01

    This paper demonstrates some simple applications of how temperature logging systems may be used to monitor simple heat experiments, and how the data obtained can be analysed to get some additional insight into the physical processes.

  5. Calculation of the Intensity of Physical Time Fluctuations Using the Standard Solar Model and its Comparison with the Results of Experimental Measurements

    NASA Astrophysics Data System (ADS)

    Morozov, A. N.

    2017-11-01

    The article reviews the possibility of describing physical time as a random Poisson process. An equation allowing the intensity of physical time fluctuations to be calculated as a function of the entropy production density in irreversible natural processes is proposed. Based on the standard solar model, the entropy production density inside the Sun and the dependence of the intensity of physical time fluctuations on the distance from the centre of the Sun are calculated. A free model parameter has been established, and a method for its evaluation is suggested. The calculations of the entropy production density inside the Sun show that it differs by 2-3 orders of magnitude in different parts of the Sun. The intensity of physical time fluctuations on the Earth's surface, as a function of the entropy production density during the conversion of sunlight to the Earth's thermal radiation, is predicted theoretically. A method for evaluating the Kullback measure of voltage fluctuations in small amounts of electrolyte is proposed. Using a simple model of heat transfer from the Earth's surface to the upper atmosphere, the effective temperature of the Earth's thermal radiation is determined. A comparison between the theoretical values of the Kullback measure derived from the fluctuating physical time model and the experimentally measured values for two independent electrolytic cells shows good qualitative and quantitative agreement between the theoretical predictions and the experimental data.

  6. Nucleation and growth of microdroplets of ionic liquids deposited by physical vapor method onto different surfaces

    NASA Astrophysics Data System (ADS)

    Costa, José C. S.; Coelho, Ana F. S. M. G.; Mendes, Adélio; Santos, Luís M. N. B. F.

    2018-01-01

    Nanoscience and nanotechnology have generated an important area of research concerning the properties and functionality of ionic liquid (IL) based materials and their thin films. This work explores the deposition of IL droplets as precursors for the fabrication of thin films by means of physical vapor deposition (PVD). It was found that the deposition (by PVD on glass, indium tin oxide, graphene/nickel and gold-coated quartz crystal surfaces) of imidazolium [C4mim][NTf2] and pyrrolidinium [C4C1Pyrr][NTf2] based ILs generates micro/nanodroplets whose shape, size distribution and surface coverage can be controlled by the evaporation flow rate and deposition time. No indication of the formation of a wetting layer prior to island growth was found. Based on the time-dependent morphological analysis of the micro/nanodroplets, a simple model for the description of the nucleation and growth of IL droplets is presented. The proposed model is based on three main steps: a minimum free area to promote nucleation; first-order coalescence; and second-order coalescence.

  7. Berimbau: A simple instrument for teaching basic concepts in the physics and psychoacoustics of music

    NASA Astrophysics Data System (ADS)

    Vilão, Rui C.; Melo, Santino L. S.

    2014-12-01

    We address the production of musical tones by a simple musical instrument of the Brazilian tradition: the berimbau-de-barriga. The vibration physics of the string and of the air mass inside the gourd are reviewed. Straightforward measurements of an actual berimbau, which illustrate the basic physical phenomena, are performed using a PC-based "soundcard oscilloscope." The inharmonicity of the string and the role of the gourd are discussed in the context of known results in the psychoacoustics of pitch definition.

  8. Comparisons of CTH simulations with measured wave profiles for simple flyer plate experiments

    DOE PAGES

    Thomas, S. A.; Veeser, L. R.; Turley, W. D.; ...

    2016-06-13

    We conducted detailed 2-dimensional hydrodynamics calculations to assess the quality of simulations commonly used to design and analyze simple shock compression experiments. Such simple shock experiments also contain data where dynamic properties of materials are integrated together. We wished to assess how well the chosen computer hydrodynamic code could capture both the simple parts of the experiments and the integral parts. We began with very simple shock experiments, in which we examined the effects of the equation of state and the compressional and tensile strength models. We increased complexity to include spallation in copper and iron and a solid-solid phase transformation in iron to assess the quality of the damage and phase transformation simulations. For experiments with a window, the responses of both the sample and the window are integrated together, providing a good test of the material models. While CTH physics models are not perfect and do not reproduce all experimental details well, we find the models are useful; the simulations are adequate for understanding much of the dynamic process and for planning experiments. However, higher complexity in the simulations, such as adding in spall, led to greater differences between simulation and experiment. Lastly, this comparison of simulation to experiment may help guide future development of hydrodynamics codes so that they better capture the underlying physics.

  9. Anharmonic effects in simple physical models: introducing undergraduates to nonlinearity

    NASA Astrophysics Data System (ADS)

    Christian, J. M.

    2017-09-01

    Given the pervasive character of nonlinearity throughout the physical universe, a case is made for introducing undergraduate students to its consequences and signatures earlier rather than later. The dynamics of two well-known systems—a spring and a pendulum—are reviewed when the standard textbook linearising assumptions are relaxed. Some qualitative effects of nonlinearity can be anticipated from symmetry (e.g., inspection of potential energy functions), and further physical insight gained by applying a simple successive-approximation method that might be taught in parallel with courses on classical mechanics, ordinary differential equations, and computational physics. We conclude with a survey of how these ideas have been deployed on programmes at a UK university.
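
    A minimal numerical comparison of the linearized and full pendulum, of the kind such a course might assign, is sketched below (parameter values are illustrative; this is not code from the article).

      import numpy as np
      from scipy.integrate import solve_ivp

      g, L = 9.81, 1.0
      omega0 = np.sqrt(g / L)

      def full(t, y):      # theta'' = -(g/L) * sin(theta)
          return [y[1], -omega0**2 * np.sin(y[0])]

      def linear(t, y):    # theta'' = -(g/L) * theta
          return [y[1], -omega0**2 * y[0]]

      theta0 = np.radians(60.0)     # large amplitude, so nonlinearity matters
      t = np.linspace(0.0, 10.0, 1000)
      sol_full = solve_ivp(full, (0.0, 10.0), [theta0, 0.0], t_eval=t, rtol=1e-8)
      sol_lin = solve_ivp(linear, (0.0, 10.0), [theta0, 0.0], t_eval=t, rtol=1e-8)

      # The anharmonic oscillation drifts out of phase with the harmonic one
      # because the pendulum period grows with amplitude.
      print(np.max(np.abs(sol_full.y[0] - sol_lin.y[0])))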

  10. A simple physical model for deep moonquake occurrence times

    USGS Publications Warehouse

    Weber, R.C.; Bills, B.G.; Johnson, C.L.

    2010-01-01

    The physical process that results in moonquakes is not yet fully understood. The periodic occurrence times of events from individual clusters are clearly related to tidal stress, but also exhibit departures from the temporal regularity this relationship would seem to imply. Even simplified models that capture some of the relevant physics require a large number of variables. However, a single, easily accessible variable - the time interval I(n) between events - can be used to reveal behavior not readily observed using typical periodicity analyses (e.g., Fourier analyses). The delay-coordinate (DC) map, a particularly revealing way to display data from a time series, is a map of successive intervals: I(n+1) plotted vs. I(n). We use a DC approach to characterize the dynamics of moonquake occurrence. Moonquake-like DC maps can be reproduced by combining sequences of synthetic events that occur with variable probability at tidal periods. Though this model gives a good description of what happens, it has little physical content, thus providing only little insight into why moonquakes occur. We investigate a more mechanistic model. In this study, we present a series of simple models of deep moonquake occurrence, with consideration of both tidal stress and stress drop during events. We first examine the behavior of inter-event times in a delay-coordinate context, and then examine the output, in that context, of a sequence of simple models of tidal forcing and stress relief. We find, as might be expected, that the stress relieved by moonquakes influences their occurrence times. Our models may also provide an explanation for the opposite-polarity events observed at some clusters. © 2010.
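
    A delay-coordinate map of the kind described is simple to construct from a catalog of event times; the Python sketch below uses synthetic occurrence times (real deep-moonquake catalogs would replace them).

      import numpy as np
      import matplotlib.pyplot as plt

      rng = np.random.default_rng(0)
      tidal_period = 27.2     # days, roughly the anomalistic month
      # Synthetic events: each tidal cycle fires after a variable number of half-periods.
      event_times = np.cumsum(tidal_period * rng.choice([0.5, 1.0, 1.5], size=200))

      intervals = np.diff(event_times)          # I(n)
      plt.scatter(intervals[:-1], intervals[1:], s=10)
      plt.xlabel("I(n)  [days]")
      plt.ylabel("I(n+1)  [days]")
      plt.title("Delay-coordinate map of inter-event intervals")
      plt.show()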

  11. A Physics-Inspired Mechanistic Model of Migratory Movement Patterns in Birds.

    PubMed

    Revell, Christopher; Somveille, Marius

    2017-08-29

    In this paper, we introduce a mechanistic model of migratory movement patterns in birds, inspired by ideas and methods from physics. Previous studies have shed light on the factors influencing bird migration but have mainly relied on statistical correlative analysis of tracking data. Our novel method offers a bottom up explanation of population-level migratory movement patterns. It differs from previous mechanistic models of animal migration and enables predictions of pathways and destinations from a given starting location. We define an environmental potential landscape from environmental data and simulate bird movement within this landscape based on simple decision rules drawn from statistical mechanics. We explore the capacity of the model by qualitatively comparing simulation results to the non-breeding migration patterns of a seabird species, the Black-browed Albatross (Thalassarche melanophris). This minimal, two-parameter model was able to capture remarkably well the previously documented migration patterns of the Black-browed Albatross, with the best combination of parameter values conserved across multiple geographically separate populations. Our physics-inspired mechanistic model could be applied to other bird and highly-mobile species, improving our understanding of the relative importance of various factors driving migration and making predictions that could be useful for conservation.
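
    The modelling idea can be sketched in a few lines: a walker on a gridded environmental "potential" chooses among neighbouring cells with Boltzmann-like weights. The potential, grid size, and temperature-like parameter below are invented for illustration; they are not the paper's fitted quantities.

      import numpy as np

      rng = np.random.default_rng(1)
      ny, nx = 50, 80
      y, x = np.mgrid[0:ny, 0:nx]
      potential = ((x - 60)**2 + (y - 25)**2) / 1000.0   # one attractive basin

      def step(pos, T=0.5):
          # Move to a neighbouring cell with probability proportional to exp(-dV/T).
          moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
          cands = [((pos[0] + dy) % ny, (pos[1] + dx) % nx) for dy, dx in moves]
          dV = np.array([potential[c] - potential[pos] for c in cands])
          w = np.exp(-dV / T)
          return cands[rng.choice(len(cands), p=w / w.sum())]

      pos = (25, 5)
      for _ in range(2000):
          pos = step(pos)
      print(pos)    # the walker drifts toward the low-potential region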

  12. Modeling the Stress Complexities of Teaching and Learning of School Physics in Nigeria

    ERIC Educational Resources Information Center

    Emetere, Moses E.

    2014-01-01

    This study was designed to investigate the validity of the stress complexity model (SCM) to teaching and learning of school physics in Abuja municipal area council of Abuja, North. About two hundred students were randomly selected by a simple random sampling technique from some schools within the Abuja municipal area council. A survey research…

  13. Memory-Based Simple Heuristics as Attribute Substitution: Competitive Tests of Binary Choice Inference Models.

    PubMed

    Honda, Hidehito; Matsuka, Toshihiko; Ueda, Kazuhiro

    2017-05-01

    Some researchers on binary choice inference have argued that people make inferences based on simple heuristics, such as recognition, fluency, or familiarity. Others have argued that people make inferences based on available knowledge. To examine the boundary between heuristic and knowledge usage, we examine binary choice inference processes in terms of attribute substitution in heuristic use (Kahneman & Frederick, 2005). In this framework, it is predicted that people will rely on heuristic or knowledge-based inference depending on the subjective difficulty of the inference task. We conducted competitive tests of binary choice inference models representing simple heuristics (fluency and familiarity heuristics) and knowledge-based inference models. We found that a simple heuristic model (especially a familiarity heuristic model) explained inference patterns for subjectively difficult inference tasks, and that a knowledge-based inference model explained subjectively easy inference tasks. These results were consistent with the predictions of the attribute substitution framework. Issues on usage of simple heuristics and psychological processes are discussed. Copyright © 2016 Cognitive Science Society, Inc.

  14. Modeling the frequency-dependent detective quantum efficiency of photon-counting x-ray detectors.

    PubMed

    Stierstorfer, Karl

    2018-01-01

    The aim is to find a simple model for the frequency-dependent detective quantum efficiency (DQE) of photon-counting detectors in the low flux limit. Formulas for the spatial cross-talk, the noise power spectrum and the DQE of a photon-counting detector working at a given threshold are derived. The parameters are probabilities of event types, such as a single count in the central pixel, a double count in the central pixel and a neighboring pixel, or a single count in a neighboring pixel only. These probabilities can be derived in a simple model by extensive use of Monte Carlo techniques: the Monte Carlo x-ray propagation program MOCASSIM is used to simulate the energy deposition from the x-rays in the detector material. A simple charge cloud model using Gaussian clouds of fixed width is used for the propagation of the electric charge generated by the primary interactions. Both stages are combined in a Monte Carlo simulation that randomizes the location of impact and finally produces the required probabilities. The parameters of the charge cloud model are fitted to the spectral response to a polychromatic spectrum measured with our prototype detector. Based on the Monte Carlo model, the DQE of photon-counting detectors as a function of spatial frequency is calculated for various pixel sizes, photon energies, and thresholds. The frequency-dependent DQE of a photon-counting detector in the low flux limit can be described with an equation containing only a small set of probabilities as input. Estimates for the probabilities can be derived from a simple model of the detector physics. © 2017 American Association of Physicists in Medicine.
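
    The charge-sharing part of such a simulation can be caricatured in a few lines of Python: impacts land uniformly within a pixel, deposit a Gaussian charge cloud of fixed width, and events are classified by how many pixels cross the counting threshold. Pixel size, cloud width, and threshold below are purely illustrative, not the paper's fitted values.

      import numpy as np
      from scipy.special import erf

      rng = np.random.default_rng(2)
      pixel = 0.15        # pixel pitch, mm
      sigma = 0.03        # charge-cloud width, mm
      threshold = 0.3     # fraction of deposited charge needed to register a count

      def frac_1d(x0, lo, hi):
          # Fraction of a 1-D Gaussian cloud centred at x0 that falls in [lo, hi].
          s = sigma * np.sqrt(2.0)
          return 0.5 * (erf((hi - x0) / s) - erf((lo - x0) / s))

      counts = {"single": 0, "multiple": 0, "lost": 0}
      for x0, y0 in zip(rng.uniform(0, pixel, 20000), rng.uniform(0, pixel, 20000)):
          hits = 0
          for ix in (-1, 0, 1):
              for iy in (-1, 0, 1):
                  charge = (frac_1d(x0, ix * pixel, (ix + 1) * pixel) *
                            frac_1d(y0, iy * pixel, (iy + 1) * pixel))
                  hits += charge > threshold
          counts["single" if hits == 1 else "multiple" if hits > 1 else "lost"] += 1
      print(counts)   # event-type probabilities of this kind feed the DQE formulas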

  15. A gentle introduction to Rasch measurement models for metrologists

    NASA Astrophysics Data System (ADS)

    Mari, Luca; Wilson, Mark

    2013-09-01

    The talk introduces the basics of Rasch models by systematically interpreting them in the conceptual and lexical framework of the International Vocabulary of Metrology, third edition (VIM3). An admittedly simple example of physical measurement highlights the analogies between physical transducers and tests, as they can be understood as measuring instruments of Rasch models and psychometrics in general. From the talk natural scientists and engineers might learn something of Rasch models, as a specifically relevant case of social measurement, and social scientists might re-interpret something of their knowledge of measurement in the light of the current physical measurement models.

  16. Time-frequency analysis of acoustic scattering from elastic objects

    NASA Astrophysics Data System (ADS)

    Yen, Nai-Chyuan; Dragonette, Louis R.; Numrich, Susan K.

    1990-06-01

    A time-frequency analysis of acoustic scattering from elastic objects was carried out using the time-frequency representation based on a modified version of the Wigner distribution function (WDF) algorithm. A simple and efficient processing algorithm was developed, which provides meaningful interpretation of the scattering physics. The time and frequency representation derived from the WDF algorithm was further reduced to a display which is a skeleton plot, called a vein diagram, that depicts the essential features of the form function. The physical parameters of the scatterer are then extracted from this diagram with the proper interpretation of the scattering phenomena. Several examples, based on data obtained from numerically simulated models and laboratory measurements for elastic spheres and shells, are used to illustrate the capability and proficiency of the algorithm.
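
    The core of such processing is the Wigner distribution itself; a compact discrete version is sketched below (this is a plain Wigner distribution, not the modified algorithm or vein-diagram reduction described in the paper). For an analytic chirp the ridge of the distribution follows the instantaneous frequency.

      import numpy as np

      def wigner(x):
          # Discrete Wigner distribution: FFT over the lag k of x[n+k] * conj(x[n-k]).
          x = np.asarray(x, dtype=complex)
          N = len(x)
          W = np.zeros((N, N))
          for n in range(N):
              kmax = min(n, N - 1 - n)
              r = np.zeros(N, dtype=complex)
              for k in range(-kmax, kmax + 1):
                  r[k % N] = x[n + k] * np.conj(x[n - k])
              W[:, n] = np.real(np.fft.fft(r))
          return W    # rows ~ frequency bins, columns ~ time samples

      fs = 1000.0
      t = np.arange(0, 1.0, 1.0 / fs)
      chirp = np.exp(1j * 2 * np.pi * (50 * t + 100 * t**2))   # 50 -> 250 Hz sweep
      W = wigner(chirp)
      # Frequency bin m maps to f = m*fs/(2*N); at t = 0.4 s the instantaneous
      # frequency is 130 Hz, i.e. a peak near bin 260 for N = 1000.
      print(W[:, 400].argmax())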

  17. Material point method of modelling and simulation of reacting flow of oxygen

    NASA Astrophysics Data System (ADS)

    Mason, Matthew; Chen, Kuan; Hu, Patrick G.

    2014-07-01

    Aerospace vehicles are continually being designed to sustain flight at higher speeds and higher altitudes than previously attainable. At hypersonic speeds, gases within a flow begin to chemically react and the fluid's physical properties are modified. It is desirable to model these effects within the Material Point Method (MPM). The MPM is a combined Eulerian-Lagrangian particle-based solver that calculates the physical properties of individual particles and uses a background grid for information storage and exchange. This study introduces chemically reacting flow modelling within the MPM numerical algorithm and illustrates a simple application using the AeroElastic Material Point Method (AEMPM) code. The governing equations of reacting flows are introduced and their direct application within an MPM code is discussed. A flow of 100% oxygen is illustrated and the results are compared with independently developed computational non-equilibrium algorithms. Observed trends agree well with results from an independently developed source.

  18. Effect of the Environment and Environmental Uncertainty on Ship Routes

    DTIC Science & Technology

    2012-06-01

    ... models consisting of basic differential equations simulating the fluid dynamic process and physics of the environment, based on Newton's second law of ... A simple transit across the Atlantic Ocean can easily become a rough voyage if the ship encounters high winds, which in turn will cause a high sea ...

  19. Mass and Environment as Drivers of Galaxy Evolution: Simplicity and its Consequences

    NASA Astrophysics Data System (ADS)

    Peng, Yingjie

    2012-01-01

    At first sight the galaxy population appears to be composed of infinitely complex types and properties; however, when large samples of galaxies are studied, it appears that the vast majority of galaxies just follow simple scaling relations and similar evolutionary modes, while the outliers represent a minority. The underlying simplicities of the interrelationships among stellar mass, star formation rate and environment are seen in SDSS and zCOSMOS. We demonstrate that the differential effects of mass and environment are completely separable out to z ~ 1, indicating that two distinct physical processes are operating, namely "mass quenching" and "environment quenching". These two simple quenching processes, plus some additional quenching due to merging, then naturally produce the Schechter form of the galaxy stellar mass functions and make quantitative predictions for the inter-relationships between the Schechter parameters of star-forming and passive galaxies in different environments. All of these detailed quantitative relationships are indeed seen, to very high precision, in SDSS, lending strong support to our simple empirically based model. The model also offers qualitative explanations for the "anti-hierarchical" age-mass relation and the alpha-enrichment patterns of passive galaxies, and it makes other testable predictions, such as the mass function of the population of transitory objects that are in the process of being quenched, the galaxy major- and minor-merger rates, the galaxy stellar mass assembly history, and the star formation history. Although still purely phenomenological, the model makes clear what the evolutionary characteristics of the relevant physical processes must in fact be.

  20. How Fast Can You Go on a Bicycle?

    ERIC Educational Resources Information Center

    Dunning, R. B.

    2009-01-01

    The bicycle provides a context-rich problem accessible to students in a first-year physics course, encompassing several core physics principles such as conservation of total energy and angular momentum, dissipative forces, and vectors. In this article, I develop a simple numerical model that can be used by any first-year physics student to…
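
    A sketch of the kind of first-year numerical model the article points to (the parameter values below are illustrative, not the author's) finds the steady cycling speed from the power balance P = (0.5*rho*Cd*A*v^2 + Crr*m*g)*v.

      rho, Cd, A = 1.2, 0.9, 0.5        # air density, drag coefficient, frontal area
      Crr, m, g = 0.004, 85.0, 9.81     # rolling resistance, rider+bike mass, gravity
      P = 250.0                         # sustained rider power, W

      def power_needed(v):
          return (0.5 * rho * Cd * A * v**2 + Crr * m * g) * v

      # power_needed(v) increases monotonically with v, so bisection finds the speed.
      lo, hi = 0.1, 30.0
      for _ in range(60):
          mid = 0.5 * (lo + hi)
          lo, hi = (mid, hi) if power_needed(mid) < P else (lo, mid)
      print(f"steady speed ~ {0.5 * (lo + hi):.1f} m/s")   # roughly 9-10 m/s here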

  1. Four simple ocean carbon models

    NASA Technical Reports Server (NTRS)

    Moore, Berrien, III

    1992-01-01

    This paper briefly reviews the key processes that determine oceanic CO2 uptake and sets this description within the context of four simple ocean carbon models. These models capture, in varying degrees, these key processes and establish a clear foundation for more realistic models that incorporate more directly the underlying physics and biology of the ocean rather than relying on simple parametric schemes. The purpose of this paper is more pedagogical than purely scientific. The problems encountered by current attempts to understand the global carbon cycle not only require our efforts but set a demand for a new generation of scientists, and it is hoped that this paper and the text in which it appears will help in this development.

  2. Nonequilibrium thermodynamics of the shear-transformation-zone model

    NASA Astrophysics Data System (ADS)

    Luo, Alan M.; Öttinger, Hans Christian

    2014-02-01

    The shear-transformation-zone (STZ) model has been applied numerous times to describe the plastic deformation of different types of amorphous systems. We formulate this model within the general equation for nonequilibrium reversible-irreversible coupling (GENERIC) framework, thereby clarifying the thermodynamic structure of the constitutive equations and guaranteeing thermodynamic consistency. We propose natural, physically motivated forms for the building blocks of the GENERIC, which combine to produce a closed set of time evolution equations for the state variables, valid for any choice of free energy. We demonstrate an application of the new GENERIC-based model by choosing a simple form of the free energy. In addition, we present some numerical results and contrast those with the original STZ equations.

  3. SYVA: A program to analyze symmetry of molecules based on vector algebra

    NASA Astrophysics Data System (ADS)

    Gyevi-Nagy, László; Tasi, Gyula

    2017-06-01

    Symmetry is a useful concept in physics and chemistry. It can be used to find some simple properties of a molecule or to simplify complex calculations. In this paper a simple vector algebraic method is described to determine all symmetry elements of an arbitrary molecule. To carry out the symmetry analysis, a program has been written which is also capable of generating the framework group of the molecule, revealing the symmetry properties of the normal modes of vibration, and symmetrizing the structure. To demonstrate the capabilities of the program, it is compared to other widely used stand-alone symmetry analyzers (SYMMOL, Symmetrizer) and molecular modeling software (NWChem, ORCA, MRCC). SYVA can generate input files for molecular modeling programs, e.g. Gaussian, using precisely symmetrized molecular structures. Possible applications are also demonstrated by integrating SYVA with the GAMESS and MRCC software.
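
    The vector-algebra idea can be illustrated in a few lines of Python (a toy check, not the SYVA implementation): an operation is a symmetry element if it maps the set of atomic positions onto itself, species by species. The water geometry and tolerance below are illustrative.

      import numpy as np

      def is_symmetry_op(coords, species, R, tol=1e-6):
          # True if the 3x3 operation R permutes identical atoms onto each other.
          rotated = coords @ R.T
          for r, s in zip(rotated, species):
              same = [c for c, sp in zip(coords, species) if sp == s]
              if not any(np.linalg.norm(r - c) < tol for c in same):
                  return False
          return True

      # Toy water molecule with its C2 axis along z (coordinates in angstrom).
      coords = np.array([[0.000,  0.000,  0.117],    # O
                         [0.000,  0.757, -0.469],    # H
                         [0.000, -0.757, -0.469]])   # H
      species = ["O", "H", "H"]
      C2z = np.array([[-1.0,  0.0, 0.0],
                      [ 0.0, -1.0, 0.0],
                      [ 0.0,  0.0, 1.0]])            # rotation by pi about z
      print(is_symmetry_op(coords, species, C2z))    # True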

  4. Learning molecular energies using localized graph kernels.

    PubMed

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-21

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
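
    The flavour of a random walk graph kernel can be shown with a short sketch (a generic geometric random-walk kernel on toy graphs; the exact kernel, weighting, and atomic-environment construction used in GRAPE are not reproduced here).

      import numpy as np

      def random_walk_kernel(A1, A2, lam=0.05):
          # Weighted count of common walks via the direct (Kronecker) product graph:
          # sum_k lam^k * Ax^k = (I - lam*Ax)^(-1), valid for lam < 1/spectral_radius(Ax).
          Ax = np.kron(A1, A2)
          M = np.linalg.inv(np.eye(Ax.shape[0]) - lam * Ax)
          return M.sum()

      # Toy adjacency matrices: a triangle and a three-node path.
      triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
      path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
      print(random_walk_kernel(triangle, triangle))   # self-similarity
      print(random_walk_kernel(triangle, path))       # triangle vs path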

  5. Learning molecular energies using localized graph kernels

    NASA Astrophysics Data System (ADS)

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-01

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.

  6. The Krylov accelerated SIMPLE(R) method for flow problems in industrial furnaces

    NASA Astrophysics Data System (ADS)

    Vuik, C.; Saghir, A.; Boerstoel, G. P.

    2000-08-01

    Numerical modeling of the melting and combustion process is an important tool in gaining understanding of the physical and chemical phenomena that occur in a gas- or oil-fired glass-melting furnace. The incompressible Navier-Stokes equations are used to model the gas flow in the furnace. The discrete Navier-Stokes equations are solved by the SIMPLE(R) pressure-correction method. In these applications, many SIMPLE(R) iterations are necessary to obtain an accurate solution. In this paper, Krylov accelerated versions are proposed: GCR-SIMPLE(R). The properties of these methods are investigated for a simple two-dimensional flow. Thereafter, the efficiencies of the methods are compared for three-dimensional flows in industrial glass-melting furnaces.

  7. A Simple Physical Model for Spall from Nuclear Explosions Based Upon Two-Dimensional Nonlinear Numerical Simulations

    DTIC Science & Technology

    1990-05-01

    ... forms included (1) analytic distributions of initial velocities which initiate at the same instant across the crack (t0 is constant), (2) random ... [equation (19) garbled in extraction] ... We note that for any distribution Φ(v), the high frequency response will be dominated by the ... body waves from the tension crack model is a narrowband signal. To see this, consider Equation (25). As ω → 0, P(ω) approaches a constant ...

  8. The Design and Construction of a Simple Transmission Electron Microscope for Educational Purposes.

    ERIC Educational Resources Information Center

    Hearsey, Paul K.

    This document presents a model for a simple transmission electron microscope for educational purposes. This microscope could demonstrate thermionic emission, particle acceleration, electron deflection, and fluorescence. It is designed to be used in high school science courses, particularly physics, taking into account the size, weight, complexity…

  9. A Simple Relativistic Bohr Atom

    ERIC Educational Resources Information Center

    Terzis, Andreas F.

    2008-01-01

    A simple concise relativistic modification of the standard Bohr model for hydrogen-like atoms with circular orbits is presented. As the derivation requires basic knowledge of classical and relativistic mechanics, it can be taught in standard courses in modern physics and introductory quantum mechanics. In addition, it can be shown in a class that…
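
    One common way to set up such a model (a sketch; the article's own derivation may differ in detail) is to quantize the relativistic angular momentum, gamma*m*v*r = n*hbar, and balance the Coulomb attraction against the relativistic centripetal force, which gives v/c = Z*alpha/n and a total orbital energy E_n = m*c^2*sqrt(1 - (Z*alpha/n)^2). The short Python check below compares this with the nonrelativistic Bohr result.

      import math

      alpha = 7.2973525693e-3    # fine-structure constant
      mc2 = 0.51099895e6         # electron rest energy, eV

      def bohr_energy(n, Z=1, relativistic=True):
          # Total orbital energy (rest energy included) of level n, in eV.
          if relativistic:
              return mc2 * math.sqrt(1.0 - (Z * alpha / n) ** 2)
          return mc2 - mc2 * (Z * alpha) ** 2 / (2 * n ** 2)   # ordinary Bohr model

      # Binding energies (rest energy subtracted); for hydrogen the relativistic
      # n = 1 level is more tightly bound by roughly one part in 10^5.
      for n in (1, 2):
          print(n, bohr_energy(n) - mc2, bohr_energy(n, relativistic=False) - mc2)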

  10. Strength/Brittleness Classification of Igneous Intact Rocks Based on Basic Physical and Dynamic Properties

    NASA Astrophysics Data System (ADS)

    Aligholi, Saeed; Lashkaripour, Gholam Reza; Ghafoori, Mohammad

    2017-01-01

    This paper sheds further light on the fundamental relationships between simple methods, rock strength, and brittleness of igneous rocks. In particular, the relationship between mechanical (point load strength index I s(50) and brittleness value S 20), basic physical (dry density and porosity), and dynamic properties (P-wave velocity and Schmidt rebound values) for a wide range of Iranian igneous rocks is investigated. First, 30 statistical models (including simple and multiple linear regression analyses) were built to identify the relationships between mechanical properties and simple methods. The results imply that rocks with different Schmidt hardness (SH) rebound values have different physicomechanical properties or relations. Second, using these results, it was proved that dry density, P-wave velocity, and SH rebound value provide a fine complement to mechanical properties classification of rock materials. Further, a detailed investigation was conducted on the relationships between mechanical and simple tests, which are established with limited ranges of P-wave velocity and dry density. The results show that strength values decrease with the SH rebound value. In addition, there is a systematic trend between dry density, P-wave velocity, rebound hardness, and brittleness value of the studied rocks, and rocks with medium hardness have a higher brittleness value. Finally, a strength classification chart and a brittleness classification table are presented, providing reliable and low-cost methods for the classification of igneous rocks.

  11. Electrical conductivity of metal powders under pressure

    NASA Astrophysics Data System (ADS)

    Montes, J. M.; Cuevas, F. G.; Cintas, J.; Urban, P.

    2011-12-01

    A model for calculating the electrical conductivity of a compressed powder mass consisting of oxide-coated metal particles has been derived. A theoretical tool previously developed by the authors, the so-called 'equivalent simple cubic system', was used in deriving the model. This tool is based on relating the actual powder system to an equivalent one consisting of deforming spheres packed in a simple cubic lattice, which is much easier to examine. The proposed model relates the effective electrical conductivity of the powder mass under compression to its level of porosity. Other physically measurable parameters in the model are the conductivities of the metal and oxide constituting the powder particles, their radii, the mean thickness of the oxide layer, and the tap porosity of the powder. Two additional parameters controlling the effect of the descaling of the particle oxide layer were introduced empirically. The proposed model was verified experimentally by measurements of the electrical conductivity of aluminium, bronze, iron, nickel and titanium powders under pressure. The consistency between theoretical predictions and experimental results was reasonably good in all cases.

  12. Wave cybernetics: A simple model of wave-controlled nonlinear and nonlocal cooperative phenomena

    NASA Astrophysics Data System (ADS)

    Yasue, Kunio

    1988-09-01

    A simple theoretical description of nonlinear and nonlocal cooperative phenomena is presented in which the global control mechanism of the whole system is given by the tuned-wave propagation. It provides us with an interesting universal scheme of systematization in physical and biological systems called wave cybernetics, and may be understood as a model realizing Bohm's idea of implicate order in natural philosophy.

  13. Impact Crater Experiments for Introductory Physics and Astronomy Laboratories

    ERIC Educational Resources Information Center

    Claycomb, J. R.

    2009-01-01

    Activity-based collisional analysis is developed for introductory physics and astronomy laboratory experiments. Crushable floral foam is used to investigate the physics of projectiles undergoing completely inelastic collisions with a low-density solid forming impact craters. Simple drop experiments enable determination of the average acceleration,…
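
    The kinematics behind such drop experiments fits in a few lines (illustrative numbers, not data from the article): a projectile dropped from height h stops in a crater of depth d, so v^2 = 2*g*h at impact and the average deceleration follows from v^2 = 2*a*d.

      g = 9.81     # m/s^2
      h = 1.0      # drop height, m
      d = 0.02     # penetration depth into the foam, m

      v_impact = (2 * g * h) ** 0.5
      a_avg = v_impact ** 2 / (2 * d)      # equals g*h/d
      print(f"impact speed ~ {v_impact:.2f} m/s, average deceleration ~ {a_avg:.0f} m/s^2")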

  14. Phase space effects on fast ion distribution function modeling in tokamaks

    NASA Astrophysics Data System (ADS)

    Podestà, M.; Gorelenkova, M.; Fredrickson, E. D.; Gorelenkov, N. N.; White, R. B.

    2016-05-01

    Integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somehow arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as internal consistency of the simulations and mode stability through the analysis of the power exchanged between energetic particles and the instabilities.

  15. Phase space effects on fast ion distribution function modeling in tokamaks

    DOE Data Explorer

    White, R. B. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Podesta, M. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Gorelenkova, M. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Fredrickson, E. D. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Gorelenkov, N. N. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)

    2016-06-01

    Integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somehow arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as internal consistency of the simulations and mode stability through the analysis of the power exchanged between energetic particles and the instabilities.

  16. Modeling Methods

    USGS Publications Warehouse

    Healy, Richard W.; Scanlon, Bridget R.

    2010-01-01

    Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics.Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e. groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.
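
    The water-budget idea behind many of these models reduces, in its simplest annual form, to treating recharge as a residual, as in the short sketch below (all values are made-up illustrative numbers in mm/yr).

      precipitation = 900.0
      evapotranspiration = 600.0
      runoff = 150.0
      storage_change = 10.0

      # Annual water budget: recharge is what is left after the other outflows.
      recharge = precipitation - evapotranspiration - runoff - storage_change
      print(f"estimated annual recharge ~ {recharge:.0f} mm")   # 140 mm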

  17. Control volume based hydrocephalus research; analysis of human data

    NASA Astrophysics Data System (ADS)

    Cohen, Benjamin; Wei, Timothy; Voorhees, Abram; Madsen, Joseph; Anor, Tomer

    2010-11-01

    Hydrocephalus is a neuropathophysiological disorder primarily diagnosed by increased cerebrospinal fluid volume and pressure within the brain. To date, utilization of clinical measurements has been limited to understanding the relative amplitude and timing of flow, volume and pressure waveforms; these are qualitative approaches without a clear framework for meaningful quantitative comparison. Pressure volume models and electric circuit analogs enforce volume conservation principles in terms of pressure. Control volume analysis, through the integral mass and momentum conservation equations, ensures that pressure and volume are accounted for using first principles fluid physics. This approach is able to directly incorporate the diverse measurements obtained by clinicians into a simple, direct and robust mechanics based framework. Clinical data obtained for analysis are discussed along with data processing techniques used to extract terms in the conservation equation. Control volume analysis provides a non-invasive, physics-based approach to extracting pressure information from magnetic resonance velocity data that cannot be measured directly by pressure instrumentation.
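
    A minimal sketch of the control-volume idea follows: the net volumetric inflow obtained from velocity-derived flow waveforms is integrated over a cardiac cycle to track the volume change of the control volume. The waveforms and variable names are illustrative assumptions, not data from the study.

```python
# Sketch of a control-volume mass balance applied to gated flow waveforms
# (illustrative data; a real analysis would use MR velocity measurements).
import numpy as np

t = np.linspace(0.0, 1.0, 100)                 # one cardiac cycle (s), assumed
q_in = 8.0 * np.sin(2 * np.pi * t)             # arterial inflow waveform (mL/s), illustrative
q_out = 8.0 * np.sin(2 * np.pi * (t - 0.08))   # venous/CSF outflow, phase-lagged, illustrative

net_flow = q_in - q_out                        # dV/dt for the control volume
# cumulative trapezoidal integration of net flow gives the volume change over the cycle
volume_change = np.concatenate(([0.0], np.cumsum(0.5 * (net_flow[1:] + net_flow[:-1]) * np.diff(t))))

print(f"peak intracranial volume change ~ {volume_change.max():.2f} mL")
```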

  18. Prior automatic posture and activity identification improves physical activity energy expenditure prediction from hip-worn triaxial accelerometry.

    PubMed

    Garnotel, M; Bastian, T; Romero-Ugalde, H M; Maire, A; Dugas, J; Zahariev, A; Doron, M; Jallon, P; Charpentier, G; Franc, S; Blanc, S; Bonnet, S; Simon, C

    2018-03-01

    Accelerometry is increasingly used to quantify physical activity (PA) and related energy expenditure (EE). Linear regression models designed to derive PAEE from accelerometry-counts have shown their limits, mostly due to the lack of consideration of the nature of activities performed. Here we tested whether a model coupling an automatic activity/posture recognition (AAR) algorithm with an activity-specific count-based model, developed in 61 subjects in laboratory conditions, improved PAEE and total EE (TEE) predictions from a hip-worn triaxial-accelerometer (ActigraphGT3X+) in free-living conditions. Data from two independent subject groups of varying body mass index and age were considered: 20 subjects engaged in a 3-h urban-circuit, with activity-by-activity reference PAEE from combined heart-rate and accelerometry monitoring (Actiheart); and 56 subjects involved in a 14-day trial, with PAEE and TEE measured using the doubly-labeled water method. PAEE was estimated from accelerometry using the activity-specific model coupled to the AAR algorithm (AAR model), a simple linear model (SLM), and equations provided by the companion software of the activity devices used (Freedson and Actiheart models). AAR-model predictions were in closer agreement with selected references than those from other count-based models, both for PAEE during the urban-circuit (RMSE = 6.19 vs 7.90 for SLM and 9.62 kJ/min for Freedson) and for EE over the 14-day trial, matching Actiheart performance in the latter (PAEE: RMSE = 0.93 vs. 1.53 for SLM, 1.43 for Freedson, 0.91 MJ/day for Actiheart; TEE: RMSE = 1.05 vs. 1.57 for SLM, 1.70 for Freedson, 0.95 MJ/day for Actiheart). Overall, the AAR model resulted in a 43% increase of daily PAEE variance explained by accelerometry predictions. NEW & NOTEWORTHY Although triaxial accelerometry is widely used in free-living conditions to assess the impact of physical activity energy expenditure (PAEE) on health, its precision and accuracy are often debated. Here we developed and validated an activity-specific model which, coupled with an automatic activity-recognition algorithm, improved the daily PAEE variance explained by accelerometry-count predictions by 43% compared with models relying on a simple relationship between accelerometry counts and EE.
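
    The two-stage scheme described above (activity recognition followed by an activity-specific count-to-EE regression) can be sketched as follows. The activity classes, thresholds, and regression coefficients are hypothetical placeholders, not the published AAR model.

```python
# Sketch of an activity-specific count-based PAEE model: classify each epoch,
# then apply a per-activity linear count-to-EE relation (coefficients are
# hypothetical placeholders, not the published AAR model).
import numpy as np

# per-activity (intercept, slope) in kJ/min per count unit -- illustrative only
COEFS = {"sitting": (1.0, 0.0005), "walking": (2.0, 0.0020), "cycling": (3.0, 0.0015)}

def classify_epoch(counts_xyz):
    """Stand-in for the automatic activity/posture recognition (AAR) step."""
    magnitude = np.linalg.norm(counts_xyz)
    if magnitude < 500:
        return "sitting"
    return "walking" if magnitude < 3000 else "cycling"

def paee_per_epoch(counts_xyz):
    activity = classify_epoch(counts_xyz)
    intercept, slope = COEFS[activity]
    return intercept + slope * np.linalg.norm(counts_xyz)

epochs = np.array([[100, 80, 60], [1500, 900, 400], [4000, 2500, 1200]])
print([round(paee_per_epoch(e), 2) for e in epochs])  # estimated PAEE (kJ/min) per epoch
```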

  19. Spacecraft Internal Acoustic Environment Modeling

    NASA Technical Reports Server (NTRS)

    Chu, Shao-Sheng R.; Allen, Christopher S.

    2010-01-01

    Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. This paper describes the implementation of acoustic modeling for design purposes by incrementally increasing model fidelity and validating the accuracy of the model while predicting the noise of sources under various conditions. During FY 07, a simple-geometry Statistical Energy Analysis (SEA) model was developed and validated using a physical mockup and acoustic measurements. A process for modeling the effects of absorptive wall treatments and the resulting reverberation environment was developed. During FY 08, a model with more complex and representative geometry of the Orion Crew Module (CM) interior was built, and noise predictions based on input noise sources were made. A corresponding physical mockup was also built. Measurements were made inside this mockup, and comparisons with the model showed excellent agreement. During FY 09, the fidelity of the mockup and corresponding model were increased incrementally by including a simple ventilation system. The airborne noise contribution of the fans was measured using a sound intensity technique, since the sound power levels were not known beforehand. This contrasts with earlier studies, in which Reference Sound Sources (RSS) with known sound power levels were used. Comparisons of the modeling results with the measurements in the mockup again showed excellent agreement. During FY 10, the fidelity of the mockup and the model were further increased by including an ECLSS (Environmental Control and Life Support System) wall, associated closeout panels, and the gap between the ECLSS wall and the mockup wall. The effects of sealing the gap and adding sound-absorptive treatment to the ECLSS wall were also modeled and validated.

  20. Cellular Gauge Symmetry and the Li Organization Principle: A Mathematical Addendum. Quantifying energetic dynamics in physical and biological systems through a simple geometric tool and geodetic curves.

    PubMed

    Yurkin, Alexander; Tozzi, Arturo; Peters, James F; Marijuán, Pedro C

    2017-12-01

    The present Addendum complements the accompanying paper "Cellular Gauge Symmetry and the Li Organization Principle"; it illustrates a recently-developed geometrical physical model able to assess electronic movements and energetic paths in atomic shells. The model describes a multi-level system of circular, wavy and zigzag paths which can be projected onto a horizontal tape. This model ushers in a visual interpretation of the distribution of atomic electrons' energy levels and the corresponding quantum numbers through rather simple tools, such as compasses, rulers and straightforward calculations. Here we show how this geometrical model, with due corrections, among them the use of geodetic curves, might be able to describe and quantify the structure and the temporal development of countless physical and biological systems, from Langevin equations for random paths, to symmetry breaks occurring ubiquitously in physical and biological phenomena, to the relationships among different frequencies of EEG electric spikes. Therefore, in our work we explore the possible association of the binomial distribution and geodetic curves, configuring a uniform approach for the study of natural phenomena in biology, medicine or the neurosciences. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. A Data-Driven Approach to Develop Physically Sound Predictors: Application to Depth-Averaged Velocities and Drag Coefficients on Vegetated Flows

    NASA Astrophysics Data System (ADS)

    Tinoco, R. O.; Goldstein, E. B.; Coco, G.

    2016-12-01

    We use a machine learning approach to seek accurate, physically sound predictors, to estimate two relevant flow parameters for open-channel vegetated flows: mean velocities and drag coefficients. A genetic programming algorithm is used to find a robust relationship between properties of the vegetation and flow parameters. We use published data from several laboratory experiments covering a broad range of conditions to obtain: a) in the case of mean flow, an equation that matches the accuracy of other predictors from recent literature while showing a less complex structure, and b) for drag coefficients, a predictor that relies on both single element and array parameters. We investigate different criteria for dataset size and data selection to evaluate their impact on the resulting predictor, as well as simple strategies to obtain only dimensionally consistent equations, and avoid the need for dimensional coefficients. The results show that a proper methodology can deliver physically sound models representative of the processes involved, such that genetic programming and machine learning techniques can be used as powerful tools to study complicated phenomena and develop not only purely empirical but also "hybrid" models, coupling results from machine learning methodologies into physics-based models.

  2. Modeling food matrix effects on chemical reactivity: Challenges and perspectives.

    PubMed

    Capuano, Edoardo; Oliviero, Teresa; van Boekel, Martinus A J S

    2017-06-29

    The same chemical reaction may differ in its equilibrium position (i.e., thermodynamics) and its kinetics when studied in different foods. The diversity in the chemical composition of food and in its structural organization at macro-, meso-, and microscopic levels, that is, the food matrix, is responsible for this difference. In this viewpoint paper, the multiple, interconnected ways the food matrix can affect chemical reactivity are summarized. Moreover, mechanistic and empirical approaches to explain and predict the effect of the food matrix on chemical reactivity are described. Mechanistic models aim to quantify the effect of the food matrix based on a detailed understanding of the chemical and physical phenomena occurring in food. Their applicability is limited at the moment to very simple food systems. Empirical modeling based on machine learning combined with data-mining techniques may represent an alternative, useful option to predict the effect of the food matrix on chemical reactivity and to identify chemical and physical properties to be further tested. In such a way the mechanistic understanding of the effect of the food matrix on chemical reactions can be improved.

  3. Web-based Interactive Landform Simulation Model - Grand Canyon

    NASA Astrophysics Data System (ADS)

    Luo, W.; Pelletier, J. D.; Duffin, K.; Ormand, C. J.; Hung, W.; Iverson, E. A.; Shernoff, D.; Zhai, X.; Chowdary, A.

    2013-12-01

    Earth science educators need interactive tools to engage and enable students to better understand how Earth systems work over geologic time scales. The evolution of landforms is ripe for interactive, inquiry-based learning exercises because landforms exist all around us. The Web-based Interactive Landform Simulation Model - Grand Canyon (WILSIM-GC, http://serc.carleton.edu/landform/) is a continuation and upgrade of the simple cellular automata (CA) rule-based model (WILSIM-CA, http://www.niu.edu/landform/) that can be accessed from anywhere with an Internet connection. Major improvements in WILSIM-GC include adopting a physically based model and the latest Java technology. The physically based model is incorporated to illustrate the fluvial processes involved in sculpting the land and driving the development and evolution of one of the most famous landforms on Earth: the Grand Canyon. It is hoped that this focus on a famous and specific landscape will attract greater student interest and provide opportunities for students to learn not only how different processes interact to form the landform we observe today, but also how models and data are used together to enhance our understanding of the processes involved. The latest developments in Java technology (such as Java OpenGL for access to ubiquitous fast graphics hardware, Trusted Applets for file input and output, and multithreading to take advantage of modern multi-core CPUs) are incorporated into building WILSIM-GC, and active, standards-aligned curricular materials guided by educational psychology theory on science learning will be developed to accompany the model. This project is funded by the NSF TUES program.

  4. Assessing groundwater policy with coupled economic-groundwater hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Mulligan, Kevin B.; Brown, Casey; Yang, Yi-Chen E.; Ahlfeld, David P.

    2014-03-01

    This study explores groundwater management policies and the effect of modeling assumptions on the projected performance of those policies. The study compares an optimal economic allocation for groundwater use subject to streamflow constraints, achieved by a central planner with perfect foresight, with a uniform tax on groundwater use and a uniform quota on groundwater use. The policies are compared with two modeling approaches, the Optimal Control Model (OCM) and the Multi-Agent System Simulation (MASS). The economic decision models are coupled with a physically based representation of the aquifer using a calibrated MODFLOW groundwater model. The results indicate that uniformly applied policies perform poorly when simulated with more realistic, heterogeneous, myopic, and self-interested agents. In particular, the effects of the physical heterogeneity of the basin and the agents undercut the perceived benefits of policy instruments assessed with simple, single-cell groundwater modeling. This study demonstrates the results of coupling realistic hydrogeology and human behavior models to assess groundwater management policies. The Republican River Basin, which overlies a portion of the Ogallala aquifer in the High Plains of the United States, is used as a case study for this analysis.

  5. Critical power for self-focusing of optical beam in absorbing media

    NASA Astrophysics Data System (ADS)

    Qi, Pengfei; Zhang, Lin; Lin, Lie; Zhang, Nan; Wang, Yan; Liu, Weiwei

    2018-04-01

    Self-focusing is of central importance for most nonlinear optical effects. The critical power for self-focusing is commonly investigated theoretically without considering a material’s absorption. Although this is practicable for various materials, investigating the critical power for self-focusing in media with non-negligible absorption is also necessary, because this is the situation usually met in practice. In this paper, simple analytical expressions describing the relationships among incident power, absorption coefficient and focal position are provided by a simple physical model based on the Fermat principle. Expressions for the absorption-dependent critical power are also derived; these can play important roles in experimental and applied research on self-focusing-related nonlinear optical phenomena in absorbing media. Numerical results based on the nonlinear wave equation, which reproduce experimental results very well, are also presented and agree quantitatively with the analytical results proposed in this paper.
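
    For reference, the widely quoted absorption-free estimate of the self-focusing critical power for a Gaussian beam (the Marburger form) is given below; the absorption-dependent expressions derived in the paper generalize an estimate of this kind.

```latex
% Standard absorption-free critical power for self-focusing of a Gaussian beam
% (Marburger estimate); n_0 and n_2 are the linear and nonlinear refractive indices.
P_{\mathrm{cr}} \simeq \frac{3.77\,\lambda_0^{2}}{8\pi\, n_0\, n_2}
```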

  6. Icing Branch Current Research Activities in Icing Physics

    NASA Technical Reports Server (NTRS)

    Vargas, Mario

    2009-01-01

    Current development: A grid block transformation scheme that allows the input of grids in arbitrary reference frames, the use of mirror planes, and grids with relative velocities has been developed. A simple ice crystal and sand particle bouncing scheme has been included. An SLD splashing model based on the one developed by William Wright for the LEWICE 3.2.2 software has been added. A new area-based collection efficiency algorithm, which calculates trajectories from inflow block boundaries to outflow block boundaries, will be incorporated. This method will be used for calculating and passing collection efficiency data between blade rows for turbomachinery calculations.

  7. Simple universal models capture all classical spin physics.

    PubMed

    De las Cuevas, Gemma; Cubitt, Toby S

    2016-03-11

    Spin models are used in many studies of complex systems because they exhibit rich macroscopic behavior despite their microscopic simplicity. Here, we prove that all the physics of every classical spin model is reproduced in the low-energy sector of certain "universal models," with at most polynomial overhead. This holds for classical models with discrete or continuous degrees of freedom. We prove necessary and sufficient conditions for a spin model to be universal and show that one of the simplest and most widely studied spin models, the two-dimensional Ising model with fields, is universal. Our results may facilitate physical simulations of Hamiltonians with complex interactions. Copyright © 2016, American Association for the Advancement of Science.
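
    A minimal single-spin-flip Metropolis sweep for the two-dimensional Ising model with a uniform field, the universal model highlighted above, is sketched below; the lattice size, temperature, and field are illustrative choices.

```python
# Minimal Metropolis sampler for the 2D Ising model with a uniform field,
# the "universal" spin model highlighted above (parameters are illustrative).
import numpy as np

rng = np.random.default_rng(0)
L, J, h, T = 32, 1.0, 0.1, 2.3            # lattice size, coupling, field, temperature
spins = rng.choice([-1, 1], size=(L, L))

def local_field(s, i, j):
    # sum of the four nearest neighbours with periodic boundaries
    return s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]

for _ in range(100 * L * L):              # single-spin-flip Metropolis updates
    i, j = rng.integers(L, size=2)
    dE = 2.0 * spins[i, j] * (J * local_field(spins, i, j) + h)
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        spins[i, j] *= -1

print("magnetisation per spin:", spins.mean())
```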

  8. What Is a Simple Liquid?

    NASA Astrophysics Data System (ADS)

    Ingebrigtsen, Trond S.; Schrøder, Thomas B.; Dyre, Jeppe C.

    2012-01-01

    This paper is an attempt to identify the real essence of simplicity of liquids in John Locke’s understanding of the term. Simple liquids are traditionally defined as many-body systems of classical particles interacting via radially symmetric pair potentials. We suggest that a simple liquid should be defined instead by the property of having strong correlations between virial and potential-energy equilibrium fluctuations in the NVT ensemble. There is considerable overlap between the two definitions, but also some notable differences. For instance, in the new definition simplicity is not a direct property of the intermolecular potential because a liquid is usually only strongly correlating in part of its phase diagram. Moreover, not all simple liquids are atomic (i.e., with radially symmetric pair potentials) and not all atomic liquids are simple. The main part of the paper motivates the new definition of liquid simplicity by presenting evidence that a liquid is strongly correlating if and only if its intermolecular interactions may be ignored beyond the first coordination shell (FCS). This is demonstrated by NVT simulations of the structure and dynamics of several atomic and three molecular model liquids with a shifted-forces cutoff placed at the first minimum of the radial distribution function. The liquids studied are inverse power-law systems (r-n pair potentials with n=18,6,4), Lennard-Jones (LJ) models (the standard LJ model, two generalized Kob-Andersen binary LJ mixtures, and the Wahnstrom binary LJ mixture), the Buckingham model, the Dzugutov model, the LJ Gaussian model, the Gaussian core model, the Hansen-McDonald molten salt model, the Lewis-Wahnstrom ortho-terphenyl model, the asymmetric dumbbell model, and the single-point charge water model. The final part of the paper summarizes properties of strongly correlating liquids, emphasizing that these are simpler than liquids in general. Simple liquids, as defined here, may be characterized in three quite different ways: (1) chemically by the fact that the liquid’s properties are fully determined by interactions from the molecules within the FCS, (2) physically by the fact that there are isomorphs in the phase diagram, i.e., curves along which several properties like excess entropy, structure, and dynamics, are invariant in reduced units, and (3) mathematically by the fact that throughout the phase diagram the reduced-coordinate constant-potential-energy hypersurfaces define a one-parameter family of compact Riemannian manifolds. No proof is given that the chemical characterization follows from the strong correlation property, but we show that this FCS characterization is consistent with the existence of isomorphs in strongly correlating liquids’ phase diagram. Finally, we note that the FCS characterization of simple liquids calls into question the physical basis of standard perturbation theory, according to which the repulsive and attractive forces play fundamentally different roles for the physics of liquids.
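
    The strong-correlation criterion referred to above is usually stated through the virial/potential-energy correlation coefficient computed from NVT equilibrium fluctuations; by a common convention, liquids with R of roughly 0.9 or higher are termed strongly correlating.

```latex
% Virial (W) / potential-energy (U) correlation coefficient in the NVT ensemble;
% \Delta denotes the deviation from the equilibrium mean.
R \;=\; \frac{\langle \Delta W\,\Delta U \rangle}
             {\sqrt{\langle (\Delta W)^{2}\rangle\,\langle (\Delta U)^{2}\rangle}}
```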

  9. Using high speed smartphone cameras and video analysis techniques to teach mechanical wave physics

    NASA Astrophysics Data System (ADS)

    Bonato, Jacopo; Gratton, Luigi M.; Onorato, Pasquale; Oss, Stefano

    2017-07-01

    We propose the use of smartphone-based slow-motion video analysis techniques as a valuable tool for investigating physics concepts ruling mechanical wave propagation. The simple experimental activities presented here, suitable for both high school and undergraduate students, allow one to measure, in a simple yet rigorous way, the speed of pulses along a spring and the period of transverse standing waves generated in the same spring. These experiments can be helpful in addressing several relevant concepts about the physics of mechanical waves and in overcoming some of the typical student misconceptions in this same field.
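
    The measured quantities in these activities are connected by standard textbook relations (not specific to this paper): the pulse speed on a stretched spring of tension T and linear mass density mu, and the allowed standing-wave frequencies for a length L fixed at both ends.

```latex
% Textbook relations underlying the measurements described above.
v = \sqrt{\frac{T}{\mu}}, \qquad f_n = \frac{n\,v}{2L}, \quad n = 1, 2, 3, \dots
```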

  10. Constraining the Magmatic System at Mount St. Helens (2004-2008) Using Bayesian Inversion With Physics-Based Models Including Gas Escape and Crystallization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Ying -Qi; Segall, Paul; Bradley, Andrew

    Physics-based models of volcanic eruptions track conduit processes as functions of depth and time. When used in inversions, these models permit integration of diverse geological and geophysical data sets to constrain important parameters of magmatic systems. We develop a 1-D steady state conduit model for effusive eruptions including equilibrium crystallization and gas transport through the conduit and compare with the quasi-steady dome growth phase of Mount St. Helens in 2005. Viscosity increase resulting from pressure-dependent crystallization leads to a natural transition from viscous flow to frictional sliding on the conduit margin. Erupted mass flux depends strongly on wall rock and magma permeabilities due to their impact on magma density. Including both lateral and vertical gas transport reveals competing effects that produce nonmonotonic behavior in the mass flux when increasing magma permeability. Using this physics-based model in a Bayesian inversion, we link data sets from Mount St. Helens such as extrusion flux and earthquake depths with petrological data to estimate unknown model parameters, including magma chamber pressure and water content, magma permeability constants, conduit radius, and friction along the conduit walls. Even with this relatively simple model and limited data, we obtain improved constraints on important model parameters. We find that the magma chamber had low (<5 wt %) total volatiles and that the magma permeability scale is well constrained at ~10^-11.4 m^2 to reproduce observed dome rock porosities. Here, compared with previous results, higher magma overpressure and lower wall friction are required to compensate for increased viscous resistance while keeping extrusion rate at the observed value.
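
    The Bayesian-inversion workflow described above can be reduced to a minimal Metropolis sketch: propose a parameter, evaluate a forward model against an observation, and accept or reject. The one-parameter `forward` function below is a hypothetical stand-in for the conduit model, and all numbers are illustrative.

```python
# Minimal Metropolis sketch of a Bayesian inversion: propose a parameter,
# evaluate a forward model against data, accept/reject. The forward model
# below is a hypothetical one-parameter stand-in, not the conduit model.
import numpy as np

rng = np.random.default_rng(1)
observed_flux = 2.0          # e.g. an extrusion-rate observation (arbitrary units)
sigma = 0.3                  # assumed observational uncertainty

def forward(chamber_pressure):
    """Stand-in forward model mapping a parameter to a predicted flux."""
    return 0.02 * chamber_pressure

def log_posterior(p):
    if not (0.0 < p < 300.0):            # uniform prior bounds (assumed)
        return -np.inf
    return -0.5 * ((forward(p) - observed_flux) / sigma) ** 2

samples, p = [], 150.0
for _ in range(20000):
    proposal = p + rng.normal(scale=5.0)
    if np.log(rng.random()) < log_posterior(proposal) - log_posterior(p):
        p = proposal
    samples.append(p)

print("posterior mean pressure:", np.mean(samples[5000:]))  # discard burn-in
```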

  11. Constraining the Magmatic System at Mount St. Helens (2004-2008) Using Bayesian Inversion With Physics-Based Models Including Gas Escape and Crystallization

    NASA Astrophysics Data System (ADS)

    Wong, Ying-Qi; Segall, Paul; Bradley, Andrew; Anderson, Kyle

    2017-10-01

    Physics-based models of volcanic eruptions track conduit processes as functions of depth and time. When used in inversions, these models permit integration of diverse geological and geophysical data sets to constrain important parameters of magmatic systems. We develop a 1-D steady state conduit model for effusive eruptions including equilibrium crystallization and gas transport through the conduit and compare with the quasi-steady dome growth phase of Mount St. Helens in 2005. Viscosity increase resulting from pressure-dependent crystallization leads to a natural transition from viscous flow to frictional sliding on the conduit margin. Erupted mass flux depends strongly on wall rock and magma permeabilities due to their impact on magma density. Including both lateral and vertical gas transport reveals competing effects that produce nonmonotonic behavior in the mass flux when increasing magma permeability. Using this physics-based model in a Bayesian inversion, we link data sets from Mount St. Helens such as extrusion flux and earthquake depths with petrological data to estimate unknown model parameters, including magma chamber pressure and water content, magma permeability constants, conduit radius, and friction along the conduit walls. Even with this relatively simple model and limited data, we obtain improved constraints on important model parameters. We find that the magma chamber had low (<5 wt %) total volatiles and that the magma permeability scale is well constrained at ~10^-11.4 m^2 to reproduce observed dome rock porosities. Compared with previous results, higher magma overpressure and lower wall friction are required to compensate for increased viscous resistance while keeping extrusion rate at the observed value.

  12. Constraining the Magmatic System at Mount St. Helens (2004-2008) Using Bayesian Inversion With Physics-Based Models Including Gas Escape and Crystallization

    DOE PAGES

    Wong, Ying -Qi; Segall, Paul; Bradley, Andrew; ...

    2017-10-04

    Physics-based models of volcanic eruptions track conduit processes as functions of depth and time. When used in inversions, these models permit integration of diverse geological and geophysical data sets to constrain important parameters of magmatic systems. We develop a 1-D steady state conduit model for effusive eruptions including equilibrium crystallization and gas transport through the conduit and compare with the quasi-steady dome growth phase of Mount St. Helens in 2005. Viscosity increase resulting from pressure-dependent crystallization leads to a natural transition from viscous flow to frictional sliding on the conduit margin. Erupted mass flux depends strongly on wall rock and magma permeabilities due to their impact on magma density. Including both lateral and vertical gas transport reveals competing effects that produce nonmonotonic behavior in the mass flux when increasing magma permeability. Using this physics-based model in a Bayesian inversion, we link data sets from Mount St. Helens such as extrusion flux and earthquake depths with petrological data to estimate unknown model parameters, including magma chamber pressure and water content, magma permeability constants, conduit radius, and friction along the conduit walls. Even with this relatively simple model and limited data, we obtain improved constraints on important model parameters. We find that the magma chamber had low (<5 wt %) total volatiles and that the magma permeability scale is well constrained at ~10^-11.4 m^2 to reproduce observed dome rock porosities. Here, compared with previous results, higher magma overpressure and lower wall friction are required to compensate for increased viscous resistance while keeping extrusion rate at the observed value.

  13. Constraining the magmatic system at Mount St. Helens (2004–2008) using Bayesian inversion with physics-based models including gas escape and crystallization

    USGS Publications Warehouse

    Wong, Ying-Qi; Segall, Paul; Bradley, Andrew; Anderson, Kyle R.

    2017-01-01

    Physics-based models of volcanic eruptions track conduit processes as functions of depth and time. When used in inversions, these models permit integration of diverse geological and geophysical data sets to constrain important parameters of magmatic systems. We develop a 1-D steady state conduit model for effusive eruptions including equilibrium crystallization and gas transport through the conduit and compare with the quasi-steady dome growth phase of Mount St. Helens in 2005. Viscosity increase resulting from pressure-dependent crystallization leads to a natural transition from viscous flow to frictional sliding on the conduit margin. Erupted mass flux depends strongly on wall rock and magma permeabilities due to their impact on magma density. Including both lateral and vertical gas transport reveals competing effects that produce nonmonotonic behavior in the mass flux when increasing magma permeability. Using this physics-based model in a Bayesian inversion, we link data sets from Mount St. Helens such as extrusion flux and earthquake depths with petrological data to estimate unknown model parameters, including magma chamber pressure and water content, magma permeability constants, conduit radius, and friction along the conduit walls. Even with this relatively simple model and limited data, we obtain improved constraints on important model parameters. We find that the magma chamber had low (<5 wt%) total volatiles and that the magma permeability scale is well constrained at ~10^-11.4 m^2 to reproduce observed dome rock porosities. Compared with previous results, higher magma overpressure and lower wall friction are required to compensate for increased viscous resistance while keeping extrusion rate at the observed value.

  14. When Simple Harmonic Motion Is Not that Simple: Managing Epistemological Complexity by Using Computer-Based Representations

    ERIC Educational Resources Information Center

    Parnafes, Orit

    2010-01-01

    Many real-world phenomena, even "simple" physical phenomena such as natural harmonic motion, are complex in the sense that they require coordinating multiple subtle foci of attention to get the required information when experiencing them. Moreover, for students to develop sound understanding of a concept or a phenomenon, they need to learn to get…

  15. Between tide and wave marks: a unifying model of physical zonation on littoral shores

    PubMed Central

    Bird, Christopher E.; Franklin, Erik C.; Smith, Celia M.

    2013-01-01

    The effects of tides on littoral marine habitats are so ubiquitous that shorelines are commonly described as ‘intertidal’, whereas waves are considered a secondary factor that simply modifies the intertidal habitat. However mean significant wave height exceeds tidal range at many locations worldwide. Here we construct a simple sinusoidal model of coastal water level based on both tidal range and wave height. From the patterns of emergence and submergence predicted by the model, we derive four vertical shoreline benchmarks which bracket up to three novel, spatially distinct, and physically defined zones. The (1) emergent tidal zone is characterized by tidally driven emergence in air; the (2) wave zone is characterized by constant (not periodic) wave wash; and the (3) submergent tidal zone is characterized by tidally driven submergence. The decoupling of tidally driven emergence and submergence made possible by wave action is a critical prediction of the model. On wave-dominated shores (wave height ≫ tidal range), all three zones are predicted to exist separately, but on tide-dominated shores (tidal range ≫ wave height) the wave zone is absent and the emergent and submergent tidal zones overlap substantially, forming the traditional “intertidal zone”. We conclude by incorporating time and space in the model to illustrate variability in the physical conditions and zonation on littoral shores. The wave:tide physical zonation model is a unifying framework that can facilitate our understanding of physical conditions on littoral shores whether tropical or temperate, marine or lentic. PMID:24109544
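
    A minimal sketch of the water-level construction follows: a semidiurnal tidal oscillation plus a wave band of half the significant wave height about the instantaneous still-water level, from which four benchmark elevations are read off. The amplitudes and benchmark labels are illustrative, not taken from the paper.

```python
# Sketch of the wave:tide zonation idea: a tidal oscillation plus a wave band
# of half-height H/2 about the instantaneous still-water level. Benchmarks
# follow from the extremes; amplitudes are illustrative, not site data.
import numpy as np

tidal_range = 0.6          # m (tide-dominated if >> wave height)
wave_height = 1.2          # m significant wave height (wave-dominated here)

t = np.linspace(0.0, 24.0, 24 * 60)                              # hours
still_water = 0.5 * tidal_range * np.sin(2 * np.pi * t / 12.42)  # semidiurnal tide
wave_top = still_water + 0.5 * wave_height
wave_bottom = still_water - 0.5 * wave_height

benchmarks = {
    "highest wave run-up":    wave_top.max(),
    "lowest wave-top level":  wave_top.min(),
    "highest wave-bottom":    wave_bottom.max(),
    "lowest wave trough":     wave_bottom.min(),
}
for name, z in benchmarks.items():
    print(f"{name:>22s}: {z:+.2f} m")
# A distinct, always-washed wave zone exists when wave_top.min() > wave_bottom.max(),
# i.e. when the wave height exceeds the tidal range (wave-dominated shore).
```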

  16. Deep magmatism alters and erodes lithosphere and facilitates decoupling of Rwenzori crustal block

    NASA Astrophysics Data System (ADS)

    Wallner, Herbert; Schmeling, Harro

    2013-04-01

    The title is the answer to the initiating question "Why are the Rwenzori Mountains so high?" posed at the EGU 2008. Our motivation originates in the extreme topography of the Rwenzori Mountains. The strong, cold Proterozoic crustal horst is situated between rift segments of the western branch of the East African Rift System. Ideas of rift-induced delamination (RID) and melt-induced weakening (MIW) have been tested with one- and two-phase flow physics. Numerical model parameter variations and new observations lead to a favoured model with simple and plausible definitions. The results are consistent with different observations within the limits of their comparability, which in turn reduces ambiguity and uncertainty in the model input. The governing equations of the thermo-mechanical physics are the conservation of mass, momentum, energy and composition for a two-phase (matrix-melt) system with nonlinear rheology. A simple solid-solution model determines melting and solidification under consideration of depletion and enrichment. The Finite Difference Method with markers is applied to visco-plastic flow using the streamfunction in an Eulerian formulation in 2D. The Compaction Boussinesq approximation and the high-Prandtl-number approximation are employed. Lateral kinematic boundary conditions provide long-wavelength asthenospheric upwelling and extensional stress conditions. Partial melts are generated in the asthenosphere, extracted above a critical fraction, and emplaced into a given intrusion level. Temperature anomalies positioned beneath the future rifts, the sole specialization to the Rwenzori situation, localize melts which are very effective in weakening the lithosphere. Convection patterns tend to generate dripping instabilities at the lithospheric base; multiple slabs detach and distort uprising asthenosphere; plumes migrate, join and split. In spite of the apparently chaotic flow behaviour, a characteristic recurrence time of high-velocity events (drips, plumes) emerges. Chimneys of increased enrichment develop above the anomalies and evolve into narrow, low-viscosity mechanical decoupling zones. Deep-rooted dynamic forces then affect the surface, producing a vigorous topography. A geodynamic model linking magmatism, mantle dynamics and lithospheric extension qualitatively explains most of the observed phenomena. Depending on physical model parameters we cover the whole spectrum from dripping lithospheric-base instabilities to full break-off of the mantle lithosphere block below the Rwenzoris.

  17. Physics of giant electromagnetic pulse generation in short-pulse laser experiments.

    PubMed

    Poyé, A; Hulin, S; Bailly-Grandvaux, M; Dubois, J-L; Ribolzi, J; Raffestin, D; Bardon, M; Lubrano-Lavaderci, F; D'Humières, E; Santos, J J; Nicolaï, Ph; Tikhonchuk, V

    2015-04-01

    In this paper we describe the physical processes that lead to the generation of giant electromagnetic pulses (GEMPs) at powerful laser facilities. Our study is based on experimental measurements of both the charging of a solid target irradiated by an ultra-short, ultra-intense laser and the detection of the electromagnetic emission in the GHz domain. An unambiguous correlation between the neutralization current in the target holder and the electromagnetic emission shows that the source of the GEMP is the remaining positive charge inside the target after the escape of fast electrons accelerated by the ultra-intense laser. A simple model for calculating this charge in the thick target case is presented. From this model and knowing the geometry of the target holder, it becomes possible to estimate the intensity and the dominant frequencies of the GEMP at any facility.

  18. Symmetry Breaking, Unification, and Theories Beyond the Standard Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nomura, Yasunori

    2009-07-31

    A model was constructed in which the supersymmetric fine-tuning problem is solved without extending the Higgs sector at the weak scale. We have demonstrated that the model can avoid all the phenomenological constraints, while avoiding excessive fine-tuning. We have also studied implications of the model for dark matter physics and collider physics. I have proposed an extremely simple construction for models of gauge mediation. We found that the μ problem can be simply and elegantly solved in a class of models where the Higgs fields couple directly to the supersymmetry breaking sector. We proposed a new way of addressing the flavor problem of supersymmetric theories. We have proposed a new framework for constructing theories of grand unification. We constructed a simple and elegant model of dark matter that explains the excess flux of electrons/positrons. We constructed a model of dark energy in which evolving quintessence-type dark energy is naturally obtained. We studied whether we can find evidence of the multiverse.

  19. Unveiling the mechanisms of dressed-photon-phonon etching based on hierarchical surface roughness measure

    NASA Astrophysics Data System (ADS)

    Naruse, Makoto; Yatsui, Takashi; Nomura, Wataru; Kawazoe, Tadashi; Aida, Masaki; Ohtsu, Motoichi

    2013-02-01

    Dressed-photon-phonon (DPP) etching is a disruptive technology in planarizing material surfaces because it completely eliminates mechanical contact processes. However, adequate metrics for evaluating the surface roughness and the underlying physical mechanisms are still not well understood. Here, we propose a two-dimensional hierarchical surface roughness measure, inspired by the Allan variance, that represents the effectiveness of DPP etching while conserving the original two-dimensional surface topology. Also, we build a simple physical model of DPP etching that agrees well with the experimental observations, which clearly shows the involvement of the intrinsic hierarchical properties of dressed photons, or optical near-fields, in the surface processing.

  20. PlayDoh and Toothpicks and Gummy Bears... OH MY, They're Models!

    NASA Astrophysics Data System (ADS)

    Kolandaivelu, K. P.; Wilson, M. W.; Glesener, G. B.

    2017-12-01

    Simple, everyday items found around the house are often used in geoscience lab activities. Gummy bears and silly putty can model the bending and breaking behaviour of rocks; shaking buildings during an earthquake can be modeled with some Jello, toothpicks, and marshmallows; PlayDoh can be used to demonstrate layers of sedimentary rocks; and even plumbing pipes filled with pebbles and playground sand become miniature physical models of aquifers. When performed correctly, these activities can help students visualize geoscience phenomena or increase students' motivation to pay attention in class, but how do these activities help students develop ways to think like a scientist? "Developing and using models" is one of the important science and engineering practices recommended in the Next Generation Science Standards (NGSS). In this presentation, we will demonstrate a variety of common geoscience lab activities using simple, everyday household items in order to describe ways instructors can help their students develop model-based reasoning skills. Specific areas of interest will be on identifying positive and negative attributes of a model, ways to evaluate the reliability of a model, and how a model can be revised to improve its outcome. We will also outline other kinds of models that can be generated from these lab activities, such as mathematical, graphical, and verbal models. Our goal is to encourage educators to focus more time on helping students develop model-based reasoning skills, which can be used in almost all aspects of everyday life.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Podestà, M., E-mail: mpodesta@pppl.gov; Gorelenkova, M.; Fredrickson, E. D.

    Integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somewhat arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of the predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as internal consistency of the simulations and mode stability through the analysis of the power exchanged between energetic particles and the instabilities.

  2. Statistical approaches to account for missing values in accelerometer data: Applications to modeling physical activity.

    PubMed

    Yue Xu, Selene; Nelson, Sandahl; Kerr, Jacqueline; Godbole, Suneeta; Patterson, Ruth; Merchant, Gina; Abramson, Ian; Staudenmayer, John; Natarajan, Loki

    2018-04-01

    Physical inactivity is a recognized risk factor for many chronic diseases. Accelerometers are increasingly used as an objective means to measure daily physical activity. One challenge in using these devices is missing data due to device nonwear. We used a well-characterized cohort of 333 overweight postmenopausal breast cancer survivors to examine missing data patterns of accelerometer outputs over the day. Based on these observed missingness patterns, we created pseudo-simulated datasets with realistic missing data patterns. We developed statistical methods to design imputation and variance weighting algorithms to account for missing data effects when fitting regression models. Bias and precision of each method were evaluated and compared. Our results indicated that not accounting for missing data in the analysis yielded unstable estimates in the regression analysis. Incorporating variance weights and/or subject-level imputation improved precision by >50%, compared to ignoring missing data. We recommend that these simple, easy-to-implement statistical tools be used to improve the analysis of accelerometer data.
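
    The subject-level imputation idea can be sketched as follows: a missing wear epoch is filled with the same subject's mean count for that hour of day. This illustrates the general approach only; it is not the specific published algorithm, and the data are synthetic.

```python
# Sketch of subject-level imputation for accelerometer nonwear: fill a missing
# epoch with the subject's own mean for that hour of day (illustrative of the
# general idea only, not the specific published algorithm).
import numpy as np

rng = np.random.default_rng(2)
days, hours = 7, 24
counts = rng.poisson(lam=200, size=(days, hours)).astype(float)
counts[rng.random((days, hours)) < 0.15] = np.nan      # simulate 15% nonwear

hourly_mean = np.nanmean(counts, axis=0)                # subject-level hour-of-day profile
imputed = np.where(np.isnan(counts), hourly_mean[np.newaxis, :], counts)

print("missing epochs imputed:", int(np.isnan(counts).sum()))
print("daily totals after imputation:", imputed.sum(axis=1).round(0))
```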

  3. Predictive Rotation Profile Control for the DIII-D Tokamak

    NASA Astrophysics Data System (ADS)

    Wehner, W. P.; Schuster, E.; Boyer, M. D.; Walker, M. L.; Humphreys, D. A.

    2017-10-01

    Control-oriented modeling and model-based control of the rotation profile are employed to build a suitable control capability for aiding rotation-related physics studies at DIII-D. To obtain a control-oriented model, a simplified version of the momentum balance equation is combined with empirical representations of the momentum sources. The control approach is rooted in a Model Predictive Control (MPC) framework to regulate the rotation profile while satisfying constraints associated with the desired plasma stored energy and/or βN limit. Simple modifications allow for alternative control objectives, such as maximizing the plasma rotation while maintaining a specified input torque. Because the MPC approach can explicitly incorporate various types of constraints, this approach is well suited to a variety of control objectives, and therefore serves as a valuable tool for experimental physics studies. Closed-loop TRANSP simulations are presented to demonstrate the effectiveness of the control approach. Supported by the US DOE under DE-SC0010661 and DE-FC02-04ER54698.

  4. An empirical method for approximating stream baseflow time series using groundwater table fluctuations

    NASA Astrophysics Data System (ADS)

    Meshgi, Ali; Schmitter, Petra; Babovic, Vladan; Chui, Ting Fong May

    2014-11-01

    Developing reliable methods to estimate stream baseflow has been a subject of interest due to its importance in catchment response and sustainable watershed management. However, to date, in the absence of complex numerical models, baseflow is most commonly estimated using statistically derived empirical approaches that do not directly incorporate physically-meaningful information. On the other hand, Artificial Intelligence (AI) tools such as Genetic Programming (GP) offer unique capabilities to reduce the complexities of hydrological systems without losing relevant physical information. This study presents a simple-to-use empirical equation to estimate baseflow time series using GP so that minimal data is required and physical information is preserved. A groundwater numerical model was first adopted to simulate baseflow for a small semi-urban catchment (0.043 km2) located in Singapore. GP was then used to derive an empirical equation relating baseflow time series to time series of groundwater table fluctuations, which are relatively easily measured and are physically related to baseflow generation. The equation was then generalized for approximating baseflow in other catchments and validated for a larger vegetation-dominated basin located in the US (24 km2). Overall, this study used GP to propose a simple-to-use equation to predict baseflow time series based on only three parameters: minimum daily baseflow of the entire period, area of the catchment and groundwater table fluctuations. It serves as an alternative approach for baseflow estimation in un-gauged systems when only groundwater table and soil information is available, and is thus complementary to other methods that require discharge measurements.

  5. Real time markerless motion tracking using linked kinematic chains

    DOEpatents

    Luck, Jason P [Arvada, CO; Small, Daniel E [Albuquerque, NM

    2007-08-14

    A markerless method is described for tracking the motion of subjects in a three dimensional environment using a model based on linked kinematic chains. The invention is suitable for tracking robotic, animal or human subjects in real-time using a single computer with inexpensive video equipment, and does not require the use of markers or specialized clothing. A simple model of rigid linked segments is constructed of the subject and tracked using three dimensional volumetric data collected by a multiple camera video imaging system. A physics based method is then used to compute forces to align the model with subsequent volumetric data sets in real-time. The method is able to handle occlusion of segments and accommodates joint limits, velocity constraints, and collision constraints and provides for error recovery. The method further provides for elimination of singularities in Jacobian based calculations, which has been problematic in alternative methods.

  6. Quantifying control effort of biological and technical movements: an information-entropy-based approach.

    PubMed

    Haeufle, D F B; Günther, M; Wunner, G; Schmitt, S

    2014-01-01

    In biomechanics and biorobotics, muscles are often associated with reduced movement control effort and simplified control compared to technical actuators. This is based on evidence that the nonlinear muscle properties positively influence movement control. It is, however, open how to quantify the simplicity aspect of control effort and compare it between systems. Physical measures, such as energy consumption, stability, or jerk, have already been applied to compare biological and technical systems. Here a physical measure of control effort based on information entropy is presented. The idea is that control is simpler if a specific movement is generated with less processed sensor information, depending on the control scheme and the physical properties of the systems being compared. By calculating the Shannon information entropy of all sensor signals required for control, an information cost function can be formulated allowing the comparison of models of biological and technical control systems. Exemplarily applied to (bio-)mechanical models of hopping, the method reveals that the required information for generating hopping with a muscle driven by a simple reflex control scheme is only I=32 bits versus I=660 bits with a DC motor and a proportional differential controller. This approach to quantifying control effort captures the simplicity of a control scheme and can be used to compare completely different actuators and control approaches.
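
    The core quantity is the Shannon entropy of each (quantized) sensor signal the controller needs, summed into an information cost. A minimal sketch with a synthetic sensor signal and an assumed 32-bin quantization is shown below.

```python
# Sketch of an information-entropy control-effort measure: quantize each sensor
# signal needed by the controller and sum the Shannon entropies (synthetic signal).
import numpy as np

def shannon_entropy_bits(signal, n_bins=32):
    counts, _ = np.histogram(signal, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(3)
t = np.linspace(0.0, 2.0, 2000)
leg_length = 1.0 + 0.05 * np.sin(2 * np.pi * 3 * t) + 0.005 * rng.normal(size=t.size)

sensors = {"leg length": leg_length}       # a simple reflex controller might need only this
total_bits = sum(shannon_entropy_bits(s) for s in sensors.values())
print(f"information cost I ~ {total_bits:.1f} bits for the sampled sensor set")
```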

  7. Method for the simulation of blood platelet shape and its evolution during activation

    PubMed Central

    Muliukov, Artem R.; Litvinenko, Alena L.; Nekrasov, Vyacheslav M.; Chernyshev, Andrei V.; Maltsev, Valeri P.

    2018-01-01

    We present a simple physically based quantitative model of blood platelet shape and its evolution during agonist-induced activation. The model is based on the consideration of two major cytoskeletal elements: the marginal band of microtubules and the submembrane cortex. Mathematically, we consider the problem of minimization of surface area constrained to confine the marginal band and a certain cellular volume. For resting platelets, the marginal band appears as a peripheral ring, allowing for the analytical solution of the minimization problem. Upon activation, the marginal band coils out of plane and forms 3D convoluted structure. We show that its shape is well approximated by an overcurved circle, a mathematical concept of closed curve with constant excessive curvature. Possible mechanisms leading to such marginal band coiling are discussed, resulting in simple parametric expression for the marginal band shape during platelet activation. The excessive curvature of marginal band is a convenient state variable which tracks the progress of activation. The cell surface is determined using numerical optimization. The shapes are strictly mathematically defined by only three parameters and show good agreement with literature data. They can be utilized in simulation of platelets interaction with different physical fields, e.g. for the description of hydrodynamic and mechanical properties of platelets, leading to better understanding of platelets margination and adhesion and thrombus formation in blood flow. It would also facilitate precise characterization of platelets in clinical diagnosis, where a novel optical model is needed for the correct solution of inverse light-scattering problem. PMID:29518073

  8. Phase properties of elastic waves in systems constituted of adsorbed diatomic molecules on the (001) surface of a simple cubic crystal

    NASA Astrophysics Data System (ADS)

    Deymier, P. A.; Runge, K.

    2018-03-01

    A Green's function-based numerical method is developed to calculate the phase of scattered elastic waves in a harmonic model of diatomic molecules adsorbed on the (001) surface of a simple cubic crystal. The phase properties of scattered waves depend on the configuration of the molecules. The configurations of adsorbed molecules on the crystal surface such as parallel chain-like arrays coupled via kinks are used to demonstrate not only linear but also non-linear dependency of the phase on the number of kinks along the chains. Non-linear behavior arises for scattered waves with frequencies in the vicinity of a diatomic molecule resonance. In the non-linear regime, the variation in phase with the number of kinks is formulated mathematically as unitary matrix operations leading to an analogy between phase-based elastic unitary operations and quantum gates. The advantage of elastic based unitary operations is that they are easily realizable physically and measurable.

  9. Multiphase flow in geometrically simple fracture intersections

    USGS Publications Warehouse

    Basagaoglu, H.; Meakin, P.; Green, C.T.; Mathew, M.; ,

    2006-01-01

    A two-dimensional lattice Boltzmann (LB) model with fluid-fluid and solid-fluid interaction potentials was used to study gravity-driven flow in geometrically simple fracture intersections. Simulated scenarios included fluid dripping from a fracture aperture, two-phase flow through intersecting fractures and thin-film flow on smooth and undulating solid surfaces. Qualitative comparisons with recently published experimental findings indicate that for these scenarios the LB model captured the underlying physics reasonably well.

  10. Rapid modeling of complex multi-fault ruptures with simplistic models from real-time GPS: Perspectives from the 2016 Mw 7.8 Kaikoura earthquake

    NASA Astrophysics Data System (ADS)

    Crowell, B.; Melgar, D.

    2017-12-01

    The 2016 Mw 7.8 Kaikoura earthquake is one of the most complex earthquakes in recent history, rupturing across at least 10 disparate faults with varying faulting styles, and exhibiting intricate surface deformation patterns. The complexity of this event has motivated the need for multidisciplinary geophysical studies to get at the underlying source physics to better inform earthquake hazards models in the future. However, events like Kaikoura beg the question of how well (or how poorly) such earthquakes can be modeled automatically in real-time and still satisfy the general public and emergency managers. To investigate this question, we perform a retrospective real-time GPS analysis of the Kaikoura earthquake with the G-FAST early warning module. We first perform simple point source models of the earthquake using peak ground displacement scaling and a coseismic offset based centroid moment tensor (CMT) inversion. We predict ground motions based on these point sources as well as simple finite faults determined from source scaling studies, and validate against true recordings of peak ground acceleration and velocity. Secondly, we perform a slip inversion based upon the CMT fault orientations and forward model near-field tsunami maximum expected wave heights to compare against available tide gauge records. We find remarkably good agreement between recorded and predicted ground motions when using a simple fault plane, with the majority of disagreement in ground motions being attributable to local site effects, not earthquake source complexity. Similarly, the near-field tsunami maximum amplitude predictions match tide gauge records well. We conclude that even though our models for the Kaikoura earthquake are devoid of rich source complexities, the CMT driven finite fault is a good enough "average" source and provides useful constraints for rapid forecasting of ground motion and near-field tsunami amplitudes.
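
    The peak-ground-displacement step of such rapid-magnitude modules can be sketched with the standard scaling form log10(PGD) = A + B*M + C*M*log10(R), inverted for magnitude at each station. The coefficients and station values below are placeholders of roughly the right order, not the published G-FAST regression.

```python
# Sketch of peak-ground-displacement (PGD) magnitude scaling of the form
# log10(PGD) = A + B*M + C*M*log10(R), inverted for M at each station.
# Coefficients are placeholders, NOT the published G-FAST regression values.
import numpy as np

A, B, C = -4.43, 1.05, -0.14       # placeholder coefficients (assumed)

def magnitude_from_pgd(pgd_cm, hypo_dist_km):
    """Solve log10(PGD) = A + B*M + C*M*log10(R) for M."""
    return (np.log10(pgd_cm) - A) / (B + C * np.log10(hypo_dist_km))

stations = [(35.0, 60.0), (18.0, 110.0), (7.0, 190.0)]   # (PGD cm, distance km), illustrative
estimates = [magnitude_from_pgd(p, r) for p, r in stations]
print("per-station M:", [round(m, 2) for m in estimates],
      "| mean:", round(float(np.mean(estimates)), 2))
```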

  11. A simplified lumped model for the optimization of post-buckled beam architecture wideband generator

    NASA Astrophysics Data System (ADS)

    Liu, Weiqun; Formosa, Fabien; Badel, Adrien; Hu, Guangdi

    2017-11-01

    Buckled beam structures are a classical kind of bistable energy harvester attracting increasing interest because of their capability to scavenge energy over a large frequency band in comparison with linear generators. The usual modeling approach uses the Galerkin mode-discretization method, which is relatively complex, while the single-mode simplification lacks accuracy. Design rests on optimizing the features of the energy potential to finally define the physical and geometrical parameters. Therefore, in this paper, a simple lumped model with an explicit relationship between the potential shape and the parameters is proposed to allow efficient design of bistable-beam-based generators. The accuracy of the approximate model is studied and its effectiveness in applications is analyzed. Moreover, an important fact is found: the bending stiffness has little influence on the potential shape at low buckling levels once the cross-sectional area is determined. This feature extends the applicable range of the model by allowing designs with a high moment of inertia. Numerical investigations demonstrate that the proposed model is a simple and reliable design tool. An optimization example using the proposed model is demonstrated with satisfactory performance.

  12. The Millennial model: in search of measurable pools and transformations for modeling soil carbon in the new century

    DOE PAGES

    Abramoff, Rose; Xu, Xiaofeng; Hartman, Melannie; ...

    2017-12-20

    Soil organic carbon (SOC) can be defined by measurable chemical and physical pools, such as mineral-associated carbon, carbon physically entrapped in aggregates, dissolved carbon, and fragments of plant detritus. Yet, most soil models use conceptual rather than measurable SOC pools. What would the traditional pool-based soil model look like if it were built today, reflecting the latest understanding of biological, chemical, and physical transformations in soils? We propose a conceptual model—the Millennial model—that defines pools as measurable entities. First, we discuss relevant pool definitions conceptually and in terms of the measurements that can be used to quantify pool size, formation, and destabilization. Then, we develop a numerical model following the Millennial model conceptual framework to evaluate against the Century model, a widely-used standard for estimating SOC stocks across space and through time. The Millennial model predicts qualitatively similar changes in total SOC in response to single factor perturbations when compared to Century, but different responses to multiple factor perturbations. Finally, we review important conceptual and behavioral differences between the Millennial and Century modeling approaches, and the field and lab measurements needed to constrain parameter values. Here, we propose the Millennial model as a simple but comprehensive framework to model SOC pools and guide measurements for further model development.

  13. The Millennial model: in search of measurable pools and transformations for modeling soil carbon in the new century

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abramoff, Rose; Xu, Xiaofeng; Hartman, Melannie

    Soil organic carbon (SOC) can be defined by measurable chemical and physical pools, such as mineral-associated carbon, carbon physically entrapped in aggregates, dissolved carbon, and fragments of plant detritus. Yet, most soil models use conceptual rather than measurable SOC pools. What would the traditional pool-based soil model look like if it were built today, reflecting the latest understanding of biological, chemical, and physical transformations in soils? We propose a conceptual model—the Millennial model—that defines pools as measurable entities. First, we discuss relevant pool definitions conceptually and in terms of the measurements that can be used to quantify pool size, formation, and destabilization. Then, we develop a numerical model following the Millennial model conceptual framework to evaluate against the Century model, a widely-used standard for estimating SOC stocks across space and through time. The Millennial model predicts qualitatively similar changes in total SOC in response to single factor perturbations when compared to Century, but different responses to multiple factor perturbations. Finally, we review important conceptual and behavioral differences between the Millennial and Century modeling approaches, and the field and lab measurements needed to constrain parameter values. Here, we propose the Millennial model as a simple but comprehensive framework to model SOC pools and guide measurements for further model development.

  14. Physics Notes

    ERIC Educational Resources Information Center

    School Science Review, 1972

    1972-01-01

    Short articles describe a method of introducing the study of simple harmonic motion, and suggest models that are analogues for impedance matching, electrical transformers, and birefringent crystals. (AL)

  15. How human drivers control their vehicle

    NASA Astrophysics Data System (ADS)

    Wagner, P.

    2006-08-01

    The data presented here show that human drivers apply a discrete noisy control mechanism to drive their vehicle. A car-following model built on these observations, together with some physical limitations (crash-freeness, acceleration), leads to non-Gaussian probability distributions in the speed difference and distance which are in good agreement with empirical data. All model parameters have a clear physical meaning and can be measured. Despite its apparent complexity, this model is simple to understand and might serve as a starting point to develop even quantitatively correct models.
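
    A minimal sketch of the class of model described: a follower picks from a small discrete set of accelerations at random and the resulting speed is then clipped by a crash-free bound. The decision weights, step sizes, and bound are invented for illustration and are not the paper's calibrated control mechanism.

```python
import random

def follower_update(gap, v, v_lead, dt=1.0, a_step=0.5, v_max=30.0):
    """One update of a following vehicle: choose a discrete acceleration
    (+a_step, 0, -a_step) at random, biased toward the safe speed, then cap
    the speed so the vehicle cannot reach the leader (crash-free)."""
    safe_speed = min(v_max, max(0.0, gap / dt))   # cannot cover more than the gap
    weights = [2 if v < safe_speed else 1,        # favour accelerating when slow
               2,
               2 if v > safe_speed else 1]        # favour braking when too fast
    a = random.choices([a_step, 0.0, -a_step], weights=weights)[0]
    v = min(max(v + a * dt, 0.0), safe_speed)
    gap += (v_lead - v) * dt
    return gap, v

gap, v = 20.0, 10.0                               # hypothetical initial state (m, m/s)
for _ in range(100):
    gap, v = follower_update(gap, v, v_lead=12.0)
print(gap, v)
```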

  16. An Intelligent and Interactive Simulation and Tutoring Environment for Exploring and Learning Simple Machines

    NASA Astrophysics Data System (ADS)

    Myneni, Lakshman Sundeep

    Students in middle school science classes have difficulty mastering physics concepts such as energy and work, taught in the context of simple machines. Moreover, students' naive conceptions of physics often remain unchanged after completing a science class. To address this problem, I developed an intelligent tutoring system, called the Virtual Physics System (ViPS), which coaches students through problem solving with one class of simple machines, pulley systems. The tutor uses a unique cognition-based approach to teaching simple machines, and includes innovations in three areas. (1) It employs a teaching strategy that focuses on highlighting links among concepts of the domain that are essential for conceptual understanding yet are seldom learned by students. (2) Concepts are taught through a combination of effective human tutoring techniques (e.g., hinting) and simulations. (3) For each student, the system identifies which misconceptions he or she has, from a common set of student misconceptions gathered from domain experts, and tailors tutoring to match the correct line of scientific reasoning regarding the misconceptions. ViPS was implemented as a platform on which students can design and simulate pulley system experiments, integrated with a constraint-based tutor that intervenes when students make errors during problem solving to teach and help them. ViPS has a web-based client-server architecture, and has been implemented using Java technologies. ViPS is different from existing physics simulations and tutoring systems due to several original features. (1) It is the first system to integrate a simulation-based virtual experimentation platform with an intelligent tutoring component. (2) It uses a novel approach, based on Bayesian networks, to help students construct correct pulley systems for experimental simulation. (3) It identifies student misconceptions based on a novel decision tree applied to student pretest scores, and tailors tutoring to individual students based on detected misconceptions. ViPS has been evaluated through usability and usefulness experiments with undergraduate engineering students taking their first college-level engineering physics course and undergraduate pre-service teachers taking their first college-level physics course. These experiments demonstrated that ViPS is highly usable and effective. Students using ViPS reduced their misconceptions, and students conducting virtual experiments in ViPS learned more than students who conducted experiments with physical pulley systems. Interestingly, it was also found that college students exhibited many of the same misconceptions that have been identified in middle school students.

  17. A Simple Way of Modeling the Expansion of the Universe: What Does Light Tell Us?

    ERIC Educational Resources Information Center

    Coban, Gul Unal; Sengoren, Serap Kaya

    2011-01-01

    The purpose of this activity is to model the expansion of the universe by investigating the behavior of water waves. It is designed for students in the upper grades of physics and physical science who are learning about the wave nature of light and are ready to discover such important questions about science. The article first explains the Doppler…

  18. Simple Spectral Lines Data Model Version 1.0

    NASA Astrophysics Data System (ADS)

    Osuna, Pedro; Salgado, Jesus; Guainazzi, Matteo; Dubernet, Marie-Lise; Roueff, Evelyne; Osuna, Pedro; Salgado, Jesus

    2010-12-01

    This document presents a Data Model to describe Spectral Line Transitions in the context of the Simple Line Access Protocol defined by the IVOA (cf. Ref [13], IVOA Simple Line Access Protocol). The main objective of the model is to integrate with and support the Simple Line Access Protocol, with which it forms a compact unit. This integration allows seamless access to Spectral Line Transitions available worldwide in the VO context. This model does not provide a complete description of Atomic and Molecular Physics, whose scope is outside this document. In the astrophysical sense, a line is considered the result of a transition between two energy levels. On the basis of this assumption, a whole set of objects and attributes has been derived to properly define the information necessary to describe lines appearing in astrophysical contexts. The document has been written taking into account available information from many different line data providers (see the acknowledgments section).

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Podesta, M.; Gorelenkova, M.; Fredrickson, E. D.

    Here, integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somewhat arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of the predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as internal consistency of the simulations and mode stability through the analysis of the power exchanged between energetic particles and the instabilities.

  20. Fourier Spectroscopy: A Simple Analysis Technique

    ERIC Educational Resources Information Center

    Oelfke, William C.

    1975-01-01

    Presents a simple method of analysis in which the student can integrate, point by point, any interferogram to obtain its Fourier transform. The manual technique requires no special equipment and is based on relationships that most undergraduate physics students can derive from the Fourier integral equations. (Author/MLH)
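
    Assuming the manual procedure amounts to a point-by-point cosine transform of the interferogram, the short sketch below performs the same sum numerically for a hypothetical monochromatic source. The sampling, units, and function name are illustrative.

```python
import numpy as np

def spectrum_from_interferogram(path_diff, intensity, wavenumbers):
    """Point-by-point evaluation of B(k) = sum_x I(x) cos(2*pi*k*x) dx,
    the cosine transform a student would integrate by hand.
    path_diff in cm, wavenumbers in cm^-1."""
    x = np.asarray(path_diff, dtype=float)
    I = np.asarray(intensity, dtype=float) - np.mean(intensity)  # drop DC term
    dx = x[1] - x[0]
    return np.array([np.sum(I * np.cos(2 * np.pi * k * x)) * dx
                     for k in wavenumbers])

# Hypothetical monochromatic source at 500 cm^-1
x = np.linspace(0.0, 0.2, 2001)            # optical path difference (cm)
I = 1.0 + np.cos(2 * np.pi * 500.0 * x)    # idealized interferogram
k = np.arange(400.0, 600.0, 1.0)
B = spectrum_from_interferogram(x, I, k)
print(k[B.argmax()])                       # peaks near 500 cm^-1
```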

  1. Physically-Based Assessment of Intrinsic Groundwater Resource Vulnerability in AN Urban Catchment

    NASA Astrophysics Data System (ADS)

    Graf, T.; Therrien, R.; Lemieux, J.; Molson, J. W.

    2013-12-01

    Several methods exist to assess intrinsic groundwater (re)source vulnerability for the purpose of sustainable groundwater management and protection. However, several methods are empirical and limited in their application to specific types of hydrogeological systems. Recent studies suggest that a physically-based approach could be better suited to provide a general, conceptual and operational basis for groundwater vulnerability assessment. A novel method for physically-based assessment of intrinsic aquifer vulnerability is currently under development and is being tested to explore the potential of an integrated modelling approach, combining groundwater travel time probability and future scenario modelling in conjunction with the fully integrated HydroGeoSphere model. To determine the intrinsic groundwater resource vulnerability, a fully coupled 2D surface water and 3D variably-saturated groundwater flow model, in conjunction with a 3D geological model (GoCAD), has been developed for a case study of the Rivière Saint-Charles (Québec, Canada) regional-scale urban watershed. The model has been calibrated under transient flow conditions for the hydrogeological, variably-saturated subsurface system, coupled with the overland flow zone, taking into account monthly recharge variation and evapotranspiration. To better determine the intrinsic groundwater vulnerability, two independent approaches are considered and subsequently combined in a simple, holistic multi-criteria decision analysis. Most data for the model come from an extensive hydrogeological database for the watershed, whereas data gaps have been filled via field tests and literature review. The subsurface is composed of nine hydrofacies, ranging from unconsolidated fluvioglacial sediments to low-permeability bedrock. The overland flow zone is divided into five major zones (Urban, Rural, Forest, River and Lake) to simulate the differences in land use, whereas the unsaturated zone is represented via the model's integrated van Genuchten function. The model setup and optimisation turn out to be the most challenging part because of the non-trivial nature (due to the highly non-linear PDEs) of the coupling procedure between the surface and subsurface domains, while keeping realistic parameter ranges and obtaining realistic simulation results in both domains. The model calibration is based on water level monitoring as well as daily mean river discharge measurements at different gauge stations within the catchment. It is intended to create multiple model outcomes for the numerical modelling of groundwater vulnerability to take into account uncertainty in the model input data. The next step of the overall vulnerability assessment consists of modelling future vulnerability scenario(s), applying realistic changes to the model and using PEST with SENSAN for subsequent sensitivity analysis. The PEST model could also potentially be used for a model recalibration as a function of the model parameter sensitivities (simple perturbation method). Preliminary results show a good fit between observed and simulated water levels and hydrographs. However, the simulated water depth in the overland flow domain, as well as the simulated saturation distribution in the porous medium domain, still shows room for improvement of the numerical model.

  2. Colloid Transport in Saturated Porous Media: Elimination of Attachment Efficiency in a New Colloid Transport Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landkamer, Lee L.; Harvey, Ronald W.; Scheibe, Timothy D.

    A new colloid transport model is introduced that is conceptually simple but captures the essential features of complicated attachment and detachment behavior of colloids when conditions of secondary minimum attachment exist. This model eliminates the empirical concept of collision efficiency; the attachment rate is computed directly from colloid filtration theory. Also, a new paradigm for colloid detachment based on colloid population heterogeneity is introduced. Assuming the dispersion coefficient can be estimated from tracer behavior, this model has only two fitting parameters: (1) the fraction of colloids that attach irreversibly and (2) the rate at which reversibly attached colloids leave the surface. These two parameters were correlated to physical parameters that control colloid transport such as the depth of the secondary minimum and pore water velocity. Given this correlation, the model serves as a heuristic tool for exploring the influence of physical parameters such as surface potential and fluid velocity on colloid transport. This model can be extended to heterogeneous systems characterized by both primary and secondary minimum deposition by simply increasing the fraction of colloids that attach irreversibly.
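
    The two-parameter structure can be sketched as a small kinetic update in which the attachment rate is taken as given from colloid filtration theory and the fitting parameters are the irreversibly attached fraction and the release rate of reversibly attached colloids. Advection and dispersion are omitted, and all names and values are illustrative assumptions rather than the published model.

```python
def colloid_kinetics_step(c, s_rev, s_irr, k_att, f_irr, k_det, dt):
    """One kinetic step for mobile colloids (c), reversibly attached colloids
    (s_rev) and irreversibly attached colloids (s_irr). k_att would come from
    colloid filtration theory; f_irr and k_det are the two fitting parameters."""
    attach = k_att * c * dt
    detach = k_det * s_rev * dt
    c += detach - attach
    s_irr += f_irr * attach
    s_rev += (1.0 - f_irr) * attach - detach
    return c, s_rev, s_irr

# Hypothetical parameters: 30% of attaching colloids stick permanently
state = (1.0, 0.0, 0.0)
for _ in range(1000):
    state = colloid_kinetics_step(*state, k_att=0.01, f_irr=0.3, k_det=0.002, dt=1.0)
print(state)
```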

  3. Development of a near-wall Reynolds-stress closure based on the SSG model for the pressure strain

    NASA Technical Reports Server (NTRS)

    So, R. M. C.; Aksoy, H.; Sommer, T. P.; Yuan, S. P.

    1994-01-01

    In this research, a near-wall second-order closure based on the Speziale et al. (1991) (SSG) model for the pressure-strain term is proposed. Unlike the LRR model, the SSG model is quasi-nonlinear and yields better results when applied to calculate rotating homogeneous turbulent flows. An asymptotic analysis near the wall is applied to both the exact and modeled equations so that appropriate near-wall corrections to the SSG model and the modeled dissipation-rate equation can be derived to satisfy the physical wall boundary conditions as well as the asymptotic near-wall behavior of the exact equations. Two additional model constants are introduced and they are determined by calibrating against one set of near-wall channel flow data. Once determined, their values are found to remain constant irrespective of the type of flow examined. The resultant model is used to calculate simple turbulent flows, near-separating turbulent flows, complex turbulent flows and compressible turbulent flows with a freestream Mach number as high as 10. In all the flow cases investigated, the calculated results are in good agreement with data. This new near-wall model is less ad hoc, physically and mathematically more sound, and eliminates the empiricism introduced by Zhang. Therefore, it is quite general, as demonstrated by the good agreement achieved with measurements covering a wide range of Reynolds numbers and Mach numbers.

  4. Testing physical models for dipolar asymmetry with CMB polarization

    NASA Astrophysics Data System (ADS)

    Contreras, D.; Zibin, J. P.; Scott, D.; Banday, A. J.; Górski, K. M.

    2017-12-01

    The cosmic microwave background (CMB) temperature anisotropies exhibit a large-scale dipolar power asymmetry. To determine whether this is due to a real, physical modulation or is simply a large statistical fluctuation requires the measurement of new modes. Here we forecast how well CMB polarization data from Planck and future experiments will be able to confirm or constrain physical models for modulation. Fitting several such models to the Planck temperature data allows us to provide predictions for polarization asymmetry. While for some models and parameters Planck polarization will decrease error bars on the modulation amplitude by only a small percentage, we show, importantly, that cosmic-variance-limited (and in some cases even Planck) polarization data can decrease the errors by considerably more than the factor of √2 expected from simple ℓ-space arguments. We project that if the primordial fluctuations are truly modulated (with parameters as indicated by Planck temperature data) then Planck will be able to make a 2σ detection of the modulation model with 20%-75% probability, increasing to 45%-99% when cosmic-variance-limited polarization is considered. We stress that these results are quite model dependent. Cosmic variance in temperature is important: combining statistically isotropic polarization with temperature data will spuriously increase the significance of the temperature signal with 30% probability for Planck.

  5. Simple model of hydrophobic hydration.

    PubMed

    Lukšič, Miha; Urbic, Tomaz; Hribar-Lee, Barbara; Dill, Ken A

    2012-05-31

    Water is an unusual liquid in its solvation properties. Here, we model the process of transferring a nonpolar solute into water. Our goal was to capture the physical balance between water's hydrogen bonding and van der Waals interactions in a model that is simple enough to be nearly analytical and not heavily computational. We develop a 2-dimensional Mercedes-Benz-like model of water with which we compute the free energy, enthalpy, entropy, and the heat capacity of transfer as a function of temperature, pressure, and solute size. As validation, we find that this model gives the same trends as Monte Carlo simulations of the underlying 2D model and gives qualitative agreement with experiments. The advantages of this model are that it gives simple insights and that computational time is negligible. It may provide a useful starting point for developing more efficient and more realistic 3D models of aqueous solvation.

  6. Simple cellular automaton model for traffic breakdown, highway capacity, and synchronized flow.

    PubMed

    Kerner, Boris S; Klenov, Sergey L; Schreckenberg, Michael

    2011-10-01

    We present a simple cellular automaton (CA) model for two-lane roads explaining the physics of traffic breakdown, highway capacity, and synchronized flow. The model consists of the rules "acceleration," "deceleration," "randomization," and "motion" of the Nagel-Schreckenberg CA model as well as "overacceleration through lane changing to the faster lane," "comparison of vehicle gap with the synchronization gap," and "speed adaptation within the synchronization gap" of Kerner's three-phase traffic theory. We show that these few rules of the CA model can appropriately simulate fundamental empirical features of traffic breakdown and highway capacity found in traffic data measured over years in different countries, like characteristics of synchronized flow, the existence of the spontaneous and induced breakdowns at the same bottleneck, and associated probabilistic features of traffic breakdown and highway capacity. Single-vehicle data derived in model simulations show that synchronized flow first occurs and then self-maintains due to a spatiotemporal competition between speed adaptation to a slower speed of the preceding vehicle and passing of this slower vehicle. We find that the application of simple dependences of randomization probability and synchronization gap on driving situation allows us to explain the physics of moving synchronized flow patterns and the pinch effect in synchronized flow as observed in real traffic data.
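
    For reference, the four classic Nagel-Schreckenberg rules on which the model builds (acceleration, deceleration, randomization, motion) can be written in a few lines for a single lane. The three-phase extensions described in the paper (overacceleration through lane changing, the synchronization gap, speed adaptation) are not included in this sketch.

```python
import random

def nasch_step(positions, speeds, road_len, v_max=5, p_slow=0.3):
    """One parallel update of the single-lane Nagel-Schreckenberg CA on a
    circular road of road_len cells; positions are cell indices."""
    n = len(positions)
    order = sorted(range(n), key=lambda i: positions[i])
    new_speeds = list(speeds)
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % n]
        gap = (positions[ahead] - positions[i] - 1) % road_len
        v = min(speeds[i] + 1, v_max)           # rule 1: acceleration
        v = min(v, gap)                         # rule 2: deceleration (no collision)
        if v > 0 and random.random() < p_slow:  # rule 3: randomization
            v -= 1
        new_speeds[i] = v
    new_positions = [(positions[i] + new_speeds[i]) % road_len for i in range(n)]
    return new_positions, new_speeds

pos, vel = [0, 10, 20, 30], [0, 0, 0, 0]        # four vehicles on a 100-cell ring
for _ in range(50):
    pos, vel = nasch_step(pos, vel, road_len=100)
print(pos, vel)
```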

  7. Real-Time Climate Simulations in the Interactive 3D Game Universe Sandbox ²

    NASA Astrophysics Data System (ADS)

    Goldenson, N. L.

    2014-12-01

    Exploration in an open-ended computer game is an engaging way to explore climate and climate change. Everyone can explore physical models with real-time visualization in the educational simulator Universe Sandbox ² (universesandbox.com/2), which includes basic climate simulations on planets. I have implemented a time-dependent, one-dimensional meridional heat transport energy balance model to run and be adjustable in real time in the midst of a larger simulated system. Universe Sandbox ² is based on the original game - at its core a gravity simulator - with other new physically-based content for stellar evolution, and handling collisions between bodies. Existing users are mostly science enthusiasts in informal settings. We believe that this is the first climate simulation to be implemented in a professionally developed computer game with modern 3D graphical output in real time. The type of simple climate model we've adopted helps us depict the seasonal cycle and the more drastic changes that come from changing the orbit or other external forcings. Users can alter the climate as the simulation is running by altering the star(s) in the simulation, dragging to change orbits and obliquity, adjusting the climate simulation parameters directly or changing other properties like CO2 concentration that affect the model parameters in representative ways. Ongoing visuals of the expansion and contraction of sea ice and snow-cover respond to the temperature calculations, and make it accessible to explore a variety of scenarios and intuitive to understand the output. Variables like temperature can also be graphed in real time. We balance computational constraints with the ability to capture the physical phenomena we wish to visualize, giving everyone access to a simple open-ended meridional energy balance climate simulation to explore and experiment with. The software lends itself to labs at a variety of levels about climate concepts including seasons, the Greenhouse effect, reservoirs and flows, albedo feedback, Snowball Earth, climate sensitivity, and model experiment design. Climate calculations are extended to Mars with some modifications to the Earth climate component, and could be used in lessons about the Mars atmosphere, and exploring scenarios of Mars climate history.
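
    A bare-bones example of the model family described above: a zonally averaged, diffusive energy balance model with a linearized outgoing-longwave term, stepped explicitly in time. The parameter values are textbook-style placeholders rather than those used in Universe Sandbox ², and the polar boundary treatment is deliberately crude.

```python
import numpy as np

# Toy 1-D energy balance model: C dT/dt = S(1 - albedo) - (A + B*T) + D d2T/dx2
nlat = 90
x = np.sin(np.deg2rad(np.linspace(-89.0, 89.0, nlat)))   # x = sin(latitude)
dx = x[1] - x[0]

S0, albedo = 1361.0, 0.3        # solar constant (W m^-2), planetary albedo
A, B = 210.0, 2.0               # linearized OLR = A + B*T (T in deg C)
D = 0.6                         # meridional diffusion coefficient (W m^-2 K^-1)
C = 4.0e7                       # column heat capacity (J m^-2 K^-1)
S = (S0 / 4.0) * (1.0 - 0.48 * (3.0 * x**2 - 1.0) / 2.0)  # annual-mean insolation

T = np.zeros(nlat)              # temperature (deg C)
dt = 3600.0                     # one hour (keeps the explicit scheme stable)
for _ in range(24 * 365 * 10):  # spin up for ten model years
    d2T = np.zeros(nlat)
    d2T[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    d2T[0], d2T[-1] = d2T[1], d2T[-2]          # crude no-flux poles
    T += (dt / C) * (S * (1.0 - albedo) - (A + B * T) + D * d2T)

print(T.max(), T.min())         # warm equator, cold poles
```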

  8. Modelling students' knowledge organisation: Genealogical conceptual networks

    NASA Astrophysics Data System (ADS)

    Koponen, Ismo T.; Nousiainen, Maija

    2018-04-01

    Learning scientific knowledge is largely based on understanding what its key concepts are and how they are related. The relational structure of concepts also affects how concepts are introduced in teaching scientific knowledge. We model here how students organise their knowledge when they represent their understanding of how physics concepts are related. The model is based on the assumptions that students use simple basic linking-motifs in introducing new concepts and mostly relate them to concepts that were introduced a few steps earlier, i.e. following a genealogical ordering. The resulting genealogical networks have relatively high local clustering coefficients of nodes but otherwise resemble networks with an identical node degree distribution and random linking between nodes (i.e. the configuration model). However, a few key nodes with a special structural role emerge, and these nodes have higher-than-average communicability betweenness centralities. These features agree with the empirically found properties of students' concept networks.

  9. Phenomenological model for transient deformation based on state variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jackson, M S; Cho, C W; Alexopoulos, P

    The state variable theory of Hart, while providing a unified description of plasticity-dominated deformation, exhibits deficiencies when it is applied to transient deformation phenomena at stresses below yield. It appears that the description of stored anelastic strain is oversimplified. Consideration of a simple physical picture based on continuum dislocation pileups suggests that the neglect of weak barriers to dislocation motion is the source of these inadequacies. An appropriately modified description incorporating such barriers then allows the construction of a macroscopic model including transient effects. Although the flow relations for the microplastic element required in the new theory are not known, tentative assignments may be made for such functions. The model then exhibits qualitatively correct behavior when tensile, loading-unloading, reverse loading, and load relaxation tests are simulated. Experimental procedures are described for determining the unknown parameters and functions in the new model.

  10. The galaxy clustering crisis in abundance matching

    NASA Astrophysics Data System (ADS)

    Campbell, Duncan; van den Bosch, Frank C.; Padmanabhan, Nikhil; Mao, Yao-Yuan; Zentner, Andrew R.; Lange, Johannes U.; Jiang, Fangzhou; Villarreal, Antonio

    2018-06-01

    Galaxy clustering on small scales is significantly underpredicted by sub-halo abundance matching (SHAM) models that populate (sub-)haloes with galaxies based on peak halo mass, Mpeak. SHAM models based on the peak maximum circular velocity, Vpeak, have had much better success. The primary reason Mpeak-based models fail is the relatively low abundance of satellite galaxies produced in these models compared to those based on Vpeak. Despite success in predicting clustering, a simple Vpeak-based SHAM model results in predictions for galaxy growth that are at odds with observations. We evaluate three possible remedies that could `save' mass-based SHAM: (1) SHAM models require a significant population of `orphan' galaxies as a result of artificial disruption/merging of sub-haloes in modern high-resolution dark matter simulations; (2) satellites must grow significantly after their accretion; and (3) stellar mass is significantly affected by halo assembly history. No solution is entirely satisfactory. However, regardless of the particulars, we show that popular SHAM models based on Mpeak cannot be complete physical models as presented. Either Vpeak truly is a better predictor of stellar mass at z ~ 0 and it remains to be seen how the correlation between stellar mass and Vpeak comes about, or SHAM models are missing vital component(s) that significantly affect galaxy clustering.

  11. Measurement of Muon Neutrino Quasielastic Scattering on Carbon

    NASA Astrophysics Data System (ADS)

    Aguilar-Arevalo, A. A.; Bazarko, A. O.; Brice, S. J.; Brown, B. C.; Bugel, L.; Cao, J.; Coney, L.; Conrad, J. M.; Cox, D. C.; Curioni, A.; Djurcic, Z.; Finley, D. A.; Fleming, B. T.; Ford, R.; Garcia, F. G.; Garvey, G. T.; Green, C.; Green, J. A.; Hart, T. L.; Hawker, E.; Imlay, R.; Johnson, R. A.; Kasper, P.; Katori, T.; Kobilarcik, T.; Kourbanis, I.; Koutsoliotas, S.; Laird, E. M.; Link, J. M.; Liu, Y.; Liu, Y.; Louis, W. C.; Mahn, K. B. M.; Marsh, W.; Martin, P. S.; McGregor, G.; Metcalf, W.; Meyers, P. D.; Mills, F.; Mills, G. B.; Monroe, J.; Moore, C. D.; Nelson, R. H.; Nienaber, P.; Ouedraogo, S.; Patterson, R. B.; Perevalov, D.; Polly, C. C.; Prebys, E.; Raaf, J. L.; Ray, H.; Roe, B. P.; Russell, A. D.; Sandberg, V.; Schirato, R.; Schmitz, D.; Shaevitz, M. H.; Shoemaker, F. C.; Smith, D.; Sorel, M.; Spentzouris, P.; Stancu, I.; Stefanski, R. J.; Sung, M.; Tanaka, H. A.; Tayloe, R.; Tzanov, M.; van de Water, R.; Wascko, M. O.; White, D. H.; Wilking, M. J.; Yang, H. J.; Zeller, G. P.; Zimmerman, E. D.

    2008-01-01

    The observation of neutrino oscillations is clear evidence for physics beyond the standard model. To make precise measurements of this phenomenon, neutrino oscillation experiments, including MiniBooNE, require an accurate description of neutrino charged current quasielastic (CCQE) cross sections to predict signal samples. Using a high-statistics sample of νμ CCQE events, MiniBooNE finds that a simple Fermi gas model, with appropriate adjustments, accurately characterizes the CCQE events observed in a carbon-based detector. The extracted parameters include an effective axial mass, MAeff = 1.23 ± 0.20 GeV, that describes the four-momentum dependence of the axial-vector form factor of the nucleon, and a Pauli-suppression parameter, κ = 1.019 ± 0.011. Such a modified Fermi gas model may also be used by future accelerator-based experiments measuring neutrino oscillations on nuclear targets.

  12. Approximate method for calculating convective heat flux on the surface of bodies of simple geometric shapes

    NASA Astrophysics Data System (ADS)

    Kuzenov, V. V.; Ryzhkov, S. V.

    2017-02-01

    The paper formulates an engineering physical-mathematical model of the aerothermodynamics of a hypersonic flight vehicle (HFV) in laminar and turbulent boundary layers (the model is designed for approximate estimates of the convective heat flux over the speed range M = 6-28 and the altitude range H = 20-80 km). 2D calculations of convective heat fluxes for bodies of simple geometric form (individual elements of the HFV design) are presented.

  13. Opportunities for Undergraduates to Engage in Research Using Seismic Data and Data Products

    NASA Astrophysics Data System (ADS)

    Taber, J. J.; Hubenthal, M.; Benoit, M. H.

    2014-12-01

    Introductory Earth science classes can become more interactive through the use of a range of seismic data and models that are available online, which students can use to conduct simple research regarding earthquakes and earth structure. One way to introduce students to these data sets is via a new set of six intro-level classroom activities designed to introduce undergraduates to some of the grand challenges in seismology research. The activities all use real data sets and some require students to collect their own data, either using physical models or via Web sites and Web applications. While the activities are designed to step students through a learning sequence, several of the activities are open-ended and can be expanded to research topics. For example, collecting and analyzing data from a deceptively simple physical model of earthquake behavior can lead students to query a map-based seismicity catalog via the IRIS Earthquake Browser to study seismicity rates and the distribution of earthquake magnitudes, and make predictions about the earthquake hazards in regions of their choosing. In another activity, students can pose their own questions and reach conclusions regarding the correlation between hydraulic fracturing, waste water disposal, and earthquakes. Other data sources are available for students to engage in self-directed research projects. Students with an interest in instrumentation can conduct research relating to instrument calibration and sensitivity using a simple educational seismometer. More advanced students can explore tomographic models of seismic velocity structure, and examine research questions related to earth structure, such as the correlation of topography to crustal thickness, and the fate of subducted slabs. The type of faulting in a region can be explored using a map-based catalog of focal mechanisms, allowing students to analyze the spatial distribution of normal, thrust and strike-slip events in a subduction zone region. For all of these topics and data sets, the societal impact of earthquakes can provide an additional motivation for students to engage in their research. www.iris.edu

  14. A school-based physical activity promotion intervention in children: rationale and study protocol for the PREVIENE Project.

    PubMed

    Tercedor, Pablo; Villa-González, Emilio; Ávila-García, Manuel; Díaz-Piedra, Carolina; Martínez-Baena, Alejandro; Soriano-Maldonado, Alberto; Pérez-López, Isaac José; García-Rodríguez, Inmaculada; Mandic, Sandra; Palomares-Cuadros, Juan; Segura-Jiménez, Víctor; Huertas-Delgado, Francisco Javier

    2017-09-26

    The lack of physical activity and increasing time spent in sedentary behaviours during childhood place importance on developing low-cost, easy-to-implement school-based interventions to increase physical activity among children. The PREVIENE Project will evaluate the effectiveness of five innovative, simple, and feasible interventions (active commuting to/from school, active Physical Education lessons, active school recess, sleep health promotion, and an integrated program incorporating all 4 interventions) to improve physical activity, fitness, anthropometry, sleep health, academic achievement, and health-related quality of life in primary school children. A total of 300 children (grade 3; 8-9 years of age) from six schools in Granada (Spain) will be enrolled in one of the 8-week interventions (one intervention per school; 50 children per school) or a control group (no intervention school; 50 children). Outcomes will include physical activity (measured by accelerometry), physical fitness (assessed using the ALPHA fitness battery), and anthropometry (height, weight and waist circumference). Furthermore, they will include sleep health (measured by accelerometers, a sleep diary, and sleep health questionnaires), academic achievement (grades from the official school records), and health-related quality of life (child and parental questionnaires). To assess the effectiveness of the different interventions on objectively measured physical activity and the other outcomes, a generalized linear model will be used. The PREVIENE Project will provide information about the effectiveness and implementation of the different school-based interventions for physical activity promotion in primary school children.

  15. Applications of statistical and atomic physics to the spectral line broadening and stock markets

    NASA Astrophysics Data System (ADS)

    Volodko, Dmitriy

    The purpose of this investigation is twofold: to apply time correlation function methodology to the theoretical study of the shift of hydrogen and hydrogen-like spectral lines caused by the interaction of electrons and ions with the spectral line emitters (the dipole ionic-electronic shift, DIES), and to describe the behavior of the stock market in terms of a simple physical model simulation that obeys a Lévy statistical distribution, the same distribution as that of the real stock-market index. Using the Generalized Theory of Stark broadening by electrons in plasmas, we identified a new source of the shift of hydrogen and hydrogen-like spectral lines, which we call the dipole ionic-electronic shift (DIES). This shift results from the indirect coupling of electron and ion microfields in plasmas, which is facilitated by the radiating atom/ion. We have shown that the DIES, unlike all previously known shifts, is highly nonlinear and has a different sign for different ranges of plasma parameters. The most favorable conditions for observing the DIES correspond to plasmas of high density but relatively low temperature. For the Balmer-alpha line of hydrogen under the most favorable observational conditions, Ne > 10^18 cm^-3 and T < 2 eV, the DIES has already been confirmed experimentally. Based on the study of the time correlations and of the probability distribution of fluctuations in the stock market, we developed a relatively simple physical model, which simulates the Dow Jones Industrials index and makes short-term (a couple of days) predictions of its trend.

  16. Definitions: Health, Fitness, and Physical Activity.

    ERIC Educational Resources Information Center

    Corbin, Charles B.; Pangrazi, Robert P.; Franks, B. Don

    2000-01-01

    This paper defines a variety of fitness components, using a simple multidimensional hierarchical model that is consistent with recent definitions in the literature. It groups the definitions into two broad categories: product and process. Products refer to states of being such as physical fitness, health, and wellness. They are commonly referred…

  17. DOING Physics--Physics Activities for Groups.

    ERIC Educational Resources Information Center

    Zwicker, Earl, Ed.

    1985-01-01

    Students are challenged to investigate a simple electric motor and to build their own model from a battery, wood block, clips, enameled copper wire, bare wire, and sandpaper. Through trial and error, several discoveries are made, including a substitute commutator and use of a radio to detect motor armature contact changes. (DH)

  18. A simple geometrical model describing shapes of soap films suspended on two rings

    NASA Astrophysics Data System (ADS)

    Herrmann, Felix J.; Kilvington, Charles D.; Wildenberg, Rebekah L.; Camacho, Franco E.; Walecki, Wojciech J.; Walecki, Peter S.; Walecki, Eve S.

    2016-09-01

    We measured and analysed the stability of two types of soap films suspended on two rings using a simple conical-frusta-based model, where we use the common definition of a conical frustum as the portion of a cone that lies between two parallel planes cutting it. Using the frusta-based model we reproduced the well-known results for catenoid surfaces with and without a central disk. We present for the first time a simple conical-frusta-based spreadsheet model of the soap surface. This very simple, elementary, geometrical model produces results that match surprisingly well the experimental data and the known exact analytical solutions. The experiment and the spreadsheet model can be used as a powerful teaching tool for pre-calculus and geometry students.
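
    The frusta idea can be reproduced in a few lines: represent the film as a stack of conical frusta spanning the two rings, sum their lateral areas, and minimize over the intermediate radii. The sketch below uses a generic numerical optimizer instead of a spreadsheet, and the ring radius, separation, and discretization are arbitrary illustrative values.

```python
import numpy as np
from scipy.optimize import minimize

def film_area(radii_inner, r_ring, half_sep):
    """Total lateral area of a stack of conical frusta approximating a film
    between two coaxial rings of radius r_ring separated by 2*half_sep.
    Frustum lateral area: pi*(r1 + r2)*sqrt((r1 - r2)^2 + h^2)."""
    r = np.concatenate(([r_ring], radii_inner, [r_ring]))
    z = np.linspace(-half_sep, half_sep, len(r))
    dz = np.diff(z)
    return np.sum(np.pi * (r[:-1] + r[1:]) * np.sqrt((r[:-1] - r[1:])**2 + dz**2))

r_ring, half_sep, n_frusta = 1.0, 0.4, 20
res = minimize(film_area, x0=np.full(n_frusta - 1, r_ring), args=(r_ring, half_sep))
print(res.fun)   # minimized area; the profile relaxes toward the catenoid r = c*cosh(z/c)
```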

  19. A conformally flat realistic anisotropic model for a compact star

    NASA Astrophysics Data System (ADS)

    Ivanov, B. V.

    2018-04-01

    A physically realistic stellar model with a simple expression for the energy density and a conformally flat interior is found. The relations between the different conditions are used without graphical proofs. It may represent a real pulsar.

  20. IMAGINE: Interstellar MAGnetic field INference Engine

    NASA Astrophysics Data System (ADS)

    Steininger, Theo

    2018-03-01

    IMAGINE (Interstellar MAGnetic field INference Engine) performs inference on generic parametric models of the Galaxy. The modular open source framework uses highly optimized tools and technology such as the MultiNest sampler (ascl:1109.006) and the information field theory framework NIFTy (ascl:1302.013) to create an instance of the Milky Way based on a set of parameters for physical observables, using Bayesian statistics to judge the mismatch between measured data and model prediction. The flexibility of the IMAGINE framework allows for simple refitting for newly available data sets and makes state-of-the-art Bayesian methods easily accessible particularly for random components of the Galactic magnetic field.

  1. Can physics help to explain embryonic development? An overview.

    PubMed

    Fleury, V

    2013-10-01

    Recent technical advances including digital imaging and particle image velocimetry can be used to extract the full range of embryonic movements that constitute the instantaneous 'morphogenetic fields' of a developing animal. The final shape of the animal results from the sum over time (integral) of the movements that make up the velocity fields of all the tissue constituents. In vivo microscopy can be used to capture the details of vertebrate development at the earliest embryonic stages. The movements thus observed can be quantitatively compared to physical models that provide velocity fields based on simple hypotheses about the nature of living matter (a visco-elastic gel). This approach has cast new light on the interpretation of embryonic movement, folding, and organisation. It has established that several major discontinuities in development are simple physical changes in boundary conditions. In other words, with no change in biology, the physical consequences of collisions between folds largely explain the morphogenesis of the major structures (such as the head). Other discontinuities result from changes in physical conditions, such as bifurcations (changes in physical behaviour beyond specific yield points). For instance, beyond a certain level of stress, a tissue folds, without any new gene being involved. An understanding of the physical features of movement provides insights into the levers that drive evolution; the origin of animals is seen more clearly when viewed in the light of the fundamental physical laws (Newton's principle, the action-reaction law, changes in symmetry-breaking scale). This article describes the genesis of a vertebrate embryo from the shapeless stage (a round mass of tissue) to the development of a small, elongated, bilaterally symmetric structure containing vertebral precursors, hip and shoulder enlargements, and a head. Copyright © 2013. Published by Elsevier Masson SAS.

  2. A survey of commercial object-oriented database management systems

    NASA Technical Reports Server (NTRS)

    Atkins, John

    1992-01-01

    The object-oriented data model is the culmination of over thirty years of database research. Initially, database research focused on the need to provide information in a consistent and efficient manner to the business community. Early data models such as the hierarchical model and the network model met the goal of consistent and efficient access to data and were substantial improvements over simple file mechanisms for storing and accessing data. However, these models required highly skilled programmers to provide access to the data. Consequently, in the early 1970s E.F. Codd, an IBM research computer scientist, proposed a new data model based on the simple mathematical notion of the relation. This model is known as the Relational Model. In the relational model, data is represented in flat tables (or relations) which have no physical or internal links between them. The simplicity of this model fostered the development of powerful but relatively simple query languages that made data directly accessible to the general database user. Except for large, multi-user database systems, a database professional was in general no longer necessary. Database professionals found that traditional data in the form of character data, dates, and numeric data were easily represented and managed via the relational model. Commercial relational database management systems proliferated and the performance of relational databases improved dramatically. However, there was a growing community of potential database users whose needs were not met by the relational model. These users needed to store data with data types not available in the relational model and required a far richer modelling environment than that provided by the relational model. Indeed, the complexity of the objects to be represented in the model mandated a new approach to database technology. The Object-Oriented Model was the result.

  3. Evaluating CONUS-Scale Runoff Simulation across the National Water Model WRF-Hydro Implementation to Disentangle Regional Controls on Streamflow Generation and Model Error Contribution

    NASA Astrophysics Data System (ADS)

    Dugger, A. L.; Rafieeinasab, A.; Gochis, D.; Yu, W.; McCreight, J. L.; Karsten, L. R.; Pan, L.; Zhang, Y.; Sampson, K. M.; Cosgrove, B.

    2016-12-01

    Evaluation of physically-based hydrologic models applied across large regions can provide insight into dominant controls on runoff generation and how these controls vary based on climatic, biological, and geophysical setting. To make this leap, however, we need to combine knowledge of regional forcing skill, model parameter and physics assumptions, and hydrologic theory. If we can successfully do this, we also gain information on how well our current approximations of these dominant physical processes are represented in continental-scale models. In this study, we apply this diagnostic approach to a 5-year retrospective implementation of the WRF-Hydro community model configured for the U.S. National Weather Service's National Water Model (NWM). The NWM is a water prediction model in operations over the contiguous U.S. as of summer 2016, providing real-time estimates and forecasts out to 30 days of streamflow across 2.7 million stream reaches as well as distributed snowpack, soil moisture, and evapotranspiration at 1-km resolution. The WRF-Hydro system permits not only the standard simulation of vertical energy and water fluxes common in continental-scale models, but augments these processes with lateral redistribution of surface and subsurface water, simple groundwater dynamics, and channel routing. We evaluate 5 years of NLDAS-2 precipitation forcing and WRF-Hydro streamflow and evapotranspiration simulation across the contiguous U.S. at a range of spatial (gage, basin, ecoregion) and temporal (hourly, daily, monthly) scales and look for consistencies and inconsistencies in performance in terms of bias, timing, and extremes. Leveraging results from other CONUS-scale hydrologic evaluation studies, we translate our performance metrics into a matrix of likely dominant process controls and error sources (forcings, parameter estimates, and model physics). We test our hypotheses in a series of controlled model experiments on a subset of representative basins from distinct "problem" environments (Southeast U.S. Coastal Plain, Central and Coastal Texas, Northern Plains, and Arid Southwest). The results from these longer-term model diagnostics will inform future improvements in forcing bias correction, parameter calibration, and physics developments in the National Water Model.

  4. Is the negative glow plasma of a direct current glow discharge negatively charged?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bogdanov, E. A.; Saifutdinov, A. I.; Demidov, V. I., E-mail: Vladimir.Demidov@mail.wvu.edu

    A classic problem in gas discharge physics is discussed: what is the sign of the charge density in the negative glow region of a glow discharge? It is shown that the traditional interpretation in textbooks on gas discharge physics, which states that the negative glow plasma is negatively charged, is based on analogies with a simple one-dimensional model of the discharge. Because real glow discharges with a positive column are always two-dimensional, the transverse (radial) term in the divergence of the electric field can produce a non-monotonic axial profile of the charge density in the plasma while maintaining a positive sign. A numerical calculation of a glow discharge is presented, showing a positive space charge in the negative glow under conditions where a one-dimensional model of the discharge would predict a negative space charge.

  5. B-physics anomalies: a guide to combined explanations

    NASA Astrophysics Data System (ADS)

    Buttazzo, Dario; Greljo, Admir; Isidori, Gino; Marzocca, David

    2017-11-01

    Motivated by additional experimental hints of Lepton Flavour Universality violation in B decays, both in charged- and in neutral-current processes, we analyse the ingredients necessary to provide a combined description of these phenomena. By means of an Effective Field Theory (EFT) approach, based on the hypothesis of New Physics coupled predominantly to the third generation of left-handed quarks and leptons, we show how this is possible. We demonstrate, in particular, how to solve the problems posed by electroweak precision tests and direct searches with a rather natural choice of model parameters, within the context of a U(2)q×U(2)ℓ flavour symmetry. We further exemplify the general EFT findings by means of simplified models with explicit mediators in the TeV range: coloured scalar or vector leptoquarks and colourless vectors. Among these, the case of an SU(2)L-singlet vector leptoquark emerges as a particularly simple and successful framework.

  6. Heat transfer from nanoparticles: a corresponding state analysis.

    PubMed

    Merabia, Samy; Shenogin, Sergei; Joly, Laurent; Keblinski, Pawel; Barrat, Jean-Louis

    2009-09-08

    In this contribution, we study situations in which nanoparticles in a fluid are strongly heated, generating high heat fluxes. This situation is relevant to experiments in which a fluid is locally heated by using selective absorption of radiation by solid particles. We first study this situation for different types of molecular interactions, using models for gold particles suspended in octane and in water. As already reported in experiments, very high heat fluxes and temperature elevations (leading eventually to particle destruction) can be observed in such situations. We show that a very simple model based on Lennard-Jones (LJ) interactions captures the essential features of such experiments and that the results for various liquids can be mapped onto the LJ case, provided a physically justified (corresponding-state) choice of parameters is made. Physically, the possibility of sustaining very high heat fluxes is related to the strong curvature of the interface, which inhibits the formation of an insulating vapor film.

  7. Solving a Higgs optimization problem with quantum annealing for machine learning.

    PubMed

    Mott, Alex; Job, Joshua; Vlimant, Jean-Roch; Lidar, Daniel; Spiropulu, Maria

    2017-10-18

    The discovery of Higgs-boson decays in a background of standard-model processes was assisted by machine learning methods. The classifiers used to separate signals such as these from background are trained using highly unerring but not completely perfect simulations of the physical processes involved, often resulting in incorrect labelling of background processes or signals (label noise) and systematic errors. Here we use quantum and classical annealing (probabilistic techniques for approximating the global maximum or minimum of a given function) to solve a Higgs-signal-versus-background machine learning optimization problem, mapped to a problem of finding the ground state of a corresponding Ising spin model. We build a set of weak classifiers based on the kinematic observables of the Higgs decay photons, which we then use to construct a strong classifier. This strong classifier is highly resilient against overtraining and against errors in the correlations of the physical observables in the training data. We show that the resulting quantum and classical annealing-based classifier systems perform comparably to the state-of-the-art machine learning methods that are currently used in particle physics. However, in contrast to these methods, the annealing-based classifiers are simple functions of directly interpretable experimental parameters with clear physical meaning. The annealer-trained classifiers use the excited states in the vicinity of the ground state and demonstrate some advantage over traditional machine learning methods for small training datasets. Given the relative simplicity of the algorithm and its robustness to error, this technique may find application in other areas of experimental particle physics, such as real-time decision making in event-selection problems and classification in neutrino physics.

  8. Solving a Higgs optimization problem with quantum annealing for machine learning

    NASA Astrophysics Data System (ADS)

    Mott, Alex; Job, Joshua; Vlimant, Jean-Roch; Lidar, Daniel; Spiropulu, Maria

    2017-10-01

    The discovery of Higgs-boson decays in a background of standard-model processes was assisted by machine learning methods. The classifiers used to separate signals such as these from background are trained using highly unerring but not completely perfect simulations of the physical processes involved, often resulting in incorrect labelling of background processes or signals (label noise) and systematic errors. Here we use quantum and classical annealing (probabilistic techniques for approximating the global maximum or minimum of a given function) to solve a Higgs-signal-versus-background machine learning optimization problem, mapped to a problem of finding the ground state of a corresponding Ising spin model. We build a set of weak classifiers based on the kinematic observables of the Higgs decay photons, which we then use to construct a strong classifier. This strong classifier is highly resilient against overtraining and against errors in the correlations of the physical observables in the training data. We show that the resulting quantum and classical annealing-based classifier systems perform comparably to the state-of-the-art machine learning methods that are currently used in particle physics. However, in contrast to these methods, the annealing-based classifiers are simple functions of directly interpretable experimental parameters with clear physical meaning. The annealer-trained classifiers use the excited states in the vicinity of the ground state and demonstrate some advantage over traditional machine learning methods for small training datasets. Given the relative simplicity of the algorithm and its robustness to error, this technique may find application in other areas of experimental particle physics, such as real-time decision making in event-selection problems and classification in neutrino physics.

  9. Revising Hydrology of a Land Surface Model

    NASA Astrophysics Data System (ADS)

    Le Vine, Nataliya; Butler, Adrian; McIntyre, Neil; Jackson, Christopher

    2015-04-01

    Land Surface Models (LSMs) are key elements in guiding adaptation to the changing water cycle and the starting points to develop a global hyper-resolution model of the terrestrial water, energy and biogeochemical cycles. However, before this potential is realised, there are some fundamental limitations of LSMs related to how meaningfully hydrological fluxes and stores are represented. An important limitation is the simplistic or non-existent representation of the deep subsurface in LSMs; and another is the lack of connection of LSM parameterisations to relevant hydrological information. In this context, the paper uses a case study of the JULES (Joint UK Land Environmental Simulator) LSM applied to the Kennet region in Southern England. The paper explores the assumptions behind JULES hydrology, adapts the model structure and optimises the coupling with the ZOOMQ3D regional groundwater model. The analysis illustrates how three types of information can be used to improve the model's hydrology: a) observations, b) regionalized information, and c) information from an independent physics-based model. It is found that: 1) coupling to the groundwater model allows realistic simulation of streamflows; 2) a simple dynamic lower boundary improves upon JULES' stationary unit gradient condition; 3) a 1D vertical flow in the unsaturated zone is sufficient; however there is benefit in introducing a simple dual soil moisture retention curve; 4) regionalized information can be used to describe soil spatial heterogeneity. It is concluded that relatively simple refinements to the hydrology of JULES and its parameterisation method can provide a substantial step forward in realising its potential as a high-resolution multi-purpose model.

  10. A-Priori Tuning of Modified Magnussen Combustion Model

    NASA Technical Reports Server (NTRS)

    Norris, A. T.

    2016-01-01

    In the application of CFD to turbulent reacting flows, one of the main limitations to predictive accuracy is the chemistry model. Using a full or skeletal kinetics model may provide good predictive ability, however, at considerable computational cost. Adding the ability to account for the interaction between turbulence and chemistry improves the overall fidelity of a simulation but adds to this cost. An alternative is the use of simple models, such as the Magnussen model, which has negligible computational overhead, but lacks general predictive ability except for cases that can be tuned to the flow being solved. In this paper, a technique will be described that allows the tuning of the Magnussen model for an arbitrary fuel and flow geometry without the need to have experimental data for that particular case. The tuning is based on comparing the results of the Magnussen model and full finite-rate chemistry when applied to perfectly and partially stirred reactor simulations. In addition, a modification to the Magnussen model is proposed that allows the upper kinetic limit for the reaction rate to be set, giving better physical agreement with full kinetic mechanisms. This procedure allows a simple reacting model to be used in a predictive manner, and affords significant savings in computational costs for simulations.
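
    As a reference point for the discussion above, the sketch below evaluates the standard Magnussen eddy-dissipation fuel-consumption rate with an optional kinetic cap of the kind the proposed modification introduces. The constants A and B are the usual textbook defaults and the cell state is hypothetical; the a-priori tuning described in the paper would replace these values.

```python
def magnussen_rate(rho, k, eps, y_fuel, y_ox, y_prod, s,
                   A=4.0, B=0.5, kinetic_cap=None):
    """Eddy-dissipation (Magnussen) mean fuel consumption rate [kg m^-3 s^-1]
    limited by the scarcest of fuel, oxidizer, or hot products, optionally
    capped by a finite-rate kinetic limit. A, B are textbook defaults."""
    mixing_limited = A * rho * (eps / k) * min(y_fuel, y_ox / s,
                                               B * y_prod / (1.0 + s))
    if kinetic_cap is not None:
        return min(mixing_limited, kinetic_cap)
    return mixing_limited

# Hypothetical cell state (rho in kg/m^3, k in m^2/s^2, eps in m^2/s^3)
print(magnussen_rate(rho=0.5, k=10.0, eps=500.0,
                     y_fuel=0.05, y_ox=0.20, y_prod=0.10, s=4.0))
```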

  11. Statistical mechanics of broadcast channels using low-density parity-check codes.

    PubMed

    Nakamura, Kazutaka; Kabashima, Yoshiyuki; Morelos-Zaragoza, Robert; Saad, David

    2003-03-01

    We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time sharing methods when algebraic codes are used. The statistical physics based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC based time sharing codes while the best performance, when received transmissions are optimally decoded, is bounded by the time sharing limit.

  12. Deformation and Fabric in Compacted Clay Soils

    NASA Astrophysics Data System (ADS)

    Wensrich, C. M.; Pineda, J.; Luzin, V.; Suwal, L.; Kisi, E. H.; Allameh-Haery, H.

    2018-05-01

    Hydromechanical anisotropy of clay soils in response to deformation or deposition history is related to the micromechanics of platelike clay particles and their orientations. In this article, we examine the relationship between microstructure, deformation, and moisture content in kaolin clay using a technique based on neutron scattering. This technique allows for the direct characterization of microstructure within representative samples using traditional measures such as orientation density and soil fabric tensor. From this information, evidence for a simple relationship between components of the deviatoric strain tensor and the deviatoric fabric tensor emerges. This relationship may provide a physical basis for future anisotropic constitutive models based on the micromechanics of these materials.

  13. A formulation of multidimensional growth models for the assessment and forecast of technology attributes

    NASA Astrophysics Data System (ADS)

    Danner, Travis W.

    Developing technology systems requires all manner of investment---engineering talent, prototypes, test facilities, and more. Even for simple design problems the investment can be substantial; for complex technology systems, the development costs can be staggering. The profitability of a corporation in a technology-driven industry is crucially dependent on maximizing the effectiveness of research and development investment. Decision-makers charged with allocation of this investment are forced to choose between the further evolution of existing technologies and the pursuit of revolutionary technologies. At risk on the one hand is excessive investment in an evolutionary technology which has only limited availability for further improvement. On the other hand, the pursuit of a revolutionary technology may mean abandoning momentum and the potential for substantial evolutionary improvement resulting from the years of accumulated knowledge. The informed answer to this question, evolutionary or revolutionary, requires knowledge of the expected rate of improvement and the potential a technology offers for further improvement. This research is dedicated to formulating the assessment and forecasting tools necessary to acquire this knowledge. The same physical laws and principles that enable the development and improvement of specific technologies also limit the ultimate capability of those technologies. Researchers have long used this concept as the foundation for modeling technological advancement through extrapolation by analogy to biological growth models. These models are employed to depict technology development as it asymptotically approaches limits established by the fundamental principles on which the technological approach is based. This has proven an effective and accurate approach to modeling and forecasting simple single-attribute technologies. With increased system complexity and the introduction of multiple system objectives, however, the usefulness of this modeling technique begins to diminish. With the introduction of multiple objectives, researchers often abandon technology growth models for scoring models and technology frontiers. While both approaches possess advantages over current growth models for the assessment of multi-objective technologies, each lacks a necessary dimension for comprehensive technology assessment. By collapsing multiple system metrics into a single, non-intuitive technology measure, scoring models provide a succinct framework for multi-objective technology assessment and forecasting. Yet, with no consideration of physical limits, scoring models provide no insight as to the feasibility of a particular combination of system capabilities. They only indicate that a given combination of system capabilities yields a particular score. Conversely, technology frontiers are constructed with the distinct objective of providing insight into the feasibility of system capability combinations. Yet again, upper limits to overall system performance are ignored. Furthermore, the data required to forecast subsequent technology frontiers is often inhibitive. In an attempt to reincorporate the fundamental nature of technology advancement as bound by physical principles, researchers have sought to normalize multi-objective systems whereby the variability of a single system objective is eliminated as a result of changes in the remaining objectives. 
This drastically limits the applicability of the resulting technology model because it is only applicable for a single setting of all other system attributes. Attempts to maintain the interaction between the growth curves of each technical objective of a complex system have thus far been limited to qualitative and subjective consideration. This research proposes the formulation of multidimensional growth models as an approach to simulating the advancement of multi-objective technologies towards their upper limits. Multidimensional growth models were formulated by noticing and exploiting the correlation between technology growth models and technology frontiers. Both are frontiers in actuality. The technology growth curve is a frontier between capability levels of a single attribute and time, while a technology frontier is a frontier between the capability levels of two or more attributes. Multidimensional growth models are formulated by exploiting the mathematical significance of this correlation. The result is a model that can capture both the interaction between multiple system attributes and their expected rates of improvement over time. The fundamental nature of technology development is maintained, and interdependent growth curves are generated for each system metric with minimal data requirements. Being founded on the basic nature of technology advancement, relative to physical limits, the availability for further improvement can be determined for a single metric relative to other system measures of merit. A by-product of this modeling approach is a single n-dimensional technology frontier linking all n system attributes with time. This provides an environment capable of forecasting future system capability in the form of advancing technology frontiers. The ability of a multidimensional growth model to capture the expected improvement of a specific technological approach is dependent on accurately identifying the physical limitations to each pertinent attribute. This research investigates two potential approaches to identifying those physical limits, a physics-based approach and a regression-based approach. The regression-based approach has found limited acceptance among forecasters, although it does show potential for estimating upper limits with a specified degree of uncertainty. Forecasters have long favored physics-based approaches for establishing the upper limit to unidimensional growth models. The task of accurately identifying upper limits has become increasingly difficult with the extension of growth models into multiple dimensions. A lone researcher may be able to identify the physical limitation to a single attribute of a simple system; however, as system complexity and the number of attributes increases, the attention of researchers from multiple fields of study is required. Thus, limit identification is itself an area of research and development requiring some level of investment. Whether estimated by physics or regression-based approaches, predicted limits will always have some degree of uncertainty. This research takes the approach of quantifying the impact of that uncertainty on model forecasts rather than heavily endorsing a single technique to limit identification. In addition to formulating the multidimensional growth model, this research provides a systematic procedure for applying that model to specific technology architectures. 
Researchers and decision-makers are able to investigate the potential for additional improvement within that technology architecture and to estimate the expected cost of each incremental improvement relative to the cost of past improvements. In this manner, multidimensional growth models provide the necessary information to set reasonable program goals for the further evolution of a particular technological approach or to establish the need for revolutionary approaches in light of the constraining limits of conventional approaches.
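    The unidimensional building block behind these growth models is an S-curve that saturates at a physical or regression-estimated upper limit. The sketch below fits such a curve both ways on invented data, which is the simplest illustration of the "remaining headroom" question the thesis addresses; extending this to coupled, multidimensional frontiers is the contribution described above, and none of the numbers below come from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, L, k, t0):
    """S-shaped growth of a technology attribute toward an upper limit L."""
    return L / (1.0 + np.exp(-k * (t - t0)))

# Hypothetical attribute history (year, capability); values are illustrative only.
t = np.array([1990, 1994, 1998, 2002, 2006, 2010, 2014, 2018], dtype=float)
y = np.array([ 5.0,  9.0, 16.0, 27.0, 40.0, 52.0, 60.0, 64.0])

# Case 1: physics-based limit L assumed known (say 70 units); fit only k and t0.
L_phys = 70.0
(k1, t01), _ = curve_fit(lambda t, k, t0: logistic(t, L_phys, k, t0), t, y, p0=[0.1, 2005])

# Case 2: regression-based limit; fit L together with k and t0.
(L2, k2, t02), _ = curve_fit(logistic, t, y, p0=[80.0, 0.1, 2005])

print(f"physics-based limit {L_phys}: remaining headroom {L_phys - y[-1]:.1f}")
print(f"regression-based limit {L2:.1f}: remaining headroom {L2 - y[-1]:.1f}")
```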

  14. Evaporation estimation of rift valley lakes: comparison of models.

    PubMed

    Melesse, Assefa M; Abtew, Wossenu; Dessalegne, Tibebe

    2009-01-01

    Evapotranspiration (ET) accounts for a substantial amount of the water flux in the arid and semi-arid regions of the World. Accurate estimation of ET has been a challenge for hydrologists, mainly because of the spatiotemporal variability of the environmental and physical parameters governing the latent heat flux. In addition, most available ET models depend on intensive meteorological information for ET estimation. Such data are not available at the desired spatial and temporal scales in less developed and remote parts of the world. This limitation has necessitated the development of simple models that are less data intensive and provide ET estimates with an acceptable level of accuracy. A remote sensing approach can also be applied to large areas where meteorological data are not available and field-scale data collection is costly, time consuming, and difficult. For areas like the Rift Valley region of Ethiopia, the applicability of the Simple Method (Abtew Method) of lake evaporation estimation and of a remote sensing-based surface energy balance approach was studied. The Simple Method and remote sensing-based lake evaporation estimates were compared to the Penman, Energy balance, Pan, Radiation and Complementary Relationship Lake Evaporation (CRLE) methods applied in the region. Results indicate a good correspondence of these model outputs to those of the above methods. Comparison of the 1986 and 2000 monthly lake ET from the Landsat images to the Simple and Penman Methods shows that the remote sensing and surface energy balance approach is promising for large-scale applications to understand the spatial variation of the latent heat flux.
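    The "Simple Method" referenced above estimates open-water evaporation from solar radiation alone. The sketch below shows the general form as it is commonly quoted (E = K·Rs/λ); the coefficient value and the radiation input are assumptions for illustration rather than values taken from this study.

```python
def abtew_simple_evaporation(Rs, K=0.53, lam=2.45):
    """Open-water evaporation (mm/day) from a Simple-Method-style relation.

    Rs  : incoming solar radiation [MJ m^-2 day^-1]
    K   : dimensionless calibration coefficient (value assumed here)
    lam : latent heat of vaporisation [MJ kg^-1]
    """
    return K * Rs / lam   # 1 kg of water per m^2 equals a 1 mm depth

# Hypothetical daily radiation value for a Rift Valley lake.
print(f"E = {abtew_simple_evaporation(22.0):.2f} mm/day")
```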

  15. Learning Physical Science through Astronomy Activities: A Comparison between Constructivist and Traditional Approaches in Grades 3-6

    ERIC Educational Resources Information Center

    Ward, R. Bruce; Sadler, Philip M.; Shapiro, Irwin I.

    2008-01-01

    We report on an evaluation of the effectiveness of Project ARIES, an astronomy-based physical science curriculum for upper elementary and middle school children. ARIES students use innovative, simple, and affordable apparatus to carry out a wide range of indoor and outdoor hands-on, discovery-based activities. Student journals and comprehensive…

  16. Physics-based animation of large-scale splashing liquids, elastoplastic solids, and model-reduced flow

    NASA Astrophysics Data System (ADS)

    Gerszewski, Daniel James

    Physical simulation has become an essential tool in computer animation. As the use of visual effects increases, the need for simulating real-world materials increases. In this dissertation, we consider three problems in physics-based animation: large-scale splashing liquids, elastoplastic material simulation, and dimensionality reduction techniques for fluid simulation. Fluid simulation has been one of the greatest successes of physics-based animation, generating hundreds of research papers and a great many special effects over the last fifteen years. However, the animation of large-scale, splashing liquids remains challenging. We show that a novel combination of unilateral incompressibility, mass-full FLIP, and blurred boundaries is extremely well-suited to the animation of large-scale, violent, splashing liquids. Materials that incorporate both plastic and elastic deformations, also referred to as elastoplastic materials, are frequently encountered in everyday life. Methods for animating such common real-world materials are useful for effects practitioners and have been successfully employed in films. We describe a point-based method for animating elastoplastic materials. Our primary contribution is a simple method for computing the deformation gradient for each particle in the simulation. Given the deformation gradient, we can apply arbitrary constitutive models and compute the resulting elastic forces. Our method has two primary advantages: we do not store or compare to an initial rest configuration and we work directly with the deformation gradient. The first advantage avoids poor numerical conditioning and the second naturally leads to a multiplicative model of deformation appropriate for finite deformations. One of the most significant drawbacks of physics-based animation is that ever-higher fidelity leads to an explosion in the number of degrees of freedom. This problem leads us to the consideration of dimensionality reduction techniques. We present several enhancements to model-reduced fluid simulation that allow improved simulation bases and two-way solid-fluid coupling. Specifically, we present a basis enrichment scheme that allows us to combine data-driven or artistically derived bases with more general analytic bases derived from Laplacian Eigenfunctions. Additionally, we handle two-way solid-fluid coupling in a time-splitting fashion---we alternately timestep the fluid and rigid body simulators, while taking into account the effects of the fluid on the rigid bodies and vice versa. We employ the vortex panel method to handle solid-fluid coupling and use dynamic pressure to compute the effect of the fluid on rigid bodies. Taken together, these contributions have advanced the state of the art in physics-based animation and are practical enough to be used in production pipelines.
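    One way to work directly with a per-particle deformation gradient without ever storing a rest configuration is to estimate the local velocity gradient from neighbouring particles and update the gradient multiplicatively each timestep. The sketch below illustrates that general idea with a least-squares fit; it is an assumed, generic formulation for illustration, not the dissertation's exact method.

```python
import numpy as np

def velocity_gradient(xi, vi, xs, vs):
    """Least-squares estimate of the velocity gradient L = dv/dx at a particle.

    xi, vi : position and velocity of the particle of interest (3,)
    xs, vs : positions and velocities of its neighbours (n, 3)
    """
    dX = xs - xi                       # relative positions  (n, 3)
    dV = vs - vi                       # relative velocities (n, 3)
    # Solve dV ~= dX @ L.T in the least-squares sense.
    L_T, *_ = np.linalg.lstsq(dX, dV, rcond=None)
    return L_T.T

def update_deformation_gradient(F, L, dt):
    """Multiplicative update: no rest configuration is stored or compared to."""
    return (np.eye(3) + dt * L) @ F

# Tiny synthetic example: neighbours in a pure shear flow, v = (0.5*y, 0, 0).
rng = np.random.default_rng(1)
xi, vi = np.zeros(3), np.zeros(3)
xs = rng.normal(size=(10, 3))
vs = np.column_stack([0.5 * xs[:, 1], np.zeros(10), np.zeros(10)])

F = np.eye(3)
for _ in range(100):
    F = update_deformation_gradient(F, velocity_gradient(xi, vi, xs, vs), dt=0.01)
print(F.round(3))   # accumulated shear appears in the xy component
```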

  17. Mediating relationship of differential products in understanding integration in introductory physics

    NASA Astrophysics Data System (ADS)

    Amos, Nathaniel; Heckler, Andrew F.

    2018-01-01

    In the context of introductory physics, we study student conceptual understanding of differentials, differential products, and integrals and possible pathways to understanding these quantities. We developed a multiple choice conceptual assessment employing a variety of physical contexts probing physical understanding of these three quantities and administered the instrument to over 1000 students in first and second semester introductory physics courses. Using a regression-based mediation analysis with conceptual understanding of integration as the dependent variable, we found evidence consistent with a simple mediation model: the relationship between differentials scores and integral scores may be mediated by the understanding of differential products. The indirect effect (a quantifiable metric of mediation) was estimated as ab = 0.29, 95% CI [0.25, 0.33], for N = 1102 Physics 1 students, and ab = 0.27, 95% CI [0.14, 0.48], for N = 65 Physics 2 students. We also find evidence that the physical context of the questions can be an important factor. These results imply that for introductory physics courses, instructional emphasis first on differentials then on differential products in a variety of contexts may in turn promote better integral understanding.
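    For readers unfamiliar with regression-based mediation, the quantity ab is the product of the X-to-mediator coefficient (a) and the mediator-to-Y coefficient controlling for X (b). The snippet below sketches that calculation on synthetic data; the variable names and effect sizes are invented for illustration and do not reproduce the study's analysis. The reported confidence intervals would typically come from bootstrapping this calculation over resampled rows.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Hypothetical standardised scores: X = differentials, M = differential products, Y = integrals.
X = rng.normal(size=n)
M = 0.6 * X + rng.normal(scale=0.8, size=n)       # mediator partly driven by X
Y = 0.5 * M + 0.1 * X + rng.normal(scale=0.8, size=n)

def ols(y, *columns):
    """Ordinary least squares with an intercept; returns the slope coefficients."""
    A = np.column_stack([np.ones(len(y)), *columns])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1:]

a = ols(M, X)[0]            # path a: X -> M
b = ols(Y, X, M)[1]         # path b: M -> Y, controlling for X
c_prime = ols(Y, X, M)[0]   # direct effect of X on Y

print(f"indirect effect a*b = {a*b:.2f}, direct effect c' = {c_prime:.2f}")
```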

  18. Constitutive behavior and fracture toughness properties of the F82H ferritic/martensitic steel

    NASA Astrophysics Data System (ADS)

    Spätig, P.; Odette, G. R.; Donahue, E.; Lucas, G. E.

    2000-12-01

    A detailed investigation of the constitutive behavior of the International Energy Agency (IEA) program heat of 8 Cr unirradiated F82H ferritic-martensitic steel has been undertaken in the temperature range of 80-723 K. The overall tensile flow stress is decomposed into temperature-dependent and athermal yield stress contributions plus a mildly temperature-dependent strain-hardening component. The fitting forms are based on a phenomenological dislocation mechanics model. This formulation provides a more accurate and physically based representation of the flow stress as a function of the key variables of test temperature, strain, and strain rate compared to simple power law treatments. Fracture toughness measurements from small compact tension specimens are also reported and analyzed in terms of a critical stress-critical area local fracture model.

  19. Conduit Stability and Collapse in Explosive Volcanic Eruptions: Coupling Conduit Flow and Failure Models

    NASA Astrophysics Data System (ADS)

    Mullet, B.; Segall, P.

    2017-12-01

    Explosive volcanic eruptions can exhibit abrupt changes in physical behavior. In the most extreme cases, high rates of mass discharge are interspersed with dramatic drops in activity and periods of quiescence. Simple models predict exponential decay in magma chamber pressure, leading to a gradual tapering of eruptive flux. Abrupt changes in eruptive flux therefore indicate that relief of chamber pressure cannot be the only control on the evolution of such eruptions. We present a simplified physics-based model of conduit flow during an explosive volcanic eruption that attempts to predict stress-induced conduit collapse linked to co-eruptive pressure loss. The model couples a simple two phase (gas-melt) 1-D conduit solution of the continuity and momentum equations with a Mohr-Coulomb failure condition for the conduit wall rock. First order models of volatile exsolution (i.e. phase mass transfer) and fragmentation are incorporated. The interphase interaction force changes dramatically between flow regimes, so smoothing of this force is critical for realistic results. Reductions in the interphase force lead to significant relative phase velocities, highlighting the deficiency of homogeneous flow models. Lateral gas loss through conduit walls is incorporated using a membrane-diffusion model with depth dependent wall rock permeability. Rapid eruptive flux results in a decrease of chamber and conduit pressure, which leads to a critical deviatoric stress condition at the conduit wall. Analogous stress distributions have been analyzed for wellbores, where much work has been directed at determining conditions that lead to wellbore failure using Mohr-Coulomb failure theory. We extend this framework to cylindrical volcanic conduits, where large deviatoric stresses can develop co-eruptively leading to multiple distinct failure regimes depending on principal stress orientations. These failure regimes are categorized and possible implications for conduit flow are discussed, including cessation of eruption.
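    The wellbore-style stability argument can be illustrated with the textbook plane-strain stress state at the wall of a pressurised cylindrical hole combined with a Mohr-Coulomb criterion. The sketch below uses that classical result with invented values; the coupled conduit-flow model described in the abstract is far more complete. Note how failure switches on as the magma pressure drops, which is the qualitative co-eruptive behaviour described above.

```python
import numpy as np

def conduit_wall_failure(S_far, p_magma, cohesion, friction_deg):
    """Check Mohr-Coulomb failure at the wall of a cylindrical conduit.

    Plane strain, isotropic far-field stress S_far (compression positive),
    internal magma pressure p_magma.  At the wall: sigma_r = p, sigma_theta = 2*S - p.
    """
    sigma_r = p_magma
    sigma_theta = 2.0 * S_far - p_magma
    s1, s3 = max(sigma_r, sigma_theta), min(sigma_r, sigma_theta)

    phi = np.radians(friction_deg)
    q = (1.0 + np.sin(phi)) / (1.0 - np.sin(phi))
    ucs = 2.0 * cohesion * np.sqrt(q)          # unconfined compressive strength
    return s1 >= ucs + q * s3

# Hypothetical values (MPa): failure appears as co-eruptive pressure loss grows.
for p in [60.0, 40.0, 20.0, 10.0]:
    print(p, conduit_wall_failure(S_far=50.0, p_magma=p, cohesion=10.0, friction_deg=30.0))
```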

  20. A Bayesian Approach to Evaluating Consistency between Climate Model Output and Observations

    NASA Astrophysics Data System (ADS)

    Braverman, A. J.; Cressie, N.; Teixeira, J.

    2010-12-01

    Like other scientific and engineering problems that involve physical modeling of complex systems, climate models can be evaluated and diagnosed by comparing their output to observations of similar quantities. Though the global remote sensing data record is relatively short by climate research standards, these data offer opportunities to evaluate model predictions in new ways. For example, remote sensing data are spatially and temporally dense enough to provide distributional information that goes beyond simple moments to allow quantification of temporal and spatial dependence structures. In this talk, we propose a new method for exploiting these rich data sets using a Bayesian paradigm. For a collection of climate models, we calculate the posterior probability that each member best represents the physical system it seeks to reproduce. The posterior probability is based on the likelihood that a chosen summary statistic, computed from observations, would be obtained when the model's output is considered as a realization from a stochastic process. By exploring how posterior probabilities change with different statistics, we may paint a more quantitative and complete picture of the strengths and weaknesses of the models relative to the observations. We demonstrate our method using model output from the CMIP archive, and observations from NASA's Atmospheric Infrared Sounder.
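    The model-comparison step can be written down compactly: each model's output supplies a sampling distribution for the chosen summary statistic, and Bayes' rule converts the likelihood of the observed statistic into a posterior probability per model. A minimal sketch with invented numbers (not CMIP or AIRS data) follows.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Hypothetical summary statistic (e.g. a regional variance) from three climate models.
# For each model, repeated sub-samples of its output give a sampling distribution.
model_samples = {
    "model_A": rng.normal(1.00, 0.10, size=200),
    "model_B": rng.normal(1.15, 0.08, size=200),
    "model_C": rng.normal(0.85, 0.15, size=200),
}
observed_stat = 1.08          # the same statistic computed from the satellite record

prior = {name: 1.0 / len(model_samples) for name in model_samples}

# Likelihood of the observed statistic under each model's sampling distribution
# (approximated here as Gaussian), then normalise to get posterior probabilities.
likelihood = {name: norm(np.mean(s), np.std(s)).pdf(observed_stat)
              for name, s in model_samples.items()}
evidence = sum(prior[n] * likelihood[n] for n in model_samples)
posterior = {n: prior[n] * likelihood[n] / evidence for n in model_samples}

for name, p in posterior.items():
    print(f"{name}: posterior probability {p:.2f}")
```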

  1. Rapid flow cytometric measurement of protein inclusions and nuclear trafficking

    PubMed Central

    Whiten, D. R.; San Gil, R.; McAlary, L.; Yerbury, J. J.; Ecroyd, H.; Wilson, M. R.

    2016-01-01

    Proteinaceous cytoplasmic inclusions are an indicator of dysfunction in normal cellular proteostasis and a hallmark of many neurodegenerative diseases. We describe a simple and rapid new flow cytometry-based method to enumerate, characterise and, if desired, physically recover protein inclusions from cells. This technique can analyse and resolve a broad variety of inclusions differing in both size and protein composition, making it applicable to essentially any model of intracellular protein aggregation. The method also allows rapid quantification of the nuclear trafficking of fluorescently labelled molecules. PMID:27516358

  2. Adiabatic gate teleportation.

    PubMed

    Bacon, Dave; Flammia, Steven T

    2009-09-18

    The difficulty in producing precisely timed and controlled quantum gates is a significant source of error in many physical implementations of quantum computers. Here we introduce a simple universal primitive, adiabatic gate teleportation, which is robust to timing errors and many control errors and maintains a constant energy gap throughout the computation above a degenerate ground state space. This construction allows for geometric robustness based upon the control of two independent qubit interactions. Further, our piecewise adiabatic evolution easily relates to the quantum circuit model, enabling the use of standard methods from fault-tolerance theory for establishing thresholds.

  3. Unconditional optimality of Gaussian attacks against continuous-variable quantum key distribution.

    PubMed

    García-Patrón, Raúl; Cerf, Nicolas J

    2006-11-10

    A fully general approach to the security analysis of continuous-variable quantum key distribution (CV-QKD) is presented. Provided that the quantum channel is estimated via the covariance matrix of the quadratures, Gaussian attacks are shown to be optimal against all collective eavesdropping strategies. The proof is made strikingly simple by combining a physical model of measurement, an entanglement-based description of CV-QKD, and a recent powerful result on the extremality of Gaussian states [M. M. Wolf, Phys. Rev. Lett. 96, 080502 (2006)10.1103/PhysRevLett.96.080502].

  4. Coplanar waveguide metamaterials: The role of bandwidth modifying slots

    NASA Astrophysics Data System (ADS)

    Ibraheem, Ibraheem A.; Koch, Martin

    2007-09-01

    The authors propose a coplanar waveguide stopband metasurface based on the Babinet principle. The resulting layout is a compact planar metal structure with complementary split ring resonators, which exhibits a high rejection stop band. The complementary rings provide a frequency band with an effective negative dielectric permittivity. Moreover, the rejected bandwidth can be expanded by introducing slots close to the rings. The authors provide a simple physical model which explains the impact of the slots. Simulations confirm the expected behavior and are in excellent agreement with the measurements.

  5. Suggested Courseware for the Non-Calculus Physics Student: Simple Harmonic Motion, Wave Motion, and Sound.

    ERIC Educational Resources Information Center

    Grable-Wallace, Lisa; And Others

    1989-01-01

    Evaluates 5 courseware packages covering the topics of simple harmonic motion, 7 packages for wave motion, and 10 packages for sound. Discusses the price range, sub-topics, program type, interaction, time, calculus required, graphics, and comments of each courseware. Selects several packages based on the criteria. (YP)

  6. A Simple Mathematical Model for Standard Model of Elementary Particles and Extension Thereof

    NASA Astrophysics Data System (ADS)

    Sinha, Ashok

    2016-03-01

    An algebraically (and geometrically) simple model representing the masses of the elementary particles in terms of the interaction (strong, weak, electromagnetic) constants is developed, including the Higgs bosons. The predicted Higgs boson mass is identical to that discovered by LHC experimental programs; while possibility of additional Higgs bosons (and their masses) is indicated. The model can be analyzed to explain and resolve many puzzles of particle physics and cosmology including the neutrino masses and mixing; origin of the proton mass and the mass-difference between the proton and the neutron; the big bang and cosmological Inflation; the Hubble expansion; etc. A novel interpretation of the model in terms of quaternion and rotation in the six-dimensional space of the elementary particle interaction-space - or, equivalently, in six-dimensional spacetime - is presented. Interrelations among particle masses are derived theoretically. A new approach for defining the interaction parameters leading to an elegant and symmetrical diagram is delineated. Generalization of the model to include supersymmetry is illustrated without recourse to complex mathematical formulation and free from any ambiguity. This Abstract represents some results of the Author's Independent Theoretical Research in Particle Physics, with possible connection to the Superstring Theory. However, only very elementary mathematics and physics is used in my presentation.

  7. Phase space effects on fast ion distribution function modeling in tokamaks

    DOE PAGES

    Podesta, M.; Gorelenkova, M.; Fredrickson, E. D.; ...

    2016-04-14

    Here, integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somewhat arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of the predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as internal consistency of the simulations and mode stability through the analysis of the power exchanged between energetic particles and the instabilities.

  8. Learning molecular energies using localized graph kernels

    DOE PAGES

    Ferré, Grégoire; Haut, Terry Scot; Barros, Kipton Marcos

    2017-03-21

    We report that recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. Finally, we benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.

  10. Fish robotics and hydrodynamics

    NASA Astrophysics Data System (ADS)

    Lauder, George

    2010-11-01

    Studying the fluid dynamics of locomotion in freely-swimming fishes is challenging due to difficulties in controlling fish behavior. To provide better control over fish-like propulsive systems we have constructed a variety of fish-like robotic test platforms that range from highly biomimetic models of fins, to simple physical models of body movements during aquatic locomotion. First, we have constructed a series of biorobotic models of fish pectoral fins with 5 fin rays that allow detailed study of fin motion, forces, and fluid dynamics associated with fin-based locomotion. We find that by tuning fin ray stiffness and the imposed motion program we can produce thrust both on the fin outstroke and instroke. Second, we are using a robotic flapping foil system to study the self-propulsion of flexible plastic foils of varying stiffness, length, and trailing edge shape as a means of investigating the fluid dynamic effect of simple changes in the properties of undulating bodies moving through water. We find unexpected non-linear stiffness-dependent effects of changing foil length on self-propelled speed, as well as significant effects of trailing edge shape on foil swimming speed.

  11. Physical Modeling in the Geological Sciences: An Annotated Bibliography. CEGS Programs Publication No. 16.

    ERIC Educational Resources Information Center

    Charlesworth, L. J., Jr.; Passero, Richard Nicholas

    The bibliography identifies, describes, and evaluates devices and techniques discussed in the world's literature to demonstrate or simulate natural physical geologic phenomena in classroom or laboratory teaching or research situations. The apparatus involved ranges from the very simple and elementary to the highly complex, sophisticated, and…

  12. SimpleBox 4.0: Improving the model while keeping it simple….

    PubMed

    Hollander, Anne; Schoorl, Marian; van de Meent, Dik

    2016-04-01

    Chemical behavior in the environment is often modeled with multimedia fate models. SimpleBox is one often-used multimedia fate model, first developed in 1986. Since then, two updated versions were published. Based on recent scientific developments and experience with SimpleBox 3.0, a new version of SimpleBox was developed and is made public here: SimpleBox 4.0. In this new model, eight major changes were implemented: removal of the local scale and vegetation compartments, addition of lake compartments and deep ocean compartments (including the thermohaline circulation), implementation of intermittent rain instead of drizzle and of depth-dependent soil concentrations, and adjustment of the partitioning behavior for organic acids and bases as well as of the value for enthalpy of vaporization. In this paper, the effects of the model changes in SimpleBox 4.0 on the predicted steady-state concentrations of chemical substances were explored for different substance groups (neutral organic substances, acids, bases, metals) in a standard emission scenario. In general, the largest differences between the predicted concentrations in the new and the old model are caused by the implementation of layered ocean compartments. The vegetation compartments and the local scale, which added undesirable model complexity, were removed to improve the simplicity and user-friendliness of the model. Copyright © 2016 Elsevier Ltd. All rights reserved.
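    The core of any such multimedia fate model is a linear mass balance over compartments, solved here at steady state. The two-box sketch below shows that structure only; the rate constants and emission are hypothetical round numbers, not SimpleBox 4.0 parameter values.

```python
import numpy as np

# A minimal two-box (air, water) steady-state fate model in the spirit of a
# multimedia box model; all rate constants (1/h) and the emission (kg/h) are invented.
k_deg_air,  k_deg_water = 0.010, 0.002   # degradation
k_air_to_w, k_w_to_air  = 0.050, 0.005   # inter-compartment transfer
emission = np.array([1.0, 0.0])          # continuous emission to air only

# Mass balance  dm/dt = emission - K m = 0  =>  m_ss = K^{-1} emission
K = np.array([[k_deg_air + k_air_to_w, -k_w_to_air],
              [-k_air_to_w,             k_deg_water + k_w_to_air]])
m_ss = np.linalg.solve(K, emission)
print(f"steady-state mass: air {m_ss[0]:.1f} kg, water {m_ss[1]:.1f} kg")
```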

  13. Lorentz Trial Function for the Hydrogen Atom: A Simple, Elegant Exercise

    ERIC Educational Resources Information Center

    Sommerfeld, Thomas

    2011-01-01

    The quantum semester of a typical two-semester physical chemistry course is divided into two parts. The initial focus is on quantum mechanics and simple model systems for which the Schrodinger equation can be solved in closed form, but it then shifts in the second half to atoms and molecules, for which no closed solutions exist. The underlying…

  14. Extracting material response from simple mechanical tests on hardening-softening-hardening viscoplastic solids

    NASA Astrophysics Data System (ADS)

    Mohan, Nisha

    Compliant foams are usually characterized by a wide range of desirable mechanical properties. These properties include viscoelasticity at different temperatures, energy absorption, recoverability under cyclic loading, impact resistance, and thermal, electrical, acoustic and radiation-resistance. Some foams contain nano-sized features and are used in small-scale devices. This implies that the characteristic dimensions of foams span multiple length scales, rendering modeling their mechanical properties difficult. Continuum mechanics-based models capture some salient experimental features like the linear elastic regime, followed by non-linear plateau stress regime. However, they lack mesostructural physical details. This makes them incapable of accurately predicting local peaks in stress and strain distributions, which significantly affect the deformation paths. Atomistic methods are capable of capturing the physical origins of deformation at smaller scales, but suffer from impractical computational intensity. Capturing deformation at the so-called meso-scale, which is capable of describing the phenomenon at a continuum level, but with some physical insights, requires developing new theoretical approaches. A fundamental question that motivates the modeling of foams is `how to extract the intrinsic material response from simple mechanical test data, such as stress vs. strain response?' A 3D model was developed to simulate the mechanical response of foam-type materials. The novelty of this model includes unique features such as the hardening-softening-hardening material response, strain rate-dependence, and plastically compressible solids with plastic non-normality. Suggestive links from atomistic simulations of foams were borrowed to formulate a physically informed hardening material input function. Motivated by a model that qualitatively captured the response of foam-type vertically aligned carbon nanotube (VACNT) pillars under uniaxial compression [2011,"Analysis of Uniaxial Compression of Vertically Aligned Carbon Nanotubes," J. Mech.Phys. Solids, 59, pp. 2227--2237, Erratum 60, 1753-1756 (2012)], the property space exploration was advanced to three types of simple mechanical tests: 1) uniaxial compression, 2) uniaxial tension, and 3) nanoindentation with a conical and a flat-punch tip. The simulations attempt to explain some of the salient features in experimental data, like 1) The initial linear elastic response. 2) One or more nonlinear instabilities, yielding, and hardening. The model-inherent relationships between the material properties and the overall stress-strain behavior were validated against the available experimental data. The material properties include the gradient in stiffness along the height, plastic and elastic compressibility, and hardening. Each of these tests was evaluated in terms of their efficiency in extracting material properties. The uniaxial simulation results proved to be a combination of structural and material influences. Out of all deformation paths, flat-punch indentation proved to be superior since it is the most sensitive in capturing the material properties.

  15. Progress in modeling atmospheric propagation of sonic booms

    NASA Technical Reports Server (NTRS)

    Pierce, Allan D.

    1994-01-01

    The improved simulation of sonic boom propagation through the real atmosphere requires greater understanding of how the transient acoustic pulses popularly termed sonic booms are affected by humidity and turbulence. A realistic atmosphere is invariably somewhat turbulent, and may be characterized by an ambient fluid velocity v and sound speed c that vary from point to point. The absolute humidity will also vary from point to point, although possibly not as irregularly. What is ideally desired is a relatively simple scheme for predicting the probable spreads in key sonic boom signature parameters. Such parameters could be peak amplitudes, rise times, or gross quantities obtainable by signal processing that correlate well with annoyance or damage potential. The practical desire for the prediction scheme is that it require a relatively small amount of knowledge, possibly of a statistical nature, concerning the atmosphere along, the propagation path from the aircraft to the ground. The impact of such a scheme, if developed, implemented, and verified, would be that it would give the persons who make planning decisions a tool for assessing the magnitude of environmental problems that might result from any given overflight or sequence of overflights. The technical approach that has been followed by the author and some of his colleagues is to formulate a hierarchy of simple approximate models based on fundamental physical principles and then to test these models against existing data. For propagation of sonic booms and of other types of acoustic pulses in nonturbulent model atmospheres, there exists a basic overall theoretical model that has evolved as an outgrowth of geometrical acoustics. This theoretical model depicts the sound as propagating within ray tubes in a manner analogous to sound in a waveguide of slowly varying cross-section. Propagation along the ray tube is quasi-one-dimensional, and a wave equation for unidirectional wave propagation is used. A nonlinear term is added to this equation to account for nonlinear steepening, and the formulation has been carried through to allow for spatially varying sound speed, ambient density, and ambient wind velocities. The model intrinsically neglects diffraction, so it cannot take into account what has previously been mentioned in the literature as possibly important mechanisms for turbulence-related distortion. The model as originally developed could predict an idealized N-waveform which often agrees with data in terms of peak amplitude and overall positive phase duration. It is possible, moreover, to develop simple methods based on the physics of relaxation processes for incorporating molecular relaxation into the quasi-one-dimensional model of nonlinear propagation along ray tubes.

  16. Branson: A Mini-App for Studying Parallel IMC, Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Alex

    This code solves the gray thermal radiative transfer (TRT) equations in parallel using simple opacities and Cartesian meshes. Although Branson solves the TRT equations, it is not designed for production radiation-transport modeling: Branson contains simple physics and does not have a multigroup treatment, nor can it use physical material data. The opacities are simple polynomials in temperature, and there is only a limited ability to specify complex geometries and sources. Branson was designed only to capture the computational demands of production IMC codes, especially in large parallel runs. It was also intended to foster collaboration with vendors, universities and other DOE partners. Branson is similar in character to the neutron transport proxy-app Quicksilver from LLNL, which was recently open-sourced.

  17. Periodic table-based descriptors to encode cytotoxicity profile of metal oxide nanoparticles: a mechanistic QSTR approach.

    PubMed

    Kar, Supratik; Gajewicz, Agnieszka; Puzyn, Tomasz; Roy, Kunal; Leszczynski, Jerzy

    2014-09-01

    Nanotechnology has evolved as a frontrunner in the development of modern science. Current studies have established the toxicity of some nanoparticles to humans and the environment. Lack of sufficient data and low adequacy of experimental protocols hinder comprehensive risk assessment of nanoparticles (NPs). In the present work, metal electronegativity (χ), the charge of the metal cation corresponding to a given oxide (χox), atomic number and valence electron number of the metal have been used as simple molecular descriptors to build up quantitative structure-toxicity relationship (QSTR) models for prediction of cytotoxicity of metal oxide NPs to the bacterium Escherichia coli. These descriptors can be obtained readily from the molecular formula and from information in the periodic table. It has been shown that a simple molecular descriptor, χox, can efficiently encode the cytotoxicity of metal oxides, leading to models with high statistical quality as well as interpretability. Based on this model and previously published experimental results, we have hypothesized the most probable mechanism of the cytotoxicity of metal oxide nanoparticles to E. coli. Moreover, the required information for descriptor calculation is independent of the size range of NPs, nullifying a significant problem that various physical properties of NPs change for different size ranges. Copyright © 2014 Elsevier Inc. All rights reserved.
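    In its simplest form, the QSTR workflow described here reduces to a one-descriptor regression. The sketch below uses an invented descriptor-versus-toxicity table purely to show that workflow; it does not reproduce the published dataset, descriptor values, or model coefficients.

```python
import numpy as np

# Hypothetical dataset: a periodic-table-derived descriptor (chi_ox) versus measured
# cytotoxicity expressed as log(1/EC50).  All numbers are made up for illustration.
chi_ox  = np.array([2.1, 2.6, 3.1, 3.4, 3.9, 4.4, 4.9])
log_tox = np.array([1.8, 2.1, 2.5, 2.6, 3.0, 3.3, 3.6])

# One-descriptor linear QSTR model: log(1/EC50) = b0 + b1 * chi_ox
b1, b0 = np.polyfit(chi_ox, log_tox, deg=1)
pred = b0 + b1 * chi_ox
r2 = 1.0 - np.sum((log_tox - pred) ** 2) / np.sum((log_tox - log_tox.mean()) ** 2)

print(f"log(1/EC50) = {b0:.2f} + {b1:.2f} * chi_ox   (R^2 = {r2:.2f})")
# Predict the toxicity of a new oxide from its descriptor alone.
print(f"prediction for chi_ox = 3.0: {b0 + b1 * 3.0:.2f}")
```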

  18. Towards a physically-based multi-scale ecohydrological simulator for semi-arid regions

    NASA Astrophysics Data System (ADS)

    Caviedes-Voullième, Daniel; Josefik, Zoltan; Hinz, Christoph

    2017-04-01

    The use of numerical models as tools for describing and understanding complex ecohydrological systems has made it possible to test hypotheses and propose fundamental, process-based explanations of the behaviour of the system as a whole as well as of its internal dynamics. Reaction-diffusion equations have been used to describe and generate organized patterns such as bands, spots, and labyrinths using simple feedback mechanisms and boundary conditions. Alternatively, pattern-matching cellular automaton models have been used to generate vegetation self-organization in arid and semi-arid regions, also using simple descriptions of surface hydrological processes. A key question is: how much physical realism is needed in order to adequately capture the pattern formation processes in semi-arid regions while reliably representing the water balance dynamics at the relevant time scales? In fact, redistribution of water by surface runoff at the hillslope scale occurs at a temporal resolution of minutes, while vegetation development requires much lower temporal resolution and longer time spans. This generates a fundamental spatio-temporal multi-scale problem to be solved, for which high-resolution rainfall and surface topography are required. Accordingly, the objective of this contribution is to provide proof-of-concept that the governing processes can be described numerically at those multiple scales. The requirements for simulating ecohydrological processes and pattern formation with increased physical realism are, amongst others: i) high-resolution rainfall that adequately captures the triggers of growth, as the vegetation dynamics of arid regions respond as pulsed systems; ii) complex, natural topography in order to accurately model drainage patterns, as surface water redistribution is highly sensitive to topographic features; iii) microtopography and hydraulic roughness, as small-scale variations impact large-scale hillslope behaviour; and iv) moisture-dependent infiltration, as the temporal dynamics of infiltration affect water storage under vegetation and in bare soil. Despite the volume of research in this field, fundamental limitations still exist in the models regarding the aforementioned issues. Topography and hydrodynamics have been strongly simplified. Infiltration has been modelled as dependent on depth but independent of soil moisture. Temporal rainfall variability has only been addressed for seasonal rain. Spatial heterogeneity of the topography, as well as of roughness and infiltration properties, has not been fully and explicitly represented. We hypothesize that physical processes must be robustly modelled and the drivers of complexity must be present with as much resolution as possible in order to provide the necessary realism to improve transient simulations, perhaps leading the way to virtual laboratories and, arguably, predictive tools. This work provides a first approach to a model with explicit hydrological processes represented by physically-based hydrodynamic models, coupled with well-accepted vegetation models. The model aims to enable new possibilities relating to spatiotemporal variability, arbitrary topography and representation of spatial heterogeneity, including sub-daily (in fact, arbitrary) temporal variability of rain as the main forcing of the model, explicit representation of infiltration processes, and various feedback mechanisms between the hydrodynamics and the vegetation.
Preliminary testing strongly suggests that the model is viable, has the potential of producing new information of internal dynamics of the system, and allows to successfully aggregate many of the sources of complexity. Initial benchmarking of the model also reveals strengths to be exploited, thus providing an interesting research outlook, as well as weaknesses to be addressed in the immediate future.

  19. The effect of shape on drag: a physics exercise inspired by biology

    NASA Astrophysics Data System (ADS)

    Fingerut, Jonathan; Johnson, Nicholas; Mongeau, Eric; Habdas, Piotr

    2017-07-01

    As part of a biomechanics course aimed at upper-division biology and physics majors, but applicable to a range of student learning levels, this laboratory exercise provides an insight into the effect of shape on hydrodynamic performance, as well as an introduction to computer aided design (CAD) and 3D printing. Students use hydrodynamic modeling software and simple CAD programs to design a shape with the least amount of drag based on strategies gleaned from the study of natural forms. Students then print the shapes using a 3D printer and test their shapes against their classmates in a friendly competition. From this exercise, students gain a more intuitive sense of the challenges that organisms face when moving through fluid environments and of the physical phenomena involved in moving through fluids at high Reynolds numbers, and they observe how and why certain morphologies, such as streamlining, are common answers to the challenge of swimming at high speeds.

  20. Modeling Cable and Guide Channel Interaction in a High-Strength Cable-Driven Continuum Manipulator

    PubMed Central

    Moses, Matthew S.; Murphy, Ryan J.; Kutzer, Michael D. M.; Armand, Mehran

    2016-01-01

    This paper presents several mechanical models of a high-strength cable-driven dexterous manipulator designed for surgical procedures. A stiffness model is presented that distinguishes between contributions from the cables and the backbone. A physics-based model incorporating cable friction is developed and its predictions are compared with experimental data. The data show that under high tension and high curvature, the shape of the manipulator deviates significantly from a circular arc. However, simple parametric models can fit the shape with good accuracy. The motivating application for this study is to develop a model so that shape can be predicted using easily measured quantities such as tension, so that real-time navigation may be performed, especially in minimally-invasive surgical procedures, while reducing the need for hazardous imaging methods such as fluoroscopy. PMID:27818607
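    For continuum manipulators, the "simple parametric models" mentioned above are often constant-curvature (circular-arc) descriptions. The sketch below is that generic arc model; the mapping from measured cable tension to curvature is only alluded to in the abstract and would come from a calibration like the one the paper develops, so it is treated here as an assumed input.

```python
import numpy as np

def constant_curvature_backbone(length, kappa, n_points=50):
    """Planar backbone points for a constant-curvature (circular-arc) model.

    length : arc length of the manipulator segment
    kappa  : curvature (1/bend radius); kappa -> 0 recovers a straight segment
    """
    s = np.linspace(0.0, length, n_points)
    if abs(kappa) < 1e-9:
        return np.column_stack([np.zeros_like(s), s])
    x = (1.0 - np.cos(kappa * s)) / kappa
    y = np.sin(kappa * s) / kappa
    return np.column_stack([x, y])

# Hypothetical use: predict the tip position for a curvature inferred, e.g., from a
# calibrated tension-to-curvature map (the calibration itself is not shown here).
backbone = constant_curvature_backbone(length=0.10, kappa=12.0)
print("tip position [m]:", backbone[-1].round(4))
```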

  1. Modeling Cable and Guide Channel Interaction in a High-Strength Cable-Driven Continuum Manipulator.

    PubMed

    Moses, Matthew S; Murphy, Ryan J; Kutzer, Michael D M; Armand, Mehran

    2015-12-01

    This paper presents several mechanical models of a high-strength cable-driven dexterous manipulator designed for surgical procedures. A stiffness model is presented that distinguishes between contributions from the cables and the backbone. A physics-based model incorporating cable friction is developed and its predictions are compared with experimental data. The data show that under high tension and high curvature, the shape of the manipulator deviates significantly from a circular arc. However, simple parametric models can fit the shape with good accuracy. The motivating application for this study is to develop a model so that shape can be predicted using easily measured quantities such as tension, so that real-time navigation may be performed, especially in minimally-invasive surgical procedures, while reducing the need for hazardous imaging methods such as fluoroscopy.

  2. Microscale models of partially molten rocks and their macroscale physical properties

    NASA Astrophysics Data System (ADS)

    Rudge, J. F.

    2017-12-01

    Any geodynamical model of melt transport in the Earth's mantle requires constitutive laws for the rheology of partially molten rock. These constitutive laws are poorly known, and one way to make progress in our understanding is through the upscaling of microscale models which describe physics at the scale of individual mineral grains. Crucially, many upscaled physical properties (such as permeability) depend not only on how much melt is present, but on how that melt is arranged at the microscale; i.e. on the geometry of the melt network. Here I will present some new calculations of equilibrium melt network geometries around idealised tetrakaidecahedral grains. In contrast to several previous calculations of textural equilibrium, these calculations allow for both a liquid-phase and a solid-phase topology that can tile 3D space. The calculations are based on a simple minimisation of surface energy using the finite element method. In these simple models just two parameters control the topology of the melt network: the porosity (volume fraction of melt), and the dihedral angle. The consequences of these melt geometries for upscaled properties such as permeability, electrical conductivity, and, importantly, effective viscosity will be explored. Recent theoretical work [1,2] has suggested that in diffusion creep a small amount of melt may dramatically reduce the effective shear viscosity of a partially molten rock, with profound consequences for the nature of the asthenosphere. This contribution will show that this reduction in viscosity may have been significantly overestimated, so that the drop in the effective viscosity at the onset of melting is more modest. [1] Takei, Y., and B. K. Holtzman (2009), Viscous constitutive relations of solid-liquid composites in terms of grain boundary contiguity: 1. Grain boundary diffusion control model, J. Geophys. Res., 114, B06205. [2] Holtzman, B. K. (2016), Questions on the existence, persistence, and mechanical effects of a very small melt fraction in the asthenosphere, Geochem. Geophys. Geosyst., 17, 470-484.
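    For context on how such melt-geometry calculations feed macroscale models: permeability is commonly parameterised as a power law in porosity with a geometry-dependent exponent and constant, and it is precisely those quantities that microscale network geometries constrain. The values in the sketch below (n = 3, C = 270, 1 mm grains, 1% melt) are illustrative assumptions taken from the general melt-transport literature, not results of this work.

```python
def melt_permeability(porosity, grain_size, n=3.0, C=270.0):
    """Power-law permeability commonly assumed for a partially molten rock.

    k = d^2 * phi^n / C, with exponent n ~ 2-3; both n and the geometric
    constant C depend on the melt-network geometry probed by microscale models.
    """
    return grain_size ** 2 * porosity ** n / C

# Hypothetical values: 1 mm grains, 1% melt fraction.
print(f"k = {melt_permeability(0.01, 1e-3):.2e} m^2")
```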

  3. Is math anxiety in the secondary classroom limiting physics mastery? A study of math anxiety and physics performance

    NASA Astrophysics Data System (ADS)

    Mercer, Gary J.

    This quantitative study examined the relationship between secondary students with math anxiety and physics performance in an inquiry-based constructivist classroom. The Revised Math Anxiety Rating Scale was used to evaluate math anxiety levels. The results were then compared to the performance on a physics standardized final examination. A simple correlation was performed, followed by a multivariate regression analysis to examine effects based on gender and prior math background. The correlation showed statistical significance between math anxiety and physics performance. The regression analysis showed statistical significance for math anxiety, physics performance, and prior math background, but did not show statistical significance for math anxiety, physics performance, and gender.

  4. The independence of physical attractiveness and symptoms of depression in a female twin population.

    PubMed

    McGovern, R J; Neale, M C; Kendler, K S

    1996-03-01

    The relationship between physical attractiveness and symptoms of depression was investigated in a general population sample of 1,100 female twins. Photographs were rated by 4 raters. Symptoms of depression were measured by the Depression sub-scale of the SCL-54, by a self-rating based on the DSM-III-R, and by an MD diagnosis based on a structured interview (SCID). No relationships between ratings of physical attractiveness and symptoms of depression were found.

  5. User assessment of smoke-dispersion models for wildland biomass burning.

    Treesearch

    Steve Breyfogle; Sue A. Ferguson

    1996-01-01

    Several smoke-dispersion models, which currently are available for modeling smoke from biomass burns, were evaluated for ease of use, availability of input data, and output data format. The input and output components of all models are listed, and differences in model physics are discussed. Each model was installed and run on a personal computer with a simple-case...

  6. Effect of lecture instruction on student performance on qualitative questions

    NASA Astrophysics Data System (ADS)

    Heron, Paula R. L.

    2015-06-01

    The impact of lecture instruction on student conceptual understanding in physics has been the subject of research for several decades. Most studies have reported disappointingly small improvements in student performance on conceptual questions despite direct instruction on the relevant topics. These results have spurred a number of attempts to improve learning in physics courses through new curricula and instructional techniques. This paper contributes to the research base through a retrospective analysis of 20 randomly selected qualitative questions on topics in kinematics, dynamics, electrostatics, waves, and physical optics that have been given in introductory calculus-based physics at the University of Washington over a period of 15 years. In some classes, questions were administered after relevant lecture instruction had been completed; in others, it had yet to begin. Simple statistical tests indicate that the average performance of the "after lecture" classes was significantly better than that of the "before lecture" classes for 11 questions, significantly worse for two questions, and indistinguishable for the remaining seven. However, the classes had not been randomly assigned to be tested before or after lecture instruction. Multiple linear regression was therefore conducted with variables (such as class size) that could plausibly lead to systematic differences in performance and thus obscure (or artificially enhance) the effect of lecture instruction. The regression models support the results of the simple tests for all but four questions. In those cases, the effect of lecture instruction was reduced to a nonsignificant level, or increased to a significant, negative level when other variables were considered. Thus the results provide robust evidence that instruction in lecture can increase student ability to give correct answers to conceptual questions but does not necessarily do so; in some cases it can even lead to a decrease.

  7. The influence of wind-tunnel walls on discrete frequency noise

    NASA Technical Reports Server (NTRS)

    Mosher, M.

    1984-01-01

    This paper describes an analytical model that can be used to examine the effects of wind-tunnel walls on discrete frequency noise. First, a complete physical model of an acoustic source in a wind tunnel is described, and a simplified version is then developed. This simplified model retains the important physical processes involved, yet it is more amenable to analysis. Second, the simplified physical model is formulated as a mathematical problem. An inhomogeneous partial differential equation with mixed boundary conditions is set up and then transformed into an integral equation. The integral equation has been solved with a panel program on a computer. Preliminary results from a simple model problem will be shown and compared with the approximate analytic solution.

  8. Spatial analysis of the invasion of lionfish in the western Atlantic and Caribbean.

    PubMed

    Johnston, Matthew W; Purkis, Samuel J

    2011-06-01

    Pterois volitans and Pterois miles, two sub-species of lionfish, have become the first non-native, invasive marine fish established along the United States Atlantic coast and Caribbean. The route and timing of the invasion are poorly understood; however, historical sightings and captures have been robustly documented since their introduction. Herein we analyze these records based on spatial location, dates of arrival, and prevailing physical factors at the capture sites. Using a cellular automata model, we examine the relationship between depth, salinity, temperature, and current, finding the latter to be the most influential parameter for transport of lionfish to new areas. The model output is a synthetic validated reproduction of the lionfish invasion, upon which predictive simulations in other locations can be based. This predictive model is simple, highly adaptable, relies entirely on publicly available data, and is applicable to other species. Copyright © 2011 Elsevier Ltd. All rights reserved.
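
    The abstract does not give the model's rules, but a cellular-automaton dispersal step of the general kind it describes can be sketched as follows; the grid size, probabilities, and eastward current bias are hypothetical illustration values, not the authors' calibrated parameters.

        # Schematic cellular-automaton dispersal step: occupied cells seed their neighbours,
        # with spread biased in the direction of a prevailing current. All values are hypothetical.
        import numpy as np

        rng = np.random.default_rng(1)
        grid = np.zeros((50, 50), dtype=bool)
        grid[25, 5] = True                      # assumed introduction point

        P_BASE = 0.05                           # baseline colonisation probability (assumed)
        P_CURRENT = 0.35                        # downstream (eastward) probability (assumed)

        def step(grid):
            new = grid.copy()
            for i, j in np.argwhere(grid):
                for di, dj, p in [(-1, 0, P_BASE), (1, 0, P_BASE),
                                  (0, -1, P_BASE), (0, 1, P_CURRENT)]:
                    ni, nj = i + di, j + dj
                    if 0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1]:
                        if rng.random() < p:
                            new[ni, nj] = True
            return new

        for year in range(20):
            grid = step(grid)
        print("occupied cells after 20 steps:", grid.sum())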

  9. Spatiotemporal pattern in somitogenesis: a non-Turing scenario with wave propagation.

    PubMed

    Nagahara, Hiroki; Ma, Yue; Takenaka, Yoshiko; Kageyama, Ryoichiro; Yoshikawa, Kenichi

    2009-08-01

    Living organisms maintain their lives under far-from-equilibrium conditions by creating a rich variety of spatiotemporal structures in a self-organized manner, such as temporal rhythms, switching phenomena, and development of the body. In this paper, we focus on the dynamical process of morphogens in somitogenesis in mice where propagation of the gene expression level plays an essential role in creating the spatially periodic patterns of the vertebral columns. We present a simple discrete reaction-diffusion model which includes neighboring interaction through an activator, but not diffusion of an inhibitor. We can produce stationary periodic patterns by introducing the effect of spatial discreteness to the field. Based on the present model, we discuss the underlying physical principles that are independent of the details of biomolecular reactions. We also discuss the framework of spatial discreteness based on the reaction-diffusion model in relation to a cellular array, by comparison with an actual experimental observation.
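
    A minimal numerical sketch of the key ingredient named above (neighbour coupling through the activator only, with a purely local inhibitor) might look as follows; the kinetic terms and parameter values are generic placeholders, not the paper's equations.

        # Schematic discrete activator-inhibitor dynamics on a 1-D cell array:
        # only the activator couples neighbouring cells; the inhibitor stays local.
        # Kinetics and parameters are hypothetical placeholders.
        import numpy as np

        rng = np.random.default_rng(2)
        n_cells, dt, steps = 60, 0.01, 20000
        a = rng.random(n_cells) * 0.1       # activator level per cell
        h = rng.random(n_cells) * 0.1       # inhibitor level per cell
        D_a, mu_a, mu_h = 0.4, 1.0, 0.8     # assumed coupling and decay constants

        for _ in range(steps):
            # neighbour coupling through the activator (discrete Laplacian); no inhibitor diffusion
            lap_a = np.roll(a, 1) + np.roll(a, -1) - 2 * a
            da = a * a / (1.0 + h) - mu_a * a + D_a * lap_a
            dh = a * a - mu_h * h
            a = np.clip(a + dt * da, 0, None)
            h = np.clip(h + dt * dh, 0, None)

        print("final activator pattern (first 10 cells):", np.round(a[:10], 3))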

  10. Second-order closure models for supersonic turbulent flows

    NASA Technical Reports Server (NTRS)

    Speziale, Charles G.; Sarkar, Sutanu

    1991-01-01

    Recent work by the authors on the development of a second-order closure model for high-speed compressible flows is reviewed. This turbulence closure is based on the solution of modeled transport equations for the Favre-averaged Reynolds stress tensor and the solenoidal part of the turbulent dissipation rate. A new model for the compressible dissipation is used along with traditional gradient transport models for the Reynolds heat flux and mass flux terms. Consistent with simple asymptotic analyses, the deviatoric part of the remaining higher-order correlations in the Reynolds stress transport equation is modeled by a variable density extension of the newest incompressible models. The resulting second-order closure model is tested in a variety of compressible turbulent flows which include the decay of isotropic turbulence, homogeneous shear flow, the supersonic mixing layer, and the supersonic flat-plate turbulent boundary layer. Comparisons between the model predictions and the results of physical and numerical experiments are quite encouraging.

  11. Second-order closure models for supersonic turbulent flows

    NASA Technical Reports Server (NTRS)

    Speziale, Charles G.; Sarkar, Sutanu

    1991-01-01

    Recent work on the development of a second-order closure model for high-speed compressible flows is reviewed. This turbulent closure is based on the solution of modeled transport equations for the Favre-averaged Reynolds stress tensor and the solenoidal part of the turbulent dissipation rate. A new model for the compressible dissipation is used along with traditional gradient transport models for the Reynolds heat flux and mass flux terms. Consistent with simple asymptotic analyses, the deviatoric part of the remaining higher-order correlations in the Reynolds stress transport equations is modeled by a variable density extension of the newest incompressible models. The resulting second-order closure model is tested in a variety of compressible turbulent flows which include the decay of isotropic turbulence, homogeneous shear flow, the supersonic mixing layer, and the supersonic flat-plate turbulent boundary layer. Comparisons between the model predictions and the results of physical and numerical experiments are quite encouraging.

  12. Interpreting the cosmic far-infrared background anisotropies using a gas regulator model

    NASA Astrophysics Data System (ADS)

    Wu, Hao-Yi; Doré, Olivier; Teyssier, Romain; Serra, Paolo

    2018-04-01

    Cosmic far-infrared background (CFIRB) is a powerful probe of the history of star formation rate (SFR) and the connection between baryons and dark matter across cosmic time. In this work, we explore to what extent the CFIRB anisotropies can be reproduced by a simple physical framework for galaxy evolution, the gas regulator (bathtub) model. This model is based on continuity equations for gas, stars, and metals, taking into account cosmic gas accretion, star formation, and gas ejection. We model the large-scale galaxy bias and small-scale shot noise self-consistently, and we constrain our model using the CFIRB power spectra measured by Planck. Because of the simplicity of the physical model, the goodness of fit is limited. We compare our model predictions with the observed correlation between CFIRB and gravitational lensing, bolometric infrared luminosity functions, and submillimetre source counts. The strong clustering of CFIRB indicates a large galaxy bias, which corresponds to haloes of mass 10^12.5 M⊙ at z = 2, higher than the mass associated with the peak of the star formation efficiency. We also find that the far-infrared luminosities of haloes above 10^12 M⊙ are higher than the expectation from the SFR observed in ultraviolet and optical surveys.
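
    The gas-regulator (bathtub) continuity equation for the gas mass can be written generically as below; the notation is illustrative and not necessarily the symbols used in the paper.

        % Generic gas-regulator continuity equation (illustrative notation)
        % \dot{M}_{\rm acc}: cosmological accretion rate, R: return fraction,
        % \eta: mass-loading factor of the outflow, SFR: star formation rate.
        \frac{dM_{\rm gas}}{dt} = \dot{M}_{\rm acc} - (1 - R)\,\mathrm{SFR} - \eta\,\mathrm{SFR}

    Analogous continuity equations track the stellar and metal masses.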

  13. SutraPrep, a pre-processor for SUTRA, a model for ground-water flow with solute or energy transport

    USGS Publications Warehouse

    Provost, Alden M.

    2002-01-01

    SutraPrep facilitates the creation of three-dimensional (3D) input datasets for the USGS ground-water flow and transport model SUTRA Version 2D3D.1. It is most useful for applications in which the geometry of the 3D model domain and the spatial distribution of physical properties and boundary conditions are relatively simple. SutraPrep can be used to create a SUTRA main input (.inp) file, an initial conditions (.ics) file, and a 3D plot of the finite-element mesh in Virtual Reality Modeling Language (VRML) format. Input and output are text-based. The code can be run on any platform that has a standard FORTRAN-90 compiler. Executable code is available for Microsoft Windows.

  14. Modelling erosion on a daily basis, an adaptation of the MMF approach

    NASA Astrophysics Data System (ADS)

    Shrestha, Dhruba Pikha; Jetten, Victor G.

    2018-02-01

    The negative impact of soil erosion on ecosystem services and food security is well known. At the same time, the total precipitation received in an area can vary from year to year, with the occurrence of extreme rains. To assess annual erosion rates, various empirical models have been used extensively in all climatic regions. While these models are simple to operate and do not require a lot of input data, they do not take the effect of extreme rain into account. Physically based models are available to simulate erosion processes, including particle detachment, transport, and deposition of sediments during a storm, but they are not applicable for assessing annual soil loss rates. Moreover, storm event data may not be available everywhere, which limits their extensive use.

  15. Finding the strong CP problem at the LHC

    NASA Astrophysics Data System (ADS)

    D'Agnolo, Raffaele Tito; Hook, Anson

    2016-11-01

    We show that a class of parity based solutions to the strong CP problem predicts new colored particles with mass at the TeV scale, due to constraints from Planck suppressed operators. The new particles are copies of the Standard Model quarks and leptons. The new quarks can be produced at the LHC and are either collider stable or decay into Standard Model quarks through a Higgs, a W or a Z boson. We discuss some simple but generic predictions of the models for the LHC and find signatures not related to the traditional solutions of the hierarchy problem. We thus provide alternative motivation for new physics searches at the weak scale. We also briefly discuss the cosmological history of these models and how to obtain successful baryogenesis.

  16. Injury Profile SIMulator, a Qualitative Aggregative Modelling Framework to Predict Crop Injury Profile as a Function of Cropping Practices, and the Abiotic and Biotic Environment. I. Conceptual Bases

    PubMed Central

    Aubertot, Jean-Noël; Robin, Marie-Hélène

    2013-01-01

    The limitation of damage caused by pests (plant pathogens, weeds, and animal pests) in any agricultural crop requires integrated management strategies. Although significant efforts have been made to i) develop, and to a lesser extent ii) combine genetic, biological, cultural, physical and chemical control methods in Integrated Pest Management (IPM) strategies (vertical integration), there is a need for tools to help manage Injury Profiles (horizontal integration). Farmers design cropping systems according to their goals, knowledge, cognition and perception of socio-economic and technological drivers as well as their physical, biological, and chemical environment. In return, a given cropping system, in a given production situation will exhibit a unique injury profile, defined as a dynamic vector of the main injuries affecting the crop. This simple description of agroecosystems has been used to develop IPSIM (Injury Profile SIMulator), a modelling framework to predict injury profiles as a function of cropping practices, abiotic and biotic environment. Due to the tremendous complexity of agroecosystems, a simple holistic aggregative approach was chosen instead of attempting to couple detailed models. This paper describes the conceptual bases of IPSIM, an aggregative hierarchical framework and a method to help specify IPSIM for a given crop. A companion paper presents a proof of concept of the proposed approach for a single disease of a major crop (eyespot on wheat). In the future, IPSIM could be used as a tool to help design ex-ante IPM strategies at the field scale if coupled with a damage sub-model, and a multicriteria sub-model that assesses the social, environmental, and economic performances of simulated agroecosystems. In addition, IPSIM could also be used to help make diagnoses on commercial fields. It is important to point out that the presented concepts are not crop- or pest-specific and that IPSIM can be used on any crop. PMID:24019908

  17. Injury Profile SIMulator, a qualitative aggregative modelling framework to predict crop injury profile as a function of cropping practices, and the abiotic and biotic environment. I. Conceptual bases.

    PubMed

    Aubertot, Jean-Noël; Robin, Marie-Hélène

    2013-01-01

    The limitation of damage caused by pests (plant pathogens, weeds, and animal pests) in any agricultural crop requires integrated management strategies. Although significant efforts have been made to i) develop, and to a lesser extent ii) combine genetic, biological, cultural, physical and chemical control methods in Integrated Pest Management (IPM) strategies (vertical integration), there is a need for tools to help manage Injury Profiles (horizontal integration). Farmers design cropping systems according to their goals, knowledge, cognition and perception of socio-economic and technological drivers as well as their physical, biological, and chemical environment. In return, a given cropping system, in a given production situation will exhibit a unique injury profile, defined as a dynamic vector of the main injuries affecting the crop. This simple description of agroecosystems has been used to develop IPSIM (Injury Profile SIMulator), a modelling framework to predict injury profiles as a function of cropping practices, abiotic and biotic environment. Due to the tremendous complexity of agroecosystems, a simple holistic aggregative approach was chosen instead of attempting to couple detailed models. This paper describes the conceptual bases of IPSIM, an aggregative hierarchical framework and a method to help specify IPSIM for a given crop. A companion paper presents a proof of concept of the proposed approach for a single disease of a major crop (eyespot on wheat). In the future, IPSIM could be used as a tool to help design ex-ante IPM strategies at the field scale if coupled with a damage sub-model, and a multicriteria sub-model that assesses the social, environmental, and economic performances of simulated agroecosystems. In addition, IPSIM could also be used to help make diagnoses on commercial fields. It is important to point out that the presented concepts are not crop- or pest-specific and that IPSIM can be used on any crop.

  18. Simple cellular automaton model for traffic breakdown, highway capacity, and synchronized flow

    NASA Astrophysics Data System (ADS)

    Kerner, Boris S.; Klenov, Sergey L.; Schreckenberg, Michael

    2011-10-01

    We present a simple cellular automaton (CA) model for two-lane roads explaining the physics of traffic breakdown, highway capacity, and synchronized flow. The model consists of the rules “acceleration,” “deceleration,” “randomization,” and “motion” of the Nagel-Schreckenberg CA model as well as “overacceleration through lane changing to the faster lane,” “comparison of vehicle gap with the synchronization gap,” and “speed adaptation within the synchronization gap” of Kerner's three-phase traffic theory. We show that these few rules of the CA model can appropriately simulate fundamental empirical features of traffic breakdown and highway capacity found in traffic data measured over years in different countries, like characteristics of synchronized flow, the existence of the spontaneous and induced breakdowns at the same bottleneck, and associated probabilistic features of traffic breakdown and highway capacity. Single-vehicle data derived in model simulations show that synchronized flow first occurs and then self-maintains due to a spatiotemporal competition between speed adaptation to a slower speed of the preceding vehicle and passing of this slower vehicle. We find that the application of simple dependences of randomization probability and synchronization gap on driving situation allows us to explain the physics of moving synchronized flow patterns and the pinch effect in synchronized flow as observed in real traffic data.
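
    For reference, the four Nagel-Schreckenberg rules named above can be sketched for a single lane as follows; this is only the classic building block, not the full two-lane, three-phase model of the paper, and the road length, vehicle count, and randomization probability are hypothetical.

        # Minimal single-lane Nagel-Schreckenberg update: acceleration, deceleration,
        # randomization, motion, on a periodic road. Parameters are hypothetical.
        import numpy as np

        rng = np.random.default_rng(3)
        L, N, V_MAX, P_SLOW, STEPS = 200, 40, 5, 0.3, 100
        pos = np.sort(rng.choice(L, N, replace=False))   # cell index of each vehicle
        vel = np.zeros(N, dtype=int)

        for _ in range(STEPS):
            gaps = (np.roll(pos, -1) - pos - 1) % L      # empty cells to the car ahead
            vel = np.minimum(vel + 1, V_MAX)             # 1. acceleration
            vel = np.minimum(vel, gaps)                  # 2. deceleration (avoid collisions)
            slow = rng.random(N) < P_SLOW
            vel = np.where(slow, np.maximum(vel - 1, 0), vel)  # 3. randomization
            pos = (pos + vel) % L                        # 4. motion

        print("mean speed after", STEPS, "steps:", vel.mean())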

  19. Using Laboratory Homework to Facilitate Skill Integration and Assess Understanding in Intermediate Physics Courses

    NASA Astrophysics Data System (ADS)

    Johnston, Marty; Jalkio, Jeffrey

    2013-04-01

    By the time students have reached the intermediate-level physics courses they have been exposed to a broad set of analytical, experimental, and computational skills. However, their ability to independently integrate these skills into the study of a physical system is often weak. To address this weakness and assess their understanding of the underlying physical concepts, we have introduced laboratory homework into lecture-based, junior-level theoretical mechanics and electromagnetics courses. A laboratory homework set replaces a traditional one and emphasizes the analysis of a single system. In an exercise, students use analytical and computational tools to predict the behavior of a system and design a simple measurement to test their model. The laboratory portion of the exercises is straightforward and the emphasis is on concept integration and application. The short student reports we collect have revealed misconceptions that were not apparent in reviewing the traditional homework and test problems. Work continues on refining the current problems and expanding the problem sets.

  20. Physics of MRI: a primer.

    PubMed

    Plewes, Donald B; Kucharczyk, Walter

    2012-05-01

    This article is based on an introductory lecture given for the past many years during the "MR Physics and Techniques for Clinicians" course at the Annual Meeting of the ISMRM. This introduction is not intended to be a comprehensive overview of the field, as the subject of magnetic resonance imaging (MRI) physics is large and complex. Rather, it is intended to lay a conceptual foundation by which magnetic resonance image formation can be understood from an intuitive perspective. The presentation is nonmathematical, relying on simple models that take the reader progressively from the basic spin physics of nuclei, through descriptions of how the magnetic resonance signal is generated and detected in an MRI scanner, the foundations of nuclear magnetic resonance (NMR) relaxation, and a discussion of the Fourier transform and its relation to MR image formation. The article continues with a discussion of how magnetic field gradients are used to facilitate spatial encoding and concludes with a development of basic pulse sequences and the factors defining image contrast. Copyright © 2012 Wiley Periodicals, Inc.

  1. Significance of modeling internal damping in the control of structures

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Inman, D. J.

    1992-01-01

    Several simple systems are examined to illustrate the importance of the estimation of damping parameters in closed-loop system performance and stability. The negative effects of unmodeled damping are particularly pronounced in systems that do not use collocated sensors and actuators. An example is considered for which even the actuators (a tip jet nozzle and flexible hose) for a simple beam produce significant damping which, if ignored, results in a model that cannot yield a reasonable time response using physically meaningful parameter values. It is concluded that correct damping modeling is essential in structure control.

  2. Experience in using a numerical scheme with artificial viscosity at solving the Riemann problem for a multi-fluid model of multiphase flow

    NASA Astrophysics Data System (ADS)

    Bulovich, S. V.; Smirnov, E. M.

    2018-05-01

    The paper covers application of the artificial viscosity technique to numerical simulation of unsteady one-dimensional multiphase compressible flows on the basis of the multi-fluid approach. The system of governing equations is written under the assumption of pressure equilibrium between the "fluids" (phases). No interfacial exchange is taken into account. A model for evaluation of the artificial viscosity coefficient that (i) assumes identity of this coefficient for all interpenetrating phases and (ii) uses the multiphase-mixture Wood equation for evaluation of a scale speed of sound has been suggested. Performance of the artificial viscosity technique has been evaluated via numerical solution of a model problem of pressure discontinuity breakdown in a three-fluid medium. It has been shown that a relatively simple numerical scheme, explicit and first-order, combined with the suggested artificial viscosity model, predicts a physically correct behavior of the moving shock and expansion waves, and a subsequent refinement of the computational grid results in a monotonic approach to an asymptotic time-dependent solution, without non-physical oscillations.

  3. A physically-based method for predicting peak discharge of floods caused by failure of natural and constructed earthen dams

    USGS Publications Warehouse

    Walder, J.S.; O'Connor, J. E.; Costa, J.E.; ,

    1997-01-01

    We analyse a simple, physically-based model of breach formation in natural and constructed earthen dams to elucidate the principal factors controlling the flood hydrograph at the breach. Formation of the breach, which is assumed trapezoidal in cross-section, is parameterized by the mean rate of downcutting, k, the value of which is constrained by observations. A dimensionless formulation of the model leads to the prediction that the breach hydrograph depends upon lake shape, the ratio r of breach width to depth, the side slope θ of the breach, and the parameter η = (V/D³)(k/√(gD)), where V = lake volume, D = lake depth, and g is the acceleration due to gravity. Calculations show that peak discharge Qp depends weakly on lake shape, r, and θ, but strongly on η, which is the product of a dimensionless lake volume and a dimensionless erosion rate. Qp(η) takes asymptotically distinct forms depending on whether η ≪ 1 or η ≫ 1. Theoretical predictions agree well with data from dam failures for which k could be reasonably estimated. The analysis provides a rapid and in many cases graphical way to estimate plausible values of Qp at the breach.
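
    A small worked example of the dimensionless parameter, using hypothetical lake and breach values (the symbols follow the reconstruction above):

        # Worked example of eta = (V / D**3) * (k / sqrt(g * D)); all input values are
        # hypothetical illustration numbers, not data from the paper.
        import math

        V = 5.0e6         # lake volume, m^3 (assumed)
        D = 20.0          # lake depth, m (assumed)
        k = 10.0 / 3600   # mean breach downcutting rate, m/s (10 m per hour, assumed)
        g = 9.81          # gravitational acceleration, m/s^2

        eta = (V / D**3) * (k / math.sqrt(g * D))
        print(f"dimensionless parameter eta = {eta:.3f}")
        # Whether eta << 1 or eta >> 1 determines which asymptotic form of Qp applies.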

  4. Anomalous evolution of Ar metastable density with electron density in high density Ar discharge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Min; Chang, Hong-Young; You, Shin-Jae

    2011-10-15

    Recently, an anomalous evolution of argon metastable density with plasma discharge power (electron density) was reported [A. M. Daltrini, S. A. Moshkalev, T. J. Morgan, R. B. Piejak, and W. G. Graham, Appl. Phys. Lett. 92, 061504 (2008)]. Although the importance of the metastable atom and its density has been reported widely in the literature, the basic physics behind the anomalous evolution of metastable density has not yet been clearly understood. In this study, we investigated a simple global model to elucidate the underlying physics of the anomalous evolution of argon metastable density with electron density. On the basis of the proposed simple model, we reproduced the anomalous evolution of the metastable density and disclosed the detailed physics behind the anomalous result. Drastic changes of the dominant mechanisms for the population and depopulation processes of Ar metastable atoms with electron density, which take place even in the relatively low electron density regime, are the key to understanding the result.

  5. Isothermal Circumstellar Dust Shell Model for Teaching

    ERIC Educational Resources Information Center

    Robinson, G.; Towers, I. N.; Jovanoski, Z.

    2009-01-01

    We introduce a model of radiative transfer in circumstellar dust shells. By assuming that the shell is both isothermal and its thickness is small compared to its radius, the model is simple enough for students to grasp and yet still provides a quantitative description of the relevant physical features. The isothermal model can be used in a…

  6. Random Walks on a Simple Cubic Lattice, the Multinomial Theorem, and Configurational Properties of Polymers

    ERIC Educational Resources Information Center

    Hladky, Paul W.

    2007-01-01

    Random-climb models enable undergraduate chemistry students to visualize polymer molecules, quantify their configurational properties, and relate molecular structure to a variety of physical properties. The model could serve as an introduction to more elaborate models of polymer molecules and could help in learning topics such as lattice models of…

  7. Full quantum mechanical analysis of atomic three-grating Mach–Zehnder interferometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanz, A.S., E-mail: asanz@iff.csic.es; Davidović, M.; Božić, M.

    2015-02-15

    Atomic three-grating Mach–Zehnder interferometry constitutes an important tool to probe fundamental aspects of the quantum theory. There is, however, a remarkable gap in the literature between the oversimplified models and the robust numerical simulations considered to describe the corresponding experiments. Consequently, the former usually lead to paradoxical scenarios, such as the wave–particle dual behavior of atoms, while the latter make data analysis in simple terms difficult. Here these issues are tackled by means of a simple grating working model consisting of evenly-spaced Gaussian slits. As is shown, this model suffices to explore and explain such experiments both analytically and numerically, giving a good account of the full atomic journey inside the interferometer, and hence contributing to make the physics involved less mysterious. More specifically, it provides a clear and unambiguous picture of the wavefront splitting that takes place inside the interferometer, illustrating how the momentum along each emerging diffraction order is well defined even though the wave function itself still displays a rather complex shape. To this end, the local transverse momentum is also introduced in this context as a reliable analytical tool. The splitting, apart from being a key issue to understand atomic Mach–Zehnder interferometry, also demonstrates at a fundamental level how wave and particle aspects are always present in the experiment, without incurring any contradiction or interpretive paradox. On the other hand, at a practical level, the generality and versatility of the model and methodology presented make them suitable to attack analogous problems in a simple manner after convenient tuning. - Highlights: • A simple model is proposed to analyze experiments based on atomic Mach–Zehnder interferometry. • The model can be easily handled both analytically and computationally. • A theoretical analysis based on the combination of the position and momentum representations is considered. • Wave and particle aspects are shown to coexist within the same experiment, thus removing the old wave-corpuscle dichotomy. • A good agreement between numerical simulations and experimental data is found without appealing to best-fit procedures.

  8. Reactive underwater object inspection based on artificial electric sense.

    PubMed

    Lebastard, Vincent; Boyer, Frédéric; Lanneau, Sylvain

    2016-07-26

    Weakly electric fish can perform complex cognitive tasks based on extracting information from blurry electric images projected from their immediate environment onto their electro-sensitive skin. In particular they can be trained to recognize the intrinsic properties of objects such as their shape, size and electric nature. They do this by means of novel perceptual strategies that exploit the relations between the physics of a self-generated electric field, their body morphology and the ability to perform specific movement termed probing motor acts (PMAs). In this article we artificially reproduce and combine these PMAs to build an autonomous control strategy that allows an artificial electric sensor to find electrically contrasted objects, and to orbit around them based on a minimum set of measurements and simple reactive feedback control laws of the probe's motion. The approach does not require any simulation models and could be implemented on an autonomous underwater vehicle (AUV) equipped with artificial electric sense. The AUV has only to satisfy certain simple geometric properties, such as bi-laterally (left/right) symmetrical electrodes and possess a reasonably high aspect (length/width) ratio.

  9. Matrix Solution of Coupled Differential Equations and Looped Car Following Models

    ERIC Educational Resources Information Center

    McCartney, Mark

    2008-01-01

    A simple mathematical model for the behaviour of how vehicles follow each other along a looped stretch of road is described. The resulting coupled first order differential equations are solved using appropriate matrix techniques and the physical significance of the model is discussed. A number of possible classroom exercises are suggested to help…
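
    A minimal sketch of such a looped, linear follow-the-leader model, written as coupled first-order ODEs and integrated with standard tools, is shown below; the sensitivity constant and initial speeds are hypothetical.

        # Linear car-following on a loop: dv_i/dt = lam * (v_{i+1} - v_i), with the
        # (i+1)-th vehicle ahead of the i-th and the index wrapping around the loop.
        # The sensitivity lam and the initial speeds are hypothetical.
        import numpy as np
        from scipy.integrate import solve_ivp

        n, lam = 5, 0.5
        A = lam * (np.roll(np.eye(n), 1, axis=1) - np.eye(n))    # circulant coupling matrix

        v0 = np.array([10.0, 12.0, 9.0, 11.0, 10.5])             # initial speeds, m/s
        sol = solve_ivp(lambda t, v: A @ v, (0.0, 30.0), v0)
        print("speeds at t = 30 s:", np.round(sol.y[:, -1], 3))  # speeds relax toward a common value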

  10. Experiments with Helium-Filled Balloons

    ERIC Educational Resources Information Center

    Zable, Anthony C.

    2010-01-01

    The concepts of Newtonian mechanics, fluids, and ideal gas law physics are often treated as separate and isolated topics in the typical introductory college-level physics course, especially in the laboratory setting. To bridge these subjects, a simple experiment was developed that utilizes computer-based data acquisition sensors and a digital gram…

  11. Health/Fitness Instructor's Handbook.

    ERIC Educational Resources Information Center

    Howley, Edward T.; Franks, B. Don

    This book identifies the components of physical fitness that are related to positive health as distinct from the simple performance of specific motor tasks. The positive health concept is expanded to further clarify the relationship of physical fitness to total fitness. The disciplinary knowledge base that is essential for fitness professionals is…

  12. Automatic determination of fault effects on aircraft functionality

    NASA Technical Reports Server (NTRS)

    Feyock, Stefan

    1989-01-01

    The problem of determining the behavior of physical systems subsequent to the occurrence of malfunctions is discussed. It is established that while it was reasonable to assume that the most important fault behavior modes of primitive components and simple subsystems could be known and predicted, interactions within composite systems reached levels of complexity that precluded the use of traditional rule-based expert system techniques. Reasoning from first principles, i.e., on the basis of causal models of the physical system, was required. The first question that arises is, of course, how the causal information required for such reasoning should be represented. The bond graphs presented here occupy a position intermediate between qualitative and quantitative models, allowing the automatic derivation of Kuipers-like qualitative constraint models as well as state equations. Their most salient feature, however, is that entities corresponding to components and interactions in the physical system are explicitly represented in the bond graph model, thus permitting systematic model updates to reflect malfunctions. Researchers show how this is done, as well as presenting a number of techniques for obtaining qualitative information from the state equations derivable from bond graph models. One insight is the fact that one of the most important advantages of the bond graph ontology is the highly systematic approach to model construction it imposes on the modeler, who is forced to classify the relevant physical entities into a small number of categories, and to look for two highly specific types of interactions among them. The systematic nature of bond graph model construction facilitates the process to the point where the guidelines are sufficiently specific to be followed by modelers who are not domain experts. As a result, models of a given system constructed by different modelers will have extensive similarities. Researchers conclude by pointing out that the ease of updating bond graph models to reflect malfunctions is a manifestation of the systematic nature of bond graph construction, and the regularity of the relationship between bond graph models and physical reality.

  13. The physics of powerlifting

    NASA Astrophysics Data System (ADS)

    Radenković, Lazar; Nešić, Ljubiša

    2018-05-01

    The main contribution of this paper is a didactic adaptation of the biomechanical analysis of the three main lifts in powerlifting (squat, bench press, deadlift). We used simple models that can easily be understood by undergraduate college students to estimate the values of various physical quantities during powerlifting. Specifically, we showed how plate choice affects the bench press and estimated spine loads and torques at the hip and knee during lifting. Theoretical calculations showed good agreement with experimental data, proving that the models are valid.
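
    A back-of-the-envelope estimate of the kind the paper describes, here a static hip torque during a lift, can be sketched as follows; the barbell mass and moment arm are hypothetical illustration values.

        # Back-of-the-envelope static hip torque during a lift: tau = F * d = m * g * d.
        # Barbell mass and horizontal moment arm are hypothetical illustration values.
        m = 150.0   # barbell mass, kg (assumed)
        g = 9.81    # gravitational acceleration, m/s^2
        d = 0.25    # horizontal distance from hip joint to the bar, m (assumed)

        tau = m * g * d
        print(f"static hip torque ~ {tau:.0f} N*m")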

  14. Mira variables: An informal review

    NASA Technical Reports Server (NTRS)

    Wing, R. F.

    1980-01-01

    The structure of the Mira variables is discussed with particular emphasis on the extent of their observable atmospheres, the various methods for measuring the sizes of these atmospheres, and the manner in which the size changes through the cycle. The results obtained by direct, photometric, and spectroscopic methods are compared, and the problems of interpretation are addressed. Also, a simple model for the atmospheric structure and motions of Miras based on recent observations of the doubling of infrared molecular lines is described. This model, consisting of two atmospheric layers plus a circumstellar shell, provides a physically plausible picture of the atmosphere which is consistent with the photometrically measured magnitude and temperature variations as well as the spectroscopic data.

  15. Fitting and Reconstruction of Thirteen Simple Coronal Mass Ejections

    NASA Astrophysics Data System (ADS)

    Al-Haddad, Nada; Nieves-Chinchilla, Teresa; Savani, Neel P.; Lugaz, Noé; Roussev, Ilia I.

    2018-05-01

    Coronal mass ejections (CMEs) are the main drivers of geomagnetic disturbances, but the effects of their interaction with Earth's magnetic field depend on their magnetic configuration and orientation. Fitting and reconstruction techniques have been developed to determine important geometrical and physical CME properties, such as the orientation of the CME axis, the CME size, and its magnetic flux. In many instances, there is disagreement between different methods but also between fitting from in situ measurements and reconstruction based on remote imaging. This could be due to the geometrical or physical assumptions of the models, but also to the fact that the magnetic field inside CMEs is only measured at one point in space as the CME passes over a spacecraft. In this article we compare three methods that are based on different assumptions for measurements by the Wind spacecraft for 13 CMEs from 1997 to 2015. These CMEs are selected from the interplanetary coronal mass ejections catalog on https://wind.nasa.gov/ICMEindex.php because of their simplicity in terms of: 1) slow expansion speed throughout the CME and 2) weak asymmetry in the magnetic field profile. This makes these 13 events ideal candidates for comparing codes that do not include expansion or distortion. We find that for these simple events, the codes are in relatively good agreement in terms of the CME axis orientation for six of the 13 events. Using the Grad-Shafranov technique, we can determine the shape of the cross-section, which is assumed to be circular for the other two models, a force-free fitting and a circular-cylindrical non force-free fitting. Five of the events are found to have a clear circular cross-section, even when this is not a precondition of the reconstruction. We make an initial attempt at evaluating the adequacy of the different assumptions for these simple CMEs. The conclusion of this work strongly suggests that attempts at reconciling in situ and remote-sensing views of CMEs must take into consideration the compatibility of the different models with specific CME structures to better reproduce flux ropes.

  16. A Flush Toilet Model for the Transistor

    ERIC Educational Resources Information Center

    Organtini, Giovanni

    2012-01-01

    In introductory physics textbooks, diodes working principles are usually well described in a relatively simple manner. According to our experience, they are well understood by students. Even when no formal derivation of the physics laws governing the current flow through a diode is given, the use of this device as a check valve is easily accepted.…

  17. Estimating Colloidal Contact Model Parameters Using Quasi-Static Compression Simulations.

    PubMed

    Bürger, Vincent; Briesen, Heiko

    2016-10-05

    For colloidal particles interacting in suspensions, clusters, or gels, contact models should attempt to include all physical phenomena experimentally observed. One critical point when formulating a contact model is to ensure that the interaction parameters can be easily obtained from experiments. Experimental determinations of contact parameters for particles either are based on bulk measurements for simulations on the macroscopic scale or require elaborate setups for obtaining tangential parameters such as using atomic force microscopy. However, on the colloidal scale, a simple method is required to obtain all interaction parameters simultaneously. This work demonstrates that quasi-static compression of a fractal-like particle network provides all the necessary information to obtain particle interaction parameters using a simple spring-based contact model. These springs provide resistances against all degrees of freedom associated with two-particle interactions, and include critical forces or moments where such springs break, indicating a bond-breakage event. A position-based cost function is introduced to show the identifiability of the two-particle contact parameters, and a discrete, nonlinear, and non-gradient-based global optimization method (simplex with simulated annealing, SIMPSA) is used to minimize the cost function calculated from deviations of particle positions. Results show that, in principle, all necessary contact parameters for an arbitrary particle network can be identified, although numerical efficiency as well as experimental noise must be addressed when applying this method. Such an approach lays the groundwork for identifying particle-contact parameters from a position-based particle analysis for a colloidal system using just one experiment. Spring constants also directly influence the time step of the discrete-element method, and a detailed knowledge of all necessary interaction parameters will help to improve the efficiency of colloidal particle simulations.
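
    The position-based cost function idea can be sketched as follows; simulate_compression is a dummy stand-in for a DEM run of the quasi-static compression with the candidate spring parameters, and the parameter names are hypothetical.

        # Sketch of a position-based cost function for contact-parameter identification.
        # simulate_compression is a dummy placeholder; a real run would integrate the
        # spring-based contact model for the candidate parameters.
        import numpy as np

        def simulate_compression(params, n_particles=50):
            rng = np.random.default_rng(0)
            base = rng.random((n_particles, 3))
            return base * (1.0 + 0.01 * params["k_normal"])      # placeholder response

        def position_cost(params, reference_positions):
            simulated = simulate_compression(params)
            return np.sum((simulated - reference_positions) ** 2)  # squared position deviations

        reference = simulate_compression({"k_normal": 1.0})
        print(position_cost({"k_normal": 1.2}, reference))
        # A global, non-gradient optimizer (such as SIMPSA) would minimize this cost
        # over the spring constants and the critical forces/moments.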

  18. Physical layer security in fiber-optic MIMO-SDM systems: An overview

    NASA Astrophysics Data System (ADS)

    Guan, Kyle; Cho, Junho; Winzer, Peter J.

    2018-02-01

    Fiber-optic transmission systems provide large capacities over enormous distances but are vulnerable to simple eavesdropping attacks at the physical layer. We classify key-based and keyless encryption and physical layer security techniques and discuss them in the context of optical multiple-input-multiple-output space-division multiplexed (MIMO-SDM) fiber-optic communication systems. We show that MIMO-SDM not only increases system capacity, but also ensures the confidentiality of information transmission. Based on recent numerical and experimental results, we review how the unique channel characteristics of MIMO-SDM can be exploited to provide various levels of physical layer security.

  19. Model for diffuse interstellar clouds: improvements to the theory of molecular hydrogen photodestruction and to the gas phase chemistry of carbon monoxide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Federman, S.R.

    1979-01-01

    A theoretical model has been developed to determine physical processes in conjunction with astrophysical observation. The calculations are based on isobaric, steady-state, plane-parallel conditions. In the model, the cloud is illuminated by ultraviolet radiation from one side. The density and temperature of the gas are derived by invoking energy conservation in terms of thermal balance. The derived values for density and temperature then are used to determine the abundances of approximately fifty atomic and molecular species, including important ionic species and simple carbon and oxygen bearing molecules. Except for molecular hydrogen formation on dust grains, binary gas phase reactions are used to develop the chemistry of the model cloud. The theoretical model has been found to be appropriate for a particular range of physical parameters. The results of the steady-state calculations have been compared to ultraviolet observations, predominantly those made with the Copernicus satellite. The theory of molecular hydrogen photodestruction has been reexamined so that improvements to the model can be made. By analyzing the region where the atomic to molecular hydrogen transition occurs, several processes have been found to contribute to dissociation.

  20. The epistemological status of general circulation models

    NASA Astrophysics Data System (ADS)

    Loehle, Craig

    2018-03-01

    Forecasts of both likely anthropogenic effects on climate and consequent effects on nature and society are based on large, complex software tools called general circulation models (GCMs). Forecasts generated by GCMs have been used extensively in policy decisions related to climate change. However, the relation between underlying physical theories and results produced by GCMs is unclear. In the case of GCMs, many discretizations and approximations are made, and simulating Earth system processes is far from simple and currently leads to some results with unknown energy balance implications. Statistical testing of GCM forecasts for degree of agreement with data would facilitate assessment of fitness for use. If model results need to be put on an anomaly basis due to model bias, then both visual and quantitative measures of model fit depend strongly on the reference period used for normalization, making testing problematic. Epistemology is here applied to problems of statistical inference during testing, the relationship between the underlying physics and the models, the epistemic meaning of ensemble statistics, problems of spatial and temporal scale, the existence or not of an unforced null for climate fluctuations, the meaning of existing uncertainty estimates, and other issues. Rigorous reasoning entails carefully quantifying levels of uncertainty.

  1. An Introduction to Magnetospheric Physics by Means of Simple Models

    NASA Technical Reports Server (NTRS)

    Stern, D. P.

    1981-01-01

    The large scale structure and behavior of the Earth's magnetosphere is discussed. The model is suitable for inclusion in courses on space physics, plasmas, astrophysics or the Earth's environment, as well as for self-study. Nine quantitative problems, dealing with properties of linear superpositions of a dipole and a constant field are presented. Topics covered include: open and closed models of the magnetosphere; field line motion; the role of magnetic merging (reconnection); magnetospheric convection; and the origin of the magnetopause, polar cusps, and high latitude lobes.

  2. Simple model for vibration-translation exchange at high temperatures: effects of multiquantum transitions on the relaxation of a N2 gas flow behind a shock.

    PubMed

    Aliat, A; Vedula, P; Josyula, E

    2011-02-01

    In this paper a simple model is proposed for computation of rate coefficients related to vibration-translation transitions based on the forced harmonic oscillator theory. This model, which is developed by considering a quadrature method, provides rate coefficients that are in very good agreement with those found in the literature for the high temperature regime (≳10,000 K). This model is implemented to study a one-dimensional nonequilibrium inviscid N2 flow behind a plane shock by considering a state-to-state approach. While the effects of ionization and chemical reactions are neglected in our study, our results show that multiquantum transitions have a great influence on the relaxation of the macroscopic parameters of the gas flow behind the shock, especially on vibrational distributions of high levels. All vibrational states are influenced by multiquantum processes, but the effective number of transitions decreases inversely according to the vibrational quantum number. For the initial conditions considered in this study, excited electronic states are found to be weakly populated and can be neglected in modeling. Moreover, the computing time is considerably reduced with the model described in this paper compared to others found in the literature. ©2011 American Physical Society

  3. Study of Magnetic Damping Effect on Convection and Solidification Under G-Jitter Conditions

    NASA Technical Reports Server (NTRS)

    Li, Ben Q.; deGroh, H. C., III

    1999-01-01

    As shown by NASA resources dedicated to measuring residual gravity (SAMS and OARE systems), g-jitter is a critical issue affecting space experiments on solidification processing of materials. This study aims to provide, through extensive numerical simulations and ground based experiments, an assessment of the use of magnetic fields in combination with microgravity to reduce the g-jitter induced convective flows in space processing systems. We have so far completed asymptotic analyses based on the analytical solutions for g-jitter driven flow and magnetic field damping effects for a simple one-dimensional parallel plate configuration, and developed both 2-D and 3-D numerical models for g-jitter driven flows in simple solidification systems with and without presence of an applied magnetic field. Numerical models have been checked with the analytical solutions and have been applied to simulate the convective flows and mass transfer using both synthetic g-jitter functions and the g-jitter data taken from space flight. Some useful findings have been obtained from the analyses and the modeling results. Some key points may be summarized as follows: (1) the amplitude of the oscillating velocity decreases at a rate inversely proportional to the g-jitter frequency and with an increase in the applied magnetic field; (2) the induced flow approximately oscillates at the same frequency as the affecting g-jitter, but out of a phase angle; (3) the phase angle is a complicated function of geometry, applied magnetic field, temperature gradient and frequency; (4) g-jitter driven flows exhibit a complex fluid flow pattern evolving in time; (5) the damping effect is more effective for low frequency flows; and (6) the applied magnetic field helps to reduce the variation of solutal distribution along the solid-liquid interface. Work in progress includes numerical simulations and ground-based measurements. Both 2-D and 3-D numerical simulations are being continued to obtain further information on g-jitter driven flows and magnetic field effects. A physical model for ground-based measurements is completed and some measurements of the oscillating convection are being taken on the physical model. The comparison of the measurements with numerical simulations is in progress. Additional work planned in the project will also involve extending the 2-D numerical model to include the solidification phenomena with the presence of both g-jitter and magnetic fields.

  4. How-to-Do-It: Countercurrent Heat Exchange in Vertebrate Limbs.

    ERIC Educational Resources Information Center

    Franklin, George B.; Plakke, Ronald K.

    1988-01-01

    Describes principals of physics that are manifested in simple biological systems of heat conservation structures. Outlines materials needed, data collection, analysis, and discussion questions for construction and operation of two models, one that is a countercurrent heat exchange model and one that is not. (RT)

  5. The Free Energy in the Derrida-Retaux Recursive Model

    NASA Astrophysics Data System (ADS)

    Hu, Yueyun; Shi, Zhan

    2018-05-01

    We are interested in a simple max-type recursive model studied by Derrida and Retaux (J Stat Phys 156:268-290, 2014) in the context of a physics problem, and find a wide range for the exponent in the free energy in the nearly supercritical regime.

  6. Versatile microrobotics using simple modular subunits

    NASA Astrophysics Data System (ADS)

    Cheang, U. Kei; Meshkati, Farshad; Kim, Hoyeon; Lee, Kyoungwoo; Fu, Henry Chien; Kim, Min Jun

    2016-07-01

    The realization of reconfigurable modular microrobots could aid drug delivery and microsurgery by allowing a single system to navigate diverse environments and perform multiple tasks. So far, microrobotic systems are limited by insufficient versatility; for instance, helical shapes commonly used for magnetic swimmers cannot effectively assemble and disassemble into different size and shapes. Here by using microswimmers with simple geometries constructed of spherical particles, we show how magnetohydrodynamics can be used to assemble and disassemble modular microrobots with different physical characteristics. We develop a mechanistic physical model that we use to improve assembly strategies. Furthermore, we experimentally demonstrate the feasibility of dynamically changing the physical properties of microswimmers through assembly and disassembly in a controlled fluidic environment. Finally, we show that different configurations have different swimming properties by examining swimming speed dependence on configuration size.

  7. Versatile microrobotics using simple modular subunits

    PubMed Central

    Cheang, U Kei; Meshkati, Farshad; Kim, Hoyeon; Lee, Kyoungwoo; Fu, Henry Chien; Kim, Min Jun

    2016-01-01

    The realization of reconfigurable modular microrobots could aid drug delivery and microsurgery by allowing a single system to navigate diverse environments and perform multiple tasks. So far, microrobotic systems are limited by insufficient versatility; for instance, helical shapes commonly used for magnetic swimmers cannot effectively assemble and disassemble into different size and shapes. Here by using microswimmers with simple geometries constructed of spherical particles, we show how magnetohydrodynamics can be used to assemble and disassemble modular microrobots with different physical characteristics. We develop a mechanistic physical model that we use to improve assembly strategies. Furthermore, we experimentally demonstrate the feasibility of dynamically changing the physical properties of microswimmers through assembly and disassembly in a controlled fluidic environment. Finally, we show that different configurations have different swimming properties by examining swimming speed dependence on configuration size. PMID:27464852

  8. Reading Time as Evidence for Mental Models in Understanding Physics

    NASA Astrophysics Data System (ADS)

    Brookes, David T.; Mestre, José; Stine-Morrow, Elizabeth A. L.

    2007-11-01

    We present results of a reading study that show the usefulness of probing physics students' cognitive processing by measuring reading time. According to contemporary discourse theory, when people read a text, a network of associated inferences is activated to create a mental model. If the reader encounters an idea in the text that conflicts with existing knowledge, the construction of a coherent mental model is disrupted and reading times are prolonged, as measured using a simple self-paced reading paradigm. We used this effect to study how "non-Newtonian" and "Newtonian" students create mental models of conceptual systems in physics as they read texts related to the ideas of Newton's third law, energy, and momentum. We found significant effects of prior knowledge state on patterns of reading time, suggesting that students attempt to actively integrate physics texts with their existing knowledge.

  9. Shift scheduling model considering workload and worker’s preference for security department

    NASA Astrophysics Data System (ADS)

    Herawati, A.; Yuniartha, D. R.; Purnama, I. L. I.; Dewi, LT

    2018-04-01

    A security department operates 24 hours a day and applies shift scheduling to organize its workers, as is common in the hotel industry. This research develops a shift scheduling model that considers workers' physical workload, measured with the rating of perceived exertion (RPE) on the Borg scale, and workers' preferences, in order to accommodate schedule flexibility. The mathematical model is formulated as an integer linear program and yields an optimal solution for a simple problem. The resulting shift schedule distributes shift allocations equally among workers to balance physical workload and gives workers flexibility in arranging their working hours.
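
    A generic integer-programming sketch of this kind of shift schedule is shown below; it covers every shift, limits each worker to one shift per day, and penalizes assignments a worker dislikes. The workers, shifts, and preference penalties are hypothetical, and the formulation is illustrative rather than the paper's exact model.

        # Generic shift-scheduling ILP sketch (illustrative, hypothetical data).
        from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

        workers = ["W1", "W2", "W3", "W4"]
        days = ["Mon", "Tue", "Wed"]
        shifts = ["morning", "evening", "night"]
        # penalty[w][s]: how much worker w dislikes shift s (hypothetical preference data)
        penalty = {w: {"morning": 0, "evening": 1, "night": 3} for w in workers}

        prob = LpProblem("shift_scheduling", LpMinimize)
        x = LpVariable.dicts("assign", (workers, days, shifts), cat=LpBinary)

        # Objective: minimize the total preference penalty of the schedule
        prob += lpSum(penalty[w][s] * x[w][d][s] for w in workers for d in days for s in shifts)

        # Each shift on each day is covered by exactly one worker
        for d in days:
            for s in shifts:
                prob += lpSum(x[w][d][s] for w in workers) == 1

        # Each worker works at most one shift per day (a crude stand-in for workload balancing)
        for w in workers:
            for d in days:
                prob += lpSum(x[w][d][s] for s in shifts) <= 1

        prob.solve()
        for w in workers:
            print(w, [(d, s) for d in days for s in shifts if x[w][d][s].value() == 1])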

  10. Cosmology. A first course

    NASA Astrophysics Data System (ADS)

    Lachieze-Rey, Marc

    This book delivers a quantitative account of the science of cosmology, designed for a non-specialist audience. The basic principles are outlined using simple maths and physics, while still providing rigorous models of the Universe. It offers an ideal introduction to the key ideas in cosmology, without going into technical details. The approach used is based on the fundamental ideas of general relativity such as the spacetime interval, comoving coordinates, and spacetime curvature. It provides an up-to-date and thoughtful discussion of the big bang, and the crucial questions of structure and galaxy formation. Questions of method and philosophical approaches in cosmology are also briefly discussed. Advanced undergraduates in either physics or mathematics would benefit greatly from its use either as a course text or as a supplementary guide to cosmology courses.

  11. LANDPLANER (LANDscape, Plants, LANdslide and ERosion): a model to describe the dynamic response of slopes (or basins) under different changing scenarios

    NASA Astrophysics Data System (ADS)

    Rossi, Mauro; Torri, Dino; Santi, Elisa; Bacaro, Giovanni; Marchesini, Ivan

    2014-05-01

    Landslide phenomena and erosion processes are widespread and cause extensive damage to the environment and considerable loss of ecosystem services every year. These processes compete with one another, and their complex interaction controls landscape evolution. Landslide phenomena and erosion processes can be strongly influenced by land use, vegetation, soil characteristics, and anthropic actions. Such phenomena are mainly modelled separately, using empirical or physically based approaches. The former rely upon the identification of simple empirical laws relating the occurrence of instability processes to some of their potential causes. The latter are based on physical descriptions of the processes and, depending on their degree of complexity, can integrate different variables characterizing the processes and their triggers; they often couple a hydrological model with an erosion or a landslide model. The spatial modelling schemes are heterogeneous, but mostly a raster (i.e., matrices of data) or a conceptual (i.e., cascading planes and channels) description of the terrain is used. The two model types are generally designed and applied at different scales. Empirical models, which are less demanding in terms of input data, cannot explicitly consider the real triggering mechanisms and are commonly used to assess the potential occurrence of instability phenomena over large areas (small-scale assessment). Physically based models are highly demanding in terms of input data, which are difficult to obtain over large areas without large uncertainty, and their applicability is often limited to small catchments or single slopes (large-scale assessment). Moreover, even physically based models are simplified descriptions of the instability processes and can neglect significant aspects of the real triggering mechanisms; for instance, the influence of vegetation has been considered only partially. Although a variety of modelling approaches have been proposed in the literature to model landslide and erosion processes separately, only a few attempts have been made to model both jointly, mostly by integrating pre-existing models. To overcome this limitation we developed a new model called LANDPLANER (LANDscape, Plants, LANdslide and ERosion), specifically designed to describe the dynamic response of slopes (or basins) under different changing scenarios, including: (i) changes of meteorological factors, (ii) changes of vegetation or land use, and (iii) changes of slope morphology. The model was applied in different study areas in order to check its basic assumptions and to test its general operability and applicability. Results show reasonable model behaviour and confirm its easy applicability in real cases.

  12. Ionospheric Storm Reconstructions with a Multimodel Ensemble Prediction System (MEPS) of Data Assimilation Models: Mid and Low Latitude Dynamics

    NASA Astrophysics Data System (ADS)

    Schunk, R. W.; Scherliess, L.; Eccles, V.; Gardner, L. C.; Sojka, J. J.; Zhu, L.; Pi, X.; Mannucci, A. J.; Komjathy, A.; Wang, C.; Rosen, G.

    2016-12-01

    As part of the NASA-NSF Space Weather Modeling Collaboration, we created a Multimodel Ensemble Prediction System (MEPS) for the Ionosphere-Thermosphere-Electrodynamics system that is based on Data Assimilation (DA) models. MEPS is composed of seven physics-based data assimilation models that cover the globe. Ensemble modeling can be conducted for the mid-low latitude ionosphere using the four GAIM data assimilation models, including the Gauss Markov (GM), Full Physics (FP), Band Limited (BL) and 4DVAR DA models. These models can assimilate Total Electron Content (TEC) from a constellation of satellites, bottom-side electron density profiles from digisondes, in situ plasma densities, occultation data and ultraviolet emissions. The four GAIM models were run for the March 16-17, 2013, geomagnetic storm period with the same data, but we also systematically added new data types and re-ran the GAIM models to see how the different data types affected the GAIM results, with the emphasis on elucidating differences in the underlying ionospheric dynamics and thermospheric coupling. Also, for each scenario the outputs from the four GAIM models were used to produce an ensemble mean for TEC, NmF2, and hmF2. A simple average of the models was used in the ensemble averaging to see if there was an improvement of the ensemble average over the individual models. For the scenarios considered, the ensemble average yielded better specifications than the individual GAIM models. The model differences and averages, and the consequent differences in ionosphere-thermosphere coupling and dynamics will be discussed.

  13. Modeling of Non-isothermal Austenite Formation in Spring Steel

    NASA Astrophysics Data System (ADS)

    Huang, He; Wang, Baoyu; Tang, Xuefeng; Li, Junling

    2017-12-01

    The austenitization kinetics description of spring steel 60Si2CrA plays an important role in providing guidelines for industrial production. The dilatometric curves of 60Si2CrA steel were measured using a DIL805A dilatometer at heating rates of 0.3 K/s to 50 K/s (0.3 °C/s to 50 °C/s). Based on the dilatometric curves, a unified kinetics model using the internal state variable (ISV) method was derived to describe the non-isothermal austenitization kinetics of 60Si2CrA; the model describes both the incubation and transition periods. The material constants in the model were determined using a genetic algorithm-based optimization technique. Additionally, good agreement between predicted and experimental volume fractions of transformed austenite was obtained, indicating that the model is effective for describing the austenitization kinetics of 60Si2CrA steel. Compared with other modeling methods of austenitization kinetics, this model, which uses the ISV method, has some advantages, such as a simple formula and explicit physical meaning, and can probably be used in engineering practice.

  14. A computer model of context-dependent perception in a very simple world

    NASA Astrophysics Data System (ADS)

    Lara-Dammer, Francisco; Hofstadter, Douglas R.; Goldstone, Robert L.

    2017-11-01

    We propose the foundations of a computer model of scientific discovery that takes into account certain psychological aspects of human observation of the world. To this end, we simulate two main components of such a system. The first is a dynamic microworld in which physical events take place, and the second is an observer that visually perceives entities and events in the microworld. For reasons of space, this paper focuses only on the starting phase of discovery, which involves the relatively simple visual inputs of objects and collisions.

  15. Physics Almost Saved the President! Electromagnetic Induction and the Assassination of James Garfield: A Teaching Opportunity in Introductory Physics

    ERIC Educational Resources Information Center

    Overduin, James; Molloy, Dana; Selway, Jim

    2014-01-01

    Electromagnetic induction is probably one of the most challenging subjects for students in the introductory physics sequence, especially in algebra-based courses. Yet it is at the heart of many of the devices we rely on today. To help students grasp and retain the concept, we have put together a simple and dramatic classroom demonstration that…

  16. Statistical and engineering methods for model enhancement

    NASA Astrophysics Data System (ADS)

    Chang, Chia-Jung

    Models which describe the performance of a physical process are essential for quality prediction, experimental planning, process control and optimization. Engineering models developed from the underlying physics/mechanics of the process, such as analytic models or finite element models, are widely used to capture the deterministic trend of the process. However, there usually exists stochastic randomness in the system, which may introduce discrepancy between physics-based model predictions and observations in reality. Alternatively, statistical models can be used to obtain predictions purely based on the data generated from the process. However, such models tend to perform poorly when predictions are made away from the observed data points. This dissertation contributes to model enhancement research by integrating physics-based and statistical models to mitigate their individual drawbacks and provide models with better accuracy by combining the strengths of both. The proposed model enhancement methodologies include two streams: (1) a data-driven enhancement approach and (2) an engineering-driven enhancement approach. Through these efforts, more adequate models are obtained, which leads to better performance in system forecasting, process monitoring and decision optimization. Among data-driven enhancement approaches, the Gaussian Process (GP) model provides a powerful methodology for calibrating a physical model in the presence of model uncertainties. However, if the data contain systematic experimental errors, the GP model can lead to an unnecessarily complex adjustment of the physical model. In Chapter 2, we propose a novel enhancement procedure, named "Minimal Adjustment", which brings the physical model closer to the data by making minimal changes to it. This is achieved by approximating the GP model with a linear regression model and then applying simultaneous variable selection over the model and experimental bias terms. Two real examples and simulations are presented to demonstrate the advantages of the proposed approach. Rather than enhancing the model from a data-driven perspective, an alternative approach is to adjust the model by incorporating additional domain or engineering knowledge when available. This often leads to models that are very simple and easy to interpret. The concepts of engineering-driven enhancement are carried out through two applications to demonstrate the proposed methodologies. In the first application, which focuses on polymer composite quality, nanoparticle dispersion has been identified as a crucial factor affecting the mechanical properties. Transmission Electron Microscopy (TEM) images are commonly used to represent nanoparticle dispersion without further quantification of its characteristics. In Chapter 3, we develop an engineering-driven nonhomogeneous Poisson random field modeling strategy to characterize the nanoparticle dispersion status of nanocomposite polymers, quantitatively representing the nanomaterial quality captured in the image data. The model parameters are estimated using a Bayesian MCMC technique to overcome the challenge of the limited amount of accessible data caused by time-consuming sampling schemes. The second application statistically calibrates the engineering-driven force models of the laser-assisted micro milling (LAMM) process, which facilitates a systematic understanding and optimization of the targeted process. In Chapter 4, the force prediction interval is derived by incorporating the variability in the runout parameters as well as the variability in the measured cutting forces. The experimental results indicate that the model predicts the cutting force profile with good accuracy using a 95% confidence interval. To conclude, this dissertation draws attention to model enhancement, which has considerable impact on the modeling, design, and optimization of various processes and systems. The fundamental methodologies of model enhancement are developed and applied to various applications. These research activities developed engineering-compliant models for adequate system predictions based on observational data with complex variable relationships and uncertainty, which facilitate process planning, monitoring, and real-time control.
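
    As a concrete illustration of the data-driven enhancement idea summarized above, the sketch below corrects a physics-based prediction with a Gaussian Process fitted to its residuals. It is purely illustrative: the stand-in physics model, kernel, and data are assumptions, not the dissertation's actual models or experiments.

    ```python
    # Hypothetical sketch: enhance a physics-based prediction with a Gaussian
    # Process fit to its residuals (data-driven model enhancement).
    import numpy as np

    def rbf_kernel(x1, x2, length=1.0, variance=1.0):
        """Squared-exponential covariance between two sets of 1-D inputs."""
        d2 = (x1[:, None] - x2[None, :]) ** 2
        return variance * np.exp(-0.5 * d2 / length**2)

    def physics_model(x):
        # Stand-in for an engineering/analytic model (an assumption, not from the work).
        return 2.0 * x

    # Observations deviate from the physics model by a smooth discrepancy plus noise.
    rng = np.random.default_rng(0)
    x_train = np.linspace(0.0, 5.0, 20)
    y_train = physics_model(x_train) + 0.5 * np.sin(x_train) + 0.05 * rng.standard_normal(20)

    # Fit a GP to the residuals y - f_physics(x).
    resid = y_train - physics_model(x_train)
    K = rbf_kernel(x_train, x_train) + 1e-4 * np.eye(x_train.size)
    alpha = np.linalg.solve(K, resid)

    # Enhanced prediction = physics model + GP correction.
    x_new = np.linspace(0.0, 5.0, 101)
    y_enhanced = physics_model(x_new) + rbf_kernel(x_new, x_train) @ alpha
    ```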

  17. Can simple rules control development of a pioneer vertebrate neuronal network generating behavior?

    PubMed

    Roberts, Alan; Conte, Deborah; Hull, Mike; Merrison-Hort, Robert; al Azad, Abul Kalam; Buhl, Edgar; Borisyuk, Roman; Soffe, Stephen R

    2014-01-08

    How do the pioneer networks in the axial core of the vertebrate nervous system first develop? Fundamental to understanding any full-scale neuronal network is knowledge of the constituent neurons, their properties, synaptic interconnections, and normal activity. Our novel strategy uses basic developmental rules to generate model networks that retain individual neuron and synapse resolution and are capable of reproducing correct, whole animal responses. We apply our developmental strategy to young Xenopus tadpoles, whose brainstem and spinal cord share a core vertebrate plan, but at a tractable complexity. Following detailed anatomical and physiological measurements to complete a descriptive library of each type of spinal neuron, we build models of their axon growth controlled by simple chemical gradients and physical barriers. By adding dendrites and allowing probabilistic formation of synaptic connections, we reconstruct network connectivity among up to 2000 neurons. When the resulting "network" is populated by model neurons and synapses, with properties based on physiology, it can respond to sensory stimulation by mimicking tadpole swimming behavior. This functioning model represents the most complete reconstruction of a vertebrate neuronal network that can reproduce the complex, rhythmic behavior of a whole animal. The findings validate our novel developmental strategy for generating realistic networks with individual neuron- and synapse-level resolution. We use it to demonstrate how early functional neuronal connectivity and behavior may in life result from simple developmental "rules," which lay out a scaffold for the vertebrate CNS without specific neuron-to-neuron recognition.
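
    The probabilistic connectivity step described above can be sketched in a few lines. The one-dimensional rostro-caudal layout, the axon and dendrite extents, and the contact-to-synapse probability below are illustrative assumptions, not the authors' measured anatomical parameters.

    ```python
    # Minimal sketch (assumptions, not the authors' code): form a synapse wherever a
    # grown axon passes through another neuron's dendritic field, with probability p.
    import numpy as np

    rng = np.random.default_rng(1)
    n_neurons = 200
    p_synapse = 0.3  # hypothetical contact-to-synapse probability

    # Hypothetical 1-D rostro-caudal layout: each neuron has an axon spanning
    # [axon_lo, axon_hi] and a dendritic field spanning [dend_lo, dend_hi].
    soma = rng.uniform(0.0, 1000.0, n_neurons)            # position in micrometres
    axon_lo = soma
    axon_hi = soma + rng.uniform(100.0, 600.0, n_neurons)
    dend_lo = soma - rng.uniform(10.0, 50.0, n_neurons)
    dend_hi = soma + rng.uniform(10.0, 50.0, n_neurons)

    connectivity = np.zeros((n_neurons, n_neurons), dtype=bool)
    for pre in range(n_neurons):
        # An axon "contacts" a dendrite if their longitudinal extents overlap.
        overlap = (axon_lo[pre] <= dend_hi) & (axon_hi[pre] >= dend_lo)
        overlap[pre] = False  # no self-connections
        connectivity[pre] = overlap & (rng.random(n_neurons) < p_synapse)

    print("mean out-degree:", connectivity.sum(axis=1).mean())
    ```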

  18. Simple analytical model reveals the functional role of embodied sensorimotor interaction in hexapod gaits

    PubMed Central

    Aoi, Shinya; Nachstedt, Timo; Manoonpong, Poramate; Wörgötter, Florentin; Matsuno, Fumitoshi

    2018-01-01

    Insects have various gaits with specific characteristics and can change their gaits smoothly in accordance with their speed. These gaits emerge from the embodied sensorimotor interactions that occur between the insect’s neural control and body dynamic systems through sensory feedback. Sensory feedback plays a critical role in coordinated movements such as locomotion, particularly in stick insects. While many previously developed insect models can generate different insect gaits, the functional role of embodied sensorimotor interactions in the interlimb coordination of insects remains unclear because of their complexity. In this study, we propose a simple physical model that is amenable to mathematical analysis to explain the functional role of these interactions clearly. We focus on a foot contact sensory feedback called phase resetting, which regulates leg retraction timing based on touchdown information. First, we used a hexapod robot to determine whether the distributed decoupled oscillators used for legs with the sensory feedback generate insect-like gaits through embodied sensorimotor interactions. The robot generated two different gaits and one had similar characteristics to insect gaits. Next, we proposed the simple model as a minimal model that allowed us to analyze and explain the gait mechanism through the embodied sensorimotor interactions. The simple model consists of a rigid body with massless springs acting as legs, where the legs are controlled using oscillator phases with phase resetting, and the governed equations are reduced such that they can be explained using only the oscillator phases with some approximations. This simplicity leads to analytical solutions for the hexapod gaits via perturbation analysis, despite the complexity of the embodied sensorimotor interactions. This is the first study to provide an analytical model for insect gaits under these interaction conditions. Our results clarified how this specific foot contact sensory feedback contributes to generation of insect-like ipsilateral interlimb coordination during hexapod locomotion. PMID:29489831
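
    The phase-resetting rule at the heart of the feedback described above amounts to overwriting a leg oscillator's phase at the moment of touchdown. The oscillator frequency, reset phase, and contact signal in the sketch below are illustrative assumptions rather than the paper's values.

    ```python
    # Illustrative sketch of phase resetting: decoupled leg oscillators whose
    # phases are overwritten when the corresponding foot touches down.
    import numpy as np

    def update_phases(phases, contact, dt, omega=2 * np.pi, phi_touchdown=np.pi):
        """Advance each leg's oscillator phase; reset legs that have just touched down.

        `contact` is a boolean array supplied by the body/leg dynamics (here an
        external input); omega and phi_touchdown are assumed values.
        """
        phases = (phases + omega * dt) % (2 * np.pi)   # free phase advance
        phases[contact] = phi_touchdown                # phase resetting at touchdown
        return phases

    # Example: six legs, one touchdown event reported by the (hypothetical) sensors.
    phases = np.array([0.1, 1.2, 2.3, 3.4, 4.5, 5.6])
    contact = np.array([False, False, True, False, False, False])
    print(update_phases(phases, contact, dt=1e-3))
    ```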

  19. Tailored motivational message generation: A model and practical framework for real-time physical activity coaching.

    PubMed

    Op den Akker, Harm; Cabrita, Miriam; Op den Akker, Rieks; Jones, Valerie M; Hermens, Hermie J

    2015-06-01

    This paper presents a comprehensive and practical framework for automatic generation of real-time tailored messages in behavior change applications. Basic aspects of motivational messages are time, intention, content and presentation. Tailoring of messages to the individual user may involve all aspects of communication. A linear modular system is presented for generating such messages. It is explained how properties of user and context are taken into account in each of the modules of the system and how they affect the linguistic presentation of the generated messages. The model of motivational messages presented is based on an analysis of existing literature as well as the analysis of a corpus of motivational messages used in previous studies. The model extends existing 'ontology-based' approaches to message generation for real-time coaching systems found in the literature. Practical examples are given on how simple tailoring rules can be implemented throughout the various stages of the framework. Such examples can guide further research by clarifying what it means to use e.g. user targeting to tailor a message. As primary example we look at the issue of promoting daily physical activity. Future work is pointed out in applying the present model and framework, defining efficient ways of evaluating individual tailoring components, and improving effectiveness through the creation of accurate and complete user- and context models. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Transport properties and efficiency of elastically coupled particles in asymmetric periodic potentials

    NASA Astrophysics Data System (ADS)

    Igarashi, Akito; Tsukamoto, Shinji

    2000-02-01

    Biological molecular motors drive unidirectional transport and transduce chemical energy to mechanical work. In order to understand this energy conversion, which is a common feature of molecular motors, many workers have studied various physical models consisting of Brownian particles in spatially periodic potentials. Most of these models are, however, based on "single-particle" dynamics and are too simple as models for biological motors, especially for actin-myosin motors, which cause muscle contraction. In this paper, particles coupled by elastic strings in an asymmetric periodic potential are considered as a model for the motors. We investigate the dynamics of the model and calculate the efficiency of energy conversion using a molecular dynamics method. In particular, we find that the velocity and efficiency of the elastically coupled particles are larger than those of the corresponding single-particle model when the natural length of the springs is incommensurate with the period of the periodic potential.
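
    A simplified Brownian-dynamics sketch of such a system is given below. The potential shape, the unbiased rocking drive used to break detailed balance, and all parameter values are illustrative assumptions (the original work uses its own molecular dynamics setup); only the overall structure, spring-coupled particles in an asymmetric periodic potential with a natural spring length incommensurate with the potential period, follows the description above.

    ```python
    # Illustrative overdamped Langevin sketch (not the authors' exact model):
    # spring-coupled particles in an asymmetric (ratchet-like) periodic potential,
    # driven by an unbiased oscillating force.
    import numpy as np

    N, L = 8, 1.0                  # number of particles, potential period
    a = 0.9 * L                    # natural spring length, incommensurate with L
    k_spring, V0 = 50.0, 1.0
    kT, gamma, dt = 0.1, 1.0, 1e-4
    A_drive, Omega = 3.0, 10.0     # unbiased rocking force (assumed) breaking detailed balance
    n_steps = 100_000
    rng = np.random.default_rng(3)

    def ratchet_force(x):
        """Force from V(x) = V0*[sin(2*pi*x/L) + 0.25*sin(4*pi*x/L)], a common asymmetric choice."""
        return -V0 * (2 * np.pi / L) * (np.cos(2 * np.pi * x / L) + 0.5 * np.cos(4 * np.pi * x / L))

    x0 = np.arange(N) * a
    x = x0.copy()
    for step in range(n_steps):
        stretch = np.diff(x) - a                       # spring extensions along the open chain
        f_spring = np.zeros(N)
        f_spring[:-1] += k_spring * stretch
        f_spring[1:] -= k_spring * stretch
        f_drive = A_drive * np.sin(Omega * step * dt)  # zero-mean external drive
        noise = np.sqrt(2 * kT * gamma / dt) * rng.standard_normal(N)
        x += dt / gamma * (ratchet_force(x) + f_spring + f_drive + noise)

    print("mean drift velocity:", (x - x0).mean() / (n_steps * dt))
    ```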

  1. Data management in the mission data system

    NASA Technical Reports Server (NTRS)

    Wagner, David A.

    2005-01-01

    As spacecraft evolve from simple embedded devices to become more sophisticated computing platforms with complex behaviors, it is increasingly necessary to model and manage the flow of data, and to provide uniform models for managing data that promote adaptability, yet pay heed to the physical limitations of the embedded and space environments.

  2. A Comparative Study of Multi-material Data Structures for Computational Physics Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garimella, Rao Veerabhadra; Robey, Robert W.

    The data structures used to represent the multi-material state of a computational physics application can have a drastic impact on the performance of the application. We look at efficient data structures for sparse applications where there may be many materials, but only one or few in most computational cells. We develop simple performance models for use in selecting possible data structures and programming patterns. We verify the analytic models of performance through a small test program of the representative cases.
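
    One possible compact layout of the kind compared in such studies is sketched below; it is an illustrative CSR-like arrangement (an assumption, not necessarily one of the report's structures) in which only the materials actually present in a cell are stored, with an offset array marking where each cell's entries begin.

    ```python
    # Illustrative sparse cell-material layout: most cells hold a single material,
    # so storing only the materials present avoids a dense cells x materials array.
    import numpy as np

    n_cells = 10
    # Dense "cell x material" volume fractions, mostly one material per cell.
    dense = np.zeros((n_cells, 4))
    dense[:, 0] = 1.0
    dense[3] = [0.0, 0.6, 0.4, 0.0]   # a mixed cell
    dense[7] = [0.2, 0.0, 0.0, 0.8]   # another mixed cell

    # Compact form: per cell, store only the materials actually present.
    offsets = [0]
    mat_ids, fracs = [], []
    for cell in dense:
        present = np.nonzero(cell)[0]
        mat_ids.extend(present.tolist())
        fracs.extend(cell[present].tolist())
        offsets.append(len(mat_ids))

    # Iterate over the materials of cell 3 without touching the dense array.
    for k in range(offsets[3], offsets[4]):
        print("cell 3: material", mat_ids[k], "fraction", fracs[k])
    ```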

  3. Zipper model for the melting of thin films

    NASA Astrophysics Data System (ADS)

    Abdullah, Mikrajuddin; Khairunnisa, Shafira; Akbar, Fathan

    2016-01-01

    We propose an alternative model to Lindemann's criterion for melting that explains the melting of thin films on the basis of a molecular zipper-like mechanism. Using this model, a unique criterion for melting is obtained. We compared the results of the proposed model with experimental data on melting points and heats of fusion for many materials and obtained interesting results. The interesting point reported here is how complex physics problems can sometimes be modeled with simple objects around us that seem to have no correlation with them. This kind of approach is sometimes very important in physics education and should always be taught to undergraduate or graduate students.

  4. Theory of Earth

    NASA Astrophysics Data System (ADS)

    Anderson, D. L.

    2014-12-01

    Earth is an isolated, cooling planet that obeys the 2nd law. Interior dynamics is driven from the top, by cold sinking slabs. High-resolution broad-band seismology and geodesy have confirmed that mantle flow is characterized by narrow downwellings and ~20 broad slowly rising updrafts. The low-velocity zone (LVZ) consists of a hot melange of sheared peridotite intruded with aligned melt-rich lamellae that are tapped by intraplate volcanoes. The high temperature is a simple consequence of the thermal overshoot common in large bodies of convecting fluids. The transition zone consists of ancient eclogite layers that are displaced upwards by slabs to become broad, passive, and cool ridge-feeding updrafts of ambient mantle. The physics that is overlooked in canonical models of mantle dynamics and geochemistry includes: the 2nd law, convective overshoots, subadiabaticity, wave-melt interactions, Archimedes' principle, and kinetics (rapid transitions allow stress-waves to interact with melting and phase changes, creating LVZs; sluggish transitions in cold slabs keep eclogite in the TZ, where it warms up by extracting heat from the mantle below 650 km, creating the appearance of slab penetration). Canonical chemical geodynamic models are the exact opposite of physics- and thermodynamics-based models and of the real Earth. A model that results from inverting the assumptions regarding initial and boundary conditions (hot origin, secular cooling, no external power sources, cooling internal boundaries, broad passive upwellings, adiabaticity and whole-mantle convection not imposed, layering and self-organization allowed) yields a thick refractory-yet-fertile surface layer, with ancient xenoliths and cratons at the top and a hot overshoot at the base, and a thin mobile D" layer that is an unlikely plume generation zone. Accounting for the physics that is overlooked, or violated (2nd law), in canonical models, plus modern seismology, undermines the assumptions and conclusions of these models.

  5. Models to capture the potential for disease transmission in domestic sheep flocks.

    PubMed

    Schley, David; Whittle, Sophie; Taylor, Michael; Kiss, Istvan Zoltan

    2012-09-15

    Successful control of livestock diseases requires an understanding of how they spread amongst animals and between premises. Mathematical models can offer important insight into the dynamics of disease, especially when built upon experimental and/or field data. Here the dynamics of a range of epidemiological models are explored in order to determine which models perform best in capturing real-world heterogeneities at sufficient resolution. Individual-based network models are considered together with one- and two-class compartmental models, for which the final epidemic size is calculated as a function of the probability of disease transmission occurring during a given physical contact between two individuals. For numerical results, the special cases of a viral disease with a fast recovery rate (foot-and-mouth disease) and a bacterial disease with a slow recovery rate (brucellosis) amongst sheep are considered. Quantitative results from observational studies of physical contact amongst domestic sheep are applied and results from differently structured flocks (ewes with newborn lambs, ewes with nearly weaned lambs and ewes only) are compared. These indicate that the breeding cycle leads to significant changes in the expected basic reproduction ratio of diseases. The observed heterogeneity of contacts amongst animals is best captured by full network simulations; simple compartmental models describe the key features of an outbreak but, as expected, often overestimate its speed. Here the weights of contacts are heterogeneous, with many low-weight links. However, due to the well-connected nature of the networks, this has little effect and differences between models remain small. These results indicate that simple compartmental models can be a useful tool for modelling real-world flocks; their applicability will be greater still for more homogeneously mixed livestock, which could be promoted by higher-intensity farming practices. Copyright © 2012 Elsevier B.V. All rights reserved.
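
    For the simplest one-class compartmental case mentioned above, the final epidemic size follows directly from the per-contact transmission probability through the standard SIR final-size relation; the parameter values below are illustrative, not those of the study.

    ```python
    # Minimal one-class compartmental sketch (illustrative parameters): final
    # epidemic size in a homogeneously mixed flock as a function of the
    # per-contact transmission probability.
    import numpy as np

    def final_size_fraction(p_transmit, contacts_per_day, recovery_rate, tol=1e-10):
        """Solve the standard SIR final-size relation z = 1 - exp(-R0 * z)."""
        R0 = p_transmit * contacts_per_day / recovery_rate
        if R0 <= 1.0:
            return 0.0
        z = 0.5
        for _ in range(10_000):
            z_new = 1.0 - np.exp(-R0 * z)
            if abs(z_new - z) < tol:
                break
            z = z_new
        return z_new

    # e.g. a fast-recovery (FMD-like) versus a slow-recovery (brucellosis-like) disease
    print(final_size_fraction(p_transmit=0.05, contacts_per_day=10, recovery_rate=1 / 7))
    print(final_size_fraction(p_transmit=0.05, contacts_per_day=10, recovery_rate=1 / 100))
    ```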

  6. Dataflow models for fault-tolerant control systems

    NASA Technical Reports Server (NTRS)

    Papadopoulos, G. M.

    1984-01-01

    Dataflow concepts are used to generate a unified hardware/software model of redundant physical systems which are prone to faults. Basic results in input congruence and synchronization are shown to reduce to a simple model of data exchanges between processing sites. Procedures are given for the construction of congruence schemata, the distinguishing features of any correctly designed redundant system.

  7. Statistical Mechanics of the US Supreme Court

    NASA Astrophysics Data System (ADS)

    Lee, Edward D.; Broedersz, Chase P.; Bialek, William

    2015-07-01

    We build simple models for the distribution of voting patterns in a group, using the Supreme Court of the United States as an example. The maximum entropy model consistent with the observed pairwise correlations among justices' votes, an Ising spin glass, agrees quantitatively with the data. While all correlations (perhaps surprisingly) are positive, the effective pairwise interactions in the spin glass model have both signs, recovering the intuition that ideologically opposite justices negatively influence one another. Despite the competing interactions, a strong tendency toward unanimity emerges from the model, organizing the voting patterns in a relatively simple "energy landscape." Besides unanimity, other energy minima in this landscape, or maxima in probability, correspond to prototypical voting states, such as the ideological split or a tightly correlated, conservative core. The model correctly predicts the correlation of justices with the majority and gives us a measure of their influence on the majority decision. These results suggest that simple models, grounded in statistical physics, can capture essential features of collective decision making quantitatively, even in a complex political context.
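
    The pairwise maximum-entropy (Ising) form referred to above can be written down and evaluated exactly for nine voters, since only 2^9 = 512 voting patterns exist. The fields and couplings in the sketch below are random placeholders rather than the parameters fitted to Court data.

    ```python
    # Sketch of a pairwise maximum-entropy (Ising) model over nine +/-1 votes,
    # with random illustrative fields and couplings.
    import itertools
    import numpy as np

    n_justices = 9
    rng = np.random.default_rng(4)
    h = rng.normal(0, 0.1, n_justices)                 # individual voting biases
    J = np.triu(rng.normal(0, 0.2, (n_justices, n_justices)), 1)  # pairwise couplings (i < j)

    def energy(s):
        """E(s) = -sum_i h_i s_i - sum_{i<j} J_ij s_i s_j, votes s_i = +/-1."""
        return -h @ s - s @ J @ s

    states = np.array(list(itertools.product([-1, 1], repeat=n_justices)))
    E = np.array([energy(s) for s in states])
    P = np.exp(-E)
    P /= P.sum()

    # Probability of a unanimous decision under this toy parameter set.
    unanimous = np.abs(states.sum(axis=1)) == n_justices
    print("P(unanimous) =", P[unanimous].sum())
    ```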

  8. Some Key Issues in Creating Inquiry-Based Instructional Practices that Aim at the Understanding of Simple Electric Circuits

    ERIC Educational Resources Information Center

    Kock, Zeger-Jan; Taconis, Ruurd; Bolhuis, Sanneke; Gravemeijer, Koeno

    2013-01-01

    Many students in secondary schools consider the sciences difficult and unattractive. This applies to physics in particular, a subject in which students attempt to learn and understand numerous theoretical concepts, often without much success. A case in point is the understanding of the concepts current, voltage and resistance in simple electric…

  9. "Dissection" of a Hair Dryer

    ERIC Educational Resources Information Center

    Eisenstein, Stan; Simpson, Jeff

    2008-01-01

    The electrical design of the common hair dryer is based almost entirely on relatively simple principles learned in introductory physics classes. Just as biology students dissect a frog to see the principles of anatomy in action, physics students can "dissect" a hair dryer to see how principles of electricity are used in a real system. They can…

  10. A Materials Index--Its Storage, Retrieval, and Display

    ERIC Educational Resources Information Center

    Rosen, Carol Z.

    1973-01-01

    An experimental procedure for indexing physical materials based on simple syntactical rules was tested by encoding the materials in the journal Applied Physics Letters to produce a materials index. The syntax and numerous examples, together with an indication of the method by which retrieval can be effected, are presented. (5 references)…

  11. Wind tunnel simulation of air pollution dispersion in a street canyon.

    PubMed

    Civis, Svatopluk; Strizík, Michal; Janour, Zbynek; Holpuch, Jan; Zelinger, Zdenek

    2002-01-01

    Physical simulation was used to study pollution dispersion in a street canyon. The street canyon model was designed to study the effect of measuring flow and concentration fields. A method of CO2-laser photoacoustic spectrometry was applied for detection of trace concentrations of gas pollution. The advantage of this method is its high sensitivity and broad dynamic range, permitting monitoring of concentrations from trace to saturation values. Application of this method enabled us to propose a simple model based on a line permeation pollutant source, developed on the principle of concentration standards, to ensure high precision and homogeneity of the concentration flow. Spatial measurement of the concentration distribution inside the street canyon was performed on the model with a reference velocity of 1.5 m/s.

  12. Greenhouse effect: temperature of a metal sphere surrounded by a glass shell and heated by sunlight

    NASA Astrophysics Data System (ADS)

    Nguyen, Phuc H.; Matzner, Richard A.

    2012-01-01

    We study the greenhouse effect on a model satellite consisting of a tungsten sphere surrounded by a thin spherical, concentric glass shell, with a small gap between the sphere and the shell. The system sits in vacuum and is heated by sunlight incident along the z-axis. This development is a generalization of the simple treatment of the greenhouse effect given by Kittel and Kroemer (1980 Thermal Physics (San Francisco: Freeman)) and can serve as a very simple model demonstrating the much more complex Earth greenhouse effect. Solution of the model problem provides an excellent pedagogical tool at the Junior/Senior undergraduate level.
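
    The essence of the pedagogical result can be reproduced with a highly idealized energy balance: if the shell transmits sunlight but absorbs and re-emits all thermal radiation, and its radius is close to that of the sphere, the sphere's equilibrium temperature rises by a factor of 2^(1/4). The sketch below makes these idealizations explicit (black sphere, unit emissivities) and is not the paper's full solution.

    ```python
    # Idealized energy-balance sketch: equilibrium temperature of a sunlit sphere
    # with and without a single IR-opaque, solar-transparent shell.
    import numpy as np

    sigma = 5.670e-8        # Stefan-Boltzmann constant [W m^-2 K^-4]
    S = 1361.0              # solar constant at 1 AU [W m^-2]
    absorptivity = 1.0      # idealized black sphere (assumption)

    # Absorbed solar power per unit sphere surface area (pi r^2 / 4 pi r^2 = 1/4).
    q_abs = absorptivity * S / 4.0

    T_bare = (q_abs / sigma) ** 0.25        # no shell
    T_shell = 2 ** 0.25 * T_bare            # ideal single IR-opaque shell, radius ~ sphere radius
    print(f"bare sphere: {T_bare:.0f} K, with shell: {T_shell:.0f} K")
    ```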

  13. CADDIS Volume 2. Sources, Stressors and Responses: Physical Habitat - Simple Conceptual Diagram

    EPA Pesticide Factsheets

    Introduction to the Physical Habitat module, when to list Physical Habitat as a candidate cause, ways to measure Physical Habitat, simple and detailed conceptual diagrams for Physical Habitat, Physical Habitat module references and literature reviews.

  14. The Scallop's Eye--A Concave Mirror in the Context of Biology

    ERIC Educational Resources Information Center

    Colicchia, Giuseppe; Waltner, Christine; Hopf, Martin; Wiesner, Hartmut

    2009-01-01

    Teaching physics in the context of medicine or biology is a way to generate students' interest in physics. A more uncommon type of eye, the scallop's eye (an eye with a spherical concave mirror, which is similar to a Newtonian or Schmidt telescope) and the image-forming mechanism in this eye are described. Also, a simple eye model, which can…

  15. The Motion of a Leaking Oscillator: A Study for the Physics Class

    ERIC Educational Resources Information Center

    Rodrigues, Hilário; Panza, Nelson; Portes, Dirceu; Soares, Alexandre

    2014-01-01

    This paper is essentially about the general form of Newton's second law for variable mass problems. We develop a model for describing the motion of the one-dimensional oscillator with a variable mass within the framework of classroom physics. We present a simple numerical procedure for the solution of the equation of motion of the system to…

  16. A Simple Introduction to Physical Health Impairments: A Series for Caregivers of Infants and Toddlers. Model of Interdisciplinary Training for Children with Handicaps (MITCH).

    ERIC Educational Resources Information Center

    Monroe County School District, Key West, FL.

    Intended for use in Florida training programs for caregivers of infants and toddlers with disabilities, this booklet describes some of the more common physical and health impairments that can affect young children. For each disability, the description generally stresses typical characteristics and special requirements. Addresses and telephone…

  17. A Physics Heptathlon: Simple Models of Seven Sporting Events

    ERIC Educational Resources Information Center

    Spathopoulos, Vassilios McInnes

    2010-01-01

    Anything that can capture the interest of students can be used to enhance the teaching of physics, and sport is practised, watched and followed fanatically by almost every young person. At the same time, in recent years, a wealth of research data has become available from the field of sports science. The purpose of this article is to draw from…

  18. An upper limb robot model of children limb for cerebral palsy neurorehabilitation.

    PubMed

    Pathak, Yagna; Johnson, Michelle

    2012-01-01

    Robot therapy has emerged in the last few decades as a tool to help patients with neurological injuries relearn motor tasks and improve their quality of life. The main goal of this study was to develop a simple model of the human arm for children affected with cerebral palsy (CP). The Simulink based model presented here shows a comparison for children with and without disabilities (ages 6-15) with normal and reduced range of motion in the upper limb. The model incorporates kinematic and dynamic considerations required for activities of daily living. The simulation was conducted using Matlab/Simulink and will eventually be integrated with a robotic counterpart to develop a physical robot that will provide assistance in activities of daily life (ADLs) to children with CP while also aiming to improve motor recovery.

  19. A new simple asymmetric hysteresis operator and its application to inverse control of piezoelectric actuators.

    PubMed

    Badel, A; Qiu, J; Nakano, T

    2008-05-01

    Piezoelectric actuators (PEAs) are commonly used as micropositioning devices due to their high resolution, high stiffness, and fast frequency response. Because piezoceramic materials are ferroelectric, they fundamentally exhibit hysteresis behavior in their response to an applied electric field. The positioning precision can be significantly reduced due to nonlinear hysteresis effects when PEAs are used in relatively long range applications. This paper describes a new, precise, and simple asymmetric hysteresis operator dedicated to PEAs. The complex hysteretic transfer characteristic has been considered in a purely phenomenological way, without taking into account the underlying physics. This operator is based on two curves. The first curve corresponds to the main ascending branch and is modeled by the function f1. The second curve corresponds to the main reversal branch and is modeled by the function g2. The functions f1 and g2 are two very simple hyperbola functions with only three parameters. Particular ascending and reversal branches are deduced from appropriate translations of f1 and g2. The efficiency and precision of the proposed approach are demonstrated, in practice, by a real-time inverse feed-forward controller for piezoelectric actuators. Advantages and drawbacks of the proposed approach compared with classical hysteresis operators are discussed.

  20. Is realistic neuronal modeling realistic?

    PubMed Central

    Almog, Mara

    2016-01-01

    Scientific models are abstractions that aim to explain natural phenomena. A successful model shows how a complex phenomenon arises from relatively simple principles while preserving major physical or biological rules and predicting novel experiments. A model should not be a facsimile of reality; it is an aid for understanding it. Contrary to this basic premise, with the 21st century has come a surge in computational efforts to model biological processes in great detail. Here we discuss the oxymoronic, realistic modeling of single neurons. This rapidly advancing field is driven by the discovery that some neurons don't merely sum their inputs and fire if the sum exceeds some threshold. Thus researchers have asked what the computational abilities of single neurons are, and have attempted to give answers using realistic models. We briefly review the state of the art of compartmental modeling, highlighting recent progress and intrinsic flaws. We then attempt to address two fundamental questions. Practically, can we realistically model single neurons? Philosophically, should we realistically model single neurons? We use layer 5 neocortical pyramidal neurons as a test case to examine these issues. We subject three publicly available models of layer 5 pyramidal neurons to three simple computational challenges. Based on their performance and a partial survey of published models, we conclude that current compartmental models are ad hoc, unrealistic models functioning poorly once they are stretched beyond the specific problems for which they were designed. We then attempt to plot possible paths for generating realistic single neuron models. PMID:27535372

  1. The computation of lipophilicities of ⁶⁴Cu PET systems based on a novel approach for fluctuating charges.

    PubMed

    Comba, Peter; Martin, Bodo; Sanyal, Avik; Stephan, Holger

    2013-08-21

    A QSPR scheme for the computation of lipophilicities of ⁶⁴Cu complexes was developed with a training set of 24 tetraazamacrocyclic and bispidine-based Cu(II) compounds and their experimentally available 1-octanol-water distribution coefficients. A minimum number of physically meaningful parameters were used in the scheme, and these are primarily based on data available from molecular mechanics calculations, using an established force field for Cu(II) complexes and a recently developed scheme for the calculation of fluctuating atomic charges. The developed model was also applied to an independent validation set and was found to accurately predict distribution coefficients of potential ⁶⁴Cu PET (positron emission tomography) systems. A possible next step would be the development of a QSAR-based biodistribution model to track the uptake of imaging agents in different organs and tissues of the body. It is expected that such simple, empirical models of lipophilicity and biodistribution will be very useful in the design and virtual screening of positron emission tomography (PET) imaging agents.

  2. The Influence of AN Interacting Vacuum Energy on the Gravitational Collapse of a Star Fluid

    NASA Astrophysics Data System (ADS)

    Campos, M.

    2014-02-01

    To explain the accelerated expansion of the universe, models with interacting dark components have been considered in the literature. Generally, the dark energy component is physically interpreted as the vacuum energy. However, on the other side of the same coin, the influence of the vacuum energy on gravitational collapse is a topic of scientific interest. Based on a simple assumption about the collapse rate of the matter fluid density, which is altered by the inclusion of a vacuum energy component that interacts with the matter fluid, we study the final fate of the collapse process.

  3. Vertical cultural transmission effects on demic front propagation: Theory and application to the Neolithic transition in Europe

    NASA Astrophysics Data System (ADS)

    Fort, Joaquim

    2011-05-01

    It is shown that Lotka-Volterra interaction terms are not appropriate to describe vertical cultural transmission. Appropriate interaction terms are derived and used to compute the effect of vertical cultural transmission on demic front propagation. They are also applied to a specific example, the Neolithic transition in Europe. In this example, it is found that the effect of vertical cultural transmission can be important (about 30%). On the other hand, simple models based on differential equations can lead to large errors (above 50%). Further physical, biophysical, and cross-disciplinary applications are outlined.
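
    For reference, the classical Fisher-KPP description that such differential-equation front models commonly take as a baseline (before the vertical-transmission corrections discussed above) is

    ```latex
    \frac{\partial p}{\partial t} = D\,\nabla^{2} p + a\,p\,(1-p),
    \qquad
    v_{\mathrm{front}} = 2\sqrt{aD},
    ```

    where p is the population density, D the diffusivity and a the initial growth rate; the interaction terms derived in the paper modify this baseline front speed.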

  4. Nosehouse: heat-conserving ventilators based on nasal counterflow exchangers.

    PubMed

    Vogel, Steven

    2009-12-01

    Small birds and mammals commonly minimize respiratory heat loss with reciprocating counterflow exchangers in their nasal passageways. These animals extract heat from the air in an exhalation to warm those passageways and then use that heat to warm the subsequent inhalation. Although the near-constant volume of buildings precludes direct application of the device, a pair of such exchangers located remotely from each other circumvents that problem. A very simple and crudely constructed small-scale physical model of the device worked well enough as a heat conserver to suggest utility as a ventilator for buildings.

  5. Oxide films state analysis by IR spectroscopy based on the simple oscillator approximation

    NASA Astrophysics Data System (ADS)

    Volkov, N. V.; Yakutkina, T. V.; Karpova, V. V.

    2017-05-01

    Stabilization of the structure-phase state over a wide temperature range is one of the most important problems in improving the properties of oxide compounds. As such, the search for new effective methods for obtaining metal oxides with the desired physico-chemical, electro-physical and thermal properties, and for controlling them, is important and relevant. The aim of this work is to identify features of the state of oxide films of several metals (Be, Al, Fe, Cu, Zr) on the surface of polycrystalline metal samples by infrared spectroscopy. To identify the resonance emission bands, an algorithm for IR-spectrum processing was developed and implemented in the EXCEL-2010 spreadsheet, which allows characteristic resonance bands to be revealed and inorganic chemical compounds to be identified. Within the framework of a simple oscillator model, the resonance frequencies of the normal vibrations of water and of several inorganic compounds (the metal oxides of Be, Al, Fe, Cu and Zr) were calculated, and characteristic frequencies for different states (aggregate, deformation, phase) were specified. By means of IR spectroscopy, the fundamental possibility of revealing the state features of oxide films on a metal substrate is demonstrated, which supports the development and optimization of technologies for producing oxide films with desired properties.
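
    In the simple oscillator approximation invoked above, a normal-mode wavenumber follows from a force constant and a reduced mass alone. The snippet below illustrates the arithmetic with an assumed force constant; the paper's actual parameters and its EXCEL-based band identification are not reproduced here.

    ```python
    # Simple-oscillator estimate of a metal-oxide stretching wavenumber
    # (illustrative force constant, not the parameters used in the paper).
    import numpy as np

    c = 2.998e10           # speed of light [cm/s]
    amu = 1.6605e-27       # atomic mass unit [kg]

    def wavenumber(k_force, m1_amu, m2_amu):
        """Harmonic-oscillator wavenumber (cm^-1): nu = (1 / 2 pi c) * sqrt(k / mu)."""
        mu = (m1_amu * m2_amu) / (m1_amu + m2_amu) * amu
        return np.sqrt(k_force / mu) / (2 * np.pi * c)

    # e.g. an Al-O-like pair with an assumed force constant of ~400 N/m
    print(f"{wavenumber(400.0, 27.0, 16.0):.0f} cm^-1")
    ```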

  6. Understanding the Physical Optics Phenomena by Using a Digital Application for Light Propagation

    NASA Astrophysics Data System (ADS)

    Sierra-Sosa, Daniel-Esteban; Ángel-Toro, Luciano

    2011-01-01

    Understanding light propagation on the basis of the Huygens-Fresnel principle is a fundamental factor for deeper comprehension of different physical optics phenomena such as diffraction, self-imaging, image formation, Fourier analysis and spatial filtering. This constitutes the physical approach of Fourier optics, whose principles and applications have been developed since the 1950s. Both for analytical and digital application purposes, light propagation can be formulated in terms of the Fresnel integral transform. In this work, a digital optics application based on the implementation of the Discrete Fresnel Transform (DFT), addressed to serve as a tool for applications in the didactics of optics, is presented. This tool allows, at a basic and intermediate learning level, exercising with the identification of basic phenomena and observing changes associated with modifications of physical parameters. This is achieved by using a friendly graphic user interface (GUI). It also assists users in developing their capacity for abstracting and predicting the characteristics of more complicated phenomena. At an upper level of learning, the application can be used to foster a deeper comprehension of the underlying physics and models, and to experiment with new models and configurations. To achieve this, two characteristics of the didactic tool were taken into account when designing it. First, all physical operations, ranging from simple diffraction experiments to digital holography and interferometry, were developed on the basis of the more fundamental concept of light propagation. Second, the algorithm was conceived to be easily upgradable due to its modular architecture based on the MATLAB® software environment. Typical results are presented and briefly discussed in connection with the didactics of optics.
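
    A minimal FFT-based Fresnel propagation routine of the kind such a didactic tool is built around is sketched below. This is one common transfer-function discretization, written in Python for illustration; the application described above is implemented in MATLAB and may use a different DFT formulation.

    ```python
    # Minimal FFT-based Fresnel propagation sketch (one common discretization).
    import numpy as np

    def fresnel_propagate(field, wavelength, dx, z):
        """Propagate a sampled 2-D complex field a distance z using the Fresnel
        transfer function H(fx, fy) = exp(i k z) * exp(-i pi lambda z (fx^2 + fy^2))."""
        n = field.shape[0]
        fx = np.fft.fftfreq(n, d=dx)
        FX, FY = np.meshgrid(fx, fx)
        k = 2 * np.pi / wavelength
        H = np.exp(1j * k * z) * np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
        return np.fft.ifft2(np.fft.fft2(field) * H)

    # Diffraction of a square aperture illuminated by a plane wave (assumed values).
    n, dx = 512, 10e-6                      # samples, pixel pitch [m]
    aperture = np.zeros((n, n), dtype=complex)
    aperture[n//2 - 25:n//2 + 25, n//2 - 25:n//2 + 25] = 1.0
    intensity = np.abs(fresnel_propagate(aperture, 633e-9, dx, z=0.05)) ** 2
    ```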

  7. Impact of different satellite soil moisture products on the predictions of a continuous distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Laiolo, P.; Gabellani, S.; Campo, L.; Silvestro, F.; Delogu, F.; Rudari, R.; Pulvirenti, L.; Boni, G.; Fascetti, F.; Pierdicca, N.; Crapolicchio, R.; Hasenauer, S.; Puca, S.

    2016-06-01

    The reliable estimation of hydrological variables in space and time is of fundamental importance in operational hydrology to improve flood predictions and the description of the hydrological cycle. Nowadays, remotely sensed data offer a chance to improve hydrological models, especially in environments with scarce ground-based data. The aim of this work is to update the state variables of a physically based, distributed and continuous hydrological model using four different satellite-derived datasets (three soil moisture products and a land surface temperature measurement) and one soil moisture analysis, in order to evaluate, even with a non-optimal technique, the impact on the hydrological cycle. The experiments were carried out for a small catchment in the northern part of Italy, for the period July 2012-June 2013. The products were pre-processed according to their own characteristics and then assimilated into the model using a simple nudging technique. The benefits for the model's discharge predictions were tested against observations. The analysis showed a general improvement of the model discharge predictions, even with a simple assimilation technique, for all the assimilation experiments; the Nash-Sutcliffe model efficiency coefficient increased from 0.6 (for the model without assimilation) to 0.7, and errors in discharge were reduced by up to 10%. An added value was found in the rainfall season (autumn): all the assimilation experiments reduced the errors by up to 20%. This demonstrates that the discharge prediction of a distributed hydrological model, working at fine resolution in a small basin, can be improved by the assimilation of coarse-scale satellite-derived data.
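
    The nudging step can be written schematically as x_new = x + G (y_obs - x). The gain and the toy soil-moisture values below are assumptions, since the study's observation operator and weights are not reproduced here.

    ```python
    # Schematic nudging update: relax the model state toward the satellite
    # estimate with a fixed gain (illustrative values).
    import numpy as np

    def nudge(state, observation, gain=0.2):
        """x_new = x + G * (y_obs - x); the gain value is an assumption."""
        innovation = observation - state
        return state + gain * innovation

    model_sm = np.array([0.18, 0.22, 0.30])     # modelled soil moisture per cell
    sat_sm = np.array([0.25, 0.20, 0.27])       # satellite-derived estimate
    print(nudge(model_sm, sat_sm))
    ```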

  8. Inclusion of Linearized Moist Physics in Nasa's Goddard Earth Observing System Data Assimilation Tools

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel; Errico, Ronald; Gelaro, Ronaldo; Kim, Jong G.

    2013-01-01

    Inclusion of moist physics in the linearized version of a weather forecast model is beneficial in terms of variational data assimilation. Further, it improves the capability of important tools, such as adjoint-based observation impacts and sensitivity studies. A linearized version of the relaxed Arakawa-Schubert (RAS) convection scheme has been developed and tested in NASA's Goddard Earth Observing System data assimilation tools. A previous study of the RAS scheme showed it to exhibit reasonable linearity and stability. This motivates the development of a linearization of a near-exact version of the RAS scheme. Linearized large-scale condensation is included through simple conversion of supersaturation into precipitation. The linearization of moist physics is validated against the full nonlinear model for 6- and 24-h intervals, relevant to variational data assimilation and observation impacts, respectively. For a small number of profiles, sudden large growth in the perturbation trajectory is encountered. Efficient filtering of these profiles is achieved by diagnosis of steep gradients in a reduced version of the operator of the tangent linear model. With filtering turned on, the inclusion of linearized moist physics increases the correlation between the nonlinear perturbation trajectory and the linear approximation of the perturbation trajectory. A month-long observation impact experiment is performed and the effect of including moist physics on the impacts is discussed. Impacts from moist-sensitive instruments and channels are increased. The effect of including moist physics is examined for adjoint sensitivity studies. A case study examining an intensifying Northern Hemisphere Atlantic storm is presented. The results show a significant sensitivity with respect to moisture.

  9. Quasi-steady aerodynamic model of clap-and-fling flapping MAV and validation using free-flight data.

    PubMed

    Armanini, S F; Caetano, J V; Croon, G C H E de; Visser, C C de; Mulder, M

    2016-06-30

    Flapping-wing aerodynamic models that are accurate, computationally efficient and physically meaningful, are challenging to obtain. Such models are essential to design flapping-wing micro air vehicles and to develop advanced controllers enhancing the autonomy of such vehicles. In this work, a phenomenological model is developed for the time-resolved aerodynamic forces on clap-and-fling ornithopters. The model is based on quasi-steady theory and accounts for inertial, circulatory, added mass and viscous forces. It extends existing quasi-steady approaches by: including a fling circulation factor to account for unsteady wing-wing interaction, considering real platform-specific wing kinematics and different flight regimes. The model parameters are estimated from wind tunnel measurements conducted on a real test platform. Comparison to wind tunnel data shows that the model predicts the lift forces on the test platform accurately, and accounts for wing-wing interaction effectively. Additionally, validation tests with real free-flight data show that lift forces can be predicted with considerable accuracy in different flight regimes. The complete parameter-varying model represents a wide range of flight conditions, is computationally simple, physically meaningful and requires few measurements. It is therefore potentially useful for both control design and preliminary conceptual studies for developing new platforms.

  10. Algorithms and Array Design Criteria for Robust Imaging in Interferometry

    DTIC Science & Technology

    2016-04-01

    Interferometry 1.1 Chapter Overview In this Section, we introduce the physics-based principles of optical interferometry, thereby providing a foundation for... particular physical structure (i.e. the existence of a certain type of loop in the interferometric graph), and provide a simple algorithm for identifying... mathematical conditions for wrap invariance to a physical condition on aperture placement is more intuitive when considering the raw phase measurements as

  11. Algorithms and Array Design Criteria for Robust Imaging in Interferometry

    DTIC Science & Technology

    2016-04-01

    Chapter 1 Fundamentals of Optical Interferometry 1.1 Chapter Overview In this chapter, we introduce the physics-based principles of optical... particular physical structure (i.e. the existence of a certain type of loop in the interferometric graph), and provide a simple algorithm for... physical condition on aperture placement is more intuitive when considering the raw phase measurements as opposed to their closures. For this reason

  12. Biophysical and structural considerations for protein sequence evolution

    PubMed Central

    2011-01-01

    Background Protein sequence evolution is constrained by the biophysics of folding and function, causing interdependence between interacting sites in the sequence. However, current site-independent models of sequence evolution do not take this into account. Recent attempts to integrate the influence of structure and biophysics into phylogenetic models via statistical/informational approaches have not resulted in the expected improvements in model performance. This suggests that further innovations are needed for progress in this field. Results Here we develop a coarse-grained physics-based model of protein folding and binding function, and compare it to a popular informational model. We find that both models violate the assumption of the native sequence being close to a thermodynamic optimum, causing directional selection away from the native state. Sampling and simulation show that the physics-based model is more specific for fold-defining interactions that vary less among residue types. The informational model diffuses further in sequence space with fewer barriers and tends to provide less support for an invariant sites model, although amino acid substitutions are generally conservative. Both approaches produce sequences with natural features like dN/dS < 1 and gamma-distributed rates across sites. Conclusions Simple coarse-grained models of protein folding can describe some natural features of evolving proteins but are currently not accurate enough to use in evolutionary inference. This is partly due to improper packing of the hydrophobic core. We suggest possible improvements on the representation of structure, folding energy, and binding function, as regards both native and non-native conformations, and describe a large number of possible applications for such a model. PMID:22171550

  13. Heat transfer from nanoparticles: A corresponding state analysis

    PubMed Central

    Merabia, Samy; Shenogin, Sergei; Joly, Laurent; Keblinski, Pawel; Barrat, Jean-Louis

    2009-01-01

    In this contribution, we study situations in which nanoparticles in a fluid are strongly heated, generating high heat fluxes. This situation is relevant to experiments in which a fluid is locally heated by using selective absorption of radiation by solid particles. We first study this situation for different types of molecular interactions, using models for gold particles suspended in octane and in water. As already reported in experiments, very high heat fluxes and temperature elevations (leading eventually to particle destruction) can be observed in such situations. We show that a very simple modeling based on Lennard–Jones (LJ) interactions captures the essential features of such experiments and that the results for various liquids can be mapped onto the LJ case, provided a physically justified (corresponding state) choice of parameters is made. Physically, the possibility of sustaining very high heat fluxes is related to the strong curvature of the interface that inhibits the formation of an insulating vapor film. PMID:19571000

  14. Migration of cells in a social context

    PubMed Central

    Vedel, Søren; Tay, Savaş; Johnston, Darius M.; Bruus, Henrik; Quake, Stephen R.

    2013-01-01

    In multicellular organisms and complex ecosystems, cells migrate in a social context. Whereas this is essential for the basic processes of life, the influence of neighboring cells on the individual remains poorly understood. Previous work on isolated cells has observed a stereotypical migratory behavior characterized by short-time directional persistence with long-time random movement. We discovered a much richer dynamic in the social context, with significant variations in directionality, displacement, and speed, which are all modulated by local cell density. We developed a mathematical model based on the experimentally identified “cellular traffic rules” and basic physics that revealed that these emergent behaviors are caused by the interplay of single-cell properties and intercellular interactions, the latter being dominated by a pseudopod formation bias mediated by secreted chemicals and pseudopod collapse following collisions. The model demonstrates how aspects of complex biology can be explained by simple rules of physics and constitutes a rapid test bed for future studies of collective migration of individual cells. PMID:23251032

  15. Migration of cells in a social context.

    PubMed

    Vedel, Søren; Tay, Savaş; Johnston, Darius M; Bruus, Henrik; Quake, Stephen R

    2013-01-02

    In multicellular organisms and complex ecosystems, cells migrate in a social context. Whereas this is essential for the basic processes of life, the influence of neighboring cells on the individual remains poorly understood. Previous work on isolated cells has observed a stereotypical migratory behavior characterized by short-time directional persistence with long-time random movement. We discovered a much richer dynamic in the social context, with significant variations in directionality, displacement, and speed, which are all modulated by local cell density. We developed a mathematical model based on the experimentally identified "cellular traffic rules" and basic physics that revealed that these emergent behaviors are caused by the interplay of single-cell properties and intercellular interactions, the latter being dominated by a pseudopod formation bias mediated by secreted chemicals and pseudopod collapse following collisions. The model demonstrates how aspects of complex biology can be explained by simple rules of physics and constitutes a rapid test bed for future studies of collective migration of individual cells.

  16. On the origin of the water vapor continuum absorption within rotational and fundamental vibrational bands

    NASA Astrophysics Data System (ADS)

    Serov, E. A.; Odintsova, T. A.; Tretyakov, M. Yu.; Semenov, V. E.

    2017-05-01

    Analysis of the continuum absorption in water vapor at room temperature within the purely rotational and fundamental ro-vibrational bands shows that a significant part (up to a half) of the observed absorption cannot be explained within the framework of the existing concepts of the continuum. Neither of the two most prominent mechanisms of continuum origin, namely the far wings of monomer lines and the dimers, can reproduce the currently available experimental data adequately. We propose a new approach to developing a physically based model of the continuum. It is demonstrated that water dimers and wings of monomer lines may contribute equally to the continuum within the bands, and their contributions should be taken into account in the continuum model. We propose a physical mechanism that provides the missing justification for the super-Lorentzian behavior of the intermediate line wing. A qualitative validation of the proposed approach is given on the basis of a simple empirical model. The obtained results point directly to the necessity of reconsidering the existing line wing theory and can guide this reconsideration.

  17. Validation of optical codes based on 3D nanostructures

    NASA Astrophysics Data System (ADS)

    Carnicer, Artur; Javidi, Bahram

    2017-05-01

    Image information encoding using random phase masks produces speckle-like noise distributions when the sample is propagated in the Fresnel domain. As a result, the information cannot be accessed by simple visual inspection. Phase masks can be easily implemented in practice by attaching cello-tape to the plain-text message. Conventional 2D phase masks can be generalized to 3D by combining glass and diffusers, resulting in a more complex physical unclonable function. In this communication, we model the behavior of a 3D phase mask using a simple approach: light is propagated through the glass using the angular spectrum of plane waves, whereas the diffuser is described as a random phase mask plus a blurring effect on the amplitude of the propagated wave. Using different designs for the 3D phase mask and multiple samples, we demonstrate that classification is possible using the k-nearest neighbors and random forest machine learning algorithms.

  18. Dark energy, antimatter gravity and geometry of the Universe

    NASA Astrophysics Data System (ADS)

    Hajdukovic, Dragan Slavkov

    2010-11-01

    This article is based on two hypotheses. The first is the existence of gravitational repulsion between particles and antiparticles; consequently, virtual particle-antiparticle pairs in the quantum vacuum might be considered as gravitational dipoles. The second hypothesis is that the Universe has the geometry of a four-dimensional hyper-spherical shell with thickness equal to the Compton wavelength of a pion, which is a simple generalization of the usual geometry of a 3-hypersphere. It is striking that these two hypotheses lead to a simple relation for the gravitational mass density of the vacuum, which is in very good agreement with the observed dark energy density. This might be a sign that QCD fields provide the largest contribution to the gravitational mass of the physical vacuum, contrary to the prediction of the Standard Model that the QCD contribution is much smaller than some other contributions.

  19. Using Performance Assessment Model in Physics Laboratory to Increase Students’ Critical Thinking Disposition

    NASA Astrophysics Data System (ADS)

    Emiliannur, E.; Hamidah, I.; Zainul, A.; Wulan, A. R.

    2017-09-01

    A Performance Assessment Model (PAM) has been developed to represent physics concepts that can be divided into five experiments: 1) acceleration due to gravity; 2) Hooke's law; 3) simple harmonic motion; 4) work-energy concepts; and 5) the law of momentum conservation. The aim of this study was to determine the contribution of PAM in the physics laboratory to increasing students' Critical Thinking Disposition (CTD) at senior high school. The subjects were 32 11th-grade students of a senior high school in Lubuk Sikaping, West Sumatera. The research used a one-group pretest-posttest design. Data were collected through an essay test and a questionnaire about CTD and analyzed quantitatively using N-gain values. The study concluded that the performance assessment model effectively increases the N-gain, in the medium category, meaning that students' critical thinking disposition increased significantly after implementation of the performance assessment model in the physics laboratory.

  20. Microarray-based cancer prediction using soft computing approach.

    PubMed

    Wang, Xiaosheng; Gotoh, Osamu

    2009-05-26

    One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes to construct accurate prediction models from thousands or tens of thousands of genes. We screen highly discriminative genes and gene pairs to create simple prediction models based on single genes or gene pairs using a soft computing approach and rough set theory. Accurate cancer prediction is obtained when we apply the simple prediction models to four cancer gene expression datasets: CNS tumor, colon tumor, lung cancer and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, our models are simple, effective and robust. Meanwhile, our models are interpretable because they are based on decision rules. Our results demonstrate that very simple models may perform well on cancer molecular prediction and that important gene markers of cancer can be detected if the gene selection approach is chosen reasonably.

  1. Comparing and combining process-based crop models and statistical models with some implications for climate change

    NASA Astrophysics Data System (ADS)

    Roberts, Michael J.; Braun, Noah O.; Sinclair, Thomas R.; Lobell, David B.; Schlenker, Wolfram

    2017-09-01

    We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.

  2. Learning in Structured Connectionist Networks

    DTIC Science & Technology

    1988-04-01

    The structure is too rigid and learning too difficult for cognitive modeling. Two algorithms for learning simple, feature-based concept descriptions were also implemented. Recent progress in connectionist research has been encouraging; networks have successfully modeled human performance for various cognitive ...

  3. Database and new models based on a group contribution method to predict the refractive index of ionic liquids.

    PubMed

    Wang, Xinxin; Lu, Xingmei; Zhou, Qing; Zhao, Yongsheng; Li, Xiaoqian; Zhang, Suojiang

    2017-08-02

    Refractive index is an important physical property that is widely used in separation and purification. In this study, refractive index data for ILs were collected to establish a comprehensive database, which included about 2138 data points from 1996 to 2014. A Group Contribution-Artificial Neural Network (GC-ANN) model and a Group Contribution (GC) method were employed to predict the refractive index of ILs at temperatures from 283.15 K to 368.15 K. The average absolute relative deviations (AARD) of the GC-ANN model and the GC method were 0.179% and 0.628%, respectively. The results showed that the GC-ANN model provided an effective way to estimate the refractive index of ILs, whereas the GC method was simple and widely applicable. In summary, both models are accurate and efficient approaches for estimating the refractive indices of ILs.

  4. From individual choice to group decision-making

    NASA Astrophysics Data System (ADS)

    Galam, Serge; Zucker, Jean-Daniel

    2000-12-01

    Some universal features are independent of both the social nature of the individuals making the decision and the nature of the decision itself. On this basis a simple magnet-like model is built. Pair interactions are introduced to measure the degree of exchange among individuals while discussing. An external uniform field is included to account for possible pressure from outside. Individual biases with respect to the issue at stake are also included using local random fields. A unique postulate of minimum conflict is assumed. The model is then solved with emphasis on its psycho-sociological implications. Counter-intuitive results are obtained. At this stage no new physical technicality is involved. Instead, the full psycho-sociological implications of the model are drawn. A few cases are then detailed to illustrate them. In addition, several numerical experiments based on our model are shown, both to give insight into the dynamics of the model and to suggest further research directions.

  5. Regular network model for the sea ice-albedo feedback in the Arctic.

    PubMed

    Müller-Stoffels, Marc; Wackerbauer, Renate

    2011-03-01

    The Arctic Ocean and sea ice form a feedback system that plays an important role in the global climate. The complexity of highly parameterized global circulation (climate) models makes it very difficult to assess feedback processes in climate without the concurrent use of simple models where the physics is understood. We introduce a two-dimensional energy-based regular network model to investigate feedback processes in an Arctic ice-ocean layer. The model includes the nonlinear aspect of the ice-water phase transition, a nonlinear diffusive energy transport within a heterogeneous ice-ocean lattice, and spatiotemporal atmospheric and oceanic forcing at the surfaces. First results for a horizontally homogeneous ice-ocean layer show bistability and related hysteresis between perennial ice and perennial open water for varying atmospheric heat influx. Seasonal ice cover exists as a transient phenomenon. We also find that ocean heat fluxes are more efficient than atmospheric heat fluxes at melting Arctic sea ice.

  6. Where and why hyporheic exchange is important: Inferences from a parsimonious, physically-based river network model

    NASA Astrophysics Data System (ADS)

    Gomez-Velez, J. D.; Harvey, J. W.

    2014-12-01

    Hyporheic exchange has been hypothesized to have basin-scale consequences; however, predictions throughout river networks are limited by available geomorphic and hydrogeologic data as well as models that can analyze and aggregate hyporheic exchange flows across large spatial scales. We developed a parsimonious but physically-based model of hyporheic flow for application in large river basins: Networks with EXchange and Subsurface Storage (NEXSS). At the core of NEXSS is a characterization of the channel geometry, geomorphic features, and related hydraulic drivers based on scaling equations from the literature and readily accessible information such as river discharge, bankfull width, median grain size, sinuosity, channel slope, and regional groundwater gradients. Multi-scale hyporheic flow is computed based on combining simple but powerful analytical and numerical expressions that have been previously published. We applied NEXSS across a broad range of geomorphic diversity in river reaches and synthetic river networks. NEXSS demonstrates that vertical exchange beneath submerged bedforms dominates hyporheic fluxes and turnover rates along the river corridor. Moreover, the hyporheic zone's potential for biogeochemical transformations is comparable across stream orders, but the abundance of lower-order channels results in a considerably higher cumulative effect for low-order streams. Thus, vertical exchange beneath submerged bedforms has more potential for biogeochemical transformations than lateral exchange beneath banks, although lateral exchange through meanders may be important in large rivers. These results have implications for predicting outcomes of river and basin management practices.

  7. Stereoisomeric effects on dynamic viscosity versus pressure and temperature for the system cis- + trans-decalin

    NASA Astrophysics Data System (ADS)

    Miyake, Yasufumi; Boned, Christian; Baylaucq, Antoine; Bessières, David; Zéberg-Mikkelsen, Claus K.; Galliéro, Guillaume; Ushiki, Hideharu

    2007-07-01

    In order to study the influence of stereoisomeric effects on the dynamic viscosity, an extensive experimental study of the viscosity of the binary system composed of the two stereoisomeric molecular forms of decalin - cis and trans - has been carried out for five different mixtures at three temperatures (303.15, 323.15 and 343.15) K and six isobars up to 100 MPa with a falling-body viscometer (a total of 90 points). The experimental relative uncertainty is estimated to be 2%. The variations of dynamic viscosity versus composition are discussed with respect to their behavior due to stereoisomerism. Four different models with a physical and theoretical background are studied in order to investigate how they take the stereoisomeric effect into account through their required model parameters. The evaluated models are based on the hard-sphere scheme, the concepts of free-volume and friction theory, and a model derived from molecular dynamics. Overall, a satisfactory representation of the viscosity of this binary system is found for the different models within the considered (T, p) range, taking into account their simplicity. All the models are able to distinguish between the two stereoisomeric decalin compounds. Further, based on the analysis of the model parameters performed on the pure compounds, it has been found that the use of simple mixing rules without introducing any binary interaction parameters is sufficient to predict the viscosity of cis + trans-decalin mixtures with the same accuracy, relative to the experimental values, as obtained for the pure compounds. In addition to these models, a semi-empirical self-referencing model and the simple mixing laws of Grunberg-Nissan and Katti-Chaudhri are also applied in the representation of the viscosity behavior of these systems.

  8. A Simple Exploration of Complexity at the Climate-Weather-Social-Conflict Nexus

    NASA Astrophysics Data System (ADS)

    Shaw, M.

    2017-12-01

    The conceptualization, exploration, and prediction of the interplay between climate, weather, important resources, and social and economic (hence political) human behavior is cast and analyzed in terms familiar from statistical physics and nonlinear dynamics. A simple threshold toy model is presented that emulates human tendencies either to actively engage in responses deriving, in part, from environmental circumstances or to maintain some semblance of the status quo. The model is formulated based on efforts drawn from the sociophysics literature, specifically a model akin to spin-glass depictions of human behavior, with threshold switching of individual and collective dynamics influenced by relatively more detailed weather and land surface model (hydrological) analyses via a land data assimilation system (a custom rendition of the NASA GSFC Land Information System). Parameters relevant to human systems, e.g., individual and collective switching sensitivity to hydroclimatology, are explored to investigate overall system behavior, i.e., the fixed points/equilibria, oscillations, and bifurcations of systems composed of human interactions and responses to climate and weather through, e.g., agriculture. We discuss implications in terms of conceivable impacts of climate change and associated natural disasters on socioeconomics, politics, and power transfer, drawing from relatively recent literature concerning human conflict.

  9. Crises and Collective Socio-Economic Phenomena: Simple Models and Challenges

    NASA Astrophysics Data System (ADS)

    Bouchaud, Jean-Philippe

    2013-05-01

    Financial and economic history is strewn with bubbles and crashes, booms and busts, crises and upheavals of all sorts. Understanding the origin of these events is arguably one of the most important problems in economic theory. In this paper, we review recent efforts to include heterogeneities and interactions in models of decision making. We argue that the so-called Random Field Ising model (RFIM) provides a unifying framework to account for many collective socio-economic phenomena that lead to sudden ruptures and crises. We discuss different models that can capture potentially destabilizing self-referential feedback loops, induced either by herding, i.e. reference to peers, or trending, i.e. reference to the past, and that account for some of the phenomenology missing in the standard models. We discuss some empirically testable predictions of these models, for example robust signatures of RFIM-like herding effects, or the logarithmic decay of spatial correlations of voting patterns. One of the most striking results, inspired by statistical physics methods, is that Adam Smith's invisible hand can fail badly at solving simple coordination problems. We also emphasize the issue of time scales, which can be extremely long in some cases and can prevent socially optimal equilibria from being reached. As a theoretical challenge, the study of so-called "detailed-balance"-violating decision rules is needed to decide whether conclusions based on current models (which all assume detailed balance) are indeed robust and generic.

  10. 'On Your Feet to Earn Your Seat', a habit-based intervention to reduce sedentary behaviour in older adults: study protocol for a randomized controlled trial.

    PubMed

    Gardner, Benjamin; Thuné-Boyle, Ingela; Iliffe, Steve; Fox, Kenneth R; Jefferis, Barbara J; Hamer, Mark; Tyler, Nick; Wardle, Jane

    2014-09-20

    Many older adults are both highly sedentary (that is, spend considerable amounts of time sitting) and physically inactive (that is, do little physical activity). This protocol describes an exploratory trial of a theory-based behaviour change intervention in the form of a booklet outlining simple activities ('tips') designed both to reduce sedentary behaviour and to increase physical activity in older adults. The intervention is based on the 'habit formation' model, which proposes that consistent repetition leads to behaviour becoming automatic, sustaining activity gains over time. The intervention is being developed iteratively, in line with Medical Research Council complex intervention guidelines. Selection of activity tips was informed by semi-structured interviews and focus groups with older adults, and input from a multidisciplinary expert panel. An ongoing preliminary field test of acceptability among 25 older adults will inform further refinement. An exploratory randomized controlled trial will be conducted within a primary care setting, comparing the tips booklet with a control fact sheet. Retired, inactive and sedentary adults (n = 120) aged 60 to 74 years, with no physical impairments precluding light physical activity, will be recruited from general practices in north London, UK. The primary outcomes are recruitment and attrition rates. Secondary outcomes are changes in behaviour, habit, health and wellbeing over 12 weeks. Data will be used to inform study procedures for a future, larger-scale definitive randomized controlled trial. Current Controlled Trials ISRCTN47901994.

  11. Importance of physical and hydraulic characteristics to unionid mussels: A retrospective analysis in a reach of large river

    USGS Publications Warehouse

    Zigler, S.J.; Newton, T.J.; Steuer, J.J.; Bartsch, M.R.; Sauer, J.S.

    2008-01-01

    Interest in understanding physical and hydraulic factors that might drive distribution and abundance of freshwater mussels has been increasing due to their decline throughout North America. We assessed whether the spatial distribution of unionid mussels could be predicted from physical and hydraulic variables in a reach of the Upper Mississippi River. Classification and regression tree (CART) models were constructed using mussel data compiled from various sources and explanatory variables derived from GIS coverages. Prediction success of CART models for presence-absence of mussels ranged from 71 to 76% across three gears (brail, sled-dredge, and dive-quadrat), and the models explained 51% of the deviance in abundance. Models were largely driven by shear stress and substrate stability variables, but interactions with simple physical variables, especially slope, were also important. Geospatial models, which were based on tree model results, predicted few mussels in poorly connected backwater areas (e.g., floodplain lakes) and the navigation channel, whereas main channel border areas with high geomorphic complexity (e.g., river bends, islands, side channel entrances) and small side channels were typically favorable to mussels. Moreover, bootstrap aggregation of discharge-specific regression tree models of dive-quadrat data indicated that variables measured at low discharge were about 25% more predictive (PMSE = 14.8) than variables measured at median discharge (PMSE = 20.4), with high-discharge variables (PMSE = 17.1) intermediate. This result suggests that episodic events such as droughts and floods were important in structuring mussel distributions. Although the substantial mussel and ancillary data in our study reach are unusual, our approach to developing exploratory statistical and geospatial models should be useful even when data are more limited. © 2007 Springer Science+Business Media B.V.

  12. A physical model for strain accumulation in the San Francisco Bay region: Stress evolution since 1838

    USGS Publications Warehouse

    Pollitz, F.; Bakun, W.H.; Nyst, M.

    2004-01-01

    Understanding of the behavior of plate boundary zones has progressed to the point where reasonably comprehensive physical models can predict their evolution. The San Andreas fault system in the San Francisco Bay region (SFBR) is dominated by a few major faults whose behavior over about one earthquake cycle is fairly well understood. By combining the past history of large ruptures on SFBR faults with a recently proposed physical model of strain accumulation in the SFBR, we derive the evolution of regional stress from 1838 until the present. This effort depends on (1) an existing compilation of the source properties of historic and contemporary SFBR earthquakes based on documented shaking, geodetic data, and seismic data (Bakun, 1999) and (2) a few key parameters of a simple regional viscoelastic coupling model constrained by recent GPS data (Pollitz and Nyst, 2004). Although uncertainties abound in the location, magnitude, and fault geometries of historic ruptures and the physical model relies on gross simplifications, the resulting stress evolution model is sufficiently detailed to provide a useful window into the past stress history. In the framework of Coulomb failure stress, we find that virtually all M ≥ 5.8 earthquakes prior to 1906 and M ≥ 5.5 earthquakes after 1906 are consistent with stress triggering from previous earthquakes. These events systematically lie in zones of predicted stress concentration elevated 5-10 bars above the regional average. The SFBR is predicted to have emerged from the 1906 "shadow" in about 1980, consistent with the acceleration in regional seismicity at that time. The stress evolution model may be a reliable indicator of the most likely areas to experience M ≥ 5.5 shocks in the future.
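
    For reference, the Coulomb failure stress framework mentioned above is conventionally written as below; this is the standard textbook definition, and the notation and sign convention are assumptions here rather than quantities quoted from the paper.

        \Delta \mathrm{CFS} = \Delta\tau_{s} + \mu' \, \Delta\sigma_{n}

    Here Δτ_s is the shear stress change resolved in the slip direction, Δσ_n is the normal stress change (positive for unclamping), and μ' is an effective friction coefficient that absorbs pore-pressure effects; a positive ΔCFS moves a fault toward failure.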

  13. A model of cell-wall dynamics during sporulation in Bacillus subtilis

    NASA Astrophysics Data System (ADS)

    Yap, Li-Wei; Endres, Robert G.

    To survive starvation, Bacillus subtilis forms durable spores. After asymmetric cell division, the septum grows around the forespore in a process called engulfment, but the mechanism of force generation is unknown. Here, we derived a novel biophysical model for the dynamics of cell-wall remodeling during engulfment based on a balance of dissipative, active, and mechanical forces. By plotting phase diagrams, we predict that sporulation is promoted by a line tension from the attachment of the septum to the outer cell wall, as well as by an imbalance in turgor pressures in the mother-cell and forespore compartments. We also predict that significant mother-cell growth hinders engulfment. Hence, relatively simple physical principles may guide this complex biological process.

  14. A toy model for the yield of a tamped fission bomb

    NASA Astrophysics Data System (ADS)

    Reed, B. Cameron

    2018-02-01

    A simple expression is developed for estimating the yield of a tamped fission bomb, that is, a basic nuclear weapon comprising a fissile core jacketed by a surrounding neutron-reflecting tamper. This expression is based on modeling the nuclear chain reaction as a geometric progression in combination with a previously published expression for the threshold-criticality condition for such a core. The derivation is especially straightforward, as it requires no knowledge of diffusion theory and should be accessible to students of both physics and policy. The calculation can be set up as a single page spreadsheet. Application to the Little Boy and Fat Man bombs of World War II gives results in reasonable accord with published yield estimates for these weapons.
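
    The geometric-progression picture described here lends itself to a single-page spreadsheet or a few lines of code. The sketch below is only an illustration of that idea, not Reed's actual model: the multiplication factor k_eff, the number of generations before disassembly, and the energy per fission are all assumed placeholder values.

        # Illustrative sketch of a chain reaction treated as a geometric progression.
        # All numbers are assumed placeholders, not parameters from Reed (2018).
        ENERGY_PER_FISSION_J = 180e6 * 1.602e-19   # ~180 MeV released per fission
        KT_TNT_J = 4.184e12                        # joules per kiloton of TNT

        def yield_estimate(n0=1.0, k_eff=2.0, generations=80):
            """Total energy if each neutron generation multiplies fissions by k_eff."""
            fissions = sum(n0 * k_eff**g for g in range(generations))
            return fissions * ENERGY_PER_FISSION_J

        # With ~80 doubling generations (an arbitrary stand-in for the point where
        # core expansion ends criticality), this gives a yield of order 10 kt.
        print(yield_estimate() / KT_TNT_J)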

  15. Rheology of dilute suspensions of red blood cells: experimental and theoretical approaches

    NASA Astrophysics Data System (ADS)

    Drochon, A.

    2003-05-01

    Shear viscosity measurements with dilute suspensions of red blood cells (RBCs) are interpreted using a microrheological model that relates the bulk measurements to the physical properties of the suspended cells. It is thus possible to quantify the average deformability of an RBC population in terms of a mean value of the membrane shear elastic modulus E_s. The values obtained for normal cells are in good agreement with those given in the literature. The method makes it possible to discriminate between normal and altered (diamide or glutaraldehyde treated) cells or pathological cells (scleroderma). The predictions of the microrheological model, based on analytic calculations, are also compared with the numerical results of Ramanujan and Pozrikidis (JFM 361, 1998) for dilute suspensions of capsules in simple shear flow.

  16. Scheduling observational and physical practice: influence on the coding of simple motor sequences.

    PubMed

    Ellenbuerger, Thomas; Boutin, Arnaud; Blandin, Yannick; Shea, Charles H; Panzer, Stefan

    2012-01-01

    The main purpose of the present experiment was to determine the coordinate system used in the development of movement codes when observational and physical practice are scheduled across practice sessions. The task was to reproduce a 1,300-ms spatial-temporal pattern of elbow flexions and extensions. An intermanual transfer paradigm with a retention test and two effector (contralateral limb) transfer tests was used. The mirror effector transfer test required the same pattern of homologous muscle activation and sequence of limb joint angles as that performed or observed during practice, and the non-mirror effector transfer test required the same spatial pattern movements as that performed or observed. The test results following the first acquisition session replicated the findings of Gruetzmacher, Panzer, Blandin, and Shea (2011) . The results following the second acquisition session indicated a strong advantage for participants who received physical practice in both practice sessions or received observational practice followed by physical practice. This advantage was found on both the retention and the mirror transfer tests compared to the non-mirror transfer test. These results demonstrate that codes based in motor coordinates can be developed relatively quickly and effectively for a simple spatial-temporal movement sequence when participants are provided with physical practice or observation followed by physical practice, but physical practice followed by observational practice or observational practice alone limits the development of codes based in motor coordinates.

  17. Fermion number anomaly with the fluffy mirror fermion

    NASA Astrophysics Data System (ADS)

    Okumura, Ken-ichi; Suzuki, Hiroshi

    2016-12-01

    Quite recently, Grabowska and Kaplan presented a 4-dimensional lattice formulation of chiral gauge theories based on the chiral overlap operator. We study this formulation from the perspective of the fermion number anomaly and possible associated phenomenology. A simple argument shows that the consistency of the formulation implies that the fermion with the opposite chirality to the physical one, the "fluffy mirror fermion" or "fluff", suffers from the fermion number anomaly in the same magnitude (with the opposite sign) as the physical fermion. This immediately shows that if at least one of the fluff quarks is massless, the formulation provides a simple viable solution to the strong CP problem. Also, if the fluff interacts with gravity essentially in the same way as the physical fermion, the formulation can realize the asymmetric dark matter scenario.

  18. Theoretical research of helium pulsating heat pipe under steady state conditions

    NASA Astrophysics Data System (ADS)

    Xu, D.; Liu, H. M.; Li, L. F.; Huang, R. J.; Wang, W.

    2015-12-01

    As a new type of heat pipe, the pulsating heat pipe (PHP) has several outstanding features, such as great heat transport ability, strong adjustability, small size and simple construction. A PHP is a complex two-phase flow system involving many physical processes and parameters; it utilizes the pressure and temperature changes accompanying volume expansion and contraction during phase change to excite the pulsating motion of liquid plugs and vapor bubbles in the capillary tube between the evaporator and the condenser. To date, some experimental investigations of helium PHPs have been carried out, but theoretical research on helium PHPs is rare. In this paper, physical and mathematical models of the operating mechanism of a helium PHP under steady-state conditions are established based on the conservation of mass, momentum, and energy. Several important parameters are correlated and solved, including the liquid filling ratio, flow velocity, heat power, temperature, etc. Based on the results, the operational driving force and flow resistances of the helium PHP are analysed, and the flow and heat transfer are further studied.

  19. Mathematical modeling of high-pH chemical flooding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhuyan, D.; Lake, L.W.; Pope, G.A.

    1990-05-01

    This paper describes a generalized compositional reservoir simulator for high-pH chemical flooding processes. This simulator combines the reaction chemistry associated with these processes with the extensive physical- and flow-property modeling schemes of an existing micellar/polymer flood simulator, UTCHEM. Application of the model is illustrated for cases from a simple alkaline preflush to surfactant-enhanced alkaline-polymer flooding.

  20. Simple Physical Model for the Probability of a Subduction- Zone Earthquake Following Slow Slip Events and Earthquakes: Application to the Hikurangi Megathrust, New Zealand

    NASA Astrophysics Data System (ADS)

    Kaneko, Yoshihiro; Wallace, Laura M.; Hamling, Ian J.; Gerstenberger, Matthew C.

    2018-05-01

    Slow slip events (SSEs) have been documented in subduction zones worldwide, yet their implications for future earthquake occurrence are not well understood. Here we develop a relatively simple, simulation-based method for estimating the probability of megathrust earthquakes following tectonic events that induce any transient stress perturbations. This method has been applied to the locked Hikurangi megathrust (New Zealand), which is surrounded on all sides by the 2016 Kaikoura earthquake and SSEs. Our models indicate that the probability of an M ≥ 7.8 earthquake in the year after the Kaikoura earthquake increases by a factor of 1.3-18 relative to the pre-Kaikoura probability, and the absolute probability is in the range of 0.6-7%. We find that probabilities of a large earthquake are mainly controlled by the ratio of the total stressing rate induced by all nearby tectonic sources to the mean stress drop of earthquakes. Our method can be applied to evaluate the potential for triggering a megathrust earthquake following SSEs in other subduction zones.

  1. Theoretical aspects of tidal and planetary wave propagation at thermospheric heights

    NASA Technical Reports Server (NTRS)

    Volland, H.; Mayr, H. G.

    1977-01-01

    A simple semiquantitative model is presented which allows analytic solutions of tidal and planetary wave propagation at thermospheric heights. This model is based on perturbation approximation and mode separation. The effects of viscosity and heat conduction are parameterized by Rayleigh friction and Newtonian cooling. Because of this simplicity, one gains a clear physical insight into basic features of atmospheric wave propagation. In particular, we discuss the meridional structures of pressure and horizontal wind (the solutions of Laplace's equation) and their modification due to dissipative effects at thermospheric heights. Furthermore, we solve the equations governing the height structure of the wave modes and arrive at a very simple asymptotic solution valid in the upper part of the thermosphere. That 'system transfer function' of the thermosphere allows one to estimate immediately the reaction of the thermospheric wave mode parameters such as pressure, temperature, and winds to an external heat source of arbitrary temporal and spatial distribution. Finally, the diffusion effects of the minor constituents due to the global wind circulation are discussed, and some results of numerical calculations are presented.

  2. Application of synthetic scenarios to address water resource concerns: A management-guided case study from the Upper Colorado River Basin

    USGS Publications Warehouse

    McAfee, Stephanie A.; Pederson, Gregory T.; Woodhouse, Connie A.; McCabe, Gregory

    2017-01-01

    Water managers are increasingly interested in better understanding and planning for projected resource impacts from climate change. In this management-guided study, we use a very large suite of synthetic climate scenarios in a statistical modeling framework to simultaneously evaluate how (1) average temperature and precipitation changes, (2) initial basin conditions, and (3) temporal characteristics of the input climate data influence water-year flow in the Upper Colorado River. The results here suggest that existing studies may underestimate the degree of uncertainty in future streamflow, particularly under moderate temperature and precipitation changes. However, we also find that the relative severity of future flow projections within a given climate scenario can be estimated with simple metrics that characterize the input climate data and basin conditions. These results suggest that simple testing, like the analyses presented in this paper, may be helpful in understanding differences between existing studies or in identifying specific conditions for physically based mechanistic modeling. Both options could reduce overall cost and improve the efficiency of conducting climate change impacts studies.

  3. A simple model of entropy relaxation for explaining effective activation energy behavior below the glass transition temperature.

    PubMed

    Bisquert, Juan; Henn, François; Giuntini, Jean-Charles

    2005-03-01

    Strong changes in relaxation rates observed at the glass transition region are frequently explained in terms of a physical singularity of the molecular motions. We show that the unexpected trends and values for activation energy and preexponential factor of the relaxation time tau, obtained at the glass transition from the analysis of the thermally stimulated current signal, result from the use of the Arrhenius law for treating the experimental data obtained in nonstationary experimental conditions. We then demonstrate that a simple model of structural relaxation based on a time dependent configurational entropy and Adam-Gibbs relaxation time is sufficient to explain the experimental behavior, without invoking a kinetic singularity at the glass transition region. The pronounced variation of the effective activation energy appears as a dynamic signature of entropy relaxation that governs the change of relaxation time in nonstationary conditions. A connection is demonstrated between the peak of apparent activation energy measured in nonequilibrium dielectric techniques, with the overshoot of the dynamic specific heat that is obtained in calorimetry techniques.
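
    For reference, the Adam-Gibbs relaxation time invoked here has the standard form shown below (textbook notation assumed, with the configurational entropy allowed to depend on time in the model):

        \tau\bigl(T, S_c\bigr) = \tau_0 \exp\!\left(\frac{C}{T\, S_c(t)}\right)

    where τ_0 and C are constants; letting S_c(t) relax slowly toward its equilibrium value under nonstationary conditions is what, in the abstract's argument, produces the apparent peak in the effective activation energy.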

  4. The Magnetic Field along the Axis of a Short, Thick Solenoid

    ERIC Educational Resources Information Center

    Hart, Francis Xavier

    2018-01-01

    We commonly ask students to compare the results of their experimental measurements with the predictions of a simple physical model that is well understood. However, in practice, physicists must compare their experimental measurements with the predictions of several models, none of which may work well over the entire range of measurements. The…

  5. Alternative Tsunami Models

    ERIC Educational Resources Information Center

    Tan, A.; Lyatskaya, I.

    2009-01-01

    The interesting papers by Margaritondo (2005 "Eur. J. Phys." 26 401) and by Helene and Yamashita (2006 "Eur. J. Phys." 27 855) analysed the great Indian Ocean tsunami of 2004 using a simple one-dimensional canal wave model, which was appropriate for undergraduate students in physics and related fields of discipline. In this paper, two additional,…

  6. Simple model for multiple-choice collective decision making

    NASA Astrophysics Data System (ADS)

    Lee, Ching Hua; Lucas, Andrew

    2014-11-01

    We describe a simple model of heterogeneous, interacting agents making decisions between n ≥2 discrete choices. For a special class of interactions, our model is the mean field description of random field Potts-like models and is effectively solved by finding the extrema of the average energy E per agent. In these cases, by studying the propagation of decision changes via avalanches, we argue that macroscopic dynamics is well captured by a gradient flow along E . We focus on the permutation symmetric case, where all n choices are (on average) the same, and spontaneous symmetry breaking (SSB) arises purely from cooperative social interactions. As examples, we show that bimodal heterogeneity naturally provides a mechanism for the spontaneous formation of hierarchies between decisions and that SSB is a preferred instability to discontinuous phase transitions between two symmetric points. Beyond the mean field limit, exponentially many stable equilibria emerge when we place this model on a graph of finite mean degree. We conclude with speculation on decision making with persistent collective oscillations. Throughout the paper, we emphasize analogies between methods of solution to our model and common intuition from diverse areas of physics, including statistical physics and electromagnetism.

  7. Slithering on sand: kinematics and controls for success on granular media

    NASA Astrophysics Data System (ADS)

    Schiebel, Perrin E.; Zhang, Tingnan; Dai, Jin; Gong, Chaohui; Yu, Miao; Astley, Henry C.; Travers, Matthew; Choset, Howie; Goldman, Daniel I.

    Previously, we studied the subsurface locomotion of undulatory sand-swimming snakes and lizards; using the empirical drag response of granular media (GM) to subsurface intrusion of simple objects allowed us to develop a granular resistive force theory (RFT) to model the locomotion and predict optimal movement patterns. However, our knowledge of the physics of GM at the surface is limited; this makes it impossible to determine how the desert-dwelling snake C. occipitalis moves effectively (0.45 +/- 0.04 body lengths/sec) on the surface of sand. We combine organism biomechanics studies, GM drag experiments, RFT calculations, and tests of a physical model (a snake-like robot) to reveal how multiple factors acting together contribute to slithering on sandy surfaces. These include the kinematics--targeting an ideal waveform which maximizes speed while minimizing joint-level torque--the ability to modulate ground interactions by lifting body segments, and the properties of the GM. Based on the sensitive nature of the relationship between these factors, we hypothesize that having an element of force-based control, where the waveform is modulated in response to the forces acting between the body and the environment, is necessary for successful locomotion on yielding substrates.

  8. A physically-based method for predicting peak discharge of floods caused by failure of natural and constructed earthen dams

    USGS Publications Warehouse

    Walder, J.S.

    1997-01-01

    We analyse a simple, physically-based model of breach formation in natural and constructed earthen dams to elucidate the principal factors controlling the flood hydrograph at the breach. Formation of the breach, which is assumed trapezoidal in cross-section, is parameterized by the mean rate of downcutting, k, the value of which is constrained by observations. A dimensionless formulation of the model leads to the prediction that the breach hydrograph depends upon lake shape, the ratio r of breach width to depth, the side slope θ of the breach, and the parameter η = (V/D^3)(k/√(gD)), where V = lake volume, D = lake depth, and g is the acceleration due to gravity. Calculations show that peak discharge Qp depends weakly on lake shape, r, and θ, but strongly on η, which is the product of a dimensionless lake volume and a dimensionless erosion rate. Qp(η) takes asymptotically distinct forms depending on whether η ≪ 1 or η ≫ 1. Theoretical predictions agree well with data from dam failures for which k could be reasonably estimated. The analysis provides a rapid and in many cases graphical way to estimate plausible values of Qp at the breach.
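
    A minimal sketch of evaluating the dimensionless parameter η defined above for a given lake and breach-erosion rate; the numerical values in the example are invented for illustration only.

        import math

        def eta(V_m3, D_m, k_m_per_s, g=9.81):
            """Dimensionless parameter eta = (V / D^3) * (k / sqrt(g * D))."""
            return (V_m3 / D_m**3) * (k_m_per_s / math.sqrt(g * D_m))

        # Hypothetical example: a 1e6 m^3 lake, 10 m deep, breach downcutting at 10 m/h.
        print(eta(1e6, 10.0, 10.0 / 3600.0))   # ~0.3 for these made-up numbers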

  9. Students' use of atomic and molecular models in learning chemistry

    NASA Astrophysics Data System (ADS)

    O'Connor, Eileen Ann

    1997-09-01

    The objective of this study was to investigate the development of introductory college chemistry students' use of atomic and molecular models to explain physical and chemical phenomena. The study was conducted during the first semester of the course at a "University and College II" public institution (Carnegie Commission on Higher Education, 1973). Students' use of models was observed during one-on-one interviews conducted over the course of the semester. The approach to introductory chemistry emphasized models. Students were exposed to over two hundred and fifty atomic and molecular models during lectures, were assigned text readings that used over a thousand models, and worked interactively with dozens of models on the computer. These models illustrated various features of the spatial organization of valence electrons and nuclei in atoms and molecules. Despite extensive exposure to models in lectures, in the textbook, and in computer-based activities, the students in the study based their explanations in large part on a simple Bohr model (electrons arranged in concentric circles around the nuclei)--a model that had not been introduced in the course. Students used visual information from their models to construct their explanations, while overlooking inter-atomic and intra-molecular forces, which are not represented explicitly in the models. In addition, students often explained phenomena by adding separate information about the topic without either integrating or logically relating this information into a cohesive explanation. The results of the study demonstrate that despite the extensive use of models in chemistry instruction, students do not necessarily apply them appropriately in explaining chemical and physical phenomena. The results of this study suggest that for the power of models as aids to learning to be more fully realized, chemistry professors must give more attention to the selection, use, integration, and limitations of models in their instruction.

  10. Mathematical study on robust tissue pattern formation in growing epididymal tubule.

    PubMed

    Hirashima, Tsuyoshi

    2016-10-21

    Tissue pattern formation during development is a reproducible morphogenetic process organized by a series of kinetic cellular activities, leading to the building of functional and stable organs. Recent studies focusing on mechanical aspects have revealed physical mechanisms for how the cellular activities contribute to the formation of reproducible tissue patterns; however, our understanding of what factors achieve the reproducibility of such patterning, and how, is far from complete. Here, I focus on tube pattern formation during murine epididymal development, and show, using a mathematical model based on experimental data, that two factors influencing the physical design of the patterning, the proliferative zone within the tubule and the viscosity of tissues surrounding the tubule, control the reproducibility of the epididymal tubule pattern. Extensive numerical simulation of the simple mathematical model revealed that a spatially localized proliferative zone within the tubule, observed in experiments, results in a more reproducible tubule pattern. Moreover, I found that the viscosity of tissues surrounding the tubule imposes a trade-off between pattern reproducibility and spatial accuracy relating to the region where the tubule pattern is formed. This indicates the existence of an optimum in the material properties of tissues for the robust patterning of the epididymal tubule. The results obtained by numerical analysis based on experimental observations provide general insight into how physical design realizes robust tissue pattern formation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Robust Strategy for Rocket Engine Health Monitoring

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael

    2001-01-01

    Monitoring the health of rocket engine systems is essentially a two-phase process. The acquisition phase involves sensing physical conditions at selected locations, converting physical inputs to electrical signals, conditioning the signals as appropriate to establish scale or filter interference, and recording results in a form that is easy to interpret. The inference phase involves analysis of results from the acquisition phase, comparison of analysis results to established health measures, and assessment of health indications. A variety of analytical tools may be employed in the inference phase of health monitoring. These tools can be separated into three broad categories: statistical, rule based, and model based. Statistical methods can provide excellent comparative measures of engine operating health. They require well-characterized data from an ensemble of "typical" engines, or "golden" data from a specific test assumed to define the operating norm in order to establish reliable comparative measures. Statistical methods are generally suitable for real-time health monitoring because they do not deal with the physical complexities of engine operation. The utility of statistical methods in rocket engine health monitoring is hindered by practical limits on the quantity and quality of available data. This is due to the difficulty and high cost of data acquisition, the limited number of available test engines, and the problem of simulating flight conditions in ground test facilities. In addition, statistical methods incur a penalty for disregarding flow complexity and are therefore limited in their ability to define performance shift causality. Rule based methods infer the health state of the engine system based on comparison of individual measurements or combinations of measurements with defined health norms or rules. This does not mean that rule based methods are necessarily simple. Although binary yes-no health assessment can sometimes be established by relatively simple rules, the causality assignment needed for refined health monitoring often requires an exceptionally complex rule base involving complicated logical maps. Structuring the rule system to be clear and unambiguous can be difficult, and the expert input required to maintain a large logic network and associated rule base can be prohibitive.

  12. Hadronic Octaves: Symphony in Treble Clef

    NASA Astrophysics Data System (ADS)

    Ne'eman, Yuval

    2002-06-01

    Pythagoreanism, as derived from the physics of music, an artificial quantized system, involved simple ratios between integers and was conjectured by the Pythagoreans to extend to the whole of physics (the Music of the Spheres). It hit the jackpot in 1885 with Balmer's formula and has dominated XXth Century physics, with its Quantum Foundations. I review the history of Hadron Spectroscopy and my personal role in 1958-1964, i.e. (1) my 1960 discovery of SU(3) symmetry with an octet assignment for the j = 1/2 baryons (independently reached somewhat later by M. Gell-Mann), and (2) in 1961 (with H. Goldberg) my mathematical construction of a structural model which was then developed into the physical quark model by Gell-Mann and Zweig.

  13. Commensurability-driven structural defects in double emulsions produced with two-step microfluidic techniques.

    PubMed

    Schmit, Alexandre; Salkin, Louis; Courbin, Laurent; Panizza, Pascal

    2014-07-14

    The combination of two drop makers such as flow focusing geometries or T-junctions is commonly used in microfluidics to fabricate monodisperse double emulsions and novel fluid-based materials. Here we investigate the physics of the encapsulation of small droplets inside large drops that is at the core of such processes. The number of droplets per drop studied over time for large sequences of consecutive drops reveals that the dynamics of these systems are complex: we find a succession of well-defined elementary patterns and defects. We present a simple model based on a discrete approach that predicts the nature of these patterns and their non-trivial scheme of arrangement in a sequence as a function of the ratio of the two timescales of the problem, the production times of droplets and drops. Experiments validate our model as they concur very well with predictions.
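
    As an illustration of the timing argument (a toy version, not the authors' published discrete model), the sketch below counts inner droplets per outer drop when both production processes are strictly periodic; the repeating pattern of counts is set by the ratio of the two production times.

        import math

        def droplets_per_drop(t_droplet, t_drop, n_drops=10):
            """Toy count of inner droplets (period t_droplet) captured by each
            outer drop (period t_drop), assuming both trains start together."""
            counts = []
            for i in range(n_drops):
                start, end = i * t_drop, (i + 1) * t_drop
                first = math.ceil(start / t_droplet)
                last = math.ceil(end / t_droplet) - 1
                counts.append(last - first + 1)
            return counts

        # A production-time ratio of 2.6 yields a repeating pattern of 3s and 2s
        # whose average is 2.6 droplets per drop.
        print(droplets_per_drop(1.0, 2.6))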

  14. On Two-Scale Modelling of Heat and Mass Transfer

    NASA Astrophysics Data System (ADS)

    Vala, J.; Št'astník, S.

    2008-09-01

    Modelling of the macroscopic behaviour of materials consisting of several layers or components, whose microscopic (at least stochastic) analysis is available, as well as (more generally) the simulation of non-local phenomena, complicated coupled processes, etc., requires both deeper understanding of physical principles and development of mathematical theories and software algorithms. Starting from the (relatively simple) example of phase transformation in substitutional alloys, this paper sketches the general formulation of a nonlinear system of partial differential equations of evolution for the heat and mass transfer (useful in mechanical and civil engineering, etc.), corresponding to conservation principles of thermodynamics, both at the micro- and at the macroscopic level, and suggests an algorithm for scale-bridging based on robust finite element techniques. Some existence and convergence questions, namely those based on the construction of Rothe sequences and on the mathematical theory of two-scale convergence, are discussed together with references to useful generalizations required by new technologies.

  15. A study of amplitude information-frequency characteristics for underwater active electrolocation system.

    PubMed

    Peng, Jiegang

    2015-11-04

    Weakly electric fish sense their surroundings in complete darkness with their active electrolocation system. Biologists have investigated this active electrolocation system for nearly 60 years, and engineers have investigated bio-inspired active electrolocation sensors for about 20 years. However, how the amplitude response is affected by the frequency of the detecting electric field in an active electrolocation system has rarely been investigated. In this paper, an electrolocation experiment system has been built. The amplitude information-frequency characteristics (AIFC) of the electrolocation system for sinusoidal electric fields of varying frequencies have been investigated. We find that the AIFC of the electrolocation system are related to the material properties and geometric features of the probed object and the conductivity of the surrounding water. A detection frequency dead zone (DFDZ) and a frequency inflection point (FIP) of the AIFC for the electrolocation system were found. Analysis models of the electrolocation system have been investigated for many years, but the DFDZ and FIP of the AIFC are difficult to explain with those models. In order to explain these AIFC phenomena, a simple relaxation model based on the Cole-Cole model, which provides not only a mathematical but also a physical explanation for the electrolocation system, is advanced. We also advance a hypothesis for the physical mechanism of the weakly electric fish active electrolocation system, which may serve as a reference for further work on that mechanism.
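
    For reference, the Cole-Cole relaxation on which such a model is typically based has the standard form below (textbook notation assumed; the paper's specific parameterization may differ):

        \varepsilon^{*}(\omega) = \varepsilon_{\infty}
            + \frac{\varepsilon_{s} - \varepsilon_{\infty}}{1 + (i\omega\tau_{0})^{1-\alpha}}

    where ε_s and ε_∞ are the static and high-frequency permittivities, τ_0 is the characteristic relaxation time, and 0 ≤ α < 1 broadens the dispersion; frequency-dependent dispersion of this kind is one plausible route to features such as a detection dead zone and an inflection point in the amplitude response.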

  16. The development of acoustic experiments for off-campus teaching and learning

    NASA Astrophysics Data System (ADS)

    Wild, Graham; Swan, Geoff

    2011-05-01

    In this article, we show the implementation of a computer-based digital storage oscilloscope (DSO) and function generator (FG) using the computer's soundcard for off-campus acoustic experiments. The microphone input is used for the DSO, and a speaker jack is used as the FG. In an effort to reduce the cost of implementing the experiment, we examine software available for free, online. A small number of applications were compared in terms of their interface and functionality, for both the DSO and the FG. The software was then used to investigate standing waves in pipes using the computer-based DSO. Standing wave theory taught in high school and in first-year physics is based on a one-dimensional model. With the use of the DSO's fast Fourier transform function, the experimental uncertainty alone was not sufficient to account for the difference observed between the measured and the calculated frequencies. Hence the original experiment was expanded to include the end correction effect. The DSO was also used for other simple acoustics experiments, in areas such as the physics of music.
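
    As a worked illustration of the end-correction point (values assumed for illustration, not taken from the article), the resonances of a pipe closed at one end shift noticeably once the standard open-end correction of roughly 0.6 times the pipe radius is included:

        V_SOUND = 343.0  # m/s, speed of sound near 20 C (assumed)

        def closed_pipe_resonances(length_m, radius_m, n_modes=3, end_corr=0.6):
            """Resonant frequencies of a pipe closed at one end, with the usual
            single open-end correction dL ~ 0.6 * radius added to the length."""
            L_eff = length_m + end_corr * radius_m
            return [(2 * n - 1) * V_SOUND / (4 * L_eff) for n in range(1, n_modes + 1)]

        # Hypothetical 30 cm pipe of 2 cm radius: the correction lowers the
        # fundamental from about 286 Hz (uncorrected) to about 275 Hz.
        print(closed_pipe_resonances(0.30, 0.02))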

  17. Flavorful Z′ signatures at LHC and ILC

    NASA Astrophysics Data System (ADS)

    Chen, Shao-Long; Okada, Nobuchika

    2008-10-01

    There are many new physics models which predict an extra neutral gauge boson, referred to as the Z′-boson. In a certain class of these new physics models, the Z′-boson has flavor-dependent couplings to the fermions in the Standard Model (SM). Based on a simple model in which the couplings of the third-generation SM fermions to the Z′-boson differ from those of the corresponding fermions in the first two generations, we study the signatures of the Z′-boson at the Large Hadron Collider (LHC) and the International Linear Collider (ILC). We show that at the LHC, a Z′-boson with mass around 1 TeV can be produced through the Drell-Yan processes, and its dilepton decay modes provide clean signatures not only for the resonant production of the Z′-boson but also for the flavor dependence of the production cross sections. We also study fermion pair production at the ILC involving virtual Z′-boson exchange. Even though the center-of-mass energy of the ILC is much lower than the Z′-boson mass, the angular distributions and forward-backward asymmetries of fermion pair production show not only sizable deviations from the SM predictions but also significant flavor dependence.

  18. Novel Use of Natural Language Processing (NLP) to Predict Suicidal Ideation and Psychiatric Symptoms in a Text-Based Mental Health Intervention in Madrid.

    PubMed

    Cook, Benjamin L; Progovac, Ana M; Chen, Pei; Mullin, Brian; Hou, Sherry; Baca-Garcia, Enrique

    2016-01-01

    Natural language processing (NLP) and machine learning were used to predict suicidal ideation and heightened psychiatric symptoms among adults recently discharged from psychiatric inpatient or emergency room settings in Madrid, Spain. Participants responded to structured mental and physical health instruments at multiple follow-up points. Outcome variables of interest were suicidal ideation and psychiatric symptoms (GHQ-12). Predictor variables included structured items (e.g., relating to sleep and well-being) and responses to one unstructured question, "how do you feel today?" We compared NLP-based models using the unstructured question with logistic regression prediction models using structured data. The PPV, sensitivity, and specificity for NLP-based models of suicidal ideation were 0.61, 0.56, and 0.57, respectively, compared to 0.73, 0.76, and 0.62 of structured data-based models. The PPV, sensitivity, and specificity for NLP-based models of heightened psychiatric symptoms (GHQ-12 ≥ 4) were 0.56, 0.59, and 0.60, respectively, compared to 0.79, 0.79, and 0.85 in structured models. NLP-based models were able to generate relatively high predictive values based solely on responses to a simple general mood question. These models have promise for rapidly identifying persons at risk of suicide or psychological distress and could provide a low-cost screening alternative in settings where lengthy structured item surveys are not feasible.

  19. Application of empirical and dynamical closure methods to simple climate models

    NASA Astrophysics Data System (ADS)

    Padilla, Lauren Elizabeth

    This dissertation applies empirically- and physically-based methods for closure of uncertain parameters and processes to three model systems that lie on the simple end of climate model complexity. Each model isolates one of three sources of closure uncertainty: uncertain observational data, large dimension, and wide-ranging length scales. They serve as efficient test systems toward extension of the methods to more realistic climate models. The empirical approach uses the Unscented Kalman Filter (UKF) to estimate the transient climate sensitivity (TCS) parameter in a globally-averaged energy balance model. Uncertainty in climate forcing and historical temperature makes TCS difficult to determine. A range of probabilistic estimates of TCS computed for various assumptions about past forcing and natural variability corroborate ranges reported in the IPCC AR4 found by different means. Also computed are estimates of how quickly uncertainty in TCS may be expected to diminish in the future as additional observations become available. For higher system dimensions the UKF approach may become prohibitively expensive. A modified UKF algorithm is developed in which the error covariance is represented by a reduced-rank approximation, substantially reducing the number of model evaluations required to provide probability densities for unknown parameters. The method estimates the state and parameters of an abstract atmospheric model, known as Lorenz 96, with accuracy close to that of a full-order UKF for 30-60% rank reduction. The physical approach to closure uses the Multiscale Modeling Framework (MMF) to demonstrate closure of small-scale, nonlinear processes that would not be resolved directly in climate models. A one-dimensional, abstract test model with a broad spatial spectrum is developed. The test model couples the Kuramoto-Sivashinsky equation to a transport equation that includes cloud formation and precipitation-like processes. In the test model, three main sources of MMF error are evaluated independently. Loss of nonlinear multi-scale interactions and periodic boundary conditions in closure models were dominant sources of error. Using a reduced order modeling approach to maximize energy content allowed reduction of the closure model dimension by up to 75% without loss in accuracy. MMF and a comparable alternative model performed equally well when compared with direct numerical simulation.
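
    For context, the globally averaged energy balance model mentioned above is typically of the form C dT/dt = F(t) - lambda*T, where lambda is the feedback parameter that sets the transient climate sensitivity. The sketch below integrates such a toy model and recovers lambda by a naive least-squares scan over a synthetic temperature record; it is a simplified stand-in for the dissertation's Unscented Kalman Filter approach, and all numerical values are assumed for illustration.

    ```python
    # Toy globally averaged energy balance model: C dT/dt = F(t) - lam * T.
    # A naive least-squares scan stands in for the UKF used in the dissertation.
    import numpy as np

    C = 8.0         # effective heat capacity (W yr m^-2 K^-1), assumed
    lam_true = 1.2  # "true" feedback parameter (W m^-2 K^-1), assumed
    years = np.arange(1900, 2001)
    F = 0.04 * (years - years[0])   # idealized, linearly increasing forcing (W m^-2)

    def simulate(lam):
        T = np.zeros_like(F)
        for i in range(1, len(F)):
            T[i] = T[i-1] + (F[i-1] - lam * T[i-1]) / C   # forward Euler, dt = 1 yr
        return T

    rng = np.random.default_rng(0)
    obs = simulate(lam_true) + rng.normal(0, 0.1, len(F))  # synthetic "observations"

    lams = np.linspace(0.5, 2.5, 201)
    errors = [np.sum((simulate(l) - obs) ** 2) for l in lams]
    print("estimated lambda:", lams[int(np.argmin(errors))])
    ```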

  20. Managing and capturing the physics of robotic systems

    NASA Astrophysics Data System (ADS)

    Werfel, Justin

    Algorithmic and other theoretical analyses of robotic systems often use a discretized or otherwise idealized framework, while the real world is continuous-valued and noisy. This disconnect can sometimes make theoretical work difficult to apply successfully to real-world systems. One approach to bridging this separation is to design hardware that exploits simple physical effects mechanically, in order to guide elements into a desired set of discrete attracting states. As a result, the system behavior can effectively approximate a discretized formalism, so that proofs based on an idealization remain directly relevant, while control can be made simpler. It is important to note, conversely, that such an approach makes neither a physical instantiation unnecessary nor a purely theoretical treatment sufficient. Experiments with hardware in practice always reveal physical effects not originally accounted for in simulation or analytic modeling, which lead to unanticipated results and require nontrivial modifications to control algorithms in order to achieve desired outcomes. I will discuss these points in the context of swarm robotic systems recently developed at the Self-Organizing Systems Research Group at Harvard.

  1. Hint of Universal Law for the Financial Gains of Competitive Sport Teams. The case of Tour de France cycle race.

    NASA Astrophysics Data System (ADS)

    Ausloos, Marcel

    2017-12-01

    This short note is intended as a "Letter to the Editor" Perspective, aimed at reaching the physics community interested in rare events, scaling laws, and unexpected findings, on a domain of wide interest: sport and money. It is apparent from the data reported and discussed below that the scarcity of such data does not allow us to recommend an elaborate agent-based model at this time. In some sense, this also means that much data on sport activities is not presented in a form readily usable by physicists, but it could be, and would then attract much attention. Nevertheless, the findings tie the data to well-known scaling laws and physical processes. It is found that a simple scaling law describes the financial gains of teams in recent bicycle races, such as the Tour de France. An analogous case, the ranking of teams in Formula 1 races, is shown in an Appendix.

  2. Modeling the internal combustion engine

    NASA Technical Reports Server (NTRS)

    Zeleznik, F. J.; Mcbride, B. J.

    1985-01-01

    A flexible and computationally economical model of the internal combustion engine was developed for use on large digital computer systems. It is based on a system of ordinary differential equations for cylinder-averaged properties. The computer program is capable of multicycle calculations, with some parameters varying from cycle to cycle, and has restart capabilities. It can accommodate a broad spectrum of reactants, permits changes in physical properties, and offers a wide selection of alternative modeling functions without any reprogramming. It readily adapts to the amount of information available in a particular case because the model is in fact a hierarchy of five models. The models range from a simple model requiring only thermodynamic properties to a complex model demanding full combustion kinetics, transport properties, and poppet valve flow characteristics. Among its many features the model includes heat transfer, valve timing, supercharging, motoring, finite burning rates, cycle-to-cycle variations in air-fuel ratio, humid air, residual and recirculated exhaust gas, and full combustion kinetics.

  3. A hierarchy for modeling high speed propulsion systems

    NASA Technical Reports Server (NTRS)

    Hartley, Tom T.; Deabreu, Alex

    1991-01-01

    General research efforts on reduced order propulsion models for control systems design are overviewed. Methods for modeling high speed propulsion systems are discussed including internal flow propulsion systems that do not contain rotating machinery, such as inlets, ramjets, and scramjets. The discussion is separated into four areas: (1) computational fluid dynamics models for the entire nonlinear system or high order nonlinear models; (2) high order linearized models derived from fundamental physics; (3) low order linear models obtained from the other high order models; and (4) low order nonlinear models (order here refers to the number of dynamic states). Included in the discussion are any special considerations based on the relevant control system designs. The methods discussed are for the quasi-one-dimensional Euler equations of gasdynamic flow. The essential nonlinear features represented are large amplitude nonlinear waves, including moving normal shocks, hammershocks, simple subsonic combustion via heat addition, temperature dependent gases, detonations, and thermal choking. The report also contains a comprehensive list of papers and theses generated by this grant.

  4. Perceived Social Support Among People With Physical Disability

    PubMed Central

    Setareh Forouzan, Ameneh; Mahmoodi, Abolfazl; Jorjoran Shushtari, Zahra; Salimi, Yahya; Sajjadi, Homeira; Mahmoodi, Zohreh

    2013-01-01

    Background Disability is based more on social than on medical aspects. Lack of attention and social support may affect the participation of people with physical disability in various aspects of life and their return to normal life in society. Objectives This study was conducted to determine perceived social support and related factors among physically disabled people in the city of Tehran. Patients and Methods This cross-sectional study, using simple random sampling, was conducted on 136 people with physical disabilities who were covered by the Welfare Organization of Tehran. The Norbeck social support questionnaire was used. Multiple linear regression analysis with the backward method was used to identify the adjusted association between perceived social support as the dependent variable and demographic variables as independent variables. Results The sample comprised 68 (50%) males and 68 (50%) females with a mean age of 33 (SD = 8.9) years. Based on the results, the mean functional support was 135.57 (SD = 98.77) and the mean structural support was 77.37 (SD = 52.37). The regression analysis demonstrated that age and marital status remained in the model as significant predictors of functional support (P = 0.003 and P = 0.004, respectively) and structural support (P = 0.002 and P = 0.006, respectively). Conclusions Based on the results, participants in the study did not have a favorable status with respect to perceived social support (in all dimensions) from their social network members. Social support, as one of the social determinants of health, plays an important role in improving psychological conditions in people's lives; therefore, being aware of social support and designing effective interventions to improve it for the disabled is very important. PMID:24578832

  5. A reappraisal of drug release laws using Monte Carlo simulations: the prevalence of the Weibull function.

    PubMed

    Kosmidis, Kosmas; Argyrakis, Panos; Macheras, Panos

    2003-07-01

    To verify the Higuchi law and study the drug release from cylindrical and spherical matrices by means of Monte Carlo computer simulation. A one-dimensional matrix, based on the theoretical assumptions of the derivation of the Higuchi law, was simulated and its time evolution was monitored. Cylindrical and spherical three-dimensional lattices were simulated, with sites at the boundary of the lattice denoted as leak sites. Particles were allowed to move inside the lattice using a random walk model. Excluded volume interactions between the particles were assumed. We have monitored the system time evolution for different lattice sizes and different initial particle concentrations. The Higuchi law was verified using the Monte Carlo technique in a one-dimensional lattice. It was found that Fickian drug release from cylindrical matrices can be approximated nicely with the Weibull function. A simple linear relation between the Weibull function parameters and the specific surface of the system was found. Drug release from a matrix, as a result of a diffusion process assuming excluded volume interactions between the drug molecules, can be described using a Weibull function. This model, although approximate and semiempirical, has the benefit of providing a simple physical connection between the model parameters and the system geometry, something that was missing from other semiempirical models.
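
    A minimal sketch of this type of simulation, under assumed parameters (not those of the paper), is a lattice random walk in which particles exit through leak sites at the boundary; the resulting release curve can then be compared with a Weibull function 1 - exp(-(t/tau)^b).

    ```python
    # Minimal Monte Carlo sketch: drug release by random walk on a 2D square lattice
    # with absorbing (leak) boundary sites. Parameters are illustrative only.
    import numpy as np

    rng = np.random.default_rng(1)
    N, n_particles, n_steps = 20, 500, 2000   # lattice size, particle count, time steps
    pos = rng.integers(1, N - 1, size=(n_particles, 2))   # start inside the boundary
    inside = np.ones(n_particles, dtype=bool)
    released = np.zeros(n_steps)

    moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    for t in range(n_steps):
        step = moves[rng.integers(0, 4, size=n_particles)]
        trial = pos + step
        # Excluded-volume interactions are ignored here for brevity, unlike the paper.
        pos[inside] = trial[inside]
        escaped = inside & ((pos[:, 0] <= 0) | (pos[:, 0] >= N - 1) |
                            (pos[:, 1] <= 0) | (pos[:, 1] >= N - 1))
        inside &= ~escaped
        released[t] = n_particles - inside.sum()   # cumulative number released

    # Fraction released vs time; a Weibull curve 1 - exp(-(t/tau)**b) could be fitted here.
    print(released[-1] / n_particles)
    ```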

  6. Comparative Research Productivity Measures for Economic Departments.

    ERIC Educational Resources Information Center

    Huettner, David A.; Clark, William

    1997-01-01

    Develops a simple theoretical model to evaluate interdisciplinary differences in research productivity between economics departments and related subjects. Compares the research publishing statistics of economics, finance, psychology, geology, physics, oceanography, chemistry, and geophysics. Considers a number of factors including journal…

  7. Unconditionally Secure Credit/Debit Card Chip Scheme and Physical Unclonable Function

    NASA Astrophysics Data System (ADS)

    Kish, Laszlo B.; Entesari, Kamran; Granqvist, Claes-Göran; Kwan, Chiman

    The statistical-physics-based Kirchhoff-law-Johnson-noise (KLJN) key exchange offers a new and simple unclonable system for credit/debit card chip authentication and payment. The key exchange, the authentication and the communication are unconditionally secure so that neither mathematics- nor statistics-based attacks are able to crack the scheme. The ohmic connection and the short wiring lengths between the chips in the card and the terminal constitute an ideal setting for the KLJN protocol, and even its simplest versions offer unprecedented security and privacy for credit/debit card chips and applications of physical unclonable functions (PUFs).

  8. Some Key Issues in Creating Inquiry-Based Instructional Practices that Aim at the Understanding of Simple Electric Circuits

    NASA Astrophysics Data System (ADS)

    Kock, Zeger-Jan; Taconis, Ruurd; Bolhuis, Sanneke; Gravemeijer, Koeno

    2013-04-01

    Many students in secondary schools consider the sciences difficult and unattractive. This applies to physics in particular, a subject in which students attempt to learn and understand numerous theoretical concepts, often without much success. A case in point is the understanding of the concepts current, voltage and resistance in simple electric circuits. In response to these problems, reform initiatives in education strive for a change of the classroom culture, putting emphasis on more authentic contexts and student activities containing elements of inquiry. The challenge then becomes choosing and combining these elements in such a manner that they foster an understanding of theoretical concepts. In this article we reflect on data collected and analyzed from a series of 12 grade 9 physics lessons on simple electric circuits. Drawing from a theoretical framework based on individual (conceptual change based) and socio-cultural views on learning, instruction was designed addressing known conceptual problems and attempting to create a physics (research) culture in the classroom. As the success of the lessons was limited, the focus of the study became to understand which inherent characteristics of inquiry based instruction complicate the process of constructing conceptual understanding. From the analysis of the data collected during the enactment of the lessons three tensions emerged: the tension between open inquiry and student guidance, the tension between students developing their own ideas and getting to know accepted scientific theories, and the tension between fostering scientific interest as part of a scientific research culture and the task oriented school culture. An outlook will be given on the implications for science lessons.

  9. Mathematics and Astronomy: Inquire Based Scientific Education at School

    NASA Astrophysics Data System (ADS)

    de Castro, Ana I. Gómez

    2010-10-01

    Mathematics is the language of science; however, in secondary and high school education, students are not made aware of the strong implications behind this statement. This is partially because mathematical training and the modelling of nature are not taught together. Astronomy provides firm scientific grounds for this joint training: the mathematics needed is simple, the data can be acquired with simple instrumentation anywhere on the planet, and the physics is rich, with a broad range of levels. In addition, astronomy and space exploration are extremely appealing to young (14-17 year old) students, helping to motivate them to study science by doing science, i.e. to introduce Inquiry Based Scientific Education (IBSE). Since 1997 a global consortium has been developed to introduce IBSE techniques in secondary/high school education on a global scale: the Global Hands-On Universe association (www.globalhou.org), which uses the astronomical universe as a training lab. This contribution is a brief update on the current activities of the HOU consortium. Relevant URLs: www.globalhou.org, www.euhou.net, www.houspain.com.

  10. Biology meets physics: Reductionism and multi-scale modeling of morphogenesis.

    PubMed

    Green, Sara; Batterman, Robert

    2017-02-01

    A common reductionist assumption is that macro-scale behaviors can be described "bottom-up" if only sufficient details about lower-scale processes are available. The view that an "ideal" or "fundamental" physics would be sufficient to explain all macro-scale phenomena has been met with criticism from philosophers of biology. Specifically, scholars have pointed to the impossibility of deducing biological explanations from physical ones, and to the irreducible nature of distinctively biological processes such as gene regulation and evolution. This paper takes a step back in asking whether bottom-up modeling is feasible even when modeling simple physical systems across scales. By comparing examples of multi-scale modeling in physics and biology, we argue that the "tyranny of scales" problem presents a challenge to reductive explanations in both physics and biology. The problem refers to the scale-dependency of physical and biological behaviors that forces researchers to combine different models relying on different scale-specific mathematical strategies and boundary conditions. Analyzing the ways in which different models are combined in multi-scale modeling also has implications for the relation between physics and biology. Contrary to the assumption that physical science approaches provide reductive explanations in biology, we exemplify how inputs from physics often reveal the importance of macro-scale models and explanations. We illustrate this through an examination of the role of biomechanical modeling in developmental biology. In such contexts, the relation between models at different scales and from different disciplines is neither reductive nor completely autonomous, but interdependent. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. A comparison of numerical and machine-learning modeling of soil water content with limited input data

    NASA Astrophysics Data System (ADS)

    Karandish, Fatemeh; Šimůnek, Jiří

    2016-12-01

    Soil water content (SWC) is a key factor in optimizing the usage of water resources in agriculture since it provides information to make an accurate estimation of crop water demand. Methods for predicting SWC that have simple data requirements are needed to achieve an optimal irrigation schedule, especially for various water-saving irrigation strategies that are required to resolve both food and water security issues under conditions of water shortages. Thus, a two-year field investigation was carried out to provide a dataset to compare the effectiveness of HYDRUS-2D, a physically-based numerical model, with various machine-learning models, including Multiple Linear Regressions (MLR), Adaptive Neuro-Fuzzy Inference Systems (ANFIS), and Support Vector Machines (SVM), for simulating time series of SWC data under water stress conditions. SWC was monitored using TDRs during the maize growing seasons of 2010 and 2011. Eight combinations of six, simple, independent parameters, including pan evaporation and average air temperature as atmospheric parameters, cumulative growth degree days (cGDD) and crop coefficient (Kc) as crop factors, and water deficit (WD) and irrigation depth (In) as crop stress factors, were adopted for the estimation of SWCs in the machine-learning models. Having Root Mean Square Errors (RMSE) in the range of 0.54-2.07 mm, HYDRUS-2D ranked first for the SWC estimation, while the ANFIS and SVM models with input datasets of cGDD, Kc, WD and In ranked next with RMSEs ranging from 1.27 to 1.9 mm and mean bias errors of -0.07 to 0.27 mm, respectively. However, the MLR models did not perform well for SWC forecasting, mainly due to non-linear changes of SWCs under the irrigation process. The results demonstrated that despite requiring only simple input data, the ANFIS and SVM models could be favorably used for SWC predictions under water stress conditions, especially when there is a lack of data. However, process-based numerical models are undoubtedly a better choice for predicting SWCs with lower uncertainties when required data are available, and thus for designing water saving strategies for agriculture and for other environmental applications requiring estimates of SWCs.
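
    As a rough illustration of the machine-learning side of such a comparison (not the authors' exact configuration), an SVM regression on a small table of crop and atmospheric predictors could be set up as follows; the feature values and targets are hypothetical placeholders.

    ```python
    # Illustrative SVM regression for soil water content from simple predictors
    # (cumulative growing degree days, crop coefficient, water deficit, irrigation depth).
    # Feature values and targets below are hypothetical placeholders.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    X = np.array([[150, 0.4, 10, 0],      # [cGDD, Kc, WD (mm), In (mm)]
                  [400, 0.8, 25, 30],
                  [800, 1.1, 5, 50],
                  [1200, 0.9, 40, 20]], dtype=float)
    y = np.array([22.0, 18.5, 26.0, 15.0])   # soil water content (mm), hypothetical

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
    model.fit(X, y)
    print(model.predict([[600, 1.0, 15, 25]]))   # predicted SWC for a new day
    ```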

  12. Low dimensional worm-holes

    NASA Astrophysics Data System (ADS)

    Samardzija, Nikola

    1995-01-01

    A simple three-dimensional physical model is proposed to qualitatively address a particular type of dynamics evolving on toroidal structures. In the phase space this dynamics creates the appearance of a worm-hole through which chaotic, quasiperiodic and periodic behaviors are formed. An intriguing topological property of such a system is that it possesses no steady-state solutions. As such, it opens some interesting questions in bifurcation theory. The model also offers a novel qualitative tool for explaining some recently reported experimental and simulation results observed in physics, chemistry and biology.

  13. Bridging the divide: a model-data approach to Polar and Alpine microbiology.

    PubMed

    Bradley, James A; Anesio, Alexandre M; Arndt, Sandra

    2016-03-01

    Advances in microbial ecology in the cryosphere continue to be driven by empirical approaches including field sampling and laboratory-based analyses. Although mathematical models are commonly used to investigate the physical dynamics of Polar and Alpine regions, they are rarely applied in microbial studies. Yet integrating modelling approaches with ongoing observational and laboratory-based work is ideally suited to Polar and Alpine microbial ecosystems given their harsh environmental and biogeochemical characteristics, simple trophic structures, distinct seasonality, often difficult accessibility, geographical expansiveness and susceptibility to accelerated climate changes. In this opinion paper, we explain how mathematical modelling ideally complements field and laboratory-based analyses. We thus argue that mathematical modelling is a powerful tool for the investigation of these extreme environments and that fully integrated, interdisciplinary model-data approaches could help the Polar and Alpine microbiology community address some of the great research challenges of the 21st century (e.g. assessing global significance and response to climate change). However, a better integration of field and laboratory work with model design and calibration/validation, as well as a stronger focus on quantitative information is required to advance models that can be used to make predictions and upscale processes and fluxes beyond what can be captured by observations alone. © FEMS 2016.

  14. Bridging the divide: a model-data approach to Polar and Alpine microbiology

    PubMed Central

    Bradley, James A.; Anesio, Alexandre M.; Arndt, Sandra

    2016-01-01

    Advances in microbial ecology in the cryosphere continue to be driven by empirical approaches including field sampling and laboratory-based analyses. Although mathematical models are commonly used to investigate the physical dynamics of Polar and Alpine regions, they are rarely applied in microbial studies. Yet integrating modelling approaches with ongoing observational and laboratory-based work is ideally suited to Polar and Alpine microbial ecosystems given their harsh environmental and biogeochemical characteristics, simple trophic structures, distinct seasonality, often difficult accessibility, geographical expansiveness and susceptibility to accelerated climate changes. In this opinion paper, we explain how mathematical modelling ideally complements field and laboratory-based analyses. We thus argue that mathematical modelling is a powerful tool for the investigation of these extreme environments and that fully integrated, interdisciplinary model-data approaches could help the Polar and Alpine microbiology community address some of the great research challenges of the 21st century (e.g. assessing global significance and response to climate change). However, a better integration of field and laboratory work with model design and calibration/validation, as well as a stronger focus on quantitative information is required to advance models that can be used to make predictions and upscale processes and fluxes beyond what can be captured by observations alone. PMID:26832206

  15. Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations

    NASA Astrophysics Data System (ADS)

    Christensen, H. M.; Dawson, A.; Palmer, T.

    2017-12-01

    Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.
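
    The multiplicative idea behind SPPT can be sketched in a few lines: the total parametrised tendency is scaled by (1 + r), where r is a smooth random pattern. The AR(1) time correlation and amplitude below are assumed illustrative values, not the operational ECMWF settings.

    ```python
    # Toy illustration of a multiplicative SPPT-style perturbation:
    # perturbed_tendency = (1 + r) * parametrised_tendency, with r an AR(1) process.
    # Decorrelation time and standard deviation are assumed values, not ECMWF's.
    import numpy as np

    rng = np.random.default_rng(42)
    n_steps, dt = 240, 0.25        # hours; example discretization
    tau, sigma = 6.0, 0.5          # decorrelation time (h) and perturbation std, assumed
    phi = np.exp(-dt / tau)        # AR(1) coefficient

    tendency = 1.0 + 0.2 * np.sin(np.linspace(0, 4 * np.pi, n_steps))  # idealized tendency
    r = np.zeros(n_steps)
    for t in range(1, n_steps):
        r[t] = phi * r[t - 1] + np.sqrt(1 - phi ** 2) * sigma * rng.normal()

    perturbed = (1.0 + r) * tendency
    print(perturbed.mean(), perturbed.std())
    ```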

  16. Contributions of numerical simulation data bases to the physics, modeling and measurement of turbulence

    NASA Technical Reports Server (NTRS)

    Moin, Parviz; Spalart, Philippe R.

    1987-01-01

    The use of simulation data bases for the examination of turbulent flows is an effective research tool. Studies of the structure of turbulence have been hampered by the limited number of probes and the impossibility of measuring all desired quantities. Also, flow visualization is confined to the observation of passive markers with limited field of view and contamination caused by time-history effects. Computer flow fields are a new resource for turbulence research, providing all the instantaneous flow variables in three-dimensional space. Simulation data bases also provide much-needed information for phenomenological turbulence modeling. Three dimensional velocity and pressure fields from direct simulations can be used to compute all the terms in the transport equations for the Reynolds stresses and the dissipation rate. However, only a few, geometrically simple flows have been computed by direct numerical simulation, and the inventory of simulation does not fully address the current modeling needs in complex turbulent flows. The availability of three-dimensional flow fields also poses challenges in developing new techniques for their analysis, techniques based on experimental methods, some of which are used here for the analysis of direct-simulation data bases in studies of the mechanics of turbulent flows.

  17. A simple derivation for amplitude and time period of charged particles in an electrostatic bathtub potential

    NASA Astrophysics Data System (ADS)

    Prathap Reddy, K.

    2016-11-01

    An ‘electrostatic bathtub potential’ is defined and analytical expressions for the time period and amplitude of charged particles in this potential are obtained and compared with simulations. These kinds of potentials are encountered in linear electrostatic ion traps, where the potential along the axis appears like a bathtub. Ion traps are used in basic physics research and mass spectrometry to store ions; these stored ions make oscillatory motion within the confined volume of the trap. Usually these traps are designed and studied using ion optical software, but in this work the bathtub potential is reproduced by making two simple modifications to the harmonic oscillator potential. The addition of a linear ‘k₁|x|’ potential makes the simple harmonic potential curve steeper with a sharper turn at the origin, while the introduction of a finite-length zero potential region at the centre reproduces the flat region of the bathtub curve. This whole exercise of modelling a practical experimental situation in terms of a well-known simple physics problem may generate interest among readers.
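
    A numerical sketch of such a bathtub potential is easy to set up: take a flat region of half-width a at the centre, flanked by walls combining a linear and a quadratic term, and integrate the equation of motion to read off the period as a function of amplitude. The functional form and all parameter values below are assumptions for illustration, not the exact expressions of the article.

    ```python
    # Sketch: period of a particle in an assumed "bathtub" potential
    # V(x) = 0 for |x| <= a, and V(x) = k1*(|x|-a) + 0.5*k2*(|x|-a)**2 otherwise.
    # The functional form and all parameters are illustrative assumptions.
    import numpy as np

    a, k1, k2, m = 1.0, 1.0, 2.0, 1.0   # flat half-width, wall strengths, particle mass

    def force(x):
        s = abs(x) - a
        return 0.0 if s <= 0 else -np.sign(x) * (k1 + k2 * s)

    def period(x0, dt=1e-4):
        """Release the particle from rest at amplitude x0 and time one full oscillation."""
        x, v, t, prev_v, sign_changes = x0, 0.0, 0.0, 0.0, 0
        while sign_changes < 2:              # velocity changes sign twice per period
            v += dt * force(x) / m           # symplectic Euler step
            x += dt * v
            t += dt
            if (prev_v < 0 and v >= 0) or (prev_v > 0 and v <= 0):
                sign_changes += 1
            prev_v = v
        return t

    for x0 in (1.5, 2.0, 3.0):
        print(f"amplitude {x0}: period ~ {period(x0):.3f}")
    ```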

  18. Inquiry-Based Pre-Engineering Activities for K-4 Students

    ERIC Educational Resources Information Center

    Perrin, Michele

    2004-01-01

    This paper uses inquiry-based learning to introduce primary students to the concepts and terminology found in four introductory engineering courses: Differential Equations, Circuit Analysis, Thermodynamics, and Dynamics. Simple electronic sensors coupled with everyday objects, such as a troll doll, demonstrate and reinforce the physical principles…

  19. A Flush Toilet Model for the Transistor

    NASA Astrophysics Data System (ADS)

    Organtini, Giovanni

    2012-04-01

    In introductory physics textbooks, the working principles of diodes are usually described well and in a relatively simple manner. In our experience, they are well understood by students. Even when no formal derivation of the physical laws governing the current flow through a diode is given, the use of this device as a check valve is easily accepted. This is not true for transistors. In most textbooks the behavior of a transistor is given without formal explanation. When the amplification is computed, students for some reason have difficulty identifying the basic physical mechanisms that give rise to the effect. In this paper we give a simple and captivating illustration of the working principles of a transistor as an amplifier, tailored to high school students with almost no background in electronics or modern physics. We assume that the target audience is familiar with the idea that a diode works as a check valve for currents. The emphasis of the lecture is on illustrating the physical principles governing the behavior of a transistor, rather than on a formal description of the processes leading to amplification.

  20. A Molecular Dynamic Modeling of Hemoglobin-Hemoglobin Interactions

    NASA Astrophysics Data System (ADS)

    Wu, Tao; Yang, Ye; Sheldon Wang, X.; Cohen, Barry; Ge, Hongya

    2010-05-01

    In this paper, we present a study of hemoglobin-hemoglobin interactions using model reduction methods. We begin with a simple spring-mass system with given parameters (mass and stiffness). With this known system, we compare the mode superposition method with Singular Value Decomposition (SVD)-based Principal Component Analysis (PCA). Through PCA we are able to recover the principal direction of this system, namely the model direction. This model direction is matched with the eigenvector derived from mode superposition analysis. The same technique is then implemented in a much more complicated hemoglobin-hemoglobin molecular interaction model, in which thousands of atoms in hemoglobin molecules are coupled with tens of thousands of T3 water molecule models. In this model, complex inter-atomic and inter-molecular potentials are replaced by nonlinear springs. We employ the same method to obtain the most significant modes of this complex dynamical system and their frequencies. More complex physical phenomena can then be further studied with these coarse-grained models.
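
    The comparison of PCA with mode superposition described above can be illustrated on a toy mass-spring system: the eigenvectors of M^-1 K give the normal modes, while an SVD of a simulated displacement trajectory gives the principal directions. Everything below (system size, stiffnesses, excitation) is an assumed toy setup, not the hemoglobin model.

    ```python
    # Toy comparison of normal modes (eigenvectors of M^-1 K) with SVD/PCA of a trajectory
    # for a two-mass, three-spring chain with fixed ends. Parameters are illustrative.
    import numpy as np

    m1 = m2 = 1.0
    k = 1.0
    M = np.diag([m1, m2])
    K = np.array([[2 * k, -k],
                  [-k, 2 * k]])          # stiffness matrix for the fixed-fixed chain

    # Normal modes from the eigenproblem (M^-1 K) v = w^2 v (symmetric here, equal masses)
    w2, modes = np.linalg.eigh(np.linalg.inv(M) @ K)

    # Simulate a trajectory dominated by the lowest mode, plus a bit of the other
    t = np.linspace(0, 100, 5000)
    traj = (1.0 * np.outer(np.cos(np.sqrt(w2[0]) * t), modes[:, 0])
            + 0.1 * np.outer(np.cos(np.sqrt(w2[1]) * t), modes[:, 1]))

    # PCA via SVD of the mean-centred trajectory
    U, S, Vt = np.linalg.svd(traj - traj.mean(axis=0), full_matrices=False)
    print("lowest normal mode :", modes[:, 0])
    print("first principal dir:", Vt[0])   # should match the lowest mode up to sign
    ```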

  1. Is Jupiter's magnetosphere like a pulsar's or earth's?

    NASA Technical Reports Server (NTRS)

    Kennel, C. F.; Coroniti, F. V.

    1974-01-01

    The application of pulsar physics to determine the magnetic structure of Jupiter's outer magnetosphere is discussed. A variety of theoretical models are developed to illuminate broad areas of consistency and conflict between theory and experiment. Two possible models of Jupiter's magnetosphere, a pulsar-like radial outflow model and an earth-like convection model, are examined. A compilation of the simple order-of-magnitude estimates derivable from the various models is provided.

  2. Modeling the Stress Strain Behavior of Woven Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Morscher, Gregory N.

    2006-01-01

    Woven SiC fiber reinforced SiC matrix composites represent one of the most mature composite systems to date. Future components fabricated out of these woven ceramic matrix composites are expected to vary in shape, curvature, architecture, and thickness. The design of future components using woven ceramic matrix composites necessitates a modeling approach that can account for these variations which are physically controlled by local constituent contents and architecture. Research over the years supported primarily by NASA Glenn Research Center has led to the development of simple mechanistic-based models that can describe the entire stress-strain curve for composite systems fabricated with chemical vapor infiltrated matrices and melt-infiltrated matrices for a wide range of constituent content and architecture. Several examples will be presented that demonstrate the approach to modeling which incorporates a thorough understanding of the stress-dependent matrix cracking properties of the composite system.

  3. Critical length scale controls adhesive wear mechanisms

    PubMed Central

    Aghababaei, Ramin; Warner, Derek H.; Molinari, Jean-Francois

    2016-01-01

    The adhesive wear process remains one of the least understood areas of mechanics. While it has long been established that adhesive wear is a direct result of contacting surface asperities, an agreed-upon understanding of how contacting asperities lead to wear debris particles has remained elusive. This has restricted adhesive wear prediction to empirical models with limited transferability. Here we show that discrepant observations and predictions of two distinct adhesive wear mechanisms can be reconciled into a unified framework. Using atomistic simulations with model interatomic potentials, we reveal a transition in the asperity wear mechanism when contact junctions fall below a critical length scale. A simple analytic model is formulated to predict the transition in both the simulation results and experiments. This new understanding may help expand the use of computer modelling to explore adhesive wear processes and to advance physics-based wear laws without empirical coefficients. PMID:27264270

  4. Simple Models for Nanocrystal Growth

    NASA Astrophysics Data System (ADS)

    Jensen, Pablo

    Growth of new materials with tailored properties is one of the most active research directions for physicists. As pointed out by Silvan Schweber in his brilliant analysis of the evolution of physics after World War II [1] "An important transformation has taken place in physics: As had previously happened in chemistry, an ever larger fraction of the efforts in the field were being devoted to the study of novelty rather than to the elucidation of fundamental laws and interactions […] The successes of quantum mechanics at the atomic level immediately made it clear to the more perspicacious physicists that the laws behind the phenomena had been apprehended, that they could therefore control the behavior of simple macroscopic systems and, more importantly, that they could create new structures, new objects and new phenomena […] Condensed matter physics has indeed become the study of systems that have never before existed. Phenomena such as superconductivity are genuine novelties in the universe."

  5. A SIMPLE CELLULAR AUTOMATON MODEL FOR HIGH-LEVEL VEGETATION DYNAMICS

    EPA Science Inventory

    We have produced a simple two-dimensional (ground-plan) cellular automata model of vegetation dynamics specifically to investigate high-level community processes. The model is probabilistic, with individual plant behavior determined by physiologically-based rules derived from a w...

  6. Simple and Hierarchical Models for Stochastic Test Misgrading.

    ERIC Educational Resources Information Center

    Wang, Jianjun

    1993-01-01

    Test misgrading is treated as a stochastic process. The expected number of misgradings, inter-occurrence time of misgradings, and waiting time for the "n"th misgrading are discussed based on a simple Poisson model and a hierarchical Beta-Poisson model. Examples of model construction are given. (SLD)
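
    Under the simple Poisson model mentioned in the record, the quantities of interest have closed forms: the expected number of misgradings in exposure t is lambda*t, inter-occurrence times are exponential with mean 1/lambda, and the waiting time for the n-th misgrading follows a gamma distribution. The short sketch below evaluates these under an assumed rate, purely as an illustration.

    ```python
    # Illustrative Poisson-process calculations for test misgrading,
    # with an assumed misgrading rate lambda (per 1000 items graded).
    from scipy import stats

    lam = 0.8   # assumed misgradings per 1000 items
    t = 5.0     # exposure: 5000 items graded (in units of 1000)

    expected = lam * t                                    # expected number of misgradings
    p_none = stats.poisson.pmf(0, lam * t)                # chance of no misgrading at all
    mean_gap = 1.0 / lam                                  # mean inter-occurrence "time"
    wait_3rd = stats.gamma(a=3, scale=1.0 / lam).mean()   # mean waiting time for 3rd misgrading

    print(expected, p_none, mean_gap, wait_3rd)
    ```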

  7. Contribution of ionospheric monitoring to tsunami warning: results from a benchmark exercise

    NASA Astrophysics Data System (ADS)

    Rolland, L.; Makela, J. J.; Drob, D. P.; Occhipinti, G.; Lognonne, P. H.; Kherani, E. A.; Sladen, A.; Rakoto, V.; Grawe, M.; Meng, X.; Komjathy, A.; Liu, T. J. Y.; Astafyeva, E.; Coisson, P.; Budzien, S. A.

    2016-12-01

    Deep ocean pressure sensors have proven very effective at quantifying tsunami waves in real time. Yet the cost of these sensors and their maintenance strongly limits the extensive deployment of dense networks, so a complete observation of the tsunami wave-field has not been possible so far. In the last decade, imprints of moderate to large transpacific tsunami wave-fields have been registered in the ionosphere through the atmospheric internal gravity wave coupled with the tsunami during its propagation. Those ionospheric observations could provide an additional description of the phenomenon with high spatial coverage. Ionospheric observations have been supported by numerical modeling of the ocean-atmosphere-ionosphere coupling, developed by different groups. We present here the first results of a cross-validation exercise aimed at testing various forward simulation techniques. In particular, we compare different approaches for modeling tsunami-induced gravity waves, including a pseudo-spectral method, finite difference schemes, a fully coupled normal modes modeling approach, a Fourier-Laplace compressible ray-tracing solution, and a self-consistent, three-dimensional physics-based wave perturbation (WP) model based on the augmented Global Thermosphere-Ionosphere Model (WP-GITM). These models and other existing models use either a realistic sea-surface motion input model or a simple analytic model. We discuss the advantages and drawbacks of the different methods and set up common inputs to the models so that meaningful comparisons of model outputs can be made to highlight physical conclusions and understanding. Specifically, we highlight how the different models reproduce the observations, or differ, for two study cases: the ionospheric observations related to the 2012 Mw7.7 Haida Gwaii, Canada, and 2015 Mw8.3 Illapel, Chile, events. Ultimately, we explore the possibility of computing a transfer function in order to convert ionospheric perturbations directly into tsunami height estimates.

  8. A Physical Parameterization of Snow Albedo for Use in Climate Models.

    NASA Astrophysics Data System (ADS)

    Marshall, Susan Elaine

    The albedo of a natural snowcover is highly variable, ranging from 90 percent for clean, new snow to 30 percent for old, dirty snow. This range in albedo represents a difference in surface energy absorption of 10 to 70 percent of incident solar radiation. Most general circulation models (GCMs) fail to calculate the surface snow albedo accurately, yet the results of these models are sensitive to the assumed value of the snow albedo. This study replaces the current simple empirical parameterizations of snow albedo with a physically-based parameterization which is accurate (within +/- 3% of theoretical estimates) yet efficient to compute. The parameterization is designed as a FORTRAN subroutine (called SNOALB) which can be easily implemented into model code. The subroutine requires less than 0.02 seconds of computer time (CRAY X-MP) per call and adds only one new parameter to the model calculations, the snow grain size. The snow grain size can be calculated according to one of the two methods offered in this thesis. All other input variables to the subroutine are available from a climate model. The subroutine calculates a visible, near-infrared and solar (0.2-5 μm) snow albedo and offers a choice of two wavelengths (0.7 and 0.9 μm) at which the solar spectrum is separated into the visible and near-infrared components. The parameterization is incorporated into the National Center for Atmospheric Research (NCAR) Community Climate Model, version 1 (CCM1), and the results of a five-year, seasonal cycle, fixed hydrology experiment are compared to the current model snow albedo parameterization. The results show the SNOALB albedos to be comparable to the old CCM1 snow albedos for current climate conditions, with generally higher visible and lower near-infrared snow albedos using the new subroutine. However, this parameterization offers greater predictability for climate change experiments outside the range of current snow conditions because it is physically-based and not tuned to current empirical results.

  9. Making a Fun Cartesian Diver: A Simple Project to Engage Kinaesthetic Learners

    ERIC Educational Resources Information Center

    Amir, Nazir; Subramaniam, R.

    2007-01-01

    Students in the normal technical stream are generally less academically inclined. Teaching physics to them can be a challenge. A possible way to engage such kinaesthetic learners is to encourage them to fabricate physics-based toys. The activity described in this article shows how a group of three students were able to come up with a creative…

  10. Shannon information entropy in heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Ma, Chun-Wang; Ma, Yu-Gang

    2018-03-01

    The general idea of information entropy provided by C.E. Shannon "hangs over everything we do" and can be applied to a great variety of problems once the connection between a distribution and the quantities of interest is found. The Shannon information entropy essentially quantifies the information carried by a quantity through its specific distribution, and information-entropy-based methods have been developed in depth in many scientific areas, including physics. The dynamical nature of the heavy-ion collision (HIC) process makes the nuclear matter and its evolution difficult and complex to study; Shannon information entropy theory can provide new methods and observables to understand the physical phenomena both theoretically and experimentally. To better understand HIC processes, the main characteristics of typical models, including quantum molecular dynamics models, thermodynamic models, and statistical models, are briefly introduced. Typical applications of Shannon information theory in HICs are collected, covering the chaotic behavior in the branching process of hadron collisions, the liquid-gas phase transition in HICs, and the isobaric difference scaling phenomenon for intermediate-mass fragments produced in HICs of neutron-rich systems. Even though the present applications in heavy-ion collision physics are still relatively simple, they shed light on the key questions being sought. It is suggested that information entropy methods be further developed in nuclear reaction models, and that new analysis methods be developed to study the properties of nuclear matter in HICs, especially the evolution of the dynamical system.
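
    For reference, the Shannon information entropy of a discrete distribution is H = -sum_i p_i ln p_i. The snippet below evaluates it for an arbitrary example distribution (e.g. a normalized fragment-yield distribution), purely as an illustration of the quantity the review builds on; the counts are hypothetical.

    ```python
    # Shannon information entropy H = -sum_i p_i * ln(p_i) of a discrete distribution.
    # The example counts are arbitrary placeholders (e.g. fragment yields per charge bin).
    import numpy as np

    counts = np.array([120, 80, 40, 20, 10, 5], dtype=float)   # hypothetical yields
    p = counts / counts.sum()                                   # normalize to probabilities
    H = -np.sum(p * np.log(p))                                  # natural-log entropy (nats)
    print(f"Shannon entropy: {H:.3f} nats")
    ```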

  11. Scratch as a Computational Modelling Tool for Teaching Physics

    ERIC Educational Resources Information Center

    Lopez, Victor; Hernandez, Maria Isabel

    2015-01-01

    The Scratch online authoring tool, which features a simple programming language that has been adapted to primary and secondary students, is being used more and more in schools as it offers students and teachers the opportunity to use a tool to build scientific models and evaluate their behaviour, just as can be done with computational modelling…

  12. Nature of the optical band shapes in polymethine dyes and H-aggregates: dozy chaos and excitons. Comparison with dimers, H*- and J-aggregates.

    PubMed

    Egorov, Vladimir V

    2017-05-01

    Results on the theoretical explanation of the shape of optical bands in polymethine dyes, their dimers and aggregates are summarized. The theoretical dependence of the shape of optical bands for the dye monomers in the vinylogous series in line with a change in the solvent polarity is considered. A simple physical (analytical) model of the shape of optical absorption bands in H-aggregates of polymethine dyes is developed based on taking the dozy-chaos dynamics of the transient state and the Frenkel exciton effect in the theory of molecular quantum transitions into account. As an example, the details of the experimental shape of one of the known H-bands are well reproduced by this analytical model under the assumption that the main optical chromophore of H-aggregates is a tetramer resulting from the two most probable processes of inelastic binary collisions in sequence: first, monomers between themselves, and then, between the resulting dimers. The obtained results indicate that in contrast with the compact structure of J-aggregates (brickwork structure), the structure of H-aggregates is not the compact pack-of-cards structure, as stated in the literature, but a loose alternate structure. Based on this theoretical model, a simple general (analytical) method for treating the more complex shapes of optical bands in polymethine dyes in comparison with the H-band under consideration is proposed. This method mirrors the physical process of molecular aggregates forming in liquid solutions: aggregates are generated in the most probable processes of inelastic multiple binary collisions between polymethine species generally differing in complexity. The results obtained are given against a background of the theoretical results on the shape of optical bands in polymethine dyes and their aggregates (dimers, H*- and J-aggregates) previously obtained by V.V.E.

  13. Nature of the optical band shapes in polymethine dyes and H-aggregates: dozy chaos and excitons. Comparison with dimers, H*- and J-aggregates

    PubMed Central

    2017-01-01

    Results on the theoretical explanation of the shape of optical bands in polymethine dyes, their dimers and aggregates are summarized. The theoretical dependence of the shape of optical bands for the dye monomers in the vinylogous series in line with a change in the solvent polarity is considered. A simple physical (analytical) model of the shape of optical absorption bands in H-aggregates of polymethine dyes is developed based on taking the dozy-chaos dynamics of the transient state and the Frenkel exciton effect in the theory of molecular quantum transitions into account. As an example, the details of the experimental shape of one of the known H-bands are well reproduced by this analytical model under the assumption that the main optical chromophore of H-aggregates is a tetramer resulting from the two most probable processes of inelastic binary collisions in sequence: first, monomers between themselves, and then, between the resulting dimers. The obtained results indicate that in contrast with the compact structure of J-aggregates (brickwork structure), the structure of H-aggregates is not the compact pack-of-cards structure, as stated in the literature, but a loose alternate structure. Based on this theoretical model, a simple general (analytical) method for treating the more complex shapes of optical bands in polymethine dyes in comparison with the H-band under consideration is proposed. This method mirrors the physical process of molecular aggregates forming in liquid solutions: aggregates are generated in the most probable processes of inelastic multiple binary collisions between polymethine species generally differing in complexity. The results obtained are given against a background of the theoretical results on the shape of optical bands in polymethine dyes and their aggregates (dimers, H*- and J-aggregates) previously obtained by V.V.E. PMID:28572984

  14. Nature of the optical band shapes in polymethine dyes and H-aggregates: dozy chaos and excitons. Comparison with dimers, H*- and J-aggregates

    NASA Astrophysics Data System (ADS)

    Egorov, Vladimir V.

    2017-05-01

    Results on the theoretical explanation of the shape of optical bands in polymethine dyes, their dimers and aggregates are summarized. The theoretical dependence of the shape of optical bands for the dye monomers in the vinylogous series in line with a change in the solvent polarity is considered. A simple physical (analytical) model of the shape of optical absorption bands in H-aggregates of polymethine dyes is developed based on taking the dozy-chaos dynamics of the transient state and the Frenkel exciton effect in the theory of molecular quantum transitions into account. As an example, the details of the experimental shape of one of the known H-bands are well reproduced by this analytical model under the assumption that the main optical chromophore of H-aggregates is a tetramer resulting from the two most probable processes of inelastic binary collisions in sequence: first, monomers between themselves, and then, between the resulting dimers. The obtained results indicate that in contrast with the compact structure of J-aggregates (brickwork structure), the structure of H-aggregates is not the compact pack-of-cards structure, as stated in the literature, but a loose alternate structure. Based on this theoretical model, a simple general (analytical) method for treating the more complex shapes of optical bands in polymethine dyes in comparison with the H-band under consideration is proposed. This method mirrors the physical process of molecular aggregates forming in liquid solutions: aggregates are generated in the most probable processes of inelastic multiple binary collisions between polymethine species generally differing in complexity. The results obtained are given against a background of the theoretical results on the shape of optical bands in polymethine dyes and their aggregates (dimers, H*- and J-aggregates) previously obtained by V.V.E.

  15. Development Of A Data Assimilation Capability For RAPID

    NASA Astrophysics Data System (ADS)

    Emery, C. M.; David, C. H.; Turmon, M.; Hobbs, J.; Allen, G. H.; Famiglietti, J. S.

    2017-12-01

    The global decline of in situ observations, together with the increasing ability to monitor surface water from space, motivates the creation of data assimilation algorithms that merge computer models and space-based observations to produce consistent estimates of terrestrial hydrology that fill the spatiotemporal gaps in observations. RAPID is a routing model based on the Muskingum method that is capable of estimating river streamflow over large scales with a relatively short computing time. This model requires only limited inputs: a reach-based river network, and lateral surface and subsurface flow into the rivers. The relatively simple model physics imply that RAPID simulations could be significantly improved by including a data assimilation capability. Here we present the early development of such a data assimilation approach in RAPID. Given the linear, matrix-based structure of the model, we chose to apply a direct Kalman filter, which preserves the high computational speed. We correct the simulated streamflows by assimilating streamflow observations, and our early results demonstrate the feasibility of the approach. Additionally, the scarcity of in situ gauges at continental scales motivates the application of our new data assimilation scheme to altimetry measurements from existing (e.g. EnviSat, Jason 2) and upcoming satellite missions (e.g. SWOT), with the ultimate goal of applying the scheme globally.
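
    The core of such a direct Kalman filter update can be written compactly: given a forecast of reach streamflows x with error covariance P, observations y with error covariance R, and an observation operator H selecting the observed reaches, the analysis is x + K(y - Hx) with gain K = P H^T (H P H^T + R)^-1. The sketch below is a generic illustration with made-up numbers, not the RAPID implementation.

    ```python
    # Generic Kalman filter analysis step for river streamflow, purely illustrative.
    # x: forecast streamflow for 4 reaches; two reaches are observed by gauges.
    import numpy as np

    x = np.array([120.0, 85.0, 60.0, 40.0])     # forecast streamflow (m^3/s), made up
    P = np.diag([100.0, 80.0, 60.0, 40.0])      # forecast error covariance, made up
    H = np.array([[1, 0, 0, 0],
                  [0, 0, 1, 0]], dtype=float)   # reaches 1 and 3 are gauged
    y = np.array([135.0, 52.0])                 # observed streamflow (m^3/s)
    R = np.diag([25.0, 25.0])                   # observation error covariance

    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x_analysis = x + K @ (y - H @ x)
    P_analysis = (np.eye(4) - K @ H) @ P
    print(x_analysis)
    ```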

  16. The distribution of density in supersonic turbulence

    NASA Astrophysics Data System (ADS)

    Squire, Jonathan; Hopkins, Philip F.

    2017-11-01

    We propose a model for the statistics of the mass density in supersonic turbulence, which plays a crucial role in star formation and the physics of the interstellar medium (ISM). The model is derived by considering the density to be arranged as a collection of strong shocks of width ~M^{-2}, where M is the turbulent Mach number. With two physically motivated parameters, the model predicts all density statistics for M > 1 turbulence: the density probability distribution and its intermittency (deviation from lognormality), the density variance-Mach number relation, power spectra and structure functions. For the proposed model parameters, reasonable agreement is seen between model predictions and numerical simulations, albeit within the large uncertainties associated with current simulation results. More generally, the model could provide a useful framework for more detailed analysis of future simulations and observational data. Due to the simple physical motivations for the model in terms of shocks, it is straightforward to generalize to more complex physical processes, which will be helpful in future more detailed applications to the ISM. We see good qualitative agreement between such extensions and recent simulations of non-isothermal turbulence.
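
    As a baseline for comparison, the widely used lognormal approximation for isothermal supersonic turbulence relates the variance of s = ln(rho/rho0) to the Mach number through sigma_s^2 = ln(1 + b^2 M^2), with the mean fixed at -sigma_s^2/2 by mass conservation; the model proposed in this record predicts deviations from this form. The snippet below only evaluates the lognormal baseline for an assumed forcing parameter b, as an illustration.

    ```python
    # Baseline lognormal density PDF for supersonic isothermal turbulence:
    # s = ln(rho/rho0), sigma_s^2 = ln(1 + b^2 M^2), mean_s = -sigma_s^2 / 2.
    # b is the forcing parameter (assumed value); the paper's model deviates from this.
    import numpy as np

    b, M = 0.4, 5.0                      # assumed forcing parameter and Mach number
    sigma2 = np.log(1.0 + (b * M) ** 2)
    mean_s = -0.5 * sigma2

    s = np.linspace(-6, 4, 9)
    pdf = np.exp(-(s - mean_s) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
    for si, pi in zip(s, pdf):
        print(f"s = {si:5.2f}   p(s) = {pi:.4f}")
    ```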

  17. Estimating fractional vegetation cover and the vegetation index of bare soil and highly dense vegetation with a physically based method

    NASA Astrophysics Data System (ADS)

    Song, Wanjuan; Mu, Xihan; Ruan, Gaiyan; Gao, Zhan; Li, Linyuan; Yan, Guangjian

    2017-06-01

    Normalized difference vegetation index (NDVI) values of highly dense vegetation (NDVIv) and bare soil (NDVIs), identified as the key parameters for Fractional Vegetation Cover (FVC) estimation, are usually obtained with empirical statistical methods. However, it is often difficult to obtain reasonable values of NDVIv and NDVIs at a coarse resolution (e.g., 1 km), or in arid, semiarid, and evergreen areas. The uncertainty of estimated NDVIs and NDVIv can cause substantial errors in FVC estimations when a simple linear mixture model is used. To address this problem, this paper proposes a physically based method. The leaf area index (LAI) and directional NDVI are introduced in a gap fraction model and a linear mixture model for FVC estimation to calculate NDVIv and NDVIs. The model incorporates the Moderate Resolution Imaging Spectroradiometer (MODIS) Bidirectional Reflectance Distribution Function (BRDF) model parameters product (MCD43B1) and the LAI product, which are convenient to acquire. Two types of evaluation experiments are designed: 1) with data simulated by a canopy radiative transfer model and 2) with satellite observations. The root-mean-square deviation (RMSD) for simulated data is less than 0.117, depending on the type of noise added to the data. In the real data experiment, the RMSD for cropland is 0.127, for grassland 0.075, and for forest 0.107. The experimental areas respectively lack fully vegetated and non-vegetated pixels at 1 km resolution. Consequently, a relatively large uncertainty is found when using the statistical methods, and the RMSD ranges from 0.110 to 0.363 based on the real data. The proposed method is convenient for producing NDVIv and NDVIs maps for FVC estimation on regional and global scales.
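
    For reference, once NDVIv and NDVIs are known, the linear mixture model mentioned above gives the fractional vegetation cover directly as FVC = (NDVI - NDVIs) / (NDVIv - NDVIs). The snippet below applies this standard formula with assumed endmember values; deriving NDVIv and NDVIs from the BRDF and LAI products, as the paper does, is not shown.

    ```python
    # Linear mixture model for fractional vegetation cover:
    # FVC = (NDVI - NDVIs) / (NDVIv - NDVIs), clipped to [0, 1].
    # Endmember values below are assumed for illustration, not derived from MCD43B1/LAI.
    import numpy as np

    ndvi_s, ndvi_v = 0.12, 0.86                 # bare-soil and dense-vegetation NDVI (assumed)
    ndvi = np.array([0.10, 0.35, 0.60, 0.90])   # example pixel NDVI values

    fvc = np.clip((ndvi - ndvi_s) / (ndvi_v - ndvi_s), 0.0, 1.0)
    print(fvc)
    ```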

  18. Pre-Service and In-Service Physics Teachers' Ideas about Simple Electric Circuits

    ERIC Educational Resources Information Center

    Kucukozer, Huseyin; Demirci, Neset

    2008-01-01

    The aim of the study is to determine pre-service and high school physics teachers' ideas about simple electric circuits. In this study, a test containing eight questions related to simple electric circuits was given to the pre-service physics teachers (32 subjects) that had graduated from Balikesir University, Necatibey Faculty of Education, the…

  19. Kinetic Theory and Simulation of Single-Channel Water Transport

    NASA Astrophysics Data System (ADS)

    Tajkhorshid, Emad; Zhu, Fangqiang; Schulten, Klaus

    Water translocation between various compartments of a system is a fundamental process in the biology of all living cells and in a wide variety of technological problems. The process is of interest in different fields of physiology, physical chemistry, and physics, and many scientists have tried to describe the process through physical models. Owing to advances in computer simulation of molecular processes at an atomic level, water transport has been studied in a variety of molecular systems ranging from biological water channels to artificial nanotubes. While simulations have successfully described various kinetic aspects of water transport, offering a simple, unified model to describe trans-channel translocation of water has turned out to be a nontrivial task.

  20. Oscillations and Multiple Equilibria in Microvascular Blood Flow.

    PubMed

    Karst, Nathaniel J; Storey, Brian D; Geddes, John B

    2015-07-01

    We investigate the existence of oscillatory dynamics and multiple steady-state flow rates in a network with a simple topology and in vivo microvascular blood flow constitutive laws. Unlike many previous analytic studies, we employ the most biologically relevant models of the physical properties of whole blood. Through a combination of analytic and numeric techniques, we predict in a series of two-parameter bifurcation diagrams a range of dynamical behaviors, including multiple equilibria flow configurations, simple oscillations in volumetric flow rate, and multiple coexistent limit cycles at physically realizable parameters. We show that complexity in network topology is not necessary for complex behaviors to arise and that nonlinear rheology, in particular the plasma skimming effect, is sufficient to support oscillatory dynamics similar to those observed in vivo.
