Sample records for accurate physical model

  1. Towards more accurate wind and solar power prediction by improving NWP model physics

    NASA Astrophysics Data System (ADS)

    Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo

    2014-05-01

    The growing importance and successive expansion of renewable energies raise new challenges for decision makers, economists, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce forecast errors and provide an a priori estimate of the remaining uncertainties associated with the large share of weather-dependent power sources. For this purpose it is essential to optimize NWP model forecasts with respect to those prognostic variables which are relevant for wind and solar power plants. An improved weather forecast serves as the basis for sophisticated power forecasts; in consequence, energy trading on the stock market can be well timed and electrical grid stability maintained. The German Weather Service (DWD) is currently involved in two projects concerning research in the field of renewable energy, namely ORKA*) and EWeLiNE**). Whereas the latter is in collaboration with the Fraunhofer Institute (IWES), the project ORKA is led by energy & meteo systems (emsys). Both cooperate with German transmission system operators. The goal of the projects is to improve wind and photovoltaic (PV) power forecasts by combining optimized NWP and enhanced power forecast models. In this context, the German Weather Service aims to improve its model system, including the ensemble forecasting system, by working on data assimilation, model physics and statistical post-processing. This presentation focuses on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. First steps leading to improved physical parameterization schemes within the NWP model are presented. Wind mast measurements reaching up to 200 m above ground are used to estimate the (NWP) wind forecast error at heights relevant for wind energy plants. One particular problem is the daily cycle in wind speed. The transition from stable stratification during

  2. Accurate modelling of unsteady flows in collapsible tubes.

    PubMed

    Marchandise, Emilie; Flaud, Patrice

    2010-01-01

    The context of this paper is the development of a general and efficient numerical haemodynamic tool to help clinicians and researchers understand physiological flow phenomena. We propose an accurate one-dimensional Runge-Kutta discontinuous Galerkin (RK-DG) method coupled with lumped parameter models for the boundary conditions. The suggested model has already been successfully applied to haemodynamics in arteries and is now extended to flow in collapsible tubes such as veins. The main difference with cardiovascular simulations is that the flow may become supercritical and elastic jumps may appear, with the numerical consequence that the scheme may not remain monotone unless a limiting procedure is introduced. We show that our second-order RK-DG method, equipped with an approximate Roe's Riemann solver and a slope-limiting procedure, allows us to capture elastic jumps accurately. Moreover, this paper demonstrates that the complex physics associated with such flows is modelled more accurately than with traditional methods such as finite-difference or finite-volume methods. We present various benchmark problems that show the flexibility and applicability of the numerical method. Our solutions are compared with analytical solutions when they are available and with solutions obtained using other numerical methods. Finally, to illustrate the clinical interest, we study the emptying process in a calf vein squeezed by contracting skeletal muscle in a normal and a pathological subject. We compare our results with experimental simulations and discuss the sensitivity of our model to its parameters.
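
    For orientation, collapsible-tube solvers of this kind are usually built on the cross-sectionally averaged one-dimensional mass and momentum equations closed by a "tube law" relating transmural pressure to cross-sectional area. The form below is a generic sketch; the exponents and the friction term are common choices, not necessarily those of the paper.

```latex
% Generic 1D collapsible-tube system (A: area, Q: flow rate, p_ext: external pressure)
\partial_t A + \partial_x Q = 0, \qquad
\partial_t Q + \partial_x\!\left(\frac{Q^2}{A}\right) + \frac{A}{\rho}\,\partial_x p = -f\,\frac{Q}{A},
\qquad
p - p_{\mathrm{ext}} = K_p\left[\left(\frac{A}{A_0}\right)^{m} - \left(\frac{A}{A_0}\right)^{n}\right]
```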

  3. Accurate Semilocal Density Functional for Condensed-Matter Physics and Quantum Chemistry.

    PubMed

    Tao, Jianmin; Mo, Yuxiang

    2016-08-12

    Most density functionals have been developed by imposing the known exact constraints on the exchange-correlation energy, or by a fit to a set of properties of selected systems, or by both. However, accurate modeling of the conventional exchange hole presents a great challenge, due to the delocalization of the hole. Making use of the property that the hole can be made localized under a general coordinate transformation, here we derive an exchange hole from the density matrix expansion, while the correlation part is obtained by imposing the low-density limit constraint. From the hole, a semilocal exchange-correlation functional is calculated. Our comprehensive test shows that this functional can achieve remarkable accuracy for diverse properties of molecules, solids, and solid surfaces, substantially improving upon the nonempirical functionals proposed in recent years. Accurate semilocal functionals based on their associated holes are physically appealing and practically useful for developing nonlocal functionals.

  4. A new model of physical evolution of Jupiter-family comets

    NASA Astrophysics Data System (ADS)

    Rickman, H.; Szutowicz, S.; Wójcikowski, K.

    2014-07-01

    We aim to find the statistical physical lifetimes of Jupiter Family comets. For this purpose, we try to model the processes that govern the dynamical and physical evolution of comets. We pay special attention to physical evolution; attempts at such modelling have been made before, but we propose a more accurate model, which includes more physical effects. The model is tested on a sample of fictitious comets based on real Jupiter Family comets, with some orbital elements changed to a state before the capture by Jupiter. We model four different physical effects: erosion by sublimation, dust mantling, rejuvenation (mantle blow-off), and splitting. While models for sublimation and splitting already exist, such as di Sisto et al. (2009), and we only wish to make them more accurate, dust mantling and rejuvenation have not been included in previous statistical physical-evolution models. Each of these effects depends on one or more tunable parameters, which we establish by choosing the model that best fits the observed comet sample, in a way similar to di Sisto et al. (2009). In contrast to di Sisto et al., our comparison also involves the observed active fractions vs. nuclear radii.

  5. A dental vision system for accurate 3D tooth modeling.

    PubMed

    Zhang, Li; Alemzadeh, K

    2006-01-01

    This paper describes an active vision system based reverse engineering approach to extract the three-dimensional (3D) geometric information from dental teeth and transfer this information into Computer-Aided Design/Computer-Aided Manufacture (CAD/CAM) systems to improve the accuracy of 3D tooth models and, at the same time, improve the quality of the construction units to help patient care. The vision system involves the development of a dental vision rig, edge detection, boundary tracing and fast and accurate 3D modeling from a sequence of sliced silhouettes of physical models. The rig is designed using engineering design methods such as a concept selection matrix and weighted objectives evaluation chart. Reconstruction results and accuracy evaluation are presented on digitizing different teeth models.

  6. An accurate halo model for fitting non-linear cosmological power spectra and baryonic feedback models

    NASA Astrophysics Data System (ADS)

    Mead, A. J.; Peacock, J. A.; Heymans, C.; Joudaki, S.; Heavens, A. F.

    2015-12-01

    We present an optimized variant of the halo model, designed to produce accurate matter power spectra well into the non-linear regime for a wide range of cosmological models. To do this, we introduce physically motivated free parameters into the halo-model formalism and fit these to data from high-resolution N-body simulations. For a variety of Λ cold dark matter (ΛCDM) and wCDM models, the halo-model power is accurate to ≃ 5 per cent for k ≤ 10 h Mpc⁻¹ and z ≤ 2. An advantage of our new halo model is that it can be adapted to account for the effects of baryonic feedback on the power spectrum. We demonstrate this by fitting the halo model to power spectra from the OWLS (OverWhelmingly Large Simulations) hydrodynamical simulation suite via parameters that govern halo internal structure. We are able to fit all feedback models investigated at the 5 per cent level using only two free parameters, and we place limits on the range of these halo parameters for feedback models investigated by the OWLS simulations. Accurate predictions to high k are vital for weak-lensing surveys, and these halo parameters could be considered nuisance parameters to marginalize over in future analyses to mitigate uncertainty regarding the details of feedback. Finally, we investigate how lensing observables predicted by our model compare to those from simulations and from HALOFIT for a range of k-cuts and feedback models and quantify the angular scales at which these effects become important. Code to calculate power spectra from the model presented in this paper can be found at https://github.com/alexander-mead/hmcode.
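
    For reference, the halo-model decomposition that such fitting schemes start from writes the matter power spectrum as the sum of one-halo and two-halo terms; the physically motivated parameters enter through the ingredients of these terms. A generic form (notation assumed here, not taken from the paper):

```latex
P(k) = P_{2\mathrm{h}}(k) + P_{1\mathrm{h}}(k), \qquad
P_{1\mathrm{h}}(k) = \int \mathrm{d}M\, n(M)\left(\frac{M}{\bar{\rho}}\right)^{2}\lvert u(k,M)\rvert^{2}, \qquad
P_{2\mathrm{h}}(k) \simeq P_{\mathrm{lin}}(k)\left[\int \mathrm{d}M\, n(M)\, b(M)\,\frac{M}{\bar{\rho}}\, u(k,M)\right]^{2}
```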

  7. Physical and Numerical Model Studies of Cross-flow Turbines Towards Accurate Parameterization in Array Simulations

    NASA Astrophysics Data System (ADS)

    Wosnik, M.; Bachant, P.

    2014-12-01

    Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today (a common model is the actuator disk concept) are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal axis devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier-Stokes simulations, to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of

  8. Physical and numerical studies of a fracture system model

    NASA Astrophysics Data System (ADS)

    Piggott, Andrew R.; Elsworth, Derek

    1989-03-01

    Physical and numerical studies of transient flow in a model of discretely fractured rock are presented. The physical model is a thermal analogue to fractured media flow consisting of idealized disc-shaped fractures. The numerical model is used to predict the behavior of the physical model. The use of different insulating materials to encase the physical model allows the effects of differing leakage magnitudes to be examined. A procedure for determining appropriate leakage parameters is documented. These parameters are used in forward analysis to predict the thermal response of the physical model. Knowledge of the leakage parameters and of the temporal variation of boundary conditions is shown to be essential to an accurate prediction. Favorable agreement is illustrated between numerical and physical results. The physical model provides a data source for the benchmarking of alternative numerical algorithms.

  9. Accurate protein structure modeling using sparse NMR data and homologous structure information.

    PubMed

    Thompson, James M; Sgourakis, Nikolaos G; Liu, Gaohua; Rossi, Paolo; Tang, Yuefeng; Mills, Jeffrey L; Szyperski, Thomas; Montelione, Gaetano T; Baker, David

    2012-06-19

    While information from homologous structures plays a central role in X-ray structure determination by molecular replacement, such information is rarely used in NMR structure determination because it can be incorrect, both locally and globally, when evolutionary relationships are inferred incorrectly or there has been considerable evolutionary structural divergence. Here we describe a method that allows robust modeling of protein structures of up to 225 residues by combining ¹HN, ¹³C, and ¹⁵N backbone and ¹³Cβ chemical shift data, distance restraints derived from homologous structures, and a physically realistic all-atom energy function. Accurate models are distinguished from inaccurate models generated using incorrect sequence alignments by requiring that (i) the all-atom energies of models generated using the restraints are lower than those of models generated in unrestrained calculations and (ii) the low-energy structures converge to within 2.0 Å backbone rmsd over 75% of the protein. Benchmark calculations on known structures and blind targets show that the method can accurately model protein structures, even with very remote homology information, to a backbone rmsd of 1.2-1.9 Å relative to the conventionally determined NMR ensembles and of 0.9-1.6 Å relative to X-ray structures for well-defined regions of the protein structures. This approach facilitates the accurate modeling of protein structures using backbone chemical shift data without the need for side-chain resonance assignments and extensive analysis of NOESY cross-peak assignments.

  10. Fast Physically Accurate Rendering of Multimodal Signatures of Distributed Fracture in Heterogeneous Materials.

    PubMed

    Visell, Yon

    2015-04-01

    This paper proposes a fast, physically accurate method for synthesizing multimodal, acoustic and haptic, signatures of distributed fracture in quasi-brittle heterogeneous materials, such as wood, granular media, or other fiber composites. Fracture processes in these materials are challenging to simulate with existing methods, due to the prevalence of large numbers of disordered, quasi-random spatial degrees of freedom, representing the complex physical state of a sample over the geometric volume of interest. Here, I develop an algorithm for simulating such processes, building on a class of statistical lattice models of fracture that have been widely investigated in the physics literature. This algorithm is enabled through a recently published mathematical construction based on the inverse transform method of random number sampling. It yields a purely time domain stochastic jump process representing stress fluctuations in the medium. The latter can be readily extended by a mean field approximation that captures the averaged constitutive (stress-strain) behavior of the material. Numerical simulations and interactive examples demonstrate the ability of these algorithms to generate physically plausible acoustic and haptic signatures of fracture in complex, natural materials interactively at audio sampling rates.
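
    The inverse transform method mentioned above is simple to illustrate: draw a uniform random number and push it through the inverse CDF of the target distribution, then accumulate the resulting jumps into a time-domain process. The sketch below is purely illustrative; the power-law event sizes, exponential waiting times, and parameter values are assumptions, not the published construction.

```python
import numpy as np

def sample_power_law(u, s_min, alpha):
    # Inverse CDF of a Pareto distribution: F(s) = 1 - (s / s_min)**(1 - alpha), s >= s_min
    return s_min * (1.0 - u) ** (1.0 / (1.0 - alpha))

rng = np.random.default_rng(0)
n_events = 200
sizes = sample_power_law(rng.random(n_events), s_min=1e-3, alpha=2.5)  # avalanche-like stress drops
waits = rng.exponential(scale=1.0 / 44100.0, size=n_events)            # waiting times between events
times = np.cumsum(waits)                                               # event times (roughly audio rate)
stress = np.cumsum(-sizes)                                             # piecewise-constant jump process
```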

  11. Structural Acoustic Physics Based Modeling of Curved Composite Shells

    DTIC Science & Technology

    2017-09-19

    Results show that the finite element computational models accurately match analytical calculations, and that the composite material studied in this... products. Subject terms: Finite Element Analysis, Structural Acoustics, Fiber-Reinforced Composites, Physics-Based Modeling.

  12. Mental models accurately predict emotion transitions.

    PubMed

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.

  13. Mental models accurately predict emotion transitions

    PubMed Central

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373

  14. Prototyping of cerebral vasculature physical models

    PubMed Central

    Khan, Imad S.; Kelly, Patrick D.; Singer, Robert J.

    2014-01-01

    Background: Prototyping of cerebral vasculature models through stereolithographic methods can depict the 3D structures of complicated aneurysms with high accuracy. We describe the method to manufacture such a model and review some of its uses in the context of treatment planning, research, and surgical training. Methods: We prospectively used the data from the rotational angiography of a 40-year-old female who presented with an unruptured right paraclinoid aneurysm. The 3D virtual model was then converted to a physical life-sized model. Results: The model constructed was shown to be a very accurate depiction of the aneurysm and its associated vasculature. It was found to be useful, among other things, for surgical training and as a patient education tool. Conclusion: With improving and more widespread printing options, these models have the potential to become an important part of research and training modalities. PMID:24678427

  15. Prototyping of cerebral vasculature physical models.

    PubMed

    Khan, Imad S; Kelly, Patrick D; Singer, Robert J

    2014-01-01

    Prototyping of cerebral vasculature models through stereolithographic methods can depict the 3D structures of complicated aneurysms with high accuracy. We describe the method to manufacture such a model and review some of its uses in the context of treatment planning, research, and surgical training. We prospectively used the data from the rotational angiography of a 40-year-old female who presented with an unruptured right paraclinoid aneurysm. The 3D virtual model was then converted to a physical life-sized model. The model constructed was shown to be a very accurate depiction of the aneurysm and its associated vasculature. It was found to be useful, among other things, for surgical training and as a patient education tool. With improving and more widespread printing options, these models have the potential to become an important part of research and training modalities.

  16. A physical-based gas-surface interaction model for rarefied gas flow simulation

    NASA Astrophysics Data System (ADS)

    Liang, Tengfei; Li, Qi; Ye, Wenjing

    2018-01-01

    Empirical gas-surface interaction models, such as the Maxwell model and the Cercignani-Lampis model, are widely used as boundary conditions in rarefied gas flow simulations. The accuracy of these models in predicting the macroscopic behavior of rarefied gas flows is less satisfactory in some cases, especially highly non-equilibrium ones. Molecular dynamics simulations can accurately resolve the gas-surface interaction process at the atomic scale and hence predict macroscopic behavior accurately. They are, however, too computationally expensive to be applied to real problems. In this work, a statistical physical-based gas-surface interaction model, which complies with the basic relations required of a boundary condition, is developed within the framework of the washboard model. By virtue of its physical basis, this new model is capable of capturing some important relations/trends that the classic empirical models fail to capture. As such, the new model is much more accurate than the classic models while remaining more efficient than MD simulations. Therefore, it can serve as a more accurate and efficient boundary condition for rarefied gas flow simulations.
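
    For context, the empirical Maxwell model referred to above mixes specular and diffuse (fully accommodated) reflection through a single accommodation coefficient; schematically:

```latex
f_{\mathrm{refl}}(\mathbf{v}) = (1-\alpha)\, f_{\mathrm{inc}}\!\left(\mathbf{v} - 2(\mathbf{v}\cdot\mathbf{n})\,\mathbf{n}\right) + \alpha\, f_{\mathrm{wall}}(\mathbf{v})
```

    Here f_wall is a Maxwellian at the wall temperature; the washboard-based model described in the abstract replaces this single-parameter closure with a statistical description of the atomic-scale scattering.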

  17. Multi-Physics Computational Grains (MPCGs): Newly-Developed Accurate and Efficient Numerical Methods for Micromechanical Modeling of Multifunctional Materials and Composites

    NASA Astrophysics Data System (ADS)

    Bishay, Peter L.

    This study presents a new family of highly accurate and efficient computational methods for modeling the multi-physics of multifunctional materials and composites at the micro-scale, named "Multi-Physics Computational Grains" (MPCGs). Each "mathematical grain" has a random polygonal/polyhedral geometrical shape that resembles the natural shapes of the material grains in the micro-scale where each grain is surrounded by an arbitrary number of neighboring grains. The physics that are incorporated in this study include: Linear Elasticity, Electrostatics, Magnetostatics, Piezoelectricity, Piezomagnetism and Ferroelectricity. However, the methods proposed here can be extended to include more physics (thermo-elasticity, pyroelectricity, electric conduction, heat conduction, etc.) in their formulation, different analysis types (dynamics, fracture, fatigue, etc.), nonlinearities, different defect shapes, and some of the 2D methods can also be extended to 3D formulation. We present "Multi-Region Trefftz Collocation Grains" (MTCGs) as a simple and efficient method for direct and inverse problems, "Trefftz-Lekhnitskii Computational Grains" (TLCGs) for modeling porous and composite smart materials, "Hybrid Displacement Computational Grains" (HDCGs) as a general method for modeling multifunctional materials and composites, and finally "Radial-Basis-Functions Computational Grains" (RBFCGs) for modeling functionally-graded materials, magneto-electro-elastic (MEE) materials and the switching phenomena in ferroelectric materials. The first three proposed methods are suitable for direct numerical simulation (DNS) of the micromechanics of smart composite/porous materials with non-symmetrical arrangement of voids/inclusions, and require minimal meshing effort and computation time, since each grain can represent the matrix of a composite and can include a pore or an inclusion. The last three methods provide a stiffness matrix in their formulation and hence can be readily

  18. Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cetiner, Mustafa Sacit; none,; Flanagan, George F.

    2014-07-30

    An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster than real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C+, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.

  19. Accurate lithography simulation model based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki

    2017-07-01

    Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to calculate an entire chip in realistic time, a compact resist model is commonly used; such models are built for fast calculation. To obtain an accurate compact resist model, it is necessary to determine a complicated non-linear model function. However, it is difficult to choose an appropriate function manually because there are many options. This paper proposes a new compact resist model using CNNs (Convolutional Neural Networks), one of the deep learning techniques. The CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show that the CNN model can reduce CD prediction errors by 70% compared with the conventional model.
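
    As a rough illustration of the idea (not the authors' architecture), a CNN-based resist model can be set up as a small image-to-scalar regressor, for example in PyTorch. The clip size, layer widths, and the use of CD as the regression target are assumptions for this sketch.

```python
import torch
import torch.nn as nn

class ResistCNN(nn.Module):
    """Toy CNN mapping a 2D aerial-image clip to a predicted resist CD value."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 4 * 4, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.head(self.features(x))

model = ResistCNN()
clips = torch.randn(8, 1, 64, 64)                            # hypothetical aerial-image clips
cd_pred = model(clips)                                       # predicted CD, one value per clip
loss = nn.functional.mse_loss(cd_pred, torch.randn(8, 1))    # regression against measured CD
loss.backward()
```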

  20. Combined inverse-forward artificial neural networks for fast and accurate estimation of the diffusion coefficients of cartilage based on multi-physics models.

    PubMed

    Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A

    2016-09-06

    Analytical and numerical methods have been used to extract essential engineering parameters such as elastic modulus, Poisson's ratio, permeability and diffusion coefficient from experimental data in various types of biological tissues. The major limitation associated with analytical techniques is that they are often only applicable to problems with simplified assumptions. Numerical multi-physics methods, on the other hand, enable minimizing the simplified assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs) which enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficient of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variations during the training process. Determining the required level of stochastic variation is performed by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined together, forward-inverse ANNs enable computationally inexperienced users to obtain accurate and fast estimation of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined using direct scanning of the parameter space as the optimization approach. It has been shown that both approaches yield comparable results. Copyright © 2016 Elsevier Ltd. All rights reserved.
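
    A minimal sketch of the inverse-forward ANN idea, with a toy exponential-saturation curve standing in for the multi-zone biphasic-solute finite element model; all model details, parameter ranges, and network sizes here are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 30)

def toy_forward(D):
    """Stand-in for the biphasic-solute FE model: concentration-time curve for diffusivity D."""
    return 1.0 - np.exp(-np.outer(D, t))

D_train = rng.uniform(0.5, 5.0, size=500)
curves = toy_forward(D_train) + 0.01 * rng.standard_normal((500, t.size))  # stochastic variation

inverse_ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(curves, D_train)
forward_ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(D_train.reshape(-1, 1), curves)

measured = toy_forward(np.array([2.3]))             # pretend this is an experimental curve
D_est = inverse_ann.predict(measured)               # fast inverse estimate of the diffusivity
recon = forward_ann.predict(D_est.reshape(-1, 1))   # forward check of the estimate
print(D_est, np.abs(recon - measured).max())
```

    The forward network lets the user verify cheaply that the diffusion coefficient returned by the inverse network reproduces the measured curve.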

  1. A novel phenomenological multi-physics model of Li-ion battery cells

    NASA Astrophysics Data System (ADS)

    Oh, Ki-Yong; Samad, Nassim A.; Kim, Youngki; Siegel, Jason B.; Stefanopoulou, Anna G.; Epureanu, Bogdan I.

    2016-09-01

    A novel phenomenological multi-physics model of Lithium-ion battery cells is developed for control and state estimation purposes. The model can capture electrical, thermal, and mechanical behaviors of battery cells under constrained conditions, e.g., battery pack conditions. Specifically, the proposed model predicts the core and surface temperatures and the reaction force induced by the volume change of battery cells due to electrochemically and thermally induced swelling. Moreover, the model incorporates the influences of changes in preload and ambient temperature on the force, considering the severe environmental conditions that electrified vehicles face. Intensive experimental validation demonstrates that the proposed multi-physics model accurately predicts the surface temperature and reaction force for a wide operational range of preload and ambient temperature. This high-fidelity model can be useful for more accurate and robust state-of-charge estimation considering the complex dynamic behaviors of the battery cell. Furthermore, the inherent simplicity of the mechanical measurements offers distinct advantages for improving the existing power and thermal management strategies for battery management.
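
    The thermal part of such cell models is often a lumped two-state description of core and surface temperature; a generic form is sketched below. The published model additionally couples electrical and mechanical (swelling/force) states, and the parameter names here are assumptions.

```latex
C_c\,\frac{\mathrm{d}T_c}{\mathrm{d}t} = \dot{Q}_{\mathrm{gen}} + \frac{T_s - T_c}{R_c}, \qquad
C_s\,\frac{\mathrm{d}T_s}{\mathrm{d}t} = \frac{T_c - T_s}{R_c} + \frac{T_{\mathrm{amb}} - T_s}{R_u}
```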

  2. Cross hole GPR traveltime inversion using a fast and accurate neural network as a forward model

    NASA Astrophysics Data System (ADS)

    Mejer Hansen, Thomas

    2017-04-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, such as that based on geostatistics, and complex non-linear forward physical models can be considered. However, in practice these methods can involve huge computational costs that limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error, which is quantified probabilistically so that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival travel time inversion of cross-hole ground-penetrating radar (GPR) data. An accurate forward model, based on 2D full-waveform modeling followed by automatic travel time picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the full forward model, and considerably faster, and more accurate, than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of the types of inverse problems that can be solved using non-linear Monte Carlo sampling techniques.
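
    The workflow reduces to a standard Metropolis-type sampler in which every forward evaluation is a cheap surrogate call. The Python sketch below is schematic: the random-walk proposal, the linearized travel-time operator, and the noise level are simplifications, and the paper's geostatistical prior and probabilistic modeling-error term are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_data = 50, 20
kernel = rng.random((n_cells, n_data)) * 0.1      # toy linearized travel-time operator

def surrogate_forward(m):
    """Stand-in for a trained neural network emulating first-arrival travel times."""
    return m @ kernel

m_true = rng.normal(1.0, 0.1, n_cells)
d_obs = surrogate_forward(m_true) + rng.normal(0, 0.01, n_data)
sigma = 0.02                                      # combined data-noise and modeling-error std

def log_like(m):
    r = surrogate_forward(m) - d_obs
    return -0.5 * np.sum((r / sigma) ** 2)

m = np.ones(n_cells)
ll = log_like(m)
samples = []
for _ in range(5000):
    prop = m + rng.normal(0, 0.02, n_cells)       # simple random-walk proposal
    ll_prop = log_like(prop)
    if np.log(rng.random()) < ll_prop - ll:       # Metropolis accept/reject
        m, ll = prop, ll_prop
    samples.append(m.copy())
```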

  3. Physical Therapists Make Accurate and Appropriate Discharge Recommendations for Patients Who Are Acutely Ill

    PubMed Central

    Fields, Christina J.; Fernandez, Natalia

    2010-01-01

    Background Acute care physical therapists contribute to the complex process of patient discharge planning. As physical therapists are experts at evaluating functional abilities and are able to incorporate various other factors relevant to discharge planning, it was expected that physical therapists’ recommendations of patient discharge location would be both accurate and appropriate. Objective This study determined how often the therapists’ recommendations for patient discharge location and services were implemented, representing the accuracy of the recommendations. The impact of unimplemented recommendations on readmission rate was examined, reflecting the appropriateness of the recommendations. Design This retrospective study included the discharge recommendations of 40 acute care physical therapists for 762 patients in a large academic medical center. The frequency of mismatch between the physical therapist's recommendation and the patient's actual discharge location and services was calculated. The mismatch variable had 3 levels: match, mismatch with services lacking, or mismatch with different services. Regression analysis was used to test whether mismatch status, patient age, length of admission, or discharge location predicted patient readmittance. Results Overall, physical therapists’ discharge recommendations were implemented 83% of the time. Patients were 2.9 times more likely to be readmitted when the therapist's discharge recommendation was not implemented and recommended follow-up services were lacking (mismatch with services lacking) compared with patients with a match. Limitations This study was limited to one facility. Limited information about the patients was collected, and data on patient readmission to other facilities were not collected. Conclusions This study supports the role of physical therapists in discharge planning in the acute care setting. Physical therapists demonstrated the ability to make accurate and appropriate discharge

  4. Low-dimensional, morphologically accurate models of subthreshold membrane potential

    PubMed Central

    Kellems, Anthony R.; Roos, Derrick; Xiao, Nan; Cox, Steven J.

    2009-01-01

    The accurate simulation of a neuron’s ability to integrate distributed synaptic input typically requires the simultaneous solution of tens of thousands of ordinary differential equations. For, in order to understand how a cell distinguishes between input patterns, we apparently need a model that is biophysically accurate down to the space scale of a single spine, i.e., 1 μm. We argue here that one can retain this highly detailed input structure while dramatically reducing the overall system dimension if one is content to accurately reproduce the associated membrane potential at a small number of places, e.g., at the site of action potential initiation, under subthreshold stimulation. The latter hypothesis permits us to approximate the active cell model with an associated quasi-active model, which in turn we reduce by both time-domain (Balanced Truncation) and frequency-domain (ℋ2 approximation of the transfer function) methods. We apply and contrast these methods on a suite of typical cells, achieving up to four orders of magnitude in dimension reduction and an associated speed-up in the simulation of dendritic democratization and resonance. We also append a threshold mechanism and indicate that this reduction has the potential to deliver an accurate quasi-integrate and fire model. PMID:19172386
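
    Balanced truncation, one of the two reduction methods mentioned, can be sketched in a few lines of Python/SciPy on a toy passive-cable system; the cable matrix, input/output sites, and retained order below are placeholders, not the cells studied in the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

# Toy passive cable x' = A x + B u, y = C x, standing in for a quasi-active dendrite model.
n = 200
g, leak = 50.0, 1.0
lap = np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
A = g * lap - leak * np.eye(n)             # stable (negative definite) cable operator
B = np.zeros((n, 1)); B[150, 0] = 1.0      # distal synaptic input site
C = np.zeros((1, n)); C[0, 0] = 1.0        # observe the voltage at the proximal end

Wc = solve_continuous_lyapunov(A, -B @ B.T)      # controllability Gramian
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)    # observability Gramian
jitter = 1e-12 * np.eye(n)                       # guards the Cholesky against round-off
S = cholesky(Wc + jitter, lower=True)
R = cholesky(Wo + jitter, lower=True)
U, sig, Vt = svd(R.T @ S)                        # Hankel singular values in sig
k = 10                                           # keep 10 of 200 states
T = S @ Vt[:k].T / np.sqrt(sig[:k])              # truncated balancing transform
Ti = (U[:, :k].T @ R.T) / np.sqrt(sig[:k])[:, None]
Ar, Br, Cr = Ti @ A @ T, Ti @ B, C @ T           # reduced (k-state) model
```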

  5. Multiscale Methods for Accurate, Efficient, and Scale-Aware Models of the Earth System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldhaber, Steve; Holland, Marika

    The major goal of this project was to contribute improvements to the infrastructure of an Earth System Model in order to support research in the Multiscale Methods for Accurate, Efficient, and Scale-Aware models of the Earth System project. In support of this, the NCAR team accomplished two main tasks: improving input/output performance of the model and improving atmospheric model simulation quality. Improvement of the performance and scalability of data input and diagnostic output within the model required a new infrastructure which can efficiently handle the unstructured grids common in multiscale simulations. This allows for a more computationally efficient model, enabling more years of Earth System simulation. The quality of the model simulations was improved by reducing grid-point noise in the spectral element version of the Community Atmosphere Model (CAM-SE). This was achieved by running the physics of the model using grid-cell data on a finite-volume grid.

  6. Accurate physical laws can permit new standard units: The two laws F→=ma→ and the proportionality of weight to mass

    NASA Astrophysics Data System (ADS)

    Saslow, Wayne M.

    2014-04-01

    Three common approaches to F→=ma→ are: (1) as an exactly true definition of force F→ in terms of measured inertial mass m and measured acceleration a→; (2) as an exactly true axiom relating measured values of a→, F→ and m; and (3) as an imperfect but accurately true physical law relating measured a→ to measured F→, with m an experimentally determined, matter-dependent constant, in the spirit of the resistance R in Ohm's law. In the third case, the natural units are those of a→ and F→, where a→ is normally specified using distance and time as standard units, and F→ from a spring scale as a standard unit; thus mass units are derived from force, distance, and time units such as newtons, meters, and seconds. The present work develops the third approach when one includes a second physical law (again, imperfect but accurate)—that balance-scale weight W is proportional to m—and the fact that balance-scale measurements of relative weight are more accurate than those of absolute force. When distance and time also are more accurately measurable than absolute force, this second physical law permits a shift to standards of mass, distance, and time units, such as kilograms, meters, and seconds, with the unit of force—the newton—a derived unit. However, were force and distance more accurately measurable than time (e.g., time measured with an hourglass), this second physical law would permit a shift to standards of force, mass, and distance units such as newtons, kilograms, and meters, with the unit of time—the second—a derived unit. Therefore, the choice of the most accurate standard units depends both on what is most accurately measurable and on the accuracy of physical law.

  7. An Accurate and Dynamic Computer Graphics Muscle Model

    NASA Technical Reports Server (NTRS)

    Levine, David Asher

    1997-01-01

    A computer-based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.

  8. A new algebraic turbulence model for accurate description of airfoil flows

    NASA Astrophysics Data System (ADS)

    Xiao, Meng-Juan; She, Zhen-Su

    2017-11-01

    We report a new algebraic turbulence model (SED-SL) based on the SED theory, a symmetry-based approach to quantifying wall turbulence. The model specifies a multi-layer profile of a stress length (SL) function in both the streamwise and wall-normal directions, which thus defines the eddy viscosity in the RANS equation (i.e. a zero-equation model). After a successful simulation of flat plate flow (APS meeting, 2016), we report here further applications of the model to the flow around airfoils, with significant improvement of the prediction accuracy of the lift (CL) and drag (CD) coefficients compared to other popular models (e.g. BL, SA, etc.). Two airfoils, RAE2822 and NACA0012, are computed for over 50 cases. The results are compared to experimental data from the AGARD report, which show deviations of CL bounded within 2%, and of CD within 2 counts (10⁻⁴) for RAE2822 and 6 counts for NACA0012, respectively (under a systematic adjustment of the flow conditions). In all these calculations, only one parameter (proportional to the Kármán constant) shows slight variation with Mach number. The most remarkable outcome is, for the first time, the accurate prediction of the drag coefficient. The other interesting outcome is the physical interpretation of the multi-layer parameters: they specify the corresponding multi-layer structure of the turbulent boundary layer; when used together with simulation data, the SED-SL enables one to extract physical information from empirical data, and to understand the variation of the turbulent boundary layer.
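
    In zero-equation (algebraic) closures of this type, the stress length directly sets the eddy viscosity; schematically, with the multi-layer form of the stress length being what the SED theory prescribes (not reproduced here):

```latex
\nu_t = \ell_{\mathrm{SL}}^{2}\,\left|\frac{\partial U}{\partial y}\right|
```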

  9. Dynamic Emulation Modelling (DEMo) of large physically-based environmental models

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Castelletti, A.

    2012-12-01

    In environmental modelling, large, spatially-distributed, physically-based models are widely adopted to describe the dynamics of physical, social and economic processes. Such an accurate process characterization comes, however, at a price: the computational requirements of these models are considerably high and prevent their use in any problem requiring hundreds or thousands of model runs to be satisfactorily solved. Typical examples include optimal planning and management, data assimilation, inverse modelling and sensitivity analysis. An effective approach to overcome this limitation is to perform a top-down reduction of the physically-based model by identifying a simplified, computationally efficient emulator, constructed from and then used in place of the original model in highly resource-demanding tasks. The underlying idea is that not all the process details in the original model are equally important and relevant to the dynamics of the outputs of interest for the type of problem considered. Emulation modelling has been successfully applied in many environmental applications; however, most of the literature considers non-dynamic emulators (e.g. metamodels, response surfaces and surrogate models), where the original dynamical model is reduced to a static map between input and the output of interest. In this study we focus on Dynamic Emulation Modelling (DEMo), a methodological approach that preserves the dynamic nature of the original physically-based model, with consequent advantages in a wide variety of problem areas. In particular, we propose a new data-driven DEMo approach that combines the many advantages of data-driven modelling in representing complex, non-linear relationships, but preserves the state-space representation typical of process-based models, which is both particularly effective in some applications (e.g. optimal management and data assimilation) and facilitates the ex-post physical interpretation of the emulator structure, thus enhancing the

  10. Modelling the physics in iterative reconstruction for transmission computed tomography

    PubMed Central

    Nuyts, Johan; De Man, Bruno; Fessler, Jeffrey A.; Zbijewski, Wojciech; Beekman, Freek J.

    2013-01-01

    There is an increasing interest in iterative reconstruction (IR) as a key tool to improve quality and increase applicability of X-ray CT imaging. IR has the ability to significantly reduce patient dose, it provides the flexibility to reconstruct images from arbitrary X-ray system geometries, and it allows detailed models of photon transport and detection physics to be included, so as to accurately correct for a wide variety of image-degrading effects. This paper reviews discretisation issues and modelling of finite spatial resolution, Compton scatter in the scanned object, data noise and the energy spectrum. Widespread implementation of IR with highly accurate model-based correction, however, still requires significant effort. In addition, new hardware will provide new opportunities and challenges to improve CT with new modelling. PMID:23739261
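
    To make the role of the physics model concrete, a minimal iterative reconstruction loop for a linear tomographic model y = A x can be written as a projected Landweber/SIRT-style iteration. This is only a toy sketch: real statistical IR uses a Poisson or weighted data term, regularization, and a system matrix A that encodes the resolution, scatter, noise, and spectrum models reviewed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_rays = 256, 400
# Toy sparse projector; in model-based IR its rows would encode the detection physics.
A = rng.random((n_rays, n_pix)) * (rng.random((n_rays, n_pix)) < 0.05)
x_true = rng.random(n_pix)
y = A @ x_true + 0.01 * rng.standard_normal(n_rays)   # noisy measurements

x = np.zeros(n_pix)
step = 1.0 / np.linalg.norm(A, 2) ** 2                # convergent Landweber step size
for _ in range(200):
    x += step * A.T @ (y - A @ x)                     # gradient step on the least-squares data term
    x = np.maximum(x, 0.0)                            # non-negativity of attenuation
```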

  11. An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators

    DOE PAGES

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; ...

    2017-10-17

    Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver. In this paper we present the accurate and efficient envelope solver used in the code INF & RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF & RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.

  12. An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.

    Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver. In this paper we present the accurate and efficient envelope solver used in the code INF & RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF & RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.

  13. An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators

    NASA Astrophysics Data System (ADS)

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; Esarey, E.; Leemans, W. P.

    2018-01-01

    Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver. In this paper we present the accurate and efficient envelope solver used in the code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF&RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.

  14. Physically Based Modeling and Simulation with Dynamic Spherical Volumetric Simplex Splines

    PubMed Central

    Tan, Yunhao; Hua, Jing; Qin, Hong

    2009-01-01

    In this paper, we present a novel computational modeling and simulation framework based on dynamic spherical volumetric simplex splines. The framework can handle the modeling and simulation of genus-zero objects with real physical properties. In this framework, we first develop an accurate and efficient algorithm to reconstruct the high-fidelity digital model of a real-world object with spherical volumetric simplex splines, which can accurately represent the geometric, material, and other properties of the object simultaneously. With the tight coupling of Lagrangian mechanics, the dynamic volumetric simplex splines representing the object can accurately simulate its physical behavior because they unify the geometric and material properties in the simulation. The visualization can be directly computed from the object’s geometric or physical representation based on the dynamic spherical volumetric simplex splines during simulation without interpolation or resampling. We have applied the framework for biomechanic simulation of brain deformations, such as brain shifting during the surgery and brain injury under blunt impact. We have compared our simulation results with the ground truth obtained through intra-operative magnetic resonance imaging and the real biomechanic experiments. The evaluations demonstrate the excellent performance of our new technique. PMID:20161636

  15. High-Accurate, Physics-Based Wake Simulation Techniques

    DTIC Science & Technology

    2015-01-27

    to accepting the use of computational fluid dynamics models to supplement some of the research. The scientists Lewellen and Lewellen [13] in 1996... resolved in today's climate, especially concerning CFD and experimental. Multiple programs have been established, such as the Aircraft Vortex Spacing... step the entire matrix is solved at once, creating inconsistencies when applied to the physics of a fluid mechanics problem where information changes

  16. Communication: Accurate higher-order van der Waals coefficients between molecules from a model dynamic multipole polarizability

    DOE PAGES

    Tao, Jianmin; Rappe, Andrew M.

    2016-01-20

    Due to the absence of the long-range van der Waals (vdW) interaction, conventional density functional theory (DFT) often fails in the description of molecular complexes and solids. In recent years, considerable progress has been made in the development of the vdW correction. However, the vdW correction based on the leading-order coefficient C6 alone can only achieve limited accuracy, while accurate modeling of higher-order coefficients remains a formidable task, due to the strong non-additivity effect. Here, we apply a model dynamic multipole polarizability within a modified single-frequency approximation to calculate C8 and C10 between small molecules. We find that the higher-order vdW coefficients from this model can achieve remarkable accuracy, with mean absolute relative deviations of 5% for C8 and 7% for C10. As a result, inclusion of accurate higher-order contributions in the vdW correction will effectively enhance the predictive power of DFT in condensed matter physics and quantum chemistry.
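
    For reference, these coefficients enter the asymptotic dispersion series between two molecules A and B; the leading coefficient follows from the Casimir-Polder integral over imaginary-frequency dipole polarizabilities, while C8 and C10 involve the higher multipole (quadrupole, octupole) polarizabilities in analogous integrals:

```latex
E_{\mathrm{disp}}(R) \approx -\frac{C_6}{R^{6}} - \frac{C_8}{R^{8}} - \frac{C_{10}}{R^{10}}, \qquad
C_6^{AB} = \frac{3}{\pi}\int_0^{\infty} \alpha_1^{A}(i\omega)\,\alpha_1^{B}(i\omega)\,\mathrm{d}\omega
```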

  17. Cabin Environment Physics Risk Model

    NASA Technical Reports Server (NTRS)

    Mattenberger, Christopher J.; Mathias, Donovan Leigh

    2014-01-01

    This paper presents a Cabin Environment Physics Risk (CEPR) model that predicts the time for an initial failure of Environmental Control and Life Support System (ECLSS) functionality to propagate into a hazardous environment and trigger a loss-of-crew (LOC) event. This physics-of-failure model allows a probabilistic risk assessment of a crewed spacecraft to account for the cabin environment, which can serve as a buffer to protect the crew during an abort from orbit and ultimately enable a safe return. The results of the CEPR model replace the assumption that failure of the crew critical ECLSS functionality causes LOC instantly, and provide a more accurate representation of the spacecraft's risk posture. The instant-LOC assumption is shown to be excessively conservative and, moreover, can impact the relative risk drivers identified for the spacecraft. This, in turn, could lead the design team to allocate mass for equipment to reduce overly conservative risk estimates in a suboptimal configuration, which inherently increases the overall risk to the crew. For example, available mass could be poorly used to add redundant ECLSS components that have a negligible benefit but appear to make the vehicle safer due to poor assumptions about the propagation time of ECLSS failures.

  18. Coarse-grained, foldable, physical model of the polypeptide chain.

    PubMed

    Chakraborty, Promita; Zuckermann, Ronald N

    2013-08-13

    Although nonflexible, scaled molecular models like Pauling-Corey's and their descendants have made significant contributions in structural biology research and pedagogy, recent technical advances in 3D printing and electronics make it possible to go one step further in designing physical models of biomacromolecules: to make them conformationally dynamic. We report here the design, construction, and validation of a flexible, scaled, physical model of the polypeptide chain, which accurately reproduces the bond rotational degrees of freedom in the peptide backbone. The coarse-grained backbone model consists of repeating amide and α-carbon units, connected by mechanical bonds (corresponding to ϕ and ψ) that include realistic barriers to rotation that closely approximate those found at the molecular scale. Longer-range hydrogen-bonding interactions are also incorporated, allowing the chain to readily fold into stable secondary structures. The model is easily constructed with readily obtainable parts and promises to be a tremendous educational aid to the intuitive understanding of chain folding as the basis for macromolecular structure. Furthermore, this physical model can serve as the basis for linking tangible biomacromolecular models directly to the vast array of existing computational tools to provide an enhanced and interactive human-computer interface.

  19. Accurate Modeling Method for Cu Interconnect

    NASA Astrophysics Data System (ADS)

    Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko

    This paper proposes an accurate modeling method for the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for the extraction of the model parameters, and an efficient extraction flow. We have extracted the model parameters for 0.15μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameters Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90nm, 65nm and 55nm CMOS). Since the interconnect delay variations due to the processes constitute a significant part of what have conventionally been treated as random variations, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.

  20. A Hybrid Physics-Based Data-Driven Approach for Point-Particle Force Modeling

    NASA Astrophysics Data System (ADS)

    Moore, Chandler; Akiki, Georges; Balachandar, S.

    2017-11-01

    This study improves upon the physics-based pairwise interaction extended point-particle (PIEP) model. The PIEP model leverages a physical framework to predict fluid-mediated interactions between solid particles. While the PIEP model is a powerful tool, its pairwise assumption leads to increased error in flows with high particle volume fractions. To reduce this error, a regression algorithm is used to model the differences between the current PIEP model's predictions and the results of direct numerical simulations (DNS) for an array of monodisperse solid particles subjected to various flow conditions. The resulting statistical model and the physical PIEP model are superimposed to construct a hybrid, physics-based data-driven PIEP model. It must be noted that the performance of a pure data-driven approach without the model form provided by the physical PIEP model is substantially inferior. The hybrid model's predictive capabilities are analyzed using additional DNS. In every case tested, the hybrid PIEP model's predictions are more accurate than those of the physical PIEP model. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1315138 and the U.S. DOE, NNSA, ASC Program, as a Cooperative Agreement under Contract No. DE-NA0002378.
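
    The residual-learning pattern described above can be sketched in a few lines: evaluate a physics-based predictor, fit a regressor to its residuals against reference data, and superimpose the two. The snippet below is a generic illustration with a placeholder "physics" model and synthetic data, not the actual PIEP formulation or DNS results.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def physics_model(x):
    """Placeholder for a physics-based force prediction (stands in for PIEP)."""
    return 1.5 * x[:, 0] - 0.3 * x[:, 1] ** 2

# Hypothetical training data: inputs (e.g. volume fraction, Reynolds number)
# and the "true" force they produce, including unresolved physics.
X = rng.uniform(0.0, 1.0, size=(2000, 2))
y_true = physics_model(X) + 0.4 * np.sin(6.0 * X[:, 0] * X[:, 1])

# Fit a regressor to the physics model's residuals, then superimpose.
residuals = y_true - physics_model(X)
correction = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, residuals)

X_new = rng.uniform(0.0, 1.0, size=(200, 2))
hybrid_prediction = physics_model(X_new) + correction.predict(X_new)
```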

  1. Development and application of accurate analytical models for single active electron potentials

    NASA Astrophysics Data System (ADS)

    Miller, Michelle; Jaron-Becker, Agnieszka; Becker, Andreas

    2015-05-01

    The single active electron (SAE) approximation is a theoretical model frequently employed to study scenarios in which inner-shell electrons may productively be treated as frozen spectators to a physical process of interest, and accurate analytical approximations for these potentials are sought as a useful simulation tool. Density functional theory is often used to construct a SAE potential, requiring that a further approximation for the exchange-correlation functional be made. In this study, we employ the Krieger, Li, and Iafrate (KLI) modification to the optimized-effective-potential (OEP) method to reduce the complexity of the problem to the straightforward solution of a system of linear equations through simple arguments regarding the behavior of the exchange-correlation potential in regions where a single orbital dominates. We employ this method for the solution of atomic and molecular potentials, and use the resultant curve to devise a systematic construction of highly accurate and useful analytical approximations for several systems. Supported by the U.S. Department of Energy (Grant No. DE-FG02-09ER16103), and the U.S. National Science Foundation (Graduate Research Fellowship, Grants No. PHY-1125844 and No. PHY-1068706).
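
    As a concrete picture of the last step, fitting an analytical SAE potential typically amounts to a nonlinear least-squares fit of a chosen functional form to the numerically computed potential. The sketch below uses a commonly cited Tong-Lin-style parameterization and synthetic data in place of an actual KLI/OEP curve; the form and starting values are assumptions, not the authors' construction.

```python
import numpy as np
from scipy.optimize import curve_fit

def sae_potential(r, a1, a2, a3, a4, a5, a6, zc=1.0):
    """A commonly used analytic SAE form:
    V(r) = -(zc + a1*exp(-a2*r) + a3*r*exp(-a4*r) + a5*exp(-a6*r)) / r."""
    return -(zc + a1 * np.exp(-a2 * r) + a3 * r * np.exp(-a4 * r)
             + a5 * np.exp(-a6 * r)) / r

# Stand-in for a numerically computed KLI/OEP potential (synthetic data).
r = np.linspace(0.05, 20.0, 400)
v_numeric = sae_potential(r, 1.2, 2.0, -0.5, 1.0, 0.3, 0.5) + 1e-3 * np.random.randn(r.size)

popt, _ = curve_fit(sae_potential, r, v_numeric,
                    p0=[1.0, 1.0, 0.0, 1.0, 0.0, 1.0], maxfev=20000)
print("fitted parameters:", np.round(popt, 3))
```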

  2. 3ARM: A Fast, Accurate Radiative Transfer Model for Use in Climate Models

    NASA Technical Reports Server (NTRS)

    Bergstrom, R. W.; Kinne, S.; Sokolik, I. N.; Toon, O. B.; Mlawer, E. J.; Clough, S. A.; Ackerman, T. P.; Mather, J.

    1996-01-01

    A new radiative transfer model combining the efforts of three groups of researchers is discussed. The model accurately computes radiative transfer in inhomogeneous absorbing, scattering, and emitting atmospheres. As an illustration of the model, results are shown for the effects of dust on thermal radiation.

  3. 3ARM: A Fast, Accurate Radiative Transfer Model for use in Climate Models

    NASA Technical Reports Server (NTRS)

    Bergstrom, R. W.; Kinne, S.; Sokolik, I. N.; Toon, O. B.; Mlawer, E. J.; Clough, S. A.; Ackerman, T. P.; Mather, J.

    1996-01-01

    A new radiative transfer model combining the efforts of three groups of researchers is discussed. The model accurately computes radiative transfer in inhomogeneous absorbing, scattering, and emitting atmospheres. As an illustration of the model, results are shown for the effects of dust on thermal radiation.

  4. 3ARM: A Fast, Accurate Radiative Transfer Model For Use in Climate Models

    NASA Technical Reports Server (NTRS)

    Bergstrom, R. W.; Kinne, S.; Sokolik, I. N.; Toon, O. B.; Mlawer, E. J.; Clough, S. A.; Ackerman, T. P.; Mather, J.

    1996-01-01

    A new radiative transfer model combining the efforts of three groups of researchers is discussed. The model accurately computes radiative transfer in inhomogeneous absorbing, scattering, and emitting atmospheres. As an illustration of the model, results are shown for the effects of dust on thermal radiation.

  5. Production of accurate skeletal models of domestic animals using three-dimensional scanning and printing technology.

    PubMed

    Li, Fangzheng; Liu, Chunying; Song, Xuexiong; Huan, Yanjun; Gao, Shansong; Jiang, Zhongling

    2018-01-01

    Access to adequate anatomical specimens can be an important aspect in learning the anatomy of domestic animals. In this study, the authors utilized a structured light scanner and fused deposition modeling (FDM) printer to produce highly accurate animal skeletal models. First, various components of the bovine skeleton, including the femur, the fifth rib, and the sixth cervical (C6) vertebra were used to produce digital models. These were then used to produce 1:1 scale physical models with the FDM printer. The anatomical features of the digital models and three-dimensional (3D) printed models were then compared with those of the original skeletal specimens. The results of this study demonstrated that both digital and physical scale models of animal skeletal components could be rapidly produced using 3D printing technology. In terms of accuracy between models and original specimens, the standard deviations of the femur and the fifth rib measurements were 0.0351 and 0.0572, respectively. All of the features except the nutrient foramina on the original bone specimens could be identified in the digital and 3D printed models. Moreover, the 3D printed models could serve as a viable alternative to original bone specimens when used in anatomy education, as determined from student surveys. This study demonstrated an important example of reproducing bone models to be used in anatomy education and veterinary clinical training. Anat Sci Educ 11: 73-80. © 2017 American Association of Anatomists.

  6. Can phenological models predict tree phenology accurately under climate change conditions?

    NASA Astrophysics Data System (ADS)

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean Michel; García de Cortázar-Atauri, Inaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2014-05-01

    The onset of the growing season of trees has advanced globally by 2.3 days/decade during the last 50 years because of global warming, and this trend is predicted to continue according to climate forecasts. The effect of temperature on plant phenology is however not linear, because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud dormancy, and on the other hand higher temperatures are necessary to promote bud cell growth afterwards. Increasing phenological changes in temperate woody species have strong impacts on the distribution and productivity of forest trees, as well as on crop cultivation areas. Accurate predictions of tree phenology are therefore a prerequisite to understand and foresee the impacts of climate change on forests and agrosystems. Different process-based models have been developed in the last two decades to predict the date of budburst or flowering of woody species. There are two main families: (1) one-phase models, which consider only the ecodormancy phase and make the assumption that endodormancy is always broken before adequate climatic conditions for cell growth occur; and (2) two-phase models, which consider both the endodormancy and ecodormancy phases and predict a date of dormancy break which varies from year to year. So far, one-phase models have been able to predict tree bud break and flowering accurately under historical climate. However, because they do not consider what happens prior to ecodormancy, and especially the possible negative effect of winter temperature warming on dormancy break, it seems unlikely that they can provide accurate predictions under future climate conditions. It is indeed well known that a lack of low temperature results in abnormal patterns of bud break and development in temperate fruit trees. An accurate modelling of the dormancy break date has thus become a major issue in phenology modelling. Two-phase phenological models predict that global warming should delay
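
    A minimal example of the first family (a one-phase, thermal-time model) is given below: forcing units above a base temperature are accumulated from a fixed start date, and budburst is predicted when a critical sum is reached. The base temperature, critical sum, and temperature series are hypothetical values chosen only to make the sketch run.

```python
def predict_budburst(daily_mean_temp, t_base=5.0, f_crit=150.0, start_day=1):
    """One-phase (thermal-time) phenology model: accumulate forcing above
    t_base from start_day and return the day the critical sum is reached."""
    forcing = 0.0
    for day, temp in enumerate(daily_mean_temp, start=1):
        if day < start_day:
            continue
        forcing += max(0.0, temp - t_base)
        if forcing >= f_crit:
            return day          # predicted day of year of budburst
    return None                 # critical forcing never reached

# Hypothetical daily mean temperatures for the first 120 days of a year.
temps = [2.0 + 0.15 * d for d in range(1, 121)]
print("predicted budburst day:", predict_budburst(temps))
```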

  7. An Accurate Temperature Correction Model for Thermocouple Hygrometers 1

    PubMed Central

    Savage, Michael J.; Cass, Alfred; de Jager, James M.

    1982-01-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241

  8. How to Make Our Models More Physically-based

    NASA Astrophysics Data System (ADS)

    Savenije, H. H. G.

    2016-12-01

    not incorporate these patterns are not physical. The parameters in the equations may be adjusted to compensate for the lack of patterns, but this involves scale-dependent calibration. In contrast to what is widely believed, relatively simple conceptual models can accommodate these physical processes accurately and very efficiently.

  9. An accurate model for predicting high frequency noise of nanoscale NMOS SOI transistors

    NASA Astrophysics Data System (ADS)

    Shen, Yanfei; Cui, Jie; Mohammadi, Saeed

    2017-05-01

    A nonlinear and scalable model suitable for predicting high frequency noise of N-type Metal Oxide Semiconductor (NMOS) transistors is presented. The model is developed for a commercial 45 nm CMOS SOI technology and its accuracy is validated through comparison with measured performance of a microwave low noise amplifier. The model employs the virtual source nonlinear core and adds parasitic elements to accurately simulate the RF behavior of multi-finger NMOS transistors up to 40 GHz. For the first time, the traditional long-channel thermal noise model is supplemented with an injection noise model to accurately represent the noise behavior of these short-channel transistors up to 26 GHz. The developed model is simple and easy to extract, yet very accurate.

  10. A methodology to generate high-resolution digital elevation model (DEM) and surface water profile for a physical model using close range photogrammetric (CRP) technique

    NASA Astrophysics Data System (ADS)

    Mali, V. K.; Kuiry, S. N.

    2015-12-01

    Comprehensive understanding of river flow dynamics over varying topography in the field is intricate and difficult. Conventional experimental methods based on manual data collection are time consuming and prone to many errors. Remotely sensed satellite imagery can provide the necessary information over large areas at high resolution, but such data are expensive and often untimely; consequently, deriving accurate river bathymetry from relatively coarse-resolution and untimely imagery is inaccurate and impractical. Despite this, these data are often used to calibrate river flow models, even though such models require highly accurate morpho-dynamic data in order to predict the flow field precisely. Under these circumstances, the data could be supplemented through experimental observations in a physical model using modern techniques. This paper proposes a methodology to generate highly accurate river bathymetry and water surface (WS) profiles for a physical model of a river network system using the CRP technique. For this task, a number of DSLR Nikon D5300 cameras (mounted at 3.5 m above the river bed) were used to capture images of the physical model and the flooding scenarios during the experiments. During the experiments, non-specular materials were introduced at the inlet and images were taken simultaneously from different orientations and altitudes with a significant overlap of 80%. Ground control points were surveyed using two ultrasonic sensors with ±0.5 mm vertical accuracy. The captured images were then processed in PhotoScan software to generate the DEM and WS profile. The generated data were then passed through statistical analysis to identify errors. The accuracy of the WS profile was limited by the extent and density of the non-specular powder and by stereo-matching discrepancies, as well as by several camera factors, including orientation, illumination, and altitude. The CRP technique for a large scale physical

  11. A methodology to generate high-resolution digital elevation model (DEM) and surface water profile for a physical model using close range photogrammetric (CRP) technique

    NASA Astrophysics Data System (ADS)

    Méndez Incera, F. J.; Erikson, L. H.; Ruggiero, P.; Barnard, P.; Camus, P.; Rueda Zamora, A. C.

    2014-12-01

    Comprehensive understanding of river flow dynamics over varying topography in the field is intricate and difficult. Conventional experimental methods based on manual data collection are time consuming and prone to many errors. Remotely sensed satellite imagery can provide the necessary information over large areas at high resolution, but such data are expensive and often untimely; consequently, deriving accurate river bathymetry from relatively coarse-resolution and untimely imagery is inaccurate and impractical. Despite this, these data are often used to calibrate river flow models, even though such models require highly accurate morpho-dynamic data in order to predict the flow field precisely. Under these circumstances, the data could be supplemented through experimental observations in a physical model using modern techniques. This paper proposes a methodology to generate highly accurate river bathymetry and water surface (WS) profiles for a physical model of a river network system using the CRP technique. For this task, a number of DSLR Nikon D5300 cameras (mounted at 3.5 m above the river bed) were used to capture images of the physical model and the flooding scenarios during the experiments. During the experiments, non-specular materials were introduced at the inlet and images were taken simultaneously from different orientations and altitudes with a significant overlap of 80%. Ground control points were surveyed using two ultrasonic sensors with ±0.5 mm vertical accuracy. The captured images were then processed in PhotoScan software to generate the DEM and WS profile. The generated data were then passed through statistical analysis to identify errors. The accuracy of the WS profile was limited by the extent and density of the non-specular powder and by stereo-matching discrepancies, as well as by several camera factors, including orientation, illumination, and altitude. The CRP technique for a large scale physical

  12. Low order physical models of vertical axis wind turbines

    NASA Astrophysics Data System (ADS)

    Craig, Anna; Dabiri, John; Koseff, Jeffrey

    2016-11-01

    In order to examine the ability of low-order physical models of vertical axis wind turbines to accurately reproduce key flow characteristics, experiments were conducted on rotating turbine models, rotating solid cylinders, and stationary porous flat plates (of both uniform and non-uniform porosities). From examination of the patterns of mean flow, the wake turbulence spectra, and several quantitative metrics, it was concluded that the rotating cylinders represent a reasonably accurate analog for the rotating turbines. In contrast, from examination of the patterns of mean flow, it was found that the porous flat plates represent only a limited analog for rotating turbines (for the parameters examined). These findings have implications for both laboratory experiments and numerical simulations, which have previously used analogous low order models in order to reduce experimental/computational costs. NSF GRF and SGF to A.C; ONR N000141211047 and the Gordon and Betty Moore Foundation Grant GBMF2645 to J.D.; and the Bob and Norma Street Environmental Fluid Mechanics Laboratory at Stanford University.

  13. Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.

    PubMed

    Huynh, Linh; Tagkopoulos, Ilias

    2015-08-21

    In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
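
    The two-step search pattern described above can be illustrated schematically: a cheap surrogate model prunes the candidate space, and only the survivors are re-evaluated with the expensive model. The objective functions below are hypothetical stand-ins, not the circuit models used by the actual tool.

```python
import itertools

def coarse_score(design):
    """Cheap, low-complexity surrogate used to rank candidates (hypothetical)."""
    return sum(design)

def fine_score(design):
    """Expensive, nonlinear model evaluated only on survivors (hypothetical)."""
    return sum(design) - 0.1 * (design[0] * design[1]) ** 2

# Step 1: enumerate candidate parameter combinations and prune cheaply.
candidates = list(itertools.product(range(1, 6), repeat=3))
survivors = sorted(candidates, key=coarse_score, reverse=True)[:10]

# Step 2: fine-grained search of the reduced solution space.
best = max(survivors, key=fine_score)
print("best design after coarse-to-fine search:", best)
```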

  14. A Simple and Accurate Rate-Driven Infiltration Model

    NASA Astrophysics Data System (ADS)

    Cui, G.; Zhu, J.

    2017-12-01

    In this study, we develop a novel Rate-Driven Infiltration Model (RDIMOD) for simulating infiltration into soils. Unlike traditional methods, RDIMOD avoids numerically solving the highly non-linear Richards equation or simply modeling with empirical parameters. RDIMOD employs the infiltration rate as model input to simulate the one-dimensional infiltration process by solving an ordinary differential equation. The model can simulate the evolutions of the wetting front, infiltration rate, and cumulative infiltration on any surface slope, including the vertical and horizontal directions. Compared to the results from the Richards equation for both vertical and horizontal infiltration, RDIMOD simply and accurately predicts infiltration processes for any type of soil and soil hydraulic model without numerical difficulty. Taking into account its accuracy, capability, and computational effectiveness and stability, RDIMOD can be used in large-scale hydrologic and land-atmosphere modeling.
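
    The rate-driven idea can be caricatured with a sharp-front mass balance: if the infiltration rate f(t) is given, a piston-like wetting front advances at dZ/dt = f(t)/Δθ, an ordinary differential equation that is trivial to integrate. This is a simplified surrogate for illustration only and is not the RDIMOD formulation itself.

```python
import numpy as np
from scipy.integrate import solve_ivp

def wetting_front_rhs(t, z, delta_theta, rate_fn):
    """Sharp-front mass balance: dZ/dt = f(t) / delta_theta (surrogate model)."""
    return rate_fn(t) / delta_theta

# Hypothetical infiltration-rate record (cm/h) decaying toward a steady value.
rate = lambda t: 1.0 + 4.0 * np.exp(-t / 2.0)

sol = solve_ivp(wetting_front_rhs, t_span=(0.0, 24.0), y0=[0.0],
                args=(0.3, rate), dense_output=True)
depth = float(sol.y[0, -1])
print("wetting-front depth after 24 h (cm):", round(depth, 1))
print("cumulative infiltration (cm):", round(0.3 * depth, 1))
```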

  15. Accurate electromagnetic modeling of terahertz detectors

    NASA Technical Reports Server (NTRS)

    Focardi, Paolo; McGrath, William R.

    2004-01-01

    Twin slot antennas coupled to superconducting devices have been developed over the years as single-pixel detectors in the terahertz (THz) frequency range for space-based astronomy applications. Used either for mixing or direct detection, they have been the object of several investigations, and are currently being developed for several missions funded or co-funded by NASA. Although they have shown promising performance in terms of noise and sensitivity, so far they have usually also shown a considerable disagreement between calculated and measured performance, especially when considering center frequency and bandwidth. In this paper we present a thorough and accurate electromagnetic model of the complete detector and we compare the results of calculations with measurements. Starting from a model of the embedding circuit, the effect of all the other elements of the detector on the coupled power has been analyzed. An extensive variety of measured and calculated data, as presented in this paper, demonstrates the effectiveness and reliability of the electromagnetic model at frequencies between 600 GHz and 2.5 THz.

  16. Physically-Based Reduced Order Modelling of a Uni-Axial Polysilicon MEMS Accelerometer

    PubMed Central

    Ghisi, Aldo; Mariani, Stefano; Corigliano, Alberto; Zerbini, Sarah

    2012-01-01

    In this paper, the mechanical response of a commercial off-the-shelf, uni-axial polysilicon MEMS accelerometer subject to drops is numerically investigated. To speed up the calculations, a simplified physically-based (beams and plate), two degrees of freedom model of the movable parts of the sensor is adopted. The capability and the accuracy of the model are assessed against three-dimensional finite element simulations, and against outcomes of experiments on instrumented samples. It is shown that the reduced order model provides accurate outcomes for the system dynamics. To also get rather accurate results in terms of stress fields within regions that are prone to fail upon high-g shocks, a correction factor is proposed by accounting for the local stress amplification induced by re-entrant corners. PMID:23202031
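
    A generic two-degree-of-freedom lumped surrogate of such a sensor can be integrated directly, with the drop represented as a half-sine base acceleration. The masses, stiffnesses, damping values, and shock amplitude below are hypothetical and are not the parameters identified in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical lumped parameters (proof mass m1 on k1/c1, secondary mode m2 on k2/c2).
m1, m2 = 1e-9, 2e-10          # kg
k1, k2 = 40.0, 150.0          # N/m
c1, c2 = 2e-6, 1e-6           # N*s/m

def base_accel(t, peak=2000 * 9.81, duration=1e-4):
    """Half-sine shock pulse representing the drop impact."""
    return peak * np.sin(np.pi * t / duration) if t < duration else 0.0

def rhs(t, y):
    x1, v1, x2, v2 = y                      # motion relative to the device frame
    a = base_accel(t)
    a1 = (-k1 * x1 - c1 * v1 + k2 * (x2 - x1) + c2 * (v2 - v1)) / m1 - a
    a2 = (-k2 * (x2 - x1) - c2 * (v2 - v1)) / m2 - a
    return [v1, a1, v2, a2]

sol = solve_ivp(rhs, (0.0, 2e-3), [0.0, 0.0, 0.0, 0.0], max_step=1e-6)
print("peak proof-mass displacement (m):", np.abs(sol.y[0]).max())
```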

  17. Dynamic inverse models in human-cyber-physical systems

    NASA Astrophysics Data System (ADS)

    Robinson, Ryan M.; Scobee, Dexter R. R.; Burden, Samuel A.; Sastry, S. Shankar

    2016-05-01

    Human interaction with the physical world is increasingly mediated by automation. This interaction is characterized by dynamic coupling between robotic (i.e. cyber) and neuromechanical (i.e. human) decision-making agents. Guaranteeing performance of such human-cyber-physical systems will require predictive mathematical models of this dynamic coupling. Toward this end, we propose a rapprochement between robotics and neuromechanics premised on the existence of internal forward and inverse models in the human agent. We hypothesize that, in tele-robotic applications of interest, a human operator learns to invert automation dynamics, directly translating from desired task to required control input. By formulating the model inversion problem in the context of a tracking task for a nonlinear control system in control-affine form, we derive criteria for exponential tracking and show that the resulting dynamic inverse model generally renders a portion of the physical system state (i.e., the internal dynamics) unobservable from the human operator's perspective. Under stability conditions, we show that the human can achieve exponential tracking without formulating an estimate of the system's state so long as they possess an accurate model of the system's dynamics. These theoretical results are illustrated using a planar quadrotor example. We then demonstrate that the automation can intervene to improve performance of the tracking task by solving an optimal control problem. Performance is guaranteed to improve under the assumption that the human learns and inverts the dynamic model of the altered system. We conclude with a discussion of practical limitations that may hinder exact dynamic model inversion.
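
    The core idea, inverting a control-affine plant so that a desired trajectory maps directly to the required input, can be shown for a simple second-order system. The pendulum-like dynamics, gains, and reference below are illustrative assumptions, not the quadrotor example from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Control-affine plant: x1' = x2, x2' = f(x) + g(x) * u.
f = lambda x: -9.81 * np.sin(x[0]) - 0.2 * x[1]
g = lambda x: 1.0 / (1.0 + 0.1 * np.cos(x[0]))

xd    = lambda t: 0.5 * np.sin(t)       # desired trajectory and derivatives
xd_d  = lambda t: 0.5 * np.cos(t)
xd_dd = lambda t: -0.5 * np.sin(t)

def inverse_model_control(t, x, kp=25.0, kd=10.0):
    """Dynamic inversion: cancel f and g exactly (i.e. assume an accurate
    internal model) and impose stable second-order tracking-error dynamics."""
    e, e_dot = x[0] - xd(t), x[1] - xd_d(t)
    v = xd_dd(t) - kd * e_dot - kp * e
    return (v - f(x)) / g(x)

def closed_loop(t, x):
    u = inverse_model_control(t, x)
    return [x[1], f(x) + g(x) * u]

sol = solve_ivp(closed_loop, (0.0, 10.0), [1.0, 0.0], max_step=0.01)
print("final tracking error:", abs(sol.y[0, -1] - xd(sol.t[-1])))
```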

  18. Accurate Modeling of Ionospheric Electromagnetic Fields Generated by a Low Altitude VLF Transmitter

    DTIC Science & Technology

    2009-03-31

    AFRL-RV-HA-TR-2009-1055 Accurate Modeling of Ionospheric Electromagnetic Fields Generated by a Low Altitude VLF Transmitter ...m (or even 500 m) at mid to high latitudes. At low latitudes, the FDTD model exhibits variations that make it difficult to determine a reliable...Scientific, Final 3. DATES COVERED (From - To) 02-08-2006 – 31-12-2008 4. TITLE AND SUBTITLE Accurate Modeling of Ionospheric Electromagnetic Fields

  19. Benchmarking atomic physics models for magnetically confined fusion plasma physics experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    May, M.J.; Finkenthal, M.; Soukhanovskii, V.

    In present magnetically confined fusion devices, high and intermediate Z impurities are either puffed into the plasma for divertor radiative cooling experiments or are sputtered from the high Z plasma facing armor. The beneficial cooling of the edge as well as the detrimental radiative losses from the core of these impurities can be properly understood only if the atomic physics used in the modeling of the cooling curves is very accurate. To this end, a comprehensive experimental and theoretical analysis of some relevant impurities is undertaken. Gases (Ne, Ar, Kr, and Xe) are puffed and nongases are introduced through laser ablation into the FTU tokamak plasma. The charge state distributions and total density of these impurities are determined from spatial scans of several photometrically calibrated vacuum ultraviolet and x-ray spectrographs (3–1600 Å), the Multiple Ionization State Transport (MIST) code, and a collisional radiative model. The radiative power losses are measured with bolometry, and the emissivity profiles were measured by a visible bremsstrahlung array. The ionization balance, excitation physics, and the radiative cooling curves are computed from the Hebrew University Lawrence Livermore atomic code (HULLAC) and are benchmarked by these experiments. (Supported by U.S. DOE Grant No. DE-FG02-86ER53214 at JHU and Contract No. W-7405-ENG-48 at LLNL.) © 1999 American Institute of Physics.

  20. Physics-Based Fragment Acceleration Modeling for Pressurized Tank Burst Risk Assessments

    NASA Technical Reports Server (NTRS)

    Manning, Ted A.; Lawrence, Scott L.

    2014-01-01

    As part of comprehensive efforts to develop physics-based risk assessment techniques for space systems at NASA, coupled computational fluid and rigid body dynamic simulations were carried out to investigate the flow mechanisms that accelerate tank fragments in bursting pressurized vessels. Simulations of several configurations were compared to analyses based on the industry-standard Baker explosion model, and were used to formulate an improved version of the model. The standard model, which neglects an external fluid, was found to agree best with simulation results only in configurations where the internal-to-external pressure ratio is very high and fragment curvature is small. The improved model introduces terms that accommodate an external fluid and better account for variations based on circumferential fragment count. Physics-based analysis was critical in increasing the model's range of applicability. The improved tank burst model can be used to produce more accurate risk assessments of space vehicle failure modes that involve high-speed debris, such as exploding propellant tanks and bursting rocket engines.

  1. Allele-sharing models: LOD scores and accurate linkage tests.

    PubMed

    Kong, A; Cox, N J

    1997-11-01

    Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested.

  2. Allele-sharing models: LOD scores and accurate linkage tests.

    PubMed Central

    Kong, A; Cox, N J

    1997-01-01

    Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested. PMID:9345087

  3. Fast and accurate calculation of dilute quantum gas using Uehling–Uhlenbeck model equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yano, Ryosuke, E-mail: ryosuke.yano@tokiorisk.co.jp

    The Uehling–Uhlenbeck (U–U) model equation is studied for the fast and accurate calculation of a dilute quantum gas. In particular, the direct simulation Monte Carlo (DSMC) method is used to solve the U–U model equation. DSMC analysis based on the U–U model equation is expected to enable the thermalization to be accurately obtained using a small number of sample particles and the dilute quantum gas dynamics to be calculated in a practical time. Finally, the applicability of DSMC analysis based on the U–U model equation to the fast and accurate calculation of a dilute quantum gas is confirmed by calculating the viscosity coefficient of a Bose gas on the basis of the Green–Kubo expression and the shock layer of a dilute Bose gas around a cylinder.

  4. Physical modelling in biomechanics.

    PubMed Central

    Koehl, M A R

    2003-01-01

    Physical models, like mathematical models, are useful tools in biomechanical research. Physical models enable investigators to explore parameter space in a way that is not possible using a comparative approach with living organisms: parameters can be varied one at a time to measure the performance consequences of each, while values and combinations not found in nature can be tested. Experiments using physical models in the laboratory or field can circumvent problems posed by uncooperative or endangered organisms. Physical models also permit some aspects of the biomechanical performance of extinct organisms to be measured. Use of properly scaled physical models allows detailed physical measurements to be made for organisms that are too small or fast to be easily studied directly. The process of physical modelling and the advantages and limitations of this approach are illustrated using examples from our research on hydrodynamic forces on sessile organisms, mechanics of hydraulic skeletons, food capture by zooplankton and odour interception by olfactory antennules. PMID:14561350

  5. Accurate Modeling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron; Scoccimarro, Roman

    2015-01-01

    The large-scale distribution of galaxies can be explained fairly simply by assuming (i) a cosmological model, which determines the dark matter halo distribution, and (ii) a simple connection between galaxies and the halos they inhabit. This conceptually simple framework, called the halo model, has been remarkably successful at reproducing the clustering of galaxies on all scales, as observed in various galaxy redshift surveys. However, none of these previous studies have carefully modeled the systematics and thus truly tested the halo model in a statistically rigorous sense. We present a new accurate and fully numerical halo model framework and test it against clustering measurements from two luminosity samples of galaxies drawn from the SDSS DR7. We show that the simple ΛCDM cosmology + halo model is not able to simultaneously reproduce the galaxy projected correlation function and the group multiplicity function. In particular, the more luminous sample shows significant tension with theory. We discuss the implications of our findings and how this work paves the way for constraining galaxy formation by accurate simultaneous modeling of multiple galaxy clustering statistics.
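
    The "simple connection between galaxies and halos" is usually expressed through a halo occupation distribution (HOD). The sketch below writes down a standard two-part mean occupation (an erf-shaped central term plus a power-law satellite term); the parameter values are placeholders, not the ones constrained in this work.

```python
import numpy as np
from scipy.special import erf

def n_central(m, log_m_min=12.0, sigma_logm=0.2):
    """Mean number of central galaxies in a halo of mass m (Msun/h)."""
    return 0.5 * (1.0 + erf((np.log10(m) - log_m_min) / sigma_logm))

def n_satellite(m, m0=10**12.2, m1=10**13.3, alpha=1.0):
    """Mean number of satellites: a power law above a cutoff mass, modulated
    by the central occupation."""
    m = np.asarray(m, dtype=float)
    return n_central(m) * (np.clip(m - m0, 0.0, None) / m1) ** alpha

halo_masses = np.logspace(11, 15, 5)
print("mean occupation:", np.round(n_central(halo_masses) + n_satellite(halo_masses), 3))
```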

  6. Accurate path integration in continuous attractor network models of grid cells.

    PubMed

    Burak, Yoram; Fiete, Ila R

    2009-02-01

    Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of approximately 10-100 meters and approximately 1-10 minutes. These findings form a proof-of-concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other.

  7. Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images

    NASA Technical Reports Server (NTRS)

    Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.

    1999-01-01

    Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.

  8. Local Debonding and Fiber Breakage in Composite Materials Modeled Accurately

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2001-01-01

    A prerequisite for full utilization of composite materials in aerospace components is accurate design and life prediction tools that enable the assessment of component performance and reliability. Such tools assist both structural analysts, who design and optimize structures composed of composite materials, and materials scientists who design and optimize the composite materials themselves. NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) software package (http://www.grc.nasa.gov/WWW/LPB/mac) addresses this need for composite design and life prediction tools by providing a widely applicable and accurate approach to modeling composite materials. Furthermore, MAC/GMC serves as a platform for incorporating new local models and capabilities that are under development at NASA, thus enabling these new capabilities to progress rapidly to a stage in which they can be employed by the code's end users.

  9. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE PAGES

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.

  10. Accurate RNA 5-methylcytosine site prediction based on heuristic physical-chemical properties reduction and classifier ensemble.

    PubMed

    Zhang, Ming; Xu, Yan; Li, Lei; Liu, Zi; Yang, Xibei; Yu, Dong-Jun

    2018-06-01

    RNA 5-methylcytosine (m5C) is an important post-transcriptional modification that plays an indispensable role in biological processes. The accurate identification of m5C sites from primary RNA sequences is especially useful for deeply understanding the mechanisms and functions of m5C. Due to the difficulty and expensive costs of identifying m5C sites with wet-lab techniques, developing fast and accurate machine-learning-based prediction methods is urgently needed. In this study, we proposed a new m5C site predictor, called M5C-HPCR, by introducing a novel heuristic nucleotide physicochemical property reduction (HPCR) algorithm and classifier ensemble. HPCR extracts multiple reducts of physical-chemical properties for encoding discriminative features, while the classifier ensemble is applied to integrate multiple base predictors, each of which is trained based on a separate reduct of the physical-chemical properties obtained from HPCR. Rigorous jackknife tests on two benchmark datasets demonstrate that M5C-HPCR outperforms state-of-the-art m5C site predictors, with the highest values of MCC (0.859) and AUC (0.962). We also implemented the webserver of M5C-HPCR, which is freely available at http://cslab.just.edu.cn:8080/M5C-HPCR/. Copyright © 2018 Elsevier Inc. All rights reserved.
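
    The reduct-plus-ensemble idea, training one base classifier per subset of physicochemical features and combining their outputs, can be sketched with generic tools. The synthetic data, the column subsets standing in for HPCR reducts, and the logistic-regression base learners below are all illustrative assumptions, not the M5C-HPCR pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for sequence encodings built from physical-chemical properties.
X = rng.normal(size=(600, 30))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=600) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Hypothetical property "reducts" = different column subsets of the encoding.
reducts = [slice(0, 10), slice(10, 20), slice(20, 30), slice(0, 30, 2)]

# One base predictor per reduct; the ensemble averages their probabilities.
models = [LogisticRegression(max_iter=1000).fit(X_tr[:, r], y_tr) for r in reducts]
proba = np.mean([m.predict_proba(X_te[:, r])[:, 1] for m, r in zip(models, reducts)], axis=0)
print("ensemble accuracy:", round(float(((proba > 0.5).astype(int) == y_te).mean()), 3))
```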

  11. Magnetic gaps in organic tri-radicals: From a simple model to accurate estimates.

    PubMed

    Barone, Vincenzo; Cacelli, Ivo; Ferretti, Alessandro; Prampolini, Giacomo

    2017-03-14

    The calculation of the energy gap between the magnetic states of organic poly-radicals still represents a challenging playground for quantum chemistry, and high-level techniques are required to obtain accurate estimates. On these grounds, the aim of the present study is twofold. On the one hand, it shows that, thanks to recent algorithmic and technical improvements, we are able to compute reliable quantum mechanical results for the systems of current fundamental and technological interest. On the other hand, proper parameterization of a simple Hubbard Hamiltonian allows for a sound rationalization of magnetic gaps in terms of basic physical effects, unraveling the role played by electron delocalization, Coulomb repulsion, and effective exchange in tuning the magnetic character of the ground state. As case studies, we have chosen three prototypical organic tri-radicals, namely, 1,3,5-trimethylenebenzene, 1,3,5-tridehydrobenzene, and 1,2,3-tridehydrobenzene, which differ either for geometric or electronic structure. After discussing the differences among the three species and their consequences on the magnetic properties in terms of the simple model mentioned above, accurate and reliable values for the energy gap between the lowest quartet and doublet states are computed by means of the so-called difference dedicated configuration interaction (DDCI) technique, and the final results are discussed and compared to both available experimental and computational estimates.
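
    How Hubbard parameters tune a magnetic gap is easiest to see in the smallest possible case, two electrons on two sites, where the singlet-triplet gap has a closed form. The paper's tri-radicals involve three centres and a richer parameterization, so the dimer below is only a conceptual illustration.

```python
import numpy as np

def hubbard_dimer_gap(t, u):
    """Singlet-triplet gap of the two-site, two-electron Hubbard model:
    E_triplet = 0, E_singlet = (U - sqrt(U**2 + 16 t**2)) / 2, so the gap is
    J = (sqrt(U**2 + 16 t**2) - U) / 2  ->  4 t**2 / U for U >> t."""
    return (np.sqrt(u ** 2 + 16.0 * t ** 2) - u) / 2.0

t = 0.5
for u in (1.0, 4.0, 10.0):
    print(f"t={t}, U={u}: gap = {hubbard_dimer_gap(t, u):.4f}, "
          f"perturbative 4t^2/U = {4 * t ** 2 / u:.4f}")
```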

  12. Comparing the Hydrologic and Watershed Processes between a Full Scale Stochastic Model Versus a Scaled Physical Model of Bell Canyon

    NASA Astrophysics Data System (ADS)

    Hernandez, K. F.; Shah-Fairbank, S.

    2016-12-01

    The San Dimas Experimental Forest has been designated as a research area by the United States Forest Service for use as a hydrologic testing facility since 1933 to investigate the watershed hydrology of the 27-square-mile area. Incorporation of a computer model provides validity to the testing of the physical model. This study focuses on the San Dimas Experimental Forest's Bell Canyon, one of the triad of watersheds contained within the Big Dalton watershed of the San Dimas Experimental Forest. A scaled physical model of Bell Canyon was constructed to highlight watershed characteristics and the effect of each on runoff. The physical model offers a comprehensive visualization of a natural watershed and can vary the characteristics of rainfall intensity, slope, and roughness through interchangeable parts and adjustments to the system. The scaled physical model is validated and calibrated through a HEC-HMS model to assure similitude of the system. Preliminary results of the physical model suggest that a 50-year storm event can be represented by a peak discharge of 2.2 × 10⁻³ cfs. When comparing the results to HEC-HMS, this equates to a flow relationship of approximately 1:160,000, which can be used to model other return periods. The completed Bell Canyon physical model can be used for educational instruction in the classroom, outreach in the community, and further research using the model as an accurate representation of the watershed present in the San Dimas Experimental Forest.

  13. Accurate modeling of the hose instability in plasma wakefield accelerators

    DOE PAGES

    Mehrling, T. J.; Benedetti, C.; Schroeder, C. B.; ...

    2018-05-20

    Hosing is a major challenge for the applicability of plasma wakefield accelerators and its modeling is therefore of fundamental importance to facilitate future stable and compact plasma-based particle accelerators. In this contribution, we present a new model for the evolution of the plasma centroid, which enables the accurate investigation of the hose instability in the nonlinear blowout regime. It paves the way for more precise and comprehensive studies of hosing, e.g., with drive and witness beams, which were not possible with previous models.

  14. Accurate modeling of the hose instability in plasma wakefield accelerators

    NASA Astrophysics Data System (ADS)

    Mehrling, T. J.; Benedetti, C.; Schroeder, C. B.; Martinez de la Ossa, A.; Osterhoff, J.; Esarey, E.; Leemans, W. P.

    2018-05-01

    Hosing is a major challenge for the applicability of plasma wakefield accelerators and its modeling is therefore of fundamental importance to facilitate future stable and compact plasma-based particle accelerators. In this contribution, we present a new model for the evolution of the plasma centroid, which enables the accurate investigation of the hose instability in the nonlinear blowout regime. It paves the way for more precise and comprehensive studies of hosing, e.g., with drive and witness beams, which were not possible with previous models.

  15. Accurate modeling of the hose instability in plasma wakefield accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mehrling, T. J.; Benedetti, C.; Schroeder, C. B.

    Hosing is a major challenge for the applicability of plasma wakefield accelerators and its modeling is therefore of fundamental importance to facilitate future stable and compact plasma-based particle accelerators. In this contribution, we present a new model for the evolution of the plasma centroid, which enables the accurate investigation of the hose instability in the nonlinear blowout regime. It paves the way for more precise and comprehensive studies of hosing, e.g., with drive and witness beams, which were not possible with previous models.

  16. Accurate analytical modeling of junctionless DG-MOSFET by Green's function approach

    NASA Astrophysics Data System (ADS)

    Nandi, Ashutosh; Pandey, Nilesh

    2017-11-01

    An accurate analytical model of the junctionless double gate MOSFET (JL-DG-MOSFET) in the subthreshold regime of operation is developed in this work using a Green's function approach. The approach considers 2-D mixed boundary conditions and multi-zone techniques to provide an exact analytical solution to the 2-D Poisson's equation. The Fourier coefficients are calculated correctly to derive the potential equations that are further used to model the channel current and subthreshold slope of the device. The threshold voltage roll-off is computed from parallel shifts of the Ids-Vgs curves between the long-channel and short-channel devices. It is observed that the Green's function approach of solving the 2-D Poisson's equation in both the oxide and silicon regions can accurately predict the channel potential, subthreshold current (Isub), threshold voltage (Vt) roll-off and subthreshold slope (SS) of both long- and short-channel devices designed with different doping concentrations and higher as well as lower tsi/tox ratios. All the analytical model results are verified through comparisons with TCAD Sentaurus simulation results. It is observed that the model matches quite well with the TCAD device simulations.

  17. A new accurate quadratic equation model for isothermal gas chromatography and its comparison with the linear model

    PubMed Central

    Wu, Liejun; Chen, Maoxue; Chen, Yongli; Li, Qing X.

    2013-01-01

    The gas holdup time (tM) is a dominant parameter in gas chromatographic retention models. The difference equation (DE) model proposed by Wu et al. (J. Chromatogr. A 2012, http://dx.doi.org/10.1016/j.chroma.2012.07.077) excluded tM. In the present paper, we propose that the relationship between the adjusted retention time t′R(z) and carbon number z of n-alkanes follows a quadratic equation (QE) when an accurate tM is obtained. This QE model is the same as or better than the DE model for an accurate expression of the retention behavior of n-alkanes and model applications. The QE model covers a larger range of n-alkanes with better curve fittings than the linear model. The accuracy of the QE model was approximately 2–6 times better than the DE model and 18–540 times better than the LE model. Standard deviations of the QE model were approximately 2–3 times smaller than those of the DE model. PMID:22989489
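
    Mechanically, comparing the quadratic (QE-type) and linear fits is a one-line polynomial regression; the sketch below does this for hypothetical adjusted retention times. Whether the quadratic is applied to t′R itself or to a transformed quantity, and the actual data, should be taken from the paper; this only shows the comparison machinery.

```python
import numpy as np

# Hypothetical adjusted retention times t'_R(z) (min) for n-alkanes C8-C16.
z = np.arange(8, 17)
t_adj = np.array([2.10, 3.15, 4.50, 6.15, 8.10, 10.35, 12.90, 15.75, 18.90])

lin = np.polyfit(z, t_adj, 1)     # linear (LE-type) fit
quad = np.polyfit(z, t_adj, 2)    # quadratic (QE-type) fit

rms = lambda p: float(np.sqrt(np.mean((np.polyval(p, z) - t_adj) ** 2)))
print("RMS residual, linear fit   :", round(rms(lin), 4))
print("RMS residual, quadratic fit:", round(rms(quad), 4))
```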

  18. Implementing a modeling software for animated protein-complex interactions using a physics simulation library.

    PubMed

    Ueno, Yutaka; Ito, Shuntaro; Konagaya, Akihiko

    2014-12-01

    To better understand the behaviors and structural dynamics of proteins within a cell, novel software tools are being developed that can create molecular animations based on the findings of structural biology. This study describes our method, developed from earlier prototypes, for detecting collisions and examining the soft-body dynamics of molecular models. The code was implemented with a software development toolkit for rigid-body dynamics simulation and a three-dimensional graphics library. The essential functions of the target software system included a basic molecular modeling environment, collision detection in the molecular models, and physical simulation of the movement of the model. Taking advantage of recent software technologies such as physics simulation modules and an interpreted scripting language, the functions required for accurate and meaningful molecular animation were implemented efficiently.

  19. Research on the equivalence between digital core and rock physics models

    NASA Astrophysics Data System (ADS)

    Yin, Xingyao; Zheng, Ying; Zong, Zhaoyun

    2017-06-01

    In this paper, we calculate the elastic modulus of 3D digital cores using the finite element method, systematically study the equivalence between the digital core model and various rock physics models, and carefully analyze the conditions of the equivalence relationships. The influences of the pore aspect ratio and consolidation coefficient on the equivalence relationships are also further refined. Theoretical analysis indicates that the finite element simulation based on the digital core is equivalent to the boundary theory and the Gassmann model. For pure sandstones, effective medium theory models (SCA and DEM) and the digital core models are equivalent when the pore aspect ratio is within a certain range, and dry frame models (Nur and Pride models) and the digital core model are equivalent when the consolidation coefficient takes a specific value. According to the equivalence relationships, comparing the elastic modulus results of the effective medium theory and digital rock physics is an effective approach for predicting the pore aspect ratio. Furthermore, the traditional digital core models with two components (pores and matrix) are extended to multiple minerals to more precisely characterize the features and mineral compositions of rocks in underground reservoirs. This paper studies the effects of shale content on the elastic modulus in shaly sandstones. When structural shale is present in the sandstone, the elastic moduli of the digital cores are in reasonable agreement with the DEM model. However, when dispersed shale is present in the sandstone, the Hill model cannot describe the changes in the stiffness of the pore space precisely. Digital rock physics describes rock features such as the pore aspect ratio, consolidation coefficient and rock stiffness. Therefore, digital core technology can, to some extent, replace the theoretical rock physics models because the results are more accurate than those of the theoretical models.
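
    The Gassmann model referred to above is the standard low-frequency fluid-substitution relation; a direct implementation is short. The moduli and porosity below are generic sandstone-like numbers chosen for illustration, not values from the study.

```python
def gassmann_k_sat(k_dry, k_mineral, k_fluid, porosity):
    """Gassmann fluid substitution:
    K_sat = K_dry + (1 - K_dry/K_min)**2 /
            (phi/K_fl + (1 - phi)/K_min - K_dry/K_min**2)."""
    num = (1.0 - k_dry / k_mineral) ** 2
    den = (porosity / k_fluid + (1.0 - porosity) / k_mineral
           - k_dry / k_mineral ** 2)
    return k_dry + num / den

# Hypothetical water-saturated sandstone, moduli in GPa, 20% porosity.
print("K_sat =", round(gassmann_k_sat(k_dry=12.0, k_mineral=37.0,
                                      k_fluid=2.25, porosity=0.20), 2), "GPa")
```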

  20. Development of physical and mathematical models for the Porous Ceramic Tube Plant Nutrification System (PCTPNS)

    NASA Technical Reports Server (NTRS)

    Tsao, D. Teh-Wei; Okos, M. R.; Sager, J. C.; Dreschel, T. W.

    1992-01-01

    A physical model of the Porous Ceramic Tube Plant Nutrification System (PCTPNS) was developed through microscopic observations of the tube surface under various operational conditions. In addition, a mathematical model of this system was developed which incorporated the effects of the applied suction pressure, surface tension, and gravitational forces as well as the porosity and physical dimensions of the tubes. The flow of liquid through the PCTPNS was thus characterized for non-biological situations. One of the key factors in the verification of these models is the accurate and rapid measurement of the 'wetness' or holding capacity of the ceramic tubes. This study evaluated a thermistor based moisture sensor device and recommendations for future research on alternative sensing devices are proposed. In addition, extensions of the physical and mathematical models to include the effects of plant physiology and growth are also discussed for future research.

  1. Accurate monoenergetic electron parameters of laser wakefield in a bubble model

    NASA Astrophysics Data System (ADS)

    Raheli, A.; Rahmatallahpur, S. H.

    2012-11-01

    A reliable analytical expression for the potential of plasma waves with phase velocities near the speed of light is derived. The presented spheroid cavity model is more consistent than the previous spherical and ellipsoidal models, and it explains the monoenergetic electron trajectory more accurately, especially in the relativistic region. As a result, the quasi-monoenergetic electron output beam produced by the laser-plasma interaction can be more appropriately described with this model.

  2. Standard Model of Particle Physics--a health physics perspective.

    PubMed

    Bevelacqua, J J

    2010-11-01

    The Standard Model of Particle Physics is reviewed with an emphasis on its relationship to the physics supporting the health physics profession. Concepts important to health physics are emphasized and specific applications are presented. The capability of the Standard Model to provide health physics relevant information is illustrated with application of conservation laws to neutron and muon decay and in the calculation of the neutron mean lifetime.

  3. A pairwise maximum entropy model accurately describes resting-state human brain networks

    PubMed Central

    Watanabe, Takamitsu; Hirose, Satoshi; Wada, Hiroyuki; Imai, Yoshio; Machida, Toru; Shirouzu, Ichiro; Konishi, Seiki; Miyashita, Yasushi; Masuda, Naoki

    2013-01-01

    The resting-state human brain networks underlie fundamental cognitive functions and consist of complex interactions among brain regions. However, the level of complexity of the resting-state networks has not been quantified, which has prevented comprehensive descriptions of the brain activity as an integrative system. Here, we address this issue by demonstrating that a pairwise maximum entropy model, which takes into account region-specific activity rates and pairwise interactions, can be robustly and accurately fitted to resting-state human brain activities obtained by functional magnetic resonance imaging. Furthermore, to validate the approximation of the resting-state networks by the pairwise maximum entropy model, we show that the functional interactions estimated by the pairwise maximum entropy model reflect anatomical connexions more accurately than the conventional functional connectivity method. These findings indicate that a relatively simple statistical model not only captures the structure of the resting-state networks but also provides a possible method to derive physiological information about various large-scale brain networks. PMID:23340410

  4. Building Mental Models by Dissecting Physical Models

    ERIC Educational Resources Information Center

    Srivastava, Anveshna

    2016-01-01

    When students build physical models from prefabricated components to learn about model systems, there is an implicit trade-off between the physical degrees of freedom in building the model and the intensity of instructor supervision needed. Models that are too flexible, permitting multiple possible constructions, require greater supervision to…

  5. Physical modeling in geomorphology: are boundary conditions necessary?

    NASA Astrophysics Data System (ADS)

    Cantelli, A.

    2012-12-01

    Referring to physical experimental design in geomorphology, boundary conditions are key elements that determine the quality of the results and therefore the development of the study. For years engineers have modeled structures, such as dams and bridges, with high precision and excellent results. Until the last decade, a great part of the physical experimental work in geomorphology was carried out with an engineer-like approach, requiring an accurate scaling analysis to determine inflow parameters and initial geometrical conditions. During the last decade, however, the way we approach physical experiments has changed significantly. In particular, boundary conditions and initial conditions are considered unknown factors that need to be discovered during the experiment. This new philosophy leads to a more demanding data acquisition process but relaxes the obligation to know the appropriate input and initial conditions a priori, providing the flexibility to discover those data. Here I present some practical examples of this experimental approach in deepwater geomorphology: some questions about the scaling of turbidity currents and a new large experimental facility built at the Universidade Federal do Rio Grande do Sul, Brasil.

  6. Accurate, low-cost 3D-models of gullies

    NASA Astrophysics Data System (ADS)

    Onnen, Nils; Gronz, Oliver; Ries, Johannes B.; Brings, Christine

    2015-04-01

    Soil erosion is a widespread problem in arid and semi-arid areas, and its most severe form is gully erosion. Gullies often cut into agricultural farmland and can render entire areas unproductive. To understand the development and processes inside and around gullies, we calculated detailed 3D models of gullies in the Souss Valley in southern Morocco. Near Taroudant, we had four study areas with five gullies differing in size, volume and activity. Using a Canon HF G30 camcorder, we recorded several series of Full HD videos at 25 fps. Afterwards, we used Structure from Motion (SfM) to create the models. To generate accurate models while maintaining feasible runtimes, it is necessary to select around 1500-1700 images from the video, and the overlap of neighboring images should be at least 80%. It is also very important to avoid selecting photos that are blurry or out of focus: nearby pixels of a blurry image tend to have similar color values. We therefore used a MATLAB script to compare the derivatives of the images; the higher the sum of the derivatives, the sharper the image of similar objects. MATLAB subdivides the video into image intervals and, from each interval, selects the image with the highest sum. For example, a 20 min video at 25 fps amounts to 30,000 single frames; the program inspects the first 20 frames, saves the sharpest, moves on to the next 20 frames, and so on. Using this algorithm, we selected 1500 images for our modeling. With VisualSFM, we calculated features and the matches between all images and produced a point cloud. MeshLab was then used to build a surface from it using the Poisson surface reconstruction approach. Afterwards we are able to calculate the size and the volume of the gullies. It is also possible to determine soil erosion rates if we compare the data with old recordings. The final step would be the combination of the terrestrial data with the data from our aerial photography. So far, the method works well and we
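
    The interval-based sharpest-frame selection described above (sum of image derivatives as a sharpness score, one frame kept per fixed-length interval) can be sketched as follows; the original work used a MATLAB script, so this Python/OpenCV version is only an illustrative re-implementation with a hypothetical file path.

```python
# Illustrative re-implementation of the interval-based sharpest-frame selection
# described above (the original used a MATLAB script). Requires OpenCV and NumPy.
import cv2
import numpy as np

def sharpness(frame):
    """Sharpness proxy: sum of absolute gradients of the grayscale frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = np.abs(np.diff(gray, axis=1)).sum()
    gy = np.abs(np.diff(gray, axis=0)).sum()
    return gx + gy

def select_sharpest_frames(video_path, interval=20):
    """Return the index of the sharpest frame from every `interval` consecutive frames."""
    cap = cv2.VideoCapture(video_path)
    selected, buffer_scores, frame_idx = [], [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        buffer_scores.append((sharpness(frame), frame_idx))
        frame_idx += 1
        if len(buffer_scores) == interval:
            selected.append(max(buffer_scores)[1])   # keep index of sharpest frame
            buffer_scores = []
    cap.release()
    return selected

# Example (hypothetical path): frame indices to export for SfM processing.
# frames_for_sfm = select_sharpest_frames("gully_video.mp4", interval=20)
```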

  7. A unified dislocation density-dependent physical-based constitutive model for cold metal forming

    NASA Astrophysics Data System (ADS)

    Schacht, K.; Motaman, A. H.; Prahl, U.; Bleck, W.

    2017-10-01

    Dislocation-density-dependent physical-based constitutive models of metal plasticity, while computationally efficient and history-dependent, can accurately account for varying process parameters such as strain, strain rate and temperature; different loading modes such as continuous deformation, creep and relaxation; microscopic metallurgical processes; and varying chemical composition within an alloy family. Since these models are founded on the essential phenomena dominating the deformation, they have a larger range of usability and validity. They are also suitable for manufacturing-chain simulations, since they can efficiently compute the cumulative effect of the various manufacturing processes by following the material state through the entire manufacturing chain, including interpass periods, and give a realistic prediction of the material behavior and final product properties. In the physical-based constitutive model of cold metal plasticity introduced in this study, the physical processes influencing cold and warm plastic deformation in polycrystalline metals are described using physical/metallurgical internal variables such as the dislocation density and the effective grain size. The evolution of these internal variables is calculated using equations that describe the physical processes dominating the material behavior during cold plastic deformation. For validation, the model is numerically implemented in a general implicit isotropic elasto-viscoplasticity algorithm as a user-defined material subroutine (UMAT) in ABAQUS/Standard and used for finite element simulation of upsetting tests and a complete cold forging cycle of a case-hardenable MnCr steel family.

  8. An Accurate Absorption-Based Net Primary Production Model for the Global Ocean

    NASA Astrophysics Data System (ADS)

    Silsbe, G.; Westberry, T. K.; Behrenfeld, M. J.; Halsey, K.; Milligan, A.

    2016-02-01

    Net primary production (NPP) is a vital living link in the global carbon cycle, and understanding how it varies through space, time, and across climatic oscillations (e.g. ENSO) is a key objective in oceanographic research. The continual improvement of ocean observing satellites and data analytics now presents greater opportunities for advanced understanding and characterization of the factors regulating NPP. In particular, the emergence of spectral inversion algorithms now permits accurate retrievals of the phytoplankton absorption coefficient (aΦ) from space. As NPP reflects the efficiency with which absorbed energy is converted into carbon biomass, aΦ measurements circumvent chlorophyll-based empirical approaches by permitting direct and accurate measurements of phytoplankton energy absorption. It has long been recognized, and perhaps underappreciated, that NPP and phytoplankton growth rates display muted variability when normalized to aΦ rather than chlorophyll. Here we present a novel absorption-based NPP model that parameterizes the underlying physiological mechanisms behind this muted variability, and apply this physiological model to the global ocean. Through a comparison against field data from the Hawaii and Bermuda Ocean Time Series, we demonstrate how this approach yields more accurate NPP estimates than other published NPP models. By normalizing NPP to satellite estimates of phytoplankton carbon biomass, this presentation also explores the seasonality of phytoplankton growth rates across several oceanic regions. Finally, we discuss how future advances in remote sensing (e.g. hyperspectral satellites, LIDAR, autonomous profilers) can be exploited to further improve absorption-based NPP models.

  9. Accurate pressure gradient calculations in hydrostatic atmospheric models

    NASA Technical Reports Server (NTRS)

    Carroll, John J.; Mendez-Nunez, Luis R.; Tanrikulu, Saffet

    1987-01-01

    A method for the accurate calculation of the horizontal pressure gradient acceleration in hydrostatic atmospheric models is presented which is especially useful in situations where the isothermal surfaces are not parallel to the vertical coordinate surfaces. The present method is shown to be exact if the potential temperature lapse rate is constant between the vertical pressure integration limits. The technique is applied to both the integration of the hydrostatic equation and the computation of the slope correction term in the horizontal pressure gradient. A fixed vertical grid and a dynamic grid defined by the significant levels in the vertical temperature distribution are employed.

  10. Accurate Heart Rate Monitoring During Physical Exercises Using PPG.

    PubMed

    Temko, Andriy

    2017-09-01

    The challenging task of heart rate (HR) estimation from the photoplethysmographic (PPG) signal during intensive physical exercise is tackled in this paper. The study presents a detailed analysis of a novel algorithm (WFPV) that exploits a Wiener filter to attenuate the motion artifacts, a phase vocoder to refine the HR estimate and user-adaptive post-processing to track the subject's physiology. Additionally, an offline version of the HR estimation algorithm that uses Viterbi decoding is designed for scenarios that do not require online HR monitoring (WFPV+VD). The performance of the HR estimation systems is rigorously compared with existing algorithms on the publicly available database of 23 PPG recordings. On the whole dataset of 23 PPG recordings, the algorithms result in average absolute errors of 1.97 and 1.37 BPM in the online and offline modes, respectively. On the test dataset of 10 PPG recordings which were most corrupted with motion artifacts, WFPV has an error of 2.95 BPM on its own and 2.32 BPM in an ensemble with two existing algorithms. The error rate is significantly reduced when compared with the state-of-the-art PPG-based HR estimation methods. The proposed system is shown to be accurate in the presence of strong motion artifacts and in contrast to existing alternatives has very few free parameters to tune. The algorithm has a low computational cost and can be used for fitness tracking and health monitoring in wearable devices. The MATLAB implementation of the algorithm is provided online.
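
    As a toy illustration of spectral HR estimation from a PPG window (not the WFPV algorithm itself, whose Wiener-filter and phase-vocoder stages are not reproduced here), the sketch below picks the dominant spectral peak in a plausible heart-rate band and lightly constrains it with the previous estimate; all names and values are hypothetical.

```python
# Toy illustration (not the WFPV algorithm): estimate HR from one PPG window by
# picking the dominant spectral peak in a plausible heart-rate band, constrained
# by the previous estimate to mimic physiological tracking.
import numpy as np
from scipy.signal import welch

def estimate_hr(ppg_window, fs, prev_bpm=None, max_jump_bpm=10.0):
    """Return an HR estimate in BPM from one PPG window sampled at fs Hz."""
    freqs, psd = welch(ppg_window, fs=fs, nperseg=min(len(ppg_window), 1024))
    band = (freqs >= 0.7) & (freqs <= 3.5)          # ~40-210 BPM search band
    f_band, p_band = freqs[band], psd[band]
    if prev_bpm is not None:                        # simple tracking constraint
        near_prev = np.abs(f_band * 60.0 - prev_bpm) <= max_jump_bpm
        if near_prev.any():
            f_band, p_band = f_band[near_prev], p_band[near_prev]
    return 60.0 * f_band[np.argmax(p_band)]

# Usage (hypothetical signal): an 8 s window sampled at 125 Hz
# bpm = estimate_hr(ppg, fs=125, prev_bpm=95.0)
```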

  11. An analytic model for accurate spring constant calibration of rectangular atomic force microscope cantilevers.

    PubMed

    Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang

    2015-10-29

    Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. The calibration within the framework of thin plate theory undoubtedly has a higher accuracy and broader scope than that within the well-established beam theory. However, thin plate theory-based accurate analytic determination of the constant has been perceived as an extremely difficult issue. In this paper, we implement thin plate theory-based analytic modeling for the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and the Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found: the normalized spring constant depends only on the Poisson's ratio, the normalized dimensions and the normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as the benchmark for accurate calibration of rectangular AFM cantilevers.

  12. Physical models have gender-specific effects on student understanding of protein structure-function relationships.

    PubMed

    Forbes-Lorman, Robin M; Harris, Michelle A; Chang, Wesley S; Dent, Erik W; Nordheim, Erik V; Franzen, Margaret A

    2016-07-08

    Understanding how basic structural units influence function is identified as a foundational/core concept for undergraduate biological and biochemical literacy. It is essential for students to understand this concept at all size scales, but it is often more difficult for students to understand structure-function relationships at the molecular level, which they cannot as effectively visualize. Students need to develop accurate, 3-dimensional mental models of biomolecules to understand how biomolecular structure affects cellular functions at the molecular level, yet most traditional curricular tools such as textbooks include only 2-dimensional representations. We used a controlled, backward design approach to investigate how hand-held physical molecular model use affected students' ability to logically predict structure-function relationships. Brief (one class period) physical model use increased quiz scores for females, whereas there was no significant increase in scores for males using physical models. Females also self-reported higher learning gains in their understanding of context-specific protein function. Gender differences in spatial visualization may explain the gender-specific benefits of physical model use observed. © 2016 The Authors Biochemistry and Molecular Biology Education published by Wiley Periodicals, Inc. on behalf of International Union of Biochemistry and Molecular Biology, 44(4):326-335, 2016. © 2016 The International Union of Biochemistry and Molecular Biology.

  13. Simplified biased random walk model for RecA-protein-mediated homology recognition offers rapid and accurate self-assembly of long linear arrays of binding sites

    NASA Astrophysics Data System (ADS)

    Kates-Harbeck, Julian; Tilloy, Antoine; Prentiss, Mara

    2013-07-01

    Inspired by RecA-protein-based homology recognition, we consider the pairing of two long linear arrays of binding sites. We propose a fully reversible, physically realizable biased random walk model for rapid and accurate self-assembly due to the spontaneous pairing of matching binding sites, where the statistics of the searched sample are included. In the model, there are two bound conformations, and the free energy for each conformation is a weakly nonlinear function of the number of contiguous matched bound sites.

  14. A Multiscale Red Blood Cell Model with Accurate Mechanics, Rheology, and Dynamics

    PubMed Central

    Fedosov, Dmitry A.; Caswell, Bruce; Karniadakis, George Em

    2010-01-01

    Red blood cells (RBCs) have highly deformable viscoelastic membranes exhibiting complex rheological response and rich hydrodynamic behavior governed by special elastic and bending properties and by the external/internal fluid and membrane viscosities. We present a multiscale RBC model that is able to predict RBC mechanics, rheology, and dynamics in agreement with experiments. Based on an analytic theory, the modeled membrane properties can be uniquely related to the experimentally established RBC macroscopic properties without any adjustment of parameters. The RBC linear and nonlinear elastic deformations match those obtained in optical-tweezers experiments. The rheological properties of the membrane are compared with those obtained in optical magnetic twisting cytometry, membrane thermal fluctuations, and creep followed by cell recovery. The dynamics of RBCs in shear and Poiseuille flows is tested against experiments and theoretical predictions, and the applicability of the latter is discussed. Our findings clearly indicate that a purely elastic model for the membrane cannot accurately represent the RBC's rheological properties and its dynamics, and therefore accurate modeling of a viscoelastic membrane is necessary. PMID:20483330

  15. Optimal Cluster Mill Pass Scheduling With an Accurate and Rapid New Strip Crown Model

    NASA Astrophysics Data System (ADS)

    Malik, Arif S.; Grandhi, Ramana V.; Zipf, Mark E.

    2007-05-01

    Besides the requirement to roll coiled sheet at high levels of productivity, the optimal pass scheduling of cluster-type reversing cold mills presents the added challenge of assigning mill parameters that facilitate the best possible strip flatness. The pressures of intense global competition, and the requirements for increasingly thinner, higher quality specialty sheet products that are more difficult to roll, continue to force metal producers to commission innovative flatness-control technologies. This means that during the on-line computerized set-up of rolling mills, the mathematical model should not only determine the minimum total number of passes and maximum rolling speed, it should simultaneously optimize the pass-schedule so that desired flatness is assured, either by manual or automated means. In many cases today, however, on-line prediction of strip crown and corresponding flatness for the complex cluster-type rolling mills is typically addressed either by trial and error, by approximate deflection models for equivalent vertical roll-stacks, or by non-physical pattern recognition style models. The abundance of the aforementioned methods is largely due to the complexity of cluster-type mill configurations and the lack of deflection models with sufficient accuracy and speed for on-line use. Without adequate assignment of the pass-schedule set-up parameters, it may be difficult or impossible to achieve the required strip flatness. In this paper, we demonstrate optimization of cluster mill pass-schedules using a new accurate and rapid strip crown model. This pass-schedule optimization includes computations of the predicted strip thickness profile to validate mathematical constraints. In contrast to many of the existing methods for on-line prediction of strip crown and flatness on cluster mills, the demonstrated method requires minimal prior tuning and no extensive training with collected mill data. To rapidly and accurately solve the multi-contact problem

  16. Towards Accurate Modelling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron K.; Scoccimarro, Roman; Piscionere, Jennifer A.; Wibking, Benjamin D.

    2018-04-01

    Interpreting the small-scale clustering of galaxies with halo models can elucidate the connection between galaxies and dark matter halos. Unfortunately, the modelling is typically not sufficiently accurate for ruling out models statistically. It is thus difficult to use the information encoded in small scales to test cosmological models or probe subtle features of the galaxy-halo connection. In this paper, we attempt to push halo modelling into the "accurate" regime with a fully numerical mock-based methodology and careful treatment of statistical and systematic errors. With our forward-modelling approach, we can incorporate clustering statistics beyond the traditional two-point statistics. We use this modelling methodology to test the standard ΛCDM + halo model against the clustering of SDSS DR7 galaxies. Specifically, we use the projected correlation function, group multiplicity function and galaxy number density as constraints. We find that while the model fits each statistic separately, it struggles to fit them simultaneously. Adding group statistics leads to a more stringent test of the model and significantly tighter constraints on model parameters. We explore the impact of varying the adopted halo definition and cosmological model and find that changing the cosmology makes a significant difference. The most successful model we tried (Planck cosmology with Mvir halos) matches the clustering of low luminosity galaxies, but exhibits a 2.3σ tension with the clustering of luminous galaxies, thus providing evidence that the "standard" halo model needs to be extended. This work opens the door to adding interesting freedom to the halo model and including additional clustering statistics as constraints.

  17. A Physical Parameterization of Snow Albedo for Use in Climate Models.

    NASA Astrophysics Data System (ADS)

    Marshall, Susan Elaine

    The albedo of a natural snowcover is highly variable, ranging from 90 percent for clean, new snow to 30 percent for old, dirty snow. This range in albedo represents a difference in surface energy absorption of 10 to 70 percent of incident solar radiation. Most general circulation models (GCMs) fail to calculate the surface snow albedo accurately, yet the results of these models are sensitive to the assumed value of the snow albedo. This study replaces the current simple empirical parameterizations of snow albedo with a physically-based parameterization which is accurate (within +/- 3% of theoretical estimates) yet efficient to compute. The parameterization is designed as a FORTRAN subroutine (called SNOALB) which can be easily implemented into model code. The subroutine requires less than 0.02 seconds of computer time (CRAY X-MP) per call and adds only one new parameter to the model calculations, the snow grain size. The snow grain size can be calculated according to one of the two methods offered in this thesis. All other input variables to the subroutine are available from a climate model. The subroutine calculates a visible, near-infrared and solar (0.2-5 μm) snow albedo and offers a choice of two wavelengths (0.7 and 0.9 μm) at which the solar spectrum is separated into the visible and near-infrared components. The parameterization is incorporated into the National Center for Atmospheric Research (NCAR) Community Climate Model, version 1 (CCM1), and the results of a five-year, seasonal cycle, fixed hydrology experiment are compared to the current model snow albedo parameterization. The results show the SNOALB albedos to be comparable to the old CCM1 snow albedos for current climate conditions, with generally higher visible and lower near-infrared snow albedos using the new subroutine. However, this parameterization offers a greater predictability for climate change experiments outside the range of current snow conditions because it is physically-based and

  18. Sensitivities of the hydrologic cycle to model physics, grid resolution, and ocean type in the aquaplanet Community Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Benedict, James J.; Medeiros, Brian; Clement, Amy C.; Pendergrass, Angeline G.

    2017-06-01

    A key challenge for climate models is to accurately represent how often it precipitates and at what intensity. Model precipitation errors are closely tied to imperfect representations of physical processes too small to be resolved on the model grid. The problem is compounded by the complexity of contemporary climate models and the many model configuration options available. In this study, we use an aquaplanet, a simplified global climate model entirely devoid of land masses, to explore the response of precipitation to several aspects of model configuration in a present-day climate state. Our results suggest that critical precipitation patterns, including extreme precipitation events that have large socio-economic impacts, are strongly sensitive to horizontal grid resolution and the representation of unresolved physical processes. Identification and understanding of such model configuration-related precipitation responses in the present-day climate will provide a more accurate estimate of model uncertainty necessary for an improved interpretation of precipitation changes in global warming projections.

  19. Accurate modeling of high-repetition rate ultrashort pulse amplification in optical fibers

    PubMed Central

    Lindberg, Robert; Zeil, Peter; Malmström, Mikael; Laurell, Fredrik; Pasiskevicius, Valdas

    2016-01-01

    A numerical model for amplification of ultrashort pulses with high repetition rates in fiber amplifiers is presented. The pulse propagation is modeled by jointly solving the steady-state rate equations and the generalized nonlinear Schrödinger equation, which allows accurate treatment of nonlinear and dispersive effects whilst considering arbitrary spatial and spectral gain dependencies. A comparison of data computed with the developed model and experimental results shows good agreement. PMID:27713496

  20. Linearized Flux Evolution (LiFE): A technique for rapidly adapting fluxes from full-physics radiative transfer models

    NASA Astrophysics Data System (ADS)

    Robinson, Tyler D.; Crisp, David

    2018-05-01

    Solar and thermal radiation are critical aspects of planetary climate, with gradients in radiative energy fluxes driving heating and cooling. Climate models require that radiative transfer tools be versatile, computationally efficient, and accurate. Here, we describe a technique that uses an accurate full-physics radiative transfer model to generate a set of atmospheric radiative quantities which can be used to linearly adapt radiative flux profiles to changes in the atmospheric and surface state: the Linearized Flux Evolution (LiFE) approach. These radiative quantities describe how each model layer in a plane-parallel atmosphere reflects and transmits light, as well as how the layer generates diffuse radiation by thermal emission and by scattering light from the direct solar beam. By computing derivatives of these layer radiative properties with respect to dynamic elements of the atmospheric state, we can then efficiently adapt the flux profiles computed by the full-physics model to new atmospheric states. We validate the LiFE approach, and then apply this approach to Mars, Earth, and Venus, demonstrating the information contained in the layer radiative properties and their derivatives, as well as how the LiFE approach can be used to determine the thermal structure of radiative and radiative-convective equilibrium states in one-dimensional atmospheric models.
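
    A minimal sketch of the first-order update at the heart of this kind of linearized approach is shown below; the array names and shapes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the first-order flux-update idea behind a linearized approach
# such as LiFE: fluxes computed once by a full-physics model are adapted to a
# perturbed atmospheric state using precomputed derivatives (Jacobians).
# Names and shapes here are illustrative assumptions.
import numpy as np

def adapt_fluxes(flux_ref, jacobians, state_ref, state_new):
    """Linearly adapt a flux profile to a new atmospheric state.

    flux_ref  : (n_levels,) reference flux profile from the full-physics model
    jacobians : dict mapping state-variable name -> (n_levels, n_layers) dF/dx
    state_ref : dict mapping state-variable name -> (n_layers,) reference values
    state_new : dict with the same keys, holding the perturbed values
    """
    flux = flux_ref.copy()
    for name, dF_dx in jacobians.items():
        flux += dF_dx @ (state_new[name] - state_ref[name])
    return flux

# Example (hypothetical 10-layer column with a temperature perturbation):
# flux_new = adapt_fluxes(flux_ref, {"T": dF_dT}, {"T": T_ref}, {"T": T_ref + dT})
```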

  1. Accurate modeling and evaluation of microstructures in complex materials

    NASA Astrophysics Data System (ADS)

    Tahmasebi, Pejman

    2018-02-01

    Accurate characterization of heterogeneous materials is of great importance for different fields of science and engineering. Such a goal can be achieved through imaging. Acquiring three- or two-dimensional images under different conditions is not, however, always feasible. On the other hand, accurate characterization of complex and multiphase materials requires various digital images (I) under different conditions. An ensemble method is presented that can take one single (or a set of) I(s) and stochastically produce several similar models of the given disordered material. The method is based on the successive calculation of a conditional probability by which the initial stochastic models are produced. Then, a graph formulation is utilized for removing unrealistic structures. A distance transform function for the Is with highly connected microstructure and long-range features is considered, which results in a new I that is more informative. Reproduction of the I is also considered through a histogram-matching approach in an iterative framework. Such an iterative algorithm avoids reproduction of unrealistic structures. Furthermore, a multiscale approach, based on pyramid representation of the large Is, is presented that can produce materials with millions of pixels in a matter of seconds. Finally, the nonstationary systems—those for which the distribution of data varies spatially—are studied using two different methods. The method is tested on several complex and large examples of microstructures. The produced results are all in excellent agreement with the utilized Is and the similarities are quantified using various correlation functions.

  2. Towards accurate modelling of galaxy clustering on small scales: testing the standard ΛCDM + halo model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron K.; Scoccimarro, Roman; Piscionere, Jennifer A.; Wibking, Benjamin D.

    2018-07-01

    Interpreting the small-scale clustering of galaxies with halo models can elucidate the connection between galaxies and dark matter haloes. Unfortunately, the modelling is typically not sufficiently accurate for ruling out models statistically. It is thus difficult to use the information encoded in small scales to test cosmological models or probe subtle features of the galaxy-halo connection. In this paper, we attempt to push halo modelling into the `accurate' regime with a fully numerical mock-based methodology and careful treatment of statistical and systematic errors. With our forward-modelling approach, we can incorporate clustering statistics beyond the traditional two-point statistics. We use this modelling methodology to test the standard Λ cold dark matter (ΛCDM) + halo model against the clustering of Sloan Digital Sky Survey (SDSS) seventh data release (DR7) galaxies. Specifically, we use the projected correlation function, group multiplicity function, and galaxy number density as constraints. We find that while the model fits each statistic separately, it struggles to fit them simultaneously. Adding group statistics leads to a more stringent test of the model and significantly tighter constraints on model parameters. We explore the impact of varying the adopted halo definition and cosmological model and find that changing the cosmology makes a significant difference. The most successful model we tried (Planck cosmology with Mvir haloes) matches the clustering of low-luminosity galaxies, but exhibits a 2.3σ tension with the clustering of luminous galaxies, thus providing evidence that the `standard' halo model needs to be extended. This work opens the door to adding interesting freedom to the halo model and including additional clustering statistics as constraints.

  3. A multiscale red blood cell model with accurate mechanics, rheology, and dynamics.

    PubMed

    Fedosov, Dmitry A; Caswell, Bruce; Karniadakis, George Em

    2010-05-19

    Red blood cells (RBCs) have highly deformable viscoelastic membranes exhibiting complex rheological response and rich hydrodynamic behavior governed by special elastic and bending properties and by the external/internal fluid and membrane viscosities. We present a multiscale RBC model that is able to predict RBC mechanics, rheology, and dynamics in agreement with experiments. Based on an analytic theory, the modeled membrane properties can be uniquely related to the experimentally established RBC macroscopic properties without any adjustment of parameters. The RBC linear and nonlinear elastic deformations match those obtained in optical-tweezers experiments. The rheological properties of the membrane are compared with those obtained in optical magnetic twisting cytometry, membrane thermal fluctuations, and creep followed by cell recovery. The dynamics of RBCs in shear and Poiseuille flows is tested against experiments and theoretical predictions, and the applicability of the latter is discussed. Our findings clearly indicate that a purely elastic model for the membrane cannot accurately represent the RBC's rheological properties and its dynamics, and therefore accurate modeling of a viscoelastic membrane is necessary. Copyright 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  4. 2016 KIVA-hpFE Development: A Robust and Accurate Engine Modeling Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrington, David Bradley; Waters, Jiajia

    Los Alamos National Laboratory and its collaborators are facilitating engine modeling by improving the accuracy and robustness of the models and the robustness of the software. We also continue to improve the physical modeling methods. We are developing and implementing new mathematical algorithms, those that represent the physics within an engine. We provide software that others may use directly or that they may alter with various models, e.g., sophisticated chemical kinetics, different turbulence closure methods or other fuel injection and spray systems.

  5. Surrogate screening models for the low physical activity criterion of frailty.

    PubMed

    Eckel, Sandrah P; Bandeen-Roche, Karen; Chaves, Paulo H M; Fried, Linda P; Louis, Thomas A

    2011-06-01

    Low physical activity, one of five criteria in a validated clinical phenotype of frailty, is assessed by a standardized, semiquantitative questionnaire on up to 20 leisure time activities. Because of the time demanded to collect the interview data, it has been challenging to translate to studies other than the Cardiovascular Health Study (CHS), for which it was developed. Considering subsets of activities, we identified and evaluated streamlined surrogate assessment methods and compared them to one implemented in the Women's Health and Aging Study (WHAS). Using data on men and women ages 65 and older from the CHS, we applied logistic regression models to rank activities by "relative influence" in predicting low physical activity. We considered subsets of the most influential activities as inputs to potential surrogate models (logistic regressions). We evaluated predictive accuracy and predictive validity using the area under receiver operating characteristic curves and assessed criterion validity using proportional hazards models relating frailty status (defined using the surrogate) to mortality. Walking for exercise and moderately strenuous household chores were highly influential for both genders. Women required fewer activities than men for accurate classification. The WHAS model (8 CHS activities) was an effective surrogate, but a surrogate using 6 activities (walking, chores, gardening, general exercise, mowing and golfing) was also highly predictive. We recommend a 6-activity questionnaire to assess physical activity for men and women. If efficiency is essential and the study involves only women, fewer activities can be included.
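
    Purely as an illustration of the surrogate-model idea (an assumed workflow with synthetic data, not the study's code, items or cohort), a reduced-item logistic regression evaluated by ROC AUC might look like this:

```python
# Illustrative surrogate-model sketch: predict the low-physical-activity criterion
# from a reduced subset of activity items and score it with ROC AUC.
# All data, item indices and effect sizes are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X_full = rng.poisson(2.0, size=(n, 20)).astype(float)    # 20 activity items (hypothetical)
# Hypothetical "truth": low activity driven mostly by walking (col 0) and chores (col 1).
logit = -1.0 + 0.8 * (X_full[:, 0] < 1) + 0.6 * (X_full[:, 1] < 1)
y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))

subset = [0, 1, 2, 3, 4, 5]                               # a 6-item surrogate questionnaire
X_tr, X_te, y_tr, y_te = train_test_split(X_full[:, subset], y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("surrogate AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```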

  6. Accurate Energy Consumption Modeling of IEEE 802.15.4e TSCH Using Dual-Band OpenMote Hardware.

    PubMed

    Daneels, Glenn; Municio, Esteban; Van de Velde, Bruno; Ergeerts, Glenn; Weyn, Maarten; Latré, Steven; Famaey, Jeroen

    2018-02-02

    The Time-Slotted Channel Hopping (TSCH) mode of the IEEE 802.15.4e amendment aims to improve reliability and energy efficiency in industrial and other challenging Internet-of-Things (IoT) environments. This paper presents an accurate and up-to-date energy consumption model for devices using this IEEE 802.15.4e TSCH mode. The model identifies all network-related CPU and radio state changes, thus providing a precise representation of the device behavior and an accurate prediction of its energy consumption. Moreover, energy measurements were performed with a dual-band OpenMote device, running the OpenWSN firmware. This allows the model to be used for devices using 2.4 GHz, as well as 868 MHz. Using these measurements, several network simulations were conducted to observe the TSCH energy consumption effects in end-to-end communication for both frequency bands. Experimental verification of the model shows that it accurately models the consumption for all possible packet sizes and that the calculated consumption on average differs less than 3% from the measured consumption. This deviation includes measurement inaccuracies and the variations of the guard time. As such, the proposed model is very suitable for accurate energy consumption modeling of TSCH networks.
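
    The state-based bookkeeping behind such a consumption model can be sketched generically as below; the slot profiles, durations and currents are hypothetical placeholders, not the measured OpenMote values.

```python
# Generic sketch of state-based energy bookkeeping in the spirit of the TSCH
# consumption model described above: total charge is the sum over CPU/radio
# states of (time in state x current in state). All numbers are hypothetical.

def slot_energy_uj(states, voltage=3.0):
    """Energy per slot in microjoules.

    states  : iterable of (duration_ms, current_ma) pairs for every state
              visited in the slot (e.g. CPU wake-up, radio RX, radio TX, sleep).
    voltage : supply voltage in volts.
    """
    # ms x mA = uC, and uC x V = uJ.
    return sum(duration_ms * current_ma * voltage for duration_ms, current_ma in states)

# Hypothetical TX-slot profile: wake-up, prepare packet, transmit, wait for ACK, sleep.
tx_slot = [(0.5, 2.0), (1.2, 4.0), (3.0, 20.0), (1.0, 18.0), (4.3, 0.002)]
idle_slot = [(0.3, 2.0), (9.7, 0.002)]

# Energy over one slotframe containing 1 TX slot and 100 idle slots.
total_uj = slot_energy_uj(tx_slot) + 100 * slot_energy_uj(idle_slot)
print(f"slotframe energy = {total_uj:.1f} uJ")
```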

  7. Accurate Energy Consumption Modeling of IEEE 802.15.4e TSCH Using Dual-Band OpenMote Hardware

    PubMed Central

    Municio, Esteban; Van de Velde, Bruno; Latré, Steven

    2018-01-01

    The Time-Slotted Channel Hopping (TSCH) mode of the IEEE 802.15.4e amendment aims to improve reliability and energy efficiency in industrial and other challenging Internet-of-Things (IoT) environments. This paper presents an accurate and up-to-date energy consumption model for devices using this IEEE 802.15.4e TSCH mode. The model identifies all network-related CPU and radio state changes, thus providing a precise representation of the device behavior and an accurate prediction of its energy consumption. Moreover, energy measurements were performed with a dual-band OpenMote device, running the OpenWSN firmware. This allows the model to be used for devices using 2.4 GHz, as well as 868 MHz. Using these measurements, several network simulations were conducted to observe the TSCH energy consumption effects in end-to-end communication for both frequency bands. Experimental verification of the model shows that it accurately models the consumption for all possible packet sizes and that the calculated consumption on average differs less than 3% from the measured consumption. This deviation includes measurement inaccuracies and the variations of the guard time. As such, the proposed model is very suitable for accurate energy consumption modeling of TSCH networks. PMID:29393900

  8. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    PubMed Central

    Noecker, Cecilia; Schaefer, Krista; Zaccheo, Kelly; Yang, Yiding; Day, Judy; Ganusov, Vitaly V.

    2015-01-01

    Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV including vaccines and antiretroviral prophylaxis target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have been rarely compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral dose. These results
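
    A generic sketch of a "standard" target-cell model extended with an eclipse (pre-productive) infected-cell compartment, of the kind described above, is shown below; all parameter values are hypothetical and the budding-type (continuous) production term is only one of the variants discussed.

```python
# Generic sketch of a within-host model with an eclipse compartment:
# target cells T, eclipse-phase cells E, productively infected cells I, virus V.
# Parameter values are hypothetical and for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

def model(t, y, beta, k, delta, p, c, lam, d):
    T, E, I, V = y
    dT = lam - d * T - beta * T * V
    dE = beta * T * V - k * E           # eclipse cells transition into production
    dI = k * E - delta * I
    dV = p * I - c * V                  # budding-type (continuous) virus production
    return [dT, dE, dI, dV]

y0 = [1e6, 0.0, 0.0, 1.0]               # initial target cells plus a small inoculum
params = dict(beta=1e-7, k=1.0, delta=0.5, p=100.0, c=10.0, lam=1e4, d=0.01)
sol = solve_ivp(model, (0.0, 30.0), y0, args=tuple(params.values()),
                dense_output=True, rtol=1e-8)
print("peak log10 viral load:", np.log10(sol.y[3].max()))
```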

  9. Accurate Cold-Test Model of Helical TWT Slow-Wave Circuits

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.; Dayton, James A., Jr.

    1997-01-01

    Recently, a method has been established to accurately calculate cold-test data for helical slow-wave structures using the three-dimensional electromagnetic computer code, MAFIA. Cold-test parameters have been calculated for several helical traveling-wave tube (TWT) slow-wave circuits possessing various support rod configurations, and results are presented here showing excellent agreement with experiment. The helical models include tape thickness, dielectric support shapes and material properties consistent with the actual circuits. The cold-test data from this helical model can be used as input into large-signal helical TWT interaction codes making it possible, for the first time, to design a complete TWT via computer simulation.

  10. An Accurate Fire-Spread Algorithm in the Weather Research and Forecasting Model Using the Level-Set Method

    NASA Astrophysics Data System (ADS)

    Muñoz-Esparza, Domingo; Kosović, Branko; Jiménez, Pedro A.; Coen, Janice L.

    2018-04-01

    The level-set method is typically used to track and propagate the fire perimeter in wildland fire models. Herein, a high-order level-set method using a fifth-order WENO scheme for the discretization of spatial derivatives and third-order explicit Runge-Kutta temporal integration is implemented within the Weather Research and Forecasting model wildland fire physics package, WRF-Fire. The algorithm includes solution of an additional partial differential equation for level-set reinitialization. The accuracy of the fire-front shape and rate of spread in uncoupled simulations is systematically analyzed. It is demonstrated that the common implementation used by level-set-based wildfire models yields rate-of-spread errors in the range 10-35% for typical grid sizes (Δ = 12.5-100 m) and considerably underestimates fire area. Moreover, the amplitude of fire-front gradients in the presence of explicitly resolved turbulence features is systematically underestimated. In contrast, the new WRF-Fire algorithm results in rate-of-spread errors that are lower than 1% and that become nearly grid independent. Also, the underestimation of fire area at the sharp transition between the fire front and the lateral flanks is found to be reduced by a factor of ≈7. A hybrid-order level-set method with locally reduced artificial viscosity is proposed, which substantially alleviates the computational cost associated with high-order discretizations while preserving accuracy. Simulations of the Last Chance wildfire demonstrate additional benefits of high-order accurate level-set algorithms when dealing with complex fuel heterogeneities, enabling propagation across narrow fuel gaps and more accurate fire backing over the lee side of no-fuel clusters.
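
    For orientation, the level-set equation being solved is ∂φ/∂t + R|∇φ| = 0, with the fire front given by φ = 0. The sketch below uses only a first-order upwind (Godunov) discretization, not the fifth-order WENO/Runge-Kutta scheme of the paper, so it illustrates the interface-tracking idea rather than the WRF-Fire algorithm; all grid and spread-rate values are hypothetical.

```python
# Didactic sketch of level-set fire-front propagation, dphi/dt + R*|grad phi| = 0,
# using a first-order upwind (Godunov) Hamiltonian. The algorithm described above
# uses a fifth-order WENO scheme and third-order Runge-Kutta; this toy version
# only illustrates the interface-tracking idea.
import numpy as np

def advance_levelset(phi, rate, dx, dt):
    """One explicit step of phi_t + rate * |grad phi| = 0 (rate >= 0, outward spread)."""
    dpx = (np.roll(phi, -1, axis=0) - phi) / dx   # forward difference in x
    dmx = (phi - np.roll(phi, 1, axis=0)) / dx    # backward difference in x
    dpy = (np.roll(phi, -1, axis=1) - phi) / dx
    dmy = (phi - np.roll(phi, 1, axis=1)) / dx
    grad = np.sqrt(np.maximum(dmx, 0.0)**2 + np.minimum(dpx, 0.0)**2 +
                   np.maximum(dmy, 0.0)**2 + np.minimum(dpy, 0.0)**2)
    return phi - dt * rate * grad

# Circular ignition on a 200 x 200 grid; phi < 0 marks the burned region.
n, dx = 200, 10.0
coords = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(coords, coords, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - 50.0                 # signed distance to the ignition circle
rate = 1.0                                        # uniform spread rate (hypothetical)
for _ in range(100):
    phi = advance_levelset(phi, rate, dx, dt=0.5 * dx / rate)  # CFL-limited step
print("burned area (m^2):", (phi < 0).sum() * dx * dx)
```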

  11. Improvements to Fidelity, Generation and Implementation of Physics-Based Lithium-Ion Reduced-Order Models

    NASA Astrophysics Data System (ADS)

    Rodriguez Marco, Albert

    Battery management systems (BMS) require computationally simple but highly accurate models of the battery cells they are monitoring and controlling. Historically, empirical equivalent-circuit models have been used, but increasingly researchers are focusing their attention on physics-based models due to their greater predictive capabilities. These models are of high intrinsic computational complexity and so must undergo some kind of order-reduction process to make their use by a BMS feasible: we favor methods based on a transfer-function approach to battery cell dynamics. In prior works, transfer functions have been found from full-order PDE models via two simplifying assumptions: (1) a linearization assumption, which is a fundamental necessity in order to make transfer functions, and (2) an assumption made out of expedience that decouples the electrolyte-potential and electrolyte-concentration PDEs in order to make it possible to solve for the transfer functions from the PDEs. This dissertation improves the fidelity of physics-based models by eliminating the need for the second assumption and by linearizing nonlinear dynamics around different constant currents. Electrochemical transfer functions are infinite-order and cannot be expressed as a ratio of polynomials in the Laplace variable s. Thus, for practical use, these systems need to be approximated using reduced-order models that capture the most significant dynamics. This dissertation improves the generation of physics-based reduced-order models by introducing different realization algorithms, which produce a low-order model from the infinite-order electrochemical transfer functions. Physics-based reduced-order models are linear and describe cell dynamics if operated near the setpoint at which they have been generated. Hence, multiple physics-based reduced-order models need to be generated at different setpoints (i.e., state-of-charge, temperature and C-rate) in order to extend the cell operating range. This

  12. A multi-physics model for ultrasonically activated soft tissue.

    PubMed

    Suvranu De, Rahul

    2017-02-01

    A multi-physics model has been developed to investigate the effects of cellular level mechanisms on the thermomechanical response of ultrasonically activated soft tissue. Cellular level cavitation effects have been incorporated in the tissue level continuum model to accurately determine thermodynamic states such as temperature and pressure. A viscoelastic material model is assumed for the macromechanical response of the tissue. The cavitation-model-based equation of state supplies the continuum-level thermomechanical model with the temperature and with the additional pressure arising from the evaporation of intracellular and cellular water, which absorbs heat generated by structural and viscoelastic heating in the tissue. The thermomechanical response of soft tissue is studied for the operational range of frequencies of oscillation and applied loads typical of ultrasonically activated surgical instruments. The model is shown to capture characteristics of ultrasonically activated soft tissue deformation and temperature evolution. At the cellular level, evaporation of water below the boiling temperature under ambient conditions is indicative of protein denaturation around the temperature threshold for coagulation of tissues. Further, with increasing operating frequency (or loading), the temperature rises faster, leading to rapid evaporation of tissue cavity water, which may lead to accelerated protein denaturation and coagulation.

  13. Generating Facial Expressions Using an Anatomically Accurate Biomechanical Model.

    PubMed

    Wu, Tim; Hung, Alice; Mithraratne, Kumar

    2014-11-01

    This paper presents a computational framework for modelling the biomechanics of human facial expressions. A detailed high-order (Cubic-Hermite) finite element model of the human head was constructed using anatomical data segmented from magnetic resonance images. The model includes a superficial soft-tissue continuum consisting of skin, the subcutaneous layer and the superficial Musculo-Aponeurotic system. Embedded within this continuum mesh, are 20 pairs of facial muscles which drive facial expressions. These muscles were treated as transversely-isotropic and their anatomical geometries and fibre orientations were accurately depicted. In order to capture the relative composition of muscles and fat, material heterogeneity was also introduced into the model. Complex contact interactions between the lips, eyelids, and between superficial soft tissue continuum and deep rigid skeletal bones were also computed. In addition, this paper investigates the impact of incorporating material heterogeneity and contact interactions, which are often neglected in similar studies. Four facial expressions were simulated using the developed model and the results were compared with surface data obtained from a 3D structured-light scanner. Predicted expressions showed good agreement with the experimental data.

  14. Importance of Nuclear Physics to NASA's Space Missions

    NASA Technical Reports Server (NTRS)

    Tripathi, R. K.; Wilson, J. W.; Cucinotta, F. A.

    2001-01-01

    We show that nuclear physics is extremely important for accurate risk assessments for space missions. Due to the paucity of experimental radiation-interaction data, it is imperative to develop reliable and accurate models for the interaction of radiation with matter. State-of-the-art nuclear cross-section models have been developed at the NASA Langley Research Center and are discussed.

  15. Pre-Service Physics Teachers' Argumentation in a Model Rocketry Physics Experience

    ERIC Educational Resources Information Center

    Gürel, Cem; Süzük, Erol

    2017-01-01

    This study investigates the quality of argumentation developed by a group of pre-service physics teachers' (PSPT) as an indicator of subject matter knowledge on model rocketry physics. The structure of arguments and scientific credibility model was used as a design framework in the study. The inquiry of model rocketry physics was employed in…

  16. Physically based DC lifetime model for lead zirconate titanate films

    NASA Astrophysics Data System (ADS)

    Garten, Lauren M.; Hagiwara, Manabu; Ko, Song Won; Trolier-McKinstry, Susan

    2017-09-01

    Accurate lifetime predictions for Pb(Zr0.52Ti0.48)O3 thin films are critical for a number of applications, but current reliability models are not consistent with the resistance degradation mechanisms in lead zirconate titanate. In this work, the reliability and lifetime of chemical solution deposited (CSD) and sputtered Pb(Zr0.52Ti0.48)O3 thin films are characterized using highly accelerated lifetime testing (HALT) and leakage current-voltage (I-V) measurements. Temperature dependent HALT results and impedance spectroscopy show activation energies of approximately 1.2 eV for the CSD films and 0.6 eV for the sputtered films. The voltage dependent HALT results are consistent with previous reports, but do not clearly indicate what causes device failure. To understand more about the underlying physical mechanisms leading to degradation, the I-V data are fit to known conduction mechanisms, with Schottky emission providing the best fit and realistic extracted material parameters. Using the Schottky emission equation as a basis, a unique model is developed to predict the lifetime under highly accelerated testing conditions based on the physical mechanisms of degradation.
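
    For reference, a commonly used form of the Schottky (thermionic) emission current density to which such leakage I-V data are fit is shown below; this is the textbook expression, not necessarily the exact parameterization used in the paper.

```latex
% Standard Schottky (thermionic) emission form used for leakage-current fits:
%   J      leakage current density         A^*    effective Richardson constant
%   phi_B  interface barrier height        E      electric field across the film
%   eps_0, eps_r  vacuum permittivity / optical relative permittivity,  T  temperature
J \;=\; A^{*} T^{2}
    \exp\!\left[\, \frac{-\,q\left(\phi_B - \sqrt{\dfrac{qE}{4\pi\varepsilon_0\varepsilon_r}}\right)}{k_B T} \,\right]
```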

  17. Principal axes estimation using the vibration modes of physics-based deformable models.

    PubMed

    Krinidis, Stelios; Chatzis, Vassilios

    2008-06-01

    This paper addresses the issue of accurate, effective, computationally efficient, fast, and fully automated 2-D object orientation and scaling factor estimation. The object orientation is calculated using object principal axes estimation. The approach relies on the object's frequency-based features. The frequency-based features used by the proposed technique are extracted by a 2-D physics-based deformable model that parameterizes the object's shape. The method was evaluated on synthetic and real images. The experimental results demonstrate the accuracy of the method, both in orientation and in scaling estimation.

  18. Numerically accurate computational techniques for optimal estimator analyses of multi-parameter models

    NASA Astrophysics Data System (ADS)

    Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.

    2018-05-01

    Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy
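
    As a concrete sketch of the quantity being computed, the irreducible error of an optimal estimator is the mean-square deviation of the target from its conditional mean given the input parameters; the binning (histogram) estimate that the study examines can be written as follows, with synthetic data standing in for the modelled quantities.

```python
# Sketch of a histogram (binning) estimate of the optimal-estimator irreducible
# error: for a target q and input parameters pi, the irreducible error is
# E[(q - E[q | pi])^2], with the conditional mean approximated by bin averages.
# Note that the number of bins grows as bins**n_params, which is part of why
# the histogram technique becomes problematic for multi-parameter models.
import numpy as np

def irreducible_error_histogram(params, q, bins=32):
    """params: (n_samples, n_params) inputs; q: (n_samples,) target quantity."""
    # Assign every sample to a multi-dimensional bin over the parameter space.
    edges = [np.linspace(p.min(), p.max(), bins + 1) for p in params.T]
    idx = [np.clip(np.digitize(p, e) - 1, 0, bins - 1) for p, e in zip(params.T, edges)]
    flat = np.ravel_multi_index(idx, dims=(bins,) * params.shape[1])
    # Conditional mean of q within each occupied bin.
    sums = np.bincount(flat, weights=q, minlength=bins ** params.shape[1])
    counts = np.bincount(flat, minlength=bins ** params.shape[1])
    cond_mean = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
    return np.mean((q - cond_mean[flat]) ** 2)

# Synthetic check: q depends on one parameter plus noise, so the irreducible
# error should approach the noise variance (0.01) as the sample size grows.
rng = np.random.default_rng(0)
x = rng.uniform(size=(100_000, 1))
q = np.sin(2 * np.pi * x[:, 0]) + 0.1 * rng.standard_normal(100_000)
print(irreducible_error_histogram(x, q, bins=64))
```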

  19. Physically based model for extracting dual permeability parameters using non-Newtonian fluids

    NASA Astrophysics Data System (ADS)

    Abou Najm, M. R.; Basset, C.; Stewart, R. D.; Hauswirth, S.

    2017-12-01

    Dual permeability models are effective for the assessment of flow and transport in structured soils with two dominant structures. The major challenge for those models remains the ability to determine appropriate and unique parameters through affordable, simple, and non-destructive methods. This study investigates the use of water and a non-Newtonian fluid in saturated flow experiments to derive physically based parameters required for improved flow predictions using dual permeability models. We assess the ability of these two fluids to accurately estimate the representative pore sizes in dual-domain soils by determining the effective pore sizes of macropores and micropores. We developed two sub-models that solve for the effective macropore size assuming either cylindrical (e.g., biological pores) or planar (e.g., shrinkage cracks and fissures) pore geometries, with the micropores assumed to be represented by a single effective radius. Furthermore, the model solves for the percent contribution to flow (wi) corresponding to the representative macro- and micropores. A user-friendly solver was developed to numerically solve the system of equations, given that relevant non-Newtonian viscosity models lack forms conducive to analytical integration. The proposed dual-permeability model is a unique attempt to derive physically based parameters capable of measuring dual hydraulic conductivities, and therefore may be useful in reducing parameter uncertainty and improving hydrologic model predictions.
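
    The single-pore building blocks that such a two-fluid inversion combines can be sketched as below (Hagen-Poiseuille for the Newtonian fluid and the classical power-law tube-flow expression for the non-Newtonian fluid); this is not the authors' full solver, and all inputs are hypothetical.

```python
# Per-pore building blocks for a dual-permeability, two-fluid inversion of the
# kind described above: laminar flow of a Newtonian and of a power-law
# (non-Newtonian) fluid through a single cylindrical pore. The full model
# combines such expressions for macro- and micropore domains and solves for the
# effective radii and flow fractions; this sketch gives only the per-pore relations.
import numpy as np

def q_newtonian(radius, dp, length, mu):
    """Hagen-Poiseuille flow rate through a cylindrical pore."""
    return np.pi * radius**4 * dp / (8.0 * mu * length)

def q_power_law(radius, dp, length, k, n):
    """Flow rate of a power-law fluid (tau = k * shear_rate**n) in a cylindrical pore."""
    tau_wall = dp * radius / (2.0 * length)
    return (np.pi * n * radius**3 / (3.0 * n + 1.0)) * (tau_wall / k) ** (1.0 / n)

# Consistency check with hypothetical values: for n = 1 and k = mu both coincide.
r, dp, L, mu = 1e-4, 1e3, 0.1, 1e-3
print(q_newtonian(r, dp, L, mu), q_power_law(r, dp, L, mu, 1.0))
```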

  20. Neurons compute internal models of the physical laws of motion.

    PubMed

    Angelaki, Dora E; Shaikh, Aasef G; Green, Andrea M; Dickman, J David

    2004-07-29

    A critical step in self-motion perception and spatial awareness is the integration of motion cues from multiple sensory organs that individually do not provide an accurate representation of the physical world. One of the best-studied sensory ambiguities is found in visual processing, and arises because of the inherent uncertainty in detecting the motion direction of an untextured contour moving within a small aperture. A similar sensory ambiguity arises in identifying the actual motion associated with linear accelerations sensed by the otolith organs in the inner ear. These internal linear accelerometers respond identically during translational motion (for example, running forward) and gravitational accelerations experienced as we reorient the head relative to gravity (that is, head tilt). Using new stimulus combinations, we identify here cerebellar and brainstem motion-sensitive neurons that compute a solution to the inertial motion detection problem. We show that the firing rates of these populations of neurons reflect the computations necessary to construct an internal model representation of the physical equations of motion.

  1. Accurate and scalable social recommendation using mixed-membership stochastic block models.

    PubMed

    Godoy-Lorite, Antonia; Guimerà, Roger; Moore, Cristopher; Sales-Pardo, Marta

    2016-12-13

    With increasing amounts of information available, modeling and predicting user preferences-for books or articles, for example-are becoming more important. We present a collaborative filtering model, with an associated scalable algorithm, that makes accurate predictions of users' ratings. Like previous approaches, we assume that there are groups of users and of items and that the rating a user gives an item is determined by their respective group memberships. However, we allow each user and each item to belong simultaneously to mixtures of different groups and, unlike many popular approaches such as matrix factorization, we do not assume that users in each group prefer a single group of items. In particular, we do not assume that ratings depend linearly on a measure of similarity, but allow probability distributions of ratings to depend freely on the user's and item's groups. The resulting overlapping groups and predicted ratings can be inferred with an expectation-maximization algorithm whose running time scales linearly with the number of observed ratings. Our approach enables us to predict user preferences in large datasets and is considerably more accurate than the current algorithms for such large datasets.
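
    A minimal sketch of the prediction step implied by this mixed-membership model: the probability of each rating is a sum over user-group/item-group pairs weighted by the membership vectors. The membership and rating-probability arrays below are made-up placeholders; the expectation-maximization inference described in the abstract is not shown.

    ```python
    # Sketch of the MMSBM rating prediction:
    # p(r | u, i) = sum_{k,l} theta[u, k] * eta[i, l] * p[k, l, r]
    import numpy as np

    K, L, R = 3, 2, 5                           # user groups, item groups, rating levels 1..5
    rng = np.random.default_rng(1)

    theta = rng.dirichlet(np.ones(K), size=4)   # 4 users, mixed memberships over K groups
    eta = rng.dirichlet(np.ones(L), size=6)     # 6 items, mixed memberships over L groups
    p = rng.dirichlet(np.ones(R), size=(K, L))  # p[k, l, :] = rating distribution for pair (k, l)

    def rating_distribution(u, i):
        """Predicted probability of each rating level for user u and item i."""
        return np.einsum('k,l,klr->r', theta[u], eta[i], p)

    dist = rating_distribution(0, 2)
    expected = np.dot(np.arange(1, R + 1), dist)
    print("P(rating) =", np.round(dist, 3), " expected rating =", round(expected, 2))
    ```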

  2. Accurate and scalable social recommendation using mixed-membership stochastic block models

    PubMed Central

    Godoy-Lorite, Antonia; Moore, Cristopher

    2016-01-01

    With increasing amounts of information available, modeling and predicting user preferences—for books or articles, for example—are becoming more important. We present a collaborative filtering model, with an associated scalable algorithm, that makes accurate predictions of users’ ratings. Like previous approaches, we assume that there are groups of users and of items and that the rating a user gives an item is determined by their respective group memberships. However, we allow each user and each item to belong simultaneously to mixtures of different groups and, unlike many popular approaches such as matrix factorization, we do not assume that users in each group prefer a single group of items. In particular, we do not assume that ratings depend linearly on a measure of similarity, but allow probability distributions of ratings to depend freely on the user’s and item’s groups. The resulting overlapping groups and predicted ratings can be inferred with an expectation-maximization algorithm whose running time scales linearly with the number of observed ratings. Our approach enables us to predict user preferences in large datasets and is considerably more accurate than the current algorithms for such large datasets. PMID:27911773

  3. Structural Stability Monitoring of a Physical Model Test on an Underground Cavern Group during Deep Excavations Using FBG Sensors.

    PubMed

    Li, Yong; Wang, Hanpeng; Zhu, Weishen; Li, Shucai; Liu, Jian

    2015-08-31

    Fiber Bragg Grating (FBG) sensors are widely recognized as structural stability monitoring devices for all kinds of geo-materials, either embedded into or bonded onto the structural entities. The physical model in geotechnical engineering, which can accurately simulate the construction processes and their effects on the stability of underground caverns while satisfying the similarity principles, is an actual physical entity. Using a physical model test of the underground caverns of the Shuangjiangkou Hydropower Station, FBG sensors were used to measure the small displacements of key monitoring points in the large-scale physical model during excavation. In building the test specimen, the most successful approach was to embed the FBG sensors in the physical model by making an opening and adding quick-set silicone. The experimental results show that the FBG sensor has higher measuring accuracy than conventional sensors such as electrical resistance strain gages and extensometers, and they are also in good agreement with the numerical simulation results. In conclusion, FBG sensors can effectively measure small displacements of monitoring points throughout the physical model test. The experimental results reveal the deformation and failure characteristics of the surrounding rock mass and provide guidance for the in situ engineering construction.

  4. Structural Stability Monitoring of a Physical Model Test on an Underground Cavern Group during Deep Excavations Using FBG Sensors

    PubMed Central

    Li, Yong; Wang, Hanpeng; Zhu, Weishen; Li, Shucai; Liu, Jian

    2015-01-01

    Fiber Bragg Grating (FBG) sensors are widely recognized as structural stability monitoring devices for all kinds of geo-materials, either embedded into or bonded onto the structural entities. The physical model in geotechnical engineering, which can accurately simulate the construction processes and their effects on the stability of underground caverns while satisfying the similarity principles, is an actual physical entity. Using a physical model test of the underground caverns of the Shuangjiangkou Hydropower Station, FBG sensors were used to measure the small displacements of key monitoring points in the large-scale physical model during excavation. In building the test specimen, the most successful approach was to embed the FBG sensors in the physical model by making an opening and adding quick-set silicone. The experimental results show that the FBG sensor has higher measuring accuracy than conventional sensors such as electrical resistance strain gages and extensometers, and they are also in good agreement with the numerical simulation results. In conclusion, FBG sensors can effectively measure small displacements of monitoring points throughout the physical model test. The experimental results reveal the deformation and failure characteristics of the surrounding rock mass and provide guidance for the in situ engineering construction. PMID:26404287

  5. Dynamic sensing model for accurate detectability of environmental phenomena using event wireless sensor network

    NASA Astrophysics Data System (ADS)

    Missif, Lial Raja; Kadhum, Mohammad M.

    2017-09-01

    Wireless Sensor Networks (WSNs) are widely used for monitoring applications in which sensors are deployed to operate independently and sense abnormal phenomena. Most proposed environmental monitoring systems are designed around a predetermined sensing range, which does not reflect sensor reliability, event characteristics, or environmental conditions. Measuring the capability of a sensor node to accurately detect an event within a sensing field is therefore of great importance for monitoring applications. This paper presents an efficient mechanism for event detection based on a probabilistic sensing model. Different models are examined theoretically in this paper for their adaptability and applicability to real environmental applications. The numerical results of the experimental evaluation show that the probabilistic sensing model provides accurate observation and detectability of an event, and that it can be utilized for different environmental scenarios.
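
    A minimal sketch of a probabilistic sensing model of the kind discussed above; since the abstract does not give the exact expression, an Elfes-style exponential-decay form and all parameter values are assumptions.

    ```python
    # Probabilistic sensing model (assumed form): detection is certain inside r_min,
    # impossible beyond r_max, and decays exponentially in between.
    import math

    def detection_probability(d, r_min=5.0, r_max=20.0, lam=0.4, gamma=1.0):
        """Probability that a sensor detects an event at distance d (metres)."""
        if d <= r_min:
            return 1.0
        if d >= r_max:
            return 0.0
        return math.exp(-lam * (d - r_min) ** gamma)

    def joint_detection(distances):
        """Probability that at least one of several independent sensors detects the event."""
        p_miss = 1.0
        for d in distances:
            p_miss *= 1.0 - detection_probability(d)
        return 1.0 - p_miss

    print([round(detection_probability(d), 3) for d in (3, 8, 12, 25)])
    print("P(detected by any of 3 sensors):", round(joint_detection([8, 12, 14]), 3))
    ```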

  6. Building mental models by dissecting physical models.

    PubMed

    Srivastava, Anveshna

    2016-01-01

    When students build physical models from prefabricated components to learn about model systems, there is an implicit trade-off between the physical degrees of freedom in building the model and the intensity of instructor supervision needed. Models that are too flexible, permitting multiple possible constructions, require greater supervision to ensure focused learning; models that are too constrained require less supervision, but can be constructed mechanically, with little to no conceptual engagement. We propose "model-dissection" as an alternative to "model-building," whereby instructors could make efficient use of supervisory resources, while simultaneously promoting focused learning. We report empirical results from a study conducted with biology undergraduate students, where we demonstrate that asking them to "dissect" out specific conceptual structures from an already built 3D physical model leads to a greater improvement in performance than asking them to build the 3D model from simpler components. Using questionnaires to measure understanding both before and after model-based interventions for two cohorts of students, we find that both the "builders" and the "dissectors" improve in the post-test, but it is the latter group who show statistically significant improvement. These results, in addition to the intrinsic time-efficiency of "model dissection," suggest that it could be a valuable pedagogical tool. © 2015 The International Union of Biochemistry and Molecular Biology.

  7. Physically Accurate Soil Freeze-Thaw Processes in a Global Land Surface Scheme

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Haverd, Vanessa

    2018-01-01

    The model Soil-Litter-Iso (SLI) calculates coupled heat and water transport in soil. It was recently implemented into the Australian land surface model CABLE, which is the land component of the Australian Community Climate and Earth System Simulator (ACCESS). Here we extended SLI to include accurate freeze-thaw processes in the soil and snow. SLI thus provides an implicit solution of the energy and water balances of soil and snow, both as a standalone model and within CABLE. The enhanced SLI was tested extensively against theoretical formulations, laboratory experiments, field data, and satellite retrievals. The model performed well for all experiments at wide-ranging temporal and spatial scales. SLI does, however, melt snow faster at the end of the cold season than observed because the implicit, coupled solution of energy and water provides no subgrid variability within SLI. The combined CABLE-SLI shows very realistic permafrost dynamics and extent in the Northern Hemisphere. However, it also illustrates the limits of possible comparisons between large-scale land surface models and local permafrost observations. CABLE-SLI exhibits the same patterns of snow depth and snow water equivalent in the Northern Hemisphere as satellite-derived observations, but quantitative comparisons depend largely on the given meteorological input fields. Further extension of CABLE-SLI with depth-dependent soil carbon will allow realistic projections of the development of permafrost and frozen carbon stocks in a changing climate.

  8. Getting a Picture that Is Both Accurate and Stable: Situation Models and Epistemic Validation

    ERIC Educational Resources Information Center

    Schroeder, Sascha; Richter, Tobias; Hoever, Inga

    2008-01-01

    Text comprehension entails the construction of a situation model that prepares individuals for situated action. In order to meet this function, situation model representations are required to be both accurate and stable. We propose a framework according to which comprehenders rely on epistemic validation to prevent inaccurate information from…

  9. Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    2001-01-01

    The phenomenal growth of the satellite communications industry has created a large demand for traveling-wave tubes (TWT's) operating with unprecedented specifications requiring the design and production of many novel devices in record time. To achieve this, the TWT industry heavily relies on computational modeling. However, the TWT industry's computational modeling capabilities need to be improved because there are often discrepancies between measured TWT data and that predicted by conventional two-dimensional helical TWT interaction codes. This limits the analysis and design of novel devices or TWT's with parameters differing from what is conventionally manufactured. In addition, the inaccuracy of current computational tools limits achievable TWT performance because optimized designs require highly accurate models. To address these concerns, a fully three-dimensional, time-dependent, helical TWT interaction model was developed using the electromagnetic particle-in-cell code MAFIA (Solution of MAxwell's equations by the Finite-Integration-Algorithm). The model includes a short section of helical slow-wave circuit with excitation fed by radiofrequency input/output couplers, and an electron beam contained by periodic permanent magnet focusing. A cutaway view of several turns of the three-dimensional helical slow-wave circuit with input/output couplers is shown. This has been shown to be more accurate than conventionally used two-dimensional models. The growth of the communications industry has also imposed a demand for increased data rates for the transmission of large volumes of data. To achieve increased data rates, complex modulation and multiple access techniques are employed requiring minimum distortion of the signal as it is passed through the TWT. Thus, intersymbol interference (ISI) becomes a major consideration, as well as suspected causes such as reflections within the TWT. To experimentally investigate effects of the physical TWT on ISI would be

  10. Accurate Treatment of Collision and Water-Delivery in Models of Terrestrial Planet Formation

    NASA Astrophysics Data System (ADS)

    Haghighipour, N.; Maindl, T. I.; Schaefer, C. M.; Wandel, O.

    2017-08-01

    We have developed a comprehensive approach in simulating collisions and growth of embryos to terrestrial planets where we use a combination of SPH and N-body codes to model collisions and the transfer of water and chemical compounds accurately.

  11. Accurate Cell Division in Bacteria: How Does a Bacterium Know Where its Middle Is?

    NASA Astrophysics Data System (ADS)

    Howard, Martin; Rutenberg, Andrew

    2004-03-01

    I will discuss the physical principles underlying the acquisition of accurate positional information in bacteria. A good application of these ideas is to the rod-shaped bacterium E. coli, which divides precisely at its cellular midplane. This positioning is controlled by the Min system of proteins. These proteins coherently oscillate from end to end of the bacterium. I will present a reaction-diffusion model that describes the diffusion of the Min proteins, and their binding/unbinding from the cell membrane. The system possesses an instability that spontaneously generates the Min oscillations, which control accurate placement of the midcell division site. I will then discuss the role of fluctuations in protein dynamics, and investigate whether fluctuations set optimal protein concentration levels. Finally, I will examine cell division in a different bacterium, B. subtilis, where different physical principles are used to regulate accurate cell division. See: Howard, Rutenberg, de Vet: Dynamic compartmentalization of bacteria: accurate division in E. coli. Phys. Rev. Lett. 87 278102 (2001). Howard, Rutenberg: Pattern formation inside bacteria: fluctuations due to the low copy number of proteins. Phys. Rev. Lett. 90 128102 (2003). Howard: A mechanism for polar protein localization in bacteria. J. Mol. Biol. 335 655-663 (2004).

  12. Modeling of capacitor charging dynamics in an energy harvesting system considering accurate electromechanical coupling effects

    NASA Astrophysics Data System (ADS)

    Bagheri, Shahriar; Wu, Nan; Filizadeh, Shaahin

    2018-06-01

    This paper presents an iterative numerical method that accurately models an energy harvesting system charging a capacitor with piezoelectric patches. The constitutive relations of piezoelectric materials connected with an external charging circuit with a diode bridge and capacitors lead to the electromechanical coupling effect and the difficulty of deriving an accurate transient mechanical response as well as the charging progress. The proposed model is built upon the Euler-Bernoulli beam theory and takes into account the electromechanical coupling effects as well as the dynamic process of charging an external storage capacitor. The model is validated through experimental tests on a cantilever beam coated with piezoelectric patches. Several parametric studies are performed and the functionality of the model is verified. The efficiency of the power harvesting system can be predicted and tuned considering variations in different design parameters. Such a model can be utilized to design robust and optimal energy harvesting systems.
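
    The sketch below illustrates the capacitor-charging dynamics referred to above with a deliberately simplified lumped circuit: the piezoelectric patch is reduced to a sinusoidal current source in parallel with its clamped capacitance, feeding a storage capacitor through an ideal full-bridge rectifier. The Euler-Bernoulli beam mechanics and electromechanical coupling of the actual model are not reproduced, and all parameter values are hypothetical.

    ```python
    # Toy time-stepping model of a storage capacitor charged through a full-bridge
    # rectifier by a piezoelectric element idealised as I0*sin(w*t) in parallel with
    # its clamped capacitance Cp. Values are illustrative, not from the paper.
    import numpy as np

    I0, f = 1.0e-4, 50.0            # source current amplitude [A] and frequency [Hz]
    Cp, Cs, Vd = 50e-9, 10e-6, 0.5  # piezo capacitance, storage capacitor, diode drop
    dt, t_end = 1.0e-5, 2.0
    w = 2.0 * np.pi * f

    v_p, v_s = 0.0, 0.0             # piezo voltage, storage-capacitor voltage
    for k in range(int(t_end / dt)):
        t = k * dt
        v_p += I0 * np.sin(w * t) * dt / Cp      # charge/discharge of Cp
        if abs(v_p) > v_s + 2.0 * Vd:            # bridge conducts: share charge
            sign = np.sign(v_p)
            # Charge conservation between Cp and Cs while the bridge is on:
            v_s = (Cp * (abs(v_p) - 2.0 * Vd) + Cs * v_s) / (Cp + Cs)
            v_p = sign * (v_s + 2.0 * Vd)

    print(f"storage-capacitor voltage after {t_end:.1f} s: {v_s:.3f} V")
    ```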

  13. Obtaining Accurate Probabilities Using Classifier Calibration

    ERIC Educational Resources Information Center

    Pakdaman Naeini, Mahdi

    2016-01-01

    Learning probabilistic classification and prediction models that generate accurate probabilities is essential in many prediction and decision-making tasks in machine learning and data mining. One way to achieve this goal is to post-process the output of classification models to obtain more accurate probabilities. These post-processing methods are…

  14. Calibrating Physical Parameters in House Models Using Aggregate AC Power Demand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yannan; Stevens, Andrew J.; Lian, Jianming

    For residential houses, the air conditioning (AC) units are one of the major resources that can provide significant flexibility in energy use for the purpose of demand response. To quantify the flexibility, the characteristics of all the houses need to be accurately estimated, so that certain house models can be used to predict the dynamics of the house temperatures in order to adjust the setpoints accordingly to provide demand response while maintaining the same comfort levels. In this paper, we propose an approach using the Reverse Monte Carlo modeling method and aggregate house models to calibrate the distribution parameters of the house models for a population of residential houses. Given the aggregate AC power demand for the population, the approach can successfully estimate the distribution parameters for the sensitive physical parameters based on our previous uncertainty quantification study, such as the mean of the floor areas of the houses.

  15. Validation of an Accurate Three-Dimensional Helical Slow-Wave Circuit Model

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    1997-01-01

    The helical slow-wave circuit embodies a helical coil of rectangular tape supported in a metal barrel by dielectric support rods. Although the helix slow-wave circuit remains the mainstay of the traveling-wave tube (TWT) industry because of its exceptionally wide bandwidth, a full helical circuit, without significant dimensional approximations, has not been successfully modeled until now. Numerous attempts have been made to analyze the helical slow-wave circuit so that the performance could be accurately predicted without actually building it, but because of its complex geometry, many geometrical approximations became necessary rendering the previous models inaccurate. In the course of this research it has been demonstrated that using the simulation code, MAFIA, the helical structure can be modeled with actual tape width and thickness, dielectric support rod geometry and materials. To demonstrate the accuracy of the MAFIA model, the cold-test parameters including dispersion, on-axis interaction impedance and attenuation have been calculated for several helical TWT slow-wave circuits with a variety of support rod geometries including rectangular and T-shaped rods, as well as various support rod materials including isotropic, anisotropic and partially metal coated dielectrics. Compared with experimentally measured results, the agreement is excellent. With the accuracy of the MAFIA helical model validated, the code was used to investigate several conventional geometric approximations in an attempt to obtain the most computationally efficient model. Several simplifications were made to a standard model including replacing the helical tape with filaments, and replacing rectangular support rods with shapes conforming to the cylindrical coordinate system with effective permittivity. The approximate models are compared with the standard model in terms of cold-test characteristics and computational time. The model was also used to determine the sensitivity of various

  16. PconsD: ultra rapid, accurate model quality assessment for protein structure prediction.

    PubMed

    Skwark, Marcin J; Elofsson, Arne

    2013-07-15

    Clustering methods are often needed for accurately assessing the quality of modeled protein structures. Recent blind evaluation of quality assessment methods in CASP10 showed that there is little difference between many different methods as far as ranking models and selecting best model are concerned. When comparing many models, the computational cost of the model comparison can become significant. Here, we present PconsD, a fast, stream-computing method for distance-driven model quality assessment that runs on consumer hardware. PconsD is at least one order of magnitude faster than other methods of comparable accuracy. The source code for PconsD is freely available at http://d.pcons.net/. Supplementary benchmarking data are also available there. arne@bioinfo.se Supplementary data are available at Bioinformatics online.

  17. Models in Physics, Models for Physics Learning, and Why the Distinction May Matter in the Case of Electric Circuits

    ERIC Educational Resources Information Center

    Hart, Christina

    2008-01-01

    Models are important both in the development of physics itself and in teaching physics. Historically, the consensus models of physics have come to embody particular ontological assumptions and epistemological commitments. Educators have generally assumed that the consensus models of physics, which have stood the test of time, will also work well…

  18. Physically-based in silico light sheet microscopy for visualizing fluorescent brain models

    PubMed Central

    2015-01-01

    Background We present a physically-based computational model of the light sheet fluorescence microscope (LSFM). Based on Monte Carlo ray tracing and geometric optics, our method simulates the operational aspects and image formation process of the LSFM. This simulated, in silico LSFM creates synthetic images of digital fluorescent specimens that can resemble those generated by a real LSFM, as opposed to established visualization methods producing visually-plausible images. We also propose an accurate fluorescence rendering model which takes into account the intrinsic characteristics of fluorescent dyes to simulate the light interaction with fluorescent biological specimens. Results We demonstrate the first results of our visualization pipeline applied to a simplified brain tissue model reconstructed from the somatosensory cortex of a young rat. The modeling aspects of the LSFM units are qualitatively analysed, and the results of the fluorescence model were quantitatively validated against the fluorescence brightness equation and characteristic emission spectra of different fluorescent dyes. AMS subject classification Modelling and simulation PMID:26329404

  19. Accurately modeling Gaussian beam propagation in the context of Monte Carlo techniques

    NASA Astrophysics Data System (ADS)

    Hokr, Brett H.; Winblad, Aidan; Bixler, Joel N.; Elpers, Gabriel; Zollars, Byron; Scully, Marlan O.; Yakovlev, Vladislav V.; Thomas, Robert J.

    2016-03-01

    Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, traditional Monte Carlo methods fail to account for diffraction because they treat light as a particle. This results in converging beams focusing to a point instead of a diffraction limited spot, greatly affecting the accuracy of Monte Carlo simulations near the focal plane. Here, we present a technique capable of simulating a focusing beam in accordance with the rules of Gaussian optics, resulting in a diffraction limited focal spot. This technique can be easily implemented into any traditional Monte Carlo simulation, allowing existing models to be converted to include accurate focusing geometries with minimal effort. We will present results for a focusing beam in a layered tissue model, demonstrating that for different scenarios the region of highest intensity, and thus the greatest heating, can change from the surface to the focus. The ability to simulate accurate focusing geometries will greatly enhance the usefulness of Monte Carlo for countless applications, including studying laser tissue interactions in medical applications and light propagation through turbid media.

  20. Bottom-up coarse-grained models that accurately describe the structure, pressure, and compressibility of molecular liquids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunn, Nicholas J. H.; Noid, W. G., E-mail: wnoid@chem.psu.edu

    2015-12-28

    The present work investigates the capability of bottom-up coarse-graining (CG) methods for accurately modeling both structural and thermodynamic properties of all-atom (AA) models for molecular liquids. In particular, we consider 1, 2, and 3-site CG models for heptane, as well as 1 and 3-site CG models for toluene. For each model, we employ the multiscale coarse-graining method to determine interaction potentials that optimally approximate the configuration dependence of the many-body potential of mean force (PMF). We employ a previously developed “pressure-matching” variational principle to determine a volume-dependent contribution to the potential, U_V(V), that approximates the volume-dependence of the PMF. We demonstrate that the resulting CG models describe AA density fluctuations with qualitative, but not quantitative, accuracy. Accordingly, we develop a self-consistent approach for further optimizing U_V, such that the CG models accurately reproduce the equilibrium density, compressibility, and average pressure of the AA models, although the CG models still significantly underestimate the atomic pressure fluctuations. Additionally, by comparing this array of models that accurately describe the structure and thermodynamic pressure of heptane and toluene at a range of different resolutions, we investigate the impact of bottom-up coarse-graining upon thermodynamic properties. In particular, we demonstrate that U_V accounts for the reduced cohesion in the CG models. Finally, we observe that bottom-up coarse-graining introduces subtle correlations between the resolution, the cohesive energy density, and the “simplicity” of the model.

  1. Towards a More Accurate Solar Power Forecast By Improving NWP Model Physics

    NASA Astrophysics Data System (ADS)

    Köhler, C.; Lee, D.; Steiner, A.; Ritter, B.

    2014-12-01

    The growing importance and successive expansion of renewable energies raise new challenges for decision makers, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the uncertainties associated with the large share of weather-dependent power sources. In this way, precise power forecasts, well-timed energy trading on the stock market, and electrical grid stability can be maintained. The research project EWeLiNE is a collaboration of the German Weather Service (DWD), the Fraunhofer Institute (IWES) and three German transmission system operators (TSOs). Together, wind and photovoltaic (PV) power forecasts are to be improved by combining optimized NWP and enhanced power forecast models. The conducted work focuses on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. Not only the representation of the model cloud characteristics, but also special events like Sahara dust over Germany and the solar eclipse in 2015 are treated and their effect on solar power accounted for. An overview of the EWeLiNE project and results of the ongoing research will be presented.

  2. Ion Yields in the Coupled Chemical and Physical Dynamics Model of Matrix-Assisted Laser Desorption/Ionization

    NASA Astrophysics Data System (ADS)

    Knochenmuss, Richard

    2015-08-01

    The Coupled Chemical and Physical Dynamics (CPCD) model of matrix assisted laser desorption ionization has been restricted to relative rather than absolute yield comparisons because the rate constant for one step in the model was not accurately known. Recent measurements are used to constrain this constant, leading to good agreement with experimental yield versus fluence data for 2,5-dihydroxybenzoic acid. Parameters for alpha-cyano-4-hydroxycinnamic acid are also estimated, including contributions from a possible triplet state. The results are compared with the polar fluid model, the CPCD is found to give better agreement with the data.

  3. Bayesian parameter estimation of a k-ε model for accurate jet-in-crossflow simulations

    DOE PAGES

    Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; ...

    2016-05-31

    Reynolds-averaged Navier–Stokes models are not very accurate for high-Reynolds-number compressible jet-in-crossflow interactions. The inaccuracy arises from the use of inappropriate model parameters and model-form errors in the Reynolds-averaged Navier–Stokes model. In this study, the hypothesis is pursued that Reynolds-averaged Navier–Stokes predictions can be significantly improved by using parameters inferred from experimental measurements of a supersonic jet interacting with a transonic crossflow.

  4. An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates

    PubMed Central

    Khan, Usman; Falconi, Christian

    2014-01-01

    Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation-tools for the optimization. In order to circumvent these issues, here, we propose a model for practical circular-symmetric micro-hot-plates which takes advantage of modified Bessel functions, computationally efficient matrix-approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated to the undesired heating in the electrical contacts, are small (e.g., few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally-easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214
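
    A minimal numerical sketch of the modified-Bessel-function building block such a model relies on: the steady radial heat balance of an annular membrane with distributed losses to ambient has the general solution T(r) = T_amb + A*I0(m*r) + B*K0(m*r) with m = sqrt(h/(k*t)). The Joule-heating and radiation linearization, boundary treatment, and segmentation strategy of the paper are omitted, and all values are hypothetical.

    ```python
    # Temperature profile of an annular membrane region between a heater rim (r1)
    # and the cold chip rim (r2), using modified Bessel functions I0 and K0.
    # Illustrative parameter values only.
    import numpy as np
    from scipy.special import i0, k0

    k_m, t_m = 30.0, 1.0e-6        # membrane thermal conductivity [W/m/K], thickness [m]
    h = 125.0                      # lumped loss coefficient to ambient [W/m^2/K]
    T_amb, T_heater = 25.0, 400.0  # ambient and heater-edge temperatures [degC]
    r1, r2 = 50e-6, 500e-6         # heater radius and membrane rim radius [m]

    m = np.sqrt(h / (k_m * t_m))   # inverse thermal decay length [1/m]

    # Solve A*I0(m r) + B*K0(m r) = T - T_amb at the two boundaries.
    M = np.array([[i0(m * r1), k0(m * r1)],
                  [i0(m * r2), k0(m * r2)]])
    rhs = np.array([T_heater - T_amb, 0.0])
    A, B = np.linalg.solve(M, rhs)

    r = np.linspace(r1, r2, 5)
    T = T_amb + A * i0(m * r) + B * k0(m * r)
    for ri, Ti in zip(r, T):
        print(f"r = {ri*1e6:6.1f} um  ->  T = {Ti:7.1f} degC")
    ```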

  5. COMPUTATIONAL CHALLENGES IN BUILDING MULTI-SCALE AND MULTI-PHYSICS MODELS OF CARDIAC ELECTRO-MECHANICS

    PubMed Central

    Plank, G; Prassl, AJ; Augustin, C

    2014-01-01

    Despite the evident multiphysics nature of the heart – it is an electrically controlled mechanical pump – most modeling studies considered electrophysiology and mechanics in isolation. In no small part, this is due to the formidable modeling challenges involved in building strongly coupled anatomically accurate and biophysically detailed multi-scale multi-physics models of cardiac electro-mechanics. Among the main challenges are the selection of model components and their adjustments to achieve integration into a consistent organ-scale model, dealing with technical difficulties such as the exchange of data between the electrophysiological and mechanical models, particularly when using different spatio-temporal grids for discretization, and, finally, the implementation of advanced numerical techniques to deal with the substantial computational burden. In this study we report on progress made in developing a novel modeling framework suited to tackle these challenges. PMID:24043050

  6. Accurate modeling of defects in graphene transport calculations

    NASA Astrophysics Data System (ADS)

    Linhart, Lukas; Burgdörfer, Joachim; Libisch, Florian

    2018-01-01

    We present an approach for embedding defect structures modeled by density functional theory into large-scale tight-binding simulations. We extract local tight-binding parameters for the vicinity of the defect site using Wannier functions. In the transition region between the bulk lattice and the defect the tight-binding parameters are continuously adjusted to approach the bulk limit far away from the defect. This embedding approach allows for an accurate high-level treatment of the defect orbitals using as many as ten nearest neighbors while keeping a small number of nearest neighbors in the bulk to render the overall computational cost reasonable. As an example of our approach, we consider an extended graphene lattice decorated with Stone-Wales defects, flower defects, double vacancies, or silicon substitutes. We predict distinct scattering patterns mirroring the defect symmetries and magnitude that should be experimentally accessible.

  7. Accurate Modelling of Surface Currents and Internal Tides in a Semi-enclosed Coastal Sea

    NASA Astrophysics Data System (ADS)

    Allen, S. E.; Soontiens, N. K.; Dunn, M. B. H.; Liu, J.; Olson, E.; Halverson, M. J.; Pawlowicz, R.

    2016-02-01

    The Strait of Georgia is a deep (400 m), strongly stratified, semi-enclosed coastal sea on the west coast of North America. We have configured a baroclinic model of the Strait of Georgia and surrounding coastal waters using the NEMO ocean community model. We run daily nowcasts and forecasts and publish our sea-surface results (including storm surge warnings) to the web (salishsea.eos.ubc.ca/storm-surge). Tides in the Strait of Georgia are mixed and large. The baroclinic model and previous barotropic models accurately represent tidal sea-level variations and depth mean currents. The baroclinic model reproduces accurately the diurnal but not the semi-diurnal baroclinic tidal currents. In the Southern Strait of Georgia, strong internal tidal currents at the semi-diurnal frequency are observed. Strong semi-diurnal tides are also produced in the model, but are almost 180 degrees out of phase with the observations. In the model, in the surface, the barotropic and baroclinic tides reinforce, whereas the observations show that at the surface the baroclinic tides oppose the barotropic. As such the surface currents are very poorly modelled. Here we will present evidence of the internal tidal field from observations. We will discuss the generation regions of the tides, the necessary modifications to the model required to correct the phase, the resulting baroclinic tides and the improvements in the surface currents.

  8. Progress in fast, accurate multi-scale climate simulations

    DOE PAGES

    Collins, W. D.; Johansen, H.; Evans, K. J.; ...

    2015-06-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  9. ACCURATE LOW-MASS STELLAR MODELS OF KOI-126

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feiden, Gregory A.; Chaboyer, Brian; Dotter, Aaron, E-mail: gregory.a.feiden@dartmouth.edu

    2011-10-10

    The recent discovery of an eclipsing hierarchical triple system with two low-mass stars in a close orbit (KOI-126) by Carter et al. appeared to reinforce the evidence that theoretical stellar evolution models are not able to reproduce the observational mass-radius relation for low-mass stars. We present a set of stellar models for the three stars in the KOI-126 system that show excellent agreement with the observed radii. This agreement appears to be due to the equation of state implemented by our code. A significant dispersion in the observed mass-radius relation for fully convective stars is demonstrated; indicative of the influence of physics currently not incorporated in standard stellar evolution models. We also predict apsidal motion constants for the two M dwarf companions. These values should be observationally determined to within 1% by the end of the Kepler mission.

  10. Impact Flash Physics: Modeling and Comparisons With Experimental Results

    NASA Astrophysics Data System (ADS)

    Rainey, E.; Stickle, A. M.; Ernst, C. M.; Schultz, P. H.; Mehta, N. L.; Brown, R. C.; Swaminathan, P. K.; Michaelis, C. H.; Erlandson, R. E.

    2015-12-01

    Hypervelocity impacts frequently generate an observable "flash" of light with two components: a short-duration spike due to emissions from vaporized material, and a long-duration peak due to thermal emissions from expanding hot debris. The intensity and duration of these peaks depend on the impact velocity, angle, and the target and projectile mass and composition. Thus remote sensing measurements of planetary impact flashes have the potential to constrain the properties of impacting meteors and improve our understanding of impact flux and cratering processes. Interpreting impact flash measurements requires a thorough understanding of how flash characteristics correlate with impact conditions. Because planetary-scale impacts cannot be replicated in the laboratory, numerical simulations are needed to provide this insight for the solar system. Computational hydrocodes can produce detailed simulations of the impact process, but they lack the radiation physics required to model the optical flash. The Johns Hopkins University Applied Physics Laboratory (APL) developed a model to calculate the optical signature from the hot debris cloud produced by an impact. While the phenomenology of the optical signature is understood, the details required to accurately model it are complicated by uncertainties in material and optical properties and the simplifications required to numerically model radiation from large-scale impacts. Comparisons with laboratory impact experiments allow us to validate our approach and to draw insight regarding processes that occur at all scales in impact events, such as melt generation. We used Sandia National Lab's CTH shock physics hydrocode along with the optical signature model developed at APL to compare with a series of laboratory experiments conducted at the NASA Ames Vertical Gun Range. The experiments used Pyrex projectiles to impact pumice powder targets with velocities ranging from 1 to 6 km/s at angles of 30 and 90 degrees with respect to

  11. Physical resist models and their calibration: their readiness for accurate EUV lithography simulation

    NASA Astrophysics Data System (ADS)

    Klostermann, U. K.; Mülders, T.; Schmöller, T.; Lorusso, G. F.; Hendrickx, E.

    2010-04-01

    In this paper, we discuss the performance of EUV resist models in terms of predictive accuracy, and we assess the readiness of the corresponding model calibration methodology. The study is done on an extensive OPC data set collected at IMEC for the ShinEtsu resist SEVR-59 on the ASML EUV Alpha Demo Tool (ADT), with the data set including more than thousand CD values. We address practical aspects such as the speed of calibration and selection of calibration patterns. The model is calibrated on 12 process window data series varying in pattern width (32, 36, 40 nm), orientation (H, V) and pitch (dense, isolated). The minimum measured feature size at nominal process condition is a 32 nm CD at a dense pitch of 64 nm. Mask metrology is applied to verify and eventually correct nominal width of the drawn CD. Cross-sectional SEM information is included in the calibration to tune the simulated resist loss and sidewall angle. The achieved calibration RMS is ~ 1.0 nm. We show what elements are important to obtain a well calibrated model. We discuss the impact of 3D mask effects on the Bossung tilt. We demonstrate that a correct representation of the flare level during the calibration is important to achieve a high predictability at various flare conditions. Although the model calibration is performed on a limited subset of the measurement data (one dimensional structures only), its accuracy is validated based on a large number of OPC patterns (at nominal dose and focus conditions) not included in the calibration; validation RMS results as small as 1 nm can be reached. Furthermore, we study the model's extendibility to two-dimensional end of line (EOL) structures. Finally, we correlate the experimentally observed fingerprint of the CD uniformity to a model, where EUV tool specific signatures are taken into account.

  12. Accurate SHAPE-directed RNA secondary structure modeling, including pseudoknots.

    PubMed

    Hajdin, Christine E; Bellaousov, Stanislav; Huggins, Wayne; Leonard, Christopher W; Mathews, David H; Weeks, Kevin M

    2013-04-02

    A pseudoknot forms in an RNA when nucleotides in a loop pair with a region outside the helices that close the loop. Pseudoknots occur relatively rarely in RNA but are highly overrepresented in functionally critical motifs in large catalytic RNAs, in riboswitches, and in regulatory elements of viruses. Pseudoknots are usually excluded from RNA structure prediction algorithms. When included, these pairings are difficult to model accurately, especially in large RNAs, because allowing this structure dramatically increases the number of possible incorrect folds and because it is difficult to search the fold space for an optimal structure. We have developed a concise secondary structure modeling approach that combines SHAPE (selective 2'-hydroxyl acylation analyzed by primer extension) experimental chemical probing information and a simple, but robust, energy model for the entropic cost of single pseudoknot formation. Structures are predicted with iterative refinement, using a dynamic programming algorithm. This melded experimental and thermodynamic energy function predicted the secondary structures and the pseudoknots for a set of 21 challenging RNAs of known structure ranging in size from 34 to 530 nt. On average, 93% of known base pairs were predicted, and all pseudoknots in well-folded RNAs were identified.
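
    The core ingredient of SHAPE-directed folding is a per-nucleotide pseudo-free-energy change added to the thermodynamic model, commonly written as ΔG_SHAPE(i) = m*ln(reactivity_i + 1) + b. The short sketch below shows that conversion with typical literature slope/intercept values, which should be treated as assumptions; the pseudoknot entropy model and the iterative dynamic-programming refinement described in the abstract are not reproduced.

    ```python
    # Convert SHAPE reactivities into pseudo-free-energy bonuses/penalties that can be
    # added to a nearest-neighbour folding model. Slope/intercept values are typical
    # literature defaults and should be treated as assumptions here.
    import math

    def shape_pseudo_energy(reactivity, m=2.6, b=-0.8):
        """Per-nucleotide pseudo-free energy in kcal/mol; negative reactivities
        (missing data) contribute no pseudo-energy."""
        if reactivity is None or reactivity < 0.0:
            return 0.0
        return m * math.log(reactivity + 1.0) + b

    reactivities = [0.02, 0.15, 1.8, 0.7, -999, 0.0]   # -999 marks a missing value
    for x in reactivities:
        print(f"reactivity {x:7.2f}  ->  dG_SHAPE = {shape_pseudo_energy(x):6.2f} kcal/mol")
    ```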

  13. Accurate coarse-grained models for mixtures of colloids and linear polymers under good-solvent conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D’Adamo, Giuseppe, E-mail: giuseppe.dadamo@sissa.it; Pelissetto, Andrea, E-mail: andrea.pelissetto@roma1.infn.it; Pierleoni, Carlo, E-mail: carlo.pierleoni@aquila.infn.it

    2014-12-28

    A coarse-graining strategy, previously developed for polymer solutions, is extended here to mixtures of linear polymers and hard-sphere colloids. In this approach, groups of monomers are mapped onto a single pseudoatom (a blob) and the effective blob-blob interactions are obtained by requiring the model to reproduce some large-scale structural properties in the zero-density limit. We show that an accurate parametrization of the polymer-colloid interactions is obtained by simply introducing pair potentials between blobs and colloids. For the coarse-grained (CG) model in which polymers are modelled as four-blob chains (tetramers), the pair potentials are determined by means of the iterative Boltzmann inversion scheme, taking full-monomer (FM) pair correlation functions at zero-density as targets. For a larger number n of blobs, pair potentials are determined by using a simple transferability assumption based on the polymer self-similarity. We validate the model by comparing its predictions with full-monomer results for the interfacial properties of polymer solutions in the presence of a single colloid and for thermodynamic and structural properties in the homogeneous phase at finite polymer and colloid density. The tetramer model is quite accurate for q ≲ 1 (q = R̂_g/R_c, where R̂_g is the zero-density polymer radius of gyration and R_c is the colloid radius) and reasonably good also for q = 2. For q = 2, an accurate coarse-grained description is obtained by using the n = 10 blob model. We also compare our results with those obtained by using single-blob models with state-dependent potentials.
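
    A minimal sketch of a single iterative Boltzmann inversion correction step of the kind used to determine the blob-colloid pair potentials: the potential is corrected by k_B*T*ln(g_current(r)/g_target(r)), optionally damped. The radial distribution functions below are synthetic stand-ins for the full-monomer targets and the CG-model output.

    ```python
    # One iterative-Boltzmann-inversion (IBI) correction step for a tabulated pair
    # potential, given the current and target radial distribution functions.
    # The "simulation" that would produce g_current is faked with synthetic data.
    import numpy as np

    kB_T = 1.0                       # reduced units
    r = np.linspace(0.5, 3.0, 256)   # radial grid (excluding r -> 0)

    g_target = 1.0 + 0.5 * np.exp(-(r - 1.2)**2 / 0.05)    # stand-in for full-monomer g(r)
    g_current = 1.0 + 0.3 * np.exp(-(r - 1.3)**2 / 0.07)   # stand-in for CG-model g(r)

    def ibi_update(U, g_current, g_target, alpha=0.5, g_floor=1e-8):
        """Return U_{i+1}(r) = U_i(r) + alpha * kB*T * ln(g_i(r) / g_target(r))."""
        ratio = np.maximum(g_current, g_floor) / np.maximum(g_target, g_floor)
        return U + alpha * kB_T * np.log(ratio)

    U0 = np.zeros_like(r)            # initial guess for the blob-colloid potential
    U1 = ibi_update(U0, g_current, g_target)
    print("max |correction| in this step:", float(np.max(np.abs(U1 - U0))))
    ```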

  14. A Multivariate Model of Physics Problem Solving

    ERIC Educational Resources Information Center

    Taasoobshirazi, Gita; Farley, John

    2013-01-01

    A model of expertise in physics problem solving was tested on undergraduate science, physics, and engineering majors enrolled in an introductory-level physics course. Structural equation modeling was used to test hypothesized relationships among variables linked to expertise in physics problem solving including motivation, metacognitive planning,…

  15. Development of an Anatomically Accurate Finite Element Human Ocular Globe Model for Blast-Related Fluid-Structure Interaction Studies

    DTIC Science & Technology

    2017-02-01

    ARL-TR-7945, February 2017, US Army Research Laboratory: Development of an Anatomically Accurate Finite Element Human Ocular Globe Model for Blast-Related Fluid-Structure Interaction Studies.

  16. Accurate, efficient, and (iso)geometrically flexible collocation methods for phase-field models

    NASA Astrophysics Data System (ADS)

    Gomez, Hector; Reali, Alessandro; Sangalli, Giancarlo

    2014-04-01

    We propose new collocation methods for phase-field models. Our algorithms are based on isogeometric analysis, a new technology that makes use of functions from computational geometry, such as, for example, Non-Uniform Rational B-Splines (NURBS). NURBS exhibit excellent approximability and controllable global smoothness, and can represent exactly most geometries encapsulated in Computer Aided Design (CAD) models. These attributes permitted us to derive accurate, efficient, and geometrically flexible collocation methods for phase-field models. The performance of our method is demonstrated by several numerical examples of phase separation modeled by the Cahn-Hilliard equation. We feel that our method successfully combines the geometrical flexibility of finite elements with the accuracy and simplicity of pseudo-spectral collocation methods, and is a viable alternative to classical collocation methods.

  17. Computer-based personality judgments are more accurate than those made by humans.

    PubMed

    Youyou, Wu; Kosinski, Michal; Stillwell, David

    2015-01-27

    Judging others' personalities is an essential skill in successful social living, as personality is a key driver behind people's interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants' Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy.

  18. Geodetic analysis of disputed accurate qibla direction

    NASA Astrophysics Data System (ADS)

    Saksono, Tono; Fulazzaky, Mohamad Ali; Sari, Zamah

    2018-04-01

    Performing the prayers facing towards the correct qibla direction is one of the practical issues in linking theoretical studies with practice. The concept of facing towards the Kaaba in Mecca during the prayers has long been a source of controversy among Muslim communities, not only in poor and developing countries but also in developed countries. The aim of this study was to analyse the geodetic azimuths of the qibla calculated using three different models of the Earth. The use of an ellipsoidal model of the Earth could be the best method for determining the accurate direction of the Kaaba from anywhere on the Earth's surface. A Muslim cannot orient himself towards the qibla correctly if he cannot see the Kaaba, and the setting-out process and certain motions during the prayer can significantly shift the qibla direction from the actual position of the Kaaba. The requirement that Muslims pray facing towards the Kaaba is a spiritual prerequisite rather than a matter of physical evidence.
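
    A hedged sketch of the calculation at issue: the spherical great-circle azimuth towards the Kaaba from an arbitrary location, with the WGS84 ellipsoidal (geodesic) azimuth obtainable from a standard geodesy library for comparison. The Kaaba coordinates are the commonly quoted values and the example location is arbitrary; the paper's three Earth models are not reproduced here.

    ```python
    # Spherical great-circle azimuth from an observer to the Kaaba, plus (optionally)
    # the WGS84 ellipsoidal geodesic azimuth via pyproj for comparison.
    import math

    KAABA_LAT, KAABA_LON = 21.4225, 39.8262   # commonly quoted Kaaba coordinates [deg]

    def qibla_azimuth_sphere(lat_deg, lon_deg):
        """Initial great-circle bearing (degrees clockwise from true north)."""
        phi1, phi2 = math.radians(lat_deg), math.radians(KAABA_LAT)
        dlon = math.radians(KAABA_LON - lon_deg)
        y = math.sin(dlon) * math.cos(phi2)
        x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
        return math.degrees(math.atan2(y, x)) % 360.0

    lat, lon = -6.2, 106.8   # example observer location (approximately Jakarta)
    print(f"spherical qibla azimuth  : {qibla_azimuth_sphere(lat, lon):.3f} deg")

    try:
        from pyproj import Geod
        az_ell, _, _ = Geod(ellps="WGS84").inv(lon, lat, KAABA_LON, KAABA_LAT)
        print(f"ellipsoidal qibla azimuth: {az_ell % 360.0:.3f} deg")
    except ImportError:
        pass  # pyproj not installed; spherical value only
    ```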

  19. Measurement of Pressure Responses in a Physical Model of a Human Head with High Shape Fidelity Based on Ct/mri Data

    NASA Astrophysics Data System (ADS)

    Miyazaki, Yusuke; Tachiya, Hiroshi; Anata, Kenji; Hojo, Akihiro

    This study discusses a head injury mechanism in the case of a human head subjected to impact, based on the results of impact experiments using a physical model of a human head with high shape fidelity. The physical model was constructed using rapid prototyping technology from three-dimensional CAD data obtained from CT/MRI images of a subject's head. In the experiments, positive pressure responses occurred at the impacted site, whereas negative pressure responses occurred opposite the impacted site. Moreover, the absolute maximum pressure occurring at the frontal region of the intracranial space of the head model was equal to or higher than that at the occipital site, whether the impact force was imposed on the frontal or the occipital region. This result has not been shown in other studies using simple-shape physical models, and it corresponds with clinical evidence that brain contusion mainly occurs at the frontal part for each impact direction. Thus, a physical model with an accurate skull shape is needed to clarify the mechanism of brain contusion.

  20. BEYOND ELLIPSE(S): ACCURATELY MODELING THE ISOPHOTAL STRUCTURE OF GALAXIES WITH ISOFIT AND CMODEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciambur, B. C., E-mail: bciambur@swin.edu.au

    2015-09-10

    This work introduces a new fitting formalism for isophotes that enables more accurate modeling of galaxies with non-elliptical shapes, such as disk galaxies viewed edge-on or galaxies with X-shaped/peanut bulges. Within this scheme, the angular parameter that defines quasi-elliptical isophotes is transformed from the commonly used, but inappropriate, polar coordinate to the “eccentric anomaly.” This provides a superior description of deviations from ellipticity, better capturing the true isophotal shape. Furthermore, this makes it possible to accurately recover both the surface brightness profile, using the correct azimuthally averaged isophote, and the two-dimensional model of any galaxy: the hitherto ubiquitous, but artificial, cross-like features in residual images are completely removed. The formalism has been implemented into the Image Reduction and Analysis Facility tasks Ellipse and Bmodel to create the new tasks “Isofit,” and “Cmodel.” The new tools are demonstrated here with application to five galaxies, chosen to be representative case-studies for several areas where this technique makes it possible to gain new scientific insight. Specifically: properly quantifying boxy/disky isophotes via the fourth harmonic order in edge-on galaxies, quantifying X-shaped/peanut bulges, higher-order Fourier moments for modeling bars in disks, and complex isophote shapes. Higher order (n > 4) harmonics now become meaningful and may correlate with structural properties, as boxyness/diskyness is known to do. This work also illustrates how the accurate construction, and subtraction, of a model from a galaxy image facilitates the identification and recovery of over-lapping sources such as globular clusters and the optical counterparts of X-ray sources.
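
    To illustrate the change of parametrization the paper introduces, the toy sketch below samples a quasi-elliptical isophote using the eccentric anomaly (x = a*cos(psi), y = a*(1 - eps)*sin(psi)) and perturbs the radius with a fourth-order Fourier harmonic to mimic disky or boxy shapes. This is only an illustration of the formalism, not the Isofit/Cmodel implementation, and the sign convention for the harmonic amplitude is an assumption.

    ```python
    # Generate a quasi-elliptical isophote parametrised by the eccentric anomaly psi,
    # with a 4th-order Fourier perturbation of the radial coordinate (toy example).
    import numpy as np

    def isophote(a=1.0, eps=0.4, a4=0.0, n_points=360):
        """Return x, y of an isophote with semi-major axis a, ellipticity eps and a
        4th-harmonic radial amplitude a4 (a4 > 0 disky, a4 < 0 boxy in this toy)."""
        psi = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
        b = a * (1.0 - eps)
        x_e, y_e = a * np.cos(psi), b * np.sin(psi)
        r_e = np.hypot(x_e, y_e)                 # radius of the pure ellipse
        r = r_e + a4 * np.cos(4.0 * psi)         # add the 4th-order harmonic
        scale = r / r_e
        return x_e * scale, y_e * scale

    for a4 in (-0.05, 0.0, +0.05):
        x, y = isophote(a4=a4)
        # crude boxyness diagnostic: radius along the diagonal vs along the major axis
        print(f"a4 = {a4:+.2f}: r(45 deg)/r(0 deg) = {np.hypot(x[45], y[45]) / x[0]:.3f}")
    ```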

  1. Modelling Complex Fenestration Systems using physical and virtual models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thanachareonkit, Anothai; Scartezzini, Jean-Louis

    2010-04-15

    Physical or virtual models are commonly used to visualize the conceptual ideas of architects, lighting designers and researchers; they are also employed to assess the daylighting performance of buildings, particularly in cases where Complex Fenestration Systems (CFS) are considered. Recent studies have however revealed a general tendency of physical models to over-estimate this performance, compared to those of real buildings; these discrepancies can be attributed to several reasons. In order to identify the main error sources, a series of comparisons in-between a real building (a single office room within a test module) and the corresponding physical and virtual models was undertaken. The physical model was placed in outdoor conditions, which were strictly identical to those of the real building, as well as underneath a scanning sky simulator. The virtual model simulations were carried out by way of the Radiance program using the GenSky function; an alternative evaluation method, named Partial Daylight Factor method (PDF method), was also employed with the physical model together with sky luminance distributions acquired by a digital sky scanner during the monitoring of the real building. The overall daylighting performance of physical and virtual models were assessed and compared. The causes of discrepancies between the daylighting performance of the real building and the models were analysed. The main identified sources of errors are the reproduction of building details, the CFS modelling and the mocking-up of the geometrical and photometrical properties. To study the impact of these errors on daylighting performance assessment, computer simulation models created using the Radiance program were also used to carry out a sensitivity analysis of modelling errors. The study of the models showed that large discrepancies can occur in daylighting performance assessment. In case of improper mocking-up of the glazing for instance, relative divergences of 25

  2. Production of Accurate Skeletal Models of Domestic Animals Using Three-Dimensional Scanning and Printing Technology

    ERIC Educational Resources Information Center

    Li, Fangzheng; Liu, Chunying; Song, Xuexiong; Huan, Yanjun; Gao, Shansong; Jiang, Zhongling

    2018-01-01

    Access to adequate anatomical specimens can be an important aspect in learning the anatomy of domestic animals. In this study, the authors utilized a structured light scanner and fused deposition modeling (FDM) printer to produce highly accurate animal skeletal models. First, various components of the bovine skeleton, including the femur, the…

  3. A model-updating procedure to simulate piezoelectric transducers accurately.

    PubMed

    Piranda, B; Ballandras, S; Steichen, W; Hecart, B

    2001-09-01

    The use of numerical calculations based on finite element methods (FEM) has yielded significant improvements in the simulation and design of piezoelectric transducers utilized in acoustic imaging. However, the ultimate precision of such models is directly controlled by the accuracy of material characterization. The present work is dedicated to the development of a model-updating technique adapted to the problem of piezoelectric transducers. The updating process is applied using the experimental admittance of a given structure for which a finite element analysis is performed. The mathematical developments are reported and then applied to update the entries of a FEM of a two-layer structure (a PbZrTi-PZT-ridge glued on a backing) for which measurements were available. The efficiency of the proposed approach is demonstrated, yielding the definition of a new set of constants well adapted to predict the structure response accurately. An improvement of the proposed approach, consisting of updating the material coefficients not only against the admittance but also against the impedance data, is finally discussed.

  4. Accurate modeling of switched reluctance machine based on hybrid trained WNN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Shoujun, E-mail: sunnyway@nwpu.edu.cn; Ge, Lefei; Ma, Shaojie

    2014-04-15

    Owing to the strongly nonlinear electromagnetic characteristics of the switched reluctance machine (SRM), a novel accurate modeling method is proposed based on a hybrid trained wavelet neural network (WNN) which combines an improved genetic algorithm (GA) with the gradient descent (GD) method to train the network. In the novel method, the WNN is trained by the GD method starting from initial weights obtained by the improved GA optimization, so that the global parallel searching capability of the stochastic algorithm and the local convergence speed of the deterministic algorithm are combined to enhance the training accuracy, stability and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built by the hybrid trained WNN in Matlab. The phase current and mechanical characteristics from simulation under different working conditions agree well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of SRM and verifies the effectiveness of the proposed modeling method.
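
    A toy illustration of the two-stage training idea, not the authors' wavelet neural network: a crude genetic algorithm searches globally for the initial weights of a small "wavelet-like" model, and gradient descent then refines them locally. The model form, population settings, and learning rate are all assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for measured nonlinear machine characteristics
x = np.linspace(-2, 2, 80)
y = np.sin(2.5 * x) * np.exp(-0.3 * x**2)

def model(w, x):
    """Tiny 'wavelet-like' model: sum of two Gaussian-modulated cosines."""
    out = 0.0
    for a, b, c, d in np.asarray(w).reshape(2, 4):
        out = out + a * np.cos(b * x + c) * np.exp(-(d * x) ** 2)
    return out

def loss(w):
    return np.mean((model(w, x) - y) ** 2)

# Stage 1: crude genetic algorithm for globally searched initial weights
pop = rng.normal(0, 1, size=(40, 8))
for _ in range(60):
    fit = np.array([loss(p) for p in pop])
    parents = pop[np.argsort(fit)[:10]]                    # selection
    children = parents[rng.integers(0, 10, size=30)] \
        + rng.normal(0, 0.2, size=(30, 8))                 # mutation
    pop = np.vstack([parents, children])
best = pop[np.argmin([loss(p) for p in pop])]

# Stage 2: gradient descent (numerical gradient) from the GA result
w = best.copy()
eps = 1e-5
for _ in range(500):
    g = np.zeros_like(w)
    for i in range(w.size):
        dw = np.zeros_like(w); dw[i] = eps
        g[i] = (loss(w + dw) - loss(w - dw)) / (2 * eps)
    w -= 0.05 * g

print(f"GA loss {loss(best):.4f} -> GA+GD loss {loss(w):.4f}")
```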

  5. Physical modelling of the rainfall infiltration processes and related landslide behaviour.

    NASA Astrophysics Data System (ADS)

    Capparelli, Giovanna; Damiano, Emilia; Olivares, Lucio; Spolverino, Gennaro; Versace, Pasquale

    2016-04-01

    The prediction of natural processes, such as weather-induced landslides, is an issue of great importance. Numerous studies have been carried out to understand the processes underlying the triggering of a landslide and to improve forecasting systems. A valid prediction model allows the implementation of an equally valid announcement and warning system, thus reducing the risk caused by such phenomena. The hydraulic and hydrologic modelling of the processes that take place in an unstable slope subjected to rainfall can be performed using two approaches: mathematical models or physical models. Our research uses an integrated approach, combining data from experimental sites with the results and interpretations of physical models and with simulations from mathematical models. The intent is to observe and interpret laboratory experiments in order to reproduce and simulate the phenomenon with mathematical models. The research aims to obtain increasingly accurate interpretations of the hydrological and hydraulic processes that occur in slopes as a result of rain. For our research we use a scaled-down physical model and a FEM mathematical model. The physical model is a channel with transparent walls composed of two sections at a variable angle (triggering and propagation), each 1 meter wide and 3 meters long. The model is instrumented with sensors that monitor the hydraulic and geotechnical parameters within the slope and with devices that simulate natural events. The model is equipped with a monitoring system able to keep the physical quantities of interest under observation. In particular, the apparatus is equipped with miniaturized tensiometers, which can be installed in different positions and at different depths, for the measurement of suction within the slope; miniaturized pressure transducers on the bottom of the channel for the measurement of any positive pore pressures; a TDR system for the measurement of the volumetric water content; and displacement transducers

  6. Focus group discussion in mathematical physics learning

    NASA Astrophysics Data System (ADS)

    Ellianawati; Rudiana, D.; Sabandar, J.; Subali, B.

    2018-03-01

    The Focus Group Discussion (FGD) activity in Mathematical Physics learning has helped students perform the stages of problem solving reflectively. The FGD was implemented to explore the problems and find the right strategy to improve the students' ability to solve problems accurately, which is one component of reflective thinking that has been difficult to improve. The research method used is descriptive qualitative, based on a single-subject response from a physics education student. During the FGD process, one student's development of reflective thinking in solving a physics problem was observed. The strategy chosen in the discussion activity was the Cognitive Apprenticeship-Instruction (CA-I) syntax. Based on the results of this study, after going through a series of discussion stages, the student's reflective thinking skills increased significantly. The scaffolding stage in the CA-I model plays an important role in the process of solving physics problems accurately. Students are able to recognize and formulate problems by describing problem sketches, identifying the variables involved, applying mathematical equations that accord with physics concepts, executing the solution accurately, and carrying out evaluation by explaining the solution in various contexts.

  7. Accurate Induction Energies for Small Organic Molecules. 2. Development and Testing of Distributed Polarizability Models against SAPT(DFT) Energies.

    PubMed

    Misquitta, Alston J; Stone, Anthony J; Price, Sarah L

    2008-01-01

    In part 1 of this two-part investigation we set out the theoretical basis for constructing accurate models of the induction energy of clusters of moderately sized organic molecules. In this paper we use these techniques to develop a variety of accurate distributed polarizability models for a set of representative molecules that include formamide, N-methyl propanamide, benzene, and 3-azabicyclo[3.3.1]nonane-2,4-dione. We have also explored damping, penetration, and basis set effects. In particular, we have provided a way to treat the damping of the induction expansion. Different approximations to the induction energy are evaluated against accurate SAPT(DFT) energies, and we demonstrate the accuracy of our induction models on the formamide-water dimer.

  8. Constitutive Modeling and Testing of Polymer Matrix Composites Incorporating Physical Aging at Elevated Temperatures

    NASA Technical Reports Server (NTRS)

    Veazie, David R.

    1998-01-01

    Advanced polymer matrix composites (PMC's) are desirable for structural materials in diverse applications such as aircraft, civil infrastructure and biomedical implants because of their improved strength-to-weight and stiffness-to-weight ratios. For example, the next generation of military and commercial aircraft requires high-strength, low-weight structural components subjected to elevated temperatures. A possible disadvantage of polymer-based composites is that the physical and mechanical properties of the matrix often change significantly over time due to exposure to elevated temperatures and environmental factors. For design, long term exposure (i.e. aging) of PMC's must be accounted for through constitutive models in order to accurately assess the effects of aging on performance, crack initiation and remaining life. One particular aspect of this aging process, physical aging, is considered in this research.

  9. Towards a physics-based multiscale modelling of the electro-mechanical coupling in electro-active polymers.

    PubMed

    Cohen, Noy; Menzel, Andreas; deBotton, Gal

    2016-02-01

    Owing to the increasing number of industrial applications of electro-active polymers (EAPs), there is a growing need for electromechanical models which accurately capture their behaviour. To this end, we compare the predicted behaviour of EAPs undergoing homogeneous deformations according to three electromechanical models. The first model is a phenomenological continuum-based model composed of the mechanical Gent model and a linear relationship between the electric field and the polarization. The electrical and the mechanical responses according to the second model are based on the physical structure of the polymer chain network. The third model incorporates a neo-Hookean mechanical response and a physically motivated microstructurally based long-chains model for the electrical behaviour. In the microstructural-motivated models, the integration from the microscopic to the macroscopic levels is accomplished by the micro-sphere technique. Four types of homogeneous boundary conditions are considered and the behaviours determined according to the three models are compared. For the microstructurally motivated models, these analyses are performed and compared with the widely used phenomenological model for the first time. Some of the aspects revealed in this investigation, such as the dependence of the intensity of the polarization field on the deformation, highlight the need for an in-depth investigation of the relationships between the structure and the behaviours of the EAPs at the microscopic level and their overall macroscopic response.

  10. Highly Physical Solar Radiation Pressure Modeling During Penumbra Transitions

    NASA Astrophysics Data System (ADS)

    Robertson, Robert V.

    Solar radiation pressure (SRP) is one of the major non-gravitational forces acting on spacecraft. Acceleration by radiation pressure depends on the radiation flux; on spacecraft shape, attitude, and mass; and on the optical properties of the spacecraft surfaces. Precise modeling of SRP is needed for dynamic satellite orbit determination, space mission design and control, and processing of data from space-based science instruments. During Earth penumbra transitions, sunlight is passing through Earth's lower atmosphere and, in the process, its path, intensity, spectral composition, and shape are significantly affected. This dissertation presents a new method for highly physical SRP modeling in Earth's penumbra called Solar radiation pressure with Oblateness and Lower Atmospheric Absorption, Refraction, and Scattering (SOLAARS). The fundamental geometry and approach mirrors past work, where the solar radiation field is modeled using a number of light rays, rather than treating the Sun as a single point source. This dissertation aims to clarify this approach, simplify its implementation, and model previously overlooked factors. The complex geometries involved in modeling penumbra solar radiation fields are described in a more intuitive and complete way to simplify implementation. Atmospheric effects due to solar radiation passing through the troposphere and stratosphere are modeled, and the results are tabulated to significantly reduce computational cost. SOLAARS includes new, more efficient and accurate approaches to modeling atmospheric effects which allow us to consider the spatial and temporal variability in lower atmospheric conditions. A new approach to modeling the influence of Earth's polar flattening draws on past work to provide a relatively simple but accurate method for this important effect. Previous penumbra SRP models tend to lie at two extremes of complexity and computational cost, and so the significant improvement in accuracy provided by the complex
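
    For orientation, a cannonball-model sketch of the SRP acceleration magnitude, with a single scalar shadow factor standing in for the detailed penumbra light-field modeling that SOLAARS performs; the constants and the example spacecraft are illustrative, and none of this reproduces the ray-based SOLAARS method.

```python
# Minimal cannonball-model sketch of solar radiation pressure (SRP).
# A crude scalar shadow factor nu in [0, 1] stands in for the detailed
# penumbra modeling described above (this is NOT SOLAARS).
SOLAR_FLUX_1AU = 1361.0      # W/m^2, approximate total solar irradiance
C_LIGHT = 299_792_458.0      # m/s

def srp_acceleration(area_m2, mass_kg, cr, shadow_factor=1.0, r_au=1.0):
    """Magnitude of SRP acceleration for a flat-plate/cannonball model.
    shadow_factor: 1 in full sun, 0 in umbra, between 0 and 1 in penumbra."""
    flux = SOLAR_FLUX_1AU / r_au**2
    return shadow_factor * (flux / C_LIGHT) * cr * area_m2 / mass_kg

# Example: GPS-like satellite, 20 m^2, 1000 kg, Cr = 1.3, mid-penumbra
print(srp_acceleration(20.0, 1000.0, 1.3, shadow_factor=0.5))  # ~6e-8 m/s^2
```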

  11. Physical activity into the meal glucose-insulin model of type 1 diabetes: in silico studies.

    PubMed

    Man, Chiara Dalla; Breton, Marc D; Cobelli, Claudio

    2009-01-01

    A simulation model of a glucose-insulin system accounting for physical activity is needed to reliably simulate normal life conditions, thus accelerating the development of an artificial pancreas. In fact, exercise causes a transient increase of insulin action and may lead to hypoglycemia. However, physical activity is difficult to model. In the past, it was described indirectly as a rise in insulin. Recently, a new parsimonious model of exercise effect on glucose homeostasis has been proposed that links the change in insulin action and glucose effectiveness to heart rate (HR). The aim of this study was to plug this exercise model into our recently proposed large-scale simulation model of glucose metabolism in type 1 diabetes to better describe normal life conditions. The exercise model describes changes in glucose-insulin dynamics in two phases: a rapid on-and-off change in insulin-independent glucose clearance and a rapid-on/slow-off change in insulin sensitivity. Three candidate models of glucose effectiveness and insulin sensitivity as a function of HR have been considered, both during exercise and recovery after exercise. By incorporating these three models into the type 1 diabetes model, we simulated different levels (from mild to moderate) and duration of exercise (15 and 30 minutes), both in steady-state (e.g., during euglycemic-hyperinsulinemic clamp) and in nonsteady state (e.g., after a meal) conditions. One candidate exercise model was selected as the most reliable. A type 1 diabetes model also describing physical activity is proposed. The model represents a step forward to accurately describe glucose homeostasis in normal life conditions; however, further studies are needed to validate it against data. © Diabetes Technology Society
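
    A hedged sketch of the qualitative structure described above: heart rate above basal drives a rapid on-and-off multiplier on insulin-independent glucose clearance and a rapid-on/slow-off multiplier on insulin sensitivity. The functional forms, parameter values and time constant are illustrative assumptions, not the published model.

```python
import numpy as np

def exercise_effects(hr, hr_basal=60.0, dt=1.0,
                     k_clear=0.5, k_sens=0.8, tau_off=45.0):
    """Toy exercise model driven by heart rate (one sample per minute).

    f_clear: rapid on-and-off multiplier on insulin-independent clearance,
             proportional to the instantaneous HR excess over basal.
    f_sens : rapid-on / slow-off multiplier on insulin sensitivity, which
             decays with time constant tau_off after exercise stops.
    All parameters are illustrative, not fitted values."""
    hr = np.asarray(hr, dtype=float)
    excess = np.maximum(hr - hr_basal, 0.0) / hr_basal
    f_clear = 1.0 + k_clear * excess
    f_sens = np.ones_like(hr)
    z = 0.0                          # slow state tracking the lingering effect
    for i, e in enumerate(excess):
        z = max(z, k_sens * e)       # rapid on
        f_sens[i] = 1.0 + z
        z *= np.exp(-dt / tau_off)   # slow off between samples
    return f_clear, f_sens

# 30 min of moderate exercise (HR 120) followed by 60 min of recovery
hr = np.r_[np.full(30, 120.0), np.full(60, 60.0)]
f_clear, f_sens = exercise_effects(hr)
print(f_clear.max(), f_sens[-1])   # clearance effect ends, sensitivity lingers
```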

  12. Physical Modeling of Microtubules Network

    NASA Astrophysics Data System (ADS)

    Allain, Pierre; Kervrann, Charles

    2014-10-01

    Microtubules (MT) are highly dynamic tubulin polymers that are involved in many cellular processes such as mitosis, intracellular cell organization and vesicular transport. Nevertheless, the modeling of cytoskeleton and MT dynamics based on physical properties is difficult to achieve. Using the Euler-Bernoulli beam theory, we propose to model the rigidity of microtubules on a physical basis using forces, mass and acceleration. In addition, we link microtubules growth and shrinkage to the presence of molecules (e.g. GTP-tubulin) in the cytosol. The overall model enables linking cytosol to microtubules dynamics in a constant state space thus allowing usage of data assimilation techniques.
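
    A few closed-form Euler-Bernoulli estimates that illustrate the kind of beam mechanics the record builds on, under the assumption of a typical literature-scale flexural rigidity EI; the loads and lengths are illustrative, and the authors' dynamic, GTP-coupled model is not reproduced here.

```python
import numpy as np

KB_T = 4.11e-21     # J, thermal energy at ~300 K
EI = 2.0e-23        # N m^2, typical literature-scale microtubule rigidity

def cantilever_tip_deflection(q, L, ei=EI):
    """Clamped-free Euler-Bernoulli beam under a uniform load q (N/m):
    w(L) = q L^4 / (8 EI)."""
    return q * L**4 / (8.0 * ei)

def euler_buckling_force(L, ei=EI):
    """Critical compressive force for a pinned-pinned beam of length L."""
    return np.pi**2 * ei / L**2

def persistence_length(ei=EI, kt=KB_T):
    """Thermal persistence length l_p = EI / (k_B T)."""
    return ei / kt

L = 5.0e-6   # 5 micron microtubule
print(cantilever_tip_deflection(1.0e-12, L))   # ~4e-12 m for a pN/um-scale load
print(euler_buckling_force(L))                 # ~8 pN
print(persistence_length())                    # ~5 mm
```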

  13. Helicopter flight dynamics simulation with a time-accurate free-vortex wake model

    NASA Astrophysics Data System (ADS)

    Ribera, Maria

    This dissertation describes the implementation and validation of a coupled rotor-fuselage simulation model with a time-accurate free-vortex wake model capable of capturing the response to maneuvers of arbitrary amplitude. The resulting model has been used to analyze different flight conditions, including both steady and transient maneuvers. The flight dynamics model is based on a system of coupled nonlinear rotor-fuselage differential equations in first-order, state-space form. The rotor model includes flexible blades, with coupled flap-lag-torsion dynamics and swept tips; the rigid body dynamics are modeled with the non-linear Euler equations. The free wake models the rotor flow field by tracking the vortices released at the blade tips. Their behavior is described by the equations of vorticity transport, which is approximated using finite differences, and solved using a time-accurate numerical scheme. The flight dynamics model can be solved as a system of non-linear algebraic trim equations to determine the steady state solution, or integrated in time in response to pilot-applied controls. This study also implements new approaches to reduce the prohibitive computational costs associated with such complex models without losing accuracy. The mathematical model was validated for trim conditions in level flight, turns, climbs and descents. The results obtained correlate well with flight test data, both in level flight as well as turning and climbing and descending flight. The swept tip model was also found to improve the trim predictions, particularly at high speed. The behavior of the rigid body and the rotor blade dynamics were also studied and related to the aerodynamic load distributions obtained with the free wake induced velocities. The model was also validated in a lateral maneuver from hover. The results show improvements in the on-axis prediction, and indicate a possible relation between the off-axis prediction and the lack of rotor-body interaction

  14. "Let's get physical": advantages of a physical model over 3D computer models and textbooks in learning imaging anatomy.

    PubMed

    Preece, Daniel; Williams, Sarah B; Lam, Richard; Weller, Renate

    2013-01-01

    Three-dimensional (3D) information plays an important part in medical and veterinary education. Appreciating complex 3D spatial relationships requires a strong foundational understanding of anatomy and mental 3D visualization skills. Novel learning resources have been introduced to anatomy training to achieve this. Objective evaluation of their comparative efficacies remains scarce in the literature. This study developed and evaluated the use of a physical model in demonstrating the complex spatial relationships of the equine foot. It was hypothesized that the newly developed physical model would be more effective for students to learn magnetic resonance imaging (MRI) anatomy of the foot than textbooks or computer-based 3D models. Third year veterinary medicine students were randomly assigned to one of three teaching aid groups (physical model; textbooks; 3D computer model). The comparative efficacies of the three teaching aids were assessed through students' abilities to identify anatomical structures on MR images. Overall mean MRI assessment scores were significantly higher in students utilizing the physical model (86.39%) compared with students using textbooks (62.61%) and the 3D computer model (63.68%) (P < 0.001), with no significant difference between the textbook and 3D computer model groups (P = 0.685). Student feedback was also more positive in the physical model group compared with both the textbook and 3D computer model groups. Our results suggest that physical models may hold a significant advantage over alternative learning resources in enhancing visuospatial and 3D understanding of complex anatomical architecture, and that 3D computer models have significant limitations with regards to 3D learning. © 2013 American Association of Anatomists.

  15. An accurate behavioral model for single-photon avalanche diode statistical performance simulation

    NASA Astrophysics Data System (ADS)

    Xu, Yue; Zhao, Tingchen; Li, Ding

    2018-01-01

    An accurate behavioral model is presented to simulate important statistical performance of single-photon avalanche diodes (SPADs), such as dark count and after-pulsing noise. The derived simulation model takes into account all important generation mechanisms of the two kinds of noise. For the first time, thermal agitation, trap-assisted tunneling and band-to-band tunneling mechanisms are simultaneously incorporated in the simulation model to evaluate the dark count behavior of SPADs fabricated in deep sub-micron CMOS technology. Meanwhile, a complete carrier trapping and de-trapping process is considered in the after-pulsing model and a simple analytical expression is derived to estimate the after-pulsing probability. In particular, the key model parameters of avalanche triggering probability and the electric field dependence of excess bias voltage are extracted from Geiger-mode TCAD simulation, and the behavioral simulation model does not include any empirical parameters. The developed SPAD model is implemented in the Verilog-A behavioral hardware description language and successfully operated on the commercial Cadence Spectre simulator, showing good universality and compatibility. The model simulation results are in good accordance with the test data, validating the high simulation accuracy.
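
    A toy estimate of the after-pulsing probability from exponential trap release after the dead time, just to make the trapping/de-trapping reasoning concrete; the expression and all parameter values are assumptions, not the Verilog-A model or extracted device parameters.

```python
import numpy as np

def afterpulsing_probability(n_carriers, p_trap, p_trigger,
                             tau_release, t_dead):
    """Toy after-pulsing estimate: carriers crossing during an avalanche
    may be trapped (probability p_trap); a trap releasing after the dead
    time t_dead (exponential release, time constant tau_release) can
    re-trigger an avalanche with probability p_trigger.
    All parameter values are illustrative, not extracted device values."""
    n_trapped = n_carriers * p_trap
    frac_released_after_dead_time = np.exp(-t_dead / tau_release)
    return 1.0 - np.exp(-n_trapped * p_trigger
                        * frac_released_after_dead_time)

# e.g. 1e5 carriers per avalanche, 1e-5 trap probability, 50 ns dead time
print(afterpulsing_probability(1e5, 1e-5, 0.3,
                               tau_release=100e-9, t_dead=50e-9))
```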

  16. Engaging Students In Modeling Instruction for Introductory Physics

    NASA Astrophysics Data System (ADS)

    Brewe, Eric

    2016-05-01

    Teaching introductory physics is arguably one of the most important things that a physics department does. It is the primary way that students from other science disciplines engage with physics and it is the introduction to physics for majors. Modeling instruction is an active learning strategy for introductory physics built on the premise that science proceeds through the iterative process of model construction, development, deployment, and revision. We describe the role that participating in authentic modeling has in learning and then explore how students engage in this process in the classroom. In this presentation, we provide a theoretical background on models and modeling and describe how these theoretical elements are enacted in the introductory university physics classroom. We provide both quantitative and video data to link the development of a conceptual model to the design of the learning environment and to student outcomes. This work is supported in part by DUE #1140706.

  17. Measuring Global Physical Health in Children with Cerebral Palsy: Illustration of a Multidimensional Bi-factor Model and Computerized Adaptive Testing

    PubMed Central

    Haley, Stephen M.; Ni, Pengsheng; Dumas, Helene M.; Fragala-Pinkham, Maria A.; Hambleton, Ronald K.; Montpetit, Kathleen; Bilodeau, Nathalie; Gorton, George E.; Watson, Kyle; Tucker, Carole A

    2009-01-01

    Purpose The purpose of this study was to apply a bi-factor model for the determination of test dimensionality and a multidimensional CAT using computer simulations of real data for the assessment of a new global physical health measure for children with cerebral palsy (CP). Methods Parent respondents of 306 children with cerebral palsy were recruited from four pediatric rehabilitation hospitals and outpatient clinics. We compared confirmatory factor analysis results across four models: (1) one-factor unidimensional; (2) two-factor multidimensional (MIRT); (3) bi-factor MIRT with fixed slopes; and (4) bi-factor MIRT with varied slopes. We tested whether the general and content (fatigue and pain) person score estimates could discriminate across severity and types of CP, and whether score estimates from a simulated CAT were similar to estimates based on the total item bank, and whether they correlated as expected with external measures. Results Confirmatory factor analysis suggested separate pain and fatigue sub-factors; all 37 items were retained in the analyses. From the bi-factor MIRT model with fixed slopes, the full item bank scores discriminated across levels of severity and types of CP, and compared favorably to external instruments. CAT scores based on 10- and 15-item versions accurately captured the global physical health scores. Conclusions The bi-factor MIRT CAT application, especially the 10- and 15-item version, yielded accurate global physical health scores that discriminated across known severity groups and types of CP, and correlated as expected with concurrent measures. The CATs have potential for collecting complex data on the physical health of children with CP in an efficient manner. PMID:19221892

  18. Physically-based strength model of tantalum incorporating effects of temperature, strain rate and pressure

    DOE PAGES

    Lim, Hojun; Battaile, Corbett C.; Brown, Justin L.; ...

    2016-06-14

    In this work, we develop a tantalum strength model that incorporates effects of temperature, strain rate and pressure. Dislocation kink-pair theory is used to incorporate temperature and strain rate effects while the pressure dependent yield is obtained through the pressure dependent shear modulus. Material constants used in the model are parameterized from tantalum single crystal tests and polycrystalline ramp compression experiments. It is shown that the proposed strength model agrees well with the temperature and strain rate dependent yield obtained from polycrystalline tantalum experiments. Furthermore, the model accurately reproduces the pressure dependent yield stresses up to 250 GPa. The proposed strength model is then used to conduct simulations of a Taylor cylinder impact test and validated with experiments. This approach provides a physically-based multi-scale strength model that is able to predict the plastic deformation of polycrystalline tantalum through a wide range of temperature, strain and pressure regimes.
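
    A minimal sketch of the idea that pressure dependence of the yield enters through the pressure-dependent shear modulus: a Steinberg-Guinan-style linear form for G(P, T) scales a reference yield stress. The coefficients below are illustrative placeholders, not the parameterization used in the paper.

```python
def shear_modulus(pressure_gpa, temperature_k,
                  g0=69.0, dg_dp=1.45, dg_dt=-0.013, t_ref=300.0):
    """Steinberg-Guinan-style shear modulus (illustrative coefficients):
    G = G0 + (dG/dP) * P + (dG/dT) * (T - Tref), all in GPa and GPa/K."""
    return g0 + dg_dp * pressure_gpa + dg_dt * (temperature_k - t_ref)

def pressure_dependent_yield(y_ref, pressure_gpa, temperature_k):
    """Scale a reference (ambient) yield stress by the ratio of the
    pressure- and temperature-dependent shear modulus to its ambient
    value -- the standard way a pressure-dependent yield is obtained."""
    return y_ref * (shear_modulus(pressure_gpa, temperature_k)
                    / shear_modulus(0.0, 300.0))

print(pressure_dependent_yield(0.8, 100.0, 500.0))   # GPa, illustrative only
```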

  19. Double Cluster Heads Model for Secure and Accurate Data Fusion in Wireless Sensor Networks

    PubMed Central

    Fu, Jun-Song; Liu, Yun

    2015-01-01

    Secure and accurate data fusion is an important issue in wireless sensor networks (WSNs) and has been extensively researched in the literature. In this paper, by combining clustering techniques, reputation and trust systems, and data fusion algorithms, we propose a novel cluster-based data fusion model called Double Cluster Heads Model (DCHM) for secure and accurate data fusion in WSNs. Different from traditional clustering models in WSNs, two cluster heads are selected after clustering for each cluster based on the reputation and trust system and they perform data fusion independently of each other. Then, the results are sent to the base station where the dissimilarity coefficient is computed. If the dissimilarity coefficient of the two data fusion results exceeds the threshold preset by the users, the cluster heads will be added to blacklist, and the cluster heads must be reelected by the sensor nodes in a cluster. Meanwhile, feedback is sent from the base station to the reputation and trust system, which can help us to identify and delete the compromised sensor nodes in time. Through a series of extensive simulations, we found that the DCHM performed very well in data fusion security and accuracy. PMID:25608211
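
    A toy version of the base-station check described above: two cluster heads fuse data independently, a dissimilarity coefficient compares their results, and exceeding a user-set threshold triggers blacklisting and re-election. The dissimilarity measure, threshold, and head identifiers are illustrative assumptions.

```python
import numpy as np

def dissimilarity(fusion_a, fusion_b):
    """Toy dissimilarity coefficient between the two cluster heads'
    fusion results: normalized mean absolute difference."""
    a, b = np.asarray(fusion_a, float), np.asarray(fusion_b, float)
    scale = np.mean(np.abs(a) + np.abs(b)) / 2.0 + 1e-12
    return np.mean(np.abs(a - b)) / scale

def base_station_check(fusion_a, fusion_b, threshold=0.1, blacklist=None):
    """If the two independent fusion results disagree beyond the user-set
    threshold, flag both cluster heads for re-election (toy DCHM logic)."""
    blacklist = [] if blacklist is None else blacklist
    d = dissimilarity(fusion_a, fusion_b)
    if d > threshold:
        blacklist.extend(["CH-A", "CH-B"])   # placeholder head identifiers
        accepted = None                      # trigger re-election
    else:
        accepted = 0.5 * (np.asarray(fusion_a) + np.asarray(fusion_b))
    return accepted, d, blacklist

print(base_station_check([20.1, 20.3], [20.2, 20.4]))   # accepted
print(base_station_check([20.1, 20.3], [35.0, 36.0]))   # re-election
```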

  20. Communication: a density functional with accurate fractional-charge and fractional-spin behaviour for s-electrons.

    PubMed

    Johnson, Erin R; Contreras-García, Julia

    2011-08-28

    We develop a new density-functional approach combining physical insight from chemical structure with treatment of multi-reference character by real-space modeling of the exchange-correlation hole. We are able to recover, for the first time, correct fractional-charge and fractional-spin behaviour for atoms of groups 1 and 2. Based on Becke's non-dynamical correlation functional [A. D. Becke, J. Chem. Phys. 119, 2972 (2003)] and explicitly accounting for core-valence separation and pairing effects, this method is able to accurately describe dissociation and strong correlation in s-shell many-electron systems. © 2011 American Institute of Physics

  1. Evaluating a Model of Youth Physical Activity

    PubMed Central

    Heitzler, Carrie D.; Lytle, Leslie A.; Erickson, Darin J.; Barr-Anderson, Daheia; Sirard, John R.; Story, Mary

    2011-01-01

    Objective To explore the relationship between social influences, self-efficacy, enjoyment, and barriers and physical activity. Methods Structural equation modeling examined relationships between parent and peer support, parent physical activity, individual perceptions, and objectively measured physical activity using accelerometers among a sample of youth aged 10–17 years (N=720). Results Peer support, parent physical activity, and perceived barriers were directly related to youth activity. The proposed model accounted for 14.7% of the variance in physical activity. Conclusions The results demonstrate a need to further explore additional individual, social, and environmental factors that may influence youth’s regular participation in physical activity. PMID:20524889

  2. Modelling Students' Construction of Energy Models in Physics.

    ERIC Educational Resources Information Center

    Devi, Roshni; And Others

    1996-01-01

    Examines students' construction of experimentation models for physics theories in energy storage, transformation, and transfers involving electricity and mechanics. Student problem solving dialogs and artificial intelligence modeling of these processes is analyzed. Construction of models established relations between elements with linear causal…

  3. A hamster model for Marburg virus infection accurately recapitulates Marburg hemorrhagic fever

    PubMed Central

    Marzi, Andrea; Banadyga, Logan; Haddock, Elaine; Thomas, Tina; Shen, Kui; Horne, Eva J.; Scott, Dana P.; Feldmann, Heinz; Ebihara, Hideki

    2016-01-01

    Marburg virus (MARV), a close relative of Ebola virus, is the causative agent of a severe human disease known as Marburg hemorrhagic fever (MHF). No licensed vaccine or therapeutic exists to treat MHF, and MARV is therefore classified as a Tier 1 select agent and a category A bioterrorism agent. In order to develop countermeasures against this severe disease, animal models that accurately recapitulate human disease are required. Here we describe the development of a novel, uniformly lethal Syrian golden hamster model of MHF using a hamster-adapted MARV variant Angola. Remarkably, this model displayed almost all of the clinical features of MHF seen in humans and non-human primates, including coagulation abnormalities, hemorrhagic manifestations, petechial rash, and a severely dysregulated immune response. This MHF hamster model represents a powerful tool for further dissecting MARV pathogenesis and accelerating the development of effective medical countermeasures against human MHF. PMID:27976688

  4. A hamster model for Marburg virus infection accurately recapitulates Marburg hemorrhagic fever.

    PubMed

    Marzi, Andrea; Banadyga, Logan; Haddock, Elaine; Thomas, Tina; Shen, Kui; Horne, Eva J; Scott, Dana P; Feldmann, Heinz; Ebihara, Hideki

    2016-12-15

    Marburg virus (MARV), a close relative of Ebola virus, is the causative agent of a severe human disease known as Marburg hemorrhagic fever (MHF). No licensed vaccine or therapeutic exists to treat MHF, and MARV is therefore classified as a Tier 1 select agent and a category A bioterrorism agent. In order to develop countermeasures against this severe disease, animal models that accurately recapitulate human disease are required. Here we describe the development of a novel, uniformly lethal Syrian golden hamster model of MHF using a hamster-adapted MARV variant Angola. Remarkably, this model displayed almost all of the clinical features of MHF seen in humans and non-human primates, including coagulation abnormalities, hemorrhagic manifestations, petechial rash, and a severely dysregulated immune response. This MHF hamster model represents a powerful tool for further dissecting MARV pathogenesis and accelerating the development of effective medical countermeasures against human MHF.

  5. Can phenological models predict tree phenology accurately in the future? The unrevealed hurdle of endodormancy break.

    PubMed

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean-Michel; García de Cortázar-Atauri, Iñaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2016-10-01

    The onset of the growing season of trees has advanced by 2.3 days per decade during the last 40 years in temperate Europe because of global warming. The effect of temperature on plant phenology is, however, not linear because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud endodormancy, and, on the other hand, higher temperatures are necessary to promote bud cell growth afterward. Different process-based models have been developed in the last decades to predict the date of budbreak of woody species. They predict that global warming should delay or compromise endodormancy break at the species' equatorward range limits, leading to a delay or even an inability to flower or set new leaves. These models are classically parameterized with flowering or budbreak dates only, with no information on the endodormancy break date because this information is very scarce. Here, we evaluated the efficiency of a set of phenological models to accurately predict the endodormancy break dates of three fruit trees. Our results show that models calibrated solely with budbreak dates usually do not accurately predict the endodormancy break date. Providing the endodormancy break date for the model parameterization results in much more accurate prediction of the latter, albeit with a higher error than that on budbreak dates. Most importantly, we show that models not calibrated with endodormancy break dates can generate large discrepancies in forecasted budbreak dates when using climate scenarios as compared to models calibrated with endodormancy break dates. This discrepancy increases with mean annual temperature and is therefore strongest after 2050 in the southernmost regions. Our results call for urgent, large-scale measurement of endodormancy break dates in forest and fruit trees to yield more robust projections of phenological changes in the near future. © 2016 John Wiley & Sons Ltd.

  6. Computer-based personality judgments are more accurate than those made by humans

    PubMed Central

    Youyou, Wu; Kosinski, Michal; Stillwell, David

    2015-01-01

    Judging others’ personalities is an essential skill in successful social living, as personality is a key driver behind people’s interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants’ Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy. PMID:25583507

  7. Slush Fund: Modeling the Multiphase Physics of Oceanic Ices

    NASA Astrophysics Data System (ADS)

    Buffo, J.; Schmidt, B. E.

    2016-12-01

    The prevalence of ice interacting with an ocean, both on Earth and throughout the solar system, and its crucial role as the mediator of exchange between the hydrosphere below and atmosphere above, have made quantifying the thermodynamic, chemical, and physical properties of the ice highly desirable. While direct observations of these quantities exist, their scarcity increases with the difficulty of obtainment; the basal surfaces of terrestrial ice shelves remain largely unexplored and the icy interiors of moons like Europa and Enceladus have never been directly observed. Our understanding of these entities thus relies on numerical simulation, and the efficacy of their incorporation into larger systems models is dependent on the accuracy of these initial simulations. One characteristic of seawater, likely shared by the oceans of icy moons, is that it is a solution. As such, when it is frozen a majority of the solute is rejected from the forming ice, concentrating in interstitial pockets and channels, producing a two-component reactive porous media known as a mushy layer. The multiphase nature of this layer affects the evolution and dynamics of the overlying ice mass. Additionally ice can form in the water column and accrete onto the basal surface of these ice masses via buoyancy driven sedimentation as frazil or platelet ice. Numerical models hoping to accurately represent ice-ocean interactions should include the multiphase behavior of these two phenomena. While models of sea ice have begun to incorporate multiphase physics into their capabilities, no models of ice shelves/shells explicitly account for the two-phase behavior of the ice-ocean interface. Here we present a 1D multiphase model of floating oceanic ice that includes parameterizations of both density driven advection within the `mushy layer' and buoyancy driven sedimentation. The model is validated against contemporary sea ice models and observational data. Environmental stresses such as supercooling and

  8. Fast and accurate focusing analysis of large photon sieve using pinhole ring diffraction model.

    PubMed

    Liu, Tao; Zhang, Xin; Wang, Lingjie; Wu, Yanxiong; Zhang, Jizhen; Qu, Hemeng

    2015-06-10

    In this paper, we developed a pinhole ring diffraction model for the focusing analysis of a large photon sieve. Instead of analyzing individual pinholes, we discuss the focusing of all of the pinholes in a single ring. An explicit equation for the diffracted field of an individual pinhole ring is proposed. We investigated the validity range of this generalized model and analytically describe the sufficient conditions for its validity. A practical example and investigation reveal the high accuracy of the pinhole ring diffraction model. This simulation method can be used for fast and accurate focusing analysis of a large photon sieve.

  9. Branch and bound algorithm for accurate estimation of analytical isotropic bidirectional reflectance distribution function models.

    PubMed

    Yu, Chanki; Lee, Sang Wook

    2016-05-20

    We present a reliable and accurate global optimization framework for estimating parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. This approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way and interval arithmetic to derive our feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model but with several normal distribution functions such as the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that the L1-norm minimization provides a more accurate and reliable solution than the L2-norm minimization.
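
    To make the role of the L1-norm objective concrete, here is a toy fit of a diffuse-plus-Beckmann specular lobe to synthetic measurements containing outliers, using an exhaustive coarse grid search as a stand-in for the paper's branch-and-bound global optimization; the simplified BRDF (no Fresnel or geometry terms) and all settings are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def beckmann_ndf(cos_h, alpha):
    """Beckmann normal distribution function D(theta_h; alpha)."""
    c2 = np.clip(cos_h, 1e-6, 1.0) ** 2
    t2 = (1.0 - c2) / c2
    return np.exp(-t2 / alpha**2) / (np.pi * alpha**2 * c2**2)

# Synthetic "measurements": diffuse term plus a specular lobe, with a
# few gross outliers to show why an L1 objective is attractive.
cos_h = np.cos(np.linspace(0.0, 0.45, 60))
truth = 0.05 / np.pi + 0.4 * beckmann_ndf(cos_h, 0.15)
meas = truth + rng.normal(0, 0.02, cos_h.size)
meas[::15] += 3.0                                  # outliers

def l1_error(kd, ks, alpha):
    pred = kd / np.pi + ks * beckmann_ndf(cos_h, alpha)
    return np.sum(np.abs(pred - meas))

# Exhaustive coarse grid search as a stand-in for branch and bound.
grid_kd = np.linspace(0.0, 0.2, 21)
grid_ks = np.linspace(0.0, 1.0, 21)
grid_al = np.linspace(0.05, 0.5, 46)
best = min((l1_error(kd, ks, al), kd, ks, al)
           for kd in grid_kd for ks in grid_ks for al in grid_al)
print(best)   # should land near (kd, ks, alpha) = (0.05, 0.4, 0.15)
```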

  10. Pre-Modeling Ensures Accurate Solid Models

    ERIC Educational Resources Information Center

    Gow, George

    2010-01-01

    Successful solid modeling requires a well-organized design tree. The design tree is a list of all the object's features and the sequential order in which they are modeled. The solid-modeling process is faster and less prone to modeling errors when the design tree is a simple and geometrically logical definition of the modeled object. Few high…

  11. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  12. An automatic and accurate method of full heart segmentation from CT image based on linear gradient model

    NASA Astrophysics Data System (ADS)

    Yang, Zili

    2017-07-01

    Heart segmentation is an important auxiliary method in the diagnosis of many heart diseases, such as coronary heart disease and atrial fibrillation, and in the planning of tumor radiotherapy. Most of the existing methods for full heart segmentation treat the heart as a whole part and cannot accurately extract the bottom of the heart. In this paper, we propose a new method based on linear gradient model to segment the whole heart from the CT images automatically and accurately. Twelve cases were tested in order to test this method and accurate segmentation results were achieved and identified by clinical experts. The results can provide reliable clinical support.

  13. Insights on multivariate updates of physical and biogeochemical ocean variables using an Ensemble Kalman Filter and an idealized model of upwelling

    NASA Astrophysics Data System (ADS)

    Yu, Liuqian; Fennel, Katja; Bertino, Laurent; Gharamti, Mohamad El; Thompson, Keith R.

    2018-06-01

    Effective data assimilation methods for incorporating observations into marine biogeochemical models are required to improve hindcasts, nowcasts and forecasts of the ocean's biogeochemical state. Recent assimilation efforts have shown that updating model physics alone can degrade biogeochemical fields while only updating biogeochemical variables may not improve a model's predictive skill when the physical fields are inaccurate. Here we systematically investigate whether multivariate updates of physical and biogeochemical model states are superior to only updating either physical or biogeochemical variables. We conducted a series of twin experiments in an idealized ocean channel that experiences wind-driven upwelling. The forecast model was forced with biased wind stress and perturbed biogeochemical model parameters compared to the model run representing the "truth". Taking advantage of the multivariate nature of the deterministic Ensemble Kalman Filter (DEnKF), we assimilated different combinations of synthetic physical (sea surface height, sea surface temperature and temperature profiles) and biogeochemical (surface chlorophyll and nitrate profiles) observations. We show that when biogeochemical and physical properties are highly correlated (e.g., thermocline and nutricline), multivariate updates of both are essential for improving model skill and can be accomplished by assimilating either physical (e.g., temperature profiles) or biogeochemical (e.g., nutrient profiles) observations. In our idealized domain, the improvement is largely due to a better representation of nutrient upwelling, which results in a more accurate nutrient input into the euphotic zone. In contrast, assimilating surface chlorophyll improves the model state only slightly, because surface chlorophyll contains little information about the vertical density structure. We also show that a degradation of the correlation between observed subsurface temperature and nutrient fields, which has been an
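
    A toy stochastic ensemble Kalman filter update showing the mechanism behind multivariate updates: assimilating a temperature observation also shifts an unobserved nutrient variable through the ensemble cross-covariance. This is a two-variable illustration, not the DEnKF configuration used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy joint state per ensemble member: [temperature, nutrient].
# The prior ensemble carries a negative T-N correlation (warm surface,
# deep nutricline), which is what lets a physical observation update
# the biogeochemical variable.
n_ens = 100
temp = 15.0 + rng.normal(0, 1.0, n_ens)
nutr = 5.0 - 1.5 * (temp - 15.0) + rng.normal(0, 0.3, n_ens)
X = np.vstack([temp, nutr])                 # state ensemble, shape (2, N)

obs, obs_err = 16.2, 0.2                    # observed SST and its std dev
H = np.array([[1.0, 0.0]])                  # we observe temperature only

Xm = X.mean(axis=1, keepdims=True)
A = X - Xm                                  # ensemble anomalies
P = A @ A.T / (n_ens - 1)                   # ensemble covariance (2x2)
K = P @ H.T / (H @ P @ H.T + obs_err**2)    # Kalman gain (2x1)

# Stochastic EnKF update with perturbed observations for each member
y_pert = obs + rng.normal(0, obs_err, n_ens)
X_upd = X + K @ (y_pert[None, :] - H @ X)

print("prior   T, N:", X.mean(axis=1))
print("updated T, N:", X_upd.mean(axis=1))  # the nutrient moves too
```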

  14. High-frequency techniques for RCS prediction of plate geometries and a physical optics/equivalent currents model for the RCS of trihedral corner reflectors, parts 1 and 2

    NASA Technical Reports Server (NTRS)

    Balanis, Constantine A.; Polka, Lesley A.; Polycarpou, Anastasis C.

    1994-01-01

    Formulations for scattering from the coated plate and the coated dihedral corner reflector are included. A coated plate model based upon the Uniform Theory of Diffraction (UTD) for impedance wedges was presented in the last report. In order to resolve inaccuracies and discontinuities in the predicted patterns using the UTD-based model, an improved model that uses more accurate diffraction coefficients is presented. A Physical Optics (PO) model for the coated dihedral corner reflector is presented as an intermediary step in developing a high-frequency model for this structure. The PO model is based upon the reflection coefficients for a metal-backed lossy material. Preliminary PO results for the dihedral corner reflector suggest that, in addition to being much faster computationally, this model may be more accurate than existing moment method (MM) models. An improved Physical Optics (PO)/Equivalent Currents model for modeling the Radar Cross Section (RCS) of both square and triangular, perfectly conducting, trihedral corner reflectors is presented. The new model uses the PO approximation at each reflection for the first- and second-order reflection terms. For the third-order reflection terms, a Geometrical Optics (GO) approximation is used for the first reflection; and PO approximations are used for the remaining reflections. The previously reported model used GO for all reflections except the terminating reflection. Using PO for most of the reflections results in a computationally slower model because many integrations must be performed numerically, but the advantage is that the predicted RCS using the new model is much more accurate. Comparisons between the two PO models, Finite-Difference Time-Domain (FDTD) and experimental data are presented for validation of the new model.

  15. An accurate fatigue damage model for welded joints subjected to variable amplitude loading

    NASA Astrophysics Data System (ADS)

    Aeran, A.; Siriwardane, S. C.; Mikkelsen, O.; Langen, I.

    2017-12-01

    Researchers in the past have proposed several fatigue damage models to overcome the shortcomings of the commonly used Miner's rule. However, requirements for material parameters or S-N curve modifications restrict their practical applications. Moreover, most of these models have not been applied under variable amplitude loading conditions. To overcome these restrictions, a new fatigue damage model is proposed in this paper. The proposed model can be applied by practicing engineers using only the S-N curve given in the standard codes of practice. The model is verified with experimentally derived damage evolution curves for C45 and 16Mn steels and gives better agreement compared to previous models. The model-predicted fatigue lives are also in better correlation with experimental results compared to previous models, as shown in earlier published work by the authors. The proposed model is applied in this paper to welded joints subjected to variable amplitude loadings. The model gives around 8% shorter fatigue lives compared to the Miner's rule approach given in Eurocode. This shows the importance of applying accurate fatigue damage models for welded joints.
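
    For reference, a minimal sketch of the Miner's rule baseline that the proposed model is compared against, using a Eurocode-style single-slope S-N curve; the detail category, slope, and load spectrum are illustrative values, and the proposed damage model itself is not reproduced here.

```python
def sn_cycles_to_failure(stress_range_mpa, detail_category=90.0, m=3.0):
    """Single-slope S-N curve in Eurocode-like form: N = 2e6 (dC/dS)^m,
    where dC is the detail category (stress range at 2 million cycles)."""
    return 2.0e6 * (detail_category / stress_range_mpa) ** m

def miner_damage(blocks, detail_category=90.0):
    """Miner's rule: D = sum(n_i / N_i) over variable-amplitude blocks,
    with failure predicted at D = 1. blocks = [(stress_range_mpa, n_cycles), ...]"""
    return sum(n / sn_cycles_to_failure(s, detail_category)
               for s, n in blocks)

# Illustrative variable-amplitude spectrum for a welded detail
spectrum = [(120.0, 5.0e4), (80.0, 4.0e5), (50.0, 2.0e6)]
print(f"Miner damage sum D = {miner_damage(spectrum):.2f}")
```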

  16. Models in biology: ‘accurate descriptions of our pathetic thinking’

    PubMed Central

    2014-01-01

    In this essay I will sketch some ideas for how to think about models in biology. I will begin by trying to dispel the myth that quantitative modeling is somehow foreign to biology. I will then point out the distinction between forward and reverse modeling and focus thereafter on the former. Instead of going into mathematical technicalities about different varieties of models, I will focus on their logical structure, in terms of assumptions and conclusions. A model is a logical machine for deducing the latter from the former. If the model is correct, then, if you believe its assumptions, you must, as a matter of logic, also believe its conclusions. This leads to consideration of the assumptions underlying models. If these are based on fundamental physical laws, then it may be reasonable to treat the model as ‘predictive’, in the sense that it is not subject to falsification and we can rely on its conclusions. However, at the molecular level, models are more often derived from phenomenology and guesswork. In this case, the model is a test of its assumptions and must be falsifiable. I will discuss three models from this perspective, each of which yields biological insights, and this will lead to some guidelines for prospective model builders. PMID:24886484

  17. Multi-representation ability of students on the problem solving physics

    NASA Astrophysics Data System (ADS)

    Theasy, Y.; Wiyanto; Sujarwata

    2018-03-01

    Accuracy in representing knowledge indicates the level of student understanding. The multi-representation ability of students in physics problem solving was investigated through a qualitative grounded theory method and implemented with physics education students of Unnes in the 2016/2017 academic year. The forms of representation used are verbal (V), images/diagrams (D), graphs (G), and mathematical (M). Students in the high and low categories used graphical representation (G) accurately in 83% and 77.78% of cases, respectively, while students in the medium category used image representation (D) accurately in 66% of cases.

  18. Modelling accumulation of marine plastics in the coastal zone; what are the dominant physical processes?

    NASA Astrophysics Data System (ADS)

    Critchell, Kay; Lambrechts, Jonathan

    2016-03-01

    Anthropogenic marine debris, mainly of plastic origin, is accumulating in estuarine and coastal environments around the world causing damage to fauna, flora and habitats. Plastics also have the potential to accumulate in the food web, as well as causing economic losses to tourism and sea-going industries. If we are to manage this increasing threat, we must first understand where debris is accumulating and why these locations are different to others that do not accumulate large amounts of marine debris. This paper demonstrates an advection-diffusion model that includes beaching, settling, resuspension/re-floating, degradation and topographic effects on the wind in nearshore waters to quantify the relative importance of these physical processes governing plastic debris accumulation. The aim of this paper is to prioritise research that will improve modelling outputs in the future. We have found that the physical characteristic of the source location has by far the largest effect on the fate of the debris. The diffusivity, used to parameterise the sub-grid scale movements, and the relationship between debris resuspension/re-floating from beaches and the wind shadow created by high islands also has a dramatic impact on the modelling results. The rate of degradation of macroplastics into microplastics also have a large influence in the result of the modelling. The other processes presented (settling, wind drift velocity) also help determine the fate of debris, but to a lesser degree. These findings may help prioritise research on physical processes that affect plastic accumulation, leading to more accurate modelling, and subsequently management in the future.
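
    A toy Lagrangian random-walk sketch of a few of the processes listed above: advection by a mean current, horizontal diffusion, beaching at a straight coastline, and a constant per-step re-floating probability. The geometry, diffusivity, and probabilities are illustrative assumptions, not the paper's advection-diffusion model.

```python
import numpy as np

rng = np.random.default_rng(3)

def step_particles(x, y, u, v, dt, kh, beached, coast_x=10_000.0,
                   p_refloat=0.01):
    """One random-walk step of a toy debris model: advection by currents
    (u, v), horizontal diffusion with diffusivity kh, beaching when a
    particle crosses a straight 'coast' at x = coast_x, and a small
    probability per step of re-floating. Illustrative only."""
    active = ~beached
    sigma = np.sqrt(2.0 * kh * dt)
    x[active] += u * dt + rng.normal(0, sigma, active.sum())
    y[active] += v * dt + rng.normal(0, sigma, active.sum())
    newly_beached = active & (x >= coast_x)
    beached |= newly_beached
    x[newly_beached] = coast_x
    refloat = beached & (rng.random(x.size) < p_refloat)
    beached &= ~refloat
    return x, y, beached

n = 1000
x, y = np.zeros(n), np.zeros(n)
beached = np.zeros(n, dtype=bool)
for _ in range(200):                       # 200 half-hour steps
    x, y, beached = step_particles(x, y, u=0.1, v=0.02, dt=1800.0,
                                   kh=5.0, beached=beached)
print(f"fraction beached: {beached.mean():.2f}")
```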

  19. On accurate determination of contact angle

    NASA Technical Reports Server (NTRS)

    Concus, P.; Finn, R.

    1992-01-01

    Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.

  20. Modellus: Learning Physics with Mathematical Modelling

    NASA Astrophysics Data System (ADS)

    Teodoro, Vitor

    Computers are now a major tool in research and development in almost all scientific and technological fields. Despite recent developments, this is far from true for learning environments in schools and most undergraduate studies. This thesis proposes a framework for designing curricula where computers, and computer modelling in particular, are a major tool for learning. The framework, based on research on learning science and mathematics and on computer user interface, assumes that: 1) learning is an active process of creating meaning from representations; 2) learning takes place in a community of practice where students learn both from their own effort and from external guidance; 3) learning is a process of becoming familiar with concepts, with links between concepts, and with representations; 4) direct manipulation user interfaces allow students to explore concrete-abstract objects such as those of physics and can be used by students with minimal computer knowledge. Physics is the science of constructing models and explanations about the physical world. And mathematical models are an important type of models that are difficult for many students. These difficulties can be rooted in the fact that most students do not have an environment where they can explore functions, differential equations and iterations as primary objects that model physical phenomena--as objects-to-think-with, reifying the formal objects of physics. The framework proposes that students should be introduced to modelling in a very early stage of learning physics and mathematics, two scientific areas that must be taught in very closely related way, as they were developed since Galileo and Newton until the beginning of our century, before the rise of overspecialisation in science. At an early stage, functions are the main type of objects used to model real phenomena, such as motions. At a later stage, rates of change and equations with rates of change play an important role. This type of equations

  1. A new physical model with multilayer architecture for facial expression animation using dynamic adaptive mesh.

    PubMed

    Zhang, Yu; Prakash, Edmond C; Sung, Eric

    2004-01-01

    This paper presents a new physically-based 3D facial model based on anatomical knowledge which provides high fidelity for facial expression animation while optimizing the computation. Our facial model has a multilayer biomechanical structure, incorporating a physically-based approximation to facial skin tissue, a set of anatomically-motivated facial muscle actuators, and an underlying skull structure. In contrast to existing mass-spring-damper (MSD) facial models, our dynamic skin model uses nonlinear springs to directly simulate the nonlinear visco-elastic behavior of soft tissue, and a new kind of edge repulsion spring is developed to prevent collapse of the skin model. Different types of muscle models have been developed to simulate the distribution of the muscle force applied on the skin due to muscle contraction. The presence of the skull advantageously constrains the skin movements, resulting in more accurate facial deformation, and also guides the interactive placement of facial muscles. The governing dynamics are computed using a local semi-implicit ODE solver. In the dynamic simulation, an adaptive refinement scheme automatically adapts the local resolution, depending on local deformation, wherever potential inaccuracies are detected. The method, in effect, ensures the required speedup by concentrating computational time only where needed while ensuring realistic behavior within a predefined error threshold. This mechanism allows more pleasing animation results to be produced at a reduced computational cost.
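
    A toy scalar version of two ingredients mentioned above: a biphasic (strain-stiffening) spring force standing in for the nonlinear visco-elastic skin springs, and an edge-repulsion term that activates when an edge collapses. The piecewise form, stiffness values, and thresholds are assumptions for illustration only.

```python
import numpy as np

def nonlinear_spring_force(length, rest_length, k_low=1.0, k_high=10.0,
                           strain_knee=0.15):
    """Toy biphasic spring: soft below a strain 'knee', much stiffer
    beyond it, mimicking the strain-stiffening of soft tissue.
    Returns a signed scalar force along the edge (positive = tension)."""
    strain = (length - rest_length) / rest_length
    if abs(strain) <= strain_knee:
        return k_low * strain
    extra = abs(strain) - strain_knee
    return np.sign(strain) * (k_low * strain_knee + k_high * extra)

def edge_repulsion_force(length, rest_length, k_rep=5.0, floor=0.2):
    """Toy edge-repulsion term that pushes back strongly when an edge
    collapses below a fraction 'floor' of its rest length."""
    if length >= floor * rest_length:
        return 0.0
    return k_rep * (floor * rest_length - length) / rest_length

for L in (0.1, 0.8, 0.95, 1.0, 1.1, 1.3):
    print(L, nonlinear_spring_force(L, 1.0), edge_repulsion_force(L, 1.0))
```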

  2. Modeling physical vapor deposition of energetic materials

    DOE PAGES

    Shirvan, Koroush; Forrest, Eric C.

    2018-03-28

    Morphology and microstructure of organic explosive films formed using physical vapor deposition (PVD) processes strongly depend on the local surface temperature during deposition. Currently, there is no accurate means of quantifying the local surface temperature during PVD processes in the deposition chambers. This study focuses on using a multiphysics computational fluid dynamics tool, STARCCM+, to simulate pentaerythritol tetranitrate (PETN) deposition. The PETN vapor and solid phase were simulated using the volume of fluid method, and its deposition in the vacuum chamber on spinning silicon wafers was modeled. The model also included the spinning copper cooling block where the wafers are placed, along with the chiller operating with forced convection refrigerant. Implicit time-dependent simulations in two and three dimensions were performed to derive insights into the governing physics of PETN thin film formation. PETN is deposited at a rate of 14 nm/s at 142.9 °C on a wafer with an initial temperature of 22 °C. The deposition of PETN on the wafers was calculated at an assumed heat transfer coefficient (HTC) of 400 W/m²K. This HTC proved to be the most sensitive parameter in determining the local surface temperature during deposition. Previous experimental work found noticeable microstructural changes with 0.5 mm fused silica wafers in place of silicon during the PETN deposition. This work showed that fused silica slows the initial wafer cool-down and results in a ~10 °C difference in surface temperature at 500 μm PETN film thickness. It was also found that the deposition surface temperature is insensitive to the cooling power of the copper block due to the copper block's very large heat capacity and thermal conductivity relative to the heat input from the PVD process. Future work should incorporate the addition of local stress during PETN deposition. Lastly, based on simulation results, it is also recommended to investigate the impact of wafer
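
    As a rough illustration of why the wafer-to-block heat transfer coefficient dominates the deposition surface temperature (this is not the STAR-CCM+ model itself), a lumped-capacitance energy balance for the wafer, m·c·dT/dt = q_dep·A − h·A·(T − T_block), can be integrated for different assumed HTC values. All material properties, the deposition heat flux, and the block temperature below are illustrative assumptions.

```python
import numpy as np

# Lumped-capacitance sketch of wafer surface temperature during deposition.
# Every number below is an illustrative assumption, not a value from the study.
rho, cp, thickness = 2330.0, 710.0, 500e-6     # silicon density, heat capacity, 0.5 mm wafer
C_area = rho * cp * thickness                  # heat capacity per unit area (J/m^2 K)
q_dep = 2000.0                                 # assumed heat flux from condensing vapor (W/m^2)
T_block = 5.0                                  # assumed cooling-block temperature (deg C)

def surface_temperature(htc, T0=22.0, t_end=600.0, dt=0.1):
    T = T0
    for _ in np.arange(0.0, t_end, dt):
        dTdt = (q_dep - htc * (T - T_block)) / C_area
        T += dTdt * dt
    return T

for htc in (200.0, 400.0):                     # W/m^2 K, bracketing an assumed HTC
    print(f"HTC = {htc:.0f} W/m^2K -> surface temperature ~ {surface_temperature(htc):.1f} C")
```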

  3. Modeling physical vapor deposition of energetic materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shirvan, Koroush; Forrest, Eric C.

    Morphology and microstructure of organic explosive films formed using physical vapor deposition (PVD) processes strongly depend on the local surface temperature during deposition. Currently, there is no accurate means of quantifying the local surface temperature during PVD processes in the deposition chambers. This study focuses on using a multiphysics computational fluid dynamics tool, STARCCM+, to simulate pentaerythritol tetranitrate (PETN) deposition. The PETN vapor and solid phase were simulated using the volume of fluid method, and its deposition in the vacuum chamber on spinning silicon wafers was modeled. The model also included the spinning copper cooling block where the wafers are placed, along with the chiller operating with forced convection refrigerant. Implicit time-dependent simulations in two and three dimensions were performed to derive insights into the governing physics of PETN thin film formation. PETN is deposited at a rate of 14 nm/s at 142.9 °C on a wafer with an initial temperature of 22 °C. The deposition of PETN on the wafers was calculated at an assumed heat transfer coefficient (HTC) of 400 W/m²K. This HTC proved to be the most sensitive parameter in determining the local surface temperature during deposition. Previous experimental work found noticeable microstructural changes with 0.5 mm fused silica wafers in place of silicon during the PETN deposition. This work showed that fused silica slows the initial wafer cool-down and results in a ~10 °C difference in surface temperature at 500 μm PETN film thickness. It was also found that the deposition surface temperature is insensitive to the cooling power of the copper block due to the copper block's very large heat capacity and thermal conductivity relative to the heat input from the PVD process. Future work should incorporate the addition of local stress during PETN deposition. Lastly, based on simulation results, it is also recommended to investigate the impact of wafer

  4. A Two-Phase Space Resection Model for Accurate Topographic Reconstruction from Lunar Imagery with Pushbroom Scanners

    PubMed Central

    Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen

    2016-01-01

    Exterior orientation parameters’ (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) for space resection are error-prone. Thus, existing models fail to produce reliable EOPs. Motivated by the finding that, for push broom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, using only the geographic data at the GCPs (which are already provided), we divide the modeling of space resection into two phases. Firstly, we estimate the angular rotations based on the reliable geographic data using our proposed mathematical model. Then, with the accurate angular rotations, the collinear equations for space resection are simplified into a linear problem, and the global optimal solution for the spatial position of the EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data, increasing the error tolerance. Experimental results show that our model can obtain more accurate EOPs and topographic maps not only for the simulated data, but also for the real data from Chang’E-1, compared to the existing space resection model. PMID:27077855
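
    A minimal sketch of the second, linear phase under simplifying assumptions (a single exposure with one known rotation matrix, rather than the per-scanline geometry of a real pushbroom sensor): once the rotation is fixed, each GCP constrains the perspective centre to lie on a known ray, and the centre follows from linear least squares. The data below are synthetic.

```python
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def solve_position(R, gcps, rays_cam):
    """Recover the perspective centre C by linear least squares, given a known
    rotation R (camera-to-world taken as R.T here, an assumption) and, for each
    GCP, its ground coordinates X and the unit ray direction d in the camera frame.
    Each GCP gives the constraint (X - C) x u = 0 with u = R.T @ d."""
    A, b = [], []
    for X, d in zip(gcps, rays_cam):
        u = R.T @ d
        A.append(skew(u))            # skew(u) @ C = u x X  <=>  u x (C - X) = 0
        b.append(np.cross(u, X))
    A, b = np.vstack(A), np.hstack(b)
    C, *_ = np.linalg.lstsq(A, b, rcond=None)
    return C

# Synthetic check: a camera at a known centre, identity rotation for simplicity.
C_true = np.array([10.0, -5.0, 100.0])
R = np.eye(3)
gcps = [np.array(p, float) for p in [(0, 0, 0), (50, 20, 5), (-30, 40, -2), (15, -25, 3)]]
rays_cam = [(X - C_true) / np.linalg.norm(X - C_true) for X in gcps]  # ideal observations
print(solve_position(R, gcps, rays_cam))   # ~ [10. -5. 100.]
```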

  5. A Two-Phase Space Resection Model for Accurate Topographic Reconstruction from Lunar Imagery with Pushbroom Scanners.

    PubMed

    Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen

    2016-04-11

    Exterior orientation parameters' (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) for space resection are error-prone. Thus, existing models fail to produce reliable EOPs. Motivated by the finding that, for push broom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, using only the geographic data at the GCPs (which are already provided), we divide the modeling of space resection into two phases. Firstly, we estimate the angular rotations based on the reliable geographic data using our proposed mathematical model. Then, with the accurate angular rotations, the collinear equations for space resection are simplified into a linear problem, and the global optimal solution for the spatial position of the EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data, increasing the error tolerance. Experimental results show that our model can obtain more accurate EOPs and topographic maps not only for the simulated data, but also for the real data from Chang'E-1, compared to the existing space resection model.

  6. Dilution physics modeling: Dissolution/precipitation chemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onishi, Y.; Reid, H.C.; Trent, D.S.

    This report documents progress made to date on integrating dilution/precipitation chemistry and new physical models into the TEMPEST thermal-hydraulics computer code. Implementation of dissolution/precipitation chemistry models is necessary for predicting nonhomogeneous, time-dependent, physical/chemical behavior of tank wastes with and without a variety of possible engineered remediation and mitigation activities. Such behavior includes chemical reactions, gas retention, solids resuspension, solids dissolution and generation, solids settling/rising, and convective motion of physical and chemical species. Thus, this model development is important from the standpoint of predicting the consequences of various engineered activities, such as mitigation by dilution, retrieval, or pretreatment, that can affect safe operations. The integration of a dissolution/precipitation chemistry module allows the various phase species concentrations to enter into the physical calculations that affect the TEMPEST hydrodynamic flow calculations. The yield strength model of non-Newtonian sludge correlates yield to a power function of solids concentration. Likewise, shear stress is concentration-dependent, and the dissolution/precipitation chemistry calculations develop the species concentration evolution that produces fluid flow resistance changes. Dilution of waste with pure water, molar concentrations of sodium hydroxide, and other chemical streams can be analyzed for the reactive species changes and hydrodynamic flow characteristics.
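
    A minimal sketch of how such a concentration-dependent rheology enters the flow calculation is given below; the power-law coefficients and the Bingham-style stress expression are illustrative assumptions, not the actual TEMPEST correlation constants.

```python
# Illustrative power-law yield strength correlation, tau_y = a * C**b,
# feeding a simple Bingham-type shear stress. Coefficients are assumptions.
def yield_strength(solids_fraction, a=50.0, b=2.5):
    """Yield stress (Pa) as a power function of the solids volume fraction."""
    return a * solids_fraction**b

def shear_stress(solids_fraction, shear_rate, plastic_viscosity=0.05):
    """Bingham-style stress: yield stress plus a viscous contribution (Pa)."""
    return yield_strength(solids_fraction) + plastic_viscosity * shear_rate

for C in (0.1, 0.2, 0.3):   # dilution lowers C and hence the flow resistance
    print(f"C = {C:.1f}: tau_y = {yield_strength(C):.3f} Pa, "
          f"tau(10 1/s) = {shear_stress(C, 10.0):.3f} Pa")
```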

  7. Composing Models of Geographic Physical Processes

    NASA Astrophysics Data System (ADS)

    Hofer, Barbara; Frank, Andrew U.

    Processes are central for geographic information science; yet geographic information systems (GIS) lack capabilities to represent process related information. A prerequisite to including processes in GIS software is a general method to describe geographic processes independently of application disciplines. This paper presents such a method, namely a process description language. The vocabulary of the process description language is derived formally from mathematical models. Physical processes in geography can be described in two equivalent languages: partial differential equations or partial difference equations, where the latter can be shown graphically and used as a method for application specialists to enter their process models. The vocabulary of the process description language comprises components for describing the general behavior of prototypical geographic physical processes. These process components can be composed into basic models of geographic physical processes, which is shown by means of an example.
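
    To make the equivalence concrete (an illustration, not the paper's process description language itself), a diffusion-type geographic process governed by the PDE ∂u/∂t = D ∂²u/∂x² becomes the partial difference equation u_i(t+1) = u_i(t) + D·Δt·(u_{i+1}(t) − 2u_i(t) + u_{i−1}(t))/Δx², which the short sketch below iterates on a one-dimensional grid.

```python
import numpy as np

# Partial difference form of a 1-D diffusion process (illustrative values).
D, dx, dt = 1.0, 1.0, 0.2           # dt <= dx^2 / (2 D) keeps the scheme stable
u = np.zeros(50)
u[25] = 100.0                        # an initial concentration spike

for _ in range(200):
    u[1:-1] += D * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

print(u.round(2))                    # the spike has spread into a smooth bump
```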

  8. Accurate Determination of the Values of Fundamental Physical Constants: The Basis of the New "Quantum" SI Units

    NASA Astrophysics Data System (ADS)

    Karshenboim, S. G.

    2018-03-01

    standards (such as the International prototype of the kilogram) and the isotopic composition of substances involved in precision studies in general (as standard measures for the triple point of water) and, in particular, in the determination of the fundamental constants are discussed. The perspectives of the introduction of the new quantum units, which will be free from the mentioned problems, are considered. Many physicists feel no sympathy for the International system of units (SI), believing that it does not properly reflect the character of physical laws. In fact, there are three parallel systems, namely the systems of quantities, system of their units and the related standards. The definition of the units, in particular, the SI units, above all, reflects our ability to perform precision measurements of physical values under certain conditions, in particular, to create appropriate standards. This requirement is not related to the beauty of fundamental laws of nature. More accurate determination of the fundamental constants is one of the areas where we accumulate such experience.

  9. A physics department's role in preparing physics teachers: The Colorado learning assistant model

    NASA Astrophysics Data System (ADS)

    Otero, Valerie; Pollock, Steven; Finkelstein, Noah

    2010-11-01

    In response to substantial evidence that many U.S. students are inadequately prepared in science and mathematics, we have developed an effective and adaptable model that improves the education of all students in introductory physics and increases the numbers of talented physics majors becoming certified to teach physics. We report on the Colorado Learning Assistant model and discuss its effectiveness at a large research university. Since its inception in 2003, we have increased the pool of well-qualified K-12 physics teachers by a factor of approximately three, engaged scientists significantly in the recruiting and preparation of future teachers, and improved the introductory physics sequence so that students' learning gains are typically double the traditional average.

  10. Evaluating a Model of Youth Physical Activity

    ERIC Educational Resources Information Center

    Heitzler, Carrie D.; Lytle, Leslie A.; Erickson, Darin J.; Barr-Anderson, Daheia; Sirard, John R.; Story, Mary

    2010-01-01

    Objective: To explore the relationship between social influences, self-efficacy, enjoyment, and barriers and physical activity. Methods: Structural equation modeling examined relationships between parent and peer support, parent physical activity, individual perceptions, and objectively measured physical activity using accelerometers among a…

  11. Coarsening of physics for biogeochemical model in NEMO

    NASA Astrophysics Data System (ADS)

    Bricaud, Clement; Le Sommer, Julien; Madec, Gurvan; Deshayes, Julie; Chanut, Jerome; Perruche, Coralie

    2017-04-01

    Ocean mesoscale and submesoscale turbulence contribute to ocean tracer transport and to shaping the distribution of ocean biogeochemical tracers. Representing tracer transport adequately in ocean models therefore requires increasing model resolution so that the impact of ocean turbulence is properly accounted for. But due to supercomputer power and storage limitations, global biogeochemical models are not yet run routinely at eddying resolution. Still, because the "effective resolution" of eddying ocean models is much coarser than the physical model grid resolution, tracer transport can be reconstructed to a large extent by computing tracer transport and diffusion on a grid whose resolution is close to the effective resolution of the physical model. This observation has motivated the implementation of a new capability in the NEMO ocean model (http://www.nemo-ocean.eu/) that allows the physical model and the tracer transport model to be run at different grid resolutions. First, we present results obtained with this new capability applied to a synthetic age tracer in a global eddying model configuration. In this configuration, ocean dynamics are computed at ¼° resolution but tracer transport is computed at ¾° resolution. The solution is compared to two reference setups, one at ¼° resolution and one at ¾° resolution for both the physics and the passive tracer model. We discuss possible options for defining the vertical diffusivity coefficient for the tracer transport model based on information from the high-resolution grid, and describe the impact of this choice on the distribution and on the penetration of the age tracer. Second, we present results obtained by coupling the physics with the biogeochemical model PISCES, and look at the impact of the methodology on the distribution and dynamics of some tracers. The method described here can find applications in ocean forecasting, such as the Copernicus Marine
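
    A minimal illustration of the kind of grid coarsening involved (not NEMO's actual implementation): a tracer field defined on a ¼° grid is block-averaged onto a ¾° grid, each coarse cell being the mean of a 3x3 block of fine cells. A conservative remapping would additionally weight by cell areas; uniform weights are assumed here.

```python
import numpy as np

def coarsen(field, factor=3):
    """Block-average a 2-D tracer field by an integer factor (uniform weights assumed;
    a conservative remapping would weight by cell areas)."""
    ny, nx = field.shape
    assert ny % factor == 0 and nx % factor == 0
    return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

fine = np.random.rand(12, 18)      # stand-in for a 1/4-degree tracer field
coarse = coarsen(fine)             # 3/4-degree field, shape (4, 6)
print(fine.shape, "->", coarse.shape)
```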

  12. Impacts of spectral nudging on the simulated surface air temperature in summer compared with the selection of shortwave radiation and land surface model physics parameterization in a high-resolution regional atmospheric model

    NASA Astrophysics Data System (ADS)

    Park, Jun; Hwang, Seung-On

    2017-11-01

    The impact of a spectral nudging technique for the dynamical downscaling of the summer surface air temperature in a high-resolution regional atmospheric model is assessed. The performance of this technique is measured by comparing 16 analysis-driven simulation sets combining two shortwave radiation and four land surface model schemes of the model, which are known to be crucial for the simulation of the surface air temperature. It is found that the application of spectral nudging to the outermost domain has a greater impact on the regional climate than any combination of shortwave radiation and land surface model physics schemes. The optimal choice of these two model physics parameterizations is helpful for obtaining more realistic spatiotemporal distributions of land surface variables such as the surface air temperature, precipitation, and surface fluxes. However, employing spectral nudging adds more value to the results; the improvement is greater than that obtained by using sophisticated shortwave radiation and land surface model physical parameterizations. This result indicates that spectral nudging applied to the outermost domain provides a more accurate lateral boundary condition to the innermost domain when forced by analysis data, by securing consistency with the large-scale forcing over the regional domain. This, in turn, indirectly helps the two physical parameterizations produce small-scale features closer to the observed values, leading to a better representation of the surface air temperature in the high-resolution downscaled climate.
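
    For readers unfamiliar with the technique, spectral nudging relaxes only the largest scales of the regional model toward the driving analysis, leaving the small scales free. The sketch below is schematic (the cutoff wavenumber and relaxation coefficient are assumptions, and a real implementation nudges selected variables, typically above the boundary layer), not the model's actual code.

```python
import numpy as np

def spectral_nudge(field, analysis, k_max=3, alpha=0.1):
    """Relax wavenumbers <= k_max of `field` toward `analysis`; leave small scales untouched."""
    f_hat = np.fft.fft2(field)
    a_hat = np.fft.fft2(analysis)
    ky = np.fft.fftfreq(field.shape[0]) * field.shape[0]
    kx = np.fft.fftfreq(field.shape[1]) * field.shape[1]
    large_scale = (np.abs(ky)[:, None] <= k_max) & (np.abs(kx)[None, :] <= k_max)
    f_hat[large_scale] += alpha * (a_hat[large_scale] - f_hat[large_scale])
    return np.real(np.fft.ifft2(f_hat))

# Toy usage: nudge a noisy regional field toward a smooth large-scale analysis.
y, x = np.meshgrid(np.linspace(0, 2 * np.pi, 64), np.linspace(0, 2 * np.pi, 64), indexing="ij")
analysis = np.sin(y) + np.cos(x)
regional = analysis + 0.5 * np.random.randn(64, 64)
nudged = spectral_nudge(regional, analysis)
```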

  13. Parameterized reduced-order models using hyper-dual numbers.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fike, Jeffrey A.; Brake, Matthew Robert

    2013-10-01

    The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
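
    As a brief illustration of the core idea (generic hyper-dual arithmetic, not the report's Craig-Bampton implementation), a hyper-dual number x + a·ε₁ + b·ε₂ + c·ε₁ε₂ with ε₁² = ε₂² = 0 propagates exact first and second derivatives through ordinary arithmetic: evaluating f at x + ε₁ + ε₂ returns f(x), f′(x), and f″(x) with no truncation or subtractive-cancellation error.

```python
import math
from dataclasses import dataclass

@dataclass
class HyperDual:
    re: float          # real part
    e1: float = 0.0    # coefficient of eps1 -> first derivative
    e2: float = 0.0    # coefficient of eps2 -> first derivative
    e12: float = 0.0   # coefficient of eps1*eps2 -> second derivative

    def __add__(self, o):
        return HyperDual(self.re + o.re, self.e1 + o.e1, self.e2 + o.e2, self.e12 + o.e12)

    def __mul__(self, o):
        return HyperDual(self.re * o.re,
                         self.re * o.e1 + self.e1 * o.re,
                         self.re * o.e2 + self.e2 * o.re,
                         self.re * o.e12 + self.e1 * o.e2 + self.e2 * o.e1 + self.e12 * o.re)

def hd_sin(h):
    s, c = math.sin(h.re), math.cos(h.re)
    return HyperDual(s, c * h.e1, c * h.e2, c * h.e12 - s * h.e1 * h.e2)

# f(x) = x*x + sin(x): evaluate at x + eps1 + eps2 to get f, f', f'' exactly.
x = HyperDual(0.7, 1.0, 1.0, 0.0)
f = x * x + hd_sin(x)
print(f.re, f.e1, f.e2, f.e12)   # f(0.7), f'(0.7) (twice), f''(0.7)
```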

  14. Accurate prediction of energy expenditure using a shoe-based activity monitor.

    PubMed

    Sazonova, Nadezhda; Browning, Raymond C; Sazonov, Edward

    2011-07-01

    The aim of this study was to develop and validate a method for predicting energy expenditure (EE) using a footwear-based system with integrated accelerometer and pressure sensors. We developed a footwear-based device with an embedded accelerometer and insole pressure sensors for the prediction of EE. The data from the device can be used to perform accurate recognition of major postures and activities and to estimate EE using the acceleration, pressure, and posture/activity classification information in a branched algorithm without the need for individual calibration. We measured EE via indirect calorimetry as 16 adults (body mass index = 19-39 kg·m⁻²) performed various low- to moderate-intensity activities and compared measured versus predicted EE using several models based on the acceleration and pressure signals. Inclusion of pressure data resulted in better accuracy of EE prediction during static postures such as sitting and standing. The activity-based branched model that included predictors from accelerometer and pressure sensors (BACC-PS) achieved the lowest error (e.g., root mean squared error (RMSE)=0.69 METs) compared with the accelerometer-only-based branched model BACC (RMSE=0.77 METs) and nonbranched model (RMSE=0.94-0.99 METs). Comparison of EE prediction models using data from both legs versus models using data from a single leg indicates that only one shoe needs to be equipped with sensors. These results suggest that foot acceleration combined with insole pressure measurement, when used in an activity-specific branched model, can accurately estimate the EE associated with common daily postures and activities. The accuracy and unobtrusiveness of a footwear-based device may make it an effective physical activity monitoring tool.
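
    A schematic of the branched-model idea follows (structure only: the posture threshold and regression coefficients are made-up placeholders, not the published BACC-PS coefficients). Pressure and acceleration features first select a posture/activity branch, and a branch-specific linear model then predicts METs.

```python
def predict_mets(accel_counts, insole_pressure_ratio):
    """Branched EE prediction sketch. Thresholds and coefficients are illustrative
    placeholders, not the coefficients reported for the BACC-PS model."""
    if accel_counts < 50:                       # low motion: use pressure to split postures
        if insole_pressure_ratio > 0.6:         # weight on the feet -> standing-like branch
            return 1.5 + 0.002 * accel_counts
        return 1.0 + 0.001 * accel_counts       # weight off the feet -> sitting-like branch
    # active branch: acceleration dominates the prediction
    return 1.8 + 0.004 * accel_counts

for counts, pressure in [(10, 0.2), (10, 0.8), (400, 0.9)]:
    print(counts, pressure, "->", round(predict_mets(counts, pressure), 2), "METs")
```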

  15. Supersymmetry and Kaon physics

    NASA Astrophysics Data System (ADS)

    Yamamoto, Kei

    2017-01-01

    Kaon physics has played an essential role in testing the Standard Model and in searching for new physics with measurements of CP violation and rare decays. Current progress in lattice calculations enables us to predict kaon observables accurately, especially the direct CP violation parameter ε'/ε, for which there is a discrepancy from the experimental data at the 2.9 σ level. On the experimental side, the rare kaon decays K_L → π⁰νν̄ and K⁺ → π⁺νν̄ are being measured at the SM accuracy by KOTO at J-PARC and NA62 at CERN. These kaon observables are good probes for new physics. We study supersymmetric effects, namely the chargino and gluino contributions to the Z penguin, in these kaon observables.

  16. Propulsion Physics Under the Changing Density Field Model

    NASA Technical Reports Server (NTRS)

    Robertson, Glen A.

    2011-01-01

    To grow as a spacefaring race, future spaceflight systems will require new propulsion physics: specifically, a propulsion physics model that does not require mass ejection yet does not limit the high thrust necessary to accelerate within or beyond our solar system and return within a normal work period or lifetime. In 2004 Khoury and Weltman produced a density-dependent cosmology theory they called Chameleon Cosmology, as, by its nature, it is hidden within known physics. This theory represents a scalar field within and about an object, even in the vacuum. These scalar fields can be viewed as vacuum energy fields with definable densities that permeate all matter, having implications for dark matter/energy and universe acceleration properties, and implying a new force mechanism for propulsion physics. Using Chameleon Cosmology, the author has developed a new propulsion physics model, called the Changing Density Field (CDF) Model. This model relates to changes in these density fields, where changes in the density of the fields are related to the acceleration of matter within an object. These density changes in turn change how an object couples to the surrounding density fields. Thrust is achieved by causing a differential in the coupling to these density fields about an object. Since the model indicates that the density of the density field in an object can be changed by internal mass acceleration, even without exhausting mass, the CDF model implies a new propellant-less propulsion physics model.

  17. Tactile Teaching: Exploring Protein Structure/Function Using Physical Models

    ERIC Educational Resources Information Center

    Herman, Tim; Morris, Jennifer; Colton, Shannon; Batiza, Ann; Patrick, Michael; Franzen, Margaret; Goodsell, David S.

    2006-01-01

    The technology now exists to construct physical models of proteins based on atomic coordinates of solved structures. We review here our recent experiences in using physical models to teach concepts of protein structure and function at both the high school and the undergraduate levels. At the high school level, physical models are used in a…

  18. Accurate estimation of sigma(exp 0) using AIRSAR data

    NASA Technical Reports Server (NTRS)

    Holecz, Francesco; Rignot, Eric

    1995-01-01

    During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.
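
    One common, textbook-style way to express the topographic part of such a correction (not necessarily the exact AIRSAR processing chain used in the paper) is to project the radar brightness to sigma(exp 0) with the DEM-derived local incidence angle, sigma0 = beta0·sin(theta_loc):

```python
import numpy as np

def sigma0_from_beta0(beta0_db, local_incidence_deg):
    """Project radar brightness (beta nought, dB) to the backscattering coefficient
    sigma nought (dB) using the DEM-derived local incidence angle.
    Simple area projection only; the full processing also corrects the antenna
    gain pattern per resolution cell."""
    theta = np.radians(local_incidence_deg)
    return beta0_db + 10.0 * np.log10(np.sin(theta))

# Same measured brightness, different local slopes -> different sigma0 estimates.
for theta_loc in (20.0, 45.0, 70.0):
    print(f"theta_loc = {theta_loc:4.1f} deg -> sigma0 = {sigma0_from_beta0(-6.0, theta_loc):6.2f} dB")
```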

  19. A Critical Review for Developing Accurate and Dynamic Predictive Models Using Machine Learning Methods in Medicine and Health Care.

    PubMed

    Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer

    2017-04-01

    Recently, Artificial Intelligence (AI) has been used widely in the medicine and health care sector. In machine learning, classification or prediction is a major field of AI. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions for the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care have been critically reviewed. Furthermore, the most famous machine learning methods have been explained, and the confusion between a statistical approach and machine learning has been clarified. A review of related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.

  20. Pre-Service Physics Teachers' Knowledge of Models and Perceptions of Modelling

    ERIC Educational Resources Information Center

    Ogan-Bekiroglu, Feral

    2006-01-01

    One of the purposes of this study was to examine the differences between knowledge of pre-service physics teachers who experienced model-based teaching in pre-service education and those who did not. Moreover, it was aimed to determine pre-service physics teachers' perceptions of modelling. Posttest-only control group experimental design was used…

  1. A physically-based continuum damage mechanics model for numerical prediction of damage growth in laminated composite plates

    NASA Astrophysics Data System (ADS)

    Williams, Kevin Vaughan

    Rapid growth in use of composite materials in structural applications drives the need for a more detailed understanding of damage tolerant and damage resistant design. Current analytical techniques provide sufficient understanding and predictive capabilities for application in preliminary design, but current numerical models applicable to composites are few and far between, and their development into well-tested, rigorous material models is currently one of the most challenging fields in composite materials. The present work focuses on the development, implementation, and verification of a plane-stress continuum damage mechanics-based model for composite materials. A physical treatment of damage growth based on the extensive body of experimental literature on the subject is combined with the mathematical rigour of a continuum damage mechanics description to form the foundation of the model. The model has been implemented in the LS-DYNA3D commercial finite element hydrocode and the results of the application of the model are shown to be physically meaningful and accurate. Furthermore, it is demonstrated that the material characterization parameters can be extracted from the results of standard test methodologies for which a large body of published data already exists for many materials. Two case studies are undertaken to verify the model by comparison with measured experimental data. The first series of analyses demonstrates the ability of the model to predict the extent and growth of damage in T800/3900-2 carbon fibre reinforced polymer (CFRP) plates subjected to normal impacts over a range of impact energy levels. The predicted force-time and force-displacement response of the panels compare well with experimental measurements. The damage growth and stiffness reduction properties of the T800/3900-2 CFRP are derived using published data from a variety of sources without the need for parametric studies. To further demonstrate the physical nature of the model, an IM6

  2. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pino, Francisco; Roé, Nuria; Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es

    2015-02-15

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of the spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to the intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and

  3. Do dual-route models accurately predict reading and spelling performance in individuals with acquired alexia and agraphia?

    PubMed

    Rapcsak, Steven Z; Henry, Maya L; Teague, Sommer L; Carnahan, Susan D; Beeson, Pélagie M

    2007-06-18

    Coltheart and co-workers [Castles, A., Bates, T. C., & Coltheart, M. (2006). John Marshall and the developmental dyslexias. Aphasiology, 20, 871-892; Coltheart, M., Rastle, K., Perry, C., Langdon, R., & Ziegler, J. (2001). DRC: A dual route cascaded model of visual word recognition and reading aloud. Psychological Review, 108, 204-256] have demonstrated that an equation derived from dual-route theory accurately predicts reading performance in young normal readers and in children with reading impairment due to developmental dyslexia or stroke. In this paper, we present evidence that the dual-route equation and a related multiple regression model also accurately predict both reading and spelling performance in adult neurological patients with acquired alexia and agraphia. These findings provide empirical support for dual-route theories of written language processing.

  4. Fast and accurate computation of system matrix for area integral model-based algebraic reconstruction technique

    NASA Astrophysics Data System (ADS)

    Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua

    2014-11-01

    Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate than the line integral model (LIM) and yields better reconstruction quality. However, the computation of the system matrix for AIM is more complex and time-consuming than that for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersection area into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Overall, experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computation of the system matrix. For one iteration, the reconstruction speed of our AIM-based ART is also faster than that of the LIM-based ART using the Siddon algorithm and of the DDM-based ART. The fast reconstruction speed of our method was accomplished without compromising the image quality.
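
    For context, a sketch of the classical ART update that consumes such a system matrix is given below (this is the generic Kaczmarz sweep, not the authors' area-integral system-matrix construction): once A is available, ART cycles over the measured projections b and corrects the image x one ray at a time.

```python
import numpy as np

def art(A, b, n_iter=10, relax=0.5):
    """Classical ART / Kaczmarz sweep: x <- x + relax * (b_i - a_i.x) / ||a_i||^2 * a_i.
    A is the (rays x pixels) system matrix, however it was computed (LIM, AIM, DDM...)."""
    x = np.zeros(A.shape[1])
    row_norms = np.einsum("ij,ij->i", A, A)
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            if row_norms[i] == 0.0:
                continue
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Tiny synthetic check: recover a 4-pixel "image" from 6 random ray sums.
rng = np.random.default_rng(0)
A = rng.random((6, 4))
x_true = np.array([1.0, 0.0, 2.0, 0.5])
print(art(A, A @ x_true, n_iter=200).round(3))   # close to x_true
```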

  5. Reverse engineering physical models employing a sensor integration between 3D stereo detection and contact digitization

    NASA Astrophysics Data System (ADS)

    Chen, Liang-Chia; Lin, Grier C. I.

    1997-12-01

    A vision-driven automatic digitization process for free-form surface reconstruction has been developed for reverse engineering physical models, using a coordinate measurement machine (CMM) equipped with a touch-triggered probe and a CCD camera. The process integrates 3D stereo detection, data filtering, Delaunay triangulation, and adaptive surface digitization into a single process of surface reconstruction. By using this innovative approach, surface reconstruction can be implemented automatically and accurately. Least-squares B-spline surface models with a controlled digitization accuracy can be generated for further application in product design and manufacturing processes. One industrial application indicates that this approach is feasible, and that the processing time required in the reverse engineering process can be reduced by more than 85%.

  6. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior and we can use the λ parameter to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment were discussed. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
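
    A minimal sketch of fitting a saccharification time course to a Weibull form is given below; the functional form y(t) = y_max·(1 − exp(−(t/λ)ⁿ)) is assumed here as one common way to write it, and the data points are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull(t, y_max, lam, n):
    """Weibull-type saccharification curve; lam is the characteristic time."""
    return y_max * (1.0 - np.exp(-(t / lam) ** n))

# Illustrative time course (hours vs. glucose yield fraction), not data from the paper.
t = np.array([2, 6, 12, 24, 48, 72, 96], dtype=float)
y = np.array([0.08, 0.22, 0.38, 0.58, 0.75, 0.82, 0.85])

(p_ymax, p_lam, p_n), _ = curve_fit(weibull, t, y, p0=[0.9, 24.0, 1.0])
print(f"y_max = {p_ymax:.2f}, characteristic time lambda = {p_lam:.1f} h, shape n = {p_n:.2f}")
```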

  7. Physics based modeling of axial compressor stall

    NASA Astrophysics Data System (ADS)

    Zaki, Mina Adel

    2009-12-01

    Axial compressors are used in a wide variety of aerodynamic applications and are one of the most important components in aero-engines. However, the operability of compressors is limited at low mass flow rates by fluid dynamic instabilities such as stall and surge. These instabilities can lead to engine failure and loss of engine power, which can compromise aircraft safety and reliability. Thus, a better understanding of how stall occurs and the causes behind its inception is extremely important. In the vicinity of the stall line, the flow field is inherently unsteady due to the interactions between adjacent rows of blades, formation of separation cells, and the viscous effects including shock-boundary layer interactions. Accurate modeling of these phenomena requires a proper set of stable and accurate boundary conditions at the rotor-stator interface that conserve mass, momentum, and energy, while eliminating false reflections. As a part of this research effort, an existing 3-D Navier-Stokes analysis for modeling single stage compressors has been modified to model multi-stage axial compressors and turbines. Several rotor-stator interface boundary conditions have been implemented. These conditions have been evaluated for the first stage (a stator and a rotor) of the two-stage fuel turbine on the space shuttle main engine (SSME). Their effectiveness in conserving global properties such as mass, momentum, and energy across the interface while yielding good performance predictions has been evaluated. While all the methods gave satisfactory results, a characteristic based approach and an unsteady sliding mesh approach are found to work best. Accurate modeling of the formation of stall cells requires the use of advanced turbulence models. As a part of this effort, a new advanced turbulence model called the Hybrid RANS/KES (HRKES) model has been developed and implemented. This model solves Menter's k-ω SST model near walls and switches to the Kinetic Eddy

  8. Accurate modeling and inversion of electrical resistivity data in the presence of metallic infrastructure with known location and dimension

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Timothy C.; Wellman, Dawn M.

    2015-06-26

    Electrical resistivity tomography (ERT) has been widely used in environmental applications to study processes associated with subsurface contaminants and contaminant remediation. Anthropogenic alterations in subsurface electrical conductivity associated with contamination often originate from highly industrialized areas with significant amounts of buried metallic infrastructure. The deleterious influence of such infrastructure on imaging results generally limits the utility of ERT where it might otherwise prove useful for subsurface investigation and monitoring. In this manuscript we present a method of accurately modeling the effects of buried conductive infrastructure within the forward modeling algorithm, thereby removing them from the inversion results. The method is implemented in parallel using immersed interface boundary conditions, whereby the global solution is reconstructed from a series of well-conditioned partial solutions. Forward modeling accuracy is demonstrated by comparison with analytic solutions. Synthetic imaging examples are used to investigate imaging capabilities within a subsurface containing electrically conductive buried tanks, transfer piping, and well casing, using both well casings and vertical electrode arrays as current sources and potential measurement electrodes. Results show that, although accurate infrastructure modeling removes the dominating influence of buried metallic features, the presence of metallic infrastructure degrades imaging resolution compared to standard ERT imaging. However, accurate imaging results may be obtained if electrodes are appropriately located.

  9. Tidal Simulations of an Incised-Valley Fluvial System with a Physics-Based Geologic Model

    NASA Astrophysics Data System (ADS)

    Ghayour, K.; Sun, T.

    2012-12-01

    Physics-based geologic modeling approaches use fluid flow in conjunction with sediment transport and deposition models to devise evolutionary geologic models that focus on underlying physical processes and attempt to resolve them at pertinent spatial and temporal scales. Physics-based models are particularly useful when the evolution of a depositional system is driven by the interplay of autogenic processes and their response to allogenic controls. This interplay can potentially create complex reservoir architectures with high permeability sedimentary bodies bounded by a hierarchy of shales that can effectively impede flow in the subsurface. The complex stratigraphy of tide-influenced fluvial systems is an example of such co-existing and interacting environments of deposition. The focus of this talk is a novel formulation of boundary conditions for hydrodynamics-driven models of sedimentary systems. In tidal simulations, a time-accurate boundary treatment is essential for proper imposition of tidal forcing and fluvial inlet conditions where the flow may be reversed at times within a tidal cycle. As such, the boundary treatment at the inlet has to accommodate a smooth transition from inflow to outflow and vice-versa without creating numerical artifacts. Our numerical experiments showed that boundary condition treatments based on a local (frozen) one-dimensional approach along the boundary normal, which does not account for the variation of flow quantities in the tangential direction, often lead to unsatisfactory results corrupted by numerical artifacts. In this talk, we propose a new boundary treatment that retains all spatial and temporal terms in the model and as such is capable of accounting for nonlinearities and sharp variations of model variables near boundaries. The proposed approach borrows heavily from the idea set forth by J. Sesterhenn for compressible Navier-Stokes equations. The methodology is successfully applied to a tide-influenced incised

  10. Physically-Derived Dynamical Cores in Atmospheric General Circulation Models

    NASA Technical Reports Server (NTRS)

    Rood, Richard B.; Lin, Shian-Jiann

    1999-01-01

    The algorithm chosen to represent the advection in atmospheric models is often used as the primary attribute to classify the model. Meteorological models are generally classified as spectral or grid point, with the term grid point implying discretization using finite differences. These traditional approaches have a number of shortcomings that render them non-physical. That is, they provide approximate solutions to the conservation equations that do not obey the fundamental laws of physics. The most commonly discussed shortcomings are overshoots and undershoots which manifest themselves most overtly in the constituent continuity equation. For this reason many climate models have special algorithms to model water vapor advection. This talk focuses on the development of an atmospheric general circulation model which uses a consistent physically-based advection algorithm in all aspects of the model formulation. The shallow-water model is generalized to three dimensions and combined with the physics parameterizations of NCAR's Community Climate Model. The scientific motivation for the development is to increase the integrity of the underlying fluid dynamics so that the physics terms can be more effectively isolated, examined, and improved. The expected benefits of the new model are discussed and results from the initial integrations will be presented.

  11. Physically-Derived Dynamical Cores in Atmospheric General Circulation Models

    NASA Technical Reports Server (NTRS)

    Rood, Richard B.; Lin, Shian-Jiann

    1999-01-01

    The algorithm chosen to represent the advection in atmospheric models is often used as the primary attribute to classify the model. Meteorological models are generally classified as spectral or grid point, with the term grid point implying discretization using finite differences. These traditional approaches have a number of shortcomings that render them non-physical. That is, they provide approximate solutions to the conservation equations that do not obey the fundamental laws of physics. The most commonly discussed shortcomings are overshoots and undershoots which manifest themselves most overtly in the constituent continuity equation. For this reason many climate models have special algorithms to model water vapor advection. This talk focuses on the development of an atmospheric general circulation model which uses a consistent physically-based advection algorithm in all aspects of the model formulation. The shallow-water model of Lin and Rood (QJRMS, 1997) is generalized to three dimensions and combined with the physics parameterizations of NCAR's Community Climate Model. The scientific motivation for the development is to increase the integrity of the underlying fluid dynamics so that the physics terms can be more effectively isolated, examined, and improved. The expected benefits of the new model are discussed and results from the initial integrations will be presented.

  12. Making it Easy to Construct Accurate Hydrological Models that Exploit High Performance Computers (Invited)

    NASA Astrophysics Data System (ADS)

    Kees, C. E.; Farthing, M. W.; Terrel, A.; Certik, O.; Seljebotn, D.

    2013-12-01

    This presentation will focus on two barriers to progress in the hydrological modeling community, and on research and development conducted to lessen or eliminate them. The first is a barrier to sharing hydrological models among specialized scientists that is caused by intertwining the implementation of numerical methods with the implementation of abstract numerical modeling information. In the Proteus toolkit for computational methods and simulation, we have decoupled these two important parts of the computational model through separate "physics" and "numerics" interfaces. More recently, we have begun developing the Strong Form Language for easy and direct representation of the mathematical model formulation in a domain-specific language embedded in Python. The second major barrier is sharing ANY scientific software tools that have complex library or module dependencies, as most parallel, multi-physics hydrological models must have. In this setting, users and developers are dependent on an entire software distribution, possibly involving multiple compilers and special instructions that depend on the environment of the target machine. To solve these problems we have developed hashdist, a stateless package management tool, and a resulting portable, open source scientific software distribution.

  13. A Physically Based Coupled Chemical and Physical Weathering Model for Simulating Soilscape Evolution

    NASA Astrophysics Data System (ADS)

    Willgoose, G. R.; Welivitiya, D.; Hancock, G. R.

    2015-12-01

    A critical missing link in existing landscape evolution models is a dynamic soil evolution model in which soils co-evolve with the landform. Work by the authors over the last decade has demonstrated a computationally manageable model for soil profile evolution (soilscape evolution) based on physical weathering. For chemical weathering it is clear that full geochemistry models such as CrunchFlow and PHREEQC are too computationally intensive to be coupled to existing soilscape and landscape evolution models. This paper presents a simplification of CrunchFlow chemistry and physics that makes the task feasible, and generalises it for hillslope geomorphology applications. Results from this simplified model will be compared with field data for soil pedogenesis. Other researchers have previously proposed a number of very simple weathering functions (e.g. exponential, humped, reverse exponential) as conceptual models of the in-profile weathering process. The paper will show that all of these functions are possible for specific combinations of in-soil environmental, geochemical and geologic conditions, and the presentation will outline the key variables controlling which of these conceptual models can be realistic models of in-profile processes and under what conditions. The presentation will finish by discussing the coupling of this model with a physical weathering model, and will show sample results from our SSSPAM soilscape evolution model to illustrate the implications of including chemical weathering in the soilscape evolution model.
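
    The three conceptual in-profile weathering functions mentioned can be written as simple functions of depth below the surface; the sketch below gives one common parameterisation of each (the exact forms and constants used in the SSSPAM model are not reproduced here, and all values are illustrative).

```python
import numpy as np

# Illustrative depth-dependence of weathering rate (depth z in metres, rate normalised).
def exponential(z, z0=0.5):
    return np.exp(-z / z0)                        # fastest at the surface, decaying with depth

def reverse_exponential(z, z0=0.5, z_max=2.0):
    return np.exp(-(z_max - z) / z0)              # fastest at depth (e.g. at the weathering front)

def humped(z, z_peak=0.3, width=0.2):
    return np.exp(-((z - z_peak) / width) ** 2)   # peak weathering at some depth below the surface

depths = np.linspace(0.0, 2.0, 5)
for f in (exponential, reverse_exponential, humped):
    print(f.__name__, f(depths).round(2))
```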

  14. Neutron Reflectivity as a Tool for Physics-Based Studies of Model Bacterial Membranes.

    PubMed

    Barker, Robert D; McKinley, Laura E; Titmuss, Simon

    2016-01-01

    The principles of neutron reflectivity and its application as a tool to provide structural information at the (sub-) molecular unit length scale from models for bacterial membranes are described. The model membranes can take the form of a monolayer for a single leaflet spread at the air/water interface, or bilayers of increasing complexity at the solid/liquid interface. Solid-supported bilayers constrain the bilayer to 2D but can be used to characterize interactions with antimicrobial peptides and benchmark high throughput lab-based techniques. Floating bilayers allow for membrane fluctuations, making the phase behaviour more representative of native membranes. Bilayers of varying levels of compositional accuracy can now be constructed, facilitating studies with aims that range from characterizing the fundamental physical interactions, through to the characterization of accurate mimetics for the inner and outer membranes of Gram-negative bacteria. Studies of the interactions of antimicrobial peptides with monolayer and bilayer models for the inner and outer membranes have revealed information about the molecular control of the outer membrane permeability, and the mode of interaction of antimicrobials with both inner and outer membranes.

  15. Efficient and accurate approach to modeling the microstructure and defect properties of LaCoO3

    NASA Astrophysics Data System (ADS)

    Buckeridge, J.; Taylor, F. H.; Catlow, C. R. A.

    2016-04-01

    Complex perovskite oxides are promising materials for cathode layers in solid oxide fuel cells. Such materials have intricate electronic, magnetic, and crystalline structures that prove challenging to model accurately. We analyze a wide range of standard density functional theory approaches to modeling a highly promising system, the perovskite LaCoO3, focusing on optimizing the Hubbard U parameter to treat the self-interaction of the B-site cation's d states, in order to determine the most appropriate method to study defect formation and the effect of spin on local structure. By calculating structural and electronic properties for different magnetic states we determine that U = 4 eV for Co in LaCoO3 agrees best with available experiments. We demonstrate that the generalized gradient approximation (PBEsol+U) is most appropriate for studying structure versus spin state, while the local density approximation (LDA+U) is most appropriate for determining accurate energetics for defect properties.

  16. Currency target-zone modeling: An interplay between physics and economics.

    PubMed

    Lera, Sandro Claudio; Sornette, Didier

    2015-12-01

    We study the performance of the euro-Swiss franc exchange rate in the extraordinary period from September 6, 2011 to January 15, 2015 when the Swiss National Bank enforced a minimum exchange rate of 1.20 Swiss francs per euro. Within the general framework built on geometric Brownian motions and based on the analogy between Brownian motion in finance and physics, the first-order effect of such a steric constraint would enter a priori in the form of a repulsive entropic force associated with the paths crossing the barrier that are forbidden. Nonparametric empirical estimates of drift and volatility show that the predicted first-order analogy between economics and physics is incorrect. The clue is to realize that the random-walk nature of financial prices results from the continuous anticipation of traders about future opportunities, whose aggregate actions translate into an approximate efficient market with almost no arbitrage opportunities. With the Swiss National Bank's stated commitment to enforce the barrier, traders' anticipation of this action leads to a vanishing drift together with a volatility of the exchange rate that depends on the distance to the barrier. This effect is described by Krugman's model [P. R. Krugman, Target zones and exchange rate dynamics, Q. J. Econ. 106, 669 (1991)]. We present direct quantitative empirical evidence that Krugman's theoretical model provides an accurate description of the euro-Swiss franc target zone. Motivated by the insights from the economic model, we revise the initial economics-physics analogy and show that, within the context of hindered diffusion, the two systems can be described with the same mathematics after all. Using a recently proposed extended analogy in terms of a colloidal Brownian particle embedded in a fluid of molecules associated with the underlying order book, we derive that, close to the restricting boundary, the dynamics of both systems is described by a stochastic differential equation with a very
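
    A toy simulation of the qualitative behaviour described (vanishing drift and a volatility that shrinks as the rate approaches the enforced floor) is sketched below; the functional form of the distance-dependent volatility and all numbers are assumptions for illustration, not Krugman's actual solution.

```python
import numpy as np

rng = np.random.default_rng(1)
barrier, s0 = 1.20, 1.25        # EUR/CHF floor and an illustrative starting rate
sigma_far = 0.005               # assumed daily volatility far from the barrier
scale = 0.05                    # assumed distance scale over which volatility recovers

def sigma(s):
    # Volatility shrinking as the rate approaches the barrier (assumed functional form).
    return sigma_far * (1.0 - np.exp(-(s - barrier) / scale))

s = np.empty(1000)
s[0] = s0
for t in range(1, len(s)):
    step = sigma(s[t - 1]) * rng.standard_normal()   # zero drift
    s[t] = max(barrier, s[t - 1] + step)             # the central bank enforces the floor

print(round(s.min(), 4), round(s.mean(), 4))
```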

  17. Currency target-zone modeling: An interplay between physics and economics

    NASA Astrophysics Data System (ADS)

    Lera, Sandro Claudio; Sornette, Didier

    2015-12-01

    We study the performance of the euro-Swiss franc exchange rate in the extraordinary period from September 6, 2011 to January 15, 2015 when the Swiss National Bank enforced a minimum exchange rate of 1.20 Swiss francs per euro. Within the general framework built on geometric Brownian motions and based on the analogy between Brownian motion in finance and physics, the first-order effect of such a steric constraint would enter a priori in the form of a repulsive entropic force associated with the paths crossing the barrier that are forbidden. Nonparametric empirical estimates of drift and volatility show that the predicted first-order analogy between economics and physics is incorrect. The clue is to realize that the random-walk nature of financial prices results from the continuous anticipation of traders about future opportunities, whose aggregate actions translate into an approximate efficient market with almost no arbitrage opportunities. With the Swiss National Bank's stated commitment to enforce the barrier, traders' anticipation of this action leads to a vanishing drift together with a volatility of the exchange rate that depends on the distance to the barrier. This effect is described by Krugman's model [P. R. Krugman, Target zones and exchange rate dynamics, Q. J. Econ. 106, 669 (1991), 10.2307/2937922]. We present direct quantitative empirical evidence that Krugman's theoretical model provides an accurate description of the euro-Swiss franc target zone. Motivated by the insights from the economic model, we revise the initial economics-physics analogy and show that, within the context of hindered diffusion, the two systems can be described with the same mathematics after all. Using a recently proposed extended analogy in terms of a colloidal Brownian particle embedded in a fluid of molecules associated with the underlying order book, we derive that, close to the restricting boundary, the dynamics of both systems is described by a stochastic differential

  18. Accurate Modeling of Dark-Field Scattering Spectra of Plasmonic Nanostructures.

    PubMed

    Jiang, Liyong; Yin, Tingting; Dong, Zhaogang; Liao, Mingyi; Tan, Shawn J; Goh, Xiao Ming; Allioux, David; Hu, Hailong; Li, Xiangyin; Yang, Joel K W; Shen, Zexiang

    2015-10-27

    Dark-field microscopy is a widely used tool for measuring the optical resonance of plasmonic nanostructures. However, current numerical methods for simulating the dark-field scattering spectra were carried out with plane wave illumination either at normal incidence or at an oblique angle from one direction. In actual experiments, light is focused onto the sample through an annular ring within a range of glancing angles. In this paper, we present a theoretical model capable of accurately simulating the dark-field light source with an annular ring. Simulations correctly reproduce a counterintuitive blue shift in the scattering spectra from gold nanodisks with a diameter beyond 140 nm. We believe that our proposed simulation method can be potentially applied as a general tool capable of simulating the dark-field scattering spectra of plasmonic nanostructures as well as other dielectric nanostructures with sizes beyond the quasi-static limit.
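
    As a rough illustration of the annular-illumination idea, the sketch below incoherently averages single-plane-wave spectra over a ring of incidence angles; plane_wave_scattering_spectrum is a hypothetical placeholder for a full-wave solver (FDTD or FEM), and the angular range is an assumed example rather than the exact objective geometry used in the paper:

        import numpy as np

        def annular_darkfield_spectrum(plane_wave_scattering_spectrum, wavelengths,
                                       theta_min_deg=53.0, theta_max_deg=72.0,
                                       n_theta=8, n_phi=16):
            # incoherent average over the annular ring of glancing incidence
            # angles, weighted by the solid-angle element sin(theta)
            thetas = np.radians(np.linspace(theta_min_deg, theta_max_deg, n_theta))
            phis = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
            spectra, weights = [], []
            for theta in thetas:
                for phi in phis:
                    spectra.append(plane_wave_scattering_spectrum(wavelengths, theta, phi))
                    weights.append(np.sin(theta))
            return np.average(np.asarray(spectra), axis=0, weights=np.asarray(weights))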

  19. Passive Optical Technique to Measure Physical Properties of a Vibrating Surface

    DTIC Science & Technology

    2014-01-01

    it is not necessary to understand the details of a non-Lambertian BRDF to detect surface vibration phenomena, an accurate model incorporating physics... To summarize the discussion of BRDF: while a physics-based BRDF model is not necessary to use scattered light as a surface vibration diagnostic, it may...

  20. Biology meets physics: Reductionism and multi-scale modeling of morphogenesis.

    PubMed

    Green, Sara; Batterman, Robert

    2017-02-01

    A common reductionist assumption is that macro-scale behaviors can be described "bottom-up" if only sufficient details about lower-scale processes are available. The view that an "ideal" or "fundamental" physics would be sufficient to explain all macro-scale phenomena has been met with criticism from philosophers of biology. Specifically, scholars have pointed to the impossibility of deducing biological explanations from physical ones, and to the irreducible nature of distinctively biological processes such as gene regulation and evolution. This paper takes a step back in asking whether bottom-up modeling is feasible even when modeling simple physical systems across scales. By comparing examples of multi-scale modeling in physics and biology, we argue that the "tyranny of scales" problem presents a challenge to reductive explanations in both physics and biology. The problem refers to the scale-dependency of physical and biological behaviors that forces researchers to combine different models relying on different scale-specific mathematical strategies and boundary conditions. Analyzing the ways in which different models are combined in multi-scale modeling also has implications for the relation between physics and biology. Contrary to the assumption that physical science approaches provide reductive explanations in biology, we exemplify how inputs from physics often reveal the importance of macro-scale models and explanations. We illustrate this through an examination of the role of biomechanical modeling in developmental biology. In such contexts, the relation between models at different scales and from different disciplines is neither reductive nor completely autonomous, but interdependent. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Modelling Mathematical Reasoning in Physics Education

    NASA Astrophysics Data System (ADS)

    Uhden, Olaf; Karam, Ricardo; Pietrocola, Maurício; Pospiech, Gesche

    2012-04-01

    Many findings from research as well as reports from teachers describe students' problem solving strategies as manipulation of formulas by rote. The resulting dissatisfaction with quantitative physical textbook problems seems to influence the attitude towards the role of mathematics in physics education in general. Mathematics is often seen as a tool for calculation which hinders a conceptual understanding of physical principles. However, the role of mathematics cannot be reduced to this technical aspect. Hence, instead of putting mathematics away we delve into the nature of physical science to reveal the strong conceptual relationship between mathematics and physics. Moreover, we suggest that, for both prospective teaching and further research, a focus on deeply exploring such interdependency can significantly improve the understanding of physics. To provide a suitable basis, we develop a new model which can be used for analysing different levels of mathematical reasoning within physics. It is also a guideline for shifting the attention from technical to structural mathematical skills while teaching physics. We demonstrate its applicability for analysing physical-mathematical reasoning processes with an example.

  2. A Structural Equation Model of Expertise in College Physics

    ERIC Educational Resources Information Center

    Taasoobshirazi, Gita; Carr, Martha

    2009-01-01

    A model of expertise in physics was tested on a sample of 374 college students in 2 different level physics courses. Structural equation modeling was used to test hypothesized relationships among variables linked to expert performance in physics including strategy use, pictorial representation, categorization skills, and motivation, and these…

  3. A Structural Equation Model of Conceptual Change in Physics

    ERIC Educational Resources Information Center

    Taasoobshirazi, Gita; Sinatra, Gale M.

    2011-01-01

    A model of conceptual change in physics was tested on introductory-level, college physics students. Structural equation modeling was used to test hypothesized relationships among variables linked to conceptual change in physics including an approach goal orientation, need for cognition, motivation, and course grade. Conceptual change in physics…

  4. A methodology for reduced order modeling and calibration of the upper atmosphere

    NASA Astrophysics Data System (ADS)

    Mehta, Piyush M.; Linares, Richard

    2017-10-01

    Atmospheric drag is the largest source of uncertainty in accurately predicting the orbit of satellites in low Earth orbit (LEO). Accurately predicting drag for objects that traverse LEO is critical to space situational awareness. Atmospheric models used for orbital drag calculations can be characterized either as empirical or physics-based (first-principles based). Empirical models are fast to evaluate but offer limited real-time predictive/forecasting ability, while physics-based models offer greater predictive/forecasting ability but require dedicated parallel computational resources. Also, calibration with accurate data is required for either type of model. This paper presents a new methodology based on proper orthogonal decomposition toward development of a quasi-physical, predictive, reduced order model that combines the speed of empirical models with the predictive/forecasting capabilities of physics-based models. The methodology is developed to reduce the high dimensionality of physics-based models while maintaining their capabilities. We develop the methodology using the Naval Research Lab's Mass Spectrometer Incoherent Scatter model and show that the diurnal and seasonal variations can be captured using a small number of modes and parameters. We also present calibration of the reduced order model using the CHAMP and GRACE accelerometer-derived densities. Results show that the method performs well for modeling and calibration of the upper atmosphere.
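
    The core of such a reduced order model is a proper orthogonal decomposition of a snapshot matrix; the sketch below, with random data standing in for gridded (log-)density fields and purely illustrative mode counts, shows the basic SVD step:

        import numpy as np

        rng = np.random.default_rng(1)
        n_grid, n_snapshots = 5000, 400                   # flattened grid x time snapshots
        snapshots = rng.standard_normal((n_grid, n_snapshots))   # placeholder densities

        mean_field = snapshots.mean(axis=1, keepdims=True)
        fluct = snapshots - mean_field

        # economy-size SVD: the columns of U are the POD modes
        U, s, Vt = np.linalg.svd(fluct, full_matrices=False)

        r = 10                                            # retain a small number of modes
        energy = (s[:r] ** 2).sum() / (s ** 2).sum()
        coeffs = U[:, :r].T @ fluct                       # time-dependent mode coefficients
        reconstruction = mean_field + U[:, :r] @ coeffs
        print(f"{r} modes capture {energy:.1%} of the snapshot variance")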

  5. Prediction of energy expenditure and physical activity in preschoolers

    USDA-ARS?s Scientific Manuscript database

    Accurate, nonintrusive, and feasible methods are needed to predict energy expenditure (EE) and physical activity (PA) levels in preschoolers. Herein, we validated cross-sectional time series (CSTS) and multivariate adaptive regression splines (MARS) models based on accelerometry and heart rate (HR) ...

  6. Teaching Einsteinian Physics at Schools: Part 2, Models and Analogies for Quantum Physics

    ERIC Educational Resources Information Center

    Kaur, Tejinder; Blair, David; Moschilla, John; Zadnik, Marjan

    2017-01-01

    The Einstein-First project approaches the teaching of Einsteinian physics through the use of physical models and analogies. This paper presents an approach to the teaching of quantum physics which begins by emphasising the particle-nature of light through the use of toy projectiles to represent photons. This allows key concepts including the…

  7. Personal health behaviors and role-modeling attitudes of physical therapists and physical therapist students: a cross-sectional study.

    PubMed

    Black, Beth; Marcoux, Beth C; Stiller, Christine; Qu, Xianggui; Gellish, Ronald

    2012-11-01

    Physical therapists have been encouraged to engage in health promotion practice. Health professionals who engage in healthy behaviors themselves are more apt to recommend those behaviors, and patients are more motivated to change their behaviors when their health care provider is a credible role model. The purpose of this study was to describe the health behaviors and role-modeling attitudes of physical therapists and physical therapist students. This study was a descriptive cross-sectional survey. A national sample of 405 physical therapists and 329 physical therapist students participated in the survey. Participants' attitudes toward role modeling and behaviors related to physical activity, fruit and vegetable consumption, abstention from smoking, and maintenance of a healthy weight were measured. Wilcoxon rank sum tests were used to examine differences in attitudes and behaviors between physical therapists and physical therapist students. A majority of the participants reported that they engage in regular physical activity (80.8%), eat fruits and vegetables (60.3%), do not smoke (99.4%), and maintain a healthy weight (78.7%). Although there were no differences in behaviors, physical therapist students were more likely to believe that role modeling is a powerful teaching tool, physical therapist professionals should "practice what they preach," physical activity is a desirable behavior, and physical therapist professionals should be role models for nonsmoking and maintaining a healthy weight. Limitations of this study include the potential for response bias and social desirability bias. Physical therapists and physical therapist students engage in health-promoting behaviors at similarly high rates but differ in role-modeling attitudes.

  8. An acoustic glottal source for vocal tract physical models

    NASA Astrophysics Data System (ADS)

    Hannukainen, Antti; Kuortti, Juha; Malinen, Jarmo; Ojalammi, Antti

    2017-11-01

    A sound source is proposed for the acoustic measurement of physical models of the human vocal tract. The physical models are produced by fast prototyping, based on magnetic resonance imaging during prolonged vowel production. The sound source, accompanied by custom signal processing algorithms, is used for two kinds of measurements from physical models of the vocal tract: (i) amplitude frequency response and resonant frequency measurements, and (ii) signal reconstructions at the source output according to a target pressure waveform with measurements at the mouth position. The proposed source and the software are validated by computational acoustics experiments and measurements on a physical model of the vocal tract corresponding to the vowels [] of a male speaker.

  9. Toward a mineral physics reference model for the Moon's core.

    PubMed

    Antonangeli, Daniele; Morard, Guillaume; Schmerr, Nicholas C; Komabayashi, Tetsuya; Krisch, Michael; Fiquet, Guillaume; Fei, Yingwei

    2015-03-31

    The physical properties of iron (Fe) at high pressure and high temperature are crucial for understanding the chemical composition, evolution, and dynamics of planetary interiors. Indeed, the inner structures of the telluric planets all share a similar layered nature: a central metallic core composed mostly of iron, surrounded by a silicate mantle, and a thin, chemically differentiated crust. To date, most studies of iron have focused on the hexagonal close-packed (hcp, or ε) phase, as ε-Fe is likely stable across the pressure and temperature conditions of Earth's core. However, at the more moderate pressures characteristic of the cores of smaller planetary bodies, such as the Moon, Mercury, or Mars, iron takes on a face-centered cubic (fcc, or γ) structure. Here we present compressional and shear wave sound velocity and density measurements of γ-Fe at high pressures and high temperatures, which are needed to develop accurate seismic models of planetary interiors. Our results indicate that the seismic velocities proposed for the Moon's inner core by a recent reanalysis of Apollo seismic data are well below those of γ-Fe. Our dataset thus provides strong constraints on seismic models of the lunar core and cores of small telluric planets. This allows us to propose a direct compositional and velocity model for the Moon's core.

  10. Computer Integrated Manufacturing: Physical Modelling Systems Design. A Personal View.

    ERIC Educational Resources Information Center

    Baker, Richard

    A computer-integrated manufacturing (CIM) Physical Modeling Systems Design project was undertaken in a time of rapid change in the industrial, business, technological, training, and educational areas in Australia. A specification of a manufacturing physical modeling system was drawn up. Physical modeling provides a flexibility and configurability…

  11. An easy-to-parameterise physics-informed battery model and its application towards lithium-ion battery cell design, diagnosis, and degradation

    NASA Astrophysics Data System (ADS)

    Merla, Yu; Wu, Billy; Yufit, Vladimir; Martinez-Botas, Ricardo F.; Offer, Gregory J.

    2018-04-01

    Accurate diagnosis of lithium ion battery state-of-health (SOH) is of significant value for many applications, to improve performance, extend life and increase safety. However, in-situ or in-operando diagnosis of SOH often requires robust models. Many models are available; however, these often require expensive-to-measure ex-situ parameters and/or contain unmeasurable parameters that were fitted or assumed. In this work, we have developed a new empirically parameterised physics-informed equivalent circuit model. Its modular construction and low-cost parametrisation requirements allow end users to parameterise cells quickly and easily. The model is accurate to 19.6 mV for dynamic loads without any global fitting/optimisation, only that of the individual elements. The consequences of various degradation mechanisms are simulated, and the impact of a degraded cell on pack performance is explored, validated by comparison with experiment. Results show that an aged cell in a parallel pack does not have a noticeable effect on the available capacity of other cells in the pack. The model shows that cells perform better when electrodes are more porous towards the separator and have a uniform particle size distribution, validated by comparison with published data. The model is provided with this publication for readers to use.
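
    A minimal sketch of the general idea of an equivalent-circuit cell model is given below, using a single series resistance plus one RC pair and an invented linear open-circuit-voltage curve; the actual model in the paper is physics-informed and considerably more detailed, and all parameter values here are assumptions:

        import numpy as np

        def simulate_cell(current_a, dt=1.0, capacity_ah=3.0,
                          r0=0.02, r1=0.015, c1=2000.0, soc0=0.9):
            """Terminal voltage of an R0 + one-RC equivalent circuit (assumed values)."""
            ocv = lambda soc: 3.0 + 1.2 * soc          # crude placeholder OCV curve
            soc, v_rc, voltages = soc0, 0.0, []
            for i in current_a:                        # positive current = discharge
                soc -= i * dt / (capacity_ah * 3600.0)
                # RC branch, explicit Euler step: dv/dt = -v/(R1*C1) + i/C1
                v_rc += dt * (-v_rc / (r1 * c1) + i / c1)
                voltages.append(ocv(soc) - i * r0 - v_rc)
            return np.array(voltages)

        load = np.concatenate([np.full(600, 3.0), np.zeros(600)])   # 3 A pulse, then rest
        v = simulate_cell(load)
        print(f"terminal voltage range: {v.min():.3f} V to {v.max():.3f} V")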

  12. Improving Simulations of Extreme Flows by Coupling a Physically-based Hydrologic Model with a Machine Learning Model

    NASA Astrophysics Data System (ADS)

    Mohammed, K.; Islam, A. S.; Khan, M. J. U.; Das, M. K.

    2017-12-01

    With the large number of hydrologic models presently available along with the global weather and geographic datasets, streamflows of almost any river in the world can be easily modeled. And if a reasonable amount of observed data from that river is available, then simulations of high accuracy can sometimes be performed after calibrating the model parameters against those observed data through inverse modeling. Although such calibrated models can succeed in simulating the general trend or mean of the observed flows very well, more often than not they fail to adequately simulate the extreme flows. This causes difficulty in tasks such as generating reliable projections of future changes in extreme flows due to climate change, which is obviously an important task due to floods and droughts being closely connected to people's lives and livelihoods. We propose an approach where the outputs of a physically-based hydrologic model are used as an input to a machine learning model to try and better simulate the extreme flows. To demonstrate this offline-coupling approach, the Soil and Water Assessment Tool (SWAT) was selected as the physically-based hydrologic model, the Artificial Neural Network (ANN) as the machine learning model and the Ganges-Brahmaputra-Meghna (GBM) river system as the study area. The GBM river system, located in South Asia, is the third largest in the world in terms of freshwater generated and forms the largest delta in the world. The flows of the GBM rivers were simulated separately in order to test the performance of this proposed approach in accurately simulating the extreme flows generated by different basins that vary in size, climate, hydrology and anthropogenic intervention on stream networks. Results show that by post-processing the simulated flows of the SWAT models with ANN models, simulations of extreme flows can be significantly improved. The mean absolute errors in simulating annual maximum/minimum daily flows were minimized from 4967
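
    The offline coupling amounts to training a regression network that maps the hydrologic model's simulated flows (here with simple lagged features) to observed flows; the sketch below uses synthetic data in place of SWAT output and GBM observations, so the numbers are illustrative only:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        n_days = 3000
        observed = rng.gamma(2.0, 500.0, n_days)                     # synthetic "observed" flow
        simulated = 0.7 * observed + rng.normal(0.0, 200.0, n_days)  # biased "model" flow

        X = np.column_stack([simulated,
                             np.roll(simulated, 1),
                             np.roll(simulated, 2)])[2:]             # current + lagged flows
        y = observed[2:]

        split = int(0.7 * len(y))
        ann = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
        ann.fit(X[:split], y[:split])
        corrected = ann.predict(X[split:])

        mae_raw = np.mean(np.abs(simulated[2:][split:] - y[split:]))
        mae_ann = np.mean(np.abs(corrected - y[split:]))
        print(f"MAE before post-processing: {mae_raw:.1f}, after: {mae_ann:.1f}")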

  13. Technical Note: Using experimentally determined proton spot scanning timing parameters to accurately model beam delivery time.

    PubMed

    Shen, Jiajian; Tryggestad, Erik; Younkin, James E; Keole, Sameer R; Furutani, Keith M; Kang, Yixiu; Herman, Michael G; Bues, Martin

    2017-10-01

    To accurately model the beam delivery time (BDT) for a synchrotron-based proton spot scanning system using experimentally determined beam parameters. A model to simulate the proton spot delivery sequences was constructed, and BDT was calculated by summing times for layer switch, spot switch, and spot delivery. Test plans were designed to isolate and quantify the relevant beam parameters in the operation cycle of the proton beam therapy delivery system. These parameters included the layer switch time, magnet preparation and verification time, average beam scanning speeds in x- and y-directions, proton spill rate, and maximum charge and maximum extraction time for each spill. The experimentally determined parameters, as well as the nominal values initially provided by the vendor, served as inputs to the model to predict BDTs for 602 clinical proton beam deliveries. The calculated BDTs (T_BDT) were compared with the BDTs recorded in the treatment delivery log files (T_Log): Δt = T_Log - T_BDT. The experimentally determined average layer switch time for all 97 energies was 1.91 s (ranging from 1.9 to 2.0 s for beam energies from 71.3 to 228.8 MeV), the average magnet preparation and verification time was 1.93 ms, the average scanning speeds were 5.9 m/s in the x-direction and 19.3 m/s in the y-direction, the proton spill rate was 8.7 MU/s, and the maximum proton charge available for one acceleration was 2.0 ± 0.4 nC. Some of the measured parameters differed from the nominal values provided by the vendor. The calculated BDTs using experimentally determined parameters matched the recorded BDTs of 602 beam deliveries (Δt = -0.49 ± 1.44 s), which were significantly more accurate than BDTs calculated using nominal timing parameters (Δt = -7.48 ± 6.97 s). An accurate model for BDT prediction was achieved by using the experimentally determined proton beam therapy delivery parameters, which may be useful in modeling the interplay effect and patient throughput. The model may
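
    Using the experimentally determined values quoted above, a stripped-down version of the BDT bookkeeping can be sketched as follows; spill recharge limits (maximum charge and extraction time per spill) are ignored here, so this sketch understates BDT for large fields:

        def beam_delivery_time(layers, t_layer_switch=1.91, t_magnet=1.93e-3,
                               v_x=5.9, v_y=19.3, spill_rate=8.7):
            """layers: list of layers, each a list of (x_m, y_m, mu) spots."""
            total = 0.0
            for layer in layers:
                total += t_layer_switch                      # energy/layer switch
                prev = None
                for x, y, mu in layer:
                    if prev is not None:
                        dx, dy = abs(x - prev[0]), abs(y - prev[1])
                        total += t_magnet + max(dx / v_x, dy / v_y)   # spot switch
                    total += mu / spill_rate                 # spot delivery
                    prev = (x, y)
            return total

        layer = [(0.000, 0.000, 0.5), (0.005, 0.000, 0.4), (0.005, 0.005, 0.6)]
        print(f"BDT for one small layer: {beam_delivery_time([layer]):.3f} s")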

  14. Physical principles for DNA tile self-assembly.

    PubMed

    Evans, Constantine G; Winfree, Erik

    2017-06-19

    DNA tiles provide a promising technique for assembling structures with nanoscale resolution through self-assembly by basic interactions rather than top-down assembly of individual structures. Tile systems can be programmed to grow based on logical rules, allowing for a small number of tile types to assemble large, complex assemblies that can retain nanoscale resolution. Such algorithmic systems can even assemble different structures using the same tiles, based on inputs that seed the growth. While programming and theoretical analysis of tile self-assembly often makes use of abstract logical models of growth, experimentally implemented systems are governed by nanoscale physical processes that can lead to very different behavior, more accurately modeled by taking into account the thermodynamics and kinetics of tile attachment and detachment in solution. This review discusses the relationships between more abstract and more physically realistic tile assembly models. A central concern is how consideration of model differences enables the design of tile systems that robustly exhibit the desired abstract behavior in realistic physical models and in experimental implementations. Conversely, we identify situations where self-assembly in abstract models can not be well-approximated by physically realistic models, putting constraints on physical relevance of the abstract models. To facilitate the discussion, we introduce a unified model of tile self-assembly that clarifies the relationships between several well-studied models in the literature. Throughout, we highlight open questions regarding the physical principles for DNA tile self-assembly.
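
    For orientation, the kinetic Tile Assembly Model commonly used for such physically realistic analyses (quoted here in its standard textbook form, which may differ in detail from the unified model introduced in this review) lets tiles attach at a concentration-dependent rate and detach at a rate set by the total strength b of their matching bonds,

        r_{\text{on}} = k_f\, e^{-G_{mc}}, \qquad r_{\text{off},b} = k_f\, e^{-b\, G_{se}},

    so growth is favoured roughly when b G_se > G_mc, and careful operation near the melting regime (G_mc ≈ 2 G_se) is what makes abstract, logical growth rules approximately valid.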

  15. Segmentation-less Digital Rock Physics

    NASA Astrophysics Data System (ADS)

    Tisato, N.; Ikeda, K.; Goldfarb, E. J.; Spikes, K. T.

    2017-12-01

    In the last decade, Digital Rock Physics (DRP) has become an avenue to investigate physical and mechanical properties of geomaterials. DRP offers the advantage of simulating laboratory experiments on numerical samples that are obtained from analytical methods. Potentially, DRP could spare part of the time and resources that are allocated to performing complicated laboratory tests. As with classic laboratory tests, the goal of DRP is to accurately estimate physical properties of rocks such as hydraulic permeability or elastic moduli. Nevertheless, the physical properties of samples imaged using micro-computed tomography (μCT) are estimated through segmentation of the μCT dataset. Segmentation proves to be a challenging and arbitrary procedure that typically leads to inaccurate estimates of physical properties. Here we present a novel technique to extract physical properties from a μCT dataset without the use of segmentation. We show examples in which we use the segmentation-less method to simulate elastic wave propagation and pressure wave diffusion to estimate elastic properties and permeability, respectively. The proposed method takes advantage of effective medium theories and uses the density and the porosity that are measured in the laboratory to constrain the results. We discuss the results and highlight that segmentation-less DRP is more accurate than segmentation-based DRP approaches and theoretical modeling for the studied rock. In conclusion, the segmentation-less approach presented here seems to be a promising method to improve accuracy and to ease the overall workflow of DRP.

  16. Models Based Practices in Physical Education: A Sociocritical Reflection

    ERIC Educational Resources Information Center

    Landi, Dillon; Fitzpatrick, Katie; McGlashan, Hayley

    2016-01-01

    In this paper, we reflect on models-based practices in physical education using a sociocritical lens. Drawing links between neoliberal moves in education, and critical approaches to the body and physicality, we take a view that models are useful tools that are worth integrating into physical education, but we are apprehensive to suggest they…

  17. Rock.XML - Towards a library of rock physics models

    NASA Astrophysics Data System (ADS)

    Jensen, Erling Hugo; Hauge, Ragnar; Ulvmoen, Marit; Johansen, Tor Arne; Drottning, Åsmund

    2016-08-01

    Rock physics modelling provides tools for correlating physical properties of rocks and their constituents to the geophysical observations we measure on a larger scale. Many different theoretical and empirical models exist to cover the range of different types of rocks. However, upon reviewing these, we see that they are all built around a few main concepts. Based on this observation, we propose a format for digitally storing the specifications of rock physics models, which we have named Rock.XML. It contains not only data about the various constituents, but also the theories and how they are used to combine these building blocks to make a representative model for a particular rock. The format is based on the Extensible Markup Language XML, making it flexible enough to handle complex models as well as scalable towards extending it with new theories and models. This technology has great advantages for documenting and exchanging models in an unambiguous way between people and between software. Rock.XML can become a platform for creating a library of rock physics models, making them more accessible to everyone.

  18. Optimization of the ANFIS using a genetic algorithm for physical work rate classification.

    PubMed

    Habibi, Ehsanollah; Salehi, Mina; Yadegarfar, Ghasem; Taheri, Ali

    2018-03-13

    Recently, a new method was proposed for physical work rate classification based on an adaptive neuro-fuzzy inference system (ANFIS). This study aims to present a genetic algorithm (GA)-optimized ANFIS model for a highly accurate classification of physical work rate. Thirty healthy men participated in this study. Directly measured heart rate and oxygen consumption of the participants in the laboratory were used for training the ANFIS classifier model in MATLAB version 8.0.0 using a hybrid algorithm. A similar process was done using the GA as an optimization technique. The accuracy, sensitivity and specificity of the ANFIS classifier model were increased successfully. The mean accuracy of the model was increased from 92.95 to 97.92%. Also, the calculated root mean square error of the model was reduced from 5.4186 to 3.1882. The maximum estimation error of the optimized ANFIS during the network testing process was ± 5%. The GA can be effectively used for ANFIS optimization and leads to an accurate classification of physical work rate. In addition to high accuracy, simple implementation and inter-individual variability consideration are two other advantages of the presented model.

  19. Determination of the mechanical and physical properties of cartilage by coupling poroelastic-based finite element models of indentation with artificial neural networks.

    PubMed

    Arbabi, Vahid; Pouran, Behdad; Campoli, Gianni; Weinans, Harrie; Zadpoor, Amir A

    2016-03-21

    One of the most widely used techniques to determine the mechanical properties of cartilage is based on indentation tests and interpretation of the obtained force-time or displacement-time data. In the current computational approaches, one needs to simulate the indentation test with finite element models and use an optimization algorithm to estimate the mechanical properties of cartilage. The modeling procedure is cumbersome, and the simulations need to be repeated for every new experiment. For the first time, we propose a method for fast and accurate estimation of the mechanical and physical properties of cartilage as a poroelastic material with the aid of artificial neural networks. In our study, we used finite element models to simulate the indentation for poroelastic materials with a wide range of combinations of mechanical and physical properties. The obtained force-time curves are then divided into three parts: the first two parts of the data are used for training and validation of an artificial neural network, while the third part is used for testing the trained network. The trained neural network receives the force-time curves as the input and provides the properties of cartilage as the output. We observed that the trained network could accurately predict the properties of cartilage within the range of properties for which it was trained. The mechanical and physical properties of cartilage could therefore be estimated very fast, since no additional finite element modeling is required once the neural network is trained. The robustness of the trained artificial neural network in determining the properties of cartilage based on noisy force-time data was assessed by introducing noise to the simulated force-time data. We found that the training procedure could be optimized so as to maximize the robustness of the neural network against noisy force-time data. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Towards accurate modeling of noncovalent interactions for protein rigidity analysis.

    PubMed

    Fox, Naomi; Streinu, Ileana

    2013-01-01

    Protein rigidity analysis is an efficient computational method for extracting flexibility information from static, X-ray crystallography protein data. Atoms and bonds are modeled as a mechanical structure and analyzed with a fast graph-based algorithm, producing a decomposition of the flexible molecule into interconnected rigid clusters. The result depends critically on noncovalent atomic interactions, primarily on how hydrogen bonds and hydrophobic interactions are computed and modeled. Ongoing research points to the stringent need for benchmarking rigidity analysis software systems, towards the goal of increasing their accuracy and validating their results, both against each other and against biologically relevant (functional) parameters. We propose two new methods for modeling hydrogen bonds and hydrophobic interactions that more accurately reflect a mechanical model, without being computationally more intensive. We evaluate them using a novel scoring method, based on the B-cubed score from the information retrieval literature, which measures how well two cluster decompositions match. To evaluate the modeling accuracy of KINARI, our pebble-game rigidity analysis system, we use a benchmark data set of 20 proteins, each with multiple distinct conformations deposited in the Protein Data Bank. Cluster decompositions for them were previously determined with the RigidFinder method from Gerstein's lab and validated against experimental data. When KINARI's default tuning parameters are used, an improvement of the B-cubed score over a crude baseline is observed in 30% of this data. With our new modeling options, improvements were observed in over 70% of the proteins in this data set. We investigate the sensitivity of the cluster decomposition score with case studies on pyruvate phosphate dikinase and calmodulin. To substantially improve the accuracy of protein rigidity analysis systems, thorough benchmarking must be performed on all current systems and future
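
    For reference, a B-cubed comparison of two cluster decompositions can be computed as in the following sketch, where each decomposition is a mapping from atoms to rigid-cluster labels; the toy labels are placeholders, and the exact scoring variant used by KINARI may differ in detail:

        from collections import defaultdict

        def bcubed(predicted, reference):
            """Per-item B-cubed precision/recall/F1 between two clusterings
            (dicts mapping each item to a cluster label, same key set)."""
            pred, ref = defaultdict(set), defaultdict(set)
            for item, label in predicted.items():
                pred[label].add(item)
            for item, label in reference.items():
                ref[label].add(item)
            precision = recall = 0.0
            for item in predicted:
                c, l = pred[predicted[item]], ref[reference[item]]
                overlap = len(c & l)
                precision += overlap / len(c)
                recall += overlap / len(l)
            n = len(predicted)
            p, r = precision / n, recall / n
            return p, r, (2 * p * r / (p + r) if p + r else 0.0)

        print(bcubed({1: "A", 2: "A", 3: "B", 4: "B"},
                     {1: "x", 2: "x", 3: "x", 4: "y"}))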

  1. Towards accurate modeling of noncovalent interactions for protein rigidity analysis

    PubMed Central

    2013-01-01

    Background Protein rigidity analysis is an efficient computational method for extracting flexibility information from static, X-ray crystallography protein data. Atoms and bonds are modeled as a mechanical structure and analyzed with a fast graph-based algorithm, producing a decomposition of the flexible molecule into interconnected rigid clusters. The result depends critically on noncovalent atomic interactions, primarily on how hydrogen bonds and hydrophobic interactions are computed and modeled. Ongoing research points to the stringent need for benchmarking rigidity analysis software systems, towards the goal of increasing their accuracy and validating their results, both against each other and against biologically relevant (functional) parameters. We propose two new methods for modeling hydrogen bonds and hydrophobic interactions that more accurately reflect a mechanical model, without being computationally more intensive. We evaluate them using a novel scoring method, based on the B-cubed score from the information retrieval literature, which measures how well two cluster decompositions match. Results To evaluate the modeling accuracy of KINARI, our pebble-game rigidity analysis system, we use a benchmark data set of 20 proteins, each with multiple distinct conformations deposited in the Protein Data Bank. Cluster decompositions for them were previously determined with the RigidFinder method from Gerstein's lab and validated against experimental data. When KINARI's default tuning parameters are used, an improvement of the B-cubed score over a crude baseline is observed in 30% of this data. With our new modeling options, improvements were observed in over 70% of the proteins in this data set. We investigate the sensitivity of the cluster decomposition score with case studies on pyruvate phosphate dikinase and calmodulin. Conclusion To substantially improve the accuracy of protein rigidity analysis systems, thorough benchmarking must be performed on all

  2. Study on the physical and non-physical drag coefficients for spherical satellites

    NASA Astrophysics Data System (ADS)

    Man, Haijun; Li, Huijun; Tang, Geshi

    In this study, the physical and non-physical drag coefficients (C_D) for spherical satellites in ANDERR are retrieved from the number density of atomic oxygen and from orbit decay data, respectively. We examine how the retrieved physical and non-physical C_D change as the accuracy of the atmospheric density model improves. Firstly, Lomb-Scargle periodograms of these C_D series and of the environmental parameters indicate that: (1) there are obvious 5-, 7-, and 9-day periodic variations in the daily Ap indices, the solar wind speed at 1 AU and the model density, which has been reported to result from the interaction between the corotating solar wind and the magnetosphere; (2) the same short periods also exist in the retrieved C_D, differing only in the significance level of each C_D series; (3) the physical and non-physical C_D behave almost homogeneously with the model densities along the satellite trajectory. Secondly, corrections to each type of C_D are defined as the differences between the values derived from the NRLMSISE-00 density model and those from JB2008. It is shown that: (1) the bigger the density corrections are, the bigger the corrections to both types of C_D; in addition, corrections to the physical C_D fall within a spread of 0.05, which is about an order of magnitude narrower than the spread of the non-physical C_D corrections (0.5); (2) corrections to the non-physical C_D behave reciprocally to the density corrections, while a similar relationship also exists between corrections to the physical C_D and those to the model density; (3) as the orbital altitude drops below 200 km, corrections to the C_D and to the model density both decrease asymptotically to zero. The results highlight that the physical C_D for spherical satellites should play an important role in technique renovations for accurate density corrections with the orbital decay data or in searching for a way to decouple the

  3. Towards a Self-Consistent Physical Framework for Modeling Coupled Human and Physical Activities during the Anthropocene

    NASA Astrophysics Data System (ADS)

    Garrett, T. J.

    2014-12-01

    Studies of the response of global climate to anthropogenic activities rely upon scenarios for future human activity to provide a range of possible trajectories for greenhouse gas emissions over the coming century. Sophisticated integrated models are used to explore not only what will happen, but what should happen in order to optimize societal well-being. Hundreds of equations might be used to account for the interplay between human decisions, technological change, and macroeconomic principles. In contrast, the model equations used to describe geophysical phenomena look very different because they are a) purely deterministic and b) consistent with basic thermodynamic laws. This inconsistency between macroeconomics and physics suggests a rather unhappy marriage. During the Anthropocene the evolution of humanity and our environment will become increasingly intertwined. Representing such a coupling suggests a need for a common theoretical basis. To this end, the approach that is described here is to treat civilization like any other physical process, that is as an open, non-equilibrium thermodynamic system that dissipates energy and diffuses matter in order to sustain existing circulations and to further its material growth. Theoretical arguments and over 40 years of measurements show that a very general representation of global economic wealth (not GDP) has been tied to rates of global primary energy consumption through a constant 7.1 ± 0.1 mW per year 2005 USD. This link between physics and economics leads to very simple expressions for how fast civilization and its rate of energy consumption grow. These are expressible as a function of rates of energy and material resource discovery and depletion, and of the magnitude of externally imposed decay. The equations are validated through hindcasts that show, for example, that economic conditions in the 1950s can be invoked to make remarkably accurate forecasts of present rates of global GDP growth and primary energy
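
    Written out, the empirical relation described above (our paraphrase of Garrett's formulation; the precise definition of wealth varies slightly between his papers) links the current rate of global primary energy consumption a(t) to cumulative inflation-adjusted economic production C(t):

        a(t) = \lambda\, C(t), \qquad C(t) = \sum_{t' \le t} \mathrm{GDP}(t'), \qquad \lambda \approx 7.1 \pm 0.1\ \mathrm{mW\ per\ 2005\ USD},

    so the growth rate of energy demand tracks the growth rate of this generalized wealth rather than the growth rate of GDP itself.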

  4. Development of a Mantle Convection Physical Model to Assist with Teaching about Earth's Interior Processes

    NASA Astrophysics Data System (ADS)

    Glesener, G. B.; Aurnou, J. M.

    2010-12-01

    The Modeling and Educational Demonstrations Laboratory (MEDL) at UCLA is developing a mantle convection physical model to assist educators with the pedagogy of Earth’s interior processes. Our design goal consists of two components to help the learner gain conceptual understanding by means of visual interactions without the burden of distracters, which may promote alternative conceptions. Distracters may be any feature of the conceptual model that causes the learner to use inadequate mental artifact to help him or her understand what the conceptual model is intended to convey. The first component, and most important, is a psychological component that links properties of “everyday things” (Norman, 1988) to the natural phenomenon, mantle convection. Some examples of everyday things may be heat rising out from a freshly popped bag of popcorn, or cold humid air falling from an open freezer. The second component is the scientific accuracy of the conceptual model. We would like to simplify the concepts for the learner without sacrificing key information that is linked to other natural phenomena the learner will come across in future science lessons. By taking into account the learner’s mental artifacts in combination with a simplified, but accurate, representation of what scientists know of the Earth’s interior, we expect the learner to have the ability to create an adequate qualitative mental simulation of mantle convection. We will be presenting some of our prototypes of this mantle convection physical model at this year’s poster session and invite constructive input from our colleagues.

  5. Validation of Material Models For Automotive Carbon Fiber Composite Structures Via Physical And Crash Testing (VMM Composites Project)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coppola, Anthony; Faruque, Omar; Truskin, James F

    As automotive fuel economy requirements increase, the push for reducing overall vehicle weight will likely include the consideration of materials that have not previously been part of mainstream vehicle design and manufacturing, including carbon fiber composites. Vehicle manufacturers currently rely on computer-aided engineering (CAE) methods as part of the design and development process, so going forward, the ability to accurately and predictably model carbon fiber composites will be necessary. If composites are to be used for structural components, this need applies to both crash and quasi-static modeling. This final report covers the results of a five-year, $6.89M, 50% cost-shared research project between the Department of Energy (DOE) and the US Advanced Materials Partnership (USAMP) under Cooperative Agreement DE-EE-0005661 known as “Validation of Material Models for Automotive Carbon Fiber Composite Structures Via Physical and Crash Testing (VMM).” The objective of the VMM Composites Project was to validate and assess the ability of physics-based material models to predict crash performance of automotive primary load-carrying carbon fiber composite structures. Simulation material models that were evaluated included micro-mechanics based meso-scale models developed by the University of Michigan (UM) and micro-plane models by Northwestern University (NWU) under previous collaborations with the DOE and Automotive Composites Consortium/USAMP, as well as five commercial crash codes: LS-DYNA, RADIOSS, VPS/PAM-CRASH, Abaqus, and GENOA-MCQ. CAE predictions obtained from seven organizations were compared with experimental results from quasi-static testing and dynamic crash testing of a thermoset carbon fiber composite front-bumper and crush-can (FBCC) system gathered under multiple loading conditions. This FBCC design was developed to demonstrate progressive crush, virtual simulation, tooling, fabrication, assembly, non-destructive evaluation and crash

  6. Ladder physics in the spin fermion model

    NASA Astrophysics Data System (ADS)

    Tsvelik, A. M.

    2017-05-01

    A link is established between the spin fermion (SF) model of the cuprates and the approach based on the analogy between the physics of doped Mott insulators in two dimensions and the physics of fermionic ladders. This enables one to use nonperturbative results derived for fermionic ladders to move beyond the large-N approximation in the SF model. It is shown that the paramagnon exchange postulated in the SF model has exactly the right form to facilitate the emergence of the fully gapped d -Mott state in the region of the Brillouin zone at the hot spots of the Fermi surface. Hence, the SF model provides an adequate description of the pseudogap.

  7. Ladder physics in the spin fermion model

    DOE PAGES

    Tsvelik, A. M.

    2017-05-01

    A link is established between the spin fermion (SF) model of the cuprates and the approach based on the analogy between the physics of doped Mott insulators in two dimensions and the physics of fermionic ladders. This enables one to use nonperturbative results derived for fermionic ladders to move beyond the large-N approximation in the SF model. Here, it is shown that the paramagnon exchange postulated in the SF model has exactly the right form to facilitate the emergence of the fully gapped d-Mott state in the region of the Brillouin zone at the hot spots of the Fermi surface. Hence, the SF model provides an adequate description of the pseudogap.

  8. Cumulative atomic multipole moments complement any atomic charge model to obtain more accurate electrostatic properties

    NASA Technical Reports Server (NTRS)

    Sokalski, W. A.; Shibata, M.; Ornstein, R. L.; Rein, R.

    1992-01-01

    The quality of several atomic charge models based on different definitions has been analyzed using cumulative atomic multipole moments (CAMM). This formalism can generate higher atomic moments starting from any atomic charges, while preserving the corresponding molecular moments. The atomic charge contribution to the higher molecular moments, as well as to the electrostatic potentials, has been examined for CO and HCN molecules at several different levels of theory. The results clearly show that the electrostatic potential obtained from the CAMM expansion is convergent up to the R^-5 term for all atomic charge models used. This illustrates that higher atomic moments can be used to supplement any atomic charge model to obtain a more accurate description of electrostatic properties.
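
    Schematically (our notation, not the paper's), the CAMM expansion of the molecular electrostatic potential collects, for each atom a at distance R_a from the field point, terms of increasing multipole order:

        V(\mathbf{R}) \;\approx\; \sum_{a}\, \sum_{\ell = 0}^{\ell_{\max}} \frac{T_{\ell}(\hat{\mathbf{R}}_a)\, M_a^{(\ell)}}{R_a^{\,\ell + 1}},

    where M_a^{(0)} = q_a is the atomic charge from any chosen charge model, the higher M_a^{(\ell)} are the cumulative atomic multipole moments, and T_\ell is a purely angular factor. Since the \ell-th term decays as R^{-(\ell+1)}, truncating at \ell = 4 retains all contributions through the R^-5 term mentioned above.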

  9. Addressing Beyond Standard Model physics using cosmology

    NASA Astrophysics Data System (ADS)

    Ghalsasi, Akshay

    We have consensus models for both particle physics (i.e. the standard model) and cosmology (i.e. ΛCDM). Given certain assumptions about the initial conditions of the universe, the marriage of the standard model (SM) of particle physics and ΛCDM cosmology has been phenomenally successful in describing the universe we live in. However, it is quite clear that all is not well. The three biggest problems that the SM faces today are baryogenesis, dark matter and dark energy. These problems, along with the problem of neutrino masses, indicate the existence of physics beyond the SM. Evidence of baryogenesis, dark matter and dark energy all comes from astrophysical and cosmological observations. Cosmology also provides the best (model dependent) constraints on neutrino masses. In this thesis I will try to address the following problems: 1) Addressing the origin of dark energy (DE) using non-standard neutrino cosmology and exploring the effects of the non-standard neutrino cosmology on terrestrial and cosmological experiments. 2) Addressing the matter anti-matter asymmetry of the universe.

  10. Testing a Theoretical Model of Immigration Transition and Physical Activity.

    PubMed

    Chang, Sun Ju; Im, Eun-Ok

    2015-01-01

    The purposes of the study were to develop a theoretical model to explain the relationships between immigration transition and midlife women's physical activity and test the relationships among the major variables of the model. A theoretical model, which was developed based on transitions theory and the midlife women's attitudes toward physical activity theory, consists of 4 major variables, including length of stay in the United States, country of birth, level of acculturation, and midlife women's physical activity. To test the theoretical model, a secondary analysis with data from 127 Hispanic women and 123 non-Hispanic (NH) Asian women in a national Internet study was used. Among the major variables of the model, length of stay in the United States was negatively associated with physical activity in Hispanic women. Level of acculturation in NH Asian women was positively correlated with women's physical activity. Country of birth and level of acculturation were significant factors that influenced physical activity in both Hispanic and NH Asian women. The findings support the theoretical model that was developed to examine relationships between immigration transition and physical activity; it shows that immigration transition can play an essential role in influencing health behaviors of immigrant populations in the United States. The NH theoretical model can be widely used in nursing practice and research that focus on immigrant women and their health behaviors. Health care providers need to consider the influences of immigration transition to promote immigrant women's physical activity.

  11. Teacher Fidelity to a Physical Education Curricular Model and Physical Activity Outcomes

    ERIC Educational Resources Information Center

    Stylianou, Michalis; Kloeppel, Tiffany; Kulinna, Pamela; van der Mars, Han

    2016-01-01

    Background: This study was informed by the bodies of literature emphasizing the role of physical education in promoting physical activity (PA) and addressing teacher fidelity to curricular models. Purpose: The purpose of this study was to compare student PA levels, lesson context, and teacher PA promotion behavior among classes where teachers were…

  12. Accurate complex scaling of three dimensional numerical potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerioni, Alessandro; Genovese, Luigi; Duchemin, Ivan

    2013-05-28

    The complex scaling method, which consists in continuing spatial coordinates into the complex plane, is a well-established method that allows one to compute resonant eigenfunctions of the time-independent Schroedinger operator. Whenever it is desirable to apply the complex scaling to investigate resonances in physical systems defined on numerical discrete grids, the most direct approach relies on the application of a similarity transformation to the original, unscaled Hamiltonian. We show that such an approach can be conveniently implemented in the Daubechies wavelet basis set, featuring a very promising level of generality, high accuracy, and no need for artificial convergence parameters. Complex scaling of three-dimensional numerical potentials can be efficiently and accurately performed. By carrying out an illustrative resonant state computation in the case of a one-dimensional model potential, we then show that our wavelet-based approach may disclose new exciting opportunities in the field of computational non-Hermitian quantum mechanics.
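
    In its standard form (which the wavelet implementation above realizes numerically via a similarity transformation), complex scaling continues the coordinates as r → r e^{iθ}, turning the Hamiltonian (in atomic units) into

        H_\theta = -\tfrac{1}{2}\, e^{-2 i \theta} \nabla^2 + V(\mathbf{r}\, e^{i \theta}),

    a non-Hermitian operator whose complex eigenvalues E = E_r - i\,\Gamma/2 give the positions and widths of resonances and are independent of \theta once the resonance has been uncovered.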

  13. Graded Interface Models for more accurate Determination of van der Waals-London Dispersion Interactions across Grain Boundaries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van Benthem, Klaus; Tan, Guolong; French, Roger H

    2006-01-01

    Attractive van der Waals-London dispersion interactions between two half crystals arise from local physical property gradients within the interface layer separating the crystals. Hamaker coefficients and London dispersion energies were quantitatively determined for Σ5 and near-Σ13 grain boundaries in SrTiO3 by analysis of spatially resolved valence electron energy-loss spectroscopy (VEELS) data. From the experimental data, local complex dielectric functions were determined, from which optical properties can be locally analysed. Both local electronic structures and optical properties revealed gradients within the grain boundary cores of both investigated interfaces. The obtained results show that even in the presence of atomically structured grain boundary cores with widths of less than 1 nm, optical properties have to be represented with gradual changes across the grain boundary structures to quantitatively reproduce accurate van der Waals-London dispersion interactions. London dispersion energies of the order of 10% of the apparent interface energies of SrTiO3 were observed, demonstrating their significance in the grain boundary formation process. The application of different models to represent optical property gradients shows that long-range van der Waals-London dispersion interactions scale significantly with local, i.e. atomic-length-scale, property variations.

  14. Nicholas Metropolis Award Talk for Outstanding Doctoral Thesis Work in Computational Physics: Computational biophysics and multiscale modeling of blood cells and blood flow in health and disease

    NASA Astrophysics Data System (ADS)

    Fedosov, Dmitry

    2011-03-01

    Computational biophysics is a large and rapidly growing area of computational physics. In this talk, we will focus on a number of biophysical problems related to blood cells and blood flow in health and disease. Blood flow plays a fundamental role in a wide range of physiological processes and pathologies in the organism. To understand and, if necessary, manipulate the course of these processes it is essential to investigate blood flow under realistic conditions including deformability of blood cells, their interactions, and behavior in the complex microvascular network. Using a multiscale cell model we are able to accurately capture red blood cell mechanics, rheology, and dynamics in agreement with a number of single cell experiments. Further, this validated model yields accurate predictions of the blood rheological properties, cell migration, cell-free layer, and hemodynamic resistance in microvessels. In addition, we investigate blood related changes in malaria, which include a considerable stiffening of red blood cells and their cytoadherence to endothelium. For these biophysical problems computational modeling is able to provide new physical insights and capabilities for quantitative predictions of blood flow in health and disease.

  15. A physical model for dementia

    NASA Astrophysics Data System (ADS)

    Sotolongo-Costa, O.; Gaggero-Sager, L. M.; Becker, J. T.; Maestu, F.; Sotolongo-Grau, O.

    2017-04-01

    Aging-associated brain decline often results in some form of dementia. Although this is a complex brain disorder, a physical model can be used to describe its general behavior. A probabilistic model for the development of dementia is obtained and fitted to experimental data obtained from the Alzheimer's Disease Neuroimaging Initiative. It is explained how dementia appears as a consequence of aging and why it is irreversible.

  16. Validation and upgrading of physically based mathematical models

    NASA Technical Reports Server (NTRS)

    Duval, Ronald

    1992-01-01

    The validation of the results of physically-based mathematical models against experimental results was discussed. Systematic techniques are used for: (1) isolating subsets of the simulator mathematical model and comparing the response of each subset to its experimental response for the same input conditions; (2) evaluating the response error to determine whether it is the result of incorrect parameter values, incorrect structure of the model subset, or unmodeled external effects of cross coupling; and (3) modifying and upgrading the model and its parameter values to determine the most physically appropriate combination of changes.

  17. Modelling urban rainfall-runoff responses using an experimental, two-tiered physical modelling environment

    NASA Astrophysics Data System (ADS)

    Green, Daniel; Pattison, Ian; Yu, Dapeng

    2016-04-01

    Surface water (pluvial) flooding occurs when rainwater from intense precipitation events is unable to infiltrate into the subsurface or drain via natural or artificial drainage channels. Surface water flooding poses a serious hazard to urban areas across the world, and the perceived risk in the UK appears to have increased in recent years as surface water flood events have become seemingly more severe and frequent. Surface water flood risk currently accounts for 1/3 of all UK flood risk, with approximately two million people living in urban areas at risk of a 1 in 200-year flood event. Research often focuses upon using numerical modelling techniques to understand the extent, depth and severity of actual or hypothetical flood scenarios. Although much research has been conducted using numerical modelling, field data available for model calibration and validation are limited due to the complexities associated with data collection in surface water flood conditions. Ultimately, the data upon which numerical models are based are often erroneous and inconclusive. Physical models offer a novel, alternative and innovative environment in which to collect data, creating a controlled, closed system where independent variables can be altered individually to investigate cause-and-effect relationships. A physical modelling environment provides a suitable platform to investigate rainfall-runoff processes occurring within an urban catchment. Despite this, physical modelling approaches are seldom used in surface water flooding research. Scaled laboratory experiments using a 9 m2, two-tiered 1:100 physical model consisting of (i) a low-cost rainfall simulator component able to simulate consistent, uniformly distributed (>75% CUC) rainfall events of varying intensity, and (ii) a fully interchangeable, modular plot surface have been conducted to investigate and quantify the influence of a number of terrestrial and meteorological factors on overland flow and rainfall-runoff patterns within a modelled
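
    Reading "CUC" as the Christiansen uniformity coefficient usually quoted for rainfall simulators (an assumption on our part), the >75% figure can be checked from catch-can data with a few lines of Python; the depth values below are invented for illustration:

        import numpy as np

        def christiansen_uniformity(depths_mm):
            # CUC = 100 * (1 - sum(|d_i - mean|) / (n * mean))
            d = np.asarray(depths_mm, dtype=float)
            return 100.0 * (1.0 - np.abs(d - d.mean()).sum() / (d.size * d.mean()))

        catch_cans = [10.2, 9.8, 11.0, 10.5, 9.4, 10.1, 10.8, 9.9, 10.3]
        print(f"CUC = {christiansen_uniformity(catch_cans):.1f}%  (target > 75%)")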

  18. Internal Physical Features of a Land Surface Model Employing a Tangent Linear Model

    NASA Technical Reports Server (NTRS)

    Yang, Runhua; Cohn, Stephen E.; daSilva, Arlindo; Joiner, Joanna; Houser, Paul R.

    1997-01-01

    The Earth's land surface, including its biomass, is an integral part of the Earth's weather and climate system. Land surface heterogeneity, such as the type and amount of vegetative covering, has a profound effect on local weather variability and therefore on regional variations of the global climate. Surface conditions affect local weather and climate through a number of mechanisms. First, they determine the re-distribution of the net radiative energy received at the surface, through the atmosphere, from the sun. A certain fraction of this energy increases the surface ground temperature, another warms the near-surface atmosphere, and the rest evaporates surface water, which in turn creates clouds and causes precipitation. Second, they determine how much rainfall and snowmelt can be stored in the soil and how much instead runs off into waterways. Finally, surface conditions influence the near-surface concentration and distribution of greenhouse gases such as carbon dioxide. The processes through which these mechanisms interact with the atmosphere can be modeled mathematically, to within some degree of uncertainty, on the basis of underlying physical principles. Such a land surface model provides predictive capability for surface variables including ground temperature, surface humidity, and soil moisture and temperature. This information is important for agriculture and industry, as well as for addressing fundamental scientific questions concerning global and local climate change. In this study we apply a methodology known as tangent linear modeling to help us understand more deeply, the behavior of the Mosaic land surface model, a model that has been developed over the past several years at NASA/GSFC. This methodology allows us to examine, directly and quantitatively, the dependence of prediction errors in land surface variables upon different vegetation conditions. The work also highlights the importance of accurate soil moisture information. Although surface
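
    In general terms (not specific to the Mosaic model), the tangent linear model is the Jacobian of the nonlinear land surface model M linearized about a reference trajectory: a perturbation \delta x_0 in the initial surface state evolves as

        \delta x(t) \;\approx\; \mathbf{M}'(x_0)\, \delta x_0, \qquad \mathbf{M}'(x_0) = \left.\frac{\partial M(x)}{\partial x}\right|_{x = x_0},

    so the sensitivity of predicted ground temperature or soil moisture to, say, a change in vegetation cover can be read off directly from the corresponding entries of \mathbf{M}'.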

  19. Simple universal models capture all classical spin physics.

    PubMed

    De las Cuevas, Gemma; Cubitt, Toby S

    2016-03-11

    Spin models are used in many studies of complex systems because they exhibit rich macroscopic behavior despite their microscopic simplicity. Here, we prove that all the physics of every classical spin model is reproduced in the low-energy sector of certain "universal models," with at most polynomial overhead. This holds for classical models with discrete or continuous degrees of freedom. We prove necessary and sufficient conditions for a spin model to be universal and show that one of the simplest and most widely studied spin models, the two-dimensional Ising model with fields, is universal. Our results may facilitate physical simulations of Hamiltonians with complex interactions. Copyright © 2016, American Association for the Advancement of Science.

  20. Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations.

    PubMed

    Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek

    2016-02-01

    Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-`one-click' experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/.

  1. A unified framework for mesh refinement in random and physical space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jing; Stinis, Panos

    In recent work we have shown how an accurate reduced model can be utilized to perform mesh refinement in random space. That work relied on the explicit knowledge of an accurate reduced model which is used to monitor the transfer of activity from the large to the small scales of the solution. Since this is not always available, we present in the current work a framework which shares the merits and basic idea of the previous approach but does not require an explicit knowledge of a reduced model. Moreover, the current framework can be applied for refinement in both random and physical space. In this manuscript we focus on the application to random space mesh refinement. We study examples of increasing difficulty (from ordinary to partial differential equations) which demonstrate the efficiency and versatility of our approach. We also provide some results from the application of the new framework to physical space mesh refinement.

  2. A Goddard Multi-Scale Modeling System with Unified Physics

    NASA Technical Reports Server (NTRS)

    Tao, W.K.; Anderson, D.; Atlas, R.; Chern, J.; Houser, P.; Hou, A.; Lang, S.; Lau, W.; Peters-Lidard, C.; Kakar, R.; hide

    2008-01-01

    Numerical cloud resolving models (CRMs), which are based on the non-hydrostatic equations of motion, have been extensively applied to cloud-scale and mesoscale processes during the past four decades. Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that CRMs agree with observations in simulating various types of clouds and cloud systems from different geographic locations. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that Numerical Weather Prediction (NWP) and regional-scale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellites and field campaigns can provide initial conditions as well as validation through utilizing the Earth Satellite simulators. At Goddard, we have developed a multi-scale modeling system with unified physics. The modeling system consists of a coupled GCM-CRM (or MMF); a state-of-the-art Weather Research and Forecasting (WRF) model and a cloud-resolving model (Goddard Cumulus Ensemble model). In these models, the same microphysical schemes (2ICE, several 3ICE), radiation (including explicitly calculated cloud optical properties), and surface models are applied. In addition, a comprehensive unified Earth Satellite

  3. Establishment of a Physical Model for Solute Diffusion in Hydrogel: Understanding the Diffusion of Proteins in Poly(sulfobetaine methacrylate) Hydrogel.

    PubMed

    Zhou, Yuhang; Li, Junjie; Zhang, Ying; Dong, Dianyu; Zhang, Ershuai; Ji, Feng; Qin, Zhihui; Yang, Jun; Yao, Fanglian

    2017-02-02

    Prediction of the diffusion coefficient of solute, especially bioactive molecules, in hydrogel is significant in the biomedical field. Considering the randomness of solute movement in a hydrogel network, a physical diffusion RMP-1 model based on obstruction theory was established in this study. The physical properties of the solute and the polymer chain and their interactions were introduced into this model. Furthermore, models RMP-2 and RMP-3 were established to understand and predict the diffusion behaviors of proteins in hydrogel. In addition, zwitterionic poly(sulfobetaine methacrylate) (PSBMA) hydrogels with wide range and fine adjustable mesh sizes were prepared and used as efficient experimental platforms for model validation. The Flory characteristic ratios, Flory-Huggins parameter, mesh size, and polymer chain radii of PSBMA hydrogels were determined. The diffusion coefficients of the proteins (bovine serum albumin, immunoglobulin G, and lysozyme) in PSBMA hydrogels were studied by the fluorescence recovery after photobleaching technique. The measured diffusion coefficients were compared with the predictions of obstruction models, and it was found that our model presented an excellent predictive ability. Furthermore, the assessment of our model revealed that protein diffusion in PSBMA hydrogel would be affected by the physical properties of the protein and the PSBMA network. It was also confirmed that the diffusion behaviors of protein in zwitterionic hydrogels can be adjusted by changing the cross-linking density of the hydrogel and the ionic strength of the swelling medium. Our model is expected to possess accurate predictive ability for the diffusion coefficient of solute in hydrogel, which will be widely used in the biomedical field.
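
    For orientation, one widely cited obstruction-scaling expression (of the Amsden type, not the RMP models introduced above) relates the normalized diffusivity to the solute radius, the polymer chain radius and the hydrogel mesh size. The sketch below evaluates that generic expression with hypothetical values; it is only an assumption-laden illustration of obstruction theory, not the authors' model:

```python
import numpy as np

def obstruction_diffusivity_ratio(r_s, r_f, xi):
    """Normalized diffusivity D_g/D_0 from an Amsden-type obstruction-scaling
    expression: exp(-pi * ((r_s + r_f) / (xi + 2*r_f))**2).
    r_s: solute hydrodynamic radius, r_f: polymer chain radius, xi: mesh size
    (all in the same length units). Generic form quoted from the obstruction
    literature, not the RMP-1/2/3 models of the record above."""
    return np.exp(-np.pi * ((r_s + r_f) / (xi + 2.0 * r_f)) ** 2)

# Hypothetical values (nm): a BSA-sized solute in a gel with a 10 nm mesh
print(obstruction_diffusivity_ratio(r_s=3.5, r_f=0.5, xi=10.0))
```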

  4. Service Learning In Physics: The Consultant Model

    NASA Astrophysics Data System (ADS)

    Guerra, David

    2005-04-01

    Each year thousands of students across the country and across the academic disciplines participate in service learning. Unfortunately, with no clear model for integrating community service into the physics curriculum, there are very few physics students engaged in service learning. To overcome this shortfall, a consultant based service-learning program has been developed and successfully implemented at Saint Anselm College (SAC). As consultants, students in upper level physics courses apply their problem solving skills in the service of others. Most recently, SAC students provided technical and managerial support to a group from Girl's Inc., a national empowerment program for girls in high-risk, underserved areas, who were participating in the national FIRST Lego League Robotics competition. In their role as consultants the SAC students provided technical information through brainstorming sessions and helped the girls stay on task with project management techniques, like milestone charting. This consultant model of service-learning, provides technical support to groups that may not have a great deal of resources and gives physics students a way to improve their interpersonal skills, test their technical expertise, and better define the marketable skill set they are developing through the physics curriculum.

  5. Modelling of the Thermo-Physical and Physical Properties for Solidification of Al-Alloys

    NASA Astrophysics Data System (ADS)

    Saunders, N.; Li, X.; Miodownik, A. P.; Schillé, J.-P.

    The thermo-physical and physical properties of the liquid and solid phases are critical components in casting simulations. Such properties include the fraction solid transformed, enthalpy release, thermal conductivity, volume and density, all as a function of temperature. Due to the difficulty in experimentally determining such properties at solidification temperatures, little information exists for multi-component alloys. As part of the development of a new computer program for modelling of materials properties (JMatPro) extensive work has been carried out on the development of sound, physically based models for these properties. Wide-ranging results will be presented for Al-based alloys, which will include more detailed information concerning the density change of the liquid that intrinsically occurs during solidification due to its change in composition.

  6. Optimization of tissue physical parameters for accurate temperature estimation from finite-element simulation of radiofrequency ablation.

    PubMed

    Subramanian, Swetha; Mast, T Douglas

    2015-10-07

    Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature.
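
    A minimal sketch of the inverse-problem workflow described above, with two deliberate simplifications: a toy algebraic forward model stands in for the finite-element simulation, and nonlinear least squares stands in for the unscented Kalman filter. All measurement values, parameter bounds and model coefficients are hypothetical:

```python
import numpy as np
from scipy.optimize import least_squares

def toy_forward_model(params):
    """Toy stand-in for the finite-element RFA simulation: maps
    (specific heat c_p, thermal conductivity k, electrical conductivity sigma)
    to [ablation width, ablation depth, peak temperature rise].
    Purely illustrative; a real workflow would run the FE solver here."""
    c_p, k, sigma = params
    width = 0.02 * np.sqrt(sigma / 0.3)                       # m
    depth = 0.015 * np.sqrt(sigma / 0.3) * (0.5 / k) ** 0.25  # m
    dT = 60.0 * (sigma / 0.3) * (3600.0 / c_p)                # K
    return np.array([width, depth, dT])

measured = np.array([0.021, 0.016, 55.0])  # hypothetical measurements

def residual(params):
    return toy_forward_model(params) - measured

x0 = np.array([3600.0, 0.5, 0.3])  # initial guesses typical of soft tissue
fit = least_squares(residual, x0,
                    bounds=([1000.0, 0.1, 0.05], [5000.0, 1.0, 1.0]))
print("recovered (c_p, k, sigma):", fit.x)
```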

  7. Toward a mineral physics reference model for the Moon’s core

    PubMed Central

    Antonangeli, Daniele; Morard, Guillaume; Schmerr, Nicholas C.; Komabayashi, Tetsuya; Krisch, Michael; Fiquet, Guillaume; Fei, Yingwei

    2015-01-01

    The physical properties of iron (Fe) at high pressure and high temperature are crucial for understanding the chemical composition, evolution, and dynamics of planetary interiors. Indeed, the inner structures of the telluric planets all share a similar layered nature: a central metallic core composed mostly of iron, surrounded by a silicate mantle, and a thin, chemically differentiated crust. To date, most studies of iron have focused on the hexagonal close-packed (hcp, or ε) phase, as ε-Fe is likely stable across the pressure and temperature conditions of Earth's core. However, at the more moderate pressures characteristic of the cores of smaller planetary bodies, such as the Moon, Mercury, or Mars, iron takes on a face-centered cubic (fcc, or γ) structure. Here we present compressional and shear wave sound velocity and density measurements of γ-Fe at high pressures and high temperatures, which are needed to develop accurate seismic models of planetary interiors. Our results indicate that the seismic velocities proposed for the Moon's inner core by a recent reanalysis of Apollo seismic data are well below those of γ-Fe. Our dataset thus provides strong constraints to seismic models of the lunar core and cores of small telluric planets. This allows us to propose a direct compositional and velocity model for the Moon's core. PMID:25775531

  8. Are Physical Education Majors Models for Fitness?

    ERIC Educational Resources Information Center

    Kamla, James; Snyder, Ben; Tanner, Lori; Wash, Pamela

    2012-01-01

    The National Association of Sport and Physical Education (NASPE) (2002) has taken a firm stance on the importance of adequate fitness levels of physical education teachers stating that they have the responsibility to model an active lifestyle and to promote fitness behaviors. Since the NASPE declaration, national initiatives like Let's Move…

  9. Searching for Physics Beyond the Standard Model and Beyond

    NASA Astrophysics Data System (ADS)

    Abdullah, Mohammad

    The hierarchy problem, convolved with the various known puzzles in particle physics, grants us a great outlook of new physics soon to be discovered. We present multiple approaches to searching for physics beyond the standard model. First, two models with a minimal amount of theoretical guidance are analyzed using existing or simulated LHC data. Then, an extension of the Minimal Supersymmetric Standard Model (MSSM) is studied with an emphasis on the cosmological implications as well as the current and future sensitivity of colliders, direct detection and indirect detection experiments. Finally, a more complete model of the MSSM is presented through which we attempt to resolve tension with observations within the context of gauge mediated supersymmetry breaking.

  10. Extremely accurate sequential verification of RELAP5-3D

    DOE PAGES

    Mesina, George L.; Aumiller, David L.; Buschman, Francis X.

    2015-11-19

    Large computer programs like RELAP5-3D solve complex systems of governing, closure and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user-interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also provides tests that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, multiple cases in a single code execution, and modes of coupled/uncoupled operation. In conclusion, mathematical analyses of the adequacy of the checks used in the comparisons are provided.
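
    A minimal sketch of the sequential-verification idea, comparing plot-variable output from consecutive code versions within a round-off tolerance. The file names and tolerances are hypothetical, and this is not the actual RELAP5-3D tooling:

```python
import numpy as np

def sequential_verify(baseline_csv, candidate_csv, rel_tol=1e-12, abs_tol=1e-14):
    """Compare plot-variable output of a new code version against the previous
    version's baseline; any difference beyond round-off flags an unintended change.
    Illustrative sketch only."""
    base = np.loadtxt(baseline_csv, delimiter=",")
    cand = np.loadtxt(candidate_csv, delimiter=",")
    if base.shape != cand.shape:
        return False, "output shapes differ"
    ok = np.allclose(base, cand, rtol=rel_tol, atol=abs_tol)
    worst = np.max(np.abs(base - cand))
    return ok, f"max abs difference = {worst:.3e}"

# Usage (hypothetical file names):
# ok, msg = sequential_verify("v_prev_case1.csv", "v_next_case1.csv")
# print("PASS" if ok else "FAIL", msg)
```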

  11. Accurate and efficient modeling of the detector response in small animal multi-head PET systems.

    PubMed

    Cecchetti, Matteo; Moehrs, Sascha; Belcari, Nicola; Del Guerra, Alberto

    2013-10-07

    In fully three-dimensional PET imaging, iterative image reconstruction techniques usually outperform analytical algorithms in terms of image quality provided that an appropriate system model is used. In this study we concentrate on the calculation of an accurate system model for the YAP-(S)PET II small animal scanner, with the aim to obtain fully resolution- and contrast-recovered images at low levels of image roughness. For this purpose we calculate the system model by decomposing it into a product of five matrices: (1) a detector response component obtained via Monte Carlo simulations, (2) a geometric component which describes the scanner geometry and which is calculated via a multi-ray method, (3) a detector normalization component derived from the acquisition of a planar source, (4) a photon attenuation component calculated from x-ray computed tomography data, and finally, (5) a positron range component is formally included. This system model factorization allows the optimization of each component in terms of computation time, storage requirements and accuracy. The main contribution of this work is a new, efficient way to calculate the detector response component for rotating, planar detectors, that consists of a GEANT4 based simulation of a subset of lines of flight (LOFs) for a single detector head whereas the missing LOFs are obtained by using intrinsic detector symmetries. Additionally, we introduce and analyze a probability threshold for matrix elements of the detector component to optimize the trade-off between the matrix size in terms of non-zero elements and the resulting quality of the reconstructed images. In order to evaluate our proposed system model we reconstructed various images of objects, acquired according to the NEMA NU 4-2008 standard, and we compared them to the images reconstructed with two other system models: a model that does not include any detector response component and a model that approximates analytically the depth of interaction
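
    A hedged sketch of how a factored system model can be applied one component at a time instead of forming the full matrix. The five sparse factors below are random placeholders for the components listed above, their ordering is an assumption, and the dimensions are tiny and hypothetical:

```python
import numpy as np
import scipy.sparse as sp

# Illustrative dimensions (hypothetical): n_lors lines of flight, n_vox voxels
n_lors, n_vox = 200, 100
rng = np.random.default_rng(0)

# Placeholder factors standing in for the five components described above:
# detector response (D), normalization (N), attenuation (A), geometry (G), positron range (R)
D = sp.random(n_lors, n_lors, density=0.02, random_state=0, format="csr")
N = sp.diags(rng.uniform(0.8, 1.2, n_lors))   # per-line normalization factors
A = sp.diags(rng.uniform(0.5, 1.0, n_lors))   # attenuation factors
G = sp.random(n_lors, n_vox, density=0.05, random_state=1, format="csr")
R = sp.random(n_vox, n_vox, density=0.03, random_state=2, format="csr")

def forward_project(image):
    """Apply the factored system model y = D N A G R x one factor at a time,
    which keeps per-factor storage small compared with the dense product."""
    x = R @ image
    x = G @ x
    x = A @ x
    x = N @ x
    return D @ x

expected_counts = forward_project(rng.uniform(0.0, 1.0, n_vox))
print(expected_counts.shape)  # (200,)
```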

  12. Accurate and efficient modeling of the detector response in small animal multi-head PET systems

    NASA Astrophysics Data System (ADS)

    Cecchetti, Matteo; Moehrs, Sascha; Belcari, Nicola; Del Guerra, Alberto

    2013-10-01

    In fully three-dimensional PET imaging, iterative image reconstruction techniques usually outperform analytical algorithms in terms of image quality provided that an appropriate system model is used. In this study we concentrate on the calculation of an accurate system model for the YAP-(S)PET II small animal scanner, with the aim to obtain fully resolution- and contrast-recovered images at low levels of image roughness. For this purpose we calculate the system model by decomposing it into a product of five matrices: (1) a detector response component obtained via Monte Carlo simulations, (2) a geometric component which describes the scanner geometry and which is calculated via a multi-ray method, (3) a detector normalization component derived from the acquisition of a planar source, (4) a photon attenuation component calculated from x-ray computed tomography data, and finally, (5) a positron range component is formally included. This system model factorization allows the optimization of each component in terms of computation time, storage requirements and accuracy. The main contribution of this work is a new, efficient way to calculate the detector response component for rotating, planar detectors, that consists of a GEANT4 based simulation of a subset of lines of flight (LOFs) for a single detector head whereas the missing LOFs are obtained by using intrinsic detector symmetries. Additionally, we introduce and analyze a probability threshold for matrix elements of the detector component to optimize the trade-off between the matrix size in terms of non-zero elements and the resulting quality of the reconstructed images. In order to evaluate our proposed system model we reconstructed various images of objects, acquired according to the NEMA NU 4-2008 standard, and we compared them to the images reconstructed with two other system models: a model that does not include any detector response component and a model that approximates analytically the depth of interaction

  13. Physical and mathematical cochlear models

    NASA Astrophysics Data System (ADS)

    Lim, Kian-Meng

    2000-10-01

    The cochlea is an intricate organ in the inner ear responsible for our hearing. Besides acting as a transducer to convert mechanical sound vibrations to electrical neural signals, the cochlea also amplifies and separates the sound signal into its spectral components for further processing in the brain. It operates over a broad band of frequency and a huge dynamic range of input while maintaining a low power consumption. The present research takes the approach of building cochlear models to study and understand the underlying mechanics involved in the functioning of the cochlea. Both physical and mathematical models of the cochlea are constructed. The physical model is a first attempt to build a life-sized replica of the human cochlea using advanced micro-machining techniques. The model takes a modular design, with a removable silicon-wafer based partition membrane encapsulated in a plastic fluid chamber. Preliminary measurements in the model are obtained and they compare roughly with simulation results. Parametric studies on the design parameters of the model lead to an improved design of the model. The studies also revealed that the width and orthotropy of the basilar membrane in the cochlea have significant effects on the sharply tuned responses observed in the biological cochlea. The mathematical model is a physiologically based model that includes three-dimensional viscous fluid flow and a tapered partition with variable properties along its length. A hybrid asymptotic and numerical method provides a uniformly valid and efficient solution to the short and long wave regions in the model. Both linear and non-linear activity are included in the model to simulate the active cochlea. The mathematical model has successfully reproduced many features of the response in the biological cochlea, as observed in experimental measurements performed on animals. These features include sharply tuned frequency responses, significant amplification with inclusion of activity

  14. A model teaching session for the hypothesis-driven physical examination.

    PubMed

    Nishigori, Hiroshi; Masuda, Kozo; Kikukawa, Makoto; Kawashima, Atsushi; Yudkowsky, Rachel; Bordage, Georges; Otaki, Junji

    2011-01-01

    The physical examination is an essential clinical competence for all physicians. Most medical schools have students who learn the physical examination maneuvers using a head-to-toe approach. However, this promotes a rote approach to the physical exam, and it is not uncommon for students later on to fail to appreciate the meaning of abnormal findings and their contribution to the diagnostic reasoning process. The purpose of the project was to develop a model teaching session for the hypothesis-driven physical examination (HDPE) approach in which students could practice the physical examination in the context of diagnostic reasoning. We used an action research methodology to create this HDPE model by developing a teaching session, implementing it over 100 times with approximately 700 students, conducting internal reflection and external evaluations, and making adjustments as needed. A model nine-step HDPE teaching session was developed, including: (1) orientation, (2) anticipation, (3) preparation, (4) role play, (5) discussion-1, (6) answers, (7) discussion-2, (8) demonstration and (9) reflection. A structured model HDPE teaching session and tutor guide were developed into a workable instructional intervention. Faculty members are invited to teach the physical examination using this model.

  15. Physical Orbit for λ Virginis and a Test of Stellar Evolution Models

    NASA Astrophysics Data System (ADS)

    Zhao, M.; Monnier, J. D.; Torres, G.; Boden, A. F.; Claret, A.; Millan-Gabet, R.; Pedretti, E.; Berger, J.-P.; Traub, W. A.; Schloerb, F. P.; Carleton, N. P.; Kern, P.; Lacasse, M. G.; Malbet, F.; Perraut, K.

    2007-04-01

    The star λ Virginis is a well-known double-lined spectroscopic Am binary with the interesting property that both stars are very similar in abundance but one is sharp-lined and the other is broad-lined. We present combined interferometric and spectroscopic studies of λ Vir. The small scale of the λ Vir orbit (~20 mas) is well resolved by the Infrared Optical Telescope Array (IOTA), allowing us to determine its elements, as well as the physical properties of the components, to high accuracy. The masses of the two stars are determined to be 1.897 and 1.721 Msolar, with 0.7% and 1.5% errors, respectively, and the two stars are found to have the same temperature of 8280+/-200 K. The accurately determined properties of λ Vir allow comparisons between observations and current stellar evolution models, and reasonable matches are found. The best-fit stellar model gives λ Vir a subsolar metallicity of Z=0.0097 and an age of 935 Myr. The orbital and physical parameters of λ Vir also allow us to study its tidal evolution timescales and status. Although atomic diffusion is currently considered to be the most plausible cause of the Am phenomenon, the issue is still being actively debated in the literature. With the present study of the properties and evolutionary status of λ Vir, this system is an ideal candidate for further detailed abundance analyses that might shed more light on the source of the chemical anomalies in these A stars.

  16. A physically based compact I-V model for monolayer TMDC channel MOSFET and DMFET biosensor.

    PubMed

    Rahman, Ehsanur; Shadman, Abir; Ahmed, Imtiaz; Khan, Saeed Uz Zaman; Khosru, Quazi D M

    2018-06-08

    In this work, a compact transport model has been developed for monolayer transition metal dichalcogenide (TMDC) channel MOSFET. The analytical model solves the Poisson's equation for the inversion charge density to get the electrostatic potential in the channel. Current is then calculated by solving the drift-diffusion equation. The model makes gradual channel approximation to simplify the solution procedure. The appropriate density of states obtained from the first principle density functional theory simulation has been considered to keep the model physically accurate for monolayer TMDC channel FET. The outcome of the model has been benchmarked against both experimental and numerical quantum simulation results with the help of a few fitting parameters. Using the compact model, detailed output and transfer characteristics of monolayer WSe2 FET have been studied, and various performance parameters have been determined. The study confirms excellent ON and OFF state performances of monolayer WSe2 FET which could be viable for the next generation high-speed, low power applications. Also, the proposed model has been extended to study the operation of a biosensor. A monolayer MoS2 channel based dielectric modulated FET is investigated using the compact model for detection of a biomolecule in a dry environment.

  17. A physically based compact I–V model for monolayer TMDC channel MOSFET and DMFET biosensor

    NASA Astrophysics Data System (ADS)

    Rahman, Ehsanur; Shadman, Abir; Ahmed, Imtiaz; Zaman Khan, Saeed Uz; Khosru, Quazi D. M.

    2018-06-01

    In this work, a compact transport model has been developed for monolayer transition metal dichalcogenide (TMDC) channel MOSFET. The analytical model solves the Poisson’s equation for the inversion charge density to get the electrostatic potential in the channel. Current is then calculated by solving the drift–diffusion equation. The model makes gradual channel approximation to simplify the solution procedure. The appropriate density of states obtained from the first principle density functional theory simulation has been considered to keep the model physically accurate for monolayer TMDC channel FET. The outcome of the model has been benchmarked against both experimental and numerical quantum simulation results with the help of a few fitting parameters. Using the compact model, detailed output and transfer characteristics of monolayer WSe2 FET have been studied, and various performance parameters have been determined. The study confirms excellent ON and OFF state performances of monolayer WSe2 FET which could be viable for the next generation high-speed, low power applications. Also, the proposed model has been extended to study the operation of a biosensor. A monolayer MoS2 channel based dielectric modulated FET is investigated using the compact model for detection of a biomolecule in a dry environment.

  18. Optimization of the GBMV2 implicit solvent force field for accurate simulation of protein conformational equilibria.

    PubMed

    Lee, Kuo Hao; Chen, Jianhan

    2017-06-15

    Accurate treatment of solvent environment is critical for reliable simulations of protein conformational equilibria. Implicit treatment of solvation, such as using the generalized Born (GB) class of models, arguably provides an optimal balance between computational efficiency and physical accuracy. Yet, GB models are frequently plagued by a tendency to generate overly compact structures. The physical origins of this drawback are relatively well understood, and the key to a balanced implicit solvent protein force field is careful optimization of physical parameters to achieve a sufficient level of cancellation of errors. The latter has been hampered by the difficulty of generating converged conformational ensembles of non-trivial model proteins using the popular replica exchange sampling technique. Here, we leverage improved sampling efficiency of a newly developed multi-scale enhanced sampling technique to re-optimize the generalized-Born with molecular volume (GBMV2) implicit solvent model with the CHARMM36 protein force field. Recursive optimization of key GBMV2 parameters (such as input radii) and protein torsion profiles (via the CMAP torsion cross terms) has led to a more balanced GBMV2 protein force field that recapitulates the structures and stabilities of both helical and β-hairpin model peptides. Importantly, this force field appears to be free of the over-compaction bias, and can generate structural ensembles of several intrinsically disordered proteins of various lengths that seem highly consistent with available experimental data. © 2017 Wiley Periodicals, Inc.

  19. Multi-fidelity machine learning models for accurate bandgap predictions of solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab

    Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. Using a set of 600 elpasolite compounds as an example dataset, and semi-local and hybrid exchange-correlation functionals within density functional theory as two levels of fidelity, we demonstrate the excellent learning performance of the method against actual high fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high throughput property predictions in a significant way.

  20. Multi-fidelity machine learning models for accurate bandgap predictions of solids

    DOE PAGES

    Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab

    2016-12-28

    Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. Using a set of 600 elpasolite compounds as an example dataset, and semi-local and hybrid exchange-correlation functionals within density functional theory as two levels of fidelity, we demonstrate the excellent learning performance of the method against actual high fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high throughput property predictions in a significant way.
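
    A hedged sketch of a simple two-level co-kriging scheme in the Kennedy-O'Hagan spirit (low-fidelity Gaussian process, a scaling factor, and a discrepancy Gaussian process), using synthetic one-dimensional data rather than the elpasolite dataset; it illustrates the general idea, not the authors' implementation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Synthetic 1-D stand-ins for a material descriptor vs. bandgap at two fidelities
X_lo = rng.uniform(0.0, 1.0, (60, 1))
y_lo = np.sin(6.0 * X_lo[:, 0]) + 0.3 * X_lo[:, 0]                 # "semi-local" level
X_hi = rng.uniform(0.0, 1.0, (12, 1))
y_hi = 1.2 * (np.sin(6.0 * X_hi[:, 0]) + 0.3 * X_hi[:, 0]) + 0.2   # "hybrid" level

kernel = ConstantKernel() * RBF(length_scale=0.2)

# Level 1: GP on the cheap, plentiful low-fidelity data
gp_lo = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_lo, y_lo)

# Level 2: scale the low-fidelity prediction by rho and fit a GP to the residual
# (discrepancy), a simplified Kennedy-O'Hagan recursion.
mu_lo_at_hi = gp_lo.predict(X_hi)
rho = np.dot(mu_lo_at_hi, y_hi) / np.dot(mu_lo_at_hi, mu_lo_at_hi)
gp_delta = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(
    X_hi, y_hi - rho * mu_lo_at_hi)

def predict_high_fidelity(X):
    """Low-cost prediction at the highest fidelity with a simple uncertainty estimate."""
    mu_lo, sd_lo = gp_lo.predict(X, return_std=True)
    mu_d, sd_d = gp_delta.predict(X, return_std=True)
    return rho * mu_lo + mu_d, np.sqrt((rho * sd_lo) ** 2 + sd_d ** 2)

mean, std = predict_high_fidelity(np.array([[0.5]]))
print(mean, std)
```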

  1. The Role of Various Curriculum Models on Physical Activity Levels

    ERIC Educational Resources Information Center

    Culpepper, Dean O.; Tarr, Susan J.; Killion, Lorraine E.

    2011-01-01

    Researchers have suggested that physical education curricula can be highly effective in increasing physical activity levels at school (Sallis & Owen, 1999). The purpose of this study was to investigate the impact of various curriculum models on physical activity. Total steps were measured on 1,111 subjects and three curriculum models were studied…

  2. Technical Manual for the SAM Physical Trough Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, M. J.; Gilman, P.

    2011-06-01

    NREL, in conjunction with Sandia National Lab and the U.S. Department of Energy, developed the System Advisor Model (SAM) analysis tool for renewable energy system performance and economic analysis. This paper documents the technical background and engineering formulation for one of the two parabolic trough system models in SAM. The Physical Trough model calculates performance relationships based on physical first principles where possible, allowing the modeler to predict electricity production for a wider range of component geometries than is possible in the Empirical Trough model. This document describes the major parabolic trough plant subsystems in detail, including the solar field, power block, thermal storage, piping, auxiliary heating, and control systems. This model makes use of both existing subsystem performance modeling approaches and new approaches developed specifically for SAM.

  3. Subthreshold SPICE Model Optimization

    NASA Astrophysics Data System (ADS)

    Lum, Gregory; Au, Henry; Neff, Joseph; Bozeman, Eric; Kamin, Nick; Shimabukuro, Randy

    2011-04-01

    The first step in integrated circuit design is the simulation of said design in software to verify proper functionality and design requirements. Properties of the process are provided by fabrication foundries in the form of SPICE models. These SPICE models contain the electrical data and physical properties of the basic circuit elements. A limitation of these models is that the data collected by the foundry only accurately models the saturation region. This is fine for most users, but when operating devices in the subthreshold region the models are inadequate for accurate simulation results. This is why optimizing the current SPICE models to characterize the subthreshold region is so important. In order to accurately simulate this region of operation, MOSFETs of varying widths and lengths are fabricated and the electrical test data are collected. From the data collected, the parameters of the model files are optimized through parameter extraction rather than curve fitting. With the completed optimized models, the circuit designer is able to simulate circuit designs for the subthreshold region accurately.
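
    For reference, the subthreshold region follows the textbook exponential drain-current expression, and parameters such as the subthreshold swing and ideality factor can be extracted directly from the slope of log10(I_D) versus V_GS. The sketch below uses synthetic data and generic parameter values; it is not a foundry SPICE model:

```python
import numpy as np

V_T = 0.0259  # thermal voltage at ~300 K (V)

def subthreshold_id(vgs, i0=2.0e-7, vth=0.45, n=1.4, vds=0.1):
    """Textbook subthreshold drain current (A); generic expression used here only
    to generate a hypothetical measured sweep, not a foundry model card."""
    return i0 * np.exp((vgs - vth) / (n * V_T)) * (1.0 - np.exp(-vds / V_T))

# Hypothetical measured I_D-V_GS sweep below threshold
vgs = np.linspace(0.05, 0.35, 13)
id_meas = subthreshold_id(vgs)

# Direct extraction: subthreshold swing from the slope of log10(I_D) vs V_GS,
# and the ideality factor n from SS = n * V_T * ln(10)
slope = np.polyfit(vgs, np.log10(id_meas), 1)[0]   # decades per volt
ss_mv_per_dec = 1000.0 / slope                      # mV/decade
n_extracted = 1.0 / (slope * V_T * np.log(10.0))
print(f"SS = {ss_mv_per_dec:.1f} mV/dec, n = {n_extracted:.2f}")
```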

  4. An unexpected way forward: towards a more accurate and rigorous protein-protein binding affinity scoring function by eliminating terms from an already simple scoring function.

    PubMed

    Swanson, Jon; Audie, Joseph

    2018-01-01

    A fundamental and unsolved problem in biophysical chemistry is the development of a computationally simple, physically intuitive, and generally applicable method for accurately predicting and physically explaining protein-protein binding affinities from protein-protein interaction (PPI) complex coordinates. Here, we propose that the simplification of a previously described six-term PPI scoring function to a four-term function results in a simple expression of all physically and statistically meaningful terms that can be used to accurately predict and explain binding affinities for a well-defined subset of PPIs that are characterized by (1) crystallographic coordinates, (2) rigid-body association, (3) normal interface size, hydrophobicity and hydrophilicity, and (4) high quality experimental binding affinity measurements. We further propose that the four-term scoring function could be regarded as a core expression for future development into a more general PPI scoring function. Our work has clear implications for PPI modeling and structure-based drug design.

  5. Filtering Raw Terrestrial Laser Scanning Data for Efficient and Accurate Use in Geomorphologic Modeling

    NASA Astrophysics Data System (ADS)

    Gleason, M. J.; Pitlick, J.; Buttenfield, B. P.

    2011-12-01

    Terrestrial laser scanning (TLS) represents a new and particularly effective remote sensing technique for investigating geomorphologic processes. Unfortunately, TLS data are commonly characterized by extremely large volume, heterogeneous point distribution, and erroneous measurements, raising challenges for applied researchers. To facilitate efficient and accurate use of TLS in geomorphology, and to improve accessibility for TLS processing in commercial software environments, we are developing a filtering method for raw TLS data to: eliminate data redundancy; produce a more uniformly spaced dataset; remove erroneous measurements; and maintain the ability of the TLS dataset to accurately model terrain. Our method conducts local aggregation of raw TLS data using a 3-D search algorithm based on the geometrical expression of expected random errors in the data. This approach accounts for the estimated accuracy and precision limitations of the instruments and procedures used in data collection, thereby allowing for identification and removal of potential erroneous measurements prior to data aggregation. Initial tests of the proposed technique on a sample TLS point cloud required a modest processing time of approximately 100 minutes to reduce dataset volume over 90 percent (from 12,380,074 to 1,145,705 points). Preliminary analysis of the filtered point cloud revealed substantial improvement in homogeneity of point distribution and minimal degradation of derived terrain models. We will test the method on two independent TLS datasets collected in consecutive years along a non-vegetated reach of the North Fork Toutle River in Washington. We will evaluate the tool using various quantitative, qualitative, and statistical methods. The crux of this evaluation will include a bootstrapping analysis to test the ability of the filtered datasets to model the terrain at roughly the same accuracy as the raw datasets.
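
    A simplified sketch of point-cloud aggregation using a regular voxel grid; the study above aggregates within a 3-D search based on the expected error geometry, which this stand-in does not reproduce, and the point data are synthetic:

```python
import numpy as np

def voxel_thin(points, cell_size):
    """Aggregate a point cloud by averaging all points falling in each cubic cell.
    A simplified stand-in for the error-geometry-based 3-D aggregation in the record."""
    keys = np.floor(points / cell_size).astype(np.int64)
    order = np.lexsort((keys[:, 2], keys[:, 1], keys[:, 0]))
    keys, points = keys[order], points[order]
    new_group = np.any(np.diff(keys, axis=0) != 0, axis=1)
    starts = np.concatenate(([0], np.nonzero(new_group)[0] + 1, [len(points)]))
    return np.array([points[a:b].mean(axis=0) for a, b in zip(starts[:-1], starts[1:])])

# Hypothetical raw scan: 100k points over a 10 m x 10 m x 2 m patch
rng = np.random.default_rng(0)
raw = rng.uniform([0, 0, 0], [10, 10, 2], size=(100_000, 3))
thinned = voxel_thin(raw, cell_size=0.25)
print(len(raw), "->", len(thinned), "points")
```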

  6. Developments toward more accurate molecular modeling of liquids

    NASA Astrophysics Data System (ADS)

    Evans, Tom J.

    2000-12-01

    The general goal of this research has been to improve upon existing combined quantum mechanics/molecular mechanics (QM/MM) methodologies. Error weighting functions have been introduced into the perturbative Monte Carlo (PMC) method for use with QM/MM. The PMC approach, introduced earlier, provides a means to reduce the number of full self-consistent field (SCF) calculations in simulations using the QM/MM potential by evoking perturbation theory to calculate energy changes due to displacements of a MM molecule. This will allow the ab initio QM/MM approach to be applied to systems that require more advanced, computationally demanding treatments of the QM and/or MM regions. Efforts have also been made to improve the accuracy of the representation of the solvent molecules usually represented by MM force fields. Results from an investigation of the applicability of the embedded density functional theory (EDFT) for studying physical properties of solutions will be presented. In this approach, the solute wavefunction is solved self-consistently in the field of individually frozen electron-density solvent molecules. To test its accuracy, the potential curves for interactions between Li+, Cl- and H2O with a single frozen-density H2O molecule in different orientations have been calculated. With the development of the more sophisticated effective fragment potential (EFP) representation of solvent molecules, a QM/EFP technique was created. This hybrid QM/EFP approach was used to investigate the solvation of Li+ by small clusters of water, as a test case for larger ionic clusters. The EFP appears to provide an accurate representation of the strong interactions that exist between Li+ and H2O. With the QM/EFP methodology comes an increased computational expense, resulting in an even greater need to rely on the PMC approach. However, while including the PMC into the hybrid QM/EFP technique, it was discovered that the previous implementation of the PMC was done incorrectly

  7. Lung ultrasound accurately detects pneumothorax in a preterm newborn lamb model.

    PubMed

    Blank, Douglas A; Hooper, Stuart B; Binder-Heschl, Corinna; Kluckow, Martin; Gill, Andrew W; LaRosa, Domenic A; Inocencio, Ishmael M; Moxham, Alison; Rodgers, Karyn; Zahra, Valerie A; Davis, Peter G; Polglase, Graeme R

    2016-06-01

    Pneumothorax is a common emergency affecting extremely preterm infants. In adult studies, lung ultrasound has performed better than chest x-ray in the diagnosis of pneumothorax. The purpose of this study was to determine the efficacy of lung ultrasound (LUS) examination to detect pneumothorax using a preterm animal model. This was a prospective, observational study using newborn Border-Leicester lambs at gestational age = 126 days (equivalent to gestational age = 26 weeks in humans) receiving mechanical ventilation from birth to 2 h of life. At the conclusion of the experiment, LUS was performed, the lambs were then euthanised and a post-mortem exam was immediately performed. We used previously published ultrasound techniques to identify pneumothorax. Test characteristics of LUS to detect pneumothorax were calculated, using the post-mortem exam as the 'gold standard' test. Nine lambs (18 lungs) were examined. Four lambs had a unilateral pneumothorax, all of which were identified by LUS with no false positives. This was the first study to use post-mortem findings to test the efficacy of LUS to detect pneumothorax in a newborn animal model. Lung ultrasound accurately detected pneumothorax, verified by post-mortem exam, in premature, newborn lambs. © 2016 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).

  8. Investigation of model-based physical design restrictions (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Lucas, Kevin; Baron, Stanislas; Belledent, Jerome; Boone, Robert; Borjon, Amandine; Couderc, Christophe; Patterson, Kyle; Riviere-Cazaux, Lionel; Rody, Yves; Sundermann, Frank; Toublan, Olivier; Trouiller, Yorick; Urbani, Jean-Christophe; Wimmer, Karl

    2005-05-01

    As lithography and other patterning processes become more complex and more non-linear with each generation, the task of physical design rules necessarily increases in complexity also. The goal of the physical design rules is to define the boundary between the physical layout structures which will yield well from those which will not. This is essentially a rule-based pre-silicon guarantee of layout correctness. However the rapid increase in design rule requirement complexity has created logistical problems for both the design and process functions. Therefore, similar to the semiconductor industry's transition from rule-based to model-based optical proximity correction (OPC) due to increased patterning complexity, opportunities for improving physical design restrictions by implementing model-based physical design methods are evident. In this paper we analyze the possible need and applications for model-based physical design restrictions (MBPDR). We first analyze the traditional design rule evolution, development and usage methodologies for semiconductor manufacturers. Next we discuss examples of specific design rule challenges requiring new solution methods in the patterning regime of low K1 lithography and highly complex RET. We then evaluate possible working strategies for MBPDR in the process development and product design flows, including examples of recent model-based pre-silicon verification techniques. Finally we summarize with a proposed flow and key considerations for MBPDR implementation.

  9. Physical plausibility of cold star models satisfying Karmarkar conditions

    NASA Astrophysics Data System (ADS)

    Fuloria, Pratibha; Pant, Neeraj

    2017-11-01

    In the present article, we have obtained a new well behaved solution to Einstein's field equations in the background of Karmarkar spacetime. The solution has been used for stellar modelling consistent with current observational evidence. All the physical parameters are well behaved inside the stellar interior and our model satisfies all the required conditions to be physically realizable. The obtained compactness parameter is within the Buchdahl limit, i.e. 2M/R ≤ 8/9. The TOV equation is well maintained inside the fluid spheres. The stability of the models has been further confirmed by using Herrera's cracking method. The models proposed in the present work are compatible with observational data of the compact objects 4U1608-52 and PSRJ1903+327. The necessary graphs have been shown to authenticate the physical viability of our models.

  10. Accurate 3d Textured Models of Vessels for the Improvement of the Educational Tools of a Museum

    NASA Astrophysics Data System (ADS)

    Soile, S.; Adam, K.; Ioannidis, C.; Georgopoulos, A.

    2013-02-01

    Besides the demonstration of the findings, modern museums organize educational programs which aim at experience and knowledge sharing combined with entertainment rather than pure learning. Toward that effort, 2D and 3D digital representations are gradually replacing the traditional recording of the findings through photos or drawings. The present paper refers to a project that aims to create 3D textured models of two lekythoi that are exhibited in the National Archaeological Museum of Athens in Greece; on the surfaces of these lekythoi scenes of the adventures of Odysseus are depicted. The project is expected to support the production of an educational movie and some other relevant interactive educational programs for the museum. The creation of accurate developments of the paintings and of accurate 3D models is the basis for the visualization of the adventures of the mythical hero. The data collection was made by using a structured light scanner consisting of two machine vision cameras that are used for the determination of the geometry of the object, a high resolution camera for the recording of the texture, and a DLP projector. The creation of the final accurate 3D textured model is a complicated and laborious procedure which includes the collection of geometric data, the creation of the surface, the noise filtering, the merging of individual surfaces, the creation of a c-mesh, the creation of the UV map, the provision of the texture and, finally, the general processing of the 3D textured object. For a better result a combination of commercial and in-house software made for the automation of various steps of the procedure was used. The results derived from the above procedure were especially satisfactory in terms of accuracy and quality of the model. However, the procedure proved to be time consuming, while the use of various software packages presumes the services of a specialist.

  11. Spin-foam models and the physical scalar product

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alesci, Emanuele; Centre de Physique Theorique de Luminy, Universite de la Mediterranee, F-13288 Marseille; Noui, Karim

    2008-11-15

    This paper aims at clarifying the link between loop quantum gravity and spin-foam models in four dimensions. Starting from the canonical framework, we construct an operator P acting on the space of cylindrical functions Cyl(γ), where γ is the four-simplex graph, such that its matrix elements are, up to some normalization factors, the vertex amplitude of spin-foam models. The spin-foam models we are considering are the topological model, the Barrett-Crane model, and the Engle-Pereira-Rovelli model. If one of these spin-foam models provides a covariant quantization of gravity, then the associated operator P should be the so-called 'projector' into physical states and its matrix elements should give the physical scalar product. We discuss the possibility to extend the action of P to any cylindrical functions on the space manifold.

  12. The limitations of mathematical modeling in high school physics education

    NASA Astrophysics Data System (ADS)

    Forjan, Matej

    The theme of the doctoral dissertation falls within the scope of didactics of physics. A theoretical analysis of the key constraints that occur in the transfer of mathematical modeling of dynamical systems into the field of physics education in secondary schools is presented. In an effort to explore the extent to which current physics education promotes understanding of models and modeling, we analyze the curriculum and the three most commonly used textbooks for high school physics. We focus primarily on the representation of the various stages of modeling in the solved tasks in textbooks and on the presentation of certain simplifications and idealizations which are frequently used in high school physics. We show that one of the textbooks in most cases fairly and reasonably presents the simplifications, while the other two do not explain half of the analyzed simplifications. It also turns out that the vast majority of solved tasks in all the textbooks do not explicitly represent model assumptions, from which we can conclude that in high school physics the students do not sufficiently develop a sense of simplification and idealization, which is a key part of the conceptual phase of modeling. For the introduction of modeling of dynamical systems the knowledge of students is also important; therefore, we performed an empirical study on the extent to which high school students are able to understand the time evolution of some dynamical systems in the field of physics. The research results show that the students have a very weak understanding of the dynamics of systems in which feedbacks are present. This is independent of the year or final grade in physics and mathematics. When modeling dynamical systems in high school physics we also encounter limitations which result from the lack of mathematical knowledge of students, because they do not know how to analytically solve differential equations. We show that when dealing with one-dimensional dynamical systems

  13. A Simple Iterative Model Accurately Captures Complex Trapline Formation by Bumblebees Across Spatial Scales and Flower Arrangements

    PubMed Central

    Reynolds, Andrew M.; Lihoreau, Mathieu; Chittka, Lars

    2013-01-01

    Pollinating bees develop foraging circuits (traplines) to visit multiple flowers in a manner that minimizes overall travel distance, a task analogous to the travelling salesman problem. We report on an in-depth exploration of an iterative improvement heuristic model of bumblebee traplining previously found to accurately replicate the establishment of stable routes by bees between flowers distributed over several hectares. The critical test for a model is its predictive power for empirical data for which the model has not been specifically developed, and here the model is shown to be consistent with observations from different research groups made at several spatial scales and using multiple configurations of flowers. We refine the model to account for the spatial search strategy of bees exploring their environment, and test several previously unexplored predictions. We find that the model predicts accurately 1) the increasing propensity of bees to optimize their foraging routes with increasing spatial scale; 2) that bees cannot establish stable optimal traplines for all spatial configurations of rewarding flowers; 3) the observed trade-off between travel distance and prioritization of high-reward sites (with a slight modification of the model); 4) the temporal pattern with which bees acquire approximate solutions to travelling salesman-like problems over several dozen foraging bouts; 5) the instability of visitation schedules in some spatial configurations of flowers; 6) the observation that in some flower arrays, bees' visitation schedules are highly individually different; 7) the searching behaviour that leads to efficient location of flowers and routes between them. Our model constitutes a robust theoretical platform to generate novel hypotheses and refine our understanding about how small-brained insects develop a representation of space and use it to navigate in complex and dynamic environments. PMID:23505353
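
    A hedged, generic sketch of an iterative route-improvement heuristic (random segment reversal kept only when it shortens the circuit). The published bee model differs in detail, notably in how route segments are reinforced probabilistically, and the flower coordinates below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def route_length(order, sites, nest=np.zeros(2)):
    """Total travel distance for a nest -> sites -> nest circuit."""
    path = np.vstack([nest, sites[order], nest])
    return np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))

# Hypothetical flower positions (metres) in a patch
flowers = rng.uniform(0.0, 50.0, size=(8, 2))
order = rng.permutation(len(flowers))      # first foraging bout: random sequence
best = route_length(order, flowers)

for bout in range(200):                    # successive foraging bouts
    trial = order.copy()
    i, j = sorted(rng.choice(len(flowers), size=2, replace=False))
    trial[i:j + 1] = trial[i:j + 1][::-1]  # try reversing a segment (2-opt-style move)
    length = route_length(trial, flowers)
    if length < best:                      # keep changes that shorten the circuit
        order, best = trial, length

print("stabilised trapline:", order, f"length = {best:.1f} m")
```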

  14. Accurate single-scattering simulation of ice cloud using the invariant-imbedding T-matrix method and the physical-geometric optics method

    NASA Astrophysics Data System (ADS)

    Sun, B.; Yang, P.; Kattawar, G. W.; Zhang, X.

    2017-12-01

    The ice cloud single-scattering properties can be accurately simulated using the invariant-imbedding T-matrix method (IITM) and the physical-geometric optics method (PGOM). The IITM has been parallelized using the Message Passing Interface (MPI) method to remove the memory limitation so that the IITM can be used to obtain the single-scattering properties of ice clouds for sizes in the geometric optics regime. Furthermore, the results associated with random orientations can be analytically achieved once the T-matrix is given. The PGOM is also parallelized in conjunction with random orientations. The single-scattering properties of a hexagonal prism with height 400 (in units of λ/2π, where λ is the incident wavelength) and an aspect ratio of 1 (defined as the height divided by twice the bottom side length) are given by using the parallelized IITM and compared to the counterparts using the parallelized PGOM. The two results are in close agreement. Furthermore, the integrated single-scattering properties, including the asymmetry factor, the extinction cross-section, and the scattering cross-section, are given over the complete size range. The present results show a smooth transition from the exact IITM solution to the approximate PGOM result. Because the calculation of the IITM method has reached the geometric regime, the IITM and the PGOM can be efficiently employed to accurately compute the single-scattering properties of ice cloud in a wide spectral range.
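
    For reference, one of the integrated quantities mentioned above, the asymmetry factor, is the mean cosine of the scattering angle weighted by the phase function. The sketch below evaluates it numerically for a Henyey-Greenstein phase function as a hypothetical example, with the normalization handled explicitly so the phase-function convention does not matter:

```python
import numpy as np
from scipy.integrate import trapezoid

def asymmetry_factor(theta, phase):
    """Asymmetry factor g = <cos(theta)> of a scattering phase function sampled on a
    grid of scattering angles theta (radians)."""
    w = phase * np.sin(theta)
    return trapezoid(w * np.cos(theta), theta) / trapezoid(w, theta)

# Hypothetical forward-peaked phase function (Henyey-Greenstein with g0 = 0.8)
theta = np.linspace(0.0, np.pi, 2001)
g0 = 0.8
hg = (1.0 - g0**2) / (1.0 + g0**2 - 2.0 * g0 * np.cos(theta)) ** 1.5
print(asymmetry_factor(theta, hg))  # should recover ~0.8
```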

  15. Hindered rotor models with variable kinetic functions for accurate thermodynamic and kinetic predictions

    NASA Astrophysics Data System (ADS)

    Reinisch, Guillaume; Leyssale, Jean-Marc; Vignoles, Gérard L.

    2010-10-01

    We present an extension of some popular hindered rotor (HR) models, namely, the one-dimensional HR (1DHR) and the degenerated two-dimensional HR (d2DHR) models, allowing for a simple and accurate treatment of internal rotations. This extension, based on the use of a variable kinetic function in the Hamiltonian instead of a constant reduced moment of inertia, is particularly well suited to the rocking/wagging motions involved in dissociation or atom transfer reactions. The variable kinetic function is first introduced in the framework of a classical 1DHR model. Then, an effective temperature and potential dependent constant is proposed in the cases of quantum 1DHR and classical d2DHR models. These methods are finally applied to the atom transfer reaction SiCl3+BCl3→SiCl4+BCl2. We show, for this particular case, that a proper accounting of internal rotations greatly improves the accuracy of thermodynamic and kinetic predictions. Moreover, our results confirm (i) that using a suitably defined kinetic function is well adapted to such problems; (ii) that the separability assumption of independent rotations seems justified; and (iii) that a quantum mechanical treatment is not a substantial improvement with respect to a classical one.
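
    A hedged numerical sketch of the classical 1DHR idea with an angle-dependent kinetic function is given below; the torsional potential, moment-of-inertia scale, and modulation amplitude are invented for illustration and are not taken from the paper.

      import numpy as np
      from scipy.constants import k, h

      T = 1000.0                                     # temperature (K)
      I0 = 5.0e-47                                   # assumed moment-of-inertia scale (kg m^2)
      nphi = 2000
      phi = np.linspace(0.0, 2.0 * np.pi, nphi, endpoint=False)
      dphi = 2.0 * np.pi / nphi

      V = 0.5 * 8.0e-21 * (1.0 - np.cos(3.0 * phi))  # assumed 3-fold torsional barrier (J)
      I_phi = I0 * (1.0 + 0.3 * np.cos(phi))         # assumed variable kinetic function

      # Q = (1/h) * integral over phi of sqrt(2*pi*I(phi)*k*T) * exp(-V(phi)/(k*T))
      q_var = np.sum(np.sqrt(2.0 * np.pi * I_phi * k * T) * np.exp(-V / (k * T))) * dphi / h
      q_const = np.sum(np.sqrt(2.0 * np.pi * I0 * k * T) * np.exp(-V / (k * T))) * dphi / h

      print(f"Q with variable kinetic function: {q_var:.2f}")
      print(f"Q with constant reduced moment:   {q_const:.2f}")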

  16. Parallel kinetic Monte Carlo simulation framework incorporating accurate models of adsorbate lateral interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nielsen, Jens; D’Avezac, Mayeul; Hetherington, James

    2013-12-14

    Ab initio kinetic Monte Carlo (KMC) simulations have been successfully applied for over two decades to elucidate the underlying physico-chemical phenomena on the surfaces of heterogeneous catalysts. These simulations necessitate detailed knowledge of the kinetics of elementary reactions constituting the reaction mechanism, and the energetics of the species participating in the chemistry. The information about the energetics is encoded in the formation energies of gas and surface-bound species, and the lateral interactions between adsorbates on the catalytic surface, which can be modeled at different levels of detail. The majority of previous works accounted for only pairwise-additive first nearest-neighbor interactions. More recently, cluster-expansion Hamiltonians incorporating long-range interactions and many-body terms have been used for detailed estimations of catalytic rate [C. Wu, D. J. Schmidt, C. Wolverton, and W. F. Schneider, J. Catal. 286, 88 (2012)]. In view of the increasing interest in accurate predictions of catalytic performance, there is a need for general-purpose KMC approaches incorporating detailed cluster expansion models for the adlayer energetics. We have addressed this need by building on the previously introduced graph-theoretical KMC framework, and we have developed Zacros, a FORTRAN2003 KMC package for simulating catalytic chemistries. To tackle the high computational cost in the presence of long-range interactions we introduce parallelization with OpenMP. We further benchmark our framework by simulating a KMC analogue of the NO oxidation system established by Schneider and co-workers [J. Catal. 286, 88 (2012)]. We show that taking into account only first nearest-neighbor interactions may lead to large errors in the prediction of the catalytic rate, whereas for accurate estimates thereof, one needs to include long-range terms in the cluster expansion.
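
    The toy loop below (unrelated to the Zacros implementation itself) illustrates how adsorbate lateral interactions enter KMC rates in the simplest, pairwise first-nearest-neighbour picture discussed in the record: each desorption barrier is shifted by an assumed interaction energy before the rate catalogue is sampled.

      import numpy as np

      rng = np.random.default_rng(1)
      L, kT = 20, 0.05                        # lattice size, temperature (eV)
      eps_nn = 0.03                           # assumed repulsive NN interaction (eV)
      Ea0, nu = 0.80, 1.0e13                  # assumed bare barrier (eV) and prefactor (1/s)

      occ = rng.random((L, L)) < 0.3          # random initial adlayer
      t = 0.0

      def neighbours(i, j):
          return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

      for step in range(1000):                # rejection-free KMC loop, desorption only
          sites = np.argwhere(occ)
          if len(sites) == 0:
              break
          rates = np.array([nu * np.exp(-(Ea0 - sum(occ[p] for p in neighbours(i, j)) * eps_nn) / kT)
                            for i, j in sites])
          total = rates.sum()
          pick = rng.choice(len(sites), p=rates / total)
          occ[tuple(sites[pick])] = False     # execute the chosen desorption event
          t += -np.log(rng.random()) / total  # advance the KMC clock

      print(f"coverage after {step + 1} events: {occ.mean():.3f} at t = {t:.2e} s")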

  17. Accurate Treatment of Collisions and Water-Delivery in Models of Terrestrial Planet Formation

    NASA Astrophysics Data System (ADS)

    Haghighipour, Nader; Maindl, Thomas; Schaefer, Christoph

    2017-10-01

    It is widely accepted that collisions among solid bodies, ignited by their interactions with planetary embryos, are the key process in the formation of terrestrial planets and the transport of volatiles and chemical compounds to their accretion zones. Unfortunately, due to computational complexities, these collisions are often treated in a rudimentary way. Impacts are considered to be perfectly inelastic and volatiles are considered to be fully transferred from one object to the other. This perfect-merging assumption has profound effects on the mass and composition of final planetary bodies as it grossly overestimates the masses of these objects and the amounts of volatiles and chemical elements transferred to them. It also entirely neglects collisional loss of volatiles (e.g., water) and draws an unrealistic connection between these properties and the chemical structure of the protoplanetary disk (i.e., the location of their original carriers). We have developed a new and comprehensive methodology to simulate growth of embryos to planetary bodies where we use a combination of SPH and N-body codes to accurately model collisions as well as the transport/transfer of chemical compounds. Our methodology accounts for the loss of volatiles (e.g., ice sublimation) during the orbital evolution of their carriers and accurately tracks their transfer from one body to another. Results of our simulations show that traditional N-body modeling of terrestrial planet formation overestimates the masses and water contents of the final planets by over 60%, implying not only that the amounts of water it suggests are far from realistic, but also that small planets such as Mars can form in these simulations when collisions are treated properly. We will present details of our methodology and discuss its implications for terrestrial planet formation and water delivery to Earth.
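
    The contrast the authors draw between perfect merging and a collision-resolved treatment can be caricatured as follows; the retention fractions are placeholders standing in for what an SPH collision calculation would supply, not values from the study.

      def perfect_merge(m1, w1, m2, w2):
          # traditional N-body assumption: all mass and all water are retained
          return m1 + m2, w1 + w2

      def sph_informed_merge(m1, w1, m2, w2, mass_retention=0.7, water_retention=0.4):
          # hypothetical retention fractions; an SPH model would compute these per impact
          return mass_retention * (m1 + m2), water_retention * (w1 + w2)

      embryo, impactor = (0.5, 1.0e-3), (0.3, 5.0e-4)     # (mass, water mass) in Earth units
      print("perfect merging (mass, water):", perfect_merge(*embryo, *impactor))
      print("SPH-informed    (mass, water):", sph_informed_merge(*embryo, *impactor))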

  18. Modeling Instruction in AP Physics C: Mechanics and Electricity and Magnetism

    NASA Astrophysics Data System (ADS)

    Belcher, Nathan Tillman

    This action research study used data from multiple assessments in Mechanics and Electricity and Magnetism to determine the viability of Modeling Instruction as a pedagogy for students in AP Physics C: Mechanics and Electricity and Magnetism. Modeling Instruction is a guided-inquiry approach to teaching science in which students progress through the Modeling Cycle to develop a fully-constructed model for a scientific concept. AP Physics C: Mechanics and Electricity and Magnetism are calculus-based physics courses, approximately equivalent to first-year calculus-based physics courses at the collegiate level. Using a one-group pretest-posttest design, students were assessed in Mechanics using the Force Concept Inventory, Mechanics Baseline Test, and 2015 AP Physics C: Mechanics Practice Exam. With the same design, students were assessed in Electricity and Magnetism on the Brief Electricity and Magnetism Assessment, Electricity and Magnetism Conceptual Assessment, and 2015 AP Physics C: Electricity and Magnetism Practice Exam. In a one-shot case study design, student scores were collected from the 2017 AP Physics C: Mechanics and Electricity and Magnetism Exams. Students performed moderately well on the assessments in Mechanics and Electricity and Magnetism, demonstrating that Modeling Instruction is a viable pedagogy in AP Physics C: Mechanics and Electricity and Magnetism.

  19. Exchange-Hole Dipole Dispersion Model for Accurate Energy Ranking in Molecular Crystal Structure Prediction.

    PubMed

    Whittleton, Sarah R; Otero-de-la-Roza, A; Johnson, Erin R

    2017-02-14

    Accurate energy ranking is a key facet to the problem of first-principles crystal-structure prediction (CSP) of molecular crystals. This work presents a systematic assessment of B86bPBE-XDM, a semilocal density functional combined with the exchange-hole dipole moment (XDM) dispersion model, for energy ranking using 14 compounds from the first five CSP blind tests. Specifically, the set of crystals studied comprises 11 rigid, planar compounds and 3 co-crystals. The experimental structure was correctly identified as the lowest in lattice energy for 12 of the 14 total crystals. One of the exceptions is 4-hydroxythiophene-2-carbonitrile, for which the experimental structure was correctly identified once a quasi-harmonic estimate of the vibrational free-energy contribution was included, evidencing the occasional importance of thermal corrections for accurate energy ranking. The other exception is an organic salt, where charge-transfer error (also called delocalization error) is expected to cause the base density functional to be unreliable. Provided the choice of base density functional is appropriate and an estimate of temperature effects is used, XDM-corrected density-functional theory is highly reliable for the energetic ranking of competing crystal structures.

  20. Material model for physically based rendering

    NASA Astrophysics Data System (ADS)

    Robart, Mathieu; Paulin, Mathias; Caubet, Rene

    1999-09-01

    In computer graphics, a complete knowledge of the interactions between light and a material is essential to obtain photorealistic pictures. Physical measurements allow us to obtain data on the material response, but are limited to industrial surfaces and depend on measurement conditions. Analytic models do exist, but they are often inadequate for common use: the empirical ones are too simple to be realistic, and the physically based ones are often too complex or too specialized to be generally useful. Therefore, we have developed a multiresolution virtual material model, that not only describes the surface of a material, but also its internal structure thanks to distribution functions of microelements, arranged in layers. Each microelement possesses its own response to incident light, from an elementary reflection to a complex response provided by its inner structure, taking into account the geometry, energy, polarization, etc., of each light ray. This model is virtually illuminated, in order to compute its response to an incident radiance. This directional response is stored in a compressed data structure using spherical wavelets, and is intended for use in a rendering model such as directional radiosity.

  1. How to obtain accurate resist simulations in very low-k1 era?

    NASA Astrophysics Data System (ADS)

    Chiou, Tsann-Bim; Park, Chan-Ha; Choi, Jae-Seung; Min, Young-Hong; Hansen, Steve; Tseng, Shih-En; Chen, Alek C.; Yim, Donggyu

    2006-03-01

    A procedure for calibrating a resist model iteratively adjusts appropriate parameters until the simulations of the model match the experimental data. The tunable parameters may include the shape of the illuminator, the geometry and transmittance/phase of the mask, light source and scanner-related parameters that affect imaging quality, resist process control and most importantly the physical/chemical factors in the resist model. The resist model can be accurately calibrated by measuring critical dimensions (CD) of a focus-exposure matrix (FEM) and the technique has been demonstrated to be very successful in predicting lithographic performance. However, resist model calibration is more challenging in the low k1 (<0.3) regime because numerous uncertainties, such as mask and resist CD metrology errors, are becoming too large to be ignored. This study demonstrates a resist model calibration procedure for a 0.29 k1 process using a 6% halftone mask containing 2D brickwall patterns. The influence of different scanning electron microscopes (SEM) and their wafer metrology signal analysis algorithms on the accuracy of the resist model is evaluated. As an example of the metrology issue of the resist pattern, the treatment of a sidewall angle is demonstrated for the resist line ends where the contrast is relatively low. Additionally, the mask optical proximity correction (OPC) and corner rounding are considered in the calibration procedure that is based on captured SEM images. Accordingly, the average root-mean-square (RMS) error, which is the difference between simulated and experimental CDs, can be improved by considering the metrological issues. Moreover, a weighting method and a measured CD tolerance are proposed to handle the different CD variations of the various edge points of the wafer resist pattern. After the weighting method is implemented and the CD selection criteria applied, the RMS error can be further suppressed. Therefore, the resist CD and process window can
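
    The weighting method mentioned above can be sketched as a simple inverse-variance weighted RMS objective with a CD tolerance cut; the CDs, uncertainties, and tolerance below are illustrative numbers, not data from the paper.

      import numpy as np

      cd_sim = np.array([45.1, 60.3, 78.8, 90.2])     # simulated CDs (nm)
      cd_meas = np.array([44.0, 61.0, 80.5, 90.0])    # measured CDs (nm)
      sigma = np.array([1.0, 1.5, 3.0, 1.0])          # per-site metrology spread (nm)
      tolerance = 5.0                                  # drop sites whose |error| exceeds this (nm)

      err = cd_sim - cd_meas
      keep = np.abs(err) <= tolerance
      weights = 1.0 / sigma[keep] ** 2                 # noisier measurements count less

      weighted_rms = np.sqrt(np.sum(weights * err[keep] ** 2) / np.sum(weights))
      print(f"weighted RMS CD error = {weighted_rms:.2f} nm")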

  2. Evaluating nuclear physics inputs in core-collapse supernova models

    NASA Astrophysics Data System (ADS)

    Lentz, E.; Hix, W. R.; Baird, M. L.; Messer, O. E. B.; Mezzacappa, A.

    Core-collapse supernova models depend on the details of the nuclear and weak interaction physics inputs just as they depend on the details of the macroscopic physics (transport, hydrodynamics, etc.), numerical methods, and progenitors. We present preliminary results from our ongoing comparison studies of nuclear and weak interaction physics inputs to core collapse supernova models using the spherically-symmetric, general relativistic, neutrino radiation hydrodynamics code Agile-Boltztran. We focus on comparisons of the effects of the nuclear EoS and the effects of improving the opacities, particularly neutrino-nucleon interactions.

  3. Towards Improved High-Resolution Land Surface Hydrologic Reanalysis Using a Physically-Based Hydrologic Model and Data Assimilation

    NASA Astrophysics Data System (ADS)

    Shi, Y.; Davis, K. J.; Zhang, F.; Duffy, C.; Yu, X.

    2014-12-01

    A coupled physically based land surface hydrologic model, Flux-PIHM, has been developed by incorporating a land surface scheme into the Penn State Integrated Hydrologic Model (PIHM). The land surface scheme is adapted from the Noah land surface model. Flux-PIHM has been implemented and manually calibrated at the Shale Hills watershed (0.08 km2) in central Pennsylvania. Model predictions of discharge, point soil moisture, point water table depth, sensible and latent heat fluxes, and soil temperature show good agreement with observations. When calibrated only using discharge, and soil moisture and water table depth at one point, Flux-PIHM is able to resolve the observed 10^1 m scale soil moisture pattern at the Shale Hills watershed when an appropriate map of soil hydraulic properties is provided. A Flux-PIHM data assimilation system has been developed by incorporating an ensemble Kalman filter (EnKF) for model parameter and state estimation. Both synthetic and real data assimilation experiments have been performed at the Shale Hills watershed. Synthetic experiment results show that the data assimilation system is able to simultaneously provide accurate estimates of multiple parameters. In the real data experiment, the EnKF estimated parameters and manually calibrated parameters yield similar model performances, but the EnKF method significantly decreases the time and labor required for calibration. The data requirements for accurate Flux-PIHM parameter estimation via data assimilation using synthetic observations have been tested. Results show that by assimilating only in situ outlet discharge, soil water content at one point, and the land surface temperature averaged over the whole watershed, the data assimilation system can provide an accurate representation of watershed hydrology. Observations of these key variables are available with national and even global spatial coverage (e.g., MODIS surface temperature, SMAP soil moisture, and the USGS gauging stations). National atmospheric reanalysis

  4. Sensitivity of mineral dissolution rates to physical weathering : A modeling approach

    NASA Astrophysics Data System (ADS)

    Opolot, Emmanuel; Finke, Peter

    2015-04-01

    There is continued interest in the accurate estimation of natural weathering rates owing to their importance in soil formation, nutrient cycling, estimation of acidification in soils, rivers and lakes, and in understanding the role of silicate weathering in carbon sequestration. At the same time, a challenge exists in reconciling discrepancies between laboratory-determined weathering rates and natural weathering rates. Studies have consistently reported laboratory rates to be orders of magnitude faster than natural weathering rates (White, 2009). These discrepancies have mainly been attributed to (i) changes in fluid composition, (ii) changes in primary mineral surfaces (reactive sites), and (iii) the formation of secondary phases that could slow natural weathering rates. It is indeed difficult to measure, in laboratory experiments, the interactive effect of intrinsic factors (e.g. mineral composition, surface area) and extrinsic factors (e.g. solution composition, climate, bioturbation) occurring in natural settings. A modeling approach could be useful in this case. A number of geochemical models (e.g. PHREEQC, EQ3/EQ6) already exist and are capable of estimating mineral dissolution/precipitation rates as a function of time and mineral mass. However, most of these approaches assume a constant surface area in a given volume of water (White, 2009). This assumption may become invalid especially at long time scales. One of the widely used weathering models is the PROFILE model (Sverdrup and Warfvinge, 1993). The PROFILE model takes into account the mineral composition, solution composition and surface area in determining dissolution/precipitation rates. However, there is little coupling with other processes (e.g. physical weathering, clay migration, bioturbation) that could directly or indirectly influence dissolution/precipitation rates. We propose in this study a coupling between chemical weathering mechanism (defined as a function of reactive area

  5. Anatomically accurate individual face modeling.

    PubMed

    Zhang, Yu; Prakash, Edmond C; Sung, Eric

    2003-01-01

    This paper presents a new 3D face model of a specific person constructed from the anatomical perspective. By exploiting laser range data, a 3D facial mesh precisely representing the skin geometry is reconstructed. Based on the geometric facial mesh, we develop a deformable multi-layer skin model. It takes into account the nonlinear stress-strain relationship and dynamically simulates the non-homogeneous behavior of the real skin. The face model also incorporates a set of anatomically-motivated facial muscle actuators and underlying skull structure. Lagrangian mechanics governs the facial motion dynamics, dictating the dynamic deformation of facial skin in response to muscle contraction.

  6. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models

    NASA Astrophysics Data System (ADS)

    Blackman, Jonathan; Field, Scott E.; Galley, Chad R.; Szilágyi, Béla; Scheel, Mark A.; Tiglio, Manuel; Hemberger, Daniel A.

    2015-09-01

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic -2Yℓm waveform modes resolved by the NR code up to ℓ=8 . We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).
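
    A cartoon of the reduced-order-modelling idea behind such surrogates is given below: a linear basis is extracted from training "waveforms" (toy chirps here, not NR data) and new parameter values are evaluated by interpolating the basis coefficients. It illustrates the workflow only, not the published surrogate.

      import numpy as np

      t = np.linspace(0.0, 1.0, 400)
      qs_train = np.linspace(1.0, 10.0, 12)                 # training "mass ratios"

      def toy_waveform(q):                                  # stand-in for an expensive NR run
          return np.sin(2.0 * np.pi * (10.0 + 2.0 * q) * t**2) * np.exp(-2.0 * t)

      training = np.array([toy_waveform(q) for q in qs_train])
      basis = np.linalg.svd(training, full_matrices=False)[2][:6]   # 6-element reduced basis
      coeffs = training @ basis.T                           # coefficients of each training waveform

      def surrogate(q):
          c = np.array([np.interp(q, qs_train, coeffs[:, k]) for k in range(len(basis))])
          return c @ basis                                  # resum the reduced basis

      q_test = 4.3
      rel_err = (np.linalg.norm(surrogate(q_test) - toy_waveform(q_test))
                 / np.linalg.norm(toy_waveform(q_test)))
      print(f"relative surrogate error at q = {q_test}: {rel_err:.2e}")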

  7. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models.

    PubMed

    Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-09-18

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).

  8. Physical Examination of Knee Ligament Injuries.

    PubMed

    Bronstein, Robert D; Schaffer, Joseph C

    2017-04-01

    The knee is one of the most commonly injured joints in the body. A thorough history and physical examination of the knee facilitates accurate diagnosis of ligament injury. Several examination techniques for the knee ligaments that were developed before advanced imaging remain as accurate or more accurate than these newer imaging modalities. Proper use of these examination techniques requires an understanding of the anatomy and pathophysiology of knee ligament injuries. Advanced imaging can be used to augment a history and examination when necessary, but should not replace a thorough history and physical examination.

  9. A Model of Physical Performance for Occupational Tasks.

    ERIC Educational Resources Information Center

    Hogan, Joyce

    This report acknowledges the problems faced by industrial/organizational psychologists who must make personnel decisions involving physically demanding jobs. The scarcity of criterion-related validation studies and the difficulty of generalizing validity are considered, and a model of physical performance that builds on Fleishman's (1984)…

  10. Rock physics model-based prediction of shear wave velocity in the Barnett Shale formation

    NASA Astrophysics Data System (ADS)

    Guo, Zhiqi; Li, Xiang-Yang

    2015-06-01

    Predicting S-wave velocity is important for reservoir characterization and fluid identification in unconventional resources. A rock physics model-based method is developed for estimating pore aspect ratio and predicting shear wave velocity Vs from the information of P-wave velocity, porosity and mineralogy in a borehole. Statistical distribution of pore geometry is considered in the rock physics models. In the application to the Barnett formation, we compare the high frequency self-consistent approximation (SCA) method that corresponds to isolated pore spaces, and the low frequency SCA-Gassmann method that describes well-connected pore spaces. Inversion results indicate that compared to the surroundings, the Barnett Shale shows less fluctuation in the pore aspect ratio in spite of complex constituents in the shale. The high frequency method provides a more robust and accurate prediction of Vs for all the three intervals in the Barnett formation, while the low frequency method collapses for the Barnett Shale interval. Possible causes for this discrepancy can be explained by the fact that poor in situ pore connectivity and low permeability make well-log sonic frequencies act as high frequencies and thus invalidate the low frequency assumption of the Gassmann theory. In comparison, for the overlying Marble Falls and underlying Ellenburger carbonates, both the high and low frequency methods predict Vs with reasonable accuracy, which may reveal that sonic frequencies are within the transition frequencies zone due to higher pore connectivity in the surroundings.
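
    The low-frequency limb of the comparison rests on Gassmann fluid substitution, which can be sketched directly; the moduli, densities, and porosity below are generic illustrative values, not Barnett log data.

      import math

      K_min, K_dry, mu_dry = 37.0e9, 12.0e9, 10.0e9   # assumed mineral / dry-rock moduli (Pa)
      K_fl, rho_fl = 2.2e9, 1000.0                    # assumed fluid bulk modulus (Pa) and density (kg/m^3)
      rho_min, phi = 2650.0, 0.10                     # assumed mineral density and porosity

      # Gassmann: the shear modulus is unchanged by the pore fluid, the bulk modulus is not
      num = (1.0 - K_dry / K_min) ** 2
      den = phi / K_fl + (1.0 - phi) / K_min - K_dry / K_min**2
      K_sat, mu_sat = K_dry + num / den, mu_dry

      rho = (1.0 - phi) * rho_min + phi * rho_fl
      Vp = math.sqrt((K_sat + 4.0 * mu_sat / 3.0) / rho)
      Vs = math.sqrt(mu_sat / rho)
      print(f"Vp = {Vp:.0f} m/s, Vs = {Vs:.0f} m/s")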

  11. Utilizing Adjoint-Based Error Estimates for Surrogate Models to Accurately Predict Probabilities of Events

    DOE PAGES

    Butler, Troy; Wildey, Timothy

    2018-01-01

    In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.
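
    The central bookkeeping of the method can be mimicked with stand-in models: a per-sample error estimate brackets the surrogate prediction, and samples whose bracket straddles the event threshold are flagged unreliable and re-evaluated with the high-fidelity model. In the sketch below the "error estimate" is computed directly rather than by an adjoint, purely to keep the example self-contained.

      import numpy as np

      rng = np.random.default_rng(2)
      x = rng.uniform(-1.0, 1.0, size=5000)
      threshold = 0.5                                    # event: q(x) > threshold

      def high_fidelity(x):
          return np.sin(3.0 * x) + 0.1 * x**2

      def surrogate(x):
          return 3.0 * x - 4.5 * x**3                    # crude stand-in for sin(3x)

      q_s = surrogate(x)
      err_est = np.abs(high_fidelity(x) - q_s)           # stand-in for an adjoint error estimate

      lower = np.mean(q_s - err_est > threshold)         # event certain despite worst-case error
      upper = np.mean(q_s + err_est > threshold)         # event possible within the error bar
      unreliable = np.abs(q_s - threshold) <= err_est    # samples needing the expensive model

      q_hybrid = q_s.copy()
      q_hybrid[unreliable] = high_fidelity(x[unreliable])
      print(f"bounds [{lower:.3f}, {upper:.3f}], hybrid estimate {np.mean(q_hybrid > threshold):.3f}, "
            f"high-fidelity fraction {unreliable.mean():.2f}")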

  12. Utilizing Adjoint-Based Error Estimates for Surrogate Models to Accurately Predict Probabilities of Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, Troy; Wildey, Timothy

    In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.

  13. Time-Accurate Solutions of Incompressible Navier-Stokes Equations for Potential Turbopump Applications

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan

    2001-01-01

    Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.
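
    For orientation, one pressure-projection step on a periodic grid looks like the generic Chorin-type sketch below (a spectral Poisson solve is used only for brevity); it is not the specific solver configuration compared in this record.

      import numpy as np

      N, dt, rho = 64, 1.0e-2, 1.0
      kx = np.fft.fftfreq(N, d=1.0 / N) * 2.0 * np.pi
      KX, KY = np.meshgrid(kx, kx, indexing="ij")
      K2 = KX**2 + KY**2
      K2[0, 0] = 1.0                                    # avoid dividing the mean mode by zero

      rng = np.random.default_rng(3)
      u_star = rng.standard_normal((N, N))              # intermediate velocity after advection/diffusion
      v_star = rng.standard_normal((N, N))

      # solve  laplacian(p) = (rho/dt) * div(u*)  in Fourier space
      div_hat = 1j * KX * np.fft.fft2(u_star) + 1j * KY * np.fft.fft2(v_star)
      p_hat = -(rho / dt) * div_hat / K2
      p_hat[0, 0] = 0.0

      # project:  u = u* - (dt/rho) * grad(p)  is then divergence-free
      u = u_star - (dt / rho) * np.real(np.fft.ifft2(1j * KX * p_hat))
      v = v_star - (dt / rho) * np.real(np.fft.ifft2(1j * KY * p_hat))

      div_u = np.real(np.fft.ifft2(1j * KX * np.fft.fft2(u) + 1j * KY * np.fft.fft2(v)))
      print("max |div u| after projection:", float(np.abs(div_u).max()))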

  14. Physical-Socio-Economic Modeling of Climate Change

    NASA Astrophysics Data System (ADS)

    Chamberlain, R. G.; Vatan, F.

    2008-12-01

    Because of the global nature of climate change, any assessment of the effects of plans, policies, and response to climate change demands a model that encompasses the entire Earth System, including socio-economic factors. Physics-based climate models of the factors that drive global temperatures, rainfall patterns, and sea level are necessary but not sufficient to guide decision making. Actions taken by farmers, industrialists, environmentalists, politicians, and other policy makers may result in large changes to economic factors, international relations, food production, disease vectors, and beyond. These consequences will not be felt uniformly around the globe or even across a given region. Policy models must comprehend all of these considerations. Combining physics-based models of the Earth's climate and biosphere with societal models of population dynamics, economics, and politics is a grand challenge with high stakes. We propose to leverage our recent advances in modeling and simulation of military stability and reconstruction operations to build models that address all these areas of concern. Following over twenty years' experience of successful combat simulation, JPL has started developing Minerva, which will add demographic, economic, political, and media/information models to capabilities that already exist. With these new models, for which we have design concepts, it will be possible to address a very wide range of potential national and international problems that were previously inaccessible. Our climate change model builds on Minerva and expands the geographical horizon from playboxes containing regions and neighborhoods to the entire globe. This system consists of a collection of interacting simulation models that specialize in different aspects of the global situation. They will each contribute to and draw from a pool of shared data. The basic models are: the physical model; the demographic model; the political model; the economic model; and the media

  15. Physically representative atomistic modeling of atomic-scale friction

    NASA Astrophysics Data System (ADS)

    Dong, Yalin

    interesting physical process is buried between the two contact interfaces, which makes a direct measurement more difficult. Atomistic simulation is able to simulate the process with the dynamic information of each single atom, and therefore provides valuable interpretations for experiments. In this work, we systematically apply Molecular Dynamics (MD) simulation to model the Atomic Force Microscopy (AFM) measurement of atomic friction. Furthermore, we also employed molecular dynamics simulation to correlate the atomic dynamics with the friction behavior observed in experiments. For instance, ParRep dynamics (an accelerated molecular dynamics technique) is introduced to investigate the velocity dependence of atomic friction; we also employ MD simulation to "see" how the reconstruction of the gold surface modulates friction, and the friction enhancement mechanism at a graphite step edge. Atomic stick-slip friction can be treated as a rate process. Instead of running a direct simulation of the process, we can apply transition state theory to predict its properties. We will have a rigorous derivation of the velocity and temperature dependence of friction based on the Prandtl-Tomlinson model as well as transition state theory. A more accurate relation for predicting the velocity and temperature dependence is obtained. Furthermore, we have included the instrumental noise inherent in AFM measurement to interpret two experimental observations: the suppression of friction at low temperature and the attempt-frequency discrepancy between AFM measurements and theoretical predictions. We also discuss the possibility of treating wear as a rate process.
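
    The Prandtl-Tomlinson picture invoked above reduces, in its simplest overdamped form, to a tip dragged by a spring across a sinusoidal corrugation with thermal noise; all parameter values in the sketch are generic order-of-magnitude choices, not fitted to any experiment.

      import numpy as np

      rng = np.random.default_rng(4)
      a, U0 = 2.5e-10, 8.0e-20                 # lattice period (m), assumed corrugation amplitude (J)
      k_spring, v = 2.0, 1.0e-5                # spring constant (N/m), support velocity (m/s)
      gamma, kBT = 2.0e-5, 4.1e-21             # damping (kg/s), thermal energy near 300 K (J)
      dt, steps = 1.0e-9, 200000

      x, f_lateral = 0.0, np.empty(steps)
      for i in range(steps):
          x_support = v * i * dt
          f_surface = -(2.0 * np.pi * U0 / a) * np.sin(2.0 * np.pi * x / a)
          f_spring = k_spring * (x_support - x)
          noise = np.sqrt(2.0 * gamma * kBT / dt) * rng.standard_normal()
          x += dt * (f_surface + f_spring + noise) / gamma     # overdamped Langevin update
          f_lateral[i] = f_spring                              # lateral force read off the spring

      print(f"mean lateral (friction) force: {f_lateral.mean() * 1e9:.2f} nN")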

  16. An ensemble Kalman filter for statistical estimation of physics constrained nonlinear regression models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harlim, John, E-mail: jharlim@psu.edu; Mahdi, Adam, E-mail: amahdi@ncsu.edu; Majda, Andrew J., E-mail: jonjon@cims.nyu.edu

    2014-01-15

    A central issue in contemporary science is the development of nonlinear data driven statistical–dynamical models for time series of noisy partial observations from nature or a complex model. It has been established recently that ad-hoc quadratic multi-level regression models can have finite-time blow-up of statistical solutions and/or pathological behavior of their invariant measure. Recently, a new class of physics constrained nonlinear regression models were developed to ameliorate this pathological behavior. Here a new finite ensemble Kalman filtering algorithm is developed for estimating the state, the linear and nonlinear model coefficients, the model and the observation noise covariances from available partial noisy observations of the state. Several stringent tests and applications of the method are developed here. In the most complex application, the perfect model has 57 degrees of freedom involving a zonal (east–west) jet, two topographic Rossby waves, and 54 nonlinearly interacting Rossby waves; the perfect model has significant non-Gaussian statistics in the zonal jet with blocked and unblocked regimes and a non-Gaussian skewed distribution due to interaction with the other 56 modes. We only observe the zonal jet contaminated by noise and apply the ensemble filter algorithm for estimation. Numerically, we find that a three dimensional nonlinear stochastic model with one level of memory mimics the statistical effect of the other 56 modes on the zonal jet in an accurate fashion, including the skew non-Gaussian distribution and autocorrelation decay. On the other hand, a similar stochastic model with zero memory levels fails to capture the crucial non-Gaussian behavior of the zonal jet from the perfect 57-mode model.
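
    A bare-bones stochastic EnKF analysis step in the spirit of this record is sketched below: the ensemble carries an augmented vector of state plus one uncertain model coefficient, and both are updated from noisy observations of the state alone. The scalar toy model and all numbers are illustrative.

      import numpy as np

      rng = np.random.default_rng(5)
      Ne, steps = 100, 200
      a_true, sigma_obs = 0.9, 0.2

      # augmented ensemble: column 0 is the state x, column 1 is the coefficient a
      ens = np.column_stack([rng.normal(0.0, 1.0, Ne), rng.normal(0.5, 0.3, Ne)])
      x_true = 1.0

      for _ in range(steps):
          # forecast: x_{n+1} = a * x_n + noise, each member propagating with its own a
          x_true = a_true * x_true + rng.normal(0.0, 0.1)
          ens[:, 0] = ens[:, 1] * ens[:, 0] + rng.normal(0.0, 0.1, Ne)

          # analysis: assimilate y = x + noise using perturbed observations
          y = x_true + rng.normal(0.0, sigma_obs)
          dX = ens - ens.mean(axis=0)
          Pxy = dX.T @ dX[:, 0] / (Ne - 1)              # covariance of (x, a) with the observed x
          Pyy = dX[:, 0] @ dX[:, 0] / (Ne - 1) + sigma_obs**2
          K = Pxy / Pyy                                  # Kalman gain for state and parameter
          innov = (y + rng.normal(0.0, sigma_obs, Ne)) - ens[:, 0]
          ens += np.outer(innov, K)

      print(f"estimated coefficient a = {ens[:, 1].mean():.3f} (truth {a_true})")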

  17. Intentional Development: A Model to Guide Lifelong Physical Activity

    ERIC Educational Resources Information Center

    Cherubini, Jeffrey M.

    2009-01-01

    Framed in the context of researching influences on physical activity and actually working with individuals and groups seeking to initiate, increase or maintain physical activity, the purpose of this review is to present the model of Intentional Development as a multi-theoretical approach to guide research and applied work in physical activity.…

  18. Physically-Based Modelling and Real-Time Simulation of Fluids.

    NASA Astrophysics Data System (ADS)

    Chen, Jim Xiong

    1995-01-01

    Simulating physically realistic complex fluid behaviors presents an extremely challenging problem for computer graphics researchers. Such behaviors include the effects of driving boats through water, blending differently colored fluids, rain falling and flowing on a terrain, fluids interacting in a Distributed Interactive Simulation (DIS), etc. Such capabilities are useful in computer art, advertising, education, entertainment, and training. We present a new method for physically-based modeling and real-time simulation of fluids in computer graphics and dynamic virtual environments. By solving the 2D Navier -Stokes equations using a CFD method, we map the surface into 3D using the corresponding pressures in the fluid flow field. This achieves realistic real-time fluid surface behaviors by employing the physical governing laws of fluids but avoiding extensive 3D fluid dynamics computations. To complement the surface behaviors, we calculate fluid volume and external boundary changes separately to achieve full 3D general fluid flow. To simulate physical activities in a DIS, we introduce a mechanism which uses a uniform time scale proportional to the clock-time and variable time-slicing to synchronize physical models such as fluids in the networked environment. Our approach can simulate many different fluid behaviors by changing the internal or external boundary conditions. It can model different kinds of fluids by varying the Reynolds number. It can simulate objects moving or floating in fluids. It can also produce synchronized general fluid flows in a DIS. Our model can serve as a testbed to simulate many other fluid phenomena which have never been successfully modeled previously.

  19. Performance of GeantV EM Physics Models

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2017-10-01

    The recent progress in parallel hardware architectures with deeper vector pipelines or many-cores technologies brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains in propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architecture. Due to the complexity of geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable in identifying factors limiting parallel execution. In this report, we will present design considerations and preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVidia GPUs) as well as on mainstream CPUs.

  20. Electromagnetic Physics Models for Parallel Computing Architectures

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.

  1. Reformulation of Nonlinear Anisotropic Crystal Elastoplasticity for Impact Physics

    DTIC Science & Technology

    2015-03-01

    interest include metals, ceramics, minerals, and energetic materials. Accurate, efficient, stable, and thermodynamically consistent models for... Clayton JD. Phase field theory and analysis of pressure-shear induced amorphization and failure in boron carbide ceramic. AIMS Materials Science. 2014;1... of Nonlinear Anisotropic Crystal Elastoplasticity for Impact Physics by JD Clayton, Weapons and Materials Research Directorate, ARL

  2. Model-based reasoning in the physics laboratory: Framework and initial results

    NASA Astrophysics Data System (ADS)

    Zwickl, Benjamin M.; Hu, Dehui; Finkelstein, Noah; Lewandowski, H. J.

    2015-12-01

    [This paper is part of the Focused Collection on Upper Division Physics Courses.] We review and extend existing frameworks on modeling to develop a new framework that describes model-based reasoning in introductory and upper-division physics laboratories. Constructing and using models are core scientific practices that have gained significant attention within K-12 and higher education. Although modeling is a broadly applicable process, within physics education, it has been preferentially applied to the iterative development of broadly applicable principles (e.g., Newton's laws of motion in introductory mechanics). A significant feature of the new framework is that measurement tools (in addition to the physical system being studied) are subjected to the process of modeling. Think-aloud interviews were used to refine the framework and demonstrate its utility by documenting examples of model-based reasoning in the laboratory. When applied to the think-aloud interviews, the framework captures and differentiates students' model-based reasoning and helps identify areas of future research. The interviews showed how students productively applied similar facets of modeling to the physical system and measurement tools: construction, prediction, interpretation of data, identification of model limitations, and revision. Finally, we document students' challenges in explicitly articulating assumptions when constructing models of experimental systems and further challenges in model construction due to students' insufficient prior conceptual understanding. A modeling perspective reframes many of the seemingly arbitrary technical details of measurement tools and apparatus as an opportunity for authentic and engaging scientific sense making.

  3. Tactical Games Model and Its Effects on Student Physical Activity and Gameplay Performance in Secondary Physical Education

    ERIC Educational Resources Information Center

    Hodges, Michael; Wicke, Jason; Flores-Marti, Ismael

    2018-01-01

    Many have examined game-based instructional models, though few have examined the effects of the Tactical Games Model (TGM) on secondary-aged students. Therefore, this study examined the effects TGM has on secondary students' physical activity (PA) and gameplay performance (GPP) in three secondary schools. Physical education teachers (N = 3) were…

  4. [Students' physical activity: an analysis according to Pender's health promotion model].

    PubMed

    Guedes, Nirla Gomes; Moreira, Rafaella Pessoa; Cavalcante, Tahissa Frota; de Araujo, Thelma Leite; Ximenes, Lorena Barbosa

    2009-12-01

    The objective of this study was to describe the everyday physical activity habits of students and analyze the practice of physical activity and its determinants, based on the first component of Pender's health promotion model. This cross-sectional study was performed from 2004 to 2005 with 79 students in a public school in Fortaleza, Ceará, Brazil. Data collection was performed by interviews and physical examinations. The data were analyzed according to the aforementioned theoretical model. Most students (n=60) were physically active. Proportionally, adolescents were the most active (80.4%). Those with a sedentary lifestyle had higher rates of overweight and obesity (21.1%). Many students practiced outdoor physical activities, which required neither special facilities nor substantial financial resources. The results show that it is possible to associate the first component of Pender's health promotion model with the everyday lives of students in terms of physical activity practice.

  5. Efficient physics-based tracking of heart surface motion for beating heart surgery robotic systems.

    PubMed

    Bogatyrenko, Evgeniya; Pompey, Pascal; Hanebeck, Uwe D

    2011-05-01

    Tracking of beating heart motion in a robotic surgery system is required for complex cardiovascular interventions. A heart surface motion tracking method is developed, including a stochastic physics-based heart surface model and an efficient reconstruction algorithm. The algorithm uses the constraints provided by the model that exploits the physical characteristics of the heart. The main advantage of the model is that it is more realistic than most standard heart models. Additionally, no explicit matching between the measurements and the model is required. The application of meshless methods significantly reduces the complexity of physics-based tracking. Based on the stochastic physical model of the heart surface, this approach considers the motion of the intervention area and is robust to occlusions and reflections. The tracking algorithm is evaluated in simulations and experiments on an artificial heart. Providing higher accuracy than the standard model-based methods, it successfully copes with occlusions and provides high performance even when all measurements are not available. Combining the physical and stochastic description of the heart surface motion ensures physically correct and accurate prediction. Automatic initialization of the physics-based cardiac motion tracking enables system evaluation in a clinical environment.

  6. Mathematical modeling of the infrastructure of attosecond actuators and femtosecond sensors of nonequilibrium physical media in smart materials

    NASA Astrophysics Data System (ADS)

    Beznosyuk, Sergey A.; Maslova, Olga A.; Zhukovsky, Mark S.; Valeryeva, Ekaterina V.; Terentyeva, Yulia V.

    2017-12-01

    The task of modeling the multiscale infrastructure of quantum attosecond actuators and femtosecond sensors of nonequilibrium physical media in smart materials is considered. Computer design and calculation of supra-atomic femtosecond sensors of nonequilibrium physical media in materials based on layered graphene-transition metal nanosystems are carried out by vdW-DF and B3LYP methods. It is shown that the molybdenum substrate provides fixation of graphene nanosheets by Van der Waals forces at a considerable distance (5.3 Å) from the metal surface. This minimizes the effect of the electronic and nuclear subsystem of the substrate metal on the sensory properties of "pure" graphene. The conclusion is substantiated that graphene-molybdenum nanosensors are able to accurately orient and position one molecule of carbon monoxide. It is shown that graphene selectively adsorbs CO and fixes the oxygen atom of the molecule at the position of the center of the graphene ring C6.

  7. THE EFFECTS OF VIDEO MODELING WITH VOICEOVER INSTRUCTION ON ACCURATE IMPLEMENTATION OF DISCRETE-TRIAL INSTRUCTION

    PubMed Central

    Vladescu, Jason C; Carroll, Regina; Paden, Amber; Kodak, Tiffany M

    2012-01-01

    The present study replicates and extends previous research on the use of video modeling (VM) with voiceover instruction to train staff to implement discrete-trial instruction (DTI). After staff trainees reached the mastery criterion when teaching an adult confederate with VM, they taught a child with a developmental disability using DTI. The results showed that the staff trainees' accurate implementation of DTI remained high, and both child participants acquired new skills. These findings provide additional support that VM may be an effective method to train staff members to conduct DTI. PMID:22844149

  8. The effects of video modeling with voiceover instruction on accurate implementation of discrete-trial instruction.

    PubMed

    Vladescu, Jason C; Carroll, Regina; Paden, Amber; Kodak, Tiffany M

    2012-01-01

    The present study replicates and extends previous research on the use of video modeling (VM) with voiceover instruction to train staff to implement discrete-trial instruction (DTI). After staff trainees reached the mastery criterion when teaching an adult confederate with VM, they taught a child with a developmental disability using DTI. The results showed that the staff trainees' accurate implementation of DTI remained high, and both child participants acquired new skills. These findings provide additional support that VM may be an effective method to train staff members to conduct DTI.

  9. Impact of detector simulation in particle physics collider experiments

    NASA Astrophysics Data System (ADS)

    Daniel Elvira, V.

    2017-06-01

    Over the last three decades, accurate simulation of the interactions of particles with matter and modeling of detector geometries has proven to be of critical importance to the success of the international high-energy physics (HEP) experimental programs. For example, the detailed detector modeling and accurate physics of the Geant4-based simulation software of the CMS and ATLAS particle physics experiments at the European Center of Nuclear Research (CERN) Large Hadron Collider (LHC) was a determining factor for these collaborations to deliver physics results of outstanding quality faster than any hadron collider experiment ever before. This review article highlights the impact of detector simulation on particle physics collider experiments. It presents numerous examples of the use of simulation, from detector design and optimization, through software and computing development and testing, to cases where the use of simulation samples made a difference in the precision of the physics results and publication turnaround, from data-taking to submission. It also presents estimates of the cost and economic impact of simulation in the CMS experiment. Future experiments will collect orders of magnitude more data with increasingly complex detectors, heavily taxing the performance of simulation and reconstruction software. Consequently, exploring solutions to speed up simulation and reconstruction software to satisfy the growing demand for computing resources in a time of flat budgets is a matter that deserves immediate attention. The article ends with a short discussion on the potential solutions that are being considered, based on leveraging core count growth in multicore machines, using new generation coprocessors, and re-engineering HEP code for concurrency and parallel computing.

  10. Teaching physical activities to students with significant disabilities using video modeling.

    PubMed

    Cannella-Malone, Helen I; Mizrachi, Sharona V; Sabielny, Linsey M; Jimenez, Eliseo D

    2013-06-01

    The objective of this study was to examine the effectiveness of video modeling on teaching physical activities to three adolescents with significant disabilities. The study implemented a multiple baseline across six physical activities (three per student): jumping rope, scooter board with cones, ladder drill (i.e., feet going in and out), ladder design (i.e., multiple steps), shuttle run, and disc ride. Additional prompt procedures (i.e., verbal, gestural, visual cues, and modeling) were implemented within the study. After the students mastered the physical activities, we tested to see if they would link the skills together (i.e., complete an obstacle course). All three students made progress learning the physical activities, but only one learned them with video modeling alone (i.e., without error correction). Video modeling can be an effective tool for teaching students with significant disabilities various physical activities, though additional prompting procedures may be needed.

  11. USE OF TRANS-CONTEXTUAL MODEL-BASED PHYSICAL ACTIVITY COURSE IN DEVELOPING LEISURE-TIME PHYSICAL ACTIVITY BEHAVIOR OF UNIVERSITY STUDENTS.

    PubMed

    Müftüler, Mine; İnce, Mustafa Levent

    2015-08-01

    This study examined how a physical activity course based on the Trans-Contextual Model affected the variables of perceived autonomy support, autonomous motivation, determinants of leisure-time physical activity behavior, basic psychological needs satisfaction, and leisure-time physical activity behaviors. The participants were 70 Turkish university students (M age=23.3 yr., SD=3.2). A pre-test-post-test control group design was constructed. Initially, the participants were randomly assigned into an experimental (n=35) and a control (n=35) group. The experimental group followed a 12 wk. trans-contextual model-based intervention. The participants were pre- and post-tested in terms of Trans-Contextual Model constructs and of self-reported leisure-time physical activity behaviors. Multivariate analyses showed significant increases over the 12 wk. period for perceived autonomy support from instructor and peers, autonomous motivation in leisure-time physical activity setting, positive intention and perceived behavioral control over leisure-time physical activity behavior, more fulfillment of psychological needs, and more engagement in leisure-time physical activity behavior in the experimental group. These results indicated that the intervention was effective in developing leisure-time physical activity and indicated that the Trans-Contextual Model is a useful way to conceptualize these relationships.

  12. A model for undergraduate physics major outcomes objectives

    NASA Astrophysics Data System (ADS)

    Taylor, G. R.; Erwin, T. Dary

    1989-06-01

    Concern with assessment of student outcomes of undergraduate physics major programs is rapidly rising. The Southern Association of Colleges and Schools and many other regional and state organizations are requiring explicit outcomes assessment in the accrediting process. The first step in this assessment process for major programs is the establishment of student outcomes objectives. A model and set of physics outcomes (educational) objectives that were developed by the faculty in the Physics Department at James Madison University are presented.

  13. Constraining new physics models with isotope shift spectroscopy

    NASA Astrophysics Data System (ADS)

    Frugiuele, Claudia; Fuchs, Elina; Perez, Gilad; Schlaffer, Matthias

    2017-07-01

    Isotope shifts of transition frequencies in atoms constrain generic long- and intermediate-range interactions. We focus on new physics scenarios that can be most strongly constrained by King linearity violation, such as models with B-L vector bosons, the Higgs portal, and chameleon models. With the anticipated precision, King linearity violation has the potential to set the strongest laboratory bounds on these models in some regions of parameter space. Furthermore, we show that this method can probe the couplings relevant for the protophobic interpretation of the recently reported Be anomaly. We extend the formalism to include an arbitrary number of transitions and isotope pairs and fit the new physics coupling to the currently available isotope shift measurements.
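
    The King-linearity test itself amounts to checking that mass-modified isotope shifts of two transitions fall on a straight line; the synthetic fit below shows the bookkeeping only (the numbers are invented and carry no physics).

      import numpy as np

      mu_inv = np.array([1.00, 0.95, 0.91, 0.87])       # inverse reduced-mass differences (arb. units)
      shift_1 = np.array([100.0, 95.2, 91.1, 87.3])     # isotope shifts of transition 1 (arb. units)
      shift_2 = np.array([200.3, 190.6, 182.5, 174.8])  # isotope shifts of transition 2 (arb. units)

      m1, m2 = shift_1 / mu_inv, shift_2 / mu_inv       # "modified" isotope shifts
      slope, intercept = np.polyfit(m1, m2, 1)          # the King line
      residuals = m2 - (slope * m1 + intercept)         # significant nonlinearity would hint at new physics

      print("King-line residuals:", np.round(residuals, 3))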

  14. Generalization of the normal-exponential model: exploration of a more accurate parametrisation for the signal distribution on Illumina BeadArrays.

    PubMed

    Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv

    2012-12-11

    Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to a substantial loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of the exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) display a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would represent a better modeling of the signal density. Hence, the normal-exponential modeling may not be appropriate for Illumina data and background corrections derived from this model may lead to incorrect estimates. We propose a more flexible modeling based on a gamma-distributed signal and a normally distributed background noise and develop the associated background correction, implemented in the R-package NormalGamma. Our model proves to be markedly more accurate for Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a better fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma modeling. The performances of the background corrections based on the normal-gamma and normal-exponential models are compared on two dilution data sets, through testing procedures which represent various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity. These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement
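
    The convolution model itself is compact enough to sketch: the observed intensity is X = S + B with a gamma-distributed signal S and a normally distributed background B, and the model-based correction replaces X by E[S | X]. The parameters below are invented, not estimated from BeadArray data.

      import numpy as np
      from scipy.stats import gamma, norm

      shape, scale = 1.5, 80.0          # assumed gamma signal parameters
      mu, sigma = 100.0, 15.0           # assumed normal background parameters

      def corrected_intensity(x, n=4000):
          # E[S | X = x] for X = S + B, computed by straightforward numerical integration
          s = np.linspace(1e-6, x + 10.0 * scale, n)
          w = gamma.pdf(s, a=shape, scale=scale) * norm.pdf(x - s, loc=mu, scale=sigma)
          return np.sum(s * w) / np.sum(w)

      for x in (110.0, 180.0, 400.0):
          naive = x - mu                # plain background subtraction (can go negative)
          print(f"X = {x:5.0f}: subtraction -> {naive:6.1f}, normal-gamma E[S|X] -> {corrected_intensity(x):6.1f}")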

  15. Chemoviscosity modeling for thermosetting resins

    NASA Technical Reports Server (NTRS)

    Tiwari, S. N.; Hou, T. H.; Bai, J. M.

    1985-01-01

    A chemoviscosity model was established that accurately describes viscosity rise profiles under various cure cycles and correlates viscosity data with the changes in physical properties associated with structural transformations of the thermosetting resin system during cure. Work completed on chemoviscosity modeling for thermosetting resins is reported.

  16. Peppytides: Interactive Models of Polypeptide Chains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zuckermann, Ron; Chakraborty, Promita; Derisi, Joe

    2014-01-21

    Peppytides are scaled, 3D-printed models of polypeptide chains that can be folded into accurate protein structures. Designed and created by Berkeley Lab Researcher, Promita Chakraborty, and Berkeley Lab Senior Scientist, Dr. Ron Zuckermann, Peppytides are accurate physical models of polypeptide chains that anyone can interact with and fold into various protein structures - proving to be a great educational tool, resulting in a deeper understanding of these fascinating structures and how they function. Build your own Peppytide model and learn about how nature's machines fold into their intricate architectures!

  17. Peppytides: Interactive Models of Polypeptide Chains

    ScienceCinema

    Zuckermann, Ron; Chakraborty, Promita; Derisi, Joe

    2018-06-08

    Peppytides are scaled, 3D-printed models of polypeptide chains that can be folded into accurate protein structures. Designed and created by Berkeley Lab Researcher, Promita Chakraborty, and Berkeley Lab Senior Scientist, Dr. Ron Zuckermann, Peppytides are accurate physical models of polypeptide chains that anyone can interact with and fold into various protein structures - proving to be a great educational tool, resulting in a deeper understanding of these fascinating structures and how they function. Build your own Peppytide model and learn about how nature's machines fold into their intricate architectures!

  18. Physics Bus: An Innovative Model for Public Engagement

    NASA Astrophysics Data System (ADS)

    Fox, Claire

    The Physics Bus is about doing science for fun. It is an innovative model for science outreach whose mission is to awaken joy and excitement in physics for all ages and walks of life, especially those underserved by science enrichment. It is a mobile exhibition of upcycled appliances, reimagined by kids, that showcase captivating physics phenomena. Inside our spaceship-themed school bus, visitors will find: a microwave ionized-gas disco-party, fog rings that shoot from a wheelbarrow tire, a TV whose electron beam is controlled by a toy keyboard, and over 20 other themed exhibits. The Physics Bus serves a wide range of the public in diverse locations, from local neighborhoods, urban parks and rural schools to cross-country destinations. Its approachable, friendly and relaxed environment allows for self-paced and self-directed interactions, providing a positive and engaging experience with science. We believe that this environment enriches lives and inspires people. In this presentation we will talk about the nuts and bolts that make this model work, how the project got started, and the resources that keep it going. We will talk about the advantages of being a grassroots and community-based organization, and how programs like this can best interface with universities. We will explain the benefits of focusing on direct interactions and why our model avoids "teaching" physics content with words. Situating our approach within a body of research on the value of informal science, we will discuss our success in capturing and engaging our audience. By the end of this presentation we hope to broaden your perception of what makes a successful outreach program and encourage you to value and support alternative outreach models such as this one. In Collaboration with: Eva Luna, Cornell University; Erik Herman, Cornell University; Christopher Bell, Ithaca City School District.

  19. Understanding physical (in-) activity, overweight, and obesity in childhood: Effects of congruence between physical self-concept and motor competence.

    PubMed

    Utesch, T; Dreiskämper, D; Naul, R; Geukes, K

    2018-04-12

    Both the physical self-concept and actual motor competence are important for healthy future physical activity levels and, consequently, for decreasing overweight and obesity in childhood. However, children scoring high on motor competence do not necessarily report high levels of physical self-concept and vice versa, resulting in respective (in-) accuracy, also referred to as (non-) veridicality. This study examines whether children's accuracy of physical self-concept is a meaningful predictive factor for their future physical activity. Motor competence, physical self-concept and physical activity were assessed in 3rd grade and one year later in 4th grade. Children's weight status was categorized based on WHO recommendations. Polynomial regressions with response surface analyses were conducted with a quasi-DIF approach examining moderating weight status effects. Analyses revealed that children with higher motor competence levels and higher self-perceptions show greater physical activity. Importantly, children who perceive their motor competence more accurately (compared to less accurately) show more future physical activity. This effect is strong for underweight and overweight/obese children, but weak for normal weight children. This study indicates that an accurate self-perception of motor competence fosters future physical activity beyond the single main effects. Hence, the promotion of actual motor competence should be linked with the respective development of accurate self-knowledge.
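
    A schematic of the kind of polynomial-regression/response-surface analysis mentioned above (not the authors' exact specification, which also includes the quasi-DIF weight-status moderation) might look as follows; the variable names, data and coefficients are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300

# Hypothetical standardized scores
motor = rng.normal(size=n)                              # actual motor competence
selfc = 0.5 * motor + rng.normal(scale=0.9, size=n)     # physical self-concept
activity = (0.4 * motor + 0.3 * selfc
            - 0.2 * (selfc - motor) ** 2                # congruence effect (invented)
            + rng.normal(scale=0.8, size=n))

# Second-order polynomial (response-surface) model:
# activity ~ b0 + b1*selfc + b2*motor + b3*selfc^2 + b4*selfc*motor + b5*motor^2
X = np.column_stack([selfc, motor, selfc**2, selfc * motor, motor**2])
X = sm.add_constant(X)
fit = sm.OLS(activity, X).fit()
print(fit.params)

# Response-surface analysis then inspects the fitted surface along the line of
# congruence (selfc == motor) and the line of incongruence (selfc == -motor)
# to judge whether accuracy of self-perception matters beyond the main effects.
```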

  20. Time-Centric Models For Designing Embedded Cyber-physical Systems

    DTIC Science & Technology

    2009-10-09

    Time-centric Models For Designing Embedded Cyber-Physical Systems. John C. Eidson, Edward A. Lee, Slobodan Matic, Sanjit A. Seshia, Jia Zou, Electrical... implementations, such a uniform notion of time cannot be precisely realized. Time-triggered networks [10] and time synchronization [9] can be used to

  1. Time-Accurate Simulations and Acoustic Analysis of Slat Free-Shear-Layer. Part 2

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Singer, Bart A.; Lockard, David P.

    2002-01-01

    Unsteady computational simulations of a multi-element, high-lift configuration are performed. Emphasis is placed on accurate spatiotemporal resolution of the free shear layer in the slat-cove region. The excessive dissipative effects of the turbulence model, so prevalent in previous simulations, are circumvented by switching off the turbulence-production term in the slat cove region. The justifications and physical arguments for taking such a step are explained in detail. The removal of this excess damping allows the shear layer to amplify large-scale structures, to achieve a proper non-linear saturation state, and to permit vortex merging. The large-scale disturbances are self-excited, and unlike our prior fully turbulent simulations, no external forcing of the shear layer is required. To obtain the farfield acoustics, the Ffowcs Williams and Hawkings equation is evaluated numerically using the simulated time-accurate flow data. The present comparison between the computed and measured farfield acoustic spectra shows much better agreement for the amplitude and frequency content than past calculations. The effects of the angle of attack on the slat's flow features and radiated acoustic field are also simulated and presented.

  2. Patients' mental models and adherence to outpatient physical therapy home exercise programs.

    PubMed

    Rizzo, Jon

    2015-05-01

    Within physical therapy, patient adherence usually relates to attending appointments, following advice, and/or undertaking prescribed exercise. Similar to findings for general medical adherence, patient adherence to physical therapy home exercise programs (HEP) is estimated between 35 and 72%. Adherence to HEPs is a multifactorial and poorly understood phenomenon, with no consensus regarding a common theoretical framework that best guides empirical or clinical efforts. Mental models, a construct used to explain behavior and decision-making in the social sciences, may serve as this framework. Mental models comprise an individual's tacit thoughts about how the world works. They include assumptions about new experiences and expectations for the future based on implicit comparisons between current and past experiences. Mental models play an important role in decision-making and guiding actions. This professional theoretical article discusses empirical research demonstrating relationships among mental models, prior experience, and adherence decisions in medical and physical therapy contexts. Specific issues related to mental models and physical therapy patient adherence are discussed, including the importance of articulation of patients' mental models, assessment of patients' mental models that relate to exercise program adherence, discrepancy between patient and provider mental models, and revision of patients' mental models in ways that enhance adherence. The article concludes with practical implications for physical therapists and recommendations for further research to better understand the role of mental models in physical therapy patient adherence behavior.

  3. Novel Physical Model for DC Partial Discharge in Polymeric Insulators

    NASA Astrophysics Data System (ADS)

    Andersen, Allen; Dennison, J. R.

    The physics of DC partial discharge (DCPD) continues to pose a challenge to researchers. We present a new physically-motivated model of DCPD in amorphous polymers based on our dual-defect model of dielectric breakdown. The dual-defect model is an extension of standard static mean field theories, such as the Crine model, that describe avalanche breakdown of charge carriers trapped on uniformly distributed defect sites. It assumes the presence of both high-energy chemical defects and low-energy thermally-recoverable physical defects. We present our measurements of breakdown and DCPD for several common polymeric materials in the context of this model. Improved understanding of DCPD and how it relates to eventual dielectric breakdown is critical to the fields of spacecraft charging, high voltage DC power distribution, high density capacitors, and microelectronics. This work was supported by a NASA Space Technology Research Fellowship.

  4. The illness/non-illness model: hypnotherapy for physically ill patients.

    PubMed

    Navon, Shaul

    2014-07-01

    This article proposes a focused, novel sub-set of the cognitive behavioral therapy approach to hypnotherapy for physically ill patients, based upon the illness/non-illness psychotherapeutic model for physically ill patients. The model is based on three logical rules used in differentiating illness from non-illness: duality, contradiction, and complementarity. The article discusses the use of hypnotic interventions to help physically ill and/or disabled patients distinguish between illness and non-illness in their psychotherapeutic themes and attitudes. Two case studies illustrate that patients in this special population group can be taught to learn the language of change and to use this language to overcome difficult situations. The model suggests a new clinical mode of treatment in which individuals who are physically ill and/or disabled are helped in coping with actual motifs and thoughts related to non-illness or non-disability.

  5. Manually locating physical and virtual reality objects.

    PubMed

    Chen, Karen B; Kimmel, Ryan A; Bartholomew, Aaron; Ponto, Kevin; Gleicher, Michael L; Radwin, Robert G

    2014-09-01

    In this study, we compared how users locate physical and equivalent three-dimensional images of virtual objects in a cave automatic virtual environment (CAVE) using the hand to examine how human performance (accuracy, time, and approach) is affected by object size, location, and distance. Virtual reality (VR) offers the promise to flexibly simulate arbitrary environments for studying human performance. Previously, VR researchers primarily considered differences between virtual and physical distance estimation rather than reaching for close-up objects. Fourteen participants completed manual targeting tasks that involved reaching for corners on equivalent physical and virtual boxes of three different sizes. Predicted errors were calculated from a geometric model based on user interpupillary distance, eye location, distance from the eyes to the projector screen, and object. Users were 1.64 times less accurate (p < .001) and spent 1.49 times more time (p = .01) targeting virtual versus physical box corners using the hands. Predicted virtual targeting errors were on average 1.53 times (p < .05) greater than the observed errors for farther virtual targets but not significantly different for close-up virtual targets. Target size, location, and distance, in addition to binocular disparity, affected virtual object targeting inaccuracy. Observed virtual box inaccuracy was less than predicted for farther locations, suggesting possible influence of cues other than binocular vision. Human physical interaction with objects in VR for simulation, training, and prototyping involving reaching and manually handling virtual objects in a CAVE is more accurate than predicted when locating farther objects.

  6. An accurate real-time model of maglev planar motor based on compound Simpson numerical integration

    NASA Astrophysics Data System (ADS)

    Kou, Baoquan; Xing, Feng; Zhang, Lu; Zhou, Yiheng; Liu, Jiaqi

    2017-05-01

    To realize the high-speed and precise control of the maglev planar motor, a more accurate real-time electromagnetic model, which considers the influence of the coil corners, is proposed in this paper. Three coordinate systems for the stator, mover and corner coil are established. The coil is divided into two segments, the straight coil segment and the corner coil segment, in order to obtain a complete electromagnetic model. When only the first harmonic of the flux density distribution of a Halbach magnet array is taken into account, the integration can be carried out over the two segments according to the Lorentz force law. The force and torque formulas of the straight coil segment can be derived directly from the Newton-Leibniz formula; however, this is not applicable to the corner coil segment. Therefore, a compound Simpson numerical integration method is proposed in this paper to handle the corner segment. Validated by simulation and experiment, the proposed model has high accuracy and can be readily applied in practice.
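
    For reference, a composite (compound) Simpson integration of a generic force-density function along a coil segment might look like the sketch below; the integrand is a stand-in, not the Lorentz force expression from the paper.

```python
import numpy as np

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    if n % 2:
        raise ValueError("n must be even")
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3 * (y[0] + y[-1] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum())

# Stand-in force density along a corner coil segment, parametrised by angle
force_density = lambda theta: np.cos(theta) * np.exp(-0.5 * theta)

F = composite_simpson(force_density, 0.0, np.pi / 2, n=16)
print(f"integrated force (arbitrary units): {F:.6f}")
```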

  7. Electromagnetic physics models for parallel computing architectures

    DOE PAGES

    Amadio, G.; Ananya, A.; Apostolakis, J.; ...

    2016-11-21

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Finally, the results of preliminary performance evaluation and physics validation are presented as well.

  8. An accurate metric for the spacetime around rotating neutron stars

    NASA Astrophysics Data System (ADS)

    Pappas, George

    2017-04-01

    The problem of having an accurate description of the spacetime around rotating neutron stars is of great astrophysical interest. For astrophysical applications, one needs to have a metric that captures all the properties of the spacetime around a rotating neutron star. Furthermore, an accurate, appropriately parametrized metric, i.e., a metric that is given in terms of parameters that are directly related to the physical structure of the neutron star, could be used to solve the inverse problem, which is to infer the properties of the structure of a neutron star from astrophysical observations. In this work, we present such an approximate stationary and axisymmetric metric for the exterior of rotating neutron stars, which is constructed using the Ernst formalism and is parametrized by the relativistic multipole moments of the central object. This metric is given in terms of an expansion on the Weyl-Papapetrou coordinates with the multipole moments as free parameters and is shown to be extremely accurate in capturing the physical properties of a neutron star spacetime as they are calculated numerically in general relativity. Because the metric is given in terms of an expansion, the expressions are much simpler and easier to implement, in contrast to previous approaches. For the parametrization of the metric in general relativity, the recently discovered universal 3-hair relations are used to produce a three-parameter metric. Finally, a straightforward extension of this metric is given for scalar-tensor theories with a massless scalar field, which also admit a formulation in terms of an Ernst potential.

  9. A gauged finite-element potential formulation for accurate inductive and galvanic modelling of 3-D electromagnetic problems

    NASA Astrophysics Data System (ADS)

    Ansari, S. M.; Farquharson, C. G.; MacLachlan, S. P.

    2017-07-01

    In this paper, a new finite-element solution to the potential formulation of the geophysical electromagnetic (EM) problem that explicitly implements the Coulomb gauge, and that accurately computes the potentials and hence inductive and galvanic components, is proposed. The modelling scheme is based on using unstructured tetrahedral meshes for domain subdivision, which enables both realistic Earth models of complex geometries to be considered and efficient spatially variable refinement of the mesh to be done. For the finite-element discretization, edge and nodal elements are used for approximating the vector and scalar potentials, respectively. The issue of non-unique, incorrect potentials from the numerical solution of the usual incomplete-gauged potential system is demonstrated for a benchmark model from the literature that uses an electric-type EM source, through investigating the interface continuity conditions for both the normal and tangential components of the potential vectors, and by showing inconsistent results obtained from iterative and direct linear equation solvers. By explicitly introducing the Coulomb gauge condition as an extra equation, and by augmenting the Helmholtz equation with the gradient of a Lagrange multiplier, an explicitly gauged system for the potential formulation is formed. The solution to the discretized form of this system is validated for the above-mentioned example and for another classic example that uses a magnetic EM source. In order to stabilize the iterative solution of the gauged system, a block diagonal pre-conditioning scheme that is based upon the Schur complement of the potential system is used. For all examples, both the iterative and direct solvers produce the same responses for the potentials, demonstrating the uniqueness of the numerical solution for the potentials and fixing the problems with the interface conditions between cells observed for the incomplete-gauged system. These solutions of the gauged system also

  10. A haptic model of vibration modes in spherical geometry and its application in atomic physics, nuclear physics and beyond

    NASA Astrophysics Data System (ADS)

    Ubben, Malte; Heusler, Stefan

    2018-07-01

    Vibration modes in spherical geometry can be classified based on the number and position of nodal planes. However, the geometry of these planes is non-trivial and cannot be easily displayed in two dimensions. We present 3D-printed models of those vibration modes, enabling a haptic approach for understanding essential features of bound states in quantum physics and beyond. In particular, when applied to atomic physics, atomic orbitals are obtained in a natural manner. Applied to nuclear physics, the same patterns of vibration modes emerge as a cornerstone of the nuclear shell model. These applications of the very same model over a range of more than 5 orders of magnitude in length scale lead to a general discussion of the applicability and limits of validity of physical models in general.

  11. Investigating ice cliff evolution and contribution to glacier mass-balance using a physically-based dynamic model

    NASA Astrophysics Data System (ADS)

    Buri, Pascal; Miles, Evan; Ragettli, Silvan; Brun, Fanny; Steiner, Jakob; Pellicciotti, Francesca

    2016-04-01

    Supraglacial cliffs are a surface feature typical of debris-covered glaciers, affecting surface evolution, glacier downwasting and mass balance by providing a direct ice-atmosphere interface. As a result, melt rates can be very high and ice cliffs may account for a significant portion of the total glacier mass loss. However, their contribution to glacier mass balance has rarely been quantified through physically-based models. Most cliff energy balance models are point scale models which calculate energy fluxes at individual cliff locations. Results from the only grid based model to date accurately reflect energy fluxes and cliff melt, but modelled backwasting patterns are in some cases unrealistic, as the distribution of melt rates would lead to progressive shallowing and disappearance of cliffs. Based on a unique multitemporal dataset of cliff topography and backwasting obtained from high-resolution terrestrial and aerial Structure-from-Motion analysis on Lirung Glacier in Nepal, it is apparent that cliffs exhibit a range of behaviours but most do not rapidly disappear. The patterns of evolution cannot be explained satisfactorily by atmospheric melt alone, and are moderated by the presence of supraglacial ponds at the base of cliffs and by cliff reburial with debris. Here, we document the distinct patterns of evolution including disappearance, growth and stability. We then use these observations to improve the grid-based energy balance model, implementing periodic updates of the cliff geometry resulting from modelled melt perpendicular to the ice surface. Based on a slope threshold, pixels can be reburied by debris or become debris-free. The effect of ponds is taken into account through enhanced melt rates in the horizontal direction on pixels selected based on an algorithm considering distance to the water surface, slope and lake level. We use the dynamic model to first study the evolution of selected cliffs for which accurate, high resolution DEMs are available

  12. The importance of accurate muscle modelling for biomechanical analyses: a case study with a lizard skull

    PubMed Central

    Gröning, Flora; Jones, Marc E. H.; Curtis, Neil; Herrel, Anthony; O'Higgins, Paul; Evans, Susan E.; Fagan, Michael J.

    2013-01-01

    Computer-based simulation techniques such as multi-body dynamics analysis are becoming increasingly popular in the field of skull mechanics. Multi-body models can be used for studying the relationships between skull architecture, muscle morphology and feeding performance. However, to be confident in the modelling results, models need to be validated against experimental data, and the effects of uncertainties or inaccuracies in the chosen model attributes need to be assessed with sensitivity analyses. Here, we compare the bite forces predicted by a multi-body model of a lizard (Tupinambis merianae) with in vivo measurements, using anatomical data collected from the same specimen. This subject-specific model predicts bite forces that are very close to the in vivo measurements and also shows a consistent increase in bite force as the bite position is moved posteriorly on the jaw. However, the model is very sensitive to changes in muscle attributes such as fibre length, intrinsic muscle strength and force orientation, with bite force predictions varying considerably when these three variables are altered. We conclude that accurate muscle measurements are crucial to building realistic multi-body models and that subject-specific data should be used whenever possible. PMID:23614944

  13. Mental Models in Expert Physics Reasoning.

    ERIC Educational Resources Information Center

    Roschelle, Jeremy; Greeno, James G.

    Proposed is a relational framework for characterizing experienced physicists' representations of physics problem situations and the process of constructing these representations. A representation includes a coherent set of relations among: (1) a mental model of the objects in the situation, along with their relevant properties and relations; (2) a…

  14. Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep Learning Model.

    PubMed

    Wang, Sheng; Sun, Siqi; Li, Zhen; Zhang, Renyu; Xu, Jinbo

    2017-01-01

    Protein contacts contain key information for the understanding of protein structure and function and thus contact prediction from sequence is an important problem. Recently exciting progress has been made on this problem, but the predicted contacts for proteins without many sequence homologs are still of low quality and not very useful for de novo structure prediction. This paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual neural networks. The first residual network conducts a series of 1-dimensional convolutional transformations of sequential features; the second residual network conducts a series of 2-dimensional convolutional transformations of pairwise information including output of the first residual network, EC information and pairwise potential. By using very deep residual networks, we can accurately model contact occurrence patterns and complex sequence-structure relationships and thus obtain higher-quality contact prediction regardless of how many sequence homologs are available for proteins in question. Our method greatly outperforms existing methods and leads to much more accurate contact-assisted folding. Tested on 105 CASP11 targets, 76 past CAMEO hard targets, and 398 membrane proteins, the average top L long-range prediction accuracy obtained by our method, one representative EC method CCMpred and the CASP11 winner MetaPSICOV is 0.47, 0.21 and 0.30, respectively; the average top L/10 long-range accuracy of our method, CCMpred and MetaPSICOV is 0.77, 0.47 and 0.59, respectively. Ab initio folding using our predicted contacts as restraints but without any force fields can yield correct folds (i.e., TMscore>0.6) for 203 of the 579 test proteins, while that using MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 of them, respectively. Our contact-assisted models also have

  15. Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep Learning Model

    PubMed Central

    Li, Zhen; Zhang, Renyu

    2017-01-01

    Motivation: Protein contacts contain key information for the understanding of protein structure and function and thus contact prediction from sequence is an important problem. Recently exciting progress has been made on this problem, but the predicted contacts for proteins without many sequence homologs are still of low quality and not very useful for de novo structure prediction. Method: This paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual neural networks. The first residual network conducts a series of 1-dimensional convolutional transformations of sequential features; the second residual network conducts a series of 2-dimensional convolutional transformations of pairwise information including output of the first residual network, EC information and pairwise potential. By using very deep residual networks, we can accurately model contact occurrence patterns and complex sequence-structure relationships and thus obtain higher-quality contact prediction regardless of how many sequence homologs are available for proteins in question. Results: Our method greatly outperforms existing methods and leads to much more accurate contact-assisted folding. Tested on 105 CASP11 targets, 76 past CAMEO hard targets, and 398 membrane proteins, the average top L long-range prediction accuracy obtained by our method, one representative EC method CCMpred and the CASP11 winner MetaPSICOV is 0.47, 0.21 and 0.30, respectively; the average top L/10 long-range accuracy of our method, CCMpred and MetaPSICOV is 0.77, 0.47 and 0.59, respectively. Ab initio folding using our predicted contacts as restraints but without any force fields can yield correct folds (i.e., TMscore>0.6) for 203 of the 579 test proteins, while that using MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 of them, respectively. Our contact
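
    A heavily condensed PyTorch sketch of the two-stage residual architecture described in these two records, with 1-D residual blocks on sequential features, an outer pairwise combination, and 2-D residual blocks on the pairwise map, is shown below; the channel counts, depths and pairwise-feature construction are illustrative assumptions, not the published network.

```python
import torch
import torch.nn as nn

class ResBlock1D(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv1d(ch, ch, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(ch, ch, kernel_size=3, padding=1)
        self.act = nn.ReLU()
    def forward(self, x):
        return self.act(x + self.conv2(self.act(self.conv1(x))))

class ResBlock2D(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, kernel_size=3, padding=1)
        self.act = nn.ReLU()
    def forward(self, x):
        return self.act(x + self.conv2(self.act(self.conv1(x))))

class ContactNet(nn.Module):
    def __init__(self, seq_feats=26, pair_feats=3, ch1=32, ch2=64, n1=2, n2=4):
        super().__init__()
        self.embed1d = nn.Conv1d(seq_feats, ch1, kernel_size=1)
        self.res1d = nn.Sequential(*[ResBlock1D(ch1) for _ in range(n1)])
        self.embed2d = nn.Conv2d(2 * ch1 + pair_feats, ch2, kernel_size=1)
        self.res2d = nn.Sequential(*[ResBlock2D(ch2) for _ in range(n2)])
        self.out = nn.Conv2d(ch2, 1, kernel_size=1)

    def forward(self, seq, pair):
        # seq: (B, seq_feats, L); pair: (B, pair_feats, L, L), e.g. EC + potential
        h = self.res1d(self.embed1d(seq))             # (B, ch1, L)
        L = h.shape[-1]
        hi = h.unsqueeze(-1).expand(-1, -1, L, L)     # broadcast along columns
        hj = h.unsqueeze(-2).expand(-1, -1, L, L)     # broadcast along rows
        x = torch.cat([hi, hj, pair], dim=1)          # pairwise feature map
        x = self.res2d(self.embed2d(x))
        return torch.sigmoid(self.out(x)).squeeze(1)  # (B, L, L) contact probabilities

net = ContactNet()
probs = net(torch.randn(1, 26, 50), torch.randn(1, 3, 50, 50))
print(probs.shape)  # torch.Size([1, 50, 50])
```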

  16. De-embedding technique for accurate modeling of compact 3D MMIC CPW transmission lines

    NASA Astrophysics Data System (ADS)

    Pohan, U. H.; KKyabaggu, P. B.; Sinulingga, E. P.

    2018-02-01

    Requirements for high-density and high-functionality microwave and millimeter-wave circuits have led to innovative circuit architectures such as three-dimensional multilayer MMICs. The major advantage of the multilayer techniques is that one can employ passive and active components based on CPW technology. In this work, MMIC coplanar waveguide (CPW) components such as transmission lines (TL) are modeled in their 3D layouts. The main characteristics of the CPW TL, which suffer from the probe pads' parasitic and resonant frequency effects, have been studied. By understanding the parasitic effects, a novel de-embedding technique is developed in order to accurately predict the high frequency characteristics of the designed MMICs. The novel de-embedding technique is shown to be critical in significantly reducing the probe pad parasitics in the model. As a result, the high frequency characteristics of the designed MMICs have been presented with minimal parasitic effects of the probe pads. The de-embedding process optimises the determination of the main characteristics of compact 3D MMIC CPW transmission lines.
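
    The abstract does not spell out the de-embedding equations; for orientation, the classical open-short de-embedding of pad parasitics from measured two-port Y-parameters, a common baseline against which novel schemes are compared, can be written as the short sketch below. The single-frequency Y-matrices are made-up values, not measurements from the paper.

```python
import numpy as np

def open_short_deembed(y_meas, y_open, y_short):
    """Classical open-short de-embedding of probe-pad parasitics.

    y_meas, y_open, y_short: 2x2 Y-parameter matrices (same frequency point)
    of the embedded DUT, the open dummy and the short dummy.
    Returns the de-embedded 2x2 Y-parameters of the DUT.
    """
    y1 = y_meas - y_open               # remove parallel pad admittances
    y2 = y_short - y_open              # isolate series interconnect impedances
    z_dut = np.linalg.inv(y1) - np.linalg.inv(y2)
    return np.linalg.inv(z_dut)

# Illustrative (made-up) single-frequency Y-matrices in siemens
y_meas = np.array([[0.02 + 0.010j, -0.015 - 0.008j],
                   [-0.015 - 0.008j, 0.02 + 0.010j]])
y_open = np.array([[0.001 + 0.002j, -0.0002 - 0.0005j],
                   [-0.0002 - 0.0005j, 0.001 + 0.002j]])
y_short = np.array([[0.5 - 0.3j, -0.45 + 0.28j],
                    [-0.45 + 0.28j, 0.5 - 0.3j]])

print(open_short_deembed(y_meas, y_open, y_short))
```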

  17. The effectiveness of collaborative problem based physics learning (CPBPL) model to improve student’s self-confidence on physics learning

    NASA Astrophysics Data System (ADS)

    Prahani, B. K.; Suprapto, N.; Suliyanah; Lestari, N. A.; Jauhariyah, M. N. R.; Admoko, S.; Wahyuni, S.

    2018-03-01

    In previous research, the Collaborative Problem Based Physics Learning (CPBPL) model was developed to improve students' science process skills, collaborative problem solving, and self-confidence on physics learning. This research aims to analyze the effectiveness of the CPBPL model in improving students' self-confidence on physics learning. The research implemented a quasi-experimental design on 140 senior high school students who were divided into 4 groups. Data collection was conducted through questionnaire, observation, and interview. Self-confidence was measured with the Self-Confidence Evaluation Sheet (SCES). The data were analyzed using the Wilcoxon test, n-gain, and the Kruskal-Wallis test. Results show that: (1) there is a significant improvement in students' self-confidence on physics learning (α = 5%), (2) the n-gain of students' self-confidence on physics learning is high, and (3) the average n-gain of students' self-confidence on physics learning was consistent across all groups. It can be concluded that the CPBPL model is effective in improving students' self-confidence on physics learning.
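
    For readers unfamiliar with the analysis pipeline named above, a minimal sketch of the normalized gain (n-gain) computation together with the Wilcoxon and Kruskal-Wallis tests is given below; the scores are invented and a maximum score of 100 is assumed.

```python
import numpy as np
from scipy.stats import wilcoxon, kruskal

rng = np.random.default_rng(3)

# Hypothetical pre/post self-confidence scores (0-100) for 4 groups of 35 students
groups_pre = [rng.uniform(40, 60, 35) for _ in range(4)]
groups_post = [pre + rng.uniform(5, 25, 35) for pre in groups_pre]

def n_gain(pre, post, max_score=100.0):
    """Hake's normalized gain <g> = (post - pre) / (max - pre)."""
    return (post - pre) / (max_score - pre)

for i, (pre, post) in enumerate(zip(groups_pre, groups_post), start=1):
    stat, p = wilcoxon(pre, post)            # paired, non-parametric pre/post test
    g = n_gain(pre, post).mean()
    print(f"group {i}: Wilcoxon p = {p:.3g}, mean n-gain = {g:.2f}")

# Consistency of gains across the four groups (Kruskal-Wallis)
gains = [n_gain(pre, post) for pre, post in zip(groups_pre, groups_post)]
H, p = kruskal(*gains)
print(f"Kruskal-Wallis across groups: H = {H:.2f}, p = {p:.3g}")
```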

  18. Physical composition

    NASA Astrophysics Data System (ADS)

    Healey, Richard

    2013-02-01

    Atomistic metaphysics motivated an explanatory strategy which science has pursued with great success since the scientific revolution. By decomposing matter into its atomic and subatomic parts physics gave us powerful explanations and accurate predictions as well as providing a unifying framework for the rest of science. The success of the decompositional strategy has encouraged a widespread conviction that the physical world forms a compositional hierarchy that physics and other sciences are progressively articulating. But this conviction does not stand up to a closer examination of how physics has treated composition, as a variety of case studies will show.

  19. Physical models of polarization mode dispersion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menyuk, C.R.; Wai, P.K.A.

    The effect of randomly varying birefringence on light propagation in optical fibers is studied theoretically in the parameter regime that will be used for long-distance communications. In this regime, the birefringence is large and varies very rapidly in comparison to the nonlinear and dispersive scale lengths. We determine the polarization mode dispersion, and we show that physically realistic models yield the same result for polarization mode dispersion as earlier heuristic models that were introduced by Poole. We also prove an ergodic theorem.

  20. Ensemble predictive model for more accurate soil organic carbon spectroscopic estimation

    NASA Astrophysics Data System (ADS)

    Vašát, Radim; Kodešová, Radka; Borůvka, Luboš

    2017-07-01

    A myriad of signal pre-processing strategies and multivariate calibration techniques has been explored in an attempt to improve the spectroscopic prediction of soil organic carbon (SOC) over the last few decades. Coming up with a novel, more powerful and more accurate predictive approach that beats the existing ones has therefore become a challenging task. One way forward, however, is to combine several individual predictions into a single final one (following ensemble learning theory). As this approach performs best when combining inherently different predictive algorithms calibrated with structurally different predictor variables, we tested predictors of two different kinds: 1) reflectance values (or transforms) at each wavelength and 2) absorption feature parameters. Consequently, we applied four different calibration techniques, two for each type of predictor: a) partial least squares regression and support vector machines for type 1, and b) multiple linear regression and random forest for type 2. The weights assigned to the individual predictions within the ensemble model (constructed as a weighted average) were determined by an automated procedure that ensured the best solution among all possible ones was selected. The approach was tested on soil samples taken from the surface horizon of four sites differing in the prevailing soil units. By employing the ensemble predictive model the prediction accuracy of SOC improved at all four sites. The coefficient of determination in cross-validation (R2cv) increased from 0.849, 0.611, 0.811 and 0.644 (the best individual predictions) to 0.864, 0.650, 0.824 and 0.698 for Sites 1, 2, 3 and 4, respectively. Generally, the ensemble model affected the final prediction so that the maximal deviations of predicted vs. observed values of the individual predictions were reduced, and thus the correlation cloud became thinner, as desired.
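
    A minimal sketch of a weighted-average ensemble of heterogeneous SOC predictions is given below, with weights found by non-negative least squares against cross-validated predictions; this is one plausible automated weighting procedure, not necessarily the one used in the paper, and the four "member" prediction vectors are placeholders for PLSR, SVM, MLR and random-forest outputs.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)

# Observed SOC and cross-validated predictions of four member models
# (placeholder data; in practice these would come from PLSR, SVM, MLR, RF).
y = rng.uniform(0.5, 4.0, 120)
members = np.column_stack([
    y + rng.normal(0, 0.35, y.size),
    y + rng.normal(0, 0.45, y.size),
    y + rng.normal(0, 0.40, y.size),
    y + rng.normal(0, 0.55, y.size),
])

# Non-negative weights minimizing squared error, then normalized to sum to 1
w, _ = nnls(members, y)
w = w / w.sum()
ensemble = members @ w

def r2(obs, pred):
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

print("weights:", np.round(w, 3))
print("best single-member R2:", max(r2(y, members[:, j]) for j in range(4)))
print("ensemble R2:", r2(y, ensemble))
```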

  1. PYTHIA 6.4 Physics and Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sjostrand, Torbjorn; /Lund U., Dept. Theor. Phys.; Mrenna, Stephen

    2006-03-01

    The Pythia program can be used to generate high-energy-physics "events", i.e. sets of outgoing particles produced in the interactions between two incoming particles. The objective is to provide as accurate as possible a representation of event properties in a wide range of reactions, within and beyond the Standard Model, with emphasis on those where strong interactions play a role, directly or indirectly, and therefore multihadronic final states are produced. The physics is then not understood well enough to give an exact description; instead the program has to be based on a combination of analytical results and various QCD-based models. This physics input is summarized here, for areas such as hard subprocesses, initial- and final-state parton showers, underlying events and beam remnants, fragmentation and decays, and much more. Furthermore, extensive information is provided on all program elements: subroutines and functions, switches and parameters, and particle and process data. This should allow the user to tailor the generation task to the topics of interest.

  2. On the effects of adaptive reservoir operating rules in hydrological physically-based models

    NASA Astrophysics Data System (ADS)

    Giudici, Federico; Anghileri, Daniela; Castelletti, Andrea; Burlando, Paolo

    2017-04-01

    Recent years have seen a significant increase of the human influence on natural systems at both the global and local scale. Accurately modeling the human component and its interaction with the natural environment is key to characterizing the real system dynamics and anticipating future potential changes to the hydrological regimes. Modern distributed, physically-based hydrological models are able to describe hydrological processes with a high level of detail and high spatiotemporal resolution. Yet they lack sophistication in the behavioural component: human decisions are usually described by very simplistic rules, which might underperform in reproducing the catchment dynamics. In the case of water reservoir operators, these simplistic rules usually consist of target-level rule curves, which represent the average historical level trajectory. Whilst these rules can reasonably reproduce the average seasonal water volume shifts due to the reservoirs' operation, they cannot properly represent the peculiar conditions which influence the actual reservoirs' operation, e.g., variations in energy price or water demand, or dry or wet meteorological conditions. Moreover, target-level rule curves are not suitable for exploring the water system response to changing climatic and socio-economic contexts, because they assume business-as-usual operation. In this work, we quantitatively assess how the inclusion of adaptive reservoir operating rules in physically-based hydrological models contributes to a proper representation of the hydrological regime at the catchment scale. In particular, we contrast target-level rule curves and detailed optimization-based behavioral models. We first perform the comparison on past observational records, showing that target-level rule curves underperform in representing the hydrological regime over multiple time scales (e.g., weekly, seasonal, inter-annual). Then, we compare how future hydrological changes are affected by the two modeling

  3. Finite Element Modelling of a Field-Sensed Magnetic Suspended System for Accurate Proximity Measurement Based on a Sensor Fusion Algorithm with Unscented Kalman Filter

    PubMed Central

    Chowdhury, Amor; Sarjaš, Andrej

    2016-01-01

    The presented paper describes accurate distance measurement for a field-sensed magnetic suspension system. The proximity measurement is based on a Hall effect sensor. The proximity sensor is installed directly on the lower surface of the electro-magnet, which means that it is very sensitive to external magnetic influences and disturbances. External disturbances interfere with the information signal and reduce the usability and reliability of the proximity measurements and, consequently, the whole application operation. A sensor fusion algorithm is deployed for the aforementioned reasons. The sensor fusion algorithm is based on the Unscented Kalman Filter, where a nonlinear dynamic model was derived with the Finite Element Modelling approach. The advantage of such modelling is a more accurate dynamic model parameter estimation, especially in the case when the real structure, materials and dimensions of the real-time application are known. The novelty of the paper is the design of a compact electro-magnetic actuator with a built-in low cost proximity sensor for accurate proximity measurement of the magnetic object. The paper successively presents a modelling procedure with the finite element method, design and parameter settings of a sensor fusion algorithm with Unscented Kalman Filter and, finally, the implementation procedure and results of real-time operation. PMID:27649197

  4. Finite Element Modelling of a Field-Sensed Magnetic Suspended System for Accurate Proximity Measurement Based on a Sensor Fusion Algorithm with Unscented Kalman Filter.

    PubMed

    Chowdhury, Amor; Sarjaš, Andrej

    2016-09-15

    The presented paper describes accurate distance measurement for a field-sensed magnetic suspension system. The proximity measurement is based on a Hall effect sensor. The proximity sensor is installed directly on the lower surface of the electro-magnet, which means that it is very sensitive to external magnetic influences and disturbances. External disturbances interfere with the information signal and reduce the usability and reliability of the proximity measurements and, consequently, the whole application operation. A sensor fusion algorithm is deployed for the aforementioned reasons. The sensor fusion algorithm is based on the Unscented Kalman Filter, where a nonlinear dynamic model was derived with the Finite Element Modelling approach. The advantage of such modelling is a more accurate dynamic model parameter estimation, especially in the case when the real structure, materials and dimensions of the real-time application are known. The novelty of the paper is the design of a compact electro-magnetic actuator with a built-in low cost proximity sensor for accurate proximity measurement of the magnetic object. The paper successively presents a modelling procedure with the finite element method, design and parameter settings of a sensor fusion algorithm with Unscented Kalman Filter and, finally, the implementation procedure and results of real-time operation.
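
    The two records above do not give the filter equations; as a generic illustration of UKF-based estimation for a nonlinear proximity measurement, the sketch below uses the filterpy library with an invented inverse-square Hall-sensor model and a constant-velocity process model. Sampling rate, gains and noise levels are all assumptions.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 0.001            # 1 kHz sampling (assumed)
k_hall = 2.5e-4       # invented sensor gain

def fx(x, dt):
    # Constant-velocity process model: x = [gap, gap_rate]
    return np.array([x[0] + dt * x[1], x[1]])

def hx(x):
    # Invented nonlinear Hall-sensor model: reading ~ 1 / gap^2
    return np.array([k_hall / (x[0] ** 2)])

points = MerweScaledSigmaPoints(n=2, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=2, dim_z=1, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.array([5e-3, 0.0])             # initial gap 5 mm, at rest
ukf.P = np.diag([1e-6, 1e-2])
ukf.R = np.array([[1e-6]])                # measurement noise variance (assumed)
ukf.Q = np.diag([1e-10, 1e-6])            # process noise (assumed)

rng = np.random.default_rng(5)
true_gap = 5e-3
for _ in range(200):
    true_gap += dt * (-0.005)             # object slowly approaching
    z = k_hall / true_gap ** 2 + rng.normal(0, 1e-3)
    ukf.predict()
    ukf.update(np.array([z]))

print(f"true gap: {true_gap*1e3:.3f} mm, estimated: {ukf.x[0]*1e3:.3f} mm")
```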

  5. A physical model for low-frequency electromagnetic induction in the near field based on direct interaction between transmitter and receiver electrons.

    PubMed

    Smith, Ray T; Jjunju, Fred P M; Young, Iain S; Taylor, Stephen; Maher, Simon

    2016-07-01

    A physical model of electromagnetic induction is developed which relates directly the forces between electrons in the transmitter and receiver windings of concentric coaxial finite coils in the near-field region. By applying the principle of superposition, the contributions from accelerating electrons in successive current loops are summed, allowing the peak-induced voltage in the receiver to be accurately predicted. Results show good agreement between theory and experiment for various receivers of different radii up to five times that of the transmitter. The limitations of the linear theory of electromagnetic induction are discussed in terms of the non-uniform current distribution caused by the skin effect. In particular, the explanation in terms of electromagnetic energy and Poynting's theorem is contrasted with a more direct explanation based on variable filament induction across the conductor cross section. As the direct physical model developed herein deals only with forces between discrete current elements, it can be readily adapted to suit different coil geometries and is widely applicable in various fields of research such as near-field communications, antenna design, wireless power transfer, sensor applications and beyond.
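
    The paper's model works with direct forces between transmitter and receiver electrons; as a more conventional point of comparison, the peak induced voltage can also be estimated by superposing the mutual inductances of all transmitter/receiver loop pairs (Maxwell's formula for coaxial circular loops), which illustrates the same loop-by-loop superposition idea. The coil geometry and drive values below are invented.

```python
import numpy as np
from scipy.special import ellipk, ellipe

MU0 = 4e-7 * np.pi

def loop_mutual(a, b, d):
    """Mutual inductance of two coaxial circular loops (radii a, b, axial separation d)."""
    k2 = 4 * a * b / ((a + b) ** 2 + d ** 2)
    k = np.sqrt(k2)
    return MU0 * np.sqrt(a * b) * ((2 / k - k) * ellipk(k2) - (2 / k) * ellipe(k2))

# Invented geometry: 50-turn transmitter (radius 10 mm), 30-turn receiver (radius 25 mm),
# windings stacked axially with 0.2 mm pitch, coil centres 5 mm apart.
pitch = 0.2e-3
tx_z = np.arange(50) * pitch
rx_z = 5e-3 + np.arange(30) * pitch

# Superposition: sum the mutual inductance of every transmitter/receiver loop pair
M = sum(loop_mutual(10e-3, 25e-3, abs(zt - zr)) for zt in tx_z for zr in rx_z)

f, I_peak = 10e3, 0.1                     # 10 kHz drive, 100 mA peak (assumed)
V_peak = 2 * np.pi * f * M * I_peak
print(f"total mutual inductance: {M*1e6:.2f} uH, predicted peak EMF: {V_peak*1e3:.2f} mV")
```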

  6. 3D physical modeling for patterning process development

    NASA Astrophysics Data System (ADS)

    Sarma, Chandra; Abdo, Amr; Bailey, Todd; Conley, Will; Dunn, Derren; Marokkey, Sajan; Talbi, Mohamed

    2010-03-01

    In this paper we will demonstrate how a 3D physical patterning model can act as a forensic tool for OPC and ground-rule development. We discuss examples where the 2D modeling shows no issues in printing gate lines but 3D modeling shows severe resist loss in the middle. In the absence of corrective measures, there is a high likelihood of line discontinuity post etch. Such early insight into process limitations of prospective ground rules can be invaluable for early technology development. We will also demonstrate how the root cause of broken poly-line after etch could be traced to resist necking in the region of STI step with the help of 3D models. We discuss different cases of metal and contact layouts where 3D modeling gives an early insight into technology limitations. In addition, such a 3D physical model could be used for early resist evaluation and selection for required ground-rule challenges, which can substantially reduce the cycle time for process development.

  7. Meta II: Multi-Model Language Suite for Cyber Physical Systems

    DTIC Science & Technology

    2013-03-01

    AVM META) projects have developed tools for designing cyber-physical (CPS, or mechatronic) systems. These systems are increasingly complex, take much... Exemplified by modern amphibious and ground military... and parametric interface of Simulink models and defines associations with CyPhy components and component interfaces. 2. Embedded Systems Modeling

  8. Statistical physics of medical diagnostics: Study of a probabilistic model.

    PubMed

    Mashaghi, Alireza; Ramezanpour, Abolfazl

    2018-03-01

    We study a diagnostic strategy which is based on the anticipation of the diagnostic process by simulation of the dynamical process starting from the initial findings. We show that such a strategy could result in more accurate diagnoses compared to a strategy that is solely based on the direct implications of the initial observations. We demonstrate this by employing the mean-field approximation of statistical physics to compute the posterior disease probabilities for a given subset of observed signs (symptoms) in a probabilistic model of signs and diseases. A Monte Carlo optimization algorithm is then used to maximize an objective function of the sequence of observations, which favors the more decisive observations resulting in more polarized disease probabilities. We see how the observed signs change the nature of the macroscopic (Gibbs) states of the sign and disease probability distributions. The structure of these macroscopic states in the configuration space of the variables affects the quality of any approximate inference algorithm (so the diagnostic performance) which tries to estimate the sign-disease marginal probabilities. In particular, we find that the simulation (or extrapolation) of the diagnostic process is helpful when the disease landscape is not trivial and the system undergoes a phase transition to an ordered phase.

  9. Statistical physics of medical diagnostics: Study of a probabilistic model

    NASA Astrophysics Data System (ADS)

    Mashaghi, Alireza; Ramezanpour, Abolfazl

    2018-03-01

    We study a diagnostic strategy which is based on the anticipation of the diagnostic process by simulation of the dynamical process starting from the initial findings. We show that such a strategy could result in more accurate diagnoses compared to a strategy that is solely based on the direct implications of the initial observations. We demonstrate this by employing the mean-field approximation of statistical physics to compute the posterior disease probabilities for a given subset of observed signs (symptoms) in a probabilistic model of signs and diseases. A Monte Carlo optimization algorithm is then used to maximize an objective function of the sequence of observations, which favors the more decisive observations resulting in more polarized disease probabilities. We see how the observed signs change the nature of the macroscopic (Gibbs) states of the sign and disease probability distributions. The structure of these macroscopic states in the configuration space of the variables affects the quality of any approximate inference algorithm (so the diagnostic performance) which tries to estimate the sign-disease marginal probabilities. In particular, we find that the simulation (or extrapolation) of the diagnostic process is helpful when the disease landscape is not trivial and the system undergoes a phase transition to an ordered phase.
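
    The two records above compute sign-disease marginals with a mean-field approximation; for a very small model, the same posterior disease probabilities can be obtained by brute-force enumeration, sketched below with an invented noisy-OR parametrization of the sign-disease couplings. This is an illustration of the probabilistic sign-disease setup, not the authors' mean-field or Monte Carlo machinery.

```python
import itertools
import numpy as np

rng = np.random.default_rng(11)
n_dis, n_sign = 4, 6

prior = np.full(n_dis, 0.05)                 # prior disease probabilities (assumed)
q = rng.uniform(0.2, 0.9, (n_sign, n_dis))   # P(sign caused by disease) couplings (invented)
leak = 0.02                                  # background sign probability (assumed)

def p_sign_given_d(d):
    """Noisy-OR: probability of each sign given a disease configuration d."""
    return 1.0 - (1.0 - leak) * np.prod(1.0 - q * d, axis=1)

observed = {0: 1, 3: 1, 4: 0}                # observed signs: index -> value

post = np.zeros(n_dis)
norm = 0.0
for d in itertools.product([0, 1], repeat=n_dis):
    d = np.array(d)
    p_d = np.prod(np.where(d == 1, prior, 1.0 - prior))       # prior of configuration
    ps = p_sign_given_d(d)
    like = np.prod([ps[a] if v else 1.0 - ps[a] for a, v in observed.items()])
    w = p_d * like
    post += w * d
    norm += w

print("posterior disease marginals:", np.round(post / norm, 3))
```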

  10. Data-driven modeling, control and tools for cyber-physical energy systems

    NASA Astrophysics Data System (ADS)

    Behl, Madhur

    Energy systems are experiencing a gradual but substantial change in moving away from being non-interactive and manually-controlled systems to utilizing tight integration of both cyber (computation, communications, and control) and physical representations guided by first principles based models, at all scales and levels. Furthermore, peak power reduction programs like demand response (DR) are becoming increasingly important as the volatility on the grid continues to increase due to regulation, integration of renewables and extreme weather conditions. In order to shield themselves from the risk of price volatility, end-user electricity consumers must monitor electricity prices and be flexible in the ways they choose to use electricity. This requires the use of control-oriented predictive models of an energy system's dynamics and energy consumption. Such models are needed for understanding and improving the overall energy efficiency and operating costs. However, learning dynamical models using grey/white box approaches is very cost and time prohibitive since it often requires significant financial investments in retrofitting the system with several sensors and hiring domain experts for building the model. We present the use of data-driven methods for making model capture easy and efficient for cyber-physical energy systems. We develop Model-IQ, a methodology for analysis of uncertainty propagation for building inverse modeling and controls. Given a grey-box model structure and real input data from a temporary set of sensors, Model-IQ evaluates the effect of the uncertainty propagation from sensor data to model accuracy and to closed-loop control performance. We also developed a statistical method to quantify the bias in the sensor measurement and to determine near optimal sensor placement and density for accurate data collection for model training and control. Using a real building test-bed, we show how performing an uncertainty analysis can reveal trends about

  11. Bayesian calibration for electrochemical thermal model of lithium-ion cells

    NASA Astrophysics Data System (ADS)

    Tagade, Piyush; Hariharan, Krishnan S.; Basu, Suman; Verma, Mohan Kumar Singh; Kolake, Subramanya Mayya; Song, Taewon; Oh, Dukjin; Yeo, Taejung; Doo, Seokgwang

    2016-07-01

    Pseudo-two dimensional electrochemical thermal (P2D-ECT) model contains many parameters that are difficult to evaluate experimentally. Estimation of these model parameters is challenging due to computational cost and the transient model. Due to lack of complete physical understanding, this issue gets aggravated at extreme conditions like low temperature (LT) operations. This paper presents a Bayesian calibration framework for estimation of the P2D-ECT model parameters. The framework uses a matrix variate Gaussian process representation to obtain a computationally tractable formulation for calibration of the transient model. Performance of the framework is investigated for calibration of the P2D-ECT model across a range of temperatures (333 K-263 K) and operating protocols. In the absence of complete physical understanding, the framework also quantifies structural uncertainty in the calibrated model. This information is used by the framework to test validity of the new physical phenomena before incorporation in the model. This capability is demonstrated by introducing temperature dependence on Bruggeman's coefficient and lithium plating formation at LT. With the incorporation of new physics, the calibrated P2D-ECT model accurately predicts the cell voltage with high confidence. The accurate predictions are used to obtain new insights into the low temperature lithium ion cell behavior.

  12. Measurement of Function Post Hip Fracture: Testing a Comprehensive Measurement Model of Physical Function

    PubMed Central

    Gruber-Baldini, Ann L.; Hicks, Gregory; Ostir, Glen; Klinedinst, N. Jennifer; Orwig, Denise; Magaziner, Jay

    2015-01-01

    Background: Measurement of physical function post hip fracture has been conceptualized using multiple different measures. Purpose: This study tested a comprehensive measurement model of physical function. Design: This was a descriptive secondary data analysis including 168 men and 171 women post hip fracture. Methods: Using structural equation modeling, a measurement model of physical function which included grip strength, activities of daily living, instrumental activities of daily living and performance was tested for fit at 2 and 12 months post hip fracture and among male and female participants, and the validity of the measurement model of physical function was evaluated based on how well the model explained physical activity, exercise and social activities post hip fracture. Findings: The measurement model of physical function fit the data. The amount of variance the model or individual factors of the model explained varied depending on the activity. Conclusion: Decisions about the ideal way in which to measure physical function should be based on outcomes considered and participant. Clinical Implications: The measurement model of physical function is a reliable and valid method to comprehensively measure physical function across the hip fracture recovery trajectory. Practical but useful assessment of function should be considered and monitored over the recovery trajectory post hip fracture. PMID:26492866

  13. Toward University Modeling Instruction—Biology: Adapting Curricular Frameworks from Physics to Biology

    PubMed Central

    Manthey, Seth; Brewe, Eric

    2013-01-01

    University Modeling Instruction (UMI) is an approach to curriculum and pedagogy that focuses instruction on engaging students in building, validating, and deploying scientific models. Modeling Instruction has been successfully implemented in both high school and university physics courses. Studies within the physics education research (PER) community have identified UMI's positive impacts on learning gains, equity, attitudinal shifts, and self-efficacy. While the success of this pedagogical approach has been recognized within the physics community, the use of models and modeling practices is still being developed for biology. Drawing from the existing research on UMI in physics, we describe the theoretical foundations of UMI and how UMI can be adapted to include an emphasis on models and modeling for undergraduate introductory biology courses. In particular, we discuss our ongoing work to develop a framework for the first semester of a two-semester introductory biology course sequence by identifying the essential basic models for an introductory biology course sequence. PMID:23737628

  14. Toward university modeling instruction--biology: adapting curricular frameworks from physics to biology.

    PubMed

    Manthey, Seth; Brewe, Eric

    2013-06-01

    University Modeling Instruction (UMI) is an approach to curriculum and pedagogy that focuses instruction on engaging students in building, validating, and deploying scientific models. Modeling Instruction has been successfully implemented in both high school and university physics courses. Studies within the physics education research (PER) community have identified UMI's positive impacts on learning gains, equity, attitudinal shifts, and self-efficacy. While the success of this pedagogical approach has been recognized within the physics community, the use of models and modeling practices is still being developed for biology. Drawing from the existing research on UMI in physics, we describe the theoretical foundations of UMI and how UMI can be adapted to include an emphasis on models and modeling for undergraduate introductory biology courses. In particular, we discuss our ongoing work to develop a framework for the first semester of a two-semester introductory biology course sequence by identifying the essential basic models for an introductory biology course sequence.

  15. Prediction of protein loop conformations using multiscale modeling methods with physical energy scoring functions.

    PubMed

    Olson, Mark A; Feig, Michael; Brooks, Charles L

    2008-04-15

    This article examines ab initio methods for the prediction of protein loops by a computational strategy of multiscale conformational sampling and physical energy scoring functions. Our approach consists of initial sampling of loop conformations from lattice-based low-resolution models followed by refinement using all-atom simulations. To allow enhanced conformational sampling, the replica exchange method was implemented. Physical energy functions based on CHARMM19 and CHARMM22 parameterizations with generalized Born (GB) solvent models were applied in scoring loop conformations extracted from the lattice simulations and, in the case of all-atom simulations, ensembles of conformations were generated and scored with these models. Predictions are reported for 25 loop segments, each eight residues long and taken from a diverse set of 22 protein structures. We find that the simulations generally sampled conformations with low global root-mean-square deviation (RMSD) of the loop backbone coordinates from the known structures, whereas clustering conformations in RMSD space and scoring detected less favorable loop structures. Specifically, the lattice simulations sampled basins that exhibited an average global RMSD of 2.21 ± 1.42 Å, whereas clustering and scoring the loop conformations determined an RMSD of 3.72 ± 1.91 Å. Using CHARMM19/GB to refine the lattice conformations improved the sampling RMSD to 1.57 ± 0.98 Å and detection to 2.58 ± 1.48 Å. We found that further improvement could be gained from extending the upper temperature in the all-atom refinement from 400 to 800 K, where the results typically yield a reduction of approximately 1 Å or greater in the RMSD of the detected loop. Overall, CHARMM19 with a simple pairwise GB solvent model is more efficient at sampling low-RMSD loop basins than CHARMM22 with a higher-resolution modified analytical GB model; however, the latter simulation method provides a more accurate description of the all-atom energy
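
    As a rough illustration of the backbone-RMSD metric used above to rank sampled loop conformations, a minimal Python sketch (hypothetical coordinate arrays, not the authors' code, and assuming the conformations are already superposed) is:

    ```python
    import numpy as np

    def backbone_rmsd(coords_a, coords_b):
        """RMSD between two (N, 3) arrays of backbone coordinates,
        assuming the structures are already optimally superposed."""
        diff = coords_a - coords_b
        return np.sqrt((diff ** 2).sum(axis=1).mean())

    # Hypothetical example: rank an ensemble of sampled loop conformations
    # against the known (native) loop structure.
    rng = np.random.default_rng(0)
    native = rng.normal(size=(32, 3))          # e.g. 8 residues x 4 backbone atoms
    ensemble = [native + rng.normal(scale=s, size=(32, 3)) for s in (0.5, 1.5, 3.0)]

    rmsds = [backbone_rmsd(conf, native) for conf in ensemble]
    best = int(np.argmin(rmsds))
    print(f"lowest-RMSD conformation: {best}, RMSD = {rmsds[best]:.2f} Å")
    ```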

  16. Model Independent Search For New Physics At The Tevatron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choudalakis, Georgios

    2008-04-01

    The Standard Model of elementary particles cannot be the final theory. There are theoretical reasons to expect the appearance of new physics, possibly at the energy scale of a few TeV. Several possible theories of new physics have been proposed, each with unknown probability to be confirmed. Instead of arbitrarily choosing to examine one of those theories, this thesis is about searching for any sign of new physics in a model-independent way. This search is performed at the Collider Detector at Fermilab (CDF). The Standard Model prediction is implemented in all final states simultaneously, and an array of statistical probes is employed to search for significant discrepancies between data and prediction. The probes are sensitive to overall population discrepancies, shape disagreements in distributions of kinematic quantities of final particles, excesses of events of large total transverse momentum, and local excesses of data expected from resonances due to new massive particles. The result of this search, first in 1 fb⁻¹ and then in 2 fb⁻¹, is null, namely no considerable evidence of new physics was found.

  17. A Framework for Understanding Physics Students' Computational Modeling Practices

    NASA Astrophysics Data System (ADS)

    Lunk, Brandon Robert

    With the growing push to include computational modeling in the physics classroom, we are faced with the need to better understand students' computational modeling practices. While existing research on programming comprehension explores how novices and experts generate programming algorithms, little of this discusses how domain content knowledge, and physics knowledge in particular, can influence students' programming practices. In an effort to better understand this issue, I have developed a framework for modeling these practices based on a resource stance towards student knowledge. A resource framework models knowledge as the activation of vast networks of elements called "resources." Much like neurons in the brain, resources that become active can trigger cascading events of activation throughout the broader network. This model emphasizes the connectivity between knowledge elements and provides a description of students' knowledge base. Together with resources, the concepts of "epistemic games" and "frames" provide a means for addressing the interaction between content knowledge and practices. Although this framework has generally been limited to describing conceptual and mathematical understanding, it also provides a means for addressing students' programming practices. In this dissertation, I will demonstrate this facet of a resource framework as well as fill in an important missing piece: a set of epistemic games that can describe students' computational modeling strategies. The development of this theoretical framework emerged from the analysis of video data of students generating computational models during the laboratory component of a Matter & Interactions: Modern Mechanics course. Student participants across two semesters were recorded as they worked in groups to fix pre-written computational models that were initially missing key lines of code. Analysis of this video data showed that the students' programming practices were highly influenced by

  18. Influence of a health-related physical fitness model on students' physical activity, perceived competence, and enjoyment.

    PubMed

    Fu, You; Gao, Zan; Hannon, James; Shultz, Barry; Newton, Maria; Sibthorp, Jim

    2013-12-01

    This study was designed to explore the effects of a health-related physical fitness physical education model on students' physical activity, perceived competence, and enjoyment. 61 students (25 boys, 36 girls; M age = 12.6 yr., SD = 0.6) were assigned to two groups (health-related physical fitness physical education group, and traditional physical education group), and participated in one 50-min. weekly basketball class for 6 wk. Students' in-class physical activity was assessed using NL-1000 pedometers. The physical subscale of the Perceived Competence Scale for Children was employed to assess perceived competence, and children's enjoyment was measured using the Sport Enjoyment Scale. The findings suggest that students in the intervention group increased their perceived competence, enjoyment, and physical activity over a 6-wk. intervention, while the comparison group simply increased physical activity over time. Children in the intervention group had significantly greater enjoyment.

  19. Physical examination of the athlete's elbow.

    PubMed

    Hsu, Stephanie H; Moen, Todd C; Levine, William N; Ahmad, Christopher S

    2012-03-01

    Elbow injury is encountered less frequently than are other joint conditions. The bony architecture, muscle, ligament, and nerve anatomy are complex, and the forces leading to injury in the athlete's elbow are unique. Appreciating the pathomechanics leading to injury and a detailed knowledge of elbow anatomy are the foundation for conducting a directed history and physical examination that achieves an accurate diagnosis. Recent advances in physical examination have improved our ability to accurately diagnose and treat athletic elbow disorders. This article reviews general and focused physical examination maneuvers of the elbow in a systematic anatomic fashion.

  20. Risk Management and Physical Modelling for Mountainous Natural Hazards

    NASA Astrophysics Data System (ADS)

    Lehning, Michael; Wilhelm, Christian

    Population growth and climate change cause rapid changes in mountainous regions resulting in increased risks of floods, avalanches, debris flows and other natural hazards. X-events are of particular concern, since attempts to protect against them result in exponentially growing costs. In this contribution, we suggest an integral risk management approach to dealing with natural hazards that occur in mountainous areas. Using the example of a mountain pass road, which can be protected from the danger of an avalanche by engineering (galleries) and/or organisational (road closure) measures, we show the advantage of an optimal combination of both versus the traditional approach, which is to rely solely on engineering structures. Organisational measures become especially important for X-events because engineering structures cannot be designed for those events. However, organisational measures need a reliable and objective forecast of the hazard. Therefore, we further suggest that such forecasts should be developed using physical numerical modelling. We present the status of current approaches to using physical modelling to predict snow cover stability for avalanche warnings and peak runoff from mountain catchments for flood warnings. While detailed physical models can already predict peak runoff reliably, they are only used to support avalanche warnings. With increased process knowledge and computer power, current developments should lead to an enhanced role for detailed physical models in natural mountain hazard prediction.

  1. Temporal self-regulation theory: a neurobiologically informed model for physical activity behavior

    PubMed Central

    Hall, Peter A.; Fong, Geoffrey T.

    2015-01-01

    Dominant explanatory models for physical activity behavior are limited by the exclusion of several important components, including temporal dynamics, ecological forces, and neurobiological factors. The latter may be a critical omission, given the relevance of several aspects of cognitive function for the self-regulatory processes that are likely required for consistent implementation of physical activity behavior in everyday life. This narrative review introduces temporal self-regulation theory (TST; Hall and Fong, 2007, 2013) as a new explanatory model for physical activity behavior. Important features of the model include consideration of the default status of the physical activity behavior, as well as the disproportionate influence of temporally proximal behavioral contingencies. Most importantly, the TST model proposes positive feedback loops linking executive function (EF) and the performance of physical activity behavior. Specifically, those with relatively stronger executive control (and optimized brain structures supporting it, such as the dorsolateral prefrontal cortex (PFC)) are able to implement physical activity with more consistency than others, which in turn serves to strengthen the executive control network itself. The TST model has the potential to explain everyday variants of incidental physical activity, sport-related excellence via capacity for deliberate practice, and variability in the propensity to schedule and implement exercise routines. PMID:25859196

  2. Inverse and Predictive Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Syracuse, Ellen Marie

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of the Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  3. Self-consistent core-pedestal transport simulations with neural network accelerated models

    DOE PAGES

    Meneghini, Orso; Smith, Sterling P.; Snyder, Philip B.; ...

    2017-07-12

    Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. Finally, the NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.
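
    The surrogate idea described here (replacing an expensive theory-based model with a fast neural-network regression trained on its outputs) can be sketched as follows; `expensive_flux_model` and its three inputs are placeholders, not TGLF or EPED1:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def expensive_flux_model(x):
        """Stand-in for a slow theory-based model (e.g. a transport-flux code);
        here just an arbitrary non-linear function of three plasma parameters."""
        return np.sin(x[:, 0]) * np.exp(-x[:, 1]) + 0.5 * x[:, 2] ** 2

    # Offline: build a training database by running the slow model over the parameter space.
    rng = np.random.default_rng(1)
    X_train = rng.uniform(0.0, 1.0, size=(5000, 3))
    y_train = expensive_flux_model(X_train)

    # Fit a small multilayer perceptron as the fast surrogate.
    surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    surrogate.fit(X_train, y_train)

    # Online: the surrogate is evaluated in microseconds inside the integrated workflow.
    X_new = rng.uniform(0.0, 1.0, size=(5, 3))
    print(surrogate.predict(X_new))
    print(expensive_flux_model(X_new))
    ```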

  4. Self-consistent core-pedestal transport simulations with neural network accelerated models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meneghini, Orso; Smith, Sterling P.; Snyder, Philip B.

    Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. Finally, the NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.

  5. Self-consistent core-pedestal transport simulations with neural network accelerated models

    NASA Astrophysics Data System (ADS)

    Meneghini, O.; Smith, S. P.; Snyder, P. B.; Staebler, G. M.; Candy, J.; Belli, E.; Lao, L.; Kostuk, M.; Luce, T.; Luda, T.; Park, J. M.; Poli, F.

    2017-08-01

    Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. The NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.

  6. Modelling the physical properties of glasslike carbon foams

    NASA Astrophysics Data System (ADS)

    Letellier, M.; Macutkevic, J.; Bychanok, D.; Kuzhir, P.; Delgado-Sanchez, C.; Naguib, H.; Ghaffari Mosanenzadeh, S.; Fierro, V.; Celzard, A.

    2017-07-01

    In this work, model alveolar materials - carbon cellular and/or carbon reticulated foams - were produced in order to study and to model their physical properties. It was shown that very different morphologies could be obtained while the constituting vitreous carbon from which they were made remained exactly the same. As a result, the physical properties of these foams were expected to depend neither on the composition nor on the carbonaceous texture but only on the porous structure, which for the first time could be tuned to have a constant pore size over a range of porosities, or a range of pore sizes at fixed porosity. The physical properties were then investigated through mechanical, acoustic, thermal and electromagnetic measurements. The results demonstrate the roles played by bulk density and cell size in all physical properties. Whereas some of the latter strongly depend on porosity and/or pore size, others are independent of pore size. It is expected that these results apply to many other kinds of rigid foams used in a broad range of different applications. The present results therefore open the route to their optimisation.

  7. Precision Higgs Boson Physics and Implications for Beyond the Standard Model Physics Theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wells, James

    The discovery of the Higgs boson is one of science's most impressive recent achievements. We have taken a leap forward in understanding what is at the heart of elementary particle mass generation. We now have a significant opportunity to develop even deeper understanding of how the fundamental laws of nature are constructed. As such, we need intense focus from the scientific community to put this discovery in its proper context, to realign and narrow our understanding of viable theory based on this positive discovery, and to detail the implications the discovery has for theories that attempt to answer questions beyond what the Standard Model can explain. This project's first main objective is to develop a state-of-the-art analysis of precision Higgs boson physics. This is to be done in the tradition of the electroweak precision measurements of the LEP/SLC era. Indeed, the electroweak precision studies of the past are necessary inputs to the full precision Higgs program. Calculations will be presented to the community of Higgs boson observables that detail just how well various couplings of the Higgs boson can be measured, and more. These will be carried out using state-of-the-art theory computations coupled with the new experimental results coming in from the LHC. The project's second main objective is to utilize the results obtained from LHC Higgs boson experiments and the precision analysis, along with the direct search studies at LHC, to discern viable theories of physics beyond the Standard Model that unify physics to a deeper level. Studies will be performed on supersymmetric theories, theories of extra spatial dimensions (and related theories, such as compositeness), and theories that contain hidden sector states uniquely accessible to the Higgs boson. In addition, if data becomes incompatible with the Standard Model's low-energy effective Lagrangian, new physics theories will be developed that explain the anomaly and put it into a more unified

  8. Fast and Accurate Hybrid Stream PCRTM-SOLAR Radiative Transfer Model for Reflected Solar Spectrum Simulation in the Cloudy Atmosphere

    NASA Technical Reports Server (NTRS)

    Yang, Qiguang; Liu, Xu; Wu, Wan; Kizer, Susan; Baize, Rosemary R.

    2016-01-01

    A hybrid stream PCRTM-SOLAR model has been proposed for fast and accurate radiative transfer simulation. It calculates the reflected solar (RS) radiances in a fast, coarse way and then, with the help of a pre-saved matrix, transforms the results to obtain the desired highly accurate RS spectrum. The methodology has been demonstrated with the hybrid stream discrete ordinate (HSDO) radiative transfer (RT) model. The HSDO method calculates the monochromatic radiances using a 4-stream discrete ordinate method, where only a small number of monochromatic radiances are simulated with both the 4-stream and a larger N-stream (N = 16) discrete ordinate RT algorithm. The accuracy of the obtained channel radiance is comparable to the result from the N-stream moderate resolution atmospheric transmission version 5 (MODTRAN5) model. The root-mean-square errors are usually less than 5×10⁻⁴ mW/(cm² sr cm⁻¹). The computational speed is three to four orders of magnitude faster than the medium-speed correlated-k option of MODTRAN5. This method is very efficient for simulating thousands of RS spectra under multi-layer cloud/aerosol and solar radiation conditions for climate change studies and numerical weather prediction applications.
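
    The hybrid-stream correction, a cheap low-stream solve mapped onto a more accurate result through a pre-saved matrix, can be caricatured with a linear mapping fitted by least squares; both "solvers" below are hypothetical stand-ins, not PCRTM-SOLAR itself:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_state, n_channels = 10, 200

    # Hypothetical linear responses standing in for the 4-stream solver and for the
    # smooth distortion that relates it to the accurate 16-stream result.
    W_coarse = rng.normal(size=(n_state, n_channels))
    distortion = np.eye(n_channels) + 0.05 * rng.normal(size=(n_channels, n_channels))

    def coarse_4stream(state):
        """Placeholder for a fast, low-order radiative transfer solve."""
        return state @ W_coarse

    def accurate_16stream(state):
        """Placeholder for the slow, high-order solve, used offline only."""
        return coarse_4stream(state) @ distortion

    # Offline: run both solvers on a few atmospheres and pre-save the matrix that
    # maps coarse spectra onto accurate ones (least-squares fit).
    states = rng.normal(size=(50, n_state))
    M, *_ = np.linalg.lstsq(coarse_4stream(states), accurate_16stream(states), rcond=None)

    # Online: one cheap solve plus a matrix multiply approximates the accurate spectrum.
    fast_spectrum = coarse_4stream(rng.normal(size=(1, n_state))) @ M
    ```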

  9. Assessing the Integration of Computational Modeling and ASU Modeling Instruction in the High School Physics Classroom

    NASA Astrophysics Data System (ADS)

    Aiken, John; Schatz, Michael; Burk, John; Caballero, Marcos; Thoms, Brian

    2012-03-01

    We describe the assessment of computational modeling in a ninth-grade classroom in the context of the Arizona Modeling Instruction physics curriculum. Using a high-level programming environment (VPython), students develop computational models to predict the motion of objects under a variety of physical situations (e.g., constant net force), to simulate real-world phenomena (e.g., a car crash), and to visualize abstract quantities (e.g., acceleration). The impact of teaching computation is evaluated through a proctored assignment that asks the students to complete a provided program to represent the correct motion. Using questions isomorphic to the Force Concept Inventory, we gauge students' understanding of force in relation to the simulation. The students are given an open-ended essay question that asks them to explain the steps they would use to model a physical situation. We also investigate the attitudes and prior experiences of each student using the Computational Modeling in Physics Attitudinal Student Survey (COMPASS) developed at Georgia Tech as well as a prior computational experiences survey.
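
    A minimal version of the kind of motion-prediction program the students complete, written here in plain Python rather than VPython and with all numerical values hypothetical, updates momentum and position step by step under a constant net force:

    ```python
    # Predict the motion of an object under a constant net force using the
    # momentum-update loop typical of introductory computational modeling.
    mass = 0.5                      # kg (hypothetical cart)
    force = (0.8, 0.0, 0.0)         # N, constant net force
    position = [0.0, 0.0, 0.0]      # m
    momentum = [0.0, 0.0, 0.0]      # kg m/s
    dt = 0.01                       # s

    for _ in range(300):            # simulate 3 s of motion
        for i in range(3):
            momentum[i] += force[i] * dt            # dp = F_net * dt
            position[i] += momentum[i] / mass * dt  # dr = (p/m) * dt

    print("final position (m):", [round(x, 3) for x in position])
    ```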

  10. An accurate model for the computation of the dose of protons in water.

    PubMed

    Embriaco, A; Bellinzona, V E; Fontana, A; Rotondi, A

    2017-06-01

    The accurate and fast calculation of the dose in proton radiation therapy is an essential ingredient for successful treatments. We propose a novel approach with a minimal number of parameters. The approach is based on the exact calculation of the electromagnetic part of the interaction, namely the Molière theory of multiple Coulomb scattering for the transversal 1D projection and the Bethe-Bloch formula for the longitudinal stopping power profile, including Gaussian energy straggling. To this e.m. contribution the nuclear proton-nucleus interaction is added with a simple two-parameter model. Then, the non-Gaussian lateral profile is used to calculate the radial dose distribution with a method that assumes the cylindrical symmetry of the distribution. The results, obtained with a fast C++ based computational code called MONET (MOdel of ioN dosE for Therapy), are in very good agreement with the FLUKA MC code, within a few percent in the worst case. This study provides a new tool for fast dose calculation or verification, possibly for clinical use. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
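
    For orientation, the longitudinal ingredient mentioned above, the Bethe-Bloch stopping power, can be written down in a simplified form (protons in water, heavy-projectile approximation, no shell or density corrections, assumed mean excitation energy); this is the generic textbook expression, not the MONET implementation:

    ```python
    import math

    # Simplified Bethe-Bloch mass stopping power for protons in water (MeV cm^2/g).
    K = 0.307075          # MeV mol^-1 cm^2
    m_e_c2 = 0.511        # MeV, electron rest energy
    M_p_c2 = 938.272      # MeV, proton rest energy
    Z_over_A = 0.5551     # <Z/A> of water
    I = 75e-6             # MeV, assumed mean excitation energy of water

    def stopping_power(T):
        """-dE/dx for a proton of kinetic energy T (MeV), heavy-projectile limit."""
        gamma = 1.0 + T / M_p_c2
        beta2 = 1.0 - 1.0 / gamma ** 2
        # Maximum energy transfer to an atomic electron (M >> m_e approximation).
        T_max = 2.0 * m_e_c2 * beta2 * gamma ** 2
        log_term = math.log(2.0 * m_e_c2 * beta2 * gamma ** 2 * T_max / I ** 2)
        return K * Z_over_A / beta2 * (0.5 * log_term - beta2)

    for T in (10.0, 100.0, 200.0):
        print(f"{T:6.1f} MeV proton: {stopping_power(T):6.2f} MeV cm^2/g")
    ```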

  11. A Ball Pool Model to Illustrate Higgs Physics to the Public

    ERIC Educational Resources Information Center

    Organtini, Giovanni

    2017-01-01

    A simple model is presented to explain Higgs boson physics to the general public. The model consists of a children's ball pool representing a Universe filled with a certain amount of the Higgs field. The model is suitable for use as a hands-on tool in scientific exhibits and provides a clear explanation of almost all the aspects of the physics of…

  12. Physical Accuracy of Q Models of Seismic Attenuation

    NASA Astrophysics Data System (ADS)

    Morozov, I. B.

    2016-12-01

    Accuracy of theoretical models is a required prerequisite for any type of seismic imaging and interpretation. Among all geophysical disciplines, the theory of seismic and tidal attenuation is the least developed, and most practical studies use viscoelastic models based on empirical Q factors. To simplify imaging and inversions, the Qs are often approximated as frequency-independent or following a power law with frequency. However, simplicity of inversion should not outweigh the problematic physical accuracy of such models. Typical images of spatially-variable crustal and mantle Qs are "apparent," analogously to pseudo-depth, apparent-resistivity images in electrical imaging. Problems with Q models can be seen from controversial general observations present in many studies; for example: 1) In global Q models, bulk attenuation is much lower than the shear one throughout the whole Earth. This is considered a fundamental relation for the Earth; nevertheless, it is also very peculiar physically and suggests a negative Q for the Lamé modulus. This relation is also not supported by most first-principles models of materials and laboratory studies. 2) The Q parameterization requires that the entire outer core of the Earth is assigned zero attenuation, despite its large volume and the presence of viscosity and shear deformation in free oscillations. 3) In laboratory and surface-wave studies, the bulk and shear Qs can be different for different wave modes, different sample sizes, and boundary conditions on the surface. Similarly, the Qs measured from body-S, Love, Lg, or ScS waves may not equal each other. 4) In seismic coda studies, the Q is often found to increase linearly (or even faster) with frequency. Such a character of energy dissipation is controversial physically, but can be readily explained as an artifact of inaccurately-known geometrical spreading. To overcome the physical inaccuracies and apparent character of seismic attenuation models, mechanical theories of materials

  13. Fast neural network surrogates for very high dimensional physics-based models in computational oceanography.

    PubMed

    van der Merwe, Rudolph; Leen, Todd K; Lu, Zhengdong; Frolov, Sergey; Baptista, Antonio M

    2007-05-01

    We present neural network surrogates that provide extremely fast and accurate emulation of a large-scale circulation model for the coupled Columbia River, its estuary and near ocean regions. The circulation model has O(10⁷) degrees of freedom, is highly nonlinear and is driven by ocean, atmospheric and river influences at its boundaries. The surrogates provide accurate emulation of the full circulation code and run over 1000 times faster. Such fast dynamic surrogates will enable significant advances in ensemble forecasts in oceanography and weather.
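
    One generic way to emulate such a high-dimensional model, sketched below with synthetic data, is to compress the output field with a linear reduction (here PCA) and regress the reduced coefficients with a neural network; this is a common surrogate pattern and not necessarily the exact architecture used by the authors:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)

    # Synthetic stand-ins: 6 boundary forcings drive a 10,000-point "circulation" field.
    forcings = rng.normal(size=(400, 6))
    modes = rng.normal(size=(6, 10_000))
    fields = np.tanh(forcings) @ modes + 0.01 * rng.normal(size=(400, 10_000))

    # Compress each field to a handful of principal components, then regress them.
    pca = PCA(n_components=5).fit(fields)
    coeffs = pca.transform(fields)

    surrogate = MLPRegressor(hidden_layer_sizes=(64,), max_iter=3000, random_state=0)
    surrogate.fit(forcings, coeffs)

    # Fast emulation: predict the reduced coefficients, then reconstruct the full field.
    predicted_field = pca.inverse_transform(surrogate.predict(forcings[:1]))
    print(predicted_field.shape)   # (1, 10000)
    ```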

  14. Modelling infrasound signal generation from two underground explosions at the Source Physics Experiment using the Rayleigh integral

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Kyle R.; Whitaker, Rodney W.; Arrowsmith, Stephen J.

    2014-12-11

    For this study, we use the Rayleigh integral (RI) as an approximation to the Helmholtz–Kirchhoff integral to model infrasound generation and propagation from underground chemical explosions at distances of 250 m out to 5 km as part of the Source Physics Experiment (SPE). Using a sparse network of surface accelerometers installed above ground zero, we are able to accurately create synthetic acoustic waveforms and compare them to the observed data. Although the underground explosive sources were designed to be symmetric, the resulting seismic wave at the surface shows an asymmetric propagation pattern that is stronger to the northeast of the borehole. This asymmetric bias may be attributed to the subsurface geology and faulting of the area and is observed in the acoustic waveforms. We compare observed and modelled results from two of the underground SPE tests with a sensitivity study to evaluate the asymmetry observed in the data. This work shows that it is possible to model infrasound signals from underground explosive sources using the RI and that asymmetries observed in the data can be modelled with this technique.
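
    In discrete form, the Rayleigh integral is a retarded-time sum of the normal surface accelerations over ground patches; the sketch below uses entirely hypothetical accelerations on a synthetic patch grid and is only meant to show the structure of the calculation:

    ```python
    import numpy as np

    rho0 = 1.2      # kg/m^3, air density
    c = 340.0       # m/s, sound speed
    dt = 0.001      # s, sample interval
    dS = 100.0      # m^2, area of each surface patch

    # Hypothetical inputs: normal acceleration a_n[i, k] of ground patch i at time
    # step k (in practice interpolated from the accelerometer network), patch
    # positions on the surface, and a receiver location.
    rng = np.random.default_rng(4)
    n_patch, n_t = 200, 4000
    patch_xyz = np.column_stack([rng.uniform(-500, 500, n_patch),
                                 rng.uniform(-500, 500, n_patch),
                                 np.zeros(n_patch)])
    a_n = rng.normal(size=(n_patch, n_t)) * np.exp(-np.arange(n_t) * dt / 0.5)
    receiver = np.array([2000.0, 0.0, 1.0])

    R = np.linalg.norm(patch_xyz - receiver, axis=1)
    delay = np.rint(R / c / dt).astype(int)

    def rayleigh_pressure(t_index):
        """Acoustic pressure at the receiver at one time step (discretized RI)."""
        k = t_index - delay
        valid = k >= 0
        return rho0 / (2.0 * np.pi) * np.sum(a_n[valid, k[valid]] / R[valid]) * dS

    waveform = np.array([rayleigh_pressure(k) for k in range(n_t)])
    ```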

  15. Can AERONET data be used to accurately model the monochromatic beam and circumsolar irradiances under cloud-free conditions in desert environment?

    NASA Astrophysics Data System (ADS)

    Eissa, Y.; Blanc, P.; Wald, L.; Ghedira, H.

    2015-07-01

    Routine measurements of the beam irradiance at normal incidence (DNI) include the irradiance originating from within the extent of the solar disc only (DNIS), whose angular extent is 0.266° ± 1.7 %, and that from a larger circumsolar region, called the circumsolar normal irradiance (CSNI). This study investigates whether the spectral aerosol optical properties of the AERONET stations are sufficient for an accurate modelling of the monochromatic DNIS and CSNI under cloud-free conditions in a desert environment. The data from an AERONET station in Abu Dhabi, United Arab Emirates, and a collocated Sun and Aureole Measurement (SAM) instrument, which offers reference measurements of the monochromatic profile of solar radiance, were exploited. Using the AERONET data, both the radiative transfer models libRadtran and SMARTS offer an accurate estimate of the monochromatic DNIS, with a relative root mean square error (RMSE) of 5 %, a relative bias of +1 % and a coefficient of determination greater than 0.97. After testing two configurations in SMARTS and three in libRadtran for modelling the monochromatic CSNI, libRadtran exhibits the most accurate results when the AERONET aerosol phase function is represented as a two-term Henyey-Greenstein phase function. In this case libRadtran exhibited a relative RMSE and bias of 22 % and -19 %, respectively, and a coefficient of determination of 0.89. The results are promising and pave the way towards reporting the contribution of the broadband circumsolar irradiance to standard DNI measurements.
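
    The validation statistics quoted in this record (relative bias, relative RMSE and coefficient of determination) can be reproduced with a few lines; the arrays below are placeholders for modelled and measured irradiance series, and the R² shown uses the 1 - SS_res/SS_tot definition:

    ```python
    import numpy as np

    def validation_stats(modelled, measured):
        """Relative bias and relative RMSE (percent of the mean measurement)
        plus the coefficient of determination for two irradiance series."""
        modelled, measured = np.asarray(modelled), np.asarray(measured)
        diff = modelled - measured
        mean_meas = measured.mean()
        rel_bias = 100.0 * diff.mean() / mean_meas
        rel_rmse = 100.0 * np.sqrt((diff ** 2).mean()) / mean_meas
        r2 = 1.0 - (diff ** 2).sum() / ((measured - mean_meas) ** 2).sum()
        return rel_bias, rel_rmse, r2

    # Hypothetical example values (arbitrary irradiance units).
    measured = np.array([0.82, 0.91, 0.77, 0.88, 0.95])
    modelled = np.array([0.84, 0.92, 0.78, 0.90, 0.96])
    print(validation_stats(modelled, measured))
    ```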

  16. Beyond mean-field approximations for accurate and computationally efficient models of on-lattice chemical kinetics

    NASA Astrophysics Data System (ADS)

    Pineda, M.; Stamatakis, M.

    2017-07-01

    Modeling the kinetics of surface catalyzed reactions is essential for the design of reactors and chemical processes. The majority of microkinetic models employ mean-field approximations, which lead to an approximate description of catalytic kinetics by assuming spatially uncorrelated adsorbates. On the other hand, kinetic Monte Carlo (KMC) methods provide a discrete-space continuous-time stochastic formulation that enables an accurate treatment of spatial correlations in the adlayer, but at a significant computational cost. In this work, we use the so-called cluster mean-field approach to develop higher order approximations that systematically increase the accuracy of kinetic models by treating spatial correlations at a progressively higher level of detail. We demonstrate our approach on a reduced model for NO oxidation incorporating first nearest-neighbor lateral interactions and construct a sequence of approximations of increasingly higher accuracy, which we compare with KMC and mean-field. The latter is found to perform rather poorly, overestimating the turnover frequency by several orders of magnitude for this system. On the other hand, our approximations, while more computationally intense than the traditional mean-field treatment, still achieve tremendous computational savings compared to KMC simulations, thereby opening the way for employing them in multiscale modeling frameworks.
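
    As a point of reference for the mean-field baseline discussed here, a minimal mean-field microkinetic model for an oxidation step on a surface (mechanism and rate constants purely illustrative, with adsorbates assumed spatially uncorrelated and no lateral interactions) reduces to a single coverage ODE:

    ```python
    from scipy.integrate import solve_ivp

    # Purely illustrative rate constants and partial pressures (arbitrary units).
    k_ads = 1.0     # dissociative O2 adsorption: O2 + 2* -> 2 O*
    k_rxn = 0.5     # NO + O* -> NO2 + *
    p_O2, p_NO = 1.0, 2.0

    def mean_field_rhs(t, y):
        """Mean-field coverage equation: spatially uncorrelated adsorbates, so the
        dissociation rate uses the square of the empty-site coverage."""
        theta_O = y[0]
        theta_empty = 1.0 - theta_O
        d_theta_O = 2.0 * k_ads * p_O2 * theta_empty ** 2 - k_rxn * p_NO * theta_O
        return [d_theta_O]

    sol = solve_ivp(mean_field_rhs, (0.0, 20.0), [0.0])
    theta_ss = sol.y[0, -1]
    tof = k_rxn * p_NO * theta_ss          # NO2 formation rate per site
    print(f"steady-state O coverage: {theta_ss:.3f}, TOF: {tof:.3f}")
    ```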

  17. Inter-model analysis of tsunami-induced coastal currents

    NASA Astrophysics Data System (ADS)

    Lynett, Patrick J.; Gately, Kara; Wilson, Rick; Montoya, Luis; Arcas, Diego; Aytore, Betul; Bai, Yefei; Bricker, Jeremy D.; Castro, Manuel J.; Cheung, Kwok Fai; David, C. Gabriel; Dogan, Gozde Guney; Escalante, Cipriano; González-Vida, José Manuel; Grilli, Stephan T.; Heitmann, Troy W.; Horrillo, Juan; Kânoğlu, Utku; Kian, Rozita; Kirby, James T.; Li, Wenwen; Macías, Jorge; Nicolsky, Dmitry J.; Ortega, Sergio; Pampell-Manis, Alyssa; Park, Yong Sung; Roeber, Volker; Sharghivand, Naeimeh; Shelby, Michael; Shi, Fengyan; Tehranirad, Babak; Tolkova, Elena; Thio, Hong Kie; Velioğlu, Deniz; Yalçıner, Ahmet Cevdet; Yamazaki, Yoshiki; Zaytsev, Andrey; Zhang, Y. J.

    2017-06-01

    To help produce accurate and consistent maritime hazard products, the National Tsunami Hazard Mitigation Program organized a benchmarking workshop to evaluate the numerical modeling of tsunami currents. Thirteen teams of international researchers, using a set of tsunami models currently utilized for hazard mitigation studies, presented results for a series of benchmarking problems; these results are summarized in this paper. Comparisons focus on physical situations where the currents are shear and separation driven, and are thus de-coupled from the incident tsunami waveform. In general, we find that models of increasing physical complexity provide better accuracy, and that low-order three-dimensional models are superior to high-order two-dimensional models. Inside separation zones and in areas strongly affected by eddies, the magnitude of both model-data errors and inter-model differences can be the same as the magnitude of the mean flow. Thus, we make arguments for the need of an ensemble modeling approach for areas affected by large-scale turbulent eddies, where deterministic simulation may be misleading. As a result of the analyses presented herein, we expect that tsunami modelers now have a better awareness of their ability to accurately capture the physics of tsunami currents, and therefore a better understanding of how to use these simulation tools for hazard assessment and mitigation efforts.

  18. Learning Physics-based Models in Hydrology under the Framework of Generative Adversarial Networks

    NASA Astrophysics Data System (ADS)

    Karpatne, A.; Kumar, V.

    2017-12-01

    Generative adversarial networks (GANs), which have been highly successful in a number of applications involving large volumes of labeled and unlabeled data such as computer vision, offer huge potential for modeling the dynamics of physical processes that have been traditionally studied using simulations of physics-based models. While conventional physics-based models use labeled samples of input/output variables for model calibration (estimating the right parametric forms of relationships between variables) or data assimilation (identifying the most likely sequence of system states in dynamical systems), there is a greater opportunity to explore the full power of machine learning (ML) methods (e.g., GANs) for studying physical processes currently suffering from large knowledge gaps, e.g., groundwater flow. However, success in this endeavor requires a principled way of combining the strengths of ML methods with physics-based numerical models that are founded on a wealth of scientific knowledge. This is especially important in scientific domains like hydrology where the number of data samples is small (relative to Internet-scale applications such as image recognition, where machine learning methods have found great success), and the physical relationships are complex (high-dimensional) and non-stationary. We will present a series of methods for guiding the learning of GANs using physics-based models, e.g., by using the outputs of physics-based models as input data to the generator-learner framework, and by using physics-based models as generators trained using validation data in the adversarial learning framework. These methods are being developed under the broad paradigm of theory-guided data science that we are developing to integrate scientific knowledge with data science methods for accelerating scientific discovery.

  19. Physical modelling of the nuclear pore complex

    PubMed Central

    Fassati, Ariberto; Ford, Ian J.; Hoogenboom, Bart W.

    2013-01-01

    Physically interesting behaviour can arise when soft matter is confined to nanoscale dimensions. A highly relevant biological example of such a phenomenon is the Nuclear Pore Complex (NPC) found perforating the nuclear envelope of eukaryotic cells. In the central conduit of the NPC, of ∼30–60 nm diameter, a disordered network of proteins regulates all macromolecular transport between the nucleus and the cytoplasm. In spite of a wealth of experimental data, the selectivity barrier of the NPC has yet to be explained fully. Experimental and theoretical approaches are complicated by the disordered and heterogeneous nature of the NPC conduit. Modelling approaches have focused on the behaviour of the partially unfolded protein domains in the confined geometry of the NPC conduit, and have demonstrated that within the range of parameters thought relevant for the NPC, widely varying behaviour can be observed. In this review, we summarise recent efforts to physically model the NPC barrier and function. We illustrate how attempts to understand NPC barrier function have employed many different modelling techniques, each of which have contributed to our understanding of the NPC.

  20. A physical data model for fields and agents

    NASA Astrophysics Data System (ADS)

    de Jong, Kor; de Bakker, Merijn; Karssenberg, Derek

    2016-04-01

    Two approaches exist in simulation modeling: agent-based and field-based modeling. In agent-based (or individual-based) simulation modeling, the entities representing the system's state are represented by objects, which are bounded in space and time. Individual objects, like an animal, a house, or a more abstract entity like a country's economy, have properties representing their state. In an agent-based model this state is manipulated. In field-based modeling, the entities representing the system's state are represented by fields. Fields capture the state of a continuous property within a spatial extent, examples of which are elevation, atmospheric pressure, and water flow velocity. With respect to the technology used to create these models, the domains of agent-based and field-based modeling have often been separate worlds. In environmental modeling, widely used logical data models include feature data models for point, line and polygon objects, and the raster data model for fields. Simulation models are often either agent-based or field-based, even though the modeled system might contain both entities that are better represented by individuals and entities that are better represented by fields. We think that the reason for this dichotomy in kinds of models might be that the traditional object and field data models underlying those models are relatively low level. We have developed a higher level conceptual data model for representing both non-spatial and spatial objects, and spatial fields (De Bakker et al. 2016). Based on this conceptual data model we designed a logical and physical data model for representing many kinds of data, including the kinds used in earth system modeling (e.g. hydrological and ecological models). The goal of this work is to be able to create high level code and tools for the creation of models in which entities are representable by both objects and fields. Our conceptual data model is capable of representing the traditional feature data

  1. A Reciprocal Effects Model of Children's Body Fat Self-Concept: Relations With Physical Self-Concept and Physical Activity.

    PubMed

    Garn, Alex C; Morin, Alexandre J S; Martin, Jeffrey; Centeio, Erin; Shen, Bo; Kulik, Noel; Somers, Cheryl; McCaughtry, Nate

    2016-06-01

    This study investigated a reciprocal effects model (REM) of children's body fat self-concept and physical self-concept, and objectively measured school physical activity at different intensities. Grade four students (N = 376; M age = 9.07, SD = .61; 55% boys) from the midwest region of the United States completed measures of physical self-concept and body fat self-concept, and wore accelerometers for three consecutive school days at the beginning and end of one school year. Findings from structural equation modeling analyses did not support reciprocal effects. However, children's body fat self-concept predicted future physical self-concept and moderate-to-vigorous physical activity (MVPA). Multigroup analyses explored the moderating role of weight status, sex, ethnicity, and sex*ethnicity within the REM. Findings supported invariance, suggesting that the observed relations were generalizable for these children across demographic groups. Links between body fat self-concept and future physical self-concept and MVPA highlight self-enhancing effects that can promote children's health and well-being.

  2. Modeling methodology for the accurate and prompt prediction of symptomatic events in chronic diseases.

    PubMed

    Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L

    2016-08-01

    Prediction of symptomatic crises in chronic diseases allows decisions to be taken before the symptoms occur, such as the intake of drugs to avoid the symptoms or the activation of medical alarms. The prediction horizon is in this case an important parameter, as it must accommodate the pharmacokinetics of medications or the response time of medical services. This paper presents a study of the prediction limits of a chronic disease with symptomatic crises: the migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to tackle the trade-off between conservative but robust predictive models and less accurate predictions with longer horizons. The obtained results show a prediction horizon close to 40 min, which is in the time range of the drug pharmacokinetics. Experiments have been performed in a realistic scenario where input data have been acquired in an ambulatory clinical study by the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the future horizon in the development of prediction algorithms for diseases experiencing symptomatic crises. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Can AERONET data be used to accurately model the monochromatic beam and circumsolar irradiances under cloud-free conditions in desert environment?

    NASA Astrophysics Data System (ADS)

    Eissa, Y.; Blanc, P.; Wald, L.; Ghedira, H.

    2015-12-01

    Routine measurements of the beam irradiance at normal incidence include the irradiance originating from within the extent of the solar disc only (DNIS), whose angular extent is 0.266° ± 1.7 %, and from a larger circumsolar region, called the circumsolar normal irradiance (CSNI). This study investigates whether the spectral aerosol optical properties of the AERONET stations are sufficient for an accurate modelling of the monochromatic DNIS and CSNI under cloud-free conditions in a desert environment. The data from an AERONET station in Abu Dhabi, United Arab Emirates, and the collocated Sun and Aureole Measurement instrument, which offers reference measurements of the monochromatic profile of solar radiance, were exploited. Using the AERONET data, both the radiative transfer models libRadtran and SMARTS offer an accurate estimate of the monochromatic DNIS, with a relative root mean square error (RMSE) of 6 % and a coefficient of determination greater than 0.96. The observed relative bias obtained with libRadtran is +2 %, while that obtained with SMARTS is -1 %. After testing two configurations in SMARTS and three in libRadtran for modelling the monochromatic CSNI, libRadtran exhibits the most accurate results when the AERONET aerosol phase function is represented as a two-term Henyey-Greenstein phase function. In this case libRadtran exhibited a relative RMSE and bias of 27 % and -24 %, respectively, and a coefficient of determination of 0.882. Therefore, AERONET data may very well be used to model the monochromatic DNIS and the monochromatic CSNI. The results are promising and pave the way towards reporting the contribution of the broadband circumsolar irradiance to standard measurements of the beam irradiance.

  4. Toward University Modeling Instruction--Biology: Adapting Curricular Frameworks from Physics to Biology

    ERIC Educational Resources Information Center

    Manthey, Seth; Brewe, Eric

    2013-01-01

    University Modeling Instruction (UMI) is an approach to curriculum and pedagogy that focuses instruction on engaging students in building, validating, and deploying scientific models. Modeling Instruction has been successfully implemented in both high school and university physics courses. Studies within the physics education research (PER)…

  5. Prediction of brittleness based on anisotropic rock physics model for kerogen-rich shale

    NASA Astrophysics Data System (ADS)

    Qian, Ke-Ran; He, Zhi-Liang; Chen, Ye-Quan; Liu, Xi-Wu; Li, Xiang-Yang

    2017-12-01

    The construction of a shale rock physics model and the selection of an appropriate brittleness index (BI) are two significant steps that can influence the accuracy of brittleness prediction. On one hand, the existing models of kerogen-rich shale are controversial, so a reasonable rock physics model needs to be built. On the other hand, several types of equations already exist for predicting the BI, whose feasibility needs to be carefully considered. This study constructed a kerogen-rich rock physics model by performing the self-consistent approximation and the differential effective medium theory to model intercoupled clay and kerogen mixtures. The feasibility of our model was confirmed by comparison with classical models, showing better accuracy. Templates were constructed based on our model to link physical properties and the BI. Different equations for the BI had different sensitivities, making them suitable for different types of formations. Equations based on Young's modulus were sensitive to variations in lithology, while those using Lamé's coefficients were sensitive to porosity and pore fluids. Physical information must be considered to improve brittleness prediction.
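
    One widely used Young's-modulus-based definition of the BI (a Rickman-type average of normalized Young's modulus and Poisson's ratio, quoted here only as an example of the kind of equation the record compares) can be evaluated as follows with hypothetical log values:

    ```python
    import numpy as np

    def brittleness_index(E, nu, E_min, E_max, nu_min, nu_max):
        """Rickman-style brittleness index in percent: Young's modulus normalized so
        that stiffer is more brittle, Poisson's ratio so that lower is more brittle."""
        E_norm = (E - E_min) / (E_max - E_min)
        nu_norm = (nu_max - nu) / (nu_max - nu_min)
        return 50.0 * (E_norm + nu_norm)

    # Hypothetical well-log interval (E in GPa, nu dimensionless) and assumed
    # normalization bounds for the formation of interest.
    E = np.array([25.0, 40.0, 55.0])
    nu = np.array([0.32, 0.26, 0.21])
    print(brittleness_index(E, nu, E_min=10.0, E_max=70.0, nu_min=0.15, nu_max=0.40))
    ```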

  6. Development of anatomically and dielectrically accurate breast phantoms for microwave imaging applications

    NASA Astrophysics Data System (ADS)

    O'Halloran, M.; Lohfeld, S.; Ruvio, G.; Browne, J.; Krewer, F.; Ribeiro, C. O.; Inacio Pita, V. C.; Conceicao, R. C.; Jones, E.; Glavin, M.

    2014-05-01

    Breast cancer is one of the most common cancers in women. In the United States alone, it accounts for 31% of new cancer cases, and is second only to lung cancer as the leading cause of deaths in American women. More than 184,000 new cases of breast cancer are diagnosed each year resulting in approximately 41,000 deaths. Early detection and intervention is one of the most significant factors in improving the survival rates and quality of life experienced by breast cancer sufferers, since this is the time when treatment is most effective. One of the most promising breast imaging modalities is microwave imaging. The physical basis of active microwave imaging is the dielectric contrast between normal and malignant breast tissue that exists at microwave frequencies. The dielectric contrast is mainly due to the increased water content present in the cancerous tissue. Microwave imaging is non-ionizing, does not require breast compression, is less invasive than X-ray mammography, and is potentially low cost. While several prototype microwave breast imaging systems are currently in various stages of development, the design and fabrication of anatomically and dielectrically representative breast phantoms to evaluate these systems is often problematic. While some existing phantoms are composed of dielectrically representative materials, they rarely accurately represent the shape and size of a typical breast. Conversely, several phantoms have been developed to accurately model the shape of the human breast, but have inappropriate dielectric properties. This study will briefly review existing phantoms before describing the development of a more accurate and practical breast phantom for the evaluation of microwave breast imaging systems.

  7. Laboratory and Physical Modelling of Building Ventilation Flows

    NASA Astrophysics Data System (ADS)

    Hunt, Gary

    2001-11-01

    Heating and ventilating buildings accounts for a significant fraction of the total energy budget of cities, and an immediate challenge in building physics is the design of sustainable, low-energy buildings. Natural ventilation provides a low-energy solution as it harnesses the buoyancy force associated with temperature differences between the internal and external environment, and the wind, to drive a ventilating flow. Modern naturally-ventilated buildings use innovative design solutions, e.g. glazed atria and solar chimneys, to enhance the ventilation, and demand for these and other designs has far outstripped our understanding of the fluid mechanics within these buildings. Developing an understanding of the thermal stratification and movement of air provides a considerable challenge as the flows involve interactions between stratification and turbulence, often in complex geometries. An approach that has provided significant new insight into these flows, and which has led to the development of design guidelines for architects and ventilation engineers, is laboratory modelling at small scale in water tanks combined with physical modelling. Density differences to drive the flow in simplified plexiglass models of rooms or buildings are provided by fresh and salt water solutions, and wind flow is represented by a mean flow in a flume tank. In tandem with the experiments, theoretical models that capture the essential physics of these flows have been developed in order to generalise the experimental results to a wide range of typical building geometries and operating conditions. This paper describes the application and outcomes of these modelling techniques to the study of a variety of natural ventilation flows in buildings.

  8. Gas Hydrate Estimation Using Rock Physics Modeling and Seismic Inversion

    NASA Astrophysics Data System (ADS)

    Dai, J.; Dutta, N.; Xu, H.

    2006-05-01

    We conducted a theoretical study of the effects of gas hydrate saturation on the acoustic properties (P- and S-wave velocities, and bulk density) of host rocks, using wireline log data from the Mallik wells in the Mackenzie Delta in Northern Canada. We evaluated a number of gas hydrate rock physics models that correspond to different rock textures. Our study shows that, among the existing rock physics models, the one that treats gas hydrate as part of the solid matrix best fits the measured data. This model was also tested on gas hydrate hole 995B of ODP Leg 164 drilling at Blake Ridge, which shows an adequate match. Based on the understanding of rock models of gas hydrates and properties of shallow sediments, we define a procedure that quantifies gas hydrate using rock physics modeling and seismic inversion. The method allows us to estimate gas hydrate directly from seismic information only. This paper will show examples of gas hydrate quantification from both a 1D profile and a 3D volume in the deepwater Gulf of Mexico.

  9. Funnel metadynamics as accurate binding free-energy method

    PubMed Central

    Limongelli, Vittorio; Bonomi, Massimiliano; Parrinello, Michele

    2013-01-01

    A detailed description of the events ruling ligand/protein interaction and an accurate estimation of the drug affinity to its target is of great help in speeding up drug discovery strategies. We have developed a metadynamics-based approach, named funnel metadynamics, that allows the ligand to enhance the sampling of the target binding sites and its solvated states. This method leads to an efficient characterization of the binding free-energy surface and an accurate calculation of the absolute protein–ligand binding free energy. We illustrate our protocol in two systems, benzamidine/trypsin and SC-558/cyclooxygenase 2. In both cases, the X-ray conformation has been found as the lowest free-energy pose, and the computed protein–ligand binding free energy is in good agreement with experiments. Furthermore, funnel metadynamics unveils important information about the binding process, such as the presence of alternative binding modes and the role of waters. The results achieved at an affordable computational cost make funnel metadynamics a valuable method for drug discovery and for dealing with a variety of problems in chemistry, physics, and material science. PMID:23553839

  10. The Play Community: A Student-Centered Model for Physical Education

    ERIC Educational Resources Information Center

    Johnson, Tyler G.; Bolter, Nicole D.; Stoll, Sharon Kay

    2014-01-01

    As a result of their participation in K-12 physical education, students should obtain high levels of physical activity and learn motor and/or sport skills. How to accomplish these outcomes in the context of K-12 physical education is a continuous challenge for teachers. The purpose of this article is to introduce the play community model, which…

  11. Physical Models of Cognition

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1994-01-01

    This paper presents and discusses physical models for simulating some aspects of neural intelligence, and, in particular, the process of cognition. The main departure from the classical approach here is in utilization of a terminal version of classical dynamics introduced by the author earlier. Based upon violations of the Lipschitz condition at equilibrium points, terminal dynamics attains two new fundamental properties: it is spontaneous and nondeterministic. Special attention is focused on terminal neurodynamics as a particular architecture of terminal dynamics which is suitable for modeling of information flows. Terminal neurodynamics possesses a well-organized probabilistic structure which can be analytically predicted, prescribed, and controlled, and therefore which presents a powerful tool for modeling real-life uncertainties. Two basic phenomena associated with random behavior of neurodynamic solutions are exploited. The first one is a stochastic attractor: a stable stationary stochastic process to which random solutions of a closed system converge. As a model of the cognition process, a stochastic attractor can be viewed as a universal tool for generalization and formation of classes of patterns. The concept of stochastic attractor is applied to model a collective brain paradigm explaining coordination between simple units of intelligence which perform a collective task without direct exchange of information. The second fundamental phenomenon discussed is terminal chaos which occurs in open systems. Applications of terminal chaos to information fusion as well as to explanation and modeling of coordination among neurons in biological systems are discussed. It should be emphasized that all the models of terminal neurodynamics are implementable in analog devices, which means that all the cognition processes discussed in the paper are reducible to the laws of Newtonian mechanics.

  12. A Theoretical Model of Children's Storytelling Using Physically-Oriented Technologies (SPOT)

    ERIC Educational Resources Information Center

    Guha, Mona Leigh; Druin, Allison; Montemayor, Jaime; Chipman, Gene; Farber, Allison

    2007-01-01

    This paper develops a model of children's storytelling using Physically-Oriented Technology (SPOT). The SPOT model draws upon literature regarding current physical storytelling technologies and was developed using a grounded theory approach to qualitative research. This empirical work focused on the experiences of 18 children, ages 5-6, who worked…

  13. Physics at a 100 TeV pp Collider: Standard Model Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mangano, M. L.; Zanderighi, G.; Aguilar Saavedra, J. A.

    This report summarises the properties of Standard Model processes at the 100 TeV pp collider. We document the production rates and typical distributions for a number of benchmark Standard Model processes, and discuss new dynamical phenomena arising at the highest energies available at this collider. We discuss the intrinsic physics interest in the measurement of these Standard Model processes, as well as their role as backgrounds for New Physics searches.

  14. Accurate Modeling of the Terrestrial Gamma-Ray Background for Homeland Security Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandness, Gerald A.; Schweppe, John E.; Hensley, Walter K.

    2009-10-24

    The Pacific Northwest National Laboratory has developed computer models to simulate the use of radiation portal monitors to screen vehicles and cargo for the presence of illicit radioactive material. The gamma radiation emitted by the vehicles or cargo containers must often be measured in the presence of a relatively large gamma-ray background mainly due to the presence of potassium, uranium, and thorium (and progeny isotopes) in the soil and surrounding building materials. This large background is often a significant limit to the detection sensitivity for items of interest and must be modeled accurately for analyzing homeland security situations. Calculations of the expected gamma-ray emission from a disk of soil and asphalt were made using the Monte Carlo transport code MCNP and were compared to measurements made at a seaport with a high-purity germanium detector. Analysis revealed that the energy spectrum of the measured background could not be reproduced unless the model included gamma rays coming from the ground out to distances of at least 300 m. The contribution from beyond about 50 m was primarily due to gamma rays that scattered in the air before entering the detectors rather than passing directly from the ground to the detectors. These skyshine gamma rays contribute tens of percent to the total gamma-ray spectrum, primarily at energies below a few hundred keV. The techniques that were developed to efficiently calculate the contributions from a large soil disk and a large air volume in a Monte Carlo simulation are described and the implications of skyshine in portal monitoring applications are discussed.
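
    A heavily simplified back-of-envelope sketch (not the MCNP model of the abstract) helps show why the far-field ground matters: the unscattered (direct) flux from a uniform surface source saturates within the first tens of metres of disk radius, so the measured contribution from ground beyond roughly 50 m must arrive mainly through air scatter, i.e. skyshine. The detector height and the air attenuation coefficient used here are assumed illustrative values.

```python
import numpy as np

def direct_flux_fraction(r_max, h=2.0, mu_air=7.6e-3, r_total=1000.0, n=20000):
    """Fraction of the unscattered (direct) flux at detector height h [m] contributed
    by soil out to radius r_max, for a uniform isotropic surface source.
    mu_air ~ 7.6e-3 /m is an assumed attenuation coefficient for ~1 MeV photons."""
    r = np.linspace(1e-3, r_total, n)
    rho = np.hypot(r, h)                            # slant distance to each annulus
    dphi = r * np.exp(-mu_air * rho) / rho**2       # annulus contribution (per unit source)
    cum = np.cumsum(dphi) / dphi.sum()
    return np.interp(r_max, r, cum)

for rm in (50, 100, 300):
    print(rm, "m:", round(direct_flux_fraction(rm), 2))
```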

  15. Towards a more accurate microscopic description of the moving contact line problem - incorporating nonlocal effects through a statistical mechanics framework

    NASA Astrophysics Data System (ADS)

    Nold, Andreas; Goddard, Ben; Sibley, David; Kalliadasis, Serafim

    2014-03-01

    Multiscale effects play a predominant role in wetting phenomena such as the moving contact line. An accurate description is of paramount interest for a wide range of industrial applications, yet it is a matter of ongoing research, due to the difficulty of incorporating different physical effects in one model. Important small-scale phenomena are corrections to the attractive fluid-fluid and wall-fluid forces in inhomogeneous density distributions, which often previously have been accounted for by the disjoining pressure in an ad-hoc manner. We systematically derive a novel model for the description of a single-component liquid-vapor multiphase system which inherently incorporates these nonlocal effects. This derivation, which is inspired by statistical mechanics in the framework of colloidal density functional theory, is critically discussed with respect to its assumptions and restrictions. The model is then employed numerically to study a moving contact line of a liquid fluid displacing its vapor phase. We show how nonlocal physical effects are inherently incorporated by the model and describe how classical macroscopic results for the contact line motion are retrieved. We acknowledge financial support from ERC Advanced Grant No. 247031 and Imperial College through a DTG International Studentship.

  16. Mathematical Modeling Is Also Physics--Interdisciplinary Teaching between Mathematics and Physics in Danish Upper Secondary Education

    ERIC Educational Resources Information Center

    Michelsen, Claus

    2015-01-01

    Mathematics plays a crucial role in physics. This role is brought about predominantly through the building, employment, and assessment of mathematical models, and teachers and educators should capture this relationship in the classroom in an effort to improve students' achievement and attitude in both physics and mathematics. But although there…

  17. Development of the physics driver in NOAA Environmental Modeling System (NEMS)

    NASA Astrophysics Data System (ADS)

    Lei, H.; Iredell, M.; Tripp, P.

    2016-12-01

    As a key component of the Next Generation Global Prediction System (NGGPS), a physics driver has been developed in the NOAA Environmental Modeling System (NEMS) in order to facilitate the research, development, and transition to operations of innovations in atmospheric physical parameterizations. The physics driver connects the atmospheric dynamic core, the Common Community Physics Package and the other NEMS-based forecast components (land, ocean, sea ice, wave, and space weather). In the current global forecasting system, the physics driver has incorporated the major existing physics packages, including radiation, surface physics, cloud and microphysics, ozone, and stochastic physics. The physics driver is also applicable to external physics packages. The structural adjustment in NEMS of separating the PHYS trunk creates an open physics package pool; this open platform benefits the enhancement of U.S. weather forecasting capability. In addition, with the universal physics driver, NEMS can also be used for specific functions by connecting external target physics packages through the driver. Its function was tested by connecting a physics dust-radiation model to the system; the modified system can then be used for dust storm prediction and forecasting. The physics driver has also been developed into a standalone form to facilitate development work on physics packages: developers can save instantaneous fields of meteorological data and snapshots from the running system and then use them as offline driving data to test new physics modules or small modifications to current modules. This avoids running the whole system for every test.

  18. NIMROD Resistive Magnetohydrodynamic Simulations of Spheromak Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hooper, E B; Cohen, B I; McLean, H S

    The physics of spheromak plasmas is addressed by time-dependent, three-dimensional, resistive magneto-hydrodynamic simulations with the NIMROD code. Included in some detail are the formation of a spheromak driven electrostatically by a coaxial plasma gun with a flux-conserver geometry and power systems that accurately model the Sustained Spheromak Physics Experiment (SSPX) (R. D. Wood, et al., Nucl. Fusion 45, 1582 (2005)). The controlled decay of the spheromak plasma over several milliseconds is also modeled as the programmable current and voltage relax, resulting in simulations of entire experimental pulses. Reconnection phenomena and the effects of current profile evolution on the growth of symmetry-breaking toroidal modes are diagnosed; these in turn affect the quality of magnetic surfaces and the energy confinement. The sensitivity of the simulation results to variations in both physical and numerical parameters, including spatial resolution, is addressed. There are significant points of agreement between the simulations and the observed experimental behavior, e.g., in the evolution of the magnetics and the sensitivity of the energy confinement to the presence of symmetry-breaking magnetic fluctuations.

  19. A minimal physical model for crawling cells

    NASA Astrophysics Data System (ADS)

    Tiribocchi, Adriano; Tjhung, Elsen; Marenduzzo, Davide; Cates, Michael E.

    Cell motility in higher organisms (eukaryotes) is fundamental to biological functions such as wound healing or immune response, and is also implicated in diseases such as cancer. For cells crawling on solid surfaces, considerable insights into motility have been gained from experiments replicating such motion in vitro. Such experiments show that crawling uses a combination of actin treadmilling (polymerization), which pushes the front of a cell forward, and myosin-induced stress (contractility), which retracts the rear. We present a simplified physical model of a crawling cell, consisting of a droplet of active polar fluid with contractility throughout, but with treadmilling confined to a thin layer near the supporting wall. The model shows a variety of shapes and/or motility regimes, some closely resembling cases seen experimentally. Our work supports the view that cellular motility exploits autonomous physical mechanisms whose operation does not need continuous regulatory effort.

  20. Morphometric analysis of Russian Plain's small lakes on the base of accurate digital bathymetric models

    NASA Astrophysics Data System (ADS)

    Naumenko, Mikhail; Guzivaty, Vadim; Sapelko, Tatiana

    2016-04-01

    Lake morphometry refers to the physical factors (shape, size, structure, etc.) that determine a lake depression. Morphology has a great influence on lake ecological characteristics, especially on water thermal conditions and mixing depth. Depth analyses, including sediment measurement at various depths, volumes of strata and shoreline characteristics, are often critical to the investigation of biological, chemical and physical properties of fresh waters as well as theoretical retention time. Management techniques such as loading capacity for effluents and selective removal of undesirable components of the biota are also dependent on detailed knowledge of the morphometry and flow characteristics. In recent years, lake bathymetric surveys were carried out using an echo sounder with high bottom-depth resolution and GPS coordinate determination. Digital bathymetric models with a 10x10 m spatial grid have been created for several small lakes of the Russian Plain whose areas do not exceed 1-2 sq. km. The statistical characteristics of the depth and slope distributions of these lakes were calculated on an equidistant grid. This will provide the level-surface-volume variations of small lakes and reservoirs, calculated through a combination of various satellite images. We discuss the methodological aspects of creating morphometric models of depths and slopes of small lakes as well as the advantages of digital models over traditional methods.
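
    A minimal sketch of the kind of morphometric quantity such a gridded bathymetric model yields is the hypsographic (depth-area) and depth-volume curve, obtained by counting grid cells below each level. The function name and the NaN masking convention are illustrative assumptions.

```python
import numpy as np

def hypsography(depth_grid, cell_size=10.0, n_levels=50):
    """depth_grid: 2D array of water depths in metres (NaN outside the lake).
    Returns (levels, area_m2, volume_m3): for each depth level, the lake-floor
    area lying below that level and the water volume remaining below it."""
    d = depth_grid[np.isfinite(depth_grid)]
    cell_area = cell_size**2
    levels = np.linspace(0.0, d.max(), n_levels)
    area = np.array([(d >= z).sum() * cell_area for z in levels])
    volume = np.array([np.clip(d - z, 0.0, None).sum() * cell_area for z in levels])
    return levels, area, volume
```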

  1. Tailored motivational message generation: A model and practical framework for real-time physical activity coaching.

    PubMed

    Op den Akker, Harm; Cabrita, Miriam; Op den Akker, Rieks; Jones, Valerie M; Hermens, Hermie J

    2015-06-01

    This paper presents a comprehensive and practical framework for automatic generation of real-time tailored messages in behavior change applications. Basic aspects of motivational messages are time, intention, content and presentation. Tailoring of messages to the individual user may involve all aspects of communication. A linear modular system is presented for generating such messages. It is explained how properties of user and context are taken into account in each of the modules of the system and how they affect the linguistic presentation of the generated messages. The model of motivational messages presented is based on an analysis of existing literature as well as the analysis of a corpus of motivational messages used in previous studies. The model extends existing 'ontology-based' approaches to message generation for real-time coaching systems found in the literature. Practical examples are given on how simple tailoring rules can be implemented throughout the various stages of the framework. Such examples can guide further research by clarifying what it means to use e.g. user targeting to tailor a message. As primary example we look at the issue of promoting daily physical activity. Future work is pointed out in applying the present model and framework, defining efficient ways of evaluating individual tailoring components, and improving effectiveness through the creation of accurate and complete user- and context models. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. A physical model of mass ejection in failed supernovae

    NASA Astrophysics Data System (ADS)

    Coughlin, Eric R.; Quataert, Eliot; Fernández, Rodrigo; Kasen, Daniel

    2018-06-01

    During the core collapse of massive stars, the formation of the proto-neutron star is accompanied by the emission of a significant amount of mass energy (˜0.3 M⊙) in the form of neutrinos. This mass-energy loss generates an outward-propagating pressure wave that steepens into a shock near the stellar surface, potentially powering a weak transient associated with an otherwise-failed supernova. We analytically investigate this mass-loss-induced wave generation and propagation. Heuristic arguments provide an accurate estimate of the amount of energy contained in the outgoing sound pulse. We then develop a general formalism for analysing the response of the star to centrally concentrated mass loss in linear perturbation theory. To build intuition, we apply this formalism to polytropic stellar models, finding qualitative and quantitative agreement with simulations and heuristic arguments. We also apply our results to realistic pre-collapse massive star progenitors (both giants and compact stars). Our analytic results for the sound pulse energy, excitation radius, and steepening in the stellar envelope are in good agreement with full time-dependent hydrodynamic simulations. We show that prior to the sound pulse's arrival at the stellar photosphere, the photosphere has already reached velocities ˜ 20-100 per cent of the local sound speed, thus likely modestly decreasing the stellar effective temperature prior to the star disappearing. Our results provide important constraints on the physical properties and observational appearance of failed supernovae.

  3. Reading Time as Evidence for Mental Models in Understanding Physics

    NASA Astrophysics Data System (ADS)

    Brookes, David T.; Mestre, José; Stine-Morrow, Elizabeth A. L.

    2007-11-01

    We present results of a reading study that show the usefulness of probing physics students' cognitive processing by measuring reading time. According to contemporary discourse theory, when people read a text, a network of associated inferences is activated to create a mental model. If the reader encounters an idea in the text that conflicts with existing knowledge, the construction of a coherent mental model is disrupted and reading times are prolonged, as measured using a simple self-paced reading paradigm. We used this effect to study how "non-Newtonian" and "Newtonian" students create mental models of conceptual systems in physics as they read texts related to the ideas of Newton's third law, energy, and momentum. We found significant effects of prior knowledge state on patterns of reading time, suggesting that students attempt to actively integrate physics texts with their existing knowledge.

  4. Generating Accurate 3d Models of Architectural Heritage Structures Using Low-Cost Camera and Open Source Algorithms

    NASA Astrophysics Data System (ADS)

    Zacharek, M.; Delis, P.; Kedzierski, M.; Fryskowska, A.

    2017-05-01

    These studies were conducted using a non-metric digital camera and dense image matching algorithms as non-contact methods of creating monument documentation. In order to process the imagery, several open-source software packages and algorithms for generating a dense point cloud from images were used: OSM Bundler, the VisualSFM software, and the web application ARC3D. Images obtained for each of the investigated objects were processed using these applications, and dense point clouds and textured 3D models were created. As a result of post-processing, the obtained models were filtered and scaled. The research showed that even with open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), although for the purpose of documentation and conservation of cultural and historical heritage such accuracy can be insufficient.

  5. On the use of a physically-based baseflow timescale in land surface models.

    NASA Astrophysics Data System (ADS)

    Jost, A.; Schneider, A. C.; Oudin, L.; Ducharne, A.

    2017-12-01

    Groundwater discharge is an important component of streamflow, and estimating its spatio-temporal variation in response to changes in recharge is of great value to water resource planning and essential for accurate large-scale water balance modelling in land surface models (LSMs). A first-order representation of groundwater as a single linear storage element is frequently used in LSMs for the sake of simplicity, but it requires a suitable parametrization of the aquifer hydraulic behaviour in the form of the baseflow characteristic timescale (τ). Such a modelling approach can be hampered by the lack of available calibration data at global scale. Hydraulic groundwater theory provides an analytical framework to relate the baseflow characteristics to catchment descriptors. In this study, we use the long-time solution of the linearized Boussinesq equation to estimate τ at global scale, as a function of groundwater flow length and aquifer hydraulic diffusivity. Our goal is to evaluate the use of this spatially variable and physically based τ in the ORCHIDEE surface model in terms of simulated river discharges across large catchments. Aquifer transmissivity and drainable porosity stem from the high-resolution GLHYMPS datasets, whereas flow length is derived from an estimation of drainage density using the GRIN global river network. ORCHIDEE is run in offline mode and its results are compared to a reference simulation using an almost spatially constant, topography-dependent τ. We discuss the limits of our approach in terms of both the relevance and accuracy of global estimates of aquifer hydraulic properties and the extent to which the underlying assumptions in the analytical method are valid.
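
    A hedged sketch of this kind of parametrization: the long-time (first-harmonic) solution of the linearized Boussinesq equation gives a baseflow e-folding timescale proportional to φL²/T, with L a characteristic groundwater flow length (here taken as half the inverse drainage density), T the transmissivity and φ the drainable porosity. The 4/π² prefactor and the example numbers are assumptions for illustration, not the paper's calibrated values.

```python
import numpy as np

def baseflow_timescale(transmissivity, porosity, drainage_density):
    """Characteristic baseflow timescale from the long-time solution of the
    linearized Boussinesq equation (assumed first-harmonic form):
        tau = 4 * phi * L**2 / (pi**2 * T),  with  L ~ 1 / (2 * drainage_density).
    Units: transmissivity [m^2/s], drainage_density [1/m]; result in days."""
    L = 1.0 / (2.0 * drainage_density)                   # groundwater flow length [m]
    tau_s = 4.0 * porosity * L**2 / (np.pi**2 * transmissivity)
    return tau_s / 86400.0

# e.g. T = 1e-3 m^2/s, phi = 0.05, 1 km of stream per km^2 (drainage density 1e-3 1/m)
print(baseflow_timescale(1e-3, 0.05, 1e-3))   # on the order of tens of days
```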

  6. Design and implementation of space physics multi-model application integration based on web

    NASA Astrophysics Data System (ADS)

    Jiang, Wenping; Zou, Ziming

    With the development of research on the space environment and space science, how to provide a networked online computing environment for space weather, space environment and space physics models for the Chinese scientific community has become more and more important in recent years. Currently, there are two software modes for a space physics multi-model application integrated system (SPMAIS): C/S and B/S. The C/S mode, which is traditional and stand-alone, demands that a team or workshop from many disciplines and specialties build its own multi-model application integrated system, and requires the client to be deployed in different physical regions when users visit the integrated system. This requirement brings two shortcomings: it reduces the efficiency of researchers who use the models to compute, and it makes accessing the data inconvenient. Therefore, it is necessary to create a shared network resource access environment which helps users visit the computing resources of space physics models quickly through a terminal, for conducting space science research and forecasting the space environment. The SPMAIS develops high-performance, first-principles computational models of the space environment in B/S mode and uses these models to predict "space weather", to understand space mission data and to further our understanding of the solar system. The main goal of the space physics multi-model application integration system (SPMAIS) is to provide an easy and convenient user-driven online model operating environment. Up to now, the SPMAIS contains dozens of space environment models, including the international AP8/AE8, IGRF and T96 models, as well as a solar proton prediction model, a geomagnetic transmission model, etc., developed by Chinese scientists. Another function of SPMAIS is to integrate space observation data sets, which provide input data for online high-speed model computing. In this paper, the service-oriented architecture (SOA) concept that divides the system into

  7. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and van Leer's one-intermediate-state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
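
    To illustrate how the median function simplifies a monotonicity constraint, the sketch below builds a common MUSCL-style limited reconstruction in which the cell slope is median(0, backward difference, forward difference), i.e. the classic minmod choice. This is a generic illustration under that assumption, not necessarily the paper's exact constraint.

```python
import numpy as np

def median3(a, b, c):
    """Elementwise median of three arrays; median(0, a, b) equals minmod(a, b)."""
    return a + b + c - np.minimum(np.minimum(a, b), c) - np.maximum(np.maximum(a, b), c)

def limited_slopes(u):
    """Piecewise-linear reconstruction: one limited slope per cell of data u.
    The slope median(0, backward diff, forward diff) vanishes at local extrema,
    which is what preserves monotonicity."""
    du_minus = np.diff(u, prepend=u[0])      # backward differences
    du_plus = np.diff(u, append=u[-1])       # forward differences
    return median3(np.zeros_like(u), du_minus, du_plus)
```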

  8. Accurate diode behavioral model with reverse recovery

    NASA Astrophysics Data System (ADS)

    Banáš, Stanislav; Divín, Jan; Dobeš, Josef; Paňko, Václav

    2018-01-01

    This paper deals with a comprehensive behavioral model of a p-n junction diode containing the reverse recovery effect, applicable to all standard SPICE simulators supporting the Verilog-A language. The model has been successfully used in several production designs, which require its full complexity, robustness and a set of tuning parameters comparable with the standard compact SPICE diode model. Like the standard compact model, it is scalable with area and temperature and can be used as a stand-alone diode or as part of a more complex device macro-model, e.g. LDMOS, JFET, or bipolar transistor. The paper briefly presents the state of the art, followed by a chapter describing the model development and the achieved solutions. During precise model verification some of them were found to be non-robust or poorly converging and were replaced by more robust solutions, as demonstrated in the paper. Measurement results for different technologies and different devices, compared with simulations using the new behavioral model, are presented as model validation. The comparison of model validation in the time and frequency domains demonstrates that the implemented reverse recovery effect with correctly extracted parameters improves the simulation results not only in switching from the ON to the OFF state, which is often published, but also in the impedance/admittance frequency dependence in the GHz range. Finally, model parameter extraction and a comparison with SPICE compact models containing the reverse recovery effect are presented.

  9. FORMAL MODELING, MONITORING, AND CONTROL OF EMERGENCE IN DISTRIBUTED CYBER PHYSICAL SYSTEMS

    DTIC Science & Technology

    2018-02-23

    Final technical report, University of Texas at Arlington, February 2018 (reporting period April 2015 to April 2017). This project studied emergent behavior in distributed cyber-physical systems (DCPS).

  10. Accurate Emission Line Diagnostics at High Redshift

    NASA Astrophysics Data System (ADS)

    Jones, Tucker

    2017-08-01

    How do the physical conditions of high redshift galaxies differ from those seen locally? Spectroscopic surveys have invested hundreds of nights of 8- and 10-meter telescope time as well as hundreds of Hubble orbits to study evolution in the galaxy population at redshifts z ≈ 0.5-4 using rest-frame optical strong emission line diagnostics. These surveys reveal evolution in the gas excitation with redshift, but the physical cause is not yet understood. Consequently there are large systematic errors in derived quantities such as metallicity. We have used direct measurements of gas density, temperature, and metallicity in a unique sample at z=0.8 to determine reliable diagnostics for high redshift galaxies. Our measurements suggest that offsets in emission line ratios at high redshift are primarily caused by high N/O abundance ratios. However, our ground-based data cannot rule out other interpretations. Spatially resolved Hubble grism spectra are needed to distinguish between the remaining plausible causes such as active nuclei, shocks, diffuse ionized gas emission, and HII regions with escaping ionizing flux. Identifying the physical origin of evolving excitation will allow us to build the necessary foundation for accurate measurements of metallicity and other properties of high redshift galaxies. Only then can we exploit the wealth of data from current surveys and near-future JWST spectroscopy to understand how galaxies evolve over time.

  11. Active lifestyles in older adults: an integrated predictive model of physical activity and exercise

    PubMed Central

    Galli, Federica; Chirico, Andrea; Mallia, Luca; Girelli, Laura; De Laurentiis, Michelino; Lucidi, Fabio; Giordano, Antonio; Botti, Gerardo

    2018-01-01

    Physical activity and exercise have been identified as behaviors to preserve physical and mental health in older adults. The aim of the present study was to test the Integrated Behavior Change model in exercise and physical activity behaviors. The study evaluated two different samples of older adults: the first engaged in exercise class, the second doing spontaneous physical activity. The key analyses relied on Variance-Based Structural Modeling, which were performed by means of WARP PLS 6.0 statistical software. The analyses estimated the Integrated Behavior Change model in predicting exercise and physical activity, in a longitudinal design across two months of assessment. The tested models exhibited a good fit with the observed data derived from the model focusing on exercise, as well as with those derived from the model focusing on physical activity. Results showed, also, some effects and relations specific to each behavioral context. Results may form a starting point for future experimental and intervention research. PMID:29875997

  12. Characterising molecules for fundamental physics: an accurate spectroscopic model of methyltrioxorhenium derived from new infrared and millimetre-wave measurements.

    PubMed

    Asselin, Pierre; Berger, Yann; Huet, Thérèse R; Margulès, Laurent; Motiyenko, Roman; Hendricks, Richard J; Tarbutt, Michael R; Tokunaga, Sean K; Darquié, Benoît

    2017-02-08

    Precise spectroscopic analysis of polyatomic molecules enables many striking advances in physical chemistry and fundamental physics. We use several new high-resolution spectroscopic devices to improve our understanding of the rotational and rovibrational structure of methyltrioxorhenium (MTO), the achiral parent of a family of large oxorhenium compounds that are ideal candidate species for a planned measurement of parity violation in chiral molecules. Using millimetre-wave and infrared spectroscopy in a pulsed supersonic jet, a cryogenic buffer gas cell, and room temperature absorption cells, we probe the ground state and the Re=O antisymmetric and symmetric stretching excited states of both the CH3(187Re)O3 and CH3(185Re)O3 isotopologues in the gas phase with unprecedented precision. By extending the rotational spectra to the 150-300 GHz range, we characterize the ground state rotational and hyperfine structure up to J = 43 and K = 41, resulting in refinements to the rotational, quartic and hyperfine parameters, and the determination of sextic parameters and a centrifugal distortion correction to the quadrupolar hyperfine constant. We obtain rovibrational data for temperatures between 6 and 300 K in the 970-1015 cm-1 range, at resolutions down to 8 MHz and accuracies of 30 MHz. We use these data to determine more precise excited-state rotational, Coriolis and quartic parameters, as well as the ground-state centrifugal distortion parameter DK of the 187Re isotopologue. We also account for hyperfine structure in the rovibrational transitions and hence determine the upper state rhenium atom quadrupole coupling constant eQq'.

  13. The past, present and future of cyber-physical systems: a focus on models.

    PubMed

    Lee, Edward A

    2015-02-26

    This paper is about better engineering of cyber-physical systems (CPSs) through better models. Deterministic models have historically proven extremely useful and arguably form the kingpin of the industrial revolution and the digital and information technology revolutions. Key deterministic models that have proven successful include differential equations, synchronous digital logic and single-threaded imperative programs. Cyber-physical systems, however, combine these models in such a way that determinism is not preserved. Two projects show that deterministic CPS models with faithful physical realizations are possible and practical. The first project is PRET, which shows that the timing precision of synchronous digital logic can be practically made available at the software level of abstraction. The second project is Ptides (programming temporally-integrated distributed embedded systems), which shows that deterministic models for distributed cyber-physical systems have practical faithful realizations. These projects are existence proofs that deterministic CPS models are possible and practical.

  14. Bell's Inequality: Revolution in Quantum Physics or Just AN Inadequate Mathematical Model?

    NASA Astrophysics Data System (ADS)

    Khrennikov, Andrei

    The main aim of this review is to stress the role of mathematical models in physics. The Bell inequality (BI) is often called the "most famous inequality of the 20th century." It is commonly accepted that its violation in corresponding experiments induced a revolution in quantum physics. Unlike "old quantum mechanics" (of Einstein, Schrödinger, Bohr, Heisenberg, Pauli, Landau, Fock), "modern quantum mechanics" (of Bell, Aspect, Zeilinger, Shimony, Greenberger, Gisin, Mermin) takes seriously so-called quantum non-locality. We will show that the conclusion that one has to give up realism (i.e., the possibility of assigning results of measurements to physical systems) or locality (i.e., admit action at a distance) is heavily based on one special mathematical model. This model was invented by A. N. Kolmogorov in 1933. One should pay serious attention to the role of mathematical models in physics. The problems of realism and locality induced by Bell's argument can be solved by using non-Kolmogorovian probabilistic models. We compare this situation with non-Euclidean geometric models in relativity theory.

  15. Causal modeling of secondary science students' intentions to enroll in physics

    NASA Astrophysics Data System (ADS)

    Crawley, Frank E.; Black, Carolyn B.

    The purpose of this study was to explore the utility of the theory of planned behavior model developed by social psychologists for understanding and predicting the behavioral intentions of secondary science students regarding enrolling in physics. In particular, the study used a three-stage causal model to investigate the links from external variables to behavioral, normative, and control beliefs; from beliefs to attitudes, subjective norm, and perceived behavioral control; and from attitudes, subjective norm, and perceived behavioral control to behavioral intentions. The causal modeling method was employed to verify the underlying causes of secondary science students' interest in enrolling in physics as predicted by the theory of planned behavior. Data were collected from secondary science students (N = 264) residing in a central Texas city who were enrolled in earth science (8th grade), biology (9th grade), physical science (10th grade), or chemistry (11th grade) courses. Cause-and-effect relationships were analyzed using path analysis to test the direct effects of model variables specified in the theory of planned behavior. Results of this study indicated that students' intention to enroll in a high school physics course was determined by their attitude toward enrollment and their degree of perceived behavioral control. Attitude, subjective norm, and perceived behavioral control were, in turn, formed as a result of specific beliefs that students held about enrolling in physics. Grade level and career goals were found to be instrumental in shaping students' attitude. Immediate family members were identified as major referents in the social support system for enrolling in physics. Course and extracurricular conflicts and the fear of failure were shown to be the primary beliefs obstructing students' perception of control over physics enrollment. Specific recommendations are offered to researchers and practitioners for strengthening secondary school students

  16. Accurate quantification of fluorescent targets within turbid media based on a decoupled fluorescence Monte Carlo model.

    PubMed

    Deng, Yong; Luo, Zhaoyang; Jiang, Xu; Xie, Wenhao; Luo, Qingming

    2015-07-01

    We propose a method based on a decoupled fluorescence Monte Carlo model for constructing fluorescence Jacobians to enable accurate quantification of fluorescence targets within turbid media. The effectiveness of the proposed method is validated using two cylindrical phantoms enclosing fluorescent targets within homogeneous and heterogeneous background media. The results demonstrate that our method can recover relative concentrations of the fluorescent targets with higher accuracy than the perturbation fluorescence Monte Carlo method. This suggests that our method is suitable for quantitative fluorescence diffuse optical tomography, especially for in vivo imaging of fluorophore targets for diagnosis of different diseases and abnormalities.

  17. Ensemble MD simulations restrained via crystallographic data: Accurate structure leads to accurate dynamics

    PubMed Central

    Xue, Yi; Skrynnikov, Nikolai R

    2014-01-01

    Currently, the best existing molecular dynamics (MD) force fields cannot accurately reproduce the global free-energy minimum which realizes the experimental protein structure. As a result, long MD trajectories tend to drift away from the starting coordinates (e.g., crystallographic structures). To address this problem, we have devised a new simulation strategy aimed at protein crystals. An MD simulation of protein crystal is essentially an ensemble simulation involving multiple protein molecules in a crystal unit cell (or a block of unit cells). To ensure that average protein coordinates remain correct during the simulation, we introduced crystallography-based restraints into the MD protocol. Because these restraints are aimed at the ensemble-average structure, they have only minimal impact on conformational dynamics of the individual protein molecules. So long as the average structure remains reasonable, the proteins move in a native-like fashion as dictated by the original force field. To validate this approach, we have used the data from solid-state NMR spectroscopy, which is the orthogonal experimental technique uniquely sensitive to protein local dynamics. The new method has been tested on the well-established model protein, ubiquitin. The ensemble-restrained MD simulations produced lower crystallographic R factors than conventional simulations; they also led to more accurate predictions for crystallographic temperature factors, solid-state chemical shifts, and backbone order parameters. The predictions for 15N R1 relaxation rates are at least as accurate as those obtained from conventional simulations. Taken together, these results suggest that the presented trajectories may be among the most realistic protein MD simulations ever reported. In this context, the ensemble restraints based on high-resolution crystallographic data can be viewed as protein-specific empirical corrections to the standard force fields. PMID:24452989
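
    The central idea can be sketched as a harmonic restraint that acts on the ensemble-average coordinates of the protein copies in the simulated unit cell rather than on each copy, so that individual molecules keep native-like dynamics while their mean stays on the crystallographic structure. The force constant, the plain Cartesian averaging, and the absence of a rotational fit are simplifying assumptions for illustration.

```python
import numpy as np

def ensemble_restraint(coords, ref, k=10.0):
    """coords: (n_copies, n_atoms, 3) instantaneous coordinates of the protein
    copies in the unit cell; ref: (n_atoms, 3) crystallographic coordinates.
    The restraint energy depends only on the ensemble average, so each copy
    feels a force of order k/n_copies and retains its own fluctuations."""
    n_copies = coords.shape[0]
    mean = coords.mean(axis=0)                       # ensemble-average structure
    diff = mean - ref
    energy = 0.5 * k * np.sum(diff**2)
    forces = -k * diff / n_copies                    # identical force on every copy
    return energy, np.broadcast_to(forces, coords.shape)
```

    In practice one would also remove overall translation/rotation before averaging; the sketch omits that step for brevity.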

  18. Physical characteristics of shrub and conifer fuels for fire behavior models

    Treesearch

    Jonathan R. Gallacher; Thomas H. Fletcher; Victoria Lansinger; Sydney Hansen; Taylor Ellsworth; David R. Weise

    2017-01-01

    The physical properties and dimensions of foliage are necessary inputs for some fire spread models. Currently, almost no data exist on these plant characteristics to fill this need. In this report, we measured the physical properties and dimensions of the foliage from 10 live shrub and conifer fuels throughout a 1-year period. We developed models to predict relative...

  19. Hunting Solomonoff's Swans: Exploring the Boundary Between Physics and Statistics in Hydrological Modeling

    NASA Astrophysics Data System (ADS)

    Nearing, G. S.

    2014-12-01

    Statistical models consistently out-perform conceptual models in the short term, however to account for a nonstationary future (or an unobserved past) scientists prefer to base predictions on unchanging and commutable properties of the universe - i.e., physics. The problem with physically-based hydrology models is, of course, that they aren't really based on physics - they are based on statistical approximations of physical interactions, and we almost uniformly lack an understanding of the entropy associated with these approximations. Thermodynamics is successful precisely because entropy statistics are computable for homogeneous (well-mixed) systems, and ergodic arguments explain the success of Newton's laws to describe systems that are fundamentally quantum in nature. Unfortunately, similar arguments do not hold for systems like watersheds that are heterogeneous at a wide range of scales. Ray Solomonoff formalized the situation in 1968 by showing that given infinite evidence, simultaneously minimizing model complexity and entropy in predictions always leads to the best possible model. The open question in hydrology is about what happens when we don't have infinite evidence - for example, when the future will not look like the past, or when one watershed does not behave like another. How do we isolate stationary and commutable components of watershed behavior? I propose that one possible answer to this dilemma lies in a formal combination of physics and statistics. In this talk I outline my recent analogue (Solomonoff's theorem was digital) of Solomonoff's idea that allows us to quantify the complexity/entropy tradeoff in a way that is intuitive to physical scientists. I show how to formally combine "physical" and statistical methods for model development in a way that allows us to derive the theoretically best possible model given any given physics approximation(s) and available observations. Finally, I apply an analogue of Solomonoff's theorem to evaluate the

  20. Automated generation of quantum-accurate classical interatomic potentials for metals and semiconductors

    NASA Astrophysics Data System (ADS)

    Thompson, Aidan; Foiles, Stephen; Schultz, Peter; Swiler, Laura; Trott, Christian; Tucker, Garritt

    2013-03-01

    Molecular dynamics (MD) is a powerful condensed matter simulation tool for bridging between macroscopic continuum models and quantum models (QM) treating a few hundred atoms, but is limited by the accuracy of available interatomic potentials. Sound physical and chemical understanding of these interactions has resulted in a variety of concise potentials for certain systems, but it is difficult to extend them to new materials and properties. The growing availability of large QM data sets has made it possible to use more automated machine-learning approaches. Bartók et al. demonstrated that the bispectrum of the local neighbor density provides good regression surrogates for QM models. We adopt a similar bispectrum representation within a linear regression scheme. We have produced potentials for silicon and tantalum, and we are currently extending the method to III-V compounds. Results will be presented demonstrating the accuracy of these potentials relative to the training data, as well as their ability to accurately predict material properties not explicitly included in the training data. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Dept. of Energy Nat. Nuclear Security Admin. under Contract DE-AC04-94AL85000.
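
    As a hedged sketch of the regression step only (the bispectrum descriptors themselves are assumed to be precomputed elsewhere), configuration energies can be modeled as a linear function of the summed per-atom descriptors and fitted by regularized least squares against QM training energies. Function names and the ridge parameter are illustrative.

```python
import numpy as np

def fit_linear_potential(descriptors, energies, ridge=1e-8):
    """descriptors: list over training configurations, each an (n_atoms, n_desc)
    array of per-atom bispectrum components; energies: QM total energies.
    Fits E_config = e0 * n_atoms + beta . sum_i B_i by ridge-regularized least
    squares and returns (e0, beta)."""
    X = np.array([[d.shape[0], *d.sum(axis=0)] for d in descriptors])
    A = X.T @ X + ridge * np.eye(X.shape[1])
    coeff = np.linalg.solve(A, X.T @ np.asarray(energies))
    return coeff[0], coeff[1:]

def predict_energy(d, e0, beta):
    """Predicted total energy of one configuration from its per-atom descriptors."""
    return e0 * d.shape[0] + d.sum(axis=0) @ beta
```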

  1. Soil physics: a Moroccan perspective

    NASA Astrophysics Data System (ADS)

    Lahlou, Sabah; Mrabet, Rachid; Ouadia, Mohamed

    2004-06-01

    Research on environmental pollution and degradation of soil and water resources is now of highest priority worldwide. To address these problems, soil physics should be conceived as a central core to this research. This paper objectives are to: (1) address the role and importance of soil physics, (2) demonstrate progress in this discipline, and (3) present various uses of soil physics in research, environment and industry. The study of dynamic processes at and within the soil vadose zone (flow, dispersion, transport, sedimentation, etc.), and ephemeral phenomena (deformation, compaction, etc.), form an area of particular interest in soil physics. Soil physics has changed considerably over time. These changes are due to needed precision in data collection for accurate interpretation of space and time variation of soil properties. Soil physics interacts with other disciplines and sciences such as hydro(geo)logy, agronomy, environment, micro-meteorology, pedology, mathematics, physics, water sciences, etc. These interactions prompted the emergence of advanced theories and comprehensive mechanisms of most natural processes, development of new mathematical tools (modeling and computer simulation, fractals, geostatistics, transformations), creation of high precision instrumentation (computer assisted, less time constraint, increased number of measured parameters) and the scale sharpening of physical measurements which ranges from micro to watershed. The environment industry has contributed to an enlargement of many facets of soil physics. In other words, research demand in soil physics has increased considerably to satisfy specific and environmental problems (contamination of water resources, global warming, etc.). Soil physics research is still at an embryonic stage in Morocco. Consequently, soil physicists can take advantage of developments occurring overseas, and need to build up a database of soil static and dynamic properties and to revise developed models to meet

  2. A physical model for low-frequency electromagnetic induction in the near field based on direct interaction between transmitter and receiver electrons

    PubMed Central

    Smith, Ray T.; Jjunju, Fred P. M.; Young, Iain S.; Taylor, Stephen

    2016-01-01

    A physical model of electromagnetic induction is developed which relates directly the forces between electrons in the transmitter and receiver windings of concentric coaxial finite coils in the near-field region. By applying the principle of superposition, the contributions from accelerating electrons in successive current loops are summed, allowing the peak-induced voltage in the receiver to be accurately predicted. Results show good agreement between theory and experiment for various receivers of different radii up to five times that of the transmitter. The limitations of the linear theory of electromagnetic induction are discussed in terms of the non-uniform current distribution caused by the skin effect. In particular, the explanation in terms of electromagnetic energy and Poynting’s theorem is contrasted with a more direct explanation based on variable filament induction across the conductor cross section. As the direct physical model developed herein deals only with forces between discrete current elements, it can be readily adapted to suit different coil geometries and is widely applicable in various fields of research such as near-field communications, antenna design, wireless power transfer, sensor applications and beyond. PMID:27493580
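
    The superposition idea can be illustrated in circuit terms: sum the pairwise coupling between transmitter and receiver current loops (a numerically evaluated Neumann double integral for coaxial loops) and predict the peak open-circuit voltage for a sinusoidal drive. This generic sketch is not the paper's electron-level force calculation; coil geometry and drive values are assumptions.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def mutual_inductance_loops(r1, r2, dz, n=720):
    """Neumann formula for two coaxial circular loops of radii r1, r2 [m],
    separated axially by dz [m], evaluated by quadrature over both loop angles."""
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    p1, p2 = np.meshgrid(phi, phi, indexing="ij")
    dist = np.sqrt(r1**2 + r2**2 - 2.0 * r1 * r2 * np.cos(p1 - p2) + dz**2)
    integrand = r1 * r2 * np.cos(p1 - p2) / dist
    dphi = 2.0 * np.pi / n
    return MU0 / (4.0 * np.pi) * integrand.sum() * dphi**2

def peak_voltage(tx_loops, rx_loops, i_peak, freq):
    """Peak open-circuit receiver voltage V = omega * I_peak * M_total, where
    M_total sums the contributions of every transmitter/receiver loop pair.
    tx_loops, rx_loops: lists of (radius, axial position) tuples."""
    m_total = sum(mutual_inductance_loops(r1, r2, z1 - z2)
                  for (r1, z1) in tx_loops for (r2, z2) in rx_loops)
    return 2.0 * np.pi * freq * i_peak * m_total
```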

  3. Application of thin plate splines for accurate regional ionosphere modeling with multi-GNSS data

    NASA Astrophysics Data System (ADS)

    Krypiak-Gregorczyk, Anna; Wielgosz, Pawel; Borkowski, Andrzej

    2016-04-01

    GNSS-derived regional ionosphere models are widely used in precise positioning as well as in ionosphere and space weather studies. However, their accuracy is often not sufficient to support precise positioning, RTK in particular. In this paper, we present a new approach that uses solely carrier phase multi-GNSS observables and thin plate splines (TPS) for accurate ionospheric TEC modeling. TPS is a closed solution of a variational problem minimizing both the sum of squared second derivatives of a smoothing function and the deviation between data points and this function. This approach is used in the UWM-rt1 regional ionosphere model developed at UWM in Olsztyn. The model provides ionospheric TEC maps with high spatial and temporal resolutions - 0.2x0.2 degrees and 2.5 minutes, respectively. For TEC estimation, EPN and EUPOS reference station data are used. The maps are available with a delay of 15-60 minutes. In this paper we compare the performance of the UWM-rt1 model with IGS global and CODE regional ionosphere maps during the ionospheric storm that took place on March 17th, 2015. During this storm, the TEC level over Europe doubled compared to earlier quiet days. The performance of the UWM-rt1 model was validated by (a) comparison to reference double-differenced ionospheric corrections over selected baselines, and (b) analysis of post-fit residuals to calibrated carrier phase geometry-free observational arcs at selected test stations. The results show a very good performance of the UWM-rt1 model. The post-fit residuals obtained with the UWM maps are lower by one order of magnitude than those obtained with the IGS maps. The accuracy of UWM-rt1-derived TEC maps is estimated at 0.5 TECU. This may be directly translated to the user positioning domain.
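
    A minimal sketch of thin-plate-spline TEC mapping, using SciPy's RBF interpolator with a smoothing parameter that plays the role of the variational trade-off described above. This is a generic illustration, not the UWM-rt1 implementation; coordinates, values and the grid are placeholders.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def fit_tec_map(lon, lat, vtec, grid_lon, grid_lat, smoothing=1.0):
    """Fit a thin plate spline to vertical TEC samples (lon, lat in degrees,
    vtec in TECU) at ionospheric pierce points and evaluate it on a regular grid.
    The smoothing parameter balances fidelity to the data against surface curvature."""
    pts = np.column_stack([lon, lat])
    tps = RBFInterpolator(pts, vtec, kernel="thin_plate_spline", smoothing=smoothing)
    glon, glat = np.meshgrid(grid_lon, grid_lat)
    return tps(np.column_stack([glon.ravel(), glat.ravel()])).reshape(glon.shape)

# e.g. a 0.2 x 0.2 degree grid (lon_obs, lat_obs, vtec_obs are hypothetical inputs):
# tec_grid = fit_tec_map(lon_obs, lat_obs, vtec_obs,
#                        np.arange(10, 25.2, 0.2), np.arange(45, 55.2, 0.2))
```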

  4. SUPAR: Smartphone as a ubiquitous physical activity recognizer for u-healthcare services.

    PubMed

    Fahim, Muhammad; Lee, Sungyoung; Yoon, Yongik

    2014-01-01

    The current generation of smartphones can be seen as one of the most ubiquitous devices for physical activity recognition. In this paper we propose a physical activity recognizer that provides u-healthcare services in a cost-effective manner by utilizing cloud computing infrastructure. Our model comprises the smartphone's embedded triaxial accelerometer to sense body movements and a cloud server to store and process the sensory data for numerous kinds of services. We compute time- and frequency-domain features over the raw signals and evaluate different machine learning algorithms to identify an accurate activity recognition model for four kinds of physical activities (i.e., walking, running, cycling and hopping). During our experiments we found that the Support Vector Machine (SVM) algorithm outperforms its counterparts for the aforementioned physical activities. Furthermore, we also explain how the smartphone application and the cloud server communicate with each other.
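
    A minimal sketch of the pipeline described: windowed triaxial accelerometer signals, simple time- and frequency-domain features, and an SVM classifier via scikit-learn. The window length, feature set, hyperparameters and label names are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def window_features(acc, fs=50, win_s=2.0):
    """acc: (n_samples, 3) triaxial accelerometer signal.  Returns one feature
    vector per non-overlapping window: per-axis mean and std, plus the dominant
    FFT frequency of the (zero-mean) acceleration magnitude."""
    win = int(fs * win_s)
    feats = []
    for i in range(0, len(acc) - win + 1, win):
        w = acc[i:i + win]
        mag = np.linalg.norm(w, axis=1)
        spectrum = np.abs(np.fft.rfft(mag - mag.mean()))
        dom_freq = np.fft.rfftfreq(win, 1.0 / fs)[spectrum.argmax()]
        feats.append(np.r_[w.mean(axis=0), w.std(axis=0), dom_freq])
    return np.array(feats)

# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
# clf.fit(window_features(train_acc), train_labels)   # labels: walk/run/cycle/hop
```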

  5. The effects of modeling instruction on high school physics academic achievement

    NASA Astrophysics Data System (ADS)

    Wright, Tiffanie L.

    The purpose of this study was to explore whether Modeling Instruction, compared to traditional lecturing, is an effective instructional method to promote academic achievement in selected high school physics classes at a rural middle Tennessee high school. This study used an ex post facto , quasi-experimental research methodology. The independent variables in this study were the instructional methods of teaching. The treatment variable was Modeling Instruction and the control variable was traditional lecture instruction. The Treatment Group consisted of participants in Physical World Concepts who received Modeling Instruction. The Control Group consisted of participants in Physical Science who received traditional lecture instruction. The dependent variable was gains scores on the Force Concepts Inventory (FCI). The participants for this study were 133 students each in both the Treatment and Control Groups (n = 266), who attended a public, high school in rural middle Tennessee. The participants were administered the Force Concepts Inventory (FCI) prior to being taught the mechanics of physics. The FCI data were entered into the computer-based Statistical Package for the Social Science (SPSS). Two independent samples t-tests were conducted to answer the research questions. There was a statistically significant difference between the treatment and control groups concerning the instructional method. Modeling Instructional methods were found to be effective in increasing the academic achievement of students in high school physics. There was no statistically significant difference between FCI gains scores for gender. Gender was found to have no effect on the academic achievement of students in high school physics classes. However, even though there was not a statistically significant difference, female students' gains scores were higher than male students' gains scores when Modeling Instructional methods of teaching were used. Based on these findings, it is recommended

  6. An Application of the Trans-Contextual Model of Motivation in Elementary School Physical Education

    ERIC Educational Resources Information Center

    Ntovolis, Yannis; Barkoukis, Vassilis; Michelinakis, Evaggelos; Tsorbatzoudis, Haralambos

    2015-01-01

    Elementary school physical education can play a prominent role in promoting children's leisure-time physical activity. The trans-contextual model of motivation has been proven effective in describing the process through which school physical education can affect students' leisure-time physical activity. This model has been tested in secondary…

  7. Global Coordinates and Exact Aberration Calculations Applied to Physical Optics Modeling of Complex Optical Systems

    NASA Astrophysics Data System (ADS)

    Lawrence, G.; Barnard, C.; Viswanathan, V.

    1986-11-01

    Historically, wave optics computer codes have been paraxial in nature. Folded systems could be modeled by "unfolding" the optical system. Calculation of optical aberrations is, in general, left for the analyst to do with off-line codes. While such paraxial codes were adequate for the simpler systems being studied 10 years ago, current problems such as phased arrays, ring resonators, coupled resonators, and grazing incidence optics require a major advance in analytical capability. This paper describes extension of the physical optics codes GLAD and GLAD V to include a global coordinate system and exact ray aberration calculations. The global coordinate system allows components to be positioned and rotated arbitrarily. Exact aberrations are calculated for components in aligned or misaligned configurations by using ray tracing to compute optical path differences and diffraction propagation. Optical path lengths between components and beam rotations in complex mirror systems are calculated accurately so that coherent interactions in phased arrays and coupled devices may be treated correctly.

  8. PREFACE: Physics-Based Mathematical Models for Nanotechnology

    NASA Astrophysics Data System (ADS)

    Voon, Lok C. Lew Yan; Melnik, Roderick; Willatzen, Morten

    2008-03-01

    in the cross-disciplinary research area: low-dimensional semiconductor nanostructures. Since the main properties of two-dimensional heterostructures (such as quantum wells) are now quite well understood, there has been a consistently growing interest in the mathematical physics community to further dimensionality reduction of semiconductor structures. Experimental achievements in realizing one-dimensional and quasi-zero-dimensional heterostructures have opened new opportunities for theory and applications of such low-dimensional semiconductor nanostructures. One of the most important implications of this process has been a critical re-examining of assumptions under which traditional quantum mechanical models have been derived in this field. Indeed, the formation of LDSNs, in particular quantum dots, is a competition between the surface energy in the structure and strain energy. However, current models for bandstructure calculations use quite a simplified analysis of strain relaxation effects, although such effects are in the heart of nanostructure formation. By now, it has been understood that traditional models in this field may not be adequate for modeling realistic objects based on LDSNs due to neglecting many effects that may profoundly influence optoelectronic properties of the nanostructures. Among such effects are electromechanical effects, including strain relaxation, piezoelectric effect, spontaneous polarization, and higher order nonlinear effects. Up to date, major efforts have been concentrated on the analysis of idealized, isolated quantum dots, while a typical self-assembled semiconductor quantum dot nanostructure is an array (or a molecule) of many individual quantum dots sitting on the same `substrate' known as the wetting layer. Each such dot contains several hundred thousand atoms. In order to account for quantum effects accurately in a situation like that, attempts can be made to apply ab initio or atomistic methodologies, but then one would face a

  9. Fundamental Physics with Antihydrogen

    NASA Astrophysics Data System (ADS)

    Hangst, J. S.

    Antihydrogen—the antimatter equivalent of the hydrogen atom—is of fundamental interest as a test bed for universal symmetries—such as CPT and the Weak Equivalence Principle for gravitation. Invariance under CPT requires that hydrogen and antihydrogen have the same spectrum. Antimatter is of course intriguing because of the observed baryon asymmetry in the universe—currently unexplained by the Standard Model. At the CERN Antiproton Decelerator (AD) [1], several groups have been working diligently since 1999 to produce, trap, and study the structure and behaviour of the antihydrogen atom. One of the main thrusts of the AD experimental program is to apply precision techniques from atomic physics to the study of antimatter. Such experiments complement the high-energy searches for physics beyond the Standard Model. Antihydrogen is the only atom of antimatter to be produced in the laboratory. This is not so unfortunate, as its matter equivalent, hydrogen, is one of the most well-understood and accurately measured systems in all of physics. It is thus very compelling to undertake experimental examinations of the structure of antihydrogen. As experimental spectroscopy of antihydrogen has yet to begin in earnest, I will give here a brief introduction to some of the ion and atom trap developments necessary for synthesizing and trapping antihydrogen, so that it can be studied.

  10. The Past, Present and Future of Cyber-Physical Systems: A Focus on Models

    PubMed Central

    Lee, Edward A.

    2015-01-01

    This paper is about better engineering of cyber-physical systems (CPSs) through better models. Deterministic models have historically proven extremely useful and arguably form the kingpin of the industrial revolution and the digital and information technology revolutions. Key deterministic models that have proven successful include differential equations, synchronous digital logic and single-threaded imperative programs. Cyber-physical systems, however, combine these models in such a way that determinism is not preserved. Two projects show that deterministic CPS models with faithful physical realizations are possible and practical. The first project is PRET, which shows that the timing precision of synchronous digital logic can be practically made available at the software level of abstraction. The second project is Ptides (programming temporally-integrated distributed embedded systems), which shows that deterministic models for distributed cyber-physical systems have practical faithful realizations. These projects are existence proofs that deterministic CPS models are possible and practical. PMID:25730486

  11. Accurate modeling of plasma acceleration with arbitrary order pseudo-spectral particle-in-cell methods

    DOE PAGES

    Jalas, S.; Dornmair, I.; Lehe, R.; ...

    2017-03-20

    Particle-in-Cell (PIC) simulations are a widely used tool for the investigation of both laser- and beam-driven plasma acceleration. It is a known issue that the beam quality can be artificially degraded by numerical Cherenkov radiation (NCR), resulting primarily from an incorrectly modeled dispersion relation. Pseudo-spectral solvers featuring infinite-order stencils can strongly reduce NCR - or even suppress it - and are therefore well suited to correctly model the beam properties. For efficient parallelization of the PIC algorithm, however, localized solvers are inevitable. Arbitrary-order pseudo-spectral methods provide this needed locality. Yet, these methods can again be prone to NCR. In this paper, we show that acceptably low solver orders are sufficient to correctly model the physics of interest, while allowing for parallel computation by domain decomposition.
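
    As an illustration of the dispersion issue described above (not the authors' solver), the short sketch below compares the effective wavenumber of second- and fourth-order centred-difference stencils against the exact pseudo-spectral value; the residual phase error of finite-order stencils is the kind of dispersion mismatch that seeds numerical Cherenkov radiation. All grid parameters are made up.

      # Compare modified wavenumbers of finite-order stencils with the exact
      # spectral value k; the gap is the dispersion error behind NCR.
      import numpy as np

      dx = 1.0
      k = np.linspace(1e-3, np.pi / dx, 200)        # resolvable wavenumbers

      k_exact = k                                   # pseudo-spectral (infinite order)
      k_2nd = np.sin(k * dx) / dx                   # 2nd-order centred difference
      k_4th = (8 * np.sin(k * dx) - np.sin(2 * k * dx)) / (6 * dx)  # 4th-order

      for name, keff in [("2nd order", k_2nd), ("4th order", k_4th)]:
          err = np.max(np.abs(keff - k_exact) / k_exact)
          print(f"{name}: max relative phase error {err:.2%}")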

  12. Physics and financial economics (1776-2014): puzzles, Ising and agent-based models.

    PubMed

    Sornette, Didier

    2014-06-01

    This short review presents a selected history of the mutual fertilization between physics and economics--from Isaac Newton and Adam Smith to the present. The fundamentally different perspectives embraced in theories developed in financial economics compared with physics are dissected with the examples of the volatility smile and of the excess volatility puzzle. The role of the Ising model of phase transitions to model social and financial systems is reviewed, with the concepts of random utilities and the logit model as the analog of the Boltzmann factor in statistical physics. Recent extensions in terms of quantum decision theory are also covered. A wealth of models are discussed briefly that build on the Ising model and generalize it to account for the many stylized facts of financial markets. A summary of the relevance of the Ising model and its extensions is provided to account for financial bubbles and crashes. The review would be incomplete if it did not cover the dynamical field of agent-based models (ABMs), also known as computational economic models, of which the Ising-type models are just special ABM implementations. We formulate the 'Emerging Intelligence Market Hypothesis' to reconcile the pervasive presence of 'noise traders' with the near efficiency of financial markets. Finally, we note that evolutionary biology, more than physics, is now playing a growing role to inspire models of financial markets.

  13. Physics and financial economics (1776-2014): puzzles, Ising and agent-based models

    NASA Astrophysics Data System (ADS)

    Sornette, Didier

    2014-06-01

    This short review presents a selected history of the mutual fertilization between physics and economics—from Isaac Newton and Adam Smith to the present. The fundamentally different perspectives embraced in theories developed in financial economics compared with physics are dissected with the examples of the volatility smile and of the excess volatility puzzle. The role of the Ising model of phase transitions to model social and financial systems is reviewed, with the concepts of random utilities and the logit model as the analog of the Boltzmann factor in statistical physics. Recent extensions in terms of quantum decision theory are also covered. A wealth of models are discussed briefly that build on the Ising model and generalize it to account for the many stylized facts of financial markets. A summary of the relevance of the Ising model and its extensions is provided to account for financial bubbles and crashes. The review would be incomplete if it did not cover the dynamical field of agent-based models (ABMs), also known as computational economic models, of which the Ising-type models are just special ABM implementations. We formulate the ‘Emerging Intelligence Market Hypothesis’ to reconcile the pervasive presence of ‘noise traders’ with the near efficiency of financial markets. Finally, we note that evolutionary biology, more than physics, is now playing a growing role to inspire models of financial markets.
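
    The analogy drawn in the review between the logit choice model and the Boltzmann factor can be made concrete with a minimal sketch; the utilities and "temperature" (noise level) below are arbitrary illustrative values.

      # Logit choice probabilities have the Boltzmann/softmax form, with the
      # decision-noise parameter playing the role of temperature.
      import numpy as np

      def logit_choice(utilities, temperature=1.0):
          """P(i) = exp(U_i / T) / sum_j exp(U_j / T)."""
          u = np.asarray(utilities, dtype=float) / temperature
          u -= u.max()                  # numerical stability
          w = np.exp(u)
          return w / w.sum()

      print(logit_choice([1.0, 0.5, -0.2], temperature=0.5))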

  14. Statistical-physical model of the hydraulic conductivity

    NASA Astrophysics Data System (ADS)

    Usowicz, B.; Marczewski, W.; Usowicz, J. B.; Lukowski, M. I.

    2012-04-01

    The water content of the unsaturated subsurface soil layer is determined by the exchange of mass and energy between the soil, the atmosphere, and the individual members of the layered medium. These media are generally non-homogeneous on different scales with respect to soil porosity and texture, the presence of vegetation elements in the root zone, the canopy above the surface, and the clustered, varying biomass density of plants above the surface. This heterogeneity determines the statistically effective values of particular physical properties. This work considers mainly those properties which determine the hydraulic conductivity of soil. Hydraulic conductivity is needed to characterize water transfer in the root zone and the access of plants to nutrients, but it also determines the water capacity at the field scale. The temporal variability of forcing conditions and the evolving vegetation strongly affect the water capacity at large scales, driving the evolution of water conditions over the entire area between the extremes of flood and drought. The dynamics of this evolution are largely controlled by vegetation but remain hard to predict. Hydrological models must be fed with input data describing the hydraulic properties of the porous soil, which are provided in this paper by means of a statistical-physical model of hydraulic conductivity. The statistical-physical model was determined for soils typical of the Euroregion Bug, Eastern Poland. The model is calibrated against direct field-scale measurements and yields typical water retention characteristics through retention curves relating the hydraulic conductivity to the state of water saturation of the soil. The values of the hydraulic conductivity in two reference states are used for calibrating the model. One is close to full saturation, and another is for low water content far

  15. A Biomechanical Model of the Scapulothoracic Joint to Accurately Capture Scapular Kinematics during Shoulder Movements

    PubMed Central

    Seth, Ajay; Matias, Ricardo; Veloso, António P.; Delp, Scott L.

    2016-01-01

    The complexity of shoulder mechanics combined with the movement of skin relative to the scapula makes it difficult to measure shoulder kinematics with sufficient accuracy to distinguish between symptomatic and asymptomatic individuals. Multibody skeletal models can improve motion capture accuracy by reducing the space of possible joint movements, and models are used widely to improve measurement of lower limb kinematics. In this study, we developed a rigid-body model of a scapulothoracic joint to describe the kinematics of the scapula relative to the thorax. This model describes scapular kinematics with four degrees of freedom: 1) elevation and 2) abduction of the scapula on an ellipsoidal thoracic surface, 3) upward rotation of the scapula normal to the thoracic surface, and 4) internal rotation of the scapula to lift the medial border of the scapula off the surface of the thorax. The surface dimensions and joint axes can be customized to match an individual’s anthropometry. We compared the model to “gold standard” bone-pin kinematics collected during three shoulder tasks and found modeled scapular kinematics to be accurate to within 2mm root-mean-squared error for individual bone-pin markers across all markers and movement tasks. As an additional test, we added random and systematic noise to the bone-pin marker data and found that the model reduced kinematic variability due to noise by 65% compared to Euler angles computed without the model. Our scapulothoracic joint model can be used for inverse and forward dynamics analyses and to compute joint reaction loads. The computational performance of the scapulothoracic joint model is well suited for real-time applications; it is freely available for use with OpenSim 3.2, and is customizable and usable with other OpenSim models. PMID:26734761

  16. A Biomechanical Model of the Scapulothoracic Joint to Accurately Capture Scapular Kinematics during Shoulder Movements.

    PubMed

    Seth, Ajay; Matias, Ricardo; Veloso, António P; Delp, Scott L

    2016-01-01

    The complexity of shoulder mechanics combined with the movement of skin relative to the scapula makes it difficult to measure shoulder kinematics with sufficient accuracy to distinguish between symptomatic and asymptomatic individuals. Multibody skeletal models can improve motion capture accuracy by reducing the space of possible joint movements, and models are used widely to improve measurement of lower limb kinematics. In this study, we developed a rigid-body model of a scapulothoracic joint to describe the kinematics of the scapula relative to the thorax. This model describes scapular kinematics with four degrees of freedom: 1) elevation and 2) abduction of the scapula on an ellipsoidal thoracic surface, 3) upward rotation of the scapula normal to the thoracic surface, and 4) internal rotation of the scapula to lift the medial border of the scapula off the surface of the thorax. The surface dimensions and joint axes can be customized to match an individual's anthropometry. We compared the model to "gold standard" bone-pin kinematics collected during three shoulder tasks and found modeled scapular kinematics to be accurate to within 2 mm root-mean-squared error for individual bone-pin markers across all markers and movement tasks. As an additional test, we added random and systematic noise to the bone-pin marker data and found that the model reduced kinematic variability due to noise by 65% compared to Euler angles computed without the model. Our scapulothoracic joint model can be used for inverse and forward dynamics analyses and to compute joint reaction loads. The computational performance of the scapulothoracic joint model is well suited for real-time applications; it is freely available for use with OpenSim 3.2, and is customizable and usable with other OpenSim models.

  17. A generic framework for individual-based modelling and physical-biological interaction

    PubMed Central

    2018-01-01

    The increased availability of high-resolution ocean data globally has enabled more detailed analyses of physical-biological interactions and their consequences for the ecosystem. We present IBMlib, which is a versatile, portable and computationally effective framework for conducting Lagrangian simulations in the marine environment. The purpose of the framework is to handle complex individual-level biological models of organisms, combined with realistic 3D oceanographic models of physics and biogeochemistry describing the environment of the organisms, without assumptions about spatial or temporal scales. The open-source framework features a minimal robust interface to facilitate the coupling between individual-level biological models and oceanographic models, and we provide application examples including forward/backward simulations, habitat connectivity calculations, assessing ocean conditions, comparison of physical circulation models, model ensemble runs and, recently, posterior Eulerian simulations using the IBMlib framework. We present the code design ideas behind the longevity of the code, our implementation experiences, as well as code performance benchmarking. The framework may contribute substantially to progress in representing, understanding, predicting and eventually managing marine ecosystems. PMID:29351280
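
    A minimal sketch of the kind of Lagrangian update such a framework performs is given below; the analytic velocity field and the explicit Euler step are stand-ins (real couplings interpolate a circulation-model field and typically use Runge-Kutta schemes), and none of the names refer to the IBMlib API.

      # Toy Lagrangian advection: particles are moved by an interpolated
      # (here, made-up analytic) ocean velocity field.
      import numpy as np

      def velocity(x, y, t):
          # Stand-in for a velocity field sampled from an ocean model.
          return np.array([-np.sin(y), np.cos(x)])

      def advect(pos, t, dt):
          """One explicit Euler step; production codes usually use RK schemes."""
          return pos + dt * velocity(pos[0], pos[1], t)

      particles = [np.array([0.1, 0.2]), np.array([1.0, -0.5])]
      for step in range(10):
          particles = [advect(p, step * 0.1, 0.1) for p in particles]
      print(particles)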

  18. Antenna modeling considerations for accurate SAR calculations in human phantoms in close proximity to GSM cellular base station antennas.

    PubMed

    van Wyk, Marnus J; Bingle, Marianne; Meyer, Frans J C

    2005-09-01

    International bodies such as the International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the Institute of Electrical and Electronics Engineers (IEEE) make provision for human exposure assessment based on SAR calculations (or measurements) and basic restrictions. In the case of base station exposure, this is mostly applicable to occupational exposure scenarios in the very near field of these antennas where the conservative reference level criteria could be unnecessarily restrictive. This study presents a variety of critical aspects that need to be considered when calculating SAR in a human body close to a mobile phone base station antenna. A hybrid FEM/MoM technique is proposed as a suitable numerical method to obtain accurate results. The verification of the FEM/MoM implementation has been presented in a previous publication; the focus of this study is an investigation into the detail that must be included in a numerical model of the antenna, to accurately represent the real-world scenario. This is accomplished by comparing numerical results to measurements for a generic GSM base station antenna and appropriate, representative canonical and human phantoms. The results show that it is critical to take the disturbance effect of the human phantom (a large conductive body) on the base station antenna into account when the antenna-phantom spacing is less than 300 mm. For these small spacings, the antenna structure must be modeled in detail. The conclusion is that it is feasible to calculate, using the proposed techniques and methodology, accurate occupational compliance zones around base station antennas based on a SAR profile and basic restriction guidelines. (c) 2005 Wiley-Liss, Inc.

  19. Coincidental match of numerical simulation and physics

    NASA Astrophysics Data System (ADS)

    Pierre, B.; Gudmundsson, J. S.

    2010-08-01

    Consequences of rapid pressure transients in pipelines range from increased fatigue to leakages and complete ruptures of the pipeline. Therefore, accurate predictions of rapid pressure transients in pipelines using numerical simulations are critical. State-of-the-art modelling of pressure transients in general, and water hammer in particular, includes unsteady friction in addition to the steady frictional pressure drop, and numerical simulations rely on the method of characteristics. Comparison of rapid pressure transient calculations by the method of characteristics and a selected high-resolution finite volume method highlights issues related to the modelling of pressure waves and illustrates that matches between numerical simulations and physics are purely coincidental.
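
    For context, the classical one-dimensional water-hammer equations that the method of characteristics integrates are sketched below in their standard textbook form (piezometric head H, flow rate Q, pressure-wave speed a, cross-sectional area A, diameter D, Darcy friction factor f); this is the generic formulation, not necessarily the exact model used in the paper.

      \frac{\partial H}{\partial t} + \frac{a^{2}}{g A}\,\frac{\partial Q}{\partial x} = 0,
      \qquad
      \frac{\partial Q}{\partial t} + g A\,\frac{\partial H}{\partial x} + \frac{f\,Q\,|Q|}{2 D A} = 0

    The method of characteristics integrates the compatibility relations obtained from these equations along the characteristic lines dx/dt = ±a.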

  20. Individual Differences in Boys' and Girls' Timing and Tempo of Puberty: Modeling Development with Nonlinear Growth Models

    ERIC Educational Resources Information Center

    Marceau, Kristine; Ram, Nilam; Houts, Renate M.; Grimm, Kevin J.; Susman, Elizabeth J.

    2011-01-01

    Pubertal development is a nonlinear process progressing from prepubescent beginnings through biological, physical, and psychological changes to full sexual maturity. To tether theoretical concepts of puberty with sophisticated longitudinal, analytical models capable of articulating pubertal development more accurately, we used nonlinear…

  1. A simple physical model for forest fire spread

    Treesearch

    E. Koo; P. Pagni; J. Woycheese; S. Stephens; D. Weise; J. Huff

    2005-01-01

    Based on energy conservation and detailed heat transfer mechanisms, a simple physical model for fire spread is presented for the limit of one-dimensional steady-state contiguous spread of a line fire in a thermally-thin uniform porous fuel bed. The solution for the fire spread rate is found as an eigenvalue from this model with appropriate boundary conditions through a...

  2. Accurate Magnetometer/Gyroscope Attitudes Using a Filter with Correlated Sensor Noise

    NASA Technical Reports Server (NTRS)

    Sedlak, J.; Hashmall, J.

    1997-01-01

    Magnetometers and gyroscopes have been shown to provide very accurate attitudes for a variety of spacecraft. These results have been obtained, however, using a batch-least-squares algorithm and long periods of data. For use in onboard applications, attitudes are best determined using sequential estimators such as the Kalman filter. When a filter is used to determine attitudes using magnetometer and gyroscope data for input, the resulting accuracy is limited by both the sensor accuracies and errors inherent in the Earth magnetic field model. The Kalman filter accounts for the random component by modeling the magnetometer and gyroscope errors as white noise processes. However, even when these tuning parameters are physically realistic, the rate biases (included in the state vector) have been found to show systematic oscillations. These are attributed to the field model errors. If the gyroscope noise is sufficiently small, the tuned filter 'memory' will be long compared to the orbital period. In this case, the variations in the rate bias induced by field model errors are substantially reduced. Mistuning the filter to have a short memory time leads to strongly oscillating rate biases and increased attitude errors. To reduce the effect of the magnetic field model errors, these errors are estimated within the filter and used to correct the reference model. An exponentially-correlated noise model is used to represent the filter estimate of the systematic error. Results from several test cases using in-flight data from the Compton Gamma Ray Observatory are presented. These tests emphasize magnetometer errors, but the method is generally applicable to any sensor subject to a combination of random and systematic noise.
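
    The exponentially-correlated noise mentioned above is commonly realized as a discrete first-order Gauss-Markov process; the sketch below shows that generic construction with made-up correlation time and standard deviation, not the mission-specific tuning.

      # Discrete first-order Gauss-Markov (exponentially correlated) process,
      # the usual model for slowly varying systematic errors.
      import numpy as np

      def simulate_gauss_markov(n_steps, dt, tau, sigma, seed=0):
          rng = np.random.default_rng(seed)
          phi = np.exp(-dt / tau)              # state-transition factor
          q = sigma**2 * (1.0 - phi**2)        # driving-noise variance
          x = np.zeros(n_steps)
          for k in range(1, n_steps):
              x[k] = phi * x[k - 1] + rng.normal(scale=np.sqrt(q))
          return x

      b = simulate_gauss_markov(n_steps=5000, dt=10.0, tau=3000.0, sigma=1e-3)
      print(b.std())                           # approaches sigma for long runs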

  3. Characterizing, modeling, and addressing gender disparities in introductory college physics

    NASA Astrophysics Data System (ADS)

    Kost-Smith, Lauren Elizabeth

    2011-12-01

    The underrepresentation and underperformance of females in physics has been well documented and has long concerned policy-makers, educators, and the physics community. In this thesis, we focus on gender disparities in the first- and second-semester introductory, calculus-based physics courses at the University of Colorado. Success in these courses is critical for future study and careers in physics (and other sciences). Using data gathered from roughly 10,000 undergraduate students, we identify and model gender differences in the introductory physics courses in three areas: student performance, retention, and psychological factors. We observe gender differences on several measures in the introductory physics courses: females are less likely to take a high school physics course than males and have lower standardized mathematics test scores; males outscore females on both pre- and post-course conceptual physics surveys and in-class exams; and males have more expert-like attitudes and beliefs about physics than females. These background differences of males and females account for 60% to 70% of the gender gap that we observe on a post-course survey of conceptual physics understanding. In analyzing underlying psychological factors of learning, we find that female students report lower self-confidence related to succeeding in the introductory courses (self-efficacy) and are less likely to report seeing themselves as a "physics person". Students' self-efficacy beliefs are significant predictors of their performance, even when measures of physics and mathematics background are controlled, and account for an additional 10% of the gender gap. Informed by results from these studies, we implemented and tested a psychological, self-affirmation intervention aimed at enhancing female students' performance in Physics 1. Self-affirmation reduced the gender gap in performance on both in-class exams and the post-course conceptual physics survey. Further, the benefit of the self

  4. Application of experiential learning model using simple physical kit to increase attitude toward physics student senior high school in fluid

    NASA Astrophysics Data System (ADS)

    Johari, A. H.; Muslim

    2018-05-01

    An experiential learning model using a simple physics kit has been implemented to obtain a picture of the improvement in senior high school students' attitudes toward physics in the topic of fluids. This study aims to obtain a description of the increase in senior high school students' attitudes toward physics. The research method used was a quasi-experiment with a non-equivalent pretest-posttest control group design. Two tenth-grade classes were involved in this research: an experiment class of 28 students and a control class of 26 students. The increase in students' attitudes toward physics is calculated using an attitude scale consisting of 18 questions. The experimental class showed an average of 86.5%, meeting the criterion of almost all students improving, while the control class showed 53.75%, meeting the criterion of half of the students. This result shows that the experiential learning model using a simple physics kit can improve attitudes toward physics compared to experiential learning without a simple physics kit.

  5. Parallel Monte Carlo transport modeling in the context of a time-dependent, three-dimensional multi-physics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Procassini, R.J.

    1997-12-31

    The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
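
    The spatial-parallelism idea can be illustrated with a toy serial sketch: each subdomain owns its particles and, after a step, particles that cross a boundary are placed in an outgoing buffer for the neighbouring domain. A real implementation such as the one described would use MPI message passing; the two-domain setup and random walk below are purely illustrative.

      # Toy domain decomposition for particle transport with explicit
      # "message" buffers between the two subdomains.
      import random

      DOMAIN_EDGES = [0.0, 0.5, 1.0]                 # two subdomains on [0, 1]

      def owner(x):
          return 0 if x < DOMAIN_EDGES[1] else 1

      domains = {0: [0.1, 0.2, 0.4], 1: [0.6, 0.9]}  # particle positions

      for _ in range(5):                             # a few "time steps"
          outgoing = {0: [], 1: []}
          for d, particles in domains.items():
              kept = []
              for x in particles:
                  x = min(max(x + random.uniform(-0.1, 0.1), 0.0), 1.0)
                  (kept if owner(x) == d else outgoing[1 - d]).append(x)
              domains[d] = kept
          for d in domains:                          # deliver the messages
              domains[d].extend(outgoing[d])

      print({d: len(p) for d, p in domains.items()})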

  6. Problem Solving: Physics Modeling-Based Interactive Engagement

    ERIC Educational Resources Information Center

    Ornek, Funda

    2009-01-01

    The purpose of this study was to investigate how modeling-based instruction combined with an interactive-engagement teaching approach promotes students' problem solving abilities. I focused on students in a calculus-based introductory physics course, based on the matter and interactions curriculum of Chabay & Sherwood (2002) at a large state…

  7. Modeling discourse management compared to other classroom management styles in university physics

    NASA Astrophysics Data System (ADS)

    Desbien, Dwain Michael

    2002-01-01

    A classroom management technique called modeling discourse management was developed to enhance the modeling theory of physics. Modeling discourse management is a student-centered management that focuses on the epistemology of science. Modeling discourse is social constructivist in nature and was designed to encourage students to present classroom material to each other. In modeling discourse management, the instructor's primary role is of questioner rather than provider of knowledge. Literature is presented that helps validate the components of modeling discourse. Modeling discourse management was compared to other classroom management styles using multiple measures. Both regular and honors university physics classes were investigated. This style of management was found to enhance student understanding of forces, problem-solving skills, and student views of science compared to traditional classroom management styles for both honors and regular students. Compared to other reformed physics classrooms, modeling discourse classes performed as well or better on student understanding of forces. Outside evaluators viewed modeling discourse classes to be reformed, and it was determined that modeling discourse could be effectively disseminated.

  8. Graphene growth process modeling: a physical-statistical approach

    NASA Astrophysics Data System (ADS)

    Wu, Jian; Huang, Qiang

    2014-09-01

    As a zero-bandgap semiconductor, graphene is an attractive material for a wide variety of applications such as optoelectronics. Among various techniques developed for graphene synthesis, chemical vapor deposition on copper foils shows high potential for producing few-layer and large-area graphene. Since the fabrication of high-quality graphene sheets requires an understanding of growth mechanisms, and methods for characterizing and controlling the grain size of graphene flakes, analytical modeling of the graphene growth process is essential for controlled fabrication. The graphene growth process starts with randomly nucleated islands that gradually develop into complex shapes, grow in size, and eventually connect together to cover the copper foil. To model this complex process, we develop a physical-statistical approach under the assumption of self-similarity during graphene growth. The growth kinetics is uncovered by separating island shapes from area growth rate. We propose to characterize the area growth velocity using a confined exponential model, which not only has a clear physical explanation but also fits the real data well. For the shape modeling, we develop a parametric shape model which can be well explained by the angular-dependent growth rate. This work can provide useful information for the control and optimization of the graphene growth process on Cu foil.
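
    A confined exponential of the kind described, in which the covered area grows quickly at first and then saturates, can be sketched in a few lines; the saturation area and time constant below are made-up values, not fitted parameters from the study.

      # Confined (saturating) exponential growth of island area.
      import numpy as np

      def island_area(t, a_max=100.0, tau=30.0):
          """Area approaches a_max as islands grow and coalesce."""
          return a_max * (1.0 - np.exp(-t / tau))

      t = np.linspace(0.0, 120.0, 7)
      print(np.round(island_area(t), 1))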

  9. Modelling the hydraulic conductivity of porous media using physical-statistical model

    NASA Astrophysics Data System (ADS)

    Usowicz, B.; Usowicz, L. B.; Lipiec, J.

    2009-04-01

    Soils and other porous media can be represented by a pattern (net) of more or less cylindrical, interconnected channels. The capillary radius r can represent an elementary capillary formed between soil particles in one case, and a mean hydrodynamic radius in another. When we view a porous medium as a net of interconnected capillaries, we can apply a statistical approach to the description of liquid or gas flow. A soil phase is included in the porous medium and its configuration is decisive for the pore distribution in this medium; hence, it conditions the course of the water retention curve of this medium. In this work, a method of estimating the hydraulic conductivity of porous media based on the physical-statistical model proposed by B. Usowicz is presented. The physical-statistical model considers the pore space as a capillary net. The net of capillary connections is represented by parallel and serial connections of hydraulic resistors within a layer and between the layers, respectively. The polynomial distribution was used in this model to determine the probability of the occurrence of a given capillary configuration. The model was calibrated using a measured water retention curve and two values of hydraulic conductivity (saturated and unsaturated), and the model parameters were determined. The model was used for predicting hydraulic conductivity as a function of soil water content, K(θ). The model was validated by comparing the measured and predicted K data for various soils and other porous media (e.g. sandstone). Agreement between measured and predicted data was good, as indicated by R² values (>0.9). It was also confirmed that the random variables used for the calculations and the model parameters were chosen and selected correctly. The study was funded in part by the Polish Ministry of Science and Higher Education under Grant No. N305 046 31/1707.
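
    The parallel-within-layer, serial-between-layers composition of hydraulic resistors described above can be sketched directly; the conductance values are arbitrary and the probabilistic weighting by capillary configuration used in the actual model is omitted.

      # Capillary conductances add in parallel within a layer; layer
      # resistances add in series across the stack.
      def effective_conductance(layers):
          """layers: list of lists of capillary conductances, one list per layer."""
          layer_conductance = [sum(caps) for caps in layers]          # parallel
          total_resistance = sum(1.0 / g for g in layer_conductance)  # serial
          return 1.0 / total_resistance

      layers = [[0.2, 0.5, 0.1], [0.3, 0.3], [0.6, 0.05, 0.05]]
      print(effective_conductance(layers))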

  10. A Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images

    NASA Astrophysics Data System (ADS)

    Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei

    2018-04-01

    Topographic correction of surface reflectance in rugged terrain is a prerequisite for the quantitative application of remote sensing in mountainous areas. A physics-based radiative transfer model can be applied to correct the topographic effect and accurately retrieve the reflectance of the slope surface from high-quality satellite images such as Landsat 8 OLI. However, as more and more image data become available from a variety of sensors, the accurate sensor calibration parameters and atmospheric conditions needed by the physics-based topographic correction model sometimes cannot be obtained. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images that does not require accurate calibration parameters. Based on this model, topographically corrected surface reflectance can be obtained from DN data; we tested and verified the model with image data from the Chinese HJ and GF satellites. The results show that, after correction, the correlation factor was reduced by almost 85 % for the near-infrared bands and the overall classification accuracy increased by 14 % for HJ. The reflectance difference between slopes facing toward and away from the sun was also reduced after correction.
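
    One widely used semi-empirical correction, shown here purely for orientation and not necessarily the model proposed in the paper, is the C-correction: reflectance is regressed on the cosine of the local illumination angle and then rescaled. The synthetic data below stand in for per-pixel reflectance and illumination geometry.

      # C-correction sketch: rho_corr = rho * (cos(sz) + c) / (cos_i + c),
      # with c estimated from a linear fit of rho against cos_i.
      import numpy as np

      def c_correction(reflectance, cos_i, solar_zenith_deg):
          cos_sz = np.cos(np.radians(solar_zenith_deg))
          slope, intercept = np.polyfit(cos_i, reflectance, 1)
          c = intercept / slope
          return reflectance * (cos_sz + c) / (cos_i + c)

      rng = np.random.default_rng(1)
      cos_i = rng.uniform(0.2, 1.0, 500)                    # synthetic geometry
      rho = 0.3 * cos_i + 0.05 + rng.normal(0, 0.01, 500)   # synthetic reflectance
      print(c_correction(rho, cos_i, solar_zenith_deg=35.0).mean())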

  11. Synthetic Earthquake Statistics From Physical Fault Models for the Lower Rhine Embayment

    NASA Astrophysics Data System (ADS)

    Brietzke, G. B.; Hainzl, S.; Zöller, G.

    2012-04-01

    As of today, seismic risk and hazard estimates mostly use pure empirical, stochastic models of earthquake fault systems tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates they fail to provide a link between the observed seismicity and the underlying physical processes. Solving a state-of-the-art fully dynamic description set of all relevant physical processes related to earthquake fault systems is likely not useful since it comes with a large number of degrees of freedom, poor constraints on its model parameters and a huge computational effort. Here, quasi-static and quasi-dynamic physical fault simulators provide a compromise between physical completeness and computational affordability and aim at providing a link between basic physical concepts and statistics of seismicity. Within the framework of quasi-static and quasi-dynamic earthquake simulators we investigate a model of the Lower Rhine Embayment (LRE) that is based upon seismological and geological data. We present and discuss statistics of the spatio-temporal behavior of generated synthetic earthquake catalogs with respect to simplification (e.g. simple two-fault cases) as well as to complication (e.g. hidden faults, geometric complexity, heterogeneities of constitutive parameters).

  12. Reaction Wheel Disturbance Model Extraction Software - RWDMES

    NASA Technical Reports Server (NTRS)

    Blaurock, Carl

    2009-01-01

    The RWDMES is a tool for modeling the disturbances imparted on spacecraft by spinning reaction wheels. Reaction wheels are usually the largest disturbance source on a precision pointing spacecraft, and can be the dominating source of pointing error. Accurate knowledge of the disturbance environment is critical to accurate prediction of the pointing performance. In the past, it has been difficult to extract an accurate wheel disturbance model since the forcing mechanisms are difficult to model physically, and the forcing amplitudes are filtered by the dynamics of the reaction wheel. RWDMES captures the wheel-induced disturbances using a hybrid physical/empirical model that is extracted directly from measured forcing data. The empirical models capture the tonal forces that occur at harmonics of the spin rate, and the broadband forces that arise from random effects. The empirical forcing functions are filtered by a physical model of the wheel structure that includes spin-rate-dependent moments (gyroscopic terms). The resulting hybrid model creates a highly accurate prediction of wheel-induced forces. It accounts for variation in disturbance frequency, as well as the shifts in structural amplification by the whirl modes, as the spin rate changes. This software provides a point-and-click environment for producing accurate models with minimal user effort. Where conventional approaches may take weeks to produce a model of variable quality, RWDMES can create a demonstrably high accuracy model in two hours. The software consists of a graphical user interface (GUI) that enables the user to specify all analysis parameters, to evaluate analysis results and to iteratively refine the model. Underlying algorithms automatically extract disturbance harmonics, initialize and tune harmonic models, and initialize and tune broadband noise models. The component steps are described in the RWDMES user s guide and include: converting time domain data to waterfall PSDs (power spectral
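
    A common empirical form for the tonal part of wheel disturbances, shown here as a generic sketch rather than the RWDMES extraction itself, scales harmonic force amplitudes with the square of the wheel speed and adds a broadband noise floor; all coefficients below are made up.

      # Generic tonal-plus-broadband reaction wheel force model.
      import numpy as np

      def wheel_force(t, speed_hz, harmonics, coeffs, broadband_sigma, seed=0):
          rng = np.random.default_rng(seed)
          omega = 2.0 * np.pi * speed_hz
          tonal = sum(c * speed_hz**2 * np.sin(h * omega * t)
                      for h, c in zip(harmonics, coeffs))
          return tonal + rng.normal(scale=broadband_sigma, size=t.shape)

      t = np.linspace(0.0, 1.0, 2000)
      f = wheel_force(t, speed_hz=50.0, harmonics=[1.0, 2.0, 5.2],
                      coeffs=[1e-4, 3e-5, 2e-5], broadband_sigma=0.05)
      print(f.std())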

  13. A knowledge-based potential with an accurate description of local interactions improves discrimination between native and near-native protein conformations.

    PubMed

    Ferrada, Evandro; Vergara, Ismael A; Melo, Francisco

    2007-01-01

    The correct discrimination between native and near-native protein conformations is essential for achieving accurate computer-based protein structure prediction. However, this has proven to be a difficult task, since currently available physical energy functions, empirical potentials and statistical scoring functions are still limited in achieving this goal consistently. In this work, we assess and compare the ability of different full atom knowledge-based potentials to discriminate between native protein structures and near-native protein conformations generated by comparative modeling. Using a benchmark of 152 near-native protein models and their corresponding native structures that encompass several different folds, we demonstrate that the incorporation of close non-bonded pairwise atom terms improves the discriminating power of the empirical potentials. Since the direct and unbiased derivation of close non-bonded terms from current experimental data is not possible, we obtained and used those terms from the corresponding pseudo-energy functions of a non-local knowledge-based potential. It is shown that this methodology significantly improves the discrimination between native and near-native protein conformations, suggesting that a proper description of close non-bonded terms is important to achieve a more complete and accurate description of native protein conformations. Some external knowledge-based energy functions that are widely used in model assessment performed poorly, indicating that the benchmark of models and the specific discrimination task tested in this work constitutes a difficult challenge.
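
    For orientation, knowledge-based pairwise terms of this general kind are usually obtained by inverse Boltzmann statistics; the generic form below is the standard construction and not necessarily the exact derivation used in this work.

      E_{ij}(d) = -k_{B} T \,\ln\frac{P_{\mathrm{obs}}(d \mid i, j)}{P_{\mathrm{ref}}(d)}

    Here P_obs is the distance distribution observed for atom types i and j in a database of native structures and P_ref is a reference distribution; close non-bonded terms correspond to the short-distance part of these statistics.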

  14. Kinetic exchange models: From molecular physics to social science

    NASA Astrophysics Data System (ADS)

    Patriarca, Marco; Chakraborti, Anirban

    2013-08-01

    We discuss several multi-agent models that have their origin in the kinetic exchange theory of statistical mechanics and have been recently applied to a variety of problems in the social sciences. This class of models can be easily adapted for simulations in areas other than physics, such as the modeling of income and wealth distributions in economics and opinion dynamics in sociology.
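
    The simplest member of this class of models can be sketched in a few lines: a randomly chosen pair of agents pools its wealth and splits it at random, which conserves total wealth and relaxes toward a Boltzmann-Gibbs-like distribution. Population size and step count below are arbitrary.

      # Basic kinetic wealth-exchange model (no saving propensity).
      import random

      rng = random.Random(0)

      def exchange_step(wealth):
          i, j = rng.sample(range(len(wealth)), 2)
          eps = rng.random()
          total = wealth[i] + wealth[j]
          wealth[i], wealth[j] = eps * total, (1.0 - eps) * total

      wealth = [1.0] * 1000
      for _ in range(100_000):
          exchange_step(wealth)
      print(min(wealth), max(wealth), sum(wealth))   # total is conserved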

  15. The Mathematics of High School Physics: Models, Symbols, Algorithmic Operations and Meaning

    ERIC Educational Resources Information Center

    Kanderakis, Nikos

    2016-01-01

    In the seventeenth and eighteenth centuries, mathematicians and physical philosophers managed to study, via mathematics, various physical systems of the sublunar world through idealized and simplified models of these systems, constructed with the help of geometry. By analyzing these models, they were able to formulate new concepts, laws and…

  16. A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina

    PubMed Central

    Maturana, Matias I.; Apollo, Nicholas V.; Hadjinicolaou, Alex E.; Garrett, David J.; Cloherty, Shaun L.; Kameneva, Tatiana; Grayden, David B.; Ibbotson, Michael R.; Meffin, Hamish

    2016-01-01

    Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron’s electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy. PMID:27035143
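
    The linear-nonlinear structure described above can be sketched with synthetic data: stimulation patterns are projected onto a low-dimensional subspace found by principal components analysis, and a sigmoid of the projection gives a spiking probability. The stimuli, subspace dimension, weights and bias below are all made up.

      # Linear-nonlinear sketch: PCA projection followed by a sigmoid.
      import numpy as np

      rng = np.random.default_rng(0)
      stimuli = rng.normal(size=(500, 20))       # 500 patterns on 20 electrodes

      centered = stimuli - stimuli.mean(axis=0)
      _, _, vt = np.linalg.svd(centered, full_matrices=False)
      subspace = vt[:2]                          # 2-D "electrical receptive field"

      def spike_probability(stimulus, weights=np.array([1.5, -0.7]), bias=-0.2):
          z = weights @ (subspace @ stimulus) + bias   # linear stage
          return 1.0 / (1.0 + np.exp(-z))              # nonlinear stage

      print(spike_probability(stimuli[0]))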

  17. Guided-Inquiry Experiments for Physical Chemistry: The POGIL-PCL Model

    ERIC Educational Resources Information Center

    Hunnicutt, Sally S.; Grushow, Alexander; Whitnell, Robert

    2015-01-01

    The POGIL-PCL project implements the principles of process-oriented, guided-inquiry learning (POGIL) in order to improve student learning in the physical chemistry laboratory (PCL) course. The inquiry-based physical chemistry experiments being developed emphasize modeling of chemical phenomena. In each experiment, students work through at least…

  18. Experimental Validation of Various Temperature Models for Semi-Physical Tyre Model Approaches

    NASA Astrophysics Data System (ADS)

    Hackl, Andreas; Scherndl, Christoph; Hirschberg, Wolfgang; Lex, Cornelia

    2017-10-01

    With the increasing level of complexity and automation in automotive engineering, the simulation of safety-relevant Advanced Driver Assistance Systems (ADAS) leads to increasing accuracy demands in the description of tyre contact forces. In recent years, with improvements in tyre simulation, the need to account for tyre temperature and the resulting changes in tyre characteristics has risen significantly. Therefore, experimental validation of three different temperature model approaches is carried out, discussed and compared in the scope of this article. To evaluate the range of application of the presented approaches with respect to further implementation in semi-physical tyre models, the main focus lies on a physical parameterisation. Aside from good modelling accuracy, the focus is on computational time and the complexity of the parameterisation process. To evaluate this process and discuss the results, measurements of a Hoosier racing tyre 6.0 / 18.0 10 LCO C2000 from an industrial flat test bench are used. Finally, the simulation results are compared with the measurement data.

  19. Bifactor Approach to Modeling Multidimensionality of Physical Self-Perception Profile

    ERIC Educational Resources Information Center

    Chung, ChihMing; Liao, Xiaolan; Song, Hairong; Lee, Taehun

    2016-01-01

    The multi-dimensionality of Physical Self-Perception Profile (PSPP) has been acknowledged by the use of correlated-factor model and second-order model. In this study, the authors critically endorse the bifactor model, as a substitute to address the multi-dimensionality of PSPP. To cross-validate the models, analyses are conducted first in…

  20. Agent-Based Models in Social Physics

    NASA Astrophysics Data System (ADS)

    Quang, Le Anh; Jung, Nam; Cho, Eun Sung; Choi, Jae Han; Lee, Jae Woo

    2018-06-01

    We review agent-based models (ABMs) in social physics, including econophysics. An ABM consists of agents, a system space, and an external environment. Each agent is autonomous and decides its behavior by interacting with its neighbors or with the external environment according to rules of behavior. Agents are irrational because they have only limited information when they make decisions. They adapt by learning from past memories. Agents have various attributes and are heterogeneous. An ABM is a non-equilibrium complex system that exhibits various emergence phenomena. Social-complexity ABMs describe human behavioral characteristics. In ABMs of econophysics, we introduce the Sugarscape model and artificial market models. We review minority games and majority games in ABMs of game theory. Social-flow ABMs cover crowding, evacuation, traffic congestion, and pedestrian dynamics. We also review ABMs for opinion dynamics and the voter model. We discuss the features, advantages, and disadvantages of NetLogo, Repast, Swarm, and Mason, which are representative platforms for implementing ABMs.
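
    One of the simplest ABMs mentioned above, the voter model, can be sketched in a few lines: at each update a random agent copies the opinion of a random neighbour on a ring. Lattice size and step count are arbitrary.

      # Voter model on a ring.
      import random

      rng = random.Random(0)
      n = 200
      opinions = [rng.choice([0, 1]) for _ in range(n)]

      for _ in range(20_000):
          i = rng.randrange(n)
          neighbour = (i + rng.choice([-1, 1])) % n
          opinions[i] = opinions[neighbour]

      print(sum(opinions) / n)   # fraction holding opinion 1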