NASA Technical Reports Server (NTRS)
Krueger, Ronald; Paris, Isbelle L.; OBrien, T. Kevin; Minguet, Pierre J.
2004-01-01
The influence of two-dimensional finite element modeling assumptions on the debonding prediction for skin-stiffener specimens was investigated. Geometrically nonlinear finite element analyses using two-dimensional plane-stress and plane-strain elements as well as three different generalized plane strain type approaches were performed. The computed skin and flange strains, transverse tensile stresses and energy release rates were compared to results obtained from three-dimensional simulations. The study showed that for strains and energy release rate computations the generalized plane strain assumptions yielded results closest to the full three-dimensional analysis. For computed transverse tensile stresses the plane stress assumption gave the best agreement. Based on this study it is recommended that results from plane stress and plane strain models be used as upper and lower bounds. The results from generalized plane strain models fall between the results obtained from plane stress and plane strain models. Two-dimensional models may also be used to qualitatively evaluate the stress distribution in a ply and the variation of energy release rates and mixed mode ratios with delamination length. For more accurate predictions, however, a three-dimensional analysis is required.
NASA Technical Reports Server (NTRS)
Krueger, Ronald; Minguet, Pierre J.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The influence of two-dimensional finite element modeling assumptions on the debonding prediction for skin-stiffener specimens was investigated. Geometrically nonlinear finite element analyses using two-dimensional plane-stress and plane strain elements as well as three different generalized plane strain type approaches were performed. The computed deflections, skin and flange strains, transverse tensile stresses and energy release rates were compared to results obtained from three-dimensional simulations. The study showed that for strains and energy release rate computations the generalized plane strain assumptions yielded results closest to the full three-dimensional analysis. For computed transverse tensile stresses the plane stress assumption gave the best agreement. Based on this study it is recommended that results from plane stress and plane strain models be used as upper and lower bounds. The results from generalized plane strain models fall between the results obtained from plane stress and plane strain models. Two-dimensional models may also be used to qualitatively evaluate the stress distribution in a ply and the variation of energy release rates and mixed mode ratios with lamination length. For more accurate predictions, however, a three-dimensional analysis is required.
Plant uptake of elements in soil and pore water: field observations versus model assumptions.
Raguž, Veronika; Jarsjö, Jerker; Grolander, Sara; Lindborg, Regina; Avila, Rodolfo
2013-09-15
Contaminant concentrations in various edible plant parts transfer hazardous substances from polluted areas to animals and humans. Thus, the accurate prediction of plant uptake of elements is of significant importance. The processes involved contain many interacting factors and are, as such, complex. In contrast, the most common way to currently quantify element transfer from soils into plants is relatively simple, using an empirical soil-to-plant transfer factor (TF). This practice is based on theoretical assumptions that have been previously shown to not generally be valid. Using field data on concentrations of 61 basic elements in spring barley, soil and pore water at four agricultural sites in mid-eastern Sweden, we quantify element-specific TFs. Our aim is to investigate to which extent observed element-specific uptake is consistent with TF model assumptions and to which extent TF's can be used to predict observed differences in concentrations between different plant parts (root, stem and ear). Results show that for most elements, plant-ear concentrations are not linearly related to bulk soil concentrations, which is congruent with previous studies. This behaviour violates a basic TF model assumption of linearity. However, substantially better linear correlations are found when weighted average element concentrations in whole plants are used for TF estimation. The highest number of linearly-behaving elements was found when relating average plant concentrations to soil pore-water concentrations. In contrast to other elements, essential elements (micronutrients and macronutrients) exhibited relatively small differences in concentration between different plant parts. Generally, the TF model was shown to work reasonably well for micronutrients, whereas it did not for macronutrients. The results also suggest that plant uptake of elements from sources other than the soil compartment (e.g. from air) may be non-negligible. Copyright © 2013 Elsevier Ltd. All rights reserved.
Application of Probability Methods to Assess Crash Modeling Uncertainty
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Stockwell, Alan E.; Hardy, Robin C.
2003-01-01
Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as: geometrically accurate models; human occupant models; and advanced material models to include nonlinear stress-strain behaviors, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the effects of finite element modeling assumptions on the predicted responses. The vertical drop test of a Fokker F28 fuselage section will be the focus of this paper. The results of a probabilistic analysis using finite element simulations will be compared with experimental data.
Application of Probability Methods to Assess Crash Modeling Uncertainty
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Stockwell, Alan E.; Hardy, Robin C.
2007-01-01
Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as: geometrically accurate models; human occupant models; and advanced material models to include nonlinear stress-strain behaviors, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the effects of finite element modeling assumptions on the predicted responses. The vertical drop test of a Fokker F28 fuselage section will be the focus of this paper. The results of a probabilistic analysis using finite element simulations will be compared with experimental data.
NASA Astrophysics Data System (ADS)
Fontaine, G.; Dufour, P.; Chayer, P.; Dupuis, J.; Brassard, P.
2015-06-01
The accretion-diffusion picture is the model par excellence for describing the presence of planetary debris polluting the atmospheres of relatively cool white dwarfs. Inferences on the process based on diffusion timescale arguments make the implicit assumption that the concentration gradient of a given metal at the base of the convection zone is negligible. This assumption is, in fact, not rigorously valid, but it allows the decoupling of the surface abundance from the evolving distribution of a given metal in deeper layers. A better approach is a full time-dependent calculation of the evolution of the abundance profile of an accreting-diffusing element. We used the same approach as that developed by Dupuis et al. to model accretion episodes involving many more elements than those considered by these authors. Our calculations incorporate the improvements to diffusion physics mentioned in Paper I. The basic assumption in the Dupuis et al. approach is that the accreted metals are trace elements, i.e, that they have no effects on the background (DA or non-DA) stellar structure. This allows us to consider an arbitrary number of accreting elements.
The influence of computational assumptions on analysing abdominal aortic aneurysm haemodynamics.
Ene, Florentina; Delassus, Patrick; Morris, Liam
2014-08-01
The variation in computational assumptions for analysing abdominal aortic aneurysm haemodynamics can influence the desired output results and computational cost. Such assumptions for abdominal aortic aneurysm modelling include static/transient pressures, steady/transient flows and rigid/compliant walls. Six computational methods and these various assumptions were simulated and compared within a realistic abdominal aortic aneurysm model with and without intraluminal thrombus. A full transient fluid-structure interaction was required to analyse the flow patterns within the compliant abdominal aortic aneurysms models. Rigid wall computational fluid dynamics overestimates the velocity magnitude by as much as 40%-65% and the wall shear stress by 30%-50%. These differences were attributed to the deforming walls which reduced the outlet volumetric flow rate for the transient fluid-structure interaction during the majority of the systolic phase. Static finite element analysis accurately approximates the deformations and von Mises stresses when compared with transient fluid-structure interaction. Simplifying the modelling complexity reduces the computational cost significantly. In conclusion, the deformation and von Mises stress can be approximately found by static finite element analysis, while for compliant models a full transient fluid-structure interaction analysis is required for acquiring the fluid flow phenomenon. © IMechE 2014.
Masterlark, Timothy
2003-01-01
Dislocation models can simulate static deformation caused by slip along a fault. These models usually take the form of a dislocation embedded in a homogeneous, isotropic, Poisson-solid half-space (HIPSHS). However, the widely accepted HIPSHS assumptions poorly approximate subduction zone systems of converging oceanic and continental crust. This study uses three-dimensional finite element models (FEMs) that allow for any combination (including none) of the HIPSHS assumptions to compute synthetic Green's functions for displacement. Using the 1995 Mw = 8.0 Jalisco-Colima, Mexico, subduction zone earthquake and associated measurements from a nearby GPS array as an example, FEM-generated synthetic Green's functions are combined with standard linear inverse methods to estimate dislocation distributions along the subduction interface. Loading a forward HIPSHS model with dislocation distributions, estimated from FEMs that sequentially relax the HIPSHS assumptions, yields the sensitivity of predicted displacements to each of the HIPSHS assumptions. For the subduction zone models tested and the specific field situation considered, sensitivities to the individual Poisson-solid, isotropy, and homogeneity assumptions can be substantially greater than GPS. measurement uncertainties. Forward modeling quantifies stress coupling between the Mw = 8.0 earthquake and a nearby Mw = 6.3 earthquake that occurred 63 days later. Coulomb stress changes predicted from static HIPSHS models cannot account for the 63-day lag time between events. Alternatively, an FEM that includes a poroelastic oceanic crust, which allows for postseismic pore fluid pressure recovery, can account for the lag time. The pore fluid pressure recovery rate puts an upper limit of 10-17 m2 on the bulk permeability of the oceanic crust. Copyright 2003 by the American Geophysical Union.
Sensitivity analysis of pars-tensa young's modulus estimation using inverse finite-element modeling
NASA Astrophysics Data System (ADS)
Rohani, S. Alireza; Elfarnawany, Mai; Agrawal, Sumit K.; Ladak, Hanif M.
2018-05-01
Accurate estimates of the pars-tensa (PT) Young's modulus (EPT) are required in finite-element (FE) modeling studies of the middle ear. Previously, we introduced an in-situ EPT estimation technique by optimizing a sample-specific FE model to match experimental eardrum pressurization data. This optimization process requires choosing some modeling assumptions such as PT thickness and boundary conditions. These assumptions are reported with a wide range of variation in the literature, hence affecting the reliability of the models. In addition, the sensitivity of the estimated EPT to FE modeling assumptions has not been studied. Therefore, the objective of this study is to identify the most influential modeling assumption on EPT estimates. The middle-ear cavity extracted from a cadaveric temporal bone was pressurized to 500 Pa. The deformed shape of the eardrum after pressurization was measured using a Fourier transform profilometer (FTP). A base-line FE model of the unpressurized middle ear was created. The EPT was estimated using golden section optimization method, which minimizes the cost function comparing the deformed FE model shape to the measured shape after pressurization. The effect of varying the modeling assumptions on EPT estimates were investigated. This included the change in PT thickness, pars flaccida Young's modulus and possible FTP measurement error. The most influential parameter on EPT estimation was PT thickness and the least influential parameter was pars flaccida Young's modulus. The results of this study provide insight into how different parameters affect the results of EPT optimization and which parameters' uncertainties require further investigation to develop robust estimation techniques.
Walmsley, Christopher W; McCurry, Matthew R; Clausen, Phillip D; McHenry, Colin R
2013-01-01
Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be 'reasonable' are often assumed to have little influence on the results and their interpretation. HERE WE REPORT AN EXTENSIVE SENSITIVITY ANALYSIS WHERE HIGH RESOLUTION FINITE ELEMENT (FE) MODELS OF MANDIBLES FROM SEVEN SPECIES OF CROCODILE WERE ANALYSED UNDER LOADS TYPICAL FOR COMPARATIVE ANALYSIS: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results.
McCurry, Matthew R.; Clausen, Phillip D.; McHenry, Colin R.
2013-01-01
Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be ‘reasonable’ are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results. PMID:24255817
Quantification of colloidal and aqueous element transfer in soils: The dual-phase mass balance model
Bern, Carleton R.; Thompson, Aaron; Chadwick, Oliver A.
2015-01-01
Mass balance models have become standard tools for characterizing element gains and losses and volumetric change during weathering and soil development. However, they rely on the assumption of complete immobility for an index element such as Ti or Zr. Here we describe a dual-phase mass balance model that eliminates the need for an assumption of immobility and in the process quantifies the contribution of aqueous versus colloidal element transfer. In the model, the high field strength elements Ti and Zr are assumed to be mobile only as suspended solids (colloids) and can therefore be used to distinguish elemental redistribution via colloids from redistribution via dissolved aqueous solutes. Calculations are based upon element concentrations in soil, parent material, and colloids dispersed from soil in the laboratory. We illustrate the utility of this model using a catena in South Africa. Traditional mass balance models systematically distort elemental gains and losses and changes in soil volume in this catena due to significant redistribution of Zr-bearing colloids. Applying the dual-phase model accounts for this colloidal redistribution and we find that the process accounts for a substantial portion of the major element (e.g., Al, Fe and Si) loss from eluvial soil. In addition, we find that in illuvial soils along this catena, gains of colloidal material significantly offset aqueous elemental loss. In other settings, processes such as accumulation of exogenous dust can mimic the geochemical effects of colloid redistribution and we suggest strategies for distinguishing between the two. The movement of clays and colloidal material is a major process in weathering and pedogenesis; the mass balance model presented here is a tool for quantifying effects of that process over time scales of soil development.
Large Angle Transient Dynamics (LATDYN) user's manual
NASA Technical Reports Server (NTRS)
Abrahamson, A. Louis; Chang, Che-Wei; Powell, Michael G.; Wu, Shih-Chin; Bingel, Bradford D.; Theophilos, Paula M.
1991-01-01
A computer code for modeling the large angle transient dynamics (LATDYN) of structures was developed to investigate techniques for analyzing flexible deformation and control/structure interaction problems associated with large angular motions of spacecraft. This type of analysis is beyond the routine capability of conventional analytical tools without simplifying assumptions. In some instances, the motion may be sufficiently slow and the spacecraft (or component) sufficiently rigid to simplify analyses of dynamics and controls by making pseudo-static and/or rigid body assumptions. The LATDYN introduces a new approach to the problem by combining finite element structural analysis, multi-body dynamics, and control system analysis in a single tool. It includes a type of finite element that can deform and rotate through large angles at the same time, and which can be connected to other finite elements either rigidly or through mechanical joints. The LATDYN also provides symbolic capabilities for modeling control systems which are interfaced directly with the finite element structural model. Thus, the nonlinear equations representing the structural model are integrated along with the equations representing sensors, processing, and controls as a coupled system.
Advance finite element modeling of rotor blade aeroelasticity
NASA Technical Reports Server (NTRS)
Straub, F. K.; Sangha, K. B.; Panda, B.
1994-01-01
An advanced beam finite element has been developed for modeling rotor blade dynamics and aeroelasticity. This element is part of the Element Library of the Second Generation Comprehensive Helicopter Analysis System (2GCHAS). The element allows modeling of arbitrary rotor systems, including bearingless rotors. It accounts for moderately large elastic deflections, anisotropic properties, large frame motion for maneuver simulation, and allows for variable order shape functions. The effects of gravity, mechanically applied and aerodynamic loads are included. All kinematic quantities required to compute airloads are provided. In this paper, the fundamental assumptions and derivation of the element matrices are presented. Numerical results are shown to verify the formulation and illustrate several features of the element.
A case for poroelasticity in skeletal muscle finite element analysis: experiment and modeling.
Wheatley, Benjamin B; Odegard, Gregory M; Kaufman, Kenton R; Haut Donahue, Tammy L
2017-05-01
Finite element models of skeletal muscle typically ignore the biphasic nature of the tissue, associating any time dependence with a viscoelastic formulation. In this study, direct experimental measurement of permeability was conducted as a function of specimen orientation and strain. A finite element model was developed to identify how various permeability formulations affect compressive response of the tissue. Experimental and modeling results suggest the assumption of a constant, isotropic permeability is appropriate. A viscoelastic only model differed considerably from a visco-poroelastic model, suggesting the latter is more appropriate for compressive studies.
Modeling and Control of Intelligent Flexible Structures
1994-03-26
can be approximated as a simply supported beam in transverse vibration. Assuming that the Euler- Bernoulli beam assumptions hold, linear equations of...The assumptions made during the derivation are that the element can be modeled as an Euler- Bernoulli beam, that the cross-section is symmetric, and...parametes A,. and ,%. andc input maces 3,,. The closed loop system. ecuation (7), is stable when the 3.. 8 and output gain mantices H1., H., H. for
Effect of Shear Deformation and Continuity on Delamination Modelling with Plate Elements
NASA Technical Reports Server (NTRS)
Glaessgen, E. H.; Riddell, W. T.; Raju, I. S.
1998-01-01
The effects of several critical assumptions and parameters on the computation of strain energy release rates for delamination and debond configurations modeled with plate elements have been quantified. The method of calculation is based on the virtual crack closure technique (VCCT), and models that model the upper and lower surface of the delamination or debond with two-dimensional (2D) plate elements rather than three-dimensional (3D) solid elements. The major advantages of the plate element modeling technique are a smaller model size and simpler geometric modeling. Specific issues that are discussed include: constraint of translational degrees of freedom, rotational degrees of freedom or both in the neighborhood of the crack tip; element order and assumed shear deformation; and continuity of material properties and section stiffness in the vicinity of the debond front, Where appropriate, the plate element analyses are compared with corresponding two-dimensional plane strain analyses.
Brand, Richard A; Stanford, Clark M; Swan, Colby C
2003-01-01
Joint implant design clearly affects long-term outcome. While many implant designs have been empirically-based, finite element analysis has the potential to identify beneficial and deleterious features prior to clinical trials. Finite element analysis is a powerful analytic tool allowing computation of the stress and strain distribution throughout an implant construct. Whether it is useful depends upon many assumptions and details of the model. Since ultimate failure is related to biological factors in addition to mechanical, and since the mechanical causes of failure are related to load history, rather than a few loading conditions, chief among them is whether the stresses or strains under limited loading conditions relate to outcome. Newer approaches can minimize this and the many other model limitations. If the surgeon is to critically and properly interpret the results in scientific articles and sales literature, he or she must have a fundamental understanding of finite element analysis. We outline here the major capabilities of finite element analysis, as well as the assumptions and limitations. PMID:14575244
CZAEM USER'S GUIDE: MODELING CAPTURE ZONES OF GROUND-WATER WELLS USING ANALYTIC ELEMENTS
The computer program CZAEM is designed for elementary capture zone analysis, and is based on the analytic element method. CZAEM is applicable to confined and/or unconfined low in shallow aquifers; the Dupuit-Forchheimer assumption is adopted. CZAEM supports the following analyt...
NASA Astrophysics Data System (ADS)
Sotner, R.; Kartci, A.; Jerabek, J.; Herencsar, N.; Dostal, T.; Vrba, K.
2012-12-01
Several behavioral models of current active elements for experimental purposes are introduced in this paper. These models are based on commercially available devices. They are suitable for experimental tests of current- and mixed-mode filters, oscillators, and other circuits (employing current-mode active elements) frequently used in analog signal processing without necessity of onchip fabrication of proper active element. Several methods of electronic control of intrinsic resistance in the proposed behavioral models are discussed. All predictions and theoretical assumptions are supported by simulations and experiments. This contribution helps to find a cheaper and more effective way to preliminary laboratory tests without expensive on-chip fabrication of special active elements.
ERIC Educational Resources Information Center
Slisko, Josip; Cruz, Adrian Corona
2013-01-01
There is a general agreement that critical thinking is an important element of 21st century skills. Although critical thinking is a very complex and controversial conception, many would accept that recognition and evaluation of assumptions is a basic critical-thinking process. When students use simple mathematical model to reason quantitatively…
ERIC Educational Resources Information Center
Galindo, Gabriel E.; Peterson, Sean D.; Erath, Byron D.; Castro, Christian; Hillman, Robert E.; Zañartu, Matías
2017-01-01
Purpose: Our goal was to test prevailing assumptions about the underlying biomechanical and aeroacoustic mechanisms associated with phonotraumatic lesions of the vocal folds using a numerical lumped-element model of voice production. Method: A numerical model with a triangular glottis, posterior glottal opening, and arytenoid posturing is…
MODELING MULTICOMPONENT ORGANIC CHEMICAL TRANSPORT IN THREE-FLUID-PHASE POROUS MEDIA
A two dimensional finite-element model was developed to predict coupled transient flow and multicomponent transport of organic chemicals which can partition between NAPL, water, gas and solid phases in porous media under the assumption of local chemical equilibrium. as-phase pres...
MODELING MULTICOMPONENT ORGANIC CHEMICAL TRANSPORT IN THREE FLUID PHASE POROUS MEDIA
A two-dimensional finite-element model was developed to predict coupled transient flow and multicomponent transport of organic chemicals which can partition between nonaqueous phase liquid, water, gas and solid phases in porous media under the assumption of local chemical equilib...
A case study to quantify prediction bounds caused by model-form uncertainty of a portal frame
NASA Astrophysics Data System (ADS)
Van Buren, Kendra L.; Hall, Thomas M.; Gonzales, Lindsey M.; Hemez, François M.; Anton, Steven R.
2015-01-01
Numerical simulations, irrespective of the discipline or application, are often plagued by arbitrary numerical and modeling choices. Arbitrary choices can originate from kinematic assumptions, for example the use of 1D beam, 2D shell, or 3D continuum elements, mesh discretization choices, boundary condition models, and the representation of contact and friction in the simulation. This work takes a step toward understanding the effect of arbitrary choices and model-form assumptions on the accuracy of numerical predictions. The application is the simulation of the first four resonant frequencies of a one-story aluminum portal frame structure under free-free boundary conditions. The main challenge of the portal frame structure resides in modeling the joint connections, for which different modeling assumptions are available. To study this model-form uncertainty, and compare it to other types of uncertainty, two finite element models are developed using solid elements, and with differing representations of the beam-to-column and column-to-base plate connections: (i) contact stiffness coefficients or (ii) tied nodes. Test-analysis correlation is performed to compare the lower and upper bounds of numerical predictions obtained from parametric studies of the joint modeling strategies to the range of experimentally obtained natural frequencies. The approach proposed is, first, to characterize the experimental variability of the joints by varying the bolt torque, method of bolt tightening, and the sequence in which the bolts are tightened. The second step is to convert what is learned from these experimental studies to models that "envelope" the range of observed bolt behavior. We show that this approach, that combines small-scale experiments, sensitivity analysis studies, and bounding-case models, successfully produces lower and upper bounds of resonant frequency predictions that match those measured experimentally on the frame structure. (Approved for unlimited, public release, LA-UR-13-27561).
A general consumer-resource population model
Lafferty, Kevin D.; DeLeo, Giulio; Briggs, Cheryl J.; Dobson, Andrew P.; Gross, Thilo; Kuris, Armand M.
2015-01-01
Food-web dynamics arise from predator-prey, parasite-host, and herbivore-plant interactions. Models for such interactions include up to three consumer activity states (questing, attacking, consuming) and up to four resource response states (susceptible, exposed, ingested, resistant). Articulating these states into a general model allows for dissecting, comparing, and deriving consumer-resource models. We specify this general model for 11 generic consumer strategies that group mathematically into predators, parasites, and micropredators and then derive conditions for consumer success, including a universal saturating functional response. We further show how to use this framework to create simple models with a common mathematical lineage and transparent assumptions. Underlying assumptions, missing elements, and composite parameters are revealed when classic consumer-resource models are derived from the general model.
NASA Technical Reports Server (NTRS)
Kenigsberg, I. J.; Dean, M. W.; Malatino, R.
1974-01-01
The correlation achieved with each program provides the material for a discussion of modeling techniques developed for general application to finite-element dynamic analyses of helicopter airframes. Included are the selection of static and dynamic degrees of freedom, cockpit structural modeling, and the extent of flexible-frame modeling in the transmission support region and in the vicinity of large cut-outs. The sensitivity of predicted results to these modeling assumptions are discussed. Both the Sikorsky Finite-Element Airframe Vibration analysis Program (FRAN/Vibration Analysis) and the NASA Structural Analysis Program (NASTRAN) have been correlated with data taken in full-scale vibration tests of a modified CH-53A helicopter.
Continuity properties of the semi-group and its integral kernel in non-relativistic QED
NASA Astrophysics Data System (ADS)
Matte, Oliver
2016-07-01
Employing recent results on stochastic differential equations associated with the standard model of non-relativistic quantum electrodynamics by B. Güneysu, J. S. Møller, and the present author, we study the continuity of the corresponding semi-group between weighted vector-valued Lp-spaces, continuity properties of elements in the range of the semi-group, and the pointwise continuity of an operator-valued semi-group kernel. We further discuss the continuous dependence of the semi-group and its integral kernel on model parameters. All these results are obtained for Kato decomposable electrostatic potentials and the actual assumptions on the model are general enough to cover the Nelson model as well. As a corollary, we obtain some new pointwise exponential decay and continuity results on elements of low-energetic spectral subspaces of atoms or molecules that also take spin into account. In a simpler situation where spin is neglected, we explain how to verify the joint continuity of positive ground state eigenvectors with respect to spatial coordinates and model parameters. There are no smallness assumptions imposed on any model parameter.
ERIC Educational Resources Information Center
Eisenkraft, Arthur
2003-01-01
Amends the current 5E learning cycle and instructional model to a 7E model. Changes ensure that instructors do not omit crucial elements for learning from their lessons while under the incorrect assumption that they are meeting the requirements of the learning cycle. The proposed 7E model includes: (1) engage; (2) explore; (3) explain; (4) elicit;…
The Impact of Modeling Assumptions in Galactic Chemical Evolution Models
NASA Astrophysics Data System (ADS)
Côté, Benoit; O'Shea, Brian W.; Ritter, Christian; Herwig, Falk; Venn, Kim A.
2017-02-01
We use the OMEGA galactic chemical evolution code to investigate how the assumptions used for the treatment of galactic inflows and outflows impact numerical predictions. The goal is to determine how our capacity to reproduce the chemical evolution trends of a galaxy is affected by the choice of implementation used to include those physical processes. In pursuit of this goal, we experiment with three different prescriptions for galactic inflows and outflows and use OMEGA within a Markov Chain Monte Carlo code to recover the set of input parameters that best reproduces the chemical evolution of nine elements in the dwarf spheroidal galaxy Sculptor. This provides a consistent framework for comparing the best-fit solutions generated by our different models. Despite their different degrees of intended physical realism, we found that all three prescriptions can reproduce in an almost identical way the stellar abundance trends observed in Sculptor. This result supports the similar conclusions originally claimed by Romano & Starkenburg for Sculptor. While the three models have the same capacity to fit the data, the best values recovered for the parameters controlling the number of SNe Ia and the strength of galactic outflows, are substantially different and in fact mutually exclusive from one model to another. For the purpose of understanding how a galaxy evolves, we conclude that only reproducing the evolution of a limited number of elements is insufficient and can lead to misleading conclusions. More elements or additional constraints such as the Galaxy’s star-formation efficiency and the gas fraction are needed in order to break the degeneracy between the different modeling assumptions. Our results show that the successes and failures of chemical evolution models are predominantly driven by the input stellar yields, rather than by the complexity of the Galaxy model itself. Simple models such as OMEGA are therefore sufficient to test and validate stellar yields. OMEGA is part of the NuGrid chemical evolution package and is publicly available online at http://nugrid.github.io/NuPyCEE.
Rolling Element Bearing Stiffness Matrix Determination (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Y.; Parker, R.
2014-01-01
Current theoretical bearing models differ in their stiffness estimates because of different model assumptions. In this study, a finite element/contact mechanics model is developed for rolling element bearings with the focus of obtaining accurate bearing stiffness for a wide range of bearing types and parameters. A combined surface integral and finite element method is used to solve for the contact mechanics between the rolling elements and races. This model captures the time-dependent characteristics of the bearing contact due to the orbital motion of the rolling elements. A numerical method is developed to determine the full bearing stiffness matrix corresponding tomore » two radial, one axial, and two angular coordinates; the rotation about the shaft axis is free by design. This proposed stiffness determination method is validated against experiments in the literature and compared to existing analytical models and widely used advanced computational methods. The fully-populated stiffness matrix demonstrates the coupling between bearing radial, axial, and tilting bearing deflections.« less
1984-12-30
as three dimensional, when the assumption is made that all SUTRA parameters and coefficients have a constant value in the third space direction. A...finite element. The type of element employed by SUTRA for two-dimensional simulation is a quadrilateral which has a finite thickness in the third ... space dimension. This type of a quad- rilateral element and a typical two-dimensional mesh is shown in Figure 3.1. - All twelve edges of the two
NASA Astrophysics Data System (ADS)
Landahl, M. T.
1984-08-01
The fundamental ideas behind Prandtl's famous mixing length theory are discussed in the light of newer findings from experimental and theoretical research on coherent turbulence structures in the region near solid walls. A simple theoretical model for 'flat' structures is used to examine the fundamental assumptions behind Prandtl's theory. The model is validated by comparisons with conditionally sampled velocity data obtained in recent channel flow experiments. Particular attention is given to the role of pressure fluctuations on the evolution of flat eddies. The validity of Prandtl's assumption that an element of fluid retains its streamwise momentum as it is moved around by turbulence is confirmed for flat eddies. It is demonstrated that spanwise pressure gradients give rise to a contribution to the vertical displacement of a fluid element which is proportional to the distance from the wall. This contribution is particularly important for eddies that are highly elongated in the streamwise direction.
ECOLOGICAL THEORY. A general consumer-resource population model.
Lafferty, Kevin D; DeLeo, Giulio; Briggs, Cheryl J; Dobson, Andrew P; Gross, Thilo; Kuris, Armand M
2015-08-21
Food-web dynamics arise from predator-prey, parasite-host, and herbivore-plant interactions. Models for such interactions include up to three consumer activity states (questing, attacking, consuming) and up to four resource response states (susceptible, exposed, ingested, resistant). Articulating these states into a general model allows for dissecting, comparing, and deriving consumer-resource models. We specify this general model for 11 generic consumer strategies that group mathematically into predators, parasites, and micropredators and then derive conditions for consumer success, including a universal saturating functional response. We further show how to use this framework to create simple models with a common mathematical lineage and transparent assumptions. Underlying assumptions, missing elements, and composite parameters are revealed when classic consumer-resource models are derived from the general model. Copyright © 2015, American Association for the Advancement of Science.
Comprehensive model for predicting elemental composition of coal pyrolysis products
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ricahrds, Andrew P.; Shutt, Tim; Fletcher, Thomas H.
Large-scale coal combustion simulations depend highly on the accuracy and utility of the physical submodels used to describe the various physical behaviors of the system. Coal combustion simulations depend on the particle physics to predict product compositions, temperatures, energy outputs, and other useful information. The focus of this paper is to improve the accuracy of devolatilization submodels, to be used in conjunction with other particle physics models. Many large simulations today rely on inaccurate assumptions about particle compositions, including that the volatiles that are released during pyrolysis are of the same elemental composition as the char particle. Another common assumptionmore » is that the char particle can be approximated by pure carbon. These assumptions will lead to inaccuracies in the overall simulation. There are many factors that influence pyrolysis product composition, including parent coal composition, pyrolysis conditions (including particle temperature history and heating rate), and others. All of these factors are incorporated into the correlations to predict the elemental composition of the major pyrolysis products, including coal tar, char, and light gases.« less
NASA Technical Reports Server (NTRS)
Delaney, J. S.
1994-01-01
Oxygen is the most abundant element in most meteorites, yet the ratios of its isotopes are seldom used to constrain the compositional history of achondrites. The two major achondrite groups have O isotope signatures that differ from any plausible chondritic precursors and lie between the ordinary and carbonaceous chondrite domains. If the assumption is made that the present global sampling of chondritic meteorites reflects the variability of O reservoirs at the time of planetessimal/planet aggregation in the early nebula, then the O in these groups must reflect mixing between known chondritic reservoirs. This approach, in combination with constraints based on Fe-Mn-Mg systematics, has been used previously to model the composition of the basaltic achondrite parent body (BAP) and provides a model precursor composition that is generally consistent with previous eucrite parent body (EPB) estimates. The same approach is applied to Mars exploiting the assumption that the SNC and related meteorites sample the martian lithosphere. Model planet and planetesimal compositions can be derived by mixing of known chondritic components using O isotope ratios as the fundamental compositional constraint. The major- and minor-element composition for Mars derived here and that derived previously for the basaltic achondrite parent body are, in many respects, compatible with model compositions generated using completely independent constraints. The role of volatile elements and alkalis in particular remains a major difficulty in applying such models.
Vibration Response Models of a Stiffened Aluminum Plate Excited by a Shaker
NASA Technical Reports Server (NTRS)
Cabell, Randolph H.
2008-01-01
Numerical models of structural-acoustic interactions are of interest to aircraft designers and the space program. This paper describes a comparison between two energy finite element codes, a statistical energy analysis code, a structural finite element code, and the experimentally measured response of a stiffened aluminum plate excited by a shaker. Different methods for modeling the stiffeners and the power input from the shaker are discussed. The results show that the energy codes (energy finite element and statistical energy analysis) accurately predicted the measured mean square velocity of the plate. In addition, predictions from an energy finite element code had the best spatial correlation with measured velocities. However, predictions from a considerably simpler, single subsystem, statistical energy analysis model also correlated well with the spatial velocity distribution. The results highlight a need for further work to understand the relationship between modeling assumptions and the prediction results.
Automated analysis in generic groups
NASA Astrophysics Data System (ADS)
Fagerholm, Edvard
This thesis studies automated methods for analyzing hardness assumptions in generic group models, following ideas of symbolic cryptography. We define a broad class of generic and symbolic group models for different settings---symmetric or asymmetric (leveled) k-linear groups --- and prove ''computational soundness'' theorems for the symbolic models. Based on this result, we formulate a master theorem that relates the hardness of an assumption to solving problems in polynomial algebra. We systematically analyze these problems identifying different classes of assumptions and obtain decidability and undecidability results. Then, we develop automated procedures for verifying the conditions of our master theorems, and thus the validity of hardness assumptions in generic group models. The concrete outcome is an automated tool, the Generic Group Analyzer, which takes as input the statement of an assumption, and outputs either a proof of its generic hardness or shows an algebraic attack against the assumption. Structure-preserving signatures are signature schemes defined over bilinear groups in which messages, public keys and signatures are group elements, and the verification algorithm consists of evaluating ''pairing-product equations''. Recent work on structure-preserving signatures studies optimality of these schemes in terms of the number of group elements needed in the verification key and the signature, and the number of pairing-product equations in the verification algorithm. While the size of keys and signatures is crucial for many applications, another aspect of performance is the time it takes to verify a signature. The most expensive operation during verification is the computation of pairings. However, the concrete number of pairings is not captured by the number of pairing-product equations considered in earlier work. We consider the question of what is the minimal number of pairing computations needed to verify structure-preserving signatures. We build an automated tool to search for structure-preserving signatures matching a template. Through exhaustive search we conjecture lower bounds for the number of pairings required in the Type~II setting and prove our conjecture to be true. Finally, our tool exhibits examples of structure-preserving signatures matching the lower bounds, which proves tightness of our bounds, as well as improves on previously known structure-preserving signature schemes.
How certain are the process parameterizations in our models?
NASA Astrophysics Data System (ADS)
Gharari, Shervan; Hrachowitz, Markus; Fenicia, Fabrizio; Matgen, Patrick; Razavi, Saman; Savenije, Hubert; Gupta, Hoshin; Wheater, Howard
2016-04-01
Environmental models are abstract simplifications of real systems. As a result, the elements of these models, including system architecture (structure), process parameterization and parameters inherit a high level of approximation and simplification. In a conventional model building exercise the parameter values are the only elements of a model which can vary while the rest of the modeling elements are often fixed a priori and therefore not subjected to change. Once chosen the process parametrization and model structure usually remains the same throughout the modeling process. The only flexibility comes from the changing parameter values, thereby enabling these models to reproduce the desired observation. This part of modeling practice, parameter identification and uncertainty, has attracted a significant attention in the literature during the last years. However what remains unexplored in our point of view is to what extent the process parameterization and system architecture (model structure) can support each other. In other words "Does a specific form of process parameterization emerge for a specific model given its system architecture and data while no or little assumption has been made about the process parameterization itself? In this study we relax the assumption regarding a specific pre-determined form for the process parameterizations of a rainfall/runoff model and examine how varying the complexity of the system architecture can lead to different or possibly contradictory parameterization forms than what would have been decided otherwise. This comparison implicitly and explicitly provides us with an assessment of how uncertain is our perception of model process parameterization in respect to the extent the data can support.
Improving finite element results in modeling heart valve mechanics.
Earl, Emily; Mohammadi, Hadi
2018-06-01
Finite element analysis is a well-established computational tool which can be used for the analysis of soft tissue mechanics. Due to the structural complexity of the leaflet tissue of the heart valve, the currently available finite element models do not adequately represent the leaflet tissue. A method of addressing this issue is to implement computationally expensive finite element models, characterized by precise constitutive models including high-order and high-density mesh techniques. In this study, we introduce a novel numerical technique that enhances the results obtained from coarse mesh finite element models to provide accuracy comparable to that of fine mesh finite element models while maintaining a relatively low computational cost. Introduced in this study is a method by which the computational expense required to solve linear and nonlinear constitutive models, commonly used in heart valve mechanics simulations, is reduced while continuing to account for large and infinitesimal deformations. This continuum model is developed based on the least square algorithm procedure coupled with the finite difference method adhering to the assumption that the components of the strain tensor are available at all nodes of the finite element mesh model. The suggested numerical technique is easy to implement, practically efficient, and requires less computational time compared to currently available commercial finite element packages such as ANSYS and/or ABAQUS.
Consumption of Mass Communication--Construction of a Model on Information Consumption Behaviour.
ERIC Educational Resources Information Center
Sepstrup, Preben
A general conceptual model on the consumption of information is introduced. Information as the output of the mass media is treated as a product, and a model on the consumption of this product is developed by merging elements from consumer behavior theory and mass communication theory. Chapter I gives basic assumptions about the individual and the…
Wavelet-based spectral finite element dynamic analysis for an axially moving Timoshenko beam
NASA Astrophysics Data System (ADS)
Mokhtari, Ali; Mirdamadi, Hamid Reza; Ghayour, Mostafa
2017-08-01
In this article, wavelet-based spectral finite element (WSFE) model is formulated for time domain and wave domain dynamic analysis of an axially moving Timoshenko beam subjected to axial pretension. The formulation is similar to conventional FFT-based spectral finite element (SFE) model except that Daubechies wavelet basis functions are used for temporal discretization of the governing partial differential equations into a set of ordinary differential equations. The localized nature of Daubechies wavelet basis functions helps to rule out problems of SFE model due to periodicity assumption, especially during inverse Fourier transformation and back to time domain. The high accuracy of WSFE model is then evaluated by comparing its results with those of conventional finite element and SFE results. The effects of moving beam speed and axial tensile force on vibration and wave characteristics, and static and dynamic stabilities of moving beam are investigated.
A quasi two-dimensional model for sound attenuation by the sonic crystals.
Gupta, A; Lim, K M; Chew, C H
2012-10-01
Sound propagation in the sonic crystal (SC) along the symmetry direction is modeled by sound propagation through a variable cross-sectional area waveguide. A one-dimensional (1D) model based on the Webster horn equation is used to obtain sound attenuation through the SC. This model is compared with two-dimensional (2D) finite element simulation and experiment. The 1D model prediction of frequency band for sound attenuation is found to be shifted by around 500 Hz with respect to the finite element simulation. The reason for this shift is due to the assumption involved in the 1D model. A quasi 2D model is developed for sound propagation through the waveguide. Sound pressure profiles from the quasi 2D model are compared with the finite element simulation and the 1D model. The result shows significant improvement over the 1D model and is in good agreement with the 2D finite element simulation. Finally, sound attenuation through the SC is computed based on the quasi 2D model and is found to be in good agreement with the finite element simulation. The quasi 2D model provides an improved method to calculate sound attenuation through the SC.
Toward a Model of Lifelong Education.
ERIC Educational Resources Information Center
Knowles, Malcolm S.
Some of the criticisms that have been leveled at the educational establishment by social analysts are discussed. It is suggested that one of the new realities is that education must be a lifelong process in order to avoid the catastrophe of human obsolescence. The assumptions and elements for a new model of education as a lifelong process are…
The Interiors of Jupiter and Saturn
NASA Astrophysics Data System (ADS)
Helled, Ravit
2018-05-01
Probing the interiors of the giant planets in our Solar System is not an easy task. This requires a set of observations combined with theoretical models that are used to infer the planetary composition and its depth dependence. The masses of Jupiter and Saturn are 318 and 96 Earth masses, respectively, and since a few decades, we know that they mostly consist of hydrogen and helium. It is the mass of heavy elements (all elements heavier than helium) that is not well determined, as well as its distribution within the planets. While the heavy elements are not the dominating materials in Jupiter and Saturn, they are the key for our understanding of their formation and evolution histories. The planetary internal structure is inferred to fit the available observational constraints including the planetary masses, radii, 1-bar temperatures, rotation rates, and gravitational fields. Then, using theoretical equations of states (EOSs) for hydrogen, helium, their mixtures, and heavier elements (typically rocks and/or ices), a structure model is developed. However, there is no unique solution for the planetary structure, and the results depend on the used EOSs and the model assumptions imposed by the modeler. Standard interior models of Jupiter and Saturn include three main regions: (1) the central region (core) that consists of heavy elements, (2) an inner metallic hydrogen envelope that is helium rich, and (3) an outer molecular hydrogen envelope depleted with helium. The distribution of heavy elements can be either homogenous or discontinuous between the two envelopes. Major model assumptions that can affect the derived internal structure include the number of layers, the heat transport mechanism within the planet (and its entropy), the nature of the core (compact vs. diluted), and the location/pressure where the envelopes are divided. Alternative structure models assume a less distinct division between the layers and/or a less non-homogenous distribution of the heavy elements. The fact that the behavior of hydrogen at high pressures and temperatures in not perfectly known, and that helium separates from hydrogen at the deep interior add sources of uncertainties to the interior model. Today, with accurate measurements of the gravitational fields of Jupiter and Saturn from the Juno and Cassini missions, structure models can be further constrained. At the same time, these measurements introduce new challenges and open question for planetary modelers.
A Finite Element Model of a White-Metzner Viscoelastic Polymer Extrudate.
1981-02-01
for Va as: Forward Difference: ail~ a WiX + Yl Jjxf + 2 1 jijX 2 + 1 32a~i jAy2 + 0i,j+l = i~j + Da~Ax 1 92a iA*2+ 1 32a *2 ’T -a .Ax + 75. h...and gyro element coincide, and 5. The rotor bearing structure is rigid. For a platform stabilized single degree of freedom gyro, these assumptions lead
Don S. Stone; Joseph E. Jakes; Jonathan Puthoff; Abdelmageed A. Elmustafa
2010-01-01
Finite element analysis is used to simulate cone indentation creep in materials across a wide range of hardness, strain rate sensitivity, and work-hardening exponent. Modeling reveals that the commonly held assumption of the hardness strain rate sensitivity (mΗ) equaling the flow stress strain rate sensitivity (mσ...
Spectral analysis method for detecting an element
Blackwood, Larry G [Idaho Falls, ID; Edwards, Andrew J [Idaho Falls, ID; Jewell, James K [Idaho Falls, ID; Reber, Edward L [Idaho Falls, ID; Seabury, Edward H [Idaho Falls, ID
2008-02-12
A method for detecting an element is described and which includes the steps of providing a gamma-ray spectrum which has a region of interest which corresponds with a small amount of an element to be detected; providing nonparametric assumptions about a shape of the gamma-ray spectrum in the region of interest, and which would indicate the presence of the element to be detected; and applying a statistical test to the shape of the gamma-ray spectrum based upon the nonparametric assumptions to detect the small amount of the element to be detected.
Motion and Stability of Saturated Soil Systems under Dynamic Loading.
1985-04-04
12 7.3 Experimental Verification of Theories ............................. 13 8. ADDITIONAL COMMENTS AND OTHER WORK, AT THE OHIO...theoretical/computational models. The continuing rsearch effort will extend and refine the theoretical models, allow for compressibility of soil as...motion of soil and water and, therefore, a correct theory of liquefaction should not include this assumption. Finite element methodologies have been
Estimating wildland fire rate of spread in a spatially nonuniform environment
Francis M Fujioka
1985-01-01
Estimating rate of fire spread is a key element in planning for effective fire control. Land managers use the Rothermel spread model, but the model assumptions are violated when fuel, weather, and topography are nonuniform. This paper compares three averaging techniques--arithmetic mean of spread rates, spread based on mean fuel conditions, and harmonic mean of spread...
NASA Technical Reports Server (NTRS)
Lin, Reng Rong; Palazzolo, A. B.; Kascak, A. F.; Montague, G.
1991-01-01
Theories and tests for incorporating piezoelectric pushers as actuator devices for active vibration control are discussed. It started from a simple model with the assumption of ideal pusher characteristics and progressed to electromechanical models with nonideal pushers. Effects on system stability due to the nonideal characteristics of piezoelectric pushers and other elements in the control loop were investigated.
A rationale for human operator pulsive control behavior
NASA Technical Reports Server (NTRS)
Hess, R. A.
1979-01-01
When performing tracking tasks which involve demanding controlled elements such as those with K/s-squared dynamics, the human operator often develops discrete or pulsive control outputs. A dual-loop model of the human operator is discussed, the dominant adaptive feature of which is the explicit appearance of an internal model of the manipulator-controlled element dynamics in an inner feedback loop. Using this model, a rationale for pulsive control behavior is offered which is based upon the assumption that the human attempts to reduce the computational burden associated with time integration of sensory inputs. It is shown that such time integration is a natural consequence of having an internal representation of the K/s-squared-controlled element dynamics in the dual-loop model. A digital simulation is discussed in which a modified form of the dual-loop model is shown to be capable of producing pulsive control behavior qualitively comparable to that obtained in experiment.
Housing flexibility effects on rotor stability
NASA Technical Reports Server (NTRS)
Davis, L. B.; Wolfe, E. A.; Beatty, R. F.
1985-01-01
Preliminary rotordynamic evaluations are performed with a housing stiffness assumption that is typically determined only after the hardware is built. In addressing rotor stability, a rigid housing assumption was shown to predict an instability at a lower spin speed than a comparable flexible housing analysis. This rigid housing assumption therefore provides a conservative estimate of the stability threshold speed. A flexible housing appears to act as an energy absorber and dissipated some of the destabilizing force. The fact that a flexible housing is usually asymmetric and considerably heavier than the rotor was related to this apparent increase in rotor stability. Rigid housing analysis is proposed as a valuable screening criteria and may save time and money in construction of elaborate housing finite element models for linear stability analyses.
Thermoviscoplastic response of thin plates subjected to intense local heating
NASA Technical Reports Server (NTRS)
Byrom, Ted G.; Allen, David H.; Thornton, Earl A.
1992-01-01
A finite element method is employed to investigate the thermoviscoplastic response of a half-cylinder to intense localized transient heating. Thermoviscoplastic material behavior is characterized by the Bodner-Partom constitutive model. Structure geometry is modeled with a three-dimensional assembly of CST-DKT plate elements incorporating the large deflection von Karman assumptions. The paper compares the results of a dynamic analysis with a quasi-static analysis for the half-cylinder structure with a step-function transient temperature loading similar to that which may be encountered with shock wave interference on a hypersonic leading edge.
NASA Astrophysics Data System (ADS)
Papanicolaou, Athanasios N.; Abban, Benjamin K. B.; Dermisis, Dimitrios C.; Giannopoulos, Christos P.; Flanagan, Dennis C.; Frankenberger, James R.; Wacha, Kenneth M.
2018-01-01
An improved modeling framework for capturing the effects of space and time-variant resistance to overland flow is developed for intensively managed landscapes. The framework builds on the WEPP model but it removes the limitations of the "equivalent" plane and time-invariant roughness assumption. The enhanced model therefore accounts for spatiotemporal changes in flow resistance along a hillslope due to changes in roughness, in profile curvature, and downslope variability. The model is used to quantify the degree of influence—from individual soil grains to aggregates, "isolated roughness elements," and vegetation—on overland flow characteristics under different storm magnitudes, downslope gradients, and profile curvatures. It was found that the net effects of land use change from vegetation to a bare surface resulted in hydrograph peaks that were up to 133% larger. Changes in hillslope profile curvature instead resulted in peak runoff rate changes that were only up to 16%. The stream power concept is utilized to develop a taxonomy that relates the influence of grains, isolated roughness elements, and vegetation, on overland flow under different storm magnitudes and hillslope gradients. Critical storm magnitudes and hillslope gradients were found beyond which the effects of these landscape attributes on the peak stream power were negligible. The results also highlight weaknesses of the space/time-invariant flow resistance assumption and demonstrate that assumptions on landscape terrain characteristics exert a strong control both on the shape and magnitude of hydrographs, with deviations reaching 65% in the peak runoff when space/time-variant resistance effects are ignored in some cases.
Compliance control with embedded neural elements
NASA Technical Reports Server (NTRS)
Venkataraman, S. T.; Gulati, S.
1992-01-01
The authors discuss a control approach that embeds the neural elements within a model-based compliant control architecture for robotic tasks that involve contact with unstructured environments. Compliance control experiments have been performed on actual robotics hardware to demonstrate the performance of contact control schemes with neural elements. System parameters were identified under the assumption that environment dynamics have a fixed nonlinear structure. A robotics research arm, placed in contact with a single degree-of-freedom electromechanical environment dynamics emulator, was commanded to move through a desired trajectory. The command was implemented by using a compliant control strategy.
An assessment of finite-element modeling techniques for thick-solid/thin-shell joints analysis
NASA Technical Reports Server (NTRS)
Min, J. B.; Androlake, S. G.
1993-01-01
The subject of finite-element modeling has long been of critical importance to the practicing designer/analyst who is often faced with obtaining an accurate and cost-effective structural analysis of a particular design. Typically, these two goals are in conflict. The purpose is to discuss the topic of finite-element modeling for solid/shell connections (joints) which are significant for the practicing modeler. Several approaches are currently in use, but frequently various assumptions restrict their use. Such techniques currently used in practical applications were tested, especially to see which technique is the most ideally suited for the computer aided design (CAD) environment. Some basic thoughts regarding each technique are also discussed. As a consequence, some suggestions based on the results are given to lead reliable results in geometrically complex joints where the deformation and stress behavior are complicated.
Modelling volumetric growth in a thick walled fibre reinforced artery
NASA Astrophysics Data System (ADS)
Eriksson, T. S. E.; Watton, P. N.; Luo, X. Y.; Ventikos, Y.
2014-12-01
A novel framework for simulating growth and remodelling (G&R) of a fibre-reinforced artery, including volumetric adaption, is proposed. We show how to implement this model into a finite element framework and propose and examine two underlying assumptions for modelling growth, namely constant individual density (CID) or adaptive individual density (AID). Moreover, we formulate a novel approach which utilises a combination of both AID and CID to simulate volumetric G&R for a tissue composed of several different constituents. We consider a special case of the G&R of an artery subjected to prescribed elastin degradation and we theorise on the assumptions and suitability of CID, AID and the mixed approach for modelling arterial biology. For simulating the volumetric changes that occur during aneurysm enlargement, we observe that it is advantageous to describe the growth of collagen using CID whilst it is preferable to model the atrophy of elastin using AID.
Useful global-change scenarios: current issues and challenges
NASA Astrophysics Data System (ADS)
Parson, E. A.
2008-10-01
Scenarios are increasingly used to inform global-change debates, but their connection to decisions has been weak and indirect. This reflects the greater number and variety of potential users and scenario needs, relative to other decision domains where scenario use is more established. Global-change scenario needs include common elements, e.g., model-generated projections of emissions and climate change, needed by many users but in different ways and with different assumptions. For these common elements, the limited ability to engage diverse global-change users in scenario development requires extreme transparency in communicating underlying reasoning and assumptions, including probability judgments. Other scenario needs are specific to users, requiring a decentralized network of scenario and assessment organizations to disseminate and interpret common elements and add elements requiring local context or expertise. Such an approach will make global-change scenarios more useful for decisions, but not less controversial. Despite predictable attacks, scenario-based reasoning is necessary for responsible global-change decisions because decision-relevant uncertainties cannot be specified scientifically. The purpose of scenarios is not to avoid speculation, but to make the required speculation more disciplined, more anchored in relevant scientific knowledge when available, and more transparent.
Analysis of a homemade Edison tinfoil phonograph.
Sagers, Jason D; McNeese, Andrew R; Lenhart, Richard D; Wilson, Preston S
2012-10-01
Thomas Edison's phonograph was a landmark acoustic invention. In this paper, the phonograph is presented as a tool for education in acoustics. A brief history of the phonograph is outlined and an analogous circuit model that describes its dynamic response is discussed. Microphone and scanning laser Doppler vibrometer (SLDV) measurements were made on a homemade phonograph for model validation and inversion for unknown model parameters. SLDV measurements also conclusively illustrate where model assumptions are violated. The model elements which dominate the dynamic response are discussed.
Finite element analysis of thrust angle contact ball slewing bearing
NASA Astrophysics Data System (ADS)
Deng, Biao; Guo, Yuan; Zhang, An; Tang, Shengjin
2017-12-01
In view of the large heavy slewing bearing no longer follows the rigid ring hupothesis under the load condition, the entity finite element model of thrust angular contact ball bearing was established by using finite element analysis software ANSYS. The boundary conditions of the model were set according to the actual condition of slewing bearing, the internal stress state of the slewing bearing was obtained by solving and calculation, and the calculated results were compared with the numerical results based on the rigid ring assumption. The results show that more balls are loaded in the result of finite element method, and the maximum contact stresses between the ball and raceway have some reductions. This is because the finite element method considers the ferrule as an elastic body. The ring will produce structure deformation in the radial plane when the heavy load slewing bearings are subjected to external loads. The results of the finite element method are more in line with the actual situation of the slewing bearing in the engineering.
Dynamic Assessment and Its Implications for RTI Models
ERIC Educational Resources Information Center
Wagner, Richard K.; Compton, Donald L.
2011-01-01
Dynamic assessment refers to assessment that combines elements of instruction for the purpose of learning something about an individual that cannot be learned as easily or at all from conventional assessment. The origins of dynamic assessment can be traced to Thorndike (1924), Rey (1934), and Vygotsky (1962), who shared three basic assumptions.…
Elements and elasmobranchs: hypotheses, assumptions and limitations of elemental analysis.
McMillan, M N; Izzo, C; Wade, B; Gillanders, B M
2017-02-01
Quantifying the elemental composition of elasmobranch calcified cartilage (hard parts) has the potential to answer a range of ecological and biological questions, at both the individual and population level. Few studies, however, have employed elemental analyses of elasmobranch hard parts. This paper provides an overview of the range of applications of elemental analysis in elasmobranchs, discussing the assumptions and potential limitations in cartilaginous fishes. It also reviews the available information on biotic and abiotic factors influencing patterns of elemental incorporation into hard parts of elasmobranchs and provides some comparative elemental assays and mapping in an attempt to fill knowledge gaps. Directions for future experimental research are highlighted to better understand fundamental elemental dynamics in elasmobranch hard parts. © 2016 The Fisheries Society of the British Isles.
Hua, Xijin; Wang, Ling; Al-Hajjar, Mazen; Jin, Zhongmin; Wilcox, Ruth K; Fisher, John
2014-07-01
Finite element models are becoming increasingly useful tools to conduct parametric analysis, design optimisation and pre-clinical testing for hip joint replacements. However, the verification of the finite element model is critically important. The purposes of this study were to develop a three-dimensional anatomic finite element model for a modular metal-on-polyethylene total hip replacement for predicting its contact mechanics and to conduct experimental validation for a simple finite element model which was simplified from the anatomic finite element model. An anatomic modular metal-on-polyethylene total hip replacement model (anatomic model) was first developed and then simplified with reasonable accuracy to a simple modular total hip replacement model (simplified model) for validation. The contact areas on the articulating surface of three polyethylene liners of modular metal-on-polyethylene total hip replacement bearings with different clearances were measured experimentally in the Leeds ProSim hip joint simulator under a series of loading conditions and different cup inclination angles. The contact areas predicted from the simplified model were then compared with that measured experimentally under the same conditions. The results showed that the simplification made for the anatomic model did not change the predictions of contact mechanics of the modular metal-on-polyethylene total hip replacement substantially (less than 12% for contact stresses and contact areas). Good agreements of contact areas between the finite element predictions from the simplified model and experimental measurements were obtained, with maximum difference of 14% across all conditions considered. This indicated that the simplification and assumptions made in the anatomic model were reasonable and the finite element predictions from the simplified model were valid. © IMechE 2014.
Kaye, T.N.; Pyke, David A.
2003-01-01
Population viability analysis is an important tool for conservation biologists, and matrix models that incorporate stochasticity are commonly used for this purpose. However, stochastic simulations may require assumptions about the distribution of matrix parameters, and modelers often select a statistical distribution that seems reasonable without sufficient data to test its fit. We used data from long-term (5a??10 year) studies with 27 populations of five perennial plant species to compare seven methods of incorporating environmental stochasticity. We estimated stochastic population growth rate (a measure of viability) using a matrix-selection method, in which whole observed matrices were selected at random at each time step of the model. In addition, we drew matrix elements (transition probabilities) at random using various statistical distributions: beta, truncated-gamma, truncated-normal, triangular, uniform, or discontinuous/observed. Recruitment rates were held constant at their observed mean values. Two methods of constraining stage-specific survival to a??100% were also compared. Different methods of incorporating stochasticity and constraining matrix column sums interacted in their effects and resulted in different estimates of stochastic growth rate (differing by up to 16%). Modelers should be aware that when constraining stage-specific survival to 100%, different methods may introduce different levels of bias in transition element means, and when this happens, different distributions for generating random transition elements may result in different viability estimates. There was no species effect on the results and the growth rates derived from all methods were highly correlated with one another. We conclude that the absolute value of population viability estimates is sensitive to model assumptions, but the relative ranking of populations (and management treatments) is robust. Furthermore, these results are applicable to a range of perennial plants and possibly other life histories.
Investigating Compaction by Intergranular Pressure Solution Using the Discrete Element Method
NASA Astrophysics Data System (ADS)
van den Ende, M. P. A.; Marketos, G.; Niemeijer, A. R.; Spiers, C. J.
2018-01-01
Intergranular pressure solution creep is an important deformation mechanism in the Earth's crust. The phenomenon has been frequently studied and several analytical models have been proposed that describe its constitutive behavior. These models require assumptions regarding the geometry of the aggregate and the grain size distribution in order to solve for the contact stresses and often neglect shear tractions. Furthermore, analytical models tend to overestimate experimental compaction rates at low porosities, an observation for which the underlying mechanisms remain to be elucidated. Here we present a conceptually simple, 3-D discrete element method (DEM) approach for simulating intergranular pressure solution creep that explicitly models individual grains, relaxing many of the assumptions that are required by analytical models. The DEM model is validated against experiments by direct comparison of macroscopic sample compaction rates. Furthermore, the sensitivity of the overall DEM compaction rate to the grain size and applied stress is tested. The effects of the interparticle friction and of a distributed grain size on macroscopic strain rates are subsequently investigated. Overall, we find that the DEM model is capable of reproducing realistic compaction behavior, and that the strain rates produced by the model are in good agreement with uniaxial compaction experiments. Characteristic features, such as the dependence of the strain rate on grain size and applied stress, as predicted by analytical models, are also observed in the simulations. DEM results show that interparticle friction and a distributed grain size affect the compaction rates by less than half an order of magnitude.
Modeling Endovascular Coils as Heterogeneous Porous Media
NASA Astrophysics Data System (ADS)
Yadollahi Farsani, H.; Herrmann, M.; Chong, B.; Frakes, D.
2016-12-01
Minimally invasive surgeries are the stat-of-the-art treatments for many pathologies. Treating brain aneurysms is no exception; invasive neurovascular clipping is no longer the only option and endovascular coiling has introduced itself as the most common treatment. Coiling isolates the aneurysm from blood circulation by promoting thrombosis within the aneurysm. One approach to studying intra-aneurysmal hemodynamics consists of virtually deploying finite element coil models and then performing computational fluid dynamics. However, this approach is often computationally expensive and requires extensive resources to perform. The porous medium approach has been considered as an alternative to the conventional coil modeling approach because it lessens the complexities of computational fluid dynamics simulations by reducing the number of mesh elements needed to discretize the domain. There have been a limited number of attempts at treating the endovascular coils as homogeneous porous media. However, the heterogeneity associated with coil configurations requires a more accurately defined porous medium in which the porosity and permeability change throughout the domain. We implemented this approach by introducing a lattice of sample volumes and utilizing techniques available in the field of interactive computer graphics. We observed that the introduction of the heterogeneity assumption was associated with significant changes in simulated aneurysmal flow velocities as compared to the homogeneous assumption case. Moreover, as the sample volume size was decreased, the flow velocities approached an asymptotical value, showing the importance of the sample volume size selection. These results demonstrate that the homogeneous assumption for porous media that are inherently heterogeneous can lead to considerable errors. Additionally, this modeling approach allowed us to simulate post-treatment flows without considering the explicit geometry of a deployed endovascular coil mass, greatly simplifying computation.
Soares, Sérgio R A; Bernardes, Ricardo S; Netto, Oscar de M Cordeiro
2002-01-01
The understanding of sanitation infrastructure, public health, and environmental relations is a fundamental assumption for planning sanitation infrastructure in urban areas. This article thus suggests elements for developing a planning model for sanitation infrastructure. The authors performed a historical survey of environmental and public health issues related to the sector, an analysis of the conceptual frameworks involving public health and sanitation systems, and a systematization of the various effects that water supply and sanitation have on public health and the environment. Evaluation of these effects should guarantee the correct analysis of possible alternatives, deal with environmental and public health objectives (the main purpose of sanitation infrastructure), and provide the most reasonable indication of actions. The suggested systematization of the sanitation systems effects in each step of their implementation is an advance considering the association between the fundamental elements for formulating a planning model for sanitation infrastructure.
NASA Technical Reports Server (NTRS)
Coeckelenbergh, Y.; Macelroy, R. D.; Rein, R.
1978-01-01
The investigation of specific interactions among biological molecules must take into consideration the stereochemistry of the structures. Thus, models of the molecules are essential for describing the spatial organization of potentially interacting groups, and estimations of conformation are required for a description of spatial organization. Both the function of visualizing molecules, and that of estimating conformation through calculations of energy, are part of the molecular modeling system described in the present paper. The potential uses of the system in investigating some aspects of the origin of life rest on the assumption that translation of conformation from genetic elements to catalytic elements would have been required for the development of the first replicating systems subject to the process of biological evolution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Zhijie; Li, Dongsheng; Xu, Wei
2015-04-01
In atom probe tomography (APT), accurate reconstruction of the spatial positions of field evaporated ions from measured detector patterns depends upon a correct understanding of the dynamic tip shape evolution and evaporation laws of component atoms. Artifacts in APT reconstructions of heterogeneous materials can be attributed to the assumption of homogeneous evaporation of all the elements in the material in addition to the assumption of a steady state hemispherical dynamic tip shape evolution. A level set method based specimen shape evolution model is developed in this study to simulate the evaporation of synthetic layered-structured APT tips. The simulation results ofmore » the shape evolution by the level set model qualitatively agree with the finite element method and the literature data using the finite difference method. The asymmetric evolving shape predicted by the level set model demonstrates the complex evaporation behavior of heterogeneous tip and the interface curvature can potentially lead to the artifacts in the APT reconstruction of such materials. Compared with other APT simulation methods, the new method provides smoother interface representation with the aid of the intrinsic sub-grid accuracy. Two evaporation models (linear and exponential evaporation laws) are implemented in the level set simulations and the effect of evaporation laws on the tip shape evolution is also presented.« less
ERIC Educational Resources Information Center
Wing, Coady; Cook, Thomas D.
2013-01-01
The sharp regression discontinuity design (RDD) has three key weaknesses compared to the randomized clinical trial (RCT). It has lower statistical power, it is more dependent on statistical modeling assumptions, and its treatment effect estimates are limited to the narrow subpopulation of cases immediately around the cutoff, which is rarely of…
NASA Astrophysics Data System (ADS)
Jin, Qiyun; Thompson, David J.; Lurcock, Daniel E. J.; Toward, Martin G. R.; Ntotsios, Evangelos
2018-05-01
A numerical model is presented for the ground-borne vibration produced by trains running in tunnels. The model makes use of the assumption that the geometry and material properties are invariant in the axial direction. It is based on the so-called two-and-a-half dimensional (2.5D) coupled Finite Element and Boundary Element methodology, in which a two-dimensional cross-section is discretised into finite elements and boundary elements and the third dimension is represented by a Fourier transform over wavenumbers. The model is applied to a particular case of a metro line built with a cast-iron tunnel lining. An equivalent continuous model of the tunnel is developed to allow it to be readily implemented in the 2.5D framework. The tunnel structure and the track are modelled using solid and beam finite elements while the ground is modelled using boundary elements. The 2.5D track-tunnel-ground model is coupled with a train consisting of several vehicles, which are represented by multi-body models. The response caused by the passage of a train is calculated as the sum of the dynamic component, excited by the combined rail and wheel roughness, and the quasi-static component, induced by the constant moving axle loads. Field measurements have been carried out to provide experimental validation of the model. These include measurements of the vibration of the rail, the tunnel invert and the tunnel wall. In addition, simultaneous measurements were made on the ground surface above the tunnel. Rail roughness and track characterisation measurements were also made. The prediction results are compared with measured vibration obtained during train passages, with good agreement.
NASA Astrophysics Data System (ADS)
Böhnke, Frank; Scheunemann, Christian; Semmelbauer, Sebastian
2018-05-01
The propagation of traveling waves along the basilar membrane is studied in a 3D finite element model of the cochlea using single and two-tone stimulation. The advantage over former approaches is the consideration of viscous-thermal boundary layer damping which makes the usual but physically unjustified assumption of Rayleigh damping obsolete. The energy loss by viscous boundary layer damping is 70 dB lower than the actually assumed power generation by outer hair cells. The space-time course with two-tone stimulation shows the traveling waves and the periodicity of the beat frequency f2 - f1.
Comparing Experiment and Computation of Hypersonic Laminar Boundary Layers with Isolated Roughness
NASA Technical Reports Server (NTRS)
Bathel, Brett F.; Iyer, Prahladh S.; Mahesh, Krishnan; Danehy, Paul M.; Inman, Jennifer A.; Jones, Stephen B.; Johansen, Craig T.
2014-01-01
Streamwise velocity profile behavior in a hypersonic laminar boundary layer in the presence of an isolated roughness element is presented for an edge Mach number of 8.2. Two different roughness element types are considered: a 2-mm tall, 4-mm diameter cylinder, and a 2-mm radius hemisphere. Measurements of the streamwise velocity behavior using nitric oxide (NO) planar laser-induced fluorescence (PLIF) molecular tagging velocimetry (MTV) have been performed on a 20-degree wedge model. The top surface of this model acts as a flat-plate and is oriented at 5 degrees with respect to the freestream flow. Computations using direct numerical simulation (DNS) of these flows have been performed and are compared to the measured velocity profiles. Particular attention is given to the characteristics of velocity profiles immediately upstream and downstream of the roughness elements. In these regions, the streamwise flow can experience strong deceleration or acceleration. An analysis in which experimentally measured MTV profile displacements are compared with DNS particle displacements is performed to determine if the assumption of constant velocity over the duration of the MTV measurement is valid. This assumption is typically made when reporting MTV-measured velocity profiles, and may result in significant errors when comparing MTV measurements to computations in regions with strong deceleration or acceleration. The DNS computations with the cylindrical roughness element presented in this paper were performed with and without air injection from a rectangular slot upstream of the cylinder. This was done to determine the extent to which gas seeding in the MTV measurements perturbs the boundary layer flowfield.
Chemical evolution of the Earth: Equilibrium or disequilibrium process?
NASA Technical Reports Server (NTRS)
Sato, M.
1985-01-01
To explain the apparent chemical incompatibility of the Earth's core and mantle or the disequilibrium process, various core forming mechanisms have been proposed, i.e., rapid disequilibrium sinking of molten iron, an oxidized core or protocore materials, and meteorite contamination of the upper mantle after separation from the core. Adopting concepts used in steady state thermodynamics, a method is devised for evaluating how elements should distribute stable in the Earth's interior for the present gradients of temperature, pressure, and gravitational acceleration. Thermochemical modeling gives useful insights into the nature of chemical evolution of the Earth without overly speculative assumptions. Further work must be done to reconcile siderophile elements, rare gases, and possible light elements in the outer core.
Vaporization and Zonal Mixing in Performance Modeling of Advanced LOX-Methane Rockets
NASA Technical Reports Server (NTRS)
Williams, George J., Jr.; Stiegemeier, Benjamin R.
2013-01-01
Initial modeling of LOX-Methane reaction control (RCE) 100 lbf thrusters and larger, 5500 lbf thrusters with the TDK/VIPER code has shown good agreement with sea-level and altitude test data. However, the vaporization and zonal mixing upstream of the compressible flow stage of the models leveraged empirical trends to match the sea-level data. This was necessary in part because the codes are designed primarily to handle the compressible part of the flow (i.e. contraction through expansion) and in part because there was limited data on the thrusters themselves on which to base a rigorous model. A more rigorous model has been developed which includes detailed vaporization trends based on element type and geometry, radial variations in mixture ratio within each of the "zones" associated with elements and not just between zones of different element types, and, to the extent possible, updated kinetic rates. The Spray Combustion Analysis Program (SCAP) was leveraged to support assumptions in the vaporization trends. Data of both thrusters is revisited and the model maintains a good predictive capability while addressing some of the major limitations of the previous version.
Chemical fractionation of siderophile elements in impactites from Australian meteorite craters
NASA Technical Reports Server (NTRS)
Attrep, A., Jr.; Orth, C. J.; Quintana, L. R.; Shoemaker, C. S.; Shoemaker, E. M.; Taylor, S. R.
1991-01-01
The abundance pattern of siderophile elements in terrestrial and lunar impact melt rocks was used extensively to infer the nature of the impacting projectiles. An implicit assumption made is that the siderophile abundance ratios of the projectiles are approximately preserved during mixing of the projectile constituents with the impact melts. As this mixture occurs during flow of strongly shocked materials at high temperatures, however there are grounds for suspecting that the underlying assumption is not always valid. In particular, fractionation of the melted and partly vaporized material of the projectile might be expected because of differences in volatility, solubility in silicate melts, and other characteristics of the constituent elements. Impactites from craters with associated meteorites offer special opportunities to test the assumptions on which projectile identifications are based and to study chemical fractionation that occurred during the impact process.
Improved Finite Element Modeling of the Turbofan Engine Inlet Radiation Problem
NASA Technical Reports Server (NTRS)
Roy, Indranil Danda; Eversman, Walter; Meyer, H. D.
1993-01-01
Improvements have been made in the finite element model of the acoustic radiated field from a turbofan engine inlet in the presence of a mean flow. The problem of acoustic radiation from a turbofan engine inlet is difficult to model numerically because of the large domain and high frequencies involved. A numerical model with conventional finite elements in the near field and wave envelope elements in the far field has been constructed. By employing an irrotational mean flow assumption, both the mean flow and the acoustic perturbation problem have been posed in an axisymmetric formulation in terms of the velocity potential; thereby minimizing computer storage and time requirements. The finite element mesh has been altered in search of an improved solution. The mean flow problem has been reformulated with new boundary conditions to make it theoretically rigorous. The sound source at the fan face has been modeled as a combination of positive and negative propagating duct eigenfunctions. Therefore, a finite element duct eigenvalue problem has been solved on the fan face and the resulting modal matrix has been used to implement a source boundary condition on the fan face in the acoustic radiation problem. In the post processing of the solution, the acoustic pressure has been evaluated at Gauss points inside the elements and the nodal pressure values have been interpolated from them. This has significantly improved the results. The effect of the geometric position of the transition circle between conventional finite elements and wave envelope elements has been studied and it has been found that the transition can be made nearer to the inlet than previously assumed.
A New Computational Methodology for Structural Dynamics Problems
2008-04-01
by approximating the geometry of the midsurface of the shell (as in continuum-based finite element models), are prevented from the beginning...iiθ , such that the surface 03=θ defines the midsurface ( )R tM M of the region ( )R tB B . The coordinate 3θ is the measure of the distance...assumption for the shell model: “the displacement field is considered as a linear expansion of the thickness coordinate around the midsurface . The
Space-Time Adaptive Processing for Airborne Radar
1994-12-13
horizontal plane Uniform linear antenna array (possibly columns of a planar array) Identical element patterns 13 14 15 9 7 7,33 7 7 Target Model ...Parameters for Example Scenario 31 3 Assumptions Made for Radar System and Signal Model 52 4 Platform and Interference Scenario for Baseline Scenario. 61 5...pulses, is addressed first. Fully adaptive STAP requires the solution to a system of linear equations of size MN, where N is the number of array
Yield Behavior of Solution Treated and Aged Ti-6Al-4V
NASA Technical Reports Server (NTRS)
Ring, Andrew J.; Baker, Eric H.; Salem, Jonathan A.; Thesken, John C.
2014-01-01
Post yield uniaxial tension-compression tests were run on a solution treated and aged (STA), titanium 6-percent aluminum 4-percent vanadium (Ti-6Al-4V) alloy to determine the yield behavior on load reversal. The material exhibits plastic behavior almost immediately on load reversal implying a strong Bauschinger effect. The resultant stress-strain data was compared to a 1D mechanics model and a finite element model used to design a composite overwrapped pressure vessel (COPV). Although the models and experimental data compare well for the initial loading and unloading in the tensile regime, agreement is lost in the compressive regime due to the Bauschinger effect and the assumption of perfect plasticity. The test data presented here are being used to develop more accurate cyclic hardening constitutive models for future finite element design analysis of COPVs.
NASA Technical Reports Server (NTRS)
Foye, R. L.
1993-01-01
This report concerns the prediction of the elastic moduli and the internal stresses within the unit cell of a fabric reinforced composite. In the proposed analysis no restrictions or assumptions are necessary concerning yarn or tow cross-sectional shapes or paths through the unit cell but the unit cell itself must be a right hexagonal parallelepiped. All the unit cell dimensions are assumed to be small with respect to the thickness of the composite structure that it models. The finite element analysis of a unit cell is usually complicated by the mesh generation problems and the non-standard, adjacent-cell boundary conditions. This analysis avoids these problems through the use of preprogrammed boundary conditions and replacement materials (or elements). With replacement elements it is not necessary to match all the constitutional material interfaces with finite element boundaries. Simple brick-shaped elements can be used to model the unit cell structure. The analysis predicts the elastic constants and the average stresses within each constituent material of each brick element. The application and results of this analysis are demonstrated through several example problems which include a number of composite microstructures.
NASA Technical Reports Server (NTRS)
Longhi, J.
1977-01-01
A description is presented of an empirical model of fractional crystallization which predicts that slightly modified versions of certain of the proposed whole moon compositions can reproduce the major-element chemistry and mineralogy of most of the primitive highland rocks through equilibrium and fractional crystallization processes combined with accumulation of crystals and trapping of residual liquids. These compositions contain sufficient Al to form a plagioclase-rich crust 60 km thick on top of a magma ocean that was initially no deeper than about 300 km. Implicit in the model are the assumptions that all cooling and crystallization take place at low pressure and that there are no compositional or thermal gradients in the liquid. Discussions of the cooling and crystallization of the proposed magma ocean show these assumptions to be disturbingly naive when applied to the ocean as a whole. However, the model need not be applied to the whole ocean, but only to layers of cooling liquid near the surface.
Three-dimensional elastic-plastic finite-element analysis of fatigue crack propagation
NASA Technical Reports Server (NTRS)
Goglia, G. L.; Chermahini, R. G.
1985-01-01
Fatigue cracks are a major problem in designing structures subjected to cyclic loading. Cracks frequently occur in structures such as aircraft and spacecraft. The inspection intervals of many aircraft structures are based on crack-propagation lives. Therefore, improved prediction of propagation lives under flight-load conditions (variable-amplitude loading) are needed to provide more realistic design criteria for these structures. The main thrust was to develop a three-dimensional, nonlinear, elastic-plastic, finite element program capable of extending a crack and changing boundary conditions for the model under consideration. The finite-element model is composed of 8-noded (linear-strain) isoparametric elements. In the analysis, the material is assumed to be elastic-perfectly plastic. The cycle stress-strain curve for the material is shown Zienkiewicz's initial-stress method, von Mises's yield criterion, and Drucker's normality condition under small-strain assumptions are used to account for plasticity. The three-dimensional analysis is capable of extending the crack and changing boundary conditions under cyclic loading.
Evans, Alistair R.; McHenry, Colin R.
2015-01-01
The reliability of finite element analysis (FEA) in biomechanical investigations depends upon understanding the influence of model assumptions. In producing finite element models, surface mesh resolution is influenced by the resolution of input geometry, and influences the resolution of the ensuing solid mesh used for numerical analysis. Despite a large number of studies incorporating sensitivity studies of the effects of solid mesh resolution there has not yet been any investigation into the effect of surface mesh resolution upon results in a comparative context. Here we use a dataset of crocodile crania to examine the effects of surface resolution on FEA results in a comparative context. Seven high-resolution surface meshes were each down-sampled to varying degrees while keeping the resulting number of solid elements constant. These models were then subjected to bite and shake load cases using finite element analysis. The results show that incremental decreases in surface resolution can result in fluctuations in strain magnitudes, but that it is possible to obtain stable results using lower resolution surface in a comparative FEA study. As surface mesh resolution links input geometry with the resulting solid mesh, the implication of these results is that low resolution input geometry and solid meshes may provide valid results in a comparative context. PMID:26056620
NASA Astrophysics Data System (ADS)
Kerst, Stijn; Shyrokau, Barys; Holweg, Edward
2018-05-01
This paper proposes a novel semi-analytical bearing model addressing flexibility of the bearing outer race structure. It furthermore presents the application of this model in a bearing load condition monitoring approach. The bearing model is developed as current computational low cost bearing models fail to provide an accurate description of the more and more common flexible size and weight optimized bearing designs due to their assumptions of rigidity. In the proposed bearing model raceway flexibility is described by the use of static deformation shapes. The excitation of the deformation shapes is calculated based on the modelled rolling element loads and a Fourier series based compliance approximation. The resulting model is computational low cost and provides an accurate description of the rolling element loads for flexible outer raceway structures. The latter is validated by a simulation-based comparison study with a well-established bearing simulation software tool. An experimental study finally shows the potential of the proposed model in a bearing load monitoring approach.
Application Note: Power Grid Modeling With Xyce.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sholander, Peter E.
This application note describes how to model steady-state power flows and transient events in electric power grids with the SPICE-compatible Xyce TM Parallel Electronic Simulator developed at Sandia National Labs. This application notes provides a brief tutorial on the basic devices (branches, bus shunts, transformers and generators) found in power grids. The focus is on the features supported and assumptions made by the Xyce models for power grid elements. It then provides a detailed explanation, including working Xyce netlists, for simulating some simple power grid examples such as the IEEE 14-bus test case.
Experimental data showing the thermal behavior of a flat roof with phase change material.
Tokuç, Ayça; Başaran, Tahsin; Yesügey, S Cengiz
2015-12-01
The selection and configuration of building materials for optimal energy efficiency in a building require some assumptions and models for the thermal behavior of the utilized materials. Although the models for many materials can be considered acceptable for simulation and calculation purposes, the work for modeling the real time behavior of phase change materials is still under development. The data given in this article shows the thermal behavior of a flat roof element with a phase change material (PCM) layer. The temperature and energy given to and taken from the building element are reported. In addition the solid-liquid behavior of the PCM is tracked through images. The resulting thermal behavior of the phase change material is discussed and simulated in [1] A. Tokuç, T. Başaran, S.C. Yesügey, An experimental and numerical investigation on the use of phase change materials in building elements: the case of a flat roof in Istanbul, Build. Energy, vol. 102, 2015, pp. 91-104.
A Modal Model to Simulate Typical Structural Dynamic Nonlinearity [PowerPoint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayes, Randall L.; Pacini, Benjamin Robert; Roettgen, Dan
2016-01-01
Some initial investigations have been published which simulate nonlinear response with almost traditional modal models: instead of connecting the modal mass to ground through the traditional spring and damper, a nonlinear Iwan element was added. This assumes that the mode shapes do not change with amplitude and there are no interactions between modal degrees of freedom. This work expands on these previous studies. An impact experiment is performed on a structure which exhibits typical structural dynamic nonlinear response, i.e. weak frequency dependence and strong damping dependence on the amplitude of vibration. Use of low level modal test results in combinationmore » with high level impacts are processed using various combinations of modal filtering, the Hilbert Transform and band-pass filtering to develop response data that are then fit with various nonlinear elements to create a nonlinear pseudo-modal model. Simulations of forced response are compared with high level experimental data for various nonlinear element assumptions.« less
A Modal Model to Simulate Typical Structural Dynamic Nonlinearity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pacini, Benjamin Robert; Mayes, Randall L.; Roettgen, Daniel R
2015-10-01
Some initial investigations have been published which simulate nonlinear response with almost traditional modal models: instead of connecting the modal mass to ground through the traditional spring and damper, a nonlinear Iwan element was added. This assumes that the mode shapes do not change with amplitude and there are no interactions between modal degrees of freedom. This work expands on these previous studies. An impact experiment is performed on a structure which exhibits typical structural dynamic nonlinear response, i.e. weak frequency dependence and strong damping dependence on the amplitude of vibration. Use of low level modal test results in combinationmore » with high level impacts are processed using various combinations of modal filtering, the Hilbert Transform and band-pass filtering to develop response data that are then fit with various nonlinear elements to create a nonlinear pseudo-modal model. Simulations of forced response are compared with high level experimental data for various nonlinear element assumptions.« less
Aligning physical elements with persons' attitude: an approach using Rasch measurement theory
NASA Astrophysics Data System (ADS)
Camargo, F. R.; Henson, B.
2013-09-01
Affective engineering uses mathematical models to convert the information obtained from persons' attitude to physical elements into an ergonomic design. However, applications in the domain have not in many cases met measurement assumptions. This paper proposes a novel approach based on Rasch measurement theory to overcome the problem. The research demonstrates that if data fit the model, further variables can be added to a scale. An empirical study was designed to determine the range of compliance where consumers could obtain an impression of a moisturizer cream when touching some product containers. Persons, variables and stimulus objects were parameterised independently on a linear continuum. The results showed that a calibrated scale preserves comparability although incorporating further variables.
Quantifying Square Membrane Wrinkle Behavior Using MITC Shell Elements
NASA Technical Reports Server (NTRS)
Jacobson, Mindy B.; Iwasa, Takashi; Natori, M. C.
2004-01-01
For future membrane based structures, quantified predictions of membrane wrinkling behavior in terms of amplitude, angle and wavelength are needed to optimize the efficiency and integrity of such structures, as well as their associated control systems. For numerical analyses performed in the past, limitations on the accuracy of membrane distortion simulations have often been related to the assumptions made while using finite elements. Specifically, this work demonstrates that critical assumptions include: effects of gravity. supposed initial or boundary conditions, and the type of element used to model the membrane. In this work, a 0.2 square meter membrane is treated as a structural material with non-negligible bending stiffness. Mixed Interpolation of Tensorial Components (MTTC) shell elements are used to simulate wrinkling behavior due to a constant applied in-plane shear load. Membrane thickness, gravity effects, and initial imperfections with respect to flatness were varied in numerous nonlinear analysis cases. Significant findings include notable variations in wrinkle modes for thickness in the range of 50 microns to 1000 microns, which also depend on the presence of an applied gravity field. However, it is revealed that relationships between overall strain energy density for cases with differing initial conditions are independent of assumed initial con&tions. In addition, analysis results indicate that the relationship between amplitude scale (W/t) and structural scale (L/t) is linear in the presence of a gravity field.
Proposed best practice for projects that involve modelling and simulation.
O'Kelly, Michael; Anisimov, Vladimir; Campbell, Chris; Hamilton, Sinéad
2017-03-01
Modelling and simulation has been used in many ways when developing new treatments. To be useful and credible, it is generally agreed that modelling and simulation should be undertaken according to some kind of best practice. A number of authors have suggested elements required for best practice in modelling and simulation. Elements that have been suggested include the pre-specification of goals, assumptions, methods, and outputs. However, a project that involves modelling and simulation could be simple or complex and could be of relatively low or high importance to the project. It has been argued that the level of detail and the strictness of pre-specification should be allowed to vary, depending on the complexity and importance of the project. This best practice document does not prescribe how to develop a statistical model. Rather, it describes the elements required for the specification of a project and requires that the practitioner justify in the specification the omission of any of the elements and, in addition, justify the level of detail provided about each element. This document is an initiative of the Special Interest Group for modelling and simulation. The Special Interest Group for modelling and simulation is a body open to members of Statisticians in the Pharmaceutical Industry and the European Federation of Statisticians in the Pharmaceutical Industry. Examples of a very detailed specification and a less detailed specification are included as appendices. Copyright © 2016 John Wiley & Sons, Ltd.
The possible modifications of the Hisse model for pure LANDSAT agricultural data
NASA Technical Reports Server (NTRS)
Peters, C.
1982-01-01
An idea, due to A. Feiveson, is presented for relaxing the assumption of class conditional independence of LANDSAT spectral measurements within the same patch (field). Theoretical arguments are given which show that any significant refinement of the model beyond Feiveson's proposal will not allow the reduction, essential to HISSE, of the pure data to patch summary statistics. A slight alteration of the new model is shown to be a reasonable approximation to the model which describes pure data elements from the same patch as jointly Guassian with a covariance function which exhibits exponential decay with respect to spatial separation.
The possible modifications of the HISSE model for pure LANDSAT agricultural data
NASA Technical Reports Server (NTRS)
Peters, C.
1981-01-01
A method for relaxing the assumption of class conditional independence of LANDSAT spectral measurements within the same patch (field) is discussed. Theoretical arguments are given which show that any significant refinement of the model beyond this proposal will not allow the reduction, essential to HISSE, of the pure data to patch summary statistics. A slight alteration of the new model is shown to be a reasonable approximation to the model which describes pure data elements from the same patch as jointly Gaussian with a covariance function which exhibits exponential decay with respect to spatial separation.
Finite Element Modeling of a Cylindrical Contact Using Hertzian Assumptions
NASA Technical Reports Server (NTRS)
Knudsen, Erik
2003-01-01
The turbine blades in the high-pressure fuel turbopump/alternate turbopump (HPFTP/AT) are subjected to hot gases rapidly flowing around them. This flow excites vibrations in the blades. Naturally, one has to worry about resonance, so a damping device was added to dissipate some energy from the system. The foundation is now laid for a very complex problem. The damper is in contact with the blade, so now there are contact stresses (both normal and tangential) to contend with. Since these stresses can be very high, it is not all that difficult to yield the material. Friction is another non-linearity and the blade is made out of a Nickel-based single-crystal superalloy that is orthotropic. A few approaches exist to solve such a problem and computer models, using contact elements, have been built with friction, plasticity, etc. These models are quite cumbersome and require many hours to solve just one load case and material orientation. A simpler approach is required. Ideally, the model should be simplified so the analysis can be conducted faster. When working with contact problems determining the contact patch and the stresses in the material are the main concerns. Closed-form solutions for non-conforming bodies, developed by Hertz, made out of isotropic materials are readily available. More involved solutions for 3-D cases using different materials are also available. The question is this: can Hertzian1 solutions be applied, or superimposed, to more complicated problems-like those involving anisotropic materials? That is the point of the investigation here. If these results agree with the more complicated computer models, then the analytical solutions can be used in lieu of the numerical solutions that take a very long time to process. As time goes on, the analytical solution will eventually have to include things like friction and plasticity. The models in this report use no contact elements and are essentially an applied load problem using Hertzian assumptions to determine the contact patch dimensions.
Cylindrical heat conduction and structural acoustic models for enclosed fiber array thermophones.
Dzikowicz, Benjamin R; Tressler, James F; Baldwin, Jeffrey W
2017-11-01
Calculation of the heat loss for thermophone heating elements is a function of their geometry and the thermodynamics of their surroundings. Steady-state behavior is difficult to establish or evaluate as heat is only flowing in one direction in the device. However, for a heating element made from an array of carbon fibers in a planar enclosure, several assumptions can be made, leading to simple solutions of the heat equation. These solutions can be used to more carefully determine the efficiency of thermophones of this geometry. Acoustic response is predicted with the application of a Helmholtz resonator and thin plate structural acoustics models. A laboratory thermophone utilizing a sparse horizontal array of fine (6.7 μm diameter) carbon fibers is designed and tested. Experimental results are compared with the model. The model is also used to examine the optimal array density for maximal efficiency.
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.; Johnston, Christopher O.; Thompson, Richard A.
2009-01-01
A description of models and boundary conditions required for coupling radiation and ablation physics to a hypersonic flow simulation is provided. Chemical equilibrium routines for varying elemental mass fraction are required in the flow solver to integrate with the equilibrium chemistry assumption employed in the ablation models. The capability also enables an equilibrium catalytic wall boundary condition in the non-ablating case. The paper focuses on numerical implementation issues using FIRE II, Mars return, and Apollo 4 applications to provide context for discussion. Variable relaxation factors applied to the Jacobian elements of partial equilibrium relations required for convergence are defined. Challenges of strong radiation coupling in a shock capturing algorithm are addressed. Results are presented to show how the current suite of models responds to a wide variety of conditions involving coupled radiation and ablation.
NASA Astrophysics Data System (ADS)
Silva, Luís Carlos; Milani, Gabriele; Lourenço, Paulo B.
2017-11-01
Two finite element homogenized-based strategies are presented for the out-of-plane behaviour characterization of an English bond masonry wall. A finite element micro-modelling approach using Cauchy stresses and first order movements are assumed for both strategies. The material nonlinearity is lumped on joints interfaces and bricks are considered elastic. Nevertheless, the first model is based on a Plane-stress assumption, in which the out-of-plane quantities are derived through on-thickness wall integration considering a Kirchhoff-plate theory. The second model is a tridimensional one, in which the homogenized out-of-plane quantities can be directly derived after solving the boundary value problem. The comparison is conducted by assessing the obtained out-of-plane bending- and torsion-curvature diagrams. A good agreement is found for the present study case.
NASA Technical Reports Server (NTRS)
Walker, A. B. C., Jr.; Rugge, H. R.; Weiss, K.
1974-01-01
Permitted lines in the optically thin coronal X-ray spectrum were analyzed to find the distribution of coronal material, as a function of temperature, without special assumptions concerning coronal conditions. The resonance lines of N, O, Ne, Na, Mg, Al, Si, S, and Ar which dominate the quiet coronal spectrum below 25A were observed. Coronal models were constructed and the relative abundances of these elements were determined. The intensity in the lines of the 2p-3d transitions near 15A was used in conjunction with these coronal models, with the assumption of coronal excitation, to determine the Fe XVII abundance. The relative intensities of the 2p-3d Fe XVII lines observed in the corona agreed with theoretical prediction. Using a more complete theoretical model, and higher resolution observations, a revised calculation of iron abundance relative to hydrogen of 0.000026 was made.
STRUCTURAL DYNAMICS OF METAL PARTITIONING TO MINERAL SURFACES
The conceptual understanding of surface complexation reactions that control trace element partitioning to mineral surfaces is limited by the assumption that the solid reactant possesses a finite, time-invariant population of surface functional groups. This assumption has limited...
Shape: A 3D Modeling Tool for Astrophysics.
Steffen, Wolfgang; Koning, Nicholas; Wenger, Stephan; Morisset, Christophe; Magnor, Marcus
2011-04-01
We present a flexible interactive 3D morpho-kinematical modeling application for astrophysics. Compared to other systems, our application reduces the restrictions on the physical assumptions, data type, and amount that is required for a reconstruction of an object's morphology. It is one of the first publicly available tools to apply interactive graphics to astrophysical modeling. The tool allows astrophysicists to provide a priori knowledge about the object by interactively defining 3D structural elements. By direct comparison of model prediction with observational data, model parameters can then be automatically optimized to fit the observation. The tool has already been successfully used in a number of astrophysical research projects.
Modeling Array Stations in SIG-VISA
NASA Astrophysics Data System (ADS)
Ding, N.; Moore, D.; Russell, S.
2013-12-01
We add support for array stations to SIG-VISA, a system for nuclear monitoring using probabilistic inference on seismic signals. Array stations comprise a large portion of the IMS network; they can provide increased sensitivity and more accurate directional information compared to single-component stations. Our existing model assumed that signals were independent at each station, which is false when lots of stations are close together, as in an array. The new model removes that assumption by jointly modeling signals across array elements. This is done by extending our existing Gaussian process (GP) regression models, also known as kriging, from a 3-dimensional single-component space of events to a 6-dimensional space of station-event pairs. For each array and each event attribute (including coda decay, coda height, amplitude transfer and travel time), we model the joint distribution across array elements using a Gaussian process that learns the correlation lengthscale across the array, thereby incorporating information of array stations into the probabilistic inference framework. To evaluate the effectiveness of our model, we perform ';probabilistic beamforming' on new events using our GP model, i.e., we compute the event azimuth having highest posterior probability under the model, conditioned on the signals at array elements. We compare the results from our probabilistic inference model to the beamforming currently performed by IMS station processing.
Behavior of some singly ionized, heavy-ion impurities during compression in a theta-pinch plasma
NASA Technical Reports Server (NTRS)
Jalufka, N. W.
1975-01-01
The introduction of a small percentage of an impurity gas containing a desired element into a theta-pinch plasma is a standard procedure used to investigate the spectra and atomic processes of the element. This procedure assumes that the mixing ratio of impurity-to-fill gases remains constant during the collapse and heating phase. Spectroscopic investigations of the constant-mixing-ratio assumption for a 2% neon and argon impurity verifies the assumption only for the neon impurity. However, for the 2% argon impurity, only 20 to 25% of the argon is in the high-temperature compressed plasma. It is concluded that the constant-mixing-ratio assumption is not applicable to the argon impurity.
Abbott, M.L.; Susong, D.D.; Krabbenhoft, D.P.; Rood, A.S.
2002-01-01
Mercury (total and methyl) was evaluated in snow samples collected near a major mercury emission source on the Idaho National Engineering and Environmental Laboratory (INEEL) in southeastern Idaho and 160 km downwind in Teton Range in western Wyoming. The sampling was done to assess near-field (<12 km) deposition rates around the source, compare them to those measured in a relatively remote, pristine downwind location, and to use the measurements to develop improved, site-specific model input parameters for precipitation scavenging coefficient and the fraction of Hg emissions deposited locally. Measured snow water concentrations (ng L-1) were converted to deposition (ug m-2) using the sample location snow water equivalent. The deposition was then compared to that predicted using the ISC3 air dispersion/deposition model which was run with a range of particle and vapor scavenging coefficient input values. Accepted model statistical performance measures (fractional bias and normalized mean square error) were calculated for the different modeling runs, and the best model performance was selected. Measured concentrations close to the source (average = 5.3 ng L-1) were about twice those measured in the Teton Range (average = 2.7 ng L-1) which were within the expected range of values for remote background areas. For most of the sampling locations, the ISC3 model predicted within a factor of two of the observed deposition. The best modeling performance was obtained using a scavenging coefficient value for 0.25 ??m diameter particulate and the assumption that all of the mercury is reactive Hg(II) and subject to local deposition. A 0.1 ??m particle assumption provided conservative overprediction of the data, while a vapor assumption resulted in highly variable predictions. Partitioning a fraction of the Hg emissions to elemental Hg(0) (a U.S. EPA default assumption for combustion facility risk assessments) would have underpredicted the observed fallout.
GVE-Based Dynamics and Control for Formation Flying Spacecraft
NASA Technical Reports Server (NTRS)
Breger, Louis; How, Jonathan P.
2004-01-01
Formation flying is an enabling technology for many future space missions. This paper presents extensions to the equations of relative motion expressed in Keplerian orbital elements, including new initialization techniques for general formation configurations. A new linear time-varying form of the equations of relative motion is developed from Gauss Variational Equations and used in a model predictive controller. The linearizing assumptions for these equations are shown to be consistent with typical formation flying scenarios. Several linear, convex initialization techniques are presented, as well as a general, decentralized method for coordinating a tetrahedral formation using differential orbital elements. Control methods are validated using a commercial numerical propagator.
Anssari-Benam, Afshin; Bucchi, Andrea; Bader, Dan L
2015-09-18
Discrete element models have often been the primary tool in investigating and characterising the viscoelastic behaviour of soft tissues. However, studies have employed varied configurations of these models, based on the choice of the number of elements and the utilised formation, for different subject tissues. This approach has yielded a diverse array of viscoelastic models in the literature, each seemingly resulting in different descriptions of viscoelastic constitutive behaviour and/or stress-relaxation and creep functions. Moreover, most studies do not apply a single discrete element model to characterise both stress-relaxation and creep behaviours of tissues. The underlying assumption for this disparity is the implicit perception that the viscoelasticity of soft tissues cannot be described by a universal behaviour or law, resulting in the lack of a unified approach in the literature based on discrete element representations. This paper derives the constitutive equation for different viscoelastic models applicable to soft tissues with two characteristic times. It demonstrates that all possible configurations exhibit a unified and universal behaviour, captured by a single constitutive relationship between stress, strain and time as: σ+Aσ̇+Bσ¨=Pε̇+Qε¨. The ensuing stress-relaxation G(t) and creep J(t) functions are also unified and universal, derived as [Formula: see text] and J(t)=c2+(ε0-c2)e(-PQt)+σ0Pt, respectively. Application of these relationships to experimental data is illustrated for various tissues including the aortic valve, ligament and cerebral artery. The unified model presented in this paper may be applied to all tissues with two characteristic times, obviating the need for employing varied configurations of discrete element models in preliminary investigation of the viscoelastic behaviour of soft tissues. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chróścielewski, Jacek; Schmidt, Rüdiger; Eremeyev, Victor A.
2018-05-01
This paper addresses modeling and finite element analysis of the transient large-amplitude vibration response of thin rod-type structures (e.g., plane curved beams, arches, ring shells) and its control by integrated piezoelectric layers. A geometrically nonlinear finite beam element for the analysis of piezolaminated structures is developed that is based on the Bernoulli hypothesis and the assumptions of small strains and finite rotations of the normal. The finite element model can be applied to static, stability, and transient analysis of smart structures consisting of a master structure and integrated piezoelectric actuator layers or patches attached to the upper and lower surfaces. Two problems are studied extensively: (i) FE analyses of a clamped semicircular ring shell that has been used as a benchmark problem for linear vibration control in several recent papers are critically reviewed and extended to account for the effects of structural nonlinearity and (ii) a smart circular arch subjected to a hydrostatic pressure load is investigated statically and dynamically in order to study the shift of bifurcation and limit points, eigenfrequencies, and eigenvectors, as well as vibration control for loading conditions which may lead to dynamic loss of stability.
Elements of a Research Report.
ERIC Educational Resources Information Center
Schurter, William J.
This guide for writing research or technical reports discusses eleven basic elements of such reports and provides examples of "good" and "bad" wordings. These elements are the title, problem statement, purpose statement, need statement, hypothesis, assumptions, procedures, limitations, terminology, conclusion and recommendations. This guide is…
Nonlocal and Mixed-Locality Multiscale Finite Element Methods
Costa, Timothy B.; Bond, Stephen D.; Littlewood, David J.
2018-03-27
In many applications the resolution of small-scale heterogeneities remains a significant hurdle to robust and reliable predictive simulations. In particular, while material variability at the mesoscale plays a fundamental role in processes such as material failure, the resolution required to capture mechanisms at this scale is often computationally intractable. Multiscale methods aim to overcome this difficulty through judicious choice of a subscale problem and a robust manner of passing information between scales. One promising approach is the multiscale finite element method, which increases the fidelity of macroscale simulations by solving lower-scale problems that produce enriched multiscale basis functions. Here, inmore » this study, we present the first work toward application of the multiscale finite element method to the nonlocal peridynamic theory of solid mechanics. This is achieved within the context of a discontinuous Galerkin framework that facilitates the description of material discontinuities and does not assume the existence of spatial derivatives. Analysis of the resulting nonlocal multiscale finite element method is achieved using the ambulant Galerkin method, developed here with sufficient generality to allow for application to multiscale finite element methods for both local and nonlocal models that satisfy minimal assumptions. Finally, we conclude with preliminary results on a mixed-locality multiscale finite element method in which a nonlocal model is applied at the fine scale and a local model at the coarse scale.« less
Nonlocal and Mixed-Locality Multiscale Finite Element Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costa, Timothy B.; Bond, Stephen D.; Littlewood, David J.
In many applications the resolution of small-scale heterogeneities remains a significant hurdle to robust and reliable predictive simulations. In particular, while material variability at the mesoscale plays a fundamental role in processes such as material failure, the resolution required to capture mechanisms at this scale is often computationally intractable. Multiscale methods aim to overcome this difficulty through judicious choice of a subscale problem and a robust manner of passing information between scales. One promising approach is the multiscale finite element method, which increases the fidelity of macroscale simulations by solving lower-scale problems that produce enriched multiscale basis functions. Here, inmore » this study, we present the first work toward application of the multiscale finite element method to the nonlocal peridynamic theory of solid mechanics. This is achieved within the context of a discontinuous Galerkin framework that facilitates the description of material discontinuities and does not assume the existence of spatial derivatives. Analysis of the resulting nonlocal multiscale finite element method is achieved using the ambulant Galerkin method, developed here with sufficient generality to allow for application to multiscale finite element methods for both local and nonlocal models that satisfy minimal assumptions. Finally, we conclude with preliminary results on a mixed-locality multiscale finite element method in which a nonlocal model is applied at the fine scale and a local model at the coarse scale.« less
Alimonti, Luca; Atalla, Noureddine; Berry, Alain; Sgard, Franck
2014-05-01
Modeling complex vibroacoustic systems including poroelastic materials using finite element based methods can be unfeasible for practical applications. For this reason, analytical approaches such as the transfer matrix method are often preferred to obtain a quick estimation of the vibroacoustic parameters. However, the strong assumptions inherent within the transfer matrix method lead to a lack of accuracy in the description of the geometry of the system. As a result, the transfer matrix method is inherently limited to the high frequency range. Nowadays, hybrid substructuring procedures have become quite popular. Indeed, different modeling techniques are typically sought to describe complex vibroacoustic systems over the widest possible frequency range. As a result, the flexibility and accuracy of the finite element method and the efficiency of the transfer matrix method could be coupled in a hybrid technique to obtain a reduction of the computational burden. In this work, a hybrid methodology is proposed. The performances of the method in predicting the vibroacoutic indicators of flat structures with attached homogeneous acoustic treatments are assessed. The results prove that, under certain conditions, the hybrid model allows for a reduction of the computational effort while preserving enough accuracy with respect to the full finite element solution.
Isosemantic rendering of clinical information using formal ontologies and RDF.
Martínez-Costa, Catalina; Bosca, Diego; Legaz-García, Mari Carmen; Tao, Cui; Fernández Breis, Jesualdo Tomás; Schulz, Stefan; Chute, Christopher G
2013-01-01
The generation of a semantic clinical infostructure requires linking ontologies, clinical models and terminologies [1]. Here we describe an approach that would permit data coming from different sources and represented in different standards to be queried in a homogeneous and integrated way. Our assumption is that data providers should be able to agree and share the meaning of the data they want to exchange and to exploit. We will describe how Clinical Element Model (CEM) and OpenEHR datasets can be jointly exploited in Semantic Web environments.
Comparison of NASTRAN analysis with ground vibration results of UH-60A NASA/AEFA test configuration
NASA Technical Reports Server (NTRS)
Idosor, Florentino; Seible, Frieder
1990-01-01
Preceding program flight tests, a ground vibration test and modal test analysis of a UH-60A Black Hawk helicopter was conducted by Sikorsky Aircraft to complement the UH-60A test plan and NASA/ARMY Modern Technology Rotor Airloads Program. The 'NASA/AEFA' shake test configuration was tested for modal frequencies and shapes and compared with its NASTRAN finite element model counterpart to give correlative results. Based upon previous findings, significant differences in modal data existed and were attributed to assumptions regarding the influence of secondary structure contributions in the preliminary NASTRAN modeling. An analysis of an updated finite element model including several secondary structural additions has confirmed that the inclusion of specific secondary components produces a significant effect on modal frequency and free-response shapes and improves correlations at lower frequencies with shake test data.
NASA Astrophysics Data System (ADS)
Mahler, Michael; Gaganidze, Ermile; Aktaa, Jarir
2018-04-01
The experimental observation of anisotropic fracture behaviour of round blank polycrystalline tungsten was simulated using finite element (FE) method in combination with cohesive zone model. Experiments in the past had shown that due to the anisotropic microstructure the fracture toughness varies by factor of about two for different orientations. The reason is the crack propagation direction, which is - in some orientations - not the typical crack propagation direction for mode I fracture. In some directions the crack is not growing perpendicular to the crack opening tensile load. Nevertheless, in the present paper, the microstructure is modelled by FE mesh including cohesive zone elements which mimic grain boundaries (GB). This is based on the assumption that GB's are the weakest links in the structure. The use of the correct parameters to describe the fracture process allows the description of the observed experimental orientation dependent fracture toughness.
Influence of Young's moduli in 3D fluid-structure coupled models of the human cochlea
NASA Astrophysics Data System (ADS)
Böhnke, Frank; Semmelbauer, Sebastian; Marquardt, Torsten
2015-12-01
The acoustic wave propagation in the human cochlea was studied using a tapered box-model with linear assumptions respective to all mechanical parameters. The discretisation and evaluation is conducted by a commercial finite element package (ANSYS). The main difference to former models of the cochlea was the representation of the basilar membrane by a 3D elastic solid. The Young's moduli of this solid were modified to study their influence on the travelling wave. The lymph in the scala vestibuli and scala tympani was represented by a viscous and nearly incompressible fluid finite element approach. Our results show the maximum displacement for f = 2kHz at half of the length of the cochlea in accordance with former experiments. For low frequencies f <200 Hz nearly zero phase shifts were found, whereas for f =1 kHz it reaches values up to -12 cycles depending on the degree of orthotropy.
Local lubrication model for spherical particles within incompressible Navier-Stokes flows.
Lambert, B; Weynans, L; Bergmann, M
2018-03-01
The lubrication forces are short-range hydrodynamic interactions essential to describe suspension of the particles. Usually, they are underestimated in direct numerical simulations of particle-laden flows. In this paper, we propose a lubrication model for a coupled volume penalization method and discrete element method solver that estimates the unresolved hydrodynamic forces and torques in an incompressible Navier-Stokes flow. Corrections are made locally on the surface of the interacting particles without any assumption on the global particle shape. The numerical model has been validated against experimental data and performs as well as existing numerical models that are limited to spherical particles.
Application of Probabilistic Analysis to Aircraft Impact Dynamics
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Padula, Sharon L.; Stockwell, Alan E.
2003-01-01
Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as: geometrically accurate models; human occupant models; and advanced material models to include nonlinear stressstrain behaviors, laminated composites, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the uncertainty in the simulated responses. Several criteria are used to determine that a response surface method is the most appropriate probabilistic approach. The work is extended to compare optimization results with and without probabilistic constraints.
Modeling plasticity by non-continuous deformation
NASA Astrophysics Data System (ADS)
Ben-Shmuel, Yaron; Altus, Eli
2017-10-01
Plasticity and failure theories are still subjects of intense research. Engineering constitutive models on the macroscale which are based on micro characteristics are very much in need. This study is motivated by the observation that continuum assumptions in plasticity in which neighbour material elements are inseparable at all-time are physically impossible, since local detachments, slips and neighbour switching must operate, i.e. non-continuous deformation. Material microstructure is modelled herein by a set of point elements (particles) interacting with their neighbours. Each particle can detach from and/or attach with its neighbours during deformation. Simulations on two- dimensional configurations subjected to uniaxial compression cycle are conducted. Stochastic heterogeneity is controlled by a single "disorder" parameter. It was found that (a) macro response resembles typical elasto-plastic behaviour; (b) plastic energy is proportional to the number of detachments; (c) residual plastic strain is proportional to the number of attachments, and (d) volume is preserved, which is consistent with macro plastic deformation. Rigid body displacements of local groups of elements are also observed. Higher disorder decreases the macro elastic moduli and increases plastic energy. Evolution of anisotropic effects is obtained with no additional parameters.
Finite element formulation of viscoelastic sandwich beams using fractional derivative operators
NASA Astrophysics Data System (ADS)
Galucio, A. C.; Deü, J.-F.; Ohayon, R.
This paper presents a finite element formulation for transient dynamic analysis of sandwich beams with embedded viscoelastic material using fractional derivative constitutive equations. The sandwich configuration is composed of a viscoelastic core (based on Timoshenko theory) sandwiched between elastic faces (based on Euler-Bernoulli assumptions). The viscoelastic model used to describe the behavior of the core is a four-parameter fractional derivative model. Concerning the parameter identification, a strategy to estimate the fractional order of the time derivative and the relaxation time is outlined. Curve-fitting aspects are focused, showing a good agreement with experimental data. In order to implement the viscoelastic model into the finite element formulation, the Grünwald definition of the fractional operator is employed. To solve the equation of motion, a direct time integration method based on the implicit Newmark scheme is used. One of the particularities of the proposed algorithm lies in the storage of displacement history only, reducing considerably the numerical efforts related to the non-locality of fractional operators. After validations, numerical applications are presented in order to analyze truncation effects (fading memory phenomena) and solution convergence aspects.
Rapid Discovery of Tribological Materials with Improved Performance Using Materials Informatics
2014-03-10
of New Solid State Lubricants The recursive portioning model illustrated in Fig. 3 has been applied to about 500 compounds from the FileMakerPro...neighboring cation. Based on this assumption, the large cationic charge of mineral compounds indicates the number of anions tends to be larger than the...The formation of bond types is highly dependent on the difference of electronegativity (EN) between the two elements in the compound . For instance
Black-Litterman model on non-normal stock return (Case study four banks at LQ-45 stock index)
NASA Astrophysics Data System (ADS)
Mahrivandi, Rizki; Noviyanti, Lienda; Setyanto, Gatot Riwi
2017-03-01
The formation of the optimal portfolio is a method that can help investors to minimize risks and optimize profitability. One model for the optimal portfolio is a Black-Litterman (BL) model. BL model can incorporate an element of historical data and the views of investors to form a new prediction about the return of the portfolio as a basis for preparing the asset weighting models. BL model has two fundamental problems, the assumption of normality and estimation parameters on the market Bayesian prior framework that does not from a normal distribution. This study provides an alternative solution where the modelling of the BL model stock returns and investor views from non-normal distribution.
Alimonti, Luca; Atalla, Noureddine; Berry, Alain; Sgard, Franck
2015-02-01
Practical vibroacoustic systems involve passive acoustic treatments consisting of highly dissipative media such as poroelastic materials. The numerical modeling of such systems at low to mid frequencies typically relies on substructuring methodologies based on finite element models. Namely, the master subsystems (i.e., structural and acoustic domains) are described by a finite set of uncoupled modes, whereas condensation procedures are typically preferred for the acoustic treatments. However, although accurate, such methodology is computationally expensive when real life applications are considered. A potential reduction of the computational burden could be obtained by approximating the effect of the acoustic treatment on the master subsystems without introducing physical degrees of freedom. To do that, the treatment has to be assumed homogeneous, flat, and of infinite lateral extent. Under these hypotheses, simple analytical tools like the transfer matrix method can be employed. In this paper, a hybrid finite element-transfer matrix methodology is proposed. The impact of the limiting assumptions inherent within the analytical framework are assessed for the case of plate-cavity systems involving flat and homogeneous acoustic treatments. The results prove that the hybrid model can capture the qualitative behavior of the vibroacoustic system while reducing the computational effort.
Modeling of heterogeneous elastic materials by the multiscale hp-adaptive finite element method
NASA Astrophysics Data System (ADS)
Klimczak, Marek; Cecot, Witold
2018-01-01
We present an enhancement of the multiscale finite element method (MsFEM) by combining it with the hp-adaptive FEM. Such a discretization-based homogenization technique is a versatile tool for modeling heterogeneous materials with fast oscillating elasticity coefficients. No assumption on periodicity of the domain is required. In order to avoid direct, so-called overkill mesh computations, a coarse mesh with effective stiffness matrices is used and special shape functions are constructed to account for the local heterogeneities at the micro resolution. The automatic adaptivity (hp-type at the macro resolution and h-type at the micro resolution) increases efficiency of computation. In this paper details of the modified MsFEM are presented and a numerical test performed on a Fichera corner domain is presented in order to validate the proposed approach.
Applying ecological models to communities of genetic elements: the case of neutral theory.
Linquist, Stefan; Cottenie, Karl; Elliott, Tyler A; Saylor, Brent; Kremer, Stefan C; Gregory, T Ryan
2015-07-01
A promising recent development in molecular biology involves viewing the genome as a mini-ecosystem, where genetic elements are compared to organisms and the surrounding cellular and genomic structures are regarded as the local environment. Here, we critically evaluate the prospects of ecological neutral theory (ENT), a popular model in ecology, as it applies at the genomic level. This assessment requires an overview of the controversy surrounding neutral models in community ecology. In particular, we discuss the limitations of using ENT both as an explanation of community dynamics and as a null hypothesis. We then analyse a case study in which ENT has been applied to genomic data. Our central finding is that genetic elements do not conform to the requirements of ENT once its assumptions and limitations are made explicit. We further compare this genome-level application of ENT to two other, more familiar approaches in genomics that rely on neutral mechanisms: Kimura's molecular neutral theory and Lynch's mutational-hazard model. Interestingly, this comparison reveals that there are two distinct concepts of neutrality associated with these models, which we dub 'fitness neutrality' and 'competitive neutrality'. This distinction helps to clarify the various roles for neutral models in genomics, for example in explaining the evolution of genome size. © 2015 John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Goldhirsh, J.
1979-01-01
Cumulative rain fade statistics are used by space communications engineers to establish transmitter power and receiver sensitivities for systems operating under various geometries, climates, and radio frequencies. Space-diversity performance criteria are also of interest. This work represents a review, in which are examined the many elements involved in the employment of single nonattenuating frequency radars for arriving at the desired information. The elements examined include radar techniques and requirements, phenomenological assumptions, path attenuation formulations and procedures, as well as error budgeting and calibration analysis. Included are the pertinent results of previous investigators who have used radar for rain-attenuation modeling. Suggestions are made for improving present methods.
A Probabilistic Approach to Model Update
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Voracek, David F.
2001-01-01
Finite element models are often developed for load validation, structural certification, response predictions, and to study alternate design concepts. In rare occasions, models developed with a nominal set of parameters agree with experimental data without the need to update parameter values. Today, model updating is generally heuristic and often performed by a skilled analyst with in-depth understanding of the model assumptions. Parameter uncertainties play a key role in understanding the model update problem and therefore probabilistic analysis tools, developed for reliability and risk analysis, may be used to incorporate uncertainty in the analysis. In this work, probability analysis (PA) tools are used to aid the parameter update task using experimental data and some basic knowledge of potential error sources. Discussed here is the first application of PA tools to update parameters of a finite element model for a composite wing structure. Static deflection data at six locations are used to update five parameters. It is shown that while prediction of individual response values may not be matched identically, the system response is significantly improved with moderate changes in parameter values.
Making Predictions about Chemical Reactivity: Assumptions and Heuristics
ERIC Educational Resources Information Center
Maeyer, Jenine; Talanquer, Vicente
2013-01-01
Diverse implicit cognitive elements seem to support but also constrain reasoning in different domains. Many of these cognitive constraints can be thought of as either implicit assumptions about the nature of things or reasoning heuristics for decision-making. In this study we applied this framework to investigate college students' understanding of…
Extended Glauert tip correction to include vortex rollup effects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maniaci, David; Schmitz, Sven
Wind turbine loads predictions by blade-element momentum theory using the standard tip-loss correction have been shown to over-predict loading near the blade tip in comparison to experimental data. This over-prediction is theorized to be due to the assumption of light rotor loading, inherent in the standard tip-loss correction model of Glauert. A higher- order free-wake method, WindDVE, is used to compute the rollup process of the trailing vortex sheets downstream of wind turbine blades. Results obtained serve an exact correction function to the Glauert tip correction used in blade-element momentum methods. Lastly, it is found that accounting for the effectsmore » of tip vortex rollup within the Glauert tip correction indeed results in improved prediction of blade tip loads computed by blade-element momentum methods.« less
Extended Glauert tip correction to include vortex rollup effects
Maniaci, David; Schmitz, Sven
2016-10-03
Wind turbine loads predictions by blade-element momentum theory using the standard tip-loss correction have been shown to over-predict loading near the blade tip in comparison to experimental data. This over-prediction is theorized to be due to the assumption of light rotor loading, inherent in the standard tip-loss correction model of Glauert. A higher- order free-wake method, WindDVE, is used to compute the rollup process of the trailing vortex sheets downstream of wind turbine blades. Results obtained serve an exact correction function to the Glauert tip correction used in blade-element momentum methods. Lastly, it is found that accounting for the effectsmore » of tip vortex rollup within the Glauert tip correction indeed results in improved prediction of blade tip loads computed by blade-element momentum methods.« less
Further evidence for the EPNT assumption
NASA Technical Reports Server (NTRS)
Greenberger, Daniel M.; Bernstein, Herbert J.; Horne, Michael; Zeilinger, Anton
1994-01-01
We recently proved a theorem extending the Greenberger-Horne-Zeilinger (GHZ) Theorem from multi-particle systems to two-particle systems. This proof depended upon an auxiliary assumption, the EPNT assumption (Emptiness of Paths Not Taken). According to this assumption, if there exists an Einstein-Rosen-Podolsky (EPR) element of reality that determines that a path is empty, then there can be no entity associated with the wave that travels this path (pilot-waves, empty waves, etc.) and reports information to the amplitude, when the paths recombine. We produce some further evidence in support of this assumption, which is certainly true in quantum theory. The alternative is that such a pilot-wave theory would have to violate EPR locality.
Space shuttle launch vehicle aerodynamic uncertainties: Lessons learned
NASA Technical Reports Server (NTRS)
Hamilton, J. T.
1983-01-01
The chronological development and evolution of an uncertainties model which defines the complex interdependency and interaction of the individual Space Shuttle element and component uncertainties for the launch vehicle are presented. Emphasis is placed on user requirements which dictated certain concessions, simplifications, and assumptions in the analytical model. The use of the uncertainty model in the vehicle design process and flight planning support is discussed. The terminology and justification associated with tolerances as opposed to variations are also presented. Comparisons of and conclusions drawn from flight minus predicted data and uncertainties are given. Lessons learned from the Space Shuttle program concerning aerodynamic uncertainties are examined.
A high fidelity real-time simulation of a small turboshaft engine
NASA Technical Reports Server (NTRS)
Ballin, Mark G.
1988-01-01
A high-fidelity component-type model and real-time digital simulation of the General Electric T700-GE-700 turboshaft engine were developed for use with current generation real-time blade-element rotor helicopter simulations. A control system model based on the specification fuel control system used in the UH-60A Black Hawk helicopter is also presented. The modeling assumptions and real-time digital implementation methods particular to the simulation of small turboshaft engines are described. The validity of the simulation is demonstrated by comparison with analysis-oriented simulations developed by the manufacturer, available test data, and flight-test time histories.
Payload accommodation and development planning tools - A Desktop Resource Leveling Model (DRLM)
NASA Technical Reports Server (NTRS)
Hilchey, John D.; Ledbetter, Bobby; Williams, Richard C.
1989-01-01
The Desktop Resource Leveling Model (DRLM) has been developed as a tool to rapidly structure and manipulate accommodation, schedule, and funding profiles for any kind of experiments, payloads, facilities, and flight systems or other project hardware. The model creates detailed databases describing 'end item' parameters, such as mass, volume, power requirements or costs and schedules for payload, subsystem, or flight system elements. It automatically spreads costs by calendar quarters and sums costs or accommodation parameters by total project, payload, facility, payload launch, or program phase. Final results can be saved or printed out, automatically documenting all assumptions, inputs, and defaults.
Assessment and Computerized Modeling of the Environmental Deposition of Military Smokes
1990-10-05
assumption of randomness implies that past knowl- I edge has no bearing on the occurrence of any f, ture event1, the probability distribution of finding...of these levels, the wind speed was measured with a Gill three-cup anemometer. This anemometer consists of a vertical bearing -mounted spindle with...first class of instruments we have the ý-gage, the piezoelectric microbalance, and the tapered element oscillating microbalance. Other types of real-time
Quantifying Wrinkle Features of Thin Membrane Structures
NASA Technical Reports Server (NTRS)
Jacobson, Mindy B.; Iwasa, Takashi; Naton, M. C.
2004-01-01
For future micro-systems utilizing membrane based structures, quantified predictions of wrinkling behavior in terms of amplitude, angle and wavelength are needed to optimize the efficiency and integrity of such structures, as well as their associated control systems. For numerical analyses performed in the past, limitations on the accuracy of membrane distortion simulations have often been related to the assumptions made. This work demonstrates that critical assumptions include: effects of gravity, supposed initial or boundary conditions, and the type of element used to model the membrane. In this work, a 0.2 m x 02 m membrane is treated as a structural material with non-negligible bending stiffness. Finite element modeling is used to simulate wrinkling behavior due to a constant applied in-plane shear load. Membrane thickness, gravity effects, and initial imperfections with respect to flatness were varied in numerous nonlinear analysis cases. Significant findings include notable variations in wrinkle modes for thickness in the range of 50 microns to 1000 microns, which also depend on the presence of an applied gravity field. However, it is revealed that relationships between overall strain energy density and thickness for cases with differing initial conditions are independent of assumed initial conditions. In addition, analysis results indicate that the relationship between wrinkle amplitude scale (W/t) and structural scale (L/t) is independent of the nonlinear relationship between thickness and stiffness.
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; Wang, Xiao-Yen; Chow, Chuen-Yen
1998-01-01
A new high resolution and genuinely multidimensional numerical method for solving conservation laws is being, developed. It was designed to avoid the limitations of the traditional methods. and was built from round zero with extensive physics considerations. Nevertheless, its foundation is mathmatically simple enough that one can build from it a coherent, robust. efficient and accurate numerical framework. Two basic beliefs that set the new method apart from the established methods are at the core of its development. The first belief is that, in order to capture physics more efficiently and realistically, the modeling, focus should be placed on the original integral form of the physical conservation laws, rather than the differential form. The latter form follows from the integral form under the additional assumption that the physical solution is smooth, an assumption that is difficult to realize numerically in a region of rapid chance. such as a boundary layer or a shock. The second belief is that, with proper modeling of the integral and differential forms themselves, the resulting, numerical solution should automatically be consistent with the properties derived front the integral and differential forms, e.g., the jump conditions across a shock and the properties of characteristics. Therefore a much simpler and more robust method can be developed by not using the above derived properties explicitly.
Environmental compatibility of closed landfills - assessing future pollution hazards.
Laner, David; Fellner, Johann; Brunner, Paul H
2011-01-01
Municipal solid waste landfills need to be managed after closure. This so-called aftercare comprises the treatment and monitoring of residual emissions as well as the maintenance and control of landfill elements. The measures can be terminated when a landfill does not pose a threat to the environment any more. Consequently, the evaluation of landfill environmental compatibility includes an estimation of future pollution hazards as well as an assessment of the vulnerability of the affected environment. An approach to assess future emission rates is presented and discussed in view of long-term environmental compatibility. The suggested method consists (a) of a continuous model to predict emissions under the assumption of constant landfill conditions, and (b) different scenarios to evaluate the effects of changing conditions within and around the landfill. The model takes into account the actual status of the landfill, hence different methods to gain information about landfill characteristics have to be applied. Finally, assumptions, uncertainties, and limitations of the methodology are discussed, and the need for future research is outlined.
Indentation-derived elastic modulus of multilayer thin films: Effect of unloading induced plasticity
Jamison, Ryan Dale; Shen, Yu -Lin
2015-08-13
Nanoindentation is useful for evaluating the mechanical properties, such as elastic modulus, of multilayer thin film materials. A fundamental assumption in the derivation of the elastic modulus from nanoindentation is that the unloading process is purely elastic. In this work, the validity of elastic assumption as it applies to multilayer thin films is studied using the finite element method. The elastic modulus and hardness from the model system are compared to experimental results to show validity of the model. Plastic strain is shown to increase in the multilayer system during the unloading process. Additionally, the indentation-derived modulus of a monolayermore » material shows no dependence on unloading plasticity while the modulus of the multilayer system is dependent on unloading-induced plasticity. Lastly, the cyclic behavior of the multilayer thin film is studied in relation to the influence of unloading-induced plasticity. Furthermore, it is found that several cycles are required to minimize unloading-induced plasticity.« less
Standard cost elements for technology programs
NASA Technical Reports Server (NTRS)
Christensen, Carisa B.; Wagenfuehrer, Carl
1992-01-01
The suitable structure for an effective and accurate cost estimate for general purposes is discussed in the context of a NASA technology program. Cost elements are defined for research, management, and facility-construction portions of technology programs. Attention is given to the mechanisms for insuring the viability of spending programs, and the need for program managers is established for effecting timely fund disbursement. Formal, structures, and intuitive techniques are discussed for cost-estimate development, and cost-estimate defensibility can be improved with increased documentation. NASA policies for cash management are examined to demonstrate the importance of the ability to obligate funds and the ability to cost contracted funds. The NASA approach to consistent cost justification is set forth with a list of standard cost-element definitions. The cost elements reflect the three primary concerns of cost estimates: the identification of major assumptions, the specification of secondary analytic assumptions, and the status of program factors.
42 CFR 51c.203 - Project elements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Project elements. 51c.203 Section 51c.203 Public... SERVICES Grants for Planning and Developing Community Health Centers § 51c.203 Project elements. A project... community health center and the gradual assumption of operational status of the project so that the project...
42 CFR 56.203 - Project elements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Project elements. 56.203 Section 56.203 Public... SERVICES Grants for Planning and Developing Migrant Health Centers § 56.203 Project elements. A project for... gradual assumption of operational status of the project so that the project will, in the judgment of the...
42 CFR 56.203 - Project elements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 1 2014-10-01 2014-10-01 false Project elements. 56.203 Section 56.203 Public... SERVICES Grants for Planning and Developing Migrant Health Centers § 56.203 Project elements. A project for... gradual assumption of operational status of the project so that the project will, in the judgment of the...
42 CFR 56.203 - Project elements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 1 2013-10-01 2013-10-01 false Project elements. 56.203 Section 56.203 Public... SERVICES Grants for Planning and Developing Migrant Health Centers § 56.203 Project elements. A project for... gradual assumption of operational status of the project so that the project will, in the judgment of the...
42 CFR 56.203 - Project elements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 1 2011-10-01 2011-10-01 false Project elements. 56.203 Section 56.203 Public... SERVICES Grants for Planning and Developing Migrant Health Centers § 56.203 Project elements. A project for... gradual assumption of operational status of the project so that the project will, in the judgment of the...
42 CFR 51c.203 - Project elements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 1 2014-10-01 2014-10-01 false Project elements. 51c.203 Section 51c.203 Public... SERVICES Grants for Planning and Developing Community Health Centers § 51c.203 Project elements. A project... community health center and the gradual assumption of operational status of the project so that the project...
42 CFR 51c.203 - Project elements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 1 2011-10-01 2011-10-01 false Project elements. 51c.203 Section 51c.203 Public... SERVICES Grants for Planning and Developing Community Health Centers § 51c.203 Project elements. A project... community health center and the gradual assumption of operational status of the project so that the project...
42 CFR 51c.203 - Project elements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 1 2012-10-01 2012-10-01 false Project elements. 51c.203 Section 51c.203 Public... SERVICES Grants for Planning and Developing Community Health Centers § 51c.203 Project elements. A project... community health center and the gradual assumption of operational status of the project so that the project...
42 CFR 51c.203 - Project elements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 1 2013-10-01 2013-10-01 false Project elements. 51c.203 Section 51c.203 Public... SERVICES Grants for Planning and Developing Community Health Centers § 51c.203 Project elements. A project... community health center and the gradual assumption of operational status of the project so that the project...
42 CFR 56.203 - Project elements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 1 2012-10-01 2012-10-01 false Project elements. 56.203 Section 56.203 Public... SERVICES Grants for Planning and Developing Migrant Health Centers § 56.203 Project elements. A project for... gradual assumption of operational status of the project so that the project will, in the judgment of the...
UFO - The Universal FEYNRULES Output
NASA Astrophysics Data System (ADS)
Degrande, Céline; Duhr, Claude; Fuks, Benjamin; Grellscheid, David; Mattelaer, Olivier; Reiter, Thomas
2012-06-01
We present a new model format for automatized matrix-element generators, the so-called Universal FEYNRULES Output (UFO). The format is universal in the sense that it features compatibility with more than one single generator and is designed to be flexible, modular and agnostic of any assumption such as the number of particles or the color and Lorentz structures appearing in the interaction vertices. Unlike other model formats where text files need to be parsed, the information on the model is encoded into a PYTHON module that can easily be linked to other computer codes. We then describe an interface for the MATHEMATICA package FEYNRULES that allows for an automatic output of models in the UFO format.
Single-phase power distribution system power flow and fault analysis
NASA Technical Reports Server (NTRS)
Halpin, S. M.; Grigsby, L. L.
1992-01-01
Alternative methods for power flow and fault analysis of single-phase distribution systems are presented. The algorithms for both power flow and fault analysis utilize a generalized approach to network modeling. The generalized admittance matrix, formed using elements of linear graph theory, is an accurate network model for all possible single-phase network configurations. Unlike the standard nodal admittance matrix formulation algorithms, the generalized approach uses generalized component models for the transmission line and transformer. The standard assumption of a common node voltage reference point is not required to construct the generalized admittance matrix. Therefore, truly accurate simulation results can be obtained for networks that cannot be modeled using traditional techniques.
Portable Life Support Subsystem Thermal Hydraulic Performance Analysis
NASA Technical Reports Server (NTRS)
Barnes, Bruce; Pinckney, John; Conger, Bruce
2010-01-01
This paper presents the current state of the thermal hydraulic modeling efforts being conducted for the Constellation Space Suit Element (CSSE) Portable Life Support Subsystem (PLSS). The goal of these efforts is to provide realistic simulations of the PLSS under various modes of operation. The PLSS thermal hydraulic model simulates the thermal, pressure, flow characteristics, and human thermal comfort related to the PLSS performance. This paper presents modeling approaches and assumptions as well as component model descriptions. Results from the models are presented that show PLSS operations at steady-state and transient conditions. Finally, conclusions and recommendations are offered that summarize results, identify PLSS design weaknesses uncovered during review of the analysis results, and propose areas for improvement to increase model fidelity and accuracy.
NASA Astrophysics Data System (ADS)
Wang, W.; Liu, J.
2016-12-01
Forward modelling is the general way to obtain responses of geoelectrical structures. Field investigators might find it useful for planning surveys and choosing optimal electrode configurations with respect to their targets. During the past few decades much effort has been put into the development of numerical forward codes, such as integral equation method, finite difference method and finite element method. Nowadays, most researchers prefer the finite element method (FEM) for its flexible meshing scheme, which can handle models with complex geometry. Resistivity Modelling with commercial sofewares such as ANSYS and COMSOL is convenient, but like working with a black box. Modifying the existed codes or developing new codes is somehow a long period. We present a new way to obtain resistivity forward modelling codes quickly, which is based on the commercial sofeware FEPG (Finite element Program Generator). Just with several demanding scripts, FEPG could generate FORTRAN program framework which can easily be altered to adjust our targets. By supposing the electric potential is quadratic in each element of a two-layer model, we obtain quite accurate results with errors less than 1%, while more than 5% errors could appear by linear FE codes. The anisotropic half-space model is supposed to concern vertical distributed fractures. The measured apparent resistivities along the fractures are bigger than results from its orthogonal direction, which are opposite of the true resistivities. Interpretation could be misunderstood if this anisotropic paradox is ignored. The technique we used can obtain scientific codes in a short time. The generated powerful FORTRAN codes could reach accurate results by higher-order assumption and can handle anisotropy to make better interpretations. The method we used could be expand easily to other domain where FE codes are needed.
Source Pulse Estimation of Mine Shock by Blind Deconvolution
NASA Astrophysics Data System (ADS)
Makowski, R.
The objective of seismic signal deconvolution is to extract from the signal information concerning the rockmass or the signal in the source of the shock. In the case of blind deconvolution, we have to extract information regarding both quantities. Many methods of deconvolution made use of in prospective seismology were found to be of minor utility when applied to shock-induced signals recorded in the mines of the Lubin Copper District. The lack of effectiveness should be attributed to the inadequacy of the model on which the methods are based, with respect to the propagation conditions for that type of signal. Each of the blind deconvolution methods involves a number of assumptions; hence, only if these assumptions are fulfilled, we may expect reliable results.Consequently, we had to formulate a different model for the signals recorded in the copper mines of the Lubin District. The model is based on the following assumptions: (1) The signal emitted by the sh ock source is a short-term signal. (2) The signal transmitting system (rockmass) constitutes a parallel connection of elementary systems. (3) The elementary systems are of resonant type. Such a model seems to be justified by the geological structure as well as by the positions of the shock foci and seismometers. The results of time-frequency transformation also support the dominance of resonant-type propagation.Making use of the model, a new method for the blind deconvolution of seismic signals has been proposed. The adequacy of the new model, as well as the efficiency of the proposed method, has been confirmed by the results of blind deconvolution. The slight approximation errors obtained with a small number of approximating elements additionally corroborate the adequacy of the model.
Exploration decisions and firms in the mineral industries
Attanasi, E.D.
1981-01-01
The purpose of this paper is to demonstrate how physical characteristics of deposits and results of past exploration enter future exploration decisions. A proposed decision model is presented that is consistent with a set of primitive probabilistic assumptions associated with deposit size distributions and discoverability. Analysis of optimal field exploration strategy showed the likely firm responses to alternative exploration taxes and effects on the distribution of future discoveries. Examination of the probabilistic elements of the decision model indicates that changes in firm expectations associated with the distribution of deposits cannot be totally offset by changes in economic variables. ?? 1981.
Vande Geest, Jonathan P; Simon, B R; Rigby, Paul H; Newberg, Tyler P
2011-04-01
Finite element models (FEMs) including characteristic large deformations in highly nonlinear materials (hyperelasticity and coupled diffusive/convective transport of neutral mobile species) will allow quantitative study of in vivo tissues. Such FEMs will provide basic understanding of normal and pathological tissue responses and lead to optimization of local drug delivery strategies. We present a coupled porohyperelastic mass transport (PHEXPT) finite element approach developed using a commercially available ABAQUS finite element software. The PHEXPT transient simulations are based on sequential solution of the porohyperelastic (PHE) and mass transport (XPT) problems where an Eulerian PHE FEM is coupled to a Lagrangian XPT FEM using a custom-written FORTRAN program. The PHEXPT theoretical background is derived in the context of porous media transport theory and extended to ABAQUS finite element formulations. The essential assumptions needed in order to use ABAQUS are clearly identified in the derivation. Representative benchmark finite element simulations are provided along with analytical solutions (when appropriate). These simulations demonstrate the differences in transient and steady state responses including finite deformations, total stress, fluid pressure, relative fluid, and mobile species flux. A detailed description of important model considerations (e.g., material property functions and jump discontinuities at material interfaces) is also presented in the context of finite deformations. The ABAQUS-based PHEXPT approach enables the use of the available ABAQUS capabilities (interactive FEM mesh generation, finite element libraries, nonlinear material laws, pre- and postprocessing, etc.). PHEXPT FEMs can be used to simulate the transport of a relatively large neutral species (negligible osmotic fluid flux) in highly deformable hydrated soft tissues and tissue-engineered materials.
NASA Astrophysics Data System (ADS)
Oh, Sehyeong; Lee, Boogeon; Park, Hyungmin; Choi, Haecheon
2017-11-01
We investigate a hovering rhinoceros beetle using numerical simulation and blade element theory. Numerical simulations are performed using an immersed boundary method. In the simulation, the hindwings are modeled as a rigid flat plate, and three-dimensionally scanned elytra and body are used. The results of simulation indicate that the lift force generated by the hindwings alone is sufficient to support the weight, and the elytra generate negligible lift force. Considering the hindwings only, we present a blade element model based on quasi-steady assumptions to identify the mechanisms of aerodynamic force generation and power expenditure in the hovering flight of a rhinoceros beetle. We show that the results from the present blade element model are in excellent agreement with numerical ones. Based on the current blade element model, we find the optimal wing kinematics minimizing the aerodynamic power requirement using a hybrid optimization algorithm combining a clustering genetic algorithm with a gradient-based optimizer. We show that the optimal wing kinematics reduce the aerodynamic power consumption, generating enough lift force to support the weight. This research was supported by a Grant to Bio-Mimetic Robot Research Center Funded by Defense Acquisition Program Administration, and by Agency for Defense Development (UD130070ID) and NRF-2016R1E1A1A02921549 of the MSIP of Korea.
Dong, Xingjian; Peng, Zhike; Hua, Hongxing; Meng, Guang
2014-01-01
An efficient spectral element (SE) with electric potential degrees of freedom (DOF) is proposed to investigate the static electromechanical responses of a piezoelectric bimorph for its actuator and sensor functions. A sublayer model based on the piecewise linear approximation for the electric potential is used to describe the nonlinear distribution of electric potential through the thickness of the piezoelectric layers. An equivalent single layer (ESL) model based on first-order shear deformation theory (FSDT) is used to describe the displacement field. The Legendre orthogonal polynomials of order 5 are used in the element interpolation functions. The validity and the capability of the present SE model for investigation of global and local responses of the piezoelectric bimorph are confirmed by comparing the present solutions with those obtained from coupled 3-D finite element (FE) analysis. It is shown that, without introducing any higher-order electric potential assumptions, the current method can accurately describe the distribution of the electric potential across the thickness even for a rather thick bimorph. It is revealed that the effect of electric potential is significant when the bimorph is used as sensor while the effect is insignificant when the bimorph is used as actuator, and therefore, the present study may provide a better understanding of the nonlinear induced electric potential for bimorph sensor and actuator. PMID:24561399
Motion analysis study on sensitivity of finite element model of the cervical spine to geometry.
Zafarparandeh, Iman; Erbulut, Deniz U; Ozer, Ali F
2016-07-01
Numerous finite element models of the cervical spine have been proposed, with exact geometry or with symmetric approximation in the geometry. However, few researches have investigated the sensitivity of predicted motion responses to the geometry of the cervical spine. The goal of this study was to evaluate the effect of symmetric assumption on the predicted motion by finite element model of the cervical spine. We developed two finite element models of the cervical spine C2-C7. One model was based on the exact geometry of the cervical spine (asymmetric model), whereas the other was symmetric (symmetric model) about the mid-sagittal plane. The predicted range of motion of both models-main and coupled motions-was compared with published experimental data for all motion planes under a full range of loads. The maximum differences between the asymmetric model and symmetric model predictions for the principal motion were 31%, 78%, and 126% for flexion-extension, right-left lateral bending, and right-left axial rotation, respectively. For flexion-extension and lateral bending, the minimum difference was 0%, whereas it was 2% for axial rotation. The maximum coupled motions predicted by the symmetric model were 1.5° axial rotation and 3.6° lateral bending, under applied lateral bending and axial rotation, respectively. Those coupled motions predicted by the asymmetric model were 1.6° axial rotation and 4° lateral bending, under applied lateral bending and axial rotation, respectively. In general, the predicted motion response of the cervical spine by the symmetric model was in the acceptable range and nonlinearity of the moment-rotation curve for the cervical spine was properly predicted. © IMechE 2016.
A comparison of viscoelastic damping models
NASA Technical Reports Server (NTRS)
Slater, Joseph C.; Belvin, W. Keith; Inman, Daniel J.
1993-01-01
Modern finite element methods (FEM's) enable the precise modeling of mass and stiffness properties in what were in the past overwhelmingly large and complex structures. These models allow the accurate determination of natural frequencies and mode shapes. However, adequate methods for modeling highly damped and high frequency dependent structures did not exist until recently. The most commonly used method, Modal Strain Energy, does not correctly predict complex mode shapes since it is based on the assumption that the mode shapes of a structure are real. Recently, many techniques have been developed which allow the modeling of frequency dependent damping properties of materials in a finite element compatible form. Two of these methods, the Golla-Hughes-McTavish method and the Lesieutre-Mingori method, model the frequency dependent effects by adding coordinates to the existing system thus maintaining the linearity of the model. The third model, proposed by Bagley and Torvik, is based on the Fractional Calculus method and requires fewer empirical parameters to model the frequency dependence at the expense of linearity of the governing equations. This work examines the Modal Strain Energy, Golla-Hughes-McTavish and Bagley and Torvik models and compares them to determine the plausibility of using them for modeling viscoelastic damping in large structures.
Böl, Markus; Kruse, Roland; Ehret, Alexander E; Leichsenring, Kay; Siebert, Tobias
2012-10-11
Due to the increasing developments in modelling of biological material, adequate parameter identification techniques are urgently needed. The majority of recent contributions on passive muscle tissue identify material parameters solely by comparing characteristic, compressive stress-stretch curves from experiments and simulation. In doing so, different assumptions concerning e.g. the sample geometry or the degree of friction between the sample and the platens are required. In most cases these assumptions are grossly simplified leading to incorrect material parameters. In order to overcome such oversimplifications, in this paper a more reliable parameter identification technique is presented: we use the inverse finite element method (iFEM) to identify the optimal parameter set by comparison of the compressive stress-stretch response including the realistic geometries of the samples and the presence of friction at the compressed sample faces. Moreover, we judge the quality of the parameter identification by comparing the simulated and experimental deformed shapes of the samples. Besides this, the study includes a comprehensive set of compressive stress-stretch data on rabbit soleus muscle and the determination of static friction coefficients between muscle and PTFE. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hulikal, Srivatsan; Lapusta, Nadia; Bhattacharya, Kaushik
2018-07-01
Friction in static and sliding contact of rough surfaces is important in numerous physical phenomena. We seek to understand macroscopically observed static and sliding contact behavior as the collective response of a large number of microscopic asperities. To that end, we build on Hulikal et al. (2015) and develop an efficient numerical framework that can be used to investigate how the macroscopic response of multiple frictional contacts depends on long-range elastic interactions, different constitutive assumptions about the deforming contacts and their local shear resistance, and surface roughness. We approximate the contact between two rough surfaces as that between a regular array of discrete deformable elements attached to a elastic block and a rigid rough surface. The deformable elements are viscoelastic or elasto/viscoplastic with a range of relaxation times, and the elastic interaction between contacts is long-range. We find that the model reproduces the main macroscopic features of evolution of contact and friction for a range of constitutive models of the elements, suggesting that macroscopic frictional response is robust with respect to the microscopic behavior. Viscoelasticity/viscoplasticity contributes to the increase of friction with contact time and leads to a subtle history dependence. Interestingly, long-range elastic interactions only change the results quantitatively compared to the meanfield response. The developed numerical framework can be used to study how specific observed macroscopic behavior depends on the microscale assumptions. For example, we find that sustained increase in the static friction coefficient during long hold times suggests viscoelastic response of the underlying material with multiple relaxation time scales. We also find that the experimentally observed proportionality of the direct effect in velocity jump experiments to the logarithm of the velocity jump points to a complex material-dependent shear resistance at the microscale.
Incorporating concentration dependence in stable isotope mixing models.
Phillips, Donald L; Koch, Paul L
2002-01-01
Stable isotopes are often used as natural labels to quantify the contributions of multiple sources to a mixture. For example, C and N isotopic signatures can be used to determine the fraction of three food sources in a consumer's diet. The standard dual isotope, three source linear mixing model assumes that the proportional contribution of a source to a mixture is the same for both elements (e.g., C, N). This may be a reasonable assumption if the concentrations are similar among all sources. However, one source is often particularly rich or poor in one element (e.g., N), which logically leads to a proportionate increase or decrease in the contribution of that source to the mixture for that element relative to the other element (e.g., C). We have developed a concentration-weighted linear mixing model, which assumes that for each element, a source's contribution is proportional to the contributed mass times the elemental concentration in that source. The model is outlined for two elements and three sources, but can be generalized to n elements and n+1 sources. Sensitivity analyses for C and N in three sources indicated that varying the N concentration of just one source had large and differing effects on the estimated source contributions of mass, C, and N. The same was true for a case study of bears feeding on salmon, moose, and N-poor plants. In this example, the estimated biomass contribution of salmon from the concentration-weighted model was markedly less than the standard model estimate. Application of the model to a captive feeding study of captive mink fed on salmon, lean beef, and C-rich, N-poor beef fat reproduced very closely the known dietary proportions, whereas the standard model failed to yield a set of positive source proportions. Use of this concentration-weighted model is recommended whenever the elemental concentrations vary substantially among the sources, which may occur in a variety of ecological and geochemical applications of stable isotope analysis. Possible examples besides dietary and food web studies include stable isotope analysis of water sources in soils, plants, or water bodies; geological sources for soils or marine systems; decomposition and soil organic matter dynamics, and tracing animal migration patterns. A spreadsheet for performing the calculations for this model is available at http://www.epa.gov/wed/pages/models.htm.
NASA Astrophysics Data System (ADS)
Liu, Qimao
2018-02-01
This paper proposes an assumption that the fibre is elastic material and polymer matrix is viscoelastic material so that the energy dissipation depends only on the polymer matrix in dynamic response process. The damping force vectors in frequency and time domains, of FRP (Fibre-Reinforced Polymer matrix) laminated composite plates, are derived based on this assumption. The governing equations of FRP laminated composite plates are formulated in both frequency and time domains. The direct inversion method and direct time integration method for nonviscously damped systems are employed to solve the governing equations and achieve the dynamic responses in frequency and time domains, respectively. The computational procedure is given in detail. Finally, dynamic responses (frequency responses with nonzero and zero initial conditions, free vibration, forced vibrations with nonzero and zero initial conditions) of a FRP laminated composite plate are computed using the proposed methodology. The proposed methodology in this paper is easy to be inserted into the commercial finite element analysis software. The proposed assumption, based on the theory of material mechanics, needs to be further proved by experiment technique in the future.
Yang, X I A; Meneveau, C
2017-04-13
In recent years, there has been growing interest in large-eddy simulation (LES) modelling of atmospheric boundary layers interacting with arrays of wind turbines on complex terrain. However, such terrain typically contains geometric features and roughness elements reaching down to small scales that typically cannot be resolved numerically. Thus subgrid-scale models for the unresolved features of the bottom roughness are needed for LES. Such knowledge is also required to model the effects of the ground surface 'underneath' a wind farm. Here we adapt a dynamic approach to determine subgrid-scale roughness parametrizations and apply it for the case of rough surfaces composed of cuboidal elements with broad size distributions, containing many scales. We first investigate the flow response to ground roughness of a few scales. LES with the dynamic roughness model which accounts for the drag of unresolved roughness is shown to provide resolution-independent results for the mean velocity distribution. Moreover, we develop an analytical roughness model that accounts for the sheltering effects of large-scale on small-scale roughness elements. Taking into account the shading effect, constraints from fundamental conservation laws, and assumptions of geometric self-similarity, the analytical roughness model is shown to provide analytical predictions that agree well with roughness parameters determined from LES.This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Author(s).
Toward transient finite element simulation of thermal deformation of machine tools in real-time
NASA Astrophysics Data System (ADS)
Naumann, Andreas; Ruprecht, Daniel; Wensch, Joerg
2018-01-01
Finite element models without simplifying assumptions can accurately describe the spatial and temporal distribution of heat in machine tools as well as the resulting deformation. In principle, this allows to correct for displacements of the Tool Centre Point and enables high precision manufacturing. However, the computational cost of FE models and restriction to generic algorithms in commercial tools like ANSYS prevents their operational use since simulations have to run faster than real-time. For the case where heat diffusion is slow compared to machine movement, we introduce a tailored implicit-explicit multi-rate time stepping method of higher order based on spectral deferred corrections. Using the open-source FEM library DUNE, we show that fully coupled simulations of the temperature field are possible in real-time for a machine consisting of a stock sliding up and down on rails attached to a stand.
A Mixed Multi-Field Finite Element Formulation for Thermopiezoelectric Composite Shells
NASA Technical Reports Server (NTRS)
Lee, Ho-Jun; Saravanos, Dimitris A.
1999-01-01
Analytical formulations are presented which account for the coupled mechanical, electrical, and thermal response of piezoelectric composite shell structures. A new mixed multi-field laminate theory is developed which combines "single layer" assumptions for the displacements along with layerwise fields for the electric potential and temperature. This laminate theory is formulated using curvilinear coordinates and is based on the principles of linear thermopiezoelectricity. The mechanics have the inherent capability to explicitly model both the active and sensory responses of piezoelectric composite shells in thermal environment. Finite element equations are derived and implemented for an eight-noded shell element. Numerical studies are conducted to investigate both the sensory and active responses of piezoelectric composite shell structures subjected to thermal loads. Results for a cantilevered plate with an attached piezoelectric layer are com- pared with corresponding results from a commercial finite element code and a previously developed program. Additional studies are conducted on a cylindrical shell with an attached piezoelectric layer to demonstrate capabilities to achieve thermal shape control on curved piezoelectric structures.
Factors influencing perceived angular velocity.
Kaiser, M K; Calderone, J B
1991-11-01
The assumption that humans are able to perceive and process angular kinematics is critical to many structure-from-motion and optical flow models. The current studies investigate this sensitivity, and examine several factors likely to influence angular velocity perception. In particular, three factors are considered: (1) the extent to which perceived angular velocity is determined by edge transitions of surface elements, (2) the extent to which angular velocity estimates are influenced by instantaneous linear velocities of surface elements, and (3) whether element-velocity effects are related to three-dimensional (3-D) tangential velocities or to two-dimensional (2-D) image velocities. Edge-transition rate biased angular velocity estimates only when edges were highly salient. Element velocities influenced perceived angular velocity; this bias was related to 2-D image velocity rather than 3-D tangential velocity. Despite these biases, however, judgments were most strongly determined by the true angular velocity. Sensitivity to this higher order motion parameter was surprisingly good, for rotations both in depth (y-axis) and parallel to the line of sight (z-axis).
Fractional Yields Inferred from Halo and Thick Disk Stars
NASA Astrophysics Data System (ADS)
Caimmi, R.
2013-12-01
Linear [Q/H]-[O/H] relations, Q = Na, Mg, Si, Ca, Ti, Cr, Fe, Ni, are inferred from a sample (N=67) of recently studied FGK-type dwarf stars in the solar neighbourhood including different populations (Nissen and Schuster 2010, Ramirez et al. 2012), namely LH (N=24, low-α halo), HH (N=25, high-α halo), KD (N=16, thick disk), and OL (N=2, globular cluster outliers). Regression line slope and intercept estimators and related variance estimators are determined. With regard to the straight line, [Q/H]=a_{Q}[O/H]+b_{Q}, sample stars are displayed along a "main sequence", [Q,O] = [a_{Q},b_{Q},Δ b_{Q}], leaving aside the two OL stars, which, in most cases (e.g. Na), lie outside. The unit slope, a_{Q}=1, implies Q is a primary element synthesised via SNII progenitors in the presence of a universal stellar initial mass function (defined as simple primary element). In this respect, Mg, Si, Ti, show hat a_{Q}=1 within ∓2hatσ_ {hat a_{Q}}; Cr, Fe, Ni, within ∓3hatσ_{hat a_{Q}}; Na, Ca, within ∓ rhatσ_{hat a_{Q}}, r>3. The empirical, differential element abundance distributions are inferred from LH, HH, KD, HA = HH + KD subsamples, where related regression lines represent their theoretical counterparts within the framework of simple MCBR (multistage closed box + reservoir) chemical evolution models. Hence, the fractional yields, hat{p}_{Q}/hat{p}_{O}, are determined and (as an example) a comparison is shown with their theoretical counterparts inferred from SNII progenitor nucleosynthesis under the assumption of a power-law stellar initial mass function. The generalized fractional yields, C_{Q}=Z_{Q}/Z_{O}^{a_{Q}}, are determined regardless of the chemical evolution model. The ratio of outflow to star formation rate is compared for different populations in the framework of simple MCBR models. The opposite situation of element abundance variation entirely due to cosmic scatter is also considered under reasonable assumptions. The related differential element abundance distribution fits to the data, as well as its counterpart inferred in the opposite limit of instantaneous mixing in the presence of chemical evolution, while the latter is preferred for HA subsample.
Theoretical Studies of Spectroscopic Line Mixing in Remote Sensing Applications
NASA Astrophysics Data System (ADS)
Ma, Q.
2015-12-01
The phenomenon of collisional transfer of intensity due to line mixing has an increasing importance for atmospheric monitoring. From a theoretical point of view, all relevant information about the collisional processes is contained in the relaxation matrix where the diagonal elements give half-widths and shifts, and the off-diagonal elements correspond to line interferences. For simple systems such as those consisting of diatom-atom or diatom-diatom, accurate fully quantum calculations based on interaction potentials are feasible. However, fully quantum calculations become unrealistic for more complex systems. On the other hand, the semi-classical Robert-Bonamy (RB) formalism, which has been widely used to calculate half-widths and shifts for decades, fails in calculating the off-diagonal matrix elements. As a result, in order to simulate atmospheric spectra where the effects from line mixing are important, semi-empirical fitting or scaling laws such as the ECS and IOS models are commonly used. Recently, while scrutinizing the development of the RB formalism, we have found that these authors applied the isolated line approximation in their evaluating matrix elements of the Liouville scattering operator given in exponential form. Since the criterion of this assumption is so stringent, it is not valid for many systems of interest in atmospheric applications. Furthermore, it is this assumption that blocks the possibility to calculate the whole relaxation matrix at all. By eliminating this unjustified application, and accurately evaluating matrix elements of the exponential operators, we have developed a more capable formalism. With this new formalism, we are now able not only to reduce uncertainties for calculated half-widths and shifts, but also to remove a once insurmountable obstacle to calculate the whole relaxation matrix. This implies that we can address the line mixing with the semi-classical theory based on interaction potentials between molecular absorber and molecular perturber. We have applied this formalism to address the line mixing for Raman and infrared spectra of molecules such as N2, C2H2, CO2, NH3, and H2O. By carrying out rigorous calculations, our calculated relaxation matrices are in good agreement with both experimental data and results derived from the ECS model.
Seal Analysis for the Ares-I Upper Stage Fuel Tank Manhole Cover
NASA Technical Reports Server (NTRS)
Phillips, Dawn R.; Wingate, Robert J.
2010-01-01
Techniques for studying the performance of Naflex pressure-assisted seals in the Ares-I Upper Stage liquid hydrogen tank manhole cover seal joint are explored. To assess the feasibility of using the identical seal design for the Upper Stage as was used for the Space Shuttle External Tank manhole covers, a preliminary seal deflection analysis using the ABAQUS commercial finite element software is employed. The ABAQUS analyses are performed using three-dimensional symmetric wedge finite element models. This analysis technique is validated by first modeling a heritage External Tank liquid hydrogen tank manhole cover joint and correlating the results to heritage test data. Once the technique is validated, the Upper Stage configuration is modeled. The Upper Stage analyses are performed at 1.4 times the expected pressure to comply with the Constellation Program factor of safety requirement on joint separation. Results from the analyses performed with the External Tank and Upper Stage models demonstrate the effects of several modeling assumptions on the seal deflection. The analyses for Upper Stage show that the integrity of the seal is successfully maintained.
NASA Astrophysics Data System (ADS)
Tang, J.; Riley, W. J.
2015-12-01
Previous studies have identified four major sources of predictive uncertainty in modeling land biogeochemical (BGC) processes: (1) imperfect initial conditions (e.g., assumption of preindustrial equilibrium); (2) imperfect boundary conditions (e.g., climate forcing data); (3) parameterization (type I equifinality); and (4) model structure (type II equifinality). As if that were not enough to cause substantial sleep loss in modelers, we propose here a fifth element of uncertainty that results from implementation ambiguity that occurs when the model's mathematical description is translated into computational code. We demonstrate the implementation ambiguity using the example of nitrogen down regulation, a necessary process in modeling carbon-climate feedbacks. We show that, depending on common land BGC model interpretations of the governing equations for mineral nitrogen, there are three different implementations of nitrogen down regulation. We coded these three implementations in the ACME land model (ALM), and explored how they lead to different preindustrial and contemporary land biogeochemical states and fluxes. We also show how this implementation ambiguity can lead to different carbon-climate feedback estimates across the RCP scenarios. We conclude by suggesting how to avoid such implementation ambiguity in ESM BGC models.
A Dynamic Finite Element Analysis of Human Foot Complex in the Sagittal Plane during Level Walking
Qian, Zhihui; Ren, Lei; Ding, Yun; Hutchinson, John R.; Ren, Luquan
2013-01-01
The objective of this study is to develop a computational framework for investigating the dynamic behavior and the internal loading conditions of the human foot complex during locomotion. A subject-specific dynamic finite element model in the sagittal plane was constructed based on anatomical structures segmented from medical CT scan images. Three-dimensional gait measurements were conducted to support and validate the model. Ankle joint forces and moment derived from gait measurements were used to drive the model. Explicit finite element simulations were conducted, covering the entire stance phase from heel-strike impact to toe-off. The predicted ground reaction forces, center of pressure, foot bone motions and plantar surface pressure showed reasonably good agreement with the gait measurement data over most of the stance phase. The prediction discrepancies can be explained by the assumptions and limitations of the model. Our analysis showed that a dynamic FE simulation can improve the prediction accuracy in the peak plantar pressures at some parts of the foot complex by 10%–33% compared to a quasi-static FE simulation. However, to simplify the costly explicit FE simulation, the proposed model is confined only to the sagittal plane and has a simplified representation of foot structure. The dynamic finite element foot model proposed in this study would provide a useful tool for future extension to a fully muscle-driven dynamic three-dimensional model with detailed representation of all major anatomical structures, in order to investigate the structural dynamics of the human foot musculoskeletal system during normal or even pathological functioning. PMID:24244500
A dynamic finite element analysis of human foot complex in the sagittal plane during level walking.
Qian, Zhihui; Ren, Lei; Ding, Yun; Hutchinson, John R; Ren, Luquan
2013-01-01
The objective of this study is to develop a computational framework for investigating the dynamic behavior and the internal loading conditions of the human foot complex during locomotion. A subject-specific dynamic finite element model in the sagittal plane was constructed based on anatomical structures segmented from medical CT scan images. Three-dimensional gait measurements were conducted to support and validate the model. Ankle joint forces and moment derived from gait measurements were used to drive the model. Explicit finite element simulations were conducted, covering the entire stance phase from heel-strike impact to toe-off. The predicted ground reaction forces, center of pressure, foot bone motions and plantar surface pressure showed reasonably good agreement with the gait measurement data over most of the stance phase. The prediction discrepancies can be explained by the assumptions and limitations of the model. Our analysis showed that a dynamic FE simulation can improve the prediction accuracy in the peak plantar pressures at some parts of the foot complex by 10%-33% compared to a quasi-static FE simulation. However, to simplify the costly explicit FE simulation, the proposed model is confined only to the sagittal plane and has a simplified representation of foot structure. The dynamic finite element foot model proposed in this study would provide a useful tool for future extension to a fully muscle-driven dynamic three-dimensional model with detailed representation of all major anatomical structures, in order to investigate the structural dynamics of the human foot musculoskeletal system during normal or even pathological functioning.
NASA Technical Reports Server (NTRS)
King, James; Nickling, William G.; Gillies, John A.
2005-01-01
The presence of nonerodible elements is well understood to be a reducing factor for soil erosion by wind, but the limits of its protection of the surface and erosion threshold prediction are complicated by the varying geometry, spatial organization, and density of the elements. The predictive capabilities of the most recent models for estimating wind driven particle fluxes are reduced because of the poor representation of the effectiveness of vegetation to reduce wind erosion. Two approaches have been taken to account for roughness effects on sediment transport thresholds. Marticorena and Bergametti (1995) in their dust emission model parameterize the effect of roughness on threshold with the assumption that there is a relationship between roughness density and the aerodynamic roughness length of a surface. Raupach et al. (1993) offer a different approach based on physical modeling of wake development behind individual roughness elements and the partition of the surface stress and the total stress over a roughened surface. A comparison between the models shows the partitioning approach to be a good framework to explain the effect of roughness on entrainment of sediment by wind. Both models provided very good agreement for wind tunnel experiments using solid objects on a nonerodible surface. However, the Marticorena and Bergametti (1995) approach displays a scaling dependency when the difference between the roughness length of the surface and the overall roughness length is too great, while the Raupach et al. (1993) model's predictions perform better owing to the incorporation of the roughness geometry and the alterations to the flow they can cause.
NASA Technical Reports Server (NTRS)
Goldhirsh, J.
1979-01-01
In order to establish transmitter power and receiver sensitivity levels at frequencies above 10 GHz, the designers of earth-satellite telecommunication systems are interested in cumulative rain fade statistics at variable path orientations, elevation angles, climatological regions, and frequencies. They are also interested in establishing optimum space diversity performance parameters. In this work are examined the many elements involved in the employment of single non-attenuating frequency radars for arriving at the desired information. The elements examined include radar techniques and requirements, phenomenological assumptions, path attenation formulations and procedures, as well as error budgeting and calibration analysis. Included are the pertinent results of previous investigators who have used radar for rain attenuation modeling. Suggestions are made for improving present methods.
Compressive strength of delaminated aerospace composites.
Butler, Richard; Rhead, Andrew T; Liu, Wenli; Kontis, Nikolaos
2012-04-28
An efficient analytical model is described which predicts the value of compressive strain below which buckle-driven propagation of delaminations in aerospace composites will not occur. An extension of this efficient strip model which accounts for propagation transverse to the direction of applied compression is derived. In order to provide validation for the strip model a number of laminates were artificially delaminated producing a range of thin anisotropic sub-laminates made up of 0°, ±45° and 90° plies that displayed varied buckling and delamination propagation phenomena. These laminates were subsequently subject to experimental compression testing and nonlinear finite element analysis (FEA) using cohesive elements. Comparison of strip model results with those from experiments indicates that the model can conservatively predict the strain at which propagation occurs to within 10 per cent of experimental values provided (i) the thin-film assumption made in the modelling methodology holds and (ii) full elastic coupling effects do not play a significant role in the post-buckling of the sub-laminate. With such provision, the model was more accurate and produced fewer non-conservative results than FEA. The accuracy and efficiency of the model make it well suited to application in optimum ply-stacking algorithms to maximize laminate strength.
Rohani, S Alireza; Ghomashchi, Soroush; Agrawal, Sumit K; Ladak, Hanif M
2017-03-01
Finite-element models of the tympanic membrane are sensitive to the Young's modulus of the pars tensa. The aim of this work is to estimate the Young's modulus under a different experimental paradigm than currently used on the human tympanic membrane. These additional values could potentially be used by the auditory biomechanics community for building consensus. The Young's modulus of the human pars tensa was estimated through inverse finite-element modelling of an in-situ pressurization experiment. The experiments were performed on three specimens with a custom-built pressurization unit at a quasi-static pressure of 500 Pa. The shape of each tympanic membrane before and after pressurization was recorded using a Fourier transform profilometer. The samples were also imaged using micro-computed tomography to create sample-specific finite-element models. For each sample, the Young's modulus was then estimated by numerically optimizing its value in the finite-element model so simulated pressurized shapes matched experimental data. The estimated Young's modulus values were 2.2 MPa, 2.4 MPa and 2.0 MPa, and are similar to estimates obtained using in-situ single-point indentation testing. The estimates were obtained under the assumptions that the pars tensa is linearly elastic, uniform, isotropic with a thickness of 110 μm, and the estimates are limited to quasi-static loading. Estimates of pars tensa Young's modulus are sensitive to its thickness and inclusion of the manubrial fold. However, they do not appear to be sensitive to optimization initialization, height measurement error, pars flaccida Young's modulus, and tympanic membrane element type (shell versus solid). Copyright © 2017 Elsevier B.V. All rights reserved.
Efficient Computation of Info-Gap Robustness for Finite Element Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stull, Christopher J.; Hemez, Francois M.; Williams, Brian J.
2012-07-05
A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge, from the standpoint of the required computational resources. This is because a robustness function requires the resolution of multiple optimization problems. This report offers anmore » alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatments of the info-gap problems, using the adjoint methodology are outlined in detail, and the latter problem is solved for four separate finite element models. As compared to statistical sampling, the proposed methodology offers highly accurate approximations of info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to include nonlinear systems.« less
Schneider, Arnaud R; Ponthieu, Marie; Cancès, Benjamin; Conreux, Alexandra; Morvan, Xavier; Gommeaux, Maxime; Marin, Béatrice; Benedetti, Marc F
2016-06-01
Trace element (TE) speciation modelling in soil solution is controlled by the assumptions made about the soil solution composition. To evaluate this influence, different assumptions using Visual MINTEQ were tested and compared to measurements of free TE concentrations. The soil column Donnan membrane technique (SC-DMT) was used to estimate the free TE (Cd, Cu, Ni, Pb and Zn) concentrations in six acidic soil solutions. A batch technique using DAX-8 resin was used to fractionate the dissolved organic matter (DOM) into four fractions: humic acids (HA), fulvic acids (FA), hydrophilic acids (Hy) and hydrophobic neutral organic matter (HON). To model TE speciation, particular attention was focused on the hydrous manganese oxides (HMO) and the Hy fraction, ligands not considered in most of the TE speciation modelling studies in soil solution. In this work, the model predictions of free ion activities agree with the experimental results. The knowledge of the FA fraction seems to be very useful, especially in the case of high DOM content, for more accurately representing experimental data. Finally, the role of the manganese oxides and of the Hy fraction on TE speciation was identified and, depending on the physicochemical conditions of the soil solution, should be considered in future studies. Copyright © 2016 Elsevier Ltd. All rights reserved.
Substructure based modeling of nickel single crystals cycled at low plastic strain amplitudes
NASA Astrophysics Data System (ADS)
Zhou, Dong
In this dissertation a meso-scale, substructure-based, composite single crystal model is fully developed from the simple uniaxial model to the 3-D finite element method (FEM) model with explicit substructures and further with substructure evolution parameters, to simulate the completely reversed, strain controlled, low plastic strain amplitude cyclic deformation of nickel single crystals. Rate-dependent viscoplasticity and Armstrong-Frederick type kinematic hardening rules are applied to substructures on slip systems in the model to describe the kinematic hardening behavior of crystals. Three explicit substructure components are assumed in the composite single crystal model, namely "loop patches" and "channels" which are aligned in parallel in a "vein matrix," and persistent slip bands (PSBs) connected in series with the vein matrix. A magnetic domain rotation model is presented to describe the reverse magnetostriction of single crystal nickel. Kinematic hardening parameters are obtained by fitting responses to experimental data in the uniaxial model, and the validity of uniaxial assumption is verified in the 3-D FEM model with explicit substructures. With information gathered from experiments, all control parameters in the model including hardening parameters, volume fraction of loop patches and PSBs, and variation of Young's modulus etc. are correlated to cumulative plastic strain and/or plastic strain amplitude; and the whole cyclic deformation history of single crystal nickel at low plastic strain amplitudes is simulated in the uniaxial model. Then these parameters are implanted in the 3-D FEM model to simulate the formation of PSB bands. A resolved shear stress criterion is set to trigger the formation of PSBs, and stress perturbation in the specimen is obtained by several elements assigned with PSB material properties a priori. Displacement increment, plastic strain amplitude control and overall stress-strain monitor and output are carried out in the user subroutine DISP and URDFIL of ABAQUS, respectively, while constitutive formulations of the FEM model are coded and implemented in UMAT. The results of the simulations are compared to experiments. This model verified the validity of Winter's two-phase model and Taylor's uniform stress assumption, explored substructure evolution and "intrinsic" behavior in substructures and successfully simulated the process of PSB band formation and propagation.
Davis, Hayley; Ritchie, Euan G; Avitabile, Sarah; Doherty, Tim; Nimmo, Dale G
2018-04-01
Fire shapes the composition and functioning of ecosystems globally. In many regions, fire is actively managed to create diverse patch mosaics of fire-ages under the assumption that a diversity of post-fire-age classes will provide a greater variety of habitats, thereby enabling species with differing habitat requirements to coexist, and enhancing species diversity (the pyrodiversity begets biodiversity hypothesis). However, studies provide mixed support for this hypothesis. Here, using termite communities in a semi-arid region of southeast Australia, we test four key assumptions of the pyrodiversity begets biodiversity hypothesis (i) that fire shapes vegetation structure over sufficient time frames to influence species' occurrence, (ii) that animal species are linked to resources that are themselves shaped by fire and that peak at different times since fire, (iii) that species' probability of occurrence or abundance peaks at varying times since fire and (iv) that providing a diversity of fire-ages increases species diversity at the landscape scale. Termite species and habitat elements were sampled in 100 sites across a range of fire-ages, nested within 20 landscapes chosen to represent a gradient of low to high pyrodiversity. We used regression modelling to explore relationships between termites, habitat and fire. Fire affected two habitat elements (coarse woody debris and the cover of woody vegetation) that were associated with the probability of occurrence of three termite species and overall species richness, thus supporting the first two assumptions of the pyrodiversity hypothesis. However, this did not result in those species or species richness being affected by fire history per se. Consequently, landscapes with a low diversity of fire histories had similar numbers of termite species as landscapes with high pyrodiversity. Our work suggests that encouraging a diversity of fire-ages for enhancing termite species richness in this study region is not necessary.
Davis, Hayley; Ritchie, Euan G.; Avitabile, Sarah; Doherty, Tim
2018-01-01
Fire shapes the composition and functioning of ecosystems globally. In many regions, fire is actively managed to create diverse patch mosaics of fire-ages under the assumption that a diversity of post-fire-age classes will provide a greater variety of habitats, thereby enabling species with differing habitat requirements to coexist, and enhancing species diversity (the pyrodiversity begets biodiversity hypothesis). However, studies provide mixed support for this hypothesis. Here, using termite communities in a semi-arid region of southeast Australia, we test four key assumptions of the pyrodiversity begets biodiversity hypothesis (i) that fire shapes vegetation structure over sufficient time frames to influence species' occurrence, (ii) that animal species are linked to resources that are themselves shaped by fire and that peak at different times since fire, (iii) that species’ probability of occurrence or abundance peaks at varying times since fire and (iv) that providing a diversity of fire-ages increases species diversity at the landscape scale. Termite species and habitat elements were sampled in 100 sites across a range of fire-ages, nested within 20 landscapes chosen to represent a gradient of low to high pyrodiversity. We used regression modelling to explore relationships between termites, habitat and fire. Fire affected two habitat elements (coarse woody debris and the cover of woody vegetation) that were associated with the probability of occurrence of three termite species and overall species richness, thus supporting the first two assumptions of the pyrodiversity hypothesis. However, this did not result in those species or species richness being affected by fire history per se. Consequently, landscapes with a low diversity of fire histories had similar numbers of termite species as landscapes with high pyrodiversity. Our work suggests that encouraging a diversity of fire-ages for enhancing termite species richness in this study region is not necessary. PMID:29765661
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kriegler, Elmar; Edmonds, James A.; Hallegatte, Stephane
2014-04-01
The paper presents the concept of shared climate policy assumptions as an important element of the new scenario framework. Shared climate policy assumptions capture key climate policy dimensions such as the type and scale of mitigation and adaptation measures. They are not specified in the socio-economic reference pathways, and therefore introduce an important third dimension to the scenario matrix architecture. Climate policy assumptions will have to be made in any climate policy scenario, and can have a significant impact on the scenario description. We conclude that a meaningful set of shared climate policy assumptions is useful for grouping individual climatemore » policy analyses and facilitating their comparison. Shared climate policy assumptions should be designed to be policy relevant, and as a set to be broad enough to allow a comprehensive exploration of the climate change scenario space.« less
Assessing theoretical uncertainties in fission barriers of superheavy nuclei
Agbemava, S. E.; Afanasjev, A. V.; Ray, D.; ...
2017-05-26
Here, theoretical uncertainties in the predictions of inner fission barrier heights in superheavy elements have been investigated in a systematic way for a set of state-of-the-art covariant energy density functionals which represent major classes of the functionals used in covariant density functional theory. They differ in basic model assumptions and fitting protocols. Both systematic and statistical uncertainties have been quantified where the former turn out to be larger. Systematic uncertainties are substantial in superheavy elements and their behavior as a function of proton and neutron numbers contains a large random component. The benchmarking of the functionals to the experimental datamore » on fission barriers in the actinides allows to reduce the systematic theoretical uncertainties for the inner fission barriers of unknown superheavy elements. However, even then they on average increase on moving away from the region where benchmarking has been performed. In addition, a comparison with the results of non-relativistic approaches is performed in order to define full systematic theoretical uncertainties over the state-of-the-art models. Even for the models benchmarked in the actinides, the difference in the inner fission barrier height of some superheavy elements reaches $5-6$ MeV. This uncertainty in the fission barrier heights will translate into huge (many tens of the orders of magnitude) uncertainties in the spontaneous fission half-lives.« less
Line-spring model for surface cracks in a Reissner plate
NASA Technical Reports Server (NTRS)
Delale, F.; Erdogan, F.
1981-01-01
In this paper the line-spring model developed by Rice and Levy for a surface crack in elastic plates is reconsidered. The problem is formulated by using Reissner's plate bending theory. For the plane strain problem of a strip containing an edge crack and subjected to tension and bending new expressions for stress intensity factors are used which are valid up to a depth-to-thickness ratio of 0.8. The stress intensity factors for a semi-elliptic and a rectangular crack are calculated. Considering the simplicity of the technique and the severity of the underlying assumptions, the results compare rather well with the existing finite element solutions.
Zinc isotope anomalies in Allende meteorite inclusions
NASA Technical Reports Server (NTRS)
Loss, R. D.; Lugmair, G. W.
1990-01-01
The isotopic compositions of Zn, Cr, Ti, and Ca have been measured in a number of CAIs from the Allende meteorite. The aim was to test astrophysical models which predict large excesses of Zn-66 to accompany excesses in the neutron-rich isotopes of Ca, Ti, Cr, and Ni. Some of the CAIs show clearly resolved but small excesses for Zn-66 which are at least an order of magnitude smaller than predicted. This result may simply reflect the volatility and chemical behavior of Zn as compared to the other (more refractory) anomalous elements found in these samples. Alternatively, revision of parameters and assumptions used for the model calculations may be required.
Building equity in: strategies for integrating equity into modelling for a 1.5°C world.
Sonja, Klinsky; Harald, Winkler
2018-05-13
Emission pathways consistent with limiting temperature increase to 1.5°C raise pressing questions from an equity perspective. These pathways would limit impacts and benefit vulnerable communities but also present trade-offs that could increase inequality. Meanwhile, rapid mitigation could exacerbate political debates in which equity has played a central role. In this paper, we first develop a set of elements we suggest are essential for evaluating the equity implications of policy actions consistent with 1.5°C. These elements include (i) assess climate impacts, adaptation, loss and damage; (ii) be sensitive to context; (iii) compare costs of mitigation and adaptation policy action; (iv) incorporate human development and poverty; (v) integrate inequality dynamics; and (vi) be clear about normative assumptions and responsive to users. We then assess the ability of current modelling practices to address each element, focusing on global integrated assessment models augmented by national modelling and scenarios. We find current practices face serious limitations across all six dimensions although the severity of these varies. Finally, based on our assessment we identify strategies that may be best suited for enabling us to generate insights into each of the six elements in the context of assessing pathways for a 1.5°C world.This article is part of the theme issue 'The Paris Agreement: understanding the physical and social challenges for a warming world of 1.5°C above pre-industrial levels'. © 2018 The Author(s).
NASA Astrophysics Data System (ADS)
Line, Michael
The field of transiting exoplanet atmosphere characterization has grown considerably over the past decade given the wealth of photometric and spectroscopic data from the Hubble and Spitzer space telescopes. In order to interpret these data, atmospheric models combined with Bayesian approaches are required. From spectra, these approaches permit us to infer fundamental atmospheric properties and how their compositions can relate back to planet formation. However, such approaches must make a wide range of assumptions regarding the physics/parameterizations included in the model atmospheres. There has yet to be a comprehensive investigation exploring how these model assumptions influence our interpretations of exoplanetary spectra. Understanding the impact of these assumptions is especially important since the James Webb Space Telescope (JWST) is expected to invest a substantial portion of its time observing transiting planet atmospheres. It is therefore prudent to optimize and enhance our tools to maximize the scientific return from the revolutionary data to come. The primary goal of the proposed work is to determine the pieces of information we can robustly learn from transiting planet spectra as obtained by JWST and other future, space-based platforms, by investigating commonly overlooked model assumptions. We propose to explore the following effects and how they impact our ability to infer exoplanet atmospheric properties: 1. Stellar/Planetary Uncertainties: Transit/occultation eclipse depths and subsequent planetary spectra are measured relative to their host stars. How do stellar uncertainties, on radius, effective temperature, metallicity, and gravity, as well as uncertainties in the planetary radius and gravity, propagate into the uncertainties on atmospheric composition and thermal structure? Will these uncertainties significantly bias our atmospheric interpretations? Is it possible to use the relative measurements of the planetary spectra to provide additional constraints on the stellar properties? 2. The "1D" Assumption: Atmospheres are inherently three-dimensional. Many exoplanet atmosphere models, especially within retrieval frameworks, assume 1D physics and chemistry when interpreting spectra. How does this "1D" atmosphere assumption bias our interpretation of exoplanet spectra? Do we have to consider global temperature variations such as day-night contrasts or hot spots? What about spatially inhomogeneous molecular abundances and clouds? How will this change our interpretations of phase resolved spectra? 3. Clouds/Hazes: Understanding how clouds/hazes impact transit spectra is absolutely critical if we are to obtain proper estimates of basic atmospheric quantities. How do the assumptions in cloud physics bias our inferences of molecular abundances in transmission? What kind of data (wavelengths, signal-to-noise, resolution) do we need to infer cloud composition, vertical extent, spatial distribution (patchy or global), and size distributions? The proposed work is relevant and timely to the scope of the NASA Exoplanet Research program. The proposed work aims to further develop the critical theoretical modeling tools required to rigorously interpret transiting exoplanet atmosphere data in order to maximize the science return from JWST and beyond. This work will serve as a benchmark study for defining the data (wavelength ranges, signal-to-noises, and resolutions) required from a modeling perspective to "characterize exoplanets and their atmospheres in order to inform target and operational choices for current NASA missions, and/or targeting, operational, and formulation data for future NASA observatories". Doing so will allow us to better "understand the chemical and physical processes of exoplanets (their atmospheres)" which will ultimately " improve understanding of the origins of exoplanetary systems" through robust planetary elemental abundance determinations.
Curtis H. Flather; Kenneth R. Wilson; Denis J. Dean; William C. McComb
1997-01-01
Mapping of biodiversity elements to expose gaps in. conservation networks has become a common strategy in nature-reserve design. We review a set of critical assumptions and issues that influence the interpretation and implementation of gap analysis, including: (1) the assumption that a subset of taxa can be used to indicate overall diversity patterns, and (2) the...
ERIC Educational Resources Information Center
McDonough, Sharon; Brandenburg, Robyn
2012-01-01
The role of university-based mentors providing support for pre-service teachers (PSTs) on professional experience placements has long been an element of teacher education programs. These mentors often face challenging situations as they confront their own assumptions about teaching and learning, while also supporting PSTs who may be experiencing…
Forced in-plane vibration of a thick ring on a unilateral elastic foundation
NASA Astrophysics Data System (ADS)
Wang, Chunjian; Ayalew, Beshah; Rhyne, Timothy; Cron, Steve; Dailliez, Benoit
2016-10-01
Most existing studies of a deformable ring on elastic foundation rely on the assumption of a linear foundation. These assumptions are insufficient in cases where the foundation may have a unilateral stiffness that vanishes in compression or tension such as in non-pneumatic tires and bushing bearings. This paper analyzes the in-plane dynamics of such a thick ring on a unilateral elastic foundation, specifically, on a two-parameter unilateral elastic foundation, where the stiffness of the foundation is treated as linear in the circumferential direction but unilateral (i.e. collapsible or tensionless) in the radial direction. The thick ring is modeled as an orthotropic and extensible circular Timoshenko beam. An arbitrarily distributed time-varying in-plane force is considered as the excitation. The Equations of Motion are explicitly derived and a solution method is proposed that uses an implicit Newmark scheme for the time domain solution and an iterative compensation approach to determine the unilateral zone of the foundation at each time step. The dynamic axle force transmission is also analyzed. Illustrative forced vibration responses obtained from the proposed model and solution method are compared with those obtained from a finite element model.
NASA Technical Reports Server (NTRS)
Karam, Mostafa A.; Amar, Faouzi; Fung, Adrian K.
1993-01-01
The Wave Scattering Research Center at the University of Texas at Arlington has developed a scattering model for forest or vegetation, based on the theory of electromagnetic-wave scattering in random media. The model generalizes the assumptions imposed by earlier models, and compares well with measurements from several forest canopies. This paper gives a description of the model. It also indicates how the model elements are integrated to obtain the scattering characteristics of different forest canopies. The scattering characteristics may be displayed in the form of polarimetric signatures, represented by like- and cross-polarized scattering coefficients, for an elliptically-polarized wave, or in the form of signal-distribution curves. Results illustrating both types of scattering characteristics are given.
Comparison of distributed acceleration and standard models of cosmic-ray transport
NASA Technical Reports Server (NTRS)
Letaw, J. R.; Silberberg, R.; Tsao, C. H.
1995-01-01
Recent cosmic-ray abundance measurements for elements in the range 3 less than or equal to Z less than or equal to 28 and energies 10 MeV/n less than or equal to E less than or equal to 1 TeV/n have been analyzed with computer transport modeling. About 500 elemental and isotopic measurements have been explored in this analysis. The transport code includes the effects of ionization losses, nuclear spallation reactions (including those of secondaries), all nuclear decay modes, stripping and attachment of electrons, escape from the Galaxy, weak reacceleration and solar modulation. Four models of reacceleration (with several submodels of various reacceleration strengths) were explored. A chi (exp 2) analysis show that the reacceleration models yield at least equally good fits to the data as the standard propagation model. However, with reacceleration, the ad hoc assumptions of the standard model regarding discontinuities in the energy dependence of the mean path length traversed by cosmic rays, and in the momentum spectrum of the cosmic-ray source spectrum are eliminated. Futhermore, the difficulty between rigidity dependent leakage and energy independent anisotropy below energies of 10(exp 14) eV is alleviated.
NASA Astrophysics Data System (ADS)
Zielnica, J.; Ziółkowski, A.; Cempel, C.
2003-03-01
Design and theoretical and experimental investigation of vibroisolation pads with non-linear static and dynamic responses is the objective of the paper. The analytical investigations are based on non-linear finite element analysis where the load-deflection response is traced against the shape and material properties of the analysed model of the vibroisolation pad. A new model of vibroisolation pad of antisymmetrical type was designed and analysed by the finite element method based on the second-order theory (large displacements and strains) with the assumption of material's non-linearities (Mooney-Rivlin model). Stability loss phenomenon was used in the design of the vibroisolators, and it was proved that it would be possible to design a model of vibroisolator in the form of a continuous pad with non-linear static and dynamic response, typical to vibroisolation purposes. The materials used for the vibroisolator are those of rubber, elastomers, and similar ones. The results of theoretical investigations were examined experimentally. A series of models made of soft rubber were designed for the test purposes. The experimental investigations of the vibroisolation models, under static and dynamic loads, confirmed the results of the FEM analysis.
NASA Astrophysics Data System (ADS)
Knuth, K. H.
2001-05-01
We consider the application of Bayesian inference to the study of self-organized structures in complex adaptive systems. In particular, we examine the distribution of elements, agents, or processes in systems dominated by hierarchical structure. We demonstrate that results obtained by Caianiello [1] on Hierarchical Modular Systems (HMS) can be found by applying Jaynes' Principle of Group Invariance [2] to a few key assumptions about our knowledge of hierarchical organization. Subsequent application of the Principle of Maximum Entropy allows inferences to be made about specific systems. The utility of the Bayesian method is considered by examining both successes and failures of the hierarchical model. We discuss how Caianiello's original statements suffer from the Mind Projection Fallacy [3] and we restate his assumptions thus widening the applicability of the HMS model. The relationship between inference and statistical physics, described by Jaynes [4], is reiterated with the expectation that this realization will aid the field of complex systems research by moving away from often inappropriate direct application of statistical mechanics to a more encompassing inferential methodology.
NASA Astrophysics Data System (ADS)
Fang, Sheng-En; Perera, Ricardo; De Roeck, Guido
2008-06-01
This paper develops a sensitivity-based updating method to identify the damage in a tested reinforced concrete (RC) frame modeled with a two-dimensional planar finite element (FE) by minimizing the discrepancies of modal frequencies and mode shapes. In order to reduce the number of unknown variables, a bidimensional damage (element) function is proposed, resulting in a considerable improvement of the optimization performance. For damage identification, a reference FE model of the undamaged frame divided into a few damage functions is firstly obtained and then a rough identification is carried out to detect possible damage locations, which are subsequently refined with new damage functions to accurately identify the damage. From a design point of view, it would be useful to evaluate, in a simplified way, the remaining bending stiffness of cracked beam sections or segments. Hence, an RC damage model based on a static mechanism is proposed to estimate the remnant stiffness of a cracked RC beam segment. The damage model is based on the assumption that the damage effect spreads over a region and the stiffness in the segment changes linearly. Furthermore, the stiffness reduction evaluated using this damage model is compared with the FE updating result. It is shown that the proposed bidimensional damage function is useful in producing a well-conditioned optimization problem and the aforementioned damage model can be used for an approximate stiffness estimation of a cracked beam segment.
Scope of inextensible frame hypothesis in local action analysis of spherical reservoirs
NASA Astrophysics Data System (ADS)
Vinogradov, Yu. I.
2017-05-01
Spherical reservoirs, as objects perfect with respect to their weight, are used in spacecrafts, where thin-walled elements are joined by frames into multifunction structures. The junctions are local, which results in origination of stress concentration regions and the corresponding rigidity problems. The thin-walled elements are reinforced by frame to decrease the stresses in them. To simplify the analysis of the mathematical model of common deformation of the shell (which is a mathematical idealization of the reservoir) and the frame, the assumption that the frame axial line is inextensible is used widely (in particular, in the manual literature). The unjustified use of this assumption significantly distorts the concept of the stress-strain state. In this paper, an example of a lens-shaped structure formed as two spherical shell segments connected by a frame of square profile is used to carry out a numerical comparative analysis of the solutions with and without the inextensible frame hypothesis taken into account. The scope of the hypothesis is shown depending on the structure geometric parameters and the load location degree. The obtained results can be used to determine the stress-strain state of the thin-walled structure with an a priori prescribed error, for example, in research and experimental design of aerospace systems.
Building Blocks of Psychology: on Remaking the Unkept Promises of Early Schools.
Gozli, Davood G; Deng, Wei Sophia
2018-03-01
The appeal and popularity of "building blocks", i.e., simple and dissociable elements of behavior and experience, persists in psychological research. We begin our assessment of this research strategy with an historical review of structuralism (as espoused by E. B. Titchener) and behaviorism (espoused by J. B. Watson and B. F. Skinner), two movements that held the assumption in their attempts to provide a systematic and unified discipline. We point out the ways in which the elementism of the two schools selected, framed, and excluded topics of study. After the historical review, we turn to contemporary literature and highlight the persistence of research into building blocks and the associated framing and exclusions in psychological research. The assumption that complex categories of human psychology can be understood in terms of their elementary components and simplest forms seems indefensible. In specific cases, therefore, reliance on the assumption requires justification. Finally, we review alternative strategies that bypass the commitment to building blocks.
RESPIZZI, STEFANO; COVELLI, ELISABETTA
2015-01-01
The emotional coaching model uses quantitative and qualitative elements to demonstrate some assumptions relevant to new methods of treatment in physical rehabilitation, considering emotional, cognitive and behavioral aspects in patients, whether or not they are sportsmen. Through quantitative tools (Tampa Kinesiophobia Scale, Emotional Interview Test, Previous Re-Injury Test, and reports on test scores) and qualitative tools (training contracts and relationships of emotional alliance or “contagion”), we investigate initial assumptions regarding: the presence of a cognitive and emotional mental state of impasse in patients at the beginning of the rehabilitation pathway; the curative value of the emotional alliance or “emotional contagion” relationship between healthcare provider and patient; the link between the patient’s pathology and type of contact with his own body and emotions; analysis of the psychosocial variables for the prediction of possible cases of re-injury for patients who have undergone or are afraid to undergo reconstruction of the anterior cruciate ligament (ACL). Although this approach is still in the experimental stage, the scores of the administered tests show the possibility of integrating quantitative and qualitative tools to investigate and develop a patient’s physical, mental and emotional resources during the course of his rehabilitation. Furthermore, it seems possible to identify many elements characterizing patients likely to undergo episodes of re-injury or to withdraw totally from sporting activity. In particular, such patients are competitive athletes, who fear or have previously undergone ACL reconstruction. The theories referred to (the transactional analysis theory, self-determination theory) and the tools used demonstrate the usefulness of continuing this research in order to build a shared coaching model treatment aimed at all patients, sportspeople or otherwise, which is not only physical but also emotional, cognitive and behavioral. PMID:26904525
ERIC Educational Resources Information Center
Lin, Crystal Jia-yi
2015-01-01
Idiom transparency refers to how speakers think the meaning of the individual words contributes to the figurative meaning of an idiom as a whole (Gibbs, Nayak, & Cutting, 1989). However, it is not clear how speakers or language learners form their assumptions about an idiom's transparency level. This study set out to discover whether there are…
NASA Astrophysics Data System (ADS)
Lee, Kyu Sang; Gill, Wonpyong
2017-11-01
The dynamic properties, such as the crossing time and time-dependence of the relative density of the four-state haploid coupled discrete-time mutation-selection model, were calculated with the assumption that μ ij = μ ji , where μ ij denotes the mutation rate between the sequence elements, i and j. The crossing time for s = 0 and r 23 = r 42 = 1 in the four-state model became saturated at a large fitness parameter when r 12 > 1, was scaled as a power law in the fitness parameter when r 12 = 1, and diverged when the fitness parameter approached the critical fitness parameter when r 12 < 1, where r ij = μ ij / μ 14.
NASA Astrophysics Data System (ADS)
Ferrara, R.; Leonardi, G.; Jourdan, F.
2013-09-01
A numerical model to predict train-induced vibrations is presented. The dynamic computation considers mutual interactions in vehicle/track coupled systems by means of a finite and discrete elements method. The rail defects and the case of out-of-round wheels are considered. The dynamic interaction between the wheel-sets and the rail is accomplished by using the non-linear Hertzian model with hysteresis damping. A sensitivity analysis is done to evaluate the variables affecting more the maintenance costs. The rail-sleeper contact is assumed extended to an area-defined contact zone, rather than a single-point assumption which fits better real case studies. Experimental validations show how prediction fits well experimental data.
How to make a particular case for person-centred patient care: A commentary on Alexandra Parvan.
Graham, George
2018-06-14
In recent years, a person-centred approach to patient care in cases of mental illness has been promoted as an alternative to a disease orientated approach. Alexandra Parvan's contribution to the person-centred approach serves to motivate an exploration of the approach's most apt metaphysical assumptions. I argue that a metaphysical thesis or assumption about both persons and their uniqueness is an essential element of being person-centred. I apply the assumption to issues such as the disorder/disease distinction and to the continuity of mental health and illness. © 2018 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Bakuckas, J. G., Jr.; Johnson, W. S.
1994-01-01
In this research, a methodology to predict damage initiation, damage growth, fatigue life, and residual strength in titanium matrix composites (TMC) is outlined. Emphasis was placed on micromechanics-based engineering approaches. Damage initiation was predicted using a local effective strain approach. A finite element analysis verified the prevailing assumptions made in the formulation of this model. Damage growth, namely, fiber-bridged matrix crack growth, was evaluated using a fiber bridging (FB) model which accounts for thermal residual stresses. This model combines continuum fracture mechanics and micromechanics analyses yielding stress-intensity factor solutions for fiber-bridged matrix cracks. It is assumed in the FB model that fibers in the wake of the matrix crack are idealized as a closure pressure, and an unknown constant frictional shear stress is assumed to act along the debond length of the bridging fibers. This frictional shear stress was used as a curve fitting parameter to the available experimental data. Fatigue life and post-fatigue residual strength were predicted based on the axial stress in the first intact 0 degree fiber calculated using the FB model and a three-dimensional finite element analysis.
Finite Element-Based Mechanical Assessment of Bone Quality on the Basis of In Vivo Images.
Pahr, Dieter H; Zysset, Philippe K
2016-12-01
Beyond bone mineral density (BMD), bone quality designates the mechanical integrity of bone tissue. In vivo images based on X-ray attenuation, such as CT reconstructions, provide size, shape, and local BMD distribution and may be exploited as input for finite element analysis (FEA) to assess bone fragility. Further key input parameters of FEA are the material properties of bone tissue. This review discusses the main determinants of bone mechanical properties and emphasizes the added value, as well as the important assumptions underlying finite element analysis. Bone tissue is a sophisticated, multiscale composite material that undergoes remodeling but exhibits a rather narrow band of tissue mineralization. Mechanically, bone tissue behaves elastically under physiologic loads and yields by cracking beyond critical strain levels. Through adequate cell-orchestrated modeling, trabecular bone tunes its mechanical properties by volume fraction and fabric. With proper calibration, these mechanical properties may be incorporated in quantitative CT-based finite element analysis that has been validated extensively with ex vivo experiments and has been applied increasingly in clinical trials to assess treatment efficacy against osteoporosis.
Majoros, William H; Ohler, Uwe
2010-12-16
The computational detection of regulatory elements in DNA is a difficult but important problem impacting our progress in understanding the complex nature of eukaryotic gene regulation. Attempts to utilize cross-species conservation for this task have been hampered both by evolutionary changes of functional sites and poor performance of general-purpose alignment programs when applied to non-coding sequence. We describe a new and flexible framework for modeling binding site evolution in multiple related genomes, based on phylogenetic pair hidden Markov models which explicitly model the gain and loss of binding sites along a phylogeny. We demonstrate the value of this framework for both the alignment of regulatory regions and the inference of precise binding-site locations within those regions. As the underlying formalism is a stochastic, generative model, it can also be used to simulate the evolution of regulatory elements. Our implementation is scalable in terms of numbers of species and sequence lengths and can produce alignments and binding-site predictions with accuracy rivaling or exceeding current systems that specialize in only alignment or only binding-site prediction. We demonstrate the validity and power of various model components on extensive simulations of realistic sequence data and apply a specific model to study Drosophila enhancers in as many as ten related genomes and in the presence of gain and loss of binding sites. Different models and modeling assumptions can be easily specified, thus providing an invaluable tool for the exploration of biological hypotheses that can drive improvements in our understanding of the mechanisms and evolution of gene regulation.
NASA Astrophysics Data System (ADS)
Wilson, Cian R.; Spiegelman, Marc; van Keken, Peter E.
2017-02-01
We introduce and describe a new software infrastructure TerraFERMA, the Transparent Finite Element Rapid Model Assembler, for the rapid and reproducible description and solution of coupled multiphysics problems. The design of TerraFERMA is driven by two computational needs in Earth sciences. The first is the need for increased flexibility in both problem description and solution strategies for coupled problems where small changes in model assumptions can lead to dramatic changes in physical behavior. The second is the need for software and models that are more transparent so that results can be verified, reproduced, and modified in a manner such that the best ideas in computation and Earth science can be more easily shared and reused. TerraFERMA leverages three advanced open-source libraries for scientific computation that provide high-level problem description (FEniCS), composable solvers for coupled multiphysics problems (PETSc), and an options handling system (SPuD) that allows the hierarchical management of all model options. TerraFERMA integrates these libraries into an interface that organizes the scientific and computational choices required in a model into a single options file from which a custom compiled application is generated and run. Because all models share the same infrastructure, models become more reusable and reproducible, while still permitting the individual researcher considerable latitude in model construction. TerraFERMA solves partial differential equations using the finite element method. It is particularly well suited for nonlinear problems with complex coupling between components. TerraFERMA is open-source and available at http://terraferma.github.io, which includes links to documentation and example input files.
Dynamic tests of composite panels of an aircraft wing
NASA Astrophysics Data System (ADS)
Splichal, Jan; Pistek, Antonin; Hlinka, Jiri
2015-10-01
The paper describes the analysis of aerospace composite structures under dynamic loading. Today, it is common to use design procedures based on assumption of static loading only, and dynamic loading is rarely assumed and applied in design and certification of aerospace structures. The paper describes the application of dynamic loading for the design of aircraft structures, and the validation of the procedure on a selected structure. The goal is to verify the possibility of reducing the weight through improved design/modelling processes using dynamic loading instead of static loading. The research activity focuses on the modelling and testing of a composite panel representing a local segment of an aircraft wing section, investigating in particular the buckling behavior under dynamic loading. Finite Elements simulation tools are discussed, as well as the advantages of using a digital optical measurement system for the evaluation of the tests. The comparison of the finite element simulations with the results of the tests is presented.
Heat Transfer Issues in Finite Element Analysis of Bounding Accidents in PPCS Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pampin, R.; Karditsas, P.J.
2005-05-15
Modelling of temperature excursions in structures of conceptual power plants during hypothetical worst-case accidents has been performed within the European Power Plant Conceptual Study (PPCS). A new, 3D finite elements (FE) based tool, coupling the different calculations to the same tokamak geometry, has been extensively used to conduct the neutron transport, activation and thermal analyses for all PPCS plant models. During a total loss of cooling, the usual assumption for the bounding accident, passive removal of the decay heat from activated materials depends on conduction and radiation heat exchange between components. This paper presents and discusses results obtained during themore » PPCS bounding accident thermal analyses, examining the following issues: (a) radiation heat exchange between the inner surfaces of the tokamak, (b) the presence of air within the cryostat volume, and the heat flow arising from the circulation pattern provided by temperature differences between various parts, and (c) the thermal conductivity of pebble beds, and its degradation due to exposure to neutron irradiation, affecting the heat transfer capability and thermal response of a blanket based on these components.« less
Resolution of Forces and Strain Measurements from an Acoustic Ground Test
NASA Technical Reports Server (NTRS)
Smith, Andrew M.; LaVerde, Bruce T.; Hunt, Ronald; Waldon, James M.
2013-01-01
The Conservatism in Typical Vibration Tests was Demonstrated: Vibration test at component level produced conservative force reactions by approximately a factor of 4 (approx.12 dB) as compared to the integrated acoustic test in 2 out of 3 axes. Reaction Forces Estimated at the Base of Equipment Using a Finite Element Based Method were Validated: FEM based estimate of interface forces may be adequate to guide development of vibration test criteria with less conservatism. Element Forces Estimated in Secondary Structure Struts were Validated: Finite element approach provided best estimate of axial strut forces in frequency range below 200 Hz where a rigid lumped mass assumption for the entire electronics box was valid. Models with enough fidelity to represent diminishing apparent mass of equipment are better suited for estimating force reactions across the frequency range. Forward Work: Demonstrate the reduction in conservatism provided by; Current force limited approach and an FEM guided approach. Validate proposed CMS approach to estimate coupled response from uncoupled system characteristics for vibroacoustics.
Study of photon strength functions via (γ→, γ', γ″) reactions at the γ3-setup
NASA Astrophysics Data System (ADS)
Isaak, Johann; Savran, Deniz; Beck, Tobias; Gayer, Udo; Krishichayan; Löher, Bastian; Pietralla, Norbert; Scheck, Marcus; Tornow, Werner; Werner, Volker; Zilges, Andreas
2018-05-01
One of the basic ingredients for the modelling of the nucleosynthesis of heavy elements are so-called photon strength functions and the assumption of the Brink-Axel hypothesis. This hypothesis has been studied for many years by numerous experiments using different and complementary reactions. The present manuscript aims to introduce a model-independent approach to study photon strength functions via γ-γ coincidence spectroscopy of photoexcited states in 128Te. The experimental results provide evidence that the photon strength function extracted from photoabsorption cross sections is not in an overall agreement with the one determined from direct transitions to low-lying excited states.
Making the Case for Reusable Booster Systems: The Operations Perspective
NASA Technical Reports Server (NTRS)
Zapata, Edgar
2012-01-01
Presentation to the Aeronautics Space Engineering Board National Research Council Reusable Booster System: Review and Assessment Committee. Addresses: the criteria and assumptions used in the formulation of current RBS plans; the methodologies used in the current cost estimates for RBS; the modeling methodology used to frame the business case for an RBS capability including: the data used in the analysis, the models' robustness if new data become available, and the impact of unclassified government data that was previously unavailable and which will be supplied by the USAF; the technical maturity of key elements critical to RBS implementation and the ability of current technology development plans to meet technical readiness milestones.
Bending, Zara J
2015-06-01
The conception of the doctor-patient relationship under Australian law has followed British common law tradition whereby the relationship is founded in a contractual exchange. By contrast, this article presents a rationale and framework for an alternative model-a "Trust Model"-for implementation into law to more accurately reflect the contemporary therapeutic dynamic. The framework has four elements: (i) an assumption that professional conflicts (actual or perceived) with patient safety, motivated by financial or personal interests, should be avoided; (ii) an onus on doctors to disclose these conflicts; (iii) a proposed mechanism to contend with instances where doctors choose not to disclose; and (iv) sanctions for non-compliance with the regime.
Modeling of porous concrete elements under load
NASA Astrophysics Data System (ADS)
Demchyna, B. H.; Famuliak, Yu. Ye.; Demchyna, Kh. B.
2017-12-01
It is known that cell concretes are almost immediately destroyed under load, having reached certain critical stresses. Such kind of destruction is called a "catastrophic failure". Process of crack formation is one of the main factors, influencing process of concrete destruction. Modern theory of crack formation is mainly based on the Griffith theory of destruction. However, the mentioned theory does not completely correspond to the structure of cell concrete with its cell structure, because the theory is intended for a solid body. The article presents one of the possible variants of modelling of the structure of cell concrete and gives some assumptions concerning the process of crack formation in such hollow, not solid environment.
Parachute dynamics and stability analysis. [using nonlinear differential equations of motion
NASA Technical Reports Server (NTRS)
Ibrahim, S. K.; Engdahl, R. A.
1974-01-01
The nonlinear differential equations of motion for a general parachute-riser-payload system are developed. The resulting math model is then applied for analyzing the descent dynamics and stability characteristics of both the drogue stabilization phase and the main descent phase of the space shuttle solid rocket booster (SRB) recovery system. The formulation of the problem is characterized by a minimum number of simplifying assumptions and full application of state-of-the-art parachute technology. The parachute suspension lines and the parachute risers can be modeled as elastic elements, and the whole system may be subjected to specified wind and gust profiles in order to assess their effects on the stability of the recovery system.
Crystal plasticity modeling of β phase deformation in Ti-6Al-4V
NASA Astrophysics Data System (ADS)
Moore, John A.; Barton, Nathan R.; Florando, Jeff; Mulay, Rupalee; Kumar, Mukul
2017-10-01
Ti-6Al-4V is an alloy of titanium that dominates titanium usage in applications ranging from mass-produced consumer goods to high-end aerospace parts. The material’s structure on a microscale is known to affect its mechanical properties but these effects are not fully understood. Specifically, this work will address the effects of low volume fraction intergranular β phase on Ti-6Al-4V’s mechanical response during the transition from elastic to plastic deformation. A crystal plasticity-based finite element model is used to fully resolve the deformation of the β phase for the first time. This high fidelity model captures mechanisms difficult to access via experiments or lower fidelity models. The results are used to assess lower fidelity modeling assumptions and identify phenomena that have ramifications for failure of the material.
Theoretical Studies of Spectroscopic Line Mixing in Remote Sensing Applications
NASA Technical Reports Server (NTRS)
Ma, Q.; Boulet, C.; Tipping, R. H.
2015-01-01
The phenomenon of collisional transfer of intensity due to line mixing has an increasing importance for atmospheric monitoring. From a theoretical point of view, all relevant information about the collisional processes is contained in the relaxation matrix where the diagonal elements give half-widths and shifts, and the off-diagonal elements correspond to line interferences. For simple systems such as those consisting of diatom-atom or diatom-diatom, accurate fully quantum calculations based on interaction potentials are feasible. However, fully quantum calculations become unrealistic for more complex systems. On the other hand, the semi-classical Robert-Bonamy (RB) formalism, which has been widely used to calculate half-widths and shifts for decades, fails in calculating the off-diagonal matrix elements. As a result, in order to simulate atmospheric spectra where the effects from line mixing are important, semi-empirical fitting or scaling laws such as the ECS (Energy-Corrected Sudden) and IOS (Infinite-Order Sudden) models are commonly used. Recently, while scrutinizing the development of the RB formalism, we have found that these authors applied the isolated line approximation in their evaluating matrix elements of the Liouville scattering operator given in exponential form. Since the criterion of this assumption is so stringent, it is not valid for many systems of interest in atmospheric applications. Furthermore, it is this assumption that blocks the possibility to calculate the whole relaxation matrix at all. By eliminating this unjustified application, and accurately evaluating matrix elements of the exponential operators, we have developed a more capable formalism. With this new formalism, we are now able not only to reduce uncertainties for calculated half-widths and shifts, but also to remove a once insurmountable obstacle to calculate the whole relaxation matrix. This implies that we can address the line mixing with the semi-classical theory based on interaction potentials between molecular absorber and molecular perturber. We have applied this formalism to address the line mixing for Raman and infrared spectra of molecules such as N2, C2H2, CO2, NH3, and H2O. By carrying out rigorous calculations, our calculated relaxation matrices are in good agreement with both experimental data and results derived from the ECS model.
Hamanaka, Ryo; Yamaoka, Satoshi; Anh, Tuan Nguyen; Tominaga, Jun-Ya; Koga, Yoshiyuki; Yoshida, Noriaki
2017-11-01
Although many attempts have been made to simulate orthodontic tooth movement using the finite element method, most were limited to analyses of the initial displacement in the periodontal ligament and were insufficient to evaluate the effect of orthodontic appliances on long-term tooth movement. Numeric simulation of long-term tooth movement was performed in some studies; however, neither the play between the brackets and archwire nor the interproximal contact forces were considered. The objectives of this study were to simulate long-term orthodontic tooth movement with the edgewise appliance by incorporating those contact conditions into the finite element model and to determine the force system when the space is closed with sliding mechanics. We constructed a 3-dimensional model of maxillary dentition with 0.022-in brackets and 0.019 × 0.025-in archwire. Forces of 100 cN simulating sliding mechanics were applied. The simulation was accomplished on the assumption that bone remodeling correlates with the initial tooth displacement. This method could successfully represent the changes in the moment-to-force ratio: the tooth movement pattern during space closure. We developed a novel method that could simulate the long-term orthodontic tooth movement and accurately determine the force system in the course of time by incorporating contact boundary conditions into finite element analysis. It was also suggested that friction is progressively increased during space closure in sliding mechanics. Copyright © 2017. Published by Elsevier Inc.
Peridynamic Multiscale Finite Element Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costa, Timothy; Bond, Stephen D.; Littlewood, David John
The problem of computing quantum-accurate design-scale solutions to mechanics problems is rich with applications and serves as the background to modern multiscale science research. The prob- lem can be broken into component problems comprised of communicating across adjacent scales, which when strung together create a pipeline for information to travel from quantum scales to design scales. Traditionally, this involves connections between a) quantum electronic structure calculations and molecular dynamics and between b) molecular dynamics and local partial differ- ential equation models at the design scale. The second step, b), is particularly challenging since the appropriate scales of molecular dynamic andmore » local partial differential equation models do not overlap. The peridynamic model for continuum mechanics provides an advantage in this endeavor, as the basic equations of peridynamics are valid at a wide range of scales limiting from the classical partial differential equation models valid at the design scale to the scale of molecular dynamics. In this work we focus on the development of multiscale finite element methods for the peridynamic model, in an effort to create a mathematically consistent channel for microscale information to travel from the upper limits of the molecular dynamics scale to the design scale. In particular, we first develop a Nonlocal Multiscale Finite Element Method which solves the peridynamic model at multiple scales to include microscale information at the coarse-scale. We then consider a method that solves a fine-scale peridynamic model to build element-support basis functions for a coarse- scale local partial differential equation model, called the Mixed Locality Multiscale Finite Element Method. Given decades of research and development into finite element codes for the local partial differential equation models of continuum mechanics there is a strong desire to couple local and nonlocal models to leverage the speed and state of the art of local models with the flexibility and accuracy of the nonlocal peridynamic model. In the mixed locality method this coupling occurs across scales, so that the nonlocal model can be used to communicate material heterogeneity at scales inappropriate to local partial differential equation models. Additionally, the computational burden of the weak form of the peridynamic model is reduced dramatically by only requiring that the model be solved on local patches of the simulation domain which may be computed in parallel, taking advantage of the heterogeneous nature of next generation computing platforms. Addition- ally, we present a novel Galerkin framework, the 'Ambulant Galerkin Method', which represents a first step towards a unified mathematical analysis of local and nonlocal multiscale finite element methods, and whose future extension will allow the analysis of multiscale finite element methods that mix models across scales under certain assumptions of the consistency of those models.« less
Gaetz, Michael
2017-05-01
CTE has two prominent components: the pathophysiology that is detected in the brain postmortem and the symptomology that is present in the interval between retirement and end of life. CTE symptomology has been noted to include memory difficulties, aggression, depression, explosivity, and executive dysfunction at early stages progressing to problems with attention, mood swings, visuospatial difficulties, confusion, progressive dementia, and suicidality (e.g. McKee et al. (2012), Omalu et al. (2010a-c), McKee et al. (2009)). There are a number of assumptions embedded within the current CTE literature: The first is the assumption that CTE symptomology reported by athletes and their families is the product of the pathophysiology change detected post-mortem (e.g. McKee et al. (2009)). At present, there is little scientific evidence to suggest that all CTE symptomology is the product of CTE pathophysiology. It has been assumed that CTE pathophysiology causes CTE symptomology (Meehan et al. (2015), Iverson et al. (2016)) but this link has never been scientifically validated. The purpose of the present work is to provide a multi-factorial theoretical framework to account for the symptomology reported by some athletes who sustain neurotrauma during their careers that will lead to a more systematic approach to understanding post-career symptomology. There is significant overlap between the case reports of athletes with post-mortem diagnoses of CTE, and symptom profiles of those with a history of substance use, chronic pain, and athlete career transition stress. The athlete post-career adjustment (AP-CA) model is intended to explain some of the symptoms that athletes experience at the end of their careers or during retirement. The AP-CA model consists of four elements: neurotrauma, chronic pain, substance use, and career transition stress. Based on the existing literature, it is clear that any one of the four elements of the AP-CA model can account for a significant number of CTE symptoms. In addition, depression can be a chronic lifelong co-morbid condition that may be present prior to an athletic career, or may be developed secondary to any of the model elements as shown in Fig. 1. Notably, neurotrauma is a necessary, but not a sufficient condition, for the development of CTE symptomology. Copyright © 2017 Elsevier Ltd. All rights reserved.
Quantifying asymmetry: ratios and alternatives.
Franks, Erin M; Cabo, Luis L
2014-08-01
Traditionally, the study of metric skeletal asymmetry has relied largely on univariate analyses, utilizing ratio transformations when the goal is comparing asymmetries in skeletal elements or populations of dissimilar dimensions. Under this approach, raw asymmetries are divided by a size marker, such as a bilateral average, in an attempt to produce size-free asymmetry indices. Henceforth, this will be referred to as "controlling for size" (see Smith: Curr Anthropol 46 (2005) 249-273). Ratios obtained in this manner often require further transformations to interpret the meaning and sources of asymmetry. This model frequently ignores the fundamental assumption of ratios: the relationship between the variables entered in the ratio must be isometric. Violations of this assumption can obscure existing asymmetries and render spurious results. In this study, we examined the performance of the classic indices in detecting and portraying the asymmetry patterns in four human appendicular bones and explored potential methodological alternatives. Examination of the ratio model revealed that it does not fulfill its intended goals in the bones examined, as the numerator and denominator are independent in all cases. The ratios also introduced strong biases in the comparisons between different elements and variables, generating spurious asymmetry patterns. Multivariate analyses strongly suggest that any transformation to control for overall size or variable range must be conducted before, rather than after, calculating the asymmetries. A combination of exploratory multivariate techniques, such as Principal Components Analysis, and confirmatory linear methods, such as regression and analysis of covariance, appear as a promising and powerful alternative to the use of ratios. © 2014 Wiley Periodicals, Inc.
Zampolli, Mario; Nijhof, Marten J J; de Jong, Christ A F; Ainslie, Michael A; Jansen, Erwin H W; Quesson, Benoit A J
2013-01-01
The acoustic radiation from a pile being driven into the sediment by a sequence of hammer strikes is studied with a linear, axisymmetric, structural acoustic frequency domain finite element model. Each hammer strike results in an impulsive sound that is emitted from the pile and then propagated in the shallow water waveguide. Measurements from accelerometers mounted on the head of a test pile and from hydrophones deployed in the water are used to validate the model results. Transfer functions between the force input at the top of the anvil and field quantities, such as acceleration components in the structure or pressure in the fluid, are computed with the model. These transfer functions are validated using accelerometer or hydrophone measurements to infer the structural forcing. A modeled hammer forcing pulse is used in the successive step to produce quantitative predictions of sound exposure at the hydrophones. The comparison between the model and the measurements shows that, although several simplifying assumptions were made, useful predictions of noise levels based on linear structural acoustic models are possible. In the final part of the paper, the model is used to characterize the pile as an acoustic radiator by analyzing the flow of acoustic energy.
A normal tissue dose response model of dynamic repair processes.
Alber, Markus; Belka, Claus
2006-01-07
A model is presented for serial, critical element complication mechanisms for irradiated volumes from length scales of a few millimetres up to the entire organ. The central element of the model is the description of radiation complication as the failure of a dynamic repair process. The nature of the repair process is seen as reestablishing the structural organization of the tissue, rather than mere replenishment of lost cells. The interactions between the cells, such as migration, involved in the repair process are assumed to have finite ranges, which limits the repair capacity and is the defining property of a finite-sized reconstruction unit. Since the details of the repair processes are largely unknown, the development aims to make the most general assumptions about them. The model employs analogies and methods from thermodynamics and statistical physics. An explicit analytical form of the dose response of the reconstruction unit for total, partial and inhomogeneous irradiation is derived. The use of the model is demonstrated with data from animal spinal cord experiments and clinical data about heart, lung and rectum. The three-parameter model lends a new perspective to the equivalent uniform dose formalism and the established serial and parallel complication models. Its implications for dose optimization are discussed.
Poirier, B; Ville, J M; Maury, C; Kateb, D
2009-09-01
An analytical three dimensional bicylindrical model is developed in order to take into account the effects of the saddle-shaped area for the interface of a n-Herschel-Quincke tube system with the main duct. Results for the scattering matrix of this system deduced from this model are compared, in the plane wave frequency domain, versus experimental and numerical data and a one dimensional model with and without tube length correction. The results are performed with a two-Herschel-Quincke tube configuration having the same diameter as the main duct. In spite of strong assumptions on the acoustic continuity conditions at the interfaces, this model is shown to improve the nonperiodic amplitude variations and the frequency localization of the minima of the transmission and reflection coefficients with respect to one dimensional model with length correction and a three dimensional model.
NASA Astrophysics Data System (ADS)
Sawyer, W.; Resor, P. G.
2016-12-01
Pseudotachylyte, a fault rock formed through coseismic frictional melting, provides an important record of coseismic mechanics. In particular, injection veins formed at a high angle to the fault surface have been used to estimate rupture directivity, velocity, pulse length, stress and strength drop, as well as slip weakening distance and wall rock stiffness. These studies, however, have generally treated injection vein formation as a purely elastic process and have assumed that processes of melt generation, transport, and solidification have little influence on the final vein geometry. Using a modified analytical approximation of injection vein formation based on a dike intrusion model we find that the timescales of quenching and flow propagation are similar for a composite set of injection veins compiled from the Asbestos Mountain Fault, USA (Rowe et al., 2012), Gole Larghe Fault Zone, Italy (Griffith et al., 2012) and the Fort Foster Brittle Zone. This indicates a complex, dynamic process whose behavior is not fully captured by the current approach. To assess the applicability of the simplifying assumptions of the dike model when applied to injection veins we employ a finite-element time-dependent model of injection vein formation. This model couples elastic deformation of the wall rock with the fluid dynamics and heat transfer of the frictional melt. The final geometry of many injection veins is unaffected by the inclusion of these processes. However, some injection veins are found to be flow limited, with a final geometry reflecting cooling of the vein before it reaches an elastic equilibrium with the wall rock. In these cases, numerical results are significantly different from the dike model, and two basic assumptions of the dike model, self-similar growth and a uniform pressure gradient, are shown to be false. Additionally, we apply the finite-element model to provide two new constraints on the Fort Foster coseismic environment: a lower limit on the initial melt temperature of 1400 *C, and either significant coseismic wall rock softening or high transient tensile stress.
Simulated impacts of climate on hydrology can vary greatly as a function of the scale of the input data, model assumptions, and model structure. Four models are commonly used to simulate streamflow in model assumptions, and model structure. Four models are commonly used to simu...
NASA Astrophysics Data System (ADS)
Brassard, Pierre; Fontaine, Gilles
2015-06-01
The accretion-diffusion picture is the model par excellence for describing the presence of planetary debris polluting the atmospheres of relatively cool white dwarfs. In the time-dependent approach used in Paper II of this series (Fontaine et al. 2014), the basic assumption is that the accreted metals are trace elements and do not influence the background structure, which may be considered static in time. Furthermore, the usual assumption of instantaneous mixing in the convection zone is made. As part of the continuing development of our local evolutionary code, diffusion in presence of stellar winds or accretion is now fully coupled to evolution. Convection is treated as a diffusion process, i.e., the assumption of instantaneous mixing is relaxed, and, furthermore, overshooting is included. This allows feedback on the evolving structure from the accreting metals. For instance, depending of its abundance, a given metal may contribute enough to the overall opacity (especially in a He background) to change the size of the convection zone as a function of time. Our better approach also allows to include in a natural way the mechanism of thermohaline convection, which we discuss at some length. Also, it is easy to consider sophisticated time-dependent models of accretion from circumstellar disks, such as those developed by Roman Rafikov at Princeton for instance. The current limitations of our approach are 1) the calculations are extremely computer-intensive, and 2) we have not yet developed detailed EOS megatables for metals beyond oxygen.
NASA Astrophysics Data System (ADS)
Kaiser, Olga; Martius, Olivia; Horenko, Illia
2017-04-01
Regression based Generalized Pareto Distribution (GPD) models are often used to describe the dynamics of hydrological threshold excesses relying on the explicit availability of all of the relevant covariates. But, in real application the complete set of relevant covariates might be not available. In this context, it was shown that under weak assumptions the influence coming from systematically missing covariates can be reflected by a nonstationary and nonhomogenous dynamics. We present a data-driven, semiparametric and an adaptive approach for spatio-temporal regression based clustering of threshold excesses in a presence of systematically missing covariates. The nonstationary and nonhomogenous behavior of threshold excesses is describes by a set of local stationary GPD models, where the parameters are expressed as regression models, and a non-parametric spatio-temporal hidden switching process. Exploiting nonparametric Finite Element time-series analysis Methodology (FEM) with Bounded Variation of the model parameters (BV) for resolving the spatio-temporal switching process, the approach goes beyond strong a priori assumptions made is standard latent class models like Mixture Models and Hidden Markov Models. Additionally, the presented FEM-BV-GPD provides a pragmatic description of the corresponding spatial dependence structure by grouping together all locations that exhibit similar behavior of the switching process. The performance of the framework is demonstrated on daily accumulated precipitation series over 17 different locations in Switzerland from 1981 till 2013 - showing that the introduced approach allows for a better description of the historical data.
NASA Astrophysics Data System (ADS)
Krysa, Zbigniew; Pactwa, Katarzyna; Wozniak, Justyna; Dudek, Michal
2017-12-01
Geological variability is one of the main factors that has an influence on the viability of mining investment projects and on the technical risk of geology projects. In the current scenario, analyses of economic viability of new extraction fields have been performed for the KGHM Polska Miedź S.A. underground copper mine at Fore Sudetic Monocline with the assumption of constant averaged content of useful elements. Research presented in this article is aimed at verifying the value of production from copper and silver ore for the same economic background with the use of variable cash flows resulting from the local variability of useful elements. Furthermore, the ore economic model is investigated for a significant difference in model value estimated with the use of linear correlation between useful elements content and the height of mine face, and the approach in which model parameters correlation is based upon the copula best matched information capacity criterion. The use of copula allows the simulation to take into account the multi variable dependencies at the same time, thereby giving a better reflection of the dependency structure, which linear correlation does not take into account. Calculation results of the economic model used for deposit value estimation indicate that the correlation between copper and silver estimated with the use of copula generates higher variation of possible project value, as compared to modelling correlation based upon linear correlation. Average deposit value remains unchanged.
A Systematic Review of Health Economics Simulation Models of Chronic Obstructive Pulmonary Disease.
Zafari, Zafar; Bryan, Stirling; Sin, Don D; Conte, Tania; Khakban, Rahman; Sadatsafavi, Mohsen
2017-01-01
Many decision-analytic models with varying structures have been developed to inform resource allocation in chronic obstructive pulmonary disease (COPD). To review COPD models for their adherence to the best practice modeling recommendations and their assumptions regarding important aspects of the natural history of COPD. A systematic search of English articles reporting on the development or application of a decision-analytic model in COPD was performed in MEDLINE, Embase, and citations within reviewed articles. Studies were summarized and evaluated on the basis of their adherence to the Consolidated Health Economic Evaluation Reporting Standards. They were also evaluated for the underlying assumptions about disease progression, heterogeneity, comorbidity, and treatment effects. Forty-nine models of COPD were included. Decision trees and Markov models were the most popular techniques (43 studies). Quality of reporting and adherence to the guidelines were generally high, especially in more recent publications. Disease progression was modeled through clinical staging in most studies. Although most studies (n = 43) had incorporated some aspects of COPD heterogeneity, only 8 reported the results across subgroups. Only 2 evaluations explicitly considered the impact of comorbidities. Treatment effect had been mostly modeled (20) as both reduction in exacerbation rate and improvement in lung function. Many COPD models have been developed, generally with similar structural elements. COPD is highly heterogeneous, and comorbid conditions play an important role in its burden. These important aspects, however, have not been adequately addressed in most of the published models. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Hindt, Maria; Socha, Amanda L.; Zuber, Hélène
2013-01-01
Here we present approaches for using multi-elemental imaging (specifically synchrotron X-ray fluorescence microscopy, SXRF) in ionomics, with examples using the model plant Arabidopsis thaliana. The complexity of each approach depends on the amount of a priori information available for the gene and/or phenotype being studied. Three approaches are outlined, which apply to experimental situations where a gene of interest has been identified but has an unknown phenotype (Phenotyping), an unidentified gene is associated with a known phenotype (Gene Cloning) and finally, a Screening approach, where both gene and phenotype are unknown. These approaches make use of open-access, online databases with which plant molecular genetics researchers working in the model plant Arabidopsis will be familiar, in particular the Ionomics Hub and online transcriptomic databases such as the Arabidopsis eFP browser. The approaches and examples we describe are based on the assumption that altering the expression of ion transporters can result in changes in elemental distribution. We provide methodological details on using elemental imaging to aid or accelerate gene functional characterization by narrowing down the search for candidate genes to the tissues in which elemental distributions are altered. We use synchrotron X-ray microprobes as a technique of choice, which can now be used to image all parts of an Arabidopsis plant in a hydrated state. We present elemental images of leaves, stem, root, siliques and germinating hypocotyls. PMID:23912758
Impacts of Changes of Indoor Air Pressure and Air Exchange Rate in Vapor Intrusion Scenarios
Shen, Rui; Suuberg, Eric M.
2016-01-01
There has, in recent years, been increasing interest in understanding the transport processes of relevance in vapor intrusion of volatile organic compounds (VOCs) into buildings on contaminated sites. These studies have included fate and transport modeling. Most such models have simplified the prediction of indoor air contaminant vapor concentrations by employing a steady state assumption, which often results in difficulties in reconciling these results with field measurements. This paper focuses on two major factors that may be subject to significant transients in vapor intrusion situations, including the indoor air pressure and the air exchange rate in the subject building. A three-dimensional finite element model was employed with consideration of daily and seasonal variations in these factors. From the results, the variations of indoor air pressure and air exchange rate are seen to contribute to significant variations in indoor air contaminant vapor concentrations. Depending upon the assumptions regarding the variations in these parameters, the results are only sometimes consistent with the reports of several orders of magnitude in indoor air concentration variations from field studies. The results point to the need to examine more carefully the interplay of these factors in order to quantitatively understand the variations in potential indoor air exposures. PMID:28090133
Impacts of Changes of Indoor Air Pressure and Air Exchange Rate in Vapor Intrusion Scenarios.
Shen, Rui; Suuberg, Eric M
2016-02-01
There has, in recent years, been increasing interest in understanding the transport processes of relevance in vapor intrusion of volatile organic compounds (VOCs) into buildings on contaminated sites. These studies have included fate and transport modeling. Most such models have simplified the prediction of indoor air contaminant vapor concentrations by employing a steady state assumption, which often results in difficulties in reconciling these results with field measurements. This paper focuses on two major factors that may be subject to significant transients in vapor intrusion situations, including the indoor air pressure and the air exchange rate in the subject building. A three-dimensional finite element model was employed with consideration of daily and seasonal variations in these factors. From the results, the variations of indoor air pressure and air exchange rate are seen to contribute to significant variations in indoor air contaminant vapor concentrations. Depending upon the assumptions regarding the variations in these parameters, the results are only sometimes consistent with the reports of several orders of magnitude in indoor air concentration variations from field studies. The results point to the need to examine more carefully the interplay of these factors in order to quantitatively understand the variations in potential indoor air exposures.
NASA Technical Reports Server (NTRS)
Laxmanan, V.
1985-01-01
A critical review of the present dendritic growth theories and models is presented. Mathematically rigorous solutions to dendritic growth are found to rely on an ad hoc assumption that dendrites grow at the maximum possible growth rate. This hypothesis is found to be in error and is replaced by stability criteria which consider the conditions under which a dendrite tip advances in a stable fashion in a liquid. The important elements of a satisfactory model for dendritic solidification are summarized and a theoretically consistent model for dendritic growth under an imposed thermal gradient is proposed and described. The model is based on the modification of an analysis due to Burden and Hunt (1974) and predicts correctly in all respects, the transition from a dendritic to a planar interface at both very low and very large growth rates.
Towards the quantitative evaluation of visual attention models.
Bylinskii, Z; DeGennaro, E M; Rajalingham, R; Ruda, H; Zhang, J; Tsotsos, J K
2015-11-01
Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluations have made comparison of these models very difficult. Taxonomies have been constructed in an attempt at the organization and classification of models, but are not sufficient at quantifying which classes of models are most capable of explaining available data. At the same time, a multitude of physiological and behavioral findings have been published, measuring various aspects of human and non-human primate visual attention. All of these elements highlight the need to integrate the computational models with the data by (1) operationalizing the definitions of visual attention tasks and (2) designing benchmark datasets to measure success on specific tasks, under these definitions. In this paper, we provide some examples of operationalizing and benchmarking different visual attention tasks, along with the relevant design considerations. Copyright © 2015 Elsevier Ltd. All rights reserved.
Fast Estimation of Strains for Cross-Beams Six-Axis Force/Torque Sensors by Mechanical Modeling
Ma, Junqing; Song, Aiguo
2013-01-01
Strain distributions are crucial criteria of cross-beams six-axis force/torque sensors. The conventional method for calculating the criteria is to utilize Finite Element Analysis (FEA) to get numerical solutions. This paper aims to obtain analytical solutions of strains under the effect of external force/torque in each dimension. Genetic mechanical models for cross-beams six-axis force/torque sensors are proposed, in which deformable cross elastic beams and compliant beams are modeled as quasi-static Timoshenko beam. A detailed description of model assumptions, model idealizations, application scope and model establishment is presented. The results are validated by both numerical FEA simulations and calibration experiments, and test results are found to be compatible with each other for a wide range of geometric properties. The proposed analytical solutions are demonstrated to be an accurate estimation algorithm with higher efficiency. PMID:23686144
Generic distortion model for metrology under optical microscopes
NASA Astrophysics Data System (ADS)
Liu, Xingjian; Li, Zhongwei; Zhong, Kai; Chao, YuhJin; Miraldo, Pedro; Shi, Yusheng
2018-04-01
For metrology under optical microscopes, lens distortion is the dominant source of error. Previous distortion models and correction methods mostly rely on the assumption that parametric distortion models require a priori knowledge of the microscopes' lens systems. However, because of the numerous optical elements in a microscope, distortions can be hardly represented by a simple parametric model. In this paper, a generic distortion model considering both symmetric and asymmetric distortions is developed. Such a model is obtained by using radial basis functions (RBFs) to interpolate the radius and distortion values of symmetric distortions (image coordinates and distortion rays for asymmetric distortions). An accurate and easy to implement distortion correction method is presented. With the proposed approach, quantitative measurement with better accuracy can be achieved, such as in Digital Image Correlation for deformation measurement when used with an optical microscope. The proposed technique is verified by both synthetic and real data experiments.
Equivalence-Equivalence: Matching Stimuli with Same Discriminative Functions
ERIC Educational Resources Information Center
Carpentier, Franck; Smeets, Paul M.; Barnes-Holmes, Dermot
2004-01-01
Previous studies have shown that after being trained on A-B and A-C match-to-sample tasks, adults match not only same-class B and C stimuli (equivalence) but also BC compounds with same-class elements and with different-class elements (BC-BC). The assumption was that the BC-BC performances are based on matching equivalence and nonequivalence…
An axisymmetric single-path model for gas transport in the conducting airways.
Madasu, Srinath; Borhan, All; Ultman, James S
2006-02-01
In conventional one-dimensional single-path models, radially averaged concentration is calculated as a function of time and longitudinal position in the lungs, and coupled convection and diffusion are accounted for with a dispersion coefficient. The axisymmetric single-path model developed in this paper is a two-dimensional model that incorporates convective-diffusion processes in a more fundamental manner by simultaneously solving the Navier-Stokes and continuity equations with the convection-diffusion equation. A single airway path was represented by a series of straight tube segments interconnected by leaky transition regions that provide for flow loss at the airway bifurcations. As a sample application, the model equations were solved by a finite element method to predict the unsteady state dispersion of an inhaled pulse of inert gas along an airway path having dimensions consistent with Weibel's symmetric airway geometry. Assuming steady, incompressible, and laminar flow, a finite element analysis was used to solve for the axisymmetric pressure, velocity and concentration fields. The dispersion calculated from these numerical solutions exhibited good qualitative agreement with the experimental values, but quantitatively was in error by 20%-30% due to the assumption of axial symmetry and the inability of the model to capture the complex recirculatory flows near bifurcations.
NASA Astrophysics Data System (ADS)
Liang, Cheng-Yen
Micromagnetic simulations of magnetoelastic nanostructures traditionally rely on either the Stoner-Wohlfarth model or the Landau-Lifshitz-Gilbert (LLG) model assuming uniform strain (and/or assuming uniform magnetization). While the uniform strain assumption is reasonable when modeling magnetoelastic thin films, this constant strain approach becomes increasingly inaccurate for smaller in-plane nanoscale structures. In this dissertation, a fully-coupled finite element micromagnetic method is developed. The method deals with the micromagnetics, elastodynamics, and piezoelectric effects. The dynamics of magnetization, non-uniform strain distribution, and electric fields are iteratively solved. This more sophisticated modeling technique is critical for guiding the design process of the nanoscale strain-mediated multiferroic elements such as those needed in multiferroic systems. In this dissertation, we will study magnetic property changes (e.g., hysteresis, coercive field, and spin states) due to strain effects in nanostructures. in addition, a multiferroic memory device is studied. The electric-field-driven magnetization switching by applying voltage on patterned electrodes simulation in a nickel memory device is shown in this work. The deterministic control law for the magnetization switching in a nanoring with electric field applied to the patterned electrodes is investigated. Using the patterned electrodes, we show that strain-induced anisotropy is able to be controlled, which changes the magnetization deterministically in a nano-ring.
Theory of Self- vs. Externally-Regulated LearningTM: Fundamentals, Evidence, and Applicability.
de la Fuente-Arias, Jesús
2017-01-01
The Theory of Self- vs. Externally-Regulated Learning TM has integrated the variables of SRL theory, the DEDEPRO model, and the 3P model. This new Theory has proposed: (a) in general, the importance of the cyclical model of individual self-regulation (SR) and of external regulation stemming from the context (ER), as two different and complementary variables, both in combination and in interaction; (b) specifically, in the teaching-learning context, the relevance of different types of combinations between levels of self-regulation (SR) and of external regulation (ER) in the prediction of self-regulated learning (SRL), and of cognitive-emotional achievement. This review analyzes the assumptions, conceptual elements, empirical evidence, benefits and limitations of SRL vs. ERL Theory . Finally, professional fields of application and future lines of research are suggested.
Modelling nonlinearity in piezoceramic transducers: From equations to nonlinear equivalent circuits.
Parenthoine, D; Tran-Huu-Hue, L-P; Haumesser, L; Vander Meulen, F; Lematre, M; Lethiecq, M
2011-02-01
Quadratic nonlinear equations of a piezoelectric element under the assumptions of 1D vibration and weak nonlinearity are derived by the perturbation theory. It is shown that the nonlinear response can be represented by controlled sources that are added to the classical hexapole used to model piezoelectric ultrasonic transducers. As a consequence, equivalent electrical circuits can be used to predict the nonlinear response of a transducer taking into account the acoustic loads on the rear and front faces. A generalisation of nonlinear equivalent electrical circuits to cases including passive layers and propagation media is then proposed. Experimental results, in terms of second harmonic generation, on a coupled resonator are compared to theoretical calculations from the proposed model. Copyright © 2010 Elsevier B.V. All rights reserved.
Bell theorem without inequalities for two spinless particles
NASA Astrophysics Data System (ADS)
Bernstein, Herbert J.; Greenberger, Daniel M.; Horne, Michael A.; Zeilinger, Anton
1993-01-01
We use the Greenberger-Horne-Zeilinger [in Bell's Theorem, Quantum Theory,and Conceptions of the Universe, edited by M. Kafatos (Kluwer Academic, Dordrecht, 1989)] approach to present three demonstrations of the failure of Einstein-Podolsky-Rosen (EPR) [Phys. Rev. 47, 777 (1935)] local realism for the case of two spinless particles in a two-particle interferometer. The original EPR assumptions of locality and reality do not suffice for this. First, we use the EPR assumptions of locality and reality to establish that in a two-particle interferometer, the path taken by each particle is an element of reality. Second, we supplement the EPR premises by the postulate that when the path taken by a particle is an element of reality, all paths not taken are empty. We emphasize that our approach is not applicable to a single-particle interferometer because there the path taken by the particle cannot be established as an element of reality. We point out that there are real conceptual differences between single-particle, two-particle, and multiparticle interferometry.
Modelling consumer intakes of vegetable oils and fats
Tennant, David; Gosling, John Paul
2015-01-01
Vegetable oils and fats make up a significant part of the energy intake in typical European diets. However, their use as ingredients in a diverse range of different foods means that their consumption is often hidden, especially when oils and fats are used for cooking. As a result, there are no reliable estimates of the consumption of different vegetable oils and fats in the diet of European consumers for use in, for example, nutritional assessments or chemical risk assessments. We have developed an innovative model to estimate the consumption of vegetable oils and fats by European Union consumers using the European Union consumption databases and elements of probabilistic modelling. A key feature of the approach is the assessment of uncertainty in the modelling assumptions that can be used to build user confidence and to guide future development. PMID:26160467
Modelling consumer intakes of vegetable oils and fats.
Tennant, David; Gosling, John Paul
2015-01-01
Vegetable oils and fats make up a significant part of the energy intake in typical European diets. However, their use as ingredients in a diverse range of different foods means that their consumption is often hidden, especially when oils and fats are used for cooking. As a result, there are no reliable estimates of the consumption of different vegetable oils and fats in the diet of European consumers for use in, for example, nutritional assessments or chemical risk assessments. We have developed an innovative model to estimate the consumption of vegetable oils and fats by European Union consumers using the European Union consumption databases and elements of probabilistic modelling. A key feature of the approach is the assessment of uncertainty in the modelling assumptions that can be used to build user confidence and to guide future development.
Santillán, Moisés
2003-07-21
A simple model of an oxygen exchanging network is presented and studied. This network's task is to transfer a given oxygen rate from a source to an oxygen consuming system. It consists of a pipeline, that interconnects the oxygen consuming system and the reservoir and of a fluid, the active oxygen transporting element, moving through the pipeline. The network optimal design (total pipeline surface) and dynamics (volumetric flow of the oxygen transporting fluid), which minimize the energy rate expended in moving the fluid, are calculated in terms of the oxygen exchange rate, the pipeline length, and the pipeline cross-section. After the oxygen exchanging network is optimized, the energy converting system is shown to satisfy a 3/4-like allometric scaling law, based upon the assumption that its performance regime is scale invariant as well as on some feasible geometric scaling assumptions. Finally, the possible implications of this result on the allometric scaling properties observed elsewhere in living beings are discussed.
Merritt, J S; Burvill, C R; Pandy, M G; Davies, H M S
2006-08-01
The mechanical environment of the distal limb is thought to be involved in the pathogenesis of many injuries, but has not yet been thoroughly described. To determine the forces and moments experienced by the metacarpus in vivo during walking and also to assess the effect of some simplifying assumptions used in analysis. Strains from 8 gauges adhered to the left metacarpus of one horse were recorded in vivo during walking. Two different models - one based upon the mechanical theory of beams and shafts and, the other, based upon a finite element analysis (FEA) - were used to determine the external loads applied at the ends of the bone. Five orthogonal force and moment components were resolved by the analysis. In addition, 2 orthogonal bending moments were calculated near mid-shaft. Axial force was found to be the major loading component and displayed a bi-modal pattern during the stance phase of the stride. The shaft model of the bone showed good agreement with the FEA model, despite making many simplifying assumptions. A 3-dimensional loading scenario was observed in the metacarpus, with axial force being the major component. These results provide an opportunity to validate mathematical (computer) models of the limb. The data may also assist in the formulation of hypotheses regarding the pathogenesis of injuries to the distal limb.
2.5-D frequency-domain viscoelastic wave modelling using finite-element method
NASA Astrophysics Data System (ADS)
Zhao, Jian-guo; Huang, Xing-xing; Liu, Wei-fang; Zhao, Wei-jun; Song, Jian-yong; Xiong, Bin; Wang, Shang-xu
2017-10-01
2-D seismic modelling has notable dynamic information discrepancies with field data because of the implicit line-source assumption, whereas 3-D modelling suffers from a huge computational burden. The 2.5-D approach is able to overcome both of the aforementioned limitations. In general, the earth model is treated as an elastic material, but the real media is viscous. In this study, we develop an accurate and efficient frequency-domain finite-element method (FEM) for modelling 2.5-D viscoelastic wave propagation. To perform the 2.5-D approach, we assume that the 2-D viscoelastic media are based on the Kelvin-Voigt rheological model and a 3-D point source. The viscoelastic wave equation is temporally and spatially Fourier transformed into the frequency-wavenumber domain. Then, we systematically derive the weak form and its spatial discretization of 2.5-D viscoelastic wave equations in the frequency-wavenumber domain through the Galerkin weighted residual method for FEM. Fixing a frequency, the 2-D problem for each wavenumber is solved by FEM. Subsequently, a composite Simpson formula is adopted to estimate the inverse Fourier integration to obtain the 3-D wavefield. We implement the stiffness reduction method (SRM) to suppress artificial boundary reflections. The results show that this absorbing boundary condition is valid and efficient in the frequency-wavenumber domain. Finally, three numerical models, an unbounded homogeneous medium, a half-space layered medium and an undulating topography medium, are established. Numerical results validate the accuracy and stability of 2.5-D solutions and present the adaptability of finite-element method to complicated geographic conditions. The proposed 2.5-D modelling strategy has the potential to address modelling studies on wave propagation in real earth media in an accurate and efficient way.
NASA Astrophysics Data System (ADS)
Narasimhan, Saramati; Weis, Jared A.; Godage, Isuru S.; Webster, Robert; Weaver, Kyle; Miga, Michael I.
2017-03-01
Intracerebral hemorrhages (ICHs) occur in 24 out of 100,000 people annually and have high morbidity and mortality rates. The standard treatment is conservative. We hypothesize that a patient-specific, mechanical model coupled with a robotic steerable needle, used to aspirate a hematoma, would result in a minimally invasive approach to ICH management that will improve outcomes. As a preliminary study, three realizations of a tissue aspiration framework are explored within the context of a biphasic finite element model based on Biot's consolidation theory. Short-term transient effects were neglected in favor of steady state formulation. The Galerkin Method of Weighted Residuals was used to solve coupled partial differential equations using linear basis functions, and assumptions of plane strain and homogeneous isotropic properties. All aspiration models began with the application of aspiration pressure sink(s), calculated pressures and displacements, and the use of von Mises stresses within a tissue failure criterion. With respect to aspiration strategies, one model employs an element-deletion strategy followed by aspiration redeployment on the remaining grid, while the other approaches use principles of superposition on a fixed grid. While the element-deletion approach had some intuitive appeal, without incorporating a dynamic grid strategy, it evolved into a less realistic result. The superposition strategy overcame this, but would require empirical investigations to determine the optimum distribution of aspiration sinks to match material removal. While each modeling framework demonstrated some promise, the superposition method's ease of computation, ability to incorporate the surgical plan, and better similarity to existing empirical observational data, makes it favorable.
Fission product ion exchange between zeolite and a molten salt
NASA Astrophysics Data System (ADS)
Gougar, Mary Lou D.
The electrometallurgical treatment of spent nuclear fuel (SNF) has been developed at Argonne National Laboratory (ANL) and has been demonstrated through processing the sodium-bonded SNF from the Experimental Breeder Reactor-II in Idaho. In this process, components of the SNF, including U and species more chemically active than U, are oxidized into a bath of lithium-potassium chloride (LiCl-KCl) eutectic molten salt. Uranium is removed from the salt solution by electrochemical reduction. The noble metals and inactive fission products from the SNF remain as solids and are melted into a metal waste form after removal from the molten salt bath. The remaining salt solution contains most of the fission products and transuranic elements from the SNF. One technique that has been identified for removing these fission products and extending the usable life of the molten salt is ion exchange with zeolite A. A model has been developed and tested for its ability to describe the ion exchange of fission product species between zeolite A and a molten salt bath used for pyroprocessing of spent nuclear fuel. The model assumes (1) a system at equilibrium, (2) immobilization of species from the process salt solution via both ion exchange and occlusion in the zeolite cage structure, and (3) chemical independence of the process salt species. The first assumption simplifies the description of this physical system by eliminating the complications of including time-dependent variables. An equilibrium state between species concentrations in the two exchange phases is a common basis for ion exchange models found in the literature. Assumption two is non-simplifying with respect to the mathematical expression of the model. Two Langmuir-like fractional terms (one for each mode of immobilization) compose each equation describing each salt species. The third assumption offers great simplification over more traditional ion exchange modeling, in which interaction of solvent species with each other is considered. (Abstract shortened by UMI.)
Crystal plasticity modeling of β phase deformation in Ti-6Al-4V
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moore, John A.; Barton, Nathan R.; Florando, Jeff
Ti-6Al-4V is an alloy of titanium that dominates titanium usage in applications ranging from mass-produced consumer goods to high-end aerospace parts. The material's structure on a microscale is known to affect its mechanical properties but these effects are not fully understood. Specifically, this work will address the effects of low volume fraction intergranular β phase on Ti-6Al-4V's mechanical response during the transition from elastic to plastic deformation. A crystal plasticity-based finite element model is used to fully resolve the deformation of the β phase for the first time. This high fidelity model captures mechanisms difficult to access via experiments ormore » lower fidelity models. Lastly, the results are used to assess lower fidelity modeling assumptions and identify phenomena that have ramifications for failure of the material.« less
Electricity Market Manipulation: How Behavioral Modeling Can Help Market Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallo, Giulia
The question of how to best design electricity markets to integrate variable and uncertain renewable energy resources is becoming increasingly important as more renewable energy is added to electric power systems. Current markets were designed based on a set of assumptions that are not always valid in scenarios of high penetrations of renewables. In a future where renewables might have a larger impact on market mechanisms as well as financial outcomes, there is a need for modeling tools and power system modeling software that can provide policy makers and industry actors with more realistic representations of wholesale markets. One optionmore » includes using agent-based modeling frameworks. This paper discusses how key elements of current and future wholesale power markets can be modeled using an agent-based approach and how this approach may become a useful paradigm that researchers can employ when studying and planning for power systems of the future.« less
Crystal plasticity modeling of β phase deformation in Ti-6Al-4V
Moore, John A.; Barton, Nathan R.; Florando, Jeff; ...
2017-08-24
Ti-6Al-4V is an alloy of titanium that dominates titanium usage in applications ranging from mass-produced consumer goods to high-end aerospace parts. The material's structure on a microscale is known to affect its mechanical properties but these effects are not fully understood. Specifically, this work will address the effects of low volume fraction intergranular β phase on Ti-6Al-4V's mechanical response during the transition from elastic to plastic deformation. A crystal plasticity-based finite element model is used to fully resolve the deformation of the β phase for the first time. This high fidelity model captures mechanisms difficult to access via experiments ormore » lower fidelity models. Lastly, the results are used to assess lower fidelity modeling assumptions and identify phenomena that have ramifications for failure of the material.« less
Brain shift computation using a fully nonlinear biomechanical model.
Wittek, Adam; Kikinis, Ron; Warfield, Simon K; Miller, Karol
2005-01-01
In the present study, fully nonlinear (i.e. accounting for both geometric and material nonlinearities) patient specific finite element brain model was applied to predict deformation field within the brain during the craniotomy-induced brain shift. Deformation of brain surface was used as displacement boundary conditions. Application of the computed deformation field to align (i.e. register) the preoperative images with the intraoperative ones indicated that the model very accurately predicts the displacements of gravity centers of the lateral ventricles and tumor even for very limited information about the brain surface deformation. These results are sufficient to suggest that nonlinear biomechanical models can be regarded as one possible way of complementing medical image processing techniques when conducting nonrigid registration. Important advantage of such models over the linear ones is that they do not require unrealistic assumptions that brain deformations are infinitesimally small and brain tissue stress-strain relationship is linear.
A Critical Examination of the DOD’s Business Management Modernization Program
2005-05-01
Program (BMMP) is a key element of the DoD’s ongoing efforts to transform itself. This paper argues that the BMMP needs to be fundamentally reoriented...communication role it plays in the defense- transformation effort. Introduction The core assumption underlying the DoD’s Business Management... government activities. That this is a core assumption for the BMMP is borne out by the fact that the program’s primary objective is to produce
In vivo bone strain and finite element modeling of a rhesus macaque mandible during mastication.
Panagiotopoulou, Olga; Iriarte-Diaz, José; Wilshin, Simon; Dechow, Paul C; Taylor, Andrea B; Mehari Abraha, Hyab; Aljunid, Sharifah F; Ross, Callum F
2017-10-01
Finite element analysis (FEA) is a commonly used tool in musculoskeletal biomechanics and vertebrate paleontology. The accuracy and precision of finite element models (FEMs) are reliant on accurate data on bone geometry, muscle forces, boundary conditions and tissue material properties. Simplified modeling assumptions, due to lack of in vivo experimental data on material properties and muscle activation patterns, may introduce analytical errors in analyses where quantitative accuracy is critical for obtaining rigorous results. A subject-specific FEM of a rhesus macaque mandible was constructed, loaded and validated using in vivo data from the same animal. In developing the model, we assessed the impact on model behavior of variation in (i) material properties of the mandibular trabecular bone tissue and teeth; (ii) constraints at the temporomandibular joint and bite point; and (iii) the timing of the muscle activity used to estimate the external forces acting on the model. The best match between the FEA simulation and the in vivo experimental data resulted from modeling the trabecular tissue with an isotropic and homogeneous Young's modulus and Poisson's value of 10GPa and 0.3, respectively; constraining translations along X,Y, Z axes in the chewing (left) side temporomandibular joint, the premolars and the m 1 ; constraining the balancing (right) side temporomandibular joint in the anterior-posterior and superior-inferior axes, and using the muscle force estimated at time of maximum strain magnitude in the lower lateral gauge. The relative strain magnitudes in this model were similar to those recorded in vivo for all strain locations. More detailed analyses of mandibular strain patterns during the power stroke at different times in the chewing cycle are needed. Copyright © 2017. Published by Elsevier GmbH.
In vivo bone strain and finite element modeling of a rhesus macaque mandible during mastication☆
Panagiotopoulou, Olga; Iriarte-Diaz, José; Wilshin, Simon; Dechow, Paul C.; Taylor, Andrea B.; Abraha, Hyab Mehari; Aljunid, Sharifah F.; Ross, Callum F.
2018-01-01
Finite element analysis (FEA) is a commonly used tool in musculoskeletal biomechanics and vertebrate paleontology. The accuracy and precision of finite element models (FEMs) are reliant on accurate data on bone geometry, muscle forces, boundary conditions and tissue material properties. Simplified modeling assumptions, due to lack of in vivo experimental data on material properties and muscle activation patterns, may introduce analytical errors in analyses where quantitative accuracy is critical for obtaining rigorous results. A subject-specific FEM of a rhesus macaque mandible was constructed, loaded and validated using in vivo data from the same animal. In developing the model, we assessed the impact on model behavior of variation in (i) material properties of the mandibular trabecular bone tissue and teeth; (ii) constraints at the temporomandibular joint and bite point; and (iii) the timing of the muscle activity used to estimate the external forces acting on the model. The best match between the FEA simulation and the in vivo experimental data resulted from modeling the trabecular tissue with an isotropic and homogeneous Young’s modulus and Poisson’s value of 10 GPa and 0.3, respectively; constraining translations along X,Y, Z axes in the chewing (left) side temporomandibular joint, the premolars and the m1; constraining the balancing (right) side temporomandibular joint in the anterior-posterior and superior-inferior axes, and using the muscle force estimated at time of maximum strain magnitude in the lower lateral gauge. The relative strain magnitudes in this model were similar to those recorded in vivo for all strain locations. More detailed analyses of mandibular strain patterns during the power stroke at different times in the chewing cycle are needed. PMID:29037463
NASA Technical Reports Server (NTRS)
Carlson, F. M.; Chin, L.-Y.; Fripp, A. L.; Crouch, R. K.
1982-01-01
The effect of solid-liquid interface shape on lateral solute segregation during steady-state unidirectional solidification of a binary mixture is calculated under the assumption of no convection in the liquid. A finite element technique is employed to compute the concentration field in the liquid and the lateral segregation in the solid with a curved boundary between the liquid and solid phases. The computational model is constructed assuming knowledge of the solid-liquid interface shape; no attempt is made to relate this shape to the thermal field. The influence of interface curvature on the lateral compositional variation is investigated over a range of system parameters including diffusivity, growth speed, distribution coefficient, and geometric factors of the system. In the limiting case of a slightly nonplanar interface, numerical results from the finite element technique are in good agreement with the analytical solutions of Coriell and Sekerka obtained by using linear theory. For the general case of highly non-planar interface shapes, the linear theory fails and the concentration field in the liquid as well as the lateral solute segregation in the solid can be calculated by using the finite element method.
Bayesian Hierarchical Grouping: perceptual grouping as mixture estimation
Froyen, Vicky; Feldman, Jacob; Singh, Manish
2015-01-01
We propose a novel framework for perceptual grouping based on the idea of mixture models, called Bayesian Hierarchical Grouping (BHG). In BHG we assume that the configuration of image elements is generated by a mixture of distinct objects, each of which generates image elements according to some generative assumptions. Grouping, in this framework, means estimating the number and the parameters of the mixture components that generated the image, including estimating which image elements are “owned” by which objects. We present a tractable implementation of the framework, based on the hierarchical clustering approach of Heller and Ghahramani (2005). We illustrate it with examples drawn from a number of classical perceptual grouping problems, including dot clustering, contour integration, and part decomposition. Our approach yields an intuitive hierarchical representation of image elements, giving an explicit decomposition of the image into mixture components, along with estimates of the probability of various candidate decompositions. We show that BHG accounts well for a diverse range of empirical data drawn from the literature. Because BHG provides a principled quantification of the plausibility of grouping interpretations over a wide range of grouping problems, we argue that it provides an appealing unifying account of the elusive Gestalt notion of Prägnanz. PMID:26322548
Mathematical Modeling: Are Prior Experiences Important?
ERIC Educational Resources Information Center
Czocher, Jennifer A.; Moss, Diana L.
2017-01-01
Why are math modeling problems the source of such frustration for students and teachers? The conceptual understanding that students have when engaging with a math modeling problem varies greatly. They need opportunities to make their own assumptions and design the mathematics to fit these assumptions (CCSSI 2010). Making these assumptions is part…
Assumptions to the Annual Energy Outlook
2017-01-01
This report presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook, including general features of the model structure, assumptions concerning energy markets, and the key input data and parameters that are the most significant in formulating the model results.
A comparison of experimental and calculated thin-shell leading-edge buckling due to thermal stresses
NASA Technical Reports Server (NTRS)
Jenkins, Jerald M.
1988-01-01
High-temperature thin-shell leading-edge buckling test data are analyzed using NASA structural analysis (NASTRAN) as a finite element tool for predicting thermal buckling characteristics. Buckling points are predicted for several combinations of edge boundary conditions. The problem of relating the appropriate plate area to the edge stress distribution and the stress gradient is addressed in terms of analysis assumptions. Local plasticity was found to occur on the specimen analyzed, and this tended to simplify the basic problem since it effectively equalized the stress gradient from loaded edge to loaded edge. The initial loading was found to be difficult to select for the buckling analysis because of the transient nature of thermal stress. Multiple initial model loadings are likely required for complicated thermal stress time histories before a pertinent finite element buckling analysis can be achieved. The basic mode shapes determined from experimentation were correctly identified from computation.
A constitutive model and numerical simulation of sintering processes at macroscopic level
NASA Astrophysics Data System (ADS)
Wawrzyk, Krzysztof; Kowalczyk, Piotr; Nosewicz, Szymon; Rojek, Jerzy
2018-01-01
This paper presents modelling of both single and double-phase powder sintering processes at the macroscopic level. In particular, its constitutive formulation, numerical implementation and numerical tests are described. The macroscopic constitutive model is based on the assumption that the sintered material is a continuous medium. The parameters of the constitutive model for material under sintering are determined by simulation of sintering at the microscopic level using a micro-scale model. Numerical tests were carried out for a cylindrical specimen under hydrostatic and uniaxial pressure. Results of macroscopic analysis are compared against the microscopic model results. Moreover, numerical simulations are validated by comparison with experimental results. The simulations and preparation of the model are carried out by Abaqus FEA - a software for finite element analysis and computer-aided engineering. A mechanical model is defined by the user procedure "Vumat" which is developed by the first author in Fortran programming language. Modelling presented in the paper can be used to optimize and to better understand the process.
TESS Lens-Bezel Assembly Modal Testing
NASA Technical Reports Server (NTRS)
Dilworth, Brandon J.; Karlicek, Alexandra
2017-01-01
The Transiting Exoplanet Survey Satellite (TESS) program, led by the Kavli Institute for Astrophysics and Space Research at the Massachusetts Institute of Technology (MIT) will be the first-ever spaceborne all-sky transit survey. MIT Lincoln Laboratory is responsible for the cameras, including the lens assemblies, detector assemblies, lens hoods, and camera mounts. TESS is scheduled to be launched in August of 2017 with the primary goal to detect small planets with bright host starts in the solar neighborhood, so that detailed characterizations of the planets and their atmospheres can be performed. The TESS payload consists of four identical cameras and a data handling unit. Each camera consists of a lens assembly with seven optical elements and a detector assembly with four charge-coupled devices (CCDs) including their associated electronics. The optical prescription requires that several of the lenses are in close proximity to a neighboring element. A finite element model (FEM) was developed to estimate the relative deflections between each lens-bezel assembly under launch loads to predict that there are adequate clearances preventing the lenses from making contact. Modal tests using non-contact response measurements were conducted to experimentally estimate the modal parameters of the lens-bezel assembly, and used to validate the initial FEM assumptions. Key Words Non-contact measurements, modal analysis, model validation
Deflection Analysis of the Space Shuttle External Tank Door Drive Mechanism
NASA Technical Reports Server (NTRS)
Tosto, Michael A.; Trieu, Bo C.; Evernden, Brent A.; Hope, Drew J.; Wong, Kenneth A.; Lindberg, Robert E.
2008-01-01
Upon observing an abnormal closure of the Space Shuttle s External Tank Doors (ETD), a dynamic model was created in MSC/ADAMS to conduct deflection analyses of the Door Drive Mechanism (DDM). For a similar analysis, the traditional approach would be to construct a full finite element model of the mechanism. The purpose of this paper is to describe an alternative approach that models the flexibility of the DDM using a lumped parameter approximation to capture the compliance of individual parts within the drive linkage. This approach allows for rapid construction of a dynamic model in a time-critical setting, while still retaining the appropriate equivalent stiffness of each linkage component. As a validation of these equivalent stiffnesses, finite element analysis (FEA) was used to iteratively update the model towards convergence. Following this analysis, deflections recovered from the dynamic model can be used to calculate stress and classify each component s deformation as either elastic or plastic. Based on the modeling assumptions used in this analysis and the maximum input forcing condition, two components in the DDM show a factor of safety less than or equal to 0.5. However, to accurately evaluate the induced stresses, additional mechanism rigging information would be necessary to characterize the input forcing conditions. This information would also allow for the classification of stresses as either elastic or plastic.
Simulation of VLF chorus emissions in the magnetosphere and comparison with THEMIS spacecraft data
NASA Astrophysics Data System (ADS)
Demekhov, A. G.; Taubenschuss, U.; Santolík, O.
2017-01-01
We present results of numerical simulations of VLF chorus emissions based on the backward wave oscillator model and compare them with Time History of Events and Macroscale Interactions during Substorms (THEMIS) spacecraft data from the equatorial chorus source region on the early morning side at a radial distance of 6 Earth radii. Specific attention is paid to the choice of simulation parameters based on experimental data. We show that with known parameters of the geomagnetic field, plasma density, and the initial wave frequency, one can successfully reproduce individual chorus elements in the simulation. In particular, the measured growth rate, wave amplitude, and frequency drift rate are in agreement with observed values. The characteristic interval between the elements has a mismatch of factor 2. The agreement becomes perfect if we assume that the inhomogeneity scale of the magnetic field along the field line is half of that obtained from the T96 model. Such an assumption can be justified since the T96 model does not fit well for the time of chorus observations, and there is a shear in the observed field which indicates the presence of local currents.
Hybrid-Wing-Body Vehicle Composite Fuselage Analysis and Case Study
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
2014-01-01
Recent progress in the structural analysis of a Hybrid Wing-Body (HWB) fuselage concept is presented with the objective of structural weight reduction under a set of critical design loads. This pressurized efficient HWB fuselage design is presently being investigated by the NASA Environmentally Responsible Aviation (ERA) project in collaboration with the Boeing Company, Huntington Beach. The Pultruded Rod-Stiffened Efficient Unitized Structure (PRSEUS) composite concept, developed at the Boeing Company, is approximately modeled for an analytical study and finite element analysis. Stiffened plate linear theories are employed for a parametric case study. Maximum deflection and stress levels are obtained with appropriate assumptions for a set of feasible stiffened panel configurations. An analytical parametric case study is presented to examine the effects of discrete stiffener spacing and skin thickness on structural weight, deflection and stress. A finite-element model (FEM) of an integrated fuselage section with bulkhead is developed for an independent assessment. Stress analysis and scenario based case studies are conducted for design improvement. The FEM model specific weight of the improved fuselage concept is computed and compared to previous studies, in order to assess the relative weight/strength advantages of this advanced composite airframe technology
NASA Astrophysics Data System (ADS)
Virella, Juan C.; Prato, Carlos A.; Godoy, Luis A.
2008-05-01
The influence of nonlinear wave theory on the sloshing natural periods and their modal pressure distributions are investigated for rectangular tanks under the assumption of two-dimensional behavior. Natural periods and mode shapes are computed and compared for both linear wave theory (LWT) and nonlinear wave theory (NLWT) models, using the finite element package ABAQUS. Linear wave theory is implemented in an acoustic model, whereas a plane strain problem with large displacements is used in NLWT. Pressure distributions acting on the tank walls are obtained for the first three sloshing modes using both linear and nonlinear wave theory. It is found that the nonlinearity does not have significant effects on the natural sloshing periods. For the sloshing pressures on the tank walls, different distributions were found using linear and nonlinear wave theory models. However, in all cases studied, the linear wave theory conservatively estimated the magnitude of the pressure distribution, whereas larger pressures resultant heights were obtained when using the nonlinear theory. It is concluded that the nonlinearity of the surface wave does not have major effects in the pressure distribution on the walls for rectangular tanks.
Comparison of screening-level and Monte Carlo approaches for wildlife food web exposure modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pastorok, R.; Butcher, M.; LaTier, A.
1995-12-31
The implications of using quantitative uncertainty analysis (e.g., Monte Carlo) and site-specific tissue residue data for wildlife exposure modeling were examined with data on trace elements at the Clark Fork River Superfund Site. Exposure of white-tailed deer, red fox, and American kestrel was evaluated using three approaches. First, a screening-level exposure model was based on conservative estimates of exposure parameters, including estimates of dietary residues derived from bioconcentration factors (BCFs) and soil chemistry. A second model without Monte Carlo was based on site-specific data for tissue residues of trace elements (As, Cd, Cu, Pb, Zn) in key dietary species andmore » plausible assumptions for habitat spatial segmentation and other exposure parameters. Dietary species sampled included dominant grasses (tufted hairgrass and redtop), willows, alfalfa, barley, invertebrates (grasshoppers, spiders, and beetles), and deer mice. Third, the Monte Carlo analysis was based on the site-specific residue data and assumed or estimated distributions for exposure parameters. Substantial uncertainties are associated with several exposure parameters, especially BCFS, such that exposure and risk may be greatly overestimated in screening-level approaches. The results of the three approaches are compared with respect to realism, practicality, and data gaps. Collection of site-specific data on trace elements concentrations in plants and animals eaten by the target wildlife receptors is a cost-effective way to obtain realistic estimates of exposure. Implications of the results for exposure and risk estimates are discussed relative to use of wildlife exposure modeling and evaluation of remedial actions at Superfund sites.« less
Model Considerations for Memory-based Automatic Music Transcription
NASA Astrophysics Data System (ADS)
Albrecht, Štěpán; Šmídl, Václav
2009-12-01
The problem of automatic music description is considered. The recorded music is modeled as a superposition of known sounds from a library weighted by unknown weights. Similar observation models are commonly used in statistics and machine learning. Many methods for estimation of the weights are available. These methods differ in the assumptions imposed on the weights. In Bayesian paradigm, these assumptions are typically expressed in the form of prior probability density function (pdf) on the weights. In this paper, commonly used assumptions about music signal are summarized and complemented by a new assumption. These assumptions are translated into pdfs and combined into a single prior density using combination of pdfs. Validity of the model is tested in simulation using synthetic data.
Structural Analysis of the Redesigned Ice/Frost Ramp Bracket
NASA Technical Reports Server (NTRS)
Phillips, D. R.; Dawicke, D. S.; Gentz, S. J.; Roberts, P. W.; Raju, I. S.
2007-01-01
This paper describes the interim structural analysis of a redesigned Ice/Frost Ramp bracket for the Space Shuttle External Tank (ET). The proposed redesigned bracket consists of mounts for attachment to the ET wall, supports for the electronic/instrument cables and propellant repressurization lines that run along the ET, an upper plate, a lower plate, and complex bolted connections. The eight nominal bolted connections are considered critical in the summarized structural analysis. Each bolted connection contains a bolt, a nut, four washers, and a non-metallic spacer and block that are designed for thermal insulation. A three-dimensional (3D) finite element model of the bracket is developed using solid 10-node tetrahedral elements. The loading provided by the ET Project is used in the analysis. Because of the complexities associated with accurately modeling the bolted connections in the bracket, the analysis is performed using a global/local analysis procedure. The finite element analysis of the bracket identifies one of the eight bolted connections as having high stress concentrations. A local area of the bracket surrounding this bolted connection is extracted from the global model and used as a local model. Within the local model, the various components of the bolted connection are refined, and contact is introduced along the appropriate interfaces determined by the analysts. The deformations from the global model are applied as boundary conditions to the local model. The results from the global/local analysis show that while the stresses in the bolts are well within yield, the spacers fail due to compression. The primary objective of the interim structural analysis is to show concept viability for static thermal testing. The proposed design concept would undergo continued design optimization to address the identified analytical assumptions and concept shortcomings, assuming successful thermal testing.
Urdapilleta, E; Bellotti, M; Bonetto, F J
2006-10-01
In this paper we present a model to describe the electrical properties of a confluent cell monolayer cultured on gold microelectrodes to be used with electric cell-substrate impedance sensing technique. This model was developed from microscopic considerations (distributed effects), and by assuming that the monolayer is an element with mean electrical characteristics (specific lumped parameters). No assumptions were made about cell morphology. The model has only three adjustable parameters. This model and other models currently used for data analysis are compared with data we obtained from electrical measurements of confluent monolayers of Madin-Darby Canine Kidney cells. One important parameter is the cell-substrate height and we found that estimates of this magnitude strongly differ depending on the model used for the analysis. We analyze the origin of the discrepancies, concluding that the estimates from the different models can be considered as limits for the true value of the cell-substrate height.
Development of state and transition model assumptions used in National Forest Plan revision
Eric B. Henderson
2008-01-01
State and transition models are being utilized in forest management analysis processes to evaluate assumptions about disturbances and succession. These models assume valid information about seral class successional pathways and timing. The Forest Vegetation Simulator (FVS) was used to evaluate seral class succession assumptions for the Hiawatha National Forest in...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Nathanael J. K.; Gearhart, Jared Lee; Jones, Dean A.
Currently, much of protection planning is conducted separately for each infrastructure and hazard. Limited funding requires a balance of expenditures between terrorism and natural hazards based on potential impacts. This report documents the results of a Laboratory Directed Research & Development (LDRD) project that created a modeling framework for investment planning in interdependent infrastructures focused on multiple hazards, including terrorism. To develop this framework, three modeling elements were integrated: natural hazards, terrorism, and interdependent infrastructures. For natural hazards, a methodology was created for specifying events consistent with regional hazards. For terrorism, we modeled the terrorists actions based on assumptions regardingmore » their knowledge, goals, and target identification strategy. For infrastructures, we focused on predicting post-event performance due to specific terrorist attacks and natural hazard events, tempered by appropriate infrastructure investments. We demonstrate the utility of this framework with various examples, including protection of electric power, roadway, and hospital networks.« less
NASA Astrophysics Data System (ADS)
Junker, Philipp; Jaeger, Stefanie; Kastner, Oliver; Eggeler, Gunther; Hackl, Klaus
2015-07-01
In this work, we present simulations of shape memory alloys which serve as first examples demonstrating the predicting character of energy-based material models. We begin with a theoretical approach for the derivation of the caloric parts of the Helmholtz free energy. Afterwards, experimental results for DSC measurements are presented. Then, we recall a micromechanical model based on the principle of the minimum of the dissipation potential for the simulation of polycrystalline shape memory alloys. The previously determined caloric parts of the Helmholtz free energy close the set of model parameters without the need of parameter fitting. All quantities are derived directly from experiments. Finally, we compare finite element results for tension tests to experimental data and show that the model identified by thermal measurements can predict mechanically induced phase transformations and thus rationalize global material behavior without any further assumptions.
Three-dimensional electrical impedance tomography based on the complete electrode model.
Vauhkonen, P J; Vauhkonen, M; Savolainen, T; Kaipio, J P
1999-09-01
In electrical impedance tomography an approximation for the internal resistivity distribution is computed based on the knowledge of the injected currents and measured voltages on the surface of the body. It is often assumed that the injected currents are confined to the two-dimensional (2-D) electrode plane and the reconstruction is based on 2-D assumptions. However, the currents spread out in three dimensions and, therefore, off-plane structures have significant effect on the reconstructed images. In this paper we propose a finite element-based method for the reconstruction of three-dimensional resistivity distributions. The proposed method is based on the so-called complete electrode model that takes into account the presence of the electrodes and the contact impedances. Both the forward and the inverse problems are discussed and results from static and dynamic (difference) reconstructions with real measurement data are given. It is shown that in phantom experiments with accurate finite element computations it is possible to obtain static images that are comparable with difference images that are reconstructed from the same object with the empty (saline filled) tank as a reference.
Rotational Stiffness of Precast Beam-Column Connection using Finite Element Method
NASA Astrophysics Data System (ADS)
Hashim, N.; Agarwal, J.
2018-04-01
Current design practice in structural analysis is to assume the connection as pinned or rigid, however this cannot be relied upon for safety against collapse because during services the actual connection reacts differently where the connection has rotated in relevance. This situation may lead to different reactions and consequently affect design results and other frame responses. In precast concrete structures, connections play an important part in ensuring the safety of the whole structure. Thus, investigates on the actual connection behavior by construct the moment-rotation relationship is significant. Finite element (FE) method is chosen for modeling a 3-dimensional beam-column connection. The model is built in symmetry to reduce analysis time. Results demonstrate that precast billet connection is categorized as semi-rigid connection with Sini of 23,138kNm/rad. This is definitely different from the assumption of pinned or rigid connection used in design practice. Validation were made by comparing with mathematical equation and small differences were achieved that led to the conclusion where precast billet connection using FE method is acceptable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jonasson, O.; Karimi, F.; Knezevic, I.
2016-08-01
We derive a Markovian master equation for the single-electron density matrix, applicable to quantum cascade lasers (QCLs). The equation conserves the positivity of the density matrix, includes off-diagonal elements (coherences) as well as in-plane dynamics, and accounts for electron scattering with phonons and impurities. We use the model to simulate a terahertz-frequency QCL, and compare the results with both experiment and simulation via nonequilibrium Green's functions (NEGF). We obtain very good agreement with both experiment and NEGF when the QCL is biased for optimal lasing. For the considered device, we show that the magnitude of coherences can be a significantmore » fraction of the diagonal matrix elements, which demonstrates their importance when describing THz QCLs. We show that the in-plane energy distribution can deviate far from a heated Maxwellian distribution, which suggests that the assumption of thermalized subbands in simplified density-matrix models is inadequate. As a result, we also show that the current density and subband occupations relax towards their steady-state values on very different time scales.« less
Theory of Self- vs. Externally-Regulated LearningTM: Fundamentals, Evidence, and Applicability
de la Fuente-Arias, Jesús
2017-01-01
The Theory of Self- vs. Externally-Regulated LearningTM has integrated the variables of SRL theory, the DEDEPRO model, and the 3P model. This new Theory has proposed: (a) in general, the importance of the cyclical model of individual self-regulation (SR) and of external regulation stemming from the context (ER), as two different and complementary variables, both in combination and in interaction; (b) specifically, in the teaching-learning context, the relevance of different types of combinations between levels of self-regulation (SR) and of external regulation (ER) in the prediction of self-regulated learning (SRL), and of cognitive-emotional achievement. This review analyzes the assumptions, conceptual elements, empirical evidence, benefits and limitations of SRL vs. ERL Theory. Finally, professional fields of application and future lines of research are suggested. PMID:29033872
Two-Layer Viscous Shallow-Water Equations and Conservation Laws
NASA Astrophysics Data System (ADS)
Kanayama, Hiroshi; Dan, Hiroshi
In our previous papers, the two-layer viscous shallow-water equations were derived from the three-dimensional Navier-Stokes equations under the hydrostatic assumption. Also, it was noted that the combination of upper and lower equations in the two-layer model produces the classical one-layer equations if the density of each layer is the same. Then, the two-layer equations were approximated by a finite element method which followed our numerical scheme established for the one-layer model in 1978. Also, it was numerically demonstrated that the interfacial instability generated when the densities are the same can be eliminated by providing a sufficient density difference. In this paper, we newly show that conservation laws are still valid in the two-layer model. Also, we show results of a new physical experiment for the interfacial instability.
Finite element techniques in computational time series analysis of turbulent flows
NASA Astrophysics Data System (ADS)
Horenko, I.
2009-04-01
In recent years there has been considerable increase of interest in the mathematical modeling and analysis of complex systems that undergo transitions between several phases or regimes. Such systems can be found, e.g., in weather forecast (transitions between weather conditions), climate research (ice and warm ages), computational drug design (conformational transitions) and in econometrics (e.g., transitions between different phases of the market). In all cases, the accumulation of sufficiently detailed time series has led to the formation of huge databases, containing enormous but still undiscovered treasures of information. However, the extraction of essential dynamics and identification of the phases is usually hindered by the multidimensional nature of the signal, i.e., the information is "hidden" in the time series. The standard filtering approaches (like f.~e. wavelets-based spectral methods) have in general unfeasible numerical complexity in high-dimensions, other standard methods (like f.~e. Kalman-filter, MVAR, ARCH/GARCH etc.) impose some strong assumptions about the type of the underlying dynamics. Approach based on optimization of the specially constructed regularized functional (describing the quality of data description in terms of the certain amount of specified models) will be introduced. Based on this approach, several new adaptive mathematical methods for simultaneous EOF/SSA-like data-based dimension reduction and identification of hidden phases in high-dimensional time series will be presented. The methods exploit the topological structure of the analysed data an do not impose severe assumptions on the underlying dynamics. Special emphasis will be done on the mathematical assumptions and numerical cost of the constructed methods. The application of the presented methods will be first demonstrated on a toy example and the results will be compared with the ones obtained by standard approaches. The importance of accounting for the mathematical assumptions used in the analysis will be pointed up in this example. Finally, applications to analysis of meteorological and climate data will be presented.
Assumptions to the annual energy outlook 1999 : with projections to 2020
DOT National Transportation Integrated Search
1998-12-16
This paper presents the major assumptions of the National Energy Modeling System (NEMS) used to : generate the projections in the Annual Energy Outlook 19991 (AEO99), including general features of : the model structure, assumptions concerning energy ...
Assumptions to the annual energy outlook 2000 : with projections to 2020
DOT National Transportation Integrated Search
2000-01-01
This paper presents the major assumptions of the National Energy Modeling System (NEMS) used to : generate the projections in the Annual Energy Outlook 20001 (AEO2000), including general features of : the model structure, assumptions concerning energ...
Assumptions to the annual energy outlook 2001 : with projections to 2020
DOT National Transportation Integrated Search
2000-12-01
This report presents the major assumptions of the National Energy Modeling System (NEMS) used to : generate the projections in the Annual Energy Outlook 20011 (AEO2001), including general features of : the model structure, assumptions concerning ener...
Assumptions for the annual energy outlook 2003 : with projections to 2025
DOT National Transportation Integrated Search
2003-01-01
This report presents the major assumptions of the National Energy Modeling System (NEMS) used to : generate the projections in the Annual Energy Outlook 20031 (AEO2003), including general features of : the model structure, assumptions concerning ener...
Discontinuous Galerkin Finite Element Method for Parabolic Problems
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki; Bey, Kim S.; Hou, Gene J. W.
2004-01-01
In this paper, we develop a time and its corresponding spatial discretization scheme, based upon the assumption of a certain weak singularity of parallel ut(t) parallel Lz(omega) = parallel ut parallel2, for the discontinuous Galerkin finite element method for one-dimensional parabolic problems. Optimal convergence rates in both time and spatial variables are obtained. A discussion of automatic time-step control method is also included.
Brooks, Benjamin
2008-01-01
Small to Medium Sized Enterprises (SMEs) form the majority of Australian businesses. This study uses ethnographic research methods to describe the organizational culture of a small furniture-manufacturing business in southern Australia. Results show a range of cultural assumptions variously 'embedded' within the enterprise. In line with memetics - Richard Dawkin's cultural application of Charles Darwin's theory of Evolution by Natural Selection, the author suggests that these assumptions compete to be replicated and retained within the organization. The author suggests that dominant assumptions are naturally selected, and that the selection can be better understood by considering the cultural assumptions in reference to Darwin's original principles and Frederik Barth's anthropological framework of knowledge. The results are discussed with reference to safety systems, negative cultural elements called Cultural Safety Viruses, and how our understanding of this particular organizational culture might be used to build resistance to these viruses.
Simple approach to sediment provenance tracing using element analysis and fundamental principles
NASA Astrophysics Data System (ADS)
Matys Grygar, Tomas; Elznicova, Jitka; Popelka, Jan
2016-04-01
Common sediment fingerprinting techniques use either (1) extensive analytical datasets, sometimes nearly complete with respect to accessible characterization techniques; they are processed by multidimensional statistics based on certain statistical assumptions on distribution functions of analytical results and conservativeness/additivity of some components, or (2) analytically demanding characteristics such as isotope ratios assumed to be unequivocal "labels" on the parent material unaltered by any catchment process. The inherent problem of the approach ad (1) is that interpretation of statistical components ("sources") is done ex post and remains purely formal. The problem of the approach ad (2) is that catchment processes (weathering, transport, deposition) can modify most geochemical parameters of soils and sediments, in other words, that the idea that some geochemistry parameters are "conservative" may be idealistic. Grain-size effects and sediment provenance have a joint influence on chemical composition of fluvial sediments that is indeed not easy to distinguish. Attempts to separate those two main components using only statistics seem risky and equivocal, because grain-size dependence of element composition is nearly individual for each element and reflects sediment maturity and catchment-specific formation transport processes. We suppose that the use of less extensive datasets of analytical results and their interpretation respecting fundamental principles should be more robust than only statistic tools applied to overwhelming datasets. We examined sediment composition, both published by other researchers and gathered by us, and we found some general principles, which are in our opinion relevant for fingerprinting: (1) Concentrations of all elements are grain-size sensitive, i.e. there are no "conservative" elements in conventional sense of provenance- or transport-pathways tracing, (2) fractionation by catchment processes and fluvial transport changes slightly but systematically element ratios in solids, (3) the geochemistry and fates of the finest particles, neoformed by weathering and reactive during transport and storage in fluvial system, are different than those of the parent material and its less mature coarse weathering products, and (4) most inter-element ratios and some grain-size effects are non-linear that endanger assumption on additivity of properties in components mixing. We are aware we offer only a conceptual model and not a novel algorithm for quantification of sediment sources, which could be tested in practical studies. On the other hand, we consider element fractionation by exogenic processes fascinating as they are poorly described but relevant not only for provenance tracing but also for general environmental geochemistry.
NASA Astrophysics Data System (ADS)
Ots, Riinu; Heal, Mathew R.; Young, Dominique E.; Williams, Leah R.; Allan, James D.; Nemitz, Eiko; Di Marco, Chiara; Detournay, Anais; Xu, Lu; Ng, Nga L.; Coe, Hugh; Herndon, Scott C.; Mackenzie, Ian A.; Green, David C.; Kuenen, Jeroen J. P.; Reis, Stefan; Vieno, Massimo
2018-04-01
Evidence is accumulating that emissions of primary particulate matter (PM) from residential wood and coal combustion in the UK may be underestimated and/or spatially misclassified. In this study, different assumptions for the spatial distribution and total emission of PM from solid fuel (wood and coal) burning in the UK were tested using an atmospheric chemical transport model. Modelled concentrations of the PM components were compared with measurements from aerosol mass spectrometers at four sites in central and Greater London (ClearfLo campaign, 2012), as well as with measurements from the UK black carbon network.The two main alternative emission scenarios modelled were Base4x and combRedist. For Base4x, officially reported PM2.5 from the residential and other non-industrial combustion source sector were increased by a factor of four. For the combRedist experiment, half of the baseline emissions from this same source were redistributed by residential population density to simulate the effect of allocating some emissions to the smoke control areas (that are assumed in the national inventory to have no emissions from this source). The Base4x scenario yielded better daily and hourly correlations with measurements than the combRedist scenario for year-long comparisons of the solid fuel organic aerosol (SFOA) component at the two London sites. However, the latter scenario better captured mean measured concentrations across all four sites. A third experiment, Redist - all emissions redistributed linearly to population density, is also presented as an indicator of the maximum concentrations an assumption like this could yield.The modelled elemental carbon (EC) concentrations derived from the combRedist experiments also compared well with seasonal average concentrations of black carbon observed across the network of UK sites. Together, the two model scenario simulations of SFOA and EC suggest both that residential solid fuel emissions may be higher than inventory estimates and that the spatial distribution of residential solid fuel burning emissions, particularly in smoke control areas, needs re-evaluation. The model results also suggest the assumed temporal profiles for residential emissions may require review to place greater emphasis on evening (including discretionary
) solid fuel burning.
NASA Astrophysics Data System (ADS)
Carrera, E.; Miglioretti, F.; Petrolo, M.
2011-11-01
This paper compares and evaluates various plate finite elements to analyse the static response of thick and thin plates subjected to different loading and boundary conditions. Plate elements are based on different assumptions for the displacement distribution along the thickness direction. Classical (Kirchhoff and Reissner-Mindlin), refined (Reddy and Kant), and other higher-order displacement fields are implemented up to fourth-order expansion. The Unified Formulation UF by the first author is used to derive finite element matrices in terms of fundamental nuclei which consist of 3×3 arrays. The MITC4 shear-locking free type formulation is used for the FE approximation. Accuracy of a given plate element is established in terms of the error vs. thickness-to-length parameter. A significant number of finite elements for plates are implemented and compared using displacement and stress variables for various plate problems. Reduced models that are able to detect the 3D solution are built and a Best Plate Diagram (BPD) is introduced to give guidelines for the construction of plate theories based on a given accuracy and number of terms. It is concluded that the UF is a valuable tool to establish, for a given plate problem, the most accurate FE able to furnish results within a certain accuracy range. This allows us to obtain guidelines and recommendations in building refined elements in the bending analysis of plates for various geometries, loadings, and boundary conditions.
NASA Technical Reports Server (NTRS)
Barker, Edwin S.; Matney, M. J.; Liou, J.-C.; Abercromby, K. J.; Rodriquez, H. M.; Seitzer, P.
2006-01-01
Since 2002 the National Aeronautics and Space Administration (NASA) has carried out an optical survey of the debris environment in the geosynchronous Earth-orbit (GEO) region with the Michigan Orbital Debris Survey Telescope (MODEST) in Chile. The survey coverage has been similar for 4 of the 5 years allowing us to follow the orbital evolution of Correlated Targets (CTs), both controlled and un-controlled objects, and Un-Correlated Targets (UCTs). Under gravitational perturbations the distributions of uncontrolled objects, both CTs and UCTs, in GEO orbits will evolve in predictable patterns, particularly evident in the inclination and right ascension of the ascending node (RAAN) distributions. There are several clusters (others have used a "cloud" nomenclature) in observed distributions that show evolution from year to year in their inclination and ascending node elements. However, when MODEST is in survey mode (field-of-view approx.1.3deg) it provides only short 5-8 minute orbital arcs which can only be fit under the assumption of a circular orbit approximation (ACO) to determine the orbital parameters. These ACO elements are useful only in a statistical sense as dedicated observing runs would be required to obtain sufficient orbital coverage to determine a set of accurate orbital elements and then to follow their evolution. Identification of the source(s) for these "clusters of UCTs" would be advantageous to the overall definition of the GEO orbital debris environment. This paper will set out to determine if the ACO elements can be used to in a statistical sense to identify the source of the "clustering of UCTs" roughly centered on an inclination of 12deg and a RAAN of 345deg. The breakup of the Titan 3C-4 transtage on February 21, 1992 has been modeled using NASA s LEGEND (LEO-to-GEO Environment Debris) code to generate a GEO debris cloud. Breakup fragments are created based on the NASA Standard Breakup Model (including fragment size, area-to-mass (A/M), and delta-V distributions). Once fragments are created, they are propagated forward in time with a subroutine GEOPROP. Perturbations included in GEOPROP are those due to solar/lunar gravity, radiation pressure, and major geopotential terms. The question to be addressed: are the UCTs detected by MODEST in this inclination/RAAN region related to the Titan 3C-4 breakup? Discussion will include the observational biases in attempting to detect a specific, uncontrolled target during given observing session. These restrictions include: (1) the length of the observing session which is 8 hours or less at any given date or declination; (2) the assumption of ACO elements for detected object when the breakup model predicts debris with non-zero eccentricities; (3) the size and illumination or brightness of the debris predicted by the model and the telescope/sky limiting magnitude.
Combustion Technology for Incinerating Wastes from Air Force Industrial Processes.
1984-02-01
The assumption of equilibrium between environmental compartments. * The statistical extrapolations yielding "safe" doses of various constituents...would be contacted to identify the assumptions and data requirements needed to design, construct and implement the model. The model’s primary objective...Recovery Planning Model (RRPLAN) is described. This section of the paper summarizes the model’s assumptions , major components and modes of operation
NASA Astrophysics Data System (ADS)
McGovern, S.; Kollet, S. J.; Buerger, C. M.; Schwede, R. L.; Podlaha, O. G.
2017-12-01
In the context of sedimentary basins, we present a model for the simulation of the movement of ageological formation (layers) during the evolution of the basin through sedimentation and compactionprocesses. Assuming a single phase saturated porous medium for the sedimentary layers, the modelfocuses on the tracking of the layer interfaces, through the use of the level set method, as sedimentationdrives fluid-flow and reduction of pore space by compaction. On the assumption of Terzaghi's effectivestress concept, the coupling of the pore fluid pressure to the motion of interfaces in 1-D is presented inMcGovern, et.al (2017) [1] .The current work extends the spatial domain to 3-D, though we maintain the assumption ofvertical effective stress to drive the compaction. The idealized geological evolution is conceptualized asthe motion of interfaces between rock layers, whose paths are determined by the magnitude of a speedfunction in the direction normal to the evolving layer interface. The speeds normal to the interface aredependent on the change in porosity, determined through an effective stress-based compaction law,such as the exponential Athy's law. Provided with the speeds normal to the interface, the level setmethod uses an advection equation to evolve a potential function, whose zero level set defines theinterface. Thus, the moving layer geometry influences the pore pressure distribution which couplesback to the interface speeds. The flexible construction of the speed function allows extension, in thefuture, to other terms to represent different physical processes, analogous to how the compaction rulerepresents material deformation.The 3-D model is implemented using the generic finite element method framework Deal II,which provides tools, building on p4est and interfacing to PETSc, for the massively parallel distributedsolution to the model equations [2]. Experiments are being run on the Juelich Supercomputing Center'sJureca cluster. [1] McGovern, et.al. (2017). Novel basin modelling concept for simulating deformation from mechanical compaction using level sets. Computational Geosciences, SI:ECMOR XV, 1-14.[2] Bangerth, et. al. (2011). Algorithms and data structures for massively parallel generic adaptive finite element codes. ACM Transactions on Mathematical Software (TOMS), 38(2):14.
NASA Astrophysics Data System (ADS)
Wang, Jing; Qi, Zhaohui; Wang, Gang
2017-10-01
The dynamic analysis of cable-pulley systems is investigated in this paper, where the time-varying length characteristic of the cable as well as the coupling motion between the cable and the pulleys are considered. The dynamic model for cable-pulley systems are presented based on the principle of virtual power. Firstly, the cubic spline interpolation is adopted for modeling the flexible cable elements and the virtual 1powers of tensile strain, inertia and gravity forces on the cable are formulated. Then, the coupled motions between the cable and the movable or fixed pulley are described by the input and output contact points, based on the no-slip assumption and the spatial description. The virtual powers of inertia, gravity and applied forces on the contact segment of the cable, the movable and fixed pulleys are formulated. In particular, the internal node degrees of freedom of spline cable elements are reduced, which results in that only the independent description parameters of the nodes connected to the pulleys are included in the final governing dynamic equations. At last, two cable-pulley lifting mechanisms are considered as demonstrative application examples where the vibration of the lifting process is investigated. The comparison with ADAMS models is given to prove the validity of the proposed method.
NASA Technical Reports Server (NTRS)
Gould, Kevin E.; Satyanarayana, Arunkumar; Bogert, Philip B.
2016-01-01
Analysis performed in this study substantiates the need for high fidelity vehicle level progressive damage analyses (PDA) structural models for use in the verification and validation of proposed sub-scale structural models and to support required full-scale vehicle level testing. PDA results are presented that capture and correlate the responses of sub-scale 3-stringer and 7-stringer panel models and an idealized 8-ft diameter fuselage model, which provides a vehicle level environment for the 7-stringer sub-scale panel model. Two unique skin-stringer attachment assumptions are considered and correlated in the models analyzed: the TIE constraint interface versus the cohesive element (COH3D8) interface. Evaluating different interfaces allows for assessing a range of predicted damage modes, including delamination and crack propagation responses. Damage models considered in this study are the ABAQUS built-in Hashin procedure and the COmplete STress Reduction (COSTR) damage procedure implemented through a VUMAT user subroutine using the ABAQUS/Explicit code.
ERIC Educational Resources Information Center
Wang, Yan; Rodríguez de Gil, Patricia; Chen, Yi-Hsin; Kromrey, Jeffrey D.; Kim, Eun Sook; Pham, Thanh; Nguyen, Diep; Romano, Jeanine L.
2017-01-01
Various tests to check the homogeneity of variance assumption have been proposed in the literature, yet there is no consensus as to their robustness when the assumption of normality does not hold. This simulation study evaluated the performance of 14 tests for the homogeneity of variance assumption in one-way ANOVA models in terms of Type I error…
Mechanisms of hydrogen-assisted fracture in austenitic stainless steel welds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balch, Dorian K.; Sofronis, Petros; Somerday, Brian P.
2005-03-01
The objective of this study was to quantify the hydrogen-assisted fracture susceptibility of gas-tungsten arc (GTA) welds in the nitrogen-strengthened, austenitic stainless steels 21Cr-6Ni-9Mn (21-6-9) and 22Cr-13Ni-5Mn (22-13-5). In addition, mechanisms of hydrogen-assisted fracture in the welds were identified using electron microscopy and finite-element modeling. Elastic-plastic fracture mechanics experiments were conducted on hydrogen-charged GTA welds at 25 C. Results showed that hydrogen dramatically lowered the fracture toughness from 412 kJ/m{sup 2} to 57 kJ/m{sup 2} in 21-6-9 welds and from 91 kJ/m{sup 2} to 26 kJ/m{sup 2} in 22-13-5 welds. Microscopy results suggested that hydrogen served two roles in themore » fracture of welds: it promoted the nucleation of microcracks along the dendritic structure and accelerated the link-up of microcracks by facilitating localized deformation. A continuum finite-element model was formulated to test the notion that hydrogen could facilitate localized deformation in the ligament between microcracks. On the assumption that hydrogen decreased local flow stress in accordance with the hydrogen-enhanced dislocation mobility argument, the finite-element results showed that deformation was localized in a narrow band between two parallel, overlapping microcracks. In contrast, in the absence of hydrogen, the finite-element results showed that deformation between microcracks was more uniformly distributed.« less
FATE 5: A natural attenuation calibration tool for groundwater fate and transport modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nevin, J.P.; Connor, J.A.; Newell, C.J.
1997-12-31
A new groundwater attenuation modeling tool (FATE 5) has been developed to assist users with determining site-specific natural attenuation rates for organic constituents dissolved in groundwater. FATE 5 is based on and represents an enhancement to the Domenico analytical groundwater transport model. These enhancements include use of an optimization routine to match results from the Domenico model to actual measured site concentrations, an extensive database of chemical property data, and calculation of an estimate of the length of time needed for a plume to reach steady state conditions. FATE 5 was developed in Microsoft{reg_sign} Excel and is controlled by meansmore » of a simple, user-friendly graphical interface. Using the Solver routine built into Excel, FATE 5 is able to calibrate the attenuation rate used by the Domenico model to match site-specific data. By calibrating the decay rate to site-specific measurements, FATE 5 can yield accurate predictions of long-term natural attenuation processes within a groundwater within a groundwater plume. In addition, FATE 5 includes a formulation of the transient Domenico solution used to help the user determine if the steady-state assumptions employed by the model are appropriate. The calibrated groundwater flow model can then be used either to (i) predict upper-bound constituent concentrations in groundwater, based on an observed source zone concentration, or (ii) back-calculate a lower-bound SSTL value, based on a user-specified exposure point concentration at the groundwater point of exposure (POE). This paper reviews the major elements of the FATE 5 model - and gives results for real-world applications. Key modeling assumptions and summary guidelines regarding calculation procedures and input parameter selection are also addressed.« less
NASA Astrophysics Data System (ADS)
van der Sluijs, Jeroen P.; Arjan Wardekker, J.
2015-04-01
In order to enable anticipation and proactive adaptation, local decision makers increasingly seek detailed foresight about regional and local impacts of climate change. To this end, the Netherlands Models and Data-Centre implemented a pilot chain of sequentially linked models to project local climate impacts on hydrology, agriculture and nature under different national climate scenarios for a small region in the east of the Netherlands named Baakse Beek. The chain of models sequentially linked in that pilot includes a (future) weather generator and models of respectively subsurface hydrogeology, ground water stocks and flows, soil chemistry, vegetation development, crop yield and nature quality. These models typically have mismatching time step sizes and grid cell sizes. The linking of these models unavoidably involves the making of model assumptions that can hardly be validated, such as those needed to bridge the mismatches in spatial and temporal scales. Here we present and apply a method for the systematic critical appraisal of model assumptions that seeks to identify and characterize the weakest assumptions in a model chain. The critical appraisal of assumptions presented in this paper has been carried out ex-post. For the case of the climate impact model chain for Baakse Beek, the three most problematic assumptions were found to be: land use and land management kept constant over time; model linking of (daily) ground water model output to the (yearly) vegetation model around the root zone; and aggregation of daily output of the soil hydrology model into yearly input of a so called ‘mineralization reduction factor’ (calculated from annual average soil pH and daily soil hydrology) in the soil chemistry model. Overall, the method for critical appraisal of model assumptions presented and tested in this paper yields a rich qualitative insight in model uncertainty and model quality. It promotes reflectivity and learning in the modelling community, and leads to well informed recommendations for model improvement.
Identity-Based Verifiably Encrypted Signatures without Random Oracles
NASA Astrophysics Data System (ADS)
Zhang, Lei; Wu, Qianhong; Qin, Bo
Fair exchange protocol plays an important role in electronic commerce in the case of exchanging digital contracts. Verifiably encrypted signatures provide an optimistic solution to these scenarios with an off-line trusted third party. In this paper, we propose an identity-based verifiably encrypted signature scheme. The scheme is non-interactive to generate verifiably encrypted signatures and the resulting encrypted signature consists of only four group elements. Based on the computational Diffie-Hellman assumption, our scheme is proven secure without using random oracles. To the best of our knowledge, this is the first identity-based verifiably encrypted signature scheme provably secure in the standard model.
Fundamental studies in X-ray astrophysics
NASA Technical Reports Server (NTRS)
Lamb, D. Q.; Lightman, A. P.
1982-01-01
An analytical model calculation of the ionization structure of matter accreting onto a degenerate dwarf was carried out. Self-consistent values of the various parameters are used. The possibility of nuclear burning of the accreting matter is included. We find the blackbody radiation emitted from the stellar surface keeps hydrogen and helium ionized out to distances much larger than a typical binary separation. Except for low mass stars or high accretion rates, the assumption of complete ionization of the elements heavier than helium is a good first approximation. For low mass stars or high accretion rates the validity of assuming complete ionization depends sensitivity on the distribution of matter in the binary system.
Gagnon, B; Abrahamowicz, M; Xiao, Y; Beauchamp, M-E; MacDonald, N; Kasymjanova, G; Kreisman, H; Small, D
2010-03-30
C-reactive protein (CRP) is gaining credibility as a prognostic factor in different cancers. Cox's proportional hazard (PH) model is usually used to assess prognostic factors. However, this model imposes a priori assumptions, which are rarely tested, that (1) the hazard ratio associated with each prognostic factor remains constant across the follow-up (PH assumption) and (2) the relationship between a continuous predictor and the logarithm of the mortality hazard is linear (linearity assumption). We tested these two assumptions of the Cox's PH model for CRP, using a flexible statistical model, while adjusting for other known prognostic factors, in a cohort of 269 patients newly diagnosed with non-small cell lung cancer (NSCLC). In the Cox's PH model, high CRP increased the risk of death (HR=1.11 per each doubling of CRP value, 95% CI: 1.03-1.20, P=0.008). However, both the PH assumption (P=0.033) and the linearity assumption (P=0.015) were rejected for CRP, measured at the initiation of chemotherapy, which kept its prognostic value for approximately 18 months. Our analysis shows that flexible modeling provides new insights regarding the value of CRP as a prognostic factor in NSCLC and that Cox's PH model underestimates early risks associated with high CRP.
Modelling the isotopic evolution of the Earth.
Paul, Debajyoti; White, William M; Turcotte, Donald L
2002-11-15
We present a flexible multi-reservoir (primitive lower mantle, depleted upper mantle, upper continental crust, lower continental crust and atmosphere) forward-transport model of the Earth, incorporating the Sm-Nd, Rb-Sr, U-Th-Pb-He and K-Ar isotope-decay systematics. Mathematically, the model consists of a series of differential equations, describing the changing abundance of each nuclide in each reservoir, which are solved repeatedly over the history of the Earth. Fluxes between reservoirs are keyed to heat production and further constrained by estimates of present-day fluxes (e.g. subduction, plume flux) and current sizes of reservoirs. Elemental transport is tied to these fluxes through 'enrichment factors', which allow for fractionation between species. A principal goal of the model is to reproduce the Pb-isotope systematics of the depleted upper mantle, which has not been done in earlier models. At present, the depleted upper mantle has low (238)U/(204)Pb (mu) and (232)Th/(238)U (kappa) ratios, but Pb-isotope ratios reflect high time-integrated values of these ratios. These features are reproduced in the model and are a consequence of preferential subduction of U and of radiogenic Pb from the upper continental crust into the depleted upper mantle. At the same time, the model reproduces the observed Sr-, Nd-, Ar- and He-isotope ratios of the atmosphere, continental crust and mantle. We show that both steady-state and time-variant concentrations of incompatible-element concentrations and ratios in the continental crust and upper mantle are possible. Indeed, in some cases, incompatible-element concentrations and ratios increase with time in the depleted mantle. Hence, assumptions of a progressively depleting or steady-state upper mantle are not justified. A ubiquitous feature of this model, as well as other evolutionary models, is early rapid depletion of the upper mantle in highly incompatible elements; hence, a near-chondritic Th/U ratio in the upper mantle throughout the Archean is unlikely. The model also suggests that the optimal value of the bulk silicate Earth's K/U ratio is close to 10000; lower values suggested recently seem unlikely.
Finite Element Simulations to Explore Assumptions in Kolsky Bar Experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crum, Justin
2015-08-05
The chief purpose of this project has been to develop a set of finite element models that attempt to explore some of the assumptions in the experimental set-up and data reduction of the Kolsky bar experiment. In brief, the Kolsky bar, sometimes referred to as the split Hopkinson pressure bar, is an experimental apparatus used to study the mechanical properties of materials at high strain rates. Kolsky bars can be constructed to conduct experiments in tension or compression, both of which are studied in this paper. The basic operation of the tension Kolsky bar is as follows: compressed air ismore » inserted into the barrel that contains the striker; the striker accelerates towards the left and strikes the left end of the barrel producing a tensile stress wave that propogates first through the barrel and then down the incident bar, into the specimen, and finally the transmission bar. In the compression case, the striker instead travels to the right and impacts the incident bar directly. As the stress wave travels through an interface (e.g., the incident bar to specimen connection), a portion of the pulse is transmitted and the rest reflected. The incident pulse, as well as the transmitted and reflected pulses are picked up by two strain gauges installed on the incident and transmitted bars as shown. By interpreting the data acquired by these strain gauges, the stress/strain behavior of the specimen can be determined.« less
Austin, Peter C
2018-01-01
The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.
Austin, Peter C.
2017-01-01
The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest. PMID:29321694
Semi-quantitative spectrographic analysis and rank correlation in geochemistry
Flanagan, F.J.
1957-01-01
The rank correlation coefficient, rs, which involves less computation than the product-moment correlation coefficient, r, can be used to indicate the degree of relationship between two elements. The method is applicable in situations where the assumptions underlying normal distribution correlation theory may not be satisfied. Semi-quantitative spectrographic analyses which are reported as grouped or partly ranked data can be used to calculate rank correlations between elements. ?? 1957.
Dynamic analysis of the American Maglev system. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seda-Sanabria, Y.; Ray, J.C.
1996-06-01
Understanding the dynamic interaction between a magnetic levitated (Maglev) vehicle and its supporting guideway is essential in the evaluation of the performance of such a system. This interacting coupling, known as vehicle/guideway interaction (VGI), has a significant effect on system parameters such as the required magnetic suspension forces and gaps, vehicular ride quality, and guideway deflections and stresses. This report presents the VGI analyses conducted on an actual Maglev system concept definition (SCD), the American Maglev SCD, using a linear-elastic finite-element (FE) model. Particular interest was focused on the comparison of the ride quality of the vehicle, using two differentmore » suspension systems, and their effect on the guideway structure. The procedure and necessary assumptions in the modeling are discussed.« less
Effects of Earth's curvature in full-wave modeling of VLF propagation
NASA Astrophysics Data System (ADS)
Qiu, L.; Lehtinen, N. G.; Inan, U. S.; Stanford VLF Group
2011-12-01
We show how to include curvature in the full-wave finite element approach to calculate ELF/VLF wave propagation in horizontally stratified earth-ionosphere waveguide. A general curvilinear stratified system is considered, and the numerical solutions of full-wave method in curvilinear system are compared with the analytic solutions in the cylindrical and spherical waveguides filled with an isotropic medium. We calculate the attenuation and height gain for modes in the Earth-ionosphere waveguide, taking into account the anisotropicity of ionospheric plasma, for different assumptions about the Earth's curvature, and quantify the corrections due to the curvature. The results are compared with the results of previous models, such as LWPC, as well as with ground and satellite observations, and show improved accuracy compared with full-wave method without including the curvature effect.
LES of cavitating flow inside a Diesel injector including dynamic needle movement
NASA Astrophysics Data System (ADS)
Örley, F.; Hickel, S.; Schmidt, S. J.; Adams, N. A.
2015-12-01
We perform large-eddy simulations (LES) of the turbulent, cavitating flow inside a 9-hole solenoid common-rail injector including jet injection into gas during a full injection cycle. The liquid fuel, vapor, and gas phases are modelled by a homogeneous mixture approach. The cavitation model is based on a thermodynamic equilibrium assumption. The geometry of the injector is represented on a Cartesian grid by a conservative cut-element immersed boundary method. The strategy allows for the simulation of complex, moving geometries with sub-cell resolution. We evaluate the effects of needle movement on the cavitation characteristics in the needle seat and tip region during opening and closing of the injector. Moreover, we study the effect of cavitation inside the injector nozzles on primary jet break-up.
Nonlinear Shell Modeling of Thin Membranes with Emphasis on Structural Wrinkling
NASA Technical Reports Server (NTRS)
Tessler, Alexander; Sleight, David W.; Wang, John T.
2003-01-01
Thin solar sail membranes of very large span are being envisioned for near-term space missions. One major design issue that is inherent to these very flexible structures is the formation of wrinkling patterns. Structural wrinkles may deteriorate a solar sail's performance and, in certain cases, structural integrity. In this paper, a geometrically nonlinear, updated Lagrangian shell formulation is employed using the ABAQUS finite element code to simulate the formation of wrinkled deformations in thin-film membranes. The restrictive assumptions of true membranes, i.e. Tension Field theory (TF), are not invoked. Two effective modeling strategies are introduced to facilitate convergent solutions of wrinkled equilibrium states. Several numerical studies are carried out, and the results are compared with recent experimental data. Good agreement is observed between the numerical simulations and experimental data.
ERIC Educational Resources Information Center
Berenson, Mark L.
2013-01-01
There is consensus in the statistical literature that severe departures from its assumptions invalidate the use of regression modeling for purposes of inference. The assumptions of regression modeling are usually evaluated subjectively through visual, graphic displays in a residual analysis but such an approach, taken alone, may be insufficient…
Sucharitakul, Kanes; Boily, Marie-Claude; Dimitrov, Dobromir
2018-01-01
Background Many mathematical models have investigated the population-level impact of expanding antiretroviral therapy (ART), using different assumptions about HIV disease progression on ART and among ART dropouts. We evaluated the influence of these assumptions on model projections of the number of infections and deaths prevented by expanded ART. Methods A new dynamic model of HIV transmission among men who have sex with men (MSM) was developed, which incorporated each of four alternative assumptions about disease progression used in previous models: (A) ART slows disease progression; (B) ART halts disease progression; (C) ART reverses disease progression by increasing CD4 count; (D) ART reverses disease progression, but disease progresses rapidly once treatment is stopped. The model was independently calibrated to HIV prevalence and ART coverage data from the United States under each progression assumption in turn. New HIV infections and HIV-related deaths averted over 10 years were compared for fixed ART coverage increases. Results Little absolute difference (<7 percentage points (pp)) in HIV infections averted over 10 years was seen between progression assumptions for the same increases in ART coverage (varied between 33% and 90%) if ART dropouts reinitiated ART at the same rate as ART-naïve MSM. Larger differences in the predicted fraction of HIV-related deaths averted were observed (up to 15pp). However, if ART dropouts could only reinitiate ART at CD4<200 cells/μl, assumption C predicted substantially larger fractions of HIV infections and deaths averted than other assumptions (up to 20pp and 37pp larger, respectively). Conclusion Different disease progression assumptions on and post-ART interruption did not affect the fraction of HIV infections averted with expanded ART, unless ART dropouts only re-initiated ART at low CD4 counts. Different disease progression assumptions had a larger influence on the fraction of HIV-related deaths averted with expanded ART. PMID:29554136
High dimensional model representation method for fuzzy structural dynamics
NASA Astrophysics Data System (ADS)
Adhikari, S.; Chowdhury, R.; Friswell, M. I.
2011-03-01
Uncertainty propagation in multi-parameter complex structures possess significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher order variable correlations are weak, thereby permitting the input-output relationship behavior to be captured by the terms of low-order. The computational effort to determine the expansion functions using the α-cut method scales polynomically with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
Non-stationary noise estimation using dictionary learning and Gaussian mixture models
NASA Astrophysics Data System (ADS)
Hughes, James M.; Rockmore, Daniel N.; Wang, Yang
2014-02-01
Stationarity of the noise distribution is a common assumption in image processing. This assumption greatly simplifies denoising estimators and other model parameters and consequently assuming stationarity is often a matter of convenience rather than an accurate model of noise characteristics. The problematic nature of this assumption is exacerbated in real-world contexts, where noise is often highly non-stationary and can possess time- and space-varying characteristics. Regardless of model complexity, estimating the parameters of noise dis- tributions in digital images is a difficult task, and estimates are often based on heuristic assumptions. Recently, sparse Bayesian dictionary learning methods were shown to produce accurate estimates of the level of additive white Gaussian noise in images with minimal assumptions. We show that a similar model is capable of accu- rately modeling certain kinds of non-stationary noise processes, allowing for space-varying noise in images to be estimated, detected, and removed. We apply this modeling concept to several types of non-stationary noise and demonstrate the model's effectiveness on real-world problems, including denoising and segmentation of images according to noise characteristics, which has applications in image forensics.
Boerebach, Benjamin C. M.; Lombarts, Kiki M. J. M. H.; Scherpbier, Albert J. J.; Arah, Onyebuchi A.
2013-01-01
Background In fledgling areas of research, evidence supporting causal assumptions is often scarce due to the small number of empirical studies conducted. In many studies it remains unclear what impact explicit and implicit causal assumptions have on the research findings; only the primary assumptions of the researchers are often presented. This is particularly true for research on the effect of faculty’s teaching performance on their role modeling. Therefore, there is a need for robust frameworks and methods for transparent formal presentation of the underlying causal assumptions used in assessing the causal effects of teaching performance on role modeling. This study explores the effects of different (plausible) causal assumptions on research outcomes. Methods This study revisits a previously published study about the influence of faculty’s teaching performance on their role modeling (as teacher-supervisor, physician and person). We drew eight directed acyclic graphs (DAGs) to visually represent different plausible causal relationships between the variables under study. These DAGs were subsequently translated into corresponding statistical models, and regression analyses were performed to estimate the associations between teaching performance and role modeling. Results The different causal models were compatible with major differences in the magnitude of the relationship between faculty’s teaching performance and their role modeling. Odds ratios for the associations between teaching performance and the three role model types ranged from 31.1 to 73.6 for the teacher-supervisor role, from 3.7 to 15.5 for the physician role, and from 2.8 to 13.8 for the person role. Conclusions Different sets of assumptions about causal relationships in role modeling research can be visually depicted using DAGs, which are then used to guide both statistical analysis and interpretation of results. Since study conclusions can be sensitive to different causal assumptions, results should be interpreted in the light of causal assumptions made in each study. PMID:23936020
Trace elements in ocean ridge basalts
NASA Technical Reports Server (NTRS)
Kay, R. W.; Hubbard, N. J.
1978-01-01
A study is made of the trace elements found in ocean ridge basalts. General assumptions regarding melting behavior, trace element fractionation, and alteration effects are presented. Data on the trace elements are grouped according to refractory lithophile elements, refractory siderophile elements, and volatile metals. Variations in ocean ridge basalt chemistry are noted both for regional and temporal characteristics. Ocean ridge basalts are compared to other terrestrial basalts, such as those having La/Yb ratios greater than those of chondrites, and those having La/Yb ratios less than those of chondrites. It is found that (1) as compared to solar or chondrite ratios, ocean ridge basalts have low ratios of large, highly-charged elements to smaller less highly-charged elements, (2) ocean ridge basalts exhibit low ratios of volatile to nonvolatile elements, and (3) the transition metals Cr through Zn in ocean ridge basalts are not fractionated more than a factor of 2 or 3 from the chondritic abundance ratios.
NASA Astrophysics Data System (ADS)
Vincenzo, F.; Matteucci, F.; Spitoni, E.
2017-04-01
We present a theoretical method for solving the chemical evolution of galaxies by assuming an instantaneous recycling approximation for chemical elements restored by massive stars and the delay time distribution formalism for delayed chemical enrichment by Type Ia Supernovae. The galaxy gas mass assembly history, together with the assumed stellar yields and initial mass function, represents the starting point of this method. We derive a simple and general equation, which closely relates the Laplace transforms of the galaxy gas accretion history and star formation history, which can be used to simplify the problem of retrieving these quantities in the galaxy evolution models assuming a linear Schmidt-Kennicutt law. We find that - once the galaxy star formation history has been reconstructed from our assumptions - the differential equation for the evolution of the chemical element X can be suitably solved with classical methods. We apply our model to reproduce the [O/Fe] and [Si/Fe] versus [Fe/H] chemical abundance patterns as observed at the solar neighbourhood by assuming a decaying exponential infall rate of gas and different delay time distributions for Type Ia Supernovae; we also explore the effect of assuming a non-linear Schmidt-Kennicutt law, with the index of the power law being k = 1.4. Although approximate, we conclude that our model with the single-degenerate scenario for Type Ia Supernovae provides the best agreement with the observed set of data. Our method can be used by other complementary galaxy stellar population synthesis models to predict also the chemical evolution of galaxies.
He, Xin; Frey, Eric C
2006-08-01
Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Nyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously-proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
2012-01-01
Background The Danish Multiple Sclerosis Society initiated a large-scale bridge building and integrative treatment project to take place from 2004–2010 at a specialized Multiple Sclerosis (MS) hospital. In this project, a team of five conventional health care practitioners and five alternative practitioners was set up to work together in developing and offering individualized treatments to 200 people with MS. The purpose of this paper is to present results from the six year treatment collaboration process regarding the development of an integrative treatment model. Discussion The collaborative work towards an integrative treatment model for people with MS, involved six steps: 1) Working with an initial model 2) Unfolding the different treatment philosophies 3) Discussing the elements of the Intervention-Mechanism-Context-Outcome-scheme (the IMCO-scheme) 4) Phrasing the common assumptions for an integrative MS program theory 5) Developing the integrative MS program theory 6) Building the integrative MS treatment model. The model includes important elements of the different treatment philosophies represented in the team and thereby describes a common understanding of the complexity of the courses of treatment. Summary An integrative team of practitioners has developed an integrative model for combined treatments of People with Multiple Sclerosis. The model unites different treatment philosophies and focuses on process-oriented factors and the strengthening of the patients’ resources and competences on a physical, an emotional and a cognitive level. PMID:22524586
Extending quantum mechanics entails extending special relativity
NASA Astrophysics Data System (ADS)
Aravinda, S.; Srikanth, R.
2016-05-01
The complementarity between signaling and randomness in any communicated resource that can simulate singlet statistics is generalized by relaxing the assumption of free will in the choice of measurement settings. We show how to construct an ontological extension for quantum mechanics (QMs) through the oblivious embedding of a sound simulation protocol in a Newtonian spacetime. Minkowski or other intermediate spacetimes are ruled out as the locus of the embedding by virtue of hidden influence inequalities. The complementarity transferred from a simulation to the extension unifies a number of results about quantum non-locality, and implies that special relativity has a different significance for the ontological model and for the operational theory it reproduces. Only the latter, being experimentally accessible, is required to be Lorentz covariant. There may be certain Lorentz non-covariant elements at the ontological level, but they will be inaccessible at the operational level in a valid extension. Certain arguments against the extendability of QM, due to Conway and Kochen (2009) and Colbeck and Renner (2012), are attributed to their assumption that the spacetime at the ontological level has Minkowski causal structure.
NASA Astrophysics Data System (ADS)
Medlyn, B.; Jiang, M.; Zaehle, S.
2017-12-01
There is now ample experimental evidence that the response of terrestrial vegetation to rising atmospheric CO2 concentration is modified by soil nutrient availability. How to represent nutrient cycling processes is thus a key consideration for vegetation models. We have previously used model intercomparison to demonstrate that models incorporating different assumptions predict very different responses at Free-Air CO2 Enrichment experiments. Careful examination of model outputs has provided some insight into the reasons for the different model outcomes, but it is difficult to attribute outcomes to specific assumptions. Here we investigate the impact of individual assumptions in a generic plant carbon-nutrient cycling model. The G'DAY (Generic Decomposition And Yield) model is modified to incorporate alternative hypotheses for nutrient cycling. We analyse the impact of these assumptions in the model using a simple analytical approach known as "two-timing". This analysis identifies the quasi-equilibrium behaviour of the model at the time scales of the component pools. The analysis provides a useful mathematical framework for probing model behaviour and identifying the most critical assumptions for experimental study.
A 3/D finite element approach for metal matrix composites based on micromechanical models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Svobodnik, A.J.; Boehm, H.J.; Rammerstorfer, F.G.
Based on analytical considerations by Dvorak and Bahel-El-Din, a 3/D finite element material law has been developed for the elastic-plastic analysis of unidirectional fiber-reinforced metal matrix composites. The material law described in this paper has been implemented in the finite element code ABAQUS via the user subroutine UMAT. A constitutive law is described under the assumption that the fibers are linear-elastic and the matrix is of a von Mises-type with a Prager-Ziegler kinematic hardening rule. The uniaxial effective stress-strain relationship of the matrix in the plastic range is approximated by a Ramberg-Osgood law, a linear hardening rule or a nonhardeningmore » rule. Initial yield surface of the matrix material and for the fiber reinforced composite are compared to show the effect of reinforcement. Implementation of this material law in a finite element program is shown. Furthermore, the efficiency of substepping schemes and stress corrections for the numerical integration of the elastic-plastic stress-strain relations for anisotropic materials are investigated. The results of uniaxial monotonic tests of a boron/aluminum composite are compared to some finite element analyses based on micromechanical considerations. Furthermore a complete 3/D analysis of a tensile test specimen made of a silicon-carbide/aluminum MMC and the analysis of an MMC inlet inserted in a homogenous material are shown. 12 refs.« less
Mantle rare gas relative abundances in a steady-state mass transport model
NASA Technical Reports Server (NTRS)
Porcelli, D.; Wasserburg, G. J.
1994-01-01
A model for He and Xe was presented previously which incorporates mass transfer of rare gases from an undegassed lower mantle (P) and the atmosphere into a degassed upper mantle (D). We extend the model to include Ne and Ar. Model constraints on rare gas relative abundances within P are derived. Discussions of terrestrial volatile acquisition have focused on the rare gas abundance pattern of the atmosphere relative to meteoritic components, and the pattern of rare gases still trapped in the Ear,th is important in identifying volatile capture and loss processes operating during Earth formation. The assumptions and principles of the model are discussed in Wasserburg and Porcelli (this volume). For P, the concentrations in P of the decay/nuclear products 4 He, 21 Ne, 40 Ar, and 136 Xe can be calculated from the concentrations of the parent elements U, Th, K, and Pu. The total concentration of the daughter element in P is proportional to the isotopic shifts in P. For Ar, ((40)Ar/(36)Ar)p - ((40)Ar/(36)Ar)o =Delta (exp 40) p= 40 Cp/(exp 36)C where(i)C(sub j) the concentration of isotope i in j. In D, isotope compositions are the result of mixing rare gases from P, decay/nuclear products generated in the upper mantle, and subducted rare gases (for Ar and Xe).
NASA Technical Reports Server (NTRS)
Freed, Alan D.; Diethelm, Kai; Gray, Hugh R. (Technical Monitor)
2002-01-01
Fraction-order viscoelastic (FOV) material models have been proposed and studied in 1D since the 1930's, and were extended into three dimensions in the 1970's under the assumption of infinitesimal straining. It was not until 1997 that Drozdov introduced the first finite-strain FOV constitutive equations. In our presentation, we shall continue in this tradition by extending the standard, FOV, fluid and solid, material models introduced in 1971 by Caputo and Mainardi into 3D constitutive formula applicable for finite-strain analyses. To achieve this, we generalize both the convected and co-rotational derivatives of tensor fields to fractional order. This is accomplished by defining them first as body tensor fields and then mapping them into space as objective Cartesian tensor fields. Constitutive equations are constructed using both variants for fractional rate, and their responses are contrasted in simple shear. After five years of research and development, we now possess a basic suite of numerical tools necessary to study finite-strain FOV constitutive equations and their iterative refinement into a mature collection of material models. Numerical methods still need to be developed for efficiently solving fraction al-order integrals, derivatives, and differential equations in a finite element setting where such constitutive formulae would need to be solved at each Gauss point in each element of a finite model, which can number into the millions in today's analysis.
Modeling frictional melt injection to constrain coseismic physical conditions
NASA Astrophysics Data System (ADS)
Sawyer, William J.; Resor, Phillip G.
2017-07-01
Pseudotachylyte, a fault rock formed through coseismic frictional melting, provides an important record of coseismic mechanics. In particular, injection veins formed at a high angle to the fault surface have been used to estimate rupture directivity, velocity, pulse length, stress drop, as well as slip weakening distance and wall rock stiffness. These studies have generally treated injection vein formation as a purely elastic process and have assumed that processes of melt generation, transport, and solidification have little influence on the final vein geometry. Using a pressurized crack model, an analytical approximation of injection vein formation based on dike intrusion, we find that the timescales of quenching and flow propagation may be similar for a subset of injection veins compiled from the Asbestos Mountain Fault, USA, Gole Larghe Fault Zone, Italy, and the Fort Foster Brittle Zone, USA under minimum melt temperature conditions. 34% of the veins are found to be flow limited, with a final geometry that may reflect cooling of the vein before it reaches an elastic equilibrium with the wall rock. Formation of these veins is a dynamic process whose behavior is not fully captured by the analytical approach. To assess the applicability of simplifying assumptions of the pressurized crack we employ a time-dependent finite-element model of injection vein formation that couples elastic deformation of the wall rock with the fluid dynamics and heat transfer of the frictional melt. This finite element model reveals that two basic assumptions of the pressurized crack model, self-similar growth and a uniform pressure gradient, are false. The pressurized crack model thus underestimates flow propagation time by 2-3 orders of magnitude. Flow limiting may therefore occur under a wider range of conditions than previously thought. Flow-limited veins may be recognizable in the field where veins have tapered profiles or smaller aspect ratios than expected. The occurrence and shape of injection veins can be coupled with modeling to provide an independent estimate of minimum melt temperature. Finally, the large aspect ratio observed for all three populations of injection veins may be best explained by a large reduction in stiffness associated with coseismic damage, as injection vein growth is likely to far exceed the lifetime of dynamic stresses at any location along a fault.
NASA Astrophysics Data System (ADS)
Pieczynska-Kozlowska, Joanna
2014-05-01
One of a geotechnical problem in the area of Wroclaw is an anthropogenic embankment layer delaying to the depth of 4-5m, arising as a result of historical incidents. In such a case an assumption of bearing capacity of strip footing might be difficult. The standard solution is to use a deep foundation or foundation soil replacement. However both methods generate significant costs. In the present paper the authors focused their attention on the influence of anthropogenic embankment variability on bearing capacity. Soil parameters were defined on the basis of CPT test and modeled as 2D anisotropic random fields and the assumption of bearing capacity were made according deterministic finite element methods. Many repeated of the different realizations of random fields lead to stable expected value of bearing capacity. The algorithm used to estimate the bearing capacity of strip footing was the random finite element method (e.g. [1]). In traditional approach of bearing capacity the formula proposed by [2] is taken into account. qf = c'Nc + qNq + 0.5γBN- γ (1) where: qf is the ultimate bearing stress, cis the cohesion, qis the overburden load due to foundation embedment, γ is the soil unit weight, Bis the footing width, and Nc, Nq and Nγ are the bearing capacity factors. The method of evaluation the bearing capacity of strip footing based on finite element method incorporate five parameters: Young's modulus (E), Poisson's ratio (ν), dilation angle (ψ), cohesion (c), and friction angle (φ). In the present study E, ν and ψ are held constant while c and φ are randomized. Although the Young's modulus does not affect the bearing capacity it governs the initial elastic response of the soil. Plastic stress redistribution is accomplished using a viscoplastic algorithm merge with an elastic perfectly plastic (Mohr - Coulomb) failure criterion. In this paper a typical finite element mesh was assumed with 8-node elements consist in 50 columns and 20 rows. Footings width B occupies 10 elements, 0.1 x 0.1 meter size. The footings are placed at the center of the mesh. Figure 1 shows the mesh used in probabilistic bearing capacity analysis. PIC Figure 1- Mesh used in analyses REFERENCES Fenton, G.A., Griffiths, D.V., (2008) Risk Assessment in Geotechnical Engineering, John Wiley & Sons, New York, Terzaghi, K. (1943). Theoretical Soil Mechanics, New York: John Wiley & Sons.
Collective behaviour in vertebrates: a sensory perspective
Collignon, Bertrand; Fernández-Juricic, Esteban
2016-01-01
Collective behaviour models can predict behaviours of schools, flocks, and herds. However, in many cases, these models make biologically unrealistic assumptions in terms of the sensory capabilities of the organism, which are applied across different species. We explored how sensitive collective behaviour models are to these sensory assumptions. Specifically, we used parameters reflecting the visual coverage and visual acuity that determine the spatial range over which an individual can detect and interact with conspecifics. Using metric and topological collective behaviour models, we compared the classic sensory parameters, typically used to model birds and fish, with a set of realistic sensory parameters obtained through physiological measurements. Compared with the classic sensory assumptions, the realistic assumptions increased perceptual ranges, which led to fewer groups and larger group sizes in all species, and higher polarity values and slightly shorter neighbour distances in the fish species. Overall, classic visual sensory assumptions are not representative of many species showing collective behaviour and constrain unrealistically their perceptual ranges. More importantly, caution must be exercised when empirically testing the predictions of these models in terms of choosing the model species, making realistic predictions, and interpreting the results. PMID:28018616
A Corticothalamic Circuit Model for Sound Identification in Complex Scenes
Otazu, Gonzalo H.; Leibold, Christian
2011-01-01
The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal. PMID:21931668
Software for Estimating Costs of Testing Rocket Engines
NASA Technical Reports Server (NTRS)
Hines, Merlon M.
2004-01-01
A high-level parametric mathematical model for estimating the costs of testing rocket engines and components at Stennis Space Center has been implemented as a Microsoft Excel program that generates multiple spreadsheets. The model and the program are both denoted, simply, the Cost Estimating Model (CEM). The inputs to the CEM are the parameters that describe particular tests, including test types (component or engine test), numbers and duration of tests, thrust levels, and other parameters. The CEM estimates anticipated total project costs for a specific test. Estimates are broken down into testing categories based on a work-breakdown structure and a cost-element structure. A notable historical assumption incorporated into the CEM is that total labor times depend mainly on thrust levels. As a result of a recent modification of the CEM to increase the accuracy of predicted labor times, the dependence of labor time on thrust level is now embodied in third- and fourth-order polynomials.
Software for Estimating Costs of Testing Rocket Engines
NASA Technical Reports Server (NTRS)
Hines, Merion M.
2002-01-01
A high-level parametric mathematical model for estimating the costs of testing rocket engines and components at Stennis Space Center has been implemented as a Microsoft Excel program that generates multiple spreadsheets. The model and the program are both denoted, simply, the Cost Estimating Model (CEM). The inputs to the CEM are the parameters that describe particular tests, including test types (component or engine test), numbers and duration of tests, thrust levels, and other parameters. The CEM estimates anticipated total project costs for a specific test. Estimates are broken down into testing categories based on a work-breakdown structure and a cost-element structure. A notable historical assumption incorporated into the CEM is that total labor times depend mainly on thrust levels. As a result of a recent modification of the CEM to increase the accuracy of predicted labor times, the dependence of labor time on thrust level is now embodied in third- and fourth-order polynomials.
Software for Estimating Costs of Testing Rocket Engines
NASA Technical Reports Server (NTRS)
Hines, Merlon M.
2003-01-01
A high-level parametric mathematical model for estimating the costs of testing rocket engines and components at Stennis Space Center has been implemented as a Microsoft Excel program that generates multiple spreadsheets. The model and the program are both denoted, simply, the Cost Estimating Model (CEM). The inputs to the CEM are the parameters that describe particular tests, including test types (component or engine test), numbers and duration of tests, thrust levels, and other parameters. The CEM estimates anticipated total project costs for a specific test. Estimates are broken down into testing categories based on a work-breakdown structure and a cost-element structure. A notable historical assumption incorporated into the CEM is that total labor times depend mainly on thrust levels. As a result of a recent modification of the CEM to increase the accuracy of predicted labor times, the dependence of labor time on thrust level is now embodied in third- and fourth-order polynomials.
Breen, Barbara J; Donovan, Graham M; Sneyd, James; Tawhai, Merryn H
2012-08-15
Airway hyper-responsiveness (AHR), a hallmark of asthma, is a highly complex phenomenon characterised by multiple processes manifesting over a large range of length and time scales. Multiscale computational models have been derived to embody the experimental understanding of AHR. While current models differ in their derivation, a common assumption is that the increase in parenchymal tethering pressure P(teth) during airway constriction can be described using the model proposed by Lai-Fook (1979), which is based on intact lung experimental data for elastic moduli over a range of inflation pressures. Here we reexamine this relationship for consistency with a nonlinear elastic material law that has been parameterised to the pressure-volume behaviour of the intact lung. We show that the nonlinear law and Lai-Fook's relationship are consistent for small constrictions, but diverge when the constriction becomes large. Copyright © 2012 Elsevier B.V. All rights reserved.
A Deep Learning Approach to LIBS Spectroscopy for Planetary Applications
NASA Astrophysics Data System (ADS)
Mullen, T. H.; Parente, M.; Gemp, I.; Dyar, M. D.
2017-12-01
The ChemCam instrument on the Curiousity rover has collected >440,000 laser-induced breakdown spectra (LIBS) from 1500 different geological targets since 2012. The team is using a pipeline of preprocessing and partial least squares techniques to predict compositions of surface materials [1]. Unfortunately, such multivariate techniques are plagued by hard-to-meet assumptions involving constant hyperparameter tuning to specific elements and the amount of training data available; if the whole distribution of data is not seen, the method will overfit to the training data and generalizability will suffer. The rover only has 10 calibration targets on-board that represent a small subset of the geochemical samples the rover is expected to investigate. Deep neural networks have been used to bypass these issues in other fields. Semi-supervised techniques allow researchers to utilized small labeled datasets and vast amounts of unlabeled data. One example is the variational autoencoder model, a semi-supervised generative model in the form of a deep neural network. The autoencoder assumes that LIBS spectra are generated from a distribution conditioned on the elemental compositions in the sample and some nuisance. The system is broken into two models: one that predicts elemental composition from the spectra and one that generates spectra from compositions that may or may not be seen in the training set. The synthesized spectra show strong agreement with geochemical conventions to express specific compositions. The predictions of composition show improved generalizability to PLS. Deep neural networks have also been used to transfer knowledge from one dataset to another to solve unlabeled data problems. Given that vast amounts of laboratry LIBS spectra have been obtained in the past few years, it is now feasible train a deep net to predict elemental composition from lab spectra. Transfer learning (manifold alignment or calibration transfer) [2] is then used to fine-tune the model from terrestrial lab data to Martian field data. Neural networks and generative models provide the flexibility need for elemental composition prediction and unseen spectra synthesis. [1] Clegg S. et al. (2016) Spectrochim. Acta B, 129, 64-85. [2] Boucher T. et al. (2017) J. Chemom., 31, e2877.
Design and optimization of membrane-type acoustic metamaterials
NASA Astrophysics Data System (ADS)
Blevins, Matthew Grant
One of the most common problems in noise control is the attenuation of low frequency noise. Typical solutions require barriers with high density and/or thickness. Membrane-type acoustic metamaterials are a novel type of engineered material capable of high low-frequency transmission loss despite their small thickness and light weight. These materials are ideally suited to applications with strict size and weight limitations such as aircraft, automobiles, and buildings. The transmission loss profile can be manipulated by changing the micro-level substructure, stacking multiple unit cells, or by creating multi-celled arrays. To date, analysis has focused primarily on experimental studies in plane-wave tubes and numerical modeling using finite element methods. These methods are inefficient when used for applications that require iterative changes to the structure of the material. To facilitate design and optimization of membrane-type acoustic metamaterials, computationally efficient dynamic models based on the impedance-mobility approach are proposed. Models of a single unit cell in a waveguide and in a baffle, a double layer of unit cells in a waveguide, and an array of unit cells in a baffle are studied. The accuracy of the models and the validity of assumptions used are verified using a finite element method. The remarkable computational efficiency of the impedance-mobility models compared to finite element methods enables implementation in design tools based on a graphical user interface and in optimization schemes. Genetic algorithms are used to optimize the unit cell design for a variety of noise reduction goals, including maximizing transmission loss for broadband, narrow-band, and tonal noise sources. The tools for design and optimization created in this work will enable rapid implementation of membrane-type acoustic metamaterials to solve real-world noise control problems.
Gagnon, B; Abrahamowicz, M; Xiao, Y; Beauchamp, M-E; MacDonald, N; Kasymjanova, G; Kreisman, H; Small, D
2010-01-01
Background: C-reactive protein (CRP) is gaining credibility as a prognostic factor in different cancers. Cox's proportional hazard (PH) model is usually used to assess prognostic factors. However, this model imposes a priori assumptions, which are rarely tested, that (1) the hazard ratio associated with each prognostic factor remains constant across the follow-up (PH assumption) and (2) the relationship between a continuous predictor and the logarithm of the mortality hazard is linear (linearity assumption). Methods: We tested these two assumptions of the Cox's PH model for CRP, using a flexible statistical model, while adjusting for other known prognostic factors, in a cohort of 269 patients newly diagnosed with non-small cell lung cancer (NSCLC). Results: In the Cox's PH model, high CRP increased the risk of death (HR=1.11 per each doubling of CRP value, 95% CI: 1.03–1.20, P=0.008). However, both the PH assumption (P=0.033) and the linearity assumption (P=0.015) were rejected for CRP, measured at the initiation of chemotherapy, which kept its prognostic value for approximately 18 months. Conclusion: Our analysis shows that flexible modeling provides new insights regarding the value of CRP as a prognostic factor in NSCLC and that Cox's PH model underestimates early risks associated with high CRP. PMID:20234363
A composite smeared finite element for mass transport in capillary systems and biological tissue.
Kojic, M; Milosevic, M; Simic, V; Koay, E J; Fleming, J B; Nizzero, S; Kojic, N; Ziemys, A; Ferrari, M
2017-09-01
One of the key processes in living organisms is mass transport occurring from blood vessels to tissues for supplying tissues with oxygen, nutrients, drugs, immune cells, and - in the reverse direction - transport of waste products of cell metabolism to blood vessels. The mass exchange from blood vessels to tissue and vice versa occurs through blood vessel walls. This vital process has been investigated experimentally over centuries, and also in the last decades by the use of computational methods. Due to geometrical and functional complexity and heterogeneity of capillary systems, it is however not feasible to model in silico individual capillaries (including transport through the walls and coupling to tissue) within whole organ models. Hence, there is a need for simplified and robust computational models that address mass transport in capillary-tissue systems. We here introduce a smeared modeling concept for gradient-driven mass transport and formulate a new composite smeared finite element (CSFE). The transport from capillary system is first smeared to continuous mass sources within tissue, under the assumption of uniform concentration within capillaries. Here, the fundamental relation between capillary surface area and volumetric fraction is derived as the basis for modeling transport through capillary walls. Further, we formulate the CSFE which relies on the transformation of the one-dimensional (1D) constitutive relations (for transport within capillaries) into the continuum form expressed by Darcy's and diffusion tensors. The introduced CSFE is composed of two volumetric parts - capillary and tissue domains, and has four nodal degrees of freedom (DOF): pressure and concentration for each of the two domains. The domains are coupled by connectivity elements at each node. The fictitious connectivity elements take into account the surface area of capillary walls which belongs to each node, as well as the wall material properties (permeability and partitioning). The overall FE model contains geometrical and material characteristics of the entire capillary-tissue system, with physiologically measurable parameters assigned to each FE node within the model. The smeared concept is implemented into our implicit-iterative FE scheme and into FE package PAK. The first three examples illustrate accuracy of the CSFE element, while the liver and pancreas models demonstrate robustness of the introduced methodology and its applicability to real physiological conditions.
CFD simulation of flow through heart: a perspective review.
Khalafvand, S S; Ng, E Y K; Zhong, L
2011-01-01
The heart is an organ which pumps blood around the body by contraction of muscular wall. There is a coupled system in the heart containing the motion of wall and the motion of blood fluid; both motions must be computed simultaneously, which make biological computational fluid dynamics (CFD) difficult. The wall of the heart is not rigid and hence proper boundary conditions are essential for CFD modelling. Fluid-wall interaction is very important for real CFD modelling. There are many assumptions for CFD simulation of the heart that make it far from a real model. A realistic fluid-structure interaction modelling the structure by the finite element method and the fluid flow by CFD use more realistic coupling algorithms. This type of method is very powerful to solve the complex properties of the cardiac structure and the sensitive interaction of fluid and structure. The final goal of heart modelling is to simulate the total heart function by integrating cardiac anatomy, electrical activation, mechanics, metabolism and fluid mechanics together, as in the computational framework.
Simpson, R.W.; Lienkaemper, J.J.; Galehouse, J.S.
2001-01-01
Variations ill surface creep rate along the Hayward fault are modeled as changes in locking depth using 3D boundary elements. Model creep is driven by screw dislocations at 12 km depth under the Hayward and other regional faults. Inferred depth to locking varies along strike from 4-12 km. (12 km implies no locking.) Our models require locked patches under the central Hayward fault, consistent with a M6.8 earthquake in 1868, but the geometry and extent of locking under the north and south ends depend critically on assumptions regarding continuity and creep behavior of the fault at its ends. For the northern onshore part of the fault, our models contain 1.4-1.7 times more stored moment than the model of Bu??rgmann et al. [2000]; 45-57% of this stored moment resides in creeping areas. It is important for seismic hazard estimation to know how much of this moment is released coseismically or as aseismic afterslip.
Caruso, Roberto; Fida, Roberta; Sili, Alessandro; Arrigoni, Cristina
2016-01-01
Competence is considered a fundamental element when measuring a nurse's or student's ability to provide nursing care, but there is no consensus on what competence really is. This paper aims to review the existing meanings and models of nursing competence. The overview of literature reviews and concept analysis was performed through a search on Pubmed, Cinahl and PsychINFO from January 2005 to September 2014. It included key words, such as: Competence Model; Professional Competence; Nursing Competence; Competency Model; Professional Competency; Nursing Competency. A total of 14 papers were found, coming from educational or clinical nursing field. It was possible to identify some common themes: description of competence determinants; confu- sion around the competence concept; lack in competence evaluation; lack when competence have to be operationalized. The overview results, enriched by the literature coming out from the organiza- tional studies, build the conceptual basis of an integrated model of nursing competence. More empirical research is needed to test the theoretical assumptions.
A Squeeze-film Damping Model for the Circular Torsion Micro-resonators
NASA Astrophysics Data System (ADS)
Yang, Fan; Li, Pu
2017-07-01
In recent years, MEMS devices are widely used in many industries. The prediction of squeeze-film damping is very important for the research of high quality factor resonators. In the past, there have been many analytical models predicting the squeeze-film damping of the torsion micro-resonators. However, for the circular torsion micro-plate, the works over it is very rare. The only model presented by Xia et al[7] using the method of eigenfunction expansions. In this paper, The Bessel series solution is used to solve the Reynolds equation under the assumption of the incompressible gas of the gap, the pressure distribution of the gas between two micro-plates is obtained. Then the analytical expression for the damping constant of the device is derived. The result of the present model matches very well with the finite element method (FEM) solutions and the result of Xia’s model, so the present models’ accuracy is able to be validated.
Modeling and Calibration of a Novel One-Mirror Galvanometric Laser Scanner
Yu, Chengyi; Chen, Xiaobo; Xi, Juntong
2017-01-01
A laser stripe sensor has limited application when a point cloud of geometric samples on the surface of the object needs to be collected, so a galvanometric laser scanner is designed by using a one-mirror galvanometer element as its mechanical device to drive the laser stripe to sweep along the object. A novel mathematical model is derived for the proposed galvanometer laser scanner without any position assumptions and then a model-driven calibration procedure is proposed. Compared with available model-driven approaches, the influence of machining and assembly errors is considered in the proposed model. Meanwhile, a plane-constraint-based approach is proposed to extract a large number of calibration points effectively and accurately to calibrate the galvanometric laser scanner. Repeatability and accuracy of the galvanometric laser scanner are evaluated on the automobile production line to verify the efficiency and accuracy of the proposed calibration method. Experimental results show that the proposed calibration approach yields similar measurement performance compared with a look-up table calibration method. PMID:28098844
Arctic Ice Dynamics Joint Experiment (AIDJEX) assumptions revisited and found inadequate
NASA Astrophysics Data System (ADS)
Coon, Max; Kwok, Ron; Levy, Gad; Pruis, Matthew; Schreyer, Howard; Sulsky, Deborah
2007-11-01
This paper revisits the Arctic Ice Dynamics Joint Experiment (AIDJEX) assumptions about pack ice behavior with an eye to modeling sea ice dynamics. The AIDJEX assumptions were that (1) enough leads were present in a 100 km by 100 km region to make the ice isotropic on that scale; (2) the ice had no tensile strength; and (3) the ice behavior could be approximated by an isotropic yield surface. These assumptions were made during the development of the AIDJEX model in the 1970s, and are now found inadequate. The assumptions were made in part because of insufficient large-scale (10 km) deformation and stress data, and in part because of computer capability limitations. Upon reviewing deformation and stress data, it is clear that a model including deformation on discontinuities and an anisotropic failure surface with tension would better describe the behavior of pack ice. A model based on these assumptions is needed to represent the deformation and stress in pack ice on scales from 10 to 100 km, and would need to explicitly resolve discontinuities. Such a model would require a different class of metrics to validate discontinuities against observations.
Using effort information with change-in-ratio data for population estimation
Udevitz, Mark S.; Pollock, Kenneth H.
1995-01-01
Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
Predictive performance models and multiple task performance
NASA Technical Reports Server (NTRS)
Wickens, Christopher D.; Larish, Inge; Contorer, Aaron
1989-01-01
Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are shown in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.
NASA Technical Reports Server (NTRS)
Gomez, C. F.; Mireles, O. R.; Stewart, E.
2016-01-01
The Space Capable Cryogenic Thermal Engine (SCCTE) effort considers a nuclear thermal rocket design based around a Low-Enriched Uranium (LEU) design fission reactor. The reactor core is comprised of bundled hexagonal fuel elements that directly heat hydrogen for expansion in a thrust chamber and hexagonal tie-tubes that house zirconium hydride moderator mass for the purpose of thermalizing fast neutrons resulting from fission events. Created 3D steady state Hex fuel rod model with 1D flow channels. Hand Calculation were used to set up initial conditions for fluid flow. The Hex Fuel rod uses 1D flow paths to model the channels using empirical correlations for heat transfer in a pipe. Created a 2-D axisymmetric transient to steady state model using the CFD turbulent flow and Heat Transfer module in COMSOL. This model was developed to find and understand the hydrogen flow that might effect the thermal gradients axially and at the end of the tie tube where the flow turns and enters an annulus. The Hex fuel rod and Tie tube models were made based on requirements given to us by CSNR and the SCCTE team. The models helped simplify and understand the physics and assumptions. Using pipe correlations reduced the complexity of the 3-D fuel rod model and is numerically more stable and computationally more time-efficient compared to the CFD approach. The 2-D axisymmetric tie tube model can be used as a reference "Virtual test model" for comparing and improving 3-D Models.
Smith, Wade D.; Miller, Jessica A.; Heppell, Selina S.
2013-01-01
Differences in the chemical composition of calcified skeletal structures (e.g. shells, otoliths) have proven useful for reconstructing the environmental history of many marine species. However, the extent to which ambient environmental conditions can be inferred from the elemental signatures within the vertebrae of elasmobranchs (sharks, skates, rays) has not been evaluated. To assess the relationship between water and vertebral elemental composition, we conducted two laboratory studies using round stingrays, Urobatis halleri, as a model species. First, we examined the effects of temperature (16°, 18°, 24°C) on vertebral elemental incorporation (Li/Ca, Mg/Ca, Mn/Ca, Zn/Ca, Sr/Ca, Ba/Ca). Second, we tested the relationship between water and subsequent vertebral elemental composition by manipulating dissolved barium concentrations (1x, 3x, 6x). We also evaluated the influence of natural variation in growth rate on elemental incorporation for both experiments. Finally, we examined the accuracy of classifying individuals to known environmental histories (temperature and barium treatments) using vertebral elemental composition. Temperature had strong, negative effects on the uptake of magnesium (DMg) and barium (DBa) and positively influenced manganese (DMn) incorporation. Temperature-dependent responses were not observed for lithium and strontium. Vertebral Ba/Ca was positively correlated with ambient Ba/Ca. Partition coefficients (DBa) revealed increased discrimination of barium in response to increased dissolved barium concentrations. There were no significant relationships between elemental incorporation and somatic growth or vertebral precipitation rates for any elements except Zn. Relationships between somatic growth rate and DZn were, however, inconsistent and inconclusive. Variation in the vertebral elemental signatures of U. halleri reliably distinguished individual rays from each treatment based on temperature (85%) and Ba exposure (96%) history. These results support the assumption that vertebral elemental composition reflects the environmental conditions during deposition and validates the use of vertebral elemental signatures as natural markers in an elasmobranch. Vertebral elemental analysis is a promising tool for the study of elasmobranch population structure, movement, and habitat use. PMID:24098320
Smith, Wade D; Miller, Jessica A; Heppell, Selina S
2013-01-01
Differences in the chemical composition of calcified skeletal structures (e.g. shells, otoliths) have proven useful for reconstructing the environmental history of many marine species. However, the extent to which ambient environmental conditions can be inferred from the elemental signatures within the vertebrae of elasmobranchs (sharks, skates, rays) has not been evaluated. To assess the relationship between water and vertebral elemental composition, we conducted two laboratory studies using round stingrays, Urobatis halleri, as a model species. First, we examined the effects of temperature (16°, 18°, 24°C) on vertebral elemental incorporation (Li/Ca, Mg/Ca, Mn/Ca, Zn/Ca, Sr/Ca, Ba/Ca). Second, we tested the relationship between water and subsequent vertebral elemental composition by manipulating dissolved barium concentrations (1x, 3x, 6x). We also evaluated the influence of natural variation in growth rate on elemental incorporation for both experiments. Finally, we examined the accuracy of classifying individuals to known environmental histories (temperature and barium treatments) using vertebral elemental composition. Temperature had strong, negative effects on the uptake of magnesium (DMg) and barium (DBa) and positively influenced manganese (DMn) incorporation. Temperature-dependent responses were not observed for lithium and strontium. Vertebral Ba/Ca was positively correlated with ambient Ba/Ca. Partition coefficients (DBa) revealed increased discrimination of barium in response to increased dissolved barium concentrations. There were no significant relationships between elemental incorporation and somatic growth or vertebral precipitation rates for any elements except Zn. Relationships between somatic growth rate and DZn were, however, inconsistent and inconclusive. Variation in the vertebral elemental signatures of U. halleri reliably distinguished individual rays from each treatment based on temperature (85%) and Ba exposure (96%) history. These results support the assumption that vertebral elemental composition reflects the environmental conditions during deposition and validates the use of vertebral elemental signatures as natural markers in an elasmobranch. Vertebral elemental analysis is a promising tool for the study of elasmobranch population structure, movement, and habitat use.
Computational models of basal-ganglia pathway functions: focus on functional neuroanatomy
Schroll, Henning; Hamker, Fred H.
2013-01-01
Over the past 15 years, computational models have had a considerable impact on basal-ganglia research. Most of these models implement multiple distinct basal-ganglia pathways and assume them to fulfill different functions. As there is now a multitude of different models, it has become complex to keep track of their various, sometimes just marginally different assumptions on pathway functions. Moreover, it has become a challenge to oversee to what extent individual assumptions are corroborated or challenged by empirical data. Focusing on computational, but also considering non-computational models, we review influential concepts of pathway functions and show to what extent they are compatible with or contradict each other. Moreover, we outline how empirical evidence favors or challenges specific model assumptions and propose experiments that allow testing assumptions against each other. PMID:24416002
Fricke, Moritz B; Rolfes, Raimund
2015-03-01
An approach for the prediction of underwater noise caused by impact pile driving is described and validated based on in situ measurements. The model is divided into three sub-models. The first sub-model, based on the finite element method, is used to describe the vibration of the pile and the resulting acoustic radiation into the surrounding water and soil column. The mechanical excitation of the pile by the piling hammer is estimated by the second sub-model using an analytical approach which takes the large vertical dimension of the ram into account. The third sub-model is based on the split-step Padé solution of the parabolic equation and targets the long-range propagation up to 20 km. In order to presume realistic environmental properties for the validation, a geoacoustic model is derived from spatially averaged geological information about the investigation area. Although it can be concluded from the validation that the model and the underlying assumptions are appropriate, there are some deviations between modeled and measured results. Possible explanations for the observed errors are discussed.
Dissecting effects of complex mixtures: who's afraid of informative priors?
Thomas, Duncan C; Witte, John S; Greenland, Sander
2007-03-01
Epidemiologic studies commonly investigate multiple correlated exposures, which are difficult to analyze appropriately. Hierarchical modeling provides a promising approach for analyzing such data by adding a higher-level structure or prior model for the exposure effects. This prior model can incorporate additional information on similarities among the correlated exposures and can be parametric, semiparametric, or nonparametric. We discuss the implications of applying these models and argue for their expanded use in epidemiology. While a prior model adds assumptions to the conventional (first-stage) model, all statistical methods (including conventional methods) make strong intrinsic assumptions about the processes that generated the data. One should thus balance prior modeling assumptions against assumptions of validity, and use sensitivity analyses to understand their implications. In doing so - and by directly incorporating into our analyses information from other studies or allied fields - we can improve our ability to distinguish true causes of disease from noise and bias.
An Information-Based Machine Learning Approach to Elasticity Imaging
Hoerig, Cameron; Ghaboussi, Jamshid; Insana, Michael. F.
2016-01-01
An information-based technique is described for applications in mechanical-property imaging of soft biological media under quasi-static loads. We adapted the Autoprogressive method that was originally developed for civil engineering applications for this purpose. The Autoprogressive method is a computational technique that combines knowledge of object shape and a sparse distribution of force and displacement measurements with finite-element analyses and artificial neural networks to estimate a complete set of stress and strain vectors. Elasticity imaging parameters are then computed from estimated stresses and strains. We introduce the technique using ultrasonic pulse-echo measurements in simple gelatin imaging phantoms having linear-elastic properties so that conventional finite-element modeling can be used to validate results. The Autoprogressive algorithm does not require any assumptions about the material properties and can, in principle, be used to image media with arbitrary properties. We show that by selecting a few well-chosen force-displacement measurements that are appropriately applied during training and establish convergence, we can estimate all nontrivial stress and strain vectors throughout an object and accurately estimate an elastic modulus at high spatial resolution. This new method of modeling the mechanical properties of tissue-like materials introduces a unique method of solving the inverse problem and is the first technique for imaging stress without assuming the underlying constitutive model. PMID:27858175
Escape of Hydrogen from the Exosphere of Mars
NASA Astrophysics Data System (ADS)
Bhattacharyya, Dolon; Clarke, John T.; Bertaux, Jean-Loup; Chaufray, Jean-Yves; Mayyasi-Matta, Majd A.
2016-10-01
After decades of exploration, the martian neutral hydrogen exosphere has remained largely uncharacterized even today. In my dissertation I have attempted to constrain the characteristics of the martian hydrogen exosphere using Hubble Space Telescope observations obtained during October-November 2007 and 2014. These observations reveal short-term seasonal changes exhibited by the martian hydrogen exosphere that are inconsistent with the diffusion-limited escape scenario. This seasonal behavior adds a new element towards backtracking the history of water loss from Mars. Modeling of the data also indicates the likely presence of a superthermal population of hydrogen created by non-thermal processes at Mars, another key element to understand the present-day escape. Exploration of the latitudinal symmetry of the martian exosphere indicates that it is symmetric above 2.5 martian radii and asymmetric below this altitude, which could be due to temperature differences between the day and night sides. Finally, the large uncertainties in determining the characteristics of the martian exosphere after decades of exploration is due to various assumptions about the intrinsic characteristics of the martian exosphere in the modeling process, degeneracy in the two modeling parameters temperature and density of the hydrogen atoms, unaccounted seasonal effects, and uncertainties introduced from spacecraft instrumentation as well as their viewing geometry.
The U/Th production ratio and the age of the Milky Way from meteorites and Galactic halo stars
NASA Astrophysics Data System (ADS)
Dauphas, Nicolas
2005-06-01
Some heavy elements (with atomic number A > 69) are produced by the `rapid' (r)-process of nucleosynthesis, where lighter elements are bombarded with a massive flux of neutrons. Although this is characteristic of supernovae and neutron star mergers, uncertainties in where the r-process occurs persist because stellar models are too crude to allow precise quantification of this phenomenon. As a result, there are many uncertainties and assumptions in the models used to calculate the production ratios of actinides (like uranium-238 and thorium-232). Current estimates of the U/Th production ratio range from ~0.4 to 0.7. Here I show that the U/Th abundance ratio in meteorites can be used, in conjunction with observations of low-metallicity stars in the halo of the Milky Way, to determine the U/Th production ratio very precisely . This value can be used in future studies to constrain the possible nuclear mass formulae used in r-process calculations, to help determine the source of Galactic cosmic rays, and to date circumstellar grains. I also estimate the age of the Milky Way ( in a way that is independent of the uncertainties associated with fluctuations in the microwave background or models of stellar evolution.
NASA Astrophysics Data System (ADS)
Alfat, Sayahdin; Kimura, Masato; Firihu, Muhammad Zamrun; Rahmat
2018-05-01
In engineering area, investigation of shape effect in elastic materials was very important. It can lead changing elasticity and surface energy, and also increase of crack propagation in the material. A two-dimensional mathematical model was developed to investigation of elasticity and surface energy in elastic material by Adaptive Finite Element Method. Besides that, behavior of crack propagation has observed for every those materials. The government equations were based on a phase field approach in crack propagation model that developed by Takaishi-Kimura. This research has varied four shape domains where physical properties of materials were same (Young's modulus E = 70 GPa and Poisson's ratio ν = 0.334). Investigation assumptions were; (1) homogeneous and isotropic material, (2) there was not initial cracking at t = 0, (3) initial displacement was zero [u1, u2] = 0) at initial condition (t = 0), and (4) length of time simulation t = 5 with interval Δt = 0.005. Mode I/II or mixed mode crack propagation has been used for the numerical investigation. Results of this studies were very good and accurate to show changing energy and behavior of crack propagation. In the future time, this research can be developed to complex phenomena and domain. Furthermore, shape optimization can be investigation by the model.
NASA Astrophysics Data System (ADS)
Solomou, Alexandros G.; Machairas, Theodoros T.; Karakalas, Anargyros A.; Saravanos, Dimitris A.
2017-06-01
A thermo-mechanically coupled finite element (FE) for the simulation of multi-layered shape memory alloy (SMA) beams admitting large displacements and rotations (LDRs) is developed to capture the geometrically nonlinear effects which are present in many SMA applications. A generalized multi-field beam theory implementing a SMA constitutive model based on small strain theory, thermo-mechanically coupled governing equations and multi-field kinematic hypotheses combining first order shear deformation assumptions with a sixth order polynomial temperature field through the thickness of the beam section are extended to admit LDRs. The co-rotational formulation is adopted, where the motion of the beam is decomposed to rigid body motion and relative small deformation in the local frame. A new generalized multi-layered SMA FE is formulated. The nonlinear transient spatial discretized equations of motion of the SMA structure are synthesized and solved using the Newton-Raphson method combined with an implicit time integration scheme. Correlations of models incorporating the present beam FE with respective results of models incorporating plane stress SMA FEs, demonstrate excellent agreement of the predicted LDRs response, temperature and phase transformation fields, as well as, significant gains in computational time.
3D finite element modelling of sheet metal blanking process
NASA Astrophysics Data System (ADS)
Bohdal, Lukasz; Kukielka, Leon; Chodor, Jaroslaw; Kulakowska, Agnieszka; Patyk, Radoslaw; Kaldunski, Pawel
2018-05-01
The shearing process such as the blanking of sheet metals has been used often to prepare workpieces for subsequent forming operations. The use of FEM simulation is increasing for investigation and optimizing the blanking process. In the current literature a blanking FEM simulations for the limited capability and large computational cost of the three dimensional (3D) analysis has been largely limited to two dimensional (2D) plane axis-symmetry problems. However, a significant progress in modelling which takes into account the influence of real material (e.g. microstructure of the material), physical and technological conditions can be obtained by using 3D numerical analysis methods in this area. The objective of this paper is to present 3D finite element analysis of the ductile fracture, strain distribution and stress in blanking process with the assumption geometrical and physical nonlinearities. The physical, mathematical and computer model of the process are elaborated. Dynamic effects, mechanical coupling, constitutive damage law and contact friction are taken into account. The application in ANSYS/LS-DYNA program is elaborated. The effect of the main process parameter a blanking clearance on the deformation of 1018 steel and quality of the blank's sheared edge is analyzed. The results of computer simulations can be used to forecasting quality of the final parts optimization.
A Unimodal Model for Double Observer Distance Sampling Surveys.
Becker, Earl F; Christ, Aaron M
2015-01-01
Distance sampling is a widely used method to estimate animal population size. Most distance sampling models utilize a monotonically decreasing detection function such as a half-normal. Recent advances in distance sampling modeling allow for the incorporation of covariates into the distance model, and the elimination of the assumption of perfect detection at some fixed distance (usually the transect line) with the use of double-observer models. The assumption of full observer independence in the double-observer model is problematic, but can be addressed by using the point independence assumption which assumes there is one distance, the apex of the detection function, where the 2 observers are assumed independent. Aerially collected distance sampling data can have a unimodal shape and have been successfully modeled with a gamma detection function. Covariates in gamma detection models cause the apex of detection to shift depending upon covariate levels, making this model incompatible with the point independence assumption when using double-observer data. This paper reports a unimodal detection model based on a two-piece normal distribution that allows covariates, has only one apex, and is consistent with the point independence assumption when double-observer data are utilized. An aerial line-transect survey of black bears in Alaska illustrate how this method can be applied.
Handling Conflict in the Work Environment.
ERIC Educational Resources Information Center
Brewer, Ernest W.
1997-01-01
Discussion of workplace conflict management examines erroneous assumptions inherent in traditional reaction patterns, considers key elements of planning for conflict prevention, and some workplace strategies to help minimize conflicts. Several approaches to conflict management, and their outcomes, are highlighted, and stages of the…
Teaching Critical Reflection. Trends and Issues Alerts.
ERIC Educational Resources Information Center
Imel, Susan
Recently, the topic of reflection and the development of reflective practitioners have received a great deal of attention. Four elements are central to critical reflection: assumption analysis, contextual awareness, imaginative speculation, and reflective skepticism. Definitions of critical reflection often reveal differing theoretical…
Vehicle information exchange needs for mobility applications : version 3.0.
DOT National Transportation Integrated Search
1996-06-01
The Evaluatory Design Document provides a unifying set of assumptions for other evaluations to utilize. Many of the evaluation activities require the definition of an actual implementation in order to be performed. For example, to cost the elements o...
Assessment of historical masonry pillars reinforced by CFRP strips
NASA Astrophysics Data System (ADS)
Fedele, Roberto; Rosati, Giampaolo; Biolzi, Luigi; Cattaneo, Sara
2014-10-01
In this methodological study, the ultimate response of masonry pillars strengthened by externally bonded Carbon Fiber Reinforced Polymer (CFRP) was investigated. Historical bricks were derived from a XVII century rural building, whilst a high strength mortar was utilized for the joints. The conventional experimental information, concerning the overall reaction force and relative displacements provided by "point" sensors (LVDTs and clip gauge), were herein enriched with no-contact, full-field kinematic measurements provided by 2D Digital Image Correlation (2D DIC). Experimental information were critically compared with prediction provided by an advanced three-dimensional models, based on nonlinear finite elements under the simplifying assumption of perfect adhesion between the reinforcement and the support.
The System Dynamics Research on the Private Cars' Amount in Beijing
NASA Astrophysics Data System (ADS)
Fan, Jie; Yan, Guang-Le
The thesis analyzes the development problem of private cars’ amount in Beijing from the perspective of system dynamics. With the flow chart illustrating the relationships of relevant elements, the SD model is established by VENSIM to simulate the growth trend of private autos’ amount in the future on the background of “Public Transportation First” policy based on the original data in Beijing. Then the article discusses the forecasting impacts of “Single-and-double license plate number limit” on the number of city vehicles and private cars under the assumption that this policy implemented for long after the 2008 Olympic Games. Finally, some recommendations are put forward for proper control over this problem.
NASA Technical Reports Server (NTRS)
Huss, G. R.; Alexander, E. C., Jr.
1985-01-01
The development of models as tracers of nobel gases through the Earth's evolution is discussed. A new set of paradigms embodying present knowledge was developed. Several important areas for future research are: (1) measurement of the elemental and isotopic compositions of the five noble gases in a large number of terrestrial materials, thus better defining the composition and distribution of terrestrial noble gases; (2) determinations of relative diffusive behavior, chemical behavior, and the distribution between solid and melt of noble gases under mantle conditions are urgently needed; (3) disequilibrium behavior in the nebula needs investigation, and the behavior of plasmas and possible cryotrapping on cold nebular solids are considered.
Accurate Modeling of X-ray Extinction by Interstellar Grains
NASA Astrophysics Data System (ADS)
Hoffman, John; Draine, B. T.
2016-02-01
Interstellar abundance determinations from fits to X-ray absorption edges often rely on the incorrect assumption that scattering is insignificant and can be ignored. We show instead that scattering contributes significantly to the attenuation of X-rays for realistic dust grain size distributions and substantially modifies the spectrum near absorption edges of elements present in grains. The dust attenuation modules used in major X-ray spectral fitting programs do not take this into account. We show that the consequences of neglecting scattering on the determination of interstellar elemental abundances are modest; however, scattering (along with uncertainties in the grain size distribution) must be taken into account when near-edge extinction fine structure is used to infer dust mineralogy. We advertise the benefits and accuracy of anomalous diffraction theory for both X-ray halo analysis and near edge absorption studies. We present an open source Fortran suite, General Geometry Anomalous Diffraction Theory (GGADT), that calculates X-ray absorption, scattering, and differential scattering cross sections for grains of arbitrary geometry and composition.
Computational compliance criteria in water hammer modelling
NASA Astrophysics Data System (ADS)
Urbanowicz, Kamil
2017-10-01
Among many numerical methods (finite: difference, element, volume etc.) used to solve the system of partial differential equations describing unsteady pipe flow, the method of characteristics (MOC) is most appreciated. With its help, it is possible to examine the effect of numerical discretisation carried over the pipe length. It was noticed, based on the tests performed in this study, that convergence of the calculation results occurred on a rectangular grid with the division of each pipe of the analysed system into at least 10 elements. Therefore, it is advisable to introduce computational compliance criteria (CCC), which will be responsible for optimal discretisation of the examined system. The results of this study, based on the assumption of various values of the Courant-Friedrichs-Levy (CFL) number, indicate also that the CFL number should be equal to one for optimum computational results. Application of the CCC criterion to own written and commercial computer programmes based on the method of characteristics will guarantee fast simulations and the necessary computational coherence.
Effectiveness of nonporous windscreens for infrasonic measurements.
Dauchez, Nicolas; Hayot, Maxime; Denis, Stéphane
2016-06-01
This paper deals with nonporous windscreens used for reducing noise in infrasonic measurements. A model of sound transmission using a modal approach is derived. The system is a square plate coupled with a cavity. The model agrees with finite element simulations and measurements performed on two windscreens: a cubic windscreen using a material recommended by Shams, Zuckerwar, and Sealey [J. Acoust. Soc. Am. 118, 1335-1340 (2005)] and an optimized flat windscreen made out of aluminum. Only the latter was found to couple acoustical waves below 10 Hz without any attenuation. Moreover, wind noise reduction measurements show that nonporous windscreens perform similarly as a pipe array by averaging the pressure fluctuations. These results question the assumptions of Shams et al. and Zuckerwar [J. Acoust. Soc. Am. 127, 3327-3334 (2010)] about compact nonporous windscreens design and effectiveness.
NASA Technical Reports Server (NTRS)
Macfarlane, J. J.; Hubbard, W. B.
1983-01-01
A numerical technique for solving the Thomas-Fermi-Dirac (TED) equation in three dimensions, for an array of ions obeying periodic boundary conditions, is presented. The technique is then used to calculate deviations from ideal mixing for an alloy of hydrogen and helium at zero temperature and high presures. Results are compared with alternative models which apply perturbation theory to calculation of the electron distribution, based upon the assumption of weak response of the electron gas to the ions. The TFD theory, which permits strong electron response, always predicts smaller deviations from ideal mixing than would be predicted by perturbation theory. The results indicate that predicted phase separation curves for hydrogen-helium alloys under conditions prevailing in the metallic zones of Jupiter and Saturn are very model dependent.
A spectral dynamic stiffness method for free vibration analysis of plane elastodynamic problems
NASA Astrophysics Data System (ADS)
Liu, X.; Banerjee, J. R.
2017-03-01
A highly efficient and accurate analytical spectral dynamic stiffness (SDS) method for modal analysis of plane elastodynamic problems based on both plane stress and plane strain assumptions is presented in this paper. First, the general solution satisfying the governing differential equation exactly is derived by applying two types of one-dimensional modified Fourier series. Then the SDS matrix for an element is formulated symbolically using the general solution. The SDS matrices are assembled directly in a similar way to that of the finite element method, demonstrating the method's capability to model complex structures. Any arbitrary boundary conditions are represented accurately in the form of the modified Fourier series. The Wittrick-Williams algorithm is then used as the solution technique where the mode count problem (J0) of a fully-clamped element is resolved. The proposed method gives highly accurate solutions with remarkable computational efficiency, covering low, medium and high frequency ranges. The method is applied to both plane stress and plane strain problems with simple as well as complex geometries. All results from the theory in this paper are accurate up to the last figures quoted to serve as benchmarks.
NASA Astrophysics Data System (ADS)
Robbins, Joshua; Voth, Thomas
2011-06-01
Material response to dynamic loading is often dominated by microstructure such as grain topology, porosity, inclusions, and defects; however, many models rely on assumptions of homogeneity. We use the probabilistic finite element method (WK Liu, IJNME, 1986) to introduce local uncertainty to account for material heterogeneity. The PFEM uses statistical information about the local material response (i.e., its expectation, coefficient of variation, and autocorrelation) drawn from knowledge of the microstructure, single crystal behavior, and direct numerical simulation (DNS) to determine the expectation and covariance of the system response (velocity, strain, stress, etc). This approach is compared to resolved grain-scale simulations of the equivalent system. The microstructures used for the DNS are produced using Monte Carlo simulations of grain growth, and a sufficient number of realizations are computed to ensure a meaningful comparison. Finally, comments are made regarding the suitability of one-dimensional PFEM for modeling material heterogeneity. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Lee, Kit-Hang; Fu, Denny K.C.; Leong, Martin C.W.; Chow, Marco; Fu, Hing-Choi; Althoefer, Kaspar; Sze, Kam Yim; Yeung, Chung-Kwong
2017-01-01
Abstract Bioinspired robotic structures comprising soft actuation units have attracted increasing research interest. Taking advantage of its inherent compliance, soft robots can assure safe interaction with external environments, provided that precise and effective manipulation could be achieved. Endoscopy is a typical application. However, previous model-based control approaches often require simplified geometric assumptions on the soft manipulator, but which could be very inaccurate in the presence of unmodeled external interaction forces. In this study, we propose a generic control framework based on nonparametric and online, as well as local, training to learn the inverse model directly, without prior knowledge of the robot's structural parameters. Detailed experimental evaluation was conducted on a soft robot prototype with control redundancy, performing trajectory tracking in dynamically constrained environments. Advanced element formulation of finite element analysis is employed to initialize the control policy, hence eliminating the need for random exploration in the robot's workspace. The proposed control framework enabled a soft fluid-driven continuum robot to follow a 3D trajectory precisely, even under dynamic external disturbance. Such enhanced control accuracy and adaptability would facilitate effective endoscopic navigation in complex and changing environments. PMID:29251567
Lee, Kit-Hang; Fu, Denny K C; Leong, Martin C W; Chow, Marco; Fu, Hing-Choi; Althoefer, Kaspar; Sze, Kam Yim; Yeung, Chung-Kwong; Kwok, Ka-Wai
2017-12-01
Bioinspired robotic structures comprising soft actuation units have attracted increasing research interest. Taking advantage of its inherent compliance, soft robots can assure safe interaction with external environments, provided that precise and effective manipulation could be achieved. Endoscopy is a typical application. However, previous model-based control approaches often require simplified geometric assumptions on the soft manipulator, but which could be very inaccurate in the presence of unmodeled external interaction forces. In this study, we propose a generic control framework based on nonparametric and online, as well as local, training to learn the inverse model directly, without prior knowledge of the robot's structural parameters. Detailed experimental evaluation was conducted on a soft robot prototype with control redundancy, performing trajectory tracking in dynamically constrained environments. Advanced element formulation of finite element analysis is employed to initialize the control policy, hence eliminating the need for random exploration in the robot's workspace. The proposed control framework enabled a soft fluid-driven continuum robot to follow a 3D trajectory precisely, even under dynamic external disturbance. Such enhanced control accuracy and adaptability would facilitate effective endoscopic navigation in complex and changing environments.
Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach
NASA Astrophysics Data System (ADS)
Chowdhury, R.; Adhikari, S.
2012-10-01
Uncertainty propagation engineering systems possess significant computational challenges. This paper explores the possibility of using correlated function expansion based metamodelling approach when uncertain system parameters are modeled using Fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space RM) or may be infinite-dimensional as in the function space CM[0,1]. The computational effort to determine the expansion functions using the alpha cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is integrated with a commercial Finite Element software. Modal analysis of a simplified aircraft wing with Fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations.
Analyzing the impact of modeling choices and assumptions in compartmental epidemiological models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nutaro, James J.; Pullum, Laura L.; Ramanathan, Arvind
In this study, computational models have become increasingly used as part of modeling, predicting, and understanding how infectious diseases spread within large populations. These models can be broadly classified into differential equation-based models (EBM) and agent-based models (ABM). Both types of models are central in aiding public health officials design intervention strategies in case of large epidemic outbreaks. We examine these models in the context of illuminating their hidden assumptions and the impact these may have on the model outcomes. Very few ABM/EBMs are evaluated for their suitability to address a particular public health concern, and drawing relevant conclusions aboutmore » their suitability requires reliable and relevant information regarding the different modeling strategies and associated assumptions. Hence, there is a need to determine how the different modeling strategies, choices of various parameters, and the resolution of information for EBMs and ABMs affect outcomes, including predictions of disease spread. In this study, we present a quantitative analysis of how the selection of model types (i.e., EBM vs. ABM), the underlying assumptions that are enforced by model types to model the disease propagation process, and the choice of time advance (continuous vs. discrete) affect the overall outcomes of modeling disease spread. Our study reveals that the magnitude and velocity of the simulated epidemic depends critically on the selection of modeling principles, various assumptions of disease process, and the choice of time advance.« less
Analyzing the impact of modeling choices and assumptions in compartmental epidemiological models
Nutaro, James J.; Pullum, Laura L.; Ramanathan, Arvind; ...
2016-05-01
In this study, computational models have become increasingly used as part of modeling, predicting, and understanding how infectious diseases spread within large populations. These models can be broadly classified into differential equation-based models (EBM) and agent-based models (ABM). Both types of models are central in aiding public health officials design intervention strategies in case of large epidemic outbreaks. We examine these models in the context of illuminating their hidden assumptions and the impact these may have on the model outcomes. Very few ABM/EBMs are evaluated for their suitability to address a particular public health concern, and drawing relevant conclusions aboutmore » their suitability requires reliable and relevant information regarding the different modeling strategies and associated assumptions. Hence, there is a need to determine how the different modeling strategies, choices of various parameters, and the resolution of information for EBMs and ABMs affect outcomes, including predictions of disease spread. In this study, we present a quantitative analysis of how the selection of model types (i.e., EBM vs. ABM), the underlying assumptions that are enforced by model types to model the disease propagation process, and the choice of time advance (continuous vs. discrete) affect the overall outcomes of modeling disease spread. Our study reveals that the magnitude and velocity of the simulated epidemic depends critically on the selection of modeling principles, various assumptions of disease process, and the choice of time advance.« less
NASA Astrophysics Data System (ADS)
Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.
2017-12-01
Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user-base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are being used and documented in hundreds of peer-reviewed publications each year, and likely more applied in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions, versus using improved data and enhanced assumptions on model outcomes and thus ultimately, study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study region. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at the both the field and watershed scales.
Clarity of objectives and working principles enhances the success of biomimetic programs.
Wolff, Jonas O; Wells, David; Reid, Chris R; Blamires, Sean J
2017-09-26
Biomimetics, the transfer of functional principles from living systems into product designs, is increasingly being utilized by engineers. Nevertheless, recurring problems must be overcome if it is to avoid becoming a short-lived fad. Here we assess the efficiency and suitability of methods typically employed by examining three flagship examples of biomimetic design approaches from different disciplines: (1) the creation of gecko-inspired adhesives; (2) the synthesis of spider silk, and (3) the derivation of computer algorithms from natural self-organizing systems. We find that identification of the elemental working principles is the most crucial step in the biomimetic design process. It bears the highest risk of failure (e.g. losing the target function) due to false assumptions about the working principle. Common problems that hamper successful implementation are: (i) a discrepancy between biological functions and the desired properties of the product, (ii) uncertainty about objectives and applications, (iii) inherent limits in methodologies, and (iv) false assumptions about the biology of the models. Projects that aim for multi-functional products are particularly challenging to accomplish. We suggest a simplification, modularisation and specification of objectives, and a critical assessment of the suitability of the model. Comparative analyses, experimental manipulation, and numerical simulations followed by tests of artificial models have led to the successful extraction of working principles. A searchable database of biological systems would optimize the choice of a model system in top-down approaches that start at an engineering problem. Only when biomimetic projects become more predictable will there be wider acceptance of biomimetics as an innovative problem-solving tool among engineers and industry.
Teaching "Instant Experience" with Graphical Model Validation Techniques
ERIC Educational Resources Information Center
Ekstrøm, Claus Thorn
2014-01-01
Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so it becomes easier to validate if any of the underlying assumptions are violated.
Modeling fission product vapor transport in the Falcon facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shepherd, I.M.; Drossinos, Y.; Benson, C.G.
1995-05-01
An extensive database of aerosol Experiments exists and has been used for checking aerosol transport codes. Data for fission product vapor transport are harder to find. Some qualitative data are available, but the Falcon thermal gradient tube tests carried out at AEA Technology`s laboratories in Winfrith, England, mark the first serious attempt to provide a set of experiments suitable for the validation of codes that predict the transport and condensation of realistic mixtures of fission product vapors. Four of these have been analyzed to check how well the computer code VICTORIA can predict the most important phenomena. Of the fourmore » experiments studied, two are reference cases (FAL-17 and FAL-19), one is a case without boric acid (FAL-18), and the other is run in a reducing atmosphere (FAL-20). The results show that once the vapors condense onto aerosols, VICTORIA can predict their deposition rather well. The dominant mechanism is thermophoresis, and each element deposits with more or less the same deposition velocity. The behavior of the vapors is harder to interpret. Essentially, it is important to know the temperature at which each element condenses. It is clear from the measurements that this temperature changed from test to test-caused mostly by the different speciation as the composition of the carrier gas and the relative concentration of other fission products changed. Only in the test with a steam atmosphere and without boric acid was the assumption valid that most of the iodine is cesium iodide and most of the cesium is cesium hydroxide. In general, VICTORIA predicts that, with the exception of cesium, there will be less variation in the speciation-and, hence, variation in the deposition-between tests than is in fact observed. VICTORIA underpredicts the volatility of most elements, and this is partly a consequence of the ideal solution assumption and partly an overestimation of vapor/aerosol interactions.« less
Role of adjacency-matrix degeneracy in maximum-entropy-weighted network models
NASA Astrophysics Data System (ADS)
Sagarra, O.; Pérez Vicente, C. J.; Díaz-Guilera, A.
2015-11-01
Complex network null models based on entropy maximization are becoming a powerful tool to characterize and analyze data from real systems. However, it is not easy to extract good and unbiased information from these models: A proper understanding of the nature of the underlying events represented in them is crucial. In this paper we emphasize this fact stressing how an accurate counting of configurations compatible with given constraints is fundamental to build good null models for the case of networks with integer-valued adjacency matrices constructed from an aggregation of one or multiple layers. We show how different assumptions about the elements from which the networks are built give rise to distinctively different statistics, even when considering the same observables to match those of real data. We illustrate our findings by applying the formalism to three data sets using an open-source software package accompanying the present work and demonstrate how such differences are clearly seen when measuring network observables.
An economic order quantity model with shortage and inflation
NASA Astrophysics Data System (ADS)
Wulan, Elis Ratna; Nurjaman, Wildan
2015-09-01
The effect of inflation has become a persistent characteristic and more significant problem of many developing economies especially in the third world countries. While making effort to achieve optimal quantity of product to be produced or purchased using the simplest and on the shelf classical EOQ model, the non-inclusion of conflicting economic realities as shortage and inflation has rendered its result quite uneconomical and hence the purpose for this study. Mathematical expression was developed for each of the cost components the sum of which become the total inventory model over the period (0,L) ((TIC(0,L)). L is planning horizon and TIC(0,L) is total inventory cost over a period of (0,L). Significant savings with increase in quantity was achieved based on deference in the varying price regime. With the assumptions considered and subject to the availability of reliable inventory cost element, the developed model is found to produce a feasible, and economic inventory stock-level with the numerical example of a material supply of a manufacturing company.
Modeling of a self-healing process in blast furnace slag cement exposed to accelerated carbonation
NASA Astrophysics Data System (ADS)
Zemskov, Serguey V.; Ahmad, Bilal; Copuroglu, Oguzhan; Vermolen, Fred J.
2013-02-01
In the current research, a mathematical model for the post-damage improvement of the carbonated blast furnace slag cement (BFSC) exposed to accelerated carbonation is constructed. The study is embedded within the framework of investigating the effect of using lightweight expanded clay aggregate, which is incorporated into the impregnation of the sodium mono-fluorophosphate (Na-MFP) solution. The model of the self-healing process is built under the assumption that the position of the carbonation front changes in time where the rate of diffusion of Na-MFP into the carbonated cement matrix and the reaction rates of the free phosphate and fluorophosphate with the components of the cement are comparable to the speed of the carbonation front under accelerated carbonation conditions. The model is based on an initial-boundary value problem for a system of partial differential equations which is solved using a Galerkin finite element method. The results obtained are discussed and generalized to a three-dimensional case.
Effects of stiffness and volume on the transit time of an erythrocyte through a slit.
Salehyar, Sara; Zhu, Qiang
2017-06-01
By using a fully coupled fluid-cell interaction model, we numerically simulate the dynamic process of a red blood cell passing through a slit driven by an incoming flow. The model is achieved by combining a multiscale model of the composite cell membrane with a boundary element fluid dynamics model based on the Stokes flow assumption. Our concentration is on the correlation between the transit time (the time it takes to finish the whole translocation process) and different conditions (flow speed, cell orientation, cell stiffness, cell volume, etc.) that are involved. According to the numerical prediction (with some exceptions), the transit time rises as the cell is stiffened. It is also highly sensitive to volume increase inside the cell. In general, even slightly swollen cells (i.e., the internal volume is increased while the surface area of the cell kept unchanged) travel dramatically slower through the slit. For these cells, there is also an increased chance of blockage.
A Theoretically Consistent Framework for Modelling Lagrangian Particle Deposition in Plant Canopies
NASA Astrophysics Data System (ADS)
Bailey, Brian N.; Stoll, Rob; Pardyjak, Eric R.
2018-06-01
We present a theoretically consistent framework for modelling Lagrangian particle deposition in plant canopies. The primary focus is on describing the probability of particles encountering canopy elements (i.e., potential deposition), and provides a consistent means for including the effects of imperfect deposition through any appropriate sub-model for deposition efficiency. Some aspects of the framework draw upon an analogy to radiation propagation through a turbid medium with which to develop model theory. The present method is compared against one of the most commonly used heuristic Lagrangian frameworks, namely that originally developed by Legg and Powell (Agricultural Meteorology, 1979, Vol. 20, 47-67), which is shown to be theoretically inconsistent. A recommendation is made to discontinue the use of this heuristic approach in favour of the theoretically consistent framework developed herein, which is no more difficult to apply under equivalent assumptions. The proposed framework has the additional advantage that it can be applied to arbitrary canopy geometries given readily measurable parameters describing vegetation structure.
Neural models on temperature regulation for cold-stressed animals
NASA Technical Reports Server (NTRS)
Horowitz, J. M.
1975-01-01
The present review evaluates several assumptions common to a variety of current models for thermoregulation in cold-stressed animals. Three areas covered by the models are discussed: signals to and from the central nervous system (CNS), portions of the CNS involved, and the arrangement of neurons within networks. Assumptions in each of these categories are considered. The evaluation of the models is based on the experimental foundations of the assumptions. Regions of the nervous system concerned here include the hypothalamus, the skin, the spinal cord, the hippocampus, and the septal area of the brain.
Lower extremity finite element model for crash simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schauer, D.A.; Perfect, S.A.
1996-03-01
A lower extremity model has been developed to study occupant injury mechanisms of the major bones and ligamentous soft tissues resulting from vehicle collisions. The model is based on anatomically correct digitized bone surfaces of the pelvis, femur, patella and the tibia. Many muscles, tendons and ligaments were incrementally added to the basic bone model. We have simulated two types of occupant loading that occur in a crash environment using a non-linear large deformation finite element code. The modeling approach assumed that the leg was passive during its response to the excitation, that is, no active muscular contraction and thereforemore » no active change in limb stiffness. The approach recognized that the most important contributions of the muscles to the lower extremity response are their ability to define and modify the impedance of the limb. When nonlinear material behavior in a component of the leg model was deemed important to response, a nonlinear constitutive model was incorporated. The accuracy of these assumptions can be verified only through a review of analysis results and careful comparison with test data. As currently defined, the model meets the objective for which it was created. Much work remains to be done, both from modeling and analysis perspectives, before the model can be considered complete. The model implements a modeling philosophy that can accurately capture both kinematic and kinetic response of the lower limb. We have demonstrated that the lower extremity model is a valuable tool for understanding the injury processes and mechanisms. We are now in a position to extend the computer simulation to investigate the clinical fracture patterns observed in actual crashes. Additional experience with this model will enable us to make a statement on what measures are needed to significantly reduce lower extremity injuries in vehicle crashes. 6 refs.« less
NASA Astrophysics Data System (ADS)
Dumitrache, P.; Goanţă, A. M.
2017-08-01
The ability of the cabins to insure the operator protection in the case of the shock loading that appears at the roll-over of the machine or when the cab is struck by the falling objects, it’s one of the most important performance criterions that it must comply by the machines and the mobile equipments. The experimental method provides the most accurate information on the behaviour of protective structures, but generates high costs due to experimental installations and structures which may be compromised during the experiments. In these circumstances, numerical simulation of the actual problem (mechanical shock applied to a strength structure) is a perfectly viable alternative, given that the hardware and software current performances provides the necessary support to obtain results with an acceptable level of accuracy. In this context, the paper proposes using FEA platforms for virtual testing of the actual strength structures of the cabins using their finite element models based on 3D models generated in CAD environments. In addition to the economic advantage above mentioned, although the results obtained by simulation using the finite element method are affected by a number of simplifying assumptions, the adequate modelling of the phenomenon can be a successful support in the design process of structures to meet safety performance criteria imposed by current standards. In the first section of the paper is presented the general context of the security performance requirements imposed by current standards on the cabins strength structures. The following section of the paper is dedicated to the peculiarities of finite element modelling in problems that impose simulation of the behaviour of structures subjected to shock loading. The final section of the paper is dedicated to a case study and to the future objectives.
Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...
2017-11-08
Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basismore » of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ye, Xin; Garikapati, Venu M.; You, Daehyun
Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basismore » of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.« less
An evaluation of complementary relationship assumptions
NASA Astrophysics Data System (ADS)
Pettijohn, J. C.; Salvucci, G. D.
2004-12-01
Complementary relationship (CR) models, based on Bouchet's (1963) somewhat heuristic CR hypothesis, are advantageous in their sole reliance on readily available climatological data. While Bouchet's CR hypothesis requires a number of questionable assumptions, CR models have been evaluated on variable time and length scales with relative success. Bouchet's hypothesis is grounded on the assumption that a change in potential evapotranspiration (Ep}) is equal and opposite in sign to a change in actual evapotranspiration (Ea), i.e., -dEp / dEa = 1. In his mathematical rationalization of the CR, Morton (1965) similarly assumes that a change in potential sensible heat flux (Hp) is equal and opposite in sign to a change in actual sensible heat flux (Ha), i.e., -dHp / dHa = 1. CR models have maintained these assumptions while focusing on defining Ep and equilibrium evapotranspiration (Epo). We question Bouchet and Morton's aforementioned assumptions by revisiting CR derivation in light of a proposed variable, φ = -dEp/dEa. We evaluate φ in a simplified Monin Obukhov surface similarity framework and demonstrate how previous error in the application of CR models may be explained in part by previous assumptions that φ =1. Finally, we discuss the various time and length scales to which φ may be evaluated.
Evaluation of agile designs in first-in-human (FIH) trials--a simulation study.
Perlstein, Itay; Bolognese, James A; Krishna, Rajesh; Wagner, John A
2009-12-01
The aim of the investigation was to evaluate alternatives to standard first-in-human (FIH) designs in order to optimize the information gained from such studies by employing novel agile trial designs. Agile designs combine adaptive and flexible elements to enable optimized use of prior information either before and/or during conduct of the study to seamlessly update the study design. A comparison of the traditional 6 + 2 (active + placebo) subjects per cohort design with alternative, reduced sample size, agile designs was performed by using discrete event simulation. Agile designs were evaluated for specific adverse event models and rates as well as dose-proportional, saturated, and steep-accumulation pharmacokinetic profiles. Alternative, reduced sample size (hereafter referred to as agile) designs are proposed for cases where prior knowledge about pharmacokinetics and/or adverse event relationships are available or appropriately assumed. Additionally, preferred alternatives are proposed for a general case when prior knowledge is limited or unavailable. Within the tested conditions and stated assumptions, some agile designs were found to be as efficient as traditional designs. Thus, simulations demonstrated that the agile design is a robust and feasible approach to FIH clinical trials, with no meaningful loss of relevant information, as it relates to PK and AE assumptions. In some circumstances, applying agile designs may decrease the duration and resources required for Phase I studies, increasing the efficiency of early clinical development. We highlight the value and importance of useful prior information when specifying key assumptions related to safety, tolerability, and PK.
NASA Astrophysics Data System (ADS)
Bennett, D. L.; Brene, N.; Nielsen, H. B.
1987-01-01
The goal of random dynamics is the derivation of the laws of Nature as we know them (standard model) from inessential assumptions. The inessential assumptions made here are expressed as sets of general models at extremely high energies: gauge glass and spacetime foam. Both sets of models lead tentatively to the standard model.
ADAPTION OF NONSTANDARD PIPING COMPONENTS INTO PRESENT DAY SEISMIC CODES
DOE Office of Scientific and Technical Information (OSTI.GOV)
D. T. Clark; M. J. Russell; R. E. Spears
2009-07-01
With spiraling energy demand and flat energy supply, there is a need to extend the life of older nuclear reactors. This sometimes requires that existing systems be evaluated to present day seismic codes. Older reactors built in the 1960s and early 1970s often used fabricated piping components that were code compliant during their initial construction time period, but are outside the standard parameters of present-day piping codes. There are several approaches available to the analyst in evaluating these non-standard components to modern codes. The simplest approach is to use the flexibility factors and stress indices for similar standard components withmore » the assumption that the non-standard component’s flexibility factors and stress indices will be very similar. This approach can require significant engineering judgment. A more rational approach available in Section III of the ASME Boiler and Pressure Vessel Code, which is the subject of this paper, involves calculation of flexibility factors using finite element analysis of the non-standard component. Such analysis allows modeling of geometric and material nonlinearities. Flexibility factors based on these analyses are sensitive to the load magnitudes used in their calculation, load magnitudes that need to be consistent with those produced by the linear system analyses where the flexibility factors are applied. This can lead to iteration, since the magnitude of the loads produced by the linear system analysis depend on the magnitude of the flexibility factors. After the loading applied to the nonstandard component finite element model has been matched to loads produced by the associated linear system model, the component finite element model can then be used to evaluate the performance of the component under the loads with the nonlinear analysis provisions of the Code, should the load levels lead to calculated stresses in excess of Allowable stresses. This paper details the application of component-level finite element modeling to account for geometric and material nonlinear component behavior in a linear elastic piping system model. Note that this technique can be applied to the analysis of B31 piping systems.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freeman, John
A measurement of the top quark mass in tmore » $$\\bar{t}$$ → l + jets candidate events, obtained from p$$\\bar{p}$$ collisions at √s = 1.96 TeV at the Fermilab Tevatron using the CDF II detector, is presented. The measurement approach is that of a matrix element method. For each candidate event, a two dimensional likelihood is calculated in the top pole mass and a constant scale factor, 'JES', where JES multiplies the input particle jet momenta and is designed to account for the systematic uncertainty of the jet momentum reconstruction. As with all matrix element techniques, the method involves an integration using the Standard Model matrix element for t$$\\bar{t}$$ production and decay. However, the technique presented is unique in that the matrix element is modified to compensate for kinematic assumptions which are made to reduce computation time. Background events are dealt with through use of an event observable which distinguishes signal from background, as well as through a cut on the value of an event's maximum likelihood. Results are based on a 955 pb -1 data sample, using events with a high-p T lepton and exactly four high-energy jets, at least one of which is tagged as coming from a b quark; 149 events pass all the selection requirements. They find M meas = 169.8 ± 2.3(stat.) ± 1.4(syst.) GeV/c 2.« less
AgMIP 1.5°C Assessment: Mitigation and Adaptation at Coordinated Global and Regional Scales
NASA Astrophysics Data System (ADS)
Rosenzweig, C.
2016-12-01
The AgMIP 1.5°C Coordinated Global and Regional Integrated Assessments of Climate Change and Food Security (AgMIP 1.5 CGRA) is linking site-based crop and livestock models with similar models run on global grids, and then links these biophysical components with economics models and nutrition metrics at regional and global scales. The AgMIP 1.5 CGRA assessment brings together experts in climate, crop, livestock, economics, nutrition, and food security to define the 1.5°C Protocols and guide the process throughout the assessment. Scenarios are designed to consistently combine elements of intertwined storylines of future society including socioeconomic development (Shared Socioeconomic Pathways), greenhouse gas concentrations (Representative Concentration Pathways), and specific pathways of agricultural sector development (Representative Agricultural Pathways). Shared Climate Policy Assumptions will be extended to provide additional agricultural detail on mitigation and adaptation strategies. The multi-model, multi-disciplinary, multi-scale integrated assessment framework is using scenarios of economic development, adaptation, mitigation, food policy, and food security. These coordinated assessments are grounded in the expertise of AgMIP partners around the world, leading to more consistent results and messages for stakeholders, policymakers, and the scientific community. The early inclusion of nutrition and food security experts has helped to ensure that assessment outputs include important metrics upon which investment and policy decisions may be based. The CGRA builds upon existing AgMIP research groups (e.g., the AgMIP Wheat Team and the AgMIP Global Gridded Crop Modeling Initiative; GGCMI) and regional programs (e.g., AgMIP Regional Teams in Sub-Saharan Africa and South Asia), with new protocols for cross-scale and cross-disciplinary linkages to ensure the propagation of expert judgment and consistent assumptions.
NASA Astrophysics Data System (ADS)
Kuhn, Matthew R.; Daouadji, Ali
2018-05-01
The paper addresses a common assumption of elastoplastic modeling: that the recoverable, elastic strain increment is unaffected by alterations of the elastic moduli that accompany loading. This assumption is found to be false for a granular material, and discrete element (DEM) simulations demonstrate that granular materials are coupled materials at both micro- and macro-scales. Elasto-plastic coupling at the macro-scale is placed in the context of thermomechanics framework of Tomasz Hueckel and Hans Ziegler, in which the elastic moduli are altered by irreversible processes during loading. This complex behavior is explored for multi-directional loading probes that follow an initial monotonic loading. An advanced DEM model is used in the study, with non-convex non-spherical particles and two different contact models: a conventional linear-frictional model and an exact implementation of the Hertz-like Cattaneo-Mindlin model. Orthotropic true-triaxial probes were used in the study (i.e., no direct shear strain), with tiny strain increments of 2 ×10-6 . At the micro-scale, contact movements were monitored during small increments of loading and load-reversal, and results show that these movements are not reversed by a reversal of strain direction, and some contacts that were sliding during a loading increment continue to slide during reversal. The probes show that the coupled part of a strain increment, the difference between the recoverable (elastic) increment and its reversible part, must be considered when partitioning strain increments into elastic and plastic parts. Small increments of irreversible (and plastic) strain and contact slipping and frictional dissipation occur for all directions of loading, and an elastic domain, if it exists at all, is smaller than the strain increment used in the simulations.
Modeling of Heat Transfer in Rooms in the Modelica "Buildings" Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wetter, Michael; Zuo, Wangda; Nouidui, Thierry Stephane
This paper describes the implementation of the room heat transfer model in the free open-source Modelica \\Buildings" library. The model can be used as a single room or to compose a multizone building model. We discuss how the model is decomposed into submodels for the individual heat transfer phenomena. We also discuss the main physical assumptions. The room model can be parameterized to use different modeling assumptions, leading to linear or non-linear differential algebraic systems of equations. We present numerical experiments that show how these assumptions affect computing time and accuracy for selected cases of the ANSI/ASHRAE Standard 140- 2007more » envelop validation tests.« less
The importance of being equivalent: Newton's two models of one-body motion
NASA Astrophysics Data System (ADS)
Pourciau, Bruce
2004-05-01
As an undergraduate at Cambridge, Newton entered into his "Waste Book" an assumption that we have named the Equivalence Assumption (The Younger): "If a body move progressively in some crooked line [about a center of motion] ..., [then this] crooked line may bee conceived to consist of an infinite number of streight lines. Or else in any point of the croked line the motion may bee conceived to be on in the tangent". In this assumption, Newton somewhat imprecisely describes two mathematical models, a "polygonal limit model" and a "tangent deflected model", for "one-body motion", that is, for the motion of a "body in orbit about a fixed center", and then claims that these two models are equivalent. In the first part of this paper, we study the Principia to determine how the elder Newton would more carefully describe the polygonal limit and tangent deflected models. From these more careful descriptions, we then create Equivalence Assumption (The Elder), a precise interpretation of Equivalence Assumption (The Younger) as it might have been restated by Newton, after say 1687. We then review certain portions of the Waste Book and the Principia to make the case that, although Newton never restates nor even alludes to the Equivalence Assumption after his youthful Waste Book entry, still the polygonal limit and tangent deflected models, as well as an unspoken belief in their equivalence, infuse Newton's work on orbital motion. In particular, we show that the persuasiveness of the argument for the Area Property in Proposition 1 of the Principia depends crucially on the validity of Equivalence Assumption (The Elder). After this case is made, we present the mathematical analysis required to establish the validity of the Equivalence Assumption (The Elder). Finally, to illustrate the fundamental nature of the resulting theorem, the Equivalence Theorem as we call it, we present three significant applications: we use the Equivalence Theorem first to clarify and resolve questions related to Leibniz's "polygonal model" of one-body motion; then to repair Newton's argument for the Area Property in Proposition 1; and finally to clarify and resolve questions related to the transition from impulsive to continuous forces in "De motu" and the Principia.
The Nonlinear Dynamic Response of an Elastic-Plastic Thin Plate under Impulsive Loading,
1987-06-11
Among those numerical methods, the finite element method is the most effective one. The method presented in this paper is an " influence function " numerical...computational time is much less than the finite element method. Its precision is higher also. II. Basic Assumption and the Influence Function of a Simple...calculation. Fig. 1 3 2. The Influence function of a Simple Supported Plate The motion differential equation of a thin plate can be written as DV’w+ _.eluq() (1
Principles of cost-benefit analysis for ERTS experiments, volumes 1 and 2
NASA Technical Reports Server (NTRS)
1973-01-01
The basic elements of a cost-benefit study are discussed along with special considerations for ERTS experiments. Elements required for a complete economic analysis of ERTS are considered to be: statement of objectives, specification of assumptions, enumeration of system alternatives, benefit analysis, cost analysis nonefficiency considerations, and final system selection. A hypothetical cost-benefit example is presented with the assumed objective of an increase in remote sensing surveys of grazing lands to better utilize available forage to lower meat prices.
ERIC Educational Resources Information Center
Yilmaz, Suha; Tekin-Dede, Ayse
2016-01-01
Mathematization competency is considered in the field as the focus of modelling process. Considering the various definitions, the components of the mathematization competency are determined as identifying assumptions, identifying variables based on the assumptions and constructing mathematical model/s based on the relations among identified…
Teach the Related Sciences for Graphic Arts
ERIC Educational Resources Information Center
Voss, Lawrence
1974-01-01
At Ferris State College, the chemistry and physics related to the particular skill area of the technically-oriented student are taught. The assumption is that a student will better understand the elements of his trade if his experience is broadened through relevant science information. (KP)
Finite Element Analysis of Magnetoelastic Plate Problems.
1981-08-01
deformation and in the incremental large deformation analysis, respectively. The classical Kirchhoff assumption of the undeformable normal to the midsurface is...current density , is constant across the thickness of the plate and is parallel to the midsurface of the plate; (2) the normal component of the
Building the Virtual Scriptorium
ERIC Educational Resources Information Center
Nikolova-Houston, Tatiana; Houston, Ron
2008-01-01
Manuscripts, archives, and early printed books contain a documentary record of the foundations of human knowledge. Many elements restrict access to this corpus, from preservation concerns to censorship. On the assumption that the widespread availability of knowledge benefits the human condition more than the restriction of knowledge, elements…
Ross, macdonald, and a theory for the dynamics and control of mosquito-transmitted pathogens.
Smith, David L; Battle, Katherine E; Hay, Simon I; Barker, Christopher M; Scott, Thomas W; McKenzie, F Ellis
2012-01-01
Ronald Ross and George Macdonald are credited with developing a mathematical model of mosquito-borne pathogen transmission. A systematic historical review suggests that several mathematicians and scientists contributed to development of the Ross-Macdonald model over a period of 70 years. Ross developed two different mathematical models, Macdonald a third, and various "Ross-Macdonald" mathematical models exist. Ross-Macdonald models are best defined by a consensus set of assumptions. The mathematical model is just one part of a theory for the dynamics and control of mosquito-transmitted pathogens that also includes epidemiological and entomological concepts and metrics for measuring transmission. All the basic elements of the theory had fallen into place by the end of the Global Malaria Eradication Programme (GMEP, 1955-1969) with the concept of vectorial capacity, methods for measuring key components of transmission by mosquitoes, and a quantitative theory of vector control. The Ross-Macdonald theory has since played a central role in development of research on mosquito-borne pathogen transmission and the development of strategies for mosquito-borne disease prevention.
Ross, Macdonald, and a Theory for the Dynamics and Control of Mosquito-Transmitted Pathogens
Smith, David L.; Battle, Katherine E.; Hay, Simon I.; Barker, Christopher M.; Scott, Thomas W.; McKenzie, F. Ellis
2012-01-01
Ronald Ross and George Macdonald are credited with developing a mathematical model of mosquito-borne pathogen transmission. A systematic historical review suggests that several mathematicians and scientists contributed to development of the Ross-Macdonald model over a period of 70 years. Ross developed two different mathematical models, Macdonald a third, and various “Ross-Macdonald” mathematical models exist. Ross-Macdonald models are best defined by a consensus set of assumptions. The mathematical model is just one part of a theory for the dynamics and control of mosquito-transmitted pathogens that also includes epidemiological and entomological concepts and metrics for measuring transmission. All the basic elements of the theory had fallen into place by the end of the Global Malaria Eradication Programme (GMEP, 1955–1969) with the concept of vectorial capacity, methods for measuring key components of transmission by mosquitoes, and a quantitative theory of vector control. The Ross-Macdonald theory has since played a central role in development of research on mosquito-borne pathogen transmission and the development of strategies for mosquito-borne disease prevention. PMID:22496640
A Taxonomy of Latent Structure Assumptions for Probability Matrix Decomposition Models.
ERIC Educational Resources Information Center
Meulders, Michel; De Boeck, Paul; Van Mechelen, Iven
2003-01-01
Proposed a taxonomy of latent structure assumptions for probability matrix decomposition (PMD) that includes the original PMD model and a three-way extension of the multiple classification latent class model. Simulation study results show the usefulness of the taxonomy. (SLD)
A complete graphical criterion for the adjustment formula in mediation analysis.
Shpitser, Ilya; VanderWeele, Tyler J
2011-03-04
Various assumptions have been used in the literature to identify natural direct and indirect effects in mediation analysis. These effects are of interest because they allow for effect decomposition of a total effect into a direct and indirect effect even in the presence of interactions or non-linear models. In this paper, we consider the relation and interpretation of various identification assumptions in terms of causal diagrams interpreted as a set of non-parametric structural equations. We show that for such causal diagrams, two sets of assumptions for identification that have been described in the literature are in fact equivalent in the sense that if either set of assumptions holds for all models inducing a particular causal diagram, then the other set of assumptions will also hold for all models inducing that diagram. We moreover build on prior work concerning a complete graphical identification criterion for covariate adjustment for total effects to provide a complete graphical criterion for using covariate adjustment to identify natural direct and indirect effects. Finally, we show that this criterion is equivalent to the two sets of independence assumptions used previously for mediation analysis.
NASA Technical Reports Server (NTRS)
Capobianco, Christopher J.; Jones, John H.; Drake, Michael J.
1993-01-01
Low-temperature metal-silicate partition coefficients are extrapolated to magma ocean temperatures. If the low-temperature chemistry data is found to be applicable at high temperatures, an important assumption, then the results indicate that high temperature alone cannot account for the excess siderophile element problem of the upper mantle. For most elements, a rise in temperature will result in a modest increase in siderophile behavior if an iron-wuestite redox buffer is paralleled. However, long-range extrapolation of experimental data is hazardous when the data contains even modest experimental errors. For a given element, extrapolated high-temperature partition coefficients can differ by orders of magnitude, even when data from independent studies is consistent within quoted errors. In order to accurately assess siderophile element behavior in a magma ocean, it will be necessary to obtain direct experimental measurements for at least some of the siderophile elements.
Determining informative priors for cognitive models.
Lee, Michael D; Vanpaemel, Wolf
2018-02-01
The development of cognitive models involves the creative scientific formalization of assumptions, based on theory, observation, and other relevant information. In the Bayesian approach to implementing, testing, and using cognitive models, assumptions can influence both the likelihood function of the model, usually corresponding to assumptions about psychological processes, and the prior distribution over model parameters, usually corresponding to assumptions about the psychological variables that influence those processes. The specification of the prior is unique to the Bayesian context, but often raises concerns that lead to the use of vague or non-informative priors in cognitive modeling. Sometimes the concerns stem from philosophical objections, but more often practical difficulties with how priors should be determined are the stumbling block. We survey several sources of information that can help to specify priors for cognitive models, discuss some of the methods by which this information can be formalized in a prior distribution, and identify a number of benefits of including informative priors in cognitive modeling. Our discussion is based on three illustrative cognitive models, involving memory retention, categorization, and decision making.
Nonlinear spike-and-slab sparse coding for interpretable image encoding.
Shelton, Jacquelyn A; Sheikh, Abdul-Saboor; Bornschein, Jörg; Sterne, Philip; Lücke, Jörg
2015-01-01
Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and nonlinear combination of components. With the prior, our model can easily represent exact zeros for e.g. the absence of an image component, such as an edge, and a distribution over non-zero pixel intensities. With the nonlinearity (the nonlinear max combination rule), the idea is to target occlusions; dictionary elements correspond to image components that can occlude each other. There are major consequences of the model assumptions made by both (non)linear approaches, thus the main goal of this paper is to isolate and highlight differences between them. Parameter optimization is analytically and computationally intractable in our model, thus as a main contribution we design an exact Gibbs sampler for efficient inference which we can apply to higher dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components with any level of sparsity. This suggests that our model can adaptively well-approximate and characterize the meaningful generation process.
Nonlinear Spike-And-Slab Sparse Coding for Interpretable Image Encoding
Shelton, Jacquelyn A.; Sheikh, Abdul-Saboor; Bornschein, Jörg; Sterne, Philip; Lücke, Jörg
2015-01-01
Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and nonlinear combination of components. With the prior, our model can easily represent exact zeros for e.g. the absence of an image component, such as an edge, and a distribution over non-zero pixel intensities. With the nonlinearity (the nonlinear max combination rule), the idea is to target occlusions; dictionary elements correspond to image components that can occlude each other. There are major consequences of the model assumptions made by both (non)linear approaches, thus the main goal of this paper is to isolate and highlight differences between them. Parameter optimization is analytically and computationally intractable in our model, thus as a main contribution we design an exact Gibbs sampler for efficient inference which we can apply to higher dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components with any level of sparsity. This suggests that our model can adaptively well-approximate and characterize the meaningful generation process. PMID:25954947
NASA Astrophysics Data System (ADS)
Hussein, M. F. M.; François, S.; Schevenels, M.; Hunt, H. E. M.; Talbot, J. P.; Degrande, G.
2014-12-01
This paper presents an extension of the Pipe-in-Pipe (PiP) model for calculating vibrations from underground railways that allows for the incorporation of a multi-layered half-space geometry. The model is based on the assumption that the tunnel displacement is not influenced by the existence of a free surface or ground layers. The displacement at the tunnel-soil interface is calculated using a model of a tunnel embedded in a full space with soil properties corresponding to the soil in contact with the tunnel. Next, a full space model is used to determine the equivalent loads that produce the same displacements at the tunnel-soil interface. The soil displacements are calculated by multiplying these equivalent loads by Green's functions for a layered half-space. The results and the computation time of the proposed model are compared with those of an alternative coupled finite element-boundary element model that accounts for a tunnel embedded in a multi-layered half-space. While the overall response of the multi-layered half-space is well predicted, spatial shifts in the interference patterns are observed that result from the superposition of direct waves and waves reflected on the free surface and layer interfaces. The proposed model is much faster and can be run on a personal computer with much less use of memory. Therefore, it is a promising design tool to predict vibration from underground tunnels and to assess the performance of vibration countermeasures in an early design stage.
NASA Astrophysics Data System (ADS)
Romanowicz, B. A.; Jiménez-Pérez, H.; Adourian, S.; Karaoglu, H.; French, S.
2016-12-01
Existing global 3D shear wave velocity models of the earth's mantle generally rely on simple ray theoretical assumptions regarding seismic wave propagation through a heterogeneous medium, and/or consider a limited number of seismic observables, such as surface wave dispersion and/or travel times of body waves (such as P or S) that are well separated on seismograms. While these assumptions are appropriate for resolving long wavelength structure, as evidenced from the good agreement at low degrees between models published in the last 10 years, it is well established that the assumption of ray theory limits the resolution of smaller scale low velocity structures. We recently developed a global radially anisotropic shear wave velocity model (SEMUCB_WM1, French and Romanowicz, 2014, 2015) based on time domain full waveform inversion of 3-component seismograms, including surface waves and overtones down to 60s period, as well as body waveforms down to 30s. At each iteration, the forward wavefield is calculated using the Spectral Element Method (SEM), which ensures the accurate computation of the misfit function. Inversion is performed using a fast converging Gauss-Newton formalism. The use of information from the entire seismogram, weighted according to energy arrivals, provides a unique illumination of the deep mantle, compensating for the uneven distribution of sources and stations. The most striking features of this model are the broad, vertically oriented plume-like conduits that extend from the core-mantle boundary to at least 1000 km depth in the vicinity of some 20 major hotspots located over the large low shear velocity provinces under the Pacific and Africa. We here present the results of various tests aimed at evaluating the robustness of these features. These include starting from a different initial model, to evaluate the effects of non-linearity in the inversion, as well as synthetic tests aimed at evaluating the recovery of plumes located in the middle of the Pacific ocean. We argue that the plumes can be better resolved than in models developed using classical approaches, due to the particular combination of theory and dataset. We discuss the geodynamical consequences of their attributes, which contrast with those of purely thermal plumes in a medium with simple temperature and pressure dependent rheology.
Marom, Gil; Bluestein, Danny
2016-01-01
This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, stochastic walk model, and simplified trajectory calculations with pathlines. Post processing implementation options that were evaluated included single passage and repeated passages stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful considerations should be taken in the use of Lagrangian models. Ultimately, the appropriate assumptions should be considered based the physics of the specific case and sensitivity analysis, similar to the ones presented here, should be employed.
147Sm-143Nd systematics of Earth are inconsistent with a superchondritic Sm/Nd ratio
Huang, Shichun; Jacobsen, Stein B.; Mukhopadhyay, Sujoy
2013-01-01
The relationship between the compositions of the Earth and chondritic meteorites is at the center of many important debates. A basic assumption in most models for the Earth’s composition is that the refractory elements are present in chondritic proportions relative to each other. This assumption is now challenged by recent 142Nd/144Nd ratio studies suggesting that the bulk silicate Earth (BSE) might have an Sm/Nd ratio 6% higher than chondrites (i.e., the BSE is superchondritic). This has led to the proposal that the present-day 143Nd/144Nd ratio of BSE is similar to that of some deep mantle plumes rather than chondrites. Our reexamination of the long-lived 147Sm-143Nd isotope systematics of the depleted mantle and the continental crust shows that the BSE, reconstructed using the depleted mantle and continental crust, has 143Nd/144Nd and Sm/Nd ratios close to chondritic values. The small difference in the ratio of 142Nd/144Nd between ordinary chondrites and the Earth must be due to a process different from mantle-crust differentiation, such as incomplete mixing of distinct nucleosynthetic components in the solar nebula. PMID:23479630
Cornejo-Donoso, Jorge; Einarsson, Baldvin; Birnir, Bjorn; Gaines, Steven D
2017-01-01
Marine Protected Areas (MPA) are important management tools shown to protect marine organisms, restore biomass, and increase fisheries yields. While MPAs have been successful in meeting these goals for many relatively sedentary species, highly mobile organisms may get few benefits from this type of spatial protection due to their frequent movement outside the protected area. The use of a large MPA can compensate for extensive movement, but testing this empirically is challenging, as it requires both large areas and sufficient time series to draw conclusions. To overcome this limitation, MPA models have been used to identify designs and predict potential outcomes, but these simulations are highly sensitive to the assumptions describing the organism's movements. Due to recent improvements in computational simulations, it is now possible to include very complex movement assumptions in MPA models (e.g. Individual Based Model). These have renewed interest in MPA simulations, which implicitly assume that increasing the detail in fish movement overcomes the sensitivity to the movement assumptions. Nevertheless, a systematic comparison of the designs and outcomes obtained under different movement assumptions has not been done. In this paper, we use an individual based model, interconnected to population and fishing fleet models, to explore the value of increasing the detail of the movement assumptions using four scenarios of increasing behavioral complexity: a) random, diffusive movement, b) aggregations, c) aggregations that respond to environmental forcing (e.g. sea surface temperature), and d) aggregations that respond to environmental forcing and are transported by currents. We then compare these models to determine how the assumptions affect MPA design, and therefore the effective protection of the stocks. Our results show that the optimal MPA size to maximize fisheries benefits increases as movement complexity increases from ~10% for the diffusive assumption to ~30% when full environment forcing was used. We also found that in cases of limited understanding of the movement dynamics of a species, simplified assumptions can be used to provide a guide for the minimum MPA size needed to effectively protect the stock. However, using oversimplified assumptions can produce suboptimal designs and lead to a density underestimation of ca. 30%; therefore, the main value of detailed movement dynamics is to provide more reliable MPA design and predicted outcomes. Large MPAs can be effective in recovering overfished stocks, protect pelagic fish and provide significant increases in fisheries yields. Our models provide a means to empirically test this spatial management tool, which theoretical evidence consistently suggests as an effective alternative to managing highly mobile pelagic stocks.
Latent degradation indicators estimation and prediction: A Monte Carlo approach
NASA Astrophysics Data System (ADS)
Zhou, Yifan; Sun, Yong; Mathew, Joseph; Wolff, Rodney; Ma, Lin
2011-01-01
Asset health inspections can produce two types of indicators: (1) direct indicators (e.g. the thickness of a brake pad, and the crack depth on a gear) which directly relate to a failure mechanism; and (2) indirect indicators (e.g. the indicators extracted from vibration signals and oil analysis data) which can only partially reveal a failure mechanism. While direct indicators enable more precise references to asset health condition, they are often more difficult to obtain than indirect indicators. The state space model provides an efficient approach to estimating direct indicators by using indirect indicators. However, existing state space models to estimate direct indicators largely depend on assumptions such as, discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires fixed inspection intervals. The discrete state assumption entails discretising continuous degradation indicators, which often introduces additional errors. The linear and Gaussian assumptions are not consistent with nonlinear and irreversible degradation processes in most engineering assets. This paper proposes a state space model without these assumptions. Monte Carlo-based algorithms are developed to estimate the model parameters and the remaining useful life. These algorithms are evaluated for performance using numerical simulations through MATLAB. The result shows that both the parameters and the remaining useful life are estimated accurately. Finally, the new state space model is used to process vibration and crack depth data from an accelerated test of a gearbox. During this application, the new state space model shows a better fitness result than the state space model with linear and Gaussian assumption.
Interpreting the macroscopic pointer by analysing the elements of reality of a Schrödinger cat
NASA Astrophysics Data System (ADS)
Reid, M. D.
2017-10-01
We examine Einstein-Podolsky-Rosen’s (EPR) steering nonlocality for two realisable Schrödinger cat-type states where a meso/macroscopic system (called the ‘cat’-system) is entangled with a microscopic spin-1/2 system. We follow EPR’s argument and derive the predictions for ‘elements of reality’ that would exist to describe the cat-system, under the assumption of EPR’s local realism. By showing that those predictions cannot be replicated by any local quantum state description of the cat-system, we demonstrate the EPR-steering of the cat-system. For large cat-systems, we find that a local hidden state model is near-satisfied, meaning that a local quantum state description exists (for the cat) whose predictions differ from those of the elements of reality by a vanishingly small amount. For such a local hidden state model, the EPR-steering of the cat vanishes, and the cat-system can be regarded as being in a mixture of ‘dead’ and ‘alive’ states despite it being entangled with the spin system. We therefore propose that a rigorous signature of the Schrödinger cat-type paradox is the EPR-steering of the cat-system and provide two experimental signatures. This leads to a hybrid quantum/classical interpretation of the macroscopic pointer of a measurement device and suggests that many Schrödinger cat-type paradoxes may be explained by microscopic nonlocality.
Experiments and Dynamic Finite Element Analysis of a Wire-Rope Rockfall Protective Fence
NASA Astrophysics Data System (ADS)
Tran, Phuc Van; Maegawa, Koji; Fukada, Saiji
2013-09-01
The imperative need to protect structures in mountainous areas against rockfall has led to the development of various protection methods. This study introduces a new type of rockfall protection fence made of posts, wire ropes, wire netting and energy absorbers. The performance of this rock fence was verified in both experiments and dynamic finite element analysis. In collision tests, a reinforced-concrete block rolled down a natural slope and struck the rock fence at the end of the slope. A specialized system of measuring instruments was employed to accurately measure the acceleration of the block without cable connection. In particular, the performance of two energy absorbers, which contribute also to preventing wire ropes from breaking, was investigated to determine the best energy absorber. In numerical simulation, a commercial finite element code having explicit dynamic capabilities was employed to create models of the two full-scale tests. To facilitate simulation, certain simplifying assumptions for mechanical data of each individual component of the rock fence and geometrical data of the model were adopted. Good agreement between numerical simulation and experimental data validated the numerical simulation. Furthermore, the results of numerical simulation helped highlight limitations of the testing method. The results of numerical simulation thus provide a deeper understanding of the structural behavior of individual components of the rock fence during rockfall impact. More importantly, numerical simulations can be used not only as supplements to or substitutes for full-scale tests but also in parametric study and design.
Evaluation of Wearable Simulation Interface for Military Training
2012-01-01
Lampton, 2005). This assumption was based on support from the identi- cal elements theory (Holding, 1965; Thorndike & Woodworth, 1901), which states that...Annual Meeting of the Human Fac- tors and Ergonomics Society (pp. 2014–2018). Santa Monica, CA: Human Factors and Ergonomics Society. Thorndike , E. L
Tree biology and dendrochemistry
Kevin T. Smith; Walter C. Shortle
1996-01-01
Dendrochemistry, the interpretation of elemental analysis of dated tree rings, can provide a temporal record of environmental change. Using the dendrochemical record requires an understanding of tree biology. In this review, we pose four questions concerning assumptions that underlie recent dendrochemical research: 1) Does the chemical composition of the wood directly...
Foam model of planetary formation
NASA Astrophysics Data System (ADS)
Andreev, Y.; Potashko, O.
The Analysis of 2637 terrestrial minerals shows presence of characteristic element and isotope structure for each ore irrespective of its site. The model of processes geo-nuclear syntheses elements is offered due to avalanche merge of nucleus which simply explains these laws. Main assumption: nucleus, atoms, connections, ores and minerals were formed in volume of the modern Earth at an early stage of its evolution from uniform proto-substance. Substantive provisions of the model: 1)The most part of nucleus of atoms of all chemical elements of the Earth's crust were formed on the mechanism of avalanche chain merge practically in one stage (in geological scales) in a course of correlated(in scales of a planet) process with allocation of a plenty of heat. 2) Atoms of chemical elements were generated during cooling a planet with preservation of a relative spatial arrangement of nucleus. 3) Chemical compounds have arisen at cooling a surface of a planet and were accompanied by reorganizations (hashing) macro- and geo-scale. 4) Mineral formations are consequence of correlated behaviour of chemical compounds on microscopic scales during phase transition from gaseous or liquid to a firm condition. 5) Synthesis of chemical elements in deep layers of the Earth occurs till now. "Foaming'' instead of "Big Bang" The physical space is continual gas-fluid environment consist of super fluid foam. The continuity, keeping and uniqueness of proto-substance are postulated. Scenario: primary singularity-> droplets(proto-galaxies) droplets(proto-stars)-> droplets(proto-planets)-> droplets(proto- satellites)-> droplets. Proto-planet substance->proton+electron as 1st generation disintegration result of primary foam. Nuclei or nucleonic crystals are the 2nd generation in result of cascade merge of protons into conglomerates. The theory has applied to the analysis of samples of native copper deposit from Rafalovka's ore deposit in Ukraine. The abundance of elements by use of the roentgen fluorescent microanalysis has been made. Changes of a parity of elements are described by nuclear synthesis reactions: 16O+47Ti, 23Na+40Ca, 24Mg+39K, 31P+32S-> 63Cu; 16O+49Ti, 23Na+42Ca, 26Mg+39K, 31P+34S-> 65Cu Dramatical change of isotope parities of 56Fe and 57Fe in the sites of space carried on 3 millimetres. The content of 57Fe is greater then 56Fe in Cu granule.
Austenite grain growth simulation considering the solute-drag effect and pinning effect.
Fujiyama, Naoto; Nishibata, Toshinobu; Seki, Akira; Hirata, Hiroyuki; Kojima, Kazuhiro; Ogawa, Kazuhiro
2017-01-01
The pinning effect is useful for restraining austenite grain growth in low alloy steel and improving heat affected zone toughness in welded joints. We propose a new calculation model for predicting austenite grain growth behavior. The model is mainly comprised of two theories: the solute-drag effect and the pinning effect of TiN precipitates. The calculation of the solute-drag effect is based on the hypothesis that the width of each austenite grain boundary is constant and that the element content maintains equilibrium segregation at the austenite grain boundaries. We used Hillert's law under the assumption that the austenite grain boundary phase is a liquid so that we could estimate the equilibrium solute concentration at the austenite grain boundaries. The equilibrium solute concentration was calculated using the Thermo-Calc software. Pinning effect was estimated by Nishizawa's equation. The calculated austenite grain growth at 1473-1673 K showed excellent correspondence with the experimental results.
Thermal vacuum chamber repressurization with instrument purging
NASA Astrophysics Data System (ADS)
Woronowicz, Michael S.
2016-09-01
At the conclusion of cryogenic vacuum testing of the James Webb Space Telescope Optical Telescope Element Integrated Science Instrument Module (JWST-OTIS) in NASA Johnson Space Center's (JSCs) thermal vacuum (TV) Chamber A, contamination control (CC) engineers are postulating that chamber particulate material stirred up by the repressurization process may be kept from falling into the Integrated Science Instrument Module (ISIM) interior to some degree by activating instrument purge flows over some initial period before opening the chamber valves. This manuscript describes development of a series of models designed to describe this process. The models are strung together in tandem with a fictitious set of conditions to estimate overpressure evolution from which net outflow velocity behavior may be obtained. Creeping flow assumptions are then used to determine the maximum particle size that may be kept suspended above the ISIM aperture, keeping smaller particles from settling within the instrument module.
Foot Modeling and Smart Plantar Pressure Reconstruction from Three Sensors
Ghaida, Hussein Abou; Mottet, Serge; Goujon, Jean-Marc
2014-01-01
In order to monitor pressure under feet, this study presents a biomechanical model of the human foot. The main elements of the foot that induce the plantar pressure distribution are described. Then the link between the forces applied at the ankle and the distribution of the plantar pressure is established. Assumptions are made by defining the concepts of a 3D internal foot shape, which can be extracted from the plantar pressure measurements, and a uniform elastic medium, which describes the soft tissues behaviour. In a second part, we show that just 3 discrete pressure sensors per foot are enough to generate real time plantar pressure cartographies in the standing position or during walking. Finally, the generated cartographies are compared with pressure cartographies issued from the F-SCAN system. The results show 0.01 daN (2% of full scale) average error, in the standing position. PMID:25400713
NASA Astrophysics Data System (ADS)
Bakhshi Khaniki, Hossein; Rajasekaran, Sundaramoorthy
2018-05-01
This study develops a comprehensive investigation on mechanical behavior of non-uniform bi-directional functionally graded beam sensors in the framework of modified couple stress theory. Material variation is modelled through both length and thickness directions using power-law, sigmoid and exponential functions. Moreover, beam is assumed with linear, exponential and parabolic cross-section variation through the length using power-law and sigmoid varying functions. Using these assumptions, a general model for microbeams is presented and formulated by employing Hamilton’s principle. Governing equations are solved using a mixed finite element method with Lagrangian interpolation technique, Gaussian quadrature method and Wilson’s Lagrangian multiplier method. It is shown that by using bi-directional functionally graded materials in nonuniform microbeams, mechanical behavior of such structures could be affected noticeably and scale parameter has a significant effect in changing the rigidity of nonuniform bi-directional functionally graded beams.
Foot modeling and smart plantar pressure reconstruction from three sensors.
Ghaida, Hussein Abou; Mottet, Serge; Goujon, Jean-Marc
2014-01-01
In order to monitor pressure under feet, this study presents a biomechanical model of the human foot. The main elements of the foot that induce the plantar pressure distribution are described. Then the link between the forces applied at the ankle and the distribution of the plantar pressure is established. Assumptions are made by defining the concepts of a 3D internal foot shape, which can be extracted from the plantar pressure measurements, and a uniform elastic medium, which describes the soft tissues behaviour. In a second part, we show that just 3 discrete pressure sensors per foot are enough to generate real time plantar pressure cartographies in the standing position or during walking. Finally, the generated cartographies are compared with pressure cartographies issued from the F-SCAN system. The results show 0.01 daN (2% of full scale) average error, in the standing position.
Theory of resonant x-ray emission spectra in compounds with localized f electrons
NASA Astrophysics Data System (ADS)
Kolorenč, Jindřich
2018-05-01
I discuss a theoretical description of the resonant x-ray emission spectroscopy (RXES) that is based on the Anderson impurity model. The parameters entering the model are determined from material-specific LDA+DMFT calculations. The theory is applicable across the whole f series, not only in the limits of nearly empty (La, Ce) or nearly full (Yb) valence f shell. Its performance is illustrated on the pressure-enhanced intermediate valency of elemental praseodymium. The obtained results are compared to the usual interpretation of RXES, which assumes that the spectrum is a superposition of several signals, each corresponding to one configuration of the 4f shell. The present theory simplifies to such superposition only if nearly all effects of hybridization of the 4f shell with the surrounding states are neglected. Although the assumption of negligible hybridization sounds reasonable for lanthanides, the explicit calculations show that it substantially distorts the analysis of the RXES data.
Thermal Vacuum Chamber Repressurization with Instrument Purging
NASA Technical Reports Server (NTRS)
Woronowicz, Michael S.
2014-01-01
At the conclusion of cryogenic vacuum testing of the James Webb Space Telescope Optical Telescope Element Integrated Science Instrument Module (JWST-OTIS) in NASA Johnson Space Center’s (JSCs) thermal vacuum (TV) Chamber A, contamination control (CC) engineers are postulating that chamber particulate material stirred up by the repressurization process may be kept from falling into the Integrated Science Instrument Module (ISIM) interior to some degree by activating instrument purge flows over some initial period before opening the chamber valves. This manuscript describes development of a series of models designed to describe this process. The models are strung together in tandem with a fictitious set of conditions to estimate overpressure evolution from which net outflow velocity behavior may be obtained. Creeping flow assumptions are then used to determine the maximum particle size that may be kept suspended above the ISIM aperture, keeping smaller particles from settling within the instrument module.
Radial basis functions in mathematical modelling of flow boiling in minichannels
NASA Astrophysics Data System (ADS)
Hożejowska, Sylwia; Hożejowski, Leszek; Piasecka, Magdalena
The paper addresses heat transfer processes in flow boiling in a vertical minichannel of 1.7 mm depth with a smooth heated surface contacting fluid. The heated element for FC-72 flowing in a minichannel was a 0.45 mm thick plate made of Haynes-230 alloy. An infrared camera positioned opposite the central, axially symmetric part of the channel measured the plate temperature. K-type thermocouples and pressure converters were installed at the inlet and outlet of the minichannel. In the study radial basis functions were used to solve a problem concerning heat transfer in a heated plate supplied with the controlled direct current. According to the model assumptions, the problem is treated as twodimensional and governed by the Poisson equation. The aim of the study lies in determining the temperature field and the heat transfer coefficient. The results were verified by comparing them with those obtained by the Trefftz method.
Do Reuss and Voigt Bounds Really Bound in High-Pressure Rheology Experiments?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen,J.; Li, L.; Yu, T.
2006-01-01
Energy dispersive synchrotron x-ray diffraction is carried out to measure differential lattice strains in polycrystalline Fe{sub 2}SiO{sub 4} (fayalite) and MgO samples using a multi-element solid state detector during high-pressure deformation. The theory of elastic modeling with Reuss (iso-stress) and Voigt (iso-strain) bounds is used to evaluate the aggregate stress and weight parameter, {alpha} (0{le}{alpha}{le}1), of the two bounds. Results under the elastic assumption quantitatively demonstrate that a highly stressed sample in high-pressure experiments reasonably approximates to an iso-stress state. However, when the sample is plastically deformed, the Reuss and Voigt bounds are no longer valid ({alpha} becomes beyond 1).more » Instead, if plastic slip systems of the sample are known (e.g. in the case of MgO), the aggregate property can be modeled using a visco-plastic self-consistent theory.« less
Fixtureless nonrigid part inspection using depth cameras
NASA Astrophysics Data System (ADS)
Xiong, Hanwei; Xu, Jun; Xu, Chenxi; Pan, Ming
2016-10-01
In automobile industry, flexible thin shell parts are used to cover car body. Such parts could have a different shape in a free state than the design model due to dimensional variation, gravity loads and residual strains. Special inspection fixtures are generally indispensable for geometric inspection. Recently, some researchers have proposed fixtureless nonridged inspect methods using intrinsic geometry or virtual spring-mass system, based on some assumptions about deformation between Free State shape and nominal CAD shape. In this paper, we propose a new fixtureless method to inspect flexible parts with a depth camera, which is efficient and low computational complexity. Unlike traditional method, we gather two point cloud set of the manufactured part in two different states, and make correspondences between them and one of them to the CAD model. The manufacturing defects can be derived from the correspondences. Finite element method (FEM) disappears in our method. Experimental evaluation of the proposed method is presented.
1993-01-01
Assumptions .......................................................... 15 b. Modeling Productivity ...and a macroeconomic model of the U.S. economy, designed to provide long-range projections 3 consistent with trends in production technology, shifts in...investments in roads, bridges, sewer systems, etc. In addition to these modeling assumptions, we also have introduced productivity increases to reflect the
Impact of one-layer assumption on diffuse reflectance spectroscopy of skin
NASA Astrophysics Data System (ADS)
Hennessy, Ricky; Markey, Mia K.; Tunnell, James W.
2015-02-01
Diffuse reflectance spectroscopy (DRS) can be used to noninvasively measure skin properties. To extract skin properties from DRS spectra, you need a model that relates the reflectance to the tissue properties. Most models are based on the assumption that skin is homogenous. In reality, skin is composed of multiple layers, and the homogeneity assumption can lead to errors. In this study, we analyze the errors caused by the homogeneity assumption. This is accomplished by creating realistic skin spectra using a computational model, then extracting properties from those spectra using a one-layer model. The extracted parameters are then compared to the parameters used to create the modeled spectra. We used a wavelength range of 400 to 750 nm and a source detector separation of 250 μm. Our results show that use of a one-layer skin model causes underestimation of hemoglobin concentration [Hb] and melanin concentration [mel]. Additionally, the magnitude of the error is dependent on epidermal thickness. The one-layer assumption also causes [Hb] and [mel] to be correlated. Oxygen saturation is overestimated when it is below 50% and underestimated when it is above 50%. We also found that the vessel radius factor used to account for pigment packaging is correlated with epidermal thickness.
The carbon budget in the outer solar nebula
NASA Technical Reports Server (NTRS)
Simonelli, Damon P.; Pollack, James B.; Mckay, Christopher P.; Reynolds, Ray T.; Summers, Audrey L.
1989-01-01
The compositional contrast between the giant-planet satellites and the significantly rockier Pluto/Charon system is indicative of different formation mechanisms; cosmic abundance calculations, in conjunction with an assumption of the Pluto/Charon system's direct formation from solar nebula condensates, strongly suggest that most of the carbon in the outer solar nebula was in CO form, in keeping with both the inheritance from the dense molecular clouds in the interstellar medium, and/or the Lewis and Prinn (1980) kinetic-inhibition model of solar nebula chemistry. Laboratory studies of carbonaceous chondrites and Comet Halley flyby studies suggest that condensed organic material, rather than elemental carbon, is the most likely candidate for the small percentage of the carbon-bearing solid in the outer solar nebula.
Theoretical study of strength of elastic-plastic water-saturated interface under constrained shear
NASA Astrophysics Data System (ADS)
Dimaki, Andrey V.; Shilko, Evgeny V.; Psakhie, Sergey G.
2016-11-01
This paper presents a theoretical study of shear strength of an elastic-plastic water-filled interface between elastic permeable blocks under compression. The medium is described within the discrete element method. The relationship between the stress-strain state of the solid skeleton and pore pressure of a liquid is described in the framework of the Biot's model of poroelasticity. The simulation demonstrates that shear strength of an elastic-plastic interface depends non-linearly on the values of permeability and loading to a great extent. We have proposed an empirical relation that approximates the obtained results of the numerical simulation in assumption of the interplay of dilation of the material and mass transfer of the liquid.
Little White Lies: Interrogating the (Un)acceptability of Deception in the Context of Dementia.
Seaman, Aaron T; Stone, Anne M
2017-01-01
This metasynthesis surveyed extant literature on deception in the context of dementia and, based on specific inclusion criteria, included 14 articles from 12 research studies. By doing so, the authors accomplished three goals: (a) provided a systematic examination of the literature-to-date on deception in the context of dementia, (b) elucidated the assumptions that have guided this line of inquiry and articulated the way those shape the research findings, and (c) determined directions for future research. In particular, synthesizing across studies allowed the authors to develop a dynamic model comprised of three temporally linear elements-(a) motives, (b) modes, and (c) outcomes that describe how deception emerges communicatively through interaction in the context of dementia. © The Author(s) 2015.
Piezoelectric textured ceramics: Effective properties and application to ultrasonic transducers.
Levassort, Franck; Pham Thi, Mai; Hemery, Henry; Marechal, Pierre; Tran-Huu-Hue, Louis-Pascal; Lethiecq, Marc
2006-12-22
Piezoelectric textured ceramics obtained by homo-template grain growth (HTGG) were recently demonstrated. A simple model with several assumptions has been used to calculate effective parameters of these new materials. Different connectivities have been simulated to show that spatial arrangements between the considered phases have little influence on the effective parameters, even through the 3-0 connectivity delivers the highest electromechanical thickness factor. A transducer based on a textured ceramic sample has been fabricated and characterised to show the efficiency of these piezoelectric materials. Finally, in a single element transducer configuration, simulation shows an improvement of 2 dB sensitivity for a transducer made with textured ceramic in comparison with a similar transducer design based on standard soft PZT (at equivalent bandwidths).
Ferrofluids: Modeling, numerical analysis, and scientific computation
NASA Astrophysics Data System (ADS)
Tomas, Ignacio
This dissertation presents some developments in the Numerical Analysis of Partial Differential Equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the Micropolar model proposed by R.E. Rosensweig. The Micropolar Navier-Stokes Equations (MNSE) is a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much bigger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point of this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving onto the much more complex Rosensweig's model, we provide a definition (approximation) for the effective magnetizing field h, and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we setup the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for the Rosensweig's model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from Rosensweig's model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model. For a simplified version of this model and the corresponding numerical scheme we prove (in addition to stability) convergence and existence of solutions as by-product . Throughout this dissertation, we will provide numerical experiments, not only to validate mathematical results, but also to help the reader gain a qualitative understanding of the PDE models analyzed in this dissertation (the MNSE, the Rosenweig's model, and the Two-phase model). In addition, we also provide computational experiments to illustrate the potential of these simple models and their ability to capture basic phenomenological features of ferrofluids, such as the Rosensweig instability for the case of the two-phase model. In this respect, we highlight the incisive numerical experiments with the two-phase model illustrating the critical role of the demagnetizing field to reproduce physically realistic behavior of ferrofluids.
Experimental Methodology for Measuring Combustion and Injection-Coupled Responses
NASA Technical Reports Server (NTRS)
Cavitt, Ryan C.; Frederick, Robert A.; Bazarov, Vladimir G.
2006-01-01
A Russian scaling methodology for liquid rocket engines utilizing a single, full scale element is reviewed. The scaling methodology exploits the supercritical phase of the full scale propellants to simplify scaling requirements. Many assumptions are utilized in the derivation of the scaling criteria. A test apparatus design is presented to implement the Russian methodology and consequently verify the assumptions. This test apparatus will allow researchers to assess the usefulness of the scaling procedures and possibly enhance the methodology. A matrix of the apparatus capabilities for a RD-170 injector is also presented. Several methods to enhance the methodology have been generated through the design process.
Production of τ τ jj final states at the LHC and the TauSpinner algorithm: the spin-2 case
NASA Astrophysics Data System (ADS)
Bahmani, M.; Kalinowski, J.; Kotlarski, W.; Richter-Wąs, E.; Wąs, Z.
2018-01-01
The TauSpinner algorithm is a tool that allows one to modify the physics model of the Monte Carlo generated samples due to the changed assumptions of event production dynamics, but without the need of re-generating events. With the help of weights τ -lepton production or decay processes can be modified accordingly to a new physics model. In a recent paper a new version TauSpinner ver.2.0.0 has been presented which includes a provision for introducing non-standard states and couplings and study their effects in the vector-boson-fusion processes by exploiting the spin correlations of τ -lepton pair decay products in processes where final states include also two hard jets. In the present paper we document how this can be achieved taking as an example the non-standard spin-2 state that couples to Standard Model particles and tree-level matrix elements with complete helicity information included for the parton-parton scattering amplitudes into a τ -lepton pair and two outgoing partons. This implementation is prepared as the external (user-provided) routine for the TauSpinner algorithm. It exploits amplitudes generated by MadGraph5 and adapted to the TauSpinner algorithm format. Consistency tests of the implemented matrix elements, re-weighting algorithm and numerical results for observables sensitive to τ polarisation are presented.
Participative management in health care services.
Muller, M
1995-03-01
The need and demand for the highest-quality management of all health care delivery activities requires a participative management approach. The purpose with this article is to explore the process of participative management, to generate and describe a model for such management, focusing mainly on the process of participative management, and to formulate guidelines for operationalization of the procedure. An exploratory, descriptive and theory-generating research design is pursued. After a brief literature review, inductive reasoning is mainly employed to identify and define central concepts, followed by the formulation of a few applicable statements and guidelines. Participative management is viewed as a process of that constitutes the elements of dynamic interactive decision-making and problem-solving, shared governance, empowerment, organisational transformation, and dynamic communication within the health care organisation. The scientific method of assessment, planning, implementation and evaluation is utilised throughout the process of participative management. A continuum of interactive decision-making and problem-solving is described, the different role-players involved, as well as the levels of interactive decision-making and problem-solving. The most appropriate decision-making strategy should be employed in pro-active and reactive decision-making. Applicable principles and assumptions in each element of participative management is described. It is recommended that this proposed model for participative management be refined by means of a literature control, interactive dialogue with experts and a model case description or participative management, to ensure the trustworthiness of this research.
The U/Th production ratio and the age of the Milky Way from meteorites and Galactic halo stars.
Dauphas, Nicolas
2005-06-30
Some heavy elements (with atomic number A > 69) are produced by the 'rapid' (r)-process of nucleosynthesis, where lighter elements are bombarded with a massive flux of neutrons. Although this is characteristic of supernovae and neutron star mergers, uncertainties in where the r-process occurs persist because stellar models are too crude to allow precise quantification of this phenomenon. As a result, there are many uncertainties and assumptions in the models used to calculate the production ratios of actinides (like uranium-238 and thorium-232). Current estimates of the U/Th production ratio range from approximately 0.4 to 0.7. Here I show that the U/Th abundance ratio in meteorites can be used, in conjunction with observations of low-metallicity stars in the halo of the Milky Way, to determine the U/Th production ratio very precisely (0.57(+0.037)(-0.031). This value can be used in future studies to constrain the possible nuclear mass formulae used in r-process calculations, to help determine the source of Galactic cosmic rays, and to date circumstellar grains. I also estimate the age of the Milky Way (14.5(+2.8)(-2.2)Gyr in a way that is independent of the uncertainties associated with fluctuations in the microwave background or models of stellar evolution.
Finite Element modelling of deformation induced by interacting volcanic sources
NASA Astrophysics Data System (ADS)
Pascal, Karen; Neuberg, Jürgen; Rivalta, Eleonora
2010-05-01
The displacement field due to magma movements in the subsurface is commonly modelled using the solutions for a point source (Mogi, 1958), a finite spherical source (McTigue, 1987), or a dislocation source (Okada, 1992) embedded in a homogeneous elastic half-space. When the magmatic system comprises more than one source, the assumption of homogeneity in the half-space is violated and several sources are combined, their respective deformation field being summed. We have investigated the effects of neglecting the interaction between sources on the surface deformation field. To do so, we calculated the vertical and horizontal displacements for models with adjacent sources and we tested them against the solutions of corresponding numerical 3D finite element models. We implemented several models combining spherical pressure sources and dislocation sources, varying their relative position. Furthermore we considered the impact of topography, loading, and magma compressibility. To quantify the discrepancies and compare the various models, we calculated the difference between analytical and numerical maximum horizontal or vertical surface displacements.We will demonstrate that for certain conditions combining analytical sources can cause an error of up to 20%. References: McTigue, D. F. (1987), Elastic Stress and Deformation Near a Finite Spherical Magma Body: Resolution of the Point Source Paradox, J. Geophys. Res. 92, 12931-12940. Mogi, K. (1958), Relations between the eruptions of various volcanoes and the deformations of the ground surfaces around them, Bull Earthquake Res Inst, Univ Tokyo 36, 99-134. Okada, Y. (1992), Internal Deformation Due to Shear and Tensile Faults in a Half-Space, Bulletin of the Seismological Society of America 82(2), 1018-1040.
Nimmo, John R.
1991-01-01
Luckner et al. [1989] (hereinafter LVN) present a clear summary and generalization of popular formulations used for convenient representation of porous media fluid flow characteristics, including water content (θ) related to suction (h) and hydraulic conductivity (K) related to θ or h. One essential but problematic element in the LVN models is the concept of residual water content (θr; in LVN, θw,r). Most studies using θr determine its value as a fitted parameter and make the assumption that liquid flow processes are negligible at θ values less than θr. While the LVN paper contributes a valuable discussion of the nature of θr, it leaves several problems unresolved, including fundamental difficulties in associating a definite physical condition with θr, practical inadequacies of the models at low θ values, and difficulties in designating a main wetting curve.
End-of-life of starch-polyvinyl alcohol biopolymers.
Guo, M; Stuckey, D C; Murphy, R J
2013-01-01
This study presents a life cycle assessment (LCA) model comparing the waste management options for starch-polyvinyl alcohol (PVOH) biopolymers including landfill, anaerobic digestion (AD), industrial composting and home composting. The ranking of biological treatment routes for starch-PVOH biopolymer wastes depended on their chemical compositions. AD represents the optimum choice for starch-PVOH biopolymer containing N and S elements in global warming potential (GWP(100)), acidification and eutrophication but not on the remaining impact categories, where home composting was shown to be a better option due to its low energy and resource inputs. For those starch-PVOH biopolymers with zero N and S contents home composting delivered the best environmental performance amongst biological treatment routes in most impact categories (except for GWP(100)). The landfill scenario performed generally well due largely to the 100-year time horizon and efficient energy recovery system modeled but this good performance is highly sensitive to assumptions adopted in landfill model. Copyright © 2012 Elsevier Ltd. All rights reserved.
Diffuse-Interface Modelling of Flow in Porous Media
NASA Astrophysics Data System (ADS)
Addy, Doug; Pradas, Marc; Schmuck, Marcus; Kalliadasis, Serafim
2016-11-01
Multiphase flows are ubiquitous in a wide spectrum of scientific and engineering applications, and their computational modelling often poses many challenges associated with the presence of free boundaries and interfaces. Interfacial flows in porous media encounter additional challenges and complexities due to their inherently multiscale behaviour. Here we investigate the dynamics of interfaces in porous media using an effective convective Cahn-Hilliard (CH) equation recently developed in from a Stokes-CH equation for microscopic heterogeneous domains by means of a homogenization methodology, where the microscopic details are taken into account as effective tensor coefficients which are given by a Poisson equation. The equations are decoupled under appropriate assumptions and solved in series using a classic finite-element formulation with the open-source software FEniCS. We investigate the effects of different microscopic geometries, including periodic and non-periodic, at the bulk fluid flow, and find that our model is able to describe the effective macroscopic behaviour without the need to resolve the microscopic details.
Model of bidirectional reflectance distribution function for metallic materials
NASA Astrophysics Data System (ADS)
Wang, Kai; Zhu, Jing-Ping; Liu, Hong; Hou, Xun
2016-09-01
Based on the three-component assumption that the reflection is divided into specular reflection, directional diffuse reflection, and ideal diffuse reflection, a bidirectional reflectance distribution function (BRDF) model of metallic materials is presented. Compared with the two-component assumption that the reflection is composed of specular reflection and diffuse reflection, the three-component assumption divides the diffuse reflection into directional diffuse and ideal diffuse reflection. This model effectively resolves the problem that constant diffuse reflection leads to considerable error for metallic materials. Simulation and measurement results validate that this three-component BRDF model can improve the modeling accuracy significantly and describe the reflection properties in the hemisphere space precisely for the metallic materials.
Identification of differences in health impact modelling of salt reduction
Geleijnse, Johanna M.; van Raaij, Joop M. A.; Cappuccio, Francesco P.; Cobiac, Linda C.; Scarborough, Peter; Nusselder, Wilma J.; Jaccard, Abbygail; Boshuizen, Hendriek C.
2017-01-01
We examined whether specific input data and assumptions explain outcome differences in otherwise comparable health impact assessment models. Seven population health models estimating the impact of salt reduction on morbidity and mortality in western populations were compared on four sets of key features, their underlying assumptions and input data. Next, assumptions and input data were varied one by one in a default approach (the DYNAMO-HIA model) to examine how it influences the estimated health impact. Major differences in outcome were related to the size and shape of the dose-response relation between salt and blood pressure and blood pressure and disease. Modifying the effect sizes in the salt to health association resulted in the largest change in health impact estimates (33% lower), whereas other changes had less influence. Differences in health impact assessment model structure and input data may affect the health impact estimate. Therefore, clearly defined assumptions and transparent reporting for different models is crucial. However, the estimated impact of salt reduction was substantial in all of the models used, emphasizing the need for public health actions. PMID:29182636
Design Considerations for Large Computer Communication Networks,
1976-04-01
particular, we will discuss the last three assumptions in order to motivate some of the models to be considered in this chapter. Independence Assumption...channels. fg Part (a), again motivated by an earlier remark on deterministic routing, will become more accurate when we include in the model, based on fixed...hierarchical routing, then this assumption appears to be quite acceptable. Part (b) is motivated by the quite symmetrical structure of the networks considered
Questionable Validity of Poisson Assumptions in a Combined Loglinear/MDS Mapping Model.
ERIC Educational Resources Information Center
Gleason, John M.
1993-01-01
This response to an earlier article on a combined log-linear/MDS model for mapping journals by citation analysis discusses the underlying assumptions of the Poisson model with respect to characteristics of the citation process. The importance of empirical data analysis is also addressed. (nine references) (LRW)
Emotional Readiness and Music Therapeutic Activities
ERIC Educational Resources Information Center
Drossinou-Korea, Maria; Fragkouli, Aspasia
2016-01-01
The purpose of this study is to understand the children's expression with verbal and nonverbal communication in the Autistic spectrum. We study the emotional readiness and the music therapeutic activities which exploit the elements of music. The method followed focused on the research field of special needs education. Assumptions on the parameters…
Disrupting Charted Systems: Identifying and Deconstructing Critical Incidents in Teaching
ERIC Educational Resources Information Center
Shadiow, Linda K.
2010-01-01
Professional stories that live within multiple retellings throughout one's career can, when the teller analyzes them, be useful in unearthing influential pedagogical assumptions. The author retells a classroom story, examines unacknowledged fears rooted within the story's elements, and uses a five-point framework for analyzing related assumptions…
Finite element simulation of light transfer in turbid media under structured illumination
USDA-ARS?s Scientific Manuscript database
Spatial-frequency domain (SFD) imaging technique allows to estimate the optical properties of biological tissues in a wide field of view. The technique is, however, prone to error in measurement because the two crucial assumptions used for deriving the analytical solution to diffusion approximation ...
Other-Orientation in Nonnative Spanish and Its Effect on Direct Objects
ERIC Educational Resources Information Center
Peace, Meghann M.
2015-01-01
Other-orientation (Linell, 2009) is an essential element of language in that all speakers dialogue with an "other" when communicating. They take into consideration the other's assumed perspective, knowledge, and needs, and manipulate their language in response to these assumptions. This study investigated the extent to which…
EVALUATION OF HOST SPECIFIC PCR-BASED METHODS FOR THE IDENTIFICATION OF FECAL POLLUTION
Microbial Source Tracking (MST) is an approach to determine the origin of fecal pollution impacting a body of water. MST is based on the assumption that, given the appropriate method and indicator, the source of microbial pollution can be identified. One of the key elements of...
Marom, Gil; Bluestein, Danny
2016-01-01
Summary This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, stochastic walk model, and simplified trajectory calculations with pathlines. Post processing implementation options that were evaluated included single passage and repeated passages stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful considerations should be taken in the use of Lagrangian models. Ultimately, the appropriate assumptions should be considered based the physics of the specific case and sensitivity analysis, similar to the ones presented here, should be employed. PMID:26679833
ERIC Educational Resources Information Center
von Davier, Matthias
2018-01-01
This article critically reviews how diagnostic models have been conceptualized and how they compare to other approaches used in educational measurement. In particular, certain assumptions that have been taken for granted and used as defining characteristics of diagnostic models are reviewed and it is questioned whether these assumptions are the…
Data reduction of room tests for zone model validation
M. Janssens; H. C. Tran
1992-01-01
Compartment fire zone models are based on many simplifying assumptions, in particular that gases stratify in two distinct layers. Because of these assumptions, certain model output is in a form unsuitable for direct comparison to measurements made in full-scale room tests. The experimental data must first be reduced and transformed to be compatible with the model...
Schick, Robert S; Kraus, Scott D; Rolland, Rosalind M; Knowlton, Amy R; Hamilton, Philip K; Pettis, Heather M; Thomas, Len; Harwood, John; Clark, James S
2016-01-01
Right whales are vulnerable to many sources of anthropogenic disturbance including ship strikes, entanglement with fishing gear, and anthropogenic noise. The effect of these factors on individual health is unclear. A statistical model using photographic evidence of health was recently built to infer the true or hidden health of individual right whales. However, two important prior assumptions about the role of missing data and unexplained variance on the estimates were not previously assessed. Here we tested these factors by varying prior assumptions and model formulation. We found sensitivity to each assumption and used the output to make guidelines on future model formulation.
Model specification in oral health-related quality of life research.
Kieffer, Jacobien M; Verrips, Erik; Hoogstraten, Johan
2009-10-01
The aim of this study was to analyze conventional wisdom regarding the construction and analysis of oral health-related quality of life (OHRQoL) questionnaires and to outline statistical complications. Most methods used for developing and analyzing questionnaires, such as factor analysis and Cronbach's alpha, presume psychological constructs to be latent, inferring a reflective measurement model with the underlying assumption of local independence. Local independence implies that the latent variable explains why the variables observed are related. Many OHRQoL questionnaires are analyzed as if they were based on a reflective measurement model; local independence is thus assumed. This assumption requires these questionnaires to consist solely of items that reflect, instead of determine, OHRQoL. The tenability of this assumption is the main topic of the present study. It is argued that OHRQoL questionnaires are a mix of both a formative measurement model and a reflective measurement model, thus violating the assumption of local independence. The implications are discussed.
Models in biology: ‘accurate descriptions of our pathetic thinking’
2014-01-01
In this essay I will sketch some ideas for how to think about models in biology. I will begin by trying to dispel the myth that quantitative modeling is somehow foreign to biology. I will then point out the distinction between forward and reverse modeling and focus thereafter on the former. Instead of going into mathematical technicalities about different varieties of models, I will focus on their logical structure, in terms of assumptions and conclusions. A model is a logical machine for deducing the latter from the former. If the model is correct, then, if you believe its assumptions, you must, as a matter of logic, also believe its conclusions. This leads to consideration of the assumptions underlying models. If these are based on fundamental physical laws, then it may be reasonable to treat the model as ‘predictive’, in the sense that it is not subject to falsification and we can rely on its conclusions. However, at the molecular level, models are more often derived from phenomenology and guesswork. In this case, the model is a test of its assumptions and must be falsifiable. I will discuss three models from this perspective, each of which yields biological insights, and this will lead to some guidelines for prospective model builders. PMID:24886484
Roy's specific life values and the philosophical assumption of humanism.
Hanna, Debra R
2013-01-01
Roy's philosophical assumption of humanism, which is shaped by the veritivity assumption, is considered in terms of her specific life values and in contrast to the contemporary view of humanism. Like veritivity, Roy's philosophical assumption of humanism unites a theocentric focus with anthropological values. Roy's perspective enriches the mainly secular, anthropocentric assumption. In this manuscript, the basis for Roy's perspective of humanism will be discussed so that readers will be able to use the Roy adaptation model in an authentic manner.
Ice Particle Impact on Cloud Water Content Instrumentation
NASA Technical Reports Server (NTRS)
Emery, Edward F.; Miller, Dean R.; Plaskon, Stephen R.; Strapp, Walter; Lillie, Lyle
2004-01-01
Determining the total amount of water contained in an icing cloud necessitates the measurement of both the liquid droplets and ice particles. One commonly accepted method for measuring cloud water content utilizes a hot wire sensing element, which is maintained at a constant temperature. In this approach, the cloud water content is equated with the power required to keep the sense element at a constant temperature. This method inherently assumes that impinging cloud particles remain on the sensing element surface long enough to be evaporated. In the case of ice particles, this assumption requires that the particles do not bounce off the surface after impact. Recent tests aimed at characterizing ice particle impact on a thermally heated wing section, have raised questions about the validity of this assumption. Ice particles were observed to bounce off the heated wing section a very high percentage of the time. This result could have implications for Total Water Content sensors which are designed to capture ice particles, and thus do not account for bouncing or breakup of ice particles. Based on these results, a test was conducted to investigate ice particle impact on the sensing elements of the following hot-wire cloud water content probes: (1) Nevzorov Total Water Content (TWC)/Liquid Water Content (LWC) probe, (2) Science Engineering Associates TWC probe, and (3) Particle Measuring Systems King probe. Close-up video imaging was used to study ice particle impact on the sensing element of each probe. The measured water content from each probe was also determined for each cloud condition. This paper will present results from this investigation and attempt to evaluate the significance of ice particle impact on hot-wire cloud water content measurements.
Zhang, Zhihong; Tendulkar, Amod; Sun, Kay; Saloner, David A; Wallace, Arthur W; Ge, Liang; Guccione, Julius M; Ratcliffe, Mark B
2011-01-01
Both the Young-Laplace law and finite element (FE) based methods have been used to calculate left ventricular wall stress. We tested the hypothesis that the Young-Laplace law is able to reproduce results obtained with the FE method. Magnetic resonance imaging scans with noninvasive tags were used to calculate three-dimensional myocardial strain in 5 sheep 16 weeks after anteroapical myocardial infarction, and in 1 of those sheep 6 weeks after a Dor procedure. Animal-specific FE models were created from the remaining 5 animals using magnetic resonance images obtained at early diastolic filling. The FE-based stress in the fiber, cross-fiber, and circumferential directions was calculated and compared to stress calculated with the assumption that wall thickness is very much less than the radius of curvature (Young-Laplace law), and without that assumption (modified Laplace). First, circumferential stress calculated with the modified Laplace law is closer to results obtained with the FE method than stress calculated with the Young-Laplace law. However, there are pronounced regional differences, with the largest difference between modified Laplace and FE occurring in the inner and outer layers of the infarct borderzone. Also, stress calculated with the modified Laplace is very different than stress in the fiber and cross-fiber direction calculated with FE. As a consequence, the modified Laplace law is inaccurate when used to calculate the effect of the Dor procedure on regional ventricular stress. The FE method is necessary to determine stress in the left ventricle with postinfarct and surgical ventricular remodeling. Copyright © 2011 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
James, Mark Anthony
1999-01-01
A finite element program has been developed to perform quasi-static, elastic-plastic crack growth simulations. The model provides a general framework for mixed-mode I/II elastic-plastic fracture analysis using small strain assumptions and plane stress, plane strain, and axisymmetric finite elements. Cracks are modeled explicitly in the mesh. As the cracks propagate, automatic remeshing algorithms delete the mesh local to the crack tip, extend the crack, and build a new mesh around the new tip. State variable mapping algorithms transfer stresses and displacements from the old mesh to the new mesh. The von Mises material model is implemented in the context of a non-linear Newton solution scheme. The fracture criterion is the critical crack tip opening displacement, and crack direction is predicted by the maximum tensile stress criterion at the crack tip. The implementation can accommodate multiple curving and interacting cracks. An additional fracture algorithm based on nodal release can be used to simulate fracture along a horizontal plane of symmetry. A core of plane strain elements can be used with the nodal release algorithm to simulate the triaxial state of stress near the crack tip. Verification and validation studies compare analysis results with experimental data and published three-dimensional analysis results. Fracture predictions using nodal release for compact tension, middle-crack tension, and multi-site damage test specimens produced accurate results for residual strength and link-up loads. Curving crack predictions using remeshing/mapping were compared with experimental data for an Arcan mixed-mode specimen. Loading angles from 0 degrees to 90 degrees were analyzed. The maximum tensile stress criterion was able to predict the crack direction and path for all loading angles in which the material failed in tension. Residual strength was also accurately predicted for these cases.
A Comparison of Analytical and Experimental Data for a Magnetic Actuator
NASA Technical Reports Server (NTRS)
Groom, Nelson J.; Bloodgood, V. Dale, Jr.
2000-01-01
Theoretical and experimental force-displacement and force-current data are compared for two configurations of a simple horseshoe, or bipolar, magnetic actuator. One configuration utilizes permanent magnet wafers to provide a bias flux and the other configuration has no source of bias flux. The theoretical data are obtained from two analytical models of each configuration. One is an ideal analytical model which is developed under the following assumptions: (1) zero fringing and leakage flux, (2) zero actuator coil mmf loss, and (3) infinite permeability of the actuator core and suspended element flux return path. The other analytical model, called the extended model, is developed by adding loss and leakage factors to the ideal model. The values of the loss and leakage factors are calculated from experimental data. The experimental data are obtained from a magnetic actuator test fixture, which is described in detail. Results indicate that the ideal models for both configurations do not match the experimental data very well. However, except for the range around zero force, the extended models produce a good match. The best match is produced by the extended model of the configuration with permanent magnet flux bias.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bishop, Joseph E.; Emery, John M.; Battaile, Corbett C.
Two fundamental approximations in macroscale solid-mechanics modeling are (1) the assumption of scale separation in homogenization theory and (2) the use of a macroscopic plasticity material model that represents, in a mean sense, the multitude of inelastic processes occurring at the microscale. With the goal of quantifying the errors induced by these approximations on engineering quantities of interest, we perform a set of direct numerical simulations (DNS) in which polycrystalline microstructures are embedded throughout a macroscale structure. The largest simulations model over 50,000 grains. The microstructure is idealized using a randomly close-packed Voronoi tessellation in which each polyhedral Voronoi cellmore » represents a grain. An face centered cubic crystal-plasticity model is used to model the mechanical response of each grain. The overall grain structure is equiaxed, and each grain is randomly oriented with no overall texture. The detailed results from the DNS simulations are compared to results obtained from conventional macroscale simulations that use homogeneous isotropic plasticity models. The macroscale plasticity models are calibrated using a representative volume element of the idealized microstructure. Furthermore, we envision that DNS modeling will be used to gain new insights into the mechanics of material deformation and failure.« less
NASA Astrophysics Data System (ADS)
Salaris, M.; Cassisi, S.; Schiavon, R. P.; Pietrinferni, A.
2018-04-01
Red giants in the updated APOGEE-Kepler catalogue, with estimates of mass, chemical composition, surface gravity and effective temperature, have recently challenged stellar models computed under the standard assumption of solar calibrated mixing length. In this work, we critically reanalyse this sample of red giants, adopting our own stellar model calculations. Contrary to previous results, we find that the disagreement between the Teff scale of red giants and models with solar calibrated mixing length disappears when considering our models and the APOGEE-Kepler stars with scaled solar metal distribution. However, a discrepancy shows up when α-enhanced stars are included in the sample. We have found that assuming mass, chemical composition and effective temperature scale of the APOGEE-Kepler catalogue, stellar models generally underpredict the change of temperature of red giants caused by α-element enhancements at fixed [Fe/H]. A second important conclusion is that the choice of the outer boundary conditions employed in model calculations is critical. Effective temperature differences (metallicity dependent) between models with solar calibrated mixing length and observations appear for some choices of the boundary conditions, but this is not a general result.
Animal vocal sequences: not the Markov chains we thought they were
Kershenbaum, Arik; Bowles, Ann E.; Freeberg, Todd M.; Jin, Dezhe Z.; Lameira, Adriano R.; Bohn, Kirsten
2014-01-01
Many animals produce vocal sequences that appear complex. Most researchers assume that these sequences are well characterized as Markov chains (i.e. that the probability of a particular vocal element can be calculated from the history of only a finite number of preceding elements). However, this assumption has never been explicitly tested. Furthermore, it is unclear how language could evolve in a single step from a Markovian origin, as is frequently assumed, as no intermediate forms have been found between animal communication and human language. Here, we assess whether animal taxa produce vocal sequences that are better described by Markov chains, or by non-Markovian dynamics such as the ‘renewal process’ (RP), characterized by a strong tendency to repeat elements. We examined vocal sequences of seven taxa: Bengalese finches Lonchura striata domestica, Carolina chickadees Poecile carolinensis, free-tailed bats Tadarida brasiliensis, rock hyraxes Procavia capensis, pilot whales Globicephala macrorhynchus, killer whales Orcinus orca and orangutans Pongo spp. The vocal systems of most of these species are more consistent with a non-Markovian RP than with the Markovian models traditionally assumed. Our data suggest that non-Markovian vocal sequences may be more common than Markov sequences, which must be taken into account when evaluating alternative hypotheses for the evolution of signalling complexity, and perhaps human language origins. PMID:25143037
Particle precipitation: How the spectrum fit impacts atmospheric chemistry
NASA Astrophysics Data System (ADS)
Wissing, J. M.; Nieder, H.; Yakovchouk, O. S.; Sinnhuber, M.
2016-11-01
Particle precipitation causes atmospheric ionization. Modeled ionization rates are widely used in atmospheric chemistry/climate simulations of the upper atmosphere. As ionization rates are based on particle measurements some assumptions concerning the energy spectrum are required. While detectors measure particles binned into certain energy ranges only, the calculation of a ionization profile needs a fit for the whole energy spectrum. Therefore the following assumptions are needed: (a) fit function (e.g. power-law or Maxwellian), (b) energy range, (c) amount of segments in the spectral fit, (d) fixed or variable positions of intersections between these segments. The aim of this paper is to quantify the impact of different assumptions on ionization rates as well as their consequences for atmospheric chemistry modeling. As the assumptions about the particle spectrum are independent from the ionization model itself the results of this paper are not restricted to a single ionization model, even though the Atmospheric Ionization Module OSnabrück (AIMOS, Wissing and Kallenrode, 2009) is used here. We include protons only as this allows us to trace changes in the chemistry model directly back to the different assumptions without the need to interpret superposed ionization profiles. However, since every particle species requires a particle spectrum fit with the mentioned assumptions the results are generally applicable to all precipitating particles. The reader may argue that the selection of assumptions of the particle fit is of minor interest, but we would like to emphasize on this topic as it is a major, if not the main, source of discrepancies between different ionization models (and reality). Depending on the assumptions single ionization profiles may vary by a factor of 5, long-term calculations may show systematic over- or underestimation in specific altitudes and even for ideal setups the definition of the energy-range involves an intrinsic 25% uncertainty for the ionization rates. The effects on atmospheric chemistry (HOx, NOx and Ozone) have been calculated by 3dCTM, showing that the spectrum fit is responsible for a 8% variation in Ozone between setups, and even up to 50% for extreme setups.
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki; Bey, Kim S.; Hou, Gene J. W.
2004-01-01
A recent paper is generalized to a case where the spatial region is taken in R(sup 3). The region is assumed to be a thin body, such as a panel on the wing or fuselage of an aerospace vehicle. The traditional h- as well as hp-finite element methods are applied to the surface defined in the x - y variables, while, through the thickness, the technique of the p-element is employed. Time and spatial discretization scheme based upon an assumption of certain weak singularity of double vertical line u(sub t) double vertical line 2, is used to derive an optimal a priori error estimate for the current method.
Modeling of Stiffness and Strength of Bone at Nanoscale.
Abueidda, Diab W; Sabet, Fereshteh A; Jasiuk, Iwona M
2017-05-01
Two distinct geometrical models of bone at the nanoscale (collagen fibril and mineral platelets) are analyzed computationally. In the first model (model I), minerals are periodically distributed in a staggered manner in a collagen matrix while in the second model (model II), minerals form continuous layers outside the collagen fibril. Elastic modulus and strength of bone at the nanoscale, represented by these two models under longitudinal tensile loading, are studied using a finite element (FE) software abaqus. The analysis employs a traction-separation law (cohesive surface modeling) at various interfaces in the models to account for interfacial delaminations. Plane stress, plane strain, and axisymmetric versions of the two models are considered. Model II is found to have a higher stiffness than model I for all cases. For strength, the two models alternate the superiority of performance depending on the inputs and assumptions used. For model II, the axisymmetric case gives higher results than the plane stress and plane strain cases while an opposite trend is observed for model I. For axisymmetric case, model II shows greater strength and stiffness compared to model I. The collagen-mineral arrangement of bone at nanoscale forms a basic building block of bone. Thus, knowledge of its mechanical properties is of high scientific and clinical interests.
Contact problem for a composite material with nacre inspired microstructure
NASA Astrophysics Data System (ADS)
Berinskii, Igor; Ryvkin, Michael; Aboudi, Jacob
2017-12-01
Bi-material composites with nacre inspired brick and mortar microstructures, characterized by stiff elements of one phase with high aspect ratio separated by thin layers of the second one, are considered. Such microstructure is proved to provide an efficient solution for the problem of a crack arrest. However, contrary to the case of a homogeneous material, an external pressure, applied to a part of the composite boundary, can cause significant tensile stresses which increase the danger of crack nucleation. Investigation of the influence of microstructure parameters on the magnitude of tensile stresses is performed by means of the classical Flamant-like problem of an orthotropic half-plane subjected to a normal external distributed loading. Adequate analysis of this problem represents a serious computational task due to the geometry of the considered layout and the high contrast between the composite constituents. This difficulty is presently circumvented by deriving a micro-to-macro analysis in the framework of which an analytical solution of the auxiliary elasticity problem, followed by the discrete Fourier transform and the higher-order theory are employed. As a result, full scale continuum modeling of both composite constituents without employing any simplifying assumptions is presented. In the framework of the present proposed modeling, the influence of stiff elements aspect ratio on the overall stress distribution is demonstrated.
NASA Technical Reports Server (NTRS)
Bathe, M.; Kamm, R. D.
1999-01-01
A new model is used to analyze the fully coupled problem of pulsatile blood flow through a compliant, axisymmetric stenotic artery using the finite element method. The model uses large displacement and large strain theory for the solid, and the full Navier-Stokes equations for the fluid. The effect of increasing area reduction on fluid dynamic and structural stresses is presented. Results show that pressure drop, peak wall shear stress, and maximum principal stress in the lesion all increase dramatically as the area reduction in the stenosis is increased from 51 to 89 percent. Further reductions in stenosis cross-sectional area, however, produce relatively little additional change in these parameters due to a concomitant reduction in flow rate caused by the losses in the constriction. Inner wall hoop stretch amplitude just distal to the stenosis also increases with increasing stenosis severity, as downstream pressures are reduced to a physiological minimum. The contraction of the artery distal to the stenosis generates a significant compressive stress on the downstream shoulder of the lesion. Dynamic narrowing of the stenosis is also seen, further augmenting area constriction at times of peak flow. Pressure drop results are found to compare well to an experimentally based theoretical curve, despite the assumption of laminar flow.
NASA Astrophysics Data System (ADS)
Gross, L.; Shaw, S.
2016-04-01
Mapping the horizontal distribution of permeability is a key problem for the coal seam gas industry. Poststack seismic data with anisotropy attributes provide estimates for fracture density and orientation which are then interpreted in terms of permeability. This approach delivers an indirect measure of permeability and can fail if other sources of anisotropy (for instance stress) come into play. Seismo-electric methods, based on recording the electric signal from pore fluid movements stimulated through a seismic wave, measure permeability directly. In this paper we use numerical simulations to demonstrate that the seismo-electric method is potentially suitable to map the horizontal distribution of permeability changes across coal seams. We propose the use of an amplitude to offset (AVO) analysis of the electrical signal in combination with poststack seismic data collected during the exploration phase. Recording of electrical signals from a simple seismic source can be closer to production planning and operations. The numerical model is based on a sonic wave propagation model under the low frequency, saturated media assumption and uses a coupled high order spectral element and low order finite element solver. We investigate the impact of seam thickness, coal seam layering, layering in the overburden and horizontal heterogeneity of permeability.
Genetic dissection of the consensus sequence for the class 2 and class 3 flagellar promoters
Wozniak, Christopher E.; Hughes, Kelly T.
2008-01-01
Summary Computational searches for DNA binding sites often utilize consensus sequences. These search models make assumptions that the frequency of a base pair in an alignment relates to the base pair’s importance in binding and presume that base pairs contribute independently to the overall interaction with the DNA binding protein. These two assumptions have generally been found to be accurate for DNA binding sites. However, these assumptions are often not satisfied for promoters, which are involved in additional steps in transcription initiation after RNA polymerase has bound to the DNA. To test these assumptions for the flagellar regulatory hierarchy, class 2 and class 3 flagellar promoters were randomly mutagenized in Salmonella. Important positions were then saturated for mutagenesis and compared to scores calculated from the consensus sequence. Double mutants were constructed to determine how mutations combined for each promoter type. Mutations in the binding site for FlhD4C2, the activator of class 2 promoters, better satisfied the assumptions for the binding model than did mutations in the class 3 promoter, which is recognized by the σ28 transcription factor. These in vivo results indicate that the activator sites within flagellar promoters can be modeled using simple assumptions but that the DNA sequences recognized by the flagellar sigma factor require more complex models. PMID:18486950
A validation study of a stochastic model of human interaction
NASA Astrophysics Data System (ADS)
Burchfield, Mitchel Talmadge
The purpose of this dissertation is to validate a stochastic model of human interactions which is part of a developmentalism paradigm. Incorporating elements of ancient and contemporary philosophy and science, developmentalism defines human development as a progression of increasing competence and utilizes compatible theories of developmental psychology, cognitive psychology, educational psychology, social psychology, curriculum development, neurology, psychophysics, and physics. To validate a stochastic model of human interactions, the study addressed four research questions: (a) Does attitude vary over time? (b) What are the distributional assumptions underlying attitudes? (c) Does the stochastic model, {-}N{intlimitssbsp{-infty}{infty}}varphi(chi,tau)\\ Psi(tau)dtau, have utility for the study of attitudinal distributions and dynamics? (d) Are the Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein theories applicable to human groups? Approximately 25,000 attitude observations were made using the Semantic Differential Scale. Positions of individuals varied over time and the logistic model predicted observed distributions with correlations between 0.98 and 1.0, with estimated standard errors significantly less than the magnitudes of the parameters. The results bring into question the applicability of Fisherian research designs (Fisher, 1922, 1928, 1938) for behavioral research based on the apparent failure of two fundamental assumptions-the noninteractive nature of the objects being studied and normal distribution of attributes. The findings indicate that individual belief structures are representable in terms of a psychological space which has the same or similar properties as physical space. The psychological space not only has dimension, but individuals interact by force equations similar to those described in theoretical physics models. Nonlinear regression techniques were used to estimate Fermi-Dirac parameters from the data. The model explained a high degree of the variance in each probability distribution. The correlation between predicted and observed probabilities ranged from a low of 0.955 to a high value of 0.998, indicating that humans behave in psychological space as Fermions behave in momentum space.
Paleogeodesy of the Southern Santa Cruz Mountains Frontal Thrusts, Silicon Valley, CA
NASA Astrophysics Data System (ADS)
Aron, F.; Johnstone, S. A.; Mavrommatis, A. P.; Sare, R.; Hilley, G. E.
2015-12-01
We present a method to infer long-term fault slip rate distributions using topography by coupling a three-dimensional elastic boundary element model with a geomorphic incision rule. In particular, we used a 10-m-resolution digital elevation model (DEM) to calculate channel steepness (ksn) throughout the actively deforming southern Santa Cruz Mountains in Central California. We then used these values with a power-law incision rule and the Poly3D code to estimate slip rates over seismogenic, kilometer-scale thrust faults accommodating differential uplift of the relief throughout geologic time. Implicit in such an analysis is the assumption that the topographic surface remains unchanged over time as rock is uplifted by slip on the underlying structures. The fault geometries within the area are defined based on surface mapping, as well as active and passive geophysical imaging. Fault elements are assumed to be traction-free in shear (i.e., frictionless), while opening along them is prohibited. The free parameters in the inversion include the components of the remote strain-rate tensor (ɛij) and the bedrock resistance to channel incision (K), which is allowed to vary according to the mapped distribution of geologic units exposed at the surface. The nonlinear components of the geomorphic model required the use of a Markov chain Monte Carlo method, which simulated the posterior density of the components of the remote strain-rate tensor and values of K for the different mapped geologic units. Interestingly, posterior probability distributions of ɛij and K fall well within the broad range of reported values, suggesting that the joint use of elastic boundary element and geomorphic models may have utility in estimating long-term fault slip-rate distributions. Given an adequate DEM, geologic mapping, and fault models, the proposed paleogeodetic method could be applied to other crustal faults with geological and morphological expressions of long-term uplift.
Evaluation of 2D shallow-water model for spillway flow with a complex geometry
USDA-ARS?s Scientific Manuscript database
Although the two-dimensional (2D) shallow water model is formulated based on several assumptions such as hydrostatic pressure distribution and vertical velocity is negligible, as a simple alternative to the complex 3D model, it has been used to compute water flows in which these assumptions may be ...
Accommodating Missing Data in Mixture Models for Classification by Opinion-Changing Behavior.
ERIC Educational Resources Information Center
Hill, Jennifer L.
2001-01-01
Explored the assumptions implicit in models reflecting three different approaches to missing survey response data using opinion data collected from Swiss citizens at four time points over nearly 2 years. Results suggest that the latently ignorable model has the least restrictive structural assumptions. Discusses the idea of "durable…
Sri Bhashyam, Sumitra; Montibeller, Gilberto
2016-04-01
A key objective for policymakers and analysts dealing with terrorist threats is trying to predict the actions that malicious agents may take. A recent trend in counterterrorism risk analysis is to model the terrorists' judgments, as these will guide their choices of such actions. The standard assumptions in most of these models are that terrorists are fully rational, following all the normative desiderata required for rational choices, such as having a set of constant and ordered preferences, being able to perform a cost-benefit analysis of their alternatives, among many others. However, are such assumptions reasonable from a behavioral perspective? In this article, we analyze the types of assumptions made across various counterterrorism analytical models that represent malicious agents' judgments and discuss their suitability from a descriptive point of view. We then suggest how some of these assumptions could be modified to describe terrorists' preferences more accurately, by drawing knowledge from the fields of behavioral decision research, politics, philosophy of choice, public choice, and conflict management in terrorism. Such insight, we hope, might help make the assumptions of these models more behaviorally valid for counterterrorism risk analysis. © 2016 The Authors Wound Repair and Regeneration published by Wiley Periodicals, Inc. on behalf of The Wound Healing Society.
ERIC Educational Resources Information Center
Jang, Hyesuk
2014-01-01
This study aims to evaluate a multidimensional latent trait model to determine how well the model works in various empirical contexts. Contrary to the assumption of these latent trait models that the traits are normally distributed, situations in which the latent trait is not shaped with a normal distribution may occur (Sass et al, 2008; Woods…
Black-boxing and cause-effect power
Albantakis, Larissa; Tononi, Giulio
2018-01-01
Reductionism assumes that causation in the physical world occurs at the micro level, excluding the emergence of macro-level causation. We challenge this reductionist assumption by employing a principled, well-defined measure of intrinsic cause-effect power–integrated information (Φ), and showing that, according to this measure, it is possible for a macro level to “beat” the micro level. Simple systems were evaluated for Φ across different spatial and temporal scales by systematically considering all possible black boxes. These are macro elements that consist of one or more micro elements over one or more micro updates. Cause-effect power was evaluated based on the inputs and outputs of the black boxes, ignoring the internal micro elements that support their input-output function. We show how black-box elements can have more common inputs and outputs than the corresponding micro elements, revealing the emergence of high-order mechanisms and joint constraints that are not apparent at the micro level. As a consequence, a macro, black-box system can have higher Φ than its micro constituents by having more mechanisms (higher composition) that are more interconnected (higher integration). We also show that, for a given micro system, one can identify local maxima of Φ across several spatiotemporal scales. The framework is demonstrated on a simple biological system, the Boolean network model of the fission-yeast cell-cycle, for which we identify stable local maxima during the course of its simulated biological function. These local maxima correspond to macro levels of organization at which emergent cause-effect properties of physical systems come into focus, and provide a natural vantage point for scientific inquiries. PMID:29684020
ACCURATE MODELING OF X-RAY EXTINCTION BY INTERSTELLAR GRAINS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, John; Draine, B. T., E-mail: jah5@astro.princeton.edu, E-mail: draine@astro.princeton.edu
Interstellar abundance determinations from fits to X-ray absorption edges often rely on the incorrect assumption that scattering is insignificant and can be ignored. We show instead that scattering contributes significantly to the attenuation of X-rays for realistic dust grain size distributions and substantially modifies the spectrum near absorption edges of elements present in grains. The dust attenuation modules used in major X-ray spectral fitting programs do not take this into account. We show that the consequences of neglecting scattering on the determination of interstellar elemental abundances are modest; however, scattering (along with uncertainties in the grain size distribution) must bemore » taken into account when near-edge extinction fine structure is used to infer dust mineralogy. We advertise the benefits and accuracy of anomalous diffraction theory for both X-ray halo analysis and near edge absorption studies. We present an open source Fortran suite, General Geometry Anomalous Diffraction Theory (GGADT), that calculates X-ray absorption, scattering, and differential scattering cross sections for grains of arbitrary geometry and composition.« less
Finite-element modelling of thermal micracking in fresh and consolidated marbles
NASA Astrophysics Data System (ADS)
Weiss, T.; Fuller, E.; Siegesmund, S.
2003-04-01
The initial stage of marble weathering is supposed to be controlled by thermal microcracking. Due to the anisotropy of the thermal expansion coefficients of calcite, the main rock forming mineral in marble, stresses are caused which lead to thermally-induced microcracking, especially along the grain boundaries. The so-called "granular disintegration" is a frequent weathering phenomenon observed for marbles. The controlling parameters are the grain size, grain shape and grain orientation. We use a finite-element approach to constrain magnitude and directional dependence of thermal degradation. Therefore, different assumptions are validated including the fracture toughness of the grain boundaries, the effects of the grain-to-grain orientation and bulk lattice preferred orientation (here referred to as texture). The resulting thermal microcracking and bulk rock thermal expansion anisotropy are validated. It is evident that thermal degradation depends on the texture. Strongly textured marbles exhibit a clear directional dependence of thermal degradation and a smaller bulk thermal degradation than randomly oriented ones. The effect of different stone consolidants in the pore space of degraded marble is simulated and its influence on mechanical properties such as tensile strength are evaluated.
Role of the Z band in the mechanical properties of the heart.
Goldstein, M A; Schroeter, J P; Michael, L H
1991-05-01
In striated muscle the mechanism of contraction involves the cooperative movement of contractile and elastic components. This review emphasizes a structural approach that describes the cellular and extracellular components with known anatomical, biochemical, and physical properties that make them candidates for these contractile and elastic components. Classical models of contractile and elastic elements and their underlying assumptions are presented. Mechanical properties of cardiac and skeletal muscle are compared and contrasted and then related to ultrastructure. Information from these approaches leads to the conclusion that the Z band is essential for muscle contraction. Our review of Z band structure shows the Z band at the interface where extracellular components meet the cell surface. The Z band is also the interface from cell surface to myofibril, from extra-myofibrillar to myofibril, and finally from sarcomere to sarcomere. Our studies of Z band in defined physiologic states show that this lattice is an integral part of the contractile elements and can function as an elastic component. The Z band is a complex dynamic lattice uniquely suited to play several roles in muscle contraction.
Is herpes zoster vaccination likely to be cost-effective in Canada?
Peden, Alexander D; Strobel, Stephenson B; Forget, Evelyn L
2014-05-30
To synthesize the current literature detailing the cost-effectiveness of the herpes zoster (HZ) vaccine, and to provide Canadian policy-makers with cost-effectiveness measurements in a Canadian context. This article builds on an existing systematic review of the HZ vaccine that offers a quality assessment of 11 recent articles. We first replicated this study, and then two assessors reviewed the articles and extracted information on vaccine effectiveness, cost of HZ, other modelling assumptions and QALY estimates. Then we transformed the results into a format useful for Canadian policy decisions. Results expressed in different currencies from different years were converted into 2012 Canadian dollars using Bank of Canada exchange rates and a Consumer Price Index deflator. Modelling assumptions that varied between studies were synthesized. We tabled the results for comparability. The Szucs systematic review presented a thorough methodological assessment of the relevant literature. However, the various studies presented results in a variety of currencies, and based their analyses on disparate methodological assumptions. Most of the current literature uses Markov chain models to estimate HZ prevalence. Cost assumptions, discount rate assumptions, assumptions about vaccine efficacy and waning and epidemiological assumptions drove variation in the outcomes. This article transforms the results into a table easily understood by policy-makers. The majority of the current literature shows that HZ vaccination is cost-effective at the price of $100,000 per QALY. Few studies showed that vaccination cost-effectiveness was higher than this threshold, and only under conservative assumptions. Cost-effectiveness was sensitive to vaccine price and discount rate.
The inverse niche model for food webs with parasites
Warren, Christopher P.; Pascual, Mercedes; Lafferty, Kevin D.; Kuris, Armand M.
2010-01-01
Although parasites represent an important component of ecosystems, few field and theoretical studies have addressed the structure of parasites in food webs. We evaluate the structure of parasitic links in an extensive salt marsh food web, with a new model distinguishing parasitic links from non-parasitic links among free-living species. The proposed model is an extension of the niche model for food web structure, motivated by the potential role of size (and related metabolic rates) in structuring food webs. The proposed extension captures several properties observed in the data, including patterns of clustering and nestedness, better than does a random model. By relaxing specific assumptions, we demonstrate that two essential elements of the proposed model are the similarity of a parasite's hosts and the increasing degree of parasite specialization, along a one-dimensional niche axis. Thus, inverting one of the basic rules of the original model, the one determining consumers' generality appears critical. Our results support the role of size as one of the organizing principles underlying niche space and food web topology. They also strengthen the evidence for the non-random structure of parasitic links in food webs and open the door to addressing questions concerning the consequences and origins of this structure.
VizieR Online Data Catalog: NuGrid stellar data set I. Yields from H to Bi (Pignatari+, 2016)
NASA Astrophysics Data System (ADS)
Pignatari, M.; Herwig, F.; Hirschi, R.; Bennett, M.; Rockefeller, G.; Fryer, C.; Timmes, F. X.; Ritter, C.; Heger, A.; Jones, S.; Battino, U.; Dotter, A.; Trappitsch, R.; Diehl, S.; Frischknecht, U.; Hungerford, A.; Magkotsios, G.; Travaglio, C.; Young, P.
2016-10-01
We provide a set of stellar evolution and nucleosynthesis calculations that applies established physics assumptions simultaneously to low- and intermediate-mass and massive star models. Our goal is to provide an internally consistent and comprehensive nuclear production and yield database for applications in areas such as presolar grain studies. Our non-rotating models assume convective boundary mixing (CBM) where it has been adopted before. We include 8 (12) initial masses for Z=0.01 (0.02). Models are followed either until the end of the asymptotic giant branch phase or the end of Si burning, complemented by simple analytic core-collapse supernova (SN) models with two options for fallback and shock velocities. The explosions show which pre-SN yields will most strongly be effected by the explosive nucleosynthesis. We discuss how these two explosion parameters impact the light elements and the s and p process. For low- and intermediate-mass models, our stellar yields from H to Bi include the effect of CBM at the He-intershell boundaries and the stellar evolution feedback of the mixing process that produces the 13C pocket. All post-processing nucleosynthesis calculations use the same nuclear reaction rate network and nuclear physics input. We provide a discussion of the nuclear production across the entire mass range organized by element group. The entirety of our stellar nucleosynthesis profile and time evolution output are available electronically, and tools to explore the data on the NuGrid VOspace hosted by the Canadian Astronomical Data Centre are introduced. (12 data files).
NASA Astrophysics Data System (ADS)
Pignatari, M.; Herwig, F.; Hirschi, R.; Bennett, M.; Rockefeller, G.; Fryer, C.; Timmes, F. X.; Ritter, C.; Heger, A.; Jones, S.; Battino, U.; Dotter, A.; Trappitsch, R.; Diehl, S.; Frischknecht, U.; Hungerford, A.; Magkotsios, G.; Travaglio, C.; Young, P.
2016-08-01
We provide a set of stellar evolution and nucleosynthesis calculations that applies established physics assumptions simultaneously to low- and intermediate-mass and massive star models. Our goal is to provide an internally consistent and comprehensive nuclear production and yield database for applications in areas such as presolar grain studies. Our non-rotating models assume convective boundary mixing (CBM) where it has been adopted before. We include 8 (12) initial masses for Z = 0.01 (0.02). Models are followed either until the end of the asymptotic giant branch phase or the end of Si burning, complemented by simple analytic core-collapse supernova (SN) models with two options for fallback and shock velocities. The explosions show which pre-SN yields will most strongly be effected by the explosive nucleosynthesis. We discuss how these two explosion parameters impact the light elements and the s and p process. For low- and intermediate-mass models, our stellar yields from H to Bi include the effect of CBM at the He-intershell boundaries and the stellar evolution feedback of the mixing process that produces the {}13{{C}} pocket. All post-processing nucleosynthesis calculations use the same nuclear reaction rate network and nuclear physics input. We provide a discussion of the nuclear production across the entire mass range organized by element group. The entirety of our stellar nucleosynthesis profile and time evolution output are available electronically, and tools to explore the data on the NuGrid VOspace hosted by the Canadian Astronomical Data Centre are introduced.
NASA Astrophysics Data System (ADS)
Figiel, Łukasz; Dunne, Fionn P. E.; Buckley, C. Paul
2010-01-01
Layered-silicate nanoparticles offer a cost-effective reinforcement for thermoplastics. Computational modelling has been employed to study large deformations in layered-silicate/poly(ethylene terephthalate) (PET) nanocomposites near the glass transition, as would be experienced during industrial forming processes such as thermoforming or injection stretch blow moulding. Non-linear numerical modelling was applied, to predict the macroscopic large deformation behaviour, with morphology evolution and deformation occurring at the microscopic level, using the representative volume element (RVE) approach. A physically based elasto-viscoplastic constitutive model, describing the behaviour of the PET matrix within the RVE, was numerically implemented into a finite element solver (ABAQUS) using an UMAT subroutine. The implementation was designed to be robust, for accommodating large rotations and stretches of the matrix local to, and between, the nanoparticles. The nanocomposite morphology was reconstructed at the RVE level using a Monte-Carlo-based algorithm that placed straight, high-aspect ratio particles according to the specified orientation and volume fraction, with the assumption of periodicity. Computational experiments using this methodology enabled prediction of the strain-stiffening behaviour of the nanocomposite, observed experimentally, as functions of strain, strain rate, temperature and particle volume fraction. These results revealed the probable origins of the enhanced strain stiffening observed: (a) evolution of the morphology (through particle re-orientation) and (b) early onset of stress-induced pre-crystallization (and hence lock-up of viscous flow), triggered by the presence of particles. The computational model enabled prediction of the effects of process parameters (strain rate, temperature) on evolution of the morphology, and hence on the end-use properties.
NASA Astrophysics Data System (ADS)
Oral, Elif; Gélis, Céline; Bonilla, Luis Fabián; Delavaud, Elise
2017-12-01
Numerical modelling of seismic wave propagation, considering soil nonlinearity, has become a major topic in seismic hazard studies when strong shaking is involved under particular soil conditions. Indeed, when strong ground motion propagates in saturated soils, pore pressure is another important parameter to take into account when successive phases of contractive and dilatant soil behaviour are expected. Here, we model 1-D seismic wave propagation in linear and nonlinear media using the spectral element numerical method. The study uses a three-component (3C) nonlinear rheology and includes pore-pressure excess. The 1-D-3C model is used to study the 1987 Superstition Hills earthquake (ML 6.6), which was recorded at the Wildlife Refuge Liquefaction Array, USA. The data of this event present strong soil nonlinearity involving pore-pressure effects. The ground motion is numerically modelled for different assumptions on soil rheology and input motion (1C versus 3C), using the recorded borehole signals as input motion. The computed acceleration-time histories show low-frequency amplification and strong high-frequency damping due to the development of pore pressure in one of the soil layers. Furthermore, the soil is found to be more nonlinear and more dilatant under triaxial loading compared to the classical 1C analysis, and significant differences in surface displacements are observed between the 1C and 3C approaches. This study contributes to identify and understand the dominant phenomena occurring in superficial layers, depending on local soil properties and input motions, conditions relevant for site-specific studies.
Tay, Richard
2016-03-01
The binary logistic model has been extensively used to analyze traffic collision and injury data where the outcome of interest has two categories. However, the assumption of a symmetric distribution may not be a desirable property in some cases, especially when there is a significant imbalance in the two categories of outcome. This study compares the standard binary logistic model with the skewed logistic model in two cases in which the symmetry assumption is violated in one but not the other case. The differences in the estimates, and thus the marginal effects obtained, are significant when the assumption of symmetry is violated. Copyright © 2015 Elsevier Ltd. All rights reserved.
Taliotis, Constantinos; Taibi, Emanuele; Howells, Mark; Rogner, Holger; Bazilian, Morgan; Welsch, Manuel
2017-10-01
The generation mix of Cyprus has been dominated by oil products for decades. In order to conform with European Union and international legislation, a transformation of the supply system is called for. Energy system models can facilitate energy planning into the future, but a large volume of data is required to populate such models. The present data article provides information on key modelling assumptions and input data adopted with the aim of representing the electricity supply system of Cyprus in a separate research article. Data in regards to renewable energy technoeconomic characteristics and investment cost projections, fossil fuel price projections, storage technology characteristics and system operation assumptions are described in this article.
Population Health in Canada: A Brief Critique
Coburn, David; Denny, Keith; Mykhalovskiy, Eric; McDonough, Peggy; Robertson, Ann; Love, Rhonda
2003-01-01
An internationally influential model of population health was developed in Canada in the 1990s, shifting the research agenda beyond health care to the social and economic determinants of health. While agreeing that health has important social determinants, the authors believe that this model has serious shortcomings; they critique the model by focusing on its hidden assumptions. Assumptions about how knowledge is produced and an implicit interest group perspective exclude the sociopolitical and class contexts that shape interest group power and citizen health. Overly rationalist assumptions about change understate the role of agency. The authors review the policy and practice implications of the Canadian population health model and point to alternative ways of viewing the determinants of health. PMID:12604479
Analyses of School Commuting Data for Exposure Modeling Purposes
Human exposure models often make the simplifying assumption that school children attend school in the same Census tract where they live. This paper analyzes that assumption and provides information on the temporal and spatial distributions associated with school commuting. The d...