Sample records for additional simplifying assumptions

  1. Simplified analysis of a generalized bias test for fabrics with two families of inextensible fibres

    NASA Astrophysics Data System (ADS)

    Cuomo, M.; dell'Isola, F.; Greco, L.

    2016-06-01

    Two tests for woven fabrics with orthogonal fibres are examined using simplified kinematic assumptions. The aim is to analyse how different constitutive assumptions may affect the response of the specimen. The fibres are considered inextensible, and the kinematics of 2D continua with inextensible cords due to Rivlin is adopted. In addition to two forms of strain energy depending on the shear deformation, two forms of energy depending on the gradient of shear are also examined. It is shown that this energy can account for the bending of the fibres. In addition to the standard bias extension test, a modified test has been examined, in which the head of the specimen is rotated rather than translated. In this case more bending occurs, so that the results of the simulations carried out with the different energy models differ more than what has been found for the BE test.

  2. Preliminary methodology to assess the national and regional impact of U.S. wind energy development on birds and bats

    USGS Publications Warehouse

    Diffendorfer, James E.; Beston, Julie A.; Merrill, Matthew D.; Stanton, Jessica C.; Corum, Margo D.; Loss, Scott R.; Thogmartin, Wayne E.; Johnson, Douglas H.; Erickson, Richard A.; Heist, Kevin W.

    2015-01-01

    Components of the methodology are based on simplifying assumptions and require information that, for many species, may be sparse or unreliable. These assumptions are presented in the report and should be carefully considered when using output from the methodology. In addition, this methodology can be used to recommend species for more intensive demographic modeling or highlight those species that may not require any additional protection because effects of wind energy development on their populations are projected to be small.

  3. The 3D dynamics of the Cosserat rod as applied to continuum robotics

    NASA Astrophysics Data System (ADS)

    Jones, Charles Rees

    2011-12-01

    In the effort to simulate the biologically inspired continuum robot's dynamic capabilities, researchers have been faced with the daunting task of simulating---in real-time---the complete three-dimensional dynamics of the "beam-like" structure, which includes the three "stiff" degrees of freedom of transverse shear and dilation. Therefore, researchers have traditionally limited the difficulty of the problem with simplifying assumptions. This study, however, puts forward a solution which makes no simplifying assumptions and trades off only the real-time requirement of the desired solution. The solution is a Finite Difference Time Domain method employing an explicit single-step method with cheap right-hand sides. The cheap right-hand sides are the result of a rather ingenious formulation of the classical beam, called the Cosserat rod by, first, the Cosserat brothers and, later, Stuart S. Antman, which results in five nonlinear but uncoupled equations that require only multiplication and addition. The method is therefore suitable for hardware implementation, thus moving the real-time requirement from a software solution to a hardware solution.

  4. Non-stationary noise estimation using dictionary learning and Gaussian mixture models

    NASA Astrophysics Data System (ADS)

    Hughes, James M.; Rockmore, Daniel N.; Wang, Yang

    2014-02-01

    Stationarity of the noise distribution is a common assumption in image processing. This assumption greatly simplifies denoising estimators and other model parameters; consequently, assuming stationarity is often a matter of convenience rather than an accurate model of noise characteristics. The problematic nature of this assumption is exacerbated in real-world contexts, where noise is often highly non-stationary and can possess time- and space-varying characteristics. Regardless of model complexity, estimating the parameters of noise distributions in digital images is a difficult task, and estimates are often based on heuristic assumptions. Recently, sparse Bayesian dictionary learning methods were shown to produce accurate estimates of the level of additive white Gaussian noise in images with minimal assumptions. We show that a similar model is capable of accurately modeling certain kinds of non-stationary noise processes, allowing for space-varying noise in images to be estimated, detected, and removed. We apply this modeling concept to several types of non-stationary noise and demonstrate the model's effectiveness on real-world problems, including denoising and segmentation of images according to noise characteristics, which has applications in image forensics.

  5. Approximations of Two-Attribute Utility Functions

    DTIC Science & Technology

    1976-09-01

    preferred to") be a bina-zy relation on the set • of simple probability measures or ’gambles’ defined on a set T of consequences. Throughout this study it...simplifying independence assumptions. Although there are several approaches to this problem, the21 present study will focus on approximations of u... study will elicit additional interest in the topic. 2. REMARKS ON APPROXIMATION THEORY This section outlines a few basic ideas of approximation theory

  6. Impact of unseen assumptions on communication of atmospheric carbon mitigation options

    NASA Astrophysics Data System (ADS)

    Elliot, T. R.; Celia, M. A.; Court, B.

    2010-12-01

    With the rapid access and dissemination of information made available through online and digital pathways, there is a need for concurrent openness and transparency in communication of scientific investigation. Even with open communication it is essential that the scientific community continue to provide impartial result-driven information. An unknown factor in climate literacy is the influence of an impartial presentation of scientific investigation that has utilized biased base-assumptions. A formal publication appendix, and additional digital material, provides active investigators a suitable framework and ancillary material to make informed statements weighted by the assumptions made in a study. However, informal media and rapid communiqués rarely make such investigatory attempts, often citing only a headline or key phrasing within a written work. This presentation is focused on Geologic Carbon Sequestration (GCS) as a proxy for the wider field of climate science communication, wherein we primarily investigate recent publications in the GCS literature that produce scenario outcomes using apparently biased pro- or con- assumptions. A general review of scenario economics, capture process efficacy and specific examination of sequestration site assumptions and processes reveals an apparent misrepresentation of what we consider to be a base-case GCS system. The authors demonstrate the influence of the apparent bias in primary assumptions on results from commonly referenced subsurface hydrology models. By use of moderate semi-analytical model simplification and Monte Carlo analysis of outcomes, we can establish the likely reality of any GCS scenario within a pragmatic middle ground. Secondarily, we review the development of publicly available web-based computational tools and recent workshops where we presented interactive educational opportunities for public and institutional participants, with the goal of base-assumption awareness playing a central role. Through a series of interactive ‘what if’ scenarios, workshop participants were able to customize the models, which continue to be available from the Princeton University Subsurface Hydrology Research Group, and develop a better comprehension of subsurface factors contributing to GCS. Considering that the models are customizable, a simplified mock-up of regional GCS scenarios can be developed, which provides a possible pathway for informal, industrial, scientific or government communication of GCS concepts and likely scenarios. We believe continued availability, customizable scenarios, and simplifying assumptions are an exemplary means to communicate the possible outcomes of CO2 sequestration projects; the associated risk; and, of no small importance, the consequences of base assumptions on predicted outcome.

  7. Statistical Issues for Calculating Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Matney, Mark; Bacon, John

    2016-01-01

    A number of statistical tools have been developed over the years for assessing the risk of reentering objects to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. This information, combined with information on the expected ground path of the reentry, is used to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of this analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper builds on previous IAASS work to re-examine many of these theoretical assumptions, including the mathematical basis for the hazard calculations, and to outline the conditions under which the simplifying assumptions hold. This study also employs empirical and theoretical information to test these assumptions, and makes recommendations on how to improve the accuracy of these calculations in the future.
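
    As a concrete illustration of the bookkeeping these assumptions feed into, the sketch below computes a casualty expectation under the simplest possible assumptions (independently landing fragments, uniform population density, Poisson-distributed casualties); the function and the numbers are illustrative, not taken from the paper.

    ```python
    import math

    def casualty_risk(casualty_areas_m2, pop_density_per_m2):
        """Expected casualties and P(at least one casualty).

        Assumes each surviving fragment lands independently in a region of
        uniform population density -- exactly the kind of simplifying
        assumption the paper re-examines -- so casualties are Poisson.
        """
        expected = pop_density_per_m2 * sum(casualty_areas_m2)
        return expected, 1.0 - math.exp(-expected)

    # Five fragments of 0.5 m^2 casualty area each, over a region averaging
    # 10 people per km^2 (1e-5 per m^2).
    ec, p = casualty_risk([0.5] * 5, 1e-5)
    print(f"E[casualties] = {ec:.2e}, P(>=1 casualty) = {p:.2e}")
    ```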

  8. Analyses of School Commuting Data for Exposure Modeling Purposes

    EPA Science Inventory

    Human exposure models often make the simplifying assumption that school children attend school in the same Census tract where they live. This paper analyzes that assumption and provides information on the temporal and spatial distributions associated with school commuting. The d...

  9. Investigations in a Simplified Bracketed Grid Approach to Metrical Structure

    ERIC Educational Resources Information Center

    Liu, Patrick Pei

    2010-01-01

    In this dissertation, I examine the fundamental mechanisms and assumptions of the Simplified Bracketed Grid Theory (Idsardi 1992) in two ways: first, by comparing it with Parametric Metrical Theory (Hayes 1995), and second, by implementing it in the analysis of several case studies in stress assignment and syllabification. Throughout these…

  10. Stirling Engine External Heat System Design with Heat Pipe Heater.

    DTIC Science & Technology

    1986-07-01

    Figure 10. However, the evaporator analysis is greatly simplified by making the conservative assumption of constant heat flux. This assumption results in...number Cold Start Data: ROM, density of the metal, gr/cm3; CAPM, specific heat of the metal, cal./gr.K; ETHG, effective gauze thickness: the

  11. Statistical Issues for Uncontrolled Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Matney, Mark

    2008-01-01

    A number of statistical tools have been developed over the years for assessing the risk of reentering objects to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. The statistical tools use this information to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of the analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper looks at a number of these theoretical assumptions, examining the mathematical basis for the hazard calculations, and outlining the conditions under which the simplifying assumptions hold. This paper also outlines some new tools for assessing ground hazard risk in useful ways. In addition, this study makes use of a database of known uncontrolled reentry locations measured by the United States Department of Defense. By using data from objects that were in orbit more than 30 days before reentry, sufficient time is allowed for the orbital parameters to be randomized in the manner the models assume. The predicted ground footprint distributions of these objects are based on the theory that their orbits behave basically like simple Kepler orbits. However, there are a number of factors - including the effects of gravitational harmonics, the effects of the Earth's equatorial bulge on the atmosphere, and the rotation of the Earth and atmosphere - that could cause them to diverge from simple Kepler orbit behavior and change the ground footprints. The measured latitude and longitude distributions of these objects provide data that can be directly compared with the predicted distributions, providing a fundamental empirical test of the model assumptions.

  12. Quantum State Tomography via Reduced Density Matrices.

    PubMed

    Xin, Tao; Lu, Dawei; Klassen, Joel; Yu, Nengkun; Ji, Zhengfeng; Chen, Jianxin; Ma, Xian; Long, Guilu; Zeng, Bei; Laflamme, Raymond

    2017-01-13

    Quantum state tomography via local measurements is an efficient tool for characterizing quantum states. However, it requires that the original global state be uniquely determined (UD) by its local reduced density matrices (RDMs). In this work, we demonstrate for the first time a class of states that are UD by their RDMs under the assumption that the global state is pure, but fail to be UD in the absence of that assumption. This discovery allows us to classify quantum states according to their UD properties, with the requirement that each class be treated distinctly in the practice of simplifying quantum state tomography. Additionally, we experimentally test the feasibility and stability of performing quantum state tomography via the measurement of local RDMs for each class. These theoretical and experimental results demonstrate the advantages and possible pitfalls of quantum state tomography with local measurements.

  13. iGen: An automated generator of simplified models with provable error bounds.

    NASA Astrophysics Data System (ADS)

    Tang, D.; Dobbie, S.

    2009-04-01

    Climate models employ various simplifying assumptions and parameterisations in order to increase execution speed. However, in order to draw conclusions about the Earth's climate from the results of a climate simulation, it is necessary to have information about the error that these assumptions and parameterisations introduce. A novel computer program, called iGen, is being developed which automatically generates fast, simplified models by analysing the source code of a slower, high resolution model. The resulting simplified models have provable bounds on error compared to the high resolution model and execute at speeds that are typically orders of magnitude faster. iGen's input is a definition of the prognostic variables of the simplified model, a set of bounds on acceptable error, and the source code of a model that captures the behaviour of interest. In the case of an atmospheric model, for example, this would be a global cloud resolving model with very high resolution. Although such a model would execute far too slowly to be used directly in a climate model, iGen never executes it. Instead, it converts the code of the resolving model into a mathematical expression which is then symbolically manipulated and approximated to form a simplified expression. This expression is then converted back into a computer program and output as a simplified model. iGen also derives and reports formal bounds on the error of the simplified model compared to the resolving model. These error bounds are always maintained below the user-specified acceptable error. Results will be presented illustrating the success of iGen's analysis of a number of example models. These extremely encouraging results have led on to work which is currently underway to analyse a cloud resolving model and so produce an efficient parameterisation of moist convection with formally bounded error.

  14. Pendulum Motion and Differential Equations

    ERIC Educational Resources Information Center

    Reid, Thomas F.; King, Stephen C.

    2009-01-01

    A common example of real-world motion that can be modeled by a differential equation, and one easily understood by the student, is the simple pendulum. Simplifying assumptions are necessary for closed-form solutions to exist, and frequently there is little discussion of the impact if those assumptions are not met. This article presents a…
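
    Since the abstract is truncated, a minimal worked example of the assumption it discusses may help: the full pendulum equation is θ'' = -(g/L) sin θ, and a closed-form solution exists only after the small-angle simplification sin θ ≈ θ. The sketch below (standard physics, not drawn from the article itself) integrates both models and reports how far they drift apart at a 30° amplitude.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    g, L = 9.81, 1.0            # gravity (m/s^2) and pendulum length (m)
    theta0 = np.radians(30)     # amplitude where sin(theta) ~ theta starts to fail

    def full(t, y):             # theta'' = -(g/L) * sin(theta)
        return [y[1], -(g / L) * np.sin(y[0])]

    def small_angle(t, y):      # simplified model: theta'' = -(g/L) * theta
        return [y[1], -(g / L) * y[0]]

    t = np.linspace(0.0, 10.0, 1000)
    nonlinear = solve_ivp(full, (0, 10), [theta0, 0.0], t_eval=t)
    linear = solve_ivp(small_angle, (0, 10), [theta0, 0.0], t_eval=t)
    drift = np.max(np.abs(nonlinear.y[0] - linear.y[0]))
    print(f"max angular discrepancy over 10 s: {np.degrees(drift):.1f} deg")
    ```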

  15. On the coupling of fluid dynamics and electromagnetism at the top of the earth's core

    NASA Technical Reports Server (NTRS)

    Benton, E. R.

    1985-01-01

    A kinematic approach to short-term geomagnetism has recently been based upon pre-Maxwell frozen-flux electromagnetism. A complete dynamic theory requires coupling fluid dynamics to electromagnetism. A geophysically plausible simplifying assumption for the vertical vorticity balance, namely that the vertical Lorentz torque is negligible, is introduced and its consequences are developed. The simplified coupled magnetohydrodynamic system is shown to conserve a variety of magnetic and vorticity flux integrals. These provide constraints on eligible models for the geomagnetic main field, its secular variation, and the horizontal fluid motions at the top of the core, and so permit a number of tests of the underlying assumptions.
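
    For orientation, the frozen-flux hypothesis invoked here is usually expressed as the radial induction equation at the top of the core with magnetic diffusion neglected (a standard form quoted for context; the paper's notation may differ):

    $$\frac{\partial B_r}{\partial t} + \nabla_H \cdot (\mathbf{u}_H B_r) = 0,$$

    where $B_r$ is the radial magnetic field and $\mathbf{u}_H$ the horizontal flow just below the core-mantle boundary; the paper's additional assumption of negligible vertical Lorentz torque supplies the corresponding vorticity-balance constraints.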

  16. On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis.

    PubMed

    Li, Bing; Chun, Hyonho; Zhao, Hongyu

    2014-09-01

    We introduce a nonparametric method for estimating non-Gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use a one-dimensional kernel regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the Gaussian graphical model that replaces the precision matrix by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the Gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis.

  17. Using Heat Pulses for Quantifying 3d Seepage Velocity in Groundwater-Surface Water Interactions, Considering Source Size, Regime, and Dispersion

    NASA Astrophysics Data System (ADS)

    Zlotnik, V. A.; Tartakovsky, D. M.

    2017-12-01

    The study is motivated by the rapid proliferation of field methods for measuring seepage velocity using heat tracing, and is directed at broadening their potential for studies of groundwater-surface water interactions, and the hyporheic zone in particular. In the vast majority of cases, existing methods assume a vertical or horizontal, uniform, 1D seepage velocity. Often, 1D transport is assumed as well, and the analytical heat-transport models of Suzuki and Stallman are heavily used to infer seepage velocity. However, both of these assumptions (1D flow and 1D transport) are violated due to the flow geometry, media heterogeneity, and localized heat sources. Attempts to apply more realistic conceptual models still lack a full 3D view, and known 2D examples are treated numerically, or by making additional simplifying assumptions about velocity orientation. Heat pulse instruments and sensors already offer an opportunity to collect data sufficient for 3D seepage velocity identification at the appropriate scale, but interpretation tools for groundwater-surface water interactions in 3D have not been developed yet. We propose an approach that can substantially improve the capabilities of existing field instruments without additional measurements. The proposed closed-form analytical solutions are simple and well suited for use in inverse modeling. Field applications and ramifications, including data analysis, are discussed. The approach simplifies data collection, determines 3D seepage velocity, and facilitates interpretation of relations between heat transport parameters, fluid flow, and media properties. Results are obtained using tensor properties of transport parameters, Green's functions, and rotational coordinate transformations using the Euler angles.
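
    As background on the closed-form building blocks involved, the simplest isotropic case is the instantaneous point heat source advected by a uniform effective thermal velocity $\mathbf{v}$ (an illustrative textbook form; the paper generalizes it with tensor transport parameters and Euler-angle rotations):

    $$T(\mathbf{x},t) = \frac{Q}{8\,(\pi D t)^{3/2}} \exp\!\left(-\frac{\lVert \mathbf{x}-\mathbf{v}t \rVert^{2}}{4 D t}\right),$$

    where $D$ is an effective thermal diffusivity/dispersion coefficient and $Q$ the source strength; fitting the arrival time and peak of $T$ at offset sensors is what allows all three components of $\mathbf{v}$ to be identified.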

  18. Data reduction of room tests for zone model validation

    Treesearch

    M. Janssens; H. C. Tran

    1992-01-01

    Compartment fire zone models are based on many simplifying assumptions, in particular that gases stratify in two distinct layers. Because of these assumptions, certain model output is in a form unsuitable for direct comparison to measurements made in full-scale room tests. The experimental data must first be reduced and transformed to be compatible with the model...

  19. From puddles to planet: modeling approaches to vector-borne diseases at varying resolution and scale.

    PubMed

    Eckhoff, Philip A; Bever, Caitlin A; Gerardin, Jaline; Wenger, Edward A; Smith, David L

    2015-08-01

    Since the original Ross-Macdonald formulations of vector-borne disease transmission, there has been a broad proliferation of mathematical models of vector-borne disease, but many of these models retain most to all of the simplifying assumptions of the original formulations. Recently, there has been a new expansion of mathematical frameworks that contain explicit representations of the vector life cycle including aquatic stages, multiple vector species, host heterogeneity in biting rate, realistic vector feeding behavior, and spatial heterogeneity. In particular, there are now multiple frameworks for spatially explicit dynamics with movements of vector, host, or both. These frameworks are flexible and powerful, but require additional data to take advantage of these features. For a given question posed, utilizing a range of models with varying complexity and assumptions can provide a deeper understanding of the answers derived from models.
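
    For reference, the original Ross-Macdonald formulation mentioned here is commonly summarized by its basic reproduction number (one standard notation; symbols vary across texts):

    $$R_0 = \frac{m\,a^{2}\,b\,c\,e^{-g n}}{r\,g},$$

    where $m$ is the ratio of mosquitoes to humans, $a$ the human-biting rate, $b$ and $c$ the per-bite transmission probabilities, $n$ the extrinsic incubation period, $g$ the mosquito mortality rate, and $r$ the human recovery rate. The newer frameworks surveyed in the paper relax the homogeneity hidden in each of these single parameters.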

  20. A genuinely discontinuous approach for multiphase EHD problems

    NASA Astrophysics Data System (ADS)

    Natarajan, Mahesh; Desjardins, Olivier

    2017-11-01

    Electrohydrodynamics (EHD) involves solving the Poisson equation for the electric field potential. For multiphase flows, although the electric potential is continuous, the discontinuity in the electric permittivity between the phases means that additional jump conditions on the normal and tangential components of the electric field must be satisfied at the interface. All approaches to date either ignore the jump conditions or involve simplifying assumptions, and hence yield unconvincing results even for simple test problems. In the present work, we develop a genuinely discontinuous approach for the Poisson equation for multiphase flows using a Finite Volume Unsplit Volume of Fluid method. The governing equation and the jump conditions without assumptions are used to develop the method, and its efficiency is demonstrated by comparison of the numerical results with canonical test problems having exact solutions.
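
    In the electrostatic limit the interface conditions at issue take a familiar form (standard notation, quoted for context): writing $\mathbf{E} = -\nabla\varphi$, the tangential field is continuous across the interface while the normal displacement field jumps by the free surface charge $q_s$,

    $$[\![\,\mathbf{n}\times\mathbf{E}\,]\!] = \mathbf{0}, \qquad [\![\,\varepsilon\,\mathbf{n}\cdot\mathbf{E}\,]\!] = q_s,$$

    and it is precisely the permittivity jump in the second condition that the method enforces without simplification.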

  1. Guidelines and Metrics for Assessing Space System Cost Estimates

    DTIC Science & Technology

    2008-01-01

    analysis time, reuse tooling, models, mechanical ground-support equipment [MGSE]) High mass margin (simplifying assumptions used to bound solution...engineering environment changes High reuse of architecture, design, tools, code, test scripts, and commercial real-time operating systems Simplified life...Coronal Explorer TWTA traveling wave tube amplifier USAF U.S. Air Force USCM Unmanned Space Vehicle Cost Model USN U.S. Navy UV ultraviolet UVOT UV

  2. Determination of mechanical loading components of the equine metacarpus from measurements of strain during walking.

    PubMed

    Merritt, J S; Burvill, C R; Pandy, M G; Davies, H M S

    2006-08-01

    The mechanical environment of the distal limb is thought to be involved in the pathogenesis of many injuries, but has not yet been thoroughly described. The objectives were to determine the forces and moments experienced by the metacarpus in vivo during walking and to assess the effect of some simplifying assumptions used in the analysis. Strains from 8 gauges adhered to the left metacarpus of one horse were recorded in vivo during walking. Two different models, one based upon the mechanical theory of beams and shafts and the other upon a finite element analysis (FEA), were used to determine the external loads applied at the ends of the bone. Five orthogonal force and moment components were resolved by the analysis. In addition, 2 orthogonal bending moments were calculated near mid-shaft. Axial force was found to be the major loading component and displayed a bi-modal pattern during the stance phase of the stride. The shaft model of the bone showed good agreement with the FEA model, despite making many simplifying assumptions. A 3-dimensional loading scenario was observed in the metacarpus, with axial force being the major component. These results provide an opportunity to validate mathematical (computer) models of the limb. The data may also assist in the formulation of hypotheses regarding the pathogenesis of injuries to the distal limb.
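
    To make the shaft-model idea concrete, the sketch below shows how surface strains from several gauges can be inverted for an axial force and two bending moments using linear beam theory; the section properties and gauge positions are hypothetical, not the paper's data.

    ```python
    import numpy as np

    # Hypothetical section properties and gauge layout (illustrative only).
    E, A = 20e9, 6e-4                 # elastic modulus (Pa), section area (m^2)
    Iy, Iz = 3e-8, 5e-8               # second moments of area (m^4)
    # (y, z) surface coordinates of 8 strain gauges, in metres.
    pos = np.array([(0.015, 0.0), (-0.015, 0.0), (0.0, 0.012), (0.0, -0.012),
                    (0.011, 0.009), (-0.011, 0.009),
                    (0.011, -0.009), (-0.011, -0.009)])

    # Beam theory: eps_i = N/(E A) + My*z_i/(E Iy) + Mz*y_i/(E Iz)
    G = np.column_stack([np.full(len(pos), 1.0 / (E * A)),
                         pos[:, 1] / (E * Iy),    # bending about y varies with z
                         pos[:, 0] / (E * Iz)])   # bending about z varies with y

    true_loads = np.array([-4000.0, 25.0, -10.0])  # [N (compression), My, Mz]
    eps = G @ true_loads                           # synthetic gauge strains
    recovered, *_ = np.linalg.lstsq(G, eps, rcond=None)
    print(recovered)  # least squares recovers [N, My, Mz] from the 8 readings
    ```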

  3. Launch Collision Probability

    NASA Technical Reports Server (NTRS)

    Bollenbacher, Gary; Guptill, James D.

    1999-01-01

    This report analyzes the probability of a launch vehicle colliding with one of the nearly 10,000 tracked objects orbiting the Earth, given that an object on a near-collision course with the launch vehicle has been identified. Knowledge of the probability of collision throughout the launch window can be used to avoid launching at times when the probability of collision is unacceptably high. The analysis in this report assumes that the positions of the orbiting objects and the launch vehicle can be predicted as a function of time and therefore that any tracked object which comes close to the launch vehicle can be identified. The analysis further assumes that the position uncertainty of the launch vehicle and the approaching space object can be described with position covariance matrices. With these and some additional simplifying assumptions, a closed-form solution is developed using two approaches. The solution shows that the probability of collision is a function of position uncertainties, the size of the two potentially colliding objects, and the nominal separation distance at the point of closest approach. The impact of the simplifying assumptions on the accuracy of the final result is assessed and the application of the results to the Cassini mission, launched in October 1997, is described. Other factors that affect the probability of collision are also discussed. Finally, the report offers alternative approaches that can be used to evaluate the probability of collision.
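
    The closed-form solution referred to reduces, in the encounter plane, to integrating a bivariate Gaussian over the combined hard-body circle. The sketch below evaluates that integral numerically as a cross-check (a minimal illustration of the standard formulation, not the report's own derivation; all numbers are made up).

    ```python
    import numpy as np

    def collision_probability(miss, cov, radius, n=400):
        """Integrate a 2D Gaussian PDF over a circle of given radius.

        miss   : (2,) nominal miss vector at closest approach (m)
        cov    : (2,2) combined position covariance in the encounter plane
        radius : combined hard-body radius of the two objects (m)
        """
        inv = np.linalg.inv(cov)
        norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
        xs = np.linspace(-radius, radius, n)     # grid covering the circle
        dx = xs[1] - xs[0]
        X, Y = np.meshgrid(xs, xs)
        inside = X**2 + Y**2 <= radius**2
        R = np.stack([X - miss[0], Y - miss[1]], axis=-1)
        pdf = norm * np.exp(-0.5 * np.einsum('...i,ij,...j', R, inv, R))
        return float(np.sum(pdf[inside]) * dx * dx)

    # 200 m nominal miss, 100 m x 50 m position uncertainty, 10 m combined radius.
    pc = collision_probability(np.array([200.0, 0.0]),
                               np.diag([100.0**2, 50.0**2]), 10.0)
    print(f"P(collision) = {pc:.3e}")
    ```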

  4. Model Checking a Byzantine-Fault-Tolerant Self-Stabilizing Protocol for Distributed Clock Synchronization Systems

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2007-01-01

    This report presents the mechanical verification of a simplified model of a rapid Byzantine-fault-tolerant self-stabilizing protocol for distributed clock synchronization systems. This protocol does not rely on any assumptions about the initial state of the system. This protocol tolerates bursts of transient failures, and deterministically converges within a time bound that is a linear function of the self-stabilization period. A simplified model of the protocol is verified using the Symbolic Model Verifier (SMV) [SMV]. The system under study consists of 4 nodes, where at most one of the nodes is assumed to be Byzantine faulty. The model checking effort is focused on verifying correctness of the simplified model of the protocol in the presence of a permanent Byzantine fault as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period. Although model checking results of the simplified model of the protocol confirm the theoretical predictions, these results do not necessarily confirm that the protocol solves the general case of this problem. Modeling challenges of the protocol and the system are addressed. A number of abstractions are utilized in order to reduce the state space. Also, additional innovative state space reduction techniques are introduced that can be used in future verification efforts applied to this and other protocols.

  5. Verification of a Byzantine-Fault-Tolerant Self-stabilizing Protocol for Clock Synchronization

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2008-01-01

    This paper presents the mechanical verification of a simplified model of a rapid Byzantine-fault-tolerant self-stabilizing protocol for distributed clock synchronization systems. This protocol does not rely on any assumptions about the initial state of the system except for the presence of sufficient good nodes, thus making the weakest possible assumptions and producing the strongest results. This protocol tolerates bursts of transient failures, and deterministically converges within a time bound that is a linear function of the self-stabilization period. A simplified model of the protocol is verified using the Symbolic Model Verifier (SMV). The system under study consists of 4 nodes, where at most one of the nodes is assumed to be Byzantine faulty. The model checking effort is focused on verifying correctness of the simplified model of the protocol in the presence of a permanent Byzantine fault as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period. Although model checking results of the simplified model of the protocol confirm the theoretical predictions, these results do not necessarily confirm that the protocol solves the general case of this problem. Modeling challenges of the protocol and the system are addressed. A number of abstractions are utilized in order to reduce the state space.

  6. Polymer flammability

    DOT National Transportation Integrated Search

    2005-05-01

    This report provides an overview of polymer flammability from a material science perspective and describes currently accepted test methods to quantify burning behavior. Simplifying assumptions about the gas and condensed phase processes of flaming co...

  7. A practical method of predicting the loudness of complex electrical stimuli

    NASA Astrophysics Data System (ADS)

    McKay, Colette M.; Henshall, Katherine R.; Farrell, Rebecca J.; McDermott, Hugh J.

    2003-04-01

    The output of speech processors for multiple-electrode cochlear implants consists of current waveforms with complex temporal and spatial patterns. The majority of existing processors output sequential biphasic current pulses. This paper describes a practical method of calculating loudness estimates for such stimuli, in addition to the relative loudness contributions from different cochlear regions. The method can be used either to manipulate the loudness or levels in existing processing strategies, or to control intensity cues in novel sound processing strategies. The method is based on a loudness model described by McKay et al. [J. Acoust. Soc. Am. 110, 1514-1524 (2001)] with the addition of the simplifying approximation that current pulses falling within a temporal integration window of several milliseconds' duration contribute independently to the overall loudness of the stimulus. Three experiments were carried out with six implantees who use the CI24M device manufactured by Cochlear Ltd. The first experiment validated the simplifying assumption, and allowed loudness growth functions to be calculated for use in the loudness prediction method. The following experiments confirmed the accuracy of the method using multiple-electrode stimuli with various patterns of electrode locations and current levels.

  8. Information content and sensitivity of the 3β + 2α lidar measurement system for aerosol microphysical retrievals

    NASA Astrophysics Data System (ADS)

    Burton, Sharon P.; Chemyakin, Eduard; Liu, Xu; Knobelspiesse, Kirk; Stamnes, Snorre; Sawamura, Patricia; Moore, Richard H.; Hostetler, Chris A.; Ferrare, Richard A.

    2016-11-01

    There is considerable interest in retrieving profiles of aerosol effective radius, total number concentration, and complex refractive index from lidar measurements of extinction and backscatter at several wavelengths. The combination of three backscatter channels plus two extinction channels (3β + 2α) is particularly important since it is believed to be the minimum configuration necessary for the retrieval of aerosol microphysical properties and because the technological readiness of lidar systems permits this configuration on both an airborne and future spaceborne instrument. The second-generation NASA Langley airborne High Spectral Resolution Lidar (HSRL-2) has been making 3β + 2α measurements since 2012. The planned NASA Aerosol/Clouds/Ecosystems (ACE) satellite mission also recommends the 3β + 2α combination. Here we develop a deeper understanding of the information content and sensitivities of the 3β + 2α system in terms of aerosol microphysical parameters of interest. We use a retrieval-free methodology to determine the basic sensitivities of the measurements independent of retrieval assumptions and constraints. We calculate information content and uncertainty metrics using tools borrowed from the optimal estimation methodology based on Bayes' theorem, using a simplified forward model look-up table, with no explicit inversion. The forward model is simplified to represent spherical particles, monomodal log-normal size distributions, and wavelength-independent refractive indices. Since we only use the forward model with no retrieval, the given simplified aerosol scenario is applicable as a best case for all existing retrievals in the absence of additional constraints. Retrieval-dependent errors due to mismatch between retrieval assumptions and true atmospheric aerosols are not included in this sensitivity study, and neither are retrieval errors that may be introduced in the inversion process. The choice of a simplified model adds clarity to the understanding of the uncertainties in such retrievals, since it allows for separately assessing the sensitivities and uncertainties of the measurements alone that cannot be corrected by any potential or theoretical improvements to retrieval methodology but must instead be addressed by adding information content. The sensitivity metrics allow for identifying (1) information content of the measurements vs. a priori information; (2) error bars on the retrieved parameters; and (3) potential sources of cross-talk or "compensating" errors wherein different retrieval parameters are not independently captured by the measurements. The results suggest that the 3β + 2α measurement system is underdetermined with respect to the full suite of microphysical parameters considered in this study and that additional information is required, in the form of additional coincident measurements (e.g., sun-photometer or polarimeter) or a priori retrieval constraints. A specific recommendation is given for addressing cross-talk between effective radius and total number concentration.
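
    A minimal numpy sketch of the Rodgers-style optimal-estimation diagnostics the authors borrow is given below; the Jacobian here is a random stand-in for the look-up-table forward model, and the dimensions and covariances are illustrative only.

    ```python
    import numpy as np

    # 5 measurements (3 backscatter + 2 extinction), 4 state parameters
    # (e.g. effective radius, number concentration, real/imag refractive index).
    rng = np.random.default_rng(0)
    K = rng.normal(size=(5, 4))        # linearized forward model (stand-in)
    S_e = np.diag([0.05**2] * 5)       # measurement-error covariance
    S_a = np.diag([1.0**2] * 4)        # a priori covariance

    S_e_inv = np.linalg.inv(S_e)
    S_hat = np.linalg.inv(K.T @ S_e_inv @ K + np.linalg.inv(S_a))  # posterior cov
    A = S_hat @ K.T @ S_e_inv @ K      # averaging kernel
    print(f"DOFs for signal = {np.trace(A):.2f} of {K.shape[1]} parameters")
    print("posterior 1-sigma uncertainties:", np.sqrt(np.diag(S_hat)))
    # Off-diagonal structure in A flags cross-talk between parameters
    # (e.g. effective radius vs. number concentration) of the kind discussed.
    ```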

  9. Critical assessment of inverse gas chromatography as means of assessing surface free energy and acid-base interaction of pharmaceutical powders.

    PubMed

    Telko, Martin J; Hickey, Anthony J

    2007-10-01

    Inverse gas chromatography (IGC) has been employed as a research tool for decades. Despite this record of use and proven utility in a variety of applications, the technique is not routinely used in pharmaceutical research. In other fields the technique has flourished. IGC is experimentally relatively straightforward, but analysis requires that certain theoretical assumptions are satisfied. The assumptions made to acquire some of the recently reported data are somewhat modified compared to initial reports. Most publications in the pharmaceutical literature have made use of a simplified equation for the determination of acid/base surface properties, resulting in parameter values that are inconsistent with prior methods. In comparing the surface properties of different batches of alpha-lactose monohydrate, new data has been generated and compared with literature to allow critical analysis of the theoretical assumptions and their importance to the interpretation of the data. The commonly used (simplified) approach was compared with the more rigorous approach originally outlined in the surface chemistry literature.
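
    For context, the "simplified" versus "rigorous" treatments differ in how retention data are reduced; the dispersive component is commonly obtained from the Schultz relation for a series of n-alkane probes (a standard form quoted for orientation, not taken from this paper):

    $$RT \ln V_n = 2 N_A a \sqrt{\gamma_s^{d}\,\gamma_l^{d}} + C,$$

    where $V_n$ is the net retention volume, $a$ the probe's molecular cross-sectional area, $\gamma_s^{d}$ and $\gamma_l^{d}$ the dispersive surface energies of the solid and the probe liquid, $N_A$ Avogadro's number, and $C$ a constant; $\gamma_s^{d}$ then follows from the slope of $RT \ln V_n$ plotted against $a\sqrt{\gamma_l^{d}}$. The acid/base analysis builds on deviations of polar probes from this alkane line, and it is there that the simplified and rigorous treatments diverge.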

  10. Isotope and fast ions turbulence suppression effects: Consequences for high-β ITER plasmas

    NASA Astrophysics Data System (ADS)

    Garcia, J.; Görler, T.; Jenko, F.

    2018-05-01

    The impact of isotope effects and fast ions on microturbulence is analyzed by means of non-linear gyrokinetic simulations for an ITER hybrid scenario at high beta, obtained from previous integrated modelling simulations with simplified assumptions. Simulations show that ITER might work very close to threshold, and in these conditions, significant turbulence suppression is found from DD to DT plasmas. Electromagnetic effects are shown to play an important role in the onset of this isotope effect. Additionally, even the external E×B flow shear, which is expected to be low in ITER, has a stronger impact on DT than on DD. The fast ions generated by fusion reactions can reduce turbulence even further, although the impact in ITER seems weaker than in present-day tokamaks.

  11. International Conference on the Methods of Aerophysical Research 98 "ICMAR 98". Proceedings, Part 1

    DTIC Science & Technology

    1998-01-01

    pumping air through the device and air drying due to vapour condensation on cooled surfaces. Fig. 1 In this report, approximate estimates are presented...picture is used for the flow field between disks and for water vapor condensation on cooled moving surfaces. Shown in Fig. 1 is a simplified flow...frequency of disk rotation), thus breaking away from channel walls. Regarding the condensation process, a number of the usual simplifying assumptions are made

  12. Rethinking Use of the OML Model in Electric Sail Development

    NASA Technical Reports Server (NTRS)

    Stone, Nobie H.

    2016-01-01

    In 1924, Irving Langmuir and H. M. Mott-Smith published a theoretical model for the complex plasma sheath phenomenon in which they identified some very special cases which greatly simplified the sheath and allowed a closed-form solution to the problem. The most widely used application is for an electrostatic, or "Langmuir," probe in a laboratory plasma. Although the Langmuir probe is physically simple (a biased wire), the theory describing its functional behavior and its current-voltage characteristic is extremely complex and, accordingly, a number of assumptions and approximations are used in the LMS model. These simplifications, correspondingly, place limits on the model's range of application. Adapting the LMS model to real-life conditions is the subject of numerous papers and dissertations. The Orbital-Motion-Limited (OML) model that is widely used today is one of these adaptations and is a convenient means of calculating sheath effects. Since the Langmuir probe is a simple biased wire immersed in plasma, it is particularly tempting to use the OML equation in calculating the characteristics of the long, highly biased wires of an Electric Sail in the solar wind plasma. However, in order to arrive at the OML equation, a number of additional simplifying assumptions and approximations (beyond those made by Langmuir and Mott-Smith) are necessary. The OML equation is a good approximation when all conditions are met, but it would appear that the Electric Sail problem lies outside of the limits of applicability.
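
    For orientation, the OML current to an attracting cylindrical probe is commonly quoted as (a standard textbook form, not taken from the paper):

    $$I = I_0\,\frac{2}{\sqrt{\pi}}\sqrt{1 + \frac{eV}{kT_e}}, \qquad I_0 = n_e\,e\,A_p\sqrt{\frac{kT_e}{2\pi m_e}},$$

    where $V$ is the bias, $T_e$ and $n_e$ the ambient electron temperature and density, and $A_p$ the probe area; the square-root scaling holds only while the collisionless, thin-wire, Maxwellian assumptions behind it hold, which is the paper's point about Electric Sail wires in the solar wind.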

  13. Direct Position Determination of Unknown Signals in the Presence of Multipath Propagation

    PubMed Central

    Yu, Hongyi

    2018-01-01

    A novel geolocation architecture, termed “Multiple Transponders and Multiple Receivers for Multiple Emitters Positioning System (MTRE)”, is proposed in this paper. Existing Direct Position Determination (DPD) methods take advantage of a rather simple channel assumption (line-of-sight channels with complex path attenuations) and a simplified MUltiple SIgnal Classification (MUSIC) algorithm cost function to avoid high-dimensional searching. We point out that the simplified assumption and cost function reduce the positioning accuracy because of the singularity of the array manifold in a multi-path environment. We present a DPD model for unknown signals in the presence of Multi-path Propagation (MP-DPD) in this paper. MP-DPD adds non-negative real path attenuation constraints to avoid the errors caused by the singularity of the array manifold. The Multi-path Propagation MUSIC (MP-MUSIC) method and the Active Set Algorithm (ASA) are designed to reduce the dimension of the search. A Multi-path Propagation Maximum Likelihood (MP-ML) method is proposed in addition, to overcome the limitation of MP-MUSIC in time-sensitive applications. An iterative algorithm and an approach to initial value setting are given to make the MP-ML time consumption acceptable. Numerical results validate the performance improvement of MP-MUSIC and MP-ML. A closed form of the Cramér–Rao Lower Bound (CRLB) is derived as a benchmark to evaluate the performance of MP-MUSIC and MP-ML. PMID:29562601

  14. Direct Position Determination of Unknown Signals in the Presence of Multipath Propagation.

    PubMed

    Du, Jianping; Wang, Ding; Yu, Wanting; Yu, Hongyi

    2018-03-17

    A novel geolocation architecture, termed "Multiple Transponders and Multiple Receivers for Multiple Emitters Positioning System (MTRE)", is proposed in this paper. Existing Direct Position Determination (DPD) methods take advantage of a rather simple channel assumption (line-of-sight channels with complex path attenuations) and a simplified MUltiple SIgnal Classification (MUSIC) algorithm cost function to avoid high-dimensional searching. We point out that the simplified assumption and cost function reduce the positioning accuracy because of the singularity of the array manifold in a multi-path environment. We present a DPD model for unknown signals in the presence of Multi-path Propagation (MP-DPD) in this paper. MP-DPD adds non-negative real path attenuation constraints to avoid the errors caused by the singularity of the array manifold. The Multi-path Propagation MUSIC (MP-MUSIC) method and the Active Set Algorithm (ASA) are designed to reduce the dimension of the search. A Multi-path Propagation Maximum Likelihood (MP-ML) method is proposed in addition, to overcome the limitation of MP-MUSIC in time-sensitive applications. An iterative algorithm and an approach to initial value setting are given to make the MP-ML time consumption acceptable. Numerical results validate the performance improvement of MP-MUSIC and MP-ML. A closed form of the Cramér-Rao Lower Bound (CRLB) is derived as a benchmark to evaluate the performance of MP-MUSIC and MP-ML.

  15. Quantum-like dynamics applied to cognition: a consideration of available options

    NASA Astrophysics Data System (ADS)

    Broekaert, Jan; Basieva, Irina; Blasiak, Pawel; Pothos, Emmanuel M.

    2017-10-01

    Quantum probability theory (QPT) has provided a novel, rich mathematical framework for cognitive modelling, especially for situations which appear paradoxical from classical perspectives. This work concerns the dynamical aspects of QPT, as relevant to cognitive modelling. We aspire to shed light on how the mind's driving potentials (encoded in Hamiltonian and Lindbladian operators) impact the evolution of a mental state. Some existing QPT cognitive models do employ dynamical aspects when considering how a mental state changes with time, but it is often the case that several simplifying assumptions are introduced. What kind of modelling flexibility does QPT dynamics offer without any simplifying assumptions and is it likely that such flexibility will be relevant in cognitive modelling? We consider a series of nested QPT dynamical models, constructed with a view to accommodate results from a simple, hypothetical experimental paradigm on decision-making. We consider Hamiltonians more complex than the ones which have traditionally been employed with a view to explore the putative explanatory value of this additional complexity. We then proceed to compare simple models with extensions regarding both the initial state (e.g. a mixed state with a specific orthogonal decomposition; a general mixed state) and the dynamics (by introducing Hamiltonians which destroy the separability of the initial structure and by considering an open-system extension). We illustrate the relations between these models mathematically and numerically. This article is part of the themed issue 'Second quantum revolution: foundational questions'.
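
    The "driving potentials" enter through the standard Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) master equation, whose generic form is (background notation, not specific to this paper):

    $$\dot{\rho} = -\frac{i}{\hbar}[H,\rho] + \sum_k \left( L_k \rho L_k^{\dagger} - \tfrac{1}{2}\{ L_k^{\dagger}L_k,\, \rho \} \right),$$

    where $H$ generates the coherent (Hamiltonian) dynamics and the operators $L_k$ encode open-system effects; the nested models compared in the article correspond to progressively weaker restrictions on $H$, the $L_k$, and the initial state $\rho$.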

  16. Multi-Objective Hybrid Optimal Control for Multiple-Flyby Interplanetary Mission Design using Chemical Propulsion

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Vavrina, Matthew A.

    2015-01-01

    Preliminary design of high-thrust interplanetary missions is a highly complex process. The mission designer must choose discrete parameters such as the number of flybys and the bodies at which those flybys are performed. For some missions, such as surveys of small bodies, the mission designer also contributes to target selection. In addition, real-valued decision variables, such as launch epoch, flight times, maneuver and flyby epochs, and flyby altitudes must be chosen. There are often many thousands of possible trajectories to be evaluated. The customer who commissions a trajectory design is not usually interested in a point solution, but rather the exploration of the trade space of trajectories between several different objective functions. This can be a very expensive process in terms of the number of human analyst hours required. An automated approach is therefore very desirable. This work presents such an approach by posing the impulsive mission design problem as a multi-objective hybrid optimal control problem. The method is demonstrated on several real-world problems. Two assumptions are frequently made to simplify the modeling of an interplanetary high-thrust trajectory during the preliminary design phase. The first assumption is that because the available thrust is high, any maneuvers performed by the spacecraft can be modeled as discrete changes in velocity. This assumption removes the need to integrate the equations of motion governing the motion of a spacecraft under thrust and allows the change in velocity to be modeled as an impulse and the expenditure of propellant to be modeled using the time-independent solution to Tsiolkovsky's rocket equation [1]. The second assumption is that the spacecraft moves primarily under the influence of the central body, i.e. the sun, and all other perturbing forces may be neglected in preliminary design. The path of the spacecraft may then be modeled as a series of conic sections. When a spacecraft performs a close approach to a planet, the central body switches from the sun to that planet and the trajectory is modeled as a hyperbola with respect to the planet. This is known as the method of patched conics. The impulsive and patched-conic assumptions significantly simplify the preliminary design problem.
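
    The first assumption reduces propellant bookkeeping to the cited rocket equation, which in its usual form reads

    $$\Delta v = I_{sp}\, g_0 \ln\frac{m_0}{m_f} \quad\Longleftrightarrow\quad \frac{m_f}{m_0} = e^{-\Delta v/(I_{sp} g_0)},$$

    so each impulsive maneuver of magnitude $\Delta v$ simply multiplies the spacecraft mass by a fixed fraction, with no integration of thrust arcs required.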

  17. Novel Discretization Schemes for the Numerical Simulation of Membrane Dynamics

    DTIC Science & Technology

    2012-09-13

    Experimental data therefore plays a key role in validation. A wide variety of methods for building a simulation that meets the listed requirements are...Despite the intrinsic nonlinearity of true membranes, simplifying assumptions may be appropriate for some applications. Based on these possible assumptions...particles determines the kinetic energy of the system. Mass lumping at the particles is intrinsic (the consistent mass treatment of FEM is not an

  18. Longitudinal stability in relation to the use of an automatic pilot

    NASA Technical Reports Server (NTRS)

    Klemin, Alexander; Pepper, Perry A; Wittner, Howard A

    1938-01-01

    The effect of restraint in pitching introduced by an automatic pilot upon the longitudinal stability of an airplane has been studied. Customary simplifying assumptions have been made in setting down the equations of motion, and the results of computations based on the simplified equations are presented to show the effect of an automatic pilot installed in an airplane of known dimensions and characteristics. The equations developed have been applied by making calculations for a Clark biplane and a Fairchild 22 monoplane.

  19. Ferrofluids: Modeling, numerical analysis, and scientific computation

    NASA Astrophysics Data System (ADS)

    Tomas, Ignacio

    This dissertation presents some developments in the Numerical Analysis of Partial Differential Equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the Micropolar model proposed by R.E. Rosensweig. The Micropolar Navier-Stokes Equations (MNSE) are a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much bigger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point of this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable, and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving on to the much more complex Rosensweig model, we provide a definition (approximation) for the effective magnetizing field h, and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we set up the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ. We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for the Rosensweig model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from the Rosensweig model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model. For a simplified version of this model and the corresponding numerical scheme we prove, in addition to stability, convergence and existence of solutions as a by-product. Throughout this dissertation, we will provide numerical experiments, not only to validate mathematical results, but also to help the reader gain a qualitative understanding of the PDE models analyzed in this dissertation (the MNSE, the Rosensweig model, and the two-phase model). In addition, we also provide computational experiments to illustrate the potential of these simple models and their ability to capture basic phenomenological features of ferrofluids, such as the Rosensweig instability for the case of the two-phase model. In this respect, we highlight the incisive numerical experiments with the two-phase model illustrating the critical role of the demagnetizing field to reproduce physically realistic behavior of ferrofluids.

  20. Lagrangian methods for blood damage estimation in cardiovascular devices--How numerical implementation affects the results.

    PubMed

    Marom, Gil; Bluestein, Danny

    2016-01-01

    This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, the stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be taken in the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and sensitivity analyses similar to the ones presented here should be employed.
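
    As an illustration of the bookkeeping being compared, the sketch below accumulates scalar stress along a single Lagrangian pathline; the linear sum and the exponent are one common choice among the damage-model variants the paper evaluates, and the numbers are synthetic.

    ```python
    import numpy as np

    def stress_accumulation(tau, dt, a=1.0):
        """Accumulate scalar stress along one pathline.

        tau : scalar stress at each trajectory step (Pa)
        dt  : duration of each step (s)
        a   : damage exponent; a = 1 gives the plain sum of tau*dt, while
              power-law blood-damage models use a != 1.
        """
        return float(np.sum((tau ** a) * dt))

    # Synthetic stress history: one particle, 100 steps of 1 ms each.
    rng = np.random.default_rng(1)
    tau = rng.uniform(1.0, 50.0, size=100)
    sa = stress_accumulation(tau, np.full(100, 1e-3))
    print(f"accumulated stress = {sa:.3f} Pa*s")
    # Repeated-passage variants re-run this sum over recirculated trajectories;
    # time-averaged variants divide by the total exposure time.
    ```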

  1. Measuring Spatial Infiltration in Stormwater Control Measures: Results and Implications

    EPA Science Inventory

    This presentation will provide background information on research conducted by EPA-ORD on the use of soil moisture sensors in bioretention/bioinfiltration technologies to evaluate infiltration mechanisms and compares monitoring results to simplified modeling assumptions. A serie...

  2. Quantifying and Disaggregating Consumer Purchasing Behavior for Energy Systems Modeling

    EPA Science Inventory

    Consumer behaviors such as energy conservation, adoption of more efficient technologies, and fuel switching represent significant potential for greenhouse gas mitigation. Current efforts to model future energy outcomes have tended to use simplified economic assumptions ...

  3. The time-dependent response of 3- and 5-layer sandwich beams

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.; Oleksuk, L. S. S.; Bowles, D. E.

    1992-01-01

    Simple sandwich beam models have been developed to study the effect of the time-dependent constitutive properties of fiber-reinforced polymer matrix composites, considered for use in orbiting precision segmented reflectors, on the overall deformations. The 3- and 5-layer beam models include layers representing the face sheets, the core, and the adhesive. The static elastic deformation response of the sandwich beam models to a midspan point load is studied using the principle of stationary potential energy. In addition to quantitative conclusions, several assumptions are discussed which simplify the analysis for the case of more complicated material models. It is shown that the simple three-layer model is sufficient in many situations.

  4. The limitations of simple gene set enrichment analysis assuming gene independence.

    PubMed

    Tamayo, Pablo; Steinhardt, George; Liberzon, Arthur; Mesirov, Jill P

    2016-02-01

    Since its first publication in 2003, the Gene Set Enrichment Analysis method, based on the Kolmogorov-Smirnov statistic, has been heavily used, modified, and also questioned. Recently a simplified approach using a one-sample t-test score to assess enrichment and ignoring gene-gene correlations was proposed by Irizarry et al. 2009 as a serious contender. The argument criticizes Gene Set Enrichment Analysis's nonparametric nature and its use of an empirical null distribution as unnecessary and hard to compute. We refute these claims by careful consideration of the assumptions of the simplified method and its results, including a comparison with Gene Set Enrichment Analysis on a large benchmark set of 50 datasets. Our results provide strong empirical evidence that gene-gene correlations cannot be ignored due to the significant variance inflation they produce on the enrichment scores and should be taken into account when estimating gene set enrichment significance. In addition, we discuss the challenges that the complex correlation structure and multi-modality of gene sets pose more generally for gene set enrichment methods.
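
    The variance inflation at issue is the textbook effect of correlated summands: for $n$ gene-level statistics with common variance $\sigma^2$ and average pairwise correlation $\bar{\rho}$,

    $$\operatorname{Var}(\bar{x}) = \frac{\sigma^{2}}{n}\left(1 + (n-1)\,\bar{\rho}\right),$$

    so even a modest $\bar{\rho}$ inflates the null variance of a gene-set mean far beyond the $\sigma^{2}/n$ assumed when gene independence is taken for granted.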

  5. Relating color working memory and color perception.

    PubMed

    Allred, Sarah R; Flombaum, Jonathan I

    2014-11-01

    Color is the most frequently studied feature in visual working memory (VWM). Oddly, much of this work de-emphasizes perception, instead making simplifying assumptions about the inputs served to memory. We question these assumptions in light of perception research, and we identify important points of contact between perception and working memory in the case of color. Better characterization of its perceptual inputs will be crucial for elucidating the structure and function of VWM. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. INTERNAL DOSE AND RESPONSE IN REAL-TIME.

    EPA Science Inventory

    Abstract: Rapid temporal fluctuations in exposure may occur in a number of situations such as accidents or other unexpected acute releases of airborne substances. Often risk assessments overlook temporal exposure patterns under simplifying assumptions such as the use of time-wei...

  7. Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol. 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.

    1983-05-01

    As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM) is a micro-computer based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM including methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input) in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The User's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.
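
    IRDAM's internal dose models are not reproduced here; as a generic illustration of the kind of simplified dispersion calculation that rapid-assessment tools reduce to, a ground-level, centerline Gaussian-plume estimate with Briggs-type coefficients might look like the following (all parameter values are assumed for illustration, not IRDAM's):

    ```python
    # Hedged illustration only: a generic ground-level centerline Gaussian-plume
    # concentration estimate. NOT the actual IRDAM formulation.
    import math

    def centerline_concentration(Q, u, x, H):
        """Q: release rate (Bq/s), u: wind speed (m/s), x: downwind distance (m),
        H: effective release height (m). Returns concentration in Bq/m^3."""
        # Briggs open-country dispersion coefficients for neutral stability
        # (assumed illustrative choice of stability class).
        sigma_y = 0.08 * x / math.sqrt(1 + 0.0001 * x)
        sigma_z = 0.06 * x / math.sqrt(1 + 0.0015 * x)
        return (Q / (math.pi * sigma_y * sigma_z * u)) * math.exp(-H**2 / (2 * sigma_z**2))

    for x in (500, 2000, 20000):  # IRDAM's fixed downwind distances span this range
        print(x, centerline_concentration(Q=1e9, u=3.0, x=x, H=30.0))
    ```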

  8. Impact buckling of thin bars in the elastic range for any end condition

    NASA Technical Reports Server (NTRS)

    Taub, Josef

    1934-01-01

    Following a qualitative discussion of the complicated process involved when a short-period longitudinal force is applied to an originally not-quite-straight bar, the actual process is replaced by an idealized process for the purpose of analytical treatment. The simplifications are: the assumption of an infinitely high rate of propagation of the elastic longitudinal waves in the bar, limitation to slender bars, disregard of material damping and of rotatory inertia, the assumption of consistently small elastic deformations, the assumption of cross-sectional dimensions constant along the bar axis, the assumption of a shock-load constant in time, and the assumption of eccentricities in one plane. Then follow the mathematical principles for resolving the differential equation of the simplified problem, particularly the developability of arbitrary functions with steady first and second and intermittently steady third and fourth derivatives into one convergent series, according to the natural functions of the homogeneous differential equation.

  9. Lagrangian methods for blood damage estimation in cardiovascular devices - How numerical implementation affects the results

    PubMed Central

    Marom, Gil; Bluestein, Danny

    2016-01-01

    Summary This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, the stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should therefore be given to the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and sensitivity analyses, similar to the ones presented here, should be employed. PMID:26679833
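
    A minimal sketch of the Lagrangian accumulation being compared follows, assuming the simple linear form SA = Σ τᵢΔtᵢ that the single- and repeated-passage options operate on (illustrative only, not the authors' code):

    ```python
    # Sketch of Lagrangian stress accumulation along a particle pathline; the
    # repeated-passage option simply re-enters the particle for several passes.
    import numpy as np

    def stress_accumulation(tau, dt, passages=1):
        """tau: scalar stress history along one pathline (Pa),
        dt: time-step sizes (s), passages: number of repeated passes."""
        single_pass = np.sum(tau * dt)
        return passages * single_pass

    rng = np.random.default_rng(1)
    tau = rng.lognormal(mean=1.0, sigma=0.5, size=200)  # synthetic stress history
    dt = np.full(200, 1e-4)

    print("single passage:", stress_accumulation(tau, dt))
    print("five passages: ", stress_accumulation(tau, dt, passages=5))
    ```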

  10. Simplifying the complexity of resistance heterogeneity in metastasis

    PubMed Central

    Lavi, Orit; Greene, James M.; Levy, Doron; Gottesman, Michael M.

    2014-01-01

    The main goal of treatment regimens for metastasis is to control growth rates, not eradicate all cancer cells. Mathematical models offer methodologies that incorporate high-throughput data with dynamic effects on net growth. The ideal approach would simplify, but not over-simplify, a complex problem into meaningful and manageable estimators that predict a patient’s response to specific treatments. Here, we explore three fundamental approaches with different assumptions concerning resistance mechanisms, in which the cells are categorized into either discrete compartments or described by a continuous range of resistance levels. We argue in favor of modeling resistance as a continuum and demonstrate how integrating cellular growth rates, density-dependent versus exponential growth, and intratumoral heterogeneity improves predictions concerning the resistance heterogeneity of metastases. PMID:24491979
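
    A sketch of the continuum view argued for is given below; the functional forms (logistic growth plus a kill rate that falls with resistance level) are assumed for illustration and are not the authors' equations:

    ```python
    # Cells indexed by a continuous resistance level x in [0, 1] grow
    # density-dependently and are killed at a rate decreasing with x.
    import numpy as np

    x = np.linspace(0, 1, 101)                # resistance levels
    dx = x[1] - x[0]
    n = np.exp(-((x - 0.2) ** 2) / 0.02)      # initial density, mostly sensitive cells
    r, K, c = 0.5, 1e3, 2.0                   # growth rate, capacity, max kill rate
    dt, steps = 0.01, 5000

    for _ in range(steps):
        N = n.sum() * dx                      # total population
        growth = r * n * (1 - N / K)          # density-dependent (logistic) growth
        kill = c * (1 - x) * n                # treatment kills sensitive cells faster
        n = np.maximum(n + dt * (growth - kill), 0)

    N = n.sum() * dx
    print("surviving population:", N)
    print("mean resistance:", (x * n).sum() * dx / N)
    ```

    Even this toy version shows the qualitative effect the authors emphasize: treatment reshapes the whole resistance distribution rather than toggling discrete compartments.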

  11. Exact Solution of the Gyration Radius of an Individual's Trajectory for a Simplified Human Regular Mobility Model

    NASA Astrophysics Data System (ADS)

    Yan, Xiao-Yong; Han, Xiao-Pu; Zhou, Tao; Wang, Bing-Hong

    2011-12-01

    We propose a simplified human regular mobility model to simulate an individual's daily travel with three sequential activities: commuting to the workplace, going out for leisure activities and returning home. With the assumptions that the individual travels at a constant speed and spends at least a minimum amount of time at home and at work, we prove that the daily moving area of an individual is an ellipse, and finally obtain an exact solution of the gyration radius. The analytical solution captures the empirical observation well.
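
    For context, the exact solution is for the standard radius-of-gyration functional over one daily cycle (notation generic, not necessarily the authors'):

    \[
    r_g = \sqrt{\frac{1}{T}\int_0^T \left|\mathbf{r}(t) - \mathbf{r}_{cm}\right|^2\, dt},
    \qquad
    \mathbf{r}_{cm} = \frac{1}{T}\int_0^T \mathbf{r}(t)\, dt,
    \]

    where \(\mathbf{r}(t)\) is the individual's position and \(T\) the period of the daily cycle.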

  12. An approach to quantifying the efficiency of a Bayesian filter

    USDA-ARS?s Scientific Manuscript database

    Data assimilation is defined as the Bayesian conditioning of uncertain model simulations on observations for the purpose of reducing uncertainty about model states. Practical data assimilation applications require that simplifying assumptions be made about the prior and posterior state distributions...

  13. A Methodology for Developing Army Acquisition Strategies for an Uncertain Future

    DTIC Science & Technology

    2007-01-01

    manuscript for publication. Acronyms: ABP, Assumption-Based Planning; ACEIT, Automated Cost Estimating Integrated Tool; ACR, Armored Cavalry Regiment; ACTD... decisions. For example, they employ the Automated Cost Estimating Integrated Tools (ACEIT) to simplify life cycle cost estimates; other tools are

  14. MODELING NITROGEN-CARBON CYCLING AND OXYGEN CONSUMPTION IN BOTTOM SEDIMENTS

    EPA Science Inventory

    A model framework is presented for simulating nitrogen and carbon cycling at the sediment–water interface, and predicting oxygen consumption by oxidation reactions inside the sediments. Based on conservation of mass and invoking simplifying assumptions, a coupled system of diffus...

  15. An alternative Biot's displacement formulation for porous materials.

    PubMed

    Dazel, Olivier; Brouard, Bruno; Depollier, Claude; Griffiths, Stéphane

    2007-06-01

    This paper proposes an alternative displacement formulation of Biot's linear model for poroelastic materials. Its advantage is a simplification of the formalism without making any additional assumptions. The main difference between the method proposed in this paper and the original one is the choice of the generalized coordinates. In the present approach, the generalized coordinates are chosen in order to simplify the expression of the strain energy, which is expressed as the sum of two decoupled terms. Hence, new equations of motion are obtained whose elastic forces are decoupled. The simplification of the formalism is extended to Biot and Willis thought experiments, and simpler expressions of the parameters of the three Biot waves are also provided. A rigorous derivation of equivalent and limp models is then proposed. It is finally shown that, for the particular case of sound-absorbing materials, additional simplifications of the formalism can be obtained.

  16. DEVELOPMENT OF A MODEL FOR REAL TIME CO CONCENTRATIONS NEAR ROADWAYS

    EPA Science Inventory

    Although emission standards for mobile sources continue to be tightened, tailpipe emissions in urban areas continue to be a major source of human exposure to air toxics. Current human exposure models using simplified assumptions based on fixed air monitoring stations and region...

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    König, Johannes; Merle, Alexander; Totzauer, Maximilian

    We investigate the early Universe production of sterile neutrino Dark Matter by the decays of singlet scalars. All previous studies applied simplifying assumptions and/or studied the process only on the level of number densities, which makes it impossible to give statements about cosmic structure formation. We overcome these issues by dropping all simplifying assumptions (except for one we showed earlier to work perfectly) and by computing the full course of Dark Matter production on the level of non-thermal momentum distribution functions. We are thus in the position to study a broad range of aspects of the resulting settings and apply a broad set of bounds in a reliable manner. We have a particular focus on how to incorporate bounds from structure formation on the level of the linear power spectrum, since the simplistic estimate using the free-streaming horizon clearly fails for highly non-thermal distributions. Our work comprises the most detailed and comprehensive study of sterile neutrino Dark Matter production by scalar decays presented so far.

  18. Multi-phase CFD modeling of solid sorbent carbon capture system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, E. M.; DeCroix, D.; Breault, R.

    2013-07-01

    Computational fluid dynamics (CFD) simulations are used to investigate a low temperature post-combustion carbon capture reactor. The CFD models are based on a small scale solid sorbent carbon capture reactor design from ADA-ES and Southern Company. The reactor is a fluidized bed design based on a silica-supported amine sorbent. CFD models using both Eulerian–Eulerian and Eulerian–Lagrangian multi-phase modeling methods are developed to investigate the hydrodynamics and adsorption of carbon dioxide in the reactor. Models developed in both FLUENT® and BARRACUDA are presented to explore the strengths and weaknesses of state of the art CFD codes for modeling multi-phase carbon capture reactors. The results of the simulations show that the FLUENT® Eulerian–Lagrangian simulations (DDPM) are unstable for the given reactor design; while the BARRACUDA Eulerian–Lagrangian model is able to simulate the system given appropriate simplifying assumptions. FLUENT® Eulerian–Eulerian simulations also provide a stable solution for the carbon capture reactor given the appropriate simplifying assumptions.

  19. Multi-Phase CFD Modeling of Solid Sorbent Carbon Capture System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, Emily M.; DeCroix, David; Breault, Ronald W.

    2013-07-30

    Computational fluid dynamics (CFD) simulations are used to investigate a low temperature post-combustion carbon capture reactor. The CFD models are based on a small scale solid sorbent carbon capture reactor design from ADA-ES and Southern Company. The reactor is a fluidized bed design based on a silica-supported amine sorbent. CFD models using both Eulerian-Eulerian and Eulerian-Lagrangian multi-phase modeling methods are developed to investigate the hydrodynamics and adsorption of carbon dioxide in the reactor. Models developed in both FLUENT® and BARRACUDA are presented to explore the strengths and weaknesses of state of the art CFD codes for modeling multi-phase carbon capture reactors. The results of the simulations show that the FLUENT® Eulerian-Lagrangian simulations (DDPM) are unstable for the given reactor design; while the BARRACUDA Eulerian-Lagrangian model is able to simulate the system given appropriate simplifying assumptions. FLUENT® Eulerian-Eulerian simulations also provide a stable solution for the carbon capture reactor given the appropriate simplifying assumptions.

  20. Risk-Screening Environmental Indicators (RSEI)

    EPA Pesticide Factsheets

    EPA's Risk-Screening Environmental Indicators (RSEI) is a geographically-based model that helps policy makers and communities explore data on releases of toxic substances from industrial facilities reporting to EPA's Toxics Release Inventory (TRI). By analyzing TRI information together with simplified risk factors, such as the amount of chemical released, its fate and transport through the environment, each chemical's relative toxicity, and the number of people potentially exposed, RSEI calculates a numeric score, which is designed to only be compared to other scores calculated by RSEI. Because it is designed as a screening-level model, RSEI uses worst-case assumptions about toxicity and potential exposure where data are lacking, and also uses simplifying assumptions to reduce the complexity of the calculations. A more refined assessment is required before any conclusions about health impacts can be drawn. RSEI is used to establish priorities for further investigation and to look at changes in potential impacts over time. Users can save resources by conducting preliminary analyses with RSEI.
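
    Schematically, a screening score of this multiplicative kind can be illustrated as follows; the field names and values are hypothetical and the actual RSEI computation is considerably more involved:

    ```python
    # Schematic only: an RSEI-style screening score multiplies a modeled
    # surrogate dose, a chemical toxicity weight, and the exposed population.
    releases = [
        {"chemical": "A", "toxicity_weight": 500, "surrogate_dose": 2.3e-6, "population": 120_000},
        {"chemical": "B", "toxicity_weight": 60,  "surrogate_dose": 8.1e-5, "population": 4_000},
    ]

    def screening_score(rec):
        return rec["toxicity_weight"] * rec["surrogate_dose"] * rec["population"]

    # Scores are only meaningful relative to other scores computed the same way.
    for rec in sorted(releases, key=screening_score, reverse=True):
        print(rec["chemical"], f"{screening_score(rec):.3g}")
    ```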

  1. Dynamic behaviour of thin composite plates for different boundary conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprintu, Iuliana, E-mail: sprintui@yahoo.com; Rotaru, Constantin, E-mail: rotaruconstantin@yahoo.com

    2014-12-10

    In the context of composite materials technology, which is increasingly present in industry, this article covers a topic of great interest and theoretical and practical importance. Given the complex design of fiber-reinforced materials and their heterogeneous nature, mathematical modeling of the mechanical response under different external stresses is very difficult to address in the absence of simplifying assumptions. In most structural applications, composite structures can be idealized as beams, plates, or shells. The analysis is reduced from a three-dimensional elasticity problem to a one- or two-dimensional problem, based on certain simplifying assumptions that can be made because the structure is thin. This paper aims to validate a mathematical model illustrating how thin rectangular orthotropic plates respond to the actual load. Thus, from the theory of thin plates, new analytical solutions are proposed corresponding to orthotropic rectangular plates having different boundary conditions. The proposed analytical solutions are considered both for solving the governing equations of orthotropic rectangular plates and for modal analysis.
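
    For context, analytical solutions of this kind satisfy the classical governing equation of a thin orthotropic plate under a transverse load \(q(x,y)\) (standard thin-plate theory; notation generic):

    \[
    D_{11}\frac{\partial^4 w}{\partial x^4}
    + 2\left(D_{12} + 2D_{66}\right)\frac{\partial^4 w}{\partial x^2 \partial y^2}
    + D_{22}\frac{\partial^4 w}{\partial y^4} = q(x,y),
    \]

    where \(w\) is the transverse deflection and \(D_{ij}\) are the plate bending stiffnesses; the boundary conditions select the particular solution family.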

  2. Naïve and Robust: Class-Conditional Independence in Human Classification Learning

    ERIC Educational Resources Information Center

    Jarecki, Jana B.; Meder, Björn; Nelson, Jonathan D.

    2018-01-01

    Humans excel in categorization. Yet from a computational standpoint, learning a novel probabilistic classification task involves severe computational challenges. The present paper investigates one way to address these challenges: assuming class-conditional independence of features. This feature independence assumption simplifies the inference…

  3. Theoretical studies of solar lasers and converters

    NASA Technical Reports Server (NTRS)

    Heinbockel, John H.

    1988-01-01

    The previously constructed one dimensional model for the simulated operation of an iodine laser assumed that the perfluoroalkyl iodide gas n-C3F7I was incompressible. The present study removes this simplifying assumption and considers n-C3F7I as a compressible fluid.

  4. A simplified analytical solution for thermal response of a one-dimensional, steady state transpiration cooling system in radiative and convective environment

    NASA Technical Reports Server (NTRS)

    Kubota, H.

    1976-01-01

    A simplified analytical method for calculation of thermal response within a transpiration-cooled porous heat shield material in an intense radiative-convective heating environment is presented. The essential assumptions of the radiative and convective transfer processes in the heat shield matrix are the two-temperature approximation and the specified radiative-convective heatings of the front surface. Sample calculations for porous silica with CO2 injection are presented for some typical parameters of mass injection rate, porosity, and material thickness. The effect of these parameters on the cooling system is discussed.

  5. Unique Results and Lessons Learned from the TSS Missions

    NASA Technical Reports Server (NTRS)

    Stone, Nobie H.

    2016-01-01

    In 1924, Irving Langmuir and H. M. Mott-Smith published a theoretical model for the complex plasma sheath phenomenon in which they identified some very special cases which greatly simplified the sheath and allowed a closed solution to the problem. The most widely used application is for an electrostatic, or "Langmuir," probe in laboratory plasma. Although the Langmuir probe is physically simple (a biased wire), the theory describing its functional behavior and its current-voltage characteristic is extremely complex and, accordingly, a number of assumptions and approximations are used in the LMS model. These simplifications, correspondingly, place limits on the model's range of application. Adapting the LMS model to real-life conditions is the subject of numerous papers and dissertations. The Orbit-Motion Limited (OML) model that is widely used today is one of these adaptations and is a convenient means of calculating sheath effects. The OML equation for electron current collection by a positively biased body is simply I ≈ A · j_e0 · (2/√π) · φ^(1/2), where A is the area of the body and φ is the electric potential on the body with respect to the plasma (expressed in units of the electron temperature). Since the Langmuir probe is a simple biased wire immersed in plasma, it is particularly tempting to use the OML equation in calculating the characteristics of the long, highly biased wires of an Electric Sail in the solar wind plasma. However, in order to arrive at the OML equation, a number of additional simplifying assumptions and approximations (beyond those made by Langmuir-Mott-Smith) are necessary. The OML equation is a good approximation when all conditions are met, but it would appear that the Electric Sail problem lies outside of the limits of applicability.
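
    Evaluating the quoted OML form for a few bias values makes its square-root scaling concrete (a sketch; the parameter values are illustrative, not Electric Sail design numbers):

    ```python
    # Sketch: the OML-form collection current quoted above,
    # I ≈ A * j_e0 * (2/sqrt(pi)) * sqrt(phi / Te), with Te in eV so that
    # phi/Te is the normalized potential.
    import math

    def oml_current(area, j_e0, phi, Te_eV):
        """area: m^2, j_e0: random electron thermal current density (A/m^2),
        phi: bias w.r.t. plasma (V), Te_eV: electron temperature (eV)."""
        return area * j_e0 * (2 / math.sqrt(math.pi)) * math.sqrt(phi / Te_eV)

    for phi in (10, 100, 1000):
        print(phi, "V ->", oml_current(area=1e-3, j_e0=0.5, phi=phi, Te_eV=10), "A")
    ```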

  6. BASEFLOW SEPARATION BASED ON ANALYTICAL SOLUTIONS OF THE BOUSSINESQ EQUATION. (R824995)

    EPA Science Inventory

    Abstract

    A technique for baseflow separation is presented based on similarity solutions of the Boussinesq equation. The method makes use of the simplifying assumptions that a horizontal impermeable layer underlies a Dupuit aquifer which is drained by a fully penetratin...

  7. Quasi 3D modeling of water flow in vadose zone and groundwater

    USDA-ARS?s Scientific Manuscript database

    The complexity of subsurface flow systems calls for a variety of concepts leading to the multiplicity of simplified flow models. One habitual simplification is based on the assumption that lateral flow and transport in unsaturated zone are not significant unless the capillary fringe is involved. In ...

  8. The Role of Semantic Clustering in Optimal Memory Foraging

    ERIC Educational Resources Information Center

    Montez, Priscilla; Thompson, Graham; Kello, Christopher T.

    2015-01-01

    Recent studies of semantic memory have investigated two theories of optimal search adopted from the animal foraging literature: Lévy flights and marginal value theorem. Each theory makes different simplifying assumptions and addresses different findings in search behaviors. In this study, an experiment is conducted to test whether clustering in…

  9. Scaling the Library Collection; A Simplified Method for Weighing the Variables

    ERIC Educational Resources Information Center

    Vagianos, Louis

    1973-01-01

    On the assumption that the physical properties of any information stock (book, etc.) offer the best foundation on which to develop satisfactory measurements for assessing library operations and developing library procedures, weight is suggested as the most useful variable for assessment and standardization. Advantages of this approach are…

  10. Dualisms in Higher Education: A Critique of Their Influence and Effect

    ERIC Educational Resources Information Center

    Macfarlane, Bruce

    2015-01-01

    Dualisms pervade the language of higher education research providing an over-simplified roadmap to the field. However, the lazy logic of their popular appeal supports the perpetuation of erroneous and often outdated assumptions about the nature of modern higher education. This paper explores nine commonly occurring dualisms:…

  11. A Comprehensive Real-World Distillation Experiment

    ERIC Educational Resources Information Center

    Kazameas, Christos G.; Keller, Kaitlin N.; Luyben, William L.

    2015-01-01

    Most undergraduate mass transfer and separation courses cover the design of distillation columns, and many undergraduate laboratories have distillation experiments. In many cases, the treatment is restricted to simple column configurations and simplifying assumptions are made so as to convey only the basic concepts. In industry, the analysis of a…

  12. Improving inference for aerial surveys of bears: The importance of assumptions and the cost of unnecessary complexity.

    PubMed

    Schmidt, Joshua H; Wilson, Tammy L; Thompson, William L; Reynolds, Joel H

    2017-07-01

    Obtaining useful estimates of wildlife abundance or density requires thoughtful attention to potential sources of bias and precision, and it is widely understood that addressing incomplete detection is critical to appropriate inference. When the underlying assumptions of sampling approaches are violated, both increased bias and reduced precision of the population estimator may result. Bear (Ursus spp.) populations can be difficult to sample and are often monitored using mark-recapture distance sampling (MRDS) methods, although obtaining adequate sample sizes can be cost-prohibitive. With the goal of improving inference, we examined the underlying methodological assumptions and estimator efficiency of three datasets collected under an MRDS protocol designed specifically for bears. We analyzed these data using MRDS, conventional distance sampling (CDS), and open-distance sampling approaches to evaluate the apparent bias-precision tradeoff relative to the assumptions inherent under each approach. We also evaluated the incorporation of informative priors on detection parameters within a Bayesian context. We found that the CDS estimator had low apparent bias and was more efficient than the more complex MRDS estimator. When combined with informative priors on the detection process, precision was increased by >50% compared to the MRDS approach with little apparent bias. In addition, open-distance sampling models revealed a serious violation of the assumption that all bears were available to be sampled. Inference is directly related to the underlying assumptions of the survey design and the analytical tools employed. We show that for aerial surveys of bears, avoidance of unnecessary model complexity, use of prior information, and the application of open population models can be used to greatly improve estimator performance and simplify field protocols. Although we focused on distance sampling-based aerial surveys for bears, the general concepts we addressed apply to a variety of wildlife survey contexts.
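
    A minimal sketch of the simpler CDS estimator favored here, assuming a half-normal detection function and the line-transect form (not the authors' code; survey numbers are hypothetical):

    ```python
    # Conventional distance sampling (CDS) sketch: half-normal detection and
    # the line-transect density estimator D = n / (2 * mu * L).
    import math

    def halfnormal_g(x, sigma):
        """Detection probability at perpendicular distance x."""
        return math.exp(-x**2 / (2 * sigma**2))

    def density_estimate(n_detections, total_line_length, sigma):
        # Effective strip half-width: mu = integral of g(x) dx = sigma*sqrt(pi/2)
        mu = sigma * math.sqrt(math.pi / 2)
        return n_detections / (2 * mu * total_line_length)

    # Hypothetical survey: 120 bears over 1500 km of transect, sigma = 0.4 km.
    print(density_estimate(120, 1500.0, 0.4), "bears per km^2")
    ```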

  13. A simplified gross thrust computing technique for an afterburning turbofan engine

    NASA Technical Reports Server (NTRS)

    Hamer, M. J.; Kurtenbach, F. J.

    1978-01-01

    A simplified gross thrust computing technique extended to the F100-PW-100 afterburning turbofan engine is described. The technique uses measured total and static pressures in the engine tailpipe and ambient static pressure to compute gross thrust. Empirically evaluated calibration factors account for three-dimensional effects, the effects of friction and mass transfer, and the effects of simplifying assumptions for solving the equations. Instrumentation requirements and the sensitivity of computed thrust to transducer errors are presented. NASA altitude facility tests on F100 engines (computed thrust versus measured thrust) are presented, and calibration factors obtained on one engine are shown to be applicable to the second engine by comparing the computed gross thrust. It is concluded that this thrust method is potentially suitable for flight test application and engine maintenance on production engines with a minimum amount of instrumentation.
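
    For context, a one-dimensional gross-thrust estimate of the kind such simplified methods calibrate can be sketched as follows; the calibration factor and all values are assumed for illustration, not the F100 calibration:

    ```python
    # Hedged sketch: one-dimensional gross thrust F = mdot*Ve + (pe - pamb)*Ae,
    # with an empirical factor Cfg absorbing three-dimensional, friction, and
    # mass-transfer effects (determined by ground test in practice).
    def gross_thrust(mdot, Ve, pe, pamb, Ae, Cfg=0.98):
        """mdot: kg/s, Ve: m/s, pe/pamb: Pa, Ae: m^2; Cfg: calibration factor
        (illustrative value)."""
        return Cfg * (mdot * Ve + (pe - pamb) * Ae)

    print(gross_thrust(mdot=90.0, Ve=900.0, pe=1.2e5, pamb=1.0e5, Ae=0.5), "N")
    ```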

  14. A control-volume method for analysis of unsteady thrust augmenting ejector flows

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.

    1988-01-01

    A method for predicting transient thrust augmenting ejector characteristics is presented. The analysis blends classic self-similar turbulent jet descriptions with a control volume mixing region discretization to capture transient effects in a new way. Division of the ejector into an inlet, diffuser, and mixing region corresponds with the assumption of viscous-dominated phenomena in the latter. Inlet and diffuser analyses are simplified by a quasi-steady analysis, justified by the assumption that pressure is the forcing function in those regions. Details of the theoretical foundation, the solution algorithm, and sample calculations are given.

  15. Dynamically rich, yet parameter-sparse models for spatial epidemiology. Comment on "Coupled disease-behavior dynamics on complex networks: A review" by Z. Wang et al.

    NASA Astrophysics Data System (ADS)

    Jusup, Marko; Iwami, Shingo; Podobnik, Boris; Stanley, H. Eugene

    2015-12-01

    Since the very inception of mathematical modeling in epidemiology, scientists have exploited the simplicity ingrained in the assumption of a well-mixed population. For example, perhaps the earliest susceptible-infectious-recovered (SIR) model, developed by L. Reed and W.H. Frost in the 1920s [1], included the well-mixed assumption such that any two individuals in the population could meet each other. The problem was that, unlike many other simplifying assumptions used in epidemiological modeling whose validity holds in one situation or the other, well-mixed populations are almost non-existent in reality because the nature of human socio-economic interactions is, for the most part, highly heterogeneous (e.g. [2-6]).
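
    The well-mixed SIR model referred to above can be stated in a few lines; under the well-mixed assumption all contact structure collapses into a single transmission rate (a minimal sketch with illustrative parameters):

    ```python
    # Classic well-mixed SIR: every individual can meet every other, so the
    # mass-action term beta*S*I replaces any explicit contact network.
    def sir_step(S, I, R, beta, gamma, dt):
        new_inf = beta * S * I * dt   # well-mixed mass-action contacts
        new_rec = gamma * I * dt
        return S - new_inf, I + new_inf - new_rec, R + new_rec

    S, I, R = 0.999, 0.001, 0.0
    beta, gamma, dt = 0.3, 0.1, 0.1   # R0 = beta/gamma = 3
    peak = 0.0
    for _ in range(int(300 / dt)):
        S, I, R = sir_step(S, I, R, beta, gamma, dt)
        peak = max(peak, I)
    print(f"final size: {R:.3f}, peak prevalence: {peak:.3f}")
    ```

    Network-based models of the kind reviewed by Wang et al. replace the single beta with transmission along heterogeneous edges, which is precisely what the well-mixed assumption discards.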

  16. Quick and Easy Rate Equations for Multistep Reactions

    ERIC Educational Resources Information Center

    Savage, Phillip E.

    2008-01-01

    Students rarely see closed-form analytical rate equations derived from underlying chemical mechanisms that contain more than a few steps unless restrictive simplifying assumptions (e.g., existence of a rate-determining step) are made. Yet, work published decades ago allows closed-form analytical rate equations to be written quickly and easily for…

  17. Data assimilation with soil water content sensors and pedotransfer functions in soil water flow modeling

    USDA-ARS?s Scientific Manuscript database

    Soil water flow models are based on a set of simplified assumptions about the mechanisms, processes, and parameters of water retention and flow. That causes errors in soil water flow model predictions. Soil water content monitoring data can be used to reduce the errors in models. Data assimilation (...

  18. Solubility and Thermodynamics: An Introductory Experiment

    NASA Astrophysics Data System (ADS)

    Silberman, Robert G.

    1996-05-01

    This article describes a laboratory experiment suitable for high school or freshman chemistry students in which the solubility of potassium nitrate is determined at several different temperatures. The data collected are used to calculate the equilibrium constant, ΔG, ΔH, and ΔS for the dissolution reaction. The simplifying assumptions are noted in the article.
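
    The analysis the experiment implies can be sketched as a van 't Hoff fit; all data values below are hypothetical:

    ```python
    # Fit ln K against 1/T (van 't Hoff): ln K = -dH/R * (1/T) + dS/R,
    # then dG = dH - T*dS at a temperature of interest.
    import numpy as np

    R = 8.314                                     # J/(mol K)
    T = np.array([293.0, 303.0, 313.0, 323.0])    # K, assumed data
    K = np.array([1.8, 2.6, 3.7, 5.1])            # equilibrium constants, assumed

    slope, intercept = np.polyfit(1.0 / T, np.log(K), 1)
    dH = -R * slope          # J/mol
    dS = R * intercept       # J/(mol K)
    dG_298 = dH - 298.15 * dS

    print(f"dH = {dH/1000:.1f} kJ/mol, dS = {dS:.1f} J/(mol K), "
          f"dG(298 K) = {dG_298/1000:.1f} kJ/mol")
    ```

    A rising K with temperature yields a positive ΔH, consistent with the endothermic dissolution the experiment is designed to show.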

  19. SSDA code to apply data assimilation in soil water flow modeling: Documentation and user manual

    USDA-ARS?s Scientific Manuscript database

    Soil water flow models are based on simplified assumptions about the mechanisms, processes, and parameters of water retention and flow. That causes errors in soil water flow model predictions. Data assimilation (DA) with the ensemble Kalman filter (EnKF) corrects modeling results based on measured s...
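
    A minimal stochastic ensemble Kalman filter analysis step of the kind such DA codes build on is sketched below (a generic EnKF update, not the SSDA implementation; the soil-moisture numbers are hypothetical):

    ```python
    # Stochastic EnKF analysis step: x_a = x_f + K (y + e - H x_f).
    import numpy as np

    def enkf_update(X, y, H, obs_var, rng):
        """X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observations;
        H: (n_obs, n_state) observation operator; obs_var: obs error variance."""
        n_obs, n_ens = len(y), X.shape[1]
        Xm = X - X.mean(axis=1, keepdims=True)
        Pf = Xm @ Xm.T / (n_ens - 1)                     # forecast covariance
        Rm = obs_var * np.eye(n_obs)
        K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + Rm)  # Kalman gain
        Y = y[:, None] + rng.normal(0, np.sqrt(obs_var), (n_obs, n_ens))
        return X + K @ (Y - H @ X)                       # perturbed-obs update

    rng = np.random.default_rng(2)
    X = rng.normal(0.30, 0.05, size=(3, 50))  # soil moisture at 3 depths, 50 members
    H = np.array([[1.0, 0.0, 0.0]])           # sensor observes the top layer only
    X = enkf_update(X, y=np.array([0.24]), H=H, obs_var=0.01**2, rng=rng)
    print("analysis mean by layer:", X.mean(axis=1))
    ```

    Note how the observed top layer pulls the unobserved layers toward the measurement through the ensemble covariance, which is the mechanism by which assimilation corrects model error.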

  20. The Signal Importance of Noise

    ERIC Educational Resources Information Center

    Macy, Michael; Tsvetkova, Milena

    2015-01-01

    Noise is widely regarded as a residual category--the unexplained variance in a linear model or the random disturbance of a predictable pattern. Accordingly, formal models often impose the simplifying assumption that the world is noise-free and social dynamics are deterministic. Where noise is assigned causal importance, it is often assumed to be a…

  1. A survey of numerical models for wind prediction

    NASA Technical Reports Server (NTRS)

    Schonfeld, D.

    1980-01-01

    A literature review is presented of the work done in the numerical modeling of wind flows. Pertinent computational techniques are described, as well as the necessary assumptions used to simplify the governing equations. A steady state model is outlined, based on the data obtained at the Deep Space Communications complex at Goldstone, California.

  2. Distinguishing Identical Particles and the Correct Counting of States

    ERIC Educational Resources Information Center

    de la Torre, A. C.; Martin, H. O.

    2009-01-01

    It is shown that quantum systems of identical particles can be treated as different when they are in well-differentiated states. This simplifying assumption allows for the consideration of quantum systems isolated from the rest of the universe and justifies many intuitive statements about identical systems. However, it is shown that this…

  3. Creating Matched Samples Using Exact Matching. Statistical Report 2016-3

    ERIC Educational Resources Information Center

    Godfrey, Kelly E.

    2016-01-01

    By creating and analyzing matched samples, researchers can simplify their analyses to include fewer covariate variables, relying less on model assumptions, and thus generating results that may be easier to report and interpret. When two groups essentially "look" the same, it is easier to explore their differences and make comparisons…

  4. Large Angle Transient Dynamics (LATDYN) user's manual

    NASA Technical Reports Server (NTRS)

    Abrahamson, A. Louis; Chang, Che-Wei; Powell, Michael G.; Wu, Shih-Chin; Bingel, Bradford D.; Theophilos, Paula M.

    1991-01-01

    A computer code for modeling the large angle transient dynamics (LATDYN) of structures was developed to investigate techniques for analyzing flexible deformation and control/structure interaction problems associated with large angular motions of spacecraft. This type of analysis is beyond the routine capability of conventional analytical tools without simplifying assumptions. In some instances, the motion may be sufficiently slow and the spacecraft (or component) sufficiently rigid to simplify analyses of dynamics and controls by making pseudo-static and/or rigid body assumptions. The LATDYN introduces a new approach to the problem by combining finite element structural analysis, multi-body dynamics, and control system analysis in a single tool. It includes a type of finite element that can deform and rotate through large angles at the same time, and which can be connected to other finite elements either rigidly or through mechanical joints. The LATDYN also provides symbolic capabilities for modeling control systems which are interfaced directly with the finite element structural model. Thus, the nonlinear equations representing the structural model are integrated along with the equations representing sensors, processing, and controls as a coupled system.

  5. Improving estimates of subsurface gas transport in unsaturated fractured media using experimental Xe diffusion data and numerical methods

    NASA Astrophysics Data System (ADS)

    Ortiz, J. P.; Ortega, A. D.; Harp, D. R.; Boukhalfa, H.; Stauffer, P. H.

    2017-12-01

    Gas transport in unsaturated fractured media plays an important role in a variety of applications, including detection of underground nuclear explosions, transport from volatile contaminant plumes, shallow CO2 leakage from carbon sequestration sites, and methane leaks from hydraulic fracturing operations. Gas breakthrough times are highly sensitive to uncertainties associated with a variety of hydrogeologic parameters, including: rock type, fracture aperture, matrix permeability, porosity, and saturation. Furthermore, a couple of simplifying assumptions are typically employed when representing fracture flow and transport. Aqueous phase transport is typically considered insignificant compared to gas phase transport in unsaturated fracture flow regimes, and an assumption of instantaneous dissolution/volatilization of radionuclide gas is commonly used to reduce computational expense. We conduct this research using a twofold approach that combines laboratory gas experimentation and numerical modeling to verify and refine these simplifying assumptions in our current models of gas transport. Using a gas diffusion cell, we are able to measure air pressure transmission through fractured tuff core samples while also measuring Xe gas breakthrough with a mass spectrometer. We can thus create synthetic barometric fluctuations akin to those observed in field tests and measure the associated gas flow through the fracture and matrix pore space for varying degrees of fluid saturation. We then attempt to reproduce the experimental results using numerical models in the PFLOTRAN and FEHM codes to better understand the importance of different parameters and assumptions on gas transport. Our numerical approaches represent both single-phase gas flow with immobile water, as well as full multi-phase transport in order to test the validity of assuming immobile pore water. Our approaches also include the ability to simulate the reaction equilibrium kinetics of dissolution/volatilization in order to identify when the assumption of instantaneous equilibrium is reasonable. These efforts will aid us in our application of such models to larger, field-scale tests and improve our ability to predict gas breakthrough times.

  6. An evaluation of complementary relationship assumptions

    NASA Astrophysics Data System (ADS)

    Pettijohn, J. C.; Salvucci, G. D.

    2004-12-01

    Complementary relationship (CR) models, based on Bouchet's (1963) somewhat heuristic CR hypothesis, are advantageous in their sole reliance on readily available climatological data. While Bouchet's CR hypothesis requires a number of questionable assumptions, CR models have been evaluated on variable time and length scales with relative success. Bouchet's hypothesis is grounded on the assumption that a change in potential evapotranspiration (Ep) is equal and opposite in sign to a change in actual evapotranspiration (Ea), i.e., -dEp / dEa = 1. In his mathematical rationalization of the CR, Morton (1965) similarly assumes that a change in potential sensible heat flux (Hp) is equal and opposite in sign to a change in actual sensible heat flux (Ha), i.e., -dHp / dHa = 1. CR models have maintained these assumptions while focusing on defining Ep and equilibrium evapotranspiration (Epo). We question Bouchet and Morton's aforementioned assumptions by revisiting the CR derivation in light of a proposed variable, φ = -dEp/dEa. We evaluate φ in a simplified Monin-Obukhov surface similarity framework and demonstrate how previous error in the application of CR models may be explained in part by the previous assumption that φ = 1. Finally, we discuss the various time and length scales at which φ may be evaluated.
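
    Restated compactly (with the usual boundary condition that both rates equal Epo for a saturated surface), the symmetric hypothesis and its proposed generalization read:

    \[
    \varphi = -\frac{dE_p}{dE_a}, \qquad
    \varphi = 1 \;\Longleftrightarrow\; E_a + E_p = 2E_{po},
    \]

    so testing whether \(\varphi\) departs from unity is a direct test of the Bouchet and Morton assumptions.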

  7. Elaboration Preferences and Differences in Learning Proficiency.

    ERIC Educational Resources Information Center

    Rohwer, William D., Jr.; Levin, Joel R.

    The major emphasis of this study is on the comparative validities of paired-associate learning tests and IQ tests in predicting reading achievement. The study engages in a brief review of earlier research in order to examine the validity of two assumptions--that the construction and/or the use of a tactic that simplifies a learning task is one of…

  8. 76 FR 58268 - Agency Information Collection Activities; Submission to OMB for Review and Approval; Comment...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-20

    ... simplify some assumptions and to make estimation methods consistent; and characterization as Agency burden...-1007 to (1) EPA online using http://www.regulations.gov (our preferred method), by e-mail to oppt.ncic...-HQ-OPPT-2010-1007, which is available for online viewing at http://www.regulations.gov, or in person...

  9. Test Review: Watson, G., & Glaser, E. M. (2010), "Watson-Glaser™ II Critical Thinking Appraisal." Washington State University, Pullman, USA

    ERIC Educational Resources Information Center

    Sternod, Latisha; French, Brian

    2016-01-01

    The Watson-Glaser™ II Critical Thinking Appraisal (Watson-Glaser II; Watson & Glaser, 2010) is a revised version of the "Watson-Glaser Critical Thinking Appraisal®" (Watson & Glaser, 1994). The Watson-Glaser II introduces a simplified model of critical thinking, consisting of three subdimensions: recognize assumptions, evaluate…

  10. Selected mesostructure properties in loblolly pine from Arkansas plantations

    Treesearch

    David E. Kretschmann; Steven M. Cramer; Roderic Lakes; Troy Schmidt

    2006-01-01

    Design properties of wood are currently established at the macroscale, assuming wood to be a homogeneous orthotropic material. The resulting variability from the use of such a simplified assumption has been handled by designing with lower percentile values and applying a number of factors to account for the wide statistical variation in properties. With managed...

  11. Estimation of effective population size in continuously distributed populations: There goes the neighborhood

    Treesearch

    M. C. Neel; K. McKelvey; N. Ryman; M. W. Lloyd; R. Short Bull; F. W. Allendorf; M. K. Schwartz; R. S. Waples

    2013-01-01

    Use of genetic methods to estimate effective population size (Ne) is rapidly increasing, but all approaches make simplifying assumptions unlikely to be met in real populations. In particular, all assume a single, unstructured population, and none has been evaluated for use with continuously distributed species. We simulated continuous populations with local mating...

  12. Application of Multi-Hypothesis Sequential Monte Carlo for Breakup Analysis

    NASA Astrophysics Data System (ADS)

    Faber, W. R.; Zaidi, W.; Hussein, I. I.; Roscoe, C. W. T.; Wilkins, M. P.; Schumacher, P. W., Jr.

    As more objects are launched into space, the potential for breakup events and space object collisions is ever increasing. These events create large clouds of debris that are extremely hazardous to space operations. Providing timely, accurate, and statistically meaningful Space Situational Awareness (SSA) data is crucial in order to protect assets and operations in space. The space object tracking problem, in general, is nonlinear in both state dynamics and observations, making it ill-suited to linear filtering techniques such as the Kalman filter. Additionally, given the multi-object, multi-scenario nature of the problem, space situational awareness requires multi-hypothesis tracking and management that is combinatorially challenging in nature. In practice, it is often seen that assumptions of underlying linearity and/or Gaussianity are used to provide tractable solutions to the multiple space object tracking problem. However, these assumptions are, at times, detrimental to tracking data and provide statistically inconsistent solutions. This paper details a tractable solution to the multiple space object tracking problem applicable to space object breakup events. Within this solution, simplifying assumptions of the underlying probability density function are relaxed and heuristic methods for hypothesis management are avoided. This is done by implementing Sequential Monte Carlo (SMC) methods for both nonlinear filtering and hypothesis management. The goal of this paper is to detail the solution and use it as a platform to discuss computational limitations that hinder proper analysis of large breakup events.
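
    A generic bootstrap particle filter, the SMC building block applied here, can be sketched in a few lines (an illustrative one-dimensional model, not the authors' orbital dynamics or hypothesis-management scheme):

    ```python
    # Bootstrap particle filter: propagate, weight by likelihood, resample.
    # No Gaussian assumption is made about the state density.
    import numpy as np

    rng = np.random.default_rng(3)
    n_particles, steps = 500, 50
    x = rng.normal(0.0, 1.0, n_particles)   # initial particle cloud
    truth = 0.0

    for _ in range(steps):
        truth = 0.95 * truth + rng.normal(0, 0.1)        # hidden true state
        x = 0.95 * x + rng.normal(0, 0.1, n_particles)   # propagate particles
        z = truth + rng.normal(0, 0.5)                   # noisy observation
        w = np.exp(-0.5 * ((z - x) / 0.5) ** 2)          # likelihood weights
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)  # resample
        x = x[idx]

    print("estimate:", x.mean(), "truth:", truth)
    ```

    The cost of this flexibility is the particle count, which is the computational limitation the paper discusses for large debris clouds.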

  13. A multigenerational effect of parental age on offspring size but not fitness in common duckweed (Lemna minor).

    PubMed

    Barks, P M; Laird, R A

    2016-04-01

    Classic theories on the evolution of senescence make the simplifying assumption that all offspring are of equal quality, so that demographic senescence only manifests through declining rates of survival or fecundity. However, there is now evidence that, in addition to declining rates of survival and fecundity, many organisms are subject to age-related declines in the quality of offspring produced (i.e. parental age effects). Recent modelling approaches allow for the incorporation of parental age effects into classic demographic analyses, assuming that such effects are limited to a single generation. Does this 'single-generation' assumption hold? To find out, we conducted a laboratory study with the aquatic plant Lemna minor, a species for which parental age effects have been demonstrated previously. We compared the size and fitness of 423 laboratory-cultured plants (asexually derived ramets) representing various birth orders, and ancestral 'birth-order genealogies'. We found that offspring size and fitness both declined with increasing 'immediate' birth order (i.e. birth order with respect to the immediate parent), but only offspring size was affected by ancestral birth order. Thus, the assumption that parental age effects on offspring fitness are limited to a single generation does in fact hold for L. minor. This result will guide theorists aiming to refine and generalize modelling approaches that incorporate parental age effects into evolutionary theory on senescence. © 2016 European Society For Evolutionary Biology.

  14. Effects of various assumptions on the calculated liquid fraction in isentropic saturated equilibrium expansions

    NASA Technical Reports Server (NTRS)

    Bursik, J. W.; Hall, R. M.

    1980-01-01

    The saturated equilibrium expansion approximation for two-phase flow often involves ideal-gas and latent-heat assumptions to simplify the solution procedure. This approach is well documented by Wegener and Mack and works best at low pressures where deviations from ideal-gas behavior are small. A thermodynamic expression for liquid mass fraction that is decoupled from the equations of fluid mechanics is used to compare the effects of the various assumptions on nitrogen-gas saturated equilibrium expansion flow starting at 8.81 atm, 2.99 atm, and 0.45 atm, which are conditions representative of transonic cryogenic wind tunnels. For the highest pressure case, the entire set of ideal-gas and latent-heat assumptions is shown to be in error by 62 percent for the values of heat capacity and latent heat. An approximation of the exact, real-gas expression is also developed using a constant two-phase isentropic expansion coefficient, which results in an error of only 2 percent for the high pressure case.

  15. Experimental Methodology for Measuring Combustion and Injection-Coupled Responses

    NASA Technical Reports Server (NTRS)

    Cavitt, Ryan C.; Frederick, Robert A.; Bazarov, Vladimir G.

    2006-01-01

    A Russian scaling methodology for liquid rocket engines utilizing a single, full scale element is reviewed. The scaling methodology exploits the supercritical phase of the full scale propellants to simplify scaling requirements. Many assumptions are utilized in the derivation of the scaling criteria. A test apparatus design is presented to implement the Russian methodology and consequently verify the assumptions. This test apparatus will allow researchers to assess the usefulness of the scaling procedures and possibly enhance the methodology. A matrix of the apparatus capabilities for a RD-170 injector is also presented. Several methods to enhance the methodology have been generated through the design process.

  16. Provably-Secure (Chinese Government) SM2 and Simplified SM2 Key Exchange Protocols

    PubMed Central

    Nam, Junghyun; Kim, Moonseong

    2014-01-01

    We revisit the SM2 protocol, which is widely used in Chinese commercial applications and by Chinese government agencies. Although it is by now standard practice for protocol designers to provide security proofs in widely accepted security models in order to assure protocol implementers of their security properties, the SM2 protocol does not have a proof of security. In this paper, we prove the security of the SM2 protocol in the widely accepted indistinguishability-based Bellare-Rogaway model under the elliptic curve discrete logarithm problem (ECDLP) assumption. We also present a simplified and more efficient version of the SM2 protocol with an accompanying security proof. PMID:25276863

  17. Simplified Analysis of Pulse Detonation Rocket Engine Blowdown Gasdynamics and Performance

    NASA Technical Reports Server (NTRS)

    Morris, C. I.; Rodgers, Stephen L. (Technical Monitor)

    2002-01-01

    Pulse detonation rocket engines (PDREs) offer potential performance improvements over conventional designs, but represent a challenging modeling task. A simplified model for an idealized, straight-tube, single-shot PDRE blowdown process and thrust determination is described and implemented. In order to form an assessment of the accuracy of the model, the flowfield time history is compared to experimental data from Stanford University. Parametric studies of the effect of mixture stoichiometry, initial fill temperature, and blowdown pressure ratio on the performance of a PDRE are performed using the model. PDRE performance is also compared with a conventional steady-state rocket engine over a range of pressure ratios using similar gasdynamic assumptions.

  18. Rainbow net analysis of VAXcluster system availability

    NASA Technical Reports Server (NTRS)

    Johnson, Allen M., Jr.; Schoenfelder, Michael A.

    1991-01-01

    A system modeling technique, Rainbow Nets, is used to evaluate the availability and mean-time-to-interrupt of the VAXcluster. These results are compared to the exact analytic results showing that reasonable accuracy is achieved through simulation. The complexity of the Rainbow Net does not increase as the number of processors increases, but remains constant, unlike a Markov model which expands exponentially. The constancy is achieved by using tokens with identity attributes (items) that can have additional attributes associated with them (features) which can exist in multiple states. The time to perform the simulation increases, but this is a polynomial increase rather than exponential. There is no restriction on distributions used for transition firing times, allowing real situations to be modeled more accurately by choosing the distribution which best fits the system performance and eliminating the need for simplifying assumptions.

  19. An analysis of running skyline load path.

    Treesearch

    Ward W. Carson; Charles N. Mann

    1971-01-01

    This paper is intended for those who wish to prepare an algorithm to determine the load path of a running skyline. The mathematics of a simplified approach to this running skyline design problem are presented. The approach employs assumptions which reduce the complexity of the problem to the point where it can be solved on desk-top computers of limited capacities. The...

  20. Stratosphere circulation on tidally locked ExoEarths

    NASA Astrophysics Data System (ADS)

    Carone, L.; Keppens, R.; Decin, L.; Henning, Th.

    2018-02-01

    Stratosphere circulation is important to interpret abundances of photochemically produced compounds like ozone which we aim to observe to assess habitability of exoplanets. We thus investigate a tidally locked ExoEarth scenario for TRAPPIST-1b, TRAPPIST-1d, Proxima Centauri b and GJ 667 C f with a simplified 3D atmosphere model and for different stratospheric wind breaking assumptions.

  1. 26 CFR 1.417(a)(3)-1 - Required explanation of qualified joint and survivor annuity and qualified preretirement survivor...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... grouping rules of paragraph (c)(2)(iii) of this section. Separate charts are provided for ages 55, 60, and...) Simplified presentations permitted—(A) Grouping of certain optional forms. Two or more optional forms of... starting date, a reasonable assumption for the age of the participant's spouse, or, in the case of a...

  2. A nonlinear theory for elastic plates with application to characterizing paper properties

    Treesearch

    M. W. Johnson; Thomas J. Urbanik

    1984-03-01

    A theory of thin plates which is physically as well as kinematically nonlinear is developed and used to characterize elastic material behavior for arbitrary stretching and bending deformations. It is developed from a few clearly defined assumptions and uses a unique treatment of strain energy. An effective strain concept is introduced to simplify the theory to a...

  3. Sequential Auctions with Partially Substitutable Goods

    NASA Astrophysics Data System (ADS)

    Vetsikas, Ioannis A.; Jennings, Nicholas R.

    In this paper, we examine a setting in which a number of partially substitutable goods are sold in sequential single unit auctions. Each bidder needs to buy exactly one of these goods. In previous work, this setting has been simplified by assuming that bidders do not know their valuations for all items a priori, but rather are informed of their true valuation for each item right before the corresponding auction takes place. This assumption simplifies the strategies of bidders, as the expected revenue from future auctions is the same for all bidders due to the complete lack of private information. In our analysis, we do not make this assumption. This complicates the computation of the equilibrium strategies significantly. We examine this setting both for first- and second-price auction variants, initially when the closing prices are not announced, for which we prove that sequential first- and second-price auctions are revenue equivalent. Then we assume that the prices are announced; because of the asymmetry in the announced prices between the two auction variants, revenue equivalence does not hold in this case. We finish the paper by giving some initial results about the case when free disposal is allowed, and therefore a bidder can purchase more than one item.

  4. A Comparison of Crater-Size Scaling and Ejection-Speed Scaling During Experimental Impacts in Sand

    NASA Technical Reports Server (NTRS)

    Anderson, J. L. B.; Cintala, M. J.; Johnson, M. K.

    2014-01-01

    Non-dimensional scaling relationships are used to understand various cratering processes including final crater sizes and the excavation of material from a growing crater. The principal assumption behind these scaling relationships is that these processes depend on a combination of the projectile's characteristics, namely its diameter, density, and impact speed. This simplifies the impact event into a single point-source. So long as the process of interest is beyond a few projectile radii from the impact point, the point-source assumption holds. These assumptions can be tested through laboratory experiments in which the initial conditions of the impact are controlled and resulting processes measured directly. In this contribution, we continue our exploration of the congruence between crater-size scaling and ejection-speed scaling relationships. In particular, we examine a series of experimental suites in which the projectile diameter and average grain size of the target are varied.
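
    For context, the point-source assumption is conventionally expressed through non-dimensional pi-groups; in the standard gravity-regime form (notation generic):

    \[
    \pi_V = \frac{\rho V}{m}, \qquad \pi_2 = \frac{g a}{U^2}, \qquad \pi_V = K_1\, \pi_2^{-\beta},
    \]

    where \(m\), \(a\), and \(U\) are the projectile mass, radius, and impact speed, \(\rho\) the target density, \(V\) the crater volume, \(g\) the gravitational acceleration, and \(K_1\), \(\beta\) empirically determined constants. Ejection-speed scaling tests the same point-source idea against a different observable.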

  5. Practical modeling approaches for geological storage of carbon dioxide.

    PubMed

    Celia, Michael A; Nordbotten, Jan M

    2009-01-01

    The relentless increase of anthropogenic carbon dioxide emissions and the associated concerns about climate change have motivated new ideas about carbon-constrained energy production. One technological approach to control carbon dioxide emissions is carbon capture and storage, or CCS. The underlying idea of CCS is to capture the carbon before it is emitted to the atmosphere and store it somewhere other than the atmosphere. Currently, the most attractive option for large-scale storage is in deep geological formations, including deep saline aquifers. Many physical and chemical processes can affect the fate of the injected CO2, with the overall mathematical description of the complete system becoming very complex. Our approach to the problem has been to reduce complexity as much as possible, so that we can focus on the few truly important questions about the injected CO2, most of which involve leakage out of the injection formation. Toward this end, we have established a set of simplifying assumptions that allow us to derive simplified models, which can be solved numerically or, for the most simplified cases, analytically. These simplified models allow calculation of solutions to large-scale injection and leakage problems in ways that traditional multicomponent multiphase simulators cannot. Such simplified models provide important tools for system analysis, screening calculations, and overall risk-assessment calculations. We believe this is a practical and important approach to model geological storage of carbon dioxide. It also serves as an example of how complex systems can be simplified while retaining the essential physics of the problem.

  6. Statistical Issues for Calculating Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Bacon, John B.; Matney, Mark

    2016-01-01

    A number of statistical tools have been developed over the years for assessing the risk of reentering objects to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. This information, combined with information on the expected ground path of the reentry, is used to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of this analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper builds on previous IAASS work to re-examine one of these theoretical assumptions. This study employs empirical and theoretical information to test the assumption of a fully random decay along the argument of latitude of the final orbit, and makes recommendations on how to improve the accuracy of this calculation in the future.
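
    The quantity at stake is the casualty expectation, conventionally computed in the generic form (not the specific tool evaluated here):

    \[
    E_c = \sum_i \rho_{pop}(\lambda_i, \phi_i)\, A_{c,i},
    \]

    where \(A_{c,i}\) is the casualty area of surviving fragment \(i\) and \(\rho_{pop}\) the population density at the fragment's predicted impact point \((\lambda_i, \phi_i)\); the latitude-dependent weighting of \(\rho_{pop}\) is exactly where the random-decay assumption enters.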

  7. Electromagnetic Simulation of the Near-Field Distribution around a Wind Farm

    DOE PAGES

    Yang, Shang-Te; Ling, Hao

    2013-01-01

    An efficient approach to compute the near-field distribution around and within a wind farm under plane wave excitation is proposed. To make the problem computationally tractable, several simplifying assumptions are made based on the geometry of the problem. By comparing the approximations against full-wave simulations at 500 MHz, it is shown that the assumptions do not introduce significant errors into the resulting near-field distribution. The near fields around a 3 × 3 wind farm are computed using the developed methodology at 150 MHz, 500 MHz, and 3 GHz. Both the multipath interference patterns and the forward shadows are predicted by the proposed method.

  8. Short-cut Methods versus Rigorous Methods for Performance-evaluation of Distillation Configurations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramapriya, Gautham Madenoor; Selvarajah, Ajiththaa; Jimenez Cucaita, Luis Eduardo

    2018-05-17

    Here, this study demonstrates the efficacy of a short-cut method, the Global Minimization Algorithm (GMA), which uses assumptions of ideal mixtures, constant molar overflow (CMO) and pinched columns, in pruning the search-space of distillation column configurations for zeotropic multicomponent separation, to provide a small subset of attractive configurations with low minimum heat duties. The short-cut method, due to its simplifying assumptions, is computationally efficient, yet reliable in identifying the small subset of useful configurations for further detailed process evaluation. This two-tier approach allows expedient search of the configuration space containing hundreds to thousands of candidate configurations for a given application.

  9. Hypotheses of calculation of the water flow rate evaporated in a wet cooling tower

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bourillot, C.

    1983-08-01

    The method developed by Poppe at the University of Hannover to calculate the thermal performance of a wet cooling tower fill is presented. The formulation of Poppe is then validated using full-scale test data from a wet cooling tower at the power station at Neurath, Federal Republic of Germany. It is shown that the Poppe method predicts the evaporated water flow rate almost perfectly and the condensate content of the warm air with good accuracy over a wide range of ambient conditions. The simplifying assumptions of the Merkel theory are discussed, and the errors linked to these assumptions are systematically described, then illustrated with the test data.
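
    A minimal sketch of the Merkel integral follows, with an assumed saturation-enthalpy fit; the Merkel simplifications it embodies (Lewis factor of unity, evaporated water neglected in the air energy balance, saturated outlet air) are exactly those the Poppe formulation relaxes.

      import numpy as np

      def h_saturated_air(T):
          # crude illustrative fit for saturated-air enthalpy [kJ/kg], T in deg C (assumed)
          return 4.7 + 2.7 * T + 0.05 * T**2

      def merkel_number(T_in, T_out, h_air_in, m_ratio, cpw=4.186, n=200):
          """Me = integral of cpw dT / (h_sw - h_a) as water cools T_in -> T_out;
          air enthalpy rises along the fill with water/air mass-flow ratio m_ratio."""
          T = np.linspace(T_out, T_in, n)
          h_a = h_air_in + m_ratio * cpw * (T - T_out)   # air-side energy balance
          return np.trapz(cpw / (h_saturated_air(T) - h_a), T)

      print(merkel_number(T_in=40.0, T_out=28.0, h_air_in=60.0, m_ratio=1.2))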

  10. Data Transmission Signal Design and Analysis

    NASA Technical Reports Server (NTRS)

    Moore, J. D.

    1972-01-01

    The error performances of several digital signaling methods are determined as a function of a specified signal-to-noise ratio. Results are obtained for Gaussian noise and impulse noise. Performance of a receiver for differentially encoded biphase signaling is obtained by extending the results of differential phase shift keying. The analysis presented obtains a closed-form answer through the use of some simplifying assumptions. The results give an insight into the analysis problem; however, the actual error performance may show some degradation because of the assumptions made in the analysis. Bipolar signaling decision-threshold selection is also investigated. The optimum threshold depends on the signal-to-noise ratio and requires the use of an adaptive receiver.
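
    For the Gaussian-noise case, the relevant error rates have standard closed forms; the sketch below uses textbook formulas (not the report's own derivation) for coherent biphase, DPSK, and differentially encoded coherent biphase.

      import math

      def q_func(x):
          return 0.5 * math.erfc(x / math.sqrt(2.0))

      def ber_bpsk(ebn0):      # coherent biphase
          return q_func(math.sqrt(2.0 * ebn0))

      def ber_dpsk(ebn0):      # differential phase shift keying
          return 0.5 * math.exp(-ebn0)

      def ber_debpsk(ebn0):    # differentially encoded coherent biphase
          p = ber_bpsk(ebn0)
          return 2.0 * p * (1.0 - p)   # errors come in pairs, to first order

      for db in (4, 8, 12):
          g = 10.0 ** (db / 10.0)      # Eb/N0 from decibels
          print(db, ber_bpsk(g), ber_dpsk(g), ber_debpsk(g))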

  11. Can the discharge of a hyperconcentrated flow be estimated from paleoflood evidence?

    NASA Astrophysics Data System (ADS)

    Bodoque, Jose M.; Eguibar, Miguel A.; Díez-Herrero, Andrés; Gutiérrez-Pérez, Ignacio; Ruíz-Villanueva, Virginia

    2011-12-01

    Many flood events involving water and sediments have been characterized using classic hydraulics principles, assuming the existence of critical flow and many other simplifications. In this paper, hyperconcentrated flow discharge was evaluated by using paleoflood reconstructions (based on paleostage indicators [PSI]) combined with a detailed hydraulic analysis of the critical flow assumption. The exact location where this condition occurred was established by iteratively determining the corresponding cross section, so that specific energy is at a minimum. In addition, all of the factors and parameters involved in the process were assessed, especially those related to the momentum equation, existing shear stresses in the wetted perimeter, and nonhydrostatic and hydrostatic pressure distributions. The superelevation of the hyperconcentrated flow, due to the flow elevation curvature, was also estimated and calibrated with the PSI. The estimated peak discharge was established once the iterative process was unable to improve the fit between the simulated depth and the depth observed from the PSI. The methodological approach proposed here can be applied to other high-gradient mountainous torrents with a similar geomorphic configuration to the one studied in this paper. Likewise, results have been derived with fewer uncertainties than those obtained from standard hydraulic approaches, whose simplifying assumptions have not been considered.
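
    The core computation can be sketched as follows: find the depth at which specific energy is a minimum (Froude number of one) for a given section, then invert for the discharge that reproduces a paleostage-indicator depth. The trapezoidal geometry and PSI depth below are hypothetical, and a real analysis would add the momentum, shear-stress, and superelevation terms discussed above.

      import math

      G = 9.81

      def area(y, b=8.0, z=1.5):        # trapezoid: bottom width b, side slope z (assumed)
          return (b + z * y) * y

      def top_width(y, b=8.0, z=1.5):
          return b + 2.0 * z * y

      def critical_depth(Q, lo=1e-3, hi=20.0, tol=1e-8):
          """Solve Q^2 T / (g A^3) = 1 (Froude = 1, minimum specific energy)."""
          def froude_excess(y):
              return Q**2 * top_width(y) / (G * area(y) ** 3) - 1.0
          while hi - lo > tol:
              mid = 0.5 * (lo + hi)
              lo, hi = (mid, hi) if froude_excess(mid) > 0 else (lo, mid)
          return 0.5 * (lo + hi)

      psi_depth = 2.4                   # assumed paleostage-indicator depth [m]
      Q_lo, Q_hi = 1.0, 500.0           # bracket on discharge [m^3/s]
      for _ in range(60):               # bisect: critical depth grows with Q
          Q_mid = 0.5 * (Q_lo + Q_hi)
          if critical_depth(Q_mid) < psi_depth:
              Q_lo = Q_mid
          else:
              Q_hi = Q_mid
      print("peak discharge ~", 0.5 * (Q_lo + Q_hi), "m3/s")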

  12. Superfast maximum-likelihood reconstruction for quantum tomography

    NASA Astrophysics Data System (ADS)

    Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon

    2017-06-01

    Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes a state-of-the-art convex optimization scheme, an accelerated projected-gradient method, which allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n-qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
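
    A minimal, non-accelerated projected-gradient iteration conveys the core idea: ascend the log-likelihood and project back onto the density matrices by projecting eigenvalues onto the probability simplex. The single-qubit POVM and the measured frequencies below are assumed toy data, not the paper's benchmarks.

      import numpy as np

      def project_to_density(H):
          """Project a Hermitian matrix onto {rho >= 0, tr rho = 1}."""
          w, V = np.linalg.eigh(H)
          u = np.sort(w)[::-1]                       # eigenvalues, descending
          css = np.cumsum(u) - 1.0
          k = np.nonzero(u - css / np.arange(1, len(u) + 1) > 0)[0][-1]
          w_proj = np.maximum(w - css[k] / (k + 1.0), 0.0)
          return (V * w_proj) @ V.conj().T

      def mle_tomography(povm, freqs, dim, steps=500, lr=0.1):
          rho = np.eye(dim) / dim
          for _ in range(steps):
              probs = [max(np.real(np.trace(E @ rho)), 1e-12) for E in povm]
              grad = sum(f / p * E for f, p, E in zip(freqs, probs, povm))
              rho = project_to_density(rho + lr * grad)
          return rho

      # single-qubit example: the six Pauli-eigenstate effects (assumed data)
      s = {"x": np.array([[0, 1], [1, 0]]), "y": np.array([[0, -1j], [1j, 0]]),
           "z": np.array([[1, 0], [0, -1]])}
      povm = [(np.eye(2) + sign * s[a]) / 6.0 for a in "xyz" for sign in (1, -1)]
      freqs = [0.30, 0.03, 0.17, 0.17, 0.17, 0.16]
      print(np.round(mle_tomography(povm, freqs, 2), 3))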

  13. Marginal Loss Calculations for the DCOPF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eldridge, Brent; O'Neill, Richard P.; Castillo, Andrea R.

    2016-12-05

    The purpose of this paper is to explain some aspects of including a marginal line loss approximation in the DCOPF. The DCOPF optimizes electric generator dispatch using simplified power flow physics. Since the standard assumptions in the DCOPF include a lossless network, a number of modifications have to be added to the model. Calculating marginal losses allows the DCOPF to optimize the location of power generation, so that generators that are closer to demand centers are relatively cheaper than remote generation. The problem formulations discussed in this paper will simplify many aspects of practical electric dispatch implementations in use today, but will include sufficient detail to demonstrate a few points with regard to the handling of losses.
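
    A two-bus toy case shows how marginal losses separate nodal prices even without congestion; the resistance, load, and offers are assumed numbers, and production implementations linearize the loss term around a base-point flow rather than solving it exactly as done here.

      import math

      r = 0.02                          # line resistance, p.u. (assumed)
      demand = 1.0                      # load at bus 2, p.u. (assumed)
      c_remote, c_local = 30.0, 40.0    # $/MWh offers at bus 1 and bus 2 (assumed)

      # flow f from the cheap remote unit must also cover quadratic line losses:
      # f - r f^2 = demand
      f = (1.0 - math.sqrt(1.0 - 4.0 * r * demand)) / (2.0 * r)
      marginal_loss_factor = 2.0 * r * f            # d(losses)/d(flow)
      lmp_remote = c_remote / (1.0 - marginal_loss_factor)

      # remote stays marginal only while its loss-adjusted cost beats the local offer
      lmp_bus2 = lmp_remote if lmp_remote <= c_local else c_local
      print(round(f, 4), round(marginal_loss_factor, 4), round(lmp_bus2, 2))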

  14. A Mass Tracking Formulation for Bubbles in Incompressible Flow

    DTIC Science & Technology

    2012-10-14

    Extends incompressible flow to fully nonlinear compressible flow, including the effects of shocks and rarefactions, using the ideas from [19] to couple together incompressible flow with fully nonlinear compressible flow, and then subsequently makes a number of simplifying assumptions on the air flow.

  15. Simplifying Causal Complexity: How Interactions between Modes of Causal Induction and Information Availability Lead to Heuristic-Driven Reasoning

    ERIC Educational Resources Information Center

    Grotzer, Tina A.; Tutwiler, M. Shane

    2014-01-01

    This article considers a set of well-researched default assumptions that people make in reasoning about complex causality and argues that, in part, they result from the forms of causal induction that we engage in and the type of information available in complex environments. It considers how information often falls outside our attentional frame…

  16. Flux Jacobian Matrices For Equilibrium Real Gases

    NASA Technical Reports Server (NTRS)

    Vinokur, Marcel

    1990-01-01

    Improved formulation includes generalized Roe average and extension to three dimensions. Flux Jacobian matrices derived for use in numerical solutions of conservation-law differential equations of inviscid flows of ideal gases extended to real gases. Real-gas formulation of these matrices retains simplifying assumptions of thermodynamic and chemical equilibrium, but adds effects of vibrational excitation, dissociation, and ionization of gas molecules via general equation of state.

  17. Structural Code Considerations for Solar Rooftop Installations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dwyer, Stephen F.; Dwyer, Brian P.; Sanchez, Alfred

    2014-12-01

    Residential rooftop solar panel installations are limited in part by the high cost of structural related code requirements for field installation. Permitting solar installations is difficult because there is a belief among residential permitting authorities that typical residential rooftops may be structurally inadequate to support the additional load associated with a photovoltaic (PV) solar installation. Typical engineering methods utilized to calculate stresses on a roof structure involve simplifying assumptions that render a complex non-linear structure to a basic determinate beam. This method of analysis neglects the composite action of the entire roof structure, yielding a conservative analysis based on a rafter or top chord of a truss. Consequently, the analysis can result in an overly conservative structural assessment. A literature review was conducted to gain a better understanding of the conservative nature of the regulations and codes governing residential construction and the associated structural system calculations.

  18. 77 FR 54482 - Allocation of Costs Under the Simplified Methods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-05

    ... Allocation of Costs Under the Simplified Methods AGENCY: Internal Revenue Service (IRS), Treasury. ACTION... certain costs to the property and that allocate costs under the simplified production method or the simplified resale method. The proposed regulations provide rules for the treatment of negative additional...

  19. Simplified subsurface modelling: data assimilation and violated model assumptions

    NASA Astrophysics Data System (ADS)

    Erdal, Daniel; Lange, Natascha; Neuweiler, Insa

    2017-04-01

    Integrated models are gaining more and more attention in hydrological modelling as they can better represent the interaction between different compartments. Naturally, these models come along with larger numbers of unknowns and requirements on computational resources compared to stand-alone models. If large model domains are to be represented, e.g. on catchment scale, the resolution of the numerical grid needs to be reduced or the model itself needs to be simplified. Both approaches lead to a reduced ability to reproduce the present processes. This lack of model accuracy may be compensated by using data assimilation methods. In these methods observations are used to update the model states, and optionally model parameters as well, in order to reduce the model error induced by the imposed simplifications. What is unclear is whether these methods combined with strongly simplified models result in completely data-driven models or whether they can even be used to make adequate predictions of the model state for times when no observations are available. In the current work we consider the combined groundwater and unsaturated zone, which can be modelled in a physically consistent way using 3D models solving the Richards equation. For use in simple predictions, however, simpler approaches may be considered. The question investigated here is whether a simpler model, in which the groundwater is modelled as a horizontal 2D model and the unsaturated zone as a few sparse 1D columns, can be used within an Ensemble Kalman filter to give predictions of groundwater levels and unsaturated fluxes. This is tested under conditions where the feedback between the two model compartments is large (e.g. a shallow groundwater table) and the simplifying assumptions are clearly violated. Such a case may be a steep hill-slope or pumping wells, creating lateral fluxes in the unsaturated zone, or strongly heterogeneous structures creating unaccounted-for flows in both the saturated and unsaturated compartments. Under such circumstances, direct modelling using a simplified model will not provide good results. However, a more data-driven (e.g. grey box) approach, driven by the filter, may still provide an improved understanding of the system. Comparisons between full 3D simulations and simplified filter-driven models will be shown and the resulting benefits and drawbacks will be discussed.
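
    The filter machinery itself is standard; a bare-bones stochastic (perturbed-observation) Ensemble Kalman filter update of the kind that could drive such a simplified model is sketched below on synthetic numbers.

      import numpy as np

      rng = np.random.default_rng(0)

      def enkf_update(X, H, y, obs_std):
          """X: n_state x n_ens ensemble; H: linear observation operator; y: observations."""
          n_ens = X.shape[1]
          A = X - X.mean(axis=1, keepdims=True)          # state anomalies
          HX = H @ X
          HA = HX - HX.mean(axis=1, keepdims=True)       # observed anomalies
          P_yy = HA @ HA.T / (n_ens - 1) + obs_std**2 * np.eye(len(y))
          P_xy = A @ HA.T / (n_ens - 1)
          K = P_xy @ np.linalg.inv(P_yy)                 # Kalman gain
          Y = y[:, None] + obs_std * rng.standard_normal((len(y), n_ens))
          return X + K @ (Y - HX)                        # perturbed-obs update

      # toy: three groundwater heads, the first of which is observed
      X = rng.normal(10.0, 1.0, size=(3, 50))
      H = np.array([[1.0, 0.0, 0.0]])
      print(enkf_update(X, H, np.array([9.2]), obs_std=0.1).mean(axis=1))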

  1. Integrodifferential formulations of the continuous-time random walk for solute transport subject to bimolecular A + B → 0 reactions: From micro- to mesoscopic

    NASA Astrophysics Data System (ADS)

    Hansen, Scott K.; Berkowitz, Brian

    2015-03-01

    We develop continuous-time random walk (CTRW) equations governing the transport of two species that annihilate when in proximity to one another. In comparison with catalytic or spontaneous transformation reactions that have been previously considered in concert with CTRW, both species have spatially variant concentrations that require consideration. We develop two distinct formulations. The first treats transport and reaction microscopically, potentially capturing behavior at sharp fronts, but at the cost of being strongly nonlinear. The second, mesoscopic, formulation relies on a separation-of-scales technique we develop to separate microscopic-scale reaction and upscaled transport. This simplifies the governing equations and allows treatment of more general reaction dynamics, but requires stronger smoothness assumptions of the solution. The mesoscopic formulation is easily tractable using an existing solution from the literature (we also provide an alternative derivation), and the generalized master equation (GME) for particles undergoing A + B → 0 reactions is presented. We show that this GME simplifies, under appropriate circumstances, to both the GME for the unreactive CTRW and to the advection-dispersion-reaction equation. An additional major contribution of this work is on the numerical side: to corroborate our development, we develop an indirect particle-tracking-partial-integro-differential-equation (PIDE) hybrid verification technique which could be widely applicable in reactive anomalous transport. Numerical simulations support the mesoscopic analysis.

  2. A practical iterative PID tuning method for mechanical systems using parameter chart

    NASA Astrophysics Data System (ADS)

    Kang, M.; Cheong, J.; Do, H. M.; Son, Y.; Niculescu, S.-I.

    2017-10-01

    In this paper, we propose a method of iterative proportional-integral-derivative parameter tuning for mechanical systems that may possess hidden mechanical resonances, using a parameter chart which visualises the closed-loop characteristics in a 2D parameter space. We employ the working assumption that the mechanical systems considered have an upper limit on the derivative feedback gain; this substantially reduces the feasible region in the parameter chart and thus greatly simplifies gain selection. Then, a two-directional parameter search is carried out within the feasible region in order to find the best set of parameters. Experimental results show the validity of the assumption used and of the proposed parameter tuning method.
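
    The idea can be sketched as below: pin the derivative gain at an assumed upper limit, scan a (Kp, Ki) grid, and mark closed-loop stability for a toy plant with a lightly damped hidden resonance. The plant, resonance, and gain limit are invented for illustration.

      import numpy as np

      w, z, Kd = 20.0, 0.05, 1.0        # resonance and assumed derivative-gain cap

      # plant G(s) = w^2 / (s (s+1) (s^2 + 2 z w s + w^2))
      den = np.polymul([1.0, 1.0, 0.0], [1.0, 2.0 * z * w, w * w])

      def stable(Kp, Ki):
          # closed-loop characteristic polynomial: s*den(s) + w^2 (Kd s^2 + Kp s + Ki)
          char = np.polyadd(np.polymul([1.0, 0.0], den), w * w * np.array([Kd, Kp, Ki]))
          return np.all(np.roots(char).real < 0)

      chart = [["#" if stable(Kp, Ki) else "." for Kp in np.linspace(0.5, 30, 12)]
               for Ki in np.linspace(0.5, 30, 12)]
      print("\n".join("".join(row) for row in chart))   # '#' marks feasible gains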

  3. Extended Analytic Device Optimization Employing Asymptotic Expansion

    NASA Technical Reports Server (NTRS)

    Mackey, Jonathan; Sehirlioglu, Alp; Dynsys, Fred

    2013-01-01

    Analytic optimization of a thermoelectric junction often introduces several simplifying assumptions, including constant material properties, fixed known hot and cold shoe temperatures, and thermally insulated leg sides. In fact all of these simplifications will have an effect on device performance, ranging from negligible to significant depending on conditions. Numerical methods, such as Finite Element Analysis or iterative techniques, are often used to perform more detailed analysis and account for these simplifications. While numerical methods may stand as a suitable solution scheme, they are weak in gaining physical understanding and only serve to optimize through iterative searching techniques. Analytic and asymptotic expansion techniques can be used to solve the governing system of thermoelectric differential equations with fewer or less severe assumptions than the classic case. Analytic methods can provide meaningful closed form solutions and generate better physical understanding of the conditions for when simplifying assumptions may be valid. In obtaining the analytic solutions a set of dimensionless parameters, which characterize all thermoelectric couples, is formulated and provides the limiting cases for validating assumptions. Presentation includes optimization of both classic rectangular couples as well as practically and theoretically interesting cylindrical couples using optimization parameters physically meaningful to a cylindrical couple. Solutions incorporate the physical behavior for i) thermal resistance of hot and cold shoes, ii) variable material properties with temperature, and iii) lateral heat transfer through leg sides.

  4. Consistency tests for the extraction of the Boer-Mulders and Sivers functions

    NASA Astrophysics Data System (ADS)

    Christova, E.; Leader, E.; Stoilov, M.

    2018-03-01

    At present, the Boer-Mulders (BM) function for a given quark flavor is extracted from data on semi-inclusive deep inelastic scattering (SIDIS) using the simplifying assumption that it is proportional to the Sivers function for that flavor. In a recent paper, we suggested that the consistency of this assumption could be tested using information on so-called difference asymmetries i.e. the difference between the asymmetries in the production of particles and their antiparticles. In this paper, using the SIDIS COMPASS deuteron data on the ⟨cos ϕh⟩ , ⟨cos 2 ϕh⟩ and Sivers difference asymmetries, we carry out two independent consistency tests of the assumption of proportionality, but here applied to the sum of the valence-quark contributions. We find that such an assumption is compatible with the data. We also show that the proportionality assumptions made in the existing parametrizations of the BM functions are not compatible with our analysis, which suggests that the published results for the Boer-Mulders functions for individual flavors are unreliable. The ⟨cos ϕh⟩ and ⟨cos 2 ϕh⟩ asymmetries receive contributions also from the, in principle, calculable Cahn effect. We succeed in extracting the Cahn contributions from experiment (we believe for the first time) and compare with their calculated values, with interesting implications.

  5. Regional and longitudinal estimation of product lifespan distribution: a case study for automobiles and a simplified estimation method.

    PubMed

    Oguchi, Masahiro; Fuse, Masaaki

    2015-02-03

    Product lifespan estimates are important information for understanding progress toward sustainable consumption and for estimating the stocks and end-of-life flows of products. Published studies have reported actual product lifespans; however, quantitative data are still limited for many countries and years. This study presents regional and longitudinal estimation of the lifespan distribution of consumer durables, taking passenger cars as an example, and proposes a simplified method for estimating product lifespan distribution. We estimated lifespan distribution parameters for 17 countries based on the age profile of in-use cars. Sensitivity analysis demonstrated that the shape parameter of the lifespan distribution can be replaced by a constant value for all the countries and years. This enabled a simplified estimation that does not require detailed data on the age profile. Applying the simplified method, we estimated the trend in average lifespans of passenger cars from 2000 to 2009 for 20 countries. Average lifespan differed greatly between countries (9-23 years) and was increasing in many countries. This suggests that consumer behavior differs greatly among countries and has changed over time, even in developed countries. The results suggest that inappropriate assumptions of average lifespan may cause significant inaccuracy in estimating the stocks and end-of-life flows of products.
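
    In the same spirit, a simplified estimator can fix the distribution's shape and recover the scale from a single statistic. The sketch below assumes a Weibull lifespan model with a fixed shape parameter; both the shape value and the mean-lifespan figure are illustrative, not the paper's estimates.

      import math

      k = 2.6                            # assumed common Weibull shape parameter

      def scale_from_mean_lifespan(mean_lifespan):
          # mean = lam * Gamma(1 + 1/k)  =>  lam = mean / Gamma(1 + 1/k)
          return mean_lifespan / math.gamma(1.0 + 1.0 / k)

      def survival(t, lam):
          return math.exp(-((t / lam) ** k))   # fraction of cars still in use at age t

      lam = scale_from_mean_lifespan(14.0)     # assumed 14-year average lifespan
      print(lam, [round(survival(t, lam), 3) for t in (5, 10, 15, 20)])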

  6. Understanding the LIGO GW150914 event

    NASA Astrophysics Data System (ADS)

    Naselsky, Pavel; Jackson, Andrew D.; Liu, Hao

    2016-08-01

    We present a simplified method for the extraction of meaningful signals from Hanford and Livingston 32 second data for the GW150914 event made publicly available by the LIGO collaboration, and demonstrate its ability to reproduce the LIGO collaboration's own results quantitatively given the assumption that all narrow peaks in the power spectrum are a consequence of physically uninteresting signals and can be removed. After the clipping of these peaks and return to the time domain, the GW150914 event is readily distinguished from broadband background noise. This simple technique allows us to identify the GW150914 event without any assumption regarding its physical origin and with minimal assumptions regarding its shape. We also confirm that the LIGO GW150914 event is uniquely correlated in the Hanford and Livingston detectors for the full 4096 second data at the level of 6-7 σ with a temporal displacement of τ = 6.9 ± 0.4 ms. We have also identified a few events that are morphologically close to GW150914 but less strongly cross correlated with it.
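
    The clipping idea can be caricatured on synthetic data: zero out narrow spectral peaks, return to the time domain, and cross-correlate the two channels for the time offset. Everything below (chirp, line frequency, noise level) is invented and far cruder than the actual analysis.

      import numpy as np

      rng = np.random.default_rng(1)
      fs, n = 4096, 4096 * 8
      t = np.arange(n) / fs
      chirp = np.sin(2 * np.pi * (40 * t + 30 * t**2)) * np.exp(-((t - 4.0) / 0.3) ** 2)
      line = 0.8 * np.sin(2 * np.pi * 60 * t)              # narrow instrumental line
      h1 = chirp + line + 0.5 * rng.standard_normal(n)
      l1 = np.roll(chirp, int(0.007 * fs)) + line + 0.5 * rng.standard_normal(n)

      def clip_lines(x, thresh=5.0):
          X = np.fft.rfft(x)
          power = np.abs(X) ** 2
          X[power > thresh * np.median(power)] = 0.0       # clip narrow peaks
          return np.fft.irfft(X, n=len(x))

      a, b = clip_lines(h1), clip_lines(l1)
      lags = np.arange(-fs // 2, fs // 2)
      xc = [np.dot(a, np.roll(b, -k)) for k in lags]
      print("lag estimate [ms]:", 1000.0 * lags[int(np.argmax(xc))] / fs)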

  7. Impact of an equality constraint on the class-specific residual variances in regression mixtures: A Monte Carlo simulation study

    PubMed Central

    Kim, Minjung; Lamont, Andrea E.; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M. Lee

    2015-01-01

    Regression mixture models are a novel approach for modeling heterogeneous effects of predictors on an outcome. In the model-building process, residual variances are often disregarded and simplifying assumptions are made without thorough examination of the consequences. This simulation study investigated the impact of an equality constraint on the residual variances across latent classes. We examine the consequence of constraining the residual variances on class enumeration (finding the true number of latent classes) and on parameter estimates, under a number of different simulation conditions meant to reflect the types of heterogeneity likely to exist in applied analyses. Results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted estimated class sizes and showed the potential to greatly impact parameter estimates in each class. Results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions were made. PMID:26139512

  8. Some Basic Aspects of Magnetohydrodynamic Boundary-Layer Flows

    NASA Technical Reports Server (NTRS)

    Hess, Robert V.

    1959-01-01

    An appraisal is made of existing solutions of magnetohydrodynamic boundary-layer equations for stagnation flow and flat-plate flow, and some new solutions are given. Since an exact solution of the equations of magnetohydrodynamics requires complicated simultaneous treatment of the equations of fluid flow and of electromagnetism, certain simplifying assumptions are generally introduced. The full implications of these assumptions have not been brought out properly in several recent papers. It is shown in the present report that for the particular law of deformation which the magnetic lines are assumed to follow in these papers a magnet situated inside the missile nose would not be able to take up any drag forces; to do so it would have to be placed in the flow away from the nose. It is also shown that for the assumption that potential flow is maintained outside the boundary layer, the deformation of the magnetic lines is restricted to small values. The literature contains serious disagreements with regard to reductions in heat-transfer rates due to magnetic action at the nose of a missile, and these disagreements are shown to be mainly due to different interpretations of reentry conditions rather than more complicated effects. In the present paper the magnetohydrodynamic boundary-layer equation is also expressed in a simple form that is especially convenient for physical interpretation. This is done by adapting methods to magnetic forces which in the past have been used for forces due to gravitational or centrifugal action. The simplified approach is used to develop some new solutions of boundary-layer flow and to reinterpret certain solutions existing in the literature. An asymptotic boundary-layer solution representing a fixed velocity profile and shear is found. Special emphasis is put on estimating skin friction and heat-transfer rates.

  9. How to Decide on Modeling Details: Risk and Benefit Assessment.

    PubMed

    Özilgen, Mustafa

    Mathematical models based on thermodynamic, kinetic, heat, and mass transfer analysis are central to this chapter. Microbial growth, death, and enzyme inactivation models, and the modeling of material properties, including those pertinent to conduction and convection heating, mass transfer, such as diffusion and convective mass transfer, and thermodynamic properties, such as specific heat, enthalpy, Gibbs free energy of formation, and specific chemical exergy, are also needed in this task. The origins, simplifying assumptions, and uses of model equations are discussed in this chapter, together with their benefits. The simplified forms of these models are sometimes referred to as "laws," such as "the first law of thermodynamics" or "Fick's second law." Starting a modeling study with such "laws," without considering the conditions under which they are valid, runs the risk of ending up with erroneous conclusions. On the other hand, models started from fundamental concepts and simplified with appropriate considerations may offer explanations for phenomena that cannot be obtained just with measurements or unprocessed experimental data. The discussion presented here is strengthened with case studies and references to the literature.

  10. A cumulative energy demand indicator (CED), life cycle based, for industrial waste management decision making.

    PubMed

    Puig, Rita; Fullana-I-Palmer, Pere; Baquero, Grau; Riba, Jordi-Roger; Bala, Alba

    2013-12-01

    Life cycle thinking is a good approach for environmental decision support, although the complexity of Life Cycle Assessment (LCA) studies sometimes prevents their wide use. The purpose of this paper is to show how LCA methodology can be simplified to be more useful for certain applications. In order to improve waste management in Catalonia (Spain), a Cumulative Energy Demand indicator (LCA-based) has been used to obtain four mathematical models to help the government decide whether to prevent or allow a specific waste from going out of the borders. The conceptual equations and all the subsequent developments and assumptions made to obtain the simplified models are presented. One of the four models is discussed in detail, presenting the final simplified equation to be subsequently used by the government in decision making. The resulting model has been found to be scientifically robust, simple to implement and, above all, fulfilling its purpose: the limitation of waste transport out of Catalonia unless the waste recovery operations are significantly better and justify this transport. Copyright © 2013. Published by Elsevier Ltd.

  11. Test of a simplified modeling approach for nitrogen transfer in agricultural subsurface-drained catchments

    NASA Astrophysics Data System (ADS)

    Henine, Hocine; Julien, Tournebize; Jaan, Pärn; Ülo, Mander

    2017-04-01

    In agricultural areas, the nitrogen (N) pollution load to surface waters depends on land use, agricultural practices, and harvested N output, as well as on the hydrology and climate of the catchment. Most N transfer models require large, complex data sets, which are generally difficult to collect at larger scales (>km2). The main objective of this study is to carry out hydrological and geochemical modelling using a simplified data set (land use/crop, fertilizer input, N losses from plots). The modelling approach was tested in the subsurface-drained Orgeval catchment (Paris Basin, France) based on the following assumptions: subsurface tile drains are considered as a giant lysimeter system, and the N concentration in drain outlets is representative of agricultural practices upstream. Analysis of observed N load (90% of total N) shows 62% of export during the winter. We considered the prewinter nitrate (NO3) pool (PWNP) in soils at the beginning of the hydrological drainage season as a driving factor for N losses. PWNP results from the part of NO3 not used by crops or from the mineralization of organic matter during the preceding summer and autumn. Considering these assumptions, we used PWNP as simplified input data for the modelling of N transport. Thus, NO3 losses are mainly influenced by the denitrification capacity of soils and stream water. The well-known HYPE model was used to perform water and N losses modelling. The hydrological simulation was calibrated with the observation data at different sub-catchments. We performed a hydrograph separation validated on the thermal and isotopic tracer studies and the general knowledge of the behaviour of the Orgeval catchment. Our results show a good correlation between the model and the observations (a Nash-Sutcliffe coefficient of 0.75 for water discharge and 0.7 for N flux). Likewise, comparison of calibrated PWNP values with the results from a field survey (annual PWNP campaign) showed a significant positive correlation. One can conclude that the simplified modelling approach using PWNP as a driving factor for the evaluation of N losses from drained agricultural catchments gave satisfactory results, and we propose this approach for wider use.

  12. The influence of computational assumptions on analysing abdominal aortic aneurysm haemodynamics.

    PubMed

    Ene, Florentina; Delassus, Patrick; Morris, Liam

    2014-08-01

    The variation in computational assumptions for analysing abdominal aortic aneurysm haemodynamics can influence the desired output results and computational cost. Such assumptions for abdominal aortic aneurysm modelling include static/transient pressures, steady/transient flows and rigid/compliant walls. Six computational methods and these various assumptions were simulated and compared within a realistic abdominal aortic aneurysm model with and without intraluminal thrombus. A full transient fluid-structure interaction was required to analyse the flow patterns within the compliant abdominal aortic aneurysms models. Rigid wall computational fluid dynamics overestimates the velocity magnitude by as much as 40%-65% and the wall shear stress by 30%-50%. These differences were attributed to the deforming walls which reduced the outlet volumetric flow rate for the transient fluid-structure interaction during the majority of the systolic phase. Static finite element analysis accurately approximates the deformations and von Mises stresses when compared with transient fluid-structure interaction. Simplifying the modelling complexity reduces the computational cost significantly. In conclusion, the deformation and von Mises stress can be approximately found by static finite element analysis, while for compliant models a full transient fluid-structure interaction analysis is required for acquiring the fluid flow phenomenon. © IMechE 2014.

  13. Break-up of Gondwana and opening of the South Atlantic: Review of existing plate tectonic models

    USGS Publications Warehouse

    Ghidella, M.E.; Lawver, L.A.; Gahagan, L.M.

    2007-01-01

    each model. We also plot reconstructions at four selected epochs for all models using the same projection and scale to facilitate comparison. The diverse simplifying assumptions that need to be made in every case regarding plate fragmentation to account for the numerous syn-rift basins and periods of stretching are strong indicators that rigid plate tectonics is too simple a model for the present problem.

  14. Prediction of the turbulent wake with second-order closure

    NASA Technical Reports Server (NTRS)

    Taulbee, D. B.; Lumley, J. L.

    1981-01-01

    A turbulence was envisioned whose energy-containing scales would be Gaussian in the absence of inhomogeneity, gravity, etc. An equation was constructed for a function equivalent to the probability density, the second moment of which corresponded to the accepted modeled form of the Reynolds stress equation. The third-moment equations obtained from this were simplified by the assumption of weak inhomogeneity. Calculations are presented with this model, as well as interpretations of the results.

  15. Modeling Endovascular Coils as Heterogeneous Porous Media

    NASA Astrophysics Data System (ADS)

    Yadollahi Farsani, H.; Herrmann, M.; Chong, B.; Frakes, D.

    2016-12-01

    Minimally invasive surgeries are the state-of-the-art treatments for many pathologies. Treating brain aneurysms is no exception; invasive neurovascular clipping is no longer the only option and endovascular coiling has introduced itself as the most common treatment. Coiling isolates the aneurysm from blood circulation by promoting thrombosis within the aneurysm. One approach to studying intra-aneurysmal hemodynamics consists of virtually deploying finite element coil models and then performing computational fluid dynamics. However, this approach is often computationally expensive and requires extensive resources to perform. The porous medium approach has been considered as an alternative to the conventional coil modeling approach because it lessens the complexities of computational fluid dynamics simulations by reducing the number of mesh elements needed to discretize the domain. There have been a limited number of attempts at treating the endovascular coils as homogeneous porous media. However, the heterogeneity associated with coil configurations requires a more accurately defined porous medium in which the porosity and permeability change throughout the domain. We implemented this approach by introducing a lattice of sample volumes and utilizing techniques available in the field of interactive computer graphics. We observed that the introduction of the heterogeneity assumption was associated with significant changes in simulated aneurysmal flow velocities as compared to the homogeneous assumption case. Moreover, as the sample volume size was decreased, the flow velocities approached an asymptotic value, showing the importance of the sample volume size selection. These results demonstrate that the homogeneous assumption for porous media that are inherently heterogeneous can lead to considerable errors. Additionally, this modeling approach allowed us to simulate post-treatment flows without considering the explicit geometry of a deployed endovascular coil mass, greatly simplifying computation.

  16. Experimental validation of finite element modelling of a modular metal-on-polyethylene total hip replacement.

    PubMed

    Hua, Xijin; Wang, Ling; Al-Hajjar, Mazen; Jin, Zhongmin; Wilcox, Ruth K; Fisher, John

    2014-07-01

    Finite element models are becoming increasingly useful tools to conduct parametric analysis, design optimisation and pre-clinical testing for hip joint replacements. However, the verification of the finite element model is critically important. The purposes of this study were to develop a three-dimensional anatomic finite element model for a modular metal-on-polyethylene total hip replacement for predicting its contact mechanics and to conduct experimental validation for a simple finite element model which was simplified from the anatomic finite element model. An anatomic modular metal-on-polyethylene total hip replacement model (anatomic model) was first developed and then simplified with reasonable accuracy to a simple modular total hip replacement model (simplified model) for validation. The contact areas on the articulating surface of three polyethylene liners of modular metal-on-polyethylene total hip replacement bearings with different clearances were measured experimentally in the Leeds ProSim hip joint simulator under a series of loading conditions and different cup inclination angles. The contact areas predicted from the simplified model were then compared with that measured experimentally under the same conditions. The results showed that the simplification made for the anatomic model did not change the predictions of contact mechanics of the modular metal-on-polyethylene total hip replacement substantially (less than 12% for contact stresses and contact areas). Good agreements of contact areas between the finite element predictions from the simplified model and experimental measurements were obtained, with maximum difference of 14% across all conditions considered. This indicated that the simplification and assumptions made in the anatomic model were reasonable and the finite element predictions from the simplified model were valid. © IMechE 2014.

  17. Effect of Selected Modeling Assumptions on Subsurface Radionuclide Transport Projections for the Potential Environmental Management Disposal Facility at Oak Ridge, Tennessee

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Painter, Scott L.

    2016-06-28

    The Department of Energy’s Office of Environmental Management recently revised a Remedial Investigation/Feasibility Study (RI/FS) that included an analysis of subsurface radionuclide transport at a potential new Environmental Management Disposal Facility (EMDF) in East Bear Creek Valley near Oak Ridge, Tennessee. The effect of three simplifying assumptions used in the RI/FS analyses is investigated using the same subsurface pathway conceptualization but with more flexible modeling tools. Neglect of vadose zone dispersion was found to be conservative or non-conservative, depending on the retarded travel time and the half-life. For a given equilibrium distribution coefficient, a relatively narrow range of half-life was identified for which neglect of vadose zone dispersion is non-conservative and radionuclide discharge into surface water is non-negligible. However, there are two additional conservative simplifications in the reference case that compensate for the non-conservative effect of neglecting vadose zone dispersion: the use of a steady infiltration rate and vadose zone velocity, and the way equilibrium sorption is used to represent transport in the fractured material of the saturated aquifer. With more realistic representations of all three processes, the RI/FS reference case was found to either provide a reasonably good approximation to the peak concentration or to be significantly conservative (pessimistic) for all parameter combinations considered.

  18. Study of photon emission by electron capture during solar nuclei acceleration, 1: Temperature-dependent cross section for charge changing processes

    NASA Technical Reports Server (NTRS)

    Perez-Peraza, J.; Alvarez, M.; Laville, A.; Gallegos, A.

    1985-01-01

    The study of charge changing cross sections of fast ions colliding with matter provides the fundamental basis for the analysis of the charge states produced in such interactions. Given the high degree of complexity of the phenomena, there is no theoretical treatment able to give a comprehensive description. In fact, the processes involved are very dependent on the basic parameters of the projectile, such as velocity, charge state, and atomic number, and on the target parameters, namely the physical state (molecular, atomic or ionized matter) and density. The target velocity may also have an incidence on the process, through the temperature of the traversed medium. In addition, multiple electron transfer in single collisions further complicates the phenomena. In simplified cases, such as protons moving through atomic hydrogen, considerable agreement has been obtained between theory and experiment. In general, however, the available theoretical approaches have only limited validity in restricted regions of the basic parameters. Since most measurements of charge changing cross sections are performed in atomic matter at ambient temperature, models are commonly based on the assumption of targets at rest; however, at astrophysical scales, temperature displays a wide range in atomic and ionized matter. Therefore, due to the lack of experimental data, an attempt is made here to quantify temperature dependent cross sections on the basis of somewhat arbitrary, but physically reasonable, assumptions.

  19. Design, dynamics and control of an Adaptive Singularity-Free Control Moment Gyroscope actuator for microspacecraft Attitude Determination and Control System

    NASA Astrophysics Data System (ADS)

    Viswanathan, Sasi Prabhakaran

    Design, dynamics, control and implementation of a novel spacecraft attitude control actuator called the "Adaptive Singularity-free Control Moment Gyroscope" (ASCMG) is presented in this dissertation. In order to construct a comprehensive attitude dynamics model of a spacecraft with internal actuators, the dynamics of a spacecraft with an ASCMG is obtained in the framework of geometric mechanics using the principles of variational mechanics. The resulting dynamics model is general and complete, as it relaxes the simplifying assumptions made in prior literature on Control Moment Gyroscopes (CMGs) and it also addresses the adaptive parameters in the dynamics formulation. The simplifying assumptions include perfect axisymmetry of the rotor and gimbal structures, perfect alignment of the centers of mass of the gimbal and the rotor, etc. This set of simplifying assumptions imposed on the design and dynamics of CMGs leads to adverse effects on their performance and results in high manufacturing cost. The dynamics so obtained shows the complex nonlinear coupling between the internal degrees of freedom associated with an ASCMG and the spacecraft bus's attitude motion. By default, the general ASCMG cluster can function as a Variable Speed Control Moment Gyroscope, and can be reduced to function in CMG mode by spinning the rotor at constant speed, and it is shown that even when operated in CMG mode, the cluster can be free from kinematic singularities. This dynamics model is then extended to include the effects of multiple ASCMGs placed in the spacecraft bus, and sufficient conditions for non-singular ASCMG cluster configurations are obtained to operate the cluster both in VSCMG and CMG modes. The general dynamics model of the ASCMG is then reduced to that of conventional VSCMGs and CMGs by imposing the standard set of simplifying assumptions used in prior literature. The adverse effects of the simplifying assumptions that lead to the complexities in conventional CMG design, and how they lead to CMG singularities, are described. General ideas on control of the angular momentum of the spacecraft using changes in the momentum variables of a finite number of ASCMGs are provided. Control schemes for agile and precise attitude maneuvers using an ASCMG cluster in the absence of external torques, and when the total angular momentum of the spacecraft is zero, are presented for both constant speed and variable speed modes. A Geometric Variational Integrator (GVI) that preserves the geometry of the state space and the conserved norm of the total angular momentum is constructed for numerical simulation and microcontroller implementation of the control scheme. The GVI is obtained by discretizing the Lagrangian of the multibody systems, in which the rigid body attitude is globally represented on the Lie group of rigid body rotations. Hardware and software architecture of a novel spacecraft Attitude Determination and Control System (ADCS) based on commercial smartphones and a bare minimum hardware prototype of an ASCMG using low cost COTS components is also described. A lightweight, dynamics model-free Variational Attitude Estimator (VAE) suitable for smartphone implementation is employed for attitude determination and the attitude control is performed by ASCMG actuators. The VAE scheme presented here is implemented and validated onboard an Unmanned Aerial Vehicle (UAV) platform and the real-time performance is analyzed.
On-board sensing, data acquisition, data uplink/downlink, state estimation and real-time feedback control objectives can be performed using this novel spacecraft ADCS. The mechatronics realization of the attitude determination through variational attitude estimation scheme and control implementation using ASCMG actuators are presented here. Experimental results of the attitude estimation (filtering) scheme using smartphone sensors as an Inertial Measurement Unit (IMU) on the Hardware In the Loop (HIL) simulator testbed are given. These results, obtained in the Spacecraft Guidance, Navigation and Control Laboratory at New Mexico State University, demonstrate the performance of this estimation scheme with the noisy raw data from the smartphone sensors. Keywords: Spacecraft, momentum exchange devices, control moment gyroscope, variational mechanics, geometric mechanics, variational integrators, attitude determination, attitude control, ADCS, estimation, ASCMG, VSCMG, cubesat, mechatronics, smartphone, Android, MEMS sensor, embedded programming, microcontroller, brushless DC drives, HIL simulation.

  1. Microphysical response of cloud droplets in a fluctuating updraft. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Harding, D. D.

    1977-01-01

    The effect of a fluctuating updraft upon a distribution of cloud droplets is examined. Computations are performed for fourteen vertical velocity patterns; each allows a closed parcel of cloud air to undergo downward as well as upward motion. Droplet solution and curvature effects are included. The classical equations for the growth rate of an individual droplet by vapor condensation rely on simplifying assumptions. Those assumptions are isolated and examined. A unique approach is presented in which all energy sources and sinks of a droplet may be considered; it is termed the explicit model. It is speculated that the explicit model may enhance the growth of large droplets at greater heights. Such a model is beneficial to studies of pollution scavenging and acid rain.
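
    For reference, the classical simplified growth law the thesis starts from is r dr/dt = G (S - 1); the sketch below integrates it for a few droplets under a fluctuating supersaturation, with the growth parameter and forcing chosen arbitrarily and the solution and curvature corrections deliberately omitted.

      import numpy as np

      G_PARAM = 1.0e-10                 # m^2/s, assumed growth parameter
      dt, steps = 0.1, 6000

      radii = np.array([1.0e-6, 2.0e-6, 5.0e-6])    # initial droplet radii [m]
      for i in range(steps):
          # fluctuating updraft -> oscillating supersaturation (assumed forcing)
          S = 1.0 + 0.005 * np.sin(2.0 * np.pi * i * dt / 120.0)
          # r dr/dt = G (S - 1)  <=>  d(r^2)/dt = 2 G (S - 1)
          radii = np.sqrt(np.maximum(radii**2 + 2.0 * G_PARAM * (S - 1.0) * dt, 1e-18))

      print(radii * 1e6, "micrometres")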

  2. Fitness extraction and the conceptual foundations of political biology.

    PubMed

    Boari, Mircea

    2005-01-01

    In well known formulations, political science, classical and neoclassical economics, and political economy have recognized as foundational a human impulse toward self-preservation. To employ this concept, modern social-sciences theorists have made simplifying assumptions about human nature and have then built elaborately upon their more incisive simplifications. Advances in biology, including advances in evolutionary theory, notably inclusive-fitness theory, have for decades now encouraged the reconsideration of such assumptions and, more ambitiously, the reconciliation of the social and life sciences. I ask if this reconciliation is feasible and test a path to the unification of politics and biology, called here "political biology." Two new notions, "fitness extraction" and "fitness exchange," are defined, then differentiated from each other, and lastly contrasted to cooperative gaming, the putative essential element of economics.

  3. HZETRN: A heavy ion/nucleon transport code for space radiations

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Chun, Sang Y.; Badavi, Forooz F.; Townsend, Lawrence W.; Lamkin, Stanley L.

    1991-01-01

    The galactic heavy ion transport code (GCRTRN) and the nucleon transport code (BRYNTRN) are integrated into a code package (HZETRN). The code package is computationally efficient and capable of operating in an engineering design environment for manned deep space mission studies. The nuclear data set used by the code is discussed, including current limitations. Although the heavy ion nuclear cross sections are assumed constant, the nucleon-nuclear cross sections of BRYNTRN with full energy dependence are used. The relation of the final code to the Boltzmann equation is discussed in the context of simplifying assumptions. Error generation and propagation are discussed, and comparison is made with simplified analytic solutions to test the numerical accuracy of the final results. A brief discussion of biological issues and their impact on fundamental developments in shielding technology is given.

  4. Characterizing dark matter at the LHC in Drell-Yan events

    NASA Astrophysics Data System (ADS)

    Capdevilla, Rodolfo M.; Delgado, Antonio; Martin, Adam; Raj, Nirmal

    2018-02-01

    Spectral features in LHC dileptonic events may signal radiative corrections coming from new degrees of freedom, notably dark matter and mediators. Using simplified models, and under a set of simplifying assumptions, we show how these features can reveal the fundamental properties of the dark sector, such as self-conjugation, spin and mass of dark matter, and the quantum numbers of the mediator. Distributions of both the invariant mass mℓℓ and the Collins-Soper scattering angle cos θCS are studied to pinpoint these properties. We derive constraints on the models from LHC measurements of mℓℓ and cos θCS, which are competitive with direct detection and jets+MET searches. We find that in certain scenarios the cos θCS spectrum provides the strongest bounds, underlining the importance of scattering angle measurements for nonresonant new physics.

  5. Understanding young stars - A history

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stahler, S.W.

    1988-12-01

    The history of pre-main-sequence theory is briefly reviewed. The paper of Henyey et al. (1955) is seen as an important transitional work, one which abandoned previous simplifying assumptions yet failed to incorporate newer insights into the surface structure of late-type stars. The subsequent work of Hayashi and his contemporaries is outlined, with an emphasis on the underlying physical principles. Finally, the recent impact of protostar theory is discussed, and speculations are offered on future developments. 56 references.

  6. Investigating outliers to improve conceptual models of bedrock aquifers

    NASA Astrophysics Data System (ADS)

    Worthington, Stephen R. H.

    2018-06-01

    Numerical models play a prominent role in hydrogeology, with simplifying assumptions being inevitable when implementing these models. However, there is a risk of oversimplification, where important processes become neglected. Such processes may be associated with outliers, and consideration of outliers can lead to an improved scientific understanding of bedrock aquifers. Using rigorous logic to investigate outliers can help to explain fundamental scientific questions such as why there are large variations in permeability between different bedrock lithologies.

  7. On numerical modeling of one-dimensional geothermal histories

    USGS Publications Warehouse

    Haugerud, R.A.

    1989-01-01

    Numerical models of one-dimensional geothermal histories are one way of understanding the relations between tectonics and transient thermal structure in the crust. Such models can be powerful tools for interpreting geochronologic and thermobarometric data. A flexible program to calculate these models on a microcomputer is available and examples of its use are presented. Potential problems with this approach include the simplifying assumptions that are made, limitations of the numerical techniques, and the neglect of convective heat transfer. © 1989.
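
    The numerical core of such a model is a one-dimensional transient conduction solver; a minimal explicit finite-difference sketch (assumed diffusivity, grid, and an instantaneous-exhumation initial condition) is:

      import numpy as np

      kappa = 1.0e-6               # thermal diffusivity [m^2/s] (assumed)
      dz, nz = 500.0, 61           # 30 km crustal column
      dt = 0.2 * dz**2 / kappa     # stable explicit step (< 0.5 dz^2 / kappa)

      # geotherm just after instantly stripping ~5 km of overburden (assumed)
      T = np.linspace(100.0, 700.0, nz)
      T_surface, basal_grad = 0.0, 0.02       # fixed surface T; basal gradient [C/m]

      seconds = 1.0e6 * 3.15e7                # relax for 1 Myr
      for _ in range(int(seconds // dt)):
          Tn = T.copy()
          Tn[1:-1] += kappa * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
          Tn[0] = T_surface
          Tn[-1] = Tn[-2] + basal_grad * dz   # constant basal heat-flow gradient
          T = Tn

      print(T[::10])                          # temperatures every 5 km after 1 Myr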

  8. The Boltzmann equation in the difference formulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szoke, Abraham; Brooks III, Eugene D.

    2015-05-06

    First we recall the assumptions that are needed for the validity of the Boltzmann equation and for the validity of the compressible Euler equations. We then present the difference formulation of these equations and make a connection with the time-honored Chapman-Enskog expansion. We discuss the hydrodynamic limit and calculate the thermal conductivity of a monatomic gas, using a simplified approximation for the collision term. Our formulation is more consistent and simpler than the traditional derivation.

  9. Comparison of an Agent-based Model of Disease Propagation with the Generalised SIR Epidemic Model

    DTIC Science & Technology

    2009-08-01

    has become a practical method for conducting Epidemiological Modelling. In the agent-based approach the whole township can be modelled as a system of...SIR system was initially developed based on a very simplified model of social interaction. For instance an assumption of uniform population mixing was...simulating the progress of a disease within a host and of transmission between hosts is based upon Transportation Analysis and Simulation System
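
    For reference, a minimal deterministic SIR integration under the uniform-mixing assumption that the abstract contrasts with agent-based modelling; all parameter values below are illustrative only.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def sir(t, y, beta, gamma):
        # classic SIR with uniform ("homogeneous") population mixing
        S, I, R = y
        N = S + I + R
        return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

    # R0 = beta/gamma = 3 in this illustrative run
    sol = solve_ivp(sir, (0.0, 160.0), [9999.0, 1.0, 0.0],
                    args=(0.3, 0.1), dense_output=True)
    t = np.linspace(0.0, 160.0, 400)
    S, I, R = sol.sol(t)
    print("peak infected:", I.max())
    ```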

  10. Gas Diffusion in Fluids Containing Bubbles

    NASA Technical Reports Server (NTRS)

    Zak, M.; Weinberg, M. C.

    1982-01-01

    Mathematical model describes movement of gases in fluid containing many bubbles. Model makes it possible to predict growth and shrinkage of bubbles as function of time. New model overcomes complexities involved in analysis of varying conditions by making two simplifying assumptions. It treats bubbles as point sources, and it employs approximate expression for gas concentration gradient at liquid/bubble interface. In particular, it is expected to help in developing processes for production of high-quality optical glasses in space.

  11. Edemagenic gain and interstitial fluid volume regulation.

    PubMed

    Dongaonkar, R M; Quick, C M; Stewart, R H; Drake, R E; Cox, C S; Laine, G A

    2008-02-01

    Under physiological conditions, interstitial fluid volume is tightly regulated by balancing microvascular filtration and lymphatic return to the central venous circulation. Even though microvascular filtration and lymphatic return are governed by conservation of mass, their interaction can result in exceedingly complex behavior. Without making simplifying assumptions, investigators must solve the fluid balance equations numerically, which limits the generality of the results. We thus made critical simplifying assumptions to develop a simple solution to the standard fluid balance equations that is expressed as an algebraic formula. Using a classical approach to describe systems with negative feedback, we formulated our solution as a "gain" relating the change in interstitial fluid volume to a change in effective microvascular driving pressure. The resulting "edemagenic gain" is a function of microvascular filtration coefficient (K(f)), effective lymphatic resistance (R(L)), and interstitial compliance (C). This formulation suggests two types of gain: "multivariate" dependent on C, R(L), and K(f), and "compliance-dominated" approximately equal to C. The latter forms a basis of a novel method to estimate C without measuring interstitial fluid pressure. Data from ovine experiments illustrate how edemagenic gain is altered with pulmonary edema induced by venous hypertension, histamine, and endotoxin. Reformulation of the classical equations governing fluid balance in terms of edemagenic gain thus yields new insight into the factors affecting an organ's susceptibility to edema.
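
    One way the algebraic gain can be reconstructed, under stated assumptions and not necessarily the authors' exact algebra: balancing filtration K_f(P_c - P_i) against lymph return (P_i - P_cv)/R_L at steady state, with a linear interstitial compliance V = C*P_i, gives dV/dP_c = C*K_f*R_L/(1 + K_f*R_L), which tends to C (the "compliance-dominated" case) when K_f*R_L is large.

    ```python
    def edemagenic_gain(K_f, R_L, C):
        """Multivariate edemagenic gain dV/dP (reconstruction, see lead-in)."""
        return C * K_f * R_L / (1.0 + K_f * R_L)

    # compliance-dominated limit: K_f * R_L >> 1 gives gain ~ C
    print(edemagenic_gain(K_f=10.0, R_L=100.0, C=2.0))  # ~2.0
    ```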

  12. Residential scene classification for gridded population sampling in developing countries using deep convolutional neural networks on satellite imagery.

    PubMed

    Chew, Robert F; Amer, Safaa; Jones, Kasey; Unangst, Jennifer; Cajka, James; Allpress, Justine; Bruhn, Mark

    2018-05-09

    Conducting surveys in low- and middle-income countries is often challenging because many areas lack a complete sampling frame, have outdated census information, or have limited data available for designing and selecting a representative sample. Geosampling is a probability-based, gridded population sampling method that addresses some of these issues by using geographic information system (GIS) tools to create logistically manageable area units for sampling. GIS grid cells are overlaid to partition a country's existing administrative boundaries into area units that vary in size from 50 m × 50 m to 150 m × 150 m. To avoid sending interviewers to unoccupied areas, researchers manually classify grid cells as "residential" or "nonresidential" through visual inspection of aerial images. "Nonresidential" units are then excluded from sampling and data collection. This process of manually classifying sampling units has drawbacks since it is labor intensive, prone to human error, and creates the need for simplifying assumptions during calculation of design-based sampling weights. In this paper, we discuss the development of a deep learning classification model to predict whether aerial images are residential or nonresidential, thus reducing manual labor and eliminating the need for simplifying assumptions. On our test sets, the model performs comparably to a human-level baseline in both Nigeria (94.5% accuracy) and Guatemala (96.4% accuracy), and outperforms baseline machine learning models trained on crowdsourced or remote-sensed geospatial features. Additionally, our findings suggest that this approach can work well in new areas with relatively modest amounts of training data. Gridded population sampling methods like geosampling are becoming increasingly popular in countries with outdated or inaccurate census data because of their timeliness, flexibility, and cost. Using deep learning models directly on satellite images, we provide a novel method for sample frame construction that identifies residential gridded aerial units. In cases where manual classification of satellite images is used to (1) correct for errors in gridded population data sets or (2) classify grids where population estimates are unavailable, this methodology can help reduce annotation burden with comparable quality to human analysts.
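
    A minimal PyTorch sketch of a binary residential/nonresidential patch classifier; the architecture, input size, and all hyperparameters here are assumptions, not the authors' model.

    ```python
    import torch
    import torch.nn as nn

    class ResidentialNet(nn.Module):
        """Tiny CNN for residential/nonresidential image patches (illustrative)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, 2)  # residential / nonresidential

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = ResidentialNet()
    logits = model(torch.randn(4, 3, 128, 128))  # a batch of RGB patches
    print(logits.shape)  # torch.Size([4, 2])
    ```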

  13. Dyadic Green's function of an eccentrically stratified sphere.

    PubMed

    Moneda, Angela P; Chrissoulidis, Dimitrios P

    2014-03-01

    The electric dyadic Green's function (dGf) of an eccentrically stratified sphere is built by use of the superposition principle, dyadic algebra, and the addition theorem of vector spherical harmonics. The end result of the analytical formulation is a set of linear equations for the unknown vector wave amplitudes of the dGf. The unknowns are calculated by truncation of the infinite sums and matrix inversion. The theory is exact, as no simplifying assumptions are required in any one of the analytical steps leading to the dGf, and it is general in the sense that any number, position, size, and electrical properties can be considered for the layers of the sphere. The point source can be placed outside of or in any lossless part of the sphere. Energy conservation, reciprocity, and other checks verify that the dGf is correct. A numerical application is made to a stratified sphere made of gold and glass, which operates as a lens.

  14. An infectious way to teach students about outbreaks.

    PubMed

    Cremin, Íde; Watson, Oliver; Heffernan, Alastair; Imai, Natsuko; Ahmed, Norin; Bivegete, Sandra; Kimani, Teresia; Kyriacou, Demetris; Mahadevan, Preveina; Mustafa, Rima; Pagoni, Panagiota; Sophiea, Marisa; Whittaker, Charlie; Beacroft, Leo; Riley, Steven; Fisher, Matthew C

    2018-06-01

    The study of infectious disease outbreaks is required to train today's epidemiologists. A typical way to introduce and explain key epidemiological concepts is through the analysis of a historical outbreak. There are, however, few training options that explicitly utilise real-time simulated stochastic outbreaks where the participants themselves comprise the dataset they subsequently analyse. In this paper, we present a teaching exercise in which an infectious disease outbreak is simulated over a five-day period and subsequently analysed. We iteratively developed the teaching exercise to offer additional insight into analysing an outbreak. An R package for visualisation, analysis and simulation of the outbreak data was developed to accompany the practical to reinforce learning outcomes. Computer simulations of the outbreak revealed deviations from observed dynamics, highlighting how simplifying assumptions conventionally made in mathematical models often differ from reality. Here we provide a pedagogical tool for others to use and adapt in their own settings. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
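
    As a complement to the classroom exercise, a minimal chain-binomial (Reed-Frost) stochastic outbreak simulator; this is a generic stand-in sketch, not the authors' R package, and the parameter values are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def reed_frost(S0, I0, p, n_steps):
        """Chain-binomial (Reed-Frost) outbreak: p is the per-pair,
        per-generation transmission probability."""
        S, I, history = S0, I0, [(S0, I0)]
        for _ in range(n_steps):
            if I == 0:
                break
            p_inf = 1.0 - (1.0 - p) ** I      # prob. a susceptible is infected
            new_I = rng.binomial(S, p_inf)
            S, I = S - new_I, new_I
            history.append((S, I))
        return history

    print(reed_frost(S0=99, I0=1, p=0.02, n_steps=20))
    ```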

  15. The role of sugar-sweetened beverage consumption in adolescent obesity: a review of the literature.

    PubMed

    Harrington, Susan

    2008-02-01

    Soft drink consumption has increased by 300% in the past 20 years, and 56-85% of children in school consume at least one soft drink daily. The odds of becoming obese among children increase 1.6 times for each additional can or glass of sugar-sweetened drink consumed beyond their usual daily intake of the beverage. Soft drinks currently constitute the leading source of added sugars in the diet and exceed the U.S. Department of Agriculture's recommended total sugar consumption for adolescents. Given the increase in adolescent obesity and the concurrent increase in consumption of sugar-sweetened beverages (SSB), these parallel trends suggest a relationship between the two variables. SSB, classified as high-glycemic index (GI) liquids, increase postprandial blood glucose levels and decrease insulin sensitivity. Additionally, high-GI drinks lead to decreased satiety and subsequent overeating. Low-GI beverages, by contrast, delay the return of hunger, permitting greater flexibility in the amounts and frequency of servings. As a single intervention, elimination or marked reduction of SSB consumption may serve to decrease caloric intake, increase satiety levels, decrease tendencies toward insulin resistance, and simplify the process of weight management in this population.

  16. A Summary of Revisions Applied to a Turbulence Response Analysis Method for Flexible Aircraft Configurations

    NASA Technical Reports Server (NTRS)

    Funk, Christie J.; Perry, Boyd, III; Silva, Walter A.; Newman, Brett

    2014-01-01

    A software program and associated methodology to study gust loading on aircraft exists for a classification of geometrically simplified flexible configurations. This program consists of a simple aircraft response model with two rigid and three flexible symmetric degrees-of-freedom and allows for the calculation of various airplane responses due to a discrete one-minus-cosine gust as well as continuous turbulence. Simplifications, assumptions, and opportunities for potential improvements pertaining to the existing software program are first identified; then a revised version of the original software tool is developed with improved methodology to include more complex geometries, additional excitation cases, and additional output data so as to provide a more useful and precise tool for gust load analysis. In order to improve the original software program and enhance its usefulness, a wing control surface and a horizontal tail control surface are added, an extended application of the discrete one-minus-cosine gust input is employed, a supplemental continuous turbulence spectrum is implemented, and a capability to animate the total vehicle deformation response to gust inputs is included. These revisions and enhancements are implemented and an analysis of the results is used to validate the modifications.
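
    A small sketch of the discrete one-minus-cosine gust profile commonly used in such analyses; the symbol names and the zero-outside-the-gust convention here are assumptions.

    ```python
    import numpy as np

    def one_minus_cosine_gust(s, U_ds, H):
        """Gust velocity U(s) = (U_ds / 2) * (1 - cos(pi * s / H)).

        s: distance penetrated into the gust, H: gust gradient distance
        (half the full gust length); taken as zero outside 0 <= s <= 2H.
        """
        s = np.asarray(s, dtype=float)
        U = 0.5 * U_ds * (1.0 - np.cos(np.pi * s / H))
        return np.where((s >= 0.0) & (s <= 2.0 * H), U, 0.0)

    s = np.linspace(-10.0, 120.0, 14)
    print(one_minus_cosine_gust(s, U_ds=15.0, H=50.0))  # peak U_ds at s = H
    ```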

  17. Impact of an equality constraint on the class-specific residual variances in regression mixtures: A Monte Carlo simulation study.

    PubMed

    Kim, Minjung; Lamont, Andrea E; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M Lee

    2016-06-01

    Regression mixture models are a novel approach to modeling the heterogeneous effects of predictors on an outcome. In the model-building process, residual variances are often disregarded and simplifying assumptions are made without thorough examination of the consequences. In this simulation study, we investigated the impact of an equality constraint on the residual variances across latent classes. We examined the consequences of constraining the residual variances on class enumeration (finding the true number of latent classes) and on the parameter estimates, under a number of different simulation conditions meant to reflect the types of heterogeneity likely to exist in applied analyses. The results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly affected the estimated class sizes and showed the potential to greatly affect the parameter estimates in each class. These results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions are made.
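
    The analogous constraint is easy to demonstrate in ordinary (non-regression) Gaussian mixtures with scikit-learn, where covariance_type="tied" imposes equal variances across classes; a sketch of BIC-based class enumeration under both settings, with made-up data whose two classes have unequal variances.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # two latent classes with deliberately unequal residual variances
    X = np.concatenate([rng.normal(0.0, 1.0, 500),
                        rng.normal(4.0, 3.0, 500)]).reshape(-1, 1)

    for cov in ("tied", "full"):   # "tied" imposes the equality constraint
        bics = [GaussianMixture(n_components=k, covariance_type=cov,
                                random_state=0).fit(X).bic(X)
                for k in (1, 2, 3)]
        print(cov, "best K:", int(np.argmin(bics)) + 1,
              [round(b) for b in bics])
    ```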

  18. Cost-effectiveness of human papillomavirus vaccination in the United States.

    PubMed

    Chesson, Harrell W; Ekwueme, Donatus U; Saraiya, Mona; Markowitz, Lauri E

    2008-02-01

    We describe a simplified model, based on the current economic and health effects of human papillomavirus (HPV), to estimate the cost-effectiveness of HPV vaccination of 12-year-old girls in the United States. Under base-case parameter values, the estimated cost per quality-adjusted life year gained by vaccination in the context of current cervical cancer screening practices in the United States ranged from $3,906 to $14,723 (2005 US dollars), depending on factors such as whether herd immunity effects were assumed; the types of HPV targeted by the vaccine; and whether the benefits of preventing anal, vaginal, vulvar, and oropharyngeal cancers were included. The results of our simplified model were consistent with published studies based on more complex models when key assumptions were similar. This consistency is reassuring because models of varying complexity will be essential tools for policy makers in the development of optimal HPV vaccination strategies.
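
    The headline quantity is a standard incremental cost-effectiveness ratio; a trivial sketch with invented per-person numbers, not the paper's inputs.

    ```python
    def icer(cost_new, qaly_new, cost_old, qaly_old):
        """Incremental cost-effectiveness ratio ($ per QALY gained)."""
        return (cost_new - cost_old) / (qaly_new - qaly_old)

    # hypothetical per-person lifetime costs and QALYs, for illustration only
    print(icer(cost_new=12000.0, qaly_new=10.5,
               cost_old=5000.0, qaly_old=10.0))   # 14000 $/QALY
    ```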

  19. Differential molar heat capacities to test ideal solubility estimations.

    PubMed

    Neau, S H; Bhandarkar, S V; Hellmuth, E W

    1997-05-01

    Calculation of the ideal solubility of a crystalline solute in a liquid solvent requires knowledge of the difference in the molar heat capacity at constant pressure of the solid and the supercooled liquid forms of the solute, delta Cp. Since this parameter is not usually known, two assumptions have been used to simplify the expression. The first is that delta Cp can be considered equal to zero; the alternate assumption is that the molar entropy of fusion, delta Sf, is an estimate of delta Cp. Reports claiming the superiority of one assumption over the other, on the basis of calculations done using experimentally determined parameters, have appeared in the literature. The validity of the assumptions in predicting the ideal solubility of five structurally unrelated compounds of pharmaceutical interest, with melting points in the range 420 to 470 K, was evaluated in this study. Solid and liquid heat capacities of each compound near its melting point were determined using differential scanning calorimetry. Linear equations describing the heat capacities were extrapolated to the melting point to generate the differential molar heat capacity. Linear data were obtained for both crystal and liquid heat capacities of sample and test compounds. For each sample, ideal solubility at 298 K was calculated and compared to the two estimates generated using literature equations based on the differential molar heat capacity assumptions. For the compounds studied, delta Cp was not negligible and was closer to delta Sf than to zero. However, neither of the two assumptions was valid for accurately estimating the ideal solubility as given by the full equation.
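
    A sketch of the comparison, using the standard full expression ln x = -(dHf/R)(1/T - 1/Tm) + (dCp/R)[(Tm - T)/T + ln(T/Tm)]; with dCp = dSf = dHf/Tm this collapses to ln x = -(dSf/R) ln(Tm/T), and with dCp = 0 only the first term survives. The solute parameters below are hypothetical, not those of the compounds studied.

    ```python
    import numpy as np

    R = 8.314  # J/(mol K)

    def ln_x_ideal(T, Tm, dHf, dCp):
        """Full ideal-solubility expression for a crystalline solute."""
        return (-(dHf / R) * (1.0 / T - 1.0 / Tm)
                + (dCp / R) * ((Tm - T) / T + np.log(T / Tm)))

    T, Tm, dHf = 298.15, 445.0, 28000.0   # hypothetical solute, J/mol
    dSf = dHf / Tm
    for label, dCp in (("dCp = 0  ", 0.0), ("dCp = dSf", dSf)):
        print(label, "x =", np.exp(ln_x_ideal(T, Tm, dHf, dCp)))
    ```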

  20. The Embedding Problem for Markov Models of Nucleotide Substitution

    PubMed Central

    Verbyla, Klara L.; Yap, Von Bing; Pahwa, Anuj; Shao, Yunli; Huttley, Gavin A.

    2013-01-01

    Continuous-time Markov processes are often used to model the complex natural phenomenon of sequence evolution. To make the process of sequence evolution tractable, simplifying assumptions are often made about the sequence properties and the underlying process. The validity of one such assumption, time-homogeneity, has never been explored. Violations of this assumption can be found by identifying non-embeddability. A process is non-embeddable if it cannot be embedded in a continuous time-homogeneous Markov process. In this study, non-embeddability was demonstrated to exist when modelling sequence evolution with Markov models. Evidence of non-embeddability was found primarily at the third codon position, possibly resulting from changes in mutation rate over time. Outgroup edges and those with a deeper time depth were found to have an increased probability of the underlying process being non-embeddable. Overall, low levels of non-embeddability were detected when examining individual edges of triads across a diverse set of alignments. Subsequent phylogenetic reconstruction analyses demonstrated that non-embeddability could affect the correct prediction of phylogenies, but at extremely low levels. Despite the existence of non-embeddability, there is minimal evidence of violations of the local time-homogeneity assumption and consequently the impact is likely to be minor. PMID:23935949
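
    A sketch of the embeddability check via the matrix logarithm: a transition matrix P is embeddable in a time-homogeneous continuous-time chain only if some logarithm of P is a valid rate matrix (real, non-negative off-diagonals). Only the principal branch is examined here, which is a simplifying assumption of this sketch.

    ```python
    import numpy as np
    from scipy.linalg import logm

    def is_embeddable(P, tol=1e-8):
        """Test whether Q = log(P) (principal branch) is a valid rate matrix."""
        Q = logm(P)
        if not np.allclose(Q.imag, 0.0, atol=tol):
            return False                      # complex logarithm: not valid
        Q = Q.real
        off = Q - np.diag(np.diag(Q))
        return bool(np.all(off >= -tol))      # off-diagonals must be >= 0

    P = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
    print(is_embeddable(P))  # True for this diagonally dominant example
    ```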

  1. Fluid-Structure Interaction Modeling of Intracranial Aneurysm Hemodynamics: Effects of Different Assumptions

    NASA Astrophysics Data System (ADS)

    Rajabzadeh Oghaz, Hamidreza; Damiano, Robert; Meng, Hui

    2015-11-01

    Intracranial aneurysms (IAs) are pathological outpouchings of cerebral vessels, the progression of which is mediated by complex interactions between the blood flow and vasculature. Image-based computational fluid dynamics (CFD) has been used for decades to investigate IA hemodynamics. However, the commonly adopted simplifying assumptions in CFD (e.g. rigid wall) compromise the simulation accuracy and mask the complex physics involved in IA progression and eventual rupture. Several groups have considered the wall compliance by using fluid-structure interaction (FSI) modeling. However, FSI simulation is highly sensitive to numerical assumptions (e.g. linear-elastic wall material, Newtonian fluid, initial vessel configuration, and constant pressure outlet), the effects of which are poorly understood. In this study, the sensitivity of FSI simulations in patient-specific IAs is investigated using a multi-stage approach with a varying level of complexity. We start with simulations incorporating several common simplifications: rigid wall, Newtonian fluid, and constant pressure at the outlets; we then stepwise remove these simplifications until the most comprehensive FSI simulations are reached. Hemodynamic parameters such as wall shear stress and oscillatory shear index are assessed and compared at each stage to better understand the sensitivity of FSI simulations of IAs to model assumptions. Supported by the National Institutes of Health (1R01 NS 091075-01).

  2. Tax Subsidies for Employer-Sponsored Health Insurance: Updated Microsimulation Estimates and Sensitivity to Alternative Incidence Assumptions

    PubMed Central

    Miller, G Edward; Selden, Thomas M

    2013-01-01

    Objective To estimate 2012 tax expenditures for employer-sponsored insurance (ESI) in the United States and to explore the sensitivity of estimates to assumptions regarding the incidence of employer premium contributions. Data Sources Nationally representative Medical Expenditure Panel Survey data from the 2005–2007 Household Component (MEPS-HC) and the 2009–2010 Insurance Component (MEPS IC). Study Design We use MEPS HC workers to construct synthetic workforces for MEPS IC establishments, applying the workers' marginal tax rates to the establishments' insurance premiums to compute the tax subsidy, in aggregate and by establishment characteristics. Simulation enables us to examine the sensitivity of ESI tax subsidy estimates to a range of scenarios for the within-firm incidence of employer premium contributions when workers have heterogeneous health risks and make heterogeneous plan choices. Principal Findings We simulate the total ESI tax subsidy for all active, civilian U.S. workers to be $257.4 billion in 2012. In the private sector, the subsidy disproportionately flows to workers in large establishments and establishments with predominantly high wage or full-time workforces. The estimates are remarkably robust to alternative incidence assumptions. Conclusions The aggregate value of the ESI tax subsidy and its distribution across firms can be reliably estimated using simplified incidence assumptions. PMID:23398400

  3. SURVIAC Bulletin: RPG Encounter Modeling, Vol 27, Issue 1, 2012

    DTIC Science & Technology

    2012-01-01

    return a probability of hit (PHIT) for the scenario. In the model, PHIT depends on the presented area of the targeted system and a set of errors infl...simplifying assumptions, is data-driven, and uses simple yet proven methodologies to determine PHIT. The inputs to THREAT describe the target, the RPG, and...Point on 2-D Representation of a CH-47 The determination of PHIT by THREAT is performed using one of two possible methodologies. The first is a

  4. Analysis of cavitation bubble dynamics in a liquid

    NASA Technical Reports Server (NTRS)

    Fontenot, L. L.; Lee, Y. C.

    1971-01-01

    General differential equations governing the dynamics of cavitation bubbles in a liquid were derived. With the assumption of spherical symmetry, the governing equations were simplified. Closed-form solutions were obtained for simple cases, and numerical solutions were calculated for complicated ones. The growth and the collapse of the bubble were analyzed, oscillations of the bubbles were studied, and the stability of the cavitation bubbles was investigated. The results show that the cavitation bubbles are unstable, and the oscillation is not sinusoidal.
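
    Under spherical symmetry the governing equation reduces to the Rayleigh-Plesset form rho*(R*R'' + 1.5*R'^2) = p_B - p_inf - 2*sigma/R - 4*mu*R'/R; a sketch integrating it with SciPy under a polytropic gas-content assumption, with all constants illustrative rather than taken from the report.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # water-like constants (SI units); illustrative values only
    rho, sigma, mu = 1000.0, 0.072, 1.0e-3
    p_inf, p_v = 101325.0, 2330.0
    R0, kappa = 1.0e-5, 1.4
    p_g0 = p_inf - p_v + 2.0 * sigma / R0    # gas pressure at equilibrium R0

    def rayleigh_plesset(t, y):
        R, Rdot = y
        p_B = p_v + p_g0 * (R0 / R) ** (3.0 * kappa)   # polytropic gas content
        Rddot = ((p_B - p_inf - 2.0 * sigma / R - 4.0 * mu * Rdot / R) / rho
                 - 1.5 * Rdot**2) / R
        return [Rdot, Rddot]

    # start from a perturbed radius and watch the nonlinear oscillation
    sol = solve_ivp(rayleigh_plesset, (0.0, 1.0e-5), [1.5 * R0, 0.0],
                    rtol=1e-8, atol=1e-12)
    print(sol.y[0].min(), sol.y[0].max())
    ```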

  5. Atmospheric refraction effects on baseline error in satellite laser ranging systems

    NASA Technical Reports Server (NTRS)

    Im, K. E.; Gardner, C. S.

    1982-01-01

    Because of the mathematical complexities involved in exact analyses of baseline errors, it is not easy to isolate atmospheric refraction effects; however, by making certain simplifying assumptions about the ranging system geometry, relatively simple expressions can be derived which relate the baseline errors directly to the refraction errors. The results indicate that even in the absence of other errors, the baseline error for intercontinental baselines can be more than an order of magnitude larger than the refraction error.

  6. Perfect gas effects in compressible rapid distortion theory

    NASA Technical Reports Server (NTRS)

    Kerschen, E. J.; Myers, M. R.

    1987-01-01

    The governing equations presented for small amplitude unsteady disturbances imposed on steady, compressible mean flows that are two-dimensional and nearly uniform have their basis in the perfect gas equations of state, and therefore generalize previous results based on tangent gas theory. While these equations are more complex, this complexity is required for adequate treatment of high frequency disturbances, especially when the base flow Mach number is large; under such circumstances, the simplifying assumptions of tangent gas theory are not applicable.

  7. The global strong solutions of Hasegawa-Mima-Charney-Obukhov equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao Hongjun; Zhu Anyou

    2005-08-01

    The quasigeostrophic model is a simplified geophysical fluid model at asymptotically high rotation rate or at small Rossby number. We consider the quasigeostrophic equation with no dissipation term, which was obtained as an asymptotic model from the Euler equations with free surface under a quasigeostrophic velocity field assumption. It is called the Hasegawa-Mima-Charney-Obukhov equation, and it also arises in plasma theory. We use a priori estimates to obtain the global existence of strong solutions for the Hasegawa-Mima-Charney-Obukhov equation.

  8. Operationally efficient propulsion system study (OEPSS) data book. Volume 10; Air Augmented Rocket Afterburning

    NASA Technical Reports Server (NTRS)

    Farhangi, Shahram; Trent, Donnie (Editor)

    1992-01-01

    A study was directed towards assessing the viability and effectiveness of an air-augmented ejector/rocket. Successful thrust augmentation could potentially reduce a multi-stage vehicle to a single stage-to-orbit vehicle (SSTO) and thereby eliminate the associated ground support facility infrastructure and ground processing required by the eliminated stage. The results of this preliminary study indicate that an air-augmented ejector/rocket propulsion system is viable. However, uncertainties resulting from the simplified approach and assumptions must be resolved by further investigations.

  9. Impacts of the mixing state and chemical composition on the cloud condensation nuclei (CCN) activity in Beijing during winter, 2016

    NASA Astrophysics Data System (ADS)

    Ren, J.; Zhang, F.

    2017-12-01

    Understanding the effects of aerosol chemical composition and mixing state on CCN activity in polluted urban areas is crucial for determining NCCN accurately and thus for quantifying aerosol indirect effects. Aerosol hygroscopicity, size-resolved cloud condensation nuclei (CCN) concentration and chemical composition were measured under polluted (POL) and background (BG) conditions in Beijing during the Air Pollution and Human Health (APHH) field campaign in winter 2016. The CCN number concentration (NCCN) is predicted using κ-Köhler theory from the particle number size distribution (PNSD) and five simplified schemes of the mixing state and chemical composition. The EIS assumption (sulfate, nitrate and SOA internally mixed, POA and BC externally mixed, with size-resolved chemical composition) shows the best closure, with a ratio of predicted to measured NCCN of 0.96-1.12 in both POL and BG conditions. Under BG conditions, the IB scheme (internal mixture with bulk chemical composition) achieves the best CCN closure at any time of day. On polluted days, the EIS and IS (internal mixture with size-resolved chemical composition) schemes may achieve better closure than the IB scheme owing to the heterogeneity of particle composition across sizes. The ES (external mixture with size-resolved chemical composition) and EB (external mixture with bulk chemical composition) schemes markedly underestimate NCCN, with ratios of predicted to measured NCCN of 0.6-0.8. In addition, we note that the size-resolved composition assumptions (IS or ES) show very limited improvement over the bulk composition assumptions (IB or EB); indeed, the prediction becomes worse with the size-resolved assumption on clean days. The predicted NCCN during evening rush-hour periods is the most sensitive to the five assumptions, with ratios of predicted to measured NCCN ranging from 0.5 to 1.4, reflecting strong influences from evening traffic and cooking sources. A sensitivity examination of the predicted NCCN with respect to particle mixing state and organic volume fraction as the organic particles age suggests that the mixing state plays a minor role once κorg exceeds 0.1. Our study provides a new dataset for evaluating CCN parameterizations in models for heavily polluted regions with large fractions of POA and BC.
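
    For orientation, the single-parameter κ-Köhler relation used in such closure studies gives the critical supersaturation of a dry particle as ln(S_c) = sqrt(4A^3 / (27 κ D_dry^3)) with A = 4 σ M_w / (R T ρ_w) (Petters and Kreidenweis, 2007); the constants and temperature below are assumptions.

    ```python
    import numpy as np

    def critical_supersaturation(D_dry, kappa, T=278.15):
        """Critical supersaturation (%) from kappa-Koehler theory."""
        sigma_w, M_w, rho_w, Rgas = 0.072, 0.018015, 997.0, 8.314
        A = 4.0 * sigma_w * M_w / (Rgas * T * rho_w)        # Kelvin term
        return (np.exp(np.sqrt(4.0 * A**3 / (27.0 * kappa * D_dry**3)))
                - 1.0) * 100.0

    # a 100 nm particle: sulfate-like vs. weakly hygroscopic organic
    print(critical_supersaturation(100e-9, kappa=0.6))   # ~0.17 %
    print(critical_supersaturation(100e-9, kappa=0.1))   # ~0.41 %
    ```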

  10. Notes on SAW Tag Interrogation Techniques

    NASA Technical Reports Server (NTRS)

    Barton, Richard J.

    2010-01-01

    We consider the problem of interrogating a single SAW RFID tag with a known ID and known range in the presence of multiple interfering tags under the following assumptions: (1) The RF propagation environment is well approximated as a simple delay channel with geometric power-decay constant alpha >= 2. (2) The interfering tag IDs are unknown but well approximated as independent, identically distributed random samples from a probability distribution of tag ID waveforms with known second-order properties, and the tag of interest is drawn independently from the same distribution. (3) The ranges of the interfering tags are unknown but well approximated as independent, identically distributed realizations of a random variable rho with a known probability distribution f_rho, and the tag ranges are independent of the tag ID waveforms. In particular, we model the tag waveforms as random impulse responses from a wide-sense-stationary, uncorrelated-scattering (WSSUS) fading channel with known bandwidth and scattering function. A brief discussion of the properties of such channels and the notation used to describe them in this document is given in the Appendix. Under these assumptions, we derive the expression for the output signal-to-noise ratio (SNR) for an arbitrary combination of transmitted interrogation signal and linear receiver filter. Based on this expression, we derive the optimal interrogator configuration (i.e., transmitted signal/receiver filter combination) in the two extreme noise/interference regimes, i.e., noise-limited and interference-limited, under the additional assumption that the coherence bandwidth of the tags is much smaller than the total tag bandwidth. Finally, we evaluate the performance of both optimal interrogators over a broad range of operating scenarios using both numerical simulation based on the assumed model and Monte Carlo simulation based on a small sample of measured tag waveforms. The performance evaluation results not only provide guidelines for proper interrogator design, but also provide some insight on the validity of the assumed signal model. It should be noted that the assumption that the impulse response of the tag of interest is known precisely implies that the temperature and range of the tag are also known precisely, which is generally not the case in practice. However, analyzing interrogator performance under this simplifying assumption is much more straightforward and still provides a great deal of insight into the nature of the problem.

  11. Simplifier: a web tool to eliminate redundant NGS contigs.

    PubMed

    Ramos, Rommel Thiago Jucá; Carneiro, Adriana Ribeiro; Azevedo, Vasco; Schneider, Maria Paula; Barh, Debmalya; Silva, Artur

    2012-01-01

    Modern genomic sequencing technologies produce a large amount of data with reduced cost per base; however, this data consists of short reads. This reduction in the size of the reads, compared to those obtained with previous methodologies, presents new challenges, including a need for efficient algorithms for the assembly of genomes from short reads and for resolving repetitions. Additionally, after ab initio assembly, curation of the hundreds or thousands of contigs generated by assemblers demands considerable time and computational resources. We developed Simplifier, a stand-alone software that selectively eliminates redundant sequences from the collection of contigs generated by ab initio assembly of genomes. Application of Simplifier to data generated by assembly of the genome of Corynebacterium pseudotuberculosis strain 258 reduced the number of contigs generated by ab initio methods from 8,004 to 5,272, a reduction of 34.14%; in addition, N50 increased from 1 kb to 1.5 kb. Processing the contigs of Escherichia coli DH10B with Simplifier reduced the mate-paired library by 17.47% and the fragment library by 23.91%. Simplifier removed redundant sequences from datasets produced by assemblers, thereby reducing the effort required for finalization of genome assembly in tests with data from prokaryotic organisms. Simplifier is available at http://www.genoma.ufpa.br/rramos/softwares/simplifier.xhtml; it requires Sun JDK 6 or higher.
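
    A toy version of the redundancy-removal idea (drop contigs wholly contained in a longer contig on either strand); the published tool's actual criteria, such as identity thresholds, are not reproduced here.

    ```python
    def drop_redundant(contigs):
        """Remove contigs contained in a longer kept contig (either strand)."""
        comp = str.maketrans("ACGT", "TGCA")
        kept = []
        for seq in sorted(set(contigs), key=len, reverse=True):
            rc = seq.translate(comp)[::-1]      # reverse complement
            if not any(seq in k or rc in k for k in kept):
                kept.append(seq)
        return kept

    print(drop_redundant(["ATCGATCG", "GATC", "CGAT", "TTTT"]))
    # ['ATCGATCG', 'TTTT']: the two 4-mers are substrings of the first contig
    ```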

  12. Computation on collisionless steady-state plasma flow past a charged disk

    NASA Technical Reports Server (NTRS)

    Parker, L. W.

    1976-01-01

    A computer method is presented using the 'inside-out' approach, for predicting the structure of the disturbed zone near a moving body in space. The approach uses fewer simplifying assumptions than other available methods, and is applicable to large ranges of the values of body and plasma parameters. Two major advances concerning 3-dimensional bodies are that thermal motions of ions as well as of electrons are treated realistically by following their trajectories in the electric field, and the technique for achieving self-consistency is promising for very large bodies. Three sample solutions were obtained for a disk-shaped body, charged negatively to a potential 4kT/e. With ion Mach number 4, and equal ion and electron temperatures, the wakes of a relatively small body (radius 5 Debye lengths) and a relatively large body (radius 100 Debye lengths) both begin to fill up between 2 and 3 body radii downstream. For the large body there is in addition a potential well (about 6kT/e deep) behind the body. Increasing the ion Mach number to 8 for the large body causes the potential well to become wider and longer but not deeper. For the large body, the quasineutrality assumption is validated outside of a cone-shaped region in the very near wake. For the large as well as the small body, the disturbed zone behind the body extends transversely no more than 2 or 3 body radii, a result of significance for the design of spacecraft boom instrumentation.

  13. Fission product ion exchange between zeolite and a molten salt

    NASA Astrophysics Data System (ADS)

    Gougar, Mary Lou D.

    The electrometallurgical treatment of spent nuclear fuel (SNF) has been developed at Argonne National Laboratory (ANL) and has been demonstrated through processing the sodium-bonded SNF from the Experimental Breeder Reactor-II in Idaho. In this process, components of the SNF, including U and species more chemically active than U, are oxidized into a bath of lithium-potassium chloride (LiCl-KCl) eutectic molten salt. Uranium is removed from the salt solution by electrochemical reduction. The noble metals and inactive fission products from the SNF remain as solids and are melted into a metal waste form after removal from the molten salt bath. The remaining salt solution contains most of the fission products and transuranic elements from the SNF. One technique that has been identified for removing these fission products and extending the usable life of the molten salt is ion exchange with zeolite A. A model has been developed and tested for its ability to describe the ion exchange of fission product species between zeolite A and a molten salt bath used for pyroprocessing of spent nuclear fuel. The model assumes (1) a system at equilibrium, (2) immobilization of species from the process salt solution via both ion exchange and occlusion in the zeolite cage structure, and (3) chemical independence of the process salt species. The first assumption simplifies the description of this physical system by eliminating the complications of including time-dependent variables. An equilibrium state between species concentrations in the two exchange phases is a common basis for ion exchange models found in the literature. Assumption two is non-simplifying with respect to the mathematical expression of the model. Two Langmuir-like fractional terms (one for each mode of immobilization) compose each equation describing each salt species. The third assumption offers great simplification over more traditional ion exchange modeling, in which interaction of solvent species with each other is considered. (Abstract shortened by UMI.)
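
    The dual-mode uptake described (ion exchange plus cage occlusion) has the generic form of a sum of two Langmuir-like fractional terms; a sketch with invented parameters, not the thesis' fitted model.

    ```python
    def dual_mode_uptake(c, q1, b1, q2, b2):
        """Two Langmuir-like terms: ion exchange plus occlusion (sketch).

        q_i: capacities, b_i: affinity constants (all values invented).
        """
        return q1 * b1 * c / (1.0 + b1 * c) + q2 * b2 * c / (1.0 + b2 * c)

    for c in (0.01, 0.1, 1.0):   # salt-phase concentration, arbitrary units
        print(c, dual_mode_uptake(c, q1=2.0, b1=50.0, q2=0.5, b2=1.0))
    ```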

  14. SU-E-T-293: Simplifying Assumption for Determining Sc and Sp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, R; Cheung, A; Anderson, R

    Purpose: Scp(mlc,jaw) is a two-dimensional function of collimator field size and effective field size. Conventionally, Scp(mlc,jaw) is treated as separable into components Sc(jaw) and Sp(mlc). Scp(mlc=jaw) is measured in phantom and Sc(jaw) is measured in air, with Sp=Scp/Sc. Ideally, Sc and Sp would be able to predict measured values of Scp(mlc,jaw) for all combinations of mlc and jaw. However, ideal Sc and Sp functions do not exist, and a measured two-dimensional Scp dataset cannot be decomposed into a unique pair of one-dimensional functions. If the output functions Sc(jaw) and Sp(mlc) were equal to each other, and thus each equal to Scp(mlc=jaw)^0.5, this condition would lead to a simpler measurement process by eliminating the need for in-air measurements. Without the distorting effect of the buildup cap, small-field measurement would be limited only by the dimensions of the detector and would thus be improved by this simplification of the output functions. The goal of the present study is to evaluate the assumption that Sc=Sp. Methods: For a 6 MV x-ray beam, Sc and Sp were determined both by the conventional method and as Scp(mlc=jaw)^0.5. Square-field benchmark values of Scp(mlc,jaw) were then measured across the range from 2×2 to 29×29. Both Sc and Sp functions were then evaluated as to their ability to predict these measurements. Results: Both methods produced qualitatively similar results, with <4% error for all cases and >3% error in 1 case. The conventional method produced 2 cases with >2% error, while the square-root method produced only 1 such case. Conclusion: Though it would need to be validated for any specific beam to which it might be applied, under the conditions studied, the simplifying assumption that Sc = Sp is justified.
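
    The square-root method is simple to apply: with Sc = Sp = Scp(x=x)^0.5, any mlc/jaw combination is predicted from diagonal measurements alone. The numbers below are invented for illustration, not the abstract's measured data.

    ```python
    import numpy as np

    # measured output ratios on the diagonal, Scp(x, x), square fields in cm
    fields = np.array([2.0, 4.0, 10.0, 20.0, 29.0])
    scp_diag = np.array([0.93, 0.96, 1.00, 1.04, 1.06])   # illustrative values

    def scp_sqrt(mlc, jaw):
        """Predict Scp(mlc, jaw) = Scp(mlc,mlc)^0.5 * Scp(jaw,jaw)^0.5."""
        s = lambda x: np.interp(x, fields, scp_diag)
        return np.sqrt(s(mlc) * s(jaw))

    print(scp_sqrt(mlc=3.0, jaw=10.0))   # no in-air measurement needed
    ```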

  15. Snow Physics and Meltwater Hydrology of the SSiB Model Employed for Climate Simulation Studies with GEOS 2 GCM

    NASA Technical Reports Server (NTRS)

    Mocko, David M.; Sud, Y. C.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Present-day climate models produce large climate drifts that interfere with the climate signals simulated in modelling studies. The simplifying assumptions of the physical parameterization of snow and ice processes lead to large biases in the annual cycles of surface temperature, evapotranspiration, and the water budget, which in turn causes erroneous land-atmosphere interactions. Since land processes are vital for climate prediction, and snow and snowmelt processes have been shown to affect Indian monsoons and North American rainfall and hydrology, special attention is now being given to cold land processes and their influence on the simulated annual cycle in GCMs. The snow model of the SSiB land-surface model being used at Goddard has evolved from a unified single snow-soil layer interacting with a deep soil layer through a force-restore procedure to a two-layer snow model atop a ground layer separated by a snow-ground interface. When the snow cover is deep, force-restore occurs within the snow layers. However, several other simplifying assumptions such as homogeneous snow cover, an empirical depth related surface albedo, snowmelt and melt-freeze in the diurnal cycles, and neglect of latent heat of soil freezing and thawing still remain as nagging problems. Several important influences of these assumptions will be discussed with the goal of improving them to better simulate the snowmelt and meltwater hydrology. Nevertheless, the current snow model (Mocko and Sud, 2000, submitted) better simulates cold land processes as compared to the original SSiB. This was confirmed against observations of soil moisture, runoff, and snow cover in global GSWP (Sud and Mocko, 1999) and point-scale Valdai simulations over seasonal snow regions. New results from the current snow model SSiB from the 10-year PILPS 2e intercomparison in northern Scandinavia will be presented.

  16. On the Weyl anomaly of 4D conformal higher spins: a holographic approach

    NASA Astrophysics Data System (ADS)

    Acevedo, S.; Aros, R.; Bugini, F.; Diaz, D. E.

    2017-11-01

    We present a first attempt to derive the full (type-A and type-B) Weyl anomaly of four dimensional conformal higher spin (CHS) fields in a holographic way. We obtain the type-A and type-B Weyl anomaly coefficients for the whole family of 4D CHS fields from the one-loop effective action for massless higher spin (MHS) Fronsdal fields evaluated on a 5D bulk Poincaré-Einstein metric with an Einstein metric on its conformal boundary. To gain access to the type-B anomaly coefficient we assume, for practical reasons, a Lichnerowicz-type coupling of the bulk Fronsdal fields with the bulk background Weyl tensor. Remarkably enough, our holographic findings under this simplifying assumption are certainly not unknown: they match the results previously found on the boundary counterpart under the assumption of factorization of the CHS higher-derivative kinetic operator into Laplacians of "partially massless" higher spins on Einstein backgrounds.

  17. Review of Integrated Noise Model (INM) Equations and Processes

    NASA Technical Reports Server (NTRS)

    Shepherd, Kevin P. (Technical Monitor); Forsyth, David W.; Gulding, John; DiPardo, Joseph

    2003-01-01

    The FAA's Integrated Noise Model (INM) relies on the methods of the SAE AIR-1845 'Procedure for the Calculation of Airplane Noise in the Vicinity of Airports' issued in 1986. Simplifying assumptions for aerodynamics and noise calculation were made in the SAE standard and the INM based on the limited computing power commonly available then. The key objectives of this study are 1) to test some of those assumptions against Boeing source data, and 2) to automate the manufacturer's methods of data development to enable the maintenance of a consistent INM database over time. These new automated tools were used to generate INM database submissions for six airplane types: 737-700 (CFM56-7 24K), 767-400ER (CF6-80C2BF), 777-300 (Trent 892), 717-200 (BR715), 757-300 (RR535E4B), and 737-800 (CFM56-7 26K).

  18. Nonlinear Curvature Expressions for Combined Flapwise Bending, Chordwise Bending, Torsion and Extension of Twisted Rotor Blades

    NASA Technical Reports Server (NTRS)

    Kvaternik, R. G.; Kaza, K. R. V.

    1976-01-01

    The nonlinear curvature expressions for a twisted rotor blade or a beam undergoing transverse bending in two planes, torsion, and extension were developed. The curvature expressions were obtained using simple geometric considerations. The expressions were first developed in a general manner using the geometrical nonlinear theory of elasticity. These general nonlinear expressions were then systematically reduced to four levels of approximation by imposing various simplifying assumptions, and in each of these levels the second degree nonlinear expressions were given. The assumptions were carefully stated and their implications with respect to the nonlinear theory of elasticity as applied to beams were pointed out. The transformation matrices between the deformed and undeformed blade-fixed coordinates, which were needed in the development of the curvature expressions, were also given for three of the levels of approximation. The present curvature expressions and transformation matrices were compared with corresponding expressions existing in the literature.

  19. Monocular correspondence detection for symmetrical objects by template matching

    NASA Astrophysics Data System (ADS)

    Vilmar, G.; Besslich, Philipp W., Jr.

    1990-09-01

    We describe a possibility to reconstruct 3-D information from a single view of a 3-D bilaterally symmetric object. The symmetry assumption allows us to obtain a "second view" from a different viewpoint by a simple reflection of the monocular image. We therefore have to solve the correspondence problem in a special case where known feature-based or area-based binocular approaches fail. In principle, our approach is based on a frequency-domain template matching of the features on the epipolar lines. During a training period our system "learns" the assignment of correspondence models to image features. The object shape is interpolated when no template matches the image features. This fact is an important advantage of this methodology because no "real world" image holds the symmetry assumption perfectly. To simplify the training process we used single views of human faces (e.g. passport photos), but our system is trainable on any other kind of objects.
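
    A sketch of the frequency-domain matching step on a single scanline, correlating it with a template taken from the mirrored image; the learned assignment of correspondence models is not reproduced here, and the signals are invented.

    ```python
    import numpy as np

    def match_on_epipolar_line(row, template):
        """Locate a template on a scanline by FFT-based cross-correlation."""
        n = len(row) + len(template) - 1
        corr = np.fft.irfft(np.fft.rfft(row, n)
                            * np.conj(np.fft.rfft(template, n)), n)
        return int(np.argmax(corr))   # lag of the best match

    row = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0])
    template = np.array([1.0, 2.0, 1.0])   # feature from the mirrored image
    print(match_on_epipolar_line(row, template))   # 2: match starts at index 2
    ```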

  1. Effects of distributed database modeling on evaluation of transaction rollbacks

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. Here, researchers investigate the effect of modeling assumptions on the evaluation of one such measure, the number of transaction rollbacks in a partitioned distributed database system. The researchers developed six probabilistic models and expressions for the number of rollbacks under each of these models. Essentially, the models differ in terms of the available system information. The analytical results obtained are compared to results from simulation. It was concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly undermined when such models are employed.

  2. Spacelab experiment computer study. Volume 1: Executive summary (presentation)

    NASA Technical Reports Server (NTRS)

    Lewis, J. L.; Hodges, B. C.; Christy, J. O.

    1976-01-01

    A quantitative cost for various Spacelab flight hardware configurations is provided along with varied software development options. A cost analysis of Spacelab computer hardware and software is presented. The cost study is based on utilization of a central experiment computer with optional auxiliary equipment. Groundrules and assumptions used in deriving the costing methods for all options in the Spacelab experiment study are presented. The groundrules and assumptions are analysed, and the options, along with their cost considerations, are discussed. It is concluded that the Spacelab program cost for software development and maintenance is independent of experimental hardware and software options, that the distributed standard computer concept simplifies software integration without a significant increase in cost, and that decisions on flight computer hardware configurations should not be made until payload selection for a given mission and a detailed analysis of the mission requirements are completed.

  3. Application of the backstepping method to the prediction of increase or decrease of infected population.

    PubMed

    Kuniya, Toshikazu; Sano, Hideki

    2016-05-10

    In mathematical epidemiology, age-structured epidemic models have usually been formulated as boundary-value problems of partial differential equations. On the other hand, in engineering, the backstepping method has recently been developed and widely studied by many authors. Using the backstepping method, we obtained a boundary feedback control which plays the role of the threshold criterion for predicting an increase or decrease of the newly infected population. Under the assumption that the period of infectiousness is the same for all infected individuals (that is, the recovery rate is given by the Dirac delta function multiplied by a sufficiently large positive constant), the prediction method simplifies to a comparison of the numbers of reported cases at the current and previous time steps. Our prediction method was applied to the reported cases per sentinel of influenza in Japan from 2006 to 2015 and its accuracy was 0.81 (404 correct predictions out of 500). It was higher than that of ARIMA models with different orders of the autoregressive part, differencing and moving-average process. In addition, a proposed method for estimating the number of reported cases, which is consistent with our prediction method, outperformed the best-fitted ARIMA model, ARIMA(1,1,0), in the sense of mean square error. Our prediction method based on the backstepping method can be simplified to a comparison of the numbers of reported cases at the current and previous time steps. In spite of its simplicity, it can provide a good prediction for the spread of influenza in Japan.
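
    The simplified criterion reduces to comparing consecutive counts; a sketch scoring that rule on made-up sentinel data (the influenza dataset itself is not reproduced).

    ```python
    import numpy as np

    def predict_and_score(cases):
        """Predict a rise at t+1 iff cases[t] > cases[t-1]; score vs. outcome."""
        cases = np.asarray(cases, dtype=float)
        pred = np.sign(np.diff(cases[:-1]))    # prediction for each next step
        actual = np.sign(np.diff(cases[1:]))   # realized change
        return float(np.mean(pred == actual))

    weekly = [5, 8, 14, 23, 30, 28, 21, 13, 8, 4]   # invented sentinel counts
    print(predict_and_score(weekly))                # 0.875 on this toy series
    ```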

  4. Clumpak: a program for identifying clustering modes and packaging population structure inferences across K.

    PubMed

    Kopelman, Naama M; Mayzel, Jonathan; Jakobsson, Mattias; Rosenberg, Noah A; Mayrose, Itay

    2015-09-01

    The identification of the genetic structure of populations from multilocus genotype data has become a central component of modern population-genetic data analysis. Application of model-based clustering programs often entails a number of steps, in which the user considers different modelling assumptions, compares results across different predetermined values of the number of assumed clusters (a parameter typically denoted K), examines multiple independent runs for each fixed value of K, and distinguishes among runs belonging to substantially distinct clustering solutions. Here, we present Clumpak (Cluster Markov Packager Across K), a method that automates the postprocessing of results of model-based population structure analyses. For analysing multiple independent runs at a single K value, Clumpak identifies sets of highly similar runs, separating distinct groups of runs that represent distinct modes in the space of possible solutions. This procedure, which generates a consensus solution for each distinct mode, is performed by the use of a Markov clustering algorithm that relies on a similarity matrix between replicate runs, as computed by the software Clumpp. Next, Clumpak identifies an optimal alignment of inferred clusters across different values of K, extending a similar approach implemented for a fixed K in Clumpp and simplifying the comparison of clustering results across different K values. Clumpak incorporates additional features, such as implementations of methods for choosing K and comparing solutions obtained by different programs, models, or data subsets. Clumpak, available at http://clumpak.tau.ac.il, simplifies the use of model-based analyses of population structure in population genetics and molecular ecology. © 2015 John Wiley & Sons Ltd.

  5. The Excursion Set Theory of Halo Mass Functions, Halo Clustering, and Halo Growth

    NASA Astrophysics Data System (ADS)

    Zentner, Andrew R.

    I review the excursion set theory with particular attention toward applications to cold dark matter halo formation and growth, halo abundance, and halo clustering. After a brief introduction to notation and conventions, I begin by recounting the heuristic argument leading to the mass function of bound objects given by Press and Schechter. I then review the more formal derivation of the Press-Schechter halo mass function that makes use of excursion sets of the density field. The excursion set formalism is powerful and can be applied to numerous other problems. I review the excursion set formalism for describing both halo clustering and bias and the properties of void regions. As one of the most enduring legacies of the excursion set approach and one of its most common applications, I spend considerable time reviewing the excursion set theory of halo growth. This section of the review culminates with the description of two Monte Carlo methods for generating ensembles of halo mass accretion histories. In the last section, I emphasize that the standard excursion set approach is the result of several simplifying assumptions. Dropping these assumptions can lead to more faithful predictions and open excursion set theory to new applications. One such assumption is that the height of the barriers that define collapsed objects is a constant function of scale. I illustrate the implementation of the excursion set approach for barriers of arbitrary shape. One such application is the now well-known improvement of the excursion set mass function derived from the "moving" barrier for ellipsoidal collapse. I also emphasize that the statement that halo accretion histories are independent of halo environment in the excursion set approach is not a general prediction of the theory. It is a simplifying assumption. I review the method for constructing correlated random walks of the density field in the more general case. I construct a simple toy model to illustrate that excursion set theory (with a constant barrier height) makes a simple and general prediction for the relation between halo accretion histories and the large-scale environments of halos: regions of high density preferentially contain late-forming halos and conversely for regions of low density. I conclude with a brief discussion of the importance of this prediction relative to recent numerical studies of the environmental dependence of halo properties.
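
    A Monte Carlo sketch of the basic calculation the review describes: uncorrelated ("sharp-k") random walks in the variance S against a constant barrier delta_c, compared with the analytic first-crossing distribution f(S) = delta_c / sqrt(2*pi*S^3) * exp(-delta_c^2 / (2S)) that underlies the Press-Schechter mass function. Walk counts and step sizes are arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    delta_c, S_max, n_walks, n_steps = 1.686, 10.0, 5000, 1000
    dS = S_max / n_steps

    # uncorrelated walks: independent Gaussian increments in S
    walks = np.cumsum(rng.normal(0.0, np.sqrt(dS), (n_walks, n_steps)), axis=1)

    # first up-crossing of the constant barrier delta_c
    hit = walks >= delta_c
    crossed = hit.any(axis=1)
    S_first = (hit.argmax(axis=1)[crossed] + 1) * dS

    # compare the Monte Carlo estimate of f(S) near S = 2 with the analytic value
    counts, edges = np.histogram(S_first, bins=100, range=(0.0, S_max))
    f_mc = counts[20] / (n_walks * (edges[1] - edges[0]))   # bin [2.0, 2.1)
    f_th = delta_c / np.sqrt(2.0 * np.pi * 2.0**3) * np.exp(-delta_c**2 / 4.0)
    print(f_mc, f_th)   # agree up to discretization error for small dS
    ```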

  6. Predator-prey Encounter Rates in Turbulent Environments: Consequences of Inertia Effects and Finite Sizes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pecseli, H. L.; Trulsen, J.

    2009-10-08

    Experimental as well as theoretical studies have demonstrated that turbulence can play an important role for the biosphere in marine environments, in particular by affecting predator-prey encounter rates. Reference models for the encounter rates rely on simplifying assumptions of predators and prey being described as point particles moving passively with the local flow velocity. Based on simple arguments that can be tested experimentally, we propose corrections to the standard expression for the encounter rates, where finite sizes and Stokes drag effects are now included.

  7. Calculation of load distribution in stiffened cylindrical shells

    NASA Technical Reports Server (NTRS)

    Ebner, H; Koller, H

    1938-01-01

    Thin-walled shells with strong longitudinal and transverse stiffening (for example, stressed-skin fuselages and wings) may, under certain simplifying assumptions, be treated as static systems with finite redundancies. In this report the underlying basis for this method of treatment of the problem is presented and a computation procedure for stiffened cylindrical shells with curved sheet panels indicated. A detailed discussion of the force distribution due to applied concentrated forces is given, and the discussion illustrated by numerical examples which refer to an experimentally determined circular cylindrical shell.

  8. Orbital geocentric oddness. (French Title: Bizarreries orbitales géocentriques)

    NASA Astrophysics Data System (ADS)

    Bassinot, E.

    2013-09-01

    The purpose of this essay is to determine the geocentric path of our superior neighbour, the planet Mars, named after the god of war. In other words, the question is: seen from our blue planet, what is the orbit of the red one? Based upon three simplifying and justified assumptions, it is proved hereunder, with a purely geometrical approach, that Mars describes a curve very close to the well-known limaçon of Pascal ("Pascal's snail"). The loop shown by this curve easily explains the apparently erratic behaviour of Mars.
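
    Under circular, coplanar orbit assumptions of this kind, the geocentric track is easy to generate numerically as the difference of the two heliocentric positions; a sketch over roughly one synodic period, with rounded orbital elements.

    ```python
    import numpy as np

    # coplanar circular orbits (a strong simplification)
    a_e, T_e = 1.000, 1.000   # Earth: semi-major axis (AU), period (yr)
    a_m, T_m = 1.524, 1.881   # Mars

    t = np.linspace(0.0, 2.14, 1000)            # about one synodic period
    earth = a_e * np.exp(2j * np.pi * t / T_e)  # positions as complex numbers
    mars = a_m * np.exp(2j * np.pi * t / T_m)
    geo = mars - earth                          # geocentric position of Mars

    # the loop shows up as a reversal of the apparent (geocentric) longitude
    lon = np.unwrap(np.angle(geo))
    print("retrograde samples:", int(np.sum(np.diff(lon) < 0.0)))
    ```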

  9. Stress Analysis of Beams with Shear Deformation of the Flanges

    NASA Technical Reports Server (NTRS)

    Kuhn, Paul

    1937-01-01

    This report discusses the fundamental action of shear deformation of the flanges on the basis of simplifying assumptions. The theory is developed to the point of giving analytical solutions for simple cases of beams and of skin-stringer panels under axial load. Strain-gage tests on a tension panel and on a beam corresponding to these simple cases are described and the results are compared with analytical results. For wing beams, an approximate method of applying the theory is given. As an alternative, the construction of a mechanical analyzer is advocated.

  10. Aerodynamic effects of nearly uniform slipstreams on thin wings in the transonic regime

    NASA Technical Reports Server (NTRS)

    Rizk, M. H.

    1980-01-01

    A simplified model is used to describe the interaction between a propeller slipstream and a wing in the transonic regime. The undisturbed slipstream boundary is assumed to coincide with an infinite circular cylinder. The undisturbed slipstream velocity is rotational and is a function of the radius only. In general, the velocity perturbation caused by introducing a wing into the slipstream is also rotational. Under small-disturbance assumptions, however, the perturbation velocity is nearly potential, and an approximation for the flow is obtained by solving a potential equation.

  11. Interplanetary magnetic flux - Measurement and balance

    NASA Technical Reports Server (NTRS)

    Mccomas, D. J.; Gosling, J. T.; Phillips, J. L.

    1992-01-01

    A new method for determining the approximate amount of magnetic flux in various solar wind structures in the ecliptic (and solar rotation) plane is developed using single-spacecraft measurements in interplanetary space and making certain simplifying assumptions. The method removes the effect of solar wind velocity variations and can be applied to specific, limited-extent solar wind structures as well as to long-term variations. Over the 18-month interval studied, the ecliptic plane flux of coronal mass ejections was determined to be about 4 times greater than that of HFDs.

  12. A study of trends and techniques for space base electronics

    NASA Technical Reports Server (NTRS)

    Trotter, J. D.; Wade, T. E.; Gassaway, J. D.

    1979-01-01

    The use of dry processing and alternate dielectrics for processing wafers is reported. A two-dimensional modeling program was written for the simulation of short-channel MOSFETs with nonuniform substrate doping. A key simplifying assumption used is that the majority carriers can be represented by a sheet charge at the silicon dioxide-silicon interface. The program does not converge when solving the current continuity equation; however, the two-dimensional Poisson equation was successfully solved for the potential distribution. The status of other 2D MOSFET simulation programs is summarized.

  13. The effect of the behavior of an average consumer on the public debt dynamics

    NASA Astrophysics Data System (ADS)

    De Luca, Roberto; Di Mauro, Marco; Falzarano, Angelo; Naddeo, Adele

    2017-09-01

    An important issue within the present economic crisis is understanding the dynamics of the public debt of a given country, and how the behavior of average consumers and tax payers in that country affects it. Starting from a model of the average consumer behavior introduced earlier by the authors, we propose a simple model to quantitatively address this issue. The model is then studied and analytically solved under some reasonable simplifying assumptions. In this way we obtain a condition under which the public debt steadily decreases.
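    The abstract does not reproduce the model equations; purely for orientation (a textbook sketch, not the authors' consumer-based model), a generic law of motion for public debt reads

    ```latex
    \dot{D}(t) = r\,D(t) + G(t) - T(t),
    ```

    so the debt decreases steadily whenever the primary surplus T(t) - G(t) exceeds the interest burden r D(t); the paper derives a condition of this type with taxes and spending tied to the behavior of the average consumer.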

  14. Method of Moments Applied to the Analysis of Precision Spectra from the Neutron Time-of- flight Diagnostics at the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Hatarik, Robert; Caggiano, J. A.; Callahan, D.; Casey, D.; Clark, D.; Doeppner, T.; Eckart, M.; Field, J.; Frenje, J.; Gatu Johnson, M.; Grim, G.; Hartouni, E.; Hurricane, O.; Kilkenny, J.; Knauer, J.; Ma, T.; Mannion, O.; Munro, D.; Sayre, D.; Spears, B.

    2015-11-01

    The method of moments was introduced by Pearson as a process for estimating the population distributions from which a set of "random variables" is measured. These moments are compared with a parameterization of the distributions, or with the same quantities generated by simulations of the process. Most diagnostic processes extract scalar parameters that depend on the moments of spectra derived from analytic solutions to the fusion rate, necessarily based on simplifying assumptions about the confined plasma. The precision of the TOF spectra and the nature of the implosions at the NIF require the inclusion of factors beyond the traditional analysis and the addition of higher-order moments to describe the data. This talk will present a diagnostic process for extracting the moments of the neutron energy spectrum for comparison with theoretical considerations as well as simulations of the implosions. Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344.
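    As a generic illustration of the first step of such an analysis (a sketch with a synthetic spectrum, not the NIF analysis code), the raw and central moments of a binned spectrum can be extracted directly from the counts:

    ```python
    import numpy as np

    def spectral_moments(E, counts, n_max=4):
        """Mean and central moments 2..n_max of a binned spectrum, where E holds
        the bin-center energies and counts the counts per bin. Illustrative."""
        w = counts / counts.sum()             # normalize to a probability mass
        mean = np.sum(w * E)
        central = {n: np.sum(w * (E - mean)**n) for n in range(2, n_max + 1)}
        return mean, central

    # Synthetic near-Gaussian peak around 14.1 MeV (DT-like) with a slight skew.
    rng = np.random.default_rng(1)
    draws = 14.1 + 0.05 * rng.standard_normal(100000) \
                 + 0.01 * rng.exponential(size=100000)
    counts, edges = np.histogram(draws, bins=200)
    centers = 0.5 * (edges[:-1] + edges[1:])

    mean, m = spectral_moments(centers, counts)
    skew = m[3] / m[2]**1.5       # dimensionless shape parameters
    kurt = m[4] / m[2]**2
    print(f"mean={mean:.4f} MeV, var={m[2]:.5f}, skew={skew:.3f}, kurt={kurt:.3f}")
    ```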

  15. Numerical Simulation of Molten Flow in Directed Energy Deposition Using an Iterative Geometry Technique

    NASA Astrophysics Data System (ADS)

    Vincent, Timothy J.; Rumpfkeil, Markus P.; Chaudhary, Anil

    2018-03-01

    The complex, multi-faceted physics of laser-based additive metals processing tends to demand high-fidelity models and costly simulation tools to provide predictions accurate enough to aid in selecting process parameters. Of particular difficulty is the accurate determination of melt pool shape and size, which are useful for predicting lack-of-fusion, as this typically requires an adequate treatment of thermal and fluid flow. In this article we describe a novel numerical simulation tool which aims to achieve a balance between accuracy and cost. This is accomplished by making simplifying assumptions regarding the behavior of the gas-liquid interface for processes with a moderate energy density, such as Laser Engineered Net Shaping (LENS). The details of the implementation, which is based on the solver simpleFoam of the well-known software suite OpenFOAM, are given here and the tool is verified and validated for a LENS process involving Ti-6Al-4V. The results indicate that the new tool predicts width and height of a deposited track to engineering accuracy levels.

  16. On the conservation of the Jacobi integral in the post-Newtonian circular restricted three-body problem

    NASA Astrophysics Data System (ADS)

    Dubeibe, F. L.; Lora-Clavijo, F. D.; González, Guillermo A.

    2017-05-01

    In the present paper, using the first-order approximation of the n-body Lagrangian (derived on the basis of the post-Newtonian gravitational theory of Einstein, Infeld, and Hoffman), we explicitly write down the equations of motion for the planar circular restricted three-body problem in the Solar system. Additionally, with some simplified assumptions, we obtain two formulas for estimating the values of the mass-distance and velocity-speed of light ratios appropriate for a given post-Newtonian approximation. We show that the formulas derived in the present study, lead to good numerical accuracy in the conservation of the Jacobi constant and almost allow for an equivalence between the Lagrangian and Hamiltonian approaches at the same post-Newtonian order. Accordingly, the dynamics of the system is analyzed in terms of the Poincaré sections method and Lyapunov exponents, finding that for specific values of the Jacobi constant the dynamics can be either chaotic or regular. Our results suggest that the chaoticity of the post-Newtonian system is slightly increased in comparison with its Newtonian counterpart.
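    Since the post-Newtonian equations are lengthy, a minimal Newtonian counterpart may help fix ideas. The sketch below (all values illustrative) integrates the classical planar circular restricted three-body problem in the rotating frame and monitors the drift of the Jacobi constant, the conservation diagnostic used in the paper:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    MU = 3.0e-6  # rough Sun-Earth mass ratio; illustrative value

    def crtbp_rhs(t, s, mu=MU):
        """Planar CRTBP in the rotating frame (Newtonian counterpart)."""
        x, y, vx, vy = s
        r1 = np.hypot(x + mu, y)       # distance to primary at (-mu, 0)
        r2 = np.hypot(x - 1 + mu, y)   # distance to secondary at (1-mu, 0)
        ax = x + 2*vy - (1 - mu)*(x + mu)/r1**3 - mu*(x - 1 + mu)/r2**3
        ay = y - 2*vx - (1 - mu)*y/r1**3 - mu*y/r2**3
        return [vx, vy, ax, ay]

    def jacobi_constant(s, mu=MU):
        x, y, vx, vy = s
        r1 = np.hypot(x + mu, y)
        r2 = np.hypot(x - 1 + mu, y)
        return x**2 + y**2 + 2*(1 - mu)/r1 + 2*mu/r2 - vx**2 - vy**2

    s0 = [0.8, 0.0, 0.0, 0.2]  # arbitrary initial condition
    sol = solve_ivp(crtbp_rhs, (0.0, 50.0), s0, rtol=1e-10, atol=1e-12)
    drift = abs(jacobi_constant(sol.y[:, -1]) - jacobi_constant(np.array(s0)))
    print(f"Jacobi constant drift over the integration: {drift:.2e}")
    ```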

  18. Maximum mutual information estimation of a simplified hidden MRF for offline handwritten Chinese character recognition

    NASA Astrophysics Data System (ADS)

    Xiong, Yan; Reichenbach, Stephen E.

    1999-01-01

    Understanding of hand-written Chinese characters is at such a primitive stage that models include some assumptions about hand-written Chinese characters that are simply false. Maximum Likelihood Estimation (MLE) may therefore not be an optimal method for hand-written Chinese character recognition. This concern motivates the research effort to consider alternative criteria. Maximum Mutual Information Estimation (MMIE) is an alternative method for parameter estimation that does not derive its rationale from presumed model correctness, but instead examines the pattern-modeling problem in an automatic recognition system from an information-theoretic point of view. The objective of MMIE is to find a set of parameters such that the resultant model allows the system to derive from the observed data as much information as possible about the class. We consider MMIE for recognition of hand-written Chinese characters based on a simplified hidden Markov Random Field. MMIE provides improved performance over MLE in this application.

  19. Assessment of railway wagon suspension characteristics

    NASA Astrophysics Data System (ADS)

    Soukup, Josef; Skočilas, Jan; Skočilasová, Blanka

    2017-05-01

    The article deals with the assessment of railway wagon suspension characteristics. The essential characteristics of a suspension are represented by the stiffness constants of the equivalent springs and the eigenfrequencies of the oscillating movements in reference to the main central inertia axes of a vehicle. A prerequisite for the experimental determination of these characteristics is knowledge of the position of the center of gravity and of the main central inertia moments of the vehicle frame. The vehicle frame performs a general spatial movement when the vehicle moves. An analysis of the frame movement generally arises from Euler's equations, which are commonly used for the description of spherical movement. This solution is difficult, and it can be simplified by applying specific assumptions. Solutions for the eigenfrequencies and the suspension stiffness are presented in the article, applied to railway and road vehicles under the simplifying conditions. A new method for assessing these characteristics is also described.

  20. Testing a thermo-chemo-hydro-geomechanical model for gas hydrate-bearing sediments using triaxial compression laboratory experiments

    NASA Astrophysics Data System (ADS)

    Gupta, S.; Deusner, C.; Haeckel, M.; Helmig, R.; Wohlmuth, B.

    2017-09-01

    Natural gas hydrates are considered a potential resource for gas production on industrial scales. Gas hydrates contribute to the strength and stiffness of the hydrate-bearing sediments. During gas production, the geomechanical stability of the sediment is compromised. Due to the potential geotechnical risks and process management issues, the mechanical behavior of the gas hydrate-bearing sediments needs to be carefully considered. In this study, we describe a coupling concept that simplifies the mathematical description of the complex interactions occurring during gas production by isolating the effects of sediment deformation and hydrate phase changes. Central to this coupling concept is the assumption that the soil grains form the load-bearing solid skeleton, while the gas hydrate enhances the mechanical properties of this skeleton. We focus on testing this coupling concept in capturing the overall impact of geomechanics on gas production behavior through numerical simulation of a high-pressure isotropic compression experiment combined with methane hydrate formation and dissociation. We consider a linear-elastic stress-strain relationship because it is uniquely defined and easy to calibrate. Since, in reality, the geomechanical response of the hydrate-bearing sediment is typically inelastic and is characterized by a significant shear-volumetric coupling, we control the experiment very carefully in order to keep the sample deformations small and well within the assumptions of poroelasticity. The closely coordinated experimental and numerical procedures enable us to validate the proposed simplified geomechanics-to-flow coupling, and set an important precursor toward enhancing our coupled hydro-geomechanical hydrate reservoir simulator with more suitable elastoplastic constitutive models.

  1. 4SM: A Novel Self-Calibrated Algebraic Ratio Method for Satellite-Derived Bathymetry and Water Column Correction

    PubMed Central

    Morel, Yann G.; Favoretto, Fabio

    2017-01-01

    All empirical water column correction methods have consistently been reported to require existing depth sounding data for calibrating a simple depth retrieval model, and they yield poor results over very bright or very dark bottoms. In contrast, we set out to (i) use only the relative radiance data in the image, along with published data and several new assumptions, (ii) in order to specify and operate the simplified radiative transfer equation (RTE), (iii) for the purpose of retrieving both the satellite-derived bathymetry (SDB) and the water-column-corrected spectral reflectance over shallow seabeds. Sea-truth regressions show that SDB depths retrieved by the method need only tide correction. It is therefore demonstrated that, under these new assumptions, there is no need for (i) formal atmospheric correction, (ii) conversion of relative radiance into calibrated reflectance, or (iii) existing depth sounding data in order to specify the simplified RTE and produce both SDB and spectral water-column-corrected radiance ready for bottom typing. Moreover, the use of the panchromatic band for this purpose is introduced. Altogether, we named this process the Self-Calibrated Supervised Spectral Shallow-sea Modeler (4SM). This approach requires a trained practitioner, though, to produce its results within hours of downloading the raw image. The ideal raw image should be a "near-nadir" view, exhibit a homogeneous atmosphere and water column, include some coverage of optically deep waters and bare land, and lend itself to quality removal of haze, atmospheric adjacency effects, and sun/sky glint. PMID:28754028

  3. Electron-transporting small molecule/ o-xylene hybrid additives to boost the performance of simplified inverted polymer solar cells

    NASA Astrophysics Data System (ADS)

    Qin, Dashan; Cao, Huan; Zhang, Jidong

    2017-05-01

    Electron-transporting small molecule bathophenanthroline (Bphen), together with o-xylene, has been used as a hybrid additive to improve the performance of simplified inverted polymer solar cells employing ITO alone as the cathode and a photoactive layer based on the polymer [[2,6'-4,8-di(5-ethylhexylthienyl)benzo[1,2-b;3,3-b] dithiophene] [3-fluoro-2[(2-ethylhexyl)carbonyl]thieno[3,4-b]thiophenediyl

  4. A New Browser-based, Ontology-driven Tool for Generating Standardized, Deep Descriptions of Geoscience Models

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.; Kelbert, A.; Rudan, S.; Stoica, M.

    2016-12-01

    Standardized metadata for models is the key to reliable and greatly simplified coupling in model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System). This model metadata also helps model users to understand the important details that underpin computational models and to compare the capabilities of different models. These details include simplifying assumptions on the physics, governing equations and the numerical methods used to solve them, discretization of space (the grid) and time (the time-stepping scheme), state variables (input or output), and model configuration parameters. This kind of metadata provides a "deep description" of a computational model that goes well beyond other types of metadata (e.g. author, purpose, scientific domain, programming language, digital rights, provenance, execution) and captures the science that underpins a model. While having this kind of standardized metadata for each model in a repository opens up a wide range of exciting possibilities, it is difficult to collect this information, and a carefully conceived "data model" or schema is needed to store it. Automated harvesting and scraping methods can provide some useful information, but they often result in metadata that is inaccurate or incomplete, which is not sufficient to enable the desired capabilities. To address this problem, we have developed a browser-based tool called the MCM Tool (Model Component Metadata) which runs on notebooks, tablets and smart phones. This tool was partially inspired by the TurboTax software, which greatly simplifies the necessary task of preparing tax documents. It allows a model developer or advanced user to provide a standardized, deep description of a computational geoscience model, including hydrologic models. Under the hood, the tool uses a new ontology for models built on the CSDMS Standard Names, expressed as a collection of RDF (Resource Description Framework) files. This ontology is based on core concepts such as variables, objects, quantities, operations, processes and assumptions. The purpose of this talk is to present details of the new ontology and then to demonstrate the MCM Tool for several hydrologic models.

  5. Methodology for Computational Fluid Dynamic Validation for Medical Use: Application to Intracranial Aneurysm.

    PubMed

    Paliwal, Nikhil; Damiano, Robert J; Varble, Nicole A; Tutino, Vincent M; Dou, Zhongwang; Siddiqui, Adnan H; Meng, Hui

    2017-12-01

    Computational fluid dynamics (CFD) is a promising tool to aid in clinical diagnoses of cardiovascular diseases. However, it uses assumptions that simplify the complexities of the real cardiovascular flow. Due to the high stakes in the clinical setting, it is critical to quantify the effect of these assumptions on CFD simulation results. Existing CFD validation approaches, however, do not quantify the error in simulation results due to the CFD solver's modeling assumptions; instead, they directly compare CFD simulation results against validation data. Thus, to quantify the accuracy of a CFD solver, we developed a validation methodology that calculates the CFD model error (arising from modeling assumptions). Our methodology identifies independent error sources in CFD and validation experiments, and calculates the model error by parsing out other sources of error inherent in simulation and experiments. To demonstrate the method, we simulated the flow field of a patient-specific intracranial aneurysm (IA) in the commercial CFD software STAR-CCM+. Particle image velocimetry (PIV) provided validation datasets for the flow field on two orthogonal planes. The average model error in the STAR-CCM+ solver was 5.63 ± 5.49% along the intersecting validation line of the orthogonal planes. Furthermore, we demonstrated that our validation method is superior to existing validation approaches by applying three representative existing validation techniques to our CFD and experimental dataset and comparing the validation results. Our validation methodology offers a streamlined workflow to extract the "true" accuracy of a CFD solver.
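    The decomposition below sketches the same idea of pooling independent error sources, following the general pattern of ASME V&V 20 rather than the authors' exact formulas (all numbers are placeholders):

    ```python
    import math

    def model_error_estimate(S, D, u_num, u_input, u_D):
        """Comparison error E = S - D between simulation S and experiment D,
        with a validation uncertainty pooling the independent sources:
        numerical (u_num), input-parameter (u_input) and experimental (u_D).
        Sketch in the spirit of ASME V&V 20, not the paper's methodology."""
        E = S - D
        u_val = math.sqrt(u_num**2 + u_input**2 + u_D**2)
        return E, u_val  # model error lies roughly within E +/- u_val

    E, u_val = model_error_estimate(S=0.52, D=0.49,
                                    u_num=0.010, u_input=0.015, u_D=0.020)
    print(f"E = {E:+.3f} m/s, u_val = {u_val:.3f} m/s")
    ```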

  6. Solubility of lovastatin in a family of six alcohols: Ethanol, 1-propanol, 1-butanol, 1-pentanol, 1-hexanol, and 1-octanol.

    PubMed

    Nti-Gyabaah, J; Chmielowski, R; Chan, V; Chiew, Y C

    2008-07-09

    Accurate experimental determination of the solubility of active pharmaceutical ingredients (APIs) in solvents, and its correlation for solubility prediction, is essential for rapid design and optimization of isolation, purification, and formulation processes in the pharmaceutical industry. An efficient material-conserving analytical method, with an in-line reversed-phase HPLC separation protocol, has been developed to measure the equilibrium solubility of lovastatin in ethanol, 1-propanol, 1-butanol, 1-pentanol, 1-hexanol, and 1-octanol between 279 and 313 K. The fusion enthalpy ΔH_fus, melting point temperature T_m, and differential molar heat capacity ΔC_P were determined by differential scanning calorimetry (DSC) to be 43,136 J/mol, 445.5 K, and 255 J/(mol·K), respectively. In order to use the regular solution equation, simplifying assumptions have been made concerning ΔC_P, specifically ΔC_P = 0 or ΔC_P = ΔS. In this study, we examined the extent to which these assumptions influence the magnitude of the ideal solubility of lovastatin, and determined that both assumptions underestimate the ideal solubility of lovastatin. The solubility data were used with the calculated ideal solubility to obtain activity coefficients, which were then fitted to the van't Hoff-like regular solution equation. Examination of the plots indicated that both assumptions give an erroneous excess enthalpy of solution, H(∞), and hence thermodynamically inconsistent activity coefficients. The order of increasing ideality, or solubility, of lovastatin was 1-butanol > 1-propanol > 1-pentanol > 1-hexanol > 1-octanol.
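    For concreteness, the two ΔC_P assumptions correspond to two standard reductions of the ideal-solubility equation. A minimal sketch using the paper's DSC values (the 298.15 K evaluation temperature is my choice):

    ```python
    import math

    R = 8.314         # J/(mol K)
    dH_fus = 43136.0  # J/mol, fusion enthalpy from the DSC data
    Tm = 445.5        # K, melting temperature

    def ln_x_ideal_dcp_zero(T):
        """Ideal solubility under the assumption dCp = 0."""
        return -(dH_fus / R) * (1.0/T - 1.0/Tm)

    def ln_x_ideal_dcp_ds(T):
        """Ideal solubility under the assumption dCp = dS_fus = dH_fus/Tm,
        which collapses the dCp terms into a single logarithmic term."""
        return -(dH_fus / (R * Tm)) * math.log(Tm / T)

    T = 298.15
    print("x_ideal (dCp = 0):  %.3g" % math.exp(ln_x_ideal_dcp_zero(T)))
    print("x_ideal (dCp = dS): %.3g" % math.exp(ln_x_ideal_dcp_ds(T)))
    ```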

  7. Assessment of dietary exposure in the French population to 13 selected food colours, preservatives, antioxidants, stabilizers, emulsifiers and sweeteners.

    PubMed

    Bemrah, Nawel; Leblanc, Jean-Charles; Volatier, Jean-Luc

    2008-01-01

    The results of French intake estimates for 13 food additives prioritized by the methods proposed in the 2001 Report from the European Commission on Dietary Food Additive Intake in the European Union are reported. These 13 additives were selected using the first and second tiers of the three-tier approach. The first tier was based on theoretical food consumption data and the maximum permitted level of additives. The second tier used real individual food consumption data and the maximum permitted level of additives for the substances which exceeded the acceptable daily intakes (ADI) in the first tier. In the third tier reported in this study, intake estimates were calculated for the 13 additives (colours, preservatives, antioxidants, stabilizers, emulsifiers and sweeteners) according to two modelling assumptions corresponding to two different food habit scenarios (assumption 1: consumers consume foods that may or may not contain food additives, and assumption 2: consumers always consume foods that contain additives) when possible. In this approach, real individual food consumption data and the occurrence/use-level of food additives reported by the food industry were used. Overall, the results of the intake estimates are reassuring for the majority of additives studied since the risk of exceeding the ADI was low, except for nitrites, sulfites and annatto, whose ADIs were exceeded by either children or adult consumers or by both populations under one and/or two modelling assumptions. Under the first assumption, the ADI is exceeded for high consumers among adults for nitrites and sulfites (155 and 118.4%, respectively) and among children for nitrites (275%). Under the second assumption, the average nitrites dietary exposure in children exceeds the ADI (146.7%). For high consumers, adults exceed the nitrite and sulfite ADIs (223 and 156.4%, respectively) and children exceed the nitrite, annatto and sulfite ADIs (416.7, 124.6 and 130.6%, respectively).

  8. The Valuation of Scientific and Technical Experiments

    NASA Technical Reports Server (NTRS)

    Williams, F. E.

    1972-01-01

    Rational selection of scientific and technical experiments for space missions is studied. Particular emphasis is placed on the assessment of value or worth of an experiment. A specification procedure is outlined and discussed for the case of one decision maker. Experiments are viewed as multi-attributed entities, and a relevant set of attributes is proposed. Alternative methods of describing levels of the attributes are proposed and discussed. The reasonableness of certain simplifying assumptions such as preferential and utility independence is explored, and it is tentatively concluded that preferential independence applies and utility independence appears to be appropriate.

  9. Uncertainty about fundamentals and herding behavior in the FOREX market

    NASA Astrophysics Data System (ADS)

    Kaltwasser, Pablo Rovira

    2010-03-01

    It is traditionally assumed in finance models that the fundamental value of assets is known with certainty. Although this is an appealing simplifying assumption, it is by no means based on empirical evidence. A simple heterogeneous agent model of the exchange rate is presented. In the model, traders do not observe the true underlying fundamental exchange rate and as a consequence they base their trades on beliefs about this variable. Despite the fact that only fundamentalist traders operate in the market, the model belongs to the heterogeneous agent literature, as traders have different beliefs about the fundamental rate.

  10. Impact of cell size on inventory and mapping errors in a cellular geographic information system

    NASA Technical Reports Server (NTRS)

    Wehde, M. E. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. The effect of grid position was found insignificant for maps but highly significant for isolated mapping units. A modelable relationship between mapping error and cell size was observed for the map segment analyzed. Map data structure was also analyzed with an interboundary distance distribution approach. Map data structure and the impact of cell size on that structure were observed. The existence of a model allowing prediction of mapping error based on map structure was hypothesized and two generations of models were tested under simplifying assumptions.

  11. Ferromagnetic effects for nanofluid venture through composite permeable stenosed arteries with different nanosize particles

    NASA Astrophysics Data System (ADS)

    Akbar, Noreen Sher; Mustafa, M. T.

    2015-07-01

    In the present article, ferromagnetic field effects on copper-nanoparticle blood flow through composite permeable stenosed arteries are discussed. Blood flow with copper nanoparticles of different nanosizes in water as the base fluid has not been explored until now. The equations for the Cu-water nanofluid are developed for the first time in the literature and simplified using long-wavelength and low-Reynolds-number assumptions. Exact solutions have been evaluated for the velocity, pressure gradient, solid volume fraction of the nanoparticles, and temperature profile. The effects of various flow parameters on the flow and heat transfer characteristics are examined.

  12. Thermal effectiveness of multiple shell and tube pass TEMA E heat exchangers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pignotti, A.; Tamborenea, P.I.

    1988-02-01

    The thermal effectiveness of a TEMA E shell-and-tube heat exchanger, with one shell pass and an arbitrary number of tube passes, is determined under the usual simplifying assumptions of perfect transverse mixing of the shell fluid, no phase change, and temperature independence of the heat capacity rates and the heat transfer coefficient. A purely algebraic solution is obtained for the effectiveness as a function of the heat capacity rate ratio and the number of heat transfer units. The case with M shell passes and N tube passes is easily expressed in terms of the single-shell-pass case.
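    For context, the classical closed form for the special case of one shell pass and an even number of tube passes (the textbook result that the paper's algebraic solution generalizes) is easy to evaluate:

    ```python
    import math

    def effectiveness_one_shell_pass(ntu, c):
        """Thermal effectiveness of a TEMA E exchanger with one shell pass and
        an even number of tube passes, under the usual assumptions (perfectly
        mixed shell fluid, constant properties, no phase change). Classical
        textbook formula, quoted for context."""
        g = math.sqrt(1.0 + c*c)
        coth_half = (1.0 + math.exp(-ntu*g)) / (1.0 - math.exp(-ntu*g))
        return 2.0 / ((1.0 + c) + g * coth_half)

    print(effectiveness_one_shell_pass(ntu=2.0, c=0.5))  # ~0.69 here
    ```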

  13. Generalization of low pressure, gas-liquid, metastable sound speed to high pressures

    NASA Technical Reports Server (NTRS)

    Bursik, J. W.; Hall, R. M.

    1981-01-01

    A theory is developed for isentropic metastable sound propagation in high pressure gas-liquid mixtures. Without simplification, it also correctly predicts the minimum speed for low pressure air-water measurements where other authors are forced to postulate isothermal propagation. This is accomplished by a mixture heat capacity ratio which automatically adjusts from its single phase values to approximately the isothermal value of unity needed for the minimum speed. Computations are made for the pure components parahydrogen and nitrogen, with emphasis on the latter. With simplifying assumptions, the theory reduces to a well known approximate formula limited to low pressure.
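    For orientation, the low-pressure homogeneous-mixture limit that such a theory must recover is often written in the Wood form; the sketch below (a background formula, not the authors' generalized high-pressure result) shows the characteristic deep minimum for air-water mixtures:

    ```python
    import math

    def wood_sound_speed(alpha, rho_g, c_g, rho_l, c_l):
        """Homogeneous two-phase sound speed from Wood's relation,
        1/(rho_m c_m^2) = alpha/(rho_g c_g^2) + (1 - alpha)/(rho_l c_l^2),
        with rho_m the volume-weighted mixture density. Classical
        low-pressure limit, quoted for context."""
        rho_m = alpha*rho_g + (1.0 - alpha)*rho_l
        k = alpha/(rho_g*c_g**2) + (1.0 - alpha)/(rho_l*c_l**2)
        return 1.0 / math.sqrt(rho_m * k)

    # Air-water at ambient conditions: the mixture speed drops to ~24 m/s,
    # far below both single-phase sound speeds.
    print(wood_sound_speed(0.5, rho_g=1.2, c_g=340.0, rho_l=1000.0, c_l=1480.0))
    ```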

  14. A general numerical model for wave rotor analysis

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel W.

    1992-01-01

    Wave rotors represent one of the promising technologies for achieving very high core temperatures and pressures in future gas turbine engines. Their operation depends upon unsteady gas dynamics and as such, their analysis is quite difficult. This report describes a numerical model which has been developed to perform such an analysis. Following a brief introduction, a summary of the wave rotor concept is given. The governing equations are then presented, along with a summary of the assumptions used to obtain them. Next, the numerical integration technique is described. This is an explicit finite volume technique based on the method of Roe. The discussion then focuses on the implementation of appropriate boundary conditions. Following this, some results are presented which first compare the numerical approximation to the governing differential equations and then compare the overall model to an actual wave rotor experiment. Finally, some concluding remarks are presented concerning the limitations of the simplifying assumptions and areas where the model may be improved.

  15. Refracted arrival waves in a zone of silence from a finite thickness mixing layer.

    PubMed

    Suzuki, Takao; Lele, Sanjiva K

    2002-02-01

    Refracted arrival waves which propagate in the zone of silence of a finite thickness mixing layer are analyzed using geometrical acoustics in two dimensions. Here, two simplifying assumptions are made: (i) the mean flow field is transversely sheared, and (ii) the mean velocity and temperature profiles approach the free-stream conditions exponentially. Under these assumptions, ray trajectories are analytically solved, and a formula for acoustic pressure amplitude in the far field is derived in the high-frequency limit. This formula is compared with the existing theory based on a vortex sheet corresponding to the low-frequency limit. The analysis covers the dependence on the Mach number as well as on the temperature ratio. The results show that both limits have some qualitative similarities, but the amplitude in the zone of silence at high frequencies is proportional to omega(-1/2), while that at low frequencies is proportional to omega(-3/2), omega being the angular frequency of the source.

  16. Assessment of ecotoxicological risks related to depositing dredged materials from canals in northern France on soil.

    PubMed

    Perrodin, Yves; Babut, Marc; Bedell, Jean-Philippe; Bray, Marc; Clement, Bernard; Delolme, Cécile; Devaux, Alain; Durrieu, Claude; Garric, Jeanne; Montuelle, Bernard

    2006-08-01

    The implementation of an ecological risk assessment framework is presented for dredged material deposits on soil close to a canal and groundwater, and tested with sediment samples from canals in northern France. This framework includes two steps: a simplified risk assessment based on contaminant concentrations and a detailed risk assessment based on toxicity bioassays and column leaching tests. The tested framework includes three related assumptions: (a) effects on plants (Lolium perenne L.), (b) effects on aquatic organisms (Escherichia coli, Pseudokirchneriella subcapitata, Ceriodaphnia dubia, and Xenopus laevis) and (c) effects on groundwater contamination. Several exposure conditions were tested using standardised bioassays. According to the specific dredged material tested, the three assumptions were more or less discriminatory, soil and groundwater pollution being the most sensitive. Several aspects of the assessment procedure must now be improved, in particular assessment endpoint design for risks to ecosystems (e.g., integration of pollutant bioaccumulation), bioassay protocols and column leaching test design.

  17. Tests for the extraction of Boer-Mulders functions

    NASA Astrophysics Data System (ADS)

    Christova, Ekaterina; Leader, Elliot; Stoilov, Michail

    2017-12-01

    At present, the Boer-Mulders (BM) functions are extracted from asymmetry data using the simplifying assumption of their proportionality to the Sivers functions for each quark flavour. Here we present two independent tests for this assumption. We subject COMPASS data on semi-inclusive deep inelastic scattering on the 〈cos ϕh 〉, 〈cos 2ϕh 〉 and Sivers asymmetries to these tests. Our analysis shows that the tests are satisfied with the available data if the proportionality constant is the same for all quark flavours, which does not correspond to the flavour dependence used in existing analyses. This suggests that the published information on the BM functions may be unreliable. The 〈cos ϕh 〉, 〈cos 2ϕh 〉 asymmetries receive contributions also from the, in principle, calculable Cahn effect. We succeed in extracting the Cahn contributions from experiment (we believe for the first time) and compare with their calculated values, with interesting implications.

  18. Moisture Risk in Unvented Attics Due to Air Leakage Paths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prahl, D.; Shaffer, M.

    2014-11-01

    IBACOS completed an initial analysis of moisture damage potential in an unvented attic insulated with closed-cell spray polyurethane foam. To complete this analysis, the research team collected field data, used computational fluid dynamics to quantify the airflow rates through individual airflow (crack) paths, simulated hourly flow rates through the leakage paths with CONTAM software, correlated the CONTAM flow rates with indoor humidity ratios from Building Energy Optimization software, and used Wärme und Feuchte instationär Pro two-dimensional modeling to determine the moisture content of the building materials surrounding the cracks. Given the number of simplifying assumptions and numerical models associated with this analysis, the results indicate that localized damage due to high moisture content of the roof sheathing is possible under very low airflow rates. Reducing the number of assumptions and approximations through field studies and laboratory experiments would be valuable to understand the real-world moisture damage potential in unvented attics.

  20. Calculation of wall effects of flow on a perforated wall with a code of surface singularities

    NASA Astrophysics Data System (ADS)

    Piat, J. F.

    1994-07-01

    Simplifying assumptions are inherent in the analytic method previously used for the determination of wall interferences on a model in a wind tunnel. To eliminate these assumptions, a new code based on the vortex lattice method was developed. It is suitable for processing any shape of test section with limited areas of porous wall, whose characteristics can be nonlinear. Calculations of wall effects in the S3MA wind tunnel, whose test section is rectangular (0.78 m x 0.56 m) and fitted with two or four perforated walls, have been performed. Wall porosity factors have been adjusted to obtain the best fit between measured and computed pressure distributions on the test section walls. The code was checked by measuring nearly equal drag coefficients for a model tested in the S3MA wind tunnel (after wall corrections) and in the S2MA wind tunnel, whose test section is seven times larger (negligible wall corrections).

  1. Two time scale output feedback regulation for ill-conditioned systems

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Moerder, D. D.

    1986-01-01

    Issues pertaining to the well-posedness of a two time scale approach to the output feedback regulator design problem are examined. An approximate quadratic performance index which reflects a two time scale decomposition of the system dynamics is developed. It is shown that, under mild assumptions, minimization of this cost leads to feedback gains providing a second-order approximation of optimal full system performance. A simplified approach to two time scale feedback design is also developed, in which gains are separately calculated to stabilize the slow and fast subsystem models. By exploiting the notion of combined control and observation spillover suppression, conditions are derived assuring that these gains will stabilize the full-order system. A sequential numerical algorithm is described which obtains output feedback gains minimizing a broad class of performance indices, including the standard LQ case. It is shown that the algorithm converges to a local minimum under nonrestrictive assumptions. This procedure is adapted to and demonstrated for the two time scale design formulations.

  2. Stability analysis of shallow wake flows

    NASA Astrophysics Data System (ADS)

    Kolyshkin, A. A.; Ghidaoui, M. S.

    2003-11-01

    Experimentally observed periodic structures in shallow (i.e. bounded) wake flows are believed to appear as a result of hydrodynamic instability. Previously published studies used linear stability analysis under the rigid-lid assumption to investigate the onset of instability of wakes in shallow water flows. The objectives of this paper are: (i) to provide a preliminary assessment of the accuracy of the rigid-lid assumption; (ii) to investigate the influence of the shape of the base flow profile on the stability characteristics; (iii) to formulate the weakly nonlinear stability problem for shallow wake flows and show that the evolution of the instability is governed by the Ginzburg-Landau equation; and (iv) to establish the connection between weakly nonlinear analysis and the observed flow patterns in shallow wake flows which are reported in the literature. It is found that the relative error in determining the critical value of the shallow wake stability parameter induced by the rigid-lid assumption is below 10% for the practical range of Froude number. In addition, it is shown that the shape of the velocity profile has a large influence on the stability characteristics of shallow wakes. Starting from the rigid-lid shallow-water equations and using the method of multiple scales, an amplitude evolution equation for the most unstable mode is derived. The resulting equation has complex coefficients and is of Ginzburg-Landau type. An example calculation of the complex coefficients of the Ginzburg-Landau equation confirms the existence of a finite equilibrium amplitude, where the unstable mode evolves with time into a limit-cycle oscillation. This is consistent with flow patterns observed by Ingram & Chu (1987), Chen & Jirka (1995), Balachandar et al. (1999), and Balachandar & Tachie (2001). Reasonable agreement is found between the saturation amplitude obtained from the Ginzburg-Landau equation under some simplifying assumptions and the numerical data of Grubišić et al. (1995). Such consistency provides further evidence that experimentally observed structures in shallow wake flows may be described by the nonlinear Ginzburg-Landau equation. Previous works have found similar consistency between the Ginzburg-Landau model and experimental data for the case of deep (i.e. unbounded) wake flows. However, it must be emphasized that much more information is required to confirm the appropriateness of the Ginzburg-Landau equation in describing shallow wake flows.
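    For reference, the canonical complex form that such weakly nonlinear analyses arrive at can be written as (symbols chosen here for illustration; the paper computes the complex coefficients for shallow wakes)

    ```latex
    \frac{\partial A}{\partial t}
      = \sigma A + \delta\,\frac{\partial^2 A}{\partial x^2} - \mu\,|A|^2 A,
    \qquad \sigma,\ \delta,\ \mu \in \mathbb{C},
    ```

    whose spatially uniform solutions saturate at the finite equilibrium amplitude |A|^2 = Re(sigma)/Re(mu) when Re(mu) > 0, i.e. the limit-cycle oscillation referred to above.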

  3. Accounting for age structure and spatial structure in eco-evolutionary analyses of a large, mobile vertebrate.

    PubMed

    Waples, Robin S; Scribner, Kim; Moore, Jennifer; Draheim, Hope; Etter, Dwayne; Boersen, Mark

    2018-04-14

    The idealized concept of a population is integral to ecology, evolutionary biology, and natural resource management. To make analyses tractable, most models adopt simplifying assumptions, which almost inevitably are violated by real species in nature. Here we focus on both demographic and genetic estimates of effective population size per generation (Ne), the effective number of breeders per year (Nb), and Wright's neighborhood size (NS) for black bears (Ursus americanus) that are continuously distributed in the northern lower peninsula of Michigan, USA. We illustrate practical application of recently-developed methods to account for violations of two common, simplifying assumptions about populations: 1) reproduction occurs in discrete generations, and 2) mating occurs randomly among all individuals. We use a 9-year harvest dataset of >3300 individuals, together with genetic determination of 221 parent-offspring pairs, to estimate male and female vital rates, including age-specific survival, age-specific fecundity, and age-specific variance in fecundity (for which empirical data are rare). We find strong evidence for overdispersed variance in reproductive success of same-age individuals in both sexes, and we show that constraints on litter size have a strong influence on results. We also estimate that another life-history trait that is often ignored (skip breeding by females) has a relatively modest influence, reducing Nb by 9% and increasing Ne by 3%. We conclude that isolation by distance depresses genetic estimates of Nb, which implicitly assume a randomly-mating population. Estimated demographic NS (100, based on parent-offspring dispersal) was similar to genetic NS (85, based on regression of genetic distance and geographic distance), indicating that the >36,000 km2 study area includes about 4-5 black-bear neighborhoods. Results from this expansive data set provide important insight into effects of violating assumptions when estimating evolutionary parameters for long-lived, free-ranging species. In conjunction with recently-developed analytical methodology, the ready availability of non-lethal DNA sampling methods and the ability to rapidly and cheaply survey many thousands of molecular markers should facilitate eco-evolutionary studies like this for many more species in nature.

  4. Control-oriented modeling and adaptive backstepping control for a nonminimum phase hypersonic vehicle.

    PubMed

    Ye, Linqi; Zong, Qun; Tian, Bailing; Zhang, Xiuyun; Wang, Fang

    2017-09-01

    In this paper, the nonminimum phase problem of a flexible hypersonic vehicle is investigated. The main challenge of nonminimum phase behavior is that it prevents the application of dynamic inversion methods to nonlinear control design. To solve this problem, we investigate the relationship between nonminimum phase behavior and backstepping control, finding that a stable nonlinear controller can be obtained by changing the control loop on the basis of backstepping control. By extending the control loop to cover the internal dynamics, the internal states are directly controlled by the inputs and simultaneously serve as virtual controls for the external states, making it possible to guarantee output tracking as well as internal stability. Then, based on the extended control loop, a simplified control-oriented model is developed to enable the applicability of the adaptive backstepping method. It simplifies the design process and relaxes some limitations caused by direct use of the unsimplified control-oriented model. Next, under proper assumptions, asymptotic stability is proved for constant commands, while bounded stability is proved for varying commands. The proposed method is compared with approximate backstepping control and dynamic surface control and is shown to have superior tracking accuracy as well as robustness in the simulation results. This paper may also provide beneficial guidance for the control design of other complex systems. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  5. Observation of radiation damage induced by single-ion hits at the heavy ion microbeam system

    NASA Astrophysics Data System (ADS)

    Kamiya, Tomihiro; Sakai, Takuro; Hirao, Toshio; Oikawa, Masakazu

    2001-07-01

    A single-ion hit system combined with the JAERI heavy-ion microbeam system can be applied to observe individual phenomena induced by interactions between high-energy ions and a semiconductor device, using a technique that measures the pulse height of transient current (TC) signals. The reduction of the TC pulse height for a Si PIN photodiode was measured under irradiation of 15 MeV Ni ions onto various micron-sized areas in the diode. The data containing the damage effect of these irradiations were analyzed by least-squares fitting using a Weibull distribution function. The changes of the scale and shape parameters as functions of the width of the irradiated areas led us to the assumption that charge collection in a diode has a lateral extent at the micron level, larger than the 1 μm spatial resolution of the microbeam. Numerical simulations of these measurements were made with a simplified two-dimensional model based on this assumption using a Monte Carlo method. Calculated data reproducing the pulse-height reductions by single-ion irradiations were analyzed using the same function as that for the measurements. The result of this analysis, which shows the same tendency in the change of parameters as the measurements, appears to support our assumption.

  6. A mathematics for medicine: The Network Effect

    PubMed Central

    West, Bruce J.

    2014-01-01

    The theory of medicine and its complement systems biology are intended to explain the workings of the large number of mutually interdependent complex physiologic networks in the human body and to apply that understanding to maintaining the functions for which nature designed them. Therefore, when what had originally been made as a simplifying assumption or a working hypothesis becomes foundational to understanding the operation of physiologic networks it is in the best interests of science to replace or at least update that assumption. The replacement process requires, among other things, an evaluation of how the new hypothesis affects modern day understanding of medical science. This paper identifies linear dynamics and Normal statistics as being such arcane assumptions and explores some implications of their retirement. Specifically we explore replacing Normal with fractal statistics and examine how the latter are related to non-linear dynamics and chaos theory. The observed ubiquity of inverse power laws in physiology entails the need for a new calculus, one that describes the dynamics of fractional phenomena and captures the fractal properties of the statistics of physiological time series. We identify these properties as a necessary consequence of the complexity resulting from the network dynamics and refer to them collectively as The Network Effect. PMID:25538622

  7. Search algorithm complexity modeling with application to image alignment and matching

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen

    2014-05-01

    Search algorithm complexity modeling, in the form of penetration rate estimation, provides a useful way to estimate search efficiency in application domains which involve searching over a hypothesis space of reference templates or models, as in model-based object recognition, automatic target recognition, and biometric recognition. The penetration rate quantifies the expected portion of the database that must be searched, and is useful for estimating search algorithm computational requirements. In this paper we perform mathematical modeling to derive general equations for penetration rate estimates that are applicable to a wide range of recognition problems. We extend previous penetration rate analyses to use more general probabilistic modeling assumptions. In particular we provide penetration rate equations within the framework of a model-based image alignment application domain in which a prioritized hierarchical grid search is used to rank subspace bins based on matching probability. We derive general equations, and provide special cases based on simplifying assumptions. We show how previously-derived penetration rate equations are special cases of the general formulation. We apply the analysis to model-based logo image alignment in which a hierarchical grid search is used over a geometric misalignment transform hypothesis space. We present numerical results validating the modeling assumptions and derived formulation.
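    A minimal sketch of the quantity itself (the generic definition, not the paper's derived hierarchical-grid formulas): if bins each hold an equal share of the database and are searched in decreasing order of match probability, the expected penetration rate is the probability-weighted stopping fraction:

    ```python
    import numpy as np

    def expected_penetration_rate(p):
        """Expected fraction of the database searched when bins are visited in
        decreasing order of the probability p[i] that the true match lies in
        bin i, with equal-sized bins. Generic definition; illustrative only."""
        p = np.asarray(p, dtype=float)
        p_sorted = np.sort(p)[::-1] / p.sum()   # rank bins by probability
        ranks = np.arange(1, len(p) + 1)        # bins examined at each rank
        return np.sum(p_sorted * ranks) / len(p)

    # A sharply peaked prior over 100 bins yields a far lower penetration rate
    # than a flat prior (which tends to ~0.5).
    peaked = np.exp(-0.2 * np.arange(100))
    flat = np.ones(100)
    print(expected_penetration_rate(peaked), expected_penetration_rate(flat))
    ```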

  8. Experimental quantification of the fluid dynamics in blood-processing devices through 4D-flow imaging: A pilot study on a real oxygenator/heat-exchanger module.

    PubMed

    Piatti, Filippo; Palumbo, Maria Chiara; Consolo, Filippo; Pluchinotta, Francesca; Greiser, Andreas; Sturla, Francesco; Votta, Emiliano; Siryk, Sergii V; Vismara, Riccardo; Fiore, Gianfranco Beniamino; Lombardi, Massimo; Redaelli, Alberto

    2018-02-08

    The performance of blood-processing devices largely depends on the associated fluid dynamics, which hence represents a key aspect in their design and optimization. To this aim, two approaches are currently adopted: computational fluid-dynamics, which yields highly resolved three-dimensional data but relies on simplifying assumptions, and in vitro experiments, which typically involve the direct video-acquisition of the flow field and provide 2D data only. We propose a novel method that exploits space- and time-resolved magnetic resonance imaging (4D-flow) to quantify the complex 3D flow field in blood-processing devices and to overcome these limitations. We tested our method on a real device that integrates an oxygenator and a heat exchanger. A dedicated mock loop was implemented, and novel 4D-flow sequences with sub-millimetric spatial resolution and region-dependent velocity encodings were defined. Automated in house software was developed to quantify the complex 3D flow field within the different regions of the device: region-dependent flow rates, pressure drops, paths of the working fluid and wall shear stresses were computed. Our analysis highlighted the effects of fine geometrical features of the device on the local fluid-dynamics, which would be unlikely observed by current in vitro approaches. Also, the effects of non-idealities on the flow field distribution were captured, thanks to the absence of the simplifying assumptions that typically characterize numerical models. To the best of our knowledge, our approach is the first of its kind and could be extended to the analysis of a broad range of clinically relevant devices. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Study on low intensity aeration oxygenation model and optimization for shallow water

    NASA Astrophysics Data System (ADS)

    Chen, Xiao; Ding, Zhibin; Ding, Jian; Wang, Yi

    2018-02-01

    Aeration/oxygenation is an effective measure for improving the self-purification capacity in shallow water treatment, but high energy consumption, high noise, and expensive management have restrained the development and application of this process. Based on two-film theory, a theoretical model comprising three-dimensional partial differential equations of aeration in shallow water is established. In order to simplify the equations, basic assumptions of gas-liquid mass transfer in the vertical direction and concentration diffusion in the horizontal direction are proposed based on engineering practice, and are tested against simulated gas holdup obtained from gas-liquid two-phase flow simulations of the aeration tank under low-intensity conditions. Based on these assumptions and the theory of shallow permeability, the three-dimensional model is simplified and a calculation model for low-intensity aeration oxygenation is obtained. The model is verified by comparison with an aeration experiment. The conclusions are as follows: (1) the calculation model of gas-liquid mass transfer in the vertical direction and concentration diffusion in the horizontal direction reflects the aeration process well; (2) under low-intensity conditions, long-term aeration and oxygenation is theoretically feasible for enhancing the self-purification capacity of water bodies; (3) for the same total aeration intensity, the effect of multipoint distributed aeration on the diffusion of oxygen concentration in the horizontal direction is pronounced; (4) in shallow water treatment, reducing the volume of aeration equipment through miniaturization, arrays, low intensity, and mobility can overcome problems of high energy consumption, large size, and noise, and provides a good reference.
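    The vertical gas-liquid building block of such models is the two-film rate law; a minimal sketch (parameter values are illustrative, not the paper's calibrated model):

    ```python
    import numpy as np

    # Two-film oxygen transfer: dC/dt = kLa * (Cs - C), with the closed form
    # C(t) = Cs - (Cs - C0) * exp(-kLa * t). Illustrative parameter values.
    kLa = 5.0e-4   # volumetric transfer coefficient, 1/s (low intensity)
    Cs = 9.1       # saturation dissolved oxygen at ~20 C, mg/L
    C0 = 2.0       # initial dissolved oxygen, mg/L

    t = np.linspace(0.0, 6*3600.0, 7)   # six hours, hourly points
    C = Cs - (Cs - C0) * np.exp(-kLa * t)
    for ti, ci in zip(t/3600.0, C):
        print(f"t = {ti:3.0f} h  DO = {ci:5.2f} mg/L")
    ```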

  10. Combined inverse-forward artificial neural networks for fast and accurate estimation of the diffusion coefficients of cartilage based on multi-physics models.

    PubMed

    Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A

    2016-09-06

    Analytical and numerical methods have been used to extract essential engineering parameters such as elastic modulus, Poisson's ratio, permeability and diffusion coefficient from experimental data in various types of biological tissues. The major limitation associated with analytical techniques is that they are often only applicable to problems with simplified assumptions. Numerical multi-physics methods, on the other hand, enable minimizing the simplified assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs) which enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficient of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variations during the training process. Determining the required level of stochastic variation is performed by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined together, forward-inverse ANNs enable computationally inexperienced users to obtain accurate and fast estimation of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined using direct scanning of the parameter space as the optimization approach. It has been shown that both approaches yield comparable results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Effects of fish movement assumptions on the design of a marine protected area to protect an overfished stock.

    PubMed

    Cornejo-Donoso, Jorge; Einarsson, Baldvin; Birnir, Bjorn; Gaines, Steven D

    2017-01-01

    Marine Protected Areas (MPA) are important management tools shown to protect marine organisms, restore biomass, and increase fisheries yields. While MPAs have been successful in meeting these goals for many relatively sedentary species, highly mobile organisms may get few benefits from this type of spatial protection due to their frequent movement outside the protected area. The use of a large MPA can compensate for extensive movement, but testing this empirically is challenging, as it requires both large areas and sufficient time series to draw conclusions. To overcome this limitation, MPA models have been used to identify designs and predict potential outcomes, but these simulations are highly sensitive to the assumptions describing the organism's movements. Due to recent improvements in computational simulations, it is now possible to include very complex movement assumptions in MPA models (e.g. individual-based models). These have renewed interest in MPA simulations, which implicitly assume that increasing the detail in fish movement overcomes the sensitivity to the movement assumptions. Nevertheless, a systematic comparison of the designs and outcomes obtained under different movement assumptions has not been done. In this paper, we use an individual-based model, interconnected with population and fishing fleet models, to explore the value of increasing the detail of the movement assumptions using four scenarios of increasing behavioral complexity: a) random, diffusive movement, b) aggregations, c) aggregations that respond to environmental forcing (e.g. sea surface temperature), and d) aggregations that respond to environmental forcing and are transported by currents. We then compare these models to determine how the assumptions affect MPA design, and therefore the effective protection of the stocks. Our results show that the optimal MPA size to maximize fisheries benefits increases as movement complexity increases, from ~10% for the diffusive assumption to ~30% when full environmental forcing was used. We also found that in cases of limited understanding of the movement dynamics of a species, simplified assumptions can be used to provide a guide for the minimum MPA size needed to effectively protect the stock. However, using oversimplified assumptions can produce suboptimal designs and lead to a density underestimation of ca. 30%; therefore, the main value of detailed movement dynamics is to provide more reliable MPA designs and predicted outcomes. Large MPAs can be effective in recovering overfished stocks, protecting pelagic fish and providing significant increases in fisheries yields. Our models provide a means to test this spatial management tool empirically, which theoretical evidence consistently suggests is an effective alternative for managing highly mobile pelagic stocks.
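
    A toy one-dimensional version of the simplest (diffusive) scenario can illustrate why MPA size matters for mobile fish. Everything here is an illustrative assumption: fish random-walk on a ring of cells, a fixed harvest fraction is removed outside the protected cells, and constant recruitment replaces losses.

        # Toy diffusive-movement MPA sketch (not the paper's model).
        import random

        def equilibrium_population(mpa_frac, n_cells=100, n_fish=2000,
                                   steps=3000, harvest=0.05, seed=0):
            rng = random.Random(seed)
            mpa = set(range(int(n_cells * mpa_frac)))      # protected cells
            fish = [rng.randrange(n_cells) for _ in range(n_fish)]
            for _ in range(steps):
                fish = [(x + rng.choice((-1, 1))) % n_cells for x in fish]
                fish = [x for x in fish                    # fishing outside MPA
                        if x in mpa or rng.random() > harvest]
                fish += [rng.randrange(n_cells) for _ in range(20)]  # recruits
            return len(fish)

        for frac in (0.1, 0.3):
            print("MPA %.0f%%: equilibrium population %d"
                  % (100 * frac, equilibrium_population(frac)))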

  12. Simplified failure sequence evaluation of reactor pressure vessel head corroding in-core instrumentation assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McVicker, J.P.; Conner, J.T.; Hasrouni, P.N.

    1995-11-01

    In-Core Instrumentation (ICI) assemblies located on a Reactor Pressure Vessel Head have a history of boric acid leakage. The acid tends to corrode the nuts and studs which fasten the flanges of the assembly, thereby compromising the assembly's structural integrity. This paper provides a simplified practical approach in determining the likelihood of an undetected progressing assembly stud deterioration, which would lead to a catastrophic loss of reactor coolant. The structural behavior of the In-Core Instrumentation flanged assembly is modeled using an elastic composite section assumption, with the studs transmitting tension and the pressure sealing gasket experiencing compression. Using the above technique, one can calculate the flange relative deflection and the consequential coolant loss flow rate, as well as the stress in any stud. A solved real-life example develops the expected failure sequence and discusses the exigency of leak detection for safe shutdown. In the particular case of Calvert Cliffs Nuclear Power Plant (CCNPP) it is concluded that leak detection occurs before catastrophic failure of the ICI flange assembly.

  13. Shot-by-shot Spectrum Model for Rod-pinch, Pulsed Radiography Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, William Monford

    A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. Resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. “Goodness of fit” is compared with output from LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays (“MCNPX”) model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. In conclusion, improvements to the model, specifically for application to other geometries, are discussed.

  14. Shot-by-shot Spectrum Model for Rod-pinch, Pulsed Radiography Machines

    DOE PAGES

    Wood, William Monford

    2018-02-07

    A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. Resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. “Goodness of fit” is compared with output from LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays (“MCNPX”) model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. In conclusion, improvements to the model, specifically for application to other geometries, are discussed.

  15. Steady flow model user's guide

    NASA Astrophysics Data System (ADS)

    Doughty, C.; Hellstrom, G.; Tsang, C. F.; Claesson, J.

    1984-07-01

    Sophisticated numerical models that solve the coupled mass and energy transport equations for nonisothermal fluid flow in a porous medium were used to match analytical results and field data for aquifer thermal energy storage (ATES) systems. As an alternative, the Steady Flow Model (SFM), a simplified but fast numerical model of the ATES problem, was developed. A steady, purely radial flow field is prescribed in the aquifer and incorporated into the heat transport equation, which is then solved numerically. While the radial flow assumption limits the range of ATES systems that can be studied using the SFM, it greatly simplifies use of this code. The preparation of input is quite simple compared to that for a sophisticated coupled mass and energy model, and the SFM is far cheaper to run. The simple flow field allows use of a special calculational mesh that eliminates the numerical dispersion usually associated with the numerical solution of convection problems. The problem is defined, the algorithms used to solve it are outlined, and the input and output for the SFM are described.

  16. A simplified building airflow model for agent concentration prediction.

    PubMed

    Jacques, David R; Smith, David A

    2010-11-01

    A simplified building airflow model is presented that can be used to predict the spread of a contaminant agent from a chemical or biological attack. If the dominant means of agent transport throughout the building is an air-handling system operating at steady-state, a linear time-invariant (LTI) model can be constructed to predict the concentration in any room of the building as a result of either an internal or external release. While the model does not capture weather-driven and other temperature-driven effects, it is suitable for concentration predictions under average daily conditions. The model is easily constructed using information that should be accessible to a building manager, supplemented with assumptions based on building codes and standard air-handling system design practices. The results of the model are compared with a popular multi-zone model for a simple building and are demonstrated for building examples containing one or more air-handling systems. The model can be used for rapid concentration prediction to support low-cost placement strategies for chemical and biological detection sensors.
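
    A minimal sketch of the LTI zonal idea, under assumed parameters: three well-mixed zones served by one recirculating air-handling system (no outdoor-air exchange, for simplicity), with supply air taken as the flow-weighted mix of the zone returns. The volumes, flows, and release rate are illustrative, not from the paper.

        # Linear time-invariant zonal model: dCi/dt = (Qi/Vi)(Cmix - Ci) + Si/Vi.
        import numpy as np

        V = np.array([100.0, 150.0, 80.0])   # zone volumes, m^3 (assumed)
        Q = np.array([0.5, 0.7, 0.4])        # supply flows, m^3/s (assumed)

        def step(C, release, dt=1.0):
            Cmix = np.dot(Q, C) / Q.sum()    # well-mixed return/supply plenum
            return C + dt * (Q / V * (Cmix - C) + release / V)

        C = np.zeros(3)
        release = np.array([1e-3, 0.0, 0.0])     # kg/s source in zone 1
        for k in range(600):                     # 10 minutes, 1 s steps
            C = step(C, release if k < 60 else np.zeros(3))
        print("zone concentrations (kg/m^3):", np.round(C, 6))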

  17. Shot-by-shot spectrum model for rod-pinch, pulsed radiography machines

    NASA Astrophysics Data System (ADS)

    Wood, Wm M.

    2018-02-01

    A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. Resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. "Goodness of fit" is compared with output from LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays ("MCNPX") model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. Improvements to the model, specifically for application to other geometries, are discussed.
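
    To make the shot-by-shot idea concrete, here is a minimal Kramers-law sketch, not Wood's actual model: a thick-target bremsstrahlung spectrum dN/dE ∝ Z·I(t)·(eV(t) - E)/E is accumulated over the pulse. The target Z and the synthetic V(t), I(t) pulse shapes are illustrative assumptions.

        # Accumulate a Kramers-law spectrum over a measured (here synthetic) pulse.
        import numpy as np

        def shot_spectrum(t, V, I, Z=74, n_bins=200):
            E = np.linspace(1e-3, V.max(), n_bins)     # photon energy, MeV
            dNdE = np.zeros_like(E)
            for v, i in zip(V, I):                     # sum over time samples
                mask = E < v                           # endpoint at e*V(t)
                dNdE[mask] += Z * i * (v - E[mask]) / E[mask]
            return E, dNdE * (t[1] - t[0])             # integrate over the pulse

        t = np.linspace(0, 60e-9, 600)                     # 60 ns record
        V = 2.25 * np.exp(-((t - 30e-9) / 12e-9) ** 2)     # MV, synthetic
        I = 60.0 * np.exp(-((t - 32e-9) / 14e-9) ** 2)     # kA, synthetic
        E, S = shot_spectrum(t, V, I)
        print("endpoint energy: %.2f MeV" % E[S > 0].max())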

  18. Computer algorithm for analyzing and processing borehole strainmeter data

    USGS Publications Warehouse

    Langbein, John O.

    2010-01-01

    The newly installed Plate Boundary Observatory (PBO) strainmeters record signals from tectonic activity, Earth tides, and atmospheric pressure. Important information about tectonic processes may occur at amplitudes at and below tidal strains and pressure loading. If incorrect assumptions are made regarding the background noise in the strain data, then the estimates of tectonic signal amplitudes may be incorrect. Furthermore, the use of simplifying assumptions that data are uncorrelated can lead to incorrect results and pressure loading and tides may not be completely removed from the raw data. Instead, any algorithm used to process strainmeter data must incorporate the strong temporal correlations that are inherent with these data. The technique described here uses least squares but employs data covariance that describes the temporal correlation of strainmeter data. There are several advantages to this method since many parameters are estimated simultaneously. These parameters include: (1) functional terms that describe the underlying error model, (2) the tidal terms, (3) the pressure loading term(s), (4) amplitudes of offsets, either those from earthquakes or from the instrument, (5) rate and changes in rate, and (6) the amplitudes and time constants of either logarithmic or exponential curves that can characterize postseismic deformation or diffusion of fluids near the strainmeter. With the proper error model, realistic estimates of the standard errors of the various parameters are obtained; this is especially critical in determining the statistical significance of a suspected, tectonic strain signal. The program also provides a method of tracking the various adjustments required to process strainmeter data. In addition, the program provides several plots to assist with identifying either tectonic signals or other signals that may need to be removed before any geophysical signal can be identified.
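
    The key idea, least squares with a data covariance that encodes temporal correlation, fits in a few lines. A minimal sketch follows, assuming a simple exponential (AR(1)-style) covariance rather than the algorithm's actual error models; the design matrix (offset, rate, one tide-like term) and all scales are illustrative.

        # Generalized least squares with temporal covariance, vs. plain OLS.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 500
        t = np.arange(n, dtype=float)
        X = np.column_stack([np.ones(n), t, np.sin(2 * np.pi * t / 12.42)])

        phi = 0.9                          # AR(1) correlation of the noise
        e = np.zeros(n)
        for i in range(1, n):
            e[i] = phi * e[i - 1] + rng.normal(0, 1.0)
        y = X @ np.array([2.0, 0.01, 0.5]) + e

        C = phi ** np.abs(np.subtract.outer(t, t)) / (1 - phi ** 2)
        Ci = np.linalg.inv(C)
        beta_gls = np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ y)
        cov_gls = np.linalg.inv(X.T @ Ci @ X)   # realistic standard errors
        beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
        print("GLS estimates:", np.round(beta_gls, 4))
        print("GLS std errors:", np.round(np.sqrt(np.diag(cov_gls)), 4))
        print("OLS estimates:", np.round(beta_ols, 4), "(white-noise assumption)")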

  19. Design and validation of diffusion MRI models of white matter

    NASA Astrophysics Data System (ADS)

    Jelescu, Ileana O.; Budde, Matthew D.

    2017-11-01

    Diffusion MRI is arguably the method of choice for characterizing white matter microstructure in vivo. Over the typical duration of diffusion encoding, the displacement of water molecules is conveniently on a length scale similar to that of the underlying cellular structures. Moreover, water molecules in white matter are largely compartmentalized which enables biologically-inspired compartmental diffusion models to characterize and quantify the true biological microstructure. A plethora of white matter models have been proposed. However, overparameterization and mathematical fitting complications encourage the introduction of simplifying assumptions that vary between different approaches. These choices impact the quantitative estimation of model parameters with potential detriments to their biological accuracy and promised specificity. First, we review biophysical white matter models in use and recapitulate their underlying assumptions and realms of applicability. Second, we present up-to-date efforts to validate parameters estimated from biophysical models. Simulations and dedicated phantoms are useful in assessing the performance of models when the ground truth is known. However, the biggest challenge remains the validation of the “biological accuracy” of estimated parameters. Complementary techniques such as microscopy of fixed tissue specimens have facilitated direct comparisons of estimates of white matter fiber orientation and densities. However, validation of compartmental diffusivities remains challenging, and complementary MRI-based techniques such as alternative diffusion encodings, compartment-specific contrast agents and metabolites have been used to validate diffusion models. Finally, white matter injury and disease pose additional challenges to modeling, which are also discussed. This review aims to provide an overview of the current state of models and their validation and to stimulate further research in the field to solve the remaining open questions and converge towards consensus.

  20. A comprehensive analysis of the evaporation of a liquid spherical drop.

    PubMed

    Sobac, B; Talbot, P; Haut, B; Rednikov, A; Colinet, P

    2015-01-15

    In this paper, a new comprehensive analysis of a suspended drop of a pure liquid evaporating into air is presented. Based on mass and energy conservation equations, a quasi-steady model is developed including diffusive and convective transports, and considering the non-isothermia of the gas phase. The main original feature of this simple analytical model lies in the consideration of the local dependence of the physico-chemical properties of the gas on the gas temperature, which has a significant influence on the evaporation process at high temperatures. The influence of the atmospheric conditions on the interfacial evaporation flux, molar fraction and temperature is investigated. Simplified versions of the model are developed to highlight the key mechanisms governing the evaporation process. For the conditions considered in this work, the convective transport appears to be opposed to the evaporation process, leading to a decrease of the evaporation flux. However, this effect is relatively limited, as the Péclet numbers turn out to be small. In addition, the gas isothermia assumption never appears to be valid here, even at room temperature, due to the large temperature gradient that develops in the gas phase. These two conclusions are explained by the fact that heat transfer from the gas to the liquid appears to be the step limiting the evaporation process. Despite the complexity of the developed model, and excluding extremely small droplets, the square of the drop radius decreases linearly over time (the R² law). The assumptions of the model are rigorously discussed and general criteria are established, independently of the liquid-gas couple considered. Copyright © 2014 Elsevier Inc. All rights reserved.
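
    The R² law itself is compact enough to state directly: R(t)² = R₀² - Kt, so the droplet lifetime is R₀²/K. The evaporation constant K below is an illustrative value, not one from the paper.

        # R-squared law sketch: squared radius decays linearly until dry-out.
        def radius(t, R0=1e-3, K=1e-7):
            """Droplet radius (m) at time t (s); K (m^2/s) is illustrative."""
            r2 = R0 ** 2 - K * t
            return max(r2, 0.0) ** 0.5

        lifetime = (1e-3) ** 2 / 1e-7          # t_life = R0^2 / K = 10 s
        print(radius(0.0), radius(5.0), "lifetime_s:", lifetime)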

  1. A New Formulation of Time Domain Boundary Integral Equation for Acoustic Wave Scattering in the Presence of a Uniform Mean Flow

    NASA Technical Reports Server (NTRS)

    Hu, Fang; Pizzo, Michelle E.; Nark, Douglas M.

    2017-01-01

    It has been well-known that under the assumption of a constant uniform mean flow, the acoustic wave propagation equation can be formulated as a boundary integral equation, in both the time domain and the frequency domain. Compared with solving partial differential equations, numerical methods based on the boundary integral equation have the advantage of a reduced spatial dimension and, hence, requiring only a surface mesh. However, the constant uniform mean flow assumption, while convenient for formulating the integral equation, does not satisfy the solid wall boundary condition wherever the body surface is not aligned with the uniform mean flow. In this paper, we argue that the proper boundary condition for the acoustic wave should not have its normal velocity be zero everywhere on the solid surfaces, as has been applied in the literature. A careful study of the acoustic energy conservation equation is presented that shows such a boundary condition in fact leads to erroneous source or sink points on solid surfaces not aligned with the mean flow. A new solid wall boundary condition is proposed that conserves the acoustic energy and a new time domain boundary integral equation is derived. In addition to conserving the acoustic energy, another significant advantage of the new equation is that it is considerably simpler than previous formulations. In particular, tangential derivatives of the solution on the solid surfaces are no longer needed in the new formulation, which greatly simplifies numerical implementation. Furthermore, stabilization of the new integral equation by Burton-Miller type reformulation is presented. The stability of the new formulation is studied theoretically as well as numerically by an eigenvalue analysis. Numerical solutions are also presented that demonstrate the stability of the new formulation.

  2. Design and validation of diffusion MRI models of white matter

    PubMed Central

    Jelescu, Ileana O.; Budde, Matthew D.

    2018-01-01

    Diffusion MRI is arguably the method of choice for characterizing white matter microstructure in vivo. Over the typical duration of diffusion encoding, the displacement of water molecules is conveniently on a length scale similar to that of the underlying cellular structures. Moreover, water molecules in white matter are largely compartmentalized which enables biologically-inspired compartmental diffusion models to characterize and quantify the true biological microstructure. A plethora of white matter models have been proposed. However, overparameterization and mathematical fitting complications encourage the introduction of simplifying assumptions that vary between different approaches. These choices impact the quantitative estimation of model parameters with potential detriments to their biological accuracy and promised specificity. First, we review biophysical white matter models in use and recapitulate their underlying assumptions and realms of applicability. Second, we present up-to-date efforts to validate parameters estimated from biophysical models. Simulations and dedicated phantoms are useful in assessing the performance of models when the ground truth is known. However, the biggest challenge remains the validation of the “biological accuracy” of estimated parameters. Complementary techniques such as microscopy of fixed tissue specimens have facilitated direct comparisons of estimates of white matter fiber orientation and densities. However, validation of compartmental diffusivities remains challenging, and complementary MRI-based techniques such as alternative diffusion encodings, compartment-specific contrast agents and metabolites have been used to validate diffusion models. Finally, white matter injury and disease pose additional challenges to modeling, which are also discussed. This review aims to provide an overview of the current state of models and their validation and to stimulate further research in the field to solve the remaining open questions and converge towards consensus. PMID:29755979

  3. D-OPTIMAL EXPERIMENTAL DESIGNS TO TEST FOR DEPARTURE FROM ADDITIVITY IN A FIXED-RATIO MIXTURE RAY.

    EPA Science Inventory

    Humans are exposed to mixtures of environmental compounds. A regulatory assumption is that the mixtures of chemicals act in an additive manner. However, this assumption requires experimental validation. Traditional experimental designs (full factorial) require a large number of e...

  4. Bayesian models for cost-effectiveness analysis in the presence of structural zero costs

    PubMed Central

    Baio, Gianluca

    2014-01-01

    Bayesian modelling for cost-effectiveness data has received much attention in both the health economics and the statistical literature, in recent years. Cost-effectiveness data are characterised by a relatively complex structure of relationships linking a suitable measure of clinical benefit (e.g. quality-adjusted life years) and the associated costs. Simplifying assumptions, such as (bivariate) normality of the underlying distributions, are usually not granted, particularly for the cost variable, which is characterised by markedly skewed distributions. In addition, individual-level data sets are often characterised by the presence of structural zeros in the cost variable. Hurdle models can be used to account for the presence of excess zeros in a distribution and have been applied in the context of cost data. We extend their application to cost-effectiveness data, defining a full Bayesian specification, which consists of a model for the individual probability of null costs, a marginal model for the costs and a conditional model for the measure of effectiveness (given the observed costs). We present the model using a working example to describe its main features. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24343868
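
    A minimal sketch of the hurdle idea for costs, not the paper's full Bayesian specification: a Bernoulli component for null costs plus a log-normal component for the positive costs. The parameter values are illustrative assumptions.

        # Hurdle model for costs: P(cost = 0) = pi; cost | cost > 0 ~ LogNormal.
        import numpy as np

        rng = np.random.default_rng(2)
        pi_zero, mu, sigma = 0.25, 7.0, 0.8      # assumed parameters
        n = 1000
        zero = rng.random(n) < pi_zero
        cost = np.where(zero, 0.0, rng.lognormal(mu, sigma, n))

        # log-likelihood of the hurdle model at the true parameters
        pos = cost[cost > 0]
        ll = (np.log(pi_zero) * zero.sum()
              + np.log1p(-pi_zero) * (~zero).sum()
              + np.sum(-np.log(pos * sigma * np.sqrt(2 * np.pi))
                       - (np.log(pos) - mu) ** 2 / (2 * sigma ** 2)))
        print("n zero:", zero.sum(), " mean positive cost:", round(pos.mean(), 1))
        print("hurdle log-likelihood:", round(ll, 1))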

  5. Bayesian models for cost-effectiveness analysis in the presence of structural zero costs.

    PubMed

    Baio, Gianluca

    2014-05-20

    Bayesian modelling for cost-effectiveness data has received much attention in both the health economics and the statistical literature, in recent years. Cost-effectiveness data are characterised by a relatively complex structure of relationships linking a suitable measure of clinical benefit (e.g. quality-adjusted life years) and the associated costs. Simplifying assumptions, such as (bivariate) normality of the underlying distributions, are usually not granted, particularly for the cost variable, which is characterised by markedly skewed distributions. In addition, individual-level data sets are often characterised by the presence of structural zeros in the cost variable. Hurdle models can be used to account for the presence of excess zeros in a distribution and have been applied in the context of cost data. We extend their application to cost-effectiveness data, defining a full Bayesian specification, which consists of a model for the individual probability of null costs, a marginal model for the costs and a conditional model for the measure of effectiveness (given the observed costs). We present the model using a working example to describe its main features. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.

  6. Deformations of a pre-stretched and lubricated finite elastic membrane driven by non-uniform external forcing

    NASA Astrophysics Data System (ADS)

    Boyko, Evgeniy; Gat, Amir; Bercovici, Moran

    2017-11-01

    We study viscous-elastic dynamics of a fluid confined between a rigid plate and a finite pre-stretched circular elastic membrane, pinned at its boundaries. The membrane is subjected to forces acting either directly on the membrane or through a pressure distribution in the fluid. Under the assumptions of strong pre-stretching and small deformations of the elastic sheet, and by applying the lubrication approximation for the flow, we derive the Green's function for the resulting linearized 4th order diffusion equation governing the deformation field in cylindrical coordinates. In addition, defining an asymptotic expansion with the ratio of the induced to prescribed tension serving as the small parameter, we reduce the coupled Reynolds and non-linear von Kármán equations to a set of three one-way coupled linear equations. The solutions to these equations provide insight into the effects of induced tension, and enable simplified prediction of the correction for the deformation field. Funded by the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Programme, Grant Agreement No. 678734 (MetamorphChip). E.B. is supported by the Adams Fellowship Program.

  7. Stress intensity factors in two bonded elastic layers containing cracks perpendicular to and on the interface. Part 1: Analysis

    NASA Technical Reports Server (NTRS)

    Lu, M. C.; Erdogan, F.

    1980-01-01

    The basic crack problem which is essential for the study of subcritical crack propagation and fracture of layered structural materials is considered. Because of the apparent analytical difficulties, the problem is idealized as one of plane strain or plane stress. An additional simplifying assumption is made by restricting the formulation of the problem to crack geometries and loading conditions which have a plane of symmetry perpendicular to the interface. The general problem is formulated in terms of a coupled system of four integral equations. For each relevant crack configuration of practical interest, the singular behavior of the solution near and at the ends and points of intersection of the cracks is investigated and the related characteristic equations are obtained. The edge crack terminating at and crossing the interface, the T-shaped crack consisting of a broken layer and a delamination crack, the cross-shaped crack which consists of a delamination crack intersecting a crack which is perpendicular to the interface, and a delamination crack initiating from a stress-free boundary of the bonded layers are some of the practical crack geometries considered.

  8. Theory of rotational transition in atom-diatom chemical reaction

    NASA Astrophysics Data System (ADS)

    Nakamura, Masato; Nakamura, Hiroki

    1989-05-01

    Rotational transition in atom-diatom chemical reaction is theoretically studied. A new approximate theory (which we call the IOS-DW approximation) is proposed on the basis of the physical idea that rotational transition in reaction is induced by the following two different mechanisms: rotationally inelastic half collision in both initial and final arrangement channels, and coordinate transformation in the reaction zone. This theory gives a fairly compact expression for the state-to-state transition probability. Introducing the additional physically reasonable assumption that reaction (particle rearrangement) takes place in a spatially localized region, we have reduced this expression to a simpler analytical form which can explicitly give the overall rotational state distribution in reaction. Numerical application to the H+H2 reaction demonstrated the theory's effectiveness despite its simplicity. A further simplified, most naive approximation, the independent-events approximation, was also proposed and demonstrated to work well in the test calculations for H+H2. The overall rotational state distribution is expressed simply by a product sum of the transition probabilities for the three consecutive processes in reaction: inelastic transition in the initial half collision, transition due to particle rearrangement, and inelastic transition in the final half collision.

  9. Exact Solutions for Wind-Driven Coastal Upwelling and Downwelling over Sloping Topography

    NASA Astrophysics Data System (ADS)

    Choboter, P.; Duke, D.; Horton, J.; Sinz, P.

    2009-12-01

    The dynamics of wind-driven coastal upwelling and downwelling are studied using a simplified dynamical model. Exact solutions are examined as a function of time and over a family of sloping topographies. Assumptions in the two-dimensional model include a frictionless ocean interior below the surface Ekman layer, and no alongshore dependence of the variables; however, dependence in the cross-shore and vertical directions is retained. Additionally, density and alongshore momentum are advected by the cross-shore velocity in order to maintain thermal wind. The time-dependent initial-value problem is solved with constant initial stratification and no initial alongshore flow. An alongshore pressure gradient is added to allow the cross-shore flow to be geostrophically balanced far from shore. Previously, this model has been used to study upwelling over flat-bottom and sloping topographies, but the novel feature in this work is the discovery of exact solutions for downwelling. These exact solutions are compared to numerical solutions from a primitive-equation ocean model, based on the Princeton Ocean Model, configured in a similar two-dimensional geometry. Many typical features of the evolution of density and velocity during downwelling are displayed by the analytical model.

  10. Of Lice and Math: Using Models to Understand and Control Populations of Head Lice

    PubMed Central

    Laguna, Mara Fabiana; Risau-Gusman, Sebastián

    2011-01-01

    In this paper we use detailed data about the biology of the head louse (pediculus humanus capitis) to build a model of the evolution of head lice colonies. Using theory and computer simulations, we show that the model can be used to assess the impact of the various strategies usually applied to eradicate head lice, both conscious (treatments) and unconscious (grooming). In the case of treatments, we study the difference in performance that arises when they are applied in systematic and non-systematic ways. Using some reasonable simplifying assumptions (such as random mixing of human groups and the same mobility for all life stages of head lice other than eggs) we model the contagion of pediculosis using only one additional parameter. It is shown that this parameter can be tuned to obtain collective infestations whose characteristics are compatible with what is given in the literature on real infestations. We analyze two scenarios: one where group members begin treatment when a similar number of lice are present in each head, and another where there is one individual who starts treatment with a much larger threshold (“superspreader”). For both cases we assess the impact of several collective strategies of treatment. PMID:21799752

  11. Numerical model for the thermal behavior of thermocline storage tanks

    NASA Astrophysics Data System (ADS)

    Ehtiwesh, Ismael A. S.; Sousa, Antonio C. M.

    2018-03-01

    Energy storage is a critical factor in the advancement of solar thermal power systems for the sustained delivery of electricity. In addition, the incorporation of thermal energy storage into the operation of concentrated solar power systems (CSPs) offers the potential of delivering electricity without fossil-fuel backup even during peak demand, independent of weather conditions and daylight. Despite this potential, some areas of the design and performance of thermocline systems still require further attention for future incorporation in commercial CSPs, particularly their operation and control. Therefore, the present study aims to develop a simple but efficient numerical model to allow the comprehensive analysis of thermocline storage systems, aiming at a better understanding of their dynamic temperature response. The validation results, despite the simplifying assumptions of the numerical model, agree well with the experiments for the time evolution of the thermocline region. Three different cases are considered to test the versatility of the numerical model; for the particular type of storage tank with a top round impingement inlet, a simple analytical model was developed to take into consideration the increased turbulence level in the mixing region. The numerical predictions for the three cases are in good agreement with the experimental results.

  12. Neural coordination can be enhanced by occasional interruption of normal firing patterns: a self-optimizing spiking neural network model.

    PubMed

    Woodward, Alexander; Froese, Tom; Ikegami, Takashi

    2015-02-01

    The state space of a conventional Hopfield network typically exhibits many different attractors of which only a small subset satisfies constraints between neurons in a globally optimal fashion. It has recently been demonstrated that combining Hebbian learning with occasional alterations of normal neural states avoids this problem by means of self-organized enlargement of the best basins of attraction. However, so far it is not clear to what extent this process of self-optimization is also operative in real brains. Here we demonstrate that it can be transferred to more biologically plausible neural networks by implementing a self-optimizing spiking neural network model. In addition, by using this spiking neural network to emulate a Hopfield network with Hebbian learning, we attempt to make a connection between rate-based and temporal coding based neural systems. Although further work is required to make this model more realistic, it already suggests that the efficacy of the self-optimizing process is independent from the simplifying assumptions of a conventional Hopfield network. We also discuss natural and cultural processes that could be responsible for occasional alteration of neural firing patterns in actual brains. Copyright © 2014 Elsevier Ltd. All rights reserved.
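
    For readers unfamiliar with the underlying procedure, here is a compact rate-based sketch of self-optimization (Hebbian learning on settled states between random resets), not the paper's spiking implementation; the network size, learning rate, and number of resets are illustrative assumptions, and whether the mean attractor energy improves depends on these choices.

        # Self-optimizing Hopfield sketch: learn on attractors between resets.
        import numpy as np

        rng = np.random.default_rng(4)
        N = 60
        J = rng.normal(0, 1, (N, N))
        J = (J + J.T) / 2
        np.fill_diagonal(J, 0)                 # fixed constraint weights
        W = J.copy()                           # constraints + learned part

        def settle(W, s, sweeps=30):
            for _ in range(sweeps):
                for i in rng.permutation(N):   # asynchronous threshold updates
                    s[i] = 1 if W[i] @ s >= 0 else -1
            return s

        def energy(s):                         # satisfaction of original J
            return -0.5 * s @ J @ s

        def mean_energy(W, trials=20):
            return np.mean([energy(settle(W, rng.choice([-1, 1], N)))
                            for _ in range(trials)])

        before = mean_energy(J)
        for _ in range(300):                   # resets + Hebbian learning
            s = settle(W, rng.choice([-1, 1], N))
            W = W + 0.001 * np.outer(s, s)
            np.fill_diagonal(W, 0)
        print("mean attractor energy before/after: %.1f / %.1f"
              % (before, mean_energy(W)))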

  13. The Role of Semantic Clustering in Optimal Memory Foraging.

    PubMed

    Montez, Priscilla; Thompson, Graham; Kello, Christopher T

    2015-11-01

    Recent studies of semantic memory have investigated two theories of optimal search adopted from the animal foraging literature: Lévy flights and marginal value theorem. Each theory makes different simplifying assumptions and addresses different findings in search behaviors. In this study, an experiment is conducted to test whether clustering in semantic memory may play a role in evidence for both theories. Labeled magnets and a whiteboard were used to elicit spatial representations of semantic knowledge about animals. Category recall sequences from a separate experiment were used to trace search paths over the spatial representations of animal knowledge. Results showed that spatial distances between animal names arranged on the whiteboard were correlated with inter-response intervals (IRIs) during category recall, and distributions of both dependent measures approximated inverse power laws associated with Lévy flights. In addition, IRIs were relatively shorter when paths first entered animal clusters, and longer when they exited clusters, which is consistent with marginal value theorem. In conclusion, area-restricted searches over clustered semantic spaces may account for two different patterns of results interpreted as supporting two different theories of optimal memory foraging. Copyright © 2015 Cognitive Science Society, Inc.

  14. Social contact patterns can buffer costs of forgetting in the evolution of cooperation.

    PubMed

    Stevens, Jeffrey R; Woike, Jan K; Schooler, Lael J; Lindner, Stefan; Pachur, Thorsten

    2018-06-13

    Analyses of the evolution of cooperation often rely on two simplifying assumptions: (i) individuals interact equally frequently with all social network members and (ii) they accurately remember each partner's past cooperation or defection. Here, we examine how more realistic, skewed patterns of contact, in which individuals interact primarily with only a subset of their network's members, influence cooperation. In addition, we test whether skewed contact patterns can counteract the decrease in cooperation caused by memory errors (i.e. forgetting). Finally, we compare two types of memory error that vary in whether forgotten interactions are replaced with random actions or with actions from previous encounters. We use evolutionary simulations of repeated prisoner's dilemma games that vary agents' contact patterns, forgetting rates and types of memory error. We find that highly skewed contact patterns foster cooperation and also buffer the detrimental effects of forgetting. The type of memory error used also influences cooperation rates. Our findings reveal previously neglected but important roles of contact pattern, type of memory error and the interaction of contact pattern and memory on cooperation. Although cognitive limitations may constrain the evolution of cooperation, social contact patterns can counteract some of these constraints. © 2018 The Author(s).
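
    A minimal sketch of the buffering mechanism only, not the authors' evolutionary simulations: if a partner's last move is forgotten with probability p per time step, memory survives a gap of g steps with probability (1-p)^g, and Zipf-like contact makes typical gaps short for frequently met partners. All parameters are illustrative.

        # Expected probability that memory of a partner is intact at the next
        # meeting, under uniform vs. skewed (Zipf-like) contact patterns.
        import random

        def intact_memory_rate(skewed, p_forget=0.05, n_partners=50,
                               steps=100000, seed=0):
            rng = random.Random(seed)
            weights = ([1.0 / (i + 1) for i in range(n_partners)] if skewed
                       else [1.0] * n_partners)
            last_seen = [None] * n_partners
            hits = trials = 0
            for t in range(steps):
                j = rng.choices(range(n_partners), weights)[0]
                if last_seen[j] is not None:
                    gap = t - last_seen[j]
                    hits += (1.0 - p_forget) ** gap   # P(memory survived)
                    trials += 1
                last_seen[j] = t
            return hits / trials

        print("uniform:", round(intact_memory_rate(False), 3))
        print("skewed :", round(intact_memory_rate(True), 3))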

  15. AMS-02 fits dark matter

    NASA Astrophysics Data System (ADS)

    Balázs, Csaba; Li, Tong

    2016-05-01

    In this work we perform a comprehensive statistical analysis of the AMS-02 electron, positron fluxes and the antiproton-to-proton ratio in the context of a simplified dark matter model. We include known, standard astrophysical sources and a dark matter component in the cosmic ray injection spectra. To predict the AMS-02 observables we use propagation parameters extracted from observed fluxes of heavier nuclei and the low energy part of the AMS-02 data. We assume that the dark matter particle is a Majorana fermion coupling to third generation fermions via a spin-0 mediator, and annihilating to multiple channels at once. The simultaneous presence of various annihilation channels provides the dark matter model with additional flexibility, and this enables us to simultaneously fit all cosmic ray spectra using a simple particle physics model and coherent astrophysical assumptions. Our results indicate that AMS-02 observations are not only consistent with the dark matter hypothesis within the uncertainties, but adding a dark matter contribution improves the fit to the data. Assuming, however, that dark matter is solely responsible for this improvement of the fit, it is difficult to evade the latest CMB limits in this model.

  16. Generalized Sample Size Determination Formulas for Investigating Contextual Effects by a Three-Level Random Intercept Model.

    PubMed

    Usami, Satoshi

    2017-03-01

    Behavioral and psychological researchers have shown strong interests in investigating contextual effects (i.e., the influences of combinations of individual- and group-level predictors on individual-level outcomes). The present research provides generalized formulas for determining the sample size needed in investigating contextual effects according to the desired level of statistical power as well as width of confidence interval. These formulas are derived within a three-level random intercept model that includes one predictor/contextual variable at each level to simultaneously cover the various kinds of contextual effects in which researchers may be interested. The relative influences of indices included in the formulas on the standard errors of contextual effect estimates are investigated with the aim of further simplifying sample size determination procedures. In addition, simulation studies are performed to investigate the finite sample behavior of the calculated statistical power, showing that estimated sample sizes based on the derived formulas can be both positively and negatively biased due to complex effects of unreliability of contextual variables, multicollinearity, and violation of the assumption of known variances. Thus, it is advisable to compare estimated sample sizes under various specifications of indices and to evaluate their potential bias, as illustrated in the example.

  17. Of lice and math: using models to understand and control populations of head lice.

    PubMed

    Laguna, María Fabiana; Risau-Gusman, Sebastián

    2011-01-01

    In this paper we use detailed data about the biology of the head louse (pediculus humanus capitis) to build a model of the evolution of head lice colonies. Using theory and computer simulations, we show that the model can be used to assess the impact of the various strategies usually applied to eradicate head lice, both conscious (treatments) and unconscious (grooming). In the case of treatments, we study the difference in performance that arises when they are applied in systematic and non-systematic ways. Using some reasonable simplifying assumptions (such as random mixing of human groups and the same mobility for all life stages of head lice other than eggs) we model the contagion of pediculosis using only one additional parameter. It is shown that this parameter can be tuned to obtain collective infestations whose characteristics are compatible with what is given in the literature on real infestations. We analyze two scenarios: one where group members begin treatment when a similar number of lice are present in each head, and another where there is one individual who starts treatment with a much larger threshold ("superspreader"). For both cases we assess the impact of several collective strategies of treatment.

  18. Ambient mass density effects on the International Space Station (ISS) microgravity experiments

    NASA Technical Reports Server (NTRS)

    Smith, O. E.; Adelfang, S. I.; Smith, R. E.

    1996-01-01

    The Marshall engineering thermosphere model was specified by NASA to be used in the design, development and testing phases of the International Space Station (ISS). The mass density is the atmospheric parameter which most affects the ISS. Under simplifying assumptions, the critical ambient neutral density required to produce one micro-g on the ISS is estimated using an atmospheric drag acceleration equation. Examples are presented for the critical density versus altitude, and for the critical density that is exceeded at least once a month and once per orbit during periods of low and high solar activity. An analysis of the ISS orbital decay is presented.
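
    The drag-acceleration estimate reduces to a = (1/2) ρ v² Cd A / m, which can be solved for the ambient density ρ that yields one micro-g. The ISS-like drag coefficient, area, mass, and orbital speed below are illustrative assumptions, not the values used in the record above.

        # Critical ambient density for a target drag acceleration (one micro-g).
        g0 = 9.81                        # m/s^2
        Cd, A, m = 2.2, 1000.0, 4.2e5    # drag coeff., frontal area m^2, mass kg
        v = 7660.0                       # orbital speed, m/s

        def critical_density(a_target=1e-6 * g0):
            return 2.0 * m * a_target / (Cd * A * v ** 2)

        print("critical density: %.2e kg/m^3" % critical_density())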

  19. Influence of thermal and velocity slip on the peristaltic flow of Cu-water nanofluid with magnetic field

    NASA Astrophysics Data System (ADS)

    Akbar, Noreen Sher

    2016-03-01

    The peristaltic flow of an incompressible viscous fluid containing copper nanoparticles in an asymmetric channel is discussed with thermal and velocity slip effects. The peristaltic flow of copper nanoparticles with water as the base fluid has not been explored so far. The equations for the proposed fluid model are developed for the first time in the literature and simplified using long wavelength and low Reynolds number assumptions. Exact solutions have been calculated for the velocity, pressure gradient, solid volume fraction of the nanoparticles and temperature profile. The influence of various flow parameters on the flow and heat transfer characteristics is examined.

  20. Metachronal wave analysis for non-Newtonian fluid under thermophoresis and Brownian motion effects

    NASA Astrophysics Data System (ADS)

    Shaheen, A.; Nadeem, S.

    This paper analyses a mathematical model of ciliary motion in an annulus. The effects of convective heat transfer and nanoparticles are taken into account. The governing equations of the Jeffrey six-constant fluid, along with heat and nanoparticle transport, are modelled and then simplified using long wavelength and low Reynolds number assumptions. The reduced equations are solved with the help of the homotopy perturbation method. The obtained expressions for the velocity, temperature and nanoparticle concentration profiles are plotted, and the impact of various physical parameters is investigated for different peristaltic waves. Streamlines have also been plotted in the last part of the paper.

  1. Magnetic field effects for copper suspended nanofluid venture through a composite stenosed arteries with permeable wall

    NASA Astrophysics Data System (ADS)

    Akbar, Noreen Sher; Butt, Adil Wahid

    2015-05-01

    In the present paper, magnetic field effects on blood flow carrying copper nanoparticles through a composite stenosis in arteries with permeable walls are discussed. The blood flow of copper nanoparticles with water as the base fluid has not been explored yet. The equations for the Cu-water nanofluid are developed for the first time in the literature and simplified using long wavelength and low Reynolds number assumptions. Exact solutions have been evaluated for the velocity, pressure gradient, solid volume fraction of the nanoparticles and temperature profile. The effects of various flow parameters on the flow and heat transfer characteristics are illustrated.

  2. The span as a fundamental factor in airplane design

    NASA Technical Reports Server (NTRS)

    Lachmann, G

    1928-01-01

    Previous theoretical investigations of steady curvilinear flight did not afford a suitable criterion of "maneuverability," which is very important for judging combat, sport and stunt-flying airplanes. The idea of rolling ability, i.e., of the speed of rotation of the airplane about its X axis in rectilinear flight at constant speed and for a constant, suddenly produced deflection of the ailerons, is introduced and tested under simplified assumptions for the air-force distribution over the span. This leads to the following conclusions: the effect of the moment of inertia about the X axis is negligibly small, since the speed of rotation very quickly reaches a uniform value.

  3. Multimodal far-field acoustic radiation pattern: An approximate equation

    NASA Technical Reports Server (NTRS)

    Rice, E. J.

    1977-01-01

    The far-field sound radiation theory for a circular duct was studied for both single mode and multimodal inputs. The investigation was intended to develop a method to determine the acoustic power produced by turbofans as a function of mode cut-off ratio. With reasonable simplifying assumptions the single mode radiation pattern was shown to be reducible to a function of mode cut-off ratio only. With modal cut-off ratio as the dominant variable, multimodal radiation patterns can be reduced to a simple explicit expression. This approximate expression provides excellent agreement with an exact calculation of the sound radiation pattern using equal acoustic power per mode.

  4. Actin-based propulsion of a microswimmer.

    PubMed

    Leshansky, A M

    2006-07-01

    A simple hydrodynamic model of actin-based propulsion of microparticles in dilute cell-free cytoplasmic extracts is presented. Under the basic assumption that actin polymerization at the particle surface acts as a force dipole, pushing apart the load and the free (nonanchored) actin tail, the propulsive velocity of the microparticle is determined as a function of the tail length, porosity, and particle shape. The anticipated velocities of the cargo displacement and the rearward motion of the tail are in good agreement with recently reported results of biomimetic experiments. A more detailed analysis of the particle-tail hydrodynamic interaction is presented and compared to the prediction of the simplified model.

  5. Theoretical analysis of oxygen diffusion at startup in an alkali metal heat pipe with gettered alloy walls

    NASA Technical Reports Server (NTRS)

    Tower, L. K.

    1973-01-01

    The diffusion of oxygen into, or out of, a gettered alloy exposed to oxygenated alkali liquid metal coolant, a situation arising in some high temperature heat transfer systems, was analyzed. The relation between the diffusion process and the thermochemistry of oxygen in the alloy and in the alkali metal was developed by making several simplifying assumptions. The treatment is therefore theoretical in nature. However, a practical example pertaining to the startup of a heat pipe with walls of T-111, a tantalum alloy, and lithium working fluid illustrates the use of the figures contained in the analysis.

  6. Combined effects of heat and mass transfer to magneto hydrodynamics oscillatory dusty fluid flow in a porous channel

    NASA Astrophysics Data System (ADS)

    Govindarajan, A.; Vijayalakshmi, R.; Ramamurthy, V.

    2018-04-01

    The main aim of this article is to study the combined effects of heat and mass transfer on radiative magnetohydrodynamic (MHD) oscillatory flow of an optically thin dusty fluid in a saturated porous medium channel. Based on certain assumptions, the momentum, energy, and concentration equations are obtained. The governing equations are non-dimensionalised, simplified and solved analytically. Closed analytical-form solutions for the velocity, temperature and concentration profiles are obtained. Numerical computations are presented graphically to show the salient features of various physical parameters. The shear stress, the rate of heat transfer and the rate of mass transfer are also presented graphically.

  7. Efficiency gain from elastic optical networks

    NASA Astrophysics Data System (ADS)

    Morea, Annalisa; Rival, Olivier

    2011-12-01

    We compare the cost-efficiency of optical networks based on mixed datarates (10, 40, 100Gb/s) and datarateelastic technologies. A European backbone network is examined under various traffic assumptions (volume of transported data per demand and total number of demands) to better understand the impact of traffic characteristics on cost-efficiency. Network dimensioning is performed for static and restorable networks (resilient to one-link failure). In this paper we will investigate the trade-offs between price of interfaces, reach and reconfigurability, showing that elastic solutions can be more cost-efficient than mixed-rate solutions because of the better compatibility between different datarates, increased reach of channels and simplified wavelength allocation.

  8. A Module Language for Typing by Contracts

    NASA Technical Reports Server (NTRS)

    Glouche, Yann; Talpin, Jean-Pierre; LeGuernic, Paul; Gautier, Thierry

    2009-01-01

    Assume-guarantee reasoning is a popular and expressive paradigm for modular and compositional specification of programs. It is becoming a fundamental concept in some computer-aided design tools for embedded system design. In this paper, we elaborate foundations for contract-based embedded system design by proposing a general-purpose module language based on a Boolean algebra that allows contracts to be defined. In this framework, contracts are used to negotiate the correctness of assumptions made on the definition of a component at the point where it is used, and to provide guarantees to its environment. We illustrate this presentation with the specification of a simplified 4-stroke engine model.

  9. Centrifugal inertia effects in two-phase face seal films

    NASA Technical Reports Server (NTRS)

    Basu, P.; Hughes, W. F.; Beeler, R. M.

    1987-01-01

    A simplified, semianalytical model has been developed to analyze the effect of centrifugal inertia in two-phase face seals. The model is based on the assumption of isothermal flow through the seal, but at an elevated temperature, and takes into account heat transfer and boiling. Using this model, seal performance curves are obtained with water as the working fluid. It is shown that the centrifugal inertia of the fluid reduces the load-carrying capacity dramatically at high speeds and that operational instability exists under certain conditions. While an all-liquid seal may be starved at speeds higher than a 'critical' value, leakage always occurs under boiling conditions.

  10. Simplified Habit Reversal Plus Adjunct Contingencies in the Treatment of Thumb Sucking and Hair Pulling in a Young Child.

    ERIC Educational Resources Information Center

    Long, Ethan S.; Miltenberger, Raymond G.; Rapp, John T.

    1999-01-01

    Using a simplified habit reversal treatment consisting of awareness training, competing response training, and social support procedures, minimal results were initially obtained for thumb sucking and hair pulling behaviors. Additional treatment phases involving differential reinforcement and response cost resulted in near-zero levels of the behavior when…

  11. Implicit assumptions underlying simple harvest models of marine bird populations can mislead environmental management decisions.

    PubMed

    O'Brien, Susan H; Cook, Aonghais S C P; Robinson, Robert A

    2017-10-01

    Assessing the potential impact of additional mortality from anthropogenic causes on animal populations requires detailed demographic information. However, these data are frequently lacking, making simple algorithms, which require little data, appealing. Because of their simplicity, these algorithms often rely on implicit assumptions, some of which may be quite restrictive. Potential Biological Removal (PBR) is a simple harvest model that estimates the number of additional mortalities that a population can theoretically sustain without causing population extinction. However, PBR relies on a number of implicit assumptions, particularly around density dependence and population trajectory that limit its applicability in many situations. Among several uses, it has been widely employed in Europe in Environmental Impact Assessments (EIA), to examine the acceptability of potential effects of offshore wind farms on marine bird populations. As a case study, we use PBR to estimate the number of additional mortalities that a population with characteristics typical of a seabird population can theoretically sustain. We incorporated this level of additional mortality within Leslie matrix models to test assumptions within the PBR algorithm about density dependence and current population trajectory. Our analyses suggest that the PBR algorithm identifies levels of mortality which cause population declines for most population trajectories and forms of population regulation. Consequently, we recommend that practitioners do not use PBR in an EIA context for offshore wind energy developments. Rather than using simple algorithms that rely on potentially invalid implicit assumptions, we recommend use of Leslie matrix models for assessing the impact of additional mortality on a population, enabling the user to explicitly define assumptions and test their importance. Copyright © 2017 Elsevier Ltd. All rights reserved.
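
    For reference, the standard PBR formula (Wade 1998) is simple enough to state in a few lines: PBR = Nmin × 0.5 × Rmax × Fr. The seabird-like input values below are illustrative assumptions, not numbers from the paper.

        # Potential Biological Removal: sustainable additional mortalities/year.
        def pbr(n_min, r_max, f_r):
            """n_min: minimum population estimate; r_max: maximum population
            growth rate; f_r: recovery factor in (0, 1]."""
            return n_min * 0.5 * r_max * f_r

        # e.g. 100,000 birds, max growth rate 10%/yr, recovery factor 0.5:
        print(pbr(100_000, 0.10, 0.5))   # -> 2500.0 birds per year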

  12. On the combinatorics of sparsification.

    PubMed

    Huang, Fenix Wd; Reidys, Christian M

    2012-10-22

    We study the sparsification of dynamic programming based on folding algorithms of RNA structures. Sparsification is a method that improves significantly the computation of minimum free energy (mfe) RNA structures. We provide a quantitative analysis of the sparsification of a particular decomposition rule, Λ∗. This rule splits an interval of RNA secondary and pseudoknot structures of fixed topological genus. Key for quantifying sparsifications is the size of the so called candidate sets. Here we assume mfe-structures to be specifically distributed (see Assumption 1) within arbitrary and irreducible RNA secondary and pseudoknot structures of fixed topological genus. We then present a combinatorial framework which allows by means of probabilities of irreducible sub-structures to obtain the expectation of the Λ∗-candidate set w.r.t. a uniformly random input sequence. We compute these expectations for arc-based energy models via energy-filtered generating functions (GF) in case of RNA secondary structures as well as RNA pseudoknot structures. Furthermore, for RNA secondary structures we also analyze a simplified loop-based energy model. Our combinatorial analysis is then compared to the expected number of Λ∗-candidates obtained from the folding mfe-structures. In case of the mfe-folding of RNA secondary structures with a simplified loop-based energy model our results imply that sparsification provides a significant, constant improvement of 91% (theory) to be compared to an 96% (experimental, simplified arc-based model) reduction. However, we do not observe a linear factor improvement. Finally, in case of the "full" loop-energy model we can report a reduction of 98% (experiment). Sparsification was initially attributed a linear factor improvement. This conclusion was based on the so called polymer-zeta property, which stems from interpreting polymer chains as self-avoiding walks. Subsequent findings however reveal that the O(n) improvement is not correct. The combinatorial analysis presented here shows that, assuming a specific distribution (see Assumption 1), of mfe-structures within irreducible and arbitrary structures, the expected number of Λ∗-candidates is Θ(n2). However, the constant reduction is quite significant, being in the range of 96%. We furthermore show an analogous result for the sparsification of the Λ∗-decomposition rule for RNA pseudoknotted structures of genus one. Finally we observe that the effect of sparsification is sensitive to the employed energy model.

  13. Multiscale Molecular Dynamics Model for Heterogeneous Charged Systems

    NASA Astrophysics Data System (ADS)

    Stanton, L. G.; Glosli, J. N.; Murillo, M. S.

    2018-04-01

    Modeling matter across large length scales and timescales using molecular dynamics simulations poses significant challenges. These challenges are typically addressed through the use of precomputed pair potentials that depend on thermodynamic properties like temperature and density; however, many scenarios of interest involve spatiotemporal variations in these properties, and such variations can violate assumptions made in constructing these potentials, thus precluding their use. In particular, when a system is strongly heterogeneous, most of the usual simplifying assumptions (e.g., spherical potentials) do not apply. Here, we present a multiscale approach to orbital-free density functional theory molecular dynamics (OFDFT-MD) simulations that bridges atomic, interionic, and continuum length scales to allow for variations in hydrodynamic quantities in a consistent way. Our multiscale approach enables simulations on the order of micron length scales and tens of picoseconds in time, which exceeds current OFDFT-MD simulations by many orders of magnitude. This new capability is then used to study the heterogeneous, nonequilibrium dynamics of a heated interface characteristic of an inertial-confinement-fusion capsule containing a plastic ablator near a fuel layer composed of deuterium-tritium ice. At these scales, fundamental assumptions of continuum models are explored; features such as the separation of the momentum fields among the species and strong hydrogen jetting from the plastic into the fuel region are observed, which had previously not been seen in hydrodynamic simulations.

  14. Fuels for urban transit buses: a cost-effectiveness analysis.

    PubMed

    Cohen, Joshua T; Hammitt, James K; Levy, Jonathan I

    2003-04-15

    Public transit agencies have begun to adopt alternative propulsion technologies to reduce urban transit bus emissions associated with conventional diesel (CD) engines. Among the most popular alternatives are emission-controlled diesel buses (ECD), defined here to be buses with continuously regenerating diesel particle filters burning low-sulfur diesel fuel, and buses burning compressed natural gas (CNG). This study uses a series of simplifying assumptions to arrive at first-order estimates for the incremental cost-effectiveness (CE) of ECD and CNG relative to CD. The CE ratio numerator reflects acquisition and operating costs. The denominator reflects health losses (mortality and morbidity) due to primary particulate matter (PM), secondary PM, and ozone exposure, measured as quality-adjusted life years (QALYs). We find that CNG provides larger health benefits than does ECD (nine vs. six QALYs annually per 1000 buses) but that ECD is more cost-effective than CNG ($270,000 per QALY for ECD vs. $1.7 million to $2.4 million per QALY for CNG). These estimates are subject to much uncertainty. We identify assumptions that contribute most to this uncertainty and propose potential research directions to refine our estimates.
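
    The CE arithmetic behind these figures is a simple incremental ratio. In the sketch below, the QALY gains are the abstract's values, while the incremental cost figures are hypothetical, back-calculated only to reproduce the reported orders of magnitude:

      # Toy incremental cost-effectiveness calculation per 1000 buses per year.
      # QALY gains are from the abstract; the incremental costs are assumed,
      # chosen only to match the reported CE ratios.
      qaly_gain = {"ECD": 6.0, "CNG": 9.0}          # QALYs/yr per 1000 buses
      incr_cost = {"ECD": 1.62e6, "CNG": 18.0e6}    # $/yr per 1000 buses (assumed)

      for tech in ("ECD", "CNG"):
          ce = incr_cost[tech] / qaly_gain[tech]    # incremental CE ratio
          print(f"{tech}: ${ce:,.0f} per QALY")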

  15. Measuring the diffusion of linguistic change

    PubMed Central

    Nerbonne, John

    2010-01-01

    We examine situations in which linguistic changes have probably been propagated via normal contact as opposed to via conquest, recent settlement and large-scale migration. We proceed then from two simplifying assumptions: first, that all linguistic variation is the result of either diffusion or independent innovation, and, second, that we may operationalize social contact as geographical distance. It is clear that both of these assumptions are imperfect, but they allow us to examine diffusion via the distribution of linguistic variation as a function of geographical distance. Several studies in quantitative linguistics have examined this relation, starting with Séguy (Séguy 1971 Rev. Linguist. Romane 35, 335–357), and virtually all report a sublinear growth in aggregate linguistic variation as a function of geographical distance. The literature from dialectology and historical linguistics has mostly traced the diffusion of individual features, however, so that it is sensible to ask what sort of dynamic in the diffusion of individual features is compatible with Séguy's curve. We examine some simulations of diffusion in an effort to shed light on this question. PMID:21041207
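
    The reported sublinear growth can be made concrete with a one-parameter power-law fit; in this sketch both the data and the d^alpha form are assumptions for illustration (a fitted alpha < 1 corresponds to Séguy-style sublinearity):

      # Fit a power law y = c * d**alpha to synthetic aggregate linguistic
      # distances; alpha < 1 indicates sublinear growth with geographical
      # distance. Data here are synthetic, for illustration only.
      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(0)
      d = np.linspace(5, 500, 60)   # geographical distance (km)
      y = 0.8 * d**0.5 * (1 + 0.05 * rng.standard_normal(60))  # true alpha = 0.5

      power_law = lambda d, c, alpha: c * d**alpha
      (c, alpha), _ = curve_fit(power_law, d, y, p0=(1.0, 1.0))
      print(f"fitted alpha = {alpha:.2f} (sublinear if < 1)")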

  16. Improved parameter inference in catchment models: 1. Evaluating parameter uncertainty

    NASA Astrophysics Data System (ADS)

    Kuczera, George

    1983-10-01

    A Bayesian methodology is developed to evaluate parameter uncertainty in catchment models fitted to a hydrologic response such as runoff, the goal being to improve the chance of successful regionalization. The catchment model is posed as a nonlinear regression model with stochastic errors possibly being both autocorrelated and heteroscedastic. The end result of this methodology, which may use Box-Cox power transformations and ARMA error models, is the posterior distribution, which summarizes what is known about the catchment model parameters. This can be simplified to a multivariate normal provided a linearization in parameter space is acceptable; means of checking and improving this assumption are discussed. The posterior standard deviations give a direct measure of parameter uncertainty, and study of the posterior correlation matrix can indicate what kinds of data are required to improve the precision of poorly determined parameters. Finally, a case study involving a nine-parameter catchment model fitted to monthly runoff and soil moisture data is presented. It is shown that use of ordinary least squares when its underlying error assumptions are violated gives an erroneous description of parameter uncertainty.
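
    Under the linearization discussed above, the posterior is approximately multivariate normal with covariance sigma^2 (J^T J)^{-1}, where J is the Jacobian of the model output with respect to the parameters at the optimum. A generic sketch, with a hypothetical two-parameter model standing in for the nine-parameter catchment model:

      # Gauss-Newton-style parameter uncertainty: after a least-squares fit,
      # the posterior is approximately normal with covariance
      # sigma^2 * (J^T J)^{-1}. The model below is a stand-in, not the
      # catchment model from the paper.
      import numpy as np
      from scipy.optimize import least_squares

      t = np.linspace(0, 10, 50)
      true = np.array([2.0, 0.3])
      rng = np.random.default_rng(1)
      obs = true[0] * np.exp(-true[1] * t) + 0.05 * rng.standard_normal(t.size)

      residuals = lambda p: p[0] * np.exp(-p[1] * t) - obs
      fit = least_squares(residuals, x0=[1.0, 0.1])

      dof = t.size - fit.x.size
      sigma2 = 2 * fit.cost / dof            # fit.cost = 0.5 * sum(res**2)
      cov = sigma2 * np.linalg.inv(fit.jac.T @ fit.jac)
      sd = np.sqrt(np.diag(cov))
      print("posterior std devs   :", sd)
      print("posterior correlation:\n", cov / np.outer(sd, sd))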

  17. NASA's Integrated Instrument Simulator Suite for Atmospheric Remote Sensing from Spaceborne Platforms (ISSARS) and Its Role for the ACE and GPM Missions

    NASA Technical Reports Server (NTRS)

    Tanelli, Simone; Tao, Wei-Kuo; Hostetler, Chris; Kuo, Kwo-Sen; Matsui, Toshihisa; Jacob, Joseph C.; Niamsuwam, Noppasin; Johnson, Michael P.; Hair, John; Butler, Carolyn

    2011-01-01

    Forward simulation is an indispensable tool for evaluation of precipitation retrieval algorithms as well as for studying snow/ice microphysics and their radiative properties. The main challenge of the implementation arises from the size of the problem domain. To overcome this hurdle, assumptions need to be made to simplify complex cloud microphysics. It is important that these assumptions are applied consistently throughout the simulation process. ISSARS addresses this issue by providing a computationally efficient and modular framework that can integrate currently existing models and is also capable of expanding for future development. ISSARS is designed to accommodate the simulation needs of the Aerosol/Clouds/Ecosystems (ACE) mission and the Global Precipitation Measurement (GPM) mission: radars, microwave radiometers, and optical instruments such as lidars and polarimeters. ISSARS's computation is performed in three stages: input reconditioning (IRM), electromagnetic properties (scattering/emission/absorption) calculation (SEAM), and instrument simulation (ISM). The computation is implemented as a web service, while its configuration can be accessed through a web-based interface.

  18. Measuring the diffusion of linguistic change.

    PubMed

    Nerbonne, John

    2010-12-12

    We examine situations in which linguistic changes have probably been propagated via normal contact as opposed to via conquest, recent settlement and large-scale migration. We proceed then from two simplifying assumptions: first, that all linguistic variation is the result of either diffusion or independent innovation, and, second, that we may operationalize social contact as geographical distance. It is clear that both of these assumptions are imperfect, but they allow us to examine diffusion via the distribution of linguistic variation as a function of geographical distance. Several studies in quantitative linguistics have examined this relation, starting with Séguy (Séguy 1971 Rev. Linguist. Romane 35, 335-357), and virtually all report a sublinear growth in aggregate linguistic variation as a function of geographical distance. The literature from dialectology and historical linguistics has mostly traced the diffusion of individual features, however, so that it is sensible to ask what sort of dynamic in the diffusion of individual features is compatible with Séguy's curve. We examine some simulations of diffusion in an effort to shed light on this question.

  19. Reviewed approach to defining the Active Interlock Envelope for Front End ray tracing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seletskiy, S.; Shaftan, T.

    To protect the NSLS-II Storage Ring (SR) components from damage from synchrotron radiation produced by insertion devices (IDs), the Active Interlock (AI) keeps the electron beam within a safe envelope (a.k.a. the Active Interlock Envelope or AIE) in the transverse phase space. The beamline Front Ends (FEs) are designed under the assumption that above a certain beam current (typically 2 mA) the ID synchrotron radiation (IDSR) fan is produced by the interlocked e-beam. These assumptions also define how the ray tracing for the FE is done. To simplify the FE ray tracing for a typical uncanted ID, it was decided to provide the Mechanical Engineering group with a single set of numbers (x, x′, y, y′) for the AIE at the center of the long (or short) ID straight section. Such a unified approach to the design of the beamline Front Ends will accelerate the design process and save valuable human resources. In this paper we describe our new approach to defining the AI envelope and provide the resulting numbers required for design of the typical Front End.

  20. Gas Near a Wall: Shortened Mean Free Path, Reduced Viscosity, and the Manifestation of the Knudsen Layer in the Navier-Stokes Solution of a Shear Flow

    NASA Astrophysics Data System (ADS)

    Abramov, Rafail V.

    2018-06-01

    For the gas near a solid planar wall, we propose a scaling formula for the mean free path of a molecule as a function of the distance from the wall, under the assumption of a uniform distribution of the incident directions of the molecular free flight. We subsequently impose the same scaling onto the viscosity of the gas near the wall and compute the Navier-Stokes solution of the velocity of a shear flow parallel to the wall. Under the simplifying assumption of constant gas temperature, the velocity profile becomes an explicit nonlinear function of the distance from the wall and exhibits a Knudsen boundary layer near the wall. To verify the validity of the obtained formula, we perform Direct Simulation Monte Carlo (DSMC) computations for the shear flow of argon and nitrogen at normal density and temperature. We find excellent agreement between our velocity approximation and the computed DSMC velocity profiles both within the Knudsen boundary layer and away from it.
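
    The geometric mechanism, bulk free paths truncated by the wall and averaged over uniformly distributed flight directions, is easy to illustrate by Monte Carlo. The sketch below is our illustration of that truncation effect only, not the paper's closed-form scaling formula:

      # Monte Carlo illustration: mean free path of a molecule at height y
      # above a wall, with exponential bulk free paths truncated at the wall
      # and flight directions drawn uniformly. An illustration of the
      # truncation geometry, not the paper's formula.
      import numpy as np

      rng = np.random.default_rng(2)
      lam_bulk = 1.0            # bulk mean free path (arbitrary length units)
      n = 200_000

      def mean_free_path(y):
          ell = rng.exponential(lam_bulk, n)   # untruncated path lengths
          mu = rng.uniform(-1.0, 1.0, n)       # cos(angle to wall normal)
          # paths heading toward the wall (mu < 0) are cut at the wall
          cap = np.where(mu < 0, y / -mu, np.inf)
          return np.minimum(ell, cap).mean()

      for y in (0.1, 0.5, 1.0, 5.0):
          print(f"y = {y:>4}: mean free path ~ {mean_free_path(y):.3f}")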

  1. Multi-Destination and Multi-Purpose Trip Effects in the Analysis of the Demand for Trips to a Remote Recreational Site

    NASA Astrophysics Data System (ADS)

    Martínez-Espiñeira, Roberto; Amoako-Tuffour, Joe

    2009-06-01

    One of the basic assumptions of the travel cost method for recreational demand analysis is that the travel cost is always incurred for a single-purpose recreational trip. Several studies have skirted the issue by making simplifying assumptions and dropping from the sample observations considered nonconventional holiday-makers or nontraditional visitors. The effect of such simplifications on the benefit estimates remains conjectural. Given the remoteness of notable recreational parks, multi-destination or multi-purpose trips are not uncommon. This article examines the consequences of allocating travel costs to a recreational site when some trips were taken for purposes other than recreation and/or included visits to other recreational sites. Using a multi-purpose weighting approach on data from Gros Morne National Park, Canada, we conclude that a proper correction for multi-destination or multi-purpose trips is needed to avoid potential biases in the estimated effects of the price (travel-cost) variable and of the income variable in the trip generation equation.

  2. A Testbed for Model Development

    NASA Astrophysics Data System (ADS)

    Berry, J. A.; Van der Tol, C.; Kornfeld, A.

    2014-12-01

    Carbon cycle and land-surface models used in global simulations need to be computationally efficient and have a high standard of software engineering. These models also make a number of scaling assumptions to simplify the representation of complex biochemical and structural properties of ecosystems. This makes it difficult to use these models to test new ideas for parameterizations or to evaluate scaling assumptions. The stripped-down nature of these models also makes it difficult to connect with current disciplinary research, which tends to be focused on much more nuanced topics than can be included in the models. In our opinion and experience, this indicates the need for another type of model that can more faithfully represent the complexity of ecosystems and which has the flexibility to change or interchange parameterizations and to run optimization codes for calibration. We have used the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model in this way to develop, calibrate, and test parameterizations for solar-induced chlorophyll fluorescence, OCS exchange, and stomatal parameterizations at the canopy scale. Examples of the data sets and procedures used to develop and test new parameterizations are presented.

  3. Space-time codependence of retinal ganglion cells can be explained by novel and separable components of their receptive fields.

    PubMed

    Cowan, Cameron S; Sabharwal, Jasdeep; Wu, Samuel M

    2016-09-01

    Reverse correlation methods such as spike-triggered averaging consistently identify the spatial center in the linear receptive fields (RFs) of retinal ganglion cells (GCs). However, the spatial antagonistic surround observed in classical experiments has proven more elusive. Tests for the antagonistic surround have heretofore relied on models that make questionable simplifying assumptions such as space-time separability and radial homogeneity/symmetry. We circumvented these, along with other common assumptions, and observed a linear antagonistic surround in 754 of 805 mouse GCs. By characterizing the RF's space-time structure, we found the overall linear RF's inseparability could be accounted for both by tuning differences between the center and surround and differences within the surround. Finally, we applied this approach to characterize spatial asymmetry in the RF surround. These results shed new light on the spatiotemporal organization of GC linear RFs and highlight a major contributor to its inseparability. © 2016 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of the American Physiological Society and The Physiological Society.
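
    Reverse correlation itself is a compact computation: the spike-triggered average is the mean stimulus window preceding each spike. A self-contained sketch on a synthetic linear-nonlinear neuron (filter shape, nonlinearity, and rates are assumptions for illustration, not the mouse recordings):

      # Spike-triggered average on a synthetic linear-nonlinear (LN) neuron:
      # white-noise stimulus -> linear filter -> rectifying nonlinearity ->
      # Poisson spiking. The recovered STA approximates the filter.
      import numpy as np

      rng = np.random.default_rng(3)
      T, L = 100_000, 20
      stim = rng.standard_normal(T)

      filt = np.exp(-np.arange(L) / 5.0) * np.sin(np.arange(L) / 3.0)  # assumed
      drive = np.convolve(stim, filt, mode="full")[:T]   # causal filtering
      rate = 0.1 * np.maximum(drive, 0.0)                # rectifying nonlinearity
      spikes = rng.poisson(rate)

      idx = np.nonzero(spikes)[0]
      idx = idx[idx >= L]                  # need a full stimulus window
      sta = np.zeros(L)
      for t in idx:
          sta += spikes[t] * stim[t - L + 1 : t + 1]
      sta /= spikes[idx].sum()

      # The STA (time ascending) should match the time-reversed filter.
      print("correlation with true filter:", np.corrcoef(sta, filt[::-1])[0, 1])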

  4. Direct vibro-elastography FEM inversion in Cartesian and cylindrical coordinate systems without the local homogeneity assumption

    NASA Astrophysics Data System (ADS)

    Honarvar, M.; Lobo, J.; Mohareri, O.; Salcudean, S. E.; Rohling, R.

    2015-05-01

    To produce images of tissue elasticity, the vibro-elastography technique involves applying a steady-state multi-frequency vibration to tissue, estimating displacements from ultrasound echo data, and using the estimated displacements in an inverse elasticity problem with the shear modulus spatial distribution as the unknown. In order to fully solve the inverse problem, all three displacement components are required. However, using ultrasound, the axial component of the displacement is measured much more accurately than the other directions. Therefore, simplifying assumptions must be used in this case. Usually, the equations of motion are transformed into a Helmholtz equation by assuming tissue incompressibility and local homogeneity. The local homogeneity assumption causes significant imaging artifacts in areas of varying elasticity. In this paper, we remove the local homogeneity assumption. In particular we introduce a new finite element based direct inversion technique in which only the coupling terms in the equation of motion are ignored, so it can be used with only one component of the displacement. Both Cartesian and cylindrical coordinate systems are considered. The use of multi-frequency excitation also allows us to obtain multiple measurements and reduce artifacts in areas where the displacement of one frequency is close to zero. The proposed method was tested in simulations and experiments against a conventional approach in which the local homogeneity is used. The results show significant improvements in elasticity imaging with the new method compared to previous methods that assumes local homogeneity. For example in simulations, the contrast to noise ratio (CNR) for the region with spherical inclusion increases from an average value of 1.5-17 after using the proposed method instead of the local inversion with homogeneity assumption, and similarly in the prostate phantom experiment, the CNR improved from an average value of 1.6 to about 20.

  5. The AgMIP GRIDded Crop Modeling Initiative (AgGRID) and the Global Gridded Crop Model Intercomparison (GGCMI)

    NASA Technical Reports Server (NTRS)

    Elliott, Joshua; Muller, Christoff

    2015-01-01

    Climate change is a significant risk for agricultural production. Even under optimistic scenarios for climate mitigation action, present-day agricultural areas are likely to face significant increases in temperatures in the coming decades, in addition to changes in precipitation, cloud cover, and the frequency and duration of extreme heat, drought, and flood events (IPCC, 2013). These factors will affect the agricultural system at the global scale by impacting cultivation regimes, prices, trade, and food security (Nelson et al., 2014a). Global-scale evaluation of crop productivity is a major challenge for climate impact and adaptation assessment. Rigorous global assessments that are able to inform planning and policy will benefit from consistent use of models, input data, and assumptions across regions and time that use mutually agreed protocols designed by the modeling community. To ensure this consistency, large-scale assessments are typically performed on uniform spatial grids, with spatial resolution of typically 10 to 50 km, over specified time-periods. Many distinct crop models and model types have been applied on the global scale to assess productivity and climate impacts, often with very different results (Rosenzweig et al., 2014). These models are based to a large extent on field-scale crop process or ecosystems models and they typically require resolved data on weather, environmental, and farm management conditions that are lacking in many regions (Bondeau et al., 2007; Drewniak et al., 2013; Elliott et al., 2014b; Gueneau et al., 2012; Jones et al., 2003; Liu et al., 2007; M¨uller and Robertson, 2014; Van den Hoof et al., 2011;Waha et al., 2012; Xiong et al., 2014). Due to data limitations, the requirements of consistency, and the computational and practical limitations of running models on a large scale, a variety of simplifying assumptions must generally be made regarding prevailing management strategies on the grid scale in both the baseline and future periods. Implementation differences in these and other modeling choices contribute to significant variation among global-scale crop model assessments in addition to differences in crop model implementations that also cause large differences in site-specific crop modeling (Asseng et al., 2013; Bassu et al., 2014).

  6. Evaluation of a distributed catchment scale water balance model

    NASA Technical Reports Server (NTRS)

    Troch, Peter A.; Mancini, Marco; Paniconi, Claudio; Wood, Eric F.

    1993-01-01

    The validity of some of the simplifying assumptions in a conceptual water balance model is investigated by comparing simulation results from the conceptual model with simulation results from a three-dimensional physically based numerical model and with field observations. We examine, in particular, assumptions and simplifications related to water table dynamics, vertical soil moisture and pressure head distributions, and subsurface flow contributions to stream discharge. The conceptual model relies on a topographic index to predict saturation excess runoff and on Philip's infiltration equation to predict infiltration excess runoff. The numerical model solves the three-dimensional Richards equation describing flow in variably saturated porous media, and handles seepage face boundaries, infiltration excess and saturation excess runoff production, and soil driven and atmosphere driven surface fluxes. The study catchments (a 7.2 sq km catchment and a 0.64 sq km subcatchment) are located in the North Appalachian ridge and valley region of eastern Pennsylvania. Hydrologic data collected during the MACHYDRO 90 field experiment are used to calibrate the models and to evaluate simulation results. It is found that water table dynamics as predicted by the conceptual model are close to the observations in a shallow water well, and therefore that a linear relationship between a topographic index and the local water table depth is a reasonable assumption for catchment-scale modeling. However, the hydraulic equilibrium assumption is not valid for the upper 100 cm layer of the unsaturated zone, and a conceptual model that incorporates a root zone is suggested. Furthermore, theoretical subsurface flow characteristics from the conceptual model are found to be different from field observations, numerical simulation results, and theoretical baseflow recession characteristics based on Boussinesq's groundwater equation.
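
    The topographic index central to the conceptual model is ln(a / tan β), with a the specific upslope contributing area and β the local slope; the assumption under test is a linear relation between this index and local water table depth. A minimal sketch with made-up grid values and assumed parameters:

      # TOPMODEL-style topographic index ln(a / tan(beta)) and the linear
      # local water-table relation tested in the paper. All values made up.
      import numpy as np

      a = np.array([50.0, 200.0, 1500.0, 8000.0])  # upslope area / contour width
      slope = np.array([0.20, 0.10, 0.05, 0.01])   # tan(beta), local slope

      ti = np.log(a / slope)

      # Linear relation: depth_i = mean_depth - m * (TI_i - mean(TI));
      # wetter (high-TI) locations have shallower water tables.
      m, mean_depth = 0.4, 1.5                     # assumed parameters (m)
      depth = mean_depth - m * (ti - ti.mean())
      for t_i, z_i in zip(ti, depth):
          print(f"TI = {t_i:5.2f}  ->  water table depth = {z_i:5.2f} m")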

  7. Design and simulation of stratified probability digital receiver with application to the multipath communication

    NASA Technical Reports Server (NTRS)

    Deal, J. H.

    1975-01-01

    One approach to the problem of simplifying complex nonlinear filtering algorithms is through using stratified probability approximations where the continuous probability density functions of certain random variables are represented by discrete mass approximations. This technique is developed in this paper and used to simplify the filtering algorithms developed for the optimum receiver for signals corrupted by both additive and multiplicative noise.
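
    The core device, replacing a continuous probability density by discrete point masses, fits in a few lines. Here a standard normal is approximated by equal-probability strata with masses at the conditional stratum means; this is one simple stratification scheme, not necessarily the paper's:

      # Discrete mass approximation of a continuous density: replace a
      # standard normal by k equal-probability point masses located at the
      # conditional means of k quantile strata.
      import numpy as np
      from scipy.stats import norm

      k = 8
      edges = norm.ppf(np.linspace(0, 1, k + 1))
      # E[X | a < X < b] = (pdf(a) - pdf(b)) / (cdf(b) - cdf(a)); each
      # stratum has probability 1/k, so the denominator is 1/k.
      points = (norm.pdf(edges[:-1]) - norm.pdf(edges[1:])) * k
      weights = np.full(k, 1.0 / k)

      print("mass points:", np.round(points, 3))
      print("mean:", np.sum(weights * points))       # ~0 by symmetry
      print("var :", np.sum(weights * points**2))    # < 1 (strata shrink tails)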

  8. A Proposal for Testing Local Realism Without Using Assumptions Related to Hidden Variable States

    NASA Technical Reports Server (NTRS)

    Ryff, Luiz Carlos

    1996-01-01

    A feasible experiment is discussed which allows us to prove a Bell's theorem for two particles without using an inequality. The experiment could be used to test local realism against quantum mechanics without the introduction of additional assumptions related to hidden variables states. Only assumptions based on direct experimental observation are needed.

  9. Keeping Things Simple: Why the Human Development Index Should Not Diverge from Its Equal Weights Assumption

    ERIC Educational Resources Information Center

    Stapleton, Lee M.; Garrod, Guy D.

    2007-01-01

    Using a range of statistical criteria rooted in Information Theory we show that there is little justification for relaxing the equal weights assumption underlying the United Nation's Human Development Index (HDI) even if the true HDI diverges significantly from this assumption. Put differently, the additional model complexity that unequal weights…
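
    The calculation at issue is small enough to spell out: the classic HDI is an equal-weighted average of three dimension indices, and the question is whether unequal weights change anything worth the added complexity. A toy comparison with made-up index values and an arbitrary alternative weighting:

      # Equal-weights HDI (classic arithmetic-mean form) versus an unequal
      # weighting, for hypothetical countries. Index values are made up.
      countries = {
          "A": (0.85, 0.90, 0.70),  # (life expectancy, education, income)
          "B": (0.80, 0.75, 0.95),
          "C": (0.90, 0.70, 0.80),
      }
      equal = (1/3, 1/3, 1/3)
      unequal = (0.5, 0.3, 0.2)     # an arbitrary alternative weighting

      def hdi(idx, w):
          return sum(x * wi for x, wi in zip(idx, w))

      for name, idx in countries.items():
          print(name, round(hdi(idx, equal), 3), round(hdi(idx, unequal), 3))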

  10. A simplified model for glass formation

    NASA Technical Reports Server (NTRS)

    Uhlmann, D. R.; Onorato, P. I. K.; Scherer, G. W.

    1979-01-01

    A simplified model of glass formation based on the formal theory of transformation kinetics is presented, which describes the critical cooling rates implied by the occurrence of glassy or partly crystalline bodies. In addition, an approach based on the nose of the time-temperature-transformation (TTT) curve as an extremum in temperature and time has provided a relatively simple relation between the activation energy for viscous flow in the undercooled region and the temperature of the nose of the TTT curve. Using this relation together with the simplified model, it now seems possible to predict cooling rates using only the liquidus temperature, glass transition temperature, and heat of fusion.
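
    The nose-based estimate reduces to one line: if the TTT nose sits at temperature T_n and time t_n, the critical cooling rate is roughly R_c ≈ (T_L − T_n) / t_n, with T_L the liquidus temperature. A sketch with illustrative values, not data from the paper:

      # Nose-of-the-TTT-curve estimate of the critical cooling rate for
      # glass formation: R_c ~ (T_liquidus - T_nose) / t_nose.
      # Numbers below are illustrative.
      T_liquidus = 1300.0   # K
      T_nose = 1000.0       # K, temperature coordinate of the TTT nose
      t_nose = 5.0          # s, time coordinate of the TTT nose

      R_c = (T_liquidus - T_nose) / t_nose
      print(f"critical cooling rate ~ {R_c:.0f} K/s")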

  11. Classification with spatio-temporal interpixel class dependency contexts

    NASA Technical Reports Server (NTRS)

    Jeon, Byeungwoo; Landgrebe, David A.

    1992-01-01

    A contextual classifier which can utilize both spatial and temporal interpixel dependency contexts is investigated. After spatial and temporal neighbors are defined, a general form of maximum a posteriori spatiotemporal contextual classifier is derived. This contextual classifier is simplified under several assumptions. Joint prior probabilities of the classes of each pixel and its spatial neighbors are modeled by the Gibbs random field. The classification is performed in a recursive manner to allow a computationally efficient contextual classification. Experimental results with bitemporal TM data show significant improvement of classification accuracy over noncontextual pixelwise classifiers. This spatiotemporal contextual classifier should find use in many applications of remote sensing, especially when the classification accuracy is important.
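
    One common recursive scheme of this kind is iterated conditional modes (ICM) with a Potts-style Gibbs prior over the four spatial neighbors. The sketch below is a generic MRF illustration of that idea, not necessarily the authors' exact simplifications:

      # Iterated conditional modes (ICM) with a Potts prior: each pixel's
      # label is updated to maximize (pixelwise log-likelihood +
      # beta * number of agreeing 4-neighbors). Generic MRF illustration.
      import numpy as np

      rng = np.random.default_rng(4)
      H, W, K, beta = 32, 32, 2, 1.0
      truth = np.zeros((H, W), int)
      truth[:, W // 2:] = 1                            # two-class scene
      loglik = np.where(truth[..., None] == np.arange(K), 0.0, -1.2)
      loglik = loglik + 0.8 * rng.standard_normal(loglik.shape)  # noisy evidence

      labels = loglik.argmax(-1)                       # noncontextual start
      for _ in range(5):                               # ICM sweeps
          for i in range(H):
              for j in range(W):
                  score = loglik[i, j].copy()
                  for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                      ni, nj = i + di, j + dj
                      if 0 <= ni < H and 0 <= nj < W:
                          score[labels[ni, nj]] += beta
                  labels[i, j] = score.argmax()
      print("noncontextual errors:", (loglik.argmax(-1) != truth).sum())
      print("contextual errors   :", (labels != truth).sum())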

  12. Ionic transport in high-energy-density matter

    DOE PAGES

    Stanton, Liam G.; Murillo, Michael S.

    2016-04-08

    Ionic transport coefficients for dense plasmas have been numerically computed using an effective Boltzmann approach. Here, we developed a simplified effective potential approach that yields accurate fits for all of the relevant cross sections and collision integrals. These results have been validated with molecular-dynamics simulations for self-diffusion, interdiffusion, viscosity, and thermal conductivity. Molecular dynamics has also been used to examine the underlying assumptions of the Boltzmann approach through a categorization of behaviors of the velocity autocorrelation function in the Yukawa phase diagram. By using a velocity-dependent screening model, we examine the role of dynamical screening in transport. Implications of these results for Coulomb logarithm approaches are discussed.

  13. Assessment of historical masonry pillars reinforced by CFRP strips

    NASA Astrophysics Data System (ADS)

    Fedele, Roberto; Rosati, Giampaolo; Biolzi, Luigi; Cattaneo, Sara

    2014-10-01

    In this methodological study, the ultimate response of masonry pillars strengthened by externally bonded Carbon Fiber Reinforced Polymer (CFRP) was investigated. Historical bricks were derived from a XVII century rural building, whilst a high-strength mortar was utilized for the joints. The conventional experimental information, concerning the overall reaction force and relative displacements provided by "point" sensors (LVDTs and a clip gauge), was herein enriched with non-contact, full-field kinematic measurements provided by 2D Digital Image Correlation (2D DIC). The experimental information was critically compared with predictions provided by an advanced three-dimensional model, based on nonlinear finite elements under the simplifying assumption of perfect adhesion between the reinforcement and the support.

  14. Bridging automatic speech recognition and psycholinguistics: Extending Shortlist to an end-to-end model of human speech recognition (L)

    NASA Astrophysics Data System (ADS)

    Scharenborg, Odette; ten Bosch, Louis; Boves, Lou; Norris, Dennis

    2003-12-01

    This letter evaluates potential benefits of combining human speech recognition (HSR) and automatic speech recognition by building a joint model of an automatic phone recognizer (APR) and a computational model of HSR, viz., Shortlist [Norris, Cognition 52, 189-234 (1994)]. Experiments based on "real-life" speech highlight critical limitations posed by some of the simplifying assumptions made in models of human speech recognition. These limitations could be overcome by avoiding hard phone decisions at the output side of the APR, and by using a match between the input and the internal lexicon that flexibly copes with deviations from canonical phonemic representations.

  15. Managed care for Medicare: some considerations in designing effective information provision programs.

    PubMed

    Jayanti, R K

    2001-01-01

    Consumer information-processing theory provides a useful framework for policy makers concerned with regulating information provided by managed care organizations. This paper questions the assumptions that consumers are rational information processors and that providing more information is better. Consumer research demonstrates that when faced with an uncertain decision, consumers adopt simplifying strategies leading to sub-optimal choices. A discussion of how consumers process risk information, and of the effects of various informational formats on decision outcomes, is provided. Categorization theory is used to propose guidelines with regard to providing effective information to consumers choosing among competing managed care plans. Public policy implications borne out of consumer information-processing theory conclude the article.

  16. A modified Friedmann equation

    NASA Astrophysics Data System (ADS)

    Ambjørn, J.; Watabiki, Y.

    2017-12-01

    We recently formulated a model of the universe based on an underlying W3-symmetry. It allows the creation of the universe from nothing and the creation of baby universes and wormholes for spacetimes of dimension 2, 3, 4, 6 and 10. Here we show that the classical large time and large space limit of these universes is one of exponential fast expansion without the need of a cosmological constant. Under a number of simplifying assumptions, our model predicts that w = ‑1.2 in the case of four-dimensional spacetime. The possibility of obtaining a w-value less than ‑1 is linked to the ability of our model to create baby universes and wormholes.

  17. Towards realistic modelling of spectral line formation - lessons learnt from red giants

    NASA Astrophysics Data System (ADS)

    Lind, Karin

    2015-08-01

    Many decades of quantitative spectroscopic studies of red giants have revealed much about the formation histories and interlinks between the main components of the Galaxy and its satellites. Telescopes and instrumentation are now able to deliver high-resolution data of superb quality for large stellar samples and Galactic archaeology has entered a new era. At the same time, we have learnt how simplifying physical assumptions in the modelling of spectroscopic data can bias the interpretations, in particular one-dimensional homogeneity and local thermodynamic equilibrium (LTE). I will present lessons learnt so far from non-LTE spectral line formation in 3D radiation-hydrodynamic atmospheres of red giants, the smaller siblings of red supergiants.

  18. Droplets size evolution of dispersion in a stirred tank

    NASA Astrophysics Data System (ADS)

    Kysela, Bohus; Konfrst, Jiri; Chara, Zdenek; Sulc, Radek; Jasikova, Darina

    2018-06-01

    Dispersion of two immiscible liquids is commonly used in the chemical industry as well as in the metallurgical industry, e.g., in extraction processes. The governing property is the droplet size distribution. The droplet sizes are determined by the physical properties of both liquids and the flow properties inside a stirred tank. The first investigation stage is focused on in-situ droplet size measurement using image analysis and on optimizing the evaluation method to achieve maximal reproducibility of the results. The obtained experimental results are compared with a multiphase flow simulation based on the Euler-Euler approach combined with PBM (Population Balance Modelling). The population balance model was, in this specific case, simplified under the assumption of pure droplet breakage.
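
    A pure-breakage population balance can be written in a few lines on a discrete volume grid if each parent is assumed to split into two equal halves. The grid, rates, and initial condition below are assumptions for illustration:

      # Pure binary breakage population balance on a geometric volume grid:
      # a droplet of class i (volume v_i) breaks into two droplets of class
      # i-1 (volume v_i / 2). Rates and grid are assumed, for illustration.
      import numpy as np

      n_classes = 10
      v = 2.0 ** np.arange(n_classes)            # droplet volumes: 1, 2, 4, ...
      N = np.zeros(n_classes); N[-1] = 1000.0    # start with the largest class
      S = 0.05 * v ** (1.0 / 3.0)                # assumed size-dependent rate
      S[0] = 0.0                                 # smallest class cannot break

      dt, steps = 0.1, 2000
      for _ in range(steps):
          dN = -S * N                            # parents disappear...
          dN[:-1] += 2.0 * S[1:] * N[1:]         # ...yielding two half-volume children
          N += dt * dN

      print("total droplet count     :", round(N.sum(), 1))
      print("total volume (conserved):", round((v * N).sum(), 1))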

  19. Model-based estimation for dynamic cardiac studies using ECT.

    PubMed

    Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O

    1994-01-01

    The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.

  20. On firework blasts and qualitative parameter dependency.

    PubMed

    Zohdi, T I

    2016-01-01

    In this paper, a mathematical model is developed to qualitatively simulate the progressive time-evolution of a blast from a simple firework. Estimates are made for the blast radius that one can expect for a given amount of detonation energy and pyrotechnic display material. The model balances the released energy from the initial blast pulse with the subsequent kinetic energy and then computes the trajectory of the material under the influence of the drag from the surrounding air, gravity and possible buoyancy. Under certain simplifying assumptions, the model can be solved for analytically. The solution serves as a guide to identifying key parameters that control the evolving blast envelope. Three-dimensional examples are given.
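
    The described balance, blast energy converted to kinetic energy followed by flight under drag and gravity, can be sketched directly; every parameter value below is illustrative rather than taken from the paper:

      # Firework fragment trajectory sketch: launch speed from an assumed
      # share of blast energy, then forward-Euler integration with quadratic
      # drag and gravity. All parameter values are illustrative.
      import numpy as np

      E, n_frag, m = 5.0e4, 200, 0.01        # blast energy (J), fragments, mass (kg)
      v0 = np.sqrt(2.0 * (E / n_frag) / m)   # energy balance -> launch speed

      rho, Cd, A, g = 1.2, 0.5, 1e-4, 9.81   # air density, drag coeff, area, gravity
      pos = np.array([0.0, 50.0])            # launched at 50 m altitude
      vel = v0 * np.array([np.cos(0.5), np.sin(0.5)])

      dt = 1e-3
      while pos[1] > 0.0:
          drag = -0.5 * rho * Cd * A * np.linalg.norm(vel) * vel / m
          vel += dt * (drag + np.array([0.0, -g]))
          pos += dt * vel
      print(f"launch speed {v0:.1f} m/s, ground range {pos[0]:.1f} m")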

  1. Resonant behaviour of MHD waves on magnetic flux tubes. I - Connection formulae at the resonant surfaces. II - Absorption of sound waves by sunspots

    NASA Technical Reports Server (NTRS)

    Sakurai, Takashi; Goossens, Marcel; Hollweg, Joseph V.

    1991-01-01

    The present method of addressing the resonance problems that emerge in such MHD phenomena as the resonant absorption of waves at the Alfven resonance point avoids solving the fourth-order differential equation of dissipative MHD by recourse to connection formulae across the dissipation layer. In the second part of this investigation, the absorption of solar 5-min oscillations by sunspots is interpreted as the resonant absorption of sound waves by a magnetic cylinder. The absorption coefficient is evaluated (1) analytically, under certain simplifying assumptions, and (2) numerically, under more general conditions. The observed absorption coefficient magnitude is explained over suitable parameter ranges.

  2. Temperature Histories in Ceramic-Insulated Heat-Sink Nozzle

    NASA Technical Reports Server (NTRS)

    Ciepluch, Carl C.

    1960-01-01

    Temperature histories were calculated for a composite nozzle wall by a simplified numerical integration calculation procedure. These calculations indicated that there is a unique ratio of insulation and metal heat-sink thickness that will minimize total wall thickness for a given operating condition and required running time. The optimum insulation and metal thickness will vary throughout the nozzle as a result of the variation in heat-transfer rate. The use of low chamber pressure results in a significant increase in the maximum running time of a given weight nozzle. Experimentally measured wall temperatures were lower than those calculated. This was due in part to the assumption of one-dimensional or slab heat flow in the calculation procedure.

  3. On firework blasts and qualitative parameter dependency

    PubMed Central

    Zohdi, T. I.

    2016-01-01

    In this paper, a mathematical model is developed to qualitatively simulate the progressive time-evolution of a blast from a simple firework. Estimates are made for the blast radius that one can expect for a given amount of detonation energy and pyrotechnic display material. The model balances the released energy from the initial blast pulse with the subsequent kinetic energy and then computes the trajectory of the material under the influence of the drag from the surrounding air, gravity and possible buoyancy. Under certain simplifying assumptions, the model can be solved for analytically. The solution serves as a guide to identifying key parameters that control the evolving blast envelope. Three-dimensional examples are given. PMID:26997903

  4. Parachute dynamics and stability analysis. [using nonlinear differential equations of motion

    NASA Technical Reports Server (NTRS)

    Ibrahim, S. K.; Engdahl, R. A.

    1974-01-01

    The nonlinear differential equations of motion for a general parachute-riser-payload system are developed. The resulting math model is then applied for analyzing the descent dynamics and stability characteristics of both the drogue stabilization phase and the main descent phase of the space shuttle solid rocket booster (SRB) recovery system. The formulation of the problem is characterized by a minimum number of simplifying assumptions and full application of state-of-the-art parachute technology. The parachute suspension lines and the parachute risers can be modeled as elastic elements, and the whole system may be subjected to specified wind and gust profiles in order to assess their effects on the stability of the recovery system.

  5. CMG-Augmented Control of a Hovering VTOL Platform

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Moerder, D. D.

    2007-01-01

    This paper describes how Control Moment Gyroscopes (CMGs) can be used for stability augmentation of a thrust vectoring system for a generic Vertical Take-Off and Landing platform. The response characteristics of the platform which uses only thrust vectoring, and of a second configuration which includes a single-gimbal CMG array, are simulated and compared for hovering flight while subject to severe air turbulence. Simulation results demonstrate the effectiveness of a CMG array in its ability to significantly reduce the agility requirement on the thrust vectoring system. Although simplifying physical assumptions were made for a generic CMG configuration, the numerical results suggest that reasonably sized CMGs will likely be sufficient for a small hovering vehicle.

  6. The risk of collapse in abandoned mine sites: the issue of data uncertainty

    NASA Astrophysics Data System (ADS)

    Longoni, Laura; Papini, Monica; Brambilla, Davide; Arosio, Diego; Zanzi, Luigi

    2016-04-01

    Ground collapses over abandoned underground mines constitute an emerging environmental risk worldwide. The high risk associated with subsurface voids, together with a lack of knowledge of the geometric and geomechanical features of mining areas, makes abandoned underground mines one of the current challenges for countries with a long mining history. In this study, a stability analysis of the Montevecchia marl mine is performed in order to validate a general approach that takes into account the poor local information and the variability of the input data. The collapse risk was evaluated through a numerical approach that, starting with some simplifying assumptions, is able to provide an overview of the collapse probability. The final result is an easily accessible, transparent summary graph that shows the collapse probability. This approach may be useful for public administrators called upon to manage this environmental risk. The approach tries to simplify this complex problem in order to achieve a rough risk assessment, but, since it relies on just a small amount of information, any final user should be aware that a comprehensive and detailed risk scenario can be generated only through more exhaustive investigations.
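
    One transparent way to turn sparse, variable inputs into a collapse probability is Monte Carlo sampling of a factor of safety. The sketch below is a generic illustration of that idea with assumed distributions, not the authors' model of the Montevecchia mine:

      # Generic Monte Carlo collapse-probability sketch: sample uncertain
      # pillar strength and stress, count how often factor of safety < 1.
      # Distributions are illustrative, not site data.
      import numpy as np

      rng = np.random.default_rng(5)
      n = 100_000
      strength = rng.lognormal(mean=np.log(12.0), sigma=0.3, size=n)  # MPa
      stress = rng.lognormal(mean=np.log(8.0), sigma=0.4, size=n)     # MPa

      fs = strength / stress
      p_collapse = (fs < 1.0).mean()
      print(f"estimated collapse probability: {p_collapse:.3f}")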

  7. Glistening-region model for multipath studies

    NASA Astrophysics Data System (ADS)

    Groves, Gordon W.; Chow, Winston C.

    1998-07-01

    The goal is to achieve a model of radar sea reflection with improved fidelity that is amenable to practical implementation. The geometry of reflection from a wavy surface is formulated. The sea surface is divided into two components: the smooth 'chop', consisting of the longer wavelengths, and the 'roughness' of the short wavelengths. Ordinary geometric reflection from the chop surface is broadened by the roughness. This same representation serves both for forward scatter and backscatter (sea clutter). The 'Road-to-Happiness' approximation, in which the mean sea surface is assumed cylindrical, simplifies the reflection geometry for low-elevation targets. The effect of surface roughness is assumed to make the sea reflection coefficient depend on the 'Deviation Angle' between the specular and the scattering directions. The 'specular' direction is that into which energy would be reflected by a perfectly smooth facet. Assuming that the ocean waves are linear and random allows use of Gaussian statistics, greatly simplifying the formulation by allowing representation of the sea chop by three parameters. An approximation of 'low waves' and retention of the sea-chop slope components only through second order provide further simplification. The simplifying assumptions make it possible to take the predicted 2D ocean wave spectrum into account in the calculation of sea-surface radar reflectivity, and to provide algorithms in support of an operational system for target tracking in the presence of multipath. The product will be of use in simulation studies to evaluate trade-offs in alternative tracking schemes, and will form the basis of a tactical system for ship defense against low flyers.

  8. Computational reacting gas dynamics

    NASA Technical Reports Server (NTRS)

    Lam, S. H.

    1993-01-01

    In the study of high-speed flows at high altitudes, such as those encountered by re-entry spacecraft, the interaction of chemical reactions and other non-equilibrium processes in the flow field with the gas dynamics is crucial. Generally speaking, problems of this level of complexity must resort to numerical methods for solution, using sophisticated computational fluid dynamics (CFD) codes. The difficulties introduced by reacting gas dynamics can be classified under three distinct headings: (1) the usually inadequate knowledge of the reaction rate coefficients in the non-equilibrium reaction system; (2) the vastly larger number of unknowns involved in the computation and the expected stiffness of the equations; and (3) the interpretation of the detailed reacting CFD numerical results. The research performed accepts the premise that reacting flows of practical interest in the future will in general be too complex or 'intractable' for traditional analytical developments. The power of modern computers must be exploited. However, instead of focusing solely on the construction of numerical solutions of full-model equations, attention is also directed to the 'derivation' of the simplified model from the given full model. In other words, the present research aims to use computation to do tasks which have traditionally been done by skilled theoreticians: to reduce an originally complex full-model system into an approximate but otherwise equivalent simplified model system. The tacit assumption is that once the appropriate simplified model is derived, the interpretation of the detailed reacting CFD numerical results will become much easier. The approach of the research is called computational singular perturbation (CSP).

  9. Future habitat suitability for coral reef ecosystems under global warming and ocean acidification

    PubMed Central

    Couce, Elena; Ridgwell, Andy; Hendy, Erica J

    2013-01-01

    Rising atmospheric CO2 concentrations are placing spatially divergent stresses on the world's tropical coral reefs through increasing ocean surface temperatures and ocean acidification. We show how these two stressors combine to alter the global habitat suitability for shallow coral reef ecosystems, using statistical Bioclimatic Envelope Models rather than basing projections on any a priori assumptions of physiological tolerances or fixed thresholds. We apply two different modeling approaches (Maximum Entropy and Boosted Regression Trees) with two levels of complexity (one a simplified and reduced environmental variable version of the other). Our models project a marked temperature-driven decline in habitat suitability for many of the most significant and bio-diverse tropical coral regions, particularly in the central Indo-Pacific. This is accompanied by a temperature-driven poleward range expansion of favorable conditions accelerating up to 40–70 km per decade by 2070. We find that ocean acidification is less influential for determining future habitat suitability than warming, and its deleterious effects are centered evenly in both hemispheres between 5° and 20° latitude. Contrary to expectations, the combined impact of ocean surface temperature rise and acidification leads to little, if any, degradation in future habitat suitability across much of the Atlantic and areas currently considered ‘marginal’ for tropical corals, such as the eastern Equatorial Pacific. These results are consistent with fossil evidence of range expansions during past warm periods. In addition, the simplified models are particularly sensitive to short-term temperature variations and their projections correlate well with reported locations of bleaching events. Our approach offers new insights into the relative impact of two global environmental pressures associated with rising atmospheric CO2 on potential future habitats, but greater understanding of past and current controls on coral reef ecosystems is essential to their conservation and management under a changing climate. PMID:23893550

  10. Future habitat suitability for coral reef ecosystems under global warming and ocean acidification.

    PubMed

    Couce, Elena; Ridgwell, Andy; Hendy, Erica J

    2013-12-01

    Rising atmospheric CO2 concentrations are placing spatially divergent stresses on the world's tropical coral reefs through increasing ocean surface temperatures and ocean acidification. We show how these two stressors combine to alter the global habitat suitability for shallow coral reef ecosystems, using statistical Bioclimatic Envelope Models rather than basing projections on any a priori assumptions of physiological tolerances or fixed thresholds. We apply two different modeling approaches (Maximum Entropy and Boosted Regression Trees) with two levels of complexity (one a simplified and reduced environmental variable version of the other). Our models project a marked temperature-driven decline in habitat suitability for many of the most significant and bio-diverse tropical coral regions, particularly in the central Indo-Pacific. This is accompanied by a temperature-driven poleward range expansion of favorable conditions accelerating up to 40-70 km per decade by 2070. We find that ocean acidification is less influential for determining future habitat suitability than warming, and its deleterious effects are centered evenly in both hemispheres between 5° and 20° latitude. Contrary to expectations, the combined impact of ocean surface temperature rise and acidification leads to little, if any, degradation in future habitat suitability across much of the Atlantic and areas currently considered 'marginal' for tropical corals, such as the eastern Equatorial Pacific. These results are consistent with fossil evidence of range expansions during past warm periods. In addition, the simplified models are particularly sensitive to short-term temperature variations and their projections correlate well with reported locations of bleaching events. Our approach offers new insights into the relative impact of two global environmental pressures associated with rising atmospheric CO2 on potential future habitats, but greater understanding of past and current controls on coral reef ecosystems is essential to their conservation and management under a changing climate. © 2013 John Wiley & Sons Ltd.

  11. A simplified methylcoenzyme M methylreductase assay with artificial electron donors and different preparations of component C from Methanobacterium thermoautotrophicum delta H.

    PubMed Central

    Hartzell, P L; Escalante-Semerena, J C; Bobik, T A; Wolfe, R S

    1988-01-01

    Different preparations of the methylreductase were tested in a simplified methylcoenzyme M methylreductase assay with artificial electron donors under a nitrogen atmosphere. ATP and Mg2+ stimulated the reaction. Tris(2,2'-bipyridine)ruthenium (II), chromous chloride, chromous acetate, titanium III citrate, 2,8-diaminoacridine, formamidinesulfinic acid, cob(I)alamin (B12s), and dithiothreitol were tested as electron donors; the most effective donor was titanium III citrate. Methylreductase (component C) was prepared by 80% ammonium sulfate precipitation, 70% ammonium sulfate precipitation, phenyl-Sepharose chromatography, Mono Q column chromatography, DEAE-cellulose column chromatography, or tetrahydromethanopterin affinity column chromatography. Methylreductase preparations which were able to catalyze methanogenesis in the simplified reaction mixture contained contaminating proteins. Homogeneous component C obtained from a tetrahydromethanopterin affinity column was not active in the simplified assay but was active in a methylreductase assay that contained additional protein components. PMID:3372480

  12. Is There a Critical Distance for Fickian Transport? - a Statistical Approach to Sub-Fickian Transport Modelling in Porous Media

    NASA Astrophysics Data System (ADS)

    Most, S.; Nowak, W.; Bijeljic, B.

    2014-12-01

    Transport processes in porous media are frequently simulated as particle movement, which can be formulated as a stochastic process of particle position increments. At the pore scale, the geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Recent experimental data suggest that we have not yet reached the end of the need to generalize, because particle increments show statistical dependency beyond linear correlation and over many time steps. The goal of this work is to better understand the validity regions of commonly made assumptions. We investigate after what transport distances we can observe: (i) a statistical dependence between increments that can be modelled as an order-k Markov process boiling down to order 1 -- this would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks would start; (ii) a bivariate statistical dependence that simplifies to a multi-Gaussian dependence based on simple linear correlation (the validity of correlated PTRW); and (iii) complete absence of statistical dependence (the validity of classical PTRW/CTRW). The approach is to derive a statistical model for pore-scale transport from a powerful experimental data set via copula analysis. The model is formulated as a non-Gaussian, mutually dependent Markov process of higher order, which allows us to investigate the validity ranges of simpler models.
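
    The distinction driving the analysis, increments that are linearly uncorrelated yet statistically dependent, is easy to demonstrate with synthetic data: signed increments can show near-zero lag-1 correlation while their magnitudes stay strongly dependent.

      # Increments can be linearly uncorrelated yet statistically dependent:
      # magnitudes cluster (volatility-style dependence) while the signed
      # increments show ~zero lag-1 correlation. Synthetic data only.
      import numpy as np

      rng = np.random.default_rng(6)
      n = 20_000
      scale = np.empty(n); scale[0] = 1.0
      for t in range(1, n):                    # slowly varying magnitude
          scale[t] = 0.98 * scale[t - 1] + 0.02 * rng.exponential(1.0)
      incr = scale * rng.standard_normal(n)    # signed increments

      corr = lambda x, y: np.corrcoef(x, y)[0, 1]
      print("lag-1 corr of increments  :", round(corr(incr[:-1], incr[1:]), 3))
      print("lag-1 corr of |increments|:",
            round(corr(abs(incr[:-1]), abs(incr[1:])), 3))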

  13. On the evolution of misunderstandings about evolutionary psychology.

    PubMed

    Young, J; Persell, R

    2000-04-01

    Some of the controversy surrounding evolutionary explanations of human behavior may be due to cognitive information-processing patterns that are themselves the result of evolutionary processes. Two such patterns are (1) the tendency to oversimplify information so as to reduce demand on cognitive resources and (2) our strong desire to generate predictability and stability from perceptions of the external world. For example, research on social stereotyping has found that people tend to focus automatically on simplified social-categorical information, to use such information when deciding how to behave, and to rely on such information even in the face of contradictory evidence. Similarly, an undying debate over nature vs. nurture is shaped by various data-reduction strategies that frequently oversimplify, and thus distort, the intent of the supporting arguments. This debate is also often marked by an assumption that either the nature or the nurture domain may be justifiably excluded at an explanatory level because one domain appears to operate in a sufficiently stable and predictable way for a particular argument. As a result, critiques inveighed against evolutionary explanations of behavior often incorporate simplified--and erroneous--assumptions about either the mechanics of how evolution operates or the inevitable implications of evolution for understanding human behavior. The influences of these tendencies are applied to a discussion of the heritability of behavioral characteristics. It is suggested that the common view that Mendelian genetics can explain the heritability of complex behaviors, with a one-gene-one-trait process, is misguided. Complex behaviors are undoubtedly a product of a more complex interaction between genes and environment, ensuring that both nature and nurture must be accommodated in a yet-to-be-developed post-Mendelian model of genetic influence. As a result, current public perceptions of evolutionary explanations of behavior are handicapped by the lack of clear articulation of the relationship between inherited genes and manifest behavior.

  14. Comparison of chlorine and ammonia concentration field trial data with calculated results from a Gaussian atmospheric transport and dispersion model.

    PubMed

    Bauer, Timothy J

    2013-06-15

    The Jack Rabbit Test Program was sponsored in April and May 2010 by the Department of Homeland Security Transportation Security Administration to generate source data for large releases of chlorine and ammonia from transport tanks. In addition to a variety of data types measured at the release location, concentration versus time data was measured using sensors at distances up to 500 m from the tank. Release data were used to create accurate representations of the vapor flux versus time for the ten releases. This study was conducted to determine the importance of source terms and meteorological conditions in predicting downwind concentrations and the accuracy that can be obtained in those predictions. Each source representation was entered into an atmospheric transport and dispersion model using simplifying assumptions regarding the source characterization and meteorological conditions, and statistics for cloud duration and concentration at the sensor locations were calculated. A detailed characterization for one of the chlorine releases predicted 37% of concentration values within a factor of two, but cannot be considered representative of all the trials. Predictions of toxic effects at 200 m are relevant to incidents involving 1-ton chlorine tanks commonly used in parts of the United States and internationally. Published by Elsevier B.V.
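
    In its simplest steady-state form, the model class used here reduces to the Gaussian plume equation: concentration scales as Q / (2π σ_y σ_z u) with Gaussian falloff crosswind and vertically, plus a ground-reflection term. The sketch below uses assumed Briggs-style dispersion fits and made-up source values, not the study's configuration:

      # Steady-state Gaussian plume sketch: ground-level centerline
      # concentration of a continuous release with ground reflection.
      # Source rate, wind, and dispersion coefficients are assumed values.
      import numpy as np

      Q = 10.0   # source rate, kg/s (assumed)
      u = 3.0    # wind speed, m/s (assumed)
      H = 1.0    # effective release height, m (assumed)

      def conc(x, y=0.0, z=0.0):
          sig_y = 0.08 * x / np.sqrt(1 + 0.0001 * x)  # rural-type fits (assumed)
          sig_z = 0.06 * x / np.sqrt(1 + 0.0015 * x)
          cross = np.exp(-y**2 / (2 * sig_y**2))
          vert = (np.exp(-(z - H)**2 / (2 * sig_z**2))
                  + np.exp(-(z + H)**2 / (2 * sig_z**2)))  # ground reflection
          return Q / (2 * np.pi * u * sig_y * sig_z) * cross * vert

      for x in (100.0, 200.0, 500.0):
          print(f"x = {x:4.0f} m: C ~ {conc(x):.3e} kg/m^3")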

  15. Low Thrust Cis-Lunar Transfers Using a 40 kW-Class Solar Electric Propulsion Spacecraft

    NASA Technical Reports Server (NTRS)

    Mcguire, Melissa L.; Burke, Laura M.; Mccarty, Steven L.; Hack, Kurt J.; Whitley, Ryan J.; Davis, Diane C.; Ocampo, Cesar

    2017-01-01

    This paper presents trajectory analysis of a representative low-thrust, high-power Solar Electric Propulsion (SEP) vehicle moving a mass around cis-lunar space, with 20 to 40 kW of power delivered to the Electric Propulsion (EP) system. These cis-lunar transfers depart from a selected Near Rectilinear Halo Orbit (NRHO) and target other cis-lunar orbits. The NRHO cannot be characterized in the classical two-body dynamics more familiar to the human spaceflight community, and the use of low-thrust orbit transfers presents unique analysis challenges. Among the target orbit destinations documented in this paper are transfers between a Southern and a Northern NRHO, transfers between the NRHO and a Distant Retrograde Orbit (DRO), and a transfer between the NRHO and two different Earth-Moon Lagrange Point 2 (EML2) halo orbits. Because many different NRHOs and EML2 halo orbits exist, simplifying assumptions rely on previous analysis of orbits that meet current abort and communication requirements for human mission planning. The sensitivities of these low-thrust transfers to EP system power are investigated. Additionally, the impact of the thrust-to-weight ratio of these low-thrust SEP systems and the ability to transit between these unique orbits are investigated.

  16. Analytical Methods of Decoupling the Automotive Engine Torque Roll Axis

    NASA Astrophysics Data System (ADS)

    JEONG, TAESEOK; SINGH, RAJENDRA

    2000-06-01

    This paper analytically examines the multi-dimensional mounting schemes of an automotive engine-gearbox system when excited by oscillating torques. In particular, the issue of torque roll axis decoupling is analyzed in significant detail since it is poorly understood. New dynamic decoupling axioms are presented and compared with the conventional elastic axis mounting and focalization methods. A linear time-invariant system is assumed, along with proportional damping. Only rigid-body modes of the powertrain are considered and the chassis elements are assumed to be rigid. Several simplified physical systems are considered and new closed-form solutions for symmetric and asymmetric engine-mounting systems are developed. These clearly explain the design concepts for the 4-point mounting scheme. Our analytical solutions match the existing design formulations that are only applicable to symmetric geometries. Spectra for all six rigid-body motions are predicted using the alternate decoupling methods and the closed-form solutions are verified. Also, our method is validated by comparing modal solutions with prior experimental and analytical studies. Parametric design studies are carried out to illustrate the methodology. Chief contributions of this research include the development of new or refined analytical models and closed-form solutions along with improved design strategies for torque roll axis decoupling.
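
    Decoupling of this kind can be checked numerically from the undamped rigid-body eigenproblem K v = w^2 M v: a decoupled torque roll axis shows up as a mode dominated by the roll coordinate. The sketch below uses an assumed 6-DOF mass and stiffness matrix, not the paper's closed-form solutions; in practice K is assembled from mount locations and rates.

      import numpy as np
      from scipy.linalg import eigh

      # Hypothetical powertrain: translations x, y, z then rotations about x, y, z.
      M = np.diag([180.0, 180.0, 180.0, 8.0, 12.0, 10.0])  # kg and kg*m^2

      K = np.diag([2.0e5, 2.5e5, 3.0e5, 9.0e3, 1.4e4, 1.2e4])  # N/m, N*m/rad
      K[1, 3] = K[3, 1] = 4.0e3  # assumed y-translation/roll coupling term

      w2, modes = eigh(K, M)                 # generalized eigenproblem
      freqs_hz = np.sqrt(w2) / (2.0 * np.pi)

      # With consistent scaling, a decoupled roll mode is dominated by DOF 3.
      for f, v in zip(freqs_hz, modes.T):
          print(f"{f:6.2f} Hz  dominant DOF: {np.argmax(np.abs(v))}")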

  17. Stress intensity factors in two bonded elastic layers containing cracks perpendicular to and on the interface. I Analysis. II - Solution and results

    NASA Technical Reports Server (NTRS)

    Lu, M.-C.; Erdogan, F.

    1983-01-01

    The basic crack problem which is essential for the study of subcritical crack propagation and fracture of layered structural materials is considered. Because of the apparent analytical difficulties, the problem is idealized as one of plane strain or plane stress. An additional simplifying assumption is made by restricting the formulation of the problem to crack geometries and loading conditions which have a plane of symmetry perpendicular to the interface. The general problem is formulated in terms of a coupled system of four integral equations. For each relevant crack configuration of practical interest, the singular behavior of the solution near and at the ends and points of intersection of the cracks is investigated and the related characteristic equations are obtained. The edge crack terminating at and crossing the interface, the T-shaped crack consisting of a broken layer and a delamination crack, the cross-shaped crack which consists of a delamination crack intersecting a crack which is perpendicular to the interface, and a delamination crack initiating from a stress-free boundary of the bonded layers are some of the practical crack geometries considered. Previously announced in STAR as N80-18428 and N80-18429

  18. A theoretical study of the acoustic impedance of orifices in the presence of a steady grazing flow

    NASA Technical Reports Server (NTRS)

    Rice, E. J.

    1976-01-01

    An analysis of the oscillatory fluid flow in the vicinity of a circular orifice with a steady grazing flow is presented. The study is similar to that of Hersh and Rogers but with the addition of the grazing flow. Starting from the momentum and continuity equations, a considerably simplified system of partial differential equations is developed with the assumption that the flow can be described by an oscillatory motion superimposed upon the known steady flow. The equations are seen to be linear in the region where the grazing flow effects are dominant, and a solution and the resulting orifice impedance are presented for this region. The nonlinearity appears to be unimportant for the usual conditions found in aircraft noise suppressors. Some preliminary conclusions of the study are that orifice resistance is directly proportional to grazing flow velocity (known previously from experimental data) and that the orifice inductive (mass reactance) end correction is not a function of grazing flow. This latter conclusion is contrary to the widely held notion that grazing flow removes the effect of the orifice inductive end correction. This conclusion also implies that the experimentally observed total inductance reduction with grazing flow might be in the flow within the orifice rather than in the end correction.

  19. Electronic Cigarettes and Indoor Air Quality: A Simple Approach to Modeling Potential Bystander Exposures to Nicotine

    PubMed Central

    Colard, Stéphane; O’Connell, Grant; Verron, Thomas; Cahours, Xavier; Pritchard, John D.

    2014-01-01

    There has been rapid growth in the use of electronic cigarettes (“vaping”) in Europe, North America and elsewhere. With such increased prevalence, there is currently a debate on whether the aerosol exhaled following the use of e-cigarettes has implications for the quality of air breathed by bystanders. Conducting chemical analysis of the indoor environment can be costly and resource intensive, limiting the number of studies which can be conducted. However, this can be modelled reasonably accurately based on empirical emissions data and using some basic assumptions. Here, we present a simplified model, based on physical principles, which considers aerosol propagation, dilution and extraction to determine the potential contribution of a single puff from an e-cigarette to indoor air. From this, it was then possible to simulate the cumulative effect of vaping over time. The model was applied to a virtual, but plausible, scenario considering an e-cigarette user and a non-user working in the same office space. The model was also used to reproduce published experimental studies and showed good agreement with the published values of indoor air nicotine concentration. With some additional refinements, such an approach may be a cost-effective and rapid way of assessing the potential exposure of bystanders to exhaled e-cigarette aerosol constituents. PMID:25547398
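
    The dilution-and-extraction reasoning lends itself to a one-box, well-mixed room model. The sketch below integrates cumulative puff emissions against first-order air-exchange losses; the room volume, air-change rate, and per-puff nicotine mass are illustrative assumptions, not the parameters of the published model.

      import numpy as np

      V = 50.0          # room volume, m^3 (assumed office)
      ach = 2.0         # air changes per hour (assumed ventilation)
      m_puff = 5.0e-9   # exhaled nicotine mass per puff, kg (illustrative)
      puffs = np.arange(600.0, 8 * 3600.0, 600.0)  # a puff every 10 min, 8 h

      dt = 1.0
      t = np.arange(0.0, 8 * 3600.0, dt)
      c = np.zeros_like(t)
      k = ach / 3600.0  # first-order removal rate, 1/s

      for i in range(1, t.size):
          # Explicit Euler step of dC/dt = -k*C, with instantaneous puffs.
          c[i] = c[i - 1] - k * c[i - 1] * dt
          if np.any(np.isclose(t[i], puffs)):
              c[i] += m_puff / V
      print(f"peak {c.max()*1e9:.3f} ug/m^3, final {c[-1]*1e9:.3f} ug/m^3")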

  20. A vortex model for forces and moments on low-aspect-ratio wings in side-slip with experimental validation

    PubMed Central

    DeVoria, Adam C.

    2017-01-01

    This paper studies low-aspect-ratio (AR) rectangular wings at high incidence and in side-slip. The main objective is to incorporate the effects of high angle of attack and side-slip into a simplified vortex model for the forces and moments. Experiments are also performed and are used to validate assumptions made in the model. The model asymptotes to the potential flow result of classical aerodynamics for an infinite aspect ratio. The AR → 0 limit of a rectangular wing is considered with slender body theory, where the side-edge vortices merge into a vortex doublet. Hence, the velocity fields transition from being dominated by a spanwise vorticity monopole (AR ≫ 1) to a streamwise vorticity dipole (AR ∼ 1). We theoretically derive a spanwise loading distribution that is parabolic instead of elliptic, and this physically represents the additional circulation around the wing that is associated with reattached flow. This is a fundamental feature of wings with a broad-facing leading edge. The experimental measurements of the spanwise circulation closely approximate a parabolic distribution. The vortex model yields very agreeable comparison with direct measurement of the lift and drag, and the roll moment prediction is acceptable for AR ≤ 1 prior to the roll stall angle and up to side-slip angles of 20°. PMID:28293139
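
    The contrast between elliptic and parabolic loading is easy to quantify: with the same root circulation G0 and span b, the elliptic distribution integrates to (pi/4) G0 b and the parabolic one to (2/3) G0 b. The numerical check below is a generic comparison under assumed values, not the paper's vortex model.

      import numpy as np

      b, gamma0 = 1.0, 2.0                  # span (m), root circulation (m^2/s)
      y = np.linspace(-b / 2, b / 2, 2001)
      dy = y[1] - y[0]

      elliptic = gamma0 * np.sqrt(1.0 - (2.0 * y / b) ** 2)
      parabolic = gamma0 * (1.0 - (2.0 * y / b) ** 2)

      # Total circulation (proportional to lift at fixed density and speed).
      print(elliptic.sum() * dy, gamma0 * b * np.pi / 4.0)   # numeric vs exact
      print(parabolic.sum() * dy, gamma0 * b * 2.0 / 3.0)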

  1. Algebraic aspects of the driven dynamics in the density operator and correlation functions calculation for multi-level open quantum systems

    NASA Astrophysics Data System (ADS)

    Bogolubov, Nikolai N.; Soldatov, Andrey V.

    2017-12-01

    Exact and approximate master equations were derived by the projection operator method for the reduced statistical operator of a multi-level quantum system with a finite number N of quantum eigenstates interacting with arbitrary external classical fields and a dissipative environment simultaneously. It was shown that the structure of these equations can be simplified significantly if the free Hamiltonian driven dynamics of an arbitrary quantum multi-level system under the influence of the external driving fields, as well as its Markovian and non-Markovian evolution stipulated by the interaction with the environment, are described in terms of the SU(N) algebra representation. As a consequence, efficient numerical methods can be developed and employed to analyze these master equations for real problems in various fields of theoretical and applied physics. It was also shown that literally the same master equations hold not only for the reduced density operator but also for arbitrary nonequilibrium multi-time correlation functions, under the sole assumption that the system and the environment are uncorrelated at some initial moment of time. A calculational scheme was proposed to account for these lost correlations in a regular perturbative way, thus providing additional computable terms to the corresponding master equations for the correlation functions.

  2. An Overview of Modifications Applied to a Turbulence Response Analysis Method for Flexible Aircraft Configurations

    NASA Technical Reports Server (NTRS)

    Funk, Christie J.

    2013-01-01

    A software program and associated methodology to study gust loading on aircraft exists for a classification of geometrically simplified flexible configurations. This program consists of a simple aircraft response model with two rigid and three flexible symmetric degrees of freedom and allows for the calculation of various airplane responses due to a discrete one-minus-cosine gust as well as continuous turbulence. Simplifications, assumptions, and opportunities for potential improvements pertaining to the existing software program are first identified, then a revised version of the original software tool is developed with improved methodology to include more complex geometries, additional excitation cases, and output data so as to provide a more useful and accurate tool for gust load analysis. Revisions are made in the categories of aircraft geometry, computation of aerodynamic forces and moments, and implementation of horizontal tail mode shapes. In order to improve the original software program to enhance usefulness, a wing control surface and a horizontal tail control surface are added, an extended application of the discrete one-minus-cosine gust input is employed, a supplemental continuous turbulence spectrum is implemented, and a capability to animate the total vehicle deformation response to gust inputs is included. These revisions and enhancements are implemented and an analysis of the results is used to validate the modifications.
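
    The discrete one-minus-cosine gust mentioned above has a standard closed form, w(t) = (w_max/2)(1 - cos(2*pi*t/T_g)) for 0 <= t <= T_g and zero elsewhere. A minimal sketch with assumed amplitude and gust duration:

      import numpy as np

      def one_minus_cosine_gust(t, w_max, t_g):
          """Discrete gust velocity profile; zero outside [0, t_g]."""
          t = np.asarray(t, dtype=float)
          w = 0.5 * w_max * (1.0 - np.cos(2.0 * np.pi * t / t_g))
          return np.where((t >= 0.0) & (t <= t_g), w, 0.0)

      # Assumed 15 m/s peak gust over a 1 s gradient interval.
      print(one_minus_cosine_gust(np.linspace(-0.5, 1.5, 5), 15.0, 1.0))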

  3. Searches for supersymmetry with the ATLAS detector using final states with two leptons and missing transverse momentum in √s = 7 TeV proton–proton collisions

    DOE PAGES

    Aad, G.; Abbott, B.; Abdallah, J.; ...

    2012-02-03

    Results of three searches are presented for the production of supersymmetric particles decaying into final states with missing transverse momentum and exactly two isolated leptons, e or μ. The analysis uses a data sample collected during the first half of 2011 that corresponds to a total integrated luminosity of 1 fb⁻¹ of √s = 7 TeV proton–proton collisions recorded with the ATLAS detector at the Large Hadron Collider. Opposite-sign and same-sign dilepton events are separately studied, with no deviations from the Standard Model expectation observed. Additionally, in opposite-sign events, a search is made for an excess of same-flavour over different-flavour lepton pairs. Effective production cross sections in excess of 9.9 fb for opposite-sign events containing supersymmetric particles with missing transverse momentum greater than 250 GeV are excluded at 95% CL. For same-sign events containing supersymmetric particles with missing transverse momentum greater than 100 GeV, effective production cross sections in excess of 14.8 fb are excluded at 95% CL. The latter limit is interpreted in a simplified electroweak gaugino production model excluding chargino masses up to 200 GeV, under the assumption that slepton decay is dominant.

  4. RADTRAD: A simplified model for RADionuclide Transport and Removal And Dose estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humphreys, S.L.; Miller, L.A.; Monroe, D.K.

    1998-04-01

    This report documents the RADTRAD computer code developed for the U.S. Nuclear Regulatory Commission (NRC) Office of Nuclear Reactor Regulation (NRR) to estimate transport and removal of radionuclides and dose at selected receptors. The document includes a users' guide to the code, a description of the technical basis for the code, the quality assurance and code acceptance testing documentation, and a programmers' guide. The RADTRAD code can be used to estimate the containment release using either the NRC TID-14844 or NUREG-1465 source terms and assumptions, or a user-specified table. In addition, the code can account for a reduction in the quantity of radioactive material due to containment sprays, natural deposition, filters, and other natural and engineered safety features. The RADTRAD code uses a combination of tables and/or numerical models of source term reduction phenomena to determine the time-dependent dose at user-specified locations for a given accident scenario. The code system also provides the inventory, decay chain, and dose conversion factor tables needed for the dose calculation. The RADTRAD code can be used to assess occupational radiation exposures, typically in the control room; to estimate site boundary doses; and to estimate dose attenuation due to modification of a facility or accident sequence.

  5. Efficiencies of power plants, quasi-static models and the geometric-mean temperature

    NASA Astrophysics Data System (ADS)

    Johal, Ramandeep S.

    2017-02-01

    Observed efficiencies of industrial power plants are often approximated by the square-root formula: 1 − √(T−/T+), where T+ (T−) is the highest (lowest) temperature achieved in the plant. This expression can be derived within finite-time thermodynamics, or by entropy generation minimization, based on finite rates for the processes. In these analyses, a closely related quantity is the optimal value of the intermediate temperature for the hot stream, given by the geometric-mean value √(T+T−). In this paper, instead of finite-time models, we propose to model the operation of plants by quasi-static work extraction models, with one reservoir (source/sink) as finite, while the other as practically infinite. No simplifying assumption is made on the nature of the finite system. This description is consistent with two model hypotheses, each yielding a specific value of the intermediate temperature, say T1 and T2. The lack of additional information about which hypothesis is actually realized motivates treating the problem as an exercise in inductive inference. Thus we define an expected value of the intermediate temperature as the equally weighted mean (T1 + T2)/2. It is shown that the expected value is very closely given by the geometric-mean value for almost all of the observed power plants.
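
    Both quantities in this record are simple to reproduce. The sketch below evaluates the square-root efficiency, the Carnot bound, and the geometric-mean intermediate temperature for assumed steam-plant temperatures (the numbers are illustrative, not data from the paper).

      import numpy as np

      t_hot, t_cold = 565.0 + 273.15, 25.0 + 273.15  # assumed, in kelvin

      print(f"Carnot bound:        {1.0 - t_cold / t_hot:.3f}")
      print(f"square-root formula: {1.0 - np.sqrt(t_cold / t_hot):.3f}")
      print(f"geometric-mean intermediate temperature: "
            f"{np.sqrt(t_hot * t_cold):.1f} K")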

  6. Predicting the Effects of Powder Feeding Rates on Particle Impact Conditions and Cold Spray Deposited Coatings

    NASA Astrophysics Data System (ADS)

    Ozdemir, Ozan C.; Widener, Christian A.; Carter, Michael J.; Johnson, Kyle W.

    2017-10-01

    As the industrial application of cold spray technology grows, the need to optimize both the cost and the quality of the process grows with it. Parameter selection techniques available today require solving a coupled system of equations to account for the losses due to particle loading in the gas stream. Such analyses significantly increase computational time in comparison with calculations based on isentropic flow assumptions. In cold spray operations, engineers and operators may, therefore, neglect the effects of particle loading to simplify the multiparameter optimization process. In this study, two-way coupled (particle-fluid) quasi-one-dimensional fluid dynamics simulations are used to test particle loading effects under many potential cold spray scenarios. Output of the simulations is statistically analyzed to build regression models that estimate the changes in particle impact velocity and temperature due to particle loading. This approach eases particle loading optimization, enabling a more complete analysis of deposition cost and time. The model was validated both numerically and experimentally. Further numerical analyses were completed to test the particle loading capacity and limitations of a nozzle with a commonly used throat size. Additional experimentation helped document the physical limitations to high-rate deposition.

  7. Broadening and Simplifying the First SETI Protocol

    NASA Astrophysics Data System (ADS)

    Michaud, M. A. G.

    The Declaration of Principles Concerning Activities Following the Detection of Extraterrestrial Intelligence, known informally as the First SETI Protocol, is the primary existing international guidance on this subject. During the fifteen years since the document was issued, several people have suggested revisions or additional protocols. This article proposes a broadened and simplified text that would apply to the detection of alien technology in our solar system as well as to electromagnetic signals from more remote sources.

  8. Panel Absorber

    NASA Astrophysics Data System (ADS)

    MECHEL, F. P.

    2001-11-01

    A plane wave is incident on a simply supported elastic plate covering a back volume; the arrangement is surrounded by a hard baffle wall. The plate may be porous with a flow friction resistance; the back volume may be filled either with air or with a porous material. The back volume may be bulk reacting (i.e., with sound propagation parallel to the plate) or locally reacting. Since this arrangement is of some importance in room acoustics, Cremer presented an approximate analysis in his book on room acoustics [1]. However, Cremer's analysis uses a number of assumptions which, by his own estimate, make his solution unsuited for low frequencies, where, on the other hand, the arrangement is mainly applied. This paper presents a sound field description which uses modal analysis. It is applicable not only in the far field, but also near the absorber. Further, approximate solutions are derived, based on simplifying assumptions like those Cremer used. The modal analysis solution is of interest not only as a reference for approximations but also for practical applications, because computing time is of diminishing concern (the 3D plots of the sound field presented below were evaluated with modal analysis in about 6 s).

  9. Mathematical Model for a Simplified Calculation of the Input Momentum Coefficient for AFC Purposes

    NASA Astrophysics Data System (ADS)

    Hirsch, Damian; Gharib, Morteza

    2016-11-01

    Active Flow Control (AFC) is an emerging technology which aims at enhancing the aerodynamic performance of flight vehicles (i.e., to save fuel). A viable AFC system must consider the limited resources available on a plane for attaining performance goals. A higher performance goal (i.e., airplane incremental lift) demands a higher input fluidic requirement (i.e., mass flow rate). Therefore, the key requirement for a successful and practical design is to minimize power input while maximizing performance to achieve design targets. One of the most widely used design parameters is the input momentum coefficient Cμ. The difficulty associated with Cμ lies in obtaining the parameters for its calculation. In the literature two main approaches can be found, both of which have their own disadvantages (assumptions, difficult measurements). A new, much simpler calculation approach will be presented that is based on a mathematical model that can be applied to most jet designs (i.e., steady or sweeping jets). The assumptions incorporated in the model will be justified theoretically as well as experimentally. Furthermore, the model's capabilities are exploited to give new insight to the AFC technology and its physical limitations. Supported by Boeing.
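
    For reference, the conventional definition Cmu = m_dot * U_jet / (q_inf * S_ref) is straightforward to evaluate once the jet and freestream quantities are known. The sketch below implements that textbook definition with assumed wind-tunnel values; it is not the simplified model proposed in the abstract.

      def momentum_coefficient(m_dot, u_jet, rho_inf, u_inf, s_ref):
          """Cmu = jet momentum flux / (freestream dynamic pressure * area)."""
          q_inf = 0.5 * rho_inf * u_inf ** 2
          return m_dot * u_jet / (q_inf * s_ref)

      # Assumed: 0.02 kg/s jet at 150 m/s, 30 m/s freestream, 0.5 m^2 wing.
      print(momentum_coefficient(0.02, 150.0, 1.225, 30.0, 0.5))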

  10. Farms, Families, and Markets: New Evidence on Completeness of Markets in Agricultural Settings

    PubMed Central

    LaFave, Daniel; Thomas, Duncan

    2016-01-01

    The farm household model has played a central role in improving the understanding of small-scale agricultural households and non-farm enterprises. Under the assumptions that all current and future markets exist and that farmers treat all prices as given, the model simplifies households’ simultaneous production and consumption decisions into a recursive form in which production can be treated as independent of preferences of household members. These assumptions, which are the foundation of a large literature in labor and development, have been tested and not rejected in several important studies including Benjamin (1992). Using multiple waves of longitudinal survey data from Central Java, Indonesia, this paper tests a key prediction of the recursive model: demand for farm labor is unrelated to the demographic composition of the farm household. The prediction is unambiguously rejected. The rejection cannot be explained by contamination due to unobserved heterogeneity that is fixed at the farm level, local area shocks or farm-specific shocks that affect changes in household composition and farm labor demand. We conclude that the recursive form of the farm household model is not consistent with the data. Developing empirically tractable models of farm households when markets are incomplete remains an important challenge. PMID:27688430

  11. Compressive properties of passive skeletal muscle-the impact of precise sample geometry on parameter identification in inverse finite element analysis.

    PubMed

    Böl, Markus; Kruse, Roland; Ehret, Alexander E; Leichsenring, Kay; Siebert, Tobias

    2012-10-11

    Due to the increasing developments in modelling of biological material, adequate parameter identification techniques are urgently needed. The majority of recent contributions on passive muscle tissue identify material parameters solely by comparing characteristic, compressive stress-stretch curves from experiments and simulation. In doing so, different assumptions concerning, e.g., the sample geometry or the degree of friction between the sample and the platens are required. In most cases these assumptions are grossly simplified, leading to incorrect material parameters. In order to overcome such oversimplifications, in this paper a more reliable parameter identification technique is presented: we use the inverse finite element method (iFEM) to identify the optimal parameter set by comparison of the compressive stress-stretch response, including the realistic geometries of the samples and the presence of friction at the compressed sample faces. Moreover, we judge the quality of the parameter identification by comparing the simulated and experimental deformed shapes of the samples. Besides this, the study includes a comprehensive set of compressive stress-stretch data on rabbit soleus muscle and the determination of static friction coefficients between muscle and PTFE. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Inferences about unobserved causes in human contingency learning.

    PubMed

    Hagmayer, York; Waldmann, Michael R

    2007-03-01

    Estimates of the causal efficacy of an event need to take into account the possible presence and influence of other unobserved causes that might have contributed to the occurrence of the effect. Current theoretical approaches deal differently with this problem. Associative theories assume that at least one unobserved cause is always present. In contrast, causal Bayes net theories (including Power PC theory) hypothesize that unobserved causes may be present or absent. These theories generally assume independence of different causes of the same event, which greatly simplifies the modelling of learning and inference. In two experiments participants were requested to learn about the causal relation between a single cause and an effect by observing their co-occurrence (Experiment 1) or by actively intervening in the cause (Experiment 2). Participants' assumptions about the presence of an unobserved cause were assessed either after each learning trial or at the end of the learning phase. The results show an interesting dissociation. Whereas there was a tendency to assume interdependence of the causes in the online judgements during learning, the final judgements tended to be more in the direction of an independence assumption. Possible explanations and implications of these findings are discussed.

  13. The evolutionary interplay of intergroup conflict and altruism in humans: a review of parochial altruism theory and prospects for its extension

    PubMed Central

    Rusch, Hannes

    2014-01-01

    Drawing on an idea proposed by Darwin, it has recently been hypothesized that violent intergroup conflict might have played a substantial role in the evolution of human cooperativeness and altruism. The central notion of this argument, dubbed ‘parochial altruism’, is that the two genetic or cultural traits, aggressiveness against the out-groups and cooperativeness towards the in-group, including self-sacrificial altruistic behaviour, might have coevolved in humans. This review assesses the explanatory power of current theories of ‘parochial altruism’. After a brief synopsis of the existing literature, two pitfalls in the interpretation of the most widely used models are discussed: potential direct benefits and high relatedness between group members implicitly induced by assumptions about conflict structure and frequency. Then, a number of simplifying assumptions made in the construction of these models are pointed out which currently limit their explanatory power. Next, relevant empirical evidence from several disciplines which could guide future theoretical extensions is reviewed. Finally, selected alternative accounts of evolutionary links between intergroup conflict and intragroup cooperation are briefly discussed which could be integrated with parochial altruism in the future. PMID:25253457

  14. Direct numerical simulation of leaky dielectrics with application to electrohydrodynamic atomization

    NASA Astrophysics Data System (ADS)

    Owkes, Mark; Desjardins, Olivier

    2013-11-01

    Electrohydrodynamics (EHD) has the potential to greatly enhance liquid break-up, as demonstrated in numerical simulations by Van Poppel et al. (JCP (229) 2010). In liquid-gas EHD flows, the ratio of charge mobility to charge convection timescales can be used to determine whether the charge can be assumed to exist in the bulk of the liquid or at the surface only. However, for EHD-aided fuel injection applications, these timescales are of similar magnitude and charge mobility within the fluid might need to be accounted for explicitly. In this work, a computational approach for simulating two-phase EHD flows including the charge transport equation is presented. Under certain assumptions compatible with a leaky dielectric model, charge transport simplifies to a scalar transport equation that is only defined in the liquid phase, where electric charges are present. To ensure consistency with interfacial transport, the charge equation is solved using a semi-Lagrangian geometric transport approach, similar to the method proposed by Le Chenadec and Pitsch (JCP (233) 2013). This methodology is then applied to EHD atomization of a liquid kerosene jet, and compared to results produced under the assumption of a bulk volumetric charge.

  15. A simplified rotor system mathematical model for piloted flight dynamics simulation

    NASA Technical Reports Server (NTRS)

    Chen, R. T. N.

    1979-01-01

    The model was developed for real-time pilot-in-the-loop investigation of helicopter flying qualities. The mathematical model included the tip-path plane dynamics and several primary rotor design parameters, such as flapping hinge restraint, flapping hinge offset, blade Lock number, and pitch-flap coupling. The model was used in several exploratory studies of the flying qualities of helicopters with a variety of rotor systems. The basic assumptions used and the major steps involved in the development of the set of equations listed are described. The equations consisted of the tip-path plane dynamic equation, the equations for the main rotor forces and moments, and the equation for control phasing required to achieve decoupling in pitch and roll due to cyclic inputs.

  16. Spontaneously Broken Neutral Symmetry in an Ecological System

    NASA Astrophysics Data System (ADS)

    Borile, C.; Muñoz, M. A.; Azaele, S.; Banavar, Jayanth R.; Maritan, A.

    2012-07-01

    Spontaneous symmetry breaking plays a fundamental role in many areas of condensed matter and particle physics. A fundamental problem in ecology is the elucidation of the mechanisms responsible for biodiversity and stability. Neutral theory, which makes the simplifying assumption that all individuals (such as trees in a tropical forest)—regardless of the species they belong to—have the same prospect of reproduction, death, etc., yields gross patterns that are in accord with empirical data. We explore the possibility of birth and death rates that depend on the population density of species, treating the dynamics in a species-symmetric manner. We demonstrate that dynamical evolution can lead to a stationary state characterized simultaneously by both biodiversity and spontaneously broken neutral symmetry.

  17. Research study on high energy radiation effect and environment solar cell degradation methods

    NASA Technical Reports Server (NTRS)

    Horne, W. E.; Wilkinson, M. C.

    1974-01-01

    The most detailed and comprehensively verified analytical model was used to evaluate the effects of simplifying assumptions on the accuracy of predictions made by the external damage coefficient method. It was found that the most serious discrepancies were present in heavily damaged cells, particularly proton damaged cells, in which a gradient in damage across the cell existed. In general, it was found that the current damage coefficient method tends to underestimate damage at high fluences. An exception to this rule was thick cover-slipped cells experiencing heavy degradation due to omnidirectional electrons. In such cases, the damage coefficient method overestimates the damage. Comparisons of degradation predictions made by the two methods and measured flight data confirmed the above findings.

  18. A methodology to select a wire insulation for use in habitable spacecraft.

    PubMed

    Paulos, T; Apostolakis, G

    1998-08-01

    This paper investigates electrical overheating events aboard a habitable spacecraft. The wire insulation involved in these failures plays a major role in the entire event scenario from threat development to detection and damage assessment. Ideally, if models of wire overheating events in microgravity existed, the various wire insulations under consideration could be quantitatively compared. However, these models do not exist. In this paper, a methodology is developed that can be used to select a wire insulation that is best suited for use in a habitable spacecraft. The results of this study show that, based upon the Analytic Hierarchy Process, the simplifying assumptions made, the criteria selected, and the data used in the analysis, Tefzel is better than Teflon for use in a habitable spacecraft.
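
    The Analytic Hierarchy Process step reduces to an eigenvector computation: criteria weights are the normalized principal eigenvector of a pairwise-comparison matrix. The judgments below are illustrative Saaty-scale entries for three hypothetical criteria, not the study's actual comparisons.

      import numpy as np

      # Pairwise comparisons for, e.g., flammability, toxicity, arc tracking.
      A = np.array([[1.0,   3.0,   5.0],
                    [1/3.0, 1.0,   2.0],
                    [1/5.0, 1/2.0, 1.0]])

      eigvals, eigvecs = np.linalg.eig(A)
      k = np.argmax(eigvals.real)              # principal eigenvalue
      w = np.abs(eigvecs[:, k].real)
      w /= w.sum()                             # normalized priority weights

      n = A.shape[0]
      ci = (eigvals[k].real - n) / (n - 1)     # consistency index
      print("weights:", np.round(w, 3), " CI:", round(ci, 4))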

  19. A stratospheric aerosol model with perturbations induced by the space shuttle particulate effluents

    NASA Technical Reports Server (NTRS)

    Rosen, J. M.; Hofmann, D. J.

    1977-01-01

    A one dimensional steady state stratospheric aerosol model is developed that considers the perturbations caused by the expected space shuttle particulate effluents. Two approaches to the basic modeling effort were taken: in one, enough simplifying assumptions were introduced so that a more or less exact solution to the descriptive equations could be obtained; in the other, very few simplifications were made and a computer technique was used to solve the equations. The most complex form of the model contains the effects of sedimentation, diffusion, particle growth and coagulation. Results of the perturbation calculations show that there will probably be an immeasurably small increase in the stratospheric aerosol concentration for particles larger than about 0.15 micrometer radius.

  20. Towards a theory of tiered testing.

    PubMed

    Hansson, Sven Ove; Rudén, Christina

    2007-06-01

    Tiered testing is an essential part of any resource-efficient strategy for the toxicity testing of a large number of chemicals, which is required, for instance, in the risk management of general (industrial) chemicals. In spite of this, no general theory seems to be available for the combination of single tests into efficient tiered testing systems. A first outline of such a theory is developed. It is argued that chemical, toxicological, and decision-theoretical knowledge should be combined in the construction of such a theory. A decision-theoretical approach for the optimization of test systems is introduced. It is based on expected utility maximization with simplified assumptions covering factual and value-related information that is usually missing in the development of test systems.
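
    The expected-utility idea can be illustrated with a two-tier screen-then-confirm strategy: a cheap, imperfect tier-1 test is applied to every chemical, and only tier-1 positives proceed to an expensive confirmatory test. All prevalence, sensitivity, specificity, and cost figures below are assumed for illustration.

      def expected_cost(prev, sens1, spec1, cost1, cost2, cost_missed):
          """Expected per-chemical cost of a screen-then-confirm strategy.

          Tier 2 is assumed perfect here; missed toxicants (tier-1 false
          negatives) incur a downstream penalty cost_missed.
          """
          frac_tier2 = prev * sens1 + (1.0 - prev) * (1.0 - spec1)
          missed = prev * (1.0 - sens1)
          return cost1 + frac_tier2 * cost2 + missed * cost_missed

      tiered = expected_cost(prev=0.05, sens1=0.9, spec1=0.8,
                             cost1=1.0, cost2=20.0, cost_missed=500.0)
      print(f"tiered: {tiered:.2f}  test-everything: 20.00")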

  1. A cross-diffusion system derived from a Fokker-Planck equation with partial averaging

    NASA Astrophysics Data System (ADS)

    Jüngel, Ansgar; Zamponi, Nicola

    2017-02-01

    A cross-diffusion system for two components with a Laplacian structure is analyzed on the multi-dimensional torus. This system, which was recently suggested by P.-L. Lions, is formally derived from a Fokker-Planck equation for the probability density associated with a multi-dimensional Itō process, assuming that the diffusion coefficients depend on partial averages of the probability density with exponential weights. A main feature is that the diffusion matrix of the limiting cross-diffusion system is generally neither symmetric nor positive definite, but its structure allows for the use of entropy methods. The global-in-time existence of positive weak solutions is proved and, under a simplifying assumption, the large-time asymptotics is investigated.

  2. Quantum vacuum interaction between two cosmic strings revisited

    NASA Astrophysics Data System (ADS)

    Muñoz-Castañeda, J. M.; Bordag, M.

    2014-03-01

    We reconsider the quantum vacuum interaction energy between two straight parallel cosmic strings. This problem has been discussed several times, in approaches treating both strings perturbatively or treating only one perturbatively. Here we point out that a simplifying assumption made by Bordag [Ann. Phys. (Berlin) 47, 93 (1990)] can be justified and show that, despite the global character of the background, the perturbative approach delivers a correct result. We consider the applicability of the scattering methods, developed in the past decade for the Casimir effect, to the cosmic string and find them not applicable. We calculate the scattering T-operator on one string. Finally, we consider the vacuum interaction of two strings when each carries a two-dimensional delta function potential.

  3. Trends and Techniques for Space Base Electronics

    NASA Technical Reports Server (NTRS)

    Trotter, J. D.; Wade, T. E.; Gassaway, J. D.

    1979-01-01

    Simulations of various phosphorus and boron diffusions in SOS were completed and a sputtering system, furnaces, and photolithography related equipment were set up. Double layer metal experiments initially utilized wet chemistry techniques. By incorporating ultrasonic etching of the vias, premetal cleaning with a modified buffered HF, phosphorus doped vapox, and extended sintering, yields of 98% were obtained using the standard test pattern. A two dimensional modeling program was written for simulating short channel MOSFETs with nonuniform substrate doping. A key simplifying assumption used is that the majority carriers can be represented by a sheet charge at the silicon dioxide-silicon interface. Although the program is incomplete, a solution of the two dimensional Poisson equation for the potential distribution was achieved. The status of other 2-D MOSFET simulation programs is summarized.
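
    The two-dimensional Poisson solve at the core of such device simulations is a standard five-point finite-difference problem. The sketch below runs a plain Jacobi iteration on an assumed charge block with zero-potential boundaries; it is a schematic illustration, not a device simulator.

      import numpy as np

      nx, ny, h = 64, 64, 1.0e-8      # grid and spacing (assumed 10 nm cells)
      phi = np.zeros((nx, ny))        # potential, V; Dirichlet phi = 0 edges
      rho = np.zeros((nx, ny))
      rho[20:44, 30:34] = 1.6e-19 * 1e24   # assumed charge block, C/m^3
      eps = 11.7 * 8.854e-12               # silicon permittivity, F/m

      # Jacobi iteration for the five-point Laplacian: lap(phi) = -rho/eps.
      for _ in range(5000):
          phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1] +
                                    phi[1:-1, 2:] + phi[1:-1, :-2] +
                                    h * h * rho[1:-1, 1:-1] / eps)
      print(f"peak potential: {phi.max():.4f} V")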

  4. Model-based estimation for dynamic cardiac studies using ECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.

    1994-06-01

    In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). The authors construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed.

  5. Homogeneous-heterogeneous reactions in curved channel with porous medium

    NASA Astrophysics Data System (ADS)

    Hayat, T.; Ayub, Sadia; Alsaedi, A.

    2018-06-01

    The purpose of the present investigation is to examine the peristaltic flow through a porous medium in a curved conduit. The problem is modeled for an incompressible, electrically conducting Ellis fluid. The influence of the porous medium is handled via a modified Darcy's law. The considered model utilizes homogeneous-heterogeneous reactions with equal diffusivities for the reactant and autocatalyst. Constitutive equations are formulated in the presence of viscous dissipation. The channel walls are compliant in nature. The governing equations are modeled and simplified under the assumptions of small Reynolds number and large wavelength. Graphical results for the velocity, temperature, heat transfer coefficient and homogeneous-heterogeneous reaction parameters are examined for the emerging parameters of the problem. The results reveal an enhancement of both the homogeneous-heterogeneous reaction effect and the heat transfer rate with increasing channel curvature.

  6. Methods for determining the internal thrust of scramjet engine modules from experimental data

    NASA Technical Reports Server (NTRS)

    Voland, Randall T.

    1990-01-01

    Methods for calculating zero-fuel internal drag of scramjet engine modules from experimental measurements are presented. These methods include two control-volume approaches, and a pressure and skin-friction integration. The three calculation techniques are applied to experimental data taken during tests of a version of the NASA parametric scramjet. The methods agree to within seven percent of the mean value of zero-fuel internal drag even though several simplifying assumptions are made in the analysis. The mean zero-fuel internal drag coefficient for this particular engine is calculated to be 0.150. The zero-fuel internal drag coefficient, when combined with the change in engine axial force with and without fuel, defines the internal thrust of the engine.

  7. Characterization of geostationary particle signatures based on the 'injection boundary' model

    NASA Technical Reports Server (NTRS)

    Mauk, B. H.; Meng, C.-I.

    1983-01-01

    A simplified analytical procedure is used to characterize the details of geostationary particle signatures, in order to lend support to the 'injection boundary' concept. The signatures are generated by the time-of-flight effects evolving from an initial sharply defined, double spiraled boundary configuration. Complex and highly variable dispersion patterns often observed by geostationary satellites are successfully reproduced through the exclusive use of the most fundamental convection configuration characteristics. Many of the details of the patterns have not been previously presented. It is concluded that most of the dynamical dispersion features can be mapped to the double spiral boundary without further ad hoc assumptions, and that predicted and observed dispersion patterns exhibit symmetries distinct from those associated with the quasi-stationary particle convection patterns.

  8. Fundamental structure of steady plastic shock waves in metals

    NASA Astrophysics Data System (ADS)

    Molinari, A.; Ravichandran, G.

    2004-02-01

    The propagation of steady plane shock waves in metallic materials is considered. Following the constitutive framework adopted by R. J. Clifton [Shock Waves and the Mechanical Properties of Solids, edited by J. J. Burke and V. Weiss (Syracuse University Press, Syracuse, N.Y., 1971), p. 73] for analyzing elastic-plastic transient waves, an analytical solution of the steady state propagation of plastic shocks is proposed. The problem is formulated in a Lagrangian setting appropriate for large deformations. The material response is characterized by a quasistatic tensile (compression) test (providing the isothermal strain hardening law). In addition, the elastic response is determined up to second order elastic constants by ultrasonic measurements. Based on this simple information, it is shown that the shock kinetics can be quite well described for moderate shocks in aluminum with stress amplitude up to 10 GPa. Under the latter assumption, the elastic response is assumed to be isentropic, and thermomechanical coupling is neglected. The model material considered here is aluminum, but the analysis is general and can be applied to any viscoplastic material subjected to moderate amplitude shocks. Comparisons with experimental data are made for the shock velocity, the particle velocity and the shock structure. The shock structure is obtained by quadrature of a first order differential equation, which provides analytical results under certain simplifying assumptions. The effects of material parameters and loading conditions on the shock kinetics and shock structure are discussed. The shock width is characterized by assuming an overstress formulation for the viscoplastic response. The effects of strain rate sensitivity on the shock structure are analyzed and the rationale for the J. W. Swegle and D. E. Grady [J. Appl. Phys. 58, 692 (1985)] universal scaling law for homogeneous materials is explored. Finally, the ability to deduce information on the viscoplastic response of materials subjected to very high strain rates from shock wave experiments is discussed.

  9. Numerical analysis of one-dimensional temperature data for groundwater/surface-water exchange with 1DTempPro

    NASA Astrophysics Data System (ADS)

    Voytek, E. B.; Drenkelfuss, A.; Day-Lewis, F. D.; Healy, R. W.; Lane, J. W.; Werkema, D. D.

    2012-12-01

    Temperature is a naturally occurring tracer, which can be exploited to infer the movement of water through the vadose and saturated zones, as well as the exchange of water between aquifers and surface-water bodies, such as estuaries, lakes, and streams. One-dimensional (1D) vertical temperature profiles commonly show thermal amplitude attenuation and increasing phase lag of diurnal or seasonal temperature variations with propagation into the subsurface. This behavior is described by the heat-transport equation (i.e., the convection-conduction-dispersion equation), which can be solved analytically in 1D under certain simplifying assumptions (e.g., sinusoidal or steady-state boundary conditions and homogeneous hydraulic and thermal properties). Analysis of 1D temperature profiles using analytical models provides estimates of vertical groundwater/surface-water exchange. The utility of these estimates can be diminished when the model assumptions are violated, as is common in field applications. Alternatively, analysis of 1D temperature profiles using numerical models allows for consideration of more complex and realistic boundary conditions. However, such analyses commonly require model calibration and the development of input files for finite-difference or finite-element codes. To address the calibration and input file requirements, a new computer program, 1DTempPro, is presented that facilitates numerical analysis of vertical 1D temperature profiles. 1DTempPro is a graphical user interface (GUI) to the USGS code VS2DH, which numerically solves the flow- and heat-transport equations. Pre- and post-processor features within 1DTempPro allow the user to calibrate VS2DH models to estimate groundwater/surface-water exchange and hydraulic conductivity in cases where hydraulic head is known. This approach improves groundwater/surface-water exchange-rate estimates for real-world data with complexities ill-suited for examination with analytical methods. Additionally, the code allows for time-varying temperature and hydraulic boundary conditions. Here, we present the approach and include examples for several datasets from stream/aquifer systems.
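
    For the steady-state end member, the 1D flow-and-heat-transport equation has the classical Bredehoeft-Papadopulos analytical solution, which numerical codes such as VS2DH generalize to transient and heterogeneous conditions. A minimal sketch with assumed flux and thermal properties:

      import numpy as np

      def steady_profile(z, L, t_top, t_bot, q, kappa=1.4, rc_w=4.18e6):
          """Steady 1D advection-conduction temperature profile.

          q: vertical Darcy flux, m/s (positive downward); kappa: bulk
          thermal conductivity, W/(m K); rc_w: volumetric heat capacity
          of water, J/(m^3 K). Property values are assumed.
          """
          pe = rc_w * q * L / kappa        # thermal Peclet number
          if abs(pe) < 1e-12:              # pure-conduction limit
              return t_top + (t_bot - t_top) * z / L
          return t_top + (t_bot - t_top) * np.expm1(pe * z / L) / np.expm1(pe)

      z = np.linspace(0.0, 2.0, 5)         # depth below streambed, m
      print(steady_profile(z, L=2.0, t_top=20.0, t_bot=12.0, q=1e-6))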

  10. Evolutionary relatedness does not predict competition and co-occurrence in natural or experimental communities of green algae

    PubMed Central

    Alexandrou, Markos A.; Cardinale, Bradley J.; Hall, John D.; Delwiche, Charles F.; Fritschie, Keith; Narwani, Anita; Venail, Patrick A.; Bentlage, Bastian; Pankey, M. Sabrina; Oakley, Todd H.

    2015-01-01

    The competition-relatedness hypothesis (CRH) predicts that the strength of competition is the strongest among closely related species and decreases as species become less related. This hypothesis is based on the assumption that common ancestry causes close relatives to share biological traits that lead to greater ecological similarity. Although intuitively appealing, the extent to which phylogeny can predict competition and co-occurrence among species has only recently been rigorously tested, with mixed results. When studies have failed to support the CRH, critics have pointed out at least three limitations: (i) the use of data-poor phylogenies that provide inaccurate estimates of species relatedness, (ii) the use of inappropriate statistical models that fail to detect relationships between relatedness and species interactions amidst nonlinearities and heteroskedastic variances, and (iii) overly simplified laboratory conditions that fail to allow eco-evolutionary relationships to emerge. Here, we address these limitations and find they do not explain why evolutionary relatedness fails to predict the strength of species interactions or probabilities of coexistence among freshwater green algae. First, we construct a new data-rich, transcriptome-based phylogeny of common freshwater green algae that are commonly cultured and used for laboratory experiments. Using this new phylogeny, we re-analyse ecological data from three previously published laboratory experiments. After accounting for the possibility of nonlinearities and heterogeneity of variances across levels of relatedness, we find no relationship between phylogenetic distance and ecological traits. In addition, we show that communities of North American green algae are randomly composed with respect to their evolutionary relationships in 99% of 1077 lakes spanning the continental United States. Together, these analyses result in one of the most comprehensive case studies of how evolutionary history influences species interactions and community assembly in both natural and experimental systems. Our results challenge the generality of the CRH and suggest it may be time to re-evaluate the validity and assumptions of this hypothesis. PMID:25473009

  11. Dynamics Under Location Uncertainty: Model Derivation, Modified Transport and Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Resseguier, V.; Memin, E.; Chapron, B.; Fox-Kemper, B.

    2017-12-01

    In order to better observe and predict geophysical flows, ensemble-based data assimilation methods are of high importance. In such methods, an ensemble of random realizations represents the variety of the simulated flow's likely behaviors. For this purpose, randomness needs to be introduced in a suitable way and physically-based stochastic subgrid parametrizations are promising paths. This talk will propose a new kind of such a parametrization referred to as modeling under location uncertainty. The fluid velocity is decomposed into a resolved large-scale component and an aliased small-scale one. The first component is possibly random but time-correlated whereas the second is white-in-time but spatially-correlated and possibly inhomogeneous and anisotropic. With such a velocity, the material derivative of any - possibly active - tracer is modified. Three new terms appear: a correction of the large-scale advection, a multiplicative noise and a possibly heterogeneous and anisotropic diffusion. This parameterization naturally ensures attractive properties such as energy conservation for each realization. Additionally, this stochastic material derivative and the associated Reynolds' transport theorem offer a systematic method to derive stochastic models. In particular, we will discuss the consequences of the Quasi-Geostrophic assumptions in our framework. Depending on the turbulence amount, different models with different physical behaviors are obtained. Under strong turbulence assumptions, a simplified diagnosis of frontolysis and frontogenesis at the surface of the ocean is possible in this framework. A Surface Quasi-Geostrophic (SQG) model with a weaker noise influence has also been simulated. A single realization better represents small scales than a deterministic SQG model at the same resolution. Moreover, an ensemble accurately predicts extreme events, bifurcations as well as the amplitudes and the positions of the simulation errors. Figure 1 highlights this last result and compares it to the strong error underestimation of an ensemble simulated from the deterministic dynamic with random initial conditions.

  12. Super-resolution with an SLM and two intensity images

    NASA Astrophysics Data System (ADS)

    Alcalá Ochoa, Noé; de León, Y. Ponce

    2018-06-01

    A method is reported that may simplify the optical setups used to achieve super-resolution through the amplitude multiplication of two waves. To this end, we decompose a super-resolving pupil into two complex masks and, with the aid of a Spatial Light Modulator (LCoS), obtain two intensity images that are subtracted. With this proposal, the traditional experimental optical setups are considerably simplified, with the additional benefit that different masks can be utilized without realigning the setup each time.

  13. IODC 1998 Lens Design Problem Revisited: A Strategy for Simplifying Glass Choices in an Apochromatic Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seppala, L G

    2000-09-15

    A glass-choice strategy, based on separately designing an achromatic lens before progressing to an apochromatic lens, simplified my approach to solving the International Optical Design Conference (IODC) 1998 lens design problem. The glasses that are needed to make the lens apochromatic are combined into triplet correctors with two "buried" surfaces. By applying this strategy, I reached successful solutions that used only six glasses--three glasses for the achromatic design and three additional glasses for the apochromatic design.

  14. A Bottom-Up Approach to Understanding Protein Layer Formation at Solid-Liquid Interfaces

    PubMed Central

    Kastantin, Mark; Langdon, Blake B.; Schwartz, Daniel K.

    2014-01-01

    A common goal across different fields (e.g. separations, biosensors, biomaterials, pharmaceuticals) is to understand how protein behavior at solid-liquid interfaces is affected by environmental conditions. Temperature, pH, ionic strength, and the chemical and physical properties of the solid surface, among many factors, can control microscopic protein dynamics (e.g. adsorption, desorption, diffusion, aggregation) that contribute to macroscopic properties like time-dependent total protein surface coverage and protein structure. These relationships are typically studied through a top-down approach in which macroscopic observations are explained using analytical models that are based upon reasonable, but not universally true, simplifying assumptions about microscopic protein dynamics. Conclusions connecting microscopic dynamics to environmental factors can be heavily biased by potentially incorrect assumptions. In contrast, more complicated models avoid several of the common assumptions but require many parameters that have overlapping effects on predictions of macroscopic, average protein properties. Consequently, these models are poorly suited for the top-down approach. Because the sophistication incorporated into these models may ultimately prove essential to understanding interfacial protein behavior, this article proposes a bottom-up approach in which direct observations of microscopic protein dynamics specify parameters in complicated models, which then generate macroscopic predictions to compare with experiment. In this framework, single-molecule tracking has proven capable of making direct measurements of microscopic protein dynamics, but must be complemented by modeling to combine and extrapolate many independent microscopic observations to the macro-scale. The bottom-up approach is expected to better connect environmental factors to macroscopic protein behavior, thereby guiding rational choices that promote desirable protein behaviors. PMID:24484895

  15. Post-reionization Kinetic Sunyaev-Zel'dovich Signal in the Illustris simulation

    NASA Astrophysics Data System (ADS)

    Park, Hyunbae; Alvarez, Marcelo A.; Bond, John Richard

    2017-06-01

    Using Illustris, a state-of-the-art cosmological simulation of gravity, hydrodynamics, and star formation, we revisit the calculation of the angular power spectrum of the kinetic Sunyaev-Zel'dovich effect from the post-reionization (z < 6) epoch by Shaw et al. (2012). We not only report the updated value given by the analytical model used in previous studies, but also go over the simplifying assumptions made in the model. The assumptions include using the gas density for the free electron density and neglecting the connected term that arises due to the fourth-order nature of the momentum power spectrum sourcing the signal. With these assumptions, Illustris gives a slightly (~10%) larger signal than in their work. The signal is then reduced by ~20% when the actual free electron density is used in the calculation instead of the gas density. This is because the larger neutral fraction in dense regions results in a loss of total free electrons and a suppression of fluctuations in the free electron density. We find that the connected term can account for up to half of the momentum power spectrum at z < 2. Due to the strong suppression of the low-z signal by baryonic physics, the extra contribution from the connected term is limited to the ~10% level, although it may have been underestimated due to the finite box size of Illustris. With these corrections, our result is very close to the original result of Shaw et al. (2012), which is well described by a simple power law, D_ℓ = 1.38 (ℓ/3000)^0.21 μK², at 3000 < ℓ < 10000.

  16. 75 FR 34277 - Federal Acquisition Regulation; FAR Case 2008-007, Additional Requirements for Market Research

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-16

    ...The Civilian Agency Acquisition Council and the Defense Acquisition Regulations Council (Councils) have agreed on an interim rule amending the Federal Acquisition Regulation (FAR) to implement Section 826 of the National Defense Authorization Act for Fiscal Year 2008 (FY08 NDAA). Section 826 established additional requirements in subsection (c) of 10 U.S.C. 2377. As a matter of policy, these requirements are extended to all executive agencies. Specifically, the head of the agency must conduct market research before issuing an indefinite-delivery indefinite-quantity (ID/IQ) task or delivery order for a noncommercial item in excess of the simplified acquisition threshold. In addition, a prime contractor with a contract in excess of $5 million for the procurement of items other than commercial items is required to conduct market research before making purchases that exceed the simplified acquisition threshold for or on behalf of the Government.

  17. Simulation of the Ozone Monitoring Instrument Aerosol Index Using the NASA Goddard Earth Observing System Aerosol Reanalysis Products

    NASA Technical Reports Server (NTRS)

    Colarco, Peter R.; Gasso, Santiago; Ahn, Changwoo; Buchard, Virginie; Da Silva, Arlindo M.; Torres, Omar

    2017-01-01

    We provide an analysis of the commonly used Ozone Monitoring Instrument (OMI) aerosol index (AI) product for qualitative detection of the presence and loading of absorbing aerosols. In our analysis, simulated top-of-atmosphere (TOA) radiances are produced at the OMI footprints from a model atmosphere and aerosol profile provided by the NASA Goddard Earth Observing System (GEOS-5) Modern-Era Retrospective Analysis for Research and Applications aerosol reanalysis (MERRAero). Having established the credibility of the MERRAero simulation of the OMI AI in a previous paper, we describe updates in the approach and aerosol optical property assumptions. The OMI TOA radiances are computed in cloud-free conditions from the MERRAero atmospheric state, and the AI is calculated. The simulated TOA radiances are fed to the OMI aerosol retrieval algorithms, and the retrieved AI (OMAERUV AI) is compared to the MERRAero-calculated AI. Two main sources of discrepancy are discussed: one pertaining to the OMI algorithm assumptions about the surface pressure, which generally differs from the actual surface pressure of an observation, and the other related to simplifying assumptions in the molecular atmosphere radiative transfer used in the OMI algorithms. Surface pressure assumptions lead to systematic biases in the OMAERUV AI, particularly over the oceans. Simplifications in the molecular radiative transfer lead to biases particularly in regions of topography intermediate to surface pressures of 600 hPa and 1013.25 hPa. Generally, the errors in the OMI AI due to these considerations are less than 0.2 in magnitude, though larger errors are possible, particularly over land. We recommend that future versions of the OMI algorithms use surface pressures from readily available atmospheric analyses combined with high-spatial-resolution topographic maps and include more surface pressure nodal points in their radiative transfer lookup tables.

  18. Simulation of the Ozone Monitoring Instrument aerosol index using the NASA Goddard Earth Observing System aerosol reanalysis products

    NASA Astrophysics Data System (ADS)

    Colarco, Peter R.; Gassó, Santiago; Ahn, Changwoo; Buchard, Virginie; da Silva, Arlindo M.; Torres, Omar

    2017-11-01

    We provide an analysis of the commonly used Ozone Monitoring Instrument (OMI) aerosol index (AI) product for qualitative detection of the presence and loading of absorbing aerosols. In our analysis, simulated top-of-atmosphere (TOA) radiances are produced at the OMI footprints from a model atmosphere and aerosol profile provided by the NASA Goddard Earth Observing System (GEOS-5) Modern-Era Retrospective Analysis for Research and Applications aerosol reanalysis (MERRAero). Having established the credibility of the MERRAero simulation of the OMI AI in a previous paper, we describe updates in the approach and aerosol optical property assumptions. The OMI TOA radiances are computed in cloud-free conditions from the MERRAero atmospheric state, and the AI is calculated. The simulated TOA radiances are fed to the OMI near-UV aerosol retrieval algorithm (known as OMAERUV), and the retrieved AI is compared to the MERRAero-calculated AI. Two main sources of discrepancy are discussed: one pertaining to the OMI algorithm assumptions about the surface pressure, which generally differs from the actual surface pressure of an observation, and the other related to simplifying assumptions in the molecular atmosphere radiative transfer used in the OMI algorithms. Surface pressure assumptions lead to systematic biases in the OMAERUV AI, particularly over the oceans. Simplifications in the molecular radiative transfer lead to biases particularly in regions of topography intermediate to surface pressures of 600 and 1013.25 hPa. Generally, the errors in the OMI AI due to these considerations are less than 0.2 in magnitude, though larger errors are possible, particularly over land. We recommend that future versions of the OMI algorithms use surface pressures from readily available atmospheric analyses combined with high-spatial-resolution topographic maps and include more surface pressure nodal points in their radiative transfer lookup tables.

  19. Preliminary findings on the effects of geometry on two-phase flow through volcanic conduits

    NASA Astrophysics Data System (ADS)

    Mitchell, K. L.; Wilson, L.; Lane, S. J.; James, M. R.

    2003-04-01

    We attempt to ascertain whether some of the geometrical assumptions utilised in modelling flows through volcanic conduits are valid. Flow is often assumed to be through a vertical conduit, but some volcanoes, such as Pu'u 'O'o (Kilauea, Hawai'i) and Stromboli (Italy), are known to exhibit inclined or more complex conduit systems. Our numerical and experimental studies have revealed that conduit inclination is a first-order influence on flow properties and eruptive style. Even a few degrees of inclination from vertical can increase gas-liquid phase separation by locally enhancing the gas volume fraction on the upper surface of the conduit. We explore the consequences of phase separation and slug flow for styles of magmatic eruption, and consider how these apply to particular eruptions. Modellers also tend to assume a simple parallel-sided geometry for volcanic conduits. Some have used a pressure-balanced assumption allowing conduits to choke and flare, resulting in higher eruption velocities. The pressure-balanced assumption is flawed in that it does not deal with the effects of compressibility and associated shocks when the flow is supersonic. Both parallel-sided and pressure-balanced assumptions avoid addressing how conduit shape evolves from an initial dyke-shaped fracture. However, we assert that evolution of conduit shape is impossible to quantify accurately using a deterministic approach. Therefore we adopt a simplified approach, with the initial conduit shape as a blade-shaped dyke, and the potential end-member as a system that is pressure-balanced up to the supersonic choking point and undetermined beyond (flow is constrained by a narrow jet envelope and not by the walls). Intermediate geometries are assumed to change quasi-steadily at locations where conduit wall stresses are high, and the consequences of these geometries are explored. We find that quite small changes in conduit geometry, which are likely to occur in volcanic systems, can have a significant effect on flow speeds.

  20. 24 CFR 92.252 - Qualification as affordable housing: Rental housing.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... include average occupancy per unit and adjusted income assumptions. (b) Additional Rent limitations. In... provides the HOME rent limits which include average occupancy per unit and adjusted income assumptions... occupied only by households that are eligible as low-income families and must meet the following...

  1. Defense and the Economy

    DTIC Science & Technology

    1993-01-01

    Assumptions ... b. Modeling Productivity ... and a macroeconomic model of the U.S. economy, designed to provide long-range projections consistent with trends in production technology, shifts in ... investments in roads, bridges, sewer systems, etc. In addition to these modeling assumptions, we also have introduced productivity increases to reflect the

  2. Simplified refracting technique in keratoconus.

    PubMed

    Gasset, A R

    1975-01-01

    A simple but effective technique to refract keratoconus patients is presented. The theoretical objections to these methods are discussed. In addition, a formula to calculate lenticular astigmatism is presented.

  3. A lattice Boltzmann model for the Burgers-Fisher equation.

    PubMed

    Zhang, Jianying; Yan, Guangwu

    2010-06-01

    A lattice Boltzmann model is developed for the one- and two-dimensional Burgers-Fisher equation based on the method of the higher-order moment of equilibrium distribution functions and a series of partial differential equations in different time scales. In order to obtain the two-dimensional Burgers-Fisher equation, the vector sigma(j) is used. In order to overcome the drawback of "error rebound," a new assumption for the additional distribution is presented, in which two additional terms, of first and second order respectively, are used. Comparisons with the results obtained by other methods reveal that the numerical solutions obtained by the proposed method converge to the exact solutions. The model under the new assumption gives better results than the one with the second-order assumption. (c) 2010 American Institute of Physics.

  4. Bartnik’s splitting conjecture and Lorentzian Busemann function

    NASA Astrophysics Data System (ADS)

    Amini, Roya; Sharifzadeh, Mehdi; Bahrampour, Yousof

    2018-05-01

    In 1988 Bartnik posed the splitting conjecture about the cosmological space-time. This conjecture has been proved by several people, with different approaches and by using some additional assumptions such as ‘S-ray condition’ and ‘level set condition’. It is known that the ‘S-ray condition’ yields the ‘level set condition’. We have proved that the two are indeed equivalent, by giving a different proof under the assumption of the ‘level set condition’. In addition, we have shown several properties of the cosmological space-time, under the presence of the ‘level set condition’. Finally we have provided a proof of the conjecture under a different assumption on the cosmological space-time. But we first prove some results without the timelike convergence condition which help us to state our proofs.

  5. On the Use of Rank Tests and Estimates in the Linear Model.

    DTIC Science & Technology

    1982-06-01

    assumption A5, McKean and Hettmansperger (1976) show that τ̂_w ≈ (W(N-c) - W(c+1)) / (2 Z(α/2)) (14), where 2 Z(α/2) is the 1-α interpercentile range of the standard ... (r(.75n) - r(.25n)) (13). The window width h incorporates a resistant estimate of scale, the interquartile range of the residuals, and a normalizing ... An alternative estimate of τ is available with the additional assumption of symmetry of the error distribution. ASSUMPTION: A5. Suppose the underlying error

  6. Sensitivity of Rooftop PV Projections in the SunShot Vision Study to Market Assumptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drury, E.; Denholm, P.; Margolis, R.

    2013-01-01

    The SunShot Vision Study explored the potential growth of solar markets if solar prices decreased by about 75% from 2010 to 2020. The SolarDS model was used to simulate rooftop PV demand for this study, based on several PV market assumptions--future electricity rates, customer access to financing, and others--in addition to the SunShot PV price projections. This paper finds that modeled PV demand is highly sensitive to several non-price market assumptions, particularly PV financing parameters.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bouchard, P.J.

    A forthcoming revision to the R6 Leak-before-Break Assessment Procedure is briefly described. Practical application of the LbB concepts to safety-critical nuclear plant is illustrated by examples covering both low temperature and high temperature (>450 °C) operating regimes. The examples highlight a number of issues which can make the development of a satisfactory LbB case problematic: for example, coping with highly loaded components, methodology assumptions and the definition of margins, the effect of crack closure owing to weld residual stresses, complex thermal stress fields or primary bending fields, the treatment of locally high stresses at crack intersections with free surfaces, the choice of local limit load solution when predicting ligament breakthrough, and the scope of calculations required to support even a simplified LbB case for high temperature steam pipe-work systems.

  8. Magnetosphere - Ionosphere - Thermosphere (MIT) Coupling at Jupiter

    NASA Astrophysics Data System (ADS)

    Yates, J. N.; Ray, L. C.; Achilleos, N.

    2017-12-01

    Jupiter's upper atmospheric temperature is considerably higher than that predicted by Solar Extreme Ultraviolet (EUV) heating alone. Simulations incorporating magnetosphere-ionosphere coupling effects into general circulation models have, to date, struggled to reproduce the observed atmospheric temperatures under simplifying assumptions such as azimuthal symmetry and a spin-aligned dipole magnetic field. Here we present the development of a full three-dimensional thermosphere model coupled in both hemispheres to an axisymmetric magnetosphere model. This new coupled model is based on the two-dimensional MIT model presented in Yates et al., 2014, and is a critical step towards the development of a fully coupled 3D MIT model. We discuss and compare the resulting thermospheric flows, energy balance and MI coupling currents to those presented in previous 2D MIT models.

  9. Model Error Estimation for the CPTEC Eta Model

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; daSilva, Arlindo

    1999-01-01

    Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead-times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.
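
    As a schematic rendering of this quantity (ours, not necessarily either of the paper's two definitions), if the forecast error decomposes as $e_f = e_m + e_i$, with $e_m$ the model error and $e_i$ the error inherited from the initial condition, the Talagrand ratio can be written

        \tau = \frac{\langle \| e_m \|^2 \rangle}{\langle \| e_f \|^2 \rangle},

    so that $\tau \to 1$ when model imperfections dominate the forecast error and $\tau \to 0$ when initial condition error dominates.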

  10. Comparison of calculated and measured pressures on straight and swept-tip model rotor blades

    NASA Technical Reports Server (NTRS)

    Tauber, M. E.; Chang, I. C.; Caughey, D. A.; Phillipe, J. J.

    1983-01-01

    Using the quasi-steady, full potential code ROT22, pressures were calculated on straight and swept-tip model helicopter rotor blades at advance ratios of 0.40 and 0.45, and into the transonic tip speed range. The calculated pressures were compared with values measured in the tip regions of the model blades. Good agreement was found over a wide range of azimuth angles when the shocks on the blade were not too strong. However, strong shocks persisted longer than predicted by ROT22 when the blade was in the second quadrant. Since the unsteady flow effects present at high advance ratios primarily affect shock waves, the underprediction of shock strengths is attributed to the simplifying quasi-steady assumption made in ROT22.

  11. Asymmetric Marcus-Hush theory for voltammetry.

    PubMed

    Laborda, Eduardo; Henstridge, Martin C; Batchelor-McAuley, Christopher; Compton, Richard G

    2013-06-21

    The current state-of-the-art in modeling the rate of electron transfer between an electroactive species and an electrode is reviewed. Experimental studies show that neither the ubiquitous Butler-Volmer model nor the more modern symmetric Marcus-Hush model are able to satisfactorily reproduce the experimental voltammetry for both solution-phase and surface-bound redox couples. These experimental deviations indicate the need for revision of the simplifying approximations used in the above models. Within this context, models encompassing asymmetry are considered which include different vibrational and solvation force constants for the electroactive species. The assumption of non-adiabatic electron transfer is also examined. These refinements have provided more satisfactory models of the electron transfer process and they enable us to gain more information about the microscopic characteristics of the system by means of simple electrochemical measurements.
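
    For reference, the Butler-Volmer model mentioned above relates the heterogeneous rate constants to the overpotential $\eta$ through a single transfer coefficient $\alpha$ (textbook form, not the review's notation):

        k_{\mathrm{red}} = k^0 \exp\!\left(-\frac{\alpha F \eta}{R T}\right), \qquad
        k_{\mathrm{ox}}  = k^0 \exp\!\left(\frac{(1-\alpha) F \eta}{R T}\right).

    The symmetric Marcus-Hush model and the asymmetric refinements discussed in the review instead derive the rates from free-energy surfaces, with asymmetry entering through different force constants for the oxidized and reduced species.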

  12. Ferromagnetic CNT suspended H2O+Cu nanofluid analysis through composite stenosed arteries with permeable wall

    NASA Astrophysics Data System (ADS)

    Akbar, Noreen Sher

    2015-08-01

    In the present article, magnetic field effects on CNT-suspended copper nanoparticles for blood flow through composite stenosed arteries with a permeable wall are discussed. CNT-suspended copper nanoparticles for blood flow with water as the base fluid have not been explored before. The equations for the CNT-suspended Cu-water nanofluid are developed for the first time in the literature and simplified using the long wavelength and low Reynolds number assumptions. Exact solutions have been evaluated for the velocity, pressure gradient, solid volume fraction of the nanoparticles, and temperature profile. The effects of various flow parameters on the flow and heat transfer characteristics are examined. It is also observed that with an increase in the slip parameter, blood flows more slowly in the arteries and the trapped bolus increases in size.

  13. The accuracy of the compressible Reynolds equation for predicting the local pressure in gas-lubricated textured parallel slider bearings

    PubMed Central

    Qiu, Mingfeng; Bailey, Brian N.; Stoll, Rob

    2014-01-01

    The validity of the compressible Reynolds equation to predict the local pressure in a gas-lubricated, textured parallel slider bearing is investigated. The local bearing pressure is numerically simulated using the Reynolds equation and the Navier-Stokes equations for different texture geometries and operating conditions. The respective results are compared and the simplifying assumptions inherent in the application of the Reynolds equation are quantitatively evaluated. The deviation between the local bearing pressure obtained with the Reynolds equation and the Navier-Stokes equations increases with increasing texture aspect ratio, because a significant cross-film pressure gradient and a large velocity gradient in the sliding direction develop in the lubricant film. Inertia is found to be negligible throughout this study. PMID:25049440
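
    For context, the compressible Reynolds equation evaluated in this comparison takes, for a steady, isothermal ideal-gas film, the standard textbook form (our rendering, not necessarily the exact variant used in the paper):

        \frac{\partial}{\partial x}\!\left(p h^3 \frac{\partial p}{\partial x}\right)
        + \frac{\partial}{\partial y}\!\left(p h^3 \frac{\partial p}{\partial y}\right)
        = 6 \mu U \frac{\partial (p h)}{\partial x},

    where $p$ is the film pressure, $h$ the local film thickness, $\mu$ the gas viscosity, and $U$ the sliding speed. Its derivation assumes a negligible cross-film pressure gradient, which is exactly the assumption the Navier-Stokes comparison tests.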

  14. Further analytical study of hybrid rocket combustion

    NASA Technical Reports Server (NTRS)

    Hung, W. S. Y.; Chen, C. S.; Haviland, J. K.

    1972-01-01

    Analytical studies of the transient and steady-state combustion processes in a hybrid rocket system are discussed. The particular system chosen consists of a gaseous oxidizer flowing within a tube of solid fuel, resulting in heterogeneous combustion. Finite rate chemical kinetics with appropriate reaction mechanisms were incorporated in the model. A temperature dependent Arrhenius type fuel surface regression rate equation was chosen for the current study. The governing mathematical equations employed for the reacting gas phase and for the solid phase are the general, two-dimensional, time-dependent conservation equations in a cylindrical coordinate system. Keeping the simplifying assumptions to a minimum, these basic equations were programmed for numerical computation, using two implicit finite-difference schemes: the Lax-Wendroff scheme for the gas phase and the Crank-Nicolson scheme for the solid phase.
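
    The Arrhenius-type surface regression law referred to above has the generic form (a schematic statement; the abstract does not give the constants):

        \dot{r} = A \exp\!\left(-\frac{E_a}{R\, T_s}\right),

    where $\dot{r}$ is the fuel surface regression rate, $A$ a pre-exponential constant, $E_a$ an activation energy, $R$ the universal gas constant, and $T_s$ the instantaneous fuel surface temperature.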

  15. Structure of thermal pair clouds around gamma-ray-emitting black holes

    NASA Technical Reports Server (NTRS)

    Liang, Edison P.

    1991-01-01

    Using certain simplifying assumptions, the general structure of a quasi-spherical thermal pair-balanced cloud surrounding an accreting black hole is derived from first principles. Pair-dominated hot solutions exist only for a restricted range of the viscosity parameter. These results are applied as examples to the 1979 HEAO 3 gamma-ray data of Cygnus X-1 and the Galactic center. Values are obtained for the viscosity parameter lying in the range of about 0.1-0.01. Since the lack of synchrotron soft photons requires the magnetic field to be typically less than 1 percent of the equipartition value, a magnetic field cannot be the main contributor to the viscous stress of the inner accretion flow, at least during the high gamma-ray states.

  16. Take-home video for adult literacy

    NASA Astrophysics Data System (ADS)

    Yule, Valerie

    1996-01-01

    In the past, it has not been possible to "teach oneself to read" at home, because learners could not read the books to teach them. Videos and interactive compact discs have changed that situation and challenge current assumptions of the pedagogy of literacy. This article describes an experimental adult literacy project using video technology. The language used is English, but the basic concepts apply to any alphabetic or syllabic writing system. A half-hour cartoon video can help adults and adolescents with learning difficulties. Computer-animated cartoon graphics are attractive to look at, and simplify complex material in a clear, lively way. This video technique is also proving useful for distance learners, children, and learners of English as a second language. Methods and principles are to be extended using interactive compact discs.

  17. Brownian motion and thermophoresis effects on Peristaltic slip flow of a MHD nanofluid in a symmetric/asymmetric channel

    NASA Astrophysics Data System (ADS)

    Sucharitha, G.; Sreenadh, S.; Lakshminarayana, P.; Sushma, K.

    2017-11-01

    The slip and heat transfer effects on MHD peristaltic transport of a nanofluid in a non-uniform symmetric/asymmetric channel have been studied under the assumptions of long wavelength and negligible Reynolds number. From the simplified governing equations, closed-form solutions for the velocity, stream function, temperature, and concentration are obtained. Dual solutions are also discussed for the symmetric and asymmetric channel cases. The effects of important physical parameters are explained graphically. The slip parameter decreases the fluid velocity in the middle of the channel, whereas it increases the velocity at the channel walls. Temperature and concentration are decreasing and increasing functions of the radiation parameter, respectively. Moreover, the velocity, temperature, and concentration are higher in the symmetric channel when compared with the asymmetric channel.

  18. On the 'flip-flop' instability of Bondi-Hoyle accretion flows

    NASA Technical Reports Server (NTRS)

    Livio, Mario; Soker, Noam; Matsuda, Takuya; Anzer, Ulrich

    1991-01-01

    A simple physical interpretation is advanced by means of an analysis of the shock cone in accretion flows past a compact object and an examination of accretion-line stability analyses. The stability of the conical shock is examined against small angular deflections, with attention given to several simplifying assumptions. A line instability is identified in Bondi-Hoyle accretion flows that leads to the formation of a large opening-angle shock. When the opening angle becomes large, the instability becomes an irregular oscillation. The analytical methodology is compared to previous numerical configurations that demonstrate different shock morphologies. It is concluded that Bondi-Hoyle accretion onto a compact object generates a range of nonlinear instabilities in both homogeneous and inhomogeneous cases, with a quasiperiodic oscillation in the linear regime.

  19. Advanced statistics: linear regression, part I: simple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
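
    As a concrete illustration of the method of least squares described above (our own minimal example with synthetic data, not taken from the article), the slope and intercept follow in closed form from the sample means:

        import numpy as np

        # Synthetic data: y = 3 + 0.5 x plus noise.
        rng = np.random.default_rng(1)
        x = np.linspace(0.0, 10.0, 50)
        y = 3.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)

        # Closed-form least-squares estimates for y = b0 + b1 * x.
        b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
        b0 = y.mean() - b1 * x.mean()
        print(f"intercept = {b0:.3f}, slope = {b1:.3f}")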

  20. Lotka-Volterra pairwise modeling fails to capture diverse pairwise microbial interactions

    PubMed Central

    Momeni, Babak; Xie, Li; Shou, Wenying

    2017-01-01

    Pairwise models are commonly used to describe many-species communities. In these models, an individual receives additive fitness effects from pairwise interactions with each species in the community ('additivity assumption'). All pairwise interactions are typically represented by a single equation where parameters reflect signs and strengths of fitness effects ('universality assumption'). Here, we show that a single equation fails to qualitatively capture diverse pairwise microbial interactions. We build mechanistic reference models for two microbial species engaging in commonly-found chemical-mediated interactions, and attempt to derive pairwise models. Different equations are appropriate depending on whether a mediator is consumable or reusable, whether an interaction is mediated by one or more mediators, and sometimes even on quantitative details of the community (e.g. relative fitness of the two species, initial conditions). Our results, combined with potential violation of the additivity assumption in many-species communities, suggest that pairwise modeling will often fail to predict microbial dynamics. DOI: http://dx.doi.org/10.7554/eLife.25051.001 PMID:28350295
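
    The canonical pairwise form the authors critique is the generalized Lotka-Volterra model, du_i/dt = u_i (r_i + sum_j a_ij u_j), in which every interspecies effect enters as an additive fitness term with a constant coefficient. The sketch below integrates a two-species instance (parameter values are ours, chosen only for illustration):

        import numpy as np

        # Generalized Lotka-Volterra: du_i/dt = u_i * (r_i + sum_j a_ij * u_j).
        r = np.array([0.5, 0.3])            # intrinsic growth rates
        A = np.array([[-1.0, -0.2],         # a_ii < 0: self-limitation
                      [-0.4, -1.0]])        # off-diagonal: pairwise fitness effects

        u = np.array([0.1, 0.1])            # initial abundances
        dt = 0.01
        for _ in range(5000):               # simple forward-Euler integration
            u = u + dt * u * (r + A @ u)
        print("near-equilibrium abundances:", u)

    The paper's point is that chemical-mediated interactions often cannot be compressed into constant a_ij coefficients of this kind.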

  1. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert M.

    2013-01-01

    A new regression model search algorithm was developed that may be applied to both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The algorithm is a simplified version of a more complex algorithm that was originally developed for the NASA Ames Balance Calibration Laboratory. The new algorithm performs regression model term reduction to prevent overfitting of data. It has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a regression model search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression model. Therefore, the simplified algorithm is not intended to replace the original algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new search algorithm.
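
    A minimal sketch of the kind of term-reduction step such an algorithm performs is backward elimination: repeatedly drop the least significant candidate regression term until every remaining term passes a significance test. This is our illustration of the general idea only, not NASA's code; the actual algorithm additionally enforces search constraints and statistical quality requirements not shown here.

        import numpy as np
        from scipy import stats

        def backward_eliminate(X, y, names, alpha=0.05):
            """Drop the least significant column of X until all p-values < alpha."""
            X, names = X.copy(), list(names)
            while X.shape[1] > 1:
                beta, *_ = np.linalg.lstsq(X, y, rcond=None)
                n, k = X.shape
                resid = y - X @ beta
                sigma2 = resid @ resid / (n - k)             # residual variance
                cov = sigma2 * np.linalg.inv(X.T @ X)        # coefficient covariance
                tvals = beta / np.sqrt(np.diag(cov))
                pvals = 2.0 * stats.t.sf(np.abs(tvals), n - k)
                worst = int(np.argmax(pvals))
                if pvals[worst] < alpha:                     # all terms significant
                    break
                X = np.delete(X, worst, axis=1)              # drop the weakest term
                names.pop(worst)
            return names

        rng = np.random.default_rng(0)
        x = rng.uniform(-1.0, 1.0, 200)
        X = np.column_stack([np.ones_like(x), x, x**2, x**3])   # candidate terms
        y = 2.0 + 1.5 * x + rng.normal(scale=0.1, size=x.size)  # data generated from a linear model
        print(backward_eliminate(X, y, ["1", "x", "x^2", "x^3"]))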

  2. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2013-01-01

    A new regression model search algorithm was developed in 2011 that may be used to analyze both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The new algorithm is a simplified version of a more complex search algorithm that was originally developed at the NASA Ames Balance Calibration Laboratory. The new algorithm has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression models. Therefore, the simplified search algorithm is not intended to replace the original search algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm either fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new regression model search algorithm.

  3. Estimating Green Net National Product for Puerto Rico: An Economic Measure of Sustainability

    NASA Astrophysics Data System (ADS)

    Wu, Shanshan; Heberling, Matthew T.

    2016-04-01

    This paper presents the data sources and methodology used to estimate Green Net National Product (GNNP), an economic metric of sustainability, for Puerto Rico. Using the change in GNNP as a one-sided test of weak sustainability (i.e., positive growth in GNNP is not enough to show the economy is sustainable), we measure the movement away from sustainability by examining the change in GNNP from 1993 to 2009. In order to calculate GNNP, we require both economic and natural capital data, but limited data for Puerto Rico require a number of simplifying assumptions. Based on the environmental challenges faced by Puerto Rico, we include damages from air emissions and solid waste, the storm protection value of mangroves and the value of extracting crushed stone as components in the depreciation of natural capital. Our estimate of GNNP also includes the value of time, which captures the effects of technological progress. The results show that GNNP had an increasing trend over the 17 years studied with two periods of negative growth (2004-2006 and 2007-2008). Our additional analysis suggests that the negative growth in 2004-2006 was possibly due to a temporary economic downturn. However, the negative growth in 2007-2008 was likely from the decline in the value of time, suggesting the island of Puerto Rico was moving away from sustainability during this time.
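
    In schematic form, the accounting identity implied by these components is (our rendering; the symbols are ours and the paper's exact treatment of each term may differ):

        \mathrm{GNNP}_t = \mathrm{NNP}_t
        - D_t^{\mathrm{air}} - D_t^{\mathrm{waste}} - R_t^{\mathrm{stone}}
        - \Delta S_t^{\mathrm{mangrove}} + V_t^{\mathrm{time}},

    where NNP is conventional net national product, the $D_t$ terms are damages from air emissions and solid waste, $R_t^{\mathrm{stone}}$ is the depletion of crushed stone, $\Delta S_t^{\mathrm{mangrove}}$ the loss of mangrove storm-protection value, and $V_t^{\mathrm{time}}$ the value of time capturing technological progress.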

  4. Integrative approaches for modeling regulation and function of the respiratory system.

    PubMed

    Ben-Tal, Alona; Tawhai, Merryn H

    2013-01-01

    Mathematical models have been central to understanding the interaction between neural control and breathing. Models of the entire respiratory system, which comprises the lungs and the neural circuitry that controls their ventilation, have been derived using simplifying assumptions to compartmentalize each component of the system and to define the interactions between components. These full system models often rely, through necessity, on empirically derived relationships or parameters, in addition to physiological values. In parallel with the development of whole respiratory system models are mathematical models that focus on furthering a detailed understanding of the neural control network, or of the several functions that contribute to gas exchange within the lung. These models are biophysically based, and rely on physiological parameters. They include single-unit models for a breathing lung or neural circuit, through to spatially distributed models of ventilation and perfusion, or multicircuit models for neural control. The challenge is to bring together these more recent advances in models of neural control with models of lung function, into a full simulation for the respiratory system that builds upon the more detailed models but remains computationally tractable. This requires first understanding the mathematical models that have been developed for the respiratory system at different levels, and which could be used to study how physiological levels of O2 and CO2 in the blood are maintained. Copyright © 2013 Wiley Periodicals, Inc.

  5. Numerical evaluation of longitudinal motions of Wigley hulls advancing in waves by using Bessho form translating-pulsating source Green'S function

    NASA Astrophysics Data System (ADS)

    Xiao, Wenbin; Dong, Wencai

    2016-06-01

    In the framework of 3D potential flow theory, the Bessho form translating-pulsating source Green's function in the frequency domain is chosen as the integral kernel in this study, and a hybrid source-and-dipole distribution model of the boundary element method is applied to directly solve the velocity potential for a ship advancing in regular waves. Numerical characteristics of the Green's function show that the contribution of the local-flow components to the velocity potential is concentrated near the source point, while the wave component dominates the magnitude of the velocity potential in the far field. Two kinds of mathematical models, with or without the local-flow components taken into account, are adopted to numerically calculate the longitudinal motions of Wigley hulls, which demonstrates the applicability of the translating-pulsating source Green's function method for various ship forms. In addition, a mesh analysis of the discrete surface is carried out from the perspective of ship-form characteristics. The study shows that the longitudinal motion results given by the simplified model are somewhat greater than the experimental data in the resonant zone, and that the model can be used as an effective tool to predict ship seakeeping properties. However, the translating-pulsating source Green's function method is only appropriate for the qualitative analysis of motion response in waves if the ship's geometrical shape fails to satisfy the slender-body assumption.

  6. Estimating Green Net National Product for Puerto Rico: An Economic Measure of Sustainability.

    PubMed

    Wu, Shanshan; Heberling, Matthew T

    2016-04-01

    This paper presents the data sources and methodology used to estimate Green Net National Product (GNNP), an economic metric of sustainability, for Puerto Rico. Using the change in GNNP as a one-sided test of weak sustainability (i.e., positive growth in GNNP is not enough to show the economy is sustainable), we measure the movement away from sustainability by examining the change in GNNP from 1993 to 2009. In order to calculate GNNP, we require both economic and natural capital data, but limited data for Puerto Rico require a number of simplifying assumptions. Based on the environmental challenges faced by Puerto Rico, we include damages from air emissions and solid waste, the storm protection value of mangroves and the value of extracting crushed stone as components in the depreciation of natural capital. Our estimate of GNNP also includes the value of time, which captures the effects of technological progress. The results show that GNNP had an increasing trend over the 17 years studied with two periods of negative growth (2004-2006 and 2007-2008). Our additional analysis suggests that the negative growth in 2004-2006 was possibly due to a temporary economic downturn. However, the negative growth in 2007-2008 was likely from the decline in the value of time, suggesting the island of Puerto Rico was moving away from sustainability during this time.

  7. Sound production due to large-scale coherent structures

    NASA Technical Reports Server (NTRS)

    Gatski, T. B.

    1979-01-01

    The acoustic pressure fluctuations due to large-scale finite amplitude disturbances in a free turbulent shear flow are calculated. The flow is decomposed into three component scales; the mean motion, the large-scale wave-like disturbance, and the small-scale random turbulence. The effect of the large-scale structure on the flow is isolated by applying both a spatial and phase average on the governing differential equations and by initially taking the small-scale turbulence to be in energetic equilibrium with the mean flow. The subsequent temporal evolution of the flow is computed from global energetic rate equations for the different component scales. Lighthill's theory is then applied to the region with the flowfield as the source and an observer located outside the flowfield in a region of uniform velocity. Since the time history of all flow variables is known, a minimum of simplifying assumptions for the Lighthill stress tensor is required, including no far-field approximations. A phase average is used to isolate the pressure fluctuations due to the large-scale structure, and also to isolate the dynamic process responsible. Variation of mean square pressure with distance from the source is computed to determine the acoustic far-field location and decay rate, and, in addition, spectra at various acoustic field locations are computed and analyzed. Also included are the effects of varying the growth and decay of the large-scale disturbance on the sound produced.
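
    For reference, the Lighthill theory applied here is the standard acoustic analogy, in which the exact equations of motion are rearranged into an inhomogeneous wave equation (textbook form, not the paper's specific notation):

        \frac{\partial^2 \rho'}{\partial t^2} - c_0^2 \nabla^2 \rho'
        = \frac{\partial^2 T_{ij}}{\partial x_i \partial x_j},
        \qquad
        T_{ij} = \rho u_i u_j + \left(p' - c_0^2 \rho'\right)\delta_{ij} - \tau_{ij},

    where $T_{ij}$ is the Lighthill stress tensor; because the full time history of the flow is computed, $T_{ij}$ can be evaluated with a minimum of simplifying assumptions, as noted above.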

  8. Stress analysis and damage evaluation of flawed composite laminates by hybrid-numerical methods

    NASA Technical Reports Server (NTRS)

    Yang, Yii-Ching

    1992-01-01

    Structural components in flight vehicles often inherit flaws, such as microcracks, voids, holes, and delamination. These defects degrade structures in the same way as damage incurred in service, such as impact, corrosion, and erosion. It is very important to know whether a structural component remains useful and safe in the presence of such flaws and damage. To understand the behavior and limitations of these structural components, researchers usually perform experimental tests or theoretical analyses on structures with simulated flaws. However, neither approach has been completely successful. As Durelli states, 'Seldom does one method give a complete solution, with the most efficiency.' An example of this principle is seen in photomechanics, where additional strain-gage testing can only average stresses at locations of high concentration. On the other hand, theoretical analyses, including numerical analyses, are implemented with simplified assumptions which may not reflect actual boundary conditions. Hybrid-numerical methods, which combine photomechanics and numerical analysis, have been used to correct this inefficiency since the 1950s, but their application was limited until the 1970s, when modern computer codes became available. In recent years, researchers have enhanced the data obtained from photoelasticity, laser speckle, holography, and moiré interferometry for input to finite element analysis of metals. Nevertheless, little of this work has been done on composite laminates. Therefore, this research is dedicated to this highly anisotropic material.

  9. Dependence of elastic hadron collisions on impact parameter

    NASA Astrophysics Data System (ADS)

    Procházka, Jiří; Lokajíček, Miloš V.; Kundrát, Vojtěch

    2016-05-01

    Elastic proton-proton collisions represent probably the greatest ensemble of available measured data, the analysis of which may provide a large amount of new physical results concerning fundamental particles. It is, however, necessary first to analyze some conclusions concerning pp collisions whose interpretations differ fundamentally from our common macroscopic experience. It has been argued, e.g., that elastic hadron collisions are more central than inelastic ones, even though no explanation has yet been given of how such different processes, i.e., elastic and inelastic (with hundreds of secondary particles) collisions, can exist under the same conditions. This conclusion has been based on a number of simplifying mathematical assumptions (already made in earlier calculations), without their influence on the physical interpretation being analyzed and justified; the study of this influence has begun in the approach based on the eikonal model. The possibility of a peripheral interpretation of elastic collisions will be demonstrated and the corresponding results summarized. Arguments will be given as to why no preference may be given to the mentioned centrality over the standard peripheral behaviour. The corresponding discussion on the contemporary description of elastic hadronic collisions in dependence on the impact parameter will be summarized, and the justification of some important assumptions will be considered.

  10. Estimating causal contrasts involving intermediate variables in the presence of selection bias.

    PubMed

    Valeri, Linda; Coull, Brent A

    2016-11-20

    An important goal across the biomedical and social sciences is the quantification of the role of intermediate factors in explaining how an exposure exerts an effect on an outcome. Selection bias has the potential to severely undermine the validity of inferences on direct and indirect causal effects in observational as well as in randomized studies. The phenomenon of selection may arise through several mechanisms, and we here focus on instances of missing data. We study the sign and magnitude of selection bias in the estimates of direct and indirect effects when data on any of the factors involved in the analysis is either missing at random or not missing at random. Under some simplifying assumptions, the bias formulae can lead to nonparametric sensitivity analyses. These sensitivity analyses can be applied to causal effects on the risk difference and risk-ratio scales irrespectively of the estimation approach employed. To incorporate parametric assumptions, we also develop a sensitivity analysis for selection bias in mediation analysis in the spirit of the expectation-maximization algorithm. The approaches are applied to data from a health disparities study investigating the role of stage at diagnosis on racial disparities in colorectal cancer survival. Copyright © 2016 John Wiley & Sons, Ltd.

  11. Impacts of Changes of Indoor Air Pressure and Air Exchange Rate in Vapor Intrusion Scenarios

    PubMed Central

    Shen, Rui; Suuberg, Eric M.

    2016-01-01

    There has, in recent years, been increasing interest in understanding the transport processes of relevance in vapor intrusion of volatile organic compounds (VOCs) into buildings on contaminated sites. These studies have included fate and transport modeling. Most such models have simplified the prediction of indoor air contaminant vapor concentrations by employing a steady state assumption, which often results in difficulties in reconciling these results with field measurements. This paper focuses on two major factors that may be subject to significant transients in vapor intrusion situations, including the indoor air pressure and the air exchange rate in the subject building. A three-dimensional finite element model was employed with consideration of daily and seasonal variations in these factors. From the results, the variations of indoor air pressure and air exchange rate are seen to contribute to significant variations in indoor air contaminant vapor concentrations. Depending upon the assumptions regarding the variations in these parameters, the results are only sometimes consistent with the reports of several orders of magnitude in indoor air concentration variations from field studies. The results point to the need to examine more carefully the interplay of these factors in order to quantitatively understand the variations in potential indoor air exposures. PMID:28090133

  12. Determination of mean pressure from PIV in compressible flows using the Reynolds-averaging approach

    NASA Astrophysics Data System (ADS)

    van Gent, Paul L.; van Oudheusden, Bas W.; Schrijer, Ferry F. J.

    2018-03-01

    The feasibility of computing the flow pressure on the basis of PIV velocity data has been demonstrated abundantly for low-speed conditions. The added complications occurring for high-speed compressible flows have, however, so far proved to be largely inhibitive for the accurate experimental determination of instantaneous pressure. Obtaining mean pressure may remain a worthwhile and realistic goal to pursue. In a previous study, a Reynolds-averaging procedure was developed for this, under the moderate-Mach-number assumption that density fluctuations can be neglected. The present communication addresses the accuracy of this assumption, and the consistency of its implementation, by evaluating the relevance of the different contributions resulting from the Reynolds-averaging. The methodology involves a theoretical order-of-magnitude analysis, complemented with a quantitative assessment based on a simulated and a real PIV experiment. The assessments show that it is sufficient to account for spatial variations in the mean velocity and the Reynolds stresses, and that temporal and spatial density variations (fluctuations and gradients) are of secondary importance and comparable order of magnitude. This result permits the calculation of mean pressure from PIV velocity data to be simplified, and validates the approximation of neglecting temporal and spatial density variations without access to reference pressure data.
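
    In schematic form, the Reynolds-averaging procedure recovers the mean pressure gradient from measured velocity statistics through the mean momentum balance (our rendering, neglecting viscous terms and the density fluctuations whose influence the study assesses):

        \frac{\partial \bar{p}}{\partial x_i}
        = -\bar{\rho}\left( \bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j}
        + \frac{\partial \overline{u_i' u_j'}}{\partial x_j} \right),

    which involves only the mean velocity field and the Reynolds stresses, both available from PIV; the mean pressure then follows by spatial integration.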

  13. Hard, harder, hardest: principal stratification, statistical identifiability, and the inherent difficulty of finding surrogate endpoints.

    PubMed

    Wolfson, Julian; Henn, Lisa

    2014-01-01

    In many areas of clinical investigation there is great interest in identifying and validating surrogate endpoints, biomarkers that can be measured a relatively short time after a treatment has been administered and that can reliably predict the effect of treatment on the clinical outcome of interest. However, despite dramatic advances in the ability to measure biomarkers, the recent history of clinical research is littered with failed surrogates. In this paper, we present a statistical perspective on why identifying surrogate endpoints is so difficult. We view the problem from the framework of causal inference, with a particular focus on the technique of principal stratification (PS), an approach which is appealing because the resulting estimands are not biased by unmeasured confounding. In many settings, PS estimands are not statistically identifiable and their degree of non-identifiability can be thought of as representing the statistical difficulty of assessing the surrogate value of a biomarker. In this work, we examine the identifiability issue and present key simplifying assumptions and enhanced study designs that enable the partial or full identification of PS estimands. We also present example situations where these assumptions and designs may or may not be feasible, providing insight into the problem characteristics which make the statistical evaluation of surrogate endpoints so challenging.

  14. Hard, harder, hardest: principal stratification, statistical identifiability, and the inherent difficulty of finding surrogate endpoints

    PubMed Central

    2014-01-01

    In many areas of clinical investigation there is great interest in identifying and validating surrogate endpoints, biomarkers that can be measured a relatively short time after a treatment has been administered and that can reliably predict the effect of treatment on the clinical outcome of interest. However, despite dramatic advances in the ability to measure biomarkers, the recent history of clinical research is littered with failed surrogates. In this paper, we present a statistical perspective on why identifying surrogate endpoints is so difficult. We view the problem from the framework of causal inference, with a particular focus on the technique of principal stratification (PS), an approach which is appealing because the resulting estimands are not biased by unmeasured confounding. In many settings, PS estimands are not statistically identifiable and their degree of non-identifiability can be thought of as representing the statistical difficulty of assessing the surrogate value of a biomarker. In this work, we examine the identifiability issue and present key simplifying assumptions and enhanced study designs that enable the partial or full identification of PS estimands. We also present example situations where these assumptions and designs may or may not be feasible, providing insight into the problem characteristics which make the statistical evaluation of surrogate endpoints so challenging. PMID:25342953

  15. Decision heuristic or preference? Attribute non-attendance in discrete choice problems.

    PubMed

    Heidenreich, Sebastian; Watson, Verity; Ryan, Mandy; Phimister, Euan

    2018-01-01

    This paper investigates if respondents' choice to not consider all characteristics of a multiattribute health service may represent preferences. Over the last decade, an increasing number of studies account for attribute non-attendance (ANA) when using discrete choice experiments to elicit individuals' preferences. Most studies assume such behaviour is a heuristic and therefore uninformative. This assumption may result in misleading welfare estimates if ANA reflects preferences. This is the first paper to assess if ANA is a heuristic or genuine preference without relying on respondents' self-stated motivation and the first study to explore this question within a health context. Based on findings from cognitive psychology, we expect that familiar respondents are less likely to use a decision heuristic to simplify choices than unfamiliar respondents. We employ a latent class model of discrete choice experiment data concerned with National Health Service managers' preferences for support services that assist with performance concerns. We present quantitative and qualitative evidence that in our study ANA mostly represents preferences. We also show that wrong assumptions about ANA result in inadequate welfare measures that can result in suboptimal policy advice. Future research should proceed with caution when assuming that ANA is a heuristic. Copyright © 2017 John Wiley & Sons, Ltd.

  16. Impacts of Changes of Indoor Air Pressure and Air Exchange Rate in Vapor Intrusion Scenarios.

    PubMed

    Shen, Rui; Suuberg, Eric M

    2016-02-01

    There has, in recent years, been increasing interest in understanding the transport processes of relevance in vapor intrusion of volatile organic compounds (VOCs) into buildings on contaminated sites. These studies have included fate and transport modeling. Most such models have simplified the prediction of indoor air contaminant vapor concentrations by employing a steady state assumption, which often results in difficulties in reconciling these results with field measurements. This paper focuses on two major factors that may be subject to significant transients in vapor intrusion situations, including the indoor air pressure and the air exchange rate in the subject building. A three-dimensional finite element model was employed with consideration of daily and seasonal variations in these factors. From the results, the variations of indoor air pressure and air exchange rate are seen to contribute to significant variations in indoor air contaminant vapor concentrations. Depending upon the assumptions regarding the variations in these parameters, the results are only sometimes consistent with the reports of several orders of magnitude in indoor air concentration variations from field studies. The results point to the need to examine more carefully the interplay of these factors in order to quantitatively understand the variations in potential indoor air exposures.

  17. Transient competitive complexation in biological kinetic isotope fractionation explains non-steady isotopic effects: Theory and application to denitrification in soils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maggi, F.M.; Riley, W.J.

    2009-06-01

    The theoretical formulation of biological kinetic reactions in isotopic applications often assumes first-order or Michaelis-Menten-Monod kinetics under the quasi-steady-state assumption to simplify the system kinetics. However, isotopic effects have the same order of magnitude as the potential error introduced by these simplifications. Both formulations lead to a constant fractionation factor, which may yield incorrect estimations of the isotopic effect and a misleading interpretation of the isotopic signature of a reaction. We have analyzed the isotopic signature of denitrification in the biogeochemical soil systems studied by Menyailo and Hungate [2006], where high 15N2O enrichment during N2O production and inverse isotope fractionation during N2O consumption could not be explained with first-order kinetics and the Rayleigh equation, or with quasi-steady-state Michaelis-Menten-Monod kinetics. When the quasi-steady-state assumption was relaxed, transient Michaelis-Menten-Monod kinetics accurately reproduced the observations and aided in the interpretation of experimental isotopic signatures. These results may imply a substantial revision in the use of the Rayleigh equation for the interpretation of isotopic signatures and in the modeling of biological kinetic isotope fractionation with first-order kinetics or quasi-steady-state Michaelis-Menten-Monod kinetics.
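
    For reference, the Rayleigh equation mentioned above is conventionally written (textbook form):

        \frac{R}{R_0} = f^{\,\alpha - 1}
        \quad\Longleftrightarrow\quad
        \delta \approx \delta_0 + \varepsilon \ln f,

    where $f$ is the fraction of substrate remaining, $R$ its isotope ratio, $\alpha$ the (assumed constant) fractionation factor, and $\varepsilon = (\alpha - 1) \times 1000$‰; the constancy of $\alpha$ is precisely the assumption that the transient kinetics relax.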

  18. Complex Adaptive System Models and the Genetic Analysis of Plasma HDL-Cholesterol Concentration

    PubMed Central

    Rea, Thomas J.; Brown, Christine M.; Sing, Charles F.

    2006-01-01

    Despite remarkable advances in diagnosis and therapy, ischemic heart disease (IHD) remains a leading cause of morbidity and mortality in industrialized countries. Recent efforts to estimate the influence of genetic variation on IHD risk have focused on predicting individual plasma high-density lipoprotein cholesterol (HDL-C) concentration. Plasma HDL-C concentration (mg/dl), a quantitative risk factor for IHD, has a complex multifactorial etiology that involves the actions of many genes. Single gene variations may be necessary but are not individually sufficient to predict a statistically significant increase in risk of disease. The complexity of phenotype-genotype-environment relationships involved in determining plasma HDL-C concentration has challenged commonly held assumptions about genetic causation and has led to the question of which combination of variations, in which subset of genes, in which environmental strata of a particular population significantly improves our ability to predict high or low risk phenotypes. We document the limitations of inferences from genetic research based on commonly accepted biological models, consider how evidence for real-world dynamical interactions between HDL-C determinants challenges the simplifying assumptions implicit in traditional linear statistical genetic models, and conclude by considering research options for evaluating the utility of genetic information in predicting traits with complex etiologies. PMID:17146134

  19. Two's company, three (or more) is a simplex : Algebraic-topological tools for understanding higher-order structure in neural data.

    PubMed

    Giusti, Chad; Ghrist, Robert; Bassett, Danielle S

    2016-08-01

    The language of graph theory, or network science, has proven to be an exceptional tool for addressing myriad problems in neuroscience. Yet, the use of networks is predicated on a critical simplifying assumption: that the quintessential unit of interest in a brain is a dyad - two nodes (neurons or brain regions) connected by an edge. While rarely mentioned, this fundamental assumption inherently limits the types of neural structure and function that graphs can be used to model. Here, we describe a generalization of graphs that overcomes these limitations, thereby offering a broad range of new possibilities in terms of modeling and measuring neural phenomena. Specifically, we explore the use of simplicial complexes: a structure developed in the field of mathematics known as algebraic topology, of increasing applicability to real data due to a rapidly growing computational toolset. We review the underlying mathematical formalism as well as the budding literature applying simplicial complexes to neural data, from electrophysiological recordings in animal models to hemodynamic fluctuations in humans. Based on the exceptional flexibility of the tools and recent ground-breaking insights into neural function, we posit that this framework has the potential to eclipse graph theory in unraveling the fundamental mysteries of cognition.
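
    To make the contrast with dyadic graphs concrete, the following minimal sketch (ours, not from the article) enumerates the simplices of the clique complex of a small graph, in which every (k+1)-node clique becomes a k-simplex:

        from itertools import combinations

        edges = {(0, 1), (0, 2), (1, 2), (2, 3)}   # a triangle plus a pendant edge
        nodes = {v for e in edges for v in e}

        def is_clique(subset):
            """True if every pair of nodes in the subset is joined by an edge."""
            return all((a, b) in edges or (b, a) in edges
                       for a, b in combinations(subset, 2))

        # k-simplices of the clique complex: all (k+1)-node cliques.
        for k in range(len(nodes)):
            simplices = [s for s in combinations(sorted(nodes), k + 1) if is_clique(s)]
            if simplices:
                print(f"{k}-simplices: {simplices}")

    The triangle {0, 1, 2} appears as a genuine 2-simplex, a higher-order unit that a purely dyadic edge list cannot distinguish from three independent pairwise relations.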

  20. The evolutionary interplay of intergroup conflict and altruism in humans: a review of parochial altruism theory and prospects for its extension.

    PubMed

    Rusch, Hannes

    2014-11-07

    Drawing on an idea proposed by Darwin, it has recently been hypothesized that violent intergroup conflict might have played a substantial role in the evolution of human cooperativeness and altruism. The central notion of this argument, dubbed 'parochial altruism', is that the two genetic or cultural traits, aggressiveness against the out-groups and cooperativeness towards the in-group, including self-sacrificial altruistic behaviour, might have coevolved in humans. This review assesses the explanatory power of current theories of 'parochial altruism'. After a brief synopsis of the existing literature, two pitfalls in the interpretation of the most widely used models are discussed: potential direct benefits and high relatedness between group members implicitly induced by assumptions about conflict structure and frequency. Then, a number of simplifying assumptions made in the construction of these models are pointed out which currently limit their explanatory power. Next, relevant empirical evidence from several disciplines which could guide future theoretical extensions is reviewed. Finally, selected alternative accounts of evolutionary links between intergroup conflict and intragroup cooperation are briefly discussed which could be integrated with parochial altruism in the future. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  1. An overview of self-consistent methods for fiber-reinforced composites

    NASA Technical Reports Server (NTRS)

    Gramoll, Kurt C.; Freed, Alan D.; Walker, Kevin P.

    1991-01-01

    The Walker et al. (1989) self-consistent method to predict both the elastic and the inelastic effective material properties of composites is examined and compared with the results of other self-consistent and elastically based solutions. The elastic part of their method is shown to be identical to other self-consistent methods for non-dilute reinforced composite materials; they are the Hill (1965), Budiansky (1965), and Nemat-Nasser et al. (1982) derivations. A simplified form of the non-dilute self-consistent method is also derived. The elastic effective material properties predicted for fiber-reinforced material using the Walker method were found to deviate from the elasticity solution for the ν₃₁, K₁₂, and μ₃₁ material properties (the fiber is in the 3 direction), especially at the larger volume fractions. Also, the prediction for the transverse shear modulus, μ₁₂, exceeds one of the accepted Hashin bounds. Only the longitudinal elastic modulus E₃₃ agrees with the elasticity solution. The differences between the Walker and the elasticity solutions are primarily due to the assumption used in the derivation of the self-consistent method, i.e., the strain fields in the inclusions and the matrix are assumed to remain constant, which is not a correct assumption for a high concentration of inclusions.

  2. One dimensional heavy ion beam transport: Energy independent model. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Farhat, Hamidullah

    1990-01-01

    An attempt is made to model the transport problem for heavy ion beams in various targets, employing the current level of understanding of the physics of high-charge and energy (HZE) particle interaction with matter. An energy-independent transport model, with the most simplified assumptions and proper parameters, is presented. The first and essential assumption in this case (energy-independent transport) is the high-energy characterization of the incident beam. The energy-independent equation is solved and applied to high-energy neon (Ne-20) and iron (Fe-56) beams in water. The analytical solution is given and compared to a numerical solution to determine the accuracy of the model. The lower limit energy for neon and iron to qualify as high-energy beams is calculated according to the Barkas and Burger theory using the LBLFRG computer program. The calculated values in the density range of interest (50 g/sq cm of water) are 833.43 MeV/nucleon for neon and 1597.68 MeV/nucleon for iron. The analytical solution of the energy-independent transport equation gives the flux of different collision terms. The fluxes of individual collision terms are given, and the total fluxes are shown in graphs for different thicknesses of water. The flux values are calculated with the ANASTP computer code.

  3. Scope of inextensible frame hypothesis in local action analysis of spherical reservoirs

    NASA Astrophysics Data System (ADS)

    Vinogradov, Yu. I.

    2017-05-01

    Spherical reservoirs, as objects perfect with respect to their weight, are used in spacecraft, where thin-walled elements are joined by frames into multifunction structures. The junctions are local, which results in the origination of stress concentration regions and the corresponding rigidity problems. The thin-walled elements are reinforced by frames to decrease the stresses in them. To simplify the analysis of the mathematical model of the common deformation of the shell (a mathematical idealization of the reservoir) and the frame, the assumption that the frame axial line is inextensible is widely used (in particular, in the handbook literature). The unjustified use of this assumption significantly distorts the picture of the stress-strain state. In this paper, an example of a lens-shaped structure, formed as two spherical shell segments connected by a frame of square profile, is used to carry out a numerical comparative analysis of the solutions with and without the inextensible-frame hypothesis. The scope of the hypothesis is shown depending on the geometric parameters of the structure and the degree of load localization. The obtained results can be used to determine the stress-strain state of the thin-walled structure with an a priori prescribed error, for example, in the research and experimental design of aerospace systems.

  4. 76 FR 4901 - Submission for OMB Review; OMB Control No. 3090-0292; FFATA Subaward and Executive Compensation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-27

    ... burden of this collection of information is accurate, and based on valid assumptions and methodology... assumptions and methodologies. The respondents requested that GSA and OMB publish additional information about... seen as an intelligence gathering, they recommended that OMB exempt primary recipients from having to...

  5. Marking and Moderation in the UK: False Assumptions and Wasted Resources

    ERIC Educational Resources Information Center

    Bloxham, Sue

    2009-01-01

    This article challenges a number of assumptions underlying marking of student work in British universities. It argues that, in developing rigorous moderation procedures, we have created a huge burden for markers which adds little to accuracy and reliability but creates additional work for staff, constrains assessment choices and slows down…

  6. Monitored Geologic Repository Project Description Document

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    P. M. Curry

    2001-01-30

    The primary objective of the Monitored Geologic Repository Project Description Document (PDD) is to allocate the functions, requirements, and assumptions to the systems at Level 5 of the Civilian Radioactive Waste Management System (CRWMS) architecture identified in Section 4. It provides traceability of the requirements to those contained in Section 3 of the ''Monitored Geologic Repository Requirements Document'' (MGR RD) (YMP 2000a) and other higher-level requirements documents. In addition, the PDD allocates design related assumptions to work products of non-design organizations. The document provides Monitored Geologic Repository (MGR) technical requirements in support of design and performance assessment in preparing for the Site Recommendation (SR) and License Application (LA) milestones. The technical requirements documented in the PDD are to be captured in the System Description Documents (SDDs) which address each of the systems at Level 5 of the CRWMS architecture. The design engineers obtain the technical requirements from the SDDs and by reference from the SDDs to the PDD. The design organizations and other organizations will obtain design related assumptions directly from the PDD. These organizations may establish additional assumptions for their individual activities, but such assumptions are not to conflict with the assumptions in the PDD. The PDD will serve as the primary link between the technical requirements captured in the SDDs and the design requirements captured in US Department of Energy (DOE) documents. The approved PDD is placed under Level 3 baseline control by the CRWMS Management and Operating Contractor (M and O) and the following portions of the PDD constitute the Technical Design Baseline for the MGR: the design characteristics listed in Table 1-1, the MGR Architecture (Section 4.1), the Technical Requirements (Section 5), and the Controlled Project Assumptions (Section 6).

  7. Influence of Beam Rotation on the Response of Cantilevered Flow Energy Harvesters Exploiting the Galloping Instability

    NASA Astrophysics Data System (ADS)

    Noel, James H.

    Energy harvesters are scalable devices that generate microwatt to milliwatt power levels by scavenging energy from their ambient natural environment. Applications of such devices are numerous, ranging from wireless sensing to biomedical implants. A particular type of energy harvester is a device which converts the momentum of an incident fluid flow into electrical output by using flow-induced instabilities such as galloping, flutter, vortex shedding and wake galloping. Galloping flow energy harvesters (GFEHs), which represent the core of this thesis, consist of a prismatic tip body mounted on a long, thin cantilever beam fixed on a rigid base. When the bluff body is placed such that its leading edge faces a moving fluid, the flow separates at the edges of the leading face causing shear layers to develop behind the bluff face. The shear layer interacts with the surface area of the afterbody. An asymmetric condition in the shear layers causes a net lift which incites motion. This causes the beam to oscillate periodically at or near the natural frequency of the system. The periodic strain developed near the base of the oscillating beam is then transformed into electricity by attaching a piezoelectric layer to either side of the beam surface. This thesis focuses on characterizing the influence of the rotation of the beam tip on the response and output power of GFEHs. Previous modeling efforts of GFEHs usually adopt two simplifying assumptions. First, it is assumed that the tip rotation of the beam is arbitrarily small and hence can be neglected. Second, it is assumed that the quasi-steady assumption of the aerodynamic force can be adopted even in the presence of tip rotation. Although the validity of these two assumptions becomes debatable in the presence of finite tip rotations, which commonly occur in GFEHs, none of the previous research studies has systematically addressed the influence of finite tip rotations on the validity of the quasi-steady assumption and the response of cantilevered flow energy harvesters. To this end, the first objective of this thesis is to investigate the influence of the tip rotation on the output power of energy harvesters under the quasi-steady assumption. It is shown that neglecting the tip rotation will cause significant over-prediction of the output power even for small tip rotations. This thesis further assesses the validity of the quasi-steady assumption of the aerodynamic force in the presence of tip rotations using extensive experiments. It is shown that the quasi-steady model fails to accurately predict the behavior of square and trapezoidal prisms mounted on a cantilever beam and undergoing galloping oscillations. In particular, it is shown that the quasi-steady model under-predicts the amplitude of oscillation because it fails to consider the effect of body rotation. Careful analysis of the experimental data indicates that, unlike the quasi-steady aerodynamic lift force which depends only on the angle of attack, the effective aerodynamic curve is a function of both the angle of attack and the upstream flow velocity when the effects of body rotation are included. Nonetheless, although the quasi-steady assumption fails, the remarkable result is that the overall structure of the aerodynamic model remains intact, permitting the use of aerodynamic force surfaces to capture the influence of tip rotation (a schematic form of such a quasi-steady model is sketched after this abstract).
The second objective of this thesis is to present an approach to optimize the geometry of the bluff body to improve the performance of flow energy harvesters. It is shown that attaching a splitter plate to the afterbody of the prism can improve the output power of the device by as much as 60% for some cases. By increasing the reattachment angle of the shear layer and producing additional flow recirculation bubbles, the extension of the body using the splitter plate increases the useful range of the galloping instability for energy harvesting.
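
    For context, quasi-steady galloping models of the kind scrutinized in this thesis typically express the transverse aerodynamic force as a polynomial in an effective angle of attack. A hedged sketch of how a finite tip rotation θ might enter such a model is given below; the thesis's exact formulation is not reproduced here, and the arm L_c of the tip body and the sign conventions are assumptions.

    ```latex
    % Effective angle of attack with plunge velocity \dot{y}, tip rotation
    % \theta, rotation rate \dot{\theta}, and an assumed arm L_c of the tip
    % body; sign conventions vary between formulations.
    \alpha_{\mathrm{eff}} =
      \arctan\!\left(\frac{\dot{y} + L_c\,\dot{\theta}}{U}\right) - \theta,
    \qquad
    F_y = \tfrac{1}{2}\,\rho\,U^{2} b \sum_{i \ge 1} A_i\,\alpha_{\mathrm{eff}}^{\,i}
    % Setting \theta = \dot{\theta} = 0 recovers the no-rotation quasi-steady
    % model; the A_i are empirical polynomial coefficients of the lift curve.
    ```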

  8. Upscaling NZ-DNDC using a regression based meta-model to estimate direct N2O emissions from New Zealand grazed pastures.

    PubMed

    Giltrap, Donna L; Ausseil, Anne-Gaëlle E

    2016-01-01

    The availability of detailed input data frequently limits the application of process-based models at large scale. In this study, we produced simplified meta-models of the nitrous oxide (N2O) emission factors (EF) simulated by NZ-DNDC: Monte Carlo simulations were performed and the results investigated using multiple regression analysis. These meta-models were then used to estimate direct N2O emissions from grazed pastures in New Zealand. New Zealand EF maps were generated using the meta-models with data from national scale soil maps. Direct emissions of N2O from grazed pasture were calculated by multiplying the EF map with a nitrogen (N) input map. Three meta-models were considered. Model 1 included only the soil organic carbon in the top 30 cm (SOC30), Model 2 also included a clay content factor, and Model 3 added the interaction between SOC30 and clay. The median annual national direct N2O emissions from grazed pastures estimated using each model (assuming model errors were purely random) were: 9.6 Gg N (Model 1), 13.6 Gg N (Model 2), and 11.9 Gg N (Model 3). These values corresponded to an average EF of 0.53%, 0.75% and 0.63% respectively, while the corresponding average EF using New Zealand national inventory values was 0.67%. If the model error can be assumed to be independent for each pixel, then the 95% confidence interval for the N2O emissions was of the order of ±0.4-0.7%, which is much lower than that of existing methods. However, spatial correlations in the model errors could invalidate this assumption. Under the extreme assumption that the model error for each pixel was identical, the 95% confidence interval was approximately ±100-200%. Therefore further work is needed to assess the degree of spatial correlation in the model errors. Copyright © 2015 Elsevier B.V. All rights reserved.
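
    The meta-modelling step can be pictured with a short sketch. Everything here is hypothetical: the ef_simulator stand-in, the predictor ranges, and the coefficients are invented and do not come from NZ-DNDC. Monte Carlo draws of soil predictors are passed through a simulator, and the simulated EF values are regressed on SOC30, clay, and their interaction, mirroring Models 1-3.

    ```python
    # Hypothetical sketch of Monte Carlo + multiple regression meta-modelling.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000
    soc30 = rng.uniform(20.0, 120.0, n)   # SOC in top 30 cm, assumed range
    clay = rng.uniform(5.0, 60.0, n)      # clay content (%), assumed range

    def ef_simulator(soc, cl):            # stand-in for an NZ-DNDC run
        return (0.3 + 0.004 * soc + 0.002 * cl + 0.0001 * soc * cl
                + rng.normal(0.0, 0.05, soc.shape))

    ef = ef_simulator(soc30, clay)

    # Model 1: EF ~ SOC30;  Model 2: adds clay;  Model 3: adds the interaction.
    X1 = np.column_stack([np.ones(n), soc30])
    X2 = np.column_stack([X1, clay])
    X3 = np.column_stack([X2, soc30 * clay])
    for X in (X1, X2, X3):
        beta, *_ = np.linalg.lstsq(X, ef, rcond=None)
        print(beta)   # fitted meta-model coefficients
    ```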

  9. Evaluation of rate law approximations in bottom-up kinetic models of metabolism.

    PubMed

    Du, Bin; Zielinski, Daniel C; Kavvas, Erol S; Dräger, Andreas; Tan, Justin; Zhang, Zhen; Ruggiero, Kayla E; Arzumanyan, Garri A; Palsson, Bernhard O

    2016-06-06

    The mechanistic description of enzyme kinetics in a dynamic model of metabolism requires specifying the numerical values of a large number of kinetic parameters. The parameterization challenge is often addressed through the use of simplifying approximations to form reaction rate laws with reduced numbers of parameters. Whether such simplified models can reproduce dynamic characteristics of the full system is an important question. In this work, we compared the local transient response properties of dynamic models constructed using rate laws with varying levels of approximation. These approximate rate laws were: 1) a Michaelis-Menten rate law with measured enzyme parameters, 2) a Michaelis-Menten rate law with approximated parameters, using the convenience kinetics convention, 3) a thermodynamic rate law resulting from a metabolite saturation assumption, and 4) a pure chemical reaction mass action rate law that removes the role of the enzyme from the reaction kinetics. We utilized in vivo data for the human red blood cell to compare the effect of rate law choices against the backdrop of physiological flux and concentration differences. We found that the Michaelis-Menten rate law with measured enzyme parameters yields an excellent approximation of the full system dynamics, while other assumptions cause greater discrepancies in system dynamic behavior. However, iteratively replacing mechanistic rate laws with approximations resulted in a model that retains a high correlation with the true model behavior. Investigating this consistency, we determined that the order of magnitude differences among fluxes and concentrations in the network were greatly influential on the network dynamics. We further identified reaction features such as thermodynamic reversibility, high substrate concentration, and lack of allosteric regulation, which make certain reactions more suitable for rate law approximations. Overall, our work generally supports the use of approximate rate laws when building large scale kinetic models, due to the key role that physiologically meaningful flux and concentration ranges play in determining network dynamics. However, we also showed that detailed mechanistic models show a clear benefit in prediction accuracy when data is available. The work here should help to provide guidance to future kinetic modeling efforts on the choice of rate law and parameterization approaches.
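
    The rate-law tiers compared in the paper can be sketched for a single reversible reaction S <-> P. This is a minimal sketch with invented parameter values; the paper fits such laws to red blood cell data rather than these numbers.

    ```python
    # Illustrative rate laws for S <-> P with invented constants.
    Vf, Km_s, Km_p, Keq, k = 1.0, 0.2, 0.3, 4.0, 2.0

    def michaelis_menten(S, P):
        # Reversible Michaelis-Menten (Haldane-consistent form)
        return (Vf / Km_s) * (S - P / Keq) / (1.0 + S / Km_s + P / Km_p)

    def saturation_law(S, P):
        # "Metabolite saturation" thermodynamic rate law: enzyme saturated,
        # flux controlled by the thermodynamic driving force only
        return Vf * (1.0 - (P / S) / Keq)

    def mass_action(S, P):
        # Pure chemical mass action: the enzyme's role is removed entirely
        return k * (S - P / Keq)

    for law in (michaelis_menten, saturation_law, mass_action):
        print(law.__name__, law(1.0, 0.5))
    ```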

  10. Second stop and sbottom searches with a stealth stop

    NASA Astrophysics Data System (ADS)

    Cheng, Hsin-Chia; Li, Lingfeng; Qin, Qin

    2016-11-01

    The top squarks (stops) may be the most wanted particles after the Higgs boson discovery. The searches for the lightest stop have put strong constraints on its mass. However, there is still a search gap in the low mass region if the spectrum of the stop and the lightest neutralino is compressed. In that case, it may be easier to look for the second stop since naturalness requires both stops to be close to the weak scale. The current experimental searches for the second stop are based on the simplified model approach with the decay modes $\tilde{t}_2 \to \tilde{t}_1 Z$ and $\tilde{t}_2 \to \tilde{t}_1 h$. However, in a realistic supersymmetric spectrum there is always a sbottom lighter than the second stop, hence the decay patterns are usually more complicated than the simplified model assumptions. In particular, there are often large branching ratios of the decays $\tilde{t}_2 \to \tilde{b}_1 W$ and $\tilde{b}_1 \to \tilde{t}_1 W$ as long as they are open. The decay chains can be even more complex if there are intermediate states of additional charginos and neutralinos in the decays. By studying several MSSM benchmark models at the 14 TeV LHC, we point out the importance of the multi-$W$ final states in the second stop and sbottom searches, such as the same-sign dilepton and multilepton signals, aside from the traditional search modes. The observed same-sign dilepton excesses at LHC Run 1 and Run 2 may be explained by some of our benchmark models. We also suggest that vector boson tagging and a new kinematic variable may help to suppress the backgrounds and increase the signal significance for some search channels. Due to the complex decay patterns and lack of dominant decay channels, the best reaches likely require a combination of various search channels at the LHC for the second stop and the lightest sbottom.

  11. Data-driven and hybrid coastal morphological prediction methods for mesoscale forecasting

    NASA Astrophysics Data System (ADS)

    Reeve, Dominic E.; Karunarathna, Harshinie; Pan, Shunqi; Horrillo-Caraballo, Jose M.; Różyński, Grzegorz; Ranasinghe, Roshanka

    2016-03-01

    It is now common for coastal planning to anticipate changes anywhere from 70 to 100 years into the future. The process models developed and used for scheme design or for large-scale oceanography are currently inadequate for this task. This has prompted the development of a plethora of alternative methods. Some, such as reduced-complexity or hybrid models, simplify the governing equations, retaining the processes that are considered to govern observed morphological behaviour. The computational cost of these models is low, and they have proven effective in exploring morphodynamic trends and improving our understanding of mesoscale behaviour. One drawback is that there is no generally agreed set of principles on which to base the simplifying assumptions, and predictions can vary considerably between models. An alternative approach is data-driven techniques that are based entirely on analysis and extrapolation of observations. Here, we discuss the application of some of the better-known and emerging methods in this category to argue that, with the increasing availability of observations from coastal monitoring programmes and the development of more sophisticated statistical analysis techniques, data-driven models provide a valuable addition to the armoury of methods available for mesoscale prediction. The continuation of established monitoring programmes is paramount, and those that provide contemporaneous records of the driving forces and the shoreline response are the most valuable in this regard. In the second part of the paper we discuss some recent research that combines hybrid techniques with data analysis methods in order to synthesise a more consistent means of predicting mesoscale coastal morphological evolution. While encouraging in certain applications, a universally applicable approach has yet to be found. The route to linking different model types is highlighted as a major challenge and requires further research to establish its viability. We argue that key elements of a successful solution will need to account for dependencies between driving parameters (such as wave height and tide level) and be able to predict step changes in the configuration of coastal systems.

  12. Modeling canopy-level productivity: is the "big-leaf" simplification acceptable?

    NASA Astrophysics Data System (ADS)

    Sprintsin, M.; Chen, J. M.

    2009-05-01

    The "big-leaf" approach to calculating the carbon balance of plant canopies assumes that canopy carbon fluxes have the same relative responses to the environment as any single unshaded leaf in the upper canopy. Widely used light use efficiency models are essentially simplified versions of the big-leaf model. Despite its wide acceptance, subsequent developments in the modeling of leaf photosynthesis and measurements of canopy physiology have brought into question the assumptions behind this approach showing that big leaf approximation is inadequate for simulating canopy photosynthesis because of the additional leaf internal control on carbon assimilation and because of the non-linear response of photosynthesis on leaf nitrogen and absorbed light, and changes in leaf microenvironment with canopy depth. To avoid this problem a sunlit/shaded leaf separation approach, within which the vegetation is treated as two big leaves under different illumination conditions, is gradually replacing the "big-leaf" strategy, for applications at local and regional scales. Such separation is now widely accepted as a more accurate and physiologically based approach for modeling canopy photosynthesis. Here we compare both strategies for Gross Primary Production (GPP) modeling using the Boreal Ecosystem Productivity Simulator (BEPS) at local (tower footprint) scale for different land cover types spread over North America: two broadleaf forests (Harvard, Massachusetts and Missouri Ozark, Missouri); two coniferous forests (Howland, Maine and Old Black Spruce, Saskatchewan); Lost Creek shrubland site (Wisconsin) and Mer Bleue petland (Ontario). BEPS calculates carbon fixation by scaling Farquhar's leaf biochemical model up to canopy level with stomatal conductance estimated by a modified version of the Ball-Woodrow-Berry model. The "big-leaf" approach was parameterized using derived leaf level parameters scaled up to canopy level by means of Leaf Area Index. The influence of sunlit/shaded leaf separation on GPP prediction was evaluated accounting for the degree of the deviation of 3-dimensional leaf spatial distribution from the random case. More specifically, we compared and evaluated the behavior of both models showing the advantages of sunlit/shaded leaf separation strategy over a simplified big-leaf approach. Keywords: canopy photosynthesis, leaf area index, clumping index, remote sensing.

  13. Flow to partially penetrating wells in unconfined heterogeneous aquifers: Mean head and interpretation of pumping tests

    NASA Astrophysics Data System (ADS)

    Dagan, G.; Lessoff, S. C.

    2011-06-01

    A partially penetrating well of length Lw and radius Rw starts to pump at constant discharge Qw at t = 0 from an unconfined aquifer of thickness D. The aquifer is of random and stationary conductivity characterized by KG (geometric mean), σY² (log-conductivity variance), and I and Iv (the horizontal and vertical integral scales). The flow problem is solved under a few simplifying assumptions commonly adopted in the literature for homogeneous media: Rw/Lw ≪ 1, linearization of the free-surface condition, and constant drainable porosity n. Additionally, it is assumed that Rw/I < 1 and Lw/Iv ≫ 1 (to simplify the well boundary conditions) and that a first-order approximation in σY² (extended to finite σY² on a conjectural basis) is adopted. The solution is obtained for the mean head field and the associated water table equation. The main result of the analysis is that the flow domain can be divided into three zones: (1) the neighborhood of the well, R ≪ I, where ⟨H⟩ = (Qw/LwKA)h0(R, z, tKefuv/nD), with h0 being the zero-order solution pertaining to a homogeneous and isotropic aquifer, KA being the conductivity arithmetic mean, and Kefuv being the effective vertical conductivity in mean uniform flow; (2) an exterior zone, R ⪆ I, in which ⟨H⟩ = (Qw/LwKefuh)h0(R′, z, tKefuv/nD), with Kefuh being the horizontal effective conductivity; and (3) an intermediate zone in which the solution requires a few numerical quadratures, not carried out here. The application to pumping tests reveals that identification of the aquifer parameters for homogeneous and anisotropic aquifers by commonly used methods can be applied to the drawdown measured in an observation well of length Low ≫ Iv (to ensure exchange of space and ensemble head averages) in the second zone in order to identify Kefuh, Kefuv, and n. In contrast, the use of the drawdown in the well (first zone) leads to an overestimation of Kefuh by the factor KA/Kefuh.

  14. Simultaneous inference of phylogenetic and transmission trees in infectious disease outbreaks

    PubMed Central

    2017-01-01

    Whole-genome sequencing of pathogens from host samples becomes more and more routine during infectious disease outbreaks. These data provide information on possible transmission events which can be used for further epidemiologic analyses, such as identification of risk factors for infectivity and transmission. However, the relationship between transmission events and sequence data is obscured by uncertainty arising from four largely unobserved processes: transmission, case observation, within-host pathogen dynamics and mutation. To properly resolve transmission events, these processes need to be taken into account. Recent years have seen much progress in theory and method development, but existing applications make simplifying assumptions that often break up the dependency between the four processes, or are tailored to specific datasets with matching model assumptions and code. To obtain a method with wider applicability, we have developed a novel approach to reconstruct transmission trees with sequence data. Our approach combines elementary models for transmission, case observation, within-host pathogen dynamics, and mutation, under the assumption that the outbreak is over and all cases have been observed. We use Bayesian inference with MCMC for which we have designed novel proposal steps to efficiently traverse the posterior distribution, taking account of all unobserved processes at once. This allows for efficient sampling of transmission trees from the posterior distribution, and robust estimation of consensus transmission trees. We implemented the proposed method in a new R package phybreak. The method performs well in tests of both new and published simulated data. We apply the model to five datasets on densely sampled infectious disease outbreaks, covering a wide range of epidemiological settings. Using only sampling times and sequences as data, our analyses confirmed the original results or improved on them: the more realistic infection times place more confidence in the inferred transmission trees. PMID:28545083

  15. Simultaneous inference of phylogenetic and transmission trees in infectious disease outbreaks.

    PubMed

    Klinkenberg, Don; Backer, Jantien A; Didelot, Xavier; Colijn, Caroline; Wallinga, Jacco

    2017-05-01

    Whole-genome sequencing of pathogens from host samples becomes more and more routine during infectious disease outbreaks. These data provide information on possible transmission events which can be used for further epidemiologic analyses, such as identification of risk factors for infectivity and transmission. However, the relationship between transmission events and sequence data is obscured by uncertainty arising from four largely unobserved processes: transmission, case observation, within-host pathogen dynamics and mutation. To properly resolve transmission events, these processes need to be taken into account. Recent years have seen much progress in theory and method development, but existing applications make simplifying assumptions that often break up the dependency between the four processes, or are tailored to specific datasets with matching model assumptions and code. To obtain a method with wider applicability, we have developed a novel approach to reconstruct transmission trees with sequence data. Our approach combines elementary models for transmission, case observation, within-host pathogen dynamics, and mutation, under the assumption that the outbreak is over and all cases have been observed. We use Bayesian inference with MCMC for which we have designed novel proposal steps to efficiently traverse the posterior distribution, taking account of all unobserved processes at once. This allows for efficient sampling of transmission trees from the posterior distribution, and robust estimation of consensus transmission trees. We implemented the proposed method in a new R package phybreak. The method performs well in tests of both new and published simulated data. We apply the model to five datasets on densely sampled infectious disease outbreaks, covering a wide range of epidemiological settings. Using only sampling times and sequences as data, our analyses confirmed the original results or improved on them: the more realistic infection times place more confidence in the inferred transmission trees.

  16. Sediment transport under wave groups: Relative importance between nonlinear waveshape and nonlinear boundary layer streaming

    USGS Publications Warehouse

    Yu, X.; Hsu, T.-J.; Hanes, D.M.

    2010-01-01

    Sediment transport under nonlinear waves in a predominantly sheet-flow condition is investigated using a two-phase model. Specifically, we study the relative importance of the nonlinear waveshape and nonlinear boundary layer streaming for cross-shore sand transport. Terms in the governing equations arising from the nonlinear boundary layer process are included in this one-dimensional vertical (1DV) model by simplifying the two-dimensional vertical (2DV) ensemble-averaged two-phase equations under the assumption that waves propagate without changing their form. The model is first driven by a measured time series of near-bed flow velocity due to a wave group during the SISTEX99 large wave flume experiment and validated with the measured sand concentration in the sheet-flow layer. Additional studies are then carried out including and excluding the nonlinear boundary layer terms. It is found that for the grain diameter (0.24 mm) and high-velocity-skewness wave condition considered here, the nonlinear waveshape (e.g., skewness) is the dominant mechanism causing net onshore transport, and the nonlinear boundary layer streaming effect only causes an additional 36% onshore transport. However, for conditions of relatively low wave skewness and a stronger offshore-directed current, nonlinear boundary layer streaming plays a more critical role in determining the net transport. Numerical experiments further suggest that the nonlinear boundary layer streaming effect becomes increasingly important for finer grains. When the numerical model is driven by measured near-bed flow velocity in a more realistic surf zone setting, model results suggest nonlinear boundary layer processes may nearly double the onshore transport relative to that caused by the nonlinear waveshape alone. Copyright 2010 by the American Geophysical Union.

  17. Generator localization by current source density (CSD): Implications of volume conduction and field closure at intracranial and scalp resolutions

    PubMed Central

    Tenke, Craig E.; Kayser, Jürgen

    2012-01-01

    The topographic ambiguity and reference-dependency that have plagued EEG/ERP research throughout its history are largely attributable to volume conduction, which may be concisely described by a vector form of Ohm’s Law. This biophysical relationship is common to popular algorithms that infer neuronal generators via inverse solutions. It may be further simplified as Poisson’s source equation, which identifies underlying current generators from estimates of the second spatial derivative of the field potential (Laplacian transformation). Intracranial current source density (CSD) studies have dissected the “cortical dipole” into intracortical sources and sinks, corresponding to physiologically-meaningful patterns of neuronal activity at a sublaminar resolution, much of which is locally cancelled (i.e., closed field). By virtue of the macroscopic scale of the scalp-recorded EEG, a surface Laplacian reflects the radial projections of these underlying currents, representing a unique, unambiguous measure of neuronal activity at scalp. Although the surface Laplacian requires minimal assumptions compared to complex, model-sensitive inverses, the resulting waveform topographies faithfully summarize and simplify essential constraints that must be placed on putative generators of a scalp potential topography, even if they arise from deep or partially-closed fields. CSD methods thereby provide a global empirical and biophysical context for generator localization, spanning scales from intracortical to scalp recordings. PMID:22796039
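
    The core biophysical step can be sketched in a few lines. This is a toy example: the laminar potential profile and the conductivity value are invented, not from the paper. Poisson's source equation estimates CSD as the negative second spatial derivative of the potential, scaled by conductivity.

    ```python
    # Toy CSD estimate via Poisson's source equation: csd = -sigma * d2(phi)/dz2
    import numpy as np

    z = np.linspace(0.0, 2e-3, 200)                  # depth axis (m)
    phi = 1e-3 * np.exp(-((z - 1e-3) / 2e-4) ** 2)   # toy laminar potential (V)
    sigma = 0.3                                      # conductivity (S/m), assumed

    dz = z[1] - z[0]
    csd = -sigma * np.gradient(np.gradient(phi, dz), dz)   # sources/sinks (A/m^3)
    # The sink and its flanking sources largely cancel when summed over depth,
    # the "closed field" behaviour that distant (scalp) potentials underrepresent.
    ```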

  18. A correction scheme for a simplified analytical random walk model algorithm of proton dose calculation in distal Bragg peak regions

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.
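
    A minimal sketch of the correction's central idea follows; the geometry and sigma values are invented for illustration. The single lateral Gaussian implied by the lateral-homogeneity assumption is replaced by a voxel-specific Gaussian.

    ```python
    # Toy contrast: one shared lateral Gaussian vs. voxel-specific Gaussians.
    import numpy as np

    x = np.linspace(-20.0, 20.0, 81)      # lateral voxel centres (mm)

    def gauss(x, s):
        return np.exp(-0.5 * (x / s) ** 2) / (np.sqrt(2.0 * np.pi) * s)

    sigma_single = 4.0                                  # one sigma everywhere (mm)
    sigma_voxel = np.where(np.abs(x) > 8.0, 6.0, 4.0)   # wider behind heterogeneity

    fluence_homogeneous = gauss(x, sigma_single)   # lateral-homogeneity assumption
    fluence_corrected = gauss(x, sigma_voxel)      # voxel-specific Gaussians
    ```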

  19. A SImplified method for Segregation Analysis (SISA) to determine penetrance and expression of a genetic variant in a family.

    PubMed

    Møller, Pål; Clark, Neal; Mæhle, Lovise

    2011-05-01

    A method for SImplified rapid Segregation Analysis (SISA) to assess penetrance and expression of genetic variants in pedigrees of any complexity is presented. For this purpose, the probability of recombination between the variant and the gene is taken to be zero. An assumption is that the variant of undetermined significance (VUS) is introduced into the family only once. If so, all family members in between two members demonstrated to carry the VUS are obligate carriers. Probabilities for cosegregation of disease and VUS by chance, penetrance, and expression may be calculated. SISA return values do not include person identifiers and need no explicit informed consent. There will be no ethical complications in submitting SISA return values to central databases. Values for several families may be combined. Values for a family may be updated by the contributor. SISA is used to consider penetrance whenever sequencing demonstrates a VUS in the known cancer-predisposing genes. Any family structure at hand in a genetic clinic may be used. One may include an extended lineage in a family by demonstrating the same VUS in a distant relative, thereby identifying all obligate carriers in between. Such extension is a way to escape selection biases by expanding the families outside the clusters used to select them. © 2011 Wiley-Liss, Inc.
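
    A hedged sketch of the cosegregation-by-chance bookkeeping follows. It is a common simplification consistent with the zero-recombination, single-introduction assumptions above; the paper's exact accounting of informative family members may differ.

    ```python
    # With zero recombination and a single introduction of the variant, each
    # informative meiosis between demonstrated carriers halves the probability
    # that variant and disease travel together by luck alone.
    def p_cosegregation_by_chance(n_informative_meioses: int) -> float:
        return 0.5 ** n_informative_meioses

    print(p_cosegregation_by_chance(6))   # 0.015625
    ```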

  20. Simplifying the Reuse and Interoperability of Geoscience Data Sets and Models with Semantic Metadata that is Human-Readable and Machine-actionable

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.

    2017-12-01

    Standardized, deep descriptions of digital resources (e.g. data sets, computational models, software tools and publications) make it possible to develop user-friendly software systems that assist scientists with the discovery and appropriate use of these resources. Semantic metadata makes it possible for machines to take actions on behalf of humans, such as automatically identifying the resources needed to solve a given problem, retrieving them and then automatically connecting them (despite their heterogeneity) into a functioning workflow. Standardized model metadata also helps model users to understand the important details that underpin computational models and to compare the capabilities of different models. These details include simplifying assumptions on the physics, governing equations and the numerical methods used to solve them, discretization of space (the grid) and time (the time-stepping scheme), state variables (input or output), and model configuration parameters. This kind of metadata provides a "deep description" of a computational model that goes well beyond other types of metadata (e.g. author, purpose, scientific domain, programming language, digital rights, provenance, execution) and captures the science that underpins a model. A carefully constructed, unambiguous, rules-based schema to address this problem, called the Geoscience Standard Names ontology, will be presented that utilizes Semantic Web best practices and technologies. It has also been designed to work across science domains and to be readable by both humans and machines.

  1. Highly efficient blue and warm white organic light-emitting diodes with a simplified structure

    NASA Astrophysics Data System (ADS)

    Li, Xiang-Long; Ouyang, Xinhua; Chen, Dongcheng; Cai, Xinyi; Liu, Ming; Ge, Ziyi; Cao, Yong; Su, Shi-Jian

    2016-03-01

    Two blue fluorescent emitters were utilized to construct simplified organic light-emitting diodes (OLEDs) and the remarkable difference in device performance was carefully illustrated. A maximum current efficiency of 4.84 cd A⁻¹ (corresponding to a quantum efficiency of 4.29%) with a Commission Internationale de l’Eclairage (CIE) coordinate of (0.144, 0.127) was achieved by using N,N-diphenyl-4″-(1-phenyl-1H-benzo[d]imidazol-2-yl)-[1,1′:4′,1″-terphenyl]-4-amine (BBPI) as a non-doped emission layer of the simplified blue OLEDs without carrier-transport layers. In addition, simplified fluorescent/phosphorescent (F/P) hybrid warm white OLEDs without carrier-transport layers were fabricated by utilizing BBPI as (1) the blue emitter and (2) the host of a complementary yellow phosphorescent emitter (PO-01). A maximum current efficiency of 36.8 cd A⁻¹ and a maximum power efficiency of 38.6 lm W⁻¹ were achieved as a result of efficient energy transfer from the host to the guest and good triplet exciton confinement on the phosphorescent molecules. The blue and white OLEDs are among the most efficient simplified fluorescent blue and F/P hybrid white devices, and their performance is even comparable to that of most previously reported complicated multi-layer devices with carrier-transport layers.

  2. Temporal Clustering and Sequencing in Short-Term Memory and Episodic Memory

    ERIC Educational Resources Information Center

    Farrell, Simon

    2012-01-01

    A model of short-term memory and episodic memory is presented, with the core assumptions that (a) people parse their continuous experience into episodic clusters and (b) items are clustered together in memory as episodes by binding information within an episode to a common temporal context. Along with the additional assumption that information…

  3. Maximizing Research and Development Resources: Identifying and Testing "Load-Bearing Conditions" for Educational Technology Innovations

    ERIC Educational Resources Information Center

    Iriti, Jennifer; Bickel, William; Schunn, Christian; Stein, Mary Kay

    2016-01-01

    Education innovations often have a complicated set of assumptions about the contexts in which they are implemented, which may not be explicit. Education technology innovations in particular may have additional technical and cultural assumptions. As a result, education technology research and development efforts as well as scaling efforts can be…

  4. Change-in-ratio methods for estimating population size

    USGS Publications Warehouse

    Udevitz, Mark S.; Pollock, Kenneth H.; McCullough, Dale R.; Barrett, Reginald H.

    2002-01-01

    Change-in-ratio (CIR) methods can provide an effective, low cost approach for estimating the size of wildlife populations. They rely on being able to observe changes in proportions of population subclasses that result from the removal of a known number of individuals from the population. These methods were first introduced in the 1940’s to estimate the size of populations with 2 subclasses under the assumption of equal subclass encounter probabilities. Over the next 40 years, closed population CIR models were developed to consider additional subclasses and use additional sampling periods. Models with assumptions about how encounter probabilities vary over time, rather than between subclasses, also received some attention. Recently, all of these CIR models have been shown to be special cases of a more general model. Under the general model, information from additional samples can be used to test assumptions about the encounter probabilities and to provide estimates of subclass sizes under relaxations of these assumptions. These developments have greatly extended the applicability of the methods. CIR methods are attractive because they do not require the marking of individuals, and subclass proportions often can be estimated with relatively simple sampling procedures. However, CIR methods require a carefully monitored removal of individuals from the population, and the estimates will be of poor quality unless the removals induce substantial changes in subclass proportions. In this paper, we review the state of the art for closed population estimation with CIR methods. Our emphasis is on the assumptions of CIR methods and on identifying situations where these methods are likely to be effective. We also identify some important areas for future CIR research.
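
    For the simplest setting the review covers (two subclasses and equal encounter probabilities), the classic change-in-ratio estimator fits in a few lines; the numbers in the example are invented.

    ```python
    # Classic two-subclass change-in-ratio estimate of pre-removal population
    # size: N = (r_x - p2 * r_total) / (p1 - p2), where p1 and p2 are the
    # subclass-x proportions before and after removing r_total individuals,
    # of which r_x belonged to subclass x.
    def cir_estimate(p1: float, p2: float, r_x: float, r_total: float) -> float:
        return (r_x - p2 * r_total) / (p1 - p2)

    # 60% males before, 40% after removing 300 animals of which 250 were male:
    print(cir_estimate(0.6, 0.4, 250, 300))   # 650.0
    ```

    The example is internally consistent: of 650 animals, 390 (60%) are male; after removing 250 males and 50 females, 140 of 350 (40%) remain male. The estimate degrades as p1 - p2 shrinks, which is the paper's point that removals must induce substantial changes in subclass proportions.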

  5. Simplified Interface to Complex Memory Hierarchies 1.x

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lang, Michael; Ionkov, Latchesar; Williams, Sean

    2017-02-21

    Memory systems are expected to get evermore complicated in the coming years, and it isn't clear exactly what form that complexity will take. On the software side, a simple, flexible way of identifying and working with memory pools is needed. Additionally, most developers seek code portability and do not want to learn the intricacies of complex memory. Hence, we believe that a library for interacting with complex memory systems should expose two kinds of abstraction: First, a low-level, mechanism-based interface designed for the runtime or advanced user that wants complete control, with its focus on simplified representation but with all decisions left to the caller. Second, a high-level, policy-based interface designed for ease of use for the application developer, in which we aim for best-practice decisions based on application intent. We have developed such a library, called SICM: Simplified Interface to Complex Memory.

  6. Lobatamide C: total synthesis, stereochemical assignment, preparation of simplified analogues, and V-ATPase inhibition studies.

    PubMed

    Shen, Ruichao; Lin, Cheng Ting; Bowman, Emma Jean; Bowman, Barry J; Porco, John A

    2003-07-02

    The total synthesis and stereochemical assignment of the potent antitumor macrolide lobatamide C, as well as synthesis of simplified lobatamide analogues, is reported. Cu(I)-mediated enamide formation methodology has been developed to prepare the highly unsaturated enamide side chain of the natural product and analogues. A key fragment coupling employs base-mediated esterification of a beta-hydroxy acid and a salicylate cyanomethyl ester. Three additional stereoisomers of lobatamide C have been prepared using related synthetic routes. The stereochemistry at C8, C11, and C15 of lobatamide C was assigned by comparison of stereoisomers and X-ray analysis of a crystalline derivative. Synthetic lobatamide C, stereoisomers, and simplified analogues have been evaluated for inhibition of bovine chromaffin granule membrane V-ATPase. The salicylate phenol, enamide NH, and ortho-substitution of the salicylate ester have been shown to be important for V-ATPase inhibitory activity.

  7. Simplified neutrosophic sets and their applications in multi-criteria group decision-making problems

    NASA Astrophysics Data System (ADS)

    Peng, Juan-juan; Wang, Jian-qiang; Wang, Jing; Zhang, Hong-yu; Chen, Xiao-hong

    2016-07-01

    As a variation of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete and inconsistent information that exists in the real world. Simplified neutrosophic sets (SNSs) have been proposed for the main purpose of addressing issues with a set of specific numbers. However, there are certain problems regarding the existing operations of SNSs, as well as their aggregation operators and the comparison methods. Therefore, this paper defines the novel operations of simplified neutrosophic numbers (SNNs) and develops a comparison method based on the related research of intuitionistic fuzzy numbers. On the basis of these operations and the comparison method, some SNN aggregation operators are proposed. Additionally, an approach for multi-criteria group decision-making (MCGDM) problems is explored by applying these aggregation operators. Finally, an example to illustrate the applicability of the proposed method is provided and a comparison with some other methods is made.
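
    A hedged sketch of a simplified neutrosophic weighted averaging operator, in the style common to this literature, is given below; the paper defines its own novel operations, which may differ in detail from this conventional form.

    ```python
    # Conventional-style weighted average of simplified neutrosophic numbers,
    # each a (truth, indeterminacy, falsity) triple in [0, 1].
    import numpy as np

    def snn_weighted_average(snns, weights):
        """snns: list of (T, I, F) triples; weights sum to 1."""
        T = 1.0 - np.prod([(1.0 - t) ** w for (t, _, _), w in zip(snns, weights)])
        I = np.prod([i ** w for (_, i, _), w in zip(snns, weights)])
        F = np.prod([f ** w for (_, _, f), w in zip(snns, weights)])
        return (float(T), float(I), float(F))

    print(snn_weighted_average([(0.6, 0.2, 0.3), (0.8, 0.1, 0.2)], [0.5, 0.5]))
    ```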

  8. Simplified model of mean double step (MDS) in human body movement

    NASA Astrophysics Data System (ADS)

    Dusza, Jacek J.; Wawrzyniak, Zbigniew M.; Mugarra González, C. Fernando

    In this paper we present a simplified and useful model of human body movement based on the full gait cycle description, called the Mean Double Step (MDS). It enables the parameterization and simplification of human movement. Furthermore, it allows a description of the gait cycle by providing standardized estimators to transform the gait cycle into a periodic movement process. The method of simplifying the MDS model and compressing it is also demonstrated. The simplification is achieved by reducing the number of bars of the spectrum and/or by reducing the number of samples describing the MDS, reducing both the computational burden and the data storage requirements. Our MDS model, which is applicable to the gait cycle method for examining patients, is non-invasive and provides the additional advantage of featuring a functional characterization of the relative or absolute movement of any part of the body.
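
    The compression idea can be illustrated with a toy sketch; the synthetic gait waveform and the number of retained harmonics below are arbitrary, not values from the paper. One MDS is represented by a truncated Fourier series and then resynthesized.

    ```python
    # Toy spectral compression of one mean double step (MDS).
    import numpy as np

    t = np.linspace(0.0, 1.0, 256, endpoint=False)   # one normalized MDS period
    mds = np.sin(2 * np.pi * t) + 0.3 * np.sin(6 * np.pi * t)  # toy MDS waveform

    spectrum = np.fft.rfft(mds)
    spectrum[9:] = 0.0                         # keep only the first harmonics
    mds_compressed = np.fft.irfft(spectrum, n=t.size)

    rms_error = np.sqrt(np.mean((mds - mds_compressed) ** 2))  # fidelity check
    ```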

  9. Magnetohydrodynamic and gasdynamic theories for planetary bow waves

    NASA Technical Reports Server (NTRS)

    Spreiter, John R.; Stahara, Stephen S.

    1985-01-01

    A bow wave was previously observed in the solar wind upstream of each of the first six planets. The observed properties of these bow waves and the associated plasma flows are outlined, and those features identified that can be described by a continuum magnetohydrodynamic flow theory. An account of the fundamental concepts and current status of the magnetohydrodynamic and gas dynamic theories for solar wind flow past planetary bodies is provided. This includes a critical examination of: (1) the fundamental assumptions of the theories; (2) the various simplifying approximations introduced to obtain tractable mathematical problems; (3) the limitations they impose on the results; and (4) the relationship between the results of the simpler gas dynamic-frozen field theory and the more accurate but less completely worked out magnetohydrodynamic theory. Representative results of the various theories are presented and compared.

  10. Shear viscosity in monatomic liquids: a simple mode-coupling approach

    NASA Astrophysics Data System (ADS)

    Balucani, Umberto

    The value of the shear-viscosity coefficient in fluids is controlled by the dynamical processes affecting the time decay of the associated Green-Kubo integrand, the stress autocorrelation function (SACF). These processes are investigated in monatomic liquids by means of a microscopic approach with a minimum use of phenomenological assumptions. In particular, mode-coupling effects (responsible for the presence in the SACF of a long-lasting 'tail') are accounted for by a simplified approach where the only requirement is knowledge of the structural properties. The theory readily yields quantitative predictions in its domain of validity, which comprises ordinary and moderately supercooled 'simple' liquids. The framework is applied to liquid Ar and Rb near their melting points, and quite satisfactory agreement with the simulation data is found for both the details of the SACF and the value of the shear-viscosity coefficient.
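
    For reference, the Green-Kubo relation behind this discussion is sketched below on a synthetic stress series. The temperature, volume, timestep, and the white-noise stand-in for the stress are assumed values, so no physical viscosity is implied.

    ```python
    # Green-Kubo shear viscosity: eta = V/(kB*T) * integral of the SACF.
    import numpy as np

    kB = 1.380649e-23          # Boltzmann constant (J/K)
    T, V = 90.0, 1.0e-26       # assumed temperature (K) and volume (m^3)
    dt = 1.0e-15               # sampling interval (s), assumed

    rng = np.random.default_rng(1)
    sigma_xy = rng.normal(0.0, 1.0e5, 4096)   # stand-in off-diagonal stress (Pa)

    n = sigma_xy.size
    sacf = np.correlate(sigma_xy, sigma_xy, mode="full")[n - 1:]
    sacf /= np.arange(n, 0, -1)                     # unbiased autocorrelation
    eta = (V / (kB * T)) * np.trapz(sacf, dx=dt)    # Green-Kubo integral
    # Mode-coupling effects appear as a slow long-time "tail" in the SACF,
    # which is what the simplified theory in this paper aims to capture.
    ```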

  11. Magnetohydrodynamic and gasdynamic theories for planetary bow waves

    NASA Technical Reports Server (NTRS)

    Spreiter, J. R.; Stahara, S. S.

    1983-01-01

    A bow wave was previously observed in the solar wind upstream of each of the first six planets. The observed properties of these bow waves and the associated plasma flows are outlined, and those features identified that can be described by a continuum magnetohydrodynamic flow theory. An account of the fundamental concepts and current status of the magnetohydrodynamic and gas dynamic theories for solar wind flow past planetary bodies is provided. This includes a critical examination of: (1) the fundamental assumptions of the theories; (2) the various simplifying approximations introduced to obtain tractable mathematical problems; (3) the limitations they impose on the results; and (4) the relationship between the results of the simpler gas dynamic-frozen field theory and the more accurate but less completely worked out magnetohydrodynamic theory. Representative results of the various theories are presented and compared.

  12. Improvements in GRACE Gravity Field Determination through Stochastic Observation Modeling

    NASA Astrophysics Data System (ADS)

    McCullough, C.; Bettadpur, S. V.

    2016-12-01

    Current unconstrained Release 05 GRACE gravity field solutions from the Center for Space Research (CSR RL05) assume random observation errors following an independent multivariate Gaussian distribution. This modeling of observations, a simplifying assumption, fails to account for long-period correlated errors arising from inadequacies in the background force models. Fully modeling the errors inherent in the observation equations, through the use of a full observation covariance (modeling colored noise), enables optimal combination of GPS and inter-satellite range-rate data and obviates the need for estimating kinematic empirical parameters during the solution process. Most importantly, fully modeling the observation errors drastically improves formal error estimates of the spherical harmonic coefficients, potentially enabling improved uncertainty quantification of scientific results derived from GRACE and optimizing combinations of GRACE with independent data sets and a priori constraints.
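
    The statistical point can be sketched as the step from ordinary to generalized least squares; all matrices below are toy stand-ins, not GRACE normal equations. Modeling colored noise means weighting by the inverse of a full observation covariance, which also yields more realistic formal errors.

    ```python
    # Toy generalized least squares (GLS) with a full observation covariance C.
    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.normal(size=(100, 5))                   # toy design matrix
    x_true = np.array([1.0, -0.5, 0.2, 0.0, 0.7])   # toy parameters

    t = np.arange(100)
    C = np.exp(-np.abs(t[:, None] - t[None, :]) / 10.0)  # long-period correlations
    y = A @ x_true + np.linalg.cholesky(C) @ rng.normal(size=100)

    Ci = np.linalg.inv(C)
    x_gls = np.linalg.solve(A.T @ Ci @ A, A.T @ Ci @ y)  # colored-noise (GLS) fit
    cov_gls = np.linalg.inv(A.T @ Ci @ A)                # realistic formal errors

    x_ols = np.linalg.lstsq(A, y, rcond=None)[0]         # white-noise assumption
    ```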

  13. Deflection Shape Reconstructions of a Rotating Five-blade Helicopter Rotor from TLDV Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fioretti, A.; Castellini, P.; Tomasini, E. P.

    2010-05-28

    Helicopters are aircraft that are subjected to high levels of vibration, mainly due to the spinning rotors. These are made of two or more blades attached by hinges to a central hub, which can make the dynamic behaviour difficult to study. However, they share some common dynamic properties with those expected in bladed discs, so the analytical modelling of rotors can be performed using assumptions such as the ones adopted for bladed discs. This paper presents results of a vibration study performed on a scaled helicopter rotor model which was rotating at a fixed rotational speed and excited by an air jet. A simplified analytical model of the rotor was also produced to help the identification of the vibration patterns measured using a single-point tracking-SLDV measurement method.

  14. Heat transfer evaluation in a plasma core reactor

    NASA Technical Reports Server (NTRS)

    Smith, D. E.; Smith, T. M.; Stoenescu, M. L.

    1976-01-01

    Numerical evaluations of heat transfer in a fissioning uranium plasma core reactor cavity, operating with seeded hydrogen propellant, were performed. A two-dimensional analysis is based on an assumed flow pattern and cavity wall heat exchange rate. Various iterative schemes were required by the nature of the radiative field and by the solid seed vaporization. Approximate formulations of the radiative heat flux are generally used, owing to the complexity of the solution of a rigorously formulated problem. The present work analyzes the sensitivity of the results with respect to approximations of the radiative field, geometry, seed vaporization coefficients and flow pattern. The results present temperature, heat flux, density and optical depth distributions in the reactor cavity, acceptable simplifying assumptions, and iterative schemes. The present calculations, performed in Cartesian and spherical coordinates, are applicable to the most general heat transfer problems of this type.

  15. Numeric calculation of unsteady forces over thin pointed wings in sonic flow

    NASA Technical Reports Server (NTRS)

    Kimble, K. R.; Wu, J. M.

    1975-01-01

    A fast and reasonably accurate numerical procedure is proposed for the solution of a simplified unsteady transonic equation. The approach described takes into account many of the effects of the steady flow field. The resulting accuracy is within a few per cent and can be carried out on a computer in less than one minute per case (one frequency and one mode of oscillation). The problem concerns a rigid pointed wing which performs harmonic pitching oscillations of small amplitude in a steady uniform transonic flow. Wake influence is ignored and shocks must be weak. It is shown that the method is more flexible than the transonic box method proposed by Rodemich and Andrew (1965) in that it can easily account for variable local Mach number and rather arbitrary planform so long as the basic assumptions are fulfilled.

  16. Experimental Investigation of Wind-Tunnel Interference on the Downwash Behind an Airfoil

    NASA Technical Reports Server (NTRS)

    Silverstein, Abe; Katzoff, S

    1937-01-01

    The interference of the wind-tunnel boundaries on the downwash behind an airfoil has been experimentally investigated and the results have been compared with the available theoretical results for open-throat wind tunnels. As in previous studies, the simplified theoretical treatment that assumes the test section to be an infinite free jet has been shown to be satisfactory at the lifting line. The experimental results, however, show that this assumption may lead to erroneous conclusions regarding the corrections to be applied to the downwash in the region behind the airfoil where the tail surfaces are normally located. The results of a theory based on the more accurate concept of the open-jet wind tunnel as a finite length of free jet provided with a closed exit passage are in good qualitative agreement with the experimental results.

  17. Sneaky light stop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eifert, Till; Nachman, Benjamin

    2015-02-20

    A light supersymmetric top quark partner (stop) with a mass nearly degenerate with that of the standard model (SM) top quark can evade direct searches. The precise measurement of SM top properties such as the cross-section has been suggested to give a handle for this ‘stealth stop’ scenario. We present an estimate of the potential impact a light stop may have on top quark mass measurements. The results indicate that certain light stop models may induce a bias of up to a few GeV, and that this effect can hide the shift in, and hence sensitivity from, cross-section measurements. Due to the different initial states, the size of the bias is slightly different between the LHC and the Tevatron. The studies make some simplifying assumptions for the top quark measurement technique, and are based on truth-level samples.

  18. Sneaky light stop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eifert, Till; Nachman, Benjamin

    2015-04-01

    A light supersymmetric top quark partner (stop) with a mass nearly degenerate with that of the standard model (SM) top quark can evade direct searches. The precise measurement of SM top properties such as the cross-section has been suggested to give a handle for this ‘stealth stop’ scenario. We present an estimate of the potential impact a light stop may have on top quark mass measurements. The results indicate that certain light stop models may induce a bias of up to a few GeV, and that this effect can hide the shift in, and hence sensitivity from, cross-section measurements. Due to the different initial states, the size of the bias is slightly different between the LHC and the Tevatron. The studies make some simplifying assumptions for the top quark measurement technique, and are based on truth-level samples.

  19. Electromagnetic reflection from multi-layered snow models

    NASA Technical Reports Server (NTRS)

    Linlor, W. I.; Jiracek, G. R.

    1975-01-01

    The remote sensing of snow-pack characteristics with surface installations or an airborne system could have important applications in water-resource management and flood prediction. To derive some insight into such applications, the electromagnetic response of multilayered snow models is analyzed in this paper. Normally incident plane waves at frequencies ranging from 1 MHz to 10 GHz are assumed, and amplitude reflection coefficients are calculated for models having various snow-layer combinations, including ice layers. Layers are defined by thickness, permittivity, and conductivity; the electrical parameters are constant or prescribed functions of frequency. To illustrate the effect of various layering combinations, results are given in the form of curves of amplitude reflection coefficients versus frequency for a variety of models. Under simplifying assumptions, the snow thickness and effective dielectric constant can be estimated from the variations of reflection coefficient as a function of frequency.
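
    As a rough illustration of this kind of calculation, the sketch below computes the normal-incidence amplitude reflection coefficient of a layered half-space with the standard impedance-transformation recursion; the snow, ice, and substrate parameters are hypothetical placeholders, not values from the paper.

    ```python
    # A minimal sketch (not the authors' code) of the amplitude reflection
    # coefficient for a normally incident plane wave on a layered
    # half-space, using the standard impedance-transformation recursion.
    # Layer values below are hypothetical.
    import numpy as np

    eps0 = 8.854e-12
    mu0 = 4e-7 * np.pi

    def reflection(freq_hz, layers, eps_sub=6.0, sig_sub=1e-3):
        """layers: list of (thickness_m, eps_r, sigma_S_per_m), top first."""
        w = 2 * np.pi * freq_hz
        # start from the substrate (e.g., wet ground) half-space impedance
        Z = np.sqrt(1j * w * mu0 / (sig_sub + 1j * w * eps0 * eps_sub))
        # transform the impedance up through the stack, bottom layer first
        for d, eps_r, sig in reversed(layers):
            gamma = np.sqrt(1j * w * mu0 * (sig + 1j * w * eps0 * eps_r))
            eta = np.sqrt(1j * w * mu0 / (sig + 1j * w * eps0 * eps_r))
            t = np.tanh(gamma * d)
            Z = eta * (Z + eta * t) / (eta + Z * t)
        eta_air = np.sqrt(mu0 / eps0)
        return abs((Z - eta_air) / (Z + eta_air))

    snow = [(0.5, 1.8, 1e-5), (0.02, 3.2, 1e-5), (1.0, 2.0, 1e-5)]  # snow/ice/snow
    for f in (1e6, 1e8, 1e9, 1e10):
        print(f, reflection(f, snow))
    ```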

  20. Modeling synchronous voltage source converters in transmission system planning studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosterev, D.N.

    1997-04-01

    A Voltage Source Converter (VSC) can be beneficial to power utilities in many ways. To evaluate the VSC performance in potential applications, the device has to be represented appropriately in planning studies. This paper addresses VSC modeling for EMTP, powerflow, and transient stability studies. First, the VSC operating principles are overviewed, and the device model for EMTP studies is presented. The ratings of VSC components are discussed, and the device operating characteristics are derived based on these ratings. A powerflow model is presented and various control modes are proposed. A detailed stability model is developed, and its step-by-step initialization procedure is described. A simplified stability model is also derived under stated assumptions. Finally, validation studies are performed to demonstrate the performance of the developed stability models and to compare them with EMTP simulations.

  1. Prediction of interior noise due to random acoustic or turbulent boundary layer excitation using statistical energy analysis

    NASA Technical Reports Server (NTRS)

    Grosveld, Ferdinand W.

    1990-01-01

    The feasibility of predicting interior noise due to random acoustic or turbulent boundary layer excitation was investigated in experiments in which a statistical energy analysis model (VAPEPS) was used to analyze measurements of the acceleration response and sound transmission of flat aluminum, lucite, and graphite/epoxy plates exposed to random acoustic or turbulent boundary layer excitation. The noise reduction of the plate, when backed by a shallow cavity and excited by a turbulent boundary layer, was predicted using a simplified theory based on the assumption of adiabatic compression of the fluid in the cavity. The predicted plate acceleration response was used as input in the noise reduction prediction. Reasonable agreement was found between the predictions and the measured noise reduction in the frequency range 315-1000 Hz.

  2. Reducing junk radiation and eccentricity in binary-black-hole initial data

    NASA Astrophysics Data System (ADS)

    Lovelace, Geoffrey; Pfeiffer, Harald; Brown, Duncan; Lindblom, Lee; Scheel, Mark; Kidder, Lawrence

    2007-04-01

    Numerical simulations of binary-black-hole (BBH) collisions require initial data that satisfy the Einstein constraint equations. Several well-known methods generate constraint-satisfying BBH data, but the commonly-used simplifying assumptions lead to undesirable effects. BBH data typically assume a conformally flat spatial metric; this leads to an initial pulse of unphysical ``junk'' gravitational radiation. Also, the initial radial velocity of the holes is often neglected; this can lead to significant eccentricity in the holes' trajectories. This talk will discuss efforts to reduce these effects by constructing and evolving generalizations of the BBH initial data of Cook and Pfeiffer (2004). By giving the holes a small radial velocity, the eccentricity can be greatly reduced (although the emitted waves are largely unaffected). The junk radiation for flat and non-flat conformal metrics will also be compared.

  3. Improved Temperature Dynamic Model of Turbine Subcomponents for Facilitation of Generalized Tip Clearance Control

    NASA Technical Reports Server (NTRS)

    Kypuros, Javier A.; Colson, Rodrigo; Munoz, Afredo

    2004-01-01

    This paper describes efforts conducted to improve dynamic temperature estimations of a turbine tip clearance system to facilitate design of a generalized tip clearance controller. This work builds upon previously conducted research and focuses primarily on improving dynamic temperature estimations of the primary components affecting tip clearance (i.e. the rotor, blades, and casing/shroud). The temperature profiles estimated by the previous model iteration, specifically for the rotor and blades, were found to be inaccurate and, more importantly, insufficient to facilitate controller design. Some assumptions made to facilitate the previous results were not valid, and thus improvements are presented here to better match the physical reality. As will be shown, the improved temperature sub-models match a commercially validated model and are sufficiently simplified to aid in controller design.

  4. Simulation of biochemical reactions with time-dependent rates by the rejection-based algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thanh, Vo Hong, E-mail: vo@cosbi.eu; Priami, Corrado, E-mail: priami@cosbi.eu; Department of Mathematics, University of Trento, Trento

    We address the problem of simulating biochemical reaction networks with time-dependent rates and propose a new algorithm based on our rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)]. The computation for selecting next reaction firings by our time-dependent RSSA (tRSSA) is computationally efficient. Furthermore, the generated trajectory is exact by exploiting the rejection-based mechanism. We benchmark tRSSA on different biological systems with varying forms of reaction rates to demonstrate its applicability and efficiency. We reveal that for nontrivial cases, the selection of reaction firings in existing algorithms introduces approximations because the integration of reaction rates is very computationally demanding and simplifying assumptions are introduced. The selection of the next reaction firing by our approach is easier while preserving the exactness.
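
    The rejection mechanism can be illustrated with the classic thinning idea: candidate firing times are drawn from a constant bound on the propensity and accepted with probability a(t)/a_max, so no integral of a(t) is ever evaluated. Below is a minimal sketch of that principle, not the authors' full tRSSA implementation.

    ```python
    # A simplified illustration (not the exact tRSSA of Thanh & Priami) of
    # the rejection idea: sample the next firing of a reaction with
    # time-dependent propensity a(t) by thinning a Poisson process with a
    # constant upper bound a_max, avoiding any integration of a(t).
    import math, random

    def next_firing(a, a_max, t, t_end):
        """Exact next firing time of a process with rate a(t) <= a_max."""
        while True:
            t += random.expovariate(a_max)      # candidate from the bound
            if t >= t_end:
                return None                     # no firing before t_end
            if random.random() <= a(t) / a_max:  # accept with prob a(t)/a_max
                return t                        # exact sample, no integrals

    # example: sinusoidally modulated propensity, a(t) = 2 + sin(t) <= 3
    random.seed(1)
    a = lambda t: 2.0 + math.sin(t)
    print(next_firing(a, 3.0, t=0.0, t_end=10.0))
    ```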

  5. GENERALIZED VISCOPLASTIC MODELING OF DEBRIS FLOW.

    USGS Publications Warehouse

    Chen, Cheng-lung

    1988-01-01

    The earliest model developed by R. A. Bagnold was based on the concept of the 'dispersive' pressure generated by grain collisions. Some efforts have recently been made by theoreticians in non-Newtonian fluid mechanics to modify or improve Bagnold's concept or model. A viable rheological model should consist of both a rate-independent part and a rate-dependent part. A generalized viscoplastic fluid (GVF) model that has both parts as well as two major rheological properties (i.e., the normal stress effect and the soil yield criterion) is shown to be sufficiently accurate, yet practical for general use in debris-flow modeling. In fact, Bagnold's model is found to be only a particular case of the GVF model. Analytical solutions for (steady) uniform debris flows in wide channels are obtained from the GVF model based on Bagnold's simplifying assumption of constant grain concentration.

  6. Transport Phenomena During Equiaxed Solidification of Alloys

    NASA Technical Reports Server (NTRS)

    Beckermann, C.; deGroh, H. C., III

    1997-01-01

    Recent progress in modeling of transport phenomena during dendritic alloy solidification is reviewed. Starting from the basic theorems of volume averaging, a general multiphase modeling framework is outlined. This framework allows for the incorporation of a variety of microscale phenomena in the macroscopic transport equations. For the case of diffusion dominated solidification, a simplified set of model equations is examined in detail and validated through comparisons with numerous experimental data for both columnar and equiaxed dendritic growth. This provides a critical assessment of the various model assumptions. Models that include melt flow and solid phase transport are also discussed, although their validation is still at an early stage. Several numerical results are presented that illustrate some of the profound effects of convective transport on the final compositional and structural characteristics of a solidified part. Important issues that deserve continuing attention are identified.

  7. Measurement of toroidal vessel eddy current during plasma disruption on J-TEXT.

    PubMed

    Liu, L J; Yu, K X; Zhang, M; Zhuang, G; Li, X; Yuan, T; Rao, B; Zhao, Q

    2016-01-01

    In this paper, we have employed a thin, printed circuit board eddy current array in order to determine the radial distribution of the azimuthal component of the eddy current density at the surface of a steel plate. The eddy current in the steel plate can be calculated by analytical methods under the simplifying assumptions that the steel plate is infinitely large and the exciting current is of uniform distribution. The measurement on the steel plate shows that this method has high spatial resolution. Then, we extended this methodology to a toroidal geometry with the objective of determining the poloidal distribution of the toroidal component of the eddy current density associated with plasma disruption in a fusion reactor called J-TEXT. The preliminary measured result is consistent with the analysis and calculation results on the J-TEXT vacuum vessel.

  8. The unstaggered extension to GFDL's FV3 dynamical core on the cubed-sphere

    NASA Astrophysics Data System (ADS)

    Chen, X.; Lin, S. J.; Harris, L.

    2017-12-01

    Finite-volume schemes have become popular for atmospheric transport since they provide intrinsic mass conservation to constituent species. Many CFD codes use unstaggered discretizations for finite volume methods with an approximate Riemann solver. However, this approach is inefficient for geophysical flows due to the complexity of the Riemann solver. We introduce a Low Mach number Approximate Riemann Solver (LMARS), simplified using assumptions appropriate for atmospheric flows: wind speeds much slower than the sound speed, weak discontinuities, and a locally uniform sound speed. LMARS makes possible a Riemann-solver-based dynamical core comparable in computational efficiency to many current dynamical cores. We will present a 3D finite-volume dynamical core using LMARS in a cubed-sphere geometry with a vertically Lagrangian discretization. Results from standard idealized test cases will be discussed.
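
    A minimal sketch of the kind of interface state such a solver uses is given below: under the stated assumptions, linearized acoustics yields closed-form interface pressure and velocity with no iteration. Variable names and values are ours, and this is a sketch of the general low-Mach idea, not the FV3 implementation.

    ```python
    # Sketch of a low-Mach-style approximate Riemann interface state:
    # with weak discontinuities and a locally uniform sound speed c, the
    # interface pressure and velocity follow from linearized acoustics,
    # so no iterative Riemann solve is needed.
    def lmars_interface(rho_l, u_l, p_l, rho_r, u_r, p_r, c):
        rho_bar = 0.5 * (rho_l + rho_r)
        p_star = 0.5 * (p_l + p_r) - 0.5 * rho_bar * c * (u_r - u_l)
        u_star = 0.5 * (u_l + u_r) - (p_r - p_l) / (2.0 * rho_bar * c)
        return u_star, p_star  # upwind the advected quantities with u_star

    # example: near-uniform atmospheric state, c ~ 340 m/s
    print(lmars_interface(1.20, 5.0, 101325.0, 1.19, 4.8, 101300.0, 340.0))
    ```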

  9. Camera System MTF: combining optic with detector

    NASA Astrophysics Data System (ADS)

    Andersen, Torben B.; Granger, Zachary A.

    2017-08-01

    MTF is one of the most common metrics used to quantify the resolving power of an optical component. Extensive literature is dedicated to describing methods to calculate the Modulation Transfer Function (MTF) for stand-alone optical components such as a camera lens or telescope, and some literature addresses approaches to determine an MTF for the combination of an optic with a detector. The formulations pertaining to a combined electro-optical system MTF are mostly based on theory and on the assumption that the detector MTF is described only by the pixel pitch, which does not account for wavelength dependencies. When working with real hardware, detectors are often characterized by testing MTF at discrete wavelengths. This paper presents a method to simplify the calculation of a polychromatic system MTF when it is permissible to consider the detector MTF to be independent of wavelength.
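
    Under that permissible assumption, the detector MTF factors out of the polychromatic sum, which is the simplification the method rests on. The sketch below illustrates this factorization with a hypothetical lens MTF and the usual pixel-aperture sinc model; none of the values come from the paper.

    ```python
    # Illustration: if the detector MTF is (approximately) wavelength
    # independent, it factors out of the polychromatic sum, so
    # MTF_system = (weighted polychromatic lens MTF) * (detector MTF).
    # All values are hypothetical.
    import numpy as np

    nu = np.linspace(0.0, 50.0, 6)         # spatial frequency, cyc/mm
    pitch = 0.010                          # pixel pitch, mm (10 um)
    mtf_det = np.abs(np.sinc(nu * pitch))  # np.sinc(x) = sin(pi x)/(pi x)

    wavelengths = [0.48, 0.55, 0.63]       # microns
    weights = [0.25, 0.50, 0.25]           # spectral weighting, sums to 1

    def mtf_lens(nu, lam):                 # placeholder diffraction-like falloff
        cutoff = 180.0 / lam               # hypothetical cutoff, cyc/mm
        return np.clip(1.0 - nu / cutoff, 0.0, 1.0)

    mtf_poly_lens = sum(w * mtf_lens(nu, lam)
                        for w, lam in zip(weights, wavelengths))
    mtf_system = mtf_poly_lens * mtf_det
    print(mtf_system)
    ```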

  10. Analytic solutions for Long's equation and its generalization

    NASA Astrophysics Data System (ADS)

    Humi, Mayer

    2017-12-01

    Two-dimensional, steady-state, stratified, isothermal atmospheric flow over topography is governed by Long's equation. Numerical solutions of this equation were derived and used by several authors. In particular, these solutions were applied extensively to analyze the experimental observations of gravity waves. In the first part of this paper we derive an extension of this equation to non-isothermal flows. Then we devise a transformation that simplifies this equation. We show that this simplified equation admits solitonic-type solutions in addition to regular gravity waves. These new analytical solutions provide new insights into the propagation and amplitude of gravity waves over topography.

  11. Lift Recovery for AFC-Enabled High Lift System

    NASA Technical Reports Server (NTRS)

    Shmilovich, Arvin; Yadlin, Yoram; Dickey, Eric D.; Gissen, Abraham N.; Whalen, Edward A.

    2017-01-01

    This project is a continuation of the NASA AFC-Enabled Simplified High-Lift System Integration Study contract (NNL10AA05B) performed by Boeing under the Fixed Wing Project. This task is motivated by the simplified high-lift system, which is advantageous due to the simpler mechanical system, reduced actuation power and lower maintenance costs. Additionally, the removal of the flap track fairings associated with conventional high-lift systems renders a more efficient aerodynamic configuration. Potentially, these benefits translate to an approximately 2.25% net reduction in fuel burn for a twin-engine, long-range airplane.

  12. Outline of cost-benefit analysis and a case study

    NASA Technical Reports Server (NTRS)

    Kellizy, A.

    1978-01-01

    The methodology of cost-benefit analysis is reviewed and a case study involving solar cell technology is presented. Emphasis is placed on simplifying the technique in order to permit a technical person not trained in economics to undertake a cost-benefit study comparing alternative approaches to a given problem. The role of economic analysis in management decision making is discussed. In simplifying the methodology it was necessary to restrict the scope and applicability of this report. Additional considerations and constraints are outlined. Examples are worked out to demonstrate the principles. A computer program which performs the computational aspects appears in the appendix.
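
    The computational core of such an analysis reduces to discounting cost and benefit streams and comparing the alternatives. The toy sketch below (all numbers invented) shows a benefit-cost ratio comparison of two hypothetical approaches; it stands in for, and is not, the program in the report's appendix.

    ```python
    # A toy illustration, in the spirit of a simplified cost-benefit
    # methodology: discount each alternative's cost and benefit streams to
    # present value and compare benefit-cost ratios. All numbers invented.
    def npv(cashflows, rate):
        """Net present value of a yearly cashflow list, year 0 first."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    rate = 0.07                                   # hypothetical discount rate
    costs_a, benefits_a = [100, 10, 10, 10], [0, 45, 45, 45]
    costs_b, benefits_b = [60, 25, 25, 25], [0, 40, 40, 40]

    for name, c, b in [("A", costs_a, benefits_a), ("B", costs_b, benefits_b)]:
        print(name, "B/C ratio:", round(npv(b, rate) / npv(c, rate), 3))
    ```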

  13. Simplified aerodynamic analysis of the cyclogiro rotating wing system

    NASA Technical Reports Server (NTRS)

    Wheatley, John B

    1930-01-01

    A simplified aerodynamic theory of the cyclogiro rotating wing is presented herein. In addition, examples have been calculated showing the effect on the rotor characteristics of varying the design parameters of the rotor. A performance prediction, on the basis of the theory here developed, is appended, showing the performance to be expected of a machine employing this system of sustentation. The aerodynamic principles of the cyclogiro are sound; hovering flight, vertical climb, and a reasonable forward speed may be obtained with a normal expenditure of power. Autorotation in a gliding descent is available in the event of a power-plant failure.

  14. Impact and Cost-effectiveness of 3 Doses of 9-Valent Human Papillomavirus (HPV) Vaccine Among US Females Previously Vaccinated With 4-Valent HPV Vaccine.

    PubMed

    Chesson, Harrell W; Laprise, Jean-François; Brisson, Marc; Markowitz, Lauri E

    2016-06-01

    We estimated the potential impact and cost-effectiveness of providing 3 doses of nonavalent human papillomavirus (HPV) vaccine (9vHPV) to females aged 13-18 years who had previously completed a series of quadrivalent HPV vaccine (4vHPV), a strategy we refer to as "additional 9vHPV vaccination." We used 2 distinct models: (1) the simplified model, which is among the most basic of the published dynamic HPV models, and (2) the US HPV-ADVISE model, a complex, stochastic, individual-based transmission-dynamic model. When assuming no 4vHPV cross-protection, the incremental cost per quality-adjusted life-year (QALY) gained by additional 9vHPV vaccination was $146 200 in the simplified model and $108 200 in the US HPV-ADVISE model ($191 800 when assuming 4vHPV cross-protection). In 1-way sensitivity analyses in the scenario of no 4vHPV cross-protection, the simplified model results ranged from $70 300 to $182 000, and the US HPV-ADVISE model results ranged from $97 600 to $118 900. The average cost per QALY gained by additional 9vHPV vaccination exceeded $100 000 in both models. However, the results varied considerably in sensitivity and uncertainty analyses. Additional 9vHPV vaccination is likely not as efficient as many other potential HPV vaccination strategies, such as increasing primary 9vHPV vaccine coverage. Published by Oxford University Press for the Infectious Diseases Society of America 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.
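
    The headline metric here is the incremental cost-effectiveness ratio, i.e., the difference in cost divided by the difference in QALYs between strategies. The snippet below is a hypothetical worked example chosen only to land near the simplified model's $146 200 figure; the inputs are not from the paper.

    ```python
    # A hypothetical worked example of the headline metric: the incremental
    # cost-effectiveness ratio (ICER) is the incremental cost divided by
    # the incremental QALYs. Inputs are invented, chosen only so the ratio
    # lands near the simplified model's $146 200/QALY.
    delta_cost = 73.1e6      # hypothetical incremental program cost, $
    delta_qalys = 500.0      # hypothetical incremental QALYs gained
    icer = delta_cost / delta_qalys
    print(f"ICER = ${icer:,.0f} per QALY gained")  # ICER = $146,200 per QALY gained
    ```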

  15. Managing heteroscedasticity in general linear models.

    PubMed

    Rosopa, Patrick J; Schaffer, Meline M; Schroeder, Amber N

    2013-09-01

    Heteroscedasticity refers to a phenomenon where data violate a statistical assumption. This assumption is known as homoscedasticity. When the homoscedasticity assumption is violated, this can lead to increased Type I error rates or decreased statistical power. Because this can adversely affect substantive conclusions, the failure to detect and manage heteroscedasticity could have serious implications for theory, research, and practice. In addition, heteroscedasticity is not uncommon in the behavioral and social sciences. Thus, in the current article, we synthesize extant literature in applied psychology, econometrics, quantitative psychology, and statistics, and we offer recommendations for researchers and practitioners regarding available procedures for detecting heteroscedasticity and mitigating its effects. In addition to discussing the strengths and weaknesses of various procedures and comparing them in terms of existing simulation results, we describe a 3-step data-analytic process for detecting and managing heteroscedasticity: (a) fitting a model based on theory and saving residuals, (b) the analysis of residuals, and (c) statistical inferences (e.g., hypothesis tests and confidence intervals) involving parameter estimates. We also demonstrate this data-analytic process using an illustrative example. Overall, detecting violations of the homoscedasticity assumption and mitigating its biasing effects can strengthen the validity of inferences from behavioral and social science data.
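
    A brief sketch of this 3-step process using common Python tooling (our choice of tools, not the authors') might look as follows: fit a theory-based model and save residuals, test the residuals for heteroscedasticity, and, if it is detected, report heteroscedasticity-consistent standard errors.

    ```python
    # Sketch of the 3-step process on synthetic data whose error variance
    # grows with the predictor: (a) fit a model and save residuals,
    # (b) analyze residuals (Breusch-Pagan test), (c) draw inferences with
    # heteroscedasticity-consistent (HC3) standard errors.
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.diagnostic import het_breuschpagan

    rng = np.random.default_rng(42)
    x = rng.uniform(0, 10, 300)
    y = 2.0 + 0.5 * x + rng.normal(scale=0.3 * x)  # variance grows with x

    X = sm.add_constant(x)
    fit = sm.OLS(y, X).fit()                        # (a) fit, save residuals
    lm_stat, lm_pval, f_stat, f_pval = het_breuschpagan(fit.resid, X)  # (b)
    print("Breusch-Pagan p-value:", lm_pval)

    robust = sm.OLS(y, X).fit(cov_type="HC3")       # (c) robust inference
    print(robust.summary().tables[1])
    ```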

  16. Patient perspectives on de-simplifying their single-tablet co-formulated antiretroviral therapy for societal cost savings.

    PubMed

    Krentz, H B; Campbell, S; Gill, V C; Gill, M J

    2018-04-01

    The incremental costs of expanding antiretroviral (ARV) drug treatment to all HIV-infected patients are substantial, so cost-saving initiatives are important. Our objectives were to determine the acceptability and financial impact of de-simplifying (i.e. switching) more expensive single-tablet formulations (STFs) to less expensive generic-based multi-tablet components. We determined physician and patient perceptions and acceptance of STF de-simplification within the context of a publicly funded ARV budget. Programme costs were calculated for patients on ARVs followed at the Southern Alberta Clinic, Canada during 2016 (Cdn$). We focused on patients receiving Triumeq® and determined the savings if patients de-simplified to eligible generic co-formulations. We surveyed all prescribing physicians and a convenience sample of patients taking Triumeq® to see if, for budgetary purposes, they felt that de-simplification would be acceptable. Of 1780 patients receiving ARVs, 62% (n = 1038) were on STF; 58% (n = 607) of patients on STF were on Triumeq®. The total annual cost of ARVs was $26 222 760. The cost for Triumeq® was $8 292 600. If every patient on Triumeq® switched to generic abacavir/lamivudine and Tivicay® (dolutegravir), total costs would decrease by $4 325 040. All physicians (n = 13) felt that de-simplifying could be safely achieved. Forty-eight per cent of 221 patients surveyed were agreeable to de-simplifying for altruistic reasons, 27% said no, and 25% said maybe. De-simplifying Triumeq® generates large cost savings. Additional savings could be achieved by de-simplifying other STFs. Both physicians and patients agreed that selective de-simplification was acceptable; however, it may not be acceptable to every patient. Monitoring the medical and cost impacts of de-simplification strategies seems warranted. © 2018 British HIV Association.

  17. A New Strategy in Observer Modeling for Greenhouse Cucumber Seedling Growth

    PubMed Central

    Qiu, Quan; Zheng, Chenfei; Wang, Wenping; Qiao, Xiaojun; Bai, He; Yu, Jingquan; Shi, Kai

    2017-01-01

    The state observer is an essential component in computerized control loops for greenhouse-crop systems. However, the current accomplishments of observer modeling for greenhouse-crop systems mainly focus on mass/energy balance, ignoring physiological responses of crops. As a result, state observers for crop physiological responses are rarely developed, and control operations are typically made based on experience rather than actual crop requirements. In addition, existing observer models require a large number of parameters, leading to heavy computational load and poor application feasibility. To address these problems, we present a new state observer modeling strategy that takes both environmental information and crop physiological responses into consideration during the observer modeling process. Using greenhouse cucumber seedlings as an instance, we sample 10 physiological parameters of cucumber seedlings at different time points during the exponential growth stage, and employ them to build growth state observers together with 8 environmental parameters. Support vector machine (SVM) acts as the mathematical tool for observer modeling. Canonical correlation analysis (CCA) is used to select the dominant environmental and physiological parameters in the modeling process. With the dominant parameters, simplified observer models are built and tested. We conduct contrast experiments with different input parameter combinations on simplified and un-simplified observers. Experimental results indicate that physiological information can improve the prediction accuracies of the growth state observers. Furthermore, the simplified observer models can give equivalent or even better performance than the un-simplified ones, which verifies the feasibility of CCA. The current study can enable state observers to reflect crop requirements and make them feasible for applications with simplified shapes, which is significant for developing intelligent greenhouse control systems for modern greenhouse production. PMID:28848565

  18. A New Strategy in Observer Modeling for Greenhouse Cucumber Seedling Growth.

    PubMed

    Qiu, Quan; Zheng, Chenfei; Wang, Wenping; Qiao, Xiaojun; Bai, He; Yu, Jingquan; Shi, Kai

    2017-01-01

    The state observer is an essential component in computerized control loops for greenhouse-crop systems. However, the current accomplishments of observer modeling for greenhouse-crop systems mainly focus on mass/energy balance, ignoring physiological responses of crops. As a result, state observers for crop physiological responses are rarely developed, and control operations are typically made based on experience rather than actual crop requirements. In addition, existing observer models require a large number of parameters, leading to heavy computational load and poor application feasibility. To address these problems, we present a new state observer modeling strategy that takes both environmental information and crop physiological responses into consideration during the observer modeling process. Using greenhouse cucumber seedlings as an instance, we sample 10 physiological parameters of cucumber seedlings at different time points during the exponential growth stage, and employ them to build growth state observers together with 8 environmental parameters. Support vector machine (SVM) acts as the mathematical tool for observer modeling. Canonical correlation analysis (CCA) is used to select the dominant environmental and physiological parameters in the modeling process. With the dominant parameters, simplified observer models are built and tested. We conduct contrast experiments with different input parameter combinations on simplified and un-simplified observers. Experimental results indicate that physiological information can improve the prediction accuracies of the growth state observers. Furthermore, the simplified observer models can give equivalent or even better performance than the un-simplified ones, which verifies the feasibility of CCA. The current study can enable state observers to reflect crop requirements and make them feasible for applications with simplified shapes, which is significant for developing intelligent greenhouse control systems for modern greenhouse production.

  19. Simplified efficient phosphorescent organic light-emitting diodes by organic vapor phase deposition

    NASA Astrophysics Data System (ADS)

    Pfeiffer, P.; Beckmann, C.; Stümmler, D.; Sanders, S.; Simkus, G.; Heuken, M.; Vescan, A.; Kalisch, H.

    2017-12-01

    The most efficient phosphorescent organic light-emitting diodes (OLEDs) are comprised of complex stacks with numerous organic layers. State-of-the-art phosphorescent OLEDs make use of blocking layers to confine charge carriers and excitons. On the other hand, simplified OLEDs consisting of only three organic materials have shown unexpectedly high efficiency when first introduced. This was attributed to superior energy level matching and suppressed external quantum efficiency (EQE) roll-off. In this work, we study simplified OLED stacks, manufactured by organic vapor phase deposition, with a focus on charge balance, turn-on voltage (Von), and efficiency. To prevent electrons from leaking through the device, we implemented a compositionally graded emission layer. By grading the emitter with the hole transport material, charge confinement is enabled without additional blocking layers. Our best performing organic stack is composed of only three organic materials in two layers including the emitter Ir(ppy)3 and yields a Von of 2.5 V (>1 cd/m2) and an EQE of 13% at 3000 cd/m2 without the use of any additional light extraction techniques. Changes in the charge balance, due to barrier tuning or adjustments in the grading parameters and layer thicknesses, are clearly visible in the current density-voltage-luminance (J-V-L) measurements. As charge injection at the electrodes and organic interfaces is of great interest but difficult to investigate in complex device structures, we believe that our simplified organic stack is not only a potent alternative to complex state-of-the-art OLEDs but also a well suited test vehicle for experimental studies focusing on the modification of the electrode-organic semiconductor interface.

  20. Improvements to Fidelity, Generation and Implementation of Physics-Based Lithium-Ion Reduced-Order Models

    NASA Astrophysics Data System (ADS)

    Rodriguez Marco, Albert

    Battery management systems (BMS) require computationally simple but highly accurate models of the battery cells they are monitoring and controlling. Historically, empirical equivalent-circuit models have been used, but increasingly researchers are focusing their attention on physics-based models due to their greater predictive capabilities. These models are of high intrinsic computational complexity and so must undergo some kind of order-reduction process to make their use by a BMS feasible: we favor methods based on a transfer-function approach to battery cell dynamics. In prior works, transfer functions have been found from full-order PDE models via two simplifying assumptions: (1) a linearization assumption--which is a fundamental necessity in order to make transfer functions--and (2) an assumption made out of expedience that decouples the electrolyte-potential and electrolyte-concentration PDEs in order to render an approach to solve for the transfer functions from the PDEs. This dissertation improves the fidelity of physics-based models by eliminating the need for the second assumption and by linearizing nonlinear dynamics around different constant currents. Electrochemical transfer functions are infinite-order and cannot be expressed as a ratio of polynomials in the Laplace variable s. Thus, for practical use, these systems need to be approximated using reduced-order models that capture the most significant dynamics. This dissertation improves the generation of physics-based reduced-order models by introducing different realization algorithms, which produce a low-order model from the infinite-order electrochemical transfer functions. Physics-based reduced-order models are linear and describe cell dynamics if operated near the setpoint at which they have been generated. Hence, multiple physics-based reduced-order models need to be generated at different setpoints (i.e., state-of-charge, temperature and C-rate) in order to extend the cell operating range. This dissertation improves the implementation of physics-based reduced-order models by introducing different blending approaches that combine the pre-computed models generated (offline) at different setpoints in order to produce good electrochemical estimates (online) along the cell state-of-charge, temperature and C-rate range.

  1. Identifying the Minimum Model Features to Replicate Historic Morphodynamics of a Juvenile Delta

    NASA Astrophysics Data System (ADS)

    Czapiga, M. J.; Parker, G.

    2017-12-01

    We introduce a quasi-2D morphodynamic delta model that improves on past models that require many simplifying assumptions, e.g. a single channel representative of a channel network, fixed channel width, and spatially uniform deposition. Our model is useful for studying long-term progradation rates of any generic micro-tidal delta system with specification of: characteristic grain size, input water and sediment discharges and basin morphology. In particular, we relax the assumption of a single, implicit channel sweeping across the delta topset in favor of an implicit channel network. This network, coupled with recent research on channel-forming Shields number, quantitative assessments of the lateral depositional length of sand (corresponding loosely to levees) and length between bifurcations create a spatial web of deposition within the receiving basin. The depositional web includes spatial boundaries for areas infilling with sands carried as bed material load, as well as those filling via passive deposition of washload mud. Our main goal is to identify the minimum features necessary to accurately model the morphodynamics of channel number, width, depth, and overall delta progradation rate in a juvenile delta. We use the Wax Lake Delta in Louisiana as a test site due to its rapid growth in the last 40 years. Field data including topset/island bathymetry, channel bathymetry, topset/island width, channel width, number of channels, and radial topset length are compiled from US Army Corps of Engineers data for 1989, 1998, and 2006. Additional data is extracted from a DEM from 2015. These data are used as benchmarks for the hindcast model runs. The morphology of Wax Lake Delta is also strongly affected by a pre-delta substrate that acts as a lower "bedrock" boundary. Therefore, we also include closures for a bedrock-alluvial transition and an excess shear rate-law incision model to estimate bedrock incision. The model's framework is generic, but inclusion of individual sub-models, such as those mentioned above, allow us to answer basic research questions without the parameterization necessary in higher resolution models. Thus, this type of model offers an alternative to higher-resolution models.

  2. Coupled Mechanical and Thermal Modeling of Frictional Melt Injection to Constrain Physical Conditions of the Earthquake Source Region

    NASA Astrophysics Data System (ADS)

    Sawyer, W.; Resor, P. G.

    2016-12-01

    Pseudotachylyte, a fault rock formed through coseismic frictional melting, provides an important record of coseismic mechanics. In particular, injection veins formed at a high angle to the fault surface have been used to estimate rupture directivity, velocity, pulse length, stress and strength drop, as well as slip weakening distance and wall rock stiffness. These studies, however, have generally treated injection vein formation as a purely elastic process and have assumed that processes of melt generation, transport, and solidification have little influence on the final vein geometry. Using a modified analytical approximation of injection vein formation based on a dike intrusion model we find that the timescales of quenching and flow propagation are similar for a composite set of injection veins compiled from the Asbestos Mountain Fault, USA (Rowe et al., 2012), Gole Larghe Fault Zone, Italy (Griffith et al., 2012) and the Fort Foster Brittle Zone. This indicates a complex, dynamic process whose behavior is not fully captured by the current approach. To assess the applicability of the simplifying assumptions of the dike model when applied to injection veins we employ a finite-element time-dependent model of injection vein formation. This model couples elastic deformation of the wall rock with the fluid dynamics and heat transfer of the frictional melt. The final geometry of many injection veins is unaffected by the inclusion of these processes. However, some injection veins are found to be flow limited, with a final geometry reflecting cooling of the vein before it reaches an elastic equilibrium with the wall rock. In these cases, numerical results are significantly different from the dike model, and two basic assumptions of the dike model, self-similar growth and a uniform pressure gradient, are shown to be false. Additionally, we apply the finite-element model to provide two new constraints on the Fort Foster coseismic environment: a lower limit on the initial melt temperature of 1400 °C, and either significant coseismic wall rock softening or high transient tensile stress.

  3. Compact energy dispersive X-ray microdiffractometer for diagnosis of neoplastic tissues

    NASA Astrophysics Data System (ADS)

    Sosa, C.; Malezan, A.; Poletti, M. E.; Perez, R. D.

    2017-08-01

    An energy dispersive X-ray microdiffractometer with capillary optics has been developed for characterizing breast cancer. The employment of low divergence capillary optics helps to reduce the setup size to a few centimeters, while providing a lateral spatial resolution of 100 μm. The system angular calibration and momentum transfer resolution were assessed by a detailed study of a polycrystalline reference material. The performance of the system was tested by means of the analysis of tissue-equivalent samples previously characterized by conventional X-ray diffraction. In addition, a simplified correction model for an appropriate comparison of the diffraction spectra was developed and validated. Finally, the system was employed to evaluate normal and neoplastic human breast samples, in order to determine their X-ray scatter signatures. The initial results indicate that the use of this compact energy dispersive X-ray microdiffractometer combined with a simplified correction procedure is able to provide additional information to breast cancer diagnosis.

  4. Thermal Characterization and Flammability of Structural Epoxy Adhesive and Carbon/Epoxy Composite with Environmental and Chemical Degradation (Postprint)

    DTIC Science & Technology

    2012-01-01

    TGA scans show the thermal degradation of carbon/epoxy composite by fuel additive at room temperature. Through Microscale Combustion … concerns regarding the durability of structural epoxy adhesive contaminated by hydraulic fluid or fuel additive, under simplified test conditions (no … higher than room temperature) or fuel additive (at all temperatures of this study).

  5. Multi-Mode 3D Kirchhoff Migration of Receiver Functions at Continental Scale With Applications to USArray

    NASA Astrophysics Data System (ADS)

    Millet, F.; Bodin, T.; Rondenay, S.

    2017-12-01

    The teleseismic scattered seismic wavefield contains valuable information about heterogeneities and discontinuities inside the Earth. By using fast Receiver Function (RF) migration techniques such as classic Common Conversion Point (CCP) stacks, one can easily interpret structural features down to a few hundred kilometers in the mantle. However, strong simplifying 1D assumptions limit the scope of these methods to structures that are relatively planar and sub-horizontal at local-to-regional scales, such as the Lithosphere-Asthenosphere Boundary and the Mantle Transition Zone discontinuities. Other more robust 2D and 2.5D methods rely on fewer assumptions but require considerable, sometimes prohibitive, computation time. Following the ideas of Cheng (2017), we have implemented a simple fully 3D Prestack Kirchhoff RF migration scheme which uses the FM3D fast Eikonal solver to compute travel times and scattering angles. The method accounts for 3D elastic point scattering and includes free surface multiples, resulting in enhanced images of laterally varying dipping structures, such as subducted slabs. The method is tested for subduction structures using 2.5D synthetics generated with Raysum and 3D synthetics generated with specfem3D. Results show that dip angles, depths and lateral variations can be recovered almost perfectly. The approach is ideally suited for applications to dense regional datasets, including those collected across the Cascadia and Alaska subduction zones by USArray.

  6. Fully Bayesian tests of neutrality using genealogical summary statistics.

    PubMed

    Drummond, Alexei J; Suchard, Marc A

    2008-10-31

    Many data summary statistics have been developed to detect departures from neutral expectations of evolutionary models. However, questions about the neutrality of the evolution of genetic loci within natural populations remain difficult to assess. One critical cause of this difficulty is that most methods for testing neutrality make simplifying assumptions simultaneously about the mutational model and the population size model. Consequently, rejecting the null hypothesis of neutrality under these methods could result from violations of either or both assumptions, making interpretation troublesome. Here we harness posterior predictive simulation to exploit summary statistics of both the data and model parameters to test the goodness-of-fit of standard models of evolution. We apply the method to test the selective neutrality of molecular evolution in non-recombining gene genealogies and we demonstrate the utility of our method on four real data sets, identifying significant departures from neutrality in human influenza A virus, even after controlling for variation in population size. Importantly, by employing a full model-based Bayesian analysis, our method separates the effects of demography from the effects of selection. The method also allows multiple summary statistics to be used in concert, thus potentially increasing sensitivity. Furthermore, our method remains useful in situations where analytical expectations and variances of summary statistics are not available. This aspect has great potential for the analysis of temporally spaced data, an expanding area previously ignored for limited availability of theory and methods.

  7. Simplified Estimation and Testing in Unbalanced Repeated Measures Designs.

    PubMed

    Spiess, Martin; Jordan, Pascal; Wendt, Mike

    2018-05-07

    In this paper we propose a simple estimator for unbalanced repeated measures design models where each unit is observed at least once in each cell of the experimental design. The estimator does not require a model of the error covariance structure. Thus, circularity of the error covariance matrix and estimation of correlation parameters and variances are not necessary. Together with a weak assumption about the reason for the varying number of observations, the proposed estimator and its variance estimator are unbiased. As an alternative to confidence intervals based on the normality assumption, a bias-corrected and accelerated bootstrap technique is considered. We also propose the naive percentile bootstrap for Wald-type tests where the standard Wald test may break down when the number of observations is small relative to the number of parameters to be estimated. In a simulation study we illustrate the properties of the estimator and the bootstrap techniques to calculate confidence intervals and conduct hypothesis tests in small and large samples under normality and non-normality of the errors. The results imply that the simple estimator is only slightly less efficient than an estimator that correctly assumes a block structure of the error correlation matrix, a special case of which is an equi-correlation matrix. Application of the estimator and the bootstrap technique is illustrated using data from a task switch experiment based on a within-subjects experimental design with 32 cells and 33 participants.
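
    As an illustration of the naive percentile bootstrap the authors consider, the sketch below builds a percentile confidence interval for a mean effect; the data and statistic are placeholders rather than the paper's repeated-measures estimator.

    ```python
    # A minimal sketch of a naive percentile bootstrap confidence interval,
    # applied here to a mean effect over synthetic per-participant scores;
    # the resampling unit and statistic stand in for the paper's estimator.
    import numpy as np

    rng = np.random.default_rng(7)
    scores = rng.normal(loc=0.3, scale=1.0, size=33)  # e.g., 33 participants

    def percentile_ci(data, stat=np.mean, n_boot=5000, alpha=0.05, rng=rng):
        boots = [stat(rng.choice(data, size=len(data), replace=True))
                 for _ in range(n_boot)]
        lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        return lo, hi

    print(percentile_ci(scores))  # reject H0: effect = 0 if 0 lies outside
    ```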

  8. A Hydrodynamic Model of Alfvénic Wave Heating in a Coronal Loop and Its Chromospheric Footpoints

    NASA Astrophysics Data System (ADS)

    Reep, Jeffrey W.; Russell, Alexander J. B.; Tarr, Lucas A.; Leake, James E.

    2018-02-01

    Alfvénic waves have been proposed as an important energy transport mechanism in coronal loops, capable of delivering energy to both the corona and chromosphere and giving rise to many observed features of flaring and quiescent regions. In previous work, we established that resistive dissipation of waves (ambipolar diffusion) can drive strong chromospheric heating and evaporation, capable of producing flaring signatures. However, that model was based on a simplified assumption that the waves propagate instantly to the chromosphere, an assumption that the current work removes. Via a ray-tracing method, we have implemented traveling waves in a field-aligned hydrodynamic simulation that dissipate locally as they propagate along the field line. We compare this method to and validate against the magnetohydrodynamics code Lare3D. We then examine the importance of travel times to the dynamics of the loop evolution, finding that (1) the ionization level of the plasma plays a critical role in determining the location and rate at which waves dissipate; (2) long duration waves effectively bore a hole into the chromosphere, allowing subsequent waves to penetrate deeper than previously expected, unlike an electron beam whose energy deposition rises in height as evaporation reduces the mean-free paths of the electrons; and (3) the dissipation of these waves drives a pressure front that propagates to deeper depths, unlike energy deposition by an electron beam.

  9. Separating intrinsic from extrinsic fluctuations in dynamic biological systems

    PubMed Central

    Paulsson, Johan

    2011-01-01

    From molecules in cells to organisms in ecosystems, biological populations fluctuate due to the intrinsic randomness of individual events and the extrinsic influence of changing environments. The combined effect is often too complex for effective analysis, and many studies therefore make simplifying assumptions, for example ignoring either intrinsic or extrinsic effects to reduce the number of model assumptions. Here we mathematically demonstrate how two identical and independent reporters embedded in a shared fluctuating environment can be used to identify intrinsic and extrinsic noise terms, but also how these contributions are qualitatively and quantitatively different from what has been previously reported. Furthermore, we show for which classes of biological systems the noise contributions identified by dual-reporter methods correspond to the noise contributions predicted by correct stochastic models of either intrinsic or extrinsic mechanisms. We find that for broad classes of systems, the extrinsic noise from the dual-reporter method can be rigorously analyzed using models that ignore intrinsic stochasticity. In contrast, the intrinsic noise can be rigorously analyzed using models that ignore extrinsic stochasticity only under very special conditions that rarely hold in biology. Testing whether the conditions are met is rarely possible and the dual-reporter method may thus produce flawed conclusions about the properties of the system, particularly about the intrinsic noise. Our results contribute toward establishing a rigorous framework to analyze dynamically fluctuating biological systems. PMID:21730172

  10. Separating intrinsic from extrinsic fluctuations in dynamic biological systems.

    PubMed

    Hilfinger, Andreas; Paulsson, Johan

    2011-07-19

    From molecules in cells to organisms in ecosystems, biological populations fluctuate due to the intrinsic randomness of individual events and the extrinsic influence of changing environments. The combined effect is often too complex for effective analysis, and many studies therefore make simplifying assumptions, for example ignoring either intrinsic or extrinsic effects to reduce the number of model assumptions. Here we mathematically demonstrate how two identical and independent reporters embedded in a shared fluctuating environment can be used to identify intrinsic and extrinsic noise terms, but also how these contributions are qualitatively and quantitatively different from what has been previously reported. Furthermore, we show for which classes of biological systems the noise contributions identified by dual-reporter methods correspond to the noise contributions predicted by correct stochastic models of either intrinsic or extrinsic mechanisms. We find that for broad classes of systems, the extrinsic noise from the dual-reporter method can be rigorously analyzed using models that ignore intrinsic stochasticity. In contrast, the intrinsic noise can be rigorously analyzed using models that ignore extrinsic stochasticity only under very special conditions that rarely hold in biology. Testing whether the conditions are met is rarely possible and the dual-reporter method may thus produce flawed conclusions about the properties of the system, particularly about the intrinsic noise. Our results contribute toward establishing a rigorous framework to analyze dynamically fluctuating biological systems.
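
    The dual-reporter decomposition at issue can be stated compactly: the mean squared difference between the two reporters estimates the intrinsic contribution, and their covariance the extrinsic one. The sketch below applies these commonly used estimators to synthetic data (a shared gamma-distributed environment driving two Poisson reporters); it illustrates the method being analyzed, not the paper's results.

    ```python
    # Dual-reporter noise decomposition on synthetic data: two identical,
    # independent reporters x1 and x2 share a fluctuating environment; the
    # uncorrelated part of their variability is attributed to intrinsic
    # noise and the covariance to extrinsic noise.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000
    env = rng.gamma(shape=20, scale=5, size=n)  # shared fluctuating environment
    x1 = rng.poisson(env)                       # reporter 1
    x2 = rng.poisson(env)                       # reporter 2 (independent given env)

    m1, m2 = x1.mean(), x2.mean()
    eta_int2 = np.mean((x1 - x2) ** 2) / (2 * m1 * m2)   # intrinsic, ~1/mean(env)
    eta_ext2 = (np.mean(x1 * x2) - m1 * m2) / (m1 * m2)  # extrinsic, ~CV^2 of env
    print(eta_int2, eta_ext2)
    ```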

  11. Calculation of Disease Dynamics in a Population of Households

    PubMed Central

    Ross, Joshua V.; House, Thomas; Keeling, Matt J.

    2010-01-01

    Early mathematical representations of infectious disease dynamics assumed a single, large, homogeneously mixing population. Over the past decade there has been growing interest in models consisting of multiple smaller subpopulations (households, workplaces, schools, communities), with the natural assumption of strong homogeneous mixing within each subpopulation, and weaker transmission between subpopulations. Here we consider a model of SIRS (susceptible-infectious-recovered-susceptible) infection dynamics in a very large (assumed infinite) population of households, with the simplifying assumption that each household is of the same size (although all methods may be extended to a population with a heterogeneous distribution of household sizes). For this households model we present efficient methods for studying several quantities of epidemiological interest: (i) the threshold for invasion; (ii) the early growth rate; (iii) the household offspring distribution; (iv) the endemic prevalence of infection; and (v) the transient dynamics of the process. We utilize these methods to explore a wide region of parameter space appropriate for human infectious diseases. We then extend these results to consider the effects of more realistic gamma-distributed infectious periods. We discuss how all these results differ from standard homogeneous-mixing models and assess the implications for the invasion, transmission and persistence of infection. The computational efficiency of the methodology presented here will hopefully aid in the parameterisation of structured models and in the evaluation of appropriate responses for future disease outbreaks. PMID:20305791

  12. Prototyping and validating requirements of radiation and nuclear emergency plan simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamid, AHA., E-mail: amyhamijah@nm.gov.my; Faculty of Computing, Universiti Teknologi Malaysia; Rozan, MZA.

    2015-04-29

    Organizational incapability leads to unrealistic, impractical, inadequate and ambiguous mechanisms in radiological and nuclear emergency preparedness and response plans (EPR), causing emergency plan disorder and severe disasters. These situations result from poor definition and unidentified roles and duties of the disaster coordinator (65.6%). Such unexpected conditions bring severe aftermath to the first responders, operators, workers, patients and the community at large. Hence, in this report, we discuss the prototyping and validation of the Malaysian radiation and nuclear emergency preparedness and response plan simulation model (EPRM). A prototyping technique was required to formalize the simulation model requirements. Prototyping as systems requirements validation was carried out to endorse the correctness of the model itself against the stakeholders' intentions in resolving this organizational incapability. We have made assumptions for the proposed emergency preparedness and response model (EPRM) through the simulation software. Those assumptions provided a twofold view of the expected mechanisms, covering the planning and handling of the respective emergency plan as well as the management of the hazard involved. This model, called the RANEPF (Radiation and Nuclear Emergency Planning Framework) simulator, demonstrates the training emergency response prerequisites rather than the intervention principles alone. The demonstrations involved the determination of the casualties' absorbed dose range screening and the coordination of the capacity planning of the expected trauma triage. Through user-centred design and a sociotechnical approach, the RANEPF simulator was strategized and simplified, though certainly it is equally complex.

  13. Prototyping and validating requirements of radiation and nuclear emergency plan simulator

    NASA Astrophysics Data System (ADS)

    Hamid, AHA.; Rozan, MZA.; Ibrahim, R.; Deris, S.; Selamat, A.

    2015-04-01

    Organizational incapability leads to unrealistic, impractical, inadequate and ambiguous mechanisms in radiological and nuclear emergency preparedness and response plans (EPR), causing emergency plan disorder and severe disasters. These situations result from poor definition and unidentified roles and duties of the disaster coordinator (65.6%). Such unexpected conditions bring severe aftermath to the first responders, operators, workers, patients and the community at large. Hence, in this report, we discuss the prototyping and validation of the Malaysian radiation and nuclear emergency preparedness and response plan simulation model (EPRM). A prototyping technique was required to formalize the simulation model requirements. Prototyping as systems requirements validation was carried out to endorse the correctness of the model itself against the stakeholders' intentions in resolving this organizational incapability. We have made assumptions for the proposed emergency preparedness and response model (EPRM) through the simulation software. Those assumptions provided a twofold view of the expected mechanisms, covering the planning and handling of the respective emergency plan as well as the management of the hazard involved. This model, called the RANEPF (Radiation and Nuclear Emergency Planning Framework) simulator, demonstrates the training emergency response prerequisites rather than the intervention principles alone. The demonstrations involved the determination of the casualties' absorbed dose range screening and the coordination of the capacity planning of the expected trauma triage. Through user-centred design and a sociotechnical approach, the RANEPF simulator was strategized and simplified, though certainly it is equally complex.

  14. Calculating the trap density of states in organic field-effect transistors from experiment: A comparison of different methods

    NASA Astrophysics Data System (ADS)

    Kalb, Wolfgang L.; Batlogg, Bertram

    2010-01-01

    The spectral density of localized states in the band gap of pentacene (trap DOS) was determined with a pentacene-based thin-film transistor from measurements of the temperature dependence and gate-voltage dependence of the contact-corrected field-effect conductivity. Several analytical methods to calculate the trap DOS from the measured data were used to clarify whether the different methods lead to comparable results. We also used computer simulations to further test the results from the analytical methods. Most methods predict a trap DOS close to the valence-band edge that can be very well approximated by a single exponential function with a slope in the range of 50-60 meV and a trap density at the valence-band edge of ≈2×10²¹ eV⁻¹ cm⁻³. Interestingly, the trap DOS is always slightly steeper than exponential. An important finding is that the choice of the method to calculate the trap DOS from the measured data can have a considerable effect on the final result. We identify two specific simplifying assumptions that lead to significant errors in the trap DOS. The temperature dependence of the band mobility should generally not be neglected. Moreover, the assumption of a constant effective accumulation-layer thickness leads to a significant underestimation of the slope of the trap DOS.
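
    For concreteness, a minimal sketch of the single-exponential trap DOS reported above, using the abstract's N0 ≈ 2×10²¹ eV⁻¹ cm⁻³ and a 55 meV slope; the function name and energy grid are our own choices.

    import numpy as np

    # Single-exponential trap DOS, N(E) = N0 * exp(-E / E0), with E measured
    # from the valence-band edge. N0 and E0 are taken from the abstract.
    N0 = 2e21          # trap density at the valence-band edge (eV^-1 cm^-3)
    E0 = 0.055         # exponential slope, here 55 meV

    def trap_dos(E):
        """Trap density of states at energy E (eV) above the valence-band edge."""
        return N0 * np.exp(-E / E0)

    for e in np.linspace(0.0, 0.5, 6):
        print(f"E = {e:.2f} eV   N(E) = {trap_dos(e):.3e} eV^-1 cm^-3")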

  15. Confidence estimation for quantitative photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Gröhl, Janek; Kirchner, Thomas; Maier-Hein, Lena

    2018-02-01

    Quantification of photoacoustic (PA) images is one of the major challenges currently being addressed in PA research. Tissue properties can be quantified by correcting the recorded PA signal with an estimation of the corresponding fluence. Fluence estimation itself, however, is an ill-posed inverse problem which usually needs simplifying assumptions to be solved with state-of-the-art methods. These simplifications, as well as noise and artifacts in PA images reduce the accuracy of quantitative PA imaging (PAI). This reduction in accuracy is often localized to image regions where the assumptions do not hold true. This impedes the reconstruction of functional parameters when averaging over entire regions of interest (ROI). Averaging over a subset of voxels with a high accuracy would lead to an improved estimation of such parameters. To achieve this, we propose a novel approach to the local estimation of confidence in quantitative reconstructions of PA images. It makes use of conditional probability densities to estimate confidence intervals alongside the actual quantification. It encapsulates an estimation of the errors introduced by fluence estimation as well as signal noise. We validate the approach using Monte Carlo generated data in combination with a recently introduced machine learning-based approach to quantitative PAI. Our experiments show at least a two-fold improvement in quantification accuracy when evaluating on voxels with high confidence instead of thresholding signal intensity.
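
    A minimal numerical sketch of the evaluation idea, assuming per-voxel estimates whose error scales with an estimated confidence-interval width; the synthetic arrays and the 25% quantile threshold are illustrative, not the authors' machine-learning estimator.

    import numpy as np

    # Average a quantified parameter only over voxels with narrow confidence
    # intervals, instead of over the whole ROI. All data here are synthetic.
    rng = np.random.default_rng(0)
    true_value = 0.8
    ci_width = np.abs(rng.normal(0.15, 0.1, size=1000)) + 0.01   # per-voxel CI widths
    estimates = true_value + rng.normal(0, 1, size=1000) * ci_width

    roi_mean = estimates.mean()                                  # naive ROI average
    confident = ci_width < np.quantile(ci_width, 0.25)           # high-confidence voxels
    print("ROI mean:", roi_mean, " high-confidence mean:", estimates[confident].mean())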

  16. Temporal Overlap in the Linguistic Processing of Successive Words in Reading: Reply to Pollatsek, Reichle, and Rayner (2006a)

    ERIC Educational Resources Information Center

    Inhoff, Albrecht W.; Radach, Ralph; Eiter, Brianna

    2006-01-01

    A. Pollatsek, E. D. Reichle, and K. Rayner argue that the critical findings in A. W. Inhoff, B. M. Eiter, and R. Radach are in general agreement with core assumptions of sequential attention shift models if additional assumptions and facts are considered. The current authors critically discuss the hypothesized time line of processing and indicate…

  17. Inverse methods for 3D quantitative optical coherence elasticity imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Dong, Li; Wijesinghe, Philip; Hugenberg, Nicholas; Sampson, David D.; Munro, Peter R. T.; Kennedy, Brendan F.; Oberai, Assad A.

    2017-02-01

    In elastography, quantitative elastograms are desirable as they are system and operator independent. Such quantification also facilitates more accurate diagnosis, longitudinal studies and studies performed across multiple sites. In optical elastography (compression, surface-wave or shear-wave), quantitative elastograms are typically obtained by assuming some form of homogeneity. This simplifies data processing at the expense of smearing sharp transitions in elastic properties, and/or introducing artifacts in these regions. Recently, we proposed an inverse problem-based approach to compression OCE that does not assume homogeneity, and overcomes the drawbacks described above. In this approach, the difference between the measured and predicted displacement field is minimized by seeking the optimal distribution of elastic parameters. The predicted displacements and recovered elastic parameters together satisfy the constraint of the equations of equilibrium. This approach, which has been applied in two spatial dimensions assuming plane strain, has yielded accurate material property distributions. Here, we describe the extension of the inverse problem approach to three dimensions. In addition to the advantage of visualizing elastic properties in three dimensions, this extension eliminates the plane strain assumption and is therefore closer to the true physical state. It does, however, incur greater computational costs. We address this challenge through a modified adjoint problem, spatially adaptive grid resolution, and three-dimensional decomposition techniques. Through these techniques the inverse problem is solved on a typical desktop machine within a wall clock time of 20 hours. We present the details of the method and quantitative elasticity images of phantoms and tissue samples.
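
    A one-dimensional toy version of the inverse-problem idea, assuming a uniaxial bar under a known end load in place of the full 3D equilibrium equations; the two-region modulus, noise level and optimizer choice are our assumptions, not the authors' adjoint formulation.

    import numpy as np
    from scipy.optimize import minimize

    # Find the stiffness distribution that makes predicted displacements
    # match "measured" ones, for a bar fixed at x=0 with end load F.
    n, F = 100, 1.0
    x = np.linspace(0, 1, n + 1)
    E_true = np.where(x[:-1] < 0.5, 1.0, 3.0)     # stiff inclusion in second half

    def forward(E):
        """Displacements of a unit-section bar: sigma = F, strain = F/E."""
        strain = F / E
        return np.concatenate([[0.0], np.cumsum(strain) * (x[1] - x[0])])

    u_meas = forward(E_true) + np.random.default_rng(7).normal(0, 1e-4, n + 1)

    def misfit(params):
        E = np.where(x[:-1] < 0.5, params[0], params[1])
        return np.sum((forward(E) - u_meas) ** 2)

    res = minimize(misfit, x0=[0.5, 0.5], bounds=[(0.1, 10.0)] * 2)
    print("recovered moduli:", np.round(res.x, 3), " true: [1.0, 3.0]")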

  18. The Prediction of Broadband Shock-Associated Noise Including Propagation Effects

    NASA Technical Reports Server (NTRS)

    Miller, Steven; Morris, Philip J.

    2011-01-01

    An acoustic analogy is developed based on the Euler equations for broadband shock-associated noise (BBSAN) that directly incorporates the vector Green's function of the linearized Euler equations and a steady Reynolds-Averaged Navier-Stokes solution (SRANS) as the mean flow. The vector Green's function allows the BBSAN propagation through the jet shear layer to be determined. The large-scale coherent turbulence is modeled by two-point second order velocity cross-correlations. Turbulent length and time scales are related to the turbulent kinetic energy and dissipation. An adjoint vector Green's function solver is implemented to determine the vector Green's function based on a locally parallel mean flow at streamwise locations of the SRANS solution. However, the developed acoustic analogy could easily be based on any adjoint vector Green's function solver, such as one that makes no assumptions about the mean flow. The newly developed acoustic analogy can be simplified to one that uses the Green's function associated with the Helmholtz equation, which is consistent with the formulation of Morris and Miller (AIAAJ 2010). A large number of predictions are generated using three different nozzles over a wide range of fully expanded Mach numbers and jet stagnation temperatures. These predictions are compared with experimental data from multiple jet noise labs. In addition, two models for the so-called 'fine-scale' mixing noise are included in the comparisons. Improved BBSAN predictions are obtained relative to other models that do not include the propagation effects, especially in the upstream direction of the jet.

  19. Net growth rate of continuum heterogeneous biofilms with inhibition kinetics.

    PubMed

    Gonzo, Elio Emilio; Wuertz, Stefan; Rajal, Veronica B

    2018-01-01

    Biofilm systems can be modeled using a variety of analytical and numerical approaches, usually by making simplifying assumptions regarding biofilm heterogeneity and activity as well as effective diffusivity. Inhibition kinetics, albeit common in experimental systems, are rarely considered, and analytical approaches are either lacking or assume that the effective diffusivity of the substrate and the biofilm density remain constant. To address this obvious knowledge gap, an analytical procedure to estimate the effectiveness factor (dimensionless substrate mass flux at the biofilm-fluid interface) was developed for a continuum heterogeneous biofilm, extending multiple limiting-substrate Monod kinetics to different types of inhibition kinetics. The simple perturbation technique, previously validated to quantify biofilm activity, was applied to systems where either the substrate or the inhibitor is the limiting component, and to cases where the inhibitor is a reaction product or the substrate also acts as the inhibitor. Explicit analytical equations are presented for the effectiveness factor estimation and, therefore, the calculation of biomass growth rate or limiting substrate/inhibitor consumption rate, for a given biofilm thickness. The robustness of the new biofilm model was tested using kinetic parameters experimentally determined for the growth of Pseudomonas putida CCRC 14365 on phenol. Several additional cases have been analyzed, including examples where the effectiveness factor can reach values greater than unity, characteristic of systems with inhibition kinetics. Criteria to establish when the effectiveness factor can reach values greater than unity in each of the cases studied are also presented.
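
    As a numerical cross-check of the concept (not the authors' analytical perturbation solution), the sketch below computes the effectiveness factor for a planar, homogeneous biofilm with Haldane substrate-inhibition kinetics; all parameter values are illustrative. With inhibition, the interior of the biofilm can react faster than the surface, so eta can exceed unity.

    import numpy as np
    from scipy.integrate import solve_bvp

    D  = 1e-9      # effective diffusivity (m^2/s)
    L  = 300e-6    # biofilm thickness (m)
    Sb = 0.5       # bulk substrate concentration (kg/m^3)
    qmax, Ks, Ki = 1e-3, 0.05, 0.3     # Haldane (substrate-inhibition) parameters

    def q(S):
        """Volumetric consumption rate with substrate inhibition (Haldane)."""
        return qmax * S / (Ks + S + S**2 / Ki)

    def odes(x, y):
        # y[0] = S, y[1] = dS/dx; steady state: D * S'' = q(S)
        return np.vstack([y[1], q(y[0]) / D])

    def bc(ya, yb):
        # no flux at the substratum (x=0); bulk concentration at the interface (x=L)
        return np.array([ya[1], yb[0] - Sb])

    x = np.linspace(0, L, 50)
    y0 = np.vstack([np.full_like(x, Sb), np.zeros_like(x)])
    sol = solve_bvp(odes, bc, x, y0)

    flux = D * sol.y[1][-1]          # substrate flux into the biofilm at x = L
    eta = flux / (q(Sb) * L)         # effectiveness factor
    print(f"effectiveness factor eta = {eta:.3f}")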

  20. Interplay between shape and roughness in early-stage microcapillary imbibition.

    PubMed

    Girardo, Salvatore; Palpacelli, Silvia; De Maio, Alessandro; Cingolani, Roberto; Succi, Sauro; Pisignano, Dario

    2012-02-07

    Flows in microcapillaries and associated imbibition phenomena play a major role across a wide spectrum of practical applications, from oil recovery to inkjet printing and from absorption in porous materials and water transport in trees to biofluidic phenomena in biomedical devices. Early investigations of spontaneous imbibition in capillaries led to the observation of a universal scaling behavior, known as the Lucas-Washburn (LW) law. The LW law abstracts away many real-life effects, such as the inertia of the fluid, irregularities in the wall geometry, and the finite density of the vacuum phase (gas or vapor) within the channel. Such simplifying assumptions set a constraint on the design of modern microfluidic devices, operating at ever-decreasing space and time scales, where the aforementioned simplifications come under serious question. Here, through a combined use of leading-edge experimental and simulation techniques, we unravel a novel interplay between global shape and nanoscopic roughness. This interplay significantly affects the early-stage energy budget, controlling front propagation in corrugated microchannels. We find that such a budget is governed by a two-scale phenomenon: The global geometry sets the conditions for small-scale structures to develop and propagate ahead of the main front. These small-scale structures probe the fine-scale details of the wall geometry (nanocorrugations), and the additional friction they experience slows the entire front. We speculate that such a two-scale mechanism may provide a fairly general scenario to account for extra dissipative phenomena occurring in capillary flows with nanocorrugated walls.
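
    For reference, the LW scaling itself is a one-liner, l(t) = sqrt(gamma * r * cos(theta) * t / (2 * mu)); the fluid and capillary values below are illustrative (water in a 10 µm glass capillary).

    import numpy as np

    gamma = 0.072              # surface tension (N/m)
    mu    = 1.0e-3             # dynamic viscosity (Pa s)
    r     = 5e-6               # capillary radius (m)
    theta = np.deg2rad(20.0)   # contact angle

    def lw_length(t):
        """Imbibition length (m) after time t (s) under the Lucas-Washburn law."""
        return np.sqrt(gamma * r * np.cos(theta) * t / (2.0 * mu))

    for t in (1e-6, 1e-4, 1e-2, 1.0):
        print(f"t = {t:8.0e} s   l = {lw_length(t) * 1e3:.3f} mm")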

  1. Structural equation modeling in environmental risk assessment.

    PubMed

    Buncher, C R; Succop, P A; Dietrich, K N

    1991-01-01

    Environmental epidemiology requires effective models that take individual observations of environmental factors and connect them into meaningful patterns. Single-factor relationships have given way to multivariable analyses; simple additive models have been augmented by multiplicative (logistic) models. Each of these steps has produced greater enlightenment and understanding. Models that allow for factors causing outputs that can affect later outputs with putative causation working at several different time points (e.g., linkage) are not commonly used in the environmental literature. Structural equation models are a class of covariance structure models that have been used extensively in economics/business and social science but are still little used in the realm of biostatistics. Path analysis in genetic studies is one simplified form of this class of models. We have been using these models in a study of the health and development of infants who have been exposed to lead in utero and in the postnatal home environment. These models require as input the directionality of the relationship and then produce fitted models for multiple inputs causing each factor and the opportunity to have outputs serve as input variables into the next phase of the simultaneously fitted model. Some examples of these models from our research are presented to increase familiarity with this class of models. Use of these models can provide insight into the effect of changing an environmental factor when assessing risk. The usual cautions concerning believing a model, believing causation has been proven, and the assumptions that are required for each model are operative.
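
    A minimal path-analysis sketch in the spirit of these models, assuming a single mediated path (exposure to mediator to outcome, plus a direct path) estimated by two-stage least squares on simulated data; a real SEM would fit all paths simultaneously with dedicated software, so treat this as an illustration only.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 500
    lead = rng.normal(size=n)                      # prenatal exposure (z-score)
    mediator = 0.5 * lead + rng.normal(size=n)     # e.g., postnatal environment
    outcome = 0.3 * mediator + 0.2 * lead + rng.normal(size=n)   # development score

    # Path a: exposure -> mediator
    a = np.linalg.lstsq(np.c_[lead, np.ones(n)], mediator, rcond=None)[0][0]
    # Paths b (mediator -> outcome) and c' (direct effect), fit jointly
    coef = np.linalg.lstsq(np.c_[mediator, lead, np.ones(n)], outcome, rcond=None)[0]
    b, c_direct = coef[0], coef[1]

    print(f"indirect effect a*b = {a * b:.3f}, direct effect c' = {c_direct:.3f}")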

  2. Deconvolution of antibody affinities and concentrations by non-linear regression analysis of competitive ELISA data.

    PubMed

    Stevens, F J; Bobrovnik, S A

    2007-12-01

    Physiological responses of the adaptive immune system are polyclonal in nature whether induced by a naturally occurring infection, by vaccination to prevent infection or, in the case of animals, by challenge with antigen to generate reagents of research or commercial significance. The composition of the polyclonal responses is distinct to each individual or animal and changes over time. Differences exist in the affinities of the constituents and their relative proportion of the responsive population. In addition, some of the antibodies bind to different sites on the antigen, whereas other pairs of antibodies are sterically restricted from concurrent interaction with the antigen. Even if generation of a monoclonal antibody is the ultimate goal of a project, the quality of the resulting reagent is ultimately related to the characteristics of the initial immune response. It is probably impossible to quantitatively parse the composition of a polyclonal response to antigen. However, molecular regression allows further parameterization of a polyclonal antiserum in the context of certain simplifying assumptions. The antiserum is described as consisting of two competing populations of high- and low-affinity and unknown relative proportions. This simple model allows the quantitative determination of representative affinities and proportions. These parameters may be of use in evaluating responses to vaccines, to evaluating continuity of antibody production whether in vaccine recipients or animals used for the production of antisera, or in optimizing selection of donors for the production of monoclonal antibodies.
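
    A sketch of the two-population idea under stated assumptions: the antiserum is treated as a mixture of a high- and a low-affinity component of unknown proportion, and a simple two-site competitive binding function (our choice, not necessarily the authors' exact formulation) is fit by non-linear regression to synthetic data.

    import numpy as np
    from scipy.optimize import curve_fit

    def two_site(c, f, K1, K2):
        """Fraction of signal remaining vs competitor concentration c."""
        return f / (1 + c / K1) + (1 - f) / (1 + c / K2)

    c = np.logspace(-3, 3, 25)                      # competitor concentration (nM)
    rng = np.random.default_rng(2)
    y = two_site(c, 0.6, 0.05, 50.0) + rng.normal(0, 0.02, size=c.size)

    p, _ = curve_fit(two_site, c, y, p0=[0.5, 0.1, 10.0],
                     bounds=([0, 1e-4, 1e-4], [1, 1e4, 1e4]))
    print(f"high-affinity fraction = {p[0]:.2f}, K_high = {p[1]:.3f}, K_low = {p[2]:.1f}")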

  3. A framework for the use of single-chemical transcriptomics data in predicting the hazards associated with complex mixtures of polycyclic aromatic hydrocarbons.

    PubMed

    Labib, Sarah; Williams, Andrew; Kuo, Byron; Yauk, Carole L; White, Paul A; Halappanavar, Sabina

    2017-07-01

    The assumption of additivity applied in the risk assessment of environmental mixtures containing carcinogenic polycyclic aromatic hydrocarbons (PAHs) was investigated using transcriptomics. Muta™Mouse were gavaged for 28 days with three doses of eight individual PAHs, two defined mixtures of PAHs, or coal tar, an environmentally ubiquitous complex mixture of PAHs. Microarrays were used to identify differentially expressed genes (DEGs) in lung tissue collected 3 days post-exposure. Cancer-related pathways perturbed by the individual or mixtures of PAHs were identified, and dose-response modeling of the DEGs was conducted to calculate gene/pathway benchmark doses (BMDs). Individual PAH-induced pathway perturbations (the median gene expression changes for all genes in a pathway relative to controls) and pathway BMDs were applied to models of additivity [i.e., concentration addition (CA), generalized concentration addition (GCA), and independent action (IA)] to generate predicted pathway-specific dose-response curves for each PAH mixture. The predicted and observed pathway dose-response curves were compared to assess the sensitivity of different additivity models. Transcriptomics-based additivity calculation showed that IA accurately predicted the pathway perturbations induced by all mixtures of PAHs. CA did not support the additivity assumption for the defined mixtures; however, GCA improved the CA predictions. Moreover, pathway BMDs derived for coal tar were comparable to BMDs derived from previously published coal tar-induced mouse lung tumor incidence data. These results suggest that in the absence of tumor incidence data, individual chemical-induced transcriptomics changes associated with cancer can be used to investigate the assumption of additivity and to predict the carcinogenic potential of a mixture.
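
    The independent action (IA) model that the authors found to track the observed pathway perturbations is simple to state in code; the component effects below are illustrative numbers, not measured PAH data.

    import numpy as np

    def independent_action(effects):
        """IA: E_mix = 1 - prod(1 - E_i), for component effects E_i in [0, 1]."""
        effects = np.asarray(effects, dtype=float)
        return 1.0 - np.prod(1.0 - effects)

    components = [0.10, 0.25, 0.05, 0.15]   # fractional effects of individual PAHs
    print(f"IA-predicted mixture effect: {independent_action(components):.3f}")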

  4. Update on Canada.

    ERIC Educational Resources Information Center

    Hochstadt, John Webster

    1994-01-01

    Gift planning is increasing in Canada's colleges and universities to offset effects of retrenchment. New annuity vehicles and the emergence of university Crown Foundations offer tax breaks that support private giving to institutions. In addition, a simplified process for gifts is anticipated. (MSE)

  5. Simplified stratigraphic cross sections of the Eocene Green River Formation in the Piceance Basin, northwestern Colorado

    USGS Publications Warehouse

    Dietrich, John D.; Johnson, Ronald C.

    2013-01-01

    Thirteen stratigraphic cross sections of the Eocene Green River Formation in the Piceance Basin of northwestern Colorado are presented in this report. Originally published in a much larger and more detailed form by Self and others (2010), they are shown here in simplified, page-size versions that are easily accessed and used for presentation purposes. Modifications to the original versions include the elimination of the detailed lithologic columns and oil-yield histograms from Fischer assay data and the addition of ground-surface lines to give the depth of the various oil shale units shown on the cross section.

  6. A VLSI architecture for simplified arithmetic Fourier transform algorithm

    NASA Technical Reports Server (NTRS)

    Reed, Irving S.; Shih, Ming-Tang; Truong, T. K.; Hendon, E.; Tufts, D. W.

    1992-01-01

    The arithmetic Fourier transform (AFT) is a number-theoretic approach to Fourier analysis which has been shown to perform competitively with the classical FFT in terms of accuracy, complexity, and speed. Theorems developed in a previous paper for the AFT algorithm are used here to derive the original AFT algorithm which Bruns found in 1903. This is shown to yield an algorithm of less complexity and of improved performance over certain recent AFT algorithms. A VLSI architecture is suggested for this simplified AFT algorithm. This architecture uses a butterfly structure which reduces the number of additions by 25 percent relative to the direct method.
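
    The number-theoretic core of the AFT is easy to sketch for a zero-mean cosine series: the average of n equally spaced samples aliases exactly the harmonics that are multiples of n, and Möbius inversion untangles them using only additions and a few scalings. This is the textbook case, not the Bruns/VLSI variant described above.

    import numpy as np

    def mobius(n):
        """Moebius function via trial division."""
        if n == 1:
            return 1
        result, m, p = 1, n, 2
        while p * p <= m:
            if m % p == 0:
                m //= p
                if m % p == 0:
                    return 0          # squared prime factor -> mu = 0
                result = -result
            p += 1
        return -result if m > 1 else result

    K = 16                            # highest harmonic assumed present
    a_true = np.zeros(K + 1)
    a_true[1], a_true[3], a_true[5] = 1.0, 0.5, 0.25

    def f(t):
        return sum(a_true[k] * np.cos(2 * np.pi * k * t) for k in range(1, K + 1))

    def S(n):
        """Average of n equally spaced samples: equals a_n + a_2n + a_3n + ..."""
        return sum(f(m / n) for m in range(n)) / n

    a_rec = [sum(mobius(j) * S(j * n) for j in range(1, K // n + 1))
             for n in range(1, K + 1)]
    print(np.round(a_rec, 3))         # recovers 1.0, 0, 0.5, 0, 0.25, 0, ...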

  7. From crater functions to partial differential equations: a new approach to ion bombardment induced nonequilibrium pattern formation.

    PubMed

    Norris, Scott A; Brenner, Michael P; Aziz, Michael J

    2009-06-03

    We develop a methodology for deriving continuum partial differential equations for the evolution of large-scale surface morphology directly from molecular dynamics simulations of the craters formed from individual ion impacts. Our formalism relies on the separation between the length scale of ion impact and the characteristic scale of pattern formation, and expresses the surface evolution in terms of the moments of the crater function. We demonstrate that the formalism reproduces the classical Bradley-Harper results, as well as ballistic atomic drift, under the appropriate simplifying assumptions. Given an actual set of converged molecular dynamics moments and their derivatives with respect to the incidence angle, our approach can be applied directly to predict the presence and absence of surface morphological instabilities. This analysis represents the first work systematically connecting molecular dynamics simulations of ion bombardment to partial differential equations that govern topographic pattern-forming instabilities.
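
    The moment computation at the heart of the crater-function formalism reduces to simple quadrature; the Gaussian-plus-rim profile below is a stand-in for an MD-averaged crater, not real data.

    import numpy as np

    x = np.linspace(-20.0, 20.0, 2001)     # surface coordinate (nm)
    # Synthetic 1D crater profile: central dig-out plus redeposited rims.
    crater = (-0.5 * np.exp(-x**2 / 4.0)
              + 0.1 * np.exp(-(np.abs(x) - 4.0)**2 / 2.0))

    M0 = np.trapz(crater, x)               # zeroth moment: net erosion per impact
    M1 = np.trapz(x * crater, x)           # first moment: lateral mass drift
    print(f"M0 = {M0:.3f} nm^2, M1 = {M1:.3f} nm^3 (symmetric profile -> M1 ~ 0)")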

  8. Performance evaluation of power control algorithms in wireless cellular networks

    NASA Astrophysics Data System (ADS)

    Temaneh-Nyah, C.; Iita, V.

    2014-10-01

    Power control in a mobile communication network aims to set the transmission power levels so that the required quality of service (QoS) for the users is guaranteed with the lowest possible transmission powers. Most studies of power control algorithms in the literature are based on simplified assumptions, which compromises the validity of the results when they are applied in a real environment. In this paper, a CDMA network was simulated. The real environment was accounted for by defining the analysis area, specifying the base stations and mobile stations by their geographical coordinates, and accounting for the mobility of the mobile stations. The simulation also allowed a number of network parameters, including the network traffic and the wireless channel models, to be modified. Finally, we present the simulation results of a convergence-speed-based comparative analysis of three uplink power control algorithms.
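
    For orientation, a sketch of the classic distributed (Foschini-Miljanic-type) uplink power-control iteration that convergence comparisons of this kind typically include; the link-gain matrix, noise level and SINR target are random illustrative data, not the paper's simulated CDMA network.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 5                                          # number of mobiles
    G = rng.uniform(0.01, 0.1, (n, n))             # cross-link gains
    np.fill_diagonal(G, rng.uniform(0.5, 1.0, n))  # own-link gains
    noise = 1e-3
    gamma_target = 2.0                             # required SINR

    p = np.full(n, 0.1)                            # initial transmit powers
    for it in range(100):
        interference = G @ p - np.diag(G) * p + noise
        sinr = np.diag(G) * p / interference
        p = gamma_target / sinr * p                # distributed update per mobile
        if np.all(np.abs(sinr - gamma_target) < 1e-6):
            break
    print(f"converged in {it + 1} iterations, powers: {np.round(p, 4)}")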

  9. How the continents deform: The evidence from tectonic geodesy

    USGS Publications Warehouse

    Thatcher, Wayne R.

    2009-01-01

    Space geodesy now provides quantitative maps of the surface velocity field within tectonically active regions, supplying constraints on the spatial distribution of deformation, the forces that drive it, and the brittle and ductile properties of continental lithosphere. Deformation is usefully described as relative motions among elastic blocks and is block-like because major faults are weaker than adjacent intact crust. Despite similarities, continental block kinematics differs from global plate tectonics: blocks are much smaller, typically ∼100–1000 km in size; departures from block rigidity are sometimes measurable; and blocks evolve over ∼1–10 Ma timescales, particularly near their often geometrically irregular boundaries. Quantitatively relating deformation to the forces that drive it requires simplifying assumptions about the strength distribution in the lithosphere. If brittle/elastic crust is strongest, interactions among blocks control the deformation. If ductile lithosphere is the stronger, its flow properties determine the surface deformation, and a continuum approach is preferable.

  10. A simplified computer program for the prediction of the linear stability behavior of liquid propellant combustors

    NASA Technical Reports Server (NTRS)

    Mitchell, C. E.; Eckert, K.

    1979-01-01

    A program for predicting the linear stability of liquid propellant rocket engines is presented. The underlying model assumptions and the analytical steps necessary for understanding the program and its input and output are also given. The rocket engine is modeled as a right circular cylinder with a concentrated combustion zone at the injector, a nozzle, finite mean flow, and a combustion response described either by an acoustic admittance or by the sensitive time-lag theory. The resulting partial differential equations are combined into two governing integral equations by use of the Green's function method. These equations are solved using a successive-approximation technique for the small-amplitude (linear) case. The computational method used, as well as the various user options available, is discussed. Finally, a flow diagram, sample input and output for a typical application, and a complete program listing for program MODULE are presented.

  11. Reduced-order modeling of soft robots

    PubMed Central

    Chenevier, Jean; González, David; Aguado, J. Vicente; Chinesta, Francisco

    2018-01-01

    We present a general strategy for the modeling and simulation-based control of soft robots. Although the presented methodology is completely general, we restrict ourselves to the analysis of a model robot made of hyperelastic materials and actuated by cables or tendons. To comply with the stringent real-time constraints imposed by control algorithms, a reduced-order modeling strategy is proposed that minimizes the online CPU cost. An offline training procedure is instead used to determine a response surface that characterizes the response of the robot. Contrary to existing strategies, the proposed methodology allows for fully non-linear modeling of the soft material in a hyperelastic setting as well as a fully non-linear kinematic description of the movement, without any restriction or simplifying assumption. Examples of different configurations of the robot are analyzed and show the appeal of the method. PMID:29470496
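
    A minimal sketch of the offline-training / cheap-online-evaluation split, assuming a toy one-input "robot" and a polynomial response surface in place of the paper's hyperelastic finite-element model.

    import numpy as np

    rng = np.random.default_rng(4)

    def simulate_tip(cable_tension):          # expensive offline "simulation"
        return np.sin(cable_tension) + 0.1 * cable_tension**2

    # Offline: sample the input space and fit a degree-4 polynomial surface.
    tensions = np.linspace(0.0, 2.0, 50)
    tips = simulate_tip(tensions) + rng.normal(0, 1e-3, tensions.size)
    coeffs = np.polyfit(tensions, tips, deg=4)

    # Online: evaluating the surrogate is a handful of multiply-adds.
    def tip_surrogate(tension):
        return np.polyval(coeffs, tension)

    print("surrogate error at t = 1.3:",
          abs(tip_surrogate(1.3) - simulate_tip(1.3)))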

  12. Optimal weighting in fNL constraints from large scale structure in an idealised case

    NASA Astrophysics Data System (ADS)

    Slosar, Anže

    2009-03-01

    We consider the problem of optimal weighting of tracers of structure for the purpose of constraining the non-Gaussianity parameter fNL. We work within the Fisher matrix formalism, expanded around a fiducial model with fNL = 0, and make several simplifying assumptions. By slicing a general sample into infinitely many samples with different biases, we derive the analytic expression for the relevant Fisher matrix element. We next consider weighting schemes that construct two effective samples from a single sample of tracers with a continuously varying bias. We show that a particularly simple ansatz for the weighting functions can recover all information about fNL in the initial sample that is recoverable using a given bias observable, and that a simple division into two equal samples is considerably suboptimal when sampling of modes is good, but only marginally suboptimal in the limit where Poisson errors dominate.
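
    A toy Fisher forecast along these lines, assuming the standard scale-dependent-bias form delta_b(k) = fNL * beta / k^2 evaluated at the fiducial fNL = 0; the matter-spectrum stand-in, survey volume, beta and shot-noise level are all illustrative placeholders.

    import numpy as np

    b0, beta_b, nbar = 2.0, 1.0e-4, 1.0e-4     # bias, bias response to fNL, density
    V = 1.0e9                                   # survey volume in (Mpc/h)^3, assumed
    k = np.linspace(0.005, 0.1, 200)            # wavenumbers (h/Mpc)
    dk = k[1] - k[0]
    Pm = 2.0e4 / (1.0 + (k / 0.02)**2)          # crude stand-in for the matter P(k)
    Nk = V * 4 * np.pi * k**2 * dk / (2 * np.pi)**3   # modes per k-shell

    P = b0**2 * Pm                              # fiducial tracer power spectrum
    dP = 2 * b0 * (beta_b / k**2) * Pm          # dP/dfNL at fNL = 0
    varP = 2 * (P + 1.0 / nbar)**2 / Nk         # Gaussian + shot-noise variance

    F = np.sum(dP**2 / varP)                    # Fisher element F_{fNL,fNL}
    print(f"forecast sigma(fNL) ~ {1 / np.sqrt(F):.1f}")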

  13. Modelling of the luminescent properties of nanophosphor coatings with different porosity

    NASA Astrophysics Data System (ADS)

    Kubrin, R.; Graule, T.

    2016-10-01

    Coatings of Y2O3:Eu nanophosphor with an effective refractive index of 1.02 were obtained by flame aerosol deposition (FAD). High-pressure cold compaction decreased the layer porosity from 97.3 to 40 vol % and brought about dramatic changes in the photoluminescent performance. Modelling the interdependence between the quantum yield, the decay time of luminescence, and the porosity of the nanophosphor films required a few basic simplifying assumptions. We confirmed that the properties of porous nanostructured coatings are most appropriately described by the nanocrystal cavity model of the radiative decay. All known effective-medium equations resulted in seemingly underestimated values of the effective refractive index. While the best fit was obtained with the linear permittivity mixing rule, the influence of further effects, previously not accounted for, could not be excluded. We discuss the peculiarities in the optical response of nanophosphors and suggest directions for future research.
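
    For comparison, the effective refractive index under the linear permittivity mixing rule (the best-fitting rule above) and under Maxwell Garnett can be computed in a few lines; n = 1.93 for bulk Y2O3 is an assumed textbook value, not taken from the paper.

    import numpy as np

    n_solid, n_air = 1.93, 1.0
    eps_s, eps_h = n_solid**2, n_air**2

    def n_linear(f_solid):
        """Linear permittivity mixing: eps_eff = f*eps_s + (1-f)*eps_air."""
        return np.sqrt(f_solid * eps_s + (1 - f_solid) * eps_h)

    def n_maxwell_garnett(f_solid):
        """Maxwell Garnett with solid inclusions in an air host."""
        a = (eps_s - eps_h) / (eps_s + 2 * eps_h)
        return np.sqrt(eps_h * (1 + 2 * f_solid * a) / (1 - f_solid * a))

    for porosity in (0.973, 0.70, 0.40):
        f = 1.0 - porosity
        print(f"porosity {porosity:.3f}: n_lin = {n_linear(f):.3f}, "
              f"n_MG = {n_maxwell_garnett(f):.3f}")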

  14. Gravitational Radiation of a Vibrating Physical String as a Model for the Gravitational Emission of an Astrophysical Plasma

    NASA Astrophysics Data System (ADS)

    Lewis, Ray A.; Modanese, Giovanni

    Vibrating media offer an important testing ground for reconciling conflicts between General Relativity, Quantum Mechanics and other branches of physics. For sources like a Weber bar, the standard covariant formalism for elastic bodies can be applied. The vibrating string, however, is a source of gravitational waves which requires novel computational techniques, based on the explicit construction of a conserved and renormalized energy-momentum tensor. Renormalization (in a classical sense) is necessary to take into account the effect of external constraints, which affect the emission considerably. Our computation also relaxes usual simplifying assumptions like far-field approximation, spherical or plane wave symmetry, TT gauge and absence of internal interference. In a further step towards unification, the method is then adapted to give the radiation field of a transversal Alfven wave in a rarefied astrophysical plasma, where the tension is produced by an external static magnetic field.

  15. Geothermal reservoir simulation of hot sedimentary aquifer system using FEFLOW®

    NASA Astrophysics Data System (ADS)

    Nur Hidayat, Hardi; Gala Permana, Maximillian

    2017-12-01

    The study presents the simulation of a hot sedimentary aquifer for geothermal utilization. A hot sedimentary aquifer (HSA) is a conduction-dominated hydrothermal play type utilizing a deep aquifer that is heated by near-normal heat flow. One example of an HSA is the Bavarian Molasse Basin in southern Germany. This system typically uses doublet wells: an injection and a production well. The simulation was run for 3650 days of simulation time. The technical feasibility and performance are analysed with regard to the energy extracted under this concept. Several parameters are compared to determine the model performance. Parameters such as reservoir characteristics, temperature information and well information are defined. Several assumptions are also made to simplify the simulation process. The main results of the simulation are the heat period budget, or total extracted heat energy, and the heat rate budget, or heat production rate. A qualitative sensitivity analysis is conducted using five parameters, to each of which lower- and higher-value scenarios are assigned.
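
    A back-of-the-envelope version of the two main outputs named above, assuming a constant production temperature (i.e., ignoring thermal drawdown); the flow rate and temperatures are illustrative, not the model's actual inputs.

    rho_cp = 4.18e6              # volumetric heat capacity of water (J/m^3/K), assumed
    q = 0.05                     # production flow rate (m^3/s), assumed
    T_prod, T_inj = 100.0, 50.0  # production / injection temperature (deg C)

    heat_rate = q * rho_cp * (T_prod - T_inj)     # heat rate budget (W)
    seconds = 3650 * 24 * 3600.0
    heat_period = heat_rate * seconds             # heat period budget (J)

    print(f"heat rate budget: {heat_rate / 1e6:.1f} MW_th")
    print(f"heat period budget: {heat_period / 1e15:.2f} PJ over 3650 days")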

  16. Removal of the Gibbs phenomenon and its application to fast-Fourier-transform-based mode solvers.

    PubMed

    Wangüemert-Pérez, J G; Godoy-Rubio, R; Ortega-Moñux, A; Molina-Fernández, I

    2007-12-01

    A simple strategy for accurately recovering discontinuous functions from their Fourier series coefficients is presented. The aim of the proposed approach, named spectrum splitting (SS), is to remove the Gibbs phenomenon by making use of signal-filtering-based concepts and some properties of the Fourier series. While the technique can be used in a vast range of situations, it is particularly suitable for being incorporated into fast-Fourier-transform-based electromagnetic mode solvers (FFT-MSs), which are known to suffer from very poor convergence rates when applied to situations where the field distributions are highly discontinuous (e.g., silicon-on-insulator photonic wires). The resultant method, SS-FFT-MS, is exhaustively tested under the assumption of a simplified one-dimensional model, clearly showing a dramatic improvement of the convergence rates with respect to the original FFT-based methods.
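
    To see the problem being solved, the sketch below reproduces the Gibbs overshoot on a square wave and tames it with standard Lanczos sigma factors; this illustrates the failure mode that SS addresses, it is not the spectrum-splitting method itself.

    import numpy as np

    N = 64                                   # number of Fourier modes kept
    x = np.linspace(0, 2 * np.pi, 4001)

    def square_partial_sum(x, N, sigma=False):
        """Truncated Fourier series of a unit square wave (odd harmonics)."""
        s = np.zeros_like(x)
        for k in range(1, N + 1, 2):
            w = np.sinc(k / (N + 1)) if sigma else 1.0   # Lanczos sigma factor
            s += w * (4 / np.pi) * np.sin(k * x) / k
        return s

    raw = square_partial_sum(x, N)
    filt = square_partial_sum(x, N, sigma=True)
    print(f"max overshoot, raw: {raw.max() - 1:.3f}")    # ~0.09 (the Gibbs ~9%)
    print(f"max overshoot, sigma-filtered: {filt.max() - 1:.3f}")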

  17. A survey of camera error sources in machine vision systems

    NASA Astrophysics Data System (ADS)

    Jatko, W. B.

    In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.
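
    One standard compensation for the non-uniform illumination and fixed-pattern effects discussed here is flat-field correction with a dark frame and a flat (uniform-target) frame; the synthetic frames below stand in for real camera captures.

    import numpy as np

    rng = np.random.default_rng(5)
    h, w = 64, 64
    yy, xx = np.mgrid[0:h, 0:w]
    # Synthetic lens vignetting: smooth radial falloff in pixel sensitivity.
    vignette = 1.0 - 0.4 * ((xx - w / 2)**2 + (yy - h / 2)**2) / (w / 2)**2
    dark = 10.0 + rng.normal(0, 0.5, (h, w))           # fixed-pattern offset

    def capture(signal):
        """Simulated camera read: vignetting, dark offset, read noise."""
        return signal * vignette + dark + rng.normal(0, 0.5, (h, w))

    raw = capture(100.0 * np.ones((h, w)))             # truly uniform scene
    flat = capture(200.0 * np.ones((h, w)))            # uniform bright target

    gain = np.clip(flat - dark, 1e-6, None)            # per-pixel responsivity
    corrected = (raw - dark) / gain * np.mean(gain)    # flat-field corrected image
    print("raw std:", float(raw.std()), " corrected std:", float(corrected.std()))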

  18. Parameterization of eddy sensible heat transports in a zonally averaged dynamic model of the atmosphere

    NASA Technical Reports Server (NTRS)

    Genthon, Christophe; Le Treut, Herve; Sadourny, Robert; Jouzel, Jean

    1990-01-01

    A Charney-Branscome-based parameterization has been tested as a way of representing the eddy sensible-heat transports missing in a zonally averaged dynamic model (ZADM) of the atmosphere. The ZADM used is a zonally averaged version of a general circulation model (GCM). The parameterized transports in the ZADM are gauged against the corresponding fluxes explicitly simulated in the GCM, using the same zonally averaged boundary conditions in both models. The Charney-Branscome approach neglects stationary eddies and transient barotropic disturbances and relies on a set of simplifying assumptions, including the linear approximation, to describe growing transient baroclinic eddies. Nevertheless, fairly satisfactory results are obtained when the parameterization is performed interactively with the model. Compared with noninteractive tests, a very efficient restoring feedback effect between the modeled zonal-mean climate and the parameterized meridional eddy transport is identified.
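
    For contrast with the Charney-Branscome scheme, the simplest closure a ZADM could use is a diffusive flux-gradient parameterization, [v'T'] = -K d[T]/dy; the diffusivity and temperature profile below are illustrative assumptions, not the scheme tested in the paper.

    import numpy as np

    a = 6.371e6                                  # Earth radius (m)
    lat = np.deg2rad(np.linspace(-85.0, 85.0, 35))
    y = a * lat                                  # meridional coordinate (m)
    T = 288.0 - 30.0 * np.sin(lat)**2            # illustrative zonal-mean T (K)

    K = 2.0e6                                    # eddy diffusivity (m^2/s), assumed
    flux = -K * np.gradient(T, y)                # parameterized [v'T'] (K m/s)

    i = int(np.argmax(flux))                     # strongest northward flux
    print(f"peak northward eddy heat flux {flux[i]:.1f} K m/s "
          f"at {np.rad2deg(lat[i]):.0f} deg latitude")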

  19. Transmission Parameters of the 2001 Foot and Mouth Epidemic in Great Britain

    PubMed Central

    Chis Ster, Irina; Ferguson, Neil M.

    2007-01-01

    Despite intensive ongoing research, key aspects of the spatial-temporal evolution of the 2001 foot and mouth disease (FMD) epidemic in Great Britain (GB) remain unexplained. Here we develop a Markov Chain Monte Carlo (MCMC) method for estimating epidemiological parameters of the 2001 outbreak for a range of simple transmission models. We make the simplifying assumption that infectious farms were completely observed in 2001, equivalent to assuming that farms that were proactively culled but not diagnosed with FMD were not infectious, even if some were infected. We estimate how transmission parameters varied through time, highlighting the impact of the control measures on the progression of the epidemic. We demonstrate statistically significant evidence for assortative contact patterns between animals of the same species. Predictive risk maps of the transmission potential in different geographic areas of GB are presented for the fitted models. PMID:17551582
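
    A minimal Metropolis-Hastings sketch for a single transmission parameter, assuming daily case counts that are Poisson with mean beta*S*I/N; this toy chain shows the MCMC machinery only, the paper's spatial farm-level likelihood being far richer.

    import numpy as np

    rng = np.random.default_rng(6)
    N, beta_true = 10000, 0.3
    S, I = N - 10, 10
    cases, S_hist, I_hist = [], [], []
    for _ in range(15):                          # simulate 15 days of case counts
        lam = beta_true * S * I / N
        c = min(rng.poisson(lam), S)
        cases.append(c)
        S_hist.append(S)
        I_hist.append(I)
        S -= c
        I += c
    cases, S_hist, I_hist = map(np.array, (cases, S_hist, I_hist))

    def log_lik(beta):
        lam = beta * S_hist * I_hist / N          # expected daily counts
        return np.sum(cases * np.log(lam) - lam)  # Poisson log-likelihood (no const.)

    beta, samples = 0.1, []
    ll = log_lik(beta)
    for _ in range(5000):
        prop = beta + rng.normal(0, 0.02)         # random-walk proposal
        if prop > 0:
            ll_prop = log_lik(prop)
            if np.log(rng.uniform()) < ll_prop - ll:
                beta, ll = prop, ll_prop          # accept
        samples.append(beta)

    post = np.array(samples[1000:])               # discard burn-in
    print(f"posterior mean beta = {post.mean():.3f} (truth {beta_true})")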

  20. Mini-mast CSI testbed user's guide

    NASA Technical Reports Server (NTRS)

    Tanner, Sharon E.; Pappa, Richard S.; Sulla, Jeffrey L.; Elliott, Kenny B.; Miserentino, Robert; Bailey, James P.; Cooper, Paul A.; Williams, Boyd L., Jr.; Bruner, Anne M.

    1992-01-01

    The Mini-Mast testbed is a 20 m generic truss highly representative of future deployable trusses for space applications. It is fully instrumented for system identification and active vibrations control experiments and is used as a ground testbed at NASA-Langley. The facility has actuators and feedback sensors linked via fiber optic cables to the Advanced Real Time Simulation (ARTS) system, where user defined control laws are incorporated into generic controls software. The object of the facility is to conduct comprehensive active vibration control experiments on a dynamically realistic large space structure. A primary goal is to understand the practical effects of simplifying theoretical assumptions. This User's Guide describes the hardware and its primary components, the dynamic characteristics of the test article, the control law implementation process, and the necessary safeguards employed to protect the test article. Suggestions for a strawman controls experiment are also included.
