Sample records for simplifying assumptions including

  1. Statistical Issues for Calculating Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Matney, Mark; Bacon, John

    2016-01-01

    A number of statistical tools have been developed over the years for assessing the risk of reentering objects to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. This information, combined with information on the expected ground path of the reentry, is used to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of this analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper builds on previous IAASS work to re-examine many of these theoretical assumptions, including the mathematical basis for the hazard calculations, and to outline the conditions under which the simplifying assumptions hold. This study also employs empirical and theoretical information to test these assumptions, and makes recommendations on how to improve the accuracy of these calculations in the future.
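
    One common simplified form (illustrative here, not necessarily this paper's exact formulation) estimates the expected number of casualties E_c from a footprint-averaged population density and per-fragment casualty areas:

      E_c \approx \bar{\rho}_p \sum_i \left( \sqrt{A_h} + \sqrt{A_i} \right)^2

    where \bar{\rho}_p is the population density averaged over the debris footprint, A_i is the projected area of surviving fragment i, and A_h (roughly 0.36 m^2) is the projected area of a person. The assumptions examined in the paper enter through how \bar{\rho}_p and the footprint distribution in latitude and longitude are computed.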

  2. Lagrangian methods for blood damage estimation in cardiovascular devices--How numerical implementation affects the results.

    PubMed

    Marom, Gil; Bluestein, Danny

    2016-01-01

    This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be taken in the use of Lagrangian models. Ultimately, the appropriate assumptions should be considered based on the physics of the specific case, and sensitivity analyses, similar to the ones presented here, should be employed.
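
    As a minimal sketch of the Lagrangian post-processing step (a generic illustration, not the authors' implementation; the particle counts and stress histories below are synthetic):

      import numpy as np

      def single_passage_sa(tau, dt):
          # Linear stress accumulation along one pathline: SA = sum(tau_k * dt_k)
          return float(np.sum(tau * dt))

      def repeated_passage_sa(tau, dt, n_passages):
          # Naive repeated-passage estimate: assumes every pass through the
          # device sees the same stress history, itself a strong assumption.
          return n_passages * single_passage_sa(tau, dt)

      # 1000 seeded particles, each with its own synthetic stress history (Pa, s)
      rng = np.random.default_rng(0)
      sa = [single_passage_sa(rng.lognormal(3.0, 1.0, 200), np.full(200, 1e-4))
            for _ in range(1000)]
      print(np.mean(sa), np.percentile(sa, 95))  # the seeding pattern shifts these statistics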

  3. Lagrangian methods for blood damage estimation in cardiovascular devices - How numerical implementation affects the results

    PubMed Central

    Marom, Gil; Bluestein, Danny

    2016-01-01

    Summary This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be taken in the use of Lagrangian models. Ultimately, the appropriate assumptions should be considered based on the physics of the specific case, and sensitivity analyses, similar to the ones presented here, should be employed. PMID:26679833

  4. A Mass Tracking Formulation for Bubbles in Incompressible Flow

    DTIC Science & Technology

    2012-10-14

    …incompressible flow to fully nonlinear compressible flow including the effects of shocks and rarefactions, and then subsequently making a number of simplifying assumptions on the air flow…using the ideas from [19] to couple together incompressible flow with fully nonlinear compressible flow including shocks and rarefactions. The results…

  5. Marginal Loss Calculations for the DCOPF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eldridge, Brent; O'Neill, Richard P.; Castillo, Andrea R.

    2016-12-05

    The purpose of this paper is to explain some aspects of including a marginal line loss approximation in the DCOPF. The DCOPF optimizes electric generator dispatch using simplified power flow physics. Since the standard assumptions in the DCOPF include a lossless network, a number of modifications have to be added to the model. Calculating marginal losses allows the DCOPF to optimize the location of power generation, so that generators that are closer to demand centers are relatively cheaper than remote generation. The problem formulations discussed in this paper will simplify many aspects of practical electric dispatch implementations in use today, but will include sufficient detail to demonstrate a few points with regard to the handling of losses.
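
    Schematically, in per-unit quantities under the DC assumptions, total losses and the marginal loss factor at bus i take the form

      L \approx \sum_{\ell} r_{\ell} f_{\ell}^{2} , \qquad \mathrm{LF}_i = \frac{\partial L}{\partial P_i}

    so a generator whose incremental injection increases system losses (larger \mathrm{LF}_i) is effectively more expensive at the margin than one near load. The exact linearization and reference-bus treatment vary across formulations, including those compared in the paper.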

  6. Creating Matched Samples Using Exact Matching. Statistical Report 2016-3

    ERIC Educational Resources Information Center

    Godfrey, Kelly E.

    2016-01-01

    By creating and analyzing matched samples, researchers can simplify their analyses to include fewer covariates, rely less on model assumptions, and thus generate results that may be easier to report and interpret. When two groups essentially "look" the same, it is easier to explore their differences and make comparisons…
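
    A toy sketch of the exact-matching idea (column names are hypothetical; real studies match on many more covariates):

      import pandas as pd

      def exact_match(treated, control, keys):
          # An inner join on the matching keys keeps only records whose
          # covariate values agree exactly between the two groups.
          return treated.merge(control, on=keys)

      treated = pd.DataFrame({"gender": ["F", "M"], "grade": [11, 12], "score_t": [1180, 1240]})
      control = pd.DataFrame({"gender": ["F", "M", "F"], "grade": [11, 12, 12], "score_c": [1150, 1210, 1300]})
      print(exact_match(treated, control, keys=["gender", "grade"]))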

  7. Improving estimates of subsurface gas transport in unsaturated fractured media using experimental Xe diffusion data and numerical methods

    NASA Astrophysics Data System (ADS)

    Ortiz, J. P.; Ortega, A. D.; Harp, D. R.; Boukhalfa, H.; Stauffer, P. H.

    2017-12-01

    Gas transport in unsaturated fractured media plays an important role in a variety of applications, including detection of underground nuclear explosions, transport from volatile contaminant plumes, shallow CO2 leakage from carbon sequestration sites, and methane leaks from hydraulic fracturing operations. Gas breakthrough times are highly sensitive to uncertainties associated with a variety of hydrogeologic parameters, including rock type, fracture aperture, matrix permeability, porosity, and saturation. Furthermore, a couple of simplifying assumptions are typically employed when representing fracture flow and transport. Aqueous phase transport is typically considered insignificant compared to gas phase transport in unsaturated fracture flow regimes, and an assumption of instantaneous dissolution/volatilization of radionuclide gas is commonly used to reduce computational expense. We conduct this research using a twofold approach that combines laboratory gas experimentation and numerical modeling to verify and refine these simplifying assumptions in our current models of gas transport. Using a gas diffusion cell, we are able to measure air pressure transmission through fractured tuff core samples while also measuring Xe gas breakthrough with a mass spectrometer. We can thus create synthetic barometric fluctuations akin to those observed in field tests and measure the associated gas flow through the fracture and matrix pore space for varying degrees of fluid saturation. We then attempt to reproduce the experimental results using numerical models in the PFLOTRAN and FEHM codes to better understand the importance of different parameters and assumptions on gas transport. Our numerical approaches represent both single-phase gas flow with immobile water, as well as full multi-phase transport, in order to test the validity of assuming immobile pore water. Our approaches also include the ability to simulate the reaction equilibrium kinetics of dissolution/volatilization in order to identify when the assumption of instantaneous equilibrium is reasonable. These efforts will aid us in our application of such models to larger, field-scale tests and improve our ability to predict gas breakthrough times.
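
    For intuition on the diffusive component, a useful closed-form check (a textbook solution, not the study's full fracture-flow model) is the semi-infinite 1-D diffusion profile with a constant source at x = 0:

      import numpy as np
      from scipy.special import erfc

      def breakthrough(x, t, D_eff):
          # C/C0 = erfc(x / (2*sqrt(D*t))): constant-concentration boundary,
          # pure diffusion, no advection -- assumptions the experiments probe.
          return erfc(x / (2.0 * np.sqrt(D_eff * t)))

      # Illustrative (assumed) effective Xe diffusivity in partially saturated tuff
      print(breakthrough(x=0.1, t=3600.0, D_eff=1e-6))  # relative concentration at 10 cm after 1 h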

  8. Dynamically rich, yet parameter-sparse models for spatial epidemiology. Comment on "Coupled disease-behavior dynamics on complex networks: A review" by Z. Wang et al.

    NASA Astrophysics Data System (ADS)

    Jusup, Marko; Iwami, Shingo; Podobnik, Boris; Stanley, H. Eugene

    2015-12-01

    Since the very inception of mathematical modeling in epidemiology, scientists have exploited the simplicity ingrained in the assumption of a well-mixed population. For example, perhaps the earliest susceptible-infectious-recovered (SIR) model, developed by L. Reed and W.H. Frost in the 1920s [1], included the well-mixed assumption such that any two individuals in the population could meet each other. The problem was that, unlike many other simplifying assumptions used in epidemiological modeling whose validity holds in one situation or another, well-mixed populations are almost non-existent in reality because the nature of human socio-economic interactions is, for the most part, highly heterogeneous (e.g. [2-6]).
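
    The well-mixed assumption is exactly what makes the classic SIR system closed and low-dimensional; a minimal sketch:

      import numpy as np
      from scipy.integrate import odeint

      def sir(y, t, beta, gamma):
          # Well-mixed: every individual contacts every other at the same
          # rate, so incidence reduces to beta * S * I / N.
          S, I, R = y
          N = S + I + R
          return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

      t = np.linspace(0.0, 160.0, 400)
      S, I, R = odeint(sir, [9990.0, 10.0, 0.0], t, args=(0.3, 0.1)).T
      print(f"peak infectious fraction: {I.max() / 10000:.2f}")

    Network models replace the S*I/N incidence term with sums over actual contact edges, which is precisely where the heterogeneity discussed above enters.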

  9. Analyses of School Commuting Data for Exposure Modeling Purposes

    EPA Science Inventory

    Human exposure models often make the simplifying assumption that school children attend school in the same Census tract where they live. This paper analyzes that assumption and provides information on the temporal and spatial distributions associated with school commuting. The d...

  10. Flux Jacobian Matrices For Equilibrium Real Gases

    NASA Technical Reports Server (NTRS)

    Vinokur, Marcel

    1990-01-01

    Improved formulation includes generalized Roe average and extension to three dimensions. Flux Jacobian matrices derived for use in numerical solutions of conservation-law differential equations of inviscid flows of ideal gases extended to real gases. Real-gas formulation of these matrices retains simplifying assumptions of thermodynamic and chemical equilibrium, but adds effects of vibrational excitation, dissociation, and ionization of gas molecules via general equation of state.
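
    For reference, the ideal-gas Roe average that this formulation generalizes is the density-weighted mean, e.g. for velocity and total enthalpy:

      \tilde{u} = \frac{\sqrt{\rho_L}\, u_L + \sqrt{\rho_R}\, u_R}{\sqrt{\rho_L} + \sqrt{\rho_R}} , \qquad \tilde{H} = \frac{\sqrt{\rho_L}\, H_L + \sqrt{\rho_R}\, H_R}{\sqrt{\rho_L} + \sqrt{\rho_R}}

    For an equilibrium real gas, averaged pressure derivatives must additionally satisfy the jump condition \Delta p = \tilde{p}_{\rho}\, \Delta\rho + \tilde{p}_{\rho e}\, \Delta(\rho e) (a sketch of the consistency constraint, not Vinokur's full construction).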

  11. Large Angle Transient Dynamics (LATDYN) user's manual

    NASA Technical Reports Server (NTRS)

    Abrahamson, A. Louis; Chang, Che-Wei; Powell, Michael G.; Wu, Shih-Chin; Bingel, Bradford D.; Theophilos, Paula M.

    1991-01-01

    A computer code for modeling the large angle transient dynamics (LATDYN) of structures was developed to investigate techniques for analyzing flexible deformation and control/structure interaction problems associated with large angular motions of spacecraft. This type of analysis is beyond the routine capability of conventional analytical tools without simplifying assumptions. In some instances, the motion may be sufficiently slow and the spacecraft (or component) sufficiently rigid to simplify analyses of dynamics and controls by making pseudo-static and/or rigid body assumptions. The LATDYN introduces a new approach to the problem by combining finite element structural analysis, multi-body dynamics, and control system analysis in a single tool. It includes a type of finite element that can deform and rotate through large angles at the same time, and which can be connected to other finite elements either rigidly or through mechanical joints. The LATDYN also provides symbolic capabilities for modeling control systems which are interfaced directly with the finite element structural model. Thus, the nonlinear equations representing the structural model are integrated along with the equations representing sensors, processing, and controls as a coupled system.

  12. The 3D dynamics of the Cosserat rod as applied to continuum robotics

    NASA Astrophysics Data System (ADS)

    Jones, Charles Rees

    2011-12-01

    In the effort to simulate the biologically inspired continuum robot's dynamic capabilities, researchers have been faced with the daunting task of simulating---in real-time---the complete three dimensional dynamics of the "beam-like" structure, which includes the three "stiff" degrees of freedom (transverse and dilational shear). Therefore, researchers have traditionally limited the difficulty of the problem with simplifying assumptions. This study, however, puts forward a solution which makes no simplifying assumptions and trades off only the real-time requirement of the desired solution. The solution is a Finite Difference Time Domain method employing an explicit single-step scheme with cheap right-hand sides. The cheap right-hand sides are the result of a rather ingenious formulation of the classical beam, the Cosserat rod, due first to the Cosserat brothers and later to Stuart S. Antman, which results in five nonlinear but uncoupled equations that require only multiplication and addition. The method is therefore suitable for hardware implementation, thus moving the real-time requirement from a software solution to a hardware solution.

  13. Investigations in a Simplified Bracketed Grid Approach to Metrical Structure

    ERIC Educational Resources Information Center

    Liu, Patrick Pei

    2010-01-01

    In this dissertation, I examine the fundamental mechanisms and assumptions of the Simplified Bracketed Grid Theory (Idsardi 1992) in two ways: first, by comparing it with Parametric Metrical Theory (Hayes 1995), and second, by implementing it in the analysis of several case studies in stress assignment and syllabification. Throughout these…

  14. Stirling Engine External Heat System Design with Heat Pipe Heater.

    DTIC Science & Technology

    1986-07-01

    …Figure 10. However, the evaporator analysis is greatly simplified by making the conservative assumption of constant heat flux. This assumption results in…[nomenclature fragment: Cold Start Data; ROM = density of the metal, g/cm³; CAPM = specific heat of the metal, cal/(g·K); ETHG = effective gauze thickness]…

  15. Non-stationary noise estimation using dictionary learning and Gaussian mixture models

    NASA Astrophysics Data System (ADS)

    Hughes, James M.; Rockmore, Daniel N.; Wang, Yang

    2014-02-01

    Stationarity of the noise distribution is a common assumption in image processing. This assumption greatly simplifies denoising estimators and other model parameters and consequently assuming stationarity is often a matter of convenience rather than an accurate model of noise characteristics. The problematic nature of this assumption is exacerbated in real-world contexts, where noise is often highly non-stationary and can possess time- and space-varying characteristics. Regardless of model complexity, estimating the parameters of noise distributions in digital images is a difficult task, and estimates are often based on heuristic assumptions. Recently, sparse Bayesian dictionary learning methods were shown to produce accurate estimates of the level of additive white Gaussian noise in images with minimal assumptions. We show that a similar model is capable of accurately modeling certain kinds of non-stationary noise processes, allowing for space-varying noise in images to be estimated, detected, and removed. We apply this modeling concept to several types of non-stationary noise and demonstrate the model's effectiveness on real-world problems, including denoising and segmentation of images according to noise characteristics, which has applications in image forensics.
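
    A crude stand-in for such estimators (a robust MAD scale on local first differences, not the dictionary-learning model itself) already exposes spatially varying noise:

      import numpy as np

      def local_noise_map(img, patch=16):
          # Vertical first differences largely cancel smooth image structure,
          # leaving ~sqrt(2)*sigma noise; MAD*1.4826 robustly estimates sigma.
          rows, cols = img.shape[0] // patch, img.shape[1] // patch
          out = np.zeros((rows, cols))
          for i in range(rows):
              for j in range(cols):
                  p = img[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
                  d = np.diff(p, axis=0).ravel() / np.sqrt(2.0)
                  out[i, j] = 1.4826 * np.median(np.abs(d - np.median(d)))
          return out

      # Synthetic non-stationary example: noise level grows across the columns
      rng = np.random.default_rng(1)
      img = rng.normal(0.0, np.linspace(0.05, 0.3, 256), size=(256, 256))
      print(local_noise_map(img).mean(axis=0).round(2))  # increases left to right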

  16. A Comparison of Crater-Size Scaling and Ejection-Speed Scaling During Experimental Impacts in Sand

    NASA Technical Reports Server (NTRS)

    Anderson, J. L. B.; Cintala, M. J.; Johnson, M. K.

    2014-01-01

    Non-dimensional scaling relationships are used to understand various cratering processes including final crater sizes and the excavation of material from a growing crater. The principal assumption behind these scaling relationships is that these processes depend on a combination of the projectile's characteristics, namely its diameter, density, and impact speed. This simplifies the impact event into a single point-source. So long as the process of interest is beyond a few projectile radii from the impact point, the point-source assumption holds. These assumptions can be tested through laboratory experiments in which the initial conditions of the impact are controlled and resulting processes measured directly. In this contribution, we continue our exploration of the congruence between crater-size scaling and ejection-speed scaling relationships. In particular, we examine a series of experimental suites in which the projectile diameter and average grain size of the target are varied.
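
    In the standard point-source (pi-group) formulation, for example, crater size and ejection speeds collapse onto power laws of the form

      \pi_V = K_1\, \pi_2^{-3\mu/(2+\mu)} , \qquad \pi_2 = \frac{g a}{U^2} , \qquad \frac{v(x)}{U} = C_1 \left( \frac{x}{a} \right)^{-1/\mu}

    where \pi_V = \rho V / m is the cratering efficiency, a and U are the projectile radius and speed, g is gravity, and \mu is the material coupling exponent (roughly 0.4 for sand). These are generic scaling forms from the literature; the experiments ask whether a single \mu fits both the crater-size and the ejection-speed data.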

  17. iGen: An automated generator of simplified models with provable error bounds.

    NASA Astrophysics Data System (ADS)

    Tang, D.; Dobbie, S.

    2009-04-01

    Climate models employ various simplifying assumptions and parameterisations in order to increase execution speed. However, in order to draw conclusions about the Earth's climate from the results of a climate simulation, it is necessary to have information about the error that these assumptions and parameterisations introduce. A novel computer program, called iGen, is being developed which automatically generates fast, simplified models by analysing the source code of a slower, high resolution model. The resulting simplified models have provable bounds on error compared to the high resolution model and execute at speeds that are typically orders of magnitude faster. iGen's input is a definition of the prognostic variables of the simplified model, a set of bounds on acceptable error and the source code of a model that captures the behaviour of interest. In the case of an atmospheric model, for example, this would be a global cloud resolving model with very high resolution. Although such a model would execute far too slowly to be used directly in a climate model, iGen never executes it. Instead, it converts the code of the resolving model into a mathematical expression which is then symbolically manipulated and approximated to form a simplified expression. This expression is then converted back into a computer program and output as a simplified model. iGen also derives and reports formal bounds on the error of the simplified model compared to the resolving model. These error bounds are always maintained below the user-specified acceptable error. Results will be presented illustrating the success of iGen's analysis of a number of example models. These extremely encouraging results have led on to work which is currently underway to analyse a cloud resolving model and so produce an efficient parameterisation of moist convection with formally bounded error.

  18. Pendulum Motion and Differential Equations

    ERIC Educational Resources Information Center

    Reid, Thomas F.; King, Stephen C.

    2009-01-01

    A common example of real-world motion that can be modeled by a differential equation, and one easily understood by the student, is the simple pendulum. Simplifying assumptions are necessary for closed-form solutions to exist, and frequently there is little discussion of the impact if those assumptions are not met. This article presents a…
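
    For example, the full equation and its small-angle simplification are

      \ddot{\theta} + \frac{g}{L} \sin\theta = 0 \quad\rightarrow\quad \ddot{\theta} + \frac{g}{L} \theta = 0 , \qquad T_0 = 2\pi \sqrt{L/g}

    while the exact period is T = 4\sqrt{L/g}\, K(\sin(\theta_0/2)), with K the complete elliptic integral of the first kind; at \theta_0 = 30 degrees the small-angle period is already about 1.7% too short.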

  19. On the coupling of fluid dynamics and electromagnetism at the top of the earth's core

    NASA Technical Reports Server (NTRS)

    Benton, E. R.

    1985-01-01

    A kinematic approach to short-term geomagnetism has recently been based upon pre-Maxwell frozen-flux electromagnetism. A complete dynamic theory requires coupling fluid dynamics to electromagnetism. A geophysically plausible simplifying assumption for the vertical vorticity balance, namely that the vertical Lorentz torque is negligible, is introduced and its consequences are developed. The simplified coupled magnetohydrodynamic system is shown to conserve a variety of magnetic and vorticity flux integrals. These provide constraints on eligible models for the geomagnetic main field, its secular variation, and the horizontal fluid motions at the top of the core, and so permit a number of tests of the underlying assumptions.
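
    The frozen-flux starting point referred to here is usually written as the radial induction equation at the top of the core,

      \frac{\partial B_r}{\partial t} + \nabla_H \cdot (\mathbf{u}_H B_r) = 0

    which conserves the magnetic flux through any patch bounded by a null-flux curve; the added assumption of negligible vertical Lorentz torque is what yields the analogous conserved vorticity integrals.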

  20. Data reduction of room tests for zone model validation

    Treesearch

    M. Janssens; H. C. Tran

    1992-01-01

    Compartment fire zone models are based on many simplifying assumptions, in particular that gases stratify in two distinct layers. Because of these assumptions, certain model output is in a form unsuitable for direct comparison to measurements made in full-scale room tests. The experimental data must first be reduced and transformed to be compatible with the model...

  1. On numerical modeling of one-dimensional geothermal histories

    USGS Publications Warehouse

    Haugerud, R.A.

    1989-01-01

    Numerical models of one-dimensional geothermal histories are one way of understanding the relations between tectonics and transient thermal structure in the crust. Such models can be powerful tools for interpreting geochronologic and thermobarometric data. A flexible program to calculate these models on a microcomputer is available and examples of its use are presented. Potential problems with this approach include the simplifying assumptions that are made, limitations of the numerical techniques, and the neglect of convective heat transfer. © 1989.
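
    A minimal sketch of the kind of calculation such a program performs (generic explicit conduction, not the USGS code itself):

      import numpy as np

      def geotherm_step(T, kappa, dz, dt, q_base, k):
          # Explicit FTCS update for 1-D conduction; stable only if
          # kappa*dt/dz**2 <= 0.5, one of the numerical limitations noted above.
          r = kappa * dt / dz**2
          assert r <= 0.5, "explicit scheme unstable"
          T = T.copy()
          T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
          T[0] = 0.0                       # fixed surface temperature (deg C)
          T[-1] = T[-2] + q_base * dz / k  # basal heat-flux boundary condition
          return T

      z = np.linspace(0.0, 30e3, 301)      # 30 km column, dz = 100 m
      T = 20e-3 * z                        # initial 20 C/km geotherm
      for _ in range(1000):                # ~95 kyr with dt = 3e9 s
          T = geotherm_step(T, kappa=1e-6, dz=100.0, dt=3e9, q_base=0.06, k=2.5)
      print(T[150])                        # temperature at 15 km depth

    Note that the sketch conducts heat only; convective transfer is exactly the neglected term mentioned above.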

  2. Guidelines and Metrics for Assessing Space System Cost Estimates

    DTIC Science & Technology

    2008-01-01

    …analysis time, reuse tooling, models, mechanical ground-support equipment [MGSE]), high mass margin (simplifying assumptions used to bound solution…engineering environment changes, high reuse of architecture, design, tools, code, test scripts, and commercial real-time operating systems, simplified life…[acronym list fragment: …Coronal Explorer; TWTA = traveling wave tube amplifier; USAF = U.S. Air Force; USCM = Unmanned Space Vehicle Cost Model; USN = U.S. Navy; UV = ultraviolet; UVOT = UV…]

  3. Statistical Issues for Uncontrolled Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Matney, Mark

    2008-01-01

    A number of statistical tools have been developed over the years for assessing the risk of reentering objects to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. The statistical tools use this information to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of the analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper looks at a number of these theoretical assumptions, examining the mathematical basis for the hazard calculations, and outlining the conditions under which the simplifying assumptions hold. In addition, this paper will also outline some new tools for assessing ground hazard risk in useful ways. Also, this study is able to make use of a database of known uncontrolled reentry locations measured by the United States Department of Defense. By using data from objects that were in orbit more than 30 days before reentry, sufficient time is allowed for the orbital parameters to be randomized in the way the models are designed to compute. The predicted ground footprint distributions of these objects are based on the theory that their orbits behave basically like simple Kepler orbits. However, there are a number of factors - including the effects of gravitational harmonics, the effects of the Earth's equatorial bulge on the atmosphere, and the rotation of the Earth and atmosphere - that could cause them to diverge from simple Kepler orbit behavior and change the ground footprints. The measured latitude and longitude distributions of these objects provide data that can be directly compared with the predicted distributions, providing a fundamental empirical test of the model assumptions.
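
    For a circular Kepler orbit at inclination i, the time-weighted latitude density underlying such footprint predictions is

      f(\phi) = \frac{\cos\phi}{\pi \sqrt{\sin^2 i - \sin^2 \phi}} , \qquad |\phi| < i

    which peaks sharply near \phi = \pm i; the perturbations listed above (gravitational harmonics, the atmosphere's equatorial bulge, rotation) are what can pull the observed distributions away from this idealized curve.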

  4. Practical modeling approaches for geological storage of carbon dioxide.

    PubMed

    Celia, Michael A; Nordbotten, Jan M

    2009-01-01

    The relentless increase of anthropogenic carbon dioxide emissions and the associated concerns about climate change have motivated new ideas about carbon-constrained energy production. One technological approach to control carbon dioxide emissions is carbon capture and storage, or CCS. The underlying idea of CCS is to capture the carbon before it is emitted to the atmosphere and store it somewhere other than the atmosphere. Currently, the most attractive option for large-scale storage is in deep geological formations, including deep saline aquifers. Many physical and chemical processes can affect the fate of the injected CO2, with the overall mathematical description of the complete system becoming very complex. Our approach to the problem has been to reduce complexity as much as possible, so that we can focus on the few truly important questions about the injected CO2, most of which involve leakage out of the injection formation. Toward this end, we have established a set of simplifying assumptions that allow us to derive simplified models, which can be solved numerically or, for the most simplified cases, analytically. These simplified models allow calculation of solutions to large-scale injection and leakage problems in ways that traditional multicomponent multiphase simulators cannot. Such simplified models provide important tools for system analysis, screening calculations, and overall risk-assessment calculations. We believe this is a practical and important approach to model geological storage of carbon dioxide. It also serves as an example of how complex systems can be simplified while retaining the essential physics of the problem.

  5. Verification of a Byzantine-Fault-Tolerant Self-stabilizing Protocol for Clock Synchronization

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2008-01-01

    This paper presents the mechanical verification of a simplified model of a rapid Byzantine-fault-tolerant self-stabilizing protocol for distributed clock synchronization systems. This protocol does not rely on any assumptions about the initial state of the system except for the presence of sufficient good nodes, thus making the weakest possible assumptions and producing the strongest results. This protocol tolerates bursts of transient failures, and deterministically converges within a time bound that is a linear function of the self-stabilization period. A simplified model of the protocol is verified using the Symbolic Model Verifier (SMV). The system under study consists of 4 nodes, where at most one of the nodes is assumed to be Byzantine faulty. The model checking effort is focused on verifying correctness of the simplified model of the protocol in the presence of a permanent Byzantine fault as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period. Although model checking results of the simplified model of the protocol confirm the theoretical predictions, these results do not necessarily confirm that the protocol solves the general case of this problem. Modeling challenges of the protocol and the system are addressed. A number of abstractions are utilized in order to reduce the state space.

  6. Microphysical response of cloud droplets in a fluctuating updraft. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Harding, D. D.

    1977-01-01

    The effect of a fluctuating updraft upon a distribution of cloud droplets is examined. Computations are performed for fourteen vertical velocity patterns; each allows a closed parcel of cloud air to undergo downward as well as upward motion. Droplet solution and curvature effects are included. The classical equations for the growth rate of an individual droplet by vapor condensation rely on simplifying assumptions. Those assumptions are isolated and examined. A unique approach is presented in which all energy sources and sinks of a droplet may be considered; it is termed the explicit model. It is speculated that the explicit model may enhance the growth of large droplets at greater heights. Such a model is beneficial to the studies of pollution scavenging and acid rain.
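
    In a standard textbook form, the classical growth-rate equation in question reads

      r \frac{dr}{dt} = \frac{S - 1 - a/r + b/r^3}{F_k + F_d}

    where S is the saturation ratio, a/r and b/r^3 are the curvature and solute corrections retained in this study, and F_k and F_d bundle the heat-conduction and vapor-diffusion terms; the quasi-steady energy-balance assumptions hidden in F_k are among those the explicit model relaxes.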

  7. Fitness extraction and the conceptual foundations of political biology.

    PubMed

    Boari, Mircea

    2005-01-01

    In well known formulations, political science, classical and neoclassical economics, and political economy have recognized as foundational a human impulse toward self-preservation. To employ this concept, modern social-sciences theorists have made simplifying assumptions about human nature and have then built elaborately upon their more incisive simplifications. Advances in biology, including advances in evolutionary theory, notably inclusive-fitness theory, have for decades now encouraged the reconsideration of such assumptions and, more ambitiously, the reconciliation of the social and life sciences. I ask if this reconciliation is feasible and test a path to the unification of politics and biology, called here "political biology." Two new notions, "fitness extraction" and "fitness exchange," are defined, then differentiated from each other, and lastly contrasted to cooperative gaming, the putative essential element of economics.

  8. HZETRN: A heavy ion/nucleon transport code for space radiations

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Chun, Sang Y.; Badavi, Forooz F.; Townsend, Lawrence W.; Lamkin, Stanley L.

    1991-01-01

    The galactic heavy ion transport code (GCRTRN) and the nucleon transport code (BRYNTRN) are integrated into a code package (HZETRN). The code package is computationally efficient and capable of operating in an engineering design environment for manned deep space mission studies. The nuclear data set used by the code is discussed, including current limitations. Although the heavy ion nuclear cross sections are assumed constant, the nucleon-nuclear cross sections of BRYNTRN with full energy dependence are used. The relation of the final code to the Boltzmann equation is discussed in the context of simplifying assumptions. Error generation and propagation are discussed, and comparison is made with simplified analytic solutions to test the numerical accuracy of the final results. A brief discussion of biological issues and their impact on fundamental developments in shielding technology is given.
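
    In the straight-ahead, continuous-slowing-down approximation used by such codes, the transport equation reduces schematically to

      \left[ \frac{\partial}{\partial x} - \frac{\partial}{\partial E} S_j(E) + \sigma_j(E) \right] \phi_j(x,E) = \sum_k \int_E^{\infty} \sigma_{jk}(E,E')\, \phi_k(x,E')\, dE'

    with \phi_j the flux of ion species j, S_j the stopping power, \sigma_j the total nuclear cross section, and \sigma_{jk} the production cross sections (a schematic of the marching form, not the code's exact discretization).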

  9. Polymer flammability

    DOT National Transportation Integrated Search

    2005-05-01

    This report provides an overview of polymer flammability from a material science perspective and describes currently accepted test methods to quantify burning behavior. Simplifying assumptions about the gas and condensed phase processes of flaming co...

  10. Extended Analytic Device Optimization Employing Asymptotic Expansion

    NASA Technical Reports Server (NTRS)

    Mackey, Jonathan; Sehirlioglu, Alp; Dynsys, Fred

    2013-01-01

    Analytic optimization of a thermoelectric junction often introduces several simplifying assumptions, including constant material properties, fixed known hot and cold shoe temperatures, and thermally insulated leg sides. In fact all of these simplifications will have an effect on device performance, ranging from negligible to significant depending on conditions. Numerical methods, such as Finite Element Analysis or iterative techniques, are often used to perform more detailed analysis and account for these simplifications. While numerical methods may stand as a suitable solution scheme, they are weak in gaining physical understanding and only serve to optimize through iterative searching techniques. Analytic and asymptotic expansion techniques can be used to solve the governing system of thermoelectric differential equations with fewer or less severe assumptions than the classic case. Analytic methods can provide meaningful closed form solutions and generate better physical understanding of the conditions for when simplifying assumptions may be valid. In obtaining the analytic solutions, a set of dimensionless parameters, which characterize all thermoelectric couples, is formulated and provides the limiting cases for validating assumptions. Presentation includes optimization of both classic rectangular couples as well as practically and theoretically interesting cylindrical couples using optimization parameters physically meaningful to a cylindrical couple. Solutions incorporate the physical behavior for i) thermal resistance of hot and cold shoes, ii) variable material properties with temperature, and iii) lateral heat transfer through leg sides.

  11. How to Decide on Modeling Details: Risk and Benefit Assessment.

    PubMed

    Özilgen, Mustafa

    Mathematical models based on thermodynamic, kinetic, heat, and mass transfer analysis are central to this chapter. Microbial growth, death, enzyme inactivation models, and the modeling of material properties, including those pertinent to conduction and convection heating, mass transfer, such as diffusion and convective mass transfer, and thermodynamic properties, such as specific heat, enthalpy, and Gibbs free energy of formation and specific chemical exergy are also needed in this task. The origins, simplifying assumptions, and uses of model equations are discussed in this chapter, together with their benefits. The simplified forms of these models are sometimes referred to as "laws," such as "the first law of thermodynamics" or "Fick's second law." Starting a modeling study with such "laws" without considering the conditions under which they are valid runs the risk of ending up with erroneous conclusions. On the other hand, models started with fundamental concepts and simplified with appropriate considerations may offer explanations for the phenomena which may not be obtained just with measurements or unprocessed experimental data. The discussion presented here is strengthened with case studies and references to the literature.
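
    As an example of checking a "law" against its conditions of validity, Fick's second law in one dimension,

      \frac{\partial C}{\partial t} = D \frac{\partial^2 C}{\partial x^2}

    already assumes a constant diffusivity, a dilute solute, and no convective transport; in a heated, shrinking material any of these can fail, which is exactly the kind of unexamined simplification the chapter warns against.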

  12. Critical assessment of inverse gas chromatography as means of assessing surface free energy and acid-base interaction of pharmaceutical powders.

    PubMed

    Telko, Martin J; Hickey, Anthony J

    2007-10-01

    Inverse gas chromatography (IGC) has been employed as a research tool for decades. Despite this record of use and proven utility in a variety of applications, the technique is not routinely used in pharmaceutical research. In other fields the technique has flourished. IGC is experimentally relatively straightforward, but analysis requires that certain theoretical assumptions are satisfied. The assumptions made to acquire some of the recently reported data are somewhat modified compared to initial reports. Most publications in the pharmaceutical literature have made use of a simplified equation for the determination of acid/base surface properties resulting in parameter values that are inconsistent with prior methods. In comparing the surface properties of different batches of alpha-lactose monohydrate, new data has been generated and compared with literature to allow critical analysis of the theoretical assumptions and their importance to the interpretation of the data. The commonly used (simplified) approach was compared with the more rigorous approach originally outlined in the surface chemistry literature. (c) 2007 Wiley-Liss, Inc.

  13. The influence of computational assumptions on analysing abdominal aortic aneurysm haemodynamics.

    PubMed

    Ene, Florentina; Delassus, Patrick; Morris, Liam

    2014-08-01

    The variation in computational assumptions for analysing abdominal aortic aneurysm haemodynamics can influence the desired output results and computational cost. Such assumptions for abdominal aortic aneurysm modelling include static/transient pressures, steady/transient flows and rigid/compliant walls. Six computational methods and these various assumptions were simulated and compared within a realistic abdominal aortic aneurysm model with and without intraluminal thrombus. A full transient fluid-structure interaction was required to analyse the flow patterns within the compliant abdominal aortic aneurysms models. Rigid wall computational fluid dynamics overestimates the velocity magnitude by as much as 40%-65% and the wall shear stress by 30%-50%. These differences were attributed to the deforming walls which reduced the outlet volumetric flow rate for the transient fluid-structure interaction during the majority of the systolic phase. Static finite element analysis accurately approximates the deformations and von Mises stresses when compared with transient fluid-structure interaction. Simplifying the modelling complexity reduces the computational cost significantly. In conclusion, the deformation and von Mises stress can be approximately found by static finite element analysis, while for compliant models a full transient fluid-structure interaction analysis is required for acquiring the fluid flow phenomenon. © IMechE 2014.

  14. International Conference on the Methods of Aerophysical Research 98 "ICMAR 98". Proceedings, Part 1

    DTIC Science & Technology

    1998-01-01

    …pumping air through the device and air drying due to vapour condensation on cooled surfaces. In this report, approximate estimates are presented…picture is used for the flow field between disks and for water vapor condensation on cooled moving surfaces. Shown in Fig. 1 is a simplified flow…frequency of disk rotation), thus breaking away from channel walls. Regarding the condensation process, a number of the usual simplifying assumptions are made…

  15. Assessment of ecotoxicological risks related to depositing dredged materials from canals in northern France on soil.

    PubMed

    Perrodin, Yves; Babut, Marc; Bedell, Jean-Philippe; Bray, Marc; Clement, Bernard; Delolme, Cécile; Devaux, Alain; Durrieu, Claude; Garric, Jeanne; Montuelle, Bernard

    2006-08-01

    The implementation of an ecological risk assessment framework is presented for dredged material deposits on soil close to a canal and groundwater, and tested with sediment samples from canals in northern France. This framework includes two steps: a simplified risk assessment based on contaminant concentrations and a detailed risk assessment based on toxicity bioassays and column leaching tests. The tested framework includes three related assumptions: (a) effects on plants (Lolium perenne L.), (b) effects on aquatic organisms (Escherichia coli, Pseudokirchneriella subcapitata, Ceriodaphnia dubia, and Xenopus laevis) and (c) effects on groundwater contamination. Several exposure conditions were tested using standardised bioassays. According to the specific dredged material tested, the three assumptions were more or less discriminatory, soil and groundwater pollution being the most sensitive. Several aspects of the assessment procedure must now be improved, in particular assessment endpoint design for risks to ecosystems (e.g., integration of pollutant bioaccumulation), bioassay protocols and column leaching test design.

  16. Cost-effectiveness of human papillomavirus vaccination in the United States.

    PubMed

    Chesson, Harrell W; Ekwueme, Donatus U; Saraiya, Mona; Markowitz, Lauri E

    2008-02-01

    We describe a simplified model, based on the current economic and health effects of human papillomavirus (HPV), to estimate the cost-effectiveness of HPV vaccination of 12-year-old girls in the United States. Under base-case parameter values, the estimated cost per quality-adjusted life year gained by vaccination in the context of current cervical cancer screening practices in the United States ranged from $3,906 to $14,723 (2005 US dollars), depending on factors such as whether herd immunity effects were assumed; the types of HPV targeted by the vaccine; and whether the benefits of preventing anal, vaginal, vulvar, and oropharyngeal cancers were included. The results of our simplified model were consistent with published studies based on more complex models when key assumptions were similar. This consistency is reassuring because models of varying complexity will be essential tools for policy makers in the development of optimal HPV vaccination strategies.
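
    The headline metric is the standard incremental cost-effectiveness ratio,

      \mathrm{ICER} = \frac{C_{\mathrm{vaccination}} - C_{\mathrm{screening\ only}}}{\mathrm{QALY}_{\mathrm{vaccination}} - \mathrm{QALY}_{\mathrm{screening\ only}}}

    so, for instance, an added cost of $1,000 per person with a gain of 0.1 quality-adjusted life years works out to $10,000 per QALY, inside the range quoted above.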

  17. Predator-prey Encounter Rates in Turbulent Environments: Consequences of Inertia Effects and Finite Sizes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pecseli, H. L.; Trulsen, J.

    2009-10-08

    Experimental as well as theoretical studies have demonstrated that turbulence can play an important role for the biosphere in marine environments, in particular by affecting prey-predator encounter rates. Reference models for the encounter rates rely on simplifying assumptions of predators and prey being described as point particles moving passively with the local flow velocity. Based on simple arguments that can be tested experimentally, we propose corrections to the standard expression for the encounter rates, where now finite sizes and Stokes drag effects are included.
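
    In schematic form, such reference models take the encounter rate per predator as

      Z = \pi R^2\, n\, \langle u_{\mathrm{rel}} \rangle

    with R the detection radius, n the prey number density, and \langle u_{\mathrm{rel}} \rangle an effective relative speed combining organism motility with the turbulent velocity at separation R; the proposed corrections adjust this point-particle picture for finite organism size and Stokes drag.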

  18. Fission product ion exchange between zeolite and a molten salt

    NASA Astrophysics Data System (ADS)

    Gougar, Mary Lou D.

    The electrometallurgical treatment of spent nuclear fuel (SNF) has been developed at Argonne National Laboratory (ANL) and has been demonstrated through processing the sodium-bonded SNF from the Experimental Breeder Reactor-II in Idaho. In this process, components of the SNF, including U and species more chemically active than U, are oxidized into a bath of lithium-potassium chloride (LiCl-KCl) eutectic molten salt. Uranium is removed from the salt solution by electrochemical reduction. The noble metals and inactive fission products from the SNF remain as solids and are melted into a metal waste form after removal from the molten salt bath. The remaining salt solution contains most of the fission products and transuranic elements from the SNF. One technique that has been identified for removing these fission products and extending the usable life of the molten salt is ion exchange with zeolite A. A model has been developed and tested for its ability to describe the ion exchange of fission product species between zeolite A and a molten salt bath used for pyroprocessing of spent nuclear fuel. The model assumes (1) a system at equilibrium, (2) immobilization of species from the process salt solution via both ion exchange and occlusion in the zeolite cage structure, and (3) chemical independence of the process salt species. The first assumption simplifies the description of this physical system by eliminating the complications of including time-dependent variables. An equilibrium state between species concentrations in the two exchange phases is a common basis for ion exchange models found in the literature. Assumption two is non-simplifying with respect to the mathematical expression of the model. Two Langmuir-like fractional terms (one for each mode of immobilization) compose each equation describing each salt species. The third assumption offers great simplification over more traditional ion exchange modeling, in which interaction of solvent species with each other is considered. (Abstract shortened by UMI.)
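
    Schematically, and with illustrative symbols rather than the dissertation's own notation, each salt species i would then be described by a pair of Langmuir-like fractional terms,

      q_i = \frac{Q^{\mathrm{ex}} K_i c_i}{1 + K_i c_i} + \frac{Q^{\mathrm{oc}} K_i' c_i}{1 + K_i' c_i}

    one for ion exchange and one for occlusion in the cage structure; treating species independently (assumption 3) is what keeps cross-species terms out of the denominators.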

  19. Novel Discretization Schemes for the Numerical Simulation of Membrane Dynamics

    DTIC Science & Technology

    2012-09-13

    Experimental data therefore plays a key role in validation. A wide variety of methods for building a simulation that meets the listed requirements are…Despite the intrinsic nonlinearity of true membranes, simplifying assumptions may be appropriate for some applications. Based on these possible assumptions…particles determines the kinetic energy of the system. Mass lumping at the particles is intrinsic (the consistent mass treatment of FEM is not an…

  20. Longitudinal stability in relation to the use of an automatic pilot

    NASA Technical Reports Server (NTRS)

    Klemin, Alexander; Pepper, Perry A; Wittner, Howard A

    1938-01-01

    The effect of restraint in pitching introduced by an automatic pilot upon the longitudinal stability of an airplane has been studied. Customary simplifying assumptions have been made in setting down the equations of motion, and the results of computations based on the simplified equations are presented to show the effect of an automatic pilot installed in an airplane of known dimensions and characteristics. The equations developed have been applied by making calculations for a Clark biplane and a Fairchild 22 monoplane.

  1. Simplified analysis of a generalized bias test for fabrics with two families of inextensible fibres

    NASA Astrophysics Data System (ADS)

    Cuomo, M.; dell'Isola, F.; Greco, L.

    2016-06-01

    Two tests for woven fabrics with orthogonal fibres are examined using simplified kinematic assumptions. The aim is to analyse how different constitutive assumptions may affect the response of the specimen. The fibres are considered inextensible, and the kinematics of 2D continua with inextensible chords due to Rivlin is adopted. In addition to two forms of strain energy depending on the shear deformation, two forms of energy depending on the gradient of shear are also examined. It is shown that this energy can account for the bending of the fibres. In addition to the standard bias extension test, a modified test has been examined, in which the head of the specimen is rotated rather than translated. In this case more bending occurs, so that the results of the simulations carried out with the different energy models adopted differ more than what has been found for the BE test.

  2. From puddles to planet: modeling approaches to vector-borne diseases at varying resolution and scale.

    PubMed

    Eckhoff, Philip A; Bever, Caitlin A; Gerardin, Jaline; Wenger, Edward A; Smith, David L

    2015-08-01

    Since the original Ross-Macdonald formulations of vector-borne disease transmission, there has been a broad proliferation of mathematical models of vector-borne disease, but many of these models retain most or all of the simplifying assumptions of the original formulations. Recently, there has been a new expansion of mathematical frameworks that contain explicit representations of the vector life cycle including aquatic stages, multiple vector species, host heterogeneity in biting rate, realistic vector feeding behavior, and spatial heterogeneity. In particular, there are now multiple frameworks for spatially explicit dynamics with movements of vector, host, or both. These frameworks are flexible and powerful, but require additional data to take advantage of these features. For a given question posed, utilizing a range of models with varying complexity and assumptions can provide a deeper understanding of the answers derived from models. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
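
    The original formulation's simplifying assumptions are all visible in its basic reproduction number,

      R_0 = \frac{m a^2 b c\, e^{-\mu n}}{r \mu}

    with m mosquitoes per host, a a uniform biting rate, b and c the per-bite transmission probabilities, \mu the adult mosquito mortality, n the extrinsic incubation period, and r the human recovery rate: homogeneous biting, no aquatic stages, no space. The newer frameworks described above replace these constants with explicit structure.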

  3. Measuring Spatial Infiltration in Stormwater Control Measures: Results and Implications

    EPA Science Inventory

    This presentation will provide background information on research conducted by EPA-ORD on the use of soil moisture sensors in bioretention/bioinfiltration technologies to evaluate infiltration mechanisms and compares monitoring results to simplified modeling assumptions. A serie...

  4. Quantifying and Disaggregating Consumer Purchasing Behavior for Energy Systems Modeling

    EPA Science Inventory

    Consumer behaviors such as energy conservation, adoption of more efficient technologies, and fuel switching represent significant potential for greenhouse gas mitigation. Current efforts to model future energy outcomes have tended to use simplified economic assumptions ...

  5. Design, dynamics and control of an Adaptive Singularity-Free Control Moment Gyroscope actuator for microspacecraft Attitude Determination and Control System

    NASA Astrophysics Data System (ADS)

    Viswanathan, Sasi Prabhakaran

    Design, dynamics, control and implementation of a novel spacecraft attitude control actuator called the "Adaptive Singularity-free Control Moment Gyroscope" (ASCMG) are presented in this dissertation. In order to construct a comprehensive attitude dynamics model of a spacecraft with internal actuators, the dynamics of a spacecraft with an ASCMG is obtained in the framework of geometric mechanics using the principles of variational mechanics. The resulting dynamics is a general and complete model, as it relaxes the simplifying assumptions made in prior literature on Control Moment Gyroscopes (CMGs), and it also addresses the adaptive parameters in the dynamics formulation. The simplifying assumptions include perfect axisymmetry of the rotor and gimbal structures, perfect alignment of the centers of mass of the gimbal and the rotor, etc. This set of simplifying assumptions imposed on the design and dynamics of CMGs leads to adverse effects on their performance and results in high manufacturing cost. The dynamics so obtained shows the complex nonlinear coupling between the internal degrees of freedom associated with an ASCMG and the spacecraft bus's attitude motion. By default, the general ASCMG cluster can function as a Variable Speed Control Moment Gyroscope, and can be reduced to function in CMG mode by spinning the rotor at constant speed; it is shown that even when operated in CMG mode, the cluster can be free from kinematic singularities. This dynamics model is then extended to include the effects of multiple ASCMGs placed in the spacecraft bus, and sufficient conditions for non-singular ASCMG cluster configurations are obtained to operate the cluster both in VSCMG and CMG modes. The general dynamics model of the ASCMG is then reduced to that of conventional VSCMGs and CMGs by imposing the standard set of simplifying assumptions used in prior literature. The adverse effects of the simplifying assumptions that lead to the complexities in conventional CMG design, and how they lead to CMG singularities, are described. General ideas on control of the angular momentum of the spacecraft using changes in the momentum variables of a finite number of ASCMGs are provided. Control schemes for agile and precise attitude maneuvers using an ASCMG cluster in the absence of external torques, and when the total angular momentum of the spacecraft is zero, are presented for both constant speed and variable speed modes. A Geometric Variational Integrator (GVI) that preserves the geometry of the state space and the conserved norm of the total angular momentum is constructed for numerical simulation and microcontroller implementation of the control scheme. The GVI is obtained by discretizing the Lagrangian of the multibody system, in which the rigid body attitude is globally represented on the Lie group of rigid body rotations. Hardware and software architecture of a novel spacecraft Attitude Determination and Control System (ADCS) based on commercial smartphones and a bare minimum hardware prototype of an ASCMG using low cost COTS components is also described. A lightweight, dynamics model-free Variational Attitude Estimator (VAE) suitable for smartphone implementation is employed for attitude determination, and the attitude control is performed by ASCMG actuators. The VAE scheme presented here is implemented and validated onboard an Unmanned Aerial Vehicle (UAV) platform and the real time performance is analyzed.
On-board sensing, data acquisition, data uplink/downlink, state estimation and real-time feedback control objectives can be performed using this novel spacecraft ADCS. The mechatronics realization of the attitude determination through variational attitude estimation scheme and control implementation using ASCMG actuators are presented here. Experimental results of the attitude estimation (filtering) scheme using smartphone sensors as an Inertial Measurement Unit (IMU) on the Hardware In the Loop (HIL) simulator testbed are given. These results, obtained in the Spacecraft Guidance, Navigation and Control Laboratory at New Mexico State University, demonstrate the performance of this estimation scheme with the noisy raw data from the smartphone sensors. Keywords: Spacecraft, momentum exchange devices, control moment gyroscope, variational mechanics, geometric mechanics, variational integrators, attitude determination, attitude control, ADCS, estimation, ASCMG, VSCMG, cubesat, mechatronics, smartphone, Android, MEMS sensor, embedded programming, microcontroller, brushless DC drives, HIL simulation.

  6. Relating color working memory and color perception.

    PubMed

    Allred, Sarah R; Flombaum, Jonathan I

    2014-11-01

    Color is the most frequently studied feature in visual working memory (VWM). Oddly, much of this work de-emphasizes perception, instead making simplifying assumptions about the inputs served to memory. We question these assumptions in light of perception research, and we identify important points of contact between perception and working memory in the case of color. Better characterization of its perceptual inputs will be crucial for elucidating the structure and function of VWM. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. INTERNAL DOSE AND RESPONSE IN REAL-TIME.

    EPA Science Inventory

    Abstract: Rapid temporal fluctuations in exposure may occur in a number of situations such as accidents or other unexpected acute releases of airborne substances. Often risk assessments overlook temporal exposure patterns under simplifying assumptions such as the use of time-wei...

  8. Impact buckling of thin bars in the elastic range for any end condition

    NASA Technical Reports Server (NTRS)

    Taub, Josef

    1934-01-01

    Following a qualitative discussion of the complicated process involved in a short-period longitudinal force applied to an originally not quite straight bar, the actual process is substituted by an idealized process for the purpose of analytical treatment. The simplifications are: the assumption of an infinitely high rate of propagation of the elastic longitudinal waves in the bar, limitation to slender bars, disregard of material damping and of rotatory inertia, the assumption of consistently small elastic deformations, the assumption of cross-sectional dimensions constant along the bar axis, the assumption of a shock load constant in time, and the assumption of eccentricities in one plane. Then follow the mathematical principles for solving the differential equation of the simplified problem, particularly the expandability of arbitrary functions with continuous first and second and piecewise continuous third and fourth derivatives into one convergent series according to the natural functions of the homogeneous differential equation.

  9. Simplifying the complexity of resistance heterogeneity in metastasis

    PubMed Central

    Lavi, Orit; Greene, James M.; Levy, Doron; Gottesman, Michael M.

    2014-01-01

    The main goal of treatment regimens for metastasis is to control growth rates, not eradicate all cancer cells. Mathematical models offer methodologies that incorporate high-throughput data with dynamic effects on net growth. The ideal approach would simplify, but not over-simplify, a complex problem into meaningful and manageable estimators that predict a patient’s response to specific treatments. Here, we explore three fundamental approaches with different assumptions concerning resistance mechanisms, in which the cells are categorized into either discrete compartments or described by a continuous range of resistance levels. We argue in favor of modeling resistance as a continuum and demonstrate how integrating cellular growth rates, density-dependent versus exponential growth, and intratumoral heterogeneity improves predictions concerning the resistance heterogeneity of metastases. PMID:24491979

  10. Exact Solution of the Gyration Radius of an Individual's Trajectory for a Simplified Human Regular Mobility Model

    NASA Astrophysics Data System (ADS)

    Yan, Xiao-Yong; Han, Xiao-Pu; Zhou, Tao; Wang, Bing-Hong

    2011-12-01

    We propose a simplified human regular mobility model to simulate an individual's daily travel with three sequential activities: commuting to workplace, going to do leisure activities and returning home. With the assumption that the individual has a constant travel speed and inferior limit of time at home and in work, we prove that the daily moving area of an individual is an ellipse, and finally obtain an exact solution of the gyration radius. The analytical solution captures the empirical observation well.
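
    The quantity solved for is the standard radius of gyration of the trajectory,

      r_g = \sqrt{ \frac{1}{T} \int_0^T \left| \mathbf{r}(t) - \mathbf{r}_c \right|^2 dt } , \qquad \mathbf{r}_c = \frac{1}{T} \int_0^T \mathbf{r}(t)\, dt

    evaluated here in closed form over the elliptical daily range rather than estimated from empirical location records.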

  11. An approach to quantifying the efficiency of a Bayesian filter

    USDA-ARS?s Scientific Manuscript database

    Data assimilation is defined as the Bayesian conditioning of uncertain model simulations on observations for the purpose of reducing uncertainty about model states. Practical data assimilation applications require that simplifying assumptions be made about the prior and posterior state distributions...

  12. A Methodology for Developing Army Acquisition Strategies for an Uncertain Future

    DTIC Science & Technology

    2007-01-01

    ...manuscript for publication. Acronyms defined in the report include ABP (Assumption-Based Planning), ACEIT (Automated Cost Estimating Integrated Tool), ACR (Armored Cavalry Regiment), ACTD... For example, the authors employ the Automated Cost Estimating Integrated Tool (ACEIT) to simplify life cycle cost estimates; other tools are...

  13. MODELING NITROGEN-CARBON CYCLING AND OXYGEN CONSUMPTION IN BOTTOM SEDIMENTS

    EPA Science Inventory

    A model framework is presented for simulating nitrogen and carbon cycling at the sediment–water interface, and predicting oxygen consumption by oxidation reactions inside the sediments. Based on conservation of mass and invoking simplifying assumptions, a coupled system of diffus...

  14. A New Browser-based, Ontology-driven Tool for Generating Standardized, Deep Descriptions of Geoscience Models

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.; Kelbert, A.; Rudan, S.; Stoica, M.

    2016-12-01

    Standardized metadata for models is the key to reliable and greatly simplified coupling in model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System). This model metadata also helps model users to understand the important details that underpin computational models and to compare the capabilities of different models. These details include simplifying assumptions about the physics, the governing equations and the numerical methods used to solve them, the discretization of space (the grid) and time (the time-stepping scheme), the state variables (input or output), and the model configuration parameters. This kind of metadata provides a "deep description" of a computational model that goes well beyond other types of metadata (e.g. author, purpose, scientific domain, programming language, digital rights, provenance, execution) and captures the science that underpins a model. While having this kind of standardized metadata for each model in a repository opens up a wide range of exciting possibilities, it is difficult to collect this information, and a carefully conceived "data model" or schema is needed to store it. Automated harvesting and scraping methods can provide some useful information, but they often result in metadata that is inaccurate or incomplete, and this is not sufficient to enable the desired capabilities. In order to address this problem, we have developed a browser-based tool called the MCM Tool (Model Component Metadata) which runs on notebooks, tablets and smart phones. This tool was partially inspired by the TurboTax software, which greatly simplifies the necessary task of preparing tax documents. It allows a model developer or advanced user to provide a standardized, deep description of a computational geoscience model, including hydrologic models. Under the hood, the tool uses a new ontology for models built on the CSDMS Standard Names, expressed as a collection of RDF files (Resource Description Framework). This ontology is based on core concepts such as variables, objects, quantities, operations, processes and assumptions. The purpose of this talk is to present details of the new ontology and then to demonstrate the MCM Tool for several hydrologic models.
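
    To make the RDF idea concrete, here is a toy sketch of how a model's assumptions might be recorded as triples (the namespace, the term names, and the use of rdflib are our illustrative assumptions; the actual ontology is built on the CSDMS Standard Names):

      from rdflib import Graph, Literal, Namespace, RDF

      # Hypothetical namespace standing in for the real CSDMS-based ontology.
      MCM = Namespace("http://example.org/mcm/")

      g = Graph()
      g.add((MCM.MyHydroModel, RDF.type, MCM.Model))
      g.add((MCM.MyHydroModel, MCM.hasAssumption, Literal("Darcy flow in the saturated zone")))
      g.add((MCM.MyHydroModel, MCM.usesTimeStepping, Literal("explicit Euler")))
      print(g.serialize(format="turtle"))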

  15. Maximum mutual information estimation of a simplified hidden MRF for offline handwritten Chinese character recognition

    NASA Astrophysics Data System (ADS)

    Xiong, Yan; Reichenbach, Stephen E.

    1999-01-01

    Understanding of hand-written Chinese characters is at such a primitive stage that models include some assumptions about hand-written Chinese characters that are simply false. So Maximum Likelihood Estimation (MLE) may not be an optimal method for hand-written Chinese character recognition. This concern motivates the research effort to consider alternative criteria. Maximum Mutual Information Estimation (MMIE) is an alternative method for parameter estimation that does not derive its rationale from presumed model correctness, but instead examines the pattern-modeling problem in an automatic recognition system from an information-theoretic point of view. The objective of MMIE is to find a set of parameters such that the resultant model allows the system to derive from the observed data as much information as possible about the class. We consider MMIE for recognition of hand-written Chinese characters using a simplified hidden Markov random field. MMIE provides a performance improvement over MLE in this application.
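
    For reference, the MMIE criterion is conventionally written (our notation; the paper's exact formulation may differ) as maximizing the empirical mutual information between observations x_r and their class labels c_r:

      F_{\mathrm{MMIE}}(\theta) = \sum_r \log \frac{p_\theta(x_r \mid c_r)\, P(c_r)}{\sum_{c} p_\theta(x_r \mid c)\, P(c)},

    whereas MLE maximizes only the numerator terms \sum_r \log p_\theta(x_r \mid c_r).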

  16. DEVELOPMENT OF A MODEL FOR REAL TIME CO CONCENTRATIONS NEAR ROADWAYS

    EPA Science Inventory

    Although emission standards for mobile sources continue to be tightened, tailpipe emissions in urban areas continue to be a major source of human exposure to air toxics. Current human exposure models using simplified assumptions based on fixed air monitoring stations and region...

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    König, Johannes; Merle, Alexander; Totzauer, Maximilian

    We investigate the early Universe production of sterile neutrino Dark Matter by the decays of singlet scalars. All previous studies applied simplifying assumptions and/or studied the process only on the level of number densities, which makes it impossible to give statements about cosmic structure formation. We overcome these issues by dropping all simplifying assumptions (except for one we showed earlier to work perfectly) and by computing the full course of Dark Matter production on the level of non-thermal momentum distribution functions. We are thus in the position to study a broad range of aspects of the resulting settings and apply a broad set of bounds in a reliable manner. We have a particular focus on how to incorporate bounds from structure formation on the level of the linear power spectrum, since the simplistic estimate using the free-streaming horizon clearly fails for highly non-thermal distributions. Our work comprises the most detailed and comprehensive study of sterile neutrino Dark Matter production by scalar decays presented so far.

  18. Multi-phase CFD modeling of solid sorbent carbon capture system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, E. M.; DeCroix, D.; Breault, R.

    2013-07-01

    Computational fluid dynamics (CFD) simulations are used to investigate a low temperature post-combustion carbon capture reactor. The CFD models are based on a small scale solid sorbent carbon capture reactor design from ADA-ES and Southern Company. The reactor is a fluidized bed design based on a silica-supported amine sorbent. CFD models using both Eulerian–Eulerian and Eulerian–Lagrangian multi-phase modeling methods are developed to investigate the hydrodynamics and adsorption of carbon dioxide in the reactor. Models developed in both FLUENT® and BARRACUDA are presented to explore the strengths and weaknesses of state of the art CFD codes for modeling multi-phase carbon capture reactors. The results of the simulations show that the FLUENT® Eulerian–Lagrangian simulations (DDPM) are unstable for the given reactor design, while the BARRACUDA Eulerian–Lagrangian model is able to simulate the system given appropriate simplifying assumptions. FLUENT® Eulerian–Eulerian simulations also provide a stable solution for the carbon capture reactor given the appropriate simplifying assumptions.

  19. Multi-Phase CFD Modeling of Solid Sorbent Carbon Capture System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, Emily M.; DeCroix, David; Breault, Ronald W.

    2013-07-30

    Computational fluid dynamics (CFD) simulations are used to investigate a low temperature post-combustion carbon capture reactor. The CFD models are based on a small scale solid sorbent carbon capture reactor design from ADA-ES and Southern Company. The reactor is a fluidized bed design based on a silica-supported amine sorbent. CFD models using both Eulerian-Eulerian and Eulerian-Lagrangian multi-phase modeling methods are developed to investigate the hydrodynamics and adsorption of carbon dioxide in the reactor. Models developed in both FLUENT® and BARRACUDA are presented to explore the strengths and weaknesses of state of the art CFD codes for modeling multi-phase carbon capture reactors. The results of the simulations show that the FLUENT® Eulerian-Lagrangian simulations (DDPM) are unstable for the given reactor design, while the BARRACUDA Eulerian-Lagrangian model is able to simulate the system given appropriate simplifying assumptions. FLUENT® Eulerian-Eulerian simulations also provide a stable solution for the carbon capture reactor given the appropriate simplifying assumptions.

  20. Risk-Screening Environmental Indicators (RSEI)

    EPA Pesticide Factsheets

    EPA's Risk-Screening Environmental Indicators (RSEI) is a geographically-based model that helps policy makers and communities explore data on releases of toxic substances from industrial facilities reporting to EPA's Toxics Release Inventory (TRI). By analyzing TRI information together with simplified risk factors, such as the amount of chemical released, its fate and transport through the environment, each chemical's relative toxicity, and the number of people potentially exposed, RSEI calculates a numeric score, which is designed to be compared only with other scores calculated by RSEI. Because it is designed as a screening-level model, RSEI uses worst-case assumptions about toxicity and potential exposure where data are lacking, and also uses simplifying assumptions to reduce the complexity of the calculations. A more refined assessment is required before any conclusions about health impacts can be drawn. RSEI is used to establish priorities for further investigation and to look at changes in potential impacts over time. Users can save resources by conducting preliminary analyses with RSEI.
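
    As a loose illustration of how such a screening score combines its inputs (the combining rule and all names below are our assumptions for illustration, not EPA's published algorithm):

      def rsei_style_score(dose_surrogate, toxicity_weight, population):
          # Screening-level score: a modeled dose surrogate scaled by a
          # chemical-specific toxicity weight and the potentially exposed
          # population. The absolute number is meaningless on its own;
          # only comparisons between scores computed the same way are
          # interpretable.
          return dose_surrogate * toxicity_weight * population

      # Example: compare two hypothetical releases.
      print(rsei_style_score(2.0e-4, 100.0, 50_000) / rsei_style_score(5.0e-4, 10.0, 20_000))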

  1. Dynamic behaviour of thin composite plates for different boundary conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprintu, Iuliana (E-mail: sprintui@yahoo.com); Rotaru, Constantin (E-mail: rotaruconstantin@yahoo.com)

    2014-12-10

    In the context of composite materials technology, which is increasingly present in industry, this article covers a topic of great theoretical and practical importance. Given the complex design of fiber-reinforced materials and their heterogeneous nature, mathematical modeling of the mechanical response under different external stresses is very difficult to address in the absence of simplifying assumptions. In most structural applications, composite structures can be idealized as beams, plates, or shells. The analysis is reduced from a three-dimensional elasticity problem to a one- or two-dimensional problem, based on certain simplifying assumptions that can be made because the structure is thin. This paper aims to validate a mathematical model illustrating how thin rectangular orthotropic plates respond to actual loads. Thus, from the theory of thin plates, new analytical solutions are proposed corresponding to orthotropic rectangular plates with different boundary conditions. The proposed analytical solutions are considered both for solving the governing equations of orthotropic rectangular plates and for modal analysis.

  2. Accounting for age structure and spatial structure in eco-evolutionary analyses of a large, mobile vertebrate.

    PubMed

    Waples, Robin S; Scribner, Kim; Moore, Jennifer; Draheim, Hope; Etter, Dwayne; Boersen, Mark

    2018-04-14

    The idealized concept of a population is integral to ecology, evolutionary biology, and natural resource management. To make analyses tractable, most models adopt simplifying assumptions, which almost inevitably are violated by real species in nature. Here we focus on both demographic and genetic estimates of effective population size per generation (Ne), the effective number of breeders per year (Nb), and Wright's neighborhood size (NS) for black bears (Ursus americanus) that are continuously distributed in the northern lower peninsula of Michigan, USA. We illustrate practical application of recently-developed methods to account for violations of two common, simplifying assumptions about populations: 1) reproduction occurs in discrete generations, and 2) mating occurs randomly among all individuals. We use a 9-year harvest dataset of >3300 individuals, together with genetic determination of 221 parent-offspring pairs, to estimate male and female vital rates, including age-specific survival, age-specific fecundity, and age-specific variance in fecundity (for which empirical data are rare). We find strong evidence for overdispersed variance in reproductive success of same-age individuals in both sexes, and we show that constraints on litter size have a strong influence on results. We also estimate that another life-history trait that is often ignored (skip breeding by females) has a relatively modest influence, reducing Nb by 9% and increasing Ne by 3%. We conclude that isolation by distance depresses genetic estimates of Nb, which implicitly assume a randomly-mating population. Estimated demographic NS (100, based on parent-offspring dispersal) was similar to genetic NS (85, based on regression of genetic distance and geographic distance), indicating that the >36,000 km2 study area includes about 4-5 black-bear neighborhoods. Results from this expansive data set provide important insight into effects of violating assumptions when estimating evolutionary parameters for long-lived, free-ranging species. In conjunction with recently-developed analytical methodology, the ready availability of non-lethal DNA sampling methods and the ability to rapidly and cheaply survey many thousands of molecular markers should facilitate eco-evolutionary studies like this for many more species in nature.
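
    For reference, Wright's neighborhood size invoked here has the standard definition (classical formula; notation ours)

      NS = 4\pi\,\sigma^2 D,

    where \sigma^2 is the variance of parent-offspring dispersal distance along one axis and D is the effective population density.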

  3. Preliminary methodology to assess the national and regional impact of U.S. wind energy development on birds and bats

    USGS Publications Warehouse

    Diffendorfer, James E.; Beston, Julie A.; Merrill, Matthew D.; Stanton, Jessica C.; Corum, Margo D.; Loss, Scott R.; Thogmartin, Wayne E.; Johnson, Douglas H.; Erickson, Richard A.; Heist, Kevin W.

    2015-01-01

    Components of the methodology are based on simplifying assumptions and require information that, for many species, may be sparse or unreliable. These assumptions are presented in the report and should be carefully considered when using output from the methodology. In addition, this methodology can be used to recommend species for more intensive demographic modeling or highlight those species that may not require any additional protection because effects of wind energy development on their populations are projected to be small.

  4. Impact of unseen assumptions on communication of atmospheric carbon mitigation options

    NASA Astrophysics Data System (ADS)

    Elliot, T. R.; Celia, M. A.; Court, B.

    2010-12-01

    With the rapid access and dissemination of information made available through online and digital pathways, there is a need for concurrent openness and transparency in the communication of scientific investigation. Even with open communication it is essential that the scientific community continue to provide impartial, result-driven information. An unknown factor in climate literacy is the influence of an impartial presentation of scientific investigation that has utilized biased base assumptions. A formal publication appendix, and additional digital material, provide active investigators a suitable framework and ancillary material to make informed statements weighted by the assumptions made in a study. However, informal media and rapid communiqués rarely make such investigatory attempts, often citing headline or key phrasing within a written work. This presentation is focused on Geologic Carbon Sequestration (GCS) as a proxy for the wider field of climate science communication, wherein we primarily investigate recent publications in the GCS literature that produce scenario outcomes using apparently biased pro- or con-assumptions. A general review of scenario economics, capture process efficacy and specific examination of sequestration site assumptions and processes reveals an apparent misrepresentation of what we consider to be a base-case GCS system. The authors demonstrate the influence of the apparent bias in primary assumptions on results from commonly referenced subsurface hydrology models. By use of moderate semi-analytical model simplification and Monte Carlo analysis of outcomes, we can establish the likely reality of any GCS scenario within a pragmatic middle ground. Secondarily, we review the development of publicly available web-based computational tools and recent workshops where we presented interactive educational opportunities for public and institutional participants, with the goal of base-assumption awareness playing a central role. Through a series of interactive ‘what if’ scenarios, workshop participants were able to customize the models, which continue to be available from the Princeton University Subsurface Hydrology Research Group, and develop a better comprehension of subsurface factors contributing to GCS. Considering that the models are customizable, a simplified mock-up of regional GCS scenarios can be developed, which provides a possible pathway for informal, industrial, scientific or government communication of GCS concepts and likely scenarios. We believe continued availability, customizable scenarios, and simplifying assumptions are an exemplary means to communicate the possible outcomes of CO2 sequestration projects; the associated risk; and, of no small importance, the consequences of base assumptions on predicted outcomes.

  5. Naïve and Robust: Class-Conditional Independence in Human Classification Learning

    ERIC Educational Resources Information Center

    Jarecki, Jana B.; Meder, Björn; Nelson, Jonathan D.

    2018-01-01

    Humans excel in categorization. Yet from a computational standpoint, learning a novel probabilistic classification task involves severe computational challenges. The present paper investigates one way to address these challenges: assuming class-conditional independence of features. This feature independence assumption simplifies the inference…

  6. Theoretical studies of solar lasers and converters

    NASA Technical Reports Server (NTRS)

    Heinbockel, John H.

    1988-01-01

    The previously constructed one dimensional model for the simulated operation of an iodine laser assumed that the perfluoroalkyl iodide gas n-C3F7I was incompressible. The present study removes this simplifying assumption and considers n-C3F7I as a compressible fluid.

  7. A simplified analytical solution for thermal response of a one-dimensional, steady state transpiration cooling system in radiative and convective environment

    NASA Technical Reports Server (NTRS)

    Kubota, H.

    1976-01-01

    A simplified analytical method for calculating the thermal response within a transpiration-cooled porous heat shield material in an intense radiative-convective heating environment is presented. The essential assumptions for the radiative and convective transfer processes in the heat shield matrix are the two-temperature approximation and specified radiative-convective heating of the front surface. Sample calculations for porous silica with CO2 injection are presented for typical values of mass injection rate, porosity, and material thickness. The effect of these parameters on the cooling system is discussed.
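
    A common statement of the two-temperature approximation mentioned here (a sketch in our notation; the paper's exact equations may differ) couples steady one-dimensional energy balances for the solid matrix and the coolant gas through a volumetric heat transfer coefficient h_v:

      (1-\varepsilon)\,k_s\,\frac{d^2 T_s}{dx^2} - h_v\,(T_s - T_g) = 0, \qquad \dot{m}\,c_p\,\frac{dT_g}{dx} = h_v\,(T_s - T_g),

    where \varepsilon is the porosity, k_s the solid conductivity, and \dot{m} the coolant mass flux.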

  8. Two time scale output feedback regulation for ill-conditioned systems

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Moerder, D. D.

    1986-01-01

    Issues pertaining to the well-posedness of a two time scale approach to the output feedback regulator design problem are examined. An approximate quadratic performance index which reflects a two time scale decomposition of the system dynamics is developed. It is shown that, under mild assumptions, minimization of this cost leads to feedback gains providing a second-order approximation of optimal full system performance. A simplified approach to two time scale feedback design is also developed, in which gains are separately calculated to stabilize the slow and fast subsystem models. By exploiting the notion of combined control and observation spillover suppression, conditions are derived assuring that these gains will stabilize the full-order system. A sequential numerical algorithm is described which obtains output feedback gains minimizing a broad class of performance indices, including the standard LQ case. It is shown that the algorithm converges to a local minimum under nonrestrictive assumptions. This procedure is adapted to and demonstrated for the two time scale design formulations.
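
    The two time scale structure referred to here is conventionally posed in singularly perturbed form (a standard sketch in our notation):

      \dot{x}_s = A_{11} x_s + A_{12} x_f + B_1 u, \qquad \varepsilon\,\dot{x}_f = A_{21} x_s + A_{22} x_f + B_2 u,

    where x_s and x_f are the slow and fast states and \varepsilon > 0 is small; setting \varepsilon = 0 and eliminating x_f yields the reduced slow model, while the boundary-layer system in stretched time governs the fast-gain design.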

  9. BASEFLOW SEPARATION BASED ON ANALYTICAL SOLUTIONS OF THE BOUSSINESQ EQUATION. (R824995)

    EPA Science Inventory

    Abstract

    A technique for baseflow separation is presented based on similarity solutions of the Boussinesq equation. The method makes use of the simplifying assumptions that a horizontal impermeable layer underlies a Dupuit aquifer which is drained by a fully penetratin...

  10. Quasi 3D modeling of water flow in vadose zone and groundwater

    USDA-ARS?s Scientific Manuscript database

    The complexity of subsurface flow systems calls for a variety of concepts leading to the multiplicity of simplified flow models. One habitual simplification is based on the assumption that lateral flow and transport in unsaturated zone are not significant unless the capillary fringe is involved. In ...

  11. The Role of Semantic Clustering in Optimal Memory Foraging

    ERIC Educational Resources Information Center

    Montez, Priscilla; Thompson, Graham; Kello, Christopher T.

    2015-01-01

    Recent studies of semantic memory have investigated two theories of optimal search adopted from the animal foraging literature: Lévy flights and marginal value theorem. Each theory makes different simplifying assumptions and addresses different findings in search behaviors. In this study, an experiment is conducted to test whether clustering in…

  12. Scaling the Library Collection; A Simplified Method for Weighing the Variables

    ERIC Educational Resources Information Center

    Vagianos, Louis

    1973-01-01

    On the assumption that the physical properties of any information stock (book, etc.) offer the best foundation on which to develop satisfactory measurements for assessing library operations and developing library procedures, weight is suggested as the most useful variable for assessment and standardization. Advantages of this approach are…

  13. Dualisms in Higher Education: A Critique of Their Influence and Effect

    ERIC Educational Resources Information Center

    Macfarlane, Bruce

    2015-01-01

    Dualisms pervade the language of higher education research providing an over-simplified roadmap to the field. However, the lazy logic of their popular appeal supports the perpetuation of erroneous and often outdated assumptions about the nature of modern higher education. This paper explores nine commonly occurring dualisms:…

  14. A Comprehensive Real-World Distillation Experiment

    ERIC Educational Resources Information Center

    Kazameas, Christos G.; Keller, Kaitlin N.; Luyben, William L.

    2015-01-01

    Most undergraduate mass transfer and separation courses cover the design of distillation columns, and many undergraduate laboratories have distillation experiments. In many cases, the treatment is restricted to simple column configurations and simplifying assumptions are made so as to convey only the basic concepts. In industry, the analysis of a…

  15. A simplified gross thrust computing technique for an afterburning turbofan engine

    NASA Technical Reports Server (NTRS)

    Hamer, M. J.; Kurtenbach, F. J.

    1978-01-01

    A simplified gross thrust computing technique extended to the F100-PW-100 afterburning turbofan engine is described. The technique uses measured total and static pressures in the engine tailpipe and ambient static pressure to compute gross thrust. Empirically evaluated calibration factors account for three-dimensional effects, the effects of friction and mass transfer, and the effects of simplifying assumptions for solving the equations. Instrumentation requirements and the sensitivity of computed thrust to transducer errors are presented. NASA altitude facility tests on F100 engines (computed thrust versus measured thrust) are presented, and calibration factors obtained on one engine are shown to be applicable to the second engine by comparing the computed gross thrust. It is concluded that this thrust method is potentially suitable for flight test application and engine maintenance on production engines with a minimum amount of instrumentation.
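
    The pressure-based computation described is, at its core, a one-dimensional momentum balance at the nozzle exit. A minimal sketch under ideal-gas, one-dimensional assumptions (the station quantities, isentropic relations, and lumped calibration factor k_cal are our illustrative choices, not the report's calibration):

      import math

      def gross_thrust(p_total, p_static, p_amb, area,
                       gamma=1.3, R=287.0, T_total=900.0, k_cal=1.0):
          # F = mdot*V + (p_static - p_amb)*A, with Mach number, static
          # temperature, and velocity recovered from isentropic relations.
          # k_cal lumps the empirical calibration factors mentioned above.
          mach = math.sqrt(2.0 / (gamma - 1.0)
                           * ((p_total / p_static) ** ((gamma - 1.0) / gamma) - 1.0))
          T_static = T_total / (1.0 + 0.5 * (gamma - 1.0) * mach ** 2)
          velocity = mach * math.sqrt(gamma * R * T_static)
          mdot = p_static / (R * T_static) * velocity * area
          return k_cal * (mdot * velocity + (p_static - p_amb) * area)

      # Example: tailpipe at 2.2 bar total, 1.3 bar static, sea-level ambient.
      print(gross_thrust(2.2e5, 1.3e5, 1.013e5, 0.3))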

  16. A control-volume method for analysis of unsteady thrust augmenting ejector flows

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.

    1988-01-01

    A method for predicting transient thrust augmenting ejector characteristics is presented. The analysis blends classic self-similar turbulent jet descriptions with a control volume discretization of the mixing region to capture transient effects in a new way. Division of the ejector into an inlet, diffuser, and mixing region corresponds with the assumption that viscous-dominated phenomena prevail in the latter. The inlet and diffuser analyses are simplified by a quasi-steady treatment, justified by the assumption that pressure is the forcing function in those regions. Details of the theoretical foundation, the solution algorithm, and sample calculations are given.

  17. Compressive properties of passive skeletal muscle-the impact of precise sample geometry on parameter identification in inverse finite element analysis.

    PubMed

    Böl, Markus; Kruse, Roland; Ehret, Alexander E; Leichsenring, Kay; Siebert, Tobias

    2012-10-11

    Due to the increasing developments in modelling of biological material, adequate parameter identification techniques are urgently needed. The majority of recent contributions on passive muscle tissue identify material parameters solely by comparing characteristic, compressive stress-stretch curves from experiments and simulation. In doing so, different assumptions concerning e.g. the sample geometry or the degree of friction between the sample and the platens are required. In most cases these assumptions are grossly simplified, leading to incorrect material parameters. In order to overcome such oversimplifications, in this paper a more reliable parameter identification technique is presented: we use the inverse finite element method (iFEM) to identify the optimal parameter set by comparison of the compressive stress-stretch response, including the realistic geometries of the samples and the presence of friction at the compressed sample faces. Moreover, we judge the quality of the parameter identification by comparing the simulated and experimental deformed shapes of the samples. Besides this, the study includes a comprehensive set of compressive stress-stretch data on rabbit soleus muscle and the determination of static friction coefficients between muscle and PTFE.

  18. Quick and Easy Rate Equations for Multistep Reactions

    ERIC Educational Resources Information Center

    Savage, Phillip E.

    2008-01-01

    Students rarely see closed-form analytical rate equations derived from underlying chemical mechanisms that contain more than a few steps unless restrictive simplifying assumptions (e.g., existence of a rate-determining step) are made. Yet, work published decades ago allows closed-form analytical rate equations to be written quickly and easily for…

  19. Data assimilation with soil water content sensors and pedotransfer functions in soil water flow modeling

    USDA-ARS?s Scientific Manuscript database

    Soil water flow models are based on a set of simplified assumptions about the mechanisms, processes, and parameters of water retention and flow. That causes errors in soil water flow model predictions. Soil water content monitoring data can be used to reduce the errors in models. Data assimilation (...

  20. Solubility and Thermodynamics: An Introductory Experiment

    NASA Astrophysics Data System (ADS)

    Silberman, Robert G.

    1996-05-01

    This article describes a laboratory experiment suitable for high school or freshman chemistry students in which the solubility of potassium nitrate is determined at several different temperatures. The data collected are used to calculate the equilibrium constant, ΔG, ΔH, and ΔS for the dissolution reaction. The simplifying assumptions are noted in the article.
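
    The underlying thermodynamic relations are the standard ones: with the equilibrium constant K measured at several temperatures,

      \Delta G^\circ = -RT\,\ln K, \qquad \ln K = -\frac{\Delta H^\circ}{R}\,\frac{1}{T} + \frac{\Delta S^\circ}{R},

    so a linear (van 't Hoff) fit of \ln K against 1/T yields \Delta H^\circ from the slope and \Delta S^\circ from the intercept, under the simplifying assumption that both are constant over the temperature range.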

  1. SSDA code to apply data assimilation in soil water flow modeling: Documentation and user manual

    USDA-ARS?s Scientific Manuscript database

    Soil water flow models are based on simplified assumptions about the mechanisms, processes, and parameters of water retention and flow. That causes errors in soil water flow model predictions. Data assimilation (DA) with the ensemble Kalman filter (EnKF) corrects modeling results based on measured s...
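
    A minimal sketch of the EnKF analysis step named here (our simplified implementation for a single scalar observation; all names are illustrative, and this is not the SSDA code itself):

      import numpy as np

      def enkf_update(ensemble, obs, obs_var, H):
          # ensemble: (n_members, n_states) forecast ensemble
          # obs: scalar observation; obs_var: its error variance
          # H: (n_states,) linear observation operator
          rng = np.random.default_rng(0)
          X = ensemble - ensemble.mean(axis=0)            # state anomalies
          Hx = ensemble @ H                               # predicted observations
          P_xy = X.T @ (Hx - Hx.mean()) / (len(ensemble) - 1)
          P_yy = np.var(Hx, ddof=1) + obs_var
          K = P_xy / P_yy                                 # Kalman gain, (n_states,)
          perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), len(ensemble))
          return ensemble + np.outer(perturbed - Hx, K)   # analysis ensemble

      # Example: 100-member ensemble of a 3-layer soil moisture profile,
      # assimilating an observation of the surface layer (first state).
      ens = np.random.default_rng(1).normal(0.25, 0.05, (100, 3))
      print(enkf_update(ens, 0.30, 0.01 ** 2, np.array([1.0, 0.0, 0.0])).mean(axis=0))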

  2. The Signal Importance of Noise

    ERIC Educational Resources Information Center

    Macy, Michael; Tsvetkova, Milena

    2015-01-01

    Noise is widely regarded as a residual category--the unexplained variance in a linear model or the random disturbance of a predictable pattern. Accordingly, formal models often impose the simplifying assumption that the world is noise-free and social dynamics are deterministic. Where noise is assigned causal importance, it is often assumed to be a…

  3. A survey of numerical models for wind prediction

    NASA Technical Reports Server (NTRS)

    Schonfeld, D.

    1980-01-01

    A literature review is presented of the work done in the numerical modeling of wind flows. Pertinent computational techniques are described, as well as the necessary assumptions used to simplify the governing equations. A steady state model is outlined, based on the data obtained at the Deep Space Communications complex at Goldstone, California.

  4. Distinguishing Identical Particles and the Correct Counting of States

    ERIC Educational Resources Information Center

    de la Torre, A. C.; Martin, H. O.

    2009-01-01

    It is shown that quantum systems of identical particles can be treated as different when they are in well-differentiated states. This simplifying assumption allows for the consideration of quantum systems isolated from the rest of the universe and justifies many intuitive statements about identical systems. However, it is shown that this…

  5. Using Heat Pulses for Quantifying 3d Seepage Velocity in Groundwater-Surface Water Interactions, Considering Source Size, Regime, and Dispersion

    NASA Astrophysics Data System (ADS)

    Zlotnik, V. A.; Tartakovsky, D. M.

    2017-12-01

    The study is motivated by the rapid proliferation of field methods for measuring seepage velocity using heat tracing and is directed at broadening their potential for studies of groundwater-surface water interactions, and of the hyporheic zone in particular. In the vast majority of cases, existing methods assume a vertical or horizontal, uniform, 1D seepage velocity. Often, 1D transport is assumed as well, and analytical models of heat transport by Suzuki-Stallman are heavily used to infer seepage velocity. However, both of these assumptions (1D flow and 1D transport) are violated due to the flow geometry, media heterogeneity, and localized heat sources. Attempts to apply more realistic conceptual models still lack a full 3D view, and known 2D examples are treated numerically, or by making additional simplifying assumptions about velocity orientation. Heat pulse instruments and sensors already offer an opportunity to collect data sufficient for 3D seepage velocity identification at the appropriate scale, but interpretation tools for groundwater-surface water interactions in 3D have not been developed yet. We propose an approach that can substantially improve the capabilities of existing field instruments without additional measurements. The proposed closed-form analytical solutions are simple and well suited for use in inverse modeling. Field applications and ramifications, including data analysis, are discussed. The approach simplifies data collection, determines 3D seepage velocity, and facilitates interpretation of relations between heat transport parameters, fluid flow, and media properties. Results are obtained using tensor properties of transport parameters, Green's functions, and rotational coordinate transformations using the Euler angles.

  6. 4SM: A Novel Self-Calibrated Algebraic Ratio Method for Satellite-Derived Bathymetry and Water Column Correction

    PubMed Central

    Morel, Yann G.; Favoretto, Fabio

    2017-01-01

    All empirical water column correction methods have consistently been reported to require existing depth sounding data for the purpose of calibrating a simple depth retrieval model; they yield poor results over very bright or very dark bottoms. In contrast, we set out to (i) use only the relative radiance data in the image along with published data, and several new assumptions; (ii) in order to specify and operate the simplified radiative transfer equation (RTE); (iii) for the purpose of retrieving both the satellite derived bathymetry (SDB) and the water column corrected spectral reflectance over shallow seabeds. Sea truth regressions show that SDB depths retrieved by the method only need tide correction. Therefore it shall be demonstrated that, under such new assumptions, there is no need for (i) formal atmospheric correction; (ii) conversion of relative radiance into calibrated reflectance; or (iii) existing depth sounding data, to specify the simplified RTE and produce both SDB and spectral water column corrected radiance ready for bottom typing. Moreover, the use of the panchromatic band for that purpose is introduced. Altogether, we named this process the Self-Calibrated Supervised Spectral Shallow-sea Modeler (4SM). This approach requires a trained practitioner, though, to produce its results within hours of downloading the raw image. The ideal raw image should be a “near-nadir” view, exhibit homogeneous atmosphere and water column, include some coverage of optically deep waters and bare land, and lend itself to quality removal of haze, atmospheric adjacency effect, and sun/sky glint. PMID:28754028

  7. 4SM: A Novel Self-Calibrated Algebraic Ratio Method for Satellite-Derived Bathymetry and Water Column Correction.

    PubMed

    Morel, Yann G; Favoretto, Fabio

    2017-07-21

    All empirical water column correction methods have consistently been reported to require existing depth sounding data for the purpose of calibrating a simple depth retrieval model; they yield poor results over very bright or very dark bottoms. In contrast, we set out to (i) use only the relative radiance data in the image along with published data, and several new assumptions; (ii) in order to specify and operate the simplified radiative transfer equation (RTE); (iii) for the purpose of retrieving both the satellite derived bathymetry (SDB) and the water column corrected spectral reflectance over shallow seabeds. Sea truth regressions show that SDB depths retrieved by the method only need tide correction. Therefore it shall be demonstrated that, under such new assumptions, there is no need for (i) formal atmospheric correction; (ii) conversion of relative radiance into calibrated reflectance; or (iii) existing depth sounding data, to specify the simplified RTE and produce both SDB and spectral water column corrected radiance ready for bottom typing. Moreover, the use of the panchromatic band for that purpose is introduced. Altogether, we named this process the Self-Calibrated Supervised Spectral Shallow-sea Modeler (4SM). This approach requires a trained practitioner, though, to produce its results within hours of downloading the raw image. The ideal raw image should be a "near-nadir" view, exhibit homogeneous atmosphere and water column, include some coverage of optically deep waters and bare land, and lend itself to quality removal of haze, atmospheric adjacency effect, and sun/sky glint.

  8. Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol. 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.

    1983-05-01

    As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM), is a micro-computer based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM, including methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input), in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The User's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.

  9. The limitations of simple gene set enrichment analysis assuming gene independence.

    PubMed

    Tamayo, Pablo; Steinhardt, George; Liberzon, Arthur; Mesirov, Jill P

    2016-02-01

    Since its first publication in 2003, the Gene Set Enrichment Analysis method, based on the Kolmogorov-Smirnov statistic, has been heavily used, modified, and also questioned. Recently a simplified approach using a one-sample t-test score to assess enrichment and ignoring gene-gene correlations was proposed by Irizarry et al. (2009) as a serious contender. The argument criticizes Gene Set Enrichment Analysis's nonparametric nature and its use of an empirical null distribution as unnecessary and hard to compute. We refute these claims by careful consideration of the assumptions of the simplified method and its results, including a comparison with Gene Set Enrichment Analysis on a large benchmark set of 50 datasets. Our results provide strong empirical evidence that gene-gene correlations cannot be ignored, due to the significant variance inflation they produce in the enrichment scores, and should be taken into account when estimating gene set enrichment significance. In addition, we discuss the challenges that the complex correlation structure and multi-modality of gene sets pose more generally for gene set enrichment methods.
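
    The variance inflation argument is easy to reproduce: for k genes with common pairwise correlation \rho, the variance of the mean expression score is (1 + (k-1)\rho)/k times the single-gene variance, not 1/k. A small simulation (illustrative parameters, not the paper's benchmark):

      import numpy as np

      k, rho, n_sets = 50, 0.2, 20000
      rng = np.random.default_rng(1)
      # Equicorrelated genes: x_i = sqrt(rho)*z_common + sqrt(1-rho)*z_i
      z_common = rng.standard_normal((n_sets, 1))
      z_indiv = rng.standard_normal((n_sets, k))
      genes = np.sqrt(rho) * z_common + np.sqrt(1 - rho) * z_indiv
      set_means = genes.mean(axis=1)
      print(set_means.var())              # empirical variance of the set score
      print((1 + (k - 1) * rho) / k)      # theory: ~10x the naive 1/k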

  10. The time-dependent response of 3- and 5-layer sandwich beams

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.; Oleksuk, L. S. S.; Bowles, D. E.

    1992-01-01

    Simple sandwich beam models have been developed to study the effect of the time-dependent constitutive properties of fiber-reinforced polymer matrix composites, considered for use in orbiting precision segmented reflectors, on the overall deformations. The 3- and 5-layer beam models include layers representing the face sheets, the core, and the adhesive. The static elastic deformation response of the sandwich beam models to a midspan point load is studied using the principle of stationary potential energy. In addition to quantitative conclusions, several assumptions are discussed which simplify the analysis for the case of more complicated material models. It is shown that the simple three-layer model is sufficient in many situations.

  11. CMG-Augmented Control of a Hovering VTOL Platform

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Moerder, D. D.

    2007-01-01

    This paper describes how Control Moment Gyroscopes (CMGs) can be used for stability augmentation of a thrust vectoring system for a generic Vertical Take-Off and Landing platform. The response characteristics of the platform which uses only thrust vectoring, and of a second configuration which includes a single-gimbal CMG array, are simulated and compared for hovering flight while subject to severe air turbulence. Simulation results demonstrate the effectiveness of a CMG array in its ability to significantly reduce the agility requirement on the thrust vectoring system. Although the analysis rests on simplifying physical assumptions about a generic CMG configuration, the numerical results also suggest that reasonably sized CMGs will likely be sufficient for a small hovering vehicle.

  12. An evaluation of complementary relationship assumptions

    NASA Astrophysics Data System (ADS)

    Pettijohn, J. C.; Salvucci, G. D.

    2004-12-01

    Complementary relationship (CR) models, based on Bouchet's (1963) somewhat heuristic CR hypothesis, are advantageous in their sole reliance on readily available climatological data. While Bouchet's CR hypothesis requires a number of questionable assumptions, CR models have been evaluated on variable time and length scales with relative success. Bouchet's hypothesis is grounded on the assumption that a change in potential evapotranspiration (Ep) is equal and opposite in sign to a change in actual evapotranspiration (Ea), i.e., -dEp/dEa = 1. In his mathematical rationalization of the CR, Morton (1965) similarly assumes that a change in potential sensible heat flux (Hp) is equal and opposite in sign to a change in actual sensible heat flux (Ha), i.e., -dHp/dHa = 1. CR models have maintained these assumptions while focusing on defining Ep and equilibrium evapotranspiration (Epo). We question Bouchet and Morton's aforementioned assumptions by revisiting the CR derivation in light of a proposed variable, φ = -dEp/dEa. We evaluate φ in a simplified Monin-Obukhov surface similarity framework and demonstrate how previous error in the application of CR models may be explained in part by the previous assumption that φ = 1. Finally, we discuss the various time and length scales at which φ may be evaluated.
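
    In this notation the hypothesis and its relaxation can be written compactly (a sketch in the abstract's own symbols, assuming φ is constant over the departure from equilibrium): starting from Ea = Ep = Epo and integrating dEp = -φ dEa gives

      E_p + \varphi E_a = (1 + \varphi)\,E_{po} \quad\Longrightarrow\quad E_a = \frac{(1 + \varphi)\,E_{po} - E_p}{\varphi},

    which reduces to the classical complementary relationship E_a = 2E_{po} - E_p when φ = 1.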

  13. Elaboration Preferences and Differences in Learning Proficiency.

    ERIC Educational Resources Information Center

    Rohwer, William D., Jr.; Levin, Joel R.

    The major emphasis of this study is on the comparative validities of paired-associate learning tests and IQ tests in predicting reading achievement. The study engages in a brief review of earlier research in order to examine the validity of two assumptions--that the construction and/or the use of a tactic that simplifies a learning task is one of…

  14. 76 FR 58268 - Agency Information Collection Activities; Submission to OMB for Review and Approval; Comment...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-20

    ...simplify some assumptions and to make estimation methods consistent; and characterization as Agency burden... Comments, identified by docket ID number EPA-HQ-OPPT-2010-1007, may be submitted to (1) EPA online using http://www.regulations.gov (our preferred method) or by e-mail to oppt.ncic... The docket, EPA-HQ-OPPT-2010-1007, is available for online viewing at http://www.regulations.gov, or in person...

  15. Test Review: Watson, G., & Glaser, E. M. (2010), "Watson-Glaser™ II Critical Thinking Appraisal." Washington State University, Pullman, USA

    ERIC Educational Resources Information Center

    Sternod, Latisha; French, Brian

    2016-01-01

    The Watson-Glaser™ II Critical Thinking Appraisal (Watson-Glaser II; Watson & Glaser, 2010) is a revised version of the "Watson-Glaser Critical Thinking Appraisal®" (Watson & Glaser, 1994). The Watson-Glaser II introduces a simplified model of critical thinking, consisting of three subdimensions: recognize assumptions, evaluate…

  16. Selected mesostructure properties in loblolly pine from Arkansas plantations

    Treesearch

    David E. Kretschmann; Steven M. Cramer; Roderic Lakes; Troy Schmidt

    2006-01-01

    Design properties of wood are currently established at the macroscale, assuming wood to be a homogeneous orthotropic material. The resulting variability from the use of such a simplified assumption has been handled by designing with lower percentile values and applying a number of factors to account for the wide statistical variation in properties. With managed...

  17. Estimation of effective population size in continuously distributed populations: There goes the neighborhood

    Treesearch

    M. C. Neel; K. McKelvey; N. Ryman; M. W. Lloyd; R. Short Bull; F. W. Allendorf; M. K. Schwartz; R. S. Waples

    2013-01-01

    Use of genetic methods to estimate effective population size (Ne) is rapidly increasing, but all approaches make simplifying assumptions unlikely to be met in real populations. In particular, all assume a single, unstructured population, and none has been evaluated for use with continuously distributed species. We simulated continuous populations with local mating...

  18. Effects of various assumptions on the calculated liquid fraction in isentropic saturated equilibrium expansions

    NASA Technical Reports Server (NTRS)

    Bursik, J. W.; Hall, R. M.

    1980-01-01

    The saturated equilibrium expansion approximation for two-phase flow often involves ideal-gas and latent-heat assumptions to simplify the solution procedure. This approach is well documented by Wegener and Mack and works best at low pressures, where deviations from ideal-gas behavior are small. A thermodynamic expression for the liquid mass fraction that is decoupled from the equations of fluid mechanics is used to compare the effects of the various assumptions on nitrogen-gas saturated equilibrium expansion flows starting at 8.81 atm, 2.99 atm, and 0.45 atm, conditions representative of transonic cryogenic wind tunnels. For the highest pressure case, the entire set of ideal-gas and latent-heat assumptions is shown to be in error by 62 percent for the values of heat capacity and latent heat. An approximation of the exact real-gas expression is also developed, using a constant two-phase isentropic expansion coefficient, which results in an error of only 2 percent for the high pressure case.
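
    The decoupled expression referred to is essentially an entropy balance on the saturated two-phase mixture (our sketch, in standard notation): since the expansion is isentropic,

      s_0 = (1 - y)\,s_g(T) + y\,s_l(T) \quad\Longrightarrow\quad y = \frac{s_g(T) - s_0}{s_g(T) - s_l(T)},

    where s_0 is the conserved upstream entropy, s_g and s_l are the saturated vapor and liquid entropies, and y is the liquid mass fraction along the saturation line.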

  19. Experimental Methodology for Measuring Combustion and Injection-Coupled Responses

    NASA Technical Reports Server (NTRS)

    Cavitt, Ryan C.; Frederick, Robert A.; Bazarov, Vladimir G.

    2006-01-01

    A Russian scaling methodology for liquid rocket engines utilizing a single, full scale element is reviewed. The scaling methodology exploits the supercritical phase of the full scale propellants to simplify scaling requirements. Many assumptions are utilized in the derivation of the scaling criteria. A test apparatus design is presented to implement the Russian methodology and consequently verify the assumptions. This test apparatus will allow researchers to assess the usefulness of the scaling procedures and possibly enhance the methodology. A matrix of the apparatus capabilities for a RD-170 injector is also presented. Several methods to enhance the methodology have been generated through the design process.

  20. Provably-Secure (Chinese Government) SM2 and Simplified SM2 Key Exchange Protocols

    PubMed Central

    Nam, Junghyun; Kim, Moonseong

    2014-01-01

    We revisit the SM2 protocol, which is widely used in Chinese commercial applications and by Chinese government agencies. Although it is by now standard practice for protocol designers to provide security proofs in widely accepted security models in order to assure protocol implementers of their security properties, the SM2 protocol does not have a proof of security. In this paper, we prove the security of the SM2 protocol in the widely accepted indistinguishability-based Bellare-Rogaway model under the elliptic curve discrete logarithm problem (ECDLP) assumption. We also present a simplified and more efficient version of the SM2 protocol with an accompanying security proof. PMID:25276863

  1. Simplified Analysis of Pulse Detonation Rocket Engine Blowdown Gasdynamics and Performance

    NASA Technical Reports Server (NTRS)

    Morris, C. I.; Rodgers, Stephen L. (Technical Monitor)

    2002-01-01

    Pulse detonation rocket engines (PDREs) offer potential performance improvements over conventional designs, but represent a challenging modeling task. A simplified model for an idealized, straight-tube, single-shot PDRE blowdown process and thrust determination is described and implemented. In order to assess the accuracy of the model, the flowfield time history is compared to experimental data from Stanford University. Parametric studies of the effect of mixture stoichiometry, initial fill temperature, and blowdown pressure ratio on the performance of a PDRE are performed using the model. PDRE performance is also compared with a conventional steady-state rocket engine over a range of pressure ratios using similar gasdynamic assumptions.

  2. Multi-Destination and Multi-Purpose Trip Effects in the Analysis of the Demand for Trips to a Remote Recreational Site

    NASA Astrophysics Data System (ADS)

    Martínez-Espiñeira, Roberto; Amoako-Tuffour, Joe

    2009-06-01

    One of the basic assumptions of the travel cost method for recreational demand analysis is that the travel cost is always incurred for a single-purpose recreational trip. Several studies have skirted the issue by making simplifying assumptions and by dropping from the sample observations considered nonconventional holiday-makers or nontraditional visitors. The effect of such simplifications on the benefit estimates remains conjectural. Given the remoteness of notable recreational parks, multi-destination or multi-purpose trips are not uncommon. This article examines the consequences of allocating travel costs to a recreational site when some trips were taken for purposes other than recreation and/or included visits to other recreational sites. Using a multi-purpose weighting approach on data from Gros Morne National Park, Canada, we conclude that a proper correction for multi-destination or multi-purpose trips is needed to avoid potential biases in the estimated effects of the price (travel-cost) variable and of the income variable in the trip generation equation.

  3. A Testbed for Model Development

    NASA Astrophysics Data System (ADS)

    Berry, J. A.; Van der Tol, C.; Kornfeld, A.

    2014-12-01

    Carbon cycle and land-surface models used in global simulations need to be computationally efficient and have a high standard of software engineering. These models also make a number of scaling assumptions to simplify the representation of complex biochemical and structural properties of ecosystems. This makes it difficult to use these models to test new ideas for parameterizations or to evaluate scaling assumptions. The stripped down nature of these models also makes it difficult to "connect" with current disciplinary research which tends to be focused on much more nuanced topics than can be included in the models. In our opinion/experience this indicates the need for another type of model that can more faithfully represent the complexity ecosystems and which has the flexibility to change or interchange parameterizations and to run optimization codes for calibration. We have used the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model in this way to develop, calibrate, and test parameterizations for solar induced chlorophyll fluorescence, OCS exchange and stomatal parameterizations at the canopy scale. Examples of the data sets and procedures used to develop and test new parameterizations are presented.

  4. An analysis of running skyline load path.

    Treesearch

    Ward W. Carson; Charles N. Mann

    1971-01-01

    This paper is intended for those who wish to prepare an algorithm to determine the load path of a running skyline. The mathematics of a simplified approach to this running skyline design problem are presented. The approach employs assumptions which reduce the complexity of the problem to the point where it can be solved on desk-top computers of limited capacities. The...

  5. Stratosphere circulation on tidally locked ExoEarths

    NASA Astrophysics Data System (ADS)

    Carone, L.; Keppens, R.; Decin, L.; Henning, Th.

    2018-02-01

    Stratosphere circulation is important to interpret abundances of photochemically produced compounds like ozone which we aim to observe to assess habitability of exoplanets. We thus investigate a tidally locked ExoEarth scenario for TRAPPIST-1b, TRAPPIST-1d, Proxima Centauri b and GJ 667 C f with a simplified 3D atmosphere model and for different stratospheric wind breaking assumptions.

  6. 26 CFR 1.417(a)(3)-1 - Required explanation of qualified joint and survivor annuity and qualified preretirement survivor...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... grouping rules of paragraph (c)(2)(iii) of this section. Separate charts are provided for ages 55, 60, and...) Simplified presentations permitted—(A) Grouping of certain optional forms. Two or more optional forms of... starting date, a reasonable assumption for the age of the participant's spouse, or, in the case of a...

  7. A nonlinear theory for elastic plates with application to characterizing paper properties

    Treesearch

    M. W. Johnson; Thomas J. Urbanik

    1984-03-01

    A theory of thin plates which is physically as well as kinematically nonlinear is developed and used to characterize elastic material behavior for arbitrary stretching and bending deformations. It is developed from a few clearly defined assumptions and uses a unique treatment of strain energy. An effective strain concept is introduced to simplify the theory to a...

  8. Sequential Auctions with Partially Substitutable Goods

    NASA Astrophysics Data System (ADS)

    Vetsikas, Ioannis A.; Jennings, Nicholas R.

    In this paper, we examine a setting in which a number of partially substitutable goods are sold in sequential single-unit auctions. Each bidder needs to buy exactly one of these goods. In previous work, this setting has been simplified by assuming that bidders do not know their valuations for all items a priori, but rather are informed of their true valuation for each item right before the corresponding auction takes place. This assumption simplifies the strategies of bidders, as the expected revenue from future auctions is the same for all bidders due to the complete lack of private information. In our analysis, we do not make this assumption. This complicates the computation of the equilibrium strategies significantly. We examine this setting both for first- and second-price auction variants, initially when the closing prices are not announced, for which we prove that sequential first- and second-price auctions are revenue equivalent. Then we assume that the prices are announced; because of the asymmetry in the announced prices between the two auction variants, revenue equivalence does not hold in this case. We finish the paper by giving some initial results for the case when free disposal is allowed and a bidder can therefore purchase more than one item.

  9. Impacts of Changes of Indoor Air Pressure and Air Exchange Rate in Vapor Intrusion Scenarios

    PubMed Central

    Shen, Rui; Suuberg, Eric M.

    2016-01-01

    There has, in recent years, been increasing interest in understanding the transport processes of relevance in vapor intrusion of volatile organic compounds (VOCs) into buildings on contaminated sites. These studies have included fate and transport modeling. Most such models have simplified the prediction of indoor air contaminant vapor concentrations by employing a steady state assumption, which often results in difficulties in reconciling these results with field measurements. This paper focuses on two major factors that may be subject to significant transients in vapor intrusion situations, including the indoor air pressure and the air exchange rate in the subject building. A three-dimensional finite element model was employed with consideration of daily and seasonal variations in these factors. From the results, the variations of indoor air pressure and air exchange rate are seen to contribute to significant variations in indoor air contaminant vapor concentrations. Depending upon the assumptions regarding the variations in these parameters, the results are only sometimes consistent with the reports of several orders of magnitude in indoor air concentration variations from field studies. The results point to the need to examine more carefully the interplay of these factors in order to quantitatively understand the variations in potential indoor air exposures. PMID:28090133

  10. Impacts of Changes of Indoor Air Pressure and Air Exchange Rate in Vapor Intrusion Scenarios.

    PubMed

    Shen, Rui; Suuberg, Eric M

    2016-02-01

    There has, in recent years, been increasing interest in understanding the transport processes of relevance in vapor intrusion of volatile organic compounds (VOCs) into buildings on contaminated sites. These studies have included fate and transport modeling. Most such models have simplified the prediction of indoor air contaminant vapor concentrations by employing a steady state assumption, which often results in difficulties in reconciling these results with field measurements. This paper focuses on two major factors that may be subject to significant transients in vapor intrusion situations, including the indoor air pressure and the air exchange rate in the subject building. A three-dimensional finite element model was employed with consideration of daily and seasonal variations in these factors. From the results, the variations of indoor air pressure and air exchange rate are seen to contribute to significant variations in indoor air contaminant vapor concentrations. Depending upon the assumptions regarding the variations in these parameters, the results are only sometimes consistent with the reports of several orders of magnitude in indoor air concentration variations from field studies. The results point to the need to examine more carefully the interplay of these factors in order to quantitatively understand the variations in potential indoor air exposures.

  11. A simplified rotor system mathematical model for piloted flight dynamics simulation

    NASA Technical Reports Server (NTRS)

    Chen, R. T. N.

    1979-01-01

    The model was developed for real-time pilot-in-the-loop investigation of helicopter flying qualities. The mathematical model included the tip-path plane dynamics and several primary rotor design parameters, such as flapping hinge restraint, flapping hinge offset, blade Lock number, and pitch-flap coupling. The model was used in several exploratory studies of the flying qualities of helicopters with a variety of rotor systems. The basic assumptions used and the major steps involved in the development of the set of equations listed are described. The equations consisted of the tip-path plane dynamic equation, the equations for the main rotor forces and moments, and the equation for control phasing required to achieve decoupling in pitch and roll due to cyclic inputs.

  12. A stratospheric aerosol model with perturbations induced by the space shuttle particulate effluents

    NASA Technical Reports Server (NTRS)

    Rosen, J. M.; Hofmann, D. J.

    1977-01-01

    A one-dimensional steady-state stratospheric aerosol model is developed that considers the perturbations caused by the expected space shuttle particulate effluents. Two approaches to the basic modeling effort were made: in one, enough simplifying assumptions were introduced so that a more or less exact solution to the descriptive equations could be obtained; in the other, very few simplifications were made and a computer technique was used to solve the equations. The most complex form of the model contains the effects of sedimentation, diffusion, particle growth and coagulation. Results of the perturbation calculations show that there will probably be an immeasurably small increase in the stratospheric aerosol concentration for particles larger than about 0.15 micrometer radius.

  13. Methods for determining the internal thrust of scramjet engine modules from experimental data

    NASA Technical Reports Server (NTRS)

    Voland, Randall T.

    1990-01-01

    Methods for calculating zero-fuel internal drag of scramjet engine modules from experimental measurements are presented. These methods include two control-volume approaches and a pressure and skin-friction integration. The three calculation techniques are applied to experimental data taken during tests of a version of the NASA parametric scramjet. The methods agree to within seven percent of the mean value of zero-fuel internal drag even though several simplifying assumptions are made in the analysis. The mean zero-fuel internal drag coefficient for this particular engine is calculated to be 0.150. The zero-fuel internal drag coefficient, when combined with the change in engine axial force with and without fuel, defines the internal thrust of the engine.

  14. Statistical Issues for Calculating Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Bacon, John B.; Matney, Mark

    2016-01-01

    A number of statistical tools have been developed over the years for assessing the risk posed by reentering objects to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. This information, combined with information on the expected ground path of the reentry, is used to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of this analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper builds on previous IAASS work to re-examine one of these theoretical assumptions. This study employs empirical and theoretical information to test the assumption of a fully random decay along the argument of latitude of the final orbit, and makes recommendations on how to improve the accuracy of this calculation in the future.
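
    The random-decay assumption being tested has a well-known closed form for circular orbits: the impact-latitude density rises toward the inclination bands with an integrable singularity. A sketch follows, in which the casualty area and population profile are hypothetical stand-ins:

    ```python
    # Latitude distribution implied by a fully random decay along the argument
    # of latitude: for a circular orbit of inclination i, the impact-latitude
    # density is f(phi) = cos(phi) / (pi * sqrt(sin(i)^2 - sin(phi)^2)), |phi|<i.
    # The casualty expectation below is a toy 1-D weighting, not a full ground-
    # track analysis; A_c and the population profile are made up.
    import numpy as np

    def latitude_pdf(phi, inc):
        """Impact-latitude density for a circular orbit (radians)."""
        s = np.sin(inc) ** 2 - np.sin(phi) ** 2
        return np.where(s > 0, np.cos(phi) / (np.pi * np.sqrt(s)), 0.0)

    inc = np.radians(51.6)                   # ISS-like inclination
    phi = np.radians(np.linspace(-51.5, 51.5, 2001))
    pdf = latitude_pdf(phi, inc)
    # ~0.97: slightly under 1 because the singular bands at +/-i are clipped.
    print("density integrates to", np.trapz(pdf, phi))

    A_c = 8.0                                # m^2, hypothetical casualty area
    rho = 1e-5 * np.exp(-(np.degrees(phi) / 40.0) ** 2)  # people/m^2, toy
    print("E[casualties] ~", np.trapz(pdf * rho, phi) * A_c)
    ```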

  15. Electromagnetic Simulation of the Near-Field Distribution around a Wind Farm

    DOE PAGES

    Yang, Shang-Te; Ling, Hao

    2013-01-01

    An efficient approach to compute the near-field distribution around and within a wind farm under plane wave excitation is proposed. To make the problem computationally tractable, several simplifying assumptions are made based on the geometry of the problem. By comparing the approximations against full-wave simulations at 500 MHz, it is shown that the assumptions do not introduce significant errors into the resulting near-field distribution. The near fields around a 3 × 3 wind farm are computed using the developed methodology at 150 MHz, 500 MHz, and 3 GHz. Both the multipath interference patterns and the forward shadows are predicted by the proposed method.

  16. Short-cut Methods versus Rigorous Methods for Performance-evaluation of Distillation Configurations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramapriya, Gautham Madenoor; Selvarajah, Ajiththaa; Jimenez Cucaita, Luis Eduardo

    Here, this study demonstrates the efficacy of a short-cut method such as the Global Minimization Algorithm (GMA), which uses assumptions of ideal mixtures, constant molar overflow (CMO) and pinched columns, in pruning the search-space of distillation column configurations for zeotropic multicomponent separation, to provide a small subset of attractive configurations with low minimum heat duties. The short-cut method, due to its simplifying assumptions, is computationally efficient, yet reliable in identifying the small subset of useful configurations for further detailed process evaluation. This two-tier approach allows expedient search of the configuration space containing hundreds to thousands of candidate configurations for a given application.

  17. Short-cut Methods versus Rigorous Methods for Performance-evaluation of Distillation Configurations

    DOE PAGES

    Ramapriya, Gautham Madenoor; Selvarajah, Ajiththaa; Jimenez Cucaita, Luis Eduardo; ...

    2018-05-17

    Here, this study demonstrates the efficacy of a short-cut method such as the Global Minimization Algorithm (GMA), which uses assumptions of ideal mixtures, constant molar overflow (CMO) and pinched columns, in pruning the search-space of distillation column configurations for zeotropic multicomponent separation, to provide a small subset of attractive configurations with low minimum heat duties. The short-cut method, due to its simplifying assumptions, is computationally efficient, yet reliable in identifying the small subset of useful configurations for further detailed process evaluation. This two-tier approach allows expedient search of the configuration space containing hundreds to thousands of candidate configurations for a given application.

  18. Hypotheses of calculation of the water flow rate evaporated in a wet cooling tower

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bourillot, C.

    1983-08-01

    The method developed by Poppe at the University of Hannover to calculate the thermal performance of a wet cooling tower fill is presented. The formulation of Poppe is then validated using full-scale test data from a wet cooling tower at the power station at Neurath, Federal Republic of Germany. It is shown that the Poppe method predicts the evaporated water flow rate almost perfectly and the condensate content of the warm air with good accuracy over a wide range of ambient conditions. The simplifying assumptions of the Merkel theory are discussed, and the errors linked to these assumptions are systematically described, then illustrated with the test data.

  19. Data Transmission Signal Design and Analysis

    NASA Technical Reports Server (NTRS)

    Moore, J. D.

    1972-01-01

    The error performances of several digital signaling methods are determined as a function of a specified signal-to-noise ratio. Results are obtained for Gaussian noise and impulse noise. Performance of a receiver for differentially encoded biphase signaling is obtained by extending the results of differential phase shift keying. The analysis presented obtains a closed-form answer through the use of some simplifying assumptions. The results give insight into the analysis problem; however, the actual error performance may show a degradation because of the assumptions made in the analysis. Bipolar signaling decision-threshold selection is investigated. The optimum threshold depends on the signal-to-noise ratio and requires the use of an adaptive receiver.
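
    For reference, the standard closed-form error rates under additive white Gaussian noise, of the kind such analyses derive, can be evaluated directly; these are the textbook formulas, not expressions transcribed from the report:

    ```python
    # Bit-error rates under AWGN: coherent BPSK and noncoherent DPSK.
    import math

    def ber_bpsk(ebn0):
        """Coherent BPSK: Pb = 0.5 * erfc(sqrt(Eb/N0))."""
        return 0.5 * math.erfc(math.sqrt(ebn0))

    def ber_dpsk(ebn0):
        """Noncoherent DPSK: Pb = 0.5 * exp(-Eb/N0)."""
        return 0.5 * math.exp(-ebn0)

    for db in (4, 7, 10):
        ebn0 = 10 ** (db / 10)   # convert dB to linear Eb/N0
        print(f"{db:2d} dB  BPSK {ber_bpsk(ebn0):.2e}  DPSK {ber_dpsk(ebn0):.2e}")
    ```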

  20. Simplifying Causal Complexity: How Interactions between Modes of Causal Induction and Information Availability Lead to Heuristic-Driven Reasoning

    ERIC Educational Resources Information Center

    Grotzer, Tina A.; Tutwiler, M. Shane

    2014-01-01

    This article considers a set of well-researched default assumptions that people make in reasoning about complex causality and argues that, in part, they result from the forms of causal induction that we engage in and the type of information available in complex environments. It considers how information often falls outside our attentional frame…

  1. Evolution of basic equations for nearshore wave field

    PubMed Central

    ISOBE, Masahiko

    2013-01-01

    In this paper, a systematic, overall view of theories for periodic waves of permanent form, such as Stokes and cnoidal waves, is described first with their validity ranges. To deal with random waves, a method for estimating directional spectra is given. Then, various wave equations are introduced according to the assumptions included in their derivations. The mild-slope equation is derived for combined refraction and diffraction of linear periodic waves. Various parabolic approximations and time-dependent forms are proposed to include randomness and nonlinearity of waves as well as to simplify numerical calculation. Boussinesq equations are the equations developed for calculating nonlinear wave transformations in shallow water. Nonlinear mild-slope equations are derived as a set of wave equations to predict transformation of nonlinear random waves in the nearshore region. Finally, wave equations are classified systematically for a clear theoretical understanding and appropriate selection for specific applications. PMID:23318680
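
    For reference, the mild-slope equation mentioned above takes the standard Berkhoff form for the complex amplitude of the free-surface elevation; this is the usual textbook statement, not a formula transcribed from this paper:

    ```latex
    % c: phase speed, c_g: group speed, k: wavenumber from linear dispersion;
    % \hat{\eta}: complex amplitude of the free-surface elevation.
    \nabla \cdot \left( c\, c_g\, \nabla \hat{\eta} \right) + k^{2} c\, c_g\, \hat{\eta} = 0
    ```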

  2. A SImplified method for Segregation Analysis (SISA) to determine penetrance and expression of a genetic variant in a family.

    PubMed

    Møller, Pål; Clark, Neal; Mæhle, Lovise

    2011-05-01

    A method for SImplified rapid Segregation Analysis (SISA) to assess penetrance and expression of genetic variants in pedigrees of any complexity is presented. For this purpose, the probability of recombination between the variant and the gene is taken to be zero. A further assumption is that the variant of undetermined significance (VUS) is introduced into the family once only. If so, all family members between two members demonstrated to carry the VUS are obligate carriers. Probabilities for cosegregation of disease and VUS by chance, penetrance, and expression may be calculated. SISA return values do not include person identifiers and need no explicit informed consent. There will be no ethical complications in submitting SISA return values to central databases. Values for several families may be combined. Values for a family may be updated by the contributor. SISA is used to consider penetrance whenever sequencing demonstrates a VUS in the known cancer-predisposing genes. Any family structure at hand in a genetic clinic may be used. One may include an extended lineage in a family by demonstrating the same VUS in a distant relative, thereby identifying all obligate carriers in between. Such extension is a way to escape selection biases by expanding the families outside the clusters used to select them. © 2011 Wiley-Liss, Inc.
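
    A minimal sketch of the chance-cosegregation arithmetic, assuming each informative meiosis transmits the variant independently with probability 1/2; this illustrates the principle behind such calculations only and is not the authors' exact SISA formula:

    ```python
    # With the variant introduced once, every informative meiosis transmits it
    # with probability 1/2, so the probability that all observed affected
    # carriers share it by chance decays as (1/2)^m. Principle only; not SISA.
    def coseg_by_chance(informative_meioses: int) -> float:
        return 0.5 ** informative_meioses

    for m in range(1, 8):
        print(f"{m} meioses -> P(cosegregation by chance) = "
              f"{coseg_by_chance(m):.4f}")
    ```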

  3. Cost Effectiveness of HPV Vaccination: A Systematic Review of Modelling Approaches.

    PubMed

    Pink, Joshua; Parker, Ben; Petrou, Stavros

    2016-09-01

    A large number of economic evaluations have been published that assess alternative possible human papillomavirus (HPV) vaccination strategies. Understanding differences in the modelling methodologies used in these studies is important to assess the accuracy, comparability and generalisability of their results. The aim of this review was to identify published economic models of HPV vaccination programmes and understand how characteristics of these studies vary by geographical area, date of publication and the policy question being addressed. We performed literature searches in MEDLINE, Embase, Econlit, The Health Economic Evaluations Database (HEED) and The National Health Service Economic Evaluation Database (NHS EED). From the 1189 unique studies retrieved, 65 studies were included for data extraction based on a priori eligibility criteria. Two authors independently reviewed these articles to determine eligibility for the final review. Data were extracted from the selected studies, focussing on six key structural or methodological themes covering different aspects of the model(s) used that may influence cost-effectiveness results. More recently published studies tend to model a larger number of HPV strains, and include a larger number of HPV-associated diseases. Studies published in Europe and North America also tend to include a larger number of diseases and are more likely to incorporate the impact of herd immunity and to use more realistic assumptions around vaccine efficacy and coverage. Studies based on previous models often do not include sufficiently robust justifications as to the applicability of the adapted model to the new context. The considerable between-study heterogeneity in economic evaluations of HPV vaccination programmes makes comparisons between studies difficult, as observed differences in cost effectiveness may be driven by differences in methodology as well as by variations in funding and delivery models and estimates of model parameters. Studies should consistently report not only all simplifying assumptions made but also the estimated impact of these assumptions on the cost-effectiveness results.

  4. Simplified subsurface modelling: data assimilation and violated model assumptions

    NASA Astrophysics Data System (ADS)

    Erdal, Daniel; Lange, Natascha; Neuweiler, Insa

    2017-04-01

    Integrated models are gaining more and more attention in hydrological modelling as they can better represent the interaction between different compartments. Naturally, these models come along with larger numbers of unknowns and requirements on computational resources compared to stand-alone models. If large model domains are to be represented, e.g. on catchment scale, the resolution of the numerical grid needs to be reduced or the model itself needs to be simplified. Both approaches lead to a reduced ability to reproduce the present processes. This lack of model accuracy may be compensated by using data assimilation methods. In these methods observations are used to update the model states, and optionally model parameters as well, in order to reduce the model error induced by the imposed simplifications. What is unclear is whether these methods combined with strongly simplified models result in completely data-driven models, or whether they can even be used to make adequate predictions of the model state for times when no observations are available. In the current work we consider the combined groundwater and unsaturated zone, which can be modelled in a physically consistent way using 3D-models solving the Richards equation. For use in simple predictions, however, simpler approaches may be considered. The question investigated here is whether a simpler model, in which the groundwater is modelled as a horizontal 2D-model and the unsaturated zones as a few sparse 1D-columns, can be used within an Ensemble Kalman filter to give predictions of groundwater levels and unsaturated fluxes. This is tested under conditions where the feedback between the two model compartments is large (e.g. shallow groundwater table) and the simplifying assumptions are clearly violated. Such a case may be a steep hill-slope or pumping wells, creating lateral fluxes in the unsaturated zone, or strongly heterogeneous structures creating unaccounted-for flows in both the saturated and unsaturated compartments. Under such circumstances, direct modelling using a simplified model will not provide good results. However, a more data-driven (e.g. grey-box) approach, driven by the filter, may still provide an improved understanding of the system. Comparisons between full 3D simulations and simplified filter-driven models will be shown and the resulting benefits and drawbacks will be discussed.
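
    A bare-bones perturbed-observation ensemble Kalman filter update of the kind used to compensate for model simplifications; the state dimension, observation operator, and values below are illustrative assumptions, not the study's configuration:

    ```python
    # EnKF analysis step: the ensemble covariance maps observed groundwater
    # levels back onto all model states. Dimensions and values are toy choices.
    import numpy as np

    rng = np.random.default_rng(3)
    n_state, n_ens, n_obs = 6, 50, 2
    X = rng.normal(1.0, 0.2, (n_state, n_ens))   # ensemble of model states
    H = np.zeros((n_obs, n_state))
    H[0, 0] = H[1, 3] = 1.0                      # observe two of the states
    R = 0.05 ** 2 * np.eye(n_obs)                # observation error covariance
    y = np.array([1.3, 0.9])                     # observed heads (made up)

    A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                    # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R) # Kalman gain

    # Perturbed-observation update applied to every ensemble member.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
    X = X + K @ (Y - H @ X)
    print("updated ensemble mean:", X.mean(axis=1).round(3))
    ```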

  5. Effects of fish movement assumptions on the design of a marine protected area to protect an overfished stock.

    PubMed

    Cornejo-Donoso, Jorge; Einarsson, Baldvin; Birnir, Bjorn; Gaines, Steven D

    2017-01-01

    Marine Protected Areas (MPA) are important management tools shown to protect marine organisms, restore biomass, and increase fisheries yields. While MPAs have been successful in meeting these goals for many relatively sedentary species, highly mobile organisms may get few benefits from this type of spatial protection due to their frequent movement outside the protected area. The use of a large MPA can compensate for extensive movement, but testing this empirically is challenging, as it requires both large areas and sufficient time series to draw conclusions. To overcome this limitation, MPA models have been used to identify designs and predict potential outcomes, but these simulations are highly sensitive to the assumptions describing the organism's movements. Due to recent improvements in computational simulations, it is now possible to include very complex movement assumptions in MPA models (e.g. Individual Based Model). These have renewed interest in MPA simulations, which implicitly assume that increasing the detail in fish movement overcomes the sensitivity to the movement assumptions. Nevertheless, a systematic comparison of the designs and outcomes obtained under different movement assumptions has not been done. In this paper, we use an individual based model, interconnected to population and fishing fleet models, to explore the value of increasing the detail of the movement assumptions using four scenarios of increasing behavioral complexity: a) random, diffusive movement, b) aggregations, c) aggregations that respond to environmental forcing (e.g. sea surface temperature), and d) aggregations that respond to environmental forcing and are transported by currents. We then compare these models to determine how the assumptions affect MPA design, and therefore the effective protection of the stocks. Our results show that the optimal MPA size to maximize fisheries benefits increases with movement complexity, from ~10% under the diffusive assumption to ~30% when full environmental forcing was used. We also found that in cases of limited understanding of the movement dynamics of a species, simplified assumptions can be used to provide a guide for the minimum MPA size needed to effectively protect the stock. However, using oversimplified assumptions can produce suboptimal designs and lead to a density underestimation of ca. 30%; therefore, the main value of detailed movement dynamics is to provide more reliable MPA design and predicted outcomes. Large MPAs can be effective in recovering overfished stocks, protecting pelagic fish, and providing significant increases in fisheries yields. Our models provide a means to empirically test this spatial management tool, which theoretical evidence consistently suggests as an effective alternative for managing highly mobile pelagic stocks.

  6. Asymmetric Marcus-Hush theory for voltammetry.

    PubMed

    Laborda, Eduardo; Henstridge, Martin C; Batchelor-McAuley, Christopher; Compton, Richard G

    2013-06-21

    The current state-of-the-art in modeling the rate of electron transfer between an electroactive species and an electrode is reviewed. Experimental studies show that neither the ubiquitous Butler-Volmer model nor the more modern symmetric Marcus-Hush model are able to satisfactorily reproduce the experimental voltammetry for both solution-phase and surface-bound redox couples. These experimental deviations indicate the need for revision of the simplifying approximations used in the above models. Within this context, models encompassing asymmetry are considered which include different vibrational and solvation force constants for the electroactive species. The assumption of non-adiabatic electron transfer is also examined. These refinements have provided more satisfactory models of the electron transfer process and they enable us to gain more information about the microscopic characteristics of the system by means of simple electrochemical measurements.
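
    The Butler-Volmer baseline that the review finds insufficient can be stated compactly; the sketch below uses the standard rate expressions (the symmetric and asymmetric Marcus-Hush integrals over electrode electronic states are not reproduced here), with illustrative parameter values:

    ```python
    # Butler-Volmer electron-transfer rate constants:
    #   k_red = k0 * exp(-alpha * f * eta),  k_ox = k0 * exp((1 - alpha) * f * eta)
    # with f = F/(R*T) and eta the overpotential. k0 and alpha below are toy.
    import math

    F, R, T = 96485.33, 8.314, 298.15
    f = F / (R * T)

    def butler_volmer(eta_V, k0=1e-2, alpha=0.5):
        """Reduction and oxidation rate constants at overpotential eta (V)."""
        return (k0 * math.exp(-alpha * f * eta_V),
                k0 * math.exp((1 - alpha) * f * eta_V))

    print(butler_volmer(-0.1))  # reduction accelerated at negative overpotential
    ```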

  7. Advanced statistics: linear regression, part I: simple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
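
    The method of least squares described above reduces to two lines of arithmetic; a sketch with made-up data:

    ```python
    # Least-squares line fit from first principles:
    #   slope = cov(x, y) / var(x),  intercept = ybar - slope * xbar.
    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # predictor (illustrative)
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # outcome (illustrative)

    slope = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    intercept = y.mean() - slope * x.mean()
    residuals = y - (intercept + slope * x)
    r_squared = 1 - np.sum(residuals ** 2) / np.sum((y - y.mean()) ** 2)

    print(f"y = {intercept:.3f} + {slope:.3f} x,  R^2 = {r_squared:.4f}")
    ```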

  8. Farms, Families, and Markets: New Evidence on Completeness of Markets in Agricultural Settings

    PubMed Central

    LaFave, Daniel; Thomas, Duncan

    2016-01-01

    The farm household model has played a central role in improving the understanding of small-scale agricultural households and non-farm enterprises. Under the assumptions that all current and future markets exist and that farmers treat all prices as given, the model simplifies households’ simultaneous production and consumption decisions into a recursive form in which production can be treated as independent of preferences of household members. These assumptions, which are the foundation of a large literature in labor and development, have been tested and not rejected in several important studies including Benjamin (1992). Using multiple waves of longitudinal survey data from Central Java, Indonesia, this paper tests a key prediction of the recursive model: demand for farm labor is unrelated to the demographic composition of the farm household. The prediction is unambiguously rejected. The rejection cannot be explained by contamination due to unobserved heterogeneity that is fixed at the farm level, local area shocks or farm-specific shocks that affect changes in household composition and farm labor demand. We conclude that the recursive form of the farm household model is not consistent with the data. Developing empirically tractable models of farm households when markets are incomplete remains an important challenge. PMID:27688430

  9. Inferences about unobserved causes in human contingency learning.

    PubMed

    Hagmayer, York; Waldmann, Michael R

    2007-03-01

    Estimates of the causal efficacy of an event need to take into account the possible presence and influence of other unobserved causes that might have contributed to the occurrence of the effect. Current theoretical approaches deal differently with this problem. Associative theories assume that at least one unobserved cause is always present. In contrast, causal Bayes net theories (including Power PC theory) hypothesize that unobserved causes may be present or absent. These theories generally assume independence of different causes of the same event, which greatly simplifies the modelling of learning and inference. In two experiments participants were requested to learn about the causal relation between a single cause and an effect by observing their co-occurrence (Experiment 1) or by actively intervening in the cause (Experiment 2). Participants' assumptions about the presence of an unobserved cause were assessed either after each learning trial or at the end of the learning phase. The results show an interesting dissociation. Whereas there was a tendency to assume interdependence of the causes in the online judgements during learning, the final judgements tended to be more in the direction of an independence assumption. Possible explanations and implications of these findings are discussed.
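
    The independence assumption is what licenses the Power PC estimate of generative causal power; a one-line sketch with toy probabilities:

    ```python
    # Power PC theory: generative causal power
    #   p = [P(e|c) - P(e|~c)] / (1 - P(e|~c)),
    # valid only if unobserved causes occur independently of c.
    def causal_power(p_e_given_c: float, p_e_given_not_c: float) -> float:
        delta_p = p_e_given_c - p_e_given_not_c
        return delta_p / (1.0 - p_e_given_not_c)

    print(causal_power(0.8, 0.4))   # -> 0.666..., the inferred power of c
    ```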

  10. The evolutionary interplay of intergroup conflict and altruism in humans: a review of parochial altruism theory and prospects for its extension

    PubMed Central

    Rusch, Hannes

    2014-01-01

    Drawing on an idea proposed by Darwin, it has recently been hypothesized that violent intergroup conflict might have played a substantial role in the evolution of human cooperativeness and altruism. The central notion of this argument, dubbed ‘parochial altruism’, is that the two genetic or cultural traits, aggressiveness against the out-groups and cooperativeness towards the in-group, including self-sacrificial altruistic behaviour, might have coevolved in humans. This review assesses the explanatory power of current theories of ‘parochial altruism’. After a brief synopsis of the existing literature, two pitfalls in the interpretation of the most widely used models are discussed: potential direct benefits and high relatedness between group members implicitly induced by assumptions about conflict structure and frequency. Then, a number of simplifying assumptions made in the construction of these models are pointed out which currently limit their explanatory power. Next, relevant empirical evidence from several disciplines which could guide future theoretical extensions is reviewed. Finally, selected alternative accounts of evolutionary links between intergroup conflict and intragroup cooperation are briefly discussed which could be integrated with parochial altruism in the future. PMID:25253457

  11. Direct numerical simulation of leaky dielectrics with application to electrohydrodynamic atomization

    NASA Astrophysics Data System (ADS)

    Owkes, Mark; Desjardins, Olivier

    2013-11-01

    Electrohydrodynamics (EHD) has the potential to greatly enhance liquid break-up, as demonstrated in numerical simulations by Van Poppel et al. (JCP (229) 2010). In liquid-gas EHD flows, the ratio of charge mobility to charge convection timescales can be used to determine whether the charge can be assumed to exist in the bulk of the liquid or at the surface only. However, for EHD-aided fuel injection applications, these timescales are of similar magnitude and charge mobility within the fluid might need to be accounted for explicitly. In this work, a computational approach for simulating two-phase EHD flows including the charge transport equation is presented. Under certain assumptions compatible with a leaky dielectric model, charge transport simplifies to a scalar transport equation that is only defined in the liquid phase, where electric charges are present. To ensure consistency with interfacial transport, the charge equation is solved using a semi-Lagrangian geometric transport approach, similar to the method proposed by Le Chenadec and Pitsch (JCP (233) 2013). This methodology is then applied to EHD atomization of a liquid kerosene jet, and compared to results produced under the assumption of a bulk volumetric charge.
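
    A one-dimensional sketch of semi-Lagrangian scalar transport in general (trace each grid node's departure point upstream, interpolate the old field there); this illustrates the principle, not the geometric scheme of Le Chenadec and Pitsch, and the grid and velocity are arbitrary:

    ```python
    # Semi-Lagrangian advection of a charge-like scalar on a periodic 1-D grid.
    # Each node looks upstream by u*dt and interpolates; unconditionally stable.
    import numpy as np

    nx, L, u, dt, nsteps = 200, 1.0, 0.3, 0.01, 50
    x = np.linspace(0.0, L, nx, endpoint=False)
    q = np.exp(-((x - 0.3) / 0.05) ** 2)       # initial charge-like blob

    for _ in range(nsteps):
        x_dep = (x - u * dt) % L               # departure points (periodic)
        q = np.interp(x_dep, x, q, period=L)   # linear interpolation

    print("blob centre moved to ~", x[np.argmax(q)])  # ~0.3 + u*dt*nsteps = 0.45
    ```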

  12. A practical iterative PID tuning method for mechanical systems using parameter chart

    NASA Astrophysics Data System (ADS)

    Kang, M.; Cheong, J.; Do, H. M.; Son, Y.; Niculescu, S.-I.

    2017-10-01

    In this paper, we propose a method of iterative proportional-integral-derivative parameter tuning for mechanical systems that possibly possess hidden mechanical resonances, using a parameter chart which visualises the closed-loop characteristics in a 2D parameter space. We employ the working assumption that the mechanical systems under consideration have an upper limit on the derivative feedback gain; this substantially reduces the feasible region in the parameter chart and thus greatly simplifies gain selection. Then, a two-directional parameter search is carried out within the feasible region in order to find the best set of parameters. Experimental results show the validity of the assumption used and the proposed parameter tuning method.
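
    The parameter-chart idea can be sketched as a sweep over (Kp, Ki) with the derivative gain pinned at its assumed ceiling; the toy plant, grid ranges, and ITAE scoring below are hypothetical choices for illustration, not the authors' experimental setup:

    ```python
    # Grid-sweep "parameter chart": fix Kd at its assumed upper limit, simulate
    # the closed-loop step response over a (Kp, Ki) grid, keep the lowest ITAE.
    import numpy as np

    def itae(kp, ki, kd, dt=1e-3, t_end=3.0, m=1.0, c=0.5):
        x = v = integ = 0.0
        score, prev_err = 0.0, 1.0
        for k in range(int(t_end / dt)):
            err = 1.0 - x                     # unit step reference
            integ += err * dt
            derr = (err - prev_err) / dt
            prev_err = err
            u = kp * err + ki * integ + kd * derr
            a = (u - c * v) / m               # toy mass-damper plant
            v += a * dt
            x += v * dt
            score += k * dt * abs(err) * dt   # integral of t * |error|
        return score

    kd_max = 2.0                              # assumed derivative-gain ceiling
    grid = [(kp, ki) for kp in np.linspace(1, 20, 8)
                     for ki in np.linspace(0.5, 10, 8)]
    best = min(grid, key=lambda g: itae(*g, kd_max))
    print("best (Kp, Ki) on the chart:", best)
    ```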

  13. Consistency tests for the extraction of the Boer-Mulders and Sivers functions

    NASA Astrophysics Data System (ADS)

    Christova, E.; Leader, E.; Stoilov, M.

    2018-03-01

    At present, the Boer-Mulders (BM) function for a given quark flavor is extracted from data on semi-inclusive deep inelastic scattering (SIDIS) using the simplifying assumption that it is proportional to the Sivers function for that flavor. In a recent paper, we suggested that the consistency of this assumption could be tested using information on so-called difference asymmetries i.e. the difference between the asymmetries in the production of particles and their antiparticles. In this paper, using the SIDIS COMPASS deuteron data on the ⟨cos ϕh⟩ , ⟨cos 2 ϕh⟩ and Sivers difference asymmetries, we carry out two independent consistency tests of the assumption of proportionality, but here applied to the sum of the valence-quark contributions. We find that such an assumption is compatible with the data. We also show that the proportionality assumptions made in the existing parametrizations of the BM functions are not compatible with our analysis, which suggests that the published results for the Boer-Mulders functions for individual flavors are unreliable. The ⟨cos ϕh⟩ and ⟨cos 2 ϕh⟩ asymmetries receive contributions also from the, in principle, calculable Cahn effect. We succeed in extracting the Cahn contributions from experiment (we believe for the first time) and compare with their calculated values, with interesting implications.

  14. Regional and longitudinal estimation of product lifespan distribution: a case study for automobiles and a simplified estimation method.

    PubMed

    Oguchi, Masahiro; Fuse, Masaaki

    2015-02-03

    Product lifespan estimates are important information for understanding progress toward sustainable consumption and estimating the stocks and end-of-life flows of products. Published studies have reported actual product lifespans; however, quantitative data are still limited for many countries and years. This study presents regional and longitudinal estimation of lifespan distribution of consumer durables, taking passenger cars as an example, and proposes a simplified method for estimating product lifespan distribution. We estimated lifespan distribution parameters for 17 countries based on the age profile of in-use cars. Sensitivity analysis demonstrated that the shape parameter of the lifespan distribution can be replaced by a constant value for all the countries and years. This enabled a simplified estimation that does not require detailed data on the age profile. Applying the simplified method, we estimated the trend in average lifespans of passenger cars from 2000 to 2009 for 20 countries. Average lifespan differed greatly between countries (9-23 years) and was increasing in many countries. This suggests consumer behavior differs greatly among countries and has changed over time, even in developed countries. The results suggest that inappropriate assumptions of average lifespan may cause significant inaccuracy in estimating the stocks and end-of-life flows of products.
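
    With the shape parameter held constant, a Weibull lifespan model needs only the scale, and the average lifespan follows from the mean formula; the shape value below is a hypothetical stand-in, not the paper's fitted constant:

    ```python
    # Weibull lifespan with a fixed shape k: mean = scale * Gamma(1 + 1/k), so
    # scale and average lifespan convert back and forth with one call each.
    from math import gamma

    SHAPE = 2.6                               # assumed common shape parameter

    def scale_from_mean(mean_lifespan: float) -> float:
        return mean_lifespan / gamma(1 + 1 / SHAPE)

    def mean_from_scale(scale: float) -> float:
        return scale * gamma(1 + 1 / SHAPE)

    lam = scale_from_mean(12.0)               # country with a 12-year average
    print(f"scale = {lam:.2f} years, mean back out = {mean_from_scale(lam):.1f}")
    ```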

  15. Multi-Mode 3D Kirchhoff Migration of Receiver Functions at Continental Scale With Applications to USArray

    NASA Astrophysics Data System (ADS)

    Millet, F.; Bodin, T.; Rondenay, S.

    2017-12-01

    The teleseismic scattered seismic wavefield contains valuable information about heterogeneities and discontinuities inside the Earth. By using fast Receiver Function (RF) migration techniques such as classic Common Conversion Point (CCP) stacks, one can easily interpret structural features down to a few hundred kilometers in the mantle. However, strong simplifying 1D assumptions limit the scope of these methods to structures that are relatively planar and sub-horizontal at local-to-regional scales, such as the Lithosphere-Asthenosphere Boundary and the Mantle Transition Zone discontinuities. Other more robust 2D and 2.5D methods rely on fewer assumptions but require considerable, sometimes prohibitive, computation time. Following the ideas of Cheng (2017), we have implemented a simple fully 3D Prestack Kirchhoff RF migration scheme which uses the FM3D fast Eikonal solver to compute travel times and scattering angles. The method accounts for 3D elastic point scattering and includes free surface multiples, resulting in enhanced images of laterally varying dipping structures, such as subducted slabs. The method is tested for subduction structures using 2.5D synthetics generated with Raysum and 3D synthetics generated with specfem3D. Results show that dip angles, depths and lateral variations can be recovered almost perfectly. The approach is ideally suited for applications to dense regional datasets, including those collected across the Cascadia and Alaska subduction zones by USArray.

  16. Understanding the LIGO GW150914 event

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naselsky, Pavel; Jackson, Andrew D.; Liu, Hao, E-mail: naselsky@nbi.ku.dk, E-mail: liuhao@nbi.dk

    We present a simplified method for the extraction of meaningful signals from Hanford and Livingston 32 second data for the GW150914 event made publicly available by the LIGO collaboration, and demonstrate its ability to reproduce the LIGO collaboration's own results quantitatively given the assumption that all narrow peaks in the power spectrum are a consequence of physically uninteresting signals and can be removed. After the clipping of these peaks and return to the time domain, the GW150914 event is readily distinguished from broadband background noise. This simple technique allows us to identify the GW150914 event without any assumption regarding its physical origin and with minimal assumptions regarding its shape. We also confirm that the LIGO GW150914 event is uniquely correlated in the Hanford and Livingston detectors for the full 4096 second data at the level of 6–7 σ with a temporal displacement of τ = 6.9 ± 0.4 ms. We have also identified a few events that are morphologically close to GW150914 but less strongly cross correlated with it.

  17. Understanding the LIGO GW150914 event

    NASA Astrophysics Data System (ADS)

    Naselsky, Pavel; Jackson, Andrew D.; Liu, Hao

    2016-08-01

    We present a simplified method for the extraction of meaningful signals from Hanford and Livingston 32 second data for the GW150914 event made publicly available by the LIGO collaboration, and demonstrate its ability to reproduce the LIGO collaboration's own results quantitatively given the assumption that all narrow peaks in the power spectrum are a consequence of physically uninteresting signals and can be removed. After the clipping of these peaks and return to the time domain, the GW150914 event is readily distinguished from broadband background noise. This simple technique allows us to identify the GW150914 event without any assumption regarding its physical origin and with minimal assumptions regarding its shape. We also confirm that the LIGO GW150914 event is uniquely correlated in the Hanford and Livingston detectors for the full 4096 second data at the level of 6-7 σ with a temporal displacement of τ = 6.9 ± 0.4 ms. We have also identified a few events that are morphologically close to GW150914 but less strongly cross correlated with it.
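
    The pipeline described (flatten narrow spectral peaks, return to the time domain, cross-correlate the two detectors) can be sketched on synthetic data; the chirp, noise level, and 7 ms offset below are stand-ins for the public strain records, not the LIGO data themselves:

    ```python
    # Clip narrow spectral peaks, invert back to the time domain, then
    # cross-correlate two "detectors" to recover their relative time offset.
    import numpy as np

    rng = np.random.default_rng(4)
    fs, n = 4096, 4096 * 2
    t = np.arange(n) / fs
    chirp = np.sin(2 * np.pi * (35 + 60 * t) * t) * np.exp(-((t - 1.0) / 0.2) ** 2)
    h1 = rng.normal(0, 1.0, n) + chirp                    # detector 1
    l1 = rng.normal(0, 1.0, n) + np.roll(chirp, int(0.007 * fs))  # +7 ms

    def clip_peaks(x, factor=4.0):
        X = np.fft.rfft(x)
        mag = np.abs(X)
        med = np.median(mag)
        strong = mag > factor * med
        X[strong] *= med / mag[strong]        # flatten narrow peaks
        return np.fft.irfft(X, n)

    xc = np.correlate(clip_peaks(h1), clip_peaks(l1), mode="full")
    lag = (np.argmax(np.abs(xc)) - (n - 1)) / fs
    print(f"recovered |offset| ~ {abs(lag) * 1e3:.1f} ms")
    ```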

  18. Impact of an equality constraint on the class-specific residual variances in regression mixtures: A Monte Carlo simulation study

    PubMed Central

    Kim, Minjung; Lamont, Andrea E.; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M. Lee

    2015-01-01

    Regression mixture models are a novel approach for modeling heterogeneous effects of predictors on an outcome. In the model-building process, residual variances are often disregarded and simplifying assumptions are made without thorough examination of the consequences. This simulation study investigated the impact of an equality constraint on the residual variances across latent classes. We examine the consequence of constraining the residual variances on class enumeration (finding the true number of latent classes) and parameter estimates under a number of different simulation conditions meant to reflect the type of heterogeneity likely to exist in applied analyses. Results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted estimated class sizes and showed the potential to greatly impact parameter estimates in each class. Results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions were made. PMID:26139512

  19. Some Basic Aspects of Magnetohydrodynamic Boundary-Layer Flows

    NASA Technical Reports Server (NTRS)

    Hess, Robert V.

    1959-01-01

    An appraisal is made of existing solutions of magnetohydrodynamic boundary-layer equations for stagnation flow and flat-plate flow, and some new solutions are given. Since an exact solution of the equations of magnetohydrodynamics requires complicated simultaneous treatment of the equations of fluid flow and of electromagnetism, certain simplifying assumptions are generally introduced. The full implications of these assumptions have not been brought out properly in several recent papers. It is shown in the present report that for the particular law of deformation which the magnetic lines are assumed to follow in these papers a magnet situated inside the missile nose would not be able to take up any drag forces; to do so it would have to be placed in the flow away from the nose. It is also shown that for the assumption that potential flow is maintained outside the boundary layer, the deformation of the magnetic lines is restricted to small values. The literature contains serious disagreements with regard to reductions in heat-transfer rates due to magnetic action at the nose of a missile, and these disagreements are shown to be mainly due to different interpretations of reentry conditions rather than more complicated effects. In the present paper the magnetohydrodynamic boundary-layer equation is also expressed in a simple form that is especially convenient for physical interpretation. This is done by adapting methods to magnetic forces which in the past have been used for forces due to gravitational or centrifugal action. The simplified approach is used to develop some new solutions of boundary-layer flow and to reinterpret certain solutions existing in the literature. An asymptotic boundary-layer solution representing a fixed velocity profile and shear is found. Special emphasis is put on estimating skin friction and heat-transfer rates.

  20. Information content and sensitivity of the 3β + 2α lidar measurement system for aerosol microphysical retrievals

    NASA Astrophysics Data System (ADS)

    Burton, Sharon P.; Chemyakin, Eduard; Liu, Xu; Knobelspiesse, Kirk; Stamnes, Snorre; Sawamura, Patricia; Moore, Richard H.; Hostetler, Chris A.; Ferrare, Richard A.

    2016-11-01

    There is considerable interest in retrieving profiles of aerosol effective radius, total number concentration, and complex refractive index from lidar measurements of extinction and backscatter at several wavelengths. The combination of three backscatter channels plus two extinction channels (3β + 2α) is particularly important since it is believed to be the minimum configuration necessary for the retrieval of aerosol microphysical properties and because the technological readiness of lidar systems permits this configuration on both an airborne and future spaceborne instrument. The second-generation NASA Langley airborne High Spectral Resolution Lidar (HSRL-2) has been making 3β + 2α measurements since 2012. The planned NASA Aerosol/Clouds/Ecosystems (ACE) satellite mission also recommends the 3β + 2α combination.

    Here we develop a deeper understanding of the information content and sensitivities of the 3β + 2α system in terms of aerosol microphysical parameters of interest. We use a retrieval-free methodology to determine the basic sensitivities of the measurements independent of retrieval assumptions and constraints. We calculate information content and uncertainty metrics using tools borrowed from the optimal estimation methodology based on Bayes' theorem, using a simplified forward model look-up table, with no explicit inversion. The forward model is simplified to represent spherical particles, monomodal log-normal size distributions, and wavelength-independent refractive indices. Since we only use the forward model with no retrieval, the given simplified aerosol scenario is applicable as a best case for all existing retrievals in the absence of additional constraints. Retrieval-dependent errors due to mismatch between retrieval assumptions and true atmospheric aerosols are not included in this sensitivity study, and neither are retrieval errors that may be introduced in the inversion process. The choice of a simplified model adds clarity to the understanding of the uncertainties in such retrievals, since it allows for separately assessing the sensitivities and uncertainties of the measurements alone that cannot be corrected by any potential or theoretical improvements to retrieval methodology but must instead be addressed by adding information content.

    The sensitivity metrics allow for identifying (1) information content of the measurements vs. a priori information; (2) error bars on the retrieved parameters; and (3) potential sources of cross-talk or "compensating" errors wherein different retrieval parameters are not independently captured by the measurements. The results suggest that the 3β + 2α measurement system is underdetermined with respect to the full suite of microphysical parameters considered in this study and that additional information is required, in the form of additional coincident measurements (e.g., sun-photometer or polarimeter) or a priori retrieval constraints. A specific recommendation is given for addressing cross-talk between effective radius and total number concentration.
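
    The optimal-estimation bookkeeping behind such metrics fits in a few lines; in the sketch below, the Jacobian and covariances are random stand-ins rather than the 3β + 2α forward model:

    ```python
    # Rodgers-style information metrics: averaging kernel, degrees of freedom
    # for signal, and Shannon information content. K, Se, Sa are toy inputs.
    import numpy as np

    rng = np.random.default_rng(1)
    n_meas, n_state = 5, 4                    # e.g. 3 backscatter + 2 extinction
    K = rng.normal(size=(n_meas, n_state))    # Jacobian d(measurement)/d(state)
    Se_inv = np.diag(1.0 / 0.05 ** 2 * np.ones(n_meas))  # 5 % measurement noise
    Sa_inv = np.diag(1.0 / 1.0 ** 2 * np.ones(n_state))  # loose prior

    # Averaging kernel A = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 K
    G = np.linalg.solve(K.T @ Se_inv @ K + Sa_inv, K.T @ Se_inv)
    A = G @ K
    dofs = np.trace(A)                        # degrees of freedom for signal
    shannon = -0.5 * np.log(np.linalg.det(np.eye(n_state) - A))

    print(f"DOFs = {dofs:.2f} of {n_state},  Shannon info = {shannon:.2f} nats")
    ```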

  1. A cumulative energy demand indicator (CED), life cycle based, for industrial waste management decision making.

    PubMed

    Puig, Rita; Fullana-I-Palmer, Pere; Baquero, Grau; Riba, Jordi-Roger; Bala, Alba

    2013-12-01

    Life cycle thinking is a good approach for environmental decision support, although the complexity of Life Cycle Assessment (LCA) studies sometimes prevents their wide use. The purpose of this paper is to show how LCA methodology can be simplified to be more useful for certain applications. In order to improve waste management in Catalonia (Spain), a Cumulative Energy Demand indicator (LCA-based) has been used to obtain four mathematical models to help the government decide whether to prevent or allow a specific waste from leaving its borders. The conceptual equations and all the subsequent developments and assumptions made to obtain the simplified models are presented. One of the four models is discussed in detail, presenting the final simplified equation to be subsequently used by the government in decision making. The resulting model has been found to be scientifically robust, simple to implement and, above all, fulfilling its purpose: the limitation of waste transport out of Catalonia unless the waste recovery operations are significantly better and justify this transport. Copyright © 2013. Published by Elsevier Ltd.

  2. Upscaling NZ-DNDC using a regression based meta-model to estimate direct N2O emissions from New Zealand grazed pastures.

    PubMed

    Giltrap, Donna L; Ausseil, Anne-Gaëlle E

    2016-01-01

    The availability of detailed input data frequently limits the application of process-based models at large scale. In this study, we produced simplified meta-models of the simulated nitrous oxide (N2O) emission factors (EF) using NZ-DNDC. Monte Carlo simulations were performed and the results investigated using multiple regression analysis to produce simplified meta-models of EF. These meta-models were then used to estimate direct N2O emissions from grazed pastures in New Zealand. New Zealand EF maps were generated using the meta-models with data from national scale soil maps. Direct emissions of N2O from grazed pasture were calculated by multiplying the EF map with a nitrogen (N) input map. Three meta-models were considered. Model 1 included only the soil organic carbon in the top 30 cm (SOC30), Model 2 also included a clay content factor, and Model 3 added the interaction between SOC30 and clay. The median annual national direct N2O emissions from grazed pastures estimated using each model (assuming model errors were purely random) were: 9.6 Gg N (Model 1), 13.6 Gg N (Model 2), and 11.9 Gg N (Model 3). These values corresponded to an average EF of 0.53%, 0.75% and 0.63% respectively, while the corresponding average EF using New Zealand national inventory values was 0.67%. If the model error can be assumed to be independent for each pixel then the 95% confidence interval for the N2O emissions was of the order of ±0.4-0.7%, which is much narrower than that of existing methods. However, spatial correlations in the model errors could invalidate this assumption. Under the extreme assumption that the model error for each pixel was identical the 95% confidence interval was approximately ±100-200%. Therefore further work is needed to assess the degree of spatial correlation in the model errors. Copyright © 2015 Elsevier B.V. All rights reserved.
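
    The three meta-model forms compared above can be written as nested least-squares regressions; the synthetic data and fitted coefficients below are stand-ins, since the paper derives its regressions from NZ-DNDC Monte Carlo output:

    ```python
    # Nested meta-models of the emission factor EF:
    #   Model 1: EF ~ SOC30;  Model 2: + clay;  Model 3: + SOC30 x clay.
    import numpy as np

    rng = np.random.default_rng(2)
    soc30 = rng.uniform(20, 120, 200)          # toy SOC in top 30 cm
    clay = rng.uniform(5, 45, 200)             # toy clay content, %
    ef = 0.3 + 0.004 * soc30 + 0.003 * clay + rng.normal(0, 0.05, 200)

    def fit(X, y):
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef

    ones = np.ones_like(soc30)
    m1 = fit(np.column_stack([ones, soc30]), ef)                      # Model 1
    m2 = fit(np.column_stack([ones, soc30, clay]), ef)                # Model 2
    m3 = fit(np.column_stack([ones, soc30, clay, soc30 * clay]), ef)  # Model 3
    print("Model 1:", m1, "\nModel 2:", m2, "\nModel 3:", m3)
    ```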

  3. Test of a simplified modeling approach for nitrogen transfer in agricultural subsurface-drained catchments

    NASA Astrophysics Data System (ADS)

    Henine, Hocine; Julien, Tournebize; Jaan, Pärn; Ülo, Mander

    2017-04-01

    In agricultural areas, nitrogen (N) pollution load to surface waters depends on land use, agricultural practices, harvested N output, as well as the hydrology and climate of the catchment. Most N transfer models require large, complex data sets, which are generally difficult to collect at larger scales (>km2). The main objective of this study is to carry out hydrological and geochemical modelling using a simplified data set (land use/crop, fertilizer input, N losses from plots). The modelling approach was tested in the subsurface-drained Orgeval catchment (Paris Basin, France) under the following assumptions: subsurface tile drains are considered a giant lysimeter system, and N concentration at the drain outlets is representative of agricultural practices upstream. Analysis of the observed N load (90% of total N) shows that 62% of the export occurs during winter. We considered the prewinter nitrate (NO3) pool (PWNP) in soils at the beginning of the hydrological drainage season as a driving factor for N losses. PWNP results from the part of NO3 not used by crops or from the mineralization of organic matter during the preceding summer and autumn. Under these assumptions, we used PWNP as simplified input data for the modelling of N transport. Thus, NO3 losses are mainly influenced by the denitrification capacity of soils and stream water. The well-known HYPE model was used to model water and N losses. The hydrological simulation was calibrated against observations at different sub-catchments. We performed a hydrograph separation validated on thermal and isotopic tracer studies and on general knowledge of the behaviour of the Orgeval catchment. Our results show a good correlation between the model and the observations (a Nash-Sutcliffe coefficient of 0.75 for water discharge and 0.7 for N flux). Likewise, comparison of the calibrated PWNP values with the results of a field survey (annual PWNP campaign) showed a significant positive correlation. One can conclude that the simplified modelling approach using PWNP as a driving factor for the evaluation of N losses from drained agricultural catchments gave satisfactory results, and we propose this approach for wider use.

  4. Approximations of Two-Attribute Utility Functions

    DTIC Science & Technology

    1976-09-01

    preferred to") be a bina-zy relation on the set • of simple probability measures or ’gambles’ defined on a set T of consequences. Throughout this study it...simplifying independence assumptions. Although there are several approaches to this problem, the21 present study will focus on approximations of u... study will elicit additional interest in the topic. 2. REMARKS ON APPROXIMATION THEORY This section outlines a few basic ideas of approximation theory

  5. Break-up of Gondwana and opening of the South Atlantic: Review of existing plate tectonic models

    USGS Publications Warehouse

    Ghidella, M.E.; Lawver, L.A.; Gahagan, L.M.

    2007-01-01

    ... each model. We also plot reconstructions at four selected epochs for all models using the same projection and scale to facilitate comparison. The diverse simplifying assumptions that need to be made in every case regarding plate fragmentation to account for the numerous syn-rift basins and periods of stretching are strong indicators that rigid plate tectonics is too simple a model for the present problem.

  6. Prediction of the turbulent wake with second-order closure

    NASA Technical Reports Server (NTRS)

    Taulbee, D. B.; Lumley, J. L.

    1981-01-01

    A turbulence field was envisioned whose energy-containing scales would be Gaussian in the absence of inhomogeneity, gravity, etc. An equation was constructed for a function equivalent to the probability density, the second moment of which corresponded to the accepted modeled form of the Reynolds stress equation. The third moment equations obtained from this were simplified by the assumption of weak inhomogeneity. Calculations are presented with this model as well as interpretations of the results.

  7. Experimental validation of finite element modelling of a modular metal-on-polyethylene total hip replacement.

    PubMed

    Hua, Xijin; Wang, Ling; Al-Hajjar, Mazen; Jin, Zhongmin; Wilcox, Ruth K; Fisher, John

    2014-07-01

    Finite element models are becoming increasingly useful tools to conduct parametric analysis, design optimisation and pre-clinical testing for hip joint replacements. However, the verification of the finite element model is critically important. The purposes of this study were to develop a three-dimensional anatomic finite element model for a modular metal-on-polyethylene total hip replacement for predicting its contact mechanics and to conduct experimental validation for a simple finite element model which was simplified from the anatomic finite element model. An anatomic modular metal-on-polyethylene total hip replacement model (anatomic model) was first developed and then simplified with reasonable accuracy to a simple modular total hip replacement model (simplified model) for validation. The contact areas on the articulating surface of three polyethylene liners of modular metal-on-polyethylene total hip replacement bearings with different clearances were measured experimentally in the Leeds ProSim hip joint simulator under a series of loading conditions and different cup inclination angles. The contact areas predicted from the simplified model were then compared with that measured experimentally under the same conditions. The results showed that the simplification made for the anatomic model did not change the predictions of contact mechanics of the modular metal-on-polyethylene total hip replacement substantially (less than 12% for contact stresses and contact areas). Good agreements of contact areas between the finite element predictions from the simplified model and experimental measurements were obtained, with a maximum difference of 14% across all conditions considered. This indicated that the simplification and assumptions made in the anatomic model were reasonable and the finite element predictions from the simplified model were valid. © IMechE 2014.

  8. Comparing NEO Search Telescopes

    NASA Astrophysics Data System (ADS)

    Myhrvold, Nathan

    2016-04-01

    Multiple terrestrial and space-based telescopes have been proposed for detecting and tracking near-Earth objects (NEOs). Detailed simulations of the search performance of these systems have used complex computer codes that are not widely available, which hinders accurate cross-comparison of the proposals and obscures whether they have consistent assumptions. Moreover, some proposed instruments would survey infrared (IR) bands, whereas others would operate in the visible band, and differences among asteroid thermal and visible-light models used in the simulations further complicate like-to-like comparisons. I use simple physical principles to estimate basic performance metrics for the ground-based Large Synoptic Survey Telescope and three space-based instruments—Sentinel, NEOCam, and a Cubesat constellation. The performance is measured against two different NEO distributions, the Bottke et al. distribution of general NEOs, and the Veres et al. distribution of Earth-impacting NEOs. The results of the comparison show simplified relative performance metrics, including the expected number of NEOs visible in the search volumes and the initial detection rates expected for each system. Although these simplified comparisons do not capture all of the details, they give considerable insight into the physical factors limiting performance. Multiple asteroid thermal models are considered, including FRM, NEATM, and a new generalized form of FRM. I describe issues with how IR albedo and emissivity have been estimated in previous studies, which may render them inaccurate. A thermal model for tumbling asteroids is also developed and suggests that tumbling asteroids may be surprisingly difficult for IR telescopes to observe.
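
    Of the thermal models mentioned, NEATM has the simplest closed form for the subsolar temperature: instantaneous equilibrium of absorbed sunlight with thermal emission, scaled by a beaming parameter. The formula is standard; the parameter values below are illustrative:

    ```python
    # NEATM subsolar temperature: T_ss = [(1 - A) * S / (eta * eps * sigma)]^(1/4),
    # with S the solar flux at heliocentric distance r. Parameters are toy values.
    SIGMA = 5.670374419e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
    S0 = 1361.0                 # solar constant at 1 au, W m^-2

    def neatm_subsolar_temp(r_au, bond_albedo=0.1, eta=0.9, emissivity=0.9):
        flux = S0 / r_au ** 2
        return ((1.0 - bond_albedo) * flux / (eta * emissivity * SIGMA)) ** 0.25

    for r in (1.0, 1.5, 2.5):
        print(f"r = {r} au -> T_ss ~ {neatm_subsolar_temp(r):.0f} K")
    ```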

  9. An acceptable role for computers in the aircraft design process

    NASA Technical Reports Server (NTRS)

    Gregory, T. J.; Roberts, L.

    1980-01-01

    Some of the reasons why the computerization trend is not wholly accepted are explored for two typical cases: computer use in the technical specialties and computer use in aircraft synthesis. The factors that limit acceptance are traced, in part, to the large resources needed to understand the details of computer programs, the inability to include measured data as input to many of the theoretical programs, and the presentation of final results without supporting intermediate answers. Other factors are due solely to technical issues such as limited detail in aircraft synthesis and major simplifying assumptions in the technical specialties. These factors and others can be influenced by the technical specialist and aircraft designer. Some of these factors may become less significant as the computerization process evolves, but some issues, such as understanding large integrated systems, may remain issues in the future. Suggestions for improved acceptance include publishing computer programs so that they may be reviewed, edited, and read. Other mechanisms include extensive modularization of programs and ways to include measured information as part of the input to theoretical approaches.

  10. Multi-Objective Hybrid Optimal Control for Multiple-Flyby Interplanetary Mission Design using Chemical Propulsion

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Vavrina, Matthew A.

    2015-01-01

    Preliminary design of high-thrust interplanetary missions is a highly complex process. The mission designer must choose discrete parameters such as the number of flybys and the bodies at which those flybys are performed. For some missions, such as surveys of small bodies, the mission designer also contributes to target selection. In addition, real-valued decision variables, such as launch epoch, flight times, maneuver and flyby epochs, and flyby altitudes must be chosen. There are often many thousands of possible trajectories to be evaluated. The customer who commissions a trajectory design is not usually interested in a point solution, but rather the exploration of the trade space of trajectories between several different objective functions. This can be a very expensive process in terms of the number of human analyst hours required. An automated approach is therefore very desirable. This work presents such an approach by posing the impulsive mission design problem as a multi-objective hybrid optimal control problem. The method is demonstrated on several real-world problems. Two assumptions are frequently made to simplify the modeling of an interplanetary high-thrust trajectory during the preliminary design phase. The first assumption is that because the available thrust is high, any maneuvers performed by the spacecraft can be modeled as discrete changes in velocity. This assumption removes the need to integrate the equations of motion governing the motion of a spacecraft under thrust and allows the change in velocity to be modeled as an impulse and the expenditure of propellant to be modeled using the time-independent solution to Tsiolkovsky's rocket equation [1]. The second assumption is that the spacecraft moves primarily under the influence of the central body, i.e. the sun, and all other perturbing forces may be neglected in preliminary design. The path of the spacecraft may then be modeled as a series of conic sections. When a spacecraft performs a close approach to a planet, the central body switches from the sun to that planet and the trajectory is modeled as a hyperbola with respect to the planet. This is known as the method of patched conics. The impulsive and patched-conic assumptions significantly simplify the preliminary design problem.
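
    The impulsive-burn simplification lends itself to a compact calculation: the propellant expended by a maneuver follows directly from the time-independent Tsiolkovsky rocket equation. A minimal sketch in Python, with illustrative spacecraft and engine values that are not taken from the paper:

    ```python
    import math

    def propellant_mass(m0, delta_v, isp, g0=9.80665):
        """Propellant consumed by an impulsive burn, from the Tsiolkovsky
        rocket equation: delta_v = isp * g0 * ln(m0 / mf).
        m0: mass before the burn (kg); delta_v: burn magnitude (m/s);
        isp: specific impulse (s). Returns the propellant mass (kg)."""
        mf = m0 * math.exp(-delta_v / (isp * g0))
        return m0 - mf

    # Example: a 2000 kg spacecraft performing a 1.2 km/s maneuver with a
    # 320 s engine (hypothetical numbers) expends roughly 636 kg of propellant.
    print(propellant_mass(2000.0, 1200.0, 320.0))
    ```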

  11. Characterizing dark matter at the LHC in Drell-Yan events

    NASA Astrophysics Data System (ADS)

    Capdevilla, Rodolfo M.; Delgado, Antonio; Martin, Adam; Raj, Nirmal

    2018-02-01

    Spectral features in LHC dileptonic events may signal radiative corrections coming from new degrees of freedom, notably dark matter and mediators. Using simplified models, and under a set of simplifying assumptions, we show how these features can reveal the fundamental properties of the dark sector, such as self-conjugation, spin and mass of dark matter, and the quantum numbers of the mediator. Distributions of both the invariant mass mℓℓ and the Collins-Soper scattering angle cos θCS are studied to pinpoint these properties. We derive constraints on the models from LHC measurements of mℓℓ and cos θCS, which are competitive with direct detection and jets+MET searches. We find that in certain scenarios the cos θCS spectrum provides the strongest bounds, underlining the importance of scattering angle measurements for nonresonant new physics.

  12. A New, More Physically Based Algorithm, for Retrieving Aerosol Properties over Land from MODIS

    NASA Technical Reports Server (NTRS)

    Levy, Robert C.; Kaufman, Yoram J.; Remer, Lorraine A.; Mattoo, Shana

    2004-01-01

    The MODerate resolution Imaging Spectroradiometer (MODIS) has been successfully retrieving aerosol properties, beginning in early 2000 from Terra and from mid 2002 from Aqua. Over land, the retrieval algorithm makes use of three MODIS channels, in the blue, red and infrared wavelengths. As part of the validation exercises, retrieved spectral aerosol optical thickness (AOT) has been compared via scatterplots against spectral AOT measured by the global Aerosol Robotic NETwork (AERONET). On one hand, global and long-term validation looks promising, with two-thirds (average plus and minus one standard deviation) of all points falling between published expected error bars. On the other hand, regression of these points shows a positive y-offset and a slope less than 1.0. For individual regions, such as along the U.S. East Coast, the offset and slope are even worse. Here, we introduce an overhaul of the algorithm for retrieving aerosol properties over land. Some well-known weaknesses in the current aerosol retrieval from MODIS include: a) rigid assumptions about the underlying surface reflectance, b) limited aerosol models to choose from, c) simplified (scalar) radiative transfer (RT) calculations used to simulate satellite observations, and d) the assumption that aerosol is transparent in the infrared channel. The new algorithm attempts to address all four problems: a) The new algorithm will include surface type information, instead of fixed ratios of the reflectance in the visible channels to the mid-IR reflectance. b) It will include updated aerosol optical properties to reflect the growing record of aerosol retrievals from eight-plus years of AERONET operation. c) The effects of polarization will be included by using vector RT calculations. d) Most importantly, the new algorithm does not assume that aerosol is transparent in the infrared channel. It will be an inversion of reflectance observed in the three channels (blue, red, and infrared), rather than iterative single-channel retrievals. Thus, this new formulation of the MODIS aerosol retrieval over land includes more physically based surface, aerosol and radiative transfer treatments with fewer potentially erroneous assumptions.

  13. Non-driving intersegmental knee moments in cycling computed using a model that includes three-dimensional kinematics of the shank/foot and the effect of simplifying assumptions.

    PubMed

    Gregersen, Colin S; Hull, M L

    2003-06-01

    Assessing the importance of non-driving intersegmental knee moments (i.e. varus/valgus and internal/external axial moments) on over-use knee injuries in cycling requires the use of a three-dimensional (3-D) model to compute these loads. The objectives of this study were: (1) to develop a complete, 3-D model of the lower limb to calculate the 3-D knee loads during pedaling for a sample of the competitive cycling population, and (2) to examine the effects of simplifying assumptions on the calculations of the non-driving knee moments. The non-driving knee moments were computed using a complete 3-D model that allowed three rotational degrees of freedom at the knee joint, included the 3-D inertial loads of the shank/foot, and computed knee loads in a shank-fixed coordinate system. All input data, which included the 3-D segment kinematics and the six pedal load components, were collected from the right limb of 15 competitive cyclists while pedaling at 225 W and 90 rpm. On average, the peak varus and internal axial moments of 7.8 and 1.5 N m respectively occurred during the power stroke whereas the peak valgus and external axial moments of 8.1 and 2.5 N m respectively occurred during the recovery stroke. However, the non-driving knee moments were highly variable between subjects; the coefficients of variability in the peak values ranged from 38.7% to 72.6%. When it was assumed that the inertial loads of the shank/foot for motion out of the sagittal plane were zero, the root-mean-squared difference (RMSD) in the non-driving knee moments relative to those for the complete model was 12% of the peak varus/valgus moment and 25% of the peak axial moment. When it was also assumed that the knee joint was revolute with the flexion/extension axis perpendicular to the sagittal plane, the RMSD increased to 24% of the peak varus/valgus moment and 204% of the peak axial moment. Thus, the 3-D orientation of the shank segment has a major effect on the computation of the non-driving knee moments, while the inertial contributions to these loads for motions out of the sagittal plane are less important.
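
    For concreteness, the reported percentages can be read as the root-mean-squared difference between the complete-model and simplified-model moment time series over the crank cycle, normalized by the relevant peak moment. A minimal sketch of that metric (assuming uniformly sampled series; not code from the study):

    ```python
    import numpy as np

    def rmsd_percent_of_peak(complete, simplified, peak_moment):
        """RMSD between two knee-moment time series (N m), expressed as a
        percentage of a reference peak moment (cf. the 12%/24%/204% figures)."""
        diff = np.asarray(complete) - np.asarray(simplified)
        return 100.0 * np.sqrt(np.mean(diff ** 2)) / peak_moment
    ```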

  14. Magnetohydrodynamic and gasdynamic theories for planetary bow waves

    NASA Technical Reports Server (NTRS)

    Spreiter, John R.; Stahara, Stephen S.

    1985-01-01

    A bow wave was previously observed in the solar wind upstream of each of the first six planets. The observed properties of these bow waves and the associated plasma flows are outlined, and those features identified that can be described by a continuum magnetohydrodynamic flow theory. An account of the fundamental concepts and current status of the magnetohydrodynamic and gas dynamic theories for solar wind flow past planetary bodies is provided. This includes a critical examination of: (1) the fundamental assumptions of the theories; (2) the various simplifying approximations introduced to obtain tractable mathematical problems; (3) the limitations they impose on the results; and (4) the relationship between the results of the simpler gas dynamic-frozen field theory and the more accurate but less completely worked out magnetohydrodynamic theory. Representative results of the various theories are presented and compared.

  15. Magnetohydrodynamic and gasdynamic theories for planetary bow waves

    NASA Technical Reports Server (NTRS)

    Spreiter, J. R.; Stahara, S. S.

    1983-01-01

    A bow wave was previously observed in the solar wind upstream of each of the first six planets. The observed properties of these bow waves and the associated plasma flows are outlined, and those features identified that can be described by a continuum magnetohydrodynamic flow theory. An account of the fundamental concepts and current status of the magnetohydrodynamic and gas dynamic theories for solar wind flow past planetary bodies is provided. This includes a critical examination of: (1) the fundamental assumptions of the theories; (2) the various simplifying approximations introduced to obtain tractable mathematical problems; (3) the limitations they impose on the results; and (4) the relationship between the results of the simpler gas dynamic-frozen field theory and the more accurate but less completely worked out magnetohydrodynamic theory. Representative results of the various theories are presented and compared.

  16. Electromagnetic reflection from multi-layered snow models

    NASA Technical Reports Server (NTRS)

    Linlor, W. I.; Jiracek, G. R.

    1975-01-01

    The remote sensing of snow-pack characteristics with surface installations or an airborne system could have important applications in water-resource management and flood prediction. To derive some insight into such applications, the electromagnetic response of multilayered snow models is analyzed in this paper. Normally incident plane waves at frequencies ranging from 1 MHz to 10 GHz are assumed, and amplitude reflection coefficients are calculated for models having various snow-layer combinations, including ice layers. Layers are defined by thickness, permittivity, and conductivity; the electrical parameters are constant or prescribed functions of frequency. To illustrate the effect of various layering combinations, results are given in the form of curves of amplitude reflection coefficients versus frequency for a variety of models. Under simplifying assumptions, the snow thickness and effective dielectric constant can be estimated from the variations of reflection coefficient as a function of frequency.
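
    Under the stated assumptions (normal incidence, each layer specified by thickness, permittivity, and conductivity), the amplitude reflection coefficient of such a stack is conventionally computed with the characteristic-matrix (Abeles) method. A sketch of that computation, with hypothetical snow, ice, and soil parameters standing in for the paper's models:

    ```python
    import numpy as np

    EPS0 = 8.854e-12   # vacuum permittivity, F/m
    C0 = 2.998e8       # speed of light, m/s

    def amplitude_reflection(freq, layers, n_sub):
        """Normal-incidence amplitude reflection coefficient of a layered
        half-space (air above, substrate below) via the characteristic-matrix
        method. Each layer is (thickness_m, eps_r, sigma_S_per_m)."""
        w = 2.0 * np.pi * freq
        k0 = w / C0
        M = np.eye(2, dtype=complex)
        for d, eps_r, sigma in layers:
            n = np.sqrt(eps_r - 1j * sigma / (w * EPS0))  # complex index
            delta = k0 * n * d                            # phase thickness
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        B = M[0, 0] + M[0, 1] * n_sub
        C = M[1, 0] + M[1, 1] * n_sub
        return (B - C) / (B + C)

    # Example: 0.5 m of dry snow over a thin ice layer above soil
    # (hypothetical parameters); sweeping 1 MHz to 10 GHz shows the
    # oscillations from which thickness can be inferred.
    freqs = np.logspace(6, 10, 400)
    R = [abs(amplitude_reflection(f, [(0.5, 1.8, 1e-5), (0.02, 3.2, 1e-4)],
                                  n_sub=np.sqrt(6.0))) for f in freqs]
    ```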

  17. Transport Phenomena During Equiaxed Solidification of Alloys

    NASA Technical Reports Server (NTRS)

    Beckermann, C.; deGroh, H. C., III

    1997-01-01

    Recent progress in modeling of transport phenomena during dendritic alloy solidification is reviewed. Starting from the basic theorems of volume averaging, a general multiphase modeling framework is outlined. This framework allows for the incorporation of a variety of microscale phenomena in the macroscopic transport equations. For the case of diffusion dominated solidification, a simplified set of model equations is examined in detail and validated through comparisons with numerous experimental data for both columnar and equiaxed dendritic growth. This provides a critical assessment of the various model assumptions. Models that include melt flow and solid phase transport are also discussed, although their validation is still at an early stage. Several numerical results are presented that illustrate some of the profound effects of convective transport on the final compositional and structural characteristics of a solidified part. Important issues that deserve continuing attention are identified.

  18. Understanding young stars - A history

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stahler, S.W.

    1988-12-01

    The history of pre-main-sequence theory is briefly reviewed. The paper of Henyey et al. (1955) is seen as an important transitional work, one which abandoned previous simplifying assumptions yet failed to incorporate newer insights into the surface structure of late-type stars. The subsequent work of Hayashi and his contemporaries is outlined, with an emphasis on the underlying physical principles. Finally, the recent impact of protostar theory is discussed, and speculations are offered on future developments. 56 references.

  19. Investigating outliers to improve conceptual models of bedrock aquifers

    NASA Astrophysics Data System (ADS)

    Worthington, Stephen R. H.

    2018-06-01

    Numerical models play a prominent role in hydrogeology, with simplifying assumptions being inevitable when implementing these models. However, there is a risk of oversimplification, where important processes become neglected. Such processes may be associated with outliers, and consideration of outliers can lead to an improved scientific understanding of bedrock aquifers. Using rigorous logic to investigate outliers can help to explain fundamental scientific questions such as why there are large variations in permeability between different bedrock lithologies.

  20. The Boltzmann equation in the difference formulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szoke, Abraham; Brooks III, Eugene D.

    2015-05-06

    First, we recall the assumptions that are needed for the validity of the Boltzmann equation and for the validity of the compressible Euler equations. We then present the difference formulation of these equations and make a connection with the time-honored Chapman-Enskog expansion. We discuss the hydrodynamic limit and calculate the thermal conductivity of a monatomic gas, using a simplified approximation for the collision term. Our formulation is more consistent and simpler than the traditional derivation.
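
    For reference, the first Chapman-Enskog approximation for a monatomic gas ties the thermal conductivity to the viscosity through an Eucken factor of 5/2; a simplified collision term can be checked against this structure (the paper's exact coefficient may differ):

    ```latex
    \[
      \kappa \;=\; \tfrac{5}{2}\,\mu\,c_v \;=\; \tfrac{15}{4}\,\frac{k_B}{m}\,\mu,
      \qquad c_v = \tfrac{3}{2}\,\frac{k_B}{m} \quad \text{(monatomic ideal gas)}.
    \]
    ```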

  1. Comparison of an Agent-based Model of Disease Propagation with the Generalised SIR Epidemic Model

    DTIC Science & Technology

    2009-08-01

    has become a practical method for conducting epidemiological modelling. In the agent-based approach the whole township can be modelled as a system of... SIR system was initially developed based on a very simplified model of social interaction. For instance an assumption of uniform population mixing was... simulating the progress of a disease within a host and of transmission between hosts is based upon Transportation Analysis and Simulation System
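
    The generalised SIR model referred to above reduces, under the uniform-mixing assumption the agent-based approach relaxes, to three coupled ODEs. A minimal sketch (rate constants are illustrative, not the report's values):

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def sir(t, y, beta=0.3, gamma=0.1):
        """Classic SIR equations with homogeneous population mixing.
        beta: transmission rate; gamma: recovery rate (per day)."""
        s, i, r = y
        return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

    # Start with 0.1% of the population infected and integrate 160 days.
    sol = solve_ivp(sir, (0.0, 160.0), [0.999, 0.001, 0.0], max_step=1.0)
    peak_infected_fraction = sol.y[1].max()
    ```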

  2. Gas Diffusion in Fluids Containing Bubbles

    NASA Technical Reports Server (NTRS)

    Zak, M.; Weinberg, M. C.

    1982-01-01

    Mathematical model describes movement of gases in fluid containing many bubbles. Model makes it possible to predict growth and shrinkage of bubbles as a function of time. New model overcomes complexities involved in analysis of varying conditions by making two simplifying assumptions. It treats bubbles as point sources, and it employs approximate expression for gas concentration gradient at liquid/bubble interface. In particular, it is expected to help in developing processes for production of high-quality optical glasses in space.

  3. Edemagenic gain and interstitial fluid volume regulation.

    PubMed

    Dongaonkar, R M; Quick, C M; Stewart, R H; Drake, R E; Cox, C S; Laine, G A

    2008-02-01

    Under physiological conditions, interstitial fluid volume is tightly regulated by balancing microvascular filtration and lymphatic return to the central venous circulation. Even though microvascular filtration and lymphatic return are governed by conservation of mass, their interaction can result in exceedingly complex behavior. Without making simplifying assumptions, investigators must solve the fluid balance equations numerically, which limits the generality of the results. We thus made critical simplifying assumptions to develop a simple solution to the standard fluid balance equations that is expressed as an algebraic formula. Using a classical approach to describe systems with negative feedback, we formulated our solution as a "gain" relating the change in interstitial fluid volume to a change in effective microvascular driving pressure. The resulting "edemagenic gain" is a function of microvascular filtration coefficient (K(f)), effective lymphatic resistance (R(L)), and interstitial compliance (C). This formulation suggests two types of gain: "multivariate" dependent on C, R(L), and K(f), and "compliance-dominated" approximately equal to C. The latter forms the basis of a novel method to estimate C without measuring interstitial fluid pressure. Data from ovine experiments illustrate how edemagenic gain is altered with pulmonary edema induced by venous hypertension, histamine, and endotoxin. Reformulation of the classical equations governing fluid balance in terms of edemagenic gain thus yields new insight into the factors affecting an organ's susceptibility to edema.
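
    One algebraic form consistent with this description (a sketch, not necessarily the paper's exact expression): balancing steady-state filtration J_v = K_f(P_e - P_i) against lymphatic return Q_L = P_i/R_L, and assuming a linear compliance V = C·P_i, gives

    ```latex
    \[
      G \;=\; \frac{\Delta V}{\Delta P_e} \;=\; \frac{C\,K_f R_L}{1 + K_f R_L},
    \]
    ```

    which depends on all of C, R_L, and K_f (the "multivariate" case) and tends to G ≈ C when K_f R_L >> 1 (the "compliance-dominated" case).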

  4. Determination of mechanical loading components of the equine metacarpus from measurements of strain during walking.

    PubMed

    Merritt, J S; Burvill, C R; Pandy, M G; Davies, H M S

    2006-08-01

    The mechanical environment of the distal limb is thought to be involved in the pathogenesis of many injuries, but has not yet been thoroughly described. The objectives were to determine the forces and moments experienced by the metacarpus in vivo during walking and to assess the effect of some simplifying assumptions used in the analysis. Strains from 8 gauges adhered to the left metacarpus of one horse were recorded in vivo during walking. Two different models - one based upon the mechanical theory of beams and shafts, and the other based upon a finite element analysis (FEA) - were used to determine the external loads applied at the ends of the bone. Five orthogonal force and moment components were resolved by the analysis. In addition, 2 orthogonal bending moments were calculated near mid-shaft. Axial force was found to be the major loading component and displayed a bi-modal pattern during the stance phase of the stride. The shaft model of the bone showed good agreement with the FEA model, despite making many simplifying assumptions. A 3-dimensional loading scenario was observed in the metacarpus, with axial force being the major component. These results provide an opportunity to validate mathematical (computer) models of the limb. The data may also assist in the formulation of hypotheses regarding the pathogenesis of injuries to the distal limb.

  5. Launch Collision Probability

    NASA Technical Reports Server (NTRS)

    Bollenbacher, Gary; Guptill, James D.

    1999-01-01

    This report analyzes the probability of a launch vehicle colliding with one of the nearly 10,000 tracked objects orbiting the Earth, given that an object on a near-collision course with the launch vehicle has been identified. Knowledge of the probability of collision throughout the launch window can be used to avoid launching at times when the probability of collision is unacceptably high. The analysis in this report assumes that the positions of the orbiting objects and the launch vehicle can be predicted as a function of time and therefore that any tracked object which comes close to the launch vehicle can be identified. The analysis further assumes that the position uncertainty of the launch vehicle and the approaching space object can be described with position covariance matrices. With these and some additional simplifying assumptions, a closed-form solution is developed using two approaches. The solution shows that the probability of collision is a function of position uncertainties, the size of the two potentially colliding objects, and the nominal separation distance at the point of closest approach. The impact of the simplifying assumptions on the accuracy of the final result is assessed and the application of the results to the Cassini mission, launched in October 1997, is described. Other factors that affect the probability of collision are also discussed. Finally, the report offers alternative approaches that can be used to evaluate the probability of collision.
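
    The quantity being approximated has a simple numerical counterpart: integrate the Gaussian density of the relative position over the combined hard-body circle in the encounter plane. A sketch of that reference calculation (hypothetical numbers; this is not the report's closed-form solution):

    ```python
    import numpy as np

    def collision_probability(miss, cov, radius, n=401):
        """Probability that the relative position (Gaussian with mean `miss`
        and covariance `cov`, both in the 2-D encounter plane, metres) falls
        inside the combined hard-body circle of radius `radius`."""
        xs = np.linspace(-radius, radius, n)
        X, Y = np.meshgrid(xs, xs)
        inside = X**2 + Y**2 <= radius**2
        d = np.stack([X - miss[0], Y - miss[1]], axis=-1)
        icov = np.linalg.inv(cov)
        md2 = np.einsum('...i,ij,...j->...', d, icov, d)   # Mahalanobis^2
        pdf = np.exp(-0.5 * md2) / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
        cell = (xs[1] - xs[0]) ** 2
        return float((pdf * inside).sum() * cell)

    # Example: 10 m combined radius, 200 m nominal miss distance, anisotropic
    # uncertainty (120 m and 40 m one-sigma; all values hypothetical).
    pc = collision_probability(np.array([200.0, 0.0]),
                               np.diag([120.0**2, 40.0**2]), 10.0)
    ```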

  6. Imaging System Model Crammed Into A 32K Microcomputer

    NASA Astrophysics Data System (ADS)

    Tyson, Robert K.

    1986-12-01

    An imaging system model, based upon linear systems theory, has been developed for a microcomputer with less than 32K of free random access memory (RAM). The model includes diffraction effects of the optics, aberrations in the optics, and atmospheric propagation transfer functions. Variables include pupil geometry, magnitude and character of the aberrations, and strength of atmospheric turbulence ("seeing"). Both coherent and incoherent image formation can be evaluated. The techniques employed for crowding the model into a very small computer will be discussed in detail. Simplifying assumptions for the diffraction and aberration phenomena will be shown along with practical considerations in modeling the optical system. Particular emphasis is placed on avoiding inaccuracies in modeling the pupil and the associated optical transfer function knowing limits on spatial frequency content and resolution. Memory and runtime constraints are analyzed stressing the efficient use of assembly language Fourier transform routines, disk input/output, and graphic displays. The compromises between computer time, limited RAM, and scientific accuracy will be given with techniques for balancing these parameters for individual needs.
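
    The linear-systems core of such a model is compact: the incoherent OTF is the normalized autocorrelation of the (possibly aberrated) pupil, obtainable with two FFTs. A sketch in Python (the pupil radius and defocus level are illustrative, and the atmospheric transfer function is omitted):

    ```python
    import numpy as np

    N = 256
    x = np.linspace(-1.0, 1.0, N)
    X, Y = np.meshgrid(x, x)
    R2 = X**2 + Y**2

    aperture = (R2 <= 0.25).astype(float)    # circular pupil, radius 0.5
    phase = 2 * np.pi * 0.3 * (R2 / 0.25)    # defocus-like quadratic aberration
    pupil = aperture * np.exp(1j * phase)

    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2   # incoherent PSF
    otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
    otf /= otf[N // 2, N // 2]               # normalize to unity at DC
    mtf = np.abs(otf)                        # modulation transfer function
    ```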

  7. Model Checking a Byzantine-Fault-Tolerant Self-Stabilizing Protocol for Distributed Clock Synchronization Systems

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2007-01-01

    This report presents the mechanical verification of a simplified model of a rapid Byzantine-fault-tolerant self-stabilizing protocol for distributed clock synchronization systems. This protocol does not rely on any assumptions about the initial state of the system. This protocol tolerates bursts of transient failures, and deterministically converges within a time bound that is a linear function of the self-stabilization period. A simplified model of the protocol is verified using the Symbolic Model Verifier (SMV) [SMV]. The system under study consists of 4 nodes, where at most one of the nodes is assumed to be Byzantine faulty. The model checking effort is focused on verifying correctness of the simplified model of the protocol in the presence of a permanent Byzantine fault as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period. Although model checking results of the simplified model of the protocol confirm the theoretical predictions, these results do not necessarily confirm that the protocol solves the general case of this problem. Modeling challenges of the protocol and the system are addressed. A number of abstractions are utilized in order to reduce the state space. Also, additional innovative state space reduction techniques are introduced that can be used in future verification efforts applied to this and other protocols.

  8. A Backscattering Enhanced Microwave Canopy Scattering Model Based On MIMICS

    NASA Astrophysics Data System (ADS)

    Shen, X.; Hong, Y.; Qin, Q.; Chen, S.; Grout, T.

    2010-12-01

    For modeling microwave scattering of vegetated areas, several microwave canopy scattering models, based on the vectorized radiative transfer equation (VRT) and using different solving techniques, have been proposed in the past three decades. As an iterative solution of the VRT at low orders, the Michigan Microwave Canopy Scattering Model (MIMICS) gives an analytical expression for calculating scattering as long as the volume scattering is not too strong. The most important usage of such models is to predict scattering in the backscattering direction. Unfortunately, the simplifying assumption of MIMICS is that the scattering between the ground and trunk layers only includes the specular reflection. As a result, MIMICS includes a dominant coherent term which vanishes in the backscattering direction because this term contains a delta-function factor that is zero in this direction. This assumption needs reconsideration for accurately calculating the backscattering. In the framework of MIMICS, any incoherent terms that involve surface scattering factors must undergo surface scattering at least twice and volume scattering once. Therefore, these incoherent terms are usually very weak. On the other hand, due to the phenomenon of backscattering enhancement, the surface scattering in the backscattering direction is very strong compared to most other directions. Considering the facts discussed above, it is reasonable to add a surface backscattering term to the last equation of the boundary conditions of MIMICS. More terms appear in the final result, including a backscattering coherent term which enhances the backscattering. The modified model is compared with the original MIMICS (version 1.0) using JPL/AIRSAR data from the NASA Soil Moisture Experiment 2003 (SMEX03) and Washita92 campaigns. Significant improvement is observed.

  9. Post-reionization Kinetic Sunyaev-Zel'dovich Signal in the Illustris simulation

    NASA Astrophysics Data System (ADS)

    Park, Hyunbae; Alvarez, Marcelo A.; Bond, John Richard

    2017-06-01

    Using Illustris, a state-of-the-art cosmological simulation of gravity, hydrodynamics, and star formation, we revisit the calculation of the angular power spectrum of the kinetic Sunyaev-Zel'dovich effect from the post-reionization (z < 6) epoch by Shaw et al. (2012). We not only report the updated value given by the analytical model used in previous studies, but also examine the simplifying assumptions made in the model. The assumptions include using gas density for free electron density and neglecting the connected term arising from the fourth-order nature of the momentum power spectrum that sources the signal. With these assumptions, Illustris gives a slightly (~10%) larger signal than in their work. The signal is then reduced by ~20% when using the actual free electron density in the calculation instead of gas density. This is because a larger neutral fraction in dense regions results in a loss of total free electrons and a suppression of fluctuations in free electron density. We find that the connected term can make up to half of the momentum power spectrum at z < 2. Due to a strong suppression of the low-z signal by baryonic physics, the extra contribution from the connected term is limited to the ~10% level, although it may have been underestimated due to the finite box size of Illustris. With these corrections, our result is very close to the original result of Shaw et al. (2012), which is well described by a simple power law, D_ℓ = 1.38 (ℓ/3000)^0.21 μK^2, at 3000 < ℓ < 10000.

  10. The evolutionary interplay of intergroup conflict and altruism in humans: a review of parochial altruism theory and prospects for its extension.

    PubMed

    Rusch, Hannes

    2014-11-07

    Drawing on an idea proposed by Darwin, it has recently been hypothesized that violent intergroup conflict might have played a substantial role in the evolution of human cooperativeness and altruism. The central notion of this argument, dubbed 'parochial altruism', is that the two genetic or cultural traits, aggressiveness against the out-groups and cooperativeness towards the in-group, including self-sacrificial altruistic behaviour, might have coevolved in humans. This review assesses the explanatory power of current theories of 'parochial altruism'. After a brief synopsis of the existing literature, two pitfalls in the interpretation of the most widely used models are discussed: potential direct benefits and high relatedness between group members implicitly induced by assumptions about conflict structure and frequency. Then, a number of simplifying assumptions made in the construction of these models are pointed out which currently limit their explanatory power. Next, relevant empirical evidence from several disciplines which could guide future theoretical extensions is reviewed. Finally, selected alternative accounts of evolutionary links between intergroup conflict and intragroup cooperation are briefly discussed which could be integrated with parochial altruism in the future. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  11. Computer Program for the Design and Off-Design Performance of Turbojet and Turbofan Engine Cycles

    NASA Technical Reports Server (NTRS)

    Morris, S. J.

    1978-01-01

    This rapid computer program is designed to be run in a stand-alone mode or operated within a larger program. The computation is based on a simplified one-dimensional gas turbine cycle. Each component in the engine is modeled thermodynamically. The component efficiencies used in the thermodynamic modeling are scaled for the off-design conditions from input design-point values using empirical trends which are included in the computer code. The engine cycle program is capable of producing reasonable engine performance predictions with a minimum of computer execution time. The current execution time on the IBM 360/67 for one Mach number, one altitude, and one power setting is about 0.1 seconds. The principal assumption used in the calculation is that the compressor is operated along a line of maximum adiabatic efficiency on the compressor map. The fluid properties are computed for the combustion mixture, but dissociation is not included. The procedure included in the program is only for the combustion of JP-4, methane, or hydrogen.

  12. Quantum State Tomography via Reduced Density Matrices.

    PubMed

    Xin, Tao; Lu, Dawei; Klassen, Joel; Yu, Nengkun; Ji, Zhengfeng; Chen, Jianxin; Ma, Xian; Long, Guilu; Zeng, Bei; Laflamme, Raymond

    2017-01-13

    Quantum state tomography via local measurements is an efficient tool for characterizing quantum states. However, it requires that the original global state be uniquely determined (UD) by its local reduced density matrices (RDMs). In this work, we demonstrate for the first time a class of states that are UD by their RDMs under the assumption that the global state is pure, but fail to be UD in the absence of that assumption. This discovery allows us to classify quantum states according to their UD properties, with the requirement that each class be treated distinctly in the practice of simplifying quantum state tomography. Additionally, we experimentally test the feasibility and stability of performing quantum state tomography via the measurement of local RDMs for each class. These theoretical and experimental results demonstrate the advantages and possible pitfalls of quantum state tomography with local measurements.

  13. Impact of an equality constraint on the class-specific residual variances in regression mixtures: A Monte Carlo simulation study.

    PubMed

    Kim, Minjung; Lamont, Andrea E; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M Lee

    2016-06-01

    Regression mixture models are a novel approach to modeling the heterogeneous effects of predictors on an outcome. In the model-building process, residual variances are often disregarded and simplifying assumptions are made without thorough examination of the consequences. In this simulation study, we investigated the impact of an equality constraint on the residual variances across latent classes. We examined the consequences of constraining the residual variances on class enumeration (finding the true number of latent classes) and on the parameter estimates, under a number of different simulation conditions meant to reflect the types of heterogeneity likely to exist in applied analyses. The results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted the estimated class sizes and showed the potential to greatly affect the parameter estimates in each class. These results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions are made.

  14. Differential molar heat capacities to test ideal solubility estimations.

    PubMed

    Neau, S H; Bhandarkar, S V; Hellmuth, E W

    1997-05-01

    Calculation of the ideal solubility of a crystalline solute in a liquid solvent requires knowledge of the difference in the molar heat capacity at constant pressure of the solid and the supercooled liquid forms of the solute, delta Cp. Since this parameter is not usually known, two assumptions have been used to simplify the expression. The first is that delta Cp can be considered equal to zero; the alternate assumption is that the molar entropy of fusion, delta Sf, is an estimate of delta Cp. Reports claiming the superiority of one assumption over the other, on the basis of calculations done using experimentally determined parameters, have appeared in the literature. The validity of the assumptions in predicting the ideal solubility of five structurally unrelated compounds of pharmaceutical interest, with melting points in the range 420 to 470 K, was evaluated in this study. Solid and liquid heat capacities of each compound near its melting point were determined using differential scanning calorimetry. Linear equations describing the heat capacities were extrapolated to the melting point to generate the differential molar heat capacity. Linear data were obtained for both crystal and liquid heat capacities of sample and test compounds. For each sample, ideal solubility at 298 K was calculated and compared to the two estimates generated using literature equations based on the differential molar heat capacity assumptions. For the compounds studied, delta Cp was not negligible and was closer to delta Sf than to zero. However, neither of the two assumptions was valid for accurately estimating the ideal solubility as given by the full equation.
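
    The full expression underlying the comparison follows from the standard thermodynamic cycle through the supercooled liquid, and the two assumptions can be evaluated side by side. A sketch with hypothetical solute constants (not the study's data):

    ```python
    import numpy as np

    R = 8.314  # gas constant, J/(mol K)

    def ln_x_ideal(T, Tm, dHf, dCp):
        """Ideal mole-fraction solubility of a crystalline solute:
        ln x = -(dHf/R)(1/T - 1/Tm) + (dCp/R)[(Tm - T)/T - ln(Tm/T)].
        Setting dCp = 0 or dCp = dSf = dHf/Tm reproduces the two assumptions."""
        return (-dHf / R * (1.0 / T - 1.0 / Tm)
                + dCp / R * ((Tm - T) / T - np.log(Tm / T)))

    T, Tm, dHf = 298.15, 440.0, 28000.0   # K, K, J/mol (hypothetical solute)
    dSf = dHf / Tm                        # molar entropy of fusion
    for dCp in (0.0, dSf):                # the two simplifying assumptions
        print(dCp, np.exp(ln_x_ideal(T, Tm, dHf, dCp)))
    ```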

  15. The Embedding Problem for Markov Models of Nucleotide Substitution

    PubMed Central

    Verbyla, Klara L.; Yap, Von Bing; Pahwa, Anuj; Shao, Yunli; Huttley, Gavin A.

    2013-01-01

    Continuous-time Markov processes are often used to model the complex natural phenomenon of sequence evolution. To make the process of sequence evolution tractable, simplifying assumptions are often made about the sequence properties and the underlying process. The validity of one such assumption, time-homogeneity, has never been explored. Violations of this assumption can be found by identifying non-embeddability. A process is non-embeddable if it cannot be embedded in a continuous time-homogeneous Markov process. In this study, non-embeddability was demonstrated to exist when modelling sequence evolution with Markov models. Evidence of non-embeddability was found primarily at the third codon position, possibly resulting from changes in mutation rate over time. Outgroup edges and those with a deeper time depth were found to have an increased probability of the underlying process being non-embeddable. Overall, low levels of non-embeddability were detected when examining individual edges of triads across a diverse set of alignments. Subsequent phylogenetic reconstruction analyses demonstrated that non-embeddability could affect the correct prediction of phylogenies, but only at extremely low levels. Despite the existence of non-embeddability, there is minimal evidence of violations of the local time homogeneity assumption and consequently, the impact is likely to be minor. PMID:23935949
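
    A common computational probe of embeddability is to take the principal matrix logarithm of an estimated transition matrix and test whether it is a valid rate matrix. A sketch of that check (the principal branch suffices in many but not all cases, and this is not necessarily the authors' procedure):

    ```python
    import numpy as np
    from scipy.linalg import logm

    def embeddable(P, tol=1e-8):
        """Test whether P = expm(Q) for a valid rate matrix Q: real,
        non-negative off-diagonal entries, rows summing to zero."""
        Q = logm(P)
        if np.max(np.abs(np.imag(Q))) > tol:
            return False
        Q = np.real(Q)
        off_diag_ok = all(Q[i, j] >= -tol
                          for i in range(Q.shape[0])
                          for j in range(Q.shape[1]) if i != j)
        return off_diag_ok and np.allclose(Q.sum(axis=1), 0.0, atol=1e-6)

    # A Jukes-Cantor-like transition matrix is embeddable (illustrative).
    P = np.full((4, 4), 0.05) + 0.80 * np.eye(4)
    print(embeddable(P))  # True
    ```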

  16. Fluid-Structure Interaction Modeling of Intracranial Aneurysm Hemodynamics: Effects of Different Assumptions

    NASA Astrophysics Data System (ADS)

    Rajabzadeh Oghaz, Hamidreza; Damiano, Robert; Meng, Hui

    2015-11-01

    Intracranial aneurysms (IAs) are pathological outpouchings of cerebral vessels, the progression of which is mediated by complex interactions between the blood flow and vasculature. Image-based computational fluid dynamics (CFD) has been used for decades to investigate IA hemodynamics. However, the commonly adopted simplifying assumptions in CFD (e.g. rigid wall) compromise the simulation accuracy and mask the complex physics involved in IA progression and eventual rupture. Several groups have considered the wall compliance by using fluid-structure interaction (FSI) modeling. However, FSI simulation is highly sensitive to numerical assumptions (e.g. linear-elastic wall material, Newtonian fluid, initial vessel configuration, and constant pressure outlet), the effects of which are poorly understood. In this study, a comprehensive investigation of the sensitivity of FSI simulations in patient-specific IAs is conducted using a multi-stage approach with a varying level of complexity. We start with simulations incorporating several common simplifications: rigid wall, Newtonian fluid, and constant pressure at the outlets; we then stepwise remove these simplifications up to the most comprehensive FSI simulations. Hemodynamic parameters such as wall shear stress and oscillatory shear index are assessed and compared at each stage to better understand the sensitivity of FSI simulations of IAs to model assumptions. Supported by the National Institutes of Health (1R01 NS 091075-01).

  17. Tax Subsidies for Employer-Sponsored Health Insurance: Updated Microsimulation Estimates and Sensitivity to Alternative Incidence Assumptions

    PubMed Central

    Miller, G Edward; Selden, Thomas M

    2013-01-01

    Objective To estimate 2012 tax expenditures for employer-sponsored insurance (ESI) in the United States and to explore the sensitivity of estimates to assumptions regarding the incidence of employer premium contributions. Data Sources Nationally representative Medical Expenditure Panel Survey data from the 2005–2007 Household Component (MEPS-HC) and the 2009–2010 Insurance Component (MEPS IC). Study Design We use MEPS HC workers to construct synthetic workforces for MEPS IC establishments, applying the workers' marginal tax rates to the establishments' insurance premiums to compute the tax subsidy, in aggregate and by establishment characteristics. Simulation enables us to examine the sensitivity of ESI tax subsidy estimates to a range of scenarios for the within-firm incidence of employer premium contributions when workers have heterogeneous health risks and make heterogeneous plan choices. Principal Findings We simulate the total ESI tax subsidy for all active, civilian U.S. workers to be $257.4 billion in 2012. In the private sector, the subsidy disproportionately flows to workers in large establishments and establishments with predominantly high wage or full-time workforces. The estimates are remarkably robust to alternative incidence assumptions. Conclusions The aggregate value of the ESI tax subsidy and its distribution across firms can be reliably estimated using simplified incidence assumptions. PMID:23398400

  18. SURVIAC Bulletin: RPG Encounter Modeling, Vol 27, Issue 1, 2012

    DTIC Science & Technology

    2012-01-01

    return a probability of hit (PHIT) for the scenario. In the model, PHIT depends on the presented area of the targeted system and a set of errors infl... simplifying assumptions, is data-driven, and uses simple yet proven methodologies to determine PHIT. The inputs to THREAT describe the target, the RPG, and... [Figure: Point on 2-D Representation of a CH-47] The determination of PHIT by THREAT is performed using one of two possible methodologies. The first is a

  19. Analysis of cavitation bubble dynamics in a liquid

    NASA Technical Reports Server (NTRS)

    Fontenot, L. L.; Lee, Y. C.

    1971-01-01

    General differential equations governing the dynamics of cavitation bubbles in a liquid were derived. With the assumption of spherical symmetry, the governing equations were simplified. Closed-form solutions were obtained for simple cases, and numerical solutions were calculated for complicated ones. The growth and the collapse of the bubble were analyzed, oscillations of the bubbles were studied, and the stability of the cavitation bubbles was investigated. The results show that the cavitation bubbles are unstable, and the oscillation is not sinusoidal.
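
    With spherical symmetry and an incompressible liquid, the governing equations reduce to the classic Rayleigh-Plesset form, which can be integrated directly to exhibit the non-sinusoidal oscillations noted above. A sketch with water-like parameters (assumed for illustration, not taken from the paper):

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    rho, mu, st = 1000.0, 1.0e-3, 0.072        # density, viscosity, surface tension (SI)
    p_inf, p_v, kappa = 101325.0, 2339.0, 1.4  # far-field pressure, vapor pressure, polytropic index
    R0 = 50e-6                                 # equilibrium bubble radius, m
    pg0 = p_inf - p_v + 2 * st / R0            # gas pressure at equilibrium

    def rayleigh_plesset(t, y):
        R, Rdot = y
        pg = pg0 * (R0 / R) ** (3 * kappa)     # polytropic gas inside the bubble
        Rddot = ((pg + p_v - p_inf - 2 * st / R - 4 * mu * Rdot / R) / rho
                 - 1.5 * Rdot ** 2) / R
        return [Rdot, Rddot]

    # Perturb the radius by 20% and follow the nonlinear oscillation.
    sol = solve_ivp(rayleigh_plesset, (0.0, 2e-4), [1.2 * R0, 0.0], max_step=1e-7)
    ```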

  20. Atmospheric refraction effects on baseline error in satellite laser ranging systems

    NASA Technical Reports Server (NTRS)

    Im, K. E.; Gardner, C. S.

    1982-01-01

    Because of the mathematical complexities involved in exact analyses of baseline errors, it is not easy to isolate atmospheric refraction effects; however, by making certain simplifying assumptions about the ranging system geometry, relatively simple expressions can be derived which relate the baseline errors directly to the refraction errors. The results indicate that even in the absence of other errors, the baseline error for intercontinental baselines can be more than an order of magnitude larger than the refraction error.

  1. Perfect gas effects in compressible rapid distortion theory

    NASA Technical Reports Server (NTRS)

    Kerschen, E. J.; Myers, M. R.

    1987-01-01

    The governing equations presented for small amplitude unsteady disturbances imposed on steady, compressible mean flows that are two-dimensional and nearly uniform have their basis in the perfect gas equations of state, and therefore generalize previous results based on tangent gas theory. While these equations are more complex, this complexity is required for adequate treatment of high frequency disturbances, especially when the base flow Mach number is large; under such circumstances, the simplifying assumptions of tangent gas theory are not applicable.

  2. The global strong solutions of Hasegawa-Mima-Charney-Obukhov equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao Hongjun; Zhu Anyou

    2005-08-01

    The quasigeostrophic model is a simplified geophysical fluid model at asymptotically high rotation rate or at small Rossby number. We consider the quasigeostrophic equation with no dissipation term, which was obtained as an asymptotic model from the Euler equations with free surface under a quasigeostrophic velocity field assumption. It is called the Hasegawa-Mima-Charney-Obukhov equation, which also arises in plasma theory. We use a priori estimates to obtain the global existence of strong solutions for the Hasegawa-Mima-Charney-Obukhov equation.

  3. Operationally efficient propulsion system study (OEPSS) data book. Volume 10; Air Augmented Rocket Afterburning

    NASA Technical Reports Server (NTRS)

    Farhangi, Shahram; Trent, Donnie (Editor)

    1992-01-01

    A study was directed towards assessing the viability and effectiveness of an air-augmented ejector/rocket. Successful thrust augmentation could potentially reduce a multi-stage vehicle to a single-stage-to-orbit (SSTO) vehicle and, thereby, eliminate the associated ground support facility infrastructure and ground processing required by the eliminated stage. The results of this preliminary study indicate that an air-augmented ejector/rocket propulsion system is viable. However, uncertainties resulting from the simplified approach and assumptions must be resolved by further investigations.

  4. Performance evaluation of power control algorithms in wireless cellular networks

    NASA Astrophysics Data System (ADS)

    Temaneh-Nyah, C.; Iita, V.

    2014-10-01

    Power control in a mobile communication network aims to control the transmission power levels in such a way that the required quality of service (QoS) for the users is guaranteed with the lowest possible transmission powers. Most studies of power control algorithms in the literature are based on simplifying assumptions of some kind, which compromises the validity of the results when applied in a real environment. In this paper, a CDMA network was simulated. The real environment was accounted for by defining the analysis area; the network base stations and mobile stations are defined by their geographical coordinates, and the mobility of the mobile stations is accounted for. The simulation also allowed a number of network parameters, including the network traffic and the wireless channel models, to be modified. Finally, we present the simulation results of a convergence-speed-based comparative analysis of three uplink power control algorithms.
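
    A standard baseline for such comparisons is the distributed Foschini-Miljanic iteration, in which each link scales its power by the ratio of its target SINR to its achieved SINR. A sketch (the gain matrix, noise level, and targets below are hypothetical; the abstract does not identify the three algorithms compared):

    ```python
    import numpy as np

    G = np.array([[1.00, 0.10, 0.05],   # G[i, j]: link gain from transmitter j
                  [0.08, 1.00, 0.10],   # to receiver i (illustrative values)
                  [0.06, 0.12, 1.00]])
    noise = 1e-3
    gamma_target = np.array([4.0, 4.0, 4.0])   # target SINRs (linear scale)
    p = np.full(3, 1e-2)                       # initial transmit powers

    for _ in range(100):
        interference = G @ p - np.diag(G) * p + noise
        sinr = np.diag(G) * p / interference
        p = p * gamma_target / sinr    # per-link update; converges to the
                                       # minimal feasible powers when the
                                       # targets are jointly achievable
    ```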

  5. Parameterization of eddy sensible heat transports in a zonally averaged dynamic model of the atmosphere

    NASA Technical Reports Server (NTRS)

    Genthon, Christophe; Le Treut, Herve; Sadourny, Robert; Jouzel, Jean

    1990-01-01

    A Charney-Branscome based parameterization has been tested as a way of representing the eddy sensible heat transports missing in a zonally averaged dynamic model (ZADM) of the atmosphere. The ZADM used is a zonally averaged version of a general circulation model (GCM). The parameterized transports in the ZADM are gaged against the corresponding fluxes explicitly simulated in the GCM, using the same zonally averaged boundary conditions in both models. The Charney-Branscome approach neglects stationary eddies and transient barotropic disturbances and relies on a set of simplifying assumptions, including the linear approximation, to describe growing transient baroclinic eddies. Nevertheless, fairly satisfactory results are obtained when the parameterization is performed interactively with the model. Compared with noninteractive tests, a very efficient restoring feedback effect between the modeled zonal-mean climate and the parameterized meridional eddy transport is identified.

  6. Mini-mast CSI testbed user's guide

    NASA Technical Reports Server (NTRS)

    Tanner, Sharon E.; Pappa, Richard S.; Sulla, Jeffrey L.; Elliott, Kenny B.; Miserentino, Robert; Bailey, James P.; Cooper, Paul A.; Williams, Boyd L., Jr.; Bruner, Anne M.

    1992-01-01

    The Mini-Mast testbed is a 20 m generic truss highly representative of future deployable trusses for space applications. It is fully instrumented for system identification and active vibrations control experiments and is used as a ground testbed at NASA-Langley. The facility has actuators and feedback sensors linked via fiber optic cables to the Advanced Real Time Simulation (ARTS) system, where user defined control laws are incorporated into generic controls software. The object of the facility is to conduct comprehensive active vibration control experiments on a dynamically realistic large space structure. A primary goal is to understand the practical effects of simplifying theoretical assumptions. This User's Guide describes the hardware and its primary components, the dynamic characteristics of the test article, the control law implementation process, and the necessary safeguards employed to protect the test article. Suggestions for a strawman controls experiment are also included.

  7. Environmental dynamics at orbital altitudes

    NASA Technical Reports Server (NTRS)

    Karr, G. R.

    1976-01-01

    The influence of real satellite aerodynamics on the determination of upper atmospheric density was investigated. A method of analysis of satellite drag data is presented which includes the effect of satellite lift and the variation in aerodynamic properties around the orbit. The studies indicate that satellite lift may be responsible for the observed orbit precession rather than a super-rotation of the upper atmosphere. The influence of simplifying assumptions concerning the aerodynamics of objects in falling-sphere analysis was evaluated and an improved method of analysis was developed. Wind tunnel data were used to develop more accurate drag coefficient relationships for altitudes between 80 and 120 km. The improved drag coefficient relationships revealed a considerable error in previous falling-sphere drag interpretation. These data were reanalyzed using the more accurate relationships. Theoretical investigations of the drag coefficient in the very low speed ratio region were also conducted.

  8. Weierstrass traveling wave solutions for dissipative Benjamin, Bona, and Mahony (BBM) equation

    NASA Astrophysics Data System (ADS)

    Mancas, Stefan C.; Spradlin, Greg; Khanal, Harihar

    2013-08-01

    In this paper the effect of a small dissipation on waves is included to find exact solutions to the modified Benjamin, Bona, and Mahony (BBM) equation by viscosity. Using Lyapunov functions and dynamical systems theory, we prove that when viscosity is added to the BBM equation, in certain regions there still exist bounded traveling wave solutions in the form of solitary waves, periodic waves, and elliptic functions. By using the canonical form of the Abel equation, the polynomial Appell invariant makes the equation integrable in terms of Weierstrass ℘ functions. We use a general formalism based on Ince's transformation to write the general solution of the dissipative BBM in terms of ℘ functions, from which all the other known solutions can be obtained via simplifying assumptions. Using ODE (ordinary differential equation) analysis we show that the traveling wave speed is a bifurcation parameter that makes the transition between different classes of waves.

  9. Random walk study of electron motion in helium in crossed electromagnetic fields

    NASA Technical Reports Server (NTRS)

    Englert, G. W.

    1972-01-01

    Random walk theory, previously adapted to electron motion in the presence of an electric field, is extended to include a transverse magnetic field. In principle, the random walk approach avoids mathematical complexity and concomitant simplifying assumptions and permits determination of energy distributions and transport coefficients within the accuracy of available collisional cross section data. Application is made to a weakly ionized helium gas. Time of relaxation of electron energy distribution, determined by the random walk, is described by simple expressions based on energy exchange between the electron and an effective electric field. The restrictive effect of the magnetic field on electron motion, which increases the required number of collisions per walk to reach a terminal steady state condition, as well as the effect of the magnetic field on electron transport coefficients and mean energy can be quite adequately described by expressions involving only the Hall parameter.

  10. Multilayered models for electromagnetic reflection amplitudes

    NASA Technical Reports Server (NTRS)

    Linlor, W. I.

    1976-01-01

    The remote sensing of snowpack characteristics with surface installations or with an airborne system could have important applications in water resource management and flood prediction. To derive some insight into such applications, the electromagnetic response of multilayer snow models is analyzed. Normally incident plane waves are assumed at frequencies ranging from 10^6 to 10^10 Hz, and amplitude reflection coefficients are calculated for models having various snow-layer combinations, including ice sheets. Layers are defined by a thickness, permittivity, and conductivity; the electrical parameters are constant or prescribed functions of frequency. To illustrate the effect of various layering combinations, results are given in the form of curves of amplitude reflection coefficients versus frequency for a variety of models. Under simplifying assumptions, the snow thickness and effective dielectric constant can be estimated from the reflection coefficient variations as a function of frequency.

  11. Population modeling and its role in toxicological studies

    USGS Publications Warehouse

    Sauer, John R.; Pendleton, Grey W.; Hoffman, David J.; Rattner, Barnett A.; Burton, G. Allen; Cairns, John

    1995-01-01

    A model could be defined as any abstraction from reality that is used to provide some insight into the real system. In this discussion, we will use a more specific definition: a model is a set of rules or assumptions, expressed as mathematical equations, that describe how animals survive and reproduce, including the external factors that affect these characteristics. A model simplifies a system, retaining essential components while eliminating parts that are not of interest. Ecology has a rich history of using models to gain insight into populations, often borrowing both model structures and analysis methods from demographers and engineers. Much of the development of these models has been a consequence of mathematicians and physicists seeing simple analogies between their models and patterns in natural systems. Consequently, one major application of ecological modeling has been to emphasize the analysis of the dynamics of often complex models to provide insight into theoretical aspects of ecology.

  12. Modeling the fusion of cylindrical bioink particles in post bioprinting structure formation

    NASA Astrophysics Data System (ADS)

    McCune, Matt; Shafiee, Ashkan; Forgacs, Gabor; Kosztin, Ioan

    2015-03-01

    Cellular Particle Dynamics (CPD) is an effective computational method to describe the shape evolution and biomechanical relaxation processes in multicellular systems. Thus, CPD is a useful tool to predict the outcome of post-printing structure formation in bioprinting. The predictive power of CPD has been demonstrated for multicellular systems composed of spherical bioink units. Experiments and computer simulations were related through an independently developed theoretical formalism based on continuum mechanics. Here we generalize the CPD formalism to (i) include cylindrical bioink particles often used in specific bioprinting applications, (ii) describe the more realistic experimental situation in which both the length and the volume of the cylindrical bioink units decrease during post-printing structure formation, and (iii) directly connect CPD simulations to the corresponding experiments without the need of the intermediate continuum theory inherently based on simplifying assumptions. Work supported by NSF [PHY-0957914]. Computer time provided by the University of Missouri Bioinformatics Consortium.

  13. Simulation of the Ozone Monitoring Instrument Aerosol Index Using the NASA Goddard Earth Observing System Aerosol Reanalysis Products

    NASA Technical Reports Server (NTRS)

    Colarco, Peter R.; Gasso, Santiago; Ahn, Changwoo; Buchard, Virginie; Da Silva, Arlindo M.; Torres, Omar

    2017-01-01

    We provide an analysis of the commonly used Ozone Monitoring Instrument (OMI) aerosol index (AI) product for qualitative detection of the presence and loading of absorbing aerosols. In our analysis, simulated top-of-atmosphere (TOA) radiances are produced at the OMI footprints from a model atmosphere and aerosol profile provided by the NASA Goddard Earth Observing System (GEOS-5) Modern-Era Retrospective Analysis for Research and Applications aerosol reanalysis (MERRAero). Having established the credibility of the MERRAero simulation of the OMI AI in a previous paper, we describe updates in the approach and aerosol optical property assumptions. The OMI TOA radiances are computed in cloud-free conditions from the MERRAero atmospheric state, and the AI is calculated. The simulated TOA radiances are fed to the OMI aerosol retrieval algorithms, and the retrieved AI (OMAERUV AI) is compared to the MERRAero-calculated AI. Two main sources of discrepancy are discussed: one pertaining to the OMI algorithm assumptions of the surface pressure, which generally differ from the actual surface pressure of an observation, and the other related to simplifying assumptions in the molecular atmosphere radiative transfer used in the OMI algorithms. Surface pressure assumptions lead to systematic biases in the OMAERUV AI, particularly over the oceans. Simplifications in the molecular radiative transfer lead to biases particularly in regions of topography with surface pressures intermediate between 600 and 1013.25 hPa. Generally, the errors in the OMI AI due to these considerations are less than 0.2 in magnitude, though larger errors are possible, particularly over land. We recommend that future versions of the OMI algorithms use surface pressures from readily available atmospheric analyses combined with high-spatial-resolution topographic maps and include more surface pressure nodal points in their radiative transfer lookup tables.

  14. Simulation of the Ozone Monitoring Instrument aerosol index using the NASA Goddard Earth Observing System aerosol reanalysis products

    NASA Astrophysics Data System (ADS)

    Colarco, Peter R.; Gassó, Santiago; Ahn, Changwoo; Buchard, Virginie; da Silva, Arlindo M.; Torres, Omar

    2017-11-01

    We provide an analysis of the commonly used Ozone Monitoring Instrument (OMI) aerosol index (AI) product for qualitative detection of the presence and loading of absorbing aerosols. In our analysis, simulated top-of-atmosphere (TOA) radiances are produced at the OMI footprints from a model atmosphere and aerosol profile provided by the NASA Goddard Earth Observing System (GEOS-5) Modern-Era Retrospective Analysis for Research and Applications aerosol reanalysis (MERRAero). Having established the credibility of the MERRAero simulation of the OMI AI in a previous paper, we describe updates in the approach and aerosol optical property assumptions. The OMI TOA radiances are computed in cloud-free conditions from the MERRAero atmospheric state, and the AI is calculated. The simulated TOA radiances are fed to the OMI near-UV aerosol retrieval algorithms (known as OMAERUV), and the retrieved AI is compared to the MERRAero-calculated AI. Two main sources of discrepancy are discussed: one pertaining to the OMI algorithm assumptions of the surface pressure, which generally differ from the actual surface pressure of an observation, and the other related to simplifying assumptions in the molecular atmosphere radiative transfer used in the OMI algorithms. Surface pressure assumptions lead to systematic biases in the OMAERUV AI, particularly over the oceans. Simplifications in the molecular radiative transfer lead to biases particularly in regions of topography with surface pressures intermediate between 600 and 1013.25 hPa. Generally, the errors in the OMI AI due to these considerations are less than 0.2 in magnitude, though larger errors are possible, particularly over land. We recommend that future versions of the OMI algorithms use surface pressures from readily available atmospheric analyses combined with high-spatial-resolution topographic maps and include more surface pressure nodal points in their radiative transfer lookup tables.
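
    Both records hinge on the UV aerosol index, which is a residue between an observed (here, simulated) spectral radiance ratio and the ratio a purely molecular atmosphere would produce. A minimal sketch of that residue follows; treat the -100 log10 form and the 354/388 nm wavelength pair as the commonly published OMAERUV definition rather than a detail confirmed by these abstracts.

```python
import numpy as np

def aerosol_index(i354, i388, i354_rayleigh, i388_rayleigh):
    """UV aerosol index: residue between the observed (or model-simulated)
    354/388 nm radiance ratio and the ratio that a cloud- and aerosol-free
    Rayleigh atmosphere would produce. Positive values flag UV-absorbing
    aerosol such as smoke and dust."""
    return -100.0 * (np.log10(i354 / i388) -
                     np.log10(i354_rayleigh / i388_rayleigh))

# The Rayleigh-only radiances depend on the assumed surface pressure, so an
# algorithm that assumes, say, 1013.25 hPa over a 600 hPa plateau biases the
# second term and hence the AI -- the systematic effect discussed above.
```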

  15. Simplifying the Reuse and Interoperability of Geoscience Data Sets and Models with Semantic Metadata that is Human-Readable and Machine-actionable

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.

    2017-12-01

    Standardized, deep descriptions of digital resources (e.g. data sets, computational models, software tools and publications) make it possible to develop user-friendly software systems that assist scientists with the discovery and appropriate use of these resources. Semantic metadata makes it possible for machines to take actions on behalf of humans, such as automatically identifying the resources needed to solve a given problem, retrieving them and then automatically connecting them (despite their heterogeneity) into a functioning workflow. Standardized model metadata also helps model users to understand the important details that underpin computational models and to compare the capabilities of different models. These details include simplifying assumptions on the physics, the governing equations and the numerical methods used to solve them, the discretization of space (the grid) and time (the time-stepping scheme), the state variables (input or output), and the model configuration parameters. This kind of metadata provides a "deep description" of a computational model that goes well beyond other types of metadata (e.g. author, purpose, scientific domain, programming language, digital rights, provenance, execution) and captures the science that underpins a model. A carefully constructed, unambiguous, rules-based schema that addresses this problem, called the Geoscience Standard Names ontology, will be presented; it utilizes Semantic Web best practices and technologies. It has also been designed to work across science domains and to be readable by both humans and machines.

  16. SU-E-T-293: Simplifying Assumption for Determining Sc and Sp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, R; Cheung, A; Anderson, R

    Purpose: Scp(mlc,jaw) is a two-dimensional function of collimator field size and effective field size. Conventionally, Scp(mlc,jaw) is treated as separable into components Sc(jaw) and Sp(mlc). Scp(mlc=jaw) is measured in phantom and Sc(jaw) is measured in air, with Sp=Scp/Sc. Ideally, Sc and Sp would be able to predict measured values of Scp(mlc,jaw) for all combinations of mlc and jaw. However, ideal Sc and Sp functions do not exist, and a measured two-dimensional Scp dataset cannot be decomposed into a unique pair of one-dimensional functions. If the output functions Sc(jaw) and Sp(mlc) were equal to each other, and thus each equal to Scp(mlc=jaw)^0.5, this condition would lead to a simpler measurement process by eliminating the need for in-air measurements. Without the distorting effect of the buildup cap, small-field measurement would be limited only by the dimensions of the detector and would thus be improved by this simplification of the output functions. The goal of the present study is to evaluate an assumption that Sc=Sp. Methods: For a 6 MV x-ray beam, Sc and Sp were determined both by the conventional method and as Scp(mlc=jaw)^0.5. Square-field benchmark values of Scp(mlc,jaw) were then measured across the range from 2×2 to 29×29. Both Sc and Sp functions were then evaluated as to their ability to predict these measurements. Results: Both methods produced qualitatively similar results, with <4% error in all cases and >3% error in 1 case. The conventional method produced 2 cases with >2% error, while the square-root method produced only 1 such case. Conclusion: Though it would need to be validated for any specific beam to which it might be applied, under the conditions studied, the simplifying assumption that Sc = Sp is justified.
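
    The arithmetic being tested is compact enough to state directly. The sketch below contrasts the conventional decomposition (Sp = Scp/Sc, requiring a separate in-air measurement) with the square-root assumption Sc = Sp = Scp(mlc=jaw)^0.5; the numerical table is illustrative, not the study's 6 MV data.

```python
import math

# Illustrative in-phantom outputs Scp(mlc=jaw) for square fields (side in cm)
scp_square = {2: 0.93, 4: 0.96, 10: 1.00, 20: 1.04, 29: 1.06}

def scp_sqrt_method(mlc, jaw):
    """Square-root assumption: Sc(jaw) = Scp(jaw,jaw)^0.5 and
    Sp(mlc) = Scp(mlc,mlc)^0.5, so no in-air (buildup-cap) measurement
    is needed."""
    return math.sqrt(scp_square[jaw]) * math.sqrt(scp_square[mlc])

def scp_conventional(mlc, jaw, sc_in_air):
    """Conventional separation: Sc measured in air, Sp = Scp/Sc in phantom."""
    sp = scp_square[mlc] / sc_in_air[mlc]
    return sc_in_air[jaw] * sp

# Predict a mixed combination (e.g. mlc-defined 4 cm field inside a 20 cm jaw)
# and compare either prediction against a measured benchmark Scp(4, 20).
print(round(scp_sqrt_method(4, 20), 4))
```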

  17. Snow Physics and Meltwater Hydrology of the SSiB Model Employed for Climate Simulation Studies with GEOS 2 GCM

    NASA Technical Reports Server (NTRS)

    Mocko, David M.; Sud, Y. C.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Present-day climate models produce large climate drifts that interfere with the climate signals simulated in modelling studies. The simplifying assumptions of the physical parameterization of snow and ice processes lead to large biases in the annual cycles of surface temperature, evapotranspiration, and the water budget, which in turn cause erroneous land-atmosphere interactions. Since land processes are vital for climate prediction, and snow and snowmelt processes have been shown to affect Indian monsoons and North American rainfall and hydrology, special attention is now being given to cold land processes and their influence on the simulated annual cycle in GCMs. The snow model of the SSiB land-surface model being used at Goddard has evolved from a unified single snow-soil layer interacting with a deep soil layer through a force-restore procedure to a two-layer snow model atop a ground layer separated by a snow-ground interface. When the snow cover is deep, force-restore occurs within the snow layers. However, several other simplifying assumptions, such as homogeneous snow cover, an empirical depth-related surface albedo, snowmelt and melt-freeze in the diurnal cycles, and neglect of the latent heat of soil freezing and thawing, still remain nagging problems. Several important influences of these assumptions will be discussed with the goal of improving them to better simulate the snowmelt and meltwater hydrology. Nevertheless, the current snow model (Mocko and Sud, 2000, submitted) better simulates cold land processes compared to the original SSiB. This was confirmed against observations of soil moisture, runoff, and snow cover in global GSWP (Sud and Mocko, 1999) and point-scale Valdai simulations over seasonal snow regions. New results from the current SSiB snow model in the 10-year PILPS 2e intercomparison in northern Scandinavia will be presented.

  18. Rethinking Use of the OML Model in Electric Sail Development

    NASA Technical Reports Server (NTRS)

    Stone, Nobie H.

    2016-01-01

    In 1924, Irving Langmuir and H. M. Mott-Smith published a theoretical model for the complex plasma sheath phenomenon in which they identified some very special cases which greatly simplified the sheath and allowed a closed solution to the problem. The most widely used application is for an electrostatic, or "Langmuir," probe in laboratory plasma. Although the Langmuir probe is physically simple (a biased wire), the theory describing its functional behavior and its current-voltage characteristic is extremely complex and, accordingly, a number of assumptions and approximations are used in the Langmuir-Mott-Smith (LMS) model. These simplifications, correspondingly, place limits on the model's range of application. Adapting the LMS model to real-life conditions is the subject of numerous papers and dissertations. The Orbit-Motion-Limited (OML) model that is widely used today is one of these adaptations, and is a convenient means of calculating sheath effects. Since the Langmuir probe is a simple biased wire immersed in plasma, it is particularly tempting to use the OML equation in calculating the characteristics of the long, highly biased wires of an Electric Sail in the solar wind plasma. However, in order to arrive at the OML equation, a number of additional simplifying assumptions and approximations (beyond those made by Langmuir and Mott-Smith) are necessary. The OML equation is a good approximation when all conditions are met, but it would appear that the Electric Sail problem lies outside of the limits of applicability.
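
    For context, the OML result being borrowed is itself one line. The sketch below evaluates the commonly used approximate OML expression for the attracted-species current per unit length of a long cylinder, I' = I'_th (2/sqrt(pi)) sqrt(1 + eV/kTe); the solar wind numbers are round illustrative values, and the point of the note above is precisely that e-sail wires may violate the assumptions (collisionless, stationary Maxwellian, thin wire) built into this formula.

```python
import numpy as np

E = 1.602e-19    # elementary charge, C
ME = 9.109e-31   # electron mass, kg
KB = 1.381e-23   # Boltzmann constant, J/K

def oml_current_per_length(radius_m, bias_v, n_e, t_e):
    """Approximate OML electron current per unit length collected by a long,
    thin, positively biased cylinder in a stationary Maxwellian plasma:
    I' = I'_thermal * (2/sqrt(pi)) * sqrt(1 + e*V/(k*Te))."""
    flux_speed = np.sqrt(KB * t_e / (2.0 * np.pi * ME))    # one-sided thermal flux
    i_thermal = E * n_e * (2.0 * np.pi * radius_m) * flux_speed
    chi = E * bias_v / (KB * t_e)                          # normalized bias
    return i_thermal * (2.0 / np.sqrt(np.pi)) * np.sqrt(1.0 + chi)

# Roughly solar-wind-like conditions at 1 AU (n_e ~ 5e6 m^-3, Te ~ 1e5 K),
# a 1 mm diameter tether at +6 kV:
print(oml_current_per_length(5e-4, 6.0e3, 5e6, 1e5), "A per metre of wire")
```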

  19. Direct Position Determination of Unknown Signals in the Presence of Multipath Propagation

    PubMed Central

    Yu, Hongyi

    2018-01-01

    A novel geolocation architecture, termed “Multiple Transponders and Multiple Receivers for Multiple Emitters Positioning System (MTRE)”, is proposed in this paper. Existing Direct Position Determination (DPD) methods take advantage of a rather simple channel assumption (line-of-sight channels with complex path attenuations) and a simplified MUltiple SIgnal Classification (MUSIC) algorithm cost function to avoid high-dimensional searching. We point out that the simplified assumption and cost function reduce the positioning accuracy because of the singularity of the array manifold in a multi-path environment. We present a DPD model for unknown signals in the presence of Multi-path Propagation (MP-DPD) in this paper. MP-DPD adds non-negative real path attenuation constraints to avoid the mistake caused by the singularity of the array manifold. The Multi-path Propagation MUSIC (MP-MUSIC) method and the Active Set Algorithm (ASA) are designed to reduce the dimension of searching. A Multi-path Propagation Maximum Likelihood (MP-ML) method is proposed in addition, to overcome the limitation of MP-MUSIC in time-sensitive applications. An iterative algorithm and an approach for setting initial values are given to make the MP-ML time consumption acceptable. Numerical results validate the performance improvements of MP-MUSIC and MP-ML. A closed form of the Cramér–Rao Lower Bound (CRLB) is derived as a benchmark to evaluate the performance of MP-MUSIC and MP-ML. PMID:29562601

  20. Direct Position Determination of Unknown Signals in the Presence of Multipath Propagation.

    PubMed

    Du, Jianping; Wang, Ding; Yu, Wanting; Yu, Hongyi

    2018-03-17

    A novel geolocation architecture, termed "Multiple Transponders and Multiple Receivers for Multiple Emitters Positioning System (MTRE)", is proposed in this paper. Existing Direct Position Determination (DPD) methods take advantage of a rather simple channel assumption (line-of-sight channels with complex path attenuations) and a simplified MUltiple SIgnal Classification (MUSIC) algorithm cost function to avoid high-dimensional searching. We point out that the simplified assumption and cost function reduce the positioning accuracy because of the singularity of the array manifold in a multi-path environment. We present a DPD model for unknown signals in the presence of Multi-path Propagation (MP-DPD) in this paper. MP-DPD adds non-negative real path attenuation constraints to avoid the mistake caused by the singularity of the array manifold. The Multi-path Propagation MUSIC (MP-MUSIC) method and the Active Set Algorithm (ASA) are designed to reduce the dimension of searching. A Multi-path Propagation Maximum Likelihood (MP-ML) method is proposed in addition, to overcome the limitation of MP-MUSIC in time-sensitive applications. An iterative algorithm and an approach for setting initial values are given to make the MP-ML time consumption acceptable. Numerical results validate the performance improvements of MP-MUSIC and MP-ML. A closed form of the Cramér-Rao Lower Bound (CRLB) is derived as a benchmark to evaluate the performance of MP-MUSIC and MP-ML.
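
    The projection step shared by MUSIC-style estimators, including the position-domain variants above, is easy to isolate. The sketch below computes a classical MUSIC pseudo-spectrum from a sample covariance; in DPD the candidate grid indexes emitter positions rather than angles, and MP-MUSIC further constrains the path attenuations to be non-negative reals, which this generic sketch omits.

```python
import numpy as np

def music_pseudospectrum(R, steering_vectors, n_sources):
    """Classical MUSIC: eigendecompose the sample covariance R (M x M),
    keep the M - n_sources eigenvectors spanning the noise subspace, and
    score each candidate steering vector by the inverse of its projection
    onto that subspace. Peaks mark likely source parameters."""
    eigvals, eigvecs = np.linalg.eigh(R)            # ascending eigenvalues
    noise = eigvecs[:, : R.shape[0] - n_sources]    # noise-subspace basis
    spectrum = []
    for a in steering_vectors:                      # one vector per grid point
        a = a / np.linalg.norm(a)
        proj = a.conj() @ noise @ noise.conj().T @ a
        spectrum.append(1.0 / np.real(proj))
    return np.array(spectrum)
```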

  1. Quantum-like dynamics applied to cognition: a consideration of available options

    NASA Astrophysics Data System (ADS)

    Broekaert, Jan; Basieva, Irina; Blasiak, Pawel; Pothos, Emmanuel M.

    2017-10-01

    Quantum probability theory (QPT) has provided a novel, rich mathematical framework for cognitive modelling, especially for situations which appear paradoxical from classical perspectives. This work concerns the dynamical aspects of QPT, as relevant to cognitive modelling. We aspire to shed light on how the mind's driving potentials (encoded in Hamiltonian and Lindbladian operators) impact the evolution of a mental state. Some existing QPT cognitive models do employ dynamical aspects when considering how a mental state changes with time, but it is often the case that several simplifying assumptions are introduced. What kind of modelling flexibility does QPT dynamics offer without any simplifying assumptions and is it likely that such flexibility will be relevant in cognitive modelling? We consider a series of nested QPT dynamical models, constructed with a view to accommodate results from a simple, hypothetical experimental paradigm on decision-making. We consider Hamiltonians more complex than the ones which have traditionally been employed with a view to explore the putative explanatory value of this additional complexity. We then proceed to compare simple models with extensions regarding both the initial state (e.g. a mixed state with a specific orthogonal decomposition; a general mixed state) and the dynamics (by introducing Hamiltonians which destroy the separability of the initial structure and by considering an open-system extension). We illustrate the relations between these models mathematically and numerically. This article is part of the themed issue 'Second quantum revolution: foundational questions'.
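
    The closed-system core of such dynamical models is the von Neumann equation, rho(t) = U rho(0) U^dagger with U = exp(-iHt); the open-system extension mentioned above adds Lindblad dissipators to this. A minimal numerical sketch follows, with an illustrative two-level Hamiltonian and mixed initial state (not parameters from the article).

```python
import numpy as np
from scipy.linalg import expm

# Illustrative two-level "mental state" evolving unitarily (hbar = 1)
H = np.array([[0.0, 0.4],
              [0.4, 1.0]])       # driving potential (Hamiltonian)
rho0 = np.array([[0.7, 0.1],
                 [0.1, 0.3]])    # mixed initial state

def evolve(rho, H, t):
    """von Neumann evolution: rho(t) = exp(-iHt) rho exp(+iHt)."""
    U = expm(-1j * H * t)
    return U @ rho @ U.conj().T

# Probability of the decision outcome "1" at time t is Tr(P1 rho(t))
P1 = np.diag([0.0, 1.0])
for t in (0.0, 0.5, 1.0, 2.0):
    print(t, float(np.real(np.trace(P1 @ evolve(rho0, H, t)))))
```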

  2. On the Weyl anomaly of 4D conformal higher spins: a holographic approach

    NASA Astrophysics Data System (ADS)

    Acevedo, S.; Aros, R.; Bugini, F.; Diaz, D. E.

    2017-11-01

    We present a first attempt to derive the full (type-A and type-B) Weyl anomaly of four dimensional conformal higher spin (CHS) fields in a holographic way. We obtain the type-A and type-B Weyl anomaly coefficients for the whole family of 4D CHS fields from the one-loop effective action for massless higher spin (MHS) Fronsdal fields evaluated on a 5D bulk Poincaré-Einstein metric with an Einstein metric on its conformal boundary. To gain access to the type-B anomaly coefficient we assume, for practical reasons, a Lichnerowicz-type coupling of the bulk Fronsdal fields with the bulk background Weyl tensor. Remarkably enough, our holographic findings under this simplifying assumption are certainly not unknown: they match the results previously found on the boundary counterpart under the assumption of factorization of the CHS higher-derivative kinetic operator into Laplacians of "partially massless" higher spins on Einstein backgrounds.

  3. Review of Integrated Noise Model (INM) Equations and Processes

    NASA Technical Reports Server (NTRS)

    Shepherd, Kevin P. (Technical Monitor); Forsyth, David W.; Gulding, John; DiPardo, Joseph

    2003-01-01

    The FAA's Integrated Noise Model (INM) relies on the methods of the SAE AIR-1845 'Procedure for the Calculation of Airplane Noise in the Vicinity of Airports' issued in 1986. Simplifying assumptions for aerodynamics and noise calculation were made in the SAE standard and the INM based on the limited computing power commonly available then. The key objectives of this study are 1) to test some of those assumptions against Boeing source data, and 2) to automate the manufacturer's methods of data development to enable the maintenance of a consistent INM database over time. These new automated tools were used to generate INM database submissions for six airplane types: 737-700 (CFM56-7 24K), 767-400ER (CF6-80C2BF), 777-300 (Trent 892), 717-200 (BR715), 757-300 (RR535E4B), and the 737-800 (CFM56-7 26K).

  4. Nonlinear Curvature Expressions for Combined Flapwise Bending, Chordwise Bending, Torsion and Extension of Twisted Rotor Blades

    NASA Technical Reports Server (NTRS)

    Kvaternik, R. G.; Kaza, K. R. V.

    1976-01-01

    The nonlinear curvature expressions for a twisted rotor blade or a beam undergoing transverse bending in two planes, torsion, and extension were developed. The curvature expressions were obtained using simple geometric considerations. The expressions were first developed in a general manner using the geometrical nonlinear theory of elasticity. These general nonlinear expressions were then systematically reduced to four levels of approximation by imposing various simplifying assumptions, and in each of these levels the second degree nonlinear expressions were given. The assumptions were carefully stated and their implications with respect to the nonlinear theory of elasticity as applied to beams were pointed out. The transformation matrices between the deformed and undeformed blade-fixed coordinates, which were needed in the development of the curvature expressions, were also given for three of the levels of approximation. The present curvature expressions and transformation matrices were compared with corresponding expressions existing in the literature.

  5. Monocular correspondence detection for symmetrical objects by template matching

    NASA Astrophysics Data System (ADS)

    Vilmar, G.; Besslich, Philipp W., Jr.

    1990-09-01

    We describe a possibility to reconstruct 3-D information from a single view of a 3-D bilaterally symmetric object. The symmetry assumption allows us to obtain a "second view" from a different viewpoint by a simple reflection of the monocular image. Therefore we have to solve the correspondence problem in a special case where known feature-based or area-based binocular approaches fail. In principle our approach is based on a frequency-domain template matching of the features on the epipolar lines. During a training period our system "learns" the assignment of correspondence models to image features. The object shape is interpolated when no template matches the image features. This fact is an important advantage of this methodology because no "real world" image holds the symmetry assumption perfectly. To simplify the training process we used single views of human faces (e.g. passport photos), but our system is trainable on any other kind of objects.

  6. Effects of distributed database modeling on evaluation of transaction rollbacks

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. The effect of modeling assumptions on the evaluation of one such measure, the number of transaction rollbacks, is studied in a partitioned distributed database system. Six probabilistic models are developed, with expressions for the number of rollbacks under each of these models. Essentially, the models differ in terms of the available system information. The analytical results so obtained are compared to results from simulation. It is concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly underestimated when such models are employed.

  7. Effects of distributed database modeling on evaluation of transaction rollbacks

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. Here, researchers investigate the effect of modeling assumptions on the evaluation of one such measure, the number of transaction rollbacks in a partitioned distributed database system. The researchers developed six probabilistic models and expressions for the number of rollbacks under each of these models. Essentially, the models differ in terms of the available system information. The analytical results obtained are compared to results from simulation. It was concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly underestimated when such models are employed.

  8. A genuinely discontinuous approach for multiphase EHD problems

    NASA Astrophysics Data System (ADS)

    Natarajan, Mahesh; Desjardins, Olivier

    2017-11-01

    Electrohydrodynamics (EHD) involves solving the Poisson equation for the electric field potential. For multiphase flows, although the electric field potential is a continuous quantity, the discontinuity in the electric permittivity between the phases means that additional jump conditions for the normal and tangential components of the electric field need to be satisfied at the interface. All approaches to date either ignore the jump conditions or involve simplifying assumptions, and hence yield unconvincing results even for simple test problems. In the present work, we develop a genuinely discontinuous approach to the Poisson equation for multiphase flows using a finite-volume unsplit Volume of Fluid method. The governing equation and the jump conditions without assumptions are used to develop the method, and its efficiency is demonstrated by comparison of the numerical results with canonical test problems having exact solutions.

  9. Spacelab experiment computer study. Volume 1: Executive summary (presentation)

    NASA Technical Reports Server (NTRS)

    Lewis, J. L.; Hodges, B. C.; Christy, J. O.

    1976-01-01

    A quantitative cost for various Spacelab flight hardware configurations is provided, along with varied software development options. A cost analysis of Spacelab computer hardware and software is presented. The cost study is discussed based on utilization of a central experiment computer with optional auxiliary equipment. Groundrules and assumptions used in deriving the costing methods for all options in the Spacelab experiment study are presented. The groundrules and assumptions are analyzed, and the options, along with their cost considerations, are discussed. It is concluded that Spacelab program cost for software development and maintenance is independent of experimental hardware and software options, that a distributed standard computer concept simplifies software integration without a significant increase in cost, and that decisions on flight computer hardware configurations should not be made until payload selection for a given mission and a detailed analysis of the mission requirements are completed.

  10. A benchmark initiative on mantle convection with melting and melt segregation

    NASA Astrophysics Data System (ADS)

    Schmeling, Harro; Dohmen, Janik; Wallner, Herbert; Noack, Lena; Tosi, Nicola; Plesa, Ana-Catalina; Maurice, Maxime

    2015-04-01

    In recent years a number of mantle convection models have been developed which include partial melting within the asthenosphere, estimation of melt volumes, as well as melt extraction with and without redistribution at the surface or within the lithosphere. All these approaches use various simplifying modelling assumptions whose effects on the dynamics of convection, including the feedback on melting, have not been explored in sufficient detail. To better assess the significance of such assumptions and to provide test cases for the modelling community we initiate a benchmark comparison. In the initial phase of this endeavor we focus on the usefulness of the definitions of the test cases, keeping the physics as sound as possible. The reference model is taken from the mantle convection benchmark, case 1b (Blankenbach et al., 1989), assuming a square box with free slip boundary conditions, the Boussinesq approximation, constant viscosity and a Rayleigh number of 10^5. Melting is modelled assuming a simplified binary solid solution with linearly depth-dependent solidus and liquidus temperatures, as well as a solidus temperature depending linearly on depletion. Starting from a plume-free initial temperature condition (to avoid melting at the onset time) three cases are investigated: Case 1 includes melting, but without thermal or dynamic feedback on the convection flow. This case provides a total melt generation rate (qm) in a steady state. Case 2 includes batch melting, melt buoyancy (melt Rayleigh number Rm), depletion buoyancy and latent heat, but no melt percolation. Output quantities are the Nusselt number (Nu), root mean square velocity (vrms) and qm approaching a statistical steady state. Case 3 includes two-phase flow, i.e. melt percolation, assuming a constant shear and bulk viscosity of the matrix and various melt retention numbers (Rt). These cases should be carried out using the Compaction Boussinesq Approximation (Schmeling, 2000) or the full compaction formulation. Variations of cases 1 - 3 may be tested, particularly studying the effect of melt extraction. The motivation of this presentation is to summarize first experiences, suggest possible modifications of the case definitions and call interested modelers to join this benchmark exercise. References: Blankenbach, B., Busse, F., Christensen, U., Cserepes, L., Gunkel, D., Hansen, U., Harder, H., Jarvis, G., Koch, M., Marquart, G., Moore, D., Olson, P., and Schmeling, H., 1989: A benchmark comparison for mantle convection codes, J. Geophys., 98, 23-38. Schmeling, H., 2000: Partial melting and melt segregation in a convecting mantle. In: Physics and Chemistry of Partially Molten Rocks, eds. N. Bagdassarov, D. Laporte, and A.B. Thompson, Kluwer Academic Publ., Dordrecht, pp. 141 - 178.

  11. A benchmark initiative on mantle convection with melting and melt segregation

    NASA Astrophysics Data System (ADS)

    Schmeling, Harro; Dannberg, Juliane; Dohmen, Janik; Kalousova, Klara; Maurice, Maxim; Noack, Lena; Plesa, Ana; Soucek, Ondrej; Spiegelman, Marc; Thieulot, Cedric; Tosi, Nicola; Wallner, Herbert

    2016-04-01

    In recent years a number of mantle convection models have been developed which include partial melting within the asthenosphere, estimation of melt volumes, as well as melt extraction with and without redistribution at the surface or within the lithosphere. All these approaches use various simplifying modelling assumptions whose effects on the dynamics of convection, including the feedback on melting, have not been explored in sufficient detail. To better assess the significance of such assumptions and to provide test cases for the modelling community we carry out a benchmark comparison. The reference model is taken from the mantle convection benchmark, cases 1a to 1c (Blankenbach et al., 1989), assuming a square box with free slip boundary conditions, the Boussinesq approximation, constant viscosity and Rayleigh numbers of 10^4 to 10^6. Melting is modelled using a simplified binary solid solution with linearly depth-dependent solidus and liquidus temperatures, as well as a solidus temperature depending linearly on depletion. Starting from a plume-free initial temperature condition (to avoid melting at the onset time) five cases are investigated: Case 1 includes melting, but without thermal or dynamic feedback on the convection flow. This case provides a total melt generation rate (qm) in a steady state. Case 2 is identical to case 1 except that latent heat is switched on. Case 3 includes batch melting, melt buoyancy (melt Rayleigh number Rm) and depletion buoyancy, but no melt percolation. Output quantities are the Nusselt number (Nu), root mean square velocity (vrms), the maximum and the total melt volume and qm approaching a statistical steady state. Case 4 includes two-phase flow, i.e. melt percolation, assuming a constant shear and bulk viscosity of the matrix and various melt retention numbers (Rt). These cases are carried out using the Compaction Boussinesq Approximation (Schmeling, 2000) or the full compaction formulation. For cases 1 - 3 very good agreement is achieved among the various participating codes. For case 4, melting/freezing formulations require some attention to avoid sub-solidus melt fractions. A case 5 is planned where all melt will be extracted and reinserted in a shallow region above the melted plume. The motivation of this presentation is to summarize first experiences and to finalize the case definitions. References: Blankenbach, B., Busse, F., Christensen, U., Cserepes, L., Gunkel, D., Hansen, U., Harder, H., Jarvis, G., Koch, M., Marquart, G., Moore, D., Olson, P., and Schmeling, H., 1989: A benchmark comparison for mantle convection codes, J. Geophys., 98, 23-38. Schmeling, H., 2000: Partial melting and melt segregation in a convecting mantle. In: Physics and Chemistry of Partially Molten Rocks, eds. N. Bagdassarov, D. Laporte, and A.B. Thompson, Kluwer Academic Publ., Dordrecht, pp. 141 - 178.
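
    The melting rule common to these benchmark cases reduces to a clipped lever rule between linear solidus and liquidus curves. A sketch under stated assumptions: the coefficients below are placeholders for illustration, not the agreed benchmark values.

```python
def melt_fraction(T, z, depletion,
                  t_sol0=1100.0, t_liq0=1300.0,
                  dsol_dz=2.0, dliq_dz=2.0, dsol_ddep=200.0):
    """Batch melt fraction for a simplified binary solid solution:
    solidus and liquidus linear in depth z, solidus raised linearly by
    prior depletion; the fraction F is clipped to [0, 1]."""
    t_sol = t_sol0 + dsol_dz * z + dsol_ddep * depletion
    t_liq = t_liq0 + dliq_dz * z
    f = (T - t_sol) / (t_liq - t_sol)
    return min(max(f, 0.0), 1.0)

# Case-4-style bookkeeping must also guard the freezing branch so that no
# sub-solidus melt fraction (f > 0 where T < t_sol) is ever reported.
```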

  12. The Excursion Set Theory of Halo Mass Functions, Halo Clustering, and Halo Growth

    NASA Astrophysics Data System (ADS)

    Zentner, Andrew R.

    I review the excursion set theory with particular attention toward applications to cold dark matter halo formation and growth, halo abundance, and halo clustering. After a brief introduction to notation and conventions, I begin by recounting the heuristic argument leading to the mass function of bound objects given by Press and Schechter. I then review the more formal derivation of the Press-Schechter halo mass function that makes use of excursion sets of the density field. The excursion set formalism is powerful and can be applied to numerous other problems. I review the excursion set formalism for describing both halo clustering and bias and the properties of void regions. As one of the most enduring legacies of the excursion set approach and one of its most common applications, I spend considerable time reviewing the excursion set theory of halo growth. This section of the review culminates with the description of two Monte Carlo methods for generating ensembles of halo mass accretion histories. In the last section, I emphasize that the standard excursion set approach is the result of several simplifying assumptions. Dropping these assumptions can lead to more faithful predictions and open excursion set theory to new applications. One such assumption is that the height of the barriers that define collapsed objects is a constant function of scale. I illustrate the implementation of the excursion set approach for barriers of arbitrary shape. One such application is the now well-known improvement of the excursion set mass function derived from the "moving" barrier for ellipsoidal collapse. I also emphasize that the statement that halo accretion histories are independent of halo environment in the excursion set approach is not a general prediction of the theory. It is a simplifying assumption. I review the method for constructing correlated random walks of the density field in the more general case. I construct a simple toy model to illustrate that excursion set theory (with a constant barrier height) makes a simple and general prediction for the relation between halo accretion histories and the large-scale environments of halos: regions of high density preferentially contain late-forming halos and conversely for regions of low density. I conclude with a brief discussion of the importance of this prediction relative to recent numerical studies of the environmental dependence of halo properties.
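
    The Monte Carlo construction at the heart of the excursion set approach is short enough to sketch. Under the standard simplifying assumptions the review examines (a sharp k-space filter, hence uncorrelated walk steps, and a constant barrier delta_c), the first-crossing distribution of the walks reproduces the Press-Schechter mass function; a moving barrier B(S) for ellipsoidal collapse would simply replace the constant below.

```python
import numpy as np

def first_crossing_scales(n_walks=20000, n_steps=800, dS=0.01, delta_c=1.686):
    """Excursion set Monte Carlo: random walks of the smoothed overdensity
    delta as a function of variance S (uncorrelated steps = sharp-k filter).
    Returns the S at which each walk first crosses the constant barrier
    delta_c (NaN if it never does); the histogram of these first crossings
    is the Press-Schechter multiplicity function f(S)."""
    rng = np.random.default_rng(0)
    s_cross = np.full(n_walks, np.nan)
    delta = np.zeros(n_walks)
    alive = np.ones(n_walks, dtype=bool)
    for step in range(1, n_steps + 1):
        delta[alive] += rng.normal(0.0, np.sqrt(dS), alive.sum())
        crossed = alive & (delta >= delta_c)
        s_cross[crossed] = step * dS
        alive &= ~crossed
    return s_cross

s = first_crossing_scales()
print("fraction collapsed by S = 8:", np.mean(~np.isnan(s)))
```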

  13. Effect of Selected Modeling Assumptions on Subsurface Radionuclide Transport Projections for the Potential Environmental Management Disposal Facility at Oak Ridge, Tennessee

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Painter, Scott L.

    2016-06-28

    The Department of Energy’s Office of Environmental Management recently revised a Remedial Investigation/Feasibility Study (RI/FS) that included an analysis of subsurface radionuclide transport at a potential new Environmental Management Disposal Facility (EMDF) in East Bear Creek Valley near Oak Ridge, Tennessee. The effect of three simplifying assumptions used in the RI/FS analyses is investigated using the same subsurface pathway conceptualization but with more flexible modeling tools. Neglect of vadose zone dispersion was found to be conservative or non-conservative, depending on the retarded travel time and the half-life. For a given equilibrium distribution coefficient, a relatively narrow range of half-life was identified for which neglect of vadose zone dispersion is non-conservative and radionuclide discharge into surface water is non-negligible. However, there are two additional conservative simplifications in the reference case that compensate for the non-conservative effect of neglecting vadose zone dispersion: the use of a steady infiltration rate and vadose zone velocity, and the way equilibrium sorption is used to represent transport in the fractured material of the saturated aquifer. With more realistic representations of all three processes, the RI/FS reference case was found to either provide a reasonably good approximation to the peak concentration or to be significantly conservative (pessimistic) for all parameter combinations considered.

  14. Calculation of load distribution in stiffened cylindrical shells

    NASA Technical Reports Server (NTRS)

    Ebner, H; Koller, H

    1938-01-01

    Thin-walled shells with strong longitudinal and transverse stiffening (for example, stressed-skin fuselages and wings) may, under certain simplifying assumptions, be treated as static systems with finite redundancies. In this report the underlying basis for this method of treatment of the problem is presented and a computation procedure for stiffened cylindrical shells with curved sheet panels is indicated. A detailed discussion of the force distribution due to applied concentrated forces is given, and the discussion is illustrated by numerical examples which refer to an experimentally investigated circular cylindrical shell.

  15. Orbital geocentric oddness. (French Title: Bizarreries orbitales géocentriques)

    NASA Astrophysics Data System (ADS)

    Bassinot, E.

    2013-09-01

    The purpose of this essay is to determine the geocentric path of our superior neighbour, the planet Mars, named after the god of war. In other words, the question is: seen from our blue planet, what is the orbit of the red one? Based upon three simplifying and justified assumptions, it is proved hereunder, with a purely geometrical approach, that Mars describes a curve very close to the well-known limaçon of Pascal (Pascal's snail). The loop shown by this curve easily explains the apparently erratic behaviour of Mars.
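
    Under the essay's simplifying assumptions (circular, coplanar, uniformly traversed heliocentric orbits), the geocentric path is just the vector difference of two rotating radii. A minimal sketch, with standard round values for the orbital radii and periods:

```python
import numpy as np

R_EARTH, T_EARTH = 1.000, 1.000   # AU, years (circular, coplanar orbits)
R_MARS,  T_MARS  = 1.524, 1.881

t = np.linspace(0.0, 2.14, 2000)  # about one Earth-Mars synodic period
w_e = 2.0 * np.pi / T_EARTH
w_m = 2.0 * np.pi / T_MARS

earth = R_EARTH * np.exp(1j * w_e * t)   # heliocentric positions in the
mars  = R_MARS  * np.exp(1j * w_m * t)   # complex plane
geo   = mars - earth                     # geocentric position of Mars

# Plotting geo.real against geo.imag over one synodic period traces the
# looped, snail-like curve; the loop is the apparent retrograde motion.
```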

  16. Stress Analysis of Beams with Shear Deformation of the Flanges

    NASA Technical Reports Server (NTRS)

    Kuhn, Paul

    1937-01-01

    This report discusses the fundamental action of shear deformation of the flanges on the basis of simplifying assumptions. The theory is developed to the point of giving analytical solutions for simple cases of beams and of skin-stringer panels under axial load. Strain-gage tests on a tension panel and on a beam corresponding to these simple cases are described and the results are compared with analytical results. For wing beams, an approximate method of applying the theory is given. As an alternative, the construction of a mechanical analyzer is advocated.

  17. Aerodynamic effects of nearly uniform slipstreams on thin wings in the transonic regime

    NASA Technical Reports Server (NTRS)

    Rizk, M. H.

    1980-01-01

    A simplified model is used to describe the interaction between a propeller slipstream and a wing in the transonic regime. The undisturbed slipstream boundary is assumed to coincide with an infinite circular cylinder. The undisturbed slipstream velocity is rotational and is a function of the radius only. In general, the velocity perturbation caused by introducing a wing into the slipstream is also rotational. By making small disturbance assumptions, however, the perturbation velocity becomes nearly potential, and an approximation for the flow is obtained by solving a potential equation.

  18. Interplanetary magnetic flux - Measurement and balance

    NASA Technical Reports Server (NTRS)

    Mccomas, D. J.; Gosling, J. T.; Phillips, J. L.

    1992-01-01

    A new method for determining the approximate amount of magnetic flux in various solar wind structures in the ecliptic (and solar rotation) plane is developed using single-spacecraft measurements in interplanetary space and making certain simplifying assumptions. The method removes the effect of solar wind velocity variations and can be applied to specific, limited-extent solar wind structures as well as to long-term variations. Over the 18-month interval studied, the ecliptic plane flux of coronal mass ejections was determined to be about 4 times greater than that of HFDs.

  19. A study of trends and techniques for space base electronics

    NASA Technical Reports Server (NTRS)

    Trotter, J. D.; Wade, T. E.; Gassaway, J. D.

    1979-01-01

    The use of dry processing and alternate dielectrics for processing wafers is reported. A two-dimensional modeling program was written for the simulation of short-channel MOSFETs with nonuniform substrate doping. A key simplifying assumption used is that the majority carriers can be represented by a sheet charge at the silicon dioxide-silicon interface. When solving the current continuity equation, the program did not converge; however, a solution of the two-dimensional Poisson equation for the potential distribution was achieved. The status of other 2D MOSFET simulation programs is summarized.

  20. The effect of the behavior of an average consumer on the public debt dynamics

    NASA Astrophysics Data System (ADS)

    De Luca, Roberto; Di Mauro, Marco; Falzarano, Angelo; Naddeo, Adele

    2017-09-01

    An important issue within the present economic crisis is understanding the dynamics of the public debt of a given country, and how the behavior of average consumers and tax payers in that country affects it. Starting from a model of the average consumer behavior introduced earlier by the authors, we propose a simple model to quantitatively address this issue. The model is then studied and analytically solved under some reasonable simplifying assumptions. In this way we obtain a condition under which the public debt steadily decreases.
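
    The reduced form of such debt models is a first-order linear ODE. As a hedged illustration only (the authors derive the surplus term from their average-consumer model rather than holding it constant), with constant interest rate r and constant primary surplus s the closed-form solution is immediate:

```python
import numpy as np

def debt_path(d0, r, s, t):
    """Solution of dD/dt = r*D - s with constant r and s:
    D(t) = (D0 - s/r) * exp(r*t) + s/r.
    The debt decreases steadily exactly when s > r * D0."""
    return (d0 - s / r) * np.exp(r * t) + s / r

t = np.linspace(0.0, 30.0, 7)
print(debt_path(d0=1.0, r=0.03, s=0.04, t=t))   # s > r*D0, so D(t) falls
```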

  1. A Solution to the Cosmic Conundrum including Cosmological Constant and Dark Energy Problems

    NASA Astrophysics Data System (ADS)

    Singh, A.

    2009-12-01

    A comprehensive solution to the cosmic conundrum is presented that also resolves key paradoxes of quantum mechanics and relativity. A simple mathematical model, the Gravity Nullification model (GNM), is proposed that integrates the missing physics of the spontaneous relativistic conversion of mass to energy into the existing physics theories, specifically a simplified general theory of relativity. Mechanistic mathematical expressions are derived for a relativistic universe expansion, which predict both the observed linear Hubble expansion in the nearby universe and the accelerating expansion exhibited by the supernova observations. The integrated model addresses the key questions haunting physics and Big Bang cosmology. It also provides a fresh perspective on the misconceived birth and evolution of the universe, especially the creation and dissolution of matter. The proposed model eliminates singularities from existing models and the need for the incredible and unverifiable assumptions including the superluminous inflation scenario, multiple universes, multiple dimensions, Anthropic principle, and quantum gravity. GNM predicts the observed features of the universe without any explicit consideration of time as a governing parameter.

  2. Numerical simulation of magmatic hydrothermal systems

    USGS Publications Warehouse

    Ingebritsen, S.E.; Geiger, S.; Hurwitz, S.; Driesner, T.

    2010-01-01

    The dynamic behavior of magmatic hydrothermal systems entails coupled and nonlinear multiphase flow, heat and solute transport, and deformation in highly heterogeneous media. Thus, quantitative analysis of these systems depends mainly on numerical solution of coupled partial differential equations and complementary equations of state (EOS). The past 2 decades have seen steady growth of computational power and the development of numerical models that have eliminated or minimized the need for various simplifying assumptions. Considerable heuristic insight has been gained from process-oriented numerical modeling. Recent modeling efforts employing relatively complete EOS and accurate transport calculations have revealed dynamic behavior that was damped by linearized, less accurate models, including fluid property control of hydrothermal plume temperatures and three-dimensional geometries. Other recent modeling results have further elucidated the controlling role of permeability structure and revealed the potential for significant hydrothermally driven deformation. Key areas for future research include incorporation of accurate EOS for the complete H2O-NaCl-CO2 system, more realistic treatment of material heterogeneity in space and time, realistic description of large-scale relative permeability behavior, and intercode benchmarking comparisons. Copyright 2010 by the American Geophysical Union.

  3. Development of Curved-Plate Elements for the Exact Buckling Analysis of Composite Plate Assemblies Including Transverse Shear Effects

    NASA Technical Reports Server (NTRS)

    McGowan, David M.; Anderson, Melvin S.

    1998-01-01

    The analytical formulation of curved-plate non-linear equilibrium equations that include transverse-shear-deformation effects is presented. A unified set of non-linear strains that contains terms from both physical and tensorial strain measures is used. Using several simplifying assumptions, linearized stability equations are derived that describe the response of the plate just after bifurcation buckling occurs. These equations are then modified to allow the plate reference surface to be located a distance z_c from the centroid surface, which is convenient for modeling stiffened-plate assemblies. The implementation of the new theory into the VICONOPT buckling and vibration analysis and optimum design program code is described. Either classical plate theory (CPT) or first-order shear-deformation plate theory (SDPT) may be selected in VICONOPT. Comparisons of numerical results for several example problems with different loading states are made. Results from the new curved-plate analysis compare well with closed-form solution results and with results from known example problems in the literature. Finally, a design-optimization study of two different cylindrical shells subject to uniform axial compression is presented.

  4. Assessment of railway wagon suspension characteristics

    NASA Astrophysics Data System (ADS)

    Soukup, Josef; Skočilas, Jan; Skočilasová, Blanka

    2017-05-01

    The article deals with the assessment of railway wagon suspension characteristics. The essential characteristics of a suspension are represented by the stiffness constants of the equivalent springs and the eigenfrequencies of the oscillating movements with reference to the main central inertia axes of a vehicle. The premise of the experimental determination of these characteristics is knowledge of the position of the center of gravity and of the main central inertia moments of the vehicle frame. The vehicle frame performs a general spatial movement when the vehicle moves. An analysis of the frame movement generally arises from Euler's equations, which are commonly used to describe spherical movement. This solution is difficult, and it can be simplified by applying specific assumptions. Solutions for the eigenfrequencies and the suspension stiffness are presented in the article. The solutions are applied to railway and road vehicles under simplifying conditions. A new method for assessing these characteristics is described in the article.

  5. Role of partial miscibility on pressure buildup due to constant rate injection of CO2 into closed and open brine aquifers

    NASA Astrophysics Data System (ADS)

    Mathias, Simon A.; Gluyas, Jon G.; GonzáLez MartíNez de Miguel, Gerardo J.; Hosseini, Seyyed A.

    2011-12-01

    This work extends an existing analytical solution for pressure buildup due to CO2 injection in brine aquifers by incorporating effects associated with partial miscibility. These include evaporation of water into the CO2-rich phase, dissolution of CO2 into brine, and salt precipitation. The resulting equations are closed-form, including the locations of the associated leading and trailing shock fronts. Derivation of the analytical solution involves making a number of simplifying assumptions, including vertical pressure equilibrium, negligible capillary pressure, and constant fluid properties. The analytical solution is compared to results from TOUGH2 and found to accurately approximate the extent of the dry-out zone around the well, the resulting permeability enhancement due to residual brine evaporation, the volumetric saturation of precipitated salt, and the vertically averaged pressure distribution in both space and time for the four scenarios studied. While brine evaporation is found to have a considerable effect on pressure, the effect of CO2 dissolution is found to be small. The resulting equations remain simple to evaluate in spreadsheet software and represent a significant improvement on current methods for estimating pressure-limited CO2 storage capacity.

  6. Disentangling hot Jupiters formation location from their chemical composition

    NASA Astrophysics Data System (ADS)

    Ali-Dib, Mohamad

    2017-05-01

    We use a population synthesis model that includes pebble and gas accretion, planetary migration and a simplified chemistry scheme to study the formation of hot Jupiters. It has been proposed that these planets either originate beyond the snowline and then move inwards via disc migration, or form 'in situ' inside the snowline. The goal of this work is to verify which of these two scenarios is more compatible with pebble accretion, and whether we can distinguish observationally between them via the resulting planetary C/O ratios and core masses. Our results show that for Solar system composition, the C/O ratios will vary, but moderately, between the two populations, since a significant amount of carbon and oxygen is locked up in refractories. In this case, we find a strong correlation between the carbon and oxygen abundances and core mass. The C/O ratio variations are more pronounced in the case where we assume that all carbon and oxygen are in volatiles. On average, hot Jupiters forming 'in situ' inside the snowline will have higher C/O ratios because they accrete less water ice. However, only hot Jupiters forming in situ around stars with C/O = 0.8 can have a C/O ratio higher than unity. We finally find that even with fast pebble accretion, it is significantly easier to form hot Jupiters outside of the snowline, even if forming these 'in situ' is not impossible in the limit of the simplifying assumptions made.

  7. Testing a thermo-chemo-hydro-geomechanical model for gas hydrate-bearing sediments using triaxial compression laboratory experiments

    NASA Astrophysics Data System (ADS)

    Gupta, S.; Deusner, C.; Haeckel, M.; Helmig, R.; Wohlmuth, B.

    2017-09-01

    Natural gas hydrates are considered a potential resource for gas production on industrial scales. Gas hydrates contribute to the strength and stiffness of the hydrate-bearing sediments. During gas production, the geomechanical stability of the sediment is compromised. Due to the potential geotechnical risks and process management issues, the mechanical behavior of the gas hydrate-bearing sediments needs to be carefully considered. In this study, we describe a coupling concept that simplifies the mathematical description of the complex interactions occurring during gas production by isolating the effects of sediment deformation and hydrate phase changes. Central to this coupling concept is the assumption that the soil grains form the load-bearing solid skeleton, while the gas hydrate enhances the mechanical properties of this skeleton. We focus on testing this coupling concept in capturing the overall impact of geomechanics on gas production behavior through numerical simulation of a high-pressure isotropic compression experiment combined with methane hydrate formation and dissociation. We consider a linear-elastic stress-strain relationship because it is uniquely defined and easy to calibrate. Since, in reality, the geomechanical response of the hydrate-bearing sediment is typically inelastic and is characterized by a significant shear-volumetric coupling, we control the experiment very carefully in order to keep the sample deformations small and well within the assumptions of poroelasticity. The closely coordinated experimental and numerical procedures enable us to validate the proposed simplified geomechanics-to-flow coupling, and set an important precursor toward enhancing our coupled hydro-geomechanical hydrate reservoir simulator with more suitable elastoplastic constitutive models.

  8. Liquid-filled simplified hollow-core photonic crystal fiber

    NASA Astrophysics Data System (ADS)

    Liu, Shengnan; Gao, Wei; Li, Hongwei; Dong, Yongkang; Zhang, Hongying

    2014-12-01

    We report on a novel type of liquid-filled simplified hollow-core photonic crystal fiber (HC-PCF), and investigate its transmission properties with various filling liquids, including water, ethanol and FC-40. The loss and dispersion characteristics are calculated for different fiber parameters, including strut thickness and core diameter. The results show that low-loss windows still exist for liquid-filled simplified HC-PCFs, and that the low-loss windows and dispersions can be easily tailored by filling with different liquids. Such liquid-filled simplified HC-PCFs open up many possibilities for nonlinear fiber optics and for optical, biochemical and medical sensing.

  9. A method to assess the population-level consequences of wind energy facilities on bird and bat species: Chapter

    USGS Publications Warehouse

    Diffendorfer, James E.; Beston, Julie A.; Merrill, Matthew; Stanton, Jessica C.; Corum, Margo D.; Loss, Scott R.; Thogmartin, Wayne E.; Johnson, Douglas H.; Erickson, Richard A.; Heist, Kevin W.

    2016-01-01

    For this study, a methodology was developed for assessing impacts of wind energy generation on populations of birds and bats at regional to national scales. The approach combines existing methods in applied ecology for prioritizing species in terms of their potential risk from wind energy facilities and estimating impacts of fatalities on population status and trend caused by collisions with wind energy infrastructure. Methods include a qualitative prioritization approach, demographic models, and potential biological removal. The approach can be used to prioritize species in need of more thorough study as well as to identify species with minimal risk. However, the components of this methodology require simplifying assumptions and the data required may be unavailable or of poor quality for some species. These issues should be carefully considered before using the methodology. The approach will increase in value as more data become available and will broaden the understanding of anthropogenic sources of mortality on bird and bat populations.
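
    Of the methods listed, potential biological removal (PBR) is a single formula (Wade, 1998), so it is easy to illustrate. The population numbers below are invented for the example; only the formula itself is standard.

```python
def potential_biological_removal(n_min, r_max, f_r):
    """PBR = N_min * (0.5 * R_max) * F_r: the number of individuals that can
    be removed annually while still allowing the population to remain at or
    recover toward a sustainable level. N_min is a minimum (conservative)
    population estimate, R_max the maximum per-capita growth rate, and
    F_r in (0, 1] a precautionary recovery factor."""
    return n_min * 0.5 * r_max * f_r

# Hypothetical bat population: N_min = 100,000, R_max = 0.1, F_r = 0.5
print(potential_biological_removal(100_000, 0.1, 0.5), "individuals per year")
```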

  10. Magnetohydrodynamic and gasdynamic theories for planetary bow waves

    NASA Technical Reports Server (NTRS)

    Spreiter, J. R.; Stahara, S. S.

    1984-01-01

    The observed properties of bow waves and the associated plasma flows are outlined, along with those features identified that can be described by a continuum magnetohydrodynamic flow theory as opposed to a more detailed multicomponent particle and field plasma theory. The primary objectives are to provide an account of the fundamental concepts and current status of the magnetohydrodynamic and gas dynamic theories for solar wind flow past planetary bodies. This includes a critical examination of: (1) the fundamental assumptions of the theories; (2) the various simplifying approximations introduced to obtain tractable mathematical problems; (3) the limitations they impose on the results; and (4) the relationship between the results of the simpler gas dynamic-frozen field theory and the more accurate but less completely worked out magnetohydrodynamic theory. Representative results of the various theories are presented and compared. A number of deficiencies, ambiguities, and suggestions for improvements are discussed, and several significant extensions of the theory required to provide comparable results for all planets, their satellites, and comets are noted.

  11. Thermohydrological conditions and silica redistribution near high-level nuclear wastes emplaced in saturated geological formations

    NASA Astrophysics Data System (ADS)

    Verma, A.; Pruess, K.

    1988-02-01

Evaluation of the thermohydrological conditions near high-level nuclear waste packages is needed for the design of the waste canister and for overall repository design and performance assessment. Most available studies in this area have assumed that the hydrologic properties of the host rock are not changed in response to the thermal, mechanical, or chemical effects caused by waste emplacement. However, the ramifications of this simplifying assumption have not been substantiated. We have studied dissolution and precipitation of silica in liquid-saturated hydrothermal flow systems, including changes in formation porosity and permeability. Using numerical simulation, we compare predictions of thermohydrological conditions with and without inclusion of silica redistribution effects. Two cases were studied: a canister-scale problem and a repository-wide thermal convection problem; different pore models were employed for the permeable medium (fractures with uniform or nonuniform cross sections). We find that silica redistribution under water-saturated conditions does not have a sizeable effect on host rock and canister temperatures, pore pressures, or flow velocities.

  12. Methodology for Computational Fluid Dynamic Validation for Medical Use: Application to Intracranial Aneurysm.

    PubMed

    Paliwal, Nikhil; Damiano, Robert J; Varble, Nicole A; Tutino, Vincent M; Dou, Zhongwang; Siddiqui, Adnan H; Meng, Hui

    2017-12-01

Computational fluid dynamics (CFD) is a promising tool to aid in clinical diagnoses of cardiovascular diseases. However, it uses assumptions that simplify the complexities of the real cardiovascular flow. Given the high stakes in the clinical setting, it is critical to quantify the effect of these assumptions on CFD simulation results. Existing CFD validation approaches, however, do not quantify the error in simulation results due to the CFD solver's modeling assumptions; instead, they directly compare CFD simulation results against validation data. Thus, to quantify the accuracy of a CFD solver, we developed a validation methodology that calculates the CFD model error (arising from modeling assumptions). Our methodology identifies independent error sources in CFD and validation experiments, and calculates the model error by parsing out other sources of error inherent in simulation and experiments. To demonstrate the method, we simulated the flow field of a patient-specific intracranial aneurysm (IA) in the commercial CFD software STAR-CCM+. Particle image velocimetry (PIV) provided validation datasets for the flow field on two orthogonal planes. The average model error in the STAR-CCM+ solver was 5.63 ± 5.49% along the intersecting validation line of the orthogonal planes. Furthermore, we demonstrated that our validation method is superior to existing validation approaches by applying three representative existing validation techniques to our CFD and experimental dataset and comparing the validation results. Our validation methodology offers a streamlined workflow to extract the "true" accuracy of a CFD solver.

  13. Solubility of lovastatin in a family of six alcohols: Ethanol, 1-propanol, 1-butanol, 1-pentanol, 1-hexanol, and 1-octanol.

    PubMed

    Nti-Gyabaah, J; Chmielowski, R; Chan, V; Chiew, Y C

    2008-07-09

Accurate experimental determination of the solubility of active pharmaceutical ingredients (APIs) in solvents, and its correlation for solubility prediction, is essential for rapid design and optimization of isolation, purification, and formulation processes in the pharmaceutical industry. An efficient, material-conserving analytical method with an in-line reversed-phase HPLC separation protocol has been developed to measure the equilibrium solubility of lovastatin in ethanol, 1-propanol, 1-butanol, 1-pentanol, 1-hexanol, and 1-octanol between 279 and 313 K. The fusion enthalpy ΔH_fus, melting point temperature T_m, and differential molar heat capacity ΔC_P were determined by differential scanning calorimetry (DSC) to be 43,136 J/mol, 445.5 K, and 255 J/(mol·K), respectively. In order to use the regular solution equation, simplifying assumptions have been made concerning ΔC_P, specifically ΔC_P = 0 or ΔC_P = ΔS. In this study, we examined the extent to which these assumptions influence the magnitude of the ideal solubility of lovastatin, and determined that both assumptions underestimate it. The solubility data were used with the calculated ideal solubility to obtain activity coefficients, which were then fitted to the van't Hoff-like regular solution equation. Examination of the plots indicated that both assumptions give erroneous excess enthalpies of solution, H^∞, and hence thermodynamically inconsistent activity coefficients. The order of increasing ideality, or solubility, of lovastatin was 1-butanol > 1-propanol > 1-pentanol > 1-hexanol > 1-octanol.

  14. The Valuation of Scientific and Technical Experiments

    NASA Technical Reports Server (NTRS)

    Williams, F. E.

    1972-01-01

    Rational selection of scientific and technical experiments for space missions is studied. Particular emphasis is placed on the assessment of value or worth of an experiment. A specification procedure is outlined and discussed for the case of one decision maker. Experiments are viewed as multi-attributed entities, and a relevant set of attributes is proposed. Alternative methods of describing levels of the attributes are proposed and discussed. The reasonableness of certain simplifying assumptions such as preferential and utility independence is explored, and it is tentatively concluded that preferential independence applies and utility independence appears to be appropriate.

  15. Uncertainty about fundamentals and herding behavior in the FOREX market

    NASA Astrophysics Data System (ADS)

    Kaltwasser, Pablo Rovira

    2010-03-01

    It is traditionally assumed in finance models that the fundamental value of assets is known with certainty. Although this is an appealing simplifying assumption it is by no means based on empirical evidence. A simple heterogeneous agent model of the exchange rate is presented. In the model, traders do not observe the true underlying fundamental exchange rate and as a consequence they base their trades on beliefs about this variable. Despite the fact that only fundamentalist traders operate in the market, the model belongs to the heterogeneous agent literature, as traders have different beliefs about the fundamental rate.

  16. Impact of cell size on inventory and mapping errors in a cellular geographic information system

    NASA Technical Reports Server (NTRS)

    Wehde, M. E. (Principal Investigator)

    1979-01-01

The author has identified the following significant results. The effect of grid position was found to be insignificant for maps but highly significant for isolated mapping units. A modelable relationship between mapping error and cell size was observed for the map segment analyzed. Map data structure, and the impact of cell size on that structure, were analyzed with an interboundary distance distribution approach. The existence of a model allowing prediction of mapping error based on map structure was hypothesized, and two generations of models were tested under simplifying assumptions.

  17. Ferromagnetic effects for nanofluid venture through composite permeable stenosed arteries with different nanosize particles

    NASA Astrophysics Data System (ADS)

    Akbar, Noreen Sher; Mustafa, M. T.

    2015-07-01

In the present article, ferromagnetic field effects on blood flow containing copper nanoparticles through composite permeable stenosed arteries are discussed. Copper-nanoparticle blood flow with water as the base fluid and different nanoparticle sizes has not been explored to date. The equations for the Cu-water nanofluid are developed for the first time in the literature and simplified using the long-wavelength and low-Reynolds-number assumptions. Exact solutions have been evaluated for the velocity, pressure gradient, solid volume fraction of the nanoparticles, and temperature profile. The effects of various flow parameters on the flow and heat transfer characteristics are examined.

  18. Thermal effectiveness of multiple shell and tube pass TEMA E heat exchangers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pignotti, A.; Tamborenea, P.I.

    1988-02-01

The thermal effectiveness of a TEMA E shell-and-tube heat exchanger, with one shell pass and an arbitrary number of tube passes, is determined under the usual simplifying assumptions of perfect transverse mixing of the shell fluid, no phase change, and temperature independence of the heat capacity rates and the heat transfer coefficient. A purely algebraic solution is obtained for the effectiveness as a function of the heat capacity rate ratio and the number of heat transfer units. The case with M shell passes and N tube passes is easily expressed in terms of the single-shell-pass case.
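    The textbook effectiveness-NTU closed forms make the "purely algebraic" character of this solution concrete. A minimal sketch, assuming the standard single-shell-pass expression and an equal NTU split across M identical shell passes:

```python
import math

def effectiveness_1shell(ntu, c):
    """One TEMA E shell pass (2, 4, ... tube passes): mixed shell fluid,
    constant properties, heat capacity rate ratio c = Cmin/Cmax."""
    s = math.sqrt(1.0 + c * c)
    e = math.exp(-ntu * s)
    return 2.0 / ((1.0 + c) + s * (1.0 + e) / (1.0 - e))

def effectiveness_mshell(ntu_total, c, m):
    """M identical shell passes in series; total NTU split equally."""
    e1 = effectiveness_1shell(ntu_total / m, c)
    if abs(c - 1.0) < 1e-12:
        return m * e1 / (1.0 + (m - 1.0) * e1)
    r = ((1.0 - e1 * c) / (1.0 - e1)) ** m
    return (r - 1.0) / (r - c)

print(effectiveness_mshell(3.0, 0.6, 2))  # e.g. two shell passes, NTU = 3
```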

  19. Generalization of low pressure, gas-liquid, metastable sound speed to high pressures

    NASA Technical Reports Server (NTRS)

    Bursik, J. W.; Hall, R. M.

    1981-01-01

    A theory is developed for isentropic metastable sound propagation in high pressure gas-liquid mixtures. Without simplification, it also correctly predicts the minimum speed for low pressure air-water measurements where other authors are forced to postulate isothermal propagation. This is accomplished by a mixture heat capacity ratio which automatically adjusts from its single phase values to approximately the isothermal value of unity needed for the minimum speed. Computations are made for the pure components parahydrogen and nitrogen, with emphasis on the latter. With simplifying assumptions, the theory reduces to a well known approximate formula limited to low pressure.
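    The "well known approximate formula limited to low pressure" is presumably of the classical Wood type, in which the mixture compressibility is the volume-weighted sum of the component compressibilities. A minimal sketch under that assumption (not the paper's generalized theory):

```python
import math

def wood_sound_speed(alpha, rho_g, a_g, rho_l, a_l):
    """Classic low-pressure (Wood) mixture sound speed for gas void fraction alpha."""
    rho_m = alpha * rho_g + (1.0 - alpha) * rho_l
    compressibility = alpha / (rho_g * a_g**2) + (1.0 - alpha) / (rho_l * a_l**2)
    return 1.0 / math.sqrt(rho_m * compressibility)

# air-water at ~1 atm: the mixture speed dips far below both pure-component speeds
print(wood_sound_speed(0.5, 1.2, 340.0, 1000.0, 1480.0))  # ~24 m/s
```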

  20. Ferrofluids: Modeling, numerical analysis, and scientific computation

    NASA Astrophysics Data System (ADS)

    Tomas, Ignacio

This dissertation presents some developments in the numerical analysis of partial differential equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the micropolar model proposed by R.E. Rosensweig. The Micropolar Navier-Stokes Equations (MNSE) are a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much bigger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point of this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable, and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving on to the much more complex Rosensweig model, we provide a definition (approximation) for the effective magnetizing field h and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we set up the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ. We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for the Rosensweig model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from the Rosensweig model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model. For a simplified version of this model and the corresponding numerical scheme we prove, in addition to stability, convergence and existence of solutions as a by-product. Throughout this dissertation, we provide numerical experiments, not only to validate mathematical results, but also to help the reader gain a qualitative understanding of the PDE models analyzed (the MNSE, the Rosensweig model, and the two-phase model). In addition, we provide computational experiments to illustrate the potential of these simple models and their ability to capture basic phenomenological features of ferrofluids, such as the Rosensweig instability in the case of the two-phase model. In this respect, we highlight the incisive numerical experiments with the two-phase model illustrating the critical role of the demagnetizing field in reproducing physically realistic behavior of ferrofluids.

  1. A general numerical model for wave rotor analysis

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel W.

    1992-01-01

    Wave rotors represent one of the promising technologies for achieving very high core temperatures and pressures in future gas turbine engines. Their operation depends upon unsteady gas dynamics and as such, their analysis is quite difficult. This report describes a numerical model which has been developed to perform such an analysis. Following a brief introduction, a summary of the wave rotor concept is given. The governing equations are then presented, along with a summary of the assumptions used to obtain them. Next, the numerical integration technique is described. This is an explicit finite volume technique based on the method of Roe. The discussion then focuses on the implementation of appropriate boundary conditions. Following this, some results are presented which first compare the numerical approximation to the governing differential equations and then compare the overall model to an actual wave rotor experiment. Finally, some concluding remarks are presented concerning the limitations of the simplifying assumptions and areas where the model may be improved.

  2. Refracted arrival waves in a zone of silence from a finite thickness mixing layer.

    PubMed

    Suzuki, Takao; Lele, Sanjiva K

    2002-02-01

Refracted arrival waves which propagate in the zone of silence of a finite thickness mixing layer are analyzed using geometrical acoustics in two dimensions. Here, two simplifying assumptions are made: (i) the mean flow field is transversely sheared, and (ii) the mean velocity and temperature profiles approach the free-stream conditions exponentially. Under these assumptions, ray trajectories are analytically solved, and a formula for acoustic pressure amplitude in the far field is derived in the high-frequency limit. This formula is compared with the existing theory based on a vortex sheet corresponding to the low-frequency limit. The analysis covers the dependence on the Mach number as well as on the temperature ratio. The results show that both limits have some qualitative similarities, but the amplitude in the zone of silence at high frequencies is proportional to ω^(-1/2), while that at low frequencies is proportional to ω^(-3/2), ω being the angular frequency of the source.

  3. Tests for the extraction of Boer-Mulders functions

    NASA Astrophysics Data System (ADS)

    Christova, Ekaterina; Leader, Elliot; Stoilov, Michail

    2017-12-01

At present, the Boer-Mulders (BM) functions are extracted from asymmetry data using the simplifying assumption of their proportionality to the Sivers functions for each quark flavour. Here we present two independent tests for this assumption. We subject COMPASS data on semi-inclusive deep inelastic scattering on the ⟨cos ϕ_h⟩, ⟨cos 2ϕ_h⟩, and Sivers asymmetries to these tests. Our analysis shows that the tests are satisfied with the available data if the proportionality constant is the same for all quark flavours, which does not correspond to the flavour dependence used in existing analyses. This suggests that the published information on the BM functions may be unreliable. The ⟨cos ϕ_h⟩ and ⟨cos 2ϕ_h⟩ asymmetries also receive contributions from the, in principle, calculable Cahn effect. We succeed in extracting the Cahn contributions from experiment (we believe for the first time) and compare them with their calculated values, with interesting implications.

  4. Moisture Risk in Unvented Attics Due to Air Leakage Paths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prahl, D.; Shaffer, M.

    2014-11-01

IBACOS completed an initial analysis of moisture damage potential in an unvented attic insulated with closed-cell spray polyurethane foam. To complete this analysis, the research team collected field data, used computational fluid dynamics to quantify the airflow rates through individual airflow (crack) paths, simulated hourly flow rates through the leakage paths with CONTAM software, correlated the CONTAM flow rates with indoor humidity ratios from Building Energy Optimization software, and used Wärme und Feuchte instationär Pro two-dimensional modeling to determine the moisture content of the building materials surrounding the cracks. Given the number of simplifying assumptions and numerical models associated with this analysis, the results indicate that localized damage due to high moisture content of the roof sheathing is possible under very low airflow rates. Reducing the number of assumptions and approximations through field studies and laboratory experiments would be valuable to understand the real-world moisture damage potential in unvented attics.

  6. Calculation of wall effects of flow on a perforated wall with a code of surface singularities

    NASA Astrophysics Data System (ADS)

    Piat, J. F.

    1994-07-01

Simplifying assumptions are inherent in the analytic method previously used for the determination of wall interferences on a model in a wind tunnel. To eliminate these assumptions, a new code based on the vortex lattice method was developed. It is suitable for processing any shape of test section with limited areas of porous wall, the characteristics of which can be nonlinear. Calculations of wall effects have been performed for the S3MA wind tunnel, whose rectangular test section (0.78 m x 0.56 m) is fitted with two or four perforated walls. Wall porosity factors have been adjusted to obtain the best fit between measured and computed pressure distributions on the test section walls. The code was checked by measuring nearly equal drag coefficients for a model tested in the S3MA wind tunnel (after wall corrections) and in the S2MA wind tunnel, whose test section is seven times larger (negligible wall corrections).

  7. An Object-Oriented Python Implementation of an Intermediate-Level Atmospheric Model

    NASA Astrophysics Data System (ADS)

    Lin, J. W.

    2008-12-01

    The Neelin-Zeng Quasi-equilibrium Tropical Circulation Model (QTCM1) is a Fortran-based intermediate-level atmospheric model that includes simplified treatments of several physical processes, including a GCM-like convective scheme and a land-surface scheme with representations of different surface types, evaporation, and soil moisture. This model has been used in studies of the Madden-Julian oscillation, ENSO, and vegetation-atmosphere interaction effects on climate. Through the assumption of convective quasi-equilibrium in the troposphere, the QTCM1 is able to include full nonlinearity, resolve baroclinic disturbances, and generate a reasonable climatology, all at low computational cost. One year of simulation on a PC at 5.625 × 3.75 degree longitude-latitude resolution takes under three minutes of wall-clock time. The Python package qtcm implements the QTCM1 in a mixed-language environment that retains the speed of compiled Fortran while providing the benefits of Python's object-oriented framework and robust suite of utilities and datatypes. We describe key programming constructs used to create this modeling environment: the decomposition of model runs into Python objects, providing methods so visualization tools are attached to model runs, and the use of Python's mutable datatypes (lists and dictionaries) to implement the "run list" entity, which enables total runtime control of subroutine execution order and content. The result is an interactive modeling environment where the traditional sequence of "hypothesis → modeling → visualization and analysis" is opened up and made nonlinear and flexible. In this environment, science tasks such as parameter-space exploration and testing alternative parameterizations can be easily automated, without the need for multiple versions of the model code interacting with a bevy of makefiles and shell scripts. The environment also simplifies interfacing of the atmospheric model to other models (e.g., hydrologic models, statistical models) and analysis tools. The tools developed for this package can be adapted to create similar environments for hydrologic models.
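    A minimal sketch of the "run list" idea described above, using ordinary Python lists and dictionaries to control subroutine execution order at runtime; the routine names here are hypothetical illustrations, not the actual qtcm API:

```python
# Hypothetical illustration: the model's step sequence is an ordinary Python
# list of callables, so it can be inspected, reordered, or pruned at runtime.
def advection(state): state["t"] = state.get("t", 0.0) + 1.0   # placeholder steps
def convection(state): state["q"] = state.get("q", 0.0) + 0.1
def land_surface(state): state["soil"] = 0.5

run_list = [advection, convection, land_surface]   # default execution order

def step(state, run_list):
    for routine in run_list:
        routine(state)

state = {}
step(state, run_list)

# Runtime control: drop the land-surface scheme and re-run, no recompilation.
run_list_no_land = [r for r in run_list if r is not land_surface]
step(state, run_list_no_land)
print(state)
```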

  8. Control-oriented modeling and adaptive backstepping control for a nonminimum phase hypersonic vehicle.

    PubMed

    Ye, Linqi; Zong, Qun; Tian, Bailing; Zhang, Xiuyun; Wang, Fang

    2017-09-01

In this paper, the nonminimum phase problem of a flexible hypersonic vehicle is investigated. The main challenge of nonminimum phase behavior is that it prevents the application of dynamic inversion methods to nonlinear control design. To solve this problem, we investigate the relationship between nonminimum phase behavior and backstepping control, finding that a stable nonlinear controller can be obtained by changing the control loop on the basis of backstepping control. By extending the control loop to cover the internal dynamics, the internal states are directly controlled by the inputs and simultaneously serve as virtual controls for the external states, making it possible to guarantee output tracking as well as internal stability. Then, based on the extended control loop, a simplified control-oriented model is developed to enable the application of the adaptive backstepping method. It simplifies the design process and relaxes some limitations caused by direct use of the unsimplified control-oriented model. Next, under proper assumptions, asymptotic stability is proved for constant commands, while bounded stability is proved for varying commands. The proposed method is compared with approximate backstepping control and dynamic surface control and is shown to have superior tracking accuracy as well as robustness in the simulation results. This paper may also provide beneficial guidance for the control design of other complex systems.

  9. Observation of radiation damage induced by single-ion hits at the heavy ion microbeam system

    NASA Astrophysics Data System (ADS)

    Kamiya, Tomihiro; Sakai, Takuro; Hirao, Toshio; Oikawa, Masakazu

    2001-07-01

A single-ion hit system combined with the JAERI heavy-ion microbeam system can be applied to observe individual phenomena induced by interactions between high-energy ions and a semiconductor device, using a technique that measures the pulse height of transient current (TC) signals. The reduction of the TC pulse height for a Si PIN photodiode was measured under irradiation of 15 MeV Ni ions onto various micron-sized areas in the diode. Data containing the damage effects of these irradiations were analyzed by least-squares fitting using a Weibull distribution function. Changes of the scale and shape parameters as functions of the width of the irradiation areas led us to the assumption that charge collection in a diode has a lateral extent on the micron level, larger than the roughly 1 μm spatial resolution of the microbeam. Numerical simulations of these measurements were made with a simplified two-dimensional model based on this assumption using a Monte Carlo method. Calculated data reproducing the pulse-height reductions by single-ion irradiations were analyzed using the same function as for the measurements. The result of this analysis, which shows the same tendency in the change of parameters as the measurements, seems to support our assumption.

  10. A mathematics for medicine: The Network Effect

    PubMed Central

    West, Bruce J.

    2014-01-01

    The theory of medicine and its complement systems biology are intended to explain the workings of the large number of mutually interdependent complex physiologic networks in the human body and to apply that understanding to maintaining the functions for which nature designed them. Therefore, when what had originally been made as a simplifying assumption or a working hypothesis becomes foundational to understanding the operation of physiologic networks it is in the best interests of science to replace or at least update that assumption. The replacement process requires, among other things, an evaluation of how the new hypothesis affects modern day understanding of medical science. This paper identifies linear dynamics and Normal statistics as being such arcane assumptions and explores some implications of their retirement. Specifically we explore replacing Normal with fractal statistics and examine how the latter are related to non-linear dynamics and chaos theory. The observed ubiquity of inverse power laws in physiology entails the need for a new calculus, one that describes the dynamics of fractional phenomena and captures the fractal properties of the statistics of physiological time series. We identify these properties as a necessary consequence of the complexity resulting from the network dynamics and refer to them collectively as The Network Effect. PMID:25538622

  11. Search algorithm complexity modeling with application to image alignment and matching

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen

    2014-05-01

    Search algorithm complexity modeling, in the form of penetration rate estimation, provides a useful way to estimate search efficiency in application domains which involve searching over a hypothesis space of reference templates or models, as in model-based object recognition, automatic target recognition, and biometric recognition. The penetration rate quantifies the expected portion of the database that must be searched, and is useful for estimating search algorithm computational requirements. In this paper we perform mathematical modeling to derive general equations for penetration rate estimates that are applicable to a wide range of recognition problems. We extend previous penetration rate analyses to use more general probabilistic modeling assumptions. In particular we provide penetration rate equations within the framework of a model-based image alignment application domain in which a prioritized hierarchical grid search is used to rank subspace bins based on matching probability. We derive general equations, and provide special cases based on simplifying assumptions. We show how previously-derived penetration rate equations are special cases of the general formulation. We apply the analysis to model-based logo image alignment in which a hierarchical grid search is used over a geometric misalignment transform hypothesis space. We present numerical results validating the modeling assumptions and derived formulation.
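    A minimal sketch of a penetration-rate estimate for a prioritized bin search, under an assumed model in which bins are scanned whole, in ranked order, until the match is found; this is an illustration, not the paper's exact derivation:

```python
# p[k] is the probability the true match lies in bin k (bins in ranked order);
# f[k] is the fraction of the database that bin k occupies.
def penetration_rate(p, f):
    assert abs(sum(p) - 1.0) < 1e-9
    cumulative, expected = 0.0, 0.0
    for pk, fk in zip(p, f):
        cumulative += fk          # database fraction scanned through this bin
        expected += pk * cumulative
    return expected

# A sharply ranked search touches far less of the database than a flat one:
print(penetration_rate([0.7, 0.2, 0.1], [1/3, 1/3, 1/3]))  # ~0.47
print(penetration_rate([1/3, 1/3, 1/3], [1/3, 1/3, 1/3]))  # ~0.67
```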

  12. Bridging Numerical and Analytical Models of Transient Travel Time Distributions: Challenges and Opportunities

    NASA Astrophysics Data System (ADS)

    Danesh Yazdi, M.; Klaus, J.; Condon, L. E.; Maxwell, R. M.

    2017-12-01

Recent advancements in analytical solutions to quantify water and solute time-variant travel time distributions (TTDs) and the related StorAge Selection (SAS) functions synthesize catchment complexity into a simplified, lumped representation. While these analytical approaches are easy and efficient in application, they require high-frequency hydrochemical data for parameter estimation. Alternatively, integrated hydrologic models coupled to Lagrangian particle-tracking approaches can directly simulate age under different catchment geometries and complexity, at greater computational expense. Here, we compare and contrast the two approaches by exploring the influence of the spatial distribution of subsurface heterogeneity, interactions between distinct flow domains, diversity of flow pathways, and recharge rate on the shape of TTDs and the related SAS functions. To this end, we use a parallel three-dimensional variably saturated groundwater model, ParFlow, to solve for the velocity fields in the subsurface. A particle-tracking model, SLIM, is then implemented to determine the age distributions at every time and domain location, facilitating a direct characterization of the SAS functions, as opposed to analytical approaches that require calibration of such functions. Steady-state results reveal that the assumption of a random age sampling scheme might only hold in the saturated region of homogeneous catchments, resulting in an exponential TTD. This assumption is, however, violated when the vadose zone is included, as the underlying SAS function gives a higher preference to older ages. The dynamical variability of the true SAS functions is also shown to be largely masked by the smooth analytical SAS functions. As the variability of subsurface spatial heterogeneity increases, the shape of the TTD approaches a power-law distribution function, including a broader distribution of shorter and longer travel times. We further found that a larger (smaller) magnitude of effective precipitation shifts the scale of the TTD towards younger (older) travel times, while the shape of the TTD remains unchanged. This work constitutes a first step in linking a numerical transport model and analytical solutions of TTDs to study their assumptions and limitations, providing physical inferences for empirical parameters.
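    For reference, the exponential TTD implied by the random age sampling assumption at steady state has mean turnover time tau = storage / discharge. A minimal sketch with illustrative values:

```python
import math

# Exponential TTD under uniform (random) age sampling at steady state:
# p(T) = exp(-T / tau) / tau, with tau = mobile storage / discharge.
def exponential_ttd(t, storage, discharge):
    tau = storage / discharge
    return math.exp(-t / tau) / tau

# e.g. 1000 mm of mobile storage drained at 2 mm/day -> tau = 500 days
print(exponential_ttd(100.0, 1000.0, 2.0))
```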

  13. Improving inference for aerial surveys of bears: The importance of assumptions and the cost of unnecessary complexity.

    PubMed

    Schmidt, Joshua H; Wilson, Tammy L; Thompson, William L; Reynolds, Joel H

    2017-07-01

Obtaining useful estimates of wildlife abundance or density requires thoughtful attention to potential sources of bias and precision, and it is widely understood that addressing incomplete detection is critical to appropriate inference. When the underlying assumptions of sampling approaches are violated, both increased bias and reduced precision of the population estimator may result. Bear (Ursus spp.) populations can be difficult to sample and are often monitored using mark-recapture distance sampling (MRDS) methods, although obtaining adequate sample sizes can be cost prohibitive. With the goal of improving inference, we examined the underlying methodological assumptions and estimator efficiency of three datasets collected under an MRDS protocol designed specifically for bears. We analyzed these data using MRDS, conventional distance sampling (CDS), and open-distance sampling approaches to evaluate the apparent bias-precision tradeoff relative to the assumptions inherent under each approach. We also evaluated the incorporation of informative priors on detection parameters within a Bayesian context. We found that the CDS estimator had low apparent bias and was more efficient than the more complex MRDS estimator. When combined with informative priors on the detection process, precision was increased by >50% compared to the MRDS approach with little apparent bias. In addition, open-distance sampling models revealed a serious violation of the assumption that all bears were available to be sampled. Inference is directly related to the underlying assumptions of the survey design and the analytical tools employed. We show that for aerial surveys of bears, avoidance of unnecessary model complexity, use of prior information, and the application of open population models can be used to greatly improve estimator performance and simplify field protocols. Although we focused on distance sampling-based aerial surveys for bears, the general concepts we addressed apply to a variety of wildlife survey contexts.
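    For reference, the conventional distance sampling estimator favored above rests on a fitted detection function. A minimal sketch with the common half-normal form and hypothetical parameter values:

```python
import math

# Half-normal detection function g(x) = exp(-x^2 / (2 sigma^2)); the effective
# strip half-width is mu = integral of g from 0 to infinity = sigma * sqrt(pi/2).
def density_estimate(n_detections, line_length_km, sigma_km):
    mu = sigma_km * math.sqrt(math.pi / 2.0)       # effective strip half-width
    return n_detections / (2.0 * mu * line_length_km)

# 40 groups seen along 500 km of transect, sigma = 0.3 km (hypothetical values)
print(density_estimate(40, 500.0, 0.3))            # groups per km^2
```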

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kozluk, M.J.; Vijay, D.K.

Postulated catastrophic rupture of high-energy piping systems is the fundamental criterion used for the safety design basis of both light and heavy water nuclear generating stations. Historically, the criterion has been applied by assuming a nonmechanistic instantaneous double-ended guillotine rupture of the largest diameter pipes inside of containment. Nonmechanistic means that the assumption of an instantaneous guillotine rupture has not been based on stresses in the pipe, failure mechanisms, the toughness of the piping material, or the dynamics of the ruptured pipe ends as they separate. This postulated instantaneous double-ended guillotine rupture of a pipe was a convenient simplifying assumption that resulted in a conservative accident scenario. This conservative accident scenario has now become entrenched as the design basis accident for containment design, shutdown system design, and emergency fuel cooling system design, and for establishing environmental qualification temperature and pressure conditions. The requirement to address dynamic effects associated with the postulated pipe rupture subsequently evolved. The dynamic effects include potential missiles, pipe whipping, blowdown jets, and thermal-hydraulic transients. Recent advances in fracture mechanics research have demonstrated that certain pipes under specific conditions cannot crack in ways that result in an instantaneous guillotine rupture. Canadian utilities are now using mechanistic fracture mechanics and leak-before-break assessments on a case-by-case basis, in limited applications, to support licensing cases which seek exemption from the need to consider the various dynamic effects associated with postulated instantaneous catastrophic rupture of high-energy piping systems inside and outside of containment.

  15. Analysis of an algorithm for distributed recognition and accountability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ko, C.; Frincke, D.A.; Goan, T. Jr.

    1993-08-01

Computer and network systems are vulnerable to attack. Abandoning the existing huge infrastructure of possibly insecure computer and network systems is impossible, and replacing them with totally secure systems may not be feasible or cost-effective. A common element in many attacks is that a single user will often attempt to intrude upon multiple resources throughout a network. Detecting the attack can become significantly easier by compiling and integrating evidence of such intrusion attempts across the network rather than attempting to assess the situation from the vantage point of only a single host. To solve this problem, we suggest an approach for distributed recognition and accountability (DRA), which consists of algorithms that "process," at a central location, distributed and asynchronous "reports" generated by computers (or a subset thereof) throughout the network. Our highest-priority objectives are to observe the ways in which an individual moves around in a network of computers, including changing user names to possibly hide his or her true identity, and to associate all activities of multiple instances of the same individual with the same network-wide user. We present the DRA algorithm and a sketch of its proof under an initial set of simplifying, albeit realistic, assumptions. Later, we relax these assumptions to accommodate pragmatic aspects such as missing or delayed "reports," clock skew, tampered "reports," etc. We believe that such algorithms will have widespread applications in the future, particularly in intrusion-detection systems.

  16. Quantifying uncertainty in geoacoustic inversion. II. Application to broadband, shallow-water data.

    PubMed

    Dosso, Stan E; Nielsen, Peter L

    2002-01-01

    This paper applies the new method of fast Gibbs sampling (FGS) to estimate the uncertainties of seabed geoacoustic parameters in a broadband, shallow-water acoustic survey, with the goal of interpreting the survey results and validating the method for experimental data. FGS applies a Bayesian approach to geoacoustic inversion based on sampling the posterior probability density to estimate marginal probability distributions and parameter covariances. This requires knowledge of the statistical distribution of the data errors, including both measurement and theory errors, which is generally not available. Invoking the simplifying assumption of independent, identically distributed Gaussian errors allows a maximum-likelihood estimate of the data variance and leads to a practical inversion algorithm. However, it is necessary to validate these assumptions, i.e., to verify that the parameter uncertainties obtained represent meaningful estimates. To this end, FGS is applied to a geoacoustic experiment carried out at a site off the west coast of Italy where previous acoustic and geophysical studies have been performed. The parameter uncertainties estimated via FGS are validated by comparison with: (i) the variability in the results of inverting multiple independent data sets collected during the experiment; (ii) the results of FGS inversion of synthetic test cases designed to simulate the experiment and data errors; and (iii) the available geophysical ground truth. Comparisons are carried out for a number of different source bandwidths, ranges, and levels of prior information, and indicate that FGS provides reliable and stable uncertainty estimates for the geoacoustic inverse problem.
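    The simplifying error model described above leads to a closed-form maximum-likelihood estimate of the data variance. A minimal sketch, assuming independent, identically distributed Gaussian errors:

```python
import numpy as np

# Under iid Gaussian errors, the ML variance is sigma^2 = ||d_obs - d_pred||^2 / N,
# and the log-likelihood at that estimate takes a simple closed form.
def ml_variance(d_obs, d_pred):
    r = d_obs - d_pred
    return float(np.dot(r, r)) / r.size

def log_likelihood(d_obs, d_pred):
    n = d_obs.size
    s2 = ml_variance(d_obs, d_pred)
    return -0.5 * n * (np.log(2.0 * np.pi * s2) + 1.0)

d_obs = np.array([1.0, 1.2, 0.9, 1.1])
d_pred = np.array([1.0, 1.0, 1.0, 1.0])
print(ml_variance(d_obs, d_pred), log_likelihood(d_obs, d_pred))
```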

  17. Incorporation of memory effects in coarse-grained modeling via the Mori-Zwanzig formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Zhen; Bian, Xin; Karniadakis, George Em, E-mail: george-karniadakis@brown.edu

    2015-12-28

The Mori-Zwanzig formalism for coarse-graining a complex dynamical system typically introduces memory effects. The Markovian assumption of delta-correlated fluctuating forces is often employed to simplify the formulation of coarse-grained (CG) models and numerical implementations. However, when the time scales of a system are not clearly separated, the memory effects become strong and the Markovian assumption becomes inaccurate. To this end, we incorporate memory effects into CG modeling by preserving non-Markovian interactions between CG variables, and the memory kernel is evaluated directly from microscopic dynamics. For a specific example, molecular dynamics (MD) simulations of star polymer melts are performed while the corresponding CG system is defined by grouping many bonded atoms into single clusters. Then, the effective interactions between CG clusters as well as the memory kernel are obtained from the MD simulations. The constructed CG force field with a memory kernel leads to a non-Markovian dissipative particle dynamics (NM-DPD). Quantitative comparisons between the CG models with Markovian and non-Markovian approximations indicate that including the memory effects using NM-DPD yields similar results as the Markovian-based DPD if the system has clear time scale separation. However, for systems with small separation of time scales, NM-DPD can reproduce correct short-time properties that are related to how the system responds to high-frequency disturbances, which cannot be captured by the Markovian-based DPD model.
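    A minimal sketch of a generalized Langevin equation with an exponential memory kernel K(t) = (gamma/tau) * exp(-t/tau), integrated through the standard Markovian embedding with an auxiliary variable; the parameter values are illustrative, and this is a toy model rather than the paper's NM-DPD implementation:

```python
import math, random

# Markovian embedding of the GLE m*dv/dt = -int K(t-s) v(s) ds + F(t):
# an auxiliary variable z carries the memory force, with noise amplitude
# chosen to satisfy the fluctuation-dissipation relation <F(t)F(t')> = kT*K.
def simulate(m=1.0, gamma=1.0, tau=0.5, kT=1.0, dt=1e-3, steps=10000):
    v, z = 0.0, 0.0
    amp = math.sqrt(2.0 * kT * gamma) / tau
    for _ in range(steps):
        v += (z / m) * dt                                   # m dv/dt = z
        z += (-z / tau - (gamma / tau) * v) * dt \
             + amp * math.sqrt(dt) * random.gauss(0.0, 1.0)  # Euler-Maruyama
    return v

print(simulate())
```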

  18. Insights into Fourier Synthesis and Analysis: Part 2--A Simplified Mathematics.

    ERIC Educational Resources Information Center

    Moore, Guy S. M.

    1988-01-01

    Introduced is an analysis of a waveform into its Fourier components. Topics included are simplified analysis of a square waveform, a triangular waveform, half-wave rectified alternating current (AC), and impulses. Provides the mathematical expression and simplified analysis diagram of each waveform. (YP)

  19. 48 CFR 3409.570 - Certification at or below the simplified acquisition threshold.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... the simplified acquisition threshold. 3409.570 Section 3409.570 Federal Acquisition Regulations System... threshold. By accepting any contract, including orders against any Schedule or Government-wide Acquisition Contract (GWAC), with the Department at or below the simplified acquisition threshold: (a) The contractor...

  20. Unique Results and Lessons Learned from the TSS Missions

    NASA Technical Reports Server (NTRS)

    Stone, Nobie H.

    2016-01-01

In 1924, Irving Langmuir and H. M. Mott-Smith published a theoretical model for the complex plasma sheath phenomenon in which they identified some very special cases that greatly simplify the sheath and allow a closed solution to the problem. The most widely used application is the electrostatic, or "Langmuir," probe in laboratory plasma. Although the Langmuir probe is physically simple (a biased wire), the theory describing its functional behavior and its current-voltage characteristic is extremely complex, and accordingly a number of assumptions and approximations are used in the Langmuir-Mott-Smith (LMS) model. These simplifications correspondingly place limits on the model's range of application. Adapting the LMS model to real-life conditions is the subject of numerous papers and dissertations. The Orbit-Motion-Limited (OML) model that is widely used today is one of these adaptations and is a convenient means of calculating sheath effects. The OML equation for electron current collection by a positively biased body is simply I ≈ A × j_e0 × (2/√π) × φ^(1/2), where A is the area of the body and φ is the electric potential on the body with respect to the plasma (normalized by the electron temperature). Since the Langmuir probe is a simple biased wire immersed in plasma, it is particularly tempting to use the OML equation in calculating the characteristics of the long, highly biased wires of an Electric Sail in the solar wind plasma. However, in order to arrive at the OML equation, a number of additional simplifying assumptions and approximations (beyond those made by Langmuir and Mott-Smith) are necessary. The OML equation is a good approximation when all conditions are met, but it would appear that the Electric Sail problem lies outside its limits of applicability.
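    A minimal sketch of the OML estimate quoted above, applied to an Electric-Sail-like wire; the plasma and geometry values are illustrative only, and, as the abstract argues, the estimate may not be valid in this regime:

```python
import math

# OML electron current to a positively biased body:
# I ~ A * j_e0 * (2/sqrt(pi)) * sqrt(e*phi / (k*T_e)), where j_e0 is the
# random thermal electron current density n*e*sqrt(kT/(2*pi*m)).
E_CHARGE = 1.602e-19   # C
K_BOLTZ  = 1.381e-23   # J/K
M_ELEC   = 9.109e-31   # kg

def oml_current(area_m2, n_e, T_e, phi_volts):
    j_e0 = n_e * E_CHARGE * math.sqrt(K_BOLTZ * T_e / (2.0 * math.pi * M_ELEC))
    chi = E_CHARGE * phi_volts / (K_BOLTZ * T_e)    # normalized potential
    return area_m2 * j_e0 * (2.0 / math.sqrt(math.pi)) * math.sqrt(chi)

# 1 km of 0.1 mm radius wire; n_e = 7e6 m^-3, T_e = 1.5e5 K, +6 kV bias
area = 2.0 * math.pi * 1e-4 * 1000.0
print(oml_current(area, 7e6, 1.5e5, 6000.0), "A")   # ~1e-5 A
```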

  1. Experimental quantification of the fluid dynamics in blood-processing devices through 4D-flow imaging: A pilot study on a real oxygenator/heat-exchanger module.

    PubMed

    Piatti, Filippo; Palumbo, Maria Chiara; Consolo, Filippo; Pluchinotta, Francesca; Greiser, Andreas; Sturla, Francesco; Votta, Emiliano; Siryk, Sergii V; Vismara, Riccardo; Fiore, Gianfranco Beniamino; Lombardi, Massimo; Redaelli, Alberto

    2018-02-08

The performance of blood-processing devices largely depends on the associated fluid dynamics, which hence represents a key aspect in their design and optimization. To this aim, two approaches are currently adopted: computational fluid dynamics, which yields highly resolved three-dimensional data but relies on simplifying assumptions, and in vitro experiments, which typically involve direct video acquisition of the flow field and provide 2D data only. We propose a novel method that exploits space- and time-resolved magnetic resonance imaging (4D-flow) to quantify the complex 3D flow field in blood-processing devices and to overcome these limitations. We tested our method on a real device that integrates an oxygenator and a heat exchanger. A dedicated mock loop was implemented, and novel 4D-flow sequences with sub-millimetric spatial resolution and region-dependent velocity encodings were defined. Automated in-house software was developed to quantify the complex 3D flow field within the different regions of the device: region-dependent flow rates, pressure drops, paths of the working fluid, and wall shear stresses were computed. Our analysis highlighted the effects of fine geometrical features of the device on the local fluid dynamics, which would be unlikely to be observed by current in vitro approaches. Also, the effects of non-idealities on the flow field distribution were captured, thanks to the absence of the simplifying assumptions that typically characterize numerical models. To the best of our knowledge, our approach is the first of its kind and could be extended to the analysis of a broad range of clinically relevant devices.

  2. Study on low intensity aeration oxygenation model and optimization for shallow water

    NASA Astrophysics Data System (ADS)

    Chen, Xiao; Ding, Zhibin; Ding, Jian; Wang, Yi

    2018-02-01

Aeration/oxygenation is an effective measure for improving self-purification capacity in shallow water treatment, but high energy consumption, high noise, and expensive management have restrained the development and application of this process. Based on two-film theory, a theoretical model consisting of three-dimensional partial differential equations of aeration in shallow water is established. In order to simplify the equations, basic assumptions of gas-liquid mass transfer in the vertical direction and concentration diffusion in the horizontal direction are proposed based on engineering practice, and are tested against simulated gas holdup obtained by modeling the gas-liquid two-phase flow in an aeration tank under low-intensity conditions. Based on the basic assumptions and the theory of shallow permeability, the three-dimensional partial differential equations are simplified and a calculation model for low-intensity aeration oxygenation is obtained. The model is verified by comparison with an aeration experiment. The conclusions are as follows: (1) the calculation model of gas-liquid mass transfer in the vertical direction and concentration diffusion in the horizontal direction reflects the aeration process well; (2) under low-intensity conditions, long-term aeration and oxygenation is theoretically feasible for enhancing the self-purification capacity of water bodies; (3) for the same total aeration intensity, the effect of multipoint distributed aeration on the diffusion of oxygen concentration in the horizontal direction is pronounced; (4) in shallow water treatment, reducing the volume of aeration equipment through miniaturization, arraying, low intensity, and mobility can overcome the problems of high energy consumption, large size, and noise, and provides a good reference.
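    The two-film starting point reduces, for a single well-mixed volume, to the familiar first-order transfer law dC/dt = kLa * (Cs - C). A minimal sketch with illustrative parameter values:

```python
import math

# Closed-form solution of dC/dt = kLa * (Cs - C):
# C(t) = Cs - (Cs - C0) * exp(-kLa * t). Values are illustrative.
def dissolved_oxygen(t_hours, kla_per_hour=0.4, c_sat=9.1, c0=3.0):
    return c_sat - (c_sat - c0) * math.exp(-kla_per_hour * t_hours)

for t in (0.0, 2.0, 6.0, 12.0):
    print(t, round(dissolved_oxygen(t), 2))   # mg/L, approaching saturation
```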

  3. Combined inverse-forward artificial neural networks for fast and accurate estimation of the diffusion coefficients of cartilage based on multi-physics models.

    PubMed

    Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A

    2016-09-06

Analytical and numerical methods have been used to extract essential engineering parameters such as elastic modulus, Poisson's ratio, permeability, and diffusion coefficient from experimental data in various types of biological tissues. The major limitation associated with analytical techniques is that they are often applicable only to problems with simplified assumptions. Numerical multi-physics methods, on the other hand, minimize the simplifying assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs) and enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficient of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variation during the training process. Determining the required level of stochastic variation is performed by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined together, forward-inverse ANNs enable computationally inexperienced users to obtain accurate and fast estimates of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined by direct scanning of the parameter space as the optimization approach, and it is shown that both approaches yield comparable results.

  4. An "age"-structured model of hematopoietic stem cell organization with application to chronic myeloid leukemia.

    PubMed

    Roeder, Ingo; Herberg, Maria; Horn, Matthias

    2009-04-01

Previously, we have modeled hematopoietic stem cell organization by a stochastic, single cell-based approach. Applications to different experimental systems demonstrated that this model consistently explains a broad variety of in vivo and in vitro data. A major advantage of the agent-based model (ABM) is the representation of heterogeneity within the hematopoietic stem cell population. However, this advantage comes at the price of time-consuming simulations if the systems become large. One example in this respect is the modeling of disease and treatment dynamics in patients with chronic myeloid leukemia (CML), where the realistic number of individual cells to be considered exceeds 10^6. To overcome this deficiency, without losing the representation of the inherent heterogeneity of the stem cell population, we here propose to approximate the ABM by a system of partial differential equations (PDEs). The major benefit of such an approach is its independence from the size of the system. Although this mean field approach includes a number of simplifying assumptions compared to the ABM, it retains the key structure of the model including the "age"-structure of stem cells. We show that the PDE model qualitatively and quantitatively reproduces the results of the agent-based approach.
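    A minimal sketch of the model class invoked above: a generic "age"-structured PDE, du/dt + du/da = r(a)*u, discretized with first-order upwinding in age. This is a toy illustration, not the paper's CML system:

```python
import numpy as np

# Toy "age"-structured transport: cells advect in age at unit speed while
# growing at an age-dependent net rate r(a); first-order upwind in age.
na, da = 200, 0.01
dt = 0.5 * da                          # CFL-stable for unit advection speed
a = np.linspace(0.0, na * da, na)
r = 0.1 * np.ones_like(a)              # illustrative net growth rate
u = np.exp(-((a - 0.5) / 0.1) ** 2)    # initial age distribution

for _ in range(200):                   # advance to t = 1.0
    flux = np.diff(u, prepend=0.0)     # upwind difference, zero influx at a=0
    u = u - (dt / da) * flux + dt * r * u

print(u.sum() * da)                    # total cell mass after advection+growth
```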

  5. A Summary of Revisions Applied to a Turbulence Response Analysis Method for Flexible Aircraft Configurations

    NASA Technical Reports Server (NTRS)

    Funk, Christie J.; Perry, Boyd, III; Silva, Walter A.; Newman, Brett

    2014-01-01

A software program and associated methodology to study gust loading on aircraft exists for a class of geometrically simplified flexible configurations. This program consists of a simple aircraft response model with two rigid and three flexible symmetric degrees of freedom and allows for the calculation of various airplane responses due to a discrete one-minus-cosine gust as well as continuous turbulence. Simplifications, assumptions, and opportunities for potential improvements pertaining to the existing software program are first identified; then a revised version of the original software tool is developed with improved methodology to include more complex geometries, additional excitation cases, and additional output data, so as to provide a more useful and precise tool for gust load analysis. In order to improve the original software program and enhance its usefulness, a wing control surface and a horizontal tail control surface are added, an extended application of the discrete one-minus-cosine gust input is employed, a supplemental continuous turbulence spectrum is implemented, and a capability to animate the total vehicle deformation response to gust inputs is included. These revisions and enhancements are implemented, and an analysis of the results is used to validate the modifications.

  6. Generalized SAMPLE SIZE Determination Formulas for Investigating Contextual Effects by a Three-Level Random Intercept Model.

    PubMed

    Usami, Satoshi

    2017-03-01

Behavioral and psychological researchers have shown strong interest in investigating contextual effects (i.e., the influences of combinations of individual- and group-level predictors on individual-level outcomes). The present research provides generalized formulas for determining the sample size needed to investigate contextual effects according to the desired level of statistical power as well as the width of the confidence interval. These formulas are derived within a three-level random intercept model that includes one predictor/contextual variable at each level, so as to simultaneously cover the various kinds of contextual effects in which researchers may be interested. The relative influences of the indices included in the formulas on the standard errors of contextual effect estimates are investigated, with the aim of further simplifying sample size determination procedures. In addition, simulation studies are performed to investigate the finite-sample behavior of the calculated statistical power, showing that estimated sample sizes based on the derived formulas can be both positively and negatively biased, due to complex effects of unreliability of contextual variables, multicollinearity, and violation of the assumption of known variances. Thus, it is advisable to compare estimated sample sizes under various specifications of the indices and to evaluate their potential bias, as illustrated in the example.
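    For orientation, the familiar single-level normal-approximation search that such generalized formulas extend can be written in a few lines. A minimal sketch with hypothetical inputs; the paper's formulas additionally involve multilevel design quantities (cluster sizes, intraclass correlations, and so on):

```python
from math import sqrt
from statistics import NormalDist

# Smallest n whose two-sided z-test reaches the target power for effect delta,
# with standard error SE(n) = se1 / sqrt(n). Normal approximation throughout.
def required_n(delta, se1, alpha=0.05, target_power=0.80, n_max=10**6):
    z = NormalDist()
    z_crit = z.inv_cdf(1.0 - alpha / 2.0)
    for n in range(2, n_max):
        power = 1.0 - z.cdf(z_crit - abs(delta) / (se1 / sqrt(n)))
        if power >= target_power:
            return n
    return None

print(required_n(delta=0.3, se1=1.0))   # ~88, matching (z_a/2 + z_b)^2 / delta^2
```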

  7. Modeling Neutral Densities Downstream of a Gridded Ion Thruster

    NASA Technical Reports Server (NTRS)

    Soulas, George C.

    2010-01-01

    The details of a model for determining the neutral density downstream of a gridded ion thruster are presented. An investigation of the possible sources of neutrals emanating from and surrounding a NEXT ion thruster determined that the most significant contributors to the downstream neutral density include discharge chamber neutrals escaping through the perforated grids, neutrals escaping from the neutralizer, and vacuum facility background neutrals. For the neutral flux through the grids, near- and far-field equations are presented for rigorously determining the neutral density downstream of a cylindrical aperture. These equations are integrated into a spherically-domed convex grid geometry with a hexagonal array of apertures for determining neutral densities downstream of the ion thruster grids. The neutrals escaping from an off-center neutralizer are also modeled assuming diffuse neutral emission from the neutralizer keeper orifice. Finally, the effect of the surrounding vacuum facility neutrals is included and assumed to be constant. The model is used to predict the neutral density downstream of a NEXT ion thruster with and without neutralizer flow and a vacuum facility background pressure. The impacts of past simplifying assumptions for predicting downstream neutral densities are also examined for a NEXT ion thruster.

  8. Simplified failure sequence evaluation of reactor pressure vessel head corroding in-core instrumentation assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McVicker, J.P.; Conner, J.T.; Hasrouni, P.N.

    1995-11-01

    In-Core Instrumentation (ICI) assemblies located on a Reactor Pressure Vessel Head have a history of boric acid leakage. The acid tends to corrode the nuts and studs which fasten the flanges of the assembly, thereby compromising the assembly's structural integrity. This paper provides a simplified practical approach to determining the likelihood of an undetected progressing assembly stud deterioration, which would lead to a catastrophic loss of reactor coolant. The structural behavior of the In-Core Instrumentation flanged assembly is modeled using an elastic composite section assumption, with the studs transmitting tension and the pressure-sealing gasket experiencing compression. Using the above technique, one can calculate the flange relative deflection and the consequent coolant loss flow rate, as well as the stress in any stud. A solved real-life example develops the expected failure sequence and discusses the exigency of leak detection for safe shutdown. In the particular case of Calvert Cliffs Nuclear Power Plant (CCNPP), it is concluded that leak detection occurs before catastrophic failure of the ICI flange assembly.

  9. Shot-by-shot Spectrum Model for Rod-pinch, Pulsed Radiography Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, William Monford

    A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. The resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. “Goodness of fit” is compared with output from the LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays (“MCNPX”) model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. In conclusion, improvements to the model, specifically for application to other geometries, are discussed.
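
    The abstract does not quote the model's equations, so the following is only a hedged stand-in: a thick-target Kramers-type estimate that, at each time sample, weights a bremsstrahlung number spectrum with endpoint energy eV(t) by the instantaneous current, then integrates over the measured pulse. The waveforms and target Z below are illustrative, not Cygnus data.

    ```python
    import numpy as np

    def shot_spectrum(t, v_kv, i_ka, e_grid_kev, z=74):
        """Time-integrated Kramers-law spectrum (arbitrary units) from V(t), I(t):
        dN/dE ~ Z * I * (eV - E) / E for E < eV(t), summed over the pulse."""
        spectrum = np.zeros_like(e_grid_kev)
        for vk, ik, w in zip(v_kv, i_ka, np.gradient(t)):
            if vk <= 0.0 or ik <= 0.0:
                continue
            below = e_grid_kev < vk  # only photons below the instantaneous endpoint
            spectrum[below] += w * z * ik * (vk - e_grid_kev[below]) / e_grid_kev[below]
        return spectrum

    # Illustrative 50 ns pulse ramping to 2 MV / 60 kA and back
    t = np.linspace(0.0, 50e-9, 200)
    shape = np.sin(np.pi * t / t[-1]) ** 2
    s = shot_spectrum(t, 2000.0 * shape, 60.0 * shape,
                      e_grid_kev=np.linspace(10.0, 2000.0, 400))
    ```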

  10. Shot-by-shot Spectrum Model for Rod-pinch, Pulsed Radiography Machines

    DOE PAGES

    Wood, William Monford

    2018-02-07

    A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. The resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. “Goodness of fit” is compared with output from the LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays (“MCNPX”) model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. In conclusion, improvements to the model, specifically for application to other geometries, are discussed.

  11. Steady flow model user's guide

    NASA Astrophysics Data System (ADS)

    Doughty, C.; Hellstrom, G.; Tsang, C. F.; Claesson, J.

    1984-07-01

    Sophisticated numerical models that solve the coupled mass and energy transport equations for nonisothermal fluid flow in a porous medium were used to match analytical results and field data for aquifer thermal energy storage (ATES) systems. As an alternative approach to the ATES problem, the Steady Flow Model (SFM), a simplified but fast numerical model, was developed. A steady, purely radial flow field is prescribed in the aquifer and incorporated into the heat transport equation, which is then solved numerically. While the radial flow assumption limits the range of ATES systems that can be studied using the SFM, it greatly simplifies use of this code. The preparation of input is quite simple compared to that for a sophisticated coupled mass and energy model, and the cost of running the SFM is far lower. The simple flow field allows use of a special calculational mesh that eliminates the numerical dispersion usually associated with the numerical solution of convection problems. The problem is defined, the algorithm used to solve it is outlined, and the input and output for the SFM are described.

  12. A simplified building airflow model for agent concentration prediction.

    PubMed

    Jacques, David R; Smith, David A

    2010-11-01

    A simplified building airflow model is presented that can be used to predict the spread of a contaminant agent from a chemical or biological attack. If the dominant means of agent transport throughout the building is an air-handling system operating at steady-state, a linear time-invariant (LTI) model can be constructed to predict the concentration in any room of the building as a result of either an internal or external release. While the model does not capture weather-driven and other temperature-driven effects, it is suitable for concentration predictions under average daily conditions. The model is easily constructed using information that should be accessible to a building manager, supplemented with assumptions based on building codes and standard air-handling system design practices. The results of the model are compared with a popular multi-zone model for a simple building and are demonstrated for building examples containing one or more air-handling systems. The model can be used for rapid concentration prediction to support low-cost placement strategies for chemical and biological detection sensors.
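
    A well-mixed multi-zone mass balance is one standard way to obtain the LTI structure described above; a minimal sketch with hypothetical volumes and interzonal flows (supply and exhaust to the outside are omitted for brevity):

    ```python
    import numpy as np
    from scipy.linalg import expm

    # q[i, j] is the airflow from zone i to zone j (m^3/s); vol in m^3.
    vol = np.array([50.0, 80.0, 60.0])
    q = np.array([[0.0, 0.4, 0.1],
                  [0.2, 0.0, 0.3],
                  [0.3, 0.1, 0.0]])

    # Mass balance V_i dc_i/dt = sum_j q_ji c_j - c_i sum_j q_ij  =>  dc/dt = A c
    a = (q.T - np.diag(q.sum(axis=1))) / vol[:, None]

    c0 = np.array([1.0, 0.0, 0.0])   # unit release in zone 0 (mg/m^3)
    for t in (60.0, 300.0, 900.0):   # concentrations after 1, 5, and 15 minutes
        print(t, expm(a * t) @ c0)
    ```

    An internal or external release enters the same linear system as an input term B u, which is what makes rapid what-if studies for sensor placement computationally cheap.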

  13. Shot-by-shot spectrum model for rod-pinch, pulsed radiography machines

    NASA Astrophysics Data System (ADS)

    Wood, Wm M.

    2018-02-01

    A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. The resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. "Goodness of fit" is compared with output from the LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays ("MCNPX") model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. Improvements to the model, specifically for application to other geometries, are discussed.

  14. Material matters: Analysis of density uncertainty in 3D printing and its consequences for radiation oncology.

    PubMed

    Craft, Daniel F; Kry, Stephen F; Balter, Peter; Salehpour, Mohammad; Woodward, Wendy; Howell, Rebecca M

    2018-04-01

    Using 3D printing to fabricate patient-specific devices such as tissue compensators, boluses, and phantoms is inexpensive and relatively simple. However, most 3D printing materials have not been well characterized, including their radiologic tissue equivalence. The purposes of this study were to (a) determine the variance in Hounsfield Units (HU) for printed objects, (b) determine if HU varies over time, and (c) calculate the clinical dose uncertainty caused by these material variations. For a sample of 10 printed blocks each of PLA, NinjaFlex, ABS, and Cheetah, the average HU and physical density were tracked at initial printing and over the course of 5 weeks, a typical timeframe for a standard course of radiotherapy. After initial printing, half the blocks were stored in open boxes, the other half in sealed bags with desiccant. Variances in HU and density over time were evaluated for the four materials. Various clinical photon and electron beams were used to evaluate potential errors in clinical depth dose as a function of assumptions made during treatment planning. The clinical depth error was defined as the distance between the correctly calculated 90% isodose line and the 90% isodose line calculated using clinically reasonable, but simplified, assumptions. The average HU measurements of individual blocks of PLA, ABS, NinjaFlex, and Cheetah varied by as much as 121, 30, 178, and 30 HU, respectively. The HU variation over 5 weeks was much smaller for all materials. The magnitude of clinical depth errors depended strongly on the material, energy, and assumptions, but some were as large as 9.0 mm. If proper quality assurance steps are taken, 3D printed objects can be used accurately and effectively in radiation therapy. It is critically important, however, that the properties of any material being used in patient care be well understood and accounted for. © 2018 American Association of Physicists in Medicine.

  15. Ambient mass density effects on the International Space Station (ISS) microgravity experiments

    NASA Technical Reports Server (NTRS)

    Smith, O. E.; Adelfang, S. I.; Smith, R. E.

    1996-01-01

    The Marshall engineering thermosphere model was specified by NASA to be used in the design, development and testing phases of the International Space Station (ISS). The mass density is the atmospheric parameter which most affects the ISS. Under simplifying assumptions, the critical ambient neutral density required to produce one micro-g on the ISS is estimated using an atmospheric drag acceleration equation. Examples are presented for the critical density versus altitude, and for the critical density that is exceeded at least once a month and once per orbit during periods of low and high solar activity. An analysis of the ISS orbital decay is presented.
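
    The stated drag relation can be inverted for the critical density directly: setting a_drag = 0.5 * rho * v^2 * C_D * A / m equal to 1e-6 g0 gives rho_crit = 2 m (1e-6 g0) / (C_D A v^2). A back-of-the-envelope sketch with illustrative ISS-like numbers, not the paper's inputs:

    ```python
    G0 = 9.80665     # standard gravity, m/s^2
    m = 420000.0     # spacecraft mass, kg (illustrative)
    area = 2500.0    # projected ram area, m^2 (illustrative)
    cd = 2.2         # drag coefficient (illustrative)
    v = 7660.0       # orbital speed, m/s

    rho_crit = 2.0 * m * 1e-6 * G0 / (cd * area * v * v)
    print(f"critical density: {rho_crit:.2e} kg/m^3")  # ~2.6e-11 kg/m^3
    ```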

  16. Influence of thermal and velocity slip on the peristaltic flow of Cu-water nanofluid with magnetic field

    NASA Astrophysics Data System (ADS)

    Akbar, Noreen Sher

    2016-03-01

    The peristaltic flow of an incompressible viscous fluid containing copper nanoparticles in an asymmetric channel is discussed with thermal and velocity slip effects. The peristaltic flow of copper nanoparticles with water as the base fluid has not been explored so far. The equations for the proposed fluid model are developed for the first time in the literature and simplified using the long-wavelength and low-Reynolds-number assumptions. Exact solutions have been calculated for the velocity, pressure gradient, solid volume fraction of the nanoparticles, and temperature profile. The influence of various flow parameters on the flow and heat transfer characteristics is examined.

  17. Metachronal wave analysis for non-Newtonian fluid under thermophoresis and Brownian motion effects

    NASA Astrophysics Data System (ADS)

    Shaheen, A.; Nadeem, S.

    This paper analyses a mathematical model of ciliary motion in an annulus. The effects of convective heat transfer and nanoparticles are taken into account. The governing equations of a Jeffrey six-constant fluid, along with heat and nanoparticle transport, are modelled and then simplified using the long-wavelength and low-Reynolds-number assumptions. The reduced equations are solved with the help of the homotopy perturbation method. The obtained expressions for the velocity, temperature, and nanoparticle concentration profiles are plotted, and the impact of various physical parameters is investigated for different peristaltic waves. Streamlines are also plotted in the last part of the paper.

  18. Magnetic field effects for copper suspended nanofluid venture through a composite stenosed arteries with permeable wall

    NASA Astrophysics Data System (ADS)

    Akbar, Noreen Sher; Butt, Adil Wahid

    2015-05-01

    In the present paper, magnetic field effects for copper nanoparticles in blood flow through a composite stenosis in arteries with permeable walls are discussed. Copper nanoparticles with water as the base fluid have not yet been explored for blood flow. The equations for the Cu-water nanofluid are developed for the first time in the literature and simplified using the long-wavelength and low-Reynolds-number assumptions. Exact solutions have been evaluated for the velocity, pressure gradient, solid volume fraction of the nanoparticles, and temperature profile. The effect of various flow parameters on the flow and heat transfer characteristics is examined.

  19. The span as a fundamental factor in airplane design

    NASA Technical Reports Server (NTRS)

    Lachmann, G

    1928-01-01

    Previous theoretical investigations of steady curvilinear flight did not afford a suitable criterion of "maneuverability," which is very important for judging combat, sport and stunt-flying airplanes. The idea of rolling ability, i.e., of the speed of rotation of the airplane about its X axis in rectilinear flight at constant speed and for a constant, suddenly produced deflection of the ailerons, is introduced and tested under simplified assumptions for the air-force distribution over the span. This leads to the following conclusions: the effect of the moment of inertia about the X axis is negligibly small, since the speed of rotation very quickly reaches a uniform value.

  20. Multimodal far-field acoustic radiation pattern: An approximate equation

    NASA Technical Reports Server (NTRS)

    Rice, E. J.

    1977-01-01

    The far-field sound radiation theory for a circular duct was studied for both single mode and multimodal inputs. The investigation was intended to develop a method to determine the acoustic power produced by turbofans as a function of mode cut-off ratio. With reasonable simplifying assumptions the single mode radiation pattern was shown to be reducible to a function of mode cut-off ratio only. With modal cut-off ratio as the dominant variable, multimodal radiation patterns can be reduced to a simple explicit expression. This approximate expression provides excellent agreement with an exact calculation of the sound radiation pattern using equal acoustic power per mode.

  1. Actin-based propulsion of a microswimmer.

    PubMed

    Leshansky, A M

    2006-07-01

    A simple hydrodynamic model of actin-based propulsion of microparticles in dilute cell-free cytoplasmic extracts is presented. Under the basic assumption that actin polymerization at the particle surface acts as a force dipole, pushing apart the load and the free (nonanchored) actin tail, the propulsive velocity of the microparticle is determined as a function of the tail length, porosity, and particle shape. The anticipated velocities of the cargo displacement and the rearward motion of the tail are in good agreement with recently reported results of biomimetic experiments. A more detailed analysis of the particle-tail hydrodynamic interaction is presented and compared to the prediction of the simplified model.

  2. Theoretical analysis of oxygen diffusion at startup in an alkali metal heat pipe with gettered alloy walls

    NASA Technical Reports Server (NTRS)

    Tower, L. K.

    1973-01-01

    The diffusion of oxygen into, or out of, a gettered alloy exposed to oxygenated alkali liquid metal coolant, a situation arising in some high temperature heat transfer systems, was analyzed. The relation between the diffusion process and the thermochemistry of oxygen in the alloy and in the alkali metal was developed by making several simplifying assumptions. The treatment is therefore theoretical in nature. However, a practical example pertaining to the startup of a heat pipe with walls of T-111, a tantalum alloy, and lithium working fluid illustrates the use of the figures contained in the analysis.

  3. Combined effects of heat and mass transfer to magneto hydrodynamics oscillatory dusty fluid flow in a porous channel

    NASA Astrophysics Data System (ADS)

    Govindarajan, A.; Vijayalakshmi, R.; Ramamurthy, V.

    2018-04-01

    The main aim of this article is to study the combined effects of heat and mass transfer on radiative magnetohydrodynamic (MHD) oscillatory flow of an optically thin dusty fluid in a saturated porous medium channel. Based on certain assumptions, the momentum, energy, and concentration equations are obtained. The governing equations are non-dimensionalised, simplified, and solved analytically. Closed-form analytical solutions for the velocity, temperature, and concentration profiles are obtained. Numerical computations are presented graphically to show the salient features of various physical parameters. The shear stress, the rate of heat transfer, and the rate of mass transfer are also presented graphically.

  4. Efficiency gain from elastic optical networks

    NASA Astrophysics Data System (ADS)

    Morea, Annalisa; Rival, Olivier

    2011-12-01

    We compare the cost-efficiency of optical networks based on mixed datarates (10, 40, and 100 Gb/s) and datarate-elastic technologies. A European backbone network is examined under various traffic assumptions (volume of transported data per demand and total number of demands) to better understand the impact of traffic characteristics on cost-efficiency. Network dimensioning is performed for static and restorable networks (resilient to one-link failure). In this paper we investigate the trade-offs between the price of interfaces, reach, and reconfigurability, showing that elastic solutions can be more cost-efficient than mixed-rate solutions because of the better compatibility between different datarates, the increased reach of channels, and simplified wavelength allocation.

  5. A Module Language for Typing by Contracts

    NASA Technical Reports Server (NTRS)

    Glouche, Yann; Talpin, Jean-Pierre; LeGuernic, Paul; Gautier, Thierry

    2009-01-01

    Assume-guarantee reasoning is a popular and expressive paradigm for modular and compositional specification of programs. It is becoming a fundamental concept in some computer-aided design tools for embedded system design. In this paper, we elaborate foundations for contract-based embedded system design by proposing a general-purpose module language, based on a Boolean algebra, that allows contracts to be defined. In this framework, contracts are used to negotiate the correctness of assumptions made about the definition of a component at the point where it is used and to provide guarantees to its environment. We illustrate this presentation with the specification of a simplified 4-stroke engine model.

  6. Centrifugal inertia effects in two-phase face seal films

    NASA Technical Reports Server (NTRS)

    Basu, P.; Hughes, W. F.; Beeler, R. M.

    1987-01-01

    A simplified, semianalytical model has been developed to analyze the effect of centrifugal inertia in two-phase face seals. The model is based on the assumption of isothermal flow through the seal, but at an elevated temperature, and takes into account heat transfer and boiling. Using this model, seal performance curves are obtained with water as the working fluid. It is shown that the centrifugal inertia of the fluid reduces the load-carrying capacity dramatically at high speeds and that operational instability exists under certain conditions. While an all-liquid seal may be starved at speeds higher than a 'critical' value, leakage always occurs under boiling conditions.

  7. 48 CFR 436.602-5 - Short selection process for contracts not to exceed the simplified acquisition threshold.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 4 2012-10-01 2012-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 436.602-5 Section 436.602-5 Federal... to exceed the simplified acquisition threshold. The HCA may include either or both procedures in FAR...

  8. On the combinatorics of sparsification.

    PubMed

    Huang, Fenix Wd; Reidys, Christian M

    2012-10-22

    We study the sparsification of dynamic-programming-based folding algorithms for RNA structures. Sparsification is a method that significantly improves the computation of minimum free energy (mfe) RNA structures. We provide a quantitative analysis of the sparsification of a particular decomposition rule, Λ∗. This rule splits an interval of RNA secondary and pseudoknot structures of fixed topological genus. Key to quantifying sparsification is the size of the so-called candidate sets. Here we assume mfe-structures to be specifically distributed (see Assumption 1) within arbitrary and irreducible RNA secondary and pseudoknot structures of fixed topological genus. We then present a combinatorial framework which allows, by means of probabilities of irreducible substructures, the computation of the expectation of the Λ∗-candidate set w.r.t. a uniformly random input sequence. We compute these expectations for arc-based energy models via energy-filtered generating functions (GF) for RNA secondary structures as well as RNA pseudoknot structures. Furthermore, for RNA secondary structures we also analyze a simplified loop-based energy model. Our combinatorial analysis is then compared to the expected number of Λ∗-candidates obtained from folding mfe-structures. In the case of mfe-folding of RNA secondary structures with a simplified loop-based energy model, our results imply that sparsification provides a significant, constant improvement of 91% (theory), to be compared with a 96% reduction (experimental, simplified arc-based model). However, we do not observe a linear-factor improvement. Finally, in the case of the "full" loop-energy model we can report a reduction of 98% (experiment). Sparsification was initially attributed a linear-factor improvement. This conclusion was based on the so-called polymer-zeta property, which stems from interpreting polymer chains as self-avoiding walks. Subsequent findings, however, reveal that the O(n) improvement is not correct. The combinatorial analysis presented here shows that, assuming a specific distribution (see Assumption 1) of mfe-structures within irreducible and arbitrary structures, the expected number of Λ∗-candidates is Θ(n²). However, the constant reduction is quite significant, being in the range of 96%. We furthermore show an analogous result for the sparsification of the Λ∗-decomposition rule for RNA pseudoknotted structures of genus one. Finally, we observe that the effect of sparsification is sensitive to the employed energy model.

  9. Multiscale Molecular Dynamics Model for Heterogeneous Charged Systems

    NASA Astrophysics Data System (ADS)

    Stanton, L. G.; Glosli, J. N.; Murillo, M. S.

    2018-04-01

    Modeling matter across large length scales and timescales using molecular dynamics simulations poses significant challenges. These challenges are typically addressed through the use of precomputed pair potentials that depend on thermodynamic properties like temperature and density; however, many scenarios of interest involve spatiotemporal variations in these properties, and such variations can violate assumptions made in constructing these potentials, thus precluding their use. In particular, when a system is strongly heterogeneous, most of the usual simplifying assumptions (e.g., spherical potentials) do not apply. Here, we present a multiscale approach to orbital-free density functional theory molecular dynamics (OFDFT-MD) simulations that bridges atomic, interionic, and continuum length scales to allow for variations in hydrodynamic quantities in a consistent way. Our multiscale approach enables simulations on the order of micron length scales and tens of picoseconds, which exceeds current OFDFT-MD simulations by many orders of magnitude. This new capability is then used to study the heterogeneous, nonequilibrium dynamics of a heated interface characteristic of an inertial-confinement-fusion capsule containing a plastic ablator near a fuel layer composed of deuterium-tritium ice. At these scales, fundamental assumptions of continuum models are explored; features such as the separation of the momentum fields among the species and strong hydrogen jetting from the plastic into the fuel region are observed, which had previously not been seen in hydrodynamic simulations.

  10. Fuels for urban transit buses: a cost-effectiveness analysis.

    PubMed

    Cohen, Joshua T; Hammitt, James K; Levy, Jonathan I

    2003-04-15

    Public transit agencies have begun to adopt alternative propulsion technologies to reduce urban transit bus emissions associated with conventional diesel (CD) engines. Among the most popular alternatives are emission controlled diesel buses (ECD), defined here to be buses with continuously regenerating diesel particle filters burning low-sulfur diesel fuel, and buses burning compressed natural gas (CNG). This study uses a series of simplifying assumptions to arrive at first-order estimates for the incremental cost-effectiveness (CE) of ECD and CNG relative to CD. The CE ratio numerator reflects acquisition and operating costs. The denominator reflects health losses (mortality and morbidity) due to primary particulate matter (PM), secondary PM, and ozone exposure, measured as quality adjusted life years (QALYs). We find that CNG provides larger health benefits than does ECD (nine vs six QALYs annually per 1000 buses) but that ECD is more cost-effective than CNG ($270,000 per QALY for ECD vs $1.7 million to $2.4 million for CNG). These estimates are subject to much uncertainty. We identify assumptions that contribute most to this uncertainty and propose potential research directions to refine our estimates.
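
    The CE ratio itself is simple arithmetic: incremental cost divided by incremental health benefit. A toy check against the rounded figures quoted above (the incremental cost below is back-computed and purely illustrative):

    ```python
    def icer(delta_cost_usd, delta_qalys):
        """Incremental cost-effectiveness ratio: extra dollars per QALY gained."""
        return delta_cost_usd / delta_qalys

    # ECD vs CD: ~6 QALYs/yr per 1000 buses at ~$270k/QALY implies roughly
    # $1.6M/yr in incremental cost for that fleet (illustrative back-calculation).
    print(icer(1.6e6, 6.0))  # ~2.7e5 $/QALY
    ```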

  11. Measuring the diffusion of linguistic change

    PubMed Central

    Nerbonne, John

    2010-01-01

    We examine situations in which linguistic changes have probably been propagated via normal contact as opposed to via conquest, recent settlement and large-scale migration. We proceed then from two simplifying assumptions: first, that all linguistic variation is the result of either diffusion or independent innovation, and, second, that we may operationalize social contact as geographical distance. It is clear that both of these assumptions are imperfect, but they allow us to examine diffusion via the distribution of linguistic variation as a function of geographical distance. Several studies in quantitative linguistics have examined this relation, starting with Séguy (Séguy 1971 Rev. Linguist. Romane 35, 335–357), and virtually all report a sublinear growth in aggregate linguistic variation as a function of geographical distance. The literature from dialectology and historical linguistics has mostly traced the diffusion of individual features, however, so that it is sensible to ask what sort of dynamic in the diffusion of individual features is compatible with Séguy's curve. We examine some simulations of diffusion in an effort to shed light on this question. PMID:21041207

  12. Improved parameter inference in catchment models: 1. Evaluating parameter uncertainty

    NASA Astrophysics Data System (ADS)

    Kuczera, George

    1983-10-01

    A Bayesian methodology is developed to evaluate parameter uncertainty in catchment models fitted to a hydrologic response such as runoff, the goal being to improve the chance of successful regionalization. The catchment model is posed as a nonlinear regression model with stochastic errors possibly being both autocorrelated and heteroscedastic. The end result of this methodology, which may use Box-Cox power transformations and ARMA error models, is the posterior distribution, which summarizes what is known about the catchment model parameters. This can be simplified to a multivariate normal provided a linearization in parameter space is acceptable; means of checking and improving this assumption are discussed. The posterior standard deviations give a direct measure of parameter uncertainty, and study of the posterior correlation matrix can indicate what kinds of data are required to improve the precision of poorly determined parameters. Finally, a case study involving a nine-parameter catchment model fitted to monthly runoff and soil moisture data is presented. It is shown that use of ordinary least squares when its underlying error assumptions are violated gives an erroneous description of parameter uncertainty.

  13. NASA's Integrated Instrument Simulator Suite for Atmospheric Remote Sensing from Spaceborne Platforms (ISSARS) and Its Role for the ACE and GPM Missions

    NASA Technical Reports Server (NTRS)

    Tanelli, Simone; Tao, Wei-Kuo; Hostetler, Chris; Kuo, Kwo-Sen; Matsui, Toshihisa; Jacob, Joseph C.; Niamsuwam, Noppasin; Johnson, Michael P.; Hair, John; Butler, Carolyn; hide

    2011-01-01

    Forward simulation is an indispensable tool for the evaluation of precipitation retrieval algorithms as well as for studying snow/ice microphysics and their radiative properties. The main challenge of the implementation arises from the size of the problem domain. To overcome this hurdle, assumptions must be made to simplify complex cloud microphysics. It is important that these assumptions are applied consistently throughout the simulation process. ISSARS addresses this issue by providing a computationally efficient and modular framework that can integrate existing models and can be expanded for future development. ISSARS is designed to accommodate the simulation needs of the Aerosol/Clouds/Ecosystems (ACE) mission and the Global Precipitation Measurement (GPM) mission: radars, microwave radiometers, and optical instruments such as lidars and polarimeters. ISSARS's computation is performed in three stages: input reconditioning (IRM), electromagnetic properties (scattering/emission/absorption) calculation (SEAM), and instrument simulation (ISM). The computation is implemented as a web service, and its configuration can be accessed through a web-based interface.

  14. Measuring the diffusion of linguistic change.

    PubMed

    Nerbonne, John

    2010-12-12

    We examine situations in which linguistic changes have probably been propagated via normal contact as opposed to via conquest, recent settlement and large-scale migration. We proceed then from two simplifying assumptions: first, that all linguistic variation is the result of either diffusion or independent innovation, and, second, that we may operationalize social contact as geographical distance. It is clear that both of these assumptions are imperfect, but they allow us to examine diffusion via the distribution of linguistic variation as a function of geographical distance. Several studies in quantitative linguistics have examined this relation, starting with Séguy (Séguy 1971 Rev. Linguist. Romane 35, 335-357), and virtually all report a sublinear growth in aggregate linguistic variation as a function of geographical distance. The literature from dialectology and historical linguistics has mostly traced the diffusion of individual features, however, so that it is sensible to ask what sort of dynamic in the diffusion of individual features is compatible with Séguy's curve. We examine some simulations of diffusion in an effort to shed light on this question.

  15. Reviewed approach to defining the Active Interlock Envelope for Front End ray tracing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seletskiy, S.; Shaftan, T.

    To protect the NSLS-II Storage Ring (SR) components from damage from synchrotron radiation produced by insertion devices (IDs), the Active Interlock (AI) keeps the electron beam within a safe envelope (a.k.a. the Active Interlock Envelope or AIE) in the transverse phase space. The beamline Front Ends (FEs) are designed under the assumption that above a certain beam current (typically 2 mA) the ID synchrotron radiation (IDSR) fan is produced by the interlocked e-beam. These assumptions also define how the ray tracing for the FE is done. To simplify the FE ray tracing for a typical uncanted ID, it was decided to provide the Mechanical Engineering group with a single set of numbers (x, x′, y, y′) for the AIE at the center of the long (or short) ID straight section. Such a unified approach to the design of the beamline Front Ends will accelerate the design process and save valuable human resources. In this paper we describe our new approach to defining the AI envelope and provide the resulting numbers required for the design of the typical Front End.

  16. Gas Near a Wall: Shortened Mean Free Path, Reduced Viscosity, and the Manifestation of the Knudsen Layer in the Navier-Stokes Solution of a Shear Flow

    NASA Astrophysics Data System (ADS)

    Abramov, Rafail V.

    2018-06-01

    For the gas near a solid planar wall, we propose a scaling formula for the mean free path of a molecule as a function of the distance from the wall, under the assumption of a uniform distribution of the incident directions of the molecular free flight. We subsequently impose the same scaling onto the viscosity of the gas near the wall and compute the Navier-Stokes solution of the velocity of a shear flow parallel to the wall. Under the simplifying assumption of constant temperature of the gas, the velocity profile becomes an explicit nonlinear function of the distance from the wall and exhibits a Knudsen boundary layer near the wall. To verify the validity of the obtained formula, we perform the Direct Simulation Monte Carlo computations for the shear flow of argon and nitrogen at normal density and temperature. We find excellent agreement between our velocity approximation and the computed DSMC velocity profiles both within the Knudsen boundary layer and away from it.
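
    One way to realize the stated assumption numerically, though not necessarily the paper's closed-form scaling, is to average the direction-dependent free path over uniformly distributed flight directions, truncating at the wall any path that would cross it:

    ```python
    import numpy as np

    def wall_shortened_mfp(y, lam_inf, n=200001):
        """Mean free path at distance y from a planar wall under uniformly
        distributed incident directions (a numerical sketch of the stated
        assumption, not the paper's derived formula)."""
        mu = np.linspace(-1.0, 1.0, n)               # cosine of angle to wall normal
        toward = y / np.maximum(np.abs(mu), 1e-12)   # distance to wall along path
        path = np.where(mu < 0.0, np.minimum(lam_inf, toward), lam_inf)
        return path.mean()                           # isotropic average over directions

    for y in (0.1, 0.5, 1.0, 5.0):
        print(y, wall_shortened_mfp(y, lam_inf=1.0))  # recovers lam_inf for y >> 1
    ```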

  17. Space-time codependence of retinal ganglion cells can be explained by novel and separable components of their receptive fields.

    PubMed

    Cowan, Cameron S; Sabharwal, Jasdeep; Wu, Samuel M

    2016-09-01

    Reverse correlation methods such as spike-triggered averaging consistently identify the spatial center in the linear receptive fields (RFs) of retinal ganglion cells (GCs). However, the spatial antagonistic surround observed in classical experiments has proven more elusive. Tests for the antagonistic surround have heretofore relied on models that make questionable simplifying assumptions such as space-time separability and radial homogeneity/symmetry. We circumvented these, along with other common assumptions, and observed a linear antagonistic surround in 754 of 805 mouse GCs. By characterizing the RF's space-time structure, we found the overall linear RF's inseparability could be accounted for both by tuning differences between the center and surround and differences within the surround. Finally, we applied this approach to characterize spatial asymmetry in the RF surround. These results shed new light on the spatiotemporal organization of GC linear RFs and highlight a major contributor to its inseparability. © 2016 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of the American Physiological Society and The Physiological Society.

  18. The Landlab v1.0 OverlandFlow component: a Python tool for computing shallow-water flow across watersheds

    NASA Astrophysics Data System (ADS)

    Adams, Jordan M.; Gasparini, Nicole M.; Hobley, Daniel E. J.; Tucker, Gregory E.; Hutton, Eric W. H.; Nudurupati, Sai S.; Istanbulluoglu, Erkan

    2017-04-01

    Representation of flowing water in landscape evolution models (LEMs) is often simplified compared to hydrodynamic models, as LEMs make assumptions reducing physical complexity in favor of computational efficiency. The Landlab modeling framework can be used to bridge the divide between complex runoff models and more traditional LEMs, creating a new type of framework not commonly used in the geomorphology or hydrology communities. Landlab is a Python-language library that includes tools and process components that can be used to create models of Earth-surface dynamics over a range of temporal and spatial scales. The Landlab OverlandFlow component is based on a simplified inertial approximation of the shallow water equations, following the solution of de Almeida et al.(2012). This explicit two-dimensional hydrodynamic algorithm simulates a flood wave across a model domain, where water discharge and flow depth are calculated at all locations within a structured (raster) grid. Here, we illustrate how the OverlandFlow component contained within Landlab can be applied as a simplified event-based runoff model and how to couple the runoff model with an incision model operating on decadal timescales. Examples of flow routing on both real and synthetic landscapes are shown. Hydrographs from a single storm at multiple locations in the Spring Creek watershed, Colorado, USA, are illustrated, along with a map of shear stress applied on the land surface by flowing water. The OverlandFlow component can also be coupled with the Landlab DetachmentLtdErosion component to illustrate how the non-steady flow routing regime impacts incision across a watershed. The hydrograph and incision results are compared to simulations driven by steady-state runoff. Results from the coupled runoff and incision model indicate that runoff dynamics can impact landscape relief and channel concavity, suggesting that, on landscape evolution timescales, the OverlandFlow model may lead to differences in simulated topography in comparison with traditional methods. The exploratory test cases described within demonstrate how the OverlandFlow component can be used in both hydrologic and geomorphic applications.
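
    The simplified inertial flux update used by this family of solvers (a Bates-type explicit step; de Almeida et al. add a weighted average of neighboring fluxes) can be sketched in one dimension. This is a didactic stand-in, not the Landlab OverlandFlow implementation itself:

    ```python
    import numpy as np

    G = 9.81  # m/s^2

    def inertial_update(q, h, eta, dx, dt, n_mann=0.03, theta=0.7):
        """One explicit step of a 1-D simplified-inertia flux update in the
        spirit of de Almeida et al. (2012).

        q   : unit discharge at cell faces (m^2/s), length m-1
        h   : flow depth at faces (m), length m-1
        eta : water-surface elevation at cell centers (m), length m
        """
        slope = np.diff(eta) / dx            # water-surface slope at each face
        # theta-weighted average of neighboring fluxes damps the checkerboard
        # mode (periodic wrap at the ends, for brevity only)
        q_avg = theta * q + 0.5 * (1.0 - theta) * (np.roll(q, 1) + np.roll(q, -1))
        num = q_avg - G * h * dt * slope
        den = 1.0 + G * dt * n_mann**2 * np.abs(q) / np.maximum(h, 1e-6)**(7.0 / 3.0)
        return num / den
    ```

    The friction term sitting in the denominator is what keeps the explicit step stable as depths thin out, which is why this scheme routes flood waves at a cost compatible with landscape evolution timescales.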

  19. Representation of Arctic mixed-phase clouds and the Wegener-Bergeron-Findeisen process in climate models: Perspectives from a cloud-resolving study

    NASA Astrophysics Data System (ADS)

    Fan, Jiwen; Ghan, Steven; Ovchinnikov, Mikhail; Liu, Xiaohong; Rasch, Philip J.; Korolev, Alexei

    2011-01-01

    Two types of Arctic mixed-phase clouds observed during the ISDAC and M-PACE field campaigns are simulated using a 3-dimensional cloud-resolving model (CRM) with size-resolved cloud microphysics. The modeled cloud properties agree reasonably well with aircraft measurements and surface-based retrievals. Cloud properties such as the probability density function (PDF) of vertical velocity (w), cloud liquid and ice, regimes of cloud particle growth, including the Wegener-Bergeron-Findeisen (WBF) process, and the relationships among properties/processes in mixed-phase clouds are examined to gain insights for improving their representation in General Circulation Models (GCMs). The PDF of the simulated w is well represented by a Gaussian function, validating, at least for arctic clouds, the subgrid treatment used in GCMs. The PDFs of liquid and ice water contents can be approximated by Gamma functions, and a Gaussian function can describe the total water distribution, but a fixed variance assumption should be avoided in both cases. The CRM results support the assumption frequently used in GCMs that mixed phase clouds maintain water vapor near liquid saturation. Thus, ice continues to grow throughout the stratiform cloud but the WBF process occurs in about 50% of cloud volume where liquid and ice co-exist, predominantly in downdrafts. In updrafts, liquid and ice particles grow simultaneously. The relationship between the ice depositional growth rate and cloud ice strongly depends on the capacitance of ice particles. The simplified size-independent capacitance of ice particles used in GCMs could lead to large deviations in ice depositional growth.

  20. Optimum flight paths of turbojet aircraft

    NASA Technical Reports Server (NTRS)

    Miele, Angelo

    1955-01-01

    The climb of turbojet aircraft is analyzed and discussed, including the accelerations. Three particular flight performances are examined: minimum time of climb, climb with minimum fuel consumption, and steepest climb. The theoretical results obtained from a previous study are put in a form that is suitable for application under the following simplifying assumptions: the Mach number is considered an independent variable instead of the velocity; the variations of the airplane mass due to fuel consumption are disregarded; the airplane polar is assumed to be parabolic; the path curvatures and the squares of the path angles are disregarded in the projection of the equation of motion on the normal to the path; lastly, an ideal turbojet with performance independent of the velocity is assumed. The optimum Mach number for each flight condition is obtained from the solution of a sixth-order equation in which the coefficients are functions of two fundamental parameters: the ratio of minimum drag in level flight to the thrust, and the Mach number which represents the flight at constant altitude and maximum lift-drag ratio.

  1. Characterisation and calculation of nonlinear vibrations in gas foil bearing systems-An experimental and numerical investigation

    NASA Astrophysics Data System (ADS)

    Hoffmann, Robert; Liebich, Robert

    2018-01-01

    This paper presents a classification scheme for understanding the source of subharmonic vibrations in gas foil bearing (GFB) systems, which is tested experimentally and numerically. The classification is based on two cases, where an isolated system is assumed. Case 1 considers a poorly balanced rotor, which results in increased displacement during operation and interacts with the nonlinear progressive structure; it is comparable to a Duffing oscillator. In contrast, for Case 2 a well or perfectly balanced rotor is assumed; hence, the only source of nonlinear subharmonic whirling is fluid film self-excitation. Experimental tests with different unbalance levels and GFB modifications confirm these assumptions. Furthermore, simulations are able to predict the self-excitations and the synchronous and subharmonic resonances of the experimental tests. The numerical model is based on a linearised eigenvalue problem. The GFB system uses linearised stiffness and damping parameters obtained by applying a perturbation method to the Reynolds equation. The nonlinear bump structure is simplified by a link-spring model. It includes Coulomb friction effects inside the elastic corrugated structure and captures the interaction between single bumps.

  2. Dynamic Simulation of a Periodic 10 K Sorption Cryocooler

    NASA Technical Reports Server (NTRS)

    Bhandari, P.; Rodriguez, J.; Bard, S.; Wade, L.

    1994-01-01

    A transient thermal simulation model has been developed to simulate the dynamic performance of a multiple-stage 10 K sorption cryocooler for spacecraft sensor cooling applications that require periodic quick-cooldown (under 2 minutes), negligible vibration, low power consumption, and long life (5 to 10 years). The model was specifically designed to represent the Brilliant Eyes Ten-Kelvin Sorption Cryocooler Experiment (BETSCE), but it can be adapted to represent other sorption cryocooler systems as well. The model simulates the heat transfer, mass transfer, and thermodynamic processes in the cryostat and the sorbent beds for the entire refrigeration cycle, and includes the transient effects of variable hydrogen supply pressures due to expansion and overflow of hydrogen during the cooldown operation. The paper describes model limitations and simplifying assumptions, with estimates of errors induced by them, and presents comparisons of performance predictions with ground experiments. An important benefit of the model is its ability to predict performance sensitivities to variations of key design and operational parameters. The insights thus obtained are expected to lead to higher efficiencies and lower weights for future designs.

  3. Analytical evaluation of the trajectories of hypersonic projectiles launched into space

    NASA Astrophysics Data System (ADS)

    Stutz, John David

    An equation of motion has been derived that may be solved using simple analytic functions and that describes the motion of a projectile launched from the surface of the Earth into space, accounting for both Newtonian gravity and aerodynamic drag. The equation of motion is based upon the Kepler equation of motion, differential and variable transformations, the inclusion of a decaying angular momentum driving function, and appropriate simplifying assumptions. The new equation of motion is first compared to various numerical and analytical trajectory approximations in a non-rotating Earth reference frame. The modified Kepler solution is then corrected to include Earth rotation and compared to a rotating-Earth simulation. Finally, the modified equation of motion is used to predict the apogee and trajectory of projectiles launched into space by the High Altitude Research Project from 1961 to 1967. The new equation of motion allows for the rapid evaluation of projectile trajectories and intercept solutions that may be used to calculate firing solutions enabling ground-launched projectiles to intercept or rendezvous with targets in low Earth orbit, such as ballistic missiles.

  4. Partially coherent electron transport in terahertz quantum cascade lasers based on a Markovian master equation for the density matrix

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jonasson, O.; Karimi, F.; Knezevic, I.

    2016-08-01

    We derive a Markovian master equation for the single-electron density matrix, applicable to quantum cascade lasers (QCLs). The equation conserves the positivity of the density matrix, includes off-diagonal elements (coherences) as well as in-plane dynamics, and accounts for electron scattering with phonons and impurities. We use the model to simulate a terahertz-frequency QCL, and compare the results with both experiment and simulation via nonequilibrium Green's functions (NEGF). We obtain very good agreement with both experiment and NEGF when the QCL is biased for optimal lasing. For the considered device, we show that the magnitude of coherences can be a significant fraction of the diagonal matrix elements, which demonstrates their importance when describing THz QCLs. We show that the in-plane energy distribution can deviate far from a heated Maxwellian distribution, which suggests that the assumption of thermalized subbands in simplified density-matrix models is inadequate. As a result, we also show that the current density and subband occupations relax towards their steady-state values on very different time scales.
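
    The canonical positivity-preserving Markovian master equation is the Lindblad form, the generic structure such a derivation targets; a toy two-level sketch follows (the actual QCL Hamiltonian and scattering operators are device-specific and not given in the abstract):

    ```python
    import numpy as np

    def lindblad_rhs(rho, h, lops, hbar=1.0):
        """drho/dt = -i/hbar [H, rho] + sum_k (L rho L+ - 1/2 {L+ L, rho})."""
        out = -1j / hbar * (h @ rho - rho @ h)
        for l in lops:
            ld = l.conj().T
            out += l @ rho @ ld - 0.5 * (ld @ l @ rho + rho @ ld @ l)
        return out

    # Two-level toy: decay from the upper to the lower state at rate gamma
    gamma = 0.1
    h = np.diag([0.0, 1.0]).astype(complex)                  # energies, hbar units
    lower = np.sqrt(gamma) * np.array([[0.0, 1.0],
                                       [0.0, 0.0]], dtype=complex)
    rho = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)  # start in upper state

    dt = 0.01
    for _ in range(1000):                     # simple Euler integration to t = 10
        rho = rho + dt * lindblad_rhs(rho, h, [lower])
    print(np.real(np.diag(rho)))              # populations ~ [0.63, 0.37]
    ```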

  5. Two-dimensional materials as catalysts for energy conversion

    DOE PAGES

    Siahrostami, Samira; Tsai, Charlie; Karamad, Mohammadreza; ...

    2016-08-24

    Although large efforts have been dedicated to studying two-dimensional materials for catalysis, a rationalization of the associated trends in their intrinsic activity has so far been elusive. In the present work we employ density functional theory to examine a variety of two-dimensional materials, including carbon-based materials, hexagonal boron nitride (h-BN), transition metal dichalcogenides (e.g., MoS2, MoSe2), and layered oxides, to give an overview of the trends in adsorption energies. By examining key reaction intermediates relevant to the oxygen reduction and oxygen evolution reactions, we find that binding energies largely follow the linear scaling relationships observed for pure metals. This observation is important because it suggests that the same simplifying assumptions made to correlate descriptors with reaction rates in transition metal catalysts are also valid for the studied two-dimensional materials. By means of these scaling relations, for each reaction we also identify several promising candidates that are predicted to exhibit activity comparable to the state-of-the-art catalysts.

  6. The effect of small-wave modulation on the electromagnetic bias

    NASA Technical Reports Server (NTRS)

    Rodriguez, Ernesto; Kim, Yunjin; Martin, Jan M.

    1992-01-01

    The effect of the modulation of small ocean waves by large waves on the physical mechanism of the EM bias is examined by conducting a numerical scattering experiment which does not assume the applicability of geometric optics. The modulation effect of the large waves on the small waves is modeled using the principle of conservation of wave action and includes the modulation of gravity-capillary waves. The frequency dependence and magnitude of the EM bias is examined for a simplified ocean spectral model as a function of wind speed. These calculations make it possible to assess the validity of previous assumptions made in the theory of the EM bias, with respect to both scattering and hydrodynamic effects. It is found that the geometric optics approximation is inadequate for predictions of the EM bias at typical radar altimeter frequencies, while the improved scattering calculations provide a frequency dependence of the EM bias which is in qualitative agreement with observation. For typical wind speeds, the EM bias contribution due to small-wave modulation is of the same order as that due to modulation by the nonlinearities of the large-scale waves.

  7. FDTD modeling of anisotropic nonlinear optical phenomena in silicon waveguides.

    PubMed

    Dissanayake, Chethiya M; Premaratne, Malin; Rukhlenko, Ivan D; Agrawal, Govind P

    2010-09-27

    A deep insight into the inherent anisotropic optical properties of silicon is required to improve the performance of silicon-waveguide-based photonic devices. It may also lead to novel device concepts and substantially extend the capabilities of silicon photonics in the future. In this paper, for the first time to the best of our knowledge, we present a three-dimensional finite-difference time-domain (FDTD) method for modeling optical phenomena in silicon waveguides, which takes into account fully the anisotropy of the third-order electronic and Raman susceptibilities. We show that, under certain realistic conditions that prevent generation of the longitudinal optical field inside the waveguide, this model is considerably simplified and can be represented by a computationally efficient algorithm, suitable for numerical analysis of complex polarization effects. To demonstrate the versatility of our model, we study polarization dependence for several nonlinear effects, including self-phase modulation, cross-phase modulation, and stimulated Raman scattering. Our FDTD model provides a basis for a full-blown numerical simulator that is restricted neither by the single-mode assumption nor by the slowly varying envelope approximation.

  8. Direct vibro-elastography FEM inversion in Cartesian and cylindrical coordinate systems without the local homogeneity assumption

    NASA Astrophysics Data System (ADS)

    Honarvar, M.; Lobo, J.; Mohareri, O.; Salcudean, S. E.; Rohling, R.

    2015-05-01

    To produce images of tissue elasticity, the vibro-elastography technique involves applying a steady-state multi-frequency vibration to tissue, estimating displacements from ultrasound echo data, and using the estimated displacements in an inverse elasticity problem with the shear modulus spatial distribution as the unknown. In order to fully solve the inverse problem, all three displacement components are required. However, using ultrasound, the axial component of the displacement is measured much more accurately than the other directions. Therefore, simplifying assumptions must be used in this case. Usually, the equations of motion are transformed into a Helmholtz equation by assuming tissue incompressibility and local homogeneity. The local homogeneity assumption causes significant imaging artifacts in areas of varying elasticity. In this paper, we remove the local homogeneity assumption. In particular, we introduce a new finite element based direct inversion technique in which only the coupling terms in the equation of motion are ignored, so it can be used with only one component of the displacement. Both Cartesian and cylindrical coordinate systems are considered. The use of multi-frequency excitation also allows us to obtain multiple measurements and reduce artifacts in areas where the displacement of one frequency is close to zero. The proposed method was tested in simulations and experiments against a conventional approach in which local homogeneity is assumed. The results show significant improvements in elasticity imaging with the new method compared to previous methods that assume local homogeneity. For example, in simulations the contrast-to-noise ratio (CNR) for the region with the spherical inclusion increases from an average value of 1.5 to 17 after using the proposed method instead of the local inversion with the homogeneity assumption; similarly, in the prostate phantom experiment the CNR improved from an average value of 1.6 to about 20.
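
    For reference, one common CNR definition consistent with the comparison quoted above (the paper's exact formula is not given in the abstract):

    ```python
    import numpy as np

    def cnr(img, mask_in, mask_out):
        """Contrast-to-noise ratio: |mean_in - mean_out| / std_out."""
        inside, outside = img[mask_in], img[mask_out]
        return abs(inside.mean() - outside.mean()) / outside.std()

    # Toy elasticity map: stiff 20 kPa inclusion in a noisy 5 kPa background
    rng = np.random.default_rng(0)
    img = rng.normal(5.0, 1.0, (64, 64))
    img[24:40, 24:40] = rng.normal(20.0, 1.0, (16, 16))
    mask_in = np.zeros_like(img, dtype=bool)
    mask_in[24:40, 24:40] = True
    print(cnr(img, mask_in, ~mask_in))
    ```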

  9. Evaluation of a distributed catchment scale water balance model

    NASA Technical Reports Server (NTRS)

    Troch, Peter A.; Mancini, Marco; Paniconi, Claudio; Wood, Eric F.

    1993-01-01

    The validity of some of the simplifying assumptions in a conceptual water balance model is investigated by comparing simulation results from the conceptual model with simulation results from a three-dimensional physically based numerical model and with field observations. We examine, in particular, assumptions and simplifications related to water table dynamics, vertical soil moisture and pressure head distributions, and subsurface flow contributions to stream discharge. The conceptual model relies on a topographic index to predict saturation excess runoff and on Philip's infiltration equation to predict infiltration excess runoff. The numerical model solves the three-dimensional Richards equation describing flow in variably saturated porous media, and handles seepage face boundaries, infiltration excess and saturation excess runoff production, and soil driven and atmosphere driven surface fluxes. The study catchments (a 7.2 sq km catchment and a 0.64 sq km subcatchment) are located in the North Appalachian ridge and valley region of eastern Pennsylvania. Hydrologic data collected during the MACHYDRO 90 field experiment are used to calibrate the models and to evaluate simulation results. It is found that water table dynamics as predicted by the conceptual model are close to the observations in a shallow water well; a linear relationship between a topographic index and the local water table depth is therefore a reasonable assumption for catchment scale modeling. However, the hydraulic equilibrium assumption is not valid for the upper 100 cm layer of the unsaturated zone, and a conceptual model that incorporates a root zone is suggested. Furthermore, theoretical subsurface flow characteristics from the conceptual model are found to be different from field observations, numerical simulation results, and theoretical baseflow recession characteristics based on Boussinesq's groundwater equation.

  10. Impact of Moist Physics Complexity on Tropical Cyclone Simulations from the Hurricane Weather Research and Forecast System

    NASA Astrophysics Data System (ADS)

    Kalina, E. A.; Biswas, M.; Newman, K.; Grell, E. D.; Bernardet, L.; Frimel, J.; Carson, L.

    2017-12-01

    The parameterization of moist physics in numerical weather prediction models plays an important role in modulating tropical cyclone structure, intensity, and evolution. The Hurricane Weather Research and Forecast system (HWRF), the National Oceanic and Atmospheric Administration's operational model for tropical cyclone prediction, uses the Scale-Aware Simplified Arakawa-Schubert (SASAS) cumulus scheme and a modified version of the Ferrier-Aligo (FA) microphysics scheme to parameterize moist physics. The FA scheme contains a number of simplifications that allow it to run efficiently in an operational setting, which includes prescribing values for hydrometeor number concentrations (i.e., single-moment microphysics) and advecting the total condensate rather than the individual hydrometeor species. To investigate the impact of these simplifying assumptions on the HWRF forecast, the FA scheme was replaced with the more complex double-moment Thompson microphysics scheme, which individually advects cloud ice, cloud water, rain, snow, and graupel. Retrospective HWRF forecasts of tropical cyclones that occurred in the Atlantic and eastern Pacific ocean basins from 2015-2017 were then simulated and compared to those produced by the operational HWRF configuration. Both traditional model verification metrics (i.e., tropical cyclone track and intensity) and process-oriented metrics (e.g., storm size, precipitation structure, and heating rates from the microphysics scheme) will be presented and compared. The sensitivity of these results to the cumulus scheme used (i.e., the operational SASAS versus the Grell-Freitas scheme) also will be examined. Finally, the merits of replacing the moist physics schemes that are used operationally with the alternatives tested here will be discussed from a standpoint of forecast accuracy versus computational resources.

  11. Classification with spatio-temporal interpixel class dependency contexts

    NASA Technical Reports Server (NTRS)

    Jeon, Byeungwoo; Landgrebe, David A.

    1992-01-01

    A contextual classifier which can utilize both spatial and temporal interpixel dependency contexts is investigated. After spatial and temporal neighbors are defined, a general form of a maximum a posteriori spatiotemporal contextual classifier is derived. This contextual classifier is simplified under several assumptions. Joint prior probabilities of the classes of each pixel and its spatial neighbors are modeled by the Gibbs random field. The classification is performed in a recursive manner to allow a computationally efficient contextual classification. Experimental results with bitemporal TM data show significant improvement of classification accuracy over noncontextual pixelwise classifiers. This spatiotemporal contextual classifier should find use in many applications of remote sensing, especially when classification accuracy is important.
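
    To make the idea of a spatial class-dependency prior concrete, here is a minimal sketch of a MAP-style update with a Potts-like neighbor-agreement term, in the style of iterated conditional modes; the 4-neighbourhood, the smoothness weight beta, and the random likelihoods are illustrative assumptions, not the paper's exact simplifications.

```python
import numpy as np

# Sketch: one ICM pass for MAP classification with a Potts-style spatial
# prior. Likelihoods, beta and the 4-neighbourhood are illustrative.

rng = np.random.default_rng(0)
H, W, K = 20, 20, 3
log_lik = np.log(rng.dirichlet(np.ones(K), size=(H, W)))  # per-pixel class log-likelihoods
labels = log_lik.argmax(axis=2)                           # noncontextual initialisation
beta = 1.5                                                # spatial smoothness weight

def icm_pass(labels, log_lik, beta):
    out = labels.copy()
    for i in range(H):
        for j in range(W):
            # count 4-neighbourhood agreement for each candidate class
            nbrs = [out[x, y] for x, y in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                    if 0 <= x < H and 0 <= y < W]
            agree = np.array([sum(n == k for n in nbrs) for k in range(K)])
            out[i, j] = np.argmax(log_lik[i, j] + beta * agree)  # MAP update
    return out

labels = icm_pass(labels, log_lik, beta)
print("contextually smoothed label map:\n", labels)
```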

  12. Ionic transport in high-energy-density matter

    DOE PAGES

    Stanton, Liam G.; Murillo, Michael S.

    2016-04-08

    Ionic transport coefficients for dense plasmas have been numerically computed using an effective Boltzmann approach. Here, we developed a simplified effective potential approach that yields accurate fits for all of the relevant cross sections and collision integrals. These results have been validated with molecular-dynamics simulations for self-diffusion, interdiffusion, viscosity, and thermal conductivity. Molecular dynamics has also been used to examine the underlying assumptions of the Boltzmann approach through a categorization of behaviors of the velocity autocorrelation function in the Yukawa phase diagram. By using a velocity-dependent screening model, we examine the role of dynamical screening in transport. Implications of these results for Coulomb logarithm approaches are discussed.
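
    For context, one standard way the velocity autocorrelation function connects to a transport coefficient is the Green-Kubo relation D = (1/3) ∫ <v(0)·v(t)> dt; the sketch below assumes a synthetic exponentially decaying VACF purely for illustration, not the behaviors catalogued in the paper.

```python
import numpy as np

# Green-Kubo route from a velocity autocorrelation function (VACF) to a
# self-diffusion coefficient: D = (1/3) * integral of <v(0).v(t)> dt.
# The exponentially decaying VACF below is a synthetic stand-in.

dt = 0.01
t = np.arange(0.0, 10.0, dt)
v2 = 3.0                       # <v^2>, sets the overall scale (illustrative)
vacf = v2 * np.exp(-t / 0.5)   # stand-in for a measured <v(0).v(t)>

# trapezoidal integration of the VACF
D = (1.0 / 3.0) * np.sum(0.5 * (vacf[1:] + vacf[:-1])) * dt
print(f"self-diffusion coefficient D = {D:.3f} (arbitrary units)")
```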

  13. Assessment of historical masonry pillars reinforced by CFRP strips

    NASA Astrophysics Data System (ADS)

    Fedele, Roberto; Rosati, Giampaolo; Biolzi, Luigi; Cattaneo, Sara

    2014-10-01

    In this methodological study, the ultimate response of masonry pillars strengthened by externally bonded Carbon Fiber Reinforced Polymer (CFRP) was investigated. Historical bricks were derived from a XVII century rural building, whilst a high strength mortar was utilized for the joints. The conventional experimental information, concerning the overall reaction force and the relative displacements provided by "point" sensors (LVDTs and a clip gauge), was herein enriched with non-contact, full-field kinematic measurements provided by 2D Digital Image Correlation (2D DIC). The experimental information was critically compared with predictions provided by an advanced three-dimensional model, based on nonlinear finite elements under the simplifying assumption of perfect adhesion between the reinforcement and the support.

  14. Bridging automatic speech recognition and psycholinguistics: Extending Shortlist to an end-to-end model of human speech recognition (L)

    NASA Astrophysics Data System (ADS)

    Scharenborg, Odette; ten Bosch, Louis; Boves, Lou; Norris, Dennis

    2003-12-01

    This letter evaluates potential benefits of combining human speech recognition (HSR) and automatic speech recognition by building a joint model of an automatic phone recognizer (APR) and a computational model of HSR, viz., Shortlist [Norris, Cognition 52, 189-234 (1994)]. Experiments based on "real-life" speech highlight critical limitations posed by some of the simplifying assumptions made in models of human speech recognition. These limitations could be overcome by avoiding hard phone decisions at the output side of the APR, and by using a match between the input and the internal lexicon that flexibly copes with deviations from canonical phonemic representations.

  15. Managed care for Medicare: some considerations in designing effective information provision programs.

    PubMed

    Jayanti, R K

    2001-01-01

    Consumer information-processing theory provides a useful framework for policy makers concerned with regulating information provided by managed care organizations. The assumption that consumers are rational information processors and providing more information is better is questioned in this paper. Consumer research demonstrates that when faced with an uncertain decision, consumers adopt simplifying strategies leading to sub-optimal choices. A discussion on how consumers process risk information and the effects of various informational formats on decision outcomes is provided. Categorization theory is used to propose guidelines with regard to providing effective information to consumers choosing among competing managed care plans. Public policy implications borne out of consumer information-processing theory conclude the article.

  16. A modified Friedmann equation

    NASA Astrophysics Data System (ADS)

    Ambjørn, J.; Watabiki, Y.

    2017-12-01

    We recently formulated a model of the universe based on an underlying W3-symmetry. It allows the creation of the universe from nothing and the creation of baby universes and wormholes for spacetimes of dimension 2, 3, 4, 6 and 10. Here we show that the classical large time and large space limit of these universes is one of exponentially fast expansion without the need of a cosmological constant. Under a number of simplifying assumptions, our model predicts that w = ‑1.2 in the case of four-dimensional spacetime. The possibility of obtaining a w-value less than ‑1 is linked to the ability of our model to create baby universes and wormholes.
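
    As background for how a w-value is read, the following is the textbook flat-FRW bookkeeping, not the paper's W3-symmetric derivation:

```latex
% Textbook flat-FRW relations used to interpret a measured w; background
% context only, not the paper's W3-symmetric derivation.
\[
\left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G}{3}\,\rho ,
\qquad
p = w\rho
\;\Longrightarrow\;
\rho \propto a^{-3(1+w)} .
\]
% For w > -1, a(t) grows as t^{2/(3(1+w))}; at w = -1 the expansion is
% exponential; for w < -1 (such as the model's w = -1.2) the density grows
% with a, giving super-exponential "phantom" expansion.
```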

  17. Towards realistic modelling of spectral line formation - lessons learnt from red giants

    NASA Astrophysics Data System (ADS)

    Lind, Karin

    2015-08-01

    Many decades of quantitative spectroscopic studies of red giants have revealed much about the formation histories and interlinks between the main components of the Galaxy and its satellites. Telescopes and instrumentation are now able to deliver high-resolution data of superb quality for large stellar samples and Galactic archaeology has entered a new era. At the same time, we have learnt how simplifying physical assumptions in the modelling of spectroscopic data can bias the interpretations, in particular one-dimensional homogeneity and local thermodynamic equilibrium (LTE). I will present lessons learnt so far from non-LTE spectral line formation in 3D radiation-hydrodynamic atmospheres of red giants, the smaller siblings of red supergiants.

  18. Droplets size evolution of dispersion in a stirred tank

    NASA Astrophysics Data System (ADS)

    Kysela, Bohus; Konfrst, Jiri; Chara, Zdenek; Sulc, Radek; Jasikova, Darina

    2018-06-01

    Dispersion of two immiscible liquids is commonly used in the chemical industry as well as in the metallurgical industry, e.g., in extraction processes. The governing property is the droplet size distribution. The droplet sizes are given by the physical properties of both liquids and the flow properties inside a stirred tank. The first investigation stage is focused on in-situ droplet size measurement using image analysis and on optimizing the evaluation method to achieve maximal result reproducibility. The obtained experimental results are compared with a multiphase flow simulation based on the Euler-Euler approach combined with PBM (Population Balance Modelling). The population balance model was, in this specific case, simplified with the assumption of pure breakage of droplets.

  19. Model-based estimation for dynamic cardiac studies using ECT.

    PubMed

    Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O

    1994-01-01

    The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.

  20. On firework blasts and qualitative parameter dependency.

    PubMed

    Zohdi, T I

    2016-01-01

    In this paper, a mathematical model is developed to qualitatively simulate the progressive time-evolution of a blast from a simple firework. Estimates are made for the blast radius that one can expect for a given amount of detonation energy and pyrotechnic display material. The model balances the released energy from the initial blast pulse with the subsequent kinetic energy and then computes the trajectory of the material under the influence of the drag from the surrounding air, gravity and possible buoyancy. Under certain simplifying assumptions, the model can be solved for analytically. The solution serves as a guide to identifying key parameters that control the evolving blast envelope. Three-dimensional examples are given.
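
    A minimal numerical sketch of the balance described above: the blast energy sets an initial fragment speed, after which the fragment follows ballistic motion with quadratic air drag and gravity (buoyancy neglected). All parameter values are hypothetical, not fitted to any real firework.

```python
import numpy as np

# Illustrative fragment trajectory: kinetic energy from the blast pulse,
# then drag plus gravity. All values are assumed for illustration.

E = 5.0e4                         # released blast energy [J]
n = 200                           # fragments sharing the kinetic energy
m = 0.02                          # fragment mass [kg]
v0 = np.sqrt(2.0 * (E / n) / m)   # initial speed from the energy balance

rho_air, Cd, A = 1.2, 0.47, 1.0e-4   # air density, drag coefficient, area
g = np.array([0.0, 0.0, -9.81])
dt = 1.0e-3

v = v0 * np.array([np.cos(np.pi / 4), 0.0, np.sin(np.pi / 4)])  # 45 deg launch
x = np.zeros(3)
for _ in range(int(60.0 / dt)):          # integrate until ground impact
    drag = -0.5 * rho_air * Cd * A * np.linalg.norm(v) * v / m
    v += (g + drag) * dt                 # explicit Euler step
    x += v * dt
    if x[2] < 0.0:
        break
print(f"initial speed {v0:.0f} m/s, ground range {np.hypot(x[0], x[1]):.0f} m")
```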

  1. Resonant behaviour of MHD waves on magnetic flux tubes. I - Connection formulae at the resonant surfaces. II - Absorption of sound waves by sunspots

    NASA Technical Reports Server (NTRS)

    Sakurai, Takashi; Goossens, Marcel; Hollweg, Joseph V.

    1991-01-01

    The present method of addressing the resonance problems that emerge in such MHD phenomena as the resonant absorption of waves at the Alfven resonance point avoids solving the fourth-order differential equation of dissipative MHD by recourse to connection formulae across the dissipation layer. In the second part of this investigation, the absorption of solar 5-min oscillations by sunspots is interpreted as the resonant absorption of sound waves by a magnetic cylinder. The absorption coefficient is evaluated (1) analytically, under certain simplifying assumptions, and (2) numerically, under more general conditions. The observed absorption coefficient magnitude is explained over suitable parameter ranges.

  2. Temperature Histories in Ceramic-Insulated Heat-Sink Nozzle

    NASA Technical Reports Server (NTRS)

    Ciepluch, Carl C.

    1960-01-01

    Temperature histories were calculated for a composite nozzle wall by a simplified numerical integration calculation procedure. These calculations indicated that there is a unique ratio of insulation and metal heat-sink thickness that will minimize total wall thickness for a given operating condition and required running time. The optimum insulation and metal thickness will vary throughout the nozzle as a result of the variation in heat-transfer rate. The use of low chamber pressure results in a significant increase in the maximum running time of a given weight nozzle. Experimentally measured wall temperatures were lower than those calculated. This was due in part to the assumption of one-dimensional or slab heat flow in the calculation procedure.
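
    To illustrate the one-dimensional (slab) conduction assumption mentioned above, here is a minimal explicit finite-difference sketch of transient conduction through an insulation layer backed by a metal heat sink; all material values and thicknesses are illustrative assumptions, not the nozzle data of the report.

```python
import numpy as np

# 1D (slab) transient conduction through insulation + metal heat sink.
# Explicit finite differences; all material values are assumed.

nx, L = 50, 0.02                             # nodes, wall thickness [m]
dx = L / (nx - 1)
x = np.linspace(0.0, L, nx)
alpha = np.where(x < 0.005, 5.0e-7, 4.0e-6)  # insulation, then metal [m^2/s]
T = np.full(nx, 300.0)                       # initial temperature [K]
T_hot = 2500.0                               # hot-gas side temperature [K]

dt = 0.2 * dx**2 / alpha.max()               # explicit stability limit
for _ in range(int(5.0 / dt)):               # 5 s of running time
    T[0] = T_hot                             # fixed hot-gas boundary (simplistic)
    T[1:-1] += alpha[1:-1] * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[-1] = T[-2]                            # insulated back face
print(f"back-face heat-sink temperature after 5 s: {T[-1]:.0f} K")
```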

  3. On firework blasts and qualitative parameter dependency

    PubMed Central

    Zohdi, T. I.

    2016-01-01

    In this paper, a mathematical model is developed to qualitatively simulate the progressive time-evolution of a blast from a simple firework. Estimates are made for the blast radius that one can expect for a given amount of detonation energy and pyrotechnic display material. The model balances the released energy from the initial blast pulse with the subsequent kinetic energy and then computes the trajectory of the material under the influence of the drag from the surrounding air, gravity and possible buoyancy. Under certain simplifying assumptions, the model can be solved for analytically. The solution serves as a guide to identifying key parameters that control the evolving blast envelope. Three-dimensional examples are given. PMID:26997903

  4. Parachute dynamics and stability analysis. [using nonlinear differential equations of motion

    NASA Technical Reports Server (NTRS)

    Ibrahim, S. K.; Engdahl, R. A.

    1974-01-01

    The nonlinear differential equations of motion for a general parachute-riser-payload system are developed. The resulting math model is then applied for analyzing the descent dynamics and stability characteristics of both the drogue stabilization phase and the main descent phase of the space shuttle solid rocket booster (SRB) recovery system. The formulation of the problem is characterized by a minimum number of simplifying assumptions and full application of state-of-the-art parachute technology. The parachute suspension lines and the parachute risers can be modeled as elastic elements, and the whole system may be subjected to specified wind and gust profiles in order to assess their effects on the stability of the recovery system.

  5. A practical method of predicting the loudness of complex electrical stimuli

    NASA Astrophysics Data System (ADS)

    McKay, Colette M.; Henshall, Katherine R.; Farrell, Rebecca J.; McDermott, Hugh J.

    2003-04-01

    The output of speech processors for multiple-electrode cochlear implants consists of current waveforms with complex temporal and spatial patterns. The majority of existing processors output sequential biphasic current pulses. This paper describes a practical method of calculating loudness estimates for such stimuli, in addition to the relative loudness contributions from different cochlear regions. The method can be used either to manipulate the loudness or levels in existing processing strategies, or to control intensity cues in novel sound processing strategies. The method is based on a loudness model described by McKay et al. [J. Acoust. Soc. Am. 110, 1514-1524 (2001)] with the addition of the simplifying approximation that current pulses falling within a temporal integration window of several milliseconds' duration contribute independently to the overall loudness of the stimulus. Three experiments were carried out with six implantees who use the CI24M device manufactured by Cochlear Ltd. The first experiment validated the simplifying assumption, and allowed loudness growth functions to be calculated for use in the loudness prediction method. The following experiments confirmed the accuracy of the method using multiple-electrode stimuli with various patterns of electrode locations and current levels.
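
    A minimal sketch of the simplifying approximation validated in the first experiment: pulses falling inside a temporal integration window contribute independently (here, additively after a power-law loudness growth) to overall loudness. The growth exponent and pulse data are hypothetical, not the McKay et al. model parameters.

```python
import numpy as np

# Windowed, independent-contribution loudness estimate for a pulse train.
# Exponent and pulse data are illustrative assumptions.

pulse_times = np.array([0.0, 1.0, 2.0, 3.0, 4.0])     # pulse onsets [ms]
pulse_currents = np.array([0.8, 1.0, 0.9, 1.1, 1.0])  # normalised currents
window_ms = 4.0                                       # integration window
p = 2.0                                               # assumed growth exponent

def loudness(t_center):
    inside = np.abs(pulse_times - t_center) <= window_ms / 2.0
    return np.sum(pulse_currents[inside] ** p)        # independent contributions

print(f"loudness estimate centred at t = 2 ms: {loudness(2.0):.2f}")
```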

  6. The risk of collapse in abandoned mine sites: the issue of data uncertainty

    NASA Astrophysics Data System (ADS)

    Longoni, Laura; Papini, Monica; Brambilla, Davide; Arosio, Diego; Zanzi, Luigi

    2016-04-01

    Ground collapses over abandoned underground mines constitute an emerging environmental risk worldwide. The high risk associated with subsurface voids, together with the lack of knowledge of the geometric and geomechanical features of mining areas, makes abandoned underground mines one of the current challenges for countries with a long mining history. In this study, a stability analysis of the Montevecchia marl mine is performed in order to validate a general approach that takes into account the poor local information and the variability of the input data. The collapse risk was evaluated through a numerical approach that, starting from some simplifying assumptions, is able to provide an overview of the collapse probability. The final result is an easily accessible, transparent summary graph of the collapse probability. This approach may be useful for public administrators called upon to manage this environmental risk. The approach tries to simplify this complex problem in order to achieve a rough risk assessment, but, since it relies on just a small amount of information, any final user should be aware that a comprehensive and detailed risk scenario can be generated only through more exhaustive investigations.

  7. Lubrication Flows.

    ERIC Educational Resources Information Center

    Papanastasiou, Tasos C.

    1989-01-01

    Discusses fluid mechanics for undergraduates including the differential Navier-Stokes equations, dimensional analysis and simplified dimensionless numbers, control volume principles, the Reynolds lubrication equation for confined and free surface flows, capillary pressure, and simplified perturbation techniques. Provides a vertical dip coating…

  8. An Overview of Modifications Applied to a Turbulence Response Analysis Method for Flexible Aircraft Configurations

    NASA Technical Reports Server (NTRS)

    Funk, Christie J.

    2013-01-01

    A software program and associated methodology to study gust loading on aircraft exists for a classification of geometrically simplified flexible configurations. This program consists of a simple aircraft response model with two rigid and three flexible symmetric degrees of freedom and allows for the calculation of various airplane responses due to a discrete one-minus-cosine gust as well as continuous turbulence. Simplifications, assumptions, and opportunities for potential improvements pertaining to the existing software program are first identified; then a revised version of the original software tool is developed with improved methodology to include more complex geometries, additional excitation cases, and output data so as to provide a more useful and accurate tool for gust load analysis. Revisions are made in the categories of aircraft geometry, computation of aerodynamic forces and moments, and implementation of horizontal tail mode shapes. In order to enhance the usefulness of the original software program, a wing control surface and a horizontal tail control surface are added, an extended application of the discrete one-minus-cosine gust input is employed, a supplemental continuous turbulence spectrum is implemented, and a capability to animate the total vehicle deformation response to gust inputs is included. These revisions and enhancements are implemented, and an analysis of the results is used to validate the modifications.

  9. Comparison of Water-Load Distributions Obtained during Seaplane Landings with Bureau of Aeronautics Specifications. TED No. NACA 2413

    NASA Technical Reports Server (NTRS)

    Smiley, Robert F.; Haines, Gilbert A.

    1949-01-01

    Bureau of Aeronautics Design Specifications SS-IC-2 for water loads in sheltered water are compared with experimental water loads obtained during a full-scale landing investigation. This investigation was conducted with a JRS-1 flying boat which has a 20 degrees dead-rise V-bottom with a partial chine flare. The range of landing conditions included airspeeds between 88 and 126 feet per second, sinking speeds between 1.6 and 9.1 feet per second, flight angles less than 6 degrees, and trims between 2 degrees and 12 degrees. Landings were moderate and were made in calm water. Measurements were obtained of maximum over-all loads, maximum pitching moments, and pressure distributions. Maximum experimental loads include over-all load factors of 2g, moments of 128,000 pound-feet, and maximum local pressures greater than 40 pounds per square inch. Experimental over-all loads are approximately one-half the design values, while local pressures are of the same order as or larger than pressures calculated from specifications for plating, stringer, floor, and frame design. The value of this comparison is limited, to some extent, by the moderate conditions of the test and by the necessary simplifying assumptions used in comparing the specifications with the experimental loads.

  10. Welfare and Generational Equity in Sustainable Unfunded Pension Systems

    PubMed Central

    Auerbach, Alan J.; Lee, Ronald

    2011-01-01

    Using stochastic simulations we analyze how public pension structures spread the risks arising from demographic and economic shocks across generations. We consider several actual and hypothetical sustainable PAYGO pension structures, including: (1) versions of the US Social Security system with annual adjustments of taxes or benefits to maintain fiscal balance; (2) Sweden’s Notional Defined Contribution system and several variants developed to improve fiscal stability; and (3) the German system, which also includes annual adjustments to maintain fiscal balance. For each system, we present descriptive measures of uncertainty in representative outcomes for a typical generation and across generations. We then estimate expected utility for generations based on simplifying assumptions and incorporate these expected utility calculations in an overall social welfare measure. Using a horizontal equity index, we also compare the different systems’ performance in terms of how neighboring generations are treated. While the actual Swedish system smoothes stochastic fluctuations more than any other and produces the highest degree of horizontal equity, it does so by accumulating a buffer stock of assets that alleviates the need for frequent adjustments. In terms of social welfare, this accumulation of assets leads to a lower average rate of return that more than offsets the benefits of risk reduction, leaving systems with more frequent adjustments that spread risks broadly among generations as those most preferred. PMID:21818166

  11. A monolithic mass tracking formulation for bubbles in incompressible flow

    NASA Astrophysics Data System (ADS)

    Aanjaneya, Mridul; Patkar, Saket; Fedkiw, Ronald

    2013-08-01

    We devise a novel method for treating bubbles in incompressible flow that relies on the conservative advection of bubble mass and an associated equation of state in order to determine pressure boundary conditions inside each bubble. We show that executing this algorithm in a traditional manner leads to stability issues similar to those seen for partitioned methods for solid-fluid coupling. Therefore, we reformulate the problem monolithically. This is accomplished by first proposing a new fully monolithic approach to coupling incompressible flow to fully nonlinear compressible flow including the effects of shocks and rarefactions, and then subsequently making a number of simplifying assumptions on the air flow removing not only the nonlinearities but also the spatial variations of both the density and the pressure. The resulting algorithm is quite robust, has been shown to converge to known solutions for test problems, and has been shown to be quite effective on more realistic problems including those with multiple bubbles, merging and pinching, etc. Notably, this approach departs from a standard two-phase incompressible flow model where the air flow preserves its volume despite potentially large forces and pressure differentials in the surrounding incompressible fluid that should change its volume. Our bubbles readily change volume according to an isothermal equation of state.

  12. The experimental determination of the moments of inertia of airplanes by a simplified compound-pendulum method

    NASA Technical Reports Server (NTRS)

    Gracey, William

    1948-01-01

    A simplified compound-pendulum method for the experimental determination of the moments of inertia of airplanes about the x and y axes is described. The method is developed as a modification of the standard pendulum method reported previously in NACA Report NACA-467. A brief review of the older method is included to form a basis for discussion of the simplified method. (author)

  13. Glistening-region model for multipath studies

    NASA Astrophysics Data System (ADS)

    Groves, Gordon W.; Chow, Winston C.

    1998-07-01

    The goal is to achieve a model of radar sea reflection with improved fidelity that is amenable to practical implementation. The geometry of reflection from a wavy surface is formulated. The sea surface is divided into two components: the smooth 'chop' consisting of the longer wavelengths, and the 'roughness' of the short wavelengths. Ordinary geometric reflection from the chop surface is broadened by the roughness. This same representation serves both for forward scatter and backscatter (sea clutter). The 'Road-to-Happiness' approximation, in which the mean sea surface is assumed cylindrical, simplifies the reflection geometry for low-elevation targets. The effect of surface roughness is assumed to make the sea reflection coefficient depend on the 'Deviation Angle' between the specular and the scattering directions. The 'specular' direction is that into which energy would be reflected by a perfectly smooth facet. Assuming that the ocean waves are linear and random allows use of Gaussian statistics, greatly simplifying the formulation by allowing representation of the sea chop by three parameters. An approximation of 'low waves' and retention of the sea-chop slope components only through second order provides further simplification. The simplifying assumptions make it possible to take the predicted 2D ocean wave spectrum into account in the calculation of sea-surface radar reflectivity, to provide algorithms for support of an operational system for dealing with target tracking in the presence of multipath. The product will be of use in simulated studies to evaluate different trade-offs in alternative tracking schemes, and will form the basis of a tactical system for ship defense against low flyers.

  14. Computational reacting gas dynamics

    NASA Technical Reports Server (NTRS)

    Lam, S. H.

    1993-01-01

    In the study of high speed flows at high altitudes, such as those encountered by re-entry spacecraft, the interaction of chemical reactions and other non-equilibrium processes in the flow field with the gas dynamics is crucial. Generally speaking, problems of this level of complexity must resort to numerical methods for solutions, using sophisticated computational fluid dynamics (CFD) codes. The difficulties introduced by reacting gas dynamics can be classified into three distinct headings: (1) the usually inadequate knowledge of the reaction rate coefficients in the non-equilibrium reaction system; (2) the vastly larger number of unknowns involved in the computation and the expected stiffness of the equations; and (3) the interpretation of the detailed reacting CFD numerical results. The research performed accepts the premise that reacting flows of practical interest in the future will in general be too complex or 'untractable' for traditional analytical developments. The power of modern computers must be exploited. However, instead of focusing solely on the construction of numerical solutions of full-model equations, attention is also directed to the 'derivation' of the simplified model from the given full-model. In other words, the present research aims to utilize computations to do tasks which have traditionally been done by skilled theoreticians: to reduce an originally complex full-model system into an approximate but otherwise equivalent simplified model system. The tacit assumption is that once the appropriate simplified model is derived, the interpretation of the detailed reacting CFD numerical results will become much easier. The approach of the research is called computational singular perturbation (CSP).

  15. Application of Multi-Hypothesis Sequential Monte Carlo for Breakup Analysis

    NASA Astrophysics Data System (ADS)

    Faber, W. R.; Zaidi, W.; Hussein, I. I.; Roscoe, C. W. T.; Wilkins, M. P.; Schumacher, P. W., Jr.

    As more objects are launched into space, the potential for breakup events and space object collisions is ever increasing. These events create large clouds of debris that are extremely hazardous to space operations. Providing timely, accurate, and statistically meaningful Space Situational Awareness (SSA) data is crucial in order to protect assets and operations in space. The space object tracking problem, in general, is nonlinear in both state dynamics and observations, making it ill-suited to linear filtering techniques such as the Kalman filter. Additionally, given the multi-object, multi-scenario nature of the problem, space situational awareness requires multi-hypothesis tracking and management that is combinatorially challenging in nature. In practice, it is often seen that assumptions of underlying linearity and/or Gaussianity are used to provide tractable solutions to the multiple space object tracking problem. However, these assumptions are, at times, detrimental to tracking data and provide statistically inconsistent solutions. This paper details a tractable solution to the multiple space object tracking problem applicable to space object breakup events. Within this solution, simplifying assumptions of the underlying probability density function are relaxed and heuristic methods for hypothesis management are avoided. This is done by implementing Sequential Monte Carlo (SMC) methods for both nonlinear filtering as well as hypothesis management. The goal of this paper is to detail the solution and use it as a platform to discuss computational limitations that hinder proper analysis of large breakup events.
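
    For readers unfamiliar with SMC filtering, the following is a generic bootstrap particle filter for a nonlinear scalar state, the basic machinery referred to above rather than the paper's multi-hypothesis breakup tracker; the dynamics, noise levels, and particle count are illustrative assumptions.

```python
import numpy as np

# Generic bootstrap Sequential Monte Carlo (particle) filter sketch.
# Dynamics, noise levels and particle count are illustrative.

rng = np.random.default_rng(1)
N = 1000
x = rng.normal(0.0, 1.0, N)                # initial particle ensemble

def propagate(x):
    """Nonlinear dynamics plus process noise."""
    return x + 0.1 * np.sin(x) + rng.normal(0.0, 0.05, x.size)

def likelihood(z, x, sigma=0.2):
    """Gaussian observation model z = x + noise."""
    return np.exp(-0.5 * ((z - x) / sigma) ** 2)

truth = 0.8
for _ in range(20):
    truth = truth + 0.1 * np.sin(truth)     # true state evolves
    z = truth + rng.normal(0.0, 0.2)        # noisy observation
    x = propagate(x)                        # predict
    w = likelihood(z, x)                    # weight
    w /= w.sum()
    x = x[rng.choice(N, N, p=w)]            # multinomial resampling
print(f"truth {truth:.3f}, filter mean {x.mean():.3f}")
```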

  16. Is There a Critical Distance for Fickian Transport? - a Statistical Approach to Sub-Fickian Transport Modelling in Porous Media

    NASA Astrophysics Data System (ADS)

    Most, S.; Nowak, W.; Bijeljic, B.

    2014-12-01

    Transport processes in porous media are frequently simulated as particle movement. This process can be formulated as a stochastic process of particle position increments. At the pore scale, the geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Recent experimental data suggest that we have not yet reached the end of the need to generalize, because particle increments show statistical dependency beyond linear correlation and over many time steps. The goal of this work is to better understand the validity regions of commonly made assumptions. We investigate after what transport distances we can observe: (1) a statistical dependence between increments that can be modelled as an order-k Markov process boiling down to order 1, which would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks would start; (2) a bivariate statistical dependence that simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW); (3) complete absence of statistical dependence (validity of classical PTRW/CTRW). The approach is to derive a statistical model for pore-scale transport from a powerful experimental data set via copula analysis. The model is formulated as a non-Gaussian, mutually dependent Markov process of higher order, which allows us to investigate the validity ranges of simpler models.
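
    A minimal diagnostic in the same spirit: estimate how many steps of statistical dependence particle-position increments carry via lagged autocorrelation. The synthetic AR(1) increments and the coefficient phi below are illustrative assumptions standing in for pore-scale data, and linear correlation is of course only the first of the dependence measures the study considers.

```python
import numpy as np

# Lagged-autocorrelation diagnostic on synthetic correlated increments.
# AR(1) coefficient phi is an illustrative assumption.

rng = np.random.default_rng(2)
n, phi = 100_000, 0.6
eps = rng.standard_normal(n)
inc = np.empty(n)
inc[0] = eps[0]
for i in range(1, n):                  # correlated increments (AR(1))
    inc[i] = phi * inc[i - 1] + eps[i]

def autocorr(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

for lag in (1, 2, 5, 10, 20):
    print(f"lag {lag:2d}: correlation {autocorr(inc, lag):+.3f}")
# The correlation decays like phi**lag: dependence persists over several
# steps, so independent-increment (Fickian) transport is not yet valid.
```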

  17. A multigenerational effect of parental age on offspring size but not fitness in common duckweed (Lemna minor).

    PubMed

    Barks, P M; Laird, R A

    2016-04-01

    Classic theories on the evolution of senescence make the simplifying assumption that all offspring are of equal quality, so that demographic senescence only manifests through declining rates of survival or fecundity. However, there is now evidence that, in addition to declining rates of survival and fecundity, many organisms are subject to age-related declines in the quality of offspring produced (i.e. parental age effects). Recent modelling approaches allow for the incorporation of parental age effects into classic demographic analyses, assuming that such effects are limited to a single generation. Does this 'single-generation' assumption hold? To find out, we conducted a laboratory study with the aquatic plant Lemna minor, a species for which parental age effects have been demonstrated previously. We compared the size and fitness of 423 laboratory-cultured plants (asexually derived ramets) representing various birth orders, and ancestral 'birth-order genealogies'. We found that offspring size and fitness both declined with increasing 'immediate' birth order (i.e. birth order with respect to the immediate parent), but only offspring size was affected by ancestral birth order. Thus, the assumption that parental age effects on offspring fitness are limited to a single generation does in fact hold for L. minor. This result will guide theorists aiming to refine and generalize modelling approaches that incorporate parental age effects into evolutionary theory on senescence. © 2016 European Society For Evolutionary Biology.

  18. On the evolution of misunderstandings about evolutionary psychology.

    PubMed

    Young, J; Persell, R

    2000-04-01

    Some of the controversy surrounding evolutionary explanations of human behavior may be due to cognitive information-processing patterns that are themselves the result of evolutionary processes. Two such patterns are (1) the tendency to oversimplify information so as to reduce demand on cognitive resources and (2) our strong desire to generate predictability and stability from perceptions of the external world. For example, research on social stereotyping has found that people tend to focus automatically on simplified social-categorical information, to use such information when deciding how to behave, and to rely on such information even in the face of contradictory evidence. Similarly, an undying debate over nature vs. nurture is shaped by various data-reduction strategies that frequently oversimplify, and thus distort, the intent of the supporting arguments. This debate is also often marked by an assumption that either the nature or the nurture domain may be justifiably excluded at an explanatory level because one domain appears to operate in a sufficiently stable and predictable way for a particular argument. As a result, critiques inveighed against evolutionary explanations of behavior often incorporate simplified (and erroneous) assumptions about either the mechanics of how evolution operates or the inevitable implications of evolution for understanding human behavior. The influences of these tendencies are applied to a discussion of the heritability of behavioral characteristics. It is suggested that the common view that Mendelian genetics can explain the heritability of complex behaviors, with a one-gene-one-trait process, is misguided. Complex behaviors are undoubtedly a product of a more complex interaction between genes and environment, ensuring that both nature and nurture must be accommodated in a yet-to-be-developed post-Mendelian model of genetic influence. As a result, current public perceptions of evolutionary explanations of behavior are handicapped by the lack of clear articulation of the relationship between inherited genes and manifest behavior.

  19. Simplified models for dark matter searches at the LHC

    NASA Astrophysics Data System (ADS)

    Abdallah, Jalal; Araujo, Henrique; Arbey, Alexandre; Ashkenazi, Adi; Belyaev, Alexander; Berger, Joshua; Boehm, Celine; Boveia, Antonio; Brennan, Amelia; Brooke, Jim; Buchmueller, Oliver; Buckley, Matthew; Busoni, Giorgio; Calibbi, Lorenzo; Chauhan, Sushil; Daci, Nadir; Davies, Gavin; De Bruyn, Isabelle; De Jong, Paul; De Roeck, Albert; de Vries, Kees; Del Re, Daniele; De Simone, Andrea; Di Simone, Andrea; Doglioni, Caterina; Dolan, Matthew; Dreiner, Herbi K.; Ellis, John; Eno, Sarah; Etzion, Erez; Fairbairn, Malcolm; Feldstein, Brian; Flaecher, Henning; Feng, Eric; Fox, Patrick; Genest, Marie-Hélène; Gouskos, Loukas; Gramling, Johanna; Haisch, Ulrich; Harnik, Roni; Hibbs, Anthony; Hoh, Siewyan; Hopkins, Walter; Ippolito, Valerio; Jacques, Thomas; Kahlhoefer, Felix; Khoze, Valentin V.; Kirk, Russell; Korn, Andreas; Kotov, Khristian; Kunori, Shuichi; Landsberg, Greg; Liem, Sebastian; Lin, Tongyan; Lowette, Steven; Lucas, Robyn; Malgeri, Luca; Malik, Sarah; McCabe, Christopher; Mete, Alaettin Serhan; Morgante, Enrico; Mrenna, Stephen; Nakahama, Yu; Newbold, Dave; Nordstrom, Karl; Pani, Priscilla; Papucci, Michele; Pataraia, Sophio; Penning, Bjoern; Pinna, Deborah; Polesello, Giacomo; Racco, Davide; Re, Emanuele; Riotto, Antonio Walter; Rizzo, Thomas; Salek, David; Sarkar, Subir; Schramm, Steven; Skubic, Patrick; Slone, Oren; Smirnov, Juri; Soreq, Yotam; Sumner, Timothy; Tait, Tim M. P.; Thomas, Marc; Tomalin, Ian; Tunnell, Christopher; Vichi, Alessandro; Volansky, Tomer; Weiner, Neal; West, Stephen M.; Wielers, Monika; Worm, Steven; Yavin, Itay; Zaldivar, Bryan; Zhou, Ning; Zurek, Kathryn

    2015-09-01

    This document outlines a set of simplified models for dark matter and its interactions with Standard Model particles. It is intended to summarize the main characteristics that these simplified models have when applied to dark matter searches at the LHC, and to provide a number of useful expressions for reference. The list of models includes both s-channel and t-channel scenarios. For s-channel, spin-0 and spin-1 mediators are discussed, and also realizations where the Higgs particle provides a portal between the dark and visible sectors. The guiding principles underpinning the proposed simplified models are spelled out, and some suggestions for implementation are presented.

  20. Computer algorithm for analyzing and processing borehole strainmeter data

    USGS Publications Warehouse

    Langbein, John O.

    2010-01-01

    The newly installed Plate Boundary Observatory (PBO) strainmeters record signals from tectonic activity, Earth tides, and atmospheric pressure. Important information about tectonic processes may occur at amplitudes at and below tidal strains and pressure loading. If incorrect assumptions are made regarding the background noise in the strain data, then the estimates of tectonic signal amplitudes may be incorrect. Furthermore, the use of the simplifying assumption that data are uncorrelated can lead to incorrect results, and pressure loading and tides may not be completely removed from the raw data. Instead, any algorithm used to process strainmeter data must incorporate the strong temporal correlations that are inherent in these data. The technique described here uses least squares but employs a data covariance that describes the temporal correlation of strainmeter data. There are several advantages to this method since many parameters are estimated simultaneously. These parameters include: (1) functional terms that describe the underlying error model, (2) the tidal terms, (3) the pressure loading term(s), (4) amplitudes of offsets, either those from earthquakes or from the instrument, (5) rate and changes in rate, and (6) the amplitudes and time constants of either logarithmic or exponential curves that can characterize postseismic deformation or diffusion of fluids near the strainmeter. With the proper error model, realistic estimates of the standard errors of the various parameters are obtained; this is especially critical in determining the statistical significance of a suspected tectonic strain signal. The program also provides a method of tracking the various adjustments required to process strainmeter data. In addition, the program provides several plots to assist with identifying either tectonic signals or other signals that may need to be removed before any geophysical signal can be identified.
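
    The core idea, generalized least squares with a temporal data covariance, can be sketched in a few lines. The exponential covariance model and the offset-plus-rate design matrix below are illustrative assumptions, not the program's full error model.

```python
import numpy as np

# Generalized least squares with a temporal data covariance: parameter
# estimates and standard errors account for correlated noise instead of
# assuming white noise. Covariance model and design matrix are assumed.

rng = np.random.default_rng(3)
n = 400
t = np.arange(n, dtype=float)
G = np.column_stack([np.ones(n), t])        # offset + rate design matrix

tau = 20.0                                  # correlation time [samples]
C = np.exp(-np.abs(t[:, None] - t[None, :]) / tau)   # data covariance
Lc = np.linalg.cholesky(C)
d = G @ np.array([1.0, 0.002]) + Lc @ rng.standard_normal(n)  # correlated noise

Ci_G = np.linalg.solve(C, G)                # C^{-1} G
cov_m = np.linalg.inv(G.T @ Ci_G)           # parameter covariance
m = cov_m @ (Ci_G.T @ d)                    # GLS estimate
se = np.sqrt(np.diag(cov_m))
print(f"rate = {m[1]:.5f} +/- {se[1]:.5f} (per sample)")
```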

  1. A comprehensive analysis of the evaporation of a liquid spherical drop.

    PubMed

    Sobac, B; Talbot, P; Haut, B; Rednikov, A; Colinet, P

    2015-01-15

    In this paper, a new comprehensive analysis of a suspended drop of a pure liquid evaporating into air is presented. Based on mass and energy conservation equations, a quasi-steady model is developed including diffusive and convective transports, and considering the non-isothermia of the gas phase. The main original feature of this simple analytical model lies in the consideration of the local dependence of the physico-chemical properties of the gas on the gas temperature, which has a significant influence on the evaporation process at high temperatures. The influence of the atmospheric conditions on the interfacial evaporation flux, molar fraction and temperature is investigated. Simplified versions of the model are developed to highlight the key mechanisms governing the evaporation process. For the conditions considered in this work, the convective transport appears to be opposed to the evaporation process, leading to a decrease of the evaporation flux. However, this effect is relatively limited, since the Péclet numbers happen to be small. In addition, the gas isothermia assumption never appears to be valid here, even at room temperature, due to the large temperature gradient that develops in the gas phase. These two conclusions are explained by the fact that heat transfer from the gas to the liquid appears to be the step limiting the evaporation process. Regardless of the complexity of the developed model, and excluding extremely small droplets, the square of the drop radius decreases linearly over time (the R^2 law). The assumptions of the model are rigorously discussed and general criteria are established, independently of the liquid-gas couple considered. Copyright © 2014 Elsevier Inc. All rights reserved.
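
    The R^2 law follows from the classical quasi-steady, diffusion-limited bookkeeping; a sketch of that reasoning, not the paper's full model, is:

```latex
% Quasi-steady, diffusion-limited mass balance on a drop of radius R(t),
% with liquid density rho_l and vapour concentration difference Delta c:
\[
\frac{d}{dt}\!\left(\tfrac{4}{3}\pi \rho_{\ell} R^{3}\right) = -4\pi R D\, \Delta c
\quad\Longrightarrow\quad
\frac{d(R^{2})}{dt} = -\frac{2 D\, \Delta c}{\rho_{\ell}} \equiv -k ,
\]
% so R^2(t) = R_0^2 - k t: the squared radius decreases linearly in time
% until the drop vanishes, as stated in the abstract.
```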

  2. Population-level differences in disease transmission: A Bayesian analysis of multiple smallpox epidemics

    PubMed Central

    Elderd, Bret D.; Dwyer, Greg; Dukic, Vanja

    2013-01-01

    Estimates of a disease’s basic reproductive rate R0 play a central role in understanding outbreaks and planning intervention strategies. In many calculations of R0, a simplifying assumption is that different host populations have effectively identical transmission rates. This assumption can lead to an underestimate of the overall uncertainty associated with R0, which, due to the non-linearity of epidemic processes, may result in a mis-estimate of epidemic intensity and miscalculated expenditures associated with public-health interventions. In this paper, we utilize a Bayesian method for quantifying the overall uncertainty arising from differences in population-specific basic reproductive rates. Using this method, we fit spatial and non-spatial susceptible-exposed-infected-recovered (SEIR) models to a series of 13 smallpox outbreaks. Five outbreaks occurred in populations that had been previously exposed to smallpox, while the remaining eight occurred in Native-American populations that were naïve to the disease at the time. The Native-American outbreaks were close in a spatial and temporal sense. Using Bayesian Information Criterion (BIC), we show that the best model includes population-specific R0 values. These differences in R0 values may, in part, be due to differences in genetic background, social structure, or food and water availability. As a result of these inter-population differences, the overall uncertainty associated with the “population average” value of smallpox R0 is larger, a finding that can have important consequences for controlling epidemics. In general, Bayesian hierarchical models are able to properly account for the uncertainty associated with multiple epidemics, provide a clearer understanding of variability in epidemic dynamics, and yield a better assessment of the range of potential risks and consequences that decision makers face. PMID:24021521
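
    A toy numerical version of the central point, that treating outbreak-specific R0 values as identical understates the uncertainty relevant to a new population, can be written in a few lines; all numbers below are illustrative assumptions, not the smallpox estimates.

```python
import numpy as np

# Two-level (hierarchical) view of R0 across outbreaks: between-population
# spread tau dominates the uncertainty for a new population. Values assumed.

rng = np.random.default_rng(4)
mu, tau = 5.0, 1.5            # mean and between-population sd of R0
sigma = 0.3                   # within-outbreak estimation noise
n_outbreaks = 13

pop_R0 = rng.normal(mu, tau, n_outbreaks)            # population-specific R0
est = pop_R0 + rng.normal(0.0, sigma, n_outbreaks)   # per-outbreak estimates

pooled_se = est.std(ddof=1) / np.sqrt(n_outbreaks)   # error of the pooled mean
new_pop_sd = np.sqrt(tau**2 + sigma**2)              # spread for a new population
print(f"standard error of pooled mean R0 : {pooled_se:.2f}")
print(f"predictive spread, new population: {new_pop_sd:.2f}")
```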

  3. Dynamics and control of infections on social networks of population types.

    PubMed

    Williams, Brian G; Dye, Christopher

    2018-06-01

    Random mixing in host populations has been a convenient simplifying assumption in the study of epidemics, but neglects important differences in contact rates within and between population groups. For HIV/AIDS, the assumption of random mixing is inappropriate for epidemics that are concentrated in groups of people at high risk, including female sex workers (FSW) and their male clients (MCF), injecting drug users (IDU) and men who have sex with men (MSM). To find out who transmits infection to whom and how that affects the spread and containment of infection remains a major empirical challenge in the epidemiology of HIV/AIDS. Here we develop a technique, based on the routine sampling of infection in linked population groups (a social network of population types), which shows how an HIV/AIDS epidemic in Can Tho Province of Vietnam began in FSW, was propagated mainly by IDU, and ultimately generated most cases among the female partners of MCF (FPM). Calculation of the case reproduction numbers within and between groups, and for the whole network, provides insights into control that cannot be deduced simply from observations on the prevalence of infection. Specifically, the per capita rate of HIV transmission was highest from FSW to MCF, and most HIV infections occurred in FPM, but the number of infections in the whole network is best reduced by interrupting transmission to and from IDU. This analysis can be used to guide HIV/AIDS interventions using needle and syringe exchange, condom distribution and antiretroviral therapy. The method requires only routine data and could be applied to infections in other populations. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  4. Kinetic multi-layer model of aerosol surface and bulk chemistry (KM-SUB): the influence of interfacial transport and bulk diffusion on the oxidation of oleic acid by ozone

    NASA Astrophysics Data System (ADS)

    Shiraiwa, Manabu; Pfrang, Christian; Pöschl, Ulrich

    2010-05-01

    Aerosols are ubiquitous in the atmosphere and have strong effects on climate and public health. Gas-particle interactions can significantly change the physical and chemical properties of aerosols such as toxicity, reactivity, hygroscopicity and radiative properties. Chemical reactions and mass transport lead to continuous transformation and changes in the composition of atmospheric aerosols ("chemical aging"). Resistor model formulations are widely used to describe and investigate heterogeneous reactions and multiphase processes in laboratory, field and model studies of atmospheric chemistry. The traditional resistor models, however, are usually based on simplifying assumptions such as steady state conditions, homogeneous mixing, and limited numbers of non-interacting species and processes. In order to overcome these limitations, Pöschl, Rudich and Ammann have developed a kinetic model framework (PRA framework) with a double-layer surface concept and universally applicable rate equations and parameters for mass transport and chemical reactions at the gas-particle interface of aerosols and clouds [1]. Based on the PRA framework, we present a novel kinetic multi-layer model that explicitly resolves mass transport and chemical reaction at the surface and in the bulk of aerosol particles (KM-SUB) [2]. The model includes reversible adsorption, surface reactions and surface-bulk exchange as well as bulk diffusion and reaction. Unlike earlier models, KM-SUB does not require simplifying assumptions about steady-state conditions and radial mixing. The temporal evolution and concentration profiles of volatile and non-volatile species at the gas-particle interface and in the particle bulk can be modeled along with surface concentrations and gas uptake coefficients. In this study we explore and exemplify the effects of bulk diffusion on the rate of reactive gas uptake for a simple reference system, the ozonolysis of oleic acid particles, in comparison to experimental data and earlier model studies. We demonstrate how KM-SUB can be used to interpret and analyze experimental data from laboratory studies, and how the results can be extrapolated to atmospheric conditions. In particular, we show how interfacial transport and bulk transport, i.e., surface accommodation, bulk accommodation and bulk diffusion, influence the kinetics of the chemical reaction. Sensitivity studies suggest that in fine air particulate matter oleic acid and compounds with similar reactivity against ozone (C=C double bonds) can reach chemical lifetimes of multiple hours only if they are embedded in a (semi-)solid matrix with very low diffusion coefficients (~10^-10 cm^2 s^-1). Depending on the complexity of the investigated system, unlimited numbers of volatile and non-volatile species and chemical reactions can be flexibly added and treated with KM-SUB. We propose and intend to pursue the application of KM-SUB as a basis for the development of a detailed master mechanism of aerosol chemistry as well as for the derivation of simplified but realistic parameterizations for large-scale atmospheric and climate models. References [1] Pöschl et al., Atmos. Chem. and Phys., 7, 5989-6023 (2007). [2] Shiraiwa et al., Atmos. Chem. Phys. Discuss., 10, 281-326 (2010).
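
    To convey the flavour of a layered treatment, here is a heavily simplified sketch in the spirit of KM-SUB: ozone enters at the surface, diffuses between bulk layers, and reacts bimolecularly with oleic acid in each layer. The geometry, diffusivity, and rate constant are assumed values for illustration, not the published KM-SUB parameters or its full process set (no reversible adsorption or surface-bulk exchange here).

```python
import numpy as np

# Layered diffusion-reaction sketch: surface-fed ozone diffusing into a
# reactive bulk. All parameter values are illustrative assumptions.

n_layers, dz = 20, 1.0e-7        # layers, layer thickness [m]
D = 1.0e-10                      # ozone bulk diffusivity [m^2/s]
k2 = 0.1                         # bimolecular rate constant [m^3/(mol s)]
oa = np.full(n_layers, 1.0e3)    # oleic acid concentration [mol/m^3]
o3 = np.zeros(n_layers)          # ozone concentration [mol/m^3]
o3_surf = 1.0e-4                 # fixed near-surface ozone [mol/m^3]

dt = 0.2 * dz**2 / D             # explicit diffusion stability limit
for _ in range(int(1.0 / dt)):   # integrate 1 s of exposure
    o3[0] = o3_surf                         # surface boundary condition
    flux = D * np.diff(o3) / dz             # transfer between adjacent layers
    o3[:-1] += flux * dt / dz
    o3[1:] -= flux * dt / dz
    r = k2 * o3 * oa                        # layer-resolved ozonolysis rate
    o3 -= r * dt
    oa -= r * dt

# The steady profile decays over the reacto-diffusive length
# sqrt(D / (k2 * oa)), here about 1e-6 m, i.e. roughly 10 layers.
print("normalised ozone profile:", np.round(o3 / o3_surf, 2))
```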

  5. Panel Absorber

    NASA Astrophysics Data System (ADS)

    Mechel, F. P.

    2001-11-01

    A plane wave is incident on a simply supported elastic plate covering a back volume; the arrangement is surrounded by a hard baffle wall. The plate may be porous with a flow friction resistance; the back volume may be filled either with air or with a porous material. The back volume may be bulk reacting (i.e., with sound propagation parallel to the plate) or locally reacting. Since this arrangement is of some importance in room acoustics, Cremer in his book about room acoustics [1] has presented an approximate analysis. However, Cremer's analysis uses a number of assumptions which make his solution, in his own estimate, unsuited for low frequencies, where, on the other hand, the arrangement mainly is applied. This paper presents a sound field description which uses modal analysis. It is applicable not only in the far field, but also near the absorber. Further, approximate solutions are derived, based on simplifying assumptions like those Cremer used. The modal analysis solution is of interest not only as a reference for approximations but also for practical applications, because the aspect of computing time becomes more and more unimportant (the 3D-plots presented below for the sound field were evaluated with modal analysis in about 6 s).

  6. Mathematical Model for a Simplified Calculation of the Input Momentum Coefficient for AFC Purposes

    NASA Astrophysics Data System (ADS)

    Hirsch, Damian; Gharib, Morteza

    2016-11-01

    Active Flow Control (AFC) is an emerging technology which aims at enhancing the aerodynamic performance of flight vehicles (i.e., to save fuel). A viable AFC system must consider the limited resources available on a plane for attaining performance goals. A higher performance goal (i.e., airplane incremental lift) demands a higher input fluidic requirement (i.e., mass flow rate). Therefore, the key requirement for a successful and practical design is to minimize power input while maximizing performance to achieve design targets. One of the most used design parameters is the input momentum coefficient Cμ. The difficulty associated with Cμ lies in obtaining the parameters for its calculation. In the literature two main approaches can be found, which both have their own disadvantages (assumptions, difficult measurements). A new, much simpler calculation approach will be presented that is based on a mathematical model that can be applied to most jet designs (i.e., steady or sweeping jets). The model-incorporated assumptions will be justified theoretically as well as experimentally. Furthermore, the model's capabilities are exploited to give new insight to the AFC technology and its physical limitations. Supported by Boeing.

  7. [Simplified laparoscopic gastric bypass. Initial experience].

    PubMed

    Hernández-Miguelena, Luis; Maldonado-Vázquez, Angélica; Cortes-Romano, Pablo; Ríos-Cruz, Daniel; Marín-Domínguez, Raúl; Castillo-González, Armando

    2014-01-01

    Obesity surgery includes various gastrointestinal procedures. Roux-en-Y gastric bypass is the prototype of mixed procedures and the most practiced worldwide. A similar and novel technique, called the "simplified bypass," has been adopted by Dr. Almino Cardoso Ramos and Dr. Manoel Galvao; it has gained acceptance due to its greater ease and very similar results compared to the conventional technique. The aim of this study is to describe the results of the simplified gastric bypass for the treatment of morbid obesity in our institution. We performed a descriptive, retrospective study of all patients undergoing simplified gastric bypass from January 2008 to July 2012 in the obesity clinic of a private hospital in Mexico City. A total of 90 patients diagnosed with morbid obesity underwent simplified gastric bypass. Complications occurred in 10% of patients; the most frequent were bleeding and internal hernia. Mortality in the study period was 0%. The average weight loss at 12 months was 72.7%. Simplified gastric bypass surgery is safe, with good mid-term results and adequate weight loss in 71% of cases.

  8. Induced simplified neutrosophic correlated aggregation operators for multi-criteria group decision-making

    NASA Astrophysics Data System (ADS)

    Şahin, Rıdvan; Zhang, Hong-yu

    2018-03-01

    The induced Choquet integral is a powerful tool for dealing with imprecise or uncertain information. This study proposes a combination of the induced Choquet integral and neutrosophic information. We first give the operational properties of simplified neutrosophic numbers (SNNs). Then, we develop some new information aggregation operators, including an induced simplified neutrosophic correlated averaging (I-SNCA) operator and an induced simplified neutrosophic correlated geometric (I-SNCG) operator. These operators not only consider the importance of elements or their ordered positions, but also take into account the interaction phenomena among decision criteria or their ordered positions under multiple decision-makers. Moreover, we present a detailed analysis of the I-SNCA and I-SNCG operators, including the properties of idempotency, commutativity and monotonicity, and study the relationships among the proposed operators and existing simplified neutrosophic aggregation operators. In order to handle multi-criteria group decision-making (MCGDM) situations in which the weights of criteria and decision-makers are usually correlated and the criterion values are given as SNNs, an approach is established based on the I-SNCA operator. Finally, a numerical example is presented to demonstrate the proposed approach and to verify its effectiveness and practicality.
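
    The aggregation machinery underneath such operators is the discrete Choquet integral with respect to a fuzzy measure, which is how criteria interactions enter. A minimal sketch over scalar scores follows; the example measure and scores are illustrative assumptions (in the I-SNCA/I-SNCG setting the aggregated objects are SNNs rather than scalars, but the measure plays the same role).

```python
# Discrete Choquet integral of criterion scores w.r.t. a fuzzy measure mu.
# The monotone measure and scores below are illustrative assumptions.

def choquet(values, mu):
    """values: dict criterion -> score; mu: dict frozenset -> weight,
    with mu(empty set) = 0 and mu(all criteria) = 1."""
    items = sorted(values, key=values.get)       # criteria, ascending by score
    total, prev = 0.0, 0.0
    remaining = set(values)
    for c in items:
        total += (values[c] - prev) * mu[frozenset(remaining)]
        prev = values[c]
        remaining.discard(c)                     # drop the weakest criterion
    return total

mu = {frozenset(): 0.0,
      frozenset({"a"}): 0.3, frozenset({"b"}): 0.4, frozenset({"c"}): 0.2,
      frozenset({"a", "b"}): 0.8, frozenset({"a", "c"}): 0.5,
      frozenset({"b", "c"}): 0.7, frozenset({"a", "b", "c"}): 1.0}
scores = {"a": 0.6, "b": 0.9, "c": 0.4}
print(f"Choquet aggregate: {choquet(scores, mu):.3f}")   # 0.680
```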

  9. Identifying the Minimum Model Features to Replicate Historic Morphodynamics of a Juvenile Delta

    NASA Astrophysics Data System (ADS)

    Czapiga, M. J.; Parker, G.

    2017-12-01

    We introduce a quasi-2D morphodynamic delta model that improves on past models that require many simplifying assumptions, e.g. a single channel representative of a channel network, fixed channel width, and spatially uniform deposition. Our model is useful for studying long-term progradation rates of any generic micro-tidal delta system with specification of: characteristic grain size, input water and sediment discharges and basin morphology. In particular, we relax the assumption of a single, implicit channel sweeping across the delta topset in favor of an implicit channel network. This network, coupled with recent research on channel-forming Shields number, quantitative assessments of the lateral depositional length of sand (corresponding loosely to levees) and length between bifurcations create a spatial web of deposition within the receiving basin. The depositional web includes spatial boundaries for areas infilling with sands carried as bed material load, as well as those filling via passive deposition of washload mud. Our main goal is to identify the minimum features necessary to accurately model the morphodynamics of channel number, width, depth, and overall delta progradation rate in a juvenile delta. We use the Wax Lake Delta in Louisiana as a test site due to its rapid growth in the last 40 years. Field data including topset/island bathymetry, channel bathymetry, topset/island width, channel width, number of channels, and radial topset length are compiled from US Army Corps of Engineers data for 1989, 1998, and 2006. Additional data is extracted from a DEM from 2015. These data are used as benchmarks for the hindcast model runs. The morphology of Wax Lake Delta is also strongly affected by a pre-delta substrate that acts as a lower "bedrock" boundary. Therefore, we also include closures for a bedrock-alluvial transition and an excess shear rate-law incision model to estimate bedrock incision. The model's framework is generic, but the inclusion of individual sub-models, such as those mentioned above, allows us to answer basic research questions without the parameterization necessary in higher resolution models. Thus, this type of model offers an alternative to higher-resolution models.

  10. TAIR- TRANSONIC AIRFOIL ANALYSIS COMPUTER CODE

    NASA Technical Reports Server (NTRS)

    Dougherty, F. C.

    1994-01-01

    The Transonic Airfoil analysis computer code, TAIR, was developed to employ a fast, fully implicit algorithm to solve the conservative full-potential equation for the steady transonic flow field about an arbitrary airfoil immersed in a subsonic free stream. The full-potential formulation is considered exact under the assumptions of irrotational, isentropic, and inviscid flow. These assumptions are valid for a wide range of practical transonic flows typical of modern aircraft cruise conditions. The primary features of TAIR include: a new fully implicit iteration scheme which is typically many times faster than classical successive line overrelaxation algorithms; a new, reliable artificial density spatial differencing scheme treating the conservative form of the full-potential equation; and a numerical mapping procedure capable of generating curvilinear, body-fitted finite-difference grids about arbitrary airfoil geometries. Three aspects emphasized during the development of the TAIR code were reliability, simplicity, and speed. The reliability of TAIR comes from two sources: the new algorithm employed and the implementation of effective convergence monitoring logic. TAIR achieves ease of use by employing a "default mode" that greatly simplifies code operation, especially by inexperienced users, and many useful options including: several airfoil-geometry input options, flexible user controls over program output, and a multiple solution capability. The speed of the TAIR code is attributed to the new algorithm and the manner in which it has been implemented. Input to the TAIR program consists of airfoil coordinates, aerodynamic and flow-field convergence parameters, and geometric and grid convergence parameters. The airfoil coordinates for many airfoil shapes can be generated in TAIR from just a few input parameters. Most of the other input parameters have default values which allow the user to run an analysis in the default mode by specifying only a few input parameters. Output from TAIR may include aerodynamic coefficients, the airfoil surface solution, convergence histories, and printer plots of Mach number and density contour maps. The TAIR program is written in FORTRAN IV for batch execution and has been implemented on a CDC 7600 computer with a central memory requirement of approximately 155K (octal) of 60 bit words. The TAIR program was developed in 1981.

  11. Spontaneously Broken Neutral Symmetry in an Ecological System

    NASA Astrophysics Data System (ADS)

    Borile, C.; Muñoz, M. A.; Azaele, S.; Banavar, Jayanth R.; Maritan, A.

    2012-07-01

    Spontaneous symmetry breaking plays a fundamental role in many areas of condensed matter and particle physics. A fundamental problem in ecology is the elucidation of the mechanisms responsible for biodiversity and stability. Neutral theory, which makes the simplifying assumption that all individuals (such as trees in a tropical forest)—regardless of the species they belong to—have the same prospect of reproduction, death, etc., yields gross patterns that are in accord with empirical data. We explore the possibility of birth and death rates that depend on the population density of species, treating the dynamics in a species-symmetric manner. We demonstrate that dynamical evolution can lead to a stationary state characterized simultaneously by both biodiversity and spontaneously broken neutral symmetry.

  12. Research study on high energy radiation effect and environment solar cell degradation methods

    NASA Technical Reports Server (NTRS)

    Horne, W. E.; Wilkinson, M. C.

    1974-01-01

    The most detailed and comprehensively verified analytical model was used to evaluate the effects of simplifying assumptions on the accuracy of predictions made by the external damage coefficient method. It was found that the most serious discrepancies were present in heavily damaged cells, particularly proton damaged cells, in which a gradient in damage across the cell existed. In general, it was found that the current damage coefficient method tends to underestimate damage at high fluences. An exception to this rule was thick cover-slipped cells experiencing heavy degradation due to omnidirectional electrons. In such cases, the damage coefficient method overestimates the damage. Comparisons of degradation predictions made by the two methods and measured flight data confirmed the above findings.

  13. A methodology to select a wire insulation for use in habitable spacecraft.

    PubMed

    Paulos, T; Apostolakis, G

    1998-08-01

    This paper investigates electrical overheating events aboard a habitable spacecraft. The wire insulation involved in these failures plays a major role in the entire event scenario, from threat development to detection and damage assessment. Ideally, if models of wire overheating events in microgravity existed, the various wire insulations under consideration could be quantitatively compared. However, these models do not exist. In this paper, a methodology is developed that can be used to select a wire insulation that is best suited for use in a habitable spacecraft. The results of this study show that, based upon the Analytic Hierarchy Process, the simplifying assumptions made, the criteria selected, and the data used in the analysis, Tefzel is better than Teflon for use in a habitable spacecraft.
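
    Because the ranking rests on the Analytic Hierarchy Process, the sketch below shows the standard AHP weight derivation: priorities come from the principal eigenvector of a pairwise comparison matrix, with a consistency ratio as a sanity check. The matrix entries are invented for illustration and do not reproduce the paper's criteria or judgments.

        import numpy as np

        # Pairwise comparison matrix: A[i][j] = relative importance of
        # criterion i over j on Saaty's 1-9 scale, with A[j][i] = 1/A[i][j].
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)              # principal eigenvalue
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                             # normalized priority weights

        n = A.shape[0]
        ci = (eigvals[k].real - n) / (n - 1)     # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
        print("weights:", w, "consistency ratio:", ci / ri)  # CR < 0.1 is acceptable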

  14. Towards a theory of tiered testing.

    PubMed

    Hansson, Sven Ove; Rudén, Christina

    2007-06-01

    Tiered testing is an essential part of any resource-efficient strategy for the toxicity testing of a large number of chemicals, which is required, for instance, in the risk management of general (industrial) chemicals. In spite of this, no general theory seems to be available for the combination of single tests into efficient tiered testing systems. A first outline of such a theory is developed. It is argued that chemical, toxicological, and decision-theoretical knowledge should be combined in the construction of such a theory. A decision-theoretical approach for the optimization of test systems is introduced. It is based on expected utility maximization, with simplifying assumptions covering the factual and value-related information that is usually missing in the development of test systems.
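
    As a toy illustration of the expected-utility view (invented numbers, not the paper's model), consider a cheap screen followed by an expensive confirmatory test for screen-positives only; tiering is preferred when its expected cost beats confirming every chemical outright.

        # Toy two-tier testing comparison under expected utility (here,
        # expected cost minimization). All probabilities and costs are invented.
        p_toxic = 0.10                  # prior probability a chemical is toxic

        def expected_cost(sens, spec, c_screen, c_confirm, c_miss):
            # Screen everything; confirm only screen-positives; a toxicant
            # missed by the screen incurs a large downstream cost c_miss.
            p_pos = sens * p_toxic + (1 - spec) * (1 - p_toxic)
            p_missed = (1 - sens) * p_toxic
            return c_screen + p_pos * c_confirm + p_missed * c_miss

        tiered = expected_cost(sens=0.8, spec=0.7, c_screen=1,
                               c_confirm=50, c_miss=1000)
        confirm_all = 50                # confirmatory test on every chemical
        print(tiered, "vs", confirm_all)  # 38.5 vs 50: tiering wins here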

  15. A cross-diffusion system derived from a Fokker-Planck equation with partial averaging

    NASA Astrophysics Data System (ADS)

    Jüngel, Ansgar; Zamponi, Nicola

    2017-02-01

    A cross-diffusion system for two components with a Laplacian structure is analyzed on the multi-dimensional torus. This system, which was recently suggested by P.-L. Lions, is formally derived from a Fokker-Planck equation for the probability density associated with a multi-dimensional Itō process, assuming that the diffusion coefficients depend on partial averages of the probability density with exponential weights. A main feature is that the diffusion matrix of the limiting cross-diffusion system is generally neither symmetric nor positive definite, but its structure allows for the use of entropy methods. The global-in-time existence of positive weak solutions is proved and, under a simplifying assumption, the large-time asymptotics is investigated.

  16. Quantum vacuum interaction between two cosmic strings revisited

    NASA Astrophysics Data System (ADS)

    Muñoz-Castañeda, J. M.; Bordag, M.

    2014-03-01

    We reconsider the quantum vacuum interaction energy between two straight parallel cosmic strings. This problem has been discussed several times, in approaches treating both strings perturbatively and treating only one of them perturbatively. Here we point out that a simplifying assumption made by Bordag [Ann. Phys. (Berlin) 47, 93 (1990)] can be justified, and show that, despite the global character of the background, the perturbative approach delivers a correct result. We consider the applicability of the scattering methods developed in the past decade for the Casimir effect to the cosmic string and find them not applicable. We calculate the scattering T-operator on one string. Finally, we consider the vacuum interaction of two strings when each carries a two-dimensional delta function potential.

  17. Trends and Techniques for Space Base Electronics

    NASA Technical Reports Server (NTRS)

    Trotter, J. D.; Wade, T. E.; Gassaway, J. D.

    1979-01-01

    Simulations of various phosphorus and boron diffusions in SOS were completed, and a sputtering system, furnaces, and photolithography-related equipment were set up. Double-layer metal experiments initially utilized wet chemistry techniques. By incorporating ultrasonic etching of the vias, premetal cleaning with a modified buffered HF, phosphorus-doped vapox, and extended sintering, yields of 98% were obtained using the standard test pattern. A two-dimensional modeling program was written for simulating short-channel MOSFETs with nonuniform substrate doping. A key simplifying assumption used is that the majority carriers can be represented by a sheet charge at the silicon dioxide-silicon interface. Although the program is incomplete, a solution of the two-dimensional Poisson equation for the potential distribution was achieved. The status of other 2-D MOSFET simulation programs is summarized.

  18. Model-based estimation for dynamic cardiac studies using ECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.

    1994-06-01

    In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). The authors construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed.
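
    For a scalar parameter and independent Poisson projection data, the Cramer-Rao lower bound used as the benchmark is straightforward to compute; the sketch below uses a linear measurement model lam_i = a_i*theta + b_i that is purely illustrative, not the paper's heart model.

        import numpy as np

        # CRLB for theta estimated from independent Poisson counts y_i with
        # means lam_i = a_i * theta + b_i (sensitivities a_i, backgrounds b_i).
        theta = 4.0                       # true parameter value
        a = np.array([2.0, 1.5, 0.5])     # sensitivity of each projection bin
        b = np.array([1.0, 1.0, 1.0])     # background counts per bin
        lam = a * theta + b               # expected counts

        # Fisher information: I(theta) = sum_i (d lam_i / d theta)^2 / lam_i.
        fisher = np.sum(a**2 / lam)
        crlb = 1.0 / fisher               # variance bound for unbiased estimators
        print("CRLB on var(theta_hat):", crlb)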

  19. Homogeneous-heterogeneous reactions in curved channel with porous medium

    NASA Astrophysics Data System (ADS)

    Hayat, T.; Ayub, Sadia; Alsaedi, A.

    2018-06-01

    The purpose of the present investigation is to examine peristaltic flow through a porous medium in a curved conduit. The problem is modeled for an incompressible, electrically conducting Ellis fluid. The influence of the porous medium is tackled via modified Darcy's law. The considered model utilizes homogeneous-heterogeneous reactions with equal diffusivities for reactant and autocatalyst. Constitutive equations are formulated in the presence of viscous dissipation. The channel walls are compliant in nature. Governing equations are modeled and simplified under the assumptions of small Reynolds number and large wavelength. Graphical results for velocity, temperature, heat transfer coefficient and homogeneous-heterogeneous reaction parameters are examined for the parameters entering into the problem. Results reveal that both the homogeneous-heterogeneous reaction effect and the heat transfer rate are enhanced with increasing curvature of the channel.

  20. Characterization of geostationary particle signatures based on the 'injection boundary' model

    NASA Technical Reports Server (NTRS)

    Mauk, B. H.; Meng, C.-I.

    1983-01-01

    A simplified analytical procedure is used to characterize the details of geostationary particle signatures, in order to lend support to the 'injection boundary' concept. The signatures are generated by the time-of-flight effects evolving from an initial sharply defined, double spiraled boundary configuration. Complex and highly variable dispersion patterns often observed by geostationary satellites are successfully reproduced through the exclusive use of the most fundamental convection configuration characteristics. Many of the details of the patterns have not been previously presented. It is concluded that most of the dynamical dispersion features can be mapped to the double spiral boundary without further ad hoc assumptions, and that predicted and observed dispersion patterns exhibit symmetries distinct from those associated with the quasi-stationary particle convection patterns.

  1. Unimolecular decomposition reactions at low-pressure: A comparison of competitive methods

    NASA Technical Reports Server (NTRS)

    Adams, G. F.

    1980-01-01

    The lack of a simple rate coefficient expression describing the pressure and temperature dependence hampers chemical modeling of flame systems. Recently developed simplified models of unimolecular processes allow the calculation of rate constants for thermal unimolecular reactions and recombinations at the low-pressure limit, at the high-pressure limit, and in the intermediate fall-off region. A comparison between two different applications of Troe's simplified model, and a comparison between the simplified model and classic RRKM theory, are described.
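
    For reference, the fall-off behavior in question is conventionally written in the Lindemann-Troe form (standard textbook expressions, not transcribed from this report):

        k([M]) = k_inf * (P_r / (1 + P_r)) * F,   with   P_r = k_0 [M] / k_inf,

    where k_0 and k_inf are the low- and high-pressure limiting rate constants and [M] is the bath-gas concentration. The Lindemann model sets the broadening factor F = 1, while Troe's simplified model supplies F from a center broadening factor F_c, e.g. in the common symmetric approximation

        log10 F = log10 F_c / (1 + (log10 P_r / N)^2),   N ~ 0.75 - 1.27 log10 F_c.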

  2. Simplified models for dark matter searches at the LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdallah, Jalal; Araujo, Henrique; Arbey, Alexandre

    This document outlines a set of simplified models for dark matter and its interactions with Standard Model particles. It is intended to summarize the main characteristics that these simplified models have when applied to dark matter searches at the LHC, and to provide a number of useful expressions for reference. The list of models includes both s-channel and t-channel scenarios. For s-channel, spin-0 and spin-1 mediations are discussed, and also realizations where the Higgs particle provides a portal between the dark and visible sectors. The guiding principles underpinning the proposed simplified models are spelled out, and some suggestions for implementation are presented.

  3. Simplified Models for Dark Matter Searches at the LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdallah, Jalal

    This document outlines a set of simplified models for dark matter and its interactions with Standard Model particles. It is intended to summarize the main characteristics that these simplified models have when applied to dark matter searches at the LHC, and to provide a number of useful expressions for reference. The list of models includes both s-channel and t-channel scenarios. For s-channel, spin-0 and spin-1 mediations are discussed, and also realizations where the Higgs particle provides a portal between the dark and visible sectors. The guiding principles underpinning the proposed simplified models are spelled out, and some suggestions for implementation are presented.

  4. Simplified Models for Dark Matter Searches at the LHC

    DOE PAGES

    Abdallah, Jalal

    2015-08-11

    This document outlines a set of simplified models for dark matter and its interactions with Standard Model particles. It is intended to summarize the main characteristics that these simplified models have when applied to dark matter searches at the LHC, and to provide a number of useful expressions for reference. The list of models includes both s-channel and t-channel scenarios. For s-channel, spin-0 and spin-1 mediations are discussed, and also realizations where the Higgs particle provides a portal between the dark and visible sectors. The guiding principles underpinning the proposed simplified models are spelled out, and some suggestions for implementation are presented.

  5. Reticulate evolution and the human past: an anthropological perspective.

    PubMed

    Winder, Isabelle C; Winder, Nick P

    2014-01-01

    The evidence is mounting that reticulate (web-like) evolution has shaped the biological histories of many macroscopic plants and animals, including non-human primates closely related to Homo sapiens, but the implications of this non-hierarchical evolution for anthropological enquiry are not yet fully understood. When they are understood, the result may be a paradigm shift in evolutionary anthropology. This paper reviews the evidence for reticulate evolution in the non-human primates and the human lineage. It then makes the case for extrapolating this sort of patterning to Homo sapiens and other hominins, and explores the implications this would have for research design, method and understandings of evolution in anthropology. Reticulation was significant in human evolutionary history and continues to influence societies today. Anthropologists and human scientists, whether working on ancient or modern populations, thus need to consider the implications of non-hierarchical evolution, particularly where molecular clocks, mathematical models and simplifying assumptions about evolutionary processes are used. This is not just a problem for palaeoanthropology. The simple fact of different mating systems among modern human groups, for example, may demand that more attention is paid to the potential for complexity in human genetic and cultural histories.

  6. Improving Estimation of Ground Casualty Risk From Reentering Space Objects

    NASA Technical Reports Server (NTRS)

    Ostrom, Chris L.

    2017-01-01

    A recent improvement to the long-term estimation of ground casualties from reentering space debris is the further refinement and update of the human population distribution. Previous human population distributions were based on global totals with simple scaling factors for future years, or on a coarse grid of population counts in a subset of the world's countries, each cell having its own projected growth rate. The newest population model includes a 5-fold refinement in both latitude and longitude resolution. All areas along a single latitude are combined to form a global population distribution as a function of latitude, creating a more accurate population estimate based on non-uniform growth at the country and area levels. Previous risk probability calculations used simplifying assumptions that did not account for the ellipsoidal nature of the Earth. The new method uses, first, a simple analytical method to estimate the amount of time spent above each latitude band for a debris object with a given orbit inclination and, second, a more complex numerical method that incorporates the effects of a non-spherical Earth. These new results are compared with the prior models to assess the magnitude of the effects on reentry casualty risk.
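
    The "simple analytical method" for a circular orbit can be made concrete: with sin(lat) = sin(inc) * sin(u), and the argument of latitude u advancing uniformly in time, the fraction of each orbit spent inside a latitude band follows in closed form. The sketch below is illustrative only (spherical Earth, circular orbit); the paper's numerical method adds the non-spherical effects.

        import numpy as np

        def residence_fraction(lat_lo, lat_hi, inc):
            """Fraction of a circular orbit spent between two northern
            latitudes (all angles in radians, 0 <= lat_lo < lat_hi)."""
            # Invert sin(lat) = sin(inc)*sin(u); each band is crossed once
            # ascending and once descending per orbit, hence the factor 1/pi.
            s = lambda lat: np.arcsin(np.clip(np.sin(lat) / np.sin(inc), -1, 1))
            return (s(lat_hi) - s(lat_lo)) / np.pi

        inc = np.radians(51.6)            # an ISS-like inclination, for example
        f = residence_fraction(np.radians(40), np.radians(50), inc)
        print(f"fraction of time between 40 and 50 deg N: {f:.3f}")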

  7. Liquefied Bleed for Stability and Efficiency of High Speed Inlets

    NASA Technical Reports Server (NTRS)

    Saunders, J. David; Davis, David; Barsi, Stephen J.; Deans, Matthew C.; Weir, Lois J.; Sanders, Bobby W.

    2014-01-01

    A mission analysis code was developed to perform a trade study on the effectiveness of liquefying bleed for the inlet of the first stage of a TSTO vehicle. By liquefying bleed, the vehicle takeoff gross weight (TOGW) could be reduced by 7 to 23%. Numerous simplifying assumptions were made and lessons were learned. Increased accuracy in future analyses can be achieved by: including a higher-fidelity model to capture the effect of rescaling (variable vehicle TOGW); refining the specific thrust and specific impulse models (T/m_a and Isp) to preserve fuel-to-air ratio; implementing LH2 properties in the T/m_a and Isp models; correlating the baseline design to other mission analyses and correcting vehicle design elements; implementing angle-of-attack effects on inlet characteristics; refining aerodynamic performance (to improve the L/D ratio at higher Mach numbers); examining the benefit of partial cooling or densification of the bleed air stream; incorporating higher-fidelity weight estimates for the liquefied bleed system (heat exchanger and liquid storage versus bleed duct weights) when these are more fully developed; adding trim drag or 6-degree-of-freedom trajectory analysis for higher fidelity; and investigating vehicle optimization for each of the bleed configurations.

  8. Feynman rules for the Standard Model Effective Field Theory in R_ξ-gauges

    NASA Astrophysics Data System (ADS)

    Dedes, A.; Materkowska, W.; Paraskevas, M.; Rosiek, J.; Suxho, K.

    2017-06-01

    We assume that New Physics effects are parametrized within the Standard Model Effective Field Theory (SMEFT) written in a complete basis of gauge invariant operators up to dimension 6, commonly referred to as "Warsaw basis". We discuss all steps necessary to obtain a consistent transition to the spontaneously broken theory and several other important aspects, including the BRST-invariance of the SMEFT action for linear R_ξ-gauges. The final theory is expressed in a basis characterized by SM-like propagators for all physical and unphysical fields. The effect of the non-renormalizable operators appears explicitly in triple or higher multiplicity vertices. In this mass basis we derive the complete set of Feynman rules, without resorting to any simplifying assumptions such as baryon-, lepton-number or CP conservation. As it turns out, for most SMEFT vertices the expressions are reasonably short, with a noticeable exception of those involving 4, 5 and 6 gluons. We have also supplemented our set of Feynman rules, given in an appendix here, with a publicly available Mathematica code working with the FeynRules package and producing output which can be integrated with other symbolic algebra or numerical codes for automatic SMEFT amplitude calculations.

  9. Exact Solutions for Wind-Driven Coastal Upwelling and Downwelling over Sloping Topography

    NASA Astrophysics Data System (ADS)

    Choboter, P.; Duke, D.; Horton, J.; Sinz, P.

    2009-12-01

    The dynamics of wind-driven coastal upwelling and downwelling are studied using a simplified dynamical model. Exact solutions are examined as a function of time and over a family of sloping topographies. Assumptions in the two-dimensional model include a frictionless ocean interior below the surface Ekman layer, and no alongshore dependence of the variables; however, dependence in the cross-shore and vertical directions is retained. Additionally, density and alongshore momentum are advected by the cross-shore velocity in order to maintain thermal wind. The time-dependent initial-value problem is solved with constant initial stratification and no initial alongshore flow. An alongshore pressure gradient is added to allow the cross-shore flow to be geostrophically balanced far from shore. Previously, this model has been used to study upwelling over flat-bottom and sloping topographies, but the novel feature in this work is the discovery of exact solutions for downwelling. These exact solutions are compared to numerical solutions from a primitive-equation ocean model, based on the Princeton Ocean Model, configured in a similar two-dimensional geometry. Many typical features of the evolution of density and velocity during downwelling are displayed by the analytical model.

  10. Investigation of the asymptotic state of rotating turbulence using large-eddy simulation

    NASA Technical Reports Server (NTRS)

    Squires, Kyle D.; Chasnov, Jeffrey R.; Mansour, Nagi N.; Cambon, Claude

    1993-01-01

    Study of turbulent flows in rotating reference frames has long been an area of considerable scientific and engineering interest. Because of its importance, the subject of turbulence in rotating reference frames has motivated over the years a large number of theoretical, experimental, and computational studies. The bulk of these previous works has served to demonstrate that the effect of system rotation on turbulence is subtle and remains exceedingly difficult to predict. A rotating flow of particular interest in many studies, including the present work, is examination of the effect of solid-body rotation on an initially isotropic turbulent flow. One of the principal reasons for the interest in this flow is that it represents the most basic turbulent flow whose structure is altered by system rotation but without the complicating effects introduced by mean strains or flow inhomogeneities. The assumption of statistical homogeneity considerably simplifies analysis and computation. The principal objective of the present study has been to examine the asymptotic state of solid-body rotation applied to an initially isotropic, high Reynolds number turbulent flow. Of particular interest has been to determine the degree of two-dimensionalization and the existence of asymptotic self-similar states in homogeneous rotating turbulence.

  11. Optimal allocation of trend following strategies

    NASA Astrophysics Data System (ADS)

    Grebenkov, Denis S.; Serror, Jeremy

    2015-09-01

    We consider a portfolio allocation problem for trend-following (TF) strategies on multiple correlated assets. Under simplifying assumptions of a Gaussian market and linear TF strategies, we derive analytical formulas for the mean and variance of the portfolio return. We then construct the optimal portfolio that maximizes risk-adjusted return by accounting for inter-asset correlations. The dynamic allocation problem for n assets is shown to be equivalent to the classical static allocation problem for n^2 virtual assets that include lead-lag corrections in the positions of the TF strategies. The respective roles of asset auto-correlations and inter-asset correlations are investigated in depth for the two-asset case and a sector model. In contrast to the principle of diversification, which suggests treating uncorrelated assets, we show that inter-asset correlations allow one to estimate apparent trends more reliably and to adjust the TF positions more efficiently. If properly accounted for, inter-asset correlations are not detrimental but beneficial for portfolio management and can open new profit opportunities for trend followers. These concepts are illustrated using daily returns of three highly correlated futures markets: the E-mini S&P 500, the Euro Stoxx 50 index, and the US 10-year T-note futures.
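
    The static problem the dynamic allocation reduces to is classical mean-variance optimization; as a reference point, the sketch below computes the weights w* = Sigma^{-1} mu / gamma that maximize the risk-adjusted return w'mu - (gamma/2) w'Sigma w. The expected returns and covariances here are invented; in the paper they are derived analytically for linear TF strategies.

        import numpy as np

        mu = np.array([0.04, 0.03, 0.05])          # expected strategy returns
        sigma = np.array([[0.040, 0.018, 0.010],   # return covariance matrix
                          [0.018, 0.030, 0.012],
                          [0.010, 0.012, 0.050]])
        gamma = 4.0                                # risk-aversion coefficient

        # First-order condition of max_w w'mu - (gamma/2) w'Sigma w gives
        # w* = Sigma^{-1} mu / gamma.
        w = np.linalg.solve(sigma, mu) / gamma
        print("optimal weights:", w)
        print("expected return:", w @ mu, "variance:", w @ sigma @ w)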

  12. Effects from Unsaturated Zone Flow during Oscillatory Hydraulic Testing

    NASA Astrophysics Data System (ADS)

    Lim, D.; Zhou, Y.; Cardiff, M. A.; Barrash, W.

    2014-12-01

    In analyzing pumping tests on unconfined aquifers, the impact of the unsaturated zone is often neglected. Instead, desaturation at the water table is often treated as a free-surface boundary, which is simple and allows for relatively fast computation. Richards' equation models, which account for unsaturated flow, can be compared with saturated flow models to validate the use of Darcy's Law. In this presentation, we examine the appropriateness of using fast linear steady-periodic models based on linearized water table conditions in order to simulate oscillatory pumping tests in phreatic aquifers. We compare oscillatory pumping test models including: 1) a 2-D radially-symmetric phreatic aquifer model with a partially penetrating well, simulated using both Darcy's Law and Richards' Equation in COMSOL; and 2) a linear phase-domain numerical model developed in MATLAB. Both COMSOL and MATLAB models are calibrated to match oscillatory pumping test data collected in the summer of 2013 at the Boise Hydrogeophysical Research Site (BHRS), and we examine the effect of model type on the associated parameter estimates. The results of this research will aid unconfined aquifer characterization efforts and help to constrain the impact of the simplifying physical assumptions often employed during test analysis.

  13. HZETRN: Description of a free-space ion and nucleon transport and shielding computer program

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Badavi, Francis F.; Cucinotta, Francis A.; Shinn, Judy L.; Badhwar, Gautam D.; Silberberg, R.; Tsao, C. H.; Townsend, Lawrence W.; Tripathi, Ram K.

    1995-01-01

    The high-charge-and-energy (HZE) transport computer program HZETRN is developed to address the problems of free-space radiation transport and shielding. The HZETRN program is intended specifically for the design engineer who is interested in obtaining fast and accurate dosimetric information for the design and construction of space modules and devices. The program is based on a one-dimensional space-marching formulation of the Boltzmann transport equation with a straight-ahead approximation. The effect of the long-range Coulomb force and electron interaction is treated as a continuous slowing-down process. Atomic (electronic) stopping power coefficients with energies above a few A MeV are calculated by using Bethe's theory including Bragg's rule, Ziegler's shell corrections, and effective charge. Nuclear absorption cross sections are obtained from fits to quantum calculations and total cross sections are obtained with a Ramsauer formalism. Nuclear fragmentation cross sections are calculated with a semiempirical abrasion-ablation fragmentation model. The relation of the final computer code to the Boltzmann equation is discussed in the context of simplifying assumptions. A detailed description of the flow of the computer code, input requirements, sample output, and compatibility requirements for non-VAX platforms are provided.
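
    The marching formulation can be summarized schematically as the straight-ahead, continuous-slowing-down Boltzmann equation (notation ours; the report gives the precise form and its numerical treatment):

        [ d/dx - (1/A_j) d/dE S_j(E) + sigma_j(E) ] phi_j(x, E)
            = sum_k integral_E^inf sigma_jk(E, E') phi_k(x, E') dE',

    where phi_j is the flux of ion species j at depth x and energy E per nucleon, A_j the mass number, S_j the stopping power, sigma_j the total absorption cross section, and sigma_jk the fragmentation cross sections coupling projectile species k into species j.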

  14. AMS-02 fits dark matter

    NASA Astrophysics Data System (ADS)

    Balázs, Csaba; Li, Tong

    2016-05-01

    In this work we perform a comprehensive statistical analysis of the AMS-02 electron and positron fluxes and the antiproton-to-proton ratio in the context of a simplified dark matter model. We include known, standard astrophysical sources and a dark matter component in the cosmic ray injection spectra. To predict the AMS-02 observables we use propagation parameters extracted from observed fluxes of heavier nuclei and the low-energy part of the AMS-02 data. We assume that the dark matter particle is a Majorana fermion coupling to third-generation fermions via a spin-0 mediator, and annihilating to multiple channels at once. The simultaneous presence of various annihilation channels provides the dark matter model with additional flexibility, and this enables us to simultaneously fit all cosmic ray spectra using a simple particle physics model and coherent astrophysical assumptions. Our results indicate that AMS-02 observations are not only consistent with the dark matter hypothesis within the uncertainties, but that adding a dark matter contribution improves the fit to the data. Assuming, however, that dark matter is solely responsible for this improvement of the fit, it is difficult to evade the latest CMB limits in this model.

  15. Improving Estimation of Ground Casualty Risk from Reentering Space Objects

    NASA Technical Reports Server (NTRS)

    Ostrom, C.

    2017-01-01

    A recent improvement to the long-term estimation of ground casualties from reentering space debris is the further refinement and update of the human population distribution. Previous human population distributions were based on global totals with simple scaling factors for future years, or on a coarse grid of population counts in a subset of the world's countries, each cell having its own projected growth rate. The newest population model includes a 5-fold refinement in both latitude and longitude resolution. All areas along a single latitude are combined to form a global population distribution as a function of latitude, creating a more accurate population estimate based on non-uniform growth at the country and area levels. Previous risk probability calculations used simplifying assumptions that did not account for the ellipsoidal nature of the Earth. The new method uses, first, a simple analytical method to estimate the amount of time spent above each latitude band for a debris object with a given orbit inclination and, second, a more complex numerical method that incorporates the effects of a non-spherical Earth. These new results are compared with the prior models to assess the magnitude of the effects on reentry casualty risk.

  16. The geography of spatial synchrony.

    PubMed

    Walter, Jonathan A; Sheppard, Lawrence W; Anderson, Thomas L; Kastens, Jude H; Bjørnstad, Ottar N; Liebhold, Andrew M; Reuman, Daniel C

    2017-07-01

    Spatial synchrony, defined as correlated temporal fluctuations among populations, is a fundamental feature of population dynamics, but many aspects of synchrony remain poorly understood. Few studies have examined detailed geographical patterns of synchrony; instead most focus on how synchrony declines with increasing linear distance between locations, making the simplifying assumption that distance decay is isotropic. By synthesising and extending prior work, we show how geography of synchrony, a term which we use to refer to detailed spatial variation in patterns of synchrony, can be leveraged to understand ecological processes including identification of drivers of synchrony, a long-standing challenge. We focus on three main objectives: (1) showing conceptually and theoretically four mechanisms that can generate geographies of synchrony; (2) documenting complex and pronounced geographies of synchrony in two important study systems; and (3) demonstrating a variety of methods capable of revealing the geography of synchrony and, through it, underlying organism ecology. For example, we introduce a new type of network, the synchrony network, the structure of which provides ecological insight. By documenting the importance of geographies of synchrony, advancing conceptual frameworks, and demonstrating powerful methods, we aim to help elevate the geography of synchrony into a mainstream area of study and application. © 2017 John Wiley & Sons Ltd/CNRS.

  17. Simplification improves understanding of informed consent information in clinical trials regardless of health literacy level.

    PubMed

    Kim, Eun Jin; Kim, Su Hyun

    2015-06-01

    This study evaluated the effect of a simplified informed consent form for clinical trials on the understanding and efficacy of informed consent information across health literacy levels. A total of 150 participants were randomly assigned to one of two groups and provided with either standard or simplified consent forms for a cancer clinical trial. The features of the simplified informed consent form included plain language, short sentences, diagrams, pictures, and bullet points. Levels of objective and subjective understanding were significantly higher in participants provided with simplified informed consent forms relative to those provided with standard informed consent forms. The interaction effects between type of consent form and health literacy level on objective and subjective understanding were nonsignificant. Simplified informed consent was effective in enhancing participant's subjective and objective understanding regardless of health literacy. © The Author(s) 2015.

  18. 46 CFR 178.215 - Weight of passengers and crew.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., for which stability information is based on the results of a simplified stability proof test. (b... simplified stability proof test and the number of passengers and crew included in the total test weight... TONS) INTACT STABILITY AND SEAWORTHINESS Stability Instructions for Operating Personnel § 178.215...

  19. 46 CFR 178.215 - Weight of passengers and crew.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., for which stability information is based on the results of a simplified stability proof test. (b... simplified stability proof test and the number of passengers and crew included in the total test weight... TONS) INTACT STABILITY AND SEAWORTHINESS Stability Instructions for Operating Personnel § 178.215...

  20. 46 CFR 178.215 - Weight of passengers and crew.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., for which stability information is based on the results of a simplified stability proof test. (b... simplified stability proof test and the number of passengers and crew included in the total test weight... TONS) INTACT STABILITY AND SEAWORTHINESS Stability Instructions for Operating Personnel § 178.215...

  1. 46 CFR 178.215 - Weight of passengers and crew.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., for which stability information is based on the results of a simplified stability proof test. (b... simplified stability proof test and the number of passengers and crew included in the total test weight... TONS) INTACT STABILITY AND SEAWORTHINESS Stability Instructions for Operating Personnel § 178.215...

  2. 23 CFR 646.218 - Simplified procedure for accelerating grade crossing improvements.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... preliminary engineering costs may include those incurred in selecting crossings to be improved, determining the type of improvement for each crossing, estimating the cost and preparing the required agreement... ENGINEERING AND TRAFFIC OPERATIONS RAILROADS Railroad-Highway Projects § 646.218 Simplified procedure for...

  3. 23 CFR 646.218 - Simplified procedure for accelerating grade crossing improvements.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... preliminary engineering costs may include those incurred in selecting crossings to be improved, determining the type of improvement for each crossing, estimating the cost and preparing the required agreement... ENGINEERING AND TRAFFIC OPERATIONS RAILROADS Railroad-Highway Projects § 646.218 Simplified procedure for...

  4. 23 CFR 646.218 - Simplified procedure for accelerating grade crossing improvements.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... preliminary engineering costs may include those incurred in selecting crossings to be improved, determining the type of improvement for each crossing, estimating the cost and preparing the required agreement... ENGINEERING AND TRAFFIC OPERATIONS RAILROADS Railroad-Highway Projects § 646.218 Simplified procedure for...

  5. 23 CFR 646.218 - Simplified procedure for accelerating grade crossing improvements.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... preliminary engineering costs may include those incurred in selecting crossings to be improved, determining the type of improvement for each crossing, estimating the cost and preparing the required agreement... ENGINEERING AND TRAFFIC OPERATIONS RAILROADS Railroad-Highway Projects § 646.218 Simplified procedure for...

  6. 23 CFR 646.218 - Simplified procedure for accelerating grade crossing improvements.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... preliminary engineering costs may include those incurred in selecting crossings to be improved, determining the type of improvement for each crossing, estimating the cost and preparing the required agreement... ENGINEERING AND TRAFFIC OPERATIONS RAILROADS Railroad-Highway Projects § 646.218 Simplified procedure for...

  7. The Prediction of Broadband Shock-Associated Noise Including Propagation Effects

    NASA Technical Reports Server (NTRS)

    Miller, Steven; Morris, Philip J.

    2011-01-01

    An acoustic analogy is developed based on the Euler equations for broadband shock-associated noise (BBSAN) that directly incorporates the vector Green's function of the linearized Euler equations and a steady Reynolds-Averaged Navier-Stokes solution (SRANS) as the mean flow. The vector Green's function allows the BBSAN propagation through the jet shear layer to be determined. The large-scale coherent turbulence is modeled by two-point second order velocity cross-correlations. Turbulent length and time scales are related to the turbulent kinetic energy and dissipation. An adjoint vector Green's function solver is implemented to determine the vector Green's function based on a locally parallel mean flow at streamwise locations of the SRANS solution. However, the developed acoustic analogy could easily be based on any adjoint vector Green's function solver, such as one that makes no assumptions about the mean flow. The newly developed acoustic analogy can be simplified to one that uses the Green's function associated with the Helmholtz equation, which is consistent with the formulation of Morris and Miller (AIAAJ 2010). A large number of predictions are generated using three different nozzles over a wide range of fully expanded Mach numbers and jet stagnation temperatures. These predictions are compared with experimental data from multiple jet noise labs. In addition, two models for the so-called 'fine-scale' mixing noise are included in the comparisons. Improved BBSAN predictions are obtained relative to other models that do not include the propagation effects, especially in the upstream direction of the jet.

  8. A Bottom-Up Approach to Understanding Protein Layer Formation at Solid-Liquid Interfaces

    PubMed Central

    Kastantin, Mark; Langdon, Blake B.; Schwartz, Daniel K.

    2014-01-01

    A common goal across different fields (e.g. separations, biosensors, biomaterials, pharmaceuticals) is to understand how protein behavior at solid-liquid interfaces is affected by environmental conditions. Temperature, pH, ionic strength, and the chemical and physical properties of the solid surface, among many factors, can control microscopic protein dynamics (e.g. adsorption, desorption, diffusion, aggregation) that contribute to macroscopic properties like time-dependent total protein surface coverage and protein structure. These relationships are typically studied through a top-down approach in which macroscopic observations are explained using analytical models that are based upon reasonable, but not universally true, simplifying assumptions about microscopic protein dynamics. Conclusions connecting microscopic dynamics to environmental factors can be heavily biased by potentially incorrect assumptions. In contrast, more complicated models avoid several of the common assumptions but require many parameters that have overlapping effects on predictions of macroscopic, average protein properties. Consequently, these models are poorly suited for the top-down approach. Because the sophistication incorporated into these models may ultimately prove essential to understanding interfacial protein behavior, this article proposes a bottom-up approach in which direct observations of microscopic protein dynamics specify parameters in complicated models, which then generate macroscopic predictions to compare with experiment. In this framework, single-molecule tracking has proven capable of making direct measurements of microscopic protein dynamics, but must be complemented by modeling to combine and extrapolate many independent microscopic observations to the macro-scale. The bottom-up approach is expected to better connect environmental factors to macroscopic protein behavior, thereby guiding rational choices that promote desirable protein behaviors. PMID:24484895

  9. A simplified ductile-brittle transition temperature tester

    NASA Technical Reports Server (NTRS)

    Arias, A.

    1973-01-01

    The construction and operation of a versatile, simplified bend tester is described. The tester is usable at temperatures from -192 to 650 C in air. Features of the tester include a single test chamber for cryogenic or elevated temperatures, specimen-aligning support rollers, and either manual or motorized operation.

  10. Estimating Green Net National Product for Puerto Rico: An Economic Measure of Sustainability

    NASA Astrophysics Data System (ADS)

    Wu, Shanshan; Heberling, Matthew T.

    2016-04-01

    This paper presents the data sources and methodology used to estimate Green Net National Product (GNNP), an economic metric of sustainability, for Puerto Rico. Using the change in GNNP as a one-sided test of weak sustainability (i.e., positive growth in GNNP is not enough to show the economy is sustainable), we measure the movement away from sustainability by examining the change in GNNP from 1993 to 2009. In order to calculate GNNP, we require both economic and natural capital data, but limited data for Puerto Rico require a number of simplifying assumptions. Based on the environmental challenges faced by Puerto Rico, we include damages from air emissions and solid waste, the storm protection value of mangroves and the value of extracting crushed stone as components in the depreciation of natural capital. Our estimate of GNNP also includes the value of time, which captures the effects of technological progress. The results show that GNNP had an increasing trend over the 17 years studied with two periods of negative growth (2004-2006 and 2007-2008). Our additional analysis suggests that the negative growth in 2004-2006 was possibly due to a temporary economic downturn. However, the negative growth in 2007-2008 was likely from the decline in the value of time, suggesting the island of Puerto Rico was moving away from sustainability during this time.
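
    In schematic terms (our notation, simplified from the green accounting literature), the estimate assembles

        GNNP_t = NNP_t - D_nat_t + VT_t,

    where NNP_t is conventional net national product, D_nat_t aggregates the depreciation of natural capital (damages from air emissions and solid waste, lost storm-protection services of mangroves, and the value of extracted crushed stone), and VT_t is the value of time capturing technological progress; the sign of the change in GNNP_t then drives the one-sided sustainability test.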

  11. Minimum reaction network necessary to describe Ar/CF4 plasma etch

    NASA Astrophysics Data System (ADS)

    Helpert, Sofia; Chopra, Meghali; Bonnecaze, Roger T.

    2018-03-01

    Predicting the etch and deposition profiles created using plasma processes is challenging due to the complexity of plasma discharges and plasma-surface interactions. Volume-averaged global models allow for efficient prediction of important processing parameters and provide a means to quickly determine the effect of a variety of process inputs on the plasma discharge. However, global models are limited by the simplifying assumptions used to describe the chemical reaction network. Here a database of 128 reactions is compiled for an Ar/CF4 plasma, with the corresponding rate constants collected from 24 sources, using the platform RODEo (Recipe Optimization for Deposition and Etching). Six different reaction sets, employing anywhere from 12 to all 128 reactions, were tested to evaluate the impact of the reaction database on particle species densities and electron temperature. Because many of the reactions used in our database had conflicting rate constants as reported in the literature, we also present a method to deal with those uncertainties when constructing the model, which includes weighting each reaction rate and filtering outliers. By analyzing the link between a reaction's rate constant and its impact on the predicted plasma densities and electron temperatures, we determine the conditions under which a reaction is deemed necessary to the plasma model. The results of this study provide a foundation for determining the minimal set of reactions that must be included in the reaction set of the plasma model.
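
    One simple way to reconcile conflicting literature values for a single reaction, shown below, is to work in log space, reject outliers robustly, and take a weighted mean. This is a generic sketch of the idea, not the exact weighting and filtering procedure implemented in RODEo.

        import numpy as np

        # Reported rate constants for one reaction, from different sources.
        k_reported = np.array([1.2e-16, 9.0e-17, 1.5e-16, 8.0e-15])  # m^3/s
        weights = np.array([1.0, 1.0, 1.0, 0.5])   # e.g., source reliability

        logk = np.log10(k_reported)
        med = np.median(logk)
        mad = np.median(np.abs(logk - med))        # robust spread estimate
        keep = np.abs(logk - med) < 3.0 * 1.4826 * mad   # ~3-sigma filter

        k_best = 10 ** np.average(logk[keep], weights=weights[keep])
        print(f"accepted {keep.sum()}/{logk.size} values, k = {k_best:.2e} m^3/s")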

  12. Estimating Green Net National Product for Puerto Rico: An Economic Measure of Sustainability.

    PubMed

    Wu, Shanshan; Heberling, Matthew T

    2016-04-01

    This paper presents the data sources and methodology used to estimate Green Net National Product (GNNP), an economic metric of sustainability, for Puerto Rico. Using the change in GNNP as a one-sided test of weak sustainability (i.e., positive growth in GNNP is not enough to show the economy is sustainable), we measure the movement away from sustainability by examining the change in GNNP from 1993 to 2009. In order to calculate GNNP, we require both economic and natural capital data, but limited data for Puerto Rico require a number of simplifying assumptions. Based on the environmental challenges faced by Puerto Rico, we include damages from air emissions and solid waste, the storm protection value of mangroves and the value of extracting crushed stone as components in the depreciation of natural capital. Our estimate of GNNP also includes the value of time, which captures the effects of technological progress. The results show that GNNP had an increasing trend over the 17 years studied with two periods of negative growth (2004-2006 and 2007-2008). Our additional analysis suggests that the negative growth in 2004-2006 was possibly due to a temporary economic downturn. However, the negative growth in 2007-2008 was likely from the decline in the value of time, suggesting the island of Puerto Rico was moving away from sustainability during this time.

  13. Sound production due to large-scale coherent structures

    NASA Technical Reports Server (NTRS)

    Gatski, T. B.

    1979-01-01

    The acoustic pressure fluctuations due to large-scale finite amplitude disturbances in a free turbulent shear flow are calculated. The flow is decomposed into three component scales; the mean motion, the large-scale wave-like disturbance, and the small-scale random turbulence. The effect of the large-scale structure on the flow is isolated by applying both a spatial and phase average on the governing differential equations and by initially taking the small-scale turbulence to be in energetic equilibrium with the mean flow. The subsequent temporal evolution of the flow is computed from global energetic rate equations for the different component scales. Lighthill's theory is then applied to the region with the flowfield as the source and an observer located outside the flowfield in a region of uniform velocity. Since the time history of all flow variables is known, a minimum of simplifying assumptions for the Lighthill stress tensor is required, including no far-field approximations. A phase average is used to isolate the pressure fluctuations due to the large-scale structure, and also to isolate the dynamic process responsible. Variation of mean square pressure with distance from the source is computed to determine the acoustic far-field location and decay rate, and, in addition, spectra at various acoustic field locations are computed and analyzed. Also included are the effects of varying the growth and decay of the large-scale disturbance on the sound produced.
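
    Since the computation applies Lighthill's theory directly, the standard form of the analogy is worth recording (textbook notation, not transcribed from this report):

        d^2 rho'/dt^2 - c_0^2 nabla^2 rho' = d^2 T_ij / (dx_i dx_j),
        T_ij = rho u_i u_j + (p' - c_0^2 rho') delta_ij - tau_ij,

    where rho' and p' are the density and pressure fluctuations, c_0 the ambient sound speed, and tau_ij the viscous stress tensor; retaining the full time history of T_ij is what allows the calculation to proceed with a minimum of simplifying assumptions and no far-field approximations.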

  14. Comparison of fluid neutral models for one-dimensional plasma edge modeling with a finite volume solution of the Boltzmann equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horsten, N., E-mail: niels.horsten@kuleuven.be; Baelmans, M.; Dekeyser, W.

    2016-01-15

    We derive fluid neutral approximations for a simplified 1D edge plasma model, suitable to study the neutral behavior close to the target of a nuclear fusion divertor, and compare its solutions to the solution of the corresponding kinetic Boltzmann equation. The plasma is considered as a fixed background extracted from a detached 2D simulation. We show that the Maxwellian equilibrium distribution is already obtained very close to the target, justifying the use of a fluid approximation. We compare three fluid neutral models: (i) a diffusion model; (ii) a pressure-diffusion model (i.e., a combination of a continuity and momentum equation) assuming equal neutral and ion temperatures; and (iii) the pressure-diffusion model coupled to a neutral energy equation taking into account temperature differences between neutrals and ions. Partial reflection of neutrals reaching the boundaries is included in both the kinetic and fluid models. We propose two methods to obtain an incident neutral flux boundary condition for the fluid models: one based on a diffusion approximation and the other assuming a truncated Chapman-Enskog distribution. The pressure-diffusion model predicts the plasma sources very well. The diffusion boundary condition gives slightly better results overall. Although including an energy equation still improves the results, the assumption of equal ion and neutral temperature already gives a very good approximation.

  15. Preliminary findings on the effects of geometry on two-phase flow through volcanic conduits

    NASA Astrophysics Data System (ADS)

    Mitchell, K. L.; Wilson, L.; Lane, S. J.; James, M. R.

    2003-04-01

    We attempt to ascertain whether some of the geometrical assumptions utilised in modelling of flows through volcanic conduits are valid. Flow is often assumed to be through a vertical conduit, but some volcanoes, such as Pu'u 'O'o (Kilauea, Hawai'i) and Stromboli (Italy), are known to exhibit inclined or more complex conduit systems. Our numerical and experimental studies have revealed that conduit inclination is a first-order influence on flow properties and eruptive style. Even a few degrees of inclination from vertical can increase gas-liquid phase separation by locally enhancing the gas volume fraction on the upper surface of the conduit. We explore the consequences of phase separation and slug flow for styles of magmatic eruption, and consider how these apply to particular eruptions. Modellers also tend to assume a simple parallel-sided geometry for volcanic conduits. Some have used a pressure-balanced assumption allowing conduits to choke and flare, resulting in higher eruption velocities. The pressure-balanced assumption is flawed in that it does not deal with the effects of compressibility and associated shocks when the flow is supersonic. Both parallel-sided and pressure-balanced assumptions avoid addressing how conduit shape evolves from an initial dyke-shaped fracture. However, we assert that evolution of conduit shape is impossible to quantify accurately using a deterministic approach. Therefore we adopt a simplified approach, with the initial conduit shape as a blade-shaped dyke, and the potential end-member as a system that is pressure-balanced up to the supersonic choking point and undetermined beyond (flow is constrained by a narrow jet envelope and not by the walls). Intermediate geometries are assumed to change quasi-steadily at locations where conduit wall stresses are high, and the consequences of these geometries are explored. We find that quite small changes in conduit geometry, which are likely to occur in volcanic systems, can have a significant effect on flow speeds.

  16. Assumptions to the annual energy outlook 1999 : with projections to 2020

    DOT National Transportation Integrated Search

    1998-12-16

    This paper presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook 1999 (AEO99), including general features of the model structure, assumptions concerning energy ...

  17. Assumptions to the annual energy outlook 2000 : with projections to 2020

    DOT National Transportation Integrated Search

    2000-01-01

    This paper presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook 2000 (AEO2000), including general features of the model structure, assumptions concerning energy ...

  18. Assumptions to the annual energy outlook 2001 : with projections to 2020

    DOT National Transportation Integrated Search

    2000-12-01

    This report presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook 2001 (AEO2001), including general features of the model structure, assumptions concerning energy ...

  19. Assumptions for the annual energy outlook 2003 : with projections to 2025

    DOT National Transportation Integrated Search

    2003-01-01

    This report presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook 2003 (AEO2003), including general features of the model structure, assumptions concerning energy ...

  20. Air-sea fluxes of momentum and mass in the presence of wind waves

    NASA Astrophysics Data System (ADS)

    Zülicke, Christoph

    2010-05-01

    An air-sea interaction model (ASIM) is developed including the effect of wind waves on momentum and mass transfer. This includes the derivation of profiles of dissipation rate, flow speed and concentration from a certain height to a certain depth. Simplifying assumptions on the turbulent closure, skin-bulk matching and the spectral wave model allow for an analytic treatment. Particular emphasis was put on the inclusion of primary (gravity) waves and secondary (capillary-gravity) waves. The model was tuned to match wall-flow theory and data on wave height and slope. Growing waves reduce the air-side turbulent stress and lead to an increasing drag coefficient. In the sea, breaking waves inject turbulent kinetic energy and accelerate the transfer. Cross-checks against data on wave-related momentum and energy flux, dissipation rate and transfer velocity were satisfactory. The evaluation of ASIM allowed for the analytical calculation of bulk formulae for the wind-dependent gas transfer velocity, including information on the air-side momentum transfer (drag coefficient) and the sea-side gas transfer (Dalton number). The following regimes have been identified: the smooth waveless regime, with a transfer velocity proportional to (wind) × (diffusion)^(2/3); the primary wave regime, with a wind speed dependence proportional to (wind)^(1/4) × (diffusion)^(1/2) × (waveage)^(1/4); and the secondary wave regime, with a more-than-linear wind speed dependence like (wind)^(15/8) × (diffusion)^(1/2) × (waveage)^(5/8). These findings complete the current understanding of air-sea interaction for medium winds between 2 and 20 m s^-1.

  1. Simplifier: a web tool to eliminate redundant NGS contigs.

    PubMed

    Ramos, Rommel Thiago Jucá; Carneiro, Adriana Ribeiro; Azevedo, Vasco; Schneider, Maria Paula; Barh, Debmalya; Silva, Artur

    2012-01-01

Modern genomic sequencing technologies produce a large amount of data with reduced cost per base; however, these data consist of short reads. This reduction in read size, compared to reads obtained with previous methodologies, presents new challenges, including a need for efficient algorithms for the assembly of genomes from short reads and for resolving repetitions. Additionally, after ab initio assembly, curation of the hundreds or thousands of contigs generated by assemblers demands considerable time and computational resources. We developed Simplifier, a stand-alone software tool that selectively eliminates redundant sequences from the collection of contigs generated by ab initio assembly of genomes. Application of Simplifier to data generated by assembly of the genome of Corynebacterium pseudotuberculosis strain 258 reduced the number of contigs generated by ab initio methods from 8,004 to 5,272, a reduction of 34.14%; in addition, N50 increased from 1 kb to 1.5 kb. Processing the contigs of Escherichia coli DH10B with Simplifier reduced the mate-paired library by 17.47% and the fragment library by 23.91%. Simplifier removed redundant sequences from datasets produced by assemblers, thereby reducing the effort required for finalization of genome assembly in tests with data from prokaryotic organisms. Simplifier is available at http://www.genoma.ufpa.br/rramos/softwares/simplifier.xhtml; it requires Sun JDK 6 or higher.
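
    A minimal sketch of the core idea, assuming the redundancy criterion is containment of one contig in a longer one (the function names and the exact matching rule here are illustrative, not taken from the Simplifier source):

      # Drop any contig that is an exact substring of a longer contig,
      # checking the reverse complement as well. Illustrative only; the
      # real tool's redundancy criterion may be more permissive.
      def revcomp(seq):
          return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

      def remove_redundant(contigs):
          kept = []
          for c in sorted(contigs, key=len, reverse=True):  # longest first
              if any(c in k or revcomp(c) in k for k in kept):
                  continue  # contained in an already-kept contig: redundant
              kept.append(c)
          return kept

      print(remove_redundant(["ATCGATCG", "GATC", "TTTT"]))
      # -> ['ATCGATCG', 'TTTT']  (GATC is contained in the first contig)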

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bouchard, P.J.

A forthcoming revision to the R6 Leak-before-Break Assessment Procedure is briefly described. Practical application of the LbB concepts to safety-critical nuclear plant is illustrated by examples covering both low temperature and high temperature (>450°C) operating regimes. The examples highlight a number of issues which can make the development of a satisfactory LbB case problematic: for example, coping with highly loaded components, methodology assumptions and the definition of margins, the effect of crack closure owing to weld residual stresses, complex thermal stress fields or primary bending fields, the treatment of locally high stresses at crack intersections with free surfaces, the choice of local limit load solution when predicting ligament breakthrough, and the scope of calculations required to support even a simplified LbB case for high temperature steam pipe-work systems.

  3. Magnetosphere - Ionosphere - Thermosphere (MIT) Coupling at Jupiter

    NASA Astrophysics Data System (ADS)

    Yates, J. N.; Ray, L. C.; Achilleos, N.

    2017-12-01

Jupiter's upper atmospheric temperature is considerably higher than that predicted by solar Extreme Ultraviolet (EUV) heating alone. Simulations incorporating magnetosphere-ionosphere coupling effects into general circulation models have, to date, struggled to reproduce the observed atmospheric temperatures under simplifying assumptions such as azimuthal symmetry and a spin-aligned dipole magnetic field. Here we present the development of a full three-dimensional thermosphere model coupled in both hemispheres to an axisymmetric magnetosphere model. This new coupled model is based on the two-dimensional MIT model presented in Yates et al., 2014, and is a critical step towards the development of a fully coupled 3D MIT model. We discuss and compare the resulting thermospheric flows, energy balance and MI coupling currents with those presented in previous 2D MIT models.

  4. Model Error Estimation for the CPTEC Eta Model

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; daSilva, Arlindo

    1999-01-01

Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial-condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.
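
    One natural reading of the ratio, assuming model error e_m and initial-condition error e_i are uncorrelated so that the forecast error e_f = e_m + e_i obeys a Pythagorean decomposition (a sketch consistent with the abstract, not the paper's exact definitions):

        \tau = \frac{\langle \| e_m \|^2 \rangle}{\langle \| e_f \|^2 \rangle},
        \qquad
        \langle \| e_f \|^2 \rangle = \langle \| e_m \|^2 \rangle + \langle \| e_i \|^2 \rangle,

    so that tau = 0 when the forecast error is purely due to the initial condition and tau = 1 when it is purely due to the model.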

  5. Comparison of calculated and measured pressures on straight and swept-tip model rotor blades

    NASA Technical Reports Server (NTRS)

    Tauber, M. E.; Chang, I. C.; Caughey, D. A.; Phillipe, J. J.

    1983-01-01

Using the quasi-steady, full-potential code ROT22, pressures were calculated on straight and swept-tip model helicopter rotor blades at advance ratios of 0.40 and 0.45, and into the transonic tip speed range. The calculated pressures were compared with values measured in the tip regions of the model blades. Good agreement was found over a wide range of azimuth angles when the shocks on the blade were not too strong. However, strong shocks persisted longer than predicted by ROT22 when the blade was in the second quadrant. Since the unsteady flow effects present at high advance ratios primarily affect shock waves, the underprediction of shock strengths is attributed to the simplifying quasi-steady assumption made in ROT22.

  6. Ferromagnetic CNT suspended H2O+Cu nanofluid analysis through composite stenosed arteries with permeable wall

    NASA Astrophysics Data System (ADS)

    Akbar, Noreen Sher

    2015-08-01

In the present article, magnetic field effects on CNT-suspended copper nanoparticles for blood flow through composite stenosed arteries with a permeable wall are discussed. CNT-suspended copper nanoparticles in blood flow with water as the base fluid have not been explored previously. The equations for the CNT-suspended Cu-water nanofluid are developed here for the first time in the literature and simplified using the long-wavelength and low-Reynolds-number assumptions. Exact solutions have been evaluated for the velocity, pressure gradient, solid volume fraction of the nanoparticles and temperature profile. The effects of various flow parameters on the flow and heat transfer characteristics are illustrated. It is also observed that as the slip parameter increases, blood flows more slowly in the arteries and the trapped bolus grows.
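
    For readers unfamiliar with this reduction, the long-wavelength, low-Reynolds-number assumptions collapse the momentum equations to a lubrication-type balance; schematically, in dimensionless form (the exact extra terms, e.g. a Hartmann-number term for the magnetic field, depend on the model at hand):

        \frac{\partial p}{\partial x} = \frac{\partial^2 u}{\partial y^2} + \text{(body-force terms)},
        \qquad
        \frac{\partial p}{\partial y} = 0,

    so the pressure varies only along the channel and the velocity profile can be integrated in closed form, which is what makes the exact solutions quoted above possible.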

  7. The accuracy of the compressible Reynolds equation for predicting the local pressure in gas-lubricated textured parallel slider bearings

    PubMed Central

    Qiu, Mingfeng; Bailey, Brian N.; Stoll, Rob

    2014-01-01

    The validity of the compressible Reynolds equation to predict the local pressure in a gas-lubricated, textured parallel slider bearing is investigated. The local bearing pressure is numerically simulated using the Reynolds equation and the Navier-Stokes equations for different texture geometries and operating conditions. The respective results are compared and the simplifying assumptions inherent in the application of the Reynolds equation are quantitatively evaluated. The deviation between the local bearing pressure obtained with the Reynolds equation and the Navier-Stokes equations increases with increasing texture aspect ratio, because a significant cross-film pressure gradient and a large velocity gradient in the sliding direction develop in the lubricant film. Inertia is found to be negligible throughout this study. PMID:25049440

  8. Further analytical study of hybrid rocket combustion

    NASA Technical Reports Server (NTRS)

    Hung, W. S. Y.; Chen, C. S.; Haviland, J. K.

    1972-01-01

Analytical studies of the transient and steady-state combustion processes in a hybrid rocket system are discussed. The particular system chosen consists of a gaseous oxidizer flowing within a tube of solid fuel, resulting in heterogeneous combustion. Finite-rate chemical kinetics with appropriate reaction mechanisms were incorporated in the model. A temperature-dependent Arrhenius-type fuel surface regression rate equation was chosen for the current study. The governing mathematical equations employed for the reacting gas phase and for the solid phase are the general, two-dimensional, time-dependent conservation equations in a cylindrical coordinate system. Keeping the simplifying assumptions to a minimum, these basic equations were programmed for numerical computation using two finite-difference schemes: the Lax-Wendroff scheme for the gas phase and the Crank-Nicolson scheme for the solid phase.

  9. Structure of thermal pair clouds around gamma-ray-emitting black holes

    NASA Technical Reports Server (NTRS)

    Liang, Edison P.

    1991-01-01

    Using certain simplifying assumptions, the general structure of a quasi-spherical thermal pair-balanced cloud surrounding an accreting black hole is derived from first principles. Pair-dominated hot solutions exist only for a restricted range of the viscosity parameter. These results are applied as examples to the 1979 HEAO 3 gamma-ray data of Cygnus X-1 and the Galactic center. Values are obtained for the viscosity parameter lying in the range of about 0.1-0.01. Since the lack of synchrotron soft photons requires the magnetic field to be typically less than 1 percent of the equipartition value, a magnetic field cannot be the main contributor to the viscous stress of the inner accretion flow, at least during the high gamma-ray states.

  10. Take-home video for adult literacy

    NASA Astrophysics Data System (ADS)

    Yule, Valerie

    1996-01-01

    In the past, it has not been possible to "teach oneself to read" at home, because learners could not read the books to teach them. Videos and interactive compact discs have changed that situation and challenge current assumptions of the pedagogy of literacy. This article describes an experimental adult literacy project using video technology. The language used is English, but the basic concepts apply to any alphabetic or syllabic writing system. A half-hour cartoon video can help adults and adolescents with learning difficulties. Computer-animated cartoon graphics are attractive to look at, and simplify complex material in a clear, lively way. This video technique is also proving useful for distance learners, children, and learners of English as a second language. Methods and principles are to be extended using interactive compact discs.

  11. Brownian motion and thermophoresis effects on Peristaltic slip flow of a MHD nanofluid in a symmetric/asymmetric channel

    NASA Astrophysics Data System (ADS)

    Sucharitha, G.; Sreenadh, S.; Lakshminarayana, P.; Sushma, K.

    2017-11-01

The slip and heat transfer effects on MHD peristaltic transport of a nanofluid in a non-uniform symmetric/asymmetric channel have been studied under the assumptions of long wavelength and negligible Reynolds number. From the simplified governing equations, closed-form solutions for the velocity, stream function, temperature and concentration are obtained. Dual solutions are also discussed for the symmetric and asymmetric channel cases. The effects of the important physical parameters are explained graphically. The slip parameter decreases the fluid velocity in the middle of the channel, whereas it increases the velocity at the channel walls. Temperature and concentration are decreasing and increasing functions of the radiation parameter, respectively. Moreover, velocity, temperature and concentration are higher in the symmetric channel than in the asymmetric channel.

  12. On the 'flip-flop' instability of Bondi-Hoyle accretion flows

    NASA Technical Reports Server (NTRS)

    Livio, Mario; Soker, Noam; Matsuda, Takuya; Anzer, Ulrich

    1991-01-01

A simple physical interpretation is advanced by means of an analysis of the shock cone in accretion flows past a compact object and an examination of accretion-line stability analyses. The stability of the conical shock is examined against small angular deflections, with attention given to several simplifying assumptions. A line instability is identified in Bondi-Hoyle accretion flows that leads to the formation of a shock with a large opening angle. When the opening angle becomes large, the instability develops into irregular oscillation. The analytical methodology is compared to previous numerical configurations that demonstrate different shock morphologies. Bondi-Hoyle accretion onto a compact object is concluded to generate a range of nonlinear instabilities in both homogeneous and inhomogeneous cases, with quasiperiodic oscillation in the linear regime.

  13. Isotope and fast ions turbulence suppression effects: Consequences for high-β ITER plasmas

    NASA Astrophysics Data System (ADS)

    Garcia, J.; Görler, T.; Jenko, F.

    2018-05-01

The impact of isotope effects and fast ions on microturbulence is analyzed by means of non-linear gyrokinetic simulations for an ITER hybrid scenario at high beta, obtained from previous integrated modelling simulations with simplifying assumptions. The simulations show that ITER might work very close to threshold, and in these conditions significant turbulence suppression is found from DD to DT plasmas. Electromagnetic effects are shown to play an important role in the onset of this isotope effect. Additionally, even external ExB flow shear, which is expected to be low in ITER, has a stronger impact on DT than on DD. The fast ions generated by fusion reactions can reduce turbulence further, although the impact in ITER seems weaker than in present-day tokamaks.

  14. Simplified High-Power Inverter

    NASA Technical Reports Server (NTRS)

    Edwards, D. B.; Rippel, W. E.

    1984-01-01

    Solid-state inverter simplified by use of single gate-turnoff device (GTO) to commutate multiple silicon controlled rectifiers (SCR's). By eliminating conventional commutation circuitry, GTO reduces cost, size and weight. GTO commutation applicable to inverters of greater than 1-kilowatt capacity. Applications include emergency power, load leveling, drives for traction and stationary polyphase motors, and photovoltaic-power conditioning.

  15. Modified off-midline closure of pilonidal sinus disease.

    PubMed

    Saber, Aly

    2014-05-01

Numerous surgical procedures have been described for pilonidal sinus disease, but treatment failure and disease recurrence are frequent. Conventional off-midline flap closures have relatively favorable surgical outcomes, but relatively unfavorable cosmetic outcomes. The author reports outcomes of a new simplified off-midline technique for closure of the defect after complete excision of the sinus tracts. Two hundred patients of both sexes were enrolled; modified D-shaped excisions were used to include all sinuses and their ramifications, with a simplified procedure to close the defect. The overall wound infection rate was 12% (12.2% for males and 11.1% for females). Wound disruption necessitated laying the whole wound open and managing it as an open wound. The overall wound disruption rate was 6% (6.1% for males and 5.5% for females), and the overall recurrence rate was 7%. Our simplified off-midline closure without a flap appeared to be comparable to conventional off-midline closure with a flap in terms of wound infection, wound dehiscence, and recurrence. Advantages of the simplified procedure include potentially reduced surgical complexity, reduced operating time, and improved cosmetic outcome.

  16. Lung Ultrasonography in Patients With Idiopathic Pulmonary Fibrosis: Evaluation of a Simplified Protocol With High-Resolution Computed Tomographic Correlation.

    PubMed

    Vassalou, Evangelia E; Raissaki, Maria; Magkanas, Eleftherios; Antoniou, Katerina M; Karantanas, Apostolos H

    2018-03-01

    To compare a simplified ultrasonographic (US) protocol in 2 patient positions with the same-positioned comprehensive US assessments and high-resolution computed tomographic (CT) findings in patients with idiopathic pulmonary fibrosis. Twenty-five consecutive patients with idiopathic pulmonary fibrosis were prospectively enrolled and examined in 2 sessions. During session 1, patients were examined with a US protocol including 56 lung intercostal spaces in supine/sitting (supine/sitting comprehensive protocol) and lateral decubitus (decubitus comprehensive protocol) positions. During session 2, patients were evaluated with a 16-intercostal space US protocol in sitting (sitting simplified protocol) and left/right decubitus (decubitus simplified protocol) positions. The 16 intercostal spaces were chosen according to the prevalence of idiopathic pulmonary fibrosis-related changes on high-resolution CT. The sum of B-lines counted in each intercostal space formed the US scores for all 4 US protocols: supine/sitting and decubitus comprehensive US scores and sitting and decubitus simplified US scores. High-resolution CT-related Warrick scores (J Rheumatol 1991; 18:1520-1528) were compared to US scores. The duration of each protocol was recorded. A significant correlation was found between all US scores and Warrick scores and between simplified and corresponding comprehensive scores (P < .0001). Decubitus simplified US scores showed a slightly higher correlation with Warrick scores compared to sitting simplified US scores. Mean durations of decubitus and sitting simplified protocols were 4.76 and 6.20 minutes, respectively (P < .005). Simplified 16-intercostal space protocols correlated with comprehensive protocols and high-resolution CT findings in patients with idiopathic pulmonary fibrosis. The 16-intercostal space simplified protocol in the lateral decubitus position correlated better with high-resolution CT findings and was less time-consuming compared to the sitting position. © 2017 by the American Institute of Ultrasound in Medicine.

  17. On the coverage of the pMSSM by simplified model results

    NASA Astrophysics Data System (ADS)

    Ambrogi, Federico; Kraml, Sabine; Kulkarni, Suchita; Laa, Ursula; Lessa, Andre; Waltenberger, Wolfgang

    2018-03-01

We investigate to what extent the SUSY search results published by ATLAS and CMS in the context of simplified models actually cover the more realistic scenarios of a full model. Concretely, we work within the phenomenological MSSM (pMSSM) with 19 free parameters and compare the constraints obtained from SModelS v1.1.1 with those from the ATLAS pMSSM study in arXiv:1508.06608. We find that about 40-45% of the points excluded by ATLAS escape the currently available simplified model constraints. For these points we identify the most relevant topologies which are not tested by the current simplified model results. In particular, we find that topologies with asymmetric branches, including 3-jet signatures from gluino-squark associated production, could be important for improving the current constraining power of simplified model results. Furthermore, for better coverage of light stops and sbottoms, constraints for decays via heavier neutralinos and charginos, which subsequently decay visibly to the lightest neutralino, are also needed.

  18. Can the discharge of a hyperconcentrated flow be estimated from paleoflood evidence?

    NASA Astrophysics Data System (ADS)

    Bodoque, Jose M.; Eguibar, Miguel A.; DíEz-Herrero, AndréS.; GutiéRrez-PéRez, Ignacio; RuíZ-Villanueva, Virginia

    2011-12-01

Many flood events involving water and sediments have been characterized using classic hydraulics principles, assuming the existence of critical flow and many other simplifications. In this paper, hyperconcentrated flow discharge was evaluated by using paleoflood reconstructions (based on paleostage indicators [PSI]) combined with a detailed hydraulic analysis of the critical flow assumption. The exact location where this condition occurred was established by iteratively determining the corresponding cross section, so that specific energy is at a minimum. In addition, all of the factors and parameters involved in the process were assessed, especially those related to the momentum equation, the shear stresses existing on the wetted perimeter, and nonhydrostatic and hydrostatic pressure distributions. The superelevation of the hyperconcentrated flow, due to the curvature of the flow surface, was also estimated and calibrated against the PSI. The estimated peak discharge was established once the iterative process was unable to improve the fit between the simulated depth and the depth observed from the PSI. The methodological approach proposed here can be applied to other high-gradient mountainous torrents with a geomorphic configuration similar to the one studied in this paper. Likewise, the results have been derived with fewer uncertainties than those obtained from standard hydraulic approaches, whose simplifying assumptions typically go unexamined.
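
    For orientation, the critical-flow condition that anchors the iteration is Fr = 1, i.e. Q^2 T / (g A^3) = 1 at the minimum-specific-energy section. A hedged sketch of the implied discharge (a simplification of the paper's iterative procedure; the section geometry below is invented):

      # Discharge implied by the critical-flow condition Fr = 1 for a
      # surveyed cross-section: Q_c = sqrt(g * A^3 / T), with A the flow
      # area and T the top width at the trial stage.
      import math

      G = 9.81  # m s^-2

      def critical_discharge(area, top_width):
          """Q_c (m^3 s^-1) such that the Froude number equals 1."""
          return math.sqrt(G * area**3 / top_width)

      # e.g. a roughly rectangular section 12 m wide flowing 2.5 m deep
      print(critical_discharge(area=12 * 2.5, top_width=12.0))  # ~148.6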

  19. Dependence of elastic hadron collisions on impact parameter

    NASA Astrophysics Data System (ADS)

    Procházka, Jiří; Lokajíček, Miloš V.; Kundrát, Vojtěch

    2016-05-01

Elastic proton-proton collisions represent probably the greatest ensemble of available measured data, the analysis of which may provide a large amount of new physical results concerning fundamental particles. It is, however, necessary to first analyze some conclusions concerning pp collisions and interpretations of them that differ fundamentally from our common macroscopic experience. It has been argued, e.g., that elastic hadron collisions are more central than inelastic ones, even though no explanation has yet been given of how such different processes, i.e., elastic and inelastic collisions (the latter with hundreds of secondary particles), can exist under the same conditions. That conclusion has been based on a number of simplifying mathematical assumptions (already made in earlier calculations) whose influence on the physical interpretation has been neither analyzed nor justified; this influence has begun to be studied in an approach based on the eikonal model. The possibility of a peripheral interpretation of elastic collisions will be demonstrated and the corresponding results summarized. Arguments will be given as to why no preference may be given to the suggested centrality over the standard peripheral behaviour. The corresponding discussion of the contemporary description of elastic hadronic collisions in dependence on the impact parameter will be summarized and the justification of some important assumptions considered.

  20. Estimating causal contrasts involving intermediate variables in the presence of selection bias.

    PubMed

    Valeri, Linda; Coull, Brent A

    2016-11-20

An important goal across the biomedical and social sciences is the quantification of the role of intermediate factors in explaining how an exposure exerts an effect on an outcome. Selection bias has the potential to severely undermine the validity of inferences on direct and indirect causal effects in observational as well as in randomized studies. The phenomenon of selection may arise through several mechanisms, and we here focus on instances of missing data. We study the sign and magnitude of selection bias in the estimates of direct and indirect effects when data on any of the factors involved in the analysis are either missing at random or not missing at random. Under some simplifying assumptions, the bias formulae can lead to nonparametric sensitivity analyses. These sensitivity analyses can be applied to causal effects on the risk-difference and risk-ratio scales irrespective of the estimation approach employed. To incorporate parametric assumptions, we also develop a sensitivity analysis for selection bias in mediation analysis in the spirit of the expectation-maximization algorithm. The approaches are applied to data from a health disparities study investigating the role of stage at diagnosis in racial disparities in colorectal cancer survival. Copyright © 2016 John Wiley & Sons, Ltd.

  1. Determination of mean pressure from PIV in compressible flows using the Reynolds-averaging approach

    NASA Astrophysics Data System (ADS)

    van Gent, Paul L.; van Oudheusden, Bas W.; Schrijer, Ferry F. J.

    2018-03-01

The feasibility of computing the flow pressure on the basis of PIV velocity data has been demonstrated abundantly for low-speed conditions. The added complications occurring for high-speed compressible flows have, however, so far proved largely prohibitive for the accurate experimental determination of instantaneous pressure. Obtaining mean pressure may remain a worthwhile and realistic goal to pursue. In a previous study, a Reynolds-averaging procedure was developed for this, under the moderate-Mach-number assumption that density fluctuations can be neglected. The present communication addresses the accuracy of this assumption, and the consistency of its implementation, by evaluating the relevance of the different contributions resulting from the Reynolds-averaging. The methodology involves a theoretical order-of-magnitude analysis, complemented with a quantitative assessment based on a simulated and a real PIV experiment. The assessments show that it is sufficient to account for spatial variations in the mean velocity and the Reynolds stresses, and that temporal and spatial density variations (fluctuations and gradients) are of secondary importance and comparable order of magnitude. This result makes it possible to simplify the calculation of mean pressure from PIV velocity data and to validate the approximation of neglecting temporal and spatial density variations without having access to reference pressure data.
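
    The retained balance is, schematically, the Reynolds-averaged momentum equation with density fluctuations dropped (a sketch of the leading terms under the stated assumptions, neglecting viscous contributions; the notation is standard rather than copied from the paper):

        \frac{\partial \bar{p}}{\partial x_i} \approx
        -\bar{\rho}\left( \bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j}
        + \frac{\partial \overline{u_i' u_j'}}{\partial x_j} \right),

    which can be integrated spatially from the PIV-measured mean velocity and Reynolds-stress fields to obtain the mean pressure.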

  2. Hard, harder, hardest: principal stratification, statistical identifiability, and the inherent difficulty of finding surrogate endpoints.

    PubMed

    Wolfson, Julian; Henn, Lisa

    2014-01-01

    In many areas of clinical investigation there is great interest in identifying and validating surrogate endpoints, biomarkers that can be measured a relatively short time after a treatment has been administered and that can reliably predict the effect of treatment on the clinical outcome of interest. However, despite dramatic advances in the ability to measure biomarkers, the recent history of clinical research is littered with failed surrogates. In this paper, we present a statistical perspective on why identifying surrogate endpoints is so difficult. We view the problem from the framework of causal inference, with a particular focus on the technique of principal stratification (PS), an approach which is appealing because the resulting estimands are not biased by unmeasured confounding. In many settings, PS estimands are not statistically identifiable and their degree of non-identifiability can be thought of as representing the statistical difficulty of assessing the surrogate value of a biomarker. In this work, we examine the identifiability issue and present key simplifying assumptions and enhanced study designs that enable the partial or full identification of PS estimands. We also present example situations where these assumptions and designs may or may not be feasible, providing insight into the problem characteristics which make the statistical evaluation of surrogate endpoints so challenging.

  3. Hard, harder, hardest: principal stratification, statistical identifiability, and the inherent difficulty of finding surrogate endpoints

    PubMed Central

    2014-01-01

    In many areas of clinical investigation there is great interest in identifying and validating surrogate endpoints, biomarkers that can be measured a relatively short time after a treatment has been administered and that can reliably predict the effect of treatment on the clinical outcome of interest. However, despite dramatic advances in the ability to measure biomarkers, the recent history of clinical research is littered with failed surrogates. In this paper, we present a statistical perspective on why identifying surrogate endpoints is so difficult. We view the problem from the framework of causal inference, with a particular focus on the technique of principal stratification (PS), an approach which is appealing because the resulting estimands are not biased by unmeasured confounding. In many settings, PS estimands are not statistically identifiable and their degree of non-identifiability can be thought of as representing the statistical difficulty of assessing the surrogate value of a biomarker. In this work, we examine the identifiability issue and present key simplifying assumptions and enhanced study designs that enable the partial or full identification of PS estimands. We also present example situations where these assumptions and designs may or may not be feasible, providing insight into the problem characteristics which make the statistical evaluation of surrogate endpoints so challenging. PMID:25342953

  4. Decision heuristic or preference? Attribute non-attendance in discrete choice problems.

    PubMed

    Heidenreich, Sebastian; Watson, Verity; Ryan, Mandy; Phimister, Euan

    2018-01-01

This paper investigates whether respondents' choice not to consider all characteristics of a multiattribute health service may represent preferences. Over the last decade, an increasing number of studies account for attribute non-attendance (ANA) when using discrete choice experiments to elicit individuals' preferences. Most studies assume such behaviour is a heuristic and therefore uninformative. This assumption may result in misleading welfare estimates if ANA reflects preferences. This is the first paper to assess whether ANA is a heuristic or a genuine preference without relying on respondents' self-stated motivation, and the first study to explore this question within a health context. Based on findings from cognitive psychology, we expect that familiar respondents are less likely than unfamiliar respondents to use a decision heuristic to simplify choices. We employ a latent class model of discrete choice experiment data concerned with National Health Service managers' preferences for support services that assist with performance concerns. We present quantitative and qualitative evidence that in our study ANA mostly represents preferences. We also show that wrong assumptions about ANA produce inadequate welfare measures that can lead to suboptimal policy advice. Future research should proceed with caution when assuming that ANA is a heuristic. Copyright © 2017 John Wiley & Sons, Ltd.

  5. Transient competitive complexation in biological kinetic isotope fractionation explains non-steady isotopic effects: Theory and application to denitrification in soils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maggi, F.M.; Riley, W.J.

    2009-06-01

The theoretical formulations of biological kinetic reactions in isotopic applications often assume first-order or Michaelis-Menten-Monod kinetics under the quasi-steady-state assumption to simplify the system kinetics. However, isotopic effects have the same order of magnitude as the potential error introduced by these simplifications. Both formulations lead to a constant fractionation factor, which may yield incorrect estimations of the isotopic effect and a misleading interpretation of the isotopic signature of a reaction. We have analyzed the isotopic signature of denitrification in the biogeochemical soil systems of Menyailo and Hungate [2006], where high ¹⁵N₂O enrichment during N₂O production and inverse isotope fractionation during N₂O consumption could not be explained with first-order kinetics and the Rayleigh equation, or with quasi-steady-state Michaelis-Menten-Monod kinetics. When the quasi-steady-state assumption was relaxed, transient Michaelis-Menten-Monod kinetics accurately reproduced the observations and aided in the interpretation of experimental isotopic signatures. These results may imply a substantial revision in using the Rayleigh equation for interpretation of isotopic signatures and in modeling biological kinetic isotope fractionation with first-order kinetics or quasi-steady-state Michaelis-Menten-Monod kinetics.
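
    For reference, the Rayleigh equation invoked above takes the standard closed form

        R = R_0 \, f^{\alpha - 1}
        \qquad \text{or, in delta notation,} \qquad
        \delta \approx \delta_0 + \varepsilon \ln f,

    where f is the fraction of substrate remaining, α the (assumed constant) fractionation factor and ε = (α − 1) × 1000‰; it is precisely the constancy of α that the transient kinetics above call into question.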

  6. Complex Adaptive System Models and the Genetic Analysis of Plasma HDL-Cholesterol Concentration

    PubMed Central

    Rea, Thomas J.; Brown, Christine M.; Sing, Charles F.

    2006-01-01

    Despite remarkable advances in diagnosis and therapy, ischemic heart disease (IHD) remains a leading cause of morbidity and mortality in industrialized countries. Recent efforts to estimate the influence of genetic variation on IHD risk have focused on predicting individual plasma high-density lipoprotein cholesterol (HDL-C) concentration. Plasma HDL-C concentration (mg/dl), a quantitative risk factor for IHD, has a complex multifactorial etiology that involves the actions of many genes. Single gene variations may be necessary but are not individually sufficient to predict a statistically significant increase in risk of disease. The complexity of phenotype-genotype-environment relationships involved in determining plasma HDL-C concentration has challenged commonly held assumptions about genetic causation and has led to the question of which combination of variations, in which subset of genes, in which environmental strata of a particular population significantly improves our ability to predict high or low risk phenotypes. We document the limitations of inferences from genetic research based on commonly accepted biological models, consider how evidence for real-world dynamical interactions between HDL-C determinants challenges the simplifying assumptions implicit in traditional linear statistical genetic models, and conclude by considering research options for evaluating the utility of genetic information in predicting traits with complex etiologies. PMID:17146134

  7. Two's company, three (or more) is a simplex : Algebraic-topological tools for understanding higher-order structure in neural data.

    PubMed

    Giusti, Chad; Ghrist, Robert; Bassett, Danielle S

    2016-08-01

    The language of graph theory, or network science, has proven to be an exceptional tool for addressing myriad problems in neuroscience. Yet, the use of networks is predicated on a critical simplifying assumption: that the quintessential unit of interest in a brain is a dyad - two nodes (neurons or brain regions) connected by an edge. While rarely mentioned, this fundamental assumption inherently limits the types of neural structure and function that graphs can be used to model. Here, we describe a generalization of graphs that overcomes these limitations, thereby offering a broad range of new possibilities in terms of modeling and measuring neural phenomena. Specifically, we explore the use of simplicial complexes: a structure developed in the field of mathematics known as algebraic topology, of increasing applicability to real data due to a rapidly growing computational toolset. We review the underlying mathematical formalism as well as the budding literature applying simplicial complexes to neural data, from electrophysiological recordings in animal models to hemodynamic fluctuations in humans. Based on the exceptional flexibility of the tools and recent ground-breaking insights into neural function, we posit that this framework has the potential to eclipse graph theory in unraveling the fundamental mysteries of cognition.
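
    A minimal sketch of the generalization being described: promoting a graph to its clique complex by filling in every 3-clique as a 2-simplex (illustrative of the formalism only, not the authors' analysis pipeline):

      from itertools import combinations

      def clique_complex(nodes, edges):
          """Vertices, edges, and filled triangles (2-simplices) of a graph."""
          adj = {n: set() for n in nodes}
          for a, b in edges:
              adj[a].add(b)
              adj[b].add(a)
          simplices = [(n,) for n in nodes] + [tuple(sorted(e)) for e in edges]
          for tri in combinations(sorted(nodes), 3):  # candidate 2-simplices
              if all(b in adj[a] for a, b in combinations(tri, 2)):
                  simplices.append(tri)
          return simplices

      print(clique_complex([1, 2, 3, 4], [(1, 2), (2, 3), (1, 3), (3, 4)]))
      # the triple (1, 2, 3) appears as a filled 2-simplex; dyads alone
      # would miss this higher-order structure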

  8. Superfast maximum-likelihood reconstruction for quantum tomography

    NASA Astrophysics Data System (ADS)

    Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon

    2017-06-01

    Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n -qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.

  9. An overview of self-consistent methods for fiber-reinforced composites

    NASA Technical Reports Server (NTRS)

    Gramoll, Kurt C.; Freed, Alan D.; Walker, Kevin P.

    1991-01-01

The Walker et al. (1989) self-consistent method for predicting both the elastic and the inelastic effective material properties of composites is examined and compared with the results of other self-consistent and elasticity-based solutions. The elastic part of their method is shown to be identical to other self-consistent methods for non-dilute reinforced composite materials, namely the Hill (1965), Budiansky (1965), and Nemat-Nasser et al. (1982) derivations. A simplified form of the non-dilute self-consistent method is also derived. The elastic effective material properties predicted for fiber-reinforced material using the Walker method were found to deviate from the elasticity solution for the ν₃₁, K₁₂, and μ₃₁ material properties (the fiber is in the 3 direction), especially at larger volume fractions. Also, the prediction for the transverse shear modulus, μ₁₂, exceeds one of the accepted Hashin bounds. Only the longitudinal elastic modulus E₃₃ agrees with the elasticity solution. The differences between the Walker and the elasticity solutions are primarily due to the assumption used in the derivation of the self-consistent method, i.e., that the strain fields in the inclusions and the matrix remain constant, which is not a correct assumption for a high concentration of inclusions.

  10. One dimensional heavy ion beam transport: Energy independent model. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Farhat, Hamidullah

    1990-01-01

Attempts are made to model the transport problem for heavy ion beams in various targets, employing the current level of understanding of the physics of high-charge and energy (HZE) particle interaction with matter. An energy-independent transport model, with the most simplified assumptions and proper parameters, is presented. The first and essential assumption in this case (energy-independent transport) is the high-energy characterization of the incident beam. The energy-independent equation is solved and applied to high-energy neon (Ne-20) and iron (Fe-56) beams in water. The analytical solution is given and compared to a numerical solution to determine the accuracy of the model. The lower limit energy for neon and iron to qualify as high-energy beams is calculated from the Barkas and Berger theory using the LBLFRG computer program. The calculated values over the depth range of interest (50 g/cm² of water) are 833.43 MeV/nucleon for neon and 1597.68 MeV/nucleon for iron. The analytical solution of the energy-independent transport equation gives the fluxes of the different collision terms. The fluxes of individual collision terms are given, and the total fluxes are shown in graphs for different thicknesses of water. The flux values are calculated with the ANASTP computer code.

  11. Scope of inextensible frame hypothesis in local action analysis of spherical reservoirs

    NASA Astrophysics Data System (ADS)

    Vinogradov, Yu. I.

    2017-05-01

Spherical reservoirs, being nearly ideal structures with respect to weight, are used in spacecraft, where thin-walled elements are joined by frames into multifunctional structures. The junctions are local, which gives rise to stress concentration regions and corresponding rigidity problems. The thin-walled elements are reinforced by frames to decrease the stresses in them. To simplify the analysis of the mathematical model of the joint deformation of the shell (the mathematical idealization of the reservoir) and the frame, the assumption that the frame axial line is inextensible is widely used (in particular, in the handbook literature). Unjustified use of this assumption significantly distorts the picture of the stress-strain state. In this paper, the example of a lens-shaped structure formed by two spherical shell segments connected by a frame of square cross-section is used to carry out a comparative numerical analysis of the solutions with and without the inextensible-frame hypothesis. The range of validity of the hypothesis is shown as a function of the geometric parameters of the structure and the degree of localization of the load. The results obtained can be used to determine the stress-strain state of the thin-walled structure with an a priori prescribed error, for example, in research and experimental design of aerospace systems.

  12. Collision partner selection schemes in DSMC: From micro/nano flows to hypersonic flows

    NASA Astrophysics Data System (ADS)

    Roohi, Ehsan; Stefanov, Stefan

    2016-10-01

The motivation of this review paper is to present a detailed summary of the different collision models developed in the framework of the direct simulation Monte Carlo (DSMC) method. The emphasis is put on a newly developed collision model, the Simplified Bernoulli trials (SBT) scheme, which permits efficient low-memory simulation of rarefied gas flows. The paper starts with a brief review of the governing equations of rarefied gas dynamics, including the Boltzmann and Kac master equations, and reiterates that the linear Kac equation reduces to the non-linear Boltzmann equation under the assumption of molecular chaos. An introduction to the DSMC method is provided, and the principles of collision algorithms in DSMC are discussed. A distinction is made between those collision models that are based on classical kinetic theory (time counter, no time counter (NTC), and nearest neighbor (NN)) and the other class that can be derived mathematically from the Kac master equation (pseudo-Poisson process, ballot box, majorant frequency, null collision, and the Bernoulli trials scheme and its variants). To provide deeper insight, the derivation of both classes of collision models, either from the principles of kinetic theory or from the Kac master equation, is provided in sufficient detail. Some discussion of the importance of subcells in the DSMC collision procedure is also provided, and different types of subcells are presented. The paper then focuses on the simplified version of the Bernoulli trials algorithm (SBT) and presents a detailed summary of the validation of the SBT family of collision schemes (SBT on transient adaptive subcells, SBT-TAS, and intelligent SBT, ISBT) in a broad spectrum of rarefied gas-flow test cases, ranging from low-speed internal micro- and nanoflows to external hypersonic flow, emphasizing first the accuracy of these new collision models and second demonstrating that the SBT family of schemes, compared to other conventional and recent collision models, requires a smaller number of particles per cell to obtain sufficiently accurate solutions.
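
    For concreteness, a hedged sketch of the classical no-time-counter (NTC) pair selection that the SBT family improves upon (Bird's NTC formula for the candidate-pair count; the helper pair_sigma_cr and all parameter values below are illustrative, not from the review):

      import random

      def ntc_select_pairs(n, dt, fnum, vc, sigma_cr_max, pair_sigma_cr):
          """Indices of particle pairs accepted for collision in one cell.

          n: particles in the cell; fnum: real molecules per simulator
          particle; vc: cell volume; sigma_cr_max: running bound on
          sigma * c_r (cross-section times relative speed).
          """
          n_cand = int(0.5 * n * (n - 1) * fnum * sigma_cr_max * dt / vc)
          accepted = []
          for _ in range(n_cand):
              i, j = random.sample(range(n), 2)
              # acceptance-rejection on the pair's actual collision rate
              if random.random() * sigma_cr_max < pair_sigma_cr(i, j):
                  accepted.append((i, j))
          return accepted

      # toy usage with a constant cross-section-times-speed product
      print(ntc_select_pairs(50, 1e-6, 1e12, 1e-9, 1e-16,
                             lambda i, j: 0.5e-16))

    Roughly speaking, SBT instead sweeps over particles and applies a Bernoulli trial to each admissible partner, which is what allows it to remain accurate with very few particles per cell.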

  13. Integrating the social sciences to understand human-water dynamics

    NASA Astrophysics Data System (ADS)

    Carr, G.; Kuil, L., Jr.

    2017-12-01

Many interesting and exciting socio-hydrological models have been developed in recent years. Such models often aim to capture the dynamic interplay between people and water for a variety of hydrological settings. As such, people's behaviours and decisions are brought into the models as drivers of and/or respondents to the hydrological system. To develop and run such models over a sufficiently long time duration to observe how the water-human system evolves, the human component is often simplified to one or two key behaviours, characteristics or decisions (e.g. a decision to move away from a drought or flood area, a decision to pump groundwater, or a decision to plant a less water-demanding crop). To simplify the social component, socio-hydrological modellers often pull knowledge and understanding from existing social science theories. This requires them to negotiate complex territory, where social theories may be underdeveloped, contested, dynamically evolving, or case-specific and difficult to generalise or upscale. A key question is therefore: how can this process be supported so that the resulting socio-hydrological models adequately describe the system and lead to meaningful understanding of how and why it behaves as it does? Collaborative interdisciplinary research teams that bring together social and natural scientists are likely to be critical. Joint development of the model framework requires specific attention to clarification, to expose all underlying assumptions, and constructive discussion and negotiation to reach agreement on the modelled system and its boundaries. Mutual benefits to social scientists can be highlighted; i.e., socio-hydrological work can provide insights for further exploring and testing social theories. Collaborative work will also help ensure the underlying social theory is made explicit, and may identify ways to include and compare multiple theories. As socio-hydrology progresses towards supporting policy development, approaches that bring stakeholders and non-scientist participants into developing the conceptual modelling framework will become essential. They are also critical for fully understanding human-water dynamics.

  14. Development and Validation of a Simplified Renal Replacement Therapy Suitable for Prolonged Field Care in a Porcine (Sus scrofa) Model of Acute Kidney Injury

    DTIC Science & Technology

    2018-03-01

Objectives/Background: Acute kidney injury (AKI) is a serious ...

  15. Simplified Calculation Of Solar Fluxes In Solar Receivers

    NASA Technical Reports Server (NTRS)

    Bhandari, Pradeep

    1990-01-01

    Simplified Calculation of Solar Flux Distribution on Side Wall of Cylindrical Cavity Solar Receivers computer program employs simple solar-flux-calculation algorithm for cylindrical-cavity-type solar receiver. Results compare favorably with those of more complicated programs. Applications include study of solar energy and transfer of heat, and space power/solar-dynamics engineering. Written in FORTRAN 77.

  16. Residential scene classification for gridded population sampling in developing countries using deep convolutional neural networks on satellite imagery.

    PubMed

    Chew, Robert F; Amer, Safaa; Jones, Kasey; Unangst, Jennifer; Cajka, James; Allpress, Justine; Bruhn, Mark

    2018-05-09

Conducting surveys in low- and middle-income countries is often challenging because many areas lack a complete sampling frame, have outdated census information, or have limited data available for designing and selecting a representative sample. Geosampling is a probability-based, gridded population sampling method that addresses some of these issues by using geographic information system (GIS) tools to create logistically manageable area units for sampling. GIS grid cells are overlaid to partition a country's existing administrative boundaries into area units that vary in size from 50 m × 50 m to 150 m × 150 m. To avoid sending interviewers to unoccupied areas, researchers manually classify grid cells as "residential" or "nonresidential" through visual inspection of aerial images. "Nonresidential" units are then excluded from sampling and data collection. This process of manually classifying sampling units has drawbacks since it is labor intensive, prone to human error, and creates the need for simplifying assumptions during calculation of design-based sampling weights. In this paper, we discuss the development of a deep learning classification model to predict whether aerial images are residential or nonresidential, thus reducing manual labor and eliminating the need for simplifying assumptions. On our test sets, the model performs comparably to a human-level baseline in both Nigeria (94.5% accuracy) and Guatemala (96.4% accuracy), and outperforms baseline machine learning models trained on crowdsourced or remote-sensed geospatial features. Additionally, our findings suggest that this approach can work well in new areas with relatively modest amounts of training data. Gridded population sampling methods like geosampling are becoming increasingly popular in countries with outdated or inaccurate census data because of their timeliness, flexibility, and cost. Using deep learning models directly on satellite images, we provide a novel method for sample frame construction that identifies residential gridded aerial units. In cases where manual classification of satellite images is used to (1) correct for errors in gridded population data sets or (2) classify grids where population estimates are unavailable, this methodology can help reduce annotation burden with comparable quality to human analysts.
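
    A minimal sketch of such a binary scene classifier, assuming 150 × 150 RGB tiles and a small from-scratch network (the paper's actual architecture and preprocessing are not reproduced here):

      from tensorflow.keras import layers, models

      # Small CNN mapping an aerial tile to P(residential); illustrative only.
      model = models.Sequential([
          layers.Conv2D(32, 3, activation="relu", input_shape=(150, 150, 3)),
          layers.MaxPooling2D(),
          layers.Conv2D(64, 3, activation="relu"),
          layers.MaxPooling2D(),
          layers.Flatten(),
          layers.Dense(64, activation="relu"),
          layers.Dense(1, activation="sigmoid"),  # P(residential)
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy",
                    metrics=["accuracy"])
      # In use: model.fit(...) on labeled tiles, then threshold the sigmoid
      # output to flag residential grid cells for the sampling frame.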

  17. Evaluation of rate law approximations in bottom-up kinetic models of metabolism.

    PubMed

    Du, Bin; Zielinski, Daniel C; Kavvas, Erol S; Dräger, Andreas; Tan, Justin; Zhang, Zhen; Ruggiero, Kayla E; Arzumanyan, Garri A; Palsson, Bernhard O

    2016-06-06

The mechanistic description of enzyme kinetics in a dynamic model of metabolism requires specifying the numerical values of a large number of kinetic parameters. The parameterization challenge is often addressed through the use of simplifying approximations to form reaction rate laws with reduced numbers of parameters. Whether such simplified models can reproduce dynamic characteristics of the full system is an important question. In this work, we compared the local transient response properties of dynamic models constructed using rate laws with varying levels of approximation. These approximate rate laws were: 1) a Michaelis-Menten rate law with measured enzyme parameters, 2) a Michaelis-Menten rate law with approximated parameters, using the convenience kinetics convention, 3) a thermodynamic rate law resulting from a metabolite saturation assumption, and 4) a pure chemical reaction mass action rate law that removes the role of the enzyme from the reaction kinetics. We utilized in vivo data for the human red blood cell to compare the effect of rate law choices against the backdrop of physiological flux and concentration differences. We found that the Michaelis-Menten rate law with measured enzyme parameters yields an excellent approximation of the full system dynamics, while other assumptions cause greater discrepancies in system dynamic behavior. However, iteratively replacing mechanistic rate laws with approximations resulted in a model that retains a high correlation with the true model behavior. Investigating this consistency, we determined that the order-of-magnitude differences among fluxes and concentrations in the network strongly influenced the network dynamics. We further identified reaction features such as thermodynamic reversibility, high substrate concentration, and lack of allosteric regulation, which make certain reactions more suitable for rate law approximations. Overall, our work generally supports the use of approximate rate laws when building large-scale kinetic models, due to the key role that physiologically meaningful flux and concentration ranges play in determining network dynamics. However, we also showed that detailed mechanistic models show a clear benefit in prediction accuracy when data are available.
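
    As a toy illustration of why the choice matters, compare a Michaelis-Menten law against a pure mass-action law for the same reaction S -> P (parameter values are invented for illustration, not measured):

      # Michaelis-Menten saturates at high substrate; mass action does not.
      def michaelis_menten(s, vmax=1.0, km=0.5):
          return vmax * s / (km + s)    # saturates once s >> km

      def mass_action(s, k=2.0):
          return k * s                  # no enzyme saturation

      for s in (0.1, 0.5, 2.0, 10.0):
          print(f"s={s:5.1f}  MM={michaelis_menten(s):.3f}  "
                f"MA={mass_action(s):.3f}")

    With these parameters the two laws agree at low substrate concentration, where Michaelis-Menten is approximately linear with slope vmax/km, and diverge strongly near saturation.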

  18. Fast Bayesian approach for modal identification using free vibration data, Part I - Most probable value

    NASA Astrophysics Data System (ADS)

    Zhang, Feng-Liang; Ni, Yan-Chun; Au, Siu-Kui; Lam, Heung-Fai

    2016-03-01

The identification of modal properties from field testing of civil engineering structures is becoming economically viable, thanks to the advent of modern sensor and data acquisition technology. Its demand is driven by innovative structural designs and increased performance requirements of dynamic-prone structures that call for a close cross-checking or monitoring of their dynamic properties and responses. Existing instrumentation capabilities and modal identification techniques allow structures to be tested under free vibration, forced vibration (known input) or ambient vibration (unknown broadband loading). These tests can be considered complementary rather than competing, as they are based on different modeling assumptions in the identification model and have different implications for costs and benefits. Uncertainty arises naturally in the dynamic testing of structures due to measurement noise, sensor alignment error, modeling error, etc. This is especially relevant in field vibration tests because the test condition in the field environment can hardly be controlled. In this work, a Bayesian statistical approach is developed for modal identification using the free vibration response of structures. A frequency domain formulation is proposed that makes statistical inference based on the Fast Fourier Transform (FFT) of the data in a selected frequency band. This significantly simplifies the identification model because only the modes dominating the frequency band need to be included. It also legitimately ignores the information in the excluded frequency bands that are either irrelevant or difficult to model, thereby significantly reducing modeling error risk. The posterior probability density function (PDF) of the modal parameters is derived rigorously from modeling assumptions and Bayesian probability logic. Computational difficulties associated with calculating the posterior statistics, including the most probable value (MPV) and the posterior covariance matrix, are addressed. Fast computational algorithms for determining the MPV are proposed so that the method can be practically implemented. In the companion paper (Part II), analytical formulae are derived for the posterior covariance matrix so that it can be evaluated without resorting to the finite difference method. The proposed method is verified using synthetic data. It is also applied to modal identification of full-scale field structures.
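
    Schematically, the FFT-based likelihood behind this family of Bayesian methods takes the form below (a sketch under standard assumptions — long data duration and circularly-symmetric complex Gaussian FFT statistics — with C_k(θ) the theoretical spectral density matrix of the measured response at frequency f_k; the paper's exact formulation may differ):

        p(\{F_k\} \mid \theta) \propto \prod_{k \in \text{band}}
        \frac{1}{\det C_k(\theta)}
        \exp\!\left( -F_k^{*}\, C_k(\theta)^{-1} F_k \right),

    so the most probable value minimizes L(θ) = Σ_k [ ln det C_k(θ) + F_k* C_k(θ)⁻¹ F_k ] over the selected band, which is where the fast MPV algorithms come in.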

  19. Simultaneous inference of phylogenetic and transmission trees in infectious disease outbreaks

    PubMed Central

    2017-01-01

Whole-genome sequencing of pathogens from host samples is becoming more and more routine during infectious disease outbreaks. These data provide information on possible transmission events, which can be used for further epidemiologic analyses, such as identification of risk factors for infectivity and transmission. However, the relationship between transmission events and sequence data is obscured by uncertainty arising from four largely unobserved processes: transmission, case observation, within-host pathogen dynamics and mutation. To properly resolve transmission events, these processes need to be taken into account. Recent years have seen much progress in theory and method development, but existing applications make simplifying assumptions that often break up the dependency between the four processes, or are tailored to specific datasets with matching model assumptions and code. To obtain a method with wider applicability, we have developed a novel approach to reconstruct transmission trees with sequence data. Our approach combines elementary models for transmission, case observation, within-host pathogen dynamics, and mutation, under the assumption that the outbreak is over and all cases have been observed. We use Bayesian inference with MCMC for which we have designed novel proposal steps to efficiently traverse the posterior distribution, taking account of all unobserved processes at once. This allows for efficient sampling of transmission trees from the posterior distribution, and robust estimation of consensus transmission trees. We implemented the proposed method in a new R package phybreak. The method performs well in tests of both new and published simulated data. We apply the model to five datasets on densely sampled infectious disease outbreaks, covering a wide range of epidemiological settings. Using only sampling times and sequences as data, our analyses confirmed the original results or improved on them: the more realistic infection times place more confidence in the inferred transmission trees. PMID:28545083

  20. Simultaneous inference of phylogenetic and transmission trees in infectious disease outbreaks.

    PubMed

    Klinkenberg, Don; Backer, Jantien A; Didelot, Xavier; Colijn, Caroline; Wallinga, Jacco

    2017-05-01

    Whole-genome sequencing of pathogens from host samples becomes more and more routine during infectious disease outbreaks. These data provide information on possible transmission events which can be used for further epidemiologic analyses, such as identification of risk factors for infectivity and transmission. However, the relationship between transmission events and sequence data is obscured by uncertainty arising from four largely unobserved processes: transmission, case observation, within-host pathogen dynamics and mutation. To properly resolve transmission events, these processes need to be taken into account. Recent years have seen much progress in theory and method development, but existing applications make simplifying assumptions that often break up the dependency between the four processes, or are tailored to specific datasets with matching model assumptions and code. To obtain a method with wider applicability, we have developed a novel approach to reconstruct transmission trees with sequence data. Our approach combines elementary models for transmission, case observation, within-host pathogen dynamics, and mutation, under the assumption that the outbreak is over and all cases have been observed. We use Bayesian inference with MCMC for which we have designed novel proposal steps to efficiently traverse the posterior distribution, taking account of all unobserved processes at once. This allows for efficient sampling of transmission trees from the posterior distribution, and robust estimation of consensus transmission trees. We implemented the proposed method in a new R package phybreak. The method performs well in tests of both new and published simulated data. We apply the model to five datasets on densely sampled infectious disease outbreaks, covering a wide range of epidemiological settings. Using only sampling times and sequences as data, our analyses confirmed the original results or improved on them: the more realistic infection times place more confidence in the inferred transmission trees.

  1. Modeling Endovascular Coils as Heterogeneous Porous Media

    NASA Astrophysics Data System (ADS)

    Yadollahi Farsani, H.; Herrmann, M.; Chong, B.; Frakes, D.

    2016-12-01

    Minimally invasive surgeries are the state-of-the-art treatments for many pathologies. Treating brain aneurysms is no exception; invasive neurovascular clipping is no longer the only option, and endovascular coiling has become the most common treatment. Coiling isolates the aneurysm from blood circulation by promoting thrombosis within the aneurysm. One approach to studying intra-aneurysmal hemodynamics consists of virtually deploying finite element coil models and then performing computational fluid dynamics. However, this approach is often computationally expensive and requires extensive resources. The porous medium approach has been considered as an alternative to conventional coil modeling because it lessens the complexity of computational fluid dynamics simulations by reducing the number of mesh elements needed to discretize the domain. There have been a limited number of attempts at treating endovascular coils as homogeneous porous media. However, the heterogeneity of coil configurations requires a more accurately defined porous medium in which the porosity and permeability change throughout the domain. We implemented this approach by introducing a lattice of sample volumes and utilizing techniques from the field of interactive computer graphics. We observed that introducing the heterogeneity assumption significantly changed simulated aneurysmal flow velocities compared with the homogeneous-assumption case. Moreover, as the sample volume size was decreased, the flow velocities approached an asymptotic value, showing the importance of the sample volume size selection. These results demonstrate that the homogeneous assumption for porous media that are inherently heterogeneous can lead to considerable errors. Additionally, this modeling approach allowed us to simulate post-treatment flows without considering the explicit geometry of a deployed endovascular coil mass, greatly simplifying computation.
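    A minimal sketch of the heterogeneous idea: estimate porosity cell-by-cell over a lattice of sample volumes from a coil centerline, then map porosity to permeability. The stand-in coil geometry and the Kozeny-Carman closure are assumptions for illustration, not the paper's graphics-based sampling method.

```python
# Sketch: per-cell porosity from a stand-in coil centerline, with an assumed
# Kozeny-Carman permeability closure. Geometry and closure are illustrative.
import numpy as np

rng = np.random.default_rng(0)
pts = rng.normal(0.0, 1.0, (20000, 3))           # stand-in coil centerline points
pts = pts[np.linalg.norm(pts, axis=1) < 2.5]     # keep points inside "aneurysm"
wire_r, seg_len = 0.05, 0.01                     # wire radius, point spacing
seg_vol = np.pi * wire_r**2 * seg_len            # wire volume carried per point

n = 16                                           # lattice of n^3 sample volumes
edges = [np.linspace(-2.5, 2.5, n + 1)] * 3
counts, _ = np.histogramdd(pts, bins=edges)
cell_vol = (5.0 / n) ** 3
porosity = np.clip(1.0 - counts * seg_vol / cell_vol, 0.01, 1.0)

d = 2 * wire_r                                   # characteristic wire diameter
permeability = d**2 / 180.0 * porosity**3 / (1.0 - porosity + 1e-12) ** 2
print("porosity range: %.3f to %.3f" % (porosity.min(), porosity.max()))
```

    Shrinking the cell size (larger n) is the numerical experiment behind the asymptotic-velocity observation in the abstract.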

  2. Generator localization by current source density (CSD): Implications of volume conduction and field closure at intracranial and scalp resolutions

    PubMed Central

    Tenke, Craig E.; Kayser, Jürgen

    2012-01-01

    The topographic ambiguity and reference-dependency that has plagued EEG/ERP research throughout its history are largely attributable to volume conduction, which may be concisely described by a vector form of Ohm’s Law. This biophysical relationship is common to popular algorithms that infer neuronal generators via inverse solutions. It may be further simplified as Poisson’s source equation, which identifies underlying current generators from estimates of the second spatial derivative of the field potential (Laplacian transformation). Intracranial current source density (CSD) studies have dissected the “cortical dipole” into intracortical sources and sinks, corresponding to physiologically-meaningful patterns of neuronal activity at a sublaminar resolution, much of which is locally cancelled (i.e., closed field). By virtue of the macroscopic scale of the scalp-recorded EEG, a surface Laplacian reflects the radial projections of these underlying currents, representing a unique, unambiguous measure of neuronal activity at scalp. Although the surface Laplacian requires minimal assumptions compared to complex, model-sensitive inverses, the resulting waveform topographies faithfully summarize and simplify essential constraints that must be placed on putative generators of a scalp potential topography, even if they arise from deep or partially-closed fields. CSD methods thereby provide a global empirical and biophysical context for generator localization, spanning scales from intracortical to scalp recordings. PMID:22796039
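    A toy illustration of the Laplacian idea behind CSD: on a regular grid of potentials, the negative second spatial derivative highlights sources and sinks. A 5-point stencil on a synthetic dipole map stands in for the spherical-spline Laplacians used on real electrode montages; all values are assumed.

```python
# Toy CSD sketch: negative discrete Laplacian (5-point stencil) of a synthetic
# scalp-potential map on a regular grid. Real CSD estimates use spherical
# spline Laplacians over electrode montages; this shows only the core idea.
import numpy as np
from scipy.ndimage import laplace

x, y = np.meshgrid(np.linspace(-1, 1, 33), np.linspace(-1, 1, 33))
v = (np.exp(-((x - 0.3)**2 + y**2) / 0.05)      # potential "source"
     - np.exp(-((x + 0.3)**2 + y**2) / 0.05))   # and "sink" (tangential dipole)

h = x[0, 1] - x[0, 0]                # grid spacing
csd = -laplace(v) / h**2             # negative Laplacian ~ source density
print("max source: %.2f  max sink: %.2f" % (csd.max(), csd.min()))
```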

  3. A correction scheme for a simplified analytical random walk model algorithm of proton dose calculation in distal Bragg peak regions

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.

  4. Integrodifferential formulations of the continuous-time random walk for solute transport subject to bimolecular A +B →0 reactions: From micro- to mesoscopic

    NASA Astrophysics Data System (ADS)

    Hansen, Scott K.; Berkowitz, Brian

    2015-03-01

    We develop continuous-time random walk (CTRW) equations governing the transport of two species that annihilate when in proximity to one another. In comparison with catalytic or spontaneous transformation reactions that have been previously considered in concert with CTRW, both species have spatially variant concentrations that require consideration. We develop two distinct formulations. The first treats transport and reaction microscopically, potentially capturing behavior at sharp fronts, but at the cost of being strongly nonlinear. The second, mesoscopic, formulation relies on a separation-of-scales technique we develop to separate microscopic-scale reaction and upscaled transport. This simplifies the governing equations and allows treatment of more general reaction dynamics, but requires stronger smoothness assumptions of the solution. The mesoscopic formulation is easily tractable using an existing solution from the literature (we also provide an alternative derivation), and the generalized master equation (GME) for particles undergoing A +B →0 reactions is presented. We show that this GME simplifies, under appropriate circumstances, to both the GME for the unreactive CTRW and to the advection-dispersion-reaction equation. An additional major contribution of this work is on the numerical side: to corroborate our development, we develop an indirect particle-tracking-partial-integro-differential-equation (PIDE) hybrid verification technique which could be applicable widely in reactive anomalous transport. Numerical simulations support the mesoscopic analysis.
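    The mesoscopic equations are beyond a short sketch, but the microscopic picture they upscale can be illustrated directly: the hedged particle-tracking sketch below runs a 1D CTRW with heavy-tailed waiting times and annihilates A-B pairs that come within an assumed reaction radius. All parameters are illustrative and do not come from the paper.

```python
# Microscopic sketch: 1D CTRW with heavy-tailed (Pareto) waiting times and
# A + B -> 0 annihilation within an assumed reaction radius.
import numpy as np

rng = np.random.default_rng(1)
xa = rng.uniform(0.0, 0.5, 100)      # A particles start on the left
xb = rng.uniform(0.5, 1.0, 100)      # B particles start on the right
ta = rng.pareto(1.5, xa.size)        # next-jump times (heavy-tailed CTRW)
tb = rng.pareto(1.5, xb.size)
t_end, react_r = 50.0, 0.005

for _ in range(200000):
    if xa.size == 0 or xb.size == 0 or min(ta.min(), tb.min()) > t_end:
        break
    if ta.min() <= tb.min():                     # an A particle jumps first
        i = ta.argmin()
        xa[i] += 0.01 * rng.standard_normal()    # Gaussian jump length
        ta[i] += rng.pareto(1.5)                 # schedule its next jump
        j = np.abs(xb - xa[i]).argmin()          # nearest B partner
        if abs(xb[j] - xa[i]) < react_r:         # annihilate the pair
            xa = np.delete(xa, i); ta = np.delete(ta, i)
            xb = np.delete(xb, j); tb = np.delete(tb, j)
    else:                                        # a B particle jumps first
        i = tb.argmin()
        xb[i] += 0.01 * rng.standard_normal()
        tb[i] += rng.pareto(1.5)
        j = np.abs(xa - xb[i]).argmin()
        if abs(xa[j] - xb[i]) < react_r:
            xb = np.delete(xb, i); tb = np.delete(tb, i)
            xa = np.delete(xa, j); ta = np.delete(ta, j)

print("surviving A, B particles:", xa.size, xb.size)
```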

  5. Assumptions to the Annual Energy Outlook

    EIA Publications

    2017-01-01

    This report presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook, including general features of the model structure, assumptions concerning energy markets, and the key input data and parameters that are the most significant in formulating the model results.

  6. The impact of simplified boundary conditions and aortic arch inclusion on CFD simulations in the mouse aorta: a comparison with mouse-specific reference data.

    PubMed

    Trachet, Bram; Bols, Joris; De Santis, Gianluca; Vandenberghe, Stefaan; Loeys, Bart; Segers, Patrick

    2011-12-01

    Computational fluid dynamics (CFD) simulations allow for calculation of a detailed flow field in the mouse aorta and can thus be used to investigate a potential link between local hemodynamics and disease development. To perform these simulations in a murine setting, one often needs to make assumptions (e.g. when mouse-specific boundary conditions are not available), but many of these assumptions have not been validated due to a lack of reference data. In this study, we present such a reference data set by combining high-frequency ultrasound and contrast-enhanced micro-CT to measure (in vivo) the time-dependent volumetric flow waveforms in the complete aorta (including seven major side branches) of 10 male ApoE -/- deficient mice on a C57Bl/6 background. In order to assess the influence of some assumptions that are commonly applied in literature, four different CFD simulations were set up for each animal: (i) imposing the measured volumetric flow waveforms, (ii) imposing the average flow fractions over all 10 animals, presented as a reference data set, (iii) imposing flow fractions calculated by Murray's law, and (iv) restricting the geometrical model to the abdominal aorta (imposing measured flows). We found that - even if there is sometimes significant variation in the flow fractions going to a particular branch - the influence of using average flow fractions on the CFD simulations is limited and often restricted to the side branches. On the other hand, Murray's law underestimates the fraction going to the brachiocephalic trunk and strongly overestimates the fraction going to the distal aorta, influencing the outcome of the CFD results significantly. Changing the exponential factor in Murray's law equation from 3 to 2 (as suggested by several authors in literature) yields results that correspond much better to those obtained imposing the average flow fractions. Restricting the geometrical model to the abdominal aorta did not influence the outcome of the CFD simulations. In conclusion, the presented reference dataset can be used to impose boundary conditions in the mouse aorta in future studies, keeping in mind that they represent a subsample of the total population, i.e., relatively old, non-diseased, male C57Bl/6 ApoE -/- mice.
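    For the boundary-condition assumption tested in case (iii), a minimal sketch of Murray's-law flow splitting over assumed branch radii shows how strongly the choice of exponent matters:

```python
# Sketch: Murray's-law flow fractions for assumed branch radii. Exponent
# n = 3 is the classical law; n = 2 is the modification discussed above.
import numpy as np

radii = np.array([0.60, 0.45, 0.40, 0.35])   # hypothetical branch radii [mm]

def murray_fractions(r, n):
    """Flow fraction to each branch, proportional to r**n."""
    return r**n / np.sum(r**n)

for n in (3, 2):
    print("n =", n, "->", np.round(murray_fractions(radii, n), 3))
```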

  7. Learning to Predict Combinatorial Structures

    NASA Astrophysics Data System (ADS)

    Vembu, Shankar

    2009-12-01

    The major challenge in designing a discriminative learning algorithm for predicting structured data is to address the computational issues arising from the exponential size of the output space. Existing algorithms make different assumptions to ensure efficient, polynomial time estimation of model parameters. For several combinatorial structures, including cycles, partially ordered sets, permutations and other graph classes, these assumptions do not hold. In this thesis, we address the problem of designing learning algorithms for predicting combinatorial structures by introducing two new assumptions: (i) The first assumption is that a particular counting problem can be solved efficiently. The consequence is a generalisation of the classical ridge regression for structured prediction. (ii) The second assumption is that a particular sampling problem can be solved efficiently. The consequence is a new technique for designing and analysing probabilistic structured prediction models. These results can be applied to solve several complex learning problems including but not limited to multi-label classification, multi-category hierarchical classification, and label ranking.

  8. A simplified Forest Inventory and Analysis database: FIADB-Lite

    Treesearch

    Patrick D. Miles

    2008-01-01

    This publication is a simplified version of the Forest Inventory and Analysis Data Base (FIADB) for users who do not need to compute sampling errors and may find the FIADB unnecessarily complex. Possible users include GIS specialists who may be interested only in identifying and retrieving geographic information and per acre values for the set of plots used in...

  9. Impacts of the mixing state and chemical composition on the cloud condensation nuclei (CCN) activity in Beijing during winter, 2016

    NASA Astrophysics Data System (ADS)

    Ren, J.; Zhang, F.

    2017-12-01

    Understanding the influence of aerosol chemical composition and mixing state on CCN activity in polluted urban areas is crucial for determining NCCN accurately and thus for quantifying aerosol indirect effects. Aerosol hygroscopicity, size-resolved cloud condensation nuclei (CCN) concentration, and chemical composition were measured under polluted (POL) and background (BG) conditions in Beijing during the Air Pollution and Human Health (APHH) field campaign in winter 2016. The CCN number concentration (NCCN) is predicted using κ-Köhler theory from the particle number size distribution (PNSD) and five simplified assumptions about the mixing state and chemical composition. The EIS assumption (sulfate, nitrate and SOA internally mixed, POA and BC externally mixed, with size-resolved chemical composition) shows the best closure, with ratios of predicted to measured NCCN of 0.96-1.12 under both POL and BG conditions. Under BG conditions, the IB scheme (internal mixture with bulk chemical composition) achieves the best CCN closure at all times of day. On polluted days, the EIS and IS (internal mixture with size-resolved chemical composition) schemes may achieve better closure than IB because of the heterogeneity of particle composition across sizes. The ES (external mixture with size-resolved chemical composition) and EB (external mixture with bulk chemical composition) schemes markedly underestimate NCCN, with ratios of predicted to measured NCCN of 0.6-0.8. In addition, the size-resolved assumptions (IS and ES) offer very limited improvement over the bulk assumptions (IB and EB), and the size-resolved assumption actually degrades the prediction on clean days. The predicted NCCN during evening rush-hour periods is the most sensitive to the five assumptions, with ratios of predicted to measured NCCN ranging from 0.5 to 1.4, reflecting the strong influence of evening traffic and cooking sources. A sensitivity analysis of predicted NCCN to particle mixing state and to organic volume fraction as organic particles age suggests that the mixing state plays a minor role once κorg exceeds 0.1. This study provides a new dataset for evaluating CCN parameterizations in models of heavily polluted regions with large fractions of POA and BC.
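    A minimal sketch of the closure calculation for the simplest (internally mixed, bulk composition) assumption: κ-Köhler theory gives the smallest activated dry diameter at a given supersaturation, and NCCN follows by integrating the PNSD above it. All numbers (distribution parameters, κ values, supersaturation) are assumed for illustration.

```python
# Sketch: N_CCN from kappa-Koehler theory for an internally mixed, bulk-kappa
# aerosol with an assumed lognormal size distribution.
import numpy as np

Mw, rho_w, sigma_w, R, T = 0.018, 1000.0, 0.072, 8.314, 298.15
A = 4 * sigma_w * Mw / (R * T * rho_w)          # Kelvin parameter [m]

def d_activation(kappa, s):
    """Smallest dry diameter activated at fractional supersaturation s."""
    return (4 * A**3 / (27 * kappa * np.log(1 + s)**2)) ** (1 / 3)

# assumed lognormal PNSD: N = 8000 cm^-3, Dg = 80 nm, geometric std 1.8
N_tot, Dg, gsd = 8000.0, 80e-9, 1.8
D = np.logspace(np.log10(5e-9), np.log10(1e-6), 500)
dNdlnD = N_tot / (np.sqrt(2*np.pi) * np.log(gsd)) * \
         np.exp(-0.5 * (np.log(D / Dg) / np.log(gsd))**2)

for kappa in (0.1, 0.3, 0.6):                   # bulk hygroscopicity (assumed)
    Dact = d_activation(kappa, 0.002)           # 0.2 % supersaturation
    n_ccn = np.trapz(dNdlnD[D >= Dact], np.log(D[D >= Dact]))
    print("kappa = %.1f -> N_CCN ~ %.0f cm^-3" % (kappa, n_ccn))
```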

  10. Notes on SAW Tag Interrogation Techniques

    NASA Technical Reports Server (NTRS)

    Barton, Richard J.

    2010-01-01

    We consider the problem of interrogating a single SAW RFID tag with a known ID and known range in the presence of multiple interfering tags under the following assumptions: (1) The RF propagation environment is well approximated as a simple delay channel with geometric power-decay constant alpha >= 2. (2) The interfering tag IDs are unknown but well approximated as independent, identically distributed random samples from a probability distribution of tag ID waveforms with known second-order properties, and the tag of interest is drawn independently from the same distribution. (3) The ranges of the interfering tags are unknown but well approximated as independent, identically distributed realizations of a random variable rho with a known probability distribution f(sub rho), and the tag ranges are independent of the tag ID waveforms. In particular, we model the tag waveforms as random impulse responses from a wide-sense-stationary, uncorrelated-scattering (WSSUS) fading channel with known bandwidth and scattering function. A brief discussion of the properties of such channels and the notation used to describe them in this document is given in the Appendix. Under these assumptions, we derive the expression for the output signal-to-noise ratio (SNR) for an arbitrary combination of transmitted interrogation signal and linear receiver filter. Based on this expression, we derive the optimal interrogator configuration (i.e., transmitted signal/receiver filter combination) in the two extreme noise/interference regimes, i.e., noise-limited and interference-limited, under the additional assumption that the coherence bandwidth of the tags is much smaller than the total tag bandwidth. Finally, we evaluate the performance of both optimal interrogators over a broad range of operating scenarios using both numerical simulation based on the assumed model and Monte Carlo simulation based on a small sample of measured tag waveforms. The performance evaluation results not only provide guidelines for proper interrogator design, but also provide some insight on the validity of the assumed signal model. It should be noted that the assumption that the impulse response of the tag of interest is known precisely implies that the temperature and range of the tag are also known precisely, which is generally not the case in practice. However, analyzing interrogator performance under this simplifying assumption is much more straightforward and still provides a great deal of insight into the nature of the problem.

  11. Modified Off-Midline Closure of Pilonidal Sinus Disease

    PubMed Central

    Saber, Aly

    2014-01-01

    Background: Numerous surgical procedures have been described for pilonidal sinus disease, but treatment failure and disease recurrence are frequent. Conventional off-midline flap closures have relatively favorable surgical outcomes but relatively unfavorable cosmetic outcomes. Aim: The author reports outcomes of a new simplified off-midline technique for closure of the defect after complete excision of the sinus tracts. Patients and Methods: Two hundred patients of both sexes underwent modified D-shaped excisions that included all sinuses and their ramifications, followed by a simplified procedure to close the defect. Results: The overall wound infection rate was 12% (12.2% for males and 11.1% for females). Wound disruption necessitated laying the whole wound open and managing it as an open wound. The overall wound disruption rate was 6% (6.1% for males and 5.5% for females), and the overall recurrence rate was 7%. Conclusion: Our simplified off-midline closure without flap appeared to be comparable to conventional off-midline closure with flap in terms of wound infection, wound dehiscence, and recurrence. Advantages of the simplified procedure include potentially reduced surgical complexity, reduced surgery time, and improved cosmetic outcome. PMID:24926445

  12. Shear viscosity in monatomic liquids: a simple mode-coupling approach

    NASA Astrophysics Data System (ADS)

    Balucani, Umberto

    The value of the shear-viscosity coefficient in fluids is controlled by the dynamical processes affecting the time decay of the associated Green-Kubo integrand, the stress autocorrelation function (SACF). These processes are investigated in monatomic liquids by means of a microscopic approach with a minimum use of phenomenological assumptions. In particular, mode-coupling effects (responsible for the presence in the SACF of a long-lasting 'tail') are accounted for by a simplified approach where the only requirement is knowledge of the structural properties. The theory readily yields quantitative predictions in its domain of validity, which comprises ordinary and moderately supercooled 'simple' liquids. The framework is applied to liquid Ar and Rb near their melting points, and quite satisfactory agreement with the simulation data is found for both the details of the SACF and the value of the shear-viscosity coefficient.

  13. Improvements in GRACE Gravity Field Determination through Stochastic Observation Modeling

    NASA Astrophysics Data System (ADS)

    McCullough, C.; Bettadpur, S. V.

    2016-12-01

    Current unconstrained Release 05 GRACE gravity field solutions from the Center for Space Research (CSR RL05) assume random observation errors following an independent multivariate Gaussian distribution. This modeling of observations, a simplifying assumption, fails to account for long period, correlated errors arising from inadequacies in the background force models. Fully modeling the errors inherent in the observation equations, through the use of a full observation covariance (modeling colored noise), enables optimal combination of GPS and inter-satellite range-rate data and obviates the need for estimating kinematic empirical parameters during the solution process. Most importantly, fully modeling the observation errors drastically improves formal error estimates of the spherical harmonic coefficients, potentially enabling improved uncertainty quantification of scientific results derived from GRACE and optimizing combinations of GRACE with independent data sets and a priori constraints.
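    A toy version of the modeling change: replacing the independent-Gaussian (diagonal) error assumption with a full observation covariance via Cholesky whitening. The design matrix and exponential covariance below are stand-ins, not GRACE data.

```python
# Sketch: least squares with a full observation covariance via Cholesky
# whitening, versus the diagonal (white-noise) assumption. Toy problem only.
import numpy as np

rng = np.random.default_rng(2)
m, n = 200, 4
H = rng.standard_normal((m, n))                  # stand-in design matrix
x_true = np.array([1.0, -2.0, 0.5, 3.0])

# correlated (colored) observation noise with an exponential covariance
i = np.arange(m)
P = 0.2**2 * np.exp(-np.abs(i[:, None] - i[None, :]) / 20.0)
L = np.linalg.cholesky(P)
y = H @ x_true + L @ rng.standard_normal(m)

Hw = np.linalg.solve(L, H)                       # whitened design
yw = np.linalg.solve(L, y)                       # whitened data
x_gls, *_ = np.linalg.lstsq(Hw, yw, rcond=None)  # full-covariance solution
x_ols, *_ = np.linalg.lstsq(H, y, rcond=None)    # independent-error assumption

cov_gls = np.linalg.inv(Hw.T @ Hw)               # realistic formal errors
print("GLS:", np.round(x_gls, 3), " OLS:", np.round(x_ols, 3))
print("formal sigmas:", np.round(np.sqrt(np.diag(cov_gls)), 4))
```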

  14. Deflection Shape Reconstructions of a Rotating Five-blade Helicopter Rotor from TLDV Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fioretti, A.; Castellini, P.; Tomasini, E. P.

    2010-05-28

    Helicopters are aircraft that are subjected to high levels of vibration, mainly due to their spinning rotors. Rotors are made of two or more blades attached by hinges to a central hub, which can make the dynamic behaviour difficult to study. However, they share some common dynamic properties with bladed discs, so the analytical modelling of rotors can be performed using assumptions similar to those adopted for bladed discs. This paper presents results of a vibration study performed on a scaled helicopter rotor model rotating at a fixed speed and excited by an air jet. A simplified analytical model of the rotor was also produced to aid identification of the vibration patterns measured using a single-point tracking-SLDV measurement method.

  15. Heat transfer evaluation in a plasma core reactor

    NASA Technical Reports Server (NTRS)

    Smith, D. E.; Smith, T. M.; Stoenescu, M. L.

    1976-01-01

    Numerical evaluations of heat transfer in a fissioning uranium plasma core reactor cavity, operating with seeded hydrogen propellant, were performed. A two-dimensional analysis is based on an assumed flow pattern and cavity wall heat exchange rate. Various iterative schemes were required by the nature of the radiative field and by the solid seed vaporization. Approximate formulations of the radiative heat flux are generally used, due to the complexity of the solution of a rigorously formulated problem. The present work analyzes the sensitivity of the results with respect to approximations of the radiative field, geometry, seed vaporization coefficients and flow pattern. The results present temperature, heat flux, density and optical depth distributions in the reactor cavity, acceptable simplifying assumptions, and iterative schemes. The present calculations, performed in cartesian and spherical coordinates, are applicable to the most general heat transfer problems.

  16. Numeric calculation of unsteady forces over thin pointed wings in sonic flow

    NASA Technical Reports Server (NTRS)

    Kimble, K. R.; Wu, J. M.

    1975-01-01

    A fast and reasonably accurate numerical procedure is proposed for the solution of a simplified unsteady transonic equation. The approach described takes into account many of the effects of the steady flow field. The resulting accuracy is within a few per cent and can be carried out on a computer in less than one minute per case (one frequency and one mode of oscillation). The problem concerns a rigid pointed wing which performs harmonic pitching oscillations of small amplitude in a steady uniform transonic flow. Wake influence is ignored and shocks must be weak. It is shown that the method is more flexible than the transonic box method proposed by Rodemich and Andrew (1965) in that it can easily account for variable local Mach number and rather arbitrary planform so long as the basic assumptions are fulfilled.

  17. Experimental Investigation of Wind-Tunnel Interference on the Downwash Behind an Airfoil

    NASA Technical Reports Server (NTRS)

    Silverstein, Abe; Katzoff, S

    1937-01-01

    The interference of the wind-tunnel boundaries on the downwash behind an airfoil has been experimentally investigated and the results have been compared with the available theoretical results for open-throat wind tunnels. As in previous studies, the simplified theoretical treatment that assumes the test section to be an infinite free jet has been shown to be satisfactory at the lifting line. The experimental results, however, show that this assumption may lead to erroneous conclusions regarding the corrections to be applied to the downwash in the region behind the airfoil where the tail surfaces are normally located. The results of a theory based on the more accurate concept of the open-jet wind tunnel as a finite length of free jet provided with a closed exit passage are in good qualitative agreement with the experimental results.

  18. On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis.

    PubMed

    Li, Bing; Chun, Hyonho; Zhao, Hongyu

    2014-09-01

    We introduce a nonparametric method for estimating non-gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use one-dimensional kernel regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the gaussian graphical model that replaces the precision matrix by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis.

  19. Sneaky light stop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eifert, Till; Nachman, Benjamin

    2015-02-20

    A light supersymmetric top quark partner (stop) with a mass nearly degenerate with that of the standard model (SM) top quark can evade direct searches. The precise measurement of SM top properties such as the cross-section has been suggested to give a handle for this ‘stealth stop’ scenario. We present an estimate of the potential impact a light stop may have on top quark mass measurements. The results indicate that certain light stop models may induce a bias of up to a few GeV, and that this effect can hide the shift in, and hence sensitivity from, cross-section measurements. Due to the different initial states, the size of the bias is slightly different between the LHC and the Tevatron. The studies make some simplifying assumptions for the top quark measurement technique, and are based on truth-level samples.

  20. Sneaky light stop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eifert, Till; Nachman, Benjamin

    2015-04-01

    A light supersymmetric top quark partner (stop) with a mass nearly degenerate with that of the standard model (SM) top quark can evade direct searches. The precise measurement of SM top properties such as the cross-section has been suggested to give a handle for this ‘stealth stop’ scenario. We present an estimate of the potential impact a light stop may have on top quark mass measurements. The results indicate that certain light stop models may induce a bias of up to a few GeV, and that this effect can hide the shift in, and hence sensitivity from, cross-section measurements. Due to the different initial states, the size of the bias is slightly different between the LHC and the Tevatron. The studies make some simplifying assumptions for the top quark measurement technique, and are based on truth-level samples.

  1. Modeling synchronous voltage source converters in transmission system planning studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosterev, D.N.

    1997-04-01

    A Voltage Source Converter (VSC) can be beneficial to power utilities in many ways. To evaluate the VSC performance in potential applications, the device has to be represented appropriately in planning studies. This paper addresses VSC modeling for EMTP, powerflow, and transient stability studies. First, the VSC operating principles are overviewed, and the device model for EMTP studies is presented. The ratings of VSC components are discussed, and the device operating characteristics are derived based on these ratings. A powerflow model is presented and various control modes are proposed. A detailed stability model is developed, and its step-by-step initialization procedure is described. A simplified stability model is also derived under stated assumptions. Finally, validation studies are performed to demonstrate performance of the developed stability models and to compare it with EMTP simulations.

  2. Prediction of interior noise due to random acoustic or turbulent boundary layer excitation using statistical energy analysis

    NASA Technical Reports Server (NTRS)

    Grosveld, Ferdinand W.

    1990-01-01

    The feasibility of predicting interior noise due to random acoustic or turbulent boundary layer excitation was investigated in experiments in which a statistical energy analysis model (VAPEPS) was used to analyze measurements of the acceleration response and sound transmission of flat aluminum, lucite, and graphite/epoxy plates exposed to random acoustic or turbulent boundary layer excitation. The noise reduction of the plate, when backed by a shallow cavity and excited by a turbulent boundary layer, was predicted using a simplified theory based on the assumption of adiabatic compression of the fluid in the cavity. The predicted plate acceleration response was used as input in the noise reduction prediction. Reasonable agreement was found between the predictions and the measured noise reduction in the frequency range 315-1000 Hz.

  3. Reducing junk radiation and eccentricity in binary-black-hole initial data

    NASA Astrophysics Data System (ADS)

    Lovelace, Geoffrey; Pfeiffer, Harald; Brown, Duncan; Lindblom, Lee; Scheel, Mark; Kidder, Lawrence

    2007-04-01

    Numerical simulations of binary-black-hole (BBH) collisions require initial data that satisfy the Einstein constraint equations. Several well-known methods generate constraint-satisfying BBH data, but the commonly-used simplifying assumptions lead to undesirable effects. BBH data typically assume a conformally flat spatial metric; this leads to an initial pulse of unphysical ``junk'' gravitational radiation. Also, the initial radial velocity of the holes is often neglected; this can lead to significant eccentricity in the holes' trajectories. This talk will discuss efforts to reduce these effects by constructing and evolving generalizations of the BBH initial data of Cook and Pfeiffer (2004). By giving the holes a small radial velocity, the eccentricity can be greatly reduced (although the emitted waves are largely unaffected). The junk radiation for flat and non-flat conformal metrics will also be compared.

  4. Improved Temperature Dynamic Model of Turbine Subcomponents for Facilitation of Generalized Tip Clearance Control

    NASA Technical Reports Server (NTRS)

    Kypuros, Javier A.; Colson, Rodrigo; Munoz, Afredo

    2004-01-01

    This paper describes efforts to improve dynamic temperature estimation in a turbine tip clearance system to facilitate the design of a generalized tip clearance controller. This work builds upon previously presented research and focuses primarily on improving dynamic temperature estimates of the primary components affecting tip clearance (i.e., the rotor, blades, and casing/shroud). The temperature profiles estimated by the previous model iteration, specifically for the rotor and blades, were found to be inaccurate and, more importantly, insufficient to support controller design. Some assumptions made to facilitate the previous results were not valid, and improvements are presented here to better match the physical reality. As will be shown, the improved temperature sub-models match a commercially validated model and are sufficiently simplified to aid in controller design.

  5. Simulation of biochemical reactions with time-dependent rates by the rejection-based algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thanh, Vo Hong; Priami, Corrado

    We address the problem of simulating biochemical reaction networks with time-dependent rates and propose a new algorithm based on our rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)]. The computation for selecting next reaction firings by our time-dependent RSSA (tRSSA) is computationally efficient. Furthermore, the generated trajectory is exact by exploiting the rejection-based mechanism. We benchmark tRSSA on different biological systems with varying forms of reaction rates to demonstrate its applicability and efficiency. We reveal that for nontrivial cases, the selection of reaction firings in existing algorithms introduces approximations because the integration of reaction rates is very computationally demanding and simplifying assumptions are introduced. The selection of the next reaction firing by our approach is easier while preserving the exactness.
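    The rejection mechanism that keeps such simulations exact without integrating a(t) can be sketched as classical thinning of a time-inhomogeneous process; the rate function and its bound below are assumptions for illustration, not taken from the paper.

```python
# Sketch: exact sampling of the next firing of one reaction with a
# time-dependent rate by rejection (thinning). Rate and bound are assumed.
import numpy as np

rng = np.random.default_rng(3)

def rate(t):                       # oscillating propensity a(t)
    return 2.0 + 1.5 * np.sin(t)

RATE_MAX = 3.5                     # upper bound on a(t) over the horizon

def next_firing(t):
    """Thinning: propose from the bound, accept with prob a(t)/RATE_MAX."""
    while True:
        t += rng.exponential(1.0 / RATE_MAX)
        if rng.uniform() * RATE_MAX <= rate(t):
            return t

t, firings = 0.0, []
while t < 100.0:
    t = next_firing(t)
    firings.append(t)
print("firings in [0, 100]:", len(firings), "(expected ~ 200)")
```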

  6. Rainbow net analysis of VAXcluster system availability

    NASA Technical Reports Server (NTRS)

    Johnson, Allen M., Jr.; Schoenfelder, Michael A.

    1991-01-01

    A system modeling technique, Rainbow Nets, is used to evaluate the availability and mean-time-to-interrupt of the VAXcluster. These results are compared to the exact analytic results showing that reasonable accuracy is achieved through simulation. The complexity of the Rainbow Net does not increase as the number of processors increases, but remains constant, unlike a Markov model which expands exponentially. The constancy is achieved by using tokens with identity attributes (items) that can have additional attributes associated with them (features) which can exist in multiple states. The time to perform the simulation increases, but this is a polynomial increase rather than exponential. There is no restriction on distributions used for transition firing times, allowing real situations to be modeled more accurately by choosing the distribution which best fits the system performance and eliminating the need for simplifying assumptions.

  7. GENERALIZED VISCOPLASTIC MODELING OF DEBRIS FLOW.

    USGS Publications Warehouse

    Chen, Cheng-lung

    1988-01-01

    The earliest model developed by R. A. Bagnold was based on the concept of the 'dispersive' pressure generated by grain collisions. Some efforts have recently been made by theoreticians in non-Newtonian fluid mechanics to modify or improve Bagnold's concept or model. A viable rheological model should consist of both a rate-independent part and a rate-dependent part. A generalized viscoplastic fluid (GVF) model that has both parts as well as two major rheological properties (i.e., the normal stress effect and soil yield criterion) is shown to be sufficiently accurate, yet practical for general use in debris-flow modeling. In fact, Bagnold's model is found to be only a particular case of the GVF model. Analytical solutions for (steady) uniform debris flows in wide channels are obtained from the GVF model based on Bagnold's simplified assumption of constant grain concentration.

  8. Measurement of toroidal vessel eddy current during plasma disruption on J-TEXT.

    PubMed

    Liu, L J; Yu, K X; Zhang, M; Zhuang, G; Li, X; Yuan, T; Rao, B; Zhao, Q

    2016-01-01

    In this paper, we have employed a thin, printed circuit board eddy current array in order to determine the radial distribution of the azimuthal component of the eddy current density at the surface of a steel plate. The eddy current in the steel plate can be calculated by analytical methods under the simplifying assumptions that the steel plate is infinitely large and the exciting current is of uniform distribution. The measurement on the steel plate shows that this method has high spatial resolution. Then, we extended this methodology to a toroidal geometry with the objective of determining the poloidal distribution of the toroidal component of the eddy current density associated with plasma disruption in a fusion reactor called J-TEXT. The preliminary measured result is consistent with the analysis and calculation results on the J-TEXT vacuum vessel.

  9. The unstaggered extension to GFDL's FV3 dynamical core on the cubed-sphere

    NASA Astrophysics Data System (ADS)

    Chen, X.; Lin, S. J.; Harris, L.

    2017-12-01

    Finite-volume schemes have become popular for atmospheric transport since they provide intrinsic mass conservation of constituent species. Many CFD codes use unstaggered discretizations of finite-volume methods with an approximate Riemann solver. However, this approach is inefficient for geophysical flows due to the complexity of the Riemann solver. We introduce a Low Mach number Approximate Riemann Solver (LMARS), simplified using assumptions appropriate for atmospheric flows: the wind speed is much slower than the sound speed, discontinuities are weak, and the sound-wave velocity is locally uniform. LMARS makes possible a Riemann-solver-based dynamical core comparable in computational efficiency to many current dynamical cores. We will present a 3D finite-volume dynamical core using LMARS in a cubed-sphere geometry with a vertically Lagrangian discretization. Results from standard idealized test cases will be discussed.
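    A hedged 1D sketch of an LMARS-style interface flux under the stated assumptions (low Mach number, weak discontinuities, locally uniform sound speed); the exact flux arrangement used in FV3 may differ from this reading.

```python
# Sketch: a 1D LMARS-style interface flux for the Euler equations. The
# interface state (u*, p*) and upwinding by sign(u*) follow the simplifying
# assumptions named above; this is an illustrative reading, not FV3's code.
import numpy as np

GAMMA = 1.4

def lmars_flux(rhoL, uL, pL, rhoR, uR, pR):
    c = np.sqrt(GAMMA * 0.5 * (pL + pR) / (0.5 * (rhoL + rhoR)))  # common c
    rc = 0.5 * (rhoL + rhoR) * c
    u_star = 0.5 * (uL + uR) - (pR - pL) / (2.0 * rc)   # interface velocity
    p_star = 0.5 * (pL + pR) - 0.5 * rc * (uR - uL)     # interface pressure
    # upwind the advected state by the sign of the interface velocity
    rho, u, p = (rhoL, uL, pL) if u_star >= 0.0 else (rhoR, uR, pR)
    E = p / (GAMMA - 1.0) + 0.5 * rho * u * u
    return np.array([rho * u_star,                      # mass flux
                     rho * u * u_star + p_star,         # momentum flux
                     (E + p_star) * u_star])            # energy flux

print(lmars_flux(1.0, 10.0, 1.0e5, 0.9, 10.0, 0.9e5))   # gentle pressure jump
```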

  10. Camera System MTF: combining optic with detector

    NASA Astrophysics Data System (ADS)

    Andersen, Torben B.; Granger, Zachary A.

    2017-08-01

    MTF is one of the most common metrics used to quantify the resolving power of an optical component. Extensive literature is dedicated to methods for calculating the Modulation Transfer Function (MTF) of stand-alone optical components such as a camera lens or telescope, and some literature addresses approaches for determining the MTF of an optic combined with a detector. Formulations for a combined electro-optical system MTF are mostly theoretical and assume that the detector MTF is described only by the pixel pitch, which does not account for wavelength dependencies. When working with real hardware, detectors are often characterized by measuring MTF at discrete wavelengths. This paper presents a method to simplify the calculation of a polychromatic system MTF when it is permissible to consider the detector MTF to be independent of wavelength.
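    A minimal sketch of the conventional combination the paper builds on: system MTF as the product of a diffraction-limited optics MTF and a pixel-aperture (sinc) detector MTF, weighted over a few discrete wavelengths. All values are assumed for illustration.

```python
# Sketch: polychromatic system MTF = (optics MTF) x (detector MTF), with a
# diffraction-limited optic and a pixel-aperture sinc detector model.
import numpy as np

F_NUM, PITCH = 4.0, 5e-3            # f-number; pixel pitch [mm] (assumed)
wavelengths = np.array([0.55e-3, 0.65e-3, 0.85e-3])   # [mm]
weights = np.array([0.3, 0.4, 0.3])                   # spectral weights

nu = np.linspace(0.0, 120.0, 241)                     # cycles/mm

def mtf_optic(nu, lam):
    """Diffraction-limited incoherent MTF of a circular aperture."""
    nu_c = 1.0 / (lam * F_NUM)                        # diffraction cutoff
    x = np.clip(nu / nu_c, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x**2))

mtf_det = np.abs(np.sinc(nu * PITCH))                 # pixel-aperture MTF
mtf_sys = sum(w * mtf_optic(nu, lam) for w, lam in zip(weights, wavelengths))
mtf_sys = mtf_sys / weights.sum() * mtf_det
print("system MTF at Nyquist (%.0f cy/mm): %.3f"
      % (0.5 / PITCH, np.interp(0.5 / PITCH, nu, mtf_sys)))
```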

  11. Simplified aerosol representations in global modeling

    NASA Astrophysics Data System (ADS)

    Kinne, Stefan; Peters, Karsten; Stevens, Bjorn; Rast, Sebastian; Schutgens, Nick; Stier, Philip

    2015-04-01

    The detailed treatment of aerosol in global modeling is complex and time-consuming. Thus simplified approaches are investigated, which prescribe 4D (space and time) distributions of aerosol optical properties and of aerosol microphysical properties. Aerosol optical properties are required to assess aerosol direct radiative effects and aerosol microphysical properties (in terms of their ability as aerosol nuclei to modify cloud droplet concentrations) are needed to address the indirect aerosol impact on cloud properties. Following the simplifying concept of the monthly gridded (1x1 lat/lon) aerosol climatology (MAC), new approaches are presented and evaluated against more detailed methods, including comparisons to detailed simulations with complex aerosol component modules.

  12. Volume sharing of reservoir water

    NASA Astrophysics Data System (ADS)

    Dudley, Norman J.

    1988-05-01

    Previous models optimize short-, intermediate-, and long-run irrigation decision making in a simplified river valley system characterized by highly variable water supplies and demands for a single decision maker controlling both reservoir releases and farm water use. A major problem in relaxing the assumption of one decision maker is communicating the stochastic nature of supplies and demands between reservoir and farm managers. In this paper, an optimizing model is used to develop release rules for reservoir management when all users share equally in releases, and computer simulation is used to generate an historical time sequence of announced releases. These announced releases become a state variable in a farm management model which optimizes farm area-to-irrigate decisions through time. Such modeling envisages the use of growing area climatic data by the reservoir authority to gauge water demand and the transfer of water supply data from reservoir to farm managers via computer data files. Alternative model forms, including allocating water on a priority basis, are discussed briefly. Results show lower mean aggregate farm income and lower variance of aggregate farm income than in the single decision-maker case. This short-run economic efficiency loss coupled with likely long-run economic efficiency losses due to the attenuated nature of property rights indicates the need for quite different ways of integrating reservoir and farm management.

  13. Study on individual stochastic model of GNSS observations for precise kinematic applications

    NASA Astrophysics Data System (ADS)

    Próchniewicz, Dominik; Szpunar, Ryszard

    2015-04-01

    The proper definition of the mathematical positioning model, which comprises functional and stochastic models, is a prerequisite for optimal estimation of the unknown parameters. Especially important in this definition is realistic modelling of the stochastic properties of the observations, which are more receiver-dependent and time-varying than the deterministic relationships. This is particularly true for precise kinematic applications, which are characterized by weakened model strength. In this case, an incorrect or oversimplified stochastic model limits the performance of ambiguity resolution and the accuracy of position estimation. In this study we investigate methods of describing the measurement noise of GNSS observations and its impact on deriving a precise kinematic positioning model. In particular, stochastic modelling of the individual components of the variance-covariance matrix of observation noise, performed using observations from a very short baseline and a laboratory GNSS signal generator, is analyzed. Experimental results indicate that using an individual stochastic model of the observations, including elevation dependence and cross-correlation, instead of assuming that raw measurements are independent with equal variance, improves the performance of ambiguity resolution as well as rover positioning accuracy. This shows that the proposed stochastic assessment method could be an important part of a complete calibration procedure for GNSS equipment.
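    A minimal sketch of the contrast between the equal-variance assumption and an individual, elevation-dependent variance model (cross-correlation omitted for brevity); the a/b coefficients are illustrative, not calibrated values from the study.

```python
# Sketch: elevation-dependent variance model for GNSS observations versus
# the equal-variance assumption. Coefficients a, b are assumed values.
import numpy as np

elev_deg = np.array([12.0, 25.0, 40.0, 62.0, 85.0])   # satellite elevations
a, b = 0.003, 0.003                                   # [m], assumed terms

sigma = np.sqrt(a**2 + (b / np.sin(np.radians(elev_deg)))**2)
W_individual = np.diag(1.0 / sigma**2)        # elevation-dependent weights
W_equal = np.eye(elev_deg.size) / 0.003**2    # "same variance" assumption

print("per-satellite sigmas [mm]:", np.round(sigma * 1e3, 2))
print("weight ratio, highest vs lowest satellite: %.1f"
      % (W_individual[-1, -1] / W_individual[0, 0]))
```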

  14. Analytical Methods of Decoupling the Automotive Engine Torque Roll Axis

    NASA Astrophysics Data System (ADS)

    JEONG, TAESEOK; SINGH, RAJENDRA

    2000-06-01

    This paper analytically examines the multi-dimensional mounting schemes of an automotive engine-gearbox system excited by oscillating torques. In particular, the issue of torque roll axis decoupling is analyzed in significant detail since it is poorly understood. New dynamic decoupling axioms are presented and compared with the conventional elastic axis mounting and focalization methods. The system is assumed to be linear time-invariant and proportionally damped. Only rigid-body modes of the powertrain are considered and the chassis elements are assumed to be rigid. Several simplified physical systems are considered and new closed-form solutions for symmetric and asymmetric engine-mounting systems are developed. These clearly explain the design concepts for the 4-point mounting scheme. Our analytical solutions match the existing design formulations that are only applicable to symmetric geometries. Spectra for all six rigid-body motions are predicted using the alternate decoupling methods and the closed-form solutions are verified. Also, our method is validated by comparing modal solutions with prior experimental and analytical studies. Parametric design studies are carried out to illustrate the methodology. Chief contributions of this research include the development of new or refined analytical models and closed-form solutions along with improved design strategies for torque roll axis decoupling.

  15. Partial differential equation techniques for analysing animal movement: A comparison of different methods.

    PubMed

    Wang, Yi-Shan; Potts, Jonathan R

    2017-03-07

    Recent advances in animal tracking have allowed us to uncover the drivers of movement in unprecedented detail. This has enabled modellers to construct ever more realistic models of animal movement, which aid in uncovering detailed patterns of space use in animal populations. Partial differential equations (PDEs) provide a popular tool for mathematically analysing such models. However, their construction often relies on simplifying assumptions which may greatly affect the model outcomes. Here, we analyse the effect of various PDE approximations on the analysis of some simple movement models, including a biased random walk, central-place foraging processes and movement in heterogeneous landscapes. Perhaps the most commonly-used PDE method dates back to a seminal paper of Patlak from 1953. However, our results show that this can be a very poor approximation in even quite simple models. On the other hand, more recent methods, based on transport equation formalisms, can provide more accurate results, as long as the kernel describing the animal's movement is sufficiently smooth. When the movement kernel is not smooth, we show that both the older and newer methods can lead to quantitatively misleading results. Our detailed analysis will aid future researchers in the appropriate choice of PDE approximation for analysing models of animal movement. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. RADTRAD: A simplified model for RADionuclide Transport and Removal And Dose estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humphreys, S.L.; Miller, L.A.; Monroe, D.K.

    1998-04-01

    This report documents the RADTRAD computer code developed for the U.S. Nuclear Regulatory Commission (NRC) Office of Nuclear Reactor Regulation (NRR) to estimate transport and removal of radionuclides and dose at selected receptors. The document includes a users' guide to the code, a description of the technical basis for the code, the quality assurance and code acceptance testing documentation, and a programmers' guide. The RADTRAD code can be used to estimate the containment release using either the NRC TID-14844 or NUREG-1465 source terms and assumptions, or a user-specified table. In addition, the code can account for a reduction in the quantity of radioactive material due to containment sprays, natural deposition, filters, and other natural and engineered safety features. The RADTRAD code uses a combination of tables and/or numerical models of source term reduction phenomena to determine the time-dependent dose at user-specified locations for a given accident scenario. The code system also provides the inventory, decay chain, and dose conversion factor tables needed for the dose calculation. The RADTRAD code can be used to assess occupational radiation exposures, typically in the control room; to estimate site boundary doses; and to estimate dose attenuation due to modification of a facility or accident sequence.
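    A toy single-volume balance in the spirit of such simplified models: airborne activity decays with first-order removal constants for radioactive decay, sprays, and filtration. All coefficients are assumed, not values from the code or the cited source terms.

```python
# Sketch: airborne activity in one well-mixed volume with first-order removal
# by decay, sprays and filtration. Coefficients are illustrative assumptions.
import numpy as np

lam_decay, lam_spray, lam_filter = 2.9e-5, 5.6e-4, 1.2e-4   # [1/s], assumed
lam_total = lam_decay + lam_spray + lam_filter

A0 = 1.0e15                                 # initial activity [Bq], assumed
t = np.linspace(0.0, 24 * 3600.0, 97)       # 24 h
A = A0 * np.exp(-lam_total * t)             # well-mixed, first-order removal
print("activity after 2 h: %.2e Bq" % np.interp(2 * 3600.0, t, A))
```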

  17. Effects of finite pulse width on two-dimensional Fourier transform electron spin resonance.

    PubMed

    Liang, Zhichun; Crepeau, Richard H; Freed, Jack H

    2005-12-01

    Two-dimensional (2D) Fourier transform ESR techniques, such as 2D-ELDOR, have considerably improved the resolution of ESR in studies of molecular dynamics in complex fluids such as liquid crystals and membrane vesicles and in spin labeled polymers and peptides. A well-developed theory based on the stochastic Liouville equation (SLE) has been successfully employed to analyze these experiments. However, one fundamental assumption has been utilized to simplify the complex analysis, viz. the pulses have been treated as ideal non-selective ones, which therefore provide uniform irradiation of the whole spectrum. In actual experiments, the pulses are of finite width causing deviations from the theoretical predictions, a problem that is exacerbated by experiments performed at higher frequencies. In the present paper we provide a method to deal with the full SLE including the explicit role of the molecular dynamics, the spin Hamiltonian and the radiation field during the pulse. The computations are rendered more manageable by utilizing the Trotter formula, which is adapted to handle this SLE in what we call a "Split Super-Operator" method. Examples are given for different motional regimes, which show how 2D-ELDOR spectra are affected by the finite pulse widths. The theory shows good agreement with 2D-ELDOR experiments performed as a function of pulse width.
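    The Trotter device itself is easy to demonstrate on small matrices: a Strang-split propagator converges to the exact exponential of the summed generators. The random matrices below merely stand in for the Liouville-space operators.

```python
# Sketch: Trotter (Strang) splitting of exp((A + B) * dt), the device behind
# a split-operator treatment. Random matrices stand in for the SLE operators.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 6)) * 0.5       # e.g., motional/relaxation part
B = rng.standard_normal((6, 6)) * 0.5       # e.g., spin Hamiltonian + field
dt, n = 0.5, 64                             # split the step into n sub-steps

exact = expm((A + B) * dt)
h = dt / n
step = expm(A * h / 2) @ expm(B * h) @ expm(A * h / 2)   # one Strang sub-step
split = np.linalg.matrix_power(step, n)
print("Strang splitting error:", np.linalg.norm(split - exact))
```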

  18. Recommended Isolated-Line Profile for Representing High-Resolution Spectroscopic Transitions

    NASA Astrophysics Data System (ADS)

    Tennyson, J.; Bernath, P. F.; Campargue, A.; Császár, A. G.; Daumont, L.; Gamache, R. R.; Hodges, J. T.; Lisak, D.; Naumenko, O. V.; Rothman, L. S.; Tran, H.; Hartmann, J.-M.; Zobov, N. F.; Buldyreva, J.; Boone, C. D.; De Vizia, M. Domenica; Gianfrani, L.; McPheat, R.; Weidmann, D.; Murray, J.; Ngo, N. H.; Polyansky, O. L.

    2014-06-01

    Recommendations of an IUPAC Task Group, formed in 2011 on "Intensities and line shapes in high-resolution spectra of water isotopologues from experiment and theory" (Project No. 2011-022-2-100), on line profiles of isolated high-resolution rotational-vibrational transitions perturbed by neutral gas-phase molecules are presented. The well-documented inadequacies of the Voigt profile, used almost universally by databases and radiative-transfer codes to represent pressure effects and Doppler broadening in isolated vibrational-rotational and pure rotational transitions of the water molecule, have resulted in the development of a variety of alternative line profile models. These models capture more of the physics of the influence of pressure on line shapes but, in general, at the price of greater complexity. The Task Group recommends that the partially-Correlated quadratic-Speed-Dependent Hard-Collision profile should be adopted as the appropriate model for high-resolution spectroscopy. For simplicity this should be called the Hartmann-Tran profile (HTP). This profile is sophisticated enough to capture the various collisional contributions to the isolated line shape, can be computed in a straightforward and rapid manner, and reduces to simpler profiles, including the Voigt profile, under certain simplifying assumptions. For further details see: J. Tennyson et al, Pure Appl. Chem., 2014, in press.
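    Under the simplifying assumptions mentioned above, the HTP collapses to the Voigt profile, which is cheap to compute from the complex error function; the sketch below uses illustrative, not recommended, line parameters.

```python
# Sketch: the Voigt profile (a simplifying limit of the Hartmann-Tran
# profile) via the complex error function. Line parameters are illustrative.
import numpy as np
from scipy.special import wofz

def voigt(nu, nu0, doppler_hwhm, lorentz_hwhm):
    """Area-normalized Voigt profile."""
    sigma = doppler_hwhm / np.sqrt(2.0 * np.log(2.0))
    z = ((nu - nu0) + 1j * lorentz_hwhm) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

nu = np.linspace(-1.0, 1.0, 2001)           # detuning [cm^-1]
profile = voigt(nu, 0.0, 0.05, 0.02)        # assumed Doppler/Lorentz widths
print("peak: %.3f  area ~ %.3f" % (profile.max(), np.trapz(profile, nu)))
```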

  19. Possibility-induced simplified neutrosophic aggregation operators and their application to multi-criteria group decision-making

    NASA Astrophysics Data System (ADS)

    Şahin, Rıdvan; Liu, Peide

    2017-07-01

    Simplified neutrosophic sets (SNSs) are an appropriate tool for expressing the incompleteness, indeterminacy and uncertainty of the evaluation objects in a decision-making process. In this study, we define the concept of a possibility SNS, which carries two types of information: the neutrosophic performance provided by the evaluation objects and its possibility degree, a value ranging from zero to one. Because existing aggregation models for SNSs cannot effectively fuse these two different types of information, we propose two novel neutrosophic aggregation operators that account for possibility, named the possibility-induced simplified neutrosophic weighted arithmetic averaging operator and the possibility-induced simplified neutrosophic weighted geometric averaging operator, and discuss their properties. Moreover, we develop a method based on the proposed aggregation operators for solving multi-criteria group decision-making problems with possibility simplified neutrosophic information, in which the weights of decision-makers and decision criteria are calculated based on an entropy measure. Finally, a practical example is utilised to show the practicality and effectiveness of the proposed method.
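    A hedged sketch of the arithmetic-averaging case: the standard simplified neutrosophic weighted average, with possibility degrees folded into the weights. Scaling the weights by possibility is an assumed reading for illustration, not the paper's exact operator definition.

```python
# Sketch: simplified neutrosophic weighted arithmetic average of (T, I, F)
# triples, with possibility degrees folded into the weights (an assumed
# reading of "possibility-induced", for illustration only).
import numpy as np

sns = np.array([[0.7, 0.2, 0.1],     # (truth, indeterminacy, falsity)
                [0.6, 0.3, 0.2],
                [0.8, 0.1, 0.1]])
w = np.array([0.35, 0.40, 0.25])     # criterion weights
poss = np.array([0.9, 0.6, 1.0])     # possibility degrees in [0, 1]

wp = w * poss / np.sum(w * poss)     # possibility-adjusted weights (assumed)
T = 1.0 - np.prod((1.0 - sns[:, 0]) ** wp)   # standard SNWAA combination
I = np.prod(sns[:, 1] ** wp)
F = np.prod(sns[:, 2] ** wp)
print("aggregated SNS value: (%.3f, %.3f, %.3f)" % (T, I, F))
```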

  20. 24 CFR 92.252 - Qualification as affordable housing: Rental housing.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... include average occupancy per unit and adjusted income assumptions. (b) Additional Rent limitations. In... provides the HOME rent limits which include average occupancy per unit and adjusted income assumptions... occupied only by households that are eligible as low-income families and must meet the following...

  1. Computational Unification: a Vision for Connecting Researchers

    NASA Astrophysics Data System (ADS)

    Troy, R. M.; Kingrey, O. J.

    2002-12-01

    Computational Unification of science, once only a vision, is becoming a reality. This technology is based upon a scientifically defensible, general solution for Earth Science data management and processing. The computational unification of science offers a real opportunity to foster inter- and intra-discipline cooperation, and an end to 're-inventing the wheel'. As we move forward using computers as tools, it is past time to move from computationally isolating, "one-off" or discipline-specific solutions into a unified framework where research can be more easily shared, especially with researchers in other disciplines. The author will discuss how distributed meta-data, distributed processing and distributed data objects are structured to constitute a working interdisciplinary system, including how these resources lead to scientific defensibility through known lineage of all data products. An illustration of how scientific processes are encapsulated and executed illuminates how previously written processes and functions are integrated into the system efficiently and with minimal effort. Meta-data basics will illustrate how intricate relationships may easily be represented and used to good advantage. Retrieval techniques will be discussed, including the trade-offs of using meta-data versus embedded data, how the two may be integrated, and how simplifying assumptions may or may not help. This system is based upon the experience of the Sequoia 2000 and BigSur research projects at the University of California, Berkeley, whose goal was to find an alternative to the Hughes EOS-DIS system; it is presently offered by Science Tools corporation, of which the author is a principal.

  2. POLARIZED LINE FORMATION WITH LOWER-LEVEL POLARIZATION AND PARTIAL FREQUENCY REDISTRIBUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Supriya, H. D.; Sampoorna, M.; Nagendra, K. N.

    2016-09-10

    In the well-established theories of polarized line formation with partial frequency redistribution (PRD) for a two-level and two-term atom, it is generally assumed that the lower level of the scattering transition is unpolarized. However, the existence of unexplained spectral features in some lines of the Second Solar Spectrum points toward a need to relax this assumption. There exists a density matrix theory that accounts for the polarization of all the atomic levels, but it is based on the flat-spectrum approximation (corresponding to complete frequency redistribution). In the present paper we propose a numerical algorithm to solve the problem of polarized line formation in magnetized media, which includes both the effects of PRD and the lower level polarization (LLP) for a two-level atom. First we derive a collisionless redistribution matrix that includes the combined effects of the PRD and the LLP. We then solve the relevant transfer equation using a two-stage approach. For illustration purposes, we consider two case studies in the non-magnetic regime, namely, J_a = 1, J_b = 0 and J_a = J_b = 1, where J_a and J_b represent the total angular momentum quantum numbers of the lower and upper states, respectively. Our studies show that the effects of LLP are significant only in the line core. This leads us to propose a simplified numerical approach to solve the concerned radiative transfer problem.

  3. Development of Curved-Plate Elements for the Exact Buckling Analysis of Composite Plate Assemblies Including Transverse-Shear Effects

    NASA Technical Reports Server (NTRS)

    McGowan, David Michael

    1997-01-01

    The analytical formulation of curved-plate non-linear equilibrium equations including transverse-shear-deformation effects is presented. The formulation uses the principle of virtual work. A unified set of non-linear strains that contains terms from both physical and tensorial strain measures is used. Linearized, perturbed equilibrium equations (stability equations) that describe the response of the plate just after buckling occurs are then derived after the application of several simplifying assumptions. These equations are then modified to allow the reference surface of the plate to be located at a distance z_c from the centroidal surface. The implementation of the new theory into the VICONOPT exact buckling and vibration analysis and optimum design computer program is described as well. The terms of the plate stiffness matrix using both Classical Plate Theory (CPT) and first-order Shear-Deformation Plate Theory (SDPT) are presented. The necessary steps to include the effects of in-plane transverse and in-plane shear loads in the in-plane stability equations are also outlined. Numerical results are presented using the newly implemented capability. Comparisons of results for several example problems with different loading states are made. Comparisons of analyses using both physical and tensorial strain measures as well as CPT and SDPT are also made. Results comparing the computational effort required by the new analysis to that of the analysis currently in the VICONOPT program are presented. The effects of including terms related to in-plane transverse and in-plane shear loadings in the in-plane stability equations are also examined. Finally, results of a design-optimization study of two different cylindrical shells subject to uniform axial compression are presented.

  4. Multilocus Phylogeography and Species Delimitation in the Cumberland Plateau Salamander, Plethodon kentucki: Incongruence among Data Sets and Methods

    PubMed Central

    Kuchta, Shawn R.; Brown, Ashley D.; Converse, Paul E.; Highton, Richard

    2016-01-01

    Species are a fundamental unit of biodiversity, yet can be challenging to delimit objectively. This is particularly true of species complexes characterized by high levels of population genetic structure, hybridization between genetic groups, isolation by distance, and limited phenotypic variation. Previous work on the Cumberland Plateau Salamander, Plethodon kentucki, suggested that it might constitute a species complex despite occupying a relatively small geographic range. To examine this hypothesis, we sampled 135 individuals from 43 populations, and used four mitochondrial loci and five nuclear loci (5693 base pairs) to quantify phylogeographic structure and probe for cryptic species diversity. Rates of evolution for each locus were inferred using the multidistribute package, and time calibrated gene trees and species trees were inferred using BEAST 2 and *BEAST 2, respectively. Because the parameter space relevant for species delimitation is large and complex, and all methods make simplifying assumptions that may lead them to fail, we conducted an array of analyses. Our assumption was that strongly supported species would be congruent across methods. Putative species were first delimited using a Bayesian implementation of the GMYC model (bGMYC), Geneland, and Brownie. We then validated these species using the genealogical sorting index and BPP. We found substantial phylogeographic diversity using mtDNA, including four divergent clades and an inferred common ancestor at 14.9 myr (95% HPD: 10.8–19.7 myr). By contrast, this diversity was not corroborated by nuclear sequence data, which exhibited low levels of variation and weak phylogeographic structure. Species trees estimated a far younger root than did the mtDNA data, closer to 1.0 myr old. Mutually exclusive putative species were identified by the different approaches. Possible causes of data set discordance, and the problem of species delimitation in complexes with high levels of population structure and introgressive hybridization, are discussed. PMID:26974148

  5. The effect of a twin tunnel on the propagation of ground-borne vibration from an underground railway

    NASA Astrophysics Data System (ADS)

    Kuo, K. A.; Hunt, H. E. M.; Hussein, M. F. M.

    2011-12-01

    Accurate predictions of ground-borne vibration levels in the vicinity of an underground railway are greatly sought after in modern urban centres. Yet the complexity involved in simulating the underground environment means that it is necessary to make simplifying assumptions about this system. One such commonly made assumption is to ignore the effects of neighbouring tunnels, despite the fact that many underground railway lines consist of twin-bored tunnels, one for the outbound direction and one for the inbound direction. This paper presents a unique model for two tunnels embedded in a homogeneous, elastic fullspace. Each of these tunnels is subject to both known, dynamic train forces and dynamic cavity forces. The net forces acting on the tunnels are written as the sum of those tractions acting on the invert of a single tunnel, and those tractions that represent the motion induced by the neighbouring tunnel. By apportioning the tractions in this way, the vibration response of a two-tunnel system is written as a linear combination of displacement fields produced by a single-tunnel system. Using Fourier decomposition, forces are partitioned into symmetric and antisymmetric mode-number components to minimise computation times. The significance of the interactions between two tunnels is quantified by calculating the insertion gains, in both the vertical and horizontal directions, that result from the existence of a second tunnel. The insertion-gain results are shown to be localised and highly dependent on frequency, tunnel orientation and tunnel thickness. At some locations, the magnitude of these insertion gains is greater than 20 dB. This demonstrates that a high degree of inaccuracy exists in any surface vibration prediction model that includes only one of the two tunnels. This novel two-tunnel solution represents a significant contribution to the existing body of research into vibration from underground railways, as it shows that the second tunnel has a significant influence on the accuracy of vibration predictions for underground railways.
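
    Where the abstract quantifies two-tunnel interaction through insertion gains, the following minimal sketch shows the dB calculation, assuming complex displacement responses from the single- and two-tunnel models at matched field points; the array contents are invented for illustration and are not the paper's data.

    ```python
    # Sketch only: insertion gain (dB) of adding a second tunnel, given
    # complex displacement responses at the same field points from the
    # single-tunnel and two-tunnel models. Values are illustrative.
    import numpy as np

    def insertion_gain_db(u_single, u_twin):
        """Insertion gain = 20*log10(|response with 2nd tunnel| / |response without|)."""
        return 20.0 * np.log10(np.abs(u_twin) / np.abs(u_single))

    u1 = np.array([1.0e-8 + 2.0e-8j, 3.0e-8 + 1.0e-9j])   # single-tunnel response
    u2 = np.array([4.0e-9 + 1.0e-8j, 9.0e-8 + 2.0e-9j])   # with neighbouring tunnel
    print(insertion_gain_db(u1, u2))  # magnitudes beyond ~20 dB flag strong coupling
    ```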

  6. Finite Element Modeling of a Cylindrical Contact Using Hertzian Assumptions

    NASA Technical Reports Server (NTRS)

    Knudsen, Erik

    2003-01-01

    The turbine blades in the high-pressure fuel turbopump/alternate turbopump (HPFTP/AT) are subjected to hot gases rapidly flowing around them. This flow excites vibrations in the blades. Naturally, one has to worry about resonance, so a damping device was added to dissipate some energy from the system. The foundation is now laid for a very complex problem. The damper is in contact with the blade, so now there are contact stresses (both normal and tangential) to contend with. Since these stresses can be very high, it is not all that difficult to yield the material. Friction is another non-linearity, and the blade is made of a nickel-based single-crystal superalloy that is orthotropic. A few approaches exist to solve such a problem, and computer models using contact elements have been built with friction, plasticity, etc. These models are quite cumbersome and require many hours to solve just one load case and material orientation. A simpler approach is required. Ideally, the model should be simplified so the analysis can be conducted faster. When working with contact problems, determining the contact patch and the stresses in the material are the main concerns. Closed-form solutions, developed by Hertz, for non-conforming bodies made of isotropic materials are readily available. More involved solutions for 3-D cases using different materials are also available. The question is this: can Hertzian solutions be applied, or superimposed, to more complicated problems, like those involving anisotropic materials? That is the point of the investigation here. If these results agree with the more complicated computer models, then the analytical solutions can be used in lieu of the numerical solutions that take a very long time to process. As time goes on, the analytical solution will eventually have to include things like friction and plasticity. The models in this report use no contact elements and are essentially an applied load problem using Hertzian assumptions to determine the contact patch dimensions.
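
    The Hertzian closed-form results referred to are standard; a minimal sketch of the isotropic line-contact case (cylinder on flat, illustrative inputs, not the report's FE model) is:

    ```python
    # Sketch only: classical Hertz line-contact formulas for a cylinder on a
    # flat, isotropic materials assumed. Input values are illustrative.
    import math

    def hertz_line_contact(P_per_len, R, E1, nu1, E2, nu2):
        """Return contact half-width b and peak pressure p0.

        P_per_len : normal load per unit axial length [N/m]
        R         : effective radius (cylinder radius against a flat) [m]
        """
        E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)   # contact modulus
        b = math.sqrt(4.0 * P_per_len * R / (math.pi * E_star))  # half-width
        p0 = 2.0 * P_per_len / (math.pi * b)                     # max contact pressure
        return b, p0

    b, p0 = hertz_line_contact(P_per_len=1.0e5, R=5e-3,
                               E1=200e9, nu1=0.3, E2=200e9, nu2=0.3)
    print(f"half-width = {b*1e6:.1f} um, peak pressure = {p0/1e9:.2f} GPa")
    ```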

  7. An Efficient Ray-Tracing Method for Determining Terrain Intercepts in EDL Simulations

    NASA Technical Reports Server (NTRS)

    Shidner, Jeremy D.

    2016-01-01

    The calculation of a ray's intercept from an arbitrary point in space to a prescribed surface is a common task in computer simulations. The arbitrary point often represents an object that is moving according to the simulation, while the prescribed surface is fixed in a defined frame. For detailed simulations, this surface becomes complex, taking the form of real-world objects such as mountains, craters or valleys, which require more advanced methods to accurately calculate a ray's intercept location. Incorporation of these complex surfaces has commonly been implemented in graphics systems that utilize highly optimized graphics processing units to analyze such features. This paper proposes a simplified method that does not require computationally intensive graphics solutions, but rather an optimized ray-tracing method for an assumed terrain dataset. This approach was developed for the Mars Science Laboratory mission, which landed on the complex terrain of Gale Crater. First, this paper begins with a discussion of the simulation used to implement the model and the applicability of finding surface intercepts with respect to atmosphere modeling, altitude determination, radar modeling, and contact forces influencing vehicle dynamics. Next, the derivation and assumptions of the intercept finding method are presented. Key assumptions are noted, making the routines specific to only certain types of surface data sets that are equidistantly spaced in longitude and latitude. The derivation of the method relies on ray-tracing, requiring discussion on the formulation of the ray with respect to the terrain datasets. Further discussion includes techniques for ray initialization in order to optimize the intercept search. Then, the model implementation for various new applications in the simulation is demonstrated. Finally, a validation of the accuracy is presented along with the corresponding data sets used in the validation. A performance summary of the method is shown using the analysis from the Mars Science Laboratory's terminal descent sensing model. Alternate uses are also shown for determining horizon maps and orbiter set times.
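
    A minimal sketch of a march-and-bisect intercept search of the general kind described, in a simplified 2-D downrange/altitude frame rather than the paper's lon/lat formulation; function names, step sizes and terrain values are illustrative assumptions, not the MSL flight code.

    ```python
    # Sketch only: intercept of a ray with a terrain profile gridded
    # equidistantly in x (standing in for equidistant lon/lat spacing).
    import numpy as np

    def height_at(x, terrain, x0, dx):
        """Linear interpolation on an equidistant terrain grid
        (clipped index linearly extrapolates past the grid edge)."""
        j = (x - x0) / dx
        j0 = int(np.clip(j, 0, len(terrain) - 2))
        f = j - j0
        return terrain[j0] * (1 - f) + terrain[j0 + 1] * f

    def ray_intercept(p, d, terrain, x0, dx, s_max, ds=5.0, tol=1e-3):
        """Return the point where the ray p + s*d first crosses the terrain."""
        above = lambda s: (p[1] + s * d[1]) - height_at(p[0] + s * d[0], terrain, x0, dx)
        s_lo, s_hi = 0.0, ds
        while s_hi < s_max and above(s_hi) > 0.0:   # coarse march along the ray
            s_lo, s_hi = s_hi, s_hi + ds
        if above(s_hi) > 0.0:
            return None                              # no intercept within s_max
        while s_hi - s_lo > tol:                     # bisection refinement
            s_mid = 0.5 * (s_lo + s_hi)
            if above(s_mid) > 0.0:
                s_lo = s_mid
            else:
                s_hi = s_mid
        return p + 0.5 * (s_lo + s_hi) * d

    terrain = np.array([0.0, 5.0, 12.0, 30.0, 18.0, 8.0])   # heights every 100 m
    p = np.array([0.0, 200.0])                               # start 200 m up
    d = np.array([1.0, -0.3]) / np.hypot(1.0, 0.3)           # descending ray
    print(ray_intercept(p, d, terrain, x0=0.0, dx=100.0, s_max=2000.0))
    ```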

  8. Climate Model Ensemble Methodology: Rationale and Challenges

    NASA Astrophysics Data System (ADS)

    Vezer, M. A.; Myrvold, W.

    2012-12-01

    A tractable model of the Earth's atmosphere, or, indeed, any large, complex system, is inevitably unrealistic in a variety of ways. This will have an effect on the model's output. Nonetheless, we want to be able to rely on certain features of the model's output in studies aiming to detect, attribute, and project climate change. For this, we need assurance that these features reflect the target system, and are not artifacts of the unrealistic assumptions that go into the model. One technique for overcoming these limitations is to study ensembles of models which employ different simplifying assumptions and different methods of modelling. One then either takes as reliable certain outputs on which models in the ensemble agree, or takes the average of these outputs as the best estimate. Since the Intergovernmental Panel on Climate Change's Fourth Assessment Report (IPCC AR4), modellers have aimed to improve ensemble analysis by developing techniques to account for dependencies among models, and to ascribe unequal weights to models according to their performance. The goal of this paper is to present as clearly and cogently as possible the rationale for climate model ensemble methodology, the motivation of modellers to account for model dependencies, and their efforts to ascribe unequal weights to models. The method of our analysis is as follows. We consider a simpler, well-understood case: taking the mean of a number of measurements of some quantity. Contrary to what is sometimes said, it is not a requirement of this practice that the errors of the component measurements be independent; one must, however, compensate for any lack of independence. We also extend the usual accounts to include cases of unknown systematic error. We draw parallels between this simpler illustration and the more complex example of climate model ensembles, detailing how ensembles can provide more useful information than any of their constituent models. This account emphasizes the epistemic importance of considering degrees of model dependence, and the practice of ascribing unequal weights to models of unequal skill.
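
    The measurement analogy can be made concrete: with correlated errors, the best linear unbiased (GLS) weighted mean compensates for the dependence rather than requiring independence. A minimal sketch, with invented covariance values:

    ```python
    # Sketch only: weighted mean of correlated measurements using GLS
    # weights w = C^{-1} 1 / (1^T C^{-1} 1). Numbers are illustrative.
    import numpy as np

    def blue_mean(x, cov):
        """Best linear unbiased estimate of a common mean with covariance cov."""
        ones = np.ones(len(x))
        w = np.linalg.solve(cov, ones)
        w /= ones @ w
        mean = w @ x
        var = 1.0 / (ones @ np.linalg.solve(cov, ones))  # variance of the estimate
        return mean, var, w

    x = np.array([14.1, 14.4, 14.3])                     # e.g. ensemble outputs
    C = np.array([[0.20, 0.12, 0.02],                    # correlated errors
                  [0.12, 0.25, 0.03],
                  [0.02, 0.03, 0.30]])
    print(blue_mean(x, C))  # the strongly correlated pair is down-weighted
    ```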

  9. Numerical analysis of one-dimensional temperature data for groundwater/surface-water exchange with 1DTempPro

    NASA Astrophysics Data System (ADS)

    Voytek, E. B.; Drenkelfuss, A.; Day-Lewis, F. D.; Healy, R. W.; Lane, J. W.; Werkema, D. D.

    2012-12-01

    Temperature is a naturally occurring tracer, which can be exploited to infer the movement of water through the vadose and saturated zones, as well as the exchange of water between aquifers and surface-water bodies, such as estuaries, lakes, and streams. One-dimensional (1D) vertical temperature profiles commonly show thermal amplitude attenuation and increasing phase lag of diurnal or seasonal temperature variations with propagation into the subsurface. This behavior is described by the heat-transport equation (i.e., the convection-conduction-dispersion equation), which can be solved analytically in 1D under certain simplifying assumptions (e.g., sinusoidal or steady-state boundary conditions and homogeneous hydraulic and thermal properties). Analysis of 1D temperature profiles using analytical models provides estimates of vertical groundwater/surface-water exchange. The utility of these estimates can be diminished when the model assumptions are violated, as is common in field applications. Alternatively, analysis of 1D temperature profiles using numerical models allows for consideration of more complex and realistic boundary conditions. However, such analyses commonly require model calibration and the development of input files for finite-difference or finite-element codes. To address the calibration and input file requirements, a new computer program, 1DTempPro, is presented that facilitates numerical analysis of vertical 1D temperature profiles. 1DTempPro is a graphical user interface (GUI) to the USGS code VS2DH, which numerically solves the flow- and heat-transport equations. Pre- and post-processor features within 1DTempPro allow the user to calibrate VS2DH models to estimate groundwater/surface-water exchange and hydraulic conductivity in cases where hydraulic head is known. This approach improves groundwater/surface-water exchange-rate estimates for real-world data with complexities ill-suited for examination with analytical methods. Additionally, the code allows for time-varying temperature and hydraulic boundary conditions. Here, we present the approach and include examples for several datasets from stream/aquifer systems.
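
    As a minimal sketch of the analytical limit being relaxed, here is the conduction-only solution for a sinusoidal surface boundary (advection and dispersion neglected; parameter values are illustrative and this is not 1DTempPro's engine):

    ```python
    # Sketch only: conduction-only limit of the 1D heat-transport equation
    # with surface forcing A*sin(w*t): a damped, phase-lagged sinusoid.
    import numpy as np

    def diurnal_temperature(z, t, A=5.0, kappa=1.0e-6, period=86400.0):
        """Temperature perturbation at depth z [m] and time t [s].

        kappa : thermal diffusivity [m^2/s].
        Damping depth d = sqrt(2*kappa/w); amplitude decays as exp(-z/d)
        and the phase lags by z/d radians, matching the attenuation and
        phase-lag behavior the abstract describes.
        """
        w = 2.0 * np.pi / period
        d = np.sqrt(2.0 * kappa / w)
        return A * np.exp(-z / d) * np.sin(w * t - z / d)

    t = np.linspace(0.0, 86400.0, 5)
    print(diurnal_temperature(0.2, t))   # attenuated and delayed at 20 cm depth
    ```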

  10. Design data needs modular high-temperature gas-cooled reactor. Revision 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1987-03-01

    The Design Data Needs (DDNs) provide summary statements, for program management, of the designer's need for experimental data to confirm or validate assumptions made in the design. These assumptions were developed using the Integrated Approach and are tabulated in the Functional Analysis Report. These assumptions were also necessary in the analyses or trade studies (A/TS) used to develop selections of hardware design or design requirements. Each DDN includes statements providing traceability to the function and the associated assumption that requires the need.

  11. Improvements to Fidelity, Generation and Implementation of Physics-Based Lithium-Ion Reduced-Order Models

    NASA Astrophysics Data System (ADS)

    Rodriguez Marco, Albert

    Battery management systems (BMS) require computationally simple but highly accurate models of the battery cells they are monitoring and controlling. Historically, empirical equivalent-circuit models have been used, but increasingly researchers are focusing their attention on physics-based models due to their greater predictive capabilities. These models are of high intrinsic computational complexity and so must undergo some kind of order-reduction process to make their use by a BMS feasible: we favor methods based on a transfer-function approach to battery-cell dynamics. In prior works, transfer functions have been found from full-order PDE models via two simplifying assumptions: (1) a linearization assumption, which is a fundamental necessity in order to make transfer functions, and (2) an assumption, made out of expedience, that decouples the electrolyte-potential and electrolyte-concentration PDEs in order to render an approach to solve for the transfer functions from the PDEs. This dissertation improves the fidelity of physics-based models by eliminating the need for the second assumption and by linearizing the nonlinear dynamics around different constant currents. Electrochemical transfer functions are infinite-order and cannot be expressed as a ratio of polynomials in the Laplace variable s. Thus, for practical use, these systems need to be approximated using reduced-order models that capture the most significant dynamics. This dissertation improves the generation of physics-based reduced-order models by introducing different realization algorithms, which produce a low-order model from the infinite-order electrochemical transfer functions. Physics-based reduced-order models are linear and describe cell dynamics well only when operated near the setpoint at which they were generated. Hence, multiple physics-based reduced-order models need to be generated at different setpoints (i.e., state-of-charge, temperature and C-rate) in order to extend the cell operating range. This dissertation improves the implementation of physics-based reduced-order models by introducing different blending approaches that combine the pre-computed models generated (offline) at different setpoints in order to produce good electrochemical estimates (online) across the cell's state-of-charge, temperature and C-rate range.
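
    A minimal sketch of one possible blending scheme, linear output interpolation between two state-of-charge setpoint models; the dissertation's actual blending approaches and model matrices are not reproduced here, and the numbers below are made up.

    ```python
    # Sketch only: blending outputs of pre-computed linear ROMs generated
    # at two SOC setpoints. Matrices and setpoints are illustrative.
    import numpy as np

    class LinearROM:
        """Discrete-time state-space ROM generated at one SOC setpoint."""
        def __init__(self, A, B, C, soc):
            self.A, self.B, self.C, self.soc = A, B, C, soc
            self.x = np.zeros(A.shape[0])
        def step(self, u):
            self.x = self.A @ self.x + self.B * u
            return float(self.C @ self.x)

    def blended_output(rom_lo, rom_hi, u, soc):
        """Step both setpoint models and linearly interpolate their outputs."""
        alpha = (soc - rom_lo.soc) / (rom_hi.soc - rom_lo.soc)
        alpha = min(max(alpha, 0.0), 1.0)
        return (1.0 - alpha) * rom_lo.step(u) + alpha * rom_hi.step(u)

    A = np.array([[0.95, 0.0], [0.1, 0.8]])
    rom20 = LinearROM(A, np.array([0.010, 0.0]), np.array([1.0, 0.5]), soc=0.2)
    rom40 = LinearROM(A * 0.98, np.array([0.012, 0.0]), np.array([1.0, 0.4]), soc=0.4)
    print(blended_output(rom20, rom40, u=1.0, soc=0.3))  # halfway blend
    ```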

  12. Intergenerational resource transfers with random offspring numbers

    PubMed Central

    Arrow, Kenneth J.; Levin, Simon A.

    2009-01-01

    A problem common to biology and economics is the transfer of resources from parents to children. We consider the issue under the assumption that the number of offspring is unknown and can be represented as a random variable. There are 3 basic assumptions. The first assumption is that a given body of resources can be divided into consumption (yielding satisfaction) and transfer to children. The second assumption is that the parents' welfare includes a concern for the welfare of their children; this is recursive in the sense that the children's welfares include concern for their children and so forth. However, the welfare of a child from a given consumption is counted somewhat differently (generally less) than that of the parent (the welfare of a child is “discounted”). The third assumption is that resources transferred may grow (or decline). In economic language, investment, including that in education or nutrition, is productive. Under suitable restrictions, precise formulas for the resulting allocation of resources are found, demonstrating that, depending on the shape of the utility curve, uncertainty regarding the number of offspring may or may not favor increased consumption. The results imply that wealth (stock of resources) will ultimately have a log-normal distribution. PMID:19617553
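
    A hedged sketch of the recursive structure described, in Bellman form; the symbols are illustrative and not the paper's notation: resources x, consumption c, transfer growth factor g, discount factor β for children's welfare, and random offspring number N.

    ```latex
    % Illustrative only -- not the paper's exact formulation.
    V(x) \;=\; \max_{0 \le c \le x} \Bigl\{\, u(c)
          \;+\; \beta \, \mathbb{E}_N\!\Bigl[\, N \, V\!\bigl(\tfrac{g\,(x - c)}{N}\bigr) \Bigr] \Bigr\}
    ```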

  13. The influence of a wind tunnel on helicopter rotational noise: Formulation of analysis

    NASA Technical Reports Server (NTRS)

    Mosher, M.

    1984-01-01

    An analytical model is discussed that can be used to examine the effects of wind tunnel walls on helicopter rotational noise. A complete physical model of an acoustic source in a wind tunnel is described and a simplified version is then developed. This simplified model retains the important physical processes involved, yet it is more amenable to analysis. The simplified physical model is then modeled as a mathematical problem. An inhomogeneous partial differential equation with mixed boundary conditions is set up and then transformed into an integral equation. Details of generating a suitable Green's function and integral equation are included and the equation is discussed and also given for a two-dimensional case.

  14. Fully Bayesian tests of neutrality using genealogical summary statistics.

    PubMed

    Drummond, Alexei J; Suchard, Marc A

    2008-10-31

    Many data summary statistics have been developed to detect departures from neutral expectations of evolutionary models. However, questions about the neutrality of the evolution of genetic loci within natural populations remain difficult to assess. One critical cause of this difficulty is that most methods for testing neutrality make simplifying assumptions simultaneously about the mutational model and the population size model. Consequently, rejecting the null hypothesis of neutrality under these methods could result from violations of either or both assumptions, making interpretation troublesome. Here we harness posterior predictive simulation to exploit summary statistics of both the data and model parameters to test the goodness-of-fit of standard models of evolution. We apply the method to test the selective neutrality of molecular evolution in non-recombining gene genealogies and we demonstrate the utility of our method on four real data sets, identifying significant departures from neutrality in human influenza A virus, even after controlling for variation in population size. Importantly, by employing a full model-based Bayesian analysis, our method separates the effects of demography from the effects of selection. The method also allows multiple summary statistics to be used in concert, thus potentially increasing sensitivity. Furthermore, our method remains useful in situations where analytical expectations and variances of summary statistics are not available. This aspect has great potential for the analysis of temporally spaced data, an expanding area previously neglected owing to the limited availability of theory and methods.
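
    The posterior predictive machinery follows a generic pattern, sketched below on a toy normal model; `simulate` and `statistic` are placeholders standing in for the paper's coalescent simulations and genealogical summary statistics.

    ```python
    # Sketch only: a generic posterior predictive check with a summary
    # statistic. The toy normal model is illustrative, not the paper's.
    import numpy as np

    rng = np.random.default_rng(1)

    def posterior_predictive_pvalue(observed, posterior_draws, simulate, statistic):
        """Fraction of replicated datasets whose statistic exceeds the observed one."""
        t_obs = statistic(observed)
        t_rep = np.array([statistic(simulate(theta)) for theta in posterior_draws])
        return np.mean(t_rep >= t_obs)

    data = rng.normal(0.3, 1.0, size=100)
    draws = rng.normal(data.mean(), 1.0 / np.sqrt(len(data)), size=2000)  # toy posterior
    p = posterior_predictive_pvalue(data, draws,
                                    simulate=lambda th: rng.normal(th, 1.0, size=100),
                                    statistic=np.mean)
    print(p)  # values near 0 or 1 signal model misfit
    ```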

  15. Simplified Estimation and Testing in Unbalanced Repeated Measures Designs.

    PubMed

    Spiess, Martin; Jordan, Pascal; Wendt, Mike

    2018-05-07

    In this paper we propose a simple estimator for unbalanced repeated measures design models where each unit is observed at least once in each cell of the experimental design. The estimator does not require a model of the error covariance structure. Thus, circularity of the error covariance matrix and estimation of correlation parameters and variances are not necessary. Together with a weak assumption about the reason for the varying number of observations, the proposed estimator and its variance estimator are unbiased. As an alternative to confidence intervals based on the normality assumption, a bias-corrected and accelerated bootstrap technique is considered. We also propose the naive percentile bootstrap for Wald-type tests, where the standard Wald test may break down when the number of observations is small relative to the number of parameters to be estimated. In a simulation study we illustrate the properties of the estimator and the bootstrap techniques to calculate confidence intervals and conduct hypothesis tests in small and large samples under normality and non-normality of the errors. The results imply that the simple estimator is only slightly less efficient than an estimator that correctly assumes a block structure of the error correlation matrix, a special case of which is an equi-correlation matrix. Application of the estimator and the bootstrap technique is illustrated using data from a task-switch experiment based on a within-subjects experimental design with 32 cells and 33 participants.
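
    A minimal sketch of the naive percentile bootstrap mentioned above, resampling units (participants) with replacement; the contrast estimator and data are toy stand-ins, not the paper's estimator.

    ```python
    # Sketch only: naive percentile bootstrap CI, resampling rows (units).
    import numpy as np

    rng = np.random.default_rng(0)

    def percentile_bootstrap_ci(data, estimator, n_boot=2000, alpha=0.05):
        """Resample units with replacement; take empirical quantiles."""
        n = data.shape[0]
        stats = np.array([estimator(data[rng.integers(0, n, size=n)])
                          for _ in range(n_boot)])
        return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

    units = rng.normal(0.5, 1.0, size=(33, 4))            # 33 units, 4 cells
    effect = lambda d: d[:, 0].mean() - d[:, 1].mean()    # toy cell contrast
    print(percentile_bootstrap_ci(units, effect))
    ```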

  16. A Hydrodynamic Model of Alfvénic Wave Heating in a Coronal Loop and Its Chromospheric Footpoints

    NASA Astrophysics Data System (ADS)

    Reep, Jeffrey W.; Russell, Alexander J. B.; Tarr, Lucas A.; Leake, James E.

    2018-02-01

    Alfvénic waves have been proposed as an important energy transport mechanism in coronal loops, capable of delivering energy to both the corona and chromosphere and giving rise to many observed features of flaring and quiescent regions. In previous work, we established that resistive dissipation of waves (ambipolar diffusion) can drive strong chromospheric heating and evaporation, capable of producing flaring signatures. However, that model was based on a simplified assumption that the waves propagate instantly to the chromosphere, an assumption that the current work removes. Via a ray-tracing method, we have implemented traveling waves in a field-aligned hydrodynamic simulation that dissipate locally as they propagate along the field line. We compare this method to and validate against the magnetohydrodynamics code Lare3D. We then examine the importance of travel times to the dynamics of the loop evolution, finding that (1) the ionization level of the plasma plays a critical role in determining the location and rate at which waves dissipate; (2) long duration waves effectively bore a hole into the chromosphere, allowing subsequent waves to penetrate deeper than previously expected, unlike an electron beam whose energy deposition rises in height as evaporation reduces the mean-free paths of the electrons; and (3) the dissipation of these waves drives a pressure front that propagates to deeper depths, unlike energy deposition by an electron beam.

  17. Separating intrinsic from extrinsic fluctuations in dynamic biological systems

    PubMed Central

    Paulsson, Johan

    2011-01-01

    From molecules in cells to organisms in ecosystems, biological populations fluctuate due to the intrinsic randomness of individual events and the extrinsic influence of changing environments. The combined effect is often too complex for effective analysis, and many studies therefore make simplifying assumptions, for example ignoring either intrinsic or extrinsic effects to reduce the number of model assumptions. Here we mathematically demonstrate how two identical and independent reporters embedded in a shared fluctuating environment can be used to identify intrinsic and extrinsic noise terms, but also how these contributions are qualitatively and quantitatively different from what has been previously reported. Furthermore, we show for which classes of biological systems the noise contributions identified by dual-reporter methods correspond to the noise contributions predicted by correct stochastic models of either intrinsic or extrinsic mechanisms. We find that for broad classes of systems, the extrinsic noise from the dual-reporter method can be rigorously analyzed using models that ignore intrinsic stochasticity. In contrast, the intrinsic noise can be rigorously analyzed using models that ignore extrinsic stochasticity only under very special conditions that rarely hold in biology. Testing whether the conditions are met is rarely possible and the dual-reporter method may thus produce flawed conclusions about the properties of the system, particularly about the intrinsic noise. Our results contribute toward establishing a rigorous framework to analyze dynamically fluctuating biological systems. PMID:21730172

  18. Separating intrinsic from extrinsic fluctuations in dynamic biological systems.

    PubMed

    Hilfinger, Andreas; Paulsson, Johan

    2011-07-19

    From molecules in cells to organisms in ecosystems, biological populations fluctuate due to the intrinsic randomness of individual events and the extrinsic influence of changing environments. The combined effect is often too complex for effective analysis, and many studies therefore make simplifying assumptions, for example ignoring either intrinsic or extrinsic effects to reduce the number of model assumptions. Here we mathematically demonstrate how two identical and independent reporters embedded in a shared fluctuating environment can be used to identify intrinsic and extrinsic noise terms, but also how these contributions are qualitatively and quantitatively different from what has been previously reported. Furthermore, we show for which classes of biological systems the noise contributions identified by dual-reporter methods correspond to the noise contributions predicted by correct stochastic models of either intrinsic or extrinsic mechanisms. We find that for broad classes of systems, the extrinsic noise from the dual-reporter method can be rigorously analyzed using models that ignore intrinsic stochasticity. In contrast, the intrinsic noise can be rigorously analyzed using models that ignore extrinsic stochasticity only under very special conditions that rarely hold in biology. Testing whether the conditions are met is rarely possible and the dual-reporter method may thus produce flawed conclusions about the properties of the system, particularly about the intrinsic noise. Our results contribute toward establishing a rigorous framework to analyze dynamically fluctuating biological systems.
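
    The dual-reporter decomposition the paper re-examines is commonly computed with the standard normalized formulas of Elowitz et al. (2002); a minimal sketch on synthetic data (the shared-environment model below is illustrative only):

    ```python
    # Sketch only: standard dual-reporter noise decomposition. x1, x2 are
    # matched measurements of two identical, independent reporters in the
    # same cells; the synthetic data are for illustration.
    import numpy as np

    rng = np.random.default_rng(2)

    def dual_reporter_noise(x1, x2):
        """Return (intrinsic, extrinsic, total) normalized noise contributions."""
        m1, m2 = x1.mean(), x2.mean()
        intrinsic = np.mean((x1 - x2) ** 2) / (2 * m1 * m2)
        extrinsic = (np.mean(x1 * x2) - m1 * m2) / (m1 * m2)
        return intrinsic, extrinsic, intrinsic + extrinsic

    env = rng.lognormal(0.0, 0.3, size=5000)    # shared fluctuating environment
    x1 = rng.poisson(100 * env).astype(float)   # reporter 1
    x2 = rng.poisson(100 * env).astype(float)   # reporter 2
    print(dual_reporter_noise(x1, x2))
    ```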

  19. Calculation of Disease Dynamics in a Population of Households

    PubMed Central

    Ross, Joshua V.; House, Thomas; Keeling, Matt J.

    2010-01-01

    Early mathematical representations of infectious disease dynamics assumed a single, large, homogeneously mixing population. Over the past decade there has been growing interest in models consisting of multiple smaller subpopulations (households, workplaces, schools, communities), with the natural assumption of strong homogeneous mixing within each subpopulation, and weaker transmission between subpopulations. Here we consider a model of SIRS (susceptible-infectious-recovered-susceptible) infection dynamics in a very large (assumed infinite) population of households, with the simplifying assumption that each household is of the same size (although all methods may be extended to a population with a heterogeneous distribution of household sizes). For this households model we present efficient methods for studying several quantities of epidemiological interest: (i) the threshold for invasion; (ii) the early growth rate; (iii) the household offspring distribution; (iv) the endemic prevalence of infection; and (v) the transient dynamics of the process. We utilize these methods to explore a wide region of parameter space appropriate for human infectious diseases. We then extend these results to consider the effects of more realistic gamma-distributed infectious periods. We discuss how all these results differ from standard homogeneous-mixing models and assess the implications for the invasion, transmission and persistence of infection. The computational efficiency of the methodology presented here will hopefully aid in the parameterisation of structured models and in the evaluation of appropriate responses for future disease outbreaks. PMID:20305791
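
    A minimal sketch of the core construction behind such household calculations: the within-household transition-rate matrix for SIRS with an external force of infection. The rate conventions and parameter values are illustrative assumptions, not the paper's code.

    ```python
    # Sketch only: generator matrix over household states (S, I), R = n - S - I,
    # for SIRS with internal frequency-dependent transmission and external
    # force of infection Lam. Rates are one common convention, not the paper's.
    import numpy as np
    from itertools import product

    def sirs_generator(n, beta, gamma, xi, Lam):
        """Rate matrix Q for one household of fixed size n."""
        states = [(s, i) for s, i in product(range(n + 1), repeat=2) if s + i <= n]
        index = {st: k for k, st in enumerate(states)}
        Q = np.zeros((len(states), len(states)))
        for (s, i) in states:
            k = index[(s, i)]
            r = n - s - i
            if s > 0:                                   # infection (internal + external)
                rate = s * (beta * i / (n - 1) + Lam) if n > 1 else s * Lam
                Q[k, index[(s - 1, i + 1)]] += rate
            if i > 0:                                   # recovery
                Q[k, index[(s, i - 1)]] += gamma * i
            if r > 0:                                   # waning immunity (R -> S)
                Q[k, index[(s + 1, i)]] += xi * r
            Q[k, k] = -Q[k].sum()
        return Q, states

    Q, states = sirs_generator(n=3, beta=2.0, gamma=1.0, xi=0.1, Lam=0.05)
    print(Q.shape)   # (10, 10): ten (S, I) states for a household of size 3
    ```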

  20. Prototyping and validating requirements of radiation and nuclear emergency plan simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamid, AHA., E-mail: amyhamijah@nm.gov.my; Faculty of Computing, Universiti Teknologi Malaysia; Rozan, MZA.

    2015-04-29

    Organizational inability to develop realistic, practical, adequate and unambiguous mechanisms for radiological and nuclear emergency preparedness and response (EPR) plans can cause emergency-plan disorder and severe disasters. Such situations stem largely (65.6%) from poorly defined and unidentified roles and duties of the disaster coordinator. These unexpected conditions bring serious consequences for first responders, operators, workers, patients and the community at large. Hence, in this report, we discuss the prototyping and validation of Malaysia's radiation and nuclear emergency preparedness and response plan simulation model (EPRM). A prototyping technique was required to formalize the simulation model requirements. Prototyping as systems-requirements validation was carried out to endorse the correctness of the model against the stakeholders' intentions in resolving those organizational shortcomings. We made assumptions for the proposed emergency preparedness and response model (EPRM) through the simulation software. Those assumptions address two kinds of expected mechanisms: the planning and handling of the respective emergency plan, and the management of the hazard involved. The model, called the RANEPF (Radiation and Nuclear Emergency Planning Framework) simulator, demonstrates the prerequisites of emergency response training rather than the intervention principles alone. The demonstrations involved the determination of casualties' absorbed-dose range screening and the coordination of capacity planning for the expected trauma triage. Through user-centred design and a sociotechnical approach, the RANEPF simulator was strategized and simplified, though the problem it addresses remains complex.
