Sample records for simplifying assumptions required

  1. On the coupling of fluid dynamics and electromagnetism at the top of the earth's core

    NASA Technical Reports Server (NTRS)

    Benton, E. R.

    1985-01-01

    A kinematic approach to short-term geomagnetism has recently been based upon pre-Maxwell frozen-flux electromagnetism. A complete dynamic theory requires coupling fluid dynamics to electromagnetism. A geophysically plausible simplifying assumption for the vertical vorticity balance, namely that the vertical Lorentz torque is negligible, is introduced and its consequences are developed. The simplified coupled magnetohydrodynamic system is shown to conserve a variety of magnetic and vorticity flux integrals. These provide constraints on eligible models for the geomagnetic main field, its secular variation, and the horizontal fluid motions at the top of the core, and so permit a number of tests of the underlying assumptions.
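
    For orientation, the frozen-flux constraint invoked here is usually written as the radial induction equation at the top of the core, with magnetic diffusion neglected. This is a standard statement from the frozen-flux literature, supplied for context rather than quoted from the abstract:

      \frac{\partial B_r}{\partial t} + \nabla_H \cdot ( \mathbf{u}_H B_r ) = 0,
      \qquad
      \frac{d}{dt} \int_{S(t)} B_r \, dS = 0

    Here B_r is the radial field, u_H the horizontal flow at the top of the core, and S(t) any material patch bounded by a null-flux curve (B_r = 0); the conserved magnetic flux integrals mentioned in the abstract are of this second kind.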

  2. Preliminary methodology to assess the national and regional impact of U.S. wind energy development on birds and bats

    USGS Publications Warehouse

    Diffendorfer, James E.; Beston, Julie A.; Merrill, Matthew D.; Stanton, Jessica C.; Corum, Margo D.; Loss, Scott R.; Thogmartin, Wayne E.; Johnson, Douglas H.; Erickson, Richard A.; Heist, Kevin W.

    2015-01-01

    Components of the methodology are based on simplifying assumptions and require information that, for many species, may be sparse or unreliable. These assumptions are presented in the report and should be carefully considered when using output from the methodology. In addition, this methodology can be used to recommend species for more intensive demographic modeling or highlight those species that may not require any additional protection because effects of wind energy development on their populations are projected to be small.

  3. Novel Discretization Schemes for the Numerical Simulation of Membrane Dynamics

    DTIC Science & Technology

    2012-09-13

    Experimental data therefore plays a key role in validation. A wide variety of methods for building a simulation that meets the listed requirements are...Despite the intrinsic nonlinearity of true membranes, simplifying assumptions may be appropriate for some applications. Based on these possible assumptions...particles determines the kinetic energy of the system. Mass lumping at the particles is intrinsic (the consistent mass treatment of FEM is not an
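
    As an aside on the mass-lumping remark in this snippet: row-sum lumping replaces the consistent FEM mass matrix with a diagonal one, concentrating mass at the nodes (particles). A minimal sketch, illustrative only and not taken from the report:

      import numpy as np

      # Consistent mass matrix of one linear 1D element, proportional to
      # (rho*A*L/6) * [[2, 1], [1, 2]] -- the textbook FEM result.
      M_consistent = np.array([[2.0, 1.0],
                               [1.0, 2.0]])
      # Row-sum lumping: all mass sits at the particles, so the kinetic
      # energy decouples into per-particle terms.
      M_lumped = np.diag(M_consistent.sum(axis=1))
      print(M_lumped)  # [[3. 0.], [0. 3.]] -- total mass is preserved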

  4. An approach to quantifying the efficiency of a Bayesian filter

    USDA-ARS?s Scientific Manuscript database

    Data assimilation is defined as the Bayesian conditioning of uncertain model simulations on observations for the purpose of reducing uncertainty about model states. Practical data assimilation applications require that simplifying assumptions be made about the prior and posterior state distributions...
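
    To make those simplifying assumptions concrete: the usual choice is that prior and posterior are Gaussian, which collapses Bayesian conditioning to the Kalman update. A minimal scalar sketch in Python (illustrative only, not the manuscript's filter):

      # Prior N(x_b, P); observation y = x + eps with eps ~ N(0, R).
      def gaussian_update(x_b, P, y, R):
          K = P / (P + R)            # Kalman gain
          x_a = x_b + K * (y - x_b)  # posterior (analysis) mean
          P_a = (1.0 - K) * P        # posterior variance
          return x_a, P_a

      x_a, P_a = gaussian_update(x_b=0.30, P=0.04, y=0.42, R=0.01)
      print(x_a, P_a)  # mean pulled toward the observation; uncertainty reduced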

  5. The 3D dynamics of the Cosserat rod as applied to continuum robotics

    NASA Astrophysics Data System (ADS)

    Jones, Charles Rees

    2011-12-01

    In the effort to simulate the biologically inspired continuum robot's dynamic capabilities, researchers have been faced with the daunting task of simulating---in real-time---the complete three-dimensional dynamics of the "beam-like" structure, which includes the three "stiff" degrees of freedom of transverse and dilational shear. Researchers have therefore traditionally limited the difficulty of the problem with simplifying assumptions. This study, however, puts forward a solution which makes no simplifying assumptions and trades off only the real-time requirement of the desired solution. The solution is a Finite Difference Time Domain method employing an explicit single-step method with cheap right-hand sides. The cheap right-hand sides are the result of an ingenious formulation of the classical beam, called the Cosserat rod by, first, the Cosserat brothers and, later, Stuart S. Antman, which results in five nonlinear but uncoupled equations that require only multiplication and addition. The method is therefore suitable for hardware implementation, thus moving the real-time requirement from a software solution to a hardware solution.
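
    The "explicit single-step method with cheap right-hand sides" amounts to a time-marching loop of the following shape. The rod equations themselves are not reproduced in the abstract, so the right-hand side below is a hypothetical placeholder that, like the formulation described, uses only multiplication and addition:

      import numpy as np

      def rhs(u, c=np.array([0.5, -0.2, 0.1, 0.3, -0.4])):
          # Hypothetical stand-in: multiply-add only, hence hardware-friendly.
          return c * u + 0.01 * u * u

      u = np.ones(5)           # five state variables, echoing the five equations
      dt = 1.0e-4
      for _ in range(1000):    # explicit single-step (forward Euler) marching
          u = u + dt * rhs(u)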

  6. Critical assessment of inverse gas chromatography as means of assessing surface free energy and acid-base interaction of pharmaceutical powders.

    PubMed

    Telko, Martin J; Hickey, Anthony J

    2007-10-01

    Inverse gas chromatography (IGC) has been employed as a research tool for decades. Despite this record of use and proven utility in a variety of applications, the technique is not routinely used in pharmaceutical research, while in other fields it has flourished. IGC is experimentally relatively straightforward, but analysis requires that certain theoretical assumptions be satisfied. The assumptions made to acquire some of the recently reported data are somewhat modified compared to initial reports. Most publications in the pharmaceutical literature have made use of a simplified equation for the determination of acid/base surface properties, resulting in parameter values that are inconsistent with prior methods. In comparing the surface properties of different batches of alpha-lactose monohydrate, new data have been generated and compared with the literature to allow critical analysis of the theoretical assumptions and their importance to the interpretation of the data. The commonly used (simplified) approach was compared with the more rigorous approach originally outlined in the surface chemistry literature. (c) 2007 Wiley-Liss, Inc.

  7. 26 CFR 1.417(a)(3)-1 - Required explanation of qualified joint and survivor annuity and qualified preretirement survivor...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... grouping rules of paragraph (c)(2)(iii) of this section. Separate charts are provided for ages 55, 60, and...) Simplified presentations permitted—(A) Grouping of certain optional forms. Two or more optional forms of... starting date, a reasonable assumption for the age of the participant's spouse, or, in the case of a...

  8. Risk-Screening Environmental Indicators (RSEI)

    EPA Pesticide Factsheets

    EPA's Risk-Screening Environmental Indicators (RSEI) is a geographically-based model that helps policy makers and communities explore data on releases of toxic substances from industrial facilities reporting to EPA's Toxics Release Inventory (TRI). By analyzing TRI information together with simplified risk factors, such as the amount of chemical released, its fate and transport through the environment, each chemical's relative toxicity, and the number of people potentially exposed, RSEI calculates a numeric score, which is designed to be compared only to other scores calculated by RSEI. Because it is designed as a screening-level model, RSEI uses worst-case assumptions about toxicity and potential exposure where data are lacking, and also uses simplifying assumptions to reduce the complexity of the calculations. A more refined assessment is required before any conclusions about health impacts can be drawn. RSEI is used to establish priorities for further investigation and to look at changes in potential impacts over time. Users can save resources by conducting preliminary analyses with RSEI.

  9. A simplified gross thrust computing technique for an afterburning turbofan engine

    NASA Technical Reports Server (NTRS)

    Hamer, M. J.; Kurtenbach, F. J.

    1978-01-01

    A simplified gross thrust computing technique extended to the F100-PW-100 afterburning turbofan engine is described. The technique uses measured total and static pressures in the engine tailpipe and ambient static pressure to compute gross thrust. Empirically evaluated calibration factors account for three-dimensional effects, the effects of friction and mass transfer, and the effects of simplifying assumptions for solving the equations. Instrumentation requirements and the sensitivity of computed thrust to transducer errors are presented. NASA altitude facility tests on F100 engines (computed thrust versus measured thrust) are presented, and calibration factors obtained on one engine are shown to be applicable to the second engine by comparing the computed gross thrust. It is concluded that this thrust method is potentially suitable for flight test application and engine maintenance on production engines with a minimum amount of instrumentation.
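
    A sketch of the underlying one-dimensional gas dynamics, using generic isentropic relations rather than the F100 calibration: the measured total-to-static pressure ratio gives the tailpipe Mach number, from which an ideal gross thrust follows; the empirically evaluated calibration factors mentioned above would then multiply this ideal result. All numbers below are assumed for illustration:

      gamma = 1.3                              # hot exhaust gas (assumed)
      A = 0.5                                  # tailpipe flow area, m^2 (assumed)
      p_t, p_s, p_amb = 250e3, 180e3, 101e3    # total, static, ambient pressure, Pa

      # Isentropic relation: p_t/p_s = (1 + (gamma-1)/2 * M^2)^(gamma/(gamma-1))
      M2 = (2.0 / (gamma - 1.0)) * ((p_t / p_s) ** ((gamma - 1.0) / gamma) - 1.0)
      # Momentum plus pressure thrust: F = A * (p_s * (1 + gamma*M^2) - p_amb)
      F_ideal = A * (p_s * (1.0 + gamma * M2) - p_amb)
      print(M2 ** 0.5, F_ideal)  # F_gross = K * F_ideal with empirical factor K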

  10. Experimental Methodology for Measuring Combustion and Injection-Coupled Responses

    NASA Technical Reports Server (NTRS)

    Cavitt, Ryan C.; Frederick, Robert A.; Bazarov, Vladimir G.

    2006-01-01

    A Russian scaling methodology for liquid rocket engines utilizing a single, full scale element is reviewed. The scaling methodology exploits the supercritical phase of the full scale propellants to simplify scaling requirements. Many assumptions are utilized in the derivation of the scaling criteria. A test apparatus design is presented to implement the Russian methodology and consequently verify the assumptions. This test apparatus will allow researchers to assess the usefulness of the scaling procedures and possibly enhance the methodology. A matrix of the apparatus capabilities for an RD-170 injector is also presented. Several methods to enhance the methodology have been generated through the design process.

  11. Quantum State Tomography via Reduced Density Matrices.

    PubMed

    Xin, Tao; Lu, Dawei; Klassen, Joel; Yu, Nengkun; Ji, Zhengfeng; Chen, Jianxin; Ma, Xian; Long, Guilu; Zeng, Bei; Laflamme, Raymond

    2017-01-13

    Quantum state tomography via local measurements is an efficient tool for characterizing quantum states. However, it requires that the original global state be uniquely determined (UD) by its local reduced density matrices (RDMs). In this work, we demonstrate for the first time a class of states that are UD by their RDMs under the assumption that the global state is pure, but fail to be UD in the absence of that assumption. This discovery allows us to classify quantum states according to their UD properties, with the requirement that each class be treated distinctly in the practice of simplifying quantum state tomography. Additionally, we experimentally test the feasibility and stability of performing quantum state tomography via the measurement of local RDMs for each class. These theoretical and experimental results demonstrate the advantages and possible pitfalls of quantum state tomography with local measurements.
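
    For readers unfamiliar with RDMs: a reduced density matrix is a partial trace of the global state. A minimal two-qubit sketch (generic, not the paper's experiment), which also shows why the pure-state assumption matters:

      import numpy as np

      # Global pure state |psi> = (|00> + |11>)/sqrt(2); rho = |psi><psi|.
      psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
      rho = np.outer(psi, psi.conj())

      # One-qubit RDM: reshape to (a, b, a', b') and trace out the second qubit.
      rho4 = rho.reshape(2, 2, 2, 2)
      rho_0 = np.einsum('abcb->ac', rho4)
      print(rho_0)  # maximally mixed: this RDM alone does not pin down the state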

  12. The influence of computational assumptions on analysing abdominal aortic aneurysm haemodynamics.

    PubMed

    Ene, Florentina; Delassus, Patrick; Morris, Liam

    2014-08-01

    The variation in computational assumptions for analysing abdominal aortic aneurysm haemodynamics can influence the desired output results and computational cost. Such assumptions for abdominal aortic aneurysm modelling include static/transient pressures, steady/transient flows and rigid/compliant walls. Six computational methods and these various assumptions were simulated and compared within a realistic abdominal aortic aneurysm model with and without intraluminal thrombus. A full transient fluid-structure interaction was required to analyse the flow patterns within the compliant abdominal aortic aneurysms models. Rigid wall computational fluid dynamics overestimates the velocity magnitude by as much as 40%-65% and the wall shear stress by 30%-50%. These differences were attributed to the deforming walls which reduced the outlet volumetric flow rate for the transient fluid-structure interaction during the majority of the systolic phase. Static finite element analysis accurately approximates the deformations and von Mises stresses when compared with transient fluid-structure interaction. Simplifying the modelling complexity reduces the computational cost significantly. In conclusion, the deformation and von Mises stress can be approximately found by static finite element analysis, while for compliant models a full transient fluid-structure interaction analysis is required for acquiring the fluid flow phenomenon. © IMechE 2014.

  13. Data Transmission Signal Design and Analysis

    NASA Technical Reports Server (NTRS)

    Moore, J. D.

    1972-01-01

    The error performances of several digital signaling methods are determined as a function of a specified signal-to-noise ratio. Results are obtained for Gaussian noise and impulse noise. Performance of a receiver for differentially encoded biphase signaling is obtained by extending the results for differential phase shift keying. The analysis presented obtains a closed-form answer through the use of some simplifying assumptions. The results give insight into the analysis problem; however, the actual error performance may show a degradation because of the assumptions made in the analysis. Bipolar signaling decision-threshold selection is also investigated. The optimum threshold depends on the signal-to-noise ratio and requires the use of an adaptive receiver.
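
    For reference, the classical closed-form result that such analyses extend is the DPSK bit error probability in additive white Gaussian noise (a textbook expression, quoted for orientation):

      P_b = \tfrac{1}{2} \exp\!\left( -\frac{E_b}{N_0} \right)

    where E_b is the energy per bit and N_0 the one-sided noise spectral density; the simplifying assumptions mentioned in the abstract are what allow the differentially encoded biphase case to be reduced to a form like this.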

  14. An evaluation of complementary relationship assumptions

    NASA Astrophysics Data System (ADS)

    Pettijohn, J. C.; Salvucci, G. D.

    2004-12-01

    Complementary relationship (CR) models, based on Bouchet's (1963) somewhat heuristic CR hypothesis, are advantageous in their sole reliance on readily available climatological data. While Bouchet's CR hypothesis requires a number of questionable assumptions, CR models have been evaluated on variable time and length scales with relative success. Bouchet's hypothesis is grounded on the assumption that a change in potential evapotranspiration (Ep) is equal and opposite in sign to a change in actual evapotranspiration (Ea), i.e., -dEp / dEa = 1. In his mathematical rationalization of the CR, Morton (1965) similarly assumes that a change in potential sensible heat flux (Hp) is equal and opposite in sign to a change in actual sensible heat flux (Ha), i.e., -dHp / dHa = 1. CR models have maintained these assumptions while focusing on defining Ep and equilibrium evapotranspiration (Epo). We question Bouchet's and Morton's aforementioned assumptions by revisiting the CR derivation in light of a proposed variable, φ = -dEp/dEa. We evaluate φ in a simplified Monin-Obukhov surface similarity framework and demonstrate how previous error in the application of CR models may be explained in part by the previous assumption that φ = 1. Finally, we discuss the various time and length scales at which φ may be evaluated.
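
    In symbols, the assumption under question and one way to integrate its generalization (restating the abstract's notation; the constant-φ integration is supplied here for illustration):

      \varphi \equiv -\frac{dE_p}{dE_a}, \qquad
      \varphi = 1 \;\Rightarrow\; E_p + E_a = 2E_{po}, \qquad
      \varphi = \mathrm{const} \;\Rightarrow\; E_p + \varphi E_a = (1 + \varphi) E_{po}

    with the wet-environment limit E_a = E_p = E_po as the integration condition; Bouchet's classical complementary relationship is the φ = 1 case.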

  15. Statistical Issues for Calculating Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Matney, Mark; Bacon, John

    2016-01-01

    A number of statistical tools have been developed over the years for assessing the risk of reentering objects to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. This information, combined with information on the expected ground path of the reentry, is used to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of this analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and on how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper builds on previous IAASS work to re-examine many of these theoretical assumptions, including the mathematical basis for the hazard calculations, and to outline the conditions under which the simplifying assumptions hold. This study also employs empirical and theoretical information to test these assumptions, and makes recommendations on how to improve the accuracy of these calculations in the future.
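
    The mathematical basis referred to is typically the casualty-expectation calculation; in its simplest form, with the usual simplifying assumptions of independent impacts and locally uniform population density (a standard formulation in the reentry-risk literature, not quoted from this paper):

      E_c = \sum_i \rho_i A_{c,i}, \qquad
      P(\text{one or more casualties}) = 1 - e^{-E_c}

    where ρ_i is the population density under the footprint of surviving fragment i and A_{c,i} its casualty area; the Poisson form of the second expression is itself one of the assumptions this kind of study re-examines.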

  16. Perfect gas effects in compressible rapid distortion theory

    NASA Technical Reports Server (NTRS)

    Kerschen, E. J.; Myers, M. R.

    1987-01-01

    The governing equations presented for small amplitude unsteady disturbances imposed on steady, compressible mean flows that are two-dimensional and nearly uniform have their basis in the perfect gas equations of state, and therefore generalize previous results based on tangent gas theory. While these equations are more complex, this complexity is required for adequate treatment of high frequency disturbances, especially when the base flow Mach number is large; under such circumstances, the simplifying assumptions of tangent gas theory are not applicable.

  17. Operationally efficient propulsion system study (OEPSS) data book. Volume 10; Air Augmented Rocket Afterburning

    NASA Technical Reports Server (NTRS)

    Farhangi, Shahram; Trent, Donnie (Editor)

    1992-01-01

    A study was directed towards assessing the viability and effectiveness of an air-augmented ejector/rocket. Successful thrust augmentation could potentially reduce a multi-stage vehicle to a single stage-to-orbit (SSTO) vehicle and, thereby, eliminate the associated ground support facility infrastructure and ground processing required by the eliminated stage. The results of this preliminary study indicate that an air-augmented ejector/rocket propulsion system is viable. However, uncertainties resulting from the simplified approach and assumptions must be resolved by further investigations.

  18. Analyses of School Commuting Data for Exposure Modeling Purposes

    EPA Science Inventory

    Human exposure models often make the simplifying assumption that school children attend school in the same Census tract where they live. This paper analyzes that assumption and provides information on the temporal and spatial distributions associated with school commuting. The d...

  19. Investigations in a Simplified Bracketed Grid Approach to Metrical Structure

    ERIC Educational Resources Information Center

    Liu, Patrick Pei

    2010-01-01

    In this dissertation, I examine the fundamental mechanisms and assumptions of the Simplified Bracketed Grid Theory (Idsardi 1992) in two ways: first, by comparing it with Parametric Metrical Theory (Hayes 1995), and second, by implementing it in the analysis of several case studies in stress assignment and syllabification. Throughout these…

  20. Stirling Engine External Heat System Design with Heat Pipe Heater.

    DTIC Science & Technology

    1986-07-01

    Figure 10. However, the evaporator analysis is greatly simplified by making the conservative assumption of constant heat flux. This assumption results in...number Cold Start Data: ROM = density of the metal, gr/cm3; CAPM = specific heat of the metal, cal/(gr K); ETHG = effective gauze thickness: the

  1. Combined inverse-forward artificial neural networks for fast and accurate estimation of the diffusion coefficients of cartilage based on multi-physics models.

    PubMed

    Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A

    2016-09-06

    Analytical and numerical methods have been used to extract essential engineering parameters such as elastic modulus, Poisson's ratio, permeability and diffusion coefficient from experimental data in various types of biological tissues. The major limitation associated with analytical techniques is that they are often only applicable to problems with simplified assumptions. Numerical multi-physics methods, on the other hand, enable minimizing the simplified assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs), which enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficient of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variations during the training process. Determining the required level of stochastic variation is performed by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined together, forward-inverse ANNs enable computationally inexperienced users to obtain accurate and fast estimation of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined using direct scanning of the parameter space as the optimization approach. It has been shown that both approaches yield comparable results. Copyright © 2016 Elsevier Ltd. All rights reserved.
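
    A toy sketch of the inverse-forward pairing on a synthetic problem. The exponential saturation curve below is a hypothetical stand-in for the paper's multi-zone biphasic-solute finite-bath model, used only to make the sketch self-contained:

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      t = np.linspace(0.1, 10.0, 20)

      def forward_model(D):                     # stand-in for the physics model
          return 1.0 - np.exp(-np.outer(D, t))  # concentration-time curves

      D_train = rng.uniform(0.05, 1.0, 500)
      C_train = forward_model(D_train)
      C_train += rng.normal(0.0, 0.01, C_train.shape)  # stochastic variation

      inverse_ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000)
      inverse_ann.fit(C_train, D_train)                 # curve -> coefficient
      forward_ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000)
      forward_ann.fit(D_train.reshape(-1, 1), C_train)  # coefficient -> curve

      D_hat = inverse_ann.predict(forward_model(np.array([0.3])))
      C_hat = forward_ann.predict(D_hat.reshape(-1, 1))
      print(D_hat, np.abs(C_hat - forward_model(D_hat)).max())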

  2. iGen: An automated generator of simplified models with provable error bounds.

    NASA Astrophysics Data System (ADS)

    Tang, D.; Dobbie, S.

    2009-04-01

    Climate models employ various simplifying assumptions and parameterisations in order to increase execution speed. However, in order to draw conclusions about the Earth's climate from the results of a climate simulation, it is necessary to have information about the error that these assumptions and parameterisations introduce. A novel computer program, called iGen, is being developed which automatically generates fast, simplified models by analysing the source code of a slower, high resolution model. The resulting simplified models have provable bounds on error compared to the high resolution model and execute at speeds that are typically orders of magnitude faster. iGen's input is a definition of the prognostic variables of the simplified model, a set of bounds on acceptable error, and the source code of a model that captures the behaviour of interest. In the case of an atmospheric model, for example, this would be a global cloud resolving model with very high resolution. Although such a model would execute far too slowly to be used directly in a climate model, iGen never executes it. Instead, it converts the code of the resolving model into a mathematical expression which is then symbolically manipulated and approximated to form a simplified expression. This expression is then converted back into a computer program and output as a simplified model. iGen also derives and reports formal bounds on the error of the simplified model compared to the resolving model. These error bounds are always maintained below the user-specified acceptable error. Results will be presented illustrating the success of iGen's analysis of a number of example models. These extremely encouraging results have led to work, currently underway, to analyse a cloud resolving model and so produce an efficient parameterisation of moist convection with formally bounded error.

  3. Regional and longitudinal estimation of product lifespan distribution: a case study for automobiles and a simplified estimation method.

    PubMed

    Oguchi, Masahiro; Fuse, Masaaki

    2015-02-03

    Product lifespan estimates are important information for understanding progress toward sustainable consumption and for estimating the stocks and end-of-life flows of products. Publications have reported actual lifespans of products; however, quantitative data are still limited for many countries and years. This study presents a regional and longitudinal estimation of the lifespan distribution of consumer durables, taking passenger cars as an example, and proposes a simplified method for estimating product lifespan distribution. We estimated lifespan distribution parameters for 17 countries based on the age profile of in-use cars. Sensitivity analysis demonstrated that the shape parameter of the lifespan distribution can be replaced by a constant value for all the countries and years. This enabled a simplified estimation that does not require detailed data on the age profile. Applying the simplified method, we estimated the trend in average lifespans of passenger cars from 2000 to 2009 for 20 countries. Average lifespan differed greatly between countries (9-23 years) and was increasing in many countries. This suggests that consumer behavior differs greatly among countries and has changed over time, even in developed countries. The results suggest that inappropriate assumptions of average lifespan may cause significant inaccuracy in estimating the stocks and end-of-life flows of products.
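
    A sketch of the simplification described: hold the Weibull shape parameter fixed and fit only the scale, so no detailed age-profile model is needed. The shape value and the synthetic data below are assumed for illustration, not taken from the paper:

      import numpy as np
      from scipy.stats import weibull_min
      from scipy.special import gamma

      k = 2.5                      # shape held constant across countries (assumed)
      rng = np.random.default_rng(1)
      lifespans = 14.0 * rng.weibull(k, 5000)   # synthetic lifespan sample, years

      # Fit only the scale; shape and location are held fixed.
      _, _, lam = weibull_min.fit(lifespans, fc=k, floc=0.0)
      mean_lifespan = lam * gamma(1.0 + 1.0 / k)  # Weibull mean
      print(lam, mean_lifespan)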

  4. Pendulum Motion and Differential Equations

    ERIC Educational Resources Information Center

    Reid, Thomas F.; King, Stephen C.

    2009-01-01

    A common example of real-world motion that can be modeled by a differential equation, and one easily understood by the student, is the simple pendulum. Simplifying assumptions are necessary for closed-form solutions to exist, and frequently there is little discussion of the impact if those assumptions are not met. This article presents a…
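
    In that spirit, a short sketch contrasting the full pendulum equation with its small-angle simplification (generic physics, not drawn from the article):

      import numpy as np
      from scipy.integrate import solve_ivp

      g_over_L = 9.81 / 1.0    # g/L for a 1 m pendulum
      theta0 = np.pi / 3       # 60 degrees: large enough to strain the assumption

      def full(t, y):    return [y[1], -g_over_L * np.sin(y[0])]  # nonlinear
      def linear(t, y):  return [y[1], -g_over_L * y[0]]          # sin(x) ~ x

      t_eval = np.linspace(0.0, 10.0, 500)
      sol_f = solve_ivp(full,   (0.0, 10.0), [theta0, 0.0], t_eval=t_eval)
      sol_l = solve_ivp(linear, (0.0, 10.0), [theta0, 0.0], t_eval=t_eval)
      print(np.max(np.abs(sol_f.y[0] - sol_l.y[0])))  # impact of the assumption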

  5. 4SM: A Novel Self-Calibrated Algebraic Ratio Method for Satellite-Derived Bathymetry and Water Column Correction

    PubMed Central

    Morel, Yann G.; Favoretto, Fabio

    2017-01-01

    All empirical water column correction methods have consistently been reported to require existing depth sounding data for the purpose of calibrating a simple depth retrieval model; they yield poor results over very bright or very dark bottoms. In contrast, we set out to (i) use only the relative radiance data in the image along with published data, and several new assumptions; (ii) in order to specify and operate the simplified radiative transfer equation (RTE); (iii) for the purpose of retrieving both the satellite derived bathymetry (SDB) and the water column corrected spectral reflectance over shallow seabeds. Sea truth regressions show that SDB depths retrieved by the method only need tide correction. Therefore it shall be demonstrated that, under such new assumptions, there is no need for (i) formal atmospheric correction; (ii) conversion of relative radiance into calibrated reflectance; or (iii) existing depth sounding data, to specify the simplified RTE and produce both SDB and spectral water column corrected radiance ready for bottom typing. Moreover, the use of the panchromatic band for that purpose is introduced. Altogether, we named this process the Self-Calibrated Supervised Spectral Shallow-sea Modeler (4SM). This approach requires a trained practitioner, though, to produce its results within hours of downloading the raw image. The ideal raw image should be a “near-nadir” view, exhibit homogeneous atmosphere and water column, include some coverage of optically deep waters and bare land, and lend itself to quality removal of haze, atmospheric adjacency effect, and sun/sky glint. PMID:28754028
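
    The "simplified radiative transfer equation" for optically shallow water is usually written as a two-flow exponential; a standard form from the water-column-correction literature, given here for orientation rather than taken from this paper:

      L(\lambda) = L_\infty(\lambda) + \left[ L_b(\lambda) - L_\infty(\lambda) \right] e^{-2 K(\lambda) z}

    where L_b is the bottom radiance, L_∞ the optically deep-water radiance, K(λ) the diffuse attenuation coefficient, and z the depth; ratio methods such as the one described invert expressions of this form for z (the SDB) and for the bottom term.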

  6. 4SM: A Novel Self-Calibrated Algebraic Ratio Method for Satellite-Derived Bathymetry and Water Column Correction.

    PubMed

    Morel, Yann G; Favoretto, Fabio

    2017-07-21

    All empirical water column correction methods have consistently been reported to require existing depth sounding data for the purpose of calibrating a simple depth retrieval model; they yield poor results over very bright or very dark bottoms. In contrast, we set out to (i) use only the relative radiance data in the image along with published data, and several new assumptions; (ii) in order to specify and operate the simplified radiative transfer equation (RTE); (iii) for the purpose of retrieving both the satellite derived bathymetry (SDB) and the water column corrected spectral reflectance over shallow seabeds. Sea truth regressions show that SDB depths retrieved by the method only need tide correction. Therefore it shall be demonstrated that, under such new assumptions, there is no need for (i) formal atmospheric correction; (ii) conversion of relative radiance into calibrated reflectance; or (iii) existing depth sounding data, to specify the simplified RTE and produce both SDB and spectral water column corrected radiance ready for bottom typing. Moreover, the use of the panchromatic band for that purpose is introduced. Altogether, we named this process the Self-Calibrated Supervised Spectral Shallow-sea Modeler (4SM). This approach requires a trained practitioner, though, to produce its results within hours of downloading the raw image. The ideal raw image should be a "near-nadir" view, exhibit homogeneous atmosphere and water column, include some coverage of optically deep waters and bare land, and lend itself to quality removal of haze, atmospheric adjacency effect, and sun/sky glint.

  7. Data reduction of room tests for zone model validation

    Treesearch

    M. Janssens; H. C. Tran

    1992-01-01

    Compartment fire zone models are based on many simplifying assumptions, in particular that gases stratify in two distinct layers. Because of these assumptions, certain model output is in a form unsuitable for direct comparison to measurements made in full-scale room tests. The experimental data must first be reduced and transformed to be compatible with the model...

  8. From puddles to planet: modeling approaches to vector-borne diseases at varying resolution and scale.

    PubMed

    Eckhoff, Philip A; Bever, Caitlin A; Gerardin, Jaline; Wenger, Edward A; Smith, David L

    2015-08-01

    Since the original Ross-Macdonald formulations of vector-borne disease transmission, there has been a broad proliferation of mathematical models of vector-borne disease, but many of these models retain most to all of the simplifying assumptions of the original formulations. Recently, there has been a new expansion of mathematical frameworks that contain explicit representations of the vector life cycle including aquatic stages, multiple vector species, host heterogeneity in biting rate, realistic vector feeding behavior, and spatial heterogeneity. In particular, there are now multiple frameworks for spatially explicit dynamics with movements of vector, host, or both. These frameworks are flexible and powerful, but require additional data to take advantage of these features. For a given question posed, utilizing a range of models with varying complexity and assumptions can provide a deeper understanding of the answers derived from models. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
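
    The original Ross-Macdonald formulation referred to compresses transmission into a basic reproduction number; the classical expression is reproduced here for context:

      R_0 = \frac{m a^2 b c\, e^{-\mu n}}{r \mu}

    with m the mosquito-to-human ratio, a the biting rate, b and c the transmission efficiencies, μ the mosquito mortality rate, n the extrinsic incubation period, and r the human recovery rate; each factor embodies a homogeneity assumption that the newer frameworks described above relax.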

  9. Spacelab experiment computer study. Volume 1: Executive summary (presentation)

    NASA Technical Reports Server (NTRS)

    Lewis, J. L.; Hodges, B. C.; Christy, J. O.

    1976-01-01

    A quantitative cost for various Spacelab flight hardware configurations is provided along with varied software development options. A cost analysis of Spacelab computer hardware and software is presented. The cost study is discussed based on utilization of a central experiment computer with optional auxiliary equipment. Groundrules and assumptions used in deriving the costing methods for all options in the Spacelab experiment study are presented. The groundrules and assumptions are analysed, and the options, along with their cost considerations, are discussed. It is concluded that the Spacelab program cost for software development and maintenance is independent of experimental hardware and software options, that the distributed standard computer concept simplifies software integration without a significant increase in cost, and that decisions on flight computer hardware configurations should not be made until payload selection for a given mission and a detailed analysis of the mission requirements are completed.

  10. Guidelines and Metrics for Assessing Space System Cost Estimates

    DTIC Science & Technology

    2008-01-01

    analysis time, reuse tooling, models, mechanical ground-support equipment [MGSE]) High mass margin (simplifying assumptions used to bound solution...engineering environment changes High reuse of architecture, design, tools, code, test scripts, and commercial real-time operating systems Simplified life...Coronal Explorer TWTA traveling wave tube amplifier USAF U.S. Air Force USCM Unmanned Space Vehicle Cost Model USN U.S. Navy UV ultraviolet UVOT UV

  11. Some Basic Aspects of Magnetohydrodynamic Boundary-Layer Flows

    NASA Technical Reports Server (NTRS)

    Hess, Robert V.

    1959-01-01

    An appraisal is made of existing solutions of magnetohydrodynamic boundary-layer equations for stagnation flow and flat-plate flow, and some new solutions are given. Since an exact solution of the equations of magnetohydrodynamics requires complicated simultaneous treatment of the equations of fluid flow and of electromagnetism, certain simplifying assumptions are generally introduced. The full implications of these assumptions have not been brought out properly in several recent papers. It is shown in the present report that for the particular law of deformation which the magnetic lines are assumed to follow in these papers a magnet situated inside the missile nose would not be able to take up any drag forces; to do so it would have to be placed in the flow away from the nose. It is also shown that for the assumption that potential flow is maintained outside the boundary layer, the deformation of the magnetic lines is restricted to small values. The literature contains serious disagreements with regard to reductions in heat-transfer rates due to magnetic action at the nose of a missile, and these disagreements are shown to be mainly due to different interpretations of reentry conditions rather than more complicated effects. In the present paper the magnetohydrodynamic boundary-layer equation is also expressed in a simple form that is especially convenient for physical interpretation. This is done by adapting methods to magnetic forces which in the past have been used for forces due to gravitational or centrifugal action. The simplified approach is used to develop some new solutions of boundary-layer flow and to reinterpret certain solutions existing in the literature. An asymptotic boundary-layer solution representing a fixed velocity profile and shear is found. Special emphasis is put on estimating skin friction and heat-transfer rates.

  12. Verification of a Byzantine-Fault-Tolerant Self-stabilizing Protocol for Clock Synchronization

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2008-01-01

    This paper presents the mechanical verification of a simplified model of a rapid Byzantine-fault-tolerant self-stabilizing protocol for distributed clock synchronization systems. This protocol does not rely on any assumptions about the initial state of the system except for the presence of sufficient good nodes, thus making the weakest possible assumptions and producing the strongest results. This protocol tolerates bursts of transient failures, and deterministically converges within a time bound that is a linear function of the self-stabilization period. A simplified model of the protocol is verified using the Symbolic Model Verifier (SMV). The system under study consists of 4 nodes, where at most one of the nodes is assumed to be Byzantine faulty. The model checking effort is focused on verifying correctness of the simplified model of the protocol in the presence of a permanent Byzantine fault as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period. Although model checking results of the simplified model of the protocol confirm the theoretical predictions, these results do not necessarily confirm that the protocol solves the general case of this problem. Modeling challenges of the protocol and the system are addressed. A number of abstractions are utilized in order to reduce the state space.

  13. Simplified subsurface modelling: data assimilation and violated model assumptions

    NASA Astrophysics Data System (ADS)

    Erdal, Daniel; Lange, Natascha; Neuweiler, Insa

    2017-04-01

    Integrated models are gaining more and more attention in hydrological modelling as they can better represent the interaction between different compartments. Naturally, these models come with larger numbers of unknowns and greater computational requirements than stand-alone models. If large model domains are to be represented, e.g. on catchment scale, the resolution of the numerical grid needs to be reduced or the model itself needs to be simplified. Both approaches lead to a reduced ability to reproduce the present processes. This lack of model accuracy may be compensated for by using data assimilation methods. In these methods observations are used to update the model states, and optionally model parameters as well, in order to reduce the model error induced by the imposed simplifications. What is unclear is whether these methods combined with strongly simplified models result in completely data-driven models, or whether they can even be used to make adequate predictions of the model state for times when no observations are available. In the current work we consider the combined groundwater and unsaturated zone, which can be modelled in a physically consistent way using 3D models solving the Richards equation. For use in simple predictions, however, simpler approaches may be considered. The question investigated here is whether a simpler model, in which the groundwater is modelled as a horizontal 2D model and the unsaturated zone as a few sparse 1D columns, can be used within an Ensemble Kalman filter to give predictions of groundwater levels and unsaturated fluxes. This is tested under conditions where the feedback between the two model compartments is large (e.g. a shallow groundwater table) and the simplifying assumptions are clearly violated. Such a case may be a steep hill-slope or pumping wells, creating lateral fluxes in the unsaturated zone, or strongly heterogeneous structures creating unaccounted-for flows in both the saturated and unsaturated compartments. Under such circumstances, direct modelling using a simplified model will not provide good results. However, a more data-driven (e.g. grey-box) approach, driven by the filter, may still provide an improved understanding of the system. Comparisons between full 3D simulations and simplified filter-driven models will be shown, and the resulting benefits and drawbacks will be discussed.
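
    A minimal sketch of the stochastic Ensemble Kalman filter update used in this kind of study (generic EnKF algebra with perturbed observations, not the authors' code; the state and the numbers are assumed):

      import numpy as np

      rng = np.random.default_rng(0)

      def enkf_update(X, y, H, R):
          """X: ensemble (n_state, n_ens); y: obs vector; H: obs operator; R: obs cov."""
          n_ens = X.shape[1]
          Xp = X - X.mean(axis=1, keepdims=True)
          HX = H @ X
          HXp = HX - HX.mean(axis=1, keepdims=True)
          P_xy = Xp @ HXp.T / (n_ens - 1)        # state-observation covariance
          P_yy = HXp @ HXp.T / (n_ens - 1) + R   # observation covariance
          K = P_xy @ np.linalg.inv(P_yy)         # Kalman gain
          Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
          return X + K @ (Y - HX)                # updated (analysis) ensemble

      X = rng.normal(5.0, 1.0, (3, 50))   # e.g. heads at 3 locations, 50 members
      H = np.array([[1.0, 0.0, 0.0]])     # observe the first location only
      R = np.array([[0.04]])
      X_a = enkf_update(X, np.array([5.8]), H, R)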

  14. Polymer flammability

    DOT National Transportation Integrated Search

    2005-05-01

    This report provides an overview of polymer flammability from a material science perspective and describes currently accepted test methods to quantify burning behavior. Simplifying assumptions about the gas and condensed phase processes of flaming co...

  15. Differential molar heat capacities to test ideal solubility estimations.

    PubMed

    Neau, S H; Bhandarkar, S V; Hellmuth, E W

    1997-05-01

    Calculation of the ideal solubility of a crystalline solute in a liquid solvent requires knowledge of the difference in the molar heat capacity at constant pressure of the solid and the supercooled liquid forms of the solute, delta Cp. Since this parameter is not usually known, two assumptions have been used to simplify the expression. The first is that delta Cp can be considered equal to zero; the alternate assumption is that the molar entropy of fusion, delta Sf, is an estimate of delta Cp. Reports claiming the superiority of one assumption over the other, on the basis of calculations done using experimentally determined parameters, have appeared in the literature. The validity of the assumptions in predicting the ideal solubility of five structurally unrelated compounds of pharmaceutical interest, with melting points in the range 420 to 470 K, was evaluated in this study. Solid and liquid heat capacities of each compound near its melting point were determined using differential scanning calorimetry. Linear equations describing the heat capacities were extrapolated to the melting point to generate the differential molar heat capacity. Linear data were obtained for both crystal and liquid heat capacities of sample and test compounds. For each sample, ideal solubility at 298 K was calculated and compared to the two estimates generated using literature equations based on the differential molar heat capacity assumptions. For the compounds studied, delta Cp was not negligible and was closer to delta Sf than to zero. However, neither of the two assumptions was valid for accurately estimating the ideal solubility as given by the full equation.
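
    For reference, the full ideal-solubility expression and the two simplifications under test can be written as follows (standard solution thermodynamics, stated here for clarity):

      \ln x_{\mathrm{ideal}} = -\frac{\Delta H_f}{R}\left( \frac{1}{T} - \frac{1}{T_m} \right)
        + \frac{\Delta C_p}{R}\left( \frac{T_m}{T} - 1 - \ln\frac{T_m}{T} \right)

    Setting ΔCp = 0 retains only the first (van't Hoff) term, while setting ΔCp = ΔSf = ΔHf/Tm collapses the whole expression to ln x = -(ΔSf/R) ln(Tm/T). The study asks which, if either, of these limits is adequate at 298 K for solutes melting at 420-470 K.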

  16. Ambient mass density effects on the International Space Station (ISS) microgravity experiments

    NASA Technical Reports Server (NTRS)

    Smith, O. E.; Adelfang, S. I.; Smith, R. E.

    1996-01-01

    The Marshall engineering thermosphere model was specified by NASA to be used in the design, development and testing phases of the International Space Station (ISS). The mass density is the atmospheric parameter which most affects the ISS. Under simplifying assumptions, the critical ambient neutral density required to produce one micro-g on the ISS is estimated using an atmospheric drag acceleration equation. Examples are presented for the critical density versus altitude, and for the critical density that is exceeded at least once a month and once per orbit during periods of low and high solar activity. An analysis of the ISS orbital decay is presented.
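
    Under the stated simplifying assumptions, the relation behind the estimate is direct (standard drag algebra; symbols as usually defined, not quoted from the report):

      a_{\mathrm{drag}} = \frac{1}{2} \rho v^2 \frac{C_D A}{m}, \qquad
      a_{\mathrm{drag}} = 10^{-6} g_0 \;\Rightarrow\;
      \rho_{\mathrm{crit}} = \frac{2 \times 10^{-6} g_0}{(C_D A / m)\, v^2}

    where ρ is the ambient neutral density, v the orbital speed, C_D A/m the vehicle's ballistic parameter, and g_0 = 9.81 m/s²; the critical density thus scales inversely with v² and with the drag area per unit mass.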

  17. International Conference on the Methods of Aerophysical Research 98 "ICMAR 98". Proceedings, Part 1

    DTIC Science & Technology

    1998-01-01

    pumping air through the device and air drying due to vapour condensation on cooled surfaces...In this report, approximate estimates are presented...picture is used for the flow field between disks and for water vapor condensation on cooled moving surfaces. Shown in Fig. 1 is a simplified flow...frequency of disks rotation), thus breaking away from channel walls. Regarding the condensation process, a number of the usual simplifying assumptions is made

  18. Prototyping and validating requirements of radiation and nuclear emergency plan simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamid, AHA., E-mail: amyhamijah@nm.gov.my; Faculty of Computing, Universiti Teknologi Malaysia; Rozan, MZA.

    2015-04-29

    Organizational incapability in developing radiological and nuclear emergency preparedness and response plans (EPR) produces unrealistic, impractical, inadequate and ambiguous mechanisms, causing emergency plan disorder and severe disasters. These situations result in large part (65.6%) from poorly defined and unidentified roles and duties of the disaster coordinator. Such unexpected conditions bring severe consequences for first responders, operators, workers, patients and the community at large. Hence, in this report, we discuss the prototyping and validation of the requirements of the Malaysian radiation and nuclear emergency preparedness and response plan simulation model (EPRM). A prototyping technique was required to formalize the simulation model requirements. Prototyping as systems-requirements validation was carried out to endorse the correctness of the model itself against the stakeholders' intentions in resolving that organizational incapability. We have made assumptions for the proposed emergency preparedness and response model (EPRM) through the simulation software. Those assumptions provided a twofold view of the expected mechanisms, covering the planning and handling of the respective emergency plan as well as the management of the hazard involved. The model, called the RANEPF (Radiation and Nuclear Emergency Planning Framework) simulator, demonstrates the training emergency response prerequisites rather than the intervention principles alone. The demonstrations involved the determination of the casualties' absorbed-dose range screening and the coordination of the capacity planning of the expected trauma triage. Through user-centred design and a sociotechnical approach, the RANEPF simulator was strategized and simplified, though certainly it is equally complex.

  19. Prototyping and validating requirements of radiation and nuclear emergency plan simulator

    NASA Astrophysics Data System (ADS)

    Hamid, AHA.; Rozan, MZA.; Ibrahim, R.; Deris, S.; Selamat, A.

    2015-04-01

    Organizational incapability in developing radiological and nuclear emergency preparedness and response plans (EPR) produces unrealistic, impractical, inadequate and ambiguous mechanisms, causing emergency plan disorder and severe disasters. These situations result in large part (65.6%) from poorly defined and unidentified roles and duties of the disaster coordinator. Such unexpected conditions bring severe consequences for first responders, operators, workers, patients and the community at large. Hence, in this report, we discuss the prototyping and validation of the requirements of the Malaysian radiation and nuclear emergency preparedness and response plan simulation model (EPRM). A prototyping technique was required to formalize the simulation model requirements. Prototyping as systems-requirements validation was carried out to endorse the correctness of the model itself against the stakeholders' intentions in resolving that organizational incapability. We have made assumptions for the proposed emergency preparedness and response model (EPRM) through the simulation software. Those assumptions provided a twofold view of the expected mechanisms, covering the planning and handling of the respective emergency plan as well as the management of the hazard involved. The model, called the RANEPF (Radiation and Nuclear Emergency Planning Framework) simulator, demonstrates the training emergency response prerequisites rather than the intervention principles alone. The demonstrations involved the determination of the casualties' absorbed-dose range screening and the coordination of the capacity planning of the expected trauma triage. Through user-centred design and a sociotechnical approach, the RANEPF simulator was strategized and simplified, though certainly it is equally complex.

  20. Mathematical Model for a Simplified Calculation of the Input Momentum Coefficient for AFC Purposes

    NASA Astrophysics Data System (ADS)

    Hirsch, Damian; Gharib, Morteza

    2016-11-01

    Active Flow Control (AFC) is an emerging technology which aims at enhancing the aerodynamic performance of flight vehicles (i.e., to save fuel). A viable AFC system must consider the limited resources available on a plane for attaining performance goals. A higher performance goal (i.e., airplane incremental lift) demands a higher input fluidic requirement (i.e., mass flow rate). Therefore, the key requirement for a successful and practical design is to minimize power input while maximizing performance to achieve design targets. One of the most commonly used design parameters is the input momentum coefficient Cμ. The difficulty associated with Cμ lies in obtaining the parameters for its calculation. In the literature two main approaches can be found, which both have their own disadvantages (assumptions, difficult measurements). A new, much simpler calculation approach will be presented that is based on a mathematical model that can be applied to most jet designs (i.e., steady or sweeping jets). The model-incorporated assumptions will be justified theoretically as well as experimentally. Furthermore, the model's capabilities are exploited to give new insight into the AFC technology and its physical limitations. Supported by Boeing.
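
    For orientation, the conventional definition whose inputs are difficult to obtain is (standard AFC usage; the paper's contribution is a model that avoids measuring the jet-exit quantities directly):

      C_\mu = \frac{\dot{m}\, V_j}{q_\infty S_{\mathrm{ref}}}
            = \frac{\dot{m}\, V_j}{\tfrac{1}{2} \rho_\infty U_\infty^2 S_{\mathrm{ref}}}

    where ṁ is the injected mass flow rate, V_j the jet exit velocity, q∞ the freestream dynamic pressure, and S_ref the reference area.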

  1. Multi-Objective Hybrid Optimal Control for Multiple-Flyby Interplanetary Mission Design using Chemical Propulsion

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Vavrina, Matthew A.

    2015-01-01

    Preliminary design of high-thrust interplanetary missions is a highly complex process. The mission designer must choose discrete parameters such as the number of flybys and the bodies at which those flybys are performed. For some missions, such as surveys of small bodies, the mission designer also contributes to target selection. In addition, real-valued decision variables, such as launch epoch, flight times, maneuver and flyby epochs, and flyby altitudes must be chosen. There are often many thousands of possible trajectories to be evaluated. The customer who commissions a trajectory design is not usually interested in a point solution, but rather the exploration of the trade space of trajectories between several different objective functions. This can be a very expensive process in terms of the number of human analyst hours required. An automated approach is therefore very desirable. This work presents such an approach by posing the impulsive mission design problem as a multi-objective hybrid optimal control problem. The method is demonstrated on several real-world problems. Two assumptions are frequently made to simplify the modeling of an interplanetary high-thrust trajectory during the preliminary design phase. The first assumption is that because the available thrust is high, any maneuvers performed by the spacecraft can be modeled as discrete changes in velocity. This assumption removes the need to integrate the equations of motion governing the motion of a spacecraft under thrust and allows the change in velocity to be modeled as an impulse and the expenditure of propellant to be modeled using the time-independent solution to Tsiolkovsky's rocket equation [1]. The second assumption is that the spacecraft moves primarily under the influence of the central body, i.e. the sun, and all other perturbing forces may be neglected in preliminary design. The path of the spacecraft may then be modeled as a series of conic sections. When a spacecraft performs a close approach to a planet, the central body switches from the sun to that planet and the trajectory is modeled as a hyperbola with respect to the planet. This is known as the method of patched conics. The impulsive and patched-conic assumptions significantly simplify the preliminary design problem.
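
    The impulsive assumption lets each maneuver be costed with Tsiolkovsky's rocket equation alone, with no trajectory integration. A minimal sketch of that bookkeeping, with assumed masses, impulses, and specific impulse:

      import math

      g0 = 9.80665   # standard gravity, m/s^2

      def propellant_used(m0, dv, isp):
          """Propellant mass for an impulsive burn of dv (m/s) at Isp (s)."""
          return m0 * (1.0 - math.exp(-dv / (isp * g0)))

      m = 2000.0                         # initial spacecraft mass, kg (assumed)
      for dv in (750.0, 120.0, 430.0):   # impulses along the trajectory (assumed)
          burned = propellant_used(m, dv, isp=320.0)
          m -= burned
          print(f"dv={dv:6.1f} m/s  propellant={burned:6.1f} kg  mass={m:7.1f} kg")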

  2. Longitudinal stability in relation to the use of an automatic pilot

    NASA Technical Reports Server (NTRS)

    Klemin, Alexander; Pepper, Perry A; Wittner, Howard A

    1938-01-01

    The effect of restraint in pitching introduced by an automatic pilot upon the longitudinal stability of an airplane has been studied. Customary simplifying assumptions have been made in setting down the equations of motion, and the results of computations based on the simplified equations are presented to show the effect of an automatic pilot installed in an airplane of known dimensions and characteristics. The equations developed have been applied by making calculations for a Clark biplane and a Fairchild 22 monoplane.

  3. Simplified analysis of a generalized bias test for fabrics with two families of inextensible fibres

    NASA Astrophysics Data System (ADS)

    Cuomo, M.; dell'Isola, F.; Greco, L.

    2016-06-01

    Two tests for woven fabrics with orthogonal fibres are examined using simplified kinematic assumptions. The aim is to analyse how different constitutive assumptions may affect the response of the specimen. The fibres are considered inextensible, and the kinematics of 2D continua with inextensible chords due to Rivlin is adopted. In addition to two forms of strain energy depending on the shear deformation, two forms of energy depending on the gradient of shear are also examined. It is shown that this energy can account for the bending of the fibres. In addition to the standard bias extension test, a modified test has been examined in which the head of the specimen is rotated rather than translated. In this case more bending occurs, so that the results of the simulations carried out with the different energy models differ more than what has been found for the BE test.

  4. Lagrangian methods for blood damage estimation in cardiovascular devices--How numerical implementation affects the results.

    PubMed

    Marom, Gil; Bluestein, Danny

    2016-01-01

    This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful considerations should be taken in the use of Lagrangian models. Ultimately, the appropriate assumptions should be considered based on the physics of the specific case, and sensitivity analysis, similar to the ones presented here, should be employed.
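
    A sketch of the Lagrangian bookkeeping being compared: linear stress accumulation along a pathline, in both single-passage and repeated-passage form. The accumulation rule and all numbers are generic illustrations, not the paper's model:

      import numpy as np

      # Scalar shear stress (Pa) sampled along one pathline, and step sizes (s).
      tau = np.array([1.2, 3.5, 8.0, 6.1, 2.4])
      dt = np.array([1e-3, 1e-3, 2e-3, 2e-3, 1e-3])

      sa_single = np.sum(tau * dt)   # single-passage stress accumulation
      n_passages = 50                # repeated-passage assumption scales exposure
      sa_repeated = n_passages * sa_single
      print(sa_single, sa_repeated)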

  5. Measuring Spatial Infiltration in Stormwater Control Measures: Results and Implications

    EPA Science Inventory

    This presentation will provide background information on research conducted by EPA-ORD on the use of soil moisture sensors in bioretention/bioinfiltration technologies to evaluate infiltration mechanisms and compares monitoring results to simplified modeling assumptions. A serie...

  6. Quantifying and Disaggregating Consumer Purchasing Behavior for Energy Systems Modeling

    EPA Science Inventory

    Consumer behaviors such as energy conservation, adoption of more efficient technologies, and fuel switching represent significant potential for greenhouse gas mitigation. Current efforts to model future energy outcomes have tended to use simplified economic assumptions ...

  7. Temperature Histories in Ceramic-Insulated Heat-Sink Nozzle

    NASA Technical Reports Server (NTRS)

    Ciepluch, Carl C.

    1960-01-01

    Temperature histories were calculated for a composite nozzle wall by a simplified numerical integration calculation procedure. These calculations indicated that there is a unique ratio of insulation and metal heat-sink thickness that will minimize total wall thickness for a given operating condition and required running time. The optimum insulation and metal thickness will vary throughout the nozzle as a result of the variation in heat-transfer rate. The use of low chamber pressure results in a significant increase in the maximum running time of a given weight nozzle. Experimentally measured wall temperatures were lower than those calculated. This was due in part to the assumption of one-dimensional or slab heat flow in the calculation procedure.
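
    The "simplified numerical integration" for slab heat flow reduces to an explicit finite-difference update of the one-dimensional heat equation; a generic sketch with assumed material values, not the report's procedure:

      import numpy as np

      alpha = 1.0e-5         # thermal diffusivity, m^2/s (assumed)
      dx, dt = 1.0e-3, 0.02  # grid and time step; dt < dx^2/(2*alpha) for stability
      q_flux = 2.0e6         # hot-gas-side heat flux, W/m^2 (assumed)
      k = 30.0               # thermal conductivity, W/(m K) (assumed)

      T = np.full(40, 300.0)    # initial wall temperature, K
      for _ in range(500):      # march the slab conduction equation in time
          T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
          T[0] = T[1] + q_flux * dx / k   # imposed heat flux at the heated face
          T[-1] = T[-2]                   # adiabatic (zero-gradient) back face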

  8. CMG-Augmented Control of a Hovering VTOL Platform

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Moerder, D. D.

    2007-01-01

    This paper describes how Control Moment Gyroscopes (CMGs) can be used for stability augmentation of a thrust-vectoring system for a generic Vertical Take-Off and Landing platform. The response characteristics of the platform using only thrust vectoring, and of a second configuration that includes a single-gimbal CMG array, are simulated and compared for hovering flight subject to severe air turbulence. Simulation results demonstrate the effectiveness of a CMG array in significantly reducing the agility requirement on the thrust-vectoring system. Although it rests on simplifying physical assumptions about a generic CMG configuration, the numerical study also suggests that reasonably sized CMGs will likely be sufficient for a small hovering vehicle.

  9. Relating color working memory and color perception.

    PubMed

    Allred, Sarah R; Flombaum, Jonathan I

    2014-11-01

    Color is the most frequently studied feature in visual working memory (VWM). Oddly, much of this work de-emphasizes perception, instead making simplifying assumptions about the inputs served to memory. We question these assumptions in light of perception research, and we identify important points of contact between perception and working memory in the case of color. Better characterization of its perceptual inputs will be crucial for elucidating the structure and function of VWM. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. A mathematics for medicine: The Network Effect

    PubMed Central

    West, Bruce J.

    2014-01-01

    The theory of medicine and its complement systems biology are intended to explain the workings of the large number of mutually interdependent complex physiologic networks in the human body and to apply that understanding to maintaining the functions for which nature designed them. Therefore, when what had originally been made as a simplifying assumption or a working hypothesis becomes foundational to understanding the operation of physiologic networks it is in the best interests of science to replace or at least update that assumption. The replacement process requires, among other things, an evaluation of how the new hypothesis affects modern day understanding of medical science. This paper identifies linear dynamics and Normal statistics as being such arcane assumptions and explores some implications of their retirement. Specifically we explore replacing Normal with fractal statistics and examine how the latter are related to non-linear dynamics and chaos theory. The observed ubiquity of inverse power laws in physiology entails the need for a new calculus, one that describes the dynamics of fractional phenomena and captures the fractal properties of the statistics of physiological time series. We identify these properties as a necessary consequence of the complexity resulting from the network dynamics and refer to them collectively as The Network Effect. PMID:25538622

  11. Search algorithm complexity modeling with application to image alignment and matching

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen

    2014-05-01

    Search algorithm complexity modeling, in the form of penetration rate estimation, provides a useful way to estimate search efficiency in application domains which involve searching over a hypothesis space of reference templates or models, as in model-based object recognition, automatic target recognition, and biometric recognition. The penetration rate quantifies the expected portion of the database that must be searched, and is useful for estimating search algorithm computational requirements. In this paper we perform mathematical modeling to derive general equations for penetration rate estimates that are applicable to a wide range of recognition problems. We extend previous penetration rate analyses to use more general probabilistic modeling assumptions. In particular we provide penetration rate equations within the framework of a model-based image alignment application domain in which a prioritized hierarchical grid search is used to rank subspace bins based on matching probability. We derive general equations, and provide special cases based on simplifying assumptions. We show how previously-derived penetration rate equations are special cases of the general formulation. We apply the analysis to model-based logo image alignment in which a hierarchical grid search is used over a geometric misalignment transform hypothesis space. We present numerical results validating the modeling assumptions and derived formulation.
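
    The central quantity can be illustrated with a short, generic sketch (not the paper's derivation): if the correct hypothesis falls in bin i with probability p_i and bins are searched in order of decreasing probability density, the expected penetration rate is the probability-weighted cumulative fraction of the database searched.

        import numpy as np

        def penetration_rate(p, w=None):
            """Expected fraction of the hypothesis space searched to a match.

            p : probability that the match lies in each bin (sums to 1)
            w : fraction of the database each bin occupies (default: uniform)
            """
            p = np.asarray(p, float)
            w = np.full(p.size, 1.0 / p.size) if w is None else np.asarray(w, float)
            order = np.argsort(-p / w)         # rank bins by probability density
            cum = np.cumsum(w[order])          # fraction searched through bin k
            return float(np.sum(p[order] * cum))

    A uniform, unranked search of N equal bins gives the familiar (N + 1)/(2N), approximately one half, while a sharply peaked ranking drives the rate toward the size of the best bin.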

  12. INTERNAL DOSE AND RESPONSE IN REAL-TIME.

    EPA Science Inventory

    Abstract: Rapid temporal fluctuations in exposure may occur in a number of situations such as accidents or other unexpected acute releases of airborne substances. Often risk assessments overlook temporal exposure patterns under simplifying assumptions such as the use of time-wei...

  13. Impact buckling of thin bars in the elastic range for any end condition

    NASA Technical Reports Server (NTRS)

    Taub, Josef

    1934-01-01

    Following a qualitative discussion of the complicated process that occurs when a short-period longitudinal force is applied to an initially not-quite-straight bar, the actual process is replaced by an idealized one for the purpose of analytical treatment. The simplifications are: the assumption of an infinitely high rate of propagation of elastic longitudinal waves in the bar; limitation to slender bars; disregard of material damping and of rotatory inertia; the assumption of consistently small elastic deformations; the assumption of cross-sectional dimensions constant along the bar axis; the assumption of a shock load constant in time; and the assumption of eccentricities in one plane. Then follow the mathematical principles for solving the differential equation of the simplified problem, particularly the expansion of arbitrary functions with continuous first and second and piecewise-continuous third and fourth derivatives into a convergent series in the natural functions (eigenfunctions) of the homogeneous differential equation.

  14. Lagrangian methods for blood damage estimation in cardiovascular devices - How numerical implementation affects the results

    PubMed Central

    Marom, Gil; Bluestein, Danny

    2016-01-01

    This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be given to the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and sensitivity analyses, similar to the ones presented here, should be employed. PMID:26679833
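
    As a concrete illustration of the accumulation step, the sketch below implements one widely used power-law damage form with Giersiepen-type constants as commonly quoted in the literature; they are reproduced here for illustration only, and the paper's point is precisely that implementation choices (seeding, passages, averaging) change what this sum returns.

        import numpy as np

        def damage_index(tau, dt, C=3.62e-7, a=2.416, b=0.785):
            """Accumulate a power-law blood damage index along one pathline.

            tau : scalar stress history sampled along the trajectory [Pa]
            dt  : sampling interval [s]
            """
            tau = np.asarray(tau, float)
            t = dt * np.arange(1, tau.size + 1)           # time along pathline
            dtb = np.diff(np.concatenate(([0.0], t**b)))  # increments of t^b
            return float(np.sum(C * tau**a * dtb))

    Repeated-passage accumulation corresponds to summing this index over successive transits of the device, one of the post-processing options compared in the paper.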

  15. Simplifying the complexity of resistance heterogeneity in metastasis

    PubMed Central

    Lavi, Orit; Greene, James M.; Levy, Doron; Gottesman, Michael M.

    2014-01-01

    The main goal of treatment regimens for metastasis is to control growth rates, not eradicate all cancer cells. Mathematical models offer methodologies that incorporate high-throughput data with dynamic effects on net growth. The ideal approach would simplify, but not over-simplify, a complex problem into meaningful and manageable estimators that predict a patient’s response to specific treatments. Here, we explore three fundamental approaches with different assumptions concerning resistance mechanisms, in which the cells are categorized into either discrete compartments or described by a continuous range of resistance levels. We argue in favor of modeling resistance as a continuum and demonstrate how integrating cellular growth rates, density-dependent versus exponential growth, and intratumoral heterogeneity improves predictions concerning the resistance heterogeneity of metastases. PMID:24491979

  16. Exact Solution of the Gyration Radius of an Individual's Trajectory for a Simplified Human Regular Mobility Model

    NASA Astrophysics Data System (ADS)

    Yan, Xiao-Yong; Han, Xiao-Pu; Zhou, Tao; Wang, Bing-Hong

    2011-12-01

    We propose a simplified human regular mobility model to simulate an individual's daily travel with three sequential activities: commuting to the workplace, going out for leisure activities, and returning home. With the assumptions that the individual travels at a constant speed and spends at least a minimum amount of time at home and at work, we prove that the daily moving area of an individual is an ellipse, and finally obtain an exact solution for the gyration radius. The analytical solution captures the empirical observation well.
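
    For comparison with such a closed-form result, the empirical gyration radius of any sampled trajectory is computed directly as the root-mean-square distance from the trajectory's center of mass; the coordinates below are purely illustrative.

        import numpy as np

        def gyration_radius(points):
            """Radius of gyration of a discrete trajectory of (x, y) points."""
            r = np.asarray(points, float)
            return float(np.sqrt(((r - r.mean(axis=0)) ** 2).sum(axis=1).mean()))

        # A three-anchor day: home -> work -> leisure -> home (km, illustrative).
        day = [(0.0, 0.0), (8.0, 0.0), (5.0, 4.0), (0.0, 0.0)]
        print(gyration_radius(day))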

  17. A Methodology for Developing Army Acquisition Strategies for an Uncertain Future

    DTIC Science & Technology

    2007-01-01

    Acronyms: ABP, Assumption-Based Planning; ACEIT, Automated Cost Estimating Integrated Tool; ACR, Armored Cavalry Regiment; ACTD, … For example, the authors employ the Automated Cost Estimating Integrated Tool (ACEIT) to simplify life-cycle cost estimates; other tools are…

  18. MODELING NITROGEN-CARBON CYCLING AND OXYGEN CONSUMPTION IN BOTTOM SEDIMENTS

    EPA Science Inventory

    A model framework is presented for simulating nitrogen and carbon cycling at the sediment–water interface, and predicting oxygen consumption by oxidation reactions inside the sediments. Based on conservation of mass and invoking simplifying assumptions, a coupled system of diffus...

  19. Improved parameter inference in catchment models: 1. Evaluating parameter uncertainty

    NASA Astrophysics Data System (ADS)

    Kuczera, George

    1983-10-01

    A Bayesian methodology is developed to evaluate parameter uncertainty in catchment models fitted to a hydrologic response such as runoff, the goal being to improve the chance of successful regionalization. The catchment model is posed as a nonlinear regression model with stochastic errors possibly being both autocorrelated and heteroscedastic. The end result of this methodology, which may use Box-Cox power transformations and ARMA error models, is the posterior distribution, which summarizes what is known about the catchment model parameters. This can be simplified to a multivariate normal provided a linearization in parameter space is acceptable; means of checking and improving this assumption are discussed. The posterior standard deviations give a direct measure of parameter uncertainty, and study of the posterior correlation matrix can indicate what kinds of data are required to improve the precision of poorly determined parameters. Finally, a case study involving a nine-parameter catchment model fitted to monthly runoff and soil moisture data is presented. It is shown that use of ordinary least squares when its underlying error assumptions are violated gives an erroneous description of parameter uncertainty.
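
    The linearized posterior summary described above has a compact generic form: fit by nonlinear least squares, then approximate the posterior covariance as sigma^2 (J^T J)^(-1) from the Jacobian at the optimum. The two-parameter recession model below is a toy stand-in for the paper's nine-parameter catchment model.

        import numpy as np
        from scipy.optimize import least_squares

        def residuals(theta, t, q_obs):
            a, k = theta
            return q_obs - a * np.exp(-k * t)      # toy storage-recession model

        t = np.linspace(0.0, 30.0, 60)
        rng = np.random.default_rng(1)
        q_obs = 5.0 * np.exp(-0.2 * t) + rng.normal(0.0, 0.1, t.size)

        fit = least_squares(residuals, x0=[1.0, 0.1], args=(t, q_obs))
        sigma2 = 2.0 * fit.cost / (t.size - fit.x.size)    # residual variance
        cov = sigma2 * np.linalg.inv(fit.jac.T @ fit.jac)  # posterior covariance
        sd = np.sqrt(np.diag(cov))                         # parameter std. devs.
        corr = cov / np.outer(sd, sd)                      # posterior correlations

    Large off-diagonal entries in corr indicate poorly separable parameters, which is exactly the diagnostic the abstract says can guide what additional data to collect.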

  20. Reviewed approach to defining the Active Interlock Envelope for Front End ray tracing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seletskiy, S.; Shaftan, T.

    To protect the NSLS-II Storage Ring (SR) components from damage from synchrotron radiation produced by insertion devices (IDs), the Active Interlock (AI) keeps the electron beam within a safe envelope (a.k.a. the Active Interlock Envelope, or AIE) in the transverse phase space. The beamline Front Ends (FEs) are designed under the assumption that above a certain beam current (typically 2 mA) the ID synchrotron radiation (IDSR) fan is produced by the interlocked e-beam. These assumptions also define how the ray tracing for the FE is done. To simplify the FE ray tracing for a typical uncanted ID, it was decided to provide the Mechanical Engineering group with a single set of numbers (x, x', y, y') for the AIE at the center of the long (or short) ID straight section. Such a unified approach to the design of the beamline Front Ends will accelerate the design process and save valuable human resources. In this paper we describe our new approach to defining the AI envelope and provide the resulting numbers required for design of the typical Front End.

  1. DEVELOPMENT OF A MODEL FOR REAL TIME CO CONCENTRATIONS NEAR ROADWAYS

    EPA Science Inventory

    Although emission standards for mobile sources continue to be tightened, tailpipe emissions in urban areas continue to be a major source of human exposure to air toxics. Current human exposure models using simplified assumptions based on fixed air monitoring stations and region...

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    König, Johannes; Merle, Alexander; Totzauer, Maximilian

    We investigate the early Universe production of sterile neutrino Dark Matter by the decays of singlet scalars. All previous studies applied simplifying assumptions and/or studied the process only on the level of number densities, which makes it impossible to give statements about cosmic structure formation. We overcome these issues by dropping all simplifying assumptions (except for one we showed earlier to work perfectly) and by computing the full course of Dark Matter production on the level of non-thermal momentum distribution functions. We are thus in the position to study a broad range of aspects of the resulting settings and apply a broad set of bounds in a reliable manner. We have a particular focus on how to incorporate bounds from structure formation on the level of the linear power spectrum, since the simplistic estimate using the free-streaming horizon clearly fails for highly non-thermal distributions. Our work comprises the most detailed and comprehensive study of sterile neutrino Dark Matter production by scalar decays presented so far.

  3. Multi-phase CFD modeling of solid sorbent carbon capture system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, E. M.; DeCroix, D.; Breault, R.

    2013-07-01

    Computational fluid dynamics (CFD) simulations are used to investigate a low temperature post-combustion carbon capture reactor. The CFD models are based on a small scale solid sorbent carbon capture reactor design from ADA-ES and Southern Company. The reactor is a fluidized bed design based on a silica-supported amine sorbent. CFD models using both Eulerian–Eulerian and Eulerian–Lagrangian multi-phase modeling methods are developed to investigate the hydrodynamics and adsorption of carbon dioxide in the reactor. Models developed in both FLUENT® and BARRACUDA are presented to explore the strengths and weaknesses of state-of-the-art CFD codes for modeling multi-phase carbon capture reactors. The results of the simulations show that the FLUENT® Eulerian–Lagrangian simulations (DDPM) are unstable for the given reactor design, while the BARRACUDA Eulerian–Lagrangian model is able to simulate the system given appropriate simplifying assumptions. FLUENT® Eulerian–Eulerian simulations also provide a stable solution for the carbon capture reactor given the appropriate simplifying assumptions.

  4. Multi-Phase CFD Modeling of Solid Sorbent Carbon Capture System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, Emily M.; DeCroix, David; Breault, Ronald W.

    2013-07-30

    Computational fluid dynamics (CFD) simulations are used to investigate a low temperature post-combustion carbon capture reactor. The CFD models are based on a small scale solid sorbent carbon capture reactor design from ADA-ES and Southern Company. The reactor is a fluidized bed design based on a silica-supported amine sorbent. CFD models using both Eulerian-Eulerian and Eulerian-Lagrangian multi-phase modeling methods are developed to investigate the hydrodynamics and adsorption of carbon dioxide in the reactor. Models developed in both FLUENT® and BARRACUDA are presented to explore the strengths and weaknesses of state-of-the-art CFD codes for modeling multi-phase carbon capture reactors. The results of the simulations show that the FLUENT® Eulerian-Lagrangian simulations (DDPM) are unstable for the given reactor design, while the BARRACUDA Eulerian-Lagrangian model is able to simulate the system given appropriate simplifying assumptions. FLUENT® Eulerian-Eulerian simulations also provide a stable solution for the carbon capture reactor given the appropriate simplifying assumptions.

  5. Dynamic behaviour of thin composite plates for different boundary conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprintu, Iuliana, E-mail: sprintui@yahoo.com; Rotaru, Constantin, E-mail: rotaruconstantin@yahoo.com

    2014-12-10

    In the context of composite materials technology, which is increasingly present in industry, this article covers a topic of great theoretical and practical importance. Given the complex design of fiber-reinforced materials and their heterogeneous nature, mathematical modeling of the mechanical response under different external stresses is very difficult to address in the absence of simplifying assumptions. In most structural applications, composite structures can be idealized as beams, plates, or shells. The analysis is reduced from a three-dimensional elasticity problem to a one- or two-dimensional problem, based on certain simplifying assumptions that can be made because the structure is thin. This paper aims to validate a mathematical model illustrating how thin rectangular orthotropic plates respond to actual loads. Thus, from the theory of thin plates, new analytical solutions are proposed corresponding to orthotropic rectangular plates having different boundary conditions. The proposed analytical solutions are considered both for solving the governing equations of orthotropic rectangular plates and for modal analysis.

  6. A simplified rotor system mathematical model for piloted flight dynamics simulation

    NASA Technical Reports Server (NTRS)

    Chen, R. T. N.

    1979-01-01

    The model was developed for real-time pilot-in-the-loop investigation of helicopter flying qualities. The mathematical model included the tip-path plane dynamics and several primary rotor design parameters, such as flapping hinge restraint, flapping hinge offset, blade Lock number, and pitch-flap coupling. The model was used in several exploratory studies of the flying qualities of helicopters with a variety of rotor systems. The basic assumptions used and the major steps involved in the development of the set of equations listed are described. The equations consisted of the tip-path plane dynamic equation, the equations for the main rotor forces and moments, and the equation for control phasing required to achieve decoupling in pitch and roll due to cyclic inputs.

  7. Towards a theory of tiered testing.

    PubMed

    Hansson, Sven Ove; Rudén, Christina

    2007-06-01

    Tiered testing is an essential part of any resource-efficient strategy for the toxicity testing of a large number of chemicals, which is required, for instance, in the risk management of general (industrial) chemicals. In spite of this, no general theory seems to be available for the combination of single tests into efficient tiered testing systems. A first outline of such a theory is developed. It is argued that chemical, toxicological, and decision-theoretical knowledge should be combined in the construction of such a theory. A decision-theoretical approach for the optimization of test systems is introduced. It is based on expected utility maximization with simplified assumptions covering factual and value-related information that is usually missing in the development of test systems.

  8. Impact of unseen assumptions on communication of atmospheric carbon mitigation options

    NASA Astrophysics Data System (ADS)

    Elliot, T. R.; Celia, M. A.; Court, B.

    2010-12-01

    With the rapid access and dissemination of information made available through online and digital pathways, there is a need for concurrent openness and transparency in the communication of scientific investigation. Even with open communication, it is essential that the scientific community continue to provide impartial, result-driven information. An unknown factor in climate literacy is the influence of an impartial presentation of scientific investigation that has utilized biased base assumptions. A formal publication appendix, and additional digital material, provides active investigators a suitable framework and ancillary material to make informed statements weighted by the assumptions made in a study. However, informal media and rapid communiqués rarely make such investigatory attempts, often citing headline or key phrasing within a written work. This presentation is focused on Geologic Carbon Sequestration (GCS) as a proxy for the wider field of climate science communication, wherein we primarily investigate recent publications in the GCS literature that produce scenario outcomes using apparently biased pro- or con-assumptions. A general review of scenario economics and capture process efficacy, and a specific examination of sequestration site assumptions and processes, reveal an apparent misrepresentation of what we consider to be a base-case GCS system. The authors demonstrate the influence of the apparent bias in primary assumptions on results from commonly referenced subsurface hydrology models. By use of moderate semi-analytical model simplification and Monte Carlo analysis of outcomes, we can establish the likely reality of any GCS scenario within a pragmatic middle ground. Secondarily, we review the development of publicly available web-based computational tools and recent workshops where we presented interactive educational opportunities for public and institutional participants, with the goal of base-assumption awareness playing a central role. Through a series of interactive 'what if' scenarios, workshop participants were able to customize the models, which continue to be available from the Princeton University Subsurface Hydrology Research Group, and develop a better comprehension of subsurface factors contributing to GCS. Considering that the models are customizable, a simplified mock-up of regional GCS scenarios can be developed, which provides a possible pathway for informal, industrial, scientific or government communication of GCS concepts and likely scenarios. We believe continued availability, customizable scenarios, and simplifying assumptions are an exemplary means to communicate the possible outcomes of CO2 sequestration projects; the associated risks; and, of no small importance, the consequences of base assumptions on predicted outcomes.

  9. Integrodifferential formulations of the continuous-time random walk for solute transport subject to bimolecular A +B →0 reactions: From micro- to mesoscopic

    NASA Astrophysics Data System (ADS)

    Hansen, Scott K.; Berkowitz, Brian

    2015-03-01

    We develop continuous-time random walk (CTRW) equations governing the transport of two species that annihilate when in proximity to one another. In comparison with the catalytic or spontaneous transformation reactions that have previously been considered in concert with CTRW, both species have spatially variant concentrations that require consideration. We develop two distinct formulations. The first treats transport and reaction microscopically, potentially capturing behavior at sharp fronts, but at the cost of being strongly nonlinear. The second, mesoscopic, formulation relies on a separation-of-scales technique we develop to separate microscopic-scale reaction and upscaled transport. This simplifies the governing equations and allows treatment of more general reaction dynamics, but requires stronger smoothness assumptions on the solution. The mesoscopic formulation is easily tractable using an existing solution from the literature (we also provide an alternative derivation), and the generalized master equation (GME) for particles undergoing A + B → 0 reactions is presented. We show that this GME simplifies, under appropriate circumstances, to both the GME for the unreactive CTRW and to the advection-dispersion-reaction equation. An additional major contribution of this work is on the numerical side: to corroborate our development, we develop an indirect particle-tracking-partial-integro-differential-equation (PIDE) hybrid verification technique that could be widely applicable in reactive anomalous transport. Numerical simulations support the mesoscopic analysis.
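
    A rough particle-level illustration of the microscopic picture (not the paper's formulation) is sketched below: two walker populations draw heavy-tailed Pareto waiting times between jumps, and an A-B pair annihilates when it comes within a reaction radius. All parameter values are arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)
        xA = rng.uniform(0.0, 1.0, 400)             # A walkers start on the left
        xB = rng.uniform(1.0, 2.0, 400)             # B walkers start on the right
        tA, tB = np.zeros(xA.size), np.zeros(xB.size)   # next jump times
        alpha, dt, radius, t = 1.5, 0.05, 0.02, 0.0     # heavy-tail index, etc.

        def jump(x, tnext, now):
            due = tnext <= now                      # walkers whose wait elapsed
            x[due] += rng.normal(0.0, 0.05, due.sum())            # jump lengths
            tnext[due] = now + dt + rng.pareto(alpha, due.sum())  # new waits
            return x, tnext

        while t < 50.0 and xA.size and xB.size:
            t += dt
            xA, tA = jump(xA, tA, t)
            xB, tB = jump(xB, tB, t)
            keepA = np.ones(xA.size, bool)
            keepB = np.ones(xB.size, bool)
            for i in range(xA.size):                # greedy A-B annihilation
                d = np.abs(xB - xA[i]) + 1e9 * ~keepB
                j = int(np.argmin(d))
                if d[j] < radius:
                    keepA[i] = keepB[j] = False
            xA, tA = xA[keepA], tA[keepA]
            xB, tB = xB[keepB], tB[keepB]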

  10. Naïve and Robust: Class-Conditional Independence in Human Classification Learning

    ERIC Educational Resources Information Center

    Jarecki, Jana B.; Meder, Björn; Nelson, Jonathan D.

    2018-01-01

    Humans excel in categorization. Yet from a computational standpoint, learning a novel probabilistic classification task involves severe computational challenges. The present paper investigates one way to address these challenges: assuming class-conditional independence of features. This feature independence assumption simplifies the inference…

  11. Theoretical studies of solar lasers and converters

    NASA Technical Reports Server (NTRS)

    Heinbockel, John H.

    1988-01-01

    The previously constructed one-dimensional model for the simulated operation of an iodine laser assumed that the perfluoroalkyl iodide gas n-C3F7I was incompressible. The present study removes this simplifying assumption and considers n-C3F7I as a compressible fluid.

  12. A simplified analytical solution for thermal response of a one-dimensional, steady state transpiration cooling system in radiative and convective environment

    NASA Technical Reports Server (NTRS)

    Kubota, H.

    1976-01-01

    A simplified analytical method for calculation of thermal response within a transpiration-cooled porous heat shield material in an intense radiative-convective heating environment is presented. The essential assumptions of the radiative and convective transfer processes in the heat shield matrix are the two-temperature approximation and the specified radiative-convective heatings of the front surface. Sample calculations for porous silica with CO2 injection are presented for some typical parameters of mass injection rate, porosity, and material thickness. The effect of these parameters on the cooling system is discussed.

  13. BASEFLOW SEPARATION BASED ON ANALYTICAL SOLUTIONS OF THE BOUSSINESQ EQUATION. (R824995)

    EPA Science Inventory

    Abstract

    A technique for baseflow separation is presented based on similarity solutions of the Boussinesq equation. The method makes use of the simplifying assumptions that a horizontal impermeable layer underlies a Dupuit aquifer which is drained by a fully penetratin...
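
    Although the record is truncated, the recession behavior implied by such similarity solutions is well known (the Brutsaert-Nieber forms, stated here from general knowledge rather than from the record): at late time the Boussinesq aquifer yields dQ/dt = -a Q^(3/2), so baseflow recedes as Q(t) = Q0 / (1 + t/K)^2 with K = 2 / (a sqrt(Q0)). The sketch below, with illustrative parameters, extrapolates that recession beneath storm peaks, which is the essence of separating baseflow from quickflow.

        import numpy as np

        def baseflow(t, Q0=10.0, a=0.05):
            """Late-time Boussinesq recession from discharge Q0 at t = 0."""
            K = 2.0 / (a * np.sqrt(Q0))
            return Q0 / (1.0 + t / K) ** 2

        t = np.linspace(0.0, 30.0, 31)      # days since the recession began
        qb = baseflow(t)                    # extrapolated beneath storm peaks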

  14. Quasi 3D modeling of water flow in vadose zone and groundwater

    USDA-ARS?s Scientific Manuscript database

    The complexity of subsurface flow systems calls for a variety of concepts leading to the multiplicity of simplified flow models. One habitual simplification is based on the assumption that lateral flow and transport in unsaturated zone are not significant unless the capillary fringe is involved. In ...

  15. The Role of Semantic Clustering in Optimal Memory Foraging

    ERIC Educational Resources Information Center

    Montez, Priscilla; Thompson, Graham; Kello, Christopher T.

    2015-01-01

    Recent studies of semantic memory have investigated two theories of optimal search adopted from the animal foraging literature: Lévy flights and marginal value theorem. Each theory makes different simplifying assumptions and addresses different findings in search behaviors. In this study, an experiment is conducted to test whether clustering in…

  16. Scaling the Library Collection; A Simplified Method for Weighing the Variables

    ERIC Educational Resources Information Center

    Vagianos, Louis

    1973-01-01

    On the assumption that the physical properties of any information stock (book, etc.) offer the best foundation on which to develop satisfactory measurements for assessing library operations and developing library procedures, weight is suggested as the most useful variable for assessment and standardization. Advantages of this approach are…

  17. Dualisms in Higher Education: A Critique of Their Influence and Effect

    ERIC Educational Resources Information Center

    Macfarlane, Bruce

    2015-01-01

    Dualisms pervade the language of higher education research providing an over-simplified roadmap to the field. However, the lazy logic of their popular appeal supports the perpetuation of erroneous and often outdated assumptions about the nature of modern higher education. This paper explores nine commonly occurring dualisms:…

  18. A Comprehensive Real-World Distillation Experiment

    ERIC Educational Resources Information Center

    Kazameas, Christos G.; Keller, Kaitlin N.; Luyben, William L.

    2015-01-01

    Most undergraduate mass transfer and separation courses cover the design of distillation columns, and many undergraduate laboratories have distillation experiments. In many cases, the treatment is restricted to simple column configurations and simplifying assumptions are made so as to convey only the basic concepts. In industry, the analysis of a…

  19. Improving inference for aerial surveys of bears: The importance of assumptions and the cost of unnecessary complexity.

    PubMed

    Schmidt, Joshua H; Wilson, Tammy L; Thompson, William L; Reynolds, Joel H

    2017-07-01

    Obtaining useful estimates of wildlife abundance or density requires thoughtful attention to potential sources of bias and precision, and it is widely understood that addressing incomplete detection is critical to appropriate inference. When the underlying assumptions of sampling approaches are violated, both increased bias and reduced precision of the population estimator may result. Bear (Ursus spp.) populations can be difficult to sample and are often monitored using mark-recapture distance sampling (MRDS) methods, although obtaining adequate sample sizes can be cost prohibitive. With the goal of improving inference, we examined the underlying methodological assumptions and estimator efficiency of three datasets collected under an MRDS protocol designed specifically for bears. We analyzed these data using MRDS, conventional distance sampling (CDS), and open-distance sampling approaches to evaluate the apparent bias-precision tradeoff relative to the assumptions inherent under each approach. We also evaluated the incorporation of informative priors on detection parameters within a Bayesian context. We found that the CDS estimator had low apparent bias and was more efficient than the more complex MRDS estimator. When combined with informative priors on the detection process, precision was increased by >50% compared to the MRDS approach with little apparent bias. In addition, open-distance sampling models revealed a serious violation of the assumption that all bears were available to be sampled. Inference is directly related to the underlying assumptions of the survey design and the analytical tools employed. We show that for aerial surveys of bears, avoidance of unnecessary model complexity, use of prior information, and the application of open population models can be used to greatly improve estimator performance and simplify field protocols. Although we focused on distance sampling-based aerial surveys for bears, the general concepts we addressed apply to a variety of wildlife survey contexts.

  20. Modeling Endovascular Coils as Heterogeneous Porous Media

    NASA Astrophysics Data System (ADS)

    Yadollahi Farsani, H.; Herrmann, M.; Chong, B.; Frakes, D.

    2016-12-01

    Minimally invasive surgeries are the state-of-the-art treatments for many pathologies. Treating brain aneurysms is no exception; invasive neurovascular clipping is no longer the only option, and endovascular coiling has become the most common treatment. Coiling isolates the aneurysm from blood circulation by promoting thrombosis within the aneurysm. One approach to studying intra-aneurysmal hemodynamics consists of virtually deploying finite element coil models and then performing computational fluid dynamics. However, this approach is often computationally expensive and requires extensive resources to perform. The porous medium approach has been considered as an alternative to the conventional coil modeling approach because it lessens the complexities of computational fluid dynamics simulations by reducing the number of mesh elements needed to discretize the domain. There have been a limited number of attempts at treating the endovascular coils as homogeneous porous media. However, the heterogeneity associated with coil configurations requires a more accurately defined porous medium in which the porosity and permeability change throughout the domain. We implemented this approach by introducing a lattice of sample volumes and utilizing techniques available in the field of interactive computer graphics. We observed that the introduction of the heterogeneity assumption was associated with significant changes in simulated aneurysmal flow velocities as compared to the homogeneous assumption case. Moreover, as the sample volume size was decreased, the flow velocities approached an asymptotic value, showing the importance of the sample volume size selection. These results demonstrate that the homogeneous assumption for porous media that are inherently heterogeneous can lead to considerable errors. Additionally, this modeling approach allowed us to simulate post-treatment flows without considering the explicit geometry of a deployed endovascular coil mass, greatly simplifying computation.
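
    A hedged sketch of the sample-volume idea follows: estimate a local porosity field on a lattice of cubic sample volumes from points sampled along the coil wire centerline, then map porosity to permeability with a Kozeny-Carman relation. The geometry, cell counts, and constants are placeholders, not the study's values.

        import numpy as np

        def local_porosity(pts, wire_rad, ds, grid_n=10, box=1.0):
            """Porosity per sample volume; pts are centerline points in
            [0, box)^3 sampled every ds of arclength along the coil wire."""
            h = box / grid_n
            vol_per_pt = np.pi * wire_rad**2 * ds     # wire volume per sample
            idx = np.clip((np.asarray(pts) / h).astype(int), 0, grid_n - 1)
            solid = np.zeros((grid_n,) * 3)
            np.add.at(solid, tuple(idx.T), vol_per_pt)
            return np.clip(1.0 - solid / h**3, 0.0, 1.0)

        def kozeny_carman(phi, d=2.5e-4):
            """Permeability [m^2] from porosity for wire diameter d [m]."""
            return d**2 / 180.0 * phi**3 / ((1.0 - phi) ** 2 + 1e-12)

    Refining grid_n plays the role of shrinking the sample volume: per the abstract, the simulated velocities approach an asymptotic value as the cells shrink, which is how an appropriate sample volume size can be chosen.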

  1. A control-volume method for analysis of unsteady thrust augmenting ejector flows

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.

    1988-01-01

    A method for predicting transient thrust augmenting ejector characteristics is presented. The analysis blends classic self-similar turbulent jet descriptions with a control volume discretization of the mixing region to capture transient effects in a new way. Division of the ejector into an inlet, diffuser, and mixing region corresponds with the assumption of viscous-dominated phenomena in the latter. Inlet and diffuser analyses are simplified by a quasi-steady treatment, justified by the assumption that pressure is the forcing function in those regions. Details of the theoretical foundation, the solution algorithm, and sample calculations are given.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bouchard, P.J.

    A forthcoming revision to the R6 Leak-before-Break Assessment Procedure is briefly described. Practical application of the LbB concepts to safety-critical nuclear plant is illustrated by examples covering both low temperature and high temperature (>450 °C) operating regimes. The examples highlight a number of issues which can make the development of a satisfactory LbB case problematic: for example, coping with highly loaded components, methodology assumptions and the definition of margins, the effect of crack closure owing to weld residual stresses, complex thermal stress fields or primary bending fields, the treatment of locally high stresses at crack intersections with free surfaces, the choice of local limit load solution when predicting ligament breakthrough, and the scope of calculations required to support even a simplified LbB case for high temperature steam pipe-work systems.

  3. Model Error Estimation for the CPTEC Eta Model

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; daSilva, Arlindo

    1999-01-01

    Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead-times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.

  4. Structure of thermal pair clouds around gamma-ray-emitting black holes

    NASA Technical Reports Server (NTRS)

    Liang, Edison P.

    1991-01-01

    Using certain simplifying assumptions, the general structure of a quasi-spherical thermal pair-balanced cloud surrounding an accreting black hole is derived from first principles. Pair-dominated hot solutions exist only for a restricted range of the viscosity parameter. These results are applied as examples to the 1979 HEAO 3 gamma-ray data of Cygnus X-1 and the Galactic center. Values are obtained for the viscosity parameter lying in the range of about 0.1-0.01. Since the lack of synchrotron soft photons requires the magnetic field to be typically less than 1 percent of the equipartition value, a magnetic field cannot be the main contributor to the viscous stress of the inner accretion flow, at least during the high gamma-ray states.

  5. Dynamically rich, yet parameter-sparse models for spatial epidemiology. Comment on "Coupled disease-behavior dynamics on complex networks: A review" by Z. Wang et al.

    NASA Astrophysics Data System (ADS)

    Jusup, Marko; Iwami, Shingo; Podobnik, Boris; Stanley, H. Eugene

    2015-12-01

    Since the very inception of mathematical modeling in epidemiology, scientists have exploited the simplicity ingrained in the assumption of a well-mixed population. For example, perhaps the earliest susceptible-infectious-recovered (SIR) model, developed by L. Reed and W.H. Frost in the 1920s [1], included the well-mixed assumption such that any two individuals in the population could meet each other. The problem is that, unlike many other simplifying assumptions used in epidemiological modeling whose validity holds in one situation or the other, well-mixed populations are almost non-existent in reality because the nature of human socio-economic interactions is, for the most part, highly heterogeneous (e.g. [2-6]).
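
    For reference, the well-mixed SIR system being discussed is the following, where a single contact rate beta applies to every pair of individuals; network models replace the product beta*S*I with sums over graph edges.

        import numpy as np
        from scipy.integrate import solve_ivp

        def sir(t, y, beta=0.3, gamma=0.1):
            S, I, R = y
            return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

        # Fractions of the population: S(0) = 0.99, I(0) = 0.01, R(0) = 0.
        sol = solve_ivp(sir, (0.0, 160.0), [0.99, 0.01, 0.0], dense_output=True)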

  6. Quick and Easy Rate Equations for Multistep Reactions

    ERIC Educational Resources Information Center

    Savage, Phillip E.

    2008-01-01

    Students rarely see closed-form analytical rate equations derived from underlying chemical mechanisms that contain more than a few steps unless restrictive simplifying assumptions (e.g., existence of a rate-determining step) are made. Yet, work published decades ago allows closed-form analytical rate equations to be written quickly and easily for…

  7. Data assimilation with soil water content sensors and pedotransfer functions in soil water flow modeling

    USDA-ARS?s Scientific Manuscript database

    Soil water flow models are based on a set of simplified assumptions about the mechanisms, processes, and parameters of water retention and flow. That causes errors in soil water flow model predictions. Soil water content monitoring data can be used to reduce the errors in models. Data assimilation (...

  8. Solubility and Thermodynamics: An Introductory Experiment

    NASA Astrophysics Data System (ADS)

    Silberman, Robert G.

    1996-05-01

    This article describes a laboratory experiment suitable for high school or freshman chemistry students in which the solubility of potassium nitrate is determined at several different temperatures. The data collected are used to calculate the equilibrium constant, delta G, delta H, and delta S for the dissolution reaction. The simplifying assumptions are noted in the article.
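
    A worked version of the analysis, under the usual classroom simplifications (K approximated by the molal solubility product with unit activity coefficients, and delta H constant over the range), looks like this; the solubility values are approximate literature figures for KNO3, used for illustration.

        import numpy as np

        R = 8.314                              # J / (mol K)
        T = np.array([293.0, 313.0, 333.0])    # K
        s = np.array([3.1, 6.1, 10.9])         # mol KNO3 per kg water (approx.)

        K = s * s                              # K ~ [K+][NO3-] = s^2
        dG = -R * T * np.log(K)                # delta G at each temperature
        slope, _ = np.polyfit(1.0 / T, np.log(K), 1)
        dH = -R * slope                        # van 't Hoff: dlnK/d(1/T) = -dH/R
        dS = (dH - dG) / T                     # delta S from G = H - T*S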

  9. SSDA code to apply data assimilation in soil water flow modeling: Documentation and user manual

    USDA-ARS?s Scientific Manuscript database

    Soil water flow models are based on simplified assumptions about the mechanisms, processes, and parameters of water retention and flow. That causes errors in soil water flow model predictions. Data assimilation (DA) with the ensemble Kalman filter (EnKF) corrects modeling results based on measured s...
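
    Independent of the truncated record, the EnKF analysis step it refers to has a standard generic form; the sketch below uses hypothetical shapes and names and a scalar observation-error variance for brevity.

        import numpy as np

        def enkf_update(X, y, H, r, rng):
            """X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observations;
            H: (n_obs, n_state) observation operator; r: observation variance."""
            n_obs, n_ens = y.size, X.shape[1]
            Xp = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
            P = Xp @ Xp.T / (n_ens - 1)                   # sample covariance
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + r * np.eye(n_obs))
            Y = y[:, None] + rng.normal(0.0, np.sqrt(r), (n_obs, n_ens))
            return X + K @ (Y - H @ X)                    # analysis ensemble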

  10. The Signal Importance of Noise

    ERIC Educational Resources Information Center

    Macy, Michael; Tsvetkova, Milena

    2015-01-01

    Noise is widely regarded as a residual category--the unexplained variance in a linear model or the random disturbance of a predictable pattern. Accordingly, formal models often impose the simplifying assumption that the world is noise-free and social dynamics are deterministic. Where noise is assigned causal importance, it is often assumed to be a…

  11. A survey of numerical models for wind prediction

    NASA Technical Reports Server (NTRS)

    Schonfeld, D.

    1980-01-01

    A literature review is presented of the work done in the numerical modeling of wind flows. Pertinent computational techniques are described, as well as the necessary assumptions used to simplify the governing equations. A steady state model is outlined, based on the data obtained at the Deep Space Communications complex at Goldstone, California.

  12. Distinguishing Identical Particles and the Correct Counting of States

    ERIC Educational Resources Information Center

    de la Torre, A. C.; Martin, H. O.

    2009-01-01

    It is shown that quantum systems of identical particles can be treated as different when they are in well-differentiated states. This simplifying assumption allows for the consideration of quantum systems isolated from the rest of the universe and justifies many intuitive statements about identical systems. However, it is shown that this…

  13. Creating Matched Samples Using Exact Matching. Statistical Report 2016-3

    ERIC Educational Resources Information Center

    Godfrey, Kelly E.

    2016-01-01

    By creating and analyzing matched samples, researchers can simplify their analyses to include fewer covariate variables, relying less on model assumptions, and thus generating results that may be easier to report and interpret. When two groups essentially "look" the same, it is easier to explore their differences and make comparisons…

  14. Large Angle Transient Dynamics (LATDYN) user's manual

    NASA Technical Reports Server (NTRS)

    Abrahamson, A. Louis; Chang, Che-Wei; Powell, Michael G.; Wu, Shih-Chin; Bingel, Bradford D.; Theophilos, Paula M.

    1991-01-01

    A computer code for modeling the large angle transient dynamics (LATDYN) of structures was developed to investigate techniques for analyzing flexible deformation and control/structure interaction problems associated with large angular motions of spacecraft. This type of analysis is beyond the routine capability of conventional analytical tools without simplifying assumptions. In some instances, the motion may be sufficiently slow and the spacecraft (or component) sufficiently rigid to simplify analyses of dynamics and controls by making pseudo-static and/or rigid body assumptions. The LATDYN introduces a new approach to the problem by combining finite element structural analysis, multi-body dynamics, and control system analysis in a single tool. It includes a type of finite element that can deform and rotate through large angles at the same time, and which can be connected to other finite elements either rigidly or through mechanical joints. The LATDYN also provides symbolic capabilities for modeling control systems which are interfaced directly with the finite element structural model. Thus, the nonlinear equations representing the structural model are integrated along with the equations representing sensors, processing, and controls as a coupled system.

  15. Compressive properties of passive skeletal muscle-the impact of precise sample geometry on parameter identification in inverse finite element analysis.

    PubMed

    Böl, Markus; Kruse, Roland; Ehret, Alexander E; Leichsenring, Kay; Siebert, Tobias

    2012-10-11

    Due to the increasing developments in modelling of biological material, adequate parameter identification techniques are urgently needed. The majority of recent contributions on passive muscle tissue identify material parameters solely by comparing characteristic compressive stress-stretch curves from experiments and simulation. In doing so, different assumptions concerning, e.g., the sample geometry or the degree of friction between the sample and the platens are required. In most cases these assumptions are grossly simplified, leading to incorrect material parameters. In order to overcome such oversimplifications, in this paper a more reliable parameter identification technique is presented: we use the inverse finite element method (iFEM) to identify the optimal parameter set by comparison of the compressive stress-stretch response, including the realistic geometries of the samples and the presence of friction at the compressed sample faces. Moreover, we judge the quality of the parameter identification by comparing the simulated and experimental deformed shapes of the samples. Besides this, the study includes a comprehensive set of compressive stress-stretch data on rabbit soleus muscle and the determination of static friction coefficients between muscle and PTFE. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Improving estimates of subsurface gas transport in unsaturated fractured media using experimental Xe diffusion data and numerical methods

    NASA Astrophysics Data System (ADS)

    Ortiz, J. P.; Ortega, A. D.; Harp, D. R.; Boukhalfa, H.; Stauffer, P. H.

    2017-12-01

    Gas transport in unsaturated fractured media plays an important role in a variety of applications, including detection of underground nuclear explosions, transport from volatile contaminant plumes, shallow CO2 leakage from carbon sequestration sites, and methane leaks from hydraulic fracturing operations. Gas breakthrough times are highly sensitive to uncertainties associated with a variety of hydrogeologic parameters, including rock type, fracture aperture, matrix permeability, porosity, and saturation. Furthermore, a couple of simplifying assumptions are typically employed when representing fracture flow and transport. Aqueous phase transport is typically considered insignificant compared to gas phase transport in unsaturated fracture flow regimes, and an assumption of instantaneous dissolution/volatilization of radionuclide gas is commonly used to reduce computational expense. We conduct this research using a twofold approach that combines laboratory gas experimentation and numerical modeling to verify and refine these simplifying assumptions in our current models of gas transport. Using a gas diffusion cell, we are able to measure air pressure transmission through fractured tuff core samples while also measuring Xe gas breakthrough with a mass spectrometer. We can thus create synthetic barometric fluctuations akin to those observed in field tests and measure the associated gas flow through the fracture and matrix pore space for varying degrees of fluid saturation. We then attempt to reproduce the experimental results using numerical models in the PFLOTRAN and FEHM codes to better understand the importance of different parameters and assumptions on gas transport. Our numerical approaches represent both single-phase gas flow with immobile water and full multi-phase transport, in order to test the validity of assuming immobile pore water. Our approaches also include the ability to simulate the reaction equilibrium kinetics of dissolution/volatilization in order to identify when the assumption of instantaneous equilibrium is reasonable. These efforts will aid us in our application of such models to larger, field-scale tests and improve our ability to predict gas breakthrough times.

  17. 28 CFR 30.12 - How may a state simplify, consolidate, or substitute federally required state plans?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... substitute federally required state plans? 30.12 Section 30.12 Judicial Administration DEPARTMENT OF JUSTICE INTERGOVERNMENTAL REVIEW OF DEPARTMENT OF JUSTICE PROGRAMS AND ACTIVITIES § 30.12 How may a state simplify... with law, a state may decide to try to simplify, consolidate, or substitute federally required state...

  18. Statistical Issues for Uncontrolled Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Matney, Mark

    2008-01-01

    A number of statistical tools have been developed over the years for assessing the risk of reentering objects to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. The statistical tools use this information to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of the analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper looks at a number of these theoretical assumptions, examining the mathematical basis for the hazard calculations, and outlining the conditions under which the simplifying assumptions hold. In addition, this paper will also outline some new tools for assessing ground hazard risk in useful ways. Also, this study is able to make use of a database of known uncontrolled reentry locations measured by the United States Department of Defense. By using data from objects that were in orbit more than 30 days before reentry, sufficient time is allowed for the orbital parameters to be randomized in the way the models are designed to compute. The predicted ground footprint distributions of these objects are based on the theory that their orbits behave basically like simple Kepler orbits. However, there are a number of factors - including the effects of gravitational harmonics, the effects of the Earth's equatorial bulge on the atmosphere, and the rotation of the Earth and atmosphere - that could cause them to diverge from simple Kepler orbit behavior and change the ground footprints. The measured latitude and longitude distributions of these objects provide data that can be directly compared with the predicted distributions, providing a fundamental empirical test of the model assumptions.
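
    Under the simple-Kepler assumption discussed above, the randomized ground track of a circular orbit of inclination inc has a closed-form latitude density, p(lat) = cos(lat) / (pi * sqrt(sin(inc)^2 - sin(lat)^2)) for |lat| < inc, uniform in longitude; comparing this with measured reentry locations is the kind of empirical test the paper describes. The sketch below is a generic implementation of that formula (density per radian), not the paper's code.

        import numpy as np

        def latitude_pdf(lat_deg, inc_deg):
            lat, inc = np.radians(lat_deg), np.radians(inc_deg)
            s2 = np.sin(inc) ** 2 - np.sin(lat) ** 2
            pdf = np.cos(lat) / (np.pi * np.sqrt(np.abs(s2) + 1e-30))
            return np.where(s2 > 0.0, pdf, 0.0)   # zero outside |lat| < inc

        lat = np.linspace(-89.0, 89.0, 500)
        pdf = latitude_pdf(lat, inc_deg=51.6)     # e.g. an ISS-like inclination

    The integrable peaks near lat = +/-inc reflect the extra dwell time near an orbit's extreme latitudes, which is why populations just under the inclination band see a relatively higher reentry flux.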

  19. Non-stationary noise estimation using dictionary learning and Gaussian mixture models

    NASA Astrophysics Data System (ADS)

    Hughes, James M.; Rockmore, Daniel N.; Wang, Yang

    2014-02-01

    Stationarity of the noise distribution is a common assumption in image processing. This assumption greatly simplifies denoising estimators and other model parameters; consequently, assuming stationarity is often a matter of convenience rather than an accurate model of noise characteristics. The problematic nature of this assumption is exacerbated in real-world contexts, where noise is often highly non-stationary and can possess time- and space-varying characteristics. Regardless of model complexity, estimating the parameters of noise distributions in digital images is a difficult task, and estimates are often based on heuristic assumptions. Recently, sparse Bayesian dictionary learning methods were shown to produce accurate estimates of the level of additive white Gaussian noise in images with minimal assumptions. We show that a similar model is capable of accurately modeling certain kinds of non-stationary noise processes, allowing for space-varying noise in images to be estimated, detected, and removed. We apply this modeling concept to several types of non-stationary noise and demonstrate the model's effectiveness on real-world problems, including denoising and segmentation of images according to noise characteristics, which has applications in image forensics.

  20. Development of an estimation model for the evaluation of the energy requirement of dilute acid pretreatments of biomass.

    PubMed

    Mafe, Oluwakemi A T; Davies, Scott M; Hancock, John; Du, Chenyu

    2015-01-01

    This study aims to develop a mathematical model to evaluate the energy required by pretreatment processes used in the production of second generation ethanol. A dilute acid pretreatment process reported by National Renewable Energy Laboratory (NREL) was selected as an example for the model's development. The energy demand of the pretreatment process was evaluated by considering the change of internal energy of the substances, the reaction energy, the heat lost and the work done to/by the system based on a number of simplifying assumptions. Sensitivity analyses were performed on the solid loading rate, temperature, acid concentration and water evaporation rate. The results from the sensitivity analyses established that the solids loading rate had the most significant impact on the energy demand. The model was then verified with data from the NREL benchmark process. Application of this model on other dilute acid pretreatment processes reported in the literature illustrated that although similar sugar yields were reported by several studies, the energy required by the different pretreatments varied significantly.
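
    The dominant term in such a model is the sensible heat needed to bring the slurry to reaction temperature, which is what makes solids loading so influential: lower loading means far more water to heat per unit of biomass. A minimal sketch with placeholder values, not the NREL benchmark figures:

        def pretreat_energy(m_solids, solids_frac, T_in=298.0, T_rx=433.0,
                            cp_water=4.18, cp_solids=1.4, loss_frac=0.1):
            """Energy [kJ] to heat a dilute-acid pretreatment batch.

            m_solids    : dry biomass mass [kg]
            solids_frac : solids loading (kg solids / kg slurry)
            """
            m_water = m_solids * (1.0 - solids_frac) / solids_frac
            q = (m_water * cp_water + m_solids * cp_solids) * (T_rx - T_in)
            return q * (1.0 + loss_frac)          # crude allowance for losses

        # Raising the loading from 15% to 30% solids cuts the sensible-heat
        # demand by more than half for the same mass of biomass.
        print(pretreat_energy(1000.0, 0.30) / pretreat_energy(1000.0, 0.15))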

  1. A method to assess the population-level consequences of wind energy facilities on bird and bat species: Chapter

    USGS Publications Warehouse

    Diffendorfer, James E.; Beston, Julie A.; Merrill, Matthew; Stanton, Jessica C.; Corum, Margo D.; Loss, Scott R.; Thogmartin, Wayne E.; Johnson, Douglas H.; Erickson, Richard A.; Heist, Kevin W.

    2016-01-01

    For this study, a methodology was developed for assessing impacts of wind energy generation on populations of birds and bats at regional to national scales. The approach combines existing methods in applied ecology for prioritizing species in terms of their potential risk from wind energy facilities and estimating impacts of fatalities on population status and trend caused by collisions with wind energy infrastructure. Methods include a qualitative prioritization approach, demographic models, and potential biological removal. The approach can be used to prioritize species in need of more thorough study as well as to identify species with minimal risk. However, the components of this methodology require simplifying assumptions and the data required may be unavailable or of poor quality for some species. These issues should be carefully considered before using the methodology. The approach will increase in value as more data become available and will broaden the understanding of anthropogenic sources of mortality on bird and bat populations.
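
    Of the components named above, potential biological removal has a particularly compact standard form (Wade, 1998): PBR = N_min * 0.5 * R_max * F_r. The sketch below is generic, with illustrative inputs rather than estimates for any real species.

        def pbr(n_min, r_max, f_r):
            """n_min : conservative (minimum) population size estimate
            r_max : maximum annual population growth rate
            f_r   : recovery factor in (0, 1]"""
            return n_min * 0.5 * r_max * f_r

        allowable_removals = pbr(n_min=50_000, r_max=0.15, f_r=0.5)  # per year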

  2. Migration without migraines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lines, L.; Burton, A.; Lu, H.X.

    Accurate velocity models are a necessity for reliable migration results. Velocity analysis generally involves the use of methods such as normal moveout analysis (NMO), seismic traveltime tomography, or iterative prestack migration. These techniques can be effective, and each has its own advantage or disadvantage. Conventional NMO methods are relatively inexpensive but basically require simplifying assumptions about geology. Tomography is a more general method but requires traveltime interpretation of prestack data. Iterative prestack depth migration is very general but is computationally expensive. In some cases, there is the opportunity to estimate vertical velocities by use of well information. The well information can be used to optimize poststack migrations, thereby eliminating some of the time and expense of iterative prestack migration. The optimized poststack migration procedure defined here computes the velocity model which minimizes the depth differences between seismic images and formation depths at the well by using a least squares inversion method. The optimization methods described in this paper will hopefully produce "migrations without migraines."

  3. Application of Multi-Hypothesis Sequential Monte Carlo for Breakup Analysis

    NASA Astrophysics Data System (ADS)

    Faber, W. R.; Zaidi, W.; Hussein, I. I.; Roscoe, C. W. T.; Wilkins, M. P.; Schumacher, P. W., Jr.

    As more objects are launched into space, the potential for breakup events and space object collisions is ever increasing. These events create large clouds of debris that are extremely hazardous to space operations. Providing timely, accurate, and statistically meaningful Space Situational Awareness (SSA) data is crucial in order to protect assets and operations in space. The space object tracking problem, in general, is nonlinear in both state dynamics and observations, making it ill-suited to linear filtering techniques such as the Kalman filter. Additionally, given the multi-object, multi-scenario nature of the problem, space situational awareness requires multi-hypothesis tracking and management that is combinatorially challenging in nature. In practice, it is often seen that assumptions of underlying linearity and/or Gaussianity are used to provide tractable solutions to the multiple space object tracking problem. However, these assumptions are, at times, detrimental to tracking data and provide statistically inconsistent solutions. This paper details a tractable solution to the multiple space object tracking problem applicable to space object breakup events. Within this solution, simplifying assumptions about the underlying probability density function are relaxed and heuristic methods for hypothesis management are avoided. This is done by implementing Sequential Monte Carlo (SMC) methods for both nonlinear filtering and hypothesis management. The goal of this paper is to detail this solution and to use it as a platform to discuss computational limitations that hinder proper analysis of large breakup events.
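
    The filtering core of such a solution can be illustrated by a generic bootstrap SMC (particle filter) step, with toy scalar dynamics and measurement models standing in for orbital propagation and observations.

        import numpy as np

        def smc_step(parts, w, z, rng, q=0.1, r=0.5):
            """One predict-weight-resample cycle for scalar particles."""
            parts = parts + rng.normal(0.0, q, parts.shape)   # propagate
            w = w * np.exp(-0.5 * ((z - parts) / r) ** 2)     # likelihood weight
            w = w / max(w.sum(), 1e-300)
            neff = 1.0 / np.sum(w ** 2)                       # effective size
            if neff < 0.5 * parts.size:                       # degeneracy check
                idx = rng.choice(parts.size, parts.size, p=w)
                parts, w = parts[idx], np.full(parts.size, 1.0 / parts.size)
            return parts, w

    In the multi-hypothesis setting described above, weights of this kind are maintained over data-association hypotheses as well as states, which is where the combinatorial burden the authors discuss arises.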

  4. Elaboration Preferences and Differences in Learning Proficiency.

    ERIC Educational Resources Information Center

    Rohwer, William D., Jr.; Levin, Joel R.

    The major emphasis of this study is on the comparative validities of paired-associate learning tests and IQ tests in predicting reading achievement. The study engages in a brief review of earlier research in order to examine the validity of two assumptions--that the construction and/or the use of a tactic that simplifies a learning task is one of…

  5. 76 FR 58268 - Agency Information Collection Activities; Submission to OMB for Review and Approval; Comment...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-20

    ... simplify some assumptions and to make estimation methods consistent; and characterization as Agency burden...-1007 to (1) EPA online using http://www.regulations.gov (our preferred method), by e-mail to oppt.ncic...-HQ-OPPT-2010-1007, which is available for online viewing at http://www.regulations.gov , or in person...

  6. Test Review: Watson, G., & Glaser, E. M. (2010), "Watson-Glaser™ II Critical Thinking Appraisal." Washington State University, Pullman, USA

    ERIC Educational Resources Information Center

    Sternod, Latisha; French, Brian

    2016-01-01

    The Watson-Glaser™ II Critical Thinking Appraisal (Watson-Glaser II; Watson & Glaser, 2010) is a revised version of the "Watson-Glaser Critical Thinking Appraisal®" (Watson & Glaser, 1994). The Watson-Glaser II introduces a simplified model of critical thinking, consisting of three subdimensions: recognize assumptions, evaluate…

  7. Selected mesostructure properties in loblolly pine from Arkansas plantations

    Treesearch

    David E. Kretschmann; Steven M. Cramer; Roderic Lakes; Troy Schmidt

    2006-01-01

    Design properties of wood are currently established at the macroscale, assuming wood to be a homogeneous orthotropic material. The resulting variability from the use of such a simplified assumption has been handled by designing with lower percentile values and applying a number of factors to account for the wide statistical variation in properties. With managed...

  8. Estimation of effective population size in continuously distributed populations: There goes the neighborhood

    Treesearch

    M. C. Neel; K. McKelvey; N. Ryman; M. W. Lloyd; R. Short Bull; F. W. Allendorf; M. K. Schwartz; R. S. Waples

    2013-01-01

    Use of genetic methods to estimate effective population size (Ne) is rapidly increasing, but all approaches make simplifying assumptions unlikely to be met in real populations. In particular, all assume a single, unstructured population, and none has been evaluated for use with continuously distributed species. We simulated continuous populations with local mating...

  9. Effects of various assumptions on the calculated liquid fraction in isentropic saturated equilibrium expansions

    NASA Technical Reports Server (NTRS)

    Bursik, J. W.; Hall, R. M.

    1980-01-01

    The saturated equilibrium expansion approximation for two-phase flow often involves ideal-gas and latent-heat assumptions to simplify the solution procedure. This approach is well documented by Wegener and Mack and works best at low pressures where deviations from ideal-gas behavior are small. A thermodynamic expression for liquid mass fraction that is decoupled from the equations of fluid mechanics is used to compare the effects of the various assumptions on nitrogen-gas saturated equilibrium expansion flow starting at 8.81 atm, 2.99 atm, and 0.45 atm, which are conditions representative of transonic cryogenic wind tunnels. For the highest pressure case, the entire set of ideal-gas and latent-heat assumptions is shown to be in error by 62 percent for the values of heat capacity and latent heat. An approximation of the exact, real-gas expression is also developed using a constant, two-phase isentropic expansion coefficient which results in an error of only 2 percent for the high pressure case.

  10. Direct vibro-elastography FEM inversion in Cartesian and cylindrical coordinate systems without the local homogeneity assumption

    NASA Astrophysics Data System (ADS)

    Honarvar, M.; Lobo, J.; Mohareri, O.; Salcudean, S. E.; Rohling, R.

    2015-05-01

    To produce images of tissue elasticity, the vibro-elastography technique involves applying a steady-state multi-frequency vibration to tissue, estimating displacements from ultrasound echo data, and using the estimated displacements in an inverse elasticity problem with the shear modulus spatial distribution as the unknown. In order to fully solve the inverse problem, all three displacement components are required. However, using ultrasound, the axial component of the displacement is measured much more accurately than the other directions. Therefore, simplifying assumptions must be used in this case. Usually, the equations of motion are transformed into a Helmholtz equation by assuming tissue incompressibility and local homogeneity. The local homogeneity assumption causes significant imaging artifacts in areas of varying elasticity. In this paper, we remove the local homogeneity assumption. In particular, we introduce a new finite-element-based direct inversion technique in which only the coupling terms in the equation of motion are ignored, so it can be used with only one component of the displacement. Both Cartesian and cylindrical coordinate systems are considered. The use of multi-frequency excitation also allows us to obtain multiple measurements and reduce artifacts in areas where the displacement of one frequency is close to zero. The proposed method was tested in simulations and experiments against a conventional approach in which the local homogeneity assumption is used. The results show significant improvements in elasticity imaging with the new method compared to previous methods that assume local homogeneity. For example, in simulations the contrast-to-noise ratio (CNR) for the region with a spherical inclusion increases from an average value of 1.5 to 17 when the proposed method is used instead of the local inversion with the homogeneity assumption; similarly, in the prostate phantom experiment the CNR improved from an average value of 1.6 to about 20.
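
    For contrast with the paper's approach, the baseline it improves upon can be written in a few lines: under the local homogeneity assumption the equation of motion reduces to a Helmholtz equation that is inverted pointwise. A one-dimensional toy version with a synthetic displacement field and illustrative numbers only:

    ```python
    import numpy as np

    # Baseline "local homogeneity" inversion: for steady-state vibration at
    # angular frequency w, mu * laplacian(u) = -rho * w**2 * u is solved pointwise.
    rho, w = 1000.0, 2 * np.pi * 200.0          # tissue density, 200 Hz excitation
    x = np.linspace(0, 0.05, 200)               # 5 cm line of axial displacements
    u = 1e-6 * np.sin(2 * np.pi * x / 0.01)     # synthetic shear wave, 1 cm wavelength

    h = x[1] - x[0]
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / h**2   # 1D Laplacian
    mu = -rho * w**2 * u[1:-1] / lap[1:-1]                  # pointwise shear modulus
    print("median shear modulus estimate (Pa):", np.median(mu))
    ```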

  11. Effects of fish movement assumptions on the design of a marine protected area to protect an overfished stock.

    PubMed

    Cornejo-Donoso, Jorge; Einarsson, Baldvin; Birnir, Bjorn; Gaines, Steven D

    2017-01-01

    Marine Protected Areas (MPA) are important management tools shown to protect marine organisms, restore biomass, and increase fisheries yields. While MPAs have been successful in meeting these goals for many relatively sedentary species, highly mobile organisms may get few benefits from this type of spatial protection due to their frequent movement outside the protected area. The use of a large MPA can compensate for extensive movement, but testing this empirically is challenging, as it requires both large areas and sufficient time series to draw conclusions. To overcome this limitation, MPA models have been used to identify designs and predict potential outcomes, but these simulations are highly sensitive to the assumptions describing the organism's movements. Due to recent improvements in computational simulations, it is now possible to include very complex movement assumptions in MPA models (e.g. individual-based models). These have renewed interest in MPA simulations, which implicitly assume that increasing the detail in fish movement overcomes the sensitivity to the movement assumptions. Nevertheless, a systematic comparison of the designs and outcomes obtained under different movement assumptions has not been done. In this paper, we use an individual-based model, interconnected to population and fishing fleet models, to explore the value of increasing the detail of the movement assumptions using four scenarios of increasing behavioral complexity: a) random, diffusive movement, b) aggregations, c) aggregations that respond to environmental forcing (e.g. sea surface temperature), and d) aggregations that respond to environmental forcing and are transported by currents. We then compare these models to determine how the assumptions affect MPA design, and therefore the effective protection of the stocks. Our results show that the optimal MPA size to maximize fisheries benefits increases with movement complexity, from ~10% for the diffusive assumption to ~30% when full environmental forcing was used. We also found that in cases of limited understanding of the movement dynamics of a species, simplified assumptions can be used to provide a guide for the minimum MPA size needed to effectively protect the stock. However, using oversimplified assumptions can produce suboptimal designs and lead to a density underestimation of ca. 30%; therefore, the main value of detailed movement dynamics is to provide more reliable MPA designs and predicted outcomes. Large MPAs can be effective in recovering overfished stocks, protecting pelagic fish, and providing significant increases in fisheries yields. Our models provide a means to empirically test this spatial management tool, which theoretical evidence consistently suggests is an effective alternative for managing highly mobile pelagic stocks.
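
    The simplest of the four movement scenarios, pure diffusion, can be sketched in a few lines; it shows why survival rises with MPA size once the reserve is wide relative to the diffusion length. All rates and sizes below are illustrative, not the paper's calibrated values:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def survival_after_season(mpa_fraction, n_fish=5000, days=365,
                              step_sd=0.01, daily_catch_prob=0.02):
        # Fish random-walk on a periodic [0, 1) coastline; an MPA covers
        # [0, mpa_fraction) and fish outside it face a fixed daily catch risk.
        pos = rng.random(n_fish)
        alive = np.ones(n_fish, dtype=bool)
        for _ in range(days):
            pos = (pos + rng.normal(0.0, step_sd, n_fish)) % 1.0
            exposed = alive & (pos >= mpa_fraction)
            alive[exposed] = rng.random(exposed.sum()) > daily_catch_prob
        return alive.mean()

    for f in (0.1, 0.2, 0.3):
        print(f"MPA covering {f:.0%} of the coast: survival {survival_after_season(f):.2f}")
    ```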

  12. Shear viscosity in monatomic liquids: a simple mode-coupling approach

    NASA Astrophysics Data System (ADS)

    Balucani, Umberto

    The value of the shear-viscosity coefficient in fluids is controlled by the dynamical processes affecting the time decay of the associated Green-Kubo integrand, the stress autocorrelation function (SACF). These processes are investigated in monatomic liquids by means of a microscopic approach with a minimum use of phenomenological assumptions. In particular, mode-coupling effects (responsible for the presence in the SACF of a long-lasting 'tail') are accounted for by a simplified approach where the only requirement is knowledge of the structural properties. The theory readily yields quantitative predictions in its domain of validity, which comprises ordinary and moderately supercooled 'simple' liquids. The framework is applied to liquid Ar and Rb near their melting points, and quite satisfactory agreement with the simulation data is found for both the details of the SACF and the value of the shear-viscosity coefficient.

  13. Heat transfer evaluation in a plasma core reactor

    NASA Technical Reports Server (NTRS)

    Smith, D. E.; Smith, T. M.; Stoenescu, M. L.

    1976-01-01

    Numerical evaluations of heat transfer in a fissioning uranium plasma core reactor cavity, operating with seeded hydrogen propellant, were performed. A two-dimensional analysis is based on an assumed flow pattern and cavity wall heat exchange rate. Various iterative schemes were required by the nature of the radiative field and by the solid seed vaporization. Approximate formulations of the radiative heat flux are generally used, due to the complexity of the solution of a rigorously formulated problem. The present work analyzes the sensitivity of the results with respect to approximations of the radiative field, geometry, seed vaporization coefficients and flow pattern. The results present temperature, heat flux, density and optical depth distributions in the reactor cavity, acceptable simplifying assumptions, and iterative schemes. The present calculations, performed in Cartesian and spherical coordinates, are applicable to the most general heat transfer problems.

  14. Reducing junk radiation and eccentricity in binary-black-hole initial data

    NASA Astrophysics Data System (ADS)

    Lovelace, Geoffrey; Pfeiffer, Harald; Brown, Duncan; Lindblom, Lee; Scheel, Mark; Kidder, Lawrence

    2007-04-01

    Numerical simulations of binary-black-hole (BBH) collisions require initial data that satisfy the Einstein constraint equations. Several well-known methods generate constraint-satisfying BBH data, but the commonly-used simplifying assumptions lead to undesirable effects. BBH data typically assume a conformally flat spatial metric; this leads to an initial pulse of unphysical "junk" gravitational radiation. Also, the initial radial velocity of the holes is often neglected; this can lead to significant eccentricity in the holes' trajectories. This talk will discuss efforts to reduce these effects by constructing and evolving generalizations of the BBH initial data of Cook and Pfeiffer (2004). By giving the holes a small radial velocity, the eccentricity can be greatly reduced (although the emitted waves are largely unaffected). The junk radiation for flat and non-flat conformal metrics will also be compared.

  15. Provably-Secure (Chinese Government) SM2 and Simplified SM2 Key Exchange Protocols

    PubMed Central

    Nam, Junghyun; Kim, Moonseong

    2014-01-01

    We revisit the SM2 protocol, which is widely used in Chinese commercial applications and by Chinese government agencies. Although it is by now standard practice for protocol designers to provide security proofs in widely accepted security models in order to assure protocol implementers of their security properties, the SM2 protocol does not have a proof of security. In this paper, we prove the security of the SM2 protocol in the widely accepted indistinguishability-based Bellare-Rogaway model under the elliptic curve discrete logarithm problem (ECDLP) assumption. We also present a simplified and more efficient version of the SM2 protocol with an accompanying security proof. PMID:25276863

  16. Simplified Analysis of Pulse Detonation Rocket Engine Blowdown Gasdynamics and Performance

    NASA Technical Reports Server (NTRS)

    Morris, C. I.; Rodgers, Stephen L. (Technical Monitor)

    2002-01-01

    Pulse detonation rocket engines (PDREs) offer potential performance improvements over conventional designs, but represent a challenging modeling task. A simplified model for an idealized, straight-tube, single-shot PDRE blowdown process and thrust determination is described and implemented. In order to form an assessment of the accuracy of the model, the flowfield time history is compared to experimental data from Stanford University. Parametric studies of the effect of mixture stoichiometry, initial fill temperature, and blowdown pressure ratio on the performance of a PDRE are performed using the model. PDRE performance is also compared with a conventional steady-state rocket engine over a range of pressure ratios using similar gasdynamic assumptions.

  17. An analysis of running skyline load path.

    Treesearch

    Ward W. Carson; Charles N. Mann

    1971-01-01

    This paper is intended for those who wish to prepare an algorithm to determine the load path of a running skyline. The mathematics of a simplified approach to this running skyline design problem are presented. The approach employs assumptions which reduce the complexity of the problem to the point where it can be solved on desk-top computers of limited capacities. The...

  18. Stratosphere circulation on tidally locked ExoEarths

    NASA Astrophysics Data System (ADS)

    Carone, L.; Keppens, R.; Decin, L.; Henning, Th.

    2018-02-01

    Stratosphere circulation is important to interpret abundances of photochemically produced compounds like ozone which we aim to observe to assess habitability of exoplanets. We thus investigate a tidally locked ExoEarth scenario for TRAPPIST-1b, TRAPPIST-1d, Proxima Centauri b and GJ 667 C f with a simplified 3D atmosphere model and for different stratospheric wind breaking assumptions.

  19. A nonlinear theory for elastic plates with application to characterizing paper properties

    Treesearch

    M. W. Johnson; Thomas J. Urbanik

    1984-03-01

    A theory of thin plates which is physically as well as kinematically nonlinear is developed and used to characterize elastic material behavior for arbitrary stretching and bending deformations. It is developed from a few clearly defined assumptions and uses a unique treatment of strain energy. An effective strain concept is introduced to simplify the theory to a...

  20. Sequential Auctions with Partially Substitutable Goods

    NASA Astrophysics Data System (ADS)

    Vetsikas, Ioannis A.; Jennings, Nicholas R.

    In this paper, we examine a setting in which a number of partially substitutable goods are sold in sequential single-unit auctions. Each bidder needs to buy exactly one of these goods. In previous work, this setting has been simplified by assuming that bidders do not know their valuations for all items a priori, but rather are informed of their true valuation for each item right before the corresponding auction takes place. This assumption simplifies the strategies of bidders, as the expected revenue from future auctions is the same for all bidders due to the complete lack of private information. In our analysis we do not make this assumption. This complicates the computation of the equilibrium strategies significantly. We examine this setting for both first- and second-price auction variants, initially when the closing prices are not announced, for which we prove that sequential first- and second-price auctions are revenue equivalent. Then we assume that the prices are announced; because of the asymmetry in the announced prices between the two auction variants, revenue equivalence does not hold in this case. We finish the paper by giving some initial results about the case when free disposal is allowed, where a bidder can therefore purchase more than one item.

  1. Information content and sensitivity of the 3β + 2α lidar measurement system for aerosol microphysical retrievals

    NASA Astrophysics Data System (ADS)

    Burton, Sharon P.; Chemyakin, Eduard; Liu, Xu; Knobelspiesse, Kirk; Stamnes, Snorre; Sawamura, Patricia; Moore, Richard H.; Hostetler, Chris A.; Ferrare, Richard A.

    2016-11-01

    There is considerable interest in retrieving profiles of aerosol effective radius, total number concentration, and complex refractive index from lidar measurements of extinction and backscatter at several wavelengths. The combination of three backscatter channels plus two extinction channels (3β + 2α) is particularly important since it is believed to be the minimum configuration necessary for the retrieval of aerosol microphysical properties and because the technological readiness of lidar systems permits this configuration on both an airborne and future spaceborne instrument. The second-generation NASA Langley airborne High Spectral Resolution Lidar (HSRL-2) has been making 3β + 2α measurements since 2012. The planned NASA Aerosol/Clouds/Ecosystems (ACE) satellite mission also recommends the 3β + 2α combination. Here we develop a deeper understanding of the information content and sensitivities of the 3β + 2α system in terms of aerosol microphysical parameters of interest. We use a retrieval-free methodology to determine the basic sensitivities of the measurements independent of retrieval assumptions and constraints. We calculate information content and uncertainty metrics using tools borrowed from the optimal estimation methodology based on Bayes' theorem, using a simplified forward model look-up table, with no explicit inversion. The forward model is simplified to represent spherical particles, monomodal log-normal size distributions, and wavelength-independent refractive indices. Since we only use the forward model with no retrieval, the given simplified aerosol scenario is applicable as a best case for all existing retrievals in the absence of additional constraints. Retrieval-dependent errors due to mismatch between retrieval assumptions and true atmospheric aerosols are not included in this sensitivity study, and neither are retrieval errors that may be introduced in the inversion process. The choice of a simplified model adds clarity to the understanding of the uncertainties in such retrievals, since it allows for separately assessing the sensitivities and uncertainties of the measurements alone that cannot be corrected by any potential or theoretical improvements to retrieval methodology but must instead be addressed by adding information content. The sensitivity metrics allow for identifying (1) information content of the measurements vs. a priori information; (2) error bars on the retrieved parameters; and (3) potential sources of cross-talk or "compensating" errors wherein different retrieval parameters are not independently captured by the measurements. The results suggest that the 3β + 2α measurement system is underdetermined with respect to the full suite of microphysical parameters considered in this study and that additional information is required, in the form of additional coincident measurements (e.g., sun-photometer or polarimeter) or a priori retrieval constraints. A specific recommendation is given for addressing cross-talk between effective radius and total number concentration.
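
    The optimal-estimation metrics mentioned above reduce to a few matrix formulas in the Rodgers style. The sketch below computes the averaging kernel, degrees of freedom for signal, and Shannon information content for a made-up 5-measurement by 4-parameter Jacobian; all matrices and noise levels are purely illustrative:

    ```python
    import numpy as np

    # Hypothetical 5x4 Jacobian K: sensitivity of 3 backscatter + 2 extinction
    # channels to 4 microphysical parameters (r_eff, N, m_r, m_i).
    K = np.array([[0.9, 0.3, -0.5, -0.2],
                  [0.8, 0.3, -0.4, -0.3],
                  [0.6, 0.3, -0.3, -0.4],
                  [1.2, 0.4, -0.1, -0.6],
                  [1.0, 0.4, -0.1, -0.5]])
    S_e = np.diag([0.05**2] * 5)          # measurement-noise covariance
    S_a = np.diag([1.0, 1.0, 0.5, 0.5])   # a priori covariance

    S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))
    A = S_hat @ K.T @ np.linalg.inv(S_e) @ K   # averaging kernel
    dof = np.trace(A)                          # degrees of freedom for signal
    H = 0.5 * np.log(np.linalg.det(S_a) / np.linalg.det(S_hat))  # info (nats)
    print(f"DOF = {dof:.2f} of {K.shape[1]} parameters, info = {H:.2f} nats")
    ```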

  2. A Comparison of Crater-Size Scaling and Ejection-Speed Scaling During Experimental Impacts in Sand

    NASA Technical Reports Server (NTRS)

    Anderson, J. L. B.; Cintala, M. J.; Johnson, M. K.

    2014-01-01

    Non-dimensional scaling relationships are used to understand various cratering processes including final crater sizes and the excavation of material from a growing crater. The principal assumption behind these scaling relationships is that these processes depend on a combination of the projectile's characteristics, namely its diameter, density, and impact speed. This simplifies the impact event into a single point-source. So long as the process of interest is beyond a few projectile radii from the impact point, the point-source assumption holds. These assumptions can be tested through laboratory experiments in which the initial conditions of the impact are controlled and resulting processes measured directly. In this contribution, we continue our exploration of the congruence between crater-size scaling and ejection-speed scaling relationships. In particular, we examine a series of experimental suites in which the projectile diameter and average grain size of the target are varied.

  3. Experimental and Numerical Analysis of Narrowband Coherent Rayleigh-Brillouin Scattering in Atomic and Molecular Species (Pre Print)

    DTIC Science & Technology

    2012-02-01

    use of polar gas species. While current simplified models have adequately predicted CRS and CRBS line shapes for a wide variety of cases, multiple … published simplified models are presented for argon, molecular nitrogen, and methane at 300 & 500 K and 1 atm. The simplified models require uncertain gas properties

  4. Practical modeling approaches for geological storage of carbon dioxide.

    PubMed

    Celia, Michael A; Nordbotten, Jan M

    2009-01-01

    The relentless increase of anthropogenic carbon dioxide emissions and the associated concerns about climate change have motivated new ideas about carbon-constrained energy production. One technological approach to control carbon dioxide emissions is carbon capture and storage, or CCS. The underlying idea of CCS is to capture the carbon before it is emitted to the atmosphere and store it somewhere other than the atmosphere. Currently, the most attractive option for large-scale storage is in deep geological formations, including deep saline aquifers. Many physical and chemical processes can affect the fate of the injected CO2, with the overall mathematical description of the complete system becoming very complex. Our approach to the problem has been to reduce complexity as much as possible, so that we can focus on the few truly important questions about the injected CO2, most of which involve leakage out of the injection formation. Toward this end, we have established a set of simplifying assumptions that allow us to derive simplified models, which can be solved numerically or, for the most simplified cases, analytically. These simplified models allow calculation of solutions to large-scale injection and leakage problems in ways that traditional multicomponent multiphase simulators cannot. Such simplified models provide important tools for system analysis, screening calculations, and overall risk-assessment calculations. We believe this is a practical and important approach to model geological storage of carbon dioxide. It also serves as an example of how complex systems can be simplified while retaining the essential physics of the problem.

  5. Statistical Issues for Calculating Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Bacon, John B.; Matney, Mark

    2016-01-01

    A number of statistical tools have been developed over the years for assessing the risk posed by reentering objects to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. This information, combined with information on the expected ground path of the reentry, is used to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of this analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper builds on previous IAASS work to re-examine one of these theoretical assumptions. This study employs empirical and theoretical information to test the assumption of a fully random decay along the argument of latitude of the final orbit, and makes recommendations on how to improve the accuracy of this calculation in the future.
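
    The assumption being tested, a uniform decay probability along the argument of latitude, yields the standard latitude dwell-time density, which a risk calculation then folds with population density and casualty area. A minimal sketch, with the population profile and debris casualty area invented for illustration:

    ```python
    import numpy as np

    def latitude_dwell_pdf(lat_deg, inc_deg):
        # Probability density (per radian of latitude) of a randomly timed
        # decay from a circular orbit of inclination inc, assuming uniform
        # decay probability along the argument of latitude.
        lat, inc = np.radians(lat_deg), np.radians(inc_deg)
        out = np.zeros_like(lat)
        ok = np.abs(lat) < inc
        out[ok] = np.cos(lat[ok]) / (np.pi * np.sqrt(np.sin(inc)**2 - np.sin(lat[ok])**2))
        return out

    # Expected casualties: integrate dwell pdf x (people per m^2) x casualty area.
    lat = np.linspace(-51.5, 51.5, 2001)              # 51.6 deg inclination orbit
    pdf = latitude_dwell_pdf(lat, 51.6)
    pop_density = 15e-6 * np.cos(np.radians(lat))     # toy people/m^2 profile
    A_casualty = 8.0                                  # m^2, summed over fragments
    E_c = np.trapz(pdf * pop_density * A_casualty, np.radians(lat))
    print(f"expected casualties per reentry: {E_c:.2e}")
    ```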

  6. Simplified continuous simulation model for investigating effects of controlled drainage on long-term soil moisture dynamics with a shallow groundwater table.

    PubMed

    Sun, Huaiwei; Tong, Juxiu; Luo, Wenbing; Wang, Xiugui; Yang, Jinzhong

    2016-08-01

    Accurate modeling of soil water content is required for a reasonable prediction of crop yield and of agrochemical leaching in the field. However, complex mathematical models face difficult-to-calibrate parameters and a knowledge gap between developers and users. In this study, a deterministic model is presented and is used to investigate the effects of controlled drainage on soil moisture dynamics in a shallow groundwater area. This simplified one-dimensional model is formulated to simulate soil moisture in the field on a daily basis and takes into account only the vertical hydrological processes. A linear assumption is proposed and is used to calculate the capillary rise from the groundwater. The pipe drainage volume is calculated by using a steady-state approximation method and the leakage rate is calculated as a function of soil moisture. The model is successfully calibrated by using field experiment data from four different pipe drainage treatments with several field observations. The model was validated by comparing the simulations with observed soil water content during the experimental seasons. The comparison results demonstrated the robustness and effectiveness of the model in the prediction of average soil moisture values. The input data required to run the model are widely available and can be measured easily in the field. It is observed that controlled drainage results in lower groundwater contribution to the root zone and lower depth of percolation to the groundwater, thus helping in the maintenance of a low level of soil salinity in the root zone.
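
    The structure of such a model is simple enough to sketch. Below is a minimal daily bucket with a linear capillary-rise term of the kind the abstract describes; every parameter value is invented for illustration, not taken from the paper:

    ```python
    def simulate_soil_moisture(precip, et0, s0=0.25, s_fc=0.32, s_wp=0.10,
                               zr=400.0, wt_depth=1.2, c_max=1.5):
        # s: volumetric soil moisture in a root zone holding s * zr mm of water;
        # capillary rise (mm/day) decreases linearly with water-table depth,
        # actual ET is moisture-limited, and storage above field capacity drains.
        s, series = s0, []
        cap_rise = max(0.0, c_max * (1.0 - wt_depth / 2.0))  # linear-in-depth rise
        for p, et in zip(precip, et0):
            et_act = et * min(1.0, max(0.0, (s - s_wp) / (s_fc - s_wp)))
            store = s * zr + p + cap_rise - et_act            # water stored (mm)
            drain = max(0.0, store - s_fc * zr)               # deep percolation
            s = (store - drain) / zr
            series.append(s)
        return series

    # Ten dry days, a 30 mm rain event, then five more dry days:
    precip = [0.0] * 10 + [30.0] + [0.0] * 5
    et0 = [4.0] * len(precip)
    print([round(s, 3) for s in simulate_soil_moisture(precip, et0)])
    ```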

  7. Electromagnetic Simulation of the Near-Field Distribution around a Wind Farm

    DOE PAGES

    Yang, Shang-Te; Ling, Hao

    2013-01-01

    An efficient approach to compute the near-field distribution around and within a wind farm under plane wave excitation is proposed. To make the problem computationally tractable, several simplifying assumptions are made based on the geometry of the problem. By comparing the approximations against full-wave simulations at 500 MHz, it is shown that the assumptions do not introduce significant errors into the resulting near-field distribution. The near fields around a 3 × 3 wind farm are computed using the developed methodology at 150 MHz, 500 MHz, and 3 GHz. Both the multipath interference patterns and the forward shadows are predicted by the proposed method.

  9. Short-cut Methods versus Rigorous Methods for Performance-evaluation of Distillation Configurations

    DOE PAGES

    Ramapriya, Gautham Madenoor; Selvarajah, Ajiththaa; Jimenez Cucaita, Luis Eduardo; ...

    2018-05-17

    Here, this study demonstrates the efficacy of a short-cut method such as the Global Minimization Algorithm (GMA), that uses assumptions of ideal mixtures, constant molar overflow (CMO) and pinched columns, in pruning the search-space of distillation column configurations for zeotropic multicomponent separation, to provide a small subset of attractive configurations with low minimum heat duties. The short-cut method, due to its simplifying assumptions, is computationally efficient, yet reliable in identifying the small subset of useful configurations for further detailed process evaluation. This two-tier approach allows expedient search of the configuration space containing hundreds to thousands of candidate configurations for a given application.

  10. Hypotheses of calculation of the water flow rate evaporated in a wet cooling tower

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bourillot, C.

    1983-08-01

    The method developed by Poppe at the University of Hannover to calculate the thermal performance of a wet cooling tower fill is presented. The formulation of Poppe is then validated using full-scale test data from a wet cooling tower at the power station at Neurath, Federal Republic of Germany. It is shown that the Poppe method predicts the evaporated water flow rate almost perfectly and the condensate content of the warm air with good accuracy over a wide range of ambient conditions. The simplifying assumptions of the Merkel theory are discussed, and the errors linked to these assumptions are systematically described, then illustrated with the test data.

  11. Marginal Loss Calculations for the DCOPF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eldridge, Brent; O'Neill, Richard P.; Castillo, Andrea R.

    2016-12-05

    The purpose of this paper is to explain some aspects of including a marginal line loss approximation in the DCOPF. The DCOPF optimizes electric generator dispatch using simplified power flow physics. Since the standard assumptions in the DCOPF include a lossless network, a number of modifications have to be added to the model. Calculating marginal losses allows the DCOPF to optimize the location of power generation, so that generators that are closer to demand centers are relatively cheaper than remote generation. The problem formulations discussed in this paper will simplify many aspects of practical electric dispatch implementations in use today, but will include sufficient detail to demonstrate a few points with regard to the handling of losses.
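
    The core idea can be shown with the usual delivery-factor form of the price decomposition (congestion ignored): if LF is a bus's marginal loss sensitivity, the energy-plus-loss part of its LMP is the system price scaled by (1 - LF), so generation near load is rewarded. The sign convention and all numbers below are illustrative; conventions differ across market implementations:

    ```python
    # Toy marginal-loss pricing: LF[bus] = d(losses)/d(injection at bus),
    # so a remote generator (positive LF) sees a lower effective price.
    lam = 30.0                                   # $/MWh at the reference bus
    LF = {"near_city": -0.01, "mid": 0.02, "remote": 0.05}

    for bus, lf in LF.items():
        lmp = lam * (1.0 - lf)                   # energy + marginal-loss component
        print(f"{bus:>10}: LMP = {lmp:5.2f} $/MWh")
    ```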

  12. A Bottom-Up Approach to Understanding Protein Layer Formation at Solid-Liquid Interfaces

    PubMed Central

    Kastantin, Mark; Langdon, Blake B.; Schwartz, Daniel K.

    2014-01-01

    A common goal across different fields (e.g. separations, biosensors, biomaterials, pharmaceuticals) is to understand how protein behavior at solid-liquid interfaces is affected by environmental conditions. Temperature, pH, ionic strength, and the chemical and physical properties of the solid surface, among many factors, can control microscopic protein dynamics (e.g. adsorption, desorption, diffusion, aggregation) that contribute to macroscopic properties like time-dependent total protein surface coverage and protein structure. These relationships are typically studied through a top-down approach in which macroscopic observations are explained using analytical models that are based upon reasonable, but not universally true, simplifying assumptions about microscopic protein dynamics. Conclusions connecting microscopic dynamics to environmental factors can be heavily biased by potentially incorrect assumptions. In contrast, more complicated models avoid several of the common assumptions but require many parameters that have overlapping effects on predictions of macroscopic, average protein properties. Consequently, these models are poorly suited for the top-down approach. Because the sophistication incorporated into these models may ultimately prove essential to understanding interfacial protein behavior, this article proposes a bottom-up approach in which direct observations of microscopic protein dynamics specify parameters in complicated models, which then generate macroscopic predictions to compare with experiment. In this framework, single-molecule tracking has proven capable of making direct measurements of microscopic protein dynamics, but must be complemented by modeling to combine and extrapolate many independent microscopic observations to the macro-scale. The bottom-up approach is expected to better connect environmental factors to macroscopic protein behavior, thereby guiding rational choices that promote desirable protein behaviors. PMID:24484895

  13. A Mass Tracking Formulation for Bubbles in Incompressible Flow

    DTIC Science & Technology

    2012-10-14

    incompressible flow to fully nonlinear compressible flow including the effects of shocks and rarefactions, and then subsequently making a number of simplifying assumptions on the air flow, using the ideas from [19] to couple together incompressible flow with fully nonlinear compressible flow including shocks and rarefactions. The results…

  14. Simplifying Causal Complexity: How Interactions between Modes of Causal Induction and Information Availability Lead to Heuristic-Driven Reasoning

    ERIC Educational Resources Information Center

    Grotzer, Tina A.; Tutwiler, M. Shane

    2014-01-01

    This article considers a set of well-researched default assumptions that people make in reasoning about complex causality and argues that, in part, they result from the forms of causal induction that we engage in and the type of information available in complex environments. It considers how information often falls outside our attentional frame…

  15. Flux Jacobian Matrices For Equilibrium Real Gases

    NASA Technical Reports Server (NTRS)

    Vinokur, Marcel

    1990-01-01

    Improved formulation includes generalized Roe average and extension to three dimensions. Flux Jacobian matrices derived for use in numerical solutions of conservation-law differential equations of inviscid flows of ideal gases extended to real gases. Real-gas formulation of these matrices retains simplifying assumptions of thermodynamic and chemical equilibrium, but adds effects of vibrational excitation, dissociation, and ionization of gas molecules via general equation of state.

  16. Performance evaluation of power control algorithms in wireless cellular networks

    NASA Astrophysics Data System (ADS)

    Temaneh-Nyah, C.; Iita, V.

    2014-10-01

    Power control in a mobile communication network intends to control the transmission power levels in such a way that the required quality of service (QoS) for the users is guaranteed with the lowest possible transmission powers. Most studies of power control algorithms in the literature are based on simplifying assumptions, which compromises the validity of the results when applied in a real environment. In this paper, a CDMA network was simulated. The real environment was accounted for by defining the analysis area; the base stations and mobile stations are specified by their geographical coordinates, and the mobility of the mobile stations is accounted for. The simulation also allowed a number of network parameters, including the network traffic and the wireless channel models, to be modified. Finally, we present the simulation results of a convergence-speed-based comparative analysis of three uplink power control algorithms.
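
    A standard example of the kind of algorithm being compared is the distributed SINR-balancing iteration of Foschini and Miljanic, in which each link scales its power by the ratio of target to measured SINR. The toy link gains, noise level, and target below are invented for illustration:

    ```python
    import numpy as np

    G = np.array([[1.0, 0.1, 0.2],
                  [0.2, 1.0, 0.1],
                  [0.1, 0.1, 1.0]])   # G[i, j]: gain from transmitter j to receiver i
    noise, gamma_target = 0.01, 2.0
    p = np.full(3, 0.1)               # initial transmit powers

    for _ in range(50):
        interference = G @ p - np.diag(G) * p + noise
        sinr = np.diag(G) * p / interference
        p = (gamma_target / sinr) * p  # each link uses only its own measured SINR

    sinr = np.diag(G) * p / (G @ p - np.diag(G) * p + noise)
    print("powers:", p.round(4), "achieved SINR:", sinr.round(3))
    ```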

  17. How the continents deform: The evidence from tectonic geodesy

    USGS Publications Warehouse

    Thatcher, Wayne R.

    2009-01-01

    Space geodesy now provides quantitative maps of the surface velocity field within tectonically active regions, supplying constraints on the spatial distribution of deformation, the forces that drive it, and the brittle and ductile properties of continental lithosphere. Deformation is usefully described as relative motions among elastic blocks and is block-like because major faults are weaker than adjacent intact crust. Despite similarities, continental block kinematics differs from global plate tectonics: blocks are much smaller, typically ∼100–1000 km in size; departures from block rigidity are sometimes measurable; and blocks evolve over ∼1–10 Ma timescales, particularly near their often geometrically irregular boundaries. Quantitatively relating deformation to the forces that drive it requires simplifying assumptions about the strength distribution in the lithosphere. If brittle/elastic crust is strongest, interactions among blocks control the deformation. If ductile lithosphere is the stronger, its flow properties determine the surface deformation, and a continuum approach is preferable.

  18. Modelling of the luminescent properties of nanophosphor coatings with different porosity

    NASA Astrophysics Data System (ADS)

    Kubrin, R.; Graule, T.

    2016-10-01

    Coatings of Y2O3:Eu nanophosphor with the effective refractive index of 1.02 were obtained by flame aerosol deposition (FAD). High-pressure cold compaction decreased the layer porosity from 97.3 to 40 vol % and brought about dramatic changes in the photoluminescent performance. Modelling of interdependence between the quantum yield, decay time of luminescence, and porosity of the nanophosphor films required a few basic simplifying assumptions. We confirmed that the properties of porous nanostructured coatings are most appropriately described by the nanocrystal cavity model of the radiative decay. All known effective medium equations resulted in seemingly underestimated values of the effective refractive index. While the best fit was obtained with the linear permittivity mixing rule, the influence of further effects, previously not accounted for, could not be excluded. We discuss the peculiarities in the optical response of nanophosphors and suggest directions for future research.

  19. Gravitational Radiation of a Vibrating Physical String as a Model for the Gravitational Emission of an Astrophysical Plasma

    NASA Astrophysics Data System (ADS)

    Lewis, Ray A.; Modanese, Giovanni

    Vibrating media offer an important testing ground for reconciling conflicts between General Relativity, Quantum Mechanics and other branches of physics. For sources like a Weber bar, the standard covariant formalism for elastic bodies can be applied. The vibrating string, however, is a source of gravitational waves which requires novel computational techniques, based on the explicit construction of a conserved and renormalized energy-momentum tensor. Renormalization (in a classical sense) is necessary to take into account the effect of external constraints, which affect the emission considerably. Our computation also relaxes usual simplifying assumptions like far-field approximation, spherical or plane wave symmetry, TT gauge and absence of internal interference. In a further step towards unification, the method is then adapted to give the radiation field of a transversal Alfven wave in a rarefied astrophysical plasma, where the tension is produced by an external static magnetic field.

  20. Relationship between population dynamics and the self-energy in driven non-equilibrium systems

    DOE PAGES

    Kemper, Alexander F.; Freericks, James K.

    2016-05-13

    We compare the decay rates of excited populations directly calculated within a Keldysh formalism to the equation of motion of the population itself for a Hubbard-Holstein model in two dimensions. While it is true that these two approaches must give the same answer, it is common to make a number of simplifying assumptions, within the differential equation for the populations, that allows one to interpret the decay in terms of hot electrons interacting with a phonon bath. Furthermore, we show how care must be taken to ensure an accurate treatment of the equation of motion for the populations due to the fact that there are identities that require cancellations of terms that naively look like they contribute to the decay rates. In particular, the average time dependence of the Green's functions and self-energies plays a pivotal role in determining these decay rates.

  1. Rates of species loss from Amazonian forest fragments

    PubMed Central

    Ferraz, Gonçalo; Russell, Gareth J.; Stouffer, Philip C.; Bierregaard, Richard O.; Pimm, Stuart L.; Lovejoy, Thomas E.

    2003-01-01

    In the face of worldwide habitat fragmentation, managers need to devise a time frame for action. We ask how fast understory bird species disappear from experimentally isolated plots in the Biological Dynamics of Forest Fragments Project, central Amazon, Brazil. Our data consist of mist-net records obtained over a period of 13 years in 11 sites of 1, 10, and 100 hectares. The numbers of captures per species per unit time, analyzed under different simplifying assumptions, reveal a set of species-loss curves. From those declining numbers, we derive a scaling rule for the time it takes to lose half the species in a fragment as a function of its area. A 10-fold decrease in the rate of species loss requires a 1,000-fold increase in area. Fragments of 100 hectares lose one half of their species in <15 years, too short a time for implementing conservation measures. PMID:14614134
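
    The scaling rule quoted above (a 1,000-fold area increase for a 10-fold slowdown in species loss) corresponds to a one-third power law for the species-loss half-time. A minimal sketch, anchored illustratively at the <15-year figure for 100-hectare fragments:

    ```python
    def half_life_years(area_ha, t50_at_100ha=15.0):
        # t50 ~ area**(1/3): 1000x the area buys 10x the half-time.
        return t50_at_100ha * (area_ha / 100.0) ** (1.0 / 3.0)

    for a in (1, 10, 100, 1000, 10000, 100000):
        print(f"{a:>6} ha -> ~{half_life_years(a):6.1f} yr to lose half the species")
    ```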

  2. A comparison of experimental and calculated thin-shell leading-edge buckling due to thermal stresses

    NASA Technical Reports Server (NTRS)

    Jenkins, Jerald M.

    1988-01-01

    High-temperature thin-shell leading-edge buckling test data are analyzed using NASA structural analysis (NASTRAN) as a finite element tool for predicting thermal buckling characteristics. Buckling points are predicted for several combinations of edge boundary conditions. The problem of relating the appropriate plate area to the edge stress distribution and the stress gradient is addressed in terms of analysis assumptions. Local plasticity was found to occur on the specimen analyzed, and this tended to simplify the basic problem since it effectively equalized the stress gradient from loaded edge to loaded edge. The initial loading was found to be difficult to select for the buckling analysis because of the transient nature of thermal stress. Multiple initial model loadings are likely required for complicated thermal stress time histories before a pertinent finite element buckling analysis can be achieved. The basic mode shapes determined from experimentation were correctly identified from computation.

  3. Random walk study of electron motion in helium in crossed electromagnetic fields

    NASA Technical Reports Server (NTRS)

    Englert, G. W.

    1972-01-01

    Random walk theory, previously adapted to electron motion in the presence of an electric field, is extended to include a transverse magnetic field. In principle, the random walk approach avoids mathematical complexity and concomitant simplifying assumptions and permits determination of energy distributions and transport coefficients within the accuracy of available collisional cross section data. Application is made to a weakly ionized helium gas. Time of relaxation of electron energy distribution, determined by the random walk, is described by simple expressions based on energy exchange between the electron and an effective electric field. The restrictive effect of the magnetic field on electron motion, which increases the required number of collisions per walk to reach a terminal steady state condition, as well as the effect of the magnetic field on electron transport coefficients and mean energy can be quite adequately described by expressions involving only the Hall parameter.

  4. Structural Code Considerations for Solar Rooftop Installations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dwyer, Stephen F.; Dwyer, Brian P.; Sanchez, Alfred

    2014-12-01

    Residential rooftop solar panel installations are limited in part by the high cost of structural related code requirements for field installation. Permitting solar installations is difficult because there is a belief among residential permitting authorities that typical residential rooftops may be structurally inadequate to support the additional load associated with a photovoltaic (PV) solar installation. Typical engineering methods utilized to calculate stresses on a roof structure involve simplifying assumptions that render a complex non-linear structure to a basic determinate beam. This method of analysis neglects the composite action of the entire roof structure, yielding a conservative analysis based on a rafter or top chord of a truss. Consequently, the analysis can result in an overly conservative structural analysis. A literature review was conducted to gain a better understanding of the conservative nature of the regulations and codes governing residential construction and the associated structural system calculations.

  5. A Priori Analysis of Subgrid-Scale Models for Large Eddy Simulations of Supercritical Binary-Species Mixing Layers

    NASA Technical Reports Server (NTRS)

    Okong'o, Nora; Bellan, Josette

    2005-01-01

    Models for large eddy simulation (LES) are assessed on a database obtained from direct numerical simulations (DNS) of supercritical binary-species temporal mixing layers. The analysis is performed at the DNS transitional states for heptane/nitrogen, oxygen/hydrogen and oxygen/helium mixing layers. The incorporation of simplifying assumptions that are validated on the DNS database leads to a set of LES equations that requires only models for the subgrid scale (SGS) fluxes, which arise from filtering the convective terms in the DNS equations. Constant-coefficient versions of three different models for the SGS fluxes are assessed and calibrated. The Smagorinsky SGS-flux model shows poor correlations with the SGS fluxes, while the Gradient and Similarity models have high correlations, as well as good quantitative agreement with the SGS fluxes when the calibrated coefficients are used.

  6. Quadrotor Control in the Presence of Unknown Mass Properties

    NASA Astrophysics Data System (ADS)

    Duivenvoorden, Rikky Ricardo Petrus Rufino

    Quadrotor UAVs are popular due to their mechanical simplicity, as well as their capability to hover and to take off and land vertically. As applications diversify, quadrotors are increasingly required to operate under unknown mass properties, for example as a multirole sensor platform or for package delivery operations. The work presented here consists of the derivation of a generalized quadrotor dynamic model without the typical simplifying assumptions on the first and second moments of mass. The maximum payload capacity of a quadrotor in hover, and the observability of the unknown mass properties are discussed. A brief introduction of L1 adaptive control is provided, and three different L1 adaptive controllers were designed for the Parrot AR.Drone quadrotor. Their tracking and disturbance rejection performance was compared to the baseline nonlinear controller in experiments. Finally, the results of the combination of L1 adaptive control with iterative learning control are presented, showing high-performance trajectory tracking under uncertainty.

  7. Method of Moments Applied to the Analysis of Precision Spectra from the Neutron Time-of- flight Diagnostics at the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Hatarik, Robert; Caggiano, J. A.; Callahan, D.; Casey, D.; Clark, D.; Doeppner, T.; Eckart, M.; Field, J.; Frenje, J.; Gatu Johnson, M.; Grim, G.; Hartouni, E.; Hurricane, O.; Kilkenny, J.; Knauer, J.; Ma, T.; Mannion, O.; Munro, D.; Sayre, D.; Spears, B.

    2015-11-01

    The method of moments was introduced by Pearson as a process for estimating the population distributions from which a set of "random variables" is measured. These moments are compared with a parameterization of the distributions, or of the same quantities generated by simulations of the process. Most diagnostic processes extract scalar parameters depending on the moments of spectra derived from analytic solutions to the fusion rate, necessarily based on simplifying assumptions about the confined plasma. The precision of the TOF spectra and the nature of the implosions at the NIF require the inclusion of factors beyond the traditional analysis, and in particular the addition of higher-order moments to describe the data. This talk will present a diagnostic process for extracting the moments of the neutron energy spectrum for comparison with theoretical considerations as well as simulations of the implosions. Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344.

  8. Generator localization by current source density (CSD): Implications of volume conduction and field closure at intracranial and scalp resolutions

    PubMed Central

    Tenke, Craig E.; Kayser, Jürgen

    2012-01-01

    The topographic ambiguity and reference-dependency that have plagued EEG/ERP research throughout its history are largely attributable to volume conduction, which may be concisely described by a vector form of Ohm’s Law. This biophysical relationship is common to popular algorithms that infer neuronal generators via inverse solutions. It may be further simplified as Poisson’s source equation, which identifies underlying current generators from estimates of the second spatial derivative of the field potential (Laplacian transformation). Intracranial current source density (CSD) studies have dissected the “cortical dipole” into intracortical sources and sinks, corresponding to physiologically-meaningful patterns of neuronal activity at a sublaminar resolution, much of which is locally cancelled (i.e., closed field). By virtue of the macroscopic scale of the scalp-recorded EEG, a surface Laplacian reflects the radial projections of these underlying currents, representing a unique, unambiguous measure of neuronal activity at scalp. Although the surface Laplacian requires minimal assumptions compared to complex, model-sensitive inverses, the resulting waveform topographies faithfully summarize and simplify essential constraints that must be placed on putative generators of a scalp potential topography, even if they arise from deep or partially-closed fields. CSD methods thereby provide a global empirical and biophysical context for generator localization, spanning scales from intracortical to scalp recordings. PMID:22796039
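
    On a regular grid the idea reduces to a discrete Poisson source estimate: CSD is proportional to the negative Laplacian of the potential map. Real EEG practice uses spherical-spline Laplacians over electrode coordinates; the toy grid below only illustrates the sign and neighborhood structure:

    ```python
    import numpy as np

    V = np.zeros((7, 7))
    V[3, 3] = 1.0                       # a focal positive potential "source"

    # Five-point finite-difference Laplacian (unit grid spacing).
    lap = (np.roll(V, 1, 0) + np.roll(V, -1, 0) +
           np.roll(V, 1, 1) + np.roll(V, -1, 1) - 4 * V)
    csd = -lap                          # sign convention: sources positive
    print(csd[2:5, 2:5])                # positive center ringed by sinks
    ```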

  9. A practical iterative PID tuning method for mechanical systems using parameter chart

    NASA Astrophysics Data System (ADS)

    Kang, M.; Cheong, J.; Do, H. M.; Son, Y.; Niculescu, S.-I.

    2017-10-01

    In this paper, we propose a method of iterative proportional-integral-derivative parameter tuning for mechanical systems that possibly possess hidden mechanical resonances, using a parameter chart which visualises the closed-loop characteristics in a 2D parameter space. We employ a hypothetical assumption that the considered mechanical systems have an upper limit on the derivative feedback gain, under which the feasible region in the parameter chart shrinks considerably and the gain selection becomes much simpler. Then, a two-directional parameter search is carried out within the feasible region in order to find the best set of parameters. Experimental results show the validity of the assumption used and the proposed parameter tuning method.
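
    A minimal version of such a parameter chart can be generated by brute force: grid the (Kp, Kd) plane, form the closed-loop characteristic polynomial, and mark where all roots lie in the left half-plane. The flexible plant below (a double integrator with a lightly damped resonance) and the Kd cap are invented stand-ins for the paper's setup:

    ```python
    import numpy as np

    # PD control of 1/(s^2) cascaded with a resonance at w_r: the closed-loop
    # characteristic polynomial is s^2(s^2 + 2 zeta w_r s + w_r^2) + (kd s + kp) w_r^2.
    w_r, zeta, kd_max = 50.0, 0.02, 30.0
    kp_grid = np.linspace(1, 400, 80)
    kd_grid = np.linspace(0.1, kd_max, 60)   # derivative gain capped at kd_max

    stable = np.zeros((len(kd_grid), len(kp_grid)), dtype=bool)
    for i, kd in enumerate(kd_grid):
        for j, kp in enumerate(kp_grid):
            coeffs = [1.0, 2 * zeta * w_r, w_r**2, kd * w_r**2, kp * w_r**2]
            stable[i, j] = np.max(np.roots(coeffs).real) < 0
    print(f"stable fraction of the chart: {stable.mean():.2f}")
    ```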

  10. Extended Analytic Device Optimization Employing Asymptotic Expansion

    NASA Technical Reports Server (NTRS)

    Mackey, Jonathan; Sehirlioglu, Alp; Dynsys, Fred

    2013-01-01

    Analytic optimization of a thermoelectric junction often introduces several simplifying assumptions, including constant material properties, fixed known hot and cold shoe temperatures, and thermally insulated leg sides. In fact all of these simplifications will have an effect on device performance, ranging from negligible to significant depending on conditions. Numerical methods, such as Finite Element Analysis or iterative techniques, are often used to perform more detailed analysis and account for these simplifications. While numerical methods may stand as a suitable solution scheme, they are weak in gaining physical understanding and only serve to optimize through iterative searching techniques. Analytic and asymptotic expansion techniques can be used to solve the governing system of thermoelectric differential equations with fewer or less severe assumptions than the classic case. Analytic methods can provide meaningful closed form solutions and generate better physical understanding of the conditions for when simplifying assumptions may be valid. In obtaining the analytic solutions a set of dimensionless parameters, which characterize all thermoelectric couples, is formulated and provides the limiting cases for validating assumptions. Presentation includes optimization of both classic rectangular couples as well as practically and theoretically interesting cylindrical couples using optimization parameters physically meaningful to a cylindrical couple. Solutions incorporate the physical behavior for i) thermal resistance of hot and cold shoes, ii) variable material properties with temperature, and iii) lateral heat transfer through leg sides.
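
    For reference, the classic constant-property result that these asymptotic solutions generalize expresses the maximum couple efficiency through the dimensionless figure of merit ZT evaluated at the mean temperature. The Z values and shoe temperatures below are illustrative:

    ```python
    import numpy as np

    def eta_max(z, t_hot, t_cold):
        # Constant-property maximum efficiency of a thermoelectric couple:
        # Carnot factor times the ZT-dependent reduction.
        t_mean = 0.5 * (t_hot + t_cold)
        m = np.sqrt(1.0 + z * t_mean)
        return (1.0 - t_cold / t_hot) * (m - 1.0) / (m + t_cold / t_hot)

    for z in (1e-3, 2e-3, 3e-3):      # Z in 1/K, i.e. ZT ~ 0.6 to 1.8 here
        print(f"Z = {z:.0e} 1/K -> max efficiency = {eta_max(z, 800.0, 400.0):.3f}")
    ```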

  11. Consistency tests for the extraction of the Boer-Mulders and Sivers functions

    NASA Astrophysics Data System (ADS)

    Christova, E.; Leader, E.; Stoilov, M.

    2018-03-01

    At present, the Boer-Mulders (BM) function for a given quark flavor is extracted from data on semi-inclusive deep inelastic scattering (SIDIS) using the simplifying assumption that it is proportional to the Sivers function for that flavor. In a recent paper, we suggested that the consistency of this assumption could be tested using information on so-called difference asymmetries, i.e. the difference between the asymmetries in the production of particles and their antiparticles. In this paper, using the SIDIS COMPASS deuteron data on the ⟨cos ϕh⟩, ⟨cos 2 ϕh⟩ and Sivers difference asymmetries, we carry out two independent consistency tests of the assumption of proportionality, but here applied to the sum of the valence-quark contributions. We find that such an assumption is compatible with the data. We also show that the proportionality assumptions made in the existing parametrizations of the BM functions are not compatible with our analysis, which suggests that the published results for the Boer-Mulders functions for individual flavors are unreliable. The ⟨cos ϕh⟩ and ⟨cos 2 ϕh⟩ asymmetries also receive contributions from the, in principle, calculable Cahn effect. We succeed in extracting the Cahn contributions from experiment (we believe for the first time) and compare them with their calculated values, with interesting implications.

  13. Understanding the LIGO GW150914 event

    NASA Astrophysics Data System (ADS)

    Naselsky, Pavel; Jackson, Andrew D.; Liu, Hao

    2016-08-01

    We present a simplified method for the extraction of meaningful signals from Hanford and Livingston 32 second data for the GW150914 event made publicly available by the LIGO collaboration, and demonstrate its ability to reproduce the LIGO collaboration's own results quantitatively given the assumption that all narrow peaks in the power spectrum are a consequence of physically uninteresting signals and can be removed. After the clipping of these peaks and return to the time domain, the GW150914 event is readily distinguished from broadband background noise. This simple technique allows us to identify the GW150914 event without any assumption regarding its physical origin and with minimal assumptions regarding its shape. We also confirm that the LIGO GW150914 event is uniquely correlated in the Hanford and Livingston detectors for the full 4096 second data at the level of 6-7 σ with a temporal displacement of τ = 6.9 ± 0.4 ms. We have also identified a few events that are morphologically close to GW150914 but less strongly cross correlated with it.
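
    A minimal sketch of the peak-clipping idea shared by these two records, assuming a running-median estimate of the spectral background: Fourier amplitudes far above the local background are scaled down to it before returning to the time domain. The threshold, window length, and toy data are invented; the actual LIGO analysis is more careful.

      import numpy as np

      def clip_narrow_peaks(strain, threshold=5.0, win=65):
          # Suppress narrow spectral lines: any Fourier amplitude far above a
          # running-median background estimate is scaled down to that level.
          spec = np.fft.rfft(strain)
          amp = np.abs(spec)
          pad = win // 2
          padded = np.pad(amp, pad, mode="edge")
          background = np.array([np.median(padded[i:i + win])
                                 for i in range(len(amp))])
          mask = amp > threshold * background
          spec[mask] *= background[mask] / amp[mask]   # clip peaks to background
          return np.fft.irfft(spec, n=len(strain))

      # Toy usage: white noise plus a strong 60 Hz line, 4096 Hz sampling
      t = np.arange(4096) / 4096.0
      x = np.random.randn(4096) + 10.0 * np.sin(2 * np.pi * 60 * t)
      cleaned = clip_narrow_peaks(x)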

  14. Impact of an equality constraint on the class-specific residual variances in regression mixtures: A Monte Carlo simulation study

    PubMed Central

    Kim, Minjung; Lamont, Andrea E.; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M. Lee

    2015-01-01

    Regression mixture models are a novel approach for modeling heterogeneous effects of predictors on an outcome. In the model-building process, residual variances are often disregarded and simplifying assumptions are made without thorough examination of the consequences. This simulation study investigated the impact of an equality constraint on the residual variances across latent classes. We examine the consequences of constraining the residual variances on class enumeration (finding the true number of latent classes) and on parameter estimates, under a number of different simulation conditions meant to reflect the type of heterogeneity likely to exist in applied analyses. Results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted estimated class sizes and showed the potential to greatly impact parameter estimates in each class. Results suggest that it is important to make assumptions about residual variances with care and to report clearly what assumptions were made. PMID:26139512

  15. Vertically-integrated Approaches for Carbon Sequestration Modeling

    NASA Astrophysics Data System (ADS)

    Bandilla, K.; Celia, M. A.; Guo, B.

    2015-12-01

    Carbon capture and sequestration (CCS) is being considered as an approach to mitigate anthropogenic CO2 emissions from large stationary sources such as coal-fired power plants and natural gas processing plants. Computer modeling is an essential tool for site design and operational planning as it allows prediction of the pressure response as well as the migration of both CO2 and brine in the subsurface. Many processes, such as buoyancy, hysteresis, geomechanics and geochemistry, can have important impacts on the system. While all of the processes can be taken into account simultaneously, the resulting models are computationally very expensive and require large numbers of parameters which are often uncertain or unknown. In many cases of practical interest, the computational and data requirements can be reduced by choosing a smaller domain and/or by neglecting or simplifying certain processes. This leads to a series of models with different complexity, ranging from coupled multi-physics, multi-phase three-dimensional models to semi-analytical single-phase models. Under certain conditions the three-dimensional equations can be integrated in the vertical direction, leading to a suite of two-dimensional multi-phase models, termed vertically-integrated models. These models are either solved numerically or simplified further (e.g., assumption of vertical equilibrium) to allow analytical or semi-analytical solutions. This presentation focuses on how different vertically-integrated models have been applied to the simulation of CO2 and brine migration during CCS projects. Several example sites, such as the Illinois Basin and the Wabamun Lake region of the Alberta Basin, are discussed to show how vertically-integrated models can be used to gain understanding of CCS operations.
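
    The vertical-integration step can be made concrete. In notation that is ours rather than the authors', the 3D mass balance of fluid phase α is integrated over the formation thickness H = z_T − z_B, and the vertical-equilibrium variants close the result with a hydrostatic pressure profile:

      % Notation illustrative: \phi porosity, \bar{S}_\alpha depth-averaged
      % saturation, \mathbf{u}_\alpha Darcy flux, Q_\alpha wells/sources.
      \begin{align}
        \phi\,\frac{\partial}{\partial t}\!\left(\bar{S}_\alpha H\right)
          + \nabla_{\!\parallel}\cdot\!\int_{z_B}^{z_T}\mathbf{u}_\alpha\,\mathrm{d}z
          &= Q_\alpha,\\
        p_\alpha(x,y,z) &= p_\alpha(x,y,z_B) - \rho_\alpha\,g\,(z-z_B).
      \end{align}

    The first equation is the two-dimensional vertically-integrated balance; the second is the vertical-equilibrium closure that makes analytical and semi-analytical treatment possible.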

  16. How to Decide on Modeling Details: Risk and Benefit Assessment.

    PubMed

    Özilgen, Mustafa

    Mathematical models based on thermodynamic, kinetic, heat, and mass transfer analysis are central to this chapter. Also needed are models of microbial growth, death, and enzyme inactivation, and models of material properties: those pertinent to conduction and convection heating, mass transfer properties such as diffusivity and convective mass transfer coefficients, and thermodynamic properties such as specific heat, enthalpy, Gibbs free energy of formation, and specific chemical exergy. The origins, simplifying assumptions, and uses of model equations are discussed in this chapter, together with their benefits. The simplified forms of these models are sometimes referred to as "laws," such as "the first law of thermodynamics" or "Fick's second law." Starting a modeling study with such "laws" without considering the conditions under which they are valid runs the risk of ending up with erroneous conclusions. On the other hand, models that start from fundamental concepts and are simplified with appropriate considerations may offer explanations for phenomena that could not be obtained from measurements or unprocessed experimental data alone. The discussion presented here is strengthened with case studies and references to the literature.

  17. A cumulative energy demand indicator (CED), life cycle based, for industrial waste management decision making.

    PubMed

    Puig, Rita; Fullana-I-Palmer, Pere; Baquero, Grau; Riba, Jordi-Roger; Bala, Alba

    2013-12-01

    Life cycle thinking is a good approach for environmental decision support, although the complexity of Life Cycle Assessment (LCA) studies sometimes prevents their wide use. The purpose of this paper is to show how LCA methodology can be simplified to be more useful for certain applications. In order to improve waste management in Catalonia (Spain), a Cumulative Energy Demand indicator (LCA-based) has been used to derive four mathematical models that help the government decide whether to prevent or allow a specific waste from leaving the region. The conceptual equations and all the subsequent developments and assumptions made to obtain the simplified models are presented. One of the four models is discussed in detail, including the final simplified equation to be used by the government in decision making. The resulting model has been found to be scientifically robust, simple to implement and, above all, fit for its purpose: limiting waste transport out of Catalonia unless the waste recovery operations are significantly better and justify this transport. Copyright © 2013. Published by Elsevier Ltd.

  18. Test of a simplified modeling approach for nitrogen transfer in agricultural subsurface-drained catchments

    NASA Astrophysics Data System (ADS)

    Henine, Hocine; Julien, Tournebize; Jaan, Pärn; Ülo, Mander

    2017-04-01

    In agricultural areas, nitrogen (N) pollution load to surface waters depends on land use, agricultural practices, harvested N output, and the hydrology and climate of the catchment. Most N transfer models require large, complex data sets, which are generally difficult to collect at larger scales (>km2). The main objective of this study is to carry out hydrological and geochemical modelling using a simplified data set (land use/crop, fertilizer input, N losses from plots). The modelling approach was tested in the subsurface-drained Orgeval catchment (Paris Basin, France) under the following assumptions: subsurface tile drains are considered a giant lysimeter system, and N concentration in drain outlets is representative of agricultural practices upstream. Analysis of the observed N load (90% of total N) shows 62% of export occurring during the winter. We considered the prewinter nitrate (NO3) pool (PWNP) in soils at the beginning of the hydrological drainage season as a driving factor for N losses. PWNP results from the part of NO3 not used by crops or from the mineralization of organic matter during the preceding summer and autumn. Under these assumptions, we used PWNP as simplified input data for the modelling of N transport. Thus, NO3 losses are mainly influenced by the denitrification capacity of soils and stream water. The well-known HYPE model was used to simulate water and N losses. The hydrological simulation was calibrated against observation data at different sub-catchments. We performed a hydrograph separation validated against thermal and isotopic tracer studies and general knowledge of the behaviour of the Orgeval catchment. Our results show a good correlation between the model and the observations (a Nash-Sutcliffe coefficient of 0.75 for water discharge and 0.7 for N flux). Likewise, comparison of calibrated PWNP values with results from a field survey (annual PWNP campaign) showed a significant positive correlation. One can conclude that the simplified modelling approach using PWNP as a driving factor for evaluating N losses from drained agricultural catchments gives satisfactory results, and we propose this approach for wider use.

  19. Approximations of Two-Attribute Utility Functions

    DTIC Science & Technology

    1976-09-01

    preferred to") be a bina-zy relation on the set • of simple probability measures or ’gambles’ defined on a set T of consequences. Throughout this study it...simplifying independence assumptions. Although there are several approaches to this problem, the21 present study will focus on approximations of u... study will elicit additional interest in the topic. 2. REMARKS ON APPROXIMATION THEORY This section outlines a few basic ideas of approximation theory

  20. Break-up of Gondwana and opening of the South Atlantic: Review of existing plate tectonic models

    USGS Publications Warehouse

    Ghidella, M.E.; Lawver, L.A.; Gahagan, L.M.

    2007-01-01

    each model. We also plot reconstructions at four selected epochs for all models using the same projection and scale to facilitate comparison. The diverse simplifying assumptions that need to be made in every case regarding plate fragmentation to account for the numerous syn-rift basins and periods of stretching are strong indicators that rigid plate tectonics is too simple a model for the present problem.

  1. Prediction of the turbulent wake with second-order closure

    NASA Technical Reports Server (NTRS)

    Taulbee, D. B.; Lumley, J. L.

    1981-01-01

    A turbulence was envisioned whose energy-containing scales would be Gaussian in the absence of inhomogeneity, gravity, etc. An equation was constructed for a function equivalent to the probability density, the second moment of which corresponded to the accepted modeled form of the Reynolds stress equation. The third-moment equations obtained from this were simplified by the assumption of weak inhomogeneity. Calculations are presented with this model, as well as interpretations of the results.

  2. Experimental validation of finite element modelling of a modular metal-on-polyethylene total hip replacement.

    PubMed

    Hua, Xijin; Wang, Ling; Al-Hajjar, Mazen; Jin, Zhongmin; Wilcox, Ruth K; Fisher, John

    2014-07-01

    Finite element models are becoming increasingly useful tools to conduct parametric analysis, design optimisation and pre-clinical testing for hip joint replacements. However, the verification of the finite element model is critically important. The purposes of this study were to develop a three-dimensional anatomic finite element model for a modular metal-on-polyethylene total hip replacement for predicting its contact mechanics and to conduct experimental validation for a simple finite element model which was simplified from the anatomic finite element model. An anatomic modular metal-on-polyethylene total hip replacement model (anatomic model) was first developed and then simplified with reasonable accuracy to a simple modular total hip replacement model (simplified model) for validation. The contact areas on the articulating surface of three polyethylene liners of modular metal-on-polyethylene total hip replacement bearings with different clearances were measured experimentally in the Leeds ProSim hip joint simulator under a series of loading conditions and different cup inclination angles. The contact areas predicted from the simplified model were then compared with that measured experimentally under the same conditions. The results showed that the simplification made for the anatomic model did not change the predictions of contact mechanics of the modular metal-on-polyethylene total hip replacement substantially (less than 12% for contact stresses and contact areas). Good agreements of contact areas between the finite element predictions from the simplified model and experimental measurements were obtained, with maximum difference of 14% across all conditions considered. This indicated that the simplification and assumptions made in the anatomic model were reasonable and the finite element predictions from the simplified model were valid. © IMechE 2014.

  3. Multi-Mode 3D Kirchhoff Migration of Receiver Functions at Continental Scale With Applications to USArray

    NASA Astrophysics Data System (ADS)

    Millet, F.; Bodin, T.; Rondenay, S.

    2017-12-01

    The teleseismic scattered seismic wavefield contains valuable information about heterogeneities and discontinuities inside the Earth. By using fast Receiver Function (RF) migration techniques such as classic Common Conversion Point (CCP) stacks, one can easily interpret structural features down to a few hundred kilometers in the mantle. However, strong simplifying 1D assumptions limit the scope of these methods to structures that are relatively planar and sub-horizontal at local-to-regional scales, such as the Lithosphere-Asthenosphere Boundary and the Mantle Transition Zone discontinuities. Other more robust 2D and 2.5D methods rely on fewer assumptions but require considerable, sometimes prohibitive, computation time. Following the ideas of Cheng (2017), we have implemented a simple fully 3D Prestack Kirchhoff RF migration scheme that uses the FM3D fast Eikonal solver to compute travel times and scattering angles. The method accounts for 3D elastic point scattering and includes free surface multiples, resulting in enhanced images of laterally varying dipping structures, such as subducted slabs. The method is tested for subduction structures using 2.5D synthetics generated with Raysum and 3D synthetics generated with specfem3D. Results show that dip angles, depths and lateral variations can be recovered almost perfectly. The approach is ideally suited for applications to dense regional datasets, including those collected across the Cascadia and Alaska subduction zones by USArray.

  4. Simplified Estimation and Testing in Unbalanced Repeated Measures Designs.

    PubMed

    Spiess, Martin; Jordan, Pascal; Wendt, Mike

    2018-05-07

    In this paper we propose a simple estimator for unbalanced repeated measures design models where each unit is observed at least once in each cell of the experimental design. The estimator does not require a model of the error covariance structure. Thus, circularity of the error covariance matrix and estimation of correlation parameters and variances are not necessary. Together with a weak assumption about the reason for the varying number of observations, the proposed estimator and its variance estimator are unbiased. As an alternative to confidence intervals based on the normality assumption, a bias-corrected and accelerated bootstrap technique is considered. We also propose the naive percentile bootstrap for Wald-type tests where the standard Wald test may break down when the number of observations is small relative to the number of parameters to be estimated. In a simulation study we illustrate the properties of the estimator and the bootstrap techniques to calculate confidence intervals and conduct hypothesis tests in small and large samples under normality and non-normality of the errors. The results imply that the simple estimator is only slightly less efficient than an estimator that correctly assumes a block structure of the error correlation matrix, a special case of which is an equi-correlation matrix. Application of the estimator and the bootstrap technique is illustrated using data from a task switch experiment based on a within-subjects experimental design with 32 cells and 33 participants.
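
    A generic sketch of the naive percentile bootstrap mentioned in the abstract (the paper's own estimator and the bias-corrected and accelerated variant are more involved); the statistic, sample, and seed below are hypothetical.

      import numpy as np

      def percentile_bootstrap_ci(data, stat, n_boot=5000, alpha=0.05, seed=None):
          # Resample with replacement, recompute the statistic, and take the
          # empirical alpha/2 and 1 - alpha/2 percentiles as the interval.
          rng = np.random.default_rng(seed)
          n = len(data)
          boots = np.array([stat(data[rng.integers(0, n, n)])
                            for _ in range(n_boot)])
          lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
          return lo, hi

      # Usage: CI for a mean condition difference from 33 participants (toy data)
      d = np.random.default_rng(1).normal(0.3, 1.0, size=33)
      print(percentile_bootstrap_ci(d, np.mean, seed=2))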

  5. A novel implementation of homodyne time interval analysis method for primary vibration calibration

    NASA Astrophysics Data System (ADS)

    Sun, Qiao; Zhou, Ling; Cai, Chenguang; Hu, Hongbo

    2011-12-01

    In this paper, the shortcomings of the conventional homodyne time interval analysis (TIA) method, and their causes, are described with respect to its software algorithm and hardware implementation, and on this basis a simplified TIA method is proposed with the help of virtual instrument technology. Equipped with an ordinary Michelson interferometer and a dual-channel synchronous data acquisition card, a primary vibration calibration system using the simplified method can perform accurate measurements of the complex sensitivity of accelerometers, meeting the uncertainty requirements laid down in the pertinent ISO standard. The validity and accuracy of the simplified TIA method are verified by simulation and comparison experiments, and its performance is analyzed. This simplified method is recommended for national metrology institutes of developing countries and industrial primary vibration calibration labs for its simplified algorithm and low hardware requirements.

  6. Design, dynamics and control of an Adaptive Singularity-Free Control Moment Gyroscope actuator for microspacecraft Attitude Determination and Control System

    NASA Astrophysics Data System (ADS)

    Viswanathan, Sasi Prabhakaran

    Design, dynamics, control and implementation of a novel spacecraft attitude control actuator called the "Adaptive Singularity-free Control Moment Gyroscope" (ASCMG) is presented in this dissertation. In order to construct a comprehensive attitude dynamics model of a spacecraft with internal actuators, the dynamics of a spacecraft with an ASCMG is obtained in the framework of geometric mechanics using the principles of variational mechanics. The resulting dynamics is a general and complete model, as it relaxes the simplifying assumptions made in prior literature on Control Moment Gyroscopes (CMGs), and it also addresses the adaptive parameters in the dynamics formulation. The simplifying assumptions include perfect axisymmetry of the rotor and gimbal structures, perfect alignment of the centers of mass of the gimbal and the rotor, etc. This set of simplifying assumptions imposed on the design and dynamics of CMGs leads to adverse effects on their performance and results in high manufacturing cost. The dynamics so obtained shows the complex nonlinear coupling between the internal degrees of freedom associated with an ASCMG and the spacecraft bus's attitude motion. By default, the general ASCMG cluster can function as a Variable Speed Control Moment Gyroscope, and can be reduced to function in CMG mode by spinning the rotor at constant speed; it is shown that even when operated in CMG mode, the cluster can be free from kinematic singularities. This dynamics model is then extended to include the effects of multiple ASCMGs placed in the spacecraft bus, and sufficient conditions for non-singular ASCMG cluster configurations are obtained for operating the cluster in both VSCMG and CMG modes. The general dynamics model of the ASCMG is then reduced to that of conventional VSCMGs and CMGs by imposing the standard set of simplifying assumptions used in prior literature. The adverse effects of the simplifying assumptions that lead to the complexities in conventional CMG design, and how they lead to CMG singularities, are described. General ideas on control of the angular momentum of the spacecraft using changes in the momentum variables of a finite number of ASCMGs are provided. Control schemes for agile and precise attitude maneuvers using an ASCMG cluster in the absence of external torques, when the total angular momentum of the spacecraft is zero, are presented for both constant-speed and variable-speed modes. A Geometric Variational Integrator (GVI) that preserves the geometry of the state space and the conserved norm of the total angular momentum is constructed for numerical simulation and microcontroller implementation of the control scheme. The GVI is obtained by discretizing the Lagrangian of the multibody system, in which the rigid body attitude is globally represented on the Lie group of rigid body rotations. Hardware and software architecture of a novel spacecraft Attitude Determination and Control System (ADCS) based on commercial smartphones, and a bare-minimum hardware prototype of an ASCMG using low-cost COTS components, are also described. A lightweight, dynamics-model-free Variational Attitude Estimator (VAE) suitable for smartphone implementation is employed for attitude determination, and attitude control is performed by ASCMG actuators. The VAE scheme presented here is implemented and validated onboard an Unmanned Aerial Vehicle (UAV) platform and its real-time performance is analyzed.
On-board sensing, data acquisition, data uplink/downlink, state estimation and real-time feedback control objectives can be performed using this novel spacecraft ADCS. The mechatronics realization of the attitude determination through variational attitude estimation scheme and control implementation using ASCMG actuators are presented here. Experimental results of the attitude estimation (filtering) scheme using smartphone sensors as an Inertial Measurement Unit (IMU) on the Hardware In the Loop (HIL) simulator testbed are given. These results, obtained in the Spacecraft Guidance, Navigation and Control Laboratory at New Mexico State University, demonstrate the performance of this estimation scheme with the noisy raw data from the smartphone sensors. Keywords: Spacecraft, momentum exchange devices, control moment gyroscope, variational mechanics, geometric mechanics, variational integrators, attitude determination, attitude control, ADCS, estimation, ASCMG, VSCMG, cubesat, mechatronics, smartphone, Android, MEMS sensor, embedded programming, microcontroller, brushless DC drives, HIL simulation.

  7. Microphysical response of cloud droplets in a fluctuating updraft. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Harding, D. D.

    1977-01-01

    The effect of a fluctuating updraft upon a distribution of cloud droplets is examined. Computations are performed for fourteen vertical velocity patterns; each allows a closed parcel of cloud air to undergo downward as well as upward motion. Droplet solution and curvature effects are included. The classical equations for the growth rate of an individual droplet by vapor condensation rely on simplifying assumptions. Those assumptions are isolated and examined. A unique approach is presented in which all energy sources and sinks of a droplet may be considered, termed the explicit model. It is speculated that the explicit model may enhance the growth of large droplets at greater heights. Such a model is beneficial to the studies of pollution scavenging and acid rain.
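
    For reference, the classical growth-rate relation that those simplifying assumptions yield is the standard diffusion/conduction-limited form found in cloud-physics texts (the notation below is the usual one, not necessarily the thesis's):

      % S saturation ratio; F_K heat-conduction term; F_D vapor-diffusion term;
      % L latent heat, K thermal conductivity of air, D vapor diffusivity,
      % R_v gas constant of vapor, \rho_l liquid density, e_s(T) saturation
      % vapor pressure.
      \begin{equation}
        r\,\frac{\mathrm{d}r}{\mathrm{d}t} = \frac{S-1}{F_K + F_D},
        \qquad
        F_K = \left(\frac{L}{R_v T}-1\right)\frac{L\,\rho_l}{K\,T},
        \qquad
        F_D = \frac{\rho_l\,R_v\,T}{D\,e_s(T)} .
      \end{equation}

    The explicit model of the thesis replaces the fixed energy balance behind F_K with an accounting of all energy sources and sinks of the droplet.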

  8. Fitness extraction and the conceptual foundations of political biology.

    PubMed

    Boari, Mircea

    2005-01-01

    In well known formulations, political science, classical and neoclassical economics, and political economy have recognized as foundational a human impulse toward self-preservation. To employ this concept, modern social-sciences theorists have made simplifying assumptions about human nature and have then built elaborately upon their more incisive simplifications. Advances in biology, including advances in evolutionary theory, notably inclusive-fitness theory, have for decades now encouraged the reconsideration of such assumptions and, more ambitiously, the reconciliation of the social and life sciences. I ask if this reconciliation is feasible and test a path to the unification of politics and biology, called here "political biology." Two new notions, "fitness extraction" and "fitness exchange," are defined, then differentiated from each other, and lastly contrasted to cooperative gaming, the putative essential element of economics.

  9. Rearchitecting IT: Simplify. Simplify

    ERIC Educational Resources Information Center

    Panettieri, Joseph C.

    2006-01-01

    Simplifying and securing an IT infrastructure is not easy. It frequently requires rethinking years of hardware and software investments, and a gradual migration to modern systems. Even so, writes the author, universities can take six practical steps to success: (1) Audit software infrastructure; (2) Evaluate current applications; (3) Centralize…

  10. HZETRN: A heavy ion/nucleon transport code for space radiations

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Chun, Sang Y.; Badavi, Forooz F.; Townsend, Lawrence W.; Lamkin, Stanley L.

    1991-01-01

    The galactic heavy ion transport code (GCRTRN) and the nucleon transport code (BRYNTRN) are integrated into a code package (HZETRN). The code package is computationally efficient and capable of operating in an engineering design environment for manned deep-space mission studies. The nuclear data set used by the code is discussed, including current limitations. Although the heavy ion nuclear cross sections are assumed constant, the nucleon-nuclear cross sections of BRYNTRN, with full energy dependence, are used. The relation of the final code to the Boltzmann equation is discussed in the context of simplifying assumptions. Error generation and propagation are discussed, and comparison is made with simplified analytic solutions to test the numerical accuracy of the final results. A brief discussion of biological issues and their impact on fundamental developments in shielding technology is given.

  11. Characterizing dark matter at the LHC in Drell-Yan events

    NASA Astrophysics Data System (ADS)

    Capdevilla, Rodolfo M.; Delgado, Antonio; Martin, Adam; Raj, Nirmal

    2018-02-01

    Spectral features in LHC dileptonic events may signal radiative corrections coming from new degrees of freedom, notably dark matter and mediators. Using simplified models, and under a set of simplifying assumptions, we show how these features can reveal the fundamental properties of the dark sector, such as self-conjugation, spin and mass of dark matter, and the quantum numbers of the mediator. Distributions of both the invariant mass mℓℓ and the Collins-Soper scattering angle cos θCS are studied to pinpoint these properties. We derive constraints on the models from LHC measurements of mℓℓ and cos θCS, which are competitive with direct detection and jets+MET searches. We find that in certain scenarios the cos θCS spectrum provides the strongest bounds, underlining the importance of scattering angle measurements for nonresonant new physics.

  12. 48 CFR 13.003 - Policy.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 1... CONTRACT TYPES, SIMPLIFIED ACQUISITION PROCEDURES, 13.003 Policy. (a) Agencies shall use simplified...). This policy does not apply if an agency can meet its requirement using— (1) Required sources of supply...

  13. A model for the rapid assessment of the impact of aviation noise near airports.

    PubMed

    Torija, Antonio J; Self, Rod H; Flindell, Ian H

    2017-02-01

    This paper introduces a simplified model [Rapid Aviation Noise Evaluator (RANE)] for the calculation of aviation noise within the context of multi-disciplinary strategic environmental assessment where input data are both limited and constrained by compatibility requirements against other disciplines. RANE relies upon the concept of noise cylinders around defined flight-tracks with the Noise Radius determined from publicly available Noise-Power-Distance curves rather than the computationally intensive multiple point-to-point grid calculation with subsequent ISO-contour interpolation methods adopted in the FAA's Integrated Noise Model (INM) and similar models. Preliminary results indicate that for simple single runway scenarios, changes in airport noise contour areas can be estimated with minimal uncertainty compared against grid-point calculation methods such as INM. In situations where such outputs are all that is required for preliminary strategic environmental assessment, there are considerable benefits in reduced input data and computation requirements. Further development of the noise-cylinder-based model (such as the incorporation of lateral attenuation, engine-installation-effects or horizontal track dispersion via the assumption of more complex noise surfaces formed around the flight-track) will allow for more complex assessment to be carried out. RANE is intended to be incorporated into technology evaluators for the noise impact assessment of novel aircraft concepts.

  14. Understanding young stars - A history

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stahler, S.W.

    1988-12-01

    The history of pre-main-sequence theory is briefly reviewed. The paper of Henyey et al. (1955) is seen as an important transitional work, one which abandoned previous simplifying assumptions yet failed to incorporate newer insights into the surface structure of late-type stars. The subsequent work of Hayashi and his contemporaries is outlined, with an emphasis on the underlying physical principles. Finally, the recent impact of protostar theory is discussed, and speculations are offered on future developments. 56 references.

  15. Investigating outliers to improve conceptual models of bedrock aquifers

    NASA Astrophysics Data System (ADS)

    Worthington, Stephen R. H.

    2018-06-01

    Numerical models play a prominent role in hydrogeology, with simplifying assumptions being inevitable when implementing these models. However, there is a risk of oversimplification, where important processes become neglected. Such processes may be associated with outliers, and consideration of outliers can lead to an improved scientific understanding of bedrock aquifers. Using rigorous logic to investigate outliers can help to explain fundamental scientific questions such as why there are large variations in permeability between different bedrock lithologies.

  16. On numerical modeling of one-dimensional geothermal histories

    USGS Publications Warehouse

    Haugerud, R.A.

    1989-01-01

    Numerical models of one-dimensional geothermal histories are one way of understanding the relations between tectonics and transient thermal structure in the crust. Such models can be powerful tools for interpreting geochronologic and thermobarometric data. A flexible program to calculate these models on a microcomputer is available and examples of its use are presented. Potential problems with this approach include the simplifying assumptions that are made, limitations of the numerical techniques, and the neglect of convective heat transfer. © 1989.

  17. The Boltzmann equation in the difference formulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szoke, Abraham; Brooks III, Eugene D.

    2015-05-06

    First, we recall the assumptions that are needed for the validity of the Boltzmann equation and for the validity of the compressible Euler equations. We then present the difference formulation of these equations and make a connection with the time-honored Chapman-Enskog expansion. We discuss the hydrodynamic limit and calculate the thermal conductivity of a monatomic gas, using a simplified approximation for the collision term. Our formulation is more consistent and simpler than the traditional derivation.

  18. Comparison of an Agent-based Model of Disease Propagation with the Generalised SIR Epidemic Model

    DTIC Science & Technology

    2009-08-01

    has become a practical method for conducting Epidemiological Modelling. In the agent-based approach the whole township can be modelled as a system of... SIR system was initially developed based on a very simplified model of social interaction. For instance an assumption of uniform population mixing was... simulating the progress of a disease within a host and of transmission between hosts is based upon Transportation Analysis and Simulation System

  19. Gas Diffusion in Fluids Containing Bubbles

    NASA Technical Reports Server (NTRS)

    Zak, M.; Weinberg, M. C.

    1982-01-01

    Mathematical model describes movement of gases in fluid containing many bubbles. Model makes it possible to predict growth and shrinkage of bubbles as function of time. New model overcomes complexities involved in analysis of varying conditions by making two simplifying assumptions. It treats bubbles as point sources, and it employs approximate expression for gas concentration gradient at liquid/bubble interface. In particular, it is expected to help in developing processes for production of high-quality optical glasses in space.

  20. Edemagenic gain and interstitial fluid volume regulation.

    PubMed

    Dongaonkar, R M; Quick, C M; Stewart, R H; Drake, R E; Cox, C S; Laine, G A

    2008-02-01

    Under physiological conditions, interstitial fluid volume is tightly regulated by balancing microvascular filtration and lymphatic return to the central venous circulation. Even though microvascular filtration and lymphatic return are governed by conservation of mass, their interaction can result in exceedingly complex behavior. Without making simplifying assumptions, investigators must solve the fluid balance equations numerically, which limits the generality of the results. We thus made critical simplifying assumptions to develop a simple solution to the standard fluid balance equations that is expressed as an algebraic formula. Using a classical approach to describe systems with negative feedback, we formulated our solution as a "gain" relating the change in interstitial fluid volume to a change in effective microvascular driving pressure. The resulting "edemagenic gain" is a function of microvascular filtration coefficient (K(f)), effective lymphatic resistance (R(L)), and interstitial compliance (C). This formulation suggests two types of gain: "multivariate" dependent on C, R(L), and K(f), and "compliance-dominated" approximately equal to C. The latter forms a basis of a novel method to estimate C without measuring interstitial fluid pressure. Data from ovine experiments illustrate how edemagenic gain is altered with pulmonary edema induced by venous hypertension, histamine, and endotoxin. Reformulation of the classical equations governing fluid balance in terms of edemagenic gain thus yields new insight into the factors affecting an organ's susceptibility to edema.
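
    One algebraic form consistent with the two limiting behaviors reported above — a multivariate gain that approaches the compliance C when lymphatic return is strongly limiting — is the saturating expression below; this is our reading of the abstract, not a formula quoted from the paper.

      % G edemagenic gain, \Delta V_i change in interstitial volume, \Delta P
      % change in effective microvascular driving pressure.
      \begin{equation}
        G \;=\; \frac{\Delta V_i}{\Delta P}
          \;=\; \frac{C\,R_L K_f}{1 + R_L K_f}
        \;\xrightarrow{\;R_L K_f \,\gg\, 1\;}\; C .
      \end{equation}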

  1. Determination of mechanical loading components of the equine metacarpus from measurements of strain during walking.

    PubMed

    Merritt, J S; Burvill, C R; Pandy, M G; Davies, H M S

    2006-08-01

    The mechanical environment of the distal limb is thought to be involved in the pathogenesis of many injuries, but has not yet been thoroughly described. The objectives were to determine the forces and moments experienced by the metacarpus in vivo during walking, and to assess the effect of some simplifying assumptions used in the analysis. Strains from 8 gauges adhered to the left metacarpus of one horse were recorded in vivo during walking. Two different models - one based upon the mechanical theory of beams and shafts, the other upon a finite element analysis (FEA) - were used to determine the external loads applied at the ends of the bone. Five orthogonal force and moment components were resolved by the analysis. In addition, 2 orthogonal bending moments were calculated near mid-shaft. Axial force was found to be the major loading component and displayed a bi-modal pattern during the stance phase of the stride. The shaft model of the bone showed good agreement with the FEA model, despite making many simplifying assumptions. A 3-dimensional loading scenario was observed in the metacarpus, with axial force being the major component. These results provide an opportunity to validate mathematical (computer) models of the limb. The data may also assist in the formulation of hypotheses regarding the pathogenesis of injuries to the distal limb.

  2. Launch Collision Probability

    NASA Technical Reports Server (NTRS)

    Bollenbacher, Gary; Guptill, James D.

    1999-01-01

    This report analyzes the probability of a launch vehicle colliding with one of the nearly 10,000 tracked objects orbiting the Earth, given that an object on a near-collision course with the launch vehicle has been identified. Knowledge of the probability of collision throughout the launch window can be used to avoid launching at times when the probability of collision is unacceptably high. The analysis in this report assumes that the positions of the orbiting objects and the launch vehicle can be predicted as a function of time and therefore that any tracked object which comes close to the launch vehicle can be identified. The analysis further assumes that the position uncertainty of the launch vehicle and the approaching space object can be described with position covariance matrices. With these and some additional simplifying assumptions, a closed-form solution is developed using two approaches. The solution shows that the probability of collision is a function of position uncertainties, the size of the two potentially colliding objects, and the nominal separation distance at the point of closest approach. The impact of the simplifying assumptions on the accuracy of the final result is assessed and the application of the results to the Cassini mission, launched in October 1997, is described. Other factors that affect the probability of collision are also discussed. Finally, the report offers alternative approaches that can be used to evaluate the probability of collision.
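
    The geometry of the closed-form setup invites a short numerical check: with the relative position error described by a combined 2D covariance in the encounter plane, the collision probability is the Gaussian probability mass inside the combined hard-body circle. The sketch below evaluates that integral on a grid; all numbers are illustrative, not taken from the report.

      import numpy as np

      def collision_probability(miss, cov, radius, n=401):
          # Integrate a 2D Gaussian (mean = nominal miss vector, covariance =
          # summed position covariances) over the circle whose radius is the
          # sum of the two body radii.
          xs = np.linspace(-radius, radius, n)
          dx = xs[1] - xs[0]
          X, Y = np.meshgrid(xs, xs)
          inside = X**2 + Y**2 <= radius**2
          inv = np.linalg.inv(cov)
          d = np.stack([X - miss[0], Y - miss[1]], axis=-1)
          quad = np.einsum("...i,ij,...j->...", d, inv, d)
          pdf = np.exp(-0.5 * quad) / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
          return float((pdf * inside).sum() * dx * dx)

      # Usage: 10 m combined radius, 200 m nominal miss, anisotropic errors
      print(collision_probability([200.0, 0.0],
                                  [[120.0**2, 0.0], [0.0, 80.0**2]], 10.0))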

  3. Evaluation of rate law approximations in bottom-up kinetic models of metabolism.

    PubMed

    Du, Bin; Zielinski, Daniel C; Kavvas, Erol S; Dräger, Andreas; Tan, Justin; Zhang, Zhen; Ruggiero, Kayla E; Arzumanyan, Garri A; Palsson, Bernhard O

    2016-06-06

    The mechanistic description of enzyme kinetics in a dynamic model of metabolism requires specifying the numerical values of a large number of kinetic parameters. The parameterization challenge is often addressed through the use of simplifying approximations to form reaction rate laws with reduced numbers of parameters. Whether such simplified models can reproduce dynamic characteristics of the full system is an important question. In this work, we compared the local transient response properties of dynamic models constructed using rate laws with varying levels of approximation. These approximate rate laws were: 1) a Michaelis-Menten rate law with measured enzyme parameters, 2) a Michaelis-Menten rate law with approximated parameters, using the convenience kinetics convention, 3) a thermodynamic rate law resulting from a metabolite saturation assumption, and 4) a pure chemical reaction mass action rate law that removes the role of the enzyme from the reaction kinetics. We utilized in vivo data for the human red blood cell to compare the effect of rate law choices against the backdrop of physiological flux and concentration differences. We found that the Michaelis-Menten rate law with measured enzyme parameters yields an excellent approximation of the full system dynamics, while other assumptions cause greater discrepancies in system dynamic behavior. However, iteratively replacing mechanistic rate laws with approximations resulted in a model that retains a high correlation with the true model behavior. Investigating this consistency, we determined that the order of magnitude differences among fluxes and concentrations in the network were greatly influential on the network dynamics. We further identified reaction features such as thermodynamic reversibility, high substrate concentration, and lack of allosteric regulation, which make certain reactions more suitable for rate law approximations. Overall, our work generally supports the use of approximate rate laws when building large scale kinetic models, due to the key role that physiologically meaningful flux and concentration ranges play in determining network dynamics. However, we also showed that detailed mechanistic models show a clear benefit in prediction accuracy when data is available. The work here should help to provide guidance to future kinetic modeling efforts on the choice of rate law and parameterization approaches.
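
    To make the compared approximation levels concrete, the toy sketch below writes three of the four rate-law forms for a single irreversible reaction S → P; the parameter values are invented and the convenience-kinetics form is omitted for brevity.

      import numpy as np

      def michaelis_menten(s, vmax=1.0, km=0.5):
          # Mechanistic enzyme rate law with measured parameters
          return vmax * s / (km + s)

      def saturation_law(s, vmax=1.0):
          # Metabolite-saturation assumption: rate insensitive to substrate
          return vmax * np.ones_like(s)

      def mass_action(s, k=2.0):
          # Pure chemical mass action: the enzyme is removed from the kinetics
          return k * s

      s = np.linspace(0.0, 5.0, 6)
      for law in (michaelis_menten, saturation_law, mass_action):
          print(law.__name__, np.round(law(s), 3))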

  4. Model Checking a Byzantine-Fault-Tolerant Self-Stabilizing Protocol for Distributed Clock Synchronization Systems

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2007-01-01

    This report presents the mechanical verification of a simplified model of a rapid Byzantine-fault-tolerant self-stabilizing protocol for distributed clock synchronization systems. This protocol does not rely on any assumptions about the initial state of the system. This protocol tolerates bursts of transient failures, and deterministically converges within a time bound that is a linear function of the self-stabilization period. A simplified model of the protocol is verified using the Symbolic Model Verifier (SMV) [SMV]. The system under study consists of 4 nodes, where at most one of the nodes is assumed to be Byzantine faulty. The model checking effort is focused on verifying correctness of the simplified model of the protocol in the presence of a permanent Byzantine fault as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period. Although model checking results of the simplified model of the protocol confirm the theoretical predictions, these results do not necessarily confirm that the protocol solves the general case of this problem. Modeling challenges of the protocol and the system are addressed. A number of abstractions are utilized in order to reduce the state space. Also, additional innovative state space reduction techniques are introduced that can be used in future verification efforts applied to this and other protocols.

  5. Validating a biometric authentication system: sample size requirements.

    PubMed

    Dass, Sarat C; Zhu, Yongfang; Jain, Anil K

    2006-12-01

    Authentication systems based on biometric features (e.g., fingerprint impressions, iris scans, human face images, etc.) are increasingly gaining widespread use and popularity. Often, vendors and owners of these commercial biometric systems claim impressive performance that is estimated based on some proprietary data. In such situations, there is a need to independently validate the claimed performance levels. System performance is typically evaluated by collecting biometric templates from n different subjects, and for convenience, acquiring multiple instances of the biometric for each of the n subjects. Very little work has been done in 1) constructing confidence regions based on the ROC curve for validating the claimed performance levels and 2) determining the required number of biometric samples needed to establish confidence regions of prespecified width for the ROC curve. To simplify the analyses that address these two problems, several previous studies have assumed that multiple acquisitions of the biometric entity are statistically independent. This assumption is too restrictive and is generally not valid. We have developed a validation technique based on multivariate copula models for correlated biometric acquisitions. Based on the same model, we also determine the minimum number of samples required to achieve confidence bands of desired width for the ROC curve. We illustrate the estimation of the confidence bands as well as the required number of biometric samples using a fingerprint matching system that is applied on samples collected from a small population.

  6. Magnetohydrodynamic and gasdynamic theories for planetary bow waves

    NASA Technical Reports Server (NTRS)

    Spreiter, J. R.; Stahara, S. S.

    1984-01-01

    The observed properties of bow waves and the associated plasma flows are outlined, along with those features identified that can be described by a continuum magnetohydrodynamic flow theory as opposed to a more detailed multicomponent particle and field plasma theory. The primary objectives are to provide an account of the fundamental concepts and current status of the magnetohydrodynamic and gas dynamic theories for solar wind flow past planetary bodies. This includes a critical examination of: (1) the fundamental assumptions of the theories; (2) the various simplifying approximations introduced to obtain tractable mathematical problems; (3) the limitations they impose on the results; and (4) the relationship between the results of the simpler gas dynamic-frozen field theory and the more accurate but less completely worked out magnetohydrodynamic theory. Representative results of the various theories are presented and compared. A number of deficiencies, ambiguities, and suggestions for improvements are discussed, and several significant extensions of the theory required to provide comparable results for all planets, their satellites, and comets are noted.

  7. An horizon scan of biogeography

    PubMed Central

    2014-01-01

    The opportunity to reflect broadly on the accomplishments, prospects, and reach of a field may present itself relatively infrequently. Each biennial meeting of the International Biogeography Society showcases ideas solicited and developed largely during the preceding year, by individuals or teams from across the breadth of the discipline. Here, we highlight challenges, developments, and opportunities in biogeography from that biennial synthesis. We note the realized and potential impact of rapid data accumulation in several fields, a renaissance for inter-disciplinary research, the importance of recognizing the evolution–ecology continuum across spatial and temporal scales and at different taxonomic, phylogenetic and functional levels, and re-exploration of classical assumptions and hypotheses using new tools. However, advances are taxonomically and geographically biased, and key theoretical frameworks await tools to handle, or strategies to simplify, the biological complexity seen in empirical systems. Current threats to biodiversity require unprecedented integration of knowledge and development of predictive capacity that may enable biogeography to unite its descriptive and hypothetico-deductive branches and establish a greater role within and outside academia. PMID:24707348

  8. A survey of wheel-rail contact models for rail vehicles

    NASA Astrophysics Data System (ADS)

    Meymand, Sajjad Z.; Keylin, Alexander; Ahmadian, Mehdi

    2016-03-01

    Accurate and efficient contact models for wheel-rail interaction are essential for the study of the dynamic behaviour of a railway vehicle. Assessment of the contact forces and moments, as well as contact geometry, provides a fundamental foundation for such tasks as design of braking and traction control systems, prediction of wheel and rail wear, and evaluation of ride safety and comfort. This paper discusses the evolution and the current state of the theories for solving the wheel-rail contact problem for rolling stock. The well-known theories for modelling both normal contact (Hertzian and non-Hertzian) and tangential contact (Kalker's linear theory, FASTSIM, CONTACT, Polach's theory, etc.) are reviewed. The paper discusses the simplifying assumptions for developing these models and compares their functionality. The experimental studies for evaluation of contact models are also reviewed. This paper concludes by discussing open areas in contact mechanics that require further research to develop better models of the wheel-rail interaction.

  9. Dyadic Green's function of an eccentrically stratified sphere.

    PubMed

    Moneda, Angela P; Chrissoulidis, Dimitrios P

    2014-03-01

    The electric dyadic Green's function (dGf) of an eccentrically stratified sphere is built by use of the superposition principle, dyadic algebra, and the addition theorem of vector spherical harmonics. The end result of the analytical formulation is a set of linear equations for the unknown vector wave amplitudes of the dGf. The unknowns are calculated by truncation of the infinite sums and matrix inversion. The theory is exact, as no simplifying assumptions are required in any one of the analytical steps leading to the dGf, and it is general in the sense that any number, position, size, and electrical properties can be considered for the layers of the sphere. The point source can be placed outside of or in any lossless part of the sphere. Energy conservation, reciprocity, and other checks verify that the dGf is correct. A numerical application is made to a stratified sphere made of gold and glass, which operates as a lens.

  10. An infectious way to teach students about outbreaks.

    PubMed

    Cremin, Íde; Watson, Oliver; Heffernan, Alastair; Imai, Natsuko; Ahmed, Norin; Bivegete, Sandra; Kimani, Teresia; Kyriacou, Demetris; Mahadevan, Preveina; Mustafa, Rima; Pagoni, Panagiota; Sophiea, Marisa; Whittaker, Charlie; Beacroft, Leo; Riley, Steven; Fisher, Matthew C

    2018-06-01

    The study of infectious disease outbreaks is required to train today's epidemiologists. A typical way to introduce and explain key epidemiological concepts is through the analysis of a historical outbreak. There are, however, few training options that explicitly utilise real-time simulated stochastic outbreaks where the participants themselves comprise the dataset they subsequently analyse. In this paper, we present a teaching exercise in which an infectious disease outbreak is simulated over a five-day period and subsequently analysed. We iteratively developed the teaching exercise to offer additional insight into analysing an outbreak. An R package for visualisation, analysis and simulation of the outbreak data was developed to accompany the practical to reinforce learning outcomes. Computer simulations of the outbreak revealed deviations from observed dynamics, highlighting how simplifying assumptions conventionally made in mathematical models often differ from reality. Here we provide a pedagogical tool for others to use and adapt in their own settings. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
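
    The accompanying package is written in R; purely to illustrate the kind of simulation the exercise analyses, here is a minimal discrete-time (chain-binomial) stochastic SIR in Python with made-up parameters.

      import numpy as np

      def stochastic_sir(n=100, i0=1, beta=0.4, gamma=0.2, days=50, seed=None):
          # Chain-binomial updates: each susceptible is infected with
          # probability 1 - exp(-beta * I / N) per day; each infectious
          # individual recovers with probability 1 - exp(-gamma).
          rng = np.random.default_rng(seed)
          s, i, r = n - i0, i0, 0
          history = [(s, i, r)]
          for _ in range(days):
              new_inf = rng.binomial(s, 1.0 - np.exp(-beta * i / n))
              new_rec = rng.binomial(i, 1.0 - np.exp(-gamma))
              s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
              history.append((s, i, r))
          return history

      for s, i, r in stochastic_sir(seed=42)[:5]:
          print(s, i, r)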

  11. Objective assessment of MPEG-2 video quality

    NASA Astrophysics Data System (ADS)

    Gastaldo, Paolo; Zunino, Rodolfo; Rovetta, Stefano

    2002-07-01

    The increasing use of video compression standards in broadcasting television systems has required, in recent years, the development of video quality measurements that take into account artifacts specifically caused by digital compression techniques. In this paper we present a methodology for the objective quality assessment of MPEG video streams by using circular back-propagation feedforward neural networks. Mapping neural networks can render nonlinear relationships between objective features and subjective judgments, thus avoiding any simplifying assumption on the complexity of the model. The neural network processes an instantaneous set of input values, and yields an associated estimate of perceived quality. Therefore, the neural-network approach turns objective quality assessment into adaptive modeling of subjective perception. The objective features used for the estimate are chosen according to the assessed relevance to perceived quality and are continuously extracted in real time from compressed video streams. The overall system mimics perception but does not require any analytical model of the underlying physical phenomenon. The capability to process compressed video streams represents an important advantage over existing approaches, as avoiding the stream-decoding process greatly enhances real-time performance. Experimental results confirm that the system provides satisfactory, continuous-time approximations for actual scoring curves concerning real test videos.

  12. Evaluation of simplified stream-aquifer depletion models for water rights administration

    USGS Publications Warehouse

    Sophocleous, Marios; Koussis, Antonis; Martin, J.L.; Perkins, S.P.

    1995-01-01

    We assess the predictive accuracy of Glover's (1974) stream-aquifer analytical solutions, which are commonly used in administering water rights, and evaluate the impact of the assumed idealizations on administrative and management decisions. To achieve these objectives, we evaluate the predictive capabilities of the Glover stream-aquifer depletion model against the MODFLOW numerical standard, which, unlike the analytical model, can handle increasing hydrogeologic complexity. We rank-order and quantify the relative importance of the various assumptions on which the analytical model is based, the three most important being: (1) streambed clogging as quantified by streambed-aquifer hydraulic conductivity contrast; (2) degree of stream partial penetration; and (3) aquifer heterogeneity. These three factors relate directly to the multidimensional nature of the aquifer flow conditions. From these considerations, future efforts to reduce the uncertainty in stream depletion-related administrative decisions should primarily address these three factors in characterizing the stream-aquifer process. We also investigate the impact of progressively coarser model grid size on numerically estimating stream leakage and conclude that grid size effects are relatively minor. Therefore, when modeling is required, coarser model grids could be used thus minimizing the input data requirements.
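
    For context, the idealized Glover solution being assessed has a one-line closed form: the fraction of pumping captured from the stream grows as a complementary error function of a dimensionless distance. A hedged sketch, assuming the standard idealizations the paper tests (fully penetrating stream, no streambed clogging, homogeneous aquifer):

      from math import erfc, sqrt

      def glover_depletion_fraction(d, S, T, t):
          # q/Q for a well a distance d (m) from the stream, storativity S (-),
          # transmissivity T (m^2/d), time t (d) since pumping began.
          return erfc(sqrt(d * d * S / (4.0 * T * t)))

      # Usage: well 300 m from the stream, S = 0.1, T = 500 m^2/d
      for t in (10.0, 100.0, 1000.0):
          print(t, round(glover_depletion_fraction(300.0, 0.1, 500.0, t), 3))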

  13. HZETRN: Description of a free-space ion and nucleon transport and shielding computer program

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Badavi, Francis F.; Cucinotta, Francis A.; Shinn, Judy L.; Badhwar, Gautam D.; Silberberg, R.; Tsao, C. H.; Townsend, Lawrence W.; Tripathi, Ram K.

    1995-01-01

    The high-charge-and-energy (HZE) transport computer program HZETRN is developed to address the problems of free-space radiation transport and shielding. The HZETRN program is intended specifically for the design engineer who is interested in obtaining fast and accurate dosimetric information for the design and construction of space modules and devices. The program is based on a one-dimensional space-marching formulation of the Boltzmann transport equation with a straight-ahead approximation. The effect of the long-range Coulomb force and electron interaction is treated as a continuous slowing-down process. Atomic (electronic) stopping power coefficients with energies above a few A MeV are calculated by using Bethe's theory including Bragg's rule, Ziegler's shell corrections, and effective charge. Nuclear absorption cross sections are obtained from fits to quantum calculations and total cross sections are obtained with a Ramsauer formalism. Nuclear fragmentation cross sections are calculated with a semiempirical abrasion-ablation fragmentation model. The relation of the final computer code to the Boltzmann equation is discussed in the context of simplifying assumptions. A detailed description of the flow of the computer code, input requirements, sample output, and compatibility requirements for non-VAX platforms are provided.
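
    Schematically, the equation such a code package solves is the one-dimensional straight-ahead Boltzmann equation with continuous slowing down; the notation below is generic rather than copied from the report:

      % \phi_j flux of particle type j, S_j(E) stopping power, \sigma_j total
      % macroscopic cross section, \sigma_{jk}(E,E') production of j from k.
      \begin{equation}
        \left[\frac{\partial}{\partial x}
              - \frac{\partial}{\partial E}\,S_j(E)
              + \sigma_j(E)\right]\phi_j(x,E)
        = \sum_{k}\int_{E}^{\infty}\sigma_{jk}(E,E')\,\phi_k(x,E')\,\mathrm{d}E' .
      \end{equation}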

  14. Bridging Numerical and Analytical Models of Transient Travel Time Distributions: Challenges and Opportunities

    NASA Astrophysics Data System (ADS)

    Danesh Yazdi, M.; Klaus, J.; Condon, L. E.; Maxwell, R. M.

    2017-12-01

    Recent advancements in analytical solutions to quantify time-variant travel time distributions (TTDs) of water and solutes and the related StorAge Selection (SAS) functions synthesize catchment complexity into a simplified, lumped representation. While these analytical approaches are easy and efficient in application, they require high-frequency hydrochemical data for parameter estimation. Alternatively, integrated hydrologic models coupled to Lagrangian particle-tracking approaches can directly simulate age under different catchment geometries and complexity at a greater computational expense. Here, we compare and contrast the two approaches by exploring the influence of the spatial distribution of subsurface heterogeneity, interactions between distinct flow domains, diversity of flow pathways, and recharge rate on the shape of TTDs and the related SAS functions. To this end, we use a parallel three-dimensional variably saturated groundwater model, ParFlow, to solve for the velocity fields in the subsurface. A particle-tracking model, SLIM, is then implemented to determine the age distributions at every simulation time and domain location, facilitating a direct characterization of the SAS functions, as opposed to analytical approaches that require calibration of such functions. Steady-state results reveal that the assumption of a random age sampling scheme might only hold in the saturated region of homogeneous catchments, resulting in an exponential TTD. This assumption is, however, violated when the vadose zone is included, as the underlying SAS function gives a higher preference to older ages. The dynamical variability of the true SAS functions is also shown to be largely masked by the smooth analytical SAS functions. As the variability of subsurface spatial heterogeneity increases, the shape of the TTD approaches a power-law distribution function, including a broader distribution of shorter and longer travel times. We further found that a larger (smaller) magnitude of effective precipitation shifts the scale of the TTD towards younger (older) travel times, while the shape of the TTD remains unchanged. This work constitutes a first step in linking a numerical transport model and analytical solutions of TTDs to study their assumptions and limitations, providing physical inferences for empirical parameters.
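
    The random-sampling benchmark mentioned above can be stated compactly: if discharge samples storage ages uniformly at steady state, travel times are exponential with mean storage/discharge. The sketch below illustrates that baseline with made-up numbers; it is not the ParFlow/SLIM workflow.

    ```python
    # Exponential TTD implied by uniform (well-mixed) age sampling.
    import numpy as np

    rng = np.random.default_rng(1)
    storage, discharge = 1000.0, 10.0            # mm and mm/day (illustrative)
    mean_tt = storage / discharge                # mean travel time, 100 days
    tt = rng.exponential(mean_tt, size=100_000)  # sampled travel times

    # Preferential sampling of older ages (as found here in the vadose zone)
    # would fatten the tail relative to this exponential baseline.
    print(tt.mean(), np.median(tt))
    ```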

  15. An Efficient Ray-Tracing Method for Determining Terrain Intercepts in EDL Simulations

    NASA Technical Reports Server (NTRS)

    Shidner, Jeremy D.

    2016-01-01

    The calculation of a ray's intercept from an arbitrary point in space to a prescribed surface is a common task in computer simulations. The arbitrary point often represents an object that is moving according to the simulation, while the prescribed surface is fixed in a defined frame. For detailed simulations, this surface becomes complex, taking the form of real-world features such as mountains, craters, or valleys, which require more advanced methods to accurately calculate a ray's intercept location. Incorporation of these complex surfaces has commonly been implemented in graphics systems that utilize highly optimized graphics processing units to analyze such features. This paper proposes a simplified method that does not require computationally intensive graphics solutions, but rather an optimized ray-tracing method for an assumed terrain dataset. This approach was developed for the Mars Science Laboratory mission, which landed on the complex terrain of Gale Crater. First, this paper begins with a discussion of the simulation used to implement the model and the applicability of finding surface intercepts with respect to atmosphere modeling, altitude determination, radar modeling, and contact forces influencing vehicle dynamics. Next, the derivation and assumptions of the intercept-finding method are presented. Key assumptions are noted, making the routines specific to surface data sets that are equidistantly spaced in longitude and latitude. The derivation of the method relies on ray-tracing, requiring discussion of the formulation of the ray with respect to the terrain datasets. Further discussion includes techniques for ray initialization in order to optimize the intercept search. Then, the model implementation for various new applications in the simulation is demonstrated. Finally, a validation of the accuracy is presented along with the corresponding data sets used in the validation. A performance summary of the method is shown using the analysis from the Mars Science Laboratory's terminal descent sensing model. Alternate uses are also shown for determining horizon maps and orbiter set times.
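
    A bare-bones version of the intercept search is sketched below: march along the ray and test altitude against a bilinearly interpolated height from a regular lon/lat grid. The step size, interpolation choice, and absence of bounds checking are simplifications for illustration; this is not the MSL flight implementation.

    ```python
    # Ray-marching intercept against a heightfield on an equidistant lon/lat grid.
    import numpy as np

    def terrain_height(lon, lat, grid, lon0, lat0, dlon, dlat):
        """Bilinear interpolation on a regular grid (no bounds checking)."""
        i, j = (lat - lat0) / dlat, (lon - lon0) / dlon
        i0, j0 = int(i), int(j)
        fi, fj = i - i0, j - j0
        return (grid[i0, j0] * (1 - fi) * (1 - fj) + grid[i0 + 1, j0] * fi * (1 - fj)
                + grid[i0, j0 + 1] * (1 - fi) * fj + grid[i0 + 1, j0 + 1] * fi * fj)

    def ray_intercept(p, d, grid, lon0, lat0, dlon, dlat, ds=1e-4, smax=1.0):
        """First point where the ray p + s*d (lon, lat, alt) dips below terrain."""
        s = 0.0
        while s < smax:
            lon, lat, alt = p + s * d
            if alt <= terrain_height(lon, lat, grid, lon0, lat0, dlon, dlat):
                return p + s * d   # refine with bisection for higher accuracy
            s += ds
        return None                # no intercept within smax
    ```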

  16. 46 CFR 178.320 - Intact stability requirements-non-sailing vessels.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... following vessels may undergo the simplified stability proof test detailed in § 178.330 of this part, in the... this part, a self-propelled pontoon vessel may undergo the pontoon simplified stability proof test... deck cargo, and is otherwise eligible to undergo the simplified stability proof test detailed in § 178...

  17. 46 CFR 178.320 - Intact stability requirements-non-sailing vessels.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... following vessels may undergo the simplified stability proof test detailed in § 178.330 of this part, in the... this part, a self-propelled pontoon vessel may undergo the pontoon simplified stability proof test... deck cargo, and is otherwise eligible to undergo the simplified stability proof test detailed in § 178...

  18. 46 CFR 178.320 - Intact stability requirements-non-sailing vessels.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... following vessels may undergo the simplified stability proof test detailed in § 178.330 of this part, in the... this part, a self-propelled pontoon vessel may undergo the pontoon simplified stability proof test... deck cargo, and is otherwise eligible to undergo the simplified stability proof test detailed in § 178...

  19. Impact of an equality constraint on the class-specific residual variances in regression mixtures: A Monte Carlo simulation study.

    PubMed

    Kim, Minjung; Lamont, Andrea E; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M Lee

    2016-06-01

    Regression mixture models are a novel approach to modeling the heterogeneous effects of predictors on an outcome. In the model-building process, residual variances are often disregarded, and simplifying assumptions are made without thorough examination of the consequences. In this simulation study, we investigated the impact of an equality constraint on the residual variances across latent classes. We examined the consequences of constraining the residual variances on class enumeration (finding the true number of latent classes) and on the parameter estimates, under a number of different simulation conditions meant to reflect the types of heterogeneity likely to exist in applied analyses. The results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted the estimated class sizes and showed the potential to greatly affect the parameter estimates in each class. These results suggest that it is important to make assumptions about residual variances with care and to carefully report the assumptions that are made.

  20. Cost-effectiveness of human papillomavirus vaccination in the United States.

    PubMed

    Chesson, Harrell W; Ekwueme, Donatus U; Saraiya, Mona; Markowitz, Lauri E

    2008-02-01

    We describe a simplified model, based on the current economic and health effects of human papillomavirus (HPV), to estimate the cost-effectiveness of HPV vaccination of 12-year-old girls in the United States. Under base-case parameter values, the estimated cost per quality-adjusted life year gained by vaccination in the context of current cervical cancer screening practices in the United States ranged from $3,906 to $14,723 (2005 US dollars), depending on factors such as whether herd immunity effects were assumed; the types of HPV targeted by the vaccine; and whether the benefits of preventing anal, vaginal, vulvar, and oropharyngeal cancers were included. The results of our simplified model were consistent with published studies based on more complex models when key assumptions were similar. This consistency is reassuring because models of varying complexity will be essential tools for policy makers in the development of optimal HPV vaccination strategies.
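
    The arithmetic underneath such estimates is simple; the sketch below shows the core incremental cost-per-QALY calculation with placeholder numbers, not the paper's inputs.

    ```python
    # Incremental cost-effectiveness: net cost divided by QALYs gained.
    def cost_per_qaly(vaccination_cost, averted_treatment_cost, qalys_gained):
        return (vaccination_cost - averted_treatment_cost) / qalys_gained

    # Placeholder per-vaccinee values (illustrative only):
    print(cost_per_qaly(360.0, 250.0, 0.012))   # about $9,200 per QALY
    ```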

  1. The Embedding Problem for Markov Models of Nucleotide Substitution

    PubMed Central

    Verbyla, Klara L.; Yap, Von Bing; Pahwa, Anuj; Shao, Yunli; Huttley, Gavin A.

    2013-01-01

    Continuous-time Markov processes are often used to model the complex natural phenomenon of sequence evolution. To make the process of sequence evolution tractable, simplifying assumptions are often made about the sequence properties and the underlying process. The validity of one such assumption, time-homogeneity, has never been explored. Violations of this assumption can be found by identifying non-embeddability. A process is non-embeddable if it cannot be embedded in a continuous time-homogeneous Markov process. In this study, non-embeddability was demonstrated to exist when modelling sequence evolution with Markov models. Evidence of non-embeddability was found primarily at the third codon position, possibly resulting from changes in mutation rate over time. Outgroup edges and those with a deeper time depth were found to have an increased probability of the underlying process being non-embeddable. Overall, low levels of non-embeddability were detected when examining individual edges of triads across a diverse set of alignments. Subsequent phylogenetic reconstruction analyses demonstrated that non-embeddability could affect the correct prediction of phylogenies, but at extremely low levels. Despite the existence of non-embeddability, there is minimal evidence of violations of the local time-homogeneity assumption, and consequently the impact is likely to be minor. PMID:23935949
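
    A standard necessary check for embeddability is whether a matrix logarithm of the transition matrix is a valid rate matrix; the sketch below tests the principal branch only (other branches must be examined in general), using a made-up transition matrix.

    ```python
    # Principal-branch embeddability check: Q = logm(P) must have non-negative
    # off-diagonal entries and zero row sums to be a valid rate matrix.
    import numpy as np
    from scipy.linalg import logm

    def embeddable_principal_branch(P, tol=1e-10):
        Q = logm(P).real
        off_diag = Q - np.diag(np.diag(Q))
        return bool(np.all(off_diag >= -tol)
                    and np.allclose(Q.sum(axis=1), 0.0, atol=1e-8))

    P = np.array([[0.90, 0.05, 0.05],
                  [0.05, 0.90, 0.05],
                  [0.05, 0.05, 0.90]])
    print(embeddable_principal_branch(P))   # True for this diagonally dominant P
    ```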

  2. Fluid-Structure Interaction Modeling of Intracranial Aneurysm Hemodynamics: Effects of Different Assumptions

    NASA Astrophysics Data System (ADS)

    Rajabzadeh Oghaz, Hamidreza; Damiano, Robert; Meng, Hui

    2015-11-01

    Intracranial aneurysms (IAs) are pathological outpouchings of cerebral vessels, the progression of which is mediated by complex interactions between the blood flow and vasculature. Image-based computational fluid dynamics (CFD) has been used for decades to investigate IA hemodynamics. However, the commonly adopted simplifying assumptions in CFD (e.g. rigid wall) compromise the simulation accuracy and mask the complex physics involved in IA progression and eventual rupture. Several groups have considered the wall compliance by using fluid-structure interaction (FSI) modeling. However, FSI simulation is highly sensitive to numerical assumptions (e.g. linear-elastic wall material, Newtonian fluid, initial vessel configuration, and constant pressure outlet), the effects of which are poorly understood. In this study, the sensitivity of FSI simulations in patient-specific IAs is comprehensively investigated using a multi-stage approach with a varying level of complexity. We start with simulations incorporating several common simplifications: rigid wall, Newtonian fluid, and constant pressure at the outlets; we then remove these simplifications stepwise until the most comprehensive FSI simulations are reached. Hemodynamic parameters such as wall shear stress and oscillatory shear index are assessed and compared at each stage to better understand the sensitivity of FSI simulations of IAs to model assumptions. Supported by the National Institutes of Health (1R01 NS 091075-01).
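
    Of the hemodynamic parameters mentioned, the oscillatory shear index has a particularly compact standard definition; the sketch below computes it from a synthetic wall-shear-stress time series.

    ```python
    # OSI = 0.5 * (1 - |time-mean tau| / time-mean |tau|); 0 means unidirectional
    # shear, 0.5 means purely oscillatory shear. tau: (timesteps, 3) array.
    import numpy as np

    def osi(tau):
        mean_vector_mag = np.linalg.norm(tau.mean(axis=0))
        mean_magnitude = np.linalg.norm(tau, axis=1).mean()
        return 0.5 * (1.0 - mean_vector_mag / mean_magnitude)

    t = np.linspace(0.0, 1.0, 200)
    tau = np.stack([np.sin(2 * np.pi * t),          # strongly reversing component
                    0.1 * np.ones_like(t),          # small steady component
                    np.zeros_like(t)], axis=1)
    print(osi(tau))   # well above zero: markedly oscillatory shear
    ```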

  3. Tax Subsidies for Employer-Sponsored Health Insurance: Updated Microsimulation Estimates and Sensitivity to Alternative Incidence Assumptions

    PubMed Central

    Miller, G Edward; Selden, Thomas M

    2013-01-01

    Objective To estimate 2012 tax expenditures for employer-sponsored insurance (ESI) in the United States and to explore the sensitivity of estimates to assumptions regarding the incidence of employer premium contributions. Data Sources Nationally representative Medical Expenditure Panel Survey data from the 2005–2007 Household Component (MEPS-HC) and the 2009–2010 Insurance Component (MEPS IC). Study Design We use MEPS HC workers to construct synthetic workforces for MEPS IC establishments, applying the workers' marginal tax rates to the establishments' insurance premiums to compute the tax subsidy, in aggregate and by establishment characteristics. Simulation enables us to examine the sensitivity of ESI tax subsidy estimates to a range of scenarios for the within-firm incidence of employer premium contributions when workers have heterogeneous health risks and make heterogeneous plan choices. Principal Findings We simulate the total ESI tax subsidy for all active, civilian U.S. workers to be $257.4 billion in 2012. In the private sector, the subsidy disproportionately flows to workers in large establishments and establishments with predominantly high wage or full-time workforces. The estimates are remarkably robust to alternative incidence assumptions. Conclusions The aggregate value of the ESI tax subsidy and its distribution across firms can be reliably estimated using simplified incidence assumptions. PMID:23398400
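
    The core microsimulation step is straightforward: each worker's subsidy is approximately the tax-excluded premium contribution times that worker's marginal tax rate, summed over the workforce. The sketch below uses invented values, not MEPS data.

    ```python
    # Aggregate ESI tax subsidy over a (synthetic) workforce.
    workers = [
        {"employer_premium": 12000.0, "marginal_tax_rate": 0.32},
        {"employer_premium": 6000.0,  "marginal_tax_rate": 0.22},
    ]
    total = sum(w["employer_premium"] * w["marginal_tax_rate"] for w in workers)
    print(total)   # 5160.0 for these two illustrative workers
    ```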

  4. Deep Borehole Field Test Requirements and Controlled Assumptions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardin, Ernest

    2015-07-01

    This document presents design requirements and controlled assumptions intended for use in the engineering development and testing of: 1) prototype packages for radioactive waste disposal in deep boreholes; 2) a waste package surface handling system; and 3) a subsurface system for emplacing and retrieving packages in deep boreholes. Engineering development and testing is being performed as part of the Deep Borehole Field Test (DBFT; SNL 2014a). This document presents parallel sets of requirements for a waste disposal system and for the DBFT, showing the close relationship. In addition to design, it will also inform planning for drilling, construction, and scientific characterization activities for the DBFT. The information presented here follows typical preparations for engineering design. It includes functional and operating requirements for handling and emplacement/retrieval equipment, waste package design and emplacement requirements, borehole construction requirements, sealing requirements, and performance criteria. Assumptions are included where they could impact engineering design. Design solutions are avoided in the requirements discussion.

  5. SURVIAC Bulletin: RPG Encounter Modeling, Vol 27, Issue 1, 2012

    DTIC Science & Technology

    2012-01-01

    return a probability of hit (PHIT) for the scenario. In the model, PHIT depends on the presented area of the targeted system and a set of errors infl...simplifying assumptions, is data-driven, and uses simple yet proven methodologies to determine PHIT. The inputs to THREAT describe the target, the RPG, and...Point on 2-D Representation of a CH-47. The determination of PHIT by THREAT is performed using one of two possible methodologies. The first is a

  6. Analysis of cavitation bubble dynamics in a liquid

    NASA Technical Reports Server (NTRS)

    Fontenot, L. L.; Lee, Y. C.

    1971-01-01

    General differential equations governing the dynamics of cavitation bubbles in a liquid were derived. With the assumption of spherical symmetry, the governing equations were simplified. Closed-form solutions were obtained for simple cases, and numerical solutions were calculated for complicated ones. The growth and collapse of the bubble were analyzed, oscillations of the bubbles were studied, and the stability of the cavitation bubbles was investigated. The results show that the cavitation bubbles are unstable and that the oscillation is not sinusoidal.
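
    A standard concrete form of the spherically symmetric bubble equation is the Rayleigh-Plesset equation; the sketch below integrates it for a gas bubble with a polytropic interior. The equation form is standard, but the parameter values are illustrative and the link to this report's exact derivation is an assumption.

    ```python
    # Rayleigh-Plesset: R*R'' + 1.5*R'^2 = (p_gas - p_inf - 2*sigma/R - 4*mu*R'/R)/rho
    import numpy as np
    from scipy.integrate import solve_ivp

    rho, sigma, mu = 1000.0, 0.072, 1.0e-3          # water-like liquid
    p_inf, p0, R0, k = 101325.0, 101325.0, 1e-4, 1.4

    def rayleigh_plesset(t, y):
        R, Rdot = y
        p_gas = (p0 + 2 * sigma / R0) * (R0 / R) ** (3 * k)   # polytropic gas
        Rddot = ((p_gas - p_inf - 2 * sigma / R - 4 * mu * Rdot / R) / rho
                 - 1.5 * Rdot**2) / R
        return [Rdot, Rddot]

    sol = solve_ivp(rayleigh_plesset, (0.0, 50e-6), [1.5 * R0, 0.0], max_step=1e-8)
    print(sol.y[0].min(), sol.y[0].max())   # nonsinusoidal growth/collapse cycle
    ```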

  7. Atmospheric refraction effects on baseline error in satellite laser ranging systems

    NASA Technical Reports Server (NTRS)

    Im, K. E.; Gardner, C. S.

    1982-01-01

    Because of the mathematical complexities involved in exact analyses of baseline errors, it is not easy to isolate atmospheric refraction effects; however, by making certain simplifying assumptions about the ranging system geometry, relatively simple expressions can be derived which relate the baseline errors directly to the refraction errors. The results indicate that even in the absence of other errors, the baseline error for intercontinental baselines can be more than an order of magnitude larger than the refraction error.

  8. The global strong solutions of Hasegawa-Mima-Charney-Obukhov equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao Hongjun; Zhu Anyou

    2005-08-01

    The quasigeostrophic model is a simplified geophysical fluid model at asymptotically high rotation rate or at small Rossby number. We consider the quasigeostrophic equation with no dissipation term, which was obtained as an asymptotic model from the Euler equations with free surface under a quasigeostrophic velocity field assumption. It is called the Hasegawa-Mima-Charney-Obukhov equation, which also arises in plasma theory. We use a priori estimates to obtain the global existence of strong solutions for the Hasegawa-Mima-Charney-Obukhov equation.
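
    For reference, a commonly quoted form of the equation is given below; sign and scaling conventions differ across the plasma and geophysical literature, so treat this as one standard variant rather than the paper's exact statement.

    ```latex
    \frac{\partial}{\partial t}\left(\nabla^{2}\psi - F\psi\right)
      + J\left(\psi,\,\nabla^{2}\psi\right)
      + \beta\,\frac{\partial\psi}{\partial x} = 0,
    \qquad
    J(a,b) = \frac{\partial a}{\partial x}\frac{\partial b}{\partial y}
           - \frac{\partial a}{\partial y}\frac{\partial b}{\partial x}.
    ```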

  9. Monitored Geologic Repository Project Description Document

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    P. M. Curry

    2001-01-30

    The primary objective of the Monitored Geologic Repository Project Description Document (PDD) is to allocate the functions, requirements, and assumptions to the systems at Level 5 of the Civilian Radioactive Waste Management System (CRWMS) architecture identified in Section 4. It provides traceability of the requirements to those contained in Section 3 of the ''Monitored Geologic Repository Requirements Document'' (MGR RD) (YMP 2000a) and other higher-level requirements documents. In addition, the PDD allocates design-related assumptions to work products of non-design organizations. The document provides Monitored Geologic Repository (MGR) technical requirements in support of design and performance assessment in preparing for the Site Recommendation (SR) and License Application (LA) milestones. The technical requirements documented in the PDD are to be captured in the System Description Documents (SDDs), which address each of the systems at Level 5 of the CRWMS architecture. The design engineers obtain the technical requirements from the SDDs and by reference from the SDDs to the PDD. The design organizations and other organizations will obtain design-related assumptions directly from the PDD. These organizations may establish additional assumptions for their individual activities, but such assumptions are not to conflict with the assumptions in the PDD. The PDD will serve as the primary link between the technical requirements captured in the SDDs and the design requirements captured in US Department of Energy (DOE) documents. The approved PDD is placed under Level 3 baseline control by the CRWMS Management and Operating Contractor (M and O), and the following portions of the PDD constitute the Technical Design Baseline for the MGR: the design characteristics listed in Table 1-1, the MGR Architecture (Section 4.1), the Technical Requirements (Section 5), and the Controlled Project Assumptions (Section 6).

  10. 77 FR 15969 - Waybill Data Released in Three-Benchmark Rail Rate Proceedings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-19

    ... confidentiality of the contract rates, as required by 49 U.S.C. 11904. Background In Simplified Standards for Rail Rate Cases (Simplified Standards), EP 646 (Sub-No. 1) (STB served Sept. 5, 2007), aff'd sub nom. CSX... Under the Three-Benchmark method as revised in Simplified Standards, each party creates and proffers to...

  11. 48 CFR 529.401-70 - Purchases at or under the simplified acquisition threshold.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 4 2013-10-01 2013-10-01 false Purchases at or under the simplified acquisition threshold. 529.401-70 Section 529.401-70 Federal Acquisition Regulations System GENERAL SERVICES ADMINISTRATION GENERAL CONTRACTING REQUIREMENTS TAXES Contract Clauses 529.401-70 Purchases at or under the simplified acquisitio...

  12. 48 CFR 529.401-70 - Purchases at or under the simplified acquisition threshold.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Purchases at or under the simplified acquisition threshold. 529.401-70 Section 529.401-70 Federal Acquisition Regulations System GENERAL SERVICES ADMINISTRATION GENERAL CONTRACTING REQUIREMENTS TAXES Contract Clauses 529.401-70 Purchases at or under the simplified acquisitio...

  13. 48 CFR 529.401-70 - Purchases at or under the simplified acquisition threshold.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Purchases at or under the simplified acquisition threshold. 529.401-70 Section 529.401-70 Federal Acquisition Regulations System GENERAL SERVICES ADMINISTRATION GENERAL CONTRACTING REQUIREMENTS TAXES Contract Clauses 529.401-70 Purchases at or under the simplified acquisitio...

  14. 48 CFR 529.401-70 - Purchases at or under the simplified acquisition threshold.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 4 2014-10-01 2014-10-01 false Purchases at or under the simplified acquisition threshold. 529.401-70 Section 529.401-70 Federal Acquisition Regulations System GENERAL SERVICES ADMINISTRATION GENERAL CONTRACTING REQUIREMENTS TAXES Contract Clauses 529.401-70 Purchases at or under the simplified acquisitio...

  15. Simplifier: a web tool to eliminate redundant NGS contigs.

    PubMed

    Ramos, Rommel Thiago Jucá; Carneiro, Adriana Ribeiro; Azevedo, Vasco; Schneider, Maria Paula; Barh, Debmalya; Silva, Artur

    2012-01-01

    Modern genomic sequencing technologies produce a large amount of data with reduced cost per base; however, these data consist of short reads. This reduction in the size of the reads, compared to those obtained with previous methodologies, presents new challenges, including a need for efficient algorithms for the assembly of genomes from short reads and for resolving repetitions. Additionally, after ab initio assembly, curation of the hundreds or thousands of contigs generated by assemblers demands considerable time and computational resources. We developed Simplifier, a stand-alone software tool that selectively eliminates redundant sequences from the collection of contigs generated by ab initio assembly of genomes. Application of Simplifier to data generated by assembly of the genome of Corynebacterium pseudotuberculosis strain 258 reduced the number of contigs generated by ab initio methods from 8,004 to 5,272, a reduction of 34.14%; in addition, N50 increased from 1 kb to 1.5 kb. Processing the contigs of Escherichia coli DH10B with Simplifier reduced the mate-paired library by 17.47% and the fragment library by 23.91%. Simplifier removed redundant sequences from datasets produced by assemblers, thereby reducing the effort required for finalization of genome assembly in tests with data from prokaryotic organisms. Simplifier is available at http://www.genoma.ufpa.br/rramos/softwares/simplifier.xhtml. It requires Sun JDK 6 or higher.
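
    The core redundancy-elimination idea can be sketched in a few lines: discard any contig contained in a longer one (or in its reverse complement). This is a simplified illustration of containment removal, not Simplifier's exact algorithm.

    ```python
    # Containment-based redundancy removal for assembled contigs.
    def revcomp(seq):
        return seq[::-1].translate(str.maketrans("ACGT", "TGCA"))

    def remove_redundant(contigs):
        kept = []
        for c in sorted(contigs, key=len, reverse=True):   # longest first
            if not any(c in k or c in revcomp(k) for k in kept):
                kept.append(c)
        return kept

    print(remove_redundant(["ATCGGA", "TCGG", "GGCC", "CCGA"]))
    # -> ['ATCGGA', 'GGCC']: TCGG is contained directly, CCGA via reverse complement
    ```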

  16. 49 CFR 17.12 - How may a state simplify, consolidate, or substitute federally required state plans?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 1 2010-10-01 2010-10-01 false How may a state simplify, consolidate, or substitute federally required state plans? 17.12 Section 17.12 Transportation Office of the Secretary of Transportation INTERGOVERNMENTAL REVIEW OF DEPARTMENT OF TRANSPORTATION PROGRAMS AND ACTIVITIES § 17.12 How may a...

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kozluk, M.J.; Vijay, D.K.

    Postulated catastrophic rupture of high-energy piping systems is the fundamental criterion used for the safety design basis of both light and heavy water nuclear generating stations. Historically, the criterion has been applied by assuming a nonmechanistic, instantaneous double-ended guillotine rupture of the largest-diameter pipes inside of containment. Nonmechanistic means that the assumption of an instantaneous guillotine rupture has not been based on stresses in the pipe, failure mechanisms, toughness of the piping material, or the dynamics of the ruptured pipe ends as they separate. This postulated instantaneous double-ended guillotine rupture of a pipe was a convenient simplifying assumption that resulted in a conservative accident scenario. This conservative accident scenario has now become entrenched as the design basis accident for containment design, shutdown system design, and emergency fuel cooling systems design, and to establish environmental qualification temperature and pressure conditions. The requirement to address dynamic effects associated with the postulated pipe rupture subsequently evolved. The dynamic effects include potential missiles, pipe whipping, blowdown jets, and thermal-hydraulic transients. Recent advances in fracture mechanics research have demonstrated that certain pipes under specific conditions cannot crack in ways that result in an instantaneous guillotine rupture. Canadian utilities are now using mechanistic fracture mechanics and leak-before-break assessments on a case-by-case basis, in limited applications, to support licensing cases which seek exemption from the need to consider the various dynamic effects associated with postulated instantaneous catastrophic rupture of high-energy piping systems inside and outside of containment.

  18. Quantifying uncertainty in geoacoustic inversion. II. Application to broadband, shallow-water data.

    PubMed

    Dosso, Stan E; Nielsen, Peter L

    2002-01-01

    This paper applies the new method of fast Gibbs sampling (FGS) to estimate the uncertainties of seabed geoacoustic parameters in a broadband, shallow-water acoustic survey, with the goal of interpreting the survey results and validating the method for experimental data. FGS applies a Bayesian approach to geoacoustic inversion based on sampling the posterior probability density to estimate marginal probability distributions and parameter covariances. This requires knowledge of the statistical distribution of the data errors, including both measurement and theory errors, which is generally not available. Invoking the simplifying assumption of independent, identically distributed Gaussian errors allows a maximum-likelihood estimate of the data variance and leads to a practical inversion algorithm. However, it is necessary to validate these assumptions, i.e., to verify that the parameter uncertainties obtained represent meaningful estimates. To this end, FGS is applied to a geoacoustic experiment carried out at a site off the west coast of Italy where previous acoustic and geophysical studies have been performed. The parameter uncertainties estimated via FGS are validated by comparison with: (i) the variability in the results of inverting multiple independent data sets collected during the experiment; (ii) the results of FGS inversion of synthetic test cases designed to simulate the experiment and data errors; and (iii) the available geophysical ground truth. Comparisons are carried out for a number of different source bandwidths, ranges, and levels of prior information, and indicate that FGS provides reliable and stable uncertainty estimates for the geoacoustic inverse problem.
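
    Under the stated i.i.d. Gaussian error assumption, the maximum-likelihood data variance has a closed form, which is what makes the inversion practical; a minimal sketch in generic notation (not the FGS code) follows.

    ```python
    # ML variance and the corresponding profiled Gaussian log-likelihood.
    import numpy as np

    def ml_variance(d_obs, d_pred):
        return np.mean((d_obs - d_pred) ** 2)    # sigma^2_ML = ||d - d(m)||^2 / N

    def profiled_log_likelihood(d_obs, d_pred):
        N = d_obs.size
        s2 = ml_variance(d_obs, d_pred)
        return -0.5 * N * (np.log(2 * np.pi * s2) + 1.0)
    ```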

  19. Comparative analysis of existing models for power-grid synchronization

    NASA Astrophysics Data System (ADS)

    Nishikawa, Takashi; Motter, Adilson E.

    2015-01-01

    The dynamics of power-grid networks is becoming an increasingly active area of research within the physics and network science communities. The results from such studies are typically insightful and illustrative, but are often based on simplifying assumptions that can be either difficult to assess or not fully justified for realistic applications. Here we perform a comprehensive comparative analysis of three leading models recently used to study synchronization dynamics in power-grid networks—a fundamental problem of practical significance given that frequency synchronization of all power generators in the same interconnection is a necessary condition for a power grid to operate. We show that each of these models can be derived from first principles within a common framework based on the classical model of a generator, thereby clarifying all assumptions involved. This framework allows us to view power grids as complex networks of coupled second-order phase oscillators with both forcing and damping terms. Using simple illustrative examples, test systems, and real power-grid datasets, we study the inherent frequencies of the oscillators as well as their coupling structure, comparing across the different models. We demonstrate, in particular, that if the network structure is not homogeneous, generators with identical parameters need to be modeled as non-identical oscillators in general. We also discuss an approach to estimate the required (dynamical) system parameters that are unavailable in typical power-grid datasets, their use for computing the constants of each of the three models, and an open-source MATLAB toolbox that we provide for these computations.
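
    The common framework referred to above is a network of second-order phase oscillators (swing equations); a minimal simulation sketch with invented parameters follows.

    ```python
    # Swing-equation network:
    # M_i th_i'' + D_i th_i' = P_i - sum_j K_ij sin(th_i - th_j)
    import numpy as np
    from scipy.integrate import solve_ivp

    M = np.array([1.0, 1.0, 1.2])              # inertia constants (illustrative)
    D = np.array([0.5, 0.5, 0.6])              # damping
    P = np.array([1.0, 0.5, -1.5])             # net power injections (sum to zero)
    K = 2.0 * (np.ones((3, 3)) - np.eye(3))    # coupling strengths

    def swing(t, y):
        theta, omega = y[:3], y[3:]
        coupling = (K * np.sin(theta[:, None] - theta[None, :])).sum(axis=1)
        return np.concatenate([omega, (P - D * omega - coupling) / M])

    sol = solve_ivp(swing, (0.0, 50.0), np.zeros(6), rtol=1e-8)
    print(sol.y[3:, -1])   # generator frequencies settle to a common value
    ```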

  20. Improvements to Fidelity, Generation and Implementation of Physics-Based Lithium-Ion Reduced-Order Models

    NASA Astrophysics Data System (ADS)

    Rodriguez Marco, Albert

    Battery management systems (BMS) require computationally simple but highly accurate models of the battery cells they are monitoring and controlling. Historically, empirical equivalent-circuit models have been used, but increasingly researchers are focusing their attention on physics-based models due to their greater predictive capabilities. These models are of high intrinsic computational complexity and so must undergo some kind of order-reduction process to make their use by a BMS feasible: we favor methods based on a transfer-function approach to battery cell dynamics. In prior works, transfer functions have been found from full-order PDE models via two simplifying assumptions: (1) a linearization assumption--which is a fundamental necessity in order to make transfer functions--and (2) an assumption made out of expedience that decouples the electrolyte-potential and electrolyte-concentration PDEs in order to make the transfer functions solvable from the PDEs. This dissertation improves the fidelity of physics-based models by eliminating the need for the second assumption and by linearizing nonlinear dynamics around different constant currents. Electrochemical transfer functions are infinite-order and cannot be expressed as a ratio of polynomials in the Laplace variable s. Thus, for practical use, these systems need to be approximated using reduced-order models that capture the most significant dynamics. This dissertation improves the generation of physics-based reduced-order models by introducing different realization algorithms, which produce a low-order model from the infinite-order electrochemical transfer functions. Physics-based reduced-order models are linear and describe cell dynamics when operated near the setpoint at which they were generated. Hence, multiple physics-based reduced-order models need to be generated at different setpoints (i.e., state of charge, temperature, and C-rate) in order to extend the cell operating range. This dissertation improves the implementation of physics-based reduced-order models by introducing different blending approaches that combine the pre-computed models generated (offline) at different setpoints in order to produce good electrochemical estimates (online) across the cell's state-of-charge, temperature, and C-rate range.
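
    The blending step described at the end admits a very small sketch: outputs from ROMs pre-computed at bracketing setpoints are combined with weights that vary with the operating point. The setpoints and output values below are placeholders, and linear weighting is only one possible blending choice.

    ```python
    # Linear output blending between two ROMs generated at bracketing SOC setpoints.
    def blend_outputs(soc, soc_lo, soc_hi, y_lo, y_hi):
        w = (soc - soc_lo) / (soc_hi - soc_lo)     # 0 at soc_lo, 1 at soc_hi
        return (1.0 - w) * y_lo + w * y_hi

    # e.g. voltage estimates from ROMs generated at 20% and 40% SOC:
    print(blend_outputs(soc=0.30, soc_lo=0.20, soc_hi=0.40, y_lo=3.55, y_hi=3.68))
    ```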

  1. Fission product ion exchange between zeolite and a molten salt

    NASA Astrophysics Data System (ADS)

    Gougar, Mary Lou D.

    The electrometallurgical treatment of spent nuclear fuel (SNF) has been developed at Argonne National Laboratory (ANL) and has been demonstrated through processing the sodium-bonded SNF from the Experimental Breeder Reactor-II in Idaho. In this process, components of the SNF, including U and species more chemically active than U, are oxidized into a bath of lithium-potassium chloride (LiCl-KCl) eutectic molten salt. Uranium is removed from the salt solution by electrochemical reduction. The noble metals and inactive fission products from the SNF remain as solids and are melted into a metal waste form after removal from the molten salt bath. The remaining salt solution contains most of the fission products and transuranic elements from the SNF. One technique that has been identified for removing these fission products and extending the usable life of the molten salt is ion exchange with zeolite A. A model has been developed and tested for its ability to describe the ion exchange of fission product species between zeolite A and a molten salt bath used for pyroprocessing of spent nuclear fuel. The model assumes (1) a system at equilibrium, (2) immobilization of species from the process salt solution via both ion exchange and occlusion in the zeolite cage structure, and (3) chemical independence of the process salt species. The first assumption simplifies the description of this physical system by eliminating the complications of including time-dependent variables. An equilibrium state between species concentrations in the two exchange phases is a common basis for ion exchange models found in the literature. Assumption two is non-simplifying with respect to the mathematical expression of the model. Two Langmuir-like fractional terms (one for each mode of immobilization) compose each equation describing each salt species. The third assumption offers great simplification over more traditional ion exchange modeling, in which interaction of solvent species with each other is considered. (Abstract shortened by UMI.)

  2. SU-E-T-293: Simplifying Assumption for Determining Sc and Sp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, R; Cheung, A; Anderson, R

    Purpose: Scp(mlc,jaw) is a two-dimensional function of collimator field size and effective field size. Conventionally, Scp(mlc,jaw) is treated as separable into components Sc(jaw) and Sp(mlc). Scp(mlc=jaw) is measured in phantom and Sc(jaw) is measured in air with Sp=Scp/Sc. Ideally, Sc and Sp would be able to predict measured values of Scp(mlc,jaw) for all combinations of mlc and jaw. However, ideal Sc and Sp functions do not exist and a measured two-dimensional Scp dataset cannot be decomposed into a unique pair of one-dimensional functions.If the output functions Sc(jaw) and Sp(mlc) were equal to each other and thus each equal to Scp(mlc=jaw){supmore » 0.5}, this condition would lead to a simpler measurement process by eliminating the need for in-air measurements. Without the distorting effect of the buildup-cap, small-field measurement would be limited only by the dimensions of the detector and would thus be improved by this simplification of the output functions. The goal of the present study is to evaluate an assumption that Sc=Sp. Methods: For a 6 MV x-ray beam, Sc and Sp were determined both by the conventional method and as Scp(mlc=jaw){sup 0.5}. Square field benchmark values of Scp(mlc,jaw) were then measured across the range from 2×2 to 29×29. Both Sc and Sp functions were then evaluated as to their ability to predict these measurements. Results: Both methods produced qualitatively similar results with <4% error for all cases and >3% error in 1 case. The conventional method produced 2 cases with >2% error, while the squareroot method produced only 1 such case. Conclusion: Though it would need to be validated for any specific beam to which it might be applied, under the conditions studied, the simplifying assumption that Sc = Sp is justified.« less

  3. Snow Physics and Meltwater Hydrology of the SSiB Model Employed for Climate Simulation Studies with GEOS 2 GCM

    NASA Technical Reports Server (NTRS)

    Mocko, David M.; Sud, Y. C.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Present-day climate models produce large climate drifts that interfere with the climate signals simulated in modelling studies. The simplifying assumptions of the physical parameterization of snow and ice processes lead to large biases in the annual cycles of surface temperature, evapotranspiration, and the water budget, which in turn causes erroneous land-atmosphere interactions. Since land processes are vital for climate prediction, and snow and snowmelt processes have been shown to affect Indian monsoons and North American rainfall and hydrology, special attention is now being given to cold land processes and their influence on the simulated annual cycle in GCMs. The snow model of the SSiB land-surface model being used at Goddard has evolved from a unified single snow-soil layer interacting with a deep soil layer through a force-restore procedure to a two-layer snow model atop a ground layer separated by a snow-ground interface. When the snow cover is deep, force-restore occurs within the snow layers. However, several other simplifying assumptions such as homogeneous snow cover, an empirical depth related surface albedo, snowmelt and melt-freeze in the diurnal cycles, and neglect of latent heat of soil freezing and thawing still remain as nagging problems. Several important influences of these assumptions will be discussed with the goal of improving them to better simulate the snowmelt and meltwater hydrology. Nevertheless, the current snow model (Mocko and Sud, 2000, submitted) better simulates cold land processes as compared to the original SSiB. This was confirmed against observations of soil moisture, runoff, and snow cover in global GSWP (Sud and Mocko, 1999) and point-scale Valdai simulations over seasonal snow regions. New results from the current snow model SSiB from the 10-year PILPS 2e intercomparison in northern Scandinavia will be presented.

  4. Rethinking Use of the OML Model in Electric Sail Development

    NASA Technical Reports Server (NTRS)

    Stone, Nobie H.

    2016-01-01

    In 1924, Irving Langmuir and H. M. Mott-Smith published a theoretical model for the complex plasma sheath phenomenon in which they identified some very special cases which greatly simplified the sheath and allowed a closed solution to the problem. The most widely used application is for an electrostatic, or "Langmuir," probe in a laboratory plasma. Although the Langmuir probe is physically simple (a biased wire), the theory describing its functional behavior and its current-voltage characteristic is extremely complex; accordingly, a number of assumptions and approximations are used in the LMS model. These simplifications, correspondingly, place limits on the model's range of application. Adapting the LMS model to real-life conditions is the subject of numerous papers and dissertations. The Orbit-Motion-Limited (OML) model that is widely used today is one of these adaptations and is a convenient means of calculating sheath effects. Since the Langmuir probe is a simple biased wire immersed in plasma, it is particularly tempting to use the OML equation in calculating the characteristics of the long, highly biased wires of an Electric Sail in the solar wind plasma. However, in order to arrive at the OML equation, a number of additional simplifying assumptions and approximations (beyond those made by Langmuir and Mott-Smith) are necessary. The OML equation is a good approximation when all conditions are met, but it would appear that the Electric Sail problem lies outside of the limits of applicability.
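
    For context, the OML current to an attracting cylindrical probe is usually quoted in the form below (n: ambient density, A_p: probe collecting area, T and m: temperature and mass of the attracted species, V: bias magnitude). Conventions vary across texts, so treat this as one common statement rather than the paper's own formula.

    ```latex
    I \;=\; \underbrace{e\,n\,A_{p}\sqrt{\frac{kT}{2\pi m}}}_{I_{\mathrm{th}}}\;
            \frac{2}{\sqrt{\pi}}\,\sqrt{1+\frac{eV}{kT}} .
    ```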

  5. Direct Position Determination of Unknown Signals in the Presence of Multipath Propagation

    PubMed Central

    Du, Jianping; Wang, Ding; Yu, Wanting; Yu, Hongyi

    2018-01-01

    A novel geolocation architecture, termed “Multiple Transponders and Multiple Receivers for Multiple Emitters Positioning System (MTRE)” is proposed in this paper. Existing Direct Position Determination (DPD) methods take advantage of a rather simple channel assumption (line of sight channels with complex path attenuations) and a simplified MUltiple SIgnal Classification (MUSIC) algorithm cost function to avoid the high dimension searching. We point out that the simplified assumption and cost function reduce the positioning accuracy because of the singularity of the array manifold in a multi-path environment. We present a DPD model for unknown signals in the presence of Multi-path Propagation (MP-DPD) in this paper. MP-DPD adds non-negative real path attenuation constraints to avoid the mistake caused by the singularity of the array manifold. The Multi-path Propagation MUSIC (MP-MUSIC) method and the Active Set Algorithm (ASA) are designed to reduce the dimension of searching. A Multi-path Propagation Maximum Likelihood (MP-ML) method is proposed in addition to overcome the limitation of MP-MUSIC in the sense of a time-sensitive application. An iterative algorithm and an approach of initial value setting are given to make the MP-ML time consumption acceptable. Numerical results validate the performances improvement of MP-MUSIC and MP-ML. A closed form of the Cramér–Rao Lower Bound (CRLB) is derived as a benchmark to evaluate the performances of MP-MUSIC and MP-ML. PMID:29562601

  6. Direct Position Determination of Unknown Signals in the Presence of Multipath Propagation.

    PubMed

    Du, Jianping; Wang, Ding; Yu, Wanting; Yu, Hongyi

    2018-03-17

    A novel geolocation architecture, termed "Multiple Transponders and Multiple Receivers for Multiple Emitters Positioning System (MTRE)" is proposed in this paper. Existing Direct Position Determination (DPD) methods take advantage of a rather simple channel assumption (line of sight channels with complex path attenuations) and a simplified MUltiple SIgnal Classification (MUSIC) algorithm cost function to avoid the high dimension searching. We point out that the simplified assumption and cost function reduce the positioning accuracy because of the singularity of the array manifold in a multi-path environment. We present a DPD model for unknown signals in the presence of Multi-path Propagation (MP-DPD) in this paper. MP-DPD adds non-negative real path attenuation constraints to avoid the mistake caused by the singularity of the array manifold. The Multi-path Propagation MUSIC (MP-MUSIC) method and the Active Set Algorithm (ASA) are designed to reduce the dimension of searching. A Multi-path Propagation Maximum Likelihood (MP-ML) method is proposed in addition to overcome the limitation of MP-MUSIC in the sense of a time-sensitive application. An iterative algorithm and an approach of initial value setting are given to make the MP-ML time consumption acceptable. Numerical results validate the performances improvement of MP-MUSIC and MP-ML. A closed form of the Cramér-Rao Lower Bound (CRLB) is derived as a benchmark to evaluate the performances of MP-MUSIC and MP-ML.
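
    The noise-subspace projection at the heart of MUSIC-style cost functions is compact; a generic direction-finding sketch follows (a plain MUSIC spectrum for a uniform linear array, not MP-MUSIC's constrained position search).

    ```python
    # Generic MUSIC: peaks of 1 / ||E_n^H a(theta)||^2 over candidate parameters.
    import numpy as np
    from scipy.signal import find_peaks

    def music_spectrum(R, steering_vectors, n_sources):
        _, vecs = np.linalg.eigh(R)                  # eigenvalues in ascending order
        En = vecs[:, :-n_sources]                    # noise-subspace eigenvectors
        return np.array([1.0 / np.linalg.norm(En.conj().T @ (a / np.linalg.norm(a))) ** 2
                         for a in steering_vectors])

    m, rng = 8, np.random.default_rng(0)
    def steer(theta_deg):                            # half-wavelength ULA steering vector
        return np.exp(-1j * np.pi * np.arange(m) * np.sin(np.deg2rad(theta_deg)))

    S = rng.normal(size=(2, 500)) + 1j * rng.normal(size=(2, 500))
    noise = 0.1 * (rng.normal(size=(m, 500)) + 1j * rng.normal(size=(m, 500)))
    X = np.stack([steer(-20.0), steer(30.0)], axis=1) @ S + noise
    R = X @ X.conj().T / 500.0

    grid = np.arange(-90.0, 90.5, 1.0)
    spec = music_spectrum(R, [steer(g) for g in grid], n_sources=2)
    peaks, _ = find_peaks(spec)
    print(np.sort(grid[peaks[np.argsort(spec[peaks])[-2:]]]))   # approx. [-20., 30.]
    ```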

  7. Quantum-like dynamics applied to cognition: a consideration of available options

    NASA Astrophysics Data System (ADS)

    Broekaert, Jan; Basieva, Irina; Blasiak, Pawel; Pothos, Emmanuel M.

    2017-10-01

    Quantum probability theory (QPT) has provided a novel, rich mathematical framework for cognitive modelling, especially for situations which appear paradoxical from classical perspectives. This work concerns the dynamical aspects of QPT, as relevant to cognitive modelling. We aspire to shed light on how the mind's driving potentials (encoded in Hamiltonian and Lindbladian operators) impact the evolution of a mental state. Some existing QPT cognitive models do employ dynamical aspects when considering how a mental state changes with time, but it is often the case that several simplifying assumptions are introduced. What kind of modelling flexibility does QPT dynamics offer without any simplifying assumptions and is it likely that such flexibility will be relevant in cognitive modelling? We consider a series of nested QPT dynamical models, constructed with a view to accommodate results from a simple, hypothetical experimental paradigm on decision-making. We consider Hamiltonians more complex than the ones which have traditionally been employed with a view to explore the putative explanatory value of this additional complexity. We then proceed to compare simple models with extensions regarding both the initial state (e.g. a mixed state with a specific orthogonal decomposition; a general mixed state) and the dynamics (by introducing Hamiltonians which destroy the separability of the initial structure and by considering an open-system extension). We illustrate the relations between these models mathematically and numerically. This article is part of the themed issue `Second quantum revolution: foundational questions'.
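
    As a concrete instance of the kind of dynamics being compared, the sketch below evolves a two-level "mental state" under a Hamiltonian plus a Lindblad dissipator using QuTiP; the operators and rates are illustrative placeholders, not the paper's models.

    ```python
    # Open-system (Lindblad) evolution of a qubit-like state with QuTiP.
    import numpy as np
    from qutip import basis, sigmax, sigmaz, sigmam, mesolve

    H = 0.5 * 2 * np.pi * sigmax()          # driving potential (Hamiltonian)
    c_ops = [np.sqrt(0.1) * sigmam()]       # Lindbladian decay channel
    psi0 = basis(2, 0)                      # initial state
    times = np.linspace(0.0, 10.0, 200)

    result = mesolve(H, psi0, times, c_ops, e_ops=[sigmaz()])
    print(result.expect[0][-1])             # late-time <sigma_z>
    ```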

  8. On the Weyl anomaly of 4D conformal higher spins: a holographic approach

    NASA Astrophysics Data System (ADS)

    Acevedo, S.; Aros, R.; Bugini, F.; Diaz, D. E.

    2017-11-01

    We present a first attempt to derive the full (type-A and type-B) Weyl anomaly of four dimensional conformal higher spin (CHS) fields in a holographic way. We obtain the type-A and type-B Weyl anomaly coefficients for the whole family of 4D CHS fields from the one-loop effective action for massless higher spin (MHS) Fronsdal fields evaluated on a 5D bulk Poincaré-Einstein metric with an Einstein metric on its conformal boundary. To gain access to the type-B anomaly coefficient we assume, for practical reasons, a Lichnerowicz-type coupling of the bulk Fronsdal fields with the bulk background Weyl tensor. Remarkably enough, our holographic findings under this simplifying assumption are certainly not unknown: they match the results previously found on the boundary counterpart under the assumption of factorization of the CHS higher-derivative kinetic operator into Laplacians of "partially massless" higher spins on Einstein backgrounds.

  9. Review of Integrated Noise Model (INM) Equations and Processes

    NASA Technical Reports Server (NTRS)

    Shepherd, Kevin P. (Technical Monitor); Forsyth, David W.; Gulding, John; DiPardo, Joseph

    2003-01-01

    The FAA's Integrated Noise Model (INM) relies on the methods of SAE AIR-1845, 'Procedure for the Calculation of Airplane Noise in the Vicinity of Airports,' issued in 1986. Simplifying assumptions for aerodynamics and noise calculation were made in the SAE standard and the INM based on the limited computing power commonly available then. The key objectives of this study are 1) to test some of those assumptions against Boeing source data, and 2) to automate the manufacturer's methods of data development to enable the maintenance of a consistent INM database over time. These new automated tools were used to generate INM database submissions for six airplane types: 737-700 (CFM56-7 24K), 767-400ER (CF6-80C2BF), 777-300 (Trent 892), 717-200 (BR715), 757-300 (RR535E4B), and 737-800 (CFM56-7 26K).

  10. Nonlinear Curvature Expressions for Combined Flapwise Bending, Chordwise Bending, Torsion and Extension of Twisted Rotor Blades

    NASA Technical Reports Server (NTRS)

    Kvaternik, R. G.; Kaza, K. R. V.

    1976-01-01

    The nonlinear curvature expressions for a twisted rotor blade or a beam undergoing transverse bending in two planes, torsion, and extension were developed. The curvature expressions were obtained using simple geometric considerations. The expressions were first developed in a general manner using the geometrical nonlinear theory of elasticity. These general nonlinear expressions were then systematically reduced to four levels of approximation by imposing various simplifying assumptions, and in each of these levels the second degree nonlinear expressions were given. The assumptions were carefully stated and their implications with respect to the nonlinear theory of elasticity as applied to beams were pointed out. The transformation matrices between the deformed and undeformed blade-fixed coordinates, which were needed in the development of the curvature expressions, were also given for three of the levels of approximation. The present curvature expressions and transformation matrices were compared with corresponding expressions existing in the literature.

  11. Monocular correspondence detection for symmetrical objects by template matching

    NASA Astrophysics Data System (ADS)

    Vilmar, G.; Besslich, Philipp W., Jr.

    1990-09-01

    We describe a possibility to reconstruct 3-D information from a single view of a 3-D bilaterally symmetric object. The symmetry assumption allows us to obtain a "second view" from a different viewpoint by a simple reflection of the monocular image. Therefore, we have to solve the correspondence problem in a special case where known feature-based or area-based binocular approaches fail. In principle, our approach is based on a frequency-domain template matching of the features on the epipolar lines. During a training period, our system "learns" the assignment of correspondence models to image features. The object shape is interpolated when no template matches the image features. This fact is an important advantage of this methodology because no "real world" image holds the symmetry assumption perfectly. To simplify the training process we used single views of human faces (e.g. passport photos), but our system is trainable on any other kind of objects.

  12. Effects of distributed database modeling on evaluation of transaction rollbacks

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. The effect of modeling assumptions on the evaluation of one such measure, the number of transaction rollbacks in a partitioned distributed database system, is studied. Six probabilistic models were developed, along with expressions for the number of rollbacks under each of these models. Essentially, the models differ in terms of the available system information. The analytical results so obtained are compared to results from simulation. From this comparison, it is concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly undermined when such models are employed.

  13. Effects of distributed database modeling on evaluation of transaction rollbacks

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. Here, researchers investigate the effect of modeling assumptions on the evaluation of one such measure, the number of transaction rollbacks in a partitioned distributed database system. The researchers developed six probabilistic models and expressions for the number of rollbacks under each of these models. Essentially, the models differ in terms of the available system information. The analytical results obtained are compared to results from simulation. It was concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly undermined when such models are employed.

  14. A genuinely discontinuous approach for multiphase EHD problems

    NASA Astrophysics Data System (ADS)

    Natarajan, Mahesh; Desjardins, Olivier

    2017-11-01

    Electrohydrodynamics (EHD) involves solving the Poisson equation for the electric field potential. For multiphase flows, although the electric field potential is a continuous quantity, the discontinuity in the electric permittivity between the phases means that additional jump conditions for the normal and tangential components of the electric field need to be satisfied at the interface. All approaches to date either ignore the jump conditions or involve simplifying assumptions, and hence yield unconvincing results even for simple test problems. In the present work, we develop a genuinely discontinuous approach for the Poisson equation for multiphase flows using a Finite Volume Unsplit Volume of Fluid method. The governing equation and the jump conditions without assumptions are used to develop the method, and its efficiency is demonstrated by comparison of the numerical results with canonical test problems having exact solutions.
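
    For reference, the interface conditions involved can be written compactly in standard notation (Γ the interface, n its normal, q_s any surface free charge, and the double brackets denoting the jump across Γ; sign conventions vary):

    ```latex
    \nabla\cdot(\varepsilon\nabla\phi) = -\rho_{f}\ \text{in each phase},\qquad
    [\![\phi]\!] = 0\ \text{on }\Gamma,\qquad
    [\![\,\varepsilon\,\nabla\phi\cdot\mathbf{n}\,]\!] = -q_{s}\ \text{on }\Gamma ,
    ```

    with continuity of the potential implying continuity of the tangential electric field.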

  15. 48 CFR 2419.803-70 - Procedures for simplified acquisitions under the partnership agreement.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Business Administration Section (8)(a) Program 2419.803-70 Procedures for simplified acquisitions under the... are required. (2) The contracting officer will use the Central Contractor Registration (CCR) database...

  16. 48 CFR 2419.803-70 - Procedures for simplified acquisitions under the partnership agreement.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Business Administration Section (8)(a) Program 2419.803-70 Procedures for simplified acquisitions under the... are required. (2) The contracting officer will use the Central Contractor Registration (CCR) database...

  17. Evolution of Requirements and Assumptions for Future Exploration Missions

    NASA Technical Reports Server (NTRS)

    Anderson, Molly; Sargusingh, Miriam; Perry, Jay

    2017-01-01

    NASA programs are maturing technologies, systems, and architectures to enable future exploration missions. To increase fidelity as technologies mature, developers must make assumptions that represent the requirements of a future program. Multiple efforts have begun to define these requirements, including teams' internal assumptions, planning for system integration in early demonstrations, and discussions between international partners planning future collaborations. For many detailed life support system requirements, existing NASA documents set limits of acceptable values, but a future vehicle may be constrained in other ways and select a limited range of conditions. Other requirements are effectively set by interfaces or operations and may be different for the same technology depending on whether the hardware is a demonstration system on the International Space Station or a critical component of a future vehicle. This paper highlights key assumptions representing potential life support requirements and explains the driving scenarios, constraints, or other issues behind them.

  18. Integrating the social sciences to understand human-water dynamics

    NASA Astrophysics Data System (ADS)

    Carr, G.; Kuil, L., Jr.

    2017-12-01

    Many interesting and exciting socio-hydrological models have been developed in recent years. Such models often aim to capture the dynamic interplay between people and water for a variety of hydrological settings. As such, people's behaviours and decisions are brought into the models as drivers of and/or respondents to the hydrological system. To develop and run such models over a sufficiently long time duration to observe how the water-human system evolves, the human component is often simplified according to one or two key behaviours, characteristics, or decisions (e.g. a decision to move away from a drought or flood area, a decision to pump groundwater, or a decision to plant a less water-demanding crop). To simplify the social component, socio-hydrological modellers often pull knowledge and understanding from existing social science theories. This requires them to negotiate complex territory, where social theories may be underdeveloped, contested, dynamically evolving, or case-specific and difficult to generalise or upscale. A key question is therefore: how can this process be supported so that the resulting socio-hydrological models adequately describe the system and lead to meaningful understanding of how and why it behaves as it does? Collaborative interdisciplinary research teams that bring together social and natural scientists are likely to be critical. Joint development of the model framework requires specific attention to clarifying and exposing all underlying assumptions, together with constructive discussion and negotiation to reach agreement on the modelled system and its boundaries. Mutual benefits to social scientists can be highlighted; that is, socio-hydrological work can provide insights for further exploring and testing social theories. Collaborative work will also help ensure underlying social theory is made explicit, and may identify ways to include and compare multiple theories. As socio-hydrology progresses towards supporting policy development, approaches that bring in stakeholders and non-scientist participants to develop the conceptual modelling framework will become essential. They are also critical for fully understanding human-water dynamics.

  19. Normalized lift: an energy interpretation of the lift coefficient simplifies comparisons of the lifting ability of rotating and flapping surfaces.

    PubMed

    Burgers, Phillip; Alexander, David E

    2012-01-01

    For a century, researchers have used the standard lift coefficient CL to evaluate the lift, L, generated by fixed wings over an area S against dynamic pressure, ½ρv², where v is the effective velocity of the wing. Because the lift coefficient was developed initially for fixed wings in steady flow, its application to other lifting systems requires either simplifying assumptions or complex adjustments, as is the case for flapping wings and rotating cylinders. This paper interprets the standard lift coefficient of a fixed wing slightly differently, as the work exerted by the wing on the surrounding flow field (L/ρ·S), compared against the total kinetic energy required for generating said lift, ½v². This reinterpreted coefficient, the normalized lift, is derived from the work-energy theorem and compares the lifting capabilities of dissimilar lift systems on a similar energy footing. The normalized lift is the same as the standard lift coefficient for fixed wings, but differs for wings with more complex motions; it also accounts for such complex motions explicitly and without complex modifications or adjustments. We compare the normalized lift with the previously-reported values of lift coefficient for a rotating cylinder in Magnus effect, a bat during hovering and forward flight, and a hovering dipteran. The maximum standard lift coefficient for a fixed wing without flaps in steady flow is around 1.5, yet for a rotating cylinder it may exceed 9.0, a value that implies that a rotating cylinder generates nearly 6 times the maximum lift of a wing. The maximum normalized lift for a rotating cylinder is 1.5. We suggest that the normalized lift can be used to evaluate propellers, rotors, flapping wings of animals and micro air vehicles, and underwater thrust-generating fins in the same way the lift coefficient is currently used to evaluate fixed wings.
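
    The arithmetic behind the cylinder comparison can be sketched in a few lines. The example below is a minimal illustration, not the authors' exact bookkeeping: it assumes the normalizing kinetic energy adds the cylinder's surface speed in quadrature to the free-stream speed (a spin ratio of 2 is chosen arbitrarily), which collapses the cylinder's CL of about 9 toward a normalized value of order one.

```python
# Standard lift coefficient vs. "normalized lift" for a spinning
# cylinder. Illustrative bookkeeping only: the normalizing kinetic
# energy is assumed to add the surface speed v_s in quadrature to the
# free-stream speed, with a spin ratio of 2 chosen arbitrarily.

rho, v_inf, S = 1.225, 10.0, 0.5          # air density, speed, ref. area
L = 9.0 * 0.5 * rho * v_inf**2 * S        # lift of a cylinder with CL = 9

# The standard coefficient charges lift against free-stream dynamic pressure.
C_L = L / (0.5 * rho * v_inf**2 * S)

# Normalized lift charges the work on the flow (L / rho S) against the
# total specific kinetic energy, including the spin contribution.
v_s = 2.0 * v_inf
C_norm = (L / (rho * S)) / (0.5 * (v_inf**2 + v_s**2))
print(f"standard CL = {C_L:.1f}, normalized lift = {C_norm:.1f}")
```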

  20. Normalized Lift: An Energy Interpretation of the Lift Coefficient Simplifies Comparisons of the Lifting Ability of Rotating and Flapping Surfaces

    PubMed Central

    Burgers, Phillip; Alexander, David E.

    2012-01-01

    For a century, researchers have used the standard lift coefficient CL to evaluate the lift, L, generated by fixed wings over an area S against dynamic pressure, ½ρv², where v is the effective velocity of the wing. Because the lift coefficient was developed initially for fixed wings in steady flow, its application to other lifting systems requires either simplifying assumptions or complex adjustments, as is the case for flapping wings and rotating cylinders. This paper interprets the standard lift coefficient of a fixed wing slightly differently, as the work exerted by the wing on the surrounding flow field (L/ρ·S), compared against the total kinetic energy required for generating said lift, ½v². This reinterpreted coefficient, the normalized lift, is derived from the work-energy theorem and compares the lifting capabilities of dissimilar lift systems on a similar energy footing. The normalized lift is the same as the standard lift coefficient for fixed wings, but differs for wings with more complex motions; it also accounts for such complex motions explicitly and without complex modifications or adjustments. We compare the normalized lift with the previously-reported values of lift coefficient for a rotating cylinder in Magnus effect, a bat during hovering and forward flight, and a hovering dipteran. The maximum standard lift coefficient for a fixed wing without flaps in steady flow is around 1.5, yet for a rotating cylinder it may exceed 9.0, a value that implies that a rotating cylinder generates nearly 6 times the maximum lift of a wing. The maximum normalized lift for a rotating cylinder is 1.5. We suggest that the normalized lift can be used to evaluate propellers, rotors, flapping wings of animals and micro air vehicles, and underwater thrust-generating fins in the same way the lift coefficient is currently used to evaluate fixed wings. PMID:22629326

  1. 48 CFR 32.003 - Simplified acquisition procedures financing.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... procedures financing. 32.003 Section 32.003 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL CONTRACTING REQUIREMENTS CONTRACT FINANCING 32.003 Simplified acquisition procedures financing. Unless agency regulations otherwise permit, contract financing shall not be provided for...

  2. The Excursion Set Theory of Halo Mass Functions, Halo Clustering, and Halo Growth

    NASA Astrophysics Data System (ADS)

    Zentner, Andrew R.

    I review the excursion set theory with particular attention toward applications to cold dark matter halo formation and growth, halo abundance, and halo clustering. After a brief introduction to notation and conventions, I begin by recounting the heuristic argument leading to the mass function of bound objects given by Press and Schechter. I then review the more formal derivation of the Press-Schechter halo mass function that makes use of excursion sets of the density field. The excursion set formalism is powerful and can be applied to numerous other problems. I review the excursion set formalism for describing both halo clustering and bias and the properties of void regions. As one of the most enduring legacies of the excursion set approach and one of its most common applications, I spend considerable time reviewing the excursion set theory of halo growth. This section of the review culminates with the description of two Monte Carlo methods for generating ensembles of halo mass accretion histories. In the last section, I emphasize that the standard excursion set approach is the result of several simplifying assumptions. Dropping these assumptions can lead to more faithful predictions and open excursion set theory to new applications. One such assumption is that the height of the barriers that define collapsed objects is a constant function of scale. I illustrate the implementation of the excursion set approach for barriers of arbitrary shape. One such application is the now well-known improvement of the excursion set mass function derived from the "moving" barrier for ellipsoidal collapse. I also emphasize that the statement that halo accretion histories are independent of halo environment in the excursion set approach is not a general prediction of the theory. It is a simplifying assumption. I review the method for constructing correlated random walks of the density field in the more general case. I construct a simple toy model to illustrate that excursion set theory (with a constant barrier height) makes a simple and general prediction for the relation between halo accretion histories and the large-scale environments of halos: regions of high density preferentially contain late-forming halos and conversely for regions of low density. I conclude with a brief discussion of the importance of this prediction relative to recent numerical studies of the environmental dependence of halo properties.
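
    A minimal Monte Carlo version of the constant-barrier excursion set calculation is easy to write down. The sketch below is an illustration with step size and walk count chosen for convenience: it generates sharp k-space-filter random walks and checks the fraction absorbed by the barrier against the analytic Press-Schechter first-crossing fraction.

```python
import numpy as np
from scipy.special import erfc

# Excursion-set walks with independent Gaussian steps in the variance S
# (sharp k-space filter), absorbed at the constant spherical-collapse
# barrier delta_c. The absorbed fraction should approach the
# Press-Schechter prediction erfc(delta_c / sqrt(2 S_max)).

rng = np.random.default_rng(0)
delta_c, dS, n_steps, n_walks = 1.686, 0.01, 1000, 20000

delta = np.zeros(n_walks)
alive = np.ones(n_walks, dtype=bool)      # walks not yet absorbed
for _ in range(n_steps):
    delta[alive] += rng.normal(0.0, np.sqrt(dS), alive.sum())
    alive &= delta < delta_c              # absorb walks at the barrier

S_max = n_steps * dS
print("Monte Carlo crossing fraction:", 1.0 - alive.mean())
print("Press-Schechter prediction   :", erfc(delta_c / np.sqrt(2.0 * S_max)))
```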

  3. Design data needs modular high-temperature gas-cooled reactor. Revision 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1987-03-01

    The Design Data Needs (DDNs) provide program management with summary statements of the designer's need for experimental data to confirm or validate assumptions made in the design. These assumptions were developed using the Integrated Approach and are tabulated in the Functional Analysis Report. The assumptions were also necessary in the analyses or trade studies (A/TS) used to select hardware designs or design requirements. Each DDN includes statements providing traceability to the function and to the associated assumption that gives rise to the need.

  4. Predator-prey Encounter Rates in Turbulent Environments: Consequences of Inertia Effects and Finite Sizes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pecseli, H. L.; Trulsen, J.

    2009-10-08

    Experimental as well as theoretical studies have demonstrated that turbulence can play an important role for the biosphere in marine environments, in particular by affecting predator-prey encounter rates. Reference models for the encounter rates rely on the simplifying assumption that predators and prey can be described as point particles moving passively with the local flow velocity. Based on simple arguments that can be tested experimentally, we propose corrections to the standard expression for the encounter rates that account for finite sizes and Stokes drag effects.

  5. Calculation of load distribution in stiffened cylindrical shells

    NASA Technical Reports Server (NTRS)

    Ebner, H; Koller, H

    1938-01-01

    Thin-walled shells with strong longitudinal and transverse stiffening (for example, stressed-skin fuselages and wings) may, under certain simplifying assumptions, be treated as static systems with finite redundancies. In this report the underlying basis for this method of treating the problem is presented, and a computation procedure for stiffened cylindrical shells with curved sheet panels is indicated. A detailed discussion of the force distribution due to applied concentrated forces is given, and the discussion is illustrated by numerical examples referring to an experimentally investigated circular cylindrical shell.

  6. Orbital geocentric oddness. (French Title: Bizarreries orbitales géocentriques)

    NASA Astrophysics Data System (ADS)

    Bassinot, E.

    2013-09-01

    The purpose of this essay is to determine the geocentric path of our superior neighbour, the planet Mars, named after the god of war. In other words, the question is: seen from our blue planet, what is the orbit of the red one? Based upon three simplifying and justified assumptions, it is proved here, with a purely geometrical approach, that Mars describes a curve very close to the well-known limaçon of Pascal. The loop in this curve readily explains the apparently erratic behaviour of Mars.
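
    In the same spirit of simplification, the geocentric locus is straightforward to generate numerically. The sketch below assumes circular, coplanar, uniformly traversed orbits with standard mean radii and periods; plotting (x, y) over one synodic period shows the limaçon-like loop responsible for apparent retrograde motion.

```python
import numpy as np

# Geocentric position of Mars = heliocentric Mars - heliocentric Earth,
# under three idealizations: circular orbits, coplanar orbits, uniform
# angular motion. Radii (AU) and periods (yr) are standard mean values.

t = np.linspace(0.0, 2.135, 2000)   # one synodic period, years
r_e, T_e = 1.000, 1.000             # Earth
r_m, T_m = 1.524, 1.881             # Mars

earth = r_e * np.exp(2j * np.pi * t / T_e)
mars = r_m * np.exp(2j * np.pi * t / T_m)
geo = mars - earth                  # complex x + iy, as seen from Earth

# The curve traced by (geo.real, geo.imag) closes into a limacon-like
# loop near opposition, where the apparent motion turns retrograde.
print(f"closest approach: {np.abs(geo).min():.3f} AU")
```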

  7. Stress Analysis of Beams with Shear Deformation of the Flanges

    NASA Technical Reports Server (NTRS)

    Kuhn, Paul

    1937-01-01

    This report discusses the fundamental action of shear deformation of the flanges on the basis of simplifying assumptions. The theory is developed to the point of giving analytical solutions for simple cases of beams and of skin-stringer panels under axial load. Strain-gage tests on a tension panel and on a beam corresponding to these simple cases are described and the results are compared with analytical results. For wing beams, an approximate method of applying the theory is given. As an alternative, the construction of a mechanical analyzer is advocated.

  8. Aerodynamic effects of nearly uniform slipstreams on thin wings in the transonic regime

    NASA Technical Reports Server (NTRS)

    Rizk, M. H.

    1980-01-01

    A simplified model is used to describe the interaction between a propeller slipstream and a wing in the transonic regime. The undisturbed slipstream boundary is assumed to coincide with an infinite circular cylinder. The undisturbed slipstream velocity is rotational and is a function of the radius only. In general, the velocity perturbation caused by introducing a wing into the slipstream is also rotational. Under small-disturbance assumptions, however, the perturbation velocity becomes nearly potential, and an approximation for the flow is obtained by solving a potential equation.

  9. Interplanetary magnetic flux - Measurement and balance

    NASA Technical Reports Server (NTRS)

    Mccomas, D. J.; Gosling, J. T.; Phillips, J. L.

    1992-01-01

    A new method for determining the approximate amount of magnetic flux in various solar wind structures in the ecliptic (and solar rotation) plane is developed using single-spacecraft measurements in interplanetary space and making certain simplifying assumptions. The method removes the effect of solar wind velocity variations and can be applied to specific, limited-extent solar wind structures as well as to long-term variations. Over the 18-month interval studied, the ecliptic plane flux of coronal mass ejections was determined to be about 4 times greater than that of HFDs.

  10. A study of trends and techniques for space base electronics

    NASA Technical Reports Server (NTRS)

    Trotter, J. D.; Wade, T. E.; Gassaway, J. D.

    1979-01-01

    The use of dry processing and alternate dielectrics for processing wafers is reported. A two-dimensional modeling program was written for the simulation of short-channel MOSFETs with nonuniform substrate doping. A key simplifying assumption used is that the majority carriers can be represented by a sheet charge at the silicon dioxide-silicon interface. When solving the current continuity equation, the program does not converge; however, a solution of the two-dimensional Poisson equation for the potential distribution was achieved. The status of other 2D MOSFET simulation programs is summarized.

  11. The effect of the behavior of an average consumer on the public debt dynamics

    NASA Astrophysics Data System (ADS)

    De Luca, Roberto; Di Mauro, Marco; Falzarano, Angelo; Naddeo, Adele

    2017-09-01

    An important issue within the present economic crisis is understanding the dynamics of the public debt of a given country, and how the behavior of average consumers and tax payers in that country affects it. Starting from a model of the average consumer behavior introduced earlier by the authors, we propose a simple model to quantitatively address this issue. The model is then studied and analytically solved under some reasonable simplifying assumptions. In this way we obtain a condition under which the public debt steadily decreases.
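
    The authors' specific consumer model is not reproduced here, but the flavor of such an analysis can be conveyed with the textbook debt equation dD/dt = rD − s: debt falls steadily whenever the primary surplus s, which consumer and taxpayer behavior ultimately determines, exceeds the interest burden rD. All numbers below are placeholders.

```python
# Illustrative only -- not the paper's model. Public debt D evolves as
# dD/dt = r*D - s, with interest rate r and primary surplus s (taxes
# minus spending, set by average-consumer behavior in the full model).

r, s = 0.03, 0.05      # interest rate; assumed primary surplus
D, dt = 1.0, 0.1       # initial debt (GDP units); Euler time step

for step in range(500):            # integrate 50 years forward
    D += dt * (r * D - s)
# Since s > r*D throughout, the debt decreases steadily and is repaid.
print(f"debt after 50 years: {D:.3f}")
```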

  12. Calibration-free assays on standard real-time PCR devices

    PubMed Central

    Debski, Pawel R.; Gewartowski, Kamil; Bajer, Seweryn; Garstecki, Piotr

    2017-01-01

    Quantitative Polymerase Chain Reaction (qPCR) is one of the central techniques in molecular biology and an important tool in medical diagnostics. Although it is the gold standard, qPCR depends on reference measurements and is susceptible to large errors, caused by even small changes in reaction efficiency or conditions, that are typically not signaled by decreased precision. Digital PCR (dPCR) technologies should alleviate the need for calibration by providing absolute quantitation using binary (yes/no) signals from partitions, provided that the basic assumption of amplifying a single target molecule into a positive signal is met. Still, access to digital techniques is limited because they require new instruments. We show an analog-digital method that can be executed on standard (real-time) qPCR devices. It benefits from real-time readout, providing calibration-free assessment. The method combines the advantages of qPCR and dPCR and bypasses their drawbacks. The protocols provide for small, simplified partitioning that fits within a standard well-plate format. We demonstrate that, with the use of synergistic assay design, standard qPCR devices are capable of absolute quantitation when normal qPCR protocols fail to provide accurate estimates. We give practical recipes for designing assays to required parameters and for analyzing signals to estimate concentration. PMID:28327545
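
    The absolute-quantitation step that digital methods rely on is simple Poisson arithmetic, sketched below. This is the standard dPCR estimator, not a protocol-specific detail from the paper; the partition count and volume are placeholders.

```python
import math

# Standard digital-PCR estimator: if target molecules distribute over
# partitions as Poisson, the fraction of negative partitions p0 gives
# the mean occupancy lambda = -ln(p0), hence absolute concentration
# with no calibration curve.

n_total, n_positive = 96, 60        # assumed partition counts
p0 = 1.0 - n_positive / n_total     # fraction of negative partitions
lam = -math.log(p0)                 # mean target molecules per partition

v_partition_ml = 1.0e-5             # assumed partition volume, mL
conc = lam / v_partition_ml         # target copies per mL
print(f"lambda = {lam:.3f} per partition, c = {conc:.3g} copies/mL")
```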

  13. Calibration-free assays on standard real-time PCR devices

    NASA Astrophysics Data System (ADS)

    Debski, Pawel R.; Gewartowski, Kamil; Bajer, Seweryn; Garstecki, Piotr

    2017-03-01

    Quantitative Polymerase Chain Reaction (qPCR) is one of the central techniques in molecular biology and an important tool in medical diagnostics. Although it is the gold standard, qPCR depends on reference measurements and is susceptible to large errors, caused by even small changes in reaction efficiency or conditions, that are typically not signaled by decreased precision. Digital PCR (dPCR) technologies should alleviate the need for calibration by providing absolute quantitation using binary (yes/no) signals from partitions, provided that the basic assumption of amplifying a single target molecule into a positive signal is met. Still, access to digital techniques is limited because they require new instruments. We show an analog-digital method that can be executed on standard (real-time) qPCR devices. It benefits from real-time readout, providing calibration-free assessment. The method combines the advantages of qPCR and dPCR and bypasses their drawbacks. The protocols provide for small, simplified partitioning that fits within a standard well-plate format. We demonstrate that, with the use of synergistic assay design, standard qPCR devices are capable of absolute quantitation when normal qPCR protocols fail to provide accurate estimates. We give practical recipes for designing assays to required parameters and for analyzing signals to estimate concentration.

  14. 48 CFR 1532.003 - Simplified acquisition procedures financing.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... procedures financing. 1532.003 Section 1532.003 Federal Acquisition Regulations System ENVIRONMENTAL PROTECTION AGENCY GENERAL CONTRACTING REQUIREMENTS CONTRACT FINANCING 1532.003 Simplified acquisition procedures financing. (a) Scope. This subpart provides for authorization of advance and interim payments on...

  15. A Saturnian cam current system driven by asymmetric thermospheric heating

    NASA Astrophysics Data System (ADS)

    Smith, C. G. A.

    2011-02-01

    We show that asymmetric heating of Saturn's thermosphere can drive a current system consistent with the magnetospheric ‘cam' proposed by Espinosa, Southwood & Dougherty. A geometrically simple heating distribution is imposed on the Northern hemisphere of a simplified three-dimensional global circulation model of Saturn's thermosphere. Currents driven by the resulting winds are calculated using a globally averaged ionosphere model. Using a simple assumption about how divergences in these currents close by flowing along dipolar field lines between the Northern and Southern hemispheres, we estimate the magnetic field perturbations in the equatorial plane and show that they are broadly consistent with the proposed cam fields, exhibiting a roughly uniform field with radial and azimuthal components in quadrature. We also identify a small longitudinal phase drift in the cam current with radial distance as a characteristic of a thermosphere-driven current system. However, at present our model does not produce magnetic field perturbations of the required magnitude, falling short by a factor of ~100, a discrepancy that may be a consequence of an incomplete model of the ionospheric conductance.

  16. Generalization of one-dimensional solute transport: A stochastic-convective flow conceptualization

    NASA Astrophysics Data System (ADS)

    Simmons, C. S.

    1986-04-01

    A stochastic-convective representation of one-dimensional solute transport is derived. It is shown to conceptually encompass solutions of the conventional convection-dispersion equation. This stochastic approach, however, does not rely on the assumption that dispersive flux satisfies Fick's diffusion law. Observable values of solute concentration and flux, which together satisfy a conservation equation, are expressed as expectations over a flow velocity ensemble, representing the inherent random processes that govern dispersion. Solute concentration is determined by a Lagrangian pdf for random spatial displacements, while flux is determined by an equivalent Eulerian pdf for random travel times. A condition for such equivalence is derived for steady nonuniform flow, and it is proven that both Lagrangian and Eulerian pdfs are required to account for specified initial and boundary conditions on a global scale. Furthermore, simplified modeling of transport is justified by proving that an ensemble of effectively constant velocities always exists that constitutes an equivalent representation. An example of how a two-dimensional transport problem can be reduced to a single-dimensional stochastic viewpoint is also presented to further clarify concepts.
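
    The core idea, that an ensemble of purely convective streamtubes can reproduce dispersive spreading without any Fickian term, is easy to demonstrate numerically. The sketch below uses an arbitrary lognormal velocity ensemble and a unit pulse; both choices are illustrative, not taken from the paper.

```python
import numpy as np

# Stochastic-convective sketch: each streamtube carries the solute at
# its own constant velocity, and the observable concentration profile
# is the expectation over the velocity ensemble. Spreading emerges from
# velocity variability, not from a Fickian dispersion term.

rng = np.random.default_rng(1)
v = rng.lognormal(mean=0.0, sigma=0.5, size=50_000)   # velocity ensemble
t = 3.0                                               # observation time

# A unit pulse released at x = 0 sits at x = v*t in each streamtube;
# the ensemble profile is the density of displacements.
bins = np.linspace(0.0, 10.0, 201)
conc, edges = np.histogram(v * t, bins=bins, density=True)
peak_x = 0.5 * (edges[conc.argmax()] + edges[conc.argmax() + 1])
print(f"ensemble plume peaks near x = {peak_x:.2f}")
```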

  17. Influence of a non-uniform free stream velocity distribution on performance/acoustics of counterrotating propeller configurations

    NASA Astrophysics Data System (ADS)

    Allen, C. S.; Korkan, K. D.

    1991-01-01

    A methodology for predicting the performance and acoustics of counterrotating propeller configurations was modified to take into account the effects of a non-uniform free stream velocity distribution entering the disk plane. The method utilizes the analytical techniques of Lock and Theodorsen, as described by Davidson, to determine the influence of the non-uniform free stream velocity distribution in the prediction of the steady aerodynamic loads. The unsteady load contribution is determined according to the procedure of Leseture, with rigid helical tip vortices simulating the previous rotations of each propeller. The steady and unsteady loads are combined to obtain the total blade loading required for acoustic prediction employing the Ffowcs Williams-Hawkings equation as simplified by Succi under the assumption of compact sources. The numerical method is used to redesign the previous commuter-class counterrotating propeller configuration of Denner. The specifications, performance, and acoustics of the new design are compared with the results of Denner, thereby determining the influence of the non-uniform free stream velocity distribution on these metrics.

  18. Dynamic Simulation of a Periodic 10 K Sorption Cryocooler

    NASA Technical Reports Server (NTRS)

    Bhandari, P.; Rodriguez, J.; Bard, S.; Wade, L.

    1994-01-01

    A transient thermal simulation model has been developed to simulate the dynamic performance of a multiple-stage 10 K sorption cryocooler for spacecraft sensor cooling applications that require periodic quick-cooldown (under 2 minutes), negligible vibration, low power consumption, and long life (5 to 10 years). The model was specifically designed to represent the Brilliant Eyes Ten-Kelvin Sorption Cryocooler Experiment (BETSCE), but it can be adapted to represent other sorption cryocooler systems as well. The model simulates the heat transfer, mass transfer, and thermodynamic processes in the cryostat and the sorbent beds for the entire refrigeration cycle, and includes the transient effects of variable hydrogen supply pressures due to expansion and overflow of hydrogen during the cooldown operation. The paper describes model limitations and simplifying assumptions, with estimates of errors induced by them, and presents comparisons of performance predictions with ground experiments. An important benefit of the model is its ability to predict performance sensitivities to variations of key design and operational parameters. The insights thus obtained are expected to lead to higher efficiencies and lower weights for future designs.

  19. Numerical Simulation of Molten Flow in Directed Energy Deposition Using an Iterative Geometry Technique

    NASA Astrophysics Data System (ADS)

    Vincent, Timothy J.; Rumpfkeil, Markus P.; Chaudhary, Anil

    2018-03-01

    The complex, multi-faceted physics of laser-based additive metals processing tends to demand high-fidelity models and costly simulation tools to provide predictions accurate enough to aid in selecting process parameters. Of particular difficulty is the accurate determination of melt pool shape and size, which are useful for predicting lack-of-fusion, as this typically requires an adequate treatment of thermal and fluid flow. In this article we describe a novel numerical simulation tool which aims to achieve a balance between accuracy and cost. This is accomplished by making simplifying assumptions regarding the behavior of the gas-liquid interface for processes with a moderate energy density, such as Laser Engineered Net Shaping (LENS). The details of the implementation, which is based on the solver simpleFoam of the well-known software suite OpenFOAM, are given here and the tool is verified and validated for a LENS process involving Ti-6Al-4V. The results indicate that the new tool predicts width and height of a deposited track to engineering accuracy levels.

  20. Generalized Lagrange Jacobi Gauss-Lobatto (GLJGL) Collocation Method for Solving Linear and Nonlinear Fokker-Planck Equations

    NASA Astrophysics Data System (ADS)

    Parand, K.; Latifi, S.; Moayeri, M. M.; Delkhosh, M.

    2018-05-01

    In this study, we have constructed a new numerical approach for solving the time-dependent linear and nonlinear Fokker-Planck equations. We discretize the time variable with the Crank-Nicolson method and, for the space variable, apply a numerical method based on Generalized Lagrange Jacobi Gauss-Lobatto (GLJGL) collocation. This leads to solving the equation in a series of time steps; at each time step, the problem reduces to a system of algebraic equations, which greatly simplifies the task. One can observe that the proposed method is simple and accurate. Indeed, one of its merits is that it is derivative-free: by proposing a formula for the derivative matrices, the difficulty arising in their calculation is overcome, and the generalized Lagrange basis functions and matrices need not be computed explicitly because they have the Kronecker property. Linear and nonlinear Fokker-Planck equations are given as examples, and the results amply demonstrate that the presented method is accurate, effective, and reliable, and does not require any restrictive assumptions for the nonlinear terms.
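
    The time-discretization half of the scheme can be illustrated compactly. The sketch below applies Crank-Nicolson to the drift-free (pure diffusion) Fokker-Planck limit u_t = D u_xx, with a plain finite-difference Laplacian standing in for the paper's Lagrange-Jacobi collocation in space.

```python
import numpy as np

# Crank-Nicolson for u_t = D u_xx on [0, 1] with zero Dirichlet ends:
# (I - dt/2 A) u^{n+1} = (I + dt/2 A) u^n, A = second-difference matrix.

D, nx, dt, nt = 1.0, 101, 1e-3, 200
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
u = np.exp(-200.0 * (x - 0.5) ** 2)   # initial pulse
u[0] = u[-1] = 0.0

A = np.zeros((nx, nx))
for i in range(1, nx - 1):            # interior rows; boundary rows stay 0
    A[i, i - 1 : i + 2] = [1.0, -2.0, 1.0]
A *= D / dx**2

I = np.eye(nx)
lhs = I - 0.5 * dt * A
rhs = I + 0.5 * dt * A
for _ in range(nt):
    u = np.linalg.solve(lhs, rhs @ u)
print(f"remaining mass: {u.sum() * dx:.4f}")   # decays through the ends
```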

  1. User's manual for the ALS base heating prediction code, volume 2

    NASA Technical Reports Server (NTRS)

    Reardon, John E.; Fulton, Michael S.

    1992-01-01

    The Advanced Launch System (ALS) Base Heating Prediction Code is based on a generalization of first principles in the prediction of plume-induced base convective heating and plume radiation. It should be considered an approximate method for evaluating trends as a function of configuration variables, because the processes being modeled are too complex to allow an accurate generalization. The convective methodology is based upon generalizing trends from four nozzle configurations, so extensions of the code to strap-on boosters, multiple nozzle sizes, and variations in the propellants and chamber pressure histories cannot be treated precisely. The plume radiation is more amenable to precise computer prediction, but simplifying assumptions are required to model the various aspects of the candidate configurations. Perhaps the most difficult area to characterize is the variation of radiation with altitude. The theory behind the radiation predictions is described in more detail. This report is intended to familiarize a user with the interface operation and options, to summarize the limitations and restrictions of the code, and to provide information to assist in installing the code.

  2. Transient Ejector Analysis (TEA) code user's guide

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.

    1993-01-01

    A FORTRAN computer program for the semi-analytic prediction of unsteady thrust-augmenting ejector performance has been developed, based on a theoretical analysis for ejectors. That analysis blends classic self-similar turbulent jet descriptions with control-volume mixing region elements. Division of the ejector into an inlet, diffuser, and mixing region allowed flexibility in the modeling of the physics for each region. In particular, the inlet and diffuser analyses are simplified by a quasi-steady analysis, justified by the assumption that pressure is the forcing function in those regions. Only the mixing region is assumed to be dominated by viscous effects. The present work provides an overview of the code structure, a description of the required input and output data file formats, and the results for a test case. Since there are limitations to the code for applications outside the bounds of the test case, the user should consider TEA a research code (not a production code), designed specifically as an implementation of the proposed ejector theory. Program error flags are discussed, and some diagnostic routines are presented.

  3. NASCRIN - NUMERICAL ANALYSIS OF SCRAMJET INLET

    NASA Technical Reports Server (NTRS)

    Kumar, A.

    1994-01-01

    The NASCRIN program was developed for analyzing two-dimensional flow fields in supersonic combustion ramjet (scramjet) inlets. NASCRIN solves the two-dimensional Euler or Navier-Stokes equations in conservative form by an unsplit, explicit, two-step finite-difference method. A more recent explicit-implicit, two-step scheme has also been incorporated in the code for viscous flow analysis. An algebraic, two-layer eddy-viscosity model is used for the turbulent flow calculations. NASCRIN can analyze both inviscid and viscous flows with no struts, one strut, or multiple struts embedded in the flow field. NASCRIN can be used in a quasi-three-dimensional sense for some scramjet inlets under certain simplifying assumptions. Although developed for supersonic internal flow, NASCRIN may be adapted to a variety of other flow problems. In particular, it should be readily adaptable to subsonic inflow with supersonic outflow, supersonic inflow with subsonic outflow, or fully subsonic flow. The NASCRIN program is available for batch execution on the CDC CYBER 203. The vectorized FORTRAN version was developed in 1983. NASCRIN has a central memory requirement of approximately 300K words for a grid size of about 3,000 points.
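
    The "unsplit, explicit, two-step" description corresponds to a MacCormack-type predictor-corrector. As a stand-in for NASCRIN's actual Euler/Navier-Stokes implementation, the sketch below applies the same two-step pattern to 1D linear advection with periodic boundaries.

```python
import numpy as np

# Two-step explicit (MacCormack-type) update, the pattern an "unsplit,
# explicit, two-step finite-difference method" describes, shown here on
# 1D linear advection u_t + a u_x = 0 with periodic boundaries rather
# than the code's Euler/Navier-Stokes system.

a, nx, cfl, nt = 1.0, 200, 0.8, 100
dx = 1.0 / nx
dt = cfl * dx / abs(a)
x = np.arange(nx) * dx
u = np.exp(-100.0 * (x - 0.5) ** 2)          # initial Gaussian profile

lam = a * dt / dx
for _ in range(nt):
    up = u - lam * (np.roll(u, -1) - u)      # predictor: forward difference
    u = 0.5 * (u + up - lam * (up - np.roll(up, 1)))  # corrector: backward
print(f"peak after {nt} steps: {u.max():.3f} (exact solution keeps 1.0)")
```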

  4. Numerical Simulation of Molten Flow in Directed Energy Deposition Using an Iterative Geometry Technique

    NASA Astrophysics Data System (ADS)

    Vincent, Timothy J.; Rumpfkeil, Markus P.; Chaudhary, Anil

    2018-06-01

    The complex, multi-faceted physics of laser-based additive metals processing tends to demand high-fidelity models and costly simulation tools to provide predictions accurate enough to aid in selecting process parameters. Of particular difficulty is the accurate determination of melt pool shape and size, which are useful for predicting lack-of-fusion, as this typically requires an adequate treatment of thermal and fluid flow. In this article we describe a novel numerical simulation tool which aims to achieve a balance between accuracy and cost. This is accomplished by making simplifying assumptions regarding the behavior of the gas-liquid interface for processes with a moderate energy density, such as Laser Engineered Net Shaping (LENS). The details of the implementation, which is based on the solver simpleFoam of the well-known software suite OpenFOAM, are given here and the tool is verified and validated for a LENS process involving Ti-6Al-4V. The results indicate that the new tool predicts width and height of a deposited track to engineering accuracy levels.

  5. FDTD modeling of anisotropic nonlinear optical phenomena in silicon waveguides.

    PubMed

    Dissanayake, Chethiya M; Premaratne, Malin; Rukhlenko, Ivan D; Agrawal, Govind P

    2010-09-27

    A deep insight into the inherent anisotropic optical properties of silicon is required to improve the performance of silicon-waveguide-based photonic devices. It may also lead to novel device concepts and substantially extend the capabilities of silicon photonics in the future. In this paper, for the first time to the best of our knowledge, we present a three-dimensional finite-difference time-domain (FDTD) method for modeling optical phenomena in silicon waveguides, which takes into account fully the anisotropy of the third-order electronic and Raman susceptibilities. We show that, under certain realistic conditions that prevent generation of the longitudinal optical field inside the waveguide, this model is considerably simplified and can be represented by a computationally efficient algorithm, suitable for numerical analysis of complex polarization effects. To demonstrate the versatility of our model, we study polarization dependence for several nonlinear effects, including self-phase modulation, cross-phase modulation, and stimulated Raman scattering. Our FDTD model provides a basis for a full-blown numerical simulator that is restricted neither by the single-mode assumption nor by the slowly varying envelope approximation.
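
    For readers unfamiliar with the method's core, the sketch below shows the 1D vacuum FDTD (Yee) leapfrog that three-dimensional solvers like this one elaborate on; none of the anisotropic chi(3) or Raman physics from the paper is included.

```python
import numpy as np

# Minimal 1D FDTD (Yee) core in normalized vacuum units (eps0 = mu0 = 1):
# E and H live on staggered grids and leapfrog in time.

nz, nt = 400, 900
dz = 1.0
dt = 0.5 * dz                      # Courant number 0.5, stable in 1D
Ex = np.zeros(nz)
Hy = np.zeros(nz - 1)

for n in range(nt):
    Hy += dt / dz * (Ex[1:] - Ex[:-1])          # H half-step update
    Ex[1:-1] += dt / dz * (Hy[1:] - Hy[:-1])    # E full-step update
    Ex[50] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source

print(f"field energy proxy: {np.sum(Ex**2) + np.sum(Hy**2):.3f}")
```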

  6. Maximum mutual information estimation of a simplified hidden MRF for offline handwritten Chinese character recognition

    NASA Astrophysics Data System (ADS)

    Xiong, Yan; Reichenbach, Stephen E.

    1999-01-01

    Understanding of hand-written Chinese characters is at such a primitive stage that models include some assumptions about hand-written Chinese characters that are simply false. Maximum Likelihood Estimation (MLE) may therefore not be an optimal method for hand-written Chinese character recognition. This concern motivates the research effort to consider alternative criteria. Maximum Mutual Information Estimation (MMIE) is an alternative method for parameter estimation that does not derive its rationale from presumed model correctness, but instead examines the pattern-modeling problem in an automatic recognition system from an information-theoretic point of view. The objective of MMIE is to find a set of parameters such that the resultant model allows the system to derive from the observed data as much information as possible about the class. We consider MMIE for recognition of hand-written Chinese characters using a simplified hidden Markov Random Field. MMIE provides a performance improvement over MLE in this application.

  7. Assessment of railway wagon suspension characteristics

    NASA Astrophysics Data System (ADS)

    Soukup, Josef; Skočilas, Jan; Skočilasová, Blanka

    2017-05-01

    The article deals with the assessment of railway wagon suspension characteristics. The essential characteristics of a suspension are represented by the stiffness constants of the equivalent springs and the eigenfrequencies of the oscillating movements about the main central inertia axes of a vehicle. A prerequisite for the experimental determination of these characteristics is knowledge of the position of the center of gravity and of the main central moments of inertia of the vehicle frame. The vehicle frame performs a general spatial movement when the vehicle moves. An analysis of the frame movement generally proceeds from Euler's equations, which are commonly used to describe spherical motion. This solution is difficult, but it can be simplified by applying specific assumptions. Solutions for the eigenfrequencies and the suspension stiffness are presented and applied to railway and road vehicles under simplifying conditions. A new method for assessing these characteristics is described in the article.
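
    A back-of-envelope version of such a characterization, with placeholder numbers: treating one suspension group as a single sprung mass on an equivalent spring, the stiffness inferred from static deflection fixes the bounce eigenfrequency.

```python
import math

# Equivalent-spring estimate (illustrative values, not from the paper):
# static deflection delta under sprung mass m gives k = m*g/delta, and
# the vertical bounce eigenfrequency f = sqrt(k/m) / (2*pi).

m = 20000.0               # sprung mass per suspension group, kg (assumed)
delta = 0.08              # measured static deflection, m (assumed)
g = 9.81

k = m * g / delta                        # equivalent stiffness, N/m
f = math.sqrt(k / m) / (2.0 * math.pi)   # bounce eigenfrequency, Hz
print(f"k = {k:.3g} N/m, f = {f:.2f} Hz")
```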

  8. 48 CFR 1332.003 - Simplified acquisition procedures financing.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... procedures financing. 1332.003 Section 1332.003 Federal Acquisition Regulations System DEPARTMENT OF COMMERCE GENERAL CONTRACTING REQUIREMENTS CONTRACT FINANCING 1332.003 Simplified acquisition procedures financing. Contract financing may be provided for purchases made under the authority of FAR Part 13. Contract...

  9. 48 CFR 432.003 - Simplified acquisition procedures financing.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... procedures financing. 432.003 Section 432.003 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE GENERAL CONTRACTING REQUIREMENTS CONTRACT FINANCING 432.003 Simplified acquisition procedures financing. (a) The chief of the contracting office may approve contract financing on a contract to be...

  10. Non-driving intersegmental knee moments in cycling computed using a model that includes three-dimensional kinematics of the shank/foot and the effect of simplifying assumptions.

    PubMed

    Gregersen, Colin S; Hull, M L

    2003-06-01

    Assessing the importance of non-driving intersegmental knee moments (i.e. varus/valgus and internal/external axial moments) on over-use knee injuries in cycling requires the use of a three-dimensional (3-D) model to compute these loads. The objectives of this study were: (1) to develop a complete, 3-D model of the lower limb to calculate the 3-D knee loads during pedaling for a sample of the competitive cycling population, and (2) to examine the effects of simplifying assumptions on the calculations of the non-driving knee moments. The non-driving knee moments were computed using a complete 3-D model that allowed three rotational degrees of freedom at the knee joint, included the 3-D inertial loads of the shank/foot, and computed knee loads in a shank-fixed coordinate system. All input data, which included the 3-D segment kinematics and the six pedal load components, were collected from the right limb of 15 competitive cyclists while pedaling at 225 W and 90 rpm. On average, the peak varus and internal axial moments of 7.8 and 1.5 N m respectively occurred during the power stroke, whereas the peak valgus and external axial moments of 8.1 and 2.5 N m respectively occurred during the recovery stroke. However, the non-driving knee moments were highly variable between subjects; the coefficients of variability in the peak values ranged from 38.7% to 72.6%. When it was assumed that the inertial loads of the shank/foot for motion out of the sagittal plane were zero, the root-mean-squared difference (RMSD) in the non-driving knee moments relative to those for the complete model was 12% of the peak varus/valgus moment and 25% of the peak axial moment. When it was also assumed that the knee joint was revolute with the flexion/extension axis perpendicular to the sagittal plane, the RMSD increased to 24% of the peak varus/valgus moment and 204% of the peak axial moment. Thus, the 3-D orientation of the shank segment has a major effect on the computation of the non-driving knee moments, while the inertial contributions to these loads for motions out of the sagittal plane are less important.

  11. Dimensionless erosion laws for cohesive sediment

    USGS Publications Warehouse

    Walder, Joseph S.

    2016-01-01

    A method of achieving a dimensionless collapse of erosion-rate data for cohesive sediments is proposed and shown to work well for data collected in flume-erosion tests on mixtures of sand and mud (silt plus clay sized particles) for a wide range of mud fraction. The data collapse corresponds to a dimensional erosion law of the form E ∼ (τ − τc)^m, where E is erosion rate, τ is shear stress, τc is the threshold shear stress for erosion to occur, and m ≈ 7/4. This result contrasts with the commonly assumed linear erosion law E = kd(τ − τc), where kd is a measure of how easily sediment is eroded. The data collapse prompts a re-examination of the way that results of the hole-erosion test (HET) and jet-erosion test (JET) are customarily analyzed, and also calls into question the meaningfulness not only of proposed empirical relationships between kd and τc, but also of the erodibility parameter kd itself. Fuller comparison of flume-erosion data with hole-erosion and jet-erosion data will require revised analyses of the HET and JET that drop the assumption m = 1 and, in the case of the JET, certain simplifying assumptions about the mechanics of jet scour.
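
    The practical difference between the two laws is easy to see numerically. The sketch below uses placeholder values for the threshold stress and rate prefactor; only the exponent m ≈ 7/4 comes from the reported data collapse.

```python
# Power-law erosion rate E ~ (tau - tau_c)^(7/4) vs. the commonly
# assumed linear law. tau_c and k are illustrative placeholders.

tau_c = 2.0       # threshold shear stress, Pa (assumed)
k = 1.0e-4        # rate prefactor, units absorb the exponent (assumed)
m = 7.0 / 4.0     # exponent from the dimensionless data collapse

for tau in (3.0, 5.0, 10.0):
    e_power = k * (tau - tau_c) ** m     # collapsed flume-data form
    e_linear = k * (tau - tau_c)         # conventional m = 1 assumption
    print(f"tau={tau:4.1f}  power-law E={e_power:.2e}  linear E={e_linear:.2e}")
```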

  12. Reservoir system expansion scheduling under conflicting interests - A Blue Nile application

    NASA Astrophysics Data System (ADS)

    Geressu, Robel; Harou, Julien

    2017-04-01

    New water resource developments are facing increasing resistance due to their real and perceived potential to negatively affect existing systems' performance. Hence, scheduling new dams in multi-reservoir systems requires considering conflicting performance objectives to minimize impacts, create consensus among wider stakeholder groups, and avoid conflict. However, because of the large number of alternative expansion schedules, planning approaches often rely on simplifying assumptions, such as a fixed gap between expansion stages or less flexibility in reservoir release rules than is actually possible. In this study, we investigate the extent to which these assumptions could limit our ability to find better-performing alternatives. We apply a many-objective sequencing approach to the proposed Blue Nile hydropower reservoir system in Ethiopia to find the best investment schedules and operating rules that maximize long-term discounted net benefits, downstream releases, and energy generation during reservoir filling periods. The system is optimized using 30 realizations of stochastically generated streamflow data, statistically resembling the historical flow. Results take the form of Pareto-optimal trade-offs where each point on the curve or surface represents a combination of new reservoirs, their implementation dates, and operating rules. Results show a significant relationship between detail in operating rule design (i.e., changing operating rules as the multi-reservoir expansion progresses) and system performance. For the Blue Nile, failure to optimize operating rules in sufficient detail could result in underestimating the net worth of the proposed investments by up to 6 billion USD if a development option with low downstream impact (slow filling of the reservoirs) is implemented.

  13. Using Heat Pulses for Quantifying 3d Seepage Velocity in Groundwater-Surface Water Interactions, Considering Source Size, Regime, and Dispersion

    NASA Astrophysics Data System (ADS)

    Zlotnik, V. A.; Tartakovsky, D. M.

    2017-12-01

    The study is motivated by the rapid proliferation of field methods for measuring seepage velocity using heat tracing, and is directed at broadening their potential for studies of groundwater-surface water interactions, and the hyporheic zone in particular. In the vast majority of cases, existing methods assume a vertical or horizontal, uniform, 1D seepage velocity. Often, 1D transport is assumed as well, and analytical models of heat transport in the Suzuki-Stallman tradition are heavily used to infer seepage velocity. However, both of these assumptions (1D flow and 1D transport) are violated due to the flow geometry, media heterogeneity, and localized heat sources. Attempts to apply more realistic conceptual models still lack a full 3D view, and known 2D examples are treated numerically, or by making additional simplifying assumptions about velocity orientation. Heat pulse instruments and sensors already offer an opportunity to collect data sufficient for 3D seepage velocity identification at the appropriate scale, but interpretation tools for groundwater-surface water interactions in 3D have not yet been developed. We propose an approach that can substantially improve the capabilities of existing field instruments without additional measurements. The proposed closed-form analytical solutions are simple and well suited for use in inverse modeling. Field applications and ramifications, including data analysis, are discussed. The approach simplifies data collection, determines 3D seepage velocity, and facilitates interpretation of relations between heat transport parameters, fluid flow, and media properties. Results are obtained using tensor properties of transport parameters, Green's functions, and rotational coordinate transformations based on the Euler angles.

  14. Testing a thermo-chemo-hydro-geomechanical model for gas hydrate-bearing sediments using triaxial compression laboratory experiments

    NASA Astrophysics Data System (ADS)

    Gupta, S.; Deusner, C.; Haeckel, M.; Helmig, R.; Wohlmuth, B.

    2017-09-01

    Natural gas hydrates are considered a potential resource for gas production on industrial scales. Gas hydrates contribute to the strength and stiffness of the hydrate-bearing sediments. During gas production, the geomechanical stability of the sediment is compromised. Due to the potential geotechnical risks and process management issues, the mechanical behavior of the gas hydrate-bearing sediments needs to be carefully considered. In this study, we describe a coupling concept that simplifies the mathematical description of the complex interactions occurring during gas production by isolating the effects of sediment deformation and hydrate phase changes. Central to this coupling concept is the assumption that the soil grains form the load-bearing solid skeleton, while the gas hydrate enhances the mechanical properties of this skeleton. We focus on testing this coupling concept in capturing the overall impact of geomechanics on gas production behavior through numerical simulation of a high-pressure isotropic compression experiment combined with methane hydrate formation and dissociation. We consider a linear-elastic stress-strain relationship because it is uniquely defined and easy to calibrate. Since, in reality, the geomechanical response of the hydrate-bearing sediment is typically inelastic and is characterized by a significant shear-volumetric coupling, we control the experiment very carefully in order to keep the sample deformations small and well within the assumptions of poroelasticity. The closely coordinated experimental and numerical procedures enable us to validate the proposed simplified geomechanics-to-flow coupling, and set an important precursor toward enhancing our coupled hydro-geomechanical hydrate reservoir simulator with more suitable elastoplastic constitutive models.

  15. The Robustness of LOGIST and BILOG IRT Estimation Programs to Violations of Local Independence.

    ERIC Educational Resources Information Center

    Ackerman, Terry A.

    One of the important underlying assumptions of all item response theory (IRT) models is that of local independence. This assumption requires that the response to an item on a test not be influenced by the response to any other items. This assumption is often taken for granted, with little or no scrutiny of the response process required to answer…

  16. Efficient calculation of the polarizability: a simplified effective-energy technique

    NASA Astrophysics Data System (ADS)

    Berger, J. A.; Reining, L.; Sottile, F.

    2012-09-01

    In a recent publication [J.A. Berger, L. Reining, F. Sottile, Phys. Rev. B 82, 041103(R) (2010)] we introduced the effective-energy technique to calculate in an accurate and numerically efficient manner the GW self-energy as well as the polarizability, which is required to evaluate the screened Coulomb interaction W. In this work we show that the effective-energy technique can be used to further simplify the expression for the polarizability without a significant loss of accuracy. In contrast to standard sum-over-state methods where huge summations over empty states are required, our approach only requires summations over occupied states. The three simplest approximations we obtain for the polarizability are explicit functionals of an independent- or quasi-particle one-body reduced density matrix. We provide evidence of the numerical accuracy of this simplified effective-energy technique as well as an analysis of our method.

  17. Simplified Interval Observer Scheme: A New Approach for Fault Diagnosis in Instruments

    PubMed Central

    Martínez-Sibaja, Albino; Astorga-Zaragoza, Carlos M.; Alvarado-Lassman, Alejandro; Posada-Gómez, Rubén; Aguila-Rodríguez, Gerardo; Rodríguez-Jarquin, José P.; Adam-Medina, Manuel

    2011-01-01

    There are different observer-based schemes to detect and isolate faults in dynamic processes. In the case of fault diagnosis in instruments (FDI) there are different diagnosis schemes based on the number of observers: the Simplified Observer Scheme (SOS) requires only one observer, uses all the inputs and only one output, and detects faults in one sensor; the Dedicated Observer Scheme (DOS) again uses all the inputs and just one output per observer, but employs a bank of observers capable of locating multiple faults in sensors; and the Generalized Observer Scheme (GOS) involves a reduced bank of observers, where each observer uses all the inputs and m-1 outputs, and allows the localization of single faults. This work proposes a new scheme named the Simplified Interval Observer, SIOS-FDI, which does not require the measurement of any input and, with just one output, allows the detection of single faults in sensors. Because it does not require any input, it substantially simplifies fault diagnosis in processes in which it is difficult to measure all the inputs, as in the case of biological reactors. PMID:22346593
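
    The residual logic common to all these schemes can be illustrated with a plain Luenberger observer. The paper's interval observer is more elaborate; everything below, including the plant matrices, gain, fault size, and threshold, is an invented toy.

```python
import numpy as np

# Generic observer-based residual test, the common core of SOS/DOS/GOS
# schemes (this is not the paper's interval observer). The plant
# matrices, observer gain, fault size, and threshold are invented toys.

A = np.array([[0.9, 0.1], [0.0, 0.8]])   # discrete-time plant
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Lg = np.array([[0.5], [0.3]])            # observer gain (A - Lg C stable)

x = np.zeros((2, 1))
x_hat = np.zeros((2, 1))
threshold = 0.2
for k in range(60):
    u = np.array([[1.0]])                   # constant known input
    x = A @ x + B @ u
    y = C @ x + (0.5 if k >= 40 else 0.0)   # additive sensor fault at k=40
    innov = y - C @ x_hat                   # residual (innovation)
    x_hat = A @ x_hat + B @ u + Lg @ innov
    if abs(innov.item()) > threshold:       # alarm fires right after fault
        print(f"step {k}: residual {innov.item():+.3f} -> fault flagged")
```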

  18. Finite Element Modeling of a Cylindrical Contact Using Hertzian Assumptions

    NASA Technical Reports Server (NTRS)

    Knudsen, Erik

    2003-01-01

    The turbine blades in the high-pressure fuel turbopump/alternate turbopump (HPFTP/AT) are subjected to hot gases rapidly flowing around them. This flow excites vibrations in the blades. Naturally, one has to worry about resonance, so a damping device was added to dissipate some energy from the system. The foundation is now laid for a very complex problem. The damper is in contact with the blade, so there are contact stresses (both normal and tangential) to contend with. Since these stresses can be very high, it is not difficult to yield the material. Friction is another nonlinearity, and the blade is made of a nickel-based single-crystal superalloy that is orthotropic. A few approaches exist to solve such a problem, and computer models using contact elements have been built with friction, plasticity, etc. These models are quite cumbersome and require many hours to solve just one load case and material orientation. A simpler approach is required. Ideally, the model should be simplified so the analysis can be conducted faster. When working with contact problems, determining the contact patch and the stresses in the material are the main concerns. Closed-form solutions, developed by Hertz, for non-conforming bodies made of isotropic materials are readily available. More involved solutions for 3-D cases using different materials are also available. The question is this: can Hertzian solutions be applied, or superimposed, to more complicated problems, like those involving anisotropic materials? That is the point of the investigation here. If these results agree with the more complicated computer models, then the analytical solutions can be used in lieu of the numerical solutions that take a very long time to process. As time goes on, the analytical solution will eventually have to include things like friction and plasticity. The models in this report use no contact elements and are essentially an applied-load problem using Hertzian assumptions to determine the contact patch dimensions.
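
    For reference, the classical Hertz line-contact (cylinder-on-flat) formulas that such simplified models start from are sketched below; the material constants and load are placeholders, not HPFTP/AT values.

```python
import math

# Hertz line contact (cylinder on a flat): contact half-width
# b = sqrt(4 w R / (pi E*)) and peak pressure p_max = 2 w / (pi b),
# with effective modulus 1/E* = (1-nu1^2)/E1 + (1-nu2^2)/E2.

E1, nu1 = 200e9, 0.30       # body 1 elastic constants, Pa (assumed)
E2, nu2 = 120e9, 0.35       # body 2 elastic constants, Pa (assumed)
R = 0.02                    # cylinder radius, m (flat body: R2 = inf)
w = 1.0e5                   # load per unit contact length, N/m (assumed)

E_star = 1.0 / ((1.0 - nu1**2) / E1 + (1.0 - nu2**2) / E2)
b = math.sqrt(4.0 * w * R / (math.pi * E_star))
p_max = 2.0 * w / (math.pi * b)
print(f"half-width b = {b * 1e6:.0f} um, p_max = {p_max / 1e9:.2f} GPa")
```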

  19. Numerical analysis of one-dimensional temperature data for groundwater/surface-water exchange with 1DTempPro

    NASA Astrophysics Data System (ADS)

    Voytek, E. B.; Drenkelfuss, A.; Day-Lewis, F. D.; Healy, R. W.; Lane, J. W.; Werkema, D. D.

    2012-12-01

    Temperature is a naturally occurring tracer, which can be exploited to infer the movement of water through the vadose and saturated zones, as well as the exchange of water between aquifers and surface-water bodies, such as estuaries, lakes, and streams. One-dimensional (1D) vertical temperature profiles commonly show thermal amplitude attenuation and increasing phase lag of diurnal or seasonal temperature variations with propagation into the subsurface. This behavior is described by the heat-transport equation (i.e., the convection-conduction-dispersion equation), which can be solved analytically in 1D under certain simplifying assumptions (e.g., sinusoidal or steady-state boundary conditions and homogeneous hydraulic and thermal properties). Analysis of 1D temperature profiles using analytical models provides estimates of vertical groundwater/surface-water exchange. The utility of these estimates can be diminished when the model assumptions are violated, as is common in field applications. Alternatively, analysis of 1D temperature profiles using numerical models allows for consideration of more complex and realistic boundary conditions. However, such analyses commonly require model calibration and the development of input files for finite-difference or finite-element codes. To address the calibration and input file requirements, a new computer program, 1DTempPro, is presented that facilitates numerical analysis of vertical 1D temperature profiles. 1DTempPro is a graphical user interface (GUI) to the USGS code VS2DH, which numerically solves the flow- and heat-transport equations. Pre- and post-processor features within 1DTempPro allow the user to calibrate VS2DH models to estimate groundwater/surface-water exchange and hydraulic conductivity in cases where hydraulic head is known. This approach improves groundwater/surface-water exchange-rate estimates for real-world data with complexities ill-suited for examination with analytical methods. Additionally, the code allows for time-varying temperature and hydraulic boundary conditions. Here, we present the approach and include examples for several datasets from stream/aquifer systems.
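
    One of the analytical limits that the numerical approach generalizes is the steady-state conduction-advection profile of Bredehoeft and Papadopulos (1965), sketched below with illustrative parameter values.

```python
import numpy as np

# Steady conduction-advection temperature profile between two
# fixed-temperature boundaries (Bredehoeft & Papadopulos, 1965):
# T(z) = T0 + (TL - T0) * (exp(Pe*z/L) - 1) / (exp(Pe) - 1),
# with thermal Peclet number Pe = rho_c_f * q * L / k_t.

L = 2.0                    # depth interval, m (assumed)
T0, TL = 20.0, 10.0        # boundary temperatures, deg C (assumed)
k_t = 1.4                  # bulk thermal conductivity, W/(m K)
rho_c_f = 4.18e6           # volumetric heat capacity of water, J/(m^3 K)
q = 1.0e-6                 # downward Darcy flux, m/s (assumed)

Pe = rho_c_f * q * L / k_t
z = np.linspace(0.0, L, 5)
T = T0 + (TL - T0) * np.expm1(Pe * z / L) / np.expm1(Pe)
for zi, Ti in zip(z, T):
    print(f"z = {zi:.1f} m  T = {Ti:.2f} C")
```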

  20. A New Strategy in Observer Modeling for Greenhouse Cucumber Seedling Growth

    PubMed Central

    Qiu, Quan; Zheng, Chenfei; Wang, Wenping; Qiao, Xiaojun; Bai, He; Yu, Jingquan; Shi, Kai

    2017-01-01

    The state observer is an essential component in computerized control loops for greenhouse-crop systems. However, current accomplishments in observer modeling for greenhouse-crop systems mainly focus on mass/energy balance, ignoring the physiological responses of crops. As a result, state observers for crop physiological responses are rarely developed, and control operations are typically based on experience rather than actual crop requirements. In addition, existing observer models require a large number of parameters, leading to heavy computational load and poor application feasibility. To address these problems, we present a new state observer modeling strategy that takes both environmental information and crop physiological responses into consideration during the observer modeling process. Using greenhouse cucumber seedlings as an example, we sample 10 physiological parameters of cucumber seedlings at different time points during the exponential growth stage, and employ them to build growth state observers together with 8 environmental parameters. A support vector machine (SVM) acts as the mathematical tool for observer modeling. Canonical correlation analysis (CCA) is used to select the dominant environmental and physiological parameters in the modeling process. With the dominant parameters, simplified observer models are built and tested. We conduct contrast experiments with different input parameter combinations on simplified and un-simplified observers. Experimental results indicate that physiological information can improve the prediction accuracies of the growth state observers. Furthermore, the simplified observer models can give equivalent or even better performance than the un-simplified ones, which verifies the feasibility of CCA. The current study enables state observers to reflect crop requirements and makes them feasible for applications in simplified form, which is significant for developing intelligent greenhouse control systems for modern greenhouse production. PMID:28848565

  1. A New Strategy in Observer Modeling for Greenhouse Cucumber Seedling Growth.

    PubMed

    Qiu, Quan; Zheng, Chenfei; Wang, Wenping; Qiao, Xiaojun; Bai, He; Yu, Jingquan; Shi, Kai

    2017-01-01

    The state observer is an essential component in computerized control loops for greenhouse-crop systems. However, current accomplishments in observer modeling for greenhouse-crop systems mainly focus on mass/energy balance, ignoring the physiological responses of crops. As a result, state observers for crop physiological responses are rarely developed, and control operations are typically based on experience rather than actual crop requirements. In addition, existing observer models require a large number of parameters, leading to heavy computational load and poor application feasibility. To address these problems, we present a new state observer modeling strategy that takes both environmental information and crop physiological responses into consideration during the observer modeling process. Using greenhouse cucumber seedlings as an example, we sample 10 physiological parameters of cucumber seedlings at different time points during the exponential growth stage, and employ them to build growth state observers together with 8 environmental parameters. A support vector machine (SVM) acts as the mathematical tool for observer modeling. Canonical correlation analysis (CCA) is used to select the dominant environmental and physiological parameters in the modeling process. With the dominant parameters, simplified observer models are built and tested. We conduct contrast experiments with different input parameter combinations on simplified and un-simplified observers. Experimental results indicate that physiological information can improve the prediction accuracies of the growth state observers. Furthermore, the simplified observer models can give equivalent or even better performance than the un-simplified ones, which verifies the feasibility of CCA. The current study enables state observers to reflect crop requirements and makes them feasible for applications in simplified form, which is significant for developing intelligent greenhouse control systems for modern greenhouse production.
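
    The modeling recipe (CCA for parameter selection, an SVM for the observer map) can be skeletonized as below on synthetic data. The array shapes mirror the study's 8 environmental and 10 physiological parameters, but the data and the choice of 2 retained components are placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVR

# Skeleton of the recipe: CCA extracts the environmental modes most
# correlated with the physiological responses, then an SVM regressor
# maps the retained modes to a growth-state target. All data below are
# random placeholders with the study's array shapes.

rng = np.random.default_rng(0)
env = rng.normal(size=(200, 8))                         # environmental
phys = env[:, :3] @ rng.normal(size=(3, 10)) \
       + 0.1 * rng.normal(size=(200, 10))               # physiological
growth = phys.sum(axis=1)                               # proxy target

cca = CCA(n_components=2).fit(env, phys)
env_scores, _ = cca.transform(env, phys)   # dominant environmental modes

svr = SVR(kernel="rbf").fit(env_scores, growth)
print("R^2 on training data:", round(svr.score(env_scores, growth), 3))
```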

  2. A New Browser-based, Ontology-driven Tool for Generating Standardized, Deep Descriptions of Geoscience Models

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.; Kelbert, A.; Rudan, S.; Stoica, M.

    2016-12-01

    Standardized metadata for models is the key to reliable and greatly simplified coupling in model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System). This model metadata also helps model users to understand the important details that underpin computational models and to compare the capabilities of different models. These details include simplifying assumptions on the physics, governing equations and the numerical methods used to solve them, discretization of space (the grid) and time (the time-stepping scheme), state variables (input or output), and model configuration parameters. This kind of metadata provides a "deep description" of a computational model that goes well beyond other types of metadata (e.g., author, purpose, scientific domain, programming language, digital rights, provenance, execution) and captures the science that underpins a model. While having this kind of standardized metadata for each model in a repository opens up a wide range of exciting possibilities, it is difficult to collect this information, and a carefully conceived "data model" or schema is needed to store it. Automated harvesting and scraping methods can provide some useful information, but they often result in metadata that is inaccurate or incomplete, which is not sufficient to enable the desired capabilities. In order to address this problem, we have developed a browser-based tool called the MCM Tool (Model Component Metadata) which runs on notebooks, tablets and smart phones. This tool was partially inspired by the TurboTax software, which greatly simplifies the necessary task of preparing tax documents. It allows a model developer or advanced user to provide a standardized, deep description of a computational geoscience model, including hydrologic models. Under the hood, the tool uses a new ontology for models built on the CSDMS Standard Names, expressed as a collection of RDF (Resource Description Framework) files. This ontology is based on core concepts such as variables, objects, quantities, operations, processes and assumptions. The purpose of this talk is to present details of the new ontology and to then demonstrate the MCM Tool for several hydrologic models.

  3. Automated Derivation of Complex System Constraints from User Requirements

    NASA Technical Reports Server (NTRS)

    Foshee, Mark; Murey, Kim; Marsh, Angela

    2010-01-01

    The Payload Operations Integration Center (POIC) located at the Marshall Space Flight Center has the responsibility of integrating US payload science requirements for the International Space Station (ISS). All payload operations must request ISS system resources so that the resource usage will be included in the ISS on-board execution timelines. The scheduling of resources and building of the timeline is performed using the Consolidated Planning System (CPS). The ISS resources are quite complex due to the large number of components that must be accounted for. The planners at the POIC simplify the process for Payload Developers (PDs) by providing the PDs with an application that has the basic functionality PDs need, as well as a list of simplified resources in the User Requirements Collection (URC) application. The planners maintained a mapping of the URC resources to the CPS resources. The process of manually converting a PD's science requirements from a simplified representation to a more complex CPS representation is a time-consuming and tedious process. The goal is to provide a software solution to allow the planners to build a mapping of the complex CPS constraints to the basic URC constraints and automatically convert the PD's requirements into systems requirements during export to CPS.

  4. 48 CFR 301.608 - Training requirements for purchase cardholders, Approving Officials, and Agency/Organization...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... REGULATION SYSTEM Career Development, Contracting Authority, and Responsibilities 301.608 Training... CON 237). • Advanced simplified acquisition procedures or Appropriations law. Purchase card holders...). • Advanced simplified acquisition procedures or Appropriations law. • CON 100 (Shaping Smart Business...

  5. 48 CFR 301.608 - Training requirements for purchase cardholders, Approving Officials, and Agency/Organization...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... REGULATION SYSTEM Career Development, Contracting Authority, and Responsibilities 301.608 Training... CON 237). • Advanced simplified acquisition procedures or Appropriations law. Purchase card holders...). • Advanced simplified acquisition procedures or Appropriations law. • CON 100 (Shaping Smart Business...

  6. Are Assumptions of Well-Known Statistical Techniques Checked, and Why (Not)?

    PubMed Central

    Hoekstra, Rink; Kiers, Henk A. L.; Johnson, Addie

    2012-01-01

    A valid interpretation of most statistical techniques requires that one or more assumptions be met. In published articles, however, little information tends to be reported on whether the data satisfy the assumptions underlying the statistical techniques used. This could be due to self-selection: only manuscripts with data fulfilling the assumptions are submitted. Another explanation could be that violations of assumptions are rarely checked for in the first place. We studied whether and how 30 researchers checked fictitious data for violations of assumptions in their own working environment. Participants were asked to analyze the data as they would their own data, for which often-used and well-known techniques such as the t-procedure, ANOVA and regression (or non-parametric alternatives) were required. It was found that the assumptions of the techniques were rarely checked and that, if they were, it was usually by means of a statistical test. Interviews afterward revealed a general lack of knowledge about assumptions, the robustness of the techniques with regard to the assumptions, and how (or whether) assumptions should be checked. These data suggest that checking for violations of assumptions is not a well-considered choice, and that the use of statistics can be described as opportunistic. PMID:22593746

  7. Methodology for Computational Fluid Dynamic Validation for Medical Use: Application to Intracranial Aneurysm.

    PubMed

    Paliwal, Nikhil; Damiano, Robert J; Varble, Nicole A; Tutino, Vincent M; Dou, Zhongwang; Siddiqui, Adnan H; Meng, Hui

    2017-12-01

    Computational fluid dynamics (CFD) is a promising tool to aid in clinical diagnoses of cardiovascular diseases. However, it relies on assumptions that simplify the complexities of real cardiovascular flow. Because of the high stakes in the clinical setting, it is critical to quantify the effect of these assumptions on CFD simulation results. Existing CFD validation approaches, however, do not quantify the error in simulation results due to the CFD solver's modeling assumptions. Instead, they directly compare CFD simulation results against validation data. Thus, to quantify the accuracy of a CFD solver, we developed a validation methodology that calculates the CFD model error (arising from modeling assumptions). Our methodology identifies independent error sources in CFD and validation experiments, and calculates the model error by parsing out other sources of error inherent in simulation and experiments. To demonstrate the method, we simulated the flow field of a patient-specific intracranial aneurysm (IA) in the commercial CFD software Star-CCM+. Particle image velocimetry (PIV) provided validation datasets for the flow field on two orthogonal planes. The average model error in the Star-CCM+ solver was 5.63 ± 5.49% along the intersecting validation line of the orthogonal planes. Furthermore, we demonstrated that our validation method is superior to existing validation approaches by applying three representative existing validation techniques to our CFD and experimental dataset and comparing the validation results. Our validation methodology offers a streamlined workflow to extract the "true" accuracy of a CFD solver.
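
    As a concrete illustration of parsing out error sources, the sketch below follows the spirit of standard validation frameworks (e.g., ASME V&V 20) rather than the authors' exact formulation; all numbers and variable names are invented.

```python
# Hedged sketch of a validation comparison: the comparison error E = S - D
# bounds the model error once numerical and experimental uncertainties are
# parsed out in quadrature (illustrative values only).
import math

S, D = 0.52, 0.49          # simulated vs. measured velocity (m/s), invented
u_num, u_exp = 0.01, 0.02  # numerical and experimental standard uncertainties

E = S - D
u_val = math.sqrt(u_num**2 + u_exp**2)
print(f"model error within {E:+.3f} +/- {u_val:.3f} m/s")
```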

  8. Solubility of lovastatin in a family of six alcohols: Ethanol, 1-propanol, 1-butanol, 1-pentanol, 1-hexanol, and 1-octanol.

    PubMed

    Nti-Gyabaah, J; Chmielowski, R; Chan, V; Chiew, Y C

    2008-07-09

    Accurate experimental determination of the solubility of active pharmaceutical ingredients (APIs) in solvents, and its correlation for solubility prediction, is essential for rapid design and optimization of isolation, purification, and formulation processes in the pharmaceutical industry. An efficient, material-conserving analytical method with an in-line reversed-phase HPLC separation protocol has been developed to measure the equilibrium solubility of lovastatin in ethanol, 1-propanol, 1-butanol, 1-pentanol, 1-hexanol, and 1-octanol between 279 and 313 K. The fusion enthalpy ΔH_fus, melting point temperature T_m, and differential molar heat capacity ΔC_P were determined by differential scanning calorimetry (DSC) to be 43,136 J/mol, 445.5 K, and 255 J/(mol·K), respectively. In order to use the regular solution equation, simplifying assumptions are made concerning ΔC_P, specifically ΔC_P = 0 or ΔC_P = ΔS. In this study, we examined the extent to which these assumptions influence the magnitude of the ideal solubility of lovastatin, and determined that both assumptions underestimate the ideal solubility of lovastatin. The solubility data were used with the calculated ideal solubility to obtain activity coefficients, which were then fitted to the van't Hoff-like regular solution equation. Examination of the plots indicated that both assumptions give erroneous excess enthalpy of solution, H∞, and hence thermodynamically inconsistent activity coefficients. The order of increasing ideality, or solubility, of lovastatin was 1-butanol > 1-propanol > 1-pentanol > 1-hexanol > 1-octanol.
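
    For reference, here is a minimal worked calculation of the ideal solubility under the two ΔC_P assumptions, using the standard ideal-solubility relation and the DSC values quoted above; the evaluation temperature of 298.15 K is an assumption for illustration.

```python
# Ideal solubility of a solid solute (standard relation, assumed here):
# ln x = -dH_fus/R * (1/T - 1/Tm) + dCp/R * (Tm/T - 1 - ln(Tm/T))
import math

R = 8.314          # J/(mol*K)
dH_fus = 43136.0   # J/mol, from the DSC values above
T_m = 445.5        # K
T = 298.15         # K, illustrative evaluation temperature

def ln_x_ideal(dCp):
    return (-dH_fus / R * (1.0 / T - 1.0 / T_m)
            + dCp / R * (T_m / T - 1.0 - math.log(T_m / T)))

for label, dCp in [("dCp = 0", 0.0),
                   ("dCp = dS_fus", dH_fus / T_m),
                   ("dCp measured", 255.0)]:
    print(label, math.exp(ln_x_ideal(dCp)))
```

    Running this shows the two simplifying assumptions giving lower ideal solubilities than the measured ΔC_P, consistent with the underestimation reported above.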

  9. Coupled Solid Rocket Motor Ballistics and Trajectory Modeling for Higher Fidelity Launch Vehicle Design

    NASA Technical Reports Server (NTRS)

    Ables, Brett

    2014-01-01

    Multi-stage launch vehicles with solid rocket motors (SRMs) face design optimization challenges, especially when the mission scope changes frequently. Significant performance benefits can be realized if the solid rocket motors are optimized to the changing requirements. While SRMs represent a fixed performance at launch, rapid design iterations enable flexibility at design time, yielding significant performance gains. The streamlining and integration of SRM design and analysis can be achieved with improved analysis tools. While powerful and versatile, the Solid Performance Program (SPP) is not conducive to rapid design iteration. Performing a design iteration with SPP and a trajectory solver is a labor intensive process. To enable a better workflow, SPP, the Program to Optimize Simulated Trajectories (POST), and the interfaces between them have been improved and automated, and a graphical user interface (GUI) has been developed. The GUI enables real-time visual feedback of grain and nozzle design inputs, enforces parameter dependencies, removes redundancies, and simplifies manipulation of SPP and POST's numerous options. Automating the analysis also simplifies batch analyses and trade studies. Finally, the GUI provides post-processing, visualization, and comparison of results. Wrapping legacy high-fidelity analysis codes with modern software provides the improved interface necessary to enable rapid coupled SRM ballistics and vehicle trajectory analysis. Low cost trade studies demonstrate the sensitivities of flight performance metrics to propulsion characteristics. Incorporating high fidelity analysis from SPP into vehicle design reduces performance margins and improves reliability. By flying an SRM designed with the same assumptions as the rest of the vehicle, accurate comparisons can be made between competing architectures. In summary, this flexible workflow is a critical component to designing a versatile launch vehicle model that can accommodate a volatile mission scope.

  10. Methods for the development of a bioregenerative life support system

    NASA Technical Reports Server (NTRS)

    Goldman, Michelle; Gomez, Shawn; Voorhees, Mike

    1990-01-01

    Presented here is a rudimentary approach to designing a life support system based on the utilization of plants and animals. The biggest stumbling block in the initial phases of developing a bioregenerative life support system is encountered in collecting and consolidating the data. If a database existed for the systems engineer so that he or she may have accurate data and a better understanding of biological systems in engineering terms, then the design process would be simplified. Also addressed is a means of evaluating the subsystems chosen. These subsystems are unified into a common metric, kilograms of mass, and normalized in relation to the throughput of a few basic elements. The initial integration of these subsystems is based on input/output masses and eventually balanced to a point of operation within the inherent performance ranges of the organisms chosen. At this point, it becomes necessary to go beyond the simplifying assumptions of simple mass relationships and further define for each organism the processes used to manipulate the throughput matter. Mainly considered here is the fact that these organisms perform input/output functions on differing timescales, thus establishing the need for buffer volumes or appropriate subsystem phasing. At each point in a systematic design it is necessary to disturb the system and discern its sensitivity to the disturbance. This can be done either through the introduction of a catastrophic failure or by applying a small perturbation to the system. One example is increasing the crew size. Here the wide range of performance characteristics once again shows that biological systems have an inherent advantage in responding to systemic perturbations. Since the design of any space-based system depends on mass, power, and volume requirements, each subsystem must be evaluated in these terms.

  11. Model development for nutrient loading estimates from paddy rice fields in Korea.

    PubMed

    Jeon, Ji-Hong; Yoon, Chun G; Ham, Jong-Hwa; Jung, Kwang-Wook

    2004-01-01

    A field experiment was performed to evaluate water and nutrient balances in paddy rice culture operations during 2001-2002. The water balance analysis indicated that about half (50-60%) of the total outflow was lost by surface drainage, with the remainder occurring by evapotranspiration (490-530 mm). The surface drainage from paddy fields was mainly caused by rainfall and forced drainage; in particular, runoff during the early rice culture periods depends more on forced drainage due to fertilization practices. Most of the total phosphorus (T-P) inflow was supplied by fertilization at transplanting, while the total nitrogen (T-N) inflow was supplied by the three fertilizations, precipitation, and inflow from the upper paddy field, which comprised 13-33% of the total inflow. Although most of the nutrient outflow was attributed to plant uptake, nutrient loss by surface drainage was substantial, comprising 20% for T-N and 10% for T-P. The water and nutrient balances indicate that reduction of surface drainage from paddy rice fields is imperative for nonpoint source pollution control. A simplified computer model, PADDIMOD, was developed to simulate water and nutrient (T-N and T-P) behavior in the paddy rice field. The model predicts daily ponded water depth, surface drainage, and nutrient concentrations. It was formulated with a few equations and simplifying assumptions, but its application and a model fitness test indicated that the simulation results reasonably matched the observed data. It is a simple and convenient planning model that could be used to evaluate BMPs of paddy rice fields alone or in combination with other complex watershed models. Application of PADDIMOD to other paddy rice fields with different agricultural environments might require further calibration and validation.

  12. Structural equation modeling in environmental risk assessment.

    PubMed

    Buncher, C R; Succop, P A; Dietrich, K N

    1991-01-01

    Environmental epidemiology requires effective models that take individual observations of environmental factors and connect them into meaningful patterns. Single-factor relationships have given way to multivariable analyses; simple additive models have been augmented by multiplicative (logistic) models. Each of these steps has produced greater enlightenment and understanding. Models that allow for factors causing outputs that can affect later outputs with putative causation working at several different time points (e.g., linkage) are not commonly used in the environmental literature. Structural equation models are a class of covariance structure models that have been used extensively in economics/business and social science but are still little used in the realm of biostatistics. Path analysis in genetic studies is one simplified form of this class of models. We have been using these models in a study of the health and development of infants who have been exposed to lead in utero and in the postnatal home environment. These models require as input the directionality of the relationship and then produce fitted models for multiple inputs causing each factor and the opportunity to have outputs serve as input variables into the next phase of the simultaneously fitted model. Some examples of these models from our research are presented to increase familiarity with this class of models. Use of these models can provide insight into the effect of changing an environmental factor when assessing risk. The usual cautions concerning believing a model, believing causation has been proven, and the assumptions that are required for each model are operative.

  13. 23 CFR 646.218 - Simplified procedure for accelerating grade crossing improvements.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... preliminary engineering costs may include those incurred in selecting crossings to be improved, determining the type of improvement for each crossing, estimating the cost and preparing the required agreement... ENGINEERING AND TRAFFIC OPERATIONS RAILROADS Railroad-Highway Projects § 646.218 Simplified procedure for...

  14. 23 CFR 646.218 - Simplified procedure for accelerating grade crossing improvements.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... preliminary engineering costs may include those incurred in selecting crossings to be improved, determining the type of improvement for each crossing, estimating the cost and preparing the required agreement... ENGINEERING AND TRAFFIC OPERATIONS RAILROADS Railroad-Highway Projects § 646.218 Simplified procedure for...

  15. 23 CFR 646.218 - Simplified procedure for accelerating grade crossing improvements.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... preliminary engineering costs may include those incurred in selecting crossings to be improved, determining the type of improvement for each crossing, estimating the cost and preparing the required agreement... ENGINEERING AND TRAFFIC OPERATIONS RAILROADS Railroad-Highway Projects § 646.218 Simplified procedure for...

  16. 23 CFR 646.218 - Simplified procedure for accelerating grade crossing improvements.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... preliminary engineering costs may include those incurred in selecting crossings to be improved, determining the type of improvement for each crossing, estimating the cost and preparing the required agreement... ENGINEERING AND TRAFFIC OPERATIONS RAILROADS Railroad-Highway Projects § 646.218 Simplified procedure for...

  17. 23 CFR 646.218 - Simplified procedure for accelerating grade crossing improvements.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... preliminary engineering costs may include those incurred in selecting crossings to be improved, determining the type of improvement for each crossing, estimating the cost and preparing the required agreement... ENGINEERING AND TRAFFIC OPERATIONS RAILROADS Railroad-Highway Projects § 646.218 Simplified procedure for...

  18. The Valuation of Scientific and Technical Experiments

    NASA Technical Reports Server (NTRS)

    Williams, F. E.

    1972-01-01

    Rational selection of scientific and technical experiments for space missions is studied. Particular emphasis is placed on the assessment of value or worth of an experiment. A specification procedure is outlined and discussed for the case of one decision maker. Experiments are viewed as multi-attributed entities, and a relevant set of attributes is proposed. Alternative methods of describing levels of the attributes are proposed and discussed. The reasonableness of certain simplifying assumptions such as preferential and utility independence is explored, and it is tentatively concluded that preferential independence applies and utility independence appears to be appropriate.

  19. Uncertainty about fundamentals and herding behavior in the FOREX market

    NASA Astrophysics Data System (ADS)

    Kaltwasser, Pablo Rovira

    2010-03-01

    It is traditionally assumed in finance models that the fundamental value of assets is known with certainty. Although this is an appealing simplifying assumption, it is by no means based on empirical evidence. A simple heterogeneous agent model of the exchange rate is presented. In the model, traders do not observe the true underlying fundamental exchange rate, and as a consequence they base their trades on beliefs about this variable. Despite the fact that only fundamentalist traders operate in the market, the model belongs to the heterogeneous agent literature, as traders have different beliefs about the fundamental rate.

  20. Impact of cell size on inventory and mapping errors in a cellular geographic information system

    NASA Technical Reports Server (NTRS)

    Wehde, M. E. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. The effect of grid position was found insignificant for maps but highly significant for isolated mapping units. A modelable relationship between mapping error and cell size was observed for the map segment analyzed. Map data structure was also analyzed with an interboundary distance distribution approach. Map data structure and the impact of cell size on that structure were observed. The existence of a model allowing prediction of mapping error based on map structure was hypothesized and two generations of models were tested under simplifying assumptions.

  1. Ferromagnetic effects for nanofluid venture through composite permeable stenosed arteries with different nanosize particles

    NASA Astrophysics Data System (ADS)

    Akbar, Noreen Sher; Mustafa, M. T.

    2015-07-01

    In the present article, ferromagnetic field effects on the flow of a copper nanofluid through composite permeable stenosed arteries are discussed. Blood flow with copper nanoparticles, using water as the base fluid and different nanosized particles, has not been explored before. The equations for the Cu-water nanofluid are developed for the first time in the literature and simplified using the long-wavelength and low-Reynolds-number assumptions. Exact solutions are obtained for the velocity, pressure gradient, solid volume fraction of the nanoparticles, and temperature profile. The effects of various flow parameters on the flow and heat transfer characteristics are illustrated.

  2. Thermal effectiveness of multiple shell and tube pass TEMA E heat exchangers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pignotti, A.; Tamborenea, P.I.

    1988-02-01

    The thermal effectiveness of a TEMA E shell-and-tube heat exchanger, with one shell pass and an arbitrary number of tube passes, is determined under the usual simplifying assumptions of perfect transverse mixing of the shell fluid, no phase change, and temperature independence of the heat capacity rates and the heat transfer coefficient. A purely algebraic solution is obtained for the effectiveness as a function of the heat capacity rate ratio and the number of heat transfer units. The case with M shell passes and N tube passes is easily expressed in terms of the single-shell-pass case.
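
    For context, here is a hedged sketch of the classical effectiveness-NTU closed form for one shell pass and an even number of tube passes, which follows from the same simplifying assumptions; this is the textbook result, not necessarily the paper's more general algebraic solution for arbitrary pass counts.

```python
# Classic epsilon-NTU result for a 1 shell pass / 2, 4, ... tube pass
# exchanger (textbook form, assumed applicable under the assumptions above).
import math

def effectiveness_1shell(ntu, c):
    """Effectiveness as a function of NTU and heat capacity rate ratio c."""
    s = math.sqrt(1.0 + c * c)
    e = math.exp(-ntu * s)
    return 2.0 / ((1.0 + c) + s * (1.0 + e) / (1.0 - e))

print(effectiveness_1shell(ntu=2.0, c=0.5))  # ~0.69 for these inputs
```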

  3. Generalization of low pressure, gas-liquid, metastable sound speed to high pressures

    NASA Technical Reports Server (NTRS)

    Bursik, J. W.; Hall, R. M.

    1981-01-01

    A theory is developed for isentropic metastable sound propagation in high pressure gas-liquid mixtures. Without simplification, it also correctly predicts the minimum speed for low pressure air-water measurements where other authors are forced to postulate isothermal propagation. This is accomplished by a mixture heat capacity ratio which automatically adjusts from its single phase values to approximately the isothermal value of unity needed for the minimum speed. Computations are made for the pure components parahydrogen and nitrogen, with emphasis on the latter. With simplifying assumptions, the theory reduces to a well known approximate formula limited to low pressure.

  4. Ferrofluids: Modeling, numerical analysis, and scientific computation

    NASA Astrophysics Data System (ADS)

    Tomas, Ignacio

    This dissertation presents some developments in the numerical analysis of partial differential equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the micropolar model proposed by R.E. Rosensweig. The Micropolar Navier-Stokes Equations (MNSE) are a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much bigger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point of this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable, and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving on to the much more complex Rosensweig model, we provide a definition (approximation) for the effective magnetizing field h, and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we set up the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ. We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for the Rosensweig model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from the Rosensweig model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model. For a simplified version of this model and the corresponding numerical scheme we prove, in addition to stability, convergence and existence of solutions as a by-product. Throughout this dissertation, we provide numerical experiments, not only to validate mathematical results, but also to help the reader gain a qualitative understanding of the PDE models analyzed (the MNSE, the Rosensweig model, and the two-phase model). In addition, we also provide computational experiments to illustrate the potential of these simple models and their ability to capture basic phenomenological features of ferrofluids, such as the Rosensweig instability in the case of the two-phase model. In this respect, we highlight the incisive numerical experiments with the two-phase model illustrating the critical role of the demagnetizing field in reproducing physically realistic behavior of ferrofluids.

  5. Stability analysis of shallow wake flows

    NASA Astrophysics Data System (ADS)

    Kolyshkin, A. A.; Ghidaoui, M. S.

    2003-11-01

    Experimentally observed periodic structures in shallow (i.e. bounded) wake flows are believed to appear as a result of hydrodynamic instability. Previously published studies used linear stability analysis under the rigid-lid assumption to investigate the onset of instability of wakes in shallow water flows. The objectives of this paper are: (i) to provide a preliminary assessment of the accuracy of the rigid-lid assumption; (ii) to investigate the influence of the shape of the base flow profile on the stability characteristics; (iii) to formulate the weakly nonlinear stability problem for shallow wake flows and show that the evolution of the instability is governed by the Ginzburg-Landau equation; and (iv) to establish the connection between weakly nonlinear analysis and the observed flow patterns in shallow wake flows which are reported in the literature. It is found that the relative error in determining the critical value of the shallow wake stability parameter induced by the rigid-lid assumption is below 10% for the practical range of Froude number. In addition, it is shown that the shape of the velocity profile has a large influence on the stability characteristics of shallow wakes. Starting from the rigid-lid shallow-water equations and using the method of multiple scales, an amplitude evolution equation for the most unstable mode is derived. The resulting equation has complex coefficients and is of Ginzburg-Landau type. An example calculation of the complex coefficients of the Ginzburg-Landau equation confirms the existence of a finite equilibrium amplitude, where the unstable mode evolves with time into a limit-cycle oscillation. This is consistent with flow patterns observed by Ingram & Chu (1987), Chen & Jirka (1995), Balachandar et al. (1999), and Balachandar & Tachie (2001). Reasonable agreement is found between the saturation amplitude obtained from the Ginzburg-Landau equation under some simplifying assumptions and the numerical data of Grubišić et al. (1995). Such consistency provides further evidence that experimentally observed structures in shallow wake flows may be described by the nonlinear Ginzburg-Landau equation. Previous works have found similar consistency between the Ginzburg-Landau model and experimental data for the case of deep (i.e. unbounded) wake flows. However, it must be emphasized that much more information is required to confirm the appropriateness of the Ginzburg-Landau equation in describing shallow wake flows.
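
    To make the saturation mechanism concrete, here is a generic toy integration of a Landau-type amplitude equation (real coefficients assumed for simplicity; parameter values invented, not from the paper), showing the unstable mode settling at the finite equilibrium amplitude sqrt(sigma/lam).

```python
# Generic illustration: dA/dt = sigma*A - lam*|A|^2*A saturates at
# |A| = sqrt(sigma/lam), the "finite equilibrium amplitude" referred to
# above. Forward-Euler time stepping, illustrative parameters only.
sigma, lam = 0.5, 2.0
A, dt = 0.01, 0.01
for _ in range(5000):
    A += dt * (sigma * A - lam * abs(A) ** 2 * A)
print(A, (sigma / lam) ** 0.5)  # numerical vs. analytic saturation amplitude
```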

  6. A general numerical model for wave rotor analysis

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel W.

    1992-01-01

    Wave rotors represent one of the promising technologies for achieving very high core temperatures and pressures in future gas turbine engines. Their operation depends upon unsteady gas dynamics and as such, their analysis is quite difficult. This report describes a numerical model which has been developed to perform such an analysis. Following a brief introduction, a summary of the wave rotor concept is given. The governing equations are then presented, along with a summary of the assumptions used to obtain them. Next, the numerical integration technique is described. This is an explicit finite volume technique based on the method of Roe. The discussion then focuses on the implementation of appropriate boundary conditions. Following this, some results are presented which first compare the numerical approximation to the governing differential equations and then compare the overall model to an actual wave rotor experiment. Finally, some concluding remarks are presented concerning the limitations of the simplifying assumptions and areas where the model may be improved.
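
    As a minimal generic illustration of the ingredients named above (an explicit finite-volume update with a Roe-type flux), the sketch below solves scalar linear advection rather than the full wave-rotor gas dynamics; the grid size and CFL number are arbitrary assumptions.

```python
# Explicit first-order finite-volume step with a Roe-type interface flux
# for linear advection u_t + a*u_x = 0 (boundary cells held fixed).
import numpy as np

a, dx, dt = 1.0, 0.01, 0.005  # CFL = a*dt/dx = 0.5
u = np.where(np.linspace(0, 1, 100) < 0.5, 1.0, 0.0)  # step initial data

def roe_flux(uL, uR):
    # Central flux plus upwind dissipation: exact Roe flux in the linear case.
    return 0.5 * a * (uL + uR) - 0.5 * abs(a) * (uR - uL)

for _ in range(50):
    F = roe_flux(u[:-1], u[1:])            # fluxes at cell interfaces
    u[1:-1] -= dt / dx * (F[1:] - F[:-1])  # conservative update
print(u[70:80].round(3))  # smeared step, advected to x ~ 0.75
```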

  7. Refracted arrival waves in a zone of silence from a finite thickness mixing layer.

    PubMed

    Suzuki, Takao; Lele, Sanjiva K

    2002-02-01

    Refracted arrival waves which propagate in the zone of silence of a finite thickness mixing layer are analyzed using geometrical acoustics in two dimensions. Here, two simplifying assumptions are made: (i) the mean flow field is transversely sheared, and (ii) the mean velocity and temperature profiles approach the free-stream conditions exponentially. Under these assumptions, ray trajectories are analytically solved, and a formula for acoustic pressure amplitude in the far field is derived in the high-frequency limit. This formula is compared with the existing theory based on a vortex sheet corresponding to the low-frequency limit. The analysis covers the dependence on the Mach number as well as on the temperature ratio. The results show that both limits have some qualitative similarities, but the amplitude in the zone of silence at high frequencies is proportional to ω^(-1/2), while that at low frequencies is proportional to ω^(-3/2), ω being the angular frequency of the source.

  8. Assessment of ecotoxicological risks related to depositing dredged materials from canals in northern France on soil.

    PubMed

    Perrodin, Yves; Babut, Marc; Bedell, Jean-Philippe; Bray, Marc; Clement, Bernard; Delolme, Cécile; Devaux, Alain; Durrieu, Claude; Garric, Jeanne; Montuelle, Bernard

    2006-08-01

    The implementation of an ecological risk assessment framework is presented for dredged material deposits on soil close to a canal and groundwater, and tested with sediment samples from canals in northern France. This framework includes two steps: a simplified risk assessment based on contaminant concentrations and a detailed risk assessment based on toxicity bioassays and column leaching tests. The tested framework includes three related assumptions: (a) effects on plants (Lolium perenne L.), (b) effects on aquatic organisms (Escherichia coli, Pseudokirchneriella subcapitata, Ceriodaphnia dubia, and Xenopus laevis) and (c) effects on groundwater contamination. Several exposure conditions were tested using standardised bioassays. According to the specific dredged material tested, the three assumptions were more or less discriminatory, soil and groundwater pollution being the most sensitive. Several aspects of the assessment procedure must now be improved, in particular assessment endpoint design for risks to ecosystems (e.g., integration of pollutant bioaccumulation), bioassay protocols and column leaching test design.

  9. Tests for the extraction of Boer-Mulders functions

    NASA Astrophysics Data System (ADS)

    Christova, Ekaterina; Leader, Elliot; Stoilov, Michail

    2017-12-01

    At present, the Boer-Mulders (BM) functions are extracted from asymmetry data using the simplifying assumption of their proportionality to the Sivers functions for each quark flavour. Here we present two independent tests for this assumption. We subject COMPASS data on semi-inclusive deep inelastic scattering on the 〈cos ϕ_h〉, 〈cos 2ϕ_h〉 and Sivers asymmetries to these tests. Our analysis shows that the tests are satisfied with the available data if the proportionality constant is the same for all quark flavours, which does not correspond to the flavour dependence used in existing analyses. This suggests that the published information on the BM functions may be unreliable. The 〈cos ϕ_h〉 and 〈cos 2ϕ_h〉 asymmetries also receive contributions from the, in principle, calculable Cahn effect. We succeed in extracting the Cahn contributions from experiment (we believe for the first time) and compare them with their calculated values, with interesting implications.

  10. Moisture Risk in Unvented Attics Due to Air Leakage Paths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prahl, D.; Shaffer, M.

    2014-11-01

    IBACOS completed an initial analysis of moisture damage potential in an unvented attic insulated with closed-cell spray polyurethane foam. To complete this analysis, the research team collected field data, used computational fluid dynamics to quantify the airflow rates through individual airflow (crack) paths, simulated hourly flow rates through the leakage paths with CONTAM software, correlated the CONTAM flow rates with indoor humidity ratios from Building Energy Optimization software, and used Wärme und Feuchte instationär Pro two-dimensional modeling to determine the moisture content of the building materials surrounding the cracks. Given the number of simplifying assumptions and numerical models associated with this analysis, the results indicate that localized damage due to high moisture content of the roof sheathing is possible under very low airflow rates. Reducing the number of assumptions and approximations through field studies and laboratory experiments would be valuable to understand the real-world moisture damage potential in unvented attics.

  11. Calculation of wall effects of flow on a perforated wall with a code of surface singularities

    NASA Astrophysics Data System (ADS)

    Piat, J. F.

    1994-07-01

    Simplifying assumptions are inherent in the analytic method previously used for the determination of wall interferences on a model in a wind tunnel. To eliminate these assumptions, a new code based on the vortex lattice method was developed. It is suitable for processing any shape of test section with limited areas of porous wall, the characteristics of which can be nonlinear. Calculations of wall effects in the S3MA wind tunnel, whose test section is rectangular (0.78 m x 0.56 m) and fitted with two or four perforated walls, have been performed. Wall porosity factors were adjusted to obtain the best fit between measured and computed pressure distributions on the test section walls. The code was checked by measuring nearly equal drag coefficients for a model tested in the S3MA wind tunnel (after wall corrections) and in the S2MA wind tunnel, whose test section is seven times larger (negligible wall corrections).

  12. Two time scale output feedback regulation for ill-conditioned systems

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Moerder, D. D.

    1986-01-01

    Issues pertaining to the well-posedness of a two time scale approach to the output feedback regulator design problem are examined. An approximate quadratic performance index which reflects a two time scale decomposition of the system dynamics is developed. It is shown that, under mild assumptions, minimization of this cost leads to feedback gains providing a second-order approximation of optimal full system performance. A simplified approach to two time scale feedback design is also developed, in which gains are separately calculated to stabilize the slow and fast subsystem models. By exploiting the notion of combined control and observation spillover suppression, conditions are derived assuring that these gains will stabilize the full-order system. A sequential numerical algorithm is described which obtains output feedback gains minimizing a broad class of performance indices, including the standard LQ case. It is shown that the algorithm converges to a local minimum under nonrestrictive assumptions. This procedure is adapted to and demonstrated for the two time scale design formulations.

  13. Accounting for age structure and spatial structure in eco-evolutionary analyses of a large, mobile vertebrate.

    PubMed

    Waples, Robin S; Scribner, Kim; Moore, Jennifer; Draheim, Hope; Etter, Dwayne; Boersen, Mark

    2018-04-14

    The idealized concept of a population is integral to ecology, evolutionary biology, and natural resource management. To make analyses tractable, most models adopt simplifying assumptions, which almost inevitably are violated by real species in nature. Here we focus on both demographic and genetic estimates of effective population size per generation (Ne), the effective number of breeders per year (Nb), and Wright's neighborhood size (NS) for black bears (Ursus americanus) that are continuously distributed in the northern lower peninsula of Michigan, USA. We illustrate practical application of recently-developed methods to account for violations of two common, simplifying assumptions about populations: 1) reproduction occurs in discrete generations, and 2) mating occurs randomly among all individuals. We use a 9-year harvest dataset of >3300 individuals, together with genetic determination of 221 parent-offspring pairs, to estimate male and female vital rates, including age-specific survival, age-specific fecundity, and age-specific variance in fecundity (for which empirical data are rare). We find strong evidence for overdispersed variance in reproductive success of same-age individuals in both sexes, and we show that constraints on litter size have a strong influence on results. We also estimate that another life-history trait that is often ignored (skip breeding by females) has a relatively modest influence, reducing Nb by 9% and increasing Ne by 3%. We conclude that isolation by distance depresses genetic estimates of Nb, which implicitly assume a randomly-mating population. Estimated demographic NS (100, based on parent-offspring dispersal) was similar to genetic NS (85, based on regression of genetic distance and geographic distance), indicating that the >36,000 km2 study area includes about 4-5 black-bear neighborhoods. Results from this expansive data set provide important insight into effects of violating assumptions when estimating evolutionary parameters for long-lived, free-ranging species. In conjunction with recently-developed analytical methodology, the ready availability of non-lethal DNA sampling methods and the ability to rapidly and cheaply survey many thousands of molecular markers should facilitate eco-evolutionary studies like this for many more species in nature.
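
    As a small worked example of the neighborhood-size concept discussed above, the sketch below evaluates Wright's NS = 4πσ²D; the dispersal standard deviation and density values are invented for illustration and are not the study's estimates.

```python
# Wright's neighborhood size NS = 4*pi*sigma^2*D, with sigma the axial
# parent-offspring dispersal SD and D the effective density
# (illustrative values only, chosen to give NS on the order of 100).
import math

def neighborhood_size(sigma_km, density_per_km2):
    return 4.0 * math.pi * sigma_km ** 2 * density_per_km2

print(neighborhood_size(sigma_km=20.0, density_per_km2=0.02))  # ~100.5
```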

  14. Control-oriented modeling and adaptive backstepping control for a nonminimum phase hypersonic vehicle.

    PubMed

    Ye, Linqi; Zong, Qun; Tian, Bailing; Zhang, Xiuyun; Wang, Fang

    2017-09-01

    In this paper, the nonminimum phase problem of a flexible hypersonic vehicle is investigated. The main challenge of nonminimum phase is that it prevents the application of dynamic inversion methods to nonlinear control design. To solve this problem, we investigate the relationship between nonminimum phase and backstepping control, finding that a stable nonlinear controller can be obtained by changing the control loop on the basis of backstepping control. By extending the control loop to cover the internal dynamics, the internal states are directly controlled by the inputs and simultaneously serve as virtual control for the external states, making it possible to guarantee output tracking as well as internal stability. Then, based on the extended control loop, a simplified control-oriented model is developed to enable the applicability of the adaptive backstepping method. It simplifies the design process and relaxes some limitations caused by direct use of the unsimplified control-oriented model. Next, under proper assumptions, asymptotic stability is proved for constant commands, while bounded stability is proved for varying commands. The proposed method is compared with approximate backstepping control and dynamic surface control and is shown to have superior tracking accuracy as well as robustness in the simulation results. This paper may also provide beneficial guidance for the control design of other complex systems.

  15. Observation of radiation damage induced by single-ion hits at the heavy ion microbeam system

    NASA Astrophysics Data System (ADS)

    Kamiya, Tomihiro; Sakai, Takuro; Hirao, Toshio; Oikawa, Masakazu

    2001-07-01

    A single-ion hit system combined with the JAERI heavy ion microbeam system can be applied to observe individual phenomena induced by interactions between high-energy ions and a semiconductor device, using a technique that measures the pulse height of transient current (TC) signals. The reduction of the TC pulse height for a Si PIN photodiode was measured under irradiation of 15 MeV Ni ions onto various micron-sized areas in the diode. The data containing the damage effect of these irradiations were analyzed by least-squares fitting using a Weibull distribution function. Changes of the scale and shape parameters as functions of the width of the irradiation areas led us to the assumption that charge collection in a diode has a micron-level lateral extent larger than the spatial resolution of the microbeam at 1 μm. Numerical simulations of these measurements were made with a simplified two-dimensional model based on this assumption using a Monte Carlo method. Calculated data reproducing the pulse-height reductions by single-ion irradiations were analyzed using the same function as that for the measurement. The result of this analysis, which shows the same tendency in the change of parameters as the measurements, seems to support our assumption.

  16. Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol. 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.

    1983-05-01

    As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM), is a micro-computer based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM, including the methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input), in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The User's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.

  17. Simplifying the interaction between cognitive models and task environments with the JSON Network Interface.

    PubMed

    Hope, Ryan M; Schoelles, Michael J; Gray, Wayne D

    2014-12-01

    Process models of cognition, written in architectures such as ACT-R and EPIC, should be able to interact with the same software with which human subjects interact. By eliminating the need to simulate the experiment, this approach would simplify the modeler's effort, while ensuring that all steps required of the human are also required by the model. In practice, the difficulties of allowing one software system to interact with another present a significant barrier to any modeler who is not also skilled at this type of programming. The barrier increases if the programming language used by the modeling software differs from that used by the experimental software. The JSON Network Interface simplifies this problem for ACT-R modelers, and potentially, modelers using other systems.
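
    As a generic illustration of the pattern (one software system driving another over a network), the sketch below sends newline-delimited JSON events over TCP; the message fields and port are hypothetical and do not reflect the actual JSON Network Interface protocol.

```python
# Hypothetical sketch of a task environment streaming JSON events to a
# model process over TCP (not the actual JNI message format).
import json
import socket

def send_event(sock, event):
    # One JSON object per line, newline-delimited.
    sock.sendall((json.dumps(event) + "\n").encode())

# Usage (assumes a model process listening on localhost:9000):
# sock = socket.create_connection(("localhost", 9000))
# send_event(sock, {"time": 0.5, "event": "display", "object": "target"})
```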

  18. 12 CFR 307.2 - Certification of assumption of deposit liabilities.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... satisfactory evidence of such deposit assumption, as required by section 8(q) of the FDI Act (12 U.S.C. 1818(q... evidence of such assumption for purposes of section 8(q). (e) Issuance of an order. The Executive Secretary... satisfactory evidence of such assumption, pursuant to section 8(q) of the FDI Act and this regulation...

  19. 12 CFR 307.2 - Certification of assumption of deposit liabilities.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... satisfactory evidence of such deposit assumption, as required by section 8(q) of the FDI Act (12 U.S.C. 1818(q... evidence of such assumption for purposes of section 8(q). (e) Issuance of an order. The Executive Secretary... satisfactory evidence of such assumption, pursuant to section 8(q) of the FDI Act and this regulation...

  1. 12 CFR 307.2 - Certification of assumption of deposit liabilities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... satisfactory evidence of such deposit assumption, as required by section 8(q) of the FDI Act (12 U.S.C. 1818(q... evidence of such assumption for purposes of section 8(q). (e) Issuance of an order. The Executive Secretary... satisfactory evidence of such assumption, pursuant to section 8(q) of the FDI Act and this regulation...

  2. 12 CFR 307.2 - Certification of assumption of deposit liabilities.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... satisfactory evidence of such deposit assumption, as required by section 8(q) of the FDI Act (12 U.S.C. 1818(q... evidence of such assumption for purposes of section 8(q). (e) Issuance of an order. The Executive Secretary... satisfactory evidence of such assumption, pursuant to section 8(q) of the FDI Act and this regulation...

  3. 12 CFR 307.2 - Certification of assumption of deposit liabilities.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... satisfactory evidence of such deposit assumption, as required by section 8(q) of the FDI Act (12 U.S.C. 1818(q... evidence of such assumption for purposes of section 8(q). (e) Issuance of an order. The Executive Secretary... satisfactory evidence of such assumption, pursuant to section 8(q) of the FDI Act and this regulation...

  4. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert M.

    2013-01-01

    A new regression model search algorithm was developed that may be applied to both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The algorithm is a simplified version of a more complex algorithm that was originally developed for the NASA Ames Balance Calibration Laboratory. The new algorithm performs regression model term reduction to prevent overfitting of data. It has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a regression model search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression model. Therefore, the simplified algorithm is not intended to replace the original algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new search algorithm.
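
    The sketch below illustrates the general idea of regression model term reduction to prevent overfitting; it uses a hypothetical greedy adjusted-R² criterion and is not the actual NASA algorithm or its statistical quality requirements.

```python
# Hypothetical greedy term reduction: drop candidate terms while the
# adjusted R^2 does not decrease, a simple guard against overfitting.
import numpy as np

def adj_r2(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, p = X.shape
    ss_res = resid @ resid
    ss_tot = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (ss_res / (n - p)) / (ss_tot / (n - 1))

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=200)
y = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(0, 0.1, size=200)
terms = {"1": x * 0 + 1, "x": x, "x^2": x**2, "x^3": x**3, "x^4": x**4}

keep, improved = dict(terms), True
while improved and len(keep) > 1:
    improved = False
    base = adj_r2(np.column_stack(list(keep.values())), y)
    for name in list(keep):
        trial = {k: v for k, v in keep.items() if k != name}
        if adj_r2(np.column_stack(list(trial.values())), y) >= base:
            keep, improved = trial, True
            break
print(sorted(keep))  # typically ['1', 'x', 'x^2'] for this synthetic data
```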

  5. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2013-01-01

    A new regression model search algorithm was developed in 2011 that may be used to analyze both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The new algorithm is a simplified version of a more complex search algorithm that was originally developed at the NASA Ames Balance Calibration Laboratory. The new algorithm has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression models. Therefore, the simplified search algorithm is not intended to replace the original search algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm either fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new regression model search algorithm.

  6. Estimating Green Net National Product for Puerto Rico: An Economic Measure of Sustainability

    NASA Astrophysics Data System (ADS)

    Wu, Shanshan; Heberling, Matthew T.

    2016-04-01

    This paper presents the data sources and methodology used to estimate Green Net National Product (GNNP), an economic metric of sustainability, for Puerto Rico. Using the change in GNNP as a one-sided test of weak sustainability (i.e., positive growth in GNNP is not enough to show the economy is sustainable), we measure the movement away from sustainability by examining the change in GNNP from 1993 to 2009. In order to calculate GNNP, we require both economic and natural capital data, but limited data for Puerto Rico require a number of simplifying assumptions. Based on the environmental challenges faced by Puerto Rico, we include damages from air emissions and solid waste, the storm protection value of mangroves and the value of extracting crushed stone as components in the depreciation of natural capital. Our estimate of GNNP also includes the value of time, which captures the effects of technological progress. The results show that GNNP had an increasing trend over the 17 years studied with two periods of negative growth (2004-2006 and 2007-2008). Our additional analysis suggests that the negative growth in 2004-2006 was possibly due to a temporary economic downturn. However, the negative growth in 2007-2008 was likely from the decline in the value of time, suggesting the island of Puerto Rico was moving away from sustainability during this time.

  7. Estimating Green Net National Product for Puerto Rico: An Economic Measure of Sustainability.

    PubMed

    Wu, Shanshan; Heberling, Matthew T

    2016-04-01

    This paper presents the data sources and methodology used to estimate Green Net National Product (GNNP), an economic metric of sustainability, for Puerto Rico. Using the change in GNNP as a one-sided test of weak sustainability (i.e., positive growth in GNNP is not enough to show the economy is sustainable), we measure the movement away from sustainability by examining the change in GNNP from 1993 to 2009. In order to calculate GNNP, we require both economic and natural capital data, but limited data for Puerto Rico require a number of simplifying assumptions. Based on the environmental challenges faced by Puerto Rico, we include damages from air emissions and solid waste, the storm protection value of mangroves and the value of extracting crushed stone as components in the depreciation of natural capital. Our estimate of GNNP also includes the value of time, which captures the effects of technological progress. The results show that GNNP had an increasing trend over the 17 years studied with two periods of negative growth (2004-2006 and 2007-2008). Our additional analysis suggests that the negative growth in 2004-2006 was possibly due to a temporary economic downturn. However, the negative growth in 2007-2008 was likely from the decline in the value of time, suggesting the island of Puerto Rico was moving away from sustainability during this time.
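
    To make the accounting concrete, here is a purely illustrative arithmetic sketch of the GNNP adjustment described above; all component names and dollar values are invented, not Puerto Rico estimates.

```python
# Illustrative arithmetic only: green NNP adjusts conventional NNP downward
# for natural-capital depreciation and upward for the value of time.
nnp = 60_000.0             # conventional net national product, $M (invented)
air_damage = 1_200.0       # damages from air emissions
waste_damage = 300.0       # damages from solid waste
stone_depletion = 150.0    # value of crushed-stone extraction
mangrove_loss = 80.0       # lost storm-protection value of mangroves
value_of_time = 900.0      # technological-progress term

gnnp = (nnp - air_damage - waste_damage - stone_depletion - mangrove_loss
        + value_of_time)
print(gnnp)  # -> 59170.0
```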

  8. Unique Results and Lessons Learned from the TSS Missions

    NASA Technical Reports Server (NTRS)

    Stone, Nobie H.

    2016-01-01

    In 1924, Irving Langmuir and H. M. Mott-Smith published a theoretical model for the complex plasma sheath phenomenon in which they identified some very special cases which greatly simplified the sheath and allowed a closed solution to the problem. The most widely used application is for an electrostatic, or "Langmuir," probe in laboratory plasma. Although the Langmuir probe is physically simple (a biased wire), the theory describing its functional behavior and its current-voltage characteristic is extremely complex and, accordingly, a number of assumptions and approximations are used in the Langmuir-Mott-Smith (LMS) model. These simplifications, correspondingly, place limits on the model's range of application. Adapting the LMS model to real-life conditions is the subject of numerous papers and dissertations. The Orbit-Motion Limited (OML) model that is widely used today is one of these adaptations and is a convenient means of calculating sheath effects. The OML equation for electron current collection by a positively biased body is simply I ≈ A · j_eo · (2/√π) · φ^(1/2), where A is the area of the body and φ is the electric potential on the body with respect to the plasma. Since the Langmuir probe is a simple biased wire immersed in plasma, it is particularly tempting to use the OML equation in calculating the characteristics of the long, highly biased wires of an Electric Sail in the solar wind plasma. However, in order to arrive at the OML equation, a number of additional simplifying assumptions and approximations (beyond those made by Langmuir and Mott-Smith) are necessary. The OML equation is a good approximation when all conditions are met, but it would appear that the Electric Sail problem lies outside of the limits of applicability.
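
    As a worked illustration of the quoted equation (a sketch only; treating φ as normalized by the electron temperature is an assumption here, and validity requires the OML conditions discussed above):

      import numpy as np

      def oml_electron_current(area, j_e0, phi, kTe):
          """OML electron collection by a strongly positive body:
          I ~ A * j_e0 * (2/sqrt(pi)) * sqrt(phi/kTe).  Normalizing phi by the
          electron temperature kTe (both in consistent units, e.g. volts and
          eV) is an assumption of this sketch."""
          return area * j_e0 * (2.0 / np.sqrt(np.pi)) * np.sqrt(phi / kTe)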

  9. Experimental quantification of the fluid dynamics in blood-processing devices through 4D-flow imaging: A pilot study on a real oxygenator/heat-exchanger module.

    PubMed

    Piatti, Filippo; Palumbo, Maria Chiara; Consolo, Filippo; Pluchinotta, Francesca; Greiser, Andreas; Sturla, Francesco; Votta, Emiliano; Siryk, Sergii V; Vismara, Riccardo; Fiore, Gianfranco Beniamino; Lombardi, Massimo; Redaelli, Alberto

    2018-02-08

    The performance of blood-processing devices largely depends on the associated fluid dynamics, which hence represents a key aspect in their design and optimization. To this aim, two approaches are currently adopted: computational fluid-dynamics, which yields highly resolved three-dimensional data but relies on simplifying assumptions, and in vitro experiments, which typically involve the direct video-acquisition of the flow field and provide 2D data only. We propose a novel method that exploits space- and time-resolved magnetic resonance imaging (4D-flow) to quantify the complex 3D flow field in blood-processing devices and to overcome these limitations. We tested our method on a real device that integrates an oxygenator and a heat exchanger. A dedicated mock loop was implemented, and novel 4D-flow sequences with sub-millimetric spatial resolution and region-dependent velocity encodings were defined. Automated in-house software was developed to quantify the complex 3D flow field within the different regions of the device: region-dependent flow rates, pressure drops, paths of the working fluid and wall shear stresses were computed. Our analysis highlighted the effects of fine geometrical features of the device on the local fluid-dynamics, which would likely not be observed by current in vitro approaches. Also, the effects of non-idealities on the flow field distribution were captured, thanks to the absence of the simplifying assumptions that typically characterize numerical models. To the best of our knowledge, our approach is the first of its kind and could be extended to the analysis of a broad range of clinically relevant devices. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Study on low intensity aeration oxygenation model and optimization for shallow water

    NASA Astrophysics Data System (ADS)

    Chen, Xiao; Ding, Zhibin; Ding, Jian; Wang, Yi

    2018-02-01

    Aeration/oxygenation is an effective measure for improving the self-purification capacity of shallow water, but high energy consumption, high noise, and expensive management have restrained the development and application of this process. Based on two-film theory, a theoretical model consisting of three-dimensional partial differential equations for aeration in shallow water is established. To simplify the equations, basic assumptions of gas-liquid mass transfer in the vertical direction and concentration diffusion in the horizontal direction are proposed based on engineering practice and are tested against simulated gas holdup obtained by modeling the gas-liquid two-phase flow in an aeration tank under low-intensity conditions. Based on these assumptions and the theory of shallow permeability, the three-dimensional partial differential equations are simplified and a calculation model of low-intensity aeration oxygenation is obtained. The model is verified by comparison with aeration experiments. The conclusions are as follows: (1) the calculation model of gas-liquid mass transfer in the vertical direction and concentration diffusion in the horizontal direction reflects the aeration process well; (2) under low-intensity conditions, long-term aeration and oxygenation is theoretically feasible for enhancing the self-purification capacity of water bodies; (3) for the same total aeration intensity, multipoint distributed aeration has a pronounced effect on the diffusion of oxygen concentration in the horizontal direction; (4) in shallow water treatment, reducing the volume of aeration equipment through miniaturization, arraying, low intensity, and mobility, so as to overcome high energy consumption, large size, noise, and other problems, can provide a good reference.
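
    For orientation, two-film theory in its simplest lumped form (a zero-dimensional sketch, not the paper's three-dimensional model; all numbers are illustrative):

      import numpy as np

      # Lumped two-film oxygen transfer: dC/dt = kLa * (Cs - C).
      kLa = 2.0e-4                         # volumetric transfer coefficient [1/s]
      Cs, C0 = 9.1, 2.0                    # saturation and initial DO [mg/L]
      t = np.linspace(0.0, 6 * 3600, 200)  # six hours of low-intensity aeration
      C = Cs - (Cs - C0) * np.exp(-kLa * t)
      print("dissolved oxygen after 6 h: %.2f mg/L" % C[-1])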

  11. Automated Derivation of Complex System Constraints from User Requirements

    NASA Technical Reports Server (NTRS)

    Muery, Kim; Foshee, Mark; Marsh, Angela

    2006-01-01

    International Space Station (ISS) payload developers submit their payload science requirements for the development of on-board execution timelines. The ISS systems required to execute the payload science operations must be represented as constraints for the execution timeline. Payload developers use a software application, User Requirements Collection (URC), to submit their requirements by selecting a simplified representation of ISS system constraints. To fully represent the complex ISS systems, the constraints require a level of detail that is beyond the insight of the payload developer. To provide the complex representation of the ISS system constraints, HOSC (Huntsville Operations Support Center) operations personnel, specifically the Payload Activity Requirements Coordinators (PARC), manually translate the payload developers' simplified constraints into detailed ISS system constraints used for scheduling the payload activities in the Consolidated Planning System (CPS). This paper describes the implementation of a software application, User Requirements Integration (URI), developed to automate this manual ISS constraint translation process.

  12. 46 CFR 178.320 - Intact stability requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... stability proof test in accordance with § 178.330 of this part in the presence of a Coast Guard marine inspector. (b) A pontoon vessel operating on protected waters must undergo a simplified stability proof test... cognizant OCMI may dispense with the simplified stability proof test in § 178.330 for a vessel carrying not...

  13. Directional Communication in Evolved Multiagent Teams

    DTIC Science & Technology

    2013-06-10

    decentralized localization proposed by Franchi et al. [9]. Overall, the significant advantage of directional communication over non-directional ... This paper hypothesizes that such directional reception benefits the evolution of communicating autonomous agents because it simplifies the language required to express positional information, which ... systems.

  14. Simplified failure sequence evaluation of reactor pressure vessel head corroding in-core instrumentation assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McVicker, J.P.; Conner, J.T.; Hasrouni, P.N.

    1995-11-01

    In-Core Instrumentation (ICI) assemblies located on a Reactor Pressure Vessel Head have a history of boric acid leakage. The acid tends to corrode the nuts and studs which fasten the flanges of the assembly, thereby compromising the assembly's structural integrity. This paper provides a simplified practical approach for determining the likelihood of an undetected progressing assembly stud deterioration, which would lead to a catastrophic loss of reactor coolant. The structural behavior of the In-Core Instrumentation flanged assembly is modeled using an elastic composite section assumption, with the studs transmitting tension and the pressure sealing gasket experiencing compression. Using the above technique, one can calculate the flange relative deflection and the consequential coolant loss flow rate, as well as the stress in any stud. A solved real-life example develops the expected failure sequence and discusses the exigency of leak detection for safe shutdown. In the particular case of Calvert Cliffs Nuclear Power Plant (CCNPP), it is concluded that leak detection occurs before catastrophic failure of the ICI flange assembly.

  15. The limitations of simple gene set enrichment analysis assuming gene independence.

    PubMed

    Tamayo, Pablo; Steinhardt, George; Liberzon, Arthur; Mesirov, Jill P

    2016-02-01

    Since its first publication in 2003, the Gene Set Enrichment Analysis method, based on the Kolmogorov-Smirnov statistic, has been heavily used, modified, and also questioned. Recently a simplified approach using a one-sample t-test score to assess enrichment and ignoring gene-gene correlations was proposed by Irizarry et al. 2009 as a serious contender. The argument criticizes Gene Set Enrichment Analysis's nonparametric nature and its use of an empirical null distribution as unnecessary and hard to compute. We refute these claims by careful consideration of the assumptions of the simplified method and its results, including a comparison with Gene Set Enrichment Analysis on a large benchmark set of 50 datasets. Our results provide strong empirical evidence that gene-gene correlations cannot be ignored, due to the significant variance inflation they produce in the enrichment scores, and should be taken into account when estimating gene set enrichment significance. In addition, we discuss the challenges that the complex correlation structure and multi-modality of gene sets pose more generally for gene set enrichment methods. © The Author(s) 2012.
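
    The variance inflation at issue has a simple closed form: for a set of n genes with average pairwise correlation ρ, the variance of the set mean is (1 + (n − 1)ρ)/n times the single-gene variance, not 1/n. A minimal simulation (all numbers illustrative):

      import numpy as np

      rng = np.random.default_rng(0)
      n_genes, n_sims, rho = 50, 5000, 0.2

      # Gene scores with a common pairwise correlation rho and unit variance.
      cov = np.full((n_genes, n_genes), rho) + (1.0 - rho) * np.eye(n_genes)
      z = rng.multivariate_normal(np.zeros(n_genes), cov, size=n_sims)

      set_means = z.mean(axis=1)
      print("empirical variance of set mean:", round(set_means.var(), 4))
      print("variance assuming independence:", 1.0 / n_genes)
      print("theory with correlation:", (1 + (n_genes - 1) * rho) / n_genes)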

  16. Shot-by-shot Spectrum Model for Rod-pinch, Pulsed Radiography Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, William Monford

    A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. The resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. “Goodness of fit” is compared with output from the LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays (“MCNPX”) model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. In conclusion, improvements to the model, specifically for application to other geometries, are discussed.
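
    The report's actual spectrum model is not reproduced in the abstract; as a rough illustration of the shot-by-shot idea, a sketch assuming a Kramers-type thick-target spectrum accumulated over the measured pulse:

      import numpy as np

      def shot_spectrum(t, V, I, n_bins=200):
          """Accumulate a Kramers-like thick-target spectrum from measured
          voltage V(t) [MV] and current I(t) [A]: each time slice contributes
          intensity ~ I * (V - E) for photon energies E below the endpoint.
          A hypothetical illustration; a real model adds geometry, target
          physics, and attenuation."""
          E = np.linspace(0.01, V.max(), n_bins)   # photon energy grid [MeV]
          dNdE = np.zeros_like(E)
          for Vi, Ii, dti in zip(V, I, np.gradient(t)):
              mask = E < Vi
              dNdE[mask] += Ii * (Vi - E[mask]) * dti
          return E, dNdE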

  18. Steady flow model user's guide

    NASA Astrophysics Data System (ADS)

    Doughty, C.; Hellstrom, G.; Tsang, C. F.; Claesson, J.

    1984-07-01

    Sophisticated numerical models that solve the coupled mass and energy transport equations for nonisothermal fluid flow in a porous medium were used to match analytical results and field data for aquifer thermal energy storage (ATES) systems. As an alternative for studying the ATES problem, the Steady Flow Model (SFM), a simplified but fast numerical model, was developed. A steady, purely radial flow field is prescribed in the aquifer and incorporated into the heat transport equation, which is then solved numerically. While the radial flow assumption limits the range of ATES systems that can be studied using the SFM, it greatly simplifies use of this code. The preparation of input is quite simple compared to that for a sophisticated coupled mass and energy model, and the cost of running the SFM is far lower. The simple flow field allows use of a special calculational mesh that eliminates the numerical dispersion usually associated with the numerical solution of convection problems. The problem is defined, the algorithm used to solve it is outlined, and the input and output for the SFM are described.
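
    A minimal sketch of the prescribed steady radial flow idea (one explicit upwind advection step; the SFM itself also treats conduction and uses a special mesh to suppress numerical dispersion):

      import numpy as np

      def advect_radial(T, r, Q, b, theta, dt):
          """One upwind step of radial heat advection for injection (Q > 0),
          with pore velocity u(r) = Q / (2*pi*r*b*theta) from the prescribed
          steady radial flow through an aquifer of thickness b and porosity
          theta.  A sketch only."""
          u = Q / (2.0 * np.pi * r * b * theta)
          Tn = T.copy()
          Tn[1:] = T[1:] - dt * u[1:] * (T[1:] - T[:-1]) / np.diff(r)
          return Tn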

  19. A simplified building airflow model for agent concentration prediction.

    PubMed

    Jacques, David R; Smith, David A

    2010-11-01

    A simplified building airflow model is presented that can be used to predict the spread of a contaminant agent from a chemical or biological attack. If the dominant means of agent transport throughout the building is an air-handling system operating at steady-state, a linear time-invariant (LTI) model can be constructed to predict the concentration in any room of the building as a result of either an internal or external release. While the model does not capture weather-driven and other temperature-driven effects, it is suitable for concentration predictions under average daily conditions. The model is easily constructed using information that should be accessible to a building manager, supplemented with assumptions based on building codes and standard air-handling system design practices. The results of the model are compared with a popular multi-zone model for a simple building and are demonstrated for building examples containing one or more air-handling systems. The model can be used for rapid concentration prediction to support low-cost placement strategies for chemical and biological detection sensors.
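
    A minimal sketch of such a model, assuming well-mixed rooms and steady air-handling flows (all room volumes, flows, and the release rate below are hypothetical):

      import numpy as np

      # Hypothetical 3-room building served by one air handler (steady flows).
      V = np.array([50.0, 80.0, 60.0])      # room volumes [m^3]
      Q = np.array([[0.0, 0.1, 0.0],        # Q[i, j]: flow from room j to i [m^3/s]
                    [0.2, 0.0, 0.1],
                    [0.0, 0.2, 0.0]])
      Qs = np.array([0.3, 0.2, 0.3])        # clean supply air into each room [m^3/s]
      out = Qs + Q.sum(axis=1)              # steady state: outflow equals inflow

      A = (Q - np.diag(out)) / V[:, None]   # LTI form: dC/dt = A C + s
      s = np.array([0.0, 5e-4, 0.0]) / V    # internal release in room 2 [kg/s]

      C, dt = np.zeros(3), 1.0
      for _ in range(3600):                 # one hour of explicit Euler steps
          C = C + dt * (A @ C + s)
      print("room concentrations [kg/m^3]:", C)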

  1. Computer algorithm for analyzing and processing borehole strainmeter data

    USGS Publications Warehouse

    Langbein, John O.

    2010-01-01

    The newly installed Plate Boundary Observatory (PBO) strainmeters record signals from tectonic activity, Earth tides, and atmospheric pressure. Important information about tectonic processes may occur at amplitudes at and below tidal strains and pressure loading. If incorrect assumptions are made regarding the background noise in the strain data, then the estimates of tectonic signal amplitudes may be incorrect. Furthermore, the simplifying assumption that the data are uncorrelated can lead to incorrect results, and pressure loading and tides may not be completely removed from the raw data. Instead, any algorithm used to process strainmeter data must incorporate the strong temporal correlations that are inherent in these data. The technique described here uses least squares but employs a data covariance that describes the temporal correlation of strainmeter data. There are several advantages to this method since many parameters are estimated simultaneously. These parameters include: (1) functional terms that describe the underlying error model, (2) the tidal terms, (3) the pressure loading term(s), (4) amplitudes of offsets, either those from earthquakes or from the instrument, (5) rate and changes in rate, and (6) the amplitudes and time constants of either logarithmic or exponential curves that can characterize postseismic deformation or diffusion of fluids near the strainmeter. With the proper error model, realistic estimates of the standard errors of the various parameters are obtained; this is especially critical in determining the statistical significance of a suspected tectonic strain signal. The program also provides a method of tracking the various adjustments required to process strainmeter data. In addition, the program provides several plots to assist with identifying either tectonic signals or other signals that may need to be removed before any geophysical signal can be identified.
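
    A minimal sketch of the core estimator (generalized least squares with a full data covariance; the columns of the design matrix would hold the tidal, pressure, offset, rate, and transient terms listed above, and the covariance would come from the fitted error model):

      import numpy as np

      def gls_fit(X, y, C):
          """Generalized least squares with data covariance C encoding the
          temporal correlation of the strain time series.  Returns parameter
          estimates and their standard errors."""
          Ci = np.linalg.inv(C)
          cov_beta = np.linalg.inv(X.T @ Ci @ X)
          beta = cov_beta @ (X.T @ Ci @ y)
          return beta, np.sqrt(np.diag(cov_beta))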

  2. Doomed to Drown? Sediment Dynamics, Infrastructure, and the Threat of Sea Level Rise in the Bengal Delta

    NASA Astrophysics Data System (ADS)

    Rogers, K. G.; Overeem, I.

    2017-12-01

    The Bengal Delta in Bangladesh is regularly described as a "delta in peril" of catastrophic coastal flooding. In order to maintain a positive surface elevation, sediment aggradation on the delta must be equal to or greater than that of local sea level rise. Paradoxically, widespread armoring of the delta by coastal embankments meant to protect crops from tidal flooding has limited fluvial floodplain deposition, leading to rapid compaction and lowered land surface levels. This renders the floodplains of the delta susceptible to devastating flooding by sea level rise and storm surges capable of breaching the poorly maintained embankments. The government of Bangladesh is currently considering a one-size-fits-all approach to renovating the embankments under the assumption that sediment dynamics in the delta are everywhere the same. However, natural physical processes are spatially variable across the delta front and therefore the impact of dikes on sediment dispersal and morphology should reflect these variations. Direct sedimentation measurements, short-lived radionuclides, and a simplified sediment routing model are used to show that transport processes and sedimentation rates are highly variable across the lower delta. Aggradation is more than double the rate of local sea level rise in some areas, and dominant modes of transport are reflected in the patterns of sediment routing and flux across the lower deltaplain, though embankments are major controls on sediment dynamics throughout the coastal delta. This challenges the assumption that the Bengal Delta is doomed to drown; rather it signifies that effective preparation for 21st century climate change requires consideration of spatially variable physical dynamics and local feedbacks with large-scale infrastructure.

  3. A New Formulation of Time Domain Boundary Integral Equation for Acoustic Wave Scattering in the Presence of a Uniform Mean Flow

    NASA Technical Reports Server (NTRS)

    Hu, Fang; Pizzo, Michelle E.; Nark, Douglas M.

    2017-01-01

    It is well known that under the assumption of a constant uniform mean flow, the acoustic wave propagation equation can be formulated as a boundary integral equation, in both the time domain and the frequency domain. Compared with solving partial differential equations, numerical methods based on the boundary integral equation have the advantage of a reduced spatial dimension and, hence, require only a surface mesh. However, the constant uniform mean flow assumption, while convenient for formulating the integral equation, does not satisfy the solid wall boundary condition wherever the body surface is not aligned with the uniform mean flow. In this paper, we argue that the proper boundary condition for the acoustic wave should not have its normal velocity be zero everywhere on the solid surfaces, as has been applied in the literature. A careful study of the acoustic energy conservation equation is presented that shows such a boundary condition in fact leads to erroneous source or sink points on solid surfaces not aligned with the mean flow. A new solid wall boundary condition is proposed that conserves the acoustic energy, and a new time domain boundary integral equation is derived. In addition to conserving the acoustic energy, another significant advantage of the new equation is that it is considerably simpler than previous formulations. In particular, tangential derivatives of the solution on the solid surfaces are no longer needed in the new formulation, which greatly simplifies numerical implementation. Furthermore, stabilization of the new integral equation by Burton-Miller-type reformulation is presented. The stability of the new formulation is studied theoretically as well as numerically by an eigenvalue analysis. Numerical solutions are also presented that demonstrate the stability of the new formulation.

  4. A new approach to cosmogenic corrections in 40Ar/39Ar chronometry: Implications for the ages of Martian meteorites

    DOE PAGES

    Cassata, W. S.; Borg, L. E.

    2016-05-04

    Anomalously old 40Ar/39Ar ages are commonly obtained from Shergottites and are generally attributed to uncertainties regarding the isotopic composition of the trapped component and/or the presence of excess 40Ar. Old ages can also be obtained if inaccurate corrections for cosmogenic 36Ar are applied. Current methods for making the cosmogenic correction require simplifying assumptions regarding the spatial homogeneity of target elements for cosmogenic production and the distribution of cosmogenic nuclides relative to trapped and reactor-derived Ar isotopes. To mitigate uncertainties arising from these assumptions, a new cosmogenic correction approach utilizing the exposure age determined on an un-irradiated aliquot and step-wise production rate estimates that account for spatial variations in Ca and K is described. Data obtained from NWA 4468 and an unofficial pairing of NWA 2975, which yield anomalously old ages when corrected for cosmogenic 36Ar using conventional techniques, are used to illustrate the efficacy of this new approach. For these samples, anomalous age determinations are rectified solely by the improved cosmogenic correction technique described herein. Ages of 188 ± 17 and 184 ± 17 Ma are obtained for NWA 4468 and NWA 2975, respectively, both of which are indistinguishable from ages obtained by other radioisotopic systems. For other Shergottites that have multiple trapped components, have experienced diffusive loss of Ar, or contain excess Ar, more accurate cosmogenic corrections may aid in the interpretation of anomalous ages. In conclusion, the trapped 40Ar/36Ar ratios inferred from inverse isochron diagrams obtained from NWA 4468 and NWA 2975 are significantly lower than the Martian atmospheric value, and may represent upper mantle or crustal components.

  5. Are We Ready for Real-world Neuroscience?

    PubMed

    Matusz, Pawel J; Dikker, Suzanne; Huth, Alexander G; Perrodin, Catherine

    2018-06-19

    Real-world environments are typically dynamic, complex, and multisensory in nature and require the support of top-down attention and memory mechanisms for us to be able to drive a car, make a shopping list, or pour a cup of coffee. Fundamental principles of perception and functional brain organization have been established by research utilizing well-controlled but simplified paradigms with basic stimuli. The last 30 years ushered in a revolution in computational power, brain mapping, and signal processing techniques. Drawing on those theoretical and methodological advances, over the years, research has departed more and more from traditional, rigorous, and well-understood paradigms to directly investigate cognitive functions and their underlying brain mechanisms in real-world environments. These investigations typically address the role of one or, more recently, multiple attributes of real-world environments. Fundamental assumptions about perception, attention, or brain functional organization have been challenged by studies adapting the traditional paradigms to emulate, for example, the multisensory nature or varying relevance of stimulation or dynamically changing task demands. Here, we present the state of the field within the emerging heterogeneous domain of real-world neuroscience. To be precise, the aim of this Special Focus is to bring together a variety of the emerging "real-world neuroscientific" approaches. These approaches differ in their principal aims, assumptions, or even definitions of "real-world neuroscience" research. Here, we showcase the commonalities and distinctive features of the different "real-world neuroscience" approaches. To do so, four early-career researchers and the speakers of the Cognitive Neuroscience Society 2017 Meeting symposium under the same title answer questions pertaining to the added value of such approaches in bringing us closer to accurate models of functional brain organization and cognitive functions.

  6. Dynamics and control of infections on social networks of population types.

    PubMed

    Williams, Brian G; Dye, Christopher

    2018-06-01

    Random mixing in host populations has been a convenient simplifying assumption in the study of epidemics, but neglects important differences in contact rates within and between population groups. For HIV/AIDS, the assumption of random mixing is inappropriate for epidemics that are concentrated in groups of people at high risk, including female sex workers (FSW) and their male clients (MCF), injecting drug users (IDU) and men who have sex with men (MSM). To find out who transmits infection to whom and how that affects the spread and containment of infection remains a major empirical challenge in the epidemiology of HIV/AIDS. Here we develop a technique, based on the routine sampling of infection in linked population groups (a social network of population types), which shows how an HIV/AIDS epidemic in Can Tho Province of Vietnam began in FSW, was propagated mainly by IDU, and ultimately generated most cases among the female partners of MCF (FPM). Calculation of the case reproduction numbers within and between groups, and for the whole network, provides insights into control that cannot be deduced simply from observations on the prevalence of infection. Specifically, the per capita rate of HIV transmission was highest from FSW to MCF, and most HIV infections occurred in FPM, but the number of infections in the whole network is best reduced by interrupting transmission to and from IDU. This analysis can be used to guide HIV/AIDS interventions using needle and syringe exchange, condom distribution and antiretroviral therapy. The method requires only routine data and could be applied to infections in other populations. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
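
    A minimal sketch of the group-structured calculation (the next-generation-matrix formalism; the group list follows the abstract, but every number below is hypothetical):

      import numpy as np

      # K[i, j]: mean secondary cases in group i caused by one case in group j.
      groups = ["FSW", "MCF", "IDU", "MSM", "FPM"]
      K = np.array([[0.0, 0.8, 0.3, 0.0, 0.0],
                    [1.6, 0.0, 0.0, 0.0, 0.0],
                    [0.2, 0.1, 1.1, 0.0, 0.0],
                    [0.0, 0.0, 0.1, 0.9, 0.0],
                    [0.0, 0.7, 0.0, 0.0, 0.0]])
      R_network = max(abs(np.linalg.eigvals(K)))   # whole-network reproduction number
      print("R for the network:", round(float(R_network), 2))
      # Interventions are compared by scaling the rows/columns of a group
      # (e.g., transmission to and from IDU) and recomputing the spectral radius.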

  7. D-OPTIMAL EXPERIMENTAL DESIGNS TO TEST FOR DEPARTURE FROM ADDITIVITY IN A FIXED-RATIO MIXTURE RAY.

    EPA Science Inventory

    Humans are exposed to mixtures of environmental compounds. A regulatory assumption is that the mixtures of chemicals act in an additive manner. However, this assumption requires experimental validation. Traditional experimental designs (full factorial) require a large number of e...

  8. Optical chirp z-transform processor with a simplified architecture.

    PubMed

    Ngo, Nam Quoc

    2014-12-29

    Using a simplified chirp z-transform (CZT) algorithm based on the discrete-time convolution method, this paper presents the synthesis of a simplified architecture of a reconfigurable optical chirp z-transform (OCZT) processor based on the silica-based planar lightwave circuit (PLC) technology. In the simplified architecture of the reconfigurable OCZT, the required number of optical components is small and there are no waveguide crossings, which makes fabrication easy. The design of a novel type of optical discrete Fourier transform (ODFT) processor as a special case of the synthesized OCZT is then presented to demonstrate its effectiveness. The designed ODFT can be potentially used as an optical demultiplexer at the receiver of an optical fiber orthogonal frequency division multiplexing (OFDM) transmission system.
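
    The discrete-time convolution formulation of the CZT is compact enough to sketch numerically (a sketch of the algorithmic idea via Bluestein's identity, not of the optical architecture):

      import numpy as np

      def czt(x, M, W, A=1.0):
          """Chirp z-transform at the points z_k = A * W**(-k), k = 0..M-1,
          via Bluestein's convolution identity nk = (n**2 + k**2 - (k-n)**2)/2."""
          x = np.asarray(x, dtype=complex)
          N = len(x)
          n, k = np.arange(N), np.arange(M)
          a = x * A ** (-n) * W ** (n**2 / 2.0)             # pre-chirp
          h = W ** (-(np.arange(-(N - 1), M) ** 2) / 2.0)   # chirp filter
          y = np.convolve(a, h)                             # the convolution core
          return W ** (k**2 / 2.0) * y[N - 1 + k]           # post-chirp

      # With A = 1, W = exp(-2j*pi/N), and M = N, the CZT reduces to the DFT:
      x = np.random.rand(8)
      print(np.allclose(czt(x, 8, np.exp(-2j * np.pi / 8)), np.fft.fft(x)))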

  9. Simplified filtered Smith predictor for MIMO processes with multiple time delays.

    PubMed

    Santos, Tito L M; Torrico, Bismark C; Normey-Rico, Julio E

    2016-11-01

    This paper proposes a simplified tuning strategy for the multivariable filtered Smith predictor. It is shown that offset-free control can be achieved with step references and disturbances regardless of the poles of the primary controller, i.e., integral action is not explicitly required. This strategy reduces the number of design parameters and simplifies the tuning procedure because the implicit integrative poles are not considered for design purposes. The simplified approach can be used to design continuous-time or discrete-time controllers. Three case studies are used to illustrate the advantages of the proposed strategy compared with the standard approach, which is based on the explicit integrative action. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Evolution of enzymes in a series is driven by dissimilar functional demands.

    PubMed

    Salvador, Armindo; Savageau, Michael A

    2006-02-14

    That distinct enzyme activities in an unbranched metabolic pathway are evolutionarily tuned to a single functional requirement is a pervasive assumption. Here we test this assumption by examining the activities of two consecutively acting enzymes in human erythrocytes with an approach to quantitative evolutionary design that avoids the above-mentioned assumption. We previously found that avoidance of NADPH depletion during the pulses of oxidative load to which erythrocytes are normally exposed is the main functional requirement mediating selection for high glucose-6-phosphate dehydrogenase activity. In the present study, we find that, in contrast, the maintenance of oxidized glutathione at low concentrations is the main functional requirement mediating selection for high glutathione reductase activity. The results in this case show that, contrary to the assumption of a single functional requirement, natural selection for the normal activities of the distinct enzymes in the pathway is mediated by different requirements. On the other hand, the results agree with the more general principles that underlie our approach. Namely, that (i) the values of biochemical parameters evolve so as to fulfill the various performance requirements that are relevant to achieve high fitness, and (ii) these performance requirements can be inferred from quantitative systems theory considerations, informed by knowledge of specific aspects of the biochemistry, physiology, genetics, and ecology of the organism.

  11. Ion beam probing of electrostatic fields

    NASA Technical Reports Server (NTRS)

    Persson, H.

    1979-01-01

    The determination of a cylindrically symmetric, time-independent electrostatic potential V in a magnetic field B with the same symmetry by measurements of the deflection of a primary beam of ions is analyzed and substantiated by examples. Special attention is given to the requirements on canonical angular momentum and total energy set by an arbitrary, nonmonotone V, to scaling laws obtained by normalization, and to the analogy with ionospheric sounding. The inversion procedure with the Abel analysis of an equivalent problem with a one-dimensional fictitious potential is used in a numerical experiment with application to the NASA Lewis Modified Penning Discharge. The determination of V from a study of secondary beams of ions with increased charge produced by hot plasma electrons is also analyzed, both from a general point of view and with application to the NASA Lewis SUMMA experiment. Simple formulas and geometrical constructions are given for the minimum energy necessary to reach the axis, the whole plasma, and any point in the magnetic field. The common, simplifying assumption that V is a small perturbation is critically and constructively analyzed; an iteration scheme for successively correcting the orbits and points of ionization for the electrostatic potential is suggested.
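
    The inversion step referred to here is the classical Abel transform pair; a minimal numerical sketch, assuming line-integrated data F(y) measured out to a radius beyond the region of interest:

      import numpy as np

      def abel_invert(F, y):
          """Inverse Abel transform,
          f(r) = -(1/pi) * int_r^R F'(y) / sqrt(y**2 - r**2) dy,
          recovering a radial profile from line-integrated data F(y).
          A crude trapezoid sketch that skips the singular endpoint."""
          dF = np.gradient(F, y)
          f = np.zeros_like(y)
          for i, r in enumerate(y[:-1]):
              yy, dd = y[i + 1:], dF[i + 1:]
              f[i] = -np.trapz(dd / np.sqrt(yy**2 - r**2), yy) / np.pi
          return f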

  12. Electronic excitations and their effect on the interionic forces in simulations of radiation damage in metals.

    PubMed

    Race, C P; Mason, D R; Sutton, A P

    2009-03-18

    Using time-dependent tight-binding simulations of radiation damage cascades in a model metal we directly investigate the nature of the excitations of a system of quantum mechanical electrons in response to the motion of a set of classical ions. We furthermore investigate the effect of these excitations on the attractive electronic forces between the ions. We find that the electronic excitations are well described by a Fermi-Dirac distribution at some elevated temperature, even in the absence of the direct electron-electron interactions that would be required in order to thermalize a non-equilibrium distribution. We explain this result in terms of the spectrum of characteristic frequencies of the ionic motion. Decomposing the electronic force into four well-defined components within the basis of instantaneous electronic eigenstates, we find that the effect of accumulated excitations in weakening the interionic bonds is mostly (95%) accounted for by a thermal model for the electronic excitations. This result justifies the use of the simplifying assumption of a thermalized electron system in simulations of radiation damage with an electronic temperature dependence and in the development of temperature-dependent classical potentials.

  13. Review of Thawing Time Prediction Models Depending on Process Conditions and Product Characteristics

    PubMed Central

    Kluza, Franciszek; Spiess, Walter E. L.; Kozłowicz, Katarzyna

    2016-01-01

    Determining thawing times of frozen foods is a challenging problem, as the thermophysical properties of the product change during thawing. A number of calculation models and solutions have been developed. The proposed solutions range from relatively simple analytical equations based on a number of assumptions to a group of empirical approaches that sometimes require complex calculations. In this paper analytical, empirical and graphical models are presented and critically reviewed. The conditions of solution, limitations and possible applications of the models are discussed. The graphical and semi-graphical models are derived from numerical methods. Using numerical methods is not always possible, as running the calculations takes time and the specialized software and equipment are not always cheap. For these reasons, the application of analytical-empirical models is more useful for engineering. It is demonstrated that there is no simple, accurate and feasible analytical method for thawing time prediction. Consequently, simplified methods are needed for thawing time estimation of agricultural and food products. The review reveals the need for further improvement of the existing solutions or development of new ones that will enable accurate determination of thawing time within a wide range of practical conditions of heat transfer during processing.
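
    None of the reviewed equations is reproduced in the abstract; as an illustration of the simple analytical end of the spectrum, a sketch of a Plank-type equation (assuming slab geometry, for which the shape factors are P = 1/2 and R = 1/8):

      def plank_thawing_time(rho, L, T_a, T_m, a, h, k, P=0.5, R=0.125):
          """Plank-type estimate for a slab of thickness a [m]:
          t = rho*L/(T_a - T_m) * (P*a/h + R*a**2/k), with rho the density
          [kg/m^3], L the latent heat [J/kg], T_a the medium temperature,
          T_m the initial freezing point [deg C], h the surface heat transfer
          coefficient [W/m^2 K], and k the thermal conductivity [W/m K]."""
          return rho * L / (T_a - T_m) * (P * a / h + R * a**2 / k)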

  14. Simulation of friction stir drilling process

    NASA Astrophysics Data System (ADS)

    Vijayabaskar, P.; Hynes, N. Rajesh Jesudoss

    2018-05-01

    This project studies the thermal (friction) drilling process, a hole-forming process for sheet metals that uses heat generated by friction. The main advantage of the process over conventional drilling is that holes formed by this process do not need any backing arrangements such as weld nuts or rivet nuts, because the extruded bush itself acts as a supporting structure for the fasteners. This eliminates the need for access to the backside of the work material for fastening operations. The major factors governing the thermal drilling operation are the spindle speed and the thrust force required for forming a hole. Finding a suitable thrust force and speed for drilling a particular material of a particular thickness is a tedious process. It can be simplified by forming a mathematical model that combines empirical formulae from the literature, which were derived from experimental trials under certain assumptions. In this paper a suitable mathematical model is formed by replicating the experiments, and its validation is attempted using results from numerical analysis. The numerical analysis of the model is done using the ANSYS software.

  15. Coronal Physics and the Chandra Emission Line Project

    NASA Technical Reports Server (NTRS)

    Brickhouse, N. S.; Drake, J. J.

    2000-01-01

    With the launch of the Chandra X-ray Observatory, high resolution X-ray spectroscopy of cosmic sources has begun. Early, deep observations of three stellar coronal sources, Capella, Procyon, and HR 1099, are providing not only invaluable calibration data, but also benchmarks for plasma spectral models. These models are needed to interpret data from stellar coronae, galaxies and clusters of galaxies, supernova remnants and other astrophysical sources. They have been called into question in recent years as problems with understanding low resolution ASCA and moderate resolution Extreme Ultraviolet Explorer Satellite (EUVE) data have arisen. The Emission Line Project is a collaborative effort to improve the models, with Phase I being the comparison of models with observed spectra of Capella, Procyon, and HR 1099. Goals of these comparisons are (1) to determine and verify accurate and robust diagnostics and (2) to identify and prioritize issues in fundamental spectroscopy which will require further theoretical and/or laboratory work. A critical issue in exploiting the coronal data for these purposes is to understand the extent to which common simplifying assumptions (coronal equilibrium, negligible optical depth) apply. We will discuss recent advances in our understanding of stellar coronae in this context.

  16. Coronal Physics and the Chandra Emission Line Project

    NASA Technical Reports Server (NTRS)

    Brickhouse, Nancy

    1999-01-01

    With the launch of the Chandra X-ray Observatory, high resolution X-ray spectroscopy of cosmic sources has begun. Early, deep observations of three stellar coronal sources will provide not only invaluable calibration data, but will also give us benchmarks for plasma spectral modeling codes. These codes are used to interpret data from stellar coronae, galaxies and clusters of galaxies, supernova remnants and other astrophysical sources, but they have been called into question in recent years as problems with understanding moderate resolution ASCA and EUVE data have arisen. The Emission Line Project is a collaborative effort to improve the models, with Phase 1 being the comparison of models with observed spectra of Capella, Procyon, and HR 1099. Goals of these comparisons are (1) to determine and verify accurate and robust diagnostics and (2) to identify and prioritize issues in fundamental spectroscopy which will require further theoretical and/or laboratory work. A critical issue in exploiting the coronal data for these purposes is to understand the extent to which common simplifying assumptions (coronal equilibrium, time-independence, negligible optical depth) apply. We will discuss recent advances in our understanding of stellar coronae in this context.

  18. Numerical model for the thermal behavior of thermocline storage tanks

    NASA Astrophysics Data System (ADS)

    Ehtiwesh, Ismael A. S.; Sousa, Antonio C. M.

    2018-03-01

    Energy storage is a critical factor in the advancement of solar thermal power systems for the sustained delivery of electricity. In addition, the incorporation of thermal energy storage into the operation of concentrated solar power systems (CSPs) offers the potential of delivering electricity without fossil-fuel backup even during peak demand, independent of weather conditions and daylight. Despite this potential, some areas of the design and performance of thermocline systems still require further attention for future incorporation in commercial CSPs, particularly their operation and control. Therefore, the present study aims to develop a simple but efficient numerical model to allow the comprehensive analysis of thermocline storage systems, aiming at a better understanding of their dynamic temperature response. The validation results, despite the simplifying assumptions of the numerical model, agree well with the experiments for the time evolution of the thermocline region. Three different cases are considered to test the versatility of the numerical model; for the particular case of a storage tank with a top round impingement inlet, a simple analytical model was developed to take into consideration the increased turbulence level in the mixing region. The numerical predictions for the three cases are in generally good agreement with the experimental results.

  19. Neural coordination can be enhanced by occasional interruption of normal firing patterns: a self-optimizing spiking neural network model.

    PubMed

    Woodward, Alexander; Froese, Tom; Ikegami, Takashi

    2015-02-01

    The state space of a conventional Hopfield network typically exhibits many different attractors of which only a small subset satisfies constraints between neurons in a globally optimal fashion. It has recently been demonstrated that combining Hebbian learning with occasional alterations of normal neural states avoids this problem by means of self-organized enlargement of the best basins of attraction. However, so far it is not clear to what extent this process of self-optimization is also operative in real brains. Here we demonstrate that it can be transferred to more biologically plausible neural networks by implementing a self-optimizing spiking neural network model. In addition, by using this spiking neural network to emulate a Hopfield network with Hebbian learning, we attempt to make a connection between rate-based and temporal coding based neural systems. Although further work is required to make this model more realistic, it already suggests that the efficacy of the self-optimizing process is independent from the simplifying assumptions of a conventional Hopfield network. We also discuss natural and cultural processes that could be responsible for occasional alteration of neural firing patterns in actual brains. Copyright © 2014 Elsevier Ltd. All rights reserved.
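
    The underlying self-optimization scheme is easy to state in the conventional rate-based setting the paper generalizes from; a minimal sketch (random restarts play the role of the occasional alterations, Hebbian learning is applied to each converged state, and all parameters are illustrative):

      import numpy as np

      rng = np.random.default_rng(0)
      N, alpha = 64, 5e-4
      W0 = rng.standard_normal((N, N)); W0 = (W0 + W0.T) / 2   # random constraints
      np.fill_diagonal(W0, 0)
      W = W0.copy()

      def converge(s, W, sweeps=50):
          for _ in range(sweeps):              # asynchronous updates to a fixed point
              for i in rng.permutation(len(s)):
                  s[i] = 1 if W[i] @ s >= 0 else -1
          return s

      for _ in range(300):
          s = rng.choice([-1, 1], N)           # occasional alteration: random restart
          s = converge(s, W)
          W = W + alpha * np.outer(s, s)       # Hebbian learning on the attractor
          np.fill_diagonal(W, 0)

      print("energy under the original constraints:", -0.5 * s @ W0 @ s)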

  20. Nonparametric Online Learning Control for Soft Continuum Robot: An Enabling Technique for Effective Endoscopic Navigation.

    PubMed

    Lee, Kit-Hang; Fu, Denny K C; Leong, Martin C W; Chow, Marco; Fu, Hing-Choi; Althoefer, Kaspar; Sze, Kam Yim; Yeung, Chung-Kwong; Kwok, Ka-Wai

    2017-12-01

    Bioinspired robotic structures comprising soft actuation units have attracted increasing research interest. Taking advantage of their inherent compliance, soft robots can assure safe interaction with external environments, provided that precise and effective manipulation could be achieved. Endoscopy is a typical application. However, previous model-based control approaches often require simplified geometric assumptions on the soft manipulator, which could be very inaccurate in the presence of unmodeled external interaction forces. In this study, we propose a generic control framework based on nonparametric and online, as well as local, training to learn the inverse model directly, without prior knowledge of the robot's structural parameters. Detailed experimental evaluation was conducted on a soft robot prototype with control redundancy, performing trajectory tracking in dynamically constrained environments. Advanced element formulation of finite element analysis is employed to initialize the control policy, hence eliminating the need for random exploration in the robot's workspace. The proposed control framework enabled a soft fluid-driven continuum robot to follow a 3D trajectory precisely, even under dynamic external disturbance. Such enhanced control accuracy and adaptability would facilitate effective endoscopic navigation in complex and changing environments.

  1. An improved approach of register allocation via graph coloring

    NASA Astrophysics Data System (ADS)

    Gao, Lei; Shi, Ce

    2005-03-01

    Register allocation is an important part of an optimizing compiler. The algorithm of register allocation via graph coloring was first implemented by Chaitin and his colleagues and improved by Briggs and others. By abstracting register allocation to graph coloring, the allocation process is simplified. As the number of physical registers is limited, coloring of the interference graph cannot succeed for every node. The uncolored nodes must be spilled. There is an assumption that almost all allocation methods obey: when a register is allocated to a variable v, it cannot be used by others before v quits, even if v is not used for a long time. This may cause a waste of register resources. The authors relax this restriction under certain conditions and make some improvements. In this method, one register can be mapped to two or more interfering "living" live ranges at the same time if they satisfy some requirements. An operation named merge is defined which can arrange for two interfering nodes to occupy the same register with some cost. Thus, the register resource can be used more effectively and the cost of memory access can be reduced greatly.
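
    For context, a compact sketch of the baseline Chaitin/Briggs simplify-select scheme that such work builds on (the paper's merge operation is not shown; all names are illustrative):

      def color_interference_graph(adj, k):
          """Chaitin/Briggs-style coloring.  adj maps each live range to the
          set of live ranges it interferes with; k is the number of physical
          registers.  Returns (coloring, spill candidates)."""
          work = {n: set(ns) for n, ns in adj.items()}
          stack, spilled = [], []
          while work:
              # Simplify: remove a node of degree < k; otherwise push a spill
              # candidate optimistically (Briggs) and hope it still gets a color.
              n = next((m for m in work if len(work[m]) < k),
                       max(work, key=lambda m: len(work[m])))
              stack.append((n, work.pop(n)))
              for ns in work.values():
                  ns.discard(n)
          coloring = {}
          for n, neighbors in reversed(stack):   # Select: color in reverse order
              used = {coloring[m] for m in neighbors if m in coloring}
              free = [c for c in range(k) if c not in used]
              if free:
                  coloring[n] = free[0]
              else:
                  spilled.append(n)
          return coloring, spilled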

  3. Modeling Electronic Skin Response to Normal Distributed Force.

    PubMed

    Seminara, Lucia

    2018-02-03

    The reference electronic skin is a sensor array based on PVDF (Polyvinylidene fluoride) piezoelectric polymers, coupled to a rigid substrate and covered by an elastomer layer. It is first evaluated how a distributed normal force (Hertzian distribution) is transmitted to an extended PVDF sensor through the elastomer layer. A simplified approach based on Boussinesq's half-space assumption is used to get a qualitative picture and extensive FEM simulations allow determination of the quantitative response for the actual finite elastomer layer. The ultimate use of the present model is to estimate the electrical sensor output from a measure of a basic mechanical action at the skin surface. However this requires that the PVDF piezoelectric coefficient be known a-priori. This was not the case in the present investigation. However, the numerical model has been used to fit experimental data from a real skin prototype and to estimate the sensor piezoelectric coefficient. It turned out that this value depends on the preload and decreases as a result of PVDF aging and fatigue. This framework contains all the fundamental ingredients of a fully predictive model, suggesting a number of future developments potentially useful for skin design and validation of the fabrication technology.
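
    The half-space building block is simple to state; a minimal sketch of the Boussinesq point-load solution (a Hertzian pressure patch is then handled by superposing point loads sampled over the contact area):

      import numpy as np

      def boussinesq_sigma_z(F, x, y, z):
          """Vertical stress in an elastic half-space under a normal point
          load F at the origin: sigma_z = 3*F*z**3 / (2*pi*R**5)."""
          R = np.sqrt(x**2 + y**2 + z**2)
          return 3.0 * F * z**3 / (2.0 * np.pi * R**5)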

  4. Benchmarked analyses of gamma skyshine using MORSE-CGA-PC and the DABL69 cross-section set

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reichert, P.T.; Golshani, M.

    1991-01-01

    Design for gamma-ray skyshine is a common consideration for a variety of nuclear and accelerator facilities. Many of these designs can benefit from a more accurate and complete treatment than can be provided by simple skyshine analysis tools. Those methods typically require a number of conservative, simplifying assumptions in modeling the radiation source and shielding geometry. This paper considers the benchmarking of one analytical option. The MORSE-CGA Monte Carlo radiation transport code system provides the capability for detailed treatment of virtually any source and shielding geometry. Unfortunately, the mainframe computer costs of MORSE-CGA analyses can prevent cost-effective application to small projects. For this reason, the MORSE-CGA system was converted to run on IBM personal computer (PC)-compatible computers using the Intel 80386 or 80486 microprocessors. The DLC-130/DABL69 cross-section set (46n,23g) was chosen as the most suitable, readily available, broad-group library. The most important reason is the relatively high (P5) Legendre order of expansion for angular distribution. This is likely to be beneficial in the deep-penetration conditions modeled in some skyshine problems.

  5. Second stop and sbottom searches with a stealth stop

    NASA Astrophysics Data System (ADS)

    Cheng, Hsin-Chia; Li, Lingfeng; Qin, Qin

    2016-11-01

    The top squarks (stops) may be the most wanted particles after the Higgs boson discovery. The searches for the lightest stop have put strong constraints on its mass. However, there is still a search gap in the low mass region if the spectrum of the stop and the lightest neutralino is compressed. In that case, it may be easier to look for the second stop since naturalness requires both stops to be close to the weak scale. The current experimental searches for the second stop are based on the simplified model approach with the decay modes t̃2 → t̃1 Z and t̃2 → t̃1 h. However, in a realistic supersymmetric spectrum there is always a sbottom lighter than the second stop, hence the decay patterns are usually more complicated than the simplified model assumptions. In particular, there are often large branching ratios of the decays t̃2 → b̃1 W and b̃1 → t̃1 W as long as they are open. The decay chains can be even more complex if there are intermediate states of additional charginos and neutralinos in the decays. By studying several MSSM benchmark models at the 14 TeV LHC, we point out the importance of the multi-W final states in the second stop and the sbottom searches, such as the same-sign dilepton and multilepton signals, aside from the traditional search modes. The observed same-sign dilepton excesses at LHC Run 1 and Run 2 may be explained by some of our benchmark models. We also suggest that vector boson tagging and a new kinematic variable may help to suppress the backgrounds and increase the signal significance for some search channels. Due to the complex decay patterns and the lack of a dominant decay channel, the best reach likely requires a combination of various search channels at the LHC for the second stop and the lightest sbottom.

  6. 48 CFR 619.803-71 - Simplified procedures for 8(a) acquisitions under MOUs.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... activities may use the simplified acquisition procedures of FAR part 13 and DOSAR part 613 to issue purchase orders or contracts, not exceeding $100,000, to 8(a) participants. The $100,000 limitation for use of FAR... letters from, the SBA are required. (b) The contracting activity shall use the Central Contractor...

  7. Simplified thermodynamic functions for vapor-liquid phase separation and fountain effect pumps

    NASA Technical Reports Server (NTRS)

    Yuan, S. W. K.; Hepler, W. A.; Frederking, T. H. K.

    1984-01-01

    He-4 fluid handling devices near 2 K require novel components for non-Newtonian fluid transport in He II. Related sizing of devices has to be based on appropriate thermophysical property functions. The present paper presents simplified equilibrium state functions for porous media components which serve as vapor-liquid phase separators and fountain effect pumps.

  8. A Simplified Method for Tissue Engineering Skeletal Muscle Organoids in Vitro

    NASA Technical Reports Server (NTRS)

    Shansky, Janet; DelTatto, Michael; Chromiak, Joseph; Vandenburgh, Herman

    1996-01-01

    Tissue-engineered three dimensional skeletal muscle organ-like structures have been formed in vitro from primary myoblasts by several different techniques. This report describes a simplified method for generating large numbers of muscle organoids from either primary embryonic avian or neonatal rodent myoblasts, which avoids the requirements for stretching and other mechanical stimulation.

  9. Delayed ripple counter simplifies square-root computation

    NASA Technical Reports Server (NTRS)

    Cliff, R.

    1965-01-01

    Ripple subtract technique simplifies the logic circuitry required in a binary computing device to derive the square root of a number. Successively higher numbers are subtracted from a register containing the number out of which the square root is to be extracted. The last number subtracted will be the closest integer to the square root of the number.
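
    The brief gives only a prose description of the subtraction scheme; as a minimal sketch (assuming the classic odd-number variant, in which the count of subtractions, rather than the last subtrahend, yields the root), the register-and-counter logic looks like this in Python:

    ```python
    def isqrt_by_subtraction(n: int) -> int:
        """Integer square root via successive subtraction of odd numbers.

        Uses the identity 1 + 3 + 5 + ... + (2k - 1) = k**2: the number of
        odd integers that can be subtracted before the register would go
        negative is floor(sqrt(n)).
        """
        remainder, odd, count = n, 1, 0
        while remainder >= odd:
            remainder -= odd
            odd += 2       # next odd number
            count += 1     # one subtraction per unit of the root
        return count

    assert [isqrt_by_subtraction(n) for n in (0, 1, 2, 4, 15, 16, 17)] == [0, 1, 1, 2, 3, 4, 4]
    ```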

  10. A Simplified Diagnostic Method for Elastomer Bond Durability

    NASA Technical Reports Server (NTRS)

    White, Paul

    2009-01-01

    A simplified method has been developed for determining bond durability under exposure to water or high humidity conditions. It uses a small number of test specimens with relatively short times of water exposure at elevated temperature. The method is also gravimetric; the only equipment being required is an oven, specimen jars, and a conventional laboratory balance.

  11. 38 CFR 36.4209 - Reporting requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... and Department of Veterans Affairs regulations. (A) If the assumption is approved and the transfer of... of the executed deed, bill of sale, transfer of equity agreement, and/or assumption agreement as... disapproval decision. If the application for assumption is approved and the transfer of the security is...

  12. Influence of thermal and velocity slip on the peristaltic flow of Cu-water nanofluid with magnetic field

    NASA Astrophysics Data System (ADS)

    Akbar, Noreen Sher

    2016-03-01

    The peristaltic flow of an incompressible viscous fluid containing copper nanoparticles in an asymmetric channel is discussed with thermal and velocity slip effects. Peristaltic flow of copper nanoparticles with water as the base fluid has not been explored so far. The equations for the proposed fluid model are developed for the first time in the literature and simplified using long wavelength and low Reynolds number assumptions. Exact solutions have been calculated for the velocity, pressure gradient, solid volume fraction of the nanoparticles and temperature profile. The influence of various flow parameters on the flow and heat transfer characteristics is examined.
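
    For readers unfamiliar with the long wavelength, low Reynolds number reduction, the streamwise momentum balance collapses to a lubrication-type equation, mu * u''(y) = dp/dx. A small sympy sketch (the symmetric half-channel and Navier slip condition here are illustrative assumptions, not the paper's exact asymmetric-channel boundary conditions) recovers the parabolic-plus-slip profile:

    ```python
    import sympy as sp

    y, h, beta, mu, P = sp.symbols('y h beta mu P', positive=True)
    C1, C2 = sp.symbols('C1 C2')

    # general solution of the lubrication balance mu * u''(y) = P, with P = dp/dx
    u = P * y**2 / (2 * mu) + C1 * y + C2
    conds = [u.diff(y).subs(y, 0),                        # symmetry: u'(0) = 0
             u.subs(y, h) + beta * u.diff(y).subs(y, h)]  # Navier slip at the wall y = h
    sol = sp.solve(conds, [C1, C2])
    # parabolic Poiseuille profile shifted by a slip term proportional to beta
    print(sp.simplify(u.subs(sol)))
    ```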

  13. Metachronal wave analysis for non-Newtonian fluid under thermophoresis and Brownian motion effects

    NASA Astrophysics Data System (ADS)

    Shaheen, A.; Nadeem, S.

    This paper analyses the mathematical model of ciliary motion in an annulus. The effects of convective heat transfer and nanoparticles are taken into account. The governing equations of the Jeffrey six-constant fluid, along with heat and nanoparticle transport, are modelled and then simplified by using long wavelength and low Reynolds number assumptions. The reduced equations are solved with the help of the homotopy perturbation method. The obtained expressions for the velocity, temperature and nanoparticle concentration profiles are plotted, and the impact of various physical parameters is investigated for different peristaltic waves. Streamlines are also plotted in the final part of the paper.

  14. Magnetic field effects for copper suspended nanofluid venture through a composite stenosed arteries with permeable wall

    NASA Astrophysics Data System (ADS)

    Akbar, Noreen Sher; Butt, Adil Wahid

    2015-05-01

    In the present paper, magnetic field effects on copper nanoparticles in blood flow through a composite stenosis in arteries with permeable walls are discussed. Blood flow carrying copper nanoparticles with water as the base fluid has not been explored yet. The equations for the Cu-water nanofluid are developed for the first time in the literature and simplified using long wavelength and low Reynolds number assumptions. Exact solutions have been evaluated for the velocity, pressure gradient, solid volume fraction of the nanoparticles and temperature profile. The effect of various flow parameters on the flow and heat transfer characteristics is illustrated.

  15. The span as a fundamental factor in airplane design

    NASA Technical Reports Server (NTRS)

    Lachmann, G

    1928-01-01

    Previous theoretical investigations of steady curvilinear flight did not afford a suitable criterion of "maneuverability," which is very important for judging combat, sport and stunt-flying airplanes. The idea of rolling ability, i.e., of the speed of rotation of the airplane about its X axis in rectilinear flight at constant speed and for a constant, suddenly produced deflection of the ailerons, is introduced and tested under simplified assumptions for the air-force distribution over the span. This leads to the following conclusions: the effect of the moment of inertia about the X axis is negligibly small, since the speed of rotation very quickly reaches a uniform value.

  16. Multimodal far-field acoustic radiation pattern: An approximate equation

    NASA Technical Reports Server (NTRS)

    Rice, E. J.

    1977-01-01

    The far-field sound radiation theory for a circular duct was studied for both single mode and multimodal inputs. The investigation was intended to develop a method to determine the acoustic power produced by turbofans as a function of mode cut-off ratio. With reasonable simplifying assumptions the single mode radiation pattern was shown to be reducible to a function of mode cut-off ratio only. With modal cut-off ratio as the dominant variable, multimodal radiation patterns can be reduced to a simple explicit expression. This approximate expression provides excellent agreement with an exact calculation of the sound radiation pattern using equal acoustic power per mode.

  17. Actin-based propulsion of a microswimmer.

    PubMed

    Leshansky, A M

    2006-07-01

    A simple hydrodynamic model of actin-based propulsion of microparticles in dilute cell-free cytoplasmic extracts is presented. Under the basic assumption that actin polymerization at the particle surface acts as a force dipole, pushing apart the load and the free (nonanchored) actin tail, the propulsive velocity of the microparticle is determined as a function of the tail length, porosity, and particle shape. The anticipated velocities of the cargo displacement and the rearward motion of the tail are in good agreement with recently reported results of biomimetic experiments. A more detailed analysis of the particle-tail hydrodynamic interaction is presented and compared to the prediction of the simplified model.

  18. Theoretical analysis of oxygen diffusion at startup in an alkali metal heat pipe with gettered alloy walls

    NASA Technical Reports Server (NTRS)

    Tower, L. K.

    1973-01-01

    The diffusion of oxygen into, or out of, a gettered alloy exposed to oxygenated alkali liquid metal coolant, a situation arising in some high temperature heat transfer systems, was analyzed. The relation between the diffusion process and the thermochemistry of oxygen in the alloy and in the alkali metal was developed by making several simplifying assumptions. The treatment is therefore theoretical in nature. However, a practical example pertaining to the startup of a heat pipe with walls of T-111, a tantalum alloy, and lithium working fluid illustrates the use of the figures contained in the analysis.

  19. Combined effects of heat and mass transfer to magneto hydrodynamics oscillatory dusty fluid flow in a porous channel

    NASA Astrophysics Data System (ADS)

    Govindarajan, A.; Vijayalakshmi, R.; Ramamurthy, V.

    2018-04-01

    The main aim of this article is to study the combined effects of heat and mass transfer on radiative Magneto Hydro Dynamics (MHD) oscillatory optically thin dusty fluid flow in a saturated porous medium channel. Based on certain assumptions, the momentum, energy, and concentration equations are obtained. The governing equations are non-dimensionalised, simplified and solved analytically. The closed analytical form solutions for the velocity, temperature, and concentration profiles are obtained. Numerical computations are presented graphically to show the salient features of various physical parameters. The shear stress, the rate of heat transfer and the rate of mass transfer are also presented graphically.

  20. Efficiency gain from elastic optical networks

    NASA Astrophysics Data System (ADS)

    Morea, Annalisa; Rival, Olivier

    2011-12-01

    We compare the cost-efficiency of optical networks based on mixed datarates (10, 40, 100 Gb/s) and datarate-elastic technologies. A European backbone network is examined under various traffic assumptions (volume of transported data per demand and total number of demands) to better understand the impact of traffic characteristics on cost-efficiency. Network dimensioning is performed for static and restorable networks (resilient to one-link failure). In this paper we will investigate the trade-offs between price of interfaces, reach and reconfigurability, showing that elastic solutions can be more cost-efficient than mixed-rate solutions because of the better compatibility between different datarates, increased reach of channels and simplified wavelength allocation.

  1. A Module Language for Typing by Contracts

    NASA Technical Reports Server (NTRS)

    Glouche, Yann; Talpin, Jean-Pierre; LeGuernic, Paul; Gautier, Thierry

    2009-01-01

    Assume-guarantee reasoning is a popular and expressive paradigm for modular and compositional specification of programs. It is becoming a fundamental concept in some computer-aided design tools for embedded system design. In this paper, we elaborate foundations for contract-based embedded system design by proposing a general-purpose module language based on a Boolean algebra allowing to define contracts. In this framework, contracts are used to negotiate the correctness of assumptions made on the definition of a component at the point where it is used and provides guarantees to its environment. We illustrate this presentation with the specification of a simplified 4-stroke engine model.
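
    As a rough illustration of the refinement order such a contract algebra induces, here is a set-based toy (the paper's Boolean-algebra formulation is richer; every name below is invented): a contract pairs assumptions with guarantees, and an implementation refines a specification when it tolerates more environments and promises no less.

    ```python
    from dataclasses import dataclass
    from typing import FrozenSet

    Behavior = str  # a behavior is just an opaque label in this toy model

    @dataclass(frozen=True)
    class Contract:
        assumptions: FrozenSet[Behavior]  # environments the component relies on
        guarantees: FrozenSet[Behavior]   # behaviors promised under those assumptions

    def refines(c1: Contract, c2: Contract) -> bool:
        """c1 refines c2: weaker (larger) assumptions, stronger (smaller) guarantees."""
        return c1.assumptions >= c2.assumptions and c1.guarantees <= c2.guarantees

    # hypothetical labels loosely echoing the paper's simplified engine example
    engine_spec = Contract(assumptions=frozenset({"rpm_low"}),
                           guarantees=frozenset({"ignition_ok", "ignition_late"}))
    engine_impl = Contract(assumptions=frozenset({"rpm_low", "rpm_high"}),
                           guarantees=frozenset({"ignition_ok"}))
    assert refines(engine_impl, engine_spec)
    ```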

  2. Centrifugal inertia effects in two-phase face seal films

    NASA Technical Reports Server (NTRS)

    Basu, P.; Hughes, W. F.; Beeler, R. M.

    1987-01-01

    A simplified, semianalytical model has been developed to analyze the effect of centrifugal inertia in two-phase face seals. The model is based on the assumption of isothermal flow through the seal, but at an elevated temperature, and takes into account heat transfer and boiling. Using this model, seal performance curves are obtained with water as the working fluid. It is shown that the centrifugal inertia of the fluid reduces the load-carrying capacity dramatically at high speeds and that operational instability exists under certain conditions. While an all-liquid seal may be starved at speeds higher than a 'critical' value, leakage always occurs under boiling conditions.

  3. 42 CFR 417.120 - Fiscally sound operation and assumption of financial risk.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 3 2013-10-01 2013-10-01 false Fiscally sound operation and assumption of... Organizations: Organization and Operation § 417.120 Fiscally sound operation and assumption of financial risk. (a) Fiscally sound operation—(1) General requirements. Each HMO must have a fiscally sound operation...

  4. 42 CFR 417.120 - Fiscally sound operation and assumption of financial risk.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 3 2012-10-01 2012-10-01 false Fiscally sound operation and assumption of... Organizations: Organization and Operation § 417.120 Fiscally sound operation and assumption of financial risk. (a) Fiscally sound operation—(1) General requirements. Each HMO must have a fiscally sound operation...

  5. 42 CFR 417.120 - Fiscally sound operation and assumption of financial risk.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 3 2014-10-01 2014-10-01 false Fiscally sound operation and assumption of... Organizations: Organization and Operation § 417.120 Fiscally sound operation and assumption of financial risk. (a) Fiscally sound operation—(1) General requirements. Each HMO must have a fiscally sound operation...

  6. 7 CFR 1980.366 - Transfer and assumption.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 14 2012-01-01 2012-01-01 false Transfer and assumption. 1980.366 Section 1980.366...) PROGRAM REGULATIONS (CONTINUED) GENERAL Rural Housing Loans § 1980.366 Transfer and assumption. (a) General. Lenders may, but are not required to, permit a transfer to an eligible applicant. A transfer and...

  7. Optimization and experimental realization of the quantum permutation algorithm

    NASA Astrophysics Data System (ADS)

    Yalçınkaya, I.; Gedik, Z.

    2017-12-01

    The quantum permutation algorithm provides computational speed-up over classical algorithms for determining the parity of a given cyclic permutation. For its n-qubit implementations, the number of required quantum gates scales quadratically with n due to the quantum Fourier transforms included. We show here for the n-qubit case that the algorithm can be simplified so that it requires only O(n) quantum gates, which theoretically reduces the complexity of the implementation. To test our results experimentally, we utilize IBM's 5-qubit quantum processor to realize the algorithm by using the original and simplified recipes for the 2-qubit case. It turns out that the latter results in a significantly higher success probability, which allows us to verify the algorithm more precisely than the previous experimental realizations. We also verify the algorithm for the first time for the 3-qubit case with a considerable success probability by taking advantage of our simplified scheme.
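
    For intuition, the qutrit (d = 3) version of the algorithm can be simulated with plain linear algebra (a sketch, not the IBM processor experiments): a single application of the permutation sends the Fourier probe state to itself, up to a phase, for even (cyclic-shift) permutations, and to the orthogonal conjugate Fourier state for odd ones, so one query reveals the parity.

    ```python
    import numpy as np
    from itertools import permutations

    d = 3
    omega = np.exp(2j * np.pi / d)
    fourier = lambda m: np.array([omega ** (m * j) for j in range(d)]) / np.sqrt(d)
    psi1, psi2 = fourier(1), fourier(2)   # probe state and its odd-parity image

    for sigma in permutations(range(d)):
        P = np.zeros((d, d))
        for j, k in enumerate(sigma):
            P[k, j] = 1.0                 # P maps |j> to |sigma(j)>
        out = P @ psi1                    # a single "query" of the permutation
        parity = 'even' if abs(np.vdot(psi1, out)) > 0.99 else 'odd'
        print(sigma, parity)              # shifts come out even, reflections odd
    ```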

  8. A Simplified Technique for Implant-Abutment Level Impression after Soft Tissue Adaptation around Provisional Restoration

    PubMed Central

    Kutkut, Ahmad; Abu-Hammad, Osama; Frazer, Robert

    2016-01-01

    Impression techniques for implant restorations can be implant level or abutment level impressions with open tray or closed tray techniques. Conventional implant-abutment level impression techniques are predictable for maximizing esthetic outcomes. Restoration of the implant traditionally requires the use of the metal or plastic impression copings, analogs, and laboratory components. Simplifying the dental implant restoration by reducing armamentarium through incorporating conventional techniques used daily for crowns and bridges will allow more general dentists to restore implants in their practices. The demonstrated technique is useful when modifications to implant abutments are required to correct the angulation of malpositioned implants. This technique utilizes conventional crown and bridge impression techniques. As an added benefit, it reduces costs by utilizing techniques used daily for crowns and bridges. The aim of this report is to describe a simplified conventional impression technique for custom abutments and modified prefabricated solid abutments for definitive restorations. PMID:29563457

  9. 14 CFR 171.109 - Performance requirements.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Performance requirements. 171.109 Section... Performance requirements. (a) The Simplified Directional Facility must perform in accordance with the... performance and compliance with applicable performance requirements must be conducted in accordance with the...

  10. 14 CFR 171.109 - Performance requirements.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 3 2012-01-01 2012-01-01 false Performance requirements. 171.109 Section... Performance requirements. (a) The Simplified Directional Facility must perform in accordance with the... performance and compliance with applicable performance requirements must be conducted in accordance with the...

  11. 14 CFR 171.109 - Performance requirements.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 3 2014-01-01 2014-01-01 false Performance requirements. 171.109 Section... Performance requirements. (a) The Simplified Directional Facility must perform in accordance with the... performance and compliance with applicable performance requirements must be conducted in accordance with the...

  12. 14 CFR 171.109 - Performance requirements.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 3 2013-01-01 2013-01-01 false Performance requirements. 171.109 Section... Performance requirements. (a) The Simplified Directional Facility must perform in accordance with the... performance and compliance with applicable performance requirements must be conducted in accordance with the...

  13. Analysis of temperature distribution in liquid-cooled turbine blades

    NASA Technical Reports Server (NTRS)

    Livingood, John N B; Brown, W Byron

    1952-01-01

    The temperature distribution in liquid-cooled turbine blades determines the amount of cooling required to reduce the blade temperature to permissible values at specified locations. This report presents analytical methods for computing temperature distributions in liquid-cooled turbine blades, or in simplified shapes used to approximate sections of the blade. The individual analyses are first presented in terms of their mathematical development. By means of numerical examples, comparisons are made between simplified and more complete solutions and the effects of several variables are examined. Nondimensional charts to simplify some temperature-distribution calculations are also given.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hobbs, Michael L.

    We previously developed a PETN thermal decomposition model that accurately predicts thermal ignition and detonator failure [1]. This model was originally developed for CALORE [2] and required several complex user subroutines. Recently, a simplified version of the PETN decomposition model was implemented into ARIA [3] using a general chemistry framework without need for user subroutines. Detonator failure was also predicted with this new model using ENCORE. The model was simplified by 1) basing the model on moles rather than mass, 2) simplifying the thermal conductivity model, and 3) implementing ARIA's new phase change model. This memo briefly describes the model, implementation, and validation.

  15. Exploring Unidimensional Proficiency Classification Accuracy from Multidimensional Data in a Vertical Scaling Context

    ERIC Educational Resources Information Center

    Kroopnick, Marc Howard

    2010-01-01

    When Item Response Theory (IRT) is operationally applied for large scale assessments, unidimensionality is typically assumed. This assumption requires that the test measures a single latent trait. Furthermore, when tests are vertically scaled using IRT, the assumption of unidimensionality would require that the battery of tests across grades…

  16. On the combinatorics of sparsification.

    PubMed

    Huang, Fenix Wd; Reidys, Christian M

    2012-10-22

    We study the sparsification of dynamic programming-based folding algorithms of RNA structures. Sparsification is a method that significantly improves the computation of minimum free energy (mfe) RNA structures. We provide a quantitative analysis of the sparsification of a particular decomposition rule, Λ∗. This rule splits an interval of RNA secondary and pseudoknot structures of fixed topological genus. Key for quantifying sparsifications is the size of the so-called candidate sets. Here we assume mfe-structures to be specifically distributed (see Assumption 1) within arbitrary and irreducible RNA secondary and pseudoknot structures of fixed topological genus. We then present a combinatorial framework which allows us, by means of probabilities of irreducible sub-structures, to obtain the expectation of the Λ∗-candidate set w.r.t. a uniformly random input sequence. We compute these expectations for arc-based energy models via energy-filtered generating functions (GF) in the case of RNA secondary structures as well as RNA pseudoknot structures. Furthermore, for RNA secondary structures we also analyze a simplified loop-based energy model. Our combinatorial analysis is then compared to the expected number of Λ∗-candidates obtained from folding mfe-structures. In the case of the mfe-folding of RNA secondary structures with a simplified loop-based energy model, our results imply that sparsification provides a significant, constant improvement of 91% (theory) compared to a 96% (experimental, simplified arc-based model) reduction. However, we do not observe a linear factor improvement. Finally, in the case of the "full" loop-energy model we can report a reduction of 98% (experiment). Sparsification was initially attributed a linear factor improvement. This conclusion was based on the so-called polymer-zeta property, which stems from interpreting polymer chains as self-avoiding walks. Subsequent findings, however, reveal that the O(n) improvement is not correct. The combinatorial analysis presented here shows that, assuming a specific distribution (see Assumption 1) of mfe-structures within irreducible and arbitrary structures, the expected number of Λ∗-candidates is Θ(n²). However, the constant reduction is quite significant, being in the range of 96%. We furthermore show an analogous result for the sparsification of the Λ∗-decomposition rule for RNA pseudoknotted structures of genus one. Finally, we observe that the effect of sparsification is sensitive to the employed energy model.
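
    To make the candidate-set idea concrete, the toy dynamic program below uses the Nussinov base-pair-maximization recursion (not the paper's Λ∗ rule or energy models) and counts how often a split point actually improves on leaving the last position unpaired, a rough empirical proxy for candidate-set size:

    ```python
    import random

    def nussinov_candidates(s, min_loop=3):
        """Plain Nussinov max-pairing DP; returns (max pairs, fraction of
        split points that improved on the 'j unpaired' case)."""
        pairs = {('A','U'),('U','A'),('G','C'),('C','G'),('G','U'),('U','G')}
        n = len(s)
        M = [[0] * n for _ in range(n)]
        improving = total = 0
        for span in range(min_loop + 1, n):
            for i in range(n - span):
                j = i + span
                best = M[i][j - 1]                  # j unpaired
                for k in range(i, j - min_loop):    # j paired with k
                    if (s[k], s[j]) in pairs:
                        val = (M[i][k - 1] if k > i else 0) + M[k + 1][j - 1] + 1
                        total += 1
                        if val > best:
                            improving += 1
                            best = val
                M[i][j] = best
        return M[0][n - 1], improving / max(total, 1)

    random.seed(1)
    seq = ''.join(random.choice('ACGU') for _ in range(120))
    print(nussinov_candidates(seq))
    ```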

  17. Kinetic multi-layer model of aerosol surface and bulk chemistry (KM-SUB): the influence of interfacial transport and bulk diffusion on the oxidation of oleic acid by ozone

    NASA Astrophysics Data System (ADS)

    Shiraiwa, Manabu; Pfrang, Christian; Pöschl, Ulrich

    2010-05-01

    Aerosols are ubiquitous in the atmosphere and have strong effects on climate and public health. Gas-particle interactions can significantly change the physical and chemical properties of aerosols such as toxicity, reactivity, hygroscopicity and radiative properties. Chemical reactions and mass transport lead to continuous transformation and changes in the composition of atmospheric aerosols ("chemical aging"). Resistor model formulations are widely used to describe and investigate heterogeneous reactions and multiphase processes in laboratory, field and model studies of atmospheric chemistry. The traditional resistor models, however, are usually based on simplifying assumptions such as steady state conditions, homogeneous mixing, and limited numbers of non-interacting species and processes. In order to overcome these limitations, Pöschl, Rudich and Ammann have developed a kinetic model framework (PRA framework) with a double-layer surface concept and universally applicable rate equations and parameters for mass transport and chemical reactions at the gas-particle interface of aerosols and clouds [1]. Based on the PRA framework, we present a novel kinetic multi-layer model that explicitly resolves mass transport and chemical reaction at the surface and in the bulk of aerosol particles (KM-SUB) [2]. The model includes reversible adsorption, surface reactions and surface-bulk exchange as well as bulk diffusion and reaction. Unlike earlier models, KM-SUB does not require simplifying assumptions about steady-state conditions and radial mixing. The temporal evolution and concentration profiles of volatile and non-volatile species at the gas-particle interface and in the particle bulk can be modeled along with surface concentrations and gas uptake coefficients. In this study we explore and exemplify the effects of bulk diffusion on the rate of reactive gas uptake for a simple reference system, the ozonolysis of oleic acid particles, in comparison to experimental data and earlier model studies. We demonstrate how KM-SUB can be used to interpret and analyze experimental data from laboratory studies, and how the results can be extrapolated to atmospheric conditions. In particular, we show how interfacial transport and bulk transport, i.e., surface accommodation, bulk accommodation and bulk diffusion, influence the kinetics of the chemical reaction. Sensitivity studies suggest that in fine air particulate matter oleic acid and compounds with similar reactivity against ozone (C=C double bonds) can reach chemical life-times of multiple hours only if they are embedded in a (semi-)solid matrix with very low diffusion coefficients (~10-10 cm2 s-1). Depending on the complexity of the investigated system, unlimited numbers of volatile and non-volatile species and chemical reactions can be flexibly added and treated with KM-SUB. We propose and intend to pursue the application of KM-SUB as a basis for the development of a detailed master mechanism of aerosol chemistry as well as for the derivation of simplified but realistic parameterizations for large-scale atmospheric and climate models. References [1] Pöschl et al., Atmos. Chem. and Phys., 7, 5989-6023 (2007). [2] Shiraiwa et al., Atmos. Chem. Phys. Discuss., 10, 281-326 (2010).
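
    A heavily stripped-down flavor of such a layer-resolved model can be written in a few lines (a toy sketch with assumed parameter values, not the published KM-SUB code): dissolved ozone enters the surface layer, moves between layers by Fickian exchange, and reacts bimolecularly with oleic acid, so slow bulk diffusion confines the reaction near the surface.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # dissolved ozone X and oleic acid Y react in N well-mixed layers; neighbours
    # exchange mass by Fickian transport (all parameter values are assumed)
    N, R = 20, 2e-5                       # number of layers, particle radius [m]
    dx = R / N
    D_x, D_y, k = 1e-9, 1e-12, 1.0        # diffusivities [m2/s], rate [m3/(mol s)]
    X_surf = 1e-2                         # ozone just below the surface [mol/m3]

    def rhs(t, c):
        X, Y = c[:N], c[N:]
        dX, dY = -k * X * Y, -k * X * Y   # bimolecular loss in every layer
        for D, u, du in ((D_x, X, dX), (D_y, Y, dY)):
            du[1:] += D * (u[:-1] - u[1:]) / dx**2
            du[:-1] += D * (u[1:] - u[:-1]) / dx**2
        dX[0] += D_x * (X_surf - X[0]) / dx**2   # surface layer fed from the gas side
        return np.concatenate([dX, dY])

    c0 = np.concatenate([np.zeros(N), np.full(N, 3.2e3)])   # fresh particle
    sol = solve_ivp(rhs, (0, 200.0), c0, method='LSODA')
    print(f"oleic acid remaining: {sol.y[N:, -1].mean() / 3.2e3:.0%}")
    ```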

  18. Multiscale Molecular Dynamics Model for Heterogeneous Charged Systems

    NASA Astrophysics Data System (ADS)

    Stanton, L. G.; Glosli, J. N.; Murillo, M. S.

    2018-04-01

    Modeling matter across large length scales and timescales using molecular dynamics simulations poses significant challenges. These challenges are typically addressed through the use of precomputed pair potentials that depend on thermodynamic properties like temperature and density; however, many scenarios of interest involve spatiotemporal variations in these properties, and such variations can violate assumptions made in constructing these potentials, thus precluding their use. In particular, when a system is strongly heterogeneous, most of the usual simplifying assumptions (e.g., spherical potentials) do not apply. Here, we present a multiscale approach to orbital-free density functional theory molecular dynamics (OFDFT-MD) simulations that bridges atomic, interionic, and continuum length scales to allow for variations in hydrodynamic quantities in a consistent way. Our multiscale approach enables simulations on the order of micron length scales and 10's of picosecond timescales, which exceeds current OFDFT-MD simulations by many orders of magnitude. This new capability is then used to study the heterogeneous, nonequilibrium dynamics of a heated interface characteristic of an inertial-confinement-fusion capsule containing a plastic ablator near a fuel layer composed of deuterium-tritium ice. At these scales, fundamental assumptions of continuum models are explored; features such as the separation of the momentum fields among the species and strong hydrogen jetting from the plastic into the fuel region are observed, which had previously not been seen in hydrodynamic simulations.

  19. The determination of some requirements for a helicopter flight research simulation facility

    NASA Technical Reports Server (NTRS)

    Sinacori, J. B.

    1977-01-01

    Important requirements were defined for a flight simulation facility to support Army helicopter development. In particular requirements associated with the visual and motion subsystems of the planned simulator were studied. The method used in the motion requirements study is presented together with the underlying assumptions and a description of the supporting data. Results are given in a form suitable for use in a preliminary design. Visual requirements associated with a television camera/model concept are related. The important parameters are described together with substantiating data and assumptions. Research recommendations are given.

  20. Simplified analytical model and balanced design approach for light-weight wood-based structural panel in bending

    Treesearch

    Jinghao Li; John F. Hunt; Shaoqin Gong; Zhiyong Cai

    2016-01-01

    This paper presents a simplified analytical model and balanced design approach for modeling lightweight wood-based structural panels in bending. Because many design parameters are required to input for the model of finite element analysis (FEA) during the preliminary design process and optimization, the equivalent method was developed to analyze the mechanical...

  1. Fuels for urban transit buses: a cost-effectiveness analysis.

    PubMed

    Cohen, Joshua T; Hammitt, James K; Levy, Jonathan I

    2003-04-15

    Public transit agencies have begun to adopt alternative propulsion technologies to reduce urban transit bus emissions associated with conventional diesel (CD) engines. Among the most popular alternatives are emission controlled diesel buses (ECD), defined here to be buses with continuously regenerating diesel particle filters burning low-sulfur diesel fuel, and buses burning compressed natural gas (CNG). This study uses a series of simplifying assumptions to arrive at first-order estimates for the incremental cost-effectiveness (CE) of ECD and CNG relative to CD. The CE ratio numerator reflects acquisition and operating costs. The denominator reflects health losses (mortality and morbidity) due to primary particulate matter (PM), secondary PM, and ozone exposure, measured as quality adjusted life years (QALYs). We find that CNG provides larger health benefits than does ECD (nine vs six QALYs annually per 1000 buses) but that ECD is more cost-effective than CNG ($270,000 per QALY for ECD vs $1.7 million to $2.4 million for CNG). These estimates are subject to much uncertainty. We identify assumptions that contribute most to this uncertainty and propose potential research directions to refine our estimates.
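
    The headline numbers follow from a simple incremental cost-effectiveness ratio; as a back-of-envelope check (the inputs below are illustrative round numbers, not the paper's full cost model):

    ```python
    def icer(delta_cost_usd_per_year, delta_qaly_per_year):
        """Incremental cost-effectiveness ratio relative to conventional diesel."""
        return delta_cost_usd_per_year / delta_qaly_per_year

    # e.g. if ECD adds ~$1.6M/yr per 1000 buses while averting ~6 QALYs/yr:
    print(f"ECD: ${icer(1.6e6, 6):,.0f} per QALY")   # on the order of $270,000/QALY
    ```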

  2. Measuring the diffusion of linguistic change

    PubMed Central

    Nerbonne, John

    2010-01-01

    We examine situations in which linguistic changes have probably been propagated via normal contact as opposed to via conquest, recent settlement and large-scale migration. We proceed then from two simplifying assumptions: first, that all linguistic variation is the result of either diffusion or independent innovation, and, second, that we may operationalize social contact as geographical distance. It is clear that both of these assumptions are imperfect, but they allow us to examine diffusion via the distribution of linguistic variation as a function of geographical distance. Several studies in quantitative linguistics have examined this relation, starting with Séguy (Séguy 1971 Rev. Linguist. Romane 35, 335–357), and virtually all report a sublinear growth in aggregate linguistic variation as a function of geographical distance. The literature from dialectology and historical linguistics has mostly traced the diffusion of individual features, however, so that it is sensible to ask what sort of dynamic in the diffusion of individual features is compatible with Séguy's curve. We examine some simulations of diffusion in an effort to shed light on this question. PMID:21041207
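
    The kind of simulation the abstract alludes to can be sketched in a few lines (a toy model with invented rates, not the paper's simulations): sites on a line innovate features independently or copy them from neighbours, and the aggregate difference between sites grows sublinearly with distance, flattening toward saturation.

    ```python
    import random

    random.seed(0)
    n_sites, n_feats, steps = 60, 200, 20000
    feats = [[0] * n_feats for _ in range(n_sites)]

    for _ in range(steps):
        i = random.randrange(n_sites)
        if random.random() < 0.3:                    # independent innovation
            feats[i][random.randrange(n_feats)] ^= 1
        else:                                        # diffusion from a neighbour
            j = min(max(i + random.choice((-1, 1)), 0), n_sites - 1)
            f = random.randrange(n_feats)
            feats[i][f] = feats[j][f]

    for d in (1, 2, 5, 10, 20, 40):
        diffs = [sum(a != b for a, b in zip(feats[i], feats[i + d]))
                 for i in range(n_sites - d)]
        print(d, sum(diffs) / len(diffs))   # sublinear growth with distance
    ```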

  3. NASA's Integrated Instrument Simulator Suite for Atmospheric Remote Sensing from Spaceborne Platforms (ISSARS) and Its Role for the ACE and GPM Missions

    NASA Technical Reports Server (NTRS)

    Tanelli, Simone; Tao, Wei-Kuo; Hostetler, Chris; Kuo, Kwo-Sen; Matsui, Toshihisa; Jacob, Joseph C.; Niamsuwam, Noppasin; Johnson, Michael P.; Hair, John; Butler, Carolyn

    2011-01-01

    Forward simulation is an indispensable tool for evaluation of precipitation retrieval algorithms as well as for studying snow/ice microphysics and their radiative properties. The main challenge of the implementation arises due to the size of the problem domain. To overcome this hurdle, assumptions need to be made to simplify complex cloud microphysics. It is important that these assumptions are applied consistently throughout the simulation process. ISSARS addresses this issue by providing a computationally efficient and modular framework that can integrate currently existing models and is also capable of expanding for future development. ISSARS is designed to accommodate the simulation needs of the Aerosol/Clouds/Ecosystems (ACE) mission and the Global Precipitation Measurement (GPM) mission: radars, microwave radiometers, and optical instruments such as lidars and polarimeters. ISSARS's computation is performed in three stages: input reconditioning (IRM), electromagnetic properties (scattering/emission/absorption) calculation (SEAM), and instrument simulation (ISM). The computation is implemented as a web service, while its configuration can be accessed through a web-based interface.

  4. Measuring the diffusion of linguistic change.

    PubMed

    Nerbonne, John

    2010-12-12

    We examine situations in which linguistic changes have probably been propagated via normal contact as opposed to via conquest, recent settlement and large-scale migration. We proceed then from two simplifying assumptions: first, that all linguistic variation is the result of either diffusion or independent innovation, and, second, that we may operationalize social contact as geographical distance. It is clear that both of these assumptions are imperfect, but they allow us to examine diffusion via the distribution of linguistic variation as a function of geographical distance. Several studies in quantitative linguistics have examined this relation, starting with Séguy (Séguy 1971 Rev. Linguist. Romane 35, 335-357), and virtually all report a sublinear growth in aggregate linguistic variation as a function of geographical distance. The literature from dialectology and historical linguistics has mostly traced the diffusion of individual features, however, so that it is sensible to ask what sort of dynamic in the diffusion of individual features is compatible with Séguy's curve. We examine some simulations of diffusion in an effort to shed light on this question.

  5. Gas Near a Wall: Shortened Mean Free Path, Reduced Viscosity, and the Manifestation of the Knudsen Layer in the Navier-Stokes Solution of a Shear Flow

    NASA Astrophysics Data System (ADS)

    Abramov, Rafail V.

    2018-06-01

    For the gas near a solid planar wall, we propose a scaling formula for the mean free path of a molecule as a function of the distance from the wall, under the assumption of a uniform distribution of the incident directions of the molecular free flight. We subsequently impose the same scaling onto the viscosity of the gas near the wall and compute the Navier-Stokes solution of the velocity of a shear flow parallel to the wall. Under the simplifying assumption of constant temperature of the gas, the velocity profile becomes an explicit nonlinear function of the distance from the wall and exhibits a Knudsen boundary layer near the wall. To verify the validity of the obtained formula, we perform the Direct Simulation Monte Carlo computations for the shear flow of argon and nitrogen at normal density and temperature. We find excellent agreement between our velocity approximation and the computed DSMC velocity profiles both within the Knudsen boundary layer and away from it.
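
    A numerical sketch of the construction (normalized units; the paper's closed-form profile is replaced here by quadrature) averages the wall-truncated free-flight length over uniformly distributed directions, scales the viscosity by the same factor, and integrates du/dy = tau/mu(y):

    ```python
    import numpy as np

    lam = 1.0                          # bulk mean free path (normalized)
    tau_over_mu0 = 1.0                 # imposed shear stress / bulk viscosity

    def lam_eff(y, n=20001):
        """Mean free path at height y: average of the wall-truncated flight
        length over uniformly distributed directions (cos(theta) uniform)."""
        mu = np.linspace(-1, 1, n)     # cosine of angle from the outward wall normal
        length = np.full_like(mu, lam)
        down = mu < 0
        length[down] = np.minimum(lam, y / -mu[down])   # flight cut short by the wall
        return length.mean()

    # viscosity scaled like the local mean free path -> integrate du/dy = tau/mu(y)
    ys = np.linspace(1e-3, 10, 400)
    dudy = tau_over_mu0 * lam / np.array([lam_eff(y) for y in ys])
    u = np.concatenate([[0], np.cumsum(0.5 * (dudy[1:] + dudy[:-1]) * np.diff(ys))])
    print(dudy[0], dudy[-1])   # enhanced gradient in the Knudsen layer, ~1 far away
    ```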

  6. Multi-Destination and Multi-Purpose Trip Effects in the Analysis of the Demand for Trips to a Remote Recreational Site

    NASA Astrophysics Data System (ADS)

    Martínez-Espiñeira, Roberto; Amoako-Tuffour, Joe

    2009-06-01

    One of the basic assumptions of the travel cost method for recreational demand analysis is that the travel cost is always incurred for a single-purpose recreational trip. Several studies have skirted the issue by making simplifying assumptions and dropping from the sample observations considered nonconventional holiday-makers or nontraditional visitors. The effect of such simplifications on the benefit estimates remains conjectural. Given the remoteness of notable recreational parks, multi-destination or multi-purpose trips are not uncommon. This article examines the consequences of allocating travel costs to a recreational site when some trips were taken for purposes other than recreation and/or included visits to other recreational sites. Using a multi-purpose weighting approach on data from Gros Morne National Park, Canada, we conclude that a proper correction for multi-destination or multi-purpose trips is needed to avoid potential biases in the estimated effects of the price (travel-cost) variable and of the income variable in the trip generation equation.
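
    One plausible proportional version of such a weighting rule (illustrative only; the paper's weights come from the survey design, which is not reproduced here):

    ```python
    def allocated_travel_cost(total_cost, purpose_weight, n_sites_visited):
        """Attribute a share of the round-trip cost to the study site:
        scale by the recreational-purpose weight, then split across sites."""
        return total_cost * purpose_weight / n_sites_visited

    # a $600 trip where recreation was half the motive and 3 parks were visited
    print(allocated_travel_cost(600.0, 0.5, 3))   # -> $100 attributed to the park
    ```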

  7. A Testbed for Model Development

    NASA Astrophysics Data System (ADS)

    Berry, J. A.; Van der Tol, C.; Kornfeld, A.

    2014-12-01

    Carbon cycle and land-surface models used in global simulations need to be computationally efficient and have a high standard of software engineering. These models also make a number of scaling assumptions to simplify the representation of complex biochemical and structural properties of ecosystems. This makes it difficult to use these models to test new ideas for parameterizations or to evaluate scaling assumptions. The stripped down nature of these models also makes it difficult to "connect" with current disciplinary research which tends to be focused on much more nuanced topics than can be included in the models. In our opinion/experience this indicates the need for another type of model that can more faithfully represent the complexity ecosystems and which has the flexibility to change or interchange parameterizations and to run optimization codes for calibration. We have used the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model in this way to develop, calibrate, and test parameterizations for solar induced chlorophyll fluorescence, OCS exchange and stomatal parameterizations at the canopy scale. Examples of the data sets and procedures used to develop and test new parameterizations are presented.

  8. Space-time codependence of retinal ganglion cells can be explained by novel and separable components of their receptive fields.

    PubMed

    Cowan, Cameron S; Sabharwal, Jasdeep; Wu, Samuel M

    2016-09-01

    Reverse correlation methods such as spike-triggered averaging consistently identify the spatial center in the linear receptive fields (RFs) of retinal ganglion cells (GCs). However, the spatial antagonistic surround observed in classical experiments has proven more elusive. Tests for the antagonistic surround have heretofore relied on models that make questionable simplifying assumptions such as space-time separability and radial homogeneity/symmetry. We circumvented these, along with other common assumptions, and observed a linear antagonistic surround in 754 of 805 mouse GCs. By characterizing the RF's space-time structure, we found the overall linear RF's inseparability could be accounted for both by tuning differences between the center and surround and differences within the surround. Finally, we applied this approach to characterize spatial asymmetry in the RF surround. These results shed new light on the spatiotemporal organization of GC linear RFs and highlight a major contributor to its inseparability. © 2016 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of the American Physiological Society and The Physiological Society.
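
    As background, the spike-triggered average the abstract refers to is simply the mean stimulus window preceding spikes; a self-contained simulation (synthetic kernel and rates, not the mouse data) shows it recovering a known linear filter:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T, L = 200_000, 25
    stim = rng.standard_normal(T)                       # white-noise stimulus
    kernel = np.exp(-np.arange(L) / 5.0) * np.sin(np.arange(L) / 2.0)

    drive = np.convolve(stim, kernel)[:T]               # linear stage
    rate = np.maximum(drive, 0)                         # rectifying nonlinearity
    spikes = rng.poisson(0.1 * rate)                    # Poisson spiking

    # spike-triggered average: mean stimulus window preceding each spike
    sta = np.zeros(L)
    for t in np.nonzero(spikes)[0]:
        if t >= L:
            sta += spikes[t] * stim[t - L + 1:t + 1]
    sta /= spikes[L:].sum()

    # the STA window runs forward in time, so compare with the reversed kernel
    print(np.corrcoef(sta, kernel[::-1])[0, 1])         # close to 1
    ```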

  9. 48 CFR 46.301 - Contractor inspection requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... simplified acquisition threshold and (a) inclusion of the clause is necessary to ensure an explicit understanding of the contractor's inspection responsibilities, or (b) inclusion of the clause is required under...

  10. 48 CFR 46.301 - Contractor inspection requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... simplified acquisition threshold and (a) inclusion of the clause is necessary to ensure an explicit understanding of the contractor's inspection responsibilities, or (b) inclusion of the clause is required under...

  11. 48 CFR 46.301 - Contractor inspection requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... simplified acquisition threshold and (a) inclusion of the clause is necessary to ensure an explicit understanding of the contractor's inspection responsibilities, or (b) inclusion of the clause is required under...

  12. 48 CFR 46.301 - Contractor inspection requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... simplified acquisition threshold and (a) inclusion of the clause is necessary to ensure an explicit understanding of the contractor's inspection responsibilities, or (b) inclusion of the clause is required under...

  13. 42 CFR 417.120 - Fiscally sound operation and assumption of financial risk.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Fiscally sound operation and assumption of...: Organization and Operation § 417.120 Fiscally sound operation and assumption of financial risk. (a) Fiscally sound operation—(1) General requirements. Each HMO must have a fiscally sound operation, as demonstrated...

  14. 42 CFR 417.120 - Fiscally sound operation and assumption of financial risk.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 3 2011-10-01 2011-10-01 false Fiscally sound operation and assumption of...: Organization and Operation § 417.120 Fiscally sound operation and assumption of financial risk. (a) Fiscally sound operation—(1) General requirements. Each HMO must have a fiscally sound operation, as demonstrated...

  15. Response Surface Modeling Tolerance and Inference Error Risk Specifications: Proposed Industry Standards

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2012-01-01

    This paper reviews the derivation of an equation for scaling response surface modeling experiments. The equation represents the smallest number of data points required to fit a linear regression polynomial so as to achieve certain specified model adequacy criteria. Specific criteria are proposed which simplify an otherwise rather complex equation, generating a practical rule of thumb for the minimum volume of data required to adequately fit a polynomial with a specified number of terms in the model. This equation and the simplified rule of thumb it produces can be applied to minimize the cost of wind tunnel testing.

  16. Spatial Statistical Data Fusion (SSDF)

    NASA Technical Reports Server (NTRS)

    Braverman, Amy J.; Nguyen, Hai M.; Cressie, Noel

    2013-01-01

    As remote sensing for scientific purposes has transitioned from an experimental technology to an operational one, the selection of instruments has become more coordinated, so that the scientific community can exploit complementary measurements. However, technological and scientific heterogeneity across devices means that the statistical characteristics of the data they collect are different. The challenge addressed here is how to combine heterogeneous remote sensing data sets in a way that yields optimal statistical estimates of the underlying geophysical field, and provides rigorous uncertainty measures for those estimates. Different remote sensing data sets may have different spatial resolutions, different measurement error biases and variances, and other disparate characteristics. A state-of-the-art spatial statistical model was used to relate the true, but not directly observed, geophysical field to noisy, spatial aggregates observed by remote sensing instruments. The spatial covariances of the true field and the covariances of the true field with the observations were modeled. The observations are spatial averages of the true field values, over pixels, with different measurement noise superimposed. A kriging framework is used to infer optimal (minimum mean squared error and unbiased) estimates of the true field at point locations from pixel-level, noisy observations. A key feature of the spatial statistical model is the spatial mixed effects model that underlies it. The approach models the spatial covariance function of the underlying field using linear combinations of basis functions of fixed size. Approaches based on kriging require the inversion of very large spatial covariance matrices, and this is usually done by making simplifying assumptions about spatial covariance structure that simply do not hold for geophysical variables. In contrast, this method does not require these assumptions, and is also computationally much faster. This method is fundamentally different than other approaches to data fusion for remote sensing data because it is inferential rather than merely descriptive. All approaches combine data in a way that minimizes some specified loss function. Most of these are more or less ad hoc criteria based on what looks good to the eye, or some criteria that relate only to the data at hand.
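
    A minimal numerical sketch of the low-rank idea (one spatial dimension, invented basis functions and noise levels): with covariance S K S^T + sigma^2 I, the Woodbury identity reduces the kriging solve from n x n to r x r, which is what makes the method fast without ad hoc covariance assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def basis(x, centers, scale=0.08):
        """Fixed set of local (bisquare-like) basis functions -> low-rank covariance."""
        d = np.abs(np.asarray(x)[:, None] - centers[None, :]) / scale
        return np.where(d < 1, (1 - d**2) ** 2, 0.0)

    r, n = 15, 500
    centers = np.linspace(0, 1, r)
    xs = rng.uniform(0, 1, n)
    S = basis(xs, centers)                     # n x r basis matrix

    K = np.eye(r)                              # prior covariance of basis weights
    sigma2 = 0.05                              # measurement-noise variance
    z = S @ rng.multivariate_normal(np.zeros(r), K) + rng.normal(0, sigma2**0.5, n)

    # kriging predictor S0 K S^T (S K S^T + sigma2 I)^{-1} z via the Woodbury
    # identity, so only an r x r system is solved instead of an n x n one
    M = sigma2 * np.linalg.inv(K) + S.T @ S
    w = (z - S @ np.linalg.solve(M, S.T @ z)) / sigma2
    x0 = np.linspace(0, 1, 9)
    print(np.round(basis(x0, centers) @ (K @ (S.T @ w)), 2))
    ```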

  17. The AgMIP GRIDded Crop Modeling Initiative (AgGRID) and the Global Gridded Crop Model Intercomparison (GGCMI)

    NASA Technical Reports Server (NTRS)

    Elliott, Joshua; Muller, Christoff

    2015-01-01

    Climate change is a significant risk for agricultural production. Even under optimistic scenarios for climate mitigation action, present-day agricultural areas are likely to face significant increases in temperatures in the coming decades, in addition to changes in precipitation, cloud cover, and the frequency and duration of extreme heat, drought, and flood events (IPCC, 2013). These factors will affect the agricultural system at the global scale by impacting cultivation regimes, prices, trade, and food security (Nelson et al., 2014a). Global-scale evaluation of crop productivity is a major challenge for climate impact and adaptation assessment. Rigorous global assessments that are able to inform planning and policy will benefit from consistent use of models, input data, and assumptions across regions and time that use mutually agreed protocols designed by the modeling community. To ensure this consistency, large-scale assessments are typically performed on uniform spatial grids, with spatial resolution of typically 10 to 50 km, over specified time-periods. Many distinct crop models and model types have been applied on the global scale to assess productivity and climate impacts, often with very different results (Rosenzweig et al., 2014). These models are based to a large extent on field-scale crop process or ecosystems models and they typically require resolved data on weather, environmental, and farm management conditions that are lacking in many regions (Bondeau et al., 2007; Drewniak et al., 2013; Elliott et al., 2014b; Gueneau et al., 2012; Jones et al., 2003; Liu et al., 2007; Müller and Robertson, 2014; Van den Hoof et al., 2011; Waha et al., 2012; Xiong et al., 2014). Due to data limitations, the requirements of consistency, and the computational and practical limitations of running models on a large scale, a variety of simplifying assumptions must generally be made regarding prevailing management strategies on the grid scale in both the baseline and future periods. Implementation differences in these and other modeling choices contribute to significant variation among global-scale crop model assessments in addition to differences in crop model implementations that also cause large differences in site-specific crop modeling (Asseng et al., 2013; Bassu et al., 2014).

  18. Evaluation of a distributed catchment scale water balance model

    NASA Technical Reports Server (NTRS)

    Troch, Peter A.; Mancini, Marco; Paniconi, Claudio; Wood, Eric F.

    1993-01-01

    The validity of some of the simplifying assumptions in a conceptual water balance model is investigated by comparing simulation results from the conceptual model with simulation results from a three-dimensional physically based numerical model and with field observations. We examine, in particular, assumptions and simplifications related to water table dynamics, vertical soil moisture and pressure head distributions, and subsurface flow contributions to stream discharge. The conceptual model relies on a topographic index to predict saturation excess runoff and on Philip's infiltration equation to predict infiltration excess runoff. The numerical model solves the three-dimensional Richards equation describing flow in variably saturated porous media, and handles seepage face boundaries, infiltration excess and saturation excess runoff production, and soil driven and atmosphere driven surface fluxes. The study catchments (a 7.2 sq km catchment and a 0.64 sq km subcatchment) are located in the North Appalachian ridge and valley region of eastern Pennsylvania. Hydrologic data collected during the MACHYDRO 90 field experiment are used to calibrate the models and to evaluate simulation results. It is found that water table dynamics as predicted by the conceptual model are close to the observations in a shallow water well and therefore, that a linear relationship between a topographic index and the local water table depth is found to be a reasonable assumption for catchment scale modeling. However, the hydraulic equilibrium assumption is not valid for the upper 100 cm layer of the unsaturated zone and a conceptual model that incorporates a root zone is suggested. Furthermore, theoretical subsurface flow characteristics from the conceptual model are found to be different from field observations, numerical simulation results, and theoretical baseflow recession characteristics based on Boussinesq's groundwater equation.

  19. High Order Schemes in BATS-R-US: Is it OK to Simplify Them?

    NASA Astrophysics Data System (ADS)

    Tóth, G.; Chen, Y.; van der Holst, B.; Daldorff, L. K. S.

    2014-09-01

    We describe a number of high order schemes and their simplified variants that have been implemented into the University of Michigan global magnetohydrodynamics code BATS-R-US. We compare the various schemes with each other and the legacy 2nd order TVD scheme for various test problems and two space physics applications. We find that the simplified schemes are often quite competitive with the more complex and expensive full versions, despite the fact that the simplified versions are only high order accurate for linear systems of equations. We find that all the high order schemes require some fixes to ensure positivity in the space physics applications. On the other hand, they produce superior results as compared with the second order scheme and/or produce the same quality of solution at a much reduced computational cost.
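
    For readers unfamiliar with the scheme family being compared against, here is a minimal second-order TVD (MUSCL with minmod limiter) update for linear advection (a generic textbook sketch, not BATS-R-US code); the limiter is what keeps the high-order reconstruction from creating new extrema:

    ```python
    import numpy as np

    def minmod(a, b):
        return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    def advect_tvd(u, c, steps):
        """2nd-order MUSCL update for u_t + a u_x = 0 with a minmod limiter
        (periodic domain, positive wave speed, CFL number c in (0, 1])."""
        for _ in range(steps):
            du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited slopes
            flux = u + 0.5 * (1 - c) * du                        # upwind face values
            u = u - c * (flux - np.roll(flux, 1))
        return u

    x = np.linspace(0, 1, 200, endpoint=False)
    u0 = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)    # square pulse
    u1 = advect_tvd(u0.copy(), c=0.5, steps=200)
    print(u1.min(), u1.max())   # stays within [0, 1]: no new extrema
    ```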

  20. Space Fabrication Demonstration System

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Progress on fabrication facility (beam builder) support structure control, clamp/weld block, and welding and truss cut off is discussed. The brace attachment design was changed and the design of the weld mechanism was modified which achieved the following system benefits: (1) simplified weld electrode life; (2) reduced weld power requirements; and (3) simplified brace attachment mechanisms. Static and fatigue characteristics of spot welded 2024T3 aluminum joints are evaluated.

  1. Robust Data Detection for the Photon-Counting Free-Space Optical System With Implicit CSI Acquisition and Background Radiation Compensation

    NASA Astrophysics Data System (ADS)

    Song, Tianyu; Kam, Pooi-Yuen

    2016-02-01

    Since atmospheric turbulence and pointing errors cause signal intensity fluctuations and the background radiation surrounding the free-space optical (FSO) receiver contributes an undesired noisy component, the receiver requires accurate channel state information (CSI) and background information to adjust the detection threshold. In most previous studies, for CSI acquisition, pilot symbols were employed, which leads to a reduction of spectral and energy efficiency; and an impractical assumption that the background radiation component is perfectly known was made. In this paper, we develop an efficient and robust sequence receiver, which acquires the CSI and the background information implicitly and requires no knowledge about the channel model information. It is robust since it can automatically estimate the CSI and background component and detect the data sequence accordingly. Its decision metric has a simple form and involves no integrals, and thus can be easily evaluated. A Viterbi-type trellis-search algorithm is adopted to improve the search efficiency, and a selective-store strategy is adopted to overcome a potential error floor problem as well as to increase the memory efficiency. To further simplify the receiver, a decision-feedback symbol-by-symbol receiver is proposed as an approximation of the sequence receiver. By simulations and theoretical analysis, we show that the performance of both the sequence receiver and the symbol-by-symbol receiver approaches that of detection with perfect knowledge of the CSI and background radiation as the length of the window for forming the decision metric increases.
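
    The threshold-adjustment problem the abstract describes can be illustrated with the standard Poisson photon-counting model (a sketch under assumed mean counts, not the paper's sequence receiver): for equiprobable on-off keying, the likelihood-ratio test reduces to a count threshold that depends on both the signal intensity and the background.

    ```python
    import math

    def ml_threshold(ns, nb):
        """Photon-count decision threshold for equiprobable OOK symbols:
        P(k | on)/P(k | off) >= 1  <=>  k >= ns / ln(1 + ns/nb)."""
        return ns / math.log1p(ns / nb)

    ns, nb = 20.0, 5.0        # mean signal and background photon counts (assumed)
    print(f"decide 'on' when the count k >= {ml_threshold(ns, nb):.1f}")
    ```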

  2. Parameter estimation in 3D affine and similarity transformation: implementation of variance component estimation

    NASA Astrophysics Data System (ADS)

    Amiri-Simkooei, A. R.

    2018-01-01

    Three-dimensional (3D) coordinate transformations, generally consisting of origin shifts, axes rotations, scale changes, and skew parameters, are widely used in many geomatics applications. Although in some geodetic applications simplified transformation models are used based on the assumption of small transformation parameters, in other fields of applications such parameters are indeed large. The algorithms of two recent papers on the weighted total least-squares (WTLS) problem are used for the 3D coordinate transformation. The methodology can be applied to the case when the transformation parameters are generally large of which no approximate values of the parameters are required. Direct linearization of the rotation and scale parameters is thus not required. The WTLS formulation is employed to take into consideration errors in both the start and target systems on the estimation of the transformation parameters. Two of the well-known 3D transformation methods, namely affine (12, 9, and 8 parameters) and similarity (7 and 6 parameters) transformations, can be handled using the WTLS theory subject to hard constraints. Because the method can be formulated by the standard least-squares theory with constraints, the covariance matrix of the transformation parameters can directly be provided. The above characteristics of the 3D coordinate transformation are implemented in the presence of different variance components, which are estimated using the least squares variance component estimation. In particular, the estimability of the variance components is investigated. The efficacy of the proposed formulation is verified on two real data sets.
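
    For contrast with the WTLS formulation, the similarity (7-parameter) case also admits a classical closed-form fit that needs no linearization or approximate values (the Umeyama/Procrustes SVD solution, sketched below; unlike the paper's method, it ignores start-system errors and variance components):

    ```python
    import numpy as np

    def similarity_fit(X, Y):
        """Closed-form 7-parameter (Helmert) similarity fit Y ~ s R X + t via
        SVD (Umeyama); valid for arbitrarily large rotations."""
        mx, my = X.mean(0), Y.mean(0)
        Xc, Yc = X - mx, Y - my
        U, D, Vt = np.linalg.svd(Yc.T @ Xc / len(X))
        sgn = np.eye(3)
        sgn[2, 2] = np.sign(np.linalg.det(U @ Vt))     # keep a proper rotation
        R = U @ sgn @ Vt
        s = np.trace(np.diag(D) @ sgn) / Xc.var(0).sum()
        return s, R, my - s * R @ mx

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    R_true *= np.sign(np.linalg.det(R_true))           # ensure det = +1
    Y = 1.7 * X @ R_true.T + np.array([1.0, -2.0, 0.5]) + 1e-3 * rng.normal(size=(100, 3))
    s, R, t = similarity_fit(X, Y)
    print(round(s, 3), np.round(t, 2))                 # ~1.7 and ~[1, -2, 0.5]
    ```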

  3. simplified aerosol representations in global modeling

    NASA Astrophysics Data System (ADS)

    Kinne, Stefan; Peters, Karsten; Stevens, Bjorn; Rast, Sebastian; Schutgens, Nick; Stier, Philip

    2015-04-01

    The detailed treatment of aerosol in global modeling is complex and time-consuming. Thus simplified approaches are investigated, which prescribe 4D (space and time) distributions of aerosol optical properties and of aerosol microphysical properties. Aerosol optical properties are required to assess aerosol direct radiative effects and aerosol microphysical properties (in terms of their ability as aerosol nuclei to modify cloud droplet concentrations) are needed to address the indirect aerosol impact on cloud properties. Following the simplifying concept of the monthly gridded (1x1 lat/lon) aerosol climatology (MAC), new approaches are presented and evaluated against more detailed methods, including comparisons to detailed simulations with complex aerosol component modules.

  4. Advanced space power requirements and techniques. Task 1: Mission projections and requirements. Volume 3: Appendices. [cost estimates and computer programs

    NASA Technical Reports Server (NTRS)

    Wolfe, M. G.

    1978-01-01

    Contents: (1) general study guidelines and assumptions; (2) launch vehicle performance and cost assumptions; (3) satellite programs 1959 to 1979; (4) initiative mission and design characteristics; (5) satellite listing; (6) spacecraft design model; (7) spacecraft cost model; (8) mission cost model; and (9) nominal and optimistic budget program cost summaries.

  5. Classification with spatio-temporal interpixel class dependency contexts

    NASA Technical Reports Server (NTRS)

    Jeon, Byeungwoo; Landgrebe, David A.

    1992-01-01

    A contextual classifier that can utilize both spatial and temporal interpixel dependency contexts is investigated. After spatial and temporal neighbors are defined, a general form of the maximum a posteriori spatiotemporal contextual classifier is derived. This contextual classifier is then simplified under several assumptions. Joint prior probabilities of the classes of each pixel and its spatial neighbors are modeled by the Gibbs random field. The classification is performed in a recursive manner to allow a computationally efficient contextual classification. Experimental results with bitemporal TM data show significant improvement of classification accuracy over noncontextual pixelwise classifiers. This spatiotemporal contextual classifier should find use in many applications of remote sensing, especially when classification accuracy is important.
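
    As a rough illustration of MAP classification with a Gibbs spatial prior, the sketch below relaxes the posterior with iterated conditional modes (ICM) under a Potts-style agreement bonus. The 4-connected neighborhood, the ICM relaxation, and the parameter beta are illustrative simplifications; the paper's classifier also uses temporal neighbors and a recursive formulation.

    ```python
    import numpy as np

    def icm(log_lik, beta=1.0, iters=5):
        """log_lik[i, j, c]: pixelwise log-likelihood of class c at pixel (i, j)."""
        h, w, C = log_lik.shape
        labels = log_lik.argmax(axis=2)              # start from pixelwise ML labels
        for _ in range(iters):
            for i in range(h):
                for j in range(w):
                    post = log_lik[i, j].copy()
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            post[labels[ni, nj]] += beta   # Potts agreement bonus
                    labels[i, j] = post.argmax()     # greedy local MAP update
        return labels

    # usage: noisy two-class log-likelihoods on a 32x32 grid
    rng = np.random.default_rng(0)
    ll = rng.normal(size=(32, 32, 2))
    print(icm(ll).sum())
    ```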

  6. Ionic transport in high-energy-density matter

    DOE PAGES

    Stanton, Liam G.; Murillo, Michael S.

    2016-04-08

    Ionic transport coefficients for dense plasmas have been numerically computed using an effective Boltzmann approach. Here, we developed a simplified effective potential approach that yields accurate fits for all of the relevant cross sections and collision integrals. These results have been validated with molecular-dynamics simulations for self-diffusion, interdiffusion, viscosity, and thermal conductivity. Molecular dynamics has also been used to examine the underlying assumptions of the Boltzmann approach through a categorization of behaviors of the velocity autocorrelation function in the Yukawa phase diagram. By using a velocity-dependent screening model, we examine the role of dynamical screening in transport. Implications of these results for Coulomb logarithm approaches are discussed.

  7. Assessment of historical masonry pillars reinforced by CFRP strips

    NASA Astrophysics Data System (ADS)

    Fedele, Roberto; Rosati, Giampaolo; Biolzi, Luigi; Cattaneo, Sara

    2014-10-01

    In this methodological study, the ultimate response of masonry pillars strengthened by externally bonded Carbon Fiber Reinforced Polymer (CFRP) was investigated. Historical bricks were derived from a XVII century rural building, whilst a high-strength mortar was utilized for the joints. The conventional experimental information, concerning the overall reaction force and the relative displacements provided by "point" sensors (LVDTs and a clip gauge), was herein enriched with no-contact, full-field kinematic measurements provided by 2D Digital Image Correlation (2D DIC). The experimental information was critically compared with predictions provided by an advanced three-dimensional model, based on nonlinear finite elements under the simplifying assumption of perfect adhesion between the reinforcement and the support.

  8. The time-dependent response of 3- and 5-layer sandwich beams

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.; Oleksuk, L. S. S.; Bowles, D. E.

    1992-01-01

    Simple sandwich beam models have been developed to study the effect of the time-dependent constitutive properties of fiber-reinforced polymer matrix composites, considered for use in orbiting precision segmented reflectors, on the overall deformations. The 3- and 5-layer beam models include layers representing the face sheets, the core, and the adhesive. The static elastic deformation response of the sandwich beam models to a midspan point load is studied using the principle of stationary potential energy. In addition to quantitative conclusions, several assumptions are discussed which simplify the analysis for the case of more complicated material models. It is shown that the simple three-layer model is sufficient in many situations.

  9. Bridging automatic speech recognition and psycholinguistics: Extending Shortlist to an end-to-end model of human speech recognition (L)

    NASA Astrophysics Data System (ADS)

    Scharenborg, Odette; ten Bosch, Louis; Boves, Lou; Norris, Dennis

    2003-12-01

    This letter evaluates potential benefits of combining human speech recognition (HSR) and automatic speech recognition by building a joint model of an automatic phone recognizer (APR) and a computational model of HSR, viz., Shortlist [Norris, Cognition 52, 189-234 (1994)]. Experiments based on "real-life" speech highlight critical limitations posed by some of the simplifying assumptions made in models of human speech recognition. These limitations could be overcome by avoiding hard phone decisions at the output side of the APR, and by using a match between the input and the internal lexicon that flexibly copes with deviations from canonical phonemic representations.

  10. Managed care for Medicare: some considerations in designing effective information provision programs.

    PubMed

    Jayanti, R K

    2001-01-01

    Consumer information-processing theory provides a useful framework for policy makers concerned with regulating information provided by managed care organizations. The assumption that consumers are rational information processors and providing more information is better is questioned in this paper. Consumer research demonstrates that when faced with an uncertain decision, consumers adopt simplifying strategies leading to sub-optimal choices. A discussion on how consumers process risk information and the effects of various informational formats on decision outcomes is provided. Categorization theory is used to propose guidelines with regard to providing effective information to consumers choosing among competing managed care plans. Public policy implications borne out of consumer information-processing theory conclude the article.

  11. A modified Friedmann equation

    NASA Astrophysics Data System (ADS)

    Ambjørn, J.; Watabiki, Y.

    2017-12-01

    We recently formulated a model of the universe based on an underlying W3-symmetry. It allows the creation of the universe from nothing and the creation of baby universes and wormholes for spacetimes of dimension 2, 3, 4, 6 and 10. Here we show that the classical large-time and large-space limit of these universes is one of exponentially fast expansion without the need of a cosmological constant. Under a number of simplifying assumptions, our model predicts that w = -1.2 in the case of four-dimensional spacetime. The possibility of obtaining a w-value less than -1 is linked to the ability of our model to create baby universes and wormholes.
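
    To make the claimed behaviour concrete, the sketch below integrates the standard flat-FLRW Friedmann equation for a constant equation of state w = -1.2; the density parameters, Hubble constant, and RK4 step are illustrative placeholders, not values from the paper.

    ```python
    import numpy as np

    H0 = 70.0 / 978.0                  # 70 km/s/Mpc expressed in 1/Gyr
    Om, Ode, w = 0.3, 0.7, -1.2        # illustrative density parameters

    def hubble(a):
        # H(a) for a flat universe with matter and constant-w dark energy
        return H0 * np.sqrt(Om * a ** -3 + Ode * a ** (-3.0 * (1.0 + w)))

    # integrate da/dt = a * H(a) forward from today (a = 1) with RK4
    a, dt = 1.0, 0.01                  # time step in Gyr
    history = []
    for step in range(1000):           # 10 Gyr into the future
        f = lambda a_: a_ * hubble(a_)
        k1 = f(a); k2 = f(a + 0.5 * dt * k1); k3 = f(a + 0.5 * dt * k2); k4 = f(a + dt * k3)
        a += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        history.append(a)              # w < -1: H grows with a, super-exponential expansion

    print(history[-1])                 # scale factor after 10 Gyr
    ```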

  12. Towards realistic modelling of spectral line formation - lessons learnt from red giants

    NASA Astrophysics Data System (ADS)

    Lind, Karin

    2015-08-01

    Many decades of quantitative spectroscopic studies of red giants have revealed much about the formation histories and interlinks between the main components of the Galaxy and its satellites. Telescopes and instrumentation are now able to deliver high-resolution data of superb quality for large stellar samples and Galactic archaeology has entered a new era. At the same time, we have learnt how simplifying physical assumptions in the modelling of spectroscopic data can bias the interpretations, in particular one-dimensional homogeneity and local thermodynamic equilibrium (LTE). I will present lessons learnt so far from non-LTE spectral line formation in 3D radiation-hydrodynamic atmospheres of red giants, the smaller siblings of red supergiants.

  13. Droplets size evolution of dispersion in a stirred tank

    NASA Astrophysics Data System (ADS)

    Kysela, Bohus; Konfrst, Jiri; Chara, Zdenek; Sulc, Radek; Jasikova, Darina

    2018-06-01

    Dispersion of two immiscible liquids is commonly used in the chemical industry as well as in the metallurgical industry, e.g., in extraction processes. The governing property is the droplet size distribution. The droplet sizes are given by the physical properties of both liquids and the flow properties inside a stirred tank. The first investigation stage is focused on in-situ droplet size measurement using image analysis and on optimizing the evaluation method to achieve maximal reproducibility of the results. The obtained experimental results are compared with a multiphase flow simulation based on the Euler-Euler approach combined with PBM (Population Balance Modelling). The population balance model was, in this specific case, simplified with the assumption of pure breakage of droplets.

  14. Model-based estimation for dynamic cardiac studies using ECT.

    PubMed

    Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O

    1994-01-01

    The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.

  15. On firework blasts and qualitative parameter dependency.

    PubMed

    Zohdi, T I

    2016-01-01

    In this paper, a mathematical model is developed to qualitatively simulate the progressive time-evolution of a blast from a simple firework. Estimates are made for the blast radius that one can expect for a given amount of detonation energy and pyrotechnic display material. The model balances the released energy from the initial blast pulse with the subsequent kinetic energy and then computes the trajectory of the material under the influence of the drag from the surrounding air, gravity and possible buoyancy. Under certain simplifying assumptions, the model can be solved analytically. The solution serves as a guide to identifying key parameters that control the evolving blast envelope. Three-dimensional examples are given.
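
    A minimal sketch of the model's two ingredients follows: an initial speed from equating released energy with kinetic energy (E = mv²/2), then a trajectory under gravity and quadratic air drag. All parameter values are invented for illustration.

    ```python
    import numpy as np

    E, m = 5.0e3, 0.05                 # blast energy per fragment (J), fragment mass (kg)
    rho, Cd, A = 1.2, 0.47, 1.0e-3     # air density, drag coefficient, cross-section (m^2)
    g = np.array([0.0, -9.81])
    theta = 0.8                        # launch angle (rad)

    v = np.sqrt(2.0 * E / m) * np.array([np.cos(theta), np.sin(theta)])  # E = m v^2 / 2
    x = np.array([0.0, 0.0])
    dt, radius = 1.0e-3, 0.0
    while x[1] >= 0.0:
        drag = -0.5 * rho * Cd * A * np.linalg.norm(v) * v / m  # quadratic drag opposes v
        v = v + dt * (g + drag)
        x = x + dt * v
        radius = max(radius, x[0])
    print(radius)                      # rough horizontal extent of the blast envelope
    ```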

  16. Resonant behaviour of MHD waves on magnetic flux tubes. I - Connection formulae at the resonant surfaces. II - Absorption of sound waves by sunspots

    NASA Technical Reports Server (NTRS)

    Sakurai, Takashi; Goossens, Marcel; Hollweg, Joseph V.

    1991-01-01

    The present method of addressing the resonance problems that emerge in such MHD phenomena as the resonant absorption of waves at the Alfven resonance point avoids solving the fourth-order differential equation of dissipative MHD by recourse to connection formulae across the dissipation layer. In the second part of this investigation, the absorption of solar 5-min oscillations by sunspots is interpreted as the resonant absorption of sound waves by a magnetic cylinder. The absorption coefficient is evaluated (1) analytically, under certain simplifying assumptions, and (2) numerically, under more general conditions. The observed absorption coefficient magnitude is explained over suitable parameter ranges.

  17. On firework blasts and qualitative parameter dependency

    PubMed Central

    Zohdi, T. I.

    2016-01-01

    In this paper, a mathematical model is developed to qualitatively simulate the progressive time-evolution of a blast from a simple firework. Estimates are made for the blast radius that one can expect for a given amount of detonation energy and pyrotechnic display material. The model balances the released energy from the initial blast pulse with the subsequent kinetic energy and then computes the trajectory of the material under the influence of the drag from the surrounding air, gravity and possible buoyancy. Under certain simplifying assumptions, the model can be solved analytically. The solution serves as a guide to identifying key parameters that control the evolving blast envelope. Three-dimensional examples are given. PMID:26997903

  18. Parachute dynamics and stability analysis. [using nonlinear differential equations of motion

    NASA Technical Reports Server (NTRS)

    Ibrahim, S. K.; Engdahl, R. A.

    1974-01-01

    The nonlinear differential equations of motion for a general parachute-riser-payload system are developed. The resulting math model is then applied for analyzing the descent dynamics and stability characteristics of both the drogue stabilization phase and the main descent phase of the space shuttle solid rocket booster (SRB) recovery system. The formulation of the problem is characterized by a minimum number of simplifying assumptions and full application of state-of-the-art parachute technology. The parachute suspension lines and the parachute risers can be modeled as elastic elements, and the whole system may be subjected to specified wind and gust profiles in order to assess their effects on the stability of the recovery system.

  19. A practical method of predicting the loudness of complex electrical stimuli

    NASA Astrophysics Data System (ADS)

    McKay, Colette M.; Henshall, Katherine R.; Farrell, Rebecca J.; McDermott, Hugh J.

    2003-04-01

    The output of speech processors for multiple-electrode cochlear implants consists of current waveforms with complex temporal and spatial patterns. The majority of existing processors output sequential biphasic current pulses. This paper describes a practical method of calculating loudness estimates for such stimuli, in addition to the relative loudness contributions from different cochlear regions. The method can be used either to manipulate the loudness or levels in existing processing strategies, or to control intensity cues in novel sound processing strategies. The method is based on a loudness model described by McKay et al. [J. Acoust. Soc. Am. 110, 1514-1524 (2001)] with the addition of the simplifying approximation that current pulses falling within a temporal integration window of several milliseconds' duration contribute independently to the overall loudness of the stimulus. Three experiments were carried out with six implantees who use the CI24M device manufactured by Cochlear Ltd. The first experiment validated the simplifying assumption, and allowed loudness growth functions to be calculated for use in the loudness prediction method. The following experiments confirmed the accuracy of the method using multiple-electrode stimuli with various patterns of electrode locations and current levels.
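
    The simplifying approximation validated in the first experiment lends itself to a short sketch: pulses inside a temporal integration window contribute independently (here, additively) to the overall loudness. The power-law loudness-growth exponent and the window length below are placeholders, not the fitted loudness growth functions from the paper.

    ```python
    import numpy as np

    def peak_loudness(times_ms, currents, window_ms=4.0, exponent=2.0):
        """Sliding-window sum of per-pulse loudness contributions; returns the peak."""
        t = np.asarray(times_ms, dtype=float)
        contrib = np.asarray(currents, dtype=float) ** exponent   # loudness growth per pulse
        peak = 0.0
        for center in t:                                          # window centered on each pulse
            in_window = np.abs(t - center) <= window_ms / 2.0
            peak = max(peak, contrib[in_window].sum())
        return peak

    # usage: a 1 kHz interleaved two-electrode pulse train (currents in arbitrary units)
    times = np.arange(0.0, 20.0, 1.0)
    currents = np.where(np.arange(20) % 2 == 0, 0.8, 1.0)
    print(peak_loudness(times, currents))
    ```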

  20. The risk of collapse in abandoned mine sites: the issue of data uncertainty

    NASA Astrophysics Data System (ADS)

    Longoni, Laura; Papini, Monica; Brambilla, Davide; Arosio, Diego; Zanzi, Luigi

    2016-04-01

    Ground collapses over abandoned underground mines constitute a new environmental risk in the world. The high risk associated with subsurface voids, together with a lack of knowledge of the geometric and geomechanical features of mining areas, makes abandoned underground mines one of the current challenges for countries with a long mining history. In this study, a stability analysis of the Montevecchia marl mine is performed in order to validate a general approach that takes into account the poor local information and the variability of the input data. The collapse risk was evaluated through a numerical approach that, starting from some simplifying assumptions, is able to provide an overview of the collapse probability. The final result is an easily accessible, transparent summary graph of the collapse probability. This approach may be useful for public administrators called upon to manage this environmental risk. The approach simplifies this complex problem in order to achieve a rough risk assessment, but, since it relies on just a small amount of information, any final user should be aware that a comprehensive and detailed risk scenario can be generated only through more exhaustive investigations.

  1. 29 CFR 4231.10 - Actuarial calculations and assumptions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date of...

  2. 29 CFR 4231.10 - Actuarial calculations and assumptions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date of...

  3. 29 CFR 4231.10 - Actuarial calculations and assumptions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date of...

  4. 29 CFR 4231.10 - Actuarial calculations and assumptions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date of...

  5. 29 CFR 4231.10 - Actuarial calculations and assumptions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date of...

  6. 49 CFR 565.10 - Purpose and scope.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION VEHICLE IDENTIFICATION NUMBER (VIN) REQUIREMENTS VIN Requirements... vehicle identification number (VIN) system and its installation to simplify vehicle identification information retrieval and to increase the accuracy and efficiency of vehicle recall campaigns. ...

  7. 49 CFR 565.10 - Purpose and scope.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION VEHICLE IDENTIFICATION NUMBER (VIN) REQUIREMENTS VIN Requirements... vehicle identification number (VIN) system and its installation to simplify vehicle identification information retrieval and to increase the accuracy and efficiency of vehicle recall campaigns. ...

  8. 49 CFR 565.10 - Purpose and scope.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION VEHICLE IDENTIFICATION NUMBER (VIN) REQUIREMENTS VIN Requirements... vehicle identification number (VIN) system and its installation to simplify vehicle identification information retrieval and to increase the accuracy and efficiency of vehicle recall campaigns. ...

  9. 49 CFR 565.10 - Purpose and scope.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION VEHICLE IDENTIFICATION NUMBER (VIN) REQUIREMENTS VIN Requirements... vehicle identification number (VIN) system and its installation to simplify vehicle identification information retrieval and to increase the accuracy and efficiency of vehicle recall campaigns. ...

  10. 49 CFR 565.10 - Purpose and scope.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION VEHICLE IDENTIFICATION NUMBER (VIN) REQUIREMENTS VIN Requirements... vehicle identification number (VIN) system and its installation to simplify vehicle identification information retrieval and to increase the accuracy and efficiency of vehicle recall campaigns. ...

  11. Glistening-region model for multipath studies

    NASA Astrophysics Data System (ADS)

    Groves, Gordon W.; Chow, Winston C.

    1998-07-01

    The goal is to achieve a model of radar sea reflection with improved fidelity that is amenable to practical implementation. The geometry of reflection from a wavy surface is formulated. The sea surface is divided into two components: the smooth 'chop' consisting of the longer wavelengths, and the 'roughness' of the short wavelengths. Ordinary geometric reflection from the chop surface is broadened by the roughness. This same representation serves both for forward scatter and backscatter (sea clutter). The 'Road-to-Happiness' approximation, in which the mean sea surface is assumed cylindrical, simplifies the reflection geometry for low-elevation targets. The effect of surface roughness is assumed to make the sea reflection coefficient depend on the 'Deviation Angle' between the specular and the scattering directions, the 'specular' direction being that into which energy would be reflected by a perfectly smooth facet. Assuming that the ocean waves are linear and random allows the use of Gaussian statistics, greatly simplifying the formulation by allowing representation of the sea chop by three parameters. An approximation of 'low waves' and retention of the sea-chop slope components only through second order provide further simplification. The simplifying assumptions make it possible to take the predicted 2D ocean wave spectrum into account in the calculation of sea-surface radar reflectivity, and to provide algorithms in support of an operational system for target tracking in the presence of multipath. The product will be of use in simulation studies evaluating trade-offs among alternative tracking schemes, and will form the basis of a tactical system for ship defense against low flyers.

  12. Computational reacting gas dynamics

    NASA Technical Reports Server (NTRS)

    Lam, S. H.

    1993-01-01

    In the study of high-speed flows at high altitudes, such as those encountered by re-entry spacecraft, the interaction of chemical reactions and other non-equilibrium processes in the flow field with the gas dynamics is crucial. Generally speaking, problems of this level of complexity must resort to numerical methods for solution, using sophisticated computational fluid dynamics (CFD) codes. The difficulties introduced by reacting gas dynamics can be classified under three distinct headings: (1) the usually inadequate knowledge of the reaction rate coefficients in the non-equilibrium reaction system; (2) the vastly larger number of unknowns involved in the computation and the expected stiffness of the equations; and (3) the interpretation of the detailed reacting CFD numerical results. The research performed accepts the premise that reacting flows of practical interest in the future will in general be too complex or 'untractable' for traditional analytical developments. The power of modern computers must be exploited. However, instead of focusing solely on the construction of numerical solutions of full-model equations, attention is also directed to the 'derivation' of the simplified model from the given full model. In other words, the present research aims to utilize computations to do tasks which have traditionally been done by skilled theoreticians: to reduce an originally complex full-model system into an approximate but otherwise equivalent simplified model system. The tacit assumption is that once the appropriate simplified model is derived, the interpretation of the detailed reacting CFD numerical results will become much easier. The approach of the research is called computational singular perturbation (CSP).

  13. A simplified Integer Cosine Transform and its application in image compression

    NASA Technical Reports Server (NTRS)

    Costa, M.; Tong, K.

    1994-01-01

    A simplified version of the integer cosine transform (ICT) is described. For practical reasons, the transform is considered jointly with the quantization of its coefficients. It differs from conventional ICT algorithms in that the combined factors for normalization and quantization are approximated by powers of two. In conventional algorithms, the normalization/quantization stage typically requires as many integer divisions as the number of transform coefficients. By restricting the factors to powers of two, these divisions can be performed by variable shifts in the binary representation of the coefficients, with speed and cost advantages to the hardware implementation of the algorithm. The error introduced by the factor approximations is compensated for in the inverse ICT operation, executed with floating point precision. The simplified ICT algorithm has potential applications in image-compression systems with disparate cost and speed requirements in the encoder and decoder ends. For example, in deep space image telemetry, the image processors on board the spacecraft could take advantage of the simplified, faster encoding operation, which would be adjusted on the ground, with high-precision arithmetic. A dual application is found in compressed video broadcasting. Here, a fast, high-performance processor at the transmitter would precompensate for the factor approximations in the inverse ICT operation, to be performed in real time, at a large number of low-cost receivers.
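
    The key trick, replacing per-coefficient integer division with binary shifts, can be shown in a few lines. The 2x2 factor table below is invented for illustration, and the residual (factor / 2^shift) mismatch would in practice be folded into the floating-point inverse ICT, as the abstract describes.

    ```python
    import numpy as np

    # hypothetical combined normalization/quantization factors for a 2x2 block
    factors = np.array([[16.0, 20.0], [20.0, 25.0]])
    shifts = np.round(np.log2(factors)).astype(int)          # nearest powers of two

    def quantize(coeffs):
        # integer division by 2^shift is an arithmetic right shift: no divisions
        return coeffs >> shifts

    def dequantize(q):
        # plain floating-point dequantization; the residual (factor / 2^shift)
        # correction would be absorbed into the floating-point inverse ICT
        return q * 2.0 ** shifts

    c = np.array([[240, -96], [64, 33]])                     # integer ICT coefficients
    print(dequantize(quantize(c)))                           # approximate reconstruction
    ```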

  14. Collision partner selection schemes in DSMC: From micro/nano flows to hypersonic flows

    NASA Astrophysics Data System (ADS)

    Roohi, Ehsan; Stefanov, Stefan

    2016-10-01

    The motivation of this review paper is to present a detailed summary of different collision models developed in the framework of the direct simulation Monte Carlo (DSMC) method. The emphasis is put on a newly developed collision model, i.e., the Simplified Bernoulli Trials (SBT) scheme, which permits efficient low-memory simulation of rarefied gas flows. The paper starts with a brief review of the governing equations of rarefied gas dynamics, including the Boltzmann and Kac master equations, and reiterates that the linear Kac equation reduces to a non-linear Boltzmann equation under the assumption of molecular chaos. An introduction to the DSMC method is provided, and the principles of collision algorithms in DSMC are discussed. A distinction is made between those collision models that are based on classical kinetic theory (time counter, no time counter (NTC), and nearest neighbor (NN)) and the other class that can be derived mathematically from the Kac master equation (pseudo-Poisson process, ballot box, majorant frequency, null collision, and the Bernoulli trials scheme and its variants). To provide deeper insight, the derivation of both classes of collision models, either from the principles of kinetic theory or from the Kac master equation, is provided in sufficient detail. Some discussion of the importance of subcells in the DSMC collision procedure is also provided, and different types of subcells are presented. The paper then focuses on the simplified version of the Bernoulli trials algorithm (SBT) and presents a detailed summary of the validation of the SBT family of collision schemes (SBT on transient adaptive subcells: SBT-TAS, and intelligent SBT: ISBT) in a broad spectrum of rarefied gas-flow test cases, ranging from low-speed internal micro and nano flows to external hypersonic flows, emphasizing first the accuracy of these new collision models and second, demonstrating that the SBT family of schemes, compared with other conventional and recent collision models, requires a smaller number of particles per cell to obtain sufficiently accurate solutions.
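
    As a hedged illustration of SBT partner selection within one DSMC cell: after a random shuffle, each particle tests a single random partner among those after it, with the acceptance probability scaled by the number of remaining candidates. The function below is a sketch under that reading of the scheme; the constants, the hard-sphere cross-section, and the omission of the actual collision mechanics are simplifying assumptions for brevity.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sbt_collisions(vel, sigma, Fn, dt, Vc):
        """vel: (N, 3) velocities in the cell; returns accepted collision pairs."""
        N = len(vel)
        order = rng.permutation(N)                 # random ordering each step
        pairs = []
        for k in range(N - 1):
            i = order[k]
            j = order[rng.integers(k + 1, N)]      # one random partner after i
            cr = np.linalg.norm(vel[i] - vel[j])   # relative speed
            # acceptance weighted by the (N - 1 - k) remaining candidates
            p = (N - 1 - k) * Fn * sigma * cr * dt / Vc
            if rng.random() < min(p, 1.0):
                pairs.append((i, j))               # collision mechanics omitted here
        return pairs

    vel = rng.normal(scale=300.0, size=(50, 3))    # illustrative thermal velocities (m/s)
    print(len(sbt_collisions(vel, sigma=3e-19, Fn=1e12, dt=1e-9, Vc=1e-9)))
    ```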

  15. Validation of scaffold design optimization in bone tissue engineering: finite element modeling versus designed experiments.

    PubMed

    Uth, Nicholas; Mueller, Jens; Smucker, Byran; Yousefi, Azizeh-Mitra

    2017-02-21

    This study reports the development of biological/synthetic scaffolds for bone tissue engineering (TE) via 3D bioplotting. These scaffolds were composed of poly(L-lactic-co-glycolic acid) (PLGA), type I collagen, and nano-hydroxyapatite (nHA) in an attempt to mimic the extracellular matrix of bone. The solvent used for processing the scaffolds was 1,1,1,3,3,3-hexafluoro-2-propanol. The produced scaffolds were characterized by scanning electron microscopy, microcomputed tomography, thermogravimetric analysis, and unconfined compression test. This study also sought to validate the use of finite-element optimization in COMSOL Multiphysics for scaffold design. Scaffold topology was simplified to three factors: nHA content, strand diameter, and strand spacing. These factors affect the ability of the scaffold to bear mechanical loads and how porous the structure can be. Twenty four scaffolds were constructed according to an I-optimal, split-plot designed experiment (DE) in order to generate experimental models of the factor-response relationships. Within the design region, the DE and COMSOL models agreed in their recommended optimal nHA (30%) and strand diameter (460 μm). However, the two methods disagreed by more than 30% in strand spacing (908 μm for DE; 601 μm for COMSOL). Seven scaffolds were 3D-bioplotted to validate the predictions of DE and COMSOL models (4.5-9.9 MPa measured moduli). The predictions for these scaffolds showed relative agreement for scaffold porosity (mean absolute percentage error of 4% for DE and 13% for COMSOL), but were substantially poorer for scaffold modulus (51% for DE; 21% for COMSOL), partly due to some simplifying assumptions made by the models. Expanding the design region in future experiments (e.g., higher nHA content and strand diameter), developing an efficient solvent evaporation method, and exerting a greater control over layer overlap could allow developing PLGA-nHA-collagen scaffolds to meet the mechanical requirements for bone TE.

  16. Modeling climate change impact in hospitality sector, using building resources consumption signature

    NASA Astrophysics Data System (ADS)

    Pinto, Armando; Bernardino, Mariana; Silva Santos, António; Pimpão Silva, Álvaro; Espírito Santo, Fátima

    2016-04-01

    Hotels are among the building types that consume the most energy and water per person, and they are vulnerable to climate change because, during extreme events (heat waves, water stress), such failures could compromise hotel services (comfort) and increase energy costs, or compromise the landscape and amenities due to water-use restrictions. Climate impact assessments and the development of adaptation strategies require knowledge of the critical climatic variables and of the behaviour of the building. To study the risk and vulnerability of buildings and hotels to climate change regarding resource consumption (energy and water), previous studies used building energy modelling simulation (BEMS) tools to study the variation in energy and water consumption. In general, the climate change impact on a building is evaluated by studying the energy and water demand of the building under future climate scenarios. But hotels are complex buildings, quite different from each other, and the assumptions made in simplified BEMS are not calibrated and usually neglect some important hotel features, leading to projected estimates that do not usually match hospitality-sector understanding and practice. Taking into account all uncertainties, the use of a building signature (a statistical method) can help assess, in a clearer way, the impact of climate change on the hospitality sector using a broad sample. Statistical analysis of the global energy consumption obtained from bills shows that the energy consumption may be predicted within a 90% confidence interval from the outdoor temperature alone. In this article a simplified methodology is presented and applied to identify the climate change impact on the hospitality sector using the building energy and water signature. This methodology is applied to sixteen hotels (nine in Lisbon and seven in the Algarve) with four- and five-star ratings. The results show that an increase in water and electricity consumption (mainly due to the increase in cooling) and a decrease in gas consumption (for heating) are expected. The hotels in the Algarve are more vulnerable than the Lisbon hotels.

  17. A hip joint simulator study using simplified loading and motion cycles generating physiological wear paths and rates.

    PubMed

    Barbour, P S; Stone, M H; Fisher, J

    1999-01-01

    In some designs of hip joint simulator, the cost of building a highly complex machine has been offset against the requirement for a large number of test stations. The applicability of the wear results generated by these machines depends on their ability to reproduce physiological wear rates and processes. In this study a hip joint simulator has been shown to reproduce physiological wear using only one load vector and two degrees of motion with simplified input cycles. The actual paths of points on the femoral head relative to the acetabular cup were calculated and compared for physiological and simplified input cycles. The in vitro wear rates were found to be highly dependent on the shape of these paths, and similarities could be drawn between the shape of the physiological paths and the simplified elliptical paths.

  18. A simplified method for elastic-plastic-creep structural analysis

    NASA Technical Reports Server (NTRS)

    Kaufman, A.

    1984-01-01

    A simplified inelastic analysis computer program (ANSYPM) was developed for predicting the stress-strain history at the critical location of a thermomechanically cycled structure from an elastic solution. The program uses an iterative and incremental procedure to estimate the plastic strains from the material stress-strain properties and a plasticity hardening model. Creep effects are calculated on the basis of stress relaxation at constant strain, creep at constant stress or a combination of stress relaxation and creep accumulation. The simplified method was exercised on a number of problems involving uniaxial and multiaxial loading, isothermal and nonisothermal conditions, dwell times at various points in the cycles, different materials and kinematic hardening. Good agreement was found between these analytical results and nonlinear finite element solutions for these problems. The simplified analysis program used less than 1 percent of the CPU time required for a nonlinear finite element analysis.

  19. A simplified method for elastic-plastic-creep structural analysis

    NASA Technical Reports Server (NTRS)

    Kaufman, A.

    1985-01-01

    A simplified inelastic analysis computer program (ANSYPM) was developed for predicting the stress-strain history at the critical location of a thermomechanically cycled structure from an elastic solution. The program uses an iterative and incremental procedure to estimate the plastic strains from the material stress-strain properties and a plasticity hardening model. Creep effects are calculated on the basis of stress relaxation at constant strain, creep at constant stress or a combination of stress relaxation and creep accumulation. The simplified method was exercised on a number of problems involving uniaxial and multiaxial loading, isothermal and nonisothermal conditions, dwell times at various points in the cycles, different materials and kinematic hardening. Good agreement was found between these analytical results and nonlinear finite element solutions for these problems. The simplified analysis program used less than 1 percent of the CPU time required for a nonlinear finite element analysis.

  20. 49 CFR 565.20 - Purpose and scope.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION VEHICLE IDENTIFICATION NUMBER (VIN) REQUIREMENTS Alternative VIN... and physical requirements for a vehicle identification number (VIN) system and its installation to simplify vehicle identification information retrieval and to increase the accuracy and efficiency of...

  1. 49 CFR 565.1 - Purpose and scope.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION VEHICLE IDENTIFICATION NUMBER (VIN) REQUIREMENTS General... requirements for a vehicle identification number (VIN) system and its installation to simplify vehicle identification information retrieval and to increase the accuracy and efficiency of vehicle recall campaigns. ...

  2. 49 CFR 565.1 - Purpose and scope.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION VEHICLE IDENTIFICATION NUMBER (VIN) REQUIREMENTS General... requirements for a vehicle identification number (VIN) system and its installation to simplify vehicle identification information retrieval and to increase the accuracy and efficiency of vehicle recall campaigns. ...

  3. 49 CFR 565.20 - Purpose and scope.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION VEHICLE IDENTIFICATION NUMBER (VIN) REQUIREMENTS Alternative VIN... and physical requirements for a vehicle identification number (VIN) system and its installation to simplify vehicle identification information retrieval and to increase the accuracy and efficiency of...

  4. 49 CFR 565.1 - Purpose and scope.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION VEHICLE IDENTIFICATION NUMBER (VIN) REQUIREMENTS General... requirements for a vehicle identification number (VIN) system and its installation to simplify vehicle identification information retrieval and to increase the accuracy and efficiency of vehicle recall campaigns. ...

  5. 49 CFR 565.20 - Purpose and scope.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION VEHICLE IDENTIFICATION NUMBER (VIN) REQUIREMENTS Alternative VIN... and physical requirements for a vehicle identification number (VIN) system and its installation to simplify vehicle identification information retrieval and to increase the accuracy and efficiency of...

  6. 49 CFR 565.20 - Purpose and scope.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION VEHICLE IDENTIFICATION NUMBER (VIN) REQUIREMENTS Alternative VIN... and physical requirements for a vehicle identification number (VIN) system and its installation to simplify vehicle identification information retrieval and to increase the accuracy and efficiency of...

  7. 49 CFR 565.1 - Purpose and scope.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION VEHICLE IDENTIFICATION NUMBER (VIN) REQUIREMENTS General... requirements for a vehicle identification number (VIN) system and its installation to simplify vehicle identification information retrieval and to increase the accuracy and efficiency of vehicle recall campaigns. ...

  8. 49 CFR 565.20 - Purpose and scope.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION VEHICLE IDENTIFICATION NUMBER (VIN) REQUIREMENTS Alternative VIN... and physical requirements for a vehicle identification number (VIN) system and its installation to simplify vehicle identification information retrieval and to increase the accuracy and efficiency of...

  9. 49 CFR 565.1 - Purpose and scope.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION VEHICLE IDENTIFICATION NUMBER (VIN) REQUIREMENTS General... requirements for a vehicle identification number (VIN) system and its installation to simplify vehicle identification information retrieval and to increase the accuracy and efficiency of vehicle recall campaigns. ...

  10. 75 FR 34277 - Federal Acquisition Regulation; FAR Case 2008-007, Additional Requirements for Market Research

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-16

    ...The Civilian Agency Acquisition Council and the Defense Acquisition Regulations Council (Councils) have agreed on an interim rule amending the Federal Acquisition Regulation (FAR) to implement Section 826 of the National Defense Authorization Act for Fiscal Year 2008 (FY08 NDAA). Section 826 established additional requirements in subsection (c) of 10 U.S.C. 2377. As a matter of policy, these requirements are extended to all executive agencies. Specifically, the head of the agency must conduct market research before issuing an indefinite-delivery indefinite-quantity (ID/IQ) task or delivery order for a noncommercial item in excess of the simplified acquisition threshold. In addition, a prime contractor with a contract in excess of $5 million for the procurement of items other than commercial items is required to conduct market research before making purchases that exceed the simplified acquisition threshold for or on behalf of the Government.

  11. Is There a Critical Distance for Fickian Transport? - a Statistical Approach to Sub-Fickian Transport Modelling in Porous Media

    NASA Astrophysics Data System (ADS)

    Most, S.; Nowak, W.; Bijeljic, B.

    2014-12-01

    Transport processes in porous media are frequently simulated as particle movement, formulated as a stochastic process of particle position increments. At the pore scale, the geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Recent experimental data suggest that we have not yet reached the end of the need to generalize, because particle increments show statistical dependency beyond linear correlation and over many time steps. The goal of this work is to better understand the validity regions of commonly made assumptions. We investigate after what transport distances we can observe: (1) a statistical dependence between increments that can be modelled as an order-k Markov process boiling down to order 1, which would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks would start; (2) a bivariate statistical dependence that simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW); and (3) a complete absence of statistical dependence (validity of classical PTRW/CTRW). The approach is to derive a statistical model for pore-scale transport from a powerful experimental data set via copula analysis. The model is formulated as a non-Gaussian, mutually dependent Markov process of higher order, which allows us to investigate the validity ranges of simpler models.
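
    A small sketch of the kind of diagnostic described: check how far beyond lag 1 the increments stay statistically dependent, using rank correlation of the increments and of their magnitudes as a crude stand-in for the full copula analysis. The data below are synthetic, and the `spearman` helper is defined within the sketch.

    ```python
    import numpy as np

    def spearman(a, b):
        # rank correlation via Pearson on ranks (no ties expected for continuous data)
        ra = np.argsort(np.argsort(a))
        rb = np.argsort(np.argsort(b))
        return np.corrcoef(ra, rb)[0, 1]

    rng = np.random.default_rng(2)
    # synthetic increments: persistent magnitudes (AR-like), random signs
    mag = np.abs(np.convolve(rng.normal(size=5000), [0.7, 0.3], mode="same"))
    dx = mag * rng.choice([-1.0, 1.0], size=mag.size)

    for lag in range(1, 6):
        r_lin = spearman(dx[:-lag], dx[lag:])                  # linear-type dependence
        r_mag = spearman(np.abs(dx[:-lag]), np.abs(dx[lag:]))  # dependence beyond linear
        print(f"lag {lag}: increments {r_lin:+.3f}, magnitudes {r_mag:+.3f}")
    ```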

  12. A multigenerational effect of parental age on offspring size but not fitness in common duckweed (Lemna minor).

    PubMed

    Barks, P M; Laird, R A

    2016-04-01

    Classic theories on the evolution of senescence make the simplifying assumption that all offspring are of equal quality, so that demographic senescence only manifests through declining rates of survival or fecundity. However, there is now evidence that, in addition to declining rates of survival and fecundity, many organisms are subject to age-related declines in the quality of offspring produced (i.e. parental age effects). Recent modelling approaches allow for the incorporation of parental age effects into classic demographic analyses, assuming that such effects are limited to a single generation. Does this 'single-generation' assumption hold? To find out, we conducted a laboratory study with the aquatic plant Lemna minor, a species for which parental age effects have been demonstrated previously. We compared the size and fitness of 423 laboratory-cultured plants (asexually derived ramets) representing various birth orders, and ancestral 'birth-order genealogies'. We found that offspring size and fitness both declined with increasing 'immediate' birth order (i.e. birth order with respect to the immediate parent), but only offspring size was affected by ancestral birth order. Thus, the assumption that parental age effects on offspring fitness are limited to a single generation does in fact hold for L. minor. This result will guide theorists aiming to refine and generalize modelling approaches that incorporate parental age effects into evolutionary theory on senescence. © 2016 European Society For Evolutionary Biology.

  13. Simplified power processing for ion-thruster subsystems

    NASA Technical Reports Server (NTRS)

    Wessel, F. J.; Hancock, D. J.

    1983-01-01

    A design for a greatly simplified power-processing unit (SPPU) for the 8-cm diameter mercury-ion-thruster subsystem is discussed. This SPPU design will provide a tenfold reduction in parts count, a decrease in system mass and cost, and an increase in system reliability compared to the existing power-processing unit (PPU) used in the Hughes/NASA Lewis Research Center Ion Auxiliary Propulsion Subsystem. The simplifications achieved in this design will greatly increase the attractiveness of ion propulsion in near-term and future spacecraft propulsion applications. A description of a typical ion-thruster subsystem is given, along with an overview of the thruster/power-processor interface requirements, and simplified thruster power processing is discussed.

  14. Evaluation of a simplified gross thrust calculation method for a J85-21 afterburning turbojet engine in an altitude facility

    NASA Technical Reports Server (NTRS)

    Baer-Riedhart, J. L.

    1982-01-01

    A simplified gross thrust calculation method was evaluated on its ability to predict the gross thrust of a modified J85-21 engine. The method uses tailpipe pressure data and ambient pressure data to predict the gross thrust; its algorithm is based on a one-dimensional analysis of the flow in the afterburner and nozzle. The test results showed that the method was notably accurate over the engine operating envelope, using the altitude-facility measured thrust for comparison. A summary of these results, the simplified gross thrust method and its requirements, and the test techniques used are discussed in this paper.
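
    For flavour, the sketch below computes an ideal one-dimensional gross-thrust estimate for a choked convergent nozzle from total (tailpipe) and ambient pressures, the same flavour of 1D analysis the method is built on, though not the calibrated flight-test algorithm itself. All values are illustrative.

    ```python
    import numpy as np

    gam, R = 1.33, 287.0               # hot-gas specific-heat ratio, gas constant (J/kg/K)

    def gross_thrust(Pt, Tt, Pa, Ae):
        """Pt, Pa: total/ambient pressure (Pa); Tt: total temperature (K);
        Ae: exit area (m^2). Assumes a choked convergent nozzle."""
        Pe = Pt * (2.0 / (gam + 1.0)) ** (gam / (gam - 1.0))  # sonic exit static pressure
        Te = Tt * 2.0 / (gam + 1.0)                           # sonic exit static temperature
        Ve = np.sqrt(gam * R * Te)                            # sonic exit velocity
        mdot = Pe / (R * Te) * Ve * Ae                        # rho_e * V_e * A_e
        return mdot * Ve + (Pe - Pa) * Ae                     # momentum + pressure thrust

    print(gross_thrust(Pt=3.0e5, Tt=1800.0, Pa=1.0e5, Ae=0.05))  # thrust in newtons
    ```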

  15. Green Net Regional Product for the San Luis Basin, Colorado: an economic measure of regional sustainability.

    PubMed

    Heberling, Matthew T; Templeton, Joshua J; Wu, Shanshan

    2012-11-30

    This paper presents the data sources and methodology used to estimate Green Net Regional Product (GNRP), a green accounting approach, for the San Luis Basin (SLB). We measured the movement away from sustainability by examining the change in GNRP over time. Any attempt at green accounting requires both economic and natural capital data. However, the limited data for the Basin require a number of simplifying assumptions and the transformation of economic data at the national, state, and county levels to the level of the SLB. Given the contribution of agribusiness to the SLB, we included the depletion of both groundwater and soil as components in the depreciation of natural capital. We also captured the effect of the consumption of energy on climate change for future generations through carbon dioxide (CO(2)) emissions. In order to estimate the depreciation of natural capital, the shadow price of water for agriculture, the economic damages from soil erosion due to wind, and the social cost of carbon emissions were obtained from the literature and applied to the SLB using benefit transfer. We used Colorado's total factor productivity for agriculture to estimate the value of time (i.e., to include the effects of exogenous technological progress). We aggregated the economic data and the depreciation of natural capital for the SLB from 1980 to 2005. The results suggest that GNRP had a slight upward trend through most of this time period, despite temporary negative trends, the longest of which occurred during the period 1985-86 to 1987-88. However, given the upward trend in GNRP and the possibility of business cycles causing the temporary declines, there is no definitive evidence of moving away from sustainability. Published by Elsevier Ltd.
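
    The accounting identity at the core of the approach reduces to simple arithmetic, sketched below with invented numbers: GNRP equals the conventional net regional product plus the value of time, minus the depreciation of natural capital (groundwater depletion, soil-erosion damages, and CO2 damages).

    ```python
    # all figures invented for illustration ($M, one year)
    nrp = 1200.0                     # conventional net regional product
    value_of_time = 15.0             # exogenous technological progress term
    groundwater_depletion = 8.0      # shadow price of water x over-extraction
    soil_erosion_damage = 3.5        # wind-erosion damages via benefit transfer
    co2_damage = 6.0                 # social cost of carbon x emissions

    gnrp = nrp + value_of_time - (groundwater_depletion + soil_erosion_damage + co2_damage)
    print(f"GNRP = {gnrp:.1f} $M")   # a sustained decline over years would signal
                                     # movement away from sustainability
    ```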

  16. On the evolution of misunderstandings about evolutionary psychology.

    PubMed

    Young, J; Persell, R

    2000-04-01

    Some of the controversy surrounding evolutionary explanations of human behavior may be due to cognitive information-processing patterns that are themselves the result of evolutionary processes. Two such patterns are (1) the tendency to oversimplify information so as to reduce demand on cognitive resources and (2) our strong desire to generate predictability and stability from perceptions of the external world. For example, research on social stereotyping has found that people tend to focus automatically on simplified social-categorical information, to use such information when deciding how to behave, and to rely on such information even in the face of contradictory evidence. Similarly, an undying debate over nature vs. nurture is shaped by various data-reduction strategies that frequently oversimplify, and thus distort, the intent of the supporting arguments. This debate is also often marked by an assumption that either the nature or the nurture domain may be justifiably excluded at an explanatory level because one domain appears to operate in a sufficiently stable and predictable way for a particular argument. As a result, critiques inveighed against evolutionary explanations of behavior often incorporate simplified--and erroneous--assumptions about either the mechanics of how evolution operates or the inevitable implications of evolution for understanding human behavior. The influences of these tendencies are applied to a discussion of the heritability of behavioral characteristics. It is suggested that the common view that Mendelian genetics can explain the heritability of complex behaviors, with a one-gene-one-trait process, is misguided. Complex behaviors are undoubtedly a product of a more complex interaction between genes and environment, ensuring that both nature and nurture must be accommodated in a yet-to-be-developed post-Mendelian model of genetic influence. As a result, current public perceptions of evolutionary explanations of behavior are handicapped by the lack of clear articulation of the relationship between inherited genes and manifest behavior.

  17. Low Thrust Cis-Lunar Transfers Using a 40 kW-Class Solar Electric Propulsion Spacecraft

    NASA Technical Reports Server (NTRS)

    Mcguire, Melissa L.; Burke, Laura M.; Mccarty, Steven L.; Hack, Kurt J.; Whitley, Ryan J.; Davis, Diane C.; Ocampo, Cesar

    2017-01-01

    This paper captures trajectory analysis of a representative low-thrust, high-power Solar Electric Propulsion (SEP) vehicle used to move a mass around cis-lunar space, with 20 to 40 kW of power to the Electric Propulsion (EP) system. These cis-lunar transfers depart from a selected Near Rectilinear Halo Orbit (NRHO) and target other cis-lunar orbits. The NRHO cannot be characterized in the classical two-body dynamics more familiar in the human spaceflight community, and the use of low-thrust orbit transfers presents unique analysis challenges. Among the target orbit destinations documented in this paper are transfers between a Southern and a Northern NRHO, transfers between the NRHO and a Distant Retrograde Orbit (DRO), and a transfer between the NRHO and two different Earth-Moon Lagrange Point 2 (EML2) halo orbits. Because many different NRHOs and EML2 halo orbits exist, simplifying assumptions rely on previous analysis of orbits that meet current abort and communication requirements for human mission planning. The sensitivities of these low-thrust transfers to EP system power are investigated, along with the impact of the thrust-to-weight ratio of these low-thrust SEP systems on the ability to transit between these unique orbits.

  18. Evaluation of a Stochastic Inactivation Model for Heat-Activated Spores of Bacillus spp.

    PubMed Central

    Corradini, Maria G.; Normand, Mark D.; Eisenberg, Murray; Peleg, Micha

    2010-01-01

    Heat activates the dormant spores of certain Bacillus spp., which is reflected in the “activation shoulder” in their survival curves. At the same time, heat also inactivates the already active and just activated spores, as well as those still dormant. A stochastic model based on progressively changing probabilities of activation and inactivation can describe this phenomenon. The model is presented in a fully probabilistic discrete form for individual and small groups of spores and as a semicontinuous deterministic model for large spore populations. The same underlying algorithm applies to both isothermal and dynamic heat treatments. Its construction does not require the assumption of the activation and inactivation kinetics or knowledge of their biophysical and biochemical mechanisms. A simplified version of the semicontinuous model was used to simulate survival curves with the activation shoulder that are reminiscent of experimental curves reported in the literature. The model is not intended to replace current models to predict dynamic inactivation but only to offer a conceptual alternative to their interpretation. Nevertheless, by linking the survival curve's shape to probabilities of events at the individual spore level, the model explains, and can be used to simulate, the irregular activation and survival patterns of individual and small groups of spores, which might be involved in food poisoning and spoilage. PMID:20453137
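
    A minimal sketch of the discrete stochastic scheme: at each step a dormant spore activates with probability Pa, and any live spore is inactivated with probability Pd that grows as heating proceeds. The probability functions below are invented placeholders, but the simulation reproduces the rise-then-fall "activation shoulder" the abstract describes.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    DORMANT, ACTIVE, DEAD = 0, 1, 2

    def simulate(n_spores=10000, n_steps=120):
        state = np.full(n_spores, DORMANT)
        active_fraction = []
        for k in range(n_steps):
            Pa = 0.05                        # activation probability per step (placeholder)
            Pd = 0.001 * k                   # inactivation probability grows with heating
            activate = (state == DORMANT) & (rng.random(n_spores) < Pa)
            state[activate] = ACTIVE
            die = (state != DEAD) & (rng.random(n_spores) < Pd)
            state[die] = DEAD
            active_fraction.append(np.mean(state == ACTIVE))
        return np.array(active_fraction)     # rises, then falls: the activation shoulder

    curve = simulate()
    print(curve.argmax(), curve.max())       # time step and height of the shoulder
    ```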

  19. Development of mapped stress-field boundary conditions based on a Hill-type muscle model.

    PubMed

    Cardiff, P; Karač, A; FitzPatrick, D; Flavin, R; Ivanković, A

    2014-09-01

    Forces generated in the muscles and tendons actuate the movement of the skeleton. Accurate estimation and application of these musculotendon forces in a continuum model is not a trivial matter. Frequently, musculotendon attachments are approximated as point forces; however, accurate estimation of local mechanics requires a more realistic application of musculotendon forces. This paper describes the development of mapped Hill-type muscle models as boundary conditions for a finite volume model of the hip joint, where the calculated muscle fibres map continuously between attachment sites. The applied muscle forces are calculated using active Hill-type models, where input electromyography signals are determined from gait analysis. Realistic muscle attachment sites are determined directly from tomography images. The mapped muscle boundary conditions, implemented in a finite volume structural OpenFOAM (ESI-OpenCFD, Bracknell, UK) solver, are employed to simulate the mid-stance phase of gait using a patient-specific natural hip joint, and a comparison is performed with the standard point load muscle approach. It is concluded that physiological joint loading is not accurately represented by simplistic muscle point loading conditions; however, when contact pressures are of sole interest, simplifying assumptions with regard to muscular forces may be valid. Copyright © 2014 John Wiley & Sons, Ltd.
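
    As a sketch of the kind of active Hill-type force used to scale such muscle boundary tractions: F = Fmax (a · fl(l) · fv(v) + fp(l)), with an active force-length curve, a force-velocity curve, and a passive term. The curve shapes and constants below are generic textbook-style placeholders, not the paper's fitted values.

    ```python
    import numpy as np

    def hill_force(a, l_norm, v_norm, Fmax=1000.0):
        """a: activation in [0, 1]; l_norm: fiber length / optimal length;
        v_norm: shortening velocity / max shortening velocity (positive = shortening)."""
        fl = np.exp(-((l_norm - 1.0) / 0.45) ** 2)                # active force-length
        if v_norm >= 0.0:
            fv = max((1.0 - v_norm) / (1.0 + 4.0 * v_norm), 0.0)  # concentric branch
        else:
            fv = 1.3                                              # eccentric plateau (placeholder)
        fp = 0.02 * (np.exp(8.0 * (l_norm - 1.0)) - 1.0) if l_norm > 1.0 else 0.0  # passive
        return Fmax * (a * fl * fv + fp)

    print(hill_force(a=0.6, l_norm=1.05, v_norm=0.2))             # force in newtons
    ```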

  20. Formation of Bipolar Lobes by Jets

    NASA Astrophysics Data System (ADS)

    Soker, Noam

    2002-04-01

    I conduct an analytical study of the interaction of jets, or a collimated fast wind (CFW), with a previously blown asymptotic giant branch (AGB) slow wind. Such jets (or CFWs) are supposedly formed when a compact companion, a main-sequence star, or a white dwarf accretes mass from the AGB star, forms an accretion disk, and blows two jets. This type of flow, which I think shapes bipolar planetary nebulae (PNs), requires three-dimensional gasdynamical simulations, which are limited in the parameter space they can cover. By imposing several simplifying assumptions, I derive simple expressions which reproduce some basic properties of lobes in bipolar PNs and which can be used to guide future numerical simulations. I quantitatively apply the results to two proto-PNs. I show that the jet interaction with the slow wind can form lobes which are narrow close to, and far away from, the central binary system, and which are wider somewhere in between. Jets that are recollimated and have constant cross section can form cylindrical lobes with constant diameter, as observed in several bipolar PNs. Close to their source, jets blown by main-sequence companions are radiative; only farther out do they become adiabatic, i.e., they form high-temperature, low-density bubbles that inflate the lobes.

  1. Morphology of blazar-induced gamma ray halos due to a helical intergalactic magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Andrew J.; Vachaspati, Tanmay, E-mail: andrewjlong@asu.edu, E-mail: tvachasp@asu.edu

    We study the characteristic size and shape of idealized blazar-induced cascade halos in the 1–100 GeV energy range assuming various non-helical and helical configurations for the intergalactic magnetic field (IGMF). While the magnetic field creates an extended halo, the helicity provides the halo with a twist. Under simplifying assumptions, we assess the parameter regimes for which it is possible to measure the size and shape of the halo from a single source and then to deduce properties of the IGMF. We find that blazar halo measurements with an experiment similar to Fermi-LAT are best suited to probe a helical magnetic field with strength and coherence length today in the ranges 10^-17 ≲ B_0/Gauss ≲ 10^-13 and 10 Mpc ≲ λ ≲ 10 Gpc, where H ~ B_0^2/λ is the magnetic helicity density. Stronger magnetic fields or smaller coherence scales can still potentially be investigated, but the connection between the halo morphology and the magnetic field properties is more involved. Weaker magnetic fields or longer coherence scales require high photon statistics or superior angular resolution.

  2. Predicting the Effects of Powder Feeding Rates on Particle Impact Conditions and Cold Spray Deposited Coatings

    NASA Astrophysics Data System (ADS)

    Ozdemir, Ozan C.; Widener, Christian A.; Carter, Michael J.; Johnson, Kyle W.

    2017-10-01

    As the industrial application of cold spray technology grows, so does the need to optimize both the cost and the quality of the process. Parameter selection techniques available today require solving a coupled system of equations to capture the losses caused by particle loading in the gas stream. Such analyses significantly increase computational time compared with calculations under isentropic flow assumptions. In cold spray operations, engineers and operators may therefore neglect the effects of particle loading to simplify the multiparameter optimization process. In this study, two-way coupled (particle-fluid) quasi-one-dimensional fluid dynamics simulations are used to test particle loading effects under many potential cold spray scenarios. Output of the simulations is statistically analyzed to build regression models that estimate the changes in particle impact velocity and temperature due to particle loading. This approach eases particle loading optimization and supports more complete analysis of deposition cost and time. The model was validated both numerically and experimentally. Further numerical analyses tested the particle loading capacity and limitations of a nozzle with a commonly used throat size, and additional experimentation helped document the physical limitations of high-rate deposition.
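
    The statistical step can be pictured as an ordinary least-squares fit of the loading-induced velocity change to process inputs. The regressor set below is assumed for illustration; the paper's regression models are built from its own simulation outputs.

```python
import numpy as np

def fit_loading_correction(feed_rate, pressure, delta_v):
    """Least-squares fit delta_v ~ b0 + b1*feed + b2*p + b3*feed*p, where
    delta_v is the impact-velocity change relative to the isentropic value."""
    X = np.column_stack([np.ones_like(feed_rate), feed_rate, pressure,
                         feed_rate * pressure])
    coeffs, *_ = np.linalg.lstsq(X, delta_v, rcond=None)
    return coeffs  # apply as: v_impact ~ v_isentropic + X @ coeffs
```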

  3. Combustion Technology for Incinerating Wastes from Air Force Industrial Processes.

    DTIC Science & Technology

    1984-02-01

    The assumption of equilibrium between environmental compartments. * The statistical extrapolations yielding "safe" doses of various constituents...would be contacted to identify the assumptions and data requirements needed to design, construct and implement the model. The model’s primary objective...Recovery Planning Model (RRPLAN) is described. This section of the paper summarizes the model’s assumptions, major components, and modes of operation

  4. Simplifying the writing process for the novice writer.

    PubMed

    Redmond, Mary Connie

    2002-10-01

    Nurses take responsibility for reading information to update their professional knowledge and to meet relicensure requirements. However, nurses are less enthusiastic about writing for professional publication. This article explores the reluctance of nurses to write, the reasons why writing for publication is important to the nursing profession, the importance of mentoring to potential writers, and basic information about simplifying the writing process for novice writers.

  5. Alternatives for discounting in the analysis of noninferiority trials.

    PubMed

    Snapinn, Steven M

    2004-05-01

    Determining the efficacy of an experimental therapy relative to placebo on the basis of an active-control noninferiority trial requires reference to historical placebo-controlled trials. The validity of the resulting comparison depends on two key assumptions: assay sensitivity and constancy. Since the truth of these assumptions cannot be verified, it seems logical to raise the standard of evidence required to declare efficacy; this concept is referred to as discounting. It is not often recognized that two common design and analysis approaches, setting a noninferiority margin and requiring preservation of a fraction of the standard therapy's effect, are forms of discounting. The noninferiority margin is a particularly poor approach, since its degree of discounting depends on an irrelevant factor. Preservation of effect is more reasonable, but it addresses only the constancy assumption, not the issue of assay sensitivity. Gaining consensus on the most appropriate approach to the design and analysis of noninferiority trials will require a common understanding of the concept of discounting.
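
    The two discounting approaches can be stated compactly. The decision rules below are a schematic sketch under a normal-approximation framing; the 50% preservation fraction is a common convention, used here only as an assumed default.

```python
def margin_approach(diff_lower_ci, margin):
    """Noninferiority margin: the lower confidence bound of the
    (new - active control) difference must exceed -margin."""
    return diff_lower_ci > -margin

def preservation_approach(diff_lower_ci, historical_effect, fraction=0.5):
    """Preservation of effect: the new therapy must retain at least
    `fraction` of the active control's historical effect over placebo,
    i.e. lower bound of (new - active) > -(1 - fraction) * effect."""
    return diff_lower_ci > -(1.0 - fraction) * historical_effect

# Example: lower CI bound -1.2 on the treatment difference, historical
# effect 4.0 (illustrative numbers only).
print(margin_approach(-1.2, margin=2.0))                 # True
print(preservation_approach(-1.2, historical_effect=4.0))  # True
```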

  6. The Power of Proofs-of-Possession: Securing Multiparty Signatures against Rogue-Key Attacks

    NASA Astrophysics Data System (ADS)

    Ristenpart, Thomas; Yilek, Scott

    Multiparty signature protocols need protection against rogue-key attacks, made possible whenever an adversary can choose its public key(s) arbitrarily. For many schemes, provable security has only been established under the knowledge of secret key (KOSK) assumption where the adversary is required to reveal the secret keys it utilizes. In practice, certifying authorities rarely require the strong proofs of knowledge of secret keys required to substantiate the KOSK assumption. Instead, proofs of possession (POPs) are required and can be as simple as just a signature over the certificate request message. We propose a general registered key model, within which we can model both the KOSK assumption and in-use POP protocols. We show that simple POP protocols yield provable security of Boldyreva's multisignature scheme [11], the LOSSW multisignature scheme [28], and a 2-user ring signature scheme due to Bender, Katz, and Morselli [10]. Our results are the first to provide formal evidence that POPs can stop rogue-key attacks.
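
    A POP of the simple kind mentioned above, a signature over the certificate request message, can be sketched in a few lines with the pyca/cryptography package; the request encoding is a placeholder.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# Registrant: generate a key pair and sign the certificate request message.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
cert_request = (b"subject=example-signer;key=" +
                public_key.public_bytes(Encoding.Raw, PublicFormat.Raw))
pop_signature = private_key.sign(cert_request)  # the proof of possession

# Certifying authority: register the key only if the POP verifies.
try:
    public_key.verify(pop_signature, cert_request)
    print("POP verified: registrant demonstrably controls the secret key")
except InvalidSignature:
    print("POP rejected: possible rogue key")
```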

  7. Panel Absorber

    NASA Astrophysics Data System (ADS)

    MECHEL, F. P.

    2001-11-01

    A plane wave is incident on a simply supported elastic plate covering a back volume; the arrangement is surrounded by a hard baffle wall. The plate may be porous with a flow friction resistance; the back volume may be filled either with air or with a porous material. The back volume may be bulk reacting (i.e., with sound propagation parallel to the plate) or locally reacting. Since this arrangement is of some importance in room acoustics, Cremer in his book about room acoustics [1] presented an approximate analysis. However, Cremer's analysis uses a number of assumptions which make his solution, by his own estimate, unsuited for low frequencies, where, on the other hand, the arrangement is mainly applied. This paper presents a sound field description based on modal analysis. It is applicable not only in the far field but also near the absorber. Further, approximate solutions are derived, based on simplifying assumptions similar to those Cremer used. The modal analysis solution is of interest not only as a reference for approximations but also for practical applications, because computing time is becoming less and less of a concern (the 3D plots of the sound field presented below were evaluated with modal analysis in about 6 s).
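
    As a flavor of the ingredients entering such a modal analysis, the sketch below evaluates the natural frequencies of a simply supported rectangular plate; the plate dimensions and material constants are assumed for illustration.

```python
import math

def plate_mode_frequency(m, n, a, b, h, E, rho, nu):
    """Natural frequency [Hz] of mode (m, n) of a simply supported plate:
    f_mn = (pi/2) * sqrt(D / (rho*h)) * ((m/a)^2 + (n/b)^2)."""
    D = E * h**3 / (12.0 * (1.0 - nu**2))  # bending stiffness
    return (math.pi / 2.0) * math.sqrt(D / (rho * h)) * ((m / a)**2 + (n / b)**2)

# Illustrative 1.0 m x 0.5 m, 3 mm panel with plywood-like properties.
for mode in ((1, 1), (1, 2), (2, 1)):
    f = plate_mode_frequency(*mode, a=1.0, b=0.5, h=0.003,
                             E=7e9, rho=600.0, nu=0.3)
    print(mode, f"{f:.1f} Hz")  # first modes fall in the low-frequency range
```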

  8. Farms, Families, and Markets: New Evidence on Completeness of Markets in Agricultural Settings

    PubMed Central

    LaFave, Daniel; Thomas, Duncan

    2016-01-01

    The farm household model has played a central role in improving the understanding of small-scale agricultural households and non-farm enterprises. Under the assumptions that all current and future markets exist and that farmers treat all prices as given, the model simplifies households’ simultaneous production and consumption decisions into a recursive form in which production can be treated as independent of preferences of household members. These assumptions, which are the foundation of a large literature in labor and development, have been tested and not rejected in several important studies including Benjamin (1992). Using multiple waves of longitudinal survey data from Central Java, Indonesia, this paper tests a key prediction of the recursive model: demand for farm labor is unrelated to the demographic composition of the farm household. The prediction is unambiguously rejected. The rejection cannot be explained by contamination due to unobserved heterogeneity that is fixed at the farm level, local area shocks or farm-specific shocks that affect changes in household composition and farm labor demand. We conclude that the recursive form of the farm household model is not consistent with the data. Developing empirically tractable models of farm households when markets are incomplete remains an important challenge. PMID:27688430

  9. Inferences about unobserved causes in human contingency learning.

    PubMed

    Hagmayer, York; Waldmann, Michael R

    2007-03-01

    Estimates of the causal efficacy of an event need to take into account the possible presence and influence of other unobserved causes that might have contributed to the occurrence of the effect. Current theoretical approaches deal differently with this problem. Associative theories assume that at least one unobserved cause is always present. In contrast, causal Bayes net theories (including Power PC theory) hypothesize that unobserved causes may be present or absent. These theories generally assume independence of different causes of the same event, which greatly simplifies modelling learning and inference. In two experiments participants were requested to learn about the causal relation between a single cause and an effect by observing their co-occurrence (Experiment 1) or by actively intervening in the cause (Experiment 2). Participants' assumptions about the presence of an unobserved cause were assessed either after each learning trial or at the end of the learning phase. The results show an interesting dissociation. Whereas there was a tendency to assume interdependence of the causes in the online judgements during learning, the final judgements tended to be more in the direction of an independence assumption. Possible explanations and implications of these findings are discussed.

  10. The evolutionary interplay of intergroup conflict and altruism in humans: a review of parochial altruism theory and prospects for its extension

    PubMed Central

    Rusch, Hannes

    2014-01-01

    Drawing on an idea proposed by Darwin, it has recently been hypothesized that violent intergroup conflict might have played a substantial role in the evolution of human cooperativeness and altruism. The central notion of this argument, dubbed ‘parochial altruism’, is that the two genetic or cultural traits, aggressiveness against the out-groups and cooperativeness towards the in-group, including self-sacrificial altruistic behaviour, might have coevolved in humans. This review assesses the explanatory power of current theories of ‘parochial altruism’. After a brief synopsis of the existing literature, two pitfalls in the interpretation of the most widely used models are discussed: potential direct benefits and high relatedness between group members implicitly induced by assumptions about conflict structure and frequency. Then, a number of simplifying assumptions made in the construction of these models are pointed out which currently limit their explanatory power. Next, relevant empirical evidence from several disciplines which could guide future theoretical extensions is reviewed. Finally, selected alternative accounts of evolutionary links between intergroup conflict and intragroup cooperation are briefly discussed which could be integrated with parochial altruism in the future. PMID:25253457

  11. Direct numerical simulation of leaky dielectrics with application to electrohydrodynamic atomization

    NASA Astrophysics Data System (ADS)

    Owkes, Mark; Desjardins, Olivier

    2013-11-01

    Electrohydrodynamics (EHD) has the potential to greatly enhance liquid break-up, as demonstrated in numerical simulations by Van Poppel et al. (JCP (229) 2010). In liquid-gas EHD flows, the ratio of charge mobility to charge convection timescales can be used to determine whether the charge can be assumed to exist in the bulk of the liquid or at the surface only. However, for EHD-aided fuel injection applications, these timescales are of similar magnitude and charge mobility within the fluid might need to be accounted for explicitly. In this work, a computational approach for simulating two-phase EHD flows including the charge transport equation is presented. Under certain assumptions compatible with a leaky dielectric model, charge transport simplifies to a scalar transport equation that is only defined in the liquid phase, where electric charges are present. To ensure consistency with interfacial transport, the charge equation is solved using a semi-Lagrangian geometric transport approach, similar to the method proposed by Le Chenadec and Pitsch (JCP (233) 2013). This methodology is then applied to EHD atomization of a liquid kerosene jet, and compared to results produced under the assumption of a bulk volumetric charge.
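
    A one-dimensional semi-Lagrangian step illustrates the transport idea in miniature; this is a conceptual stand-in, not the geometric scheme of Le Chenadec and Pitsch, and the grid and velocity are arbitrary.

```python
import numpy as np

def semi_lagrangian_step(q, u, dx, dt):
    """Advect scalar q on a periodic 1D grid: trace each node back along
    the velocity u and linearly interpolate at the departure point."""
    n = q.size
    x = np.arange(n) * dx
    x_dep = (x - u * dt) % (n * dx)              # departure points (periodic)
    i = np.floor(x_dep / dx).astype(int) % n
    w = x_dep / dx - np.floor(x_dep / dx)        # linear interpolation weights
    return (1.0 - w) * q[i] + w * q[(i + 1) % n]

x = np.linspace(0.0, 1.0, 200, endpoint=False)
q = np.exp(-((x - 0.3) ** 2) / 0.002)            # initial charge "blob"
for _ in range(100):
    q = semi_lagrangian_step(q, u=0.5, dx=1.0 / 200, dt=0.005)
```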

  12. Normality of Residuals Is a Continuous Variable, and Does Seem to Influence the Trustworthiness of Confidence Intervals: A Response to, and Appreciation of, Williams, Grajales, and Kurkiewicz (2013)

    ERIC Educational Resources Information Center

    Osborne, Jason W.

    2013-01-01

    Osborne and Waters (2002) focused on checking some of the assumptions of multiple linear regression. In a critique of that paper, Williams, Grajales, and Kurkiewicz correctly clarify that regression models estimated using ordinary least squares require the assumption of normally distributed errors, but not the assumption of normally distributed…

  13. Improving the Representation of Snow Crystal Properties with a Single-Moment Microphysics Scheme

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew L.; Petersen, Walter A.; Case, Jonathan L.; Demek, Scott R.

    2010-01-01

    Single-moment microphysics schemes are utilized in an increasing number of applications and are widely available within numerical modeling packages, often executed in near real-time to aid in the issuance of weather forecasts and advisories. In order to simulate cloud microphysical and precipitation processes, a number of assumptions are made within these schemes. Snow crystals are often assumed to be spherical and of uniform density, and their size distribution intercept may be fixed to simplify calculation of the remaining parameters. Recently, the Canadian CloudSat/CALIPSO Validation Project (C3VP) provided aircraft observations of snow crystal size distributions and environmental state variables, sampling widespread snowfall associated with a passing extratropical cyclone on 22 January 2007. Aircraft instrumentation was supplemented by comparable surface estimations and sampling by two radars: the C-band, dual-polarimetric radar in King City, Ontario and the NASA CloudSat 94 GHz Cloud Profiling Radar. As radar systems respond to both hydrometeor mass and size distribution, they provide value when assessing the accuracy of cloud characteristics as simulated by a forecast model. However, simulation of the 94 GHz radar signal requires special attention, as radar backscatter is sensitive to the assumed crystal shape. Observations obtained during the 22 January 2007 event are used to validate assumptions of density and size distribution within the NASA Goddard six-class single-moment microphysics scheme. Two high resolution forecasts are performed on a 9-3-1 km grid, with C3VP-based alternative parameterizations incorporated and examined for improvement. In order to apply the CloudSat 94 GHz radar to model validation, the single-scattering characteristics of various crystal types are used; these demonstrate that the assumption of Mie spheres is insufficient for representing CloudSat reflectivity derived from winter precipitation. Furthermore, snow density and size distribution characteristics are allowed to vary with height, based upon direct aircraft estimates obtained from C3VP data. These combinations improve the representation of modeled clouds versus their radar-observed counterparts, based on profiles and vertical distributions of reflectivity. These meteorological events are commonplace within the mid-latitude cold season and present a challenge to operational forecasters. This study focuses on one event, likely representative of others during the winter season, and aims to improve the representation of snow for use in future operational forecasts.
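
    The fixed-intercept assumption discussed above can be made concrete: with an exponential size distribution and a fixed intercept, the slope parameter follows from the snow mass mixing ratio alone. The intercept and snow density values below are generic single-moment defaults, assumed here for illustration.

```python
import math

RHO_SNOW = 100.0   # assumed constant bulk snow density, kg/m^3
N0 = 3.0e6         # assumed fixed intercept of N(D) = N0*exp(-lam*D), m^-4

def psd_slope(q_snow, rho_air=1.0):
    """Slope lam [1/m] from the mass closure for spherical snow:
    q_snow * rho_air = pi * rho_s * N0 / lam^4 (exponential PSD)."""
    return (math.pi * RHO_SNOW * N0 / (q_snow * rho_air)) ** 0.25

lam = psd_slope(q_snow=0.5e-3)     # 0.5 g/kg snow mixing ratio
print(f"slope = {lam:.0f} 1/m, mean diameter = {1e3 / lam:.2f} mm")
```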

  14. Multithreaded Model for Dynamic Load Balancing Parallel Adaptive PDE Computations

    NASA Technical Reports Server (NTRS)

    Chrisochoides, Nikos

    1995-01-01

    We present a multithreaded model for the dynamic load-balancing of numerical, adaptive computations required for the solution of Partial Differential Equations (PDE's) on multiprocessors. Multithreading is used as a means of exploring concurrency at the processor level in order to tolerate synchronization costs inherent to traditional (non-threaded) parallel adaptive PDE solvers. Our preliminary analysis for parallel, adaptive PDE solvers indicates that multithreading can be used as a mechanism to mask overheads required for the dynamic balancing of processor workloads with computations required for the actual numerical solution of the PDE's. Also, multithreading can simplify the implementation of dynamic load-balancing algorithms, a task that is very difficult for traditional data parallel adaptive PDE computations. Unfortunately, multithreading does not always reduce program complexity: it can hinder code reuse and increase software complexity.
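
    The overhead-masking idea can be illustrated with a shared work queue: idle threads immediately pull the next patch, so load balancing overlaps the numerical work. This is a conceptual sketch, not the paper's runtime system.

```python
import queue
import threading

def worker(tasks, results, lock):
    """Pull patches until the queue is empty; the sum stands in for a solve."""
    while True:
        try:
            patch = tasks.get_nowait()
        except queue.Empty:
            return
        value = sum(x * x for x in patch)  # stand-in for numerical work
        with lock:
            results.append(value)

tasks = queue.Queue()
for size in (1000, 50, 2000, 10, 800):     # uneven adaptive workloads
    tasks.put(range(size))

results, lock = [], threading.Lock()
threads = [threading.Thread(target=worker, args=(tasks, results, lock))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results), "patches processed")
```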

  15. A benchmark initiative on mantle convection with melting and melt segregation

    NASA Astrophysics Data System (ADS)

    Schmeling, Harro; Dannberg, Juliane; Dohmen, Janik; Kalousova, Klara; Maurice, Maxim; Noack, Lena; Plesa, Ana; Soucek, Ondrej; Spiegelman, Marc; Thieulot, Cedric; Tosi, Nicola; Wallner, Herbert

    2016-04-01

    In recent years a number of mantle convection models have been developed which include partial melting within the asthenosphere, estimation of melt volumes, as well as melt extraction with and without redistribution at the surface or within the lithosphere. All these approaches use various simplifying modelling assumptions whose effects on the dynamics of convection including the feedback on melting have not been explored in sufficient detail. To better assess the significance of such assumptions and to provide test cases for the modelling community we carry out a benchmark comparison. The reference model is taken from the mantle convection benchmark, cases 1a to 1c (Blankenbach et al., 1989), assuming a square box with free slip boundary conditions, the Boussinesq approximation, constant viscosity and Rayleigh numbers of 10^4 to 10^6. Melting is modelled using a simplified binary solid solution with linearly depth-dependent solidus and liquidus temperatures, as well as a solidus temperature depending linearly on depletion. Starting from a plume-free initial temperature condition (to avoid melting at the onset time) five cases are investigated: Case 1 includes melting, but without thermal or dynamic feedback on the convection flow. This case provides a total melt generation rate (qm) in a steady state. Case 2 is identical to case 1 except that latent heat is switched on. Case 3 includes batch melting, melt buoyancy (melt Rayleigh number Rm) and depletion buoyancy, but no melt percolation. Output quantities are the Nusselt number (Nu), root mean square velocity (vrms), the maximum and the total melt volume and qm approaching a statistical steady state. Case 4 includes two-phase flow, i.e. melt percolation, assuming a constant shear and bulk viscosity of the matrix and various melt retention numbers (Rt). These cases are carried out using the Compaction Boussinesq Approximation (Schmeling, 2000) or the full compaction formulation. For cases 1 - 3 very good agreement is achieved among the various participating codes. For case 4 melting/freezing formulations require some attention to avoid sub-solidus melt fractions. A case 5 is planned where all melt will be extracted and reinserted in a shallow region above the melted plume. The motivation of this presentation is to summarize first experiences and to finalize the case definitions. References: Blankenbach, B., Busse, F., Christensen, U., Cserepes, L., Gunkel, D., Hansen, U., Harder, H., Jarvis, G., Koch, M., Marquart, G., Moore, D., Olson, P., and Schmeling, H., 1989: A benchmark comparison for mantle convection codes, J. Geophys., 98, 23-38. Schmeling, H., 2000: Partial melting and melt segregation in a convecting mantle. In: Physics and Chemistry of Partially Molten Rocks, eds. N. Bagdassarov, D. Laporte, and A.B. Thompson, Kluwer Academic Publ., Dordrecht, pp. 141 - 178.
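
    The case definitions hinge on the melting parameterization. Below is a sketch of a batch melt fraction from a linear binary solid solution, with the clipping that avoids the sub-solidus melt fractions mentioned for case 4; all coefficients are placeholders, not the agreed benchmark values.

```python
def melt_fraction(T, z, F, t_sol0=1400.0, t_liq0=1600.0, dTdz=2.0, dTdF=200.0):
    """Batch melt fraction [0, 1] for a binary solid solution with linearly
    depth-dependent solidus/liquidus (z in km, T in K); depletion F raises
    the solidus. Coefficients are illustrative assumptions."""
    t_sol = t_sol0 + dTdz * z + dTdF * F   # depletion shifts the solidus up
    t_liq = t_liq0 + dTdz * z
    phi = (T - t_sol) / (t_liq - t_sol)
    return min(1.0, max(0.0, phi))         # clip to avoid sub-solidus melt

print(melt_fraction(T=1550.0, z=30.0, F=0.1))
```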

  16. Spontaneously Broken Neutral Symmetry in an Ecological System

    NASA Astrophysics Data System (ADS)

    Borile, C.; Muñoz, M. A.; Azaele, S.; Banavar, Jayanth R.; Maritan, A.

    2012-07-01

    Spontaneous symmetry breaking plays a fundamental role in many areas of condensed matter and particle physics. A fundamental problem in ecology is the elucidation of the mechanisms responsible for biodiversity and stability. Neutral theory, which makes the simplifying assumption that all individuals (such as trees in a tropical forest)—regardless of the species they belong to—have the same prospect of reproduction, death, etc., yields gross patterns that are in accord with empirical data. We explore the possibility of birth and death rates that depend on the population density of species, treating the dynamics in a species-symmetric manner. We demonstrate that dynamical evolution can lead to a stationary state characterized simultaneously by both biodiversity and spontaneously broken neutral symmetry.

  17. Research study on high energy radiation effect and environment solar cell degradation methods

    NASA Technical Reports Server (NTRS)

    Horne, W. E.; Wilkinson, M. C.

    1974-01-01

    The most detailed and comprehensively verified analytical model was used to evaluate the effects of simplifying assumptions on the accuracy of predictions made by the external damage coefficient method. It was found that the most serious discrepancies were present in heavily damaged cells, particularly proton damaged cells, in which a gradient in damage across the cell existed. In general, it was found that the current damage coefficient method tends to underestimate damage at high fluences. An exception to this rule was thick cover-slipped cells experiencing heavy degradation due to omnidirectional electrons. In such cases, the damage coefficient method overestimates the damage. Comparisons of degradation predictions made by the two methods and measured flight data confirmed the above findings.

  18. A methodology to select a wire insulation for use in habitable spacecraft.

    PubMed

    Paulos, T; Apostolakis, G

    1998-08-01

    This paper investigates electrical overheating events aboard a habitable spacecraft. The wire insulation involved in these failures plays a major role in the entire event scenario from threat development to detection and damage assessment. Ideally, if models of wire overheating events in microgravity existed, the various wire insulations under consideration could be quantitatively compared. However, these models do not exist. In this paper, a methodology is developed that can be used to select a wire insulation that is best suited for use in a habitable spacecraft. The results of this study show that, based upon the Analytic Hierarchy Process, the simplifying assumptions made, the criteria selected, and the data used in the analysis, Tefzel is better suited than Teflon for use in a habitable spacecraft.
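
    The core Analytic Hierarchy Process computation is small enough to sketch: criterion weights come from the principal eigenvector of a pairwise comparison matrix and are then applied to per-criterion scores. The matrix and scores below are hypothetical, not the paper's elicited judgments.

```python
import numpy as np

def ahp_weights(pairwise):
    """Criterion weights: normalized principal eigenvector of the
    (positive, reciprocal) pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

# Hypothetical comparisons for three criteria (e.g. flammability,
# combustion-product toxicity, arc-tracking resistance).
A = np.array([[1.0, 3.0, 5.0],
              [1.0 / 3.0, 1.0, 2.0],
              [1.0 / 5.0, 1.0 / 2.0, 1.0]])
w = ahp_weights(A)

# Hypothetical per-criterion scores for the two insulations.
scores = {"Tefzel": float(w @ np.array([0.7, 0.6, 0.8])),
          "Teflon": float(w @ np.array([0.6, 0.7, 0.5]))}
print(max(scores, key=scores.get), scores)
```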

  19. A stratospheric aerosol model with perturbations induced by the space shuttle particulate effluents

    NASA Technical Reports Server (NTRS)

    Rosen, J. M.; Hofmann, D. J.

    1977-01-01

    A one-dimensional steady-state stratospheric aerosol model is developed that considers the perturbations caused by the expected space shuttle particulate effluents. Two approaches to the basic modeling effort were taken: in one, enough simplifying assumptions were introduced that a more or less exact solution to the descriptive equations could be obtained; in the other, very few simplifications were made and a computer technique was used to solve the equations. The most complex form of the model contains the effects of sedimentation, diffusion, particle growth, and coagulation. Results of the perturbation calculations show that there will probably be an immeasurably small increase in the stratospheric aerosol concentration for particles larger than about 0.15 micrometer radius.
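
    The simplified branch of such a model can be pictured as a steady one-dimensional balance of eddy diffusion and sedimentation with a source term, discretized and solved directly. The profile, coefficients, and source below are placeholders, not the paper's stratospheric inputs.

```python
import numpy as np

def steady_profile(nz=100, dz=500.0, K=1.0, v_s=-0.01, n_bot=1.0, n_top=0.0):
    """Solve d/dz(K dn/dz + v_s n) = -S(z) on a uniform grid with Dirichlet
    boundaries; v_s < 0 means settling toward lower z (z increases upward)."""
    S = np.zeros(nz)
    S[nz // 2] = 1.0e-6                        # localized injection layer
    A = np.zeros((nz, nz))
    b = -S.copy()
    A[0, 0], b[0] = 1.0, n_bot                 # boundary values
    A[-1, -1], b[-1] = 1.0, n_top
    for i in range(1, nz - 1):                 # central differences
        A[i, i - 1] = K / dz**2 - v_s / (2.0 * dz)
        A[i, i] = -2.0 * K / dz**2
        A[i, i + 1] = K / dz**2 + v_s / (2.0 * dz)
    return np.linalg.solve(A, b)

n = steady_profile()
```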

  20. A cross-diffusion system derived from a Fokker-Planck equation with partial averaging

    NASA Astrophysics Data System (ADS)

    Jüngel, Ansgar; Zamponi, Nicola

    2017-02-01

    A cross-diffusion system for two components with a Laplacian structure is analyzed on the multi-dimensional torus. This system, which was recently suggested by P.-L. Lions, is formally derived from a Fokker-Planck equation for the probability density associated with a multi-dimensional Itō process, assuming that the diffusion coefficients depend on partial averages of the probability density with exponential weights. A main feature is that the diffusion matrix of the limiting cross-diffusion system is generally neither symmetric nor positive definite, but its structure allows for the use of entropy methods. The global-in-time existence of positive weak solutions is proved and, under a simplifying assumption, the large-time asymptotics is investigated.
