Statistical Issues for Calculating Reentry Hazards
NASA Technical Reports Server (NTRS)
Matney, Mark; Bacon, John
2016-01-01
A number of statistical tools have been developed over the years for assessing the risk of reentering objects to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. This information, combined with information on the expected ground path of the reentry, is used to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of this analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper builds on previous IAASS work to re-examine many of these theoretical assumptions, examining the mathematical basis for the hazard calculations and outlining the conditions under which the simplifying assumptions hold. This study also employs empirical and theoretical information to test these assumptions, and makes recommendations on how to improve the accuracy of these calculations in the future.
Analyses of School Commuting Data for Exposure Modeling Purposes
Human exposure models often make the simplifying assumption that school children attend school in the same Census tract where they live. This paper analyzes that assumption and provides information on the temporal and spatial distributions associated with school commuting. The d...
Stirling Engine External Heat System Design with Heat Pipe Heater.
1986-07-01
(Excerpt following Figure 10.) The evaporator analysis is greatly simplified by making the conservative assumption of constant heat flux. [The remainder of the excerpt is fragments of the report's cold-start data and nomenclature: ROM = density of the metal (g/cm³); CAPM = specific heat of the metal (cal/(g·K)); ETHG = effective gauze thickness (entry truncated).]
Longitudinal stability in relation to the use of an automatic pilot
NASA Technical Reports Server (NTRS)
Klemin, Alexander; Pepper, Perry A; Wittner, Howard A
1938-01-01
The effect of restraint in pitching introduced by an automatic pilot upon the longitudinal stability of an airplane has been studied. Customary simplifying assumptions have been made in setting down the equations of motion, and the results of computations based on the simplified equations are presented to show the effect of an automatic pilot installed in an airplane of known dimensions and characteristics. The equations developed have been applied by making calculations for a Clark biplane and a Fairchild 22 monoplane.
A Mass Tracking Formulation for Bubbles in Incompressible Flow
2012-10-14
...using the ideas from [19] to couple together incompressible flow with fully nonlinear compressible flow, including the effects of shocks and rarefactions, and then subsequently making a number of simplifying assumptions on the air flow...
Relating color working memory and color perception.
Allred, Sarah R; Flombaum, Jonathan I
2014-11-01
Color is the most frequently studied feature in visual working memory (VWM). Oddly, much of this work de-emphasizes perception, instead making simplifying assumptions about the inputs served to memory. We question these assumptions in light of perception research, and we identify important points of contact between perception and working memory in the case of color. Better characterization of its perceptual inputs will be crucial for elucidating the structure and function of VWM. Copyright © 2014 Elsevier Ltd. All rights reserved.
BASEFLOW SEPARATION BASED ON ANALYTICAL SOLUTIONS OF THE BOUSSINESQ EQUATION. (R824995)
A technique for baseflow separation is presented based on similarity solutions of the Boussinesq equation. The method makes use of the simplifying assumptions that a horizontal impermeable layer underlies a Dupuit aquifer which is drained by a fully penetratin...
The Role of Semantic Clustering in Optimal Memory Foraging
ERIC Educational Resources Information Center
Montez, Priscilla; Thompson, Graham; Kello, Christopher T.
2015-01-01
Recent studies of semantic memory have investigated two theories of optimal search adopted from the animal foraging literature: Lévy flights and marginal value theorem. Each theory makes different simplifying assumptions and addresses different findings in search behaviors. In this study, an experiment is conducted to test whether clustering in…
Statistical Issues for Calculating Reentry Hazards
NASA Technical Reports Server (NTRS)
Bacon, John B.; Matney, Mark
2016-01-01
A number of statistical tools have been developed over the years for assessing the risk of reentering objects to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. This information, combined with information on the expected ground path of the reentry, is used to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of this analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper builds on previous IAASS work to re-examine one of these theoretical assumptions. This study employs empirical and theoretical information to test the assumption of a fully random decay along the argument of latitude of the final orbit, and makes recommendations on how to improve the accuracy of this calculation in the future.
Creating Matched Samples Using Exact Matching. Statistical Report 2016-3
ERIC Educational Resources Information Center
Godfrey, Kelly E.
2016-01-01
By creating and analyzing matched samples, researchers can simplify their analyses to include fewer covariate variables, relying less on model assumptions, and thus generating results that may be easier to report and interpret. When two groups essentially "look" the same, it is easier to explore their differences and make comparisons…
Verification of a Byzantine-Fault-Tolerant Self-stabilizing Protocol for Clock Synchronization
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2008-01-01
This paper presents the mechanical verification of a simplified model of a rapid Byzantine-fault-tolerant self-stabilizing protocol for distributed clock synchronization systems. This protocol does not rely on any assumptions about the initial state of the system except for the presence of sufficient good nodes, thus making the weakest possible assumptions and producing the strongest results. This protocol tolerates bursts of transient failures, and deterministically converges within a time bound that is a linear function of the self-stabilization period. A simplified model of the protocol is verified using the Symbolic Model Verifier (SMV). The system under study consists of 4 nodes, where at most one of the nodes is assumed to be Byzantine faulty. The model checking effort is focused on verifying correctness of the simplified model of the protocol in the presence of a permanent Byzantine fault as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period. Although model checking results of the simplified model of the protocol confirm the theoretical predictions, these results do not necessarily confirm that the protocol solves the general case of this problem. Modeling challenges of the protocol and the system are addressed. A number of abstractions are utilized in order to reduce the state space.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-20
... simplify some assumptions and to make estimation methods consistent; and characterization as Agency burden... [Remainder of the notice: instructions for submitting comments on docket ...-HQ-OPPT-2010-1007 to EPA online via http://www.regulations.gov (the Agency's preferred method), by e-mail, or in person; the docket is available for online viewing at http://www.regulations.gov.]
M. C. Neel; K. McKelvey; N. Ryman; M. W. Lloyd; R. Short Bull; F. W. Allendorf; M. K. Schwartz; R. S. Waples
2013-01-01
Use of genetic methods to estimate effective population size (Ne) is rapidly increasing, but all approaches make simplifying assumptions unlikely to be met in real populations. In particular, all assume a single, unstructured population, and none has been evaluated for use with continuously distributed species. We simulated continuous populations with local mating...
Statistical Issues for Uncontrolled Reentry Hazards
NASA Technical Reports Server (NTRS)
Matney, Mark
2008-01-01
A number of statistical tools have been developed over the years for assessing the risk of reentering objects to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. The statistical tools use this information to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of the analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper looks at a number of these theoretical assumptions, examining the mathematical basis for the hazard calculations, and outlining the conditions under which the simplifying assumptions hold. In addition, this paper outlines some new tools for assessing ground hazard risk in useful ways. This study also makes use of a database of known uncontrolled reentry locations measured by the United States Department of Defense. By using data from objects that were in orbit more than 30 days before reentry, sufficient time is allowed for the orbital parameters to be randomized in the manner the models assume. The predicted ground footprint distributions of these objects are based on the theory that their orbits behave basically like simple Kepler orbits. However, there are a number of factors - including the effects of gravitational harmonics, the effects of the Earth's equatorial bulge on the atmosphere, and the rotation of the Earth and atmosphere - that could cause them to diverge from simple Kepler orbit behavior and change the ground footprints. The measured latitude and longitude distributions of these objects provide data that can be directly compared with the predicted distributions, providing a fundamental empirical test of the model assumptions.
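For context on the latitude test described in this abstract: under the randomized (simple Kepler) assumption, the impact-latitude density for a circular orbit of inclination i takes a standard closed form. The sketch below is background, not taken from the paper itself.

```latex
% Impact-latitude density when the argument of latitude at decay is
% uniformly random on a circular orbit of inclination i (standard
% result; probability concentrates near the extreme latitudes +/- i):
f(\beta) = \frac{\cos\beta}{\pi\,\sqrt{\sin^{2} i - \sin^{2}\beta}},
\qquad |\beta| < i .
```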
Gas Diffusion in Fluids Containing Bubbles
NASA Technical Reports Server (NTRS)
Zak, M.; Weinberg, M. C.
1982-01-01
Mathematical model describes movement of gases in fluid containing many bubbles. Model makes it possible to predict growth and shrinkage of bubbles as function of time. New model overcomes complexities involved in analysis of varying conditions by making two simplifying assumptions. It treats bubbles as point sources, and it employs approximate expression for gas concentration gradient at liquid/bubble interface. In particular, it is expected to help in developing processes for production of high-quality optical glasses in space.
ERIC Educational Resources Information Center
Grotzer, Tina A.; Tutwiler, M. Shane
2014-01-01
This article considers a set of well-researched default assumptions that people make in reasoning about complex causality and argues that, in part, they result from the forms of causal induction that we engage in and the type of information available in complex environments. It considers how information often falls outside our attentional frame…
Impact of unseen assumptions on communication of atmospheric carbon mitigation options
NASA Astrophysics Data System (ADS)
Elliot, T. R.; Celia, M. A.; Court, B.
2010-12-01
With the rapid access and dissemination of information made available through online and digital pathways, there is a need for a concurrent openness and transparency in communication of scientific investigation. Even with open communication it is essential that the scientific community continue to provide impartial result-driven information. An unknown factor in climate literacy is the influence of an impartial presentation of scientific investigation that has utilized biased base-assumptions. A formal publication appendix, and additional digital material, provides active investigators a suitable framework and ancillary material to make informed statements weighted by assumptions made in a study. However, informal media and rapid communiqués rarely make such investigatory attempts, often citing headline or key phrasing within a written work. This presentation is focused on Geologic Carbon Sequestration (GCS) as a proxy for the wider field of climate science communication, wherein we primarily investigate recent publications in GCS literature that produce scenario outcomes using apparently biased pro- or con- assumptions. A general review of scenario economics, capture process efficacy and specific examination of sequestration site assumptions and processes, reveals an apparent misrepresentation of what we consider to be a base-case GCS system. The authors demonstrate the influence of the apparent bias in primary assumptions on results from commonly referenced subsurface hydrology models. By use of moderate semi-analytical model simplification and Monte Carlo analysis of outcomes, we can establish the likely reality of any GCS scenario within a pragmatic middle ground. Secondarily, we review the development of publicly available web-based computational tools and recent workshops where we presented interactive educational opportunities for public and institutional participants, with the goal of base-assumption awareness playing a central role. Through a series of interactive ‘what if’ scenarios, workshop participants were able to customize the models, which continue to be available from the Princeton University Subsurface Hydrology Research Group, and develop a better comprehension of subsurface factors contributing to GCS. Considering that the models are customizable, a simplified mock-up of regional GCS scenarios can be developed, which provides a possible pathway for informal, industrial, scientific or government communication of GCS concepts and likely scenarios. We believe continued availability, customizable scenarios, and simplifying assumptions are an exemplary means to communicate the possible outcome of CO2 sequestration projects; the associated risk; and, of no small importance, the consequences of base assumptions on predicted outcome.
DOE Office of Scientific and Technical Information (OSTI.GOV)
König, Johannes; Merle, Alexander; Totzauer, Maximilian
We investigate the early Universe production of sterile neutrino Dark Matter by the decays of singlet scalars. All previous studies applied simplifying assumptions and/or studied the process only on the level of number densities, which makes it impossible to give statements about cosmic structure formation. We overcome these issues by dropping all simplifying assumptions (except for one we showed earlier to work perfectly) and by computing the full course of Dark Matter production on the level of non-thermal momentum distribution functions. We are thus in the position to study a broad range of aspects of the resulting settings and apply a broad set of bounds in a reliable manner. We have a particular focus on how to incorporate bounds from structure formation on the level of the linear power spectrum, since the simplistic estimate using the free-streaming horizon clearly fails for highly non-thermal distributions. Our work comprises the most detailed and comprehensive study of sterile neutrino Dark Matter production by scalar decays presented so far.
Puig, Rita; Fullana-I-Palmer, Pere; Baquero, Grau; Riba, Jordi-Roger; Bala, Alba
2013-12-01
Life cycle thinking is a good approach to be used for environmental decision-support, although the complexity of Life Cycle Assessment (LCA) studies sometimes prevents their wide use. The purpose of this paper is to show how LCA methodology can be simplified to be more useful for certain applications. In order to improve waste management in Catalonia (Spain), a Cumulative Energy Demand indicator (LCA-based) has been used to obtain four mathematical models to help the government in deciding whether to prevent or allow a specific waste from leaving its borders. The conceptual equations and all the subsequent developments and assumptions made to obtain the simplified models are presented. One of the four models is discussed in detail, presenting the final simplified equation to be subsequently used by the government in decision making. The resulting model has been found to be scientifically robust, simple to implement and, above all, fulfilling its purpose: the limitation of waste transport out of Catalonia unless the waste recovery operations are significantly better and justify this transport. Copyright © 2013. Published by Elsevier Ltd.
Electromagnetic Simulation of the Near-Field Distribution around a Wind Farm
Yang, Shang-Te; Ling, Hao
2013-01-01
An efficient approach to compute the near-field distribution around and within a wind farm under plane wave excitation is proposed. To make the problem computationally tractable, several simplifying assumptions are made based on the geometry of the problem. By comparing the approximations against full-wave simulations at 500 MHz, it is shown that the assumptions do not introduce significant errors into the resulting near-field distribution. The near fields around a 3 × 3 wind farm are computed using the developed methodology at 150 MHz, 500 MHz, and 3 GHz. Both the multipath interference patterns and the forward shadows are predicted by the proposed method.
Large Angle Transient Dynamics (LATDYN) user's manual
NASA Technical Reports Server (NTRS)
Abrahamson, A. Louis; Chang, Che-Wei; Powell, Michael G.; Wu, Shih-Chin; Bingel, Bradford D.; Theophilos, Paula M.
1991-01-01
A computer code for modeling the large angle transient dynamics (LATDYN) of structures was developed to investigate techniques for analyzing flexible deformation and control/structure interaction problems associated with large angular motions of spacecraft. This type of analysis is beyond the routine capability of conventional analytical tools without simplifying assumptions. In some instances, the motion may be sufficiently slow and the spacecraft (or component) sufficiently rigid to simplify analyses of dynamics and controls by making pseudo-static and/or rigid body assumptions. The LATDYN introduces a new approach to the problem by combining finite element structural analysis, multi-body dynamics, and control system analysis in a single tool. It includes a type of finite element that can deform and rotate through large angles at the same time, and which can be connected to other finite elements either rigidly or through mechanical joints. The LATDYN also provides symbolic capabilities for modeling control systems which are interfaced directly with the finite element structural model. Thus, the nonlinear equations representing the structural model are integrated along with the equations representing sensors, processing, and controls as a coupled system.
The 3D dynamics of the Cosserat rod as applied to continuum robotics
NASA Astrophysics Data System (ADS)
Jones, Charles Rees
2011-12-01
In the effort to simulate the biologically inspired continuum robot's dynamic capabilities, researchers have been faced with the daunting task of simulating---in real-time---the complete three dimensional dynamics of the "beam-like" structure, which includes the three "stiff" degrees of freedom: transverse shear and dilation. Therefore, researchers have traditionally limited the difficulty of the problem with simplifying assumptions. This study, however, puts forward a solution which makes no simplifying assumptions and trades off only the real-time requirement of the desired solution. The solution is a Finite Difference Time Domain method employing an explicit single step method with cheap right-hand sides. The cheap right-hand sides are the result of a rather ingenious formulation of the classical beam called the Cosserat rod by, first, the Cosserat brothers and, later, Stuart S. Antman, which results in five nonlinear but uncoupled equations that require only multiplication and addition. The method is therefore suitable for hardware implementation, thus moving the real-time requirement from a software solution to a hardware solution.
The Boltzmann equation in the difference formulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szoke, Abraham; Brooks III, Eugene D.
2015-05-06
First we recall the assumptions that are needed for the validity of the Boltzmann equation and for the validity of the compressible Euler equations. We then present the difference formulation of these equations and make a connection with the time-honored Chapman-Enskog expansion. We discuss the hydrodynamic limit and calculate the thermal conductivity of a monatomic gas, using a simplified approximation for the collision term. Our formulation is more consistent and simpler than the traditional derivation.
Atmospheric refraction effects on baseline error in satellite laser ranging systems
NASA Technical Reports Server (NTRS)
Im, K. E.; Gardner, C. S.
1982-01-01
Because of the mathematical complexities involved in exact analyses of baseline errors, it is not easy to isolate atmospheric refraction effects; however, by making certain simplifying assumptions about the ranging system geometry, relatively simple expressions can be derived which relate the baseline errors directly to the refraction errors. The results indicate that even in the absence of other errors, the baseline error for intercontinental baselines can be more than an order of magnitude larger than the refraction error.
A Testbed for Model Development
NASA Astrophysics Data System (ADS)
Berry, J. A.; Van der Tol, C.; Kornfeld, A.
2014-12-01
Carbon cycle and land-surface models used in global simulations need to be computationally efficient and have a high standard of software engineering. These models also make a number of scaling assumptions to simplify the representation of complex biochemical and structural properties of ecosystems. This makes it difficult to use these models to test new ideas for parameterizations or to evaluate scaling assumptions. The stripped down nature of these models also makes it difficult to "connect" with current disciplinary research which tends to be focused on much more nuanced topics than can be included in the models. In our opinion/experience this indicates the need for another type of model that can more faithfully represent the complexity ecosystems and which has the flexibility to change or interchange parameterizations and to run optimization codes for calibration. We have used the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model in this way to develop, calibrate, and test parameterizations for solar induced chlorophyll fluorescence, OCS exchange and stomatal parameterizations at the canopy scale. Examples of the data sets and procedures used to develop and test new parameterizations are presented.
Sequential Auctions with Partially Substitutable Goods
NASA Astrophysics Data System (ADS)
Vetsikas, Ioannis A.; Jennings, Nicholas R.
In this paper, we examine a setting in which a number of partially substitutable goods are sold in sequential single unit auctions. Each bidder needs to buy exactly one of these goods. In previous work, this setting has been simplified by assuming that bidders do not know their valuations for all items a priori, but rather are informed of their true valuation for each item right before the corresponding auction takes place. This assumption simplifies the strategies of bidders, as the expected revenue from future auctions is the same for all bidders due to the complete lack of private information. In our analysis, we do not make this assumption. This complicates the computation of the equilibrium strategies significantly. We examine this setting for both first- and second-price auction variants, initially when the closing prices are not announced, for which we prove that sequential first- and second-price auctions are revenue equivalent. Then we assume that the prices are announced; because of the asymmetry in the announced prices between the two auction variants, revenue equivalence does not hold in this case. We finish the paper by giving some initial results about the case when free disposal is allowed, and therefore a bidder can purchase more than one item.
Kim, Minjung; Lamont, Andrea E.; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M. Lee
2015-01-01
Regression mixture models are a novel approach for modeling heterogeneous effects of predictors on an outcome. In the model-building process, residual variances are often disregarded and simplifying assumptions are made without thorough examination of the consequences. This simulation study investigated the impact of an equality constraint on the residual variances across latent classes. We examine the consequences of constraining the residual variances on class enumeration (finding the true number of latent classes) and parameter estimates under a number of different simulation conditions meant to reflect the type of heterogeneity likely to exist in applied analyses. Results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted estimated class sizes and showed the potential to greatly impact parameter estimates in each class. Results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions were made. PMID:26139512
Aerodynamic effects of nearly uniform slipstreams on thin wings in the transonic regime
NASA Technical Reports Server (NTRS)
Rizk, M. H.
1980-01-01
A simplified model is used to describe the interaction between a propeller slipstream and a wing in the transonic regime. The undisturbed slipstream boundary is assumed to coincide with an infinite circular cylinder. The undisturbed slipstream velocity is rotational and is a function of the radius only. In general, the velocity perturbation caused by introducing a wing into the slipstream is also rotational. By making small disturbance assumptions, however, the perturbation velocity becomes nearly potential, and an approximation for the flow is obtained by solving a potential equation.
Interplanetary magnetic flux - Measurement and balance
NASA Technical Reports Server (NTRS)
Mccomas, D. J.; Gosling, J. T.; Phillips, J. L.
1992-01-01
A new method for determining the approximate amount of magnetic flux in various solar wind structures in the ecliptic (and solar rotation) plane is developed using single-spacecraft measurements in interplanetary space and making certain simplifying assumptions. The method removes the effect of solar wind velocity variations and can be applied to specific, limited-extent solar wind structures as well as to long-term variations. Over the 18-month interval studied, the ecliptic plane flux of coronal mass ejections was determined to be about 4 times greater than that of HFDs.
Ye, Linqi; Zong, Qun; Tian, Bailing; Zhang, Xiuyun; Wang, Fang
2017-09-01
In this paper, the nonminimum phase problem of a flexible hypersonic vehicle is investigated. The main challenge of nonminimum phase is that it prevents the application of dynamic inversion methods to nonlinear control design. To solve this problem, we investigate the relationship between nonminimum phase and backstepping control, finding that a stable nonlinear controller can be obtained by changing the control loop on the basis of backstepping control. By extending the control loop to cover the internal dynamics in it, the internal states are directly controlled by the inputs and simultaneously serve as virtual control for the external states, making it possible to guarantee output tracking as well as internal stability. Then, based on the extended control loop, a simplified control-oriented model is developed to enable the applicability of the adaptive backstepping method. It simplifies the design process and relaxes some limitations caused by direct use of the non-simplified control-oriented model. Next, under proper assumptions, asymptotic stability is proved for constant commands, while bounded stability is proved for varying commands. The proposed method is compared with approximate backstepping control and dynamic surface control and is shown to have superior tracking accuracy as well as robustness in the simulation results. This paper may also provide beneficial guidance for control design of other complex systems. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Effects of distributed database modeling on evaluation of transaction rollbacks
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
1991-01-01
Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. The effect of modeling assumptions on the evaluation of one such measure, the number of transaction rollbacks, is studied in a partitioned distributed database system. Six probabilistic models are developed, along with expressions for the number of rollbacks under each of these models. Essentially, the models differ in terms of the available system information. The analytical results so obtained are compared to results from simulation. From this comparison, it is concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly undermined when such models are employed.
Effects of distributed database modeling on evaluation of transaction rollbacks
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
1991-01-01
Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. Here, researchers investigate the effect of modeling assumptions on the evaluation of one such measure, the number of transaction rollbacks in a partitioned distributed database system. The researchers developed six probabilistic models and expressions for the number of rollbacks under each of these models. Essentially, the models differ in terms of the available system information. The analytical results obtained are compared to results from simulation. It was concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly undermined when such models are employed.
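As illustration of the kind of estimate these two records discuss, a minimal Monte Carlo sketch follows. The conflict rule and every parameter below are hypothetical, invented for illustration; they are not the six probabilistic models from the papers.

```python
import random

def simulate_rollbacks(n_transactions=1000, n_items=200,
                       items_per_txn=4, trials=100, seed=1):
    """Crude Monte Carlo estimate of transaction rollbacks.

    Hypothetical conflict model: transactions arrive in sequence, and a
    transaction rolls back if any data item in its (random) access set
    was already claimed by an earlier transaction in the same batch.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        locked, rollbacks = set(), 0
        for _ in range(n_transactions):
            access = {rng.randrange(n_items) for _ in range(items_per_txn)}
            if access & locked:
                rollbacks += 1        # conflict detected -> roll back
            else:
                locked |= access      # commit and hold the items
        total += rollbacks
    return total / trials

print("mean rollbacks per batch:", simulate_rollbacks())
```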
Kim, Minjung; Lamont, Andrea E; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M Lee
2016-06-01
Regression mixture models are a novel approach to modeling the heterogeneous effects of predictors on an outcome. In the model-building process, often residual variances are disregarded and simplifying assumptions are made without thorough examination of the consequences. In this simulation study, we investigated the impact of an equality constraint on the residual variances across latent classes. We examined the consequences of constraining the residual variances on class enumeration (finding the true number of latent classes) and on the parameter estimates, under a number of different simulation conditions meant to reflect the types of heterogeneity likely to exist in applied analyses. The results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted on the estimated class sizes and showed the potential to greatly affect the parameter estimates in each class. These results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions are made.
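A minimal, self-contained sketch of the constraint under study: data are generated from two latent classes whose residual variances differ, and a tiny EM fit is run with and without the equality constraint. All parameter values and the two-class setup here are hypothetical illustrations, not the authors' simulation design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: two latent classes with equal slopes but very
# different residual standard deviations (0.5 vs. 2.0).
n = 2000
z = rng.random(n) < 0.5
x = rng.normal(size=n)
y = np.where(z, 1.0 + 0.5 * x + rng.normal(0.0, 0.5, n),
                -1.0 + 0.5 * x + rng.normal(0.0, 2.0, n))

def em_regmix(x, y, equal_var, iters=200):
    """Tiny EM for a 2-class regression mixture y ~ a_k + b_k * x.

    equal_var=True imposes the residual-variance equality constraint
    examined in the paper; False estimates a sigma_k per class.
    """
    a, b = np.array([0.5, -0.5]), np.zeros(2)
    s, w = np.array([1.0, 1.5]), np.array([0.5, 0.5])
    X = np.stack([np.ones_like(x), x], axis=1)
    for _ in range(iters):
        # E-step: class responsibilities (Gaussian constant cancels)
        r = np.stack([w[k] / s[k] *
                      np.exp(-0.5 * ((y - a[k] - b[k] * x) / s[k]) ** 2)
                      for k in range(2)])
        r /= r.sum(axis=0)
        # M-step: weighted least squares and variance per class
        for k in range(2):
            beta = np.linalg.solve(X.T * r[k] @ X, X.T @ (r[k] * y))
            a[k], b[k] = beta
            s[k] = np.sqrt(r[k] @ (y - a[k] - b[k] * x) ** 2 / r[k].sum())
        if equal_var:                 # pool the class variances
            s[:] = np.sqrt(np.average(s ** 2, weights=r.sum(axis=1)))
        w = r.mean(axis=1)
    return w, a, b, s

print("constrained:  ", em_regmix(x, y, equal_var=True))
print("unconstrained:", em_regmix(x, y, equal_var=False))
```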
The Excursion Set Theory of Halo Mass Functions, Halo Clustering, and Halo Growth
NASA Astrophysics Data System (ADS)
Zentner, Andrew R.
I review the excursion set theory with particular attention toward applications to cold dark matter halo formation and growth, halo abundance, and halo clustering. After a brief introduction to notation and conventions, I begin by recounting the heuristic argument leading to the mass function of bound objects given by Press and Schechter. I then review the more formal derivation of the Press-Schechter halo mass function that makes use of excursion sets of the density field. The excursion set formalism is powerful and can be applied to numerous other problems. I review the excursion set formalism for describing both halo clustering and bias and the properties of void regions. As one of the most enduring legacies of the excursion set approach and one of its most common applications, I spend considerable time reviewing the excursion set theory of halo growth. This section of the review culminates with the description of two Monte Carlo methods for generating ensembles of halo mass accretion histories. In the last section, I emphasize that the standard excursion set approach is the result of several simplifying assumptions. Dropping these assumptions can lead to more faithful predictions and open excursion set theory to new applications. One such assumption is that the height of the barriers that define collapsed objects is a constant function of scale. I illustrate the implementation of the excursion set approach for barriers of arbitrary shape. One such application is the now well-known improvement of the excursion set mass function derived from the "moving" barrier for ellipsoidal collapse. I also emphasize that the statement that halo accretion histories are independent of halo environment in the excursion set approach is not a general prediction of the theory. It is a simplifying assumption. I review the method for constructing correlated random walks of the density field in the more general case. I construct a simple toy model to illustrate that excursion set theory (with a constant barrier height) makes a simple and general prediction for the relation between halo accretion histories and the large-scale environments of halos: regions of high density preferentially contain late-forming halos and conversely for regions of low density. I conclude with a brief discussion of the importance of this prediction relative to recent numerical studies of the environmental dependence of halo properties.
Edemagenic gain and interstitial fluid volume regulation.
Dongaonkar, R M; Quick, C M; Stewart, R H; Drake, R E; Cox, C S; Laine, G A
2008-02-01
Under physiological conditions, interstitial fluid volume is tightly regulated by balancing microvascular filtration and lymphatic return to the central venous circulation. Even though microvascular filtration and lymphatic return are governed by conservation of mass, their interaction can result in exceedingly complex behavior. Without making simplifying assumptions, investigators must solve the fluid balance equations numerically, which limits the generality of the results. We thus made critical simplifying assumptions to develop a simple solution to the standard fluid balance equations that is expressed as an algebraic formula. Using a classical approach to describe systems with negative feedback, we formulated our solution as a "gain" relating the change in interstitial fluid volume to a change in effective microvascular driving pressure. The resulting "edemagenic gain" is a function of microvascular filtration coefficient (K(f)), effective lymphatic resistance (R(L)), and interstitial compliance (C). This formulation suggests two types of gain: "multivariate" dependent on C, R(L), and K(f), and "compliance-dominated" approximately equal to C. The latter forms a basis of a novel method to estimate C without measuring interstitial fluid pressure. Data from ovine experiments illustrate how edemagenic gain is altered with pulmonary edema induced by venous hypertension, histamine, and endotoxin. Reformulation of the classical equations governing fluid balance in terms of edemagenic gain thus yields new insight into the factors affecting an organ's susceptibility to edema.
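To make the gain concept concrete, here is a minimal steady-state sketch consistent with the abstract, assuming linear filtration J = K_f(ΔP − P_i), lymphatic return Q = P_i/R_L, and interstitial volume V = C·P_i; the exact formulation is the paper's, and this derivation is illustrative only.

```latex
% Steady state (filtration = lymphatic return) fixes the interstitial
% pressure P_i; differentiating V = C P_i gives the edemagenic gain G:
K_f\,(\Delta P - P_i) = \frac{P_i}{R_L}
\;\Rightarrow\;
P_i = \Delta P\,\frac{K_f R_L}{1 + K_f R_L},
\qquad
G \equiv \frac{dV}{d\,\Delta P} = C\,\frac{K_f R_L}{1 + K_f R_L}.
% For K_f R_L >> 1 this reduces to G ~ C, matching the
% "compliance-dominated" regime described in the abstract.
```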
Merritt, J S; Burvill, C R; Pandy, M G; Davies, H M S
2006-08-01
The mechanical environment of the distal limb is thought to be involved in the pathogenesis of many injuries, but has not yet been thoroughly described. To determine the forces and moments experienced by the metacarpus in vivo during walking and also to assess the effect of some simplifying assumptions used in analysis. Strains from 8 gauges adhered to the left metacarpus of one horse were recorded in vivo during walking. Two different models - one based upon the mechanical theory of beams and shafts and, the other, based upon a finite element analysis (FEA) - were used to determine the external loads applied at the ends of the bone. Five orthogonal force and moment components were resolved by the analysis. In addition, 2 orthogonal bending moments were calculated near mid-shaft. Axial force was found to be the major loading component and displayed a bi-modal pattern during the stance phase of the stride. The shaft model of the bone showed good agreement with the FEA model, despite making many simplifying assumptions. A 3-dimensional loading scenario was observed in the metacarpus, with axial force being the major component. These results provide an opportunity to validate mathematical (computer) models of the limb. The data may also assist in the formulation of hypotheses regarding the pathogenesis of injuries to the distal limb.
The Embedding Problem for Markov Models of Nucleotide Substitution
Verbyla, Klara L.; Yap, Von Bing; Pahwa, Anuj; Shao, Yunli; Huttley, Gavin A.
2013-01-01
Continuous-time Markov processes are often used to model the complex natural phenomenon of sequence evolution. To make the process of sequence evolution tractable, simplifying assumptions are often made about the sequence properties and the underlying process. The validity of one such assumption, time-homogeneity, has never been explored. Violations of this assumption can be found by identifying non-embeddability. A process is non-embeddable if it cannot be embedded in a continuous time-homogeneous Markov process. In this study, non-embeddability was demonstrated to exist when modelling sequence evolution with Markov models. Evidence of non-embeddability was found primarily at the third codon position, possibly resulting from changes in mutation rate over time. Outgroup edges and those with a deeper time depth were found to have an increased probability of the underlying process being non-embeddable. Overall, low levels of non-embeddability were detected when examining individual edges of triads across a diverse set of alignments. Subsequent phylogenetic reconstruction analyses demonstrated that non-embeddability could impact the correct prediction of phylogenies, but at extremely low levels. Despite the existence of non-embeddability, there is minimal evidence of violations of the local time homogeneity assumption and consequently the impact is likely to be minor. PMID:23935949
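A small illustration of how non-embeddability can be screened for in practice: a stochastic matrix P can be embedded in a time-homogeneous continuous-time chain only if some matrix logarithm of P is a valid rate matrix. The sketch below tests only the principal logarithm, so it is a heuristic screen rather than a full decision procedure, and it is not the authors' method.

```python
import numpy as np
from scipy.linalg import logm

def embeddable_check(P, tol=1e-8):
    """Screen a stochastic matrix P for embeddability.

    Necessary-style test on the principal matrix logarithm: it must be
    real, with rows summing to zero and non-negative off-diagonal
    entries, to be a valid rate matrix Q with P = expm(Q).
    """
    Q, _ = logm(P, disp=False)
    if np.max(np.abs(np.imag(Q))) > tol:
        return False, None              # principal log is complex
    Q = np.real(Q)
    off_diag = Q - np.diag(np.diag(Q))
    ok = (np.all(off_diag >= -tol) and
          np.all(np.abs(Q.sum(axis=1)) < tol))
    return ok, Q

# A Jukes-Cantor-like transition matrix passes the screen ...
P_jc = np.full((4, 4), 0.05) + 0.80 * np.eye(4)
print(embeddable_check(P_jc)[0])        # True

# ... while a matrix with a negative eigenvalue fails it.
P_osc = np.array([[0.1, 0.9],
                  [0.9, 0.1]])
print(embeddable_check(P_osc)[0])       # False
```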
Miller, G Edward; Selden, Thomas M
2013-01-01
Objective: To estimate 2012 tax expenditures for employer-sponsored insurance (ESI) in the United States and to explore the sensitivity of estimates to assumptions regarding the incidence of employer premium contributions. Data Sources: Nationally representative Medical Expenditure Panel Survey data from the 2005–2007 Household Component (MEPS-HC) and the 2009–2010 Insurance Component (MEPS IC). Study Design: We use MEPS HC workers to construct synthetic workforces for MEPS IC establishments, applying the workers' marginal tax rates to the establishments' insurance premiums to compute the tax subsidy, in aggregate and by establishment characteristics. Simulation enables us to examine the sensitivity of ESI tax subsidy estimates to a range of scenarios for the within-firm incidence of employer premium contributions when workers have heterogeneous health risks and make heterogeneous plan choices. Principal Findings: We simulate the total ESI tax subsidy for all active, civilian U.S. workers to be $257.4 billion in 2012. In the private sector, the subsidy disproportionately flows to workers in large establishments and establishments with predominantly high wage or full-time workforces. The estimates are remarkably robust to alternative incidence assumptions. Conclusions: The aggregate value of the ESI tax subsidy and its distribution across firms can be reliably estimated using simplified incidence assumptions. PMID:23398400
Investigations in a Simplified Bracketed Grid Approach to Metrical Structure
ERIC Educational Resources Information Center
Liu, Patrick Pei
2010-01-01
In this dissertation, I examine the fundamental mechanisms and assumptions of the Simplified Bracketed Grid Theory (Idsardi 1992) in two ways: first, by comparing it with Parametric Metrical Theory (Hayes 1995), and second, by implementing it in the analysis of several case studies in stress assignment and syllabification. Throughout these…
NASA Technical Reports Server (NTRS)
Tower, L. K.
1973-01-01
The diffusion of oxygen into, or out of, a gettered alloy exposed to oxygenated alkali liquid metal coolant, a situation arising in some high temperature heat transfer systems, was analyzed. The relation between the diffusion process and the thermochemistry of oxygen in the alloy and in the alkali metal was developed by making several simplifying assumptions. The treatment is therefore theoretical in nature. However, a practical example pertaining to the startup of a heat pipe with walls of T-111, a tantalum alloy, and lithium working fluid illustrates the use of the figures contained in the analysis.
iGen: An automated generator of simplified models with provable error bounds.
NASA Astrophysics Data System (ADS)
Tang, D.; Dobbie, S.
2009-04-01
Climate models employ various simplifying assumptions and parameterisations in order to increase execution speed. However, in order to draw conclusions about the Earth's climate from the results of a climate simulation it is necessary to have information about the error that these assumptions and parameterisations introduce. A novel computer program, called iGen, is being developed which automatically generates fast, simplified models by analysing the source code of a slower, high resolution model. The resulting simplified models have provable bounds on error compared to the high resolution model and execute at speeds that are typically orders of magnitude faster. iGen's input is a definition of the prognostic variables of the simplified model, a set of bounds on acceptable error and the source code of a model that captures the behaviour of interest. In the case of an atmospheric model, for example, this would be a global cloud resolving model with very high resolution. Although such a model would execute far too slowly to be used directly in a climate model, iGen never executes it. Instead, it converts the code of the resolving model into a mathematical expression which is then symbolically manipulated and approximated to form a simplified expression. This expression is then converted back into a computer program and output as a simplified model. iGen also derives and reports formal bounds on the error of the simplified model compared to the resolving model. These error bounds are always maintained below the user-specified acceptable error. Results will be presented illustrating the success of iGen's analysis of a number of example models. These extremely encouraging results have led to work which is currently underway to analyse a cloud resolving model and so produce an efficient parameterisation of moist convection with formally bounded error.
Pendulum Motion and Differential Equations
ERIC Educational Resources Information Center
Reid, Thomas F.; King, Stephen C.
2009-01-01
A common example of real-world motion that can be modeled by a differential equation, and one easily understood by the student, is the simple pendulum. Simplifying assumptions are necessary for closed-form solutions to exist, and frequently there is little discussion of the impact if those assumptions are not met. This article presents a…
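A minimal sketch (not from the article) of the simplification it discusses: integrating the full pendulum equation and its small-angle linearization side by side shows how the closed-form-friendly assumption sin(theta) ≈ theta drifts from the true motion at large amplitude. The parameter values are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

g, L = 9.81, 1.0            # gravity (m/s^2) and pendulum length (m)
theta0 = np.radians(60.0)   # large release angle, where sin(theta) != theta

def full(t, y):             # theta'' = -(g/L) * sin(theta)
    return [y[1], -(g / L) * np.sin(y[0])]

def small_angle(t, y):      # linearized: theta'' = -(g/L) * theta
    return [y[1], -(g / L) * y[0]]

t_eval = np.linspace(0.0, 10.0, 501)
for rhs, label in [(full, "full model"), (small_angle, "small-angle")]:
    sol = solve_ivp(rhs, (0.0, 10.0), [theta0, 0.0],
                    t_eval=t_eval, rtol=1e-8)
    print(f"{label:12s} theta(10 s) = {np.degrees(sol.y[0, -1]):7.2f} deg")
```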
Simplified subsurface modelling: data assimilation and violated model assumptions
NASA Astrophysics Data System (ADS)
Erdal, Daniel; Lange, Natascha; Neuweiler, Insa
2017-04-01
Integrated models are gaining more and more attention in hydrological modelling as they can better represent the interaction between different compartments. Naturally, these models come along with larger numbers of unknowns and requirements on computational resources compared to stand-alone models. If large model domains are to be represented, e.g. on catchment scale, the resolution of the numerical grid needs to be reduced or the model itself needs to be simplified. Both approaches lead to a reduced ability to reproduce the present processes. This lack of model accuracy may be compensated by using data assimilation methods. In these methods observations are used to update the model states, and optionally model parameters as well, in order to reduce the model error induced by the imposed simplifications. What is unclear is whether these methods combined with strongly simplified models result in completely data-driven models or if they can even be used to make adequate predictions of the model state for times when no observations are available. In the current work we consider the combined groundwater and unsaturated zone, which can be modelled in a physically consistent way using 3D-models solving the Richards equation. For use in simple predictions, however, simpler approaches may be considered. The question investigated here is whether a simpler model, in which the groundwater is modelled as a horizontal 2D-model and the unsaturated zones as a few sparse 1D-columns, can be used within an Ensemble Kalman filter to give predictions of groundwater levels and unsaturated fluxes. This is tested under conditions where the feedback between the two model compartments is large (e.g. shallow groundwater table) and the simplification assumptions are clearly violated. Such a case may be a steep hill-slope or pumping wells, creating lateral fluxes in the unsaturated zone, or strong heterogeneous structures creating unaccounted flows in both the saturated and unsaturated compartments. Under such circumstances, direct modelling using a simplified model will not provide good results. However, a more data-driven (e.g. grey box) approach, driven by the filter, may still provide an improved understanding of the system. Comparisons between full 3D simulations and simplified filter-driven models will be shown and the resulting benefits and drawbacks will be discussed.
On the coupling of fluid dynamics and electromagnetism at the top of the earth's core
NASA Technical Reports Server (NTRS)
Benton, E. R.
1985-01-01
A kinematic approach to short-term geomagnetism has recently been based upon pre-Maxwell frozen-flux electromagnetism. A complete dynamic theory requires coupling fluid dynamics to electromagnetism. A geophysically plausible simplifying assumption for the vertical vorticity balance, namely that the vertical Lorentz torque is negligible, is introduced and its consequences are developed. The simplified coupled magnetohydrodynamic system is shown to conserve a variety of magnetic and vorticity flux integrals. These provide constraints on eligible models for the geomagnetic main field, its secular variation, and the horizontal fluid motions at the top of the core, and so permit a number of tests of the underlying assumptions.
Glistening-region model for multipath studies
NASA Astrophysics Data System (ADS)
Groves, Gordon W.; Chow, Winston C.
1998-07-01
The goal is to achieve a model of radar sea reflection with improved fidelity that is amenable to practical implementation. The geometry of reflection from a wavy surface is formulated. The sea surface is divided into two components: the smooth `chop' consisting of the longer wavelengths, and the `roughness' of the short wavelengths. Ordinary geometric reflection from the chop surface is broadened by the roughness. This same representation serves both for forward scatter and backscatter (sea clutter). The `Road-to-Happiness' approximation, in which the mean sea surface is assumed cylindrical, simplifies the reflection geometry for low-elevation targets. The effect of surface roughness is assumed to make the sea reflection coefficient depend on the `Deviation Angle' between the specular and the scattering directions. The `specular' direction is that into which energy would be reflected by a perfectly smooth facet. Assuming that the ocean waves are linear and random allows use of Gaussian statistics, greatly simplifying the formulation by allowing representation of the sea chop by three parameters. An approximation of `low waves' and retention of the sea-chop slope components only through second order provides further simplification. The simplifying assumptions make it possible to take the predicted 2D ocean wave spectrum into account in the calculation of sea-surface radar reflectivity, to provide algorithms for support of an operational system for dealing with target tracking in the presence of multipath. The product will be of use in simulated studies to evaluate different trade-offs in alternative tracking schemes, and will form the basis of a tactical system for ship defense against low flyers.
Direct Position Determination of Unknown Signals in the Presence of Multipath Propagation
Yu, Hongyi
2018-01-01
A novel geolocation architecture, termed “Multiple Transponders and Multiple Receivers for Multiple Emitters Positioning System (MTRE)”, is proposed in this paper. Existing Direct Position Determination (DPD) methods take advantage of a rather simple channel assumption (line of sight channels with complex path attenuations) and a simplified MUltiple SIgnal Classification (MUSIC) algorithm cost function to avoid high dimension searching. We point out that the simplified assumption and cost function reduce the positioning accuracy because of the singularity of the array manifold in a multi-path environment. We present a DPD model for unknown signals in the presence of Multi-path Propagation (MP-DPD) in this paper. MP-DPD adds non-negative real path attenuation constraints to avoid the errors caused by the singularity of the array manifold. The Multi-path Propagation MUSIC (MP-MUSIC) method and the Active Set Algorithm (ASA) are designed to reduce the dimension of searching. In addition, a Multi-path Propagation Maximum Likelihood (MP-ML) method is proposed to overcome the limitation of MP-MUSIC in time-sensitive applications. An iterative algorithm and an approach to initial value setting are given to make the MP-ML time consumption acceptable. Numerical results validate the performance improvements of MP-MUSIC and MP-ML. A closed form of the Cramér–Rao Lower Bound (CRLB) is derived as a benchmark to evaluate the performance of MP-MUSIC and MP-ML. PMID:29562601
Direct Position Determination of Unknown Signals in the Presence of Multipath Propagation.
Du, Jianping; Wang, Ding; Yu, Wanting; Yu, Hongyi
2018-03-17
A novel geolocation architecture, termed "Multiple Transponders and Multiple Receivers for Multiple Emitters Positioning System (MTRE)", is proposed in this paper. Existing Direct Position Determination (DPD) methods take advantage of a rather simple channel assumption (line of sight channels with complex path attenuations) and a simplified MUltiple SIgnal Classification (MUSIC) algorithm cost function to avoid high dimension searching. We point out that the simplified assumption and cost function reduce the positioning accuracy because of the singularity of the array manifold in a multi-path environment. We present a DPD model for unknown signals in the presence of Multi-path Propagation (MP-DPD) in this paper. MP-DPD adds non-negative real path attenuation constraints to avoid the errors caused by the singularity of the array manifold. The Multi-path Propagation MUSIC (MP-MUSIC) method and the Active Set Algorithm (ASA) are designed to reduce the dimension of searching. In addition, a Multi-path Propagation Maximum Likelihood (MP-ML) method is proposed to overcome the limitation of MP-MUSIC in time-sensitive applications. An iterative algorithm and an approach to initial value setting are given to make the MP-ML time consumption acceptable. Numerical results validate the performance improvements of MP-MUSIC and MP-ML. A closed form of the Cramér-Rao Lower Bound (CRLB) is derived as a benchmark to evaluate the performance of MP-MUSIC and MP-ML.
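For background on the simplified cost function these records refer to: the classical MUSIC pseudospectrum, written here in its standard textbook form (background only, not the MP-MUSIC objective introduced by the paper):

```latex
% Classical MUSIC pseudospectrum over candidate emitter positions p,
% where a(p) is the array manifold vector and the columns of E_n span
% the estimated noise subspace; peaks of P indicate emitter locations.
P_{\mathrm{MUSIC}}(\mathbf{p}) =
  \frac{1}{\mathbf{a}(\mathbf{p})^{H}\,
           \mathbf{E}_{n}\mathbf{E}_{n}^{H}\,
           \mathbf{a}(\mathbf{p})}
```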
Quantum-like dynamics applied to cognition: a consideration of available options
NASA Astrophysics Data System (ADS)
Broekaert, Jan; Basieva, Irina; Blasiak, Pawel; Pothos, Emmanuel M.
2017-10-01
Quantum probability theory (QPT) has provided a novel, rich mathematical framework for cognitive modelling, especially for situations which appear paradoxical from classical perspectives. This work concerns the dynamical aspects of QPT, as relevant to cognitive modelling. We aspire to shed light on how the mind's driving potentials (encoded in Hamiltonian and Lindbladian operators) impact the evolution of a mental state. Some existing QPT cognitive models do employ dynamical aspects when considering how a mental state changes with time, but it is often the case that several simplifying assumptions are introduced. What kind of modelling flexibility does QPT dynamics offer without any simplifying assumptions and is it likely that such flexibility will be relevant in cognitive modelling? We consider a series of nested QPT dynamical models, constructed with a view to accommodate results from a simple, hypothetical experimental paradigm on decision-making. We consider Hamiltonians more complex than the ones which have traditionally been employed with a view to explore the putative explanatory value of this additional complexity. We then proceed to compare simple models with extensions regarding both the initial state (e.g. a mixed state with a specific orthogonal decomposition; a general mixed state) and the dynamics (by introducing Hamiltonians which destroy the separability of the initial structure and by considering an open-system extension). We illustrate the relations between these models mathematically and numerically. This article is part of the themed issue `Second quantum revolution: foundational questions'.
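For readers unfamiliar with the operators named in this abstract, the dynamical equation in question is the standard GKSL (Lindblad) master equation, reproduced here as background rather than as a result of the paper:

```latex
% Lindblad master equation for the density operator rho: the Hamiltonian
% H drives unitary evolution, and the Lindblad operators L_k (with rates
% gamma_k >= 0) encode open-system, non-unitary dynamics.
\frac{d\rho}{dt} = -\frac{i}{\hbar}\,[H,\rho]
  + \sum_{k}\gamma_{k}\Big( L_{k}\,\rho\,L_{k}^{\dagger}
  - \tfrac{1}{2}\big\{ L_{k}^{\dagger}L_{k},\,\rho \big\} \Big)
```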
Data reduction of room tests for zone model validation
M. Janssens; H. C. Tran
1992-01-01
Compartment fire zone models are based on many simplifying assumptions, in particular that gases stratify in two distinct layers. Because of these assumptions, certain model output is in a form unsuitable for direct comparison to measurements made in full-scale room tests. The experimental data must first be reduced and transformed to be compatible with the model...
Guidelines and Metrics for Assessing Space System Cost Estimates
2008-01-01
[Excerpt garbled in extraction; the recoverable fragments list cost-estimate credibility indicators (e.g., high mass margin; simplifying assumptions used to bound solutions; high reuse of architecture, design, tools, code, test scripts, and commercial real-time operating systems; simplified life...; reuse of tooling, models, and mechanical ground-support equipment [MGSE]) and part of an acronym glossary (TWTA, traveling wave tube amplifier; USAF, U.S. Air Force; USCM, Unmanned Space Vehicle Cost Model; USN, U.S. Navy; UV, ultraviolet; UVOT, UV...).]
Weierstrass traveling wave solutions for dissipative Benjamin, Bona, and Mahony (BBM) equation
NASA Astrophysics Data System (ADS)
Mancas, Stefan C.; Spradlin, Greg; Khanal, Harihar
2013-08-01
In this paper the effect of a small dissipation on waves is included to find exact solutions to the modified Benjamin, Bona, and Mahony (BBM) equation by viscosity. Using Lyapunov functions and dynamical systems theory, we prove that when viscosity is added to the BBM equation, in certain regions there still exist bounded traveling wave solutions in the form of solitary waves, periodic, and elliptic functions. By using the canonical form of Abel equation, the polynomial Appell invariant makes the equation integrable in terms of Weierstrass ℘ functions. We will use a general formalism based on Ince's transformation to write the general solution of dissipative BBM in terms of ℘ functions, from which all the other known solutions can be obtained via simplifying assumptions. Using ODE (ordinary differential equations) analysis we show that the traveling wave speed is a bifurcation parameter that makes transition between different classes of waves.
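For reference, one common form of the dissipative BBM equation (sign and coefficient conventions vary between authors) and the traveling-wave reduction discussed above are:

```latex
% BBM equation with a small viscous dissipation term \nu u_{xx}:
\[
  u_t + u_x + u\,u_x - u_{xxt} = \nu\, u_{xx} .
\]
% The traveling-wave ansatz u(x,t) = \phi(\xi), \xi = x - ct, reduces
% the PDE to an ODE in \phi, whose bounded orbits correspond to the
% solitary, periodic and elliptic (Weierstrass \wp) solutions above.
```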
Managing Disease Risks from Trade: Strategic Behavior with Many Choices and Price Effects.
Chitchumnong, Piyayut; Horan, Richard D
2018-03-16
An individual's infectious disease risks, and hence the individual's incentives for risk mitigation, may be influenced by others' risk management choices. If so, then there will be strategic interactions among individuals, whereby each makes his or her own risk management decisions based, at least in part, on the expected decisions of others. Prior work has shown that multiple equilibria could arise in this setting, with one equilibrium being a coordination failure in which individuals make too few investments in protection. However, these results are largely based on simplified models involving a single management choice and fixed prices that may influence risk management incentives. Relaxing these assumptions, we find strategic interactions influence, and are influenced by, choices involving multiple management options and market price effects. In particular, we find these features can reduce or eliminate concerns about multiple equilibria and coordination failure. This has important policy implications relative to simpler models.
DOT National Transportation Integrated Search
2005-05-01
This report provides an overview of polymer flammability from a material science perspective and describes currently accepted test methods to quantify burning behavior. Simplifying assumptions about the gas and condensed phase processes of flaming co...
NASA Astrophysics Data System (ADS)
Zlotnik, V. A.; Tartakovsky, D. M.
2017-12-01
The study is motivated by the rapid proliferation of field methods for measuring seepage velocity using heat tracing, and is directed at broadening their potential for studies of groundwater-surface water interactions, and the hyporheic zone in particular. In the vast majority of cases, existing methods assume a vertical or horizontal, uniform, 1D seepage velocity. Often, 1D transport is assumed as well, and the Suzuki-Stallman analytical models of heat transport are heavily used to infer seepage velocity. However, both of these assumptions (1D flow and 1D transport) are violated due to the flow geometry, media heterogeneity, and localized heat sources. Attempts to apply more realistic conceptual models still lack a full 3D view, and known 2D examples are treated numerically, or by making additional simplifying assumptions about velocity orientation. Heat-pulse instruments and sensors already offer an opportunity to collect data sufficient for 3D seepage-velocity identification at the appropriate scale, but interpretation tools for groundwater-surface water interactions in 3D have not yet been developed. We propose an approach that can substantially improve the capabilities of existing field instruments without additional measurements. The proposed closed-form analytical solutions are simple and well suited for use in inverse modeling. Field applications and ramifications, including data analysis, are discussed. The approach simplifies data collection, determines 3D seepage velocity, and facilitates interpretation of the relations between heat-transport parameters, fluid flow, and media properties. Results are obtained using the tensor properties of the transport parameters, Green's functions, and rotational coordinate transformations based on the Euler angles.
NASA Astrophysics Data System (ADS)
Peckham, S. D.
2017-12-01
Standardized, deep descriptions of digital resources (e.g. data sets, computational models, software tools and publications) make it possible to develop user-friendly software systems that assist scientists with the discovery and appropriate use of these resources. Semantic metadata makes it possible for machines to take actions on behalf of humans, such as automatically identifying the resources needed to solve a given problem, retrieving them and then automatically connecting them (despite their heterogeneity) into a functioning workflow. Standardized model metadata also helps model users to understand the important details that underpin computational models and to compare the capabilities of different models. These details include simplifying assumptions about the physics, the governing equations and the numerical methods used to solve them, the discretization of space (the grid) and time (the time-stepping scheme), state variables (input or output), and model configuration parameters. This kind of metadata provides a "deep description" of a computational model that goes well beyond other types of metadata (e.g. author, purpose, scientific domain, programming language, digital rights, provenance, execution) and captures the science that underpins a model. A carefully constructed, unambiguous, rules-based schema that addresses this problem, called the Geoscience Standard Names ontology, will be presented; it utilizes Semantic Web best practices and technologies and has been designed to work across science domains and to be readable by both humans and machines.
Spontaneously Broken Neutral Symmetry in an Ecological System
NASA Astrophysics Data System (ADS)
Borile, C.; Muñoz, M. A.; Azaele, S.; Banavar, Jayanth R.; Maritan, A.
2012-07-01
Spontaneous symmetry breaking plays a fundamental role in many areas of condensed matter and particle physics. A fundamental problem in ecology is the elucidation of the mechanisms responsible for biodiversity and stability. Neutral theory, which makes the simplifying assumption that all individuals (such as trees in a tropical forest)—regardless of the species they belong to—have the same prospect of reproduction, death, etc., yields gross patterns that are in accord with empirical data. We explore the possibility of birth and death rates that depend on the population density of species, treating the dynamics in a species-symmetric manner. We demonstrate that dynamical evolution can lead to a stationary state characterized simultaneously by both biodiversity and spontaneously broken neutral symmetry.
Cowan, Cameron S; Sabharwal, Jasdeep; Wu, Samuel M
2016-09-01
Reverse correlation methods such as spike-triggered averaging consistently identify the spatial center in the linear receptive fields (RFs) of retinal ganglion cells (GCs). However, the spatial antagonistic surround observed in classical experiments has proven more elusive. Tests for the antagonistic surround have heretofore relied on models that make questionable simplifying assumptions such as space-time separability and radial homogeneity/symmetry. We circumvented these, along with other common assumptions, and observed a linear antagonistic surround in 754 of 805 mouse GCs. By characterizing the RF's space-time structure, we found the overall linear RF's inseparability could be accounted for both by tuning differences between the center and surround and differences within the surround. Finally, we applied this approach to characterize spatial asymmetry in the RF surround. These results shed new light on the spatiotemporal organization of GC linear RFs and highlight a major contributor to its inseparability. © 2016 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of the American Physiological Society and The Physiological Society.
Telko, Martin J; Hickey, Anthony J
2007-10-01
Inverse gas chromatography (IGC) has been employed as a research tool for decades. Despite this record of use and proven utility in a variety of applications, the technique is not routinely used in pharmaceutical research. In other fields the technique has flourished. IGC is experimentally relatively straightforward, but analysis requires that certain theoretical assumptions are satisfied. The assumptions made to acquire some of the recently reported data are somewhat modified compared to initial reports. Most publications in the pharmaceutical literature have made use of a simplified equation for the determination of acid/base surface properties resulting in parameter values that are inconsistent with prior methods. In comparing the surface properties of different batches of alpha-lactose monohydrate, new data has been generated and compared with literature to allow critical analysis of the theoretical assumptions and their importance to the interpretation of the data. The commonly used (simplified) approach was compared with the more rigorous approach originally outlined in the surface chemistry literature. (c) 2007 Wiley-Liss, Inc.
International Conference on the Methods of Aerophysical Research 98 "ICMAR 98". Proceedings, Part 1
1998-01-01
pumping air through the device and air drying due to vapour condensation on cooled surfaces. In this report, approximate estimates are presented... picture is used for the flow field between the disks and for water-vapour condensation on cooled moving surfaces. Shown in Fig. 1 is a simplified flow... (frequency of disk rotation), thus breaking away from the channel walls. Regarding the condensation process, a number of the usual simplifying assumptions are made
NASA Astrophysics Data System (ADS)
Madani, Kaveh
2016-04-01
Water management benefits from a suite of modelling tools and techniques that help simplify and understand the complexities involved in managing water resource systems. Early water management models were mainly concerned with optimizing a single objective related to the design, operations or management of water resource systems (e.g. economic cost, hydroelectricity production, reliability of water deliveries). Significant improvements in methodologies, computational capacity, and data availability over the last decades have resulted in more complex water management models that can now incorporate multiple objectives, various uncertainties, and big data. These models provide an improved understanding of complex water resource systems and provide opportunities for making positive impacts. Nevertheless, there remains an alarming mismatch between the optimal solutions developed by these models and the decisions made by managers and stakeholders of water resource systems. Modelers continue to consider decision makers as irrational agents who fail to implement the optimal solutions developed by sophisticated and mathematically rigorous water management models. On the other hand, decision makers and stakeholders accuse modelers of being idealists, lacking a perfect understanding of reality, and developing 'smart' solutions that are not practical (stable). In this talk I will take a closer look at the mismatch between the optimality and stability of solutions and argue that conventional water resources management models suffer inherently from a full-cooperation assumption. According to this assumption, water resources management decisions are based on group rationality, whereas in practice decisions are often based on individual rationality, making the group's optimal solution unstable for individually rational decision makers. I discuss how game theory can be used as an appropriate framework for addressing the irrational "rationality assumption" of water resources management models and for better capturing the social aspects of decision making in water management systems with multiple stakeholders.
Fully Bayesian tests of neutrality using genealogical summary statistics.
Drummond, Alexei J; Suchard, Marc A
2008-10-31
Many data summary statistics have been developed to detect departures from the neutral expectations of evolutionary models. However, questions about the neutrality of the evolution of genetic loci within natural populations remain difficult to assess. One critical cause of this difficulty is that most methods for testing neutrality make simplifying assumptions simultaneously about the mutational model and the population size model. Consequently, rejecting the null hypothesis of neutrality under these methods could result from violations of either or both assumptions, making interpretation troublesome. Here we harness posterior predictive simulation to exploit summary statistics of both the data and model parameters to test the goodness-of-fit of standard models of evolution. We apply the method to test the selective neutrality of molecular evolution in non-recombining gene genealogies, and we demonstrate the utility of our method on four real data sets, identifying significant departures from neutrality in human influenza A virus, even after controlling for variation in population size. Importantly, by employing a full model-based Bayesian analysis, our method separates the effects of demography from the effects of selection. The method also allows multiple summary statistics to be used in concert, thus potentially increasing sensitivity. Furthermore, our method remains useful in situations where analytical expectations and variances of summary statistics are not available. This aspect has great potential for the analysis of temporally spaced data, an expanding area previously neglected owing to the limited availability of theory and methods.
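The core of the posterior predictive machinery can be sketched in a few lines (a generic outline under stated assumptions, not the authors' implementation):

```python
import numpy as np

def posterior_predictive_pvalue(observed_stat, posterior_draws, simulate, stat):
    """Posterior predictive check for a genealogical summary statistic.

    posterior_draws : parameter sets sampled from the posterior
    simulate        : function(params) -> data set simulated under the
                      neutral model with those parameters
    stat            : function(data) -> summary statistic (e.g. Tajima's D)
    """
    sims = np.array([stat(simulate(params)) for params in posterior_draws])
    # Two-sided tail probability of the observed statistic under the
    # posterior predictive distribution; small values flag departures
    # from neutrality while conditioning on the inferred demography.
    lower = np.mean(sims <= observed_stat)
    upper = np.mean(sims >= observed_stat)
    return 2.0 * min(lower, upper)
```

Because the simulated replicates inherit the posterior uncertainty in demographic parameters, a small p-value cannot be explained away by population-size misspecification alone, which is the separation of demography from selection described above.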
Gottlieb, Jacqueline
2018-05-01
In natural behavior we actively gather information using attention and active sensing behaviors (such as shifts of gaze) to sample relevant cues. However, while attention and decision making are naturally coordinated, in the laboratory they have been dissociated. Attention is studied independently of the actions it serves. Conversely, decision theories make the simplifying assumption that the relevant information is given, and do not attempt to describe how the decision maker may learn and implement active sampling policies. In this paper I review recent studies that address questions of attentional learning, cue validity and information seeking in humans and non-human primates. These studies suggest that learning a sampling policy involves large scale interactions between networks of attention and valuation, which implement these policies based on reward maximization, uncertainty reduction and the intrinsic utility of cognitive states. I discuss the importance of using such paradigms for formalizing the role of attention, as well as devising more realistic theories of decision making that capture a broader range of empirical observations. Copyright © 2017 Elsevier Ltd. All rights reserved.
Novel Discretization Schemes for the Numerical Simulation of Membrane Dynamics
2012-09-13
Experimental data therefore play a key role in validation. A wide variety of methods for building a simulation that meets the listed requirements are... Despite the intrinsic nonlinearity of true membranes, simplifying assumptions may be appropriate for some applications. Based on these possible assumptions... particles determines the kinetic energy of the system. Mass lumping at the particles is intrinsic (the consistent mass treatment of FEM is not an
The risk of collapse in abandoned mine sites: the issue of data uncertainty
NASA Astrophysics Data System (ADS)
Longoni, Laura; Papini, Monica; Brambilla, Davide; Arosio, Diego; Zanzi, Luigi
2016-04-01
Ground collapses over abandoned underground mines constitute an emerging environmental risk worldwide. The high risk associated with subsurface voids, together with a lack of knowledge of the geometric and geomechanical features of mining areas, makes abandoned underground mines one of the current challenges for countries with a long mining history. In this study, a stability analysis of the Montevecchia marl mine is performed in order to validate a general approach that takes into account the poor local information and the variability of the input data. The collapse risk was evaluated through a numerical approach that, starting from some simplifying assumptions, is able to provide an overview of the collapse probability. The final result is an easily accessible, transparent summary graph that shows the collapse probability. This approach may be useful for public administrators called upon to manage this environmental risk. The approach tries to simplify this complex problem in order to achieve a rough risk assessment, but, since it relies on just a small amount of information, any final user should be aware that a comprehensive and detailed risk scenario can be generated only through more exhaustive investigations.
Simplified analysis of a generalized bias test for fabrics with two families of inextensible fibres
NASA Astrophysics Data System (ADS)
Cuomo, M.; dell'Isola, F.; Greco, L.
2016-06-01
Two tests for woven fabrics with orthogonal fibres are examined using simplified kinematic assumptions. The aim is to analyse how different constitutive assumptions may affect the response of the specimen. The fibres are considered inextensible, and the kinematics of 2D continua with inextensible chords due to Rivlin is adopted. In addition to two forms of strain energy depending on the shear deformation, two forms of energy depending on the gradient of shear are also examined. It is shown that this energy can account for the bending of the fibres. In addition to the standard bias extension test, a modified test has been examined, in which the head of the specimen is rotated rather than translated. In this case more bending occurs, so that the results of the simulations carried out with the different energy models differ more than what was found for the BE test.
Marom, Gil; Bluestein, Danny
2016-01-01
This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be given to the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and sensitivity analyses similar to the ones presented here should be employed.
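As an illustration of the quantity at stake, below is a minimal sketch of stress accumulation along a single Lagrangian pathline (one common accumulation rule; the paper compares several implementation and post-processing choices):

```python
import numpy as np

def stress_accumulation(tau, dt, alpha=1.0):
    """Accumulated stress along one pathline: SA = sum_i tau_i**alpha * dt_i.

    tau   : scalar shear stresses sampled along the trajectory
    dt    : residence times between samples
    alpha : 1.0 gives linear accumulation; power-law variants are common
    """
    return float(np.sum(np.asarray(tau) ** alpha * np.asarray(dt)))

# Single-passage accumulation for one illustrative trajectory ...
single = stress_accumulation([1.2, 3.4, 0.8], [0.01, 0.02, 0.01])
# ... and a crude repeated-passage estimate obtained by summing passages,
# one of the post-processing options whose influence the paper evaluates.
repeated = 5 * single
```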
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bouchard, P.J.
A forthcoming revision to the R6 Leak-before-Break Assessment Procedure is briefly described. Practical application of the LbB concepts to safety-critical nuclear plant is illustrated by examples covering both low temperature and high temperature (>450 °C) operating regimes. The examples highlight a number of issues which can make the development of a satisfactory LbB case problematic: for example, coping with highly loaded components, methodology assumptions and the definition of margins, the effect of crack closure owing to weld residual stresses, complex thermal stress fields or primary bending fields, the treatment of locally high stresses at crack intersections with free surfaces, the choice of local limit load solution when predicting ligament breakthrough, and the scope of calculations required to support even a simplified LbB case for high temperature steam pipe-work systems.
Measuring Spatial Infiltration in Stormwater Control Measures: Results and Implications
This presentation will provide background information on research conducted by EPA-ORD on the use of soil moisture sensors in bioretention/bioinfiltration technologies to evaluate infiltration mechanisms and compares monitoring results to simplified modeling assumptions. A serie...
Quantifying and Disaggregating Consumer Purchasing Behavior for Energy Systems Modeling
Consumer behaviors such as energy conservation, adoption of more efficient technologies, and fuel switching represent significant potential for greenhouse gas mitigation. Current efforts to model future energy outcomes have tended to use simplified economic assumptions ...
Waples, Robin S; Scribner, Kim; Moore, Jennifer; Draheim, Hope; Etter, Dwayne; Boersen, Mark
2018-04-14
The idealized concept of a population is integral to ecology, evolutionary biology, and natural resource management. To make analyses tractable, most models adopt simplifying assumptions, which almost inevitably are violated by real species in nature. Here we focus on both demographic and genetic estimates of effective population size per generation (Ne), the effective number of breeders per year (Nb), and Wright's neighborhood size (NS) for black bears (Ursus americanus) that are continuously distributed in the northern lower peninsula of Michigan, USA. We illustrate practical application of recently-developed methods to account for violations of two common, simplifying assumptions about populations: 1) reproduction occurs in discrete generations, and 2) mating occurs randomly among all individuals. We use a 9-year harvest dataset of >3300 individuals, together with genetic determination of 221 parent-offspring pairs, to estimate male and female vital rates, including age-specific survival, age-specific fecundity, and age-specific variance in fecundity (for which empirical data are rare). We find strong evidence for overdispersed variance in reproductive success of same-age individuals in both sexes, and we show that constraints on litter size have a strong influence on results. We also estimate that another life-history trait that is often ignored (skip breeding by females) has a relatively modest influence, reducing Nb by 9% and increasing Ne by 3%. We conclude that isolation by distance depresses genetic estimates of Nb, which implicitly assume a randomly-mating population. Estimated demographic NS (100, based on parent-offspring dispersal) was similar to genetic NS (85, based on regression of genetic distance and geographic distance), indicating that the >36,000 km2 study area includes about 4-5 black-bear neighborhoods. Results from this expansive data set provide important insight into effects of violating assumptions when estimating evolutionary parameters for long-lived, free-ranging species. In conjunction with recently-developed analytical methodology, the ready availability of non-lethal DNA sampling methods and the ability to rapidly and cheaply survey many thousands of molecular markers should facilitate eco-evolutionary studies like this for many more species in nature.
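For context, one classic discrete-generation approximation (after Wright and Crow-Kimura) linking effective size to the variance in reproductive success, the quantity whose overdispersion is estimated above, is, for a stable population of N adults with a mean offspring number of 2:

```latex
\[
  N_e \;\approx\; \frac{4N - 2}{V_k + 2},
\]
% where V_k is the variance in lifetime reproductive success among
% individuals. Overdispersion (V_k greater than the mean) depresses
% N_e below N, which is why the litter-size constraints and skip
% breeding discussed above feed directly into the Ne and Nb estimates.
```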
INTERNAL DOSE AND RESPONSE IN REAL-TIME.
Rapid temporal fluctuations in exposure may occur in a number of situations such as accidents or other unexpected acute releases of airborne substances. Often risk assessments overlook temporal exposure patterns under simplifying assumptions such as the use of time-wei...
NASA Astrophysics Data System (ADS)
MECHEL, F. P.
2001-11-01
A plane wave is incident on a simply supported elastic plate covering a back volume; the arrangement is surrounded by a hard baffle wall. The plate may be porous with a flow friction resistance; the back volume may be filled either with air or with a porous material. The back volume may be bulk reacting (i.e., with sound propagation parallel to the plate) or locally reacting. Since this arrangement is of some importance in room acoustics, Cremer, in his book on room acoustics [1], presented an approximate analysis. However, Cremer's analysis uses a number of assumptions which make his solution, by his own estimate, unsuited for low frequencies, where, on the other hand, the arrangement is mainly applied. This paper presents a sound field description which uses modal analysis. It is applicable not only in the far field, but also near the absorber. Further, approximate solutions are derived, based on simplifying assumptions like those Cremer used. The modal analysis solution is of interest not only as a reference for approximations but also for practical applications, because computing time is becoming less and less of a constraint (the 3D plots of the sound field presented below were evaluated with modal analysis in about 6 s).
Impact buckling of thin bars in the elastic range for any end condition
NASA Technical Reports Server (NTRS)
Taub, Josef
1934-01-01
Following a qualitative discussion of the complicated process involved when a short-period longitudinal force is applied to an initially not-quite-straight bar, the actual process is replaced by an idealized one for the purpose of analytical treatment. The simplifications are: the assumption of an infinitely high rate of propagation of elastic longitudinal waves in the bar, limitation to slender bars, disregard of material damping and of rotatory inertia, the assumption of consistently small elastic deformations, the assumption of cross-sectional dimensions constant along the bar axis, the assumption of a shock load constant in time, and the assumption of eccentricities in one plane. Then follow the mathematical principles for solving the differential equation of the simplified problem, particularly the expandability of arbitrary functions with continuous first and second, and piecewise continuous third and fourth, derivatives into a convergent series in the natural functions of the homogeneous differential equation.
Simplifying the complexity of resistance heterogeneity in metastasis
Lavi, Orit; Greene, James M.; Levy, Doron; Gottesman, Michael M.
2014-01-01
The main goal of treatment regimens for metastasis is to control growth rates, not eradicate all cancer cells. Mathematical models offer methodologies that incorporate high-throughput data with dynamic effects on net growth. The ideal approach would simplify, but not over-simplify, a complex problem into meaningful and manageable estimators that predict a patient’s response to specific treatments. Here, we explore three fundamental approaches with different assumptions concerning resistance mechanisms, in which the cells are categorized into either discrete compartments or described by a continuous range of resistance levels. We argue in favor of modeling resistance as a continuum and demonstrate how integrating cellular growth rates, density-dependent versus exponential growth, and intratumoral heterogeneity improves predictions concerning the resistance heterogeneity of metastases. PMID:24491979
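A minimal sketch of the continuum view argued for here (our notation, not the authors' exact model): let n(x, t) be the density of cells at resistance level x in [0, 1], growing at a resistance-dependent rate under a shared carrying capacity:

```latex
\[
  \frac{\partial n(x,t)}{\partial t}
    \;=\; r(x)\, n(x,t)\!\left(1 - \frac{N(t)}{K}\right),
  \qquad
  N(t) \;=\; \int_0^1 n(x,t)\, dx ,
\]
% Density dependence couples all resistance levels through the total
% population N(t), in contrast to discrete-compartment models in which
% sensitive and resistant cells occupy separate pools.
```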
NASA Astrophysics Data System (ADS)
Yan, Xiao-Yong; Han, Xiao-Pu; Zhou, Tao; Wang, Bing-Hong
2011-12-01
We propose a simplified human regular mobility model to simulate an individual's daily travel with three sequential activities: commuting to the workplace, going out for leisure activities, and returning home. Assuming that the individual travels at a constant speed and spends at least minimum amounts of time at home and at work, we prove that the daily moving area of an individual is an ellipse, and we obtain an exact solution for the gyration radius. The analytical solution captures the empirical observations well.
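The gyration radius referred to here is the standard one from the human-mobility literature: for a trajectory of positions r_1, ..., r_n with centre of mass r_cm,

```latex
\[
  r_g \;=\; \sqrt{\frac{1}{n}\sum_{i=1}^{n}
            \left(\mathbf{r}_i - \mathbf{r}_{\mathrm{cm}}\right)^2 } ,
\]
% The paper's contribution is a closed-form expression for r_g under
% the elliptical home-work-leisure geometry described above.
```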
An approach to quantifying the efficiency of a Bayesian filter
USDA-ARS?s Scientific Manuscript database
Data assimilation is defined as the Bayesian conditioning of uncertain model simulations on observations for the purpose of reducing uncertainty about model states. Practical data assimilation applications require that simplifying assumptions be made about the prior and posterior state distributions...
A Methodology for Developing Army Acquisition Strategies for an Uncertain Future
2007-01-01
[Excerpt garbled in extraction; it combines part of an acronym glossary (ABP, Assumption-Based Planning; ACEIT, Automated Cost Estimating Integrated Tool; ACR, Armored Cavalry Regiment; ACTD, ...) with a fragment noting that analysts employ the Automated Cost Estimating Integrated Tools (ACEIT) to simplify life-cycle cost estimates, among other tools.]
MODELING NITROGEN-CARBON CYCLING AND OXYGEN CONSUMPTION IN BOTTOM SEDIMENTS
A model framework is presented for simulating nitrogen and carbon cycling at the sediment–water interface, and predicting oxygen consumption by oxidation reactions inside the sediments. Based on conservation of mass and invoking simplifying assumptions, a coupled system of diffus...
Deflection Shape Reconstructions of a Rotating Five-blade Helicopter Rotor from TLDV Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fioretti, A.; Castellini, P.; Tomasini, E. P.
2010-05-28
Helicopters are aircraft that are subjected to high levels of vibration, mainly due to their spinning rotors. Rotors are made of two or more blades attached by hinges to a central hub, which can make their dynamic behaviour difficult to study. However, they share some common dynamic properties with those expected of bladed discs, so the analytical modelling of rotors can be performed using assumptions like the ones adopted for bladed discs. This paper presents results of a vibration study performed on a scaled helicopter rotor model rotating at a fixed speed and excited by an air jet. A simplified analytical model of the rotor was also produced to help identify the vibration patterns measured using a single-point tracking SLDV measurement method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eifert, Till; Nachman, Benjamin
2015-02-20
A light supersymmetric top quark partner (stop) with a mass nearly degenerate with that of the standard model (SM) top quark can evade direct searches. The precise measurement of SM top properties such as the cross-section has been suggested to give a handle for this ‘stealth stop’ scenario. We present an estimate of the potential impact a light stop may have on top quark mass measurements. The results indicate that certain light stop models may induce a bias of up to a few GeV, and that this effect can hide the shift in, and hence sensitivity from, cross-section measurements. Due to the different initial states, the size of the bias is slightly different between the LHC and the Tevatron. The studies make some simplifying assumptions for the top quark measurement technique, and are based on truth-level samples.
The unstaggered extension to GFDL's FV3 dynamical core on the cubed-sphere
NASA Astrophysics Data System (ADS)
Chen, X.; Lin, S. J.; Harris, L.
2017-12-01
Finite-volume schemes have become popular for atmospheric transport since they provide intrinsic mass conservation to constituent species. Many CFD codes use unstaggered discretizations for finite volume methods with an approximate Riemann solver. However, this approach is inefficient for geophysical flows due to the complexity of the Riemann solver. We introduce a Low Mach number Approximate Riemann Solver (LMARS) simplified using assumptions appropriate for atmospheric flows: the wind speed is much slower than the sound speed, weak discontinuities, and locally uniform sound wave velocity. LMARS makes possible a Riemann-solver-based dynamical core comparable in computational efficiency to many current dynamical cores. We will present a 3D finite-volume dynamical core using LMARS in a cubed-sphere geometry with a vertically Lagrangian discretization. Results from standard idealized test cases will be discussed.
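Under the stated low-Mach assumptions, the interface state reduces to simple linearised acoustics. The sketch below shows one plausible reading of such a solver (the coefficients are illustrative and should not be taken as the exact LMARS formulation):

```python
import numpy as np

def low_mach_interface(rho_l, u_l, p_l, rho_r, u_r, p_r, gamma=1.4):
    """Low-Mach acoustic interface state between left/right cell averages.

    Linearised acoustics with a locally uniform sound speed give
        u* = (u_l + u_r)/2 - (p_r - p_l) / (2 rho c)
        p* = (p_l + p_r)/2 - rho c (u_r - u_l) / 2
    avoiding the iterative exact Riemann solver used in generic CFD codes.
    """
    rho = 0.5 * (rho_l + rho_r)                    # mean density
    c = np.sqrt(gamma * 0.5 * (p_l + p_r) / rho)   # mean sound speed
    u_star = 0.5 * (u_l + u_r) - (p_r - p_l) / (2.0 * rho * c)
    p_star = 0.5 * (p_l + p_r) - 0.5 * rho * c * (u_r - u_l)
    return u_star, p_star
```

The flux at the face is then assembled from the upwinded state using u* and p*, which is why the scheme stays cheap relative to exact Riemann solvers while remaining stable for slow, weakly discontinuous atmospheric flows.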
Application of Multi-Hypothesis Sequential Monte Carlo for Breakup Analysis
NASA Astrophysics Data System (ADS)
Faber, W. R.; Zaidi, W.; Hussein, I. I.; Roscoe, C. W. T.; Wilkins, M. P.; Schumacher, P. W., Jr.
As more objects are launched into space, the potential for breakup events and space object collisions is ever increasing. These events create large clouds of debris that are extremely hazardous to space operations. Providing timely, accurate, and statistically meaningful Space Situational Awareness (SSA) data is crucial in order to protect assets and operations in space. The space object tracking problem, in general, is nonlinear in both state dynamics and observations, making it ill-suited to linear filtering techniques such as the Kalman filter. Additionally, given the multi-object, multi-scenario nature of the problem, space situational awareness requires multi-hypothesis tracking and management that is combinatorially challenging in nature. In practice, it is often seen that assumptions of underlying linearity and/or Gaussianity are used to provide tractable solutions to the multiple space object tracking problem. However, these assumptions are, at times, detrimental to tracking data and provide statistically inconsistent solutions. This paper details a tractable solution to the multiple space object tracking problem applicable to space object breakup events. Within this solution, simplifying assumptions of the underlying probability density function are relaxed and heuristic methods for hypothesis management are avoided. This is done by implementing Sequential Monte Carlo (SMC) methods for both nonlinear filtering and hypothesis management. The goal of this paper is to detail the solution and use it as a platform to discuss the computational limitations that hinder proper analysis of large breakup events.
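The SMC engine underlying such filters can be outlined generically (a bootstrap-filter sketch, not the paper's full multi-hypothesis machinery):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_particle_filter(observations, propagate, likelihood, particles):
    """Generic sequential Monte Carlo (bootstrap) filter.

    observations : sequence of measurements
    propagate    : function(particles) -> particles advanced by the
                   (nonlinear) dynamics, e.g. orbital propagation
    likelihood   : function(y, particles) -> per-particle weights
    particles    : (N, d) initial particle cloud over object states
    """
    n = len(particles)
    for y in observations:
        particles = propagate(particles)        # predict step
        w = likelihood(y, particles)            # update step
        w = w / np.sum(w)
        idx = rng.choice(n, size=n, p=w)        # resample by weight
        particles = particles[idx]
    return particles
```

Because the particle cloud represents the posterior directly, no Gaussian or linear approximation of the debris-cloud density is required, which is the relaxation of simplifying assumptions described above.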
Barks, P M; Laird, R A
2016-04-01
Classic theories on the evolution of senescence make the simplifying assumption that all offspring are of equal quality, so that demographic senescence only manifests through declining rates of survival or fecundity. However, there is now evidence that, in addition to declining rates of survival and fecundity, many organisms are subject to age-related declines in the quality of offspring produced (i.e. parental age effects). Recent modelling approaches allow for the incorporation of parental age effects into classic demographic analyses, assuming that such effects are limited to a single generation. Does this 'single-generation' assumption hold? To find out, we conducted a laboratory study with the aquatic plant Lemna minor, a species for which parental age effects have been demonstrated previously. We compared the size and fitness of 423 laboratory-cultured plants (asexually derived ramets) representing various birth orders, and ancestral 'birth-order genealogies'. We found that offspring size and fitness both declined with increasing 'immediate' birth order (i.e. birth order with respect to the immediate parent), but only offspring size was affected by ancestral birth order. Thus, the assumption that parental age effects on offspring fitness are limited to a single generation does in fact hold for L. minor. This result will guide theorists aiming to refine and generalize modelling approaches that incorporate parental age effects into evolutionary theory on senescence. © 2016 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2016 European Society For Evolutionary Biology.
DEVELOPMENT OF A MODEL FOR REAL TIME CO CONCENTRATIONS NEAR ROADWAYS
Although emission standards for mobile sources continue to be tightened, tailpipe emissions in urban areas continue to be a major source of human exposure to air toxics. Current human exposure models using simplified assumptions based on fixed air monitoring stations and region...
Multi-phase CFD modeling of solid sorbent carbon capture system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryan, E. M.; DeCroix, D.; Breault, R.
2013-07-01
Computational fluid dynamics (CFD) simulations are used to investigate a low temperature post-combustion carbon capture reactor. The CFD models are based on a small scale solid sorbent carbon capture reactor design from ADA-ES and Southern Company. The reactor is a fluidized bed design based on a silica-supported amine sorbent. CFD models using both Eulerian–Eulerian and Eulerian–Lagrangian multi-phase modeling methods are developed to investigate the hydrodynamics and adsorption of carbon dioxide in the reactor. Models developed in both FLUENT® and BARRACUDA are presented to explore the strengths and weaknesses of state of the art CFD codes for modeling multi-phase carbon capture reactors. The results of the simulations show that the FLUENT® Eulerian–Lagrangian simulations (DDPM) are unstable for the given reactor design, while the BARRACUDA Eulerian–Lagrangian model is able to simulate the system given appropriate simplifying assumptions. FLUENT® Eulerian–Eulerian simulations also provide a stable solution for the carbon capture reactor given the appropriate simplifying assumptions.
Risk-Screening Environmental Indicators (RSEI)
EPA's Risk-Screening Environmental Indicators (RSEI) is a geographically-based model that helps policy makers and communities explore data on releases of toxic substances from industrial facilities reporting to EPA's Toxics Release Inventory (TRI). By analyzing TRI information together with simplified risk factors, such as the amount of chemical released, its fate and transport through the environment, each chemical's relative toxicity, and the number of people potentially exposed, RSEI calculates a numeric score, which is designed to be compared only to other scores calculated by RSEI. Because it is designed as a screening-level model, RSEI uses worst-case assumptions about toxicity and potential exposure where data are lacking, and also uses simplifying assumptions to reduce the complexity of the calculations. A more refined assessment is required before any conclusions about health impacts can be drawn. RSEI is used to establish priorities for further investigation and to look at changes in potential impacts over time. Users can save resources by conducting preliminary analyses with RSEI.
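The arithmetic behind such screening scores is simple in spirit; the toy sketch below uses hypothetical factors and weights, not EPA's actual RSEI model:

```python
def screening_score(amount_released, fate_transport, toxicity_weight,
                    exposed_population):
    """Toy screening score: linear in each risk factor, so scores are
    only meaningful relative to other scores from the same model."""
    return amount_released * fate_transport * toxicity_weight * exposed_population

# A small release of a highly toxic chemical can outrank a much larger
# release of a mildly toxic one (all numbers hypothetical):
chem_a = screening_score(10_000, 0.5, 5.0, 2_500)    # large, mild release
chem_b = screening_score(500, 0.5, 200.0, 2_500)     # small, toxic release
print(chem_b > chem_a)  # True
```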
Dynamic behaviour of thin composite plates for different boundary conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprintu, Iuliana (E-mail: sprintui@yahoo.com); Rotaru, Constantin (E-mail: rotaruconstantin@yahoo.com)
2014-12-10
In the context of composite materials technology, which is increasingly present in industry, this article covers a topic of great theoretical and practical importance. Given the complex design of fiber-reinforced materials and their heterogeneous nature, mathematical modeling of the mechanical response under different external stresses is very difficult to address in the absence of simplifying assumptions. In most structural applications, composite structures can be idealized as beams, plates, or shells. The analysis is reduced from a three-dimensional elasticity problem to a one- or two-dimensional problem, based on certain simplifying assumptions that can be made because the structure is thin. This paper aims to validate a mathematical model illustrating how thin rectangular orthotropic plates respond to actual loads. Thus, from the theory of thin plates, new analytical solutions are proposed for orthotropic rectangular plates with different boundary conditions. The proposed analytical solutions are used both for solving the governing equations of orthotropic rectangular plates and for modal analysis.
NASA Astrophysics Data System (ADS)
Rodriguez Marco, Albert
Battery management systems (BMS) require computationally simple but highly accurate models of the battery cells they are monitoring and controlling. Historically, empirical equivalent-circuit models have been used, but increasingly researchers are focusing their attention on physics-based models due to their greater predictive capabilities. These models are of high intrinsic computational complexity and so must undergo some kind of order-reduction process to make their use by a BMS feasible: we favor methods based on a transfer-function approach to battery-cell dynamics. In prior works, transfer functions have been found from full-order PDE models via two simplifying assumptions: (1) a linearization assumption, which is a fundamental necessity in order to make transfer functions, and (2) an assumption made out of expedience that decouples the electrolyte-potential and electrolyte-concentration PDEs in order to make it possible to solve for the transfer functions from the PDEs. This dissertation improves the fidelity of physics-based models by eliminating the need for the second assumption and by linearizing the nonlinear dynamics around different constant currents. Electrochemical transfer functions are infinite-order and cannot be expressed as a ratio of polynomials in the Laplace variable s. Thus, for practical use, these systems need to be approximated using reduced-order models that capture the most significant dynamics. This dissertation improves the generation of physics-based reduced-order models by introducing different realization algorithms, which produce a low-order model from the infinite-order electrochemical transfer functions. Physics-based reduced-order models are linear and describe cell dynamics well if operated near the setpoint at which they were generated. Hence, multiple physics-based reduced-order models need to be generated at different setpoints (i.e., state of charge, temperature, and C-rate) in order to cover the cell's operating range. This dissertation improves the implementation of physics-based reduced-order models by introducing different blending approaches that combine the pre-computed models generated (offline) at different setpoints in order to produce good electrochemical estimates (online) across the cell's state-of-charge, temperature, and C-rate range.
Diffendorfer, James E.; Beston, Julie A.; Merrill, Matthew D.; Stanton, Jessica C.; Corum, Margo D.; Loss, Scott R.; Thogmartin, Wayne E.; Johnson, Douglas H.; Erickson, Richard A.; Heist, Kevin W.
2015-01-01
Components of the methodology are based on simplifying assumptions and require information that, for many species, may be sparse or unreliable. These assumptions are presented in the report and should be carefully considered when using output from the methodology. In addition, this methodology can be used to recommend species for more intensive demographic modeling or highlight those species that may not require any additional protection because effects of wind energy development on their populations are projected to be small.
Naïve and Robust: Class-Conditional Independence in Human Classification Learning
ERIC Educational Resources Information Center
Jarecki, Jana B.; Meder, Björn; Nelson, Jonathan D.
2018-01-01
Humans excel in categorization. Yet from a computational standpoint, learning a novel probabilistic classification task involves severe computational challenges. The present paper investigates one way to address these challenges: assuming class-conditional independence of features. This feature independence assumption simplifies the inference…
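The assumption in question is the one behind naive Bayes classifiers; a minimal sketch for binary features shows how it collapses the joint likelihood into a product of one-dimensional terms:

```python
import numpy as np

def fit_naive_bayes(X, y):
    """Bernoulli naive Bayes: features assumed independent given the class.

    X : (n_samples, n_features) binary feature matrix
    y : (n_samples,) class labels
    """
    classes = np.unique(y)
    priors = {c: np.mean(y == c) for c in classes}
    # Per-class, per-feature probability of the feature being 1,
    # with Laplace smoothing.
    theta = {c: (X[y == c].sum(axis=0) + 1) / ((y == c).sum() + 2)
             for c in classes}
    return classes, priors, theta

def predict(x, classes, priors, theta):
    # Class-conditional independence: the log-likelihood is a simple
    # sum over features instead of a joint table over 2**n_features cells.
    scores = {c: np.log(priors[c])
                 + np.sum(x * np.log(theta[c]) + (1 - x) * np.log(1 - theta[c]))
              for c in classes}
    return max(scores, key=scores.get)
```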
Theoretical studies of solar lasers and converters
NASA Technical Reports Server (NTRS)
Heinbockel, John H.
1988-01-01
The previously constructed one-dimensional model for the simulated operation of an iodine laser assumed that the perfluoroalkyl iodide gas n-C3F7I was incompressible. The present study removes this simplifying assumption and treats n-C3F7I as a compressible fluid.
NASA Technical Reports Server (NTRS)
Kubota, H.
1976-01-01
A simplified analytical method for calculation of thermal response within a transpiration-cooled porous heat shield material in an intense radiative-convective heating environment is presented. The essential assumptions of the radiative and convective transfer processes in the heat shield matrix are the two-temperature approximation and the specified radiative-convective heatings of the front surface. Sample calculations for porous silica with CO2 injection are presented for some typical parameters of mass injection rate, porosity, and material thickness. The effect of these parameters on the cooling system is discussed.
Quasi 3D modeling of water flow in vadose zone and groundwater
USDA-ARS?s Scientific Manuscript database
The complexity of subsurface flow systems calls for a variety of concepts leading to the multiplicity of simplified flow models. One habitual simplification is based on the assumption that lateral flow and transport in unsaturated zone are not significant unless the capillary fringe is involved. In ...
Scaling the Library Collection; A Simplified Method for Weighing the Variables
ERIC Educational Resources Information Center
Vagianos, Louis
1973-01-01
On the assumption that the physical properties of any information stock (book, etc.) offer the best foundation on which to develop satisfactory measurements for assessing library operations and developing library procedures, weight is suggested as the most useful variable for assessment and standardization. Advantages of this approach are…
Dualisms in Higher Education: A Critique of Their Influence and Effect
ERIC Educational Resources Information Center
Macfarlane, Bruce
2015-01-01
Dualisms pervade the language of higher education research providing an over-simplified roadmap to the field. However, the lazy logic of their popular appeal supports the perpetuation of erroneous and often outdated assumptions about the nature of modern higher education. This paper explores nine commonly occurring dualisms:…
A Comprehensive Real-World Distillation Experiment
ERIC Educational Resources Information Center
Kazameas, Christos G.; Keller, Kaitlin N.; Luyben, William L.
2015-01-01
Most undergraduate mass transfer and separation courses cover the design of distillation columns, and many undergraduate laboratories have distillation experiments. In many cases, the treatment is restricted to simple column configurations and simplifying assumptions are made so as to convey only the basic concepts. In industry, the analysis of a…
Optimal weighting in fNL constraints from large scale structure in an idealised case
NASA Astrophysics Data System (ADS)
Slosar, Anže
2009-03-01
We consider the problem of optimal weighting of tracers of structure for the purpose of constraining the non-Gaussianity parameter fNL. We work within the Fisher matrix formalism, expanded around a fiducial model with fNL = 0, and make several simplifying assumptions. By slicing a general sample into infinitely many samples with different biases, we derive the analytic expression for the relevant Fisher matrix element. We next consider weighting schemes that construct two effective samples from a single sample of tracers with a continuously varying bias. We show that a particularly simple ansatz for the weighting functions can recover all information about fNL in the initial sample that is recoverable using a given bias observable, and that simple division into two equal samples is considerably suboptimal when sampling of modes is good, but only marginally suboptimal in the limit where Poisson errors dominate.
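The bias observable in question is the scale-dependent correction induced by local-type non-Gaussianity (following e.g. Dalal et al. 2008), which is why slicing a sample by bias carries fNL information:

```latex
\[
  \Delta b(k) \;=\; 3 f_{\rm NL}\,(b - 1)\,
     \frac{\delta_c\, \Omega_m H_0^2}{c^2\, k^2\, T(k)\, D(z)} ,
\]
% where b is the Gaussian bias, \delta_c \simeq 1.686 the critical
% collapse threshold, T(k) the transfer function and D(z) the linear
% growth factor. The (b - 1) factor makes differently biased tracers
% respond differently on large scales, which optimal weighting exploits.
```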
Removal of the Gibbs phenomenon and its application to fast-Fourier-transform-based mode solvers.
Wangüemert-Pérez, J G; Godoy-Rubio, R; Ortega-Moñux, A; Molina-Fernández, I
2007-12-01
A simple strategy for accurately recovering discontinuous functions from their Fourier series coefficients is presented. The aim of the proposed approach, named spectrum splitting (SS), is to remove the Gibbs phenomenon by making use of signal-filtering-based concepts and some properties of the Fourier series. While the technique can be used in a vast range of situations, it is particularly suitable for being incorporated into fast-Fourier-transform-based electromagnetic mode solvers (FFT-MSs), which are known to suffer from very poor convergence rates when applied to situations where the field distributions are highly discontinuous (e.g., silicon-on-insulator photonic wires). The resultant method, SS-FFT-MS, is exhaustively tested under the assumption of a simplified one-dimensional model, clearly showing a dramatic improvement of the convergence rates with respect to the original FFT-based methods.
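For readers unfamiliar with the baseline problem, the snippet below reproduces the Gibbs overshoot for a square wave and tames it with classical Lanczos sigma factors; this is the textbook filtering idea the proposed spectrum-splitting approach improves upon, not the SS method itself:

```python
import numpy as np

N = 32                                   # number of retained harmonics
x = np.linspace(-np.pi, np.pi, 2001)
partial = np.zeros_like(x)
filtered = np.zeros_like(x)
for k in range(1, N + 1, 2):             # odd harmonics of a square wave
    term = (4.0 / np.pi) * np.sin(k * x) / k
    sigma = np.sinc(k / (N + 1))         # Lanczos sigma factor
    partial += term
    filtered += sigma * term

# The raw partial sum overshoots the jump by about 9% (Gibbs
# phenomenon); sigma-filtering strongly damps the ringing at the cost
# of some smearing of the discontinuity.
print(partial.max(), filtered.max())
```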
A survey of camera error sources in machine vision systems
NASA Astrophysics Data System (ADS)
Jatko, W. B.
In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.
Transmission Parameters of the 2001 Foot and Mouth Epidemic in Great Britain
Chis Ster, Irina; Ferguson, Neil M.
2007-01-01
Despite intensive ongoing research, key aspects of the spatial-temporal evolution of the 2001 foot and mouth disease (FMD) epidemic in Great Britain (GB) remain unexplained. Here we develop a Markov Chain Monte Carlo (MCMC) method for estimating epidemiological parameters of the 2001 outbreak for a range of simple transmission models. We make the simplifying assumption that infectious farms were completely observed in 2001, equivalent to assuming that farms that were proactively culled but not diagnosed with FMD were not infectious, even if some were infected. We estimate how transmission parameters varied through time, highlighting the impact of the control measures on the progression of the epidemic. We demonstrate statistically significant evidence for assortative contact patterns between animals of the same species. Predictive risk maps of the transmission potential in different geographic areas of GB are presented for the fitted models. PMID:17551582
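The MCMC engine behind such estimates is typically a random-walk Metropolis sampler over the transmission parameters; a generic sketch (not the authors' implementation) is:

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis(log_posterior, theta0, n_iter=10_000, step=0.1):
    """Random-walk Metropolis sampler over transmission parameters.

    log_posterior : function(theta) -> log posterior density (epidemic
                    likelihood times the prior)
    theta0        : initial parameter vector
    """
    theta = np.asarray(theta0, dtype=float)
    lp = log_posterior(theta)
    chain = []
    for _ in range(n_iter):
        proposal = theta + step * rng.standard_normal(theta.shape)
        lp_prop = log_posterior(proposal)
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = proposal, lp_prop
        chain.append(theta.copy())
    return np.array(chain)
```

Time-varying transmission parameters of the kind estimated above can be handled by letting theta include separate transmissibility terms for each control period.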
Relationship between population dynamics and the self-energy in driven non-equilibrium systems
Kemper, Alexander F.; Freericks, James K.
2016-05-13
We compare the decay rates of excited populations directly calculated within a Keldysh formalism to the equation of motion of the population itself for a Hubbard-Holstein model in two dimensions. While it is true that these two approaches must give the same answer, it is common to make a number of simplifying assumptions, within the differential equation for the populations, that allow one to interpret the decay in terms of hot electrons interacting with a phonon bath. Furthermore, we show how care must be taken to ensure an accurate treatment of the equation of motion for the populations due to the fact that there are identities that require cancellations of terms that naively look like they contribute to the decay rates. In particular, the average time dependence of the Green's functions and self-energies plays a pivotal role in determining these decay rates.
An alternative Biot's displacement formulation for porous materials.
Dazel, Olivier; Brouard, Bruno; Depollier, Claude; Griffiths, Stéphane
2007-06-01
This paper proposes an alternative displacement formulation of Biot's linear model for poroelastic materials. Its advantage is a simplification of the formalism without making any additional assumptions. The main difference between the method proposed in this paper and the original one is the choice of the generalized coordinates. In the present approach, the generalized coordinates are chosen in order to simplify the expression of the strain energy, which is expressed as the sum of two decoupled terms. Hence, new equations of motion are obtained whose elastic forces are decoupled. The simplification of the formalism is extended to Biot and Willis thought experiments, and simpler expressions of the parameters of the three Biot waves are also provided. A rigorous derivation of equivalent and limp models is then proposed. It is finally shown that, for the particular case of sound-absorbing materials, additional simplifications of the formalism can be obtained.
Taking it to the streets: recording medical outreach data on personal digital assistants.
Buck, David S; Rochon, Donna; Turley, James P
2005-01-01
Carrying hundreds of patient files in a suitcase makes medical street outreach to the homeless clumsy and difficult. Healthcare for the Homeless--Houston (HHH) began a case study under the assumption that tracking patient information with a personal digital assistant (PDA) would greatly simplify the process. Equipping clinicians with custom-designed software loaded onto Palm V Handheld Computers (palmOne, Inc, Milpitas, CA), Healthcare for the Homeless--Houston assessed how this type of technology augmented medical care during street outreach to the homeless in a major metropolitan area. Preliminary evidence suggests that personal digital assistants free clinicians to focus on building relationships instead of recreating documentation during patient encounters. However, the limits of the PDA for storing and retrieving data made it impractical long-term. This outcome precipitated a new study to test the feasibility of tablet personal computers loaded with a custom-designed software application specific to the needs of homeless street patients.
Computational Analysis of Behavior.
Egnor, S E Roian; Branson, Kristin
2016-07-08
In this review, we discuss the emerging field of computational behavioral analysis: the use of modern methods from computer science and engineering to quantitatively measure animal behavior. We discuss aspects of experiment design important to both obtaining biologically relevant behavioral data and enabling the use of machine vision and learning techniques for automation. These two goals are often in conflict. Restraining or restricting the environment of the animal can simplify automatic behavior quantification, but it can also degrade the quality or alter important aspects of behavior. To enable biologists to design experiments to obtain better behavioral measurements, and computer scientists to pinpoint fruitful directions for algorithm improvement, we review known effects of artificial manipulation of the animal on behavior. We also review machine vision and learning techniques for tracking, feature extraction, automated behavior classification, and automated behavior discovery, the assumptions they make, and the types of data they work best with.
Wolfson, Julian; Henn, Lisa
2014-01-01
In many areas of clinical investigation there is great interest in identifying and validating surrogate endpoints, biomarkers that can be measured a relatively short time after a treatment has been administered and that can reliably predict the effect of treatment on the clinical outcome of interest. However, despite dramatic advances in the ability to measure biomarkers, the recent history of clinical research is littered with failed surrogates. In this paper, we present a statistical perspective on why identifying surrogate endpoints is so difficult. We view the problem from the framework of causal inference, with a particular focus on the technique of principal stratification (PS), an approach which is appealing because the resulting estimands are not biased by unmeasured confounding. In many settings, PS estimands are not statistically identifiable and their degree of non-identifiability can be thought of as representing the statistical difficulty of assessing the surrogate value of a biomarker. In this work, we examine the identifiability issue and present key simplifying assumptions and enhanced study designs that enable the partial or full identification of PS estimands. We also present example situations where these assumptions and designs may or may not be feasible, providing insight into the problem characteristics which make the statistical evaluation of surrogate endpoints so challenging.
PMID:25342953
A simplified gross thrust computing technique for an afterburning turbofan engine
NASA Technical Reports Server (NTRS)
Hamer, M. J.; Kurtenbach, F. J.
1978-01-01
A simplified gross thrust computing technique extended to the F100-PW-100 afterburning turbofan engine is described. The technique uses measured total and static pressures in the engine tailpipe and ambient static pressure to compute gross thrust. Empirically evaluated calibration factors account for three-dimensional effects, the effects of friction and mass transfer, and the effects of simplifying assumptions for solving the equations. Instrumentation requirements and the sensitivity of computed thrust to transducer errors are presented. NASA altitude facility tests on F100 engines (computed thrust versus measured thrust) are presented, and calibration factors obtained on one engine are shown to be applicable to the second engine by comparing the computed gross thrust. It is concluded that this thrust method is potentially suitable for flight test application and engine maintenance on production engines with a minimum amount of instrumentation.
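As a rough illustration of the pressure-based approach (a one-dimensional sketch under isentropic-flow assumptions, not the F100 calibration procedure; the lumped factor K stands in for the empirically evaluated calibration factors the abstract mentions):

    def gross_thrust(p_total, p_static, p_amb, area, gamma=1.33, K=1.0):
        # Hedged 1-D sketch: gross thrust from tailpipe total/static pressures.
        # K lumps the empirical calibration factors (three-dimensional effects,
        # friction, mass transfer) that the paper evaluates experimentally.
        pr = p_total / p_static
        mach2 = (2.0 / (gamma - 1.0)) * (pr ** ((gamma - 1.0) / gamma) - 1.0)
        # momentum term (gamma * p * M^2 * A) plus pressure-area terms:
        return K * (p_static * area * (gamma * mach2 + 1.0) - p_amb * area)

For example, gross_thrust(3.0e5, 1.6e5, 1.0e5, 0.5) yields roughly 1.4e5 N; all numbers here are hypothetical and serve only to show the order of magnitude of the calculation.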
Medical Decision-Making Among Elderly People in Long Term Care.
ERIC Educational Resources Information Center
Tymchuk, Alexander J.; And Others
1988-01-01
Presented informed consent information on high and low risk medical procedures to elderly persons in long term care facility in standard, simplified, or storybook format. Comprehension was significantly better for simplified and storybook formats. Ratings of decision-making ability approximated comprehension test results. Comprehension test…
A control-volume method for analysis of unsteady thrust augmenting ejector flows
NASA Technical Reports Server (NTRS)
Drummond, Colin K.
1988-01-01
A method for predicting transient thrust augmenting ejector characteristics is presented. The analysis blends classic self-similar turbulent jet descriptions with a control-volume mixing-region discretization to solicit transient effects in a new way. Division of the ejector into an inlet, diffuser, and mixing region corresponds with the assumption of viscous-dominated phenomena in the latter. Inlet and diffuser analyses are simplified by a quasi-steady analysis, justified by the assumption that pressure is the forcing function in those regions. Details of the theoretical foundation, the solution algorithm, and sample calculations are given.
A Bottom-Up Approach to Understanding Protein Layer Formation at Solid-Liquid Interfaces
Kastantin, Mark; Langdon, Blake B.; Schwartz, Daniel K.
2014-01-01
A common goal across different fields (e.g. separations, biosensors, biomaterials, pharmaceuticals) is to understand how protein behavior at solid-liquid interfaces is affected by environmental conditions. Temperature, pH, ionic strength, and the chemical and physical properties of the solid surface, among many factors, can control microscopic protein dynamics (e.g. adsorption, desorption, diffusion, aggregation) that contribute to macroscopic properties like time-dependent total protein surface coverage and protein structure. These relationships are typically studied through a top-down approach in which macroscopic observations are explained using analytical models that are based upon reasonable, but not universally true, simplifying assumptions about microscopic protein dynamics. Conclusions connecting microscopic dynamics to environmental factors can be heavily biased by potentially incorrect assumptions. In contrast, more complicated models avoid several of the common assumptions but require many parameters that have overlapping effects on predictions of macroscopic, average protein properties. Consequently, these models are poorly suited for the top-down approach. Because the sophistication incorporated into these models may ultimately prove essential to understanding interfacial protein behavior, this article proposes a bottom-up approach in which direct observations of microscopic protein dynamics specify parameters in complicated models, which then generate macroscopic predictions to compare with experiment. In this framework, single-molecule tracking has proven capable of making direct measurements of microscopic protein dynamics, but must be complemented by modeling to combine and extrapolate many independent microscopic observations to the macro-scale. The bottom-up approach is expected to better connect environmental factors to macroscopic protein behavior, thereby guiding rational choices that promote desirable protein behaviors. PMID:24484895
NASA Astrophysics Data System (ADS)
Şahin, Rıdvan; Liu, Peide
2017-07-01
Simplified neutrosophic set (SNS) is an appropriate tool for expressing the incompleteness, indeterminacy and uncertainty of the evaluation objects in a decision-making process. In this study, we define the concept of possibility SNS, which includes two types of information: the neutrosophic performance provided by the evaluation objects and its possibility degree, expressed as a value ranging from zero to one. Then, because the existing neutrosophic information aggregation models for SNSs cannot effectively fuse the two different types of information described above, we propose two novel neutrosophic aggregation operators considering possibility, named the possibility-induced simplified neutrosophic weighted arithmetic averaging operator and the possibility-induced simplified neutrosophic weighted geometric averaging operator, and discuss their properties. Moreover, we develop a useful method based on the proposed aggregation operators for solving a multi-criteria group decision-making problem with possibility simplified neutrosophic information, in which the weights of decision-makers and decision criteria are calculated based on an entropy measure. Finally, a practical example is utilised to show the practicality and effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Burton, Sharon P.; Chemyakin, Eduard; Liu, Xu; Knobelspiesse, Kirk; Stamnes, Snorre; Sawamura, Patricia; Moore, Richard H.; Hostetler, Chris A.; Ferrare, Richard A.
2016-11-01
There is considerable interest in retrieving profiles of aerosol effective radius, total number concentration, and complex refractive index from lidar measurements of extinction and backscatter at several wavelengths. The combination of three backscatter channels plus two extinction channels (3β + 2α) is particularly important since it is believed to be the minimum configuration necessary for the retrieval of aerosol microphysical properties and because the technological readiness of lidar systems permits this configuration on both an airborne and future spaceborne instrument. The second-generation NASA Langley airborne High Spectral Resolution Lidar (HSRL-2) has been making 3β + 2α measurements since 2012. The planned NASA Aerosol/Clouds/Ecosystems (ACE) satellite mission also recommends the 3β + 2α combination. Here we develop a deeper understanding of the information content and sensitivities of the 3β + 2α system in terms of aerosol microphysical parameters of interest. We use a retrieval-free methodology to determine the basic sensitivities of the measurements independent of retrieval assumptions and constraints. We calculate information content and uncertainty metrics using tools borrowed from the optimal estimation methodology based on Bayes' theorem, using a simplified forward model look-up table, with no explicit inversion. The forward model is simplified to represent spherical particles, monomodal log-normal size distributions, and wavelength-independent refractive indices. Since we only use the forward model with no retrieval, the given simplified aerosol scenario is applicable as a best case for all existing retrievals in the absence of additional constraints. Retrieval-dependent errors due to mismatch between retrieval assumptions and true atmospheric aerosols are not included in this sensitivity study, and neither are retrieval errors that may be introduced in the inversion process. The choice of a simplified model adds clarity to the understanding of the uncertainties in such retrievals, since it allows for separately assessing the sensitivities and uncertainties of the measurements alone that cannot be corrected by any potential or theoretical improvements to retrieval methodology but must instead be addressed by adding information content. The sensitivity metrics allow for identifying (1) information content of the measurements vs. a priori information; (2) error bars on the retrieved parameters; and (3) potential sources of cross-talk or "compensating" errors wherein different retrieval parameters are not independently captured by the measurements. The results suggest that the 3β + 2α measurement system is underdetermined with respect to the full suite of microphysical parameters considered in this study and that additional information is required, in the form of additional coincident measurements (e.g., sun-photometer or polarimeter) or a priori retrieval constraints. A specific recommendation is given for addressing cross-talk between effective radius and total number concentration.
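The information-content bookkeeping described here follows standard optimal-estimation practice; a minimal sketch (with a placeholder Jacobian and assumed covariances, not the paper's look-up table) computes the averaging kernel, its degrees of freedom for signal, and the posterior error bars:

    import numpy as np

    # 5 measurements (3 backscatter + 2 extinction) x 4 microphysical parameters.
    K = np.random.default_rng(1).normal(size=(5, 4))  # placeholder Jacobian
    S_e = np.diag([0.05] * 5) ** 2                    # assumed measurement-error covariance
    S_a = np.diag([1.0] * 4) ** 2                     # assumed a priori covariance

    S_post = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))
    A = S_post @ K.T @ np.linalg.inv(S_e) @ K         # averaging kernel
    print("DOF for signal:", np.trace(A))             # information supplied by the data
    print("posterior sigmas:", np.sqrt(np.diag(S_post)))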
NASA Astrophysics Data System (ADS)
Jusup, Marko; Iwami, Shingo; Podobnik, Boris; Stanley, H. Eugene
2015-12-01
Since the very inception of mathematical modeling in epidemiology, scientists have exploited the simplicity ingrained in the assumption of a well-mixed population. For example, perhaps the earliest susceptible-infectious-recovered (SIR) model, developed by L. Reed and W.H. Frost in the 1920s [1], included the well-mixed assumption that any two individuals in the population could meet each other. The problem is that, unlike many other simplifying assumptions used in epidemiological modeling whose validity holds in one situation or another, well-mixed populations are almost non-existent in reality because the nature of human socio-economic interactions is, for the most part, highly heterogeneous (e.g. [2-6]).
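For reference, the well-mixed SIR model in question reduces to three coupled rates; a minimal forward-Euler sketch with illustrative (not fitted) parameter values:

    import numpy as np

    beta, gamma, N = 0.3, 0.1, 1_000_000.0  # illustrative contact and recovery rates
    S, I, R = N - 1.0, 1.0, 0.0
    dt = 0.1
    for _ in range(int(300 / dt)):           # 300 days, forward Euler
        new_inf = beta * S * I / N * dt      # well-mixed contact term
        new_rec = gamma * I * dt
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    print(f"final epidemic size: {R / N:.2%}")

The well-mixed assumption enters through the single term beta * S * I / N, which treats every susceptible-infectious pair as equally likely to meet.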
Quick and Easy Rate Equations for Multistep Reactions
ERIC Educational Resources Information Center
Savage, Phillip E.
2008-01-01
Students rarely see closed-form analytical rate equations derived from underlying chemical mechanisms that contain more than a few steps unless restrictive simplifying assumptions (e.g., existence of a rate-determining step) are made. Yet, work published decades ago allows closed-form analytical rate equations to be written quickly and easily for…
USDA-ARS?s Scientific Manuscript database
Soil water flow models are based on a set of simplified assumptions about the mechanisms, processes, and parameters of water retention and flow. That causes errors in soil water flow model predictions. Soil water content monitoring data can be used to reduce the errors in models. Data assimilation (...
Solubility and Thermodynamics: An Introductory Experiment
NASA Astrophysics Data System (ADS)
Silberman, Robert G.
1996-05-01
This article describes a laboratory experiment suitable for high school or freshman chemistry students in which the solubility of potassium nitrate is determined at several different temperatures. The data collected are used to calculate the equilibrium constant, ΔG, ΔH, and ΔS for the dissolution reaction. The simplifying assumptions are noted in the article.
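The underlying calculation is the standard van't Hoff analysis; a short numeric sketch with hypothetical solubility-derived equilibrium constants (not the article's data):

    import numpy as np

    R = 8.314                                 # J/(mol K)
    T = np.array([293.0, 313.0, 333.0])       # K, assumed temperatures
    K = np.array([2.4, 5.9, 12.6])            # assumed equilibrium constants
    slope, intercept = np.polyfit(1.0 / T, np.log(K), 1)
    dH = -R * slope                           # van't Hoff: ln K = -dH/(R T) + dS/R
    dS = R * intercept
    dG = dH - T * dS                          # equivalently, -R T ln K
    print(dH, dS, dG)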
SSDA code to apply data assimilation in soil water flow modeling: Documentation and user manual
USDA-ARS?s Scientific Manuscript database
Soil water flow models are based on simplified assumptions about the mechanisms, processes, and parameters of water retention and flow. That causes errors in soil water flow model predictions. Data assimilation (DA) with the ensemble Kalman filter (EnKF) corrects modeling results based on measured s...
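For orientation, the EnKF analysis step at the core of such data assimilation can be sketched in a few lines; the perturbed-observation form below is a generic textbook version, not the SSDA implementation, and the state layout and observation operator are assumptions:

    import numpy as np

    def enkf_update(X, y, H, obs_var, rng):
        # X: state ensemble (n_state x n_ens); y: observations; H: obs operator
        n_obs, n_ens = len(y), X.shape[1]
        Xp = X - X.mean(axis=1, keepdims=True)           # ensemble anomalies
        P_HT = Xp @ (H @ Xp).T / (n_ens - 1)             # state-obs cross-covariance
        S = (H @ Xp) @ (H @ Xp).T / (n_ens - 1) + obs_var * np.eye(n_obs)
        K = P_HT @ np.linalg.inv(S)                      # Kalman gain
        Y = y[:, None] + np.sqrt(obs_var) * rng.standard_normal((n_obs, n_ens))
        return X + K @ (Y - H @ X)                       # perturbed-obs update

    rng = np.random.default_rng(0)
    X = rng.normal(0.3, 0.05, size=(4, 50))   # 4 soil layers, 50 ensemble members
    H = np.array([[1.0, 0.0, 0.0, 0.0]])      # observe the top layer only
    X_a = enkf_update(X, np.array([0.35]), H, 0.01**2, rng)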
The Signal Importance of Noise
ERIC Educational Resources Information Center
Macy, Michael; Tsvetkova, Milena
2015-01-01
Noise is widely regarded as a residual category--the unexplained variance in a linear model or the random disturbance of a predictable pattern. Accordingly, formal models often impose the simplifying assumption that the world is noise-free and social dynamics are deterministic. Where noise is assigned causal importance, it is often assumed to be a…
A survey of numerical models for wind prediction
NASA Technical Reports Server (NTRS)
Schonfeld, D.
1980-01-01
A literature review is presented of the work done in the numerical modeling of wind flows. Pertinent computational techniques are described, as well as the necessary assumptions used to simplify the governing equations. A steady state model is outlined, based on the data obtained at the Deep Space Communications complex at Goldstone, California.
Distinguishing Identical Particles and the Correct Counting of States
ERIC Educational Resources Information Center
de la Torre, A. C.; Martin, H. O.
2009-01-01
It is shown that quantum systems of identical particles can be treated as different when they are in well-differentiated states. This simplifying assumption allows for the consideration of quantum systems isolated from the rest of the universe and justifies many intuitive statements about identical systems. However, it is shown that this…
NASA Astrophysics Data System (ADS)
Ortiz, J. P.; Ortega, A. D.; Harp, D. R.; Boukhalfa, H.; Stauffer, P. H.
2017-12-01
Gas transport in unsaturated fractured media plays an important role in a variety of applications, including detection of underground nuclear explosions, transport from volatile contaminant plumes, shallow CO2 leakage from carbon sequestration sites, and methane leaks from hydraulic fracturing operations. Gas breakthrough times are highly sensitive to uncertainties associated with a variety of hydrogeologic parameters, including: rock type, fracture aperture, matrix permeability, porosity, and saturation. Furthermore, a couple of simplifying assumptions are typically employed when representing fracture flow and transport. Aqueous phase transport is typically considered insignificant compared to gas phase transport in unsaturated fracture flow regimes, and an assumption of instantaneous dissolution/volatilization of radionuclide gas is commonly used to reduce computational expense. We conduct this research using a twofold approach that combines laboratory gas experimentation and numerical modeling to verify and refine these simplifying assumptions in our current models of gas transport. Using a gas diffusion cell, we are able to measure air pressure transmission through fractured tuff core samples while also measuring Xe gas breakthrough measured using a mass spectrometer. We can thus create synthetic barometric fluctuations akin to those observed in field tests and measure the associated gas flow through the fracture and matrix pore space for varying degrees of fluid saturation. We then attempt to reproduce the experimental results using numerical models in the PFLOTRAN and FEHM codes to better understand the importance of different parameters and assumptions on gas transport. Our numerical approaches represent both single-phase gas flow with immobile water, as well as full multi-phase transport in order to test the validity of assuming immobile pore water. Our approaches also include the ability to simulate the reaction equilibrium kinetics of dissolution/volatilization in order to identify when the assumption of instantaneous equilibrium is reasonable. These efforts will aid us in our application of such models to larger, field-scale tests and improve our ability to predict gas breakthrough times.
An evaluation of complementary relationship assumptions
NASA Astrophysics Data System (ADS)
Pettijohn, J. C.; Salvucci, G. D.
2004-12-01
Complementary relationship (CR) models, based on Bouchet's (1963) somewhat heuristic CR hypothesis, are advantageous in their sole reliance on readily available climatological data. While Bouchet's CR hypothesis requires a number of questionable assumptions, CR models have been evaluated on variable time and length scales with relative success. Bouchet's hypothesis is grounded on the assumption that a change in potential evapotranspiration (Ep) is equal and opposite in sign to a change in actual evapotranspiration (Ea), i.e., -dEp / dEa = 1. In his mathematical rationalization of the CR, Morton (1965) similarly assumes that a change in potential sensible heat flux (Hp) is equal and opposite in sign to a change in actual sensible heat flux (Ha), i.e., -dHp / dHa = 1. CR models have maintained these assumptions while focusing on defining Ep and equilibrium evapotranspiration (Epo). We question Bouchet and Morton's aforementioned assumptions by revisiting CR derivation in light of a proposed variable, φ = -dEp/dEa. We evaluate φ in a simplified Monin Obukhov surface similarity framework and demonstrate how previous error in the application of CR models may be explained in part by previous assumptions that φ = 1. Finally, we discuss the various time and length scales to which φ may be evaluated.
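Written out, the generalization being examined is a standard rearrangement (shown here for clarity, with the integration from the equilibrium state assuming φ constant):

\[ \phi \equiv -\frac{dE_p}{dE_a}, \qquad E_p + \phi\,E_a = (1 + \phi)\,E_{po}, \]

which recovers Bouchet's symmetric form E_p + E_a = 2E_{po} in the special case φ = 1.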
Non-stationary noise estimation using dictionary learning and Gaussian mixture models
NASA Astrophysics Data System (ADS)
Hughes, James M.; Rockmore, Daniel N.; Wang, Yang
2014-02-01
Stationarity of the noise distribution is a common assumption in image processing. This assumption greatly simplifies denoising estimators and other model parameters and consequently assuming stationarity is often a matter of convenience rather than an accurate model of noise characteristics. The problematic nature of this assumption is exacerbated in real-world contexts, where noise is often highly non-stationary and can possess time- and space-varying characteristics. Regardless of model complexity, estimating the parameters of noise distributions in digital images is a difficult task, and estimates are often based on heuristic assumptions. Recently, sparse Bayesian dictionary learning methods were shown to produce accurate estimates of the level of additive white Gaussian noise in images with minimal assumptions. We show that a similar model is capable of accurately modeling certain kinds of non-stationary noise processes, allowing for space-varying noise in images to be estimated, detected, and removed. We apply this modeling concept to several types of non-stationary noise and demonstrate the model's effectiveness on real-world problems, including denoising and segmentation of images according to noise characteristics, which has applications in image forensics.
Elaboration Preferences and Differences in Learning Proficiency.
ERIC Educational Resources Information Center
Rohwer, William D., Jr.; Levin, Joel R.
The major emphasis of this study is on the comparative validities of paired-associate learning tests and IQ tests in predicting reading achievement. The study engages in a brief review of earlier research in order to examine the validity of two assumptions--that the construction and/or the use of a tactic that simplifies a learning task is one of…
ERIC Educational Resources Information Center
Sternod, Latisha; French, Brian
2016-01-01
The Watson-Glaser™ II Critical Thinking Appraisal (Watson-Glaser II; Watson & Glaser, 2010) is a revised version of the "Watson-Glaser Critical Thinking Appraisal®" (Watson & Glaser, 1994). The Watson-Glaser II introduces a simplified model of critical thinking, consisting of three subdimensions: recognize assumptions, evaluate…
Selected mesostructure properties in loblolly pine from Arkansas plantations
David E. Kretschmann; Steven M. Cramer; Roderic Lakes; Troy Schmidt
2006-01-01
Design properties of wood are currently established at the macroscale, assuming wood to be a homogeneous orthotropic material. The resulting variability from the use of such a simplified assumption has been handled by designing with lower percentile values and applying a number of factors to account for the wide statistical variation in properties. With managed...
NASA Technical Reports Server (NTRS)
Bursik, J. W.; Hall, R. M.
1980-01-01
The saturated equilibrium expansion approximation for two-phase flow often involves ideal-gas and latent-heat assumptions to simplify the solution procedure. This approach is well documented by Wegener and Mack and works best at low pressures where deviations from ideal-gas behavior are small. A thermodynamic expression for liquid mass fraction that is decoupled from the equations of fluid mechanics is used to compare the effects of the various assumptions on nitrogen-gas saturated equilibrium expansion flow starting at 8.81 atm, 2.99 atm, and 0.45 atm, which are conditions representative of transonic cryogenic wind tunnels. For the highest pressure case, the entire set of ideal-gas and latent-heat assumptions is shown to be in error by 62 percent for the values of heat capacity and latent heat. An approximation of the exact, real-gas expression is also developed using a constant, two-phase isentropic expansion coefficient, which results in an error of only 2 percent for the high-pressure case.
Experimental Methodology for Measuring Combustion and Injection-Coupled Responses
NASA Technical Reports Server (NTRS)
Cavitt, Ryan C.; Frederick, Robert A.; Bazarov, Vladimir G.
2006-01-01
A Russian scaling methodology for liquid rocket engines utilizing a single, full scale element is reviewed. The scaling methodology exploits the supercritical phase of the full scale propellants to simplify scaling requirements. Many assumptions are utilized in the derivation of the scaling criteria. A test apparatus design is presented to implement the Russian methodology and consequently verify the assumptions. This test apparatus will allow researchers to assess the usefulness of the scaling procedures and possibly enhance the methodology. A matrix of the apparatus capabilities for a RD-170 injector is also presented. Several methods to enhance the methodology have been generated through the design process.
Woodward, Alexander; Froese, Tom; Ikegami, Takashi
2015-02-01
The state space of a conventional Hopfield network typically exhibits many different attractors of which only a small subset satisfies constraints between neurons in a globally optimal fashion. It has recently been demonstrated that combining Hebbian learning with occasional alterations of normal neural states avoids this problem by means of self-organized enlargement of the best basins of attraction. However, so far it is not clear to what extent this process of self-optimization is also operative in real brains. Here we demonstrate that it can be transferred to more biologically plausible neural networks by implementing a self-optimizing spiking neural network model. In addition, by using this spiking neural network to emulate a Hopfield network with Hebbian learning, we attempt to make a connection between rate-based and temporal coding based neural systems. Although further work is required to make this model more realistic, it already suggests that the efficacy of the self-optimizing process is independent from the simplifying assumptions of a conventional Hopfield network. We also discuss natural and cultural processes that could be responsible for occasional alteration of neural firing patterns in actual brains. Copyright © 2014 Elsevier Ltd. All rights reserved.
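For readers unfamiliar with the baseline model, the conventional rate-based Hopfield network with Hebbian weights (the model the spiking network emulates; this sketch is generic, with arbitrary sizes) is only a few lines:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 64
    patterns = rng.choice([-1, 1], size=(3, n))      # stored states
    W = (patterns.T @ patterns) / n                  # Hebbian weight matrix
    np.fill_diagonal(W, 0.0)                         # no self-connections

    s = rng.choice([-1, 1], size=n)                  # random initial state
    for _ in range(200):                             # asynchronous updates
        i = rng.integers(n)
        s[i] = 1 if W[i] @ s >= 0 else -1
    overlaps = patterns @ s / n
    print("overlap with stored patterns:", overlaps) # near +/-1 => in an attractor

The self-optimization scheme discussed above would additionally reapply Hebbian learning to the converged states, enlarging the basins of the best attractors; that step is omitted here for brevity.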
Mathematical Modeling: Are Prior Experiences Important?
ERIC Educational Resources Information Center
Czocher, Jennifer A.; Moss, Diana L.
2017-01-01
Why are math modeling problems the source of such frustration for students and teachers? The conceptual understanding that students have when engaging with a math modeling problem varies greatly. They need opportunities to make their own assumptions and design the mathematics to fit these assumptions (CCSSI 2010). Making these assumptions is part…
20 CFR 404.1690 - Assumption when we make a finding of substantial failure.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Assumption when we make a finding of substantial failure. 404.1690 Section 404.1690 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD... responsibility for performing the disability determination function from the State agency, whether the assumption...
20 CFR 416.1090 - Assumption when we make a finding of substantial failure.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Assumption when we make a finding of substantial failure. 416.1090 Section 416.1090 Employees' Benefits SOCIAL SECURITY ADMINISTRATION SUPPLEMENTAL... responsibility for performing the disability determination function from the State agency, whether the assumption...
Provably-Secure (Chinese Government) SM2 and Simplified SM2 Key Exchange Protocols
Nam, Junghyun; Kim, Moonseong
2014-01-01
We revisit the SM2 protocol, which is widely used in Chinese commercial applications and by Chinese government agencies. Although it is by now standard practice for protocol designers to provide security proofs in widely accepted security models in order to assure protocol implementers of their security properties, the SM2 protocol does not have a proof of security. In this paper, we prove the security of the SM2 protocol in the widely accepted indistinguishability-based Bellare-Rogaway model under the elliptic curve discrete logarithm problem (ECDLP) assumption. We also present a simplified and more efficient version of the SM2 protocol with an accompanying security proof. PMID:25276863
Simplified Analysis of Pulse Detonation Rocket Engine Blowdown Gasdynamics and Performance
NASA Technical Reports Server (NTRS)
Morris, C. I.; Rodgers, Stephen L. (Technical Monitor)
2002-01-01
Pulse detonation rocket engines (PDREs) offer potential performance improvements over conventional designs, but represent a challenging modeling task. A simplified model for an idealized, straight-tube, single-shot PDRE blowdown process and thrust determination is described and implemented. In order to form an assessment of the accuracy of the model, the flowfield time history is compared to experimental data from Stanford University. Parametric studies of the effect of mixture stoichiometry, initial fill temperature, and blowdown pressure ratio on the performance of a PDRE are performed using the model. PDRE performance is also compared with a conventional steady-state rocket engine over a range of pressure ratios using similar gasdynamic assumptions.
A new model to predict weak-lensing peak counts. II. Parameter constraint strategies
NASA Astrophysics Data System (ADS)
Lin, Chieh-An; Kilbinger, Martin
2015-11-01
Context. Peak counts have been shown to be an excellent tool for extracting the non-Gaussian part of the weak lensing signal. Recently, we developed a fast stochastic forward model to predict weak-lensing peak counts. Our model is able to reconstruct the underlying distribution of observables for analysis. Aims: In this work, we explore and compare various strategies for constraining a parameter using our model, focusing on the matter density Ωm and the density fluctuation amplitude σ8. Methods: First, we examine the impact from the cosmological dependency of covariances (CDC). Second, we perform the analysis with the copula likelihood, a technique that makes a weaker assumption than does the Gaussian likelihood. Third, direct, non-analytic parameter estimations are applied using the full information of the distribution. Fourth, we obtain constraints with approximate Bayesian computation (ABC), an efficient, robust, and likelihood-free algorithm based on accept-reject sampling. Results: We find that neglecting the CDC effect enlarges parameter contours by 22% and that the covariance-varying copula likelihood is a very good approximation to the true likelihood. The direct techniques work well in spite of noisier contours. Concerning ABC, the iterative process converges quickly to a posterior distribution that is in excellent agreement with results from our other analyses. The time cost for ABC is reduced by two orders of magnitude. Conclusions: The stochastic nature of our weak-lensing peak count model allows us to use various techniques that approach the true underlying probability distribution of observables, without making simplifying assumptions. Our work can be generalized to other observables where forward simulations provide samples of the underlying distribution.
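The accept-reject core of ABC is simple to state; the toy sketch below (Gaussian mean estimation with an assumed tolerance, standing in for the peak-count forward model) shows the mechanics:

    import numpy as np

    rng = np.random.default_rng(0)
    observed = rng.normal(0.8, 1.0, size=200)     # stand-in for peak-count data
    obs_stat = observed.mean()                    # summary statistic

    accepted = []
    while len(accepted) < 1000:
        theta = rng.uniform(-2.0, 2.0)            # draw from the prior
        sim = rng.normal(theta, 1.0, size=200)    # forward-simulate a dataset
        if abs(sim.mean() - obs_stat) < 0.05:     # accept if summaries match
            accepted.append(theta)
    print(np.mean(accepted), np.std(accepted))    # approximate posterior

In practice, iterative (population Monte Carlo) variants shrink the tolerance over successive generations rather than fixing it in advance, which is what gives the quick convergence reported above.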
ERIC Educational Resources Information Center
Schmidt, Matthew M.; Lin, Meng-Fen Grace; Paek, Seungoh; MacSuga-Gage, Ashley; Gage, Nicholas A.
2017-01-01
The worldwide explosion in popularity of mobile devices has created a dramatic increase in mobile software (apps) that are quick and easy to find and install, cheap, disposable, and usually single purpose. Hence, teachers need an equally streamlined and simplified decision-making process to help them identify educational apps--an approach that…
Separating intrinsic from extrinsic fluctuations in dynamic biological systems.
Hilfinger, Andreas; Paulsson, Johan
2011-07-19
From molecules in cells to organisms in ecosystems, biological populations fluctuate due to the intrinsic randomness of individual events and the extrinsic influence of changing environments. The combined effect is often too complex for effective analysis, and many studies therefore make simplifying assumptions, for example ignoring either intrinsic or extrinsic effects to reduce the number of model assumptions. Here we mathematically demonstrate how two identical and independent reporters embedded in a shared fluctuating environment can be used to identify intrinsic and extrinsic noise terms, but also how these contributions are qualitatively and quantitatively different from what has been previously reported. Furthermore, we show for which classes of biological systems the noise contributions identified by dual-reporter methods correspond to the noise contributions predicted by correct stochastic models of either intrinsic or extrinsic mechanisms. We find that for broad classes of systems, the extrinsic noise from the dual-reporter method can be rigorously analyzed using models that ignore intrinsic stochasticity. In contrast, the intrinsic noise can be rigorously analyzed using models that ignore extrinsic stochasticity only under very special conditions that rarely hold in biology. Testing whether the conditions are met is rarely possible and the dual-reporter method may thus produce flawed conclusions about the properties of the system, particularly about the intrinsic noise. Our results contribute toward establishing a rigorous framework to analyze dynamically fluctuating biological systems.
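The conventional dual-reporter estimators that this work re-examines are, in their commonly used form (shown here for reference; the paper's point is precisely that interpreting them requires care):

    import numpy as np

    def dual_reporter_noise(x1, x2):
        # x1, x2: expression levels of two identical, independent reporters
        # measured across cells; returns the conventional estimators.
        m1, m2 = x1.mean(), x2.mean()
        eta_int2 = np.mean((x1 - x2) ** 2) / (2.0 * m1 * m2)   # "intrinsic"
        eta_ext2 = (np.mean(x1 * x2) - m1 * m2) / (m1 * m2)    # "extrinsic"
        return eta_int2, eta_ext2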
Confidence estimation for quantitative photoacoustic imaging
NASA Astrophysics Data System (ADS)
Gröhl, Janek; Kirchner, Thomas; Maier-Hein, Lena
2018-02-01
Quantification of photoacoustic (PA) images is one of the major challenges currently being addressed in PA research. Tissue properties can be quantified by correcting the recorded PA signal with an estimation of the corresponding fluence. Fluence estimation itself, however, is an ill-posed inverse problem which usually needs simplifying assumptions to be solved with state-of-the-art methods. These simplifications, as well as noise and artifacts in PA images reduce the accuracy of quantitative PA imaging (PAI). This reduction in accuracy is often localized to image regions where the assumptions do not hold true. This impedes the reconstruction of functional parameters when averaging over entire regions of interest (ROI). Averaging over a subset of voxels with a high accuracy would lead to an improved estimation of such parameters. To achieve this, we propose a novel approach to the local estimation of confidence in quantitative reconstructions of PA images. It makes use of conditional probability densities to estimate confidence intervals alongside the actual quantification. It encapsulates an estimation of the errors introduced by fluence estimation as well as signal noise. We validate the approach using Monte Carlo generated data in combination with a recently introduced machine learning-based approach to quantitative PAI. Our experiments show at least a two-fold improvement in quantification accuracy when evaluating on voxels with high confidence instead of thresholding signal intensity.
Simultaneous inference of phylogenetic and transmission trees in infectious disease outbreaks.
Klinkenberg, Don; Backer, Jantien A; Didelot, Xavier; Colijn, Caroline; Wallinga, Jacco
2017-05-01
Whole-genome sequencing of pathogens from host samples becomes more and more routine during infectious disease outbreaks. These data provide information on possible transmission events which can be used for further epidemiologic analyses, such as identification of risk factors for infectivity and transmission. However, the relationship between transmission events and sequence data is obscured by uncertainty arising from four largely unobserved processes: transmission, case observation, within-host pathogen dynamics and mutation. To properly resolve transmission events, these processes need to be taken into account. Recent years have seen much progress in theory and method development, but existing applications make simplifying assumptions that often break up the dependency between the four processes, or are tailored to specific datasets with matching model assumptions and code. To obtain a method with wider applicability, we have developed a novel approach to reconstruct transmission trees with sequence data. Our approach combines elementary models for transmission, case observation, within-host pathogen dynamics, and mutation, under the assumption that the outbreak is over and all cases have been observed. We use Bayesian inference with MCMC for which we have designed novel proposal steps to efficiently traverse the posterior distribution, taking account of all unobserved processes at once. This allows for efficient sampling of transmission trees from the posterior distribution, and robust estimation of consensus transmission trees. We implemented the proposed method in a new R package phybreak. The method performs well in tests of both new and published simulated data. We apply the model to five datasets on densely sampled infectious disease outbreaks, covering a wide range of epidemiological settings. Using only sampling times and sequences as data, our analyses confirmed the original results or improved on them: the more realistic infection times place more confidence in the inferred transmission trees.
An analysis of running skyline load path.
Ward W. Carson; Charles N. Mann
1971-01-01
This paper is intended for those who wish to prepare an algorithm to determine the load path of a running skyline. The mathematics of a simplified approach to this running skyline design problem are presented. The approach employs assumptions which reduce the complexity of the problem to the point where it can be solved on desk-top computers of limited capacities. The...
Stratosphere circulation on tidally locked ExoEarths
NASA Astrophysics Data System (ADS)
Carone, L.; Keppens, R.; Decin, L.; Henning, Th.
2018-02-01
Stratosphere circulation is important to interpret abundances of photochemically produced compounds like ozone which we aim to observe to assess habitability of exoplanets. We thus investigate a tidally locked ExoEarth scenario for TRAPPIST-1b, TRAPPIST-1d, Proxima Centauri b and GJ 667 C f with a simplified 3D atmosphere model and for different stratospheric wind breaking assumptions.
Code of Federal Regulations, 2010 CFR
2010-04-01
... grouping rules of paragraph (c)(2)(iii) of this section. Separate charts are provided for ages 55, 60, and...) Simplified presentations permitted—(A) Grouping of certain optional forms. Two or more optional forms of... starting date, a reasonable assumption for the age of the participant's spouse, or, in the case of a...
A nonlinear theory for elastic plates with application to characterizing paper properties
M. W. Johnson; Thomas J. Urbanik
1984-03-01
A theory of thin plates which is physically as well as kinematically nonlinear is developed and used to characterize elastic material behavior for arbitrary stretching and bending deformations. It is developed from a few clearly defined assumptions and uses a unique treatment of strain energy. An effective strain concept is introduced to simplify the theory to a...
[Simplified identification and filter device of carbon dioxide].
Mei, Xue-qin; Zhang, Yi-ping
2009-11-01
This paper presents the design and implementation of a simplified device to identify and filter carbon dioxide. The gas passes through a test interface containing wet litmus paper before entering the abdominal cavity. Carbon dioxide dissolving in water forms an acidic solution that changes the color of the litmus paper, identifying the gas and thereby avoiding the error of connecting the wrong gas during endoscopic surgery.
NASA Astrophysics Data System (ADS)
Ferrara, Alessandro; Polverino, Pierpaolo; Pianese, Cesare
2018-06-01
This paper proposes an analytical model of the water content of the electrolyte of a Proton Exchange Membrane Fuel Cell. The model is designed by accounting for several simplifying assumptions, which make the model suitable for on-board/online water management applications, while ensuring good accuracy for the considered phenomena with respect to advanced numerical solutions. The achieved analytical solution, expressing electrolyte water content, is compared with that obtained by means of a complex numerical approach used to solve the same mathematical problem. The achieved results show that the mean error is below 5% for electrode water content values ranging from 2 to 15 (given as boundary conditions), and does not exceed 0.26% for electrode water content above 5. These results prove the capability of the solution to correctly model electrolyte water content at any operating condition, aiming at integration into more complex frameworks (e.g., cell or stack models) related to fuel cell simulation, monitoring, control, diagnosis and prognosis.
Wu, Jiang; Li, Jia; Xu, Zhenming
2009-08-15
Electrostatic separation presents an effective and environmentally friendly way of recycling metals and nonmetals from ground waste electrical and electronic equipment (WEEE). For this process, the trajectory of the conductive particle is significant, and several models have been established. However, the results of previous studies are limited by simplifying assumptions that lead to a notable discrepancy between model predictions and experimental results. In the present research, a roll-type corona-electrostatic separator and ground printed circuit board (PCB) wastes were used to investigate the trajectory of the conductive particle. Two factors, the air drag force and the different charging situations, were introduced into the improved model. Their effects were analyzed and an improved model for the theoretical trajectory of the conductive particle was established. Compared with the previous one, the improved model shows good agreement with the experimental results. It provides positive guidance for separator design and represents progress toward recycling the metals and nonmetals from WEEE.
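A bare-bones version of such a trajectory computation (Stokes drag assumed for simplicity, uniform field, placeholder particle properties; the paper's improved model treats drag and charging in more detail) might look like:

    import numpy as np

    q, m, r = 1e-11, 1e-6, 5e-4            # charge (C), mass (kg), radius (m); placeholders
    E = np.array([3e5, 0.0])               # electric field (V/m), assumed uniform
    g = np.array([0.0, -9.81])             # gravity (m/s^2)
    mu = 1.8e-5                            # air viscosity (Pa s)
    pos, vel = np.zeros(2), np.zeros(2)
    dt = 1e-4
    for _ in range(2000):                  # 0.2 s of flight, forward Euler
        drag = -6.0 * np.pi * mu * r * vel # Stokes drag force
        vel = vel + (q * E + m * g + drag) / m * dt
        pos = pos + vel * dt
    print(pos)                             # particle displacement after 0.2 s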
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kinney, E.L.; Caldwell, J.W.
1990-07-01
Whereas the total mortality rate for sarcoidosis is 0.2 per 100,000, the prognosis, when the heart is involved, is very much worse. The authors used the difference in mortality rate to infer whether thallium 201 myocardial perfusion scan abnormalities correspond to myocardial sarcoid by making the simplifying assumption that if they do, then patients with abnormal scans will be found to have a death rate similar to patients with sarcoid heart disease. The authors therefore analyzed complete survival data on 52 sarcoid patients without cardiac symptoms an average of eighty-nine months after they had been scanned as part of a protocol. By use of survival analysis (the Cox proportional hazards model), the only variable that was significantly associated with survival was age. The patients' scan pattern, treatment status, gender, and race were not significantly related to survival. The authors conclude that thallium myocardial perfusion scans cannot reliably be used to diagnose sarcoid heart disease in sarcoid patients without cardiac symptoms.
Simplifying the representation of complex free-energy landscapes using sketch-map
Ceriotti, Michele; Tribello, Gareth A.; Parrinello, Michele
2011-01-01
A new scheme, sketch-map, for obtaining a low-dimensional representation of the region of phase space explored during an enhanced dynamics simulation is proposed. We show evidence, from an examination of the distribution of pairwise distances between frames, that some features of the free-energy surface are inherently high-dimensional. This makes dimensionality reduction problematic because the data do not satisfy the assumptions made in conventional manifold learning algorithms. We therefore propose that when dimensionality reduction is performed on trajectory data one should think of the resultant embedding as a quickly sketched set of directions rather than a road map. In other words, the embedding tells one about the connectivity between states but does not provide the vectors that correspond to the slow degrees of freedom. This realization informs the development of sketch-map, which endeavors to reproduce the proximity information from the high-dimensionality description in a space of lower dimensionality even when a faithful embedding is not possible. PMID:21730167
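As a stand-in illustration of proximity-preserving embedding (plain metric MDS on sigmoid-transformed distances; sketch-map's actual transform and optimizer differ), one could write:

    import numpy as np
    from sklearn.manifold import MDS

    rng = np.random.default_rng(0)
    frames = rng.normal(size=(100, 30))              # placeholder trajectory data
    D = np.linalg.norm(frames[:, None] - frames[None, :], axis=-1)
    F = 1.0 - 1.0 / (1.0 + (D / np.median(D)) ** 2)  # sigmoid-like distance transform
    emb = MDS(n_components=2, dissimilarity="precomputed",
              random_state=0).fit_transform(F)
    print(emb.shape)                                 # (100, 2) sketch coordinates

The sigmoid transform compresses both very short and very long distances, so the optimizer concentrates on reproducing mid-range proximities, which is the intuition behind treating the result as a sketch rather than a faithful map.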
A Comparison of Crater-Size Scaling and Ejection-Speed Scaling During Experimental Impacts in Sand
NASA Technical Reports Server (NTRS)
Anderson, J. L. B.; Cintala, M. J.; Johnson, M. K.
2014-01-01
Non-dimensional scaling relationships are used to understand various cratering processes including final crater sizes and the excavation of material from a growing crater. The principal assumption behind these scaling relationships is that these processes depend on a combination of the projectile's characteristics, namely its diameter, density, and impact speed. This simplifies the impact event into a single point-source. So long as the process of interest is beyond a few projectile radii from the impact point, the point-source assumption holds. These assumptions can be tested through laboratory experiments in which the initial conditions of the impact are controlled and resulting processes measured directly. In this contribution, we continue our exploration of the congruence between crater-size scaling and ejection-speed scaling relationships. In particular, we examine a series of experimental suites in which the projectile diameter and average grain size of the target are varied.
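The point-source scaling relationships referred to above are conventionally expressed in pi-group form; for gravity-dominated cratering the commonly quoted power law is

\[ \pi_V = \frac{\rho V}{m}, \qquad \pi_2 = \frac{g a}{U^2}, \qquad \pi_V = K_1\,\pi_2^{-\beta}, \]

where m, a, and U are the projectile mass, radius, and impact speed, ρ the target density, V the crater volume, and K_1 and β material-dependent constants (β ≈ 0.5 is a typical literature value for dry sand, quoted here only for orientation).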
Practical modeling approaches for geological storage of carbon dioxide.
Celia, Michael A; Nordbotten, Jan M
2009-01-01
The relentless increase of anthropogenic carbon dioxide emissions and the associated concerns about climate change have motivated new ideas about carbon-constrained energy production. One technological approach to control carbon dioxide emissions is carbon capture and storage, or CCS. The underlying idea of CCS is to capture the carbon before it is emitted to the atmosphere and store it somewhere other than the atmosphere. Currently, the most attractive option for large-scale storage is in deep geological formations, including deep saline aquifers. Many physical and chemical processes can affect the fate of the injected CO2, and the overall mathematical description of the complete system becomes very complex. Our approach to the problem has been to reduce complexity as much as possible, so that we can focus on the few truly important questions about the injected CO2, most of which involve leakage out of the injection formation. Toward this end, we have established a set of simplifying assumptions that allow us to derive simplified models, which can be solved numerically or, for the most simplified cases, analytically. These simplified models allow calculation of solutions to large-scale injection and leakage problems in ways that traditional multicomponent multiphase simulators cannot. Such simplified models provide important tools for system analysis, screening calculations, and overall risk-assessment calculations. We believe this is a practical and important approach to model geological storage of carbon dioxide. It also serves as an example of how complex systems can be simplified while retaining the essential physics of the problem.
NASA Astrophysics Data System (ADS)
Kelbert, A.; Egbert, G. D.; Sun, J.
2011-12-01
Poleward of 45-50 degrees (geomagnetic), observatory data are influenced significantly by auroral ionospheric current systems, invalidating the simplifying zonal dipole source assumption traditionally used for long period (T > 2 days) geomagnetic induction studies. Previous efforts to use these data to obtain the global electrical conductivity distribution in Earth's mantle have omitted high-latitude sites (further thinning an already sparse dataset) and/or corrected the affected transfer functions using a highly simplified model of auroral source currents. Although these strategies are partly effective, there remain clear suggestions of source contamination in most recent 3D inverse solutions - specifically, bands of conductive features are found near auroral latitudes. We report on a new approach to this problem, based on adjusting both external field structure and 3D Earth conductivity to fit observatory data. As an initial step towards full joint inversion we are using a two-step procedure. In the first stage, we adopt a simplified conductivity model, with a thin-sheet of variable conductance (to represent the oceans) overlying a 1D Earth, to invert observed magnetic fields for external source spatial structure. Input data for this inversion are obtained from frequency domain principal components (PC) analysis of geomagnetic observatory hourly mean values. To make this (essentially linear) inverse problem well-posed we regularize using covariances for source field structure that are consistent with well-established properties of auroral ionospheric (and magnetospheric) current systems, and basic physics of the EM fields. In the second stage, we use a 3D finite difference inversion code, with source fields estimated from the first stage, to further fit the observatory PC modes. We incorporate higher latitude data into the inversion, and maximize the amount of available information by directly inverting the magnetic field components of the PC modes, instead of transfer functions such as C-responses used previously. Recent improvements in accuracy and speed of the forward and inverse finite difference codes (a secondary field formulation and parallelization over frequencies) allow us to use a finer computational grid for inversion, and thus to model finer-scale features, making full use of the expanded data set. Overall, our approach presents an improvement over earlier observatory data interpretation techniques, making better use of the available data, and allowing us to explore the trade-offs between complications in source structure and heterogeneities in mantle conductivity. We will also report on progress towards applying the same approach to simultaneous source/conductivity inversion of shorter period observatory data, focusing especially on the daily variation band.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gagnon, Pieter; Barbose, Galen L.; Stoll, Brady
Misforecasting the adoption of customer-owned distributed photovoltaics (DPV) can have operational and financial implications for utilities; forecasting capabilities can be improved, but generally at a cost. This paper informs this decision-space by using a suite of models to explore the capacity expansion and operation of the Western Interconnection over a 15-year period across a wide range of DPV growth rates and misforecast severities. The system costs under a misforecast are compared against the costs under a perfect forecast, to quantify the costs of misforecasting. Using a simplified probabilistic method applied to these modeling results, an analyst can make a first-order estimate of the financial benefit of improving a utility's forecasting capabilities, and thus be better informed about whether to make such an investment. For example, under our base assumptions, a utility with 10 TWh per year of retail electric sales that initially estimates that DPV growth could range from 2% to 7.5% of total generation over the next 15 years could expect total present-value savings of approximately $4 million if it could reduce the severity of misforecasting to within ±25%. Utility resource planners can compare those savings against the costs needed to achieve that level of precision, to guide their decision on whether to make an investment in tools or resources.
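The first-order estimate described above amounts to comparing probability-weighted misforecast costs before and after narrowing the forecast error distribution. The sketch below shows only the arithmetic; its severities, probabilities, and scenario costs are placeholders standing in for the capacity-expansion model output, not values from the study.

```python
# Expected cost of DPV misforecast before and after improving forecast precision.
# Scenario costs (present value, $M) by misforecast severity are hypothetical.
cost_by_severity = {-0.75: 9.0, -0.25: 2.0, 0.0: 0.0, 0.25: 2.5, 0.75: 10.0}

def expected_cost(probabilities):
    return sum(p * cost_by_severity[s] for s, p in probabilities.items())

before = {-0.75: 0.2, -0.25: 0.2, 0.0: 0.2, 0.25: 0.2, 0.75: 0.2}
after = {-0.75: 0.0, -0.25: 0.4, 0.0: 0.2, 0.25: 0.4, 0.75: 0.0}  # within +/-25%
savings = expected_cost(before) - expected_cost(after)
print(f"expected present-value savings: ${savings:.1f}M")
```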
Short-cut Methods versus Rigorous Methods for Performance-evaluation of Distillation Configurations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramapriya, Gautham Madenoor; Selvarajah, Ajiththaa; Jimenez Cucaita, Luis Eduardo
2018-05-17
Here, this study demonstrates the efficacy of a short-cut method such as the Global Minimization Algorithm (GMA), that uses assumptions of ideal mixtures, constant molar overflow (CMO) and pinched columns, in pruning the search-space of distillation column configurations for zeotropic multicomponent separation, to provide a small subset of attractive configurations with low minimum heat duties. The short-cut method, due to its simplifying assumptions, is computationally efficient, yet reliable in identifying the small subset of useful configurations for further detailed process evaluation. This two-tier approach allows expedient search of the configuration space containing hundreds to thousands of candidate configurations for a given application.
Hypotheses of calculation of the water flow rate evaporated in a wet cooling tower
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bourillot, C.
1983-08-01
The method developed by Poppe at the University of Hannover to calculate the thermal performance of a wet cooling tower fill is presented. The formulation of Poppe is then validated using full-scale test data from a wet cooling tower at the power station at Neurath, Federal Republic of Germany. It is shown that the Poppe method predicts the evaporated water flow rate almost perfectly and the condensate content of the warm air with good accuracy over a wide range of ambient conditions. The simplifying assumptions of the Merkel theory are discussed, and the errors linked to these assumptions are systematically described, then illustrated with the test data.
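As a concrete companion to the Merkel discussion, here is a sketch of the standard four-point Chebyshev evaluation of the Merkel number under the Merkel assumptions (Lewis factor of one, saturated exit air, evaporation neglected in the water-side energy balance). The saturation-enthalpy table is approximate psychrometric data and the operating point is invented.

```python
import numpy as np

# Saturated-air enthalpy (kJ/kg dry air) vs water temperature (deg C), approximate.
T_tab = np.array([20.0, 25.0, 30.0, 35.0, 40.0, 45.0, 50.0])
h_tab = np.array([57.5, 76.5, 100.0, 129.5, 166.5, 213.5, 275.5])
h_sat = lambda T: np.interp(T, T_tab, h_tab)

def merkel_number(Tw_in, Tw_out, h_air_in, L_over_G, cpw=4.186):
    """Merkel number Me = KaV/L by the four-point Chebyshev method."""
    dT = Tw_in - Tw_out
    acc = 0.0
    for frac in (0.1, 0.4, 0.6, 0.9):
        Tw = Tw_out + frac * dT
        h_air = h_air_in + L_over_G * cpw * (Tw - Tw_out)  # air-side balance
        acc += 1.0 / (h_sat(Tw) - h_air)                   # enthalpy driving force
    return cpw * dT * acc / 4.0

print(f"Me = {merkel_number(40.0, 30.0, h_air_in=60.0, L_over_G=1.0):.3f}")
```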
Data Transmission Signal Design and Analysis
NASA Technical Reports Server (NTRS)
Moore, J. D.
1972-01-01
The error performances of several digital signaling methods are determined as a function of a specified signal-to-noise ratio. Results are obtained for Gaussian noise and impulse noise. Performance of a receiver for differentially encoded biphase signaling is obtained by extending the results of differential phase shift keying. The analysis presented obtains a closed-form answer through the use of some simplifying assumptions. The results give insight into the analysis problem; however, the actual error performance may show a degradation because of the assumptions made in the analysis. Bipolar signaling decision-threshold selection is also investigated. The optimum threshold depends on the signal-to-noise ratio and requires the use of an adaptive receiver.
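For context, the classic closed-form error rates under the Gaussian-noise assumption can be computed in a few lines. These are the standard textbook expressions for coherent BPSK and differentially coherent PSK; which simplifying assumptions the report itself adopted is not specified here, so treat this only as an illustration.

```python
import math

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_bpsk(ebn0):  # coherent antipodal signaling
    return q_function(math.sqrt(2.0 * ebn0))

def ber_dpsk(ebn0):  # differentially coherent PSK in Gaussian noise
    return 0.5 * math.exp(-ebn0)

for db in (4, 6, 8, 10):
    ebn0 = 10 ** (db / 10)
    print(f"Eb/N0 = {db:2d} dB: BPSK {ber_bpsk(ebn0):.2e}, DPSK {ber_dpsk(ebn0):.2e}")
```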
Magisterial Decision-Making: How Fifteen Stipendiary Magistrates Make Court-Room Decisions.
ERIC Educational Resources Information Center
Lawrence, Jeanette A.; Browne, Myra A.
This report describes the cognitive procedures which a group of Australian stipendiary magistrates utilize in court to make decisions. The study was based on an assumption that magistrates represent a group of professionals whose work involves making decisions of human significance, and on an assumption that the magistrates' own perceptions of their ways of…
Amplified effect of mild plastic anisotropy on residual stress and strain anisotropy
Prime, Michael B.
2017-07-01
Axisymmetric indentation of a geometrically axisymmetric disk produced residual stresses by non-uniform plastic deformation. The 2024 aluminum plate used to make the disk exhibited mild plastic anisotropy with about 10% lower strength in the transverse direction compared to the rolling and through-thickness directions. Residual stresses and strains in the disk were measured with neutron diffraction, slitting, the contour method, x-ray diffraction and hole drilling. Surprisingly, the residual-stress anisotropy measured in the disk was about 40%, the residual-strain anisotropy was an impressive 100%, and the residual stresses were higher in the weaker direction. The high residual stress anisotropy relative to the mild plastic anisotropy and the direction of the highest stress are explained by considering the mechanics of indentation: constraint on deformation provided by the material surrounding the indentation and preferential deformation in the most compliant direction for incremental deformation. By contrast, the much larger anisotropy in residual strain compared to that in residual stress is independent of the fabrication process and is instead explained by considering Hookean elasticity. For Poisson's ratio of 1/3, the relationship simplifies to the residual strain anisotropy equaling the square of the residual stress anisotropy, which matches the observed results (2 ≈ 1.4^2). Furthermore, a lesson from this study is that to accurately predict residual stresses and strains, one must be wary of seemingly reasonable simplifying assumptions such as neglecting mild plastic anisotropy.
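The Hookean-elasticity argument can be checked in a few lines. Assuming a plane-stress state (an assumption on my part), the in-plane principal strain ratio for a stress ratio r is (r − ν)/(1 − νr), which for ν = 1/3 and r = 1.4 gives 2.0, close to r² = 1.96 as reported.

```python
def strain_ratio(stress_ratio, nu=1.0 / 3.0):
    """In-plane principal strain ratio from plane-stress Hooke's law:
    eps_x / eps_y = (s_x - nu*s_y) / (s_y - nu*s_x), with s_x/s_y given."""
    r = stress_ratio
    return (r - nu) / (1.0 - nu * r)

r = 1.4                 # measured residual-stress anisotropy
print(strain_ratio(r))  # -> 2.0, close to r**2 = 1.96
```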
Marginal Loss Calculations for the DCOPF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eldridge, Brent; O'Neill, Richard P.; Castillo, Andrea R.
2016-12-05
The purpose of this paper is to explain some aspects of including a marginal line loss approximation in the DCOPF. The DCOPF optimizes electric generator dispatch using simplified power flow physics. Since the standard assumptions in the DCOPF include a lossless network, a number of modifications have to be added to the model. Calculating marginal losses allows the DCOPF to optimize the location of power generation, so that generators that are closer to demand centers are relatively cheaper than remote generation. The problem formulations discussed in this paper will simplify many aspects of practical electric dispatch implementations in use today, but will include sufficient detail to demonstrate a few points with regard to the handling of losses.
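A toy two-bus illustration of why losses matter for locating generation follows. This is a deliberately simplified stand-in, not the paper's formulation: a constant delivery-loss factor makes cheap remote energy effectively dearer, and the LP shifts dispatch accordingly. All costs and limits are made up.

```python
from scipy.optimize import linprog

# Generator 1 is cheap but remote; generator 2 sits at the load bus.
c = [20.0, 25.0]                       # $/MWh offers (hypothetical)
loss_factor = 0.08                     # fraction of g1 lost in delivery
demand = 100.0                         # MW at bus 2

A_eq = [[1.0 - loss_factor, 1.0]]      # delivered-power balance
b_eq = [demand]
bounds = [(0.0, 80.0), (0.0, 120.0)]   # MW capacity limits

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
g1, g2 = res.x
print(f"g1 = {g1:.1f} MW, g2 = {g2:.1f} MW, cost = ${res.fun:.0f}/h")
```

With these numbers, remote energy costs 20/0.92 ≈ 21.7 $/MWh delivered, still cheaper than 25 $/MWh, so generator 1 runs at its limit and generator 2 covers the remainder; raising the loss factor flips that ordering.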
NASA Astrophysics Data System (ADS)
Shiota, Koki; Kai, Kazuho; Nagaoka, Shiro; Tsuji, Takuto; Wakahara, Akihiro; Rusop, Mohamad
2016-07-01
An educational method that includes designing, making, and evaluating actual semiconductor devices while learning the theory is one of the best ways to obtain a fundamental understanding of device physics and to cultivate the ability to generate original ideas using knowledge of semiconductor devices. In this paper, a simplified boron thermal diffusion process using a sol-gel material under a normal air environment is proposed based on a simple hypothesis, and its reproducibility and reliability are investigated, with the aim of simplifying the diffusion process for making educational devices such as p-n junction, bipolar, and pMOS devices. As a result, this method successfully produced a p+ region on the surface of n-type silicon substrates with good reproducibility, and good rectification properties of the p-n junctions were obtained. This result indicates that the process may be applicable to making pMOS or bipolar transistors, and it suggests a variety of possible applications in the educational field to foster the imagination of new devices.
Flux Jacobian Matrices For Equilibrium Real Gases
NASA Technical Reports Server (NTRS)
Vinokur, Marcel
1990-01-01
Improved formulation includes generalized Roe average and extension to three dimensions. Flux Jacobian matrices derived for use in numerical solutions of conservation-law differential equations of inviscid flows of ideal gases extended to real gases. Real-gas formulation of these matrices retains simplifying assumptions of thermodynamic and chemical equilibrium, but adds effects of vibrational excitation, dissociation, and ionization of gas molecules via general equation of state.
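For the ideal-gas baseline that the real-gas formulation generalizes, the one-dimensional flux Jacobian in conservative variables can be written down directly; a sketch follows, with an eigenvalue check against u − a, u, u + a. In the real-gas case, the role played here by the constant ratio of specific heats is taken over by equilibrium thermodynamic derivatives.

```python
import numpy as np

def euler_flux_jacobian_1d(rho, u, p, gamma=1.4):
    """Jacobian A = dF/dU of the 1D Euler flux for a calorically perfect gas,
    in conservative variables U = (rho, rho*u, E)."""
    E = p / (gamma - 1.0) + 0.5 * rho * u**2
    H = (E + p) / rho  # total specific enthalpy
    return np.array([
        [0.0, 1.0, 0.0],
        [0.5 * (gamma - 3.0) * u**2, (3.0 - gamma) * u, gamma - 1.0],
        [u * (0.5 * (gamma - 1.0) * u**2 - H), H - (gamma - 1.0) * u**2, gamma * u],
    ])

A = euler_flux_jacobian_1d(rho=1.0, u=100.0, p=101325.0)
print(np.sort(np.linalg.eigvals(A).real))  # expect u - a, u, u + a
```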
Evolutionary image simplification for lung nodule classification with convolutional neural networks.
Lückehe, Daniel; von Voigt, Gabriele
2018-05-29
Understanding decisions of deep learning techniques is important. Especially in the medical field, the reasons for a decision in a classification task are as crucial as the pure classification results. In this article, we propose a new approach to compute relevant parts of a medical image. Knowing the relevant parts makes it easier to understand decisions. In our approach, a convolutional neural network is employed to learn structures of images of lung nodules. Then, an evolutionary algorithm is applied to compute a simplified version of an unknown image based on the learned structures by the convolutional neural network. In the simplified version, irrelevant parts are removed from the original image. In the results, we show simplified images which allow the observer to focus on the relevant parts. In these images, more than 50% of the pixels are simplified. The simplified pixels do not change the meaning of the images based on the learned structures by the convolutional neural network. An experimental analysis shows the potential of the approach. Besides the examples of simplified images, we analyze the run time development. Simplified images make it easier to focus on relevant parts and to find reasons for a decision. The combination of an evolutionary algorithm employing a learned convolutional neural network is well suited for the simplification task. From a research perspective, it is interesting which areas of the images are simplified and which parts are taken as relevant.
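To make the search concrete, here is a minimal (1+1)-evolution sketch of the idea built from stand-in components: the "network" is a fixed random linear scorer and the image is random noise, both hypothetical placeholders for the trained CNN and a nodule image. The loop flips mask bits and keeps a child only if the score is preserved within tolerance.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((8, 8))   # stand-in for a nodule image
w = rng.random((8, 8))     # stand-in for the trained CNN

def score(x):              # dummy "network output"
    return float((w * x).sum())

def simplify(x, mask, fill=0.0):  # masked pixels are replaced
    return np.where(mask, fill, x)

target = score(img)
tol = 0.05 * abs(target)
mask = np.zeros(img.shape, dtype=bool)  # start: nothing simplified

# (1+1)-evolutionary loop: mutate the mask, keep the child if it simplifies
# at least as many pixels while the score stays within tolerance.
for _ in range(2000):
    child = mask.copy()
    i, j = rng.integers(0, 8, size=2)
    child[i, j] = not child[i, j]
    ok = abs(score(simplify(img, child)) - target) <= tol
    if ok and child.sum() >= mask.sum():
        mask = child

print(f"simplified pixels: {mask.sum()} / {img.size}")
```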
Evaluation of rate law approximations in bottom-up kinetic models of metabolism.
Du, Bin; Zielinski, Daniel C; Kavvas, Erol S; Dräger, Andreas; Tan, Justin; Zhang, Zhen; Ruggiero, Kayla E; Arzumanyan, Garri A; Palsson, Bernhard O
2016-06-06
The mechanistic description of enzyme kinetics in a dynamic model of metabolism requires specifying the numerical values of a large number of kinetic parameters. The parameterization challenge is often addressed through the use of simplifying approximations to form reaction rate laws with reduced numbers of parameters. Whether such simplified models can reproduce dynamic characteristics of the full system is an important question. In this work, we compared the local transient response properties of dynamic models constructed using rate laws with varying levels of approximation. These approximate rate laws were: 1) a Michaelis-Menten rate law with measured enzyme parameters, 2) a Michaelis-Menten rate law with approximated parameters, using the convenience kinetics convention, 3) a thermodynamic rate law resulting from a metabolite saturation assumption, and 4) a pure chemical reaction mass action rate law that removes the role of the enzyme from the reaction kinetics. We utilized in vivo data for the human red blood cell to compare the effect of rate law choices against the backdrop of physiological flux and concentration differences. We found that the Michaelis-Menten rate law with measured enzyme parameters yields an excellent approximation of the full system dynamics, while other assumptions cause greater discrepancies in system dynamic behavior. However, iteratively replacing mechanistic rate laws with approximations resulted in a model that retains a high correlation with the true model behavior. Investigating this consistency, we determined that the order of magnitude differences among fluxes and concentrations in the network were greatly influential on the network dynamics. We further identified reaction features such as thermodynamic reversibility, high substrate concentration, and lack of allosteric regulation, which make certain reactions more suitable for rate law approximations. Overall, our work generally supports the use of approximate rate laws when building large scale kinetic models, due to the key role that physiologically meaningful flux and concentration ranges play in determining network dynamics. However, we also showed that detailed mechanistic models show a clear benefit in prediction accuracy when data is available. The work here should help to provide guidance to future kinetic modeling efforts on the choice of rate law and parameterization approaches.
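As a reference for the approximation levels being compared, here is a sketch of representative one-substrate rate laws. The convenience-kinetics form shown is one common parameter-sparse reversible variant, and all constants are illustrative, not parameters from the red blood cell model.

```python
import numpy as np

def michaelis_menten(s, vmax, km):
    """Irreversible Michaelis-Menten rate law."""
    return vmax * s / (km + s)

def convenience_kinetics(s, p, vmax, ks, kp, keq):
    """A reversible convenience-kinetics form for S <-> P: saturable in both
    species, with the net rate vanishing at equilibrium (p/s = keq)."""
    num = vmax * (s / ks) * (1.0 - (p / s) / keq)
    return num / (1.0 + s / ks + p / kp)

def mass_action(s, k):
    """Pure mass-action rate; the enzyme plays no role."""
    return k * s

s = np.linspace(0.01, 5.0, 5)
print(michaelis_menten(s, vmax=1.0, km=0.5))
print(convenience_kinetics(s, p=0.1, vmax=1.0, ks=0.5, kp=0.5, keq=10.0))
print(mass_action(s, k=1.0))
```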
Photographic and drafting techniques simplify method of producing engineering drawings
NASA Technical Reports Server (NTRS)
Provisor, H.
1968-01-01
Combination of photographic and drafting techniques has been developed to simplify the preparation of three dimensional and dimetric engineering drawings. Conventional photographs can be converted to line drawings by making copy negatives on high contrast film.
The impact of management science on political decision making
NASA Technical Reports Server (NTRS)
White, M. J.
1971-01-01
The possible impact on public policy and organizational decision making of operations research/management science (OR/MS) is discussed. Criticisms based on the assumption that OR/MS will have influence on decision making and criticisms based on the assumption that it will have no influence are described. New directions in the analysis of analysis and in thinking about policy making are also considered.
A practical iterative PID tuning method for mechanical systems using parameter chart
NASA Astrophysics Data System (ADS)
Kang, M.; Cheong, J.; Do, H. M.; Son, Y.; Niculescu, S.-I.
2017-10-01
In this paper, we propose a method of iterative proportional-integral-derivative parameter tuning for mechanical systems that possibly possess hidden mechanical resonances, using a parameter chart which visualises the closed-loop characteristics in a 2D parameter space. We employ a hypothetical assumption that the considered mechanical systems have an upper limit on the derivative feedback gain, from which the feasible region in the parameter chart becomes fairly reduced and thus the gain selection can be extremely simplified. Then, a two-directional parameter search is carried out within the feasible region in order to find the best set of parameters. Experimental results show the validity of the assumption used and the proposed parameter tuning method.
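A minimal sketch of the parameter-chart idea for a hypothetical plant G(s) = 1/(s(s+1)): with the derivative gain pinned at its assumed upper limit, scan a 2D (Kp, Ki) grid and keep the stable point whose slowest closed-loop pole is fastest. The plant, gain limit, and grid are all illustrative choices, not taken from the paper.

```python
import numpy as np

KD_MAX = 2.0  # assumed upper limit on the derivative gain

def closed_loop_poles(kp, ki, kd):
    """Poles of PID + plant G(s) = 1/(s(s+1)) under unity feedback:
    characteristic polynomial s^3 + (1 + kd)s^2 + kp*s + ki."""
    return np.roots([1.0, 1.0 + kd, kp, ki])

best = None
for kp in np.linspace(0.1, 10.0, 50):     # two-directional grid search
    for ki in np.linspace(0.1, 10.0, 50):
        margin = closed_loop_poles(kp, ki, KD_MAX).real.max()
        if margin < 0 and (best is None or margin < best[0]):
            best = (margin, kp, ki)

print(f"best (kp, ki) = ({best[1]:.2f}, {best[2]:.2f}), "
      f"slowest pole real part = {best[0]:.3f}")
```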
Extended Analytic Device Optimization Employing Asymptotic Expansion
NASA Technical Reports Server (NTRS)
Mackey, Jonathan; Sehirlioglu, Alp; Dynsys, Fred
2013-01-01
Analytic optimization of a thermoelectric junction often introduces several simplifying assumptions, including constant material properties, fixed known hot and cold shoe temperatures, and thermally insulated leg sides. In fact, all of these simplifications will have an effect on device performance, ranging from negligible to significant depending on conditions. Numerical methods, such as Finite Element Analysis or iterative techniques, are often used to perform more detailed analysis and account for these simplifications. While numerical methods may stand as a suitable solution scheme, they are weak in gaining physical understanding and only serve to optimize through iterative searching techniques. Analytic and asymptotic expansion techniques can be used to solve the governing system of thermoelectric differential equations with fewer or less severe assumptions than the classic case. Analytic methods can provide meaningful closed-form solutions and generate better physical understanding of the conditions for when simplifying assumptions may be valid. In obtaining the analytic solutions, a set of dimensionless parameters, which characterize all thermoelectric couples, is formulated and provides the limiting cases for validating assumptions. Presentation includes optimization of both classic rectangular couples as well as practically and theoretically interesting cylindrical couples, using optimization parameters physically meaningful to a cylindrical couple. Solutions incorporate the physical behavior for i) thermal resistance of hot and cold shoes, ii) variable material properties with temperature, and iii) lateral heat transfer through leg sides.
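For orientation, the classic closed-form result that these analytic treatments generalize is the maximum conversion efficiency of a couple in terms of the mean dimensionless figure of merit, valid under the simplifying assumptions listed above. The temperatures and ZT value below are illustrative.

```python
import math

def max_efficiency(t_hot, t_cold, zt_mean):
    """Classic thermoelectric maximum efficiency under the textbook
    assumptions (constant properties, fixed shoe temperatures,
    insulated leg sides)."""
    carnot = (t_hot - t_cold) / t_hot
    m = math.sqrt(1.0 + zt_mean)
    return carnot * (m - 1.0) / (m + t_cold / t_hot)

print(f"{max_efficiency(800.0, 300.0, zt_mean=1.0):.3f}")  # ~0.145
```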
The US Forest Service Framework for Climate Adaptation (Invited)
NASA Astrophysics Data System (ADS)
Cleaves, D.
2013-12-01
Public lands are changing in response to climate change and related stressors such that resilience-based management plans that integrate climate-smart adaptation are needed. The goal of these plans is to facilitate land managers' consideration of a range of potential futures while simplifying the complex array of choices and assumptions in a rigorous, defensible manner. The foundation for climate response has been built into recent Forest Service policies, guidance, and strategies like the climate change Roadmap and Scorecard; 2012 Planning Rule; Cohesive Wildland Fire Management strategy; and Inventory, Monitoring & Assessment strategy. This has driven the need for information that is relevant, timely, and accessible to support vulnerability assessments and risk management to aid in designing and choosing alternatives and ranking actions. Managers must also consider carbon and greenhouse gas implications as well as understand the nature and level of uncertainties. The major adjustments that need to be made involve: improving risk-based decision making and working with predictive models and information; evaluating underlying assumptions against new realities and possibilities being revealed by climate science; integrating carbon cycle science and a new ethic of carbon stewardship into management practices; and preparing systems for inevitable changes to ameliorate negative effects, capture opportunities, or accept different and perhaps novel ecosystem configurations. We need to avoid waiting for complete science that never arrives and take actions that blend science and experience to boost learning, reduce costs and irreversible losses, and buy lead time.
Numerical modeling of axi-symmetrical cold forging process by ``Pseudo Inverse Approach''
NASA Astrophysics Data System (ADS)
Halouani, A.; Li, Y. M.; Abbes, B.; Guo, Y. Q.
2011-05-01
The incremental approach is widely used for forging process modeling; it gives good strain and stress estimation, but it is time consuming. A fast Inverse Approach (IA) has been developed for axi-symmetric cold forging modeling [1-2]. This approach makes maximum use of the knowledge of the final part's shape, and the assumptions of proportional loading and simplified tool actions make the IA simulation very fast. The IA has proved very useful for tool design and optimization because of its rapidity and good strain estimation. However, the assumptions mentioned above cannot provide good stress estimation because the loading history is neglected. A new approach called the "Pseudo Inverse Approach" (PIA) was proposed by Batoz, Guo et al. [3] for sheet forming modeling, which keeps the IA's advantages but gives good stress estimation by taking the loading history into consideration. Our aim in this paper is to adapt the PIA to cold forging modeling. The main developments in the PIA are summarized as follows: a few intermediate configurations are generated for the given tool positions to account for the deformation history; the strain increment is calculated by the inverse method between the previous and current configurations; and an incremental algorithm of the plastic integration is used in the PIA instead of the total constitutive law used in the IA. An example is used to show the effectiveness and limitations of the PIA for cold forging process modeling.
Consistency tests for the extraction of the Boer-Mulders and Sivers functions
NASA Astrophysics Data System (ADS)
Christova, E.; Leader, E.; Stoilov, M.
2018-03-01
At present, the Boer-Mulders (BM) function for a given quark flavor is extracted from data on semi-inclusive deep inelastic scattering (SIDIS) using the simplifying assumption that it is proportional to the Sivers function for that flavor. In a recent paper, we suggested that the consistency of this assumption could be tested using information on so-called difference asymmetries i.e. the difference between the asymmetries in the production of particles and their antiparticles. In this paper, using the SIDIS COMPASS deuteron data on the ⟨cos ϕh⟩ , ⟨cos 2 ϕh⟩ and Sivers difference asymmetries, we carry out two independent consistency tests of the assumption of proportionality, but here applied to the sum of the valence-quark contributions. We find that such an assumption is compatible with the data. We also show that the proportionality assumptions made in the existing parametrizations of the BM functions are not compatible with our analysis, which suggests that the published results for the Boer-Mulders functions for individual flavors are unreliable. The ⟨cos ϕh⟩ and ⟨cos 2 ϕh⟩ asymmetries receive contributions also from the, in principle, calculable Cahn effect. We succeed in extracting the Cahn contributions from experiment (we believe for the first time) and compare with their calculated values, with interesting implications.
Oguchi, Masahiro; Fuse, Masaaki
2015-02-03
Product lifespan estimates are important information for understanding progress toward sustainable consumption and for estimating the stocks and end-of-life flows of products. Published studies have reported actual product lifespans; however, quantitative data are still limited for many countries and years. This study presents a regional and longitudinal estimation of the lifespan distribution of consumer durables, taking passenger cars as an example, and proposes a simplified method for estimating product lifespan distributions. We estimated lifespan distribution parameters for 17 countries based on the age profile of in-use cars. Sensitivity analysis demonstrated that the shape parameter of the lifespan distribution can be replaced by a constant value for all the countries and years. This enabled a simplified estimation that does not require detailed data on the age profile. Applying the simplified method, we estimated the trend in average lifespans of passenger cars from 2000 to 2009 for 20 countries. Average lifespan differed greatly between countries (9-23 years) and was increasing in many countries. This suggests consumer behavior differs greatly among countries and has changed over time, even in developed countries. The results suggest that inappropriate assumptions of average lifespan may cause significant inaccuracy in estimating the stocks and end-of-life flows of products.
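A sketch of the simplification under the assumption that car lifespans follow a Weibull distribution (the abstract does not name the family; Weibull is a common choice and is assumed here): once the shape parameter is fixed, the full distribution follows from the average lifespan alone, via mean = scale · Γ(1 + 1/shape). The numbers below are illustrative.

```python
from math import gamma

def weibull_scale_from_mean(mean_lifespan, shape):
    """Invert mean = scale * Gamma(1 + 1/shape) for the Weibull scale,
    so only the average lifespan is needed once the shape is fixed."""
    return mean_lifespan / gamma(1.0 + 1.0 / shape)

# Hypothetical: a 15-year average lifespan with the shape fixed at 2.5.
scale = weibull_scale_from_mean(15.0, 2.5)
print(f"Weibull scale: {scale:.1f} years")
```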
Khader, Patrick H; Pachur, Thorsten; Meier, Stefanie; Bien, Siegfried; Jost, Kerstin; Rösler, Frank
2011-11-01
Many of our daily decisions are memory based, that is, the attribute information about the decision alternatives has to be recalled. Behavioral studies suggest that for such decisions we often use simple strategies (heuristics) that rely on controlled and limited information search. It is assumed that these heuristics simplify decision-making by activating long-term memory representations of only those attributes that are necessary for the decision. However, from behavioral studies alone, it is unclear whether using heuristics is indeed associated with limited memory search. The present study tested this assumption by monitoring the activation of specific long-term-memory representations with fMRI while participants made memory-based decisions using the "take-the-best" heuristic. For different decision trials, different numbers and types of information had to be retrieved and processed. The attributes consisted of visual information known to be represented in different parts of the posterior cortex. We found that the amount of information required for a decision was mirrored by a parametric activation of the dorsolateral PFC. Such a parametric pattern was also observed in all posterior areas, suggesting that activation was not limited to those attributes required for a decision. However, the posterior increases were systematically modulated by the relative importance of the information for making a decision. These findings suggest that memory-based decision-making is mediated by the dorsolateral PFC, which selectively controls posterior storage areas. In addition, the systematic modulations of the posterior activations indicate a selective boosting of activation of decision-relevant attributes.
Understanding the LIGO GW150914 event
NASA Astrophysics Data System (ADS)
Naselsky, Pavel; Jackson, Andrew D.; Liu, Hao
2016-08-01
We present a simplified method for the extraction of meaningful signals from Hanford and Livingston 32 second data for the GW150914 event made publicly available by the LIGO collaboration, and demonstrate its ability to reproduce the LIGO collaboration's own results quantitatively given the assumption that all narrow peaks in the power spectrum are a consequence of physically uninteresting signals and can be removed. After the clipping of these peaks and return to the time domain, the GW150914 event is readily distinguished from broadband background noise. This simple technique allows us to identify the GW150914 event without any assumption regarding its physical origin and with minimal assumptions regarding its shape. We also confirm that the LIGO GW150914 event is uniquely correlated in the Hanford and Livingston detectors for the full 4096 second data at the level of 6-7 σ with a temporal displacement of τ = 6.9 ± 0.4 ms. We have also identified a few events that are morphologically close to GW150914 but less strongly cross correlated with it.
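A toy illustration of the two ingredients follows, on synthetic data rather than LIGO strain: clip spectral bins whose power sits far above the median, then locate the cross-correlation peak between two channels. The signal, instrumental line, noise levels, and threshold are all hypothetical; note that without the clipping step, the shared 60 Hz line would pin the correlation peak at zero lag.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 4096, 4096
t = np.arange(n) / fs

def clip_spectral_peaks(x, thresh=50.0):
    """Zero narrow spectral lines whose power is far above the median power,
    then return to the time domain (the spirit of the peak removal)."""
    X = np.fft.rfft(x)
    p = np.abs(X) ** 2
    X[p > thresh * np.median(p)] = 0.0
    return np.fft.irfft(X, n=len(x))

# Synthetic "event": a weak chirp common to both channels, a strong 60 Hz
# line, and independent Gaussian noise in each channel.
chirp = 0.25 * np.sin(2 * np.pi * (40 * t + 60 * t**2)) * np.exp(-((t - 0.5) ** 2) / 0.09)
line = 0.5 * np.sin(2 * np.pi * 60.0 * t)
h = clip_spectral_peaks(chirp + line + 0.2 * rng.standard_normal(n))
l = clip_spectral_peaks(np.roll(chirp, 28) + line + 0.2 * rng.standard_normal(n))

corr = np.correlate(l, h, mode="full")
lag = (np.argmax(corr) - (n - 1)) / fs
print(f"recovered displacement: {lag * 1e3:.1f} ms")  # expect ~6.8 ms
```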
Some Basic Aspects of Magnetohydrodynamic Boundary-Layer Flows
NASA Technical Reports Server (NTRS)
Hess, Robert V.
1959-01-01
An appraisal is made of existing solutions of magnetohydrodynamic boundary-layer equations for stagnation flow and flat-plate flow, and some new solutions are given. Since an exact solution of the equations of magnetohydrodynamics requires complicated simultaneous treatment of the equations of fluid flow and of electromagnetism, certain simplifying assumptions are generally introduced. The full implications of these assumptions have not been brought out properly in several recent papers. It is shown in the present report that for the particular law of deformation which the magnetic lines are assumed to follow in these papers a magnet situated inside the missile nose would not be able to take up any drag forces; to do so it would have to be placed in the flow away from the nose. It is also shown that for the assumption that potential flow is maintained outside the boundary layer, the deformation of the magnetic lines is restricted to small values. The literature contains serious disagreements with regard to reductions in heat-transfer rates due to magnetic action at the nose of a missile, and these disagreements are shown to be mainly due to different interpretations of reentry conditions rather than more complicated effects. In the present paper the magnetohydrodynamic boundary-layer equation is also expressed in a simple form that is especially convenient for physical interpretation. This is done by adapting methods to magnetic forces which in the past have been used for forces due to gravitational or centrifugal action. The simplified approach is used to develop some new solutions of boundary-layer flow and to reinterpret certain solutions existing in the literature. An asymptotic boundary-layer solution representing a fixed velocity profile and shear is found. Special emphasis is put on estimating skin friction and heat-transfer rates.
When life imitates art: surrogate decision making at the end of life.
Shapiro, Susan P
2007-01-01
The privileging of the substituted judgment standard as the gold standard for surrogate decision making in law and bioethics has constrained the research agenda in end-of-life decision making. The empirical literature is inundated with a plethora of "Newlywed Game" designs, in which potential patients and potential surrogates respond to hypothetical scenarios to see how often they "get it right." The preoccupation with determining the capacity of surrogates to accurately reproduce the judgments of another makes a number of assumptions that blind scholars to the variables central to understanding how surrogates actually make medical decisions on behalf of another. These assumptions include that patient preferences are knowable, surrogates have adequate and accurate information, time stands still, patients get the surrogates they want, patients want and surrogates utilize substituted judgment criteria, and surrogates are disinterested. This article examines these assumptions and considers the challenges of designing research that makes them problematic.
Philosophy of Technology Assumptions in Educational Technology Leadership
ERIC Educational Resources Information Center
Webster, Mark David
2017-01-01
A qualitative study using grounded theory methods was conducted to (a) examine what philosophy of technology assumptions are present in the thinking of K-12 technology leaders, (b) investigate how the assumptions may influence technology decision making, and (c) explore whether technological determinist assumptions are present. Subjects involved…
NASA Astrophysics Data System (ADS)
Şahin, Rıdvan; Zhang, Hong-yu
2018-03-01
The induced Choquet integral is a powerful tool for dealing with imprecise or uncertain information. This study proposes a combination of the induced Choquet integral and neutrosophic information. We first give the operational properties of simplified neutrosophic numbers (SNNs). Then, we develop some new information aggregation operators, including an induced simplified neutrosophic correlated averaging (I-SNCA) operator and an induced simplified neutrosophic correlated geometric (I-SNCG) operator. These operators not only consider the importance of elements or their ordered positions, but also take into account the interaction phenomena among decision criteria or their ordered positions under multiple decision-makers. Moreover, we present a detailed analysis of the I-SNCA and I-SNCG operators, including the properties of idempotency, commutativity and monotonicity, and study the relationships among the proposed operators and existing simplified neutrosophic aggregation operators. In order to handle multi-criteria group decision-making (MCGDM) situations where the weights of criteria and decision-makers are usually correlated and the criterion values are given as SNNs, an approach is established based on the I-SNCA operator. Finally, a numerical example is presented to demonstrate the proposed approach and to verify its effectiveness and practicality.
How to Decide on Modeling Details: Risk and Benefit Assessment.
Özilgen, Mustafa
Mathematical models based on thermodynamic, kinetic, heat, and mass transfer analysis are central to this chapter. Microbial growth, death, and enzyme inactivation models, and the modeling of material properties, including those pertinent to conduction and convection heating, mass transfer, such as diffusion and convective mass transfer, and thermodynamic properties, such as specific heat, enthalpy, Gibbs free energy of formation, and specific chemical exergy, are also needed in this task. The origins, simplifying assumptions, and uses of model equations are discussed in this chapter, together with their benefits. The simplified forms of these models are sometimes referred to as "laws," such as "the first law of thermodynamics" or "Fick's second law." Starting a modeling study with such "laws" without considering the conditions under which they are valid runs the risk of ending up with erroneous conclusions. On the other hand, models that start from fundamental concepts and are simplified with appropriate considerations may offer explanations for phenomena that cannot be obtained just with measurements or unprocessed experimental data. The discussion presented here is strengthened with case studies and references to the literature.
NASA Astrophysics Data System (ADS)
Henine, Hocine; Julien, Tournebize; Jaan, Pärn; Ülo, Mander
2017-04-01
In agricultural areas, the nitrogen (N) pollution load to surface waters depends on land use, agricultural practices, harvested N output, as well as the hydrology and climate of the catchment. Most N transfer models require large, complex data sets, which are generally difficult to collect at larger scales (>km2). The main objective of this study is to carry out hydrological and geochemical modeling using a simplified data set (land use/crop, fertilizer input, N losses from plots). The modeling approach was tested in the subsurface-drained Orgeval catchment (Paris Basin, France) based on the following assumptions: subsurface tile drains are considered as a giant lysimeter system, and the N concentration at drain outlets is representative of agricultural practices upstream. Analysis of the observed N load (90% of total N) shows that 62% is exported during the winter. We considered the prewinter nitrate (NO3) pool (PWNP) in soils at the beginning of the hydrological drainage season as a driving factor for N losses. PWNP results from the part of NO3 not used by crops or from the mineralization of organic matter during the preceding summer and autumn. Under these assumptions, we used PWNP as simplified input data for the modeling of N transport. Thus, NO3 losses are mainly influenced by the denitrification capacity of soils and stream water. The well-known HYPE model was used to perform the water and N loss modeling. The hydrological simulation was calibrated with the observation data at different sub-catchments. We performed a hydrograph separation validated on thermal and isotopic tracer studies and the general knowledge of the behavior of the Orgeval catchment. Our results show a good correlation between the model and the observations (a Nash-Sutcliffe coefficient of 0.75 for water discharge and 0.7 for N flux). Likewise, comparison of calibrated PWNP values with the results from a field survey (annual PWNP campaign) showed significant positive correlation. One can conclude that the simplified modeling approach using PWNP as a driving factor for the evaluation of N losses from drained agricultural catchments gives satisfactory results, and we can propose this approach for wider use.
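Since the fit quality above is reported as a Nash-Sutcliffe coefficient, a small reference implementation may help; the discharge series in the demo is made up.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1 - SSE / variance of observations.
    1 is a perfect fit; 0 means no better than predicting the observed mean."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    sse = np.sum((observed - simulated) ** 2)
    return 1.0 - sse / np.sum((observed - observed.mean()) ** 2)

obs = np.array([1.2, 2.5, 3.1, 2.0, 1.4])  # illustrative discharge series
sim = np.array([1.0, 2.7, 2.9, 2.2, 1.5])
print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")
```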
Sampling Assumptions in Inductive Generalization
ERIC Educational Resources Information Center
Navarro, Daniel J.; Dry, Matthew J.; Lee, Michael D.
2012-01-01
Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key "sampling" assumption about how the available data were generated.…
24 CFR 58.4 - Assumption authority.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., decision-making, and action that would otherwise apply to HUD under NEPA and other provisions of law that... environmental review, decision-making and action for programs authorized by the Native American Housing... separate decision regarding assumption of responsibilities for each of these Acts and communicate that...
The influence of computational assumptions on analysing abdominal aortic aneurysm haemodynamics.
Ene, Florentina; Delassus, Patrick; Morris, Liam
2014-08-01
The variation in computational assumptions for analysing abdominal aortic aneurysm haemodynamics can influence the desired output results and computational cost. Such assumptions for abdominal aortic aneurysm modelling include static/transient pressures, steady/transient flows and rigid/compliant walls. Six computational methods and these various assumptions were simulated and compared within a realistic abdominal aortic aneurysm model with and without intraluminal thrombus. A full transient fluid-structure interaction was required to analyse the flow patterns within the compliant abdominal aortic aneurysms models. Rigid wall computational fluid dynamics overestimates the velocity magnitude by as much as 40%-65% and the wall shear stress by 30%-50%. These differences were attributed to the deforming walls which reduced the outlet volumetric flow rate for the transient fluid-structure interaction during the majority of the systolic phase. Static finite element analysis accurately approximates the deformations and von Mises stresses when compared with transient fluid-structure interaction. Simplifying the modelling complexity reduces the computational cost significantly. In conclusion, the deformation and von Mises stress can be approximately found by static finite element analysis, while for compliant models a full transient fluid-structure interaction analysis is required for acquiring the fluid flow phenomenon. © IMechE 2014.
Approximations of Two-Attribute Utility Functions
1976-09-01
preferred to") be a bina-zy relation on the set • of simple probability measures or ’gambles’ defined on a set T of consequences. Throughout this study it...simplifying independence assumptions. Although there are several approaches to this problem, the21 present study will focus on approximations of u... study will elicit additional interest in the topic. 2. REMARKS ON APPROXIMATION THEORY This section outlines a few basic ideas of approximation theory
Break-up of Gondwana and opening of the South Atlantic: Review of existing plate tectonic models
Ghidella, M.E.; Lawver, L.A.; Gahagan, L.M.
2007-01-01
each model. We also plot reconstructions at four selected epochs for all models using the same projection and scale to facilitate comparison. The diverse simplifying assumptions that need to be made in every case regarding plate fragmentation to account for the numerous syn-rift basins and periods of stretching are strong indicators that rigid plate tectonics is too simple a model for the present problem.
Prediction of the turbulent wake with second-order closure
NASA Technical Reports Server (NTRS)
Taulbee, D. B.; Lumley, J. L.
1981-01-01
A turbulence was envisioned whose energy-containing scales would be Gaussian in the absence of inhomogeneity, gravity, etc. An equation was constructed for a function equivalent to the probability density, the second moment of which corresponded to the accepted modeled form of the Reynolds stress equation. The third-moment equations obtained from this were simplified by the assumption of weak inhomogeneity. Calculations are presented with this model, as well as interpretations of the results.
Hua, Xijin; Wang, Ling; Al-Hajjar, Mazen; Jin, Zhongmin; Wilcox, Ruth K; Fisher, John
2014-07-01
Finite element models are becoming increasingly useful tools to conduct parametric analysis, design optimisation and pre-clinical testing for hip joint replacements. However, verification of the finite element model is critically important. The purposes of this study were to develop a three-dimensional anatomic finite element model for a modular metal-on-polyethylene total hip replacement for predicting its contact mechanics and to conduct experimental validation for a simple finite element model simplified from the anatomic finite element model. An anatomic modular metal-on-polyethylene total hip replacement model (anatomic model) was first developed and then simplified with reasonable accuracy to a simple modular total hip replacement model (simplified model) for validation. The contact areas on the articulating surface of three polyethylene liners of modular metal-on-polyethylene total hip replacement bearings with different clearances were measured experimentally in the Leeds ProSim hip joint simulator under a series of loading conditions and different cup inclination angles. The contact areas predicted from the simplified model were then compared with those measured experimentally under the same conditions. The results showed that the simplification made for the anatomic model did not change the predictions of contact mechanics of the modular metal-on-polyethylene total hip replacement substantially (less than 12% for contact stresses and contact areas). Good agreement of contact areas between the finite element predictions from the simplified model and experimental measurements was obtained, with a maximum difference of 14% across all conditions considered. This indicated that the simplification and assumptions made in the anatomic model were reasonable and the finite element predictions from the simplified model were valid. © IMechE 2014.
Mayo clinic NLP system for patient smoking status identification.
Savova, Guergana K; Ogren, Philip V; Duffy, Patrick H; Buntrock, James D; Chute, Christopher G
2008-01-01
This article describes our system entry for the 2006 I2B2 contest "Challenges in Natural Language Processing for Clinical Data" for the task of identifying the smoking status of patients. Our system makes the simplifying assumption that patient-level smoking status determination can be achieved by accurately classifying individual sentences from a patient's record. We created our system with reusable text analysis components built on the Unstructured Information Management Architecture and Weka. This reuse of code minimized the development effort related specifically to our smoking status classifier. We report precision, recall, F-score, and 95% exact confidence intervals for each metric. Recasting the classification task for the sentence level and reusing code from other text analysis projects allowed us to quickly build a classification system that performs with a system F-score of 92.64 based on held-out data tests and of 85.57 on the formal evaluation data. Our general medical natural language engine is easily adaptable to a real-world medical informatics application. Some of the limitations as applied to the use-case are negation detection and temporal resolution.
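A minimal sketch of the sentence-level framing with stand-in data: classify individual sentences, then roll votes up to a patient-level status. The toy pipeline below uses scikit-learn rather than the UIMA/Weka components the system actually reused, and every sentence, label, and the majority-vote rule are invented for illustration.

```python
from collections import Counter

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training sentences with smoking-status labels.
sentences = ["patient smokes one pack per day", "denies any tobacco use",
             "quit smoking ten years ago", "no history of smoking",
             "current smoker, counseled on cessation", "former smoker"]
labels = ["current", "non", "past", "non", "current", "past"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(sentences, labels)

# Patient-level status from sentence-level votes (simple majority).
record = ["patient quit smoking last year", "denies alcohol use",
          "former smoker of 20 pack-years"]
votes = Counter(clf.predict(record))
print(votes.most_common(1)[0][0])
```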
NASA Astrophysics Data System (ADS)
Izmaylov, Artur F.; Staroverov, Viktor N.; Scuseria, Gustavo E.; Davidson, Ernest R.; Stoltz, Gabriel; Cancès, Eric
2007-02-01
We have recently formulated a new approach, named the effective local potential (ELP) method, for calculating local exchange-correlation potentials for orbital-dependent functionals based on minimizing the variance of the difference between a given nonlocal potential and its desired local counterpart [V. N. Staroverov et al., J. Chem. Phys. 125, 081104 (2006)]. Here we show that under a mildly simplifying assumption of frozen molecular orbitals, the equation defining the ELP has a unique analytic solution which is identical with the expression arising in the localized Hartree-Fock (LHF) and common energy denominator approximations (CEDA) to the optimized effective potential. The ELP procedure differs from the CEDA and LHF in that it yields the target potential as an expansion in auxiliary basis functions. We report extensive calculations of atomic and molecular properties using the frozen-orbital ELP method and its iterative generalization to prove that ELP results agree with the corresponding LHF and CEDA values, as they should. Finally, we make the case for extending the iterative frozen-orbital ELP method to full orbital relaxation.
NASA Astrophysics Data System (ADS)
Vincent, Timothy J.; Rumpfkeil, Markus P.; Chaudhary, Anil
2018-03-01
The complex, multi-faceted physics of laser-based additive metals processing tends to demand high-fidelity models and costly simulation tools to provide predictions accurate enough to aid in selecting process parameters. Of particular difficulty is the accurate determination of melt pool shape and size, which are useful for predicting lack-of-fusion, as this typically requires an adequate treatment of thermal and fluid flow. In this article we describe a novel numerical simulation tool which aims to achieve a balance between accuracy and cost. This is accomplished by making simplifying assumptions regarding the behavior of the gas-liquid interface for processes with a moderate energy density, such as Laser Engineered Net Shaping (LENS). The details of the implementation, which is based on the solver simpleFoam of the well-known software suite OpenFOAM, are given here and the tool is verified and validated for a LENS process involving Ti-6Al-4V. The results indicate that the new tool predicts width and height of a deposited track to engineering accuracy levels.
The effect of small-wave modulation on the electromagnetic bias
NASA Technical Reports Server (NTRS)
Rodriguez, Ernesto; Kim, Yunjin; Martin, Jan M.
1992-01-01
The effect of the modulation of small ocean waves by large waves on the physical mechanism of the EM bias is examined by conducting a numerical scattering experiment which does not assume the applicability of geometric optics. The modulation effect of the large waves on the small waves is modeled using the principle of conservation of wave action and includes the modulation of gravity-capillary waves. The frequency dependence and magnitude of the EM bias is examined for a simplified ocean spectral model as a function of wind speed. These calculations make it possible to assess the validity of previous assumptions made in the theory of the EM bias, with respect to both scattering and hydrodynamic effects. It is found that the geometric optics approximation is inadequate for predictions of the EM bias at typical radar altimeter frequencies, while the improved scattering calculations provide a frequency dependence of the EM bias which is in qualitative agreement with observation. For typical wind speeds, the EM bias contribution due to small-wave modulation is of the same order as that due to modulation by the nonlinearities of the large-scale waves.
NASA Technical Reports Server (NTRS)
Englander, Jacob A.; Vavrina, Matthew A.
2015-01-01
Preliminary design of high-thrust interplanetary missions is a highly complex process. The mission designer must choose discrete parameters such as the number of flybys and the bodies at which those flybys are performed. For some missions, such as surveys of small bodies, the mission designer also contributes to target selection. In addition, real-valued decision variables, such as launch epoch, flight times, maneuver and flyby epochs, and flyby altitudes must be chosen. There are often many thousands of possible trajectories to be evaluated. The customer who commissions a trajectory design is not usually interested in a point solution, but rather the exploration of the trade space of trajectories between several different objective functions. This can be a very expensive process in terms of the number of human analyst hours required. An automated approach is therefore very desirable. This work presents such an approach by posing the impulsive mission design problem as a multi-objective hybrid optimal control problem. The method is demonstrated on several real-world problems. Two assumptions are frequently made to simplify the modeling of an interplanetary high-thrust trajectory during the preliminary design phase. The first assumption is that because the available thrust is high, any maneuvers performed by the spacecraft can be modeled as discrete changes in velocity. This assumption removes the need to integrate the equations of motion governing the motion of a spacecraft under thrust and allows the change in velocity to be modeled as an impulse and the expenditure of propellant to be modeled using the time-independent solution to Tsiolkovsky's rocket equation [1]. The second assumption is that the spacecraft moves primarily under the influence of the central body, i.e. the sun, and all other perturbing forces may be neglected in preliminary design. The path of the spacecraft may then be modeled as a series of conic sections. When a spacecraft performs a close approach to a planet, the central body switches from the sun to that planet and the trajectory is modeled as a hyperbola with respect to the planet. This is known as the method of patched conics. The impulsive and patched-conic assumptions significantly simplify the preliminary design problem.
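The first assumption can be made concrete in a few lines: an impulsive burn's propellant cost follows directly from the time-independent solution of Tsiolkovsky's rocket equation cited above. The spacecraft mass, maneuver size, and specific impulse below are hypothetical.

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_mass(m0, delta_v, isp):
    """Propellant consumed by an impulsive burn, from the time-independent
    solution of Tsiolkovsky's rocket equation."""
    return m0 * (1.0 - math.exp(-delta_v / (isp * G0)))

# Hypothetical spacecraft: 2000 kg before a 750 m/s deep-space maneuver.
print(f"{propellant_mass(2000.0, 750.0, isp=320.0):.0f} kg")  # ~426 kg
```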
Does phenomenological kinetics provide an adequate description of heterogeneous catalytic reactions?
Temel, Burcin; Meskine, Hakim; Reuter, Karsten; Scheffler, Matthias; Metiu, Horia
2007-05-28
Phenomenological kinetics (PK) is widely used in the study of the reaction rates in heterogeneous catalysis, and it is an important aid in reactor design. PK makes simplifying assumptions: It neglects the role of fluctuations, assumes that there is no correlation between the locations of the reactants on the surface, and considers the reacting mixture to be an ideal solution. In this article we test to what extent these assumptions damage the theory. In practice the PK rate equations are used by adjusting the rate constants to fit the results of the experiments. However, there are numerous examples where a mechanism fitted the data and was shown later to be erroneous or where two mutually exclusive mechanisms fitted well the same set of data. Because of this, we compare the PK equations to "computer experiments" that use kinetic Monte Carlo (kMC) simulations. Unlike in real experiments, in kMC the structure of the surface, the reaction mechanism, and the rate constants are known. Therefore, any discrepancy between PK and kMC must be attributed to an intrinsic failure of PK. We find that the results obtained by solving the PK equations and those obtained from kMC, while using the same rate constants and the same reactions, do not agree. Moreover, when we vary the rate constants in the PK model to fit the turnover frequencies produced by kMC, we find that the fit is not adequate and that the rate constants that give the best fit are very different from the rate constants used in kMC. The discrepancy between PK and kMC for the model of CO oxidation used here is surprising since the kMC model contains no lateral interactions that would make the coverage of the reactants spatially inhomogeneous. Nevertheless, such inhomogeneities are created by the interplay between the rate of adsorption, of desorption, and of vacancy creation by the chemical reactions.
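For reference, here is a sketch of what the PK side of such a comparison looks like: mean-field rate equations for a CO oxidation toy model integrated to steady state. The mechanism is schematic and the rate constants are arbitrary placeholders, not those of the kMC study; by construction this description has no fluctuations or spatial correlations, which is exactly what the article probes.

```python
from scipy.integrate import solve_ivp

# Mean-field (phenomenological) rate equations for CO oxidation on a surface:
# CO adsorption/desorption, dissociative O2 adsorption, and CO + O -> CO2.
k_co, k_co_des, k_o2, k_rx = 1.0, 0.1, 0.8, 2.0  # illustrative constants

def rhs(t, y):
    co, o = y                      # fractional coverages
    empty = 1.0 - co - o
    d_co = k_co * empty - k_co_des * co - k_rx * co * o
    d_o = 2.0 * k_o2 * empty**2 - k_rx * co * o
    return [d_co, d_o]

sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 0.0])
co, o = sol.y[:, -1]
print(f"steady coverages: CO {co:.3f}, O {o:.3f}, TOF {k_rx * co * o:.3f}")
```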
NASA Astrophysics Data System (ADS)
Viswanathan, Sasi Prabhakaran
The design, dynamics, control and implementation of a novel spacecraft attitude control actuator called the "Adaptive Singularity-free Control Moment Gyroscope" (ASCMG) are presented in this dissertation. In order to construct a comprehensive attitude dynamics model of a spacecraft with internal actuators, the dynamics of a spacecraft with an ASCMG is obtained in the framework of geometric mechanics using the principles of variational mechanics. The resulting dynamics model is general and complete, as it relaxes the simplifying assumptions made in prior literature on Control Moment Gyroscopes (CMGs), and it also addresses the adaptive parameters in the dynamics formulation. The simplifying assumptions include perfect axisymmetry of the rotor and gimbal structures, perfect alignment of the centers of mass of the gimbal and the rotor, etc. This set of simplifying assumptions imposed on the design and dynamics of CMGs leads to adverse effects on their performance and results in high manufacturing cost. The dynamics so obtained shows the complex nonlinear coupling between the internal degrees of freedom associated with an ASCMG and the spacecraft bus's attitude motion. By default, the general ASCMG cluster can function as a Variable Speed Control Moment Gyroscope (VSCMG), and can be reduced to function in CMG mode by spinning the rotor at constant speed; it is shown that even when operated in CMG mode, the cluster can be free from kinematic singularities. This dynamics model is then extended to include the effects of multiple ASCMGs placed in the spacecraft bus, and sufficient conditions for non-singular ASCMG cluster configurations are obtained to operate the cluster in both VSCMG and CMG modes. The general dynamics model of the ASCMG is then reduced to that of conventional VSCMGs and CMGs by imposing the standard set of simplifying assumptions used in prior literature. The adverse effects of the simplifying assumptions that lead to the complexities in conventional CMG design, and how they lead to CMG singularities, are described. General ideas on control of the angular momentum of the spacecraft using changes in the momentum variables of a finite number of ASCMGs are provided. Control schemes for agile and precise attitude maneuvers using an ASCMG cluster in the absence of external torques, when the total angular momentum of the spacecraft is zero, are presented for both constant speed and variable speed modes. A Geometric Variational Integrator (GVI) that preserves the geometry of the state space and the conserved norm of the total angular momentum is constructed for numerical simulation and microcontroller implementation of the control scheme. The GVI is obtained by discretizing the Lagrangian of the multibody system, in which the rigid body attitude is globally represented on the Lie group of rigid body rotations. The hardware and software architecture of a novel spacecraft Attitude Determination and Control System (ADCS) based on commercial smartphones, and a bare-minimum hardware prototype of an ASCMG using low-cost COTS components, are also described. A lightweight, dynamics model-free Variational Attitude Estimator (VAE) suitable for smartphone implementation is employed for attitude determination, and attitude control is performed by the ASCMG actuators. The VAE scheme presented here is implemented and validated onboard an Unmanned Aerial Vehicle (UAV) platform and its real-time performance is analyzed.
On-board sensing, data acquisition, data uplink/downlink, state estimation and real-time feedback control objectives can be performed using this novel spacecraft ADCS. The mechatronics realization of the attitude determination through variational attitude estimation scheme and control implementation using ASCMG actuators are presented here. Experimental results of the attitude estimation (filtering) scheme using smartphone sensors as an Inertial Measurement Unit (IMU) on the Hardware In the Loop (HIL) simulator testbed are given. These results, obtained in the Spacecraft Guidance, Navigation and Control Laboratory at New Mexico State University, demonstrate the performance of this estimation scheme with the noisy raw data from the smartphone sensors. Keywords: Spacecraft, momentum exchange devices, control moment gyroscope, variational mechanics, geometric mechanics, variational integrators, attitude determination, attitude control, ADCS, estimation, ASCMG, VSCMG, cubesat, mechatronics, smartphone, Android, MEMS sensor, embedded programming, microcontroller, brushless DC drives, HIL simulation.
Microphysical response of cloud droplets in a fluctuating updraft. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Harding, D. D.
1977-01-01
The effect of a fluctuating updraft upon a distribution of cloud droplets is examined. Computations are performed for fourteen vertical velocity patterns; each allows a closed parcel of cloud air to undergo downward as well as upward motion. Droplet solution and curvature effects are included. The classical equations for the growth rate of an individual droplet by vapor condensation rely on simplifying assumptions. Those assumptions are isolated and examined. A unique approach, termed the explicit model, is presented in which all energy sources and sinks of a droplet may be considered. It is speculated that the explicit model may enhance the growth of large droplets at greater heights. Such a model is beneficial to the studies of pollution scavenging and acid rain.
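For reference, the classical growth law that the thesis re-examines is usually written with curvature (Kelvin) and solute (Raoult) corrections folded into the equilibrium saturation ratio; a standard textbook form (our statement, not quoted from the thesis) is:

```latex
r\,\frac{dr}{dt} \;=\; G\,\bigl(S - S_{\mathrm{eq}}\bigr),
\qquad
S_{\mathrm{eq}} \;\approx\; 1 + \frac{a}{r} - \frac{b}{r^{3}},
```

where S is the ambient saturation ratio, a/r is the curvature term, b/r^3 is the solution term, and G lumps together vapor diffusivity and latent-heat conduction. The "explicit model" relaxes the energy-budget simplifications hidden inside G.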
Fitness extraction and the conceptual foundations of political biology.
Boari, Mircea
2005-01-01
In well known formulations, political science, classical and neoclassical economics, and political economy have recognized as foundational a human impulse toward self-preservation. To employ this concept, modern social-sciences theorists have made simplifying assumptions about human nature and have then built elaborately upon their more incisive simplifications. Advances in biology, including advances in evolutionary theory, notably inclusive-fitness theory, have for decades now encouraged the reconsideration of such assumptions and, more ambitiously, the reconciliation of the social and life sciences. I ask if this reconciliation is feasible and test a path to the unification of politics and biology, called here "political biology." Two new notions, "fitness extraction" and "fitness exchange," are defined, then differentiated from each other, and lastly contrasted to cooperative gaming, the putative essential element of economics.
HZETRN: A heavy ion/nucleon transport code for space radiations
NASA Technical Reports Server (NTRS)
Wilson, John W.; Chun, Sang Y.; Badavi, Forooz F.; Townsend, Lawrence W.; Lamkin, Stanley L.
1991-01-01
The galactic heavy ion transport code (GCRTRN) and the nucleon transport code (BRYNTRN) are integrated into a code package (HZETRN). The code package is computer efficient and capable of operating in an engineering design environment for manned deep space mission studies. The nuclear data set used by the code is discussed, including current limitations. Although the heavy ion nuclear cross sections are assumed constant, the nucleon-nuclear cross sections of BRYNTRN with full energy dependence are used. The relation of the final code to the Boltzmann equation is discussed in the context of simplifying assumptions. Error generation and propagation are discussed, and comparison is made with simplified analytic solutions to test the numerical accuracy of the final results. A brief discussion of biological issues and their impact on fundamental developments in shielding technology is given.
Characterizing dark matter at the LHC in Drell-Yan events
NASA Astrophysics Data System (ADS)
Capdevilla, Rodolfo M.; Delgado, Antonio; Martin, Adam; Raj, Nirmal
2018-02-01
Spectral features in LHC dileptonic events may signal radiative corrections coming from new degrees of freedom, notably dark matter and mediators. Using simplified models, and under a set of simplifying assumptions, we show how these features can reveal the fundamental properties of the dark sector, such as self-conjugation, spin and mass of dark matter, and the quantum numbers of the mediator. Distributions of both the invariant mass mℓℓ and the Collins-Soper scattering angle cos θCS are studied to pinpoint these properties. We derive constraints on the models from LHC measurements of mℓℓ and cos θCS, which are competitive with direct detection and jets+MET searches. We find that in certain scenarios the cos θCS spectrum provides the strongest bounds, underlining the importance of scattering angle measurements for nonresonant new physics.
ERIC Educational Resources Information Center
Shockley-Zalabak, Pamela
A study of decision making processes and communication rules, in a corporate setting undergoing change as a result of organizational ineffectiveness, examined whether (1) decisions about formal communication reporting systems were linked to management assumptions about technical creativity/effectiveness, (2) assumptions about…
Making Predictions about Chemical Reactivity: Assumptions and Heuristics
ERIC Educational Resources Information Center
Maeyer, Jenine; Talanquer, Vicente
2013-01-01
Diverse implicit cognitive elements seem to support but also constrain reasoning in different domains. Many of these cognitive constraints can be thought of as either implicit assumptions about the nature of things or reasoning heuristics for decision-making. In this study we applied this framework to investigate college students' understanding of…
Cost Effectiveness of HPV Vaccination: A Systematic Review of Modelling Approaches.
Pink, Joshua; Parker, Ben; Petrou, Stavros
2016-09-01
A large number of economic evaluations have been published that assess alternative possible human papillomavirus (HPV) vaccination strategies. Understanding differences in the modelling methodologies used in these studies is important to assess the accuracy, comparability and generalisability of their results. The aim of this review was to identify published economic models of HPV vaccination programmes and understand how characteristics of these studies vary by geographical area, date of publication and the policy question being addressed. We performed literature searches in MEDLINE, Embase, Econlit, The Health Economic Evaluations Database (HEED) and The National Health Service Economic Evaluation Database (NHS EED). From the 1189 unique studies retrieved, 65 studies were included for data extraction based on a priori eligibility criteria. Two authors independently reviewed these articles to determine eligibility for the final review. Data were extracted from the selected studies, focussing on six key structural or methodological themes covering different aspects of the model(s) used that may influence cost-effectiveness results. More recently published studies tend to model a larger number of HPV strains, and include a larger number of HPV-associated diseases. Studies published in Europe and North America also tend to include a larger number of diseases and are more likely to incorporate the impact of herd immunity and to use more realistic assumptions around vaccine efficacy and coverage. Studies based on previous models often do not include sufficiently robust justifications as to the applicability of the adapted model to the new context. The considerable between-study heterogeneity in economic evaluations of HPV vaccination programmes makes comparisons between studies difficult, as observed differences in cost effectiveness may be driven by differences in methodology as well as by variations in funding and delivery models and estimates of model parameters. Studies should consistently report not only all simplifying assumptions made but also the estimated impact of these assumptions on the cost-effectiveness results.
ERIC Educational Resources Information Center
Kruger-Ross, Matthew J.; Holcomb, Lori B.
2012-01-01
The use of educational technologies is grounded in the assumptions of teachers, learners, and administrators. Assumptions are choices that structure our understandings and help us make meaning. Current advances in Web 2.0 and social media technologies challenge our assumptions about teaching and learning. The intersection of technology and…
A simplified model for tritium permeation transient predictions when trapping is active
NASA Astrophysics Data System (ADS)
Longhurst, G. R.
1994-09-01
This report describes a simplified one-dimensional tritium permeation and retention model. The model makes use of the same physical mechanisms as more sophisticated time-transient codes: implantation, recombination, diffusion, trapping, and thermal gradient effects. It takes advantage of a number of simplifications and approximations to solve the steady-state problem and then provides interpolating functions to make estimates of intermediate states based on the steady-state solution. Comparison calculations with the verified and validated TMAP4 transient code show good agreement.
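As an indication of the scales such a model interpolates between, standard one-dimensional diffusion theory (not quoted from the report) gives, for a membrane of thickness d and effective diffusivity D:

```latex
J_{ss} \;=\; \frac{D\,c_{u}}{d},
\qquad
t_{\mathrm{lag}} \;=\; \frac{d^{2}}{6D},
```

with c_u the dissolved tritium concentration at the upstream surface. Interpolating functions can then estimate the permeation flux at intermediate times as a fraction of the steady-state flux J_ss evolving on the time scale t_lag.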
Understanding young stars - A history
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stahler, S.W.
1988-12-01
The history of pre-main-sequence theory is briefly reviewed. The paper of Henyey et al. (1955) is seen as an important transitional work, one which abandoned previous simplifying assumptions yet failed to incorporate newer insights into the surface structure of late-type stars. The subsequent work of Hayashi and his contemporaries is outlined, with an emphasis on the underlying physical principles. Finally, the recent impact of protostar theory is discussed, and speculations are offered on future developments. 56 references.
Investigating outliers to improve conceptual models of bedrock aquifers
NASA Astrophysics Data System (ADS)
Worthington, Stephen R. H.
2018-06-01
Numerical models play a prominent role in hydrogeology, with simplifying assumptions being inevitable when implementing these models. However, there is a risk of oversimplification, where important processes become neglected. Such processes may be associated with outliers, and consideration of outliers can lead to an improved scientific understanding of bedrock aquifers. Using rigorous logic to investigate outliers can help to explain fundamental scientific questions such as why there are large variations in permeability between different bedrock lithologies.
On numerical modeling of one-dimensional geothermal histories
Haugerud, R.A.
1989-01-01
Numerical models of one-dimensional geothermal histories are one way of understanding the relations between tectonics and transient thermal structure in the crust. Such models can be powerful tools for interpreting geochronologic and thermobarometric data. A flexible program to calculate these models on a microcomputer is available and examples of its use are presented. Potential problems with this approach include the simplifying assumptions that are made, limitations of the numerical techniques, and the neglect of convective heat transfer. © 1989.
Comparison of an Agent-based Model of Disease Propagation with the Generalised SIR Epidemic Model
2009-08-01
has become a practical method for conducting epidemiological modelling. In the agent-based approach the whole township can be modelled as a system of... SIR system was initially developed based on a very simplified model of social interaction. For instance an assumption of uniform population mixing was... simulating the progress of a disease within a host and of transmission between hosts is based upon Transportation Analysis and Simulation System
NASA Technical Reports Server (NTRS)
Bollenbacher, Gary; Guptill, James D.
1999-01-01
This report analyzes the probability of a launch vehicle colliding with one of the nearly 10,000 tracked objects orbiting the Earth, given that an object on a near-collision course with the launch vehicle has been identified. Knowledge of the probability of collision throughout the launch window can be used to avoid launching at times when the probability of collision is unacceptably high. The analysis in this report assumes that the positions of the orbiting objects and the launch vehicle can be predicted as a function of time and therefore that any tracked object which comes close to the launch vehicle can be identified. The analysis further assumes that the position uncertainty of the launch vehicle and the approaching space object can be described with position covariance matrices. With these and some additional simplifying assumptions, a closed-form solution is developed using two approaches. The solution shows that the probability of collision is a function of position uncertainties, the size of the two potentially colliding objects, and the nominal separation distance at the point of closest approach. The impact of the simplifying assumptions on the accuracy of the final result is assessed and the application of the results to the Cassini mission, launched in October 1997, is described. Other factors that affect the probability of collision are also discussed. Finally, the report offers alternative approaches that can be used to evaluate the probability of collision.
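Under assumptions like those described (Gaussian position errors, combined covariance, the two objects replaced by a single hard-body sphere), the collision probability reduces to a two-dimensional Gaussian integral over a disc in the encounter plane. The sketch below is a generic numerical version of that textbook reduction, not the report's closed-form solution, and all numbers are placeholders:

```python
import numpy as np

def collision_probability(miss, cov, radius, n=400):
    """Integrate a 2-D Gaussian (mean = nominal miss vector, covariance =
    combined position uncertainty) over the combined hard-body disc,
    using a midpoint rule on a polar grid."""
    inv = np.linalg.inv(cov)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
    r = (np.arange(n) + 0.5) * (radius / n)
    th = (np.arange(n) + 0.5) * (2.0 * np.pi / n)
    R, T = np.meshgrid(r, th)
    dx = R * np.cos(T) - miss[0]                 # displacement from the mean
    dy = R * np.sin(T) - miss[1]
    q = inv[0, 0] * dx**2 + 2.0 * inv[0, 1] * dx * dy + inv[1, 1] * dy**2
    pdf = norm * np.exp(-0.5 * q) * R            # R = polar-area Jacobian
    return float(pdf.sum() * (radius / n) * (2.0 * np.pi / n))

# Illustrative placeholders: 200 m nominal miss, 10 m combined hard-body
# radius, 120 m x 80 m (1-sigma) combined in-plane uncertainty.
cov = np.diag([120.0**2, 80.0**2])
print(collision_probability(np.array([200.0, 0.0]), cov, 10.0))
```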
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2007-01-01
This report presents the mechanical verification of a simplified model of a rapid Byzantine-fault-tolerant self-stabilizing protocol for distributed clock synchronization systems. This protocol does not rely on any assumptions about the initial state of the system. This protocol tolerates bursts of transient failures, and deterministically converges within a time bound that is a linear function of the self-stabilization period. A simplified model of the protocol is verified using the Symbolic Model Verifier (SMV) [SMV]. The system under study consists of 4 nodes, where at most one of the nodes is assumed to be Byzantine faulty. The model checking effort is focused on verifying correctness of the simplified model of the protocol in the presence of a permanent Byzantine fault as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period. Although model checking results of the simplified model of the protocol confirm the theoretical predictions, these results do not necessarily confirm that the protocol solves the general case of this problem. Modeling challenges of the protocol and the system are addressed. A number of abstractions are utilized in order to reduce the state space. Also, additional innovative state space reduction techniques are introduced that can be used in future verification efforts applied to this and other protocols.
NASA Astrophysics Data System (ADS)
Platiša, Ljiljana; Goossens, Bart; Vansteenkiste, Ewout; Badano, Aldo; Philips, Wilfried
2010-02-01
Clinical practice is rapidly moving in the direction of volumetric imaging. Often, radiologists interpret these images in liquid crystal displays at browsing rates of 30 frames per second or higher. However, recent studies suggest that the slow response of the display can compromise image quality. In order to quantify the temporal effect of medical displays on detection performance, we investigate two designs of a multi-slice channelized Hotelling observer (msCHO) model in the task of detecting a single-slice signal in multi-slice simulated images. The design of msCHO models is inspired by simplifying assumptions about how humans observe while viewing in the stack-browsing mode. For comparison, we consider a standard CHO applied only on the slice where the signal is located, recently used in a similar study. We refer to it as a single-slice CHO (ssCHO). Overall, our results confirm previous findings that the slow response of displays degrades the detection performance of the observers. More specifically, the observed performance range of msCHO designs is higher compared to the ssCHO suggesting that the extent and rate of degradation, though significant, may be less drastic than previously estimated by the ssCHO. Especially, the difference between msCHO and ssCHO is more significant for higher browsing speeds than for slow image sequences or static images. This, together with their design criteria driven by the assumptions about humans, makes the msCHO models promising candidates for further studies aimed at building anthropomorphic observer models for the stack-mode image presentation.
Johnson, Leigh F; Geffen, Nathan
2016-03-01
Different models of sexually transmitted infections (STIs) can yield substantially different conclusions about STI epidemiology, and it is important to understand how and why models differ. Frequency-dependent models make the simplifying assumption that STI incidence is proportional to STI prevalence in the population, whereas network models calculate STI incidence more realistically by classifying individuals according to their partners' STI status. We assessed a deterministic frequency-dependent model approximation to a microsimulation network model of STIs in South Africa. Sexual behavior and demographic parameters were identical in the 2 models. Six STIs were simulated using each model: HIV, herpes, syphilis, gonorrhea, chlamydia, and trichomoniasis. For all 6 STIs, the frequency-dependent model estimated a higher STI prevalence than the network model, with the difference between the 2 models being relatively large for the curable STIs. When the 2 models were fitted to the same STI prevalence data, the best-fitting parameters differed substantially between models, with the frequency-dependent model suggesting more immunity and lower transmission probabilities. The fitted frequency-dependent model estimated that the effects of a hypothetical elimination of concurrent partnerships and a reduction in commercial sex were both smaller than estimated by the fitted network model, whereas the latter model estimated a smaller impact of a reduction in unprotected sex in spousal relationships. The frequency-dependent assumption is problematic when modeling short-term STIs. Frequency-dependent models tend to underestimate the importance of high-risk groups in sustaining STI epidemics, while overestimating the importance of long-term partnerships and low-risk groups.
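The frequency-dependent simplification amounts to writing incidence as proportional to prevalence. A minimal SIS-type sketch of that assumption (illustrative parameters, not the authors' South African model) is:

```python
# Frequency-dependent SIS model: incidence is proportional to prevalence.
# beta: transmission parameter per unit time, gamma: cure rate (placeholders).
beta, gamma, dt = 0.5, 0.2, 0.01
prev = 0.01                                  # initial prevalence
for _ in range(int(100 / dt)):
    incidence = beta * prev * (1.0 - prev)   # the simplifying assumption
    prev += dt * (incidence - gamma * prev)
print("endemic prevalence:", prev)           # analytic: 1 - gamma/beta = 0.6
```

A network model would instead track who is partnered with whom, so an individual's risk depends on the infection status of his or her actual partners rather than on population prevalence; that is the difference being quantified in this study.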
Life Support Baseline Values and Assumptions Document
NASA Technical Reports Server (NTRS)
Anderson, Molly S.; Ewert, Michael K.; Keener, John F.; Wagner, Sandra A.
2015-01-01
The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. With the ability to accurately compare different technologies' performance for the same function, managers will be able to make better decisions regarding technology development.
Why is it Doing That? - Assumptions about the FMS
NASA Technical Reports Server (NTRS)
Feary, Michael; Immanuel, Barshi; Null, Cynthia H. (Technical Monitor)
1998-01-01
In the glass cockpit, it's not uncommon to hear exclamations such as "why is it doing that?". Sometimes pilots ask "what were they thinking when they set it this way?" or "why doesn't it tell me what it's going to do next?". Pilots may hold a conceptual model of the automation that is the result of fleet lore, which may or may not be consistent with what the engineers had in mind. But what did the engineers have in mind? In this study, we present some of the underlying assumptions surrounding the glass cockpit. Engineers and designers make assumptions about the nature of the flight task; at the other end, instructor and line pilots make assumptions about how the automation works and how it was intended to be used. These underlying assumptions are seldom recognized or acknowledged. This study is an attempt to explicitly articulate such assumptions to better inform design and training developments. This work is part of a larger project to support training strategies for automation.
Quantum State Tomography via Reduced Density Matrices.
Xin, Tao; Lu, Dawei; Klassen, Joel; Yu, Nengkun; Ji, Zhengfeng; Chen, Jianxin; Ma, Xian; Long, Guilu; Zeng, Bei; Laflamme, Raymond
2017-01-13
Quantum state tomography via local measurements is an efficient tool for characterizing quantum states. However, it requires that the original global state be uniquely determined (UD) by its local reduced density matrices (RDMs). In this work, we demonstrate for the first time a class of states that are UD by their RDMs under the assumption that the global state is pure, but fail to be UD in the absence of that assumption. This discovery allows us to classify quantum states according to their UD properties, with the requirement that each class be treated distinctly in the practice of simplifying quantum state tomography. Additionally, we experimentally test the feasibility and stability of performing quantum state tomography via the measurement of local RDMs for each class. These theoretical and experimental results demonstrate the advantages and possible pitfalls of quantum state tomography with local measurements.
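To make the central objects concrete, the RDMs of an n-qubit pure state are obtained by partial traces. The following small numpy sketch is ours, not the paper's code; the 3-qubit W state used here is a standard example in this literature:

```python
import numpy as np

def rdm(psi, keep, n):
    """Reduced density matrix of an n-qubit pure state |psi>, keeping the
    qubits listed in `keep` and tracing out the rest."""
    psi = psi.reshape([2] * n)
    traced = [q for q in range(n) if q not in keep]
    rho = np.tensordot(psi, psi.conj(), axes=(traced, traced))
    d = 2 ** len(keep)
    return rho.reshape(d, d)

# 3-qubit W state, (|001> + |010> + |100>)/sqrt(3).
psi = np.zeros(8, dtype=complex)
psi[[1, 2, 4]] = 1.0 / np.sqrt(3.0)
print(rdm(psi, keep=[0, 1], n=3).round(3))   # a two-body RDM
```

Tomography via RDMs reconstructs the global state from a collection of such local matrices, which is exactly where the pure-state assumption discussed above enters.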
Accountability Policies and Teacher Decision Making: Barriers to the Use of Data to Improve Practice
ERIC Educational Resources Information Center
Ingram, Debra; Louis, Karen Seashore; Schroeder, Roger G.
2004-01-01
One assumption underlying accountability policies is that results from standardized tests and other sources will be used to make decisions about school and classroom practice. We explore this assumption using data from a longitudinal study of nine high schools nominated as leading practitioners of Continuous Improvement (CI) practices. We use the…
Cost-effectiveness of human papillomavirus vaccination in the United States.
Chesson, Harrell W; Ekwueme, Donatus U; Saraiya, Mona; Markowitz, Lauri E
2008-02-01
We describe a simplified model, based on the current economic and health effects of human papillomavirus (HPV), to estimate the cost-effectiveness of HPV vaccination of 12-year-old girls in the United States. Under base-case parameter values, the estimated cost per quality-adjusted life year gained by vaccination in the context of current cervical cancer screening practices in the United States ranged from $3,906 to $14,723 (2005 US dollars), depending on factors such as whether herd immunity effects were assumed; the types of HPV targeted by the vaccine; and whether the benefits of preventing anal, vaginal, vulvar, and oropharyngeal cancers were included. The results of our simplified model were consistent with published studies based on more complex models when key assumptions were similar. This consistency is reassuring because models of varying complexity will be essential tools for policy makers in the development of optimal HPV vaccination strategies.
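The headline quantity in such an evaluation is the incremental cost per QALY; the arithmetic is simple enough to state as a toy function (all inputs below are placeholders, not the paper's values):

```python
def cost_per_qaly(cost_program, cost_offsets, qalys_gained):
    """Incremental cost-effectiveness ratio: net cost / health gained."""
    return (cost_program - cost_offsets) / qalys_gained

# Placeholder inputs: vaccination cost per girl, averted treatment and
# screening costs, and discounted QALYs gained per girl vaccinated.
print(cost_per_qaly(cost_program=360.0, cost_offsets=250.0,
                    qalys_gained=0.012))     # dollars per QALY gained
```

Assumptions such as herd immunity or additional cancers averted enter through the cost offsets and QALY terms, which is why the reported range is so sensitive to them.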
Differential molar heat capacities to test ideal solubility estimations.
Neau, S H; Bhandarkar, S V; Hellmuth, E W
1997-05-01
Calculation of the ideal solubility of a crystalline solute in a liquid solvent requires knowledge of the difference in the molar heat capacity at constant pressure of the solid and the supercooled liquid forms of the solute, delta Cp. Since this parameter is not usually known, two assumptions have been used to simplify the expression. The first is that delta Cp can be considered equal to zero; the alternate assumption is that the molar entropy of fusion, delta Sf, is an estimate of delta Cp. Reports claiming the superiority of one assumption over the other, on the basis of calculations done using experimentally determined parameters, have appeared in the literature. The validity of the assumptions in predicting the ideal solubility of five structurally unrelated compounds of pharmaceutical interest, with melting points in the range 420 to 470 K, was evaluated in this study. Solid and liquid heat capacities of each compound near its melting point were determined using differential scanning calorimetry. Linear equations describing the heat capacities were extrapolated to the melting point to generate the differential molar heat capacity. Linear data were obtained for both crystal and liquid heat capacities of sample and test compounds. For each sample, ideal solubility at 298 K was calculated and compared to the two estimates generated using literature equations based on the differential molar heat capacity assumptions. For the compounds studied, delta Cp was not negligible and was closer to delta Sf than to zero. However, neither of the two assumptions was valid for accurately estimating the ideal solubility as given by the full equation.
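For reference, the full expression being tested, in the notation of the abstract, is commonly written as:

```latex
\ln x_{\mathrm{ideal}}
  \;=\; -\frac{\Delta H_{f}\,(T_{m}-T)}{R\,T_{m}\,T}
  \;+\; \frac{\Delta C_{p}}{R}\left[\frac{T_{m}-T}{T}
  - \ln\frac{T_{m}}{T}\right].
```

Setting delta Cp = 0 drops the bracketed term, while substituting delta Cp = delta Sf = delta Hf/Tm collapses the whole expression to ln x = -(delta Sf/R) ln(Tm/T); the study finds that neither limiting case reproduces the full equation for the compounds examined.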
NASA Astrophysics Data System (ADS)
Rajabzadeh Oghaz, Hamidreza; Damiano, Robert; Meng, Hui
2015-11-01
Intracranial aneurysms (IAs) are pathological outpouchings of cerebral vessels, the progression of which is mediated by complex interactions between the blood flow and vasculature. Image-based computational fluid dynamics (CFD) has been used for decades to investigate IA hemodynamics. However, the commonly adopted simplifying assumptions in CFD (e.g. rigid wall) compromise the simulation accuracy and mask the complex physics involved in IA progression and eventual rupture. Several groups have considered the wall compliance by using fluid-structure interaction (FSI) modeling. However, FSI simulation is highly sensitive to numerical assumptions (e.g. linear-elastic wall material, Newtonian fluid, initial vessel configuration, and constant pressure outlet), the effects of which are poorly understood. In this study, the sensitivity of FSI simulations in patient-specific IAs is comprehensively investigated using a multi-stage approach with a varying level of complexity. We start with simulations incorporating several common simplifications: rigid wall, Newtonian fluid, and constant pressure at the outlets, and then we stepwise remove these simplifications until the most comprehensive FSI simulations. Hemodynamic parameters such as wall shear stress and oscillatory shear index are assessed and compared at each stage to better understand the sensitivity of FSI simulations of IAs to model assumptions. Supported by the National Institutes of Health (1R01 NS 091075-01).
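For reference, the two indices compared across stages are conventionally defined from the instantaneous wall shear stress vector over a cardiac cycle of period T (standard definitions, not specific to this abstract):

```latex
\mathrm{TAWSS} \;=\; \frac{1}{T}\int_{0}^{T} \lVert \vec{\tau}_{w} \rVert \, dt,
\qquad
\mathrm{OSI} \;=\; \frac{1}{2}\left(1 -
  \frac{\lVert \int_{0}^{T} \vec{\tau}_{w}\,dt \rVert}
       {\int_{0}^{T} \lVert \vec{\tau}_{w} \rVert \, dt}\right),
```

where OSI ranges from 0 (unidirectional shear) to 0.5 (purely oscillatory shear). Both quantities depend on the computed flow field, which is why they serve as the sensitivity metrics here.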
SURVIAC Bulletin: RPG Encounter Modeling, Vol 27, Issue 1, 2012
2012-01-01
return a probability of hit (PHIT) for the scenario. In the model, PHIT depends on the presented area of the targeted system and a set of errors infl... simplifying assumptions, is data-driven, and uses simple yet proven methodologies to determine PHIT. The inputs to THREAT describe the target, the RPG, and... [Figure: Point on 2-D Representation of a CH-47] The determination of PHIT by THREAT is performed using one of two possible methodologies. The first is a
Analysis of cavitation bubble dynamics in a liquid
NASA Technical Reports Server (NTRS)
Fontenot, L. L.; Lee, Y. C.
1971-01-01
General differential equations governing the dynamics of the cavitation bubbles in a liquid were derived. With the assumption of spherical symmetry the governing equations were simplified. Closed form solutions were obtained for simple cases, and numerical solutions were calculated for complicated ones. The growth and the collapse of the bubble were analyzed, oscillations of the bubbles were studied, and the stability of the cavitation bubbles was investigated. The results show that the cavitation bubbles are unstable, and the oscillation is not sinusoidal.
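Although the report's exact equations are not reproduced in the abstract, the spherically symmetric reduction it describes is conventionally the Rayleigh-Plesset equation:

```latex
R\ddot{R} + \frac{3}{2}\dot{R}^{2}
 \;=\; \frac{1}{\rho}\left[p_{v} - p_{\infty}
   + p_{g0}\left(\frac{R_{0}}{R}\right)^{3\kappa}
   - \frac{2\sigma}{R} - \frac{4\mu\dot{R}}{R}\right],
```

with R(t) the bubble radius, rho the liquid density, p_v the vapor pressure, p_inf the far-field pressure, p_g0 and kappa the initial gas pressure and polytropic exponent, sigma the surface tension, and mu the viscosity. The strongly nonlinear left-hand side is consistent with the non-sinusoidal oscillations reported.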
Perfect gas effects in compressible rapid distortion theory
NASA Technical Reports Server (NTRS)
Kerschen, E. J.; Myers, M. R.
1987-01-01
The governing equations presented for small amplitude unsteady disturbances imposed on steady, compressible mean flows that are two-dimensional and nearly uniform have their basis in the perfect gas equations of state, and therefore generalize previous results based on tangent gas theory. While these equations are more complex, this complexity is required for adequate treatment of high frequency disturbances, especially when the base flow Mach number is large; under such circumstances, the simplifying assumptions of tangent gas theory are not applicable.
The global strong solutions of Hasegawa-Mima-Charney-Obukhov equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao Hongjun; Zhu Anyou
2005-08-01
The quasigeostrophic model is a simplified geophysical fluid model at asymptotically high rotation rate or at small Rossby number. We consider the quasigeostrophic equation with no dissipation term, which was obtained as an asymptotic model from the Euler equations with free surface under a quasigeostrophic velocity field assumption. It is called the Hasegawa-Mima-Charney-Obukhov equation, which also arises in plasma theory. We use a priori estimates to get the global existence of strong solutions for the Hasegawa-Mima-Charney-Obukhov equation.
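In a commonly used nondimensional form (conventions differ between the plasma and geophysical literatures), the equation reads:

```latex
\frac{\partial}{\partial t}\left(\Delta\psi - \psi\right)
 + J(\psi, \Delta\psi) + \beta\,\frac{\partial \psi}{\partial x} \;=\; 0,
\qquad
J(f,g) = \partial_{x}f\,\partial_{y}g - \partial_{y}f\,\partial_{x}g,
```

where psi is the stream function (or electrostatic potential in the plasma setting) and beta measures the background gradient; the a priori estimates in the paper control solutions of this equation.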
NASA Technical Reports Server (NTRS)
Farhangi, Shahram; Trent, Donnie (Editor)
1992-01-01
A study was directed towards assessing the viability and effectiveness of an air augmented ejector/rocket. Successful thrust augmentation could potentially reduce a multi-stage vehicle to a single stage-to-orbit vehicle (SSTO) and, thereby, eliminate the associated ground support facility infrastructure and ground processing required by the eliminated stage. The results of this preliminary study indicate that an air augmented ejector/rocket propulsion system is viable. However, uncertainties resulting from the simplified approach and assumptions must be resolved by further investigations.
Simplified neutrosophic sets and their applications in multi-criteria group decision-making problems
NASA Astrophysics Data System (ADS)
Peng, Juan-juan; Wang, Jian-qiang; Wang, Jing; Zhang, Hong-yu; Chen, Xiao-hong
2016-07-01
As a variation of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete and inconsistent information that exists in the real world. Simplified neutrosophic sets (SNSs) have been proposed for the main purpose of addressing issues with a set of specific numbers. However, there are certain problems regarding the existing operations of SNSs, as well as their aggregation operators and the comparison methods. Therefore, this paper defines the novel operations of simplified neutrosophic numbers (SNNs) and develops a comparison method based on the related research of intuitionistic fuzzy numbers. On the basis of these operations and the comparison method, some SNN aggregation operators are proposed. Additionally, an approach for multi-criteria group decision-making (MCGDM) problems is explored by applying these aggregation operators. Finally, an example to illustrate the applicability of the proposed method is provided and a comparison with some other methods is made.
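To make the objects concrete: a simplified neutrosophic number is a triple (T, I, F) of truth, indeterminacy, and falsity memberships in [0, 1]. One widely used score-based comparison, a common choice in this literature though not necessarily the exact operators defined in the paper, can be sketched as:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SNN:
    t: float  # truth membership in [0, 1]
    i: float  # indeterminacy membership in [0, 1]
    f: float  # falsity membership in [0, 1]

    def score(self) -> float:
        # A common score function: higher truth and lower
        # indeterminacy/falsity rank higher.
        return (2.0 + self.t - self.i - self.f) / 3.0

a, b = SNN(0.7, 0.2, 0.1), SNN(0.6, 0.1, 0.2)
print(a.score(), b.score(), a.score() > b.score())
```

Aggregation operators then combine one SNN per criterion into a single SNN per alternative, after which a comparison rule like this one produces the MCGDM ranking.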
NASA Astrophysics Data System (ADS)
Adams, Jordan M.; Gasparini, Nicole M.; Hobley, Daniel E. J.; Tucker, Gregory E.; Hutton, Eric W. H.; Nudurupati, Sai S.; Istanbulluoglu, Erkan
2017-04-01
Representation of flowing water in landscape evolution models (LEMs) is often simplified compared to hydrodynamic models, as LEMs make assumptions reducing physical complexity in favor of computational efficiency. The Landlab modeling framework can be used to bridge the divide between complex runoff models and more traditional LEMs, creating a new type of framework not commonly used in the geomorphology or hydrology communities. Landlab is a Python-language library that includes tools and process components that can be used to create models of Earth-surface dynamics over a range of temporal and spatial scales. The Landlab OverlandFlow component is based on a simplified inertial approximation of the shallow water equations, following the solution of de Almeida et al. (2012). This explicit two-dimensional hydrodynamic algorithm simulates a flood wave across a model domain, where water discharge and flow depth are calculated at all locations within a structured (raster) grid. Here, we illustrate how the OverlandFlow component contained within Landlab can be applied as a simplified event-based runoff model and how to couple the runoff model with an incision model operating on decadal timescales. Examples of flow routing on both real and synthetic landscapes are shown. Hydrographs from a single storm at multiple locations in the Spring Creek watershed, Colorado, USA, are illustrated, along with a map of shear stress applied on the land surface by flowing water. The OverlandFlow component can also be coupled with the Landlab DetachmentLtdErosion component to illustrate how the non-steady flow routing regime impacts incision across a watershed. The hydrograph and incision results are compared to simulations driven by steady-state runoff. Results from the coupled runoff and incision model indicate that runoff dynamics can impact landscape relief and channel concavity, suggesting that, on landscape evolution timescales, the OverlandFlow model may lead to differences in simulated topography in comparison with traditional methods. The exploratory test cases described within demonstrate how the OverlandFlow component can be used in both hydrologic and geomorphic applications.
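A minimal driver in the spirit of this abstract might look like the following. The component, grid, and field names follow Landlab's published tutorials, but APIs vary between versions, so treat this as a sketch rather than a tested recipe:

```python
import numpy as np
from landlab import RasterModelGrid
from landlab.components import OverlandFlow

# Small synthetic grid (10 m cells) sloping gently toward one edge.
grid = RasterModelGrid((40, 40), xy_spacing=10.0)
z = grid.add_zeros("topographic__elevation", at="node")
z += grid.x_of_node * 0.01                        # 1% planar slope
grid.add_zeros("surface_water__depth", at="node")
grid.at_node["surface_water__depth"] += 1.0e-3    # initial water film, m

of = OverlandFlow(grid, steep_slopes=True)

elapsed = 0.0
while elapsed < 3600.0:                           # route flow for one hour
    of.overland_flow()                            # one adaptive time step
    elapsed += of.dt

print(grid.at_node["surface_water__depth"].max())
```

Coupling to incision, as in the paper, would pass the computed depths or discharges to an erosion component inside the same loop.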
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cantor, R.; Schoepfle, M.
Communities at risk are confronted by an increasingly complex array of opportunities and need for involvement in decisions affecting them. Policy analysis often demands from researchers insights into the complicated process of how best to account for community involvement in decision making. Often, this requires additional understanding of how decisions are made by community members. Researchers trying to capture the important features of decision making will necessarily make assumptions regarding the rationality underlying the decision process. Two implicit and often incompatible sets of research assumptions about decision processes have emerged: outcome rationality and process rationality. Using outcome rationality, the principal goal of risk research often is to predict how people will react to risk regardless of what they say they would do. Using process rationality, the research goal is to determine how people perceive the risks to which they are exposed and how perceptions actually influence responses. The former approach is associated with research in risk communication, conducted by economists and cognitive psychologists; the latter approach is associated with the field of risk negotiation and acceptance, conducted by anthropologists, some sociologists, and planners. This article describes (1) the difference between the assumptions behind outcome and process rationality regarding decision making and the problems resulting from these differences; (2) the promise and limitations of both sets of assumptions; (3) the potential contributions from cognitive psychology, cognitive ethnography, and the theory of transaction costs in reconciling the differences in assumptions and making them more complementary; and (4) the implications of such complementarity.
Automatic ethics: the effects of implicit assumptions and contextual cues on moral behavior.
Reynolds, Scott J; Leavitt, Keith; DeCelles, Katherine A
2010-07-01
We empirically examine the reflexive or automatic aspects of moral decision making. To begin, we develop and validate a measure of an individual's implicit assumption regarding the inherent morality of business. Then, using an in-basket exercise, we demonstrate that an implicit assumption that business is inherently moral impacts day-to-day business decisions and interacts with contextual cues to shape moral behavior. Ultimately, we offer evidence supporting a characterization of employees as reflexive interactionists: moral agents whose automatic decision-making processes interact with the environment to shape their moral behavior.
Making It All Work: The Balancing Act
ERIC Educational Resources Information Center
Schaefbauer, Christi
2013-01-01
In this brief article, the author shares some strategies that make her work and home life easier and more gratifying: (1) Declutter and simplify; (2) Plan meals; (3) Use Google Calendar; (4) Automate bills; (5) Use technology; (6) Make health a priority; and (7) Lose the guilt.
Optical chirp z-transform processor with a simplified architecture.
Ngo, Nam Quoc
2014-12-29
Using a simplified chirp z-transform (CZT) algorithm based on the discrete-time convolution method, this paper presents the synthesis of a simplified architecture of a reconfigurable optical chirp z-transform (OCZT) processor based on the silica-based planar lightwave circuit (PLC) technology. In the simplified architecture of the reconfigurable OCZT, the required number of optical components is small and there are no waveguide crossings, which makes fabrication easy. The design of a novel type of optical discrete Fourier transform (ODFT) processor as a special case of the synthesized OCZT is then presented to demonstrate its effectiveness. The designed ODFT can be potentially used as an optical demultiplexer at the receiver of an optical fiber orthogonal frequency division multiplexing (OFDM) transmission system.
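The discrete-time-convolution form of the CZT that the architecture implements optically can be prototyped numerically. Below is a standard Bluestein-style sketch (a generic statement of the algorithm, not the paper's optical design):

```python
import numpy as np

def czt(x, m, w, a):
    """Chirp z-transform: X[k] = sum_n x[n] * (a * w**-k)**(-n), k = 0..m-1,
    evaluated with Bluestein's discrete-time-convolution trick."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    k = np.arange(max(m, n))
    wk2 = w ** (k**2 / 2.0)                       # chirp factors w^(k^2/2)
    nfft = 1 << int(np.ceil(np.log2(n + m - 1)))  # FFT size for the convolution
    y = np.zeros(nfft, dtype=complex)
    y[:n] = x * a ** (-np.arange(n)) * wk2[:n]    # pre-multiplied input
    v = np.zeros(nfft, dtype=complex)             # chirp filter w^(-j^2/2)
    v[:m] = 1.0 / wk2[:m]
    v[nfft - n + 1:] = 1.0 / wk2[n - 1:0:-1]      # negative-lag taps
    X = np.fft.ifft(np.fft.fft(y) * np.fft.fft(v))[:m]
    return X * wk2[:m]

# Sanity check: with a = 1 and w = exp(-2*pi*i/N), the CZT reduces to the DFT,
# which is the ODFT special case mentioned in the abstract.
rng = np.random.default_rng(0)
x = rng.standard_normal(16) + 1j * rng.standard_normal(16)
print(np.allclose(czt(x, 16, np.exp(-2j * np.pi / 16), 1.0), np.fft.fft(x)))
```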
Common-sense chemistry: The use of assumptions and heuristics in problem solving
NASA Astrophysics Data System (ADS)
Maeyer, Jenine Rachel
Students experience difficulty learning and understanding chemistry at higher levels, often because of cognitive biases stemming from common sense reasoning constraints. These constraints can be divided into two categories: assumptions (beliefs held about the world around us) and heuristics (the reasoning strategies or rules used to build predictions and make decisions). A better understanding and characterization of these constraints are of central importance in the development of curriculum and teaching strategies that better support student learning in science. It was the overall goal of this thesis to investigate student reasoning in chemistry, specifically to better understand and characterize the assumptions and heuristics used by undergraduate chemistry students. To achieve this, two mixed-methods studies were conducted, each with quantitative data collected using a questionnaire and qualitative data gathered through semi-structured interviews. The first project investigated the reasoning heuristics used when ranking chemical substances based on the relative value of a physical or chemical property, while the second study characterized the assumptions and heuristics used when making predictions about the relative likelihood of different types of chemical processes. Our results revealed that heuristics for cue selection and decision-making played a significant role in the construction of answers during the interviews. Many study participants relied frequently on one or more of the following heuristics to make their decisions: recognition, representativeness, one-reason decision-making, and arbitrary trend. These heuristics allowed students to generate answers in the absence of requisite knowledge, but often led students astray. When characterizing assumptions, our results indicate that students relied on intuitive, spurious, and valid assumptions about the nature of chemical substances and processes in building their responses. In particular, many interviewees seemed to view chemical reactions as macroscopic reassembling processes where favorability was related to the perceived ease with which reactants broke apart or products formed. Students also expressed spurious chemical assumptions based on the misinterpretation and overgeneralization of periodicity and electronegativity. Our findings suggest the need to create more opportunities for college chemistry students to monitor their thinking, develop and apply analytical ways of reasoning, and evaluate the effectiveness of shortcut reasoning procedures in different contexts.
Early Retirement Is Not the Cat's Meow. The Endpaper.
ERIC Educational Resources Information Center
Ferguson, Wayne S.
1982-01-01
Early retirement plans are perceived as being beneficial to school staff and financially advantageous to schools. Four out of the five assumptions on which these perceptions are based are incorrect. The one correct assumption is that early retirement will make affirmative action programs move ahead more rapidly. The incorrect assumptions are: (1)…
Making Sense out of Sex Stereotypes in Advertising: A Feminist Analysis of Assumptions.
ERIC Educational Resources Information Center
Ferrante, Karlene
Sexism and racism in advertising have been well documented, but feminist research aimed at social change must go beyond existing content analyses to ask how advertising is created. Analysis of the "mirror assumption" (advertising reflects society) and the "gender assumption" (advertising speaks in a male voice to female…
Fission product ion exchange between zeolite and a molten salt
NASA Astrophysics Data System (ADS)
Gougar, Mary Lou D.
The electrometallurgical treatment of spent nuclear fuel (SNF) has been developed at Argonne National Laboratory (ANL) and has been demonstrated through processing the sodium-bonded SNF from the Experimental Breeder Reactor-II in Idaho. In this process, components of the SNF, including U and species more chemically active than U, are oxidized into a bath of lithium-potassium chloride (LiCl-KCl) eutectic molten salt. Uranium is removed from the salt solution by electrochemical reduction. The noble metals and inactive fission products from the SNF remain as solids and are melted into a metal waste form after removal from the molten salt bath. The remaining salt solution contains most of the fission products and transuranic elements from the SNF. One technique that has been identified for removing these fission products and extending the usable life of the molten salt is ion exchange with zeolite A. A model has been developed and tested for its ability to describe the ion exchange of fission product species between zeolite A and a molten salt bath used for pyroprocessing of spent nuclear fuel. The model assumes (1) a system at equilibrium, (2) immobilization of species from the process salt solution via both ion exchange and occlusion in the zeolite cage structure, and (3) chemical independence of the process salt species. The first assumption simplifies the description of this physical system by eliminating the complications of including time-dependent variables. An equilibrium state between species concentrations in the two exchange phases is a common basis for ion exchange models found in the literature. Assumption two is non-simplifying with respect to the mathematical expression of the model. Two Langmuir-like fractional terms (one for each mode of immobilization) compose each equation describing each salt species. The third assumption offers great simplification over more traditional ion exchange modeling, in which interaction of solvent species with each other is considered. (Abstract shortened by UMI.)
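Structurally, assumption (2) suggests one equilibrium equation per salt species, pairing a Langmuir-like ion-exchange term with an occlusion term. Since the abstract does not reproduce the equations, the following schematic form is our own notation:

```latex
q_{i} \;=\; \frac{q_{i}^{\mathrm{ex}}\,K_{i}\,c_{i}}{1 + K_{i}\,c_{i}}
\;+\; \frac{q_{i}^{\mathrm{oc}}\,L_{i}\,c_{i}}{1 + L_{i}\,c_{i}},
```

where q_i is the loading of species i in the zeolite, c_i is its concentration in the molten salt, and the capacity and affinity parameters (q_i^ex, K_i) and (q_i^oc, L_i) describe the ion-exchange and occlusion modes respectively. Assumption (3), chemical independence, is what allows each species to get its own uncoupled equation of this form.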
SU-E-T-293: Simplifying Assumption for Determining Sc and Sp
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, R; Cheung, A; Anderson, R
Purpose: Scp(mlc,jaw) is a two-dimensional function of collimator field size and effective field size. Conventionally, Scp(mlc,jaw) is treated as separable into components Sc(jaw) and Sp(mlc). Scp(mlc=jaw) is measured in phantom and Sc(jaw) is measured in air with Sp=Scp/Sc. Ideally, Sc and Sp would be able to predict measured values of Scp(mlc,jaw) for all combinations of mlc and jaw. However, ideal Sc and Sp functions do not exist and a measured two-dimensional Scp dataset cannot be decomposed into a unique pair of one-dimensional functions. If the output functions Sc(jaw) and Sp(mlc) were equal to each other and thus each equal to Scp(mlc=jaw)^0.5, this condition would lead to a simpler measurement process by eliminating the need for in-air measurements. Without the distorting effect of the buildup-cap, small-field measurement would be limited only by the dimensions of the detector and would thus be improved by this simplification of the output functions. The goal of the present study is to evaluate an assumption that Sc = Sp. Methods: For a 6 MV x-ray beam, Sc and Sp were determined both by the conventional method and as Scp(mlc=jaw)^0.5. Square field benchmark values of Scp(mlc,jaw) were then measured across the range from 2×2 to 29×29. Both Sc and Sp functions were then evaluated as to their ability to predict these measurements. Results: Both methods produced qualitatively similar results with <4% error for all cases and >3% error in 1 case. The conventional method produced 2 cases with >2% error, while the square-root method produced only 1 such case. Conclusion: Though it would need to be validated for any specific beam to which it might be applied, under the conditions studied, the simplifying assumption that Sc = Sp is justified.
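The proposed simplification is easy to state in code. The sketch below compares the two decompositions using placeholder measurements (not the abstract's data):

```python
import math

def conventional_split(scp_square, sc_in_air):
    """Conventional decomposition: Sc measured in air, Sp = Scp / Sc."""
    return sc_in_air, scp_square / sc_in_air

def square_root_split(scp_square):
    """Proposed decomposition under the Sc = Sp assumption: both equal
    the square root of the in-phantom output at mlc = jaw."""
    s = math.sqrt(scp_square)
    return s, s

# Placeholder measurements: for a 4x4 field, in-phantom output 0.956
# relative to 10x10, and in-air output 0.978.
print(conventional_split(0.956, 0.978))  # (0.978, ~0.9775)
print(square_root_split(0.956))          # (~0.9778, ~0.9778)
```

By construction the square-root split reproduces Scp exactly whenever mlc = jaw; the study's benchmark measurements test how well it extrapolates to mismatched field sizes.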
NASA Technical Reports Server (NTRS)
Mocko, David M.; Sud, Y. C.; Einaudi, Franco (Technical Monitor)
2000-01-01
Present-day climate models produce large climate drifts that interfere with the climate signals simulated in modelling studies. The simplifying assumptions of the physical parameterization of snow and ice processes lead to large biases in the annual cycles of surface temperature, evapotranspiration, and the water budget, which in turn cause erroneous land-atmosphere interactions. Since land processes are vital for climate prediction, and snow and snowmelt processes have been shown to affect Indian monsoons and North American rainfall and hydrology, special attention is now being given to cold land processes and their influence on the simulated annual cycle in GCMs. The snow model of the SSiB land-surface model being used at Goddard has evolved from a unified single snow-soil layer interacting with a deep soil layer through a force-restore procedure to a two-layer snow model atop a ground layer separated by a snow-ground interface. When the snow cover is deep, force-restore occurs within the snow layers. However, several other simplifying assumptions such as homogeneous snow cover, an empirical depth-related surface albedo, snowmelt and melt-freeze in the diurnal cycles, and neglect of latent heat of soil freezing and thawing still remain as nagging problems. Several important influences of these assumptions will be discussed with the goal of improving them to better simulate the snowmelt and meltwater hydrology. Nevertheless, the current snow model (Mocko and Sud, 2000, submitted) better simulates cold land processes as compared to the original SSiB. This was confirmed against observations of soil moisture, runoff, and snow cover in global GSWP (Sud and Mocko, 1999) and point-scale Valdai simulations over seasonal snow regions. New results from the current snow model SSiB from the 10-year PILPS 2e intercomparison in northern Scandinavia will be presented.
Rethinking Use of the OML Model in Electric Sail Development
NASA Technical Reports Server (NTRS)
Stone, Nobie H.
2016-01-01
In 1924, Irving Langmuir and H. M. Mott-Smith published a theoretical model for the complex plasma sheath phenomenon in which they identified some very special cases which greatly simplified the sheath and allowed a closed solution to the problem. The most widely used application is for an electrostatic, or "Langmuir," probe in laboratory plasma. Although the Langmuir probe is physically simple (a biased wire), the theory describing its functional behavior and its current-voltage characteristic is extremely complex and, accordingly, a number of assumptions and approximations are used in the LMS model. These simplifications, correspondingly, place limits on the model's range of application. Adapting the LMS model to real-life conditions is the subject of numerous papers and dissertations. The Orbital-Motion-Limited (OML) model that is widely used today is one of these adaptations that is a convenient means of calculating sheath effects. Since the Langmuir probe is a simple biased wire immersed in plasma, it is particularly tempting to use the OML equation in calculating the characteristics of the long, highly biased wires of an Electric Sail in the solar wind plasma. However, in order to arrive at the OML equation, a number of additional simplifying assumptions and approximations (beyond those made by Langmuir and Mott-Smith) are necessary. The OML equation is a good approximation when all conditions are met, but it would appear that the Electric Sail problem lies outside of the limits of applicability.
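For orientation, the OML expression for the attracted-species current collected by a long cylindrical probe at potential V is usually quoted as (standard form; the additional assumptions behind it are exactly what is at issue here):

```latex
I \;=\; I_{0}\,\frac{2}{\sqrt{\pi}}\,\sqrt{1 + \frac{eV}{k T_{e}}},
\qquad
I_{0} \;=\; A_{p}\, n_{e}\, e \sqrt{\frac{k T_{e}}{2\pi m_{e}}},
```

with A_p the probe (or wire) surface area, n_e the ambient electron density, and T_e the electron temperature. The square-root scaling with bias is what makes the formula attractive for Electric Sail sizing, and what becomes suspect when the sail's operating point violates the underlying assumptions.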
On the Weyl anomaly of 4D conformal higher spins: a holographic approach
NASA Astrophysics Data System (ADS)
Acevedo, S.; Aros, R.; Bugini, F.; Diaz, D. E.
2017-11-01
We present a first attempt to derive the full (type-A and type-B) Weyl anomaly of four dimensional conformal higher spin (CHS) fields in a holographic way. We obtain the type-A and type-B Weyl anomaly coefficients for the whole family of 4D CHS fields from the one-loop effective action for massless higher spin (MHS) Fronsdal fields evaluated on a 5D bulk Poincaré-Einstein metric with an Einstein metric on its conformal boundary. To gain access to the type-B anomaly coefficient we assume, for practical reasons, a Lichnerowicz-type coupling of the bulk Fronsdal fields with the bulk background Weyl tensor. Remarkably enough, our holographic findings under this simplifying assumption are certainly not unknown: they match the results previously found on the boundary counterpart under the assumption of factorization of the CHS higher-derivative kinetic operator into Laplacians of "partially massless" higher spins on Einstein backgrounds.
Review of Integrated Noise Model (INM) Equations and Processes
NASA Technical Reports Server (NTRS)
Shepherd, Kevin P. (Technical Monitor); Forsyth, David W.; Gulding, John; DiPardo, Joseph
2003-01-01
The FAA's Integrated Noise Model (INM) relies on the methods of the SAE AIR-1845 'Procedure for the Calculation of Airplane Noise in the Vicinity of Airports' issued in 1986. Simplifying assumptions for aerodynamics and noise calculation were made in the SAE standard and the INM based on the limited computing power commonly available then. The key objectives of this study are 1) to test some of those assumptions against Boeing source data, and 2) to automate the manufacturer's methods of data development to enable the maintenance of a consistent INM database over time. These new automated tools were used to generate INM database submissions for six airplane types: 737-700 (CFM56-7 24K), 767-400ER (CF6-80C2BF), 777-300 (Trent 892), 717-200 (BR715), 757-300 (RR535E4B), and the 737-800 (CFM56-7 26K).
NASA Technical Reports Server (NTRS)
Kvaternik, R. G.; Kaza, K. R. V.
1976-01-01
The nonlinear curvature expressions for a twisted rotor blade or a beam undergoing transverse bending in two planes, torsion, and extension were developed. The curvature expressions were obtained using simple geometric considerations. The expressions were first developed in a general manner using the geometrical nonlinear theory of elasticity. These general nonlinear expressions were then systematically reduced to four levels of approximation by imposing various simplifying assumptions, and in each of these levels the second degree nonlinear expressions were given. The assumptions were carefully stated and their implications with respect to the nonlinear theory of elasticity as applied to beams were pointed out. The transformation matrices between the deformed and undeformed blade-fixed coordinates, which were needed in the development of the curvature expressions, were also given for three of the levels of approximation. The present curvature expressions and transformation matrices were compared with corresponding expressions existing in the literature.
Monocular correspondence detection for symmetrical objects by template matching
NASA Astrophysics Data System (ADS)
Vilmar, G.; Besslich, Philipp W., Jr.
1990-09-01
We describe a possibility to reconstruct 3-D information from a single view of a 3-D bilaterally symmetric object. The symmetry assumption allows us to obtain a "second view" from a different viewpoint by a simple reflection of the monocular image. Therefore we have to solve the correspondence problem in a special case where known feature-based or area-based binocular approaches fail. In principle our approach is based on a frequency domain template matching of the features on the epipolar lines. During a training period our system "learns" the assignment of correspondence models to image features. The object shape is interpolated when no template matches the image features. This fact is an important advantage of this methodology because no "real world" image holds the symmetry assumption perfectly. To simplify the training process we used single views of human faces (e.g. passport photos), but our system is trainable on any other kind of objects.
Eckhoff, Philip A; Bever, Caitlin A; Gerardin, Jaline; Wenger, Edward A; Smith, David L
2015-08-01
Since the original Ross-Macdonald formulations of vector-borne disease transmission, there has been a broad proliferation of mathematical models of vector-borne disease, but many of these models retain most or all of the simplifying assumptions of the original formulations. Recently, there has been a new expansion of mathematical frameworks that contain explicit representations of the vector life cycle including aquatic stages, multiple vector species, host heterogeneity in biting rate, realistic vector feeding behavior, and spatial heterogeneity. In particular, there are now multiple frameworks for spatially explicit dynamics with movements of vector, host, or both. These frameworks are flexible and powerful, but require additional data to take advantage of these features. For a given question posed, utilizing a range of models with varying complexity and assumptions can provide a deeper understanding of the answers derived from models. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
A genuinely discontinuous approach for multiphase EHD problems
NASA Astrophysics Data System (ADS)
Natarajan, Mahesh; Desjardins, Olivier
2017-11-01
Electrohydrodynamics (EHD) involves solving the Poisson equation for the electric potential. For multiphase flows, although the electric potential is a continuous quantity, the discontinuity in the electric permittivity between the phases imposes additional jump conditions at the interface that the normal and tangential components of the electric field need to satisfy. All approaches to date either ignore the jump conditions or involve simplifying assumptions, and hence yield unconvincing results even for simple test problems. In the present work, we develop a genuinely discontinuous approach for the Poisson equation for multiphase flows using a finite volume unsplit volume-of-fluid method. The governing equation and the jump conditions are used without assumptions to develop the method, and its efficiency is demonstrated by comparing the numerical results with canonical test problems having exact solutions.
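To make the discontinuity issue concrete, here is a minimal 1D sketch (not the authors' method, which is a 3D finite volume unsplit volume-of-fluid scheme) of solving the variable-permittivity Poisson equation across a permittivity jump; a harmonic-mean face permittivity enforces the normal-flux jump condition exactly in 1D, while the tangential condition has no 1D analogue. Grid size and permittivity values are hypothetical.

```python
# 1D sketch: solve d/dx(eps * dphi/dx) = 0 across a permittivity jump.
import numpy as np

n = 50
x = np.linspace(0.0, 1.0, n)
eps = np.where(x < 0.5, 2.0, 8.0)          # permittivity jump at x = 0.5

# Face coefficients: harmonic mean keeps eps*dphi/dx continuous across faces.
eps_face = 2.0 * eps[:-1] * eps[1:] / (eps[:-1] + eps[1:])

A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0                  # Dirichlet: phi(0) = 0, phi(1) = 1
b[-1] = 1.0
for i in range(1, n - 1):
    A[i, i - 1] = eps_face[i - 1]
    A[i, i] = -(eps_face[i - 1] + eps_face[i])
    A[i, i + 1] = eps_face[i]

phi = np.linalg.solve(A, b)
# Exact solution is piecewise linear with slopes inversely proportional to eps,
# so the slope ratio should approach eps_right / eps_left = 4.
print("slope ratio left/right:", (phi[1] - phi[0]) / (phi[-1] - phi[-2]))
```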
Spacelab experiment computer study. Volume 1: Executive summary (presentation)
NASA Technical Reports Server (NTRS)
Lewis, J. L.; Hodges, B. C.; Christy, J. O.
1976-01-01
A quantitative cost for various Spacelab flight hardware configurations is provided along with varied software development options. A cost analysis of Spacelab computer hardware and software is presented. The cost study is based on utilization of a central experiment computer with optional auxiliary equipment. Groundrules and assumptions used in deriving the costing methods for all options in the Spacelab experiment study are presented. These groundrules and assumptions are analysed, and the options, along with their cost considerations, are discussed. It is concluded that Spacelab program cost for software development and maintenance is independent of experiment hardware and software options, that a distributed standard computer concept simplifies software integration without a significant increase in cost, and that decisions on flight computer hardware configurations should not be made until payload selection for a given mission and a detailed analysis of the mission requirements are completed.
Reticulate evolution and the human past: an anthropological perspective.
Winder, Isabelle C; Winder, Nick P
2014-01-01
The evidence is mounting that reticulate (web-like) evolution has shaped the biological histories of many macroscopic plants and animals, including non-human primates closely related to Homo sapiens, but the implications of this non-hierarchical evolution for anthropological enquiry are not yet fully understood. When they are understood, the result may be a paradigm shift in evolutionary anthropology. This paper reviews the evidence for reticulate evolution in the non-human primates and the human lineage. It then makes the case for extrapolating this sort of patterning to Homo sapiens and other hominins, and explores the implications this would have for research design, method and understandings of evolution in anthropology. Reticulation was significant in human evolutionary history and continues to influence societies today. Anthropologists and human scientists, whether working on ancient or modern populations, thus need to consider the implications of non-hierarchic evolution, particularly where molecular clocks, mathematical models and simplifying assumptions about evolutionary processes are used. This is not just a problem for palaeoanthropology. The simple fact of different mating systems among modern human groups, for example, may demand that more attention is paid to the potential for complexity in human genetic and cultural histories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, J.D.; Joiner, W.C.H.
1979-10-01
Flux-flow noise power spectra taken on Pb₈₀In₂₀ foils as a function of the orientation of the magnetic field with respect to the sample surfaces are used to study changes in frequencies and bundle sizes as distances of fluxoid traversal and fluxoid lengths change. The results obtained for the frequency dependence of the noise spectra are entirely consistent with our model for flux motion interrupted by pinning centers, provided one makes the reasonable assumption that the distance between pinning centers which a fluxoid may encounter scales inversely with the fluxoid length. The importance of pinning centers in determining the noise characteristics is also demonstrated by the way in which subpulse distributions and generalized bundle sizes are altered by changes in the metallurgical structure of the sample. In unannealed samples the dependence of bundle size on magnetic field orientation is controlled by a structural anisotropy, and we find a correlation between large bundle size and the absence of short subpulse times. Annealing removes this anisotropy, and we find a stronger angular variation of bundle size than would be expected using present simplified models.
Ergon, T.; Yoccoz, N.G.; Nichols, J.D.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.
2009-01-01
In many species, age or time of maturation and survival costs of reproduction may vary substantially within and among populations. We present a capture-mark-recapture model to estimate the latent individual trait distribution of time of maturation (or other irreversible transitions) as well as survival differences associated with the two states (representing costs of reproduction). Maturation can take place at any point in continuous time, and mortality hazard rates for each reproductive state may vary according to continuous functions over time. Although we explicitly model individual heterogeneity in age/time of maturation, we make the simplifying assumption that death hazard rates do not vary among individuals within groups of animals. However, the estimates of the maturation distribution are fairly robust against individual heterogeneity in survival as long as there is no individual-level correlation between mortality hazards and latent time of maturation. We apply the model to biweekly capture-recapture data of overwintering field voles (Microtus agrestis) in cyclically fluctuating populations to estimate time of maturation and survival costs of reproduction. Results show that onset of seasonal reproduction is particularly late and survival costs of reproduction are particularly large in declining populations.
The Effects of Accretion Disk Geometry on AGN Reflection Spectra
NASA Astrophysics Data System (ADS)
Taylor, Corbin James; Reynolds, Christopher S.
2017-08-01
Despite being the gravitational engines that power galactic-scale winds and megaparsec-scale jets in active galaxies, black holes are remarkably simple objects, typically being fully described by their angular momenta (spin) and masses. The modelling of AGN X-ray reflection spectra has proven fruitful in estimating the spin of AGN, as well as giving insight into their accretion histories and the properties of plasmas in the strong gravity regime. However, current models make simplifying assumptions about the geometry of the reflecting material in the accretion disk and the irradiating X-ray corona, approximating the disk as an optically thick, infinitely thin disk of material in the orbital plane. We present results from the new relativistic raytracing suite, Fenrir, that explore the effects that disk thickness may have on the reflection spectrum and the accompanying reverberation signatures. Approximating the accretion disk as an optically thick, geometrically thin, radiation pressure dominated disk (Shakura & Sunyaev 1973), one finds that the disk geometry is non-negligible in many cases, with significant changes in the broad Fe K line profile. Finally, we explore the systematic errors inherent in approximating the disk as being infinitely thin when modeling the reflection spectrum, potentially biasing determinations of black hole and corona properties.
NASA Astrophysics Data System (ADS)
Mathias, Simon A.; Gluyas, Jon G.; GonzáLez MartíNez de Miguel, Gerardo J.; Hosseini, Seyyed A.
2011-12-01
This work extends an existing analytical solution for pressure buildup due to CO2 injection in brine aquifers by incorporating effects associated with partial miscibility. These include evaporation of water into the CO2-rich phase, dissolution of CO2 into brine, and salt precipitation. The resulting equations are closed-form, including the locations of the associated leading and trailing shock fronts. Derivation of the analytical solution involves making a number of simplifying assumptions, including vertical pressure equilibrium, negligible capillary pressure, and constant fluid properties. The analytical solution is compared to results from TOUGH2 and found to accurately approximate the extent of the dry-out zone around the well, the resulting permeability enhancement due to residual brine evaporation, the volumetric saturation of precipitated salt, and the vertically averaged pressure distribution in both space and time for the four scenarios studied. While brine evaporation is found to have a considerable effect on pressure, the effect of CO2 dissolution is found to be small. The resulting equations remain simple to evaluate in spreadsheet software and represent a significant improvement on current methods for estimating pressure-limited CO2 storage capacity.
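The closed-form solution itself is not reproduced in the abstract; purely as a hedged illustration of the flavor of such calculations, the sketch below estimates injection overpressure from steady-state radial Darcy flow through two concentric mobility zones (a dry-out zone near the well, brine beyond). All parameter values are hypothetical, and this is not the paper's solution, which also tracks the moving shock fronts.

```python
# Crude two-zone, steady-state radial flow estimate of injection overpressure.
import numpy as np

Q = 0.05                          # volumetric injection rate, m^3/s (hypothetical)
k, H = 1e-13, 30.0                # permeability (m^2) and aquifer thickness (m)
mu_co2, mu_brine = 5e-5, 6e-4     # phase viscosities, Pa.s
r_w, r_dry, r_out = 0.1, 50.0, 5000.0   # well, dry-out, and outer radii (m)

# Pressure drops add across the two concentric zones (single-phase Darcy flow
# in each, ignoring the two-phase transition zone the paper resolves).
dP = Q / (2 * np.pi * k * H) * (mu_co2 * np.log(r_dry / r_w)
                                + mu_brine * np.log(r_out / r_dry))
print(f"well overpressure ~ {dP / 1e6:.2f} MPa")
```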
The Role of Semantic Clustering in Optimal Memory Foraging.
Montez, Priscilla; Thompson, Graham; Kello, Christopher T
2015-11-01
Recent studies of semantic memory have investigated two theories of optimal search adopted from the animal foraging literature: Lévy flights and marginal value theorem. Each theory makes different simplifying assumptions and addresses different findings in search behaviors. In this study, an experiment is conducted to test whether clustering in semantic memory may play a role in evidence for both theories. Labeled magnets and a whiteboard were used to elicit spatial representations of semantic knowledge about animals. Category recall sequences from a separate experiment were used to trace search paths over the spatial representations of animal knowledge. Results showed that spatial distances between animal names arranged on the whiteboard were correlated with inter-response intervals (IRIs) during category recall, and distributions of both dependent measures approximated inverse power laws associated with Lévy flights. In addition, IRIs were relatively shorter when paths first entered animal clusters, and longer when they exited clusters, which is consistent with marginal value theorem. In conclusion, area-restricted searches over clustered semantic spaces may account for two different patterns of results interpreted as supporting two different theories of optimal memory foraging. Copyright © 2015 Cognitive Science Society, Inc.
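As an illustration of the heavy-tail diagnostics involved (not the study's actual analysis), the sketch below fits a power-law exponent to synthetic inter-response intervals using the standard Hill maximum-likelihood estimator; Lévy-flight-like search is usually associated with exponents between 1 and 3.

```python
# Hill (maximum-likelihood) fit of a power-law tail exponent to synthetic IRIs.
import numpy as np

rng = np.random.default_rng(5)
x_min = 0.5
# Inverse-transform sampling from a Pareto density p(x) ~ x^(-alpha), alpha = 2.8.
iris = x_min * (1 - rng.uniform(size=1000)) ** (-1 / 1.8)

alpha_hat = 1 + len(iris) / np.sum(np.log(iris / x_min))
print(f"fitted power-law exponent: {alpha_hat:.2f}")  # should recover ~2.8
```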
The Effects of Accretion Disk Thickness on the Black Hole Reflection Spectrum
NASA Astrophysics Data System (ADS)
Taylor, Corbin; Reynolds, Christopher S.
2018-01-01
Despite being the gravitational engines that power galactic-scale winds and megaparsec-scale jets in active galaxies, black holes are remarkably simple objects, typically being fully described by their angular momenta (spin) and masses. The modelling of AGN X-ray reflection spectra has proven fruitful in estimating the spin of AGN, as well as giving insight into their accretion histories and into the properties of plasmas in the strong gravity regime. However, current models make simplifying assumptions about the geometry of the reflecting material in the accretion disk and the irradiating X-ray corona, approximating the disk as an optically thick, infinitely thin disk of material in the orbital plane. We present results from the new relativistic raytracing suite, Fenrir, that explore the effects that disk thickness may have on the reflection spectrum and the accompanying reverberation signatures. Approximating the accretion disk as an optically thick, geometrically thin, radiation pressure dominated disk (Shakura & Sunyaev 1973), one finds that the disk geometry is non-negligible in many cases, with significant changes in the broad Fe K line profile. Finally, we explore the systematic errors inherent in other contemporary models that approximate the disk as having negligible vertical extent.
An improved approach of register allocation via graph coloring
NASA Astrophysics Data System (ADS)
Gao, Lei; Shi, Ce
2005-03-01
Register allocation is an important part of an optimizing compiler. The algorithm of register allocation via graph coloring was first implemented by Chaitin and his colleagues and later improved by Briggs and others. By abstracting register allocation to graph coloring, the allocation process is simplified. As the number of physical registers is limited, coloring of the interference graph cannot succeed for every node, and the uncolored nodes must be spilled. Almost all allocation methods obey one assumption: when a register is allocated to a variable v, it cannot be used by others before v's live range ends, even if v is not used for a long time. This may cause a waste of register resources. The authors relax this restriction under certain conditions and make some improvements. In this method, one register can be mapped to two or more interfering "living" live ranges at the same time if they satisfy some requirements. An operation named merge is defined, which can arrange for two interfering nodes to occupy the same register at some cost. Thus, the register resource can be used more effectively and the cost of memory access can be reduced greatly.
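A minimal sketch of the baseline Chaitin-style allocator that the paper refines (the proposed merge operation is not shown; the example graph and register count are hypothetical):

```python
# Chaitin-style register allocation by graph coloring: simplify/select phases
# with pessimistic spilling. Nodes are live ranges; edges are interferences.

def color_interference_graph(graph, k):
    """graph: dict node -> set of interfering nodes; k: number of registers.
    Returns (coloring, spilled)."""
    g = {n: set(adj) for n, adj in graph.items()}
    stack, spilled = [], []
    while g:
        # Simplify: remove a node with degree < k; otherwise spill a candidate.
        node = next((n for n in g if len(g[n]) < k), None)
        if node is None:
            node = max(g, key=lambda n: len(g[n]))  # highest-degree heuristic
            spilled.append(node)                    # pessimistic spill
        else:
            stack.append(node)
        for m in g[node]:
            g[m].discard(node)
        del g[node]
    # Select: pop in reverse removal order, assign the lowest free color.
    coloring = {}
    for node in reversed(stack):
        used = {coloring[m] for m in graph[node] if m in coloring}
        coloring[node] = min(c for c in range(k) if c not in used)
    return coloring, spilled

live_ranges = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(color_interference_graph(live_ranges, k=2))  # the a-b-c triangle forces a spill
```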
Thermal stress in high temperature cylindrical fasteners
NASA Technical Reports Server (NTRS)
Blosser, Max L.
1988-01-01
Uninsulated structures fabricated from carbon or silicon-based materials, which are allowed to become hot during flight, are attractive for the design of some components of hypersonic vehicles. They have the potential to reduce weight and increase vehicle efficiency. Because of manufacturing constraints, these structures will consist of parts which must be fastened together. The thermal expansion mismatch between conventional metal fasteners and carbon or silicon-based structural materials may make it difficult to design a structural joint which is tight over the operational temperature range without exceeding allowable stress limits. In this study, algebraic, closed-form solutions for calculating the thermal stresses resulting from radial thermal expansion mismatch around a cylindrical fastener are developed. These solutions permit a designer to quickly evaluate many combinations of materials for the fastener and the structure. Using the algebraic equations developed, material properties and joint geometry were varied to determine their effect on thermal stresses. Finite element analyses were used to verify that the closed-form solutions give the correct thermal stress distribution around a cylindrical fastener and to investigate the effect of some of the simplifying assumptions made in developing them.
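As a hedged illustration of the kind of closed-form estimate involved (a textbook Lamé/shrink-fit approximation, not the paper's actual solutions), the sketch below computes the interface pressure produced by radial thermal-expansion mismatch for a solid cylindrical fastener in a large plate; all property values are illustrative.

```python
# Textbook shrink-fit estimate of thermal-mismatch interface pressure.

def mismatch_pressure(d_alpha, d_T, E_plate, nu_plate, E_fast, nu_fast):
    """Interface pressure from radial interference delta = a * d_alpha * d_T
    for a solid fastener in an effectively infinite plate (Lame solution):
    p = d_alpha * d_T / ((1 + nu_plate)/E_plate + (1 - nu_fast)/E_fast)."""
    return d_alpha * d_T / ((1 + nu_plate) / E_plate + (1 - nu_fast) / E_fast)

# Metal fastener (alpha ~ 12e-6/K) in a carbon-carbon-like plate (alpha ~ 2e-6/K);
# all values hypothetical, for illustration only.
p = mismatch_pressure(d_alpha=10e-6, d_T=800.0,
                      E_plate=70e9, nu_plate=0.15,
                      E_fast=200e9, nu_fast=0.30)
print(f"interface pressure ~ {p / 1e6:.0f} MPa")  # ~400 MPa for these inputs
```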
The geography of spatial synchrony.
Walter, Jonathan A; Sheppard, Lawrence W; Anderson, Thomas L; Kastens, Jude H; Bjørnstad, Ottar N; Liebhold, Andrew M; Reuman, Daniel C
2017-07-01
Spatial synchrony, defined as correlated temporal fluctuations among populations, is a fundamental feature of population dynamics, but many aspects of synchrony remain poorly understood. Few studies have examined detailed geographical patterns of synchrony; instead most focus on how synchrony declines with increasing linear distance between locations, making the simplifying assumption that distance decay is isotropic. By synthesising and extending prior work, we show how geography of synchrony, a term which we use to refer to detailed spatial variation in patterns of synchrony, can be leveraged to understand ecological processes including identification of drivers of synchrony, a long-standing challenge. We focus on three main objectives: (1) showing conceptually and theoretically four mechanisms that can generate geographies of synchrony; (2) documenting complex and pronounced geographies of synchrony in two important study systems; and (3) demonstrating a variety of methods capable of revealing the geography of synchrony and, through it, underlying organism ecology. For example, we introduce a new type of network, the synchrony network, the structure of which provides ecological insight. By documenting the importance of geographies of synchrony, advancing conceptual frameworks, and demonstrating powerful methods, we aim to help elevate the geography of synchrony into a mainstream area of study and application. © 2017 John Wiley & Sons Ltd/CNRS.
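As an illustration of the synchrony-network idea introduced above (synthetic data and a hypothetical edge threshold, not the authors' analysis):

```python
# Build a "synchrony network": nodes are sampling sites, edge weights are
# correlations between their population time series.
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_years = 6, 40
common = rng.normal(size=n_years)                 # a shared (Moran-like) driver
series = 0.6 * common + rng.normal(size=(n_sites, n_years))

corr = np.corrcoef(series)                        # pairwise synchrony matrix
adjacency = (corr > 0.3) & ~np.eye(n_sites, dtype=bool)  # threshold the edges

node_strength = corr.sum(axis=1) - 1.0            # summed synchrony per site
print("edges:", int(adjacency.sum() // 2))
print("node strengths:", np.round(node_strength, 2))
```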
On Maximizing Item Information and Matching Difficulty with Ability.
ERIC Educational Resources Information Center
Bickel, Peter; Buyske, Steven; Chang, Huahua; Ying, Zhiliang
2001-01-01
Examined the assumption that matching difficulty levels of test items with an examinee's ability makes a test more efficient and challenged this assumption through a class of one-parameter item response theory models. Found the validity of the fundamental assumption to be closely related to the van Zwet tail ordering of symmetric distributions (W.…
Cassata, W. S.; Borg, L. E.
2016-05-04
Anomalously old 40Ar/39Ar ages are commonly obtained from Shergottites and are generally attributed to uncertainties regarding the isotopic composition of the trapped component and/or the presence of excess 40Ar. Old ages can also be obtained if inaccurate corrections for cosmogenic 36Ar are applied. Current methods for making the cosmogenic correction require simplifying assumptions regarding the spatial homogeneity of target elements for cosmogenic production and the distribution of cosmogenic nuclides relative to trapped and reactor-derived Ar isotopes. To mitigate uncertainties arising from these assumptions, a new cosmogenic correction approach utilizing the exposure age determined on an un-irradiated aliquot and step-wise production rate estimates that account for spatial variations in Ca and K is described. Data obtained from NWA 4468 and an unofficial pairing of NWA 2975, which yield anomalously old ages when corrected for cosmogenic 36Ar using conventional techniques, are used to illustrate the efficacy of this new approach. For these samples, anomalous age determinations are rectified solely by the improved cosmogenic correction technique described herein. Ages of 188 ± 17 and 184 ± 17 Ma are obtained for NWA 4468 and NWA 2975, respectively, both of which are indistinguishable from ages obtained by other radioisotopic systems. For other Shergottites that have multiple trapped components, have experienced diffusive loss of Ar, or contain excess Ar, more accurate cosmogenic corrections may aid in the interpretation of anomalous ages. In conclusion, the trapped 40Ar/36Ar ratios inferred from inverse isochron diagrams obtained from NWA 4468 and NWA 2975 are significantly lower than the Martian atmospheric value, and may represent upper mantle or crustal components.
Are We Ready for Real-world Neuroscience?
Matusz, Pawel J; Dikker, Suzanne; Huth, Alexander G; Perrodin, Catherine
2018-06-19
Real-world environments are typically dynamic, complex, and multisensory in nature and require the support of top-down attention and memory mechanisms for us to be able to drive a car, make a shopping list, or pour a cup of coffee. Fundamental principles of perception and functional brain organization have been established by research utilizing well-controlled but simplified paradigms with basic stimuli. The last 30 years ushered in a revolution in computational power, brain mapping, and signal processing techniques. Drawing on those theoretical and methodological advances, over the years, research has departed more and more from traditional, rigorous, and well-understood paradigms to directly investigate cognitive functions and their underlying brain mechanisms in real-world environments. These investigations typically address the role of one or, more recently, multiple attributes of real-world environments. Fundamental assumptions about perception, attention, or brain functional organization have been challenged by studies adapting the traditional paradigms to emulate, for example, the multisensory nature or varying relevance of stimulation or dynamically changing task demands. Here, we present the state of the field within the emerging heterogeneous domain of real-world neuroscience. To be precise, the aim of this Special Focus is to bring together a variety of the emerging "real-world neuroscientific" approaches. These approaches differ in their principal aims, assumptions, or even definitions of "real-world neuroscience" research. Here, we showcase the commonalities and distinctive features of the different "real-world neuroscience" approaches. To do so, four early-career researchers and the speakers of the Cognitive Neuroscience Society 2017 Meeting symposium under the same title answer questions pertaining to the added value of such approaches in bringing us closer to accurate models of functional brain organization and cognitive functions.
Generation and Scaling of the African Landscape.
NASA Astrophysics Data System (ADS)
O'Malley, C.; White, N.; Roberts, G. G.
2017-12-01
An inventory of > 1500 longitudinal river profiles across Africa contains correlatable signals that can be inverted to determine a Neogene regional uplift history. This history can be tested using a range of geologic and geophysical observations. However, this approach makes simplifying assumptions about landscape erodibility through time and space (i.e. lithologic contrasts, precipitation rates, drainage stability). Here, we investigate the validity of these assumptions by carrying out a series of naturalistic landscape simulations using the Badlands and Landlab models. First, forward simulations were run with constant erodibility, using an uplift rate history determined by inverse modeling. The resultant drainage network and pattern of offshore sedimentary deposition reproduce the large-scale characteristics of the African landscape surprisingly well. This result implies that regional tectonic forcing plays a significant role in configuring drainage patterns. Secondly, the effects of varying precipitation through time and space are investigated. Since solutions to the stream power law are integrative, precipitation changes on timescales of less than 5-10 Ma have negligible influence on the resultant landscape. Finally, power spectral analyses of major African rivers that traverse significantly different climatic zones, lithologic boundaries, and biotic distributions reveal consistent scaling laws. At wavelengths of ≳ 10² km, spectra have slopes of -2, indicative of red (i.e. Brownian) noise. At wavelengths of ≲ 10² km, there is a cross-over transition to slopes of -1, consistent with pink noise. Onset of this transition suggests that spatially correlated noise generated by instabilities in water flow and by lithologic changes becomes prevalent at shorter wavelengths. Our analysis suggests that advective models of fluvial erosion are driven by a combination of external forcing and stochastic noise.
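A hedged sketch of the spectral-slope estimation described above, applied to a synthetic Brownian-walk profile (the authors' data, detrending and windowing choices are not reproduced):

```python
# Estimate a power-spectral slope by log-log regression on an FFT periodogram.
import numpy as np

rng = np.random.default_rng(1)
n, dx = 4096, 0.5                         # samples and spacing (km), hypothetical
profile = np.cumsum(rng.normal(size=n))   # Brownian walk -> expected slope ~ -2

power = np.abs(np.fft.rfft(profile - profile.mean()))**2
freq = np.fft.rfftfreq(n, d=dx)
mask = freq > 0                           # drop the zero-frequency bin
slope, _ = np.polyfit(np.log(freq[mask]), np.log(power[mask]), 1)
print(f"spectral slope ~ {slope:.2f} (red noise ~ -2, pink noise ~ -1)")
```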
Orthogonal vector algorithm to obtain the solar vector using the single-scattering Rayleigh model.
Wang, Yinlong; Chu, Jinkui; Zhang, Ran; Shi, Chao
2018-02-01
Information obtained from a polarization pattern in the sky provides many animals like insects and birds with vital long-distance navigation cues. The solar vector can be derived from the polarization pattern using the single-scattering Rayleigh model. In this paper, an orthogonal vector algorithm, which utilizes the redundancy of the single-scattering Rayleigh model, is proposed. We use the intersection angles between the polarization vectors as the main criteria in our algorithm. The assumption that all polarization vectors can be considered coplanar is used to simplify the three-dimensional (3D) problem with respect to the polarization vectors in our simulation. The surface-normal vector of the plane, which is determined by the polarization vectors after translation, represents the solar vector. Unfortunately, the two-directionality of the polarization vectors makes the resulting solar vector ambiguous. One important result of this study is, however, that this apparent disadvantage has no effect on the complexity of the algorithm. Furthermore, two other universal least-squares algorithms were investigated and compared. A device was then constructed, which consists of five polarized-light sensors as well as a 3D attitude sensor. Both the simulation and experimental data indicate that the orthogonal vector algorithms, if used with a suitable threshold, perform equally well or better than the other two algorithms. Our experimental data reveal that if the intersection angles between the polarization vectors are close to 90°, the solar-vector angle deviations are small. The data also support the assumption of coplanarity. During the 51 min experiment, the mean of the measured solar-vector angle deviations was about 0.242°, as predicted by our theoretical model.
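A hedged sketch of the plane-fitting idea: under the single-scattering Rayleigh model each polarization direction is perpendicular to the solar vector, so the normal of the best-fit plane through the polarization vectors (the right singular vector with smallest singular value) estimates the sun direction, up to the sign ambiguity noted above. The synthetic measurements and noise level below are hypothetical, and the paper's sensor geometry is not modeled.

```python
# Least-squares solar vector as the normal of the polarization-vector plane.
import numpy as np

rng = np.random.default_rng(2)
sun = np.array([0.3, -0.5, 0.81])
sun /= np.linalg.norm(sun)

# Synthetic unit polarization vectors orthogonal to the sun direction, plus noise.
raw = rng.normal(size=(5, 3))
pol = raw - np.outer(raw @ sun, sun)          # project into plane normal to sun
pol /= np.linalg.norm(pol, axis=1, keepdims=True)
pol += 0.01 * rng.normal(size=pol.shape)

# The best-fit plane normal minimizes sum((n . v_i)^2): take the last row of V^T.
_, _, vt = np.linalg.svd(pol)
estimate = vt[-1]
estimate *= np.sign(estimate @ sun)           # resolve the +/- ambiguity for scoring
angle = np.degrees(np.arccos(np.clip(estimate @ sun, -1.0, 1.0)))
print(f"angle error: {angle:.3f} deg")
```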
Colorectal cancer patients' attitudes towards involvement in decision making.
Beaver, Kinta; Campbell, Malcolm; Craven, Olive; Jones, David; Luker, Karen A; Susnerwala, Shabbir S
2009-03-01
To design and administer an attitude rating scale, exploring colorectal cancer patients' views of involvement in decision making. To examine the impact of socio-demographic and/or treatment-related factors on decision making. To conduct principal components analysis to determine if the scale could be simplified into a number of factors for future clinical utility. An attitude rating scale was constructed based on previous qualitative work and administered to colorectal cancer patients using a cross-sectional survey approach. 375 questionnaires were returned (81.7% response). For patients it was important to be informed and involved in the decision-making process. Information was not always used to make decisions as patients placed their trust in medical expertise. Women had more positive opinions on decision making and were more likely to want to make decisions. Written information was understood to a greater degree than verbal information. The scale could be simplified to a number of factors, indicating clinical utility. Few studies have explored the attitudes of colorectal cancer patients towards involvement in decision making. This study presents new insights into how patients view the concept of participation; important when considering current policy imperatives in the UK of involving service users in all aspects of care and treatment.
ERIC Educational Resources Information Center
Nachlieli, Talli; Herbst, Patricio
2009-01-01
This article reports on an investigation of how teachers of geometry perceived an episode of instruction presented to them as a case of engaging students in proving. Confirming what was hypothesized, participants found it remarkable that a teacher would allow a student to make an assumption while proving. But they perceived this episode in various…
Integrating Behavioral Technology into Public Schools.
ERIC Educational Resources Information Center
Axelrod, Saul
1993-01-01
Suggests seven measures that behavioral educators can take to make effective educational procedures available in public schools: make dissemination of effective technology first priority; develop comprehensive educational systems; simplify existing effective procedures; create market for effective educational technology; obtain new measures of…
Kushalnagar, Poorna; Smith, Scott; Hopper, Melinda; Ryan, Claire; Rinkevich, Micah; Kushalnagar, Raja
2018-02-01
People with relatively limited English language proficiency find the Internet's cancer and health information difficult to access and understand. The presence of unfamiliar words and complex grammar makes this particularly difficult for Deaf people. Unfortunately, current technology does not support low-cost, accurate translations of online materials into American Sign Language. However, current technology is relatively more advanced in allowing text simplification while retaining content. This research team developed a two-step approach for simplifying cancer and other health text. They then tested the approach using a crossover design with a sample of 36 deaf and 38 hearing college students. Results indicated that hearing college students did well on both the original and simplified text versions. Deaf college students' comprehension, in contrast, significantly benefitted from the simplified text. This two-step translation process offers a strategy that may improve the accessibility of Internet information for Deaf, as well as other low-literacy, individuals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pecseli, H. L.; Trulsen, J.
2009-10-08
Experimental as well as theoretical studies have demonstrated that turbulence can play an important role for the biosphere in marine environments, in particular also by affecting prey-predator encounter rates. Reference models for the encounter rates rely on simplifying assumptions of predators and prey being described as point particles moving passively with the local flow velocity. Based on simple arguments that can be tested experimentally we propose corrections for the standard expression for the encounter rates, where now finite sizes and Stokes drag effects are included.
Calculation of load distribution in stiffened cylindrical shells
NASA Technical Reports Server (NTRS)
Ebner, H; Koller, H
1938-01-01
Thin-walled shells with strong longitudinal and transverse stiffening (for example, stressed-skin fuselages and wings) may, under certain simplifying assumptions, be treated as static systems with finite redundancies. In this report the underlying basis for this method of treatment of the problem is presented and a computation procedure for stiffened cylindrical shells with curved sheet panels indicated. A detailed discussion of the force distribution due to applied concentrated forces is given, and the discussion illustrated by numerical examples which refer to an experimentally determined circular cylindrical shell.
Orbital geocentric oddness. (French Title: Bizarreries orbitales géocentriques)
NASA Astrophysics Data System (ADS)
Bassinot, E.
2013-09-01
The purpose of this essay is to determine the geocentric path of our superior neighbour, the planet Mars, named after the god of war. In other words, the question is: seen from our blue planet, what is the orbit of the red one? Based upon three simplifying and justified assumptions, it is proved hereunder, with a purely geometrical approach, that Mars describes a curve very close to the well-known Pascal's snail (limaçon). The loop shown by this curve easily explains the apparently erratic behaviour of Mars.
Stress Analysis of Beams with Shear Deformation of the Flanges
NASA Technical Reports Server (NTRS)
Kuhn, Paul
1937-01-01
This report discusses the fundamental action of shear deformation of the flanges on the basis of simplifying assumptions. The theory is developed to the point of giving analytical solutions for simple cases of beams and of skin-stringer panels under axial load. Strain-gage tests on a tension panel and on a beam corresponding to these simple cases are described and the results are compared with analytical results. For wing beams, an approximate method of applying the theory is given. As an alternative, the construction of a mechanical analyzer is advocated.
A study of trends and techniques for space base electronics
NASA Technical Reports Server (NTRS)
Trotter, J. D.; Wade, T. E.; Gassaway, J. D.
1979-01-01
The use of dry processing and alternate dielectrics for processing wafers is reported. A two-dimensional modeling program was written for the simulation of short-channel MOSFETs with nonuniform substrate doping. A key simplifying assumption is that the majority carriers can be represented by a sheet charge at the silicon dioxide-silicon interface. The program does not converge when solving the current continuity equation; however, solving the two-dimensional Poisson equation for the potential distribution was achieved. The status of other 2D MOSFET simulation programs is summarized.
The effect of the behavior of an average consumer on the public debt dynamics
NASA Astrophysics Data System (ADS)
De Luca, Roberto; Di Mauro, Marco; Falzarano, Angelo; Naddeo, Adele
2017-09-01
An important issue within the present economic crisis is understanding the dynamics of the public debt of a given country, and how the behavior of average consumers and tax payers in that country affects it. Starting from a model of the average consumer behavior introduced earlier by the authors, we propose a simple model to quantitatively address this issue. The model is then studied and analytically solved under some reasonable simplifying assumptions. In this way we obtain a condition under which the public debt steadily decreases.
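The abstract does not reproduce the model's equations. Purely as an illustration of the kind of dynamics involved (not the authors' model), a generic linear law of motion for debt D with interest rate r and constant primary surplus s already exhibits a steady-decrease condition of the sort mentioned above:

```python
# Generic debt law of motion D'(t) = r*D(t) - s, solved in closed form.
# All values are hypothetical; debt steadily decreases whenever s > r*D0.
import numpy as np

r, s, D0 = 0.02, 1.5, 60.0          # interest rate, primary surplus, initial debt
t = np.linspace(0.0, 50.0, 11)

# Closed-form solution of the linear ODE: D(t) = (D0 - s/r) * exp(r*t) + s/r.
D = (D0 - s / r) * np.exp(r * t) + s / r
print(np.round(D, 1))               # decreasing, since s = 1.5 > r*D0 = 1.2
```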
Jansma, J Martijn; de Zwart, Jacco A; van Gelderen, Peter; Duyn, Jeff H; Drevets, Wayne C; Furey, Maura L
2013-01-01
Technical developments in MRI have improved signal to noise, allowing the use of analysis methods such as finite impulse response (FIR) analysis of rapid event-related functional MRI (er-fMRI). FIR is one of the most informative analysis methods, as it determines the onset and full shape of the hemodynamic response function (HRF) without any a priori assumptions. FIR is, however, vulnerable to multicollinearity, which is directly related to the distribution of stimuli over time. Efficiency can be optimized by simplifying a design and restricting the stimulus distribution to specific sequences, while more design flexibility necessarily reduces efficiency. However, the actual effect of efficiency on fMRI results has never been tested in vivo. Thus, it is currently difficult to make an informed choice between protocol flexibility and statistical efficiency. The main goal of this study was to assign concrete fMRI signal-to-noise values to the abstract scale of FIR statistical efficiency. Ten subjects repeated a perception task with five random and m-sequence-based protocols with varying but, according to the literature, acceptable levels of multicollinearity. Results indicated substantial differences in signal standard deviation, whose level was a function of multicollinearity. Experiment protocols varied up to 55.4% in standard deviation. The results confirm that the quality of fMRI in an FIR analysis can vary significantly and substantially with statistical efficiency. Our in vivo measurements can be used to aid in making an informed decision between freedom in protocol design and statistical efficiency. PMID:23473798
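For readers unfamiliar with FIR modeling, here is a minimal sketch of an FIR design matrix (onsets and sizes are hypothetical); the multicollinearity discussed above shows up as an ill-conditioned X'X:

```python
# FIR design matrix: one regressor per post-stimulus time bin, no assumed HRF.
import numpy as np

n_scans, n_bins = 120, 8                  # volumes and FIR window length
onsets = [3, 17, 24, 41, 58, 66, 83, 97]  # stimulus onsets in scan units

X = np.zeros((n_scans, n_bins))
for onset in onsets:
    for lag in range(n_bins):
        if onset + lag < n_scans:
            X[onset + lag, lag] = 1.0     # indicator: "lag scans after an onset"

# FIR estimate of the HRF is beta = (X'X)^-1 X'y, reliable only when X'X is
# well conditioned -- the statistical-efficiency issue tested in the study.
print("condition number of X'X:", np.linalg.cond(X.T @ X))
```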
Population-expression models of immune response
NASA Astrophysics Data System (ADS)
Stromberg, Sean P.; Antia, Rustom; Nemenman, Ilya
2013-06-01
The immune response to a pathogen has two basic features. The first is the expansion of a few pathogen-specific cells to form a population large enough to control the pathogen. The second is the process of differentiation of cells from an initial naive phenotype to an effector phenotype which controls the pathogen, and subsequently to a memory phenotype that is maintained and responsible for long-term protection. The expansion and the differentiation have been considered largely independently. Changes in cell populations are typically described using ecologically based ordinary differential equation models. In contrast, differentiation of single cells is studied within systems biology and is frequently modeled by considering changes in gene and protein expression in individual cells. Recent advances in experimental systems biology make available for the first time data to allow the coupling of population and high dimensional expression data of immune cells during infections. Here we describe and develop population-expression models which integrate these two processes into systems biology on the multicellular level. When translated into mathematical equations, these models result in non-conservative, non-local advection-diffusion equations. We describe situations where the population-expression approach can make correct inference from data while previous modeling approaches based on common simplifying assumptions would fail. We also explore how model reduction techniques can be used to build population-expression models, minimizing the complexity of the model while keeping the essential features of the system. While we consider problems in immunology in this paper, we expect population-expression models to be more broadly applicable.
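The equations themselves are not shown in the abstract; a hedged, generic form of such a non-conservative advection-diffusion equation for the density n(g, t) of cells over an expression state g might read:

```latex
% Illustrative form only, not taken from the paper: n(g,t) is the density of
% cells over expression state g; division/death make the equation non-conservative.
\frac{\partial n(g,t)}{\partial t}
  + \underbrace{\frac{\partial}{\partial g}\bigl[v(g)\,n(g,t)\bigr]}_{\text{differentiation (advection)}}
  = \underbrace{\frac{\partial^{2}}{\partial g^{2}}\bigl[D(g)\,n(g,t)\bigr]}_{\text{expression noise (diffusion)}}
  + \underbrace{\bigl[\lambda(g)-\delta(g)\bigr]\,n(g,t)}_{\text{division minus death (non-conservative)}}
```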
Sensitivity of TRIM projections to management, harvest, yield, and stocking adjustment assumptions.
Susan J. Alexander
1991-01-01
The Timber Resource Inventory Model (TRIM) was used to make several projections of forest industry timber supply for the Douglas-fir region. The sensitivity of these projections to assumptions about management and yields is discussed. A base run is compared to runs in which yields were altered, stocking adjustment was eliminated, harvest assumptions were changed, and...
A New, More Physically Based Algorithm, for Retrieving Aerosol Properties over Land from MODIS
NASA Technical Reports Server (NTRS)
Levy, Robert C.; Kaufman, Yoram J.; Remer, Lorraine A.; Mattoo, Shana
2004-01-01
The MODerate resolution Imaging Spectroradiometer (MODIS) has been successfully retrieving aerosol properties, beginning in early 2000 from Terra and in mid-2002 from Aqua. Over land, the retrieval algorithm makes use of three MODIS channels, in the blue, red and infrared wavelengths. As part of the validation exercises, retrieved spectral aerosol optical thickness (AOT) has been compared via scatterplots against spectral AOT measured by the global Aerosol Robotic NETwork (AERONET). On one hand, global and long term validation looks promising, with two-thirds (average plus and minus one standard deviation) of all points falling between published expected error bars. On the other hand, regression of these points shows a positive y-offset and a slope less than 1.0. For individual regions, such as along the U.S. East Coast, the offset and slope are even worse. Here, we introduce an overhaul of the algorithm for retrieving aerosol properties over land. Some well-known weaknesses in the current aerosol retrieval from MODIS include: a) rigid assumptions about the underlying surface reflectance, b) limited aerosol models to choose from, c) simplified (scalar) radiative transfer (RT) calculations used to simulate satellite observations, and d) the assumption that aerosol is transparent in the infrared channel. The new algorithm attempts to address all four problems: a) it will include surface type information, instead of fixed ratios of the reflectance in the visible channels to the mid-IR reflectance; b) it will include updated aerosol optical properties to reflect the growing aerosol climatology retrieved from eight-plus years of AERONET operation; c) the effects of polarization will be included using vector RT calculations; d) most importantly, the new algorithm does not assume that aerosol is transparent in the infrared channel. It will be an inversion of reflectance observed in the three channels (blue, red, and infrared), rather than iterative single-channel retrievals. Thus, this new formulation of the MODIS aerosol retrieval over land includes more physically based surface, aerosol and radiative transfer treatments with fewer potentially erroneous assumptions.
NASA Astrophysics Data System (ADS)
Xiong, Yan; Reichenbach, Stephen E.
1999-01-01
Understanding of hand-written Chinese characters is at such a primitive stage that models include some assumptions about hand-written Chinese characters that are simply false. Maximum Likelihood Estimation (MLE) may therefore not be an optimal method for hand-written Chinese character recognition. This concern motivates the research effort to consider alternative criteria. Maximum Mutual Information Estimation (MMIE) is an alternative method for parameter estimation that does not derive its rationale from presumed model correctness, but instead examines the pattern-modeling problem in an automatic recognition system from an information-theoretic point of view. The objective of MMIE is to find a set of parameters such that the resultant model allows the system to derive from the observed data as much information as possible about the class. We consider MMIE for recognition of hand-written Chinese characters using a simplified hidden Markov random field. MMIE provides improved performance over MLE in this application.
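A toy numerical sketch of the two criteria (all values illustrative): MLE maximizes the log-likelihood of the true class alone, while MMIE maximizes the log posterior of the true class, which also penalizes probability mass assigned to competing classes.

```python
# Contrast the MLE objective log p(x|c) with the MMIE objective
# log [ p(x|c) P(c) / sum_c' p(x|c') P(c') ] on toy class-conditional scores.
import numpy as np

log_likelihoods = np.log(np.array([0.020, 0.012, 0.004]))  # p(x|c) per class
priors = np.array([1 / 3, 1 / 3, 1 / 3])
true_class = 0

mle_objective = log_likelihoods[true_class]
joint = log_likelihoods + np.log(priors)
mmie_objective = joint[true_class] - np.logaddexp.reduce(joint)

print(f"MLE objective:  {mle_objective:.3f}")
print(f"MMIE objective: {mmie_objective:.3f}  (log posterior of the true class)")
```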
Assessment of railway wagon suspension characteristics
NASA Astrophysics Data System (ADS)
Soukup, Josef; Skočilas, Jan; Skočilasová, Blanka
2017-05-01
The article deals with the assessment of railway wagon suspension characteristics. The essential characteristics of a suspension are represented by the stiffness constants of the equivalent springs and the eigenfrequencies of the oscillating movements about the main central inertia axes of the vehicle. The premise of the experimental determination of these characteristics is knowledge of the position of the center of gravity and of the main central moments of inertia of the vehicle frame. The vehicle frame performs a general spatial movement when the vehicle moves. An analysis of the frame movement generally arises from Euler's equations, which are commonly used for the description of spherical movement. This solution is difficult, and it can be simplified by applying specific assumptions. Solutions for the eigenfrequencies and for the suspension stiffness are presented in the article. The solutions are applied to railway and road vehicles under the simplifying conditions.
Intentionality, degree of damage, and moral judgments.
Berg-Cross, L G
1975-12-01
153 first graders were given Piagetian moral judgment problems with a new simplified methodology as well as the usual story-pair paradigm. The new methodology involved making quantitative judgments about single stories and examined the influence of level of intentionality and degree of damage upon absolute punishment ratings. Contrary to results obtained with a story-pair methodology, it was found that with single stories even 6-year-old children responded to the level of intention in the stories as well as the quantity and quality of damage involved. This suggested that Piaget's methodology may be forcing children to employ a simplifying strategy while under other conditions they are able to perform the mental operations necessary to make complex moral judgments.
Li, Qing-Rong; Luo, Jia-Ling; Zhou, Zhong-Hua; Wang, Guang-Ying; Chen, Rui; Cheng, Shi; Wu, Min; Li, Hui; Ni, He; Li, Hai-Hang
2018-04-15
Industry discards copious organic wastewater from sweet potato starch factories and scrap tea from tea production. A simplified procedure was developed to recover all biochemicals from the wastewater of a sweet potato starch factory and use them to make health black tea and theaflavins from scrap green tea. The sweet potato wastewater was sequentially treated by isoelectric precipitation, ultrafiltration and nanofiltration to recover polyphenol oxidase (PPO), β-amylase, and small molecular fractions, respectively. The PPO fraction can effectively transform green tea extracts into black tea with a high content of theaflavins through optimized fed-batch feeding fermentation. The PPO-transformed black tea with sporamins can be used to make health black tea, or to make theaflavins by fractionation with ethyl acetate. This work provides a resource- and environment-friendly approach for economically utilizing the sweet potato wastewater and the scrap tea, and for making biochemical, nutrient and health products. Copyright © 2017 Elsevier Ltd. All rights reserved.
Collective behaviour in vertebrates: a sensory perspective
Collignon, Bertrand; Fernández-Juricic, Esteban
2016-01-01
Collective behaviour models can predict behaviours of schools, flocks, and herds. However, in many cases, these models make biologically unrealistic assumptions in terms of the sensory capabilities of the organism, which are applied across different species. We explored how sensitive collective behaviour models are to these sensory assumptions. Specifically, we used parameters reflecting the visual coverage and visual acuity that determine the spatial range over which an individual can detect and interact with conspecifics. Using metric and topological collective behaviour models, we compared the classic sensory parameters, typically used to model birds and fish, with a set of realistic sensory parameters obtained through physiological measurements. Compared with the classic sensory assumptions, the realistic assumptions increased perceptual ranges, which led to fewer groups and larger group sizes in all species, and higher polarity values and slightly shorter neighbour distances in the fish species. Overall, classic visual sensory assumptions are not representative of many species showing collective behaviour and constrain unrealistically their perceptual ranges. More importantly, caution must be exercised when empirically testing the predictions of these models in terms of choosing the model species, making realistic predictions, and interpreting the results. PMID:28018616
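A minimal sketch of the two interaction rules being compared, with the perceptual range capped in both cases (positions and parameter values are hypothetical):

```python
# Metric rule (neighbours within a fixed radius) versus topological rule
# (k nearest neighbours), both limited by a sensory range.
import numpy as np

def metric_neighbours(positions, i, radius):
    d = np.linalg.norm(positions - positions[i], axis=1)
    return np.flatnonzero((d > 0) & (d <= radius))

def topological_neighbours(positions, i, k, max_range):
    d = np.linalg.norm(positions - positions[i], axis=1)
    d[i] = np.inf                          # exclude the focal individual
    order = np.argsort(d)[:k]
    return order[d[order] <= max_range]    # the sensory limit still applies

rng = np.random.default_rng(3)
pos = rng.uniform(0, 10, size=(20, 2))
print("metric:     ", metric_neighbours(pos, 0, radius=2.0))
print("topological:", topological_neighbours(pos, 0, k=6, max_range=5.0))
```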
NASA Astrophysics Data System (ADS)
Gupta, S.; Deusner, C.; Haeckel, M.; Helmig, R.; Wohlmuth, B.
2017-09-01
Natural gas hydrates are considered a potential resource for gas production on industrial scales. Gas hydrates contribute to the strength and stiffness of the hydrate-bearing sediments. During gas production, the geomechanical stability of the sediment is compromised. Due to the potential geotechnical risks and process management issues, the mechanical behavior of the gas hydrate-bearing sediments needs to be carefully considered. In this study, we describe a coupling concept that simplifies the mathematical description of the complex interactions occurring during gas production by isolating the effects of sediment deformation and hydrate phase changes. Central to this coupling concept is the assumption that the soil grains form the load-bearing solid skeleton, while the gas hydrate enhances the mechanical properties of this skeleton. We focus on testing this coupling concept in capturing the overall impact of geomechanics on gas production behavior though numerical simulation of a high-pressure isotropic compression experiment combined with methane hydrate formation and dissociation. We consider a linear-elastic stress-strain relationship because it is uniquely defined and easy to calibrate. Since, in reality, the geomechanical response of the hydrate-bearing sediment is typically inelastic and is characterized by a significant shear-volumetric coupling, we control the experiment very carefully in order to keep the sample deformations small and well within the assumptions of poroelasticity. The closely coordinated experimental and numerical procedures enable us to validate the proposed simplified geomechanics-to-flow coupling, and set an important precursor toward enhancing our coupled hydro-geomechanical hydrate reservoir simulator with more suitable elastoplastic constitutive models.
Morel, Yann G.; Favoretto, Fabio
2017-01-01
All empirical water column correction methods have consistently been reported to require existing depth sounding data for the purpose of calibrating a simple depth retrieval model; they yield poor results over very bright or very dark bottoms. In contrast, we set out to (i) use only the relative radiance data in the image along with published data, and several new assumptions; (ii) in order to specify and operate the simplified radiative transfer equation (RTE); (iii) for the purpose of retrieving both the satellite derived bathymetry (SDB) and the water column corrected spectral reflectance over shallow seabeds. Sea truth regressions show that SDB depths retrieved by the method only need tide correction. Therefore it shall be demonstrated that, under such new assumptions, there is no need for (i) formal atmospheric correction; (ii) conversion of relative radiance into calibrated reflectance; or (iii) existing depth sounding data, to specify the simplified RTE and produce both SDB and spectral water column corrected radiance ready for bottom typing. Moreover, the use of the panchromatic band for that purpose is introduced. Altogether, we named this process the Self-Calibrated Supervised Spectral Shallow-sea Modeler (4SM). This approach requires a trained practitioner, though, to produce its results within hours of downloading the raw image. The ideal raw image should be a “near-nadir” view, exhibit homogeneous atmosphere and water column, include some coverage of optically deep waters and bare land, and lend itself to quality removal of haze, atmospheric adjacency effect, and sun/sky glint. PMID:28754028
NASA Astrophysics Data System (ADS)
Peckham, S. D.; Kelbert, A.; Rudan, S.; Stoica, M.
2016-12-01
Standardized metadata for models is the key to reliable and greatly simplified coupling in model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System). This model metadata also helps model users to understand the important details that underpin computational models and to compare the capabilities of different models. These details include simplifying assumptions on the physics, governing equations and the numerical methods used to solve them, discretization of space (the grid) and time (the time-stepping scheme), state variables (input or output), and model configuration parameters. This kind of metadata provides a "deep description" of a computational model that goes well beyond other types of metadata (e.g. author, purpose, scientific domain, programming language, digital rights, provenance, execution) and captures the science that underpins a model. While having this kind of standardized metadata for each model in a repository opens up a wide range of exciting possibilities, it is difficult to collect this information, and a carefully conceived "data model" or schema is needed to store it. Automated harvesting and scraping methods can provide some useful information, but they often result in metadata that is inaccurate or incomplete, and this is not sufficient to enable the desired capabilities. In order to address this problem, we have developed a browser-based tool called the MCM Tool (Model Component Metadata) which runs on notebooks, tablets and smart phones. This tool was partially inspired by the TurboTax software, which greatly simplifies the necessary task of preparing tax documents. It allows a model developer or advanced user to provide a standardized, deep description of a computational geoscience model, including hydrologic models. Under the hood, the tool uses a new ontology for models built on the CSDMS Standard Names, expressed as a collection of RDF files (Resource Description Framework). This ontology is based on core concepts such as variables, objects, quantities, operations, processes and assumptions. The purpose of this talk is to present details of the new ontology and then to demonstrate the MCM Tool for several hydrologic models.
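As a hedged illustration of what such a "deep description" might contain (field names and values are hypothetical, not the MCM Tool's actual schema or the CSDMS Standard Names):

```python
# Hypothetical "deep description" metadata record for a computational model.
model_metadata = {
    "model_name": "ExampleSnowmeltModel",          # hypothetical model
    "assumptions": ["degree-day melt", "no canopy interception"],
    "governing_equations": ["dSWE/dt = P_snow - M"],
    "numerical_method": "explicit forward Euler",
    "spatial_grid": {"type": "uniform rectilinear", "cells": [200, 150]},
    "time_stepping": {"scheme": "fixed step", "dt_seconds": 3600},
    "variables": {
        "input": ["precipitation flux", "air temperature"],
        "output": ["snowmelt flux"],
    },
    "parameters": {"degree_day_factor_mm_per_degC_day": 3.0},
}
```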
Cognitive-psychology expertise and the calculation of the probability of a wrongful conviction.
Rouder, Jeffrey N; Wixted, John T; Christenfeld, Nicholas J S
2018-05-08
Cognitive psychologists are familiar with how their expertise in understanding human perception, memory, and decision-making is applicable to the justice system. They may be less familiar with how their expertise in statistical decision-making and their comfort working in noisy real-world environments is just as applicable. Here we show how this expertise in ideal-observer models may be leveraged to calculate the probability of guilt of Gary Leiterman, a man convicted of murder on the basis of DNA evidence. We show by common probability theory that Leiterman is likely a victim of a tragic contamination event rather than a murderer. Making any calculation of the probability of guilt necessarily relies on subjective assumptions. The conclusion about Leiterman's innocence is not overly sensitive to the assumptions-the probability of innocence remains high for a wide range of reasonable assumptions. We note that cognitive psychologists may be well suited to make these calculations because as working scientists they may be comfortable with the role a reasonable degree of subjectivity plays in analysis.
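An illustrative Bayes-rule calculation in this spirit, with all probabilities hypothetical rather than the authors' actual figures:

```python
# Compare two explanations of a DNA match: guilt versus cross-contamination.
prior_guilt = 1e-6            # hypothetical prior that this person is the source
prior_contamination = 1e-3    # hypothetical rate of sample cross-contamination

p_match_given_guilt = 1.0
p_match_given_contamination = 1.0
p_match_by_coincidence = 1e-9   # random-match probability, hypothetical

evidence = (prior_guilt * p_match_given_guilt
            + prior_contamination * p_match_given_contamination
            + (1 - prior_guilt - prior_contamination) * p_match_by_coincidence)
posterior_guilt = prior_guilt * p_match_given_guilt / evidence
print(f"posterior probability of guilt: {posterior_guilt:.4f}")  # ~0.001 here
```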
76 FR 58252 - Applications for New Awards; Statewide, Longitudinal Data Systems Program
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-20
... DEPARTMENT OF EDUCATION Applications for New Awards; Statewide, Longitudinal Data Systems Program... analysis and informed decision-making at all levels of the education system, increase the efficiency with... accountability systems, and simplify the processes used by SEAs to make education data transparent through...
The crux of the method: assumptions in ordinary least squares and logistic regression.
Long, Rebecca G
2008-10-01
Logistic regression has increasingly become the tool of choice when analyzing data with a binary dependent variable. While resources relating to the technique are widely available, clear discussions of why logistic regression should be used in place of ordinary least squares regression are difficult to find. The current paper compares and contrasts the assumptions of ordinary least squares with those of logistic regression and explains why logistic regression's looser assumptions make it adept at handling violations of the more important assumptions in ordinary least squares.
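A hedged, synthetic-data sketch of the practical point: fitting OLS to a binary outcome (the linear probability model) can return fitted "probabilities" outside [0, 1] and assumes constant error variance, while logistic regression respects the binary likelihood. The data and model calls below are illustrative only.

```python
# Hedged, synthetic-data sketch: OLS on a binary outcome (the linear
# probability model) can return fitted "probabilities" outside [0, 1],
# while logistic regression respects the binary likelihood.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 1))
p_true = 1.0 / (1.0 + np.exp(-(2.0 * x[:, 0] - 0.5)))  # true logistic relation
y = rng.binomial(1, p_true)

ols = LinearRegression().fit(x, y)
logit = LogisticRegression().fit(x, y)

grid = np.linspace(-3.0, 3.0, 7).reshape(-1, 1)
print("OLS fitted values:   ", ols.predict(grid).round(2))   # can leave [0, 1]
print("Logistic P(y=1 | x): ", logit.predict_proba(grid)[:, 1].round(2))
```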
Predictive performance models and multiple task performance
NASA Technical Reports Server (NTRS)
Wickens, Christopher D.; Larish, Inge; Contorer, Aaron
1989-01-01
Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are characterized in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.
Paliwal, Nikhil; Damiano, Robert J; Varble, Nicole A; Tutino, Vincent M; Dou, Zhongwang; Siddiqui, Adnan H; Meng, Hui
2017-12-01
Computational fluid dynamics (CFD) is a promising tool to aid in clinical diagnoses of cardiovascular diseases. However, it uses assumptions that simplify the complexities of the real cardiovascular flow. Because of the high stakes in the clinical setting, it is critical to quantify the effect of these assumptions on CFD simulation results. However, existing CFD validation approaches do not quantify the error in simulation results due to the CFD solver's modeling assumptions. Instead, they directly compare CFD simulation results against validation data. Thus, to quantify the accuracy of a CFD solver, we developed a validation methodology that calculates the CFD model error (arising from modeling assumptions). Our methodology identifies independent error sources in CFD and validation experiments, and calculates the model error by parsing out other sources of error inherent in simulation and experiments. To demonstrate the method, we simulated the flow field of a patient-specific intracranial aneurysm (IA) in the commercial CFD software STAR-CCM+. Particle image velocimetry (PIV) provided validation datasets for the flow field on two orthogonal planes. The average model error in the STAR-CCM+ solver was 5.63 ± 5.49% along the intersecting validation line of the orthogonal planes. Furthermore, we demonstrated that our validation method is superior to existing validation approaches by applying three representative existing validation techniques to our CFD and experimental dataset and comparing the validation results. Our validation methodology offers a streamlined workflow to extract the "true" accuracy of a CFD solver.
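The abstract does not spell out the algebra of "parsing out" error sources, so the following Python fragment is only a schematic of the idea under one simple added assumption of ours: that independent error sources combine in quadrature, leaving the model error as the residual of the total CFD-versus-PIV discrepancy.

```python
# Schematic only: assumes (our assumption, not necessarily the authors') that
# independent error sources add in quadrature, leaving the model error as the
# residual of the total CFD-vs-PIV discrepancy.
import math

def model_error(total_discrepancy, numerical_error, experimental_error):
    residual_sq = (total_discrepancy ** 2
                   - numerical_error ** 2
                   - experimental_error ** 2)
    return math.sqrt(max(residual_sq, 0.0))

# Illustrative numbers only (percent of a reference velocity):
print(model_error(total_discrepancy=8.0,
                  numerical_error=3.0,
                  experimental_error=4.0))   # ~6.2
```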
Nti-Gyabaah, J; Chmielowski, R; Chan, V; Chiew, Y C
2008-07-09
Accurate experimental determination of the solubility of active pharmaceutical ingredients (APIs) in solvents, and its correlation for solubility prediction, is essential for rapid design and optimization of isolation, purification, and formulation processes in the pharmaceutical industry. An efficient material-conserving analytical method, with an in-line reversed-phase HPLC separation protocol, has been developed to measure the equilibrium solubility of lovastatin in ethanol, 1-propanol, 1-butanol, 1-pentanol, 1-hexanol, and 1-octanol between 279 and 313 K. The fusion enthalpy ΔH_fus, melting point temperature T_m, and differential molar heat capacity ΔC_p were determined by differential scanning calorimetry (DSC) to be 43,136 J/mol, 445.5 K, and 255 J/(mol·K), respectively. In order to use the regular solution equation, simplifying assumptions have been made concerning ΔC_p: specifically, ΔC_p = 0 or ΔC_p = ΔS. In this study, we examined the extent to which these assumptions influence the magnitude of the ideal solubility of lovastatin, and determined that both assumptions underestimate it. The solubility data were used with the calculated ideal solubility to obtain activity coefficients, which were then fitted to the van't Hoff-like regular solution equation. Examination of the plots indicated that both assumptions give an erroneous excess enthalpy of solution, H∞, and hence thermodynamically inconsistent activity coefficients. The order of ideality, or solubility, of lovastatin was 1-butanol > 1-propanol > 1-pentanol > 1-hexanol > 1-octanol.
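For reference, the textbook ideal-solubility expression to which these two assumptions are applied can be written as follows (standard Prausnitz-style form, not quoted from the paper); the ΔC_p = 0 and ΔC_p = ΔS_fus limits drop or collapse the heat-capacity terms.

```latex
% Textbook (Prausnitz-style) ideal-solubility expression; the paper's two
% simplifying assumptions are its standard limiting cases.
\[
\ln x_{\mathrm{id}}
  = -\frac{\Delta H_{\mathrm{fus}}}{R T_m}\left(\frac{T_m}{T}-1\right)
    + \frac{\Delta C_p}{R}\left(\frac{T_m}{T}-1\right)
    - \frac{\Delta C_p}{R}\,\ln\frac{T_m}{T}
\]
\[
\Delta C_p = 0:\;
\ln x_{\mathrm{id}} = -\frac{\Delta H_{\mathrm{fus}}}{R}
      \left(\frac{1}{T}-\frac{1}{T_m}\right);
\qquad
\Delta C_p = \Delta S_{\mathrm{fus}} = \frac{\Delta H_{\mathrm{fus}}}{T_m}:\;
\ln x_{\mathrm{id}} = -\frac{\Delta S_{\mathrm{fus}}}{R}\,\ln\frac{T_m}{T}.
\]
```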
Simplified dichromated gelatin hologram recording process
NASA Technical Reports Server (NTRS)
Georgekutty, Tharayil G.; Liu, Hua-Kuang
1987-01-01
A simplified method for making dichromated gelatin (DCG) holographic optical elements (HOE) has been discovered. The method is much less tedious and it requires a period of processing time comparable with that for processing a silver halide hologram. HOE characteristics including diffraction efficiency (DE), linearity, and spectral sensitivity have been quantitatively investigated. The quality of the holographic grating is very high. Ninety percent or higher diffraction efficiency has been achieved in simple plane gratings made by this process.
The Valuation of Scientific and Technical Experiments
NASA Technical Reports Server (NTRS)
Williams, F. E.
1972-01-01
Rational selection of scientific and technical experiments for space missions is studied. Particular emphasis is placed on the assessment of value or worth of an experiment. A specification procedure is outlined and discussed for the case of one decision maker. Experiments are viewed as multi-attributed entities, and a relevant set of attributes is proposed. Alternative methods of describing levels of the attributes are proposed and discussed. The reasonableness of certain simplifying assumptions such as preferential and utility independence is explored, and it is tentatively concluded that preferential independence applies and utility independence appears to be appropriate.
Uncertainty about fundamentals and herding behavior in the FOREX market
NASA Astrophysics Data System (ADS)
Kaltwasser, Pablo Rovira
2010-03-01
It is traditionally assumed in finance models that the fundamental value of assets is known with certainty. Although this is an appealing simplifying assumption, it is by no means based on empirical evidence. A simple heterogeneous agent model of the exchange rate is presented. In the model, traders do not observe the true underlying fundamental exchange rate and as a consequence they base their trades on beliefs about this variable. Despite the fact that only fundamentalist traders operate in the market, the model belongs to the heterogeneous agent literature, as traders have different beliefs about the fundamental rate.
Impact of cell size on inventory and mapping errors in a cellular geographic information system
NASA Technical Reports Server (NTRS)
Wehde, M. E. (Principal Investigator)
1979-01-01
The author has identified the following significant results. The effect of grid position was found insignificant for maps but highly significant for isolated mapping units. A modelable relationship between mapping error and cell size was observed for the map segment analyzed. Map data structure was also analyzed with an interboundary distance distribution approach. Map data structure and the impact of cell size on that structure were observed. The existence of a model allowing prediction of mapping error based on map structure was hypothesized and two generations of models were tested under simplifying assumptions.
NASA Astrophysics Data System (ADS)
Akbar, Noreen Sher; Mustafa, M. T.
2015-07-01
In the present article, ferromagnetic field effects on copper-nanoparticle blood flow through composite permeable stenosed arteries are discussed. Blood flow carrying copper nanoparticles, with water as the base fluid and different nanoparticle sizes, has not been explored before. The equations for the Cu-water nanofluid are developed for the first time in the literature and simplified using the long-wavelength and low-Reynolds-number assumptions. Exact solutions have been evaluated for the velocity, pressure gradient, solid volume fraction of the nanoparticles, and temperature profile. The effects of various flow parameters on the flow and heat transfer characteristics are illustrated.
Thermal effectiveness of multiple shell and tube pass TEMA E heat exchangers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pignotti, A.; Tamborenea, P.I.
1988-02-01
The thermal effectiveness of a TEMA E shell-and-tube heat exchanger, with one shell pass and an arbitrary number of tube passes, is determined under the usual simplifying assumptions of perfect transverse mixing of the shell fluid, no phase change, and temperature independence of the heat capacity rates and the heat transfer coefficient. A purely algebraic solution is obtained for the effectiveness as a function of the heat capacity rate ratio and the number of heat transfer units. The case with M shell passes and N tube passes is easily expressed in terms of the single-shell-pass case.
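The closed-form results referred to here are of the same family as the textbook effectiveness-NTU relations for one shell pass and for M shell passes in series; the Python sketch below implements those textbook formulas (as found in standard heat transfer texts), not the authors' exact algebra.

```python
# Hedged sketch: textbook effectiveness-NTU relations of the same family as
# the abstract's algebraic solution; c is the heat capacity rate ratio.
import math

def eff_one_shell_pass(ntu, c):
    """Effectiveness of a single shell pass with 2, 4, ... tube passes."""
    s = math.sqrt(1.0 + c * c)
    e = math.exp(-ntu * s)
    return 2.0 / ((1.0 + c) + s * (1.0 + e) / (1.0 - e))

def eff_m_shell_passes(ntu_total, c, m):
    """M identical shell passes in series, overall counterflow arrangement."""
    e1 = eff_one_shell_pass(ntu_total / m, c)
    if abs(c - 1.0) < 1e-12:
        return m * e1 / (1.0 + (m - 1.0) * e1)
    r = ((1.0 - e1 * c) / (1.0 - e1)) ** m
    return (r - 1.0) / (r - c)

print(eff_one_shell_pass(ntu=2.0, c=0.5))              # ~0.69
print(eff_m_shell_passes(ntu_total=4.0, c=0.5, m=2))   # ~0.88
```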
Generalization of low pressure, gas-liquid, metastable sound speed to high pressures
NASA Technical Reports Server (NTRS)
Bursik, J. W.; Hall, R. M.
1981-01-01
A theory is developed for isentropic metastable sound propagation in high pressure gas-liquid mixtures. Without simplification, it also correctly predicts the minimum speed for low pressure air-water measurements where other authors are forced to postulate isothermal propagation. This is accomplished by a mixture heat capacity ratio which automatically adjusts from its single phase values to approximately the isothermal value of unity needed for the minimum speed. Computations are made for the pure components parahydrogen and nitrogen, with emphasis on the latter. With simplifying assumptions, the theory reduces to a well known approximate formula limited to low pressure.
Ferrofluids: Modeling, numerical analysis, and scientific computation
NASA Astrophysics Data System (ADS)
Tomas, Ignacio
This dissertation presents some developments in the Numerical Analysis of Partial Differential Equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the Micropolar model proposed by R.E. Rosensweig. The Micropolar Navier-Stokes Equations (MNSE) are a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much bigger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point of this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable, and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving on to the much more complex Rosensweig model, we provide a definition (approximation) for the effective magnetizing field h and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we set up the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ. We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for the Rosensweig model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from the Rosensweig model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model. For a simplified version of this model and the corresponding numerical scheme we prove, in addition to stability, convergence and existence of solutions as a by-product. Throughout this dissertation, we provide numerical experiments, not only to validate mathematical results, but also to help the reader gain a qualitative understanding of the PDE models analyzed in this dissertation (the MNSE, the Rosensweig model, and the two-phase model). In addition, we also provide computational experiments to illustrate the potential of these simple models and their ability to capture basic phenomenological features of ferrofluids, such as the Rosensweig instability in the case of the two-phase model. In this respect, we highlight the incisive numerical experiments with the two-phase model illustrating the critical role of the demagnetizing field in reproducing physically realistic behavior of ferrofluids.
A general numerical model for wave rotor analysis
NASA Technical Reports Server (NTRS)
Paxson, Daniel W.
1992-01-01
Wave rotors represent one of the promising technologies for achieving very high core temperatures and pressures in future gas turbine engines. Their operation depends upon unsteady gas dynamics and as such, their analysis is quite difficult. This report describes a numerical model which has been developed to perform such an analysis. Following a brief introduction, a summary of the wave rotor concept is given. The governing equations are then presented, along with a summary of the assumptions used to obtain them. Next, the numerical integration technique is described. This is an explicit finite volume technique based on the method of Roe. The discussion then focuses on the implementation of appropriate boundary conditions. Following this, some results are presented which first compare the numerical approximation to the governing differential equations and then compare the overall model to an actual wave rotor experiment. Finally, some concluding remarks are presented concerning the limitations of the simplifying assumptions and areas where the model may be improved.
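As a hedged illustration of the update structure described (an explicit finite-volume scheme with a Roe-type upwind flux), the sketch below applies it to scalar linear advection, where the Roe flux reduces to simple upwinding; the actual wave rotor model solves the unsteady Euler equations with far richer port boundary conditions.

```python
# Hedged sketch of the update structure only: explicit finite volume with a
# Roe-type upwind flux, applied to scalar linear advection u_t + a u_x = 0.
import numpy as np

def roe_flux(ul, ur, a):
    """For f(u) = a*u, the Roe flux is the central flux plus |a| dissipation."""
    return 0.5 * a * (ul + ur) - 0.5 * abs(a) * (ur - ul)

def advect(u, a, dx, dt, steps):
    for _ in range(steps):
        ue = np.concatenate(([u[0]], u, [u[-1]]))   # simple outflow ghosts
        f = roe_flux(ue[:-1], ue[1:], a)            # interface fluxes
        u = u - (dt / dx) * (f[1:] - f[:-1])        # conservative update
    return u

x = np.linspace(0.0, 1.0, 201)
u0 = np.where(x < 0.5, 1.0, 0.0)                    # step initial condition
dx = x[1] - x[0]
print(advect(u0, a=1.0, dx=dx, dt=0.4 * dx, steps=100)[95:105])
```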
Refracted arrival waves in a zone of silence from a finite thickness mixing layer.
Suzuki, Takao; Lele, Sanjiva K
2002-02-01
Refracted arrival waves which propagate in the zone of silence of a finite thickness mixing layer are analyzed using geometrical acoustics in two dimensions. Here, two simplifying assumptions are made: (i) the mean flow field is transversely sheared, and (ii) the mean velocity and temperature profiles approach the free-stream conditions exponentially. Under these assumptions, ray trajectories are analytically solved, and a formula for acoustic pressure amplitude in the far field is derived in the high-frequency limit. This formula is compared with the existing theory based on a vortex sheet corresponding to the low-frequency limit. The analysis covers the dependence on the Mach number as well as on the temperature ratio. The results show that both limits have some qualitative similarities, but the amplitude in the zone of silence at high frequencies is proportional to ω^(-1/2), while that at low frequencies is proportional to ω^(-3/2), ω being the angular frequency of the source.
Perrodin, Yves; Babut, Marc; Bedell, Jean-Philippe; Bray, Marc; Clement, Bernard; Delolme, Cécile; Devaux, Alain; Durrieu, Claude; Garric, Jeanne; Montuelle, Bernard
2006-08-01
The implementation of an ecological risk assessment framework is presented for dredged material deposits on soil close to a canal and groundwater, and tested with sediment samples from canals in northern France. This framework includes two steps: a simplified risk assessment based on contaminant concentrations and a detailed risk assessment based on toxicity bioassays and column leaching tests. The tested framework includes three related assumptions: (a) effects on plants (Lolium perenne L.), (b) effects on aquatic organisms (Escherichia coli, Pseudokirchneriella subcapitata, Ceriodaphnia dubia, and Xenopus laevis) and (c) effects on groundwater contamination. Several exposure conditions were tested using standardised bioassays. According to the specific dredged material tested, the three assumptions were more or less discriminatory, soil and groundwater pollution being the most sensitive. Several aspects of the assessment procedure must now be improved, in particular assessment endpoint design for risks to ecosystems (e.g., integration of pollutant bioaccumulation), bioassay protocols and column leaching test design.
Tests for the extraction of Boer-Mulders functions
NASA Astrophysics Data System (ADS)
Christova, Ekaterina; Leader, Elliot; Stoilov, Michail
2017-12-01
At present, the Boer-Mulders (BM) functions are extracted from asymmetry data using the simplifying assumption of their proportionality to the Sivers functions for each quark flavour. Here we present two independent tests for this assumption. We subject COMPASS data on the 〈cos ϕh〉, 〈cos 2ϕh〉 and Sivers asymmetries in semi-inclusive deep inelastic scattering to these tests. Our analysis shows that the tests are satisfied with the available data if the proportionality constant is the same for all quark flavours, which does not correspond to the flavour dependence used in existing analyses. This suggests that the published information on the BM functions may be unreliable. The 〈cos ϕh〉 and 〈cos 2ϕh〉 asymmetries also receive contributions from the, in principle, calculable Cahn effect. We succeed in extracting the Cahn contributions from experiment (we believe for the first time) and compare them with their calculated values, with interesting implications.
Moisture Risk in Unvented Attics Due to Air Leakage Paths
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prahl, D.; Shaffer, M.
2014-11-01
IBACOS completed an initial analysis of moisture damage potential in an unvented attic insulated with closed-cell spray polyurethane foam. To complete this analysis, the research team collected field data, used computational fluid dynamics to quantify the airflow rates through individual airflow (crack) paths, simulated hourly flow rates through the leakage paths with CONTAM software, correlated the CONTAM flow rates with indoor humidity ratios from Building Energy Optimization software, and used Wärme und Feuchte instationär Pro two-dimensional modeling to determine the moisture content of the building materials surrounding the cracks. Given the number of simplifying assumptions and numerical models associated with this analysis, the results indicate that localized damage due to high moisture content of the roof sheathing is possible under very low airflow rates. Reducing the number of assumptions and approximations through field studies and laboratory experiments would be valuable to understand the real-world moisture damage potential in unvented attics.
Calculation of wall effects of flow on a perforated wall with a code of surface singularities
NASA Astrophysics Data System (ADS)
Piat, J. F.
1994-07-01
Simplifying assumptions are inherent in the analytic method previously used for the determination of wall interferences on a model in a wind tunnel. To eliminate these assumptions, a new code based on the vortex lattice method was developed. It is suitable for processing any shape of test section with limited areas of porous wall, whose characteristics can be nonlinear. Calculations of wall effects in the S3MA wind tunnel, whose test section is rectangular (0.78 m x 0.56 m) and fitted with two or four perforated walls, have been performed. Wall porosity factors have been adjusted to obtain the best fit between measured and computed pressure distributions on the test section walls. The code was checked by measuring nearly equal drag coefficients for a model tested in the S3MA wind tunnel (after wall corrections) and in the S2MA wind tunnel, whose test section is seven times larger (negligible wall corrections).
Two time scale output feedback regulation for ill-conditioned systems
NASA Technical Reports Server (NTRS)
Calise, A. J.; Moerder, D. D.
1986-01-01
Issues pertaining to the well-posedness of a two time scale approach to the output feedback regulator design problem are examined. An approximate quadratic performance index which reflects a two time scale decomposition of the system dynamics is developed. It is shown that, under mild assumptions, minimization of this cost leads to feedback gains providing a second-order approximation of optimal full system performance. A simplified approach to two time scale feedback design is also developed, in which gains are separately calculated to stabilize the slow and fast subsystem models. By exploiting the notion of combined control and observation spillover suppression, conditions are derived assuring that these gains will stabilize the full-order system. A sequential numerical algorithm is described which obtains output feedback gains minimizing a broad class of performance indices, including the standard LQ case. It is shown that the algorithm converges to a local minimum under nonrestrictive assumptions. This procedure is adapted to and demonstrated for the two time scale design formulations.
The Doctor Is In! Diagnostic Analysis.
Jupiter, Daniel C
To make meaningful inferences based on our regression models, we must ensure that we have met the necessary assumptions of these tests. In this commentary, we review these assumptions and those for the t-test and analysis of variance, and introduce a variety of methods, formal and informal, numeric and visual, for assessing conformity with the assumptions. Copyright © 2018 The American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
Observation of radiation damage induced by single-ion hits at the heavy ion microbeam system
NASA Astrophysics Data System (ADS)
Kamiya, Tomihiro; Sakai, Takuro; Hirao, Toshio; Oikawa, Masakazu
2001-07-01
A single-ion hit system combined with the JAERI heavy ion microbeam system can be applied to observe individual phenomena induced by interactions between high-energy ions and a semiconductor device, using a technique that measures the pulse height of transient current (TC) signals. The reduction of the TC pulse height for a Si PIN photodiode was measured under irradiation of 15 MeV Ni ions onto various micron-sized areas in the diode. The data, which contain the damage effects of these irradiations, were analyzed by least-squares fitting using a Weibull distribution function. Changes of the scale and shape parameters as functions of the width of the irradiated areas led us to the assumption that charge collection in a diode has a micron-level lateral extent, larger than the 1 μm spatial resolution of the microbeam. Numerical simulations of these measurements were made with a simplified two-dimensional model based on this assumption using a Monte Carlo method. Calculated data reproducing the pulse-height reductions by single-ion irradiations were analyzed using the same function as that for the measurements. The result of this analysis, which shows the same tendency in the change of parameters as the measurements, seems to support our assumption.
Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol. 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.
1983-05-01
As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM), is a micro-computer based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM including methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input), in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The User's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.
A mathematics for medicine: The Network Effect
West, Bruce J.
2014-01-01
The theory of medicine and its complement systems biology are intended to explain the workings of the large number of mutually interdependent complex physiologic networks in the human body and to apply that understanding to maintaining the functions for which nature designed them. Therefore, when what had originally been made as a simplifying assumption or a working hypothesis becomes foundational to understanding the operation of physiologic networks it is in the best interests of science to replace or at least update that assumption. The replacement process requires, among other things, an evaluation of how the new hypothesis affects modern day understanding of medical science. This paper identifies linear dynamics and Normal statistics as being such arcane assumptions and explores some implications of their retirement. Specifically we explore replacing Normal with fractal statistics and examine how the latter are related to non-linear dynamics and chaos theory. The observed ubiquity of inverse power laws in physiology entails the need for a new calculus, one that describes the dynamics of fractional phenomena and captures the fractal properties of the statistics of physiological time series. We identify these properties as a necessary consequence of the complexity resulting from the network dynamics and refer to them collectively as The Network Effect. PMID:25538622
Search algorithm complexity modeling with application to image alignment and matching
NASA Astrophysics Data System (ADS)
DelMarco, Stephen
2014-05-01
Search algorithm complexity modeling, in the form of penetration rate estimation, provides a useful way to estimate search efficiency in application domains which involve searching over a hypothesis space of reference templates or models, as in model-based object recognition, automatic target recognition, and biometric recognition. The penetration rate quantifies the expected portion of the database that must be searched, and is useful for estimating search algorithm computational requirements. In this paper we perform mathematical modeling to derive general equations for penetration rate estimates that are applicable to a wide range of recognition problems. We extend previous penetration rate analyses to use more general probabilistic modeling assumptions. In particular we provide penetration rate equations within the framework of a model-based image alignment application domain in which a prioritized hierarchical grid search is used to rank subspace bins based on matching probability. We derive general equations, and provide special cases based on simplifying assumptions. We show how previously-derived penetration rate equations are special cases of the general formulation. We apply the analysis to model-based logo image alignment in which a hierarchical grid search is used over a geometric misalignment transform hypothesis space. We present numerical results validating the modeling assumptions and derived formulation.
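In its simplest form, a penetration rate is an expected search depth. The sketch below computes E[k/N] for a ranked search that stops at the correct bin, given a probability distribution over which ranked bin contains the match; this generic formulation is ours, not the paper's specific hierarchical-grid derivation.

```python
# Hedged, generic formulation (ours, not DelMarco's hierarchical-grid algebra):
# bins are searched in ranked order and the search stops at the correct bin,
# so the expected penetration rate is E[k/N] under the probability p_k that
# the correct hypothesis sits in ranked bin k.
import numpy as np

def expected_penetration_rate(p_correct_in_bin):
    p = np.asarray(p_correct_in_bin, dtype=float)
    p = p / p.sum()                      # normalize to a distribution
    n = len(p)
    ranks = np.arange(1, n + 1)
    return float(np.sum(p * ranks / n))

# A good prioritization concentrates mass in early bins; a blind search is
# uniform and yields ~0.5:
print(expected_penetration_rate(np.geomspace(1.0, 1e-3, 100)))   # small
print(expected_penetration_rate(np.ones(100)))                   # ~0.505
```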
Schmidt, Joshua H; Wilson, Tammy L; Thompson, William L; Reynolds, Joel H
2017-07-01
Obtaining useful estimates of wildlife abundance or density requires thoughtful attention to potential sources of bias and precision, and it is widely understood that addressing incomplete detection is critical to appropriate inference. When the underlying assumptions of sampling approaches are violated, both increased bias and reduced precision of the population estimator may result. Bear (Ursus spp.) populations can be difficult to sample and are often monitored using mark-recapture distance sampling (MRDS) methods, although obtaining adequate sample sizes can be cost prohibitive. With the goal of improving inference, we examined the underlying methodological assumptions and estimator efficiency of three datasets collected under an MRDS protocol designed specifically for bears. We analyzed these data using MRDS, conventional distance sampling (CDS), and open-distance sampling approaches to evaluate the apparent bias-precision tradeoff relative to the assumptions inherent under each approach. We also evaluated the incorporation of informative priors on detection parameters within a Bayesian context. We found that the CDS estimator had low apparent bias and was more efficient than the more complex MRDS estimator. When combined with informative priors on the detection process, precision was increased by >50% compared to the MRDS approach with little apparent bias. In addition, open-distance sampling models revealed a serious violation of the assumption that all bears were available to be sampled. Inference is directly related to the underlying assumptions of the survey design and the analytical tools employed. We show that for aerial surveys of bears, avoidance of unnecessary model complexity, use of prior information, and the application of open population models can be used to greatly improve estimator performance and simplify field protocols. Although we focused on distance sampling-based aerial surveys for bears, the general concepts we addressed apply to a variety of wildlife survey contexts.
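For readers unfamiliar with CDS, the density estimator has a compact form once a detection function is chosen. The sketch below assumes a half-normal detection function with known scale; real analyses estimate the scale by maximum likelihood, and the numbers here are invented for illustration.

```python
# Hedged sketch of the CDS line-transect density estimator with a half-normal
# detection function g(x) = exp(-x^2 / (2 sigma^2)). Real analyses fit sigma
# by maximum likelihood; sigma, w, and the counts below are invented.
import math

def cds_density(n_detections, sigma, w, line_length):
    """D-hat = n / (2 * mu * L), where mu is the effective strip half-width
    (analytic integral of the half-normal g out to truncation distance w)."""
    mu = sigma * math.sqrt(math.pi / 2.0) * math.erf(w / (sigma * math.sqrt(2.0)))
    return n_detections / (2.0 * mu * line_length)

# 40 detections, sigma = 120 m, truncation 400 m, 50 km of transect:
d_hat = cds_density(40, sigma=120.0, w=400.0, line_length=50_000.0)
print(d_hat * 1.0e6, "animals per km^2")   # convert from per m^2
```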
Effects of osmotic pressure in the extracellular matrix on tissue deformation.
Lu, Y; Parker, K H; Wang, W
2006-06-15
In soft tissues, large molecules such as proteoglycans trapped in the extracellular matrix (ECM) generate high levels of osmotic pressure to counterbalance external pressures. The semi-permeable matrix and the fixed negative charges on these molecules serve to promote the swelling of tissues when there is an imbalance of molecular concentrations. Structural molecules, such as collagen fibres, form a network of stretch-resistant matrix, which prevents tissue from over-swelling and maintains tissue integrity. However, collagen makes little contribution to load bearing; the osmotic pressure in the ECM is the main contributor balancing external pressures. Although there have been a number of studies on tissue deformation, there is no rigorous analysis focusing on the contribution of the osmotic pressure in the ECM to the viscoelastic behaviour of soft tissues. Furthermore, most previous work was carried out under the assumption of infinitesimal deformation, whereas tissue deformation is finite under physiological conditions. In the current study, a simplified mathematical model is proposed. Analytic solutions for the solute distribution in the ECM and the free-moving boundary were derived by solving integro-differential equations under constant and dynamic loading conditions. Osmotic pressure in the ECM is found to contribute significantly to the viscoelastic characteristics of soft tissues during their deformation.
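The osmotic driving force in such models is commonly idealized by a van't Hoff relation, so an imbalance of molecular concentrations across the semi-permeable matrix translates directly into a swelling pressure (a simplification relative to the paper's integro-differential formulation):

```latex
% Hedged idealization: van't Hoff relation for the ECM osmotic pressure;
% the paper's integro-differential model is more detailed than this.
\[
\pi = R\,T \sum_i c_i ,
\qquad
\Delta\pi = R\,T\,\bigl(c_{\mathrm{ECM}} - c_{\mathrm{bath}}\bigr)
\]
```

Here the c_i are mobile solute concentrations; a positive Δπ drives the swelling that the collagen network must resist.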
Volume sharing of reservoir water
NASA Astrophysics Data System (ADS)
Dudley, Norman J.
1988-05-01
Previous models optimize short-, intermediate-, and long-run irrigation decision making in a simplified river valley system characterized by highly variable water supplies and demands for a single decision maker controlling both reservoir releases and farm water use. A major problem in relaxing the assumption of one decision maker is communicating the stochastic nature of supplies and demands between reservoir and farm managers. In this paper, an optimizing model is used to develop release rules for reservoir management when all users share equally in releases, and computer simulation is used to generate an historical time sequence of announced releases. These announced releases become a state variable in a farm management model which optimizes farm area-to-irrigate decisions through time. Such modeling envisages the use of growing area climatic data by the reservoir authority to gauge water demand and the transfer of water supply data from reservoir to farm managers via computer data files. Alternative model forms, including allocating water on a priority basis, are discussed briefly. Results show lower mean aggregate farm income and lower variance of aggregate farm income than in the single decision-maker case. This short-run economic efficiency loss coupled with likely long-run economic efficiency losses due to the attenuated nature of property rights indicates the need for quite different ways of integrating reservoir and farm management.
Wall Modeled Large Eddy Simulation of Airfoil Trailing Edge Noise
NASA Astrophysics Data System (ADS)
Kocheemoolayil, Joseph; Lele, Sanjiva
2014-11-01
Large eddy simulation (LES) of airfoil trailing edge noise has largely been restricted to low Reynolds numbers due to prohibitive computational cost. Wall modeled LES (WMLES) is a computationally cheaper alternative that makes full-scale Reynolds numbers relevant to large wind turbines accessible. A systematic investigation of trailing edge noise prediction using WMLES is conducted. Detailed comparisons are made with experimental data. The stress boundary condition from a wall model does not constrain the fluctuating velocity to vanish at the wall. This limitation has profound implications for trailing edge noise prediction. The simulation over-predicts the intensity of fluctuating wall pressure and far-field noise. An improved wall model formulation that minimizes the over-prediction of fluctuating wall pressure is proposed and carefully validated. The flow configurations chosen for the study are from the workshop on benchmark problems for airframe noise computations. The large eddy simulation database is used to examine the adequacy of scaling laws that quantify the dependence of trailing edge noise on Mach number, Reynolds number and angle of attack. Simplifying assumptions invoked in engineering approaches towards predicting trailing edge noise are critically evaluated. We gratefully acknowledge financial support from GE Global Research and thank Cascade Technologies Inc. for providing access to their massively-parallel large eddy simulation framework.
Predicted Static Aeroelastic Effects on Wings with Supersonic Leading Edges and Streamwise Tips
NASA Technical Reports Server (NTRS)
Brown, Stuart C.
1959-01-01
A method is presented for calculation of static aeroelastic effects on wings with supersonic leading edges and streamwise tips. Both chord-wise and spanwise deflections are taken into account. Aerodynamic and structural forces are introduced in influence coefficient form; the former are developed from linearized supersonic wing theory and the latter are assumed to be known from load-deflection tests or theory. The predicted effects of flexibility on lateral-control effectiveness, damping in roll, and lift-curve slope are shown for a low-aspect-ratio wing at Mach numbers of 1.25 and 2.60. The control effectiveness is shown for a trailing-edge aileron, a tip aileron, and a slot-deflector spoiler located along the 0.70 chord line. The calculations indicate that the tip aileron is particularly attractive from an aeroelastic standpoint, because the changes in effectiveness with dynamic pressure are small compared to the changes in effectiveness of the trailing-edge aileron and slot-deflector spoiler. The effects of making several simplifying assumptions in the example calculations are shown. The use of a modified strip theory to determine the aerodynamic influence coefficients gave adequate results only for the high Mach number case. Elimination of chordwise bending in the structural influence coefficients exaggerated the aeroelastic effects on rolling-moment and lift coefficients for both Mach numbers.
Yazdanbakhsh, Ardavan
2018-04-27
Several pioneering life cycle assessment (LCA) studies have been conducted in the past to assess the environmental impact of specific methods for managing mineral construction and demolition waste (MCDW), such as recycling the waste for use in concrete. Those studies focus on comparing the use of recycled MCDW and that of virgin components to produce materials or systems that serve specified functions. Often, the approaches adopted by the studies do not account for the potential environmental consequence of avoiding the existing or alternative waste management practices. The present work focuses on how product systems need to be defined in recycling LCA studies and what processes need to be within the system boundaries. A bi-level LCA framework is presented for modelling alternative waste management approaches in which the impacts are measured and compared at two scales of strategy and decision-making. Different functional units are defined for each level, all of which correspond to the same flow of MCDW in a cascade of product systems. For the sole purpose of demonstrating how the framework is implemented, an illustrative example is presented, based on real data and a number of simplifying assumptions, which compares the impacts of a number of potential MCDW management strategies in New York City. Copyright © 2018 Elsevier Ltd. All rights reserved.
A monolithic mass tracking formulation for bubbles in incompressible flow
NASA Astrophysics Data System (ADS)
Aanjaneya, Mridul; Patkar, Saket; Fedkiw, Ronald
2013-08-01
We devise a novel method for treating bubbles in incompressible flow that relies on the conservative advection of bubble mass and an associated equation of state in order to determine pressure boundary conditions inside each bubble. We show that executing this algorithm in a traditional manner leads to stability issues similar to those seen for partitioned methods for solid-fluid coupling. Therefore, we reformulate the problem monolithically. This is accomplished by first proposing a new fully monolithic approach to coupling incompressible flow to fully nonlinear compressible flow including the effects of shocks and rarefactions, and then subsequently making a number of simplifying assumptions on the air flow removing not only the nonlinearities but also the spatial variations of both the density and the pressure. The resulting algorithm is quite robust, has been shown to converge to known solutions for test problems, and has been shown to be quite effective on more realistic problems including those with multiple bubbles, merging and pinching, etc. Notably, this approach departs from a standard two-phase incompressible flow model where the air flow preserves its volume despite potentially large forces and pressure differentials in the surrounding incompressible fluid that should change its volume. Our bubbles readily change volume according to an isothermal equation of state.
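The pressure boundary condition inside each bubble follows from conservative advection of bubble mass plus an isothermal equation of state; a minimal sketch of that state relation (variable names ours, not from the authors' code) is:

```python
# Hedged sketch (variable names ours): the isothermal equation of state that
# turns conserved bubble mass and current bubble volume into the pressure
# boundary condition inside the bubble.
def bubble_pressure(p_ref, rho_ref, mass, volume):
    """Isothermal ideal gas: p = p_ref * (rho / rho_ref) with rho = m / V."""
    return p_ref * (mass / volume) / rho_ref

p0, rho0 = 101325.0, 1.2        # Pa, kg/m^3 (illustrative reference state)
m = rho0 * 1.0e-6               # mass of a 1 cm^3 bubble at the reference state
print(bubble_pressure(p0, rho0, m, volume=0.5e-6))   # halved volume -> ~2*p0
```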
Non-formal learning and tacit knowledge in professional work.
Eraut, M
2000-03-01
This paper explores the conceptual and methodological problems arising from several empirical investigations of professional education and learning in the workplace. 1. To clarify the multiple meanings accorded to terms such as 'non-formal learning', 'implicit learning' and 'tacit knowledge', their theoretical assumptions and the range of phenomena to which they refer. 2. To discuss their implications for professional practice. A largely theoretical analysis of issues and phenomena arising from empirical investigations. The author's typology of non-formal learning distinguishes between implicit learning, reactive on-the-spot learning and deliberative learning. The significance of the last is commonly overemphasized. The problematic nature of tacit knowledge is discussed with respect to both detecting it and representing it. Three types of tacit knowledge are discussed: tacit understanding of people and situations, routinized actions and the tacit rules that underpin intuitive decision-making. They come together when professional performance involves sequences of routinized action punctuated by rapid intuitive decisions based on tacit understanding of the situation. Four types of process are involved--reading the situation, making decisions, overt activity and metacognition--and three modes of cognition--intuitive, analytic and deliberative. The balance between these modes depends on time, experience and complexity. Where rapid action dominates, periods of deliberation are needed to maintain critical control. Finally the role of both formal and informal social knowledge is discussed; and it is argued that situated learning often leads not to local conformity but to greater individual variation as people's careers take them through a series of different contexts. This abstract necessarily simplifies a more complex analysis in the paper itself.
Publish unexpected results that conflict with assumptions
USDA-ARS?s Scientific Manuscript database
Some widely held scientific assumptions have been discredited, whereas others are just inappropriate for many applications. Sometimes, a widely-held analysis procedure takes on a life of its own, forgetting the original purpose of the analysis. The peer-reviewed system makes it difficult to get a pa...
Jansma, J Martijn; de Zwart, Jacco A; van Gelderen, Peter; Duyn, Jeff H; Drevets, Wayne C; Furey, Maura L
2013-05-15
Technical developments in MRI have improved signal to noise, allowing the use of analysis methods such as finite impulse response (FIR) analysis of rapid event-related functional MRI (er-fMRI). FIR is one of the most informative analysis methods, as it determines the onset and full shape of the hemodynamic response function (HRF) without any a priori assumptions. FIR is, however, vulnerable to multicollinearity, which is directly related to the distribution of stimuli over time. Efficiency can be optimized by simplifying a design and restricting the distribution of stimuli to specific sequences, while more design flexibility necessarily reduces efficiency. However, the actual effect of efficiency on fMRI results has never been tested in vivo. Thus, it is currently difficult to make an informed choice between protocol flexibility and statistical efficiency. The main goal of this study was to assign concrete fMRI signal-to-noise values to the abstract scale of FIR statistical efficiency. Ten subjects repeated a perception task with five random and m-sequence-based protocols, with varying but, according to the literature, acceptable levels of multicollinearity. Results indicated substantial differences in signal standard deviation, with the level being a function of multicollinearity. Experiment protocols varied by up to 55.4% in standard deviation. The results confirm that the quality of fMRI in an FIR analysis can significantly and substantially vary with statistical efficiency. Our in vivo measurements can be used to aid in making an informed decision between freedom in protocol design and statistical efficiency. Published by Elsevier B.V.
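The statistical efficiency at issue can be computed directly from the FIR design matrix. The sketch below builds TR-locked FIR regressors for a set of onsets and evaluates the common efficiency measure 1/trace((XᵀX)⁻¹); the onset sequences are invented to contrast a jittered design with a regularly spaced (more collinear) one, and do not reproduce the study's five protocols.

```python
# Hedged sketch: FIR design matrix for TR-locked onsets and the usual design
# efficiency measure 1 / trace((X^T X)^{-1}). Onset sequences are invented.
import numpy as np

def fir_design_matrix(onsets, n_scans, n_bins):
    """One indicator column per post-stimulus time bin (FIR basis)."""
    x = np.zeros((n_scans, n_bins))
    for t in onsets:
        for k in range(n_bins):
            if t + k < n_scans:
                x[t + k, k] = 1.0
    return x

def efficiency(x):
    return 1.0 / np.trace(np.linalg.inv(x.T @ x))

rng = np.random.default_rng(0)
n_scans, n_bins = 400, 12
jittered = np.sort(rng.choice(np.arange(n_scans - n_bins), 50, replace=False))
regular = np.arange(0, 200, 4)                 # 50 onsets, every 4 scans
print(efficiency(fir_design_matrix(jittered, n_scans, n_bins)))  # higher
print(efficiency(fir_design_matrix(regular, n_scans, n_bins)))   # lower
```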
Of mental models, assumptions and heuristics: The case of acids and acid strength
NASA Astrophysics Data System (ADS)
McClary, Lakeisha Michelle
This study explored what cognitive resources (i.e., units of knowledge necessary to learn) first-semester organic chemistry students used to make decisions about acid strength and how those resources guided the prediction, explanation and justification of trends in acid strength. We were specifically interested in identifying and characterizing the mental models, assumptions and heuristics that students relied upon to make their decisions, in most cases under time constraints. The views about acids and acid strength were investigated for twenty undergraduate students. Data sources for this study included written responses and individual interviews. The data were analyzed using a qualitative methodology to answer five research questions. Data analysis regarding these research questions was based on existing theoretical frameworks: problem representation (Chi, Feltovich & Glaser, 1981), mental models (Johnson-Laird, 1983), intuitive assumptions (Talanquer, 2006), and heuristics (Evans, 2008). These frameworks were combined to develop the framework from which our data were analyzed. Results indicated that first-semester organic chemistry students' use of cognitive resources was complex and dependent on their understanding of the behavior of acids. Expressed mental models were generated using prior knowledge and assumptions about acids and acid strength; these models were then employed to make decisions. Explicit and implicit features of the compounds in each task mediated participants' attention, which triggered the use of a very limited number of heuristics, or shortcut reasoning strategies. Many students, however, were able to apply more effortful analytic reasoning, though correct trends were predicted infrequently. Most students continued to use their mental models, assumptions and heuristics to explain a given trend in acid strength and to justify their predicted trends, but the tasks influenced a few students to shift from one model to another. An emergent finding from this project was that the problem representation greatly influenced students' ability to make correct predictions in acid strength.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seppala, L G
2000-09-15
A glass-choice strategy, based on separately designing an achromatic lens before progressing to an apochromatic lens, simplified my approach to solving the International Optical Design Conference (IODC) 1998 lens design problem. The glasses that are needed to make the lens apochromatic are combined into triplet correctors with two ''buried'' surfaces. By applying this strategy, I reached successful solutions that used only six glasses--three glasses for the achromatic design and three additional glasses for the apochromatic design.
specification was achieved by simplifying and improving the basic Bendix dosimeter design, using plastics for component parts, minimizing direct labor, and making the instrument suitable for automated processing and assembly. (Author)
Elf, Johan
2016-04-27
A new, game-changing approach makes it possible to rigorously disprove models without making assumptions about the unknown parts of the biological system. Copyright © 2016 Elsevier Inc. All rights reserved.
Unique Results and Lessons Learned from the TSS Missions
NASA Technical Reports Server (NTRS)
Stone, Nobie H.
2016-01-01
In 1924, Irving Langmuir and H. M. Mott-Smith published a theoretical model for the complex plasma sheath phenomenon in which they identified some very special cases which greatly simplified the sheath and allowed a closed solution to the problem. The most widely used application is for an electrostatic, or "Langmuir," probe in a laboratory plasma. Although the Langmuir probe is physically simple (a biased wire), the theory describing its functional behavior and its current-voltage characteristic is extremely complex and, accordingly, a number of assumptions and approximations are used in the LMS model. These simplifications, correspondingly, place limits on the model's range of application. Adapting the LMS model to real-life conditions is the subject of numerous papers and dissertations. The Orbit-Motion Limited (OML) model that is widely used today is one of these adaptations and a convenient means of calculating sheath effects. The OML equation for electron current collection by a positively biased body is simply I ≈ A · j_e0 · (2/√π) · (eφ/kT_e)^(1/2), where A is the area of the body and φ is the electric potential of the body with respect to the plasma. Since the Langmuir probe is a simple biased wire immersed in plasma, it is particularly tempting to use the OML equation in calculating the characteristics of the long, highly biased wires of an Electric Sail in the solar wind plasma. However, in order to arrive at the OML equation, a number of additional simplifying assumptions and approximations (beyond those made by Langmuir and Mott-Smith) are necessary. The OML equation is a good approximation when all conditions are met, but it would appear that the Electric Sail problem lies outside of its limits of applicability.
Piatti, Filippo; Palumbo, Maria Chiara; Consolo, Filippo; Pluchinotta, Francesca; Greiser, Andreas; Sturla, Francesco; Votta, Emiliano; Siryk, Sergii V; Vismara, Riccardo; Fiore, Gianfranco Beniamino; Lombardi, Massimo; Redaelli, Alberto
2018-02-08
The performance of blood-processing devices largely depends on the associated fluid dynamics, which hence represents a key aspect in their design and optimization. To this aim, two approaches are currently adopted: computational fluid-dynamics, which yields highly resolved three-dimensional data but relies on simplifying assumptions, and in vitro experiments, which typically involve the direct video-acquisition of the flow field and provide 2D data only. We propose a novel method that exploits space- and time-resolved magnetic resonance imaging (4D-flow) to quantify the complex 3D flow field in blood-processing devices and to overcome these limitations. We tested our method on a real device that integrates an oxygenator and a heat exchanger. A dedicated mock loop was implemented, and novel 4D-flow sequences with sub-millimetric spatial resolution and region-dependent velocity encodings were defined. Automated in house software was developed to quantify the complex 3D flow field within the different regions of the device: region-dependent flow rates, pressure drops, paths of the working fluid and wall shear stresses were computed. Our analysis highlighted the effects of fine geometrical features of the device on the local fluid-dynamics, which would be unlikely observed by current in vitro approaches. Also, the effects of non-idealities on the flow field distribution were captured, thanks to the absence of the simplifying assumptions that typically characterize numerical models. To the best of our knowledge, our approach is the first of its kind and could be extended to the analysis of a broad range of clinically relevant devices. Copyright © 2017 Elsevier Ltd. All rights reserved.
Study on low intensity aeration oxygenation model and optimization for shallow water
NASA Astrophysics Data System (ADS)
Chen, Xiao; Ding, Zhibin; Ding, Jian; Wang, Yi
2018-02-01
Aeration/oxygenation is an effective measure to improve the self-purification capacity in shallow water treatment, but high energy consumption, high noise, and expensive management have restrained the development and application of this process. Based on two-film theory, a theoretical model consisting of three-dimensional partial differential equations for aeration in shallow water is established. In order to simplify the equations, basic assumptions of gas-liquid mass transfer in the vertical direction and concentration diffusion in the horizontal direction are proposed based on engineering practice and are tested against gas holdup results obtained by simulating the gas-liquid two-phase flow in the aeration tank under low-intensity conditions. Based on these assumptions and the theory of shallow permeability, the three-dimensional partial differential equations are simplified and a calculation model for low-intensity aeration oxygenation is obtained. The model is verified by comparison with aeration experiments. The conclusions are as follows: (1) the calculation model of gas-liquid mass transfer in the vertical direction and concentration diffusion in the horizontal direction reflects the aeration process well; (2) under low-intensity conditions, long-term aeration and oxygenation are theoretically feasible for enhancing the self-purification capacity of water bodies; (3) for the same total aeration intensity, multipoint distributed aeration has a marked effect on the diffusion of oxygen concentration in the horizontal direction; (4) in shallow water treatment, miniaturized, arrayed, low-intensity, and mobile aeration equipment can reduce equipment volume and overcome the problems of high energy consumption, large size, and noise.
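The vertical gas-liquid mass-transfer assumption is the classical two-film rate law, dC/dt = k_L a (C_s − C), which integrates in closed form; the sketch below uses illustrative parameter values, not the paper's.

```python
# Hedged sketch of the two-film rate law with its closed-form solution;
# parameter values are illustrative, not taken from the paper.
import math

def dissolved_oxygen(c0, c_sat, kla, t):
    """Solution of dC/dt = kLa * (C_sat - C) with C(0) = c0."""
    return c_sat - (c_sat - c0) * math.exp(-kla * t)

c_sat, c0, kla = 9.1, 2.0, 0.15    # mg/L, mg/L, 1/h (illustrative)
for t in (0, 6, 12, 24, 48):       # hours of low-intensity aeration
    print(t, round(dissolved_oxygen(c0, c_sat, kla, t), 2))
```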
Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A
2016-09-06
Analytical and numerical methods have been used to extract essential engineering parameters such as elastic modulus, Poisson's ratio, permeability and diffusion coefficient from experimental data in various types of biological tissues. The major limitation associated with analytical techniques is that they are often only applicable to problems with simplifying assumptions. Numerical multi-physics methods, on the other hand, enable minimizing the simplifying assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs) which enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficient of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variation during the training process. Determining the required level of stochastic variation is performed by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined together, forward-inverse ANNs enable computationally inexperienced users to obtain accurate and fast estimation of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined using direct scanning of the parameter space as the optimization approach. It has been shown that both approaches yield comparable results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Self-Consistent Hydrodynamical Models For Stellar Winds
NASA Astrophysics Data System (ADS)
Boulangier, Jels; Homan, Ward; van Marle, Allard Jan; Decin, Leen; de Koter, Alex
2016-07-01
The physical and chemical conditions in the atmospheres of pulsating AGB stars are not well understood. In order to properly model this region, which is packed with shocks arising from the pulsational behaviour of the star, we aim to understand the interplay between spatial and temporal changes in both the chemical composition and the hydro/thermodynamical behaviour inside these regions. Ideal models require the coupling of hydrodynamics, chemistry and radiative transfer, in three dimensions. As this is computationally not yet feasible, we aim to model this zone via a bottom-up approach. First, we build a correct 3D hydrodynamical set-up without any cooling or heating. Omitting cooling hampers the mass loss of the AGB star within the reasonable confines of a realistic parameter space. Introducing cooling will decrease the temperature gradients in the atmosphere, counteracting the mass loss even more. However, cooling also ensures the existence of regions where the temperature is low enough for the formation of dust to take place. This dust will absorb the momentum of the impacting photons from the AGB photosphere, accelerate outward and collide with the obstructing gas, dragging it along. Moreover, since chemistry, nucleation and dust formation depend critically on the temperature structure of the circumstellar environment, it is of utmost importance to include all relevant heating/cooling sources. Efforts to include cooling have been undertaken in recent decades, making use of different radiative cooling mechanisms for several chemical species, with some simplified radiative transfer. However, the chemical composition of these 1D atmosphere models is often fixed, implying the very strong assumption of chemical equilibrium, which does not hold at all for a pulsating AGB atmosphere. We wish to model these atmospheres making as few assumptions as possible on equilibrium conditions. Therefore, as a first step, we introduce H2 dissociative cooling to the hydrodynamical model, arguing that this is the dominant cooling factor. Using dissociative H2 cooling allows the ratio of the H-H2 gas mixture to vary, making the cooling efficiency time and space dependent. This will affect local cooling, in turn affecting the hydrodynamics and chemical composition, thereby introducing a feedback loop. Secondly, the most significant radiative heating/cooling sources will be introduced to obtain the most realistic temperature structure. Next, dust acceleration will be introduced in the regions cool enough for dust condensation to exist. This lays the basis of our hydrodynamical chemistry model for stellar winds of evolved stars.
He, Xin; Frey, Eric C
2006-08-01
Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
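To make the maximum-expected-utility rule concrete, here is a minimal sketch; the utility values are illustrative assumptions chosen to satisfy the equal error utility constraint, not values from the paper.

```python
# Minimal sketch of the MEU decision rule for a three-class task.
import numpy as np

# U[d, h] = utility of deciding class d when hypothesis h is true.
# Under the equal-error-utility assumption, the two wrong decisions under
# a given hypothesis share one utility value (equal off-diagonals per column).
U = np.array([[1.0, 0.2, 0.1],
              [0.2, 1.0, 0.1],
              [0.2, 0.2, 1.0]])

def decide(posteriors):
    """Pick the class d maximizing sum_h U[d, h] * P(h | data)."""
    expected_utility = U @ posteriors
    return int(np.argmax(expected_utility))

print(decide(np.array([0.5, 0.3, 0.2])))  # -> 0
```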
Cornejo-Donoso, Jorge; Einarsson, Baldvin; Birnir, Bjorn; Gaines, Steven D
2017-01-01
Marine Protected Areas (MPA) are important management tools shown to protect marine organisms, restore biomass, and increase fisheries yields. While MPAs have been successful in meeting these goals for many relatively sedentary species, highly mobile organisms may get few benefits from this type of spatial protection due to their frequent movement outside the protected area. The use of a large MPA can compensate for extensive movement, but testing this empirically is challenging, as it requires both large areas and sufficient time series to draw conclusions. To overcome this limitation, MPA models have been used to identify designs and predict potential outcomes, but these simulations are highly sensitive to the assumptions describing the organism's movements. Due to recent improvements in computational simulations, it is now possible to include very complex movement assumptions in MPA models (e.g., individual-based models). These have renewed interest in MPA simulations, which implicitly assume that increasing the detail in fish movement overcomes the sensitivity to the movement assumptions. Nevertheless, a systematic comparison of the designs and outcomes obtained under different movement assumptions has not been done. In this paper, we use an individual-based model, interconnected to population and fishing fleet models, to explore the value of increasing the detail of the movement assumptions using four scenarios of increasing behavioral complexity: a) random, diffusive movement, b) aggregations, c) aggregations that respond to environmental forcing (e.g. sea surface temperature), and d) aggregations that respond to environmental forcing and are transported by currents. We then compare these models to determine how the assumptions affect MPA design, and therefore the effective protection of the stocks. Our results show that the optimal MPA size to maximize fisheries benefits increases as movement complexity increases, from ~10% for the diffusive assumption to ~30% when full environmental forcing is used. We also found that in cases of limited understanding of the movement dynamics of a species, simplified assumptions can be used to provide a guide for the minimum MPA size needed to effectively protect the stock. However, using oversimplified assumptions can produce suboptimal designs and lead to a density underestimation of ca. 30%; therefore, the main value of detailed movement dynamics is to provide more reliable MPA design and predicted outcomes. Large MPAs can be effective in recovering overfished stocks, protecting pelagic fish and providing significant increases in fisheries yields. Our models provide a means to empirically test this spatial management tool, which theoretical evidence consistently suggests is an effective alternative for managing highly mobile pelagic stocks.
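The two simplest movement scenarios above (diffusive vs. aggregating) can be sketched in a few lines; the domain size, step sizes, and attraction strength below are illustrative assumptions, not the paper's calibrated values.

```python
# Toy movement sketch: fraction of time individuals spend inside an MPA
# under diffusive vs. aggregating movement assumptions.
import numpy as np

rng = np.random.default_rng(1)
L = 100.0            # square domain side
mpa_frac = 0.3       # MPA occupies x < mpa_frac * L
n, steps = 500, 2000
pos = rng.uniform(0, L, size=(n, 2))

def step(pos, aggregate=False, sigma=1.0, k=0.05):
    move = rng.normal(0, sigma, pos.shape)          # diffusive component
    if aggregate:                                   # drift toward centroid
        move += k * (pos.mean(axis=0) - pos)
    return np.clip(pos + move, 0, L)                # crude wall handling

for mode in (False, True):
    p, inside = pos.copy(), 0.0
    for _ in range(steps):
        p = step(p, aggregate=mode)
        inside += (p[:, 0] < mpa_frac * L).mean()
    print(f"aggregate={mode}: mean fraction inside MPA = {inside/steps:.2f}")
```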
DOE Office of Scientific and Technical Information (OSTI.GOV)
McVicker, J.P.; Conner, J.T.; Hasrouni, P.N.
1995-11-01
In-Core Instrumentation (ICI) assemblies located on a Reactor Pressure Vessel Head have a history of boric acid leakage. The acid tends to corrode the nuts and studs which fasten the flanges of the assembly, thereby compromising the assembly's structural integrity. This paper provides a simplified practical approach for determining the likelihood of an undetected, progressing deterioration of the assembly studs, which would lead to a catastrophic loss of reactor coolant. The structural behavior of the In-Core Instrumentation flanged assembly is modeled using an elastic composite section assumption, with the studs transmitting tension and the pressure-sealing gasket experiencing compression. Using this technique, one can calculate the relative flange deflection and the consequent coolant loss flow rate, as well as the stress in any stud. A solved real-life example develops the expected failure sequence and discusses the exigency of leak detection for safe shutdown. In the particular case of Calvert Cliffs Nuclear Power Plant (CCNPP), it is concluded that leak detection occurs before catastrophic failure of the ICI flange assembly.
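To make the composite-section idealization concrete, here is a minimal sketch, assuming a rigid flange on parallel springs with invented stiffness and load values; this is a sketch of the general idea, not the paper's actual calculation.

```python
# Hedged sketch: flange joint idealized as a rigid plate on parallel
# springs (studs carry tension, the gasket carries compression); studs
# are removed one by one to mimic deterioration. All values are invented.
import numpy as np

n_studs = 8
k_stud, k_gasket = 2.0e8, 1.5e9     # N/m per stud; gasket total stiffness
preload = 4.0e4                      # per-stud bolt preload (N)
pressure_force = 1.5e5               # coolant pressure load on flange (N)

for failed in range(n_studs + 1):
    n = n_studs - failed
    k_total = n * k_stud + k_gasket
    # Relative flange deflection under the pressure load (linear elastic).
    delta = pressure_force / k_total
    stud_force = preload + k_stud * delta          # load on remaining studs
    gasket_relief = k_gasket * delta               # gasket compression lost
    leak = gasket_relief > n * preload             # crude decompression test
    print(f"{failed} failed: delta={delta*1e6:.1f} um, "
          f"stud load={stud_force/1e3:.1f} kN, leak={leak}")
```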
The limitations of simple gene set enrichment analysis assuming gene independence.
Tamayo, Pablo; Steinhardt, George; Liberzon, Arthur; Mesirov, Jill P
2016-02-01
Since its first publication in 2003, the Gene Set Enrichment Analysis method, based on the Kolmogorov-Smirnov statistic, has been heavily used, modified, and also questioned. Recently a simplified approach using a one-sample t-test score to assess enrichment and ignoring gene-gene correlations was proposed by Irizarry et al. 2009 as a serious contender. The argument criticizes Gene Set Enrichment Analysis's nonparametric nature and its use of an empirical null distribution as unnecessary and hard to compute. We refute these claims by careful consideration of the assumptions of the simplified method and its results, including a comparison with Gene Set Enrichment Analysis on a large benchmark set of 50 datasets. Our results provide strong empirical evidence that gene-gene correlations cannot be ignored, due to the significant variance inflation they produce in the enrichment scores, and should be taken into account when estimating gene set enrichment significance. In addition, we discuss the challenges that the complex correlation structure and multi-modality of gene sets pose more generally for gene set enrichment methods. © The Author(s) 2012.
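The variance-inflation point can be illustrated with a short simulation; the set size, correlation level, and unit gene-level variance below are illustrative assumptions.

```python
# Why gene-gene correlation matters: for n genes with average pairwise
# correlation rho, Var(mean score) = (sigma^2/n) * (1 + (n-1)*rho), which
# is much larger than the sigma^2/n assumed under independence.
import numpy as np

rng = np.random.default_rng(0)
n, rho, reps = 50, 0.2, 20000
cov = np.full((n, n), rho) + (1 - rho) * np.eye(n)
scores = rng.multivariate_normal(np.zeros(n), cov, size=reps)

empirical = scores.mean(axis=1).var()
independent = 1.0 / n                      # null variance under independence
predicted = (1 + (n - 1) * rho) / n        # correlation-aware null variance
print(f"empirical {empirical:.3f} vs independence {independent:.3f} "
      f"vs predicted {predicted:.3f}")
```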
Shot-by-shot Spectrum Model for Rod-pinch, Pulsed Radiography Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, William Monford
A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. Resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. “Goodness of fit” is compared with output from LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays (“MCNPX”) model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. In conclusion, improvements to the model, specifically for application to other geometries, are discussed.
Steady flow model user's guide
NASA Astrophysics Data System (ADS)
Doughty, C.; Hellstrom, G.; Tsang, C. F.; Claesson, J.
1984-07-01
Sophisticated numerical models that solve the coupled mass and energy transport equations for nonisothermal fluid flow in a porous medium have been used to match analytical results and field data for aquifer thermal energy storage (ATES) systems. As a simpler alternative for the ATES problem, the Steady Flow Model (SFM), a simplified but fast numerical model, was developed. A steady, purely radial flow field is prescribed in the aquifer and incorporated into the heat transport equation, which is then solved numerically. While the radial flow assumption limits the range of ATES systems that can be studied using the SFM, it greatly simplifies use of the code. The preparation of input is quite simple compared to that for a sophisticated coupled mass and energy model, and the cost of running the SFM is far lower. The simple flow field allows use of a special calculational mesh that eliminates the numerical dispersion usually associated with the numerical solution of convection problems. The problem is defined, the algorithms used to solve it are outlined, and the input and output for the SFM are described.
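A minimal sketch of the SFM's central simplification, a prescribed steady radial flow advecting heat, might look as follows; the aquifer properties, injection conditions, and the explicit upwind scheme are all assumptions, not the code's actual algorithm.

```python
# Sketch: steady radial flow (velocity ~ 1/r from injection rate Q)
# advecting heat through an aquifer, plus conduction; explicit Euler.
import numpy as np

Q, b = 1e-3, 10.0            # injection rate (m^3/s), aquifer thickness (m)
phi, alpha = 0.2, 1e-6       # porosity, thermal diffusivity (m^2/s)
r = np.linspace(1.0, 100.0, 400)
dr = r[1] - r[0]
T = np.full(r.size, 10.0)    # ambient aquifer temperature (deg C)
T[0] = 60.0                  # injected hot water at the well

u = Q / (2 * np.pi * r * b * phi)    # prescribed steady radial pore velocity
dt = 0.4 * min(dr / u.max(), dr**2 / (2 * alpha))
for _ in range(5000):
    adv = -u[1:-1] * (T[1:-1] - T[:-2]) / dr                 # upwind advection
    dif = alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dr**2     # conduction
    T[1:-1] += dt * (adv + dif)
print(f"thermal front roughly at r = {r[np.argmin(T > 35.0)]:.1f} m")
```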
A simplified building airflow model for agent concentration prediction.
Jacques, David R; Smith, David A
2010-11-01
A simplified building airflow model is presented that can be used to predict the spread of a contaminant agent from a chemical or biological attack. If the dominant means of agent transport throughout the building is an air-handling system operating at steady-state, a linear time-invariant (LTI) model can be constructed to predict the concentration in any room of the building as a result of either an internal or external release. While the model does not capture weather-driven and other temperature-driven effects, it is suitable for concentration predictions under average daily conditions. The model is easily constructed using information that should be accessible to a building manager, supplemented with assumptions based on building codes and standard air-handling system design practices. The results of the model are compared with a popular multi-zone model for a simple building and are demonstrated for building examples containing one or more air-handling systems. The model can be used for rapid concentration prediction to support low-cost placement strategies for chemical and biological detection sensors.
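As an illustration of the LTI construction described above, here is a minimal sketch; the zone volumes, inter-zone flows, and release rate are invented, and the simple mass balance stands in for a real air-handling layout.

```python
# LTI zone model sketch: with a steady air-handling system, room
# concentrations follow dc/dt = A c + u, where A is built from inter-zone
# flow rates and zone volumes.
import numpy as np
from scipy.integrate import solve_ivp

V = np.array([50.0, 80.0, 60.0])        # zone volumes (m^3), assumed
# q[i, j] = airflow from zone j into zone i (m^3/s); supply/return air
# implicitly closes the balance in this toy layout.
q = np.array([[0.0, 0.1, 0.0],
              [0.1, 0.0, 0.2],
              [0.0, 0.2, 0.0]])
exhaust = np.array([0.1, 0.2, 0.2])     # flow leaving each zone to outside

A = (q - np.diag(q.sum(axis=0) + exhaust)) / V[:, None]
release = np.array([1.0, 0.0, 0.0])     # agent release in zone 0 (mg/s)

sol = solve_ivp(lambda t, c: A @ c + release / V, (0, 600), np.zeros(3))
print("concentrations at t = 600 s:", sol.y[:, -1])
```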
Shot-by-shot spectrum model for rod-pinch, pulsed radiography machines
NASA Astrophysics Data System (ADS)
Wood, Wm M.
2018-02-01
A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. Resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. "Goodness of fit" is compared with output from LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays ("MCNPX") model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. Improvements to the model, specifically for application to other geometries, are discussed.
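The paper's own spectral model is not reproduced here, but the flavor of a shot-by-shot calculation can be sketched with a Kramers-type thick-target approximation driven by V(t) and I(t); the waveforms and the tungsten target are placeholder assumptions.

```python
# Hedged sketch (not the paper's model): time-integrate a Kramers-type
# thick-target bremsstrahlung spectrum over measured V(t), I(t) waveforms.
import numpy as np

t = np.linspace(0, 60e-9, 600)                        # 60 ns pulse
V = 2.3e6 * np.exp(-((t - 30e-9) / 12e-9) ** 2)       # volts (placeholder)
I = 60e3 * np.exp(-((t - 32e-9) / 14e-9) ** 2)        # amps (placeholder)

E = np.linspace(50e3, 2.5e6, 500)                     # photon energies (eV)
Z = 74                                                # tungsten rod, assumed

spectrum = np.zeros_like(E)
for Vk, Ik in zip(V, I):
    if Vk <= E[0]:
        continue
    mask = E < Vk                     # photons up to the electron energy
    # Kramers: dN/dE ~ Z * I * (E_max - E) / E, accumulated per time step
    spectrum[mask] += Z * Ik * (Vk - E[mask]) / E[mask] * (t[1] - t[0])

print("mean photon energy (keV):",
      (E * spectrum).sum() / spectrum.sum() / 1e3)
```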
Zhao, Xiaoyan; Qin, Renjia
2015-04-01
This paper critically examines some problems in the treatment of the human ear's sound transmission principle in existing physiological textbooks and reference books, and puts forward the authors' view to supplement that literature. Applying the physics of levers and acoustic theory, we develop an equivalent simplified model of the manubrium mallei, which meets the requirements of the long arm of the lever, and an equivalent simplified model of the ossicular chain as a combination of levers. We disassemble the model into two simple levers and analyze and demonstrate each in full. Through calculation and comparison of the displacement amplitudes in the external auditory canal air and the internal ear lymph, we conclude that the key reason the sound displacement amplitude must be decreased, to suit the endurance limit of the basilar membrane, is that the density and sound speed in lymph are much higher than those in air.
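The closing comparison can be made concrete with a standard acoustics relation: for equal transmitted intensity, the particle displacement amplitude scales as the inverse square root of the characteristic impedance rho*c. A small sketch with textbook property values (assumptions, with lymph approximated by water):

```python
# Displacement amplitude xi = sqrt(2*I / (rho*c)) / omega for a plane wave
# of intensity I; the higher rho*c of lymph implies a far smaller xi.
import math

f = 1000.0                      # tone frequency (Hz), assumed
I = 1e-6                        # intensity (W/m^2), roughly 60 dB SPL
rho_c_air = 1.2 * 343.0         # characteristic impedance of air
rho_c_lymph = 1000.0 * 1500.0   # lymph approximated by water

def displacement(rho_c):
    return math.sqrt(2 * I / rho_c) / (2 * math.pi * f)

ratio = displacement(rho_c_air) / displacement(rho_c_lymph)
print(f"air/lymph displacement ratio ~ {ratio:.0f}")  # ~sqrt(impedance ratio)
```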
NASA Technical Reports Server (NTRS)
Gordon, Diana F.
1992-01-01
Selecting a good bias prior to concept learning can be difficult. Therefore, dynamic bias adjustment is becoming increasingly popular. Current dynamic bias adjustment systems, however, are limited in their ability to identify erroneous assumptions about the relationship between the bias and the target concept. Without proper diagnosis, it is difficult to identify and then remedy faulty assumptions. We have developed an approach that makes these assumptions explicit, actively tests them with queries to an oracle, and adjusts the bias based on the test results.
ERIC Educational Resources Information Center
Khader, Patrick H.; Pachur, Thorsten; Meier, Stefanie; Bien, Siegfried; Jost, Kerstin; Rosler, Frank
2011-01-01
Many of our daily decisions are memory based, that is, the attribute information about the decision alternatives has to be recalled. Behavioral studies suggest that for such decisions we often use simple strategies (heuristics) that rely on controlled and limited information search. It is assumed that these heuristics simplify decision-making by…
ERIC Educational Resources Information Center
Rosner, Burton S.; Kochanski, Greg
2009-01-01
Signal detection theory (SDT) makes the frequently challenged assumption that decision criteria have no variance. An extended model, the Law of Categorical Judgment, relaxes this assumption. The long-accepted equation for the law, however, is flawed: it can generate negative probabilities. The correct equation, the Law of Categorical Judgment…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-08
... DEPARTMENT OF JUSTICE [Docket No. OTJ 100] Solicitation of Comments on Request for United States Assumption of Concurrent Federal Criminal Jurisdiction; Hoopa Valley Tribe Correction In notice document 2012-09731 beginning on page 24517 the issue of Tuesday, April 24, 2012 make the following correction: On...
The Effect of Missing Data Treatment on Mantel-Haenszel DIF Detection
ERIC Educational Resources Information Center
Emenogu, Barnabas C.; Falenchuk, Olesya; Childs, Ruth A.
2010-01-01
Most implementations of the Mantel-Haenszel differential item functioning procedure delete records with missing responses or replace missing responses with scores of 0. These treatments of missing data make strong assumptions about the causes of the missing data. Such assumptions may be particularly problematic when groups differ in their patterns…
Causal Models with Unmeasured Variables: An Introduction to LISREL.
ERIC Educational Resources Information Center
Wolfle, Lee M.
Whenever one uses ordinary least squares regression, one is making an implicit assumption that all of the independent variables have been measured without error. Such an assumption is obviously unrealistic for most social data. One approach for estimating such regression models is to measure implied coefficients between latent variables for which…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-25
... actuarial and economic assumptions and methods by which Trustees might more accurately project health... (a)(2)). The Panel will discuss the long range (75 year) projection methods and assumptions in... making recommendations to the Medicare Trustees on how the Trustees might more accurately project health...
Social-Psychological Factors Influencing Recreation Demand: Evidence from Two Recreational Rivers
ERIC Educational Resources Information Center
Smith, Jordan W.; Moore, Roger L.
2013-01-01
Traditional methods of estimating demand for recreation areas involve making inferences about individuals' preferences. Frequently, the assumption is made that recreationists' cost of traveling to a site is a reliable measure of the value they place on that resource and the recreation opportunities it provides. This assumption may ignore other…
Temporal Aggregation and Testing For Timber Price Behavior
Jeffrey P. Prestemon; John M. Pye; Thomas P. Holmes
2004-01-01
Different harvest timing models make different assumptions about timber price behavior. Those seeking to optimize harvest timing are thus first faced with a decision regarding which assumption of price behavior is appropriate for their market, particularly regarding the presence of a unit root in the timber price time series. Unfortunately for landowners and investors...
Globalization, decision making and taboo in nursing.
Keighley, T
2012-06-01
This paper is a reflection on the representation of nurses and their practice at a global level. In considering the International Council of Nurses (ICN) conference in Malta (2011), it is clear that certain assumptions have been made about nurses and their practice which assume that globalization is under way for the whole of the profession and that the assumptions can be applied equally around the world. These assumptions appear in many ways to be implicit rather than explicit. The implicitness of the assumptions is examined against the particular decision-making processes adopted by the ICN. An attempt is then made to identify another base for the ongoing global work of the ICN. This involves the exploration of taboo (that which is forbidden because it is either holy or unclean) as a way of examining why nursing is not properly valued, despite years of international representation. The paper concludes with some thoughts on how such a new approach interfaces with the possibilities held out by new information technologies. © 2011 The Author. International Nursing Review © 2011 International Council of Nurses.
Ashby, Nathaniel J S; Glöckner, Andreas; Dickert, Stephan
2011-01-01
Daily we make decisions ranging from the mundane to the seemingly pivotal that shape our lives. Assuming rationality, all relevant information about one's options should be thoroughly examined in order to make the best choice. However, some findings suggest that under specific circumstances thinking too much has disadvantageous effects on decision quality and that it might be best to let the unconscious do the busy work. In three studies we test the capacity assumption and the appropriate weighting principle of Unconscious Thought Theory using a classic risky choice paradigm and including a "deliberation with information" condition. Although we replicate an advantage for unconscious thought (UT) over "deliberation without information," we find that "deliberation with information" equals or outperforms UT in risky choices. These results speak against the generality of the assumption that UT has a higher capacity for information integration and show that this capacity assumption does not hold in all domains. Furthermore, we show that "deliberate thought with information" leads to more differentiated knowledge compared to UT which speaks against the generality of the appropriate weighting assumption.
NASA Astrophysics Data System (ADS)
Müller-Hansen, Finn; Schlüter, Maja; Mäs, Michael; Donges, Jonathan F.; Kolb, Jakob J.; Thonicke, Kirsten; Heitzig, Jobst
2017-11-01
Today, humans have a critical impact on the Earth system and vice versa, which can generate complex feedback processes between social and ecological dynamics. Integrating human behavior into formal Earth system models (ESMs), however, requires crucial modeling assumptions about actors and their goals, behavioral options, and decision rules, as well as modeling decisions regarding human social interactions and the aggregation of individuals' behavior. Here, we review existing modeling approaches and techniques from various disciplines and schools of thought dealing with human behavior at different levels of decision making. We demonstrate modelers' often vast degrees of freedom but also seek to make modelers aware of the often crucial consequences of seemingly innocent modeling assumptions. After discussing which socioeconomic units are potentially important for ESMs, we compare models of individual decision making that correspond to alternative behavioral theories and that make diverse modeling assumptions about individuals' preferences, beliefs, decision rules, and foresight. We review approaches to model social interaction, covering game theoretic frameworks, models of social influence, and network models. Finally, we discuss approaches to studying how the behavior of individuals, groups, and organizations can aggregate to complex collective phenomena, discussing agent-based, statistical, and representative-agent modeling and economic macro-dynamics. We illustrate the main ingredients of modeling techniques with examples from land-use dynamics as one of the main drivers of environmental change bridging local to global scales.
Ambient mass density effects on the International Space Station (ISS) microgravity experiments
NASA Technical Reports Server (NTRS)
Smith, O. E.; Adelfang, S. I.; Smith, R. E.
1996-01-01
The Marshall engineering thermosphere model was specified by NASA to be used in the design, development and testing phases of the International Space Station (ISS). The mass density is the atmospheric parameter which most affects the ISS. Under simplifying assumptions, the critical ambient neutral density required to produce one micro-g on the ISS is estimated using an atmospheric drag acceleration equation. Examples are presented for the critical density versus altitude, and for the critical density that is exceeded at least once a month and once per orbit during periods of low and high solar activity. An analysis of the ISS orbital decay is presented.
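A back-of-envelope version of the critical-density estimate can be sketched from the standard drag-acceleration equation; the ISS mass, area, and drag coefficient below are rough assumptions, not the report's values.

```python
# Drag acceleration a = 0.5 * rho * v^2 * Cd * A / m, set to one micro-g
# and solved for the critical ambient density rho.
g0 = 9.80665          # m/s^2
v = 7660.0            # orbital speed at ~400 km altitude (m/s)
Cd = 2.2              # free-molecular drag coefficient (assumed)
A = 1000.0            # projected area (m^2, assumed)
m = 420000.0          # ISS mass (kg, assumed)

rho_crit = 2 * 1e-6 * g0 * m / (v**2 * Cd * A)
print(f"critical density ~ {rho_crit:.2e} kg/m^3")
```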
NASA Astrophysics Data System (ADS)
Akbar, Noreen Sher
2016-03-01
The peristaltic flow of an incompressible viscous fluid containing copper nanoparticles in an asymmetric channel is discussed with thermal and velocity slip effects. The peristaltic flow of copper nanoparticles with water as the base fluid has not been explored so far. The equations for the proposed fluid model are developed for the first time in the literature and simplified using long-wavelength and low-Reynolds-number assumptions. Exact solutions have been calculated for the velocity, the pressure gradient, the solid volume fraction of the nanoparticles and the temperature profile. The influence of various flow parameters on the flow and heat transfer characteristics is examined.
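The long-wavelength, low-Reynolds-number reduction used in this family of papers collapses the momentum balance to a lubrication-type equation; the following sketch derives the resulting velocity profile with a Navier slip condition (symbols and boundary conditions are illustrative, not the paper's exact formulation).

```python
# Lubrication reduction: mu * d2u/dy2 = dp/dx, with centerline symmetry
# and Navier slip u = -beta * du/dy at the wall y = h.
import sympy as sp

y, h, mu, dpdx, beta, C1, C2 = sp.symbols("y h mu dpdx beta C1 C2")
u = dpdx / (2 * mu) * y**2 + C1 * y + C2   # general solution of mu u'' = dp/dx

conds = [sp.diff(u, y).subs(y, 0),                          # symmetry at center
         u.subs(y, h) + beta * sp.diff(u, y).subs(y, h)]    # slip at the wall
consts = sp.solve(conds, [C1, C2])
# Closed form: dpdx*(y**2 - h**2 - 2*beta*h)/(2*mu), up to rearrangement.
print(sp.simplify(u.subs(consts)))
```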
Metachronal wave analysis for non-Newtonian fluid under thermophoresis and Brownian motion effects
NASA Astrophysics Data System (ADS)
Shaheen, A.; Nadeem, S.
This paper analyses a mathematical model of ciliary motion in an annulus. The effects of convective heat transfer and nanoparticles are taken into account. The governing equations of the Jeffrey six-constant fluid, along with heat and nanoparticle transport, are modelled and then simplified using long-wavelength and low-Reynolds-number assumptions. The reduced equations are solved with the help of the homotopy perturbation method. The obtained expressions for the velocity, temperature and nanoparticle concentration profiles are plotted, and the impact of various physical parameters is investigated for different peristaltic waves. Streamlines have also been plotted in the last part of the paper.
NASA Astrophysics Data System (ADS)
Akbar, Noreen Sher; Butt, Adil Wahid
2015-05-01
In the present paper, magnetic field effects on copper nanoparticles in blood flow through a composite stenosis in arteries with permeable walls are discussed. Blood flow with copper nanoparticles and water as the base fluid has not been explored yet. The equations for the Cu-water nanofluid are developed for the first time in the literature and simplified using long-wavelength and low-Reynolds-number assumptions. Exact solutions have been evaluated for the velocity, the pressure gradient, the solid volume fraction of the nanoparticles and the temperature profile. The effect of various flow parameters on the flow and heat transfer characteristics is analyzed.
The span as a fundamental factor in airplane design
NASA Technical Reports Server (NTRS)
Lachmann, G
1928-01-01
Previous theoretical investigations of steady curvilinear flight did not afford a suitable criterion of "maneuverability," which is very important for judging combat, sport and stunt-flying airplanes. The idea of rolling ability, i.e., of the speed of rotation of the airplane about its X axis in rectilinear flight at constant speed and for a constant, suddenly produced deflection of the ailerons, is introduced and tested under simplified assumptions for the air-force distribution over the span. This leads to the following conclusions: the effect of the moment of inertia about the X axis is negligibly small, since the speed of rotation very quickly reaches a uniform value.
Multimodal far-field acoustic radiation pattern: An approximate equation
NASA Technical Reports Server (NTRS)
Rice, E. J.
1977-01-01
The far-field sound radiation theory for a circular duct was studied for both single mode and multimodal inputs. The investigation was intended to develop a method to determine the acoustic power produced by turbofans as a function of mode cut-off ratio. With reasonable simplifying assumptions the single mode radiation pattern was shown to be reducible to a function of mode cut-off ratio only. With modal cut-off ratio as the dominant variable, multimodal radiation patterns can be reduced to a simple explicit expression. This approximate expression provides excellent agreement with an exact calculation of the sound radiation pattern using equal acoustic power per mode.
Actin-based propulsion of a microswimmer.
Leshansky, A M
2006-07-01
A simple hydrodynamic model of actin-based propulsion of microparticles in dilute cell-free cytoplasmic extracts is presented. Under the basic assumption that actin polymerization at the particle surface acts as a force dipole, pushing apart the load and the free (nonanchored) actin tail, the propulsive velocity of the microparticle is determined as a function of the tail length, porosity, and particle shape. The anticipated velocities of the cargo displacement and the rearward motion of the tail are in good agreement with recently reported results of biomimetic experiments. A more detailed analysis of the particle-tail hydrodynamic interaction is presented and compared to the prediction of the simplified model.
NASA Astrophysics Data System (ADS)
Govindarajan, A.; Vijayalakshmi, R.; Ramamurthy, V.
2018-04-01
The main aim of this article is to study the combined effects of heat and mass transfer on radiative magnetohydrodynamic (MHD) oscillatory flow of an optically thin dusty fluid in a saturated porous medium channel. Based on certain assumptions, the momentum, energy and concentration equations are obtained. The governing equations are non-dimensionalised, simplified and solved analytically. Closed-form analytical solutions for the velocity, temperature and concentration profiles are obtained. Numerical computations are presented graphically to show the salient features of the various physical parameters. The shear stress, the rate of heat transfer and the rate of mass transfer are also presented graphically.
Efficiency gain from elastic optical networks
NASA Astrophysics Data System (ADS)
Morea, Annalisa; Rival, Olivier
2011-12-01
We compare the cost-efficiency of optical networks based on mixed datarates (10, 40, 100 Gb/s) and datarate-elastic technologies. A European backbone network is examined under various traffic assumptions (volume of transported data per demand and total number of demands) to better understand the impact of traffic characteristics on cost-efficiency. Network dimensioning is performed for static and restorable networks (resilient to one-link failure). In this paper we investigate the trade-offs between price of interfaces, reach and reconfigurability, showing that elastic solutions can be more cost-efficient than mixed-rate solutions because of the better compatibility between different datarates, the increased reach of channels and simplified wavelength allocation.
A Module Language for Typing by Contracts
NASA Technical Reports Server (NTRS)
Glouche, Yann; Talpin, Jean-Pierre; LeGuernic, Paul; Gautier, Thierry
2009-01-01
Assume-guarantee reasoning is a popular and expressive paradigm for modular and compositional specification of programs. It is becoming a fundamental concept in some computer-aided design tools for embedded system design. In this paper, we elaborate foundations for contract-based embedded system design by proposing a general-purpose module language based on a Boolean algebra in which contracts can be defined. In this framework, contracts are used to negotiate the correctness of assumptions made on the definition of a component at the point where it is used, and to provide guarantees to its environment. We illustrate this presentation with the specification of a simplified 4-stroke engine model.
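A toy rendering of the assume-guarantee idea follows, with the Boolean algebra modeled by sets of abstract behaviours; the composition rule here is deliberately simplified (real contract algebras weaken the assumption of the composite), and all names and values are invented.

```python
# Toy assume-guarantee contracts over a set-based Boolean algebra.
from dataclasses import dataclass

@dataclass(frozen=True)
class Contract:
    assumption: frozenset   # behaviours the environment is assumed to allow
    guarantee: frozenset    # behaviours the component promises

    def satisfies(self, impl: frozenset) -> bool:
        # Under the assumption, the implementation stays in the guarantee.
        return impl & self.assumption <= self.guarantee

    def compose(self, other: "Contract") -> "Contract":
        # Simplified composition: intersect assumptions and guarantees.
        return Contract(self.assumption & other.assumption,
                        self.guarantee & other.guarantee)

c1 = Contract(frozenset({0, 1, 2, 3}), frozenset({0, 1}))
c2 = Contract(frozenset({0, 1, 4}), frozenset({0, 1, 4}))
print(c1.compose(c2).satisfies(frozenset({0, 5})))  # True: 5 is outside assumption
```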
Centrifugal inertia effects in two-phase face seal films
NASA Technical Reports Server (NTRS)
Basu, P.; Hughes, W. F.; Beeler, R. M.
1987-01-01
A simplified, semianalytical model has been developed to analyze the effect of centrifugal inertia in two-phase face seals. The model is based on the assumption of isothermal flow through the seal, but at an elevated temperature, and takes into account heat transfer and boiling. Using this model, seal performance curves are obtained with water as the working fluid. It is shown that the centrifugal inertia of the fluid reduces the load-carrying capacity dramatically at high speeds and that operational instability exists under certain conditions. While an all-liquid seal may be starved at speeds higher than a 'critical' value, leakage always occurs under boiling conditions.
Analysis of environmental regulatory proposals: Its your chance to influence policy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veil, J.A.
1994-03-02
As part of the regulatory development process, the US Environmental Protection Agency (EPA) collects data, makes various assumptions about the data, and analyzes the data. Although EPA acts in good faith, the agency cannot always be aware of all relevant data, make only appropriate assumptions, and use applicable analytical methods. Regulated industries must carefully review every component of the regulatory decision-making process to identify misunderstandings and errors and to supply additional data relevant to the regulatory action. This paper examines three examples of how EPA's data, assumptions, and analytical methods have been critiqued. The first two examples involve EPA's cost-effectiveness (CE) analyses prepared for the offshore oil and gas effluent limitations guidelines and as part of EPA Region 6's general permit for coastal waters of Texas and Louisiana. A CE analysis relates the cost of regulations to the incremental amount of pollutants that would be removed by the recommended treatment processes. The third example, although not involving a CE analysis, demonstrates how the use of non-representative data can influence the outcome of an analysis.
Nonrational Processes in Ethical Decision Making
ERIC Educational Resources Information Center
Rogerson, Mark D.; Gottlieb, Michael C.; Handelsman, Mitchell M.; Knapp, Samuel; Younggren, Jeffrey
2011-01-01
Most current ethical decision-making models provide a logical and reasoned process for making ethical judgments, but these models are empirically unproven and rely upon assumptions of rational, conscious, and quasi-legal reasoning. Such models predominate despite the fact that many nonrational factors influence ethical thought and behavior,…
Trachet, Bram; Bols, Joris; De Santis, Gianluca; Vandenberghe, Stefaan; Loeys, Bart; Segers, Patrick
2011-12-01
Computational fluid dynamics (CFD) simulations allow for calculation of a detailed flow field in the mouse aorta and can thus be used to investigate a potential link between local hemodynamics and disease development. To perform these simulations in a murine setting, one often needs to make assumptions (e.g. when mouse-specific boundary conditions are not available), but many of these assumptions have not been validated due to a lack of reference data. In this study, we present such a reference data set by combining high-frequency ultrasound and contrast-enhanced micro-CT to measure (in vivo) the time-dependent volumetric flow waveforms in the complete aorta (including seven major side branches) of 10 male ApoE-/- mice on a C57Bl/6 background. In order to assess the influence of some assumptions that are commonly applied in the literature, four different CFD simulations were set up for each animal: (i) imposing the measured volumetric flow waveforms, (ii) imposing the average flow fractions over all 10 animals, presented as a reference data set, (iii) imposing flow fractions calculated by Murray's law, and (iv) restricting the geometrical model to the abdominal aorta (imposing measured flows). We found that - even if there is sometimes significant variation in the flow fractions going to a particular branch - the influence of using average flow fractions on the CFD simulations is limited and often restricted to the side branches. On the other hand, Murray's law underestimates the fraction going to the brachiocephalic trunk and strongly overestimates the fraction going to the distal aorta, influencing the outcome of the CFD results significantly. Changing the exponential factor in Murray's law equation from 3 to 2 (as suggested by several authors in the literature) yields results that correspond much better to those obtained imposing the average flow fractions. Restricting the geometrical model to the abdominal aorta did not influence the outcome of the CFD simulations. In conclusion, the presented reference dataset can be used to impose boundary conditions in the mouse aorta in future studies, keeping in mind that they represent a subsample of the total population, i.e., relatively old, non-diseased, male C57Bl/6 ApoE-/- mice.
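The Murray's-law boundary condition discussed above amounts to splitting outlet flow in proportion to radius^k; a short sketch shows how the split shifts when the exponent is lowered from 3 to 2 (the radii are invented for illustration).

```python
# Murray's-law outlet flow fractions: Q_i proportional to r_i**k.
import numpy as np

radii = np.array([0.60, 0.45, 0.40, 0.35, 1.10])  # branch radii (mm), invented

def murray_fractions(r, k=3):
    w = r.astype(float) ** k
    return w / w.sum()

print("k=3:", np.round(murray_fractions(radii, 3), 3))
print("k=2:", np.round(murray_fractions(radii, 2), 3))  # less flow to the largest outlet
```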
Kuchta, Shawn R.; Brown, Ashley D.; Converse, Paul E.; Highton, Richard
2016-01-01
Species are a fundamental unit of biodiversity, yet can be challenging to delimit objectively. This is particularly true of species complexes characterized by high levels of population genetic structure, hybridization between genetic groups, isolation by distance, and limited phenotypic variation. Previous work on the Cumberland Plateau Salamander, Plethodon kentucki, suggested that it might constitute a species complex despite occupying a relatively small geographic range. To examine this hypothesis, we sampled 135 individuals from 43 populations, and used four mitochondrial loci and five nuclear loci (5693 base pairs) to quantify phylogeographic structure and probe for cryptic species diversity. Rates of evolution for each locus were inferred using the multidistribute package, and time calibrated gene trees and species trees were inferred using BEAST 2 and *BEAST 2, respectively. Because the parameter space relevant for species delimitation is large and complex, and all methods make simplifying assumptions that may lead them to fail, we conducted an array of analyses. Our assumption was that strongly supported species would be congruent across methods. Putative species were first delimited using a Bayesian implementation of the GMYC model (bGMYC), Geneland, and Brownie. We then validated these species using the genealogical sorting index and BPP. We found substantial phylogeographic diversity using mtDNA, including four divergent clades and an inferred common ancestor at 14.9 myr (95% HPD: 10.8–19.7 myr). By contrast, this diversity was not corroborated by nuclear sequence data, which exhibited low levels of variation and weak phylogeographic structure. Species trees estimated a far younger root than did the mtDNA data, closer to 1.0 myr old. Mutually exclusive putative species were identified by the different approaches. Possible causes of data set discordance, and the problem of species delimitation in complexes with high levels of population structure and introgressive hybridization, are discussed. PMID:26974148
The effect of a twin tunnel on the propagation of ground-borne vibration from an underground railway
NASA Astrophysics Data System (ADS)
Kuo, K. A.; Hunt, H. E. M.; Hussein, M. F. M.
2011-12-01
Accurate predictions of ground-borne vibration levels in the vicinity of an underground railway are greatly sought after in modern urban centres. Yet the complexity involved in simulating the underground environment means that it is necessary to make simplifying assumptions about this system. One such commonly made assumption is to ignore the effects of neighbouring tunnels, despite the fact that many underground railway lines consist of twin-bored tunnels, one for the outbound direction and one for the inbound direction. This paper presents a unique model for two tunnels embedded in a homogeneous, elastic fullspace. Each of these tunnels is subject to both known, dynamic train forces and dynamic cavity forces. The net forces acting on the tunnels are written as the sum of those tractions acting on the invert of a single tunnel, and those tractions that represent the motion induced by the neighbouring tunnel. By apportioning the tractions in this way, the vibration response of a two-tunnel system is written as a linear combination of displacement fields produced by a single-tunnel system. Using Fourier decomposition, forces are partitioned into symmetric and antisymmetric modenumber components to minimise computation times. The significance of the interactions between two tunnels is quantified by calculating the insertion gains, in both the vertical and horizontal directions, that result from the existence of a second tunnel. The insertion-gain results are shown to be localised and highly dependent on frequency, tunnel orientation and tunnel thickness. At some locations, the magnitude of these insertion gains is greater than 20 dB. This demonstrates that a high degree of inaccuracy exists in any surface vibration prediction model that includes only one of the two tunnels. This novel two-tunnel solution represents a significant contribution to the existing body of research into vibration from underground railways, as it shows that the second tunnel has a significant influence on the accuracy of vibration predictions for underground railways.
An Efficient Ray-Tracing Method for Determining Terrain Intercepts in EDL Simulations
NASA Technical Reports Server (NTRS)
Shidner, Jeremy D.
2016-01-01
The calculation of a ray's intercept from an arbitrary point in space to a prescribed surface is a common task in computer simulations. The arbitrary point often represents an object that is moving according to the simulation, while the prescribed surface is fixed in a defined frame. For detailed simulations, this surface becomes complex, taking the form of real-world features such as mountains, craters or valleys which require more advanced methods to accurately calculate a ray's intercept location. Incorporation of these complex surfaces has commonly been implemented in graphics systems that utilize highly optimized graphics processing units to analyze such features. This paper proposes a simplified method that does not require computationally intensive graphics solutions, but rather an optimized ray-tracing method for an assumed terrain dataset. This approach was developed for the Mars Science Laboratory mission, which landed on the complex terrain of Gale Crater. The paper begins with a discussion of the simulation used to implement the model and the applicability of finding surface intercepts with respect to atmosphere modeling, altitude determination, radar modeling, and contact forces influencing vehicle dynamics. Next, the derivation and assumptions of the intercept-finding method are presented. Key assumptions are noted, making the routines specific to only certain types of surface datasets, namely those equidistantly spaced in longitude and latitude. The derivation of the method relies on ray-tracing, requiring discussion of the formulation of the ray with respect to the terrain datasets. Further discussion includes techniques for ray initialization in order to optimize the intercept search. Then, the model implementation for various new applications in the simulation is demonstrated. Finally, a validation of the accuracy is presented along with the corresponding data sets used in the validation. A performance summary of the method is given using the analysis from the Mars Science Laboratory's terminal descent sensing model. Alternate uses are also shown for determining horizon maps and orbiter set times.
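A stripped-down version of such an intercept search, coarse marching along the ray followed by bisection against a bilinearly interpolated height grid, might look as follows; the terrain, grid spacing, and ray are synthetic placeholders, and this is a sketch of the general technique rather than the paper's optimized routines.

```python
# Ray-terrain intercept: march along the ray, compare ray height with
# interpolated terrain height, then bisect the bracketing interval.
import numpy as np

rng = np.random.default_rng(2)
terrain = rng.uniform(0, 50, size=(64, 64))      # equidistant height grid (m)
cell = 10.0                                      # grid spacing (m)

def height_at(x, y):
    i, j = min(int(x // cell), 62), min(int(y // cell), 62)
    fx, fy = x / cell - i, y / cell - j          # bilinear interpolation
    z00, z10 = terrain[i, j], terrain[i + 1, j]
    z01, z11 = terrain[i, j + 1], terrain[i + 1, j + 1]
    return (z00 * (1 - fx) * (1 - fy) + z10 * fx * (1 - fy)
            + z01 * (1 - fx) * fy + z11 * fx * fy)

def intercept(origin, direction, step=5.0, max_range=2000.0):
    origin, direction = np.asarray(origin), np.asarray(direction)
    lo = 0.0
    for hi in np.arange(step, max_range, step):   # coarse march
        p = origin + hi * direction
        if p[2] <= height_at(p[0], p[1]):         # crossed below terrain
            for _ in range(30):                   # bisection refinement
                mid = 0.5 * (lo + hi)
                q = origin + mid * direction
                if q[2] <= height_at(q[0], q[1]):
                    hi = mid
                else:
                    lo = mid
            return origin + hi * direction
        lo = hi
    return None                                   # no intercept in range

print(intercept([100.0, 100.0, 500.0], np.array([0.3, 0.2, -0.93])))
```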
Motion of small bodies in classical field theory
NASA Astrophysics Data System (ADS)
Gralla, Samuel E.
2010-04-01
I show how prior work with R. Wald on geodesic motion in general relativity can be generalized to classical field theories of a metric and other tensor fields on four-dimensional spacetime that (1) are second-order and (2) follow from a diffeomorphism-covariant Lagrangian. The approach is to consider a one-parameter-family of solutions to the field equations satisfying certain assumptions designed to reflect the existence of a body whose size, mass, and various charges are simultaneously scaled to zero. (That such solutions exist places a further restriction on the class of theories to which our results apply.) Assumptions are made only on the spacetime region outside of the body, so that the results apply independent of the body’s composition (and, e.g., black holes are allowed). The worldline “left behind” by the shrinking, disappearing body is interpreted as its lowest-order motion. An equation for this worldline follows from the “Bianchi identity” for the theory, without use of any properties of the field equations beyond their being second-order. The form of the force law for a theory therefore depends only on the ranks of its various tensor fields; the detailed properties of the field equations are relevant only for determining the charges for a particular body (which are the “monopoles” of its exterior fields in a suitable limiting sense). I explicitly derive the force law (and mass-evolution law) in the case of scalar and vector fields, and give the recipe in the higher-rank case. Note that the vector force law is quite complicated, simplifying to the Lorentz force law only in the presence of the Maxwell gauge symmetry. Example applications of the results are the motion of “chameleon” bodies beyond the Newtonian limit, and the motion of bodies in (classical) non-Abelian gauge theory. I also make some comments on the role that scaling plays in the appearance of universality in the motion of bodies.
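For reference, the special case mentioned above can be written explicitly: when the Maxwell gauge symmetry is present, the vector force law collapses to the standard Lorentz form (standard textbook notation, not a new derivation).

```latex
% Lorentz force law: m the body's mass, q its charge (the monopole of the
% exterior field), u^a the four-velocity, F^{a}{}_{b} the field strength.
\begin{equation}
  m \, \frac{D u^{a}}{d\tau} \;=\; q \, F^{a}{}_{b} \, u^{b} .
\end{equation}
```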
Vertical gradients and seasonal variation in stem CO2 efflux within a Norway spruce stand.
Tarvainen, Lasse; Räntfors, Mats; Wallin, Göran
2014-05-01
Stem CO2 efflux is known to vary seasonally and vertically along tree stems. However, annual tree- and stand-scale efflux estimates are commonly based on measurements made only a few times a year, during daytime and at breast height. In this study, the effect of these simplifying assumptions on annual efflux estimates and their influence on the estimates of the importance of stems in stand-scale carbon cycling are evaluated. In order to assess the strength of seasonal, diurnal and along-stem variability in CO2 efflux, half-hourly measurements were carried out at three heights on three mature Norway spruce (Picea abies (L.) Karst.) trees over a period of 3 years. Making the common assumption of breast height efflux rates being representative of the entire stem was found to result in underestimations of 10-17% in the annual tree-scale CO2 efflux. Upscaling using only daytime measurements from breast height increased the underestimation to 15-20%. Furthermore, the results show that the strength of the vertical gradient varies seasonally, being strongest in the early summer and non-existent during the cool months. The observed seasonality in the vertical CO2 efflux gradient could not be explained by variation in stem temperature, temperature response of the CO2 efflux (Q10), outer-bark permeability, CO2 transport in the xylem or CO2 release from the phloem. However, the estimated CO2 concentration immediately beneath the bark was considerably higher in the upper stem during the main period of diameter growth, coinciding with the strongest vertical efflux gradient. These results suggest that higher growth rates in the upper stem are the main cause for the observed vertical variation in the stem CO2 effluxes. Furthermore, the results indicate that accounting for the vertical efflux variation is essential for assessments of the importance of stems in stand-scale carbon cycling. © The Author 2014. Published by Oxford University Press. All rights reserved.
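The Q10 temperature response referred to above is the standard exponential form R(T) = R_ref * Q10^((T - T_ref)/10); a tiny sketch (all parameter values invented) illustrates why daytime-only sampling biases a daily mean.

```python
# Q10 temperature response of stem CO2 efflux; values are illustrative.
def efflux(T, R_ref=1.0, T_ref=15.0, q10=2.0):
    """CO2 efflux (umol m^-2 s^-1) at stem temperature T (deg C)."""
    return R_ref * q10 ** ((T - T_ref) / 10.0)

# Toy two-point day/night case: sampling only the warm daytime rate
# overestimates the daily mean unless corrected for temperature.
day, night = efflux(20.0), efflux(10.0)
print(f"day {day:.2f}, night {night:.2f}, daily mean {(day + night) / 2:.2f}")
```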
NASA Astrophysics Data System (ADS)
Stackhouse, Paul; Wong, Takmeng; Kratz, David; Gupta, Shashi; Wilber, Anne; Edwards, Anne
2010-05-01
The FLASHFlux (Fast Longwave and Shortwave radiative Fluxes from CERES and MODIS) project derives daily averaged gridded top-of-atmosphere (TOA) and surface radiative fluxes within one week of observation. Production of CERES-based TOA and surface fluxes is achieved by using the latest CERES calibration, which is assumed constant in time, and by making simplifying assumptions in the computation of time- and space-averaged quantities. Together these assumptions result in approximately a 1% increase in the uncertainty of FLASHFlux products over CERES. Analysis has clearly demonstrated that the global-annual mean outgoing longwave radiation shows a decrease of ~0.75 Wm-2 from 2007 to 2008, while the global-annual mean reflected shortwave radiation shows a decrease of 0.14 Wm-2 over the same period. Thus, the combined longwave and shortwave changes have resulted in an increase of ~0.89 Wm-2 in net radiation into the Earth climate system in 2008. A time series of TOA fluxes was constructed from CERES EBAF, CERES ERBE-like and FLASHFlux data. Relative to this multi-dataset average from 2001 to 2008, the 2008 global-annual mean anomalies are -0.54/-0.26/+0.80 Wm-2, respectively, for the longwave/shortwave/net radiation. These flux values, which were published in the NOAA 2008 State of the Climate Report, are within their corresponding 2-sigma interannual variabilities for this period. This paper extends these results through 2009, where the net flux is observed to recover. The TOA LW variability is also compared to AIRS OLR, showing excellent agreement in the anomalies. The variability appears very well correlated with the 2007-2009 La Nina/El Nino cycles, which altered the global distribution of clouds, total column water vapor and temperature. Reassessments of these results are expected when newer Clouds and the Earth's Radiant Energy System (CERES) data are released.
On the combinatorics of sparsification.
Huang, Fenix Wd; Reidys, Christian M
2012-10-22
We study the sparsification of dynamic-programming-based folding algorithms for RNA structures. Sparsification is a method that significantly improves the computation of minimum free energy (mfe) RNA structures. We provide a quantitative analysis of the sparsification of a particular decomposition rule, Λ∗. This rule splits an interval of RNA secondary and pseudoknot structures of fixed topological genus. Key to quantifying sparsification is the size of the so-called candidate sets. Here we assume mfe-structures to be specifically distributed (see Assumption 1) within arbitrary and irreducible RNA secondary and pseudoknot structures of fixed topological genus. We then present a combinatorial framework which allows, by means of probabilities of irreducible sub-structures, to obtain the expectation of the Λ∗-candidate set w.r.t. a uniformly random input sequence. We compute these expectations for arc-based energy models via energy-filtered generating functions (GF) in the case of RNA secondary structures as well as RNA pseudoknot structures. Furthermore, for RNA secondary structures we also analyze a simplified loop-based energy model. Our combinatorial analysis is then compared to the expected number of Λ∗-candidates obtained from folding mfe-structures. In the case of the mfe-folding of RNA secondary structures with a simplified loop-based energy model, our results imply that sparsification provides a significant, constant improvement of 91% (theory), to be compared to a 96% (experimental, simplified arc-based model) reduction. However, we do not observe a linear-factor improvement. Finally, in the case of the "full" loop-energy model we can report a reduction of 98% (experiment). Sparsification was initially attributed a linear-factor improvement. This conclusion was based on the so-called polymer-zeta property, which stems from interpreting polymer chains as self-avoiding walks. Subsequent findings, however, reveal that the O(n) improvement is not correct. The combinatorial analysis presented here shows that, assuming a specific distribution (see Assumption 1) of mfe-structures within irreducible and arbitrary structures, the expected number of Λ∗-candidates is Θ(n²). However, the constant reduction is quite significant, being in the range of 96%. We furthermore show an analogous result for the sparsification of the Λ∗-decomposition rule for RNA pseudoknotted structures of genus one. Finally, we observe that the effect of sparsification is sensitive to the employed energy model.
Heterosexual assumptions in verbal and non-verbal communication in nursing.
Röndahl, Gerd; Innala, Sune; Carlsson, Marianne
2006-11-01
This paper reports a study of what lesbian women and gay men had to say, as patients and as partners, about their experiences of nursing in hospital care, and what they regarded as important to communicate about homosexuality and nursing. The social life of heterosexual cultures is based on the assumption that all people are heterosexual, thereby making homosexuality socially invisible. Nurses may assume that all patients and significant others are heterosexual, and these heteronormative assumptions may lead to poor communication that affects nursing quality by leading nurses to ask the wrong questions and make incorrect judgements. A qualitative interview study was carried out in the spring of 2004. Seventeen women and 10 men ranging in age from 23 to 65 years from different parts of Sweden participated. They described 46 experiences as patients and 31 as partners. Heteronormativity was communicated in waiting rooms, in patient documents and when registering for admission, and nursing staff sometimes showed perplexity when an informant deviated from this heteronormative assumption. Informants had often met nursing staff who showed fear of behaving incorrectly, which could lead to a sense of insecurity, thereby impeding further communication. As partners of gay patients, informants felt that they had to deal with heterosexual assumptions more than they did when they were patients, and the consequences were feelings of not being accepted as a 'true' relative, of exclusion and neglect. Almost all participants offered recommendations about how nursing staff could facilitate communication. Heterosexual norms communicated unconsciously by nursing staff contribute to ambivalent attitudes and feelings of insecurity that prevent communication and easily lead to misconceptions. Educational and management interventions, as well as increased communication, could make gay people more visible and thereby encourage openness and awareness by hospital staff of the norms that they communicate through their language and behaviour.
Developing and Teaching Ethical Decision Making Skills.
ERIC Educational Resources Information Center
Robinson, John
1991-01-01
Student leaders and campus activities professionals can use a variety of techniques to help college students develop skill in ethical decision making, including teaching about the decision-making process, guiding students through decisions with a series of questions, playing ethics games, exploring assumptions, and best of all, role modeling. (MSE)
Multiscale Molecular Dynamics Model for Heterogeneous Charged Systems
NASA Astrophysics Data System (ADS)
Stanton, L. G.; Glosli, J. N.; Murillo, M. S.
2018-04-01
Modeling matter across large length scales and timescales using molecular dynamics simulations poses significant challenges. These challenges are typically addressed through the use of precomputed pair potentials that depend on thermodynamic properties like temperature and density; however, many scenarios of interest involve spatiotemporal variations in these properties, and such variations can violate assumptions made in constructing these potentials, thus precluding their use. In particular, when a system is strongly heterogeneous, most of the usual simplifying assumptions (e.g., spherical potentials) do not apply. Here, we present a multiscale approach to orbital-free density functional theory molecular dynamics (OFDFT-MD) simulations that bridges atomic, interionic, and continuum length scales to allow for variations in hydrodynamic quantities in a consistent way. Our multiscale approach enables simulations on the order of micron length scales and tens of picoseconds, which exceeds current OFDFT-MD simulations by many orders of magnitude. This new capability is then used to study the heterogeneous, nonequilibrium dynamics of a heated interface characteristic of an inertial-confinement-fusion capsule containing a plastic ablator near a fuel layer composed of deuterium-tritium ice. At these scales, fundamental assumptions of continuum models are explored; features such as the separation of the momentum fields among the species and strong hydrogen jetting from the plastic into the fuel region are observed, which had previously not been seen in hydrodynamic simulations.
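To illustrate why spatiotemporal variation breaks precomputed potentials, here is a hedged sketch of a tabulated screened-Coulomb (Yukawa) pair potential whose screening length is interpolated from the local temperature and density. The grid, values, and functional form are invented for illustration and are not the paper's model.

```python
import numpy as np

# Hypothetical precomputed pair-potential table keyed by local (T, n).
# In a strongly heterogeneous system each region sits at different (T, n),
# so a single global table no longer applies -- the failure mode described.
T_grid = np.array([1.0, 10.0, 100.0])   # temperature (eV), assumed
n_grid = np.array([1e22, 1e23, 1e24])   # number density (1/cm^3), assumed
lam = np.array([[2.0, 1.2, 0.7],        # screening length table (angstrom),
                [2.5, 1.5, 0.9],        # values made up for illustration
                [3.0, 1.9, 1.1]])

def screening_length(T, n):
    """Bilinear interpolation of the tabulated screening length."""
    iT = int(np.clip(np.searchsorted(T_grid, T) - 1, 0, 1))
    iN = int(np.clip(np.searchsorted(n_grid, n) - 1, 0, 1))
    tT = (T - T_grid[iT]) / (T_grid[iT + 1] - T_grid[iT])
    tN = (n - n_grid[iN]) / (n_grid[iN + 1] - n_grid[iN])
    return ((1 - tT) * (1 - tN) * lam[iT, iN] + tT * (1 - tN) * lam[iT + 1, iN]
            + (1 - tT) * tN * lam[iT, iN + 1] + tT * tN * lam[iT + 1, iN + 1])

def yukawa(r, q1, q2, T, n):
    """Screened-Coulomb pair potential using the *local* screening length."""
    return q1 * q2 * np.exp(-r / screening_length(T, n)) / r

print(yukawa(1.5, 1.0, 1.0, T=5.0, n=3e23))
```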
I Assumed You Knew: Teaching Assumptions as Co-Equal to Observations in Scientific Work
NASA Astrophysics Data System (ADS)
Horodyskyj, L.; Mead, C.; Anbar, A. D.
2016-12-01
Introductory science curricula typically begin with a lesson on the "nature of science". Usually this lesson is short, built with the assumption that students have picked up this information elsewhere and only a short review is necessary. However, when asked about the nature of science in our classes, student definitions were often confused, contradictory, or incomplete. A cursory review of how the nature of science is defined in a number of textbooks is similarly inconsistent and excessively loquacious. With such confusion both from the student and teacher perspective, it is no surprise that students walk away with significant misconceptions about the scientific endeavor, which they carry with them into public life. These misconceptions subsequently result in poor public policy and personal decisions on issues with scientific underpinnings. We will present a new way of teaching the nature of science at the introductory level that better represents what we actually do as scientists. Nature of science lessons often emphasize the importance of observations in scientific work. However, they rarely mention and often hide the importance of assumptions in interpreting those observations. Assumptions are co-equal to observations in building models, which are observation-assumption networks that can be used to make predictions about future observations. The confidence we place in these models depends on whether they are assumption-dominated (hypothesis) or observation-dominated (theory). By presenting and teaching science in this manner, we feel that students will better comprehend the scientific endeavor, since making observations and assumptions and building mental models is a natural human behavior. We will present a model for a science lab activity that can be taught using this approach.
ERIC Educational Resources Information Center
Hoban, Garry; Nielsen, Wendy
2010-01-01
"Slowmation" (abbreviated from "Slow Animation") is a simplified way of making an animation that enables students to create their own as a new way of learning about a science concept. When students make a slowmation, they create a sequence of five multimodal representations (the 5 Rs) with each one contributing to the learning…
ERIC Educational Resources Information Center
Han, Kyung T.; Guo, Fanmin
2014-01-01
The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…
ERIC Educational Resources Information Center
Mellone, Maria
2011-01-01
Assumptions about the construction and the transmission of knowledge and about the nature of mathematics always underlie any teaching practice, even if often unconsciously. I examine the conjecture that theoretical tools suitably chosen can help the teacher to make such assumptions explicit and to support the teacher's reflection on his/her…
Fuels for urban transit buses: a cost-effectiveness analysis.
Cohen, Joshua T; Hammitt, James K; Levy, Jonathan I
2003-04-15
Public transit agencies have begun to adopt alternative propulsion technologies to reduce urban transit bus emissions associated with conventional diesel (CD) engines. Among the most popular alternatives are emission controlled diesel buses (ECD), defined here to be buses with continuously regenerating diesel particle filters burning low-sulfur diesel fuel, and buses burning compressed natural gas (CNG). This study uses a series of simplifying assumptions to arrive at first-order estimates for the incremental cost-effectiveness (CE) of ECD and CNG relative to CD. The CE ratio numerator reflects acquisition and operating costs. The denominator reflects health losses (mortality and morbidity) due to primary particulate matter (PM), secondary PM, and ozone exposure, measured as quality adjusted life years (QALYs). We find that CNG provides larger health benefits than does ECD (nine vs six QALYs annually per 1000 buses) but that ECD is more cost-effective than CNG ($270,000 per QALY for ECD vs $1.7 million to $2.4 million per QALY for CNG). These estimates are subject to much uncertainty. We identify assumptions that contribute most to this uncertainty and propose potential research directions to refine our estimates.
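For readers unfamiliar with the metric, the incremental cost-effectiveness ratio is simply the cost difference divided by the health-benefit difference. The snippet below reproduces the order of magnitude of the ECD figure under an assumed incremental cost; the abstract does not report that cost, so the $1.62M/yr value is back-calculated for illustration, not reported data.

```python
# Back-of-envelope incremental cost-effectiveness ratio (ICER).
def icer(delta_cost, delta_qaly):
    return delta_cost / delta_qaly

# If ECD added ~$1.62M/yr in net costs per 1000 buses (assumed, not reported)
# while gaining ~6 QALYs/yr, the ratio matches the abstract's ECD estimate:
print(icer(1.62e6, 6.0))   # -> 270000.0 dollars per QALY
```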
Measuring the diffusion of linguistic change
Nerbonne, John
2010-01-01
We examine situations in which linguistic changes have probably been propagated via normal contact as opposed to via conquest, recent settlement and large-scale migration. We proceed then from two simplifying assumptions: first, that all linguistic variation is the result of either diffusion or independent innovation, and, second, that we may operationalize social contact as geographical distance. It is clear that both of these assumptions are imperfect, but they allow us to examine diffusion via the distribution of linguistic variation as a function of geographical distance. Several studies in quantitative linguistics have examined this relation, starting with Séguy (Séguy 1971 Rev. Linguist. Romane 35, 335–357), and virtually all report a sublinear growth in aggregate linguistic variation as a function of geographical distance. The literature from dialectology and historical linguistics has mostly traced the diffusion of individual features, however, so that it is sensible to ask what sort of dynamic in the diffusion of individual features is compatible with Séguy's curve. We examine some simulations of diffusion in an effort to shed light on this question. PMID:21041207
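A minimal sketch of how one can check Séguy-style sublinearity on site-pair data: fit variation = a·d^b in log-log space and inspect whether b < 1. The data here are synthetic; this is not the simulation machinery the paper uses.

```python
import numpy as np

# Synthetic site-pair data with a known sublinear exponent (b_true = 0.4).
rng = np.random.default_rng(0)
d = rng.uniform(5, 500, 400)                               # km between site pairs
variation = 0.8 * d**0.4 * rng.lognormal(0, 0.1, d.size)   # aggregate distance

# Fit log(variation) = b*log(d) + log(a); b < 1 indicates sublinear growth.
b, log_a = np.polyfit(np.log(d), np.log(variation), 1)
print(f"fitted exponent b = {b:.2f} (sublinear if b < 1)")
```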
Improved parameter inference in catchment models: 1. Evaluating parameter uncertainty
NASA Astrophysics Data System (ADS)
Kuczera, George
1983-10-01
A Bayesian methodology is developed to evaluate parameter uncertainty in catchment models fitted to a hydrologic response such as runoff, the goal being to improve the chance of successful regionalization. The catchment model is posed as a nonlinear regression model with stochastic errors possibly being both autocorrelated and heteroscedastic. The end result of this methodology, which may use Box-Cox power transformations and ARMA error models, is the posterior distribution, which summarizes what is known about the catchment model parameters. This can be simplified to a multivariate normal provided a linearization in parameter space is acceptable; means of checking and improving this assumption are discussed. The posterior standard deviations give a direct measure of parameter uncertainty, and study of the posterior correlation matrix can indicate what kinds of data are required to improve the precision of poorly determined parameters. Finally, a case study involving a nine-parameter catchment model fitted to monthly runoff and soil moisture data is presented. It is shown that use of ordinary least squares when its underlying error assumptions are violated gives an erroneous description of parameter uncertainty.
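As a hedged sketch of the likelihood ingredients named in the abstract (a Box-Cox power transformation plus an autocorrelated error model), the following computes a Gaussian log-likelihood with AR(1) innovations on Box-Cox-transformed residuals. Parameter names and the specific combination are illustrative, not Kuczera's code.

```python
import numpy as np

def boxcox(q, lam):
    """Box-Cox power transformation (lam = 0 gives the log transform)."""
    return (q**lam - 1.0) / lam if lam != 0 else np.log(q)

def log_likelihood(q_obs, q_sim, lam, rho, sigma):
    e = boxcox(q_obs, lam) - boxcox(q_sim, lam)   # transformed residuals
    nu = e[1:] - rho * e[:-1]                     # AR(1) innovations
    n = nu.size
    ll = -0.5 * n * np.log(2 * np.pi * sigma**2) - 0.5 * np.sum(nu**2) / sigma**2
    ll += (lam - 1.0) * np.sum(np.log(q_obs[1:]))  # Box-Cox Jacobian term
    return ll

q_obs = np.array([1.2, 3.4, 2.2, 5.1, 4.0])  # observed monthly flows (toy)
q_sim = np.array([1.0, 3.0, 2.5, 4.6, 4.4])  # simulated flows (toy)
print(log_likelihood(q_obs, q_sim, lam=0.3, rho=0.5, sigma=0.2))
```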
NASA Technical Reports Server (NTRS)
Tanelli, Simone; Tao, Wei-Kuo; Hostetler, Chris; Kuo, Kwo-Sen; Matsui, Toshihisa; Jacob, Joseph C.; Niamsuwam, Noppasin; Johnson, Michael P.; Hair, John; Butler, Carolyn;
2011-01-01
Forward simulation is an indispensable tool for evaluation of precipitation retrieval algorithms as well as for studying snow/ice microphysics and their radiative properties. The main challenge of the implementation arises due to the size of the problem domain. To overcome this hurdle, assumptions need to be made to simplify complex cloud microphysics. It is important that these assumptions are applied consistently throughout the simulation process. ISSARS addresses this issue by providing a computationally efficient and modular framework that can integrate currently existing models and is also capable of expanding for future development. ISSARS is designed to accommodate the simulation needs of the Aerosol/Clouds/Ecosystems (ACE) mission and the Global Precipitation Measurement (GPM) mission: radars, microwave radiometers, and optical instruments such as lidars and polarimeters. ISSARS's computation is performed in three stages: input reconditioning (IRM), electromagnetic properties (scattering/emission/absorption) calculation (SEAM), and instrument simulation (ISM). The computation is implemented as a web service, and its configuration can be accessed through a web-based interface.
Reviewed approach to defining the Active Interlock Envelope for Front End ray tracing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seletskiy, S.; Shaftan, T.
To protect the NSLS-II Storage Ring (SR) components from damage from synchrotron radiation produced by insertion devices (IDs), the Active Interlock (AI) keeps the electron beam within a safe envelope (a.k.a. the Active Interlock Envelope, or AIE) in the transverse phase space. The beamline Front Ends (FEs) are designed under the assumption that above a certain beam current (typically 2 mA) the ID synchrotron radiation (IDSR) fan is produced by the interlocked e-beam. These assumptions also define how the ray tracing for the FE is done. To simplify the FE ray tracing for a typical uncanted ID, it was decided to provide the Mechanical Engineering group with a single set of numbers (x, x′, y, y′) for the AIE at the center of the long (or short) ID straight section. Such a unified approach to the design of the beamline Front Ends will accelerate the design process and save valuable human resources. In this paper we describe our new approach to defining the AI envelope and provide the resulting numbers required for the design of the typical Front End.
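The single-set-of-numbers idea reduces worst-case ray tracing to drifting the envelope corner downstream; a minimal sketch follows, with purely illustrative envelope values (the actual AIE numbers are given in the paper, not here).

```python
# Drift a worst-case envelope corner (x, xp, y, yp) a distance s downstream;
# Front End apertures at s must clear the resulting excursion.
def ray_at(s, x, xp, y, yp):
    """Position of the envelope corner after a field-free drift of s meters."""
    return x + xp * s, y + yp * s

x, xp = 0.5e-3, 0.25e-3    # m, rad -- hypothetical AIE half-envelope
y, yp = 0.25e-3, 0.1e-3
for s in (5.0, 10.0, 20.0):   # distances to candidate masks/collimators (assumed)
    X, Y = ray_at(s, x, xp, y, yp)
    print(f"s = {s:5.1f} m: |x| <= {X*1e3:.2f} mm, |y| <= {Y*1e3:.2f} mm")
```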
NASA Astrophysics Data System (ADS)
Abramov, Rafail V.
2018-06-01
For the gas near a solid planar wall, we propose a scaling formula for the mean free path of a molecule as a function of the distance from the wall, under the assumption of a uniform distribution of the incident directions of the molecular free flight. We subsequently impose the same scaling onto the viscosity of the gas near the wall and compute the Navier-Stokes solution of the velocity of a shear flow parallel to the wall. Under the simplifying assumption of constant temperature of the gas, the velocity profile becomes an explicit nonlinear function of the distance from the wall and exhibits a Knudsen boundary layer near the wall. To verify the validity of the obtained formula, we perform Direct Simulation Monte Carlo (DSMC) computations for the shear flow of argon and nitrogen at normal density and temperature. We find excellent agreement between our velocity approximation and the computed DSMC velocity profiles, both within the Knudsen boundary layer and away from it.
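As a hedged numerical illustration of the geometric idea (not necessarily the paper's exact scaling formula), one can average the free-flight length over uniformly distributed directions while truncating flights that would cross the wall:

```python
import numpy as np

def lambda_eff(y, lam0, n_mu=100000):
    """Average free-flight length at height y above the wall.

    Half of the directions point toward the wall (direction cosine mu > 0);
    those flights are cut off at the wall, i.e. capped at y / mu. The other
    half point away and keep the bulk mean free path lam0. Illustrative only.
    """
    mu = np.random.default_rng(1).uniform(1e-9, 1.0, n_mu)
    toward = np.minimum(lam0, y / mu)
    return 0.5 * toward.mean() + 0.5 * lam0

lam0 = 1.0
for y in (0.01, 0.1, 0.5, 1.0, 2.0):
    print(f"y/lambda0 = {y:4.2f}: lambda_eff/lambda0 = {lambda_eff(y, lam0):.3f}")
```

Under these toy assumptions the average has the closed form λ_eff(y) = λ0/2 + (y + y ln(λ0/y))/2 for y ≤ λ0 (and λ0 otherwise), which reproduces the qualitative near-wall reduction that motivates rescaling the viscosity.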
NASA Astrophysics Data System (ADS)
Martínez-Espiñeira, Roberto; Amoako-Tuffour, Joe
2009-06-01
One of the basic assumptions of the travel cost method for recreational demand analysis is that the travel cost is always incurred for a single-purpose recreational trip. Several studies have skirted around the issue by making simplifying assumptions and by dropping from the sample observations regarded as nonconventional holiday-makers or nontraditional visitors. The effect of such simplifications on the benefit estimates remains conjectural. Given the remoteness of notable recreational parks, multi-destination or multi-purpose trips are not uncommon. This article examines the consequences of allocating travel costs to a recreational site when some trips were taken for purposes other than recreation and/or included visits to other recreational sites. Using a multi-purpose weighting approach on data from Gros Morne National Park, Canada, we conclude that a proper correction for multi-destination or multi-purpose trips is needed to avoid potential biases in the estimated effects of the price (travel-cost) variable and the income variable in the trip generation equation.
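A minimal sketch of the weighting idea: apportion a shared trip cost across destinations by stated importance rather than dropping the observation or charging the full cost to one site. The weights below are hypothetical, not the paper's survey instrument.

```python
# Apportion a multi-purpose trip's travel cost to the study site by weight.
def allocated_cost(total_travel_cost, importance_weights, site):
    """importance_weights: dict mapping each trip purpose to stated importance."""
    total = sum(importance_weights.values())
    return total_travel_cost * importance_weights[site] / total

weights = {"Gros Morne": 3, "other park": 1, "visiting family": 1}  # hypothetical
print(allocated_cost(500.0, weights, "Gros Morne"))   # -> 300.0
```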
Life Support Baseline Values and Assumptions Document
NASA Technical Reports Server (NTRS)
Anderson, Molly S.; Ewert, Michael K.; Keener, John F.
2018-01-01
The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. This document identifies many specific physical quantities that define life support systems, serving as a general reference for spacecraft life support system technology developers.
Stretching Screens and Imaginations.
ERIC Educational Resources Information Center
Douthwaite, Shelaugh
1983-01-01
Secondary students utilize a simplified technique to make silk screen prints, which can be printed onto T-shirts. The only materials needed from art suppliers are a few squeegees and a few yards of polyester screen mesh. (RM)
75 FR 42725 - Notice of Proposed Information Collection Requests
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-22
... reduce grantee work burden and to diminish response ambiguity, to simplify data entry and analysis, and... items to make the form more user-friendly and diminish response ambiguity. Requests for copies of the...
Attention and choice: a review on eye movements in decision making.
Orquin, Jacob L; Mueller Loose, Simone
2013-09-01
This paper reviews studies on eye movements in decision making, and compares their observations to theoretical predictions concerning the role of attention in decision making. Four decision theories are examined: rational models, bounded rationality, evidence accumulation, and parallel constraint satisfaction models. Although most theories were confirmed with regard to certain predictions, none of the theories adequately accounted for the role of attention during decision making. Several observations emerged concerning the drivers and downstream effects of attention on choice, suggesting that attention processes play an active role in constructing decisions. So far, decision theories have largely ignored the constructive role of attention by assuming that it is entirely determined by heuristics, or that it consists of stochastic information sampling. The empirical observations reveal that these assumptions are implausible, and that more accurate assumptions could have been made based on prior attention and eye movement research. Future decision making research would benefit from greater integration with attention research. Copyright © 2013 Elsevier B.V. All rights reserved.
Temblor, an app focused on your seismic risk and how to reduce it
NASA Astrophysics Data System (ADS)
Stein, R. S.; Sevilgen, V.; Sevilgen, S.; Kim, A.; Madden, E.
2015-12-01
Half of the world's population lives near active faults, and so could suffer earthquake damage. Most do not know they are at risk; many of the rest do too little, too late. So, Temblor is intended to enable everyone in the United States, and eventually the world, to learn their seismic hazard, to determine what most ensures their safety, and to determine the risk reduction measures in their best financial interest. In our free web and mobile app, and Chrome extension for real estate websites, Temblor estimates the likelihood of seismic shaking from all quakes at their occurrence rates, and the consequences of the shaking for home damage. The app then shows how the damage or its costs could be decreased by buying or renting a seismically safer home, securing fragile objects inside your home, retrofitting an older home, or buying earthquake insurance. Temblor uses public data from the USGS in the U.S., SHARE in Europe, and the GEAR model (Bird et al, in press, BSSA) for the globe. Through publicly available modeling methods, the hazard data is combined with public data on homes (construction date and square footage) to make risk calculations. This means that Temblor's results are independently reproducible. The app makes many simplifying assumptions, but users can provide additional information on their site and home for refined estimates. Temblor also lets one see active faults and recent quakes on the screen as they drive through an area. Because fear tends to trigger either panic or denial, Temblor seeks to make the world of earthquakes more fascinating than frightening. We are neither scaring nor soothing people, but rather talking straight. Through maps, globes, push notifications, family connections, and cost and benefit estimates, Temblor emphasizes the personal, local, real-time, and, most importantly, rational. Temblor's goal is to distill scientific and engineering information into lucid, trusted, and ideally actionable guidance to renters, home owners, and home buyers, so that we all live more safely in earthquake country.
The wisdom of deliberate mistakes.
Schoemaker, Paul J H; Gunther, Robert E
2006-06-01
Before the breakup of the Bell System, U.S. telephone companies were permitted by law to ask for security deposits from a small percentage of subscribers. The companies used statistical models to decide which customers were most likely to pay their bills late and thus should be charged a deposit, but no one knew whether the models were right. So the Bell companies made a deliberate mistake. They asked for no deposit from nearly 100,000 new customers randomly selected from among those who were considered high risks. Surprisingly, quite a few paid their bills on time. As a result, the companies instituted a smarter screening strategy, which added millions to the Bell System's bottom line. Usually, individuals and organizations go to great lengths to avoid errors. Companies are designed for optimum performance rather than for learning, and mistakes are seen as defects. But as the Bell System example shows, making mistakes--correctly--is a powerful way to accelerate learning and increase competitiveness. If one of a company's fundamental assumptions is wrong, the firm can achieve success more quickly by deliberately making errors than by considering only data that support the assumption. Moreover, executives who apply a conventional, systematic approach to solving a pattern recognition problem are often slower to find a solution than those who test their assumptions by knowingly making mistakes. How do you distinguish between smart mistakes and dumb ones? The authors' consulting firm has developed, and currently uses, a five-step process for identifying constructive mistakes. In one test, the firm assumed that a mistake it was planning to make would cost a significant amount of money, but the opposite happened. By turning assumptions on their heads, the firm created more than $1 million in new business.
The Immoral Assumption Effect: Moralization Drives Negative Trait Attributions.
Meindl, Peter; Johnson, Kate M; Graham, Jesse
2016-04-01
Jumping to negative conclusions about other people's traits is judged as morally bad by many people. Despite this, across six experiments (total N = 2,151), we find that multiple types of moral evaluations--even evaluations related to open-mindedness, tolerance, and compassion--play a causal role in these potentially pernicious trait assumptions. Our results also indicate that moralization affects negative--but not positive--trait assumptions, and that the effect of morality on negative assumptions cannot be explained merely by people's general (nonmoral) preferences or other factors that distinguish moral and nonmoral traits, such as controllability or desirability. Together, these results suggest that one of the more destructive human tendencies--making negative assumptions about others--can be caused by the better angels of our nature. © 2016 by the Society for Personality and Social Psychology, Inc.
NASA Astrophysics Data System (ADS)
Shipton, Z.; Caine, J. S.; Lunn, R. J.
2013-12-01
Geologists are tiny creatures living on the 2-and-a-bit-D surface of a sphere who observe essentially 1D vanishingly small portions (boreholes, roadcuts, stream and beach sections) of complex, 4D tectonic-scale structures. Field observations of fault zones are essential to understand the processes of fault growth and to make predictions of fault zone mechanical and hydraulic properties at depth. Here, we argue that a failure of geologists to communicate their knowledge effectively to other scientists/engineers can lead to unrealistic assumptions being made about fault properties, and may result in poor economic performance and a lack of robustness in industrial safety cases. Fault zones are composed of many heterogeneously distributed deformation-related elements. Low permeability features include regions of intense grain-size reduction, pressure solution, cementation and shale smears. Other elements are likely to have enhanced permeability through fractures and breccias. Slip surfaces can have either enhanced or reduced permeability depending on whether they are open or closed, and the local stress state. The highly variable nature of 1) the architecture of faults and 2) the properties of deformation-related elements demonstrates that there are many factors controlling the evolution of fault zone internal structures (fault architecture). The aim of many field studies of faults is to provide data to constrain predictions at depth. For these data to be useful, pooling of data from multiple sites is usually necessary. This effort is frequently hampered by variability in the usage of fault terminologies. In addition, these terms are often used in such ways as to make it easy for 'end-users' such as petroleum reservoir engineers, mining geologists, and seismologists to misinterpret or oversimplify the implications of field studies. Field geologists are comfortable knowing that if you walk along strike or up dip of a fault zone you will find variations in fault rock type, number and orientations of slip surfaces, variation in fracture density, relays, asperities, variable juxtaposition relationships, etc. Problems can arise when "users" of structural geology try to apply models to general cases without understanding that these are simplified models. For example, when a section like the one in Chester and Logan (1996) gets projected infinitely into the third dimension along a fault the size of the San Andreas (seismology), or Shale Gouge Ratios are blindly applied to an Allen diagram without recognising that sub-seismic scale relays may provide "hidden" juxtapositions resulting in fluids bypassing low permeability fault cores. Phrases like 'low-permeability fault core and high-permeability damage zone' fail to appreciate fault zone complexity. Internecine arguments over the details of terminology that baffle the "end users" can make detailed field studies that characterise fault heterogeneity seem irrelevant. We argue that the field geology community needs to consider ways to make sure that we educate end-users in appropriate and cautious approaches to using the data we provide, with an appreciation of the uncertainties inherent in our limited ability to characterize 4D tectonic structures, while also understanding the value of carefully collected field data.
Effects of waveform model systematics on the interpretation of GW150914
NASA Astrophysics Data System (ADS)
Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; et al. (LIGO Scientific Collaboration and Virgo Collaboration); Boyle, M.; Chu, T.; Hemberger, D.; Hinder, I.; Kidder, L. E.; Ossokine, S.; Scheel, M.; Szilagyi, B.; Teukolsky, S.; Vano Vinuales, A.
2017-05-01
Parameter estimates of GW150914 were obtained using Bayesian inference, based on three semi-analytic waveform models for binary black hole coalescences. These waveform models differ from each other in their treatment of black hole spins, and all three models make some simplifying assumptions, notably to neglect sub-dominant waveform harmonic modes and orbital eccentricity. Furthermore, while the models are calibrated to agree with waveforms obtained by full numerical solutions of Einstein's equations, any such calibration is accurate only to some non-zero tolerance and is limited by the accuracy of the underlying phenomenology, availability, quality, and parameter-space coverage of numerical simulations. This paper complements the original analyses of GW150914 with an investigation of the effects of possible systematic errors in the waveform models on estimates of its source parameters. To test for systematic errors we repeat the original Bayesian analysis on mock signals from numerical simulations of a series of binary configurations with parameters similar to those found for GW150914. Overall, we find no evidence for a systematic bias relative to the statistical error of the original parameter recovery of GW150914 due to modeling approximations or modeling inaccuracies. However, parameter biases are found to occur for some configurations disfavored by the data of GW150914: for binaries inclined edge-on to the detector over a small range of choices of polarization angles, and also for eccentricities greater than ~0.05. For signals with higher signal-to-noise ratio than GW150914, or in other regions of the binary parameter space (lower masses, larger mass ratios, or higher spins), we expect that systematic errors in current waveform models may impact gravitational-wave measurements, making more accurate models desirable for future observations.
Variational Ridging in Sea Ice Models
NASA Astrophysics Data System (ADS)
Roberts, A.; Hunke, E. C.; Lipscomb, W. H.; Maslowski, W.; Kamal, S.
2017-12-01
This work presents the results of a new development to make basin-scale sea ice models aware of the shape, porosity and extent of individual ridges within the pack. We have derived an analytic solution for the Euler-Lagrange equation of individual ridges that accounts for non-conservative forces, and therefore for the compressive strength of individual ridges. Because a region of the pack is simply a collection of paths of individual ridges, we are able to solve the Euler-Lagrange equation for a large-scale sea ice field as well, and therefore obtain the compressive strength of a region of the pack that explicitly accounts for the macro-porosity of ridged debris. We make a number of simplifying assumptions, such as treating sea ice as a granular material in ridges, and assuming that bending moments associated with ridging are perturbations around an isostatic state. Despite these simplifications, the ridge model is remarkably predictive of macro-porosity and ridge shape, and, because our equations are analytic, they do not require costly computations to solve the Euler-Lagrange equation of ridges on the large scale. The new ridge model is therefore applicable to large-scale sea ice models. We present results from this theoretical development, as well as plans to apply it to the Regional Arctic System Model and a community sea ice code. Most importantly, the new ridging model is particularly useful for pinpointing gaps in our observational record of sea ice ridges, and points to the need for improved measurements of the evolution of porosity of deformed ice in the Arctic and Antarctic. Such knowledge is not only useful for improving models, but also for improving estimates of sea ice volume derived from altimetric measurements of sea ice freeboard.
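For reference, the variational statement being solved is of the Lagrange-d'Alembert form: an Euler-Lagrange equation with a non-conservative generalized force on the right-hand side. The notation below is generic, not the paper's.

```latex
% Generic Euler--Lagrange equation with a non-conservative generalized
% force Q_i (Lagrange--d'Alembert form); L = T - V is the Lagrangian.
\frac{\mathrm{d}}{\mathrm{d}t}\!\left(\frac{\partial L}{\partial \dot{q}_i}\right)
  - \frac{\partial L}{\partial q_i} = Q_i
```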
NASA Astrophysics Data System (ADS)
Honarvar, M.; Lobo, J.; Mohareri, O.; Salcudean, S. E.; Rohling, R.
2015-05-01
To produce images of tissue elasticity, the vibro-elastography technique involves applying a steady-state multi-frequency vibration to tissue, estimating displacements from ultrasound echo data, and using the estimated displacements in an inverse elasticity problem with the shear modulus spatial distribution as the unknown. In order to fully solve the inverse problem, all three displacement components are required. However, using ultrasound, the axial component of the displacement is measured much more accurately than the other directions. Therefore, simplifying assumptions must be used in this case. Usually, the equations of motion are transformed into a Helmholtz equation by assuming tissue incompressibility and local homogeneity. The local homogeneity assumption causes significant imaging artifacts in areas of varying elasticity. In this paper, we remove the local homogeneity assumption. In particular, we introduce a new finite element based direct inversion technique in which only the coupling terms in the equation of motion are ignored, so it can be used with only one component of the displacement. Both Cartesian and cylindrical coordinate systems are considered. The use of multi-frequency excitation also allows us to obtain multiple measurements and reduce artifacts in areas where the displacement of one frequency is close to zero. The proposed method was tested in simulations and experiments against a conventional approach in which local homogeneity is assumed. The results show significant improvements in elasticity imaging with the new method compared to previous methods that assume local homogeneity. For example, in simulations, the contrast to noise ratio (CNR) for the region with the spherical inclusion increases from an average value of 1.5 to 17 after using the proposed method instead of the local inversion with the homogeneity assumption, and similarly, in the prostate phantom experiment, the CNR improved from an average value of 1.6 to about 20.
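The conventional reduction the abstract refers to can be written compactly: starting from the time-harmonic equation of motion, assuming incompressibility and local homogeneity decouples the displacement components into Helmholtz equations, which is what lets a single measured component determine the shear modulus. This is the standard form; the paper's contribution is to avoid the homogeneity step.

```latex
% Time-harmonic equation of motion for displacement u at frequency \omega:
\nabla \cdot \bigl( \mu \, ( \nabla \mathbf{u} + \nabla \mathbf{u}^{\mathsf{T}} ) \bigr)
  - \nabla p = -\rho\, \omega^{2} \mathbf{u}
% Assuming local homogeneity (\nabla \mu \approx 0) and neglecting the
% pressure term, each component decouples into a Helmholtz equation, so a
% single measured component u_i suffices to estimate \mu:
\mu \, \nabla^{2} u_{i} = -\rho\, \omega^{2} u_{i}
```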
Servant, Mathieu; White, Corey; Montagnini, Anna; Burle, Borís
2015-07-15
Most decisions that we make build upon multiple streams of sensory evidence, and control mechanisms are needed to filter out irrelevant information. Sequential sampling models of perceptual decision making have recently been enriched by attentional mechanisms that weight sensory evidence in a dynamic and goal-directed way. However, the framework retains the longstanding hypothesis that motor activity is engaged only once a decision threshold is reached. To probe latent assumptions of these models, neurophysiological indices are needed. Therefore, we collected behavioral and EMG data in the flanker task, a standard paradigm to investigate decisions about relevance. Although the models captured response time distributions and accuracy data, EMG analyses of response agonist muscles challenged the assumption of independence between decision and motor processes. Those analyses revealed covert incorrect EMG activity ("partial error") in a fraction of trials in which the correct response was finally given, providing intermediate states of evidence accumulation and response activation at the single-trial level. We extended the models by allowing motor activity to occur before a commitment to a choice and demonstrated that the proposed framework captured the rate, latency, and EMG surface of partial errors, along with the speed of the correction process. In return, EMG data provided strong constraints to discriminate between competing models that made similar behavioral predictions. Our study opens new theoretical and methodological avenues for understanding the links among decision making, cognitive control, and motor execution in humans. Sequential sampling models of perceptual decision making assume that sensory information is accumulated until a criterion quantity of evidence is obtained, at which point the decision terminates in a choice and motor activity is engaged. The very existence of covert incorrect EMG activity ("partial error") during the evidence accumulation process challenges this longstanding assumption. In the present work, we use partial errors to better constrain sequential sampling models at the single-trial level. Copyright © 2015 the authors.
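A hedged toy of the model class being tested: a diffusion decision process with an additional sub-decision EMG threshold, so that incorrect motor activation can begin before the decision bound is reached. Parameters and structure are illustrative, not the authors' fitted model.

```python
import numpy as np

def simulate_trial(drift=0.3, noise=1.0, dt=0.001, emg=0.6, bound=1.0, rng=None):
    """One two-choice diffusion trial; x is evidence for correct minus incorrect.

    A 'partial error' is a trial where evidence dips past the incorrect-side
    EMG threshold (muscle starts firing) but the correct bound is reached.
    """
    rng = rng if rng is not None else np.random.default_rng()
    x, partial_error, t = 0.0, False, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
        if x <= -emg:
            partial_error = True
    return x >= bound, partial_error, t

rng = np.random.default_rng(7)
trials = [simulate_trial(rng=rng) for _ in range(2000)]
n_corr = sum(c for c, p, t in trials)
n_partial = sum(p for c, p, t in trials if c)
print(f"accuracy = {n_corr/2000:.2f}, partial errors among correct = {n_partial/n_corr:.2%}")
```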
Simplified preparation of coniferyl and sinapyl alcohols.
Kim, Hoon; Ralph, John
2005-05-04
Coniferyl and sinapyl alcohols were prepared from commercially available coniferaldehyde and sinapaldehyde using borohydride exchange resin in methanol. This reduction is highly regioselective and exceptionally simple, making these valuable monolignols readily available to researchers lacking synthetic chemistry expertise.
Evaluation of a distributed catchment scale water balance model
NASA Technical Reports Server (NTRS)
Troch, Peter A.; Mancini, Marco; Paniconi, Claudio; Wood, Eric F.
1993-01-01
The validity of some of the simplifying assumptions in a conceptual water balance model is investigated by comparing simulation results from the conceptual model with simulation results from a three-dimensional physically based numerical model and with field observations. We examine, in particular, assumptions and simplifications related to water table dynamics, vertical soil moisture and pressure head distributions, and subsurface flow contributions to stream discharge. The conceptual model relies on a topographic index to predict saturation excess runoff and on Philip's infiltration equation to predict infiltration excess runoff. The numerical model solves the three-dimensional Richards equation describing flow in variably saturated porous media, and handles seepage face boundaries, infiltration excess and saturation excess runoff production, and soil-driven and atmosphere-driven surface fluxes. The study catchments (a 7.2 sq km catchment and a 0.64 sq km subcatchment) are located in the North Appalachian ridge and valley region of eastern Pennsylvania. Hydrologic data collected during the MACHYDRO 90 field experiment are used to calibrate the models and to evaluate simulation results. It is found that water table dynamics as predicted by the conceptual model are close to the observations in a shallow water well, and therefore that a linear relationship between a topographic index and the local water table depth is a reasonable assumption for catchment-scale modeling. However, the hydraulic equilibrium assumption is not valid for the upper 100 cm layer of the unsaturated zone, and a conceptual model that incorporates a root zone is suggested. Furthermore, theoretical subsurface flow characteristics from the conceptual model are found to be different from field observations, numerical simulation results, and theoretical baseflow recession characteristics based on Boussinesq's groundwater equation.
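The linear water-table relation being tested is the TOPMODEL-style expression below, written in generic notation (f is a scaling parameter; z̄ and λ are the catchment-average depth and index).

```latex
% TOPMODEL-style linear relation between the topographic index and the
% local water table depth z_i -- a sketch of the assumption being tested:
z_i = \bar{z} - \frac{1}{f}\left( \ln\frac{a_i}{\tan\beta_i} - \lambda \right),
\qquad
\lambda = \frac{1}{A}\int_{A} \ln\frac{a}{\tan\beta}\,\mathrm{d}A
```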
Operation Windshield and the simplification of emergency management.
Andrews, Michael
2016-01-01
Large, complex, multi-stakeholder exercises are the culmination of years of gradual progression through a comprehensive training and exercise programme. Exercises intended to validate training, refine procedures and test processes initially tested in isolation are combined to ensure seamless response and coordination during actual crises. The challenges of integrating timely and accurate situational awareness from an array of sources, including response agencies, municipal departments, partner agencies and the public, on an ever-growing range of media platforms, increase information management complexity in emergencies. Considering that many municipal emergency operations centre roles are filled by staff whose day jobs have little to do with crisis management, there is a need to simplify emergency management and make it more intuitive. North Shore Emergency Management has accepted the challenge of making emergency management less onerous to occasional practitioners through a series of initiatives aimed to build competence and confidence by making processes easier to use as well as by introducing technical tools that can simplify processes and enhance efficiencies. These efforts culminated in the full-scale earthquake exercise, Operation Windshield, which preceded the 2015 Emergency Preparedness and Business Continuity Conference in Vancouver, British Columbia.
A Signal-Detection Analysis of Fast-and-Frugal Trees
ERIC Educational Resources Information Center
Luan, Shenghua; Schooler, Lael J.; Gigerenzer, Gerd
2011-01-01
Models of decision making are distinguished by those that aim for an optimal solution in a world that is precisely specified by a set of assumptions (a so-called "small world") and those that aim for a simple but satisfactory solution in an uncertain world where the assumptions of optimization models may not be met (a so-called "large world"). Few…
ERIC Educational Resources Information Center
Dong, Nianbo; Lipsey, Mark
2014-01-01
When randomized control trials (RCT) are not feasible, researchers seek other methods to make causal inference, e.g., propensity score methods. One of the underlying assumptions for the propensity score methods to obtain unbiased treatment effect estimates is the ignorability assumption, that is, conditional on the propensity score, treatment…
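The (strong) ignorability assumption referred to here is conventionally written as follows, with e(X) the propensity score:

```latex
% Strong ignorability (Rosenbaum & Rubin): treatment T is independent of the
% potential outcomes given covariates X, hence given the propensity score
% e(X) = P(T = 1 | X), together with overlap:
\bigl(Y(0),\,Y(1)\bigr) \mathrel{\perp\!\!\!\perp} T \mid X
\;\Longrightarrow\;
\bigl(Y(0),\,Y(1)\bigr) \mathrel{\perp\!\!\!\perp} T \mid e(X),
\qquad 0 < e(X) < 1
```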
Maintaining the Balance Between Manpower, Skill Levels, and PERSTEMPO
2006-01-01
requirement processes. Models and tools that integrate these dimensions would help crystallize issues, identify embedded assumptions, and surface...problems will change if the planning assumptions are incorrect or if the other systems are incapable of making the necessary adjustments. Static...Carrillo, Background and Theory Behind the Compensations, Accessions, and Personnel (CAPM) Model, Santa Monica, Calif.: RAND Corporation, MR-1667
ERIC Educational Resources Information Center
Ngai, Courtney; Sevian, Hannah; Talanquer, Vicente
2014-01-01
Given the diversity of materials in our surroundings, one should expect scientifically literate citizens to have a basic understanding of the core ideas and practices used to analyze chemical substances. In this article, we use the term 'chemical identity' to encapsulate the assumptions, knowledge, and practices upon which chemical…
Ma, Ruiqing; Kawamoto, Ken-Ichiro; Shinomori, Keizo
2016-03-01
We explored the color constancy mechanisms of color-deficient observers under red, green, blue, and yellow illuminations. The red and green illuminations were defined individually by the longer axis of the color discrimination ellipsoid measured by the Cambridge Colour Test. Four dichromats (3 protanopes and 1 deuteranope), two anomalous trichromats (2 deuteranomalous observers), and five color-normal observers were asked to complete the color constancy task by making a simultaneous paper match under asymmetrical illuminations in haploscopic view on a monitor. The von Kries adaptation model was applied to estimate the cone responses. The model fits showed that for all color-deficient observers under all illuminations, the adjustment of the S-cone response or blue-yellow chromatically opponent responses modeled with the simple assumption of cone deletion in a certain type (S-M, S-L or S-(L+M)) was consistent with the principle of the von Kries model. The degree of adaptation was similar to that of color-normal observers. The results indicate that the color constancy of color-deficient observers is mediated by the simplified blue-yellow color system with a von Kries-type adaptation effect, even in the case of brightness match, as well as by a possible cone-level adaptation to the S-cone when the illumination produces a strong S-cone stimulation, such as blue illumination.
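The von Kries model invoked above scales each cone class independently by its response to the illuminant (white point) W:

```latex
% von Kries adaptation: independent per-cone-class normalization by the
% white point W; primed quantities are post-adaptation signals.
L' = L / L_{W}, \qquad M' = M / M_{W}, \qquad S' = S / S_{W}
```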
Towards a Comprehensive Model of Jet Noise Using an Acoustic Analogy and Steady RANS Solutions
NASA Technical Reports Server (NTRS)
Miller, Steven A. E.
2013-01-01
An acoustic analogy is developed to predict the noise from jet flows. It contains two source models that independently predict the noise from turbulence and shock wave shear layer interactions. The acoustic analogy is based on the Euler equations and separates the sources from propagation. Propagation effects are taken into account by calculating the vector Green's function of the linearized Euler equations. The sources are modeled following the work of Tam and Auriault, Morris and Boluriaan, and Morris and Miller. A statistical model of the two-point cross-correlation of the velocity fluctuations is used to describe the turbulence. The acoustic analogy attempts to take into account the correct scaling of the sources for a wide range of nozzle pressure and temperature ratios. It does not make assumptions regarding fine- or large-scale turbulent noise sources, self- or shear-noise, or convective amplification. The acoustic analogy is partially informed by three-dimensional steady Reynolds-Averaged Navier-Stokes (RANS) solutions that include the nozzle geometry. The predictions are compared with experiments of jets operating subsonically through supersonically and at unheated and heated temperatures. Predictions generally capture the scaling of both mixing noise and broadband shock-associated noise (BBSAN) for the conditions examined, but some discrepancies remain that are due to the accuracy of the steady RANS turbulence model closure, the equivalent sources, and the use of a simplified vector Green's function solver of the linearized Euler equations.
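Models in this lineage typically describe the turbulence with a space-time cross-correlation of roughly the following exponential-Gaussian form; this is illustrative of the model class, and the paper's exact form and constants may differ.

```latex
% A two-point cross-correlation model of the kind used in this literature,
% with convection velocity \bar{u}, time scale \tau_s, and length scale \ell_s:
R(\boldsymbol{\xi}, \tau) = \overline{u'^{2}}\,
\exp\!\left(-\frac{|\tau|}{\tau_{s}}\right)
\exp\!\left(-\frac{(\xi_{1} - \bar{u}\tau)^{2} + \xi_{2}^{2} + \xi_{3}^{2}}{\ell_{s}^{2}}\right)
```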
When enough is enough: The worth of monitoring data in aquifer remediation design
NASA Astrophysics Data System (ADS)
James, Bruce R.; Gorelick, Steven M.
1994-12-01
Given the high cost of data collection at groundwater contamination remediation sites, it is becoming increasingly important to make data collection as cost-effective as possible. A Bayesian data worth framework is developed in an attempt to carry out this task for remediation programs in which a groundwater contaminant plume must be located and then hydraulically contained. The framework is applied to a hypothetical contamination problem where uncertainty in plume location and extent are caused by uncertainty in source location, source loading time, and aquifer heterogeneity. The goal is to find the optimum number and the best locations for a sequence of observation wells that minimize the expected cost of remediation plus sampling. Simplifying assumptions include steady state heads, advective transport, simple retardation, and remediation costs as a linear function of discharge rate. In the case here, an average of six observation wells was needed. Results indicate that this optimum number was particularly sensitive to the mean hydraulic conductivity. The optimum number was also sensitive to the variance of the hydraulic conductivity, annual discount rate, operating cost, and sample unit cost. It was relatively insensitive to the correlation length of hydraulic conductivity. For the case here, points of greatest uncertainty in plume presence were on average poor candidates for sample locations, and randomly located samples were not cost-effective.
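The data-worth logic can be reduced to a deliberately tiny Bayesian example (ours, not the authors' Monte Carlo framework over plume realizations): an observation is worth buying only while its expected reduction in remediation cost exceeds its unit cost. All costs and probabilities below are invented:

    # Two equally likely plume extents; containment must cover the true plume.
    p_wide = 0.5
    cost_wide_design   = 1_000_000  # pumping cost if designed for the wide plume
    cost_narrow_design =   400_000  # pumping cost if designed for the narrow plume
    sample_cost        =    50_000  # one (assumed perfectly informative) well

    # Without data: the design must be conservative, i.e. cover the wide plume.
    cost_no_data = cost_wide_design

    # With a perfect observation: the design matches the true plume extent.
    cost_with_data = (p_wide * cost_wide_design
                      + (1 - p_wide) * cost_narrow_design
                      + sample_cost)

    expected_data_worth = cost_no_data - cost_with_data
    print(f"Expected worth of the observation well: ${expected_data_worth:,.0f}")
    # Sampling pays here ($250,000 > 0); the optimum number of wells is reached
    # when the marginal worth of the next well drops below its unit cost.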
NASA Astrophysics Data System (ADS)
Mekheimer, Kh. S.; Hasona, W. M.; Abo-Elkhair, R. E.; Zaher, A. Z.
2018-01-01
Cancer is dangerous and deadly to most of its patients. Recent studies have shown that gold nanoparticles can help treat it, because these particles have a high atomic number, which produces heat and supports the treatment of malignant tumors. The motivation of this article is to study the effect of heat transfer on blood flow (a non-Newtonian model) containing gold nanoparticles in the gap between two coaxial tubes, where the outer tube has a sinusoidal wave traveling down its wall and the inner tube is rigid. The governing equations of the third-grade fluid, along with total mass, thermal energy and nanoparticle balances, are simplified using the long-wavelength assumption. Exact solutions are obtained for the temperature distribution and nanoparticle concentration, while approximate analytical solutions are found for the velocity distribution using the regular perturbation method with a small third-grade parameter. The influence of physical parameters such as the third-grade parameter, the Brownian motion parameter and the thermophoresis parameter on the velocity profile, temperature distribution and nanoparticle concentration is considered. The results indicate that gold nanoparticles are effective for drug-carrying and drug-delivery systems because they control the velocity through the Brownian motion parameter Nb and the thermophoresis parameter Nt. Gold nanoparticles also increase the temperature distribution, making them able to destroy cancer cells.
Treatment evolution and new standards of care: implications for cost-effectiveness analysis.
Shechter, Steven M
2011-01-01
Traditional approaches to cost-effectiveness analysis have not considered the downstream possibility of a new standard of care coming out of the research and development pipeline. However, the treatment landscape for patients may change significantly over the course of their lifetimes. To present a Markov modeling framework that incorporates the possibility of treatment evolution into the incremental cost-effectiveness ratio (ICER) that compares treatments available at the present time. The framework is a Markov model evaluated by matrix algebra. Measurements: The author evaluates the difference between the new and traditional ICER calculations for patients with chronic diseases facing a lifetime of treatment. The bias of the traditional ICER calculation may be substantial, with further testing revealing that it may be either positive or negative depending on the model parameters. The author also performs probabilistic sensitivity analyses with respect to the possible timing of a new treatment discovery and notes the increase in the magnitude of the bias when the new treatment is likely to appear sooner rather than later. Limitations: The modeling framework is intended as a proof of concept and therefore makes simplifying assumptions such as time stationarity of model parameters and consideration of a single new drug discovery. For diseases with a more active research and development pipeline, the possibility of a new treatment paradigm may be at least as important to consider in sensitivity analysis as other parameters that are often considered.
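A minimal sketch of the matrix-algebra evaluation (invented transition probabilities, costs, and utilities, not the author's parameters): the expected discounted cost and effectiveness of each strategy come from iterating its transition matrix, and the ICER is the ratio of the differences. The paper's correction would further include, in each cycle, a probability that a new standard of care arrives and changes the downstream costs and utilities:

    import numpy as np

    # States: 0 = stable, 1 = progressed, 2 = dead (absorbing). Annual cycles.
    P_old = np.array([[0.85, 0.10, 0.05],
                      [0.00, 0.80, 0.20],
                      [0.00, 0.00, 1.00]])
    P_new = np.array([[0.90, 0.07, 0.03],
                      [0.00, 0.85, 0.15],
                      [0.00, 0.00, 1.00]])

    cost    = {"old": np.array([2000.0, 8000.0, 0.0]),
               "new": np.array([6000.0, 8000.0, 0.0])}
    utility = np.array([0.85, 0.50, 0.0])   # QALYs per year in each state

    def expected_totals(P, state_cost, horizon=40, discount=0.03):
        dist = np.array([1.0, 0.0, 0.0])    # everyone starts in the stable state
        total_cost = total_qaly = 0.0
        for t in range(horizon):
            d = 1.0 / (1.0 + discount) ** t
            total_cost += d * (dist @ state_cost)
            total_qaly += d * (dist @ utility)
            dist = dist @ P                 # one cycle of the Markov chain
        return total_cost, total_qaly

    c_old, e_old = expected_totals(P_old, cost["old"])
    c_new, e_new = expected_totals(P_new, cost["new"])
    icer = (c_new - c_old) / (e_new - e_old)
    print(f"ICER = {icer:,.0f} per QALY gained")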
A phasor approach analysis of multiphoton FLIM measurements of three-dimensional cell culture models
NASA Astrophysics Data System (ADS)
Lakner, P. H.; Möller, Y.; Olayioye, M. A.; Brucker, S. Y.; Schenke-Layland, K.; Monaghan, M. G.
2016-03-01
Fluorescence lifetime imaging microscopy (FLIM) is a useful approach to obtain information regarding the endogenous fluorophores present in biological samples. The concise evaluation of FLIM data requires the use of robust mathematical algorithms. In this study, we developed a user-friendly phasor approach for analyzing FLIM data and applied this method on three-dimensional (3D) Caco-2 models of polarized epithelial luminal cysts in a supporting extracellular matrix environment. These Caco-2 based models were treated with epidermal growth factor (EGF), to stimulate proliferation in order to determine if FLIM could detect such a change in cell behavior. Autofluorescence from nicotinamide adenine dinucleotide (phosphate) (NAD(P)H) in luminal Caco-2 cysts was stimulated by 2-photon laser excitation. Using a phasor approach, the lifetimes of involved fluorophores and their contribution were calculated with fewer initial assumptions when compared to multiexponential decay fitting. The phasor approach simplified FLIM data analysis, making it an interesting tool for non-experts in numerical data analysis. We observed that an increased proliferation stimulated by EGF led to a significant shift in fluorescence lifetime and a significant alteration of the phasor data shape. Our data demonstrates that multiphoton FLIM analysis with the phasor approach is a suitable method for the non-invasive analysis of 3D in vitro cell culture models qualifying this method for monitoring basic cellular features and the effect of external factors.
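A minimal sketch of the phasor transform (our illustration, not the study's code): each decay histogram maps to coordinates (g, s) at the laser repetition frequency, and a single-exponential lifetime lands on the universal semicircle, so no multiexponential fit is needed:

    import numpy as np

    def phasor(decay, dt, f_rep=80e6):
        """Map a fluorescence decay histogram to phasor coordinates (g, s)."""
        omega = 2 * np.pi * f_rep
        t = (np.arange(decay.size) + 0.5) * dt
        g = np.sum(decay * np.cos(omega * t)) / np.sum(decay)
        s = np.sum(decay * np.sin(omega * t)) / np.sum(decay)
        return g, s

    # Synthetic single-exponential decay with lifetime tau; for such a decay
    # theory gives g = 1/(1+(w*tau)^2) and s = w*tau/(1+(w*tau)^2).
    tau, dt = 2.0e-9, 12.5e-9 / 256
    decay = np.exp(-(np.arange(256) + 0.5) * dt / tau)
    g, s = phasor(decay, dt)
    print(g, s)  # should lie near the universal semicircle (g-0.5)^2 + s^2 = 0.25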
VizieR Online Data Catalog: Tracers of the Milky Way mass (Bratek+, 2014)
NASA Astrophysics Data System (ADS)
Bratek, L.; Sikora, S.; Jalocha, J.; Kutschera, M.
2013-11-01
We model the phase-space distribution of the kinematic tracers using general, smooth distribution functions to derive a conservative lower bound on the total mass within ~150-200 kpc. By approximating the potential as Keplerian, the phase-space distribution can be simplified to that of a smooth distribution of energies and eccentricities. Our approach naturally allows for calculating moments of the distribution function, such as the radial profile of the orbital anisotropy. We systematically construct a family of phase-space functions with the resulting radial velocity dispersion overlapping with the one obtained using data on radial motions of distant kinematic tracers, while making no assumptions about the density of the tracers and the velocity anisotropy parameter β regarded as a function of the radial variable. While there is no apparent upper bound for the Milky Way mass, at least as long as only the radial motions are concerned, we find a sharp lower bound for the mass that is small. In particular, a mass value of 2.4×10^11 M⊙, obtained in the past for lower and intermediate radii, is still consistent with the dispersion profile at larger radii. Compared with much greater mass values in the literature, this result shows that determining the Milky Way mass is strongly model-dependent. We expect a similar reduction of mass estimates in models assuming more realistic mass profiles. (1 data file).
An exact solution of a simplified two-phase plume model. [for solid propellant rocket
NASA Technical Reports Server (NTRS)
Wang, S.-Y.; Roberts, B. B.
1974-01-01
An exact solution of a simplified two-phase, gas-particle, rocket exhaust plume model is presented. It may be used to make upper-bound estimates of the heat flux and pressure loads due to particle impingement on objects in the rocket exhaust plume. By including correction factors to be determined experimentally, the present technique will provide realistic data concerning the heat and aerodynamic loads on these objects for design purposes. Excellent agreement in trend between the best available computer solution and the present exact solution is shown.
Outline of cost-benefit analysis and a case study
NASA Technical Reports Server (NTRS)
Kellizy, A.
1978-01-01
The methodology of cost-benefit analysis is reviewed and a case study involving solar cell technology is presented. Emphasis is placed on simplifying the technique in order to permit a technical person not trained in economics to undertake a cost-benefit study comparing alternative approaches to a given problem. The role of economic analysis in management decision making is discussed. In simplifying the methodology it was necessary to restrict the scope and applicability of this report. Additional considerations and constraints are outlined. Examples are worked out to demonstrate the principles. A computer program which performs the computational aspects appears in the appendix.
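The computational core of such a comparison is small enough to sketch directly (a generic illustration in the spirit of the report's appendix program, not a reproduction of it; the alternatives and cash flows are invented): discount each year's net benefits, then compare net present values and benefit-cost ratios:

    def npv(cashflows, rate):
        """Net present value of yearly cashflows (year 0 first)."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    # Hypothetical solar-cell alternatives: up-front cost, then yearly benefits.
    alternatives = {
        "concentrator array": [-120_000] + [18_000] * 15,
        "flat-plate array":   [-80_000] + [12_000] * 15,
    }
    for name, flows in alternatives.items():
        v = npv(flows, rate=0.07)
        benefits = npv([max(f, 0) for f in flows], 0.07)
        costs = -npv([min(f, 0) for f in flows], 0.07)
        print(f"{name}: NPV = {v:,.0f}, benefit-cost ratio = {benefits / costs:.2f}")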
Classification with spatio-temporal interpixel class dependency contexts
NASA Technical Reports Server (NTRS)
Jeon, Byeungwoo; Landgrebe, David A.
1992-01-01
A contextual classifier which can utilize both spatial and temporal interpixel dependency contexts is investigated. After spatial and temporal neighbors are defined, a general form of maximum a posteriori spatiotemporal contextual classifier is derived. This contextual classifier is simplified under several assumptions. Joint prior probabilities of the classes of each pixel and its spatial neighbors are modeled by the Gibbs random field. The classification is performed in a recursive manner to allow a computationally efficient contextual classification. Experimental results with bitemporal TM data show significant improvement of classification accuracy over noncontextual pixelwise classifiers. This spatiotemporal contextual classifier should find use in many applications of remote sensing, especially when the classification accuracy is important.
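As a minimal illustration of the recursive MAP idea (our sketch with spatial context only and a synthetic two-class scene, not the paper's bitemporal TM data or its full spatiotemporal model), iterated conditional modes combine per-pixel class log-likelihoods with a Gibbs (Potts) prior that rewards agreement with the four spatial neighbors:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy two-class scene observed through Gaussian noise (stand-in for TM data).
    truth = np.zeros((32, 32), dtype=int)
    truth[:, 16:] = 1
    obs = truth + 0.8 * rng.standard_normal(truth.shape)

    means, sigma, beta = np.array([0.0, 1.0]), 0.8, 1.5
    loglik = -0.5 * ((obs[..., None] - means) / sigma) ** 2  # per-class log-likelihood

    labels = loglik.argmax(-1)          # noncontextual pixelwise start
    for _ in range(5):                  # ICM sweeps approximate the MAP labeling
        p = np.pad(labels, 1, mode="edge")
        post = np.empty(loglik.shape)
        for c in (0, 1):
            # Potts/Gibbs prior: reward agreement with the 4 spatial neighbors.
            agree = ((p[:-2, 1:-1] == c).astype(float) + (p[2:, 1:-1] == c)
                     + (p[1:-1, :-2] == c) + (p[1:-1, 2:] == c))
            post[..., c] = loglik[..., c] + beta * agree
        labels = post.argmax(-1)

    print("pixelwise errors:", (loglik.argmax(-1) != truth).sum(),
          "| contextual errors:", (labels != truth).sum())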
Ionic transport in high-energy-density matter
Stanton, Liam G.; Murillo, Michael S.
2016-04-08
Ionic transport coefficients for dense plasmas have been numerically computed using an effective Boltzmann approach. Here, we developed a simplified effective potential approach that yields accurate fits for all of the relevant cross sections and collision integrals. These results have been validated with molecular-dynamics simulations for self-diffusion, interdiffusion, viscosity, and thermal conductivity. Molecular dynamics has also been used to examine the underlying assumptions of the Boltzmann approach through a categorization of behaviors of the velocity autocorrelation function in the Yukawa phase diagram. By using a velocity-dependent screening model, we examine the role of dynamical screening in transport. Implications of these results for Coulomb logarithm approaches are discussed.
Assessment of historical masonry pillars reinforced by CFRP strips
NASA Astrophysics Data System (ADS)
Fedele, Roberto; Rosati, Giampaolo; Biolzi, Luigi; Cattaneo, Sara
2014-10-01
In this methodological study, the ultimate response of masonry pillars strengthened by externally bonded Carbon Fiber Reinforced Polymer (CFRP) was investigated. Historical bricks were derived from a XVII century rural building, whilst a high strength mortar was utilized for the joints. The conventional experimental information, concerning the overall reaction force and the relative displacements provided by "point" sensors (LVDTs and clip gauge), was herein enriched with no-contact, full-field kinematic measurements provided by 2D Digital Image Correlation (2D DIC). The experimental information was critically compared with predictions provided by an advanced three-dimensional model, based on nonlinear finite elements under the simplifying assumption of perfect adhesion between the reinforcement and the support.
The time-dependent response of 3- and 5-layer sandwich beams
NASA Technical Reports Server (NTRS)
Hyer, M. W.; Oleksuk, L. S. S.; Bowles, D. E.
1992-01-01
Simple sandwich beam models have been developed to study the effect of the time-dependent constitutive properties of fiber-reinforced polymer matrix composites, considered for use in orbiting precision segmented reflectors, on the overall deformations. The 3- and 5-layer beam models include layers representing the face sheets, the core, and the adhesive. The static elastic deformation response of the sandwich beam models to a midspan point load is studied using the principle of stationary potential energy. In addition to quantitative conclusions, several assumptions are discussed which simplify the analysis for the case of more complicated material models. It is shown that the simple three-layer model is sufficient in many situations.
NASA Astrophysics Data System (ADS)
Scharenborg, Odette; ten Bosch, Louis; Boves, Lou; Norris, Dennis
2003-12-01
This letter evaluates potential benefits of combining human speech recognition (HSR) and automatic speech recognition by building a joint model of an automatic phone recognizer (APR) and a computational model of HSR, viz., Shortlist [Norris, Cognition 52, 189-234 (1994)]. Experiments based on "real-life" speech highlight critical limitations posed by some of the simplifying assumptions made in models of human speech recognition. These limitations could be overcome by avoiding hard phone decisions at the output side of the APR, and by using a match between the input and the internal lexicon that flexibly copes with deviations from canonical phonemic representations.
Jayanti, R K
2001-01-01
Consumer information-processing theory provides a useful framework for policy makers concerned with regulating information provided by managed care organizations. The assumption that consumers are rational information processors and providing more information is better is questioned in this paper. Consumer research demonstrates that when faced with an uncertain decision, consumers adopt simplifying strategies leading to sub-optimal choices. A discussion on how consumers process risk information and the effects of various informational formats on decision outcomes is provided. Categorization theory is used to propose guidelines with regard to providing effective information to consumers choosing among competing managed care plans. Public policy implications borne out of consumer information-processing theory conclude the article.
NASA Astrophysics Data System (ADS)
Ambjørn, J.; Watabiki, Y.
2017-12-01
We recently formulated a model of the universe based on an underlying W3-symmetry. It allows the creation of the universe from nothing and the creation of baby universes and wormholes for spacetimes of dimension 2, 3, 4, 6 and 10. Here we show that the classical large time and large space limit of these universes is one of exponentially fast expansion without the need of a cosmological constant. Under a number of simplifying assumptions, our model predicts that w = ‑1.2 in the case of four-dimensional spacetime. The possibility of obtaining a w-value less than ‑1 is linked to the ability of our model to create baby universes and wormholes.
Towards realistic modelling of spectral line formation - lessons learnt from red giants
NASA Astrophysics Data System (ADS)
Lind, Karin
2015-08-01
Many decades of quantitative spectroscopic studies of red giants have revealed much about the formation histories and interlinks between the main components of the Galaxy and its satellites. Telescopes and instrumentation are now able to deliver high-resolution data of superb quality for large stellar samples and Galactic archaeology has entered a new era. At the same time, we have learnt how simplifying physical assumptions in the modelling of spectroscopic data can bias the interpretations, in particular one-dimensional homogeneity and local thermodynamic equilibrium (LTE). I will present lessons learnt so far from non-LTE spectral line formation in 3D radiation-hydrodynamic atmospheres of red giants, the smaller siblings of red supergiants.
Droplets size evolution of dispersion in a stirred tank
NASA Astrophysics Data System (ADS)
Kysela, Bohus; Konfrst, Jiri; Chara, Zdenek; Sulc, Radek; Jasikova, Darina
2018-06-01
Dispersion of two immiscible liquids is commonly used in the chemical industry as well as in the metallurgical industry, e.g., in extraction processes. The governing property is the droplet size distribution. The droplet sizes are given by the physical properties of both liquids and the flow properties inside a stirred tank. The first investigation stage is focused on in-situ droplet size measurement using image analysis and on optimizing the evaluation method to achieve maximal reproducibility of the results. The obtained experimental results are compared with a multiphase flow simulation based on the Euler-Euler approach combined with PBM (Population Balance Modelling). The population balance model was, in this specific case, simplified with the assumption of pure breakage of droplets.
Model-based estimation for dynamic cardiac studies using ECT.
Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O
1994-01-01
The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.
On firework blasts and qualitative parameter dependency.
Zohdi, T I
2016-01-01
In this paper, a mathematical model is developed to qualitatively simulate the progressive time-evolution of a blast from a simple firework. Estimates are made for the blast radius that one can expect for a given amount of detonation energy and pyrotechnic display material. The model balances the released energy from the initial blast pulse with the subsequent kinetic energy and then computes the trajectory of the material under the influence of the drag from the surrounding air, gravity and possible buoyancy. Under certain simplifying assumptions, the model can be solved for analytically. The solution serves as a guide to identifying key parameters that control the evolving blast envelope. Three-dimensional examples are given.
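A minimal sketch of the trajectory piece of such a model (invented fragment parameters; the paper's full model also tracks the energy balance of the blast pulse and buoyancy): integrate Newton's law for a display fragment under gravity and quadratic air drag and record where it lands:

    import numpy as np

    # Hypothetical fragment: initial speed from the blast pulse, launched at 45 deg.
    m, R = 0.01, 0.01                 # mass (kg) and radius (m) of a display particle
    rho_air, Cd, g = 1.2, 0.47, 9.81  # air density, sphere drag coefficient, gravity
    A = np.pi * R**2

    v = np.array([30.0, 30.0])        # initial velocity (m/s)
    x = np.array([0.0, 0.0])          # position (m)
    dt = 1e-3
    while x[1] >= 0.0:                # integrate until the fragment returns to ground
        drag = -0.5 * rho_air * Cd * A * np.linalg.norm(v) * v
        a = drag / m + np.array([0.0, -g])
        v = v + dt * a                # forward-Euler step
        x = x + dt * v
    print(f"range with drag: {x[0]:.1f} m")  # versus roughly 183 m in vacuum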
NASA Technical Reports Server (NTRS)
Sakurai, Takashi; Goossens, Marcel; Hollweg, Joseph V.
1991-01-01
The present method of addressing the resonance problems that emerge in such MHD phenomena as the resonant absorption of waves at the Alfven resonance point avoids solving the fourth-order differential equation of dissipative MHD by recourse to connection formulae across the dissipation layer. In the second part of this investigation, the absorption of solar 5-min oscillations by sunspots is interpreted as the resonant absorption of sound waves by a magnetic cylinder. The absorption coefficient is evaluated (1) analytically, under certain simplifying assumptions, and (2) numerically, under more general conditions. The observed absorption coefficient magnitude is explained over suitable parameter ranges.
Temperature Histories in Ceramic-Insulated Heat-Sink Nozzle
NASA Technical Reports Server (NTRS)
Ciepluch, Carl C.
1960-01-01
Temperature histories were calculated for a composite nozzle wall by a simplified numerical integration calculation procedure. These calculations indicated that there is a unique ratio of insulation and metal heat-sink thickness that will minimize total wall thickness for a given operating condition and required running time. The optimum insulation and metal thickness will vary throughout the nozzle as a result of the variation in heat-transfer rate. The use of low chamber pressure results in a significant increase in the maximum running time of a given weight nozzle. Experimentally measured wall temperatures were lower than those calculated. This was due in part to the assumption of one-dimensional or slab heat flow in the calculation procedure.
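The simplified procedure can be illustrated with an explicit one-dimensional (slab) finite-difference march through an insulation layer backed by a metal heat sink. All material properties and the hot-gas boundary condition below are invented placeholders, not the report's values:

    import numpy as np

    # Layer 1: ceramic insulation; layer 2: metal heat sink (placeholder properties).
    n1, n2, dx = 10, 20, 1e-3                             # nodes and grid spacing (m)
    k     = np.r_[np.full(n1, 1.5),   np.full(n2, 40.0)]  # conductivity (W/m-K)
    rho_c = np.r_[np.full(n1, 2.0e6), np.full(n2, 3.5e6)] # rho*cp (J/m^3-K)
    T = np.full(n1 + n2, 300.0)                           # initial wall temperature (K)
    T_gas, h = 2500.0, 2000.0         # hot-gas temperature (K) and film coefficient

    dt = 0.2 * dx**2 * rho_c.min() / k.max()   # conservative explicit-stability step
    for _ in range(int(5.0 / dt)):             # march 5 s of running time
        q = np.empty(T.size + 1)               # heat flux at every node face (W/m^2)
        q[0] = h * (T_gas - T[0])              # convective heating at the gas side
        k_face = 2 * k[:-1] * k[1:] / (k[:-1] + k[1:])   # harmonic-mean conductivity
        q[1:-1] = k_face * (T[:-1] - T[1:]) / dx
        q[-1] = 0.0                            # adiabatic outer (back) face
        T += dt * (q[:-1] - q[1:]) / (rho_c * dx)
    print(f"hot face {T[0]:.0f} K, metal back face {T[-1]:.0f} K after 5 s")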
Parachute dynamics and stability analysis. [using nonlinear differential equations of motion
NASA Technical Reports Server (NTRS)
Ibrahim, S. K.; Engdahl, R. A.
1974-01-01
The nonlinear differential equations of motion for a general parachute-riser-payload system are developed. The resulting math model is then applied for analyzing the descent dynamics and stability characteristics of both the drogue stabilization phase and the main descent phase of the space shuttle solid rocket booster (SRB) recovery system. The formulation of the problem is characterized by a minimum number of simplifying assumptions and full application of state-of-the-art parachute technology. The parachute suspension lines and the parachute risers can be modeled as elastic elements, and the whole system may be subjected to specified wind and gust profiles in order to assess their effects on the stability of the recovery system.
CMG-Augmented Control of a Hovering VTOL Platform
NASA Technical Reports Server (NTRS)
Lim, K. B.; Moerder, D. D.
2007-01-01
This paper describes how Control Moment Gyroscopes (CMGs) can be used for stability augmentation of a thrust vectoring system for a generic Vertical Take-Off and Landing platform. The response characteristics of the platform which uses only thrust vectoring, and of a second configuration which includes a single-gimbal CMG array, are simulated and compared for hovering flight while subject to severe air turbulence. Simulation results demonstrate the effectiveness of a CMG array in its ability to significantly reduce the agility requirement on the thrust vectoring system. Although simplifying physical assumptions were made for the generic CMG configuration, the numerical results also suggest that reasonably sized CMGs will likely be sufficient for a small hovering vehicle.
A practical method of predicting the loudness of complex electrical stimuli
NASA Astrophysics Data System (ADS)
McKay, Colette M.; Henshall, Katherine R.; Farrell, Rebecca J.; McDermott, Hugh J.
2003-04-01
The output of speech processors for multiple-electrode cochlear implants consists of current waveforms with complex temporal and spatial patterns. The majority of existing processors output sequential biphasic current pulses. This paper describes a practical method of calculating loudness estimates for such stimuli, in addition to the relative loudness contributions from different cochlear regions. The method can be used either to manipulate the loudness or levels in existing processing strategies, or to control intensity cues in novel sound processing strategies. The method is based on a loudness model described by McKay et al. [J. Acoust. Soc. Am. 110, 1514-1524 (2001)] with the addition of the simplifying approximation that current pulses falling within a temporal integration window of several milliseconds' duration contribute independently to the overall loudness of the stimulus. Three experiments were carried out with six implantees who use the CI24M device manufactured by Cochlear Ltd. The first experiment validated the simplifying assumption, and allowed loudness growth functions to be calculated for use in the loudness prediction method. The following experiments confirmed the accuracy of the method using multiple-electrode stimuli with various patterns of electrode locations and current levels.
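The simplifying approximation is easy to sketch (our illustration with an invented loudness growth function, not the model's fitted parameters): loudness contributions are computed per pulse and simply summed over all pulses falling inside the temporal integration window:

    import math

    def pulse_loudness(current_uA, alpha=2.5e-3):
        """Hypothetical loudness growth function for a single biphasic pulse."""
        return math.expm1(alpha * current_uA)   # expansive growth with current

    # A sequence of (time in ms, current in uA) pulses across several electrodes.
    pulses = [(0.0, 800), (0.5, 650), (1.0, 900), (4.0, 700), (9.0, 850)]
    window_ms = 4.0                             # temporal integration window

    def overall_loudness(at_ms):
        inside = [c for t, c in pulses if at_ms - window_ms < t <= at_ms]
        # Key assumption: pulses inside the window contribute independently,
        # so their loudness contributions simply add.
        return sum(pulse_loudness(c) for c in inside)

    print(overall_loudness(1.0), overall_loudness(9.0))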
Learning to Predict Combinatorial Structures
NASA Astrophysics Data System (ADS)
Vembu, Shankar
2009-12-01
The major challenge in designing a discriminative learning algorithm for predicting structured data is to address the computational issues arising from the exponential size of the output space. Existing algorithms make different assumptions to ensure efficient, polynomial time estimation of model parameters. For several combinatorial structures, including cycles, partially ordered sets, permutations and other graph classes, these assumptions do not hold. In this thesis, we address the problem of designing learning algorithms for predicting combinatorial structures by introducing two new assumptions: (i) The first assumption is that a particular counting problem can be solved efficiently. The consequence is a generalisation of the classical ridge regression for structured prediction. (ii) The second assumption is that a particular sampling problem can be solved efficiently. The consequence is a new technique for designing and analysing probabilistic structured prediction models. These results can be applied to solve several complex learning problems including but not limited to multi-label classification, multi-category hierarchical classification, and label ranking.
Determining Planck's Constant Using a Light-emitting Diode.
ERIC Educational Resources Information Center
Sievers, Dennis; Wilson, Alan
1989-01-01
Describes a method for making a simple, inexpensive apparatus which can be used to determine Planck's constant. Provides illustrations of a circuit diagram using one or more light-emitting diodes and a BASIC computer program for simplifying calculations. (RT)
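The underlying arithmetic is a one-liner: at the LED's turn-on voltage V the electrical energy eV matches the photon energy hc/lambda, so h = e*V*lambda/c. The article's program was written in BASIC; the sketch below is a modern equivalent with made-up measurements, not the original:

    E_CHARGE = 1.602e-19   # elementary charge (C)
    C_LIGHT  = 2.998e8     # speed of light (m/s)

    # Hypothetical measured turn-on voltages for LEDs of known peak wavelength.
    leds = [(625e-9, 1.93), (590e-9, 2.07), (470e-9, 2.64)]  # (lambda in m, V_on in V)
    for lam, v_on in leds:
        h = E_CHARGE * v_on * lam / C_LIGHT
        print(f"lambda = {lam * 1e9:.0f} nm -> h ~ {h:.2e} J s")
    # Values should scatter around the accepted 6.63e-34 J s.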
End-of-life decision making is more than rational.
Eliott, Jaklin A; Olver, Ian N
2005-01-01
Most medical models of end-of-life decision making by patients assume a rational autonomous adult obtaining and deliberating over information to arrive at some conclusion. If the patient is deemed incapable of this, family members are often nominated as substitutes, with assumptions that the family are united and rational. These are problematic assumptions. We interviewed 23 outpatients with cancer about the decision not to resuscitate a patient following cardiopulmonary arrest and examined their accounts of decision making using discourse analytical techniques. Our analysis suggests that participants access two different interpretative repertoires regarding the construct of persons, invoking a 'modernist' repertoire to assert the appropriateness of someone, a patient or family, making a decision, and a 'romanticist' repertoire when identifying either a patient or family as ineligible to make the decision. In determining the appropriateness of an individual to make decisions, participants informally apply 'Sanity' and 'Stability' tests, assessing both an inherent ability to reason (modernist repertoire) and the presence of emotion (romanticist repertoire) which might impact on the decision making process. Failure to pass the tests respectively excludes or excuses individuals from decision making. The absence of the romanticist repertoire in dominant models of patient decision making has ethical implications for policy makers and medical practitioners dealing with dying patients and their families.
How to make a particular case for person-centred patient care: A commentary on Alexandra Parvan.
Graham, George
2018-06-14
In recent years, a person-centred approach to patient care in cases of mental illness has been promoted as an alternative to a disease orientated approach. Alexandra Parvan's contribution to the person-centred approach serves to motivate an exploration of the approach's most apt metaphysical assumptions. I argue that a metaphysical thesis or assumption about both persons and their uniqueness is an essential element of being person-centred. I apply the assumption to issues such as the disorder/disease distinction and to the continuity of mental health and illness. © 2018 John Wiley & Sons, Ltd.
A simplified genetic design for mammalian enamel
Snead, ML; Zhu, D; Lei, YP; Luo, W; Bringas, P.; Sucov, H.; Rauth, RJ; Paine, ML; White, SN
2011-01-01
A biomimetic replacement for tooth enamel is urgently needed because dental caries is the most prevalent infectious disease to affect man. Here, design specifications for an enamel replacement material inspired by Nature are deployed for testing in an animal model. Using genetic engineering we created a simplified enamel protein matrix precursor where only one, rather than dozens of amelogenin isoforms, contributed to enamel formation. Enamel function and architecture were unaltered, but the balance between the competing materials properties of hardness and toughness was modulated. While the other amelogenin isoforms make a modest contribution to optimal biomechanical design, the enamel made with only one amelogenin isoform served as a functional substitute. Where enamel has been lost to caries or trauma a suitable biomimetic replacement material could be fabricated using only one amelogenin isoform, thereby simplifying the protein matrix parameters by one order of magnitude. PMID:21295848
Simplified hydraulic model of French vertical-flow constructed wetlands.
Arias, Luis; Bertrand-Krajewski, Jean-Luc; Molle, Pascal
2014-01-01
Designing vertical-flow constructed wetlands (VFCWs) to treat both rain events and dry weather flow is a complex task due to the stochastic nature of rain events. Dynamic models can help to improve design, but they usually prove difficult to handle for designers. This study focuses on the development of a simplified hydraulic model of French VFCWs using an empirical infiltration coefficient--infiltration capacity parameter (ICP). The model was fitted using 60-second-step data collected on two experimental French VFCW systems and compared with Hydrus 1D software. The model revealed a season-by-season evolution of the ICP that could be explained by the mechanical role of reeds. This simplified model makes it possible to define time-course shifts in ponding time and outlet flows. As ponding time hinders oxygen renewal, thus impacting nitrification and organic matter degradation, ponding time limits can be used to fix a reliable design when treating both dry and rain events.
ERIC Educational Resources Information Center
Weinrich, M. L.; Talanquer, V.
2015-01-01
The central goal of this qualitative research study was to uncover major implicit assumptions that students with different levels of training in the discipline apply when thinking and making decisions about chemical reactions used to make a desired product. In particular, we elicited different ways of conceptualizing why chemical reactions happen…
Drichoutis, Andreas C.; Lusk, Jayson L.
2014-01-01
Despite the fact that conceptual models of individual decision making under risk are deterministic, attempts to econometrically estimate risk preferences require some assumption about the stochastic nature of choice. Unfortunately, the consequences of making different assumptions are, at present, unclear. In this paper, we compare three popular error specifications (Fechner, contextual utility, and Luce error) for three different preference functionals (expected utility, rank-dependent utility, and a mixture of those two) using in- and out-of-sample selection criteria. We find drastically different inferences about structural risk preferences across the competing functionals and error specifications. Expected utility theory is least affected by the selection of the error specification. A mixture model combining the two conceptual models assuming contextual utility provides the best fit of the data both in- and out-of-sample. PMID:25029467
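The role of the error specification is easy to see in a minimal sketch (invented lotteries and parameters, not the paper's data): under a Fechner specification, the probability of choosing lottery A is a normal CDF applied to the expected-utility difference divided by a noise parameter:

    import math

    def crra(x, r):
        """CRRA utility, a workhorse functional in this literature."""
        return math.log(x) if abs(r - 1) < 1e-9 else x ** (1 - r) / (1 - r)

    def eu(lottery, r):
        return sum(p * crra(x, r) for p, x in lottery)

    def p_choose_A(lottery_A, lottery_B, r, sigma):
        # Fechner error: latent index = EU difference plus N(0, sigma^2) noise.
        z = (eu(lottery_A, r) - eu(lottery_B, r)) / sigma
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))

    A = [(0.5, 10.0), (0.5, 2.0)]   # risky lottery
    B = [(1.0, 5.5)]                # safe lottery
    for r in (0.2, 0.5, 0.8):
        print(r, round(p_choose_A(A, B, r, sigma=0.1), 3))
    # A contextual-utility specification would instead divide the EU difference
    # by the utility range of the outcomes on offer before adding the noise.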
NASA Astrophysics Data System (ADS)
Wallace, Maria F. G.
2018-03-01
Over the years neoliberal ideology and discourse have become intricately connected to making science people. Science educators work within a complicated paradox where they are obligated to meet neoliberal demands that reinscribe dominant, hegemonic assumptions for producing a scientific workforce. Whether it is the discourse of school science, processes of being a scientist, or definitions of science particular subjects are made intelligible as others are made unintelligible. This paper resides within the messy entanglements of feminist poststructural and new materialist perspectives to provoke spaces where science educators might enact ethicopolitical hesitations. By turning to and living in theory, the un/making of certain kinds of science people reveals material effects and affects. Practicing ethicopolitical hesitations prompt science educators to consider beginning their work from ontological assumptions that begin with abundance rather than lack.
Computational reacting gas dynamics
NASA Technical Reports Server (NTRS)
Lam, S. H.
1993-01-01
In the study of high speed flows at high altitudes, such as those encountered by re-entry spacecraft, the interaction of chemical reactions and other non-equilibrium processes in the flow field with the gas dynamics is crucial. Generally speaking, problems of this level of complexity must resort to numerical methods for solutions, using sophisticated computational fluid dynamics (CFD) codes. The difficulties introduced by reacting gas dynamics can be classified into three distinct headings: (1) the usually inadequate knowledge of the reaction rate coefficients in the non-equilibrium reaction system; (2) the vastly larger number of unknowns involved in the computation and the expected stiffness of the equations; and (3) the interpretation of the detailed reacting CFD numerical results. The research performed accepts the premise that reacting flows of practical interest in the future will in general be too complex or 'intractable' for traditional analytical developments. The power of modern computers must be exploited. However, instead of focusing solely on the construction of numerical solutions of full-model equations, attention is also directed to the 'derivation' of the simplified model from the given full-model. In other words, the present research aims to utilize computations to do tasks which have traditionally been done by skilled theoreticians: to reduce an originally complex full-model system into an approximate but otherwise equivalent simplified model system. The tacit assumption is that once the appropriate simplified model is derived, the interpretation of the detailed reacting CFD numerical results will become much easier. The approach of the research is called computational singular perturbation (CSP).
Bell violation using entangled photons without the fair-sampling assumption.
Giustina, Marissa; Mech, Alexandra; Ramelow, Sven; Wittmann, Bernhard; Kofler, Johannes; Beyer, Jörn; Lita, Adriana; Calkins, Brice; Gerrits, Thomas; Nam, Sae Woo; Ursin, Rupert; Zeilinger, Anton
2013-05-09
The violation of a Bell inequality is an experimental observation that forces the abandonment of a local realistic viewpoint--namely, one in which physical properties are (probabilistically) defined before and independently of measurement, and in which no physical influence can propagate faster than the speed of light. All such experimental violations require additional assumptions depending on their specific construction, making them vulnerable to so-called loopholes. Here we use entangled photons to violate a Bell inequality while closing the fair-sampling loophole, that is, without assuming that the sample of measured photons accurately represents the entire ensemble. To do this, we use the Eberhard form of Bell's inequality, which is not vulnerable to the fair-sampling assumption and which allows a lower collection efficiency than other forms. Technical improvements of the photon source and high-efficiency transition-edge sensors were crucial for achieving a sufficiently high collection efficiency. Our experiment makes the photon the first physical system for which each of the main loopholes has been closed, albeit in different experiments.
An Evolving Worldview: Making Open Source Easy
NASA Technical Reports Server (NTRS)
Rice, Zachary
2017-01-01
NASA Worldview is an interactive interface for browsing full-resolution, global satellite imagery. Worldview supports an open data policy so that academia, private industries and the general public can use NASA's satellite data to address Earth science related issues. Worldview was open sourced in 2014. By shifting to an open source approach, the Worldview application has evolved to better serve end-users. Project developers are able to have discussions with end-users and community developers to understand issues and develop new features. New developers are able to track upcoming features, collaborate on them and make their own contributions. Getting new developers to contribute to the project has been one of the most important and difficult aspects of open sourcing Worldview. We have therefore focused on making the installation of Worldview simple, to reduce the initial learning curve and make contributing code easy. One way we have addressed this is through a simplified setup process. Our setup documentation includes a set of prerequisites and a set of straightforward commands to clone, configure, install and run. This presentation will emphasize our focus on simplifying and standardizing Worldview's open source code so more people are able to contribute. The more people who contribute, the better the application will become over time.
Ex Priori: Exposure-based Prioritization across Chemical Space
EPA's Exposure Prioritization (Ex Priori) is a simplified, quantitative visual dashboard that makes use of data from various inputs to provide a rank-ordered internalized dose metric. This complements other high throughput screening by viewing exposures within all chemical space si...
Photoelectric Effect: Back to Basics.
ERIC Educational Resources Information Center
Powell, R. A.
1978-01-01
Presents a simplified theoretical analysis of the variation of quantum yield with photon energy in the photoelectric experiment. Describes a way to amplify the experiment and make it more instructive to advanced students through the measurement of quantum yield of a photo cell. (GA)
NASA Astrophysics Data System (ADS)
Most, S.; Nowak, W.; Bijeljic, B.
2014-12-01
Transport processes in porous media are frequently simulated as particle movement. This process can be formulated as a stochastic process of particle position increments. At the pore scale, the geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Recent experimental data suggest that we have not yet reached the end of the need to generalize, because particle increments show statistical dependency beyond linear correlation and over many time steps. The goal of this work is to better understand the validity regions of commonly made assumptions. We are investigating after what transport distances we can observe: (i) a statistical dependence between increments that can be modelled as an order-k Markov process boiling down to order 1; this would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks would start; (ii) a bivariate statistical dependence that simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW); (iii) complete absence of statistical dependence (validity of classical PTRW/CTRW). The approach is to derive a statistical model for pore-scale transport from a powerful experimental data set via copula analysis. The model is formulated as a non-Gaussian, mutually dependent Markov process of higher order, which allows us to investigate the validity ranges of simpler models.
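The hierarchy of assumptions above can be made concrete with a toy increment generator (ours, not the study's copula-based model): an AR(1) walk produces order-1 Markov increments whose linear lag-k correlation decays as phi**k, while shuffling the same increments destroys all dependence and recovers the classical independent-increment (PTRW/CTRW) limit:

    import numpy as np

    rng = np.random.default_rng(1)
    n, phi = 100_000, 0.8

    # Order-1 Markov (AR(1)) increments: each increment depends on the previous
    # one only, with linear lag-k correlation phi**k.
    eps = rng.standard_normal(n)
    inc = np.empty(n)
    inc[0] = eps[0]
    for i in range(1, n):
        inc[i] = phi * inc[i - 1] + (1 - phi**2) ** 0.5 * eps[i]

    def lag_corr(x, lag):
        return np.corrcoef(x[:-lag], x[lag:])[0, 1]

    shuffled = rng.permutation(inc)   # independent increments: classical PTRW/CTRW
    for lag in (1, 2, 5):
        print(lag, round(lag_corr(inc, lag), 3), round(lag_corr(shuffled, lag), 3))
    # Expect roughly 0.8, 0.64, 0.33 for the Markov walk and ~0 after shuffling.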
ERIC Educational Resources Information Center
Finley-Brook, Mary; Zanella-Litke, Megan; Ragan, Kyle; Coleman, Breana
2012-01-01
Colleges across the country are hosting on-campus renewable energy projects. The general assumption is that trade schools, community colleges, or technology-oriented universities with large engineering departments make the most appropriate sites for training future leaders in renewable energy innovation. While it makes sense to take advantage of…
Static Analysis Alert Audits: Lexicon and Rules
2016-11-04
Developed with collaborators, the lexicon includes a standard set of well-defined determinations for static analysis alerts, together with a set of auditing rules to help auditors make consistent decisions in commonly encountered situations; different auditors should make the same determination for a given alert. The rules establish assumptions auditors can make and, overall, help make audit determinations more consistent. We developed 12 rules, drawing on our own…
ASP-G: an ASP-based method for finding attractors in genetic regulatory networks
Mushthofa, Mushthofa; Torres, Gustavo; Van de Peer, Yves; Marchal, Kathleen; De Cock, Martine
2014-01-01
Motivation: Boolean network models are suitable for simulating GRNs in the absence of detailed kinetic information. However, reducing the biological reality implies making assumptions about how genes interact (interaction rules) and how their state is updated during the simulation (update scheme). The exact choice of the assumptions largely determines the outcome of the simulations. In most cases, however, the biologically correct assumptions are unknown. An ideal simulation thus implies testing different rules and schemes to determine those that best capture an observed biological phenomenon. This is not trivial because most current methods to simulate Boolean network models of GRNs and to compute their attractors impose specific assumptions that cannot be easily altered, as they are built into the system. Results: To allow for a more flexible simulation framework, we developed ASP-G. We show the correctness of ASP-G in simulating Boolean network models and obtaining attractors under different assumptions by successfully recapitulating the detection of attractors of previously published studies. We also provide an example of how performing simulation of network models under different settings helps determine the assumptions under which a certain conclusion holds. The main added value of ASP-G is in its modularity and declarativity, making it more flexible and less error-prone than traditional approaches. The declarative nature of ASP-G comes at the expense of being slower than the more dedicated systems but still achieves a good efficiency with respect to computational time. Availability and implementation: The source code of ASP-G is available at http://bioinformatics.intec.ugent.be/kmarchal/Supplementary_Information_Musthofa_2014/asp-g.zip. Contact: Kathleen.Marchal@UGent.be or Martine.DeCock@UGent.be Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25028722
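Stripped of ASP-G's declarative machinery, the underlying computation is a search for cycles of the update map. The toy network below (our own illustration, not from the paper) hard-codes a synchronous update scheme, which is exactly the kind of assumption ASP-G is designed to let you swap out:

    from itertools import product

    # Toy 3-gene network; each rule is an assumption about how genes interact.
    rules = {
        "A": lambda s: s["B"] and not s["C"],
        "B": lambda s: s["A"],
        "C": lambda s: not s["A"],
    }

    def step(state):
        """Synchronous update scheme: all genes update simultaneously."""
        return tuple(rules[g]({"A": state[0], "B": state[1], "C": state[2]})
                     for g in ("A", "B", "C"))

    def attractor(state):
        """Iterate until a state repeats; return the cycle that was entered."""
        seen = {}
        while state not in seen:
            seen[state] = len(seen)
            state = step(state)
        cycle_start = seen[state]
        return tuple(s for s, i in sorted(seen.items(), key=lambda kv: kv[1])
                     if i >= cycle_start)

    attractors = {attractor(s) for s in product((False, True), repeat=3)}
    for a in attractors:
        print([tuple(int(b) for b in s) for s in a])
    # An asynchronous scheme (updating one gene per step) can yield a different
    # attractor landscape from the very same interaction rules.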
Fostering deliberations about health innovation: what do we want to know from publics?
Lehoux, Pascale; Daudelin, Genevieve; Demers-Payette, Olivier; Boivin, Antoine
2009-06-01
As more complex and uncertain forms of health innovation keep emerging, scholars are increasingly voicing arguments in favour of public involvement in health innovation policy. The current conceptualization of this involvement is, however, somewhat problematic as it tends to assume that scientific facts form a "hard," indisputable core around which "soft," relative values can be attached. This paper, by giving precedence to epistemological issues, explores what there is to know from public involvement. We argue that knowledge and normative assumptions are co-constitutive of each other and pivotal to the ways in which both experts and non-experts reason about health innovations. Because knowledge and normative assumptions are different but interrelated ways of reasoning, public involvement initiatives need to emphasise deliberative processes that maximise mutual learning within and across various groups of both experts and non-experts (who, we argue, all belong to the "publics"). Hence, we believe that what researchers might wish to know from publics is how their reasoning is anchored in normative assumptions (what makes a given innovation desirable?) and in knowledge about the plausibility of their effects (are they likely to be realised?). Accordingly, one sensible goal of greater public involvement in health innovation policy would be to refine normative assumptions and make their articulation with scientific observations explicit and openly contestable. The paper concludes that we must differentiate between normative assumptions and knowledge, rather than set up a dichotomy between them or confound them.
The capacity of people with a 'mental disability' to make a health care decision.
Wong, J G; Clare, C H; Holland, A J; Watson, P C; Gunn, M
2000-03-01
Based on the developing clinical and legal literature, and using the framework adopted in draft legislation, capacity to make a valid decision about a clinically required blood test was investigated in three groups of people with a 'mental disability' (i.e. mental illness (chronic schizophrenia), 'learning disability' ('mental retardation', or intellectual or developmental disability), or dementia) and a fourth, comparison group. The three 'mental disability' groups (N = 20 in the 'learning disability' group, N = 21 in each of the other two groups) were recruited through the relevant local clinical services, and the 'general population' comparison group (N = 20) through a phlebotomy clinic. The decision-making task was progressively simplified by presenting the relevant information as separate elements and modifying the assessment of capacity so that responding became gradually less dependent on expressive verbal ability. Compared with the 'general population' group, capacity to make the particular decision was significantly more impaired in the 'learning disability' and 'dementia' groups. Importantly, however, it was not more impaired among the 'mental illness' group. All the groups benefited as the decision-making task was simplified, but at different stages. In each of the 'mental disability' groups, one participant benefited only when responding did not require any expressive verbal ability. Consistent with current views, capacity reflected an interaction between the decision-maker and the demands of the decision-making task. The findings have implications for the way in which decisions about health care interventions are sought from people with a 'mental disability'. The methodology may be extended to assess capacity to make other legally-significant decisions.
On the evolution of misunderstandings about evolutionary psychology.
Young, J; Persell, R
2000-04-01
Some of the controversy surrounding evolutionary explanations of human behavior may be due to cognitive information-processing patterns that are themselves the result of evolutionary processes. Two such patterns are (1) the tendency to oversimplify information so as to reduce demand on cognitive resources and (2) our strong desire to generate predictability and stability from perceptions of the external world. For example, research on social stereotyping has found that people tend to focus automatically on simplified social-categorical information, to use such information when deciding how to behave, and to rely on such information even in the face of contradictory evidence. Similarly, an undying debate over nature vs. nurture is shaped by various data-reduction strategies that frequently oversimplify, and thus distort, the intent of the supporting arguments. This debate is also often marked by an assumption that either the nature or the nurture domain may be justifiably excluded at an explanatory level because one domain appears to operate in a sufficiently stable and predictable way for a particular argument. As a result, critiques inveighed against evolutionary explanations of behavior often incorporate simplified--and erroneous--assumptions about either the mechanics of how evolution operates or the inevitable implications of evolution for understanding human behavior. The influences of these tendencies are applied to a discussion of the heritability of behavioral characteristics. It is suggested that the common view that Mendelian genetics can explain the heritability of complex behaviors, with a one-gene-one-trait process, is misguided. Complex behaviors are undoubtedly a product of a more complex interaction between genes and environment, ensuring that both nature and nurture must be accommodated in a yet-to-be-developed post-Mendelian model of genetic influence. As a result, current public perceptions of evolutionary explanations of behavior are handicapped by the lack of clear articulation of the relationship between inherited genes and manifest behavior.
A new paradigm for clinical communication: critical review of literature in cancer care.
Salmon, Peter; Young, Bridget
2017-03-01
To: (i) identify key assumptions of the scientific 'paradigm' that shapes clinical communication research and education in cancer care; (ii) show that, as general rules, these do not match patients' own priorities for communication; and (iii) suggest how the paradigm might change to reflect evidence better and thereby serve patients better. A critical review, focusing on cancer care. We identified assumptions about patients' and clinicians' roles in recent position and policy statements. We examined these in light of research evidence, focusing on inductive research that has not itself been constrained by those assumptions, and considering the institutionalised interests that the assumptions might serve. The current paradigm constructs patients simultaneously as needy (requiring clinicians' explicit emotional support) and robust (seeking information and autonomy in decision making). Evidence indicates, however, that patients generally value clinicians who emphasise expert clinical care rather than counselling, and who lead decision making. In denoting communication as a technical skill, the paradigm constructs clinicians as technicians; however, communication cannot be reduced to technical skills, and teaching clinicians 'communication skills' has not clearly benefited patients. The current paradigm is therefore defined by assumptions that have not arisen from evidence. A paradigm for clinical communication that makes its starting point the roles that mortal illness gives patients and clinicians would emphasise patients' vulnerability and clinicians' goal-directed expertise. Attachment theory provides a knowledge base to inform both research and education. Researchers will need to be alert to political interests that seek to mould patients into 'consumers', and to professional interests that seek to add explicit psychological dimensions to clinicians' roles. New approaches to education will be needed to support clinicians' curiosity and goal-directed judgement in applying this knowledge. The test for the new paradigm will be whether the research and education it promotes benefit patients. © 2016 The Authors. Medical Education published by Association for the Study of Medical Education and John Wiley & Sons Ltd.
You're Doing "What" This Summer? Making the Most of International Professional Development
ERIC Educational Resources Information Center
Patterson, Timothy
2014-01-01
The content of social studies curricula makes studying abroad during the summer months a win-win for social studies teachers. During these experiences, teachers have the opportunity to develop their knowledge of global history and other cultures and to see a bit of the world. That said, the most dangerous assumption one can make is that simply…
2013-05-23
is called worldview. It determines how individuals interpret everything. In his book, Toward a Theory of Cultural Linguistics, Gary Palmer explains that worldview varies from person to person and organization to organization. Although analytical frameworks provide a common starting point, it is at this point, when overwhelmed, that planners reach out to theory and make determinations based on implicit assumptions and unconscious cognitive biases.
Elmoazzen, Heidi Y.; Elliott, Janet A.W.; McGann, Locksley E.
2009-01-01
The fundamental physical mechanisms of water and solute transport across cell membranes have long been studied in the field of cell membrane biophysics. Cryobiology is a discipline that requires an understanding of osmotic transport across cell membranes under nondilute solution conditions, yet many of the currently-used transport formalisms make limiting dilute solution assumptions. While dilute solution assumptions are often appropriate under physiological conditions, they are rarely appropriate in cryobiology. The first objective of this article is to review commonly-used transport equations, and the explicit and implicit assumptions made when using the two-parameter and the Kedem-Katchalsky formalisms. The second objective of this article is to describe a set of transport equations that do not make the previous dilute solution or near-equilibrium assumptions. Specifically, a new nondilute solute transport equation is presented. Such nondilute equations are applicable to many fields including cryobiology where dilute solution conditions are not often met. An illustrative example is provided. Utilizing suitable transport equations that fit for two permeability coefficients, fits were as good as with the previous three-parameter model (which includes the reflection coefficient, σ). There is less unexpected concentration dependence with the nondilute transport equations, suggesting that some of the unexpected concentration dependence of permeability is due to the use of inappropriate transport equations. PMID:19348741
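As a rough sketch of the dilute-solution two-parameter formalism the article reviews (all parameter values below are invented placeholders; the article's own contribution is a nondilute generalization of equations of this kind), water volume follows the transmembrane osmolality difference and the permeating solute follows its concentration difference:

    # Two-parameter formalism (dilute-solution form):
    #   dVw/dt = -Lp * A * R * T * (osm_out - osm_in)   (water flux)
    #   dS/dt  =  Ps * A * (c_out - c_in)               (permeating solute flux)
    R, T = 8.314, 293.0     # gas constant (J/mol/K), temperature (K)
    Lp = 2.0e-14            # hydraulic conductivity (m/(Pa*s)), placeholder
    Ps = 3.0e-8             # solute permeability (m/s), placeholder
    A = 2.5e-9              # cell surface area (m^2)
    Vw = 2.0e-15            # initial cell water volume (m^3)
    m_salt = 300.0 * Vw     # osmoles of non-permeating salt (held fixed)
    S = 0.0                 # moles of permeating cryoprotectant (CPA) inside

    c_out_cpa, c_out_salt = 2000.0, 300.0   # external osmolality (osmol/m^3)

    dt = 1e-3
    for _ in range(int(60 / dt)):           # one minute of CPA exposure
        osm_in = (S + m_salt) / Vw          # intracellular osmolality (dilute assm.)
        dV = -Lp * A * R * T * (c_out_cpa + c_out_salt - osm_in)  # water exits first
        dS = Ps * A * (c_out_cpa - S / Vw)  # CPA enters down its gradient
        Vw += dt * dV
        S += dt * dS
    print(f"cell water volume after 1 min: {Vw:.2e} m^3 (initially 2.0e-15)")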
Pullenayegum, Eleanor M; Lim, Lily Sh
2016-12-01
When data are collected longitudinally, measurement times often vary among patients. This is of particular concern in clinic-based studies, for example retrospective chart reviews. Here, typically no two patients will share the same set of measurement times and moreover, it is likely that the timing of the measurements is associated with disease course; for example, patients may visit more often when unwell. While there are statistical methods that can help overcome the resulting bias, these make assumptions about the nature of the dependence between visit times and outcome processes, and the assumptions differ across methods. The purpose of this paper is to review the methods available with a particular focus on how the assumptions made line up with visit processes encountered in practice. Through this we show that no one method can handle all plausible visit scenarios and suggest that careful analysis of the visit process should inform the choice of analytic method for the outcomes. Moreover, there are some commonly encountered visit scenarios that are not handled well by any method, and we make recommendations with regard to study design that would minimize the chances of these problematic visit scenarios arising. © The Author(s) 2014.
Data-driven and hybrid coastal morphological prediction methods for mesoscale forecasting
NASA Astrophysics Data System (ADS)
Reeve, Dominic E.; Karunarathna, Harshinie; Pan, Shunqi; Horrillo-Caraballo, Jose M.; Różyński, Grzegorz; Ranasinghe, Roshanka
2016-03-01
It is now common for coastal planning to anticipate changes anywhere from 70 to 100 years into the future. The process models developed and used for scheme design or for large-scale oceanography are currently inadequate for this task. This has prompted the development of a plethora of alternative methods. Some, such as reduced-complexity or hybrid models, simplify the governing equations, retaining processes that are considered to govern observed morphological behaviour. The computational cost of these models is low, and they have proven effective in exploring morphodynamic trends and improving our understanding of mesoscale behaviour. One drawback is that there is no generally agreed set of principles on which to make the simplifying assumptions, and predictions can vary considerably between models. An alternative approach is data-driven techniques that are based entirely on analysis and extrapolation of observations. Here, we discuss the application of some of the better known and emerging methods in this category to argue that, with the increasing availability of observations from coastal monitoring programmes and the development of more sophisticated statistical analysis techniques, data-driven models provide a valuable addition to the armoury of methods available for mesoscale prediction. The continuation of established monitoring programmes is paramount, and those that provide contemporaneous records of the driving forces and the shoreline response are the most valuable in this regard. In the second part of the paper we discuss some recent research that combines hybrid techniques with data analysis methods in order to synthesise a more consistent means of predicting mesoscale coastal morphological evolution. While encouraging in certain applications, a universally applicable approach has yet to be found. The route to linking different model types is highlighted as a major challenge and requires further research to establish its viability. We argue that key elements of a successful solution will need to account for dependencies between driving parameters (such as wave height and tide level) and be able to predict step changes in the configuration of coastal systems.
Sit Up Straight! It's Good Physics
ERIC Educational Resources Information Center
Colicchia, Giuseppe
2005-01-01
A simplified model has been developed that shows the forces and torques involved in maintaining static posture in the cervical spine. The model provides a biomechanical basis for estimating loadings on the cervical discs under various postures, and thus offers a biological context for teaching statics.
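The statics involved can be illustrated with a back-of-the-envelope torque balance about the atlanto-occipital pivot; the head mass and lever arms below are assumed, typical-order values, not numbers from the article's model.

```python
# Torque balance for static head posture: extensor muscle moment must
# cancel the moment of head weight about the pivot. Illustrative values.
g = 9.81            # m/s^2
m_head = 5.0        # kg, typical head mass (assumption)
d_muscle = 0.05     # m, extensor muscle lever arm about the pivot (assumption)

for d_weight in (0.02, 0.05, 0.10):   # m, horizontal offset of the head's CoM
    W = m_head * g
    F_muscle = W * d_weight / d_muscle   # static torque balance
    # joint load, treating the muscle pull as vertical for simplicity
    F_joint = W + F_muscle
    print(f"offset {d_weight:.2f} m -> muscle {F_muscle:.0f} N, joint {F_joint:.0f} N")
```

The point the title makes drops out immediately: as the head's centre of mass drifts forward (larger offset), the muscle force, and with it the compressive load on the discs, grows roughly linearly.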
48 CFR 46.202-2 - Government reliance on inspection by contractor.
Code of Federal Regulations, 2014 CFR
2014-10-01
... ACQUISITION REGULATION CONTRACT MANAGEMENT QUALITY ASSURANCE Contract Quality Requirements 46.202-2 Government... acquired at or below the simplified acquisition threshold conform to contract quality requirements before... the contractor's internal work processes. In making the determination, the contracting officer shall...
Causal Learning with Local Computations
ERIC Educational Resources Information Center
Fernbach, Philip M.; Sloman, Steven A.
2009-01-01
The authors proposed and tested a psychological theory of causal structure learning based on local computations. Local computations simplify complex learning problems via cues available on individual trials to update a single causal structure hypothesis. Structural inferences from local computations make minimal demands on memory, require…
Coveney, John; Herbert, Danielle L; Hill, Kathy; Mow, Karen E; Graves, Nicholas; Barnett, Adrian
2017-01-01
In Australia, the peer review process for competitive funding is usually conducted by a peer review group in conjunction with prior assessment from external assessors. This process is quite mysterious to those outside it. The purpose of this research was to throw light on grant review panels (sometimes called the 'black box') through an examination of the impact of panel procedures, panel composition and panel dynamics on the decision-making in the grant review process. A further purpose was to compare experience of a simplified review process with more conventional processes used in assessing grant proposals in Australia. This project was one aspect of a larger study into the costs and benefits of a simplified peer review process. The Queensland University of Technology (QUT)-simplified process was compared with the National Health and Medical Research Council's (NHMRC) more complex process. Grant review panellists involved in both processes were interviewed about their experience of the decision-making process that assesses the excellence of an application. All interviews were recorded and transcribed. Each transcription was de-identified and returned to the respondent for review. Final transcripts were read repeatedly and coded, and similar codes were amalgamated into categories that were used to build themes. Final themes were shared with the research team for feedback. Two major themes arose from the research: (1) assessing grant proposals and (2) factors influencing the fairness, integrity and objectivity of review. Issues discussed included the quality of writing in a grant proposal, comparison of the two review methods, the purpose and use of the rebuttal, assessing the financial value of funded projects, the importance of panel members' experience, the role of track record, and the impact of group dynamics on the review process. The research also examined the influence of research culture on decision-making in grant review panels. One of the aims of this study was to compare a simplified review process with more conventional processes. Generally, participants were supportive of the simplified process. Transparency in the grant review process will result in better appreciation of the outcome. Despite the provision of clear guidelines for peer review, reviewing processes are likely to be subjective to the extent that different reviewers apply different rules. The peer review process will come under more scrutiny as funding for research becomes even more competitive. There is justification for further research on the process, especially of a kind that taps more deeply into the 'black box' of peer review.
Mellers, B A; Schwartz, A; Cooke, A D
1998-01-01
For many decades, research in judgment and decision making has examined behavioral violations of rational choice theory. In that framework, rationality is expressed as a single correct decision shared by experimenters and subjects that satisfies internal coherence within a set of preferences and beliefs. Outside of psychology, social scientists are now debating the need to modify rational choice theory with behavioral assumptions. Within psychology, researchers are debating assumptions about errors for many different definitions of rationality. Alternative frameworks are being proposed that view decisions as more reasonable and adaptive than previously thought. One example is "rule following", which occurs when a rule or norm is applied to a situation; it often minimizes effort and provides satisfying solutions that are "good enough", though not necessarily the best. When rules are ambiguous, people look for reasons to guide their decisions. They may also let their emotions take charge. This chapter presents recent research on judgment and decision making from traditional and alternative frameworks.
Schick, Robert S; Kraus, Scott D; Rolland, Rosalind M; Knowlton, Amy R; Hamilton, Philip K; Pettis, Heather M; Thomas, Len; Harwood, John; Clark, James S
2016-01-01
Right whales are vulnerable to many sources of anthropogenic disturbance, including ship strikes, entanglement with fishing gear, and anthropogenic noise. The effect of these factors on individual health is unclear. A statistical model using photographic evidence of health was recently built to infer the true or hidden health of individual right whales. However, two important prior assumptions, concerning the influence of missing data and of unexplained variance on the estimates, were not previously assessed. Here we tested these factors by varying the prior assumptions and the model formulation. We found sensitivity to each assumption and used the output to develop guidelines for future model formulation.
Mathematical Model for a Simplified Calculation of the Input Momentum Coefficient for AFC Purposes
NASA Astrophysics Data System (ADS)
Hirsch, Damian; Gharib, Morteza
2016-11-01
Active Flow Control (AFC) is an emerging technology which aims at enhancing the aerodynamic performance of flight vehicles (i.e., to save fuel). A viable AFC system must consider the limited resources available on a plane for attaining performance goals. A higher performance goal (i.e., airplane incremental lift) demands a higher input fluidic requirement (i.e., mass flow rate). Therefore, the key requirement for a successful and practical design is to minimize power input while maximizing performance to achieve design targets. One of the most used design parameters is the input momentum coefficient Cμ. The difficulty associated with Cμ lies in obtaining the parameters for its calculation. In the literature, two main approaches can be found, each with its own disadvantages (assumptions, difficult measurements). A new, much simpler calculation approach will be presented that is based on a mathematical model applicable to most jet designs (i.e., steady or sweeping jets). The model-incorporated assumptions will be justified theoretically as well as experimentally. Furthermore, the model's capabilities are exploited to give new insight into the AFC technology and its physical limitations. Supported by Boeing.
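The abstract's simplified model itself is not public here, but the quantity it targets has a standard textbook definition, Cμ = ṁ·U_jet / (q∞·S_ref) with q∞ = ½ρU∞². The sketch below evaluates that definition with invented numbers, only to fix what is being estimated.

```python
# Standard definition of the input momentum coefficient C_mu.
# All values are made up for illustration; none come from the abstract.
rho_inf = 1.225    # kg/m^3, freestream density
U_inf = 50.0       # m/s, freestream velocity (assumption)
S_ref = 0.5        # m^2, reference area (assumption)

m_dot = 0.02       # kg/s, injected mass flow rate (assumption)
U_jet = 150.0      # m/s, jet exit velocity (assumption)

q_inf = 0.5 * rho_inf * U_inf**2          # freestream dynamic pressure
C_mu = m_dot * U_jet / (q_inf * S_ref)    # momentum coefficient
print(f"C_mu = {C_mu:.4f}")
```

The practical difficulty the abstract mentions is visible here: ṁ is easy to meter, but U_jet at the exit of a small or sweeping jet is hard to measure, which is what motivates a model-based estimate.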
Farms, Families, and Markets: New Evidence on Completeness of Markets in Agricultural Settings
LaFave, Daniel; Thomas, Duncan
2016-01-01
The farm household model has played a central role in improving the understanding of small-scale agricultural households and non-farm enterprises. Under the assumptions that all current and future markets exist and that farmers treat all prices as given, the model simplifies households’ simultaneous production and consumption decisions into a recursive form in which production can be treated as independent of preferences of household members. These assumptions, which are the foundation of a large literature in labor and development, have been tested and not rejected in several important studies including Benjamin (1992). Using multiple waves of longitudinal survey data from Central Java, Indonesia, this paper tests a key prediction of the recursive model: demand for farm labor is unrelated to the demographic composition of the farm household. The prediction is unambiguously rejected. The rejection cannot be explained by contamination due to unobserved heterogeneity that is fixed at the farm level, local area shocks or farm-specific shocks that affect changes in household composition and farm labor demand. We conclude that the recursive form of the farm household model is not consistent with the data. Developing empirically tractable models of farm households when markets are incomplete remains an important challenge. PMID:27688430
Böl, Markus; Kruse, Roland; Ehret, Alexander E; Leichsenring, Kay; Siebert, Tobias
2012-10-11
Due to increasing developments in the modelling of biological materials, adequate parameter identification techniques are urgently needed. The majority of recent contributions on passive muscle tissue identify material parameters solely by comparing characteristic compressive stress-stretch curves from experiments and simulation. In doing so, different assumptions concerning e.g. the sample geometry or the degree of friction between the sample and the platens are required. In most cases these assumptions are grossly simplified, leading to incorrect material parameters. In order to overcome such oversimplifications, a more reliable parameter identification technique is presented in this paper: we use the inverse finite element method (iFEM) to identify the optimal parameter set by comparing the compressive stress-stretch response, including the realistic geometries of the samples and the presence of friction at the compressed sample faces. Moreover, we judge the quality of the parameter identification by comparing the simulated and experimental deformed shapes of the samples. Besides this, the study includes a comprehensive set of compressive stress-stretch data on rabbit soleus muscle and the determination of static friction coefficients between muscle and PTFE. Copyright © 2012 Elsevier Ltd. All rights reserved.
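The identification loop itself is a least-squares match of simulated to measured stress-stretch data. In the paper the forward model is a full finite element simulation with realistic geometry and friction; in the runnable sketch below a closed-form one-term Ogden uniaxial response stands in for that FE solve, and the "data" are synthetic, so this only illustrates the loop's structure, not the study's method.

```python
# Parameter identification by least squares. The Ogden response below is a
# stand-in for the paper's inverse-FEM forward model; data are synthetic.
import numpy as np
from scipy.optimize import least_squares

def ogden_uniaxial(params, stretch):
    mu, alpha = params
    # nominal stress for incompressible uniaxial loading, one-term Ogden
    return (2.0 * mu / alpha) * (stretch**(alpha - 1.0)
                                 - stretch**(-alpha / 2.0 - 1.0))

# synthetic "experiment": compression to 70% of initial length, with noise
lam = np.linspace(1.0, 0.7, 25)
true = ogden_uniaxial((2.0e3, -8.0), lam)   # Pa, illustrative ground truth
meas = true + np.random.default_rng(0).normal(0.0, 20.0, lam.size)

def residual(p):
    # in the real iFEM approach, this would run an FE simulation with the
    # measured sample geometry and friction, then extract the reaction stress
    return ogden_uniaxial(p, lam) - meas

fit = least_squares(residual, x0=(1.0e3, -5.0))
print("identified mu, alpha:", fit.x)
```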
Inferences about unobserved causes in human contingency learning.
Hagmayer, York; Waldmann, Michael R
2007-03-01
Estimates of the causal efficacy of an event need to take into account the possible presence and influence of other unobserved causes that might have contributed to the occurrence of the effect. Current theoretical approaches deal differently with this problem. Associative theories assume that at least one unobserved cause is always present. In contrast, causal Bayes net theories (including Power PC theory) hypothesize that unobserved causes may be present or absent. These theories generally assume independence of different causes of the same event, which greatly simplifies modelling learning and inference. In two experiments participants were requested to learn about the causal relation between a single cause and an effect by observing their co-occurrence (Experiment 1) or by actively intervening in the cause (Experiment 2). Participants' assumptions about the presence of an unobserved cause were assessed either after each learning trial or at the end of the learning phase. The results show an interesting dissociation. Whereas there was a tendency to assume interdependence of the causes in the online judgements during learning, the final judgements tended to be more in the direction of an independence assumption. Possible explanations and implications of these findings are discussed.
Rusch, Hannes
2014-01-01
Drawing on an idea proposed by Darwin, it has recently been hypothesized that violent intergroup conflict might have played a substantial role in the evolution of human cooperativeness and altruism. The central notion of this argument, dubbed ‘parochial altruism’, is that the two genetic or cultural traits, aggressiveness against the out-groups and cooperativeness towards the in-group, including self-sacrificial altruistic behaviour, might have coevolved in humans. This review assesses the explanatory power of current theories of ‘parochial altruism’. After a brief synopsis of the existing literature, two pitfalls in the interpretation of the most widely used models are discussed: potential direct benefits and high relatedness between group members implicitly induced by assumptions about conflict structure and frequency. Then, a number of simplifying assumptions made in the construction of these models are pointed out which currently limit their explanatory power. Next, relevant empirical evidence from several disciplines which could guide future theoretical extensions is reviewed. Finally, selected alternative accounts of evolutionary links between intergroup conflict and intragroup cooperation are briefly discussed which could be integrated with parochial altruism in the future. PMID:25253457
Direct numerical simulation of leaky dielectrics with application to electrohydrodynamic atomization
NASA Astrophysics Data System (ADS)
Owkes, Mark; Desjardins, Olivier
2013-11-01
Electrohydrodynamics (EHD) has the potential to greatly enhance liquid break-up, as demonstrated in numerical simulations by Van Poppel et al. (JCP (229) 2010). In liquid-gas EHD flows, the ratio of charge mobility to charge convection timescales can be used to determine whether the charge can be assumed to exist in the bulk of the liquid or at the surface only. However, for EHD-aided fuel injection applications, these timescales are of similar magnitude, and charge mobility within the fluid might need to be accounted for explicitly. In this work, a computational approach for simulating two-phase EHD flows including the charge transport equation is presented. Under certain assumptions compatible with a leaky dielectric model, charge transport simplifies to a scalar transport equation that is defined only in the liquid phase, where electric charges are present. To ensure consistency with interfacial transport, the charge equation is solved using a semi-Lagrangian geometric transport approach, similar to the method proposed by Le Chenadec and Pitsch (JCP (233) 2013). This methodology is then applied to EHD atomization of a liquid kerosene jet and compared to results produced under the assumption of a bulk volumetric charge.
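The semi-Lagrangian idea, trace each point back along the velocity field and interpolate the scalar there, can be shown in one dimension. The cited method is geometric, conservative, and multi-dimensional; the periodic, linearly interpolated 1D version below is only a sketch of the transport step, with an invented velocity and profile.

```python
# Minimal 1D semi-Lagrangian update for a charge-like scalar on a
# periodic grid. Illustrative only; not the cited geometric scheme.
import numpy as np

n, L = 128, 1.0
dx = L / n
x = np.arange(n) * dx
u = 0.5                                  # m/s, uniform velocity (assumption)
dt = 0.01                                # s, time step
q = np.exp(-((x - 0.5) / 0.1) ** 2)      # initial charge density profile

for _ in range(100):
    xd = (x - u * dt) % L                # trace characteristics back one step
    i0 = np.floor(xd / dx).astype(int) % n
    i1 = (i0 + 1) % n
    w = (xd - i0 * dx) / dx              # linear interpolation weight
    q = (1.0 - w) * q[i0] + w * q[i1]    # gather the upstream value

print(q.sum() * dx)   # total charge; plain interpolation is not exactly conservative
```

The non-conservation noted in the last comment is precisely why the paper uses a geometric (volume-tracking) variant consistent with the interface transport.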
Integrating Decision Making and Mental Health Interventions Research: Research Directions
Wills, Celia E.; Holmes-Rovner, Margaret
2006-01-01
The importance of incorporating patient and provider decision-making processes is in the forefront of the National Institute of Mental Health (NIMH) agenda for improving mental health interventions and services. Key concepts in patient decision making are highlighted within a simplified model of patient decision making that links patient-level/“micro” variables to services-level/“macro” variables via the decision-making process that is a target for interventions. The prospective agenda for incorporating decision-making concepts in mental health research includes (a) improved measures for characterizing decision-making processes that are matched to study populations, complexity, and types of decision making; (b) testing decision aids in effectiveness research for diverse populations and clinical settings; and (c) improving the understanding and incorporation of preference concepts in enhanced intervention designs. PMID:16724158
Learning in Equity-Oriented Scale-Making Projects
ERIC Educational Resources Information Center
Jurow, A. Susan; Shea, Molly
2015-01-01
This article examines how new forms of learning and expertise are made to become consequential in changing communities of practice. We build on notions of scale making to understand how particular relations between practices, technologies, and people become meaningful across spatial and temporal trajectories of social action. A key assumption of…
Toward an Understanding of Teachers' Desire for Participation in Decision Making.
ERIC Educational Resources Information Center
Taylor, Dianne L.; Tashakkori, Abbas
1997-01-01
Explores the assumption that teachers want to participate in schoolwide decision making by constructing a typology of teachers. Characterizes four types of teachers: empowered, disenfranchised, involved (those that do not want to participate, but do), and disengaged. Analysis of teachers' differences and similarities on demographic and attitudinal…
Robust Decision Making in a Nonlinear World
ERIC Educational Resources Information Center
Dougherty, Michael R.; Thomas, Rick P.
2012-01-01
The authors propose a general modeling framework called the general monotone model (GeMM), which allows one to model psychological phenomena that manifest as nonlinear relations in behavior data without the need for making (overly) precise assumptions about functional form. Using both simulated and real data, the authors illustrate that GeMM…
Cooking and Staff Development: A Blend of Training and Experience.
ERIC Educational Resources Information Center
Koll, Patricia; Anderson, Jim
1982-01-01
The making of a staff developer combines deliberate, systematic training and an accumulation of knowledge, skills, and assumptions based on experience. Staff developers must understand school practices and adult learning theory, shared decision-making and organization of support, and be flexible, creative, and committed to their work. (PP)
Information Input and Performance in Small Decision Making Groups.
ERIC Educational Resources Information Center
Ryland, Edwin Holman
It was hypothesized that increases in the amount and specificity of information furnished to a discussion group would facilitate group decision making and improve other aspects of group and individual performance. Procedures in testing these assumptions included varying the amounts of statistics, examples, testimony, and augmented information…
Strategies Making Language Features Noticeable in English Language Teaching
ERIC Educational Resources Information Center
Seong, Myeong-Hee
2009-01-01
The purpose of this study is to suggest effective strategies for the development of communicative ability in ELT (English Language Teaching) by investigating learners' perceptions on strategies making language features more noticeable. The assumption in the study is based on the idea of output-oriented focus on form instruction, supporting…
Collective Decision Making in Organizations.
ERIC Educational Resources Information Center
Svenning, Lynne L.
Based on the assumption that educators can adopt new patterns of organization and management to improve the quality of decision and change in education, this paper attempts to make decision theory and small group process theory relevant to practical decision situations confronting educational managers. Included are (1) a discussion of the…
A Unified Framework for Monetary Theory and Policy Analysis.
ERIC Educational Resources Information Center
Lagos, Ricardo; Wright, Randall
2005-01-01
Search-theoretic models of monetary exchange are based on explicit descriptions of the frictions that make money essential. However, tractable versions of these models typically make strong assumptions that render them ill suited for monetary policy analysis. We propose a new framework, based on explicit micro foundations, within which macro…
A simplified rotor system mathematical model for piloted flight dynamics simulation
NASA Technical Reports Server (NTRS)
Chen, R. T. N.
1979-01-01
The model was developed for real-time pilot-in-the-loop investigation of helicopter flying qualities. The mathematical model included the tip-path plane dynamics and several primary rotor design parameters, such as flapping hinge restraint, flapping hinge offset, blade Lock number, and pitch-flap coupling. The model was used in several exploratory studies of the flying qualities of helicopters with a variety of rotor systems. The basic assumptions used and the major steps involved in developing the listed set of equations are described. The equations consist of the tip-path plane dynamic equation, the equations for the main rotor forces and moments, and the equation for the control phasing required to achieve decoupling in pitch and roll due to cyclic inputs.
Research study on high energy radiation effect and environment solar cell degradation methods
NASA Technical Reports Server (NTRS)
Horne, W. E.; Wilkinson, M. C.
1974-01-01
The most detailed and comprehensively verified analytical model was used to evaluate the effects of simplifying assumptions on the accuracy of predictions made by the external damage coefficient method. It was found that the most serious discrepancies were present in heavily damaged cells, particularly proton damaged cells, in which a gradient in damage across the cell existed. In general, it was found that the current damage coefficient method tends to underestimate damage at high fluences. An exception to this rule was thick cover-slipped cells experiencing heavy degradation due to omnidirectional electrons. In such cases, the damage coefficient method overestimates the damage. Comparisons of degradation predictions made by the two methods and measured flight data confirmed the above findings.
A methodology to select a wire insulation for use in habitable spacecraft.
Paulos, T; Apostolakis, G
1998-08-01
This paper investigates electrical overheating events aboard a habitable spacecraft. The wire insulation involved in these failures plays a major role in the entire event scenario, from threat development to detection and damage assessment. Ideally, if models of wire overheating events in microgravity existed, the various wire insulations under consideration could be quantitatively compared. However, these models do not exist. In this paper, a methodology is developed that can be used to select a wire insulation that is best suited for use in a habitable spacecraft. The results of this study show that, based upon the Analytic Hierarchy Process, the simplifying assumptions made, the criteria selected, and the data used in the analysis, Tefzel is better than Teflon for use in a habitable spacecraft.
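The core AHP step is standard: derive criterion weights from a pairwise comparison matrix via its principal eigenvector, then check consistency. The sketch below uses an invented three-criterion matrix, not the paper's actual judgments or criteria.

```python
# Analytic Hierarchy Process: priority weights from a pairwise matrix.
# The comparison values and criteria names are hypothetical.
import numpy as np

# pairwise comparisons for three illustrative criteria,
# e.g. flammability vs. arc tracking vs. off-gassing toxicity
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)            # principal eigenvalue lambda_max
w = np.abs(vecs[:, k].real)
w /= w.sum()                        # normalized priority weights

n = len(A)
CI = (vals.real[k] - n) / (n - 1)   # consistency index of the judgments
print("weights:", np.round(w, 3), " CI:", round(CI, 4))
```

Candidate insulations are then scored against each criterion the same way, and the weighted scores ranked, which is the kind of aggregate comparison behind the Tefzel-versus-Teflon conclusion.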
A stratospheric aerosol model with perturbations induced by the space shuttle particulate effluents
NASA Technical Reports Server (NTRS)
Rosen, J. M.; Hofmann, D. J.
1977-01-01
A one-dimensional steady-state stratospheric aerosol model is developed that considers the perturbations caused by including the expected space shuttle particulate effluents. Two approaches to the basic modeling effort were taken: in one, enough simplifying assumptions were introduced that a more or less exact solution to the descriptive equations could be obtained; in the other, very few simplifications were made and a computer technique was used to solve the equations. The most complex form of the model contains the effects of sedimentation, diffusion, particle growth and coagulation. Results of the perturbation calculations show that there will probably be an immeasurably small increase in the stratospheric aerosol concentration for particles larger than about 0.15 micrometer radius.
Towards a theory of tiered testing.
Hansson, Sven Ove; Rudén, Christina
2007-06-01
Tiered testing is an essential part of any resource-efficient strategy for the toxicity testing of large numbers of chemicals, which is required, for instance, in the risk management of general (industrial) chemicals. In spite of this, no general theory seems to be available for the combination of single tests into efficient tiered testing systems. A first outline of such a theory is developed here. It is argued that chemical, toxicological, and decision-theoretical knowledge should be combined in the construction of such a theory. A decision-theoretical approach for the optimization of test systems is introduced. It is based on expected utility maximization, with simplified assumptions covering factual and value-related information that is usually missing in the development of test systems.
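Expected utility maximization over test schemes can be made concrete with a toy comparison of a one-tier versus a two-tier design, where the second, expensive test is run only on screen-positives. All sensitivities, specificities, costs and utilities below are invented; the paper's actual optimization is not reproduced.

```python
# Toy expected-utility comparison of testing schemes. Illustrative numbers.
def expected_utility(p_toxic, sens, spec, cost,
                     u_tp=0.0, u_fn=-100.0, u_fp=-10.0, u_tn=0.0):
    tp = p_toxic * sens                 # toxic, correctly flagged
    fn = p_toxic * (1 - sens)           # toxic, missed (worst outcome)
    fp = (1 - p_toxic) * (1 - spec)     # safe, wrongly flagged
    tn = (1 - p_toxic) * spec
    return tp*u_tp + fn*u_fn + fp*u_fp + tn*u_tn - cost

p = 0.05                                # assumed prevalence of toxic chemicals

# scheme A: single accurate but expensive test
u_one = expected_utility(p, sens=0.95, spec=0.95, cost=5.0)

# scheme B: cheap screen, expensive test only on screen-positives
sens1, spec1, cost1 = 0.90, 0.70, 0.5
frac_pos = p * sens1 + (1 - p) * (1 - spec1)     # fraction sent to tier 2
sens12 = sens1 * 0.95                            # must flag on both tiers
spec12 = 1 - (1 - spec1) * (1 - 0.95)            # false positive on both tiers
u_two = expected_utility(p, sens12, spec12, cost=cost1 + frac_pos * 5.0)

print(f"one-tier EU: {u_one:.2f}   two-tier EU: {u_two:.2f}")
```

Varying the prevalence and the cost ratio shows the trade-off the theory has to formalize: tiering saves cost but lowers overall sensitivity, so the optimal design depends on the utility assigned to missed toxicants.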
A cross-diffusion system derived from a Fokker-Planck equation with partial averaging
NASA Astrophysics Data System (ADS)
Jüngel, Ansgar; Zamponi, Nicola
2017-02-01
A cross-diffusion system for two components with a Laplacian structure is analyzed on the multi-dimensional torus. This system, which was recently suggested by P.-L. Lions, is formally derived from a Fokker-Planck equation for the probability density associated with a multi-dimensional Itō process, assuming that the diffusion coefficients depend on partial averages of the probability density with exponential weights. A main feature is that the diffusion matrix of the limiting cross-diffusion system is generally neither symmetric nor positive definite, but its structure allows for the use of entropy methods. The global-in-time existence of positive weak solutions is proved and, under a simplifying assumption, the large-time asymptotics is investigated.
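For reference, the standard Fokker-Planck equation for the density of a multi-dimensional Itō process is given below; the specific exponential-weight partial averaging in the diffusion coefficients that yields Lions' cross-diffusion system is in the paper and is not reproduced here.

```latex
% Density \rho(x,t) of the Ito process dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t:
\partial_t \rho
  \;=\; -\,\nabla\cdot\bigl(b\,\rho\bigr)
  \;+\; \tfrac{1}{2}\sum_{i,j}\partial^{2}_{x_i x_j}
        \!\Bigl[\bigl(\sigma\sigma^{\mathsf T}\bigr)_{ij}\,\rho\Bigr]
```

The cross-diffusion structure arises when σ is made to depend on partial averages of ρ itself, which is what destroys the symmetry and definiteness of the limiting diffusion matrix while leaving an entropy structure intact.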
Quantum vacuum interaction between two cosmic strings revisited
NASA Astrophysics Data System (ADS)
Muñoz-Castañeda, J. M.; Bordag, M.
2014-03-01
We reconsider the quantum vacuum interaction energy between two straight parallel cosmic strings. This problem has been discussed several times, both in an approach treating the two strings perturbatively and in one treating only one of them perturbatively. Here we point out that a simplifying assumption made by Bordag [Ann. Phys. (Berlin) 47, 93 (1990)] can be justified, and we show that, despite the global character of the background, the perturbative approach delivers a correct result. We consider the applicability of the scattering methods developed in the past decade for the Casimir effect to the cosmic string and find them not applicable. We calculate the scattering T-operator on one string. Finally, we consider the vacuum interaction of two strings when each carries a two-dimensional delta function potential.
Trends and Techniques for Space Base Electronics
NASA Technical Reports Server (NTRS)
Trotter, J. D.; Wade, T. E.; Gassaway, J. D.
1979-01-01
Simulations of various phosphorus and boron diffusions in SOS were completed, and a sputtering system, furnaces, and photolithography-related equipment were set up. Double-layer metal experiments initially utilized wet chemistry techniques. By incorporating ultrasonic etching of the vias, premetal cleaning with a modified buffered HF, phosphorus-doped vapox, and extended sintering, yields of 98% were obtained using the standard test pattern. A two-dimensional modeling program was written for simulating short-channel MOSFETs with nonuniform substrate doping. A key simplifying assumption used is that the majority carriers can be represented by a sheet charge at the silicon dioxide-silicon interface. Although the program is incomplete, a solution of the two-dimensional Poisson equation for the potential distribution was achieved. The status of other 2-D MOSFET simulation programs is summarized.
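The 2-D Poisson solve at the heart of such device simulators can be sketched with a plain finite-difference relaxation; the grid, boundary conditions and charge term below are placeholders, not the report's MOSFET model with its sheet-charge treatment.

```python
# Illustrative Jacobi relaxation for the 2D Poisson equation
#   -laplacian(phi) = rho   on a unit square, phi = 0 on the boundary.
import numpy as np

n = 64
h = 1.0 / (n - 1)
phi = np.zeros((n, n))              # potential, Dirichlet boundaries held at 0
rho = np.zeros((n, n))
rho[n // 2, n // 2] = 1.0 / h**2    # point source standing in for a doping term

for _ in range(5000):               # Jacobi iteration (vectorized)
    phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1] +
                              phi[1:-1, 2:] + phi[1:-1, :-2] +
                              h**2 * rho[1:-1, 1:-1])

print(phi.max())                    # peak potential at the source
```

In a real short-channel MOSFET solver, the right-hand side would carry the nonuniform substrate doping and the boundary conditions the gate, source and drain potentials; the relaxation idea is the same.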
Model-based estimation for dynamic cardiac studies using ECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.
1994-06-01
In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). The authors construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramér-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed.
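The benchmark used here is standard: for Poisson-distributed projection counts y_i with means ȳ_i(θ), the Fisher information is J = Σ_i (1/ȳ_i)(∂ȳ_i/∂θ)(∂ȳ_i/∂θ)ᵀ and the Cramér-Rao bound is J⁻¹. The sketch below computes this for a made-up two-parameter forward model, not an ECT system model.

```python
# Cramer-Rao lower bound for Poisson data under a toy forward model.
import numpy as np

def ybar(theta, t):
    a, k = theta
    return a * np.exp(-k * t) + 1.0        # mean counts (illustrative model)

def gradient(theta, t):
    a, k = theta
    return np.array([np.exp(-k * t), -a * t * np.exp(-k * t)])

theta = np.array([100.0, 0.5])             # assumed true parameters
t = np.linspace(0.0, 5.0, 50)              # "projection" sample points

J = np.zeros((2, 2))
for ti in t:
    g = gradient(theta, ti)
    J += np.outer(g, g) / ybar(theta, ti)  # Poisson Fisher information

crb = np.linalg.inv(J)                     # covariance lower bound
print(np.sqrt(np.diag(crb)))               # std-dev bounds for (a, k)
```

An efficient estimator's variance approaches these diagonal entries, which is how the paper's joint ML estimator is judged.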
Homogeneous-heterogeneous reactions in curved channel with porous medium
NASA Astrophysics Data System (ADS)
Hayat, T.; Ayub, Sadia; Alsaedi, A.
2018-06-01
The purpose of the present investigation is to examine the peristaltic flow through a porous medium in a curved conduit. The problem is modeled for an incompressible, electrically conducting Ellis fluid. The influence of the porous medium is tackled via a modified Darcy's law. The considered model utilizes homogeneous-heterogeneous reactions with equal diffusivities for the reactant and autocatalyst. Constitutive equations are formulated in the presence of viscous dissipation. The channel walls are compliant in nature. The governing equations are modeled and simplified under the assumptions of small Reynolds number and large wavelength. Graphical results for velocity, temperature, heat transfer coefficient and the homogeneous-heterogeneous reaction parameter are examined for the emerging parameters of the problem. Results reveal an enhancement of both the homogeneous-heterogeneous reaction effect and the heat transfer rate with increasing channel curvature.