ERIC Educational Resources Information Center
Wagner, Richard K.; Herrera, Sarah K.; Spencer, Mercedes; Quinn, Jamie M.
2015-01-01
Recently, Tunmer and Chapman provided an alternative model of how decoding and listening comprehension affect reading comprehension that challenges the simple view of reading. They questioned the simple view's fundamental assumption that oral language comprehension and decoding make independent contributions to reading comprehension by arguing…
SIMPL Systems, or: Can We Design Cryptographic Hardware without Secret Key Information?
NASA Astrophysics Data System (ADS)
Rührmair, Ulrich
This paper discusses a new cryptographic primitive termed SIMPL system. Roughly speaking, a SIMPL system is a special type of Physical Unclonable Function (PUF) which possesses a binary description that allows its (slow) public simulation and prediction. Besides this public-key-like functionality, SIMPL systems have another advantage: No secret information is, or needs to be, contained in SIMPL systems in order to enable cryptographic protocols - neither in the form of a standard binary key, nor as secret information hidden in random, analog features, as is the case for PUFs. The cryptographic security of SIMPLs instead rests on (i) a physical assumption on their unclonability, and (ii) a computational assumption regarding the complexity of simulating their output. This novel property makes SIMPL systems potentially immune to many known hardware and software attacks, including malware, side-channel, invasive, and modeling attacks.
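As a rough illustration of the kind of identification protocol such a primitive enables, here is a minimal Python sketch: the verifier accepts only if the response is correct (checked with the slow public simulation) and arrives faster than any simulation could produce it. The function names, the toy response formula, and the timing bound T_MAX are assumptions for illustration, not constructions from the paper.

```python
import time
import secrets

# Hypothetical stand-ins: a real prover queries the physical SIMPL object (fast),
# while anyone can run a slow public simulation from its published description.
def query_physical_simpl(challenge: int) -> int:
    return (challenge * 0x9E3779B1) & 0xFFFFFFFF      # fast "physical" response (toy formula)

def simulate_simpl_slowly(challenge: int) -> int:
    time.sleep(0.05)                                  # stands in for the slow public simulation
    return (challenge * 0x9E3779B1) & 0xFFFFFFFF

T_MAX = 0.01  # assumed time bound, chosen well below the public simulation time

def verify_prover() -> bool:
    challenge = secrets.randbits(32)
    start = time.perf_counter()
    response = query_physical_simpl(challenge)        # prover answers via the physical object
    elapsed = time.perf_counter() - start
    # Correctness is checked via the public simulation; the speed requirement rules out
    # impostors who only possess the (slow) simulation model.
    return elapsed < T_MAX and response == simulate_simpl_slowly(challenge)

print(verify_prover())
```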
A Signal-Detection Analysis of Fast-and-Frugal Trees
ERIC Educational Resources Information Center
Luan, Shenghua; Schooler, Lael J.; Gigerenzer, Gerd
2011-01-01
Models of decision making are distinguished by those that aim for an optimal solution in a world that is precisely specified by a set of assumptions (a so-called "small world") and those that aim for a simple but satisfactory solution in an uncertain world where the assumptions of optimization models may not be met (a so-called "large world"). Few…
O'Brien, Susan H; Cook, Aonghais S C P; Robinson, Robert A
2017-10-01
Assessing the potential impact of additional mortality from anthropogenic causes on animal populations requires detailed demographic information. However, these data are frequently lacking, making simple algorithms, which require little data, appealing. Because of their simplicity, these algorithms often rely on implicit assumptions, some of which may be quite restrictive. Potential Biological Removal (PBR) is a simple harvest model that estimates the number of additional mortalities that a population can theoretically sustain without causing population extinction. However, PBR relies on a number of implicit assumptions, particularly around density dependence and population trajectory that limit its applicability in many situations. Among several uses, it has been widely employed in Europe in Environmental Impact Assessments (EIA), to examine the acceptability of potential effects of offshore wind farms on marine bird populations. As a case study, we use PBR to estimate the number of additional mortalities that a population with characteristics typical of a seabird population can theoretically sustain. We incorporated this level of additional mortality within Leslie matrix models to test assumptions within the PBR algorithm about density dependence and current population trajectory. Our analyses suggest that the PBR algorithm identifies levels of mortality which cause population declines for most population trajectories and forms of population regulation. Consequently, we recommend that practitioners do not use PBR in an EIA context for offshore wind energy developments. Rather than using simple algorithms that rely on potentially invalid implicit assumptions, we recommend use of Leslie matrix models for assessing the impact of additional mortality on a population, enabling the user to explicitly define assumptions and test their importance. Copyright © 2017 Elsevier Ltd. All rights reserved.
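For orientation, the PBR algorithm referred to here is commonly written as PBR = N_min x (1/2)R_max x F_r; a minimal sketch under that standard form, with illustrative seabird-like values rather than parameters from this study:

```python
def potential_biological_removal(n_min: float, r_max: float, f_r: float) -> float:
    """PBR = N_min * (0.5 * R_max) * F_r, with recovery factor 0 < F_r <= 1."""
    return n_min * 0.5 * r_max * f_r

# Illustrative values only (not from the study): ~10,000 individuals (minimum estimate),
# a maximum annual growth rate of 10%, and a cautious recovery factor of 0.5.
print(potential_biological_removal(n_min=10_000, r_max=0.10, f_r=0.5))  # -> 250.0 per year
```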
Rational Choice and the Framing of Decisions.
1986-05-29
decision under risk is the derivation of the expected utility rule from simple principles of rational choice that make no reference to long-run...corrective power of incentives depends on the nature of the particular error and cannot be taken for granted. The assumption of rationality of decision making...easily eliminated by experience must be demonstrated. Finally, it is sometimes argued that failures of rationality in individual decision making are
Complexity-aware simple modeling.
Gómez-Schiavon, Mariana; El-Samad, Hana
2018-02-26
Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models, as well as their assumption of modularity and insulation, makes them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, which often makes them difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach, complexity-aware simple modeling, that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.
PVWatts Version 1 Technical Reference
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobos, A. P.
2013-10-01
The NREL PVWatts(TM) calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and makes several hidden assumptions about performance parameters. This technical reference details the individual sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimation.
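To make the flavor of such a calculation concrete, here is a heavily simplified, PVWatts-like estimate for a single time step; the derate factor, temperature coefficient, and reference temperature are assumed illustrative defaults, not a reproduction of the documented sub-models.

```python
def pvwatts_like_ac_power(poa_irradiance_w_m2: float,
                          cell_temp_c: float,
                          dc_rating_kw: float = 4.0,
                          derate: float = 0.77,         # assumed overall DC-to-AC derate
                          gamma_per_c: float = -0.005,  # assumed power temperature coefficient (1/C)
                          t_ref_c: float = 25.0) -> float:
    """Very simplified estimate of AC power (kW) for one time step."""
    dc_power = dc_rating_kw * (poa_irradiance_w_m2 / 1000.0)
    dc_power *= 1.0 + gamma_per_c * (cell_temp_c - t_ref_c)
    return dc_power * derate

# Example: 800 W/m^2 plane-of-array irradiance, 45 C cell temperature, 4 kW DC system.
print(round(pvwatts_like_ac_power(800.0, 45.0), 2))  # about 2.22 kW under these assumptions
```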
Shielding of medical imaging X-ray facilities: a simple and practical method.
Bibbo, Giovanni
2017-12-01
The most widely accepted method for the shielding design of X-ray facilities is that contained in the National Council on Radiation Protection and Measurements Report 147, whereby the barrier thickness for primary, secondary and leakage radiation is computed from the distances to the radiation sources, assumptions about the clinical workload, and the usage and occupancy of adjacent areas. The shielding methodology used in this report is complex. With this methodology, shielding designers need to make assumptions regarding the use of the X-ray room and the adjoining areas. Different shielding designers may make different assumptions, resulting in different shielding requirements for a particular X-ray room. A simpler and more practical method is to base the shielding design on the principle used to shield X-ray tube housings to limit the leakage radiation from the X-ray tube. In this case, the shielding requirements of the X-ray room would depend only on the maximum radiation output of the X-ray equipment, regardless of workload, usage or occupancy of the adjacent areas of the room. This shielding methodology, which has been used in South Australia since 1985, has proven to be practical and, to my knowledge, has not led to excess shielding of X-ray installations.
Statistical Issues for Uncontrolled Reentry Hazards
NASA Technical Reports Server (NTRS)
Matney, Mark
2008-01-01
A number of statistical tools have been developed over the years for assessing the risk of reentering objects to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. The statistical tools use this information to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of the analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper looks at a number of these theoretical assumptions, examining the mathematical basis for the hazard calculations, and outlining the conditions under which the simplifying assumptions hold. In addition, this paper outlines some new tools for assessing ground hazard risk in useful ways. Also, this study is able to make use of a database of known uncontrolled reentry locations measured by the United States Department of Defense. By using data from objects that were in orbit more than 30 days before reentry, sufficient time is allowed for the orbital parameters to become randomized in the way the models assume. The predicted ground footprint distributions of these objects are based on the theory that their orbits behave basically like simple Kepler orbits. However, there are a number of factors - including the effects of gravitational harmonics, the effects of the Earth's equatorial bulge on the atmosphere, and the rotation of the Earth and atmosphere - that could cause them to diverge from simple Kepler orbit behavior and change the ground footprints. The measured latitude and longitude distributions of these objects provide data that can be directly compared with the predicted distributions, providing a fundamental empirical test of the model assumptions.
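The statistical step described here is, at its core, a casualty-expectation sum of the form E_c = sum_i rho_pop,i * A_c,i, where the casualty area A_c,i is conventionally taken as (sqrt(A_human) + sqrt(A_debris,i))^2. The sketch below assumes a uniform population density and illustrative fragment areas; none of the numbers come from this paper.

```python
from math import sqrt

A_HUMAN_M2 = 0.36  # commonly assumed projected area of a person (m^2); an assumption here

def casualty_area(debris_area_m2: float) -> float:
    """Conventional casualty area: (sqrt(A_human) + sqrt(A_debris))^2."""
    return (sqrt(A_HUMAN_M2) + sqrt(debris_area_m2)) ** 2

def expected_casualties(debris_areas_m2, pop_density_per_m2: float) -> float:
    """E_c = sum_i rho_pop * A_c,i under a uniform population density (illustrative)."""
    return sum(casualty_area(a) * pop_density_per_m2 for a in debris_areas_m2)

# Illustrative: three surviving fragments falling over a region averaging 30 people per km^2.
print(expected_casualties([0.5, 1.2, 2.0], pop_density_per_m2=30e-6))
```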
Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife
ERIC Educational Resources Information Center
Jennrich, Robert I.
2008-01-01
The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…
A Simple Boyle's Law Experiment.
ERIC Educational Resources Information Center
Lewis, Don L.
1997-01-01
Describes an experiment to demonstrate Boyle's law that provides pressure measurements in a familiar unit (psi) and makes no assumptions concerning atmospheric pressure. Items needed include bathroom scales and a 60-ml syringe, castor oil, disposable 3-ml syringe and needle, modeling clay, pliers, and a wooden block. Commercial devices use a…
Testing hypotheses for differences between linear regression lines
Stanley J. Zarnoch
2009-01-01
Five hypotheses are identified for testing differences between simple linear regression lines. The distinctions between these hypotheses are based on a priori assumptions and illustrated with full and reduced models. The contrast approach is presented as an easy and complete method for testing for overall differences between the regressions and for making pairwise...
Naïve Bayes classification in R.
Zhang, Zhongheng
2016-06-01
Naïve Bayes classification is a simple probabilistic classification method based on Bayes' theorem with the assumption of independence between features. The model is trained on a training dataset and then makes predictions with the predict() function. This article introduces the two functions naiveBayes() and train() for performing Naïve Bayes classification.
Latent mnemonic strengths are latent: a comment on Mickes, Wixted, and Wais (2007).
Rouder, Jeffrey N; Pratte, Michael S; Morey, Richard D
2010-06-01
Mickes, Wixted, and Wais (2007) proposed a simple test of latent strength variability in recognition memory. They asked participants to rate their confidence using either a 20-point or a 99-point strength scale and plotted distributions of the resulting ratings. They found 25% more variability in ratings for studied than for new items, which they interpreted as providing evidence that latent mnemonic strength distributions are 25% more variable for studied than for new items. We show here that this conclusion is critically dependent on assumptions--so much so that these assumptions determine the conclusions. In fact, opposite conclusions, such that study does not affect the variability of latent strength, may be reached by making different but equally plausible assumptions. Because all measurements of mnemonic strength variability are critically dependent on untestable assumptions, all are arbitrary. Hence, there is no principled method for assessing the relative variability of latent mnemonic strength distributions.
A Simple Method for Automated Equilibration Detection in Molecular Simulations.
Chodera, John D
2016-04-12
Molecular simulations intended to compute equilibrium properties are often initiated from configurations that are highly atypical of equilibrium samples, a practice which can generate a distinct initial transient in mechanical observables computed from the simulation trajectory. Traditional practice in simulation data analysis recommends this initial portion be discarded to equilibration, but no simple, general, and automated procedure for this process exists. Here, we suggest a conceptually simple automated procedure that does not make strict assumptions about the distribution of the observable of interest in which the equilibration time is chosen to maximize the number of effectively uncorrelated samples in the production timespan used to compute equilibrium averages. We present a simple Python reference implementation of this procedure and demonstrate its utility on typical molecular simulation data.
A simple method for automated equilibration detection in molecular simulations
Chodera, John D.
2016-01-01
Molecular simulations intended to compute equilibrium properties are often initiated from configurations that are highly atypical of equilibrium samples, a practice which can generate a distinct initial transient in mechanical observables computed from the simulation trajectory. Traditional practice in simulation data analysis recommends this initial portion be discarded to equilibration, but no simple, general, and automated procedure for this process exists. Here, we suggest a conceptually simple automated procedure that does not make strict assumptions about the distribution of the observable of interest, in which the equilibration time is chosen to maximize the number of effectively uncorrelated samples in the production timespan used to compute equilibrium averages. We present a simple Python reference implementation of this procedure, and demonstrate its utility on typical molecular simulation data. PMID:26771390
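The procedure can be paraphrased in a few lines: for each candidate equilibration time t0, estimate the statistical inefficiency g of the remaining data and keep the t0 that maximizes N_eff = (T - t0)/g. The sketch below uses a crude autocorrelation-based estimator of g and synthetic data; it illustrates the idea and is not the authors' Python reference implementation.

```python
import numpy as np

def statistical_inefficiency(x: np.ndarray) -> float:
    """Crude estimate g = 1 + 2 * (sum of positive-lag autocorrelations); a toy estimator."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    var = np.var(x)
    if var == 0.0:
        return 1.0
    g = 1.0
    for lag in range(1, n // 2):
        c = np.dot(x[:-lag], x[lag:]) / ((n - lag) * var)
        if c <= 0.0:              # truncate at the first non-positive autocorrelation
            break
        g += 2.0 * c
    return max(g, 1.0)

def detect_equilibration(timeseries: np.ndarray, n_candidates: int = 50):
    """Return the t0 that maximizes the number of effectively uncorrelated samples."""
    n = len(timeseries)
    best_t0, best_neff = 0, 0.0
    for t0 in np.linspace(0, n - 10, n_candidates, dtype=int):
        g = statistical_inefficiency(timeseries[t0:])
        neff = (n - t0) / g
        if neff > best_neff:
            best_t0, best_neff = int(t0), neff
    return best_t0, best_neff

# Synthetic example: an initial transient decaying into correlated noise.
rng = np.random.default_rng(0)
y = 5.0 * np.exp(-np.arange(2000) / 100.0) + np.convolve(rng.normal(size=2000), np.ones(10) / 10, "same")
print(detect_equilibration(y))
```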
NASA Astrophysics Data System (ADS)
Bakker, Alexander; Louchard, Domitille; Keller, Klaus
2016-04-01
Sea-level rise threatens many coastal areas around the world. The integrated assessment of potential adaptation and mitigation strategies requires a sound understanding of the upper tails and the major drivers of the uncertainties. Global warming causes sea-level to rise, primarily due to thermal expansion of the oceans and mass loss of the major ice sheets, smaller ice caps and glaciers. These components show distinctly different responses to temperature changes with respect to response time, threshold behavior, and local fingerprints. Projections of these different components are deeply uncertain. Projected uncertainty ranges strongly depend on (necessary) pragmatic choices and assumptions; e.g. on the applied climate scenarios, which processes to include and how to parameterize them, and on error structure of the observations. Competing assumptions are very hard to objectively weigh. Hence, uncertainties of sea-level response are hard to grasp in a single distribution function. The deep uncertainty can be better understood by making clear the key assumptions. Here we demonstrate this approach using a relatively simple model framework. We present a mechanistically motivated, but simple model framework that is intended to efficiently explore the deeply uncertain sea-level response to anthropogenic climate change. The model consists of 'building blocks' that represent the major components of sea-level response and its uncertainties, including threshold behavior. The framework's simplicity enables the simulation of large ensembles allowing for an efficient exploration of parameter uncertainty and for the simulation of multiple combined adaptation and mitigation strategies. The model framework can skilfully reproduce earlier major sea level assessments, but due to the modular setup it can also be easily utilized to explore high-end scenarios and the effect of competing assumptions and parameterizations.
Heuristics and Biases in Military Decision Making
2010-10-01
rationality and is based on a linear, step-based model that generates a specific course of action and is useful for the examination of problems that...exhibit stability and are underpinned by assumptions of "technical-rationality." The Army values MDMP as the sanctioned approach for solving...theory), which sought to describe human behavior as a rational maximization of cost-benefit decisions, Kahneman and Tversky provided a simple
A model of interval timing by neural integration.
Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip
2011-06-22
We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
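A stripped-down version of the central mechanism, a noisy firing-rate ramp that rises linearly on average toward a fixed threshold, can be simulated in a few lines; the drift, noise level, and threshold below are arbitrary choices, not fitted parameters from the paper.

```python
import numpy as np

def simulate_timed_responses(n_trials: int = 1000, target_interval: float = 2.0,
                             noise: float = 0.15, dt: float = 0.001, seed: int = 1):
    """Noisy linear ramp to a threshold of 1; the drift is set so the mean crossing
    time matches the target interval (a drastic simplification of the model)."""
    rng = np.random.default_rng(seed)
    drift = 1.0 / target_interval
    times = []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while x < 1.0:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        times.append(t)
    times = np.array(times)
    # Scale invariance shows up as a roughly constant coefficient of variation.
    return times.mean(), times.std() / times.mean()

print(simulate_timed_responses())
```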
Statistical issues in the design and planning of proteomic profiling experiments.
Cairns, David A
2015-01-01
The statistical design of a clinical proteomics experiment is a critical part of a well-conducted investigation. Standard concepts from experimental design such as randomization, replication and blocking should be applied in all experiments, and this is possible when the experimental conditions are well understood by the investigator. The large number of proteins simultaneously considered in proteomic discovery experiments means that determining the number of replicates required for a powerful experiment is more complicated than in simple experiments. However, by using information about the nature of an experiment and making simple assumptions this is achievable for a variety of experiments useful for biomarker discovery and initial validation.
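As a toy version of such a calculation, the sketch below sizes a two-group comparison per protein with a Bonferroni-adjusted significance level; the number of proteins, effect size, and power target are assumptions chosen for illustration, not values from the chapter.

```python
from statsmodels.stats.power import TTestIndPower

# Illustrative planning calculation: with ~1,000 proteins measured, a Bonferroni-corrected
# alpha keeps the family-wise error rate near 5%.
n_proteins = 1000
alpha_per_test = 0.05 / n_proteins
effect_size = 1.0           # assumed standardized difference (Cohen's d) worth detecting

n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=alpha_per_test,
                                          power=0.80,
                                          alternative="two-sided")
print(round(n_per_group, 1))  # replicates needed per group under these assumptions
```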
NASA Technical Reports Server (NTRS)
Matney, Mark
2011-01-01
A number of statistical tools have been developed over the years for assessing the risk of reentering objects to human populations. These tools make use of the characteristics (e.g., mass, material, shape, size) of debris that are predicted by aerothermal models to survive reentry. The statistical tools use this information to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of the analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. Because this information is used in making policy and engineering decisions, it is important that these assumptions be tested using empirical data. This study uses the latest database of known uncontrolled reentry locations measured by the United States Department of Defense. The predicted ground footprint distributions of these objects are based on the theory that their orbits behave basically like simple Kepler orbits. However, there are a number of factors in the final stages of reentry - including the effects of gravitational harmonics, the effects of the Earth's equatorial bulge on the atmosphere, and the rotation of the Earth and atmosphere - that could cause them to diverge from simple Kepler orbit behavior and possibly change the probability of reentering over a given location. In this paper, the measured latitude and longitude distributions of these objects are directly compared with the predicted distributions, providing a fundamental empirical test of the model assumptions.
New paradoxes of risky decision making.
Birnbaum, Michael H
2008-04-01
During the last 25 years, prospect theory and its successor, cumulative prospect theory, replaced expected utility as the dominant descriptive theories of risky decision making. Although these models account for the original Allais paradoxes, 11 new paradoxes show where prospect theories lead to self-contradiction or systematic false predictions. The new findings are consistent with and, in several cases, were predicted in advance by simple "configural weight" models in which probability-consequence branches are weighted by a function that depends on branch probability and ranks of consequences on discrete branches. Although they have some similarities to later models called "rank-dependent utility," configural weight models do not satisfy coalescing, the assumption that branches leading to the same consequence can be combined by adding their probabilities. Nor do they satisfy cancellation, the "independence" assumption that branches common to both alternatives can be removed. The transfer of attention exchange model, with parameters estimated from previous data, correctly predicts results with all 11 new paradoxes. Apparently, people do not frame choices as prospects but, instead, as trees with branches.
NASA Technical Reports Server (NTRS)
Peterson, D.
1979-01-01
Rod-beam theories are founded on hypotheses such as Bernoulli's, which posits flat cross-sections under deformation. These assumptions, which make rod-beam theories possible, also limit the accuracy of their analysis. It is shown that from a certain order upward, terms of geometrically nonlinear deformations contradict the rod-beam hypotheses. Consistent application of differential geometry calculus also reveals differences from existing rod theories of higher order. These differences are explained by simple examples.
A model of interval timing by neural integration
Simen, Patrick; Balci, Fuat; deSouza, Laura; Cohen, Jonathan D.; Holmes, Philip
2011-01-01
We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes; that correlations among them can be largely cancelled by balancing excitation and inhibition; that neural populations can act as integrators; and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule’s predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior. PMID:21697374
Cocirculation of infectious diseases on networks
NASA Astrophysics Data System (ADS)
Miller, Joel C.
2013-06-01
We consider multiple diseases spreading in a static configuration model network. We make standard assumptions that infection transmits from neighbor to neighbor at a disease-specific rate and infected individuals recover at a disease-specific rate. Infection by one disease confers immediate and permanent immunity to infection by any disease. Under these assumptions, we find a simple, low-dimensional ordinary differential equations model which captures the global dynamics of the infection. The dynamics depend strongly on initial conditions. Although we motivate this Rapid Communication with infectious disease, the model may be adapted to the spread of other infectious agents such as competing political beliefs, or adoption of new technologies if these are influenced by contacts. As an example, we demonstrate how to model an infectious disease which can be prevented by a behavior change.
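For intuition, a well-mixed mass-action analogue of two cocirculating diseases with complete cross-immunity can be written as a four-variable ODE system; this sketch is a simplification for illustration and is not the edge-based network model derived in the paper, and the rates and initial conditions are arbitrary.

```python
from scipy.integrate import solve_ivp

BETA1, BETA2, GAMMA1, GAMMA2 = 0.30, 0.25, 0.10, 0.10   # illustrative transmission/recovery rates

def rhs(t, y):
    s, i1, i2, r = y
    new1 = BETA1 * s * i1            # new infections by disease 1
    new2 = BETA2 * s * i2            # new infections by disease 2
    return [-new1 - new2,
            new1 - GAMMA1 * i1,
            new2 - GAMMA2 * i2,
            GAMMA1 * i1 + GAMMA2 * i2]

y0 = [0.998, 0.001, 0.001, 0.0]      # outcomes depend strongly on these initial conditions
sol = solve_ivp(rhs, (0.0, 400.0), y0)
print(f"final susceptible fraction: {sol.y[0, -1]:.3f}")
```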
Fitting and Reconstruction of Thirteen Simple Coronal Mass Ejections
NASA Astrophysics Data System (ADS)
Al-Haddad, Nada; Nieves-Chinchilla, Teresa; Savani, Neel P.; Lugaz, Noé; Roussev, Ilia I.
2018-05-01
Coronal mass ejections (CMEs) are the main drivers of geomagnetic disturbances, but the effects of their interaction with Earth's magnetic field depend on their magnetic configuration and orientation. Fitting and reconstruction techniques have been developed to determine important geometrical and physical CME properties, such as the orientation of the CME axis, the CME size, and its magnetic flux. In many instances, there is disagreement between different methods but also between fitting from in situ measurements and reconstruction based on remote imaging. This could be due to the geometrical or physical assumptions of the models, but also to the fact that the magnetic field inside CMEs is only measured at one point in space as the CME passes over a spacecraft. In this article we compare three methods that are based on different assumptions for measurements by the Wind spacecraft for 13 CMEs from 1997 to 2015. These CMEs are selected from the interplanetary coronal mass ejections catalog on
Linking assumptions in amblyopia
LEVI, DENNIS M.
2017-01-01
Over the last 35 years or so, there has been substantial progress in revealing and characterizing the many interesting and sometimes mysterious sensory abnormalities that accompany amblyopia. A goal of many of the studies has been to try to make the link between the sensory losses and the underlying neural losses, resulting in several hypotheses about the site, nature, and cause of amblyopia. This article reviews some of these hypotheses, and the assumptions that link the sensory losses to specific physiological alterations in the brain. Despite intensive study, it turns out to be quite difficult to make a simple linking hypothesis, at least at the level of single neurons, and the locus of the sensory loss remains elusive. It is now clear that the simplest notion—that reduced contrast sensitivity of neurons in cortical area V1 explains the reduction in contrast sensitivity—is too simplistic. Considerations of noise, noise correlations, pooling, and the weighting of information also play a critically important role in making perceptual decisions, and our current models of amblyopia do not adequately take these into account. Indeed, although the reduction of contrast sensitivity is generally considered to reflect “early” neural changes, it seems plausible that it reflects changes at many stages of visual processing. PMID:23879956
Fun with maths: exploring implications of mathematical models for malaria eradication.
Eckhoff, Philip A; Bever, Caitlin A; Gerardin, Jaline; Wenger, Edward A
2014-12-11
Mathematical analyses and modelling have an important role informing malaria eradication strategies. Simple mathematical approaches can answer many questions, but it is important to investigate their assumptions and to test whether simple assumptions affect the results. In this note, four examples demonstrate both the effects of model structures and assumptions and also the benefits of using a diversity of model approaches. These examples include the time to eradication, the impact of vaccine efficacy and coverage, drug programs and the effects of duration of infections and delays to treatment, and the influence of seasonality and migration coupling on disease fadeout. An excessively simple structure can miss key results, but simple mathematical approaches can still achieve key results for eradication strategy and define areas for investigation by more complex models.
Genetic dissection of the consensus sequence for the class 2 and class 3 flagellar promoters
Wozniak, Christopher E.; Hughes, Kelly T.
2008-01-01
Computational searches for DNA binding sites often utilize consensus sequences. These search models make assumptions that the frequency of a base pair in an alignment relates to the base pair’s importance in binding and presume that base pairs contribute independently to the overall interaction with the DNA binding protein. These two assumptions have generally been found to be accurate for DNA binding sites. However, these assumptions are often not satisfied for promoters, which are involved in additional steps in transcription initiation after RNA polymerase has bound to the DNA. To test these assumptions for the flagellar regulatory hierarchy, class 2 and class 3 flagellar promoters were randomly mutagenized in Salmonella. Important positions were then saturated for mutagenesis and compared to scores calculated from the consensus sequence. Double mutants were constructed to determine how mutations combined for each promoter type. Mutations in the binding site for FlhD4C2, the activator of class 2 promoters, better satisfied the assumptions for the binding model than did mutations in the class 3 promoter, which is recognized by the σ28 transcription factor. These in vivo results indicate that the activator sites within flagellar promoters can be modeled using simple assumptions but that the DNA sequences recognized by the flagellar sigma factor require more complex models. PMID:18486950
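The "independent contribution" assumption tested here corresponds to scoring a candidate site by summing per-position weights, as in a standard position weight matrix; the 4-bp matrix below is entirely made up for illustration and is not the FlhD4C2 or σ28 consensus.

```python
import math

# Hypothetical position-specific base frequencies for a 4-bp site (illustration only).
site_freqs = [
    {"A": 0.70, "C": 0.10, "G": 0.10, "T": 0.10},
    {"A": 0.10, "C": 0.10, "G": 0.10, "T": 0.70},
    {"A": 0.10, "C": 0.70, "G": 0.10, "T": 0.10},
    {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25},
]
BACKGROUND = 0.25  # assumed uniform background base composition

def pwm_score(sequence: str) -> float:
    """Log-odds score under the assumption that each position contributes independently."""
    return sum(math.log2(site_freqs[i][base] / BACKGROUND)
               for i, base in enumerate(sequence.upper()))

print(pwm_score("ATCA"))   # near-consensus site scores high
print(pwm_score("GGGG"))   # poor match scores low
```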
The Growth of the Japanese Economy: Challenges to American National Security
1991-09-01
the flexibility to make decisions and the ability to fend for oneself, which are indispensable parts of any country’s national security---would indeed...Europe, but American industry in Europe. As far as Japan was concerned, Servan-Schreiber admitted that it "...will manage to keep up to the... managing its economic machine in a manner that the United States found more acceptable. It is a simple assumption, yet it has been the cornerstone of the
Atmospheric refraction effects on baseline error in satellite laser ranging systems
NASA Technical Reports Server (NTRS)
Im, K. E.; Gardner, C. S.
1982-01-01
Because of the mathematical complexities involved in exact analyses of baseline errors, it is not easy to isolate atmospheric refraction effects; however, by making certain simplifying assumptions about the ranging system geometry, relatively simple expressions can be derived which relate the baseline errors directly to the refraction errors. The results indicate that even in the absence of other errors, the baseline error for intercontinental baselines can be more than an order of magnitude larger than the refraction error.
A lack of appetite for information and computation. Simple heuristics in food choice.
Schulte-Mecklenbeck, Michael; Sohn, Matthias; de Bellis, Emanuel; Martin, Nathalie; Hertwig, Ralph
2013-12-01
The predominant, but largely untested, assumption in research on food choice is that people obey the classic commandments of rational behavior: they carefully look up every piece of relevant information, weight each piece according to subjective importance, and then combine them into a judgment or choice. In real world situations, however, the available time, motivation, and computational resources may simply not suffice to keep these commandments. Indeed, there is a large body of research suggesting that human choice is often better accommodated by heuristics-simple rules that enable decision making on the basis of a few, but important, pieces of information. We investigated the prevalence of such heuristics in a computerized experiment that engaged participants in a series of choices between two lunch dishes. Employing MouselabWeb, a process-tracing technique, we found that simple heuristics described an overwhelmingly large proportion of choices, whereas strategies traditionally deemed rational were barely apparent in our data. Replicating previous findings, we also observed that visual stimulus segments received a much larger proportion of attention than any nutritional values did. Our results suggest that, consistent with human behavior in other domains, people make their food choices on the basis of simple and informationally frugal heuristics. Copyright © 2013 Elsevier Ltd. All rights reserved.
Disaggregation and Refinement of System Dynamics Models via Agent-based Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nutaro, James J; Ozmen, Ozgur; Schryver, Jack C
System dynamics models are usually used to investigate aggregate level behavior, but these models can be decomposed into agents that have more realistic individual behaviors. Here we develop a simple model of the STEM workforce to illuminate the impacts that arise from the disaggregation and refinement of system dynamics models via agent-based modeling. Particularly, alteration of Poisson assumptions, adding heterogeneity to decision-making processes of agents, and discrete-time formulation are investigated and their impacts are illustrated. The goal is to demonstrate both the promise and danger of agent-based modeling in the context of a relatively simple model and to delineate the importance of modeling decisions that are often overlooked.
American College of Emergency Physicians Ethics Manual.
1991-10-01
Ethical concerns are a major part of the clinical practice of emergency medicine. The emergency physician must make hard choices, not only with regard to the scientific/technical aspects but also with regard to the moral aspects of caring for emergency patients. By the nature of the specialty, emergency physicians face ethical dilemmas often requiring prompt decisions with limited information. This manual identifies important moral principles and values in emergency medicine. The underlying assumption is that a knowledge of moral principles and ethical values helps the emergency physician make responsible moral choices. Neither the scientific nor the moral aspects of clinical decision making can be reduced to simple formulas. Nevertheless, decisions must be made. Emergency physicians should, therefore, be cognizant of the ethical principles that are important for emergency medicine, understand the process of ethical reasoning, and be capable of making rational moral decisions based on a stable framework of values.
Ordering of the nanoscale step morphology as a mechanism for droplet self-propulsion.
Hilner, Emelie; Zakharov, Alexei A; Schulte, Karina; Kratzer, Peter; Andersen, Jesper N; Lundgren, Edvin; Mikkelsen, Anders
2009-07-01
We establish a new mechanism for self-propelled motion of droplets, in which ordering of the nanoscale step morphology by sublimation beneath the droplets themselves acts to drive them perpendicular and up the surface steps. The mechanism is demonstrated and explored for Ga droplets on GaP(111)B, using several experimental techniques allowing studies of the structure and dynamics from micrometers to the atomic scale. We argue that the simple assumptions underlying the propulsion mechanism make it relevant for a wide variety of materials systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobos, A. P.
2014-09-01
The NREL PVWatts calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and includes several built-in parameters that are hidden from the user. This technical reference describes the sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimate. This reference is applicable to the significantly revised version of PVWatts released by NREL in 2014.
Deepaisarn, S; Tar, P D; Thacker, N A; Seepujak, A; McMahon, A W
2018-01-01
Motivation: Matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI) facilitates the analysis of large organic molecules. However, the complexity of biological samples and MALDI data acquisition leads to high levels of variation, making reliable quantification of samples difficult. We present a new analysis approach that we believe is well-suited to the properties of MALDI mass spectra, based upon an Independent Component Analysis derived for Poisson sampled data. Simple analyses have been limited to studying small numbers of mass peaks, via peak ratios, which is known to be inefficient. Conventional PCA and ICA methods have also been applied, which extract correlations between any number of peaks but, we argue, make inappropriate assumptions regarding data noise, i.e. uniform and Gaussian. Results: We provide evidence that the Gaussian assumption is incorrect, motivating the need for our Poisson approach. The method is demonstrated by making proportion measurements from lipid-rich binary mixtures of lamb brain and liver, and also goat and cow milk. These allow our measurements and error predictions to be compared to ground truth. Availability and implementation: Software is available via the open source image analysis system TINA Vision, www.tina-vision.net. Contact: paul.tar@manchester.ac.uk. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29091994
Zipf's word frequency law in natural language: a critical review and future directions.
Piantadosi, Steven T
2014-10-01
The frequency distribution of words has been a key object of study in statistical linguistics for the past 70 years. This distribution approximately follows a simple mathematical form known as Zipf's law. This article first shows that human language has a highly complex, reliable structure in the frequency distribution over and above this classic law, although prior data visualization methods have obscured this fact. A number of empirical phenomena related to word frequencies are then reviewed. These facts are chosen to be informative about the mechanisms giving rise to Zipf's law and are then used to evaluate many of the theoretical explanations of Zipf's law in language. No prior account straightforwardly explains all the basic facts or is supported with independent evaluation of its underlying assumptions. To make progress at understanding why language obeys Zipf's law, studies must seek evidence beyond the law itself, testing assumptions and evaluating novel predictions with new, independent data.
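Checking the rank-frequency relation f(r) proportional to r^(-alpha) that the article discusses takes only a few lines; the tokenizer and the toy text below are stand-ins, and any sizeable corpus would give a more meaningful estimate.

```python
import re
from collections import Counter

import numpy as np

def zipf_exponent(text: str) -> float:
    """Fit log(frequency) ~ -alpha * log(rank); Zipf's law predicts alpha near 1."""
    words = re.findall(r"[a-z']+", text.lower())
    freqs = np.array(sorted(Counter(words).values(), reverse=True), dtype=float)
    ranks = np.arange(1, len(freqs) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return -slope

# Toy example; with so few word types the estimate is very noisy.
print(zipf_exponent("the cat sat on the mat and the dog sat on the log " * 50))
```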
NASA Astrophysics Data System (ADS)
Akashi, Haruaki; Sasaki, K.; Yoshinaga, T.
2011-10-01
Recently, plasma-assisted combustion has been focused on as a way of achieving more efficient combustion of fossil fuels, reducing pollutants, and so on. Shinohara et al. have reported that the flame length of a methane-air premixed burner shortened when irradiated with microwave power, without an increase in gas temperature. This suggests that electrons heated by the microwave electric field assist the combustion. They also measured emission from the 2nd Positive Band System (2nd PBS) of nitrogen during the irradiation. To clarify this mechanism, electron behavior under microwave power should be examined. To obtain electron transport parameters, electron Monte Carlo simulations in a methane and air mixture gas have been performed. A simple model has been developed to simulate the inside of the flame. To keep this model simple, some assumptions are made: the electrons diffuse from the combustion plasma region, and the electrons quickly reach their equilibrium state. It is found that the simulated emission from 2nd PBS agrees with the experimental result. This work was supported by KAKENHI (22340170).
The Excursion Set Theory of Halo Mass Functions, Halo Clustering, and Halo Growth
NASA Astrophysics Data System (ADS)
Zentner, Andrew R.
I review the excursion set theory with particular attention toward applications to cold dark matter halo formation and growth, halo abundance, and halo clustering. After a brief introduction to notation and conventions, I begin by recounting the heuristic argument leading to the mass function of bound objects given by Press and Schechter. I then review the more formal derivation of the Press-Schechter halo mass function that makes use of excursion sets of the density field. The excursion set formalism is powerful and can be applied to numerous other problems. I review the excursion set formalism for describing both halo clustering and bias and the properties of void regions. As one of the most enduring legacies of the excursion set approach and one of its most common applications, I spend considerable time reviewing the excursion set theory of halo growth. This section of the review culminates with the description of two Monte Carlo methods for generating ensembles of halo mass accretion histories. In the last section, I emphasize that the standard excursion set approach is the result of several simplifying assumptions. Dropping these assumptions can lead to more faithful predictions and open excursion set theory to new applications. One such assumption is that the height of the barriers that define collapsed objects is a constant function of scale. I illustrate the implementation of the excursion set approach for barriers of arbitrary shape. One such application is the now well-known improvement of the excursion set mass function derived from the "moving" barrier for ellipsoidal collapse. I also emphasize that the statement that halo accretion histories are independent of halo environment in the excursion set approach is not a general prediction of the theory. It is a simplifying assumption. I review the method for constructing correlated random walks of the density field in the more general case. I construct a simple toy model to illustrate that excursion set theory (with a constant barrier height) makes a simple and general prediction for the relation between halo accretion histories and the large-scale environments of halos: regions of high density preferentially contain late-forming halos and conversely for regions of low density. I conclude with a brief discussion of the importance of this prediction relative to recent numerical studies of the environmental dependence of halo properties.
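The constant-barrier Monte Carlo the review describes can be sketched directly: generate uncorrelated random walks of the smoothed density delta as a function of the variance S, record the first S at which each walk exceeds delta_c of about 1.686, and compare the histogram to the Press-Schechter first-crossing distribution f(S) = delta_c (2 pi S^3)^(-1/2) exp(-delta_c^2 / 2S). Step sizes, ranges, and walk counts below are arbitrary.

```python
import numpy as np

DELTA_C = 1.686   # constant spherical-collapse barrier height

def first_crossing_samples(n_walks: int = 5000, s_max: float = 10.0,
                           n_steps: int = 1000, seed: int = 0) -> np.ndarray:
    """Uncorrelated (sharp k-space filter) random walks of delta versus variance S."""
    rng = np.random.default_rng(seed)
    ds = s_max / n_steps
    walks = np.cumsum(rng.normal(scale=np.sqrt(ds), size=(n_walks, n_steps)), axis=1)
    crossed = walks >= DELTA_C
    first = (np.argmax(crossed, axis=1) + 1) * ds
    first[~crossed.any(axis=1)] = np.nan          # walks that never cross within s_max
    return first

def analytic_first_crossing(s: np.ndarray) -> np.ndarray:
    """Press-Schechter first-crossing distribution for a constant barrier."""
    return DELTA_C / np.sqrt(2.0 * np.pi * s**3) * np.exp(-DELTA_C**2 / (2.0 * s))

samples = first_crossing_samples()
frac_crossed = np.mean(~np.isnan(samples))
hist, edges = np.histogram(samples[~np.isnan(samples)], bins=50, range=(0.1, 10.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.round(hist[:5] * frac_crossed, 3))               # Monte Carlo estimate
print(np.round(analytic_first_crossing(centers[:5]), 3))  # analytic prediction for comparison
```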
Peptides at Membrane Surfaces and their Role in the Origin of Life
NASA Technical Reports Server (NTRS)
Pohorille, Andrew; Wilson, Michael A.; DeVincenzi, D. (Technical Monitor)
2002-01-01
All ancestors of contemporary cells (protocells) had to transport ions and organic matter across membranous walls, capture and utilize energy and transduce environmental signals. In modern organisms, all these functions are performed by membrane proteins. We make the parsimonious assumption that in the protobiological milieu the same functions were carried out by their simple analogs - peptides. This, however, required that simple peptides could self-organize into ordered, functional structures. In a series of detailed, molecular-level computer simulations we demonstrated how this is possible. One example is the peptide (LSLLLSL)3 which forms a trameric bundle capable of transporting protons across membranes. Another example is the transmembrane pore of the influenza M2 protein. This aggregate of four identical alpha-helices, each built of 25 amino acids, forms an efficient and selective voltage-gated proton channel. Our simulations explain the gating mechanism in this channel. The channel can be re-engineered into a simple proton pump.
Petersson, K M; Nichols, T E; Poline, J B; Holmes, A P
1999-01-01
Functional neuroimaging (FNI) provides experimental access to the intact living brain making it possible to study higher cognitive functions in humans. In this review and in a companion paper in this issue, we discuss some common methods used to analyse FNI data. The emphasis in both papers is on assumptions and limitations of the methods reviewed. There are several methods available to analyse FNI data indicating that none is optimal for all purposes. In order to make optimal use of the methods available it is important to know the limits of applicability. For the interpretation of FNI results it is also important to take into account the assumptions, approximations and inherent limitations of the methods used. This paper gives a brief overview over some non-inferential descriptive methods and common statistical models used in FNI. Issues relating to the complex problem of model selection are discussed. In general, proper model selection is a necessary prerequisite for the validity of the subsequent statistical inference. The non-inferential section describes methods that, combined with inspection of parameter estimates and other simple measures, can aid in the process of model selection and verification of assumptions. The section on statistical models covers approaches to global normalization and some aspects of univariate, multivariate, and Bayesian models. Finally, approaches to functional connectivity and effective connectivity are discussed. In the companion paper we review issues related to signal detection and statistical inference. PMID:10466149
Connections between survey calibration estimators and semiparametric models for incomplete data
Lumley, Thomas; Shaw, Pamela A.; Dai, James Y.
2012-01-01
Survey calibration (or generalized raking) estimators are a standard approach to the use of auxiliary information in survey sampling, improving on the simple Horvitz–Thompson estimator. In this paper we relate the survey calibration estimators to the semiparametric incomplete-data estimators of Robins and coworkers, and to adjustment for baseline variables in a randomized trial. The development based on calibration estimators explains the ‘estimated weights’ paradox and provides useful heuristics for constructing practical estimators. We present some examples of using calibration to gain precision without making additional modelling assumptions in a variety of regression models. PMID:23833390
ERIC Educational Resources Information Center
Stapleton, Lee M.; Garrod, Guy D.
2007-01-01
Using a range of statistical criteria rooted in Information Theory we show that there is little justification for relaxing the equal weights assumption underlying the United Nation's Human Development Index (HDI) even if the true HDI diverges significantly from this assumption. Put differently, the additional model complexity that unequal weights…
Mathematical Modeling: Are Prior Experiences Important?
ERIC Educational Resources Information Center
Czocher, Jennifer A.; Moss, Diana L.
2017-01-01
Why are math modeling problems the source of such frustration for students and teachers? The conceptual understanding that students have when engaging with a math modeling problem varies greatly. They need opportunities to make their own assumptions and design the mathematics to fit these assumptions (CCSSI 2010). Making these assumptions is part…
20 CFR 404.1690 - Assumption when we make a finding of substantial failure.
Code of Federal Regulations, 2010 CFR
2010-04-01
20 Employees' Benefits 2 (2010-04-01). Assumption when we make a finding of substantial failure. Section 404.1690, Employees' Benefits, SOCIAL SECURITY ADMINISTRATION FEDERAL OLD... responsibility for performing the disability determination function from the State agency, whether the assumption...
20 CFR 416.1090 - Assumption when we make a finding of substantial failure.
Code of Federal Regulations, 2010 CFR
2010-04-01
20 Employees' Benefits 2 (2010-04-01). Assumption when we make a finding of substantial failure. Section 416.1090, Employees' Benefits, SOCIAL SECURITY ADMINISTRATION SUPPLEMENTAL... responsibility for performing the disability determination function from the State agency, whether the assumption...
Optimal weighting in fNL constraints from large scale structure in an idealised case
NASA Astrophysics Data System (ADS)
Slosar, Anže
2009-03-01
We consider the problem of optimal weighting of tracers of structure for the purpose of constraining the non-Gaussianity parameter fNL. We work within the Fisher matrix formalism expanded around fiducial model with fNL = 0 and make several simplifying assumptions. By slicing a general sample into infinitely many samples with different biases, we derive the analytic expression for the relevant Fisher matrix element. We next consider weighting schemes that construct two effective samples from a single sample of tracers with a continuously varying bias. We show that a particularly simple ansatz for weighting functions can recover all information about fNL in the initial sample that is recoverable using a given bias observable and that simple division into two equal samples is considerably suboptimal when sampling of modes is good, but only marginally suboptimal in the limit where Poisson errors dominate.
Estimation of kinematic parameters in CALIFA galaxies: no-assumption on internal dynamics
NASA Astrophysics Data System (ADS)
García-Lorenzo, B.; Barrera-Ballesteros, J.; CALIFA Team
2016-06-01
We propose a simple approach to homogeneously estimate kinematic parameters for a broad variety of galaxies (ellipticals, spirals, irregulars, or interacting systems). This methodology avoids the use of any kinematic model or any assumption about internal dynamics. This simple but novel approach allows us to determine the frequency of kinematic distortions, the systemic velocity, the kinematic center, and the kinematic position angles, all of which are measured directly from the two-dimensional distributions of radial velocities. We test our analysis tools using the CALIFA Survey
Zipf’s word frequency law in natural language: A critical review and future directions
2014-01-01
The frequency distribution of words has been a key object of study in statistical linguistics for the past 70 years. This distribution approximately follows a simple mathematical form known as Zipf’s law. This article first shows that human language has a highly complex, reliable structure in the frequency distribution over and above this classic law, although prior data visualization methods have obscured this fact. A number of empirical phenomena related to word frequencies are then reviewed. These facts are chosen to be informative about the mechanisms giving rise to Zipf’s law and are then used to evaluate many of the theoretical explanations of Zipf’s law in language. No prior account straightforwardly explains all the basic facts or is supported with independent evaluation of its underlying assumptions. To make progress at understanding why language obeys Zipf’s law, studies must seek evidence beyond the law itself, testing assumptions and evaluating novel predictions with new, independent data. PMID:24664880
Blanton, Hart; Jaccard, James
2006-01-01
Theories that posit multiplicative relationships between variables are common in psychology. A. G. Greenwald et al. recently presented a theory that explicated relationships between group identification, group attitudes, and self-esteem. Their theory posits a multiplicative relationship between concepts when predicting a criterion variable. Greenwald et al. suggested analytic strategies to test their multiplicative model that researchers might assume are appropriate for testing multiplicative models more generally. The theory and analytic strategies of Greenwald et al. are used as a case study to show the strong measurement assumptions that underlie certain tests of multiplicative models. It is shown that the approach used by Greenwald et al. can lead to declarations of theoretical support when the theory is wrong as well as rejection of the theory when the theory is correct. A simple strategy for testing multiplicative models that makes weaker measurement assumptions than the strategy proposed by Greenwald et al. is suggested and discussed.
Simulation-based sensitivity analysis for non-ignorably missing data.
Yin, Peng; Shi, Jian Q
2017-01-01
Sensitivity analysis is popular in dealing with missing data problems, particularly for non-ignorable missingness, where the full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) depend on assumptions or parameters (input) about the missing data, i.e. the missing data mechanism. We call models subject to this kind of uncertainty sensitivity models. To make conventional sensitivity analysis more useful in practice, we need to define some simple and interpretable statistical quantities to assess the sensitivity models and make evidence-based analysis. We propose a novel approach in this paper that attempts to investigate the plausibility of each missing data mechanism model assumption, by comparing the simulated datasets from various MNAR models with the observed data non-parametrically, using the K-nearest-neighbour distances. Some asymptotic theory has also been provided. A key step of this method is to plug in a plausibility evaluation system for each sensitivity parameter, to select plausible values and reject unlikely values, instead of considering all proposed values of sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data, and mean estimation with non-ignorable missing data.
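The comparison step can be sketched as follows: for each candidate sensitivity parameter, simulate data under the corresponding model and score it by the mean nearest-neighbour distance to the observed data, with smaller distances marking more plausible values. The simulator below is a hypothetical stand-in, not one of the MNAR models used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_discrepancy(observed: np.ndarray, simulated: np.ndarray) -> float:
    """Mean distance from each observed point to its nearest simulated neighbour."""
    dists, _ = cKDTree(simulated).query(observed, k=1)
    return float(np.mean(dists))

def simulate_under_mnar(delta: float, n: int, rng) -> np.ndarray:
    """Hypothetical stand-in for an MNAR model: outcomes shifted by the sensitivity
    parameter delta (illustration only)."""
    return rng.normal(loc=delta, scale=1.0, size=(n, 1))

rng = np.random.default_rng(42)
observed = rng.normal(loc=0.5, scale=1.0, size=(300, 1))    # toy 'observed' data
for delta in [0.0, 0.5, 1.0, 2.0]:
    simulated = simulate_under_mnar(delta, 300, rng)
    print(delta, round(knn_discrepancy(observed, simulated), 3))  # delta near 0.5 scores best
```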
NASA Astrophysics Data System (ADS)
Wong, Tony E.; Bakker, Alexander M. R.; Ruckert, Kelsey; Applegate, Patrick; Slangen, Aimée B. A.; Keller, Klaus
2017-07-01
Simple models can play pivotal roles in the quantification and framing of uncertainties surrounding climate change and sea-level rise. They are computationally efficient, transparent, and easy to reproduce. These qualities also make simple models useful for the characterization of risk. Simple model codes are increasingly distributed as open source, as well as actively shared and guided. Alas, computer codes used in the geosciences can often be hard to access, run, modify (e.g., with regards to assumptions and model components), and review. Here, we describe the simple model framework BRICK (Building blocks for Relevant Ice and Climate Knowledge) v0.2 and its underlying design principles. The paper adds detail to an earlier published model setup and discusses the inclusion of a land water storage component. The framework largely builds on existing models and allows for projections of global mean temperature as well as regional sea levels and coastal flood risk. BRICK is written in R and Fortran. BRICK gives special attention to the model values of transparency, accessibility, and flexibility in order to mitigate the above-mentioned issues while maintaining a high degree of computational efficiency. We demonstrate the flexibility of this framework through simple model intercomparison experiments. Furthermore, we demonstrate that BRICK is suitable for risk assessment applications by using a didactic example in local flood risk management.
NASA Astrophysics Data System (ADS)
Cottrell, William; Montero, Miguel
2018-02-01
In this note we investigate the role of Lloyd's computational bound in holographic complexity. Our goal is to translate the assumptions behind Lloyd's proof into the bulk language. In particular, we discuss the distinction between orthogonalizing and `simple' gates and argue that these notions are useful for diagnosing holographic complexity. We show that large black holes constructed from series circuits necessarily employ simple gates, and thus do not satisfy Lloyd's assumptions. We also estimate the degree of parallel processing required in this case for elementary gates to orthogonalize. Finally, we show that for small black holes at fixed chemical potential, the orthogonalization condition is satisfied near the phase transition, supporting a possible argument for the Weak Gravity Conjecture first advocated in [1].
Project Air Force, Annual Report 2003
2003-01-01
to Simulate Personnel Retention The CAPM system is based on a simple assumption about employee retention: A rational individual faced with the...analysis to certain parts of the force. CAPM keeps a complete record of the assumptions, policies, and data used for each scenario. Thus decisionmakers...premises and assumptions. Instead, the Commission concluded that space is a separate operating arena equivalent to the air, land, and maritime
Magisterial Decision-Making: How Fifteen Stipendiary Magistrates Make Court-Room Decisions.
ERIC Educational Resources Information Center
Lawrence, Jeanette A.; Browne, Myra A.
This report describes the cognitive procedures that a group of Australian stipendiary magistrates utilize in court to make decisions. The study was based on an assumption that magistrates represent a group of professionals whose work involves making decisions of human significance, and on an assumption that the magistrates' own perceptions of their ways of…
Assumption-versus data-based approaches to summarizing species' ranges.
Peterson, A Townsend; Navarro-Sigüenza, Adolfo G; Gordillo, Alejandro
2018-06-01
For conservation decision making, species' geographic distributions are mapped using various approaches. Some such efforts have downscaled versions of coarse-resolution extent-of-occurrence maps to fine resolutions for conservation planning. We examined the quality of the extent-of-occurrence maps as range summaries and the utility of refining those maps into fine-resolution distributional hypotheses. Extent-of-occurrence maps tend to be overly simple, omit many known and well-documented populations, and likely frequently include many areas not holding populations. Refinement steps involve typological assumptions about habitat preferences and elevational ranges of species, which can introduce substantial error in estimates of species' true areas of distribution. However, no model-evaluation steps are taken to assess the predictive ability of these models, so model inaccuracies are not noticed. Whereas range summaries derived by these methods may be useful in coarse-grained, global-extent studies, their continued use in on-the-ground conservation applications at fine spatial resolutions is not advisable in light of reliance on assumptions, lack of real spatial resolution, and lack of testing. In contrast, data-driven techniques that integrate primary data on biodiversity occurrence with remotely sensed data that summarize environmental dimensions (i.e., ecological niche modeling or species distribution modeling) offer solutions based on a minimum of assumptions that can be evaluated and validated quantitatively, providing a well-founded, widely accepted method for summarizing species' distributional patterns for conservation applications. © 2016 Society for Conservation Biology.
Assessing Gaussian Assumption of PMU Measurement Error Using Field Data
Wang, Shaobu; Zhao, Junbo; Huang, Zhenyu; ...
2017-10-13
Gaussian PMU measurement error has been assumed in many power system applications, such as state estimation, monitoring of oscillatory modes, and voltage stability analysis, to name a few. This letter proposes a simple yet effective approach to assessing this assumption by using the stability property of a probability distribution and the concept of redundant measurement. Extensive results using field PMU data from the WECC system reveal that the Gaussian assumption is questionable.
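A rough illustration of the idea (assuming the stability-property reading of the letter, with synthetic rather than field data): for Gaussian errors, the normalized sum of two independent, redundant measurement errors has the same distributional shape as a single error, so a shape statistic such as excess kurtosis should match.

```python
# Minimal sketch with synthetic, deliberately non-Gaussian (Laplace) errors;
# not the letter's field-data procedure.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(1)
e1 = rng.laplace(size=20_000)      # stand-in for one channel's measurement errors
e2 = rng.laplace(size=20_000)      # errors of a redundant measurement

print("excess kurtosis, single error:    ", round(kurtosis(e1), 2))
print("excess kurtosis, (e1+e2)/sqrt(2): ", round(kurtosis((e1 + e2) / np.sqrt(2)), 2))
# Gaussian (stable) errors would give matching values (both near 0); the
# mismatch here flags a non-Gaussian error distribution.
```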
Estimating trends in the global mean temperature record
NASA Astrophysics Data System (ADS)
Poppick, Andrew; Moyer, Elisabeth J.; Stein, Michael L.
2017-06-01
Given uncertainties in physical theory and numerical climate simulations, the historical temperature record is often used as a source of empirical information about climate change. Many historical trend analyses appear to de-emphasize physical and statistical assumptions: examples include regression models that treat time rather than radiative forcing as the relevant covariate, and time series methods that account for internal variability in nonparametric rather than parametric ways. However, given a limited data record and the presence of internal variability, estimating radiatively forced temperature trends in the historical record necessarily requires some assumptions. Ostensibly empirical methods can also involve an inherent conflict in assumptions: they require data records that are short enough for naive trend models to be applicable, but long enough for long-timescale internal variability to be accounted for. In the context of global mean temperatures, empirical methods that appear to de-emphasize assumptions can therefore produce misleading inferences, because the trend over the twentieth century is complex and the scale of temporal correlation is long relative to the length of the data record. We illustrate here how a simple but physically motivated trend model can provide better-fitting and more broadly applicable trend estimates and can allow for a wider array of questions to be addressed. In particular, the model allows one to distinguish, within a single statistical framework, between uncertainties in the shorter-term vs. longer-term response to radiative forcing, with implications not only for historical trends but also for uncertainties in future projections. We also investigate how the choice of statistical description of internal variability affects the inferred uncertainties. While nonparametric methods may seem to avoid making explicit assumptions, we demonstrate how even misspecified parametric statistical methods, if attuned to the important characteristics of internal variability, can result in more accurate uncertainty statements about trends.
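The contrast the authors draw can be mimicked with a toy calculation: regress a synthetic temperature series once on time and once on a forcing-like covariate, accounting for AR(1) internal variability. The forcing history, response coefficient, and noise below are invented stand-ins, not the observational record or the authors' model.

```python
# Minimal sketch comparing a linear-in-time trend with a forcing-based trend,
# both with AR(1) errors via statsmodels' GLSAR; all series are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
years = np.arange(1900, 2021)
forcing = 2.5 * (np.exp((years - 1900) / 80.0) - 1.0)    # toy forcing history (W/m^2)

noise = np.zeros(len(years))                             # AR(1) internal variability
for t in range(1, len(years)):
    noise[t] = 0.6 * noise[t - 1] + rng.normal(scale=0.1)
temp = 0.4 * forcing + noise                             # synthetic "observed" temperatures

time_fit    = sm.GLSAR(temp, sm.add_constant(years - years[0]), rho=1).iterative_fit(5)
forcing_fit = sm.GLSAR(temp, sm.add_constant(forcing), rho=1).iterative_fit(5)
print("AIC with time covariate:   ", round(time_fit.aic, 1))
print("AIC with forcing covariate:", round(forcing_fit.aic, 1))   # lower here: better fit
```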
No Generalization of Practice for Nonzero Simple Addition
ERIC Educational Resources Information Center
Campbell, Jamie I. D.; Beech, Leah C.
2014-01-01
Several types of converging evidence have suggested recently that skilled adults solve very simple addition problems (e.g., 2 + 1, 4 + 2) using a fast, unconscious counting algorithm. These results stand in opposition to the long-held assumption in the cognitive arithmetic literature that such simple addition problems normally are solved by fact…
The impact of management science on political decision making
NASA Technical Reports Server (NTRS)
White, M. J.
1971-01-01
The possible impact on public policy and organizational decision making of operations research/management science (OR/MS) is discussed. Criticisms based on the assumption that OR/MS will have influence on decision making and criticisms based on the assumption that it will have no influence are described. New directions in the analysis of analysis and in thinking about policy making are also considered.
Survival analysis in hematologic malignancies: recommendations for clinicians
Delgado, Julio; Pereira, Arturo; Villamor, Neus; López-Guillermo, Armando; Rozman, Ciril
2014-01-01
The widespread availability of statistical packages has undoubtedly helped hematologists worldwide in the analysis of their data, but has also led to the inappropriate use of statistical methods. In this article, we review some basic concepts of survival analysis and also make recommendations about how and when to perform each particular test using SPSS, Stata and R. In particular, we describe a simple way of defining cut-off points for continuous variables and the appropriate and inappropriate uses of the Kaplan-Meier method and Cox proportional hazards regression models. We also provide practical advice on how to check the proportional hazards assumption and briefly review the role of relative survival and multiple imputation. PMID:25176982
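The article's worked examples use SPSS, Stata, and R; a roughly equivalent check of the proportional hazards assumption in Python can be done with the lifelines package (a swapped-in tool, not one the authors describe), as sketched below on lifelines' bundled example dataset.

```python
# Minimal sketch of a proportional-hazards check using lifelines, not the
# article's own SPSS/Stata/R code or data.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()                                # recidivism data shipped with lifelines
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")

# Scaled Schoenfeld-residual style tests; small p-values flag covariates that
# appear to violate proportional hazards.
cph.check_assumptions(df, p_value_threshold=0.05, show_plots=False)
```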
The Measurement of the Field of View from Airplane Cockpits
NASA Technical Reports Server (NTRS)
Gough, Melvin N
1936-01-01
A method has been devised for the angular measurement and graphic portrayal of the view obtained from the pilot's cockpit of an airplane. The assumption upon which the method is based and a description of the instrument, designated a "visiometer", used in the measurement are given. Account is taken of the fact that the pilot has two eyes and two separate sources of vision. The view is represented on charts using an equal-area polar projection, a description and proof of which are given. The use of this chart, aside from its simplicity, may make possible the establishment of simple criterions of the field of view. Charts of five representative airplanes with various cockpit arrangements are included.
Robust matching for voice recognition
NASA Astrophysics Data System (ADS)
Higgins, Alan; Bahler, L.; Porter, J.; Blais, P.
1994-10-01
This paper describes an automated method of comparing a voice sample of an unknown individual with samples from known speakers in order to establish or verify the individual's identity. The method is based on a statistical pattern matching approach that employs a simple training procedure, requires no human intervention (transcription, word or phonetic marking, etc.), and makes no assumptions regarding the expected form of the statistical distributions of the observations. The content of the speech material (vocabulary, grammar, etc.) is not assumed to be constrained in any way. An algorithm is described which incorporates frame pruning and channel equalization processes designed to achieve robust performance with reasonable computational resources. An experimental implementation demonstrating the feasibility of the concept is described.
Tukker, Arnold; de Koning, Arjan; Wood, Richard; Moll, Stephan; Bouwmeester, Maaike C
2013-02-19
Environmentally extended input output (EE IO) analysis is increasingly used to assess the carbon footprint of final consumption. Official EE IO data are, however, at best available for single countries or regions such as the EU27. This causes problems in assessing pollution embodied in imported products. The popular "domestic technology assumption (DTA)" leads to errors. Improved approaches based on Life Cycle Inventory data, Multiregional EE IO tables, etc. rely on unofficial research data and modeling, making them difficult for statistical offices to implement. The DTA can lead to errors for three main reasons: exporting countries can have higher impact intensities; may use more intermediate inputs for the same output; or may sell the imported products for lower/other prices than those produced domestically. The last factor is relevant for sustainable consumption policies of importing countries, whereas the first two factors are mainly a matter of making production in exporting countries more eco-efficient. We elaborated a simple correction for price differences in imports and domestic production using monetary and physical data from official import and export statistics. A case study for the EU27 shows that this "price-adjusted DTA" gives a partial but meaningful adjustment of pollution embodied in trade compared to multiregional EE IO studies.
Quantum-like dynamics applied to cognition: a consideration of available options
NASA Astrophysics Data System (ADS)
Broekaert, Jan; Basieva, Irina; Blasiak, Pawel; Pothos, Emmanuel M.
2017-10-01
Quantum probability theory (QPT) has provided a novel, rich mathematical framework for cognitive modelling, especially for situations which appear paradoxical from classical perspectives. This work concerns the dynamical aspects of QPT, as relevant to cognitive modelling. We aspire to shed light on how the mind's driving potentials (encoded in Hamiltonian and Lindbladian operators) impact the evolution of a mental state. Some existing QPT cognitive models do employ dynamical aspects when considering how a mental state changes with time, but it is often the case that several simplifying assumptions are introduced. What kind of modelling flexibility does QPT dynamics offer without any simplifying assumptions and is it likely that such flexibility will be relevant in cognitive modelling? We consider a series of nested QPT dynamical models, constructed with a view to accommodate results from a simple, hypothetical experimental paradigm on decision-making. We consider Hamiltonians more complex than the ones which have traditionally been employed with a view to explore the putative explanatory value of this additional complexity. We then proceed to compare simple models with extensions regarding both the initial state (e.g. a mixed state with a specific orthogonal decomposition; a general mixed state) and the dynamics (by introducing Hamiltonians which destroy the separability of the initial structure and by considering an open-system extension). We illustrate the relations between these models mathematically and numerically. This article is part of the themed issue `Second quantum revolution: foundational questions'.
A simple and effective treatment for stuttering: speech practice without audience.
Yamada, Jun; Homma, Takanobu
2007-01-01
On the assumption that stuttering is essentially acquired behavior, it has been concluded that speech-related anticipatory anxiety as a major cause of stuttering accounts for virtually all apparently-different aspects of stuttering on the behavioral level. Stutterers' linguistic competence is unimpaired, although their speech production is characterized as "disfluent". Yet, such disfluency is dramatically reduced when such people speak in anxiety-free no-audience conditions. Furthermore, our pilot study of oral reading in Japanese indicates that a stutterer can easily replace stuttering events with a common interjection, "eh", and make oral reading sound natural and fluent. Given these facts, we propose the Overlearning Fluency when Alone (OFA) treatment, consisting of two distinct but overlapping steps: (1) Overlearning of fluency in a no-audience condition, and (2) Use of an interjection, "eh", as a starter when a stuttering event is anticipated. It remains to be demonstrated that this is a truly simple and effective treatment for "one of mankind's most baffling afflictions".
Quantification of sensory and food quality: the R-index analysis.
Lee, Hye-Seong; van Hout, Danielle
2009-08-01
The accurate quantification of sensory difference/similarity between foods, as well as consumer acceptance/preference and concepts, is greatly needed to optimize and maintain food quality. The R-Index is one class of measures of the degree of difference/similarity, and was originally developed for sensory difference tests for food quality control, product development, and so on. The index is based on signal detection theory and is free of the response bias that can invalidate difference testing protocols, including categorization and same-different and A-Not A tests. It is also a nonparametric analysis, making no assumptions about sensory distributions, and is simple to compute and understand. The R-Index is also flexible in its application. Methods based on R-Index analysis have been used as detection and sensory difference tests, as simple alternatives to hedonic scaling, and for the measurement of consumer concepts. This review indicates the various computational strategies for the R-Index and its practical applications to consumer and sensory measurements in food science.
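As an illustration of how simple the computation is, the sketch below calculates an R-index from rating-frequency counts as the probability that a "signal" sample is ranked above a "noise" sample, counting ties as one half; the counts are made-up illustration data, not taken from the review.

```python
# Minimal sketch of an R-index calculation from rating counts; the data are
# invented for illustration.
import numpy as np

def r_index(signal_counts, noise_counts):
    """Counts per rating category, ordered from 'sure signal' to 'sure noise'."""
    s = np.asarray(signal_counts, dtype=float)
    n = np.asarray(noise_counts, dtype=float)
    wins = ties = 0.0
    for i, s_i in enumerate(s):
        ties += s_i * n[i]                 # same category: counts as half
        wins += s_i * n[i + 1:].sum()      # signal placed in a more-confident category
    return (wins + 0.5 * ties) / (s.sum() * n.sum())

# Ratings for a test product (signal) versus a control (noise):
print(round(r_index([20, 10, 5, 5], [5, 5, 10, 20]), 3))   # ~0.78: readily discriminable
```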
2011-01-01
Background: Data fusion methods are widely used in virtual screening, and make the implicit assumption that the more often a molecule is retrieved in multiple similarity searches, the more likely it is to be active. This paper tests the correctness of this assumption. Results: Sets of 25 searches using either the same reference structure and 25 different similarity measures (similarity fusion) or 25 different reference structures and the same similarity measure (group fusion) show that large numbers of unique molecules are retrieved by just a single search, but that the numbers of unique molecules decrease very rapidly as more searches are considered. This rapid decrease is accompanied by a rapid increase in the fraction of those retrieved molecules that are active. There is an approximately log-log relationship between the numbers of different molecules retrieved and the number of searches carried out, and a rationale for this power-law behaviour is provided. Conclusions: Using multiple searches provides a simple way of increasing the precision of a similarity search, and thus provides a justification for the use of data fusion methods in virtual screening. PMID:21824430
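The bookkeeping behind the counting experiment is straightforward, as in the sketch below: fuse successive "searches" and track how many distinct molecules have been retrieved at least once. The searches here are biased random draws from a synthetic library, standing in for real similarity rankings, so only the counting logic, not the chemistry, is illustrated.

```python
# Minimal sketch of counting unique retrievals across fused searches; the
# "searches" are synthetic draws, not real similarity rankings.
import numpy as np

rng = np.random.default_rng(3)
library_size, top_k, n_searches = 20_000, 500, 25

# Bias all searches toward the same similarity-rich region so that they overlap.
weights = np.exp(-np.arange(library_size) / 1500.0)
weights /= weights.sum()

retrieved = set()
for j in range(1, n_searches + 1):
    hits = rng.choice(library_size, size=top_k, replace=False, p=weights)
    before = len(retrieved)
    retrieved.update(hits.tolist())
    if j in (1, 2, 5, 10, 25):
        print(f"{j:2d} searches: {len(retrieved):5d} unique, "
              f"{len(retrieved) - before:4d} new in this search")
```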
Surveillance of a 2D Plane Area with 3D Deployed Cameras
Fu, Yi-Ge; Zhou, Jie; Deng, Lei
2014-01-01
As the use of camera networks has expanded, camera placement to satisfy some quality assurance parameters (such as a good coverage ratio, acceptable resolution constraints, a cost as low as possible, etc.) has become an important problem. The discrete camera deployment problem is NP-hard and many heuristic methods have been proposed to solve it, most of which make very simple assumptions. In this paper, we propose a probability inspired binary Particle Swarm Optimization (PI-BPSO) algorithm to solve a homogeneous camera network placement problem. We model the problem under some more realistic assumptions: (1) deploy the cameras in the 3D space while the surveillance area is restricted to a 2D ground plane; (2) deploy the minimal number of cameras to get maximum visual coverage under additional constraints, such as the field of view (FOV) of the cameras and minimum resolution constraints. We can simultaneously optimize the number and the configuration of the cameras through the introduction of a regulation item in the cost function. The simulation results showed the effectiveness of the proposed PI-BPSO algorithm. PMID:24469353
Missing CD4+ cell response in randomized clinical trials of maraviroc and dolutegravir.
Cuffe, Robert; Barnett, Carly; Granier, Catherine; Machida, Mitsuaki; Wang, Cunshan; Roger, James
2015-10-01
Missing data can compromise inferences from clinical trials, yet the topic has received little attention in the clinical trial community. Shortcomings in the methods commonly used to analyze studies with missing data (complete case, last- or baseline-observation carried forward) have been highlighted in a recent Food and Drug Administration-sponsored report. This report recommends how to mitigate the issues associated with missing data. We present an example of the proposed concepts using data from recent clinical trials. CD4+ cell count data from the previously reported SINGLE and MOTIVATE studies of dolutegravir and maraviroc were analyzed using a variety of statistical methods to explore the impact of missing data. Four methodologies were used: complete case analysis, simple imputation, mixed models for repeated measures, and multiple imputation. We compared the sensitivity of conclusions to the volume of missing data and to the assumptions underpinning each method. Rates of missing data were greater in the MOTIVATE studies (35%-68% premature withdrawal) than in SINGLE (12%-20%). The sensitivity of results to assumptions about missing data was related to the volume of missing data. Estimates of treatment differences by various analysis methods ranged across a 61 cells/mm3 window in MOTIVATE and a 22 cells/mm3 window in SINGLE. Where missing data are anticipated, analyses require robust statistical and clinical debate of the necessary but unverifiable underlying statistical assumptions. Multiple imputation makes these assumptions transparent, can accommodate a broad range of scenarios, and is a natural analysis for clinical trials in HIV with missing data.
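A stripped-down version of the multiple-imputation arm of such an analysis is sketched below using scikit-learn's IterativeImputer on synthetic CD4-like data (invented treatment effect and dropout rate, and pooling by a simple average rather than full Rubin's rules), just to show the mechanics rather than reproduce the trial analyses.

```python
# Minimal sketch of multiple imputation on synthetic CD4-like data; not the
# trials' data or their full Rubin's-rules analysis.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(4)
n = 200
treat = rng.integers(0, 2, n)
baseline = rng.normal(350, 100, n)
week48 = baseline + 40 + 60 * treat + rng.normal(0, 50, n)   # true difference ~60 cells/mm^3
week48[rng.random(n) < 0.3] = np.nan                         # ~30% missing outcomes

X = np.column_stack([treat, baseline, week48])
estimates = []
for seed in range(10):                                       # ten imputed datasets
    filled = IterativeImputer(sample_posterior=True, random_state=seed).fit_transform(X)
    y = filled[:, 2]
    estimates.append(y[treat == 1].mean() - y[treat == 0].mean())
print(f"pooled treatment difference: {np.mean(estimates):.1f} cells/mm^3")
```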
Turning great strategy into great performance.
Mankins, Michael C; Steele, Richard
2005-01-01
Despite the enormous time and energy that goes into strategy development, many companies have little to show for their efforts. Indeed, research by the consultancy Marakon Associates suggests that companies on average deliver only 63% of the financial performance their strategies promise. In this article, Michael Mankins and Richard Steele of Marakon present the findings of this research. They draw on their experience with high-performing companies like Barclays, Cisco, Dow Chemical, 3M, and Roche to establish some basic rules for setting and delivering strategy: Keep it simple, make it concrete. Avoid long, drawn-out descriptions of lofty goals and instead stick to clear language describing what your company will and won't do. Debate assumptions, not forecasts. Create cross-functional teams drawn from strategy, marketing, and finance to ensure the assumptions underlying your long-term plans reflect both the real economics of your company's markets and its actual performance relative to competitors. Use a rigorous analytic framework. Ensure that the dialogue between the corporate center and the business units about market trends and assumptions is conducted within a rigorous framework, such as that of "profit pools". Discuss resource deployments early. Create more realistic forecasts and more executable plans by discussing up front the level and timing of critical deployments. Clearly identify priorities. Prioritize tactics so that employees have a clear sense of where to direct their efforts. Continuously monitor performance. Track resource deployment and results against plan, using continuous feedback to reset assumptions and reallocate resources. Reward and develop execution capabilities. Motivate and develop staff. Following these rules strictly can help narrow the strategy-to-performance gap.
Pendulum Motion and Differential Equations
ERIC Educational Resources Information Center
Reid, Thomas F.; King, Stephen C.
2009-01-01
A common example of real-world motion that can be modeled by a differential equation, and one easily understood by the student, is the simple pendulum. Simplifying assumptions are necessary for closed-form solutions to exist, and frequently there is little discussion of the impact if those assumptions are not met. This article presents a…
Evaluation of 2D shallow-water model for spillway flow with a complex geometry
USDA-ARS?s Scientific Manuscript database
Although the two-dimensional (2D) shallow water model is formulated based on several assumptions, such as a hydrostatic pressure distribution and negligible vertical velocity, it has been used as a simple alternative to the complex 3D model to compute water flows in which these assumptions may be ...
When life imitates art: surrogate decision making at the end of life.
Shapiro, Susan P
2007-01-01
The privileging of the substituted judgment standard as the gold standard for surrogate decision making in law and bioethics has constrained the research agenda in end-of-life decision making. The empirical literature is inundated with a plethora of "Newlywed Game" designs, in which potential patients and potential surrogates respond to hypothetical scenarios to see how often they "get it right." The preoccupation with determining the capacity of surrogates to accurately reproduce the judgments of another makes a number of assumptions that blind scholars to the variables central to understanding how surrogates actually make medical decisions on behalf of another. These assumptions include that patient preferences are knowable, surrogates have adequate and accurate information, time stands still, patients get the surrogates they want, patients want and surrogates utilize substituted judgment criteria, and surrogates are disinterested. This article examines these assumptions and considers the challenges of designing research that makes them problematic.
Steady states and stability in metabolic networks without regulation.
Ivanov, Oleksandr; van der Schaft, Arjan; Weissing, Franz J
2016-07-21
Metabolic networks are often extremely complex. Despite intensive efforts, many details of these networks, e.g., exact kinetic rates and parameters of metabolic reactions, are not known, making it difficult to derive their properties. Considerable effort has been made to develop theory about properties of steady states in metabolic networks that are valid for any values of parameters. General results on uniqueness of steady states and their stability have been derived with specific assumptions on reaction kinetics, stoichiometry and network topology. For example, deep results have been obtained under the assumptions of mass-action reaction kinetics, continuous flow stirred tank reactors (CFSTR), concordant reaction networks and others. Nevertheless, a general theory about properties of steady states in metabolic networks is still missing. Here we make a step further in the quest for such a theory. Specifically, we study properties of steady states in metabolic networks with monotonic kinetics in relation to their stoichiometry (simple and general) and the number of metabolites participating in every reaction (single or many). Our approach is based on the investigation of properties of the Jacobian matrix. We show that stoichiometry, network topology, and the number of metabolites that participate in every reaction have a large influence on the number of steady states and their stability in metabolic networks. Specifically, metabolic networks with single-substrate-single-product reactions have disconnected steady states, whereas in metabolic networks with multiple-substrate-multiple-product reactions manifolds of steady states arise. Metabolic networks with simple stoichiometry have either a unique globally asymptotically stable steady state or asymptotically stable manifolds of steady states. In metabolic networks with general stoichiometry the steady states are not always stable and we provide conditions for their stability. In order to demonstrate the biological relevance we illustrate the results on the examples of the TCA cycle, the mevalonate pathway and the Calvin cycle. Copyright © 2016 Elsevier Ltd. All rights reserved.
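The Jacobian-based reasoning can be seen on a toy network. The sketch below (an illustrative two-metabolite chain with Michaelis-Menten, hence monotonic, kinetics and invented parameter values, not one of the paper's case studies) finds the steady state numerically and checks local stability from the eigenvalues of a finite-difference Jacobian.

```python
# Minimal sketch: steady state and Jacobian eigenvalues for a toy chain
#   -> x1 -> x2 ->  with monotonic (Michaelis-Menten) kinetics.
import numpy as np
from scipy.optimize import fsolve

v_in, vmax1, km1, vmax2, km2 = 1.0, 3.0, 0.5, 2.5, 0.8

def rhs(x):
    x1, x2 = x
    v1 = vmax1 * x1 / (km1 + x1)          # reaction x1 -> x2
    v2 = vmax2 * x2 / (km2 + x2)          # reaction x2 ->
    return np.array([v_in - v1, v1 - v2])

x_ss = fsolve(rhs, np.array([1.0, 1.0]))  # steady state

def jacobian(f, x, eps=1e-6):
    J = np.zeros((len(x), len(x)))
    for j in range(len(x)):
        dx = np.zeros(len(x)); dx[j] = eps
        J[:, j] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

eig = np.linalg.eigvals(jacobian(rhs, x_ss))
print("steady state:", np.round(x_ss, 3), " eigenvalue real parts:", np.round(eig.real, 3))
# All real parts negative => this steady state is locally asymptotically stable.
```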
Philosophy of Technology Assumptions in Educational Technology Leadership
ERIC Educational Resources Information Center
Webster, Mark David
2017-01-01
A qualitative study using grounded theory methods was conducted to (a) examine what philosophy of technology assumptions are present in the thinking of K-12 technology leaders, (b) investigate how the assumptions may influence technology decision making, and (c) explore whether technological determinist assumptions are present. Subjects involved…
Sampling Assumptions in Inductive Generalization
ERIC Educational Resources Information Center
Navarro, Daniel J.; Dry, Matthew J.; Lee, Michael D.
2012-01-01
Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key "sampling" assumption about how the available data were generated.…
24 CFR 58.4 - Assumption authority.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., decision-making, and action that would otherwise apply to HUD under NEPA and other provisions of law that... environmental review, decision-making and action for programs authorized by the Native American Housing... separate decision regarding assumption of responsibilities for each of these Acts and communicate that...
NASA Astrophysics Data System (ADS)
Giuliani, M.; Pianosi, F.; Castelletti, A.
2015-11-01
Advances in environmental monitoring systems are making a wide range of data available at increasingly high temporal and spatial resolution. This creates an opportunity to enhance real-time understanding of water system conditions and to improve prediction of their future evolution, ultimately increasing our ability to make better decisions. Yet many water systems are still operated using very simple information systems, typically based on simple statistical analysis and the operator's experience. In this work, we propose a framework to automatically select the most valuable information for informing water system operations, supported by quantitative metrics that assess the operational and economic value of this information. The Hoa Binh reservoir in Vietnam is used to demonstrate the proposed framework in a multiobjective context, accounting for hydropower production and flood control. First, we quantify the expected value of perfect information, meaning the potential space for improvement under the assumption of exact knowledge of the future system conditions. Second, we automatically select the most valuable information that could actually be used to improve the Hoa Binh operations. Finally, we assess the economic value of sample information on the basis of the resulting policy performance. Results show that our framework successfully selects information to enhance the performance of the operating policies with respect to both competing objectives, attaining a 40% improvement close to the target trade-off selected as a potentially good compromise between hydropower production and flood control.
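The first step, the expected value of perfect information, reduces to a simple calculation, sketched below for a toy two-action release decision under two equally likely inflow scenarios; the payoff numbers are invented and have nothing to do with the Hoa Binh case study.

```python
# Minimal sketch of an expected-value-of-perfect-information calculation with
# invented payoffs; not the Hoa Binh analysis.
import numpy as np

payoffs = np.array([[10.0, 2.0],      # action "store":   payoff under (wet, dry)
                    [ 6.0, 7.0]])     # action "release": payoff under (wet, dry)
probs = np.array([0.5, 0.5])          # scenario probabilities

best_without_info = (payoffs @ probs).max()             # commit to one action now
best_with_info = (payoffs.max(axis=0) * probs).sum()    # act knowing the scenario
print("EVPI =", best_with_info - best_without_info)     # upper bound on the value of forecasts
```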
Statistical Issues for Calculating Reentry Hazards
NASA Technical Reports Server (NTRS)
Matney, Mark; Bacon, John
2016-01-01
A number of statistical tools have been developed over the years for assessing the risk that reentering objects pose to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. This information, combined with information on the expected ground path of the reentry, is used to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of this analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper builds on previous IAASS work to re-examine many of these theoretical assumptions, including the mathematical basis for the hazard calculations, and to outline the conditions under which the simplifying assumptions hold. This study also employs empirical and theoretical information to test these assumptions, and makes recommendations on how to improve the accuracy of these calculations in the future.
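The final bookkeeping step is a simple expectation, sketched below with placeholder numbers (impact probabilities per latitude band, band population densities, and a total debris casualty area that are purely illustrative).

```python
# Minimal sketch of a casualty-expectation sum; every number is an
# illustrative placeholder, not a real reentry analysis.
import numpy as np

casualty_area_m2 = 8.0                                           # surviving-debris casualty area
band_prob = np.array([0.05, 0.15, 0.30, 0.30, 0.15, 0.05])       # impact probability per band
pop_density_km2 = np.array([2.0, 15.0, 55.0, 60.0, 20.0, 3.0])   # people per km^2 in each band

expected_casualties = np.sum(band_prob * pop_density_km2 * casualty_area_m2 / 1e6)
print(f"expected casualties E_c = {expected_casualties:.2e}")
print("within the common 1-in-10,000 guideline:", bool(expected_casualties < 1e-4))
```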
Morse Code, Scrabble, and the Alphabet
ERIC Educational Resources Information Center
Richardson, Mary; Gabrosek, John; Reischman, Diann; Curtiss, Phyliss
2004-01-01
In this paper we describe an interactive activity that illustrates simple linear regression. Students collect data and analyze it using simple linear regression techniques taught in an introductory applied statistics course. The activity is extended to illustrate checks for regression assumptions and regression diagnostics taught in an…
Evidence synthesis for decision making 7: a reviewer's checklist.
Ades, A E; Caldwell, Deborah M; Reken, Stefanie; Welton, Nicky J; Sutton, Alex J; Dias, Sofia
2013-07-01
This checklist is for the review of evidence syntheses for treatment efficacy used in decision making based on either efficacy or cost-effectiveness. It is intended to be used for pairwise meta-analysis, indirect comparisons, and network meta-analysis, without distinction. It does not generate a quality rating and is not prescriptive. Instead, it focuses on a series of questions aimed at revealing the assumptions that the authors of the synthesis are expecting readers to accept, the adequacy of the arguments authors advance in support of their position, and the need for further analyses or sensitivity analyses. The checklist is intended primarily for those who review evidence syntheses, including indirect comparisons and network meta-analyses, in the context of decision making but will also be of value to those submitting syntheses for review, whether to decision-making bodies or journals. The checklist has 4 main headings: A) definition of the decision problem, B) methods of analysis and presentation of results, C) issues specific to network synthesis, and D) embedding the synthesis in a probabilistic cost-effectiveness model. The headings and implicit advice follow directly from the other tutorials in this series. A simple table is provided that could serve as a pro forma checklist.
ERIC Educational Resources Information Center
Shockley-Zalabak, Pamela
A study of decision making processes and communication rules, in a corporate setting undergoing change as a result of organizational ineffectiveness, examined whether (1) decisions about formal communication reporting systems were linked to management assumptions about technical creativity/effectiveness, (2) assumptions about…
Making Predictions about Chemical Reactivity: Assumptions and Heuristics
ERIC Educational Resources Information Center
Maeyer, Jenine; Talanquer, Vicente
2013-01-01
Diverse implicit cognitive elements seem to support but also constrain reasoning in different domains. Many of these cognitive constraints can be thought of as either implicit assumptions about the nature of things or reasoning heuristics for decision-making. In this study we applied this framework to investigate college students' understanding of…
A simple method for predicting solar fractions of IPH and space heating systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chauhan, R.; Goodling, J.S.
1982-01-01
In this paper, a method has been developed to evaluate the solar fractions of liquid based industrial process heat (IPH) and space heating systems, without the use of computer simulations. The new method is the result of joining two theories, Lunde's equation to determine monthly performance of solar heating systems and the utilizability correlations of Collares-Pereira and Rabl by making appropriate assumptions. The new method requires the input of the monthly averages of the utilizable radiation and the collector operating time. These quantities are determined conveniently by the method of Collares-Pereira and Rabl. A comparison of the results of the new method with the most acceptable design methods shows excellent agreement.
Mean-Field-Game Model for Botnet Defense in Cyber-Security
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolokoltsov, V. N., E-mail: v.kolokoltsov@warwick.ac.uk; Bensoussan, A.
We initiate the analysis of the response of computer owners to various offers of defence systems against a cyber-hacker (for instance, a botnet attack), as a stochastic game of a large number of interacting agents. We introduce a simple mean-field game that models their behavior. It takes into account both the random process of the propagation of the infection (controlled by the botnet herder) and the decision making process of customers. Its stationary version turns out to be exactly solvable (but not at all trivial) under an additional natural assumption that the execution time of the decisions of the customers (say, switching the defence system on or off) is much faster than the infection rates.
Accelerator skyshine: Tyger, tyger, burning bright
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stapleton, G.B.; O'Brien, K.; Thomas, R.H.
1992-06-01
Neutron skyshine is, in most cases, the dominant source of radiation exposure to the general public from operation of well-shielded, high-energy accelerators. To estimate this exposure, tabulated solutions of the transport of neutrons through the air are frequently used. In previous works on skyshine, these tabular data have been parameterized into simple empirical equations that are easy and fast to use but are limited to distances greater than a few hundred meters from the accelerator. Our current report has refined this earlier work by including more realistic assumptions of neutron differential energy spectrum and angular distribution. These improved calculations essentially endorse the earlier parameterizations but make possible reasonably accurate dose estimates much closer to the skyshine source than before.
NASA Astrophysics Data System (ADS)
Rodriguez Marco, Albert
Battery management systems (BMS) require computationally simple but highly accurate models of the battery cells they are monitoring and controlling. Historically, empirical equivalent-circuit models have been used, but increasingly researchers are focusing their attention on physics-based models due to their greater predictive capabilities. These models are of high intrinsic computational complexity and so must undergo some kind of order-reduction process to make their use by a BMS feasible: we favor methods based on a transfer-function approach to battery-cell dynamics. In prior works, transfer functions have been found from full-order PDE models via two simplifying assumptions: (1) a linearization assumption--which is a fundamental necessity in order to make transfer functions--and (2) an assumption made out of expedience that decouples the electrolyte-potential and electrolyte-concentration PDEs in order to render an approach to solve for the transfer functions from the PDEs. This dissertation improves the fidelity of physics-based models by eliminating the need for the second assumption and by linearizing nonlinear dynamics around different constant currents. Electrochemical transfer functions are infinite-order and cannot be expressed as a ratio of polynomials in the Laplace variable s. Thus, for practical use, these systems need to be approximated using reduced-order models that capture the most significant dynamics. This dissertation improves the generation of physics-based reduced-order models by introducing different realization algorithms, which produce a low-order model from the infinite-order electrochemical transfer functions. Physics-based reduced-order models are linear and describe cell dynamics if operated near the setpoint at which they have been generated. Hence, multiple physics-based reduced-order models need to be generated at different setpoints (i.e., state-of-charge, temperature and C-rate) in order to extend the cell operating range. This dissertation improves the implementation of physics-based reduced-order models by introducing different blending approaches that combine the pre-computed models generated (offline) at different setpoints in order to produce good electrochemical estimates (online) along the cell state-of-charge, temperature and C-rate range.
Experimental Control of Simple Pendulum Model
ERIC Educational Resources Information Center
Medina, C.
2004-01-01
This paper conveys information about a physics laboratory experiment for students with some theoretical knowledge about oscillatory motion. Students construct a simple pendulum that behaves as an ideal one, and analyze how the model assumptions affect its period. The following aspects are quantitatively analyzed: vanishing friction, small amplitude,…
ERIC Educational Resources Information Center
Kruger-Ross, Matthew J.; Holcomb, Lori B.
2012-01-01
The use of educational technologies is grounded in the assumptions of teachers, learners, and administrators. Assumptions are choices that structure our understandings and help us make meaning. Current advances in Web 2.0 and social media technologies challenge our assumptions about teaching and learning. The intersection of technology and…
The Productivity Dilemma in Workplace Health Promotion.
Cherniack, Martin
2015-01-01
Worksite-based programs to improve workforce health and well-being (Workplace Health Promotion (WHP)) have been advanced as conduits for improved worker productivity and decreased health care costs. There has been a countervailing health economics contention that return on investment (ROI) does not merit preventive health investment. METHODS/PROCEDURES: Pertinent studies were reviewed and results reconsidered. A simple economic model is presented based on conventional and alternate assumptions used in cost benefit analysis (CBA), such as discounting and negative value. The issues are presented in the format of 3 conceptual dilemmas. In some occupations such as nursing, the utility of patient survival and staff health is undervalued. WHP may miss important components of work related health risk. Altering assumptions on discounting and eliminating the drag of negative value radically change the CBA value. Simple monetization of a work life and calculation of return on workforce health investment as a simple alternate opportunity involve highly selective interpretations of productivity and utility.
The Effect of Expected Value on Attraction Effect Preference Reversals
Warren, Paul A.; El‐Deredy, Wael; Howes, Andrew
2016-01-01
The attraction effect shows that adding a third alternative to a choice set can alter preference between the original two options. For over 30 years, this simple demonstration of context dependence has been taken as strong evidence against a class of parsimonious value‐maximising models that evaluate alternatives independently from one another. Significantly, however, in previous demonstrations of the attraction effect alternatives are approximately equally valuable, so there was little consequence to the decision maker irrespective of which alternative was selected. Here we vary the difference in expected value between alternatives and provide the first demonstration that, although extinguished with large differences, this theoretically important effect persists when choice between alternatives has a consequence. We use this result to clarify the implications of the attraction effect, arguing that although it robustly violates the assumptions of value‐maximising models, it does not eliminate the possibility that human decision making is optimal. © 2016 The Authors Journal of Behavioral Decision Making Published by John Wiley & Sons Ltd. PMID:29081595
The Effect of Expected Value on Attraction Effect Preference Reversals.
Farmer, George D; Warren, Paul A; El-Deredy, Wael; Howes, Andrew
2017-10-01
The attraction effect shows that adding a third alternative to a choice set can alter preference between the original two options. For over 30 years, this simple demonstration of context dependence has been taken as strong evidence against a class of parsimonious value-maximising models that evaluate alternatives independently from one another. Significantly, however, in previous demonstrations of the attraction effect alternatives are approximately equally valuable, so there was little consequence to the decision maker irrespective of which alternative was selected. Here we vary the difference in expected value between alternatives and provide the first demonstration that, although extinguished with large differences, this theoretically important effect persists when choice between alternatives has a consequence. We use this result to clarify the implications of the attraction effect, arguing that although it robustly violates the assumptions of value-maximising models, it does not eliminate the possibility that human decision making is optimal. © 2016 The Authors Journal of Behavioral Decision Making Published by John Wiley & Sons Ltd.
Life Support Baseline Values and Assumptions Document
NASA Technical Reports Server (NTRS)
Anderson, Molly S.; Ewert, Michael K.; Keener, John F.; Wagner, Sandra A.
2015-01-01
The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. With the ability to accurately compare different technologies' performance for the same function, managers will be able to make better decisions regarding technology development.
Why is it Doing That? - Assumptions about the FMS
NASA Technical Reports Server (NTRS)
Feary, Michael; Immanuel, Barshi; Null, Cynthia H. (Technical Monitor)
1998-01-01
In the glass cockpit, it's not uncommon to hear exclamations such as "why is it doing that?". Sometimes pilots ask "what were they thinking when they set it this way?" or "why doesn't it tell me what it's going to do next?". Pilots may hold a conceptual model of the automation that is the result of fleet lore, which may or may not be consistent with what the engineers had in mind. But what did the engineers have in mind? In this study, we present some of the underlying assumptions surrounding the glass cockpit. Engineers and designers make assumptions about the nature of the flight task; at the other end, instructor and line pilots make assumptions about how the automation works and how it was intended to be used. These underlying assumptions are seldom recognized or acknowledged. This study is an attempt to explicitly articulate such assumptions to better inform design and training developments. This work is part of a larger project to support training strategies for automation.
Local Structure Theory for Cellular Automata.
NASA Astrophysics Data System (ADS)
Gutowitz, Howard Andrew
The local structure theory (LST) is a generalization of the mean field theory for cellular automata (CA). The mean field theory makes the assumption that iterative application of the rule does not introduce correlations between the states of cells in different positions. This assumption allows the derivation of a simple formula for the limit density of each possible state of a cell. The most striking feature of CA is that they may well generate correlations between the states of cells as they evolve. The LST takes the generation of correlation explicitly into account. It thus has the potential to describe statistical characteristics in detail. The basic assumption of the LST is that though correlation may be generated by CA evolution, this correlation decays with distance. This assumption allows the derivation of formulas for the estimation of the probability of large blocks of states in terms of smaller blocks of states. Given the probabilities of blocks of size n, probabilities may be assigned to blocks of arbitrary size such that these probability assignments satisfy the Kolmogorov consistency conditions and hence may be used to define a measure on the set of all possible (infinite) configurations. Measures defined in this way are called finite (or n-) block measures. A function called the scramble operator of order n maps a measure to an approximating n-block measure. The action of a CA on configurations induces an action on measures on the set of all configurations. The scramble operator is combined with the CA map on measure to form the local structure operator (LSO). The LSO of order n maps the set of n-block measures into itself. It is hypothesised that the LSO applied to n-block measures approximates the rule itself on general measures, and does so increasingly well as n increases. The fundamental advantage of the LSO is that its action is explicitly computable from a finite system of rational recursion equations. Empirical study of a number of CA rules demonstrates the potential of the LST to describe the statistical features of CA. The behavior of some simple rules is derived analytically. Other rules have more complex, chaotic behavior. Even for these rules, the LST yields an accurate portrait of both small and large time statistics.
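The order-1 (mean-field) case that the local structure theory generalizes is compact enough to sketch: for an elementary CA rule, iterate the map p -> sum over neighbourhoods mapped to 1 of p^(#ones) (1-p)^(#zeros). The rules chosen below are just familiar examples.

```python
# Minimal sketch of the mean-field (order-1) limit density for elementary CA;
# the local structure theory replaces this with block probabilities of order n.
def mean_field_density(rule_number, p0=0.3, iterations=100):
    rule = [(rule_number >> i) & 1 for i in range(8)]   # output bit for neighbourhood i
    p = p0
    for _ in range(iterations):
        # each term: probability of a 3-cell neighbourhood that maps to state 1
        p = sum(p ** bin(nbhd).count("1") * (1 - p) ** (3 - bin(nbhd).count("1"))
                for nbhd in range(8) if rule[nbhd])
    return p

for rule in (22, 90, 110):
    print(f"rule {rule:3d}: mean-field limit density ~ {mean_field_density(rule):.3f}")
```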
The Power of Proofs-of-Possession: Securing Multiparty Signatures against Rogue-Key Attacks
NASA Astrophysics Data System (ADS)
Ristenpart, Thomas; Yilek, Scott
Multiparty signature protocols need protection against rogue-key attacks, made possible whenever an adversary can choose its public key(s) arbitrarily. For many schemes, provable security has only been established under the knowledge of secret key (KOSK) assumption where the adversary is required to reveal the secret keys it utilizes. In practice, certifying authorities rarely require the strong proofs of knowledge of secret keys required to substantiate the KOSK assumption. Instead, proofs of possession (POPs) are required and can be as simple as just a signature over the certificate request message. We propose a general registered key model, within which we can model both the KOSK assumption and in-use POP protocols. We show that simple POP protocols yield provable security of Boldyreva's multisignature scheme [11], the LOSSW multisignature scheme [28], and a 2-user ring signature scheme due to Bender, Katz, and Morselli [10]. Our results are the first to provide formal evidence that POPs can stop rogue-key attacks.
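A POP of the simple kind mentioned here, a signature over the certificate request, is easy to sketch; the snippet below uses Ed25519 from the `cryptography` package as a toy illustration and is not the registered key model or any of the multisignature schemes analyzed in the paper.

```python
# Minimal sketch of a signature-based proof of possession; a toy illustration,
# not the paper's protocols.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

secret_key = Ed25519PrivateKey.generate()
public_key = secret_key.public_key()

cert_request = b"subject=alice, pubkey=<encoded public key>"   # stand-in request message
pop = secret_key.sign(cert_request)                            # the proof of possession

try:                                                           # certifying authority's check
    public_key.verify(pop, cert_request)
    print("POP verified: the requester controls the matching secret key")
except InvalidSignature:
    print("POP rejected")
```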
Accountability Policies and Teacher Decision Making: Barriers to the Use of Data to Improve Practice
ERIC Educational Resources Information Center
Ingram, Debra; Louis, Karen Seashore; Schroeder, Roger G.
2004-01-01
One assumption underlying accountability policies is that results from standardized tests and other sources will be used to make decisions about school and classroom practice. We explore this assumption using data from a longitudinal study of nine high schools nominated as leading practitioners of Continuous Improvement (CI) practices. We use the…
Speckle interferometry of asteroids
NASA Technical Reports Server (NTRS)
Drummond, Jack
1988-01-01
By studying the image two-dimensional power spectra or autocorrelations projected by an asteroid as it rotates, it is possible to locate its rotational pole and derive the dimensions of its three axes through speckle interferometry, under certain assumptions of uniform, geometric scattering and a triaxial ellipsoid shape. However, in cases where images can be reconstructed, the need for making these assumptions is obviated. Furthermore, image reconstruction, the ultimate goal of speckle interferometry, will lead to mapping albedo features (if they exist) such as impact areas or geological units. The first glimpses of the surface of an asteroid were obtained from images of 4 Vesta reconstructed from speckle interferometric observations. These images reveal that Vesta is quite Moon-like in having large hemispheric-scale albedo features. All of its lightcurves can be produced from a simple model developed from the images. Although undoubtedly more intricate than the model, Vesta's lightcurves can be matched by a model with three dark and four bright spots. The dark areas so dominate one hemisphere that a lightcurve minimum occurs when the maximum cross-section area is visible. The triaxial ellipsoid shape derived for Vesta is not consistent with the notion that the asteroid has an equilibrium shape, in spite of its having apparently been differentiated.
Testing the Relation between the Local and Cosmic Star Formation Histories
NASA Astrophysics Data System (ADS)
Fields, Brian D.
1999-04-01
Recently, there has been great progress toward observationally determining the mean star formation history of the universe. When accurately known, the cosmic star formation rate could provide much information about Galactic evolution, if the Milky Way's star formation rate is representative of the average cosmic star formation history. A simple hypothesis is that our local star formation rate is proportional to the cosmic mean. In addition, to specify a star formation history, one must also adopt an initial mass function (IMF); typically it is assumed that the IMF is a smooth function that is constant in time. We show how to test directly the compatibility of all these assumptions by making use of the local (solar neighborhood) star formation record encoded in the present-day stellar mass function. Present data suggest that at least one of the following is false: (1) the local IMF is constant in time; (2) the local IMF is a smooth (unimodal) function; and/or (3) star formation in the Galactic disk was representative of the cosmic mean. We briefly discuss how to determine which of these assumptions fail and also improvements in observations, which will sharpen this test.
Calibration of Response Data Using MIRT Models with Simple and Mixed Structures
ERIC Educational Resources Information Center
Zhang, Jinming
2012-01-01
It is common to assume during a statistical analysis of a multiscale assessment that the assessment is composed of several unidimensional subtests or that it has simple structure. Under this assumption, the unidimensional and multidimensional approaches can be used to estimate item parameters. These two approaches are equivalent in parameter…
ERIC Educational Resources Information Center
Slisko, Josip; Cruz, Adrian Corona
2013-01-01
There is a general agreement that critical thinking is an important element of 21st century skills. Although critical thinking is a very complex and controversial conception, many would accept that recognition and evaluation of assumptions is a basic critical-thinking process. When students use a simple mathematical model to reason quantitatively…
Human judgment vs. quantitative models for the management of ecological resources.
Holden, Matthew H; Ellner, Stephen P
2016-07-01
Despite major advances in quantitative approaches to natural resource management, there has been resistance to using these tools in the actual practice of managing ecological populations. Given a managed system and a set of assumptions, translated into a model, optimization methods can be used to solve for the most cost-effective management actions. However, when the underlying assumptions are not met, such methods can potentially lead to decisions that harm the environment and economy. Managers who develop decisions based on past experience and judgment, without the aid of mathematical models, can potentially learn about the system and develop flexible management strategies. However, these strategies are often based on subjective criteria and equally invalid and often unstated assumptions. Given the drawbacks of both methods, it is unclear whether simple quantitative models improve environmental decision making over expert opinion. In this study, we explore how well students, using their experience and judgment, manage simulated fishery populations in an online computer game and compare their management outcomes to the performance of model-based decisions. We consider harvest decisions generated using four different quantitative models: (1) the model used to produce the simulated population dynamics observed in the game, with the values of all parameters known (as a control), (2) the same model, but with unknown parameter values that must be estimated during the game from observed data, (3) models that are structurally different from those used to simulate the population dynamics, and (4) a model that ignores age structure. Humans on average performed much worse than the models in cases 1-3, but in a small minority of scenarios, models produced worse outcomes than those resulting from students making decisions based on experience and judgment. When the models ignored age structure, they generated poorly performing management decisions, but still outperformed students using experience and judgment 66% of the time. © 2016 by the Ecological Society of America.
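The flavour of the comparison can be reproduced with a toy harvest model: the sketch below pits the model-derived maximum-sustainable-yield rate against an invented threshold "judgment" rule on a logistic stock. The growth parameters, the judgment rule, and the revenue accounting are illustrative assumptions, not the game used in the study.

```python
# Minimal sketch: model-based vs. rule-of-thumb harvesting of a logistic stock;
# all parameters and the judgment rule are invented for illustration.
r, K, price, years = 0.4, 1000.0, 1.0, 50

def simulate(policy):
    n, revenue = K / 2.0, 0.0
    for _ in range(years):
        harvest = min(policy(n), n)
        n = max(n - harvest + r * n * (1 - n / K), 0.0)   # logistic growth on pre-harvest stock
        revenue += price * harvest
    return revenue, n

msy_policy      = lambda n: (r / 2.0) * n                 # model-derived: harvest rate r/2
judgment_policy = lambda n: 0.3 * n if n > 400 else 0.0   # ad hoc threshold rule

for name, policy in (("model (MSY rate)", msy_policy), ("judgment rule", judgment_policy)):
    revenue, final_stock = simulate(policy)
    print(f"{name:17s}  total harvest value={revenue:7.1f}  final stock={final_stock:6.1f}")
```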
Statistical Issues for Calculating Reentry Hazards
NASA Technical Reports Server (NTRS)
Bacon, John B.; Matney, Mark
2016-01-01
A number of statistical tools have been developed over the years for assessing the risk that reentering objects pose to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. This information, combined with information on the expected ground path of the reentry, is used to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of this analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper builds on previous IAASS work to re-examine one of these theoretical assumptions. This study employs empirical and theoretical information to test the assumption of a fully random decay along the argument of latitude of the final orbit, and makes recommendations on how to improve the accuracy of this calculation in the future.
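The assumption under test has a closed-form consequence, which the sketch below checks by Monte Carlo: drawing the argument of latitude uniformly on an orbit of inclination i and comparing per-band decay fractions against the implied latitude distribution F(lat) = 1/2 + arcsin(sin lat / sin i)/pi. The inclination and band edges are illustrative choices.

```python
# Minimal sketch of the latitude distribution implied by a uniformly random
# argument of latitude at decay; the inclination is an illustrative choice.
import numpy as np

inc = np.radians(51.6)                        # an ISS-like inclination
rng = np.random.default_rng(5)
lat = np.arcsin(np.sin(inc) * np.sin(rng.uniform(0.0, 2.0 * np.pi, 1_000_000)))

def cdf(phi):                                 # analytic latitude CDF under the assumption
    return 0.5 + np.arcsin(np.clip(np.sin(phi) / np.sin(inc), -1.0, 1.0)) / np.pi

edges = np.radians(np.arange(-60, 61, 20))    # 20-degree latitude bands
for lo, hi in zip(edges[:-1], edges[1:]):
    sampled = np.mean((lat >= lo) & (lat < hi))
    analytic = cdf(hi) - cdf(lo)
    print(f"band {np.degrees(lo):4.0f}..{np.degrees(hi):4.0f} deg:"
          f" sampled {sampled:.3f}  analytic {analytic:.3f}")
```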
Evacuation Criteria after A Nuclear Accident: A Personal Perspective
Wilson, Richard
2012-01-01
In any decision involving radiation, a risk-risk or risk-benefit comparison should be made. This can be either explicit or implicit. When the adverse effect of an alternate action is less than the planned action, such as medical use of X rays or nuclear power in ordinary operation, the comparison is simple. But in this paper I argue that with the situation faced by the Japanese in Fukushima, the assumption that the risk of an alternate action is small is false. The risks of unnecessary evacuation exceeded the risk of radiation cancers hypothetically produced by staying in place. This was not realized by those who had to make a decision within hours. This realization suggests important changes, worldwide, in the guidelines for radiation protection in accident situations. PMID:23304100
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cantor, R.; Schoepfle, M.
Communities at risk are confronted by an increasingly complex array of opportunities and need for involvement in decisions affecting them. Policy analysis often demands from researchers insights into the complicated process of how best to account for community involvement in decision making. Often, this requires additional understanding of how decisions are made by community members. Researchers trying to capture the important features of decision making will necessarily make assumptions regarding the rationality underlying the decision process. Two implicit and often incompatible sets of research assumptions about decision processes have emerged: outcome rationality and process rationality. Using outcome rationality, the principal goal of risk research often is to predict how people will react to risk regardless of what they say they would do. Using process rationality, the research goal is to determine how people perceive the risks to which they are exposed and how perceptions actually influence responses. The former approach is associated with research in risk communication, conducted by economists and cognitive psychologists; the latter approach is associated with the field of risk negotiation and acceptance, conducted by anthropologists, some sociologists, and planners. This article describes (1) the difference between the assumptions behind outcome and process rationality regarding decision making and the problems resulting from these differences; (2) the promise and limitations of both sets of assumptions; (3) the potential contributions from cognitive psychology, cognitive ethnography, and the theory of transaction costs in reconciling the differences in assumptions and making them more complementary; and (4) the implications of such complementarity.
Automatic ethics: the effects of implicit assumptions and contextual cues on moral behavior.
Reynolds, Scott J; Leavitt, Keith; DeCelles, Katherine A
2010-07-01
We empirically examine the reflexive or automatic aspects of moral decision making. To begin, we develop and validate a measure of an individual's implicit assumption regarding the inherent morality of business. Then, using an in-basket exercise, we demonstrate that an implicit assumption that business is inherently moral impacts day-to-day business decisions and interacts with contextual cues to shape moral behavior. Ultimately, we offer evidence supporting a characterization of employees as reflexive interactionists: moral agents whose automatic decision-making processes interact with the environment to shape their moral behavior.
Common-sense chemistry: The use of assumptions and heuristics in problem solving
NASA Astrophysics Data System (ADS)
Maeyer, Jenine Rachel
Students experience difficulty learning and understanding chemistry at higher levels, often because of cognitive biases stemming from common sense reasoning constraints. These constraints can be divided into two categories: assumptions (beliefs held about the world around us) and heuristics (the reasoning strategies or rules used to build predictions and make decisions). A better understanding and characterization of these constraints are of central importance in the development of curriculum and teaching strategies that better support student learning in science. It was the overall goal of this thesis to investigate student reasoning in chemistry, specifically to better understand and characterize the assumptions and heuristics used by undergraduate chemistry students. To achieve this, two mixed-methods studies were conducted, each with quantitative data collected using a questionnaire and qualitative data gathered through semi-structured interviews. The first project investigated the reasoning heuristics used when ranking chemical substances based on the relative value of a physical or chemical property, while the second study characterized the assumptions and heuristics used when making predictions about the relative likelihood of different types of chemical processes. Our results revealed that heuristics for cue selection and decision-making played a significant role in the construction of answers during the interviews. Many study participants relied frequently on one or more of the following heuristics to make their decisions: recognition, representativeness, one-reason decision-making, and arbitrary trend. These heuristics allowed students to generate answers in the absence of requisite knowledge, but often led students astray. When characterizing assumptions, our results indicate that students relied on intuitive, spurious, and valid assumptions about the nature of chemical substances and processes in building their responses. In particular, many interviewees seemed to view chemical reactions as macroscopic reassembling processes where favorability was related to the perceived ease with which reactants broke apart or products formed. Students also expressed spurious chemical assumptions based on the misinterpretation and overgeneralization of periodicity and electronegativity. Our findings suggest the need to create more opportunities for college chemistry students to monitor their thinking, develop and apply analytical ways of reasoning, and evaluate the effectiveness of shortcut reasoning procedures in different contexts.
NDE Research At Nondestructive Measurement Science At NASA Langley
1989-06-01
our staff include: ultrasonics, nonlinear acoustics, thermal acoustics and diffusion, magnetics, fiber optics, and x-ray tomography. We have a...based on the simple assumption that acoustic waves interact with the sample and reveal "important" properties. In practice, such assumptions have...between the acoustic wave and the media. The most useful models can generally be inverted to determine the physical properties or geometry of the
State relations for a two-phase mixture of reacting explosives and applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kubota, Shiro; Saburi, Tei; Ogata, Yuji
2007-10-15
To assess the assumptions behind the two phase mixture rule for reacting explosives, the shock-to-detonation transition process was calculated for high explosives using a finite difference method. An ignition and growth model and the Jones-Wilkins-Lee (JWL) equations of state were employed. The simple mixture rule assumes that the reacting explosive is a simple mixture of the reactant and product components. Four different assumptions, such as that of thermal equilibrium and isotropy, were adopted to calculate the pressure. The main purpose of this paper is to present the answer to the question of why the numerical results of shock-initiation are insensitive to the assumptions adopted. The equations of state for reactants and products were assessed by considering plots of the specific internal energy E and specific volume V. If the slopes of the constant-pressure lines for both components in the E-V plane are almost the same, it is demonstrated that the numerical results are insensitive to the assumptions adopted. We have found that the relation for the specific volumes of the two components can be approximately expressed by a single curve of the specific volume of the reactant vs that of the products. We discuss this relationship in terms of the results of the numerical simulation. (author)
2013-01-01
Introduction One of the most important decisions that an animal has to make in its life is choosing a mate. Although most studies in sexual selection assume that mate choice is rational, this assumption has not been tested seriously. A crucial component of rationality is that animals exhibit transitive choices: if an individual prefers option A over B, and B over C, then it also prefers A over C. Results We assessed transitivity in mate choice: 40 female convict cichlids had to make a series of binary choices between males of varying size. Ninety percent of females showed transitive choices. The mean preference index was significantly higher when a female chose between her most preferred and least preferred males (male 1 vs. male 3) than when she chose between males of adjacent ranks (1 vs. 2 or 2 vs. 3). The results are consistent with a simple underlying preference function leading to transitive choice: females preferred males about one third larger than themselves. This rule of thumb correctly predicted which male was preferred in 67% of the cases and the ordering in binary choices in 78% of cases. Conclusions This study provides the first evidence for strong stochastic transitivity in a context of mate choice. The females exhibited ordinal preferences and the direction and magnitude of these preferences could be predicted from a simple rule. The females do not necessarily compare two males to choose the best; it is sufficient to use a self-referent evaluation. Such a simple decision rule has important implications for the evolution of mating strategies and it is consistent with patterns of assortative mating repeatedly observed at the population level. PMID:24216003
Individual differences and reasoning: a study on personality traits.
Bensi, Luca; Giusberti, Fiorella; Nori, Raffaella; Gambetti, Elisa
2010-08-01
Personality can play a crucial role in how people reason and decide. Identifying individual differences related to how we actively gather information and use evidence could lead to a better comprehension and predictability of human reasoning. Recent findings have shown that some personality traits are related to similar decision-making patterns showed by people with mental disorders. We performed research with the aim to investigate delusion-proneness, obsessive-like personality, anxiety (trait and state), and reasoning styles in individuals from the general population. We introduced personality trait and state anxiety scores in a regression model to explore specific associations with: (1) amount of data-gathered prior to making a decision; and (2) the use of confirmatory and disconfirmatory evidence. Results showed that all our independent variables were positively or negatively associated with the amount of data collected in order to make simple probabilistic decisions. Anxiety and obsessiveness were the only predictors of the weight attributed to evidence in favour or against a hypothesis. Findings were discussed in relation to theoretical assumptions, predictions, and clinical implications. Personality traits can predict peculiar ways to reason and decide that, in turn, could be involved to some extent in the formation and/or maintenance of psychological disorders.
Early Retirement Is Not the Cat's Meow. The Endpaper.
ERIC Educational Resources Information Center
Ferguson, Wayne S.
1982-01-01
Early retirement plans are perceived as being beneficial to school staff and financially advantageous to schools. Four out of the five assumptions on which these perceptions are based are incorrect. The one correct assumption is that early retirement will make affirmative action programs move ahead more rapidly. The incorrect assumptions are: (1)…
Making Sense out of Sex Stereotypes in Advertising: A Feminist Analysis of Assumptions.
ERIC Educational Resources Information Center
Ferrante, Karlene
Sexism and racism in advertising have been well documented, but feminist research aimed at social change must go beyond existing content analyses to ask how advertising is created. Analysis of the "mirror assumption" (advertising reflects society) and the "gender assumption" (advertising speaks in a male voice to female…
The Productivity Dilemma in Workplace Health Promotion
Cherniack, Martin
2015-01-01
Background. Worksite-based programs to improve workforce health and well-being (Workplace Health Promotion (WHP)) have been advanced as conduits for improved worker productivity and decreased health care costs. There has been a countervailing health economics contention that return on investment (ROI) does not merit preventive health investment. Methods/Procedures. Pertinent studies were reviewed and results reconsidered. A simple economic model is presented based on conventional and alternate assumptions used in cost benefit analysis (CBA), such as discounting and negative value. The issues are presented in the format of 3 conceptual dilemmas. Principal Findings. In some occupations such as nursing, the utility of patient survival and staff health is undervalued. WHP may miss important components of work related health risk. Altering assumptions on discounting and eliminating the drag of negative value radically change the CBA value. Significance. Simple monetization of a work life and calculation of return on workforce health investment as a simple alternate opportunity involve highly selective interpretations of productivity and utility. PMID:26380374
On Maximizing Item Information and Matching Difficulty with Ability.
ERIC Educational Resources Information Center
Bickel, Peter; Buyske, Steven; Chang, Huahua; Ying, Zhiliang
2001-01-01
Examined the assumption that matching difficulty levels of test items with an examinee's ability makes a test more efficient and challenged this assumption through a class of one-parameter item response theory models. Found the validity of the fundamental assumption to be closely related to the van Zwet tail ordering of symmetric distributions (W.…
McLachlan, G J; Bean, R W; Jones, L Ben-Tovim
2006-07-01
An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have some limitations due to the minimal assumptions made or with more specific assumptions are computationally intensive. By converting to a z-score the value of the test statistic used to test the significance of each gene, we propose a simple two-component normal mixture that models adequately the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
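A minimal sketch of the kind of two-component normal mixture described here: the null component is fixed at N(0, 1), the non-null component has a free mean and variance, and EM returns the mixing proportion together with a per-gene posterior probability of being null. The simulated z-scores and the starting values are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
# Simulated z-scores: 90% null N(0,1), 10% differentially expressed N(2.5, 1.2^2)
z = np.concatenate([rng.normal(0.0, 1.0, 9000), rng.normal(2.5, 1.2, 1000)])

pi0, mu1, sd1 = 0.8, 1.0, 2.0                 # crude starting values
for _ in range(200):                          # EM iterations
    f0 = pi0 * norm.pdf(z, 0.0, 1.0)          # null component held at N(0,1)
    f1 = (1.0 - pi0) * norm.pdf(z, mu1, sd1)
    tau0 = f0 / (f0 + f1)                     # E-step: posterior probability of null
    pi0 = tau0.mean()                         # M-step updates
    w = 1.0 - tau0
    mu1 = np.sum(w * z) / np.sum(w)
    sd1 = np.sqrt(np.sum(w * (z - mu1) ** 2) / np.sum(w))

zq = np.array([0.0, 2.0, 4.0])
post_null = pi0 * norm.pdf(zq, 0.0, 1.0) / (
    pi0 * norm.pdf(zq, 0.0, 1.0) + (1.0 - pi0) * norm.pdf(zq, mu1, sd1))
print(f"estimated null proportion pi0 = {pi0:.3f}")
print("posterior P(null) at z = 0, 2, 4:", np.round(post_null, 3))
```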
ERIC Educational Resources Information Center
Nachlieli, Talli; Herbst, Patricio
2009-01-01
This article reports on an investigation of how teachers of geometry perceived an episode of instruction presented to them as a case of engaging students in proving. Confirming what was hypothesized, participants found it remarkable that a teacher would allow a student to make an assumption while proving. But they perceived this episode in various…
Some European capabilities in satellite cinema exhibition
NASA Astrophysics Data System (ADS)
Bock, Wolfgang
1990-08-01
The likely performance envelope and architecture for satellite cinema systems are derived from simple practical assumptions. A case is made for possible transatlantic cooperation towards establishing a satellite cinema standard.
Boltzmann babies in the proper time measure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bousso, Raphael; Freivogel, Ben; Yang, I-S.
2008-05-15
After commenting briefly on the role of the typicality assumption in science, we advocate a phenomenological approach to the cosmological measure problem. Like any other theory, a measure should be simple, general, well defined, and consistent with observation. This allows us to proceed by elimination. As an example, we consider the proper time cutoff on a geodesic congruence. It predicts that typical observers are quantum fluctuations in the early universe, or Boltzmann babies. We sharpen this well-known youngness problem by taking into account the expansion and open spatial geometry of pocket universes. Moreover, we relate the youngness problem directly to the probability distribution for observables, such as the temperature of the cosmic background radiation. We consider a number of modifications of the proper time measure, but find none that would make it compatible with observation.
Stochastic Estimation of Arm Mechanical Impedance During Robotic Stroke Rehabilitation
Palazzolo, Jerome J.; Ferraro, Mark; Krebs, Hermano Igo; Lynch, Daniel; Volpe, Bruce T.; Hogan, Neville
2009-01-01
This paper presents a stochastic method to estimate the multijoint mechanical impedance of the human arm suitable for use in a clinical setting, e.g., with persons with stroke undergoing robotic rehabilitation for a paralyzed arm. In this context, special circumstances such as hypertonicity and tissue atrophy due to disuse of the hemiplegic limb must be considered. A low-impedance robot was used to bring the upper limb of a stroke patient to a test location, generate force perturbations, and measure the resulting motion. Methods were developed to compensate for input signal coupling at low frequencies apparently due to human–machine interaction dynamics. Data was analyzed by spectral procedures that make no assumption about model structure. The method was validated by measuring simple mechanical hardware and results from a patient's hemiplegic arm are presented. PMID:17436881
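The nonparametric spirit of the spectral analysis can be illustrated with a short sketch: given a recorded force perturbation and the resulting motion, an H1-type estimate built from cross- and auto-spectra yields the mobility V/F, and mechanical impedance is its reciprocal, with no model structure assumed in the estimation step. The second-order "arm", the sample rate, and the Welch segment length below are assumptions, and this is not the authors' compensation procedure for input-signal coupling.

```python
import numpy as np
from scipy.signal import welch, csd

rng = np.random.default_rng(3)
fs, T = 200.0, 60.0                        # assumed sample rate (Hz) and record length (s)
t = np.arange(0.0, T, 1.0 / fs)
dt = 1.0 / fs

# Assumed second-order "arm" for illustration: m*x'' + b*x' + k*x = f
m, b, k = 1.5, 8.0, 120.0
f = rng.normal(0.0, 1.0, t.size)           # stochastic force perturbation
x = np.zeros_like(t)
v = np.zeros_like(t)
for i in range(1, t.size):                 # simple Euler integration of the dynamics
    a = (f[i - 1] - b * v[i - 1] - k * x[i - 1]) / m
    v[i] = v[i - 1] + a * dt
    x[i] = x[i - 1] + v[i - 1] * dt

# H1 estimate of the mobility V/F from cross- and auto-spectra; impedance is its inverse.
freq, S_ff = welch(f, fs=fs, nperseg=1024)
_, S_fv = csd(f, v, fs=fs, nperseg=1024)
Z = S_ff / S_fv                            # mechanical impedance estimate F/V

i0 = np.argmin(np.abs(freq - 2.0))
print(f"|Z| near {freq[i0]:.2f} Hz is about {abs(Z[i0]):.1f} N*s/m")
```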
Entanglement-based Free Space Quantum Cryptography in Daylight
NASA Astrophysics Data System (ADS)
Gerhardt, Ilja; Peloso, Matthew P.; Ho, Caleb; Lamas-Linares, Antia; Kurtsiefer, Christian
2009-05-01
In quantum key distribution (QKD), two families of protocols are established: one based on preparing and sending approximations of single photons, the other based on measurements on entangled photon pairs, which allow a secret key to be established under fewer assumptions on the size of the Hilbert space. The larger optical bandwidth of photon pairs in comparison with the light used for the first family makes establishing a free space link challenging. We present a complete entanglement based QKD system following the BBM92 protocol, which generates a secure key continuously 24 hours a day between distant parties. Spectral, spatial and temporal filtering schemes were introduced to a previous setup, suppressing more than 30 dB of background. We are able to establish the link during daytime, and have developed an algorithm to start and maintain time synchronization with simple crystal oscillators.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galeta, Leonardo; Pirjol, Dan; Schat, Carlos
2009-12-01
We show how to match the Isgur-Karl model to the spin-flavor quark operator expansion used in the 1/N_c studies of the nonstrange negative parity L=1 excited baryons. Using the transformation properties of states and interactions under the permutation group S_3 we are able to express the operator coefficients as overlap integrals, without making any assumption on the spatial dependence of the quark wave functions. The general mass operator leads to parameter free mass relations and constraints on the mixing angles that are valid beyond the usual harmonic oscillator approximation. The Isgur-Karl model with harmonic oscillator wave functions provides a simple counterexample that demonstrates explicitly that the alternative operator basis for the 1/N_c expansion for excited baryons recently proposed by Matagne and Stancu is incomplete.
Sensitivity of TRIM projections to management, harvest, yield, and stocking adjustment assumptions.
Susan J. Alexander
1991-01-01
The Timber Resource Inventory Model (TRIM) was used to make several projections of forest industry timber supply for the Douglas-fir region. The sensitivity of these projections to assumptions about management and yields is discussed. A base run is compared to runs in which yields were altered, stocking adjustment was eliminated, harvest assumptions were changed, and...
Model independent approach to the single photoelectron calibration of photomultiplier tubes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saldanha, R.; Grandi, L.; Guardincerri, Y.
2017-08-01
The accurate calibration of photomultiplier tubes is critical in a wide variety of applications in which it is necessary to know the absolute number of detected photons or precisely determine the resolution of the signal. Conventional calibration methods rely on fitting the photomultiplier response to a low intensity light source with analytical approximations to the single photoelectron distribution, often leading to biased estimates due to the inability to accurately model the full distribution, especially at low charge values. In this paper we present a simple statistical method to extract the relevant single photoelectron calibration parameters without making any assumptions about the underlying single photoelectron distribution. We illustrate the use of this method through the calibration of a Hamamatsu R11410 photomultiplier tube and study the accuracy and precision of the method using Monte Carlo simulations. The method is found to have significantly reduced bias compared to conventional methods and works under a wide range of light intensities, making it suitable for simultaneously calibrating large arrays of photomultiplier tubes.
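A moments-based sketch in the same model-independent spirit (not necessarily the authors' exact estimator): assuming the number of photoelectrons per trigger is Poisson and estimating the occupancy from the fraction of pedestal-only events, the mean and variance of the single-photoelectron (SPE) charge follow from the illuminated and dark charge spectra without assuming any shape for the SPE distribution. The simulated spectra and the pedestal threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_ev, lam_true, q_true = 100_000, 0.15, 1.6e6   # events, true occupancy, true SPE mean charge

# Simulated charge spectra: Gaussian pedestal plus a Poisson number of photoelectrons,
# each drawn from a non-Gaussian (gamma) single-PE charge distribution -- an assumption.
pedestal = lambda n: rng.normal(0.0, 1.0e5, n)
n_pe = rng.poisson(lam_true, n_ev)
led = pedestal(n_ev) + np.array(
    [rng.gamma(4.0, q_true / 4.0, k).sum() for k in n_pe])
dark = pedestal(n_ev)

# Occupancy from the fraction of pedestal-like events (threshold choice is an assumption)
thr = 5.0 * dark.std()
lam = -np.log((led < thr).mean() / (dark < thr).mean())

# Moments of a compound Poisson: E[Q] = E[ped] + lam*q, Var[Q] = Var[ped] + lam*(s^2 + q^2)
q_mean = (led.mean() - dark.mean()) / lam
q_var = (led.var() - dark.var()) / lam - q_mean ** 2

print(f"occupancy estimate {lam:.3f} (true {lam_true})")
print(f"SPE mean charge    {q_mean:.3e} (true {q_true:.3e})")
print(f"SPE charge sigma   {np.sqrt(q_var):.3e}")
```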
Understanding diversity–stability relationships: towards a unified model of portfolio effects
Thibaut, Loïc M; Connolly, Sean R; He, Fangliang
2013-01-01
A major ecosystem effect of biodiversity is to stabilise assemblages that perform particular functions. However, diversity–stability relationships (DSRs) are analysed using a variety of different population and community properties, most of which are adopted from theory that makes several restrictive assumptions that are unlikely to be reflected in nature. Here, we construct a simple synthesis and generalisation of previous theory for the DSR. We show that community stability is a product of two quantities: the synchrony of population fluctuations, and an average species-level population stability that is weighted by relative abundance. Weighted average population stability can be decomposed to consider effects of the mean-variance scaling of abundance, changes in mean abundance with diversity and differences in species' mean abundance in monoculture. Our framework makes explicit how unevenness in the abundances of species in real communities influences the DSR, which occurs both through effects on community synchrony, and effects on weighted average population variability. This theory provides a more robust framework for analysing the results of empirical studies of the DSR, and facilitates the integration of findings from real and model communities. PMID:23095077
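The central identity, that community variability equals the square root of synchrony times the abundance-weighted average population variability, can be verified numerically. A minimal sketch with simulated abundance time series (the species means and correlation structure are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
years, n_sp = 200, 6
means = np.array([100.0, 60.0, 30.0, 15.0, 8.0, 3.0])      # uneven species abundances
L = rng.uniform(-0.3, 0.6, (n_sp, n_sp))                    # induces inter-species correlation
X = means + (rng.normal(0.0, 1.0, (years, n_sp)) @ L.T) * 0.15 * means

sd_i = X.std(axis=0, ddof=1)                 # species-level standard deviations
mu_i = X.mean(axis=0)
tot = X.sum(axis=1)                          # community abundance time series

phi = tot.var(ddof=1) / sd_i.sum() ** 2      # Loreau-de Mazancourt synchrony
cv_pop = sd_i.sum() / mu_i.sum()             # abundance-weighted average population CV
cv_com = tot.std(ddof=1) / tot.mean()        # community CV

print(f"synchrony phi                  = {phi:.3f}")
print(f"weighted average population CV = {cv_pop:.3f}")
print(f"community CV                   = {cv_com:.3f}")
print(f"sqrt(phi) * weighted CV        = {np.sqrt(phi) * cv_pop:.3f}  (matches by construction)")
```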
Puig, Rita; Fullana-I-Palmer, Pere; Baquero, Grau; Riba, Jordi-Roger; Bala, Alba
2013-12-01
Life cycle thinking is a good approach for environmental decision support, although the complexity of Life Cycle Assessment (LCA) studies sometimes prevents their wide use. The purpose of this paper is to show how LCA methodology can be simplified to be more useful for certain applications. In order to improve waste management in Catalonia (Spain), a Cumulative Energy Demand indicator (LCA-based) has been used to obtain four mathematical models to help the government decide whether to prevent or allow a specific waste from leaving its borders. The conceptual equations and all the subsequent developments and assumptions made to obtain the simplified models are presented. One of the four models is discussed in detail, presenting the final simplified equation to be subsequently used by the government in decision making. The resulting model has been found to be scientifically robust, simple to implement and, above all, fulfilling its purpose: the limitation of waste transport out of Catalonia unless the waste recovery operations are significantly better and justify this transport. Copyright © 2013. Published by Elsevier Ltd.
Numerical simulation of NQR/NMR: Applications in quantum computing.
Possa, Denimar; Gaudio, Anderson C; Freitas, Jair C C
2011-04-01
A numerical simulation program able to simulate nuclear quadrupole resonance (NQR) as well as nuclear magnetic resonance (NMR) experiments is presented, written using the Mathematica package, aimed especially at applications in quantum computing. The program makes use of the interaction picture to compute the effect of the relevant nuclear spin interactions, without any assumption about the relative size of each interaction. This makes the program flexible and versatile, being useful in a wide range of experimental situations, going from NQR (at zero or under small applied magnetic field) to high-field NMR experiments. Some conditions specifically required for quantum computing applications are implemented in the program, such as the possibility of using elliptically polarized radiofrequency and the inclusion of first- and second-order terms in the average Hamiltonian expansion. A number of examples dealing with simple NQR and quadrupole-perturbed NMR experiments are presented, along with the proposal of experiments to create quantum pseudopure states and logic gates using NQR. The program and the various application examples are freely available through the link http://www.profanderson.net/files/nmr_nqr.php. Copyright © 2011 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nomura, Yasunori; Salzetta, Nico; Sanches, Fabio
We study the Hilbert space structure of classical spacetimes under the assumption that entanglement in holographic theories determines semiclassical geometry. We show that this simple assumption has profound implications; for example, a superposition of classical spacetimes may lead to another classical spacetime. Despite its unconventional nature, this picture admits the standard interpretation of superpositions of well-defined semiclassical spacetimes in the limit that the number of holographic degrees of freedom becomes large. We illustrate these ideas using a model for the holographic theory of cosmological spacetimes.
Where Are All of the Gas-bearing Local Dwarf Galaxies? Quantifying Possible Impacts of Reionization
NASA Astrophysics Data System (ADS)
Tollerud, Erik J.; Peek, J. E. G.
2018-04-01
We present an approach for comparing the detections and non-detections of Local Group (LG) dwarf galaxies in large H I surveys to the predictions of a suite of n-body simulations of the LG. This approach depends primarily on a set of empirical scaling relations to connect the simulations to the observations, rather than making strong theoretical assumptions. We then apply this methodology to the Galactic Arecibo L-band Feed Array Hi (GALFA-HI) Compact Cloud Catalog (CCC), and compare it to the Exploring the Local Volume In Simulations (ELVIS) suite of simulations. This approach reveals a strong tension between the naïve results of the model and the observations: while there are no LG dwarfs in the GALFA-HI CCC, the simulations predict ∼10. Applying a simple model of reionization can resolve this tension by preventing low-mass halos from forming gas. However, if this effect operates as expected, the observations provide a constraint on the dwarf galaxy mass scale that reionization impacts. Combined with the observed properties of Leo T, the halo virial mass scale at which reionization impacts dwarf galaxy gas content is constrained to be ∼10^8.5 M_⊙, independent of any assumptions about star formation.
Supersymmetry from typicality: TeV-scale gauginos and PeV-scale squarks and sleptons.
Nomura, Yasunori; Shirai, Satoshi
2014-09-12
We argue that under a set of simple assumptions the multiverse leads to low-energy supersymmetry with the spectrum often called spread or minisplit supersymmetry: the gauginos are in the TeV region with the other superpartners 2 or 3 orders of magnitude heavier. We present a particularly simple realization of supersymmetric grand unified theory using this idea.
Williams, M S; Ebel, E D; Cao, Y
2013-01-01
The fitting of statistical distributions to microbial sampling data is common in quantitative microbiology and risk assessment applications. An underlying assumption of most fitting techniques is that data are collected with simple random sampling, which is often not the case. This study develops a weighted maximum likelihood estimation framework that is appropriate for microbiological samples collected with unequal probabilities of selection. Two examples, based on the collection of food samples during processing, are provided to demonstrate the method and highlight the magnitude of biases in the maximum likelihood estimator when data are inappropriately treated as a simple random sample. Failure to properly weight samples to account for how data are collected can introduce substantial biases into inferences drawn from the data. The proposed methodology will reduce or eliminate an important source of bias in inferences drawn from the analysis of microbial data. This will also make comparisons between studies and the combination of results from different studies more reliable, which is important for risk assessment applications. © 2012 No claim to US Government works.
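A minimal sketch of a weighted (pseudo-)maximum-likelihood fit under unequal selection probabilities, assuming a normal model on the log-concentration scale for illustration: each observation's log-density is weighted by the inverse of its inclusion probability, and the bias of the unweighted fit appears when high-concentration samples are oversampled. The population, selection probabilities, and model are assumptions of the sketch.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
# Population of log10 concentrations; high-concentration lots are oversampled (assumed design)
pop = rng.normal(1.0, 0.8, 50_000)
p_incl = np.where(pop > 2.0, 0.12, 0.02)            # unequal probabilities of selection
sampled = rng.uniform(size=pop.size) < p_incl
x, w = pop[sampled], 1.0 / p_incl[sampled]          # design weights = 1 / inclusion probability

def neg_loglik(theta, weights):
    mu, log_sd = theta
    sd = np.exp(log_sd)
    ll = -0.5 * ((x - mu) / sd) ** 2 - np.log(sd)   # normal log-density (up to a constant)
    return -np.sum(weights * ll)

unweighted = minimize(neg_loglik, x0=[0.0, 0.0], args=(np.ones_like(x),)).x
weighted = minimize(neg_loglik, x0=[0.0, 0.0], args=(w,)).x
print("true mean 1.00 | unweighted MLE mean %.2f | weighted MLE mean %.2f"
      % (unweighted[0], weighted[0]))
```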
Collective behaviour in vertebrates: a sensory perspective
Collignon, Bertrand; Fernández-Juricic, Esteban
2016-01-01
Collective behaviour models can predict behaviours of schools, flocks, and herds. However, in many cases, these models make biologically unrealistic assumptions in terms of the sensory capabilities of the organism, which are applied across different species. We explored how sensitive collective behaviour models are to these sensory assumptions. Specifically, we used parameters reflecting the visual coverage and visual acuity that determine the spatial range over which an individual can detect and interact with conspecifics. Using metric and topological collective behaviour models, we compared the classic sensory parameters, typically used to model birds and fish, with a set of realistic sensory parameters obtained through physiological measurements. Compared with the classic sensory assumptions, the realistic assumptions increased perceptual ranges, which led to fewer groups and larger group sizes in all species, and higher polarity values and slightly shorter neighbour distances in the fish species. Overall, classic visual sensory assumptions are not representative of many species showing collective behaviour and constrain unrealistically their perceptual ranges. More importantly, caution must be exercised when empirically testing the predictions of these models in terms of choosing the model species, making realistic predictions, and interpreting the results. PMID:28018616
A class of simple bouncing and late-time accelerating cosmologies in f(R) gravity
NASA Astrophysics Data System (ADS)
Kuiroukidis, A.
We consider the field equations for a flat FRW cosmological model in an a priori generic f(R) gravity model and cast them into a completely normalized and dimensionless system of ODEs for the scale factor and the function f(R), with respect to the scalar curvature R. It is shown that under reasonable assumptions, namely a power-law functional form for the f(R) gravity model, one can produce simple analytical and numerical solutions describing bouncing cosmological models that are, in addition, late-time accelerating. The power-law form for the f(R) gravity model is typically considered in the literature as the most concrete, reasonable, practical and viable assumption [see S. D. Odintsov and V. K. Oikonomou, Phys. Rev. D 90 (2014) 124083, arXiv:1410.8183 [gr-qc]].
The radiated noise from isotropic turbulence revisited
NASA Technical Reports Server (NTRS)
Lilley, Geoffrey M.
1993-01-01
The noise radiated from isotropic turbulence at low Mach numbers and high Reynolds numbers, as derived by Proudman (1952), was the first application of Lighthill's Theory of Aerodynamic Noise to a complete flow field. The theory presented by Proudman involves the assumption of the neglect of retarded time differences and so replaces the second-order retarded-time and space covariance of Lighthill's stress tensor, Tij, and in particular its second time derivative, by the equivalent simultaneous covariance. This assumption is a valid approximation in the derivation of the ∂²Tij/∂t² covariance at low Mach numbers, but is not justified when that covariance is reduced to the sum of products of the time derivatives of equivalent second-order velocity covariances as required when Gaussian statistics are assumed. The present paper removes these assumptions and finds that although the changes in the analysis are substantial, the change in the numerical result for the total acoustic power is small. The present paper also considers an alternative analysis which does not neglect retarded times. It makes use of the Lighthill relationship, whereby the fourth-order Tij retarded-time covariance is evaluated from the square of the similar second-order covariance, which is assumed known. In this derivation, no statistical assumptions are involved. This result, using distributions for the second-order space-time velocity squared covariance based on the Direct Numerical Simulation (DNS) results of both Sarkar and Hussaini (1993) and Dubois (1993), is compared with the re-evaluation of Proudman's original model. These results are then compared with the sound power derived from a phenomenological model based on simple approximations to the retarded-time/space covariance of Txx. Finally, the recent numerical solutions of Sarkar and Hussaini (1993) for the acoustic power are compared with the results obtained from the analytic solutions.
The Heterogeneous Investment Horizon and Dynamic Strategies for Asset Allocation
NASA Astrophysics Data System (ADS)
Xiong, Heping; Xu, Yiheng; Xiao, Yi
This paper discusses the influence of the portfolio rebalancing strategy on the efficiency of long-term investment portfolios under the assumption of an independent and stationary distribution of returns. By comparing the efficient sets of the stochastic rebalancing strategy, the simple rebalancing strategy and the buy-and-hold strategy with specific data examples, we find that the stochastic rebalancing strategy is optimal, while the simple rebalancing strategy is of the lowest efficiency. In addition, the simple rebalancing strategy lowers the efficiency of the portfolio instead of improving it.
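A minimal sketch (with assumed i.i.d. returns and an arbitrary two-asset mix) of how the buy-and-hold and simple periodic rebalancing strategies can be compared by simulation; the stochastic rebalancing strategy discussed in the paper is not reproduced here, and the means and volatilities are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_paths, n_years = 20_000, 30
mu = np.array([0.07, 0.02])        # assumed mean annual returns (risky asset, safe asset)
sd = np.array([0.18, 0.05])        # assumed volatilities, independent returns
target = np.array([0.6, 0.4])      # target portfolio mix

R = 1.0 + mu + sd * rng.normal(size=(n_paths, n_years, 2))   # gross annual returns

# Buy-and-hold: set the mix once and never trade again
bh = (target * R.prod(axis=1)).sum(axis=1)

# Simple rebalancing: restore the target mix every year
wealth = np.ones(n_paths)
for t in range(n_years):
    wealth *= (target * R[:, t, :]).sum(axis=1)
reb = wealth

for name, w in [("buy-and-hold", bh), ("annual rebalancing", reb)]:
    print(f"{name:18s} terminal wealth: mean {w.mean():.2f}, sd {w.std():.2f}")
```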
Hot spots in the microwave sky
NASA Technical Reports Server (NTRS)
Vittorio, Nicola; Juszkiewicz, Roman
1987-01-01
The assumption that the cosmic background fluctuations can be approximated as a random Gaussian field implies specific predictions for the radiation temperature pattern. Using this assumption, the abundances and angular sizes are calculated for regions of various levels of brightness expected to appear in the sky. Different observational strategies are assessed in the context of these results. Calculations for both large-angle and small-angle anisotropy generated by scale-invariant fluctuations in a flat universe are presented. Also discussed are simple generalizations to open cosmological models.
Cognitive-psychology expertise and the calculation of the probability of a wrongful conviction.
Rouder, Jeffrey N; Wixted, John T; Christenfeld, Nicholas J S
2018-05-08
Cognitive psychologists are familiar with how their expertise in understanding human perception, memory, and decision-making is applicable to the justice system. They may be less familiar with how their expertise in statistical decision-making and their comfort working in noisy real-world environments is just as applicable. Here we show how this expertise in ideal-observer models may be leveraged to calculate the probability of guilt of Gary Leiterman, a man convicted of murder on the basis of DNA evidence. We show by common probability theory that Leiterman is likely a victim of a tragic contamination event rather than a murderer. Making any calculation of the probability of guilt necessarily relies on subjective assumptions. The conclusion about Leiterman's innocence is not overly sensitive to those assumptions: the probability of innocence remains high for a wide range of reasonable assumptions. We note that cognitive psychologists may be well suited to make these calculations because as working scientists they may be comfortable with the role a reasonable degree of subjectivity plays in analysis.
The crux of the method: assumptions in ordinary least squares and logistic regression.
Long, Rebecca G
2008-10-01
Logistic regression has increasingly become the tool of choice when analyzing data with a binary dependent variable. While resources relating to the technique are widely available, clear discussions of why logistic regression should be used in place of ordinary least squares regression are difficult to find. The current paper compares and contrasts the assumptions of ordinary least squares with those of logistic regression and explains why logistic regression's looser assumptions make it adept at handling violations of the more important assumptions in ordinary least squares.
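As a quick illustration of why this matters, the sketch below fits both ordinary least squares (a linear probability model) and logistic regression to the same simulated binary outcome; the OLS fit returns predicted "probabilities" outside [0, 1], which the logistic model avoids by construction. The simulated data are an assumption, and statsmodels is used only for brevity.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
x = rng.uniform(-4.0, 4.0, 500)
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.2 * x)))       # true logistic relationship
y = rng.binomial(1, p)

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                          # linear probability model
logit = sm.Logit(y, X).fit(disp=0)                # logistic regression

grid_x = np.linspace(-4.0, 4.0, 9)
grid = sm.add_constant(grid_x)
print("x values                  :", np.round(grid_x, 1))
print("OLS fitted 'probabilities':", np.round(ols.predict(grid), 2))    # can fall outside [0, 1]
print("Logit fitted probabilities:", np.round(logit.predict(grid), 2))  # stays inside [0, 1]
```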
Access to resources shapes maternal decision making: evidence from a factorial vignette experiment.
Kushnick, Geoff
2013-01-01
The central assumption of behavioral ecology is that natural selection has shaped individuals with the capacity to make decisions that balance the fitness costs and benefits of behavior. A number of factors shape the fitness costs and benefits of maternal care, but we lack a clear understanding of how they, taken together, play a role in the decision-making process. In animal studies, the use of experimental methods has allowed for the tight control of these factors. Standard experimentation is inappropriate in human behavioral ecology, but vignette experiments may solve the problem. I used a confounded factorial vignette experiment to gather 640 third-party judgments about the maternal care decisions of hypothetical women and their children from 40 female Karo Batak respondents in rural Indonesia. This allowed me to test hypotheses derived from parental investment theory about the relative importance of five binary factors in shaping maternal care decisions with regard to two distinct scenarios. As predicted, access to resources--measured as the ability of a woman to provide food for her children--led to increased care. A handful of other factors conformed to prediction, but they were inconsistent across scenarios. The results suggest that mothers may use simple heuristics, rather than a full accounting for costs and benefits, to make decisions about maternal care. Vignettes have become a standard tool for studying decision making, but have made only modest inroads to evolutionarily informed studies of human behavior.
What difference do brain images make in US criminal trials?
Hardcastle, Valerie Gray; Lamb, Edward
2018-05-09
One of the early concerns regarding the use of neuroscience data in criminal trials is that even if the brain images are ambiguous or inconclusive, they still might influence a jury in virtue of the fact that they appear easy to understand. By appearing visually simple, even though they are really statistically constructed maps with a host of assumptions built into them, a lay jury or a judge might take brain scans to be more reliable or relevant than they actually are. Should courts exclude brain scans for being more prejudicial than probative? Herein, we rehearse a brief history of brain scans admitted into criminal trials in the United States, then describe the results of a recent analysis of appellate court decisions that referenced 1 or more brain scans in the judicial decision. In particular, we aim to explain how courts use neuroscience imaging data: Do they interpret the data correctly? Does it seem that scans play an oversized role in judicial decision-making? And have they changed how criminal defendants are judged? It is our hope that in answering these questions, clinicians and defence attorneys will be able to make better informed decisions regarding about how to manage those incarcerated. © 2018 John Wiley & Sons, Ltd.
Adversity magnifies the importance of social information in decision-making.
Pérez-Escudero, Alfonso; de Polavieja, Gonzalo G
2017-11-01
Decision-making theories explain animal behaviour, including human behaviour, as a response to estimations about the environment. In the case of collective behaviour, they have given quantitative predictions of how animals follow the majority option. However, they have so far failed to explain that in some species and contexts social cohesion increases when conditions become more adverse (i.e. individuals choose the majority option with higher probability when the estimated quality of all available options decreases). We have found that this failure is due to modelling simplifications that aided analysis, like low levels of stochasticity or the assumption that only one choice is the correct one. We provide a more general but simple geometric framework to describe optimal or suboptimal decisions in collectives that gives insight into three different mechanisms behind this effect. The three mechanisms have in common that the private information acts as a gain factor to social information: a decrease in the privately estimated quality of all available options increases the impact of social information, even when social information itself remains unchanged. This increase in the importance of social information makes it more likely that agents will follow the majority option. We show that these results quantitatively explain collective behaviour in fish and experiments of social influence in humans. © 2017 The Authors.
Non-cooperative game theory in biology and cooperative reasoning in humans.
Kabalak, Alihan; Smirnova, Elena; Jost, Jürgen
2015-06-01
The readiness for spontaneous cooperation together with the assumption that others share this cooperativity has been identified as a fundamental feature that distinguishes humans from other animals, including the great apes. At the same time, cooperativity presents an evolutionary puzzle because non-cooperators do better in a group of cooperators. We develop here an analysis of the process leading to cooperation in terms of rationality concepts, game theory and epistemic logic. We are, however, not attempting to reconstruct the actual evolutionary process. We rather want to provide the logical structure underlying cooperation in order to understand why cooperation is possible and what kind of reasoning and beliefs would lead to cooperative decision-making. Game theory depends on an underlying common belief in non-cooperative rationality of the players, and cooperativity similarly can utilize a common belief in cooperative rationality as its basis. We suggest a weaker concept of rational decision-making in games that encompasses both types of decision-making. We build this up in stages, starting from simple optimization, then using anticipation of the reaction of others, to finally arrive at reflexive and cooperative reasoning. While each stage is more difficult than the preceding, importantly, we also identify a reduction of complexity achieved by the consistent application of higher stage reasoning.
ERIC Educational Resources Information Center
Beal, Christine
1992-01-01
Describes typical differences in conversational routines in French and Australian English and kinds of tensions arising when speakers with two different sets of rules come into contact. Even simple questions contain a variety of assumptions ranging from whom it is suitable to ask to the kind of answer or the amount of detail that is expected. (13…
On an image reconstruction method for ECT
NASA Astrophysics Data System (ADS)
Sasamoto, Akira; Suzuki, Takayuki; Nishimura, Yoshihiro
2007-04-01
An image obtained by eddy current testing (ECT) is a blurred version of the original flaw shape. In order to reconstruct a fine flaw image, a new image reconstruction method has been proposed. This method is based on the assumption that a very simple relationship between the measured data and the source can be described by a convolution of a response function with the flaw shape. This assumption leads to a simple inverse analysis method based on deconvolution. In this method, the Point Spread Function (PSF) and Line Spread Function (LSF) play a key role in the deconvolution processing. This study proposes a simple data-processing procedure to determine the PSF and LSF from ECT data of a machined hole and a line flaw. In order to verify its validity, ECT data for a SUS316 plate (200x200x10 mm) with an artificial machined hole and a notch flaw were acquired by differential coil type sensors (produced by ZETEC Inc). Those data were analyzed by the proposed method. The proposed method restored a sharp image of discrete multiple holes from data in which the responses of the holes interfered, and the estimated width of the line flaw was much improved compared with the original experimental data. Although the proposed inverse analysis strategy is simple and easy to implement, its validity for holes and line flaws has been demonstrated by many results in which a much finer image than the original was reconstructed.
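A one-dimensional sketch of the deconvolution step under the convolution assumption described above (an illustration, not the authors' implementation): an assumed Gaussian line spread function is divided out in the Fourier domain with Tikhonov/Wiener-style regularization to limit noise amplification. The LSF width, noise level, and regularization constant are assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 256
x = np.arange(n)

flaw = ((x > 100) & (x < 110)).astype(float)         # "true" narrow line flaw profile
lsf = np.exp(-0.5 * ((x - n // 2) / 8.0) ** 2)        # assumed line spread function
lsf /= lsf.sum()

# Forward model: measurement = flaw convolved with the LSF, plus noise
H = np.fft.fft(np.fft.ifftshift(lsf))
meas = np.real(np.fft.ifft(np.fft.fft(flaw) * H)) + rng.normal(0.0, 0.005, n)

# Regularized (Wiener/Tikhonov-style) deconvolution in the Fourier domain
eps = 1e-3                                            # regularization constant (assumed)
est = np.real(np.fft.ifft(np.fft.fft(meas) * np.conj(H) / (np.abs(H) ** 2 + eps)))

def width_at_half_max(profile):
    """Crude width of the main peak: samples above half of the maximum."""
    return int(np.sum(profile > 0.5 * profile.max()))

print("apparent width of the blurred measurement:", width_at_half_max(meas))
print("width after regularized deconvolution    :", width_at_half_max(est))
```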
Ways to Help Divorced Parents Communicate Better on Behalf of Their Children.
ERIC Educational Resources Information Center
Marston, Stephanie
1994-01-01
Five keys to effective communication for divorced parents raising their children include being clear about what they want, keeping it simple, being businesslike, avoiding assumptions, and staying in the present. (SM)
Predictive performance models and multiple task performance
NASA Technical Reports Server (NTRS)
Wickens, Christopher D.; Larish, Inge; Contorer, Aaron
1989-01-01
Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are shown in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.
Analyses of School Commuting Data for Exposure Modeling Purposes
Human exposure models often make the simplifying assumption that school children attend school in the same Census tract where they live. This paper analyzes that assumption and provides information on the temporal and spatial distributions associated with school commuting. The d...
The problem of the second wind turbine - a note on a common but flawed wind power estimation method
NASA Astrophysics Data System (ADS)
Gans, F.; Miller, L. M.; Kleidon, A.
2012-06-01
Several recent wind power estimates suggest that this renewable energy resource can meet all of the current and future global energy demand with little impact on the atmosphere. These estimates are calculated using observed wind speeds in combination with specifications of wind turbine size and density to quantify the extractable wind power. However, this approach neglects the effects of momentum extraction by the turbines on the atmospheric flow that would have effects outside the turbine wake. Here we show with a simple momentum balance model of the atmospheric boundary layer that this common methodology to derive wind power potentials requires unrealistically high increases in the generation of kinetic energy by the atmosphere. This increase by an order of magnitude is needed to ensure momentum conservation in the atmospheric boundary layer. In the context of this simple model, we then compare the effect of three different assumptions regarding the boundary conditions at the top of the boundary layer, with prescribed hub height velocity, momentum transport, or kinetic energy transfer into the boundary layer. We then use simulations with an atmospheric general circulation model that explicitly simulate generation of kinetic energy with momentum conservation. These simulations show that the assumption of prescribed momentum import into the atmospheric boundary layer yields the most realistic behavior of the simple model, while the assumption of prescribed hub height velocity can clearly be disregarded. We also show that the assumptions yield similar estimates for extracted wind power when less than 10% of the kinetic energy flux in the boundary layer is extracted by the turbines. We conclude that the common method significantly overestimates wind power potentials by an order of magnitude in the limit of high wind power extraction. Ultimately, environmental constraints set the upper limit on wind power potential at larger scales rather than detailed engineering specifications of wind turbine design and placement.
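The core of the argument can be reproduced with a toy bulk momentum budget (a sketch under assumed parameter values, not the authors' boundary-layer model): if the momentum flux into the boundary layer is prescribed, adding turbine drag c_t on top of the surface drag C_d lowers the wind speed, and the extracted power peaks and then declines, whereas holding the hub-height velocity fixed (the common method) lets the estimate grow without bound.

```python
import numpy as np

rho, U0, Cd = 1.2, 8.0, 1.3e-3     # assumed air density, undisturbed wind speed, surface drag
tau = rho * Cd * U0 ** 2           # momentum flux sustained by the large-scale flow

for ct in Cd * np.array([0.1, 0.5, 1.0, 2.0, 5.0, 20.0]):    # added turbine drag coefficients
    # Common method: hub-height wind speed assumed unchanged by the turbines
    p_fixed_velocity = rho * ct * U0 ** 3
    # Momentum-constrained: rho*(Cd + ct)*U^2 = tau, so U drops as turbines are added
    U = np.sqrt(tau / (rho * (Cd + ct)))
    p_momentum = rho * ct * U ** 3
    print(f"ct/Cd = {ct / Cd:5.1f}  fixed-velocity P = {p_fixed_velocity:7.2f} W/m^2  "
          f"momentum-constrained P = {p_momentum:5.2f} W/m^2")
```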
Noise-cancellation-based nonuniformity correction algorithm for infrared focal-plane arrays.
Godoy, Sebastián E; Pezoa, Jorge E; Torres, Sergio N
2008-10-10
The spatial fixed-pattern noise (FPN) inherently generated in infrared (IR) imaging systems compromises severely the quality of the acquired imagery, even making such images inappropriate for some applications. The FPN refers to the inability of the photodetectors in the focal-plane array to render a uniform output image when a uniform-intensity scene is being imaged. We present a noise-cancellation-based algorithm that compensates for the additive component of the FPN. The proposed method relies on the assumption that a source of noise correlated to the additive FPN is available to the IR camera. An important feature of the algorithm is that all the calculations are reduced to a simple equation, which allows for the bias compensation of the raw imagery. The algorithm performance is tested using real IR image sequences and is compared to some classical methodologies. (c) 2008 Optical Society of America
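A minimal per-pixel sketch of noise-cancellation-based nonuniformity correction (an illustration of the general idea, not the authors' algorithm): each pixel's additive offset is modelled as an unknown gain times the reference noise source available to the camera, and an LMS update over frames estimates that gain so the bias can be subtracted from the raw imagery. The array size, noise statistics, and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)
frames, h, w = 400, 32, 32

ref = rng.normal(0.0, 1.0, frames)                  # reference noise available to the camera
gain = rng.normal(1.0, 1.0, (h, w))                 # unknown per-pixel coupling to that source
scene = 50.0 + 10.0 * np.sin(np.linspace(0, 6, frames))[:, None, None]   # uniform, slowly varying scene
raw = scene + gain[None, :, :] * ref[:, None, None] + rng.normal(0.0, 0.2, (frames, h, w))

# Per-pixel LMS noise cancellation: w_hat tracks each pixel's coupling to the reference
# (relative to the frame-average coupling), so the additive pattern can be subtracted.
w_hat = np.zeros((h, w))
mu = 0.05                                           # LMS step size (assumed)
corrected = np.empty_like(raw)
for k in range(frames):
    corrected[k] = raw[k] - w_hat * ref[k]          # bias-compensated frame
    err = corrected[k] - corrected[k].mean()        # remove the spatially uniform scene level
    w_hat += mu * err * ref[k]                      # LMS update

print("mean spatial nonuniformity, raw      :", round(float(raw.std(axis=(1, 2)).mean()), 2))
print("mean spatial nonuniformity, corrected:", round(float(corrected[100:].std(axis=(1, 2)).mean()), 2))
```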
Biology as population dynamics: heuristics for transmission risk.
Keebler, Daniel; Walwyn, David; Welte, Alex
2013-02-01
Population-type models, accounting for phenomena such as population lifetimes, mixing patterns, recruitment patterns, genetic evolution and environmental conditions, can be usefully applied to the biology of HIV infection and viral replication. A simple dynamic model can explore the effect of a vaccine-like stimulus on the mortality and infectiousness, which formally looks like fertility, of invading virions; the mortality of freshly infected cells; and the availability of target cells, all of which impact on the probability of infection. Variations on this model could capture the importance of the timing and duration of different key events in viral transmission, and hence be applied to questions of mucosal immunology. The dynamical insights and assumptions of such models are compatible with the continuum of between- and within-individual risks in sexual violence and may be helpful in making sense of the sparse data available on the association between HIV transmission and sexual violence. © 2012 John Wiley & Sons A/S.
Removal of the Gibbs phenomenon and its application to fast-Fourier-transform-based mode solvers.
Wangüemert-Pérez, J G; Godoy-Rubio, R; Ortega-Moñux, A; Molina-Fernández, I
2007-12-01
A simple strategy for accurately recovering discontinuous functions from their Fourier series coefficients is presented. The aim of the proposed approach, named spectrum splitting (SS), is to remove the Gibbs phenomenon by making use of signal-filtering-based concepts and some properties of the Fourier series. While the technique can be used in a vast range of situations, it is particularly suitable for being incorporated into fast-Fourier-transform-based electromagnetic mode solvers (FFT-MSs), which are known to suffer from very poor convergence rates when applied to situations where the field distributions are highly discontinuous (e.g., silicon-on-insulator photonic wires). The resultant method, SS-FFT-MS, is exhaustively tested under the assumption of a simplified one-dimensional model, clearly showing a dramatic improvement of the convergence rates with respect to the original FFT-based methods.
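For context, the sketch below reproduces the Gibbs phenomenon for a truncated Fourier series of a step and applies one standard mitigation, Lanczos sigma factors; this is only an illustration of the problem being addressed, not the spectrum-splitting technique proposed in the paper.

```python
import numpy as np

N = 64                                        # number of retained Fourier modes
x = np.linspace(-np.pi, np.pi, 2001, endpoint=False)
f = np.sign(x)                                # discontinuous target (unit square step)

# Fourier sine series of sign(x): coefficients 4/(pi*k) for odd k, 0 for even k
k = np.arange(1, N + 1)
bk = np.where(k % 2 == 1, 4.0 / (np.pi * k), 0.0)

partial = (bk[None, :] * np.sin(np.outer(x, k))).sum(axis=1)     # raw truncated series
sigma = np.sinc(k / (N + 1))                                     # Lanczos sigma factors
smoothed = ((bk * sigma)[None, :] * np.sin(np.outer(x, k))).sum(axis=1)

print("max overshoot, raw truncation :", f"{np.max(partial) - 1:.3f}")   # ~0.18, about 9% of the jump
print("max overshoot, sigma-filtered :", f"{np.max(smoothed) - 1:.3f}")
```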
Vernon, John A; Hughen, W Keener; Johnson, Scott J
2005-05-01
In the face of significant real healthcare cost inflation, pressured budgets, and ongoing launches of myriad technology of uncertain value, payers have formalized new valuation techniques that represent a barrier to entry for drugs. Cost-effectiveness analysis predominates among these methods, which involves differencing a new technological intervention's marginal costs and benefits with a comparator's, and comparing the resulting ratio to a payer's willingness-to-pay threshold. In this paper we describe how firms are able to model the feasible range of future product prices when making in-licensing and developmental Go/No-Go decisions by considering payers' use of the cost-effectiveness method. We illustrate this analytic method with a simple deterministic example and then incorporate stochastic assumptions using both analytic and simulation methods. Using this strategic approach, firms may reduce product development and in-licensing risk.
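A minimal deterministic-plus-Monte-Carlo sketch of the pricing logic described: given a payer willingness-to-pay threshold, an incremental effectiveness, and non-drug incremental costs, the maximum reimbursable price is the one at which the incremental cost-effectiveness ratio equals the threshold. All numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)
wtp = 50_000.0             # payer willingness-to-pay threshold per unit of effect (hypothetical)
dE = 0.30                  # incremental effectiveness vs the comparator (hypothetical)
other_dC = 2_000.0         # incremental non-drug costs (hypothetical)
comparator_drug = 5_000.0  # comparator drug cost (hypothetical)

# Deterministic: the price at which (price + other_dC - comparator_drug) / dE equals the threshold
p_max = wtp * dE - other_dC + comparator_drug
print(f"deterministic maximum price: ${p_max:,.0f}")

# Stochastic: propagate uncertainty in effectiveness and non-drug costs
dE_draws = rng.normal(0.30, 0.08, 100_000).clip(min=0.01)
oc_draws = rng.normal(2_000.0, 600.0, 100_000)
p_draws = wtp * dE_draws - oc_draws + comparator_drug
lo, hi = np.percentile(p_draws, [5, 95])
print(f"5th-95th percentile of the maximum price: ${lo:,.0f} to ${hi:,.0f}")
```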
NASA Astrophysics Data System (ADS)
Worster, Grae; Huppert, Herbert; Robison, Rosalyn; Nandkishore, Rahul; Rajah, Luke
2008-11-01
We have used simple laboratory experiments with viscous fluids to explore the dynamics of grounding lines between Antarctic marine ice sheets and the freely floating ice shelves into which they develop. Ice sheets are shear-dominated gravity currents, while ice shelves are extensional gravity currents with zero shear to leading order. Though ice sheets have non-Newtonian rheology, fundamental aspects of their flow can be explored using Newtonian fluid mechanics. We have derived a mathematical model of this flow that incorporates a new dynamic boundary condition for the position of the grounding line, where the gravity current loses contact with the solid base. Good agreement between our theoretical predictions and our experimental measurements, made using gravity currents of syrup flowing down a rigid slope into a deep, dense salt solution, gives confidence in the fundamental assumptions of our model, which can be incorporated into shallow-ice models to make important predictions regarding the dynamical stability of marine ice sheets.
Transmission Parameters of the 2001 Foot and Mouth Epidemic in Great Britain
Chis Ster, Irina; Ferguson, Neil M.
2007-01-01
Despite intensive ongoing research, key aspects of the spatial-temporal evolution of the 2001 foot and mouth disease (FMD) epidemic in Great Britain (GB) remain unexplained. Here we develop a Markov Chain Monte Carlo (MCMC) method for estimating epidemiological parameters of the 2001 outbreak for a range of simple transmission models. We make the simplifying assumption that infectious farms were completely observed in 2001, equivalent to assuming that farms that were proactively culled but not diagnosed with FMD were not infectious, even if some were infected. We estimate how transmission parameters varied through time, highlighting the impact of the control measures on the progression of the epidemic. We demonstrate statistically significant evidence for assortative contact patterns between animals of the same species. Predictive risk maps of the transmission potential in different geographic areas of GB are presented for the fitted models. PMID:17551582
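Not the authors' farm-level spatial model, but a minimal sketch of the estimation machinery: a random-walk Metropolis sampler for a single transmission parameter in a discrete-time chain-binomial epidemic, with simulated data standing in for the observed case series; the population size, recovery rate, proposal width, and chain length are assumptions.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(12)
N, beta_true, gamma, T = 5000, 0.35, 0.15, 80     # population, transmission, recovery, time steps

# Simulate a discrete-time chain-binomial SIR epidemic to stand in for observed data
S, I = N - 5, 5
new_cases, S_traj, I_traj = [], [], []
for _ in range(T):
    p_inf = 1.0 - np.exp(-beta_true * I / N)
    c = rng.binomial(S, p_inf)                    # new infections this step
    r = rng.binomial(I, 1.0 - np.exp(-gamma))     # recoveries this step
    new_cases.append(c); S_traj.append(S); I_traj.append(I)
    S, I = S - c, I + c - r
new_cases, S_traj, I_traj = map(np.array, (new_cases, S_traj, I_traj))

def log_lik(beta):
    p = 1.0 - np.exp(-beta * I_traj / N)
    return binom.logpmf(new_cases, S_traj, p).sum()

# Random-walk Metropolis for beta (flat prior on beta > 0)
beta, samples = 0.2, []
ll = log_lik(beta)
for _ in range(10_000):
    prop = beta + rng.normal(0.0, 0.02)
    if prop > 0:
        ll_prop = log_lik(prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            beta, ll = prop, ll_prop
    samples.append(beta)

post = np.array(samples[2000:])
print(f"true beta {beta_true}, posterior mean {post.mean():.3f}, "
      f"95% interval {np.percentile(post, 2.5):.3f} to {np.percentile(post, 97.5):.3f}")
```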
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sabitov, I Kh
The subject of this article is one of the most important questions of classical geometry: the theory of bendings and infinitesimal bendings of surfaces. These questions are studied for surfaces of revolution and, unlike previous well-known works, we make only minimal smoothness assumptions (the class C^1) in the initial part of our study. In this class we prove local existence and uniqueness theorems for infinitesimal bendings. We then consider the analytic class and establish simple criteria for rigidity and inflexibility of compact surfaces. These criteria depend on the values of certain integer characteristics related to the order of flattening of the surface at its poles. We also show that in the nonanalytic situation there exist nonrigid surfaces with any given order of flattening at the poles. Bibliography: 22 titles.
The overconstraint of response time models: rethinking the scaling problem.
Donkin, Chris; Brown, Scott D; Heathcote, Andrew
2009-12-01
Theories of choice response time (RT) provide insight into the psychological underpinnings of simple decisions. Evidence accumulation (or sequential sampling) models are the most successful theories of choice RT. These models all have the same "scaling" property--that a subset of their parameters can be multiplied by the same amount without changing their predictions. This property means that a single parameter must be fixed to allow the estimation of the remaining parameters. In the present article, we show that the traditional solution to this problem has overconstrained these models, unnecessarily restricting their ability to account for data and making implicit--and therefore unexamined--psychological assumptions. We show that versions of these models that address the scaling problem in a minimal way can provide a better description of data than can their overconstrained counterparts, even when increased model complexity is taken into account.
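The scaling property can be demonstrated directly: in the sketch below, a generic Wiener-type accumulator (not any specific published model's full parameterization) is simulated twice, with drift, noise, start point, and boundary all multiplied by the same constant, and the choices and response times are unchanged.

```python
# Demonstration of the scaling property on a generic Wiener-type accumulator
# (not any specific published model's full design): multiplying drift, noise,
# start point, and boundary by the same constant leaves choices and response
# times unchanged.
import numpy as np

def simulate(v, a, z, s, n=5_000, dt=0.001, seed=0):
    rng = np.random.default_rng(seed)
    rts, upper = [], []
    for _ in range(n):
        x, t = z, 0.0
        while 0.0 < x < a:
            x += v * dt + s * np.sqrt(dt) * rng.normal()
            t += dt
        rts.append(t)
        upper.append(x >= a)                 # True = hit the upper boundary
    return np.mean(upper), np.mean(rts)

for k in (1.0, 2.0):                         # scale every parameter by k
    p_upper, mean_rt = simulate(v=1.0 * k, a=1.0 * k, z=0.5 * k, s=1.0 * k)
    print(f"scale {k:.0f}: P(upper) = {p_upper:.3f}, mean RT = {mean_rt:.3f} s")
# With the same random seed the two rows are identical, because every
# trajectory is simply rescaled by k.
```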
Quantifying the effects of social influence
Mavrodiev, Pavlin; Tessone, Claudio J.; Schweitzer, Frank
2013-01-01
How do humans respond to indirect social influence when making decisions? We analysed an experiment where subjects had to guess the answer to factual questions, having only aggregated information about the answers of others. While the response of humans to aggregated information is a widely observed phenomenon, it has not been investigated quantitatively, in a controlled setting. We found that the adjustment of individual guesses depends linearly on the distance to the mean of all guesses. This is a remarkable, and yet surprisingly simple regularity. It holds across all questions analysed, even though the correct answers differ by several orders of magnitude. Our finding supports the assumption that individual diversity does not affect the response to indirect social influence. We argue that the nature of the response crucially changes with the level of information aggregation. This insight contributes to the empirical foundation of models for collective decisions under social influence. PMID:23449043
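The reported regularity amounts to a one-line update rule, sketched below; the adjustment fraction alpha is hypothetical, whereas the paper estimates the slope of the linear adjustment empirically.

```python
# One-line sketch of the reported regularity: a subject moves a fraction alpha
# of the way toward the group mean.  alpha = 0.3 is hypothetical; the paper
# estimates the slope of this linear adjustment from the data.
def updated_guess(own_guess, group_mean, alpha=0.3):
    return own_guess + alpha * (group_mean - own_guess)

print(updated_guess(own_guess=100.0, group_mean=160.0))   # -> 118.0
```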
Gene Ontology: Pitfalls, Biases, and Remedies.
Gaudet, Pascale; Dessimoz, Christophe
2017-01-01
The Gene Ontology (GO) is a formidable resource, but there are several considerations about it that are essential to understand the data and interpret it correctly. The GO is sufficiently simple that it can be used without deep understanding of its structure or how it is developed, which is both a strength and a weakness. In this chapter, we discuss some common misinterpretations of the ontology and the annotations. A better understanding of the pitfalls and the biases in the GO should help users make the most of this very rich resource. We also review some of the misconceptions and misleading assumptions commonly made about GO, including the effect of data incompleteness, the importance of annotation qualifiers, and the transitivity or lack thereof associated with different ontology relations. We also discuss several biases that can confound aggregate analyses such as gene enrichment analyses. For each of these pitfalls and biases, we suggest remedies and best practices.
Test images for the maximum entropy image restoration method
NASA Technical Reports Server (NTRS)
Mackey, James E.
1990-01-01
One of the major activities of any experimentalist is data analysis and reduction. In solar physics, remote observations are made of the sun in a variety of wavelengths and circumstances. In no case is the data collected free from the influence of the design and operation of the data gathering instrument as well as the ever present problem of noise. The presence of significant noise invalidates the simple inversion procedure regardless of the range of known correlation functions. The Maximum Entropy Method (MEM) attempts to perform this inversion by making minimal assumptions about the data. To provide a means of testing the MEM and characterizing its sensitivity to noise, choice of point spread function, type of data, etc., one would like to have test images of known characteristics that can represent the type of data being analyzed. A means of reconstructing these images is presented.
Empirical Bayes Estimation of Coalescence Times from Nucleotide Sequence Data.
King, Leandra; Wakeley, John
2016-09-01
We demonstrate the advantages of using information at many unlinked loci to better calibrate estimates of the time to the most recent common ancestor (TMRCA) at a given locus. To this end, we apply a simple empirical Bayes method to estimate the TMRCA. This method is both asymptotically optimal, in the sense that the estimator converges to the true value when the number of unlinked loci for which we have information is large, and has the advantage of not making any assumptions about demographic history. The algorithm works as follows: we first split the sample at each locus into inferred left and right clades to obtain many estimates of the TMRCA, which we can average to obtain an initial estimate of the TMRCA. We then use nucleotide sequence data from other unlinked loci to form an empirical distribution that we can use to improve this initial estimate. Copyright © 2016 by the Genetics Society of America.
Deconstructing Community for Conservation: Why Simple Assumptions are Not Sufficient.
Waylen, Kerry Ann; Fischer, Anke; McGowan, Philip J K; Milner-Gulland, E J
2013-01-01
Many conservation policies advocate engagement with local people, but conservation practice has sometimes been criticised for a simplistic understanding of communities and social context. To counter this, this paper explores social structuring and its influences on conservation-related behaviours at the site of a conservation intervention near Pipar forest, within the Seti Khola valley, Nepal. Qualitative and quantitative data from questionnaires and Rapid Rural Appraisal demonstrate how links between groups directly and indirectly influence behaviours of conservation relevance (including existing and potential resource-use and proconservation activities). For low-status groups the harvesting of resources can be driven by others' preference for wild foods, whilst perceptions of elite benefit-capture may cause reluctance to engage with future conservation interventions. The findings reiterate the need to avoid relying on simple assumptions about 'community' in conservation, and particularly the relevance of understanding relationships between groups, in order to understand natural resource use and implications for conservation.
Moore, Julia L; Remais, Justin V
2014-03-01
Developmental models that account for the metabolic effect of temperature variability on poikilotherms, such as degree-day models, have been widely used to study organism emergence, range and development, particularly in agricultural and vector-borne disease contexts. Though such models are simple and easy to use, structural and parametric issues can influence their outputs, often substantially. Because the underlying assumptions and limitations of these models have rarely been considered, this paper reviews the structural, parametric, and experimental issues that arise when using degree-day models, including the implications of particular structural or parametric choices, as well as assumptions that underlie commonly used models. Linear and non-linear developmental functions are compared, as are common methods used to incorporate temperature thresholds and calculate daily degree-days. Substantial differences in predicted emergence time arose when using linear versus non-linear developmental functions to model emergence in a model organism. The optimal method for calculating degree-days depends upon where key temperature threshold parameters fall relative to the daily minimum and maximum temperatures, as well as the shape of the daily temperature curve. No method is shown to be universally superior, though one commonly used method, the daily average method, consistently provides accurate results. The sensitivity of model projections to these methodological issues highlights the need to make structural and parametric selections based on a careful consideration of the specific biological response of the organism under study, and the specific temperature conditions of the geographic regions of interest. When degree-day model limitations are considered and model assumptions met, the models can be a powerful tool for studying temperature-dependent development.
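As a concrete anchor for the comparison above, the sketch below implements the daily average degree-day calculation in its simplest form; the thresholds and temperatures are hypothetical, and the other accumulation methods discussed in the review differ in how they treat the thresholds and the shape of the daily temperature curve.

```python
# Sketch of the "daily average" method mentioned above: degree-days accrue as
# the amount by which the daily mean temperature exceeds a lower development
# threshold, optionally capped at an upper threshold.  Thresholds and
# temperatures are hypothetical.
def daily_average_degree_days(t_min, t_max, lower=10.0, upper=30.0):
    t_mean = min((t_min + t_max) / 2.0, upper)   # cap at the upper threshold
    return max(0.0, t_mean - lower)

week = [(8, 18), (12, 26), (15, 33), (5, 9), (10, 24), (14, 28), (16, 35)]
print(sum(daily_average_degree_days(lo, hi) for lo, hi in week))
```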
The Doctor Is In! Diagnostic Analysis.
Jupiter, Daniel C
To make meaningful inferences based on our regression models, we must ensure that we have met the necessary assumptions of these tests. In this commentary, we review these assumptions and those for the t-test and analysis of variance, and introduce a variety of methods, formal and informal, numeric and visual, for assessing conformity with the assumptions. Copyright © 2018 The American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
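A minimal numeric companion to the commentary's point, assuming simulated data: fit a line, then probe two common assumptions (normality of residuals and roughly constant spread). This is only an illustration, not the commentary's full visual and formal toolkit.

```python
# Illustration with simulated data: fit a simple regression, then probe two
# common assumptions (normal residuals, roughly constant spread).  Not the
# commentary's full set of formal and visual diagnostics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, 200)

slope, intercept = np.polyfit(x, y, 1)
residuals = y - (intercept + slope * x)

print("Shapiro-Wilk p-value:", stats.shapiro(residuals).pvalue)
# Crude homoscedasticity check: residual spread for low versus high x.
print("residual SD (x < 5, x >= 5):",
      residuals[x < 5].std(), residuals[x >= 5].std())
```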
Publish unexpected results that conflict with assumptions
USDA-ARS?s Scientific Manuscript database
Some widely held scientific assumptions have been discredited, whereas others are just inappropriate for many applications. Sometimes, a widely-held analysis procedure takes on a life of its own, forgetting the original purpose of the analysis. The peer-reviewed system makes it difficult to get a pa...
Of mental models, assumptions and heuristics: The case of acids and acid strength
NASA Astrophysics Data System (ADS)
McClary, Lakeisha Michelle
This study explored what cognitive resources (i.e., units of knowledge necessary to learn) first-semester organic chemistry students used to make decisions about acid strength and how those resources guided the prediction, explanation and justification of trends in acid strength. We were specifically interested in identifying and characterizing the mental models, assumptions and heuristics that students relied upon to make their decisions, in most cases under time constraints. The views about acids and acid strength were investigated for twenty undergraduate students. Data sources for this study included written responses and individual interviews. The data were analyzed using a qualitative methodology to answer five research questions. Data analysis regarding these research questions was based on existing theoretical frameworks: problem representation (Chi, Feltovich & Glaser, 1981), mental models (Johnson-Laird, 1983), intuitive assumptions (Talanquer, 2006), and heuristics (Evans, 2008). These frameworks were combined to develop the framework from which our data were analyzed. Results indicated that first-semester organic chemistry students' use of cognitive resources was complex and dependent on their understanding of the behavior of acids. Expressed mental models were generated using prior knowledge and assumptions about acids and acid strength; these models were then employed to make decisions. Explicit and implicit features of the compounds in each task mediated participants' attention, which triggered the use of a very limited number of heuristics, or shortcut reasoning strategies. Many students, however, were able to apply more effortful analytic reasoning, though correct trends were predicted infrequently. Most students continued to use their mental models, assumptions and heuristics to explain a given trend in acid strength and to justify their predicted trends, but the tasks influenced a few students to shift from one model to another. An emergent finding from this project was that the problem representation greatly influenced students' ability to make correct predictions in acid strength.
Elf, Johan
2016-04-27
A new, game-changing approach makes it possible to rigorously disprove models without making assumptions about the unknown parts of the biological system. Copyright © 2016 Elsevier Inc. All rights reserved.
Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis; Gold, Dara
2013-01-01
We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
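For orientation, the sketch below implements the textbook Wald SPRT for a stream of Gaussian observations; the specialized likelihood ratio in terms of collision-probability estimates derived in the paper is not reproduced here.

```python
# Textbook Wald sequential probability ratio test for Gaussian observations
# (H0: mean 0 versus H1: mean 1, unit variance).  The paper's specialized
# likelihood ratio in terms of collision-probability estimates is not shown.
import numpy as np

def wald_sprt(xs, mu0=0.0, mu1=1.0, alpha=0.01, beta=0.01):
    upper = np.log((1 - beta) / alpha)    # cross above: accept H1
    lower = np.log(beta / (1 - alpha))    # cross below: accept H0
    llr = 0.0
    for n, x in enumerate(xs, start=1):
        # log-likelihood-ratio increment for unit-variance Gaussians
        llr += (mu1 - mu0) * x - 0.5 * (mu1**2 - mu0**2)
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue sampling", len(xs)

rng = np.random.default_rng(3)
print(wald_sprt(rng.normal(1.0, 1.0, size=200)))   # data generated under H1
```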
Stirling Engine External Heat System Design with Heat Pipe Heater.
1986-07-01
Figure 10. However, the evaporator analysis is greatly simplified by making the conservative assumption of constant heat flux. This assumption results in... [nomenclature fragment: ROM, density of the metal (g/cm³); CAPM, specific heat of the metal (cal/(g·K)); ETHG, effective gauze thickness...]
Edemagenic gain and interstitial fluid volume regulation.
Dongaonkar, R M; Quick, C M; Stewart, R H; Drake, R E; Cox, C S; Laine, G A
2008-02-01
Under physiological conditions, interstitial fluid volume is tightly regulated by balancing microvascular filtration and lymphatic return to the central venous circulation. Even though microvascular filtration and lymphatic return are governed by conservation of mass, their interaction can result in exceedingly complex behavior. Without making simplifying assumptions, investigators must solve the fluid balance equations numerically, which limits the generality of the results. We thus made critical simplifying assumptions to develop a simple solution to the standard fluid balance equations that is expressed as an algebraic formula. Using a classical approach to describe systems with negative feedback, we formulated our solution as a "gain" relating the change in interstitial fluid volume to a change in effective microvascular driving pressure. The resulting "edemagenic gain" is a function of microvascular filtration coefficient (K(f)), effective lymphatic resistance (R(L)), and interstitial compliance (C). This formulation suggests two types of gain: "multivariate" dependent on C, R(L), and K(f), and "compliance-dominated" approximately equal to C. The latter forms a basis of a novel method to estimate C without measuring interstitial fluid pressure. Data from ovine experiments illustrate how edemagenic gain is altered with pulmonary edema induced by venous hypertension, histamine, and endotoxin. Reformulation of the classical equations governing fluid balance in terms of edemagenic gain thus yields new insight into the factors affecting an organ's susceptibility to edema.
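A hedged reconstruction of a gain of the kind described: balancing filtration against lymphatic return at steady state and multiplying by compliance yields G = C·Kf·RL/(1 + Kf·RL), which reduces to approximately C when Kf·RL is large, consistent with the "compliance-dominated" limit mentioned above. The exact published expression may differ; this form is an assumption made for illustration.

```python
# Hedged reconstruction (not necessarily the published formula): balancing
# filtration Kf*(dP - dPi) against lymphatic return dPi/RL at steady state and
# using dV = C*dPi gives an edemagenic gain G = C*Kf*RL / (1 + Kf*RL), which
# tends to C when Kf*RL is large, matching the "compliance-dominated" limit.
def edemagenic_gain(C, Kf, RL):
    return C * Kf * RL / (1.0 + Kf * RL)

print(edemagenic_gain(C=1.0, Kf=0.5, RL=100.0))   # ~0.98*C, compliance-dominated
print(edemagenic_gain(C=1.0, Kf=0.5, RL=1.0))     # ~0.33*C, multivariate regime
```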
Renewable resources in the chemical industry--breaking away from oil?
Nordhoff, Stefan; Höcker, Hans; Gebhardt, Henrike
2007-12-01
Rising prices for fossil-based raw materials suggest that sooner or later renewable raw materials will, in principle, become economically viable. This paper examines this widespread paradigm. Price linkages like those seen for decades particularly in connection with petrochemical raw materials are now increasingly affecting renewable raw materials. The main driving force is the competing utilisation as an energy source because both fossil-based and renewable raw materials are used primarily for heat, electrical power and mobility. As a result, prices are determined by energy utilisation. Simple observations show how prices for renewable carbon sources are becoming linked to the crude oil price. Whether the application calls for sugar, starch, virgin oils or lignocellulose, the price for the raw material rises with the oil price. Consequently, expectations regarding price trends for fossil-based energy sources can also be utilised for the valuation of alternative processes. However, this seriously calls into question the assumption that a rising crude oil price will favour the economic viability of alternative products and processes based on renewable raw materials. Conversely, it follows that these products and processes must demonstrate economic viability today. Especially in connection with new approaches in white biotechnology, it is evident that, under realistic assumptions, particularly in terms of achievable yields and the optimisation potential of the underlying processes, the route to utilisation is economically viable. This makes the paradigm mentioned at the outset at least very questionable.
Trustworthiness of detectors in quantum key distribution with untrusted detectors
Qi, Bing
2015-02-25
Measurement-device-independent quantum key distribution (MDI-QKD) protocol has been demonstrated as a viable solution to detector side-channel attacks. One of the main advantages of MDI-QKD is that the security can be proved without making any assumptions about how the measurement device works. The price to pay is the relatively low secure key rate compared with conventional quantum key distribution (QKD), such as the decoy-state BB84 protocol. Recently a new QKD protocol, aiming at bridging the strong security of MDI-QKD with the high efficiency of conventional QKD, has been proposed. In this protocol, the legitimate receiver employs a trusted linear optics network to encode information on photons received from an insecure quantum channel, and then performs a Bell state measurement (BSM) using untrusted detectors. One crucial assumption made in most of these studies is that the untrusted BSM located inside the receiver's laboratory cannot send any unwanted information to the outside. In this paper, we show that if the BSM is completely untrusted, a simple scheme would allow the BSM to send information to the outside. Combined with Trojan horse attacks, this scheme could allow Eve to gain information of the quantum key without being detected. Ultimately, to prevent the above attack, either countermeasures to Trojan horse attacks or some trustworthiness to the "untrusted" BSM device is required.
He, Xin; Frey, Eric C
2006-08-01
Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
Assessing the fit of site-occupancy models
MacKenzie, D.I.; Bailey, L.L.
2004-01-01
Few species are likely to be so evident that they will always be detected at a site when present. Recently a model has been developed that enables estimation of the proportion of area occupied, when the target species is not detected with certainty. Here we apply this modeling approach to data collected on terrestrial salamanders in the Plethodon glutinosus complex in the Great Smoky Mountains National Park, USA, and wish to address the question 'how accurately does the fitted model represent the data?' The goodness-of-fit of the model needs to be assessed in order to make accurate inferences. This article presents a method where a simple Pearson chi-square statistic is calculated and a parametric bootstrap procedure is used to determine whether the observed statistic is unusually large. We found evidence that the most global model considered provides a poor fit to the data, hence estimated an overdispersion factor to adjust model selection procedures and inflate standard errors. Two hypothetical datasets with known assumption violations are also analyzed, illustrating that the method may be used to guide researchers toward making appropriate inferences. The results of a simulation study are presented to provide a broader view of the method's properties.
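The bootstrap logic generalizes beyond occupancy models; the sketch below applies the same recipe (fit, compute a Pearson chi-square statistic, simulate from the fitted model, refit, compare) to a deliberately simple binomial detection model with made-up data.

```python
# Parametric-bootstrap goodness-of-fit sketch on a deliberately simple binomial
# detection model with made-up data; the occupancy model itself is richer, but
# the recipe (fit, compute Pearson X2, simulate, refit, compare) is the same.
import numpy as np
from math import comb

rng = np.random.default_rng(7)
n_visits, n_sites = 5, 60
counts = rng.binomial(n_visits, 0.4, size=n_sites)   # detections per site

def pearson_chi2(counts, p):
    observed = np.bincount(counts, minlength=n_visits + 1)
    expected = n_sites * np.array(
        [comb(n_visits, i) * p**i * (1 - p)**(n_visits - i)
         for i in range(n_visits + 1)])
    return np.sum((observed - expected) ** 2 / expected)

p_hat = counts.mean() / n_visits
observed_stat = pearson_chi2(counts, p_hat)

boot = []
for _ in range(1000):
    sim = rng.binomial(n_visits, p_hat, size=n_sites)
    boot.append(pearson_chi2(sim, sim.mean() / n_visits))   # refit, recompute

print("bootstrap p-value:", np.mean(np.array(boot) >= observed_stat))
```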
Pereira, Luis M
2010-06-01
Pharmacokinetics (PK) has been traditionally dealt with under the homogeneity assumption. However, biological systems are nowadays comprehensively understood as being inherently fractal. Specifically, the microenvironments where drug molecules interact with membrane interfaces, metabolic enzymes or pharmacological receptors, are unanimously recognized as unstirred, space-restricted, heterogeneous and geometrically fractal. Therefore, classical Fickean diffusion and the notion of the compartment as a homogeneous kinetic space must be revisited. Diffusion in fractal spaces has been studied for a long time making use of fractional calculus and expanding on the notion of dimension. Combining this new paradigm with the need to describe and explain experimental data results in defining time-dependent rate constants with a characteristic fractal exponent. Under the one-compartment simplification this strategy is straightforward. However, precisely due to the heterogeneity of the underlying biology, often at least a two-compartment model is required to address macroscopic data such as drug concentrations. This simple modelling step-up implies significant analytical and numerical complications. However, a few methods are available that make possible the original desideratum. In fact, exploring the full range of parametric possibilities and looking at different drugs and respective biological concentrations, it may be concluded that all PK modelling approaches are indeed particular cases of the fractal PK theory.
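The usual fractal-kinetics device is a time-dependent rate constant of power-law form; whether the paper's two-compartment treatment reduces to exactly this one-compartment case is an assumption here, but the sketch shows how the fractal exponent h changes the concentration decay relative to classical kinetics.

```python
# Sketch of the usual fractal-kinetics device: a time-dependent elimination
# rate k(t) = k1 * t**(-h) in a one-compartment model, which integrates to
# C(t) = C0 * exp(-k1 * t**(1-h) / (1-h)) for 0 < h < 1.  Parameter values are
# hypothetical, and the paper's two-compartment treatment is more involved.
import numpy as np

def fractal_concentration(t, c0=10.0, k1=0.8, h=0.3):
    return c0 * np.exp(-k1 * t ** (1.0 - h) / (1.0 - h))

for t in (0.5, 1.0, 2.0, 4.0, 8.0):
    classical = 10.0 * np.exp(-0.8 * t)            # constant-rate comparison
    print(f"t = {t:3.1f} h: classical {classical:6.3f}, fractal "
          f"{fractal_concentration(t):6.3f}")
```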
The EPR paradox, Bell's inequality, and the question of locality
NASA Astrophysics Data System (ADS)
Blaylock, Guy
2010-01-01
Most physicists agree that the Einstein-Podolsky-Rosen-Bell paradox exemplifies much of the strange behavior of quantum mechanics, but argument persists about what assumptions underlie the paradox. To clarify what the debate is about, we employ a simple and well-known thought experiment involving two correlated photons to help us focus on the logical assumptions needed to construct the EPR and Bell arguments. The view presented in this paper is that the minimal assumptions behind Bell's inequality are locality and counterfactual definiteness but not scientific realism, determinism, or hidden variables as are often suggested. We further examine the resulting constraints on physical theory with an illustration from the many-worlds interpretation of quantum mechanics—an interpretation that we argue is deterministic, local, and realist but that nonetheless violates the Bell inequality.
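The standard numeric check behind such discussions, included here only as a worked example: with the quantum correlation E(a, b) = cos 2(a - b) for a polarization-entangled photon pair, the CHSH combination exceeds the bound of 2 obeyed by any local, counterfactually definite model.

```python
# Standard worked example: quantum polarization correlations for an entangled
# photon pair, E(a, b) = cos(2(a - b)), violate the CHSH form of Bell's
# inequality, whose bound is 2 for any local, counterfactually definite model.
import numpy as np

def E(a, b):
    return np.cos(2.0 * (a - b))

a, a2 = 0.0, np.pi / 4
b, b2 = np.pi / 8, 3 * np.pi / 8
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print("CHSH value S =", S)        # 2*sqrt(2) ~ 2.83 > 2
```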
NASA Technical Reports Server (NTRS)
Gordon, Diana F.
1992-01-01
Selecting a good bias prior to concept learning can be difficult. Therefore, dynamic bias adjustment is becoming increasingly popular. Current dynamic bias adjustment systems, however, are limited in their ability to identify erroneous assumptions about the relationship between the bias and the target concept. Without proper diagnosis, it is difficult to identify and then remedy faulty assumptions. We have developed an approach that makes these assumptions explicit, actively tests them with queries to an oracle, and adjusts the bias based on the test results.
Analytic derivation of bacterial growth laws from a simple model of intracellular chemical dynamics.
Pandey, Parth Pratim; Jain, Sanjay
2016-09-01
Experiments have found that the growth rate and certain other macroscopic properties of bacterial cells in steady-state cultures depend upon the medium in a surprisingly simple manner; these dependencies are referred to as 'growth laws'. Here we construct a dynamical model of interacting intracellular populations to understand some of the growth laws. The model has only three population variables: an amino acid pool, a pool of enzymes that transport an external nutrient and produce the amino acids, and ribosomes that catalyze their own and the enzymes' production from the amino acids. We assume that the cell allocates its resources between the enzyme sector and the ribosomal sector to maximize its growth rate. We show that the empirical growth laws follow from this assumption and derive analytic expressions for the phenomenological parameters in terms of the more basic model parameters. Interestingly, the maximization of the growth rate of the cell as a whole implies that the cell allocates resources to the enzyme and ribosomal sectors in inverse proportion to their respective 'efficiencies'. The work introduces a mathematical scheme in which the cellular growth rate can be explicitly determined and shows that two large parameters, the number of amino acid residues per enzyme and per ribosome, are useful for making approximations.
Trujillo, Caleb; Cooper, Melanie M; Klymkowsky, Michael W
2012-01-01
Biological systems, from the molecular to the ecological, involve dynamic interaction networks. To examine student thinking about networks we used graphical responses, since they are easier to evaluate for implied, but unarticulated, assumptions. Senior college level molecular biology students were presented with simple molecular level scenarios; surprisingly, most students failed to articulate the basic assumptions needed to generate reasonable graphical representations, and their graphs often contradicted their explicit assumptions. We then developed a tiered Socratic tutorial characterized by leading questions (prompts) designed to provoke metacognitive reflection. When applied in a group or individual setting, there was clear improvement in targeted areas. Our results highlight the promise of using graphical responses and Socratic prompts in a tutorial context as both a formative assessment for students and an informative feedback system for instructors, in part because graphical responses are relatively easy to evaluate for implied, but unarticulated, assumptions. Copyright © 2011 Wiley Periodicals, Inc.
Effect of source tampering in the security of quantum cryptography
NASA Astrophysics Data System (ADS)
Sun, Shi-Hai; Xu, Feihu; Jiang, Mu-Sheng; Ma, Xiang-Chun; Lo, Hoi-Kwong; Liang, Lin-Mei
2015-08-01
The security of the source has become an increasingly important issue in quantum cryptography. Based on the framework of measurement-device-independent quantum key distribution (MDI-QKD), the source becomes the only region exploitable by a potential eavesdropper (Eve). Phase randomization is a cornerstone assumption in most discrete-variable (DV) quantum communication protocols (e.g., QKD, quantum coin tossing, weak-coherent-state blind quantum computing, and so on), and the violation of such an assumption is thus fatal to the security of those protocols. In this paper, we show a simple quantum hacking strategy, with commercial and homemade pulsed lasers, by Eve that allows her to actively tamper with the source and violate such an assumption, without leaving a trace afterwards. Furthermore, our attack may also be valid for continuous-variable (CV) QKD, which is another main class of QKD protocol, since, besides the phase randomization assumption, other parameters (e.g., intensity) that directly determine the security of CV-QKD could also be changed.
ERIC Educational Resources Information Center
Rosner, Burton S.; Kochanski, Greg
2009-01-01
Signal detection theory (SDT) makes the frequently challenged assumption that decision criteria have no variance. An extended model, the Law of Categorical Judgment, relaxes this assumption. The long accepted equation for the law, however, is flawed: It can generate negative probabilities. The correct equation, the Law of Categorical Judgment…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-08
... DEPARTMENT OF JUSTICE [Docket No. OTJ 100] Solicitation of Comments on Request for United States Assumption of Concurrent Federal Criminal Jurisdiction; Hoopa Valley Tribe Correction In notice document 2012-09731 beginning on page 24517 the issue of Tuesday, April 24, 2012 make the following correction: On...
The Effect of Missing Data Treatment on Mantel-Haenszel DIF Detection
ERIC Educational Resources Information Center
Emenogu, Barnabas C.; Falenchuk, Olesya; Childs, Ruth A.
2010-01-01
Most implementations of the Mantel-Haenszel differential item functioning procedure delete records with missing responses or replace missing responses with scores of 0. These treatments of missing data make strong assumptions about the causes of the missing data. Such assumptions may be particularly problematic when groups differ in their patterns…
Causal Models with Unmeasured Variables: An Introduction to LISREL.
ERIC Educational Resources Information Center
Wolfle, Lee M.
Whenever one uses ordinary least squares regression, one is making an implicit assumption that all of the independent variables have been measured without error. Such an assumption is obviously unrealistic for most social data. One approach for estimating such regression models is to measure implied coefficients between latent variables for which…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-25
... actuarial and economic assumptions and methods by which Trustees might more accurately project health... (a)(2)). The Panel will discuss the long range (75 year) projection methods and assumptions in... making recommendations to the Medicare Trustees on how the Trustees might more accurately project health...
Social-Psychological Factors Influencing Recreation Demand: Evidence from Two Recreational Rivers
ERIC Educational Resources Information Center
Smith, Jordan W.; Moore, Roger L.
2013-01-01
Traditional methods of estimating demand for recreation areas involve making inferences about individuals' preferences. Frequently, the assumption is made that recreationists' cost of traveling to a site is a reliable measure of the value they place on that resource and the recreation opportunities it provides. This assumption may ignore other…
Temporal Aggregation and Testing For Timber Price Behavior
Jeffrey P. Prestemon; John M. Pye; Thomas P. Holmes
2004-01-01
Different harvest timing models make different assumptions about timber price behavior. Those seeking to optimize harvest timing are thus first faced with a decision regarding which assumption of price behavior is appropriate for their market, particularly regarding the presence of a unit root in the timber price time series. Unfortunately for landowners and investors...
Globalization, decision making and taboo in nursing.
Keighley, T
2012-06-01
This paper is a reflection on the representation of nurses and their practice at a global level. In considering the International Council of Nurses (ICN) conference in Malta (2011), it is clear that certain assumptions have been made about nurses and their practice: that globalization is under way for the whole of the profession and that these assumptions can be applied equally around the world. These assumptions appear in many ways to be implicit rather than explicit. The implicitness of the assumptions is examined against the particular decision-making processes adopted by the ICN. An attempt is then made to identify another base for the ongoing global work of the ICN. This involves the exploration of taboo (that which is forbidden because it is either holy or unclean) as a way of examining why nursing is not properly valued, despite years of international representation. The paper concludes with some thoughts on how such a new approach interfaces with the possibilities held out by new information technologies. © 2011 The Author. International Nursing Review © 2011 International Council of Nurses.
Ashby, Nathaniel J S; Glöckner, Andreas; Dickert, Stephan
2011-01-01
Daily we make decisions ranging from the mundane to the seemingly pivotal that shape our lives. Assuming rationality, all relevant information about one's options should be thoroughly examined in order to make the best choice. However, some findings suggest that under specific circumstances thinking too much has disadvantageous effects on decision quality and that it might be best to let the unconscious do the busy work. In three studies we test the capacity assumption and the appropriate weighting principle of Unconscious Thought Theory using a classic risky choice paradigm and including a "deliberation with information" condition. Although we replicate an advantage for unconscious thought (UT) over "deliberation without information," we find that "deliberation with information" equals or outperforms UT in risky choices. These results speak against the generality of the assumption that UT has a higher capacity for information integration and show that this capacity assumption does not hold in all domains. Furthermore, we show that "deliberate thought with information" leads to more differentiated knowledge compared to UT which speaks against the generality of the appropriate weighting assumption.
NASA Astrophysics Data System (ADS)
Müller-Hansen, Finn; Schlüter, Maja; Mäs, Michael; Donges, Jonathan F.; Kolb, Jakob J.; Thonicke, Kirsten; Heitzig, Jobst
2017-11-01
Today, humans have a critical impact on the Earth system and vice versa, which can generate complex feedback processes between social and ecological dynamics. Integrating human behavior into formal Earth system models (ESMs), however, requires crucial modeling assumptions about actors and their goals, behavioral options, and decision rules, as well as modeling decisions regarding human social interactions and the aggregation of individuals' behavior. Here, we review existing modeling approaches and techniques from various disciplines and schools of thought dealing with human behavior at different levels of decision making. We demonstrate modelers' often vast degrees of freedom but also seek to make modelers aware of the often crucial consequences of seemingly innocent modeling assumptions. After discussing which socioeconomic units are potentially important for ESMs, we compare models of individual decision making that correspond to alternative behavioral theories and that make diverse modeling assumptions about individuals' preferences, beliefs, decision rules, and foresight. We review approaches to model social interaction, covering game theoretic frameworks, models of social influence, and network models. Finally, we discuss approaches to studying how the behavior of individuals, groups, and organizations can aggregate to complex collective phenomena, discussing agent-based, statistical, and representative-agent modeling and economic macro-dynamics. We illustrate the main ingredients of modeling techniques with examples from land-use dynamics as one of the main drivers of environmental change bridging local to global scales.
A simple test procedure for evaluating low temperature crack resistance of asphalt concrete.
DOT National Transportation Integrated Search
2009-11-01
The current means of evaluating the low temperature cracking resistance of HMA relies on extensive test : methods that require assumptions about material behaviors and the use of complicated loading equipment. The purpose : of this study was to devel...
Analysis of environmental regulatory proposals: Its your chance to influence policy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veil, J.A.
1994-03-02
As part of the regulatory development process, the US Environmental Protection Agency (EPA) collects data, makes various assumptions about the data, and analyzes the data. Although EPA acts in good faith, the agency cannot always be aware of all relevant data, make only appropriate assumptions, and use applicable analytical methods. Regulated industries must carefully review every component of the regulatory decision-making process to identify misunderstandings and errors and to supply additional data that is relevant to the regulatory action. This paper examines three examples of how EPA's data, assumptions, and analytical methods have been critiqued. The first two examples involve EPA's cost-effectiveness (CE) analyses prepared for the offshore oil and gas effluent limitations guidelines and as part of EPA Region 6's general permit for coastal waters of Texas and Louisiana. A CE analysis compares the cost of regulations to the incremental amount of pollutants that would be removed by the recommended treatment processes. The third example, although not involving a CE analysis, demonstrates how the use of non-representative data can influence the outcome of an analysis.
Privacy-preserving heterogeneous health data sharing.
Mohammed, Noman; Jiang, Xiaoqian; Chen, Rui; Fung, Benjamin C M; Ohno-Machado, Lucila
2013-05-01
Privacy-preserving data publishing addresses the problem of disclosing sensitive data when mining for useful information. Among existing privacy models, ε-differential privacy provides one of the strongest privacy guarantees and makes no assumptions about an adversary's background knowledge. All existing solutions that ensure ε-differential privacy handle the problem of disclosing relational and set-valued data in a privacy-preserving manner separately. In this paper, we propose an algorithm that considers both relational and set-valued data in differentially private disclosure of healthcare data. The proposed approach makes a simple yet fundamental switch in differentially private algorithm design: instead of listing all possible records (ie, a contingency table) for noise addition, records are generalized before noise addition. The algorithm first generalizes the raw data in a probabilistic way, and then adds noise to guarantee ε-differential privacy. We showed that the disclosed data could be used effectively to build a decision tree induction classifier. Experimental results demonstrated that the proposed algorithm is scalable and performs better than existing solutions for classification analysis. The resulting utility may degrade when the output domain size is very large, making it potentially inappropriate to generate synthetic data for large health databases. Unlike existing techniques, the proposed algorithm allows the disclosure of health data containing both relational and set-valued data in a differentially private manner, and can retain essential information for discriminative analysis.
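A minimal sketch of the "generalize first, then add noise" idea, using a single hypothetical attribute and the Laplace mechanism; this illustrates the design switch only and is not the paper's full algorithm for mixed relational and set-valued data.

```python
# Minimal "generalize, then add noise" sketch: a hypothetical age attribute is
# coarsened into 10-year bands (generalization) and Laplace noise with scale
# sensitivity/epsilon is added to each band count.  This shows the design
# switch only, not the paper's full algorithm for mixed health data.
import numpy as np

rng = np.random.default_rng(42)
ages = rng.integers(18, 90, size=500)            # hypothetical raw attribute

bin_edges = np.arange(10, 101, 10)               # generalize to 10-year bands
true_counts, _ = np.histogram(ages, bins=bin_edges)

epsilon, sensitivity = 1.0, 1.0                  # one person changes one count
noisy_counts = true_counts + rng.laplace(0.0, sensitivity / epsilon,
                                         size=true_counts.shape)
for lo, hi, c in zip(bin_edges[:-1], bin_edges[1:], noisy_counts):
    print(f"[{lo}, {hi}): {c:.1f}")
```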
Privacy-preserving heterogeneous health data sharing
Mohammed, Noman; Jiang, Xiaoqian; Chen, Rui; Fung, Benjamin C M; Ohno-Machado, Lucila
2013-01-01
Objective: Privacy-preserving data publishing addresses the problem of disclosing sensitive data when mining for useful information. Among existing privacy models, ε-differential privacy provides one of the strongest privacy guarantees and makes no assumptions about an adversary's background knowledge. All existing solutions that ensure ε-differential privacy handle the problem of disclosing relational and set-valued data in a privacy-preserving manner separately. In this paper, we propose an algorithm that considers both relational and set-valued data in differentially private disclosure of healthcare data. Methods: The proposed approach makes a simple yet fundamental switch in differentially private algorithm design: instead of listing all possible records (ie, a contingency table) for noise addition, records are generalized before noise addition. The algorithm first generalizes the raw data in a probabilistic way, and then adds noise to guarantee ε-differential privacy. Results: We showed that the disclosed data could be used effectively to build a decision tree induction classifier. Experimental results demonstrated that the proposed algorithm is scalable and performs better than existing solutions for classification analysis. Limitation: The resulting utility may degrade when the output domain size is very large, making it potentially inappropriate to generate synthetic data for large health databases. Conclusions: Unlike existing techniques, the proposed algorithm allows the disclosure of health data containing both relational and set-valued data in a differentially private manner, and can retain essential information for discriminative analysis. PMID:23242630
Simply criminal: predicting burglars' occupancy decisions with a simple heuristic.
Snook, Brent; Dhami, Mandeep K; Kavanagh, Jennifer M
2011-08-01
Rational choice theories of criminal decision making assume that offenders weight and integrate multiple cues when making decisions (i.e., are compensatory). We tested this assumption by comparing how well a compensatory strategy called Franklin's Rule captured burglars' decision policies regarding residence occupancy compared to a non-compensatory strategy (i.e., Matching Heuristic). Forty burglars each decided on the occupancy of 20 randomly selected photographs of residences (for which actual occupancy was known when the photo was taken). Participants also provided open-ended reports on the cues that influenced their decisions in each case, and then rated the importance of eight cues (e.g., deadbolt visible) over all decisions. Burglars predicted occupancy beyond chance levels. The Matching Heuristic was a significantly better predictor of burglars' decisions than Franklin's Rule, and cue use in the Matching Heuristic better corresponded to the cue ecological validities in the environment than cue use in Franklin's Rule. The most important cue in burglars' models was also the most ecologically valid or predictive of actual occupancy (i.e., vehicle present). The majority of burglars correctly identified the most important cue in their models, and the open-ended technique showed greater correspondence between self-reported and captured cue use than the rating over decision technique. Our findings support a limited rationality perspective to understanding criminal decision making, and have implications for crime prevention.
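To make the comparison concrete, the sketch below contrasts a weighted additive rule in the spirit of Franklin's Rule with a lexicographic one-reason rule in the spirit of the Matching Heuristic; the cue names, validities, threshold, and stopping details are illustrative assumptions, not the models fitted in the study.

```python
# Contrast of the two strategy classes: a weighted additive rule in the spirit
# of Franklin's Rule versus a lexicographic one-reason rule in the spirit of
# the Matching Heuristic.  Cue names, validities, threshold, and the stopping
# rule are illustrative assumptions, not the models fitted in the study.
CUE_VALIDITIES = {"vehicle_present": 0.9, "lights_on": 0.7, "mail_collected": 0.6}

def franklins_rule(cues, threshold=1.0):
    """Sum validity-weighted cue values (1 = cue points to 'occupied')."""
    score = sum(CUE_VALIDITIES[c] * v for c, v in cues.items())
    return "occupied" if score >= threshold else "unoccupied"

def matching_heuristic(cues, k=1):
    """Inspect the k most valid cues in order; the first 'occupied' signal decides."""
    for cue in sorted(cues, key=CUE_VALIDITIES.get, reverse=True)[:k]:
        if cues[cue] == 1:
            return "occupied"
    return "unoccupied"

residence = {"vehicle_present": 1, "lights_on": 0, "mail_collected": 0}
print(franklins_rule(residence), matching_heuristic(residence))
# The two rules can disagree: the additive score falls short of the threshold,
# while the single most valid cue is enough for the heuristic.
```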
The Nonlinear Dynamic Response of an Elastic-Plastic Thin Plate under Impulsive Loading,
1987-06-11
Among those numerical methods, the finite element method is the most effective one. The method presented in this paper is an "influence function" numerical method... its computational time is much less than that of the finite element method, and its precision is also higher. II. Basic Assumption and the Influence Function of a Simply Supported Plate... calculation (Fig. 1). 2. The Influence Function of a Simply Supported Plate. The motion differential equation of a thin plate can be written as D∇⁴w + ρh ∂²w/∂t² = q(x, y, t)   (1)
Effect of steam addition on cycle performance of simple and recuperated gas turbines
NASA Technical Reports Server (NTRS)
Boyle, R. J.
1979-01-01
Results are presented for the cycle efficiency and specific power of simple and recuperated gas turbine cycles in which steam is generated and used to increase turbine flow. Calculations showed significant improvements in cycle efficiency and specific power by adding steam. The calculations were made using component efficiencies and loss assumptions typical of stationary powerplants. These results are presented for a range of operating temperatures and pressures. Relative heat exchanger size and the water use rate are also examined.
Nonrational Processes in Ethical Decision Making
ERIC Educational Resources Information Center
Rogerson, Mark D.; Gottlieb, Michael C.; Handelsman, Mitchell M.; Knapp, Samuel; Younggren, Jeffrey
2011-01-01
Most current ethical decision-making models provide a logical and reasoned process for making ethical judgments, but these models are empirically unproven and rely upon assumptions of rational, conscious, and quasi-legal reasoning. Such models predominate despite the fact that many nonrational factors influence ethical thought and behavior,…
Considerations in the design of a communication network for an autonomously managed power system
NASA Technical Reports Server (NTRS)
Mckee, J. W.; Whitehead, Norma; Lollar, Louis
1989-01-01
The considerations involved in designing a communication network for an autonomously managed power system intended for use in space vehicles are examined. An overview of the design and implementation of a communication network implemented in a breadboard power system is presented. An assumption that the monitoring and control devices are distributed but physically close leads to the selection of a multidrop cable communication system. The assumption of a high-quality communication cable in which few messages are lost resulted in a simple recovery procedure consisting of a time out and retransmit process.
Project M: An Assessment of Mission Assumptions
NASA Technical Reports Server (NTRS)
Edwards, Alycia
2010-01-01
Project M is a mission Johnson Space Center is working on to send an autonomous humanoid robot (also known as Robonaut 2) to the moon in 1,000 days. The robot will be in a lander, fueled by liquid oxygen and liquid methane, and land on the moon, avoiding any hazardous obstacles. It will perform tasks like maintenance, construction, and simple student experiments. This mission is also being used as inspiration for new advancements in technology. I am considering three of the design assumptions that contribute to determining the mission feasibility: maturity of robotic technology, launch vehicle determination, and the LOX/methane fueled spacecraft.
Estimating Lake Volume from Limited Data: A Simple GIS Approach
Lake volume provides key information for estimating residence time or modeling pollutants. Methods for calculating lake volume have relied on dated technologies (e.g. planimeters) or used potentially inaccurate assumptions (e.g. volume of a frustum of a cone). Modern GIS provid...
Santillán, Moisés
2003-07-21
A simple model of an oxygen exchanging network is presented and studied. This network's task is to transfer a given oxygen rate from a source to an oxygen consuming system. It consists of a pipeline that interconnects the oxygen consuming system and the reservoir, and of a fluid (the active oxygen transporting element) moving through the pipeline. The network optimal design (total pipeline surface) and dynamics (volumetric flow of the oxygen transporting fluid), which minimize the energy rate expended in moving the fluid, are calculated in terms of the oxygen exchange rate, the pipeline length, and the pipeline cross-section. After the oxygen exchanging network is optimized, the energy converting system is shown to satisfy a 3/4-like allometric scaling law, based upon the assumption that its performance regime is scale invariant as well as on some feasible geometric scaling assumptions. Finally, the possible implications of this result on the allometric scaling properties observed elsewhere in living beings are discussed.
Russ, Stefanie
2014-08-01
It is shown that a two-component percolation model on a simple cubic lattice can explain an experimentally observed behavior [Savage et al., Sens. Actuators B 79, 17 (2001); Sens. Actuators B 72, 239 (2001).], namely, that a network built up by a mixture of sintered nanocrystalline semiconducting n and p grains can exhibit selective behavior, i.e., respond with a resistance increase when exposed to a reducing gas A and with a resistance decrease in response to another reducing gas B. To this end, a simple model is developed, where the n and p grains are simulated by overlapping spheres, based on realistic assumptions about the gas reactions on the grain surfaces. The resistance is calculated by random walk simulations with nn, pp, and np bonds between the grains, and the results are found in very good agreement with the experiments. Contrary to former assumptions, the np bonds are crucial to obtain this accordance.
Heterosexual assumptions in verbal and non-verbal communication in nursing.
Röndahl, Gerd; Innala, Sune; Carlsson, Marianne
2006-11-01
This paper reports a study of what lesbian women and gay men had to say, as patients and as partners, about their experiences of nursing in hospital care, and what they regarded as important to communicate about homosexuality and nursing. The social life of heterosexual cultures is based on the assumption that all people are heterosexual, thereby making homosexuality socially invisible. Nurses may assume that all patients and significant others are heterosexual, and these heteronormative assumptions may lead to poor communication that affects nursing quality by leading nurses to ask the wrong questions and make incorrect judgements. A qualitative interview study was carried out in the spring of 2004. Seventeen women and 10 men ranging in age from 23 to 65 years from different parts of Sweden participated. They described 46 experiences as patients and 31 as partners. Heteronormativity was communicated in waiting rooms, in patient documents and when registering for admission, and nursing staff sometimes showed perplexity when an informant deviated from this heteronormative assumption. Informants had often met nursing staff who showed fear of behaving incorrectly, which could lead to a sense of insecurity, thereby impeding further communication. As partners of gay patients, informants felt that they had to deal with heterosexual assumptions more than they did when they were patients, and the consequences were feelings of not being accepted as a 'true' relative, of exclusion and neglect. Almost all participants offered recommendations about how nursing staff could facilitate communication. Heterosexual norms communicated unconsciously by nursing staff contribute to ambivalent attitudes and feelings of insecurity that prevent communication and easily lead to misconceptions. Educational and management interventions, as well as increased communication, could make gay people more visible and thereby encourage openness and awareness by hospital staff of the norms that they communicate through their language and behaviour.
Developing and Teaching Ethical Decision Making Skills.
ERIC Educational Resources Information Center
Robinson, John
1991-01-01
Student leaders and campus activities professionals can use a variety of techniques to help college students develop skill in ethical decision making, including teaching about the decision-making process, guiding students through decisions with a series of questions, playing ethics games, exploring assumptions, and best of all, role modeling. (MSE)
NASA Astrophysics Data System (ADS)
Baer, P.; Mastrandrea, M.
2006-12-01
Simple probabilistic models which attempt to estimate likely transient temperature change from specified CO2 emissions scenarios must make assumptions about at least six uncertain aspects of the causal chain between emissions and temperature: current radiative forcing (including but not limited to aerosols), current land use emissions, carbon sinks, future non-CO2 forcing, ocean heat uptake, and climate sensitivity. Of these, multiple PDFs (probability density functions) have been published for the climate sensitivity, a couple for current forcing and ocean heat uptake, one for future non-CO2 forcing, and none for current land use emissions or carbon cycle uncertainty (which are interdependent). Different assumptions about these parameters, as well as different model structures, will lead to different estimates of likely temperature increase from the same emissions pathway. Thus policymakers will be faced with a range of temperature probability distributions for the same emissions scenarios, each described by a central tendency and spread. Because our conventional understanding of uncertainty and probability requires that a probabilistically defined variable of interest have only a single mean (or median, or modal) value and a well-defined spread, this "multidimensional" uncertainty defies straightforward utilization in policymaking. We suggest that there are no simple solutions to the questions raised. Crucially, we must dispel the notion that there is a "true" probability: probabilities of this type are necessarily subjective, and reasonable people may disagree. Indeed, we suggest that what is at stake is precisely the question of what it is reasonable to believe, and to act as if we believe. As a preliminary suggestion, we demonstrate how the output of a simple probabilistic climate model might be evaluated regarding the reasonableness of the outputs it calculates with different input PDFs. We suggest further that where there is insufficient evidence to clearly favor one range of probabilistic projections over another, the choice of results on which to base policy must necessarily involve ethical considerations, as they have inevitable consequences for the distribution of risk. In particular, the choice to use a more "optimistic" PDF for climate sensitivity (or other components of the causal chain) leads to the allowance of higher emissions consistent with any specified goal for risk reduction, and thus leads to higher climate impacts, in exchange for lower mitigation costs.
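A Monte Carlo sketch of the point made above, under strong simplifying assumptions (an equilibrium warming relation and two made-up lognormal sensitivity PDFs): the same CO2 level yields noticeably different warming distributions depending on which PDF one chooses to believe.

```python
# Monte Carlo sketch under strong simplifying assumptions (equilibrium warming
# dT = S * log2(C/C0); two made-up lognormal sensitivity PDFs): the same CO2
# level implies different warming distributions depending on the PDF chosen.
import numpy as np

rng = np.random.default_rng(0)
co2_ratio = 550.0 / 280.0                        # hypothetical stabilization level

for label, mu, sigma in [("optimistic PDF ", np.log(2.5), 0.3),
                         ("pessimistic PDF", np.log(3.5), 0.4)]:
    S = rng.lognormal(mu, sigma, size=100_000)   # climate sensitivity, K per doubling
    dT = S * np.log2(co2_ratio)
    print(f"{label}: median {np.median(dT):.2f} K, "
          f"95th percentile {np.percentile(dT, 95):.2f} K")
```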
NASA Astrophysics Data System (ADS)
Medlyn, B.; Jiang, M.; Zaehle, S.
2017-12-01
There is now ample experimental evidence that the response of terrestrial vegetation to rising atmospheric CO2 concentration is modified by soil nutrient availability. How to represent nutrient cycling processes is thus a key consideration for vegetation models. We have previously used model intercomparison to demonstrate that models incorporating different assumptions predict very different responses at Free-Air CO2 Enrichment experiments. Careful examination of model outputs has provided some insight into the reasons for the different model outcomes, but it is difficult to attribute outcomes to specific assumptions. Here we investigate the impact of individual assumptions in a generic plant carbon-nutrient cycling model. The G'DAY (Generic Decomposition And Yield) model is modified to incorporate alternative hypotheses for nutrient cycling. We analyse the impact of these assumptions in the model using a simple analytical approach known as "two-timing". This analysis identifies the quasi-equilibrium behaviour of the model at the time scales of the component pools. The analysis provides a useful mathematical framework for probing model behaviour and identifying the most critical assumptions for experimental study.
NASA Astrophysics Data System (ADS)
Cavalcanti, Eric G.; Wiseman, Howard M.
2012-10-01
The 1964 theorem of John Bell shows that no model that reproduces the predictions of quantum mechanics can simultaneously satisfy the assumptions of locality and determinism. On the other hand, the assumptions of signal locality plus predictability are also sufficient to derive Bell inequalities. This simple theorem, previously noted but published only relatively recently by Masanes, Acin and Gisin, has fundamental implications not entirely appreciated. Firstly, nothing can be concluded about the ontological assumptions of locality or determinism independently of each other—it is possible to reproduce quantum mechanics with deterministic models that violate locality as well as indeterministic models that satisfy locality. On the other hand, the operational assumption of signal locality is an empirically testable (and well-tested) consequence of relativity. Thus Bell inequality violations imply that we can trust that some events are fundamentally unpredictable, even if we cannot trust that they are indeterministic. This result grounds the quantum-mechanical prohibition of arbitrarily accurate predictions on the assumption of no superluminal signalling, regardless of any postulates of quantum mechanics. It also sheds a new light on an early stage of the historical debate between Einstein and Bohr.
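For background on the Bell-inequality side of the argument (generic textbook material, not the authors' derivation), the sketch below evaluates the CHSH combination for the singlet state, whose quantum prediction 2*sqrt(2) exceeds the bound of 2 obeyed by any local deterministic, or signal-local and predictable, model.

import numpy as np

def E(a, b):
    """Singlet-state correlation for spin measurements along angles a and b (radians)."""
    return -np.cos(a - b)

# Standard CHSH measurement angles
a, a_p, b, b_p = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b_p) + E(a_p, b) + E(a_p, b_p)
print("quantum CHSH value: %.3f (local bound: 2, Tsirelson bound: %.3f)" % (abs(S), 2 * np.sqrt(2)))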
Approximate stoichiometry for rich hydrocarbon mixtures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beans, E.W.
1993-03-01
The stoichiometry of lean mixtures can readily and accurately be determined from the assumption that all the carbon oxidizes to carbon dioxide and all the hydrogen oxidizes to water. This assumption is valid up to an equivalence ratio (σ) of 0.8 and can be used with little error up to σ = 1. The composition of the products of a hydrocarbon burnt in air under the foregoing assumption can be obtained from simple carbon, hydrogen, oxygen and nitrogen balances. Given the composition, one can determine the energy released and/or the adiabatic flame temperature. For rich mixtures, the foregoing assumption, of course, is not valid. Hence, there is no easy way to determine the stoichiometry of the products of a rich mixture. The objective of this note is to present an equation which will allow one to readily determine the composition of the products of rich hydrocarbon mixtures. The equation is based on equilibrium composition calculations and some assumptions regarding the characteristics of hydrocarbons. The equation gives approximate results. However, the results are sufficiently accurate for many situations. If more accuracy is wanted, one should use an equilibrium combustion program like the one by Gordon and McBride.
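For the lean-mixture case described above, the product composition follows directly from the element balances. A minimal sketch for a generic CxHy fuel in air (this is the standard complete-combustion balance, not the note's rich-mixture correlation) might look like:

def lean_products(x, y, phi):
    """Product mole fractions for CxHy burned lean in air (O2 + 3.76 N2),
    assuming complete oxidation to CO2 and H2O (valid roughly for phi <= 0.8-1.0)."""
    a = x + y / 4.0                 # stoichiometric O2 requirement per mole of fuel
    o2_supplied = a / phi
    moles = {"CO2": x, "H2O": y / 2.0, "O2": o2_supplied - a, "N2": 3.76 * o2_supplied}
    total = sum(moles.values())
    return {species: n / total for species, n in moles.items()}

# Example: methane (CH4) at an equivalence ratio of 0.8
print(lean_products(1, 4, 0.8))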
I Assumed You Knew: Teaching Assumptions as Co-Equal to Observations in Scientific Work
NASA Astrophysics Data System (ADS)
Horodyskyj, L.; Mead, C.; Anbar, A. D.
2016-12-01
Introductory science curricula typically begin with a lesson on the "nature of science". Usually this lesson is short, built with the assumption that students have picked up this information elsewhere and only a short review is necessary. However, when asked about the nature of science in our classes, student definitions were often confused, contradictory, or incomplete. A cursory review of how the nature of science is defined in a number of textbooks is similarly inconsistent and excessively loquacious. With such confusion both from the student and teacher perspective, it is no surprise that students walk away with significant misconceptions about the scientific endeavor, which they carry with them into public life. These misconceptions subsequently result in poor public policy and personal decisions on issues with scientific underpinnings. We will present a new way of teaching the nature of science at the introductory level that better represents what we actually do as scientists. Nature of science lessons often emphasize the importance of observations in scientific work. However, they rarely mention and often hide the importance of assumptions in interpreting those observations. Assumptions are co-equal to observations in building models, which are observation-assumption networks that can be used to make predictions about future observations. The confidence we place in these models depends on whether they are assumption-dominated (hypothesis) or observation-dominated (theory). By presenting and teaching science in this manner, we feel that students will better comprehend the scientific endeavor, since making observations and assumptions and building mental models is a natural human behavior. We will present a model for a science lab activity that can be taught using this approach.
ERIC Educational Resources Information Center
Han, Kyung T.; Guo, Fanmin
2014-01-01
The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…
ERIC Educational Resources Information Center
Mellone, Maria
2011-01-01
Assumptions about the construction and the transmission of knowledge and about the nature of mathematics always underlie any teaching practice, even if often unconsciously. I examine the conjecture that theoretical tools suitably chosen can help the teacher to make such assumptions explicit and to support the teacher's reflection on his/her…
Effects of rotational symmetry breaking in polymer-coated nanopores
NASA Astrophysics Data System (ADS)
Osmanović, D.; Kerr-Winter, M.; Eccleston, R. C.; Hoogenboom, B. W.; Ford, I. J.
2015-01-01
The statistical theory of polymers tethered around the inner surface of a cylindrical channel has traditionally employed the assumption that the equilibrium density of the polymers is independent of the azimuthal coordinate. However, simulations have shown that this rotational symmetry can be broken when there are attractive interactions between the polymers. We investigate the phases that emerge in these circumstances, and we quantify the effect of the symmetry assumption on the phase behavior of the system. In the absence of this assumption, one can observe large differences in the equilibrium densities between the rotationally symmetric case and the non-rotationally symmetric case. A simple analytical model is developed that illustrates the driving thermodynamic forces responsible for this symmetry breaking. Our results have implications for the current understanding of the behavior of polymers in cylindrical nanopores.
Testing Modeling Assumptions in the West Africa Ebola Outbreak
NASA Astrophysics Data System (ADS)
Burghardt, Keith; Verzijl, Christopher; Huang, Junming; Ingram, Matthew; Song, Binyang; Hasne, Marie-Pierre
2016-10-01
The Ebola virus in West Africa has infected almost 30,000 and killed over 11,000 people. Recent models of Ebola Virus Disease (EVD) have often made assumptions about how the disease spreads, such as uniform transmissibility and homogeneous mixing within a population. In this paper, we test whether these assumptions are necessarily correct, and offer simple solutions that may improve disease model accuracy. First, we use data and models of West African migration to show that EVD does not homogeneously mix, but spreads in a predictable manner. Next, we estimate the initial growth rate of EVD within country administrative divisions and find that it significantly decreases with population density. Finally, we test whether EVD strains have uniform transmissibility through a novel statistical test, and find that certain strains appear more often than expected by chance.
Effects of rotational symmetry breaking in polymer-coated nanopores
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osmanović, D.; Hoogenboom, B. W.; Ford, I. J.
2015-01-21
The statistical theory of polymers tethered around the inner surface of a cylindrical channel has traditionally employed the assumption that the equilibrium density of the polymers is independent of the azimuthal coordinate. However, simulations have shown that this rotational symmetry can be broken when there are attractive interactions between the polymers. We investigate the phases that emerge in these circumstances, and we quantify the effect of the symmetry assumption on the phase behavior of the system. In the absence of this assumption, one can observe large differences in the equilibrium densities between the rotationally symmetric case and the non-rotationally symmetric case. A simple analytical model is developed that illustrates the driving thermodynamic forces responsible for this symmetry breaking. Our results have implications for the current understanding of the behavior of polymers in cylindrical nanopores.
Life Support Baseline Values and Assumptions Document
NASA Technical Reports Server (NTRS)
Anderson, Molly S.; Ewert, Michael K.; Keener, John F.
2018-01-01
The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. This document identifies many specific physical quantities that define life support systems, serving as a general reference for spacecraft life support system technology developers.
Modeling Sexual Selection in Túngara Frog and Rationality of Mate Choice.
Vargas Bernal, Esteban; Sanabria Malagon, Camilo
2017-12-01
The males of the frog species Engystomops pustulosus produce simple and complex calls to lure females, as a form of intersexual selection. Complex calls lead males to greater reproductive success than simple calls do. However, the complex calls are also more attractive to their main predator, the bat Trachops cirrhosus. Therefore, as M. Ryan suggests in (The túngara frog: a study in sexual selection and communication. University of Chicago Press, Chicago, 1985), the complexity of the calls lets the frogs maintain a trade-off between reproductive success and predation. In this paper, we verify this trade-off from the perspective of game theory. We first model the proportion of simple calls as a symmetric game of two strategies. We also model the effect of adding a third strategy, males that keep quiet and intercept females, which would play a role in intrasexual selection. Under the assumption that the decision of the males takes into account this trade-off between reproductive success and predation, our model reproduces the observed behavior reported in the literature with minimal assumptions on the parameters. From the model with three strategies, we verify that the quiet strategy could only coexist with the simple and complex strategies if the rate at which quiet males intercept females is high, which explains the rarity of the quiet strategy. We conclude that the reproductive strategy of the male frog E. pustulosus is rational.
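A two-strategy symmetric game of this kind can be explored with replicator dynamics. The payoff values below are purely illustrative, not the paper's fitted parameters; they are chosen so that the net benefit of complex calls declines as they become common, which yields a stable mixture of call types.

import numpy as np

# Hypothetical payoffs (mating benefit minus predation cost), strategies: 0 = simple, 1 = complex.
A = np.array([[1.0, 1.2],
              [1.4, 0.9]])

def replicator_step(p, dt=0.01):
    """One Euler step of replicator dynamics for the frequency p of simple callers."""
    x = np.array([p, 1.0 - p])
    fitness = A @ x
    mean_fitness = x @ fitness
    return p + dt * p * (fitness[0] - mean_fitness)

p = 0.5
for _ in range(20000):
    p = replicator_step(p)
print("equilibrium share of simple calls: %.3f" % p)   # converges to 3/7 for these payoffs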
Attention and choice: a review on eye movements in decision making.
Orquin, Jacob L; Mueller Loose, Simone
2013-09-01
This paper reviews studies on eye movements in decision making, and compares their observations to theoretical predictions concerning the role of attention in decision making. Four decision theories are examined: rational models, bounded rationality, evidence accumulation, and parallel constraint satisfaction models. Although most theories were confirmed with regard to certain predictions, none of the theories adequately accounted for the role of attention during decision making. Several observations emerged concerning the drivers and downstream effects of attention on choice, suggesting that attention processes play an active role in constructing decisions. So far, decision theories have largely ignored the constructive role of attention by assuming that it is entirely determined by heuristics, or that it consists of stochastic information sampling. The empirical observations reveal that these assumptions are implausible, and that more accurate assumptions could have been made based on prior attention and eye movement research. Future decision making research would benefit from greater integration with attention research. Copyright © 2013 Elsevier B.V. All rights reserved.
Some research perspectives in galloping phenomena: critical conditions and post-critical behavior
NASA Astrophysics Data System (ADS)
Piccardo, Giuseppe; Pagnini, Luisa Carlotta; Tubino, Federica
2015-01-01
This paper gives an overview of wind-induced galloping phenomena, describing its manifold features and the many advances that have taken place in this field. Starting from a quasi-steady model of aeroelastic forces exerted by the wind on a rigid cylinder with three degrees of freedom, two translations and a rotation in the plane of the model cross section, the fluid-structure interaction forces are described in simple terms, yet in a way compatible with the complexity of mechanical systems, both in the linear and in the nonlinear field, thus allowing investigation of a wide range of structural typologies and their dynamic behavior. The paper is driven by some key concerns. A great effort is made in underlining strengths and weaknesses of the classic quasi-steady theory as well as of the simplistic assumptions that are introduced in order to investigate such complex phenomena through simple engineering models. A second aspect, which is crucial to the authors' approach, is to take into account and harmonize the engineering, physical and mathematical perspectives in an interdisciplinary way, something which does not happen often. The authors underline that the quasi-steady approach is an irreplaceable tool, though approximate and simple, for performing engineering analyses; at the same time, the study of this phenomenon gives origin to numerous problems that make the application of high-level mathematical solutions particularly attractive. Finally, the paper discusses a wide range of features of the galloping theory and its practical use which deserve further attention and refinements, pointing to the great potential represented by new fields of application and advanced analysis tools.
Advanced statistics: linear regression, part I: simple linear regression.
Marill, Keith A
2004-01-01
Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
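As a concrete illustration of the least-squares mechanics reviewed in the article (the data below are invented, not the article's clinical examples):

import numpy as np

def simple_linear_regression(x, y):
    """Least-squares estimates of intercept b0 and slope b1 for y = b0 + b1*x + error."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    b0 = y.mean() - b1 * x.mean()
    return b0, b1

# Small hypothetical clinical example: dose (mg) versus response
dose = [10, 20, 30, 40, 50]
response = [1.1, 1.9, 3.2, 3.9, 5.1]
print(simple_linear_regression(dose, response))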
The wisdom of deliberate mistakes.
Schoemaker, Paul J H; Gunther, Robert E
2006-06-01
Before the breakup of the Bell System, U.S. telephone companies were permitted by law to ask for security deposits from a small percentage of subscribers. The companies used statistical models to decide which customers were most likely to pay their bills late and thus should be charged a deposit, but no one knew whether the models were right. So the Bell companies made a deliberate mistake. They asked for no deposit from nearly 100,000 new customers randomly selected from among those who were considered high risks. Surprisingly, quite a few paid their bills on time. As a result, the companies instituted a smarter screening strategy, which added millions to the Bell System's bottom line. Usually, individuals and organizations go to great lengths to avoid errors. Companies are designed for optimum performance rather than for learning, and mistakes are seen as defects. But as the Bell System example shows, making mistakes--correctly--is a powerful way to accelerate learning and increase competitiveness. If one of a company's fundamental assumptions is wrong, the firm can achieve success more quickly by deliberately making errors than by considering only data that support the assumption. Moreover, executives who apply a conventional, systematic approach to solving a pattern recognition problem are often slower to find a solution than those who test their assumptions by knowingly making mistakes. How do you distinguish between smart mistakes and dumb ones? The authors' consulting firm has developed, and currently uses, a five-step process for identifying constructive mistakes. In one test, the firm assumed that a mistake it was planning to make would cost a significant amount of money, but the opposite happened. By turning assumptions on their heads, the firm created more than $1 million in new business.
The Immoral Assumption Effect: Moralization Drives Negative Trait Attributions.
Meindl, Peter; Johnson, Kate M; Graham, Jesse
2016-04-01
Jumping to negative conclusions about other people's traits is judged as morally bad by many people. Despite this, across six experiments (total N = 2,151), we find that multiple types of moral evaluations--even evaluations related to open-mindedness, tolerance, and compassion--play a causal role in these potentially pernicious trait assumptions. Our results also indicate that moralization affects negative, but not positive, trait assumptions, and that the effect of morality on negative assumptions cannot be explained merely by people's general (nonmoral) preferences or other factors that distinguish moral and nonmoral traits, such as controllability or desirability. Together, these results suggest that one of the more destructive human tendencies--making negative assumptions about others--can be caused by the better angels of our nature. © 2016 by the Society for Personality and Social Psychology, Inc.
Multiple Grammars: Old Wine in Old Bottles
ERIC Educational Resources Information Center
Sorace, Antonella
2014-01-01
Amaral and Roeper (this issue; henceforth A&R) argue that all speakers -- regardless of whether monolingual or bilingual -- have multiple grammars in their mental language representations. They further claim that this simple assumption can explain many things: optionality in second language (L2) language behaviour, multilingualism, language…
43 CFR 11.83 - Damage determination phase-use value methodologies.
Code of Federal Regulations, 2010 CFR
2010-10-01
... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...
43 CFR 11.83 - Damage determination phase-use value methodologies.
Code of Federal Regulations, 2013 CFR
2013-10-01
... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...
43 CFR 11.83 - Damage determination phase-use value methodologies.
Code of Federal Regulations, 2014 CFR
2014-10-01
... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...
43 CFR 11.83 - Damage determination phase-use value methodologies.
Code of Federal Regulations, 2012 CFR
2012-10-01
... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...
43 CFR 11.83 - Damage determination phase-use value methodologies.
Code of Federal Regulations, 2011 CFR
2011-10-01
... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...
Calculation of Temperature Rise in Calorimetry.
ERIC Educational Resources Information Center
Canagaratna, Sebastian G.; Witt, Jerry
1988-01-01
Gives a simple but fuller account of the basis for accurately calculating temperature rise in calorimetry. Points out some misconceptions regarding these calculations. Describes two basic methods, the extrapolation to zero time and the equal area method. Discusses the theoretical basis of each and their underlying assumptions. (CW)
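A rough sketch of the extrapolation method mentioned above follows. Conventions for choosing the extrapolation time differ between texts, and the readings here are invented; the idea is simply to extrapolate the pre- and post-reaction drift lines to a common time and take their difference as the corrected temperature rise.

import numpy as np

def corrected_delta_t(t_pre, T_pre, t_post, T_post, t_mix):
    """Extrapolate the pre- and post-period linear drifts to the mixing time t_mix
    and return their difference as the corrected temperature rise."""
    drift_pre = np.polyfit(t_pre, T_pre, 1)     # slope and intercept of pre-period drift
    drift_post = np.polyfit(t_post, T_post, 1)  # slope and intercept of post-period drift
    return np.polyval(drift_post, t_mix) - np.polyval(drift_pre, t_mix)

# Hypothetical readings (minutes, degrees C)
t_pre, T_pre = [0, 1, 2, 3], [25.00, 25.01, 25.02, 25.03]
t_post, T_post = [6, 7, 8, 9], [27.60, 27.55, 27.50, 27.45]
print("corrected delta T = %.2f C" % corrected_delta_t(t_pre, T_pre, t_post, T_post, t_mix=4.0))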
Before Inflation and after Black Holes
NASA Astrophysics Data System (ADS)
Stoltenberg, Henry
This dissertation covers work from three research projects relating to the physics before the start of inflation and information after the decay of a black hole. For the first project, we analyze the cosmological role of terminal vacua in the string theory landscape, and point out that existing work on this topic makes very strong assumptions about the properties of the terminal vacua. We explore the implications of relaxing these assumptions (by including "arrival" as well as "departure" terminals) and demonstrate that the results in earlier work are highly sensitive to their assumption of no arrival terminals. We use our discussion to make some general points about tuning and initial conditions in cosmology. The second project is a discussion of the black hole information problem. Under certain conditions the black hole information puzzle and the (related) arguments that firewalls are a typical feature of black holes can break down. We first review the arguments of Almheiri, Marolf, Polchinski and Sully (AMPS) favoring firewalls, focusing on entanglements in a simple toy model for a black hole and the Hawking radiation. By introducing a large and inaccessible system entangled with the black hole (representing perhaps a de Sitter stretched horizon or inaccessible part of a landscape) we show complementarity can be restored and firewalls can be avoided throughout the black hole's evolution. Under these conditions black holes do not have an "information problem". We point out flaws in some of our earlier arguments that such entanglement might be generically present in some cosmological scenarios, and call out certain ways our picture may still be realized. The third project also examines the firewall argument. A fundamental limitation on the behavior of quantum entanglement known as "monogamy" plays a key role in the AMPS argument. Our goal is to study and apply many-body entanglement theory to consider the entanglement among different parts of Hawking radiation and black holes. Using the multipartite entanglement measure called negativity, we identify an example which differs from the AMPS accounting of quantum entanglement and might eliminate the need for a firewall. Specifically, we constructed a toy model for black hole decay which has different entanglement behavior than that assumed by AMPS. We discuss the additional steps that would be needed to bring lessons from our toy model to our understanding of realistic black holes.
Is Seismically Determined Q an Intrinsic Material Property?
NASA Astrophysics Data System (ADS)
Langston, C. A.
2003-12-01
The seismic quality factor, Q, has a well-defined physical meaning as an intrinsic material property associated with a visco-elastic or a non-linear stress-strain constitutive relation for a material. Measurement of Q from seismic waves, however, involves interpreting seismic wave amplitude and phase as deviations from some ideal elastic wave propagation model. Thus, assumptions in the elastic wave propagation model become the basis for attributing anelastic properties to the earth continuum. Scientifically, the resulting Q model derived from seismic data is no more than a hypothesis that needs to be verified by other independent experiments concerning the continuum constitutive law and through careful examination of the truth of the assumptions in the wave propagation model. A case in point concerns the anelasticity of Mississippi embayment sediments in the central U.S. that has important implications for evaluation of earthquake strong ground motions. Previous body wave analyses using converted Sp phases have suggested that Qs is ~30 in the sediments based on simple ray theory assumptions. However, detailed modeling of 1D heterogeneity in the sediments shows that Qs cannot be resolved by the Sp data. An independent experiment concerning the amplitude decay of surface waves propagating in the sediments shows that Qs must be generally greater than 80 but is also subject to scattering attenuation. Apparent Q effects seen in direct P and S waves can also be produced by wave tunneling mechanisms in relatively simple 1D heterogeneity. Heterogeneity is a general geophysical attribute of the earth as shown by many high-resolution data sets and should be used as the first litmus test on assumptions made in seismic Q studies before a Q model can be interpreted as an intrinsic material property.
Servant, Mathieu; White, Corey; Montagnini, Anna; Burle, Borís
2015-07-15
Most decisions that we make build upon multiple streams of sensory evidence and control mechanisms are needed to filter out irrelevant information. Sequential sampling models of perceptual decision making have recently been enriched by attentional mechanisms that weight sensory evidence in a dynamic and goal-directed way. However, the framework retains the longstanding hypothesis that motor activity is engaged only once a decision threshold is reached. To probe latent assumptions of these models, neurophysiological indices are needed. Therefore, we collected behavioral and EMG data in the flanker task, a standard paradigm to investigate decisions about relevance. Although the models captured response time distributions and accuracy data, EMG analyses of response agonist muscles challenged the assumption of independence between decision and motor processes. Those analyses revealed covert incorrect EMG activity ("partial error") in a fraction of trials in which the correct response was finally given, providing intermediate states of evidence accumulation and response activation at the single-trial level. We extended the models by allowing motor activity to occur before a commitment to a choice and demonstrated that the proposed framework captured the rate, latency, and EMG surface of partial errors, along with the speed of the correction process. In return, EMG data provided strong constraints to discriminate between competing models that made similar behavioral predictions. Our study opens new theoretical and methodological avenues for understanding the links among decision making, cognitive control, and motor execution in humans. Sequential sampling models of perceptual decision making assume that sensory information is accumulated until a criterion quantity of evidence is obtained, from where the decision terminates in a choice and motor activity is engaged. The very existence of covert incorrect EMG activity ("partial error") during the evidence accumulation process challenges this longstanding assumption. In the present work, we use partial errors to better constrain sequential sampling models at the single-trial level. Copyright © 2015 the authors 0270-6474/15/3510371-15$15.00/0.
Score tests for independence in semiparametric competing risks models.
Saïd, Mériem; Ghazzali, Nadia; Rivest, Louis-Paul
2009-12-01
A popular model for competing risks postulates the existence of a latent unobserved failure time for each risk. Assuming that these underlying failure times are independent is attractive since it allows standard statistical tools for right-censored lifetime data to be used in the analysis. This paper proposes simple independence score tests for the validity of this assumption when the individual risks are modeled using semiparametric proportional hazards regressions. It assumes that covariates are available, making the model identifiable. The score tests are derived for alternatives that specify that copulas are responsible for a possible dependency between the competing risks. The test statistics are constructed by adding to the partial likelihoods for the individual risks an explanatory variable for the dependency between the risks. A variance estimator is derived by writing the score function and the Fisher information matrix for the marginal models as stochastic integrals. Pitman efficiencies are used to compare test statistics. A simulation study and a numerical example illustrate the methodology proposed in this paper.
Confidence intervals for a difference between lognormal means in cluster randomization trials.
Poirier, Julia; Zou, G Y; Koval, John
2017-04-01
Cluster randomization trials, in which intact social units are randomized to different interventions, have become popular in the last 25 years. Outcomes from these trials in many cases are positively skewed, following approximately lognormal distributions. When inference is focused on the difference between treatment arm arithmetic means, existent confidence interval procedures either make restricting assumptions or are complex to implement. We approach this problem by assuming log-transformed outcomes from each treatment arm follow a one-way random effects model. The treatment arm means are functions of multiple parameters for which separate confidence intervals are readily available, suggesting that the method of variance estimates recovery may be applied to obtain closed-form confidence intervals. A simulation study showed that this simple approach performs well in small sample sizes in terms of empirical coverage, relatively balanced tail errors, and interval widths as compared to existing methods. The methods are illustrated using data arising from a cluster randomization trial investigating a critical pathway for the treatment of community acquired pneumonia.
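The method of variance estimates recovery (MOVER) can be sketched for the simpler case of two independent lognormal samples; the paper's clustered setting replaces the component intervals with ones derived from a one-way random effects model on the log scale, but the recovery step is the same. All data below are simulated and the construction shown is the generic Zou-type one, not necessarily the exact formulas of the paper.

import numpy as np
from scipy import stats

def lognormal_mean_ci(logdata, alpha=0.05):
    """MOVER-style CI for the arithmetic mean exp(mu + sigma^2/2) of a lognormal sample,
    built from separate CIs for mu and sigma^2."""
    x = np.asarray(logdata, float)
    n, m, s2 = x.size, x.mean(), x.var(ddof=1)
    z = stats.norm.ppf(1 - alpha / 2)
    l_mu, u_mu = m - z * np.sqrt(s2 / n), m + z * np.sqrt(s2 / n)
    l_s2 = (n - 1) * s2 / stats.chi2.ppf(1 - alpha / 2, n - 1)
    u_s2 = (n - 1) * s2 / stats.chi2.ppf(alpha / 2, n - 1)
    eta = m + s2 / 2
    low = eta - np.sqrt((m - l_mu) ** 2 + (s2 / 2 - l_s2 / 2) ** 2)
    upp = eta + np.sqrt((u_mu - m) ** 2 + (u_s2 / 2 - s2 / 2) ** 2)
    return np.exp(eta), np.exp(low), np.exp(upp)

def mover_difference(ci1, ci2):
    """Combine two (estimate, lower, upper) triples into a CI for the difference of means."""
    (t1, l1, u1), (t2, l2, u2) = ci1, ci2
    d = t1 - t2
    low = d - np.sqrt((t1 - l1) ** 2 + (u2 - t2) ** 2)
    upp = d + np.sqrt((u1 - t1) ** 2 + (t2 - l2) ** 2)
    return d, low, upp

# Simulated log-scale outcomes for two arms
rng = np.random.default_rng(3)
arm1, arm2 = rng.normal(1.0, 0.6, 40), rng.normal(0.8, 0.6, 40)
print(mover_difference(lognormal_mean_ci(arm1), lognormal_mean_ci(arm2)))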
The Problem of Auto-Correlation in Parasitology
Pollitt, Laura C.; Reece, Sarah E.; Mideo, Nicole; Nussey, Daniel H.; Colegrave, Nick
2012-01-01
Explaining the contribution of host and pathogen factors in driving infection dynamics is a major ambition in parasitology. There is increasing recognition that analyses based on single summary measures of an infection (e.g., peak parasitaemia) do not adequately capture infection dynamics and so, the appropriate use of statistical techniques to analyse dynamics is necessary to understand infections and, ultimately, control parasites. However, the complexities of within-host environments mean that tracking and analysing pathogen dynamics within infections and among hosts poses considerable statistical challenges. Simple statistical models make assumptions that will rarely be satisfied in data collected on host and parasite parameters. In particular, model residuals (unexplained variance in the data) should not be correlated in time or space. Here we demonstrate how failure to account for such correlations can result in incorrect biological inference from statistical analysis. We then show how mixed effects models can be used as a powerful tool to analyse such repeated measures data in the hope that this will encourage better statistical practices in parasitology. PMID:22511865
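A minimal sketch of the kind of mixed-effects analysis advocated here, using statsmodels; the data frame, column names, and random-intercept structure are hypothetical, and a real analysis would also consider random slopes or an explicit residual correlation structure.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated repeated measures: parasitaemia tracked over days within hosts.
rng = np.random.default_rng(0)
hosts, days = 20, 10
df = pd.DataFrame({
    "host": np.repeat(np.arange(hosts), days),
    "day": np.tile(np.arange(days), hosts),
})
host_effect = rng.normal(0, 1.0, hosts)            # host-level random intercepts
df["parasitaemia"] = (2.0 + 0.3 * df["day"]
                      + host_effect[df["host"]]
                      + rng.normal(0, 0.5, len(df)))

# Random-intercept model: measurements within a host share a common deviation, so residuals
# are no longer treated as independent across time points from the same host.
model = smf.mixedlm("parasitaemia ~ day", df, groups=df["host"])
result = model.fit()
print(result.summary())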
NASA Technical Reports Server (NTRS)
Parker, Peter A.; Geoffrey, Vining G.; Wilson, Sara R.; Szarka, John L., III; Johnson, Nels G.
2010-01-01
The calibration of measurement systems is a fundamental but under-studied problem within industrial statistics. The origins of this problem go back to basic chemical analysis based on NIST standards. In today's world these issues extend to mechanical, electrical, and materials engineering. Often, these new scenarios do not provide "gold standards" such as the standard weights provided by NIST. This paper considers the classic "forward regression followed by inverse regression" approach. In this approach the initial experiment treats the "standards" as the regressor and the observed values as the response to calibrate the instrument. The analyst then must invert the resulting regression model in order to use the instrument to make actual measurements in practice. This paper compares this classical approach to "reverse regression," which treats the standards as the response and the observed measurements as the regressor in the calibration experiment. Such an approach is intuitively appealing because it avoids the need for the inverse regression. However, it also violates some of the basic regression assumptions.
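A toy version of the two approaches, with simulated standards and readings rather than the paper's experiment:

import numpy as np

rng = np.random.default_rng(1)

# Calibration experiment: known standards (treated as error-free) and noisy instrument readings.
standards = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
readings = 0.5 + 2.0 * standards + rng.normal(0, 0.05, standards.size)

# Classical approach: forward regression (reading on standard), then invert the fitted line
# to estimate the unknown quantity from a new reading.
b1, b0 = np.polyfit(standards, readings, 1)
new_reading = 7.3
x_classical = (new_reading - b0) / b1

# Reverse regression: regress standard on reading and predict directly (no inversion needed,
# but the usual assumption of an error-free regressor is violated).
c1, c0 = np.polyfit(readings, standards, 1)
x_reverse = c0 + c1 * new_reading

print("classical inverse estimate: %.3f" % x_classical)
print("reverse regression estimate: %.3f" % x_reverse)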
Calculation of thermomechanical fatigue life based on isothermal behavior
NASA Technical Reports Server (NTRS)
Halford, Gary R.; Saltsman, James F.
1987-01-01
The isothermal and thermomechanical fatigue (TMF) crack initiation response of a hypothetical material was analyzed. Expected thermomechanical behavior was evaluated numerically based on simple, isothermal, cyclic stress-strain - time characteristics and on strainrange versus cyclic life relations that have been assigned to the material. The attempt was made to establish basic minimum requirements for the development of a physically accurate TMF life-prediction model. A worthy method must be able to deal with the simplest of conditions: that is, those for which thermal cycling, per se, introduces no damage mechanisms other than those found in isothermal behavior. Under these assumed conditions, the TMF life should be obtained uniquely from known isothermal behavior. The ramifications of making more complex assumptions will be dealt with in future studies. Although analyses are only in their early stages, considerable insight has been gained in understanding the characteristics of several existing high-temperature life-prediction methods. The present work indicates that the most viable damage parameter is based on the inelastic strainrange.
Stylized facts in social networks: Community-based static modeling
NASA Astrophysics Data System (ADS)
Jo, Hang-Hyun; Murase, Yohsuke; Török, János; Kertész, János; Kaski, Kimmo
2018-06-01
The past analyses of datasets of social networks have enabled us to make empirical findings of a number of aspects of human society, which are commonly featured as stylized facts of social networks, such as broad distributions of network quantities, existence of communities, assortative mixing, and intensity-topology correlations. Since the understanding of the structure of these complex social networks is far from complete, for deeper insight into human society more comprehensive datasets and modeling of the stylized facts are needed. Although the existing dynamical and static models can generate some stylized facts, here we take an alternative approach by devising a community-based static model with heterogeneous community sizes and larger communities having smaller link density and weight. With these few assumptions we are able to generate realistic social networks that show most stylized facts for a wide range of parameters, as demonstrated numerically and analytically. Since our community-based static model is simple to implement and easily scalable, it can be used as a reference system, benchmark, or testbed for further applications.
NASA Technical Reports Server (NTRS)
Cassen, Pat
1991-01-01
Attempts to derive a theoretical framework for the interpretation of the meteoritic record have been frustrated by our incomplete understanding of the fundamental processes that controlled the evolution of the primitive solar nebula. Nevertheless, it is possible to develop qualitative models of the nebula that illuminate its dynamic character, as well as the roles of some key parameters. These models draw on the growing body of observational data on the properties of disks around young, solar-type stars, and are constructed by applying the results of known solutions of protostellar collapse problems; making simple assumptions about the radial variations of nebular variables; and imposing the integral constraints demanded by conservation of mass, angular momentum, and energy. The models so constructed are heuristic, rather than predictive; they are intended to help us think about the nebula in realistic ways, but they cannot provide a definitive description of conditions in the nebula.
ERIC Educational Resources Information Center
Dong, Nianbo; Lipsey, Mark
2014-01-01
When randomized control trials (RCT) are not feasible, researchers seek other methods to make causal inference, e.g., propensity score methods. One of the underlying assumptions for the propensity score methods to obtain unbiased treatment effect estimates is the ignorability assumption, that is, conditional on the propensity score, treatment…
Maintaining the Balance Between Manpower, Skill Levels, and PERSTEMPO
2006-01-01
requirement processes. Models and tools that integrate these dimensions would help crystallize issues, identify embedded assumptions, and surface...problems will change if the planning assumptions are incorrect or if the other systems are incapable of making the necessary adjustments. Static...Carrillo, Background and Theory Behind the Compensations, Accessions, and Personnel (CAPM) Model, Santa Monica, Calif.: RAND Corporation, MR-1667
ERIC Educational Resources Information Center
Ngai, Courtney; Sevian, Hannah; Talanquer, Vicente
2014-01-01
Given the diversity of materials in our surroundings, one should expect scientifically literate citizens to have a basic understanding of the core ideas and practices used to analyze chemical substances. In this article, we use the term 'chemical identity' to encapsulate the assumptions, knowledge, and practices upon which chemical…
Review of methods for handling confounding by cluster and informative cluster size in clustered data
Seaman, Shaun; Pavlou, Menelaos; Copas, Andrew
2014-01-01
Clustered data are common in medical research. Typically, one is interested in a regression model for the association between an outcome and covariates. Two complications that can arise when analysing clustered data are informative cluster size (ICS) and confounding by cluster (CBC). ICS and CBC mean that the outcome of a member given its covariates is associated with, respectively, the number of members in the cluster and the covariate values of other members in the cluster. Standard generalised linear mixed models for cluster-specific inference and standard generalised estimating equations for population-average inference assume, in general, the absence of ICS and CBC. Modifications of these approaches have been proposed to account for CBC or ICS. This article is a review of these methods. We express their assumptions in a common format, thus providing greater clarity about the assumptions that methods proposed for handling CBC make about ICS and vice versa, and about when different methods can be used in practice. We report relative efficiencies of methods where available, describe how methods are related, identify a previously unreported equivalence between two key methods, and propose some simple additional methods. Unnecessarily using a method that allows for ICS/CBC has an efficiency cost when ICS and CBC are absent. We review tools for identifying ICS/CBC. A strategy for analysis when CBC and ICS are suspected is demonstrated by examining the association between socio-economic deprivation and preterm neonatal death in Scotland. PMID:25087978
On the Lighthill relationship and sound generation from isotropic turbulence
NASA Technical Reports Server (NTRS)
Zhou, YE; Praskovsky, Alexander; Oncley, Steven
1994-01-01
In 1952, Lighthill developed a theory for determining the sound generated by a turbulent motion of a fluid. With some statistical assumptions, Proudman applied this theory to estimate the acoustic power of isotropic turbulence. Recently, Lighthill established a simple relationship that relates the fourth-order retarded time and space covariance of his stress tensor to the corresponding second-order covariance and the turbulent flatness factor, without making statistical assumptions for a homogeneous turbulence. Lilley revisited Proudman's work and applied the Lighthill relationship to evaluate directly the radiated acoustic power from isotropic turbulence. After choosing the time separation dependence in the two-point velocity time and space covariance based on the insights gained from direct numerical simulations, Lilley concluded that the Proudman constant is determined by the turbulent flatness factor and the second-order spatial velocity covariance. In order to estimate the Proudman constant at high Reynolds numbers, we analyzed a unique data set of measurements in a large wind tunnel and atmospheric surface layer that covers a range of Taylor-microscale Reynolds numbers 2.0 × 10^3 ≤ R_λ ≤ 12.7 × 10^3. Our measurements demonstrate that the Lighthill relationship is a good approximation, providing additional support to Lilley's approach. The flatness factor is found to lie between 2.7 and 3.3, and the second-order spatial velocity covariance is obtained. Based on these experimental data, the Proudman constant is estimated to be 0.68 - 3.68.
NASA Astrophysics Data System (ADS)
Rubinstein, Justin L.; Ellsworth, William L.; Beeler, Nicholas M.; Kilgore, Brian D.; Lockner, David A.; Savage, Heather M.
2012-02-01
The behavior of individual stick-slip events observed in three different laboratory experimental configurations is better explained by a "memoryless" earthquake model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. We make similar findings in the companion manuscript for the behavior of natural repeating earthquakes. Taken together, these results allow us to conclude that the predictions of a characteristic earthquake model that assumes either fixed slip or fixed recurrence interval should be preferred to the predictions of the time- and slip-predictable models for all earthquakes. Given that the fixed slip and recurrence models are the preferred models for all of the experiments we examine, we infer that in an event-to-event sense the elastic rebound model underlying the time- and slip-predictable models does not explain earthquake behavior. This does not indicate that the elastic rebound model should be rejected in a long-term-sense, but it should be rejected for short-term predictions. The time- and slip-predictable models likely offer worse predictions of earthquake behavior because they rely on assumptions that are too simple to explain the behavior of earthquakes. Specifically, the time-predictable model assumes a constant failure threshold and the slip-predictable model assumes that there is a constant minimum stress. There is experimental and field evidence that these assumptions are not valid for all earthquakes.
Direct Position Determination of Unknown Signals in the Presence of Multipath Propagation
Yu, Hongyi
2018-01-01
A novel geolocation architecture, termed “Multiple Transponders and Multiple Receivers for Multiple Emitters Positioning System (MTRE)” is proposed in this paper. Existing Direct Position Determination (DPD) methods take advantage of a rather simple channel assumption (line of sight channels with complex path attenuations) and a simplified MUltiple SIgnal Classification (MUSIC) algorithm cost function to avoid the high dimension searching. We point out that the simplified assumption and cost function reduce the positioning accuracy because of the singularity of the array manifold in a multi-path environment. We present a DPD model for unknown signals in the presence of Multi-path Propagation (MP-DPD) in this paper. MP-DPD adds non-negative real path attenuation constraints to avoid the mistake caused by the singularity of the array manifold. The Multi-path Propagation MUSIC (MP-MUSIC) method and the Active Set Algorithm (ASA) are designed to reduce the dimension of searching. A Multi-path Propagation Maximum Likelihood (MP-ML) method is proposed in addition to overcome the limitation of MP-MUSIC in the sense of a time-sensitive application. An iterative algorithm and an approach of initial value setting are given to make the MP-ML time consumption acceptable. Numerical results validate the performances improvement of MP-MUSIC and MP-ML. A closed form of the Cramér–Rao Lower Bound (CRLB) is derived as a benchmark to evaluate the performances of MP-MUSIC and MP-ML. PMID:29562601
Statistical Hypothesis Testing in Intraspecific Phylogeography: NCPA versus ABC
Templeton, Alan R.
2009-01-01
Nested clade phylogeographic analysis (NCPA) and approximate Bayesian computation (ABC) have been used to test phylogeographic hypotheses. Multilocus NCPA tests null hypotheses, whereas ABC discriminates among a finite set of alternatives. The interpretive criteria of NCPA are explicit and allow complex models to be built from simple components. The interpretive criteria of ABC are ad hoc and require the specification of a complete phylogeographic model. The conclusions from ABC are often influenced by implicit assumptions arising from the many parameters needed to specify a complex model. These complex models confound many assumptions so that biological interpretations are difficult. Sampling error is accounted for in NCPA, but ABC ignores important sources of sampling error that creates pseudo-statistical power. NCPA generates the full sampling distribution of its statistics, but ABC only yields local probabilities, which in turn make it impossible to distinguish between a good fitting model, a non-informative model, and an over-determined model. Both NCPA and ABC use approximations, but convergences of the approximations used in NCPA are well defined whereas those in ABC are not. NCPA can analyze a large number of locations, but ABC cannot. Finally, the dimensionality of tested hypothesis is known in NCPA, but not for ABC. As a consequence, the “probabilities” generated by ABC are not true probabilities and are statistically non-interpretable. Accordingly, ABC should not be used for hypothesis testing, but simulation approaches are valuable when used in conjunction with NCPA or other methods that do not rely on highly parameterized models. PMID:19192182
A Method to Constrain Genome-Scale Models with 13C Labeling Data
García Martín, Héctor; Kumar, Vinay Satish; Weaver, Daniel; Ghosh, Amit; Chubukov, Victor; Mukhopadhyay, Aindrila; Arkin, Adam; Keasling, Jay D.
2015-01-01
Current limitations in quantitatively predicting biological behavior hinder our efforts to engineer biological systems to produce biofuels and other desired chemicals. Here, we present a new method for calculating metabolic fluxes, key targets in metabolic engineering, that incorporates data from 13C labeling experiments and genome-scale models. The data from 13C labeling experiments provide strong flux constraints that eliminate the need to assume an evolutionary optimization principle such as the growth rate optimization assumption used in Flux Balance Analysis (FBA). This effective constraining is achieved by making the simple but biologically relevant assumption that flux flows from core to peripheral metabolism and does not flow back. The new method is significantly more robust than FBA with respect to errors in genome-scale model reconstruction. Furthermore, it can provide a comprehensive picture of metabolite balancing and predictions for unmeasured extracellular fluxes as constrained by 13C labeling data. A comparison shows that the results of this new method are similar to those found through 13C Metabolic Flux Analysis (13C MFA) for central carbon metabolism but, additionally, it provides flux estimates for peripheral metabolism. The extra validation gained by matching 48 relative labeling measurements is used to identify where and why several existing COnstraint Based Reconstruction and Analysis (COBRA) flux prediction algorithms fail. We demonstrate how to use this knowledge to refine these methods and improve their predictive capabilities. This method provides a reliable base upon which to improve the design of biological systems. PMID:26379153
Direct Position Determination of Unknown Signals in the Presence of Multipath Propagation.
Du, Jianping; Wang, Ding; Yu, Wanting; Yu, Hongyi
2018-03-17
A novel geolocation architecture, termed "Multiple Transponders and Multiple Receivers for Multiple Emitters Positioning System (MTRE)" is proposed in this paper. Existing Direct Position Determination (DPD) methods take advantage of a rather simple channel assumption (line of sight channels with complex path attenuations) and a simplified MUltiple SIgnal Classification (MUSIC) algorithm cost function to avoid the high dimension searching. We point out that the simplified assumption and cost function reduce the positioning accuracy because of the singularity of the array manifold in a multi-path environment. We present a DPD model for unknown signals in the presence of Multi-path Propagation (MP-DPD) in this paper. MP-DPD adds non-negative real path attenuation constraints to avoid the mistake caused by the singularity of the array manifold. The Multi-path Propagation MUSIC (MP-MUSIC) method and the Active Set Algorithm (ASA) are designed to reduce the dimension of searching. A Multi-path Propagation Maximum Likelihood (MP-ML) method is proposed in addition to overcome the limitation of MP-MUSIC in the sense of a time-sensitive application. An iterative algorithm and an approach of initial value setting are given to make the MP-ML time consumption acceptable. Numerical results validate the performances improvement of MP-MUSIC and MP-ML. A closed form of the Cramér-Rao Lower Bound (CRLB) is derived as a benchmark to evaluate the performances of MP-MUSIC and MP-ML.
Simple wealth distribution model causing inequality-induced crisis without external shocks
NASA Astrophysics Data System (ADS)
Benisty, Henri
2017-05-01
We address the issue of the dynamics of wealth accumulation and economic crisis triggered by extreme inequality, attempting to stick to the most intrinsic assumptions possible. Our general framework is that of pure or modified multiplicative processes, basically geometric Brownian motions. In contrast with the usual approach of injecting into such stochastic agent models either specific, idiosyncratic internal nonlinear interaction patterns or macroscopic disruptive features, we propose a dynamic inequality model where the attainment of a sizable fraction of the total wealth by very few agents induces a crisis regime with strong intermittency, the explicit coupling between the richest and the rest being a mere normalization mechanism, hence with minimal extrinsic assumptions. The model thus harnesses the recognized lack of ergodicity of geometric Brownian motions. It also provides a statistical intuition to the consequences of Thomas Piketty's recent "r > g" (return rate > growth rate) paradigmatic analysis of very-long-term wealth trends. We suggest that the "water-divide" of wealth flow may define effective classes, making an objective entry point to calibrate the model. Consistently, we check that a tax mechanism associated to a few percent relative bias on elementary daily transactions is able to slow or stop the build-up of large wealth. When extreme fluctuations are tamed down to a stationary regime with sizable but steadier inequalities, it should still offer opportunities to study the dynamics of crisis and the inner effective classes induced through external or internal factors.
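A bare-bones multiplicative-wealth simulation in this spirit is sketched below; the parameters are purely illustrative, and the uniform redistribution stands in for, but is not, the paper's specific coupling and tax mechanism.

import numpy as np

rng = np.random.default_rng(42)

def simulate(n_agents=1000, n_steps=5000, mu=0.0002, sigma=0.02, tax=0.0):
    """Multiplicative (geometric-Brownian-like) wealth dynamics with an optional
    small flat redistribution each step; parameters are illustrative only."""
    w = np.ones(n_agents)
    top_share = np.empty(n_steps)
    for t in range(n_steps):
        w *= np.exp(mu - 0.5 * sigma**2 + sigma * rng.standard_normal(n_agents))
        if tax > 0.0:                        # redistribute a small fraction uniformly
            pot = tax * w
            w = w - pot + pot.sum() / n_agents
        top_share[t] = np.sort(w)[-10:].sum() / w.sum()   # wealth share of the 10 richest
    return top_share

print("no tax, final top-10 share: %.3f" % simulate()[-1])
print("1%% tax, final top-10 share: %.3f" % simulate(tax=0.01)[-1])

Without redistribution the top share keeps drifting upward, a simple reflection of the non-ergodicity of geometric Brownian motion mentioned above; a small per-step redistribution stabilizes it.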
Microscopic Description of Le Chatelier's Principle
ERIC Educational Resources Information Center
Novak, Igor
2005-01-01
A simple approach that "demystifies" Le Chatelier's principle (LCP) and simulates students to think about fundamental physical background behind the well-known principles is presented. The approach uses microscopic descriptors of matter like energy levels and populations and does not require any assumption about the fixed amount of substance being…
Nonparametric Identification of Causal Effects under Temporal Dependence
ERIC Educational Resources Information Center
Dafoe, Allan
2018-01-01
Social scientists routinely address temporal dependence by adopting a simple technical fix. However, the correct identification strategy for a causal effect depends on causal assumptions. These need to be explicated and justified; almost no studies do so. This article addresses this shortcoming by offering a precise general statement of the…
On the Clausius Equality and Inequality
ERIC Educational Resources Information Center
Anacleto, Joaquim
2011-01-01
This paper deals with subtleties and misunderstandings regarding the Clausius relation. We start by demonstrating the relation in a new and simple way, explaining clearly the assumptions made and the extent of its validity. Then follows a detailed discussion of some confusions and mistakes often found in the literature. The addressed points…
FARSITE: Fire Area Simulator-model development and evaluation
Mark A. Finney
1998-01-01
A computer simulation model, FARSITE, includes existing fire behavior models for surface, crown, spotting, point-source fire acceleration, and fuel moisture. The model's components and assumptions are documented. Simulations were run for simple conditions that illustrate the effect of individual fire behavior models on two-dimensional fire growth.
The Variance Reaction Time Model
ERIC Educational Resources Information Center
Sikstrom, Sverker
2004-01-01
The variance reaction time model (VRTM) is proposed to account for various recognition data on reaction time, the mirror effect, receiver-operating-characteristic (ROC) curves, etc. The model is based on simple and plausible assumptions within a neural network: VRTM is a two layer neural network where one layer represents items and one layer…
A Comprehensive Real-World Distillation Experiment
ERIC Educational Resources Information Center
Kazameas, Christos G.; Keller, Kaitlin N.; Luyben, William L.
2015-01-01
Most undergraduate mass transfer and separation courses cover the design of distillation columns, and many undergraduate laboratories have distillation experiments. In many cases, the treatment is restricted to simple column configurations and simplifying assumptions are made so as to convey only the basic concepts. In industry, the analysis of a…
Two (Very) Different Worlds: The Cultures of Policymaking and Qualitative Research
ERIC Educational Resources Information Center
Donmoyer, Robert
2012-01-01
This article brackets assumptions embedded in the framing of this special issue on "problematizing methodological simplicity in qualitative research" in a effort to understand why policymakers put pressure on all types of researchers, including those who use qualitative methods, to provide relatively simple, even somewhat mechanistic portrayals of…
ERIC Educational Resources Information Center
Fairweather, John R.; Hunt, Lesley M.; Rosin, Chris J.; Campbell, Hugh R.
2009-01-01
Within the political economy of agriculture and agrofood literatures there are examples of approaches that reject simple dichotomies between alternatives and the mainstream. In line with such approaches, we challenge the assumption that alternative agriculture, and its attendant improved environmental practices, alternative management styles, less…
Khader, Patrick H; Pachur, Thorsten; Meier, Stefanie; Bien, Siegfried; Jost, Kerstin; Rösler, Frank
2011-11-01
Many of our daily decisions are memory based, that is, the attribute information about the decision alternatives has to be recalled. Behavioral studies suggest that for such decisions we often use simple strategies (heuristics) that rely on controlled and limited information search. It is assumed that these heuristics simplify decision-making by activating long-term memory representations of only those attributes that are necessary for the decision. However, from behavioral studies alone, it is unclear whether using heuristics is indeed associated with limited memory search. The present study tested this assumption by monitoring the activation of specific long-term-memory representations with fMRI while participants made memory-based decisions using the "take-the-best" heuristic. For different decision trials, different numbers and types of information had to be retrieved and processed. The attributes consisted of visual information known to be represented in different parts of the posterior cortex. We found that the amount of information required for a decision was mirrored by a parametric activation of the dorsolateral PFC. Such a parametric pattern was also observed in all posterior areas, suggesting that activation was not limited to those attributes required for a decision. However, the posterior increases were systematically modulated by the relative importance of the information for making a decision. These findings suggest that memory-based decision-making is mediated by the dorsolateral PFC, which selectively controls posterior storage areas. In addition, the systematic modulations of the posterior activations indicate a selective boosting of activation of decision-relevant attributes.
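The take-the-best heuristic referred to above has a simple lexicographic structure. A generic sketch, with invented cue names and values, is:

def take_the_best(cues_a, cues_b, validity_order):
    """Take-the-best: inspect cues in order of validity and decide on the first cue
    that discriminates between options A and B; guess if none does."""
    for cue in validity_order:
        value_a, value_b = cues_a[cue], cues_b[cue]
        if value_a != value_b:
            return "A" if value_a > value_b else "B"
    return "guess"

# Hypothetical memory-based choice: which city is larger, given recalled binary cues?
city_a = {"capital": 1, "airport": 1, "university": 0}
city_b = {"capital": 0, "airport": 1, "university": 1}
print(take_the_best(city_a, city_b, ["capital", "airport", "university"]))  # -> "A"

Only the first discriminating cue is ever used, which is why the heuristic is said to require retrieval of a limited subset of attributes.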
Stress Analysis of Beams with Shear Deformation of the Flanges
NASA Technical Reports Server (NTRS)
Kuhn, Paul
1937-01-01
This report discusses the fundamental action of shear deformation of the flanges on the basis of simplifying assumptions. The theory is developed to the point of giving analytical solutions for simple cases of beams and of skin-stringer panels under axial load. Strain-gage tests on a tension panel and on a beam corresponding to these simple cases are described and the results are compared with analytical results. For wing beams, an approximate method of applying the theory is given. As an alternative, the construction of a mechanical analyzer is advocated.
Ryu, Ehri; Cheong, Jeewon
2017-01-01
In this article, we evaluated the performance of statistical methods in single-group and multi-group analysis approaches for testing group difference in indirect effects and for testing simple indirect effects in each group. We also investigated whether the performance of the methods in the single-group approach was affected when the assumption of equal variance was not satisfied. The assumption was critical for the performance of the two methods in the single-group analysis: the method using a product term for testing the group difference in a single path coefficient, and the Wald test for testing the group difference in the indirect effect. Bootstrap confidence intervals in the single-group approach and all methods in the multi-group approach were not affected by the violation of the assumption. We compared the performance of the methods and provided recommendations. PMID:28553248
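A minimal sketch of one of the approaches compared, a percentile-bootstrap confidence interval for the group difference in an indirect effect, assuming a simple X -> M -> Y mediation model in each group; the simulated data, sample sizes, and variable names are hypothetical, not the article's simulation design:

    # Sketch: percentile-bootstrap CI for the group difference in an indirect effect (a*b),
    # assuming a simple X -> M -> Y mediation in each group. Data are simulated placeholders.
    import numpy as np

    rng = np.random.default_rng(0)

    def ab(x, m, y):
        """Indirect effect a*b for a simple X -> M -> Y mediation model."""
        a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
        b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]
        return a * b            # b is the coefficient of M controlling for X

    def simulate(n, sd_m):      # two groups with unequal mediator variances (hypothetical)
        x = rng.normal(size=n)
        m = 0.5 * x + rng.normal(scale=sd_m, size=n)
        y = 0.4 * m + rng.normal(size=n)
        return x, m, y

    g1, g2 = simulate(200, 1.0), simulate(200, 2.0)

    diffs = []
    for _ in range(2000):
        i1 = rng.integers(0, 200, 200)
        i2 = rng.integers(0, 200, 200)
        diffs.append(ab(g1[0][i1], g1[1][i1], g1[2][i1]) - ab(g2[0][i2], g2[1][i2], g2[2][i2]))
    print(np.percentile(diffs, [2.5, 97.5]))   # 95% percentile-bootstrap CI for the difference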
Ozone chemical equilibrium in the extended mesopause under the nighttime conditions
NASA Astrophysics Data System (ADS)
Belikovich, M. V.; Kulikov, M. Yu.; Grygalashvyly, M.; Sonnemann, G. R.; Ermakova, T. S.; Nechaev, A. A.; Feigin, A. M.
2018-01-01
For retrieval of atomic oxygen and atomic hydrogen via ozone observations in the extended mesopause region (∼70-100 km) under nighttime conditions, an assumption of photochemical equilibrium of ozone is often used in research. In this work, the assumption of chemical equilibrium of ozone near the mesopause region during nighttime is tested. We examine 3D chemistry-transport model (CTM) annual calculations and determine the ratio between the correct (modeled) distributions of the O3 density and its equilibrium values depending on altitude, latitude, and season. The results show that retrieving atomic oxygen and atomic hydrogen distributions under an assumption of ozone chemical equilibrium may lead to large errors below ∼81-87 km. We give a simple and clear semi-empirical criterion for determining, in practice, the lower boundary of the region of ozone chemical equilibrium near the mesopause.
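As an illustration only, the sketch below shows the kind of equilibrium check described, under the assumption that nighttime ozone production proceeds via three-body recombination (O + O2 + M) and loss via reactions with H and O; the rate constants, number densities, and function names are placeholders, not values or code from the study:

    # Illustrative sketch (not the study's model): ratio of a modeled O3 density to a
    # nighttime chemical-equilibrium value, assuming production via O + O2 + M -> O3 and
    # loss via H + O3 and O + O3. k1, k2, k3 and the densities are placeholders.
    def o3_equilibrium(k1, k2, k3, n_o, n_o2, n_h, n_m):
        return k1 * n_o * n_o2 * n_m / (k2 * n_h + k3 * n_o)

    def equilibrium_ratio(n_o3_model, **kwargs):
        """R close to 1 indicates the equilibrium assumption is acceptable at that altitude."""
        return n_o3_model / o3_equilibrium(**kwargs)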
Learning to Predict Combinatorial Structures
NASA Astrophysics Data System (ADS)
Vembu, Shankar
2009-12-01
The major challenge in designing a discriminative learning algorithm for predicting structured data is to address the computational issues arising from the exponential size of the output space. Existing algorithms make different assumptions to ensure efficient, polynomial time estimation of model parameters. For several combinatorial structures, including cycles, partially ordered sets, permutations and other graph classes, these assumptions do not hold. In this thesis, we address the problem of designing learning algorithms for predicting combinatorial structures by introducing two new assumptions: (i) The first assumption is that a particular counting problem can be solved efficiently. The consequence is a generalisation of the classical ridge regression for structured prediction. (ii) The second assumption is that a particular sampling problem can be solved efficiently. The consequence is a new technique for designing and analysing probabilistic structured prediction models. These results can be applied to solve several complex learning problems including but not limited to multi-label classification, multi-category hierarchical classification, and label ranking.
Evolution of eumetazoan nervous systems: insights from cnidarians.
Kelava, Iva; Rentzsch, Fabian; Technau, Ulrich
2015-12-19
Cnidarians, the sister group to bilaterians, have a simple diffuse nervous system. This morphological simplicity and their phylogenetic position make them a crucial group in the study of the evolution of the nervous system. The development of their nervous systems is of particular interest, as by uncovering the genetic programme that underlies it, and comparing it with the bilaterian developmental programme, it is possible to make assumptions about the genes and processes involved in the development of ancestral nervous systems. Recent advances in sequencing methods, genetic interference techniques and transgenic technology have enabled us to get a first glimpse into the molecular network underlying the development of a cnidarian nervous system-in particular the nervous system of the anthozoan Nematostella vectensis. It appears that much of the genetic network of the nervous system development is partly conserved between cnidarians and bilaterians, with Wnt and bone morphogenetic protein (BMP) signalling, and Sox genes playing a crucial part in the differentiation of neurons. However, cnidarians possess some specific characteristics, and further studies are necessary to elucidate the full regulatory network. The work on cnidarian neurogenesis further accentuates the need to study non-model organisms in order to gain insights into processes that shaped present-day lineages during the course of evolution. © 2015 The Authors.
End-of-life decision making is more than rational.
Eliott, Jaklin A; Olver, Ian N
2005-01-01
Most medical models of end-of-life decision making by patients assume a rational autonomous adult obtaining and deliberating over information to arrive at some conclusion. If the patient is deemed incapable of this, family members are often nominated as substitutes, with assumptions that the family are united and rational. These are problematic assumptions. We interviewed 23 outpatients with cancer about the decision not to resuscitate a patient following cardiopulmonary arrest and examined their accounts of decision making using discourse analytical techniques. Our analysis suggests that participants access two different interpretative repertoires regarding the construct of persons, invoking a 'modernist' repertoire to assert the appropriateness of someone, a patient or family, making a decision, and a 'romanticist' repertoire when identifying either a patient or family as ineligible to make the decision. In determining the appropriateness of an individual to make decisions, participants informally apply 'Sanity' and 'Stability' tests, assessing both an inherent ability to reason (modernist repertoire) and the presence of emotion (romanticist repertoire) which might impact on the decision making process. Failure to pass the tests respectively excludes or excuses individuals from decision making. The absence of the romanticist repertoire in dominant models of patient decision making has ethical implications for policy makers and medical practitioners dealing with dying patients and their families.
How to make a particular case for person-centred patient care: A commentary on Alexandra Parvan.
Graham, George
2018-06-14
In recent years, a person-centred approach to patient care in cases of mental illness has been promoted as an alternative to a disease orientated approach. Alexandra Parvan's contribution to the person-centred approach serves to motivate an exploration of the approach's most apt metaphysical assumptions. I argue that a metaphysical thesis or assumption about both persons and their uniqueness is an essential element of being person-centred. I apply the assumption to issues such as the disorder/disease distinction and to the continuity of mental health and illness. © 2018 John Wiley & Sons, Ltd.
ERIC Educational Resources Information Center
Weinrich, M. L.; Talanquer, V.
2015-01-01
The central goal of this qualitative research study was to uncover major implicit assumptions that students with different levels of training in the discipline apply when thinking and making decisions about chemical reactions used to make a desired product. In particular, we elicited different ways of conceptualizing why chemical reactions happen…
Renormalized Two-Fluid Hydrodynamics of Cosmic-Ray--modified Shocks
NASA Astrophysics Data System (ADS)
Malkov, M. A.; Voelk, H. J.
1996-12-01
A simple two-fluid model of diffusive shock acceleration, introduced by Axford, Leer, & Skadron and Drury & Völk, is revisited. This theory became a chief instrument in studies of shock modification due to particle acceleration. Unfortunately, its most intriguing steady-state prediction, a significant enhancement of the shock compression and a corresponding increase in cosmic-ray production, violates assumptions that are critical for the derivation of the theory. In particular, for strong shocks the spectral flattening makes a cutoff-independent definition of pressure and energy density impossible and therefore causes an additional closure problem. Confining ourselves for simplicity to the case of plane shocks and assuming reacceleration of a preexisting cosmic-ray population, we argue that under these circumstances, too, the kinetic solution has a rather simple form. It can be characterized by only a few parameters, in the simplest case by the slope and the magnitude of the momentum distribution at the upper momentum cutoff. We relate these parameters to standard hydrodynamic quantities such as the overall shock compression ratio and the downstream cosmic-ray pressure. The two-fluid theory produced in this way has the traditional form but renormalized closure parameters. By solving the renormalized Rankine-Hugoniot equations, we show that for the efficient stationary solution, the one most significant for cosmic-ray acceleration, the renormalization is needed in the whole parameter range of astrophysical interest.
The effect of stimulus strength on the speed and accuracy of a perceptual decision.
Palmer, John; Huk, Alexander C; Shadlen, Michael N
2005-05-02
Both the speed and the accuracy of a perceptual judgment depend on the strength of the sensory stimulation. When stimulus strength is high, accuracy is high and response time is fast; when stimulus strength is low, accuracy is low and response time is slow. Although the psychometric function is well established as a tool for analyzing the relationship between accuracy and stimulus strength, the corresponding chronometric function for the relationship between response time and stimulus strength has not received as much consideration. In this article, we describe a theory of perceptual decision making based on a diffusion model. In it, a decision is based on the additive accumulation of sensory evidence over time to a bound. Combined with simple scaling assumptions, the proportional-rate and power-rate diffusion models predict simple analytic expressions for both the chronometric and psychometric functions. In a series of psychophysical experiments, we show that this theory accounts for response time and accuracy as a function of both stimulus strength and speed-accuracy instructions. In particular, the results demonstrate a close coupling between response time and accuracy. The theory is also shown to subsume the predictions of Piéron's Law, a power function dependence of response time on stimulus strength. The theory's analytic chronometric function allows one to extend theories of accuracy to response time.
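A sketch of analytic psychometric and chronometric functions of the kind described for a proportional-rate diffusion model, assuming a decision bound ±A, a drift rate proportional to stimulus strength (kC), and a residual non-decision time t_R; the parameter values below are arbitrary illustrations, not fitted estimates:

    # Sketch of proportional-rate diffusion predictions: accuracy and mean response time
    # as functions of stimulus strength C, with bound +/-A, drift k*C, non-decision time t_R.
    import numpy as np

    def p_correct(C, A, k):
        return 1.0 / (1.0 + np.exp(-2.0 * A * k * C))        # psychometric function

    def mean_rt(C, A, k, t_R):
        x = A * k * C
        return (A / (k * C)) * np.tanh(x) + t_R              # chronometric function

    C = np.array([0.016, 0.032, 0.064, 0.128, 0.256, 0.512]) # stimulus strengths (e.g. coherence)
    print(p_correct(C, A=1.0, k=10.0))
    print(mean_rt(C, A=1.0, k=10.0, t_R=0.35))

The close coupling noted in the abstract follows directly from these two expressions sharing the same parameters A and k.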
Detection of Extraterrestrial Ecology (Exoecology)
NASA Technical Reports Server (NTRS)
Jones, Harry; DeVincenzi, Donald L. (Technical Monitor)
2000-01-01
Researchers in the Astrobiology Technology Branch at Ames Research Center have begun investigating alternate concepts for the detection of extraterrestrial life. We suggest searching for extraterrestrial ecology, exoecology, as well as for extraterrestrial biology, exobiology. Ecology describes the interactions of living things with their environment. All ecosystems are highly constrained by their environment and are constrained by well-known system design principles. Ecology could exist wherever there is an energy source and living things have discovered some means to capture, store, and use the available energy. Terrestrial ecosystems use light, organic molecules, and, in thermal vents and elsewhere, simple inorganic molecules as energy sources. Ecosystem behavior is controlled by matter and energy conservation laws and can be described by linear and nonlinear dynamic systems theory. Typically in an ecosystem different molecules are not in chemical equilibrium and scarce material is conserved, stored, or recycled. Temporal cycles and spatial variations are often observed. These and other general principles of exoecology can help guide the search for extraterrestrial life. The chemical structure observed in terrestrial biology may be highly contingent on evolutionary accidents. Oxygen was not always abundant on Earth. Primitive sulfur bacteria use hydrogen sulfide and sulfur to perform photosynthesis instead of water and oxygen. Astrobiologists have assumed, for the sake of narrowing and focusing our life detection strategies, that extraterrestrial life will have detailed chemical similarities with terrestrial life. Such assumptions appear very reasonable and they allow us to design specific and highly sensitive life detection experiments. But the fewer assumptions we make, the less chance we have of being entirely wrong. The best strategy for the detection of extraterrestrial life could be a mixed strategy. We should use detailed assumptions based on terrestrial biology to guide some but not all future searches for alien life. The systems principles of exoecology seem much more fundamental and inescapable than the terrestrial biology analogies of exobiology. We should search for exoecology as well as exobiology.
Scott, Finlay; Jardim, Ernesto; Millar, Colin P; Cerviño, Santiago
2016-01-01
Estimating fish stock status is very challenging given the many sources and high levels of uncertainty surrounding the biological processes (e.g. natural variability in the demographic rates), model selection (e.g. choosing growth or stock assessment models) and parameter estimation. Incorporating multiple sources of uncertainty in a stock assessment allows advice to better account for the risks associated with proposed management options, promoting decisions that are more robust to such uncertainty. However, a typical assessment only reports the model fit and variance of estimated parameters, thereby underreporting the overall uncertainty. Additionally, although multiple candidate models may be considered, only one is selected as the 'best' result, effectively rejecting the plausible assumptions behind the other models. We present an applied framework to integrate multiple sources of uncertainty in the stock assessment process. The first step is the generation and conditioning of a suite of stock assessment models that contain different assumptions about the stock and the fishery. The second step is the estimation of parameters, including fitting of the stock assessment models. The final step integrates across all of the results to reconcile the multi-model outcome. The framework is flexible enough to be tailored to particular stocks and fisheries and can draw on information from multiple sources to implement a broad variety of assumptions, making it applicable to stocks with varying levels of data availability. The Iberian hake stock in International Council for the Exploration of the Sea (ICES) Divisions VIIIc and IXa is used to demonstrate the framework, starting from length-based stock and indices data. Process and model uncertainty are considered through the growth, natural mortality, fishing mortality, survey catchability and stock-recruitment relationship. Estimation uncertainty is included as part of the fitting process. Simple model averaging is used to integrate across the results and produce a single assessment that considers the multiple sources of uncertainty.
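The final integration step can be illustrated with a toy version of simple (equal-weight) model averaging: pool an equal number of draws from each candidate model so the combined distribution carries both estimation and assumption uncertainty. The draws below are simulated placeholders, not output from the Iberian hake assessment:

    # Sketch of simple model averaging: pool equal numbers of bootstrap/MCMC draws from
    # each candidate model so the pooled distribution reflects within-model (estimation)
    # and between-model (assumption) uncertainty. All numbers are hypothetical.
    import numpy as np

    rng = np.random.default_rng(1)
    # e.g. spawning-stock-biomass draws from three models with different assumptions
    model_draws = [rng.normal(50_000, 5_000, 1000),
                   rng.normal(62_000, 8_000, 1000),
                   rng.normal(55_000, 4_000, 1000)]

    pooled = np.concatenate([rng.choice(d, 1000, replace=True) for d in model_draws])
    print(np.percentile(pooled, [5, 50, 95]))   # model-averaged interval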
Drichoutis, Andreas C.; Lusk, Jayson L.
2014-01-01
Despite the fact that conceptual models of individual decision making under risk are deterministic, attempts to econometrically estimate risk preferences require some assumption about the stochastic nature of choice. Unfortunately, the consequences of making different assumptions are, at present, unclear. In this paper, we compare three popular error specifications (Fechner, contextual utility, and Luce error) for three different preference functionals (expected utility, rank-dependent utility, and a mixture of those two) using in- and out-of-sample selection criteria. We find drastically different inferences about structural risk preferences across the competing functionals and error specifications. Expected utility theory is least affected by the selection of the error specification. A mixture model combining the two conceptual models assuming contextual utility provides the best fit of the data both in- and out-of-sample. PMID:25029467
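As a sketch of what an error specification adds to a deterministic preference functional, the code below implements a Fechner (additive noise) specification on expected utility with CRRA utility: the probability of choosing one lottery is a normal CDF of the scaled utility difference. Roughly speaking, contextual utility rescales that difference by the utility range of the outcomes in the choice pair, and a Luce specification works with ratios rather than differences; all functional forms and parameter values here are illustrative assumptions, not the paper's estimates.

    # Sketch of a Fechner error specification on expected utility: the choice probability is
    # a normal CDF of the utility difference divided by a noise parameter sigma.
    # CRRA utility and all parameter values are illustrative assumptions.
    import math

    def crra(x, r):
        return math.log(x) if r == 1 else x ** (1 - r) / (1 - r)

    def eu(lottery, r):                                   # lottery = [(prob, payoff), ...]
        return sum(p * crra(x, r) for p, x in lottery)

    def phi(z):                                           # standard normal CDF
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))

    def p_choose_a(lot_a, lot_b, r, sigma):
        """Fechner specification: choice probability from a noisy utility difference."""
        return phi((eu(lot_a, r) - eu(lot_b, r)) / sigma)

    a = [(0.5, 10.0), (0.5, 2.0)]
    b = [(1.0, 5.5)]
    print(p_choose_a(a, b, r=0.5, sigma=0.1))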
NASA Astrophysics Data System (ADS)
Wallace, Maria F. G.
2018-03-01
Over the years neoliberal ideology and discourse have become intricately connected to making science people. Science educators work within a complicated paradox where they are obligated to meet neoliberal demands that reinscribe dominant, hegemonic assumptions for producing a scientific workforce. Whether it is the discourse of school science, processes of being a scientist, or definitions of science, particular subjects are made intelligible as others are made unintelligible. This paper resides within the messy entanglements of feminist poststructural and new materialist perspectives to provoke spaces where science educators might enact ethicopolitical hesitations. By turning to and living in theory, the un/making of certain kinds of science people reveals material effects and affects. Practicing ethicopolitical hesitations prompts science educators to consider beginning their work from ontological assumptions that begin with abundance rather than lack.
Bell violation using entangled photons without the fair-sampling assumption.
Giustina, Marissa; Mech, Alexandra; Ramelow, Sven; Wittmann, Bernhard; Kofler, Johannes; Beyer, Jörn; Lita, Adriana; Calkins, Brice; Gerrits, Thomas; Nam, Sae Woo; Ursin, Rupert; Zeilinger, Anton
2013-05-09
The violation of a Bell inequality is an experimental observation that forces the abandonment of a local realistic viewpoint--namely, one in which physical properties are (probabilistically) defined before and independently of measurement, and in which no physical influence can propagate faster than the speed of light. All such experimental violations require additional assumptions depending on their specific construction, making them vulnerable to so-called loopholes. Here we use entangled photons to violate a Bell inequality while closing the fair-sampling loophole, that is, without assuming that the sample of measured photons accurately represents the entire ensemble. To do this, we use the Eberhard form of Bell's inequality, which is not vulnerable to the fair-sampling assumption and which allows a lower collection efficiency than other forms. Technical improvements of the photon source and high-efficiency transition-edge sensors were crucial for achieving a sufficiently high collection efficiency. Our experiment makes the photon the first physical system for which each of the main loopholes has been closed, albeit in different experiments.
ERIC Educational Resources Information Center
Finley-Brook, Mary; Zanella-Litke, Megan; Ragan, Kyle; Coleman, Breana
2012-01-01
Colleges across the country are hosting on-campus renewable energy projects. The general assumption is that trade schools, community colleges, or technology-oriented universities with large engineering departments make the most appropriate sites for training future leaders in renewable energy innovation. While it makes sense to take advantage of…
Static Analysis Alert Audits: Lexicon and Rules
2016-11-04
collaborators
• Includes a standard set of well-defined determinations for static analysis alerts
• Includes a set of auditing rules to help auditors make...consistent decisions in commonly-encountered situations
Different auditors should make the same determination for a given alert! Improve the quality and...scenarios
• Establish assumptions auditors can make
• Overall: help make audit determinations more consistent
We developed 12 rules
• Drew on our own
ASP-G: an ASP-based method for finding attractors in genetic regulatory networks
Mushthofa, Mushthofa; Torres, Gustavo; Van de Peer, Yves; Marchal, Kathleen; De Cock, Martine
2014-01-01
Motivation: Boolean network models are suitable to simulate GRNs in the absence of detailed kinetic information. However, reducing the biological reality implies making assumptions on how genes interact (interaction rules) and how their state is updated during the simulation (update scheme). The exact choice of the assumptions largely determines the outcome of the simulations. In most cases, however, the biologically correct assumptions are unknown. An ideal simulation thus implies testing different rules and schemes to determine those that best capture an observed biological phenomenon. This is not trivial because most current methods to simulate Boolean network models of GRNs and to compute their attractors impose specific assumptions that cannot be easily altered, as they are built into the system. Results: To allow for a more flexible simulation framework, we developed ASP-G. We show the correctness of ASP-G in simulating Boolean network models and obtaining attractors under different assumptions by successfully recapitulating the detection of attractors of previously published studies. We also provide an example of how simulating network models under different settings helps determine the assumptions under which a certain conclusion holds. The main added value of ASP-G is in its modularity and declarativity, making it more flexible and less error-prone than traditional approaches. The declarative nature of ASP-G comes at the expense of being slower than the more dedicated systems but still achieves a good efficiency with respect to computational time. Availability and implementation: The source code of ASP-G is available at http://bioinformatics.intec.ugent.be/kmarchal/Supplementary_Information_Musthofa_2014/asp-g.zip. Contact: Kathleen.Marchal@UGent.be or Martine.DeCock@UGent.be Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25028722
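To make the notions of interaction rule, update scheme, and attractor concrete, here is a toy attractor search for a three-gene Boolean network under a synchronous update scheme; the network and its rules are hypothetical, and ASP-G itself expresses such rules and schemes declaratively in Answer Set Programming rather than in Python:

    # Toy attractor detection for a small Boolean network under synchronous update.
    # The three-gene network and its interaction rules are hypothetical.
    from itertools import product

    def update(state):
        a, b, c = state
        return (b and not c,      # rule for gene A (hypothetical)
                a,                # rule for gene B
                a or b)           # rule for gene C

    def attractors(n_genes=3):
        found = set()
        for start in product([False, True], repeat=n_genes):
            seen, s = [], start
            while s not in seen:              # iterate until a state repeats
                seen.append(s)
                s = update(s)
            cycle = tuple(seen[seen.index(s):])               # the repeating part is an attractor
            rotations = [cycle[i:] + cycle[:i] for i in range(len(cycle))]
            found.add(min(rotations))                         # canonical rotation to avoid duplicates
        return found

    for att in attractors():
        print(att)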
Fostering deliberations about health innovation: what do we want to know from publics?
Lehoux, Pascale; Daudelin, Genevieve; Demers-Payette, Olivier; Boivin, Antoine
2009-06-01
As more complex and uncertain forms of health innovation keep emerging, scholars are increasingly voicing arguments in favour of public involvement in health innovation policy. The current conceptualization of this involvement is, however, somewhat problematic as it tends to assume that scientific facts form a "hard," indisputable core around which "soft," relative values can be attached. This paper, by giving precedence to epistemological issues, explores what there is to know from public involvement. We argue that knowledge and normative assumptions are co-constitutive of each other and pivotal to the ways in which both experts and non-experts reason about health innovations. Because knowledge and normative assumptions are different but interrelated ways of reasoning, public involvement initiatives need to emphasise deliberative processes that maximise mutual learning within and across various groups of both experts and non-experts (who, we argue, all belong to the "publics"). Hence, we believe that what researchers might wish to know from publics is how their reasoning is anchored in normative assumptions (what makes a given innovation desirable?) and in knowledge about the plausibility of their effects (are they likely to be realised?). Accordingly, one sensible goal of greater public involvement in health innovation policy would be to refine normative assumptions and make their articulation with scientific observations explicit and openly contestable. The paper concludes that we must differentiate between normative assumptions and knowledge, rather than set up a dichotomy between them or confound them.
ERIC Educational Resources Information Center
Eanes, Francis R.
2016-01-01
How can researchers and practitioners meaningfully engage the public in matters of environmental stewardship and landscape conservation? Traditional approaches to answering this question have erroneously relied upon the assumption that the simple combination of knowledge and awareness of environmental challenges will motivate desirable behavior…
Poisson sampling - The adjusted and unadjusted estimator revisited
Michael S. Williams; Hans T. Schreuder; Gerardo H. Terrazas
1998-01-01
The prevailing assumption, that for Poisson sampling the adjusted estimator "Y-hat a" is always substantially more efficient than the unadjusted estimator "Y-hat u", is shown to be incorrect. Some well-known theoretical results are applicable since "Y-hat a" is a ratio-of-means estimator and "Y-hat u" a simple unbiased estimator...
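A small simulation can make the comparison concrete. The sketch below contrasts the unadjusted estimator, a Horvitz-Thompson-type sum of y_i/pi_i over the Poisson sample, with an adjusted, ratio-of-means form that rescales by expected over realized sample size; the population, the inclusion probabilities, and the exact form of the adjustment are assumptions for illustration, not the authors' data or notation:

    # Simulation sketch: Poisson sampling with an unadjusted (Horvitz-Thompson type) estimator
    # versus an adjusted, ratio-of-means form rescaled by expected over realized sample size.
    import numpy as np

    rng = np.random.default_rng(2)
    N = 1000
    y = rng.gamma(2.0, 50.0, N)                 # unit values (e.g. tree volumes), hypothetical
    pi = np.clip(y / y.sum() * 50, 0.001, 1)    # inclusion probabilities roughly proportional to size
    expected_n = pi.sum()

    Yu, Ya = [], []
    for _ in range(5000):
        s = rng.random(N) < pi                  # Poisson sampling: independent Bernoulli draws
        if not s.any():
            continue
        ht = np.sum(y[s] / pi[s])
        Yu.append(ht)
        Ya.append(ht * expected_n / s.sum())    # adjust for the random realized sample size
    print("true total:", y.sum())
    print("unadjusted mean, sd:", np.mean(Yu), np.std(Yu))
    print("adjusted   mean, sd:", np.mean(Ya), np.std(Ya))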
Improving Foundational Number Representations through Simple Arithmetical Training
ERIC Educational Resources Information Center
Kallai, Arava Y.; Schunn, Christian D.; Ponting, Andrea L.; Fiez, Julie A.
2011-01-01
The aim of this study was to test a training program intended to fine-tune the mental representations of double-digit numbers, thus increasing the discriminability of such numbers. The authors' assumption was that increased fluency in math could be achieved by improving the analogic representations of numbers. The study was completed in the…
Response: Training Doctoral Students to Be Scientists
ERIC Educational Resources Information Center
Pollio, David E.
2012-01-01
The purpose of this article is to begin framing doctoral training for a science of social work. This process starts by examining two seemingly simple questions: "What is a social work scientist?" and "How do we train social work scientists?" In answering the first question, some basic assumptions and concepts about what constitutes a "social work…
A Bayesian Beta-Mixture Model for Nonparametric IRT (BBM-IRT)
ERIC Educational Resources Information Center
Arenson, Ethan A.; Karabatsos, George
2017-01-01
Item response models typically assume that the item characteristic (step) curves follow a logistic or normal cumulative distribution function, which are strictly monotone functions of person test ability. Such assumptions can be overly-restrictive for real item response data. We propose a simple and more flexible Bayesian nonparametric IRT model…
Distinct Orbitofrontal Regions Encode Stimulus and Choice Valuation
ERIC Educational Resources Information Center
Cunningham, William A.; Kesek, Amanda; Mowrer, Samantha M.
2009-01-01
The weak axiom of revealed preferences suggests that the value of an object can be understood through the simple examination of choices. Although this axiom has driven economic theory, the assumption of equation between value and choice is often violated. fMRI was used to decouple the processes associated with evaluating stimuli from evaluating…
Annual forest inventory estimates based on the moving average
Francis A. Roesch; James R. Steinman; Michael T. Thompson
2002-01-01
Three interpretations of the simple moving average estimator, as applied to the USDA Forest Service's annual forest inventory design, are presented. A corresponding approach to composite estimation over arbitrarily defined land areas and time intervals is given for each interpretation, under the assumption that the investigator is armed with only the spatial/...
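In its simplest reading, the moving average estimator pools the k most recent annual panel estimates with equal weight, as in the sketch below; the panel values and window length are hypothetical:

    # Sketch of the simple moving average over annual inventory panels: one panel is measured
    # each year, and the estimator averages the k most recent panel estimates with equal weight.
    def moving_average(panel_estimates, k=5):
        recent = panel_estimates[-k:]            # the k most recent annual panels
        return sum(recent) / len(recent)

    annual_panel_estimates = [102.4, 98.7, 105.1, 101.9, 99.3, 103.8]  # e.g. volume per ha by year
    print(moving_average(annual_panel_estimates))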
Language, Cognition, and the Right Hemisphere: A Response to Gazzaniga.
ERIC Educational Resources Information Center
Levy, Jerre
1983-01-01
Disputes several assumptions made by Gazzaniga in the preceding article, namely: (1) that any capacity to extract meaning from spoken or written words indicates linguistic competence; and (2) that the right hemisphere is passive and nonresponsive and that the limits of its cognitive abilities are manifested in simple matching-to-sample tasks. (GC)
NASA Astrophysics Data System (ADS)
Russ, Stefanie
2014-08-01
It is shown that a two-component percolation model on a simple cubic lattice can explain an experimentally observed behavior [Savage et al., Sens. Actuators B 79, 17 (2001), 10.1016/S0925-4005(01)00843-7; Sens. Actuators B 72, 239 (2001), 10.1016/S0925-4005(00)00676-6], namely, that a network built up by a mixture of sintered nanocrystalline semiconducting n and p grains can exhibit selective behavior, i.e., respond with a resistance increase when exposed to a reducing gas A and with a resistance decrease in response to another reducing gas B. To this end, a simple model is developed, where the n and p grains are simulated by overlapping spheres, based on realistic assumptions about the gas reactions on the grain surfaces. The resistance is calculated by random walk simulations with nn, pp, and np bonds between the grains, and the results are found to be in very good agreement with the experiments. Contrary to former assumptions, the np bonds are crucial for obtaining this agreement.
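As a very reduced sketch of the two-component idea (sites on a simple cubic lattice assigned n- or p-type, with nearest-neighbour contacts classified as nn, pp, or np bonds), the code below counts bond types for a random mixture; the actual study uses overlapping spheres and random-walk resistance calculations, so this is only the combinatorial skeleton, with arbitrary lattice size and composition:

    # Two-component occupancy on a simple cubic lattice: each site is an n- or p-type grain
    # with probability x_p of being p-type; nearest-neighbour contacts are classified as
    # nn, pp or np bonds. Lattice size and composition are arbitrary placeholders.
    import numpy as np

    rng = np.random.default_rng(3)
    L, x_p = 20, 0.3
    grains = rng.random((L, L, L)) < x_p        # True = p-type grain, False = n-type

    counts = {"nn": 0, "pp": 0, "np": 0}
    for axis in range(3):
        a = grains
        b = np.roll(grains, -1, axis=axis)      # periodic nearest neighbour along this axis
        counts["pp"] += int(np.sum(a & b))
        counts["nn"] += int(np.sum(~a & ~b))
        counts["np"] += int(np.sum(a ^ b))
    print(counts)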
A new paradigm for clinical communication: critical review of literature in cancer care.
Salmon, Peter; Young, Bridget
2017-03-01
To: (i) identify key assumptions of the scientific 'paradigm' that shapes clinical communication research and education in cancer care; (ii) show that, as general rules, these do not match patients' own priorities for communication; and (iii) suggest how the paradigm might change to reflect evidence better and thereby serve patients better. A critical review, focusing on cancer care. We identified assumptions about patients' and clinicians' roles in recent position and policy statements. We examined these in light of research evidence, focusing on inductive research that has not itself been constrained by those assumptions, and considering the institutionalised interests that the assumptions might serve. The current paradigm constructs patients simultaneously as needy (requiring clinicians' explicit emotional support) and robust (seeking information and autonomy in decision making). Evidence indicates, however, that patients generally value clinicians who emphasise expert clinical care rather than counselling, and who lead decision making. In denoting communication as a technical skill, the paradigm constructs clinicians as technicians; however, communication cannot be reduced to technical skills, and teaching clinicians 'communication skills' has not clearly benefited patients. The current paradigm is therefore defined by assumptions that have not arisen from evidence. A paradigm for clinical communication that makes its starting point the roles that mortal illness gives patients and clinicians would emphasise patients' vulnerability and clinicians' goal-directed expertise. Attachment theory provides a knowledge base to inform both research and education. Researchers will need to be alert to political interests that seek to mould patients into 'consumers', and to professional interests that seek to add explicit psychological dimensions to clinicians' roles. New approaches to education will be needed to support clinicians' curiosity and goal-directed judgement in applying this knowledge. The test for the new paradigm will be whether the research and education it promotes benefit patients. © 2016 The Authors. Medical Education published by Association for the Study of Medical Education and John Wiley & Sons Ltd.
Cosmic Star Formation: A Simple Model of the SFRD(z)
NASA Astrophysics Data System (ADS)
Chiosi, Cesare; Sciarratta, Mauro; D’Onofrio, Mauro; Chiosi, Emanuela; Brotto, Francesca; De Michele, Rosaria; Politino, Valeria
2017-12-01
We investigate the evolution of the cosmic star formation rate density (SFRD) from redshift z = 20 to z = 0 and compare it with the observational SFRD of Madau and Dickinson derived from recent compilations of ultraviolet (UV) and infrared (IR) data. The theoretical SFRD(z) and its evolution are obtained using a simple model that folds together the star formation histories of prototype galaxies that are designed to represent real objects of different morphological type along the Hubble sequence and the hierarchical growth of structures under the action of gravity from small perturbations to large-scale objects in Λ-CDM cosmogony, i.e., the number density of dark matter halos N(M,z). Although the overall model is very simple and easy to set up, it provides results that closely mimic those obtained from highly complex large-scale N-body simulations. The simplicity of our approach allows us to test different assumptions for the star formation law in galaxies, the effects of energy feedback from stars to interstellar gas, the efficiency of galactic winds, and also the effect of N(M,z). The result of our analysis is that in the framework of the hierarchical assembly of galaxies, the so-called time-delayed star formation under plain assumptions mainly for the energy feedback and galactic winds can reproduce the observational SFRD(z).
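For reference, the observational benchmark referred to above is commonly summarized by the Madau & Dickinson (2014) fitting function for the cosmic SFRD, in units of Msun per yr per Mpc^3; evaluating it at a few redshifts gives the curve a model SFRD(z) is expected to reproduce. The coefficients below follow the commonly quoted form of that fit and are not taken from the study above:

    # Madau & Dickinson (2014) style fitting function for the cosmic SFRD (Msun / yr / Mpc^3).
    def sfrd_madau_dickinson(z):
        return 0.015 * (1.0 + z) ** 2.7 / (1.0 + ((1.0 + z) / 2.9) ** 5.6)

    for z in (0.0, 1.0, 2.0, 4.0, 8.0):
        print(z, sfrd_madau_dickinson(z))   # peaks near z ~ 2 and declines toward z = 0 and high z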
A simple model for the cloud adjacency effect and the apparent bluing of aerosols near clouds
NASA Astrophysics Data System (ADS)
Marshak, Alexander; Wen, Guoyong; Coakley, James A.; Remer, Lorraine A.; Loeb, Norman G.; Cahalan, Robert F.
2008-07-01
In determining aerosol-cloud interactions, the properties of aerosols must be characterized in the vicinity of clouds. Numerous studies based on satellite observations have reported that aerosol optical depths increase with increasing cloud cover. Part of the increase comes from the humidification and consequent growth of aerosol particles in the moist cloud environment, but part comes from 3-D cloud-radiative transfer effects on the retrieved aerosol properties. Often, discerning whether the observed increases in aerosol optical depths are artifacts or real proves difficult. The paper only addresses the cloud-clear sky radiative transfer interaction part. It provides a simple model that quantifies the enhanced illumination of cloud-free columns in the vicinity of clouds that are used in the aerosol retrievals. This model is based on the assumption that the enhancement in the cloud-free column radiance comes from enhanced Rayleigh scattering that results from the presence of the nearby clouds. This assumption leads to a larger increase of AOT for shorter wavelengths, or to a "bluing" of aerosols near clouds. The assumption that contribution from molecular scattering dominates over aerosol scattering and surface reflection is justified for the case of shorter wavelengths, dark surfaces, and an aerosol layer below the cloud tops. The enhancement in Rayleigh scattering is estimated using a stochastic cloud model to obtain the radiative flux reflected by broken clouds and comparing this flux with that obtained with the molecules in the atmosphere causing extinction, but no scattering.
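A back-of-the-envelope illustration of the "bluing": because Rayleigh scattering scales roughly as the inverse fourth power of wavelength, a fixed molecular-scattering enhancement inflates the retrieved aerosol optical thickness more at shorter wavelengths. The wavelengths and the assumed enhancement value below are arbitrary placeholders:

    # Rough lambda^-4 scaling of the spurious AOT enhancement from extra Rayleigh scattering.
    wavelengths_um = [0.47, 0.55, 0.66]
    extra_aot_047 = 0.02                                  # assumed enhancement at 0.47 um
    for lam in wavelengths_um:
        print(lam, extra_aot_047 * (0.47 / lam) ** 4)     # smaller spurious AOT at longer wavelengths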
Précis of Simple heuristics that make us smart.
Todd, P M; Gigerenzer, G
2000-10-01
How can anyone be rational in a world where knowledge is limited, time is pressing, and deep thought is often an unattainable luxury? Traditional models of unbounded rationality and optimization in cognitive science, economics, and animal behavior have tended to view decision-makers as possessing supernatural powers of reason, limitless knowledge, and endless time. But understanding decisions in the real world requires a more psychologically plausible notion of bounded rationality. In Simple heuristics that make us smart (Gigerenzer et al. 1999), we explore fast and frugal heuristics--simple rules in the mind's adaptive toolbox for making decisions with realistic mental resources. These heuristics can enable both living organisms and artificial systems to make smart choices quickly and with a minimum of information by exploiting the way that information is structured in particular environments. In this précis, we show how simple building blocks that control information search, stop search, and make decisions can be put together to form classes of heuristics, including: ignorance-based and one-reason decision making for choice, elimination models for categorization, and satisficing heuristics for sequential search. These simple heuristics perform comparably to more complex algorithms, particularly when generalizing to new data--that is, simplicity leads to robustness. We present evidence regarding when people use simple heuristics and describe the challenges to be addressed by this research program.
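One of the heuristic classes named above, satisficing for sequential search, reduces to a few lines: accept the first option whose value meets an aspiration level instead of examining every option for the optimum. The option values and aspiration level here are hypothetical:

    # Sketch of a satisficing heuristic for sequential search: stop at the first option
    # whose value meets the aspiration level.
    def satisfice(options, aspiration):
        for i, value in enumerate(options):      # options arrive one at a time
            if value >= aspiration:
                return i, value                  # stop search at the first "good enough" option
        return None, None                        # no option met the aspiration level

    print(satisfice([3.1, 4.8, 7.2, 6.9, 9.5], aspiration=7.0))   # -> (2, 7.2)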
You're Doing "What" This Summer? Making the Most of International Professional Development
ERIC Educational Resources Information Center
Patterson, Timothy
2014-01-01
The content of social studies curricula makes studying abroad during the summer months a win-win for social studies teachers. During these experiences, teachers have the opportunity to develop their knowledge of global history and other cultures and to see a bit of the world. That said, the most dangerous assumption one can make is that simply…
2013-05-23
...is called worldview. It determines how individuals interpret everything. In his book, Toward a Theory of Cultural Linguistics, Gary Palmer explains...person to person and organization to organization. Although analytical frameworks provide a common starting...this point, when overwhelmed, that planners reach out to theory and make determinations based on implicit assumptions and unconscious cognitive biases
Elmoazzen, Heidi Y.; Elliott, Janet A.W.; McGann, Locksley E.
2009-01-01
The fundamental physical mechanisms of water and solute transport across cell membranes have long been studied in the field of cell membrane biophysics. Cryobiology is a discipline that requires an understanding of osmotic transport across cell membranes under nondilute solution conditions, yet many of the currently-used transport formalisms make limiting dilute solution assumptions. While dilute solution assumptions are often appropriate under physiological conditions, they are rarely appropriate in cryobiology. The first objective of this article is to review commonly-used transport equations, and the explicit and implicit assumptions made when using the two-parameter and the Kedem-Katchalsky formalisms. The second objective of this article is to describe a set of transport equations that do not make the previous dilute solution or near-equilibrium assumptions. Specifically, a new nondilute solute transport equation is presented. Such nondilute equations are applicable to many fields including cryobiology where dilute solution conditions are not often met. An illustrative example is provided. Utilizing suitable transport equations that fit for two permeability coefficients, fits were as good as with the previous three-parameter model (which includes the reflection coefficient, σ). There is less unexpected concentration dependence with the nondilute transport equations, suggesting that some of the unexpected concentration dependence of permeability is due to the use of inappropriate transport equations. PMID:19348741
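As a point of contrast for the nondilute equations the article introduces, the sketch below is a schematic rendering of the dilute two-parameter formalism it revisits (water volume driven by the osmolality difference, permeating solute by its concentration difference). The functional form is my paraphrase and every parameter value is a placeholder; the article's argument is precisely that such dilute forms are rarely appropriate in cryobiology:

    # Schematic (assumed) rendering of a dilute two-parameter transport formalism:
    # osmotic water flux follows the osmolality difference, permeating-solute flux follows
    # the concentration difference. All values are dimensionless placeholders.
    def two_parameter_step(V_w, n_s, n_imp, M_ext, Ms_ext, Lp, Ps, A, RT, dt):
        M_int = (n_imp + n_s) / V_w                   # intracellular osmolality (dilute assumption)
        Ms_int = n_s / V_w                            # intracellular permeating-solute concentration
        dV = -Lp * A * RT * (M_ext - M_int) * dt      # osmotic water flux
        dn = Ps * A * (Ms_ext - Ms_int) * dt          # permeating-solute flux
        return V_w + dV, n_s + dn

    V, n = 1.0, 0.0                                   # normalized water volume, solute amount
    for _ in range(1000):
        V, n = two_parameter_step(V, n, n_imp=0.3, M_ext=1.5, Ms_ext=1.2,
                                  Lp=0.05, Ps=0.02, A=1.0, RT=1.0, dt=0.01)
    print(V, n)                                       # shrink-swell response toward osmotic balance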
Pullenayegum, Eleanor M; Lim, Lily Sh
2016-12-01
When data are collected longitudinally, measurement times often vary among patients. This is of particular concern in clinic-based studies, for example, retrospective chart reviews. Here, typically no two patients will share the same set of measurement times and, moreover, it is likely that the timing of the measurements is associated with disease course; for example, patients may visit more often when unwell. While there are statistical methods that can help overcome the resulting bias, these make assumptions about the nature of the dependence between visit times and outcome processes, and the assumptions differ across methods. The purpose of this paper is to review the methods available, with a particular focus on how the assumptions made line up with visit processes encountered in practice. Through this we show that no one method can handle all plausible visit scenarios and suggest that careful analysis of the visit process should inform the choice of analytic method for the outcomes. Moreover, there are some commonly encountered visit scenarios that are not handled well by any method, and we make recommendations with regard to study design that would minimize the chances of these problematic visit scenarios arising. © The Author(s) 2014.
Mellers, B A; Schwartz, A; Cooke, A D
1998-01-01
For many decades, research in judgment and decision making has examined behavioral violations of rational choice theory. In that framework, rationality is expressed as a single correct decision shared by experimenters and subjects that satisfies internal coherence within a set of preferences and beliefs. Outside of psychology, social scientists are now debating the need to modify rational choice theory with behavioral assumptions. Within psychology, researchers are debating assumptions about errors for many different definitions of rationality. Alternative frameworks are being proposed. These frameworks view decisions as more reasonable and adaptive than previously thought. One example is "rule following": rule following, which occurs when a rule or norm is applied to a situation, often minimizes effort and provides satisfying solutions that are "good enough," though not necessarily the best. When rules are ambiguous, people look for reasons to guide their decisions. They may also let their emotions take charge. This chapter presents recent research on judgment and decision making from traditional and alternative frameworks.
Schick, Robert S; Kraus, Scott D; Rolland, Rosalind M; Knowlton, Amy R; Hamilton, Philip K; Pettis, Heather M; Thomas, Len; Harwood, John; Clark, James S
2016-01-01
Right whales are vulnerable to many sources of anthropogenic disturbance including ship strikes, entanglement with fishing gear, and anthropogenic noise. The effect of these factors on individual health is unclear. A statistical model using photographic evidence of health was recently built to infer the true or hidden health of individual right whales. However, the influence of two important prior assumptions, concerning missing data and unexplained variance, on the estimates was not previously assessed. Here we tested these factors by varying prior assumptions and model formulation. We found sensitivity to each assumption and used the output to develop guidelines for future model formulation.
Convergence to Diagonal Form of Block Jacobi-type Processes
NASA Astrophysics Data System (ADS)
Hari, Vjeran
2008-09-01
The main result of recent research on convergence to diagonal form of block Jacobi-type processes is presented. For this purpose, all notions needed to describe the result are introduced. In particular, elementary block transformation matrices, simple and non-simple algorithms, block pivot strategies together with the appropriate equivalence relations are defined. The general block Jacobi-type process considered here can be specialized to take the form of almost any known Jacobi-type method for solving the ordinary or the generalized matrix eigenvalue and singular value problems. The assumptions used in the result are satisfied by many concrete methods.
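To illustrate "convergence to diagonal form" in the simplest (scalar, 1 x 1 block) special case, the sketch below runs the classical cyclic Jacobi method on a small symmetric matrix; block Jacobi-type methods replace the 2 x 2 rotation subproblem with a small block eigenproblem, and the pivot strategy here is the plain row-cyclic one, not the general strategies analysed in the paper:

    # Classical cyclic Jacobi method: successive plane rotations drive a symmetric matrix
    # toward diagonal form (the diagonal converges to the eigenvalues).
    import numpy as np

    def cyclic_jacobi(A, sweeps=10):
        A = A.astype(float).copy()
        n = A.shape[0]
        for _ in range(sweeps):
            for p in range(n - 1):
                for q in range(p + 1, n):          # row-cyclic pivot strategy
                    if abs(A[p, q]) < 1e-15:
                        continue
                    theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                    c, s = np.cos(theta), np.sin(theta)
                    J = np.eye(n)
                    J[p, p] = J[q, q] = c
                    J[p, q], J[q, p] = s, -s
                    A = J.T @ A @ J                # annihilates the (p, q) off-diagonal entry
        return A

    A = np.array([[4.0, 1.0, 0.5], [1.0, 3.0, 0.2], [0.5, 0.2, 1.0]])
    print(np.round(cyclic_jacobi(A), 6))           # off-diagonal entries -> 0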
Simple Spectral Lines Data Model Version 1.0
NASA Astrophysics Data System (ADS)
Osuna, Pedro; Salgado, Jesus; Guainazzi, Matteo; Dubernet, Marie-Lise; Roueff, Evelyne; Osuna, Pedro; Salgado, Jesus
2010-12-01
This document presents a Data Model to describe Spectral Line Transitions in the context of the Simple Line Access Protocol defined by the IVOA (cf. Ref[13] IVOA Simple Line Access protocol). The main objective of the model is to integrate with and support the Simple Line Access Protocol, with which it forms a compact unit. This integration allows seamless access to Spectral Line Transitions available worldwide in the VO context. This model does not provide a complete description of Atomic and Molecular Physics, whose scope is outside of this document. In the astrophysical sense, a line is considered as the result of a transition between two energy levels. On the basis of this assumption, a whole set of objects and attributes has been derived to properly define the necessary information to describe lines appearing in astrophysical contexts. The document has been written taking into account available information from many different Line data providers (see acknowledgments section).
Statistical foundations of liquid-crystal theory
Seguin, Brian; Fried, Eliot
2013-01-01
We develop a mechanical theory for systems of rod-like particles. Central to our approach is the assumption that the external power expenditure for any subsystem of rods is independent of the underlying frame of reference. This assumption is used to derive the basic balance laws for forces and torques. By considering inertial forces on par with other forces, these laws hold relative to any frame of reference, inertial or noninertial. Finally, we introduce a simple set of constitutive relations to govern the interactions between rods and find restrictions necessary and sufficient for these laws to be consistent with thermodynamics. Our framework provides a foundation for a statistical mechanical derivation of the macroscopic balance laws governing liquid crystals. PMID:23772091
Reconciling projections of the Antarctic contribution to sea level rise
NASA Astrophysics Data System (ADS)
Edwards, Tamsin; Holden, Philip; Edwards, Neil; Wernecke, Andreas
2017-04-01
Two recent studies of the Antarctic contribution to sea level rise this century had best estimates that differed by an order of magnitude (around 10 cm and 1 m by 2100). The first, Ritz et al. (2015), used a model calibrated with satellite data, giving a 5% probability of exceeding 30cm by 2100 for sea level rise due to Antarctic instability. The second, DeConto and Pollard (2016), used a model evaluated with reconstructions of palaeo-sea level. They did not estimate probabilities, but using a simple assumption here about the distribution shape gives up to a 5% chance of Antarctic contribution exceeding 2.3 m this century with total sea level rise approaching 3 m. If robust, this would have very substantial implications for global adaptation to climate change. How are we to make sense of this apparent inconsistency? How much is down to the data - does the past tell us we will face widespread and rapid Antarctic ice losses in the future? How much is due to the mechanism of rapid ice loss ('cliff failure') proposed in the latter paper, or other parameterisation choices in these low resolution models (GRISLI and PISM, respectively)? How much is due to choices made in the ensemble design and calibration? How do these projections compare with high resolution, grounding line resolving models such as BISICLES? Could we reduce the huge uncertainties in the palaeo-study? Emulation provides a powerful tool for understanding these questions and reconciling the projections. By describing the three numerical ice sheet models with statistical models, we can re-analyse the ensembles and re-do the calibrations under a common statistical framework. This reduces uncertainty in the PISM study because it allows massive sampling of the parameter space, which reduces the sensitivity to reconstructed palaeo-sea level values and also narrows the probability intervals because the simple assumption about distribution shape above is no longer needed. We present reconciled probabilistic projections for the Antarctic contribution to sea level rise from GRISLI, PISM and BISICLES this century, giving results that are meaningful and interpretable by decision-makers.
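The emulation step can be illustrated with a generic Gaussian-process regression: fit the emulator to a modest ensemble of ice-sheet-model runs (inputs are parameter settings, output the sea-level contribution) and then sample it densely to recalibrate. The training data below are synthetic placeholders rather than output from GRISLI, PISM, or BISICLES, and scikit-learn is just one convenient implementation:

    # Generic emulation sketch: Gaussian-process regression over a small design of model runs,
    # then massive sampling of the cheap emulator. Training data are synthetic placeholders.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(4)
    X = rng.uniform(0, 1, size=(40, 2))                       # 40 runs over 2 scaled parameters
    y = 0.3 + 0.8 * X[:, 0] ** 2 + 0.2 * X[:, 1] + rng.normal(0, 0.02, 40)  # stand-in for SLR (m)

    gp = GaussianProcessRegressor(ConstantKernel() * RBF(length_scale=[0.3, 0.3]),
                                  normalize_y=True).fit(X, y)

    X_big = rng.uniform(0, 1, size=(100_000, 2))              # massive sampling of parameter space
    mean, sd = gp.predict(X_big, return_std=True)
    print(np.percentile(mean, [5, 50, 95]))                   # emulated projection spread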
Learning in Equity-Oriented Scale-Making Projects
ERIC Educational Resources Information Center
Jurow, A. Susan; Shea, Molly
2015-01-01
This article examines how new forms of learning and expertise are made to become consequential in changing communities of practice. We build on notions of scale making to understand how particular relations between practices, technologies, and people become meaningful across spatial and temporal trajectories of social action. A key assumption of…
Toward an Understanding of Teachers' Desire for Participation in Decision Making.
ERIC Educational Resources Information Center
Taylor, Dianne L.; Tashakkori, Abbas
1997-01-01
Explores the assumption that teachers want to participate in schoolwide decision making by constructing a typology of teachers. Characterizes four types of teachers: empowered, disenfranchised, involved (those that do not want to participate, but do), and disengaged. Analysis of teachers' differences and similarities on demographic and attitudinal…
Robust Decision Making in a Nonlinear World
ERIC Educational Resources Information Center
Dougherty, Michael R.; Thomas, Rick P.
2012-01-01
The authors propose a general modeling framework called the general monotone model (GeMM), which allows one to model psychological phenomena that manifest as nonlinear relations in behavior data without the need for making (overly) precise assumptions about functional form. Using both simulated and real data, the authors illustrate that GeMM…
Cooking and Staff Development: A Blend of Training and Experience.
ERIC Educational Resources Information Center
Koll, Patricia; Anderson, Jim
1982-01-01
The making of a staff developer combines deliberate, systematic training and an accumulation of knowledge, skills, and assumptions based on experience. Staff developers must understand school practices and adult learning theory, shared decision-making and organization of support, and be flexible, creative, and committed to their work. (PP)
Information Input and Performance in Small Decision Making Groups.
ERIC Educational Resources Information Center
Ryland, Edwin Holman
It was hypothesized that increases in the amount and specificity of information furnished to a discussion group would facilitate group decision making and improve other aspects of group and individual performance. Procedures in testing these assumptions included varying the amounts of statistics, examples, testimony, and augmented information…
Strategies Making Language Features Noticeable in English Language Teaching
ERIC Educational Resources Information Center
Seong, Myeong-Hee
2009-01-01
The purpose of this study is to suggest effective strategies for the development of communicative ability in ELT (English Language Teaching) by investigating learners' perceptions on strategies making language features more noticeable. The assumption in the study is based on the idea of output-oriented focus on form instruction, supporting…
Collective Decision Making in Organizations.
ERIC Educational Resources Information Center
Svenning, Lynne L.
Based on the assumption that educators can adopt new patterns of organization and management to improve the quality of decision and change in education, this paper attempts to make decision theory and small group process theory relevant to practical decision situations confronting educational managers. Included are (1) a discussion of the…
A Unified Framework for Monetary Theory and Policy Analysis.
ERIC Educational Resources Information Center
Lagos, Ricardo; Wright, Randall
2005-01-01
Search-theoretic models of monetary exchange are based on explicit descriptions of the frictions that make money essential. However, tractable versions of these models typically make strong assumptions that render them ill suited for monetary policy analysis. We propose a new framework, based on explicit micro foundations, within which macro…
Impact of unseen assumptions on communication of atmospheric carbon mitigation options
NASA Astrophysics Data System (ADS)
Elliot, T. R.; Celia, M. A.; Court, B.
2010-12-01
With the rapid access and dissemination of information made available through online and digital pathways, there is need for a concurrent openness and transparency in communication of scientific investigation. Even with open communication it is essential that the scientific community continue to provide impartial result-driven information. An unknown factor in climate literacy is the influence of an impartial presentation of scientific investigation that has utilized biased base-assumptions. A formal publication appendix, and additional digital material, provides active investigators a suitable framework and ancillary material to make informed statements weighted by assumptions made in a study. However, informal media and rapid communiqués rarely make such investigatory attempts, often citing headline or key phrasing within a written work. This presentation is focused on Geologic Carbon Sequestration (GCS) as a proxy for the wider field of climate science communication, wherein we primarily investigate recent publications in GCS literature that produce scenario outcomes using apparently biased pro- or con- assumptions. A general review of scenario economics, capture process efficacy and specific examination of sequestration site assumptions and processes, reveals an apparent misrepresentation of what we consider to be a base-case GCS system. The authors demonstrate the influence of the apparent bias in primary assumptions on results from commonly referenced subsurface hydrology models. By use of moderate semi-analytical model simplification and Monte Carlo analysis of outcomes, we can establish the likely reality of any GCS scenario within a pragmatic middle ground. Secondarily, we review the development of publically available web-based computational tools and recent workshops where we presented interactive educational opportunities for public and institutional participants, with the goal of base-assumption awareness playing a central role. Through a series of interactive ‘what if’ scenarios, workshop participants were able to customize the models, which continue to be available from the Princeton University Subsurface Hydrology Research Group, and develop a better comprehension of subsurface factors contributing to GCS. Considering that the models are customizable, a simplified mock-up of regional GCS scenarios can be developed, which provides a possible pathway for informal, industrial, scientific or government communication of GCS concepts and likely scenarios. We believe continued availability, customizable scenarios, and simplifying assumptions are an exemplary means to communicate the possible outcome of CO2 sequestration projects; the associated risk; and, of no small importance, the consequences of base assumptions on predicted outcome.
Clausius-Clapeyron Equation and Saturation Vapour Pressure: Simple Theory Reconciled with Practice
ERIC Educational Resources Information Center
Koutsoyiannis, Demetris
2012-01-01
While the Clausius-Clapeyron equation is very important as it determines the saturation vapour pressure, in practice it is replaced by empirical, typically Magnus-type, equations which are more accurate. It is shown that the reduced accuracy reflects an inconsistent assumption that the latent heat of vaporization is constant. Not only is this…
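The point can be reproduced numerically by integrating the Clausius-Clapeyron equation with a constant latent heat and comparing against a Magnus-type formula; the Magnus coefficients below are the commonly quoted Alduchov-Eskridge values, and the exact numbers should be treated as an assumption for illustration rather than as the article's own calculation:

    # Constant-latent-heat Clausius-Clapeyron integration versus a Magnus-type empirical formula
    # for saturation vapour pressure (hPa). Coefficients are assumed illustrative values.
    import math

    L = 2.501e6              # latent heat of vaporization near 0 C (J/kg), held constant
    Rv = 461.5               # specific gas constant for water vapour (J/kg/K)
    e0, T0 = 6.112, 273.15   # saturation vapour pressure (hPa) at 0 C, reference temperature (K)

    def es_clausius_constant_L(T_c):
        T = T_c + 273.15
        return e0 * math.exp(L / Rv * (1.0 / T0 - 1.0 / T))

    def es_magnus(T_c):
        return 6.1094 * math.exp(17.625 * T_c / (T_c + 243.04))

    for T_c in (-20, 0, 20, 40):
        print(T_c, round(es_clausius_constant_L(T_c), 2), round(es_magnus(T_c), 2))

The two curves agree at 0 C by construction and drift apart at warmer temperatures, which is the discrepancy attributed above to treating the latent heat as constant.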
ERIC Educational Resources Information Center
Trujillo, Caleb; Cooper, Melanie M.; Klymkowsky, Michael W.
2012-01-01
Biological systems, from the molecular to the ecological, involve dynamic interaction networks. To examine student thinking about networks we used graphical responses, since they are easier to evaluate for implied, but unarticulated assumptions. Senior college level molecular biology students were presented with simple molecular level scenarios;…
A simple model for pollen-parent fecundity distributions in bee-pollinated forage legume polycrosses
USDA-ARS's Scientific Manuscript database
Random mating or panmixis is a fundamental assumption in quantitative genetic theory. Random mating is sometimes thought to occur in actual fact although a large body of empirical work shows that this is often not the case in nature. Models have been developed to model many non-random mating phenome...
ERIC Educational Resources Information Center
Suppes, P.; And Others
From some simple and schematic assumptions about information processing, a stochastic differential equation is derived for the motion of a student through a computer-assisted elementary mathematics curriculum. The mathematics strands curriculum of the Institute for Mathematical Studies in the Social Sciences is used to test: (1) the theory and (2)…
ERIC Educational Resources Information Center
Zhang, Jinming
2004-01-01
It is common to assume during statistical analysis of a multiscale assessment that the assessment has simple structure or that it is composed of several unidimensional subtests. Under this assumption, both the unidimensional and multidimensional approaches can be used to estimate item parameters. This paper theoretically demonstrates that these…
Discourse-Based Word Anticipation during Language Processing: Prediction or Priming?
ERIC Educational Resources Information Center
Otten, Marte; Van Berkum, Jos J. A.
2008-01-01
Language is an intrinsically open-ended system. This fact has led to the widely shared assumption that readers and listeners do not predict upcoming words, at least not in a way that goes beyond simple priming between words. Recent evidence, however, suggests that readers and listeners do anticipate upcoming words "on the fly" as a text…
Paul C. Van Deusen
2002-01-01
The annual inventory system was designed under the assumption that a fixed percentage of plots would be measured annually in each State. The initial plan was to assign plots to panels to provide systematic coverage of a State. One panel would be measured each year to allow for annual updates of each State using simple estimation procedures. The reality is that...
Boltzmann's "H"-Theorem and the Assumption of Molecular Chaos
ERIC Educational Resources Information Center
Boozer, A. D.
2011-01-01
We describe a simple dynamical model of a one-dimensional ideal gas and use computer simulations of the model to illustrate two fundamental results of kinetic theory: the Boltzmann transport equation and the Boltzmann "H"-theorem. Although the model is time-reversal invariant, both results predict that the behaviour of the gas is time-asymmetric.…
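The following is a hedged illustration, not the authors' time-reversal-invariant gas model: a Kac-style random pair-collision toy in which an H estimate computed from a velocity histogram relaxes toward its Maxwellian value, showing the kind of quantity such simulations track.

```python
import numpy as np

rng = np.random.default_rng(0)
N, STEPS = 5000, 41
v = rng.uniform(-1.0, 1.0, N)                  # far-from-equilibrium initial velocities

def H_estimate(v, bins=61):
    """Estimate H = integral f ln f dv from a histogram of the velocity distribution."""
    f, edges = np.histogram(v, bins=bins, density=True)
    dv = edges[1] - edges[0]
    f = f[f > 0]
    return float(np.sum(f * np.log(f)) * dv)

for step in range(STEPS):
    if step % 10 == 0:
        print(f"step {step:3d}   H = {H_estimate(v):+.4f}")
    # Kac-style collisions: random pairs mix their velocities through a random
    # rotation, which conserves v_i**2 + v_j**2 (total kinetic energy) exactly.
    i, j = rng.permutation(N).reshape(2, N // 2)
    theta = rng.uniform(0.0, 2.0 * np.pi, N // 2)
    vi, vj = v[i].copy(), v[j].copy()
    v[i] = np.cos(theta) * vi + np.sin(theta) * vj
    v[j] = -np.sin(theta) * vi + np.cos(theta) * vj
```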
ERIC Educational Resources Information Center
Gibbons, Robert D.; And Others
In the process of developing a conditionally-dependent item response theory (IRT) model, the problem arose of modeling an underlying multivariate normal (MVN) response process with general correlation among the items. Without the assumption of conditional independence, for which the underlying MVN cdf takes on comparatively simple forms and can be…
The Role of First-Language Listening Comprehension in Second-Language Reading Comprehension
ERIC Educational Resources Information Center
Edele, Aileen; Stanat, Petra
2016-01-01
Although the simple view of reading and other theories suggest that listening comprehension is an important determinant of reading comprehension, previous research on linguistic transfer has mainly focused on the role of first language (L1) decoding skills in second language (L2) reading. The present study tested the assumption that listening…
Challenges in converting among log scaling methods.
Henry Spelter
2003-01-01
The traditional method of measuring log volume in North America is the board foot log scale, which uses simple assumptions about how much of a log's volume is recoverable. This underestimates the true recovery potential and leads to difficulties in comparing volumes measured with the traditional board foot system and those measured with the cubic scaling systems...
Use of Climate Information for Decision-Making and Impacts Research: State of Our Understanding
2016-03-01
Much of human society and its infrastructure has been designed and built on a key assumption: that future climate conditions at any given...experienced in the past. This assumption affects infrastructure design and maintenance, emergency response management, and long-term investment and planning...our scientific understanding of the climate system in a manner that incorporates user needs into the design of scientific experiments, and that
NASA Astrophysics Data System (ADS)
Zlotnik, V. A.; Tartakovsky, D. M.
2017-12-01
The study is motivated by the rapid proliferation of field methods for measuring seepage velocity using heat tracing and is directed at broadening their potential for studies of groundwater-surface water interactions, and the hyporheic zone in particular. In the vast majority of cases, existing methods assume a vertical or horizontal, uniform, 1D seepage velocity. Often, 1D transport is assumed as well, and the Suzuki-Stallman analytical models of heat transport are heavily used to infer seepage velocity. However, both of these assumptions (1D flow and 1D transport) are violated due to the flow geometry, media heterogeneity, and localized heat sources. Attempts to apply more realistic conceptual models still lack a full 3D view, and known 2D examples are treated numerically, or by making additional simplifying assumptions about velocity orientation. Heat pulse instruments and sensors already offer an opportunity to collect data sufficient for 3D seepage velocity identification at the appropriate scale, but interpretation tools for groundwater-surface water interactions in 3D have not yet been developed. We propose an approach that can substantially improve the capabilities of existing field instruments without additional measurements. The proposed closed-form analytical solutions are simple and well suited for use in inverse modeling. Field applications and ramifications, including data analysis, are discussed. The approach simplifies data collection, determines the 3D seepage velocity, and facilitates interpretation of the relations between heat transport parameters, fluid flow, and media properties. Results are obtained using the tensor properties of transport parameters, Green's functions, and rotational coordinate transformations based on the Euler angles.
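As a hedged illustration of the forward problem such interpretation tools invert, the sketch below evaluates the standard Green's function for an instantaneous point heat source advected by a uniform 3D seepage velocity with isotropic effective dispersion; the parameter values are placeholders, and the authors' actual closed-form solutions and rotational transformations are not reproduced.

```python
import numpy as np

def temperature_rise(x, t, q, v, D):
    """Green's function for an instantaneous point heat source of strength q (K*m^3)
    released at the origin at t = 0 and advected by a uniform seepage velocity v (m/s)
    with isotropic effective thermal diffusivity/dispersion D (m^2/s)."""
    r2 = np.sum((np.asarray(x, float) - np.asarray(v, float) * t) ** 2)
    return q / (4.0 * np.pi * D * t) ** 1.5 * np.exp(-r2 / (4.0 * D * t))

# Placeholder values: a seepage velocity with vertical and lateral components,
# sampled at three sensor offsets one hour after the heat pulse.
v = (1e-5, 5e-6, -2e-6)                                   # m/s, assumed
for sensor in [(0.05, 0.0, 0.0), (0.0, 0.05, 0.0), (0.0, 0.0, 0.05)]:
    dT = temperature_rise(sensor, t=3600.0, q=1e-4, v=v, D=1e-7)
    print(sensor, f"{dT:.3e} K")
```

Fitting such a forward model to temperatures at several offsets is what allows all three velocity components to be constrained.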
NASA Astrophysics Data System (ADS)
Klüser, Lars; Di Biagio, Claudia; Kleiber, Paul D.; Formenti, Paola; Grassian, Vicki H.
2016-07-01
Optical properties (extinction efficiency, single scattering albedo, asymmetry parameter and scattering phase function) of five different desert dust minerals have been calculated with an asymptotic approximation approach (AAA) for non-spherical particles. The AAA method combines Rayleigh-limit approximations with an asymptotic geometric optics solution in a simple and straightforward formulation. The simulated extinction spectra have been compared with classical Lorenz-Mie calculations as well as with laboratory measurements of dust extinction. This comparison has been done for single minerals and with bulk dust samples collected from desert environments. It is shown that the non-spherical asymptotic approximation improves the spectral extinction pattern, including position of the extinction peaks, compared to the Lorenz-Mie calculations for spherical particles. Squared correlation coefficients from the asymptotic approach range from 0.84 to 0.96 for the mineral components whereas the corresponding numbers for Lorenz-Mie simulations range from 0.54 to 0.85. Moreover the blue shift typically found in Lorenz-Mie results is not present in the AAA simulations. The comparison of spectra simulated with the AAA for different shape assumptions suggests that the differences mainly stem from the assumption of the particle shape and not from the formulation of the method itself. It has been shown that the choice of particle shape strongly impacts the quality of the simulations. Additionally, the comparison of simulated extinction spectra with bulk dust measurements indicates that within airborne dust the composition may be inhomogeneous over the range of dust particle sizes, making the calculation of reliable radiative properties of desert dust even more complex.
NASA Astrophysics Data System (ADS)
Bruns, S.; Stipp, S. L. S.; Sørensen, H. O.
2017-09-01
Digital rock physics carries the dogmatic concept of having to segment volume images for quantitative analysis but segmentation rejects huge amounts of signal information. Information that is essential for the analysis of difficult and marginally resolved samples, such as materials with very small features, is lost during segmentation. In X-ray nanotomography reconstructions of Hod chalk we observed partial volume voxels with an abundance that limits segmentation based analysis. Therefore, we investigated the suitability of greyscale analysis for establishing statistical representative elementary volumes (sREV) for the important petrophysical parameters of this type of chalk, namely porosity, specific surface area and diffusive tortuosity, by using volume images without segmenting the datasets. Instead, grey level intensities were transformed to a voxel level porosity estimate using a Gaussian mixture model. A simple model assumption was made that allowed formulating a two point correlation function for surface area estimates using Bayes' theory. The same assumption enables random walk simulations in the presence of severe partial volume effects. The established sREVs illustrate that in compacted chalk, these simulations cannot be performed in binary representations without increasing the resolution of the imaging system to a point where the spatial restrictions of the represented sample volume render the precision of the measurement unacceptable. We illustrate this by analyzing the origins of variance in the quantitative analysis of volume images, i.e. resolution dependence and intersample and intrasample variance. Although we cannot make any claims on the accuracy of the approach, eliminating the segmentation step from the analysis enables comparative studies with higher precision and repeatability.
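A minimal sketch of the greyscale-to-porosity idea described above, using a two-component Gaussian mixture on a synthetic grey-level histogram; the synthetic data and the two-component choice are assumptions for illustration, not the authors' setup.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic greyscale values: pore voxels dark, solid voxels bright, plus a set of
# partial-volume voxels produced here simply by mixing the two populations.
pore = rng.normal(60, 10, 40_000)
solid = rng.normal(160, 12, 60_000)
mixed = 0.5 * (rng.choice(pore, 10_000) + rng.choice(solid, 10_000))
grey = np.concatenate([pore, solid, mixed]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(grey)
pore_comp = int(np.argmin(gmm.means_.ravel()))        # darker component = pore phase

# Voxel-level porosity estimate = posterior probability of the pore component, so
# partial-volume voxels contribute fractional porosity instead of a hard 0 or 1.
phi_voxel = gmm.predict_proba(grey)[:, pore_comp]
print("greyscale porosity estimate:   ", round(phi_voxel.mean(), 3))
print("segmentation-style estimate:   ", round((gmm.predict(grey) == pore_comp).mean(), 3))
```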
O'Reilly, Andrew M.
2004-01-01
A relatively simple method is needed that provides estimates of transient ground-water recharge in deep water-table settings that can be incorporated into other hydrologic models. Deep water-table settings are areas where the water table is below the reach of plant roots and virtually all water that is not lost to surface runoff, evaporation at land surface, or evapotranspiration in the root zone eventually becomes ground-water recharge. Areas in central Florida with a deep water table generally are high recharge areas; consequently, simulation of recharge in these areas is of particular interest to water-resource managers. Yet the complexities of meteorological variations and unsaturated flow processes make it difficult to estimate short-term recharge rates, thereby confounding calibration and predictive use of transient hydrologic models. A simple water-balance/transfer-function (WBTF) model was developed for simulating transient ground-water recharge in deep water-table settings. The WBTF model represents a one-dimensional column from the top of the vegetative canopy to the water table and consists of two components: (1) a water-balance module that simulates the water storage capacity of the vegetative canopy and root zone; and (2) a transfer-function module that simulates the traveltime of water as it percolates from the bottom of the root zone to the water table. Data requirements include two time series for the period of interest--precipitation (or precipitation minus surface runoff, if surface runoff is not negligible) and evapotranspiration--and values for five parameters that represent water storage capacity or soil-drainage characteristics. A limiting assumption of the WBTF model is that the percolation of water below the root zone is a linear process. That is, percolating water is assumed to have the same traveltime characteristics, experiencing the same delay and attenuation, as it moves through the unsaturated zone. This assumption is more accurate if the moisture content, and consequently the unsaturated hydraulic conductivity, below the root zone does not vary substantially with time. Results of the WBTF model were compared to those of the U.S. Geological Survey variably saturated flow model, VS2DT, and to field-based estimates of recharge to demonstrate the applicability of the WBTF model for a range of conditions relevant to deep water-table settings in central Florida. The WBTF model reproduced independently obtained estimates of recharge reasonably well for different soil types and water-table depths.
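A hedged sketch of the two WBTF ingredients, a bucket-style root-zone water balance followed by a linear transfer-function (convolution) routing of drainage to the water table; the storage capacity, impulse response, and forcing series below are invented placeholders, not the report's calibrated values.

```python
import numpy as np

def wbtf_recharge(precip, et, root_store_max=100.0, lag_days=30.0, kernel_len=240):
    """Minimal water-balance/transfer-function sketch with assumed parameter values.
    precip and et are daily series in mm; returns daily recharge at the water table."""
    store = 0.0
    drainage = np.zeros_like(precip)
    for i, (p, e) in enumerate(zip(precip, et)):
        store = max(store + p - e, 0.0)            # root-zone water balance
        if store > root_store_max:                 # excess percolates below the roots
            drainage[i] = store - root_store_max
            store = root_store_max
    # Linear transfer function (the model's linearity assumption): an exponential
    # impulse response with mean travel time lag_days, applied by convolution.
    kernel = np.exp(-np.arange(kernel_len) / lag_days)
    kernel /= kernel.sum()
    return np.convolve(drainage, kernel)[: len(precip)]

rng = np.random.default_rng(2)
precip = rng.gamma(0.4, 12.0, 365)                 # hypothetical daily rainfall, mm
et = np.full(365, 3.5)                             # hypothetical daily ET demand, mm
print("annual recharge (mm):", round(wbtf_recharge(precip, et).sum(), 1))
```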
NASA Astrophysics Data System (ADS)
Owen, Gareth; Quinn, Paul; O'Donnell, Greg
2014-05-01
This paper explains how flood management projects might be better informed in the future by using more observations and a novel impact modelling tool in a simple transparent framework. How local scale impacts propagate downstream to affect the downstream hydrograph is difficult to determine using traditional rainfall runoff and hydraulic routing methods. The traditional approach to modelling essentially comprises selecting a fixed model structure and then calibrating to an observational hydrograph, which makes those model predictions highly uncertain. Here, a novel approach is used in which the structure of the runoff generation is not specified a priori and incorporates expert knowledge. Rather than being used externally for calibration, the observed outlet hydrographs are used directly within the model. Essentially the approach involves the disaggregation of the outlet hydrograph by making assumptions about the spatial distribution of runoff generated. The channel network is parameterised through a comparison of the timing of observed hydrographs at a number of nested locations within the catchment. The user is then encouraged to use their expert knowledge to define how runoff is generated locally and what the likely impact of any local mitigation is. Therefore the user can specify any hydrological model or flow estimation method that captures their expertise. Equally, the user is encouraged to install as many instruments as they can afford to cover the catchment network. A Decision Support Matrix (DSM) is used to encapsulate knowledge of the runoff dynamics gained from simulation in a simple visual way and hence to convey the likely impacts that arise from a given flood management scenario. This tool has been designed primarily to inform and educate landowners, catchment managers and decision makers. The DSM outlines scenarios that are likely to increase or decrease runoff rates and allows the user to contemplate the implications and uncertainty of their decisions. The tool can also be used to map the likely changes in flood peak due to land use management options. An example case study will be shown for a 35 km2 catchment in Northern England, which is prone to flooding. The method encourages end users to instrument and quantify their own catchment network and to make informed, evidence based decisions appropriate to their own flooding problems.
Charles, Cathy; Gafni, Amiram; Whelan, Tim; O'Brien, Mary Ann
2006-11-01
In this paper we discuss the influence of culture on the process of treatment decision-making, and in particular, shared treatment decision-making in the physician-patient encounter. We explore two key issues: (1) the meaning of culture and the ways that it can affect treatment decision-making; (2) cultural issues and assumptions underlying the development and use of treatment decision aids. This is a conceptual paper. Based on our knowledge and reading of the key literature in the treatment decision-making field, we looked for written examples where cultural influences were taken into account when discussing the physician-patient encounter and when designing instruments (decision aids) to help patients participate in making decisions. Our assessment of the situation is that to date, and with some recent exceptions, research in the above areas has not been culturally sensitive. We suggest that more research attention should be focused on exploring potential cultural variations in the meaning of and preferences for shared decision-making as well as on the applicability across cultural groups of decision aids developed to facilitate patient participation in treatment decision-making with physicians. Both patients and physicians need to be aware of the cultural assumptions underlying the development and use of decision aids and assess their cultural sensitivity to the needs and preferences of patients in diverse cultural groups.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naselsky, Pavel; Jackson, Andrew D.; Liu, Hao, E-mail: naselsky@nbi.ku.dk, E-mail: liuhao@nbi.dk
We present a simplified method for the extraction of meaningful signals from Hanford and Livingston 32 second data for the GW150914 event made publicly available by the LIGO collaboration, and demonstrate its ability to reproduce the LIGO collaboration's own results quantitatively given the assumption that all narrow peaks in the power spectrum are a consequence of physically uninteresting signals and can be removed. After the clipping of these peaks and return to the time domain, the GW150914 event is readily distinguished from broadband background noise. This simple technique allows us to identify the GW150914 event without any assumption regarding its physical origin and with minimal assumptions regarding its shape. We also confirm that the LIGO GW150914 event is uniquely correlated in the Hanford and Livingston detectors for the full 4096 second data at the level of 6–7 σ with a temporal displacement of τ = 6.9 ± 0.4 ms. We have also identified a few events that are morphologically close to GW150914 but less strongly cross correlated with it.
Understanding the LIGO GW150914 event
NASA Astrophysics Data System (ADS)
Naselsky, Pavel; Jackson, Andrew D.; Liu, Hao
2016-08-01
We present a simplified method for the extraction of meaningful signals from Hanford and Livingston 32 second data for the GW150914 event made publicly available by the LIGO collaboration, and demonstrate its ability to reproduce the LIGO collaboration's own results quantitatively given the assumption that all narrow peaks in the power spectrum are a consequence of physically uninteresting signals and can be removed. After the clipping of these peaks and return to the time domain, the GW150914 event is readily distinguished from broadband background noise. This simple technique allows us to identify the GW150914 event without any assumption regarding its physical origin and with minimal assumptions regarding its shape. We also confirm that the LIGO GW150914 event is uniquely correlated in the Hanford and Livingston detectors for the full 4096 second data at the level of 6-7 σ with a temporal displacement of τ = 6.9 ± 0.4 ms. We have also identified a few events that are morphologically close to GW150914 but less strongly cross correlated with it.
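A hedged sketch of the peak-clipping idea on synthetic data (not LIGO data): narrow power-spectrum peaks are zeroed, the series is returned to the time domain, and the two detector streams are cross-correlated to recover their relative time shift. The threshold, the injected transient, and all parameter values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, n = 4096, 4096 * 4                    # sample rate (Hz) and 4 s of synthetic data
t = np.arange(n) / fs

def transient(t0):
    """Crude chirp-like stand-in for a common transient signal starting at t0."""
    tau = np.clip(t - t0, 0.0, None)
    return np.sin(2 * np.pi * (60.0 + 300.0 * tau) * tau) * np.exp(-((tau - 0.4) / 0.2) ** 2)

lines = sum(np.sin(2 * np.pi * f * t) for f in (60.0, 120.0, 180.0, 300.0))  # narrow spectral lines
det1 = rng.normal(0, 1, n) + 3.0 * lines + 2.0 * transient(2.0)
det2 = rng.normal(0, 1, n) + 3.0 * lines + 2.0 * transient(2.0 + 0.007)     # shifted by 7 ms

def clip_lines(x, factor=10.0):
    """Zero every frequency bin whose power exceeds `factor` times the median power."""
    X = np.fft.rfft(x)
    power = np.abs(X) ** 2
    X[power > factor * np.median(power)] = 0.0
    return np.fft.irfft(X, n=len(x))

c1, c2 = clip_lines(det1), clip_lines(det2)
xc = np.correlate(c1, c2, mode="full")
lag = np.argmax(np.abs(xc)) - (n - 1)
print(f"recovered time shift magnitude: {1e3 * abs(lag) / fs:.1f} ms")   # roughly 7 ms
```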
Building Blocks of Psychology: on Remaking the Unkept Promises of Early Schools.
Gozli, Davood G; Deng, Wei Sophia
2018-03-01
The appeal and popularity of "building blocks", i.e., simple and dissociable elements of behavior and experience, persists in psychological research. We begin our assessment of this research strategy with an historical review of structuralism (as espoused by E. B. Titchener) and behaviorism (espoused by J. B. Watson and B. F. Skinner), two movements that held the assumption in their attempts to provide a systematic and unified discipline. We point out the ways in which the elementism of the two schools selected, framed, and excluded topics of study. After the historical review, we turn to contemporary literature and highlight the persistence of research into building blocks and the associated framing and exclusions in psychological research. The assumption that complex categories of human psychology can be understood in terms of their elementary components and simplest forms seems indefensible. In specific cases, therefore, reliance on the assumption requires justification. Finally, we review alternative strategies that bypass the commitment to building blocks.
Quantum Bit Commitment and the Reality of the Quantum State
NASA Astrophysics Data System (ADS)
Srikanth, R.
2018-01-01
Quantum bit commitment is insecure in the standard non-relativistic quantum cryptographic framework, essentially because Alice can exploit quantum steering to defer making her commitment. Two assumptions in this framework are that: (a) Alice knows the ensembles of evidence E corresponding to either commitment; and (b) system E is quantum rather than classical. Here, we show how relaxing assumption (a) or (b) can render her malicious steering operation indeterminable or inexistent, respectively. Finally, we present a secure protocol that relaxes both assumptions in a quantum teleportation setting. Without appeal to an ontological framework, we argue that the protocol's security entails the reality of the quantum state, provided retrocausality is excluded.
Perspective Making: Constructivism as a Meaning-Making Structure for Simulation Gaming
ERIC Educational Resources Information Center
Lainema, Timo
2009-01-01
Constructivism has recently gained popularity, although it is not a completely new learning paradigm. Much of the work within e-learning, for example, uses constructivism as a reference "discipline" (explicitly or implicitly). However, some of the work done within the simulation gaming (SG) community discusses what the basic assumptions and…
The Social Construction of Gender and Sexuality: Learning from Two Spirit Traditions
ERIC Educational Resources Information Center
Sheppard, Maia; Mayo, J. B., Jr.
2013-01-01
The authors encourage teachers to make use of existing, standard social studies curriculum to uncover and to make visible the normative assumptions that underlie American cultural beliefs about gender and sexuality. The article provides an overview of how some cultures within the various Native American nations conceptualize gender and sexuality…
ERIC Educational Resources Information Center
Liu, Shiang-Yao; Lin, Chuan-Shun; Tsai, Chin-Chung
2011-01-01
This study aims to test the nature of the assumption that there are relationships between scientific epistemological views (SEVs) and reasoning processes in socioscientific decision making. A mixed methodology that combines both qualitative and quantitative approaches of data collection and analysis was adopted not only to verify the assumption…
ERIC Educational Resources Information Center
Becht, Andrik I.; Nelemans, Stefanie A.; Branje, Susan J. T.; Vollebergh, Wilma A. M.; Koot, Hans M.; Meeus, Wim H. J.
2017-01-01
A central assumption of identity theory is that adolescents reconsider current identity commitments and explore identity alternatives before they make new commitments in various identity domains (Erikson, 1968; Marcia, 1966). Yet, little empirical evidence is available on how commitment and exploration dynamics of identity formation affect each…
Administration and Policy-Making in Education: The Contemporary Predicament.
ERIC Educational Resources Information Center
Housego, Ian E.
This paper is based on the assumption that the educational administrator is the mediator in policy development. The author sees the administrator as caught between two conflicting approaches to policy-making--one characterized as "rational" and the other as "political." In attempting to deal with this dilemma and with the dilemma of shrinking…
Teaching Perspectives of Pre-Service Physical Education Teachers: The Shanghai Experience
ERIC Educational Resources Information Center
Wang, Lijuan
2014-01-01
Background: In the physical education (PE) domain, teachers are given the freedom to make important educational decisions. Because of the common assumption that the decisions teachers make are based on a set of educational perspectives, a considerable number of studies have addressed the importance of studying the thinking and beliefs of PE…
Coding “What” and “When” in the Archer Fish Retina
Vasserman, Genadiy; Shamir, Maoz; Ben Simon, Avi; Segev, Ronen
2010-01-01
Traditionally, the information content of the neural response is quantified using statistics of the responses relative to stimulus onset time with the assumption that the brain uses onset time to infer stimulus identity. However, stimulus onset time must also be estimated by the brain, making the utility of such an approach questionable. How can stimulus onset be estimated from the neural responses with sufficient accuracy to ensure reliable stimulus identification? We address this question using the framework of colour coding by the archer fish retinal ganglion cell. We found that stimulus identity, “what”, can be estimated from the responses of best single cells with an accuracy comparable to that of the animal's psychophysical estimation. However, to extract this information, an accurate estimation of stimulus onset is essential. We show that stimulus onset time, “when”, can be estimated using a linear-nonlinear readout mechanism that requires the response of a population of 100 cells. Thus, stimulus onset time can be estimated using a relatively simple readout. However, large nerve cell populations are required to achieve sufficient accuracy. PMID:21079682
Atomistic modeling of interphases in spider silk fibers
NASA Astrophysics Data System (ADS)
Fossey, Stephen Andrew
The objective of this work is to create an atomistic model to account for the unusual physical properties of silk fibers. Silk fibers have exceptional mechanical toughness, which makes them of interest as high performance fibers. In order to explain the toughness, a model for the molecular structure based on simple geometric reasoning was formulated. The model consists of very small crystallites, on the order of 5 nm, connected by a noncrystalline interphase. The interphase is a region between the crystalline phase and the amorphous phase, which is defined by the geometry of the system. The interphase is modeled as a very thin (<5 nm) film of noncrystalline polymer constructed using a Monte Carlo, rotational isomeric states approach followed by simulated annealing in order to achieve equilibrium chain configurations and density. No additional assumptions are made about density, orientation, or packing. The mechanical properties of the interphase are calculated using the method of Theodorou and Suter. Finally, observable properties such as wide angle X-ray scattering and methyl rotation rates are calculated and compared with experimental data available in the literature.
NASA Astrophysics Data System (ADS)
Vincent, Timothy J.; Rumpfkeil, Markus P.; Chaudhary, Anil
2018-03-01
The complex, multi-faceted physics of laser-based additive metals processing tends to demand high-fidelity models and costly simulation tools to provide predictions accurate enough to aid in selecting process parameters. Of particular difficulty is the accurate determination of melt pool shape and size, which are useful for predicting lack-of-fusion, as this typically requires an adequate treatment of thermal and fluid flow. In this article we describe a novel numerical simulation tool which aims to achieve a balance between accuracy and cost. This is accomplished by making simplifying assumptions regarding the behavior of the gas-liquid interface for processes with a moderate energy density, such as Laser Engineered Net Shaping (LENS). The details of the implementation, which is based on the solver simpleFoam of the well-known software suite OpenFOAM, are given here and the tool is verified and validated for a LENS process involving Ti-6Al-4V. The results indicate that the new tool predicts width and height of a deposited track to engineering accuracy levels.
Accumulator and random-walk models of psychophysical discrimination: a counter-evaluation.
Vickers, D; Smith, P
1985-01-01
In a recent assessment of models of psychophysical discrimination, Heath criticises the accumulator model for its reliance on computer simulation and qualitative evidence, and contrasts it unfavourably with a modified random-walk model, which yields exact predictions, is susceptible to critical test, and is provided with simple parameter-estimation techniques. A counter-evaluation is presented, in which the approximations employed in the modified random-walk analysis are demonstrated to be seriously inaccurate, the resulting parameter estimates to be artefactually determined, and the proposed test not critical. It is pointed out that Heath's specific application of the model is not legitimate, his data treatment inappropriate, and his hypothesis concerning confidence inconsistent with experimental results. Evidence from adaptive performance changes is presented which shows that the necessary assumptions for quantitative analysis in terms of the modified random-walk model are not satisfied, and that the model can be reconciled with data at the qualitative level only by making it virtually indistinguishable from an accumulator process. A procedure for deriving exact predictions for an accumulator process is outlined.
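For readers unfamiliar with the class of model under debate, a minimal accumulator-process simulation in the general spirit of Vickers' model is sketched below; the evidence distribution, drift, and criterion are assumed values, and this is not the specific parameterization discussed in the paper.

```python
import numpy as np

def accumulator_trial(drift=0.2, noise=1.0, criterion=15.0, rng=None):
    """One two-alternative trial of a simple accumulator process: evidence for each
    alternative is totalled separately and a response is made as soon as either
    total reaches the criterion. Returns (choice, number_of_samples)."""
    rng = rng or np.random.default_rng()
    totals = np.zeros(2)
    for n_samples in range(1, 10_000):
        x = rng.normal(drift, noise)      # momentary difference signal
        totals[0] += max(x, 0.0)          # evidence favouring alternative A
        totals[1] += max(-x, 0.0)         # evidence favouring alternative B
        if totals.max() >= criterion:
            break
    return int(np.argmax(totals)), n_samples

rng = np.random.default_rng(4)
results = [accumulator_trial(rng=rng) for _ in range(2000)]
choices = np.array([c for c, _ in results])
samples = np.array([n for _, n in results])
print("P(choose A) =", np.mean(choices == 0).round(3), "  mean samples =", samples.mean().round(1))
```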
Improved atomistic simulation of diffusion and sorption in metal oxides
NASA Astrophysics Data System (ADS)
Skouras, E. D.; Burganos, V. N.; Payatakes, A. C.
2001-01-01
Gas diffusion and sorption on the surface of metal oxides are investigated using atomistic simulations that make use of two different force fields for the description of the intramolecular and intermolecular interactions. MD and MC computations are presented and estimates of the mean residence time, Henry's constant, and the heat of adsorption are provided for various common gases (CO, CO2, O2, CH4, Xe), and semiconducting substrates that hold promise for gas sensor applications (SnO2, BaTiO3). Comparison is made between the performance of a simple, first generation force field (Universal) and a more detailed, second generation field (COMPASS) under the same conditions and the same assumptions regarding the generation of the working configurations. It is found that the two force fields yield qualitatively similar results in all cases examined here. However, direct comparison with experimental data reveals that the accuracy of the COMPASS-based computations is not only higher than that of the first generation force field but exceeds even that of published specialized methods based on ab initio computations.
Macmillan, N A; Creelman, C D
1996-06-01
Can accuracy and response bias in two-stimulus, two-response recognition or detection experiments be measured nonparametrically? Pollack and Norman (1964) answered this question affirmatively for sensitivity, Hodos (1970) for bias: Both proposed measures based on triangular areas in receiver-operating characteristic space. Their papers, and especially a paper by Grier (1971) that provided computing formulas for the measures, continue to be heavily cited in a wide range of content areas. In our sample of articles, most authors described triangle-based measures as making fewer assumptions than measures associated with detection theory. However, we show that statistics based on products or ratios of right triangle areas, including a recently proposed bias index and a not-yet-proposed but apparently plausible sensitivity index, are consistent with a decision process based on logistic distributions. Even the Pollack and Norman measure, which is based on non-right triangles, is approximately logistic for low values of sensitivity. Simple geometric models for sensitivity and bias are not nonparametric, even if their implications are not acknowledged in the defining publications.
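For concreteness, the commonly cited computing formulas at issue (Grier's A' and B'') are shown next to the detection-theoretic d' for a hypothetical hit/false-alarm pair; the rates are invented for illustration.

```python
from statistics import NormalDist

def a_prime(h, f):
    """Grier's computing formula for the triangle-based sensitivity index A' (h >= f)."""
    return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))

def b_double_prime(h, f):
    """Grier's triangle-based bias index B''."""
    return (h * (1 - h) - f * (1 - f)) / (h * (1 - h) + f * (1 - f))

def d_prime(h, f):
    """Equal-variance Gaussian (detection theory) sensitivity, for comparison."""
    z = NormalDist().inv_cdf
    return z(h) - z(f)

h, f = 0.80, 0.20   # hypothetical hit and false-alarm rates
print(f"A' = {a_prime(h, f):.3f}   B'' = {b_double_prime(h, f):.3f}   d' = {d_prime(h, f):.3f}")
```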
Characterization of structural response to hypersonic boundary-layer transition
Riley, Zachary B.; Deshmukh, Rohit; Miller, Brent A.; ...
2016-05-24
The inherent relationship between boundary-layer stability, aerodynamic heating, and surface conditions makes the potential for interaction between the structural response and boundary-layer transition an important and challenging area of study in high-speed flows. This paper phenomenologically explores this interaction using a fundamental two-dimensional aerothermoelastic model under the assumption of an aluminum panel with simple supports. Specifically, an existing model is extended to examine the impact of transition onset location, transition length, and transitional overshoot in heat flux and fluctuating pressure on the structural response of surface panels. Transitional flow conditions are found to yield significantly increased thermal gradients, and they can result in higher maximum panel temperatures compared to turbulent flow. Results indicate that overshoot in heat flux and fluctuating pressure reduces the flutter onset time and increases the strain energy accumulated in the panel. Furthermore, overshoot occurring near the midchord can yield average temperatures and peak displacements exceeding those experienced by the panel subject to turbulent flow. Lastly, these results suggest that fully turbulent flow does not always conservatively predict the thermo-structural response of surface panels.
NASA Astrophysics Data System (ADS)
Vincent, Timothy J.; Rumpfkeil, Markus P.; Chaudhary, Anil
2018-06-01
The complex, multi-faceted physics of laser-based additive metals processing tends to demand high-fidelity models and costly simulation tools to provide predictions accurate enough to aid in selecting process parameters. Of particular difficulty is the accurate determination of melt pool shape and size, which are useful for predicting lack-of-fusion, as this typically requires an adequate treatment of thermal and fluid flow. In this article we describe a novel numerical simulation tool which aims to achieve a balance between accuracy and cost. This is accomplished by making simplifying assumptions regarding the behavior of the gas-liquid interface for processes with a moderate energy density, such as Laser Engineered Net Shaping (LENS). The details of the implementation, which is based on the solver simpleFoam of the well-known software suite OpenFOAM, are given here and the tool is verified and validated for a LENS process involving Ti-6Al-4V. The results indicate that the new tool predicts width and height of a deposited track to engineering accuracy levels.
Gigerenzer, Gerd; Gaissmaier, Wolfgang
2011-01-01
As reflected in the amount of controversy, few areas in psychology have undergone such dramatic conceptual changes in the past decade as the emerging science of heuristics. Heuristics are efficient cognitive processes, conscious or unconscious, that ignore part of the information. Because using heuristics saves effort, the classical view has been that heuristic decisions imply greater errors than do "rational" decisions as defined by logic or statistical models. However, for many decisions, the assumptions of rational models are not met, and it is an empirical rather than an a priori issue how well cognitive heuristics function in an uncertain world. To answer both the descriptive question ("Which heuristics do people use in which situations?") and the prescriptive question ("When should people rely on a given heuristic rather than a complex strategy to make better judgments?"), formal models are indispensable. We review research that tests formal models of heuristic inference, including in business organizations, health care, and legal institutions. This research indicates that (a) individuals and organizations often rely on simple heuristics in an adaptive way, and (b) ignoring part of the information can lead to more accurate judgments than weighting and adding all information, for instance for low predictability and small samples. The big future challenge is to develop a systematic theory of the building blocks of heuristics as well as the core capacities and environmental structures these exploit.
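A minimal sketch of one formal heuristic of the kind this review tests, take-the-best: cues are examined in order of validity and the first discriminating cue determines the choice; the cue set, ordering, and values are invented for illustration.

```python
def take_the_best(obj_a, obj_b, cues_by_validity):
    """Compare two objects on binary cues ordered from most to least valid; the first
    cue that discriminates decides, and all remaining cues are ignored."""
    for cue in cues_by_validity:
        if obj_a[cue] != obj_b[cue]:
            return "A" if obj_a[cue] > obj_b[cue] else "B"
    return "guess"

# Hypothetical city-size task with 0/1 cue values ordered by (assumed) validity.
cues_by_validity = ["is_capital", "has_intl_airport", "has_university"]
city_a = {"is_capital": 0, "has_intl_airport": 1, "has_university": 1}
city_b = {"is_capital": 0, "has_intl_airport": 0, "has_university": 1}
print(take_the_best(city_a, city_b, cues_by_validity))   # decided by the airport cue
```

The example makes the "ignoring part of the information" point concrete: once a cue discriminates, all lower-validity cues are never consulted.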
EPR and Bell's theorem: A critical review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stapp, H.P.
1991-01-01
The argument of Einstein, Podolsky, and Rosen is reviewed with attention to logical structure and character of assumptions. Bohr's reply is discussed. Bell's contribution is formulated without use of hidden variables, and efforts to equate hidden variables to realism are critically examined. An alternative derivation of nonlocality that makes no use of hidden variables, microrealism, counterfactual definiteness, or any other assumption alien to orthodox quantum thinking is described in detail, with particular attention to the quartet or broken-square question.
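As a worked numerical aside (the standard CHSH arithmetic, not Stapp's specific derivation): for the singlet-state correlation E(a,b) = -cos(a-b), the CHSH combination reaches 2*sqrt(2), exceeding the bound of 2 satisfied by any local assignment of outcomes.

```python
import numpy as np

def E(a, b):
    """Quantum correlation of spin measurements along angles a and b on a singlet pair."""
    return -np.cos(a - b)

# Standard CHSH angle choices (radians).
a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(f"|S| = {abs(S):.3f}; any local assignment obeys |S| <= 2, quantum maximum is {2 * np.sqrt(2):.3f}")
```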
Arctic Ice Dynamics Joint Experiment (AIDJEX) assumptions revisited and found inadequate
NASA Astrophysics Data System (ADS)
Coon, Max; Kwok, Ron; Levy, Gad; Pruis, Matthew; Schreyer, Howard; Sulsky, Deborah
2007-11-01
This paper revisits the Arctic Ice Dynamics Joint Experiment (AIDJEX) assumptions about pack ice behavior with an eye to modeling sea ice dynamics. The AIDJEX assumptions were that (1) enough leads were present in a 100 km by 100 km region to make the ice isotropic on that scale; (2) the ice had no tensile strength; and (3) the ice behavior could be approximated by an isotropic yield surface. These assumptions were made during the development of the AIDJEX model in the 1970s, and are now found inadequate. The assumptions were made in part because of insufficient large-scale (10 km) deformation and stress data, and in part because of computer capability limitations. Upon reviewing deformation and stress data, it is clear that a model including deformation on discontinuities and an anisotropic failure surface with tension would better describe the behavior of pack ice. A model based on these assumptions is needed to represent the deformation and stress in pack ice on scales from 10 to 100 km, and would need to explicitly resolve discontinuities. Such a model would require a different class of metrics to validate discontinuities against observations.
Handy, C
1980-01-01
It's hard to imagine what our industrial society would be like if, for instance, there were no factories. How would things get produced, how would business survive? But are we, in fact, an industrial society? Are factories going to be the prime production place for a society that is conserving energy and doesn't need to travel to work because the silicon chip makes it more efficient to work at home? Who knows what the impact of energy conservation and women in the work force will be on future organizations? One thing we can be sure of, this author writes, is that whatever tomorrow brings, today's assumptions probably cannot account for it. We are, he asserts, entering a period of discontinuous change where the assumptions we have been working with as a society and in organizations are no longer necessarily true. He discusses three assumptions he sees fading--what causes efficiency, what work is, and what value organizational hierarchy has--and then gives some clues as to what our new assumptions might be. Regardless of what our assumptions actually are, however, our organizations and society will require leaders willing to take enormous risks and try unproved ways to cope with them.
Dissecting effects of complex mixtures: who's afraid of informative priors?
Thomas, Duncan C; Witte, John S; Greenland, Sander
2007-03-01
Epidemiologic studies commonly investigate multiple correlated exposures, which are difficult to analyze appropriately. Hierarchical modeling provides a promising approach for analyzing such data by adding a higher-level structure or prior model for the exposure effects. This prior model can incorporate additional information on similarities among the correlated exposures and can be parametric, semiparametric, or nonparametric. We discuss the implications of applying these models and argue for their expanded use in epidemiology. While a prior model adds assumptions to the conventional (first-stage) model, all statistical methods (including conventional methods) make strong intrinsic assumptions about the processes that generated the data. One should thus balance prior modeling assumptions against assumptions of validity, and use sensitivity analyses to understand their implications. In doing so - and by directly incorporating into our analyses information from other studies or allied fields - we can improve our ability to distinguish true causes of disease from noise and bias.
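A hedged sketch of the kind of two-stage shrinkage a simple parametric prior model implies, with invented effect estimates; real hierarchical analyses would typically use richer second-stage covariates rather than a single common prior mean.

```python
import numpy as np

def semi_bayes_shrinkage(beta_hat, var_hat, prior_mean, prior_var):
    """Shrink conventional (first-stage) effect estimates toward a prior mean,
    weighting by inverse variances, as in a simple parametric two-stage model."""
    w = prior_var / (prior_var + var_hat)          # weight given to the data
    post_mean = w * beta_hat + (1.0 - w) * prior_mean
    post_var = w * var_hat                         # = (1/var_hat + 1/prior_var)**-1
    return post_mean, post_var

# Hypothetical log odds ratios for three correlated exposures, shrunk toward a
# common prior mean of 0 with a prior variance reflecting modest plausible effects.
beta_hat = np.array([0.9, 0.2, -0.4])
var_hat = np.array([0.25, 0.10, 0.30])
print(semi_bayes_shrinkage(beta_hat, var_hat, prior_mean=0.0, prior_var=0.5))
```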
On the validity of time-dependent AUC estimators.
Schmid, Matthias; Kestler, Hans A; Potapov, Sergej
2015-01-01
Recent developments in molecular biology have led to the massive discovery of new marker candidates for the prediction of patient survival. To evaluate the predictive value of these markers, statistical tools for measuring the performance of survival models are needed. We consider estimators of discrimination measures, which are a popular approach to evaluate survival predictions in biomarker studies. Estimators of discrimination measures are usually based on regularity assumptions such as the proportional hazards assumption. Based on two sets of molecular data and a simulation study, we show that violations of the regularity assumptions may lead to over-optimistic estimates of prediction accuracy and may therefore result in biased conclusions regarding the clinical utility of new biomarkers. In particular, we demonstrate that biased medical decision making is possible even if statistical checks indicate that all regularity assumptions are satisfied. © The Author 2013. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Evolution of Requirements and Assumptions for Future Exploration Missions
NASA Technical Reports Server (NTRS)
Anderson, Molly; Sargusingh, Miriam; Perry, Jay
2017-01-01
NASA programs are maturing technologies, systems, and architectures to enable future exploration missions. To increase fidelity as technologies mature, developers must make assumptions that represent the requirements of a future program. Multiple efforts have begun to define these requirements, including team internal assumptions, planning system integration for early demonstrations, and discussions between international partners planning future collaborations. For many detailed life support system requirements, existing NASA documents set limits of acceptable values, but a future vehicle may be constrained in other ways, and select a limited range of conditions. Other requirements are effectively set by interfaces or operations, and may be different for the same technology depending on whether the hardware is a demonstration system on the International Space Station, or a critical component of a future vehicle. This paper highlights key assumptions representing potential life support requirements and explanations of the driving scenarios, constraints, or other issues that drive them.
Characterization of commercial magnetorheological fluids at high shear rate: influence of the gap
NASA Astrophysics Data System (ADS)
Golinelli, Nicola; Spaggiari, Andrea
2018-07-01
This paper reports the experimental tests on the behaviour of a commercial MR fluid at high shear rates and the effect of the gap. Three gaps were considered at multiple magnetic fields and shear rates. From an extended set of almost two hundred experimental flow curves, a set of parameters for the apparent viscosity is retrieved by using the Ostwald de Waele model for non-Newtonian fluids. It is possible to simplify the parameter correlation by making the following considerations: the consistency of the model depends only on the magnetic field, the flow index depends on the fluid type and the gap shows an important effect only at null or very low magnetic fields. This leads to a simple and useful model, especially in the design phase of an MR-based product. During the off state, with no applied field, it is possible to use a standard viscous model. During the active state, with high magnetic field, a strong non-Newtonian nature becomes prevalent over the viscous one even at very high shear rates; the magnetic field dominates the apparent viscosity change, while the gap does not play any relevant role in the system behaviour. This simple assumption allows the designer to dimension the gap considering only the non-active state, as in standard viscous systems, and to take into account only the magnetic effect in the active state, where the gap does not change the proposed fluid model.
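A minimal sketch of the Ostwald-de Waele apparent viscosity with the consistency taken to depend on the magnetic field only, as the abstract describes; the coefficient values and the quadratic field dependence assumed below are placeholders, and the off state would in practice be treated as a standard viscous fluid.

```python
def apparent_viscosity(shear_rate, B_field, K0=0.3, kB=400.0, n=0.25):
    """Ostwald-de Waele fluid: tau = K * gamma_dot**n, so the apparent viscosity is
    eta = K * gamma_dot**(n - 1). Following the abstract, the consistency K is taken
    to depend on the magnetic field only; K0, kB and n are assumed values."""
    K = K0 + kB * B_field ** 2            # assumed field dependence of the consistency
    return K * shear_rate ** (n - 1.0)

for B in (0.0, 0.2, 0.5):                 # magnetic flux density, tesla
    etas = [round(apparent_viscosity(g, B), 4) for g in (1e2, 1e3, 1e4)]
    print(f"B = {B:.1f} T  apparent viscosity at shear rates 1e2/1e3/1e4 1/s:", etas, "Pa*s")
```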
Van Beurden, Eric K; Kia, Annie M; Zask, Avigdor; Dietrich, Uta; Rose, Lauren
2013-03-01
Health promotion addresses issues from the simple (with well-known cause/effect links) to the highly complex (webs and loops of cause/effect with unpredictable, emergent properties). Yet there is no conceptual framework within its theory base to help identify approaches appropriate to the level of complexity. The default approach favours reductionism--the assumption that reducing a system to its parts will inform whole system behaviour. Such an approach can yield useful knowledge, yet is inadequate where issues have multiple interacting causes, such as social determinants of health. To address complex issues, there is a need for a conceptual framework that helps choose action that is appropriate to context. This paper presents the Cynefin Framework, informed by complexity science--the study of Complex Adaptive Systems (CAS). It introduces key CAS concepts and reviews the emergence and implications of 'complex' approaches within health promotion. It explains the framework and its use with examples from contemporary practice, and sets it within the context of related bodies of health promotion theory. The Cynefin Framework, especially when used as a sense-making tool, can help practitioners understand the complexity of issues, identify appropriate strategies and avoid the pitfalls of applying reductionist approaches to complex situations. The urgency to address critical issues such as climate change and the social determinants of health calls for us to engage with complexity science. The Cynefin Framework helps practitioners make the shift, and enables those already engaged in complex approaches to communicate the value and meaning of their work in a system that privileges reductionist approaches.
NASA Astrophysics Data System (ADS)
Kuil, L.; Evans, T.; McCord, P. F.; Salinas, J. L.; Blöschl, G.
2018-04-01
While it is known that farmers adopt different decision-making behaviors to cope with stresses, it remains challenging to capture this diversity in formal model frameworks that are used to advance theory and inform policy. Guided by cognitive theory and the theory of bounded rationality, this research develops a novel, socio-hydrological model framework that can explore how a farmer's perception of water availability impacts crop choice and water allocation. The model is informed by a rich empirical data set at the household level collected during 2013 in Kenya's Upper Ewaso Ng'iro basin that shows that the crop type cultivated is correlated with water availability. The model is able to simulate this pattern and shows that near-optimal or "satisficing" crop patterns can emerge also when farmers were to make use of simple decision rules and have diverse perceptions on water availability. By focusing on farmer decision making it also captures the rebound effect, i.e., as additional water becomes available through the improvement of crop efficiencies it will be reallocated on the farm instead of flowing downstream, as a farmer will adjust his (her) water allocation and crop pattern to the new water conditions. This study is valuable as it is consistent with the theory of bounded rationality, and thus offers an alternative, descriptive model in addition to normative models. The framework can be used to understand the potential impact of climate change on the socio-hydrological system, to simulate and test various assumptions regarding farmer behavior and to evaluate policy interventions.
Validity criteria for Fermi's golden rule scattering rates applied to metallic nanowires.
Moors, Kristof; Sorée, Bart; Magnus, Wim
2016-09-14
Fermi's golden rule underpins the investigation of mobile carriers propagating through various solids, being a standard tool to calculate their scattering rates. As such, it provides a perturbative estimate under the implicit assumption that the effect of the interaction Hamiltonian which causes the scattering events is sufficiently small. To check the validity of this assumption, we present a general framework to derive simple validity criteria in order to assess whether the scattering rates can be trusted for the system under consideration, given its statistical properties such as average size, electron density, impurity density et cetera. We derive concrete validity criteria for metallic nanowires with conduction electrons populating a single parabolic band subjected to different elastic scattering mechanisms: impurities, grain boundaries and surface roughness.
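For reference, the golden-rule rate and the generic form a smallness requirement takes; the paper's concrete criteria, which involve the statistical properties of the nanowire (size, electron density, impurity density), are not reproduced here.

```latex
% Fermi's golden rule for the rate of scattering out of state |i>:
\Gamma_i \;=\; \frac{2\pi}{\hbar} \sum_f \bigl|\langle f \,|\, H' \,|\, i \rangle\bigr|^2 \,
\delta\!\left(E_f - E_i\right).
% Generic smallness requirement behind such validity criteria: the level broadening
% implied by the rate must stay small compared with the energy scale \Delta E over
% which the final-state spectrum and matrix elements vary,
\hbar\,\Gamma_i \;\ll\; \Delta E .
```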
Statistical foundations of liquid-crystal theory: I. Discrete systems of rod-like molecules.
Seguin, Brian; Fried, Eliot
2012-12-01
We develop a mechanical theory for systems of rod-like particles. Central to our approach is the assumption that the external power expenditure for any subsystem of rods is independent of the underlying frame of reference. This assumption is used to derive the basic balance laws for forces and torques. By considering inertial forces on par with other forces, these laws hold relative to any frame of reference, inertial or noninertial. Finally, we introduce a simple set of constitutive relations to govern the interactions between rods and find restrictions necessary and sufficient for these laws to be consistent with thermodynamics. Our framework provides a foundation for a statistical mechanical derivation of the macroscopic balance laws governing liquid crystals.
On the physical parameters for Centaurus X-3 and Hercules X-1.
NASA Technical Reports Server (NTRS)
Mccluskey, G. E., Jr.; Kondo, Y.
1972-01-01
It is shown how upper and lower limits on the physical parameters of X-ray sources in Centaurus X-3 and Hercules X-1 may be determined from a reasonably simple and straightforward consideration. The basic assumption is that component A (the non-X-ray emitting component) is not a star collapsing toward its Schwarzschild radius (i.e., a black hole). This assumption appears reasonable since component A appears to physically occult component X. If component A is a 'normal' star, both observation and theory indicate that its mass is not greater than about 60 solar masses. The possibility in which component X is either a neutron star or a white dwarf is considered.
A general consumer-resource population model
Lafferty, Kevin D.; DeLeo, Giulio; Briggs, Cheryl J.; Dobson, Andrew P.; Gross, Thilo; Kuris, Armand M.
2015-01-01
Food-web dynamics arise from predator-prey, parasite-host, and herbivore-plant interactions. Models for such interactions include up to three consumer activity states (questing, attacking, consuming) and up to four resource response states (susceptible, exposed, ingested, resistant). Articulating these states into a general model allows for dissecting, comparing, and deriving consumer-resource models. We specify this general model for 11 generic consumer strategies that group mathematically into predators, parasites, and micropredators and then derive conditions for consumer success, including a universal saturating functional response. We further show how to use this framework to create simple models with a common mathematical lineage and transparent assumptions. Underlying assumptions, missing elements, and composite parameters are revealed when classic consumer-resource models are derived from the general model.
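The saturating functional response mentioned above takes the familiar Holling type II form when derived from a time budget split between questing and handling; the sketch below uses assumed rates and shows the classic form, not necessarily the paper's exact expression.

```python
def holling_type_II(R, attack_rate=0.8, handling_time=0.5):
    """Per-consumer intake rate from a time budget split between searching (questing)
    and handling (consuming) resources: f(R) = a*R / (1 + a*h*R), saturating at 1/h."""
    return attack_rate * R / (1.0 + attack_rate * handling_time * R)

for R in (0.5, 2.0, 10.0, 100.0):
    print(f"resource density {R:6.1f}   per-consumer intake rate {holling_type_II(R):.3f}")
```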
A Comparative Study on Emerging Electric Vehicle Technology Assessments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ford, Jonathan; Khowailed, Gannate; Blackburn, Julia
2011-03-01
Numerous organizations have published reports in recent years that investigate the ever changing world of electric vehicle (EV) technologies and their potential effects on society. Specifically, projections have been made on greenhouse gas (GHG) emissions associated with these vehicles and how they compare to conventional vehicles or hybrid electric vehicles (HEVs). Similar projections have been made on the volumes of oil that these vehicles can displace by consuming large amounts of grid electricity instead of petroleum-based fuels. Finally, the projected rate that these new vehicle fleets will enter the market varies significantly among organizations. New ideas, technologies, and possibilities are introduced often, and projected values are likely to be refined as industry announcements continue to be made. As a result, over time, a multitude of projections for GHG emissions, oil displacement, and market penetration associated with various EV technologies has resulted in a wide range of possible future outcomes. This leaves the reader with two key questions: (1) Why does such a collective range in projected values exist in these reports? (2) What assumptions have the greatest impact on the outcomes presented in these reports? Since it is impractical for an average reader to review and interpret all the various vehicle technology reports published to date, Sentech Inc. and the Oak Ridge National Laboratory have conducted a comparative study to make these interpretations. The primary objective of this comparative study is to present a snapshot of all major projections made on GHG emissions, oil displacement, or market penetration rates of EV technologies. From the extensive data found in relevant publications, the key assumptions that drive each report's analysis are identified and 'apples-to-apples' comparisons between all major report conclusions are attempted. The general approach that was taken in this comparative study is comprised of six primary steps: (1) Search Relevant Literature - An extensive search of recent analyses that address the environmental impacts, market penetration rates, and oil displacement potential of various EV technologies was conducted; (2) Consolidate Studies - Upon completion of the literature search, a list of analyses that have sufficient data for comparison and that should be included in the study was compiled; (3) Identify Key Assumptions - Disparity in conclusions very likely originates from disparity in simple assumptions. In order to compare 'apples-to-apples,' key assumptions were identified in each study to provide the basis for comparing analyses; (4) Extract Information - Each selected report was reviewed, and information on key assumptions and data points was extracted; (5) Overlay Data Points - Visual representations of the comprehensive conclusions were prepared to identify general trends and outliers; and (6) Draw Final Conclusions - Once all comparisons are made to the greatest possible extent, the final conclusions were drawn on what major factors lead to the variation in results among studies.
Xiaoqiu Zuo; Urs Buehlmann; R. Edward Thomas
2004-01-01
Solving the least-cost lumber grade mix problem allows dimension mills to minimize the cost of dimension part production. This problem, due to its economic importance, has attracted much attention from researchers and industry in the past. Most solutions used linear programming models and assumed that a simple linear relationship existed between lumber grade mix and...
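A toy least-cost grade-mix linear program with invented prices, yields, and demand, included only to make the formulation concrete; the abstract's point is precisely that the simple linear yield assumption used in such models can be questioned.

```python
from scipy.optimize import linprog

# Decision variables: board feet of lumber purchased in each grade (invented data).
grades = ["FAS", "1Common", "2Common"]
cost = [1.20, 0.80, 0.55]            # $/board foot, assumed
part_yield = [0.65, 0.45, 0.30]      # usable dimension-part yield per board foot, assumed
parts_needed = 10_000                # board feet of parts required, assumed

# Minimize total cost subject to: sum(yield_i * x_i) >= parts_needed, x_i >= 0.
res = linprog(c=cost,
              A_ub=[[-y for y in part_yield]],
              b_ub=[-parts_needed],
              bounds=[(0, None)] * len(grades))
print({g: round(x, 1) for g, x in zip(grades, res.x)}, " total cost $", round(res.fun, 2))
```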
Comments on ""Contact Diffusion Interaction of Materials with Cladding''
NASA Technical Reports Server (NTRS)
Morris, J. F.
1972-01-01
A Russian paper by A. A. Babad-Zakhryapina contributes much to the understanding of fuel-cladding interactions, and thus to nuclear thermionic technology. In that publication the basic diffusion expression is a simple one. A more general but complicated equation for this mass transport results from the present work. With appropriate assumptions, however, the new relation reduces to Babad-Zakhryapina's version.
Walking or Running in the Rain--A Simple Derivation of a General Solution
ERIC Educational Resources Information Center
Ehrmann, Andrea; Blachowicz, Tomasz
2011-01-01
The question whether to walk slowly or to run when it starts raining in order to stay as dry as possible has been considered for many years--and with different results, depending on the assumptions made and the mathematical descriptions for the situation. Because of the practical meaning for real life and the inconsistent results depending on the…
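A hedged sketch of the usual toy calculation behind the question for vertically falling rain: the front of the body sweeps out a fixed volume of rain-laden air regardless of speed, while the top collects water in proportion to the time spent in the rain, so total wetness decreases with speed. The geometry and rain parameters below are assumed, and wind-driven rain (part of the general case the paper addresses) changes the conclusion.

```python
def water_collected(speed, distance=100.0, rho=1e-6, fall_speed=9.0,
                    top_area=0.10, front_area=0.70):
    """Water (m^3) collected while covering `distance` (m) at `speed` (m/s) through
    vertically falling rain. rho is the volume of airborne water per cubic metre of
    air (assumed); the areas are horizontal and frontal cross-sections of the body."""
    time_in_rain = distance / speed
    top = rho * fall_speed * top_area * time_in_rain   # flux onto head and shoulders
    front = rho * front_area * distance                # air volume swept by the front
    return top + front

for v in (1.0, 2.0, 4.0, 8.0):                         # walking to sprinting, m/s
    print(f"v = {v:3.1f} m/s   water collected = {1e3 * water_collected(v):.3f} litres")
```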
NASA Technical Reports Server (NTRS)
Lin, Reng Rong; Palazzolo, A. B.; Kascak, A. F.; Montague, G.
1991-01-01
Theories and tests for incorporating piezoelectric pushers as actuator devices for active vibration control are discussed. The work started from a simple model with the assumption of ideal pusher characteristics and progressed to electromechanical models with nonideal pushers. Effects on system stability due to the nonideal characteristics of piezoelectric pushers and other elements in the control loop were investigated.
NASA Astrophysics Data System (ADS)
Johnson, A. M.; Griffiths, J. H.
2007-05-01
At the 2005 Fall Meeting of the American Geophysical Union, Griffiths and Johnson [2005] introduced a method of extracting from the deformation-gradient (and velocity-gradient) tensor the amount and preferred orientation of simple-shear associated with 2-D shear zones and faults. Noting that the analysis was 2-D is important because the shear zones and faults in Griffiths and Johnson [2005] were assumed non-dilatant and infinitely long, ignoring the scissors-like action along strike associated with shear zones and faults of finite length. Because shear zones and faults can dilate (and contract) normal to their walls and can have a scissors-like action associated with twisting about an axis normal to their walls, a more general method of detecting simple-shear is introduced here and called MODES, the "method of detecting simple-shear." MODES can thus extract from the deformation-gradient (and velocity-gradient) tensor the amount and preferred orientation of simple-shear associated with 3-D shear zones and faults near or far from the Earth's surface, providing improvements and extensions to existing analytical methods used in active tectonics studies, especially strain analysis and dislocation theory. The derivation of MODES is based on one definition and two assumptions: by definition, simple-shear deformation becomes localized in some way; by assumption, the twirl within the deformation-gradient (or the spin within the velocity-gradient) is due to a combination of simple-shear and twist, and coupled with the simple-shear and twist is a dilatation of the walls of shear zones and faults. The preferred orientation is thus the orientation of the plane containing the simple-shear and satisfying the mechanical and kinematical boundary conditions. Results from a MODES analysis are illustrated by means of a three-dimensional diagram, the cricket-ball, which is reminiscent of the seismologist's "beach ball." In this poster, we present the underlying theory of MODES and illustrate how it works by analyzing the three-dimensional displacements measured with the Global Positioning System across the 1999 Chi-Chi earthquake ground rupture in Taiwan. In contrast to the deformation zone in the upper several meters of the ground below the surface detected by Yu et al. [2001], MODES determines the orientation and direction of shift of a shear zone representing the earthquake fault within the upper several hundred or thousand meters of ground below the surface. Thus, one value of the MODES analysis in this case is to provide boundary conditions for dislocation solutions for the subsurface shape of the main rupture during the earthquake.
NASA Astrophysics Data System (ADS)
Kalachev, L. V.
2016-06-01
We present a simple model of experimental setup for in vitro study of drug release from drug eluting stents and drug propagation in artificial tissue samples representing blood vessels. The model is further reduced using the assumption on vastly different characteristic diffusion times in the stent coating and in the artificial tissue. The model is used to derive a relationship between the times at which the measurements have to be taken for two experimental platforms, with corresponding artificial tissue samples made of different materials with different drug diffusion coefficients, to properly compare the drug release characteristics of drug eluting stents.
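One way to read the time-matching relationship, assuming the reduced model inherits ordinary diffusive scaling (t proportional to L^2/D), is sketched below; the symbols and the scaling itself are assumptions made here for illustration, not taken from the paper.

```python
def matched_time(t1, D1, D2, L1=1.0, L2=1.0):
    """Map a measurement time t1 on platform 1 (tissue diffusivity D1, thickness L1)
    to the comparable time on platform 2 by matching the dimensionless diffusion
    time D*t/L^2. A sketch of the scaling argument only."""
    return t1 * (D1 / D2) * (L2 / L1) ** 2

# Example: if the second artificial tissue has a 4x smaller drug diffusion coefficient
# and the same thickness, measurements should be taken 4x later to be comparable.
print(matched_time(t1=3600.0, D1=4e-11, D2=1e-11))  # 14400.0 seconds
```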
Some Simple Formulas for Posterior Convergence Rates
2014-01-01
We derive some simple relations that demonstrate how the posterior convergence rate is related to two driving factors: a “penalized divergence” of the prior, which measures the ability of the prior distribution to propose a nonnegligible set of working models to approximate the true model, and a “norm complexity” of the prior, which measures the complexity of the prior support, weighted by the prior probability masses. These formulas are explicit, involve no essential assumptions, and are easy to apply. We apply this approach to the case with model averaging and derive some useful oracle inequalities that can optimize the performance adaptively without knowing the true model. PMID:27379278
NASA Astrophysics Data System (ADS)
Kang, Pilsang; Koo, Changhoi; Roh, Hokyu
2017-11-01
Since simple linear regression theory was established at the beginning of the 1900s, it has been used in a variety of fields. Unfortunately, it cannot be used directly for calibration. In practical calibrations, the observed measurements (the inputs) are subject to errors, and hence they vary, thus violating the assumption that the inputs are fixed. Therefore, in the case of calibration, the regression line fitted using the method of least squares is not consistent with the statistical properties of simple linear regression as already established based on this assumption. To resolve this problem, "classical regression" and "inverse regression" have been proposed. However, they do not completely resolve the problem. As a fundamental solution, we introduce "reversed inverse regression" along with a new methodology for deriving its statistical properties. In this study, the statistical properties of this regression are derived using the "error propagation rule" and the "method of simultaneous error equations" and are compared with those of the existing regression approaches. The accuracy of the statistical properties thus derived is investigated in a simulation study. We conclude that the newly proposed regression and methodology constitute the complete regression approach for univariate linear calibrations.
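For readers unfamiliar with the two standard approaches the abstract contrasts, here is a minimal numerical sketch of classical and inverse calibration on synthetic data; it does not implement the paper's "reversed inverse regression", and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
x_std = np.linspace(0.0, 10.0, 12)                             # known reference values
y_obs = 2.0 + 0.5 * x_std + rng.normal(0.0, 0.05, x_std.size)  # noisy instrument readings

# Classical calibration: fit y = a + b*x on the standards, then invert for a new reading.
b, a = np.polyfit(x_std, y_obs, 1)
y_new = 4.6
x_classical = (y_new - a) / b

# Inverse calibration: regress x directly on y and predict.
d, c = np.polyfit(y_obs, x_std, 1)
x_inverse = c + d * y_new

print(x_classical, x_inverse)   # the two estimators differ slightly, more so with noisier data
```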
Focused attention in a simple dichotic listening task: an fMRI experiment.
Jäncke, Lutz; Specht, Karsten; Shah, Joni Nadim; Hugdahl, Kenneth
2003-04-01
Whole-head functional magnetic resonance imaging (fMRI) was used in nine neurologically intact subjects to measure the hemodynamic responses in the context of dichotic listening (DL). In order to eliminate the influence of verbal information processing, tones of different frequencies were used as stimuli. Three different dichotic listening tasks were used: the subjects were instructed to concentrate either on the stimuli presented in both ears (DIV), or only in the left (FL) or right (FR) ear, and to monitor the auditory input for a specific target tone. When the target tone was detected, the subjects were required to indicate this by pressing a response button. Compared to the resting state, all dichotic listening tasks evoked strong hemodynamic responses within a distributed network comprising temporal, parietal, and frontal brain areas. Thus, it is clear that dichotic listening makes use of various cognitive functions located within the dorsal and ventral streams of auditory information processing (i.e., the 'what' and 'where' streams). Comparing the three different dichotic listening conditions with each other revealed a significant difference only in the pre-SMA and within the left planum temporale area. The pre-SMA was generally more strongly activated during the DIV condition than during the FR and FL conditions. Within the planum temporale, the strongest activation was found during the FR condition and the weakest during the DIV condition. These findings were taken as evidence that even a simple dichotic listening task, such as the one used here, makes use of a distributed neural network comprising the dorsal and ventral streams of auditory information processing. In addition, these results support the previously made assumption that planum temporale activation is modulated by attentional strategies. Finally, the present findings uncovered that the pre-SMA, which is mostly thought to be involved in higher-order motor control processes, is also involved in cognitive processes operative during dichotic listening.
Behavior of Triple Langmuir Probes in Non-Equilibrium Plasmas
NASA Technical Reports Server (NTRS)
Polzin, Kurt A.; Ratcliffe, Alicia C.
2018-01-01
The triple Langmuir probe is an electrostatic probe in which three probe tips collect current when inserted into a plasma. The triple probe differs from a simple single Langmuir probe in the nature of the voltage applied to the probe tips. In the single probe, a swept voltage is applied to the probe tip to acquire a waveform showing the collected current as a function of applied voltage (I-V curve). In a triple probe, three probe tips are electrically coupled to each other with constant voltages applied between each of the tips. The voltages are selected such that they would represent three points on the single Langmuir probe I-V curve. Elimination of the voltage sweep makes it possible to measure time-varying plasma properties in transient plasmas. Under the assumption of a Maxwellian plasma, one can determine the time-varying plasma temperature T(sub e)(t) and number density n(sub e)(t) from the applied voltage levels and the time-histories of the collected currents. In the present paper we examine the theory of triple probe operation, specifically focusing on the assumption of a Maxwellian plasma. Triple probe measurements have been widely employed for a number of pulsed and time-varying plasmas, including pulsed plasma thrusters (PPTs), dense plasma focus devices, plasma flows, and fusion experiments. While the equilibrium assumption may be justified for some applications, it is unlikely that it is fully justifiable for all pulsed and time-varying plasmas or for all times during the pulse of a plasma device. To examine a simple non-equilibrium plasma case, we return to basic governing equations of probe current collection and compute the current to the probes for a distribution function consisting of two Maxwellian distributions with different temperatures (the two-temperature Maxwellian). A variation of this method is also employed, where one of the Maxwellians is offset from zero (in velocity space) to add a suprathermal beam of electrons to the tail of the main Maxwellian distribution (the bump-on-the-tail distribution function). For a range of parameters in these non-Maxwellian distributions, we compute the current collection to the probes. We compare the distribution function that was assumed a priori with the distribution function one would infer when applying standard triple probe theory to analyze the collected currents. For the assumed class of non-Maxwellian distribution functions this serves to illustrate the effect a non-Maxwellian plasma would have on results interpreted using the equilibrium triple probe current collection theory, allowing us to state the magnitudes of these deviations as a function of the assumed distribution function properties.
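As a rough illustration of the non-equilibrium issue, the sketch below evaluates the textbook retarded-electron current to a planar probe for a single Maxwellian and for a two-temperature (bulk plus hot tail) mixture. It is not the paper's analysis; the collection model, geometry, and all numbers are simplifying assumptions.

```python
import numpy as np

E_CHARGE = 1.602e-19   # C
M_E = 9.109e-31        # kg

def electron_current(bias, density, temp_eV, area=1e-6):
    """Retarded electron current (A) to a planar probe biased 'bias' volts below the
    plasma potential (bias <= 0), for a Maxwellian of given density (m^-3) and
    temperature (eV). Sheath expansion and ion current are ignored."""
    temp_J = temp_eV * E_CHARGE
    j_sat = E_CHARGE * density * np.sqrt(temp_J / (2.0 * np.pi * M_E))
    return area * j_sat * np.exp(E_CHARGE * bias / temp_J)

def two_temperature_current(bias, n_bulk, T_bulk, n_hot, T_hot, area=1e-6):
    """Current from a bulk Maxwellian plus a hot minority population."""
    return (electron_current(bias, n_bulk, T_bulk, area)
            + electron_current(bias, n_hot, T_hot, area))

# At strongly negative bias the hot tail dominates the collected current, which is
# what skews the temperature inferred from equilibrium triple-probe relations.
print(two_temperature_current(-10.0, 1e18, 2.0, 1e16, 20.0))
```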
Ten simple rules for making research software more robust
2017-01-01
Software produced for research, published and otherwise, suffers from a number of common problems that make it difficult or impossible to run outside the original institution or even off the primary developer’s computer. We present ten simple rules to make such software robust enough to be run by anyone, anywhere, and thereby delight your users and collaborators. PMID:28407023
ERIC Educational Resources Information Center
Emig, Brandon R.; McDonald, Scott; Zembal-Saul, Carla; Strauss, Susan G.
2014-01-01
This study invited small groups to make several arguments by analogy about simple machines. Groups were first provided training on analogical (structure) mapping and were then invited to use analogical mapping as a scaffold to make arguments. In making these arguments, groups were asked to consider three simple machines: two machines that they had…
Making Simple Folk Instruments for Children.
ERIC Educational Resources Information Center
Cline, Dallas
1980-01-01
Instructions are provided for making these simple musical instruments from inexpensive materials: an Indian bull-roarer; bottle chimes; a ham can guitar; flower pot, box, and steel drums; a xylophone; a musical sawhorse; rattles; a melody box; and a box thumb harp. (SJL)
A simple model for estimating a magnetic field in laser-driven coils
Fiksel, Gennady; Fox, William; Gao, Lan; ...
2016-09-26
Magnetic field generation by laser-driven coils is a promising way of magnetizing plasma in laboratory high-energy-density plasma experiments. A typical configuration consists of two electrodes—one electrode is irradiated with a high-intensity laser beam and another electrode collects charged particles from the expanding plasma. The two electrodes are separated by a narrow gap forming a capacitor-like configuration and are connected with a conducting wire-coil. The charge-separation in the expanding plasma builds up a potential difference between the electrodes that drives the electrical current in the coil. A magnetic field of tens to hundreds of Teslas generated inside the coil has been reported. This paper presents a simple model that estimates the magnetic field using simple assumptions. Lastly, the results are compared with the published experimental data.
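For orientation, the reported field strengths follow from the elementary coil formula B = μ0·N·I/(2a) once a drive current is assumed; the current value below is purely illustrative and is not taken from the paper or its model.

```python
import numpy as np

MU_0 = 4e-7 * np.pi   # vacuum permeability, T*m/A

def coil_center_field(current, radius, turns=1):
    """Magnetic field (T) at the center of a circular coil of given radius (m)."""
    return MU_0 * turns * current / (2.0 * radius)

# An assumed ~50 kA discharge current in a single-turn, 250-micron-radius coil:
print(coil_center_field(5e4, 250e-6))   # ~126 T, i.e. in the tens-to-hundreds of tesla range
```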
DOE Office of Scientific and Technical Information (OSTI.GOV)
Honorio, J.; Goldstein, R.; Honorio, J.
We propose a simple, well grounded classification technique which is suited for group classification on brain fMRI data sets that have high dimensionality, a small number of subjects, high noise level, high subject variability, and imperfect registration, and that capture subtle cognitive effects. We propose threshold-split region as a new feature selection method and majority vote as the classification technique. Our method does not require a predefined set of regions of interest. We use averages across sessions, only one feature per experimental condition, a feature independence assumption, and simple classifiers. The seemingly counter-intuitive approach of using a simple design is supported by signal processing and statistical theory. Experimental results in two block design data sets that capture brain function under distinct monetary rewards for cocaine addicted and control subjects show that our method exhibits increased generalization accuracy compared to commonly used feature selection and classification techniques.
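The abstract gives only the outline of the method, so the following is a loose sketch of the general recipe it describes (threshold-based feature selection followed by a majority vote of very simple per-feature classifiers) on synthetic data. The helper names and the midpoint-threshold rule are assumptions, not the authors' exact "threshold-split region" procedure.

```python
import numpy as np

def fit_threshold_voters(X, y, n_keep=50):
    """Pick the n_keep features with the largest absolute group-mean difference and
    store, per feature, a midpoint threshold and the sign of the difference."""
    diff = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
    keep = np.argsort(-np.abs(diff))[:n_keep]
    thresholds = 0.5 * (X[y == 1][:, keep].mean(axis=0) + X[y == 0][:, keep].mean(axis=0))
    signs = np.sign(diff[keep])
    return keep, thresholds, signs

def predict_majority(X, keep, thresholds, signs):
    """Each selected feature casts a vote; the majority decides the class."""
    votes = (signs * (X[:, keep] - thresholds) > 0).astype(int)
    return (votes.mean(axis=1) > 0.5).astype(int)

# Toy illustration with synthetic "subject x feature" data.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5000))
y = np.repeat([0, 1], 20)
X[y == 1, :20] += 0.8                       # a weak signal confined to 20 features
keep, thr, sgn = fit_threshold_voters(X, y)
print((predict_majority(X, keep, thr, sgn) == y).mean())  # training accuracy of the sketch
```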
Camera traps and mark-resight models: The value of ancillary data for evaluating assumptions
Parsons, Arielle W.; Simons, Theodore R.; Pollock, Kenneth H.; Stoskopf, Michael K.; Stocking, Jessica J.; O'Connell, Allan F.
2015-01-01
Unbiased estimators of abundance and density are fundamental to the study of animal ecology and critical for making sound management decisions. Capture–recapture models are generally considered the most robust approach for estimating these parameters but rely on a number of assumptions that are often violated but rarely validated. Mark-resight models, a form of capture–recapture, are well suited for use with noninvasive sampling methods and allow for a number of assumptions to be relaxed. We used ancillary data from continuous video and radio telemetry to evaluate the assumptions of mark-resight models for abundance estimation on a barrier island raccoon (Procyon lotor) population using camera traps. Our island study site was geographically closed, allowing us to estimate real survival and in situ recruitment in addition to population size. We found several sources of bias due to heterogeneity of capture probabilities in our study, including camera placement, animal movement, island physiography, and animal behavior. Almost all sources of heterogeneity could be accounted for using the sophisticated mark-resight models developed by McClintock et al. (2009b) and this model generated estimates similar to a spatially explicit mark-resight model previously developed for this population during our study. Spatially explicit capture–recapture models have become an important tool in ecology and confer a number of advantages; however, non-spatial models that account for inherent individual heterogeneity may perform nearly as well, especially where immigration and emigration are limited. Non-spatial models are computationally less demanding, do not make implicit assumptions related to the isotropy of home ranges, and can provide insights with respect to the biological traits of the local population.
Relating color working memory and color perception.
Allred, Sarah R; Flombaum, Jonathan I
2014-11-01
Color is the most frequently studied feature in visual working memory (VWM). Oddly, much of this work de-emphasizes perception, instead making simplifying assumptions about the inputs served to memory. We question these assumptions in light of perception research, and we identify important points of contact between perception and working memory in the case of color. Better characterization of its perceptual inputs will be crucial for elucidating the structure and function of VWM. Copyright © 2014 Elsevier Ltd. All rights reserved.
The vulnerabilities of teenage mothers: challenging prevailing assumptions.
SmithBattle, L
2000-09-01
The belief that early childbearing leads to poverty permeates our collective understanding. However, recent findings reveal that for many teens, mothering makes sense of the limited life options that precede their pregnancies. The author challenges several assumptions about teenage mothers and offers an alternative to the modern view of the unencumbered self that drives current responses to teen childbearing. This alternative perspective entails a situated view of the self and a broader notion of parenting and citizenship that supports teen mothers and affirms our mutual interdependence.
Putting semantics into the semantic web: how well can it capture biology?
Kazic, Toni
2006-01-01
Could the Semantic Web work for computations of biological interest in the way it's intended to work for movie reviews and commercial transactions? It would be wonderful if it could, so it's worth looking to see if its infrastructure is adequate to the job. The technologies of the Semantic Web make several crucial assumptions. I examine those assumptions; argue that they create significant problems; and suggest some alternative ways of achieving the Semantic Web's goals for biology.
Reform Higher Education with Capitalism? Doing Good and Making Money at the For-Profit Universities
ERIC Educational Resources Information Center
Berg, Gary A.
2005-01-01
For a business-person, the argument that making a profit leads to poor "product" quality would seem silly or worse, insulting. However, this is a common assumption in higher education. Similarly, increasing productivity (a typical business goal) has never been accepted as a worthy aspiration in higher education. In traditional…
School to Work Transitions in Europe: Choice and Constraints
ERIC Educational Resources Information Center
Cuconato, Morena
2017-01-01
Starting from the assumption that school to work transitions constitute not only the end goal but also an integral part of educational trajectories, this article reconstructs the narratives of the decision-making processes of young people at the end of lower secondary education, namely the ways in which decision-making is referred to, the temporal…
ERIC Educational Resources Information Center
Della-Piana, Gabriel; Della-Piana, Connie Kubo
This report describes a collection of procedures, with illustrative examples, for selecting and portraying microcomputer courseware in a manner that enables others to make their own judgments of courseware quality. Following a discussion of perspective and a report outline, section 3 deals with assumptions underlying the search to identify…
Making Practice Visible: A Collaborative Self-Study of Tiered Teaching in Teacher Education
ERIC Educational Resources Information Center
Garbett, Dawn; Heap, Rena
2011-01-01
In this article we document the impact of tiered teaching on making the complexity of pedagogy transparent when teaching science education to pre-service primary teachers. Teaching science methods classes together and researching our teaching has enabled us to reframe our assumptions and move beyond the simplistic and misleading idea that teacher…
ERIC Educational Resources Information Center
Royal, Kenneth D.
2010-01-01
Quality measurement is essential in every form of research, including institutional research and assessment. This paper addresses the erroneous assumptions institutional researchers often make with regard to survey research and provides an alternative method to producing more valid and reliable measures. Rasch measurement models are discussed and…
ERIC Educational Resources Information Center
Yamada, Shoko
2014-01-01
A School Management Committee (SMC) is an administrative tool adopted in many developing countries to decentralise administrative and financial responsibilities at school level, while involving local people in decision-making and making education more responsive to demands. I question the assumption linking administrative decentralisation and…
Xia, Fang; George, Stephen L.; Wang, Xiaofei
2015-01-01
In designing a clinical trial for comparing two or more treatments with respect to overall survival (OS), a proportional hazards assumption is commonly made. However, in many cancer clinical trials, patients pass through various disease states prior to death and because of this may receive treatments other than originally assigned. For example, patients may cross over from the control treatment to the experimental treatment at progression. Even without crossover, the survival pattern after progression may be very different from the pattern prior to progression. The proportional hazards assumption will not hold in these situations, and the design power calculated on this assumption will not be correct. In this paper we describe a simple and intuitive multi-state model allowing for progression, death before progression, post-progression survival, and crossover after progression, and apply this model to the design of clinical trials for comparing the OS of two treatments. For given values of the parameters of the multi-state model, we simulate the required number of deaths to achieve a specified power and the distribution of time required to achieve the requisite number of deaths. The results may be quite different from those derived using the usual proportional hazards assumption. PMID:27239255
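A minimal version of the simulation step can be written with exponential transition times; the states (progression, death before progression, post-progression death) follow the abstract, while the rates, sample sizes, and the way crossover rescales the post-progression hazard are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_os(n, h_prog, h_death_pre, h_death_post, crossover_hr=1.0):
    """Overall survival times under a simple exponential multi-state model.

    h_prog: hazard of progression; h_death_pre: hazard of death before progression;
    h_death_post: post-progression death hazard. crossover_hr rescales the
    post-progression hazard (e.g., control patients who cross over to the
    experimental arm). Illustrative rates, not the paper's parameterization."""
    t_prog = rng.exponential(1.0 / h_prog, n)
    t_death_pre = rng.exponential(1.0 / h_death_pre, n)
    progressed = t_prog < t_death_pre
    t_post = rng.exponential(1.0 / (h_death_post * crossover_hr), n)
    return np.where(progressed, t_prog + t_post, t_death_pre)

followup = 36.0  # months of follow-up at the analysis
control = simulate_os(300, h_prog=1/12, h_death_pre=1/60, h_death_post=1/10, crossover_hr=0.7)
experimental = simulate_os(300, h_prog=1/18, h_death_pre=1/60, h_death_post=1/10)
deaths = (control < followup).sum() + (experimental < followup).sum()
print(deaths)  # number of OS events available at this analysis time under these assumptions
```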
Alegría, Margarita
2013-01-01
In this study we consider the process of the clinical encounter, and present exemplars of how assumptions of both clinicians and their patients can shift or transform in the course of a diagnostic interview. We examine the process as it is recalled, and further elaborated, in post-diagnostic interviews as part of a collaborative inquiry during reflections with clinicians and patients in the northeastern United States. Rather than treating assumptions by patients and providers as a fixed attribute of an individual, we treat them as occurring between people within a particular social context, the diagnostic interview. We explore the diagnostic interview as a landscape in which assumptions occur (and can shift), navigate the features of this landscape, and suggest that our examination can best be achieved by the systematic comparison of views of the multiple actors in an experience-near manner. We describe what might be gained by this shift in assumptions and how it can make visible what is at stake for clinician and patient in their local moral worlds: for patients, acknowledgement of social suffering; for clinicians, how assumptions are a barrier to engagement with minority patients. It is crucial for clinicians to develop this capacity for reflection when navigating the interactions with patients from different cultures, to recognize and transform assumptions, to notice 'surprises', and to elicit what really matters to patients in their care. PMID:19201074
Flood Protection Decision Making Within a Coupled Human and Natural System
NASA Astrophysics Data System (ADS)
O'Donnell, Greg; O'Connell, Enda
2013-04-01
Due to the perceived threat from climate change, prediction under changing climatic and hydrological conditions has become a dominant theme of hydrological research. Much of this research has been climate model-centric, in which GCM/RCM climate projections have been used to drive hydrological system models to explore potential impacts that should inform adaptation decision-making. However, adaptation fundamentally involves how humans may respond to increasing flood and drought hazards by changing their strategies, activities and behaviours which are coupled in complex ways to the natural systems within which they live and work. Humans are major agents of change in hydrological systems, and representing human activities and behaviours in coupled human and natural hydrological system models is needed to gain insight into the complex interactions that take place, and to inform adaptation decision-making. Governments and their agencies are under pressure to make proactive investments to protect people living in floodplains from the perceived increasing flood hazard. However, adopting this as a universal strategy everywhere is not affordable, particularly in times of economic stringency and given uncertainty about future climatic conditions. It has been suggested that the assumption of stationarity, which has traditionally been invoked in making hydrological risk assessments, is no longer tenable. However, before the assumption of hydrologic nonstationarity is accepted, the ability to cope with the uncertain impacts of global warming on water management via the operational assumption of hydrologic stationarity should be carefully examined. Much can be learned by focussing on natural climate variability and its inherent changes in assessing alternative adaptation strategies. A stationary stochastic multisite flood hazard model has been developed that can exhibit increasing variability/persistence in annual maximum floods, starting with the traditional assumption of independence. This has been coupled to an agent based model of how various stakeholders interact in determining where and when flood protection investments are made in a hypothetical region with multiple sites at risk from flood hazard. Monte Carlo simulation is used to explore how government agencies with finite resources might best invest in flood protection infrastructure in a highly variable climate with a high degree of future uncertainty. Insight is provided into whether proactive or reactive strategies are to be preferred in an increasingly variable climate.
Could CT screening for lung cancer ever be cost effective in the United Kingdom?
Whynes, David K
2008-01-01
Background: The absence of trial evidence makes it impossible to determine whether or not mass screening for lung cancer would be cost effective and, indeed, whether a clinical trial to investigate the problem would be justified. Attempts have been made to resolve this issue by modelling, although the complex models developed to date have required more real-world data than are currently available. Being founded on unsubstantiated assumptions, they have produced estimates with wide confidence intervals and of uncertain relevance to the United Kingdom. Method: I develop a simple, deterministic model of a screening regimen potentially applicable to the UK. The model includes only a limited number of parameters, for the majority of which values have already been established in non-trial settings. The component costs of screening are derived from government guidance and from published audits, whilst the values for test parameters are derived from clinical studies. The expected health gains as a result of screening are calculated by combining published survival data for screened and unscreened cohorts with data from Life Tables. When a degree of uncertainty over a parameter value exists, I use a conservative estimate, i.e. one likely to make screening appear less, rather than more, cost effective. Results: The incremental cost effectiveness ratio of a single screen amongst a high-risk male population is calculated to be around £14,000 per quality-adjusted life year gained. The average cost of this screening regimen per person screened is around £200. It is possible that, when obtained experimentally in any future trial, parameter values will be found to differ from those previously obtained in non-trial settings. On the basis both of differing assumptions about evaluation conventions and of reasoned speculations as to how test parameters and costs might behave under screening, the model generates cost effectiveness ratios as high as around £20,000 and as low as around £7,000. Conclusion: It is evident that eventually being able to identify a cost effective regimen of CT screening for lung cancer in the UK is by no means an unreasonable expectation. PMID:18302756
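The headline figure is ordinary incremental cost-effectiveness arithmetic. The sketch below back-calculates a ratio of the same order from the per-person cost quoted in the abstract and an assumed QALY gain; the QALY number is illustrative, not the paper's input.

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return delta_cost / delta_qaly

# ~200 GBP extra cost per person screened and an assumed ~0.014 QALYs gained per person:
print(icer(200.0, 0.014))   # about 14,300 GBP per QALY, the order of the abstract's ~14,000
```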
Persuasion and economic efficiency. The cost-benefit analysis of banning abortion.
Nelson, J
1993-10-01
A simple cost-benefit approach to the abortion debate is unlikely to be persuasive if efficiency arguments conflict with widely held concepts of justice or rely on improbable notions of consent. Illustrative of the limitations of economic analyses are the models proposed by Meeks and Posner to make a case against abortion on demand. Meeks posits a tradeoff between the consumer surplus women gain from access to abortion and the expected loss of earnings that would have accrued to the aborted conceptuses. From here, Meeks derives the critical price elasticity that equates welfare gains and losses and argues that a ban on abortion represents a Kaldor-Hicks improvement in welfare if the price elasticity of demand falls above the critical level. Basic to his model are several questionable assumptions: an independence of ability to pay for an abortion and income, all women who select abortion have the same linear demand for the procedure, an abortion ban would eliminate the practice of abortion, economic efficiency generally requires slavery, and the morally relevant population includes the unborn. Posner, on the other hand, argues that an abortion ban would be efficient if the average surplus lost by a woman who chooses not to break the law is less than half the average value of the fetus saved. He assumes that it takes 1.83 abortions avoided to increase the population by 1 individual and favors reducing the current abortion rate by 30% rather than banning the procedure. Although Posner's model does not require specification of any particular value for the fetus, it neglects the increased health risk for pregnant women of illegal abortion. Moreover, Posner assumes that all women obey the law if it is in their economic interest to do so. Detrimental to both models is an assumption that sound normative judgments can be made on the basis of average values for observable data and the goal of maximizing wealth is logically prior to the specification of individual rights. It is concluded that economic arguments can be persuasive on the abortion issue only if there is agreement that cost-benefit analysis is an appropriate basis for decision making.
NASA Astrophysics Data System (ADS)
Ikramov, Kh. D.
2010-03-01
There are well-known conditions under which a complex n × n matrix A can be made real by a similarity transformation. Under the additional assumption that A has a simple real spectrum, a constructive answer is given to the question whether this transformation can be realized via a unitary rather than arbitrary similarity.
Evaluation of thyroid radioactivity measurement data from Hanford workers, 1944--1946
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ikenberry, T.A.
1991-05-01
This report describes the preliminary results of an evaluation conducted in support of the Hanford Environmental Dose Reconstruction (HEDR) Project. The primary objective of the HEDR Project is to estimate the radiation doses that populations could have received from nuclear operations at the Hanford Site since 1944. A secondary objective is to make the information that HEDR staff members used in estimating radiation doses available to the public. The objectives of this report are to make available thyroid measurement data from Hanford workers for the years 1944 through 1946, and to investigate the suitability of those data for use in the HEDR dose estimation process. An important part of this investigation was to provide a description of the uncertainty associated with the data. Lack of documentation on thyroid measurements from this period required that assumptions be made to perform data evaluations. These assumptions introduce uncertainty into the evaluations that could be significant. It is important to recognize the nature of these assumptions, the inherent uncertainty, and the propagation of this uncertainty through data evaluations to any conclusions that can be made by using the data. 15 refs., 1 fig., 5 tabs.
Alden, Dana L; Friend, John; Lee, Ping Yein; Lee, Yew Kong; Trevena, Lyndal; Ng, Chirk Jenn; Kiatpongsan, Sorapop; Lim Abdullah, Khatijah; Tanaka, Miho; Limpongsanurak, Supanida
2018-01-01
Research suggests that desired family involvement (FI) in medical decision making may depend on cultural values. Unfortunately, the field lacks cross-cultural studies that test this assumption. As a result, providers may be guided by incomplete information or cultural biases rather than patient preferences. Researchers developed 6 culturally relevant disease scenarios varying from low to high medical seriousness. Quota samples of approximately 290 middle-aged urban residents in Australia, China, Malaysia, India, South Korea, Thailand, and the USA completed an online survey that examined desired levels of FI and identified individual difference predictors in each country. All reliability coefficients were acceptable. Regression models met standard assumptions. The strongest finding across all 7 countries was that those who desired higher self-involvement (SI) in medical decision making also wanted lower FI. On the other hand, respondents who valued relational-interdependence tended to want their families involved - a key finding in 5 of 7 countries. In addition, in 4 of 7 countries, respondents who valued social hierarchy desired higher FI. Other antecedents were less consistent. These results suggest that it is important for health providers to avoid East-West cultural stereotypes. There are meaningful numbers of patients in all 7 countries who want to be individually involved and those individuals tend to prefer lower FI. On the other hand, more interdependent patients are likely to want families involved in many of the countries studied. Thus, individual differences within culture appear to be important in predicting whether a patient desires FI. For this reason, avoiding culture-based assumptions about desired FI during medical decision making is central to providing more effective patient centered care.
Xu, Hongjuan; Weber, Stephen G.
2006-01-01
A post-column reactor consisting of a simple open tube (Capillary Taylor Reactor) affects the performance of a capillary LC in two ways: stealing pressure from the column and adding band spreading. The former is a problem for very small radius reactors, while the latter shows itself for large reactor diameters. We derived an equation that defines the observed number of theoretical plates (Nobs) taking into account the two effects stated above. Making some assumptions and asserting certain conditions led to a final equation with a limited number of variables, namely chromatographic column radius, reactor radius and chromatographic particle diameter. The assumptions and conditions are that the van Deemter equation applies, the mass transfer limitation is for intraparticle diffusion in spherical particles, the velocity is at the optimum, the analyte’s retention factor, k′, is zero, the post-column reactor is only long enough to allow complete mixing of reagents and analytes and the maximum operating pressure of the pumping system is used. Optimal ranges of the reactor radius (ar) are obtained by comparing the number of observed theoretical plates (and theoretical plates per time) with and without a reactor. Results show that the acceptable reactor radii depend on column diameter, particle diameter, and maximum available pressure. Optimal ranges of ar become narrower as column diameter increases, particle diameter decreases or the maximum pressure is decreased. When the available pressure is 4000 psi, a Capillary Taylor Reactor with 12 μm radius is suitable for all columns smaller than 150 μm (radius) packed with 2–5 μm particles. For 1 μm packing particles, only columns smaller than 42.5 μm (radius) can be used and the reactor radius needs to be 5 μm. PMID:16494886
The perception of probability.
Gallistel, C R; Krishan, Monika; Liu, Ye; Miller, Reilly; Latham, Peter E
2014-01-01
We present a computational model to explain the results from experiments in which subjects estimate the hidden probability parameter of a stepwise nonstationary Bernoulli process outcome by outcome. The model captures the following results qualitatively and quantitatively, with only 2 free parameters: (a) Subjects do not update their estimate after each outcome; they step from one estimate to another at irregular intervals. (b) The joint distribution of step widths and heights cannot be explained on the assumption that a threshold amount of change must be exceeded in order for them to indicate a change in their perception. (c) The mapping of observed probability to the median perceived probability is the identity function over the full range of probabilities. (d) Precision (how close estimates are to the best possible estimate) is good and constant over the full range. (e) Subjects quickly detect substantial changes in the hidden probability parameter. (f) The perceived probability sometimes changes dramatically from one observation to the next. (g) Subjects sometimes have second thoughts about a previous change perception, after observing further outcomes. (h) The frequency with which they perceive changes moves in the direction of the true frequency over sessions. (Explaining this finding requires 2 additional parametric assumptions.) The model treats the perception of the current probability as a by-product of the construction of a compact encoding of the experienced sequence in terms of its change points. It illustrates the why and the how of intermittent Bayesian belief updating and retrospective revision in simple perception. It suggests a reinterpretation of findings in the recent literature on the neurobiology of decision making. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
Solar r-process-constrained actinide production in neutrino-driven winds of supernovae
NASA Astrophysics Data System (ADS)
Goriely, S.; Janka, H.-Th.
2016-07-01
Long-lived radioactive nuclei play an important role as nucleo-cosmochronometers and as cosmic tracers of nucleosynthetic source activity. In particular, nuclei in the actinide region like thorium, uranium, and plutonium can testify to the enrichment of an environment by the still enigmatic astrophysical sources that are responsible for the production of neutron-rich nuclei by the rapid neutron-capture process (r-process). Supernovae and merging neutron-star (NS) or NS-black hole binaries are considered as most likely sources of the r-nuclei. But arguments in favour of one or the other or both are indirect and make use of assumptions; they are based on theoretical models with remaining simplifications and shortcomings. An unambiguous observational determination of a production event is still missing. In order to facilitate searches in this direction, e.g. by looking for radioactive tracers in stellar envelopes, the interstellar medium or terrestrial reservoirs, we provide improved theoretical estimates and corresponding uncertainty ranges for the actinide production (232Th, 235, 236, 238U, 237Np, 244Pu, and 247Cm) in neutrino-driven winds of core-collapse supernovae. Since state-of-the-art supernova models do not yield r-process viable conditions - but still lack, for example, the effects of strong magnetic fields - we base our investigation on a simple analytical, Newtonian, adiabatic and steady-state wind model and consider the superposition of a large number of contributing components, whose nucleosynthesis-relevant parameters (mass weight, entropy, expansion time-scale, and neutron excess) are constrained by the assumption that the integrated wind nucleosynthesis closely reproduces the Solar system distribution of r-process elements. We also test the influence of uncertain nuclear physics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strom, Daniel J.; Joyce, Kevin E.; Maclellan, Jay A.
2012-04-17
In making low-level radioactivity measurements of populations, it is commonly observed that a substantial portion of net results are negative. Furthermore, the observed variance of the measurement results arises from a combination of measurement uncertainty and population variability. This paper presents a method for disaggregating measurement uncertainty from population variability to produce a probability density function (PDF) of possibly true results. To do this, simple, justifiable, and reasonable assumptions are made about the relationship of the measurements to the measurands (the 'true values'). The measurements are assumed to be unbiased, that is, that their average value is the average of the measurands. Using traditional estimates of each measurement's uncertainty to disaggregate population variability from measurement uncertainty, a PDF of measurands for the population is produced. Then, using Bayes's theorem, the same assumptions, and all the data from the population of individuals, a prior PDF is computed for each individual's measurand. These PDFs are non-negative, and their average is equal to the average of the measurement results for the population. The uncertainty in these Bayesian posterior PDFs is all Berkson with no remaining classical component. The methods are applied to baseline bioassay data from the Hanford site. The data include 90Sr urinalysis measurements on 128 people, 137Cs in vivo measurements on 5,337 people, and 239Pu urinalysis measurements on 3,270 people. The method produces excellent results for the 90Sr and 137Cs measurements, since there are nonzero concentrations of these global fallout radionuclides in people who have not been occupationally exposed. The method does not work for the 239Pu measurements in non-occupationally exposed people because the population average is essentially zero.
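A stripped-down Gaussian analogue of the two steps described (subtract the average measurement variance from the observed variance of results, then shrink each result toward the population mean with a normal-normal Bayes step) is sketched below on toy data. Unlike the paper's approach, this simplified version does not constrain the measurand PDFs to be non-negative.

```python
import numpy as np

def disaggregate(results, sigmas):
    """Split observed variance into population variability and measurement noise,
    then shrink each result toward the population mean (normal-normal Bayes).
    Gaussian sketch only; the paper's method keeps the measurand PDFs non-negative."""
    results, sigmas = np.asarray(results), np.asarray(sigmas)
    mu = results.mean()
    pop_var = max(results.var(ddof=1) - np.mean(sigmas**2), 0.0)
    w = pop_var / (pop_var + sigmas**2)          # shrinkage weight per measurement
    post_mean = w * results + (1 - w) * mu
    post_var = w * sigmas**2
    return mu, pop_var, post_mean, post_var

# Toy bioassay-like data: many near-zero (some negative) net results with known uncertainties.
rng = np.random.default_rng(3)
true_vals = rng.gamma(shape=2.0, scale=0.5, size=200)
sigmas = np.full(200, 1.5)
measured = true_vals + rng.normal(0.0, sigmas)
print(disaggregate(measured, sigmas)[:2])       # population mean and variability estimate
```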
Andronis, Lazaros; Barton, Pelham M
2016-04-01
Value of information (VoI) calculations give the expected benefits of decision making under perfect information (EVPI) or sample information (EVSI), typically on the premise that any treatment recommendations made in light of this information will be implemented instantly and fully. This assumption is unlikely to hold in health care; evidence shows that obtaining further information typically leads to "improved" rather than "perfect" implementation. The objective of this work is to present a method of calculating the expected value of further research that accounts for the reality of improved implementation. This work extends an existing conceptual framework by introducing additional states of the world regarding information (sample information, in addition to current and perfect information) and implementation (improved implementation, in addition to current and optimal implementation). The extension allows calculating the "implementation-adjusted" EVSI (IA-EVSI), a measure that accounts for different degrees of implementation. Calculations of implementation-adjusted estimates are illustrated under different scenarios through a stylized case study in non-small cell lung cancer. In the particular case study, the population values for EVSI and IA-EVSI were £ 25 million and £ 8 million, respectively; thus, a decision assuming perfect implementation would have overestimated the expected value of research by about £ 17 million. IA-EVSI was driven by the assumed time horizon and, importantly, the specified rate of change in implementation: the higher the rate, the greater the IA-EVSI and the lower the difference between IA-EVSI and EVSI. Traditionally calculated measures of population VoI rely on unrealistic assumptions about implementation. This article provides a simple framework that accounts for improved, rather than perfect, implementation and offers more realistic estimates of the expected value of research. © The Author(s) 2015.
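The adjustment idea can be caricatured in a few lines: population value is the per-person value summed over discounted future patients, and the implementation-adjusted version additionally weights each year by the fraction of the implementation gap closed so far. The uptake curve, rates, and numbers below are assumptions for illustration, not the article's framework.

```python
import numpy as np

def population_value(per_person_value, incidence, horizon_years, discount=0.035,
                     uptake_rate=None):
    """Population EVSI if uptake_rate is None (instant, full implementation);
    otherwise an implementation-adjusted value where the share of patients who
    actually benefit rises as 1 - exp(-uptake_rate * t). A sketch of the idea only."""
    years = np.arange(1, horizon_years + 1)
    discounting = 1.0 / (1.0 + discount) ** years
    if uptake_rate is None:
        uptake = np.ones_like(years, dtype=float)      # instant, full implementation
    else:
        uptake = 1.0 - np.exp(-uptake_rate * years)    # gradual uptake
    return per_person_value * incidence * np.sum(discounting * uptake)

print(population_value(50.0, 30_000, 10))                   # "perfect implementation" value
print(population_value(50.0, 30_000, 10, uptake_rate=0.3))  # slower uptake -> smaller value
```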
Adali, Tülay; Levin-Schwartz, Yuri; Calhoun, Vince D.
2015-01-01
Fusion of information from multiple sets of data in order to extract a set of features that are most useful and relevant for the given task is inherent to many problems we deal with today. Since, usually, very little is known about the actual interaction among the datasets, it is highly desirable to minimize the underlying assumptions. This has been the main reason for the growing importance of data-driven methods, and in particular of independent component analysis (ICA), as it provides useful decompositions with a simple generative model and using only the assumption of statistical independence. A recent extension of ICA, independent vector analysis (IVA), generalizes ICA to multiple datasets by exploiting the statistical dependence across the datasets, and hence, as we discuss in this paper, provides an attractive solution to fusion of data from multiple datasets along with ICA. In this paper, we focus on two multivariate solutions for multi-modal data fusion that let multiple modalities fully interact for the estimation of underlying features that jointly report on all modalities. One solution is the Joint ICA model that has found wide application in medical imaging, and the second one is the Transposed IVA model introduced here as a generalization of an approach based on multi-set canonical correlation analysis. In the discussion, we emphasize the role of diversity in the decompositions achieved by these two models, and present their properties and implementation details to enable the user to make informed decisions on the selection of a model along with its associated parameters. Discussions are supported by simulation results to help highlight the main issues in the implementation of these methods. PMID:26525830
Zhang, Wengang; Douglas, Jack F; Starr, Francis W
2018-05-29
There is significant variation in the reported magnitude and even the sign of glass transition temperature (Tg) shifts in thin polymer films with nominally the same chemistry, film thickness, and supporting substrate. The implicit assumption is that methods used to estimate Tg in bulk materials are relevant for inferring dynamic changes in thin films. To test the validity of this assumption, we perform molecular simulations of a coarse-grained polymer melt supported on an attractive substrate. As observed in many experiments, we find that Tg based on thermodynamic criteria (temperature dependence of film height or enthalpy) decreases with decreasing film thickness, regardless of the polymer-substrate interaction strength ε. In contrast, we find that Tg based on a dynamic criterion (relaxation of the dynamic structure factor) also decreases with decreasing thickness when ε is relatively weak, but Tg increases when ε exceeds the polymer-polymer interaction strength. We show that these qualitatively different trends in Tg reflect differing sensitivities to the mobility gradient across the film. Apparently, the slowly relaxing polymer segments in the substrate region make the largest contribution to the shift of Tg in the dynamic measurement, but this part of the film contributes less to the thermodynamic estimate of Tg. Our results emphasize the limitations of using Tg to infer changes in the dynamics of polymer thin films. However, we show that the thermodynamic and dynamic estimates of Tg can be combined to predict local changes in Tg near the substrate, providing a simple method to infer information about the mobility gradient.
Goold, Conor; Newberry, Ruth C
2017-01-01
Studies of animal personality attempt to uncover underlying or "latent" personality traits that explain broad patterns of behaviour, often by applying latent variable statistical models (e.g., factor analysis) to multivariate data sets. Two integral, but infrequently confirmed, assumptions of latent variable models in animal personality are: i) behavioural variables are independent (i.e., uncorrelated) conditional on the latent personality traits they reflect (local independence), and ii) personality traits are associated with behavioural variables in the same way across individuals or groups of individuals (measurement invariance). We tested these assumptions using observations of aggression in four age classes (4-10 months, 10 months-3 years, 3-6 years, over 6 years) of male and female shelter dogs (N = 4,743) in 11 different contexts. A structural equation model supported the hypothesis of two positively correlated personality traits underlying aggression across contexts: aggressiveness towards people and aggressiveness towards dogs (comparative fit index: 0.96; Tucker-Lewis index: 0.95; root mean square error of approximation: 0.03). Aggression across contexts was moderately repeatable (towards people: intraclass correlation coefficient (ICC) = 0.479; towards dogs: ICC = 0.303). However, certain contexts related to aggressiveness towards people (but not dogs) shared significant residual relationships unaccounted for by latent levels of aggressiveness. Furthermore, aggressiveness towards people and dogs in different contexts interacted with sex and age. Thus, sex and age differences in displays of aggression were not simple functions of underlying aggressiveness. Our results illustrate that the robustness of traits in latent variable models must be critically assessed before making conclusions about the effects of, or factors influencing, animal personality. Our findings are of concern because inaccurate "aggressive personality" trait attributions can be costly to dogs, recipients of aggression and society in general.
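One simple way to probe the local-independence assumption outside a full structural equation model is to fit an exploratory two-factor model and inspect the residual covariances, as sketched below on placeholder data; large off-diagonal residuals flag context pairs that stay correlated after conditioning on the latent traits. This is a rough analogue of the diagnostic idea, not the authors' analysis.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical data: rows = dogs, columns = aggression scores in 11 contexts.
rng = np.random.default_rng(7)
scores = rng.normal(size=(1000, 11))

fa = FactorAnalysis(n_components=2).fit(scores)
loadings = fa.components_                                  # shape (2, 11)
implied_cov = loadings.T @ loadings + np.diag(fa.noise_variance_)
residual = np.cov(scores, rowvar=False) - implied_cov

# Large off-diagonal residuals indicate covariance left unexplained by the two factors,
# i.e. potential violations of local independence.
np.fill_diagonal(residual, 0.0)
print(np.max(np.abs(residual)))
```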
NASA Astrophysics Data System (ADS)
Benilov, E. S.
2018-05-01
This paper examines quasigeostrophic flows in an ocean that can be subdivided into an upper active layer (AL) and a lower passive layer (PL), with the flow and density stratification mainly confined to the former. Under this assumption, an asymptotic model is derived parameterizing the effect of the PL on the AL. The model depends only on the PL's depth, whereas its Väisälä-Brunt frequency turns out to be unimportant (as long as it is small). Under the additional assumption that the potential vorticity field in the PL is well-diffused and, thus, uniform, the derived model reduces to a simple boundary condition. This condition is to be applied at the AL/PL interface, after which the PL can be excluded from consideration.
ECOLOGICAL THEORY. A general consumer-resource population model.
Lafferty, Kevin D; DeLeo, Giulio; Briggs, Cheryl J; Dobson, Andrew P; Gross, Thilo; Kuris, Armand M
2015-08-21
Food-web dynamics arise from predator-prey, parasite-host, and herbivore-plant interactions. Models for such interactions include up to three consumer activity states (questing, attacking, consuming) and up to four resource response states (susceptible, exposed, ingested, resistant). Articulating these states into a general model allows for dissecting, comparing, and deriving consumer-resource models. We specify this general model for 11 generic consumer strategies that group mathematically into predators, parasites, and micropredators and then derive conditions for consumer success, including a universal saturating functional response. We further show how to use this framework to create simple models with a common mathematical lineage and transparent assumptions. Underlying assumptions, missing elements, and composite parameters are revealed when classic consumer-resource models are derived from the general model. Copyright © 2015, American Association for the Advancement of Science.
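A familiar special case that such a general framework recovers is a predator-prey model with a saturating (Holling type II) functional response; a minimal sketch with illustrative parameters follows.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rosenzweig_macarthur(t, y, r=1.0, K=10.0, a=0.5, h=0.3, e=0.6, m=0.4):
    """Resource R grows logistically; consumer C feeds with a saturating
    (Holling type II) functional response a*R/(1 + a*h*R). Parameters are illustrative."""
    R, C = y
    intake = a * R / (1.0 + a * h * R)
    dR = r * R * (1.0 - R / K) - intake * C
    dC = e * intake * C - m * C
    return [dR, dC]

sol = solve_ivp(rosenzweig_macarthur, t_span=(0.0, 100.0), y0=[5.0, 1.0], max_step=0.1)
print(sol.y[:, -1])   # resource and consumer densities at t = 100
```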
Validity criteria for Fermi’s golden rule scattering rates applied to metallic nanowires
NASA Astrophysics Data System (ADS)
Moors, Kristof; Sorée, Bart; Magnus, Wim
2016-09-01
Fermi’s golden rule underpins the investigation of mobile carriers propagating through various solids, being a standard tool to calculate their scattering rates. As such, it provides a perturbative estimate under the implicit assumption that the effect of the interaction Hamiltonian which causes the scattering events is sufficiently small. To check the validity of this assumption, we present a general framework to derive simple validity criteria in order to assess whether the scattering rates can be trusted for the system under consideration, given its statistical properties such as average size, electron density, impurity density et cetera. We derive concrete validity criteria for metallic nanowires with conduction electrons populating a single parabolic band subjected to different elastic scattering mechanisms: impurities, grain boundaries and surface roughness.
Testing the mean for dependent business data.
Liang, Jiajuan; Martin, Linda
2008-01-01
In business data analysis, it is well known that the comparison of several means is usually carried out by the F-test in analysis of variance under the assumption of independently collected data from all populations. This assumption, however, is likely to be violated in survey data collected from various questionnaires or time-series data. As a result, it is not justifiable, and indeed problematic, to apply the traditional F-test directly to the comparison of dependent means. In this article, we develop a generalized F-test for comparing population means with dependent data. Simulation studies show that the proposed test has a simple approximate null distribution and feasible finite-sample properties. Applications of the proposed test in analysis of survey data and time-series data are illustrated by two real datasets.
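To see why dependence matters, the simulation below applies the ordinary one-way F-test to groups that share a common component (as repeated measurements on the same respondents would); it illustrates the miscalibration the authors address, not their generalized test, and all settings are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n_sim, n, k = 2000, 30, 3
rejections = 0
for _ in range(n_sim):
    shared = rng.normal(size=n)                      # respondent effect common to all groups
    groups = [shared + rng.normal(scale=0.5, size=n) for _ in range(k)]
    if stats.f_oneway(*groups).pvalue < 0.05:        # classical test assumes independence
        rejections += 1
# All group means are equal, yet the empirical rejection rate is far from the nominal
# 0.05 level (here grossly conservative), so the classical F-test's size is wrong.
print(rejections / n_sim)
```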
ERIC Educational Resources Information Center
Brant, Jacek Wiktor
2015-01-01
In the wake of current world financial crisis serious efforts are being made to rethink the dominant economic assumptions. There is a growing movement in universities to make economics more relevant and to embrace an understanding of diverse models. Additionally, philosophical schools such as critical realism have provided new tools for thinking…
Data, Dyads, and Dynamics: Exploring Data Use and Social Networks in Educational Improvement
ERIC Educational Resources Information Center
Daly, Alan J.
2012-01-01
Background: In the past decade, there has been an increasing national policy push for educators to systematically collect, interpret, and use data for instructional decision making. The assumption by the federal government is that having data systems will be enough to prompt the use of data for a wide range of decision making. These policies rely…
NASA Astrophysics Data System (ADS)
Shonnard, David R.; Klemetsrud, Bethany; Sacramento-Rivero, Julio; Navarro-Pineda, Freddy; Hilbert, Jorge; Handler, Robert; Suppen, Nydia; Donovan, Richard P.
2015-12-01
Life-cycle assessment (LCA) has been applied to many biofuel and bioenergy systems to determine potential environmental impacts, but the conclusions have varied. Different methodologies and processes for conducting LCA of biofuels make the results difficult to compare, in turn making it difficult to reach the best possible and informed decision. Of particular importance is the wide variability in country-specific conditions, modeling assumptions, data quality, chosen impact categories and indicators, scale of production, system boundaries, and co-product allocation. This study has a double purpose: conducting a critical evaluation comparing environmental LCA of biofuels from several conversion pathways and in several countries in the Pan American region using both qualitative and quantitative analyses, and making recommendations for harmonization with respect to biofuel LCA study features, such as study assumptions, inventory data, impact indicators, and reporting practices. The environmental management implications are discussed within the context of different national and international regulatory environments using a case study. The results from this study highlight LCA methodology choices that cause high variability in results and limit comparability among different studies, even for the same biofuel pathway, and recommendations are provided for improvement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foreman-Mackey, Daniel; Hogg, David W.; Morton, Timothy D., E-mail: danfm@nyu.edu
No true extrasolar Earth analog is known. Hundreds of planets have been found around Sun-like stars that are either Earth-sized but on shorter periods, or else on year-long orbits but somewhat larger. Under strong assumptions, exoplanet catalogs have been used to make an extrapolated estimate of the rate at which Sun-like stars host Earth analogs. These studies are complicated by the fact that every catalog is censored by non-trivial selection effects and detection efficiencies, and every property (period, radius, etc.) is measured noisily. Here we present a general hierarchical probabilistic framework for making justified inferences about the population of exoplanets, taking into account survey completeness and, for the first time, observational uncertainties. We are able to make fewer assumptions about the distribution than previous studies; we only require that the occurrence rate density be a smooth function of period and radius (employing a Gaussian process). By applying our method to synthetic catalogs, we demonstrate that it produces more accurate estimates of the whole population than standard procedures based on weighting by inverse detection efficiency. We apply the method to an existing catalog of small planet candidates around G dwarf stars. We confirm a previous result that the radius distribution changes slope near Earth's radius. We find that the rate density of Earth analogs is about 0.02 (per star per natural logarithmic bin in period and radius) with large uncertainty. This number is much smaller than previous estimates made with the same data but stronger assumptions.
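As a point of reference, the standard procedure the hierarchical framework is compared against, weighting detected planets by inverse detection efficiency, reduces to a one-line estimate; the completeness values and star count below are illustrative placeholders, not Kepler numbers.

```python
# Occurrence rate from inverse-detection-efficiency weighting: each detected
# planet counts as 1/completeness planets, divided by the number of stars searched.
import numpy as np

completeness_of_detections = np.array([0.80, 0.35, 0.12, 0.05])  # per detected planet
n_stars_surveyed = 100_000

occurrence_rate = np.sum(1.0 / completeness_of_detections) / n_stars_surveyed
print(f"{occurrence_rate:.5f} planets per star in this (period, radius) bin")
```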
The time-dependent response of 3- and 5-layer sandwich beams
NASA Technical Reports Server (NTRS)
Hyer, M. W.; Oleksuk, L. S. S.; Bowles, D. E.
1992-01-01
Simple sandwich beam models have been developed to study the effect of the time-dependent constitutive properties of fiber-reinforced polymer matrix composites, considered for use in orbiting precision segmented reflectors, on the overall deformations. The 3- and 5-layer beam models include layers representing the face sheets, the core, and the adhesive. The static elastic deformation response of the sandwich beam models to a midspan point load is studied using the principle of stationary potential energy. In addition to quantitative conclusions, several assumptions are discussed which simplify the analysis for the case of more complicated material models. It is shown that the simple three-layer model is sufficient in many situations.
Locomotion of C. elegans: A Piecewise-Harmonic Curvature Representation of Nematode Behavior
Padmanabhan, Venkat; Khan, Zeina S.; Solomon, Deepak E.; Armstrong, Andrew; Rumbaugh, Kendra P.; Vanapalli, Siva A.; Blawzdziewicz, Jerzy
2012-01-01
Caenorhabditis elegans, a free-living soil nematode, displays a rich variety of body shapes and trajectories during its undulatory locomotion in complex environments. Here we show that the individual body postures and entire trails of C. elegans have a simple analytical description in curvature representation. Our model is based on the assumption that the curvature wave is generated in the head segment of the worm body and propagates backwards. We have found that a simple harmonic function for the curvature can capture multiple worm shapes during the undulatory movement. The worm body trajectories can be well represented in terms of piecewise sinusoidal curvature with abrupt changes in amplitude, wavevector, and phase. PMID:22792224
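A minimal sketch of the curvature representation described: given a harmonic curvature profile along the body, integrating it once gives the tangent angle and twice gives the (x, y) body shape. Amplitude, wavevector, and phase below are illustrative, not fitted worm parameters.

```python
# Reconstruct a body shape from a harmonic curvature profile kappa(s) = A*sin(q*s + phi)
# by integrating curvature along arclength (crude cumulative sums for brevity).
import numpy as np

def body_shape_from_curvature(amplitude, wavevector, phase, length=1.0, n=200):
    s = np.linspace(0.0, length, n)
    kappa = amplitude * np.sin(wavevector * s + phase)
    ds = s[1] - s[0]
    theta = np.cumsum(kappa) * ds            # tangent angle: d(theta)/ds = kappa
    x = np.cumsum(np.cos(theta)) * ds        # dx/ds = cos(theta)
    y = np.cumsum(np.sin(theta)) * ds        # dy/ds = sin(theta)
    return x, y

x, y = body_shape_from_curvature(amplitude=8.0, wavevector=2 * np.pi / 0.65, phase=0.0)
print(x[-1], y[-1])                          # head-to-tail displacement of the shape
```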
Anharmonic effects in simple physical models: introducing undergraduates to nonlinearity
NASA Astrophysics Data System (ADS)
Christian, J. M.
2017-09-01
Given the pervasive character of nonlinearity throughout the physical universe, a case is made for introducing undergraduate students to its consequences and signatures earlier rather than later. The dynamics of two well-known systems—a spring and a pendulum—are reviewed when the standard textbook linearising assumptions are relaxed. Some qualitative effects of nonlinearity can be anticipated from symmetry (e.g., inspection of potential energy functions), and further physical insight gained by applying a simple successive-approximation method that might be taught in parallel with courses on classical mechanics, ordinary differential equations, and computational physics. We conclude with a survey of how these ideas have been deployed on programmes at a UK university.
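One signature of relaxing the linearising assumption can be shown in a few lines: the full pendulum equation is integrated numerically and its period, which grows with amplitude, is compared with the amplitude-independent small-angle prediction.

```python
# The full pendulum equation theta'' = -(g/L)*sin(theta), integrated numerically;
# the period is read off from the first zero crossing (a quarter period) and
# compared with the small-angle result 2*pi*sqrt(L/g).
import numpy as np
from scipy.integrate import solve_ivp

g, L = 9.81, 1.0

def pendulum(t, y):
    theta, omega = y
    return [omega, -(g / L) * np.sin(theta)]

def period(theta0):
    sol = solve_ivp(pendulum, (0.0, 5.0), [theta0, 0.0], max_step=1e-3)
    i = np.argmax(sol.y[0] <= 0.0)        # first zero crossing occurs at T/4
    return 4.0 * sol.t[i]

linear_period = 2.0 * np.pi * np.sqrt(L / g)
for amplitude in (0.1, 1.0, 2.0):         # radians
    print(f"amplitude {amplitude:.1f} rad: T = {period(amplitude):.3f} s "
          f"(small-angle: {linear_period:.3f} s)")
```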
Simple linear and multivariate regression models.
Rodríguez del Águila, M M; Benítez-Parejo, N
2011-01-01
In biomedical research it is common to find problems in which we wish to relate a response variable to one or more variables capable of describing the behaviour of the former variable by means of mathematical models. Regression techniques are used to this effect, in which an equation is determined relating the two variables. While such equations can have different forms, linear equations are the most widely used form and are easy to interpret. The present article describes simple and multiple linear regression models, how they are calculated, and how their applicability assumptions are checked. Illustrative examples are provided, based on the use of the freely accessible R program. Copyright © 2011 SEICAP. Published by Elsevier Espana. All rights reserved.

A simple threshold rule is sufficient to explain sophisticated collective decision-making.
Robinson, Elva J H; Franks, Nigel R; Ellis, Samuel; Okuda, Saki; Marshall, James A R
2011-01-01
Decision-making animals can use slow-but-accurate strategies, such as making multiple comparisons, or opt for simpler, faster strategies to find a 'good enough' option. Social animals make collective decisions about many group behaviours including foraging and migration. The key to the collective choice lies with individual behaviour. We present a case study of a collective decision-making process (house-hunting ants, Temnothorax albipennis), in which a previously proposed decision strategy involved both quality-dependent hesitancy and direct comparisons of nests by scouts. An alternative possible decision strategy is that scouting ants use a very simple quality-dependent threshold rule to decide whether to recruit nest-mates to a new site or search for alternatives. We use analytical and simulation modelling to demonstrate that this simple rule is sufficient to explain empirical patterns from three studies of collective decision-making in ants, and can account parsimoniously for apparent comparison by individuals and apparent hesitancy (recruitment latency) effects, when available nests differ strongly in quality. This highlights the need to carefully design experiments to detect individual comparison. We present empirical data strongly suggesting that best-of-n comparison is not used by individual ants, although individual sequential comparisons are not ruled out. However, by using a simple threshold rule, decision-making groups are able to effectively compare options, without relying on any form of direct comparison of alternatives by individuals. This parsimonious mechanism could promote collective rationality in group decision-making.
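A toy simulation, not the authors' analytical model, of the quality-dependent threshold rule: each scout assesses a single nest and recruits if its noisy quality estimate exceeds a threshold; no individual compares nests, yet the better nest typically gathers more recruiters.

```python
# Toy illustration of a simple quality-dependent threshold rule for recruitment.
import numpy as np

rng = np.random.default_rng(1)

def recruits(true_quality, n_scouts=100, threshold=0.5, assessment_noise=0.2):
    perceived = true_quality + rng.normal(0.0, assessment_noise, size=n_scouts)
    return int(np.sum(perceived > threshold))   # scouts that recruit to this nest

good_nest, mediocre_nest = 0.8, 0.55
print("recruiters (good nest):    ", recruits(good_nest))
print("recruiters (mediocre nest):", recruits(mediocre_nest))
```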
Jit, Mark; Bilcke, Joke; Mangen, Marie-Josée J; Salo, Heini; Melliez, Hugues; Edmunds, W John; Yazdanpanah, Yazdan; Beutels, Philippe
2009-10-19
Cost-effectiveness analyses are usually not directly comparable between countries because of differences in analytical and modelling assumptions. We investigated the cost-effectiveness of rotavirus vaccination in five European Union countries (Belgium, England and Wales, Finland, France and the Netherlands) using a single model, burden of disease estimates supplied by national public health agencies and a subset of common assumptions. Under base case assumptions (vaccination with Rotarix, 3% discount rate, health care provider perspective, no herd immunity and quality of life of one caregiver affected by a rotavirus episode) and a cost-effectiveness threshold of €30,000, vaccination is likely to be cost effective in Finland only. However, single changes to assumptions may make it cost effective in Belgium and the Netherlands. The estimated threshold price per dose for Rotarix (excluding administration costs) to be cost effective was €41 in Belgium, €28 in England and Wales, €51 in Finland, €36 in France and €46 in the Netherlands.
A Simple, Analytical Model of Collisionless Magnetic Reconnection in a Pair Plasma
NASA Technical Reports Server (NTRS)
Hesse, Michael; Zenitani, Seiji; Kuznetova, Masha; Klimas, Alex
2011-01-01
A set of conservation equations is utilized to derive balance equations in the reconnection diffusion region of a symmetric pair plasma. The reconnection electric field is assumed to have the function to maintain the current density in the diffusion region, and to impart thermal energy to the plasma by means of quasi-viscous dissipation. Using these assumptions it is possible to derive a simple set of equations for diffusion region parameters in dependence on inflow conditions and on plasma compressibility. These equations are solved by means of a simple, iterative procedure. The solutions show expected features such as dominance of enthalpy flux in the reconnection outflow, as well as combination of adiabatic and quasi-viscous heating. Furthermore, the model predicts a maximum reconnection electric field of E* = 0.4, normalized to the parameters at the inflow edge of the diffusion region.
A simple, analytical model of collisionless magnetic reconnection in a pair plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hesse, Michael; Zenitani, Seiji; Kuznetsova, Masha
2009-10-15
A set of conservation equations is utilized to derive balance equations in the reconnection diffusion region of a symmetric pair plasma. The reconnection electric field is assumed to have the function to maintain the current density in the diffusion region and to impart thermal energy to the plasma by means of quasiviscous dissipation. Using these assumptions it is possible to derive a simple set of equations for diffusion region parameters in dependence on inflow conditions and on plasma compressibility. These equations are solved by means of a simple, iterative procedure. The solutions show expected features such as dominance of enthalpy flux in the reconnection outflow, as well as combination of adiabatic and quasiviscous heating. Furthermore, the model predicts a maximum reconnection electric field of E* = 0.4, normalized to the parameters at the inflow edge of the diffusion region.
77 FR 76604 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-28
... currently approved collection. Title: Form 5304-SIMPLE; Form 5305-SIMPLE; Notice 98-4. Abstract: Forms 5304-SIMPLE and 5305-SIMPLE are used by an employer to permit employees to make salary reduction contributions to a savings incentive match plan (SIMPLE IRA) described in Code section 408(p). These forms are not...
FMRI group analysis combining effect estimates and their variances
Chen, Gang; Saad, Ziad S.; Nath, Audrey R.; Beauchamp, Michael S.; Cox, Robert W.
2012-01-01
Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject are condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an approach that does not make such strong assumptions, and present a computationally efficient frequentist approach to FMRI group analysis, which we term mixed-effects multilevel analysis (MEMA), that incorporates both the variability across subjects and the precision estimate of each effect of interest from individual subject analyses. On average, the more accurate tests result in higher statistical power, especially when conventional variance assumptions do not hold, or in the presence of outliers. In addition, various heterogeneity measures are available with MEMA that may assist the investigator in further improving the modeling. Our method allows group effect t-tests and comparisons among conditions and among groups. In addition, it has the capability to incorporate subject-specific covariates such as age, IQ, or behavioral data. Simulations were performed to illustrate power comparisons and the capability of controlling type I errors among various significance testing methods, and the results indicated that the testing statistic we adopted struck a good balance between power gain and type I error control. Our approach is instantiated in an open-source, freely distributed program that may be used on any dataset stored in the Neuroimaging Informatics Technology Initiative (NIfTI) format. To date, the main impediment for more accurate testing that incorporates both within- and cross-subject variability has been the high computational cost. Our efficient implementation makes this approach practical. We recommend its use in lieu of the less accurate approach in the conventional group analysis. PMID:22245637
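The core idea of combining per-subject effect estimates with their variances can be illustrated, per voxel, with the standard inverse-variance random-effects (DerSimonian-Laird) estimator; this is a textbook sketch, not the MEMA implementation itself.

```python
# Per-voxel illustration of combining effect estimates with their within-subject
# variances using the standard DerSimonian-Laird random-effects estimator.
import numpy as np

def random_effects_mean(effects, within_var):
    effects = np.asarray(effects, float)
    within_var = np.asarray(within_var, float)
    w = 1.0 / within_var
    fixed = np.sum(w * effects) / np.sum(w)          # fixed-effects mean
    q = np.sum(w * (effects - fixed) ** 2)           # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                    # cross-subject variance estimate
    w_star = 1.0 / (within_var + tau2)
    mean = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return mean, se, mean / se                       # estimate, std. error, t-like statistic

print(random_effects_mean([1.2, 0.8, 1.5, 0.3, 2.0], [0.2, 0.5, 0.1, 0.4, 0.3]))
```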
ERIC Educational Resources Information Center
American Vocational Association, Alexandria, VA.
This document is a practical guide to demonstrating the value of school-to-careers preparation for all students and to debunking outdated stereotypes and false assumptions surrounding school-to-careers and vocational education programs. Part 1 explains the importance of political and policy advocacy in public education and outlines strategies for…
48 CFR 52.211-8 - Time of Delivery.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Delivery. As prescribed in 11.404(a)(2), insert the following clause: Time of Delivery (JUN 1997) (a) The... assumption that the Government will make award by __ [Contracting Officer insert date]. Each delivery date in...
48 CFR 52.211-8 - Time of Delivery.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Delivery. As prescribed in 11.404(a)(2), insert the following clause: Time of Delivery (JUN 1997) (a) The... assumption that the Government will make award by __ [Contracting Officer insert date]. Each delivery date in...
48 CFR 52.211-8 - Time of Delivery.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Delivery. As prescribed in 11.404(a)(2), insert the following clause: Time of Delivery (JUN 1997) (a) The... assumption that the Government will make award by __ [Contracting Officer insert date]. Each delivery date in...
48 CFR 52.211-8 - Time of Delivery.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Delivery. As prescribed in 11.404(a)(2), insert the following clause: Time of Delivery (JUN 1997) (a) The... assumption that the Government will make award by __ [Contracting Officer insert date]. Each delivery date in...
48 CFR 52.211-8 - Time of Delivery.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Delivery. As prescribed in 11.404(a)(2), insert the following clause: Time of Delivery (JUN 1997) (a) The... assumption that the Government will make award by __ [Contracting Officer insert date]. Each delivery date in...
Systematic Assessment of the Impact of User Roles on Network Flow Patterns
2017-09-01
... and evaluating users based on roles provide the best approach for defining normal digital behaviors? People are individuals, with different interests... activities on the network. We evaluate the assumption that users sharing similar roles exhibit similar network behaviors, and contrast the level of similarity
NASA Technical Reports Server (NTRS)
Yoshikawa, K. K.
1978-01-01
The semiclassical transition probability was incorporated in the simulation for energy exchange between rotational and translational energy. The results provide details on the fundamental mechanisms of gas kinetics where analytical methods were impractical. The validity of the local Maxwellian assumption and relaxation time, rotational-translational energy transition, and a velocity analysis of the inelastic collision were discussed in detail.
Risk and value analysis of SETI
NASA Technical Reports Server (NTRS)
Billingham, J.
1990-01-01
This paper attempts to apply a traditional risk and value analysis to the Search for Extraterrestrial Intelligence--SETI. In view of the difficulties of assessing the probability of success, a comparison is made between SETI and a previous search for extraterrestrial life, the biological component of Project Viking. Our application of simple Utility Theory, given some reasonable assumptions, suggests that SETI is at least as worthwhile as the biological experiment on Viking.
Recombination-generation currents in degenerate semiconductors
NASA Technical Reports Server (NTRS)
Von Roos, O.
1978-01-01
The classical Shockley-Read-Hall theory of free carrier recombination and generation via traps is extended to degenerate semiconductors. A concise and simple expression is found which avoids completely the concept of a Fermi level, a concept which is alien to nonequilibrium situations. Assumptions made in deriving the recombination generation current are carefully delineated and are found to be basically identical to those made in the original theory applicable to nondegenerate semiconductors.
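For reference, the classical nondegenerate Shockley-Read-Hall net recombination rate that the paper generalizes is, in its standard textbook form (quoted here for orientation only):

```latex
% Classical (nondegenerate) Shockley-Read-Hall net recombination rate via a
% single trap level E_t; E_i is the intrinsic level, \tau_n and \tau_p the
% capture lifetimes.
U_{\mathrm{SRH}} = \frac{p n - n_i^{2}}
                        {\tau_p \,(n + n_1) + \tau_n \,(p + p_1)},
\qquad
n_1 = n_i \, e^{(E_t - E_i)/kT},
\qquad
p_1 = n_i \, e^{-(E_t - E_i)/kT}.
```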
Does faint galaxy clustering contradict gravitational instability?
NASA Technical Reports Server (NTRS)
Melott, Adrian L.
1992-01-01
It has been argued, based on the weakness of clustering of faint galaxies, that these objects cannot be the precursors of present galaxies in a simple Einstein-de Sitter model universe with clustering driven by gravitational instability. It is shown that the assumptions made about the growth of clustering were too restrictive. In such a universe, the growth of clustering can easily be fast enough to match the data.
Modeling of air pollution from the power plant ash dumps
NASA Astrophysics Data System (ADS)
Aleksic, Nenad M.; Balać, Nedeljko
A simple model of air pollution from power plant ash dumps is presented, with emission rates calculated from the Bagnold formula and transport simulated by the ATDL-type model. Moisture effects are accounted for by the assumption that there is no pollution on rain days. Annual mean daily sedimentation rates, calculated for the area around the 'Nikola Tesla' power plants near Belgrade for 1987, show reasonably good agreement with observations.
Valuation of financial models with non-linear state spaces
NASA Astrophysics Data System (ADS)
Webber, Nick
2001-02-01
A common assumption in valuation models for derivative securities is that the underlying state variables take values in a linear state space. We discuss numerical implementation issues in an interest rate model with a simple non-linear state space, formulating and comparing Monte Carlo, finite difference and lattice numerical solution methods. We conclude that, at least in low dimensional spaces, non-linear interest rate models may be viable.
Differential equations in airplane mechanics
NASA Technical Reports Server (NTRS)
Carleman, M T
1922-01-01
In the following report, we will first draw some conclusions of purely theoretical interest, from the general equations of motion. At the end, we will consider the motion of an airplane, with the engine dead and with the assumption that the angle of attack remains constant. Thus we arrive at a simple result, which can be rendered practically utilizable for determining the trajectory of an airplane descending at a constant steering angle.
Technical Report for the Period 1 October 1987 - 30 September 1989
1990-03-01
... low pass filter results. -dt dt specifies the sampling rate in seconds. -gin specifies .w file (binary waveform data) input. -gout specifies .w file... waves arriving at moderate incidence angles; high signal-to-noise ratio (SNR). The following assumptions are made for simplicity: additive... spatially uncorrelated noise; a simple signal model, free of refraction and scattering effects. This study is limited to the case of a plane incident P
Microscopically derived potential energy surfaces from mostly structural considerations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ermamatov, M.J.; Institute of Nuclear Physics, Ulughbek, Tashkent 100214; Hess, Peter O., E-mail: hess@nucleares.unam.mx
2016-08-15
A simple procedure to estimate the quadrupole Potential-Energy-Surface (PES) is presented, using mainly structural information, namely the content of the shell model space and the Pauli exclusion principle. Further microscopic properties are implicitly contained through the use of results from the Möller and Nix tables or experimental information. A mapping to the geometric potential is performed, yielding the PES. The General Collective Model is used in order to obtain an estimate of the spectrum and quadrupole transitions, adjusting only the mass parameter. First, we test the conjecture on known nuclei, deriving the PES and comparing them to known data. We will see that the PES approximates very well the structure expected. Having acquired a certain confidence, we predict the PES of several chains of isotopes of heavy and super-heavy nuclei, and at the end we investigate the structure of nuclei in the supposed island of stability. One of the main points to show is that simple assumptions can already provide important information on the structure of nuclei outside known regions and that spectra and electromagnetic transitions can be estimated without using involved calculations and assumptions. The procedure does not allow the calculation of binding energies. The method presented can be viewed as a starting point for further improvements.
Climate system properties determining the social cost of carbon
NASA Astrophysics Data System (ADS)
Otto, Alexander; Todd, Benjamin J.; Bowerman, Niel; Frame, David J.; Allen, Myles R.
2013-06-01
The choice of an appropriate scientific target to guide global mitigation efforts is complicated by uncertainties in the temperature response to greenhouse gas emissions. Much climate policy discourse has been based on the equilibrium global mean temperature increase following a concentration stabilization scenario. This is determined by the equilibrium climate sensitivity (ECS) which, in many studies, shows persistent, fat-tailed uncertainty. However, for many purposes, the equilibrium response is less relevant than the transient response. Here, we show that one prominent policy variable, the social cost of carbon (SCC), is generally better constrained by the transient climate response (TCR) than by the ECS. Simple analytic expressions show the SCC to be directly proportional to the TCR under idealized assumptions when the rate at which we discount future damage equals 2.8%. Using ensemble simulations of a simple climate model we find that knowing the true value of the TCR can reduce the relative uncertainty in the SCC substantially more, up to a factor of 3, than knowing the ECS under typical discounting assumptions. We conclude that the TCR, which is better constrained by observations, less subject to fat-tailed uncertainty and more directly related to the SCC, is generally preferable to the ECS as a single proxy for the climate response in SCC calculations.
An eco-hydrologic model of malaria outbreaks
NASA Astrophysics Data System (ADS)
Montosi, E.; Manzoni, S.; Porporato, A.; Montanari, A.
2012-03-01
Malaria is a geographically widespread infectious disease that is well known to be affected by climate variability at both seasonal and interannual timescales. In an effort to identify climatic factors that impact malaria dynamics, there has been considerable research focused on the development of appropriate disease models for malaria transmission and their consideration alongside climatic datasets. These analyses have focused largely on variation in temperature and rainfall as direct climatic drivers of malaria dynamics. Here, we further these efforts by considering additionally the role that soil water content may play in driving malaria incidence. Specifically, we hypothesize that hydro-climatic variability should be an important factor in controlling the availability of mosquito habitats, thereby governing mosquito growth rates. To test this hypothesis, we reduce a nonlinear eco-hydrologic model to a simple linear model through a series of consecutive assumptions and apply this model to malaria incidence data from three South African provinces. Despite the assumptions made in the reduction of the model, we show that soil water content can account for a significant portion of malaria's case variability beyond its seasonal patterns, whereas neither temperature nor rainfall alone can do so. Future work should therefore consider soil water content as a simple and computable variable for incorporation into climate-driven disease models of malaria and other vector-borne infectious diseases.
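A minimal sketch, with synthetic data and a hypothetical two-month lag, of the kind of reduced linear model described: regress deseasonalized malaria incidence anomalies on lagged soil water content anomalies.

```python
# Synthetic illustration (hypothetical 2-month lag): fit a simple linear model of
# deseasonalized malaria incidence anomalies against lagged soil water anomalies.
import numpy as np

rng = np.random.default_rng(2)
months = np.arange(240)
soil_water = (0.3 + 0.1 * np.sin(2 * np.pi * months / 12)
              + 0.05 * rng.standard_normal(240))
incidence = 50 + 80 * np.roll(soil_water, 2) + 5 * rng.standard_normal(240)

def anomalies(x):
    monthly_means = np.array([x[m::12].mean() for m in range(12)])
    return x - monthly_means[months % 12]            # remove the seasonal cycle

y = anomalies(incidence)
x = anomalies(np.roll(soil_water, 2))                # lagged soil water content
slope, intercept = np.polyfit(x, y, 1)
r = np.corrcoef(x, y)[0, 1]
print(f"slope = {slope:.1f} cases per unit soil water, r^2 = {r ** 2:.2f}")
```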
An ecohydrological model of malaria outbreaks
NASA Astrophysics Data System (ADS)
Montosi, E.; Manzoni, S.; Porporato, A.; Montanari, A.
2012-08-01
Malaria is a geographically widespread infectious disease that is well known to be affected by climate variability at both seasonal and interannual timescales. In an effort to identify climatic factors that impact malaria dynamics, there has been considerable research focused on the development of appropriate disease models for malaria transmission driven by climatic time series. These analyses have focused largely on variation in temperature and rainfall as direct climatic drivers of malaria dynamics. Here, we further these efforts by considering additionally the role that soil water content may play in driving malaria incidence. Specifically, we hypothesize that hydro-climatic variability should be an important factor in controlling the availability of mosquito habitats, thereby governing mosquito growth rates. To test this hypothesis, we reduce a nonlinear ecohydrological model to a simple linear model through a series of consecutive assumptions and apply this model to malaria incidence data from three South African provinces. Despite the assumptions made in the reduction of the model, we show that soil water content can account for a significant portion of malaria's case variability beyond its seasonal patterns, whereas neither temperature nor rainfall alone can do so. Future work should therefore consider soil water content as a simple and computable variable for incorporation into climate-driven disease models of malaria and other vector-borne infectious diseases.
Ductile Fracture Initiation of Anisotropic Metal Sheets
NASA Astrophysics Data System (ADS)
Dong, Liang; Li, Shuhui; He, Ji
2017-07-01
The objective of this research is to investigate the influence of material plastic anisotropy on ductile fracture in the strain space, under the assumption of a plane stress state for sheet metals. For convenient application, a simple expression is formulated by the method of total strain theory under the assumption of proportional loading. The Hill 1948 quadratic anisotropic yield model and an isotropic hardening flow rule are adopted to describe the plastic response of the material. The Mohr-Coulomb model is revisited to describe the ductile fracture in the stress space. In addition, the fracture locus for DP590 in different loading directions is obtained by experiments. Four types of tensile test specimens, including classical dog-bone, flat with cutouts, flat with center holes, and pure shear, are tested to fracture. All specimens are prepared with their longitudinal axes inclined at 0°, 45°, and 90° to the rolling direction. A 3D digital image correlation system is used in this study to measure the anisotropy parameters r_0, r_45, and r_90 and the equivalent strains to fracture for all the tests. The results show that material plastic anisotropy has a remarkable influence on the fracture locus in the strain space, and that the locus can be predicted accurately by the simple expression proposed in this study.
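A hedged sketch of the Hill 1948 plane-stress equivalent stress used in such studies, with coefficients obtained from the measured r-values via the standard relations r_0 = H/G, r_90 = H/F, and r_45 = N/(F+G) - 1/2, together with the normalization G + H = 1; conventions vary between papers, so treat this as illustrative rather than the authors' exact formulation.

```python
# Hill 1948 plane-stress equivalent stress from r-values (textbook convention:
# G + H = 1, so uniaxial tension along the rolling direction recovers the
# applied stress).
import math

def hill48_equivalent_stress(sx, sy, txy, r0, r45, r90):
    G = 1.0 / (1.0 + r0)
    H = r0 / (1.0 + r0)
    F = H / r90
    N = (r45 + 0.5) * (F + G)
    return math.sqrt(F * sy**2 + G * sx**2 + H * (sx - sy)**2 + 2.0 * N * txy**2)

# Uniaxial tension along the rolling direction: recovers the applied stress (MPa).
print(hill48_equivalent_stress(400.0, 0.0, 0.0, r0=0.9, r45=0.8, r90=1.1))
```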
Estimating population trends with a linear model: Technical comments
Sauer, John R.; Link, William A.; Royle, J. Andrew
2004-01-01
Controversy has sometimes arisen over whether there is a need to accommodate the limitations of survey design in estimating population change from the count data collected in bird surveys. Analyses of surveys such as the North American Breeding Bird Survey (BBS) can be quite complex; it is natural to ask if the complexity is necessary, or whether the statisticians have run amok. Bart et al. (2003) propose a very simple analysis involving nothing more complicated than simple linear regression, and contrast their approach with model-based procedures. We review the assumptions implicit to their proposed method, and document that these assumptions are unlikely to be valid for surveys such as the BBS. One fundamental limitation of a purely design-based approach is the absence of controls for factors that influence detection of birds at survey sites. We show that failure to model observer effects in survey data leads to substantial bias in estimation of population trends from BBS data for the 20 species that Bart et al. (2003) used as the basis of their simulations. Finally, we note that the simulations presented in Bart et al. (2003) do not provide a useful evaluation of their proposed method, nor do they provide a valid comparison to the estimating-equations alternative they consider.
NASA Astrophysics Data System (ADS)
Wübbeler, Gerd; Bodnar, Olha; Elster, Clemens
2018-02-01
Weighted least-squares estimation is commonly applied in metrology to fit models to measurements that are accompanied with quoted uncertainties. The weights are chosen in dependence on the quoted uncertainties. However, when data and model are inconsistent in view of the quoted uncertainties, this procedure does not yield adequate results. When it can be assumed that all uncertainties ought to be rescaled by a common factor, weighted least-squares estimation may still be used, provided that a simple correction of the uncertainty obtained for the estimated model is applied. We show that these uncertainties and credible intervals are robust, as they do not rely on the assumption of a Gaussian distribution of the data. Hence, common software for weighted least-squares estimation may still safely be employed in such a case, followed by a simple modification of the uncertainties obtained by that software. We also provide means of checking the assumptions of such an approach. The Bayesian regression procedure is applied to analyze the CODATA values for the Planck constant published over the past decades in terms of three different models: a constant model, a straight line model and a spline model. Our results indicate that the CODATA values may not have yet stabilized.
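A minimal sketch of one common reading of the "simple correction": fit by weighted least squares with weights from the quoted uncertainties, then rescale the parameter uncertainties by the Birge ratio sqrt(chi2/dof) when data and model are inconsistent. The paper's Bayesian treatment is not reproduced, and the data below are illustrative placeholders, not actual CODATA values.

```python
# Weighted least squares with quoted-uncertainty weights, followed by Birge-ratio
# rescaling of the parameter uncertainties when the fit is inconsistent.
import numpy as np

def wls_with_rescaling(x, y, u, degree=1):
    coeffs, cov = np.polyfit(x, y, degree, w=1.0 / u, cov="unscaled")
    residuals = y - np.polyval(coeffs, x)
    chi2 = np.sum((residuals / u) ** 2)
    dof = len(x) - (degree + 1)
    birge = np.sqrt(chi2 / dof)
    param_unc = np.sqrt(np.diag(cov)) * max(1.0, birge)   # rescale only if chi2/dof > 1
    return coeffs, param_unc, birge

x = np.array([2002.0, 2006.0, 2010.0, 2014.0, 2017.0])        # adjustment years
y = np.array([6.26068, 6.26069, 6.26070, 6.26071, 6.26070])   # fake "constant" values
u = np.array([3e-5, 3e-5, 2e-5, 1e-5, 1e-5])                  # quoted uncertainties
print(wls_with_rescaling(x, y, u))
```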
Sims, Tamara; Holmes, Tyson H.; Bravata, Dena M.; Garber, Alan M.; Nelson, Lorene M.; Goldstein, Mary K.
2008-01-01
Objective: Using unweighted counts of dependencies in Activities of Daily Living (ADLs) to assess the impact of functional impairment requires an assumption of equal preferences for each ADL dependency. To test this assumption, we analyzed standard gamble utilities of single and combination ADL dependencies among older adults. Study Design and Setting: Four hundred older adults used multimedia software (FLAIR) to report standard gamble utilities for their current health and for hypothetical health states of dependency in each of 7 ADLs and in 8 of 30 combinations of ADL dependencies. Results: Utilities for health states of multiple ADL dependencies were often greater than for states of single ADL dependencies. Dependence in eating, the ADL dependency with the lowest utility rating of the single ADL dependencies, ranked lower than 7 combination states. Similarly, some combination states with fewer ADL dependencies had lower utilities than those with more ADL dependencies. These findings were consistent across groups by gender, age, and education. Conclusion: Our results suggest that the count of ADL dependencies does not adequately represent the utility for a health state. Cost-effectiveness analyses and other evaluations of programs that prevent or treat functional dependency should apply utility weights rather than relying on simple ADL counts. PMID:18722749
Abril Hernández, José-María
2015-05-01
After half a century, the use of unsupported ²¹⁰Pb (²¹⁰Pbexc) is still far from being a well-established dating tool for recent sediments with widespread applicability. Recent results from the statistical analysis of time series of fluxes, mass sediment accumulation rates (SAR), and initial activities derived from varved sediments place serious constraints on the assumption of constant fluxes, which is widely used in dating models. The Sediment Isotope Tomography (SIT) model, under the assumption of no post-depositional redistribution, is used for dating recent sediments in scenarios in which fluxes and SAR are uncorrelated and both vary with time. Using a simple graphical analysis, this paper shows that under the above assumptions any given ²¹⁰Pbexc profile, even with the restriction of a discrete set of reference points, is compatible with an infinite number of chronological lines, and thus generates an infinite number of mathematically exact solutions for the histories of initial activity concentrations, SAR, and fluxes onto the sediment-water interface (SWI), with the latter two ranging from zero up to infinity. In particular, SIT results, without additional assumptions, cannot contain any statistically significant difference with respect to the exact solutions consisting of intervals of constant SAR or constant fluxes (both being consistent with the reference points). Therefore, there is no benefit in its use as a dating tool without the explicit introduction of additional restrictive assumptions about fluxes, SAR and/or their interrelationship. Copyright © 2015 Elsevier Ltd. All rights reserved.
Normal uniform mixture differential gene expression detection for cDNA microarrays
Dean, Nema; Raftery, Adrian E
2005-01-01
Background One of the primary tasks in analysing gene expression data is finding genes that are differentially expressed in different samples. Multiple testing issues due to the thousands of tests run make some of the more popular methods for doing this problematic. Results We propose a simple method, Normal Uniform Differential Gene Expression (NUDGE) detection for finding differentially expressed genes in cDNA microarrays. The method uses a simple univariate normal-uniform mixture model, in combination with new normalization methods for spread as well as mean that extend the lowess normalization of Dudoit, Yang, Callow and Speed (2002) [1]. It takes account of multiple testing, and gives probabilities of differential expression as part of its output. It can be applied to either single-slide or replicated experiments, and it is very fast. Three datasets are analyzed using NUDGE, and the results are compared to those given by other popular methods: unadjusted and Bonferroni-adjusted t tests, Significance Analysis of Microarrays (SAM), and Empirical Bayes for microarrays (EBarrays) with both Gamma-Gamma and Lognormal-Normal models. Conclusion The method gives a high probability of differential expression to genes known/suspected a priori to be differentially expressed and a low probability to the others. In terms of known false positives and false negatives, the method outperforms all multiple-replicate methods except for the Gamma-Gamma EBarrays method to which it offers comparable results with the added advantages of greater simplicity, speed, fewer assumptions and applicability to the single replicate case. An R package called nudge to implement the methods in this paper will be made available soon at . PMID:16011807
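A minimal EM sketch, not the nudge package itself, of the underlying idea: normalized log-ratios are modeled as a mixture of a normal component (non-differential genes, centred near zero) and a uniform component over the observed range (differential genes), and the posterior probability of the uniform component serves as the probability of differential expression.

```python
# Minimal EM for a two-component normal-uniform mixture over normalized log-ratios.
import numpy as np

def normal_uniform_em(z, n_iter=200):
    lo, hi = z.min(), z.max()
    unif_dens = 1.0 / (hi - lo)                      # flat density over the data range
    p, mu, sigma = 0.1, 0.0, z.std()                 # initial mixing weight, mean, sd
    for _ in range(n_iter):
        norm_dens = (np.exp(-0.5 * ((z - mu) / sigma) ** 2)
                     / (sigma * np.sqrt(2 * np.pi)))
        post = p * unif_dens / (p * unif_dens + (1 - p) * norm_dens)   # E-step
        p = post.mean()                                                # M-step
        mu = np.sum((1 - post) * z) / np.sum(1 - post)
        sigma = np.sqrt(np.sum((1 - post) * (z - mu) ** 2) / np.sum(1 - post))
    return post, p, mu, sigma

rng = np.random.default_rng(3)
z = np.concatenate([rng.normal(0, 0.3, 950), rng.uniform(-3, 3, 50)])
post, p, mu, sigma = normal_uniform_em(z)
print(f"estimated fraction of differentially expressed genes: {p:.3f}")
```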
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, Shih-Miao; Hwang, Ho-Ling
2007-01-01
This paper describes the development of national freight demand models for 27 industry sectors covered by the 2002 Commodity Flow Survey. It postulates that national freight demands are consistent with U.S. business patterns. Furthermore, the study hypothesizes that the flow of goods that makes up the national production processes of industries is coherent with the information described in the 2002 Annual Input-Output Accounts developed by the Bureau of Economic Analysis. The model estimation framework hinges largely on the assumption that a relatively simple relationship exists between freight production/consumption and business patterns for each industry defined by the three-digit North American Industry Classification System (NAICS) codes. The national freight demand model for each selected industry sector consists of two models: a freight generation model and a freight attraction model. Thus, a total of 54 simple regression models were estimated under this study. Preliminary results indicated promising freight generation and freight attraction models. Among all models, only four had an R2 value lower than 0.70. With additional modeling efforts, these freight demand models could be enhanced to allow transportation analysts to assess regional economic impacts associated with temporary loss of transportation services on U.S. transportation network infrastructures. Using such freight demand models and available U.S. business forecasts, future national freight demands could be forecast within certain degrees of accuracy. These freight demand models could also enable transportation analysts to further disaggregate the CFS state-level origin-destination tables to the county or ZIP code level.
NASA Astrophysics Data System (ADS)
van de Giesen, Nicolaas; Hut, Rolf; ten Veldhuis, Marie-claire
2017-04-01
If one can assume that drop size distributions can be effectively described by a generalized gamma function [1], one can estimate this function on the basis of the distribution of time intervals between drops hitting a certain area. The arrival of a single drop is relatively easy to measure with simple consumer devices such as cameras or piezoelectric elements. Here we present an open-hardware design for the electronics and statistical processing of an intervalometer that measures time intervals between drop arrivals. The specific hardware in this case is a piezoelectric element in an appropriate housing, combined with an instrumentation op-amp and an Arduino processor. Although it would not be too difficult to simply register the arrival times of all drops, it is more practical to only report the main statistics. For this purpose, all intervals below a certain threshold during a reporting interval are summed and counted. We also sum the scaled squares, cubes, and fourth powers of the intervals. On the basis of the first four moments, one can estimate the corresponding generalized gamma function and obtain some sense of the accuracy of the underlying assumptions. Special attention is needed to determine the lower threshold of the drop sizes that can be measured. This minimum size often varies over the area being monitored, such as is the case for piezoelectric elements. We describe a simple method to determine these (distributed) minimal drop sizes and present a bootstrap method to make the necessary corrections. Reference [1] Uijlenhoet, R., and J. N. M. Stricker. "A consistent rainfall parameterization based on the exponential raindrop size distribution." Journal of Hydrology 218, no. 3 (1999): 101-127.
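A sketch of the on-board statistics described, written here in Python rather than Arduino code: during each reporting interval, accumulate the count and the first four powers of the thresholded inter-arrival times, from which the moments needed for a method-of-moments fit follow. The cut-off value is a placeholder.

```python
# Accumulate count and the first four raw moments of drop inter-arrival times
# below a cut-off (gaps between showers are ignored), as reported per interval.
def interval_moments(intervals_s, max_interval_s=10.0):
    n = s1 = s2 = s3 = s4 = 0
    for dt in intervals_s:
        if dt < max_interval_s:
            n += 1
            s1 += dt
            s2 += dt ** 2
            s3 += dt ** 3
            s4 += dt ** 4
    if n == 0:
        return None
    return n, s1 / n, s2 / n, s3 / n, s4 / n   # count and first four raw moments

print(interval_moments([0.12, 0.05, 0.4, 2.2, 15.0, 0.09]))
```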
Floristic composition and across-track reflectance gradient in Landsat images over Amazonian forests
NASA Astrophysics Data System (ADS)
Muro, Javier; doninck, Jasper Van; Tuomisto, Hanna; Higgins, Mark A.; Moulatlet, Gabriel M.; Ruokolainen, Kalle
2016-09-01
Remotely sensed image interpretation or classification of tropical forests can be severely hampered by the effects of the bidirectional reflection distribution function (BRDF). Even for narrow swath sensors like Landsat TM/ETM+, the influence of reflectance anisotropy can be sufficiently strong to introduce a cross-track reflectance gradient. If the BRDF could be assumed to be linear for the limited swath of Landsat, it would be possible to remove this gradient during image preprocessing using a simple empirical method. However, the existence of natural gradients in reflectance caused by spatial variation in floristic composition of the forest can restrict the applicability of such simple corrections. Here we use floristic information over Peruvian and Brazilian Amazonia acquired through field surveys, complemented with information from geological maps, to investigate the interaction of real floristic gradients and the effect of reflectance anisotropy on the observed reflectances in Landsat data. In addition, we test the assumption of linearity of the BRDF for a limited swath width, and whether different primary non-inundated forest types are characterized by different magnitudes of the directional reflectance gradient. Our results show that a linear function is adequate to empirically correct for view angle effects, and that the magnitude of the across-track reflectance gradient is independent of floristic composition in the non-inundated forests we studied. This makes a routine correction of view angle effects possible. However, floristic variation complicates the issue, because different forest types have different mean reflectances. This must be taken into account when deriving the correction function in order to avoid eliminating natural gradients.
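A sketch of the simple empirical correction the results support: fit a linear function of cross-track position to the reflectance of homogeneous forest pixels and subtract the fitted gradient while keeping the scene mean. Array names, shapes, and the synthetic gradient are hypothetical.

```python
# Fit and remove a linear cross-track (view-angle) reflectance gradient.
import numpy as np

def remove_cross_track_gradient(reflectance, cross_track_pos, forest_mask):
    x = cross_track_pos[forest_mask]
    y = reflectance[forest_mask]
    slope, intercept = np.polyfit(x, y, 1)        # empirical linear BRDF gradient
    corrected = reflectance - slope * (cross_track_pos - x.mean())
    return corrected, slope

rng = np.random.default_rng(4)
cols = np.tile(np.arange(500.0), (200, 1))        # cross-track pixel index per row
refl = 0.05 + 4e-6 * cols + 0.001 * rng.standard_normal(cols.shape)
corrected, slope = remove_cross_track_gradient(refl, cols, np.ones_like(refl, dtype=bool))
print(f"estimated across-track gradient: {slope:.2e} reflectance per column")
```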
76 FR 56263 - Titles II and XVI: Documenting and Evaluating Disability in Young Adults
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-12
... carrying out simple instructions and work procedures during a school-sponsored work experience; Difficulty... instructions; Make simple work-related judgments typically required for unskilled work; Respond appropriately... ability to hear and understand simple oral messages, including instructions, and to communicate simple...
Right and left ventricular volumes in vitro by a new nongeometric method
NASA Technical Reports Server (NTRS)
Buckey, J. C.; Beattie, J. M.; Nixon, J. V.; Gaffney, F. A.; Blomqvist, C. G.
1987-01-01
We present an evaluation of a new nongeometric technique for calculating right and left ventricular volumes. This method calculates ventricular chamber volumes from multiple cross-sectional echocardiographic views taken from a single point as the echo beam is tilted progressively through the ventricle. Right and left ventricular volumes are calculated from both the approximate short axis and approximate apical position on 20 in vitro human hearts and compared with the actual chamber volumes. The results for both ventricles from both positions are excellent. Correlation coefficients are > 0.95 for all positions; the standard errors are in the range of 5 to 7 mL and the slopes and intercepts for the regression lines are not significantly different from 1 and 0, respectively (except for the left ventricular short-axis intercept). For all positions, approximately 6 to 8 views are needed for peak accuracy (7.5 degrees to 10 degrees separation). This approach offers several advantages. No geometric assumptions about ventricular shape are made. All images are acquired from a single point (or window), and the digitized points can be used to make a three-dimensional reconstruction of the ventricle. Also, during the calculations a volume distribution curve for the ventricle is produced. The shape of this curve can be characteristic for certain situations (ie, right ventricle, short axis) and can be used to make new simple equations for calculating volume. We conclude that this is an accurate nongeometric method for determining both right and left ventricular volumes in vitro.
Reinforcement learning and episodic memory in humans and animals: an integrative framework
Gershman, Samuel J.; Daw, Nathaniel D.
2018-01-01
We review the psychology and neuroscience of reinforcement learning (RL), which has witnessed significant progress in the last two decades, enabled by the comprehensive experimental study of simple learning and decision-making tasks. However, the simplicity of these tasks misses important aspects of reinforcement learning in the real world: (i) State spaces are high-dimensional, continuous, and partially observable; this implies that (ii) data are relatively sparse: indeed precisely the same situation may never be encountered twice; and also that (iii) rewards depend on long-term consequences of actions in ways that violate the classical assumptions that make RL tractable. A seemingly distinct challenge is that, cognitively, these theories have largely connected with procedural and semantic memory: how knowledge about action values or world models extracted gradually from many experiences can drive choice. This misses many aspects of memory related to traces of individual events, such as episodic memory. We suggest that these two gaps are related. In particular, the computational challenges can be dealt with, in part, by endowing RL systems with episodic memory, allowing them to (i) efficiently approximate value functions over complex state spaces, (ii) learn with very little data, and (iii) bridge long-term dependencies between actions and rewards. We review the computational theory underlying this proposal and the empirical evidence to support it. Our proposal suggests that the ubiquitous and diverse roles of memory in RL may function as part of an integrated learning system. PMID:27618944
NASA Astrophysics Data System (ADS)
Sanchez-Vila, X.; de Barros, F.; Bolster, D.; Nowak, W.
2010-12-01
Assessing the potential risk of hydro(geo)logical supply systems to human population is an interdisciplinary field. It relies on the expertise in fields as distant as hydrogeology, medicine, or anthropology, and needs powerful translation concepts to provide decision support and policy making. Reliable health risk estimates need to account for the uncertainties in hydrological, physiological and human behavioral parameters. We propose the use of fault trees to address the task of probabilistic risk analysis (PRA) and to support related management decisions. Fault trees allow decomposing the assessment of health risk into individual manageable modules, thus tackling a complex system by a structural “Divide and Conquer” approach. The complexity within each module can be chosen individually according to data availability, parsimony, relative importance and stage of analysis. The separation in modules allows for a true inter- and multi-disciplinary approach. This presentation highlights the three novel features of our work: (1) we define failure in terms of risk being above a threshold value, whereas previous studies used auxiliary events such as exceedance of critical concentration levels, (2) we plot an integrated fault tree that handles uncertainty in both hydrological and health components in a unified way, and (3) we introduce a new form of stochastic fault tree that allows to weaken the assumption of independent subsystems that is required by a classical fault tree approach. We illustrate our concept in a simple groundwater-related setting.
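For contrast, the classical fault-tree calculus under the independence assumption that the proposed stochastic fault tree weakens: AND gates multiply probabilities and OR gates combine them as 1 - prod(1 - p). Module names and probabilities below are hypothetical placeholders.

```python
# Classical fault-tree gates assuming independent basic events.
from functools import reduce

def and_gate(*p):
    return reduce(lambda a, b: a * b, p)

def or_gate(*p):
    return 1.0 - reduce(lambda a, b: a * (1.0 - b), p, 1.0)

p_contaminant_reaches_well = 0.05                 # hydrogeological module
p_exposure_given_supply = 0.4                     # behavioural module
p_dose_response_exceeded = 0.2                    # physiological module
p_no_treatment_barrier = or_gate(0.01, 0.03)      # either treatment barrier fails

p_risk_above_threshold = and_gate(p_contaminant_reaches_well,
                                  p_exposure_given_supply,
                                  p_dose_response_exceeded,
                                  p_no_treatment_barrier)
print(f"P(top event) = {p_risk_above_threshold:.5f}")
```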
A Mass Tracking Formulation for Bubbles in Incompressible Flow
2012-10-14
... incompressible flow to fully nonlinear compressible flow, including the effects of shocks and rarefactions, and then subsequently making a number of simplifying assumptions on the air flow... using the ideas from [19] to couple together incompressible flow with fully nonlinear compressible flow including shocks and rarefactions. The results...
Importance and pitfalls of molecular analysis to parasite epidemiology.
Constantine, Clare C
2003-08-01
Molecular tools are increasingly being used to address questions about parasite epidemiology. Parasites represent a diverse group and they might not fit traditional population genetic models. Testing hypotheses depends equally on correct sampling, appropriate tool and/or marker choice, appropriate analysis and careful interpretation. All methods of analysis make assumptions which, if violated, make the results invalid. Some guidelines to avoid common pitfalls are offered here.
On the Treatment of Fixed and Sunk Costs in the Principles Textbooks
ERIC Educational Resources Information Center
Colander, David
2004-01-01
The author argues that, although the standard principles level treatment of fixed and sunk costs has problems, it is logically consistent as long as all fixed costs are assumed to be sunk costs. As long as the instructor makes that assumption clear to students, the costs of making the changes recently suggested by X. Henry Wang and Bill Z. Yang in…
Testing the Intelligence of Unmanned Autonomous Systems
2008-01-01
decisions without the operator. The term autonomous is also used interchangeably with intelligent, giving rise to the name unmanned autonomous system (UAS)... For the purposes of this article, UAS describes an unmanned system that makes decisions based on gathered information. Because testers should not... make assumptions about the decision process within a UAS, there is a need for a methodology that completely tests this decision process without biasing
ERIC Educational Resources Information Center
Honda, Hidehito; Matsuka, Toshihiko; Ueda, Kazuhiro
2017-01-01
Some researchers on binary choice inference have argued that people make inferences based on simple heuristics, such as recognition, fluency, or familiarity. Others have argued that people make inferences based on available knowledge. To examine the boundary between heuristic and knowledge usage, we examine binary choice inference processes in…
NASA Astrophysics Data System (ADS)
Takamatsu, k.; Tanaka, h.; Shoji, d.
2012-04-01
The Fukushima Daiichi nuclear disaster is a series of equipment failures and nuclear meltdowns, following the Tōhoku earthquake and tsunami on 11 March 2011. We present a new method for visualizing nuclear reactors. Muon radiography based on the multiple Coulomb scattering of cosmic-ray muons has been performed. In this work, we discuss experimental results obtained with a cost-effective simple detection system assembled with three plastic scintillator strips. Specifically, we counted the number of muons that were not largely deflected by restricting the zenith angle in one direction to 0.8°. The system could discriminate Fe, Pb and C. Materials lighter than Pb can also be discriminated with this system. This method only resolves the average material distribution along the muon path. Therefore the user must make assumptions or interpretations about the structure, or must use more than one detector to resolve the three-dimensional material distribution. By applying this method to time-dependent muon radiography, we can detect changes with time, rendering the method suitable for real-time monitoring applications, possibly providing useful information about the reaction process in a nuclear reactor such as burnup of fuels. In nuclear power technology, burnup (also known as fuel utilization) is a measure of how much energy is extracted from a primary nuclear fuel source. Monitoring the burnup of fuels as a nondestructive inspection technique can contribute to safer operation. In a nuclear reactor, the total mass is conserved, so the system cannot be monitored by conventional muon radiography. A plastic scintillator is relatively small and easy to set up compared to a gas or layered scintillation system. Thus, we think this simple radiographic method has the potential to visualize a core directly in cases of normal operation or meltdown accidents. Finally, we considered only three materials as a first step in this work. Further research is required to improve the ability to image the material distribution in a mass-conserved system.
Modeling gene expression measurement error: a quasi-likelihood approach
Strimmer, Korbinian
2003-01-01
Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also improved the power of tests to identify differential expression. PMID:12659637
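As an illustration of the kind of calibration this framework targets, the sketch below fits a quadratic variance structure, Var(y) ≈ a² + (b·μ)², to replicate intensity readings by maximizing a Gaussian working likelihood. This is only a stand-in for the extended quasi-likelihood, not the paper's implementation; all parameter names and values are invented.

```python
# Hypothetical sketch: fitting a quadratic variance structure for expression
# intensities by maximizing a Gaussian working likelihood.  This stands in for
# the extended quasi-likelihood described above; names (mu, a, b) are
# illustrative, not the paper's notation.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy data: replicate readings for one probe with true mean 500, additive
# noise sd 20 and 10% multiplicative noise (i.e. a quadratic variance).
mu_true, a_true, b_true = 500.0, 20.0, 0.10
y = mu_true + rng.normal(0, np.sqrt(a_true**2 + (b_true * mu_true)**2), size=50)

def neg_loglik(params, y):
    mu, log_a, log_b = params
    var = np.exp(log_a)**2 + (np.exp(log_b) * mu)**2   # quadratic variance V(mu)
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (y - mu)**2 / var)

fit = minimize(neg_loglik, x0=[np.mean(y), np.log(10.0), np.log(0.05)],
               args=(y,), method="Nelder-Mead")
mu_hat, a_hat, b_hat = fit.x[0], np.exp(fit.x[1]), np.exp(fit.x[2])
print(f"estimated mean {mu_hat:.1f}, additive sd {a_hat:.1f}, CV {b_hat:.3f}")
```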
Cognitive architectures, rationality, and next-generation AI: a prolegomenon
NASA Astrophysics Data System (ADS)
Bello, Paul; Bringsjord, Selmer; Yang, Yingrui
2004-08-01
Computational models that give us insight into the behavior of individuals and the organizations to which they belong will be invaluable assets in our nation's war against terrorists and state sponsorship of terror organizations. Reasoning and decision-making are essential ingredients in the formula for human cognition, yet the two have almost exclusively been studied in isolation from one another. While we have witnessed the emergence of strong traditions in both symbolic logic and decision theory, we have yet to describe an acceptable interface between the two. Mathematical formulations of decision-making and reasoning have been developed extensively, but both fields make assumptions concerning human rationality that are untenable at best. True to this tradition, artificial intelligence has developed architectures for intelligent agents under these same assumptions. While these digital models of "cognition" tend to perform superbly, given their tremendous capacity for calculation, it is hardly reasonable to develop simulacra of human performance using these techniques. We will discuss some of the challenges associated with the problem of developing integrated cognitive systems for use in modelling, simulation, and analysis, along with some ideas for the future.
Authors' response: the primacy of conscious decision making.
Shanks, David R; Newell, Ben R
2014-02-01
The target article sought to question the common belief that our decisions are often biased by unconscious influences. While many commentators offer additional support for this perspective, others question our theoretical assumptions, empirical evaluations, and methodological criteria. We rebut in particular the starting assumption that all decision making is unconscious, and that the onus should be on researchers to prove conscious influences. Further evidence is evaluated in relation to the core topics we reviewed (multiple-cue judgment, deliberation without attention, and decisions under uncertainty), as well as priming effects. We reiterate a key conclusion from the target article, namely, that it now seems to be generally accepted that awareness should be operationally defined as reportable knowledge, and that such knowledge can only be evaluated by careful and thorough probing. We call for future research to pay heed to the different ways in which awareness can intervene in decision making (as identified in our lens model analysis) and to employ suitable methodology in the assessment of awareness, including the requirements that awareness assessment must be reliable, relevant, immediate, and sensitive.
Network meta-analysis: application and practice using Stata.
Shim, Sungryul; Yoon, Byung-Ho; Shin, In-Soo; Bae, Jong-Myon
2017-01-01
This review aimed to organize the concepts of network meta-analysis (NMA) and to demonstrate the analytical process of NMA using Stata software under a frequentist framework. An NMA synthesizes evidence for decision making by evaluating the comparative effectiveness of more than two alternative interventions for the same condition. Before conducting an NMA, three major assumptions (similarity, transitivity, and consistency) should be checked. The statistical analysis consists of five steps. The first step is to draw the network geometry to provide an overview of the network relationships. The second step checks the assumption of consistency. The third step is to make a network forest plot or interval plot to illustrate the summary effect sizes of comparative effectiveness among the interventions. The fourth step calculates cumulative rankings to identify superiority among interventions. The last step evaluates publication bias or effect modifiers to ensure valid inference from the results. The evidence synthesized through these five steps is highly useful for evidence-based decision making in healthcare. Thus, NMA should be adopted more widely to help guarantee the quality of the healthcare system.
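The review works in Stata; as a language-neutral illustration of the transitivity and consistency ideas behind the first two steps, the sketch below computes an indirect treatment effect from two direct comparisons and a simple inconsistency statistic (the Bucher approach). All effect sizes and standard errors are invented.

```python
# Illustrative Bucher-style indirect comparison for a three-treatment network
# (A-B, B-C, and A-C observed directly; A-C also inferred indirectly).
import math

# Direct estimates on the log odds-ratio scale: (effect, standard error).
d_AB, se_AB = -0.30, 0.10   # A vs B
d_BC, se_BC = -0.20, 0.12   # B vs C
d_AC_direct, se_AC_direct = -0.60, 0.15   # A vs C, observed directly

# Transitivity: the indirect A-C effect is the sum of the A-B and B-C effects.
d_AC_indirect = d_AB + d_BC
se_AC_indirect = math.sqrt(se_AB**2 + se_BC**2)

# Consistency check: difference between direct and indirect estimates.
diff = d_AC_direct - d_AC_indirect
se_diff = math.sqrt(se_AC_direct**2 + se_AC_indirect**2)
z = diff / se_diff
print(f"indirect A-C effect: {d_AC_indirect:.2f} (SE {se_AC_indirect:.2f})")
print(f"inconsistency z-statistic: {z:.2f}")   # |z| well above 2 flags inconsistency
```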
A Testbed for Model Development
NASA Astrophysics Data System (ADS)
Berry, J. A.; Van der Tol, C.; Kornfeld, A.
2014-12-01
Carbon cycle and land-surface models used in global simulations need to be computationally efficient and have a high standard of software engineering. These models also make a number of scaling assumptions to simplify the representation of complex biochemical and structural properties of ecosystems. This makes it difficult to use these models to test new ideas for parameterizations or to evaluate scaling assumptions. The stripped-down nature of these models also makes it difficult to "connect" with current disciplinary research, which tends to be focused on much more nuanced topics than can be included in the models. In our opinion and experience, this indicates the need for another type of model that can more faithfully represent the complexity of ecosystems and which has the flexibility to change or interchange parameterizations and to run optimization codes for calibration. We have used the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model in this way to develop, calibrate, and test parameterizations for solar-induced chlorophyll fluorescence, OCS exchange, and stomatal behavior at the canopy scale. Examples of the data sets and procedures used to develop and test new parameterizations are presented.
Bayesian learning and the psychology of rule induction
Endress, Ansgar D.
2014-01-01
In recent years, Bayesian learning models have been applied to an increasing variety of domains. While such models have been criticized on theoretical grounds, the underlying assumptions and predictions are rarely made concrete and tested experimentally. Here, I use Frank and Tenenbaum's (2011) Bayesian model of rule-learning as a case study to spell out the underlying assumptions, and to confront them with the empirical results Frank and Tenenbaum (2011) propose to simulate, as well as with novel experiments. While rule-learning is arguably well suited to rational Bayesian approaches, I show that their models are neither psychologically plausible nor ideal observer models. Further, I show that their central assumption is unfounded: humans do not always preferentially learn more specific rules, but, at least in some situations, those rules that happen to be more salient. Even when granting the unsupported assumptions, I show that all of the experiments modeled by Frank and Tenenbaum (2011) either contradict their models, or have a large number of more plausible interpretations. I provide an alternative account of the experimental data based on simple psychological mechanisms, and show that this account both describes the data better, and is easier to falsify. I conclude that, despite the recent surge in Bayesian models of cognitive phenomena, psychological phenomena are best understood by developing and testing psychological theories rather than models that can be fit to virtually any data. PMID:23454791
48 CFR 52.211-9 - Desired and Required Time of Delivery.
Code of Federal Regulations, 2014 CFR
2014-10-01
... following clause: Desired and Required Time of Delivery (JUN 1997) (a) The Government desires delivery to be... or specific periods above are based on the assumption that the Government will make award by...
48 CFR 52.211-9 - Desired and Required Time of Delivery.
Code of Federal Regulations, 2011 CFR
2011-10-01
... following clause: Desired and Required Time of Delivery (JUN 1997) (a) The Government desires delivery to be... or specific periods above are based on the assumption that the Government will make award by...
48 CFR 52.211-9 - Desired and Required Time of Delivery.
Code of Federal Regulations, 2013 CFR
2013-10-01
... following clause: Desired and Required Time of Delivery (JUN 1997) (a) The Government desires delivery to be... or specific periods above are based on the assumption that the Government will make award by...
48 CFR 52.211-9 - Desired and Required Time of Delivery.
Code of Federal Regulations, 2012 CFR
2012-10-01
... following clause: Desired and Required Time of Delivery (JUN 1997) (a) The Government desires delivery to be... or specific periods above are based on the assumption that the Government will make award by...
48 CFR 52.211-9 - Desired and Required Time of Delivery.
Code of Federal Regulations, 2010 CFR
2010-10-01
... following clause: Desired and Required Time of Delivery (JUN 1997) (a) The Government desires delivery to be... or specific periods above are based on the assumption that the Government will make award by...
Predictive Ecotoxicology in the 21st Century
Ecological risk assessments have long relied on apical data on survival, growth/development, and reproduction, generated in animal toxicity tests, together with the application of uncertainty factors and (typically) conservative assumptions, as a basis for decision making. However, advances ...
Behavioural social choice: a status report.
Regenwetter, Michel; Grofman, Bernard; Popova, Anna; Messner, William; Davis-Stober, Clintin P; Cavagnaro, Daniel R
2009-03-27
Behavioural social choice has been proposed as a social choice parallel to seminal developments in other decision sciences, such as behavioural decision theory, behavioural economics, behavioural finance and behavioural game theory. Behavioural paradigms compare how rational actors should make certain types of decisions with how real decision makers behave empirically. We highlight that important theoretical predictions in social choice theory change dramatically under even minute violations of standard assumptions. Empirical data violate those critical assumptions. We argue that the nature of preference distributions in electorates is ultimately an empirical question, which social choice theory has often neglected. We also emphasize important insights for research on decision making by individuals. When researchers aggregate individual choice behaviour in laboratory experiments to report summary statistics, they are implicitly applying social choice rules. Thus, they should be aware of the potential for aggregation paradoxes. We hypothesize that such problems may substantially mar the conclusions of a number of (sometimes seminal) papers in behavioural decision research.
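The aggregation point can be made concrete with the classic Condorcet example: three hypothetical participants with perfectly transitive individual preferences produce a cyclic majority preference once their choices are aggregated. The preference orders below are invented for illustration.

```python
# A classic aggregation paradox: three voters with transitive individual
# preferences produce an intransitive (cyclic) majority preference.
from itertools import combinations

voters = [("A", "B", "C"),   # each tuple is a preference order, best first
          ("B", "C", "A"),
          ("C", "A", "B")]

def majority_prefers(x, y):
    """True if a majority of voters rank x above y."""
    wins = sum(v.index(x) < v.index(y) for v in voters)
    return wins > len(voters) / 2

for x, y in combinations("ABC", 2):
    if majority_prefers(x, y):
        print(f"majority prefers {x} over {y}")
    else:
        print(f"majority prefers {y} over {x}")
# The output shows A > B and B > C but C > A: the aggregate is cyclic even
# though every individual ordering is transitive.
```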
Equivalent air depth: fact or fiction.
Berghage, T E; McCraken, T M
1979-12-01
In mixed-gas diving theory, the equivalent air depth (EAD) concept suggests that oxygen does not contribute to the total tissue gas tension and can therefore be disregarded in calculations of the decompression process. The validity of this assumption has been experimentally tested by exposing 365 rats to various partial pressures of oxygen for various lengths of time. If the EAD assumption is correct, under a constant exposure pressure each incremental change in the oxygen partial pressure would produce a corresponding incremental change in pressure reduction tolerance. Results of this study suggest that the EAD concept does not adequately describe the decompression advantages obtained from breathing elevated oxygen partial pressures. The authors suggest that the effects of breathing oxygen vary in a nonlinear fashion across the range from anoxia to oxygen toxicity, and that a simple inert gas replacement concept is no longer tenable.
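For context, the conventional EAD calculation that the study tests treats only the nitrogen fraction as decompression-relevant. A minimal sketch of the standard metric formula, EAD = (D + 10)·(F_N2/0.79) − 10, follows; the example depth and gas mix are illustrative and not taken from the study.

```python
# Standard equivalent air depth calculation for a nitrox mix (metric units).
# The EAD concept treats only the nitrogen fraction as decompression-relevant,
# which is exactly the assumption the study above calls into question.
def equivalent_air_depth(depth_msw: float, f_n2: float) -> float:
    """EAD in metres of seawater; 0.79 is the nitrogen fraction of air."""
    return (depth_msw + 10.0) * (f_n2 / 0.79) - 10.0

# Example: EAN32 (32% oxygen, 68% nitrogen) at 30 msw.
print(f"EAD = {equivalent_air_depth(30.0, 0.68):.1f} msw")   # about 24.4 msw
```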
A simple approach to nonlinear estimation of physical systems
Christakos, G.
1988-01-01
Recursive algorithms for estimating the states of nonlinear physical systems are developed. This requires some key hypotheses regarding the structure of the underlying processes. Members of this class of random processes have several desirable properties for the nonlinear estimation of random signals. An assumption is made about the form of the estimator, which may then take account of a wide range of applications. Under the above assumption, the estimation algorithm is mathematically suboptimal but effective and computationally attractive. It may be compared favorably to Taylor series-type filters, nonlinear filters which approximate the probability density by Edgeworth or Gram-Charlier series, as well as to conventional statistical linearization-type estimators. To link theory with practice, some numerical results for a simulated system are presented, in which the responses from the proposed and the extended Kalman algorithms are compared. © 1988.
People adopt optimal policies in simple decision-making, after practice and guidance.
Evans, Nathan J; Brown, Scott D
2017-04-01
Organisms making repeated simple decisions are faced with a tradeoff between urgent and cautious strategies. While animals can adopt a statistically optimal policy for this tradeoff, findings about human decision-makers have been mixed. Some studies have shown that people can optimize this "speed-accuracy tradeoff", while others have identified a systematic bias towards excessive caution. These issues have driven theoretical development and spurred debate about the nature of human decision-making. We investigated a potential resolution to the debate, based on two factors that routinely differ between human and animal studies of decision-making: the effects of practice, and of longer-term feedback. Our study replicated the finding that most people, by default, are overly cautious. When given both practice and detailed feedback, people moved rapidly towards the optimal policy, with many participants reaching optimality with less than 1 h of practice. Our findings have theoretical implications for cognitive and neural models of simple decision-making, as well as methodological implications.
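The "statistically optimal policy" referred to above is commonly formalized as the decision threshold that maximizes reward rate in a drift-diffusion model. The sketch below sweeps candidate thresholds using the standard closed-form error rate and mean decision time for an unbiased diffusion; all parameter values are invented rather than taken from this study.

```python
# Sketch of the speed-accuracy tradeoff as reward-rate maximization in a
# drift-diffusion model, using the standard closed-form expressions for the
# error rate and mean decision time of an unbiased diffusion.  All parameter
# values are illustrative.
import numpy as np

drift, noise, non_decision, inter_trial = 0.1, 0.1, 0.3, 1.0   # arbitrary units

def error_rate(threshold):
    return 1.0 / (1.0 + np.exp(2.0 * drift * threshold / noise**2))

def mean_decision_time(threshold):
    return (threshold / drift) * np.tanh(drift * threshold / noise**2)

thresholds = np.linspace(0.01, 0.5, 200)
reward_rate = (1.0 - error_rate(thresholds)) / (
    mean_decision_time(thresholds) + non_decision + inter_trial)

best = thresholds[np.argmax(reward_rate)]
print(f"reward-rate-maximizing threshold ~ {best:.3f}")
# Overly cautious decision-makers correspond to thresholds well above this value.
```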
Missing data in FFQs: making assumptions about item non-response.
Lamb, Karen E; Olstad, Dana Lee; Nguyen, Cattram; Milte, Catherine; McNaughton, Sarah A
2017-04-01
FFQs are a popular method of capturing dietary information in epidemiological studies and may be used to derive dietary exposures such as nutrient intake or overall dietary patterns and diet quality. As FFQs can involve large numbers of questions, participants may fail to respond to all questions, leaving researchers to decide how to deal with missing data when deriving intake measures. The aim of the present commentary is to discuss the current practice for dealing with item non-response in FFQs and to propose a research agenda for reporting and handling missing data in FFQs. Single imputation techniques, such as zero imputation (assuming no consumption of the item) or mean imputation, are commonly used to deal with item non-response in FFQs. However, single imputation methods make strong assumptions about the missing data mechanism and do not reflect the uncertainty created by the missing data. This can lead to incorrect inference about associations between diet and health outcomes. Although the use of multiple imputation methods in epidemiology has increased, these have seldom been used in the field of nutritional epidemiology to address missing data in FFQs. We discuss methods for dealing with item non-response in FFQs, highlighting the assumptions made under each approach. Researchers analysing FFQs should ensure that missing data are handled appropriately and clearly report how missing data were treated in analyses. Simulation studies are required to enable systematic evaluation of the utility of various methods for handling item non-response in FFQs under different assumptions about the missing data mechanism.
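A toy contrast between the approaches discussed, zero imputation, mean imputation, and a simple multiple-imputation loop, is sketched below for a two-item FFQ-like matrix. The data, the missingness pattern, and the use of scikit-learn's IterativeImputer as the imputation engine are all assumptions made for illustration only.

```python
# Toy illustration of item non-response handling in an FFQ-like matrix:
# zero imputation, mean imputation, and a simple multiple-imputation loop.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(1)
n = 200
fruit = rng.gamma(2.0, 1.0, n)                 # servings/day
veg = 0.8 * fruit + rng.gamma(1.0, 0.5, n)     # correlated second item
X = np.column_stack([fruit, veg])

# Make 20% of the vegetable item missing at random.
missing = rng.random(n) < 0.2
X_obs = X.copy()
X_obs[missing, 1] = np.nan

zero_imp = np.nan_to_num(X_obs[:, 1], nan=0.0)
mean_imp = np.where(missing, np.nanmean(X_obs[:, 1]), X_obs[:, 1])

# Multiple imputation: several stochastic completions, estimates then pooled.
mi_means = []
for m in range(5):
    imp = IterativeImputer(sample_posterior=True, random_state=m)
    completed = imp.fit_transform(X_obs)
    mi_means.append(completed[:, 1].mean())

print(f"true mean intake      {X[:, 1].mean():.2f}")
print(f"zero imputation       {zero_imp.mean():.2f}   (biased low)")
print(f"mean imputation       {mean_imp.mean():.2f}")
print(f"multiple imputation   {np.mean(mi_means):.2f}")
```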
User assumptions about information retrieval systems: Ethical concerns
DOE Office of Scientific and Technical Information (OSTI.GOV)
Froehlich, T.J.
Information professionals, whether designers, intermediaries, database producers or vendors, bear some responsibility for the information that they make available to users of information systems. The users of such systems may tend to make many assumptions about the information that a system provides, such as believing: that the data are comprehensive, current and accurate; that the information resources or databases have the same degree of quality and consistency of indexing; that the abstracts, if they exist, correctly and adequately reflect the content of the article; that there is consistency in the forms of author names or journal titles or indexing within and across databases; that there is standardization in and across databases; that once errors are detected, they are corrected; that appropriate choices of databases or information resources are a relatively easy matter; etc. The truth is that few of these assumptions are valid in commercial, corporate, or organizational databases. However, given these beliefs and assumptions by many users, often promoted by information providers, information professionals should, wherever possible, intervene to warn users about the limitations and constraints of the databases they are using. With the growth of the Internet and end-user products (e.g., CD-ROMs), such interventions have significantly declined. In such cases, information should be provided on start-up or through interface screens, indicating to users the constraints and orientation of the system they are using. The principle of "caveat emptor" is naive and socially irresponsible: information professionals and systems have an obligation to provide some framework or context for the information that users are accessing.
Frisch, Stefan
2014-01-01
Three widespread assumptions of Cognitive-affective Neuroscience are discussed: first, mental functions are assumed to be localized in circumscribed brain areas which can be exactly determined, at least in principle (localizationism). Second, this assumption is associated with the more general claim that these functions (and dysfunctions, such as in neurological or mental diseases) are somehow generated inside the brain (internalism). Third, these functions are seen to be “biological” in the sense that they can be decomposed and finally explained on the basis of elementary biological causes (i.e., genetic, molecular, neurophysiological etc.), causes that can be identified by experimental methods as the gold standard (isolationism). Clinical neuropsychology is widely assumed to support these tenets. However, by making reference to the ideas of Kurt Goldstein (1878–1965), one of its most important founders, I argue that none of these assumptions is sufficiently supported. From the perspective of a clinical-neuropsychological practitioner, assessing and treating brain damage sequelae reveals a quite different picture of the brain as well as of us “brain carriers”, making the organism (or person) in its specific environment the crucial reference point. This conclusion can be further elaborated: all experimental and clinical research on humans presupposes the notion of a situated, reflecting, and interacting subject, which precedes all kinds of scientific decomposition, however useful. These implications support the core assumptions of the embodiment approach to brain and mind, and, as I argue, Goldstein and his clinical-neuropsychological observations are part of its very origin, for both theoretical and historical reasons. PMID:25100981
Variation is the universal: making cultural evolution work in developmental psychology.
Kline, Michelle Ann; Shamsudheen, Rubeena; Broesch, Tanya
2018-04-05
Culture is a human universal, yet it is a source of variation in human psychology, behaviour and development. Developmental researchers are now expanding the geographical scope of research to include populations beyond relatively wealthy Western communities. However, culture and context still play a secondary role in the theoretical grounding of developmental psychology research, far too often. In this paper, we highlight four false assumptions that are common in psychology, and that detract from the quality of both standard and cross-cultural research in development. These assumptions are: (i) the universality assumption, that empirical uniformity is evidence for universality, while any variation is evidence for culturally derived variation; (ii) the Western centrality assumption, that Western populations represent a normal and/or healthy standard against which development in all societies can be compared; (iii) the deficit assumption, that population-level differences in developmental timing or outcomes are necessarily due to something lacking among non-Western populations; and (iv) the equivalency assumption, that using identical research methods will necessarily produce equivalent and externally valid data across disparate cultural contexts. For each assumption, we draw on cultural evolutionary theory to critique and replace the assumption with a theoretically grounded approach to culture in development. We support these suggestions with positive examples drawn from research in development. Finally, we conclude with a call for researchers to take reasonable steps towards more fully incorporating culture and context into studies of development, by expanding their participant pools in strategic ways. This will lead to a more inclusive and therefore more accurate description of human development. This article is part of the theme issue 'Bridging cultural gaps: interdisciplinary studies in human cultural evolution'. © 2018 The Author(s).
Reticulate evolution and the human past: an anthropological perspective.
Winder, Isabelle C; Winder, Nick P
2014-01-01
The evidence is mounting that reticulate (web-like) evolution has shaped the biological histories of many macroscopic plants and animals, including non-human primates closely related to Homo sapiens, but the implications of this non-hierarchical evolution for anthropological enquiry are not yet fully understood. When they are understood, the result may be a paradigm shift in evolutionary anthropology. This paper reviews the evidence for reticulated evolution in the non-human primates and human lineage. Then it makes the case for extrapolating this sort of patterning to Homo sapiens and other hominins and explores the implications this would have for research design, method and understandings of evolution in anthropology. Reticulation was significant in human evolutionary history and continues to influence societies today. Anthropologists and human scientists, whether working on ancient or modern populations, thus need to consider the implications of non-hierarchic evolution, particularly where molecular clocks, mathematical models and simplifying assumptions about evolutionary processes are used. This is not just a problem for palaeoanthropology. The simple fact of different mating systems among modern human groups, for example, may demand that more attention is paid to the potential for complexity in human genetic and cultural histories.
Neural net diagnostics for VLSI test
NASA Technical Reports Server (NTRS)
Lin, T.; Tseng, H.; Wu, A.; Dogan, N.; Meador, J.
1990-01-01
This paper discusses the application of neural network pattern analysis algorithms to the IC fault diagnosis problem. A fault diagnostic is a decision rule combining what is known about an ideal circuit test response with information about how it is distorted by fabrication variations and measurement noise. The rule is used to detect fault existence in fabricated circuits using real test equipment. Traditional statistical techniques may be used to achieve this goal, but they can employ unrealistic a priori assumptions about measurement data. Our approach to this problem employs an adaptive pattern analysis technique based on feedforward neural networks. During training, a feedforward network automatically captures unknown sample distributions. This is important because distributions arising from the nonlinear effects of process variation can be more complex than is typically assumed. A feedforward network is also able to extract measurement features which contribute significantly to making a correct decision. Traditional feature extraction techniques employ matrix manipulations which can be particularly costly for large measurement vectors. In this paper we discuss a software system which we are developing that uses this approach. We also provide a simple example illustrating the use of the technique for fault detection in an operational amplifier.
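A minimal stand-in for the kind of classifier described, not the authors' system, is sketched below: synthetic measurement vectors for good and faulty circuits are generated with process variation and measurement noise and fed to a small feedforward network. The feature dimension, fault signature, and noise levels are invented.

```python
# Minimal sketch of a feedforward-network fault detector: synthetic test
# measurement vectors for "good" and "faulty" circuits, distorted by process
# variation and measurement noise.  All numbers are illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_per_class, n_meas = 500, 8

ideal_good = np.ones(n_meas)                       # nominal test response
ideal_fault = ideal_good.copy()
ideal_fault[3] *= 0.6                              # one distorted measurement

def fabricate(ideal, n):
    process = rng.normal(1.0, 0.05, (n, n_meas))   # fabrication variation
    noise = rng.normal(0.0, 0.02, (n, n_meas))     # measurement noise
    return ideal * process + noise

X = np.vstack([fabricate(ideal_good, n_per_class),
               fabricate(ideal_fault, n_per_class)])
y = np.array([0] * n_per_class + [1] * n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print(f"fault-detection accuracy: {clf.score(X_te, y_te):.2f}")
```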
Mannino, Robert G.; Myers, David R.; Ahn, Byungwook; Wang, Yichen; Rollins, Margo; Gole, Hope; Lin, Angela S.; Guldberg, Robert E.; Giddens, Don P.; Timmins, Lucas H.; Lam, Wilbur A.
2015-01-01
Investigating biophysical cellular interactions in the circulation currently requires choosing between in vivo models, which are difficult to interpret due in part to the hemodynamic and geometric complexities of the vasculature; or in vitro systems, which suffer from non-physiologic assumptions and/or require specialized microfabrication facilities and expertise. To bridge that gap, we developed an in vitro “do-it-yourself” perfusable vasculature model that recapitulates in vivo geometries, such as aneurysms, stenoses, and bifurcations, and supports endothelial cell culture. These inexpensive, disposable devices can be created rapidly (<2 hours) with high precision and repeatability, using standard off-the-shelf laboratory supplies. Using these “endothelialized” systems, we demonstrate that spatial variation in vascular cell adhesion molecule (VCAM-1) expression correlates with the wall shear stress patterns of vascular geometries. We further observe that the presence of endothelial cells in stenoses reduces platelet adhesion but increases sickle cell disease (SCD) red blood cell (RBC) adhesion in bifurcations. Overall, our method enables researchers from all disciplines to study cellular interactions in physiologically relevant, yet simple-to-make, in vitro vasculature models. PMID:26202603
NASA Astrophysics Data System (ADS)
Rezaei Kh., S.; Bailer-Jones, C. A. L.; Hanson, R. J.; Fouesneau, M.
2017-02-01
We present a non-parametric model for inferring the three-dimensional (3D) distribution of dust density in the Milky Way. Our approach uses the extinction measured towards stars at different locations in the Galaxy at approximately known distances. Each extinction measurement is proportional to the integrated dust density along its line of sight (LoS). Making simple assumptions about the spatial correlation of the dust density, we can infer the most probable 3D distribution of dust across the entire observed region, including along sight lines which were not observed. This is possible because our model employs a Gaussian process to connect all LoS. We demonstrate the capability of our model to capture detailed dust density variations using mock data and simulated data from the Gaia Universe Model Snapshot. We then apply our method to a sample of giant stars observed by APOGEE and Kepler to construct a 3D dust map over a small region of the Galaxy. Owing to our smoothness constraint and its isotropy, we provide one of the first maps which does not show the "fingers of God" effect.
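A drastically simplified stand-in for this approach is sketched below: the mean line-of-sight density (extinction divided by distance) is treated as a noisy sample of the local dust density at each star's 3D position and smoothed with an off-the-shelf Gaussian process, which can then predict along unobserved sight lines. The real model instead ties each extinction to a line-of-sight integral; every quantity below is invented for illustration.

```python
# Crude illustrative stand-in for Gaussian-process dust mapping: smooth noisy
# per-star density estimates in 3D and predict along an unobserved sight line.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
n_stars = 300
xyz = rng.uniform(-1.0, 1.0, (n_stars, 3))                # kpc, made-up volume

def true_density(p):                                       # a single dust cloud
    return np.exp(-np.sum((p - np.array([0.3, 0.0, 0.0]))**2, axis=1) / 0.1)

# "Observed" mean line-of-sight density with measurement noise.
rho_obs = true_density(xyz) + rng.normal(0, 0.05, n_stars)

kernel = 1.0 * RBF(length_scale=0.3) + WhiteKernel(noise_level=0.05**2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(xyz, rho_obs)

# Predict density along an unobserved sight line passing through the cloud.
line = np.column_stack([np.linspace(-1, 1, 50), np.zeros(50), np.zeros(50)])
rho_line, rho_sd = gp.predict(line, return_std=True)
print(f"peak predicted density {rho_line.max():.2f} "
      f"+/- {rho_sd[np.argmax(rho_line)]:.2f}")
```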
The Effects of Accretion Disk Geometry on AGN Reflection Spectra
NASA Astrophysics Data System (ADS)
Taylor, Corbin James; Reynolds, Christopher S.
2017-08-01
Despite being the gravitational engines that power galactic-scale winds and megaparsec-scale jets in active galaxies, black holes are remarkably simple objects, typically being fully described by their angular momenta (spin) and masses. The modelling of AGN X-ray reflection spectra has proven fruitful in estimating the spin of AGN, as well as giving insight into their accretion histories and the properties of plasmas in the strong gravity regime. However, current models make simplifying assumptions about the geometry of the reflecting material in the accretion disk and the irradiating X-ray corona, approximating the disk as an optically thick, infinitely thin disk of material in the orbital plane. We present results from the new relativistic raytracing suite, Fenrir, that explore the effects that disk thickness may have on the reflection spectrum and the accompanying reverberation signatures. Approximating the accretion disk as an optically thick, geometrically thin, radiation pressure dominated disk (Shakura & Sunyaev 1973), one finds that the disk geometry is non-negligible in many cases, with significant changes in the broad Fe K line profile. Finally, we explore the systematic errors inherent in approximating the disk as being infinitely thin when modeling the reflection spectrum, potentially biasing determinations of black hole and corona properties.
Centrifugal Gas Compression Cycle
NASA Astrophysics Data System (ADS)
Fultun, Roy
2002-11-01
A centrifuged gas of kinetic, elastic hard spheres compresses isothermally and without flow of heat in a process that reverses free expansion. This theorem follows from stated assumptions via a collection of thought experiments, theorems and other supporting results, and it excludes application of the reversible mechanical adiabatic power law in this context. The existence of an isothermal adiabatic centrifugal compression process makes a three-process cycle possible using a fixed sample of the working gas. The three processes are: adiabatic mechanical expansion and cooling against a piston, isothermal adiabatic centrifugal compression back to the original volume, and isochoric temperature rise back to the original temperature due to an influx of heat. This cycle forms the basis for a Thomson perpetuum mobile that induces a loop of energy flow in an isolated system consisting of a heat bath connectable by a thermal path to the working gas, a mechanical extractor of the gas's internal energy, and a device that uses that mechanical energy and dissipates it as heat back into the heat bath. We present a simple experimental procedure to test the assertion that adiabatic centrifugal compression is isothermal. An energy budget for the cycle provides a criterion for breakeven in the conversion of heat to mechanical energy.
NASA Astrophysics Data System (ADS)
Mathias, Simon A.; Gluyas, Jon G.; GonzáLez MartíNez de Miguel, Gerardo J.; Hosseini, Seyyed A.
2011-12-01
This work extends an existing analytical solution for pressure buildup because of CO2 injection in brine aquifers by incorporating effects associated with partial miscibility. These include evaporation of water into the CO2 rich phase and dissolution of CO2 into brine and salt precipitation. The resulting equations are closed-form, including the locations of the associated leading and trailing shock fronts. Derivation of the analytical solution involves making a number of simplifying assumptions including: vertical pressure equilibrium, negligible capillary pressure, and constant fluid properties. The analytical solution is compared to results from TOUGH2 and found to accurately approximate the extent of the dry-out zone around the well, the resulting permeability enhancement due to residual brine evaporation, the volumetric saturation of precipitated salt, and the vertically averaged pressure distribution in both space and time for the four scenarios studied. While brine evaporation is found to have a considerable effect on pressure, the effect of CO2 dissolution is found to be small. The resulting equations remain simple to evaluate in spreadsheet software and represent a significant improvement on current methods for estimating pressure-limited CO2 storage capacity.
Quantum weak turbulence with applications to semiconductor lasers
NASA Astrophysics Data System (ADS)
Lvov, Yuri Victorovich
Based on a model Hamiltonian appropriate for the description of fermionic systems such as semiconductor lasers, we describe a natural asymptotic closure of the BBGKY hierarchy in complete analogy with that derived for classical weak turbulence. The main features of the interaction Hamiltonian are the inclusion of full Fermi statistics containing Pauli blocking and a simple, phenomenological, uniformly weak two-particle interaction potential equivalent to the static screening approximation. The resulting asymptotic closure and quantum kinetic Boltzmann equation are derived in a self-consistent manner without resorting to a priori statistical hypotheses or cumulant discard assumptions. We find a new class of solutions to the quantum kinetic equation which are analogous to the Kolmogorov spectra of hydrodynamics and classical weak turbulence. They involve finite fluxes of particles and energy across momentum space and are particularly relevant for describing the behavior of systems containing sources and sinks. We explore these solutions by using a differential approximation to the collision integral. We make a prima facie case that these finite-flux solutions can be important in the context of semiconductor lasers. We show that semiconductor laser output efficiency can be improved by exciting these finite-flux solutions. Numerical simulations of the semiconductor Maxwell-Bloch equations support the claim.
Signal-dependent noise determines motor planning
NASA Astrophysics Data System (ADS)
Harris, Christopher M.; Wolpert, Daniel M.
1998-08-01
When we make saccadic eye movements or goal-directed arm movements, there is an infinite number of possible trajectories that the eye or arm could take to reach the target. However, humans show highly stereotyped trajectories in which velocity profiles of both the eye and hand are smooth and symmetric for brief movements. Here we present a unifying theory of eye and arm movements based on the single physiological assumption that the neural control signals are corrupted by noise whose variance increases with the size of the control signal. We propose that in the presence of such signal-dependent noise, the shape of a trajectory is selected to minimize the variance of the final eye or arm position. This minimum-variance theory accurately predicts the trajectories of both saccades and arm movements and the speed-accuracy trade-off described by Fitts' law. These profiles are robust to changes in the dynamics of the eye or arm, as found empirically. Moreover, the relation between path curvature and hand velocity during drawing movements reproduces the empirical 'two-thirds power law'. This theory provides a simple and powerful unifying perspective for both eye and arm movement control.
Managing a Common Pool Resource: Real Time Decision-Making in a Groundwater Aquifer
NASA Astrophysics Data System (ADS)
Sahu, R.; McLaughlin, D.
2017-12-01
In a Common Pool Resource (CPR) such as a groundwater aquifer, multiple landowners (agents) compete for a limited water resource. Landowners pump out the water to grow their own crops. Such problems can be posed as differential games, with all agents trying to control the behavior of the shared dynamic system. Each agent aims to maximize his or her own objective, such as agricultural yield, while being aware that the actions of every other agent collectively influence the behavior of the shared aquifer. The agents therefore choose a subgame perfect Nash equilibrium strategy that derives an optimal action for each agent based on the current state of the aquifer and assumes perfect information of every other agent's objective function. Furthermore, using an Iterated Best Response approach and interpolation techniques, an optimal pumping strategy can be computed for a more realistic description of the groundwater model under certain assumptions. The numerical implementation of dynamic optimization techniques for a relevant description of the physical system yields results qualitatively different from the previous solutions obtained from simple abstractions. This work aims to bridge the gap between extensive modeling approaches in hydrology and competitive solution strategies in differential game theory.
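A one-shot, two-agent stand-in for the iterated best response idea is sketched below: each landowner repeatedly best-responds to the other's pumping rate under a quadratic payoff until the pair converges to the Nash equilibrium. The payoff form and numbers are invented, and this static game is only a caricature of the dynamic differential game solved in the study.

```python
# One-shot, two-agent sketch of iterated best response for a shared aquifer.
# Payoff_i = price*q_i - cost*q_i*(q_i + q_j): an agent's pumping cost rises
# with total withdrawal (drawdown).  All functional forms and numbers invented.
price, cost = 10.0, 1.0

def best_response(q_other):
    """Maximizer of price*q - cost*q*(q + q_other) over q >= 0."""
    return max(0.0, (price - cost * q_other) / (2.0 * cost))

q1, q2 = 0.0, 0.0
for _ in range(50):                      # iterate until mutual best responses
    q1, q2 = best_response(q2), best_response(q1)

print(f"Nash equilibrium pumping: q1 = {q1:.3f}, q2 = {q2:.3f}")
# Analytical check: the symmetric equilibrium is q* = price / (3*cost) = 3.333...
```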
The Effects of Accretion Disk Thickness on the Black Hole Reflection Spectrum
NASA Astrophysics Data System (ADS)
Taylor, Corbin; Reynolds, Christopher S.
2018-01-01
Despite being the gravitational engines that power galactic-scale winds and megaparsec-scale jets in active galaxies, black holes are remarkably simple objects, typically being fully described by their angular momenta (spin) and masses. The modelling of AGN X-ray reflection spectra has proven fruitful in estimating the spin of AGN, as well as giving insight into their accretion histories and into the properties of plasmas in the strong gravity regime. However, current models make simplifying assumptions about the geometry of the reflecting material in the accretion disk and the irradiating X-ray corona, approximating the disk as an optically thick, infinitely thin disk of material in the orbital plane. We present results from the new relativistic raytracing suite, Fenrir, that explore the effects that disk thickness may have on the reflection spectrum and the accompanying reverberation signatures. Approximating the accretion disk as an optically thick, geometrically thin, radiation pressure dominated disk (Shakura & Sunyaev 1973), one finds that the disk geometry is non-negligible in many cases, with significant changes in the broad Fe K line profile. Finally, we explore the systematic errors inherent in other contemporary models that approximate the disk as having negligible vertical extent.
The observable signature of late heating of the Universe during cosmic reionization.
Fialkov, Anastasia; Barkana, Rennan; Visbal, Eli
2014-02-13
Models and simulations of the epoch of reionization predict that spectra of the 21-centimetre transition of atomic hydrogen will show a clear fluctuation peak, at a redshift and scale, respectively, that mark the central stage of reionization and the characteristic size of ionized bubbles. This is based on the assumption that the cosmic gas was heated by stellar remnants, particularly X-ray binaries, to temperatures well above the cosmic microwave background at that time (about 30 kelvin). Here we show instead that the hard spectra (that is, spectra with more high-energy photons than low-energy photons) of X-ray binaries make such heating ineffective, resulting in a delayed and spatially uniform heating that modifies the 21-centimetre signature of reionization. Rather than looking for a simple rise and fall of the large-scale fluctuations (peaking at several millikelvin), we must expect a more complex signal also featuring a distinct minimum (at less than a millikelvin) that marks the rise of the cosmic mean gas temperature above the microwave background. Observing this signal, possibly with radio telescopes in operation today, will demonstrate the presence of a cosmic background of hard X-rays at that early time.
Pivovarov, Sergey
2009-04-01
This work presents a simple solution for the diffuse double layer model, applicable to calculation of surface speciation as well as to simulation of ionic adsorption within the diffuse layer of solution in arbitrary salt media. Based on the Poisson-Boltzmann equation, the Gaines-Thomas selectivity coefficient for uni-bivalent exchange on clay, K_GT(Me2+/M+) = (Q_Me^0.5 / Q_M) {M+}/{Me2+}^0.5 (Q is the equivalent fraction of the cation in the exchange capacity, and {M+} and {Me2+} are the ionic activities in solution), may be calculated as [surface charge, μeq/m²]/0.61. The obtained solution of the Poisson-Boltzmann equation was applied to the calculation of ionic exchange on clays and to simulation of the surface charge of ferrihydrite in 0.01-6 M NaCl solutions. In addition, a new model of acid-base properties was developed. This model is based on the assumption that the net proton charge is not located on the mathematical surface plane but is diffusely distributed within the subsurface layer of the lattice. It is shown that the obtained solution of the Poisson-Boltzmann equation makes such calculations possible, and that this approach is more efficient than the original diffuse double layer model.
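A direct use of the relation quoted above, with an illustrative surface charge density, is sketched below.

```python
# The abstract states that the Gaines-Thomas selectivity coefficient for
# uni-bivalent exchange may be approximated as the surface charge density
# (in microequivalents per square metre) divided by 0.61.  The example charge
# density is illustrative only.
def gaines_thomas_selectivity(surface_charge_ueq_per_m2: float) -> float:
    return surface_charge_ueq_per_m2 / 0.61

print(f"K_GT ~ {gaines_thomas_selectivity(1.0):.2f} for 1.0 ueq/m^2")
```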
Route choice in mountain navigation, Naismith's rule, and the equivalence of distance and climb.
Scarf, Philip
2007-04-01
In this paper, I consider decision making about routes in mountain navigation. In particular, I discuss Naismith's rule, a method of calculating journey times in mountainous terrain, and its use for route choice. The rule is essentially concerned with the equivalence, in terms of time duration, between climb or ascent and distance travelled. Naismith himself described a rule that is purported to be based on trigonometry and simple assumptions about rate of ascent; his rule with regard to hill-walking implies that 1 m of ascent is equivalent to 7.92 m of horizontal travel (1:7.92). The analysis of data on fell running records presented here supports Naismith's rule and it is recommended that male runners and walkers use a 1:8 equivalence ratio and females a 1:10 ratio. The present findings are contrasted with those based on the analysis of data relating to treadmill running experiments (1:3.3), and with those based on the analysis of times for a mountain road-relay (1:4.4). Analysis of cycling data suggests a similar rule (1:8.2) for cycling on mountainous roads and tracks.
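Route choice under the rule reduces to converting climb into equivalent horizontal distance using the recommended ratios (1:8 for males, 1:10 for females). The sketch below compares two hypothetical routes to the same summit; the 5 km/h walking speed is an assumed example value, not part of the paper's analysis.

```python
# Route-choice comparison using the distance-climb equivalence recommended
# above (1:8 for male, 1:10 for female runners and walkers).  The 5 km/h
# walking speed and the two routes are assumed example values.
def journey_time_hours(distance_km, climb_m, equivalence=8.0, speed_kmh=5.0):
    """Journey time after converting climb to equivalent horizontal distance."""
    equivalent_distance_km = distance_km + equivalence * climb_m / 1000.0
    return equivalent_distance_km / speed_kmh

# Two candidate routes to the same summit: long-and-gentle vs short-and-steep.
routes = {"long, gentle": (10.0, 200.0), "short, steep": (6.0, 800.0)}
for name, (dist, climb) in routes.items():
    print(f"{name:>13}: {journey_time_hours(dist, climb):.2f} h")
# With a 1:8 ratio the 6 km route with 800 m of climb takes longer than the
# 10 km route with 200 m of climb, so the gentler route is the better choice.
```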
Multi-Agent Market Modeling of Foreign Exchange Rates
NASA Astrophysics Data System (ADS)
Zimmermann, Georg; Neuneier, Ralph; Grothmann, Ralph
A market mechanism is basically driven by a superposition of decisions of many agents optimizing their profit. The economic price dynamics are a consequence of the cumulated excess demand/supply created at this micro level. The behavioral analysis of a small number of agents is well understood through game theory. In the case of a large number of agents, one may use the limiting case in which an individual agent has no influence on the market, which allows the aggregation of agents by statistical methods. In contrast to this restriction, we can omit the assumption of an atomic market structure if we model the market through a multi-agent approach. The contribution of the mathematical theory of neural networks to market price formation is mostly seen on the econometric side: neural networks allow the fitting of high-dimensional nonlinear dynamic models. Furthermore, in our opinion, there is a close relationship between economics and the modeling ability of neural networks, because a neuron can be interpreted as a simple model of decision making. With this in mind, a neural network models the interaction of many decisions and, hence, can be interpreted as the price formation mechanism of a market.
Comprehensive analysis of Arabidopsis expression level polymorphisms with simple inheritance
Plantegenet, Stephanie; Weber, Johann; Goldstein, Darlene R; Zeller, Georg; Nussbaumer, Cindy; Thomas, Jérôme; Weigel, Detlef; Harshman, Keith; Hardtke, Christian S
2009-01-01
In Arabidopsis thaliana, gene expression level polymorphisms (ELPs) between natural accessions that exhibit simple, single locus inheritance are promising quantitative trait locus (QTL) candidates to explain phenotypic variability. It is assumed that such ELPs overwhelmingly represent regulatory element polymorphisms. However, comprehensive genome-wide analyses linking expression level, regulatory sequence and gene structure variation are missing, preventing definite verification of this assumption. Here, we analyzed ELPs observed between the Eil-0 and Lc-0 accessions. Compared with non-variable controls, 5′ regulatory sequence variation in the corresponding genes is indeed increased. However, ∼42% of all the ELP genes also carry major transcription unit deletions in one parent as revealed by genome tiling arrays, representing a >4-fold enrichment over controls. Within the subset of ELPs with simple inheritance, this proportion is even higher and deletions are generally more severe. Similar results were obtained from analyses of the Bay-0 and Sha accessions, using alternative technical approaches. Collectively, our results suggest that drastic structural changes are a major cause for ELPs with simple inheritance, corroborating experimentally observed indel preponderance in cloned Arabidopsis QTL. PMID:19225455