Naïve and Robust: Class-Conditional Independence in Human Classification Learning
ERIC Educational Resources Information Center
Jarecki, Jana B.; Meder, Björn; Nelson, Jonathan D.
2018-01-01
Humans excel in categorization. Yet from a computational standpoint, learning a novel probabilistic classification task poses severe challenges. The present paper investigates one way to address these challenges: assuming class-conditional independence of features. This feature independence assumption simplifies the inference…
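The class-conditional independence assumption described above is the core of naive Bayes classification: the joint likelihood of the features factorizes given the class, so the posterior is a simple product. A minimal sketch follows; the class labels, priors, and feature likelihoods are hypothetical illustrations, not values from the study.

```python
# Sketch of classification under class-conditional independence
# (naive Bayes). All numbers below are hypothetical.

def posterior(priors, likelihoods, features):
    """P(class | features), assuming features are independent given the class."""
    scores = {}
    for c, prior in priors.items():
        score = prior
        for i, f in enumerate(features):
            score *= likelihoods[c][i][f]  # P(feature_i = f | class c)
        scores[c] = score
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

priors = {"A": 0.5, "B": 0.5}
# likelihoods[class][feature_index][feature_value]
likelihoods = {
    "A": [{0: 0.8, 1: 0.2}, {0: 0.7, 1: 0.3}],
    "B": [{0: 0.3, 1: 0.7}, {0: 0.4, 1: 0.6}],
}
post = posterior(priors, likelihoods, (1, 1))  # post["B"] = 0.875
```

The simplification is substantial: with n binary features, the learner estimates n parameters per class instead of 2^n joint-cell probabilities.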
NASA Astrophysics Data System (ADS)
Riley, Douglas A.
We study the three-dimensional incompressible Navier-Stokes equations in a domain of the form Ω′ × (0, ε). First, we assume Ω′ is a C³ bounded domain and impose no-slip boundary conditions on ∂Ω′ × (0, ε), and periodic conditions on Ω′ ×…
Feature inference with uncertain categorization: Re-assessing Anderson's rational model.
Konovalova, Elizaveta; Le Mens, Gaël
2017-09-18
A key function of categories is to support predictions about unobserved features of objects. At the same time, humans are often in situations where the categories of the objects they perceive are uncertain. In an influential paper, Anderson (Psychological Review, 98(3), 409-429, 1991) proposed a rational model for feature inferences with uncertain categorization. A crucial feature of this model is the conditional independence assumption: it assumes that the within-category feature correlation is zero. In prior research, this model has been found to provide a poor fit to participants' inferences. This evidence is restricted to task environments inconsistent with the conditional independence assumption. Currently available evidence thus provides little information about how this model would fit participants' inferences in a setting with conditional independence. In four experiments based on a novel paradigm and one experiment based on an existing paradigm, we assess the performance of Anderson's model under conditional independence. We find that this model predicts participants' inferences better than competing models. One model assumes that inferences are based on just the most likely category. The second model is insensitive to categories but sensitive to overall feature correlation. The performance of Anderson's model is evidence that inferences were influenced not only by the more likely category but also by the other candidate category. Our findings suggest that a version of Anderson's model which relaxes the conditional independence assumption will likely perform well in environments characterized by within-category feature correlation.
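The contrast the abstract draws between Anderson's model and the single-category model can be stated in two lines of arithmetic: Anderson's model averages the category-conditional feature probabilities over the posterior on candidate categories, while the competitor conditions only on the most likely category. A sketch with hypothetical posterior and feature probabilities:

```python
# Feature inference under uncertain categorization. Numbers are
# hypothetical, chosen only to show the two predictions diverge.

def anderson_prediction(post, p_feature_given_cat):
    """Average over all candidate categories, weighted by posterior."""
    return sum(post[c] * p_feature_given_cat[c] for c in post)

def single_category_prediction(post, p_feature_given_cat):
    """Condition only on the most likely category."""
    best = max(post, key=post.get)
    return p_feature_given_cat[best]

post = {"cat1": 0.6, "cat2": 0.4}    # posterior over candidate categories
p_feat = {"cat1": 0.9, "cat2": 0.1}  # P(target feature | category)

full = anderson_prediction(post, p_feat)           # 0.6*0.9 + 0.4*0.1 = 0.58
single = single_category_prediction(post, p_feat)  # 0.9
```

The gap between the two predictions (0.58 vs. 0.9) is what lets the experiments discriminate whether inferences reflect only the most likely category or both candidates.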
Independent Colimitation for Carbon Dioxide and Inorganic Phosphorus
Spijkerman, Elly; de Castro, Francisco; Gaedke, Ursula
2011-01-01
Simultaneous limitation of plant growth by two or more nutrients is increasingly acknowledged as a common phenomenon in nature, but its cellular mechanisms are far from understood. We investigated the uptake kinetics of CO2 and phosphorus of the alga Chlamydomonas acidophila in response to growth at limiting conditions of CO2 and phosphorus. In addition, we fitted the data to four different Monod-type models: one assuming Liebig's Law of the minimum, one assuming that the affinity for the uptake of one nutrient is not influenced by the supply of the other (independent colimitation), and two where the uptake affinity for one nutrient depends on the supply of the other (dependent colimitation). In addition, we asked whether the physiological response under colimitation differs from that under single nutrient limitation. We found no negative correlation between the affinities for uptake of the two nutrients, thereby rejecting a dependent colimitation. Kinetic data were supported by a better model fit assuming independent uptake of colimiting nutrients than when assuming Liebig's Law of the minimum or a dependent colimitation. Results show that cell nutrient homeostasis regulated nutrient acquisition, which resulted in a trade-off in the maximum uptake rates of CO2 and phosphorus, possibly driven by space limitation on the cell membrane for porters for the different nutrients. Hence, the response to colimitation deviated from that to a single nutrient limitation. In conclusion, responses to single nutrient limitation cannot be extrapolated to situations where multiple nutrients are limiting, which calls for colimitation experiments and models to properly predict growth responses to a changing natural environment. These deviations from single nutrient limitation response under colimiting conditions and independent colimitation may also hold for other nutrients in algae and in higher plants. PMID:22145031
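The two main Monod-type formulations contrasted above can be sketched directly. Under Liebig's Law of the minimum, growth is set by the scarcer nutrient alone; a common way to express independent colimitation is multiplicatively, with each nutrient contributing its own Monod factor. The multiplicative form here is an assumption for illustration (the paper fits its own four models), and all parameter values are hypothetical.

```python
# Liebig's Law of the minimum vs. a multiplicative (independent)
# colimitation model, both built from Monod terms. Parameters are
# hypothetical, not fitted values from the study.

def monod(s, k):
    """Monod saturation term: substrate concentration s, half-saturation k."""
    return s / (k + s)

def growth_liebig(mu_max, s1, k1, s2, k2):
    """Growth limited only by the scarcer nutrient."""
    return mu_max * min(monod(s1, k1), monod(s2, k2))

def growth_independent(mu_max, s1, k1, s2, k2):
    """Each nutrient limits growth independently (multiplicative form)."""
    return mu_max * monod(s1, k1) * monod(s2, k2)

# Nutrient 1 replete, nutrient 2 scarce:
mu_liebig = growth_liebig(1.0, 10.0, 1.0, 0.5, 1.0)       # = 1/3
mu_indep = growth_independent(1.0, 10.0, 1.0, 0.5, 1.0)   # < 1/3
```

The multiplicative model always predicts growth at or below the Liebig prediction, which is one way experimental data can discriminate between the two.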
Locally Dependent Latent Trait Model and the Dutch Identity Revisited.
ERIC Educational Resources Information Center
Ip, Edward H.
2002-01-01
Proposes a class of locally dependent latent trait models for responses to psychological and educational tests. Focuses on models based on a family of conditional distributions, or kernel, that describes joint multiple item responses as a function of student latent trait, not assuming conditional independence. Also proposes an EM algorithm for…
Conditional Independence in Applied Probability.
ERIC Educational Resources Information Center
Pfeiffer, Paul E.
This material assumes the user has the background provided by a good undergraduate course in applied probability. It is felt that introductory courses in calculus, linear algebra, and perhaps some differential equations should provide the requisite experience and proficiency with mathematical concepts, notation, and argument. The document is…
Optimal control problems with mixed control-phase variable equality and inequality constraints
NASA Technical Reports Server (NTRS)
Makowski, K.; Neustad, L. W.
1974-01-01
In this paper, necessary conditions are obtained for optimal control problems containing equality constraints defined in terms of functions of the control and phase variables. The control system is assumed to be characterized by an ordinary differential equation, and more conventional constraints, including phase inequality constraints, are also assumed to be present. Because the first-mentioned equality constraint must be satisfied for all t (the independent variable of the differential equation) belonging to an arbitrary (prescribed) measurable set, this problem gives rise to infinite-dimensional equality constraints. To obtain the necessary conditions, which are in the form of a maximum principle, an implicit-function-type theorem in Banach spaces is derived.
Sparse covariance estimation in heterogeneous samples*
Rodríguez, Abel; Lenkoski, Alex; Dobra, Adrian
2015-01-01
Standard Gaussian graphical models implicitly assume that the conditional independence among variables is common to all observations in the sample. However, in practice, observations are usually collected from heterogeneous populations where such an assumption is not satisfied, leading in turn to nonlinear relationships among variables. To address such situations we explore mixtures of Gaussian graphical models; in particular, we consider both infinite mixtures and infinite hidden Markov models where the emission distributions correspond to Gaussian graphical models. Such models allow us to divide a heterogeneous population into homogeneous groups, with each cluster having its own conditional independence structure. As an illustration, we study the trends in foreign exchange rate fluctuations in the pre-Euro era. PMID:26925189
The magnetosphere of Neptune - Its response to daily rotation
NASA Technical Reports Server (NTRS)
Voigt, Gerd-Hannes; Ness, Norman F.
1990-01-01
The Neptunian magnetosphere periodically changes every eight hours between a pole-on magnetosphere with only one polar cusp and an earth-type magnetosphere with two polar cusps. In the pole-on configuration, the tail current sheet has an almost circular shape with plasma currents closing entirely within the magnetosphere. Eight hours later the tail current sheet assumes an almost flat shape with plasma currents touching the magnetotail boundary and closing over the tail magnetopause. Magnetic field and tail current sheet configurations have been calculated in a three-dimensional model, but the plasma- and thermodynamic conditions were investigated in a simplified two-dimensional MHD equilibrium magnetosphere. It was found that the free energy in the tail region of the two-dimensional model becomes independent of the dipole tilt angle. It is conjectured that the Neptunian magnetotail might assume quasi-static equilibrium states that make the free energy of the system independent of its daily rotation.
Anselmi, Pasquale; Stefanutti, Luca; de Chiusole, Debora; Robusto, Egidio
2017-11-01
The gain-loss model (GaLoM) is a formal model for assessing knowledge and learning. In its original formulation, the GaLoM assumes independence among the skills. Such an assumption is not reasonable in several domains, in which some preliminary knowledge is the foundation for other knowledge. This paper presents an extension of the GaLoM to the case in which the skills are not independent, and the dependence relation among them is described by a well-graded competence space. The probability of mastering skill s at the pretest is conditional on the presence of all skills on which s depends. The probabilities of gaining or losing skill s when moving from pretest to posttest are conditional on the mastery of s at the pretest, and on the presence at the posttest of all skills on which s depends. Two formulations of the model are presented, in which the learning path is allowed to change from pretest to posttest or not. A simulation study shows that models based on the true competence space obtain a better fit than models based on false competence spaces, and are also characterized by a higher assessment accuracy. An empirical application shows that models based on pedagogically sound assumptions about the dependencies among the skills obtain a better fit than models assuming independence among the skills. © 2017 The British Psychological Society.
Conflict over condition-dependent sex allocation can lead to mixed sex-determination systems
Kuijper, Bram; Pen, Ido
2014-01-01
Theory suggests that genetic conflicts drive turnovers between sex-determining mechanisms, yet these studies only apply to cases where sex allocation is independent of environment or condition. Here, we model parent–offspring conflict in the presence of condition-dependent sex allocation, where the environment has sex-specific fitness consequences. Additionally, one sex is assumed to be more costly to produce than the other, which leads offspring to favor a sex ratio less biased toward the cheaper sex in comparison to the sex ratio favored by mothers. The scope for parent–offspring conflict depends on the relative frequency of both environments: when one environment is less common than the other, parent–offspring conflict can be reduced or even entirely absent, despite a biased population sex ratio. The model shows that conflict-driven invasions of condition-independent sex factors (e.g., sex chromosomes) result either in the loss of condition-dependent sex allocation, or, interestingly, lead to stable mixtures of condition-dependent and condition-independent sex factors. The latter outcome corresponds to empirical observations in which sex chromosomes are present in organisms with environment-dependent sex determination. Finally, conflict can also favor errors in environmental perception, potentially resulting in the loss of condition-dependent sex allocation without genetic changes to sex-determining loci. PMID:25180669
NASA Astrophysics Data System (ADS)
Wan, Li; Zhou, Qinghua
2007-10-01
The stability property of stochastic hybrid bidirectional associative memory (BAM) neural networks with discrete delays is considered. Without assuming the symmetry of synaptic connection weights or the monotonicity and differentiability of activation functions, delay-independent sufficient conditions guaranteeing the exponential stability of the equilibrium solution for such networks are given by using the nonnegative semimartingale convergence theorem.
ERIC Educational Resources Information Center
Chen, Tina; Starns, Jeffrey J.; Rotello, Caren M.
2015-01-01
The 2-high-threshold (2HT) model of recognition memory assumes that test items result in distinct internal states: they are either detected or not, and the probability of responding at a particular confidence level that an item is "old" or "new" depends on the state-response mapping parameters. The mapping parameters are…
Kurtosis Approach Nonlinear Blind Source Separation
NASA Technical Reports Server (NTRS)
Duong, Vu A.; Stubberud, Allen R.
2005-01-01
In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximating polynomials are estimated by the gradient descent method, subject to higher-order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation. Keywords: Independent Component Analysis, Kurtosis, Higher order statistics.
Survival curve estimation with dependent left truncated data using Cox's model.
Mackenzie, Todd
2012-10-19
The Kaplan-Meier and closely related Lynden-Bell estimators are used to provide nonparametric estimation of the distribution of a left-truncated random variable. These estimators assume that the left-truncation variable is independent of the time-to-event. This paper proposes a semiparametric method for estimating the marginal distribution of the time-to-event that does not require independence. It models the conditional distribution of the time-to-event given the truncation variable using Cox's model for left truncated data, and uses inverse probability weighting. We report the results of simulations and illustrate the method using a survival study.
Miller, Ezer; Huppert, Amit; Novikov, Ilya; Warburg, Alon; Hailu, Asrat; Abbasi, Ibrahim; Freedman, Laurence S
2015-11-10
In this work, we describe a two-stage sampling design to estimate the infection prevalence in a population. In the first stage, an imperfect diagnostic test was performed on a random sample of the population. In the second stage, a different imperfect test was performed in a stratified random sample of the first sample. To estimate infection prevalence, we assumed conditional independence between the diagnostic tests and developed method-of-moments estimators based on expectations of the proportions of people with positive and negative results on both tests, which are functions of the tests' sensitivity, specificity, and the infection prevalence. A closed-form solution of the estimating equations was obtained assuming a specificity of 100% for both tests. We applied our method to estimate the infection prevalence of visceral leishmaniasis according to two quantitative polymerase chain reaction tests performed on blood samples taken from 4756 patients in northern Ethiopia. The sensitivities of the tests were also estimated, as well as the standard errors of all estimates, using a parametric bootstrap. We also examined the impact of departures from our assumptions of 100% specificity and conditional independence on the estimated prevalence. Copyright © 2015 John Wiley & Sons, Ltd.
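Under 100% specificity and conditional independence, the moment equations have a simple closed form: if π is the prevalence and se1, se2 the sensitivities, the expected positive proportions are p1 = π·se1, p2 = π·se2, and p11 = π·se1·se2 for both tests jointly, which invert to π = p1·p2/p11. The sketch below simplifies the paper's two-stage stratified design to a single sample on which both tests are run; the counts are hypothetical.

```python
# Method-of-moments prevalence estimation from two imperfect tests,
# assuming conditional independence and 100% specificity for both
# (every positive is a true positive). Simplified to one sample tested
# by both assays; counts are hypothetical.

def mom_prevalence(n, n1_pos, n2_pos, n_both_pos):
    p1, p2, p11 = n1_pos / n, n2_pos / n, n_both_pos / n
    prevalence = p1 * p2 / p11   # pi = (pi*se1)(pi*se2)/(pi*se1*se2)
    sens1 = p11 / p2             # se1 = (pi*se1*se2)/(pi*se2)
    sens2 = p11 / p1
    return prevalence, sens1, sens2

# Data consistent with pi = 0.2, se1 = 0.8, se2 = 0.6:
prev, se1, se2 = mom_prevalence(1000, 160, 120, 96)
```

If the tests were positively dependent given infection status, p11 would exceed π·se1·se2 and the estimator would be biased downward, which is why the paper examines departures from conditional independence.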
Quantile Regression Models for Current Status Data
Ou, Fang-Shu; Zeng, Donglin; Cai, Jianwen
2016-01-01
Current status data arise frequently in demography, epidemiology, and econometrics where the exact failure time cannot be determined but is only known to have occurred before or after a known observation time. We propose a quantile regression model to analyze current status data, because it does not require distributional assumptions and the coefficients can be interpreted as direct regression effects on the distribution of failure time in the original time scale. Our model assumes that the conditional quantile of failure time is a linear function of covariates. We assume conditional independence between the failure time and observation time. An M-estimator is developed for parameter estimation which is computed using the concave-convex procedure and its confidence intervals are constructed using a subsampling method. Asymptotic properties for the estimator are derived and proven using modern empirical process theory. The small sample performance of the proposed method is demonstrated via simulation studies. Finally, we apply the proposed method to analyze data from the Mayo Clinic Study of Aging. PMID:27994307
Palatini wormholes and energy conditions from the prism of general relativity.
Bejarano, Cecilia; Lobo, Francisco S N; Olmo, Gonzalo J; Rubiera-Garcia, Diego
2017-01-01
Wormholes are hypothetical shortcuts in spacetime that in general relativity unavoidably violate all of the pointwise energy conditions. In this paper, we consider several wormhole spacetimes that, as opposed to the standard designer procedure frequently employed in the literature, arise directly from gravitational actions including additional terms resulting from contractions of the Ricci tensor with the metric, and which are formulated assuming independence between metric and connection (Palatini approach). We reinterpret such wormhole solutions under the prism of General Relativity and study the matter sources that thread them. We discuss the size of violation of the energy conditions in different cases and how this is related to the same spacetimes when viewed from the modified gravity side.
Repetitive pulses and laser-induced retinal injury thresholds
NASA Astrophysics Data System (ADS)
Lund, David J.
2007-02-01
Experimental studies with repetitively pulsed lasers show that the ED50, expressed as energy per pulse, varies as the inverse fourth power of the number of pulses in the exposure, relatively independently of the wavelength, pulse duration, or pulse repetition frequency of the laser. Models based on a thermal damage mechanism cannot readily explain this result. Menendez et al. proposed a probability-summation model for predicting the threshold for a train of pulses based on the probit statistics for a single pulse. The model assumed that each pulse is an independent trial, unaffected by any other pulse in the train, and that the probability of damage for a single pulse is adequately described by the logistic curve. The requirement that the effect of each pulse in the pulse train be unaffected by the effects of other pulses in the train is a showstopper when the end effect is viewed as a thermal effect with each pulse in the train contributing to the end temperature of the target tissue. There is evidence that the induction of cell death by microcavitation bubbles around melanin granules heated by incident laser irradiation can satisfy the condition of pulse independence as required by the probability-summation model. This paper will summarize the experimental data and discuss the relevance of the probability-summation model given microcavitation as a damage mechanism.
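The probability-summation idea is easy to make concrete: if each pulse independently causes damage with probability p(d) (logistic in log-dose), a train of N pulses causes damage with probability 1 - (1 - p(d))^N, and the N-pulse ED50 is the per-pulse dose at which that expression reaches 0.5. The slope and single-pulse ED50 below are hypothetical, not fitted retinal-injury values.

```python
# Probability-summation sketch: N independent pulses, logistic
# single-pulse dose-response. ED50 and slope values are hypothetical.
import math

def p_single(dose, ed50=1.0, slope=6.0):
    """Logistic probability of damage from one pulse (log-dose scale)."""
    return 1.0 / (1.0 + math.exp(-slope * math.log(dose / ed50)))

def p_train(dose, n):
    """Damage probability for n independent pulses at the same dose."""
    return 1.0 - (1.0 - p_single(dose)) ** n

def ed50_train(n, lo=1e-6, hi=10.0):
    """Per-pulse dose where the n-pulse damage probability is 0.5 (bisection)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if p_train(mid, n) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

ed1 = ed50_train(1)    # recovers the single-pulse ED50
ed10 = ed50_train(10)  # lower per-pulse threshold for a 10-pulse train
```

How fast ed50_train(n) falls with n depends on the probit slope, which is why the empirical n^(-1/4) trend constrains the single-pulse statistics the model can assume.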
Inherent limitations of probabilistic models for protein-DNA binding specificity
Ruan, Shuxiang
2017-01-01
The specificities of transcription factors are most commonly represented with probabilistic models. These models provide a probability for each base occurring at each position within the binding site, and the positions are assumed to contribute independently. The model is simple and intuitive and is the basis for many motif discovery algorithms. However, the model also has inherent limitations that prevent it from accurately representing true binding probabilities, especially for the highest affinity sites under conditions of high protein concentration. The limitations are not due to the assumption of independence between positions but rather are caused by the non-linear relationship between binding affinity and binding probability and the fact that independent normalization at each position skews the site probabilities. Generally, probabilistic models are reasonably good approximations, but new high-throughput methods allow for biophysical models with increased accuracy that should be used whenever possible. PMID:28686588
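The nonlinearity the abstract points to can be sketched with a minimal biophysical model: binding energies add across positions (exactly as a PWM assumes), but occupancy follows a Boltzmann/logistic function of energy, so at high protein concentration both strong and moderately strong sites saturate near probability 1 and site probabilities stop being proportional to PWM probabilities. The energy matrix and chemical-potential values below are hypothetical.

```python
# Additive binding energies, nonlinear binding probability. The energy
# matrix and mu (which rises with protein concentration) are hypothetical.
import math

energy = {  # position -> base -> energy penalty (arbitrary units)
    0: {"A": 0.0, "C": 2.0, "G": 2.0, "T": 1.0},
    1: {"A": 1.5, "C": 0.0, "G": 2.5, "T": 2.0},
}

def site_energy(site):
    """PWM-style additive energy over positions."""
    return sum(energy[i][b] for i, b in enumerate(site))

def occupancy(site, mu):
    """Boltzmann probability that the site is bound."""
    return 1.0 / (1.0 + math.exp(site_energy(site) - mu))

# At low concentration the best site "AC" is far more occupied than the
# weaker "TC"; at high concentration both saturate toward 1.
low_gap = occupancy("AC", 0.0) - occupancy("TC", 0.0)
high_gap = occupancy("AC", 10.0) - occupancy("TC", 10.0)
```

Saturation compresses the apparent specificity, which is one reason probabilities inferred from high-concentration binding data can misrepresent true affinities.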
NASA Technical Reports Server (NTRS)
Emanuel, George
1989-01-01
A variety of related scramjet engine topics are examined. The flow is assumed to be 1-D, the gas is thermally and calorically perfect, and focus is on low hypersonic Mach numbers. The thrust and lift of an exposed half nozzle, which is used on the aerospace plane, are evaluated as well as a fully confined nozzle. A rough estimate of the drag of an aerospace plane is provided. Thermal effects and shock waves are next discussed. A parametric scramjet model is then presented based on the influence coefficient method, which evaluates the dominant scramjet processes. The independent parameters are the ratio of specific heats, a nondimensional heat addition parameter, and four Mach numbers. The total thrust generated by the combustor and nozzle is shown to be independent of the heat release distribution and the combustor exit Mach number, provided thermal choking is avoided. An operating condition for the combustor is found that maximizes the thrust. An alternative condition is explored when this optimum is no longer realistic. This condition provides a favorable pressure gradient and a reasonable area ratio for the combustor. Parametric results based on the model are provided.
Bolsinova, Maria; Tijmstra, Jesper; Molenaar, Dylan; De Boeck, Paul
2017-01-01
With the widespread use of computerized tests in educational measurement and cognitive psychology, registration of response times has become feasible in many applications. Considering these response times helps provide a more complete picture of the performance and characteristics of persons beyond what is available based on response accuracy alone. Statistical models such as the hierarchical model (van der Linden, 2007) have been proposed that jointly model response time and accuracy. However, these models make restrictive assumptions about the response processes (RPs) that may not be realistic in practice, such as the assumption that the association between response time and accuracy is fully explained by taking speed and ability into account (conditional independence). Assuming conditional independence forces one to ignore that many relevant individual differences may play a role in the RPs beyond overall speed and ability. In this paper, we critically consider the assumption of conditional independence and the important ways in which it may be violated in practice from a substantive perspective. We consider both conditional dependences that may arise when all persons attempt to solve the items in similar ways (homogeneous RPs) and those that may be due to persons differing in fundamental ways in how they deal with the items (heterogeneous processes). The paper provides an overview of what we can learn from observed conditional dependences. We argue that explaining and modeling these differences in the RPs is crucial to increase both the validity of measurement and our understanding of the relevant RPs. PMID:28261136
Estimating the impact of grouping misclassification on risk ...
Environmental health risk assessments of chemical mixtures that rely on component approaches often begin by grouping the chemicals of concern according to toxicological similarity. Approaches that assume dose addition typically are used for groups of similarly-acting chemicals and those that assume response addition are used for groups of independently acting chemicals. Grouping criteria for similarity can include a common adverse outcome pathway (AOP) and similarly shaped dose-response curves, with the latter used in the relative potency factor (RPF) method for estimating mixture response. Independence of toxic action is generally assumed if there is evidence that the chemicals act by different mechanisms. Several questions arise about the potential for misclassification error in the mixture risk prediction. If a common AOP has been established, how much error could there be if the same dose-response curve shape is assumed for all chemicals, when the shapes truly differ and, conversely, what is the error potential if different shapes are assumed when they are not? In particular, how do those concerns impact the choice of index chemical and uncertainty of the RPF-estimated mixture response? What is the quantitative impact if dose additivity is assumed when complete or partial independence actually holds and vice versa? These concepts and implications will be presented with numerical examples in the context of uncertainty of the RPF-estimated mixture response…
Calculating tracer currents through narrow ion channels: Beyond the independent particle model.
Coalson, Rob D; Jasnow, David
2018-06-01
Discrete state models of single-file ion permeation through a narrow ion channel pore are employed to analyze the ratio of forward to backward tracer current. Conditions under which the well-known Ussing formula for this ratio holds are explored in systems where ions do not move independently through the channel. When detailed balance is built into the rate constants so that the Nernst equation is satisfied under equilibrium conditions (equal rates of forward and backward permeation events), it is found that in a model where only one ion can occupy the channel at a time, the Ussing formula is always obeyed for any number of binding sites, reservoir concentrations of the ions, and electric potential difference across the membrane which the ion channel spans, independent of the internal details of the permeation pathway. However, numerical analysis demonstrates that when multiple ions can occupy the channel at once, the nonequilibrium forward/backward tracer flux ratio deviates from the prediction of the Ussing model. Assuming an appropriate effective potential experienced by ions in the channel, we provide explicit formulae for the rate constants in these models. © 2018 IOP Publishing Ltd.
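The Ussing benchmark and the Nernst condition referenced above can be written out directly. Under one common sign convention, the forward/backward flux ratio for independently permeating ions is (c_in/c_out)·exp(zFV/RT), and it equals 1 exactly at the Nernst potential. Concentrations, charge, and voltage below are hypothetical.

```python
# Ussing flux-ratio and Nernst potential, under one common sign
# convention. Concentrations (mol/L) and voltage (V) are hypothetical.
import math

F = 96485.0  # Faraday constant, C/mol
R = 8.314    # gas constant, J/(mol K)

def ussing_ratio(c_in, c_out, z, v_mem, temp=298.0):
    """Forward/backward tracer flux ratio for independent permeation."""
    return (c_in / c_out) * math.exp(z * F * v_mem / (R * temp))

def nernst(c_in, c_out, z, temp=298.0):
    """Membrane potential at which net flux (and the ratio's bias) vanishes."""
    return (R * temp / (z * F)) * math.log(c_out / c_in)

v_eq = nernst(0.01, 0.1, 1)                     # ~0.059 V for a 10x gradient
ratio_at_eq = ussing_ratio(0.01, 0.1, 1, v_eq)  # exactly 1 at equilibrium
```

The paper's result is that single-occupancy channels reproduce this ratio regardless of internal kinetics, while multi-ion occupancy makes the nonequilibrium ratio deviate from it.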
Node-Based Learning of Multiple Gaussian Graphical Models
Mohan, Karthik; London, Palma; Fazel, Maryam; Witten, Daniela; Lee, Su-In
2014-01-01
We consider the problem of estimating high-dimensional Gaussian graphical models corresponding to a single set of variables under several distinct conditions. This problem is motivated by the task of recovering transcriptional regulatory networks on the basis of gene expression data containing heterogeneous samples, such as different disease states, multiple species, or different developmental stages. We assume that most aspects of the conditional dependence networks are shared, but that there are some structured differences between them. Rather than assuming that similarities and differences between networks are driven by individual edges, we take a node-based approach, which in many cases provides a more intuitive interpretation of the network differences. We consider estimation under two distinct assumptions: (1) differences between the K networks are due to individual nodes that are perturbed across conditions, or (2) similarities among the K networks are due to the presence of common hub nodes that are shared across all K networks. Using a row-column overlap norm penalty function, we formulate two convex optimization problems that correspond to these two assumptions. We solve these problems using an alternating direction method of multipliers algorithm, and we derive a set of necessary and sufficient conditions that allows us to decompose the problem into independent subproblems so that our algorithm can be scaled to high-dimensional settings. Our proposal is illustrated on synthetic data, a webpage data set, and a brain cancer gene expression data set. PMID:25309137
ANSYS Modeling of Hydrostatic Stress Effects
NASA Technical Reports Server (NTRS)
Allen, Phillip A.
1999-01-01
Classical metal plasticity theory assumes that hydrostatic pressure has no effect on the yield and postyield behavior of metals. Plasticity textbooks, from the earliest to the most modern, infer that there is no hydrostatic effect on the yielding of metals, and even modern finite element programs direct the user to assume the same. The object of this study is to use the von Mises and Drucker-Prager failure theory constitutive models in the finite element program ANSYS to see how well they model conditions of varying hydrostatic pressure. Data are presented for notched round bar (NRB) and "L" shaped tensile specimens. Similar results from finite element models in ABAQUS are shown for comparison. It is shown that when dealing with geometries having a high hydrostatic stress influence, constitutive models that have a functional dependence on hydrostatic stress are more accurate in predicting material behavior than those that are independent of hydrostatic stress.
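The distinction between the two constitutive models is visible in their yield functions: von Mises depends only on the second deviatoric invariant J2 (so it is blind to hydrostatic stress), while Drucker-Prager adds a term linear in the first invariant I1. A sketch in terms of principal stresses; the stress states and the Drucker-Prager coefficient are hypothetical, not values from the ANSYS study.

```python
# Von Mises vs. Drucker-Prager yield measures from principal stresses.
# Stress values (MPa) and the alpha coefficient are hypothetical.
import math

def stress_invariants(s):
    """I1 and J2 from principal stresses s = (s1, s2, s3)."""
    i1 = sum(s)
    mean = i1 / 3.0
    j2 = sum((x - mean) ** 2 for x in s) / 2.0
    return i1, j2

def von_mises(s):
    """Equivalent stress; independent of hydrostatic pressure."""
    _, j2 = stress_invariants(s)
    return math.sqrt(3.0 * j2)

def drucker_prager(s, alpha=0.1):
    """Yield measure with a linear hydrostatic (I1) contribution."""
    i1, j2 = stress_invariants(s)
    return alpha * i1 + math.sqrt(j2)

# Superposing a pure hydrostatic pressure leaves von Mises unchanged
# but shifts Drucker-Prager:
base = (200.0, 50.0, 0.0)
shifted = (300.0, 150.0, 100.0)  # base + 100 MPa on each principal axis
```

In high-triaxiality geometries like the notched round bar, only the second form can register the elevated hydrostatic stress, which is the paper's point.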
On the joint bimodality of temperature and moisture near stratocumulus cloud tops
NASA Technical Reports Server (NTRS)
Randall, D. A.
1983-01-01
The observed distributions of the thermodynamic variables near stratocumulus top are highly bimodal. Two simple models of sub-grid fractional cloudiness motivated by this observed bimodality are examined. In both models, certain low order moments of two independent, moist-conservative thermodynamic variables are assumed to be known. The first model is based on the assumption of two discrete populations of parcels: a warm-dry population and a cool-moist population. If only the first and second moments are assumed to be known, the number of unknowns exceeds the number of independent equations. If the third moments are assumed to be known as well, the number of independent equations exceeds the number of unknowns. The second model is based on the assumption of a continuous joint bimodal distribution of parcels, obtained as the weighted sum of two binormal distributions. For this model, the third moments are used to obtain 9 independent nonlinear algebraic equations in 11 unknowns. Two additional equations are needed to determine the covariances within the two subpopulations. If these two internal covariances vanish, the system of equations can be solved analytically.
Estimating residual fault hitting rates by recapture sampling
NASA Technical Reports Server (NTRS)
Lee, Larry; Gupta, Rajan
1988-01-01
For the recapture debugging design introduced by Nayak (1988) the problem of estimating the hitting rates of the faults remaining in the system is considered. In the context of a conditional likelihood, moment estimators are derived and are shown to be asymptotically normal and fully efficient. Fixed sample properties of the moment estimators are compared, through simulation, with those of the conditional maximum likelihood estimators. Properties of the conditional model are investigated such as the asymptotic distribution of linear functions of the fault hitting frequencies and a representation of the full data vector in terms of a sequence of independent random vectors. It is assumed that the residual hitting rates follow a log linear rate model and that the testing process is truncated when the gaps between the detection of new errors exceed a fixed amount of time.
ERIC Educational Resources Information Center
Upah-Bant, Marilyn
1978-01-01
Describes the over-all business and production operation of the "Daily Illini" at the University of Illinois to show how this college publication has assumed the burdens and responsibilities of true independence. (GW)
NASA Technical Reports Server (NTRS)
Gottlieb, D.; Turkel, E.
1985-01-01
After detailing the construction of spectral approximations to time-dependent mixed initial boundary value problems, a study is conducted of differential equations of the form 'partial derivative of u/partial derivative of t = Lu + f', where for each t, u(t) belongs to a Hilbert space such that u satisfies homogeneous boundary conditions. For the sake of simplicity, it is assumed that L is an unbounded, time-independent linear operator. Attention is given to Fourier methods of both the Galerkin and pseudospectral types, the Galerkin method, the pseudospectral Chebyshev and Legendre methods, the error equation, hyperbolic partial differential equations, and time discretization and iterative methods.
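The following sketch is not from the paper; it illustrates the 'partial u/partial t = Lu' setting in its simplest instance, a Fourier (Galerkin) treatment of the periodic heat equation, where each mode k is advanced exactly in time by the factor exp(-k^2 dt). The domain and step size are arbitrary choices.

```python
import numpy as np

def fourier_heat_step(u, dt, L=2 * np.pi):
    """One Fourier-Galerkin step, exact in time, for u_t = u_xx on a
    periodic domain: each mode k is damped by exp(-k**2 * dt)."""
    n = u.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # integer wavenumbers here
    return np.real(np.fft.ifft(np.fft.fft(u) * np.exp(-(k**2) * dt)))

x = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
u1 = fourier_heat_step(np.sin(x), dt=0.5)   # analytic answer: exp(-0.5)*sin(x)
```

Because sin(x) is a single Fourier mode (k = 1), the numerical step reproduces the analytic decay to machine precision, the hallmark of spectral accuracy.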
Off-axis impact of unidirectional composites with cracks: Dynamic stress intensification
NASA Technical Reports Server (NTRS)
Sih, G. C.; Chen, E. P.
1979-01-01
The dynamic response of unidirectional composites under off-axis (angle-loading) impact is analyzed by assuming that the composite contains an initial flaw in the matrix material. The analytical method utilizes the Fourier transform for the space variable and the Laplace transform for the time variable. The off-axis impact is separated into two parts, one being symmetric and the other skew-symmetric with reference to the crack plane. Transient boundary conditions of normal and shear tractions are applied to a crack embedded in the matrix of the unidirectional composite. The two boundary conditions are solved independently and the results superimposed. Mathematically, these conditions reduce the problem to a system of dual integral equations which are solved in the Laplace transform plane for the transform of the dynamic stress intensity factor. The time inversion is carried out numerically for various combinations of the material properties of the composite and the results are displayed graphically.
Core conditions for alpha heating attained in direct-drive inertial confinement fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bose, A.; Woo, K. M.; Betti, R.
It is shown that direct-drive implosions on the OMEGA laser have achieved core conditions that would lead to significant alpha heating at incident energies available on the National Ignition Facility (NIF) scale. The extrapolation of the experimental results from OMEGA to NIF energy assumes only that the implosion hydrodynamic efficiency is unchanged at higher energies. This approach is independent of the uncertainties in the physical mechanisms that degrade implosions on OMEGA, and relies solely on a volumetric scaling of the experimentally observed core conditions. It is estimated that the current best-performing OMEGA implosion [Regan et al., Phys. Rev. Lett. 117, 025001 (2016)] extrapolated to a 1.9 MJ laser driver with the same illumination configuration and laser-target coupling would produce 125 kJ of fusion energy with similar levels of alpha heating observed in current highest-performing indirect-drive NIF implosions.
AN INQUIRY INTO INDEPENDENT STUDY.
ERIC Educational Resources Information Center
SMITH, JANET
Independent study programs developed from the conception that the student should be encouraged to initiate inquiry and to assume greater self-reliance in the learning process. The team teaching setup was used. Several faculty members worked with individual students in tutorial programs. Independent study areas, furnished with carrels and tables,…
Uncertainty quantification for accident management using ACE surrogates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varuttamaseni, A.; Lee, J. C.; Youngblood, R. W.
The alternating conditional expectation (ACE) regression method is used to generate RELAP5 surrogates which are then used to determine the distribution of the peak clad temperature (PCT) during the loss of feedwater accident coupled with a subsequent initiation of the feed and bleed (F and B) operation in the Zion-1 nuclear power plant. The construction of the surrogates assumes conditional independence relations among key reactor parameters. The choice of parameters to model is based on the macroscopic balance statements governing the behavior of the reactor. The peak clad temperature is calculated based on the independent variables that are known to be important in determining the success of the F and B operation. The relationship between these independent variables and the plant parameters such as coolant pressure and temperature is represented by surrogates that are constructed based on 45 RELAP5 cases. The time-dependent PCT for different values of F and B parameters is calculated by sampling the independent variables from their probability distributions and propagating the information through two layers of surrogates. The results of our analysis show that the ACE surrogates are able to satisfactorily reproduce the behavior of the plant parameters even though a quasi-static assumption is primarily used in their construction. The PCT is found to be lower in cases where the F and B operation is initiated, compared to the case without F and B, regardless of the F and B parameters used. (authors)
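The two-layer sampling-and-propagation scheme described above can be sketched generically; the lambdas below are toy stand-ins, not the RELAP5/ACE surrogates, and all coefficients are invented for illustration.

```python
import random

def propagate(layer1, layer2, samplers, n=1000, rng=None):
    """Monte Carlo propagation through two surrogate layers: sample the
    independent input variables, map them through layer1 (plant
    parameters) and then layer2 (output of interest), as in the
    two-layer scheme the abstract describes."""
    if rng is None:
        rng = random.Random(0)
    return [layer2(layer1(*(s(rng) for s in samplers))) for _ in range(n)]

# Toy stand-ins (NOT the RELAP5/ACE surrogates): a 'pressure' built from
# two independent inputs, then a 'PCT' built from that pressure.
pct = propagate(lambda a, b: 7.0 + 0.5 * a + 0.2 * b,
                lambda p: 600.0 + 40.0 * p,
                [lambda r: r.gauss(0.0, 1.0), lambda r: r.uniform(-1.0, 1.0)])
```

The resulting sample of outputs approximates the distribution of the quantity of interest under the input uncertainties, which is the sense in which the paper "quantifies uncertainty" through surrogates.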
Theoretical size distribution of fossil taxa: analysis of a null model.
Reed, William J; Hughes, Barry D
2007-03-22
This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered along with a comparison of the probability of a monospecific genus with that of a monogeneric family.
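The null model lends itself to a small Gillespie-style simulation of a single genus. The rates below are illustrative assumptions, not values from the paper, and "radical speciation" is modeled simply as an event counter.

```python
import random

def genus_size_at(t_end, lam=0.5, mu=0.3, nu=0.05, seed=None):
    """Simulate the species count of a single ('pioneering') genus.

    Each extant species independently speciates at rate lam (a new
    species in the same genus), goes extinct at rate mu (background
    extinction), or undergoes radical speciation at rate nu (founding a
    new genus, counted separately). Rates are illustrative only.
    """
    rng = random.Random(seed)
    n, new_genera, t = 1, 0, 0.0
    while n > 0:
        t += rng.expovariate(n * (lam + mu + nu))  # time to next event
        if t >= t_end:
            break
        u = rng.random() * (lam + mu + nu)
        if u < lam:
            n += 1           # within-genus speciation
        elif u < lam + mu:
            n -= 1           # background extinction
        else:
            new_genera += 1  # radical speciation founds a new genus
    return n, new_genera
```

Repeating the simulation many times gives an empirical size distribution that a closed-form analysis such as the paper's can be checked against.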
The distribution of individual cabinet positions in coalition governments: A sequential approach
Meyer, Thomas M.; Müller, Wolfgang C.
2015-01-01
Multiparty government in parliamentary democracies entails bargaining over the payoffs of government participation, in particular the allocation of cabinet positions. While most of the literature deals with the numerical distribution of cabinet seats among government parties, this article explores the distribution of individual portfolios. It argues that coalition negotiations are sequential choice processes that begin with the allocation of those portfolios most important to the bargaining parties. This induces conditionality in the bargaining process as choices of individual cabinet positions are not independent of each other. Linking this sequential logic with party preferences for individual cabinet positions, the authors of the article study the allocation of individual portfolios for 146 coalition governments in Western and Central Eastern Europe. The results suggest that a sequential logic in the bargaining process results in better predictions than assuming mutual independence in the distribution of individual portfolios. PMID:27546952
Are atmospheric surface layer flows ergodic?
NASA Astrophysics Data System (ADS)
Higgins, Chad W.; Katul, Gabriel G.; Froidevaux, Martin; Simeonov, Valentin; Parlange, Marc B.
2013-06-01
The transposition of atmospheric turbulence statistics from the time domain, as conventionally sampled in field experiments, to the ensemble domain is justified by the so-called ergodic hypothesis. In micrometeorology, this hypothesis assumes that the time average of a measured flow variable represents an ensemble of independent realizations from similar meteorological states and boundary conditions. That is, the averaging duration must be sufficiently long to include a large number of independent realizations of the sampled flow variable so as to represent the ensemble. While the validity of the ergodic hypothesis for turbulence has been confirmed in laboratory experiments and numerical simulations for idealized conditions, evidence for its validity in the atmospheric surface layer (ASL), especially for nonideal conditions, continues to defy experimental efforts. There is some urgency to make progress on this problem given the proliferation of tall-tower scalar concentration networks that aim to constrain climate models yet are impacted by nonideal conditions at the land surface. Recent advancements in water vapor concentration lidar measurements that simultaneously sample spatial and temporal series in the ASL are used to investigate the validity of the ergodic hypothesis for the first time. It is shown that ergodicity is valid in a strict sense above uniform surfaces away from abrupt surface transitions. Surprisingly, ergodicity may be used to infer the ensemble concentration statistics of a composite grass-lake system using only water vapor concentration measurements collected above the sharp transition delineating the lake from the grass surface.
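A minimal numerical illustration of the ergodic hypothesis, using a stationary AR(1) process as a toy stand-in for a turbulence signal (an assumption for illustration only): the long-time average of one record should match the average over many independent realizations.

```python
import numpy as np

def ar1(n, phi=0.8, sigma=1.0, rng=None):
    """Stationary AR(1) series x[t] = phi*x[t-1] + noise, a toy
    stand-in for a stationary scalar concentration signal."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.empty(n)
    x[0] = rng.normal(0.0, sigma / np.sqrt(1 - phi**2))  # stationary start
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

rng = np.random.default_rng(42)
time_avg = ar1(20000, rng=rng).mean()                       # one long record
ensemble_avg = np.mean([ar1(50, rng=rng)[-1] for _ in range(2000)])
# For an ergodic stationary process the two estimates agree (both near 0,
# the true ensemble mean).
```

The experimental difficulty the abstract describes is exactly that, in the field, one rarely has the ensemble column of this comparison; the lidar measurements supply a spatial analogue of it.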
Estimating Causal Effects with Ancestral Graph Markov Models
Malinsky, Daniel; Spirtes, Peter
2017-01-01
We present an algorithm for estimating bounds on causal effects from observational data which combines graphical model search with simple linear regression. We assume that the underlying system can be represented by a linear structural equation model with no feedback, and we allow for the possibility of latent variables. Under assumptions standard in the causal search literature, we use conditional independence constraints to search for an equivalence class of ancestral graphs. Then, for each model in the equivalence class, we perform the appropriate regression (using causal structure information to determine which covariates to include in the regression) to estimate a set of possible causal effects. Our approach is based on the “IDA” procedure of Maathuis et al. (2009), which assumes that all relevant variables have been measured (i.e., no unmeasured confounders). We generalize their work by relaxing this assumption, which is often violated in applied contexts. We validate the performance of our algorithm on simulated data and demonstrate improved precision over IDA when latent variables are present. PMID:28217244
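The regression step can be illustrated with a three-variable linear SEM (a toy example, not the authors' algorithm): once the causal structure identifies the confounder as an adjustment covariate, ordinary least squares recovers the causal coefficient, while the unadjusted regression is biased.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50000
z = rng.normal(size=n)                        # confounder
x = 0.8 * z + rng.normal(size=n)              # treatment
y = 1.5 * x + 2.0 * z + rng.normal(size=n)    # true causal effect of x on y: 1.5

# Regressing y on x alone is biased by the open back-door path x <- z -> y.
naive = np.linalg.lstsq(np.c_[x, np.ones(n)], y, rcond=None)[0][0]
# Adjusting for z (a valid back-door set under the assumed graph)
# recovers the causal coefficient.
adjusted = np.linalg.lstsq(np.c_[x, z, np.ones(n)], y, rcond=None)[0][0]
```

When z is latent, as the paper allows, no single regression identifies the effect; searching over an equivalence class of ancestral graphs and reporting the set of resulting estimates is the paper's way of bounding it.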
Vögeli, Sabine; Wolf, Martin; Wechsler, Beat; Gygax, Lorenz
2015-01-01
Many stimuli evoke short-term emotional reactions. These reactions may play an important role in assessing how a subject perceives a stimulus. Additionally, long-term mood may modulate the emotional reactions, but it is still unclear in what way. The question seems to be important in terms of animal welfare, as a negative mood may taint emotional reactions. In the present study with sheep, we investigated the effects of thermal stimuli on emotional reactions and the potential modulating effect of mood induced by manipulations of the housing conditions. We assume that unpredictable, stimulus-poor conditions lead to a negative mood state and that predictable, stimulus-rich conditions lead to a positive one. The thermal stimuli were applied to the upper breast during warm ambient temperatures: hot (as presumably negative), intermediate, and cold (as presumably positive). We recorded cortical activity by functional near-infrared spectroscopy, restlessness behavior (e.g., locomotor activity, aversive behaviors), and ear postures as indicators of emotional reactions. The strongest hemodynamic reaction was found during a stimulus of intermediate valence independent of the animal’s housing conditions, whereas locomotor activity, ear movements, and aversive behaviors were seen most in sheep from the unpredictable, stimulus-poor housing conditions, independent of stimulus valence. We conclude that sheep perceived the thermal stimuli and differentiated between some of them. An adequate interpretation of the neuronal activity pattern remains difficult, though. The effects of housing conditions were small, indicating that the induction of mood was only modestly efficacious. Therefore, a modulating effect of mood on the emotional reaction was not found. PMID:26664938
Coactivation of Gustatory and Olfactory Signals in Flavor Perception
Veldhuizen, Maria G.; Shepard, Timothy G.; Wang, Miao-Fen
2010-01-01
It is easier to detect mixtures of gustatory and olfactory flavorants than to detect either component alone. But does the detection of mixtures exceed the level predicted by probability summation, assuming independent detection of each component? To answer this question, we measured simple response times (RTs) to detect brief pulses of one of 3 flavorants (sucrose [gustatory], citral [olfactory], sucrose–citral mixture) or water, presented into the mouth by a computer-operated, automated flow system. Subjects were instructed to press a button as soon as they detected any of the 3 nonwater stimuli. Responses to the mixtures were faster (RTs smaller) than predicted by a model of probability summation of independently detected signals, suggesting positive coactivation (integration) of gustation and retronasal olfaction in flavor perception. Evidence for integration appeared mainly in the fastest 60% of the responses, indicating that integration arises relatively early in flavor processing. Results were similar when the 3 possible flavorants, and water, were interleaved within the same session (experimental condition), and when each flavorant was interleaved with water only (control conditions). This outcome suggests that subjects did not attend selectively to one flavor component or the other in the experimental condition and further supports the conclusion that (late) decisional or attentional strategies do not exert a large influence on the gustatory–olfactory flavor integration. PMID:20032112
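The probability-summation benchmark referenced above has a one-line form for detection probabilities (the RT analysis uses the analogous race-model bound on response-time CDFs); the numbers below are illustrative, not the paper's data.

```python
def probability_summation(p_taste, p_smell):
    """Mixture detection probability predicted if the gustatory and
    olfactory signals are detected independently (the race /
    probability-summation benchmark)."""
    return 1 - (1 - p_taste) * (1 - p_smell)

# Illustrative numbers (not the paper's data): independent detection of
# components at 0.6 and 0.5 predicts 0.8 for the mixture; observed
# performance above this bound (or RTs faster than the race-model bound)
# is the signature of coactivation.
mixture_pred = probability_summation(0.6, 0.5)
```

Exceeding this bound is what licenses the paper's conclusion of positive coactivation rather than mere independent racing of the two channels.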
Phase-Reference-Free Experiment of Measurement-Device-Independent Quantum Key Distribution
NASA Astrophysics Data System (ADS)
Wang, Chao; Song, Xiao-Tian; Yin, Zhen-Qiang; Wang, Shuang; Chen, Wei; Zhang, Chun-Mei; Guo, Guang-Can; Han, Zheng-Fu
2015-10-01
Measurement-device-independent quantum key distribution (MDI QKD) is a substantial step toward practical information-theoretic security for key sharing between remote legitimate users (Alice and Bob). As with other standard device-dependent quantum key distribution protocols, such as BB84, MDI QKD assumes that the reference frames have been shared between Alice and Bob. In practice, a nontrivial alignment procedure is often necessary, which requires system resources and may significantly reduce the secure key generation rate. Here, we propose a phase-coding reference-frame-independent MDI QKD scheme that requires no phase alignment between the interferometers of two distant legitimate parties. As a demonstration, a proof-of-principle experiment using Faraday-Michelson interferometers is presented. The experimental system worked at 1 MHz, and an average secure key rate of 8.309 bps was obtained at a fiber length of 20 km between Alice and Bob. The system can maintain a positive key generation rate without phase compensation under normal conditions. The results exhibit the feasibility of our system for use in mature MDI QKD devices and its value for network scenarios.
One hundred years of return period: Strengths and limitations
NASA Astrophysics Data System (ADS)
Volpi, E.; Fiori, A.; Grimaldi, S.; Lombardo, F.; Koutsoyiannis, D.
2015-10-01
One hundred years from its original definition by Fuller, the probabilistic concept of return period is widely used in hydrology as well as in other disciplines of geosciences to give an indication on critical event rareness. This concept gains its popularity, especially in engineering practice for design and risk assessment, due to its ease of use and understanding; however, return period relies on some basic assumptions that should be satisfied for a correct application of this statistical tool. Indeed, conventional frequency analysis in hydrology is performed by assuming as necessary conditions that extreme events arise from a stationary distribution and are independent of one another. The main objective of this paper is to investigate the properties of return period when the independence condition is omitted; hence, we explore how the different definitions of return period available in literature affect results of frequency analysis for processes correlated in time. We demonstrate that, for stationary processes, the independence condition is not necessary in order to apply the classical equation of return period (i.e., the inverse of exceedance probability). On the other hand, we show that the time-correlation structure of hydrological processes modifies the shape of the distribution function of which the return period represents the first moment. This implies that, in the context of time-dependent processes, the return period might not represent an exhaustive measure of the probability of failure, and that its blind application could lead to misleading results. To overcome this problem, we introduce the concept of Equivalent Return Period, which controls the probability of failure still preserving the virtue of effectively communicating the event rareness.
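The "classical equation of return period" mentioned above is simply the reciprocal of the exceedance probability; a sketch with an assumed Gumbel model for annual maxima (parameter values arbitrary):

```python
import math

def return_period(x, cdf):
    """Classical return period: T = 1 / P(X > x) = 1 / (1 - F(x)).
    Per the paper, for stationary processes this holds even without
    the independence condition."""
    return 1.0 / (1.0 - cdf(x))

# Assumed Gumbel CDF for annual maxima (mu, beta chosen for illustration).
gumbel = lambda x, mu=10.0, beta=2.0: math.exp(-math.exp(-(x - mu) / beta))
T_at_mu = return_period(10.0, gumbel)   # F(mu) = exp(-1) ~ 0.37, so T ~ 1.58 years
```

The paper's caution is that for time-correlated processes this first moment alone may not summarize the probability of failure over a design life, which motivates their Equivalent Return Period.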
New Variational Formulations of Hybrid Stress Elements
NASA Technical Reports Server (NTRS)
Pian, T. H. H.; Sumihara, K.; Kang, D.
1984-01-01
In the variational formulations of finite elements by the Hu-Washizu and Hellinger-Reissner principles the stress equilibrium condition is maintained by the inclusion of internal displacements which function as the Lagrange multipliers for the constraints. These versions permit the use of natural coordinates and the relaxation of the equilibrium conditions and render considerable improvements in the assumed stress hybrid elements. These include the derivation of invariant hybrid elements which possess the ideal qualities such as minimum sensitivity to geometric distortions, minimum number of independent stress parameters, rank sufficiency, and the ability to represent constant strain states and bending moments. Another application is the formulation of semi-Loof thin shell elements which can yield excellent results for many severe test cases because the rigid body modes, the momentless membrane strains, and the inextensional bending modes are all represented.
van der Star, Sanne M; van den Berg, Bernard
2011-08-01
This study analyzes people's social preferences for individual responsibility for health-risk behaviour in health care using the contingent valuation method, adopting a societal perspective. We measure people's willingness to pay for inclusion of a treatment in basic health insurance for a hypothetical lifestyle-dependent (smoking) and a lifestyle-independent (chronic) health problem. Our hypothesis is that people's willingness to pay for the independent and the dependent health problems is similar. As a methodological challenge, this study also analyzes the extent to which people consider their personal situation when answering contingent valuation questions adopting a societal perspective. 513 Dutch inhabitants responded to the questionnaire. They were asked to state their maximum willingness to pay for inclusion of treatments in the basic health insurance package for two health problems. We asked them to assume that one hypothetical health problem was totally independent of behaviour (for simplicity called the chronic disease). Alternatively, we asked them to assume that the other hypothetical health problem was totally caused by health-risk behaviour (for simplicity called the smoking disease). We applied the payment card method to guide respondents in answering the contingent valuation method questions. Mean willingness to pay was 42.39 Euros (CI=37.24-47.55) for inclusion of treatment for the health problem that was unrelated to behaviour, with '5-10' and '10-20 Euros' as the most frequently stated answers. In contrast, mean willingness to pay for inclusion of treatment for the health-risk-related problem was 11.29 Euros (CI=8.83-14.55), with '0' and '0-5 Euros' as the most frequently provided answers. The difference in mean willingness to pay was substantial (over 30 Euros) and statistically significant (p-value=0.000).
Smokers were statistically significantly more willing (p-value<0.01) to pay for the health-risk-related (smoking) problem compared with non-smokers, while people with a chronic condition were not willing to pay more for the health-risk-unrelated (chronic) problem than people without a chronic condition. This suggests that subgroups of people might differ in terms of abstracting from their personal situation when answering valuation questions from a societal perspective. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Kerr, Robert R.; Grayden, David B.; Thomas, Doreen A.; Gilson, Matthieu; Burkitt, Anthony N.
2014-01-01
A fundamental goal of neuroscience is to understand how cognitive processes, such as operant conditioning, are performed by the brain. Typical and well studied examples of operant conditioning, in which the firing rates of individual cortical neurons in monkeys are increased using rewards, provide an opportunity for insight into this. Studies of reward-modulated spike-timing-dependent plasticity (RSTDP), and of other models such as R-max, have reproduced this learning behavior, but they have assumed that no unsupervised learning is present (i.e., no learning occurs without, or independent of, rewards). We show that these models cannot elicit firing rate reinforcement while exhibiting both reward learning and ongoing, stable unsupervised learning. To fix this issue, we propose a new RSTDP model of synaptic plasticity based upon the observed effects that dopamine has on long-term potentiation and depression (LTP and LTD). We show, both analytically and through simulations, that our new model can exhibit unsupervised learning and lead to firing rate reinforcement. This requires that the strengthening of LTP by the reward signal is greater than the strengthening of LTD and that the reinforced neuron exhibits irregular firing. We show the robustness of our findings to spike-timing correlations, to the synaptic weight dependence that is assumed, and to changes in the mean reward. We also consider our model in the differential reinforcement of two nearby neurons. Our model aligns more strongly with experimental studies than previous models and makes testable predictions for future experiments. PMID:24475240
NASA Astrophysics Data System (ADS)
Arfi, Badredine
2007-02-01
Most game-theoretic studies of strategic interaction assume independent individual strategies as the basic unit of analysis. This paper explores the effects of non-independence on strategic interaction. Two types of non-independence effects are considered. First, the paper considers subjective non-independence at the level of the individual actor by looking at how choice ambivalence shapes the decision-making process. Specifically, how do alternative individual choices superpose with one another to “constructively/destructively” shape each other's role within an actor's decision-making process? This process is termed quantum superposition of alternative choices. Second, the paper considers how inter-subjective non-independence across actors engenders collective strategies among two or more interacting actors. This is termed quantum entanglement of strategies. Taking into account both types of non-independence effect makes possible the emergence of a new collective equilibrium, without assuming signaling, prior “contract” agreement or third-party moderation, or even “cheap talk”. I apply these ideas to analyze the equilibrium possibilities of a situation wherein N actors play a quantum social game of cooperation. I consider different configurations of large-N quantum entanglement using the density operator approach. I specifically consider the following configurations: star-shaped, nearest-neighbors, and full entanglement.
Korsgaard, Inge Riis; Lund, Mogens Sandø; Sorensen, Daniel; Gianola, Daniel; Madsen, Per; Jensen, Just
2003-01-01
A fully Bayesian analysis using Gibbs sampling and data augmentation in a multivariate model of Gaussian, right censored, and grouped Gaussian traits is described. The grouped Gaussian traits are either ordered categorical traits (with more than two categories) or binary traits, where the grouping is determined via thresholds on the underlying Gaussian scale, the liability scale. Allowances are made for unequal models, unknown covariance matrices and missing data. Having outlined the theory, strategies for implementation are reviewed. These include joint sampling of location parameters; efficient sampling from the fully conditional posterior distribution of augmented data, a multivariate truncated normal distribution; and sampling from the conditional inverse Wishart distribution, the fully conditional posterior distribution of the residual covariance matrix. Finally, a simulated dataset was analysed to illustrate the methodology. This paper concentrates on a model where residuals associated with liabilities of the binary traits are assumed to be independent. A Bayesian analysis using Gibbs sampling is outlined for the model where this assumption is relaxed. PMID:12633531
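The key augmented-data step, sampling a liability from a normal distribution truncated between the category thresholds, can be sketched without external libraries (an illustrative inverse-CDF version, not the authors' implementation):

```python
import math
import random

def rtruncnorm(mu, sigma, lo, hi, rng=random):
    """Sample from Normal(mu, sigma^2) truncated to (lo, hi) by inverting
    the CDF -- the step used for augmented liabilities of binary and
    ordered categorical traits in threshold models."""
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    a, b = Phi((lo - mu) / sigma), Phi((hi - mu) / sigma)
    u = a + rng.random() * (b - a)          # uniform on the truncated CDF range
    # Invert the standard normal CDF by bisection (simple, dependency-free).
    z_lo, z_hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (z_lo + z_hi)
        if Phi(mid) < u:
            z_lo = mid
        else:
            z_hi = mid
    return mu + sigma * 0.5 * (z_lo + z_hi)
```

Within a Gibbs sweep, each liability is drawn this way conditional on its trait category, after which the location parameters and covariance matrix are updated from their own full conditionals.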
Anandhakumar, Jayamani; Moustafa, Yara W.; Chowdhary, Surabhi; Kainth, Amoldeep S.
2016-01-01
Mediator is an evolutionarily conserved coactivator complex essential for RNA polymerase II transcription. Although it has been generally assumed that in Saccharomyces cerevisiae, Mediator is a stable trimodular complex, its structural state in vivo remains unclear. Using the “anchor away” (AA) technique to conditionally deplete select subunits within Mediator and its reversibly associated Cdk8 kinase module (CKM), we provide evidence that Mediator's tail module is highly dynamic and that a subcomplex consisting of Med2, Med3, and Med15 can be independently recruited to the regulatory regions of heat shock factor 1 (Hsf1)-activated genes. Fluorescence microscopy of a scaffold subunit (Med14)-anchored strain confirmed parallel cytoplasmic sequestration of core subunits located outside the tail triad. In addition, and contrary to current models, we provide evidence that Hsf1 can recruit the CKM independently of core Mediator and that core Mediator has a role in regulating postinitiation events. Collectively, our results suggest that yeast Mediator is not monolithic but potentially has a dynamic complexity heretofore unappreciated. Multiple species, including CKM-Mediator, the 21-subunit core complex, the Med2-Med3-Med15 tail triad, and the four-subunit CKM, can be independently recruited by activated Hsf1 to its target genes in AA strains. PMID:27185874
Quantifying Wrinkle Features of Thin Membrane Structures
NASA Technical Reports Server (NTRS)
Jacobson, Mindy B.; Iwasa, Takashi; Naton, M. C.
2004-01-01
For future micro-systems utilizing membrane based structures, quantified predictions of wrinkling behavior in terms of amplitude, angle and wavelength are needed to optimize the efficiency and integrity of such structures, as well as their associated control systems. For numerical analyses performed in the past, limitations on the accuracy of membrane distortion simulations have often been related to the assumptions made. This work demonstrates that critical assumptions include: effects of gravity, assumed initial or boundary conditions, and the type of element used to model the membrane. In this work, a 0.2 m x 0.2 m membrane is treated as a structural material with non-negligible bending stiffness. Finite element modeling is used to simulate wrinkling behavior due to a constant applied in-plane shear load. Membrane thickness, gravity effects, and initial imperfections with respect to flatness were varied in numerous nonlinear analysis cases. Significant findings include notable variations in wrinkle modes for thicknesses in the range of 50 microns to 1000 microns, which also depend on the presence of an applied gravity field. However, it is revealed that relationships between overall strain energy density and thickness for cases with differing initial conditions are independent of assumed initial conditions. In addition, analysis results indicate that the relationship between wrinkle amplitude scale (W/t) and structural scale (L/t) is independent of the nonlinear relationship between thickness and stiffness.
Relaxation Dynamics of a Granular Pile on a Vertically Vibrating Plate
NASA Astrophysics Data System (ADS)
Tsuji, Daisuke; Otsuki, Michio; Katsuragi, Hiroaki
2018-03-01
The nonlinear relaxation dynamics of a vertically vibrated granular pile are studied experimentally. In the experiment, the flux and slope on the relaxing pile are measured by using a high-speed laser profiler. The relation between these quantities can be modeled by a nonlinear transport law that assumes uniform vibrofluidization of the entire pile. The only fitting parameter in this model is the relaxation efficiency, which characterizes the energy conversion rate from vertical vibration into horizontal transport. We demonstrate that this value is constant, independent of experimental conditions. The actual relaxation is successfully reproduced by the continuity equation with the proposed model. Finally, its applicability to an astrophysical phenomenon is shown.
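The flux-slope relation and continuity equation described above can be illustrated with a minimal one-dimensional sketch. The threshold flux law, the critical slope, and every parameter value here are assumptions for illustration only; they are not the transport law or relaxation efficiency fitted in the paper.

```python
import numpy as np

# Illustrative 1-D pile relaxation via the continuity equation
# dh/dt = -dq/dx, with an assumed threshold flux law
# q = c * max(|slope| - s_c, 0) * sign(slope).
# All parameter values below are placeholders, not fitted quantities.

def relax_pile(h, dx=1.0, dt=0.005, c=1.0, s_c=0.5, steps=40000):
    h = h.astype(float).copy()
    for _ in range(steps):
        slope = -np.diff(h) / dx                        # positive when downhill
        q = c * np.maximum(np.abs(slope) - s_c, 0.0) * np.sign(slope)
        dq = np.diff(q, prepend=0.0, append=0.0)        # flux divergence
        h -= dt * dq / dx                               # mass-conserving update
    return h

h0 = np.linspace(10.0, 0.0, 11)                         # initial slope 1.0 > s_c
h = relax_pile(h0)
```

The pile relaxes toward the critical slope `s_c` everywhere while total mass is conserved, mimicking the role of the relaxation efficiency in setting how fast slope decays.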
Aviation fuel property effects on altitude relight
NASA Technical Reports Server (NTRS)
Venkataramani, K.
1987-01-01
The major objective of this experimental program was to investigate the effects of fuel property variation on altitude relight characteristics. Four fuels with widely varying volatility properties (JP-4, Jet A, a blend of Jet A and 2040 Solvent, and Diesel 2) were tested in a five-swirl-cup-sector combustor at inlet temperatures and flows representative of windmilling conditions of turbofan engines. The effects of fuel physical properties on atomization were eliminated by using four sets of pressure-atomizing nozzles designed to give the same spray Sauter mean diameter (50 ± 10 microns) for each fuel at the same design fuel flow. A second series of tests was run with a set of air-blast nozzles. With comparable atomization levels, fuel volatility assumes only a secondary role for first-swirl-cup lightoff and complete blowout. Full propagation and first-cup blowout were independent of fuel volatility and depended only on the combustor operating conditions.
NASA Technical Reports Server (NTRS)
Holmes, Thomas; Owe, Manfred; deJeu, Richard
2007-01-01
Two data sets of experimental field observations with a range of meteorological conditions are used to investigate the possibility of modeling near-surface soil temperature profiles in a bare soil. It is shown that commonly used heat flow methods that assume a constant ground heat flux cannot be used to model the extreme variations in temperature that occur near the surface. This paper proposes a simple approach for modeling the surface soil temperature profiles from a single depth observation. This approach consists of two parts: 1) modeling an instantaneous ground heat flux profile based on net radiation and the ground heat flux at 5 cm depth; 2) using this ground heat flux profile to extrapolate a single temperature observation to a continuous near-surface temperature profile. The new model is validated with an independent data set from a different soil and under a range of meteorological conditions.
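The two-part approach can be sketched in a few lines, assuming (1) the ground heat flux G(z) varies linearly between a surface value and the measured flux at 5 cm, and (2) Fourier's law G = -λ dT/dz with constant conductivity. Both assumptions and all numbers are illustrative placeholders, not the paper's calibrated model.

```python
# Sketch: extrapolate one temperature observation to a near-surface profile.
# The linear flux profile and lam = 1.0 W/(m K) are illustrative assumptions.

def temperature_profile(T_obs, z_obs, G_surf, G_5cm, lam=1.0, n=50):
    """Return (depths, temperatures) from the surface down to z_obs (m)."""
    zs = [z_obs * i / (n - 1) for i in range(n)]
    def G(z):                                  # assumed linear flux to 5 cm
        return G_surf + (G_5cm - G_surf) * min(z / 0.05, 1.0)
    temps = [0.0] * n
    temps[-1] = T_obs                          # the single observation at z_obs
    for i in range(n - 2, -1, -1):             # integrate dT/dz = -G/lam upward
        dz = zs[i + 1] - zs[i]
        G_mid = 0.5 * (G(zs[i]) + G(zs[i + 1]))
        temps[i] = temps[i + 1] + G_mid / lam * dz
    return zs, temps

# Daytime example: 100 W/m^2 downward flux at the surface, 60 W/m^2 at 5 cm,
# and a 20 degC observation at 5 cm depth.
zs, Ts = temperature_profile(T_obs=20.0, z_obs=0.05, G_surf=100.0, G_5cm=60.0)
```

With a positive (downward) daytime flux the extrapolated surface is warmer than the 5 cm observation, reproducing the strong near-surface gradient the constant-flux methods miss.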
Two-dimensional strain gradient damage modeling: a variational approach
NASA Astrophysics Data System (ADS)
Placidi, Luca; Misra, Anil; Barchiesi, Emilio
2018-06-01
In this paper, we formulate a linear elastic second gradient isotropic two-dimensional continuum model accounting for irreversible damage. Failure is defined as the condition in which the damage parameter reaches 1 in at least one point of the domain. The quasi-static approximation is made, i.e., the kinetic energy is assumed to be negligible. In order to deal with dissipation, a damage dissipation term is considered in the deformation energy functional. The key goal of this paper is to apply a non-standard variational procedure to exploit the damage irreversibility argument. As a result, we derive not only the equilibrium equations but, notably, also the Karush-Kuhn-Tucker conditions. Finally, numerical simulations for exemplary problems are discussed as some constitutive parameters are varied, including evidence of mesh independence. The element-free Galerkin method with moving least-squares shape functions was employed.
Face (and Nose) Priming for Book: The Malleability of Semantic Memory
Coane, Jennifer H.; Balota, David A.
2010-01-01
There are two general classes of models of semantic structure that support semantic priming effects. Feature-overlap models of semantic priming assume that shared features between primes and targets are critical (e.g., cat-DOG). Associative accounts assume that contextual co-occurrence is critical and that the system is organized along associations independent of featural overlap (e.g., leash-DOG). If unrelated concepts can become related as a result of contextual co-occurrence, this would be more supportive of associative accounts and provide insight into the nature of the network underlying “semantic” priming effects. Naturally co-occurring recent associations (e.g., face-BOOK) were tested under conditions that minimize strategic influences (i.e., short stimulus onset asynchrony, low relatedness proportion) in a semantic priming paradigm. Priming for new associations did not differ from the priming found for pre-existing relations (e.g., library-BOOK). Mediated priming (e.g., nose-BOOK) was also found. These results suggest that contextual associations can result in the reorganization of the network that subserves “semantic” priming effects. PMID:20494866
Independent Events in Elementary Probability Theory
ERIC Educational Resources Information Center
Csenki, Attila
2011-01-01
In Probability and Statistics taught to mathematicians as a first introduction or to a non-mathematical audience, joint independence of events is introduced by requiring that the multiplication rule is satisfied. The following statement is usually tacitly assumed to hold (and, at best, intuitively motivated): If the n events E[subscript 1],…
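The multiplication rule referred to above can be written out explicitly; the following is a standard textbook formulation (not quoted from the article). Joint independence of events $E_1,\dots,E_n$ requires the product rule for every subcollection, not just for pairs:

```latex
P\!\Bigl(\bigcap_{j=1}^{k} E_{i_j}\Bigr) \;=\; \prod_{j=1}^{k} P\bigl(E_{i_j}\bigr)
\qquad \text{for all } 1 \le i_1 < \dots < i_k \le n,\; k \ge 2 .
```

Pairwise independence (the case $k = 2$ alone) does not imply joint independence, which is one reason tacit assumptions of this kind deserve explicit proof rather than intuitive motivation.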
14 CFR 27.481 - Tail-down landing conditions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Tail-down landing conditions. (a) The rotorcraft is assumed to be in the maximum nose-up attitude allowing ground clearance by each part of the rotorcraft. (b) In this attitude, ground loads are assumed to...
14 CFR 29.481 - Tail-down landing conditions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Tail-down landing conditions. (a) The rotorcraft is assumed to be in the maximum nose-up attitude allowing ground clearance by each part of the rotorcraft. (b) In this attitude, ground loads are assumed to...
Cross-domain expression recognition based on sparse coding and transfer learning
NASA Astrophysics Data System (ADS)
Yang, Yong; Zhang, Weiyi; Huang, Yong
2017-05-01
Traditional facial expression recognition methods usually assume that the training set and the test set are independent and identically distributed. However, in actual expression recognition applications, this condition is rarely satisfied for the training and test sets because of differences in lighting, shading, ethnicity, and so on. In order to solve this problem and improve the performance of expression recognition in actual applications, a novel method based on transfer learning and sparse coding is applied to facial expression recognition. First, a common primitive model, i.e., a dictionary, is learned. Then, based on the idea of transfer learning, the learned primitive pattern is transferred to facial expressions and the corresponding feature representation is obtained by sparse coding. Experimental results on the CK+, JAFFE, and NVIE databases show that the sparse-coding-based transfer learning method can effectively improve the expression recognition rate in cross-domain expression recognition tasks and is suitable for practical facial expression recognition applications.
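The sparse-coding step can be sketched with ISTA on a fixed dictionary. The dictionary and data below are random placeholders; the paper's transfer step (learning the dictionary on a source domain and reusing it for expression features) is not reproduced here.

```python
import numpy as np

# Sketch of sparse coding: given a dictionary D, represent x by codes a
# minimizing (1/2)||x - D a||^2 + lam * ||a||_1 via ISTA.
# D and x are random placeholders, not expression features.

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(x, D, lam=0.05, iters=500):
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ a - x)
        a = soft_threshold(a - grad / L, lam / L)
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(20, 50))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
x = 2.0 * D[:, 3] - 1.5 * D[:, 17]         # signal built from two atoms
a = sparse_code(x, D)
```

The resulting code `a` is sparse and reconstructs `x` well; in the paper's pipeline such codes serve as the transferred feature representation fed to a classifier.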
Yang, Xinsong; Feng, Zhiguo; Feng, Jianwen; Cao, Jinde
2017-01-01
In this paper, synchronization in an array of discrete-time neural networks (DTNNs) with time-varying delays coupled by Markov jump topologies is considered. It is assumed that the switching information can be collected by a tracker with a certain probability and transmitted from the tracker to the controller precisely. The controller then selects suitable control gains based on the received switching information to synchronize the network. This new control scheme makes full use of the received information and overcomes the shortcomings of mode-dependent and mode-independent control schemes. Moreover, the proposed control method includes both the mode-dependent and mode-independent control techniques as special cases. By using the linear matrix inequality (LMI) method and designing new Lyapunov functionals, delay-dependent conditions are derived to guarantee that the DTNNs with Markov jump topologies are asymptotically synchronized. Compared with existing results on Markov systems obtained by separately using mode-dependent and mode-independent methods, our result has great flexibility in practical applications. Numerical simulations are finally given to demonstrate the effectiveness of the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Bajargaan, Ruchi; Patel, Arvind
2018-04-01
One-dimensional unsteady adiabatic flow behind an exponential shock wave propagating in a self-gravitating, rotating, axisymmetric dusty gas with heat conduction and radiation heat flux, which has exponentially varying azimuthal and axial fluid velocities, is investigated. The shock wave is driven out by a piston moving with time according to an exponential law. The dusty gas is taken to be a mixture of a non-ideal gas and small solid particles. The density of the ambient medium is assumed to be constant. The equilibrium flow conditions are maintained, and the energy, which is continuously supplied by the piston, varies exponentially. The heat conduction is expressed in terms of Fourier's law, and the radiation is assumed to be of the diffusion type for an optically thick grey gas model. The thermal conductivity and the absorption coefficient are assumed to vary with temperature and density according to a power law. The effects of the variation of the heat transfer parameters, the gravitation parameter, and the dusty gas parameters on the shock strength, the distance between the piston and the shock front, and the flow variables are studied in detail. It is interesting to note that the similarity solution exists under constant initial angular velocity, and that the shock strength is independent of self-gravitation, heat conduction, and radiation heat flux.
2016-01-01
Reports an error in "A violation of the conditional independence assumption in the two-high-threshold model of recognition memory" by Tina Chen, Jeffrey J. Starns and Caren M. Rotello (Journal of Experimental Psychology: Learning, Memory, and Cognition, 2015[Jul], Vol 41[4], 1215-1222). In the article, Chen et al. compared three models: a continuous signal detection model (SDT), a standard two-high-threshold discrete-state model in which detect states always led to correct responses (2HT), and a full-mapping version of the 2HT model in which detect states could lead to either correct or incorrect responses. After publication, Rani Moran (personal communication, April 21, 2015) identified two errors that impact the reported fit statistics for the Bayesian information criterion (BIC) metric of all models as well as the Akaike information criterion (AIC) results for the full-mapping model. The errors are described in the erratum. (The following abstract of the original article appeared in record 2014-56216-001.) The 2-high-threshold (2HT) model of recognition memory assumes that test items result in distinct internal states: they are either detected or not, and the probability of responding at a particular confidence level that an item is "old" or "new" depends on the state-response mapping parameters. The mapping parameters are independent of the probability that an item yields a particular state (e.g., both strong and weak items that are detected as old have the same probability of producing a highest-confidence "old" response). We tested this conditional independence assumption by presenting nouns 1, 2, or 4 times. To maximize the strength of some items, "superstrong" items were repeated 4 times and encoded in conjunction with pleasantness, imageability, anagram, and survival processing tasks. The 2HT model failed to simultaneously capture the response rate data for all item classes, demonstrating that the data violated the conditional independence assumption. 
In contrast, a Gaussian signal detection model, which posits that the level of confidence that an item is "old" or "new" is a function of its continuous strength value, provided a good account of the data. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
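The conditional-independence structure at issue can be made concrete in a few lines: detect probabilities vary with item strength, while the state-to-rating mapping parameters are shared across item classes. The parameter values below are made up for illustration, not those estimated in the article.

```python
# Standard 2HT prediction for confidence ratings on old items:
# P(rating r | old) = d * detect_map[r] + (1 - d) * guess_map[r],
# where only d (the detect probability) differs between item strengths.
# All parameter values are illustrative assumptions.

def rating_probs(d, detect_map, guess_map):
    return [d * dm + (1.0 - d) * gm for dm, gm in zip(detect_map, guess_map)]

detect_map = [0.7, 0.2, 0.1, 0.0, 0.0, 0.0]   # detected: high-confidence "old"
guess_map  = [0.1, 0.2, 0.2, 0.2, 0.2, 0.1]   # non-detected: diffuse guessing

weak   = rating_probs(0.4, detect_map, guess_map)
strong = rating_probs(0.8, detect_map, guess_map)
```

Conditional independence is exactly the constraint that `detect_map` and `guess_map` are the same for weak, strong, and superstrong items; the article's finding is that no single mapping could fit all item classes at once.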
Generalized fish life-cycle population model and computer program
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeAngelis, D. L.; Van Winkle, W.; Christensen, S. W.
1978-03-01
A generalized fish life-cycle population model and computer program have been prepared to evaluate the long-term effect of changes in mortality in age class 0. The general question concerns what happens to a fishery when density-independent sources of mortality are introduced that act on age class 0, particularly entrainment and impingement at power plants. This paper discusses the model formulation and computer program, including sample results. The population model consists of a system of difference equations involving age-dependent fecundity and survival. The fecundity for each age class is assumed to be a function of both the fraction of females sexually mature and the weight of females as they enter each age class. Natural mortality for age classes 1 and older is assumed to be independent of population size. Fishing mortality is assumed to vary with the number and weight of fish available to the fishery. Age class 0 is divided into six life stages. The probability of survival for age class 0 is estimated considering both density-independent mortality (natural and power plant) and density-dependent mortality for each life stage. Two types of density-dependent mortality are included. These are cannibalism of each life stage by older age classes and intra-life-stage competition.
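The difference-equation structure can be sketched compactly. Here a single Beverton-Holt term stands in for the six age-0 life stages with density-dependent mortality, and all parameter values are illustrative assumptions, not those of the actual program.

```python
# Sketch of an age-structured projection: density-independent survival for
# ages 1+ and density-dependent age-0 recruitment (Beverton-Holt stand-in
# for the model's six life stages). All parameters are illustrative.

def project(n, fecundity, survival, a=0.001, s0=0.05, years=100):
    """n[i] = abundance of age class i; return abundances after `years`."""
    n = list(n)
    for _ in range(years):
        eggs = sum(f * ni for f, ni in zip(fecundity, n))
        recruits = s0 * eggs / (1.0 + a * eggs)      # density-dependent age 0
        older = [s * ni for s, ni in zip(survival, n[:-1])]
        n = [recruits] + older
    return n

fecundity = [0.0, 0.0, 50.0, 100.0, 150.0]   # eggs per female by age class
survival  = [0.5, 0.6, 0.6, 0.5]             # survival from age i to i+1
n = project([1000, 100, 50, 20, 10], fecundity, survival)
```

Adding an extra density-independent mortality factor to the `recruits` term (entrainment/impingement) and re-projecting is the kind of long-term comparison the program was built to make.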
ERIC Educational Resources Information Center
Malmberg, Kenneth J.; Annis, Jeffrey
2012-01-01
Many models of recognition are derived from models originally applied to perception tasks, which assume that decisions from trial to trial are independent. While the independence assumption is violated for many perception tasks, we present the results of several experiments intended to relate memory and perception by exploring sequential…
A Multilevel Testlet Model for Dual Local Dependence
ERIC Educational Resources Information Center
Jiao, Hong; Kamata, Akihito; Wang, Shudong; Jin, Ying
2012-01-01
The applications of item response theory (IRT) models assume local item independence and that examinees are independent of each other. When a representative sample for psychometric analysis is selected using a cluster sampling method in a testlet-based assessment, both local item dependence and local person dependence are likely to be induced.…
An Extension of IRT-Based Equating to the Dichotomous Testlet Response Theory Model
ERIC Educational Resources Information Center
Tao, Wei; Cao, Yi
2016-01-01
Current procedures for equating number-correct scores using traditional item response theory (IRT) methods assume local independence. However, when tests are constructed using testlets, one concern is the violation of the local item independence assumption. The testlet response theory (TRT) model is one way to accommodate local item dependence.…
Variants of Independence in the Perception of Facial Identity and Expression
ERIC Educational Resources Information Center
Fitousi, Daniel; Wenger, Michael J.
2013-01-01
A prominent theory in the face perception literature--the parallel-route hypothesis (Bruce & Young, 1986)--assumes a dedicated channel for the processing of identity that is separate and independent from the channel(s) in which nonidentity information is processed (e.g., expression, eye gaze). The current work subjected this assumption to…
Grammatical Gender and Inferences about Biological Properties in German-Speaking Children
ERIC Educational Resources Information Center
Saalbach, Henrik; Imai, Mutsumi; Schalk, Lennart
2012-01-01
In German, nouns are assigned to one of the three gender classes. For most animal names, however, the assignment is independent of the referent's biological sex. We examined whether German-speaking children understand this independence of grammar from semantics or whether they assume that grammatical gender is mapped onto biological sex when…
Ditunno, P L; Patrick, M; Stineman, M; Morganti, B; Townson, A F; Ditunno, J F
2006-09-01
Direct observation of a constrained consensus-building process in three culturally independent five-person panels of rehabilitation professionals from the US, Italy and Canada. To illustrate cultural differences in belief among rehabilitation professionals about the relative importance of alternative functional goals during spinal cord injury (SCI) rehabilitation. Spinal Cord Injury Units in Philadelphia-USA, Rome-Italy and Vancouver-Canada. Each of the three panels came to independent consensus about recovery priorities in SCI utilizing the features resource trade-off game. The procedure involves trading imagined levels of independence (resources) across different functional items (features) assuming different stages of recovery. Sphincter management was of primary importance to all three groups. The Italian and Canadian rehabilitation professionals, however, showed preference for walking over wheelchair mobility at lower stages of assumed recovery, whereas the US professionals set wheelchair independence at a higher priority than walking. These preliminary results suggest cross-cultural recovery priority differences among SCI rehabilitation professionals. These dissimilarities in preference may reflect disparities in values, cultural expectations and health care policies.
Causal discovery and inference: concepts and recent methodological advances.
Spirtes, Peter; Zhang, Kun
This paper aims to give a broad coverage of central concepts and principles involved in automated causal inference and emerging approaches to causal discovery from i.i.d. data and from time series. After reviewing concepts including manipulations, causal models, sample predictive modeling, causal predictive modeling, and structural equation models, we present the constraint-based approach to causal discovery, which relies on the conditional independence relationships in the data, and discuss the assumptions underlying its validity. We then focus on causal discovery based on structural equation models, in which a key issue is the identifiability of the causal structure implied by appropriately defined structural equation models: in the two-variable case, under what conditions (and why) is the causal direction between the two variables identifiable? We show that the independence between the error term and causes, together with appropriate structural constraints on the structural equation, makes it possible. Next, we report some recent advances in causal discovery from time series. Assuming that the causal relations are linear with non-Gaussian noise, we mention two problems which are traditionally difficult to solve, namely causal discovery from subsampled data and causal discovery in the presence of confounding time series. Finally, we list a number of open questions in the field of causal discovery and inference.
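The conditional independence tests that the constraint-based approach relies on can be sketched under a linear-Gaussian assumption: partial correlation with Fisher's z transform, the standard building block of algorithms such as PC. The threshold and simulated data below are illustrative.

```python
import numpy as np
from math import sqrt, log, erf

# Sketch of a conditional independence test X _||_ Y | Z via partial
# correlation (valid under a linear-Gaussian assumption). Data and the
# alpha threshold are illustrative.

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out the columns of z."""
    Z = np.column_stack([np.ones(len(x)), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

def ci_test(x, y, z, alpha=0.01):
    """True if X _||_ Y | Z is NOT rejected at level alpha."""
    r = partial_corr(x, y, z)
    n, k = len(x), z.shape[1]
    zstat = 0.5 * log((1 + r) / (1 - r)) * sqrt(n - k - 3)
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(zstat) / sqrt(2.0))))
    return p > alpha

# Common-cause structure: Z -> X and Z -> Y, so X and Y are marginally
# dependent but conditionally independent given Z.
rng = np.random.default_rng(1)
z = rng.normal(size=(2000, 1))
x = 2.0 * z[:, 0] + rng.normal(size=2000)
y = -1.5 * z[:, 0] + rng.normal(size=2000)
```

Running `ci_test(x, y, z)` should fail to reject independence given Z, while the marginal correlation of x and y is strong; that contrast is what lets constraint-based methods orient edges.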
Drake, Andrew W; Klakamp, Scott L
2007-01-10
A new 4-parameter nonlinear equation based on the standard multiple independent binding site model (MIBS) is presented for fitting cell-based ligand titration data in order to calculate the ligand/cell receptor equilibrium dissociation constant and the number of receptors/cell. The most commonly used linear (Scatchard plot) or nonlinear 2-parameter model (a single binding site model found in commercial programs like Prism®) for analysis of ligand/receptor binding data assumes that only the K(D) influences the shape of the titration curve. We demonstrate using simulated data sets that, depending upon the cell surface receptor expression level, the number of cells titrated, and the magnitude of the K(D) being measured, this assumption of always being under K(D)-controlled conditions can be erroneous and can lead to unreliable estimates for the binding parameters. We also compare and contrast the fitting of simulated data sets to the commonly used cell-based binding equation versus our more rigorous 4-parameter nonlinear MIBS model. It is shown through these simulations that the new 4-parameter MIBS model, when used for cell-based titrations under optimal conditions, yields highly accurate estimates of all binding parameters and hence should be the preferred model for fitting cell-based experimental nonlinear titration data.
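The failure of the K(D)-controlled assumption can be illustrated with the single-site special case: when receptors deplete the free ligand, the bound concentration is the root of a quadratic, and the common hyperbolic approximation overestimates binding. This is a simplified sketch, not the paper's full 4-parameter MIBS equation, and the concentrations below are made up.

```python
from math import sqrt

# Exact single-site binding with ligand depletion versus the common
# "free ligand ~ total ligand" approximation. Units are arbitrary and
# the concentrations are illustrative.

def bound(L_tot, R_tot, Kd):
    """Exact bound-complex concentration (root of the binding quadratic)."""
    b = L_tot + R_tot + Kd
    return (b - sqrt(b * b - 4.0 * L_tot * R_tot)) / 2.0

def bound_no_depletion(L_tot, R_tot, Kd):
    """2-parameter hyperbola: assumes free ligand equals total ligand."""
    return R_tot * L_tot / (Kd + L_tot)

# Receptor-controlled regime: R_tot >> Kd, so depletion matters.
exact = bound(L_tot=1.0, R_tot=10.0, Kd=0.01)
approx = bound_no_depletion(1.0, 10.0, 0.01)
```

Here the approximation even predicts more complex than the total ligand available, while the exact solution stays physically bounded; this is the regime in which the 2-parameter fit becomes unreliable.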
Comparison of across-frequency integration strategies in a binaural detection model.
Breebaart, Jeroen
2013-11-01
Breebaart et al. [J. Acoust. Soc. Am. 110, 1089-1104 (2001)] reported that the masker bandwidth dependence of detection thresholds for an out-of-phase signal and an in-phase noise masker (N0Sπ) can be explained by principles of integration of information across critical bands. In this paper, different methods for such across-frequency integration process are evaluated as a function of the bandwidth and notch width of the masker. The results indicate that an "optimal detector" model assuming independent internal noise in each critical band provides a better fit to experimental data than a best filter or a simple across-frequency integrator model. Furthermore, the exponent used to model peripheral compression influences the accuracy of predictions in notched conditions.
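The competing integration rules can be stated in one line each. Assuming independent internal noise across critical bands, the optimal detector combines channel sensitivities quadratically, whereas the best-filter model uses only the most sensitive channel. The channel d' values below are illustrative.

```python
from math import sqrt

# Across-frequency integration rules for independent-noise channels.
# Channel sensitivities (d') are illustrative placeholders.

def optimal_combination(dprimes):
    """Optimal detector: d'_total = sqrt(sum of d'_i squared)."""
    return sqrt(sum(d * d for d in dprimes))

def best_filter(dprimes):
    """Best-filter model: use only the single most sensitive channel."""
    return max(dprimes)

channels = [1.2, 0.8, 0.5]
```

Because the quadratic sum always meets or exceeds the maximum, widening the masker recruits extra channels and lowers thresholds under the optimal-detector rule but not under the best-filter rule, which is the pattern the data favored.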
Gibbs-Donnan ratio and channel conductance of Tetrahymena cilia in mixed solution of K+ and Ca2+.
Oosawa, Y; Kasai, M
1988-01-01
A single cation channel from Tetrahymena cilia was incorporated into planar lipid bilayers. This channel was voltage-independent and permeable to K+ and Ca2+. In experiments with mixed solutions where the concentrations of K+ and Ca2+ were varied, the single-channel conductance was found to be influenced by the Gibbs-Donnan ratio. The data are explained by assuming that the binding sites of this channel were always occupied by two potassium ions or one calcium ion under the present experimental conditions (5 mM-90 mM K+ and 0.5 mM-35 mM Ca2+) and that these bound cations determined the channel conductivity. PMID:2462927
A model for foreign exchange markets based on glassy Brownian systems
Trinidad-Segovia, J. E.; Clara-Rahola, J.; Puertas, A. M.; De las Nieves, F. J.
2017-01-01
In this work we extend a well-known model from arrested physical systems, and employ it in order to efficiently depict different currency pairs of foreign exchange market price fluctuation distributions. We consider the exchange rate price in the time range between 2010 and 2016 at yearly time intervals and resolved at one minute frequency. We then fit the experimental datasets with this model, and find significant qualitative symmetry between price fluctuation distributions from the currency market, and the ones belonging to colloidal particles position in arrested states. The main contribution of this paper is a well-known physical model that does not necessarily assume the independent and identically distributed (i.i.d.) restrictive condition. PMID:29206868
Anandhakumar, Jayamani; Moustafa, Yara W; Chowdhary, Surabhi; Kainth, Amoldeep S; Gross, David S
2016-07-15
Mediator is an evolutionarily conserved coactivator complex essential for RNA polymerase II transcription. Although it has been generally assumed that in Saccharomyces cerevisiae, Mediator is a stable trimodular complex, its structural state in vivo remains unclear. Using the "anchor away" (AA) technique to conditionally deplete select subunits within Mediator and its reversibly associated Cdk8 kinase module (CKM), we provide evidence that Mediator's tail module is highly dynamic and that a subcomplex consisting of Med2, Med3, and Med15 can be independently recruited to the regulatory regions of heat shock factor 1 (Hsf1)-activated genes. Fluorescence microscopy of a scaffold subunit (Med14)-anchored strain confirmed parallel cytoplasmic sequestration of core subunits located outside the tail triad. In addition, and contrary to current models, we provide evidence that Hsf1 can recruit the CKM independently of core Mediator and that core Mediator has a role in regulating postinitiation events. Collectively, our results suggest that yeast Mediator is not monolithic but potentially has a dynamic complexity heretofore unappreciated. Multiple species, including CKM-Mediator, the 21-subunit core complex, the Med2-Med3-Med15 tail triad, and the four-subunit CKM, can be independently recruited by activated Hsf1 to its target genes in AA strains. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
MCMC Sampling for a Multilevel Model with Nonindependent Residuals within and between Cluster Units
ERIC Educational Resources Information Center
Browne, William; Goldstein, Harvey
2010-01-01
In this article, we discuss the effect of removing the independence assumptions between the residuals in two-level random effect models. We first consider removing the independence between the Level 2 residuals and instead assume that the vector of all residuals at the cluster level follows a general multivariate normal distribution. We…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schreiner, Anne; Saur, Joachim, E-mail: schreiner@geo.uni-koeln.de
In hydrodynamic turbulence, it is well established that the length of the dissipation scale depends on the energy cascade rate, i.e., the larger the energy input rate per unit mass, the more the turbulent fluctuations need to be driven to increasingly smaller scales to dissipate the larger energy flux. Observations of magnetic spectral energy densities indicate that this intuitive picture is not valid in solar wind turbulence. Dissipation seems to set in at the same length scale for different solar wind conditions independently of the energy flux. To investigate this difference in more detail, we present an analytic dissipation model for solar wind turbulence at electron scales, which we compare with observed spectral densities. Our model combines the energy transport from large to small scales and collisionless damping, which removes energy from the magnetic fluctuations in the kinetic regime. We assume wave–particle interactions of kinetic Alfvén waves (KAWs) to be the main damping process. Wave frequencies and damping rates of KAWs are obtained from the hot plasma dispersion relation. Our model assumes a critically balanced turbulence, where larger energy cascade rates excite larger parallel wavenumbers for a certain perpendicular wavenumber. If the dissipation is additionally wave driven such that the dissipation rate is proportional to the parallel wavenumber—as with KAWs—then an increase of the energy cascade rate is counterbalanced by an increased dissipation rate for the same perpendicular wavenumber, leading to a dissipation length independent of the energy cascade rate.
NASA Astrophysics Data System (ADS)
Schreiner, Anne; Saur, Joachim
2017-02-01
In hydrodynamic turbulence, it is well established that the length of the dissipation scale depends on the energy cascade rate, i.e., the larger the energy input rate per unit mass, the more the turbulent fluctuations need to be driven to increasingly smaller scales to dissipate the larger energy flux. Observations of magnetic spectral energy densities indicate that this intuitive picture is not valid in solar wind turbulence. Dissipation seems to set in at the same length scale for different solar wind conditions independently of the energy flux. To investigate this difference in more detail, we present an analytic dissipation model for solar wind turbulence at electron scales, which we compare with observed spectral densities. Our model combines the energy transport from large to small scales and collisionless damping, which removes energy from the magnetic fluctuations in the kinetic regime. We assume wave-particle interactions of kinetic Alfvén waves (KAWs) to be the main damping process. Wave frequencies and damping rates of KAWs are obtained from the hot plasma dispersion relation. Our model assumes a critically balanced turbulence, where larger energy cascade rates excite larger parallel wavenumbers for a certain perpendicular wavenumber. If the dissipation is additionally wave driven such that the dissipation rate is proportional to the parallel wavenumber—as with KAWs—then an increase of the energy cascade rate is counterbalanced by an increased dissipation rate for the same perpendicular wavenumber, leading to a dissipation length independent of the energy cascade rate.
Conditional, Time-Dependent Probabilities for Segmented Type-A Faults in the WGCEP UCERF 2
Field, Edward H.; Gupta, Vipin
2008-01-01
This appendix presents elastic-rebound-theory (ERT) motivated time-dependent probabilities, conditioned on the date of last earthquake, for the segmented type-A fault models of the 2007 Working Group on California Earthquake Probabilities (WGCEP). These probabilities are included as one option in the WGCEP's Uniform California Earthquake Rupture Forecast 2 (UCERF 2), with the other options being time-independent Poisson probabilities and an 'Empirical' model based on observed seismicity rate changes. A more general discussion of the pros and cons of all methods for computing time-dependent probabilities, as well as the justification of those chosen for UCERF 2, are given in the main body of this report (and the 'Empirical' model is also discussed in Appendix M). What this appendix addresses is the computation of conditional, time-dependent probabilities when both single- and multi-segment ruptures are included in the model. Computing conditional probabilities is relatively straightforward when a fault is assumed to obey strict segmentation in the sense that no multi-segment ruptures occur (e.g., WGCEP (1988, 1990), or see Field (2007) for a review of all previous WGCEPs; from here we assume basic familiarity with conditional probability calculations). However, and as we'll see below, the calculation is not straightforward when multi-segment ruptures are included, in essence because we are attempting to apply a point-process model to a non point process. The next section gives a review and evaluation of the single- and multi-segment rupture probability-calculation methods used in the most recent statewide forecast for California (WGCEP UCERF 1; Petersen et al., 2007). We then present results for the methodology adopted here for UCERF 2. We finish with a discussion of issues and possible alternative approaches that could be explored and perhaps applied in the future.
A fault-by-fault comparison of UCERF 2 probabilities with those of previous studies is given in the main part of this report.
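The conditional probability calculation for the strictly segmented case can be sketched directly: given a recurrence-time distribution F and time t elapsed since the last event, the probability of rupture in the next ΔT years is P = (F(t+ΔT) − F(t)) / (1 − F(t)). A lognormal recurrence model and all parameter values below are used purely for illustration; UCERF 2 uses its own recurrence models and fault-specific parameters.

```python
from math import erf, log, sqrt

# Sketch of an elastic-rebound-style conditional probability with an
# illustrative lognormal recurrence distribution (mean recurrence and
# coefficient of variation are made-up values).

def lognormal_cdf(t, mu, sigma):
    return 0.5 * (1.0 + erf((log(t) - mu) / (sigma * sqrt(2.0))))

def conditional_prob(t_since, dT, mean_recurrence, cov=0.5):
    sigma = sqrt(log(1.0 + cov * cov))          # lognormal shape from CoV
    mu = log(mean_recurrence) - 0.5 * sigma * sigma
    F = lambda t: lognormal_cdf(t, mu, sigma)
    return (F(t_since + dT) - F(t_since)) / (1.0 - F(t_since))

# 30-year probability for a segment with a 200-year mean recurrence,
# 150 years after its last rupture.
p = conditional_prob(t_since=150.0, dT=30.0, mean_recurrence=200.0)
```

The probability grows with elapsed time, which is the ERT behavior the appendix builds on; the hard part it addresses is that no such single-segment renewal formula applies cleanly once multi-segment ruptures enter the model.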
A strategy for evaluating pathway analysis methods.
Yu, Chenggang; Woo, Hyung Jun; Yu, Xueping; Oyama, Tatsuya; Wallqvist, Anders; Reifman, Jaques
2017-10-13
Researchers have previously developed a multitude of methods designed to identify biological pathways associated with specific clinical or experimental conditions of interest, with the aim of facilitating biological interpretation of high-throughput data. Before practically applying such pathway analysis (PA) methods, we must first evaluate their performance and reliability, using datasets where the pathways perturbed by the conditions of interest have been well characterized in advance. However, such 'ground truths' (or gold standards) are often unavailable. Furthermore, previous evaluation strategies that have focused on defining 'true answers' are unable to systematically and objectively assess PA methods under a wide range of conditions. In this work, we propose a novel strategy for evaluating PA methods independently of any gold standard, either established or assumed. The strategy involves the use of two mutually complementary metrics, recall and discrimination. Recall measures the consistency between the perturbed pathways identified by applying a particular analysis method to an original large dataset and those identified by applying the same method to a sub-dataset of the original dataset. In contrast, discrimination measures specificity: the degree to which the perturbed pathways identified by a particular method applied to a dataset from one experiment differ from those identified by the same method applied to a dataset from a different experiment. We used these metrics and 24 datasets to evaluate six widely used PA methods. The results highlighted the common challenge in reliably identifying significant pathways from small datasets. Importantly, we confirmed the effectiveness of our proposed dual-metric strategy by showing that previous comparative studies corroborate the performance evaluations of the six methods obtained by our strategy.
Unlike any previously proposed strategy for evaluating the performance of PA methods, our dual-metric strategy does not rely on any ground truth, either established or assumed, of the pathways perturbed by a specific clinical or experimental condition. As such, our strategy allows researchers to systematically and objectively evaluate pathway analysis methods by employing any number of datasets for a variety of conditions.
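The two metrics can be read as simple set-overlap computations on the lists of significant pathways. A minimal sketch (the function names and the Jaccard form of the overlap are illustrative assumptions, not necessarily the paper's exact statistics):

```python
def overlap(a, b):
    """Jaccard overlap between two sets of pathway identifiers."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def recall_metric(full_hits, sub_hits):
    # Consistency of pathways identified from the full dataset vs.
    # a sub-dataset of it (higher = more reproducible).
    return overlap(full_hits, sub_hits)

def discrimination_metric(hits_a, hits_b):
    # Specificity: how much the pathways identified from two
    # *different* experiments differ (higher = more discriminating).
    return 1.0 - overlap(hits_a, hits_b)

full = {"p53_signaling", "apoptosis", "cell_cycle"}
sub = {"p53_signaling", "apoptosis"}
other_experiment = {"glycolysis", "p53_signaling"}

print(recall_metric(full, sub))                       # 2/3
print(discrimination_metric(full, other_experiment))  # 0.75
```

A method that returns stable hits on sub-datasets but distinct hits across unrelated experiments scores high on both metrics, which is the paper's criterion for reliability without a gold standard.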
NASA Astrophysics Data System (ADS)
Daksha, M.; Derzsi, A.; Wilczek, S.; Trieschmann, J.; Mussenbrock, T.; Awakowicz, P.; Donkó, Z.; Schulze, J.
2017-08-01
In particle-in-cell/Monte Carlo collisions (PIC/MCC) simulations of capacitively coupled plasmas (CCPs), the plasma-surface interaction is generally described by a simple model in which a constant secondary electron emission coefficient (SEEC) is assumed for ions bombarding the electrodes. In most PIC/MCC studies of CCPs, this coefficient is set to γ = 0.1, independent of the energy of the incident particle, the electrode material, and the surface conditions. Here, the effects of implementing energy-dependent secondary electron yields for ions, fast neutrals, and taking surface conditions into account in PIC/MCC simulations is investigated. Simulations are performed using self-consistently calculated effective SEECs, {γ }* , for ‘clean’ (e.g., heavily sputtered) and ‘dirty’ (e.g., oxidized) metal surfaces in single- and dual-frequency discharges in argon and the results are compared to those obtained by assuming a constant secondary electron yield of γ =0.1 for ions. In single-frequency (13.56 MHz) discharges operated under conditions of low heavy particle energies at the electrodes, the pressure and voltage at which the transition between the α- and γ-mode electron power absorption occurs are found to strongly depend on the surface conditions. For ‘dirty’ surfaces, the discharge operates in α-mode for all conditions investigated due to a low effective SEEC. In classical dual-frequency (1.937 MHz + 27.12 MHz) discharges {γ }* significantly increases with increasing low-frequency voltage amplitude, {V}{LF}, for dirty surfaces. This is due to the effect of {V}{LF} on the heavy particle energies at the electrodes, which negatively influences the quality of the separate control of ion properties at the electrodes. The new results on the separate control of ion properties in such discharges indicate significant differences compared to previous results obtained with different constant values of γ.
A Unimodal Model for Double Observer Distance Sampling Surveys.
Becker, Earl F; Christ, Aaron M
2015-01-01
Distance sampling is a widely used method to estimate animal population size. Most distance sampling models utilize a monotonically decreasing detection function such as a half-normal. Recent advances in distance sampling modeling allow for the incorporation of covariates into the distance model, and the elimination of the assumption of perfect detection at some fixed distance (usually the transect line) with the use of double-observer models. The assumption of full observer independence in the double-observer model is problematic, but can be addressed by using the point independence assumption, which assumes there is one distance, the apex of the detection function, where the two observers are assumed independent. Aerially collected distance sampling data can have a unimodal shape and have been successfully modeled with a gamma detection function. Covariates in gamma detection models cause the apex of detection to shift depending upon covariate levels, making this model incompatible with the point independence assumption when using double-observer data. This paper reports a unimodal detection model based on a two-piece normal distribution that allows covariates, has only one apex, and is consistent with the point independence assumption when double-observer data are utilized. An aerial line-transect survey of black bears in Alaska illustrates how this method can be applied.
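The two-piece normal idea can be sketched as a detection curve with one apex and different spreads on either side; covariates would act on the spreads while leaving the apex fixed, preserving point independence. The parameterization and values below are illustrative, not the paper's fitted model:

```python
import math

def two_piece_normal(x, apex, spread_left, spread_right):
    """Unnormalized two-piece normal detection curve: two half-normal
    shapes with different spreads joined continuously at a single
    apex, so the curve is unimodal by construction."""
    s = spread_left if x < apex else spread_right
    return math.exp(-0.5 * ((x - apex) / s) ** 2)

# Detection peaks (value 1) at the apex and falls off on both
# sides at rates set by the two spread parameters.
g_apex = two_piece_normal(60.0, apex=60.0, spread_left=25.0, spread_right=80.0)
g_near = two_piece_normal(30.0, apex=60.0, spread_left=25.0, spread_right=80.0)
g_far = two_piece_normal(150.0, apex=60.0, spread_left=25.0, spread_right=80.0)
```

Because the apex location does not move when the spreads change, covariate effects on the spreads do not conflict with assuming observer independence at the apex.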
Model-Free Feature Screening for Ultrahigh Dimensional Discriminant Analysis
Cui, Hengjian; Li, Runze
2014-01-01
This work is concerned with marginal sure independence feature screening for ultrahigh dimensional discriminant analysis. The response variable is categorical in discriminant analysis. This enables us to use the conditional distribution function to construct a new index for feature screening. In this paper, we propose a marginal feature screening procedure based on the empirical conditional distribution function. We establish the sure screening and ranking consistency properties for the proposed procedure without assuming any moment condition on the predictors. The proposed procedure enjoys several appealing merits. First, it is model-free in that its implementation does not require specification of a regression model. Second, it is robust to heavy-tailed distributions of predictors and the presence of potential outliers. Third, it allows the categorical response to have a diverging number of classes in the order of O(n^κ) with some κ ≥ 0. We assess the finite sample property of the proposed procedure by Monte Carlo simulation studies and numerical comparison. We further illustrate the proposed methodology by empirical analyses of two real-life data sets. PMID:26392643
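The screening idea can be sketched directly: a feature is informative when its empirical conditional distribution function varies across classes. The index below is an illustrative between-class-variance version of that idea, not necessarily the paper's exact statistic; note it uses only indicator functions, so no moment conditions on the predictor are needed:

```python
def screening_index(x, y):
    """Average (over observed points) of the between-class variance
    of the empirical conditional CDF F(t | class).  Larger values
    suggest the feature's distribution depends on the class label."""
    n = len(x)
    classes = sorted(set(y))
    by_class = {c: [xi for xi, yi in zip(x, y) if yi == c] for c in classes}
    total = 0.0
    for t in x:  # evaluate the CDFs at each observed point
        f_all = sum(xi <= t for xi in x) / n
        for c in classes:
            xc = by_class[c]
            f_c = sum(xi <= t for xi in xc) / len(xc)
            total += (len(xc) / n) * (f_c - f_all) ** 2
    return total / n

labels = [0, 0, 0, 1, 1, 1]
informative = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]  # distribution shifts with class
noise = [1.0, 10.0, 2.0, 11.0, 3.0, 12.0]        # similar spread in both classes
```

Ranking all candidate features by this index and keeping the top few implements the marginal screening step.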
46 CFR 172.150 - Survival conditions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Subchapter O of This Chapter § 172.150 Survival conditions. A tankship is presumed to survive assumed damage...) Each submerged opening must be weathertight. (d) Progressive flooding. Pipes, ducts or tunnels within the assumed extent of damage must be either— (1) Equipped with arrangements such as stop check valves...
Score tests for independence in semiparametric competing risks models.
Saïd, Mériem; Ghazzali, Nadia; Rivest, Louis-Paul
2009-12-01
A popular model for competing risks postulates the existence of a latent unobserved failure time for each risk. Assuming that these underlying failure times are independent is attractive since it allows standard statistical tools for right-censored lifetime data to be used in the analysis. This paper proposes simple independence score tests for the validity of this assumption when the individual risks are modeled using semiparametric proportional hazards regressions. It assumes that covariates are available, making the model identifiable. The score tests are derived for alternatives that specify that copulas are responsible for a possible dependency between the competing risks. The test statistics are constructed by adding to the partial likelihoods for the individual risks an explanatory variable for the dependency between the risks. A variance estimator is derived by writing the score function and the Fisher information matrix for the marginal models as stochastic integrals. Pitman efficiencies are used to compare test statistics. A simulation study and a numerical example illustrate the methodology proposed in this paper.
Rogers, L J; Douglas, R R
1984-02-01
In this paper (the second in a series), we consider a (generic) pair of datasets, which have been analyzed by the techniques of the previous paper. Thus, their "stable subspaces" have been established by comparative factor analysis. The pair of datasets must satisfy two confirmable conditions. The first is the "Inclusion Condition," which requires that the stable subspace of one of the datasets is nearly identical to a subspace of the other dataset's stable subspace. On the basis of that, we have assumed the pair to have similar generating signals, with stochastically independent generators. The second verifiable condition is that the (presumed same) generating signals have distinct ratios of variances for the two datasets. Under these conditions a small elaboration of some elementary linear algebra reduces the rotation problem to several eigenvalue-eigenvector problems. Finally, we emphasize that an analysis of each dataset by the method of Douglas and Rogers (1983) is an essential prerequisite for the useful application of the techniques in this paper. Nonempirical methods of estimating the number of factors simply will not suffice, as confirmed by simulations reported in the previous paper.
Lattice Independent Component Analysis for Mobile Robot Localization
NASA Astrophysics Data System (ADS)
Villaverde, Ivan; Fernandez-Gauna, Borja; Zulueta, Ekaitz
This paper introduces an approach to appearance based mobile robot localization using Lattice Independent Component Analysis (LICA). The Endmember Induction Heuristic Algorithm (EIHA) is used to select a set of Strong Lattice Independent (SLI) vectors, which can be assumed to be Affine Independent, and therefore candidates to be the endmembers of the data. Selected endmembers are used to compute the linear unmixing of the robot's acquired images. The resulting mixing coefficients are used as feature vectors for view recognition through classification. We show in a sample path experiment that our approach can recognise the robot's location, and we compare the results with those obtained using Independent Component Analysis (ICA).
The Title and Three Core Values from the First Three Lines of The Declaration of Independence
ERIC Educational Resources Information Center
White, Kenneth Michael
2013-01-01
Teaching the Declaration of Independence can be a challenge. This article presents a lesson plan based on an explication of the title and the first three lines of the Declaration intended to make the American founding era relevant to today's college students. Assuming civic education is a major goal of teaching American Government, assuming…
ERIC Educational Resources Information Center
Inner London Education Authority (England).
The Independent Learning Project for Advanced Chemistry (ILPAC) has produced units of study for students in A-level chemistry. Students completing ILPAC units assume a greater responsibility for their own learning and can work, to some extent, at their own pace. By providing guidance, and detailed solutions to exercises in the units, supported by…
NASA Technical Reports Server (NTRS)
Rengarajan, Govind; Aminpour, Mohammad A.; Knight, Norman F., Jr.
1992-01-01
An improved four-node quadrilateral assumed-stress hybrid shell element with drilling degrees of freedom is presented. The formulation is based on the Hellinger-Reissner variational principle and the shape functions are formulated directly for the four-node element. The element has 12 membrane degrees of freedom and 12 bending degrees of freedom. It has nine independent stress parameters to describe the membrane stress resultant field and 13 independent stress parameters to describe the moment and transverse shear stress resultant field. The formulation encompasses linear stress, linear buckling, and linear free vibration problems. The element is validated with standard test cases and is shown to be robust. Numerical results are presented for linear stress, buckling, and free vibration analyses.
Deng, Peng; Kavehrad, Mohsen; Liu, Zhiwen; Zhou, Zhou; Yuan, Xiuhua
2013-07-01
We study the average capacity performance of multiple-input multiple-output (MIMO) free-space optical (FSO) communication systems using multiple partially coherent beams propagating through non-Kolmogorov strong turbulence, assuming an equal gain combining diversity configuration and the sum of multiple gamma-gamma random variables for multiple independent partially coherent beams. Closed-form expressions for the scintillation and average capacity are derived and then used to analyze the dependence on the number of independent diversity branches, power law α, refractive-index structure parameter, propagation distance, and spatial coherence length of the source beams. The results show that the average capacity increases more significantly with an increase in the rank of the MIMO channel matrix than with the diversity order. The effect of the diversity order on the average capacity is independent of the power law, turbulence strength parameter, and spatial coherence length, whereas these effects on average capacity are gradually mitigated as the diversity order increases. The average capacity increases and saturates with decreasing spatial coherence length, at rates depending on the diversity order, power law, and turbulence strength. There exist optimal values of the spatial coherence length and diversity configuration for maximizing the average capacity of MIMO FSO links over a variety of atmospheric turbulence conditions.
Teff, Karen L; Rickels, Michael R; Grudziak, Joanna; Fuller, Carissa; Nguyen, Huong-Lan; Rickels, Karl
2013-09-01
Atypical antipsychotic (AAP) medications that have revolutionized the treatment of mental illness have become stigmatized by metabolic side effects, including obesity and diabetes. It remains controversial whether the defects are treatment induced or disease related. Although the mechanisms underlying these metabolic defects are not understood, it is assumed that the initiating pathophysiology is weight gain, secondary to centrally mediated increases in appetite. To determine if the AAPs have detrimental metabolic effects independent of weight gain or psychiatric disease, we administered olanzapine, aripiprazole, or placebo for 9 days to healthy subjects (n = 10, each group) under controlled in-patient conditions while maintaining activity levels. Prior to and after the interventions, we conducted a meal challenge and a euglycemic-hyperinsulinemic clamp to evaluate insulin sensitivity and glucose disposal. We found that olanzapine, an AAP highly associated with weight gain, causes significant elevations in postprandial insulin, glucagon-like peptide 1 (GLP-1), and glucagon coincident with insulin resistance compared with placebo. Aripiprazole, an AAP considered metabolically sparing, induces insulin resistance but has no effect on postprandial hormones. Importantly, the metabolic changes occur in the absence of weight gain, increases in food intake and hunger, or psychiatric disease, suggesting that AAPs exert direct effects on tissues independent of mechanisms regulating eating behavior.
NASA Astrophysics Data System (ADS)
Steele-MacInnis, M.; Barkoff, D. W.; Ashley, K.
2017-12-01
Thermobarometry of metasomatic rocks is commonly challenging, owing to the high variance of hydrothermal mineral assemblages, thermodynamic disequilibrium, and overprinting by subsequent hydrothermal episodes. Here, we estimate formation pressures of a Cu-Fe-sulfide-bearing andradite-diopside skarn deposit at Casting Copper (Yerington district, NV) using Raman spectroscopy and elastic modeling of apatite inclusions in garnet. Andradite garnet from the Casting Copper skarn contains inclusions of hydroxyl-fluorapatite, calcite, hematite, magnetite, and ilmenite. Raman spectroscopy reveals that the apatite inclusions are predominantly under tension of -23 to -123 MPa at ambient conditions. Elastic modeling of apatite-in-garnet suggests entrapment occurred at 10 to 115 MPa, assuming a trapping temperature of 400 °C, which is consistent with paleodepth estimates of 2-3 km. These results provide independent constraints on the conditions of hydrothermal skarn formation at Casting Copper, and suggest that this approach may be applied to other, less-constrained skarn systems.
In the shade of a forest status, reputation, and ambiguity in an online microcredit market.
Kuwabara, Ko; Anthony, Denise; Horne, Christine
2017-05-01
Scholars have long recognized status and reputation as pervasive forces reproducing comparative advantage in social and economic systems. Yet, due in part to methodological challenges, relatively few studies have examined how status and reputation interact. We use data from an online market for peer-to-peer lending to study independent and joint effects of status and reputation on borrowers' success at obtaining loans. First, we find a positive main effect of status, even when reputational signals are reliable and abundant. Second, we find that status matters the most for borrowers with moderate (rather than high or low) reputations, suggesting a curvilinear effect of status × reputation on loans. These results support the idea that status matters not only under conditions of too little information, which create information asymmetry, as typically assumed, but also under conditions of abundant information and too many choices, which create ambiguity about how to evaluate candidates. Copyright © 2016 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krčo, Marko; Goldsmith, Paul F., E-mail: marko@astro.cornell.edu
2016-05-01
We present a geometry-independent method for determining the shapes of radial volume density profiles of astronomical objects whose geometries are unknown, based on a single column density map. Such profiles are often critical to understand the physics and chemistry of molecular cloud cores, in which star formation takes place. The method presented here does not assume any geometry for the object being studied, thus removing a significant source of bias. Instead, it exploits contour self-similarity in column density maps, which appears to be common in data for astronomical objects. Our method may be applied to many types of astronomical objects and observable quantities so long as they satisfy a limited set of conditions, which we describe in detail. We derive the method analytically, test it numerically, and illustrate its utility using 2MASS-derived dust extinction in molecular cloud cores. While not having made an extensive comparison of different density profiles, we find that the overall radial density distribution within molecular cloud cores is adequately described by an attenuated power law.
Optimum flight paths of turbojet aircraft
NASA Technical Reports Server (NTRS)
Miele, Angelo
1955-01-01
The climb of turbojet aircraft is analyzed and discussed, including the accelerations. Three particular flight performances are examined: minimum time of climb, climb with minimum fuel consumption, and steepest climb. The theoretical results obtained from a previous study are put in a form suitable for application under the following simplifying assumptions: the Mach number is considered an independent variable instead of the velocity; the variations of the airplane mass due to fuel consumption are disregarded; the airplane polar is assumed to be parabolic; the path curvatures and the squares of the path angles are disregarded in the projection of the equation of motion on the normal to the path; lastly, an ideal turbojet with performance independent of the velocity is involved. The optimum Mach number for each flight condition is obtained from the solution of a sixth order equation in which the coefficients are functions of two fundamental parameters: the ratio of minimum drag in level flight to the thrust, and the Mach number which represents flight at constant altitude and maximum lift-drag ratio.
Heggeseth, Brianna C; Jewell, Nicholas P
2013-07-20
Multivariate Gaussian mixtures are a class of models that provide a flexible parametric approach for the representation of heterogeneous multivariate outcomes. When the outcome is a vector of repeated measurements taken on the same subject, there is often inherent dependence between observations. However, a common covariance assumption is conditional independence: given the mixture component label, the outcomes for subjects are independent. In this paper, we study, through asymptotic bias calculations and simulation, the impact of covariance misspecification in multivariate Gaussian mixtures. Although maximum likelihood estimators of regression and mixing probability parameters are not consistent under misspecification, they have little asymptotic bias when mixture components are well separated or if the assumed correlation is close to the truth, even when the covariance is misspecified. We also present a robust standard error estimator and show that it outperforms conventional estimators in simulations and can indicate that the model is misspecified. Body mass index data from a national longitudinal study are used to demonstrate the effects of misspecification on potential inferences made in practice. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Carleton, O.
1972-01-01
Consideration is given specifically to sixth order elliptic partial differential equations in two independent real variables x, y such that the coefficients of the highest order terms are real constants. It is assumed that the differential operator has distinct characteristics and that it can be factored as a product of second order operators. By analytically continuing into the complex domain and using the complex characteristic coordinates of the differential equation, it is shown that its solutions, u, may be reflected across analytic arcs on which u satisfies certain analytic boundary conditions. Moreover, a method is given whereby one can determine a region into which the solution is extensible. It is seen that this region of reflection is dependent on the original domain of difinition of the solution, the arc and the coefficients of the highest order terms of the equation and not on any sufficiently small quantities; i.e., the reflection is global in nature. The method employed may be applied to similar differential equations of order 2n.
Audible Beaconing with Accessible Pedestrian Signals
Barlow, Janet M.; Scott, Alan C.; Bentzen, Billie Louise
2010-01-01
Purpose Although Accessible Pedestrian Signals (APS) are often assumed to provide wayfinding information, the type of APS that has typically been installed in the U.S. has not had positive effects on finding crosswalks, locating pushbuttons, or providing directional guidance. This paper reports the results of research on crossings by blind pedestrians at complex signalized intersections, before and after the installation of APS with innovative audible beaconing features designed to improve wayfinding. Methods Objective data on measures of street crossing performance by 56 participants were obtained at four intersections, two each in Charlotte, NC, and Portland, OR. Results In the first round of testing, APS with beaconing features resulted in only slightly improved wayfinding. Revisions to the audible beaconing features resulted in improved performance on four measures of wayfinding as compared to the pre-installation condition: beginning crossings within the crosswalk, ending crossings within the crosswalk, independence in finding the starting location, and independence in aligning to cross. Implications for Practice Use of APS that provide beaconing from the far end of the crosswalk shows promise of improving wayfinding at street crossings. PMID:20622978
Polymorphism in the two-locus Levene model with nonepistatic directional selection.
Bürger, Reinhard
2009-11-01
For the Levene model with soft selection in two demes, the maintenance of polymorphism at two diallelic loci is studied. Selection is nonepistatic and dominance is intermediate. Thus, there is directional selection in every deme and at every locus. We assume that selection is in opposite directions in the two demes because otherwise no polymorphism is possible. If at one locus there is no dominance, then a complete analysis of the dynamical and equilibrium properties is performed. In particular, a simple necessary and sufficient condition for the existence of an internal equilibrium and sufficient conditions for global asymptotic stability are obtained. These results are extended to deme-independent degree of dominance at one locus. A perturbation analysis establishes structural stability within the full parameter space. In the absence of genotype-environment interaction, which requires deme-independent dominance at both loci, nongeneric equilibrium behavior occurs, and the introduction of arbitrarily small genotype-environment interaction changes the equilibrium structure and may destroy stable polymorphism. The volume of the parameter space for which a (stable) two-locus polymorphism is maintained is computed numerically. It is investigated how this volume depends on the strength of selection and on the dominance relations. If the favorable allele is (partially) dominant in its deme, more than 20% of all parameter combinations lead to a globally asymptotically stable, fully polymorphic equilibrium.
Transferability of Dual-Task Coordination Skills after Practice with Changing Component Tasks
Schubert, Torsten; Liepelt, Roman; Kübler, Sebastian; Strobach, Tilo
2017-01-01
Recent research has demonstrated that dual-task performance with two simultaneously presented tasks can be substantially improved as a result of practice. Among other mechanisms, theories of dual-task practice relate this improvement to the acquisition of task coordination skills. These skills are assumed (1) to result from dual-task practice, but not from single-task practice, and (2) to be independent of the specific stimulus and response mappings during the practice situation and, therefore, transferable to new dual-task situations. The present study is the first that provides an elaborated test of these assumptions in a context with well-controllable practice and transfer situations. To this end, we compared the effects of dual-task and single-task practice with a visual and an auditory sensory-motor component task on dual-task performance in a subsequent transfer session. Importantly, stimulus and stimulus-response mapping conditions in the two component tasks changed repeatedly during practice sessions, which prevents automatized stimulus-response associations from being transferred from practice to transfer. Dual-task performance was found to be improved after practice with the dual tasks in contrast to the single-task practice. These findings are consistent with the assumption that coordination skills had been acquired which can be transferred to other dual-task situations independently of the specific stimulus and response mapping conditions of the practiced component tasks. PMID:28659844
Understanding the Elementary Steps in DNA Tile-Based Self-Assembly.
Jiang, Shuoxing; Hong, Fan; Hu, Huiyu; Yan, Hao; Liu, Yan
2017-09-26
Although many models have been developed to guide the design and implementation of DNA tile-based self-assembly systems with increasing complexity, the fundamental assumptions of the models have not been thoroughly tested. To expand the quantitative understanding of DNA tile-based self-assembly and to test the fundamental assumptions of self-assembly models, we investigated DNA tile attachment to preformed "multi-tile" arrays in real time and obtained the thermodynamic and kinetic parameters of single tile attachment in various sticky end association scenarios. With more sticky ends, tile attachment becomes more thermostable, with an approximately linear decrease (more negative) in the free energy change. The total binding free energy of the sticky ends is partially compromised by a sequence-independent energy penalty when tile attachment forms a constrained configuration: a "loop". The minimal loop is a 2 × 2 tetramer (Loop4). The energy penalty of loops of 4, 6, and 8 tiles was analyzed with the independent loop model, assuming no interloop tension, which is generalizable to arbitrary tile configurations. More sticky ends also contribute to a faster on-rate under isothermal conditions when nucleation is the rate-limiting step. Incorrect sticky ends contribute neither to the thermostability nor to the kinetics. The thermodynamic and kinetic parameters of DNA tile attachment elucidated here will contribute to the future improvement and optimization of tile assembly modeling, precise control of experimental conditions, and structural design for error-free self-assembly.
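The approximately linear free-energy model described above can be sketched in a few lines; the numeric values below are placeholders, not the paper's fitted parameters:

```python
def attachment_dG(n_correct_ends, dG_per_end=-1.7, loop_penalty=1.0,
                  forms_loop=False):
    """Free-energy change for single-tile attachment under the
    approximately linear model: each correct sticky end contributes
    dG_per_end, a constrained "loop" configuration adds a
    sequence-independent penalty, and incorrect sticky ends
    contribute nothing (all values illustrative, kcal/mol)."""
    dG = n_correct_ends * dG_per_end
    if forms_loop:
        dG += loop_penalty  # sequence-independent loop penalty
    return dG

# More sticky ends -> more negative dG (more thermostable);
# forming a loop makes attachment less favorable by a fixed amount.
```

Per the text, incorrect sticky ends simply do not enter `n_correct_ends`, so they change neither the thermodynamics nor the kinetics in this sketch.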
General algebraic method applied to control analysis of complex engine types
NASA Technical Reports Server (NTRS)
Boksenbom, Aaron S; Hood, Richard
1950-01-01
A general algebraic method of attack on the problem of controlling gas-turbine engines having any number of independent variables was developed, employing operational functions to describe the assumed linear characteristics for the engine, the control, and the other units in the system. Matrices were used to describe the various units of the system, to form a combined system showing all effects, and to form a single condensed matrix showing the principal effects. This method directly led to the conditions on the control system for noninteraction, so that any setting disturbance would affect only its corresponding controlled variable. The response-action characteristics were expressed in terms of the control system and the engine characteristics. The ideal control-system characteristics were explicitly determined in terms of any desired response action.
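The noninteraction condition can be illustrated with a static two-variable stand-in for the operational functions: choosing the control matrix as the inverse of the engine matrix makes the combined matrix diagonal, so each setting disturbs only its own controlled variable. The gains below are arbitrary illustrative numbers:

```python
# Static 2x2 stand-in for the engine's operational-function matrix.
G = [[2.0, 1.0],
     [0.5, 3.0]]

def inverse2(m):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2(a, b):
    """Product of two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Noninteracting control: with C = G^-1, the combined matrix G @ C
# is the identity, so each setting affects only its own variable.
C = inverse2(G)
combined = matmul2(G, C)
```

In the paper the entries are operational functions of the system dynamics rather than constants, but the algebraic condition, a diagonal combined matrix, is the same.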
Hodder, Joanne N; La Delfa, Nicholas J; Potvin, Jim R
2016-08-01
To predict shoulder strength, most current ergonomics software assume independence of the strengths about each of the orthopedic axes. Using this independent axis approach (IAA), the shoulder can be predicted to have strengths as high as the resultant of the maximum moment about any two or three axes. We propose that shoulder strength is not independent between axes, and propose an approach that calculates the weighted average (WAA) between the strengths of the axes involved in the demand. Fifteen female participants performed maximum isometric shoulder exertions with their right arm placed in a rigid adjustable brace affixed to a tri-axial load cell. Maximum exertions were performed in 24 directions, including the four primary directions (horizontal flexion-extension and abduction-adduction) and at 15° increments between those axes. Moments were computed and comparisons made between the experimentally collected strengths and those predicted by the IAA and WAA methods. The IAA over-predicted strength in 14 of 20 non-primary exertion directions, while the WAA under-predicted strength in only 2 of these directions. Therefore, it is not valid to assume that shoulder axes are independent when predicting shoulder strengths between two orthopedic axes, and the WAA is an improvement over current methods for the posture tested. Copyright © 2015 Elsevier Ltd. All rights reserved.
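The contrast between the two approaches can be sketched for an exertion direction lying between two primary axes. The linear weighting and the resultant upper bound below are illustrative readings of the text, not the study's fitted model, and the strength values are invented:

```python
import math

def waa_strength(s1, s2, theta_deg):
    """Weighted-average approach (WAA): interpolate linearly between
    the two primary-axis strengths, with 0 deg on axis 1 and 90 deg
    on axis 2."""
    w = theta_deg / 90.0
    return (1.0 - w) * s1 + w * s2

# Illustrative primary-axis strengths (Nm), not measured values.
horizontal_flexion, abduction = 40.0, 30.0

waa_45 = waa_strength(horizontal_flexion, abduction, 45)  # 35.0 Nm
iaa_upper = math.hypot(horizontal_flexion, abduction)     # 50.0 Nm
# Under the IAA, the oblique strength can reach the resultant of
# both axis strengths -- well above the interpolated WAA value,
# matching the over-prediction reported above.
```

At the primary directions (0° and 90°) the WAA reproduces the single-axis strengths exactly, so the two methods differ only in the oblique directions where the IAA was found to over-predict.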
Aerothermodynamic environment of a Titan aerocapture vehicle
NASA Technical Reports Server (NTRS)
Tiwari, S. N.; Chow, H.
1982-01-01
The extent of convective and radiative heating for a Titan aerocapture vehicle is investigated. The flow in the shock layer is assumed to be axisymmetric, steady, viscous, and compressible. It is further assumed that the gas is in chemical and local thermodynamic equilibrium, and the tangent slab approximation is used for the radiative transport. The effects of slip boundary conditions on the body surface and at the shock wave are included in the analysis of high-altitude entry conditions. The implicit finite-difference technique is used to solve the viscous shock-layer equations for a 45-degree sphere cone at zero angle of attack. Different compositions for the Titan atmosphere are assumed, and results are obtained for the entry conditions specified by the Jet Propulsion Laboratory.
Maintenance cost, toppling risk and size of trees in a self-thinning stand.
Larjavaara, Markku
2010-07-07
Wind routinely topples trees during storms, and the likelihood that a tree is toppled depends critically on its allometry. Yet none of the existing theories to explain tree allometry consider wind drag on tree canopies. Since leaf area index in crowded, self-thinning stands is independent of stand density, the drag force per unit land area can also be assumed to be independent of stand density, with only canopy height influencing the total toppling moment. Tree stem dimensions and the self-thinning biomass can then be computed by further assuming that the risk of toppling over and stem maintenance per unit land area are independent of stand density, and that stem maintenance cost is a linear function of stem surface area and sapwood volume. These assumptions provide a novel way to understand tree allometry and lead to a self-thinning line relating tree biomass and stand density with a power between -3/2 and -2/3, depending on the ratio of maintenance of sapwood and stem surface. (c) 2010 Elsevier Ltd. All rights reserved.
Inferences about unobserved causes in human contingency learning.
Hagmayer, York; Waldmann, Michael R
2007-03-01
Estimates of the causal efficacy of an event need to take into account the possible presence and influence of other unobserved causes that might have contributed to the occurrence of the effect. Current theoretical approaches deal differently with this problem. Associative theories assume that at least one unobserved cause is always present. In contrast, causal Bayes net theories (including Power PC theory) hypothesize that unobserved causes may be present or absent. These theories generally assume independence of different causes of the same event, which greatly simplifies modelling learning and inference. In two experiments participants were requested to learn about the causal relation between a single cause and an effect by observing their co-occurrence (Experiment 1) or by actively intervening in the cause (Experiment 2). Participants' assumptions about the presence of an unobserved cause were assessed either after each learning trial or at the end of the learning phase. The results show an interesting dissociation. Whereas there was a tendency to assume interdependence of the causes in the online judgements during learning, the final judgements tended to be more in the direction of an independence assumption. Possible explanations and implications of these findings are discussed.
Hellström-Hyson, Eva; Mårtensson, Gunilla; Kristofferzon, Marja-Leena
2012-01-01
The present study aimed at describing how nursing students engaged in their clinical practice experienced two models of supervision: supervision on student wards and traditional supervision. Supervision for nursing students in clinical practice can be organized in different ways. In the present study, parts of nursing students' clinical practice were carried out on student wards in existing hospital departments. The purpose was to give students the opportunity to assume greater responsibility for their clinical education and to apply the nursing process more independently through peer learning. A descriptive design with a qualitative approach was used. Interviews were carried out with eight nursing students in their final semester of a 3-year degree program in nursing. The data were analyzed using content analysis. Two themes were revealed in the data analysis: When supervised on the student wards, nursing students experienced assuming responsibility and finding one's professional role, while during traditional supervision, they experienced being an onlooker and having difficulties assuming responsibility. Supervision on a student ward was found to give nursing students a feeling of acknowledgment and more opportunities to develop independence, continuity, cooperation and confidence. Copyright © 2011 Elsevier Ltd. All rights reserved.
Bayesian network representing system dynamics in risk analysis of nuclear systems
NASA Astrophysics Data System (ADS)
Varuttamaseni, Athi
2011-12-01
A dynamic Bayesian network (DBN) model is used in conjunction with the alternating conditional expectation (ACE) regression method to analyze the risk associated with the loss of feedwater accident coupled with a subsequent initiation of the feed and bleed operation in the Zion-1 nuclear power plant. The use of the DBN allows the joint probability distribution to be factorized, enabling the analysis to be done on many simpler network structures rather than on one complicated structure. The construction of the DBN model assumes conditional independence relations among certain key reactor parameters. The choice of parameters to model is based on considerations of the macroscopic balance statements governing the behavior of the reactor under a quasi-static assumption. The DBN is used to relate the peak clad temperature to a set of independent variables that are known to be important in determining the success of the feed and bleed operation. A simple linear relationship is then used to relate the clad temperature to the core damage probability. To obtain a quantitative relationship among different nodes in the DBN, surrogates of the RELAP5 reactor transient analysis code are used. These surrogates are generated by applying the ACE algorithm to output data obtained from about 50 RELAP5 cases covering a wide range of the selected independent variables. These surrogates allow important safety parameters such as the fuel clad temperature to be expressed as a function of key reactor parameters such as the coolant temperature and pressure together with important independent variables such as the scram delay time. The time-dependent core damage probability is calculated by sampling the independent variables from their probability distributions and propagating the information up through the Bayesian network to give the clad temperature.
With the knowledge of the clad temperature and the assumption that the core damage probability has a one-to-one relationship to it, we calculated the core damage probability as a function of transient time. The use of the DBN model in combination with ACE allows risk analysis to be performed with much less effort than if the analysis were done using standard techniques.
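The sampling scheme described above can be sketched with made-up numbers. The surrogate form, the input distributions, and the damage-temperature thresholds below are all hypothetical stand-ins for the ACE-derived surrogates and plant data; only the structure (sample inputs, evaluate surrogate, map temperature to damage probability, average) follows the abstract.

```python
import random

def clad_temp_surrogate(scram_delay_s, coolant_temp_k):
    # Stand-in for the ACE-derived surrogate of RELAP5 (assumed linear form).
    return 600.0 + 8.0 * scram_delay_s + 0.5 * (coolant_temp_k - 550.0)

def core_damage_prob(clad_temp_k, t_low=800.0, t_high=1500.0):
    # Simple linear ramp between a no-damage and a certain-damage temperature.
    frac = (clad_temp_k - t_low) / (t_high - t_low)
    return min(1.0, max(0.0, frac))

def estimate_cdp(n_samples=10_000, seed=0):
    # Monte Carlo over the independent variables' assumed distributions.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        delay = rng.uniform(0.0, 60.0)   # scram delay time, s (assumed range)
        t_cool = rng.gauss(560.0, 10.0)  # coolant temperature, K (assumed)
        total += core_damage_prob(clad_temp_surrogate(delay, t_cool))
    return total / n_samples
```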
Psychometric functions for informational masking
NASA Astrophysics Data System (ADS)
Lutfi, Robert A.; Kistler, Doris J.; Callahan, Michael R.; Wightman, Frederic L.
2003-04-01
The method of constant stimuli was used to obtain complete psychometric functions (PFs) from 44 normal-hearing listeners in conditions known to produce varying amounts of informational masking. The task was to detect a pure-tone signal in the presence of a broadband noise and in the presence of multitone maskers with frequencies and amplitudes that varied at random from one presentation to the next. Relative to the broadband noise condition, significant reductions were observed in both the slope and the upper asymptote of the PF for multitone maskers producing large amounts of informational masking. Slope was affected more for some listeners while asymptote was affected more for others. Mean slopes and asymptotes varied nonmonotonically with the number of masker components in much the same manner as mean thresholds. The results are consistent with a model that assumes trial-by-trial judgments are based on a weighted sum of dB levels at the output of independent auditory filters. For many listeners, however, the weights appear to reflect how often a nonsignal auditory filter is mistaken for the signal filter. For these listeners adaptive procedures may produce a significant bias in the estimates of threshold for conditions of informational masking. [Work supported by NIDCD.]
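A minimal simulation of the weighted-sum decision model described above; all levels, weights, and the criterion are assumed for illustration, not taken from the study.

```python
import random

def decision_variable(levels_db, weights):
    # Trial-by-trial decision variable: weighted sum of dB levels at the
    # outputs of independent auditory filters.
    return sum(w * l for w, l in zip(weights, levels_db))

def simulate(n_trials=2000, criterion=55.0, weights=(1.0, 0.0, 0.0), seed=0):
    # Report "signal present" when the decision variable exceeds a criterion;
    # masker levels vary at random from one presentation to the next.
    rng = random.Random(seed)
    hits = false_alarms = 0
    for _ in range(n_trials):
        noise = [rng.uniform(40.0, 60.0) for _ in weights]
        signal = list(noise)
        signal[0] += 10.0  # the signal adds level in the signal filter (index 0)
        hits += decision_variable(signal, weights) > criterion
        false_alarms += decision_variable(noise, weights) > criterion
    return hits / n_trials, false_alarms / n_trials
```

Placing nonzero weight on nonsignal filters (e.g. weights=(0.5, 0.25, 0.25)) degrades performance, which is the mechanism the abstract invokes for listeners who mistake a nonsignal filter for the signal filter.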
Processing load induced by informational masking is related to linguistic abilities.
Koelewijn, Thomas; Zekveld, Adriana A; Festen, Joost M; Rönnberg, Jerker; Kramer, Sophia E
2012-01-01
It is often assumed that the benefit of hearing aids is not primarily reflected in better speech performance, but rather in less effortful listening in the aided than in the unaided condition. Before being able to assess such a hearing aid benefit, the present study examined how processing load while listening to masked speech relates to inter-individual differences in cognitive abilities relevant for language processing. Pupil dilation was measured in thirty-two normal-hearing participants while they listened to sentences masked by fluctuating noise or interfering speech at either 50% or 84% intelligibility. Additionally, working memory capacity, inhibition of irrelevant information, and written text reception were tested. Pupil responses were larger during interfering speech as compared to fluctuating noise. This effect was independent of intelligibility level. Regression analysis revealed that high working memory capacity, better inhibition, and better text reception were related to better speech reception thresholds. Apart from a positive relation to speech recognition, better inhibition and better text reception were also positively related to larger pupil dilation in the single-talker masker conditions. We conclude that better cognitive abilities not only relate to better speech perception, but also partly explain higher processing load in complex listening conditions.
Schrag, Yann; Tremea, Alessandro; Lagger, Cyril; Ohana, Noé; Mohr, Christine
2016-01-01
Studies indicated that people behave less responsibly after exposure to information containing deterministic statements as compared to free will statements or neutral statements. Thus, deterministic primes should lead to enhanced risk-taking behavior. We tested this prediction in two studies with healthy participants. In experiment 1, we tested 144 students (24 men) in the laboratory using the Iowa Gambling Task. In experiment 2, we tested 274 participants (104 men) online using the Balloon Analogue Risk Task. In the Iowa Gambling Task, the free will priming condition resulted in more risky decisions than both the deterministic and neutral priming conditions. We observed no priming effects on risk-taking behavior in the Balloon Analogue Risk Task. To explain these unpredicted findings, we consider the somatic marker hypothesis, a gain frequency approach as well as attention to gains and / or inattention to losses. In addition, we highlight the necessity to consider both pro free will and deterministic priming conditions in future studies. Importantly, our and previous results indicate that the effects of pro free will and deterministic priming do not oppose each other on a frequently assumed continuum. PMID:27018854
Sugiura, Motoaki; Sassa, Yuko; Jeong, Hyeonjeong; Miura, Naoki; Akitsuki, Yuko; Horie, Kaoru; Sato, Shigeru; Kawashima, Ryuta
2006-10-01
Multiple brain networks may support visual self-recognition. It has been hypothesized that the left ventral occipito-temporal cortex processes one's own face as a symbol, and the right parieto-frontal network processes self-image in association with motion-action contingency. Using functional magnetic resonance imaging, we first tested these hypotheses based on the prediction that these networks preferentially respond to a static self-face and to a moving image of one's whole body, respectively. Brain activation specifically related to self-image during familiarity judgment was compared across four stimulus conditions comprising a two-factorial design: factor Motion contrasted a picture (Picture) with a movie (Movie), and factor Body part contrasted a face (Face) with the whole body (Body). Second, we attempted to segregate self-specific networks using a principal component analysis (PCA), assuming an independent pattern of inter-subject variability in activation over the four stimulus conditions in each network. The bilateral ventral occipito-temporal and the right parietal and frontal cortices exhibited self-specific activation. The left ventral occipito-temporal cortex exhibited greater self-specific activation for Face than for Body in Picture, consistent with the prediction for this region. The activation profiles of the right parietal and frontal cortices did not show the preference for Movie Body predicted by the assumed roles of these regions. The PCA extracted two cortical networks, one with its peaks in the right posterior cortices and another in the frontal cortices; their possible roles in visuo-spatial and conceptual self-representations, respectively, were suggested by previous findings. The results thus supported and provided evidence of multiple brain networks for visual self-recognition.
Morrongiello, Barbara A; Corbett, Michael
2015-10-01
The aim of this study was to compare parents' expectations for their children crossing streets with children's actual crossing behaviours and to determine how accurately parents judge their own children's pedestrian behaviours. Using a fully immersive virtual reality system interfaced with a 3D movement measurement system, younger (7-9 years) and older (10-12 years) children's crossing behaviours were assessed. The parent viewed the same traffic conditions and indicated if their child would cross and how successful she/he expected the child would be when doing so. Comparing children's performance with what their parents expected they would do revealed that parents significantly overestimated the inter-vehicle gap threshold of their children, erroneously assuming that children would show safer pedestrian behaviours and select larger inter-vehicle gaps to cross into than they actually did; there were no effects of child age or sex. Child and parent scores were not correlated, and a logistic regression indicated these were independent of one another. Parents were not accurate in estimating the traffic conditions under which their children would try to cross the street. If parents are not adequately supervising when children cross streets, they may be placing their children at risk of pedestrian injury because they are assuming their children will select larger (safer) inter-vehicle gaps when crossing than children actually do. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Clifford, Jacob; Adami, Christoph
2015-09-02
Transcription factor binding to the surface of DNA regulatory regions is one of the primary mechanisms regulating gene expression levels. A probabilistic approach to modeling protein-DNA interactions at the sequence level is through position weight matrices (PWMs), which estimate the joint probability of a DNA binding site sequence by assuming positional independence within the DNA sequence. Here we construct conditional PWMs that depend on the motif signatures in the flanking DNA sequence, by conditioning known binding site loci on the presence or absence of additional binding sites in the flanking sequence of each site's locus. Pooling known sites with similar flanking sequence patterns allows for the estimation of the conditional distribution function over the binding site sequences. We apply our model to the Dorsal transcription factor binding sites active in patterning the Dorsal-Ventral axis of Drosophila development. We find that those binding sites that cooperate with nearby Twist sites on average contain about 0.5 bits of information about the presence of Twist transcription factor binding sites in the flanking sequence. We also find that Dorsal binding site detectors conditioned on flanking sequence information make better predictions about what is a Dorsal site relative to background DNA than detection without information about flanking sequence features.
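The positional-independence assumption behind a standard PWM can be shown in a few lines: the probability of a full site is the product of per-column base probabilities. The three-column matrix below is invented for illustration and is unrelated to the Dorsal motif.

```python
import math

PWM = [  # each column: probability of A, C, G, T at that position (made up)
    {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},
    {"A": 0.1, "C": 0.1, "G": 0.1, "T": 0.7},
]

def site_probability(seq, pwm=PWM):
    # Positional independence: joint probability = product over columns.
    assert len(seq) == len(pwm)
    p = 1.0
    for base, column in zip(seq, pwm):
        p *= column[base]
    return p

def log_odds(seq, background=0.25, pwm=PWM):
    # Log-odds of the site model against a uniform background, in bits.
    return math.log2(site_probability(seq, pwm) / background ** len(pwm))
```

The conditional PWMs described above would replace the single matrix with one matrix per flanking-sequence class, but the per-site scoring step stays the same.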
Spatiotemporal reconstruction of list-mode PET data.
Nichols, Thomas E; Qi, Jinyi; Asma, Evren; Leahy, Richard M
2002-04-01
We describe a method for computing a continuous-time estimate of tracer density using list-mode positron emission tomography data. The activity in each voxel is modeled as an inhomogeneous Poisson process whose rate function is represented using a cubic B-spline basis. The rate functions are estimated by maximizing the likelihood of the arrival times of detected photon pairs over the control vertices of the spline, modified by quadratic spatial and temporal smoothness penalties and a penalty term to enforce nonnegativity. Randoms rate functions are estimated by assuming independence between the spatial and temporal randoms distributions. Similarly, scatter rate functions are estimated by assuming spatiotemporal independence and that the temporal distribution of the scatter is proportional to the temporal distribution of the trues. A quantitative evaluation was performed using simulated data, and the method is also demonstrated in a human study using 11C-raclopride.
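The list-mode likelihood has the standard inhomogeneous-Poisson form, log L = Σ_i log λ(t_i) − ∫ λ(t) dt. A sketch of that objective for one voxel, with a piecewise-linear rate standing in for the paper's cubic B-spline basis and the smoothness and nonnegativity penalties omitted:

```python
import math

def rate(t, knots, values):
    # Linear interpolation between control values (B-spline stand-in).
    for (t0, v0), (t1, v1) in zip(zip(knots, values), zip(knots[1:], values[1:])):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    raise ValueError("t outside knot range")

def log_likelihood(arrivals, knots, values, n_grid=1000):
    # Inhomogeneous-Poisson log-likelihood of event arrival times:
    # sum of log-rates at the arrivals minus the integral of the rate.
    t_start, t_end = knots[0], knots[-1]
    dt = (t_end - t_start) / n_grid
    integral = sum(rate(t_start + (k + 0.5) * dt, knots, values)
                   for k in range(n_grid)) * dt  # midpoint rule
    return sum(math.log(rate(t, knots, values)) for t in arrivals) - integral
```

Maximizing this over the control values (here, over `values`) is the estimation step the abstract describes; with a constant rate the expression reduces to the familiar homogeneous-Poisson likelihood.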
Model-independent constraints on Lorentz invariance violation via the cosmographic approach
NASA Astrophysics Data System (ADS)
Zou, Xiao-Bo; Deng, Hua-Kai; Yin, Zhao-Yu; Wei, Hao
2018-01-01
Since Lorentz invariance plays an important role in modern physics, it is of interest to test for possible Lorentz invariance violation (LIV). The time-lag (the arrival-time delay between light curves in different energy bands) of Gamma-ray bursts (GRBs) has been extensively used to this end. However, to the best of our knowledge, one or more particular cosmological models were assumed a priori in (almost) all of the relevant works in the literature. This makes the results on LIV in those works model-dependent and hence not so robust. In the present work, we avoid this problem by using a model-independent approach. We calculate the time delay induced by LIV with the cosmic expansion history given in terms of cosmography, without assuming any particular cosmological model. We then constrain the possible LIV with the observational data and find weak hints for LIV.
Bayes classification of terrain cover using normalized polarimetric data
NASA Technical Reports Server (NTRS)
Yueh, H. A.; Swartz, A. A.; Kong, J. A.; Shin, R. T.; Novak, L. M.
1988-01-01
The normalized polarimetric classifier (NPC) which uses only the relative magnitudes and phases of the polarimetric data is proposed for discrimination of terrain elements. The probability density functions (PDFs) of polarimetric data are assumed to have a complex Gaussian distribution, and the marginal PDF of the normalized polarimetric data is derived by adopting the Euclidean norm as the normalization function. The general form of the distance measure for the NPC is also obtained. It is demonstrated that for polarimetric data with an arbitrary PDF, the distance measure of NPC will be independent of the normalization function selected even when the classifier is mistrained. A complex Gaussian distribution is assumed for the polarimetric data consisting of grass and tree regions. The probability of error for the NPC is compared with those of several other single-feature classifiers. The classification error of NPCs is shown to be independent of the normalization function.
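The normalization step itself is straightforward; a sketch assuming a three-element complex polarimetric feature vector (e.g. HH, HV, VV) and the Euclidean norm adopted above:

```python
import math

def normalize(vec):
    """Divide a complex feature vector by its Euclidean norm.

    The result keeps only relative magnitudes and phases, which is the
    information the NPC classifies on.
    """
    norm = math.sqrt(sum(abs(z) ** 2 for z in vec))
    return [z / norm for z in vec]
```

Scale invariance is the point: two measurements that differ only in absolute magnitude normalize to the same vector, so the classifier's distance measure cannot depend on overall signal strength.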
Probability matching in perceptrons: Effects of conditional dependence and linear nonseparability.
Dawson, Michael R W; Gupta, Maya
2017-01-01
Probability matching occurs when the behavior of an agent matches the likelihood of occurrence of events in the agent's environment. For instance, when artificial neural networks match probability, the activity in their output unit equals the past probability of reward in the presence of a stimulus. Our previous research demonstrated that simple artificial neural networks (perceptrons, which consist of a set of input units directly connected to a single output unit) learn to match probability when presented different cues in isolation. The current paper extends this research by showing that perceptrons can match probabilities when presented simultaneous cues, with each cue signaling different reward likelihoods. In our first simulation, we presented up to four different cues simultaneously; the likelihood of reward signaled by the presence of one cue was independent of the likelihood of reward signaled by other cues. Perceptrons learned to match reward probabilities by treating each cue as an independent source of information about the likelihood of reward. In a second simulation, we violated the independence between cues by making some reward probabilities depend upon cue interactions. We did so by basing reward probabilities on a logical combination (AND or XOR) of two of the four possible cues. We also varied the size of the reward associated with the logical combination. We discovered that this latter manipulation was a much better predictor of perceptron performance than was the logical structure of the interaction between cues. This indicates that when perceptrons learn to match probabilities, they do so by assuming that each signal of a reward is independent of any other; the best predictor of perceptron performance is a quantitative measure of the independence of these input signals, and not the logical structure of the problem being learned.
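Probability matching by a single output unit can be reproduced in a few lines. The learning rule (delta rule on a sigmoid unit), learning rate, and trial count below are assumptions for illustration, not the simulations reported above; the point is that the weight settles where the output equals the past reward probability rather than maximizing reward.

```python
import math
import random

def train_perceptron(p_reward=0.7, lr=0.02, n_trials=20_000, seed=1):
    rng = random.Random(seed)
    w = 0.0  # weight for a single, always-present cue
    for _ in range(n_trials):
        y = 1.0 / (1.0 + math.exp(-w))          # output unit activity
        r = 1.0 if rng.random() < p_reward else 0.0  # stochastic reward
        w += lr * (r - y)                        # delta rule update
    return 1.0 / (1.0 + math.exp(-w))
```

At equilibrium the average update lr * (r − y) is zero, which forces y toward p_reward — the matching behavior described above.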
Two Back Stress Hardening Models in Rate Independent Rigid Plastic Deformation
NASA Astrophysics Data System (ADS)
Yun, Su-Jin
In the present work, the constitutive relations based on the combination of two back stresses are developed using the Armstrong-Frederick, Phillips and Ziegler’s type hardening rules. Various evolutions of the kinematic hardening parameter can be obtained by means of a simple combination of back stress rate using the rule of mixtures. Thus, a wide range of plastic deformation behavior can be depicted depending on the dominant back stress evolution. The ultimate back stress is also determined for the present combined kinematic hardening models. Since a kinematic hardening rule is assumed in the finite deformation regime, the stress rate is co-rotated with respect to the spin of substructure obtained by incorporating the plastic spin concept. A comparison of the various co-rotational rates is also included. Assuming rigid plasticity, the continuum body consists of the elastic deformation zone and the plastic deformation zone to form a hybrid finite element formulation. Then, the plastic deformation behavior is investigated under various loading conditions with an assumption of the J2 deformation theory. The plastic deformation localization turns out to be strongly dependent on the description of back stress evolution and its associated hardening parameters. The analysis for the shear deformation with fixed boundaries is carried out to examine the deformation localization behavior and the evolution of state variables.
Models of electroosmotic flow in micro- and nanochannels
NASA Astrophysics Data System (ADS)
Zheng, Z.; Conlisk, A. T.; Sadr, R.; Yoda, M.
2003-11-01
Understanding electroosmotic flow (EOF) is essential for developing efficient drug delivery and rapid biomolecular analysis devices given the extremely high pressure gradients required to drive flows through channels smaller than about 10 μm. We consider fully-developed and steady EOF in one- and two-dimensional micro- and nanochannel geometries. The fluid, which is assumed to behave as a continuum, is a mixture of a neutral solvent such as water and a salt whose ionic species are entirely dissociated. The model can be used to analyze EOF where the opposite channel walls are oppositely charged and EOF with arbitrary electric double layer thickness. Unlike most previous models, which assume a wall ζ-potential a priori, the model calculates the boundary conditions for the (wall) mole fractions using the equilibrium electrochemical potential in the upstream reservoir. We can therefore predict the wall ζ-potential, and calculate EOF with spatially and temporally varying wall ζ-potentials. The model results for electroosmotic mobility and volumetric flow rate are compared with those from three independent experimental datasets, and found to be in good agreement with all three sets of experimental data for channel sizes ranging from O(10 nm) to O(10 μm). The limits of the continuum theory for EOF are discussed.
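For a sense of scale, the classical Helmholtz-Smoluchowski relation u = −εζE/μ (valid for thin double layers) gives the magnitude of EOF; the model above predicts ζ rather than assuming it, but this relation is a useful sanity check on the resulting mobility. The property values below are nominal figures for water at room temperature.

```python
def eo_velocity(zeta_v, e_field, eps=7.08e-10, mu=1.0e-3):
    """Helmholtz-Smoluchowski electroosmotic velocity, u = -eps * zeta * E / mu.

    zeta_v:  wall zeta-potential, V
    e_field: applied axial electric field, V/m
    eps:     permittivity of water, F/m (eps_r ~ 80)
    mu:      dynamic viscosity of water, Pa*s
    """
    return -eps * zeta_v * e_field / mu
```

With ζ = −50 mV and E = 100 V/cm, this gives u ≈ 0.35 mm/s, the typical magnitude of EOF in microchannels.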
Caregiving, Perceptions of Maternal Favoritism, and Tension Among Siblings
Suitor, J. Jill; Gilligan, Megan; Johnson, Kaitlin; Pillemer, Karl
2014-01-01
Purpose: Studies of later-life families have revealed that sibling tension often increases in response to parents’ need for care. Both theory and research on within-family differences suggest that when parents’ health declines, sibling relations may be affected by which children assume care and whether siblings perceive that the parent favors some offspring over others. In the present study, we explore the ways in which these factors shape sibling tension both independently and in combination during caregiving. Design and Methods: In this article, we use data collected from 450 adult children nested within 214 later-life families in which the offspring reported that their mothers needed care within 2 years prior to the interview. Results: Multilevel analyses demonstrated that providing care and perceiving favoritism regarding future caregiving were associated with sibling tension following mothers’ major health events. Further, the effects of caregiving on sibling tension were greater when perceptions of favoritism were also present. Implications: These findings shed new light on the conditions under which adult children are likely to experience high levels of sibling tension during caregiving. Understanding these processes is important because siblings are typically the individuals to whom caregivers are most likely to turn for support when assuming care of older parents, yet these relationships are often a major source of interpersonal stress. PMID:23811753
Smets, Karolien; Moors, Pieter; Reynvoet, Bert
2016-01-01
Performance in a non-symbolic comparison task in which participants are asked to indicate the larger numerosity of two dot arrays, is assumed to be supported by the Approximate Number System (ANS). This system allows participants to judge numerosity independently from other visual cues. Supporting this idea, previous studies indicated that numerosity can be processed when visual cues are controlled for. Consequently, distinct types of visual cue control are assumed to be interchangeable. However, a previous study showed that the type of visual cue control affected performance using a simultaneous presentation of the stimuli in numerosity comparison. In the current study, we explored whether the influence of the type of visual cue control on performance disappeared when sequentially presenting each stimulus in numerosity comparison. While the influence of the applied type of visual cue control was significantly more evident in the simultaneous condition, sequentially presenting the stimuli did not completely exclude the influence of distinct types of visual cue control. Altogether, these results indicate that the implicit assumption that it is possible to compare performances across studies with a differential visual cue control is unwarranted and that the influence of the type of visual cue control partly depends on the presentation format of the stimuli. PMID:26869967
NASA Technical Reports Server (NTRS)
Royer, A.; Picard, G.; Arnaud, L.; Brucker, L.; Fily, M.
2014-01-01
Space-borne microwave radiometers are among the most useful tools to study snow and to collect information on the Antarctic climate. They have several advantages over other remote sensing techniques: high sensitivity to the snow properties of interest (temperature, grain size, density), subdaily coverage in the polar regions, and observations that are independent of cloud conditions and solar illumination. Thus, microwave radiometers are widely used to retrieve information over snow-covered regions. For the Antarctic Plateau, many studies presenting retrieval algorithms or numerical simulations have assumed, explicitly or not, that the subpixel-scale heterogeneity is negligible and that the retrieved properties are representative of whole pixels. In this presentation, we investigate the spatial variations of brightness temperature over a range of a few kilometers in the Dome C area (Antarctic Plateau).
Enrichment analysis in high-throughput genomics - accounting for dependency in the NULL.
Gold, David L; Coombes, Kevin R; Wang, Jing; Mallick, Bani
2007-03-01
Translating the overwhelming amount of data generated in high-throughput genomics experiments into biologically meaningful evidence, which may for example point to a series of biomarkers or hint at a relevant pathway, is a matter of great interest in bioinformatics these days. Genes showing similar experimental profiles, it is hypothesized, share biological mechanisms that if understood could provide clues to the molecular processes leading to pathological events. It is the topic of further study to learn if or how a priori information about the known genes may serve to explain coexpression. One popular method of knowledge discovery in high-throughput genomics experiments, enrichment analysis (EA), seeks to infer whether an interesting collection of genes is 'enriched' for a particular set of a priori Gene Ontology (GO) classes. For the purposes of statistical testing, the conventional methods offered in EA software implicitly assume independence between the GO classes. Genes may be annotated for more than one biological classification, and therefore the resulting test statistics of enrichment between GO classes can be highly dependent if the overlapping gene sets are relatively large. There is a need to formally determine whether conventional EA results are robust to the independence assumption. We derive the exact null distribution for testing enrichment of GO classes by relaxing the independence assumption using well-known statistical theory. In applications with publicly available data sets, our test results are similar to those of the conventional approach, which assumes independence. We argue that the independence assumption is not detrimental.
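The per-class testing done by conventional EA tools (the approach that implicitly assumes independence between classes) can be sketched as a hypergeometric tail probability: given N genes of which K carry a GO annotation, the p-value of seeing k or more annotated genes in an interesting list of size n. The counts in the test below are invented.

```python
from math import comb

def enrichment_pvalue(N, K, n, k):
    """One-sided hypergeometric (Fisher-exact) p-value for one GO class.

    N: total genes, K: genes annotated to the class,
    n: size of the interesting gene list, k: annotated genes in that list.
    """
    denom = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / denom
```

Running this independently per class is exactly what the exact null distribution derived above relaxes: overlapping annotations make the per-class statistics dependent.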
NASA Astrophysics Data System (ADS)
Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin
2017-03-01
We show how to calculate the secure final key rate in the four-intensity decoy-state measurement-device-independent quantum key distribution protocol with both source errors and statistical fluctuations with a certain failure probability. Our results rely on the range of only a few parameters in the source state. All imperfections in this protocol have been taken into consideration without assuming any specific error patterns of the source.
NASA Astrophysics Data System (ADS)
Weijtjens, Wout; Lataire, John; Devriendt, Christof; Guillaume, Patrick
2014-12-01
Periodical loads, such as waves and rotating machinery, form a problem for operational modal analysis (OMA). In OMA only the vibrations of a structure of interest are measured and little to nothing is known about the loads causing these vibrations. Therefore, it is often assumed that all dynamics in the measured data are linked to the system of interest. Periodical loads defy this assumption, as their periodical behavior is often visible within the measured vibrations. As a consequence, most OMA techniques falsely associate the dynamics of the periodical load with the system of interest. Without additional information about the load, one is not able to correctly differentiate between structural dynamics and the dynamics of the load. In several applications, e.g. turbines and helicopters, it was observed that because of periodical loads one was unable to correctly identify one or multiple modes. Transmissibility based OMA (TOMA) is a completely different approach to OMA. By using transmissibility functions to estimate the structural dynamics of the system of interest, all influence of the load spectrum can be eliminated. TOMA therefore makes it possible to identify the modal parameters without being influenced by the presence of periodical loads, such as harmonics. One of the difficulties of TOMA is that the analyst is required to find two independent datasets, each associated with a different loading condition of the system of interest. This poses a dilemma for TOMA: how can an analyst identify two different loading conditions when little is known about the loads on the system? This paper tackles that problem by assuming that the loading conditions vary continuously over time, e.g. with changing wind directions. From this assumption, TOMA is developed into a time-varying framework. This development allows TOMA to cope with continuously changing loading conditions.
The time-varying framework also enables the identification of the modal parameters from a single dataset. Moreover, the time-varying TOMA approach can be implemented in such a way that the analyst no longer has to identify different loading conditions. For these combined reasons the time-varying TOMA is less dependent on the user and requires less testing time than the earlier TOMA-technique.
Calcineurin inhibition blocks within-, but not between-session fear extinction in mice
Moulin, Thiago C.; Carneiro, Clarissa F. D.; Gonçalves, Marina M. C.; Junqueira, Lara S.; Amaral, Olavo B.
2015-01-01
Memory extinction involves the formation of a new associative memory that inhibits a previously conditioned association. Nonetheless, it could also depend on weakening of the original memory trace if extinction is assumed to have multiple components. The phosphatase calcineurin (CaN) has been described as being involved in extinction but not in the initial consolidation of fear learning. With this in mind, we set out to study whether CaN could have different roles in distinct components of extinction. Systemic treatment with the CaN inhibitors cyclosporin A (CsA) or FK-506, as well as i.c.v. administration of CsA, blocked within-session, but not between-session extinction or initial learning of contextual fear conditioning. Similar effects were found in multiple-session extinction of contextual fear conditioning and in auditory fear conditioning, indicating that CaN is involved in different types of short-term extinction. Meanwhile, inhibition of protein synthesis by cycloheximide (CHX) treatment did not affect within-session extinction, but disrupted fear acquisition and slightly impaired between-session extinction. Our results point to a dissociation of within- and between-session extinction of fear conditioning, with the former being more dependent on CaN activity and the latter on protein synthesis. Moreover, the modulation of within-session extinction did not affect between-session extinction, suggesting that these components are at least partially independent. PMID:25691516
A hybrid Reynolds averaged/PDF closure model for supersonic turbulent combustion
NASA Technical Reports Server (NTRS)
Frankel, Steven H.; Hassan, H. A.; Drummond, J. Philip
1990-01-01
A hybrid Reynolds averaged/assumed-pdf approach has been developed and applied to the study of turbulent combustion in a supersonic mixing layer. This approach is used to address the 'laminar-like' treatment of the thermochemical terms that appear in the conservation equations. Calculations were carried out for two experiments involving H2-air supersonic turbulent mixing. Two different forms of the pdf were implemented. In general, the results show modest improvement over previous calculations. Moreover, the results appear to be somewhat independent of the form of the assumed pdf.
Zheng, Jie; Gaunt, Tom R; Day, Ian N M
2013-01-01
Genome-Wide Association Studies (GWAS) frequently incorporate meta-analysis within their framework. However, conditional analysis of individual-level data, which is an established approach for fine mapping of causal sites, is often precluded where only group-level summary data are available for analysis. Here, we present a numerical and graphical approach, "sequential sentinel SNP regional association plot" (SSS-RAP), which estimates regression coefficients (beta) with their standard errors using the meta-analysis summary results directly. Under an additive model, typical for genes with small effect, the effect for a sentinel SNP can be transformed to the predicted effect for a possibly dependent SNP through a 2×2 table of two-SNP haplotypes. The approach assumes Hardy-Weinberg equilibrium for test SNPs. SSS-RAP is available as a Web-tool (http://apps.biocompute.org.uk/sssrap/sssrap.cgi). To develop and illustrate SSS-RAP, we analyzed lipid and ECG traits data from the British Women's Heart and Health Study (BWHHS), evaluated a meta-analysis for an ECG trait and presented several simulations. We compared results with existing approaches such as model selection methods and conditional analysis. Findings were generally consistent. SSS-RAP represents a tool for testing independence of SNP association signals using meta-analysis data, and is also a convenient approach based on biological principles for fine mapping with group-level summary data. © 2012 Blackwell Publishing Ltd/University College London.
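The sentinel-to-dependent-SNP transformation can be illustrated with the standard single-causal-variant result: under an additive model and Hardy-Weinberg equilibrium, the predicted marginal effect at a correlated SNP scales with the LD correlation and the ratio of genotype standard deviations. This is a generic sketch of that textbook relation, not SSS-RAP's actual implementation, and all numbers are illustrative:

```python
import math

def predicted_beta(beta_sentinel, p_sentinel, p_test, r):
    """Predicted marginal effect at a test SNP in LD (correlation r) with the
    sentinel SNP, assuming an additive model and Hardy-Weinberg equilibrium,
    so that Var(genotype) = 2p(1-p) for allele frequency p."""
    var_s = 2 * p_sentinel * (1 - p_sentinel)
    var_t = 2 * p_test * (1 - p_test)
    return beta_sentinel * r * math.sqrt(var_s / var_t)

# Illustrative numbers: sentinel effect 0.10, allele frequencies 0.30 and 0.25,
# LD correlation r = 0.8.
print(predicted_beta(0.10, 0.30, 0.25, 0.8))
```

When the two SNPs have equal allele frequencies and are in perfect LD (r = 1), the predicted effect equals the sentinel effect, as expected.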
Biewener, Andrew A.; Wakeling, James M.
2017-01-01
ABSTRACT Hill-type models are ubiquitous in the field of biomechanics, providing estimates of a muscle's force as a function of its activation state and its assumed force–length and force–velocity properties. However, despite their routine use, the accuracy with which Hill-type models predict the forces generated by muscles during submaximal, dynamic tasks remains largely unknown. This study compared human gastrocnemius forces predicted by Hill-type models with the forces estimated from ultrasound-based measures of tendon length changes and stiffness during cycling, over a range of loads and cadences. We tested both a traditional model, with one contractile element, and a differential model, with two contractile elements that accounted for independent contributions of slow and fast muscle fibres. Both models were driven by subject-specific, ultrasound-based measures of fascicle lengths, velocities and pennation angles and by activation patterns of slow and fast muscle fibres derived from surface electromyographic recordings. The models predicted, on average, 54% of the time-varying gastrocnemius forces estimated from the ultrasound-based methods. However, differences between predicted and estimated forces were smaller under low speed–high activation conditions, with models able to predict nearly 80% of the gastrocnemius force over a complete pedal cycle. Additionally, the predictions from the Hill-type muscle models tested here showed that a similar pattern of force production could be achieved for most conditions with and without accounting for the independent contributions of different muscle fibre types. PMID:28202584
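The structure of a Hill-type model described above, force as activation times assumed force-length and force-velocity factors, can be sketched as follows; the curve shapes and all parameter values are generic placeholders, not those of the study's traditional or differential models:

```python
import math

def hill_force(act, lce, vce, fmax=1000.0, lopt=0.05, vmax=0.5):
    """Generic Hill-type sketch: force = activation * f_L(length) * f_V(velocity) * Fmax.
    Gaussian force-length curve; hyperbolic force-velocity curve with a/F0 = 0.25.
    vce < 0 denotes shortening. All shapes and constants are placeholders."""
    f_l = math.exp(-(((lce / lopt) - 1.0) / 0.45) ** 2)   # force-length factor
    if vce <= 0:                                          # concentric (shortening)
        f_v = (1 + vce / vmax) / (1 - 4 * vce / vmax)
    else:                                                 # eccentric (lengthening)
        f_v = 1.5 - 0.5 * (1 - vce / vmax) / (1 + 7.56 * vce / vmax)
    return act * f_l * max(f_v, 0.0) * fmax

# Isometric force at optimal fibre length, full activation:
print(hill_force(1.0, 0.05, 0.0))   # → 1000.0
```

A differential (two-contractile-element) variant would evaluate two such terms, one per fibre type, each driven by its own activation signal, and sum the forces.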
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roldan, Omar; Quartin, Miguel; Notari, Alessio, E-mail: oaroldan@if.ufrj.br, E-mail: notari@ffn.ub.es, E-mail: mquartin@if.ufrj.br
The aberration and Doppler coupling effects of the Cosmic Microwave Background (CMB) were recently measured by the Planck satellite. The most straightforward interpretation leads to a direct detection of our peculiar velocity β, consistent with the measurement of the well-known dipole. In this paper we discuss the assumptions behind such an interpretation. We show that Doppler-like couplings appear from two effects: our peculiar velocity and a second-order large-scale effect due to the dipolar part of the gravitational potential. We find that the two effects are exactly degenerate, but only if we assume second-order initial conditions from single-field Inflation. Thus, detecting a discrepancy in the value of β from the dipole and the Doppler couplings implies the presence of a primordial non-Gaussianity. We also show that aberration-like signals likewise arise from two independent effects: our peculiar velocity and lensing due to a first-order large-scale dipolar gravitational potential, independently of the Gaussianity of the initial conditions. In general such effects are not degenerate, and so a discrepancy between the measured β from the dipole and aberration could be accounted for by a dipolar gravitational potential. Only through a fine-tuning of the radial profile of the potential is it possible to have a complete degeneracy with a boost effect. Finally, we discuss that we also expect other signatures due to integrated second-order terms, which may be further used to disentangle this scenario from a simple boost.
Two models for evaluating landslide hazards
Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.
2006-01-01
Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence, while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards. © 2006.
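The empirical likelihood ratio model's use of conditional independence can be sketched in a few lines: one likelihood ratio per predictor layer (here slope, aspect, lithology), multiplied into a joint ratio and converted to a relative hazard via the prior odds. All numbers below are invented for illustration:

```python
# Hypothetical per-cell likelihood ratios for three predictor layers
# (slope angle, aspect, lithology), each estimated from empirical
# frequency distributions over landslide vs. non-landslide cells.
lr_slope  = [0.5, 1.2, 3.0, 0.8]
lr_aspect = [1.1, 0.9, 1.4, 0.7]
lr_lith   = [2.0, 0.6, 1.5, 1.0]

prior_odds = 0.02 / 0.98  # assumed 2% landslide prevalence

def posterior(lrs, prior_odds):
    """Combine layer-wise likelihood ratios under conditional independence:
    the joint ratio is the product of the per-layer ratios."""
    joint = 1.0
    for lr in lrs:
        joint *= lr
    odds = joint * prior_odds
    return odds / (1 + odds)

cells = [posterior(lrs, prior_odds) for lrs in zip(lr_slope, lr_aspect, lr_lith)]
print([round(p, 4) for p in cells])
```

The logistic discriminant alternative would instead fit the log-odds as a linear function of the predictors, avoiding the product form and hence the independence assumption.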
Accounting for inherent variability of growth in microbial risk assessment.
Marks, H M; Coleman, M E
2005-04-15
Risk assessments of pathogens need to account for the growth of small numbers of cells under varying conditions. In order to determine the possible risks that occur when there are small numbers of cells, stochastic models of growth are needed that would capture the distribution of the number of cells over replicate trials of the same scenario or environmental conditions. This paper provides a simple stochastic growth model, accounting only for inherent cell-growth variability and assuming constant growth kinetic parameters, for an initial small number of cells assumed to be transitioning from a stationary to an exponential phase. Two basic sets of microbial assumptions are considered: serial, in which it is assumed that cells pass through a lag phase before entering the exponential phase of growth; and parallel, in which it is assumed that lag and exponential phases develop in parallel. The model is based on first determining the distribution of the time at which growth commences, and then modelling the conditional distribution of the number of cells. For the latter distribution, it is found that a Weibull distribution provides a simple approximation to the conditional distribution of the relative growth, so that the model developed in this paper can be easily implemented in risk assessments using commercial software packages.
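A Monte Carlo sketch of the 'serial' assumption makes the replicate-to-replicate variability concrete: each initial cell waits an exponentially distributed lag, then grows as a pure-birth (Yule) process, whose population after time s is geometric with mean exp(mu*s). The parameter values and the exponential lag distribution are illustrative choices, not taken from the paper:

```python
import math, random

def yule_count(mu, s, rng):
    """Population after time s of a Yule (pure-birth) process started from one
    cell: geometric on {1, 2, ...} with success probability exp(-mu * s)."""
    p = math.exp(-mu * s)
    if p >= 1.0:
        return 1
    u = 1.0 - rng.random()                 # uniform on (0, 1]
    return int(math.log(u) / math.log(1.0 - p)) + 1

def simulate_growth(n0=5, mean_lag=1.0, mu=0.7, t=5.0, trials=2000, seed=1):
    """'Serial' assumption: each of the n0 initial cells waits an exponential
    lag, then grows stochastically for the remaining time. Returns the
    replicate-to-replicate distribution of final cell counts."""
    rng = random.Random(seed)
    counts = []
    for _ in range(trials):
        total = 0
        for _ in range(n0):
            lag = rng.expovariate(1.0 / mean_lag)
            total += yule_count(mu, max(t - lag, 0.0), rng)
        counts.append(total)
    return counts

counts = simulate_growth()
print(sum(counts) / len(counts))           # mean final count across replicates
```

Fitting a Weibull to the resulting distribution of relative growth, as the paper suggests, would then summarize `counts` in a form usable by standard risk-assessment software.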
Grandahl, Kasper; Suadicani, Poul; Jacobsen, Peter
2012-08-01
International studies have shown blood lead at levels causing health concern in recreational indoor shooters. We hypothesized that Danish recreational indoor shooters would also have a high level of blood lead, and that this could be explained by shooting characteristics and the physical environment at the shooting range. This was an environmental case study of 58 male and female shooters from two indoor shooting ranges assumed to differ in ventilation and cleaning conditions. Information was obtained on general conditions including age, gender, tobacco and alcohol use, and on shooting conditions: weapon type, number of shots fired, frequency of stays at the shooting range and hygiene habits. A venous blood sample was drawn to determine blood lead concentrations; 14 non-shooters were included as controls. Almost 60% of the shooters, including five of the 14 women, had a blood lead concentration above 0.48 micromol/l, a level causing long-term health concern. All controls had blood lead values below 0.17 micromol/l. Independent significant associations with blood lead concentrations above 0.48 micromol/l were found for shooting at a poorly ventilated range, use of heavy calibre weapons, number of shots and frequency of stays at the shooting range. A large proportion of Danish recreational indoor shooters had potentially harmful blood lead concentrations. Ventilation, amount of shooting, use of heavy calibre weapons and stays at the shooting ranges were independently associated with increased blood lead. The technical check at the two ranges was performed by the Danish Technological Institute and costs were defrayed by the Danish Rifle Association. To pay for the analyses of blood lead, the study was supported by the Else & Mogens Wedell-Wedellsborg Foundation. The Danish Regional Capital Scientific Ethics Committee approved the study, protocol number H-4-2010-130.
Schumacher, Jörg; Behrends, Volker; Pan, Zhensheng; Brown, Dan R.; Heydenreich, Franziska; Lewis, Matthew R.; Bennett, Mark H.; Razzaghi, Banafsheh; Komorowski, Michal; Barahona, Mauricio; Stumpf, Michael P. H.; Wigneshweraraj, Sivaramesh; Bundy, Jacob G.; Buck, Martin
2013-01-01
ABSTRACT Nitrogen regulation in Escherichia coli is a model system for gene regulation in bacteria. Growth on glutamine as a sole nitrogen source is assumed to be nitrogen limiting, inferred from slow growth and strong NtrB/NtrC-dependent gene activation. However, we show that under these conditions, the intracellular glutamine concentration is not limiting but 5.6-fold higher than in ammonium-replete conditions; in addition, α-ketoglutarate concentrations are elevated. We address this glutamine paradox from a systems perspective. We show that the dominant role of NtrC is to regulate glnA transcription and its own expression, indicating that the glutamine paradox is not due to NtrC-independent gene regulation. The absolute intracellular NtrC and GS concentrations reveal molecular control parameters, where NtrC-specific activities were highest in nitrogen-starved cells, while under glutamine growth, NtrC showed intermediate specific activity. We propose an in vivo model in which α-ketoglutarate can derepress nitrogen regulation despite nitrogen sufficiency. PMID:24255125
Feasibility of Sensing Tropospheric Ozone with MODIS 9.6 Micron Observations
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, R., Jr.; Moon-Yoo, Jung
2004-01-01
With the infrared observations made by the Moderate Resolution Imaging Spectroradiometer (MODIS) on board the EOS-Aqua satellite, which include the 9.73 micron channel, a method is developed to deduce horizontal patterns of tropospheric ozone in cloud-free conditions on a scale of about 100 km. It is assumed that on such a small scale, at a given instant, horizontal changes in stratospheric ozone are small compared to those in the troposphere. From theoretical simulations it is found that uncertainties in the land surface emissivity and the vertical thermal stratification in the troposphere can lead to significant errors in the inferred tropospheric ozone. For this reason, deriving horizontal patterns of tropospheric ozone in a given geographic area requires tuning the method with the help of a few dependent cases. After tuning, the method is applied to independent cases of MODIS data taken over the Los Angeles basin in cloud-free conditions to derive the horizontal distribution of ozone in the troposphere. Preliminary results indicate that the derived patterns of ozone crudely resemble the patterns of surface ozone reported by the EPA.
Simultaneous ocular and muscle artifact removal from EEG data by exploiting diverse statistics.
Chen, Xun; Liu, Aiping; Chen, Qiang; Liu, Yu; Zou, Liang; McKeown, Martin J
2017-09-01
Electroencephalography (EEG) recordings are frequently contaminated by both ocular and muscle artifacts. These are normally dealt with separately, by employing blind source separation (BSS) techniques relying on either second-order or higher-order statistics (SOS and HOS, respectively). When HOS-based methods are used, it is usually under the assumption that artifacts are statistically independent of the EEG. When SOS-based methods are used, it is assumed that artifacts have autocorrelation characteristics distinct from the EEG. In reality, ocular and muscle artifacts neither completely follow the assumption of strict temporal independence from the EEG nor have completely unique autocorrelation characteristics, suggesting that exploiting HOS or SOS alone may be insufficient to remove these artifacts. Here we employ a novel BSS technique, independent vector analysis (IVA), to exploit HOS and SOS simultaneously in removing ocular and muscle artifacts. Numerical simulations and application to real EEG recordings were used to explore the utility of the IVA approach. IVA was superior in isolating both ocular and muscle artifacts, especially for raw EEG data with low signal-to-noise ratio, and also integrated the usually separate SOS and HOS steps into a single unified step. Copyright © 2017 Elsevier Ltd. All rights reserved.
Do Martian Blueberries Have Pits? -- Artifacts of an Early Wet Mars
NASA Astrophysics Data System (ADS)
Lerman, L.
2005-03-01
Early Martian weather cycles would have supported organic chemical self-organization, the assumed predecessor to an independent "origin" of Martian life. Artifacts of these processes are discussed, including the possibility that Martian blueberries nucleated around organic cores.
21 CFR 312.3 - Definitions and interpretations.
Code of Federal Regulations, 2011 CFR
2011-04-01
... research organization means a person that assumes, as an independent contractor with the sponsor, one or... individual or pharmaceutical company, governmental agency, academic institution, private organization, or other organization. The sponsor does not actually conduct the investigation unless the sponsor is a...
21 CFR 312.3 - Definitions and interpretations.
Code of Federal Regulations, 2010 CFR
2010-04-01
... research organization means a person that assumes, as an independent contractor with the sponsor, one or... individual or pharmaceutical company, governmental agency, academic institution, private organization, or other organization. The sponsor does not actually conduct the investigation unless the sponsor is a...
Grouping individual independent BOLD effects: a new way to ICA group analysis
NASA Astrophysics Data System (ADS)
Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott
2009-04-01
A new group analysis method to summarize task-related BOLD responses based on independent component analysis (ICA) is presented. In contrast to the previously proposed group ICA (gICA) method, which first combines multi-subject fMRI data in either the temporal or the spatial domain and applies ICA decomposition only once to the combined fMRI data to extract the task-related BOLD effects, the method presented here applies ICA decomposition to each individual subject's fMRI data to first find the independent BOLD effects specific to that subject. The task-related independent BOLD component is then selected among the resulting independent components from the single-subject ICA decomposition and grouped across subjects to derive the group inference. In this new ICA group analysis (ICAga) method, one does not need to assume that the task-related BOLD time courses are identical across brain areas and subjects, as is done in the grand ICA decomposition of the spatially concatenated fMRI data. Neither does one need to assume that, after spatial normalization, the voxels at the same coordinates represent exactly the same functional or structural brain anatomies across different subjects. Both assumptions have been problematic given recent BOLD activation evidence. Further, since the independent BOLD effects are obtained from each individual subject, the ICAga method can better account for individual differences in the task-related BOLD effects, unlike the gICA approach, in which the task-related BOLD effects can only be accounted for by a single unified BOLD model across multiple subjects. As a result, the newly proposed ICAga method is able to better fit the task-related BOLD effects at the individual level and thus allows grouping more appropriate multisubject BOLD effects in the group analysis.
Response moderation models for conditional dependence between response time and response accuracy.
Bolsinova, Maria; Tijmstra, Jesper; Molenaar, Dylan
2017-05-01
It is becoming more feasible and common to register response times in the application of psychometric tests. Researchers thus have the opportunity to jointly model response accuracy and response time, which provides users with more relevant information. The most common choice is to use the hierarchical model (van der Linden, 2007, Psychometrika, 72, 287), which assumes conditional independence between response time and accuracy, given a person's speed and ability. However, this assumption may be violated in practice if, for example, persons vary their speed or differ in their response strategies, leading to conditional dependence between response time and accuracy and confounding measurement. We propose six nested hierarchical models for response time and accuracy that allow for conditional dependence, and discuss their relationship to existing models. Unlike existing approaches, the proposed hierarchical models allow for various forms of conditional dependence in the model and allow the effect of continuous residual response time on response accuracy to be item-specific, person-specific, or both. Estimation procedures for the models are proposed, as well as two information criteria that can be used for model selection. Parameter recovery and usefulness of the information criteria are investigated using simulation, indicating that the procedure works well and is likely to select the appropriate model. Two empirical applications are discussed to illustrate the different types of conditional dependence that may occur in practice and how these can be captured using the proposed hierarchical models. © 2016 The British Psychological Society.
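A simulation under the hierarchical model's conditional-independence assumption shows the pattern the proposed response-moderation models are designed to relax: accuracy and log response time correlate marginally through the person parameters, but are independent once speed is conditioned on. This is a generic sketch with illustrative parameter values, not the authors' estimation code:

```python
import math, random

def simulate(n_persons=500, n_items=20, rho=0.4, seed=7):
    """Simulate responses under the hierarchical model's conditional-independence
    assumption: given a person's ability (theta) and speed (tau), accuracy and
    log response time are drawn independently. All parameter values are illustrative."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_persons):
        theta = rng.gauss(0, 1)
        # speed correlates with ability at the person level (corr = rho)
        tau = rho * theta + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
        for i in range(n_items):
            b = -1 + 2 * i / (n_items - 1)             # item difficulty = time intensity
            p_correct = 1 / (1 + math.exp(-(theta - b)))
            x = 1 if rng.random() < p_correct else 0   # accuracy
            log_t = b - tau + rng.gauss(0, 0.5)        # log response time
            data.append((x, log_t, b, tau))
    return data

data = simulate()
```

Marginally, faster responses tend to be more accurate here only because both depend on the person parameters; the residual log time, log_t - (b - tau), is independent of accuracy by construction. Conditional dependence of the kind the paper models would show up precisely as a nonzero association in that residual.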
Numerical investigation of optimal layout of rockbolts for ground structures
NASA Astrophysics Data System (ADS)
Kato, Junji; Ishi, Keiichiro; Terada, Kenjiro; Kyoya, Takashi
Because reliable ground data are difficult to obtain, the layout of rockbolts is conventionally determined by assuming an isotropic rock stress condition. The present study assumes an anisotropic stress condition and optimizes the layout of rockbolts in order to maximize the stiffness of unstable ground around tunnels and slopes by applying multiphase layout optimization. It was verified that this method has the potential to improve the stiffness of unstable ground.
Seitz, A.C.; Loher, Timothy; Norcross, Brenda L.; Nielsen, J.L.
2011-01-01
Currently, it is assumed that eastern Pacific halibut Hippoglossus stenolepis belong to a single, fully mixed population extending from California through the Bering Sea, in which adult halibut disperse randomly throughout their range during their lifetime. However, we hypothesize that halibut dispersal is more complex than currently assumed and is not spatially random. To test this hypothesis, we studied the seasonal dispersal and behavior of Pacific halibut in the Bering Sea and Aleutian Islands (BSAI). Pop-up Archival Transmitting tags attached to halibut (82 to 154 cm fork length) during the summer provided no evidence that individuals moved out of the Bering Sea and Aleutian Islands region into the Gulf of Alaska during the mid-winter spawning season, supporting the concept that this region contains a separate spawning group of adult halibut. There was evidence for geographically localized groups of halibut along the Aleutian Island chain, as all of the individuals tagged there displayed residency, with their movements possibly impeded by tidal currents in the passes between islands. Mid-winter aggregation areas of halibut are assumed to be spawning grounds, of which 2 were previously unidentified and extend the species' presumed spawning range ~1000 km west and ~600 km north of the nearest documented spawning area. If there are indeed independent spawning groups of Pacific halibut in the BSAI, their dynamics may vary sufficiently from those of the Gulf of Alaska that specifically accounting for their relative segregation and unique dynamics within the larger population model will be necessary for correctly predicting how these components may respond to fishing pressure and changing environmental conditions. © Inter-Research 2011.
A mixture of seven antiandrogens induces reproductive malformations in rats.
To date, regulatory agencies have not considered conducting cumulative risk assessments for mixtures of chemicals with diverse mechanisms of toxicity because it is assumed that the chemicals will act independently and the individual chemical doses are not additive. However, this ...
NASA Technical Reports Server (NTRS)
Gibbons, D. E.; Richard, R. R.
1979-01-01
The methods used to calculate the sensitivity parameter noise equivalent reflectance of a remote-sensing scanner are explored, and the results are compared with values measured over calibrated test sites. Data were acquired on four occasions covering a span of 4 years and providing various atmospheric conditions. One of the calculated values was based on assumed atmospheric conditions, whereas two others were based on atmospheric models. Results indicate that the assumed atmospheric conditions provide useful answers adequate for many purposes. A nomograph was developed to indicate sensitivity variations due to geographic location, time of day, and season.
Parametric study of statistical bias in laser Doppler velocimetry
NASA Technical Reports Server (NTRS)
Gould, Richard D.; Stevenson, Warren H.; Thompson, H. Doyle
1989-01-01
Analytical studies have often assumed that LDV velocity bias depends on turbulence intensity in conjunction with one or more characteristic time scales, such as the time between validated signals, the time between data samples, and the integral turbulence time scale. In the present study, these parameters are varied independently in an effort to quantify the biasing effect. Neither of the post facto correction methods employed is entirely accurate. The mean velocity bias error is found to be nearly independent of data validation rate.
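The biasing mechanism, and one classic post facto correction (inverse-velocity, i.e. McLaughlin-Tiederman, weighting), can be sketched with a toy simulation in which the probability of registering a sample is proportional to the instantaneous speed; the mean velocity and turbulence intensity below are illustrative:

```python
import random

def biased_samples(mean_u=10.0, ti=0.3, n=100_000, seed=3):
    """Velocity-biased sampling sketch: a measurement is registered with
    probability proportional to the instantaneous speed |u|, so fast
    realizations are over-represented. mean_u and ti (turbulence intensity)
    are illustrative values."""
    rng = random.Random(seed)
    sigma = ti * mean_u
    cap = mean_u + 5 * sigma                 # normalizer so acceptance <= 1
    out = []
    while len(out) < n:
        u = rng.gauss(mean_u, sigma)
        if rng.random() < abs(u) / cap:      # acceptance proportional to |u|
            out.append(u)
    return out

samples = biased_samples()
arithmetic = sum(samples) / len(samples)                  # biased high
weights = [1.0 / abs(u) for u in samples]                 # inverse-velocity weights
corrected = sum(u * w for u, w in zip(samples, weights)) / sum(weights)
print(arithmetic, corrected)
```

The unweighted mean overestimates the true mean (here 10.0) by roughly the square of the turbulence intensity, while the inverse-velocity weighted mean recovers it; the paper's point is that such corrections are not entirely accurate once the sampling statistics are more complicated than this idealization.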
1984-10-26
Keywords: test for independence; … of the product life estimator; dependent risks. If we assume independence among the failure times associated with different failure modes when we really should use a bivariate (or multivariate) distribution, and dependencies may be present, then what is the magnitude of the estimation error? The third specific aim will attempt to obtain bounds on the …
A new method for wind speed forecasting based on copula theory.
Wang, Yuankun; Ma, Huiqun; Wang, Dong; Wang, Guizuo; Wu, Jichun; Bian, Jinyu; Liu, Jiufu
2018-01-01
How to determine a representative wind speed is crucial in wind resource assessment. Accurate wind resource assessments are important to wind farm development. Linear regressions are usually used to obtain the representative wind speed. However, the complex terrain of wind farms and long distances between wind speed measurement sites often lead to low correlation. In this study, a copula method is used to determine the representative year's wind speed at a wind farm by interpreting the interaction between the local wind farm and the meteorological station. The results show that the method proposed here can not only determine the relationship between the local anemometric tower and the nearby meteorological station through Kendall's tau, but also determine the joint distribution without assuming the variables to be independent. Moreover, the representative wind data can be obtained much more reasonably from the conditional distribution. We hope this study can provide a scientific reference for accurate wind resource assessments. Copyright © 2017 Elsevier Inc. All rights reserved.
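The core of such an approach can be sketched with a Gaussian copula, whose correlation parameter follows from Kendall's tau as rho = sin(pi*tau/2); conditioning then gives the farm-site latent distribution given the station's value. This is an illustrative sketch (the abstract does not specify a copula family), and mapping the latent normals back to wind speeds via each site's fitted marginal quantile function (e.g. a Weibull) is omitted:

```python
import math, random

def gaussian_copula_rho(tau):
    """Kendall's tau -> Gaussian-copula correlation: rho = sin(pi * tau / 2)."""
    return math.sin(math.pi * tau / 2.0)

def conditional_latent(z_station, tau, n=1000, seed=11):
    """Latent values for the wind-farm site conditional on the station's latent
    value under a Gaussian copula: z_farm | z_station ~ N(rho*z_station, 1-rho^2)."""
    rng = random.Random(seed)
    rho = gaussian_copula_rho(tau)
    s = math.sqrt(1.0 - rho * rho)
    return [rho * z_station + s * rng.gauss(0.0, 1.0) for _ in range(n)]

# An observed Kendall's tau of 0.5 corresponds to rho ≈ 0.707:
print(round(gaussian_copula_rho(0.5), 3))   # → 0.707
```

Unlike a linear regression between the two sites, the copula separates the dependence structure from the marginal wind speed distributions, so neither independence nor joint normality of the raw speeds needs to be assumed.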
String theory of the Regge intercept.
Hellerman, S; Swanson, I
2015-03-20
Using the Polchinski-Strominger effective string theory in the covariant gauge, we compute the mass of a rotating string in D dimensions with large angular momenta J, in one or two planes, in fixed ratio, up to and including first subleading order in the large J expansion. This constitutes a first-principles calculation of the value for the order-J(0) contribution to the mass squared of a meson on the leading Regge trajectory in planar QCD with bosonic quarks. For open strings with Neumann boundary conditions, and for closed strings in D≥5, the order-J(0) term in the mass squared is exactly calculated by the semiclassical approximation. This term in the expansion is universal and independent of the details of the theory, assuming only D-dimensional Poincaré invariance and the absence of other infinite-range excitations on the string world volume, beyond the Nambu-Goldstone bosons.
Getting ahead of one's self? The common culture of immunology and philosophy.
Anderson, Warwick
2014-09-01
During the past thirty years, immunological metaphors, motifs, and models have come to shape much social theory and philosophy. Immunology, so it seems, often has served to naturalize claims about self, identity, and sovereignty--perhaps most prominently in Jacques Derrida's later studies. Yet the immunological science that functions as "nature" in these social and philosophical arguments is derived from interwar and Cold War social theory and philosophy. Theoretical immunologists and social theorists knowingly participated in a common culture. Thus the "naturalistic fallacy" in this case might be reframed as an error of categorization: its conditions of possibility would require ceaseless effort to purify and separate out the categories of nature and culture. The problem--inasmuch as there is a problem--therefore is not so much the making of an appeal to nature as assuming privileged access to an independent, sovereign category called "nature".
Power learning or path dependency? Investigating the roots of the European Food Safety Authority.
Roederer-Rynning, Christilla; Daugbjerg, Carsten
2010-01-01
A key motive for establishing the European Food Safety Authority (EFSA) was restoring public confidence in the wake of multiplying food scares and the BSE crisis. Scholars, however, have paid little attention to the actual political and institutional logics that shaped this new organization. This article explores the dynamics underpinning the making of EFSA. We examine the way in which learning and power shaped its organizational architecture. It is demonstrated that the lessons drawn from the past and other models converged on the need to delegate authority to an external agency, but diverged on its mandate, concretely whether or not EFSA should assume risk management responsibilities. In this situation of competitive learning, power and procedural politics conditioned the mandate granted to EFSA. The European Commission, the European Parliament and the European Council shared a common interest in preventing the delegation of regulatory powers to an independent EU agency in food safety policy.
The relevance of public health research for practice: A 30-year perspective.
Diderichsen, Finn
2018-06-01
The Nordic context, where public health responsibility is strongly devolved to municipalities, raises specific demands on public health research. The demands for causal inference of disease aetiology and intervention efficacy are not different, but in addition there is a need for population health science that describes local prevalence, distribution and clustering of determinants. Knowledge of what interventions and policies work, for whom and under what conditions is essential, but instead of assuming context independence and demanding high external validity it is important to understand how contextual factors linked to groups and places modify both effects and implementation. More implementation studies are needed, but the infrastructure for that research, in terms of theories and instruments for monitoring implementation, is still lacking. Much of this was true also 30 years ago, but with increasing spending on both public health research and practice, the demands are increasing that major improvements of population health and health equity are actually achieved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ellis, John; Evans, Jason L.; Nagata, Natsumi
We reconsider the minimal SU(5) grand unified theory (GUT) in the context of no-scale supergravity inspired by string compactification scenarios, assuming that the soft supersymmetry-breaking parameters satisfy universality conditions at some input scale M_in above the GUT scale M_GUT. When setting up such a no-scale super-GUT model, special attention must be paid to avoiding the Scylla of rapid proton decay and the Charybdis of an excessive density of cold dark matter, while also having an acceptable mass for the Higgs boson. Furthermore, we do not find consistent solutions if none of the matter and Higgs fields are assigned to twisted chiral supermultiplets, even in the presence of Giudice–Masiero terms. But consistent solutions may be found if at least one fiveplet of GUT Higgs fields is assigned to a twisted chiral supermultiplet, with a suitable choice of modular weights. Spin-independent dark matter scattering may be detectable in some of these consistent solutions.
Subliminal speech perception and auditory streaming.
Dupoux, Emmanuel; de Gardelle, Vincent; Kouider, Sid
2008-11-01
Current theories of consciousness assume a qualitative dissociation between conscious and unconscious processing: while subliminal stimuli only elicit a transient activity, supraliminal stimuli have long-lasting influences. Nevertheless, the existence of this qualitative distinction remains controversial, as past studies confounded awareness and stimulus strength (energy, duration). Here, we used a masked speech priming method in conjunction with a submillisecond interaural delay manipulation to contrast subliminal and supraliminal processing at constant prime, mask and target strength. This delay induced a perceptual streaming effect, with the prime popping out in the supraliminal condition. By manipulating the prime-target interval (ISI), we show a qualitatively distinct profile of priming longevity as a function of prime awareness. While subliminal priming disappeared after half a second, supraliminal priming was independent of ISI. This shows that the distinction between conscious and unconscious processing depends on high-level perceptual streaming factors rather than low-level features (energy, duration).
Self-organization of cosmic radiation pressure instability
NASA Technical Reports Server (NTRS)
Hogan, Craig J.
1991-01-01
Under some circumstances the absorption of radiation momentum by an absorbing medium opens the possibility of a dynamical instability, sometimes called 'mock gravity'. Here, a simplified abstract model is studied in which the radiation source is assumed to remain spatially uniform, there is no reabsorption or reradiation of light, and no forces other than radiative pressure act on the absorbing medium. It is shown that this model displays the unique feature of being not only unstable, but also self-organizing. The structure approaches a statistical dynamical steady state which is almost independent of initial conditions. In this saturated state the absorbers are concentrated in thin walls around empty bubbles; as the instability develops, the big bubbles get bigger and the small ones get crushed and disappear. A linear analysis shows that to first order the thin walls are indeed stable structures. It is speculated that this instability may play a role in forming cosmic large-scale structure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wenzel, Tom P.; Fujita, K. Sydny
This report examines the sensitivity of annual vehicle miles of travel (VMT) of light-duty vehicles to the price of gasoline, commonly referred to as the elasticity of demand for VMT to the price of gasoline; the fuel-economy-related rebound effect is generally assumed to be of the same magnitude as the VMT elasticity of gas price or driving cost. We use detailed odometer readings from over 30 million vehicles in four urban areas of Texas, over a six-year period. We account for economic conditions over this period, as well as vehicle age. Following the literature, we include fixed effects by vehicle make and individual vehicle, as well as the effect of adding an instrument to predict monthly gasoline price independent of any influences of demand for gasoline on its price.
Stochastic Dynamics of Lexicon Learning in an Uncertain and Nonuniform World
NASA Astrophysics Data System (ADS)
Reisenauer, Rainer; Smith, Kenny; Blythe, Richard A.
2013-06-01
We study the time taken by a language learner to correctly identify the meaning of all words in a lexicon under conditions where many plausible meanings can be inferred whenever a word is uttered. We show that the most basic form of cross-situational learning—whereby information from multiple episodes is combined to eliminate incorrect meanings—can perform badly when words are learned independently and meanings are drawn from a nonuniform distribution. If learners further assume that no two words share a common meaning, we find a phase transition between a maximally efficient learning regime, where the learning time is reduced to the shortest it can possibly be, and a partially efficient regime where incorrect candidate meanings for words persist at late times. We obtain exact results for the word-learning process through an equivalence to a statistical mechanical problem of enumerating loops in the space of word-meaning mappings.
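The elimination dynamics described above can be sketched in a toy simulation (an illustration under stated assumptions: meanings are drawn uniformly here, whereas the paper's key results concern nonuniform meaning distributions; the function and parameter names are invented for this sketch):

```python
import random

def learn_lexicon(n_words=15, n_distractors=3, mutual_excl=True, seed=2):
    """Episodes needed to identify every word's meaning by cross-situational
    elimination. Word i truly means meaning i; each utterance exposes the
    true meaning plus random distractor meanings."""
    rng = random.Random(seed)
    cands = {w: set(range(n_words)) for w in range(n_words)}
    known = set()                      # words whose meaning is pinned down
    t = 0
    while len(known) < n_words:
        t += 1
        w = rng.randrange(n_words)
        episode = {w} | {rng.randrange(n_words) for _ in range(n_distractors)}
        cands[w] &= episode        # eliminate meanings absent from this episode
        if mutual_excl:
            # "no two words share a meaning": drop meanings already claimed
            cands[w] -= {m for m in known if m != w}
        if len(cands[w]) == 1:
            known.add(w)
    return t
```

Toggling `mutual_excl` corresponds to the learners who do or do not assume that no two words share a common meaning.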
An instrumental variable random-coefficients model for binary outcomes
Chesher, Andrew; Rosen, Adam M
2014-01-01
In this paper, we study a random-coefficients model for a binary outcome. We allow for the possibility that some or even all of the explanatory variables are arbitrarily correlated with the random coefficients, thus permitting endogeneity. We assume the existence of observed instrumental variables Z that are jointly independent of the random coefficients, although we place no structure on the joint determination of the endogenous variable X and instruments Z, as would be required for a control function approach. The model fits within the spectrum of generalized instrumental variable models, and we thus apply identification results from our previous studies of such models to the present context, demonstrating their use. Specifically, we characterize the identified set for the distribution of random coefficients in the binary response model with endogeneity via a collection of conditional moment inequalities, and we investigate the structure of these sets by way of numerical illustration. PMID:25798048
Attention and implicit memory.
Spataro, Pietro; Mulligan, Neil W; Rossi-Arnaud, Clelia
2011-01-01
The distinction between identification and production priming assumes that tasks based on production processes involve two distinct stages: the activation of multiple solutions and the subsequent selection of a final response. Previous research demonstrated that divided attention reduced production but not identification priming. However, an unresolved issue concerns whether the activation of candidate solutions is sufficient to account for the increased demand for attentional resources, independently of the contribution of selection processes. The present paper investigated this question by using a version of the lexical decision task (LDT) in which the target words had either many or few orthographic neighbors. Two experiments showed that the effects of divided and selective attention were equivalent in both conditions, suggesting that the inclusion of a process of generating multiple solutions in the LDT is not sufficient to increase the amount of cognitive resources needed to achieve full priming to the levels of production tasks.
Haker, Steven; Wells, William M; Warfield, Simon K; Talos, Ion-Florin; Bhagwat, Jui G; Goldberg-Zimring, Daniel; Mian, Asim; Ohno-Machado, Lucila; Zou, Kelly H
2005-01-01
In any medical domain, it is common to have more than one test (classifier) to diagnose a disease. In image analysis, for example, there is often more than one reader or more than one algorithm applied to a certain data set. Combining of classifiers is often helpful, but determining the way in which classifiers should be combined is not trivial. Standard strategies are based on learning classifier combination functions from data. We describe a simple strategy to combine results from classifiers that have not been applied to a common data set, and therefore can not undergo this type of joint training. The strategy, which assumes conditional independence of classifiers, is based on the calculation of a combined Receiver Operating Characteristic (ROC) curve, using maximum likelihood analysis to determine a combination rule for each ROC operating point. We offer some insights into the use of ROC analysis in the field of medical imaging.
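For the simplest case of two binary tests, the maximum-likelihood combination under conditional independence can be sketched as follows (a reduced illustration, assuming each classifier is summarized by a single sensitivity/false-positive-rate pair; the paper's strategy operates on full ROC curves):

```python
from itertools import product

def combined_roc(tpr_a, fpr_a, tpr_b, fpr_b):
    """ROC operating points of the maximum-likelihood combination of two
    conditionally independent binary tests: joint outcomes are declared
    positive in decreasing order of likelihood ratio
    P(outcome | diseased) / P(outcome | healthy)."""
    def joint(p_a, p_b):
        # joint probability of each (a, b) result under conditional independence
        return {(a, b): (p_a if a else 1 - p_a) * (p_b if b else 1 - p_b)
                for a, b in product((1, 0), repeat=2)}
    pos = joint(tpr_a, tpr_b)   # P(outcome | diseased)
    neg = joint(fpr_a, fpr_b)   # P(outcome | healthy)
    order = sorted(pos, key=lambda o: pos[o] / neg[o], reverse=True)
    points, tp, fp = [(0.0, 0.0)], 0.0, 0.0
    for o in order:
        tp += pos[o]
        fp += neg[o]
        points.append((fp, tp))
    return points
```

For example, two tests each operating at (FPR 0.1, TPR 0.8) combine to an operating point at (FPR 0.19, TPR 0.96), dominating either test alone -- the benefit of pooling that the combined-ROC construction formalizes.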
Contribution of Glucose Transport to the Control of the Glycolytic Flux in Trypanosoma brucei
NASA Astrophysics Data System (ADS)
Bakker, Barbara M.; Walsh, Michael C.; Ter Kuile, Benno H.; Mensonides, Femke I. C.; Michels, Paul A. M.; Opperdoes, Fred R.; Westerhoff, Hans V.
1999-08-01
The rate of glucose transport across the plasma membrane of the bloodstream form of Trypanosoma brucei was modulated by titration of the hexose transporter with the inhibitor phloretin, and the effect on the glycolytic flux was measured. A rapid glucose uptake assay was developed to measure the transport activity independently of the glycolytic flux. Phloretin proved a competitive inhibitor. When the effect of the intracellular glucose concentration on the inhibition was taken into account, the flux control coefficient of the glucose transporter was between 0.3 and 0.5 at 5 mM glucose. Because the flux control coefficients of all steps in a metabolic pathway sum to 1, this result proves that glucose transport is not the rate-limiting step of trypanosome glycolysis. Under physiological conditions, transport shares the control with other steps. At glucose concentrations much lower than physiological, the glucose carrier assumed all control, in close agreement with model predictions.
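The flux control coefficient at the heart of this analysis is defined as C_J = d ln J / d ln v, the fractional change in pathway flux per fractional change in the activity of one step; the coefficients of all steps sum to 1. A minimal sketch of estimating it from a titration series (illustrative values and function name, not the paper's data or fitting procedure):

```python
import numpy as np

def flux_control_coefficient(activity, flux):
    """Estimate C_J = d(ln J)/d(ln v) as the log-log slope of a titration
    series in which transporter activity and pathway flux are both
    normalized to the uninhibited state."""
    slope, _ = np.polyfit(np.log(activity), np.log(flux), 1)
    return slope
```

A coefficient well below 1, as found here for the glucose transporter, means the titrated step cannot be the sole rate-limiting step.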
The evolution of generalized reciprocity in social interaction networks.
Voelkl, Bernhard
2015-09-01
Generalized reciprocity has been proposed as a mechanism for enabling continued cooperation between unrelated individuals. It can be described by the simple rule "help somebody if you received help from someone", and as it does not require individual recognition, complex cognition or extended memory capacities, it has the potential to explain cooperation in a large number of organisms. In a panmictic population this mechanism is vulnerable to defection by individuals who readily accept help but do not help themselves. Here, I investigate to what extent the limitation of social interactions to a social neighborhood can lead to conditions that favor generalized reciprocity in the absence of population structuring. It can be shown that cooperation is likely to evolve if one assumes certain sparse interaction graphs, if strategies are discrete, and if spontaneous helping and reciprocating are independently inherited. Copyright © 2015 Elsevier Inc. All rights reserved.
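The "help somebody if you received help from someone" rule on a local neighborhood can be sketched in a toy payoff simulation (a ring lattice stands in for the paper's general sparse interaction graphs; the state-reset rule and all parameter values are modeling assumptions of this sketch):

```python
import random

def reciprocity_payoffs(n=30, degree=2, rounds=2000, coop_frac=0.5,
                        benefit=3.0, cost=1.0, seed=4):
    """Mean payoffs of generalized reciprocators vs. unconditional defectors
    when donation opportunities are restricted to a ring neighborhood."""
    rng = random.Random(seed)
    is_gr = [i < round(n * coop_frac) for i in range(n)]
    rng.shuffle(is_gr)
    received = [True] * n          # reciprocators are primed to help once
    payoff = [0.0] * n
    offsets = [d for d in range(-degree, degree + 1) if d != 0]
    for _ in range(rounds):
        donor = rng.randrange(n)
        recipient = (donor + rng.choice(offsets)) % n   # local interaction only
        if is_gr[donor] and received[donor]:
            payoff[donor] -= cost
            payoff[recipient] += benefit
            received[recipient] = True
        else:
            received[recipient] = False   # assumption: no help resets the state
    def mean(group):
        vals = [p for p, g in zip(payoff, is_gr) if g == group]
        return sum(vals) / len(vals) if vals else float("nan")
    return mean(True), mean(False)
```

Varying `degree` controls how sparse the social neighborhood is, which is the quantity the paper relates to the evolutionary success of generalized reciprocity.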
Traces of business cycles in credit-rating migrations
Boreiko, Dmitri; Kaniovski, Serguei; Kaniovski, Yuri; Pflug, Georg
2017-01-01
Using migration data of a rating agency, this paper attempts to quantify the impact of macroeconomic conditions on credit-rating migrations. The migrations are modeled as a coupled Markov chain, where the macroeconomic factors are represented by unobserved tendency variables. In the simplest case, these binary random variables are static and credit-class-specific. A generalization treats tendency variables evolving as a time-homogeneous Markov chain. A more detailed analysis assumes a tendency variable for every combination of a credit class and an industry. The models are tested on a Standard and Poor’s (S&P’s) dataset. Parameters are estimated by the maximum likelihood method. According to the estimates, the investment-grade financial institutions evolve independently of the rest of the economy represented by the data. This might be evidence of implicit too-big-to-fail bail-out guarantee policies of the regulatory authorities. PMID:28426758
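The baseline ingredient of such models, the maximum-likelihood estimate of a time-homogeneous transition matrix from observed rating paths, can be sketched as follows (the paper's coupled chain with latent tendency variables is substantially richer; this shows only the count-based MLE):

```python
import numpy as np

def estimate_transition_matrix(paths, n_states):
    """MLE of a time-homogeneous Markov transition matrix from rating paths:
    P[i, j] = (transitions i -> j) / (visits to i with a successor)."""
    counts = np.zeros((n_states, n_states))
    for path in paths:
        for a, b in zip(path[:-1], path[1:]):
            counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    # states never observed as a source get a uniform (uninformative) row
    return np.where(rows > 0, counts / np.maximum(rows, 1), 1.0 / n_states)
```

The coupled-chain extension would then let the row used at each step depend on the unobserved tendency variable for the obligor's credit class (and industry).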
Production of RNA by a polymerase protein encapsulated within phospholipid vesicles
NASA Technical Reports Server (NTRS)
Chakrabarti, A. C.; Breaker, R. R.; Joyce, G. F.; Deamer, D. W.
1994-01-01
Catalyzed polymerization reactions represent a primary anabolic activity of all cells. It can be assumed that early cells carried out such reactions, in which macromolecular catalysts were encapsulated within some type of boundary membrane. In the experiments described here, we show that a template-independent RNA polymerase (polynucleotide phosphorylase) can be encapsulated in dimyristoyl phosphatidylcholine vesicles without substrate. When the substrate adenosine diphosphate (ADP) was provided externally, long-chain RNA polymers were synthesized within the vesicles. Substrate flux was maximized by maintaining the vesicles at the phase transition temperature of the component lipid. A protease was introduced externally as an additional control. Free enzyme was inactivated under identical conditions. RNA products were visualized in situ by ethidium bromide fluorescence. The products were harvested from the liposomes, radiolabeled, and analyzed by polyacrylamide gel electrophoresis. Encapsulated catalysts represent a model for primitive cellular systems in which an RNA polymerase was entrapped within a protected microenvironment.
Simultaneous Heat and Mass Transfer Model for Convective Drying of Building Material
NASA Astrophysics Data System (ADS)
Upadhyay, Ashwani; Chandramohan, V. P.
2018-04-01
A mathematical model of simultaneous heat and moisture transfer is developed for convective drying of building material. A rectangular brick is considered as the sample object. A finite-difference method with a semi-implicit scheme is used for solving the transient governing heat and mass transfer equations. A convective boundary condition is used, as the product is exposed to hot air. The heat and mass transfer equations are coupled through the diffusion coefficient, which is assumed to be a function of the temperature of the product. A set of algebraic equations is generated through space and time discretization. The discretized algebraic equations are solved by the Gauss-Seidel method via iteration. Grid and time independence studies are performed to find the optimum number of nodal points and time steps, respectively. A MATLAB computer code is developed to solve the heat and mass transfer equations simultaneously. Transient heat and mass transfer simulations are performed to find the temperature and moisture distribution inside the brick.
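The numerical ingredients named above (implicit finite differences, a convective boundary, Gauss-Seidel iteration) can be sketched for a one-dimensional slab (an assumption: this is a 1-D Python reduction for illustration, not the paper's 2-D MATLAB code; parameter names are invented):

```python
import numpy as np

def implicit_diffusion_step(T, r, bi, t_air, iters=200, tol=1e-10):
    """One implicit time step of 1-D transient diffusion solved by
    Gauss-Seidel iteration. r = alpha*dt/dx**2; left face insulated,
    right face convective with Biot number bi = h*dx/k."""
    T_old = T.copy()
    for _ in range(iters):
        T[0] = T[1]                               # insulated (zero-flux) face
        T[-1] = (T[-2] + bi * t_air) / (1 + bi)   # convective face
        max_change = 0.0
        for i in range(1, len(T) - 1):
            # node equation: (1 + 2r) T[i] - r (T[i-1] + T[i+1]) = T_old[i]
            new = (T_old[i] + r * (T[i - 1] + T[i + 1])) / (1 + 2 * r)
            max_change = max(max_change, abs(new - T[i]))
            T[i] = new
        if max_change < tol:
            break
    # enforce boundary conditions consistent with the converged interior
    T[0] = T[1]
    T[-1] = (T[-2] + bi * t_air) / (1 + bi)
    return T
```

Because the system is diagonally dominant, the Gauss-Seidel sweeps converge for any positive r, which is why an implicit scheme is attractive for the stiff coupled heat/moisture equations.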
Low, Diana H P; Motakis, Efthymios
2013-10-01
Binding free energy calculations obtained through molecular dynamics simulations reflect intermolecular interaction states through a series of independent snapshots. Typically, the free energies of multiple simulated series (each with slightly different starting conditions) need to be estimated. Previous approaches carry out this task by moving averages at certain decorrelation times, assuming that the system comes from a single conformation description of binding events. Here, we discuss a more general approach that uses statistical modeling, wavelets denoising and hierarchical clustering to estimate the significance of multiple statistically distinct subpopulations, reflecting potential macrostates of the system. We present the deltaGseg R package that performs macrostate estimation from multiple replicated series and allows molecular biologists/chemists to gain physical insight into the molecular details that are not easily accessible by experimental techniques. deltaGseg is a Bioconductor R package available at http://bioconductor.org/packages/release/bioc/html/deltaGseg.html.
Sacks, Laura A.; Lee, Terrie M.; Swancar, Amy
2013-01-01
Groundwater inflow to a subtropical seepage lake was estimated using a transient isotope-balance approach for a decade (2001–2011) with wet and dry climatic extremes. Lake water δ18O ranged from +0.80 to +3.48 ‰, reflecting the 4 m range in stage. The transient δ18O analysis discerned large differences in semiannual groundwater inflow, and the overall patterns of low and high groundwater inflow were consistent with an independent water budget. Despite simplifying assumptions that the isotopic composition of precipitation (δP), groundwater inflow, and atmospheric moisture (δA) were constant, groundwater inflow was within the water-budget error for 12 of the 19 semiannual calculation periods. The magnitude of inflow was over- or under-predicted during periods of climatic extremes. During periods of high net precipitation from tropical cyclones and El Niño conditions, δP values were considerably more depleted in 18O than assumed. During an extreme dry period, δA values were likely more enriched in 18O than assumed, due to the influence of local lake evaporate. Isotope-balance results were most sensitive to uncertainties in relative humidity, evaporation, and δ18O of lake water, which can limit precise quantification of groundwater inflow. Nonetheless, the consistency between isotope-balance and water-budget results indicates that this is a viable approach for lakes in similar settings, allowing the magnitude of groundwater inflow to be estimated over less-than-annual time periods. Because lake-water δ18O is a good indicator of climatic conditions, these data could be useful in ground-truthing paleoclimatic reconstructions using isotopic data from lake cores in similar settings.
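The core of such an isotope-balance estimate is an algebraic inversion of the lumped mass balance for the unknown groundwater inflow (a generic sketch, not the study's full formulation: the study derives the evaporate signature δE via humidity and atmospheric moisture, whereas here it is supplied directly; symbol names are illustrative):

```python
def groundwater_inflow(V, d_delta_dt, P, E, dL, dP, dG, dE):
    """Groundwater inflow G from a lumped transient isotope balance,
        V * d(dL)/dt = P*(dP - dL) + G*(dG - dL) - E*(dE - dL),
    with deltas (dL lake, dP precipitation, dG groundwater, dE evaporate)
    in permil and volume V / fluxes P, E in consistent units."""
    return (V * d_delta_dt - P * (dP - dL) + E * (dE - dL)) / (dG - dL)
```

Note the sensitivity the abstract mentions: G is obtained by dividing by (δG − δL), so small errors in the lake or groundwater signatures propagate strongly when the two are close.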
Modeling the chemical evolution of nitrogen oxides near roadways
NASA Astrophysics Data System (ADS)
Wang, Yan Jason; DenBleyker, Allison; McDonald-Buller, Elena; Allen, David; Zhang, K. Max
2011-01-01
The chemical evolution of nitrogen dioxide (NO2) and nitrogen monoxide (NO) in the vicinity of roadways is numerically investigated using a computational fluid dynamics model, CFD-VIT-RIT, and a Gaussian-based model, CALINE4. CFD-VIT-RIT couples a standard k-ε turbulence model for turbulent mixing with the Finite-Rate model for chemical reactions. CALINE4 employs a discrete parcel method, assuming that chemical reactions are independent of the dilution process. The modeling results are compared to field measurement data collected near two roadways in Austin, Texas, State Highway 71 (SH-71) and Farm to Market Road 973 (FM-973), under parallel and perpendicular wind conditions during the summer of 2007. In addition to ozone (O3), other oxidants and reactive species including the hydroperoxyl radical (HO2), organic peroxyl radicals (RO2), formaldehyde (HCHO) and acetaldehyde (CH3CHO) are considered in the transformation from NO to NO2. CFD-VIT-RIT is shown to be capable of predicting both NOx and NO2 profiles downwind. CALINE4 is able to capture the NOx profiles, but underpredicts NO2 concentrations under high wind velocity. Our study suggests that the initial NO2/NOx ratios have to be carefully selected based on traffic conditions in order to assess NO2 concentrations near roadways. The commonly assumed NO2/NOx ratio by volume of 5% may not be suitable for most roadways, especially those with a high fraction of heavy-duty truck traffic. In addition, high O3 concentrations and high traffic volumes would lead to the peak NO2 concentration occurring near roadways with elevated concentrations persistent over a long distance downwind.
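The dominant NO-to-NO2 conversion pathway near roads, titration of NO by ambient ozone, can be sketched as a simple kinetic integration (a reduced illustration only: it ignores NO2 photolysis and the HO2/RO2 radical pathways the study also includes, and the rate constant is an order-of-magnitude value, not taken from the paper):

```python
def no_titration(no, no2, o3, k=4.4e-4, dt=0.1, steps=600):
    """Explicit-Euler integration of NO + O3 -> NO2 during plume transport.
    Concentrations in ppb; k in ppb^-1 s^-1 (approximate 298 K value);
    dt * steps gives the simulated travel time in seconds."""
    for _ in range(steps):
        r = k * no * o3 * dt      # ppb of NO converted in this time step
        no, no2, o3 = no - r, no2 + r, o3 - r
    return no, no2, o3
```

This mechanism is why high ambient O3 and high NO emissions together push the NO2 peak toward the roadway, as the abstract notes.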
Inverse Problems in Complex Models and Applications to Earth Sciences
NASA Astrophysics Data System (ADS)
Bosch, M. E.
2015-12-01
The inference of the subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). The probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior information PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via direct acyclic graphs, which are graphs that map dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model. At regional scale, joint inversion of gravity and magnetic data is applied for the estimation of lithological structure of the crust, with the lithotype body regions conditioning the mass density and magnetic susceptibility fields. At planetary scale, the Earth mantle temperature and element composition is inferred from seismic travel-time and geodetic data.
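The likelihood factorization described above, where independent surveys contribute additive log-likelihood terms on top of the prior, can be written generically (a sketch under stated assumptions; the names are illustrative and not from any particular inversion code):

```python
def log_posterior(theta, datasets, log_prior, log_liks):
    """Unnormalized log-posterior when survey uncertainties are independent:
    the joint likelihood factorizes into one term per dataset, so the
    log-posterior is the log-prior plus a sum over surveys."""
    return log_prior(theta) + sum(
        ll(theta, data) for ll, data in zip(log_liks, datasets))
```

Hierarchical layering (lithology conditioning physical properties conditioning data) would enter through the structure of `log_prior` and of each likelihood term.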
Device-Independent Certification of a Nonprojective Qubit Measurement
NASA Astrophysics Data System (ADS)
Gómez, Esteban S.; Gómez, Santiago; González, Pablo; Cañas, Gustavo; Barra, Johanna F.; Delgado, Aldo; Xavier, Guilherme B.; Cabello, Adán; Kleinmann, Matthias; Vértesi, Tamás; Lima, Gustavo
2016-12-01
Quantum measurements on a two-level system can have more than two independent outcomes, and in this case, the measurement cannot be projective. Measurements of this general type are essential to an operational approach to quantum theory, but so far, the nonprojective character of a measurement can only be verified experimentally by already assuming a specific quantum model of parts of the experimental setup. Here, we overcome this restriction by using a device-independent approach. In an experiment on pairs of polarization-entangled photonic qubits we violate by more than 8 standard deviations a Bell-like correlation inequality that is valid for all sets of two-outcome measurements in any dimension. We combine this with a device-independent verification that the system is best described by two qubits, which therefore constitutes the first device-independent certification of a nonprojective quantum measurement.
Lyapunov vector function method in the motion stabilisation problem for nonholonomic mobile robot
NASA Astrophysics Data System (ADS)
Andreev, Aleksandr; Peregudova, Olga
2017-07-01
In this paper we propose a sampled-data control law for the stabilisation problem of nonstationary motion of a nonholonomic mobile robot. We assume that the robot moves on a horizontal surface without slipping. The dynamical model of a mobile robot is considered. The robot has one front free wheel and two rear wheels which are controlled by two independent electric motors. We assume that the controls are piecewise-constant signals. Controller design relies on the backstepping procedure with the use of the Lyapunov vector-function method. Theoretical considerations are verified by numerical simulation.
SU(2)×U(1) gauge invariance and the shape of new physics in rare B decays.
Alonso, R; Grinstein, B; Martin Camalich, J
2014-12-12
New physics effects in B decays are routinely modeled through operators invariant under the strong and electromagnetic gauge symmetries. Assuming the scale for new physics is well above the electroweak scale, we further require invariance under the full standard model gauge symmetry group. Retaining up to dimension-six operators, we unveil new constraints between different new physics operators that are assumed to be independent in the standard phenomenological analyses. We illustrate this approach by analyzing the constraints on new physics from rare B(q) (semi-)leptonic decays.
A new equilibrium torus solution and GRMHD initial conditions
NASA Astrophysics Data System (ADS)
Penna, Robert F.; Kulkarni, Akshay; Narayan, Ramesh
2013-11-01
Context. General relativistic magnetohydrodynamic (GRMHD) simulations are providing influential models for black hole spin measurements, gamma ray bursts, and supermassive black hole feedback. Many of these simulations use the same initial condition: a rotating torus of fluid in hydrostatic equilibrium. A persistent concern is that simulation results sometimes depend on arbitrary features of the initial torus. For example, the Bernoulli parameter (which is related to outflows), appears to be controlled by the Bernoulli parameter of the initial torus. Aims: In this paper, we give a new equilibrium torus solution and describe two applications for the future. First, it can be used as a more physical initial condition for GRMHD simulations than earlier torus solutions. Second, it can be used in conjunction with earlier torus solutions to isolate the simulation results that depend on initial conditions. Methods: We assume axisymmetry, an ideal gas equation of state, constant entropy, and ignore self-gravity. We fix an angular momentum distribution and solve the relativistic Euler equations in the Kerr metric. Results: The Bernoulli parameter, rotation rate, and geometrical thickness of the torus can be adjusted independently. Our torus tends to be more bound and have a larger radial extent than earlier torus solutions. Conclusions: While this paper was in preparation, several GRMHD simulations appeared based on our equilibrium torus. We believe it will continue to provide a more realistic starting point for future simulations.
Notes on aerodynamic forces on airship hulls
NASA Technical Reports Server (NTRS)
Tuckerman, L B
1923-01-01
For a first approximation the air flow around the airship hull is assumed to obey the laws of perfect (i.e. free from viscosity) incompressible fluid. The flow is further assumed to be free from vortices (or rotational motion of the fluid). These assumptions lead to very great simplifications of the formulae used but necessarily imply an imperfect picture of the actual conditions. The value of the results depends therefore upon the magnitude of the forces produced by the disturbances in the flow caused by viscosity with the consequent production of vortices in the fluid. If these are small in comparison with the forces due to the assumed irrotational perfect fluid flow the results will give a good picture of the actual conditions of an airship in flight.
14 CFR 23.485 - Side load conditions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... reaction divided between the main wheels so that— (1) 0.5 (W) is acting inboard on one side; and (2) 0.33... section are assumed to be applied at the ground contact point and the drag loads may be assumed to be zero...
Associative Asymmetry of Compound Words
ERIC Educational Resources Information Center
Caplan, Jeremy B.; Boulton, Kathy L.; Gagné, Christina L.
2014-01-01
Early verbal-memory researchers assumed participants represent memory of a pair of unrelated items with 2 independent, separately modifiable, directional associations. However, memory for pairs of unrelated words (A-B) exhibits associative symmetry: a near-perfect correlation between accuracy on forward (A→?) and backward (?→B) cued recall. This…
Dimensionality of Social Influence.
ERIC Educational Resources Information Center
Stricker, Lawrence J.; Jackson, Douglas N.
The research reported in this study explores two problematic avenues of conformity research: (1) the widely assumed generality of diverse measures of group pressure, and (2) the dimensionality of conformity, anticonformity, and independence. These two conformity situations, present and nonpresent norm groups, used two tasks (an objective counting…
Price Collusion or Competition in US Higher Education
ERIC Educational Resources Information Center
Gu, Jiafeng
2015-01-01
How geographical neighboring competitors influence the strategic price behaviors of universities is still unclear because previous studies assume spatial independence between universities. Using data from the National Center for Education Statistics college navigator dataset, this study shows that the price of one university is spatially…
Third-Degree Price Discrimination Revisited
ERIC Educational Resources Information Center
Kwon, Youngsun
2006-01-01
The author derives the probability that price discrimination improves social welfare, using a simple model of third-degree price discrimination assuming two independent linear demands. The probability that price discrimination raises social welfare increases as the preferences or incomes of consumer groups become more heterogeneous. He derives the…
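The welfare comparison described above can be sketched numerically. The following is a minimal illustration, not the author's derivation: two independent linear demands q_i = a_i - b_i·p served by a zero-cost monopolist, comparing total welfare (consumer surplus plus profit) under discriminatory versus uniform pricing. All parameter values are invented.

```python
# Minimal numeric sketch (not the author's model): third-degree price
# discrimination with two independent linear demands q_i = a_i - b_i * p
# and a zero-marginal-cost monopolist. All parameter values are invented.

def welfare(p, a, b):
    """Welfare contribution of one linear-demand market at price p."""
    q = max(a - b * p, 0.0)
    consumer_surplus = q * q / (2 * b)   # triangle above price, under demand
    profit = p * q                       # zero marginal cost
    return consumer_surplus + profit

a1, b1 = 10.0, 1.0   # strong-demand market
a2, b2 = 4.0, 1.0    # weak-demand market (heterogeneous preferences)

# Discriminatory monopoly prices: p_i = a_i / (2 * b_i)
w_disc = welfare(a1 / (2 * b1), a1, b1) + welfare(a2 / (2 * b2), a2, b2)

# Uniform monopoly price when both markets are served
p_uniform = (a1 + a2) / (2 * (b1 + b2))
w_unif = welfare(p_uniform, a1, b1) + welfare(p_uniform, a2, b2)

print(w_disc, w_unif)   # 43.5 vs 45.75: discrimination lowers welfare here
```

With these numbers total output is unchanged by discrimination, so welfare falls; as the two markets' intercepts diverge further, the comparison can flip, which is the probability the abstract computes.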
Microdosimetric considerations of effects of heavy ions on E. coli K-12 mutants.
Takahashi, T; Yatagai, F; Izumo, K
1992-01-01
The inactivation cross sections of E. coli K-12 recombination-deficient mutants, JC1553 (recA) and AB2470 (recB), for several MeV/u alpha particles and N ions have been successfully analyzed by Katz's target theory, in which the radiosensitivity parameter E0 is assumed to be LET-independent and equal to D37 for gamma rays. For E. coli K-12 wild type, AB1157 (rec+, uvr+), however, it is impossible to interpret the inactivation cross-section data by an LET-independent E0 value. In the latter case, as in the case of B. subtilis spores, it is necessary to assume that the radiosensitivity of the target for the core of a heavy ion is higher than that for delta electrons. In addition to Waligorski, Hamm, and Katz's dose formula, the dose around the trajectory of an ion based on Tabata and Ito's energy-deposition algorithm for electrons has been used in the course of the analysis.
Breakdown of separability due to confinement
NASA Astrophysics Data System (ADS)
Man'ko, V. I.; Markovich, L. A.; Messina, A.
2017-12-01
A simple system of two particles in a bidimensional configurational space S is studied. The possibility of breaking in S the time-independent Schrödinger equation of the system into two separated one-dimensional one-body Schrödinger equations is assumed. In this paper, we focus on how the latter property is countered by imposing such boundary conditions as confinement to a limited region of S and/or restrictions on the joint coordinate probability density stemming from the sign-invariance condition of the relative coordinate (an impenetrability condition). Our investigation demonstrates the reducibility of the problem under scrutiny to that of a single particle living in a limited domain of its bidimensional configurational space. These general ideas are illustrated by introducing the coordinates Xc and x of the center of mass of the two particles and of the associated relative motion, respectively. The effects of the confinement and the impenetrability are then analyzed by studying, with the help of an appropriate Green's function, the time evolution of the covariance of Xc and x. Moreover, to calculate the state of a single particle constrained within a square, a rhombus, a triangle, and a rectangle, the Green's function expression in terms of the Jacobi θ3-function is applied. All the results are illustrated by examples.
Trexler, Joel C.; DeAngelis, Donald L.
2003-01-01
We used analytic and simulation models to determine the ecological conditions favoring evolution of a matrotrophic fish from a lecithotrophic ancestor given a complex set of trade‐offs. Matrotrophy is the nourishment of viviparous embryos by resources provided between fertilization and parturition, while lecithotrophy describes embryo nourishment provided before fertilization. In fishes and reptiles, embryo nourishment encompasses a continuum from solely lecithotrophic to primarily matrotrophic. Matrotrophy has evolved independently from lecithotrophic ancestors many times in many groups. We assumed matrotrophy increased the number of offspring a viviparous female could gestate and evaluated conditions of food availability favoring lecithotrophy or matrotrophy. The matrotrophic strategy was superior when food resources exceeded demand during gestation but at a risk of overproduction and reproductive failure if food intake was limited. Matrotrophic females were leaner during gestation than lecithotrophic females, yielding shorter life spans. Our models suggest that matrotrophic embryo nourishment evolved in environments with high food availability, consistently exceeding energy requirements for maintaining relatively large broods. Embryo abortion with some resorption of invested energy is a necessary preadaptation to the evolution of matrotrophy. Future work should explore trade‐offs of age‐specific mortality and reproductive output for females maintaining different levels of fat storage during gestation.
Furuya, Hiroyuki
2017-12-20
The first domestic outbreak of dengue fever in Japan since 1945 was reported in Tokyo in 2014. Meanwhile, daily mean summer temperatures are expected to continue to rise worldwide. Such conditions are expected to increase the risk of an arbovirus outbreak at the 2020 Tokyo Olympic Games. To address this possibility, the present study compared estimates of the risk of infection by dengue, chikungunya, and Zika viruses in urban areas. To compare the risk of infection by arboviruses transmitted by Ae. albopictus mosquitoes, the reproduction number for each of the three arboviruses was estimated under the environmental conditions associated with the 2014 dengue outbreak in Tokyo, and additionally under conditions assuming a daily mean temperature elevation of 2 °C. For dengue, chikungunya, and Zika, the estimated distributions of R0 were independently fitted to gamma distributions, yielding median R0 values of 1.00, 0.46, and 0.36, respectively. If the daily mean temperature were to rise from 28 °C to 30 °C, our model predicts increases in the median R0 of 18% for dengue, 4.3% for chikungunya, and 11.1% for Zika. Strengthening of the public health response capacity for these emerging arboviral diseases will be needed in preparation for the 2020 Olympic Games in Tokyo.
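The summary statistic used above, the median of a gamma distribution fitted to an uncertain R0, can be sketched as follows. This is a hedged illustration with invented draws, not the study's temperature-dependent transmission model: it fits a gamma distribution by the method of moments and compares medians between a cooler and a warmer scenario.

```python
# Hedged sketch (invented draws, not the study's model): summarise
# uncertainty in R0 by fitting a gamma distribution via the method of
# moments and comparing medians between two temperature scenarios.
import numpy as np

rng = np.random.default_rng(0)
r0_cool = rng.gamma(shape=9.0, scale=1.00 / 9.0, size=100_000)  # ~28 °C, mock
r0_warm = rng.gamma(shape=9.0, scale=1.18 / 9.0, size=100_000)  # ~30 °C, mock

# Method-of-moments gamma fit for the cooler scenario
k = r0_cool.mean() ** 2 / r0_cool.var()    # shape
theta = r0_cool.var() / r0_cool.mean()     # scale

median_cool = float(np.median(r0_cool))
median_warm = float(np.median(r0_warm))
pct_rise = 100.0 * (median_warm / median_cool - 1.0)
print(round(median_cool, 2), round(pct_rise, 1))  # median near 1, rise near 18%
```

A median R0 near 1 sits at the outbreak threshold, which is why the dengue estimate in the abstract is the most concerning of the three.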
Lopatka, Martin; Sigman, Michael E; Sjerps, Marjan J; Williams, Mary R; Vivó-Truyols, Gabriel
2015-07-01
Forensic chemical analysis of fire debris addresses the question of whether ignitable liquid residue is present in a sample and, if so, what type. Evidence evaluation regarding this question is complicated by interference from pyrolysis products of the substrate materials present in a fire. A method is developed to derive a set of class-conditional features for the evaluation of such complex samples. The use of a forensic reference collection allows characterization of the variation in complex mixtures of substrate materials and ignitable liquids even when the dominant feature is not specific to an ignitable liquid. Making use of a novel method for data imputation under complex mixing conditions, a distribution is modeled for the variation between pairs of samples containing similar ignitable liquid residues. Examining the covariance of variables within the different classes allows different weights to be placed on features more important in discerning the presence of a particular ignitable liquid residue. Performance of the method is evaluated using a database of total ion spectrum (TIS) measurements of ignitable liquid and fire debris samples. These measurements include 119 nominal masses measured by GC-MS and averaged across a chromatographic profile. Ignitable liquids are labeled using the American Society for Testing and Materials (ASTM) E1618 standard class definitions. Statistical analysis is performed in the class-conditional feature space wherein new forensic traces are represented based on their likeness to known samples contained in a forensic reference collection. The demonstrated method uses forensic reference data as the basis of probabilistic statements concerning the likelihood of the obtained analytical results given the presence of ignitable liquid residue of each of the ASTM classes (including a substrate only class). When prior probabilities of these classes can be assumed, these likelihoods can be connected to class probabilities. 
In order to compare the performance of this method to previous work, a uniform prior was assumed, resulting in an 81% accuracy for an independent test of 129 real burn samples. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
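The final step described above, turning class-conditional likelihoods into class probabilities under an assumed prior, is Bayes' rule. A minimal sketch with invented numbers (the class labels and likelihood values are illustrative only, not ASTM results):

```python
# Sketch of converting class-conditional likelihoods (as obtained from a
# forensic reference collection) into posterior class probabilities with a
# uniform prior. All class names and likelihood values are invented.
likelihoods = {
    "gasoline": 0.12,
    "medium petroleum distillate": 0.03,
    "substrate only": 0.01,
}
prior = 1.0 / len(likelihoods)                        # uniform prior
evidence = sum(lk * prior for lk in likelihoods.values())
posterior = {c: lk * prior / evidence for c, lk in likelihoods.items()}

print(posterior["gasoline"])   # 0.12 / 0.16 = 0.75
```

With a uniform prior the prior cancels, so the posterior is just each likelihood normalised by their sum; a non-uniform prior would reweight the classes before normalising.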
On the assumption of vanishing temperature fluctuations at the wall for heat transfer modeling
NASA Technical Reports Server (NTRS)
Sommer, T. P.; So, R. M. C.; Zhang, H. S.
1993-01-01
Boundary conditions for fluctuating wall temperature are required for near-wall heat transfer modeling. However, their correct specification for arbitrary thermal boundary conditions is not clear. The conventional approach is to assume zero fluctuating wall temperature or zero gradient for the temperature variance at the wall. These are idealized specifications, and the latter condition could lead to an ill-posed problem for fully developed pipe and channel flows. In this paper, the validity and extent of the zero fluctuating wall temperature condition for heat transfer calculations are examined. The approach taken is to assume a Taylor expansion in the wall-normal coordinate for the fluctuating temperature that is general enough to account for both zero and non-zero values at the wall. Turbulent conductivity is calculated from the temperature variance and its dissipation rate. Heat transfer calculations assuming both zero and non-zero fluctuating wall temperature reveal that the zero fluctuating wall temperature assumption is in general valid. The effects of non-zero fluctuating wall temperature are limited to a very small region near the wall.
Harju, Seth M.; Olson, Chad V.; Dzialak, Matthew R.; Mudd, James P.; Winstead, Jeff B.
2013-01-01
Connectivity of animal populations is an increasingly prominent concern in fragmented landscapes, yet existing methodological and conceptual approaches implicitly assume the presence of, or need for, discrete corridors. We tested this assumption by developing a flexible conceptual approach that does not assume, but allows for, the presence of discrete movement corridors. We quantified functional connectivity habitat for greater sage-grouse (Centrocercus urophasianus) across a large landscape in central western North America. We assigned sample locations to a movement state (encamped, traveling and relocating), and used Global Positioning System (GPS) location data and conditional logistic regression to estimate state-specific resource selection functions. Patterns of resource selection during different movement states reflected selection for sagebrush and general avoidance of rough topography and anthropogenic features. Distinct connectivity corridors were not common in the 5,625 km2 study area. Rather, broad areas functioned as generally high or low quality connectivity habitat. A comprehensive map predicting the quality of connectivity habitat across the study area validated well based on a set of GPS locations from independent greater sage-grouse. The functional relationship between greater sage-grouse and the landscape did not always conform to the idea of a discrete corridor. A more flexible consideration of landscape connectivity may improve the efficacy of management actions by aligning those actions with the spatial patterns by which animals interact with the landscape. PMID:24349241
Anisotropic Poroelasticity in a Rock With Cracks
NASA Astrophysics Data System (ADS)
Wong, Teng-Fong
2017-10-01
Deformation of a saturated rock in the field and laboratory may occur in a broad range of conditions, ranging from undrained to drained. The poromechanical response is often anisotropic and, in a brittle rock, closely related to preexisting and stress-induced cracks. This can be modeled as a rock matrix embedded with an anisotropic system of cracks. Assuming microisotropy, expressions for three of the poroelastic coefficients of a transversely isotropic rock were derived in terms of the crack density tensor. Together with published results for the five effective elastic moduli, this provides a complete micromechanical description of the eight independent poroelastic coefficients of such a cracked rock. Relatively simple expressions were obtained for the Skempton pore pressure tensor, which allow one to infer the crack density tensor from undrained measurement in the laboratory, and also to infer the Biot-Willis effective stress coefficients. The model assumes a dilute concentration of noninteractive penny-shaped cracks, and it shows good agreement with experimental data for Berea sandstone, with crack density values up to 0.6. Whereas predictions of the storage coefficient and the normal components of the elastic stiffness tensor also seem reasonable, significant discrepancy between model and measurement was observed for the off-diagonal and shear components of the stiffness. A plausible model has been proposed for the development of very strong anisotropy in the undrained response of a fault zone, and the model presented here places geometric constraints on the associated fracture system.
Predicting rates of inbreeding in populations undergoing selection.
Woolliams, J A; Bijma, P
2000-01-01
Tractable forms of predicting rates of inbreeding (ΔF) in selected populations with general indices, nonrandom mating, and overlapping generations were developed, with the principal results assuming a period of equilibrium in the selection process. An existing theorem concerning the relationship between squared long-term genetic contributions and rates of inbreeding was extended to nonrandom mating and to overlapping generations. ΔF was shown to be approximately (1/4)(1 − ω) times the expected sum of squared lifetime contributions, where ω is the deviation from Hardy-Weinberg proportions. This relationship cannot be used for prediction since it is based upon observed quantities. Therefore, the relationship was further developed to express ΔF in terms of expected long-term contributions that are conditional on a set of selective advantages that relate the selection processes in two consecutive generations and are predictable quantities. With random mating, if selected family sizes are assumed to be independent Poisson variables, then the expected long-term contribution could be substituted for the observed, providing the factor 1/4 (since ω = 0) was increased to 1/2. Established theory was used to provide a correction term to account for deviations from the Poisson assumptions. The equations were successfully applied, using simple linear models, to the problem of predicting ΔF with sib indices in discrete generations, since previously published solutions had proved complex. PMID:10747074
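The core relationship quoted in the abstract can be illustrated directly. This is a minimal sketch with simulated contribution values, not the authors' full prediction machinery: given long-term genetic contributions r_i that sum to 1 over the ancestors, the rate of inbreeding is approximately (1/4)(1 − ω) times the sum of squared contributions, with ω = 0 under random mating.

```python
# Illustrative sketch of dF ≈ (1/4)(1 - w) * sum(r_i**2), where r_i are
# long-term genetic contributions and w is the deviation from Hardy-Weinberg
# proportions (w = 0 under random mating). Contributions are simulated.
import numpy as np

rng = np.random.default_rng(1)
raw = rng.gamma(shape=2.0, size=40)   # 40 ancestors, unequal contributions
r = raw / raw.sum()                   # long-term contributions, sum to 1
omega = 0.0                           # random mating
delta_f = 0.25 * (1.0 - omega) * np.sum(r ** 2)
print(delta_f)   # grows as contributions concentrate in fewer ancestors
```

The sum of squared contributions is maximised when a single ancestor dominates and minimised when contributions are equal, which is why selection intensity drives inbreeding upward.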
Cost Assessment for Shielding of C3-Type Facilities
1980-03-01
…imperfections and on penetrations. Long-conductor penetrants are assumed to enter the building through a one-quarter-inch-thick entry plate and a shielded… Contents include: Effects; Currents from Penetrants; Numerical Examples; Design Approach; Design Assuming Linear Behavior of Shield; General; Envelope Shield; Penetrations; Condition I, New Construction, External Shield; Condition II, New…
Khalagi, Kazem; Mansournia, Mohammad Ali; Rahimi-Movaghar, Afarin; Nourijelyani, Keramat; Amin-Esmaeili, Masoumeh; Hajebi, Ahmad; Sharif, Vandad; Radgoodarzi, Reza; Hefazi, Mitra; Motevalian, Abbas
2016-01-01
Latent class analysis (LCA) is a method of assessing and correcting measurement error in surveys. The local independence assumption in LCA holds that the indicators are independent of each other conditional on the latent variable. Violation of this assumption leads to unreliable results. We explored this issue by using LCA to estimate the prevalence of illicit drug use in the Iranian Mental Health Survey. The following three indicators were included in the LCA models: five or more instances of using any illicit drug in the past 12 months (indicator A), any use of any illicit drug in the past 12 months (indicator B), and the self-perceived need of treatment services or having received treatment for a substance use disorder in the past 12 months (indicator C). Gender was also used in all LCA models as a grouping variable. One LCA model using indicators A and B, as well as 10 different LCA models using indicators A, B, and C, were fitted to the data. The three models that had the best fit to the data included the following correlations between indicators: (AC and AB), (AC), and (AC, BC, and AB). The estimated prevalence of illicit drug use based on these three models was 28.9%, 6.2%, and 42.2%, respectively. None of these models completely controlled for violation of the local independence assumption. In order to perform unbiased estimations using the LCA approach, the factors violating the local independence assumption (behaviorally correlated error, bivocality, and latent heterogeneity) should be completely taken into account in all models using well-known methods.
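The local independence assumption discussed above has a simple computational form: given the latent class, the joint probability of the indicators factorises into a product of per-indicator probabilities. A minimal sketch with invented parameters (not those of the Iranian Mental Health Survey model):

```python
# Minimal LCA sketch under local independence: P(A, B, C) is a mixture over
# latent classes of products of per-indicator probabilities. All class and
# item probabilities below are invented for illustration.
def joint_prob(pattern, class_probs, item_probs):
    """P(pattern) summed over latent classes, assuming local independence."""
    total = 0.0
    for c, pc in enumerate(class_probs):
        p = pc
        for item, value in enumerate(pattern):
            p_yes = item_probs[c][item]       # P(indicator = 1 | class c)
            p *= p_yes if value == 1 else 1.0 - p_yes
        total += p
    return total

class_probs = [0.1, 0.9]                  # [drug use, no drug use]
item_probs = [[0.80, 0.95, 0.40],         # P(indicator = 1 | drug use)
              [0.02, 0.05, 0.01]]         # P(indicator = 1 | no drug use)

# probability of endorsing A and B but not C
print(round(joint_prob((1, 1, 0), class_probs, item_probs), 4))
```

Violating local independence (e.g., behaviorally correlated error between A and B) means this product form understates the true joint probability of concordant responses, which biases the prevalence estimate.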
Culture and Cognition in Information Technology Education
ERIC Educational Resources Information Center
Holvikivi, Jaana
2007-01-01
This paper aims at explaining the outcomes of information technology education for international students using anthropological theories of cultural schemas. Even though computer science and engineering are usually assumed to be culture-independent, the practice in classrooms seems to indicate that learning patterns depend on culture. The…
Hajnal, A.
1971-01-01
If the continuum hypothesis is assumed, there is a graph G whose vertices form an ordered set of type ω₁²; G does not contain triangles or complete even graphs of form [ℵ₀, ℵ₀], and there is no independent subset of vertices of type ω₁². PMID:16591893
A cosmology-independent calibration of type Ia supernovae data
NASA Astrophysics Data System (ADS)
Hauret, C.; Magain, P.; Biernaux, J.
2018-06-01
Recently, the common methodology used to transform type Ia supernovae (SNe Ia) into genuine standard candles has been suffering criticism. Indeed, it assumes a particular cosmological model (namely the flat ΛCDM) to calibrate the standardisation correction parameters, i.e. the dependency of the supernova peak absolute magnitude on its colour, post-maximum decline rate, and host galaxy mass. As a result, this assumption could make the data compliant with the assumed cosmology and thus nullify all works previously conducted on model comparison. In this work, we verify the viability of these hypotheses by developing a cosmology-independent approach to standardise SNe Ia data from the recent JLA compilation. Our resulting corrections turn out to be very close to the ΛCDM-based corrections. Therefore, even if a ΛCDM-based calibration is questionable from a theoretical point of view, the potential compliance of SNe Ia data does not happen in practice for the JLA compilation. Previous works of model comparison based on these data do not have to be called into question. However, as this cosmology-independent standardisation method has the same degree of complexity as the model-dependent one, it is worth using it in future works, especially if smaller samples are considered, such as the superluminous type Ic supernovae.
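The standardisation correction the abstract refers to is conventionally written in Tripp form: the distance modulus is mu = m_B − (M_B − α·x1 + β·c), so the corrected peak magnitude depends on stretch x1 and colour c through the calibrated α and β. A minimal sketch; the numerical values of α, β, M_B, and the mock supernova are assumed for illustration, not taken from the JLA fit.

```python
# Sketch of the standard Tripp-style SN Ia correction (parameter values are
# invented, typical-magnitude placeholders, not the JLA calibration).
def distance_modulus(m_b, x1, c, alpha=0.14, beta=3.1, M_b=-19.05):
    """mu = m_B - (M_B - alpha * x1 + beta * c)."""
    return m_b - (M_b - alpha * x1 + beta * c)

# one mock supernova: apparent peak magnitude, stretch, colour
print(round(distance_modulus(23.0, 0.5, -0.02), 3))
```

The paper's concern is precisely that α, β, and M_B are usually fitted jointly with a flat-ΛCDM distance model, so a calibration that does not presuppose a cosmology must determine them another way.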
Rossi, Andrea P; Facchinetti, Roberto; Ferrari, Elena; Nori, Nicole; Sant, Selena; Masciocchi, Elena; Zoico, Elena; Fantin, Francesco; Mazzali, Gloria; Zamboni, Mauro
2018-05-14
There is a general lack of studies evaluating medication adherence with self-report scales for elderly patients in treatment with direct oral anticoagulants (DOACs). The aim of the study was to assess the degree of adherence to DOAC therapy in a population of elderly outpatients aged 65 years or older affected by non-valvular atrial fibrillation (NVAF), using the 4-item Morisky Medication Adherence Scale, and to identify potential factors, including the geriatric multidimensional evaluation, which can affect adherence in the study population. A total of 103 subjects, anticoagulated with DOACs for NVAF in primary or secondary prevention, were eligible; 76 showed adequate adherence to anticoagulant therapy, while 27 showed inadequate adherence. Participants underwent biochemical assessment, and the Morisky Scale, Instrumental Activities of Daily Living, CHA2DS2-VASc, HAS-BLED, mental status, and nutritional evaluations were performed. 2% of subjects took Dabigatran at the low dose and 7.8% at the standard dose; 9.7% took low-dose Rivaroxaban and 30.1% the standard dose; 6.8% took Apixaban at the low dose and 39.7% at the standard dose; and finally 1% took Edoxaban at the low dose and 2.9% at the standard dose. Most subjects took their DOAC without help (80.6%), while 16 subjects were helped by a family member (15.5%) and 4 were assisted by a caregiver (3.9%). Binary logistic regression considered inappropriate adherence as the dependent variable, while age, male sex, polypharmacotherapy, cognitive decline, caregiver help with therapy administration, duration of DOAC therapy, and twice-daily administration were considered as independent variables. Twice-daily administration was an independent factor determining inappropriate adherence, with an OR of 2.88 (p = 0.048, CI 1.003-8.286).
Sign Tracking, but Not Goal Tracking, is Resistant to Outcome Devaluation
Morrison, Sara E.; Bamkole, Michael A.; Nicola, Saleem M.
2015-01-01
During Pavlovian conditioning, a conditioned stimulus (CS) may act as a predictor of a reward to be delivered in another location. Individuals vary widely in their propensity to engage with the CS (sign tracking) or with the site of eventual reward (goal tracking). It is often assumed that sign tracking involves the association of the CS with the motivational value of the reward, resulting in the CS acquiring incentive value independent of the outcome. However, experimental evidence for this assumption is lacking. In order to test the hypothesis that sign tracking behavior does not rely on a neural representation of the outcome, we employed a reward devaluation procedure. We trained rats on a classic Pavlovian paradigm in which a lever CS was paired with a sucrose reward, then devalued the reward by pairing sucrose with illness in the absence of the CS. We found that sign tracking behavior was enhanced, rather than diminished, following reward devaluation; thus, sign tracking is clearly independent of a representation of the outcome. In contrast, goal tracking behavior was decreased by reward devaluation. Furthermore, when we divided rats into those with high propensity to engage with the lever (sign trackers) and low propensity to engage with the lever (goal trackers), we found that nearly all of the effects of devaluation could be attributed to the goal trackers. These results show that sign tracking and goal tracking behavior may be the output of different associative structures in the brain, providing insight into the mechanisms by which reward-associated stimuli—such as drug cues—come to exert control over behavior in some individuals. PMID:26733783
Dick, Taylor J M; Biewener, Andrew A; Wakeling, James M
2017-05-01
Hill-type models are ubiquitous in the field of biomechanics, providing estimates of a muscle's force as a function of its activation state and its assumed force-length and force-velocity properties. However, despite their routine use, the accuracy with which Hill-type models predict the forces generated by muscles during submaximal, dynamic tasks remains largely unknown. This study compared human gastrocnemius forces predicted by Hill-type models with the forces estimated from ultrasound-based measures of tendon length changes and stiffness during cycling, over a range of loads and cadences. We tested both a traditional model, with one contractile element, and a differential model, with two contractile elements that accounted for independent contributions of slow and fast muscle fibres. Both models were driven by subject-specific, ultrasound-based measures of fascicle lengths, velocities and pennation angles and by activation patterns of slow and fast muscle fibres derived from surface electromyographic recordings. The models predicted, on average, 54% of the time-varying gastrocnemius forces estimated from the ultrasound-based methods. However, differences between predicted and estimated forces were smaller under low speed-high activation conditions, with models able to predict nearly 80% of the gastrocnemius force over a complete pedal cycle. Additionally, the predictions from the Hill-type muscle models tested here showed that a similar pattern of force production could be achieved for most conditions with and without accounting for the independent contributions of different muscle fibre types. © 2017. Published by The Company of Biologists Ltd.
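The Hill-type structure described above multiplies an activation state by assumed force-length and force-velocity curves. The following is a generic sketch with illustrative curve shapes and constants, not the study's calibrated, ultrasound-driven model:

```python
# Generic Hill-type muscle sketch: force = activation * f_L(length) *
# f_V(velocity) * F_max * cos(pennation). Curve shapes and constants are
# illustrative assumptions, not fitted values from the study.
import math

def f_length(l_norm, width=0.45):
    """Gaussian-like active force-length curve, peak at optimal length."""
    return math.exp(-((l_norm - 1.0) / width) ** 2)

def f_velocity(v_norm, v_max=10.0, curvature=0.25):
    """Hill hyperbola for shortening (v_norm >= 0, in optimal lengths/s)."""
    if v_norm >= v_max:
        return 0.0
    return (1.0 - v_norm / v_max) / (1.0 + v_norm / (curvature * v_max))

def muscle_force(act, l_norm, v_norm, f_max=1000.0, pennation_rad=0.2):
    """Fibre force projected along the tendon line of action, in newtons."""
    return act * f_length(l_norm) * f_velocity(v_norm) * f_max * math.cos(pennation_rad)

print(round(muscle_force(0.5, 1.0, 0.0), 1))   # isometric at optimal length
```

The differential model the study tests would sum two such terms, one per fibre type, each with its own activation signal and its own force-velocity curve (faster v_max for fast fibres).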
A Repeated Trajectory Class Model for Intensive Longitudinal Categorical Outcome
Lin, Haiqun; Han, Ling; Peduzzi, Peter N.; Murphy, Terrence E.; Gill, Thomas M.; Allore, Heather G.
2014-01-01
This paper presents a novel repeated latent class model for a longitudinal response that is frequently measured as in our prospective study of older adults with monthly data on activities of daily living (ADL) for more than ten years. The proposed method is especially useful when the longitudinal response is measured much more frequently than other relevant covariates. The repeated trajectory classes represent distinct temporal patterns of the longitudinal response wherein an individual’s membership in the trajectory classes may renew or change over time. Within a trajectory class, the longitudinal response is modeled by a class-specific generalized linear mixed model. Effectively, an individual may remain in a trajectory class or switch to another as the class membership predictors are updated periodically over time. The identification of a common set of trajectory classes allows changes among the temporal patterns to be distinguished from local fluctuations in the response. An informative event such as death is jointly modeled by class-specific probability of the event through shared random effects. We do not impose the conditional independence assumption given the classes. The method is illustrated by analyzing the change over time in ADL trajectory class among 754 older adults with 70500 person-months of follow-up in the Precipitating Events Project. We also investigate the impact of jointly modeling the class-specific probability of the event on the parameter estimates in a simulation study. The primary contribution of our paper is the periodic updating of trajectory classes for a longitudinal categorical response without assuming conditional independence. PMID:24519416
2017-01-01
Conductive polymer composites are manufactured by randomly dispersing conductive particles along an insulating polymer matrix. Several authors have attempted to model the piezoresistive response of conductive polymer composites. However, all the proposed models rely upon experimental measurements of the electrical resistance at rest state. Similarly, the models available in literature assume a voltage-independent resistance and a stress-independent area for tunneling conduction. With the aim of developing and validating a more comprehensive model, a test bench capable of exerting controlled forces has been developed. Commercially available sensors—which are manufactured from conductive polymer composites—have been tested at different voltages and stresses, and a model has been derived on the basis of equations for the quantum tunneling conduction through thin insulating film layers. The resistance contribution from the contact resistance has been included in the model together with the resistance contribution from the conductive particles. The proposed model embraces a voltage-dependent behavior for the composite resistance, and a stress-dependent behavior for the tunneling conduction area. The proposed model is capable of predicting sensor current based upon information from the sourcing voltage and the applied stress. This study uses a physical (non-phenomenological) approach for all the phenomena discussed here. PMID:28906467
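The stress dependence the abstract models can be sketched schematically: tunneling resistance grows exponentially with the inter-particle separation, and compressive stress shrinks that separation, so resistance falls under load. All constants and the stress-separation law below are illustrative assumptions, not the paper's fitted model.

```python
# Schematic tunneling-resistance sketch for a conductive polymer composite:
# R = R0 * exp(k * (s - s0)), with the separation s assumed to shrink under
# compressive stress. Every constant here is an invented placeholder.
import math

def tunnel_resistance(stress, r0=1e5, s0=1.0, k=3.0, compliance=0.05):
    """Resistance (ohms) at a given compressive stress (arbitrary units)."""
    s = s0 / (1.0 + compliance * stress)    # assumed stress-separation law
    return r0 * math.exp(k * (s - s0))

print(tunnel_resistance(0.0) > tunnel_resistance(10.0))  # resistance drops under load
```

The paper's contribution is to make both the conduction area stress-dependent and the resistance voltage-dependent, whereas this sketch (like the earlier models it criticises) holds those fixed.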
Isothermal chemical denaturation of large proteins: Path-dependence and irreversibility.
Wafer, Lucas; Kloczewiak, Marek; Polleck, Sharon M; Luo, Yin
2017-12-15
State functions (e.g., ΔG) are path independent and quantitatively describe the equilibrium states of a thermodynamic system. Isothermal chemical denaturation (ICD) is often used to extrapolate state function parameters for protein unfolding in native buffer conditions. The approach is prudent when the unfolding/refolding processes are path independent and reversible, but may lead to erroneous results if the processes are not reversible. The reversibility was demonstrated in several early studies for smaller proteins, but was assumed in some reports for large proteins with complex structures. In this work, the unfolding/refolding of several proteins were systematically studied using an automated ICD instrument. It is shown that: (i) the apparent unfolding mechanism and conformational stability of large proteins can be denaturant-dependent, (ii) equilibration times for large proteins are non-trivial and may introduce significant error into calculations of ΔG, (iii) fluorescence emission spectroscopy may not correspond to other methods, such as circular dichroism, when used to measure protein unfolding, and (iv) irreversible unfolding and hysteresis can occur in the absence of aggregation. These results suggest that thorough confirmation of the state functions by, for example, performing refolding experiments or using additional denaturants, is needed when quantitatively studying the thermodynamics of protein unfolding using ICD. Copyright © 2017 Elsevier Inc. All rights reserved.
Cong, Fengyu; Lin, Qiu-Hua; Astikainen, Piia; Ristaniemi, Tapani
2014-10-30
It is well-known that data of event-related potentials (ERPs) conform to the linear transform model (LTM). For group-level ERP data processing using principal/independent component analysis (PCA/ICA), ERP data of different experimental conditions and different participants are often concatenated. It is theoretically assumed that different experimental conditions and different participants possess the same LTM. However, methods for validating this assumption in terms of signal processing have seldom been reported. When ICA decomposition is globally optimized for ERP data of one stimulus, we obtain the ratio between two coefficients mapping a source in the brain to two points along the scalp. Based on such a ratio, we defined a relative mapping coefficient (RMC). If RMCs between two conditions for an ERP are not significantly different in practice, the mapping coefficients of this ERP between the two conditions are statistically identical. We examined whether the same LTM of ERP data could be applied to two different stimulus types, fearful and happy facial expressions, used in an ignore oddball paradigm in adult human participants. We found no significant difference in LTMs (based on ICASSO) of N170 responses to the fearful and the happy faces in terms of RMCs of N170; no existing methods allow such a comparison to be made straightforwardly. The proposed RMC in light of ICA decomposition is an effective approach for validating the similarity of LTMs of ERPs between experimental conditions. This validation is fundamental to applying group-level PCA/ICA to process ERP data. Copyright © 2014 Elsevier B.V. All rights reserved.
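The RMC idea can be sketched numerically. A hedged toy example, with made-up mixing matrices (not data from the study): if two conditions share the same LTM, the ratio of the coefficients mapping a source to two scalp channels agrees across conditions, even though ICA recovers sources only up to scale.

```python
import numpy as np

# Illustrative mixing matrices (channels x sources) for two conditions,
# as ICA might estimate them; all values are made up for demonstration.
A_fear = np.array([[2.0, 0.5],
                   [1.0, 1.5]])
A_happy = np.array([[4.1, 0.6],
                    [2.0, 1.4]])   # same source topography, rescaled

def rmc(A, source, ch1=0, ch2=1):
    """Relative mapping coefficient: ratio of the coefficients mapping
    one source to two scalp points (insensitive to source scaling)."""
    return A[ch1, source] / A[ch2, source]

# If the linear transform model is shared, RMCs agree across conditions.
rmc_fear = rmc(A_fear, source=0)    # 2.0 / 1.0 = 2.0
rmc_happy = rmc(A_happy, source=0)  # 4.1 / 2.0 = 2.05
```

In practice the agreement would be assessed statistically over participants rather than by direct numerical comparison, as in the paper's significance tests on RMCs.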
A Dual Coding View of Vocabulary Learning
ERIC Educational Resources Information Center
Sadoski, Mark
2005-01-01
A theoretical perspective on acquiring sight vocabulary and developing meaningful vocabulary is presented. Dual Coding Theory assumes that cognition occurs in two independent but connected codes: a verbal code for language and a nonverbal code for mental imagery. The mixed research literature on using pictures in teaching sight vocabulary is…
Multivariate stochastic simulation with subjective multivariate normal distributions
P. J. Ince; J. Buongiorno
1991-01-01
In many applications of Monte Carlo simulation in forestry or forest products, it may be known that some variables are correlated. However, for simplicity, in most simulations it has been assumed that random variables are independently distributed. This report describes an alternative Monte Carlo simulation technique for subjectively assessed multivariate normal...
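The alternative to the independence assumption can be sketched as follows: draw correlated normal variates by applying a Cholesky factor of a subjectively assessed covariance matrix to independent standard normals. The means, covariance entries, and sample size below are illustrative assumptions, not values from the report.

```python
import numpy as np

rng = np.random.default_rng(42)

# Subjectively assessed means and covariance (illustrative values),
# e.g., a price variable and a cost variable with positive correlation.
mean = np.array([100.0, 50.0])
cov = np.array([[25.0, 12.0],
                [12.0, 16.0]])

# The Cholesky factor turns independent standard normals into
# correlated draws: if z ~ N(0, I), then mean + L z ~ N(mean, L L^T).
L = np.linalg.cholesky(cov)
z = rng.standard_normal((100_000, 2))   # independent N(0, 1) draws
samples = mean + z @ L.T                # correlated N(mean, cov) draws

sample_cov = np.cov(samples, rowvar=False)  # recovers cov approximately
```

The same factorization underlies `numpy.random.Generator.multivariate_normal`; it is written out here to make the dependence structure explicit.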
Knowledge Discovery from Relations
ERIC Educational Resources Information Center
Guo, Zhen
2010-01-01
A basic and classical assumption in the machine learning research area is the "randomness assumption" (also known as the i.i.d. assumption), which states that data are assumed to be independent and identically generated by some known or unknown distribution. This assumption, which is the foundation of most existing approaches in the literature, simplifies…
Collinearity in Least-Squares Analysis
ERIC Educational Resources Information Center
de Levie, Robert
2012-01-01
How useful are the standard deviations per se, and how reliable are results derived from several least-squares coefficients and their associated standard deviations? When the output parameters obtained from a least-squares analysis are mutually independent, as is often assumed, they are reliable estimators of imprecision and so are the functions…
No Friends but the Mountains: A Simulation on Kurdistan.
ERIC Educational Resources Information Center
Major, Marc R.
1996-01-01
Presents a simulation that focuses on Kurdish nationalism and the struggle for autonomy and independence from the states that rule over Kurdish lands. Students assume the roles of either one of the countries directly involved or the governing body of the United Nations. Includes extensive background material. (MJP)
Testing Factorial Invariance in Multilevel Data: A Monte Carlo Study
ERIC Educational Resources Information Center
Kim, Eun Sook; Kwok, Oi-man; Yoon, Myeongsun
2012-01-01
Testing factorial invariance has recently gained more attention in different social science disciplines. Nevertheless, when examining factorial invariance, it is generally assumed that the observations are independent of each other, which might not be always true. In this study, we examined the impact of testing factorial invariance in multilevel…
When Time Makes a Difference: Addressing Ergodicity and Complexity in Education
ERIC Educational Resources Information Center
Koopmans, Matthijs
2015-01-01
The detection of complexity in behavioral outcomes often requires an estimation of their variability over a prolonged time spectrum to assess processes of stability and transformation. Conventional scholarship typically relies on time-independent measures, "snapshots", to analyze those outcomes, assuming that group means and their…
System Lifetimes, The Memoryless Property, Euler's Constant, and Pi
ERIC Educational Resources Information Center
Agarwal, Anurag; Marengo, James E.; Romero, Likin Simon
2013-01-01
A "k"-out-of-"n" system functions as long as at least "k" of its "n" components remain operational. Assuming that component failure times are independent and identically distributed exponential random variables, we find the distribution of system failure time. After some examples, we find the limiting…
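The distributional result can be checked by simulation. In the sketch below (parameter values are illustrative), the system lifetime is the (n − k + 1)-th order statistic of the component failure times, and the memoryless property gives the closed-form mean via independent exponential spacings: E[T] = (1/λ) Σ_{i=k}^{n} 1/i.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, lam = 5, 3, 1.0   # illustrative values, not from the article

# A k-out-of-n system fails when the (n - k + 1)-th component fails,
# i.e., the system lifetime is the (n - k + 1)-th order statistic.
t = rng.exponential(1.0 / lam, size=(200_000, n))
lifetimes = np.sort(t, axis=1)[:, n - k]   # index n-k = (n-k+1)-th smallest

# Memorylessness decomposes T into independent spacings Exp(i * lam),
# i = n, n-1, ..., k, giving E[T] = (1/lam) * sum_{i=k}^{n} 1/i.
expected = sum(1.0 / i for i in range(k, n + 1)) / lam

mc_mean = lifetimes.mean()   # should agree with `expected`
```

For k = 1 (a parallel system) the sum becomes the full harmonic number, which is where Euler's constant enters the limiting behavior the article goes on to discuss.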
Urban Adolescents' Postschool Aspirations and Awareness
ERIC Educational Resources Information Center
Scanlon, David; Saxon, Karyn; Cowell, Molly; Kenny, Maureen E.; Perez-Gualdron, Leyla; Jernigan, Maryam
2008-01-01
The young adult years (approximately the age when one leaves high school to age 23) are pivotal to adult life success. They are the years when adolescents typically assume dramatic increases in responsibility for self-direction in areas such as socialization, independent living, citizenship, employment, education, and mental and physical health.…
Information Technology Benchmarks: A Practical Guide for College and University Presidents
ERIC Educational Resources Information Center
Smallen, David; Leach, Karen
2004-01-01
Information technologies (IT) continue to grow in importance for independent colleges and universities. Increasingly, students assume a digital world, from online application and registration, to course materials, to communicating with classmates and professors. To stay competitive for students as well as to enhance instructional and…
The Importance of Unitization for Familiarity-Based Learning
ERIC Educational Resources Information Center
Parks, Colleen M.; Yonelinas, Andrew P.
2015-01-01
It is often assumed that recollection is necessary to support memory for novel associations, whereas familiarity supports memory for single items. However, the levels of unitization framework assumes that familiarity can support associative memory under conditions in which the components of an association are unitized (i.e., treated as a single…
14 CFR 25.351 - Yaw maneuver conditions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... angle of paragraph (c) of this section, it is assumed that the cockpit rudder control is suddenly...) With the airplane in unaccelerated flight at zero yaw, it is assumed that the cockpit rudder control is suddenly displaced to achieve the resulting rudder deflection, as limited by: (1) The control system on...
14 CFR 25.351 - Yaw maneuver conditions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... angle of paragraph (c) of this section, it is assumed that the cockpit rudder control is suddenly...) With the airplane in unaccelerated flight at zero yaw, it is assumed that the cockpit rudder control is suddenly displaced to achieve the resulting rudder deflection, as limited by: (1) The control system on...
African American Teaching and the Matriarchal Performance.
ERIC Educational Resources Information Center
Jeffries, Rhonda Baynes
This paper discusses the role of matriarchs in African-American culture, explaining that traditionally, African-American matriarchs arise from a combination of African norms and American social positions that naturally forces them to assume leadership conditions. The roles these women assume are a response to the desire to survive in a society…
How General is General Strain Theory? Assessing Determinacy and Indeterminacy across Life Domains
ERIC Educational Resources Information Center
De Coster, Stacy; Kort-Butler, Lisa
2006-01-01
This article explores how assumptions of determinacy and indeterminacy apply to general strain theory. Theories assuming determinacy assert that motivational conditions determine specific forms of deviant adaptations, whereas those assuming indeterminacy propose that a given social circumstance can predispose a person toward many forms of…
Calculating weighted estimates of peak streamflow statistics
Cohn, Timothy A.; Berenbrock, Charles; Kiang, Julie E.; Mason, Jr., Robert R.
2012-01-01
According to the Federal guidelines for flood-frequency estimation, the uncertainty of peak streamflow statistics, such as the 1-percent annual exceedance probability (AEP) flow at a streamgage, can be reduced by combining the at-site estimate with the regional regression estimate to obtain a weighted estimate of the flow statistic. The procedure assumes the estimates are independent, which is reasonable in most practical situations. The purpose of this publication is to describe and make available a method for calculating a weighted estimate from the uncertainty or variance of the two independent estimates.
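The combination described above is the standard inverse-variance weighting of two independent estimates. A minimal sketch with illustrative numbers (not values from the publication):

```python
# Illustrative values: at-site and regional-regression estimates of a
# log-flow statistic, each with its variance (these are assumptions,
# not data from the report).
x_site, var_site = 3.10, 0.040
x_reg,  var_reg  = 3.30, 0.090

# Assuming the two estimates are independent, the minimum-variance
# combination weights each estimate by its inverse variance.
w_site = 1.0 / var_site
w_reg = 1.0 / var_reg
x_weighted = (w_site * x_site + w_reg * x_reg) / (w_site + w_reg)

# The variance of the weighted estimate is smaller than either input's.
var_weighted = 1.0 / (w_site + w_reg)
```

The weighted variance is always below the smaller of the two input variances, which is the sense in which combining the estimates "reduces uncertainty."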
A consensus-based dynamics for market volumes
NASA Astrophysics Data System (ADS)
Sabatelli, Lorenzo; Richmond, Peter
2004-12-01
We develop a model of trading orders based on opinion dynamics. The agents may be thought of as the shareholders of a major mutual fund rather than as direct traders. The balance between their buy and sell orders determines the size of the fund order (volume) and has an impact on prices and indexes. We assume agents interact simultaneously with each other through a Sznajd-like interaction. Their degree of connection is determined by the probability of changing opinion independently of what their neighbours are doing. We assume that such a probability may change randomly, after each transaction, by an amount proportional to the relative difference between the volatility then measured and a benchmark that we assume to be an exponential moving average of the past volume values. We show how this simple model is compatible with some of the main statistical features observed for the asset volumes in financial markets.
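The adaptive flip probability can be sketched in a few lines. All parameter values and the volume proxy below are assumptions for illustration; the essential ingredients from the abstract are the proportional update and the exponential moving average (EMA) benchmark.

```python
import random

random.seed(1)

# Illustrative parameters (assumptions, not from the paper).
alpha = 0.1    # EMA smoothing weight for the volume benchmark
gamma = 0.5    # sensitivity of the flip probability to the mismatch
p = 0.2        # probability of changing opinion independently
ema = 100.0    # benchmark: exponential moving average of past volumes

for _ in range(1000):
    volume = random.gauss(100.0, 20.0)   # stand-in for the fund volume
    # After each transaction, the flip probability changes by an amount
    # proportional to the relative difference between the measured
    # quantity and the EMA benchmark.
    p += gamma * p * (volume - ema) / ema
    p = min(max(p, 0.0), 1.0)            # keep p a valid probability
    ema = alpha * volume + (1 - alpha) * ema   # update the benchmark
```

The multiplicative update makes the opinion-flip probability itself a random process driven by volume fluctuations, which is the feedback loop the model relies on.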
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Andrew F.; Marzari, Francesco
Here, we present two-dimensional hydrodynamic simulations using the Smoothed Particle Hydrodynamic code, VINE, to model a self-gravitating binary system. We model configurations in which a circumbinary torus+disk surrounds a pair of stars in orbit around each other and a circumstellar disk surrounds each star, similar to that observed for the GG Tau A system. We assume that the disks cool as blackbodies, using rates determined independently at each location in the disk by the time dependent temperature of the photosphere there. We assume heating due to hydrodynamical processes and to radiation from the two stars, using rates approximated from a measure of the radiation intercepted by the disk at its photosphere.
Reliable and accurate extraction of Hamaker constants from surface force measurements.
Miklavcic, S J
2018-08-15
A simple and accurate closed-form expression for the Hamaker constant that best represents experimental surface force data is presented. Numerical comparisons are made with the current standard least squares approach, which falsely assumes error-free separation measurements, and a nonlinear version assuming independent measurements of force and separation are subject to error. The comparisons demonstrate that not only is the proposed formula easily implemented, it is also considerably more accurate. This option is appropriate for any value of Hamaker constant, high or low, and certainly for any interacting system exhibiting an inverse square distance dependent van der Waals force. Copyright © 2018 Elsevier Inc. All rights reserved.
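The standard least-squares route the study critiques can be sketched as follows, assuming a sphere-plate geometry with the inverse-square van der Waals force F(D) = −AR/(6D²). All numerical values are illustrative, and only the force (not the separation) carries error here, which is exactly the assumption the paper questions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed sphere-plate geometry: F(D) = -A * R / (6 * D**2).
# A, R, and the noise level are illustrative, not from the paper.
A_true = 1.0e-20    # Hamaker constant, J
R = 10e-6           # probe radius, m
D = np.linspace(2e-9, 20e-9, 50)                  # separations, m
F = -A_true * R / (6.0 * D**2)
F_meas = F + rng.normal(0.0, 1e-11, size=D.size)  # force noise only

# Standard least squares treats the separations as error-free:
# regress the measured force on the known design vector x = -R/(6 D^2),
# so the fitted slope is the Hamaker constant.
x = -R / (6.0 * D**2)
A_fit = (x @ F_meas) / (x @ x)
```

When the separations also carry error, this through-origin regression is biased, motivating the errors-in-variables alternatives the abstract compares.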
Quantification is Neither Necessary Nor Sufficient for Measurement
NASA Astrophysics Data System (ADS)
Mari, Luca; Maul, Andrew; Torres Irribarra, David; Wilson, Mark
2013-09-01
Being an infrastructural, widespread activity, measurement is laden with stereotypes. Some of these concern the role of measurement in the relation between quality and quantity. In particular, it is sometimes argued or assumed that quantification is necessary for measurement; it is also sometimes argued or assumed that quantification is sufficient for or synonymous with measurement. To assess the validity of these positions the concepts of measurement and quantitative evaluation should be independently defined and their relationship analyzed. We contend that the defining characteristic of measurement should be the structure of the process, not a feature of its results. Under this perspective, quantitative evaluation is neither sufficient nor necessary for measurement.
NASA Technical Reports Server (NTRS)
Cooper, F. D.
1965-01-01
A method of implementing Saturn V lunar missions from an earth parking orbit is presented. The ground launch window is assumed continuous over a four and one-half hour period. The iterative guidance scheme combined with a set of auxiliary equations that define suitable S-IVB cutoff conditions, is the approach taken. The four inputs to the equations that define cutoff conditions are represented as simple third-degree polynomials as a function of ignition time. Errors at lunar arrival caused by the separate and combined effects of the guidance equations, cutoff conditions, hypersurface errors, and input representations are shown. Vehicle performance variations and parking orbit injection errors are included as perturbations. Appendix I explains how aim vectors were computed for the cutoff equations. Appendix II presents all guidance equations and related implementation procedures. Appendix III gives the derivation of the auxiliary cutoff equations. No error at lunar arrival was large enough to require a midcourse correction greater than one meter per second assuming a transfer time of three days and the midcourse correction occurs five hours after injection. Since this result is insignificant when compared to expected hardware errors, the implementation procedures presented are adequate to define cutoff conditions for Saturn V lunar missions.
Simulation of fatigue crack growth under large scale yielding conditions
NASA Astrophysics Data System (ADS)
Schweizer, Christoph; Seifert, Thomas; Riedel, Hermann
2010-07-01
A simple mechanism-based model for fatigue crack growth assumes a linear correlation between the cyclic crack-tip opening displacement (ΔCTOD) and the crack growth increment (da/dN). The objective of this work is to compare analytical estimates of ΔCTOD with results of numerical calculations under large scale yielding conditions and to verify the physical basis of the model by comparing the predicted and the measured evolution of the crack length in a 10%-chromium-steel. The material is described by a rate independent cyclic plasticity model with power-law hardening and Masing behavior. During the tension-going part of the cycle, nodes at the crack-tip are released such that the crack growth increment corresponds approximately to the crack-tip opening. The finite element analysis performed in ABAQUS is continued until a stabilized value of ΔCTOD is reached. The analytical model contains an interpolation formula for the J-integral, which is generalized to account for cyclic loading and crack closure. Both simulated and estimated ΔCTOD are reasonably consistent. The predicted crack length evolution is found to be in good agreement with the behavior of microcracks observed in a 10%-chromium steel.
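The linear correlation at the heart of the model can be sketched as a cycle-by-cycle integration. The proportionality constant and the ΔCTOD estimate below are illustrative assumptions; the paper obtains ΔCTOD from a cyclic generalization of a J-integral interpolation formula, not from the toy linear relation used here.

```python
# Minimal sketch of the growth law da/dN = beta * dCTOD(a).
# beta and the dCTOD model are illustrative assumptions.
beta = 0.5      # fraction of dCTOD converted into growth per cycle
c = 2.0e-4      # stand-in: dCTOD = c * a under fixed strain amplitude
a = 50e-6       # initial crack length, m

history = [a]
for cycle in range(10_000):
    dctod = c * a          # cyclic crack-tip opening displacement
    a += beta * dctod      # crack growth increment for this cycle
    history.append(a)
```

With ΔCTOD proportional to crack length, the integration gives near-exponential microcrack growth, the qualitative behavior such mechanism-based models predict under strain-controlled large scale yielding.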
A random spatial network model based on elementary postulates
Karlinger, Michael R.; Troutman, Brent M.
1989-01-01
A model for generating random spatial networks that is based on elementary postulates comparable to those of the random topology model is proposed. In contrast to the random topology model, this model ascribes a unique spatial specification to generated drainage networks, a distinguishing property of some network growth models. The simplicity of the postulates creates an opportunity for potential analytic investigations of the probabilistic structure of the drainage networks, while the spatial specification enables analyses of spatially dependent network properties. In the random topology model all drainage networks, conditioned on magnitude (number of first-order streams), are equally likely, whereas in this model all spanning trees of a grid, conditioned on area and drainage density, are equally likely. As a result, link lengths in the generated networks are not independent, as usually assumed in the random topology model. For a preliminary model evaluation, scale-dependent network characteristics, such as geometric diameter and link length properties, and topologic characteristics, such as bifurcation ratio, are computed for sets of drainage networks generated on square and rectangular grids. Statistics of the bifurcation and length ratios fall within the range of values reported for natural drainage networks, but geometric diameters tend to be relatively longer than those for natural networks.
Kyogoku, Daisuke; Sota, Teiji
2017-05-17
Interspecific mating interactions, or reproductive interference, can affect population dynamics, species distribution and abundance. Previous population dynamics models have assumed that the impact of frequency-dependent reproductive interference depends on the relative abundances of species. However, this assumption could be an oversimplification inappropriate for making quantitative predictions. Therefore, a more general model to forecast population dynamics in the presence of reproductive interference is required. Here we developed a population dynamics model to describe the absolute density dependence of reproductive interference, which appears likely when encounter rate between individuals is important. Our model (i) can produce diverse shapes of isoclines depending on parameter values and (ii) predicts weaker reproductive interference when absolute density is low. These novel characteristics can create conditions where coexistence is stable and independent from the initial conditions. We assessed the utility of our model in an empirical study using an experimental pair of seed beetle species, Callosobruchus maculatus and Callosobruchus chinensis. Reproductive interference became stronger with increasing total beetle density even when the frequencies of the two species were kept constant. Our model described the effects of absolute density and showed a better fit to the empirical data than the existing model overall.
Pulsed magnetic field induced fast drug release from magneto liposomes via ultrasound generation.
Podaru, George; Ogden, Saralyn; Baxter, Amanda; Shrestha, Tej; Ren, Shenqiang; Thapa, Prem; Dani, Raj Kumar; Wang, Hongwang; Basel, Matthew T; Prakash, Punit; Bossmann, Stefan H; Chikan, Viktor
2014-10-09
Fast drug delivery is very important to utilize drug molecules that are short-lived under physiological conditions. Techniques that can release model molecules under physiological conditions could play an important role in discovering the pharmacokinetics of short-lived substances in the body. Here an experimental method is developed for the fast release of the liposomes' payload without a significant increase in (local) temperatures. This goal is achieved by using short magnetic pulses to disrupt the lipid bilayer of liposomes loaded with magnetic nanoparticles. The drug release has been tested by two independent assays. The first assay relies on the AC impedance measurements of MgSO4 released from the magnetic liposomes. The second standard release assay is based on the increase of the fluorescence signal from 5(6)-carboxyfluorescein dye when the dye is released from the magneto liposomes. The efficiency of drug release ranges from a few percent to up to 40% in the case of MgSO4. The experiments also indicate that the magnetic nanoparticles generate ultrasound, which is assumed to have a role in the release of the model drugs from the magneto liposomes.
Chatziprodromidou, I P; Apostolou, T
2018-04-01
The aim of the study was to estimate the sensitivity and specificity of enzyme-linked immunosorbent assay (ELISA) and immunoblot (IB) for detecting antibodies of Neospora caninum in dairy cows, in the absence of a gold standard. The study complies with STRADAS-paratuberculosis guidelines for reporting the accuracy of the test. We tried to apply Bayesian models that do not require conditional independence of the tests under evaluation, but as convergence problems appeared, we used a Bayesian methodology that does not assume conditional dependence between the tests. Informative prior probability distributions were constructed, based on scientific inputs regarding sensitivity and specificity of the IB test and the prevalence of disease in the studied populations. IB sensitivity and specificity were estimated to be 98.8% and 91.3%, respectively, while the respective estimates for ELISA were 60% and 96.7%. A sensitivity analysis, in which modified prior probability distributions concerning IB diagnostic accuracy were applied, showed a limited effect on posterior assessments. We concluded that ELISA can be used first to screen the bulk milk, and IB can then be used whenever needed.
14 CFR 25.483 - One-gear landing conditions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... conditions. For the one-gear landing conditions, the airplane is assumed to be in the level attitude and to... this attitude— (a) The ground reactions must be the same as those obtained on that side under § 25.479...
Analysis of Fluid Gauge Sensor for Zero or Microgravity Conditions using Finite Element Method
NASA Technical Reports Server (NTRS)
Deshpande, Manohar D.; Doiron, Terence a.
2007-01-01
In this paper the Finite Element Method (FEM) is presented for mass/volume gauging of a fluid in a tank subjected to zero or microgravity conditions. In this approach, mutual capacitances between electrodes embedded inside the tank are first measured. Assuming the medium properties, the mutual capacitances are also estimated using the FEM approach. Using proper non-linear optimization, the assumed properties are updated by minimizing the mean square error between the estimated and measured capacitance values. Numerical results are presented to validate the present approach.
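The update loop can be sketched with a toy forward model standing in for the FEM solver (everything below is illustrative): the assumed permittivity is adjusted by minimizing the mean square error between estimated and measured capacitances.

```python
import numpy as np

# Toy stand-in for the FEM forward model: predicted mutual capacitances
# as a simple function of the assumed relative permittivity. The real
# method solves an electrostatic FEM problem; geometry factors and all
# values here are illustrative assumptions.
GEOMETRY = np.array([1.0, 0.8, 0.5])   # one factor per electrode pair

def predicted_capacitances(eps_r):
    return eps_r * GEOMETRY            # C_ij proportional to permittivity

eps_true = 20.0                        # the "unknown" fluid permittivity
measured = predicted_capacitances(eps_true)

# Nonlinear optimization reduced to a 1-D scan over candidate
# permittivities: pick the one minimizing the mean square error
# between estimated and measured capacitances.
candidates = np.linspace(1.0, 80.0, 1581)
errors = [np.mean((predicted_capacitances(e) - measured) ** 2)
          for e in candidates]
eps_est = candidates[int(np.argmin(errors))]
```

In the actual method each error evaluation requires a full FEM solve, so a gradient-based or quasi-Newton optimizer would replace the brute-force scan used here for clarity.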
Differential approach to strategies of segmental stabilisation in postural control.
Isableu, Brice; Ohlmann, Théophile; Crémieux, Jacques; Amblard, Bernard
2003-05-01
The present paper attempts to clarify the between-subjects variability exhibited in both segmental stabilisation strategies and their subordinated or associated sensory contribution. Previous data have emphasised close relationships between the interindividual variability in both the visual control of posture and the spatial visual perception. In this study, we focused on the possible relationships that might link perceptual visual field dependence-independence and the visual contribution to segmental stabilisation strategies. Visual field dependent (FD) and field independent (FI) subjects were selected on the basis of their extreme score in a static rod and frame test where an estimation of the subjective vertical was required. In the postural test, the subjects stood in the sharpened Romberg position in darkness or under normal or stroboscopic illumination, in front of either a vertical or a tilted frame. Strategies of segmental stabilisation of the head, shoulders and hip in the roll plane were analysed by means of their anchoring index (AI). Our hypothesis was that FD subjects might use mainly visual cues for calibrating not only their spatial perception but also their strategies of segmental stabilisation. In the case of visual cue disturbances, a greater visual dependency to the strategies of segmental stabilisation in FD subjects should be validated by observing more systematic "en bloc" functioning (i.e. negative AI) between two adjacent segments. The main results are the following: 1. Strategies of segmental stabilisation differed between both groups and differences were amplified with the deprivation of either total vision and/or static visual cues. 2. In the absence of total vision and/or static visual cues, FD subjects have shown an increased efficiency of the hip stabilisation in space strategy and an "en bloc" operation of the shoulder-hip unit (whole trunk). 
The last "en bloc" operation was extended to the whole head-trunk unit in darkness, associated with a hip stabilisation in space. 3. The FI subjects have adopted neither a strategy of segmental stabilisation in space nor on the underlying segment, whatever the body segment considered and the visual condition. Thus, in this group, head, shoulder and hip moved independently from each other during stance control, roughly without taking into account the visual condition. The results, emphasising a differential weighting of sensory input involved in both perceptual and postural control, are discussed in terms of the differential choice and/or ability to select the adequate frame of reference common to both cognitive and motor spatial activities. We assumed that a motor-somesthetics "neglect" or a lack of mastering of these inputs/outputs rather than a mere visual dependence in FD subjects would generate these interindividual differences in both spatial perception and postural balance. This proprioceptive "neglect" is assumed to lead FD subjects to sensory reweighting, whereas proprioceptive dominance would lead FI subjects to a greater ability in selecting the adequate frame of reference in the case of intersensory disturbances. Finally, this study also provides evidence for a new interpretation of the visual field dependence-independence dimension in both spatial perception and postural control.
14 CFR 29.483 - One-wheel landing conditions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false One-wheel landing conditions. 29.483 Section 29.483 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION... One-wheel landing conditions. For the one-wheel landing condition, the rotorcraft is assumed to be in...
14 CFR 27.483 - One-wheel landing conditions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false One-wheel landing conditions. 27.483 Section 27.483 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION... One-wheel landing conditions. For the one-wheel landing condition, the rotorcraft is assumed to be in...
14 CFR 27.483 - One-wheel landing conditions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false One-wheel landing conditions. 27.483 Section 27.483 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION... One-wheel landing conditions. For the one-wheel landing condition, the rotorcraft is assumed to be in...
14 CFR 29.483 - One-wheel landing conditions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false One-wheel landing conditions. 29.483 Section 29.483 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION... One-wheel landing conditions. For the one-wheel landing condition, the rotorcraft is assumed to be in...
Independent events in elementary probability theory
NASA Astrophysics Data System (ADS)
Csenki, Attila
2011-07-01
In Probability and Statistics taught to mathematicians as a first introduction or to a non-mathematical audience, joint independence of events is introduced by requiring that the multiplication rule is satisfied. The following statement is usually tacitly assumed to hold (and, at best, intuitively motivated):
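The gap between the multiplication rule applied pairwise and joint independence can be made concrete with the classic two-coin example (an illustration, not taken from the article): three events can satisfy the multiplication rule in every pair while failing it as a triple.

```python
from itertools import product
from fractions import Fraction

# Sample space: two fair coin tosses, each outcome with probability 1/4.
omega = list(product("HT", repeat=2))

def pr(event):
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

A = lambda w: w[0] == "H"     # first toss is heads
B = lambda w: w[1] == "H"     # second toss is heads
C = lambda w: w[0] == w[1]    # the two tosses agree

# Every pair satisfies the multiplication rule ...
pairwise = all(pr(lambda w: e1(w) and e2(w)) == pr(e1) * pr(e2)
               for e1, e2 in [(A, B), (A, C), (B, C)])

# ... but the triple does not: P(A ∩ B ∩ C) = 1/4, while
# P(A) P(B) P(C) = 1/8, so the events are not jointly independent.
triple = pr(lambda w: A(w) and B(w) and C(w)) == pr(A) * pr(B) * pr(C)
```

This is why joint independence must be defined by requiring the multiplication rule for every subcollection of events, not merely for pairs.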
Classification with asymmetric label noise: Consistency and maximal denoising
Blanchard, Gilles; Flaska, Marek; Handy, Gregory; ...
2016-09-20
In many real-world classification problems, the labels of training examples are randomly corrupted. Most previous theoretical work on classification with label noise assumes that the two classes are separable, that the label noise is independent of the true class label, or that the noise proportions for each class are known. In this work, we give conditions that are necessary and sufficient for the true class-conditional distributions to be identifiable. These conditions are weaker than those analyzed previously, and allow for the classes to be nonseparable and the noise levels to be asymmetric and unknown. The conditions essentially state that a majority of the observed labels are correct and that the true class-conditional distributions are “mutually irreducible,” a concept we introduce that limits the similarity of the two distributions. For any label noise problem, there is a unique pair of true class-conditional distributions satisfying the proposed conditions, and we argue that this pair corresponds in a certain sense to maximal denoising of the observed distributions. Our results are facilitated by a connection to “mixture proportion estimation,” which is the problem of estimating the maximal proportion of one distribution that is present in another. We establish a novel rate of convergence result for mixture proportion estimation, and apply this to obtain consistency of a discrimination rule based on surrogate loss minimization. Experimental results on benchmark data and a nuclear particle classification problem demonstrate the efficacy of our approach. MSC 2010 subject classifications: Primary 62H30; secondary 68T10. Keywords and phrases: Classification, label noise, mixture proportion estimation, surrogate loss, consistency.
Classification with asymmetric label noise: Consistency and maximal denoising
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanchard, Gilles; Flaska, Marek; Handy, Gregory
Zairian Political Conditions and Prospects for Economic Development,
1984-08-15
political conditions and potential for long-term economic development. Two different approaches to the question of political stability and economic...development are presented. Each assumes that political stability is a necessary, though not sufficient, condition for economic development and that, therefore
14 CFR 23.441 - Maneuvering loads.
Code of Federal Regulations, 2010 CFR
2010-01-01
... conditions. In computing the loads, the yawing velocity may be assumed to be zero: (1) With the airplane in unaccelerated flight at zero yaw, it is assumed that the rudder control is suddenly displaced to the maximum... attainable steady state sideslip angle, with the rudder at maximum deflection caused by any one of the...
14 CFR 25.489 - Ground handling conditions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Ground handling conditions. 25.489 Section... AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Structure Ground Loads § 25.489 Ground handling conditions... ground handling conditions). No wing lift may be considered. The shock absorbers and tires may be assumed...
14 CFR 25.489 - Ground handling conditions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Ground handling conditions. 25.489 Section... AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Structure Ground Loads § 25.489 Ground handling conditions... ground handling conditions). No wing lift may be considered. The shock absorbers and tires may be assumed...
Subjective Age Bias: A Motivational and Information Processing Approach
ERIC Educational Resources Information Center
Teuscher, Ursina
2009-01-01
There is broad empirical evidence, but still a lack of theoretical explanations, for the phenomenon that most older people feel considerably younger than their real age. In this article, a measurement model of subjective age was assessed, and two independent theoretical approaches are proposed: (1) a motivational approach assuming that the age…
The Abstract Selection Task: New Data and an Almost Comprehensive Model
ERIC Educational Resources Information Center
Klauer, Karl Christoph; Stahl, Christoph; Erdfelder, Edgar
2007-01-01
A complete quantitative account of P. Wason's (1966) abstract selection task is proposed. The account takes the form of a mathematical model. It is assumed that some response patterns are caused by inferential reasoning, whereas other responses reflect cognitive processes that affect each card selection separately and independently of other card…
Modernity's "Other" and the Transformation of the University
ERIC Educational Resources Information Center
Richards, Howard
2015-01-01
In a dehumanized world in which meanings derived from dominant liberal world views are tacitly assumed to exist objectively and to impose themselves on discourses and on minds quite independently of who expresses them, this paper endorses what Immanuel Wallerstein calls "unthinking social science" and then rethinking social science in…
An Assessment of Decision-Making Processes in Dual-Career Marriages.
ERIC Educational Resources Information Center
Kingsbury, Nancy M.
As large numbers of women enter the labor force, decision making and power processes have assumed greater importance in marital relationships. A sample of 51 (N=101) dual-career couples were interviewed to assess independent variables predictive of process power, process outcome, and subjective outcomes of decision making in dual-career families.…
Background: Allergic sensitization to fungi has been associated with asthma severity. As a result, it has been largely assumed that the contribution of fungi to allergic disease is mediated through their potent antigenicity. Objective: We sought to determine the mechanism by which fun...
Bluetooth-Assisted Context-Awareness in Educational Data Networks
ERIC Educational Resources Information Center
Gonzalez-Castano, F. J.; Garcia-Reinoso, J.; Gil-Castineira, F.; Costa-Montenegro, E.; Pousada-Carballo, J. M.
2005-01-01
In this paper, we propose an auxiliary "location network", to support user-independent context-awareness in educational data networks; for example, to help visitors in a museum. We assume that, in such scenarios, there exist "service servers" that need to be aware of user location in real-time. Specifically, we propose the implementation of a…
Brain Networks Associated with Sublexical Properties of Chinese Characters
ERIC Educational Resources Information Center
Yang, Jianfeng; Wang, Xiaojuan; Shu, Hua; Zevin, Jason D.
2011-01-01
Cognitive models of reading all assume some division of labor among processing pathways in mapping among print, sound and meaning. Many studies of the neural basis of reading have used task manipulations such as rhyme or synonym judgment to tap these processes independently. Here we take advantage of specific properties of the Chinese writing…
Bayesian Analysis of Multilevel Probit Models for Data with Friendship Dependencies
ERIC Educational Resources Information Center
Koskinen, Johan; Stenberg, Sten-Ake
2012-01-01
When studying educational aspirations of adolescents, it is unrealistic to assume that the aspirations of pupils are independent of those of their friends. Considerable attention has also been given to the study of peer influence in the educational and behavioral literature. Typically, in empirical studies, the friendship networks have either been…
Research in Secondary Schools. Advances in Learning and Behavioral Disabilities. Volume 17
ERIC Educational Resources Information Center
Scruggs, Thomas E., Ed.; Mastropieri, Margo A., Ed.
2004-01-01
Secondary education of students with learning and behavioral disabilities is an issue of great importance. Unlike elementary schools, secondary schools require substantially more independent functioning, assume the effective use of student planning and study skills, and often lack the classes in basic skills needed by some learners. Further, new…
Feature Sampling in Detection: Implications for the Measurement of Perceptual Independence
ERIC Educational Resources Information Center
Macho, Siegfried
2007-01-01
The article presents the feature sampling signal detection (FS-SDT) model, an extension of the multivariate signal detection (SDT) model. The FS-SDT model assumes that, because of attentional shifts, different subsets of features are sampled for different presentations of the same multidimensional stimulus. Contrary to the SDT model, the FS-SDT…
Robust Optimum Invariant Tests for Random MANOVA Models.
1986-10-01
are assumed to be independent normal with zero mean and dispersions σ² and σ₁² respectively, Roy and Gnanadesikan (1959) considered the problem of...Part II: The multivariate case. Ann. Math. Statist. 31, 939-968. [7] Roy, S.N. and Gnanadesikan, R. (1959). Some contributions to ANOVA in one or more
Visuospatial Orientational Shifts: Evidence for Three Independent Mechanisms.
ERIC Educational Resources Information Center
Noonan, Michael; Axelrod, Seymour
While it is often assumed that a single mechanism underlies varied experimental evidence of selectivity, Berlyne (1969) suggested that attention-like selectivity may take place in a number of quite separate neural systems. This study examined the issue of visuospatial attention by investigating covert orientation or "looking out of the corner of…
A Two-Level Confirmatory Factor Analysis of a Modified Rosenberg Self-Esteem Scale
ERIC Educational Resources Information Center
Zimprich, Daniel; Perren, Sonja; Hornung, Rainer
2005-01-01
Classical factor analysis assumes independent and identically distributed observations. Educational data, however, are often hierarchically structured, with, for example, students being nested within classes. In this study, data on self-esteem gathered in a sample of 1,107 students within 72 school classes in Switzerland were analyzed using…
False fame prevented: avoiding fluency effects without judgmental correction.
Topolinski, Sascha; Strack, Fritz
2010-05-01
Three studies show a way to prevent fluency effects independently of judgmental correction strategies by identifying and procedurally blocking the sources of fluency variations, which are assumed to be embodied in nature. For verbal stimuli, covert pronunciations are assumed to be the crucial source of fluency gains. As a consequence, blocking such pronunciation simulations through a secondary oral motor task decreased the false-fame effect for repeatedly presented names of actors (Experiment 1) as well as prevented increases in trust due to repetition for brand names and names of shares in the stock market (Experiment 2). Extending this evidence beyond repeated exposure, we demonstrated that blocking oral motor simulations also prevented fluency effects of word pronunciation on judgments of hazardousness (Experiment 3). Concerning the realm of judgment correction, this procedural blocking of (biasing) associative processes is a decontamination method not considered before in the literature, because it is independent of exposure control, mood, motivation, and post hoc correction strategies. The present results also have implications for applied issues, such as advertising and investment decisions. 2010 APA, all rights reserved
NASA Technical Reports Server (NTRS)
Reddy, J. N.
1986-01-01
An improved plate theory that accounts for the transverse shear deformation is presented, and mixed and displacement finite element models of the theory are developed. The theory is based on an assumed displacement field in which the in-plane displacements are expanded in terms of the thickness coordinate up to the cubic term and the transverse deflection is assumed to be independent of the thickness coordinate. The governing equations of motion for the theory are derived from Hamilton's principle. The theory eliminates the need for shear correction factors because the transverse shear stresses are represented parabolically. A mixed finite element model that uses independent approximations of the displacements and moments, and a displacement model that uses only displacements as degrees of freedom, are developed. A comparison of the numerical results for bending with the exact solutions of the new theory and the three-dimensional elasticity theory shows that the present theory (and hence the finite element models) is more accurate than other plate theories of the same order.
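The assumed displacement field described in the abstract can be written out explicitly. The specific cubic coefficients below follow Reddy's third-order shear deformation theory and are supplied here as an assumption, since the abstract states only the form of the expansion:

```latex
\begin{aligned}
u(x,y,z,t) &= u_0(x,y,t) + z\,\phi_x(x,y,t)
  - \frac{4z^3}{3h^2}\Big(\phi_x + \frac{\partial w_0}{\partial x}\Big),\\
v(x,y,z,t) &= v_0(x,y,t) + z\,\phi_y(x,y,t)
  - \frac{4z^3}{3h^2}\Big(\phi_y + \frac{\partial w_0}{\partial y}\Big),\\
w(x,y,z,t) &= w_0(x,y,t).
\end{aligned}
```

Here $h$ is the plate thickness. The in-plane displacements are cubic in the thickness coordinate $z$ while the deflection $w$ is independent of $z$, and the transverse shear strains vary parabolically and vanish on the faces $z=\pm h/2$, which is why no shear correction factor is needed.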
14 CFR 25.331 - Symmetric maneuvering conditions.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) Maneuvering balanced conditions. Assuming the airplane to be in equilibrium with zero pitching acceleration..., based on a rational pitching control motion vs. time profile, must be established in which the design...
Ma, Ning; Yu, Angela J
2016-01-01
Inhibitory control, the ability to stop or modify preplanned actions under changing task conditions, is an important component of cognitive functions. Two lines of models of inhibitory control have previously been proposed for human response in the classical stop-signal task, in which subjects must inhibit a default go response upon presentation of an infrequent stop signal: (1) the race model, which posits two independent go and stop processes that race to determine the behavioral outcome, go or stop; and (2) an optimal decision-making model, which posits that the observer decides whether and when to go based on continually (Bayesian) updated information about both the go and stop stimuli. In this work, we probe the relationship between go and stop processing by explicitly manipulating the discrimination difficulty of the go stimulus. While the race model assumes the go and stop processes are independent, and therefore go stimulus discriminability should not affect stop stimulus processing, we simulate the optimal model to show that it predicts harder go discrimination should result in longer go reaction time (RT), lower stop error rate, as well as faster stop-signal RT. We then present novel behavioral data that validate these model predictions. The results thus favor a fundamentally inseparable account of go and stop processing, in a manner consistent with the optimal model, and contradicting the independence assumption of the race model. More broadly, our findings contribute to the growing evidence that the computations underlying inhibitory control are systematically modulated by cognitive influences in a Bayes-optimal manner, thus opening new avenues for interpreting neural responses underlying inhibitory control.
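The independence claim at stake can be made concrete with a Monte Carlo sketch of the race model (the distributions and parameters below are illustrative assumptions, not values from the paper): because the go and stop finishing times are sampled independently, the probability of responding rises with the stop-signal delay but, by construction, cannot depend on go-discrimination difficulty.

```python
import random

random.seed(1)

# Minimal race-model sketch: go and stop finishing times are drawn
# independently; the trial outcome is whichever process finishes first.
def race_trial(ssd, go_mu=500.0, go_sigma=100.0, stop_mu=200.0, stop_sigma=30.0):
    go_rt = random.gauss(go_mu, go_sigma)           # ms from go-stimulus onset
    stop_finish = ssd + random.gauss(stop_mu, stop_sigma)
    return go_rt < stop_finish                      # True = responded (stop failed)

def p_respond(ssd, n=50_000):
    return sum(race_trial(ssd) for _ in range(n)) / n

# Under independence, P(respond | stop signal) grows with stop-signal delay
# (ssd), tracing out the classic inhibition function.
for ssd in (100, 250, 400):
    print(ssd, p_respond(ssd))
```

The optimal model discussed in the abstract breaks exactly this separation: go-stimulus difficulty feeds back into the stop decision, which the race model above cannot express.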
Naimi, Ashley I
2015-07-15
Epidemiologists are increasingly using natural effects for applied mediation analyses, yet 1 key identifying assumption is unintuitive and subject to some controversy. In this issue of the Journal, Jiang and VanderWeele (Am J Epidemiol. 2015;182(2):105-108) formalize the conditions under which the difference method can be used to estimate natural indirect effects. In this commentary, I discuss implications of the controversial "cross-worlds" independence assumption needed to identify natural effects. I argue that with a binary mediator, a simple modification of the authors' approach will provide bounds for natural direct and indirect effect estimates that better reflect the capacity of the available data to support empirical statements on the presence of mediated effects. I discuss complications encountered when odds ratios are used to decompose effects, as well as the implications of incorrectly assuming the absence of exposure-induced mediator-outcome confounders. I note that the former problem can be entirely resolved using collapsible measures of effect, such as risk ratios. In the Appendix, I use previous derivations for natural direct effect bounds on the risk difference scale to provide bounds on the odds ratio scale that accommodate 1) uncertainty due to the cross-world independence assumption and 2) uncertainty due to the cross-world independence assumption and the presence of exposure-induced mediator-outcome confounders. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
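The decomposition at issue can be illustrated numerically (all probabilities below are invented for illustration): under the standard identifying assumptions, including cross-world independence, the mediation formula yields natural direct and indirect effects that sum exactly to the total effect on the risk-difference scale, which is the identity the difference method exploits.

```python
# Binary exposure A, mediator M, outcome Y; hypothetical probabilities.
p_m_given_a = {0: 0.3, 1: 0.7}                    # P(M = 1 | A = a)
p_y = {(0, 0): 0.10, (0, 1): 0.30,                # P(Y = 1 | A = a, M = m)
       (1, 0): 0.25, (1, 1): 0.55}

def pm(a, m):                                      # P(M = m | A = a)
    return p_m_given_a[a] if m == 1 else 1 - p_m_given_a[a]

# Mediation formula (valid under no-confounding + cross-world independence):
nde = sum((p_y[1, m] - p_y[0, m]) * pm(0, m) for m in (0, 1))
nie = sum(p_y[1, m] * (pm(1, m) - pm(0, m)) for m in (0, 1))
te = sum(p_y[1, m] * pm(1, m) - p_y[0, m] * pm(0, m) for m in (0, 1))
print(nde, nie, te)     # on the risk-difference scale, nde + nie == te
```

The exact additivity shown here is a property of collapsible effect measures such as the risk difference; as the commentary notes, odds ratios do not decompose this cleanly.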
46 CFR 134.140 - Structural standards.
Code of Federal Regulations, 2010 CFR
2010-10-01
...”, assuming a steady wind speed of 100 knots for liftboats in unrestricted service, and 70 knots for liftboats in restricted service under normal operating conditions and 100 knots under severe storm conditions...
Richter, S. Helene; Garner, Joseph P.; Zipser, Benjamin; Lewejohann, Lars; Sachser, Norbert; Touma, Chadi; Schindler, Britta; Chourbaji, Sabine; Brandwein, Christiane; Gass, Peter; van Stipdonk, Niek; van der Harst, Johanneke; Spruijt, Berry; Võikar, Vootele; Wolfer, David P.; Würbel, Hanno
2011-01-01
In animal experiments, animals, husbandry and test procedures are traditionally standardized to maximize test sensitivity and minimize animal use, assuming that this will also guarantee reproducibility. However, by reducing within-experiment variation, standardization may limit inference to the specific experimental conditions. Indeed, we have recently shown in mice that standardization may generate spurious results in behavioral tests, accounting for poor reproducibility, and that this can be avoided by population heterogenization through systematic variation of experimental conditions. Here, we examined whether a simple form of heterogenization effectively improves reproducibility of test results in a multi-laboratory situation. Each of six laboratories independently ordered 64 female mice of two inbred strains (C57BL/6NCrl, DBA/2NCrl) and examined them for strain differences in five commonly used behavioral tests under two different experimental designs. In the standardized design, experimental conditions were standardized as much as possible in each laboratory, while they were systematically varied with respect to the animals' test age and cage enrichment in the heterogenized design. Although heterogenization tended to improve reproducibility by increasing within-experiment variation relative to between-experiment variation, the effect was too weak to account for the large variation between laboratories. However, our findings confirm the potential of systematic heterogenization for improving reproducibility of animal experiments and highlight the need for effective and practicable heterogenization strategies. PMID:21305027
Supin, Alexander Ya; Nachtigall, Paul E; Breese, Marlee
2007-01-01
False killer whale Pseudorca crassidens auditory brainstem responses (ABR) were recorded using a double-click stimulation paradigm specifically measuring the recovery of the second response (to the test click) as a function of the inter-click interval (ICI) at various levels of the conditioning and test clicks. At all click intensities, the slopes of the recovery functions were almost constant: 0.6-0.8 microV per ICI decade. Therefore, even when the conditioning-to-test-click level ratio was kept constant, the duration of recovery was intensity-dependent: the higher the intensity, the longer the recovery. The conditioning-to-test-click level ratio strongly influenced the recovery time: the higher the ratio, the longer the recovery. The dependence was almost linear on a logarithmic ICI scale, with a rate of 25-30 dB per ICI decade. These data were used for modeling the interaction between the emitted click and the echo during echolocation, assuming that the two clicks simulated the transmitted and echo clicks. This simulation showed that partial masking of the echo by the preceding emitted click may explain the independence of echo-response amplitude from target distance. However, the distance range where this mechanism is effective depends on the emitted click level: the higher the level, the greater the range. © 2007 Acoustical Society of America.
NASA Technical Reports Server (NTRS)
Weatherill, W. H.; Ehlers, F. E.; Yip, E.; Sebastian, J. D.
1980-01-01
Analytical and empirical studies of a finite difference method for the solution of the transonic flow about harmonically oscillating wings and airfoils are presented. The procedure is based on separating the velocity potential into steady and unsteady parts and linearizing the resulting unsteady equations for small disturbances. The steady velocity potential is obtained first from the well-known nonlinear equation for steady transonic flow. The unsteady velocity potential is then obtained from a linear differential equation in complex form with spatially varying coefficients. Since sinusoidal motion is assumed, the unsteady equation is independent of time. An out-of-core direct solution procedure was developed and applied to two-dimensional sections. Results are presented for a section of vanishing thickness in subsonic flow and an NACA 64A006 airfoil in supersonic flow. Good correlation is obtained in the first case at values of Mach number and reduced frequency of direct interest in flutter analyses. Reasonable results are obtained in the second case. Comparisons of two-dimensional finite difference solutions with exact analytic solutions indicate that the accuracy of the difference solution is dependent on the boundary conditions used on the outer boundaries. Homogeneous boundary conditions on the mesh edges that yield complex eigenvalues give the most accurate finite difference solutions. The plane outgoing wave boundary conditions meet these requirements.
When Does Frequency-Independent Selection Maintain Genetic Variation?
Novak, Sebastian; Barton, Nicholas H
2017-10-01
Frequency-independent selection is generally considered as a force that acts to reduce the genetic variation in evolving populations, yet rigorous arguments for this idea are scarce. When selection fluctuates in time, it is unclear whether frequency-independent selection may maintain genetic polymorphism without invoking additional mechanisms. We show that constant frequency-independent selection with arbitrary epistasis on a well-mixed haploid population eliminates genetic variation if we assume linkage equilibrium between alleles. To this end, we introduce the notion of frequency-independent selection at the level of alleles, which is sufficient to prove our claim and contains the notion of frequency-independent selection on haploids. When selection and recombination are weak but of the same order, there may be strong linkage disequilibrium; numerical calculations show that stable equilibria are highly unlikely. Using the example of a diallelic two-locus model, we then demonstrate that frequency-independent selection that fluctuates in time can maintain stable polymorphism if linkage disequilibrium changes its sign periodically. We put our findings in the context of results from the existing literature and point out those scenarios in which the possible role of frequency-independent selection in maintaining genetic variation remains unclear. Copyright © 2017 by the Genetics Society of America.
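A single-locus sketch conveys the central claim (the fitness values here are illustrative, and the paper's result covers arbitrary epistasis under linkage equilibrium): iterating the standard haploid selection recursion under constant frequency-independent selection drives the allele frequency to fixation, eliminating variation.

```python
# Haploid selection recursion for a diallelic locus with constant,
# frequency-independent fitnesses wA and wa (illustrative values):
#   p' = p * wA / (p * wA + (1 - p) * wa)
def iterate(p, wA=1.05, wa=1.0, generations=1000):
    for _ in range(generations):
        p = p * wA / (p * wA + (1 - p) * wa)
    return p

p_final = iterate(0.01)
print(p_final)     # the fitter allele fixes; genetic variation is eliminated
```

Equivalently, the odds p/(1-p) are multiplied by wA/wa every generation, so any constant fitness difference compounds geometrically toward fixation; maintaining polymorphism requires something extra, such as the sign-switching linkage disequilibrium the paper identifies.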
14 CFR 25.481 - Tail-down landing conditions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... landing conditions. (a) In the tail-down attitude, the airplane is assumed to contact the ground at... an attitude corresponding to either the stalling angle or the maximum angle allowing clearance with...
Does proactive interference play a significant role in visual working memory tasks?
Makovski, Tal
2016-10-01
Visual working memory (VWM) is an online memory buffer that is typically assumed to be immune to source memory confusions. Accordingly, the few studies that have investigated the role of proactive interference (PI) in VWM tasks found only a modest PI effect at best. In contrast, a recent study has found a substantial PI effect in that performance in a VWM task was markedly improved when all memory items were unique compared to the more standard condition in which only a limited set of objects was used. The goal of the present study was to reconcile this discrepancy between the findings, and to scrutinize the extent to which PI is involved in VWM tasks. Experiments 1-2 showed that the robust advantage in using unique memory items can also be found in a within-subject design and is largely independent of set size, encoding duration, or intertrial interval. Importantly, however, PI was found mainly when all items were presented at the same location, and the effect was greatly diminished when the items were presented, either simultaneously (Experiment 3) or sequentially (Experiments 4-5), at distinct locations. These results indicate that PI is spatially specific and that without the assistance of spatial information VWM is not protected from PI. Thus, these findings imply that spatial information plays a key role in VWM, and underscore the notion that VWM is more vulnerable to interference than is typically assumed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Caregiving, perceptions of maternal favoritism, and tension among siblings.
Suitor, J Jill; Gilligan, Megan; Johnson, Kaitlin; Pillemer, Karl
2014-08-01
Studies of later-life families have revealed that sibling tension often increases in response to parents' need for care. Both theory and research on within-family differences suggest that when parents' health declines, sibling relations may be affected by which children assume care and whether siblings perceive that the parent favors some offspring over others. In the present study, we explore the ways in which these factors shape sibling tension both independently and in combination during caregiving. In this article, we use data collected from 450 adult children nested within 214 later-life families in which the offspring reported that their mothers needed care within 2 years prior to the interview. Multilevel analyses demonstrated that providing care and perceiving favoritism regarding future caregiving were associated with sibling tension following mothers' major health events. Further, the effects of caregiving on sibling tension were greater when perceptions of favoritism were also present. These findings shed new light on the conditions under which adult children are likely to experience high levels of sibling tension during caregiving. Understanding these processes is important because siblings are typically the individuals to whom caregivers are most likely to turn for support when assuming care of older parents, yet these relationships are often a major source of interpersonal stress. © The Author 2013. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Tanaka, Hiroyoshi
Under the basic tenet that syntactic derivation offers an optimal solution to both the phonological realization and the semantic interpretation of linguistic expressions, the recent minimalist framework of syntactic theory claims that the basic unit of derivation is a syntactic propositional element, which is called a phase. On this analysis, syntactic derivation is assumed to proceed at phasal projections, which include Complementizer Phrases (CP). However, empirical problems have been pointed out concerning the failure of multiple discourse-related elements to occur in the CP domain. This problem can be easily overcome if an alternative approach within the recent minimalist perspective, called the Cartographic CP analysis, is adopted, but this may raise a theoretical issue: the tension between phasality and the four functional projections assumed in that analysis (Force Phrase (ForceP), Finite Phrase (FinP), Topic Phrase (TopP), and Focus Phrase (FocP)). This paper argues that a hybrid of these two influential approaches can be proposed under the reasonable assumption that the syntactically requisite projections (i.e., ForceP and FinP) are phases and independently constitute a phasehood with the relevant heads in the derivation. This then enables us to capture various syntactic properties of the Topicalization construction in English. Our proposed analysis, coupled with some additional assumptions and observations from recent minimalist studies, can be extended to incorporate peculiar properties of temporal/conditional adverbials and imperatives.
NASA Astrophysics Data System (ADS)
Guimarães Nobre, Gabriela; Arnbjerg-Nielsen, Karsten; Rosbjerg, Dan; Madsen, Henrik
2016-04-01
Traditionally, flood risk assessment studies have been carried out from a univariate frequency analysis perspective. However, statistical dependence between hydrological variables, such as extreme rainfall and extreme sea surge, plausibly exists, since both variables are to some extent driven by common meteorological conditions. To overcome this limitation, multivariate statistical techniques have the potential to combine different sources of flooding in the investigation. The aim of this study was to apply a range of statistical methodologies for analyzing combined extreme hydrological variables that can lead to coastal and urban flooding. The study area is the Elwood Catchment, a highly urbanized catchment located in the city of Port Phillip, Melbourne, Australia. The first part of the investigation dealt with the marginal extreme value distributions. Two approaches to extracting extreme value series were applied (Annual Maximum and Partial Duration Series), and different probability distribution functions were fit to the observed samples. Results obtained by using the Generalized Pareto distribution demonstrate the ability of the Pareto family to model the extreme events. Advancing to multivariate extreme value analysis, the asymptotic properties of extremal dependence were investigated first. As a weak positive asymptotic dependence between the bivariate extreme pairs was found, the Conditional method proposed by Heffernan and Tawn (2004) was chosen. This approach is suitable for modeling bivariate extreme values that are relatively unlikely to occur together. The results show that the probability of an extreme sea surge occurring during a one-hour intensity extreme precipitation event (or vice versa) can be twice as great as would be expected when assuming independent events. Therefore, presuming independence between these two variables would result in severe underestimation of the flood risk in the study area.
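The cost of wrongly assuming independence can be illustrated with a toy simulation (Gaussian dependence here is an assumption standing in for the paper's conditional extremes model): when rainfall and surge proxies share a common driver, joint exceedances of their 95th percentiles are several times more likely than the independence product predicts.

```python
import math
import random

random.seed(2)

# Two standard-normal proxies with correlation rho, sharing a common driver z1.
rho, n = 0.5, 200_000
xs, ys = [], []
for _ in range(n):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    xs.append(z1)
    ys.append(rho * z1 + math.sqrt(1 - rho**2) * z2)

qx = sorted(xs)[int(0.95 * n)]        # empirical 95th percentile of each proxy
qy = sorted(ys)[int(0.95 * n)]
p_joint = sum(1 for x, y in zip(xs, ys) if x > qx and y > qy) / n
print(p_joint, 0.05 * 0.05)           # joint probability vs independence product
```

Under independence the joint exceedance probability would be 0.05 × 0.05 = 0.0025; with this moderate positive dependence, the simulated joint probability is several times larger, mirroring the underestimation the study warns about.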
2012-01-01
We derive the mean-field equations arising as the limit of a network of interacting spiking neurons, as the number of neurons goes to infinity. The neurons belong to a fixed number of populations and are represented either by the Hodgkin-Huxley model or by one of its simplified versions, the FitzHugh-Nagumo model. The synapses between neurons are either electrical or chemical. The network is assumed to be fully connected. The maximum conductances vary randomly. Under the condition that all neurons’ initial conditions are drawn independently from the same law that depends only on the population they belong to, we prove that a propagation of chaos phenomenon takes place, namely that in the mean-field limit, any finite number of neurons become independent and, within each population, have the same probability distribution. This probability distribution is a solution of a set of implicit equations, either nonlinear stochastic differential equations resembling the McKean-Vlasov equations or non-local partial differential equations resembling the McKean-Vlasov-Fokker-Planck equations. We prove the well-posedness of the McKean-Vlasov equations, i.e. the existence and uniqueness of a solution. We also show the results of some numerical experiments that indicate that the mean-field equations are a good representation of the mean activity of a finite size network, even for modest sizes. These experiments also indicate that the McKean-Vlasov-Fokker-Planck equations may be a good way to understand the mean-field dynamics through, e.g. a bifurcation analysis. Mathematics Subject Classification (2000): 60F99, 60B10, 92B20, 82C32, 82C80, 35Q80. PMID:22657695
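A linear toy model illustrates the mean-field idea (the dynamics below are an illustrative assumption, far simpler than the Hodgkin-Huxley or FitzHugh-Nagumo networks of the paper): when each of N fully connected units relaxes toward the empirical mean, that mean follows a deterministic ODE up to O(1/√N) noise, so a large but finite network already tracks the mean-field solution.

```python
import math
import random

random.seed(3)

# Each unit obeys dX_i = (a * mean(X) - X_i) dt + sigma dW_i, fully connected.
# The empirical mean then follows dm = (a - 1) m dt + O(1/sqrt(N)) noise.
def simulate(n_units, a=0.5, sigma=0.3, dt=0.01, steps=200, x0=1.0):
    xs = [x0] * n_units
    for _ in range(steps):
        m = sum(xs) / n_units
        xs = [x + (a * m - x) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
              for x in xs]
    return sum(xs) / n_units            # empirical mean at final time

t = 200 * 0.01
mean_field = 1.0 * math.exp((0.5 - 1.0) * t)   # exact mean-field solution m(t)
print(simulate(2000), mean_field)               # close for a large network
```

This mirrors the paper's numerical finding that the mean-field equations represent the mean activity of a finite network well even at modest sizes; here the fluctuation of the empirical mean around the ODE solution shrinks like 1/√N.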
What dynamics can be expected for mixed states in two-slit experiments?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luis, Alfredo; Sanz, Ángel S., E-mail: asanz@iff.csic.es
2015-06-15
Weak-measurement-based experiments (Kocsis et al., 2011) have shown that, at least for pure states, the average evolution of independent photons in Young’s two-slit experiment is in compliance with the trajectories prescribed by the Bohmian formulation of quantum mechanics. But, what happens if the same experiment is repeated assuming that the wave function associated with each particle is different, i.e., in the case of mixed (incoherent) states? This question is investigated here by means of two alternative numerical simulations of Young’s experiment, purposely devised to be easily implemented and tested in the laboratory. Contrary to what could be expected a priori, it is found that even for conditions of maximal mixedness or incoherence (total lack of interference fringes), experimental data will render a puzzling and challenging outcome: the average particle trajectories will still display features analogous to those for pure states, i.e., independently of how mixedness arises, the associated dynamics is influenced by both slits at the same time. Physically this simply means that weak measurements are not able to discriminate how mixedness arises in the experiment, since they only provide information about the averaged system dynamics. Highlights: • The dynamics associated with mixed states is investigated by means of two simple Young’s two-slit models. • The models are prepared to be easily implemented and tested in the laboratory by means of weak measurements. • Bohmian mechanics has been generalized to encompass statistical mixtures. • Even for conditions of maximal mixedness, numerical simulations show that the dynamics is strongly influenced by both slits. • Accordingly, weak measurements are unable to discriminate how mixedness arises in an experiment.
Papadatos, George; Alkarouri, Muhammad; Gillet, Valerie J; Willett, Peter; Kadirkamanathan, Visakan; Luscombe, Christopher N; Bravi, Gianpaolo; Richmond, Nicola J; Pickett, Stephen D; Hussain, Jameed; Pritchard, John M; Cooper, Anthony W J; Macdonald, Simon J F
2010-10-25
Previous studies of the analysis of molecular matched pairs (MMPs) have often assumed that the effect of a substructural transformation on a molecular property is independent of the context (i.e., the local structural environment in which that transformation occurs). Experiments with large sets of hERG, solubility, and lipophilicity data demonstrate that the inclusion of contextual information can enhance the predictive power of MMP analyses, with significant trends (both positive and negative) being identified that are not apparent when using conventional, context-independent approaches.
Calibration of a universal indicated turbulence system
NASA Technical Reports Server (NTRS)
Chapin, W. G.
1977-01-01
Theoretical and experimental work on a Universal Indicated Turbulence Meter is described. A mathematical transfer function from turbulence input to output indication was developed. A random ergodic process and a Gaussian turbulence distribution were assumed. A calibration technique based on this transfer function was developed. The computer contains a variable gain amplifier to make the system output independent of average velocity. The range over which this independence holds was determined. An optimum dynamic response was obtained for the tubulation between the system pitot tube and pressure transducer by making dynamic response measurements for orifices of various lengths and diameters at the source end.
Psychological reactions to continuous ambulatory peritoneal dialysis.
Geiser, M T; Van Dyke, C; East, R; Weiner, M
The first twenty patients who entered our continuous ambulatory peritoneal dialysis (CAPD) program from March, 1979 to February, 1981 were interviewed to assess their psychological reactions to CAPD. Six patients were successfully maintained on CAPD for more than one year. CAPD provided patients with a greater sense of well-being, strength, and independence. This independence required adherence to a strict schedule of exchanges. Reactions to the loss of CAPD followed the pattern of a grief reaction. Those patients who were self-disciplined and comfortable assuming active control of their health care proved to be the best candidates for CAPD.
Pressure independence of granular flow through an aperture.
Aguirre, M A; Grande, J G; Calvo, A; Pugnaloni, L A; Géminard, J-C
2010-06-11
We experimentally demonstrate that the flow rate of granular material through an aperture is controlled by the exit velocity imposed on the particles and not by the pressure at the base, contrary to what is often assumed in previous work. This result is achieved by studying the discharge process of a dense packing of monosized disks through an orifice. The flow is driven by a conveyor belt. This two-dimensional horizontal setup allows us to independently control the velocity at which the disks escape the horizontal silo and the pressure in the vicinity of the aperture. The flow rate is found to be proportional to the belt velocity, independent of the amount of disks in the container and, thus, independent of the pressure in the outlet region. In addition, this specific configuration makes it possible to get information on the system dynamics from a single image of the disks that rest on the conveyor belt after the discharge.
Conditional Covariance Theory and Detect for Polytomous Items
ERIC Educational Resources Information Center
Zhang, Jinming
2007-01-01
This paper extends the theory of conditional covariances to polytomous items. It has been proven that under some mild conditions, commonly assumed in the analysis of response data, the conditional covariance of two items, dichotomously or polytomously scored, given an appropriately chosen composite is positive if, and only if, the two items…
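A minimal numerical version of a conditional covariance estimator for polytomous items might look as follows; the simulated unidimensional item model and the rest-score composite are hypothetical illustrations, not the paper's estimator:

```python
import random
from collections import defaultdict

random.seed(7)

def item_score(theta, noise=0.8, cats=4):
    """Polytomous item score 0..cats-1 driven by a single latent trait."""
    z = theta + random.gauss(0, noise)
    return max(0, min(cats - 1, round(z + (cats - 1) / 2)))

n_items, n_persons = 10, 5000
data = []
for _ in range(n_persons):
    theta = random.gauss(0, 1)
    data.append([item_score(theta) for _ in range(n_items)])

def cov(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def conditional_covariance(data, i, j):
    """Covariance of items i and j given the rest-score composite,
    averaged over composite groups (weighted by group size)."""
    groups = defaultdict(list)
    for resp in data:
        rest = sum(resp) - resp[i] - resp[j]  # composite excluding items i, j
        groups[rest].append((resp[i], resp[j]))
    total, acc = 0, 0.0
    for pairs in groups.values():
        if len(pairs) < 2:
            continue
        xs, ys = zip(*pairs)
        acc += len(pairs) * cov(xs, ys)
        total += len(pairs)
    return acc / total

print(conditional_covariance(data, 0, 1))
```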
14 CFR 23.485 - Side load conditions.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Side load conditions. 23.485 Section 23.485... STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Structure Ground Loads § 23.485 Side load conditions. (a) For the side load condition, the airplane is assumed to be in a level attitude...
14 CFR 23.485 - Side load conditions.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Side load conditions. 23.485 Section 23.485... STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Structure Ground Loads § 23.485 Side load conditions. (a) For the side load condition, the airplane is assumed to be in a level attitude...
14 CFR 23.485 - Side load conditions.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Side load conditions. 23.485 Section 23.485... STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Structure Ground Loads § 23.485 Side load conditions. (a) For the side load condition, the airplane is assumed to be in a level attitude...
14 CFR 25.483 - One-gear landing conditions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false One-gear landing conditions. 25.483 Section... AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Structure Ground Loads § 25.483 One-gear landing conditions. For the one-gear landing conditions, the airplane is assumed to be in the level attitude and to...
Parameter identifiability of linear dynamical systems
NASA Technical Reports Server (NTRS)
Glover, K.; Willems, J. C.
1974-01-01
It is assumed that the system matrices of a stationary linear dynamical system are parametrized by a set of unknown parameters. The question considered here is: when can such a set of unknown parameters be identified from the observed data? Conditions for the local identifiability of a parametrization are derived in three situations: (1) when input/output observations are made, (2) when there exists an unknown feedback matrix in the system and (3) when the system is assumed to be driven by white noise and only output observations are made. Also a sufficient condition for global identifiability is derived.
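A numerical check of local identifiability can be sketched by testing whether the Jacobian of the input/output (Markov) parameters with respect to the unknown parameters has full column rank. The toy first-order system below is hypothetical (not from the paper) and deliberately unidentifiable, because b and c enter the input/output behaviour only through their product:

```python
# Toy system  x' = -a*x + b*u,  y = c*x,  parameters p = (a, b, c).
# Markov parameters: h_k = c * (-a)^k * b, so only the product b*c is visible.

def markov(p, n=3):
    a, b, c = p
    return [c * (-a) ** k * b for k in range(n)]

def jacobian(f, p, eps=1e-6):
    """Forward-difference Jacobian of f at p (rows: outputs, cols: parameters)."""
    f0 = f(list(p))
    cols = []
    for i in range(len(p)):
        q = list(p)
        q[i] += eps
        cols.append([(y - y0) / eps for y, y0 in zip(f(q), f0)])
    return [list(row) for row in zip(*cols)]

def gram_det(J):
    """det(J^T J); a value near zero flags rank deficiency,
    i.e. local unidentifiability of the parametrization."""
    G = [[sum(J[k][i] * J[k][j] for k in range(len(J))) for j in range(3)]
         for i in range(3)]
    return (G[0][0] * (G[1][1] * G[2][2] - G[1][2] * G[2][1])
            - G[0][1] * (G[1][0] * G[2][2] - G[1][2] * G[2][0])
            + G[0][2] * (G[1][0] * G[2][1] - G[1][1] * G[2][0]))

d = gram_det(jacobian(markov, (0.7, 1.3, 2.1)))
print(d)  # near zero: (a, b, c) is not locally identifiable
```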
NASA Technical Reports Server (NTRS)
Prasad, C. B.; Mei, Chuh
1988-01-01
The large deflection random response of symmetrically laminated cross-ply rectangular thin plates subjected to random excitation is studied. The out-of-plane boundary conditions are such that all the edges are rigidly supported against translation, but elastically restrained against rotation. The plate is also assumed to have a small initial imperfection. The assumed membrane boundary conditions are such that all the edges are free from normal and tangential forces in the plane of the plate. Mean-square deflections and mean-square strains are determined for a three-layered cross-ply laminate.
Ellis, John; Evans, Jason L.; Nagata, Natsumi; ...
2017-04-12
We reconsider the minimal SU( 5) grand unified theory (GUT) in the context of no-scale supergravity inspired by string compactification scenarios, assuming that the soft supersymmetry-breaking parameters satisfy universality conditions at some input scale M in above the GUT scale M GUT. When setting up such a no-scale super-GUT model, special attention must be paid to avoiding the Scylla of rapid proton decay and the Charybdis of an excessive density of cold dark matter, while also having an acceptable mass for the Higgs boson. Furthermore, we do not find consistent solutions if none of the matter and Higgs fields aremore » assigned to twisted chiral supermultiplets, even in the presence of Giudice–Masiero terms. But, consistent solutions may be found if at least one fiveplet of GUT Higgs fields is assigned to a twisted chiral supermultiplet, with a suitable choice of modular weights. Spin-independent dark matter scattering may be detectable in some of these consistent solutions.« less
Normal forms of Hopf-zero singularity
NASA Astrophysics Data System (ADS)
Gazor, Majid; Mokhtari, Fahimeh
2015-01-01
The Lie algebra generated by Hopf-zero classical normal forms is decomposed into two versal Lie subalgebras. Some dynamical properties for each subalgebra are described; one is the set of all volume-preserving conservative systems while the other is the maximal Lie algebra of nonconservative systems. This introduces a unique conservative-nonconservative decomposition for the normal form systems. There exists a Lie-subalgebra that is Lie-isomorphic to a large family of vector fields with Bogdanov-Takens singularity. This gives rise to the conclusion that the local dynamics of formal Hopf-zero singularities is well-understood by the study of Bogdanov-Takens singularities. Despite this, the normal form computations of Bogdanov-Takens and Hopf-zero singularities are independent. Thus, by assuming a quadratic nonzero condition, complete results on the simplest Hopf-zero normal forms are obtained in terms of the conservative-nonconservative decomposition. Some practical formulas are derived and the results are implemented using Maple. The method has been applied to the Rössler and Kuramoto-Sivashinsky equations to demonstrate the applicability of our results.
Diffuse-interface model for rapid phase transformations in nonequilibrium systems.
Galenko, Peter; Jou, David
2005-04-01
A thermodynamic approach to rapid phase transformations within a diffuse interface in a binary system is developed. Assuming an extended set of independent thermodynamic variables formed by the union of the classic set of slow variables and the space of fast variables, we introduce finiteness of the heat and solute diffusive propagation at the finite speed of the interface advancing. To describe transformations within the diffuse interface, we use the phase-field model which allows us to follow steep but smooth changes of phase within the width of the diffuse interface. Governing equations of the phase-field model are derived for the hyperbolic model, a model with memory, and a model of nonlinear evolution of transformation within the diffuse interface. The consistency of the model is proved by the verification of the validity of the condition of positive entropy production and by outcomes of the fluctuation-dissipation theorem. A comparison with existing sharp-interface and diffuse-interface versions of the model is given.
General equilibrium characteristics of a dual-lift helicopter system
NASA Technical Reports Server (NTRS)
Cicolani, L. S.; Kanning, G.
1986-01-01
The equilibrium characteristics of a dual-lift helicopter system are examined. The system consists of the cargo attached by cables to the endpoints of a spreader bar which is suspended by cables below two helicopters. Results are given for the orientation angles of the suspension system and its internal forces, and for the helicopter thrust vector requirements under general circumstances, including nonidentical helicopters, any accelerating or static equilibrium reference flight condition, any system heading relative to the flight direction, and any distribution of the load to the two helicopters. Optimum tether angles which minimize the sum of the required thrust magnitudes are also determined. The analysis does not consider the attitude degrees of freedom of the load and helicopters in detail, but assumes that these bodies are stable, and that their aerodynamic forces in equilibrium flight can be determined independently as functions of the reference trajectory. The ranges of these forces for sample helicopters and loads are examined and their effects on the equilibrium characteristics are given parametrically in the results.
Additional extensions to the NASCAP computer code, volume 3
NASA Technical Reports Server (NTRS)
Mandell, M. J.; Cooke, D. L.
1981-01-01
The ION computer code is designed to calculate charge exchange ion densities, electric potentials, plasma temperatures, and current densities external to a neutralized ion engine in R-Z geometry. The present version assumes the beam ion current and density to be known and specified, and the neutralizing electrons to originate from a hot-wire ring surrounding the beam orifice. The plasma is treated as being resistive, with an electron relaxation time comparable to the plasma frequency. Together with the thermal and electrical boundary conditions described below and other straightforward engine parameters, these assumptions suffice to determine the required quantities. The ION code, written in ASCII FORTRAN for UNIVAC 1100 series computers, is designed to be run interactively, although it can also be run in batch mode. The input is free-format, and the output is mainly graphical, using the machine-independent graphics developed for the NASCAP code. The executive routine calls the code's major subroutines in user-specified order, and the code allows great latitude for restart and parameter change.
An analysis of FtsZ assembly using small angle X-ray scattering and electron microscopy.
Kuchibhatla, Anuradha; Abdul Rasheed, A S; Narayanan, Janaky; Bellare, Jayesh; Panda, Dulal
2009-04-09
Small angle X-ray scattering (SAXS) was used for the first time to study the self-assembly of the bacterial cell division protein, FtsZ, with three different additives: calcium chloride, monosodium glutamate and DEAE-dextran hydrochloride in solution. The SAXS data were analyzed assuming a model form factor and also by a model-independent analysis using the pair distance distribution function. Transmission electron microscopy (TEM) was used for direct observation of the FtsZ filaments. By sectioning and negative staining with glow discharged grids, very high bundling as well as low bundling polymers were observed under different assembly conditions. FtsZ polymers formed different structures in the presence of different additives and these additives were found to increase the bundling of FtsZ protofilaments by different mechanisms. The combined use of SAXS and TEM provided significant insight into the assembly of FtsZ and the microstructures of the assembled FtsZ polymers.
Control Augmented Structural Synthesis
NASA Technical Reports Server (NTRS)
Lust, Robert V.; Schmit, Lucien A.
1988-01-01
A methodology for control augmented structural synthesis is proposed for a class of structures which can be modeled as an assemblage of frame and/or truss elements. It is assumed that both the plant (structure) and the active control system dynamics can be adequately represented with a linear model. The structural sizing variables, active control system feedback gains and nonstructural lumped masses are treated simultaneously as independent design variables. Design constraints are imposed on static and dynamic displacements, static stresses, actuator forces and natural frequencies to ensure acceptable system behavior. Multiple static and dynamic loading conditions are considered. Side constraints imposed on the design variables protect against the generation of unrealizable designs. While the proposed approach is fundamentally more general, here the methodology is developed and demonstrated for the case where: (1) the dynamic loading is harmonic and thus the steady state response is of primary interest; (2) direct output feedback is used for the control system model; and (3) the actuators and sensors are collocated.
Mantle downwelling and crustal convergence - A model for Ishtar Terra, Venus
NASA Technical Reports Server (NTRS)
Kiefer, Walter S.; Hager, Bradford H.
1991-01-01
Models of viscous crustal flow driven by gradients in topography are presented in order to explore quantitatively the implications of the hypothesis that Ishtar is a crustal convergence zone overlying a downwelling mantle. Assuming a free-slip surface boundary condition, it is found that, if the crustal convergence hypothesis is correct, then the crustal thickness in the plains surrounding Ishtar can be no more than about 25 km thick. If the geothermal gradient is larger or the rheology is weaker, the crust must be even thinner for net crustal convergence to be possible. This upper bound is in good agreement with the several independent estimates of crustal thickness of 15-30 km in the plains of Venus based on modeling of the spacing of tectonic features and of impact crater relaxation. Although Ishtar is treated as a crustal convergence zone, this crustal flow model shows that under some circumstances, near-surface material may actually flow away from Ishtar, providing a possible explanation for the grabenlike structures in Fortuna Tessera.
Dynamo magnetic field modes in thin astrophysical disks - An adiabatic computational approximation
NASA Technical Reports Server (NTRS)
Stepinski, T. F.; Levy, E. H.
1991-01-01
An adiabatic approximation is applied to the calculation of turbulent MHD dynamo magnetic fields in thin disks. The adiabatic method is employed to investigate conditions under which magnetic fields generated by disk dynamos permeate the entire disk or are localized to restricted regions of a disk. Two specific cases of Keplerian disks are considered. In the first, magnetic field diffusion is assumed to be dominated by turbulent mixing leading to a dynamo number independent of distance from the center of the disk. In the second, the dynamo number is allowed to vary with distance from the disk's center. Localization of dynamo magnetic field structures is found to be a general feature of disk dynamos, except in the special case of stationary modes in dynamos with constant dynamo number. The implications for the dynamical behavior of dynamo magnetized accretion disks are discussed and the results of these exploratory calculations are examined in the context of the protosolar nebula and accretion disks around compact objects.
Baker, R.J.; Baehr, A.L.; Lahvis, M.A.
2000-01-01
An open microcosm method for quantifying microbial respiration and estimating biodegradation rates of hydrocarbons in gasoline-contaminated sediment samples has been developed and validated. Stainless-steel bioreactors are filled with soil or sediment samples, and the vapor-phase composition (concentrations of oxygen (O2), nitrogen (N2), carbon dioxide (CO2), and selected hydrocarbons) is monitored over time. Replacement gas is added as the vapor sample is taken, and selection of the replacement gas composition facilitates real-time decision-making regarding environmental conditions within the bioreactor. This capability allows for maintenance of field conditions over time, which is not possible in closed microcosms. Reaction rates of CO2 and O2 are calculated from the vapor-phase composition time series. Rates of hydrocarbon biodegradation are either measured directly from the hydrocarbon mass balance, or estimated from CO2 and O2 reaction rates and assumed reaction stoichiometries. Open microcosm experiments using sediments spiked with toluene and p-xylene were conducted to validate the stoichiometric assumptions. Respiration rates calculated from O2 consumption and from CO2 production provide estimates of toluene and p-xylene degradation rates within about ±50% of measured values when complete mineralization stoichiometry is assumed. Measured values ranged from 851.1 to 965.1 g m-3 year-1 for toluene, and 407.2-942.3 g m-3 year-1 for p-xylene. Contaminated sediment samples from a gasoline-spill site were used in a second set of microcosm experiments. Here, reaction rates of O2 and CO2 were measured and used to estimate hydrocarbon respiration rates. Total hydrocarbon reaction rates ranged from 49.0 g m-3 year-1 in uncontaminated (background) sediment to 1040.4 g m-3 year-1 for highly contaminated sediment, based on CO2 production data.
These rate estimates were similar to those obtained independently from in situ CO2 vertical gradient and flux determinations at the field site. In these experiments, aerobic conditions were maintained in the microcosms by using air as the replacement gas, thus preserving the ambient aerobic environment of the subsurface near the capillary zone. This would not be possible with closed microcosms.
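The stoichiometric conversion from CO2 production to a hydrocarbon degradation rate can be sketched as follows, treating the hydrocarbon as toluene and assuming complete mineralization (C7H8 + 9 O2 → 7 CO2 + 4 H2O); this is an illustrative simplification of the mass balance used above:

```python
MW_CO2, MW_TOLUENE = 44.01, 92.14  # molar masses, g/mol

def toluene_rate_from_co2(co2_rate):
    """Toluene degradation rate (g m^-3 year^-1) implied by a CO2 production
    rate, at 7 mol CO2 per mol toluene under complete mineralization."""
    return co2_rate / MW_CO2 / 7.0 * MW_TOLUENE

# Example: the highly contaminated sediment value reported above.
print(round(toluene_rate_from_co2(1040.4), 1))  # about 311 g m^-3 year^-1
```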
Stimulus information contaminates summation tests of independent neural representations of features
NASA Technical Reports Server (NTRS)
Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.
2002-01-01
Many models of visual processing assume that visual information is analyzed into separable and independent neural codes, or features. A common psychophysical test of independent features is known as a summation study, which measures performance in a detection, discrimination, or visual search task as the number of proposed features increases. Improvement in human performance with increasing number of available features is typically attributed to the summation, or combination, of information across independent neural coding of the features. In many instances, however, increasing the number of available features also increases the stimulus information in the task, as assessed by an optimal observer that does not include the independent neural codes. In a visual search task with spatial frequency and orientation as the component features, a particular set of stimuli were chosen so that all searches had equivalent stimulus information, regardless of the number of features. In this case, human performance did not improve with increasing number of features, implying that the improvement observed with additional features may be due to stimulus information and not the combination across independent features.
The double burden of undernutrition and excess body weight in Mexico.
Kroker-Lobos, Maria F; Pedroza-Tobías, Andrea; Pedraza, Lilia S; Rivera, Juan A
2014-12-01
In Mexico, stunting and anemia have declined but are still high in some regions and subpopulations, whereas overweight and obesity have increased at alarming rates in all age and socioeconomic groups. The objective was to describe the coexistence of stunting, anemia, and overweight and obesity at the national, household, and individual levels. We estimated national prevalences of and trends for stunting, anemia, and overweight and obesity in children aged <5 y and in school-aged children (5-11 y old) and anemia and overweight and obesity in women aged 20-49 y by using the National Health and Nutrition Surveys conducted in 1988, 1999, 2006, and 2012. With the use of the most recent data (2012), the double burden of malnutrition at the household level was estimated and defined as the coexistence of stunting in children aged <5 y and overweight or obesity in the mother. At the individual level, double burden was defined as concurrent stunting and overweight and obesity in children aged 5-11 y and concurrent anemia and overweight or obesity in children aged 5-11 y and in women. We also tested if the coexistence of the conditions corresponded to expected values, under the assumption of independent distributions of each condition. At the household level, the prevalence of concurrent stunting in children aged <5 y and overweight and obesity in mothers was 8.4%; at the individual level, prevalences were 1% for stunting and overweight or obesity and 2.9% for anemia and overweight or obesity in children aged 5-11 y and 7.6% for anemia and overweight or obesity in women. At the household and individual levels in children aged 5-11 y, prevalences of double burden were significantly lower than expected, whereas anemia and the prevalence of overweight or obesity in women were not different from that expected. 
Although some prevalences of double burden were lower than expected, assuming independent distributions of the 2 conditions, the coexistence of stunting, overweight or obesity, and anemia at the national, household, and intraindividual levels in Mexico calls for policies and programs to prevent the 3 conditions. © 2014 American Society for Nutrition.
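The independence benchmark used above reduces to comparing an observed joint prevalence with the product of the marginal prevalences; a sketch with hypothetical figures (not the survey's actual marginals):

```python
import math

def expected_joint(p1, p2):
    """Joint prevalence expected if the two conditions occur independently."""
    return p1 * p2

def z_obs_vs_expected(p_obs, p_exp, n):
    """Approximate z statistic for an observed prevalence against the
    independence-expected value, using a normal approximation."""
    se = math.sqrt(p_exp * (1 - p_exp) / n)
    return (p_obs - p_exp) / se

# Hypothetical numbers: 14% stunting in under-5s, 35% maternal
# overweight/obesity, observed joint prevalence 8.4%, 2000 households.
p_exp = expected_joint(0.14, 0.35)
print(p_exp, z_obs_vs_expected(0.084, p_exp, 2000))
```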
14 CFR 29.481 - Tail-down landing conditions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Tail-down landing conditions. 29.481 Section 29.481 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION... Tail-down landing conditions. (a) The rotorcraft is assumed to be in the maximum nose-up attitude...
14 CFR 27.481 - Tail-down landing conditions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Tail-down landing conditions. 27.481 Section 27.481 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION... Tail-down landing conditions. (a) The rotorcraft is assumed to be in the maximum nose-up attitude...
Fujita-Sato, Saori; Galeas, Jacqueline; Truitt, Morgan; Pitt, Cameron; Urisman, Anatoly; Bandyopadhyay, Sourav; Ruggero, Davide; McCormick, Frank
2015-07-15
Oncogenic K-Ras mutation occurs frequently in several types of cancers, including pancreatic and lung cancers. Tumors with K-Ras mutation are resistant to chemotherapeutic drugs as well as molecular targeting agents. Although numerous approaches are ongoing to find effective ways to treat these tumors, there are still no effective therapies for K-Ras mutant cancer patients. Here we report that K-Ras mutant cancers are more dependent on K-Ras in anchorage-independent culture conditions than in monolayer culture conditions. In seeking to determine mechanisms that contribute to the K-Ras dependency in anchorage-independent culture conditions, we discovered the involvement of Met in K-Ras-dependent, anchorage-independent cell growth. The Met signaling pathway is enhanced and plays an indispensable role in anchorage-independent growth even in cells in which Met is not amplified. Indeed, Met expression is elevated under anchorage-independent growth conditions and is regulated by K-Ras in a MAPK/ERK kinase (MEK)-dependent manner. Remarkably, in spite of a global downregulation of mRNA translation during anchorage-independent growth, we find that Met mRNA translation is specifically enhanced under these conditions. Importantly, ectopic expression of an active Met mutant rescues K-Ras ablation-derived growth suppression, indicating that K-Ras-mediated Met expression drives "K-Ras addiction" in anchorage-independent conditions. Our results indicate that enhanced Met expression and signaling is essential for anchorage-independent growth of K-Ras mutant cancer cells and suggest that pharmacological inhibitors of Met could be effective for K-Ras mutant tumor patients. ©2015 American Association for Cancer Research.
Modelling the Stoichiometric Regulation of C-Rich Toxins in Marine Dinoflagellates.
Pinna, Adriano; Pezzolesi, Laura; Pistocchi, Rossella; Vanucci, Silvana; Ciavatta, Stefano; Polimene, Luca
2015-01-01
Toxin production in marine microalgae was previously shown to be tightly coupled with cellular stoichiometry. The highest values of cellular toxin are in fact mainly associated with a high carbon to nutrient cellular ratio. In particular, the cellular accumulation of C-rich toxins (i.e., with C:N > 6.6) can be stimulated by both N and P deficiency. Dinoflagellates are the main producers of C-rich toxins and may represent a serious threat for human health and the marine ecosystem. As such, the development of a numerical model able to predict how toxin production is stimulated by nutrient supply/deficiency is of primary utility for both scientific and management purposes. In this work we have developed a mechanistic model describing the stoichiometric regulation of C-rich toxins in marine dinoflagellates. To this purpose, a new formulation describing toxin production and fate was embedded in the European Regional Seas Ecosystem Model (ERSEM), here simplified to describe a monospecific batch culture. Toxin production was assumed to be composed of two distinct additive terms; the first is a constant fraction of algal production and is assumed to take place under any physiological conditions. The second term is assumed to be dependent on algal biomass and to be stimulated by internal nutrient deficiency. By using these assumptions, the model reproduced the concentrations and temporal evolution of toxins observed in cultures of Ostreopsis cf. ovata, a benthic/epiphytic dinoflagellate producing C-rich toxins named ovatoxins. The analysis of simulations and their comparison with experimental data provided a conceptual model linking toxin production and nutritional status in this species. The model was also qualitatively validated by using independent literature data, and the results indicate that our formulation can also be used to simulate toxin dynamics in other dinoflagellates.
Our model represents an important step towards the simulation and prediction of marine algal toxicity.
Vitrac, Olivier; Leblanc, Jean-Charles
2007-02-01
A generic methodology for the assessment of consumer exposure to substances migrating from packaging materials into foodstuffs during storage is presented. Consumer exposure at the level of individual households is derived from the probabilistic modeling of the contamination of all packed food product units (e.g. yogurt pot, milk bottle, etc.) consumed by a given household over 1 year. Exposure of a given population is estimated by combining the exposure distributions of individual households with suitable weights (conveniently, household sizes). Calculations are made by combining (i) an efficient resolution of migration models and (ii) a methodology that accounts for different sources of uncertainty and variability. The full procedure was applied to the assessment of consumer exposure to styrene from yogurt pots based on yearly purchase data of more than 5400 households in France (about 2 million yogurt pots) and an initial concentration c0 of styrene in yogurt pot walls, which is assumed to be normally distributed with an average value of 500 mg kg-1 and a standard deviation of 150 mg kg-1. Results are discussed regarding both sensitivity of the migration model to boundary conditions and household practices. By assuming a partition coefficient of 1 and a Biot number of 100, the estimated median household exposure to styrene ranged between 1 and 35 microg day-1 person-1 (5th and 95th percentiles) with a likely value of 12 microg day-1 person-1 (50th percentile). It was found that exposure does not vary independently with the average consumption rate and contact times. Thus, falsely assuming a uniform contact time equal to the sell-by date for all yogurts significantly overestimates the daily exposure (5th and 95th percentiles of 2 and 110 microg day-1 person-1, respectively) since high consumers showed a quicker turnover of stock.
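The probabilistic combination described above can be illustrated with a minimal Monte Carlo sketch. This is not the authors' migration model: the consumption rates, migrated fractions, and pot wall mass below are hypothetical placeholders; only the normal initial-concentration distribution (500 ± 150 mg kg-1) is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n_households = 5000

pots_per_day = rng.gamma(shape=2.0, scale=0.5, size=n_households)  # hypothetical consumption rates
c0 = rng.normal(500.0, 150.0, size=n_households)                   # initial styrene concentration, mg/kg (from abstract)
c0 = np.clip(c0, 0.0, None)
migrated_fraction = rng.uniform(0.001, 0.01, size=n_households)    # hypothetical fraction of styrene migrating into food
pot_wall_mass = 0.005                                              # kg of polymer per pot, assumed

# Daily per-person exposure in micrograms (mg -> ug conversion via 1e3)
exposure_ug = pots_per_day * c0 * migrated_fraction * pot_wall_mass * 1e3

p5, p50, p95 = np.percentile(exposure_ug, [5, 50, 95])
print(f"5th/50th/95th percentile exposure: {p5:.1f}/{p50:.1f}/{p95:.1f} ug/day/person")
```

The point of the sketch is the structure, not the numbers: each household gets its own draw of every uncertain input, and population percentiles are read off the resulting exposure distribution.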
The Impact of Storage on Processing: How Is Information Maintained in Working Memory?
ERIC Educational Resources Information Center
Vergauwe, Evie; Camos, Valérie; Barrouillet, Pierre
2014-01-01
Working memory is typically defined as a system devoted to the simultaneous maintenance and processing of information. However, the interplay between these 2 functions is still a matter of debate in the literature, with views ranging from complete independence to complete dependence. The time-based resource-sharing model assumes that a central…
Analysis of thrips distribution: application of spatial statistics and Kriging
John Aleong; Bruce L. Parker; Margaret Skinner; Diantha Howard
1991-01-01
Kriging is a statistical technique that provides predictions for spatially and temporally correlated data. Observations of thrips distribution and density in Vermont soils are made in both space and time. Traditional statistical analysis of such data assumes that the counts taken over space and time are independent, which is not necessarily true. Therefore, to analyze...
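A minimal ordinary-kriging sketch conveys the idea of spatially correlated prediction. The exponential covariance, its range and sill, and the toy thrips counts are all assumed for illustration; a real analysis would fit a variogram to the observed data.

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, range_=1.0, sill=1.0):
    """Predict z at location xy0 by ordinary kriging with an assumed exponential covariance."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return sill * np.exp(-d / range_)

    n = len(z)
    # Kriging system: covariance matrix augmented with the unbiasedness constraint
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(xy, xy)
    K[n, n] = 0.0
    k = np.ones(n + 1)
    k[:n] = cov(xy, np.atleast_2d(xy0))[:, 0]
    w = np.linalg.solve(K, k)          # kriging weights plus a Lagrange multiplier
    return float(w[:n] @ z)

# Toy thrips counts at four plot corners; predict density at the plot centre
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
counts = np.array([2.0, 4.0, 6.0, 8.0])
pred = ordinary_kriging(pts, counts, [0.5, 0.5])
print(pred)  # symmetric layout -> equal weights -> 5.0
```

Because the centre is equidistant from all four corners, the weights are equal and the prediction reduces to the simple average; with irregular spacing the weights would differ, which is exactly what distinguishes kriging from naive averaging.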
ERIC Educational Resources Information Center
Goschke, Thomas; Bolte, Annette
2012-01-01
Learning sequential structures is of fundamental importance for a wide variety of human skills. While it has long been debated whether implicit sequence learning is perceptual or response-based, here we propose an alternative framework that cuts across this dichotomy and assumes that sequence learning rests on associative changes that can occur…
ERIC Educational Resources Information Center
Choo, Suzanne S.
2014-01-01
When world literature as a subject was introduced to schools and colleges in the United States during the 1920s, its early curriculum was premised on the notion of bounded territoriality which assumes that identities of individuals, cultures, and nation-states are fixed, determinable, and independent. The intensification of global mobility in an…
ERIC Educational Resources Information Center
Henry, Kimberly L.; Muthen, Bengt
2010-01-01
Latent class analysis (LCA) is a statistical method used to identify subtypes of related cases using a set of categorical or continuous observed variables. Traditional LCA assumes that observations are independent. However, multilevel data structures are common in social and behavioral research and alternative strategies are needed. In this…
What Does the Brain Tell Us about the Mind?
ERIC Educational Resources Information Center
Ruz, Maria; Acero, Juan J.; Tudela, Pio
2006-01-01
The present paper explores the relevance that brain data have in constructing theories about the human mind. In the Cognitive Science era it was assumed that knowledge of the mind and the brain correspond to different levels of analysis. This independence among levels led to the epistemic argument that knowledge of the biological basis of…
ERIC Educational Resources Information Center
Meijer, Joost; Veenman, Marcel V. J.; van Hout-Wolters, Bernadette
2012-01-01
Studies about metacognition, intelligence and learning have rendered equivocal results. The mixed model assumes joint as well as independent influences of intelligence and metacognition on learning results. In this study, intelligence was measured by standard tests for reasoning, spatial ability and memory. Participants were 13-year-old school…
ERIC Educational Resources Information Center
Boerma, Tessel; Leseman, Paul; Timmermeister, Mona; Wijnen, Frank; Blom, Elma
2016-01-01
Background: Understanding and expressing a narrative's macro-structure is relatively independent of experience in a specific language. A narrative task is therefore assumed to be a less biased method of language assessment for bilingual children than many other norm-referenced tests and may thus be particularly valuable to identify language…
A hierarchical linear model for tree height prediction.
Vicente J. Monleon
2003-01-01
Measuring tree height is a time-consuming process. Often, tree diameter is measured and height is estimated from a published regression model. Trees used to develop these models are clustered into stands, but this structure is ignored and independence is assumed. In this study, hierarchical linear models that account explicitly for the clustered structure of the data...
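The effect of ignoring stand clustering can be shown with a small simulation. This is a numpy-only sketch, not Monleon's model: the stand effects, diameter range, and coefficients are invented, and a fixed intercept per stand stands in for a true hierarchical (random-effects) fit.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: trees clustered in stands, height depends on diameter
n_stands, trees_per_stand = 10, 20
stand_effect = rng.normal(0.0, 3.0, n_stands)             # stand-level intercept shifts
stand = np.repeat(np.arange(n_stands), trees_per_stand)
dbh = rng.uniform(10.0, 50.0, n_stands * trees_per_stand)  # diameter at breast height, cm
height = 5.0 + 0.4 * dbh + stand_effect[stand] + rng.normal(0.0, 1.0, dbh.size)

# Pooled fit: independence assumed across all trees
X = np.column_stack([np.ones_like(dbh), dbh])
beta_pooled, *_ = np.linalg.lstsq(X, height, rcond=None)
rss_pooled = np.sum((height - X @ beta_pooled) ** 2)

# Clustered fit: a separate intercept per stand (a fixed-effects stand-in for the hierarchical model)
D = np.zeros((dbh.size, n_stands))
D[np.arange(dbh.size), stand] = 1.0
Xc = np.column_stack([D, dbh])
beta_c, *_ = np.linalg.lstsq(Xc, height, rcond=None)
rss_clustered = np.sum((height - Xc @ beta_c) ** 2)

print(f"RSS pooled: {rss_pooled:.1f}, RSS with stand intercepts: {rss_clustered:.1f}")
```

The clustered fit absorbs the stand-to-stand variation that the pooled fit misattributes to residual error, which is the core motivation for modeling the hierarchy explicitly.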
Using Data Augmentation and Markov Chain Monte Carlo for the Estimation of Unfolding Response Models
ERIC Educational Resources Information Center
Johnson, Matthew S.; Junker, Brian W.
2003-01-01
Unfolding response models, a class of item response theory (IRT) models that assume a unimodal item response function (IRF), are often used for the measurement of attitudes. Verhelst and Verstralen (1993) and Andrich and Luo (1993) independently developed unfolding response models by relating the observed responses to a more common monotone IRT…
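The defining unimodality of an unfolding IRF can be sketched with a simple squared-distance form. This is an illustrative toy, not the Verhelst-Verstralen or Andrich-Luo parameterization: endorsement probability peaks where the person location matches the item location and falls off on both sides.

```python
import numpy as np

def unfolding_irf(theta, delta, rho=1.0):
    """Unimodal IRF sketch: endorsement is highest when person location theta equals item location delta."""
    return np.exp(-((theta - delta) ** 2) / rho)

theta = np.linspace(-4.0, 4.0, 81)
p = unfolding_irf(theta, delta=1.0)
peak = theta[int(np.argmax(p))]
print(f"IRF peaks at theta = {peak:.1f} (item location delta = 1.0)")
```

Contrast this with a monotone IRF (e.g. a logistic curve), which keeps rising with theta; it is this peak-and-decline shape that makes unfolding models suited to attitude items a respondent can reject from either side.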
ERIC Educational Resources Information Center
Saalbach, Henrik; Eckstein, Doris; Andri, Nicoletta; Hobi, Reto; Grabner, Roland H.
2013-01-01
Bilingual education programs implicitly assume that the acquired knowledge is represented in a language-independent way. This assumption, however, stands in strong contrast to research findings showing that information may be represented in a way closely tied to the specific language of instruction and learning. The present study aims to examine…
Halo-independence with quantified maximum entropy at DAMA/LIBRA
NASA Astrophysics Data System (ADS)
Fowlie, Andrew
2017-10-01
Using the DAMA/LIBRA anomaly as an example, we formalise the notion of halo-independence in the context of Bayesian statistics and quantified maximum entropy. We consider an infinite set of possible profiles, weighted by an entropic prior and constrained by a likelihood describing noisy measurements of modulated moments by DAMA/LIBRA. Assuming an isotropic dark matter (DM) profile in the galactic rest frame, we find the most plausible DM profiles and predictions for unmodulated signal rates at DAMA/LIBRA. The entropic prior contains an a priori unknown regularisation factor, β, that describes the strength of our conviction that the profile is approximately Maxwellian. By varying β, we smoothly interpolate between a halo-independent and a halo-dependent analysis, thus exploring the impact of prior information about the DM profile.
Eger, Evelyn; Dolan, Raymond; Henson, Richard N.
2009-01-01
It is often assumed that neural activity in face-responsive regions of primate cortex correlates with conscious perception of faces. However, whether such activity occurs without awareness is still debated. Using functional magnetic resonance imaging (fMRI) in conjunction with a novel masked face priming paradigm, we observed neural modulations that could not be attributed to perceptual awareness. More specifically, we found reduced activity in several classic face-processing regions, including the “fusiform face area,” “occipital face area,” and superior temporal sulcus, when a face was preceded by a briefly flashed image of the same face, relative to a different face, even when 2 images of the same face differed. Importantly, unlike most previous studies, which have minimized awareness by using conditions of inattention, the present results occurred when the stimuli (the primes) were attended. By contrast, when primes were perceived consciously, in a long-lag priming paradigm, we found repetition-related activity increases in additional frontal and parietal regions. These data not only demonstrate that fMRI activity in face-responsive regions can be modulated independently of perceptual awareness, but also document where such subliminal face-processing occurs (i.e., restricted to face-responsive regions of occipital and temporal cortex) and to what extent (i.e., independent of the specific image). PMID:18400791
[Atrial fibrillation in cerebrovascular disease: national neurological perspective].
Sargento-Freitas, Joao; Silva, Fernando; Koehler, Sebastian; Isidoro, Luís; Mendonça, Nuno; Machado, Cristina; Cordeiro, Gustavo; Cunha, Luís
2013-01-01
Cardioembolism due to atrial fibrillation assumes a dominant etiologic role in cerebrovascular disease due to its growing incidence, high embolic risk and the particular characteristics of the clinical events it causes. Our objectives were to analyze the frequency of atrial fibrillation in patients with ischemic stroke, study the vital and functional impact of stroke due to different etiologies and evaluate antithrombotic options before and after stroke. We conducted a retrospective study including patients admitted to a central hospital due to ischemic stroke in 2010 (at least one year of follow-up). Etiology of stroke was defined using the Trial of ORG 10172 in Acute Stroke (TOAST) classification, and functional outcome by the modified Rankin scale. We performed a descriptive analysis of the different stroke etiologies and antithrombotic medication in patients with atrial fibrillation. We then conducted a cohort study to evaluate the clinical impact of antithrombotic options in secondary prevention after cardioembolic stroke. In our population (n = 631) we found a higher frequency of cardioembolism (34.5%) than reported in the literature. Mortality, morbidity and antithrombotic options are similar to other previous series, confirming the severity of cardioembolic strokes and the underuse of vitamin K antagonists. Oral anticoagulation was effective in secondary prevention independently of post-stroke functional condition. Despite unequivocal recommendations, oral anticoagulation is still underused in stroke prevention. This study confirms the clinical efficacy of vitamin K antagonists in secondary prevention independently of residual functional impairment.
Conditional Covariance Theory and DETECT for Polytomous Items. Research Report. ETS RR-04-50
ERIC Educational Resources Information Center
Zhang, Jinming
2004-01-01
This paper extends the theory of conditional covariances to polytomous items. It has been mathematically proven that under some mild conditions, commonly assumed in the analysis of response data, the conditional covariance of two items, dichotomously or polytomously scored, is positive if the two items are dimensionally homogeneous and negative…
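The conditional-covariance idea can be made concrete with a small simulation. This is an illustrative sketch, not Zhang's estimator or the DETECT statistic itself: the probit response model, loadings, and item counts are assumed. Two items measuring the same dimension retain positive covariance after conditioning on a rest score built from a different dimension.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000

def item(theta):
    """Dichotomous probit response: 1 if the latent trait exceeds independent standard-normal noise."""
    return (rng.normal(size=theta.size) < theta).astype(float)

# Two latent dimensions: the item pair measures dim A, the rest-score items measure dim B
theta_a, theta_b = rng.normal(size=n), rng.normal(size=n)
x1, x2 = item(theta_a), item(theta_a)
rest_score = sum(item(theta_b) for _ in range(4))

# Conditional covariance of the pair, averaged over levels of the rest score
ccov = 0.0
for s in np.unique(rest_score):
    mask = rest_score == s
    ccov += mask.mean() * np.cov(x1[mask], x2[mask])[0, 1]

print(f"estimated conditional covariance: {ccov:.3f}")  # positive: the pair is dimensionally homogeneous
```

Swapping one of the pair to load on theta_b instead would drive the conditional covariance toward zero or below, which is the sign pattern the theory exploits to detect dimensional structure.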
14 CFR 23.483 - One-wheel landing conditions.
Code of Federal Regulations, 2011 CFR
2011-01-01
Structure Ground Loads § 23.483 One-wheel landing conditions. For the one-wheel landing condition, the airplane is assumed to be in the level attitude and to contact the ground on one side of the main landing gear. In...
14 CFR 23.497 - Supplementary conditions for tail wheels.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Structure Ground Loads § 23.497 Supplementary conditions for tail wheels. In determining the ground loads on the tail wheel and affected supporting structures, the following apply: (a) For the obstruction load, the limit ground reaction obtained in the tail down landing condition is assumed to act up and aft...
14 CFR 23.497 - Supplementary conditions for tail wheels.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Structure Ground Loads § 23.497 Supplementary conditions for tail wheels. In determining the ground loads on the tail wheel and affected supporting structures, the following apply: (a) For the obstruction load, the limit ground reaction obtained in the tail down landing condition is assumed to act up and aft...
14 CFR 23.481 - Tail down landing conditions.
Code of Federal Regulations, 2011 CFR
2011-01-01
Structure Ground Loads § 23.481 Tail down landing conditions. (a) For a tail down landing, the airplane is assumed...
46 CFR 172.245 - Survival conditions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... conditions. A vessel is presumed to survive assumed damage if it meets the following conditions in the final..., and trim must be below the lower edge of an opening through which progressive flooding may take place... inches (50 mm) when the vessel is in the equilibrium position. (e) Progressive flooding. In the design...
46 CFR 172.245 - Survival conditions.
Code of Federal Regulations, 2012 CFR
2012-10-01
... conditions. A vessel is presumed to survive assumed damage if it meets the following conditions in the final..., and trim must be below the lower edge of an opening through which progressive flooding may take place... inches (50 mm) when the vessel is in the equilibrium position. (e) Progressive flooding. In the design...
46 CFR 172.245 - Survival conditions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... conditions. A vessel is presumed to survive assumed damage if it meets the following conditions in the final..., and trim must be below the lower edge of an opening through which progressive flooding may take place... inches (50 mm) when the vessel is in the equilibrium position. (e) Progressive flooding. In the design...
14 CFR 27.479 - Level landing conditions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... conditions. (a) Attitudes. Under each of the loading conditions prescribed in paragraph (b) of this section, the rotorcraft is assumed to be in each of the following level landing attitudes: (1) An attitude in which all wheels contact the ground simultaneously. (2) An attitude in which the aft wheels contact the...
14 CFR 29.479 - Level landing conditions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... conditions. (a) Attitudes. Under each of the loading conditions prescribed in paragraph (b) of this section, the rotorcraft is assumed to be in each of the following level landing attitudes: (1) An attitude in which each wheel contacts the ground simultaneously. (2) An attitude in which the aft wheels contact the...
Iyer, Jaisree; Walsh, Stuart D. C.; Hao, Yue; ...
2018-01-08
Wellbore leakage tops the list of perceived risks to the long-term geologic storage of CO 2, because wells provide a direct path between the CO 2 storage reservoir and the atmosphere. In this paper, we have coupled a two-phase flow model with our original framework that combined models for reactive transport of carbonated brine, geochemistry of reacting cement, and geomechanics to predict the permeability evolution of cement fractures. Additionally, this makes the framework suitable for field conditions in geological storage sites, permitting simulation of contact between cement and mixtures of brine and supercritical CO 2. Due to a lack of conclusive experimental data, we tried both linear and Corey relative permeability models to simulate flow of the two phases in cement fractures. The model also includes two options to account for the inconsistent experimental observations regarding cement reactivity with two-phase CO 2-brine mixtures. One option assumes that the reactive surface area is independent of the brine saturation and the second option assumes that the reactive surface area is proportional to the brine saturation. We have applied the model to predict the extent of cement alteration, the conditions under which fractures seal, the time it takes to seal a fracture, and the leakage rates of CO 2 and brine when damage zones in the wellbore are exposed to two-phase CO 2-brine mixtures. Initial brine residence time and the initial fracture aperture are critical parameters that affect the fracture sealing behavior. We also evaluated the importance of the model assumptions regarding relative permeability and cement reactivity. These results illustrate the need to understand how mixtures of carbon dioxide and brine flow through fractures and react with cement to make reasonable predictions regarding well integrity.
For example, a reduction in the cement reactivity with two-phase CO 2-brine mixtures can not only significantly increase the sealing time for fractures but may also prevent fracture sealing.
NASA Astrophysics Data System (ADS)
Steenstra, E. S.; Sitabi, A. B.; Lin, Y. H.; Rai, N.; Knibbe, J. S.; Berndt, J.; Matveev, S.; van Westrenen, W.
2017-09-01
We present 275 new metal-silicate partition coefficients for P, S, V, Cr, Mn, Co, Ni, Ge, Mo, and W obtained at moderate P (1.5 GPa) and high T (1683-1883 K). We investigate the effect of silicate melt composition using four end member silicate melt compositions. We identify possible silicate melt dependencies of the metal-silicate partitioning of lower valence elements Ni, Ge and V, elements that are usually assumed to remain unaffected by changes in silicate melt composition. Results for the other elements are consistent with the dependence of their metal-silicate partition coefficients on the individual major oxide components of the silicate melt composition suggested by recently reported parameterizations and theoretical considerations. Using multiple linear regression, we parameterize compiled metal-silicate partitioning results including our new data and report revised expressions that predict their metal-silicate partitioning behavior as a function of P-T-X-fO2. We apply these results to constrain the conditions that prevailed during core formation in the angrite parent body (APB). Our results suggest the siderophile element depletions in angrite meteorites are consistent with a CV bulk composition and constrain APB core formation to have occurred at mildly reducing conditions of 1.4 ± 0.5 log units below the iron-wüstite buffer (ΔIW), corresponding to an APB core mass of 18 ± 11%. The core mass range is constrained to 21 ± 8 mass% if light elements (S and/or C) are assumed to reside in the APB core. Incorporation of light elements in the APB core does not yield significantly different redox states for APB core-mantle differentiation. The inferred redox state is in excellent agreement with independent fO2 estimates recorded by pyroxene and olivine in angrites.
Kingsnorth, S; King, G; McPherson, A; Jones-Galley, K
2015-05-01
Young people with physical disabilities experience issues regarding employment, schooling, independent living and establishing meaningful personal relationships. A lack of life skills has been recognized as an important factor contributing to this lag. The Independence Program (TIP) is a short-term residential life skills program that aims to equip youth with the foundational life skills required to assume adult roles. This study retrospectively examined the achievements, skills acquired and program attributions of youth and young adults who took part in this three-week immersive teen independence program over a 20-year period. A total of 162 past graduates were invited to take part, with 78 doing so (a 48% response rate). These past graduates completed an online survey assessing objective outcomes such as employment and independent living; subjective outcomes such as feeling in control and living meaningful lives; and reflections on skills acquired, opportunities experienced and attributions to TIP. The majority of respondents were female (71%), had a diagnosis of cerebral palsy (55%) and ranged from 20 to 35 years of age (92%). Despite a range of outcomes related to the achievement of adult roles, high levels of life satisfaction and overall quality of life were reported. Nearly every respondent reported using the skills they learned at the program in their lives afterwards and a high percentage attributed the acquisition and consolidation of core life skills to participating in this intensive immersive program. Although causality cannot be assumed, respondents reflected very positively on the opportunities provided by TIP to develop their independent living and life skills, extend their social networks and understand their strengths and weaknesses. 
Such findings validate the importance of targeted skill development to assist young people with physical disabilities in attaining their life goals and encourage focused investigations of key features in program design. © 2014 John Wiley & Sons Ltd.
Power spectrum model of visual masking: simulations and empirical data.
Serrano-Pedraza, Ignacio; Sierra-Vázquez, Vicente; Derrington, Andrew M
2013-06-01
In the study of the spatial characteristics of the visual channels, the power spectrum model of visual masking is one of the most widely used. When the task is to detect a signal masked by visual noise, this classical model assumes that the signal and the noise are previously processed by a bank of linear channels and that the power of the signal at threshold is proportional to the power of the noise passing through the visual channel that mediates detection. The model also assumes that this visual channel will have the highest ratio of signal power to noise power at its output. According to this, there are masking conditions where the highest signal-to-noise ratio (SNR) occurs in a channel centered in a spatial frequency different from the spatial frequency of the signal (off-frequency looking). Under these conditions the channel mediating detection could vary with the type of noise used in the masking experiment and this could affect the estimation of the shape and the bandwidth of the visual channels. It is generally believed that notched noise, white noise and double bandpass noise prevent off-frequency looking, and high-pass, low-pass and bandpass noises can promote it independently of the channel's shape. In this study, by means of a procedure that finds the channel that maximizes the SNR at its output, we performed numerical simulations using the power spectrum model to study the characteristics of masking caused by six types of one-dimensional noise (white, high-pass, low-pass, bandpass, notched, and double bandpass) for two types of channel's shape (symmetric and asymmetric). 
Our simulations confirm that (1) high-pass, low-pass, and bandpass noises do not prevent the off-frequency looking, (2) white noise satisfactorily prevents the off-frequency looking independently of the shape and bandwidth of the visual channel, and interestingly we proved for the first time that (3) notched and double bandpass noises prevent off-frequency looking only when the noise cutoffs around the spatial frequency of the signal match the shape of the visual channel (symmetric or asymmetric) involved in the detection. In order to test the explanatory power of the model with empirical data, we performed six visual masking experiments. We show that this model, with only two free parameters, fits the empirical masking data with high precision. Finally, we provide equations of the power spectrum model for six masking noises used in the simulations and in the experiments.
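The channel-selection step of the power spectrum model can be sketched numerically. This is a simplified illustration, not the authors' simulation code: the log-Gaussian channel shape, the one-octave bandwidth, and the small noise floor added to the low-pass spectrum are all assumptions. The sketch nonetheless reproduces off-frequency looking: with low-pass noise, the channel with the highest SNR sits above the signal frequency.

```python
import numpy as np

def gain(f, center, sigma_oct=0.5):
    """Assumed channel tuning: log-Gaussian in frequency with roughly one-octave bandwidth."""
    return np.exp(-(np.log2(f / center)) ** 2 / (2.0 * sigma_oct ** 2))

f = np.logspace(np.log2(0.25), np.log2(32.0), 400, base=2)  # spatial frequency axis, c/deg
f_signal = 4.0
noise = np.where(f <= f_signal, 1.0, 0.01)  # low-pass noise with a small assumed floor

centers = np.logspace(np.log2(1.0), np.log2(16.0), 200, base=2)
snr = []
for c in centers:
    signal_power = gain(f_signal, c) ** 2   # narrowband signal at f_signal
    w = gain(f, c) ** 2 * noise             # noise power passing through the channel
    noise_power = np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(f))  # trapezoid rule
    snr.append(signal_power / noise_power)

best = centers[int(np.argmax(snr))]
print(f"best channel centre: {best:.2f} c/deg (signal at {f_signal:.1f} c/deg)")
```

A channel centred above the noise cutoff loses some signal but escapes far more noise, so the SNR-maximizing centre drifts upward; replacing the low-pass spectrum with white noise keeps the best centre at the signal frequency.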
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taddei, Laura; Amendola, Luca, E-mail: laura.taddei@fis.unipr.it, E-mail: l.amendola@thphys.uni-heidelberg.de
Most cosmological constraints on modified gravity are obtained assuming that the cosmic evolution was standard ΛCDM in the past and that the present matter density and power spectrum normalization are the same as in a ΛCDM model. Here we examine how the constraints change when these assumptions are lifted. We focus in particular on the parameter Y (also called G_eff) that quantifies the deviation from the Poisson equation. This parameter can be estimated by comparing with the model-independent growth rate quantity fσ₈(z) obtained through redshift distortions. We reduce the model dependency in evaluating Y by marginalizing over σ₈ and over the initial conditions, and by absorbing the degenerate parameter Ω_{m,0} into Y. We use all currently available values of fσ₈(z). We find that the combination Ŷ = YΩ_{m,0}, assumed constant in the observed redshift range, can be constrained only very weakly by current data, Ŷ = 0.28 (+0.35/−0.23) at 68% c.l. We also forecast the precision of a future estimation of Ŷ in a Euclid-like redshift survey. We find that the future constraints will reduce substantially the uncertainty, Ŷ = 0.30 (+0.08/−0.09) at 68% c.l., but the relative error on Ŷ around the fiducial remains quite high, of the order of 30%. The main reason for these weak constraints is that Ŷ is strongly degenerate with the initial conditions, so that large or small values of Ŷ are compensated by choosing non-standard initial values of the derivative of the matter density contrast. Finally, we produce a forecast of a cosmological exclusion plot on the Yukawa strength and range parameters, which complements similar plots on laboratory scales but explores scales and epochs reachable only with large-scale galaxy surveys.
We find that future data can constrain the Yukawa strength to within 3% of the Newtonian one if the range is around a few Megaparsecs. In the particular case of f(R) models, we find that the Yukawa range will be constrained to be larger than 80 Mpc/h or smaller than 2 Mpc/h (95% c.l.), regardless of the specific f(R) model.
The Pavlovian analysis of instrumental conditioning.
Gormezano, I; Tait, R W
1976-01-01
An account was given of the development within the Russian literature of a uniprocess formulation of classical and instrumental conditioning, known as the bidirectional conditioning hypothesis. The hypothesis purports to offer a single set of Pavlovian principles to account for both paradigms, based upon a neural model which assumes that bidirectional (forward and backward) connections are formed in both classical and instrumental conditioning situations. In instrumental conditioning, the bidirectional connections are hypothesized to be simply more complex than those in classical conditioning, and any differences in empirical functions are presumed to lie not in differences in mechanism, but in the strength of the forward and backward connections. Although bidirectional connections are assumed to develop in instrumental conditioning, the experimental investigation of the bidirectional conditioning hypothesis has been essentially restricted to the classical conditioning operations of pairing two CSs (sensory preconditioning training), a US followed by a CS (backward conditioning training), and two USs. However, the paradigm involving the pairing of two USs, because of theoretical and analytical considerations, is the one most commonly employed by Russian investigators. The results of an initial experiment involving the pairing of two USs, and reference to the results of a more extensive investigation, lead us to tentatively question the validity of the bidirectional conditioning account of instrumental conditioning.
Traction free finite elements with the assumed stress hybrid model. M.S. Thesis, 1981
NASA Technical Reports Server (NTRS)
Kafie, Kurosh
1991-01-01
An effective approach in the finite element analysis of the stress field at the traction free boundary of a solid continuum was studied. Conventional displacement and assumed stress finite elements were used in the determination of stress concentrations around circular and elliptical holes. Specialized hybrid elements were then developed to improve the satisfaction of prescribed traction boundary conditions. Results of the stress analysis indicated that finite elements which exactly satisfy the free stress boundary conditions are the most accurate and efficient in such problems. A general approach for hybrid finite elements which incorporate traction free boundaries of arbitrary geometry was formulated.
Decentring and distraction reduce overgeneral autobiographical memory in depression.
Watkins, E; Teasdale, J D; Williams, R M
2000-07-01
Increased recall of categorical autobiographical memories is a phenomenon unique to depression and post-traumatic stress disorder, and is associated with a poor prognosis for depression. Although the elevated recall of categorical memories does not change on remission from depression, recent findings suggest that overgeneral memory may be reduced by cognitive interventions and maintained by rumination. This study tested whether cognitive manipulations could influence the recall of categorical memories in dysphoric participants. Forty-eight dysphoric and depressed participants were randomly allocated to rumination or distraction conditions. Before and after the manipulation, participants completed the Autobiographical Memory Test, a standard measure of overgeneral memory. Participants were then randomized to either a 'decentring' question (Socratic questions designed to facilitate viewing moods within a wider perspective) or a control question condition, before completing the Autobiographical Memory Test again. Distraction produced significantly greater decreases in the proportion of memories retrieved that were categorical than rumination. Decentring questions produced significantly greater decreases in the proportion of memories retrieved that were categorical than control questions, with this effect independent of the prior manipulation. Elevated categorical memory in depression is more modifiable than has been previously assumed; it may reflect the dynamic maintenance of a cognitive style that can be interrupted by brief cognitive interventions.
Recursive recovery of Markov transition probabilities from boundary value data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patch, Sarah Kathyrn
1994-04-01
In an effort to mathematically describe the anisotropic diffusion of infrared radiation in biological tissue, Gruenbaum posed an anisotropic diffusion boundary value problem in 1989. In order to accommodate anisotropy, he discretized the temporal as well as the spatial domain. The probabilistic interpretation of the diffusion equation is retained; radiation is assumed to travel according to a random walk (of sorts). In this random walk the probabilities with which photons change direction depend upon their previous as well as present location. The forward problem gives boundary value data as a function of the Markov transition probabilities. The inverse problem requires finding the transition probabilities from boundary value data. Problems in the plane are studied carefully in this thesis. Consistency conditions amongst the data are derived. These conditions have two effects: they prohibit inversion of the forward map but permit smoothing of noisy data. Next, a recursive algorithm which yields a family of solutions to the inverse problem is detailed. This algorithm takes advantage of all independent data and generates a system of highly nonlinear algebraic equations. Pluecker-Grassmann relations are instrumental in simplifying the equations. The algorithm is used to solve the 4 x 4 problem. Finally, the smallest nontrivial problem in three dimensions, the 2 x 2 x 2 problem, is solved.
Afterimages are biased by top-down information.
Utz, Sandra; Carbon, Claus-Christian
2015-01-01
The afterimage illusion refers to a complementary colored image continuing to appear in the observer's vision after the exposure to the original image has ceased. It is assumed to be a phenomenon of the primary visual pathway, caused by overstimulation of photoreceptors of the retina. The aim of the present study was to investigate the nature of afterimage perceptions; mainly whether it is a mere physical, that is, low-level effect or whether it can be modulated by top-down processes, that is, high-level processes. Participants were first exposed to five either strongly female or male faces (Experiment 1), objects highly associated with female or male gender (Experiment 2) or female versus male names (Experiment 3), followed by a negativated image of a gender-neutral face which had to be fixated for 20s to elicit an afterimage. Participants had to rate their afterimages according to sexual dimorphism, showing that the afterimage of the gender-neutral face was perceived as significantly more female in the female priming condition compared with the male priming condition, independently of the priming quality (faces, objects, and names). Our results documented, in addition to previously presumed bottom-up mechanisms, a prominent influence of top-down processing on the perception of afterimages via priming mechanisms (female primes led to more female afterimage perception). © The Author(s) 2015.
Contribution to irradiation creep arising from gas-driven bubbles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woo, C.H.; Garner, F.A.
1998-03-01
In a previous paper, a relationship was defined between void swelling and irradiation creep arising from the interaction of the SIPA and SIG creep mechanisms. It was shown that creep-driven deformation and swelling-driven deformation are highly interactive in nature, and that the two contributions cannot be independently calculated and then considered as directly additive. This model could be used to explain the recent experimental observation that the creep-swelling coupling coefficient was not a constant as previously assumed, but declined continuously as the swelling rate increased. Such a model thereby explained the creep-disappearance and creep-damping anomalies observed in conditions where significant void swelling occurred before substantial creep deformation developed. At lower irradiation temperatures and high helium/hydrogen generation rates, such as found in light-water-cooled reactors and some fusion concepts, gas-filled cavities that have not yet exceeded the critical radius for bubble-void conversion should also exert an influence on irradiation creep. In this paper the original concept is adapted to include such conditions, and its predictions are then compared with available data. It is shown that a measurable increase in the creep rate is expected compared to the rate found in low gas-generating environments. The creep rate is directly related to the gas generation rate and thereby to the neutron flux and spectrum.
Grzyb, Kai Robin; Hübner, Ronald
2013-01-01
The size of response-repetition (RR) costs, which are usually observed on task-switch trials, strongly varies between conditions with univalent and bivalent stimuli. To test whether top-down or bottom-up processes can account for this effect, we assessed in Experiment 1 baselines for univalent and bivalent stimulus conditions (i.e., for stimuli that are associated with either 1 or 2 tasks). Experiment 2 examined whether the proportion of these stimulus types affects RR costs. As the size of RR costs was independent of proportion, a top-down explanation could be excluded. However, there was an increase in RR costs if the current stimulus induced a response conflict. To account for this effect, we proposed an amplification of response conflict account. It assumes that the basic mechanism that leads to RR costs amplifies response conflict, which, in turn, increases RR costs. Experiment 3 confirmed this bottom-up explanation by showing that the increase in RR costs varies with previous-trial congruency, which is known to affect RR costs. Experiment 4 showed that the increase can also be found with univalent stimuli that induce response conflict. Altogether, the results are in line with a response inhibition account of RR costs. Implications for alternative accounts are also discussed. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Santos, Mauro; Castañeda, Luis E; Rezende, Enrico L
2012-01-01
The potential of populations to evolve in response to ongoing climate change is partly conditioned by the presence of heritable genetic variation in relevant physiological traits. Recent research suggests that Drosophila melanogaster exhibits negligible heritability, hence little evolutionary potential in heat tolerance when measured under slow heating rates that presumably mimic conditions in nature. Here, we study the effects of directional selection for increased heat tolerance using Drosophila as a model system. We combine a physiological model to simulate thermal tolerance assays with multilocus models for quantitative traits. Our simulations show that, whereas the evolutionary response of the genetically determined upper thermal limit (CTmax) is independent of methodological context, the response in knockdown temperatures varies with measurement protocol and is substantially (up to 50%) lower than for CTmax. Realized heritabilities of knockdown temperature may grossly underestimate the true heritability of CTmax. For instance, assuming that the true heritability of CTmax in the base population is h^2 = 0.25, realized heritabilities of knockdown temperature are around 0.08–0.16 depending on heating rate. These effects are higher in slow heating assays, suggesting that flawed methodology might explain the apparently limited evolutionary potential of cosmopolitan D. melanogaster. PMID:23170220
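The realized heritabilities discussed above follow the breeder's equation, h^2_realized = R / S (response over selection differential). A minimal stdlib-Python sketch of that calculation under truncation selection on phenotype; the population size, selected fraction, and variance split are illustrative and not values from the paper:

```python
import random
import statistics

random.seed(1)

def simulate_realized_h2(h2_true, n=20000, top_frac=0.2):
    """One round of truncation selection on a trait with total variance 1.

    Phenotype = breeding value + environmental deviation. Realized
    heritability is estimated as response R over selection differential S.
    """
    va, ve = h2_true, 1.0 - h2_true          # additive / environmental variance
    pop = []
    for _ in range(n):
        a = random.gauss(0, va ** 0.5)                   # breeding value A
        pop.append((a, a + random.gauss(0, ve ** 0.5)))  # (A, phenotype P)
    mean_p = statistics.mean(p for _, p in pop)
    mean_a = statistics.mean(a for a, _ in pop)
    # select the top fraction of the population by phenotype
    selected = sorted(pop, key=lambda x: x[1], reverse=True)[:int(n * top_frac)]
    s = statistics.mean(p for _, p in selected) - mean_p  # selection differential S
    # offspring mean shifts by the mean breeding value of the selected parents
    r = statistics.mean(a for a, _ in selected) - mean_a  # response R
    return r / s                                          # realized h^2 = R / S

h2_est = simulate_realized_h2(0.25)
```

With an idealized additive model the estimate recovers the true heritability; the paper's point is that the knockdown-temperature assay breaks this correspondence.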
Collapse of resilience patterns in generalized Lotka-Volterra dynamics and beyond.
Tu, Chengyi; Grilli, Jacopo; Schuessler, Friedrich; Suweis, Samir
2017-06-01
Recently, a theoretical framework aimed at separating the roles of dynamics and topology in multidimensional systems has been developed [Gao et al., Nature (London) 530, 307 (2016), 10.1038/nature16948]. The validity of their method is assumed to hold depending on two main hypotheses: (i) the network determined by the interaction between pairs of nodes has negligible degree correlations; (ii) the node activities are uniform across nodes in both the drift and the pairwise interaction functions. Moreover, the authors consider only positive (mutualistic) interactions. Here we show that the conditions proposed by Gao and collaborators [Nature (London) 530, 307 (2016), 10.1038/nature16948] are neither sufficient nor necessary to guarantee that their method works in general, and that the validity of their results is not independent of the model chosen within the class of dynamics they considered. Indeed, we find that a new condition poses effective limitations to their framework, and we provide quantitative predictions of the quality of the one-dimensional collapse as a function of the properties of interaction networks and stable dynamics using results from random matrix theory. We also find that multidimensional reduction may work for an interaction matrix with a mixture of positive and negative signs, opening up an application of the framework to food webs, neuronal networks, and social and economic interactions.
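The one-dimensional collapse at issue can be illustrated on a deliberately homogeneous generalized Lotka-Volterra system, where the reduction is exact by construction (the paper's point is precisely that network heterogeneity can break it). A sketch with an assumed logistic drift F(x) = x(1 - x) and mutualistic coupling G = x_i x_j; all parameters are illustrative:

```python
def glv_step(x, A, dt):
    """One Euler step of generalized Lotka-Volterra dynamics with logistic
    growth F(x_i) = x_i (1 - x_i) and mutualistic coupling A_ij x_i x_j."""
    n = len(x)
    return [x[i] + dt * (x[i] * (1.0 - x[i])
                         + sum(A[i][j] * x[i] * x[j] for j in range(n) if j != i))
            for i in range(n)]

def effective_state(x, A):
    """One-dimensional reduction in the spirit of Gao et al.: node activity
    averaged with weights given by incoming interaction strength (row sums)."""
    s_in = [sum(row) for row in A]
    return sum(w * xi for w, xi in zip(s_in, x)) / sum(s_in)

# homogeneous mutualistic network: the collapse is exact and the effective
# fixed point solves x(1 - x) + beta x^2 = 0, i.e. x* = 1 / (1 - beta)
n, beta = 30, 0.3
A = [[0.0 if i == j else beta / (n - 1) for j in range(n)] for i in range(n)]
x = [0.5] * n
for _ in range(3000):
    x = glv_step(x, A, 0.02)
x_eff = effective_state(x, A)
```

For a heterogeneous A (broad degree distribution, mixed signs), individual node activities spread out and the single effective equation no longer predicts them, which is the failure mode the paper quantifies.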
Confined active Brownian particles: theoretical description of propulsion-induced accumulation
NASA Astrophysics Data System (ADS)
Das, Shibananda; Gompper, Gerhard; Winkler, Roland G.
2018-01-01
The stationary-state distribution function of confined active Brownian particles (ABPs) is analyzed by computer simulations and analytical calculations. We consider a radial harmonic as well as an anharmonic confinement potential. In the simulations, the ABP is propelled with a prescribed velocity along a body-fixed direction, which is changing in a diffusive manner. For the analytical approach, the Cartesian components of the propulsion velocity are assumed to change independently; active Ornstein-Uhlenbeck particle (AOUP). This results in very different velocity distribution functions. The analytical solution of the Fokker-Planck equation for an AOUP in a harmonic potential is presented and a conditional distribution function is provided for the radial particle distribution at a given magnitude of the propulsion velocity. This conditional probability distribution facilitates the description of the coupling of the spatial coordinate and propulsion, which yields activity-induced accumulation of particles. For the anharmonic potential, a probability distribution function is derived within the unified colored noise approximation. The comparison of the simulation results with theoretical predictions yields good agreement for large rotational diffusion coefficients, e.g. due to tumbling, even for large propulsion velocities (Péclet numbers). However, we find significant deviations already for moderate Péclet number, when the rotational diffusion coefficient is on the order of the thermal one.
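For the AOUP variant described above, each Cartesian component of the position obeys a linear equation driven by Ornstein-Uhlenbeck noise, and its stationary variance has the closed form Var(x) = sigma_v^2 tau / (k (1 + k tau)). A stdlib-Python sketch checking this one-component Gaussian statistic by Euler-Maruyama simulation (parameter values are illustrative, not taken from the paper):

```python
import math
import random

random.seed(7)

# One Cartesian component of an active Ornstein-Uhlenbeck particle in a
# harmonic trap U = k x^2 / 2: the propulsion velocity v is itself an OU
# process with correlation time tau and stationary variance sv2, while x
# relaxes toward the origin at rate k.
k, tau, sv2, dt = 1.0, 0.5, 1.0, 0.005
x, v = 0.0, 0.0
burn, steps = 40000, 400000
acc = 0.0
for step in range(burn + steps):
    v += (-v / tau) * dt + math.sqrt(2.0 * sv2 / tau * dt) * random.gauss(0, 1)
    x += (-k * x + v) * dt
    if step >= burn:
        acc += x * x
var_x = acc / steps                               # time-averaged <x^2>
var_x_theory = sv2 * tau / (k * (1.0 + k * tau))  # = 1/3 for these values
```

An ABP, by contrast, has a propulsion velocity of fixed magnitude with a diffusing direction, which is what produces the non-Gaussian, accumulation-prone distributions the paper compares against.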
A Component-Based Diffusion Model With Structural Diversity for Social Networks.
Qing Bao; Cheung, William K; Yu Zhang; Jiming Liu
2017-04-01
Diffusion on social networks refers to the process where opinions are spread via the connected nodes. Given a set of observed information cascades, one can infer the underlying diffusion process for social network analysis. The independent cascade model (IC model) is a widely adopted diffusion model where a node is assumed to be activated independently by any one of its neighbors. In reality, how a node will be activated also depends on how its neighbors are connected and activated. For instance, the opinions from the neighbors of the same social group are often similar and thus redundant. In this paper, we extend the IC model by considering that: 1) the information coming from the connected neighbors is similar and 2) the underlying redundancy can be modeled using a dynamic structural diversity measure of the neighbors. Our proposed model assumes each node to be activated independently by different communities (or components) of its parent nodes, each weighted by its effective size. An expectation maximization algorithm is derived to infer the model parameters. We compare the performance of the proposed model with the basic IC model and its variants using both synthetic data sets and a real-world data set containing news stories and Web blogs. Our empirical results show that incorporating the community structure of neighbors and the structural diversity measure into the diffusion model significantly improves the accuracy of the model, at the expense of only a reasonable increase in run-time.
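The contrast between the plain IC model and the component-based variant can be sketched in a few lines. This is not the paper's exact formulation (which uses Burt's effective size and EM-fitted parameters); the sqrt-of-community-size weight below is an assumed, illustrative diversity proxy:

```python
def ic_activation(p, active_parents):
    """Plain IC model: each active parent independently activates the node
    with probability p, so failure probabilities multiply."""
    return 1.0 - (1.0 - p) ** len(active_parents)

def component_ic_activation(p, community_sizes):
    """Component-based sketch: active parents are grouped into communities;
    each community contributes one attempt weighted by an effective-size
    proxy (sqrt of its size, an assumption for illustration), so redundant
    neighbors from the same group count less than independent ones."""
    prob_fail = 1.0
    for size in community_sizes:
        prob_fail *= (1.0 - p) ** (size ** 0.5)
    return 1.0 - prob_fail

# four active neighbors: four singletons vs. one tight community of four
p_independent = ic_activation(0.2, [1, 2, 3, 4])
p_one_group = component_ic_activation(0.2, [4])
```

Grouping correlated neighbors lowers the predicted activation probability relative to treating them as independent, which is the redundancy effect the extended model captures.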
Dynamics of circumstellar disks. III. The case of GG Tau A
Nelson, Andrew F.; Marzari, Francesco
2016-08-11
Here, we present two-dimensional hydrodynamic simulations using the Smoothed Particle Hydrodynamic code, VINE, to model a self-gravitating binary system. We model configurations in which a circumbinary torus+disk surrounds a pair of stars in orbit around each other and a circumstellar disk surrounds each star, similar to that observed for the GG Tau A system. We assume that the disks cool as blackbodies, using rates determined independently at each location in the disk by the time-dependent temperature of the photosphere there. We assume heating due to hydrodynamical processes and to radiation from the two stars, using rates approximated from a measure of the radiation intercepted by the disk at its photosphere.
Asymptotic Solutions for Optical Properties of Large Particles with Strong Absorption
NASA Technical Reports Server (NTRS)
Yang, Ping; Gao, Bo-Cai; Baum, Bryan A.; Hu, Yong X.; Wiscombe, Warren J.; Mishchenko, Michael I.; Winker, Dave M.; Nasiri, Shaima L.; Einaudi, Franco (Technical Monitor)
2000-01-01
For scattering calculations involving nonspherical particles such as ice crystals, we show that the transverse wave condition is not applicable to the refracted electromagnetic wave in the context of geometric optics when absorption is involved. Either the TM wave condition (i.e., where the magnetic field of the refracted wave is transverse with respect to the wave direction) or the TE wave condition (i.e., where the electric field is transverse with respect to the propagating direction of the wave) may be assumed for the refracted wave in an absorbing medium to locally satisfy the electromagnetic boundary condition in the ray tracing calculation. The wave mode assumed for the refracted wave affects both the reflection and refraction coefficients. As a result, a nonunique solution for these coefficients is derived from the electromagnetic boundary condition. In this study we have identified the appropriate solution for the Fresnel reflection/refraction coefficients in light scattering calculation based on the ray tracing technique. We present the 3 x 2 refraction or transmission matrix that completely accounts for the inhomogeneity of the refracted wave in an absorbing medium. Using the Fresnel coefficients for an absorbing medium, we derive an asymptotic solution in an analytical format for the scattering properties of a general polyhedral particle. Numerical results are presented for hexagonal plates and columns with both preferred and random orientations. The asymptotic theory can produce reasonable accuracy in the phase function calculations in the infrared window region (wavelengths near 10 micron) if the particle size (in diameter) is on the order of 40 micron or larger. However, since strong absorption is assumed in the computation of the single-scattering albedo in the asymptotic theory, the single scattering albedo does not change with variation of the particle size. 
As a result, the asymptotic theory can lead to substantial errors in the computation of single-scattering albedo for small and moderate particle sizes. However, from comparison of the asymptotic results with the FDTD solution, it is expected that a convergence between the FDTD results and the asymptotic theory results can be reached when the particle size approaches 200 micron. We show that the phase function at side-scattering and backscattering angles is insensitive to particle shape if the random orientation condition is assumed. However, if preferred orientations are assumed for particles, the phase function has a strong dependence on scattering azimuthal angle. The single-scattering albedo also shows very strong dependence on the inclination angle of incident radiation with respect to the rotating axis for the preferred particle orientations.
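The non-uniqueness discussed above arises because the TE and TM assumptions give different Fresnel coefficients once the refractive index is complex. A sketch of the standard complex-index Fresnel reflection coefficients (not the paper's full 3 x 2 transmission matrix), showing that they reduce to the real-index result when absorption vanishes; the index values are illustrative:

```python
import cmath
import math

def fresnel(n1, n2, theta_i):
    """Fresnel reflection coefficients at a boundary with a possibly
    absorbing medium, n2 = n + i*k, via the complex form of Snell's law."""
    cos_i = math.cos(theta_i)
    sin_t = n1 * math.sin(theta_i) / n2       # complex Snell's law
    cos_t = cmath.sqrt(1.0 - sin_t * sin_t)   # complex refraction angle
    r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)  # TE (s) wave
    r_p = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)  # TM (p) wave
    return r_s, r_p

# absorbing, ice-like index near the 10-micron window (illustrative values)
r_s_abs, r_p_abs = fresnel(1.0, complex(1.1, 0.2), math.radians(30))
# non-absorbing limit: coefficients become real and match textbook Fresnel
r_s0, r_p0 = fresnel(1.0, complex(1.31, 0.0), math.radians(30))
```

With nonzero k the refracted wave is inhomogeneous (planes of constant amplitude and phase differ), which is why the transverse-wave condition cited in the abstract fails and the TE/TM choice matters.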
Properties of the Bivariate Delayed Poisson Process
1974-07-01
…and Lewis (1972) in their Berkeley Symposium paper, and here their analysis of the bivariate Poisson processes (without Poisson noise) is carried… Poisson processes. They cannot, however, be independent Poisson processes because their events are associated in pairs by the displacement centres… process because its marginal processes for events of each type are themselves (univariate) Poisson processes. Cox and Lewis (1972) assumed a…
ERIC Educational Resources Information Center
Schroeders, Ulrich; Robitzsch, Alexander; Schipolowski, Stefan
2014-01-01
C-tests are a specific variant of cloze tests that are considered time-efficient, valid indicators of general language proficiency. They are commonly analyzed with models of item response theory assuming local item independence. In this article we estimated local interdependencies for 12 C-tests and compared the changes in item difficulties,…
An effective wind speed for models of fire spread
Ralph M. Nelson
2002-01-01
In previous descriptions of wind-slope interaction and the spread rate of wildland fires it is assumed that the separate effects of wind and slope are independent and additive and that corrections for these effects may be applied to spread rates computed from existing rate of spread models. A different approach is explored in the present paper in which the upslope...
Patriots-in-Training: Spanish American Children at Hazelwood School in England during the 1820s
ERIC Educational Resources Information Center
Racine, Karen
2010-01-01
Although the Spanish American independence movements are reflexively assumed to have been inspired by the American and French Revolutions, the patriot leaders actually looked toward Great Britain for much of their inspiration and material support. One of their most cherished social goals was to reform and uplift their education systems and to that…
ERIC Educational Resources Information Center
Mesmer-Magnus, Jessica R.; Viswesvaran, Chockalingam
2005-01-01
The overlap between measures of work-to-family (WFC) and family-to-work conflict (FWC) was meta-analytically investigated. Researchers have assumed WFC and FWC to be distinct, however, this assumption requires empirical verification. Across 25 independent samples (total N=9079) the sample size weighted mean observed correlation was .38 and the…
Effects of stand density on top height estimation for ponderosa pine
Martin Ritchie; Jianwei Zhang; Todd Hamilton
2012-01-01
Site index, estimated as a function of dominant-tree height and age, is often used as an expression of site quality. This expression is assumed to be effectively independent of stand density. Observation of dominant height at two different ponderosa pine levels-of-growing-stock studies revealed that top height stability with respect to stand density depends on the...
Changing views of Cajal's neuron: the case of the dendritic spine.
Segal, Menahem
2002-01-01
Ever since dendritic spines were first described in detail by Santiago Ramón y Cajal, they were assumed to underlie the physical substrate of long term memory in the brain. Recent time-lapse imaging of dendritic spines in live tissue, using confocal microscopy, have revealed an amazingly plastic structure, which undergoes continuous changes in shape and size, not intuitively related to its assumed role in long term memory. Functionally, the spine is shown to be an independent cellular compartment, able to regulate calcium concentration independently of its parent dendrite. The shape of the spine is instrumental in regulating the link between the synapse and the parent dendrite such that longer spines have less impact on the dendrite than shorter ones. The spine can be formed, change its shape and disappear in response to afferent stimulation, in a dynamic fashion, indicating that spine morphology is an important vehicle for structuring synaptic interactions. While this role is crucial in the developing nervous system, large variations in spine densities in the adult brain indicate that tuning of synaptic impact may be a role of spines throughout the life of a neuron.
Multiple Component Event-Related Potential (mcERP) Estimation
NASA Technical Reports Server (NTRS)
Knuth, K. H.; Clanton, S. T.; Shah, A. S.; Truccolo, W. A.; Ding, M.; Bressler, S. L.; Trejo, L. J.; Schroeder, C. E.; Clancy, Daniel (Technical Monitor)
2002-01-01
We show how model-based estimation of the neural sources responsible for transient neuroelectric signals can be improved by the analysis of single trial data. Previously, we showed that a multiple component event-related potential (mcERP) algorithm can extract the responses of individual sources from recordings of a mixture of multiple, possibly interacting, neural ensembles. McERP also estimated single-trial amplitudes and onset latencies, thus allowing more accurate estimation of ongoing neural activity during an experimental trial. The mcERP algorithm is related to infomax independent component analysis (ICA); however, the underlying signal model is more physiologically realistic in that a component is modeled as a stereotypic waveshape varying both in amplitude and onset latency from trial to trial. The result is a model that reflects quantities of interest to the neuroscientist. Here we demonstrate that the mcERP algorithm provides more accurate results than more traditional methods such as factor analysis and the more recent ICA. Whereas factor analysis assumes the sources are orthogonal and ICA assumes the sources are statistically independent, the mcERP algorithm makes no such assumptions thus allowing investigators to examine interactions among components by estimating the properties of single-trial responses.
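The core signal model, a stereotypic waveshape shifted and scaled per trial, can be illustrated with a toy single-trial fit. This is not the mcERP algorithm itself (which is a multi-component maximum-likelihood estimator); it is a sketch of the amplitude/latency idea using least-squares template matching on synthetic data, with all shapes and noise levels assumed:

```python
import math
import random

random.seed(3)

def template(t):
    """Stereotypic component waveshape: a Gaussian bump (illustrative)."""
    return math.exp(-((t - 20) ** 2) / 30.0)

# synthesize one trial: scaled, latency-shifted template plus sensor noise
true_amp, true_lat, n = 1.5, 6, 80
trial = [true_amp * template(t - true_lat) + random.gauss(0, 0.05)
         for t in range(n)]

# scan candidate latencies; at each shift the optimal amplitude is the
# least-squares scale factor, and the best (shift, scale) pair wins
best = None
for lag in range(-10, 11):
    w = [template(t - lag) for t in range(n)]
    energy = sum(wi * wi for wi in w)
    scale = sum(wi * xi for wi, xi in zip(w, trial)) / energy
    resid = sum((xi - scale * wi) ** 2 for wi, xi in zip(w, trial))
    if best is None or resid < best[0]:
        best = (resid, lag, scale)
_, est_lat, est_amp = best
```

Averaging such trials without modeling the latency jitter smears the waveshape, which is the failure of fixed-waveform methods the abstract alludes to.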
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slovik, G.C.
1981-08-01
A new three region steam drum model has been developed. This model differs from previous works in that it assumes the existence of three regions within the steam drum: a steam region, a mid region (assumed to be under saturation conditions at steady state), and a bottom region (having a mixed mean subcooled enthalpy).
Thomas, Philipp; Rammsayer, Thomas; Schweizer, Karl; Troche, Stefan
2015-01-01
Numerous studies reported a strong link between working memory capacity (WMC) and fluid intelligence (Gf), although views differ in respect to how close these two constructs are related to each other. In the present study, we used a WMC task with five levels of task demands to assess the relationship between WMC and Gf by means of a new methodological approach referred to as fixed-links modeling. Fixed-links models belong to the family of confirmatory factor analysis (CFA) and are of particular interest for experimental, repeated-measures designs. With this technique, processes systematically varying across task conditions can be disentangled from processes unaffected by the experimental manipulation. Proceeding from the assumption that experimental manipulation in a WMC task leads to increasing demands on WMC, the processes systematically varying across task conditions can be assumed to be WMC-specific. Processes not varying across task conditions, on the other hand, are probably independent of WMC. Fixed-links models allow for representing these two kinds of processes by two independent latent variables. In contrast to traditional CFA where a common latent variable is derived from the different task conditions, fixed-links models facilitate a more precise or purified representation of the WMC-related processes of interest. By using fixed-links modeling to analyze data of 200 participants, we identified a non-experimental latent variable, representing processes that remained constant irrespective of the WMC task conditions, and an experimental latent variable which reflected processes that varied as a function of experimental manipulation. This latter variable represents the increasing demands on WMC and, hence, was considered a purified measure of WMC controlled for the constant processes. Fixed-links modeling showed that both the purified measure of WMC (β = .48) as well as the constant processes involved in the task (β = .45) were related to Gf. 
Taken together, these two latent variables explained the same portion of variance of Gf as a single latent variable obtained by traditional CFA (β = .65) indicating that traditional CFA causes an overestimation of the effective relationship between WMC and Gf. Thus, fixed-links modeling provides a feasible method for a more valid investigation of the functional relationship between specific constructs.
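The fixed-links idea, fixing one latent variable's loadings to be constant across conditions and the other's to increase with task demand, implies a specific covariance pattern across conditions: cov(i, j) = var(const) + lam_i * lam_j * var(exp). A simulation sketch of that decomposition; the loading pattern 1..5, variances, and error level are assumed for illustration, not the study's fitted values:

```python
import random
import statistics

random.seed(5)

# five task conditions: each score is constant latent + loading * experimental
# latent + error; loadings increase with assumed task demand
loadings_exp = [1, 2, 3, 4, 5]
n = 50000
scores = []
for _ in range(n):
    const = random.gauss(0, 1)   # processes unaffected by task demands
    exper = random.gauss(0, 1)   # demand-sensitive (WMC-specific) processes
    scores.append([const + lam * exper + random.gauss(0, 0.5)
                   for lam in loadings_exp])

def cov(i, j):
    """Empirical covariance between conditions i and j."""
    xi = [s[i] for s in scores]
    xj = [s[j] for s in scores]
    mi, mj = statistics.mean(xi), statistics.mean(xj)
    return sum((a - mi) * (b - mj) for a, b in zip(xi, xj)) / (n - 1)

cov_01 = cov(0, 1)   # model-implied: 1 + 1*2 = 3
cov_34 = cov(3, 4)   # model-implied: 1 + 4*5 = 21
```

Because the loadings are fixed rather than estimated, the two latent variables stay identified and uncorrelated by construction, which is what lets the demand-sensitive variance be read off separately.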
Correlations Between the Contributions of Individual IVS Analysis Centers
NASA Technical Reports Server (NTRS)
Bockmann, Sarah; Artz, Thomas; Nothnagel, Axel
2010-01-01
Within almost all space-geodetic techniques, contributions of different analysis centers (ACs) are combined in order to improve the robustness of the final product. So far, the contributing series are assumed to be independent as each AC processes the observations in different ways. However, the series cannot be completely independent as each analyst uses the same set of original observations and many applied models are subject to conventions used by each AC. In this paper, it is shown that neglecting correlations between the contributing series yields overly optimistic formal errors and small, but insignificant, errors in the estimated parameters derived from the adjustment of the combined solution.
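The effect on formal errors follows from the variance of a mean of correlated estimates: Var = (sigma^2 / n) (1 + (n - 1) rho), versus sigma^2 / n under assumed independence. A simulation sketch with illustrative numbers (n_ac, rho, and sigma are not values from the paper):

```python
import random
import statistics

random.seed(11)

def combined_estimates(n_ac=5, rho=0.6, sigma=1.0, trials=20000):
    """Simulate one parameter estimated by n_ac analysis centers whose
    errors share a common part (same observations and conventions, giving
    correlation rho) and return the empirical variance of their average."""
    common_sd = (rho * sigma ** 2) ** 0.5
    indiv_sd = ((1.0 - rho) * sigma ** 2) ** 0.5
    means = []
    for _ in range(trials):
        common = random.gauss(0, common_sd)   # shared error component
        acs = [common + random.gauss(0, indiv_sd) for _ in range(n_ac)]
        means.append(sum(acs) / n_ac)
    return statistics.variance(means)

n_ac, rho, sigma = 5, 0.6, 1.0
emp_var = combined_estimates(n_ac, rho, sigma)
naive_var = sigma ** 2 / n_ac                            # independence assumed
true_var = sigma ** 2 / n_ac * (1 + (n_ac - 1) * rho)    # correlations included
```

With these numbers the naive formal variance (0.2) understates the actual variance of the combined estimate (0.68) by more than a factor of three, which is the "too optimistic formal errors" effect.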
Optimal message log reclamation for independent checkpointing
NASA Technical Reports Server (NTRS)
Wang, Yi-Min; Fuchs, W. Kent
1993-01-01
Independent (uncoordinated) checkpointing for parallel and distributed systems allows maximum process autonomy but suffers from possible domino effects and the associated storage space overhead for maintaining multiple checkpoints and message logs. In most research on checkpointing and recovery, it was assumed that only the checkpoints and message logs older than the global recovery line can be discarded. It is shown how recovery line transformation and decomposition can be applied to the problem of efficiently identifying all discardable message logs, thereby achieving optimal garbage collection. Communication trace-driven simulation for several parallel programs is used to show the benefits of the proposed algorithm for message log reclamation.
Experimental measurement-device-independent quantum digital signatures over a metropolitan network
NASA Astrophysics Data System (ADS)
Yin, Hua-Lei; Wang, Wei-Long; Tang, Yan-Lin; Zhao, Qi; Liu, Hui; Sun, Xiang-Xiang; Zhang, Wei-Jun; Li, Hao; Puthoor, Ittoop Vergheese; You, Li-Xing; Andersson, Erika; Wang, Zhen; Liu, Yang; Jiang, Xiao; Ma, Xiongfeng; Zhang, Qiang; Curty, Marcos; Chen, Teng-Yun; Pan, Jian-Wei
2017-04-01
Quantum digital signatures (QDSs) provide a means for signing electronic communications with information-theoretic security. However, all previous demonstrations of quantum digital signatures assume trusted measurement devices. This renders them vulnerable to detector side-channel attacks, just like quantum key distribution. Here we exploit a measurement-device-independent (MDI) quantum network, over a metropolitan area, to perform a field test of a three-party MDI QDS scheme that is secure against any detector side-channel attack. In so doing, we are able to successfully sign a binary message with a security level of about 10^-7. Remarkably, our work demonstrates the feasibility of MDI QDSs for practical applications.
Source separation on hyperspectral cube applied to dermatology
NASA Astrophysics Data System (ADS)
Mitra, J.; Jolivot, R.; Vabres, P.; Marzani, F. S.
2010-03-01
This paper proposes a method of quantification of the components underlying the human skin that are supposed to be responsible for the effective reflectance spectrum of the skin over the visible wavelength. The method is based on independent component analysis assuming that the epidermal melanin and the dermal haemoglobin absorbance spectra are independent of each other. The method extracts the source spectra that correspond to the ideal absorbance spectra of melanin and haemoglobin. The noisy melanin spectrum is fixed using a polynomial fit and the quantifications associated with it are reestimated. The results produce feasible quantifications of each source component in the examined skin patch.
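The underlying mixing model is Beer-Lambert: the measured absorbance is a weighted sum of chromophore spectra, A(l) = c_mel * E_mel(l) + c_hb * E_hb(l). The paper recovers the source spectra blindly with ICA; as a simpler sketch, the snippet below assumes the source spectra are known and fits only the quantities c_mel and c_hb by least squares (2x2 normal equations). All spectra and concentrations are synthetic, illustrative values:

```python
# synthetic chromophore absorbance spectra over the visible range
wavelengths = list(range(450, 701, 10))
e_mel = [1.70 - 0.002 * (l - 450) for l in wavelengths]        # decays with wavelength
e_hb = [1.0 if 520 <= l <= 590 else 0.3 for l in wavelengths]  # crude Hb band

c_mel_true, c_hb_true = 0.8, 0.4
absorbance = [c_mel_true * m + c_hb_true * h for m, h in zip(e_mel, e_hb)]

# normal equations for the least-squares fit of [c_mel, c_hb]
mm = sum(m * m for m in e_mel)
mh = sum(m * h for m, h in zip(e_mel, e_hb))
hh = sum(h * h for h in e_hb)
ma = sum(m * a for m, a in zip(e_mel, absorbance))
ha = sum(h * a for h, a in zip(e_hb, absorbance))
det = mm * hh - mh * mh
c_mel = (ma * hh - ha * mh) / det
c_hb = (mm * ha - mh * ma) / det
```

ICA is needed in the paper precisely because the true source spectra are not known in advance; once they are extracted, the per-pixel quantification reduces to a linear fit like this one.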
Experimental Measurement-Device-Independent Entanglement Detection
NASA Astrophysics Data System (ADS)
Nawareg, Mohamed; Muhammad, Sadiq; Amselem, Elias; Bourennane, Mohamed
2015-02-01
Entanglement is one of the most puzzling features of quantum theory and of great importance for the new field of quantum information. The determination whether a given state is entangled or not is one of the most challenging open problems of the field. Here we report on the experimental demonstration of measurement-device-independent (MDI) entanglement detection using the witness method for general two-qubit photon polarization systems. In the MDI setting, there is no requirement to assume perfect implementations or to trust the measurement devices. This experimental demonstration can be generalized for the investigation of properties of quantum systems and for the realization of cryptography and communication protocols.
Influence of water mist on propagation and suppression of laminar premixed flame
NASA Astrophysics Data System (ADS)
Belyakov, Nikolay S.; Babushok, Valeri I.; Minaev, Sergei S.
2018-03-01
The combustion of premixed gas mixtures containing micro-droplets of water was studied in a one-dimensional approximation. The dependence of the burning velocity and flammability limits on the initial conditions and on the properties of the liquid droplets was analyzed, along with the effects of droplet size and concentration of the added liquid. It was demonstrated that droplets with smaller diameters are more effective in reducing the flame velocity. For droplets vaporizing in the reaction zone, the burning velocity is independent of droplet size and depends only on the concentration of added liquid. As the droplet diameter increases further, the droplets pass through the reaction zone and complete vaporization in the combustion products. It was demonstrated that for droplets above a certain size there are two stable stationary modes of flame propagation, with a transition of hysteresis type. The critical conditions for the transition are due to the appearance of a temperature maximum at the flame front and a temperature gradient with heat losses from the reaction zone to the products, resulting from the vaporization of droplets passing through the reaction zone. These critical conditions are similar to those of the classical flammability limits for flames with a thermal mechanism of propagation. The maximum decrease in the burning velocity and in the combustion temperature at the critical turning point corresponds to the predictions of the classical flammability-limit theories of Zel'dovich and Spalding. A stability analysis of the stationary modes of flame propagation in the presence of water mist showed the absence of oscillatory processes within the assumed model.
Effect of high strain rates on peak stress in a Zr-based bulk metallic glass
NASA Astrophysics Data System (ADS)
Sunny, George; Yuan, Fuping; Prakash, Vikas; Lewandowski, John
2008-11-01
The mechanical behavior of Zr41.25Ti13.75Cu12.5Ni10Be22.5 (LM-1) has been extensively characterized under quasistatic loading conditions; however, its behavior under dynamic loading is currently not well understood. A split-Hopkinson pressure bar (SHPB) and a single-stage gas gun are employed to characterize the mechanical behavior of LM-1 in the strain-rate regime of 10^2-10^5/s. The SHPB experiments are conducted with a tapered insert design to mitigate the effects of stress concentrations and preferential failure at the specimen-insert interface. The higher strain-rate plate-impact compression-and-shear experiments are conducted by impacting a thick tungsten carbide (WC) flyer plate with a sandwich sample comprising a thin bulk metallic glass specimen between two thicker WC target plates. Specimens employed in the SHPB experiments failed in the gage section at a peak stress of approximately 1.8 GPa. Specimens in the high strain-rate plate-impact experiments exhibited a flow stress in shear of approximately 0.9 GPa, regardless of the shear strain rate. The flow stress under the plate-impact conditions was converted to an equivalent flow stress under uniaxial compression by assuming von Mises-like material behavior and accounting for the plane-strain conditions. The results of these experiments, when compared to previous work conducted at quasistatic loading rates, indicate that the peak stress of LM-1 is essentially strain-rate independent over the strain-rate range up to 10^5/s.
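As a minimal worked sketch of the conversion mentioned above: under the von Mises assumption alone, an equivalent uniaxial flow stress follows from a shear flow stress as sigma = sqrt(3)*tau. The abstract additionally accounts for plane-strain conditions; that correction is omitted here, so this is an order-of-magnitude illustration, not the paper's exact calculation.

```python
import math

tau_shear_gpa = 0.9                     # reported flow stress in shear
sigma_eq_gpa = math.sqrt(3) * tau_shear_gpa  # von Mises equivalent uniaxial stress
print(round(sigma_eq_gpa, 2))           # 1.56
```

The resulting ~1.56 GPa is of the same order as the ~1.8 GPa peak stress seen in the SHPB tests, consistent with the abstract's conclusion of near rate-independence.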
Using Correlation to Compute Better Probability Estimates in Plan Graphs
NASA Technical Reports Server (NTRS)
Bryce, Daniel; Smith, David E.
2006-01-01
Plan graphs are commonly used in planning to help compute heuristic "distance" estimates between states and goals. A few authors have also attempted to use plan graphs in probabilistic planning to compute estimates of the probability that propositions can be achieved and actions can be performed. This is done by propagating probability information forward through the plan graph from the initial conditions through each possible action to the action effects, and hence to the propositions at the next layer of the plan graph. The problem with these calculations is that they make very strong independence assumptions - in particular, they usually assume that the preconditions for each action are independent of each other. This can lead to gross overestimates of probability when the plans for those preconditions interfere with each other, and to gross underestimates of probability when there is synergy between the plans for two or more preconditions. In this paper we introduce a notion of the binary correlation between two propositions or actions within a plan graph, show how to propagate this information within a plan graph, and show how this improves probability estimates for planning. This notion of correlation can be thought of as a continuous generalization of the notion of mutual exclusion (mutex) often used in plan graphs. At one extreme (correlation = 0), two propositions or actions are completely mutex; with correlation = 1 they are independent, and with correlation > 1 they are synergistic. Intermediate values can and do occur, indicating different degrees to which propositions and actions interfere or are synergistic. We compare this approach with another recent approach by Bryce that computes probability estimates using Monte Carlo simulation of possible worlds in plan graphs.
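A minimal sketch of the correlation idea (our own construction, not the authors' exact propagation algorithm): a joint probability estimated as corr * P(A) * P(B), so that corr = 0 recovers mutual exclusion, corr = 1 independence, and corr > 1 synergy. The clipping against the marginals is our addition to keep the estimate in a valid range.

```python
def joint_probability(p_a, p_b, corr):
    """Correlation-adjusted estimate of P(A and B), clipped to a valid range."""
    return min(corr * p_a * p_b, min(p_a, p_b))

print(round(joint_probability(0.5, 0.4, 0.0), 3))  # mutex: 0.0
print(round(joint_probability(0.5, 0.4, 1.0), 3))  # independent: 0.2
print(round(joint_probability(0.5, 0.4, 1.5), 3))  # synergistic: 0.3
```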
The oxygen-18 isotope approach for measuring aquatic metabolism in high-productivity waters
Tobias, C.R.; Böhlke, J.K.; Harvey, J.W.
2007-01-01
We examined the utility of δ18O2 measurements in estimating gross primary production (P), community respiration (R), and net metabolism (P:R) through diel cycles in a productive agricultural stream located in the midwestern U.S.A. Large diel swings in O2 (∼200 μmol L-1) were accompanied by large diel variation in δ18O2 (∼10‰). Simultaneous gas transfer measurements and laboratory-derived isotopic fractionation factors for O2 during respiration (αr) were used in conjunction with the diel monitoring of O2 and δ18O2 to calculate P, R, and P:R using three independent isotope-based methods. These estimates were compared to each other and against the traditional "open-channel diel O2-change" technique that lacked δ18O2. A principal advantage of the δ18O2 measurements was quantification of diel variation in R, which increased by up to 30% during the day, and the diel pattern in R was variable and not necessarily predictable from assumed temperature effects on R. The P, R, and P:R estimates calculated using the isotope-based approaches showed high sensitivity to the assumed system fractionation factor (αr). The optimum modeled αr values (0.986-0.989) were roughly consistent with the laboratory-derived values, but larger (i.e., less fractionation) than αr values typically reported for enzyme-limited respiration in open water environments. Because of large diel variation in O2, P:R could not be estimated by directly applying the typical steady-state solution to the O2 and 18O-O2 mass balance equations in the absence of gas transfer data. Instead, our results indicate that a modified steady-state solution (the daily mean value approach) could be used with time-averaged O2 and δ18O2 measurements to calculate P:R independent of gas transfer. This approach was applicable under specifically defined, net heterotrophic conditions.
The diel cycle of increasing daytime R and decreasing nighttime R was only partially explained by temperature variation, but could be consistent with the diel production/consumption of labile dissolved organic carbon from photosynthesis. © 2007, by the American Society of Limnology and Oceanography, Inc.
Xu, Xiaole; Chen, Shengyong
2014-01-01
This paper investigates the finite-time consensus problem of leader-following multiagent systems. The dynamical models of all following agents and of the leader are assumed to take the same general linear form, and the interconnection topology among the agents is assumed to be switching and undirected. We mostly consider the continuous-time case. Assuming that the states of neighbouring agents are known to each agent, a sufficient condition is established for finite-time consensus via a neighbour-based state feedback protocol. When the states of neighbouring agents are not available and only their outputs can be accessed, a distributed observer-based consensus protocol is proposed for each following agent. A sufficient condition is provided in terms of linear matrix inequalities to design the observer-based consensus protocol, which makes the multiagent systems achieve finite-time consensus under switching topologies. We then discuss the counterparts for the discrete-time case. Finally, we provide an illustrative example to show the effectiveness of the design approach. PMID:24883367
Device-independent quantum private query
NASA Astrophysics Data System (ADS)
Maitra, Arpita; Paul, Goutam; Roy, Sarbani
2017-04-01
In quantum private query (QPQ), a client obtains values corresponding to his or her query only, and nothing else from the server, and the server does not get any information about the queries. V. Giovannetti et al. [Phys. Rev. Lett. 100, 230502 (2008)], 10.1103/PhysRevLett.100.230502 gave the first QPQ protocol and since then quite a few variants and extensions have been proposed. However, none of the existing protocols are device independent; i.e., all of them assume implicitly that the entangled states supplied to the client and the server are of a certain form. In this work, we exploit the idea of a local CHSH game and connect it with the scheme of Y. G. Yang et al. [Quantum Info. Process. 13, 805 (2014)], 10.1007/s11128-013-0692-8 to present the concept of a device-independent QPQ protocol.
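The device-independent construction above builds on a local CHSH game. As a sketch of the quantity involved (an illustration of the CHSH ingredient, not the authors' QPQ protocol), the singlet state yields correlations E(a, b) = -cos(a - b), and the optimal measurement angles reach the Tsirelson bound 2*sqrt(2), above the classical bound of 2; certifying such a violation is what removes the need to trust the source and devices.

```python
import math

def chsh(a, a2, b, b2):
    """CHSH combination for singlet-state correlations E(x, y) = -cos(x - y)."""
    E = lambda x, y: -math.cos(x - y)
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

s = chsh(0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)  # optimal angles
print(round(s, 3))  # 2.828, i.e. the Tsirelson bound 2*sqrt(2)
```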
Halo-independence with quantified maximum entropy at DAMA/LIBRA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fowlie, Andrew, E-mail: andrew.j.fowlie@googlemail.com
2017-10-01
Using the DAMA/LIBRA anomaly as an example, we formalise the notion of halo-independence in the context of Bayesian statistics and quantified maximum entropy. We consider an infinite set of possible profiles, weighted by an entropic prior and constrained by a likelihood describing noisy measurements of modulated moments by DAMA/LIBRA. Assuming an isotropic dark matter (DM) profile in the galactic rest frame, we find the most plausible DM profiles and predictions for unmodulated signal rates at DAMA/LIBRA. The entropic prior contains an a priori unknown regularisation factor, β, that describes the strength of our conviction that the profile is approximately Maxwellian. By varying β, we smoothly interpolate between a halo-independent and a halo-dependent analysis, thus exploring the impact of prior information about the DM profile.
A new device-independent dimension witness and its experimental implementation
NASA Astrophysics Data System (ADS)
Cai, Yu; Bancal, Jean-Daniel; Romero, Jacquiline; Scarani, Valerio
2016-07-01
A dimension witness is a criterion that sets a lower bound on the dimension needed to reproduce the observed data. Three types of dimension witnesses can be found in the literature: device-dependent ones, in which the bound is obtained assuming some knowledge of the state and the measurements; device-independent prepare-and-measure ones, which can be applied to any system, including classical ones; and device-independent Bell-based ones, which certify the minimal dimension of some entangled systems. Here we consider the Collins-Gisin-Linden-Massar-Popescu Bell-type inequality for four outcomes. We show that a sufficiently high violation of this inequality witnesses d ≥ 4 and present a proof-of-principle experimental observation of such a violation. This constitutes the first experimental violation of the third type of dimension witness beyond qutrits.
Model-independent curvature determination with 21 cm intensity mapping experiments
NASA Astrophysics Data System (ADS)
Witzemann, Amadeus; Bull, Philip; Clarkson, Chris; Santos, Mario G.; Spinelli, Marta; Weltman, Amanda
2018-06-01
Measurements of the spatial curvature of the Universe have improved significantly in recent years, but still tend to require strong assumptions to be made about the equation of state of dark energy (DE) in order to reach sub-percent precision. When these assumptions are relaxed, strong degeneracies arise that make it hard to disentangle DE and curvature, degrading the constraints. We show that forthcoming 21 cm intensity mapping experiments such as Hydrogen Intensity and Real-time Analysis eXperiment (HIRAX) are ideally designed to carry out model-independent curvature measurements, as they can measure the clustering signal at high redshift with sufficient precision to break many of the degeneracies. We consider two different model-independent methods, based on `avoiding' the DE-dominated regime and non-parametric modelling of the DE equation of state, respectively. Our forecasts show that HIRAX will be able to improve upon current model-independent constraints by around an order of magnitude, reaching percent-level accuracy even when an arbitrary DE equation of state is assumed. In the same model-independent analysis, the sample variance limit for a similar survey is another order of magnitude better.
Funane, Tsukasa; Atsumori, Hirokazu; Katura, Takusige; Obata, Akiko N; Sato, Hiroki; Tanikawa, Yukari; Okada, Eiji; Kiguchi, Masashi
2014-01-15
To quantify the effect of absorption changes in the deep tissue (cerebral) and shallow tissue (scalp, skin) layers on functional near-infrared spectroscopy (fNIRS) signals, a method using multi-distance (MD) optodes and independent component analysis (ICA), referred to as the MD-ICA method, is proposed. In previous studies, when the signal from the shallow tissue layer (shallow signal) needed to be eliminated, it was often assumed that the shallow signal had no correlation with the signal from the deep tissue layer (deep signal). In this study, no relationship between the waveforms of the deep and shallow signals is assumed; instead, it is assumed that both signals are linear combinations of multiple signal sources, which allows the inclusion of a "shared component" (such as systemic signals) that is contained in both layers. The method also assumes that the partial optical path length of the shallow layer does not change, whereas that of the deep layer increases linearly with the source-detector (S-D) distance. Deep- and shallow-layer contribution ratios of each independent component (IC) are calculated using the dependence of the weight of each IC on the S-D distance. Reconstruction of the deep- and shallow-layer signals is performed by summing the ICs weighted by the deep and shallow contribution ratios. Experimental validation of the principle of this technique was conducted using a dynamic phantom with two absorbing layers. Results showed that our method is effective for evaluating deep-layer contributions even if there are high correlations between deep and shallow signals. Next, we applied the method to fNIRS signals obtained on a human head with 5-, 15-, and 30-mm S-D distances during a verbal fluency task, a verbal working memory task (prefrontal area), a finger tapping task (motor area), and a tetrametric visual checkerboard task (occipital area), and then estimated the deep-layer contribution ratio.
To evaluate the signal separation performance of our method, we used the correlation coefficients of a laser-Doppler flowmetry (LDF) signal and a nearest 5-mm S-D distance channel signal with the shallow signal. We demonstrated that the shallow signals have a higher temporal correlation with the LDF signals and with the 5-mm S-D distance channel than the deep signals. These results show the MD-ICA method can discriminate between deep and shallow signals. Copyright © 2013 Elsevier Inc. All rights reserved.
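The distance-dependence idea above can be sketched as follows (a simplified illustration under our own assumptions, not the full MD-ICA algorithm): model the weight of an IC at S-D distance d as w(d) = w_shallow + w_deep * d, since the deep layer's partial path length grows roughly linearly with d while the shallow layer's stays constant. The distances and weights below are hypothetical.

```python
import numpy as np

distances = np.array([5.0, 15.0, 30.0])   # S-D distances in mm
weights = np.array([0.30, 0.52, 0.85])    # hypothetical weights of one IC

# Linear fit of weight vs distance: intercept ~ shallow part, slope ~ deep part.
slope, intercept = np.polyfit(distances, weights, deg=1)

# Contribution ratio at the longest distance, where the deep layer is probed.
deep = slope * distances[-1]
shallow = intercept
deep_ratio = deep / (deep + shallow)
print(round(float(deep_ratio), 2))
```

An IC whose weight barely changes with distance would instead yield a slope near zero and hence a shallow-dominated classification.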
Detecting consistent patterns of directional adaptation using differential selection codon models.
Parto, Sahar; Lartillot, Nicolas
2017-06-23
Phylogenetic codon models are often used to characterize the selective regimes acting on protein-coding sequences. Recent methodological developments have led to models explicitly accounting for the interplay between mutation and selection, by modeling the amino acid fitness landscape along the sequence. However, thus far, most of these models have assumed that the fitness landscape is constant over time. Fluctuations of the fitness landscape may often be random or depend on complex and unknown factors. However, some organisms may be subject to systematic changes in selective pressure, resulting in reproducible molecular adaptations across independent lineages subject to similar conditions. Here, we introduce a codon-based differential selection model, which aims to detect and quantify the fine-grained consistent patterns of adaptation at the protein-coding level, as a function of external conditions experienced by the organism under investigation. The model parameterizes the global mutational pressure, as well as the site- and condition-specific amino acid selective preferences. This phylogenetic model is implemented in a Bayesian MCMC framework. After validation with simulations, we applied our method to a dataset of HIV sequences from patients with known HLA genetic background. Our differential selection model detects and characterizes differentially selected coding positions specifically associated with two different HLA alleles. Our differential selection model is able to identify consistent molecular adaptations as a function of repeated changes in the environment of the organism. These models can be applied to many other problems, ranging from viral adaptation to evolution of life-history strategies in plants or animals.
A framework for analyzing contagion in assortative banking networks
Hurd, Thomas R.; Gleeson, James P.; Melnik, Sergey
2017-01-01
We introduce a probabilistic framework that represents stylized banking networks with the aim of predicting the size of contagion events. Most previous work on random financial networks assumes independent connections between banks, whereas our framework explicitly allows for (dis)assortative edge probabilities (i.e., a tendency for small banks to link to large banks). We analyze default cascades triggered by shocking the network and find that the cascade can be understood as an explicit iterated mapping on a set of edge probabilities that converges to a fixed point. We derive a cascade condition, analogous to the basic reproduction number R0 in epidemic modelling, that characterizes whether or not a single initially defaulted bank can trigger a cascade that extends to a finite fraction of the infinite network. This cascade condition is an easily computed measure of the systemic risk inherent in a given banking network topology. We use percolation theory for random networks to derive a formula for the frequency of global cascades. These analytical results are shown to provide limited quantitative agreement with Monte Carlo simulation studies of finite-sized networks. We show that edge-assortativity, the propensity of nodes to connect to similar nodes, can have a strong effect on the level of systemic risk as measured by the cascade condition. However, the effect of assortativity on systemic risk is subtle, and we propose a simple graph theoretic quantity, which we call the graph-assortativity coefficient, that can be used to assess systemic risk. PMID:28231324
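The fixed-point idea can be sketched in the independent-connection baseline that the paper generalizes (a simplified special case of ours, not the assortative framework itself): on a Poisson random graph with mean degree z, the probability u that an edge leads to a finite component solves u = exp(z*(u - 1)), and 1 - u approximates the frequency with which a single seed triggers a global cascade when the cascade condition z > 1 holds.

```python
import math

def global_cascade_frequency(z, iters=200):
    """Iterated mapping on an edge probability, converging to a fixed point."""
    u = 0.5
    for _ in range(iters):
        u = math.exp(z * (u - 1))
    return 1.0 - u

print(round(global_cascade_frequency(2.0), 3))  # supercritical: large cascades
print(round(global_cascade_frequency(0.5), 3))  # subcritical: no global cascade
```

For z = 2 the frequency is close to the analytic giant-component fraction (~0.797); for z = 0.5 the fixed point is u = 1 and no global cascade occurs, mirroring the role of the cascade condition as an R0-like threshold.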
Nees, Frauke; Vollstädt-Klein, Sabine; Fauth-Bühler, Mira; Steiner, Sabina; Mann, Karl; Poustka, Luise; Banaschewski, Tobias; Büchel, Christian; Conrod, Patricia J; Garavan, Hugh; Heinz, Andreas; Ittermann, Bernd; Artiges, Eric; Paus, Tomas; Pausova, Zdenka; Rietschel, Marcella; Smolka, Michael N; Struve, Maren; Loth, Eva; Schumann, Gunter; Flor, Herta
2012-11-01
Adolescence is a transition period that is assumed to be characterized by increased sensitivity to reward. While there is growing research on reward processing in adolescents, investigations into the engagement of brain regions under different reward-related conditions in one sample of healthy adolescents, especially in a target age group, are missing. We aimed to identify brain regions preferentially activated in a reaction time task (monetary incentive delay (MID) task) and a simple guessing task (SGT) in a sample of 14-year-old adolescents (N = 54) using two commonly used reward paradigms. Functional magnetic resonance imaging was employed during the MID with big versus small versus no win conditions and the SGT with big versus small win and big versus small loss conditions. Analyses focused on changes in blood oxygen level-dependent contrasts during reward and punishment processing in anticipation and feedback phases. We found clear magnitude-sensitive response in reward-related brain regions such as the ventral striatum during anticipation in the MID task, but not in the SGT. This was also true for reaction times. The feedback phase showed clear reward-related, but magnitude-independent, response patterns, for example in the anterior cingulate cortex, in both tasks. Our findings highlight neural and behavioral response patterns engaged in two different reward paradigms in one sample of 14-year-old healthy adolescents and might be important for reference in future studies investigating reward and punishment processing in a target age group.
A framework for analyzing contagion in assortative banking networks.
Hurd, Thomas R; Gleeson, James P; Melnik, Sergey
2017-01-01
We introduce a probabilistic framework that represents stylized banking networks with the aim of predicting the size of contagion events. Most previous work on random financial networks assumes independent connections between banks, whereas our framework explicitly allows for (dis)assortative edge probabilities (i.e., a tendency for small banks to link to large banks). We analyze default cascades triggered by shocking the network and find that the cascade can be understood as an explicit iterated mapping on a set of edge probabilities that converges to a fixed point. We derive a cascade condition, analogous to the basic reproduction number R0 in epidemic modelling, that characterizes whether or not a single initially defaulted bank can trigger a cascade that extends to a finite fraction of the infinite network. This cascade condition is an easily computed measure of the systemic risk inherent in a given banking network topology. We use percolation theory for random networks to derive a formula for the frequency of global cascades. These analytical results are shown to provide limited quantitative agreement with Monte Carlo simulation studies of finite-sized networks. We show that edge-assortativity, the propensity of nodes to connect to similar nodes, can have a strong effect on the level of systemic risk as measured by the cascade condition. However, the effect of assortativity on systemic risk is subtle, and we propose a simple graph theoretic quantity, which we call the graph-assortativity coefficient, that can be used to assess systemic risk.
How Many Separable Sources? Model Selection In Independent Components Analysis
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
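The cross-validation alternative can be sketched with a generic probabilistic-PCA-style held-out log-likelihood for choosing the number of retained components. This is an illustration under our own assumptions with synthetic data, not the authors' mixed ICA/PCA algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, q_true = 600, 6, 3
W = rng.standard_normal((d, q_true))
X = rng.standard_normal((n, q_true)) @ W.T + 0.1 * rng.standard_normal((n, d))

def heldout_loglik(train, test, q):
    """Mean held-out log-likelihood of a rank-q Gaussian (PPCA-style) model."""
    mu = train.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov((train - mu).T))
    evals, evecs = evals[::-1], evecs[:, ::-1]    # sort descending
    sigma2 = evals[q:].mean()                     # noise floor from the rest
    adj = np.concatenate([evals[:q], np.full(d - q, sigma2)])
    Z = (test - mu) @ evecs                       # coordinates in the eigenbasis
    return (-0.5 * (np.log(2 * np.pi * adj) + Z**2 / adj).sum(axis=1)).mean()

train, test = X[:n // 2], X[n // 2:]
scores = {q: heldout_loglik(train, test, q) for q in range(1, d)}
best_q = max(scores, key=scores.get)
print(best_q)
```

With a strong three-dimensional signal subspace, the held-out likelihood favours models near the true dimensionality, whereas an in-sample criterion can keep improving as components are added.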
A frequency quantum interpretation of the surface renewal model of mass transfer
Mondal, Chanchal
2017-01-01
The surface of a turbulent liquid is visualized as consisting of a large number of chaotic eddies or liquid elements. Assuming that surface elements of a particular age have renewal frequencies that are integral multiples of a fundamental frequency quantum, and further assuming that the renewal frequency distribution is of the Boltzmann type, performing a population balance for these elements leads to the Danckwerts surface age distribution. The basic quantum is what has been traditionally called the rate of surface renewal. The Higbie surface age distribution follows if the renewal frequency distribution of such elements is assumed to be continuous. Four age distributions, which reflect different start-up conditions of the absorption process, are then used to analyse transient physical gas absorption into a large volume of liquid, assuming negligible gas-side mass-transfer resistance. The first two are different versions of the Danckwerts model, the third one is based on the uniform and Higbie distributions, while the fourth one is a mixed distribution. For the four cases, theoretical expressions are derived for the rates of gas absorption and dissolved-gas transfer to the bulk liquid. Under transient conditions, these two rates are not equal and have an inverse relationship. However, with the progress of absorption towards steady state, they approach one another. Assuming steady-state conditions, the conventional one-parameter Danckwerts age distribution is generalized to a two-parameter age distribution. Like the two-parameter logarithmic normal distribution, this distribution can also capture the bell-shaped nature of the distribution of the ages of surface elements observed experimentally in air–sea gas and heat exchange. Estimates of the liquid-side mass-transfer coefficient made using these two distributions for the absorption of hydrogen and oxygen in water are very close to one another and are comparable to experimental values reported in the literature. 
PMID:28791137
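The two classical limits referenced above admit a short worked sketch: under the Danckwerts age distribution phi(t) = s*exp(-s*t), the liquid-side mass-transfer coefficient is k_L = sqrt(D*s), while the Higbie model with contact time t_c gives k_L = 2*sqrt(D/(pi*t_c)). The numbers below (roughly oxygen in water, with an assumed renewal rate) are illustrative, not taken from the paper.

```python
import math

D = 2.1e-9        # m^2/s, diffusivity of O2 in water (approximate)
s = 0.1           # 1/s, surface renewal rate (assumed)
t_c = 1.0 / s     # s, equivalent Higbie contact time

k_danckwerts = math.sqrt(D * s)               # Danckwerts: k_L = sqrt(D*s)
k_higbie = 2.0 * math.sqrt(D / (math.pi * t_c))  # Higbie: k_L = 2*sqrt(D/(pi*t_c))
print(f"{k_danckwerts:.2e} {k_higbie:.2e}")   # both of order 1e-5 m/s
```

Both estimates land within about 15% of each other, consistent with the abstract's observation that the two distributions give very similar mass-transfer coefficients.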
THE EFFECT OF CHLORINE DEMAND ON INACTIVATION RATE CONSTANT
Ct (disinfectant concentration multiplied by exposure time) values are used by the US EPA to evaluate the efficacy of disinfection of microorganisms under various drinking water treatment conditions. First-order decay is usually assumed for the degradation of a disi...
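The point at issue can be sketched numerically: with first-order decay C(t) = C0*exp(-k*t), the delivered Ct is the integral of C over the exposure time, (C0/k)*(1 - exp(-k*t)), which falls below the naive C0*t whenever demand (k > 0) is present. Values below are illustrative.

```python
import math

def delivered_ct(c0, k, t):
    """Integral of C(t) = c0*exp(-k*t) over [0, t]; the effective Ct dose."""
    if k == 0:
        return c0 * t
    return (c0 / k) * (1.0 - math.exp(-k * t))

print(round(delivered_ct(1.0, 0.0, 30), 2))   # no demand: 30.0
print(round(delivered_ct(1.0, 0.05, 30), 2))  # with decay: 15.54
```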
Global Aerosol Optical Models and Lookup Tables for the New MODIS Aerosol Retrieval over Land
NASA Technical Reports Server (NTRS)
Levy, Robert C.; Remer, Loraine A.; Dubovik, Oleg
2007-01-01
Since 2000, MODIS has been deriving aerosol properties over land from MODIS-observed spectral reflectance, by matching the observed reflectance with that simulated for selected aerosol optical models, aerosol loadings, wavelengths, and geometrical conditions (contained in a lookup table, or 'LUT'). Validation exercises have shown that MODIS tends to under-predict aerosol optical depth (τ) in cases of large τ (τ > 1.0), signaling errors in the assumed aerosol optical properties. Using the climatology of almucantar retrievals from the hundreds of global AERONET sunphotometer sites, we found that three spherical-derived models (describing fine-sized dominated aerosol) and one spheroid-derived model (describing coarse-sized dominated aerosol, presumably dust) generally described the range of observed global aerosol properties. The fine-dominated models were separated mainly by their single scattering albedo (ω0), ranging from non-absorbing aerosol (ω0 ≈ 0.95) in developed urban/industrial regions, to neutrally absorbing aerosol (ω0 ≈ 0.90) in forest fire burning and developing industrial regions, to absorbing aerosol (ω0 ≈ 0.85) in regions of savanna/grassland burning. We determined the dominant model type in each region and season, to create a 1° x 1° grid of assumed aerosol type. We used a vector radiative transfer code to create a new LUT, simulating the four aerosol models in four MODIS channels. Independent AERONET observations of spectral τ agree with the new models, indicating that the new models are suitable for use by the MODIS aerosol retrieval.
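A minimal sketch of the LUT-matching step: given reflectances simulated for a grid of aerosol loadings (τ) under one aerosol model, retrieve τ by interpolating the observed reflectance onto the grid. The numbers below are synthetic placeholders, not actual MODIS LUT entries.

```python
import numpy as np

tau_grid = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 3.0])
# Hypothetical simulated top-of-atmosphere reflectance, increasing with tau.
refl_lut = np.array([0.05, 0.09, 0.12, 0.17, 0.24, 0.28])

observed = 0.15
tau_retrieved = np.interp(observed, refl_lut, tau_grid)  # invert the LUT
print(round(float(tau_retrieved), 3))  # 0.8
```

A real retrieval matches several wavelengths and geometries simultaneously and selects among the regional aerosol models first; this one-channel inversion only illustrates the table-matching principle.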
MULTI-STRAND CORONAL LOOP MODEL AND FILTER-RATIO ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bourouaine, Sofiane; Marsch, Eckart, E-mail: bourouaine@mps.mpg.d
2010-01-10
We model a coronal loop as a bundle of seven separate strands or filaments. Each of the loop strands used in this model can independently be heated (near their left footpoints) by Alfven/ion-cyclotron waves via wave-particle interactions. The Alfven waves are assumed to penetrate the strands from their footpoints, at which we consider different wave energy inputs. As a result, the loop strands can have different heating profiles, and the differential heating can lead to a varying cross-field temperature in the total coronal loop. The simulation of Transition Region and Coronal Explorer (TRACE) observations by means of this loop model implies two uniform temperatures along the loop length, one inferred from the 171:195 filter ratio and the other from the 171:284 ratio. The reproduced flat temperature profiles are consistent with those inferred from the observed extreme-ultraviolet coronal loops. According to our model, the flat temperature profile is a consequence of the coronal loop consisting of filaments, which have different temperatures but almost similar emission measures in the cross-field direction. Furthermore, when we assume certain errors in the simulated loop emissions (e.g., due to photometric uncertainties in the TRACE filters) and use the triple-filter analysis, our simulated loop conditions become consistent with those of an isothermal plasma. This implies that the use of TRACE or EUV Imaging Telescope triple filters for observation of a warm coronal loop may not help in determining whether the cross-field isothermal assumption is satisfied or not.
Ramírez-Benavides, William; Monge-Nájera, Julián; Chavarría, Juan B
2009-09-01
The fig pollinating wasps (Hymenoptera: Agaonidae) have obligate arrhenotoky and a breeding structure that fits local mate competition (LMC). It has traditionally been assumed that LMC organisms adjust the sex ratio by laying a greater proportion of male eggs when there is superparasitism (several foundresses in a host). We tested the assumption with two wasp species: Pegoscapus silvestrii, pollinator of Ficus pertusa, and Pegoscapus tonduzi, pollinator of Ficus eximia (= F. citrifolia), in the Central Valley of Costa Rica. The total numbers of wasps and seeds were recorded in individual isolated naturally colonized syconia. There was a constant additive effect between the number of foundresses and the number of males produced in the brood of a syconium, while the number of females decreased. Both wasp species seem to have precise sex ratios and probably lay the male eggs first in the sequence, independently of superparasitism and clutch size; consequently, they have a non-random sex allocation. Each syconium of Ficus pertusa and of F. eximia colonized by one foundress had similar mean numbers of females, males, and seeds. The two species of wasps studied do not seem to adjust the sex ratio when there is superparasitism. Pollinating fig wasp behavior is better explained by those models that do not assume that females make mathematical calculations according to other females' sex ratios, size, number of foundresses, genetic constitution, clutch size or environmental conditions inside the syconium. Our results are in agreement with the constant male number hypothesis, not with sex ratio games.
An efficient approach for treating composition-dependent diffusion within organic particles
O'Meara, Simon; Topping, David O.; Zaveri, Rahul A.; ...
2017-09-07
Mounting evidence demonstrates that under certain conditions the rate of component partitioning between the gas and particle phase in atmospheric organic aerosol is limited by particle-phase diffusion. To date, however, particle-phase diffusion has not been incorporated into regional atmospheric models. An analytical rather than numerical solution to diffusion through organic particulate matter is desirable because of its comparatively small computational expense in regional models. Current analytical models assume diffusion to be independent of composition and therefore use a constant diffusion coefficient. To realistically model diffusion, however, it should be composition-dependent (e.g. due to the partitioning of components that plasticise, vitrify or solidify). This study assesses the modelling capability of an analytical solution to diffusion corrected to account for composition dependence against a numerical solution. Results show reasonable agreement when the gas-phase saturation ratio of a partitioning component is constant and particle-phase diffusion limits partitioning rate (<10% discrepancy in estimated radius change). However, when the saturation ratio of the partitioning component varies, a generally applicable correction cannot be found, indicating that existing methodologies are incapable of deriving a general solution. Until such time as a general solution is found, caution should be given to sensitivity studies that assume constant diffusivity. Furthermore, the correction was implemented in the polydisperse, multi-process Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) and is used to illustrate how the evolution of number size distribution may be accelerated by condensation of a plasticising component onto viscous organic particles.
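The numerical side of the problem — diffusion through a viscous particle with a diffusivity that depends on local composition — can be sketched with an explicit finite-volume scheme on a sphere. The Vignes-type mixing rule and every parameter value below are illustrative assumptions, not the paper's model or MOSAIC's implementation:

```python
import numpy as np

# Plasticiser diffusing into a viscous organic sphere with fixed surface composition
# (i.e., constant gas-phase saturation ratio). All values are illustrative.
D_org, D_pl = 1e-19, 1e-13            # m^2/s: viscous host vs. fully plasticised limit
R, n = 50e-9, 40                      # particle radius (m), radial grid points
r = np.linspace(0.0, R, n)
dr = r[1] - r[0]
rf = 0.5 * (r[:-1] + r[1:])           # cell-face radii

w = np.zeros(n)                       # plasticiser mass fraction profile
w[-1] = 0.5                           # fixed surface value

dt, steps = 2e-3, 50000               # dt chosen below the explicit stability limit
for _ in range(steps):
    D = D_org ** (1.0 - w) * D_pl ** w        # Vignes-type composition-dependent D
    Df = 0.5 * (D[:-1] + D[1:])               # diffusivity at cell faces
    flux = rf**2 * Df * np.diff(w) / dr       # r^2 * D * dw/dr at each face
    w[1:-1] += dt * np.diff(flux) / (r[1:-1] ** 2 * dr)
    w[0] = w[1]                               # symmetry condition at the centre
```

Because D rises steeply with plasticiser content, the composition front advances far faster near the surface than a constant-diffusivity model would predict — the behaviour the correction in the paper is designed to capture.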
Probing the metabolic water contribution to intracellular water using oxygen isotope ratios of PO4
NASA Astrophysics Data System (ADS)
Li, Hui; Yu, Chan; Wang, Fei; Chang, Sae Jung; Yao, Jun; Blake, Ruth E.
2016-05-01
Knowledge of the relative contributions of different water sources to intracellular fluids and body water is important for many fields of study, ranging from animal physiology to paleoclimate. The intracellular fluid environment of cells is challenging to study due to the difficulties of accessing and sampling the contents of intact cells. Previous studies of multicelled organisms, mostly mammals, have estimated body water composition—including metabolic water produced as a byproduct of metabolism—based on indirect measurements of fluids averaged over the whole organism (e.g., blood) combined with modeling calculations. In microbial cells and aquatic organisms, metabolic water is not generally considered to be a significant component of intracellular water, due to the assumed unimpeded diffusion of water across cell membranes. Here we show that the 18O/16O ratio of PO4 in intracellular biomolecules (e.g., DNA) directly reflects the O isotopic composition of intracellular water and thus may serve as a probe allowing direct sampling of the intracellular environment. We present two independent lines of evidence showing a significant contribution of metabolic water to the intracellular water of three environmentally diverse strains of bacteria. Our results indicate that ~30-40% of O in PO4 comprising DNA/biomass in early stationary phase cells is derived from metabolic water, which bolsters previous results and further suggests a constant metabolic water value for cells grown under similar conditions. These results suggest that previous studies assuming identical isotopic compositions for intracellular/extracellular water may need to be reconsidered.
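An estimate like the ~30-40% metabolic water contribution follows from a two-endmember isotope mass balance, which can be sketched in a few lines. The delta-18O values below are hypothetical, not the study's measurements:

```python
def metabolic_water_fraction(delta_cell, delta_ambient, delta_metabolic):
    """Solve delta_cell = f*delta_metabolic + (1-f)*delta_ambient for f."""
    return (delta_cell - delta_ambient) / (delta_metabolic - delta_ambient)

# Illustrative delta-18O values (per mil): intracellular water sits between the
# ambient-water and metabolic-water endmembers.
f = metabolic_water_fraction(delta_cell=-2.0, delta_ambient=-6.0, delta_metabolic=6.0)
print(f"{f:.0%}")   # -> 33%
```

The approach assumes exactly two well-characterized endmembers; in practice the metabolic endmember itself must be inferred, which is part of what the PO4-based probe enables.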
Small Mercury Relativity Orbiter
NASA Technical Reports Server (NTRS)
Bender, Peter L.; Vincent, Mark A.
1989-01-01
The accuracy of solar system tests of gravitational theory could be very much improved by range and Doppler measurements to a Small Mercury Relativity Orbiter. A nearly circular orbit at roughly 2400 km altitude is assumed in order to minimize problems with orbit determination and thermal radiation from the surface. The spacecraft is spin-stabilized and has a 30 cm diameter de-spun antenna. With K-band and X-band ranging systems using a 50 MHz offset sidetone at K-band, a range accuracy of 3 cm appears to be realistically achievable. The estimated spacecraft mass is 50 kg. A consider-covariance analysis was performed to determine how well the Earth-Mercury distance as a function of time could be determined with such a Relativity Orbiter. The minimum data set is assumed to be 40 independent 8-hour arcs of tracking data at selected times during a two year period. The gravity field of Mercury up through degree and order 10 is solved for, along with the initial conditions for each arc and the Earth-Mercury distance at the center of each arc. The considered parameters include the gravity field parameters of degree 11 and 12 plus the tracking station coordinates, the tropospheric delay, and two parameters in a crude radiation pressure model. The conclusion is that the Earth-Mercury distance can be determined to 6 cm accuracy or better. From a modified worst-case analysis, this would lead to roughly 2 orders of magnitude improvement in the knowledge of the precession of perihelion, the relativistic time delay, and the possible change in the gravitational constant with time.
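A consider-covariance analysis of the kind described augments the formal least-squares covariance of the estimated parameters with sensitivity to unestimated "consider" parameters (here, the degree 11-12 gravity terms, station coordinates, etc.). A generic sketch with random stand-in partial-derivative matrices, not the study's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n_est, n_con = 40, 3, 2
A = rng.normal(size=(m, n_est))       # partials w.r.t. estimated parameters
B = rng.normal(size=(m, n_con))       # partials w.r.t. considered parameters
W = np.eye(m) / 0.03**2               # weights for assumed 3 cm range noise
P_con = np.diag([1e-4, 1e-4])         # assumed a priori covariance of consider params

P = np.linalg.inv(A.T @ W @ A)        # formal covariance of the estimate
S = P @ A.T @ W @ B                   # sensitivity of estimate to consider params
P_c = P + S @ P_con @ S.T             # consider covariance (always >= formal)
```

Because the added term is positive semi-definite, considered-but-unestimated parameters can only inflate the quoted uncertainties, which is why the 6 cm Earth-Mercury distance accuracy is a conservative figure.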
Model specification in oral health-related quality of life research.
Kieffer, Jacobien M; Verrips, Erik; Hoogstraten, Johan
2009-10-01
The aim of this study was to analyze conventional wisdom regarding the construction and analysis of oral health-related quality of life (OHRQoL) questionnaires and to outline statistical complications. Most methods used for developing and analyzing questionnaires, such as factor analysis and Cronbach's alpha, presume psychological constructs to be latent, inferring a reflective measurement model with the underlying assumption of local independence. Local independence implies that the latent variable explains why the variables observed are related. Many OHRQoL questionnaires are analyzed as if they were based on a reflective measurement model; local independence is thus assumed. This assumption requires these questionnaires to consist solely of items that reflect, instead of determine, OHRQoL. The tenability of this assumption is the main topic of the present study. It is argued that OHRQoL questionnaires are a mix of both a formative measurement model and a reflective measurement model, thus violating the assumption of local independence. The implications are discussed.
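For reference, Cronbach's alpha — one of the methods the abstract notes presumes a reflective measurement model — is computed from item variances and total-score variance. A minimal sketch with hypothetical questionnaire responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1.0 - sum_item_var / total_var)

# Hypothetical responses of five people to three items (illustrative data only).
scores = [[3, 4, 3], [2, 2, 3], [4, 5, 4], [1, 2, 1], [5, 4, 5]]
alpha = cronbach_alpha(scores)
```

High alpha is typically read as internal consistency, but — as the abstract argues — that reading is only justified when the items reflect, rather than determine, the underlying construct.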
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayashi, A.; Hashimoto, T.; Horibe, M.
In quantum teleportation, neither Alice nor Bob acquires any classical knowledge on teleported states. The teleportation protocol is said to be oblivious to both parties. In remote state preparation (RSP), it is assumed that Alice is given complete classical knowledge on the state that is to be prepared by Bob. Recently, Leung and Shor [e-print quant-ph/0201008] showed that the same amount of classical information as that in teleportation needs to be transmitted in any exact and deterministic RSP protocol that is oblivious to Bob. Assuming that the dimension of subsystems in the prior-entangled state is the same as the dimension of the input space, we study similar RSP protocols, but not necessarily oblivious to Bob. We show that in this case Bob's quantum operation can be safely assumed to be a unitary transformation. We then derive an equation that is a necessary and sufficient condition for such a protocol to exist. By studying this equation, we show that one-qubit RSP requires two classical bits of communication, which is the same amount as in teleportation, even if the protocol is not assumed oblivious to Bob. For higher dimensions, it is still an open question whether the amount of classical communication can be reduced by abandoning oblivious conditions.
NASA Astrophysics Data System (ADS)
Chang, Jenghwa; Aronson, Raphael; Graber, Harry L.; Barbour, Randall L.
1995-05-01
We present results examining the dependence of image quality for imaging in dense scattering media as influenced by the choice of parameters pertaining to the physical measurement and factors influencing the efficiency of the computation. The former includes the density of the weight matrix as affected by the target volume, view angle, and source condition. The latter includes the density of the weight matrix and the type of algorithm used. These were examined by solving a one-step linear perturbation equation derived from the transport equation using three different algorithms with constraints: POCS, CGD, and SART. The above were explored by evaluating four different 3D cylindrical phantom media: a homogeneous medium, a medium containing a single black rod on the axis, one containing a single black rod parallel to the axis, and one containing thirteen black rods arrayed in the shape of an 'X'. Solutions to the forward problem were computed using Monte Carlo methods for an impulse source, from which time-independent and time-harmonic detector responses were calculated. The influence of target volume on image quality and computational efficiency was studied by computing solutions for three types of reconstruction: 1) 3D reconstruction, which considered each voxel individually; 2) 2D reconstruction, which assumed that symmetry along the cylinder axis was known a priori; 3) 2D limited reconstruction, which assumed that only those voxels in the plane of the detectors contribute information to the detector readings. The effect of view angle was explored by comparing computed images obtained from a single source, whose position was varied, as well as for the type of tomographic measurement scheme used (i.e., radial scan versus transaxial scan). The former condition was also examined for the dependence of the above on the choice of source condition [i.e., cw (2D reconstructions) versus time-harmonic (2D limited reconstructions) source].
The efficiency of the computational effort was explored, principally, by conducting a weight matrix 'threshold titration' study. This involved computing the ratio of each matrix element to the maximum element of its row and setting the element to zero if the ratio was less than a preselected threshold. Results obtained showed that all three types of reconstruction provided good image quality. The 3D reconstruction outperformed the other two. The time required for the 2D and 2D limited reconstructions is much less (<10%) than that for the 3D reconstruction. The 'threshold titration' study shows that artifacts were present when the threshold was 5% or higher, and no significant differences in image quality were observed when the thresholds were less than 1%, in which case 38% (21,849 of 57,600) of the weight elements were set to zero. Restricting the view angle produced degradation in image quality, but, in all cases, clearly recognizable images were obtained.
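The threshold-titration step — zeroing weight-matrix elements whose ratio to their row maximum falls below a preselected threshold — can be sketched directly. The matrix values are invented for illustration:

```python
import numpy as np

def threshold_titrate(W, threshold):
    """Zero entries whose ratio to their row's maximum falls below `threshold`.

    Returns the sparsified matrix and the number of elements set to zero.
    """
    row_max = np.abs(W).max(axis=1, keepdims=True)
    keep = np.abs(W) >= threshold * row_max
    return np.where(keep, W, 0.0), int(keep.size - keep.sum())

W = np.array([[1.0, 0.04, 0.5],
              [0.002, 0.3, 0.009]])
W_sparse, n_zeroed = threshold_titrate(W, 0.05)   # 5% threshold zeroes 3 entries
```

Sparsifying the weight matrix this way trades a controllable amount of reconstruction fidelity for fewer nonzero terms per iteration of POCS/CGD/SART-style solvers.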
14 CFR 25.481 - Tail-down landing conditions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Tail-down landing conditions. 25.481... landing conditions. (a) In the tail-down attitude, the airplane is assumed to contact the ground at... prescribed in § 25.473 with— (1) VL1 equal to VS0 (TAS) at the appropriate landing weight and in standard...
40 CFR 53.21 - Test conditions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 5 2011-07-01 2011-07-01 false Test conditions. 53.21 Section 53.21... Methods SO2, CO, O3, and NO2 § 53.21 Test conditions. (a) Set-up and start-up of the test analyzer shall... before beginning the tests. The test procedures assume that the test analyzer has an analog measurement...
40 CFR 53.21 - Test conditions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 6 2014-07-01 2014-07-01 false Test conditions. 53.21 Section 53.21... Methods for SO2, CO, O3, and NO2 § 53.21 Test conditions. (a) Set-up and start-up of the test analyzer... before beginning the tests. The test procedures assume that the test analyzer has a conventional analog...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conroy, Aindriú; Mazumdar, Anupam; Koshelev, Alexey S., E-mail: a.conroy@lancaster.ac.uk, E-mail: alexey@ubi.pt, E-mail: a.mazumdar@lancaster.ac.uk
Einstein's General theory of relativity permits spacetime singularities, where null geodesic congruences focus in the presence of matter, which satisfies an appropriate energy condition. In this paper, we provide a minimal defocusing condition for null congruences without assuming any ansatz-dependent background solution. The two important criteria are: (1) an additional scalar degree of freedom, besides the massless graviton, must be introduced into the spacetime; and (2) an infinite derivative theory of gravity is required in order to avoid tachyons or ghosts in the graviton propagator. In this regard, our analysis strengthens earlier arguments for constructing non-singular bouncing cosmologies within an infinite derivative theory of gravity, without assuming any ansatz to solve the full equations of motion.
1985-11-26
etc.)... Major decisions involving reliability studies, based on competing risk methodology, have been made in the past and will continue to be made...censoring mechanism. In such instances, the methodology for estimating relevant reliability probabilities has received considerable attention (cf. David...proposal for a discussion of the general methodology.
ERIC Educational Resources Information Center
Greene, Jack P.
This book explores the history of the Virginia colony from the early 18th century to the time of the signing of the Declaration of Independence. Virginia, the oldest and most prosperous of Great Britain's North American colonies, assumed a leading role in the political life of the colonies. Some in 17th century Virginia had seen political…
A Decision-Based Methodology for Object-Oriented Design
1988-12-16
willing to take the time to meet together weekly for mutual encouragement and prayer. Their friendship, uncompromising standards, and lifestyle were...assume the validity of the object-oriented and software engineering principles involved, and define and prototype a generic, language-independent...meaningful labels for variables, abstraction requires the ability to define new types that relieve the programmer from having to know or mess with
Evaluation of Improved Engine Compartment Overheat Detection Techniques.
1986-08-01
radiation properties (emissivity and reflectivity) of the surface. The first task of the numerical procedure is to investigate the radiosity (radiative heat...and radiosity are spatially uniform within each zone. Radiative properties are spatially uniform and independent of direction. The enclosure is...variation in the radiosity will be nonuniform in distribution in that region. The zone analysis method assumes the temperature and radiation
Human Supervision of Time Critical Control Systems. Addendum
2010-02-26
signals such as electroencephalogram (EEG) and electrooculography (EOG). Current research has demonstrated these signals' ability to respond to changing...relationships often present in EEG/EOG data; they routinely achieve classification accuracy greater than 80%. However, the discrete output of these...present data there were seven EEG and EOG signals recorded; thus, ICA assumes each was a mixture of seven independent components (Stone, 2002). Some
Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis
ERIC Educational Resources Information Center
Marin-Martinez, Fulgencio; Sanchez-Meca, Julio
2010-01-01
Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…
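The inverse-variance weighting described above can be sketched in its fixed-effect special case (between-study variance tau-squared = 0); a random-effects version would replace w_i = 1/v_i with w_i = 1/(v_i + tau-squared-hat). The study values are hypothetical:

```python
def inverse_variance_mean(effects, variances):
    """Weighted mean effect size with w_i = 1/v_i, plus its standard error."""
    weights = [1.0 / v for v in variances]
    mean = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5
    return mean, se

# Three hypothetical primary studies: (effect size, sampling variance).
mean, se = inverse_variance_mean([0.30, 0.10, 0.50], [0.01, 0.04, 0.02])
```

Note the point the abstract makes: in practice the v_i are themselves estimates, so the weights carry sampling error, which is what motivates comparing them against simpler sample-size weights.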
Spin-Polarized Tunneling at Interfaces Between Oxides and Metals or Semiconductors
2006-09-01
solution. 3. Several miscellaneous compounds, including molecular oxygen and organic biradicals. 4. Metals. When a variable magnetic field is...substrate layer) Heusler alloys are considered to be prime candidates, because they show great potential for spin-injection contacts to compound and...usually employ simple parabolic bands and/or momentum- and energy-independent tunneling matrix elements. The classical theory of tunneling assumes that the
ERIC Educational Resources Information Center
Kaycheng, Soh
2015-01-01
World university ranking systems used the weight-and-sum approach to combined indicator scores into overall scores on which the universities are then ranked. This approach assumes that the indicators all independently contribute to the overall score in the specified proportions. In reality, this assumption is doubtful as the indicators tend to…
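The weight-and-sum approach the abstract critiques can be sketched in a few lines; the university names, indicator scores, and weights below are invented for illustration:

```python
# Hypothetical indicator scores (teaching, research, citations) and weights.
scores = {
    "Univ A": [90, 60, 80],
    "Univ B": [70, 95, 75],
}
weights = [0.5, 0.3, 0.2]   # sum to 1; assumes independent contributions

overall = {u: sum(w * s for w, s in zip(weights, v)) for u, v in scores.items()}
ranking = sorted(overall, key=overall.get, reverse=True)
```

The critique is visible even in this toy: if the indicators are correlated (research-strong universities also tend to score well on citations), the effective weight on the shared factor is larger than the stated weights suggest, so the published proportions misstate what actually drives the ranking.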
Water impact analysis of space shuttle solid rocket motor by the finite element method
NASA Technical Reports Server (NTRS)
Buyukozturk, O.; Hibbitt, H. D.; Sorensen, E. P.
1974-01-01
Preliminary analysis showed that the doubly curved triangular shell elements were too stiff for these shell structures. The doubly curved quadrilateral shell elements were found to give much improved results. A total of six load cases were analyzed in this study. The load cases were either those resulting from a static test using reaction straps to simulate the drop conditions or those under assumed hydrodynamic conditions resulting from a drop test. The latter hydrodynamic conditions were obtained through an empirical fit of available data. Results obtained from a linear analysis were found to be consistent with results obtained elsewhere with NASTRAN and BOSOR. The nonlinear analysis showed that the originally assumed loads would result in failure of the shell structures. The nonlinear analysis also showed that it was useful to apply internal pressure as a stabilizing influence on collapse. A final analysis with an updated estimate of load conditions resulted in linear behavior up to full load.
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Burton, W. S.
1992-01-01
Analytic three-dimensional elasticity solutions are developed for the free vibration and buckling of thermally stressed rectangular multilayered angle-ply anisotropic plates which are assumed to have an antisymmetric lamination with respect to the middle plane. Sensitivity derivatives are evaluated and used to investigate the sensitivity of the vibration and buckling responses to variations in the different lamination and material parameters of the plate. A Duhamel-Neumann-type constitutive model is used, and the material properties are assumed to be independent of temperature. Numerical results are presented, showing the effects of variations in the material characteristics and fiber orientation of different layers, as well as the effect of initial thermal deformation on the vibrational and buckling responses of the plate.
Beating the Spin-down Limit on Gravitational Wave Emission from the Vela Pulsar
NASA Astrophysics Data System (ADS)
Abadie, J.; Abbott, B. P.; Abbott, R.; Abernathy, M.; Accadia, T.; Acernese, F.; Adams, C.; Adhikari, R.; Affeldt, C.; Allen, B.; Allen, G. S.; Amador Ceron, E.; Amariutei, D.; Amin, R. S.; Anderson, S. B.; Anderson, W. G.; Antonucci, F.; Arai, K.; Arain, M. A.; Araya, M. C.; Aston, S. M.; Astone, P.; Atkinson, D.; Aufmuth, P.; Aulbert, C.; Aylott, B. E.; Babak, S.; Baker, P.; Ballardin, G.; Ballmer, S.; Barker, D.; Barnum, S.; Barone, F.; Barr, B.; Barriga, P.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Bastarrika, M.; Basti, A.; Bauchrowitz, J.; Bauer, Th. S.; Behnke, B.; Bejger, M.; Beker, M. G.; Bell, A. S.; Belletoile, A.; Belopolski, I.; Benacquista, M.; Bertolini, A.; Betzwieser, J.; Beveridge, N.; Beyersdorf, P. T.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birindelli, S.; Biswas, R.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Bland, B.; Blom, M.; Bock, O.; Bodiya, T. P.; Bogan, C.; Bondarescu, R.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Boschi, V.; Bose, S.; Bosi, L.; Bouhou, B.; Boyle, M.; Braccini, S.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Brau, J. E.; Breyer, J.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Britzger, M.; Brooks, A. F.; Brown, D. A.; Brummit, A.; Budzyński, R.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Burguet-Castell, J.; Burmeister, O.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cain, J.; Calloni, E.; Camp, J. B.; Campagna, E.; Campsie, P.; Cannizzo, J.; Cannon, K.; Canuel, B.; Cao, J.; Capano, C.; Carbognani, F.; Caride, S.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarini, E.; Chaibi, O.; Chalermsongsak, T.; Chalkley, E.; Charlton, P.; Chassande-Mottin, E.; Chelkowski, S.; Chen, Y.; Chincarini, A.; Christensen, N.; Chua, S. S. Y.; Chung, C. T. Y.; Chung, S.; Clara, F.; Clark, D.; Clark, J.; Clayton, J. H.; Cleva, F.; Coccia, E.; Colacino, C. 
N.; Colas, J.; Colla, A.; Colombini, M.; Conte, R.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Costa, C. A.; Coughlin, M.; Coulon, J.-P.; Coward, D. M.; Coyne, D. C.; Creighton, J. D. E.; Creighton, T. D.; Cruise, A. M.; Culter, R. M.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dahl, K.; Danilishin, S. L.; Dannenberg, R.; D'Antonio, S.; Danzmann, K.; Das, K.; Dattilo, V.; Daudert, B.; Daveloza, H.; Davier, M.; Davies, G.; Daw, E. J.; Day, R.; Dayanga, T.; De Rosa, R.; DeBra, D.; Debreczeni, G.; Degallaix, J.; del Prete, M.; Dent, T.; Dergachev, V.; DeRosa, R.; DeSalvo, R.; Dhurandhar, S.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Emilio, M. Di Paolo; Di Virgilio, A.; Díaz, M.; Dietz, A.; Donovan, F.; Dooley, K. L.; Dorsher, S.; Douglas, E. S. D.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Dumas, J.-C.; Dwyer, S.; Eberle, T.; Edgar, M.; Edwards, M.; Effler, A.; Ehrens, P.; Engel, R.; Etzel, T.; Evans, M.; Evans, T.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Fan, Y.; Farr, B. F.; Fazi, D.; Fehrmann, H.; Feldbaum, D.; Ferrante, I.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Flaminio, R.; Flanigan, M.; Foley, S.; Forsi, E.; Forte, L. A.; Fotopoulos, N.; Fournier, J.-D.; Franc, J.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, M.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Friedrich, D.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Galimberti, M.; Gammaitoni, L.; Garcia, J.; Garofoli, J. A.; Garufi, F.; Gáspár, M. E.; Gemme, G.; Genin, E.; Gennai, A.; Ghosh, S.; Giaime, J. A.; Giampanis, S.; Giardina, K. D.; Giazotto, A.; Gill, C.; Goetz, E.; Goggin, L. M.; González, G.; Gorodetsky, M. L.; Goßler, S.; Gouaty, R.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. M.; Greverie, C.; Grosso, R.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guido, C.; Gupta, R.; Gustafson, E. K.; Gustafson, R.; Hage, B.; Hallam, J. M.; Hammer, D.; Hammond, G.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. 
W.; Harstad, E. D.; Hartman, M. T.; Haughian, K.; Hayama, K.; Hayau, J.-F.; Hayler, T.; Heefner, J.; Heitmann, H.; Hello, P.; Hendry, M. A.; Heng, I. S.; Heptonstall, A. W.; Herrera, V.; Hewitson, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Holt, K.; Hong, T.; Hooper, S.; Hosken, D. J.; Hough, J.; Howell, E. J.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Ingram, D. R.; Inta, R.; Isogai, T.; Ivanov, A.; Jaranowski, P.; Johnson, W. W.; Jones, D. I.; Jones, G.; Jones, R.; Ju, L.; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kanner, J. B.; Katsavounidis, E.; Katzman, W.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Kells, W.; Kelner, M.; Keppel, D. G.; Khalaidovski, A.; Khalili, F. Y.; Khazanov, E. A.; Kim, H.; Kim, N.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kondrashov, V.; Kopparapu, R.; Koranda, S.; Korth, W. Z.; Kowalska, I.; Kozak, D.; Kringel, V.; Krishnamurthy, S.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, R.; Kwee, P.; Landry, M.; Lantz, B.; Lastzka, N.; Lazzarini, A.; Leaci, P.; Leong, J.; Leonor, I.; Leroy, N.; Letendre, N.; Li, J.; Li, T. G. F.; Liguori, N.; Lindquist, P. E.; Lockerbie, N. A.; Lodhia, D.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lu, P.; Luan, J.; Lubinski, M.; Lück, H.; Lundgren, A. P.; Macdonald, E.; Machenschalk, B.; MacInnis, M.; Mageswaran, M.; Mailand, K.; Majorana, E.; Maksimovic, I.; Man, N.; Mandel, I.; Mandic, V.; Mantovani, M.; Marandi, A.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Marx, J. N.; Mason, K.; Masserot, A.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McIntyre, G.; McKechan, D. J. A.; Meadors, G.; Mehmet, M.; Meier, T.; Melatos, A.; Melissinos, A. C.; Mendell, G.; Mercer, R. A.; Merill, L.; Meshkov, S.; Messenger, C.; Meyer, M. S.; Miao, H.; Michel, C.; Milano, L.; Miller, J.; Minenkov, Y.; Mino, Y.; Mitrofanov, V. 
P.; Mitselmakher, G.; Mittleman, R.; Miyakawa, O.; Moe, B.; Moesta, P.; Mohan, M.; Mohanty, S. D.; Mohapatra, S. R. P.; Moraru, D.; Moreno, G.; Morgado, N.; Morgia, A.; Mosca, S.; Moscatelli, V.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Müller-Ebhardt, H.; Munch, J.; Murray, P. G.; Nash, T.; Nawrodt, R.; Nelson, J.; Neri, I.; Newton, G.; Nishida, E.; Nishizawa, A.; Nocera, F.; Nolting, D.; Ochsner, E.; O'Dell, J.; Ogin, G. H.; Oldenburg, R. G.; O'Reilly, B.; O'Shaughnessy, R.; Osthelder, C.; Ott, C. D.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Page, A.; Pagliaroli, G.; Palladino, L.; Palomba, C.; Pan, Y.; Pankow, C.; Paoletti, F.; Papa, M. A.; Parameswaran, A.; Pardi, S.; Parisi, M.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patel, P.; Pathak, D.; Pedraza, M.; Pekowsky, L.; Penn, S.; Peralta, C.; Perreca, A.; Persichetti, G.; Phelps, M.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pietka, M.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Pletsch, H. J.; Plissi, M. V.; Podkaminer, J.; Poggiani, R.; Pöld, J.; Postiglione, F.; Prato, M.; Predoi, V.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Quetschke, V.; Raab, F. J.; Rabeling, D. S.; Rácz, I.; Radkins, H.; Raffai, P.; Rakhmanov, M.; Ramet, C. R.; Rankins, B.; Rapagnani, P.; Raymond, V.; Re, V.; Redwine, K.; Reed, C. M.; Reed, T.; Regimbau, T.; Reid, S.; Reitze, D. H.; Ricci, F.; Riesen, R.; Riles, K.; Roberts, P.; Robertson, N. A.; Robinet, F.; Robinson, C.; Robinson, E. L.; Rocchi, A.; Roddy, S.; Rolland, L.; Rollins, J.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Röver, C.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sakata, S.; Sakosky, M.; Salemi, F.; Salit, M.; Sammut, L.; Sancho de la Jordana, L.; Sandberg, V.; Sannibale, V.; Santamaría, L.; Santiago-Prieto, I.; Santostasi, G.; Saraf, S.; Sassolas, B.; Sathyaprakash, B. 
S.; Sato, S.; Satterthwaite, M.; Saulson, P. R.; Savage, R.; Schilling, R.; Schlamminger, S.; Schnabel, R.; Schofield, R. M. S.; Schulz, B.; Schutz, B. F.; Schwinberg, P.; Scott, J.; Scott, S. M.; Searle, A. C.; Seifert, F.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sergeev, A.; Shaddock, D. A.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Shihan Weerathunga, T.; Shoemaker, D. H.; Sibley, A.; Siemens, X.; Sigg, D.; Singer, A.; Singer, L.; Sintes, A. M.; Skelton, G.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Smith, N. D.; Smith, R.; Somiya, K.; Sorazu, B.; Soto, J.; Speirits, F. C.; Sperandio, L.; Stefszky, M.; Stein, A. J.; Steinlechner, J.; Steinlechner, S.; Steplewski, S.; Stochino, A.; Stone, R.; Strain, K. A.; Strigin, S.; Stroeer, A. S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sung, M.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Szokoly, G. P.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tarabrin, S. P.; Taylor, J. R.; Taylor, R.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Thüring, A.; Titsler, C.; Tokmakov, K. V.; Toncelli, A.; Tonelli, M.; Torre, O.; Torres, C.; Torrie, C. I.; Tournefier, E.; Travasso, F.; Traylor, G.; Trias, M.; Tseng, K.; Turner, L.; Ugolini, D.; Urbanek, K.; Vahlbruch, H.; Vaishnav, B.; Vajente, G.; Vallisneri, M.; van den Brand, J. F. J.; Van Den Broeck, C.; van der Putten, S.; van der Sluys, M. V.; van Veggel, A. A.; Vass, S.; Vasuth, M.; Vaulin, R.; Vavoulidis, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Veltkamp, C.; Verkindt, D.; Vetrano, F.; Viceré, A.; Villar, A. E.; Vinet, J.-Y.; Vocca, H.; Vorvick, C.; Vyachanin, S. P.; Waldman, S. J.; Wallace, L.; Wanner, A.; Ward, R. L.; Was, M.; Wei, P.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Wen, S.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; White, D.; Whiting, B. F.; Wilkinson, C.; Willems, P. A.; Williams, H. R.; Williams, L.; Willke, B.; Winkelmann, L.; Winkler, W.; Wipf, C. C.; Wiseman, A. 
G.; Woan, G.; Wooley, R.; Worden, J.; Yablon, J.; Yakushin, I.; Yamamoto, H.; Yamamoto, K.; Yang, H.; Yeaton-Massey, D.; Yoshida, S.; Yu, P.; Yvert, M.; Zanolin, M.; Zhang, L.; Zhang, Z.; Zhao, C.; Zotov, N.; Zucker, M. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration; Buchner, S.; Hotan, A.; Palfreyman, J.
2011-08-01
We present direct upper limits on continuous gravitational wave emission from the Vela pulsar using data from the Virgo detector's second science run. These upper limits have been obtained using three independent methods that assume the gravitational wave emission follows the radio timing. Two of the methods produce frequentist upper limits for an assumed known orientation of the star's spin axis and value of the wave polarization angle of, respectively, 1.9 × 10⁻²⁴ and 2.2 × 10⁻²⁴, with 95% confidence. The third method, under the same hypothesis, produces a Bayesian upper limit of 2.1 × 10⁻²⁴, with 95% degree of belief. These limits are below the indirect spin-down limit of 3.3 × 10⁻²⁴ for the Vela pulsar, defined by the energy loss rate inferred from the observed decrease in Vela's spin frequency, and correspond to a limit on the star's ellipticity of ~10⁻³. Slightly less stringent results, but still well below the spin-down limit, are obtained assuming the star's spin axis inclination and the wave polarization angle are unknown.
Feasibility of a medium-size central cogenerated energy facility, energy management memorandum
NASA Astrophysics Data System (ADS)
Porter, R. W.
1982-09-01
The thermal-economic feasibility of a medium-size central cogenerated energy facility designed to serve five varied industries was studied. Generation options included one dual-fuel diesel and one gas turbine, both with waste-heat boilers, and five fired boilers. Fuels included natural gas and, for the fired-boiler cases, also low-sulphur coal and municipal refuse. The fired-boiler cogeneration systems employed back-pressure steam turbines. For coal and refuse, the option of steam only, without cogeneration, was also assessed. The refuse-fired cases utilized modular incinerators. The options provided for a wide range of steam and electrical capacities. Deficient steam was assumed generated independently in existing equipment. Excess electrical power, over that which could be displaced, was assumed sold to Commonwealth Edison Company under PURPA (Public Utility Regulatory Policies Act). The facility was assumed operated by a mutually owned corporation formed by the cogenerated-power users. The economic analysis was predicated on currently applicable energy-investment tax credits and accelerated depreciation for a January 1985 startup date. Based on 100% equity financing, the results indicated that the best alternative was the modular-incinerator cogeneration system.
Across the health-social care divide: elderly people as active users of health care and social care.
Roberts, K
2001-03-01
Several ways in which elderly people may assume an active role when using welfare services are discussed here. Selected findings are presented from a study that explored the experience and behaviour of elderly people on discharge from inpatient care with regard to criteria indicating user influence or control (namely participation, representation, access, choice, information and redress). Data were collected via semistructured interviews with service users (n = 30) soon after their return home from hospital. A number of differences were revealed between health care and social care, both in users being provided with opportunities to assume an active role and in their being willing and able to assume one. These differences were manifest in elderly service users accessing services, seeking information, exercising choice and acting independently of service providers. It appeared paradoxical that contact points were more easily defined with regard to health care, yet users were more likely to exercise choice and act independently in securing social care. It is suggested that the link between social care needs and appropriate service delivery is more easily recognised than that between perceived health care needs and appropriate services. In addition, informal and private providers appeared to be more widely available and accessible for social care. If comprehensive continuing care is to be provided, incorporating both health and social care elements, greater uniformity appears to be required across the welfare sector. Lessons for social care provision from the delivery of health care suggest the clear definition of contact points to facilitate service use. Making health care more accessible, however, does not appear easily attainable, owing to the monopoly provision of health care and the lack of direct purchasing power by potential users.
NASA Astrophysics Data System (ADS)
Mukhartova, Juliya; Levashova, Natalia; Volkova, Elena; Olchev, Alexander
2016-04-01
The possible effect of spatial heterogeneity of vegetation cover and relief on horizontal and vertical turbulent exchange of CO2 was described using a process-based two-dimensional (2D) turbulent exchange model (Mukhartova et al. 2015). As a key area for this modeling study, hilly terrain situated at the boundary between the broadleaf forest and steppe zones in the European part of Russia (Tula region) was selected. The vegetation cover in the study region is a complex mosaic of crop areas, grasslands, pastures, mires and groves. The very heterogeneous vegetation cover and complex dissected relief make it very difficult to adequately determine local and regional CO2 fluxes using experimental methods alone. The two-dimensional model, based on solution of the Navier-Stokes and continuity equations using the well-known one-and-a-half-order (TKE) closure scheme, is applied. For the description of plant canopy photosynthesis and respiration rates, the model uses an aggregated approach based on the model of Ball et al. (1987) in the Leuning modification (1990, 1995), the Beer-Lambert equation for the description of solar radiation penetration within a plant canopy (Monsi, Saeki 1953), and an algorithm describing the response of stomatal conductance of the leaves to incoming photosynthetically active radiation. All necessary input parameters describing the photosynthesis and respiration properties of the different plant and soil types in the study region were measured in the field or taken from the literature. The system of differential equations in the model is solved numerically by the finite-difference method. It is assumed that the influence of ground-surface heterogeneities at the upper boundary of the computing domain is very low, so that the pressure excess there can be taken as zero. The concentration of CO2 at the upper boundary of the computing domain is assumed to be equal to some background value.
It is also assumed that all boundaries between different vegetation and land-use types are situated far enough from the domain boundaries. This enabled us to assume that near the domain boundaries the values of the vertical and horizontal wind components are independent of the x coordinate. To quantify the possible effects of relief and vegetation heterogeneity on CO2 fluxes, three transects crossing the study area were chosen. For each transect, the 2D patterns of wind-speed components, turbulent exchange coefficients, CO2 concentrations and fluxes were calculated. The modeled vertical CO2 fluxes were compared with fluxes calculated without allowing for turbulent disturbances due to relief and vegetation heterogeneity. All modeling experiments were performed for different weather conditions. The results of the modeling experiments for different transects under various meteorological conditions showed that relief and vegetation heterogeneity have a significant impact on CO2 fluxes within the atmospheric surface layer, and that ignoring them can result in uncertainties in flux estimations. This study was supported by the Russian Science Foundation (Grant 14-14-00956).
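The Beer-Lambert treatment of radiation penetration cited in the abstract can be illustrated with a minimal sketch; the extinction coefficient and irradiance values below are illustrative assumptions, not parameters from the study:

```python
import math

def canopy_light(i0, k, lai):
    """Beer-Lambert attenuation: irradiance below a cumulative leaf
    area index `lai`, given top-of-canopy irradiance i0 (W/m^2) and a
    dimensionless extinction coefficient k."""
    return i0 * math.exp(-k * lai)

# Illustrative profile: I0 = 1000 W/m^2, k = 0.5, cumulative LAI 0..5
profile = [canopy_light(1000.0, 0.5, lai) for lai in (0, 1, 2, 3, 4, 5)]
```

Light decays exponentially with depth into the canopy, which is why a single extinction coefficient per vegetation type suffices for the aggregated approach described above.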
Device-independent parallel self-testing of two singlets
NASA Astrophysics Data System (ADS)
Wu, Xingyao; Bancal, Jean-Daniel; McKague, Matthew; Scarani, Valerio
2016-06-01
Device-independent self-testing offers the possibility of certifying the quantum state and measurements, up to local isometries, using only the statistics observed by querying uncharacterized local devices. In this paper we study parallel self-testing of two maximally entangled pairs of qubits; in particular, the local tensor product structure is not assumed but derived. We prove two criteria that achieve the desired result: a double use of the Clauser-Horne-Shimony-Holt inequality and the 3×3 magic square game. This demonstrates that the magic square game can only be perfectly won by measuring a two-singlet state. The tolerance to noise is well within reach of state-of-the-art experiments.
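As a numerical illustration of the CHSH-based criterion, the following sketch (an assumed setup: spin observables in the x-z plane measured on an ideal singlet, not the paper's parallel construction) evaluates the CHSH expression and recovers the Tsirelson value 2√2 that certifies a singlet:

```python
import numpy as np

# Pauli matrices and the two-qubit singlet state (|01> - |10>)/sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def meas(theta):
    """Spin observable along angle theta in the x-z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

def corr(a, b):
    """Correlator <A(a) (x) B(b)> on the singlet; equals -cos(a - b)."""
    return np.real(singlet.conj() @ np.kron(meas(a), meas(b)) @ singlet)

# Optimal CHSH settings
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, 3 * np.pi / 4
S = corr(a0, b0) - corr(a0, b1) + corr(a1, b0) + corr(a1, b1)
# |S| attains the Tsirelson bound 2*sqrt(2), the value self-testing exploits
```

The self-testing statement is the converse of this computation: observing |S| = 2√2 forces the state to be a singlet up to local isometries.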
Ambiguities in model-independent partial-wave analysis
NASA Astrophysics Data System (ADS)
Krinner, F.; Greenwald, D.; Ryabchikov, D.; Grube, B.; Paul, S.
2018-06-01
Partial-wave analysis is an important tool for analyzing large data sets in hadronic decays of light and heavy mesons. It commonly relies on the isobar model, which assumes multihadron final states originate from successive two-body decays of well-known undisturbed intermediate states. Recently, analyses of heavy-meson decays and diffractively produced states have attempted to overcome the strong model dependences of the isobar model. These analyses have overlooked that model-independent, or freed-isobar, partial-wave analysis can introduce mathematical ambiguities in results. We show how these ambiguities arise and present general techniques for identifying their presence and for correcting for them. We demonstrate these techniques with specific examples in both heavy-meson decay and pion-proton scattering.
Nonlinear Cross-Bridge Elasticity and Post-Power-Stroke Events in Fast Skeletal Muscle Actomyosin
Persson, Malin; Bengtsson, Elina; ten Siethoff, Lasse; Månsson, Alf
2013-01-01
Generation of force and movement by actomyosin cross-bridges is the molecular basis of muscle contraction, but generally accepted ideas about cross-bridge properties have recently been questioned. Of the utmost significance, evidence for nonlinear cross-bridge elasticity has been presented. We here investigate how this and other newly discovered or postulated phenomena would modify cross-bridge operation, with focus on post-power-stroke events. First, as an experimental basis, we present evidence for a hyperbolic [MgATP]-velocity relationship of heavy-meromyosin-propelled actin filaments in the in vitro motility assay using fast rabbit skeletal muscle myosin (28–29°C). As the hyperbolic [MgATP]-velocity relationship was not consistent with interhead cooperativity, we developed a cross-bridge model with independent myosin heads and strain-dependent interstate transition rates. The model, implemented with inclusion of MgATP-independent detachment from the rigor state, as suggested by previous single-molecule mechanics experiments, accounts well for the [MgATP]-velocity relationship if nonlinear cross-bridge elasticity is assumed, but not if linear cross-bridge elasticity is assumed. In addition, a better fit is obtained with load-independent than with load-dependent MgATP-induced detachment rate. We discuss our results in relation to previous data showing a nonhyperbolic [MgATP]-velocity relationship when actin filaments are propelled by myosin subfragment 1 or full-length myosin. We also consider the implications of our results for characterization of the cross-bridge elasticity in the filament lattice of muscle. PMID:24138863
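The hyperbolic [MgATP]-velocity relationship reported above has the Michaelis-Menten form v = v_max·[S]/(K_m + [S]); a minimal sketch with illustrative, assumed values for the maximal velocity and half-saturation constant (not fitted values from the study):

```python
def sliding_velocity(mgatp, v_max=10.0, k_m=0.15):
    """Hyperbolic [MgATP]-velocity relation v = v_max*[S]/(K_m + [S]).
    v_max (um/s) and k_m (mM) are illustrative placeholders."""
    return v_max * mgatp / (k_m + mgatp)

# Velocity is half-maximal at [MgATP] = K_m and saturates at high [MgATP]
half = sliding_velocity(0.15)   # equals v_max / 2
```

A hyperbola of this kind is what a single rate-limiting MgATP-binding step per independent head predicts, which is why the authors take it as evidence against interhead cooperativity.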
Characterizability of metabolic pathway systems from time series data.
Voit, Eberhard O
2013-12-01
Over the past decade, the biomathematical community has devoted substantial effort to the complicated challenge of estimating parameter values for biological systems models. An even more difficult issue is the characterization of functional forms for the processes that govern these systems. Most parameter estimation approaches tacitly assume that these forms are known or can be assumed with some validity. However, this assumption is not always true. The recently proposed method of Dynamic Flux Estimation (DFE) addresses this problem in a genuinely novel fashion for metabolic pathway systems. Specifically, DFE allows the characterization of fluxes within such systems through an analysis of metabolic time series data. Its main drawback is the fact that DFE can only directly be applied if the pathway system contains as many metabolites as unknown fluxes. This situation is unfortunately rare. To overcome this roadblock, earlier work in this field had proposed strategies for augmenting the set of unknown fluxes with independent kinetic information, which however is not always available. Employing Moore-Penrose pseudo-inverse methods of linear algebra, the present article discusses an approach for characterizing fluxes from metabolic time series data that is applicable even if the pathway system is underdetermined and contains more fluxes than metabolites. Intriguingly, this approach is independent of a specific modeling framework and unaffected by noise in the experimental time series data. The results reveal whether any fluxes may be characterized and, if so, which subset is characterizable. They also help with the identification of fluxes that, if they could be determined independently, would allow the application of DFE. Copyright © 2013 Elsevier Inc. All rights reserved.
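The Moore-Penrose step at the heart of this approach can be sketched as follows: given a stoichiometric matrix N and measured metabolite slopes dX/dt at a time point, the minimum-norm flux estimate is v = N⁺(dX/dt). The matrix and slopes below are invented toy values, not a real pathway:

```python
import numpy as np

# Toy underdetermined system: 2 metabolites, 3 unknown fluxes
N = np.array([[ 1.0, -1.0,  0.0],
              [ 0.0,  1.0, -1.0]])

dXdt = np.array([0.5, -0.2])   # measured slopes of the time series

# Minimum-norm flux solution via the Moore-Penrose pseudo-inverse
v = np.linalg.pinv(N) @ dXdt

# The estimate reproduces the observed slopes exactly when they lie
# in the row space of N
residual = np.linalg.norm(N @ v - dXdt)
```

The pseudo-inverse picks one particular (minimum-norm) flux vector out of the infinitely many consistent with the data, which is exactly the sense in which the article asks *which* fluxes are characterizable.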
Lopeztegui Castillo, Alexander; Capetillo Piñar, Norberto; Betanzos Vega, Abel
2012-03-01
Nutritional condition can affect survival and growth rate in crustaceans, and is strongly influenced by habitat conditions. This study describes spatio-temporal changes in the nutritional condition of this commercially important species. With this aim, variations in the nutritional condition (K) of lobsters from four zones (1, 2, 4 and 5) of the Gulf of Batabanó, Cuba, were determined. For this, the weight/length ratio (K = Pt/Lt) was calculated for animals captured in 1981 and 2010. Nutritional condition between zones and sexes, and between years and sexes, was contrasted by a bifactorial ANOVA, and the overall length and weight of lobsters were compared using a t-test for independent samples and a unifactorial ANOVA. Nutritional condition was found to be significantly greater in males than in females. In addition, significant variations between zones were detected for both years: nutritional condition was highest in Zone 5 in 1981 and in Zone 2 in 2010. Nutritional state also varied significantly between years, being greater in 1981 (2.34 ± 0.84 g/mm) than in 2010 (1.96 ± 0.49 g/mm). The inter-zone as well as the inter-annual variations seem to be related to reported changes in bottom type and vegetation cover; seasonal variations in the abundance and distribution of the benthic organisms that constitute food for lobsters could also have an influence. The differences between sexes, however, were assumed to be a consequence of the methodology used and the sexual dimorphism of the species: with other K estimation methods, which do not include morphometric measurements, these differences were not detected. We suggest that the nutritional condition of P. argus is a good estimator of habitat condition. Moreover, under the K estimation methodology applied, groups of lobsters with similar nutritional condition did not necessarily show similar overall mean length or weight, and so could exist under different habitat conditions.
Contextual cueing effects despite spatially cued target locations.
Schankin, Andrea; Schubö, Anna
2010-07-01
Reaction times (RTs) to targets are faster in repeated displays than in novel ones when the spatial arrangement of the distracting items predicts the target location (contextual cueing). It is assumed that visual-spatial attention is guided more efficiently to the target, resulting in reduced RTs. In the present experiment, contextual cueing occurred even when the target location was previously peripherally cued. Electrophysiologically, repeated displays elicited an enhanced N2pc component in both conditions, resulted in an earlier onset of the stimulus-locked lateralized readiness potential (s-LRP) in the cued condition, and in an enhanced P3 in the uncued condition relative to novel displays. These results indicate that attentional guidance is less important than previously assumed, and that other cognitive processes, such as attentional selection (N2pc) and response-related processes (s-LRP, P3), are facilitated by context familiarity.
Deformation Response of Unsymmetrically Laminated Plates Subjected to Inplane Loading
NASA Technical Reports Server (NTRS)
Ochinero, Tomoya T.; Hyer, Michael W.
2002-01-01
This paper discusses the out-of-plane deformation behavior of unsymmetric cross-ply composite plates compressed inplane by displacing one edge of the plate a known amount. The plates are assumed to be initially flat, and several boundary conditions are considered. Geometrically nonlinear behavior is assumed. The primary objectives are to study the out-of-plane behavior as a function of increasing inplane compression and to determine whether bifurcation behavior and secondary buckling can occur. It is shown that, depending on the boundary conditions, both can occur, though the characteristics differ from the pre- and post-buckling behavior of a companion symmetric cross-ply plate. Furthermore, while a symmetric cross-ply plate can postbuckle with either a positive or negative out-of-plane displacement, the unsymmetric cross-ply plates studied deflect out-of-plane in only one direction throughout the range of inplane compression, the direction again depending on the boundary conditions.
Transport across nanogaps using self-consistent boundary conditions
NASA Astrophysics Data System (ADS)
Biswas, D.; Kumar, R.
2012-06-01
Charge particle transport across nanogaps is studied theoretically within the Schrödinger-Poisson mean-field framework. The determination of self-consistent boundary conditions across the gap forms the central theme, in order to allow for realistic interface potentials (such as metal-vacuum) which are smooth at the boundary and do not abruptly assume a constant value at the interface. It is shown that a semiclassical expansion of the transmitted wavefunction leads to approximate but self-consistent boundary conditions without assuming any specific form of the potential beyond the gap. Neglecting the exchange and correlation potentials, the quantum Child-Langmuir law is investigated. It is shown that at zero injection energy, the quantum limiting current density J_c obeys the local scaling law J_c ~ V_g^α / D^(5-2α) with the gap separation D and voltage V_g. The exponent α > 1.1, with α → 3/2 in the classical regime of small de Broglie wavelengths.
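In the classical limit (α = 3/2) the scaling reduces to the textbook Child-Langmuir law, J = (4ε₀/9)·√(2e/m)·V^(3/2)/D². A minimal sketch checking the V^(3/2) and 1/D² scalings (gap and voltage values are illustrative):

```python
import math

EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E = 9.1093837015e-31       # electron mass, kg

def child_langmuir(v_gap, d_gap):
    """Classical Child-Langmuir limiting current density (A/m^2) for a
    planar vacuum gap of separation d_gap (m) at voltage v_gap (V)."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) \
        * v_gap ** 1.5 / d_gap ** 2

# Doubling the voltage raises J by 2^(3/2); halving the gap raises it 4x
J1 = child_langmuir(100.0, 1e-6)
J2 = child_langmuir(200.0, 1e-6)
```

The quantum regime studied in the paper shifts the exponent α away from 3/2 and the gap scaling to D^(5-2α).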
14 CFR 25.351 - Yaw maneuver conditions.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Yaw maneuver conditions. 25.351 Section 25.351 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT... airplane inertia forces. In computing the tail loads the yawing velocity may be assumed to be zero. (a...
14 CFR 25.351 - Yaw maneuver conditions.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Yaw maneuver conditions. 25.351 Section 25.351 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT... airplane inertia forces. In computing the tail loads the yawing velocity may be assumed to be zero. (a...
Emergence of ion channel modal gating from independent subunit kinetics.
Bicknell, Brendan A; Goodhill, Geoffrey J
2016-09-06
Many ion channels exhibit a slow stochastic switching between distinct modes of gating activity. This feature of channel behavior has pronounced implications for the dynamics of ionic currents and the signaling pathways that they regulate. A canonical example is the inositol 1,4,5-trisphosphate receptor (IP3R) channel, whose regulation of intracellular Ca(2+) concentration is essential for numerous cellular processes. However, the underlying biophysical mechanisms that give rise to modal gating in this and most other channels remain unknown. Although ion channels are composed of protein subunits, previous mathematical models of modal gating are coarse grained at the level of whole-channel states, limiting further dialogue between theory and experiment. Here we propose an origin for modal gating, by modeling the kinetics of ligand binding and conformational change in the IP3R at the subunit level. We find good agreement with experimental data over a wide range of ligand concentrations, accounting for equilibrium channel properties, transient responses to changing ligand conditions, and modal gating statistics. We show how this can be understood within a simple analytical framework and confirm our results with stochastic simulations. The model assumes that channel subunits are independent, demonstrating that cooperative binding or concerted conformational changes are not required for modal gating. Moreover, the model embodies a generally applicable principle: If a timescale separation exists in the kinetics of individual subunits, then modal gating can arise as an emergent property of channel behavior.
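The timescale-separation principle stated in the last sentence can be illustrated with a generic toy model (illustrative rates, not the authors' IP3R parameterization): each of several independent subunits has fast open/close flickering plus a much slower switching step, and long "modes" of channel activity emerge with no cooperativity at all.

```python
import random

random.seed(1)

N_SUB = 4      # independent subunits per channel (the IP3R is tetrameric)
K_SLOW = 0.2   # slow per-subunit mode-switching rate (1/s), illustrative

# Each subunit also flickers open/closed on a much faster timescale.
# Because switching is slow, runs of high and low activity ("modes")
# emerge: the channel leaves the high-activity mode as soon as ANY of
# its independent subunits switches, i.e. at total rate N_SUB * K_SLOW.
def high_mode_dwell():
    """Dwell time in the high-activity mode: minimum of N_SUB
    independent exponential switching times."""
    return min(random.expovariate(K_SLOW) for _ in range(N_SUB))

samples = [high_mode_dwell() for _ in range(20000)]
mean_dwell = sum(samples) / len(samples)
# Analytic expectation: 1 / (N_SUB * K_SLOW) seconds
```

The emergent mode dwell time is set entirely by the subunit kinetics, which is the sense in which modal gating here is "an emergent property of channel behavior."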
Model independent constraints on transition redshift
NASA Astrophysics Data System (ADS)
Jesus, J. F.; Holanda, R. F. L.; Pereira, S. H.
2018-05-01
This paper aims to put constraints on the transition redshift zt, which determines the onset of cosmic acceleration, in cosmological-model-independent frameworks. In order to perform our analyses, we consider a flat universe and assume a parametrization for the comoving distance DC(z) up to third degree in z, a second-degree parametrization for the Hubble parameter H(z) and a linear parametrization for the deceleration parameter q(z). For each case, we show that type Ia supernovae and H(z) data complement each other in the parameter space and tighter constraints on the transition redshift are obtained. By combining the type Ia supernovae observations and Hubble parameter measurements it is possible to constrain the values of zt, for each approach, as 0.806 ± 0.094, 0.870 ± 0.063 and 0.973 ± 0.058 at 1σ c.l., respectively. Such approaches thus provide cosmological-model-independent estimates of this parameter.
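For the linear parametrization mentioned above, q(z) = q₀ + q₁z, the transition redshift follows directly from q(z_t) = 0, i.e. z_t = -q₀/q₁. A minimal sketch with illustrative q₀, q₁ values (not the paper's fitted ones):

```python
def transition_redshift(q0, q1):
    """Transition redshift for the linear parametrization
    q(z) = q0 + q1*z: the onset of acceleration is where q = 0."""
    if q1 <= 0:
        raise ValueError("need q1 > 0 for a decelerating past")
    return -q0 / q1

# Illustrative values: q0 = -0.55 (accelerating today), q1 = 0.6
zt = transition_redshift(-0.55, 0.6)
```

The same zero-crossing logic applies to the distance and H(z) parametrizations, via the corresponding expression for q(z) in each case.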
Should metacognition be measured by logistic regression?
Rausch, Manuel; Zehetleitner, Michael
2017-03-01
Are logistic regression slopes suitable to quantify metacognitive sensitivity, i.e. the efficiency with which subjective reports differentiate between correct and incorrect task responses? We analytically show that logistic regression slopes are independent of rating criteria in one specific model of metacognition, which assumes (i) that rating decisions are based on sensory evidence generated independently of the sensory evidence used for primary task responses and (ii) that the distributions of evidence are logistic. Given a hierarchical model of metacognition, logistic regression slopes depend on the rating criteria. According to all considered models, regression slopes depend on the primary task criterion. A reanalysis of previous data revealed that massive numbers of trials are required to distinguish between hierarchical and independent models with tolerable accuracy. It is argued that researchers who wish to use logistic regression as a measure of metacognitive sensitivity need to control the primary task criterion and the rating criteria. Copyright © 2017 Elsevier Inc. All rights reserved.
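A minimal sketch of the measure under discussion (synthetic data and a hand-rolled Newton-Raphson fit, not the authors' analysis code): regress trial accuracy on confidence ratings with logistic regression and take the slope as the putative sensitivity index.

```python
import math, random

random.seed(0)

# Synthetic trials: higher confidence -> higher probability correct
# (true generating slope 0.8, intercept -1.5; purely illustrative)
counts = {c: [0, 0] for c in (1, 2, 3, 4)}   # conf -> [n_correct, n_total]
for _ in range(5000):
    conf = random.randint(1, 4)
    p_true = 1 / (1 + math.exp(-(0.8 * conf - 1.5)))
    counts[conf][0] += 1 if random.random() < p_true else 0
    counts[conf][1] += 1

# Newton-Raphson for the 2-parameter logistic regression of
# accuracy on confidence; b1 is the slope used as sensitivity index
b0, b1 = 0.0, 0.0
for _ in range(25):
    g0 = g1 = h00 = h01 = h11 = 0.0
    for x, (k, n) in counts.items():
        p = 1 / (1 + math.exp(-(b0 + b1 * x)))
        g0 += k - n * p            # gradient of the log-likelihood
        g1 += (k - n * p) * x
        w = n * p * (1 - p)        # Fisher information weights
        h00 += w; h01 += w * x; h11 += w * x * x
    det = h00 * h11 - h01 * h01
    b0 += (h11 * g0 - h01 * g1) / det
    b1 += (h00 * g1 - h01 * g0) / det
```

The paper's point is about what b1 does and does not measure: under the independent-evidence model it is criterion-free, but under a hierarchical model it also tracks the rating criteria.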
A nonlocal and periodic reaction-diffusion-advection model of a single phytoplankton species.
Peng, Rui; Zhao, Xiao-Qiang
2016-02-01
In this article, we are concerned with a nonlocal reaction-diffusion-advection model which describes the evolution of a single phytoplankton species in a eutrophic vertical water column where the species relies solely on light for its metabolism. The new feature of our modeling equation lies in that the incident light intensity and the death rate are assumed to be time periodic with a common period. We first establish a threshold type result on the global dynamics of this model in terms of the basic reproduction number R0. Then we derive various characterizations of R0 with respect to the vertical turbulent diffusion rate, the sinking or buoyant rate and the water column depth, respectively, which in turn give rather precise conditions to determine whether the phytoplankton persist or become extinct. Our theoretical results not only extend the existing ones for the time-independent case, but also reveal new interesting effects of the modeling parameters and the time-periodic heterogeneous environment on persistence and extinction of the phytoplankton species, and thereby suggest important implications for phytoplankton growth control.
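A toy numerical sketch of the kind of equation studied, ∂u/∂t = D∂²u/∂x² - v∂u/∂x + g·u, using one explicit finite-difference step (invented coefficients and a crude central-difference scheme chosen for brevity; not the paper's nonlocal, time-periodic model):

```python
def step(u, D, v, g, dx, dt):
    """One explicit time step of u_t = D*u_xx - v*u_x + g*u on a 1D
    grid with crude no-flux ends (toy scheme; dt must satisfy the
    usual explicit stability limits)."""
    n = len(u)
    new = u[:]
    for i in range(1, n - 1):
        diff = D * (u[i + 1] - 2 * u[i] + u[i - 1]) / dx ** 2
        adv = -v * (u[i + 1] - u[i - 1]) / (2 * dx)
        new[i] = u[i] + dt * (diff + adv + g * u[i])
    new[0], new[-1] = new[1], new[-2]   # copy neighbors: no-flux ends
    return new

# A flat profile with zero growth stays flat; with g > 0 it grows
u = step([1.0] * 21, D=0.1, v=0.05, g=0.0, dx=0.1, dt=0.01)
```

In the paper, g depends on accumulated light via a nonlocal (depth-integrated) Beer-Lambert term and is periodic in time, which is what makes the threshold quantity R0 nontrivial to characterize.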
The measurable heat flux that accompanies active transport by Ca2+-ATPase.
Bedeaux, Dick; Kjelstrup, Signe
2008-12-28
We present a new mesoscopic basis which can be used to derive flux equations for the forward and reverse modes of operation of ion pumps. We obtain a description of the fluxes far from global equilibrium. An asymmetric set of transport coefficients is obtained by assuming that the chemical reaction as well as the ion transports are activated, and that the enzyme has a temperature independent of the activation coordinates. Close to global equilibrium, the description reduces to the well-known one from non-equilibrium thermodynamics, with a symmetric set of transport coefficients. We show how the measurable heat flux and the heat production under isothermal conditions, as well as thermogenesis, can be defined. Thermogenesis is defined via the onset of the chemical reaction or ion transports by a temperature drop. A prescription is given for how to determine transport coefficients at the mesoscopic level, using the macroscopic coefficient obtained from measurements, the activation enthalpy, and a proper probability distribution. The method may give new impetus to a long-standing unsolved transport problem in biophysics.
Feature extraction with deep neural networks by a generalized discriminant analysis.
Stuhlsatz, André; Lippel, Jens; Zielke, Thomas
2012-04-01
We present an approach to feature extraction that is a generalization of the classical linear discriminant analysis (LDA) on the basis of deep neural networks (DNNs). As for LDA, discriminative features generated from independent Gaussian class conditionals are assumed. This modeling has the advantages that the intrinsic dimensionality of the feature space is bounded by the number of classes and that the optimal discriminant function is linear. Unfortunately, linear transformations are insufficient to extract optimal discriminative features from arbitrarily distributed raw measurements. The generalized discriminant analysis (GerDA) proposed in this paper uses nonlinear transformations that are learnt by DNNs in a semisupervised fashion. We show that the feature extraction based on our approach displays excellent performance on real-world recognition and detection tasks, such as handwritten digit recognition and face detection. In a series of experiments, we evaluate GerDA features with respect to dimensionality reduction, visualization, classification, and detection. Moreover, we show that GerDA DNNs can preprocess truly high-dimensional input data to low-dimensional representations that facilitate accurate predictions even if simple linear predictors or measures of similarity are used.
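For reference, the classical LDA that GerDA generalizes can be sketched in a few lines (toy Gaussian data and the standard scatter-matrix formulation; not the paper's DNN implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian classes in 2D (independent Gaussian class conditionals)
X0 = rng.normal([0.0, 0.0], 0.5, size=(100, 2))
X1 = rng.normal([2.0, 1.0], 0.5, size=(100, 2))
X = np.vstack([X0, X1])
mu0, mu1, mu = X0.mean(0), X1.mean(0), X.mean(0)

# Within-class and between-class scatter matrices
Sw = (X0 - mu0).T @ (X0 - mu0) + (X1 - mu1).T @ (X1 - mu1)
Sb = (np.outer(mu0 - mu, mu0 - mu) * len(X0)
      + np.outer(mu1 - mu, mu1 - mu) * len(X1))

# Discriminant direction: leading eigenvector of Sw^-1 Sb
evals, evecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
w = np.real(evecs[:, np.argmax(np.real(evals))])
```

With two classes the feature space collapses to one dimension (classes minus one), illustrating the dimensionality bound the abstract mentions; GerDA replaces the fixed linear map with a learned nonlinear one.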
Economic viability of thin-film tandem solar modules in the United States
NASA Astrophysics Data System (ADS)
Sofia, Sarah E.; Mailoa, Jonathan P.; Weiss, Dirk N.; Stanbery, Billy J.; Buonassisi, Tonio; Peters, I. Marius
2018-05-01
Tandem solar cells are more efficient but more expensive per unit area than established single-junction (SJ) solar cells. To understand when specific tandem architectures should be utilized, we evaluate the cost-effectiveness of different II-VI-based thin-film tandem solar cells and compare them to the SJ subcells. Levelized cost of electricity (LCOE) and energy yield are calculated for four technologies: industrial cadmium telluride and copper indium gallium selenide, and their hypothetical two-terminal (series-connected subcells) and four-terminal (electrically independent subcells) tandems, assuming record-SJ-quality subcells. Different climatic conditions and scales (residential and utility scale) are considered. We show that, for US residential systems with current balance-of-system costs, the four-terminal tandem has the lowest LCOE because of its superior energy yield, even though it has the highest per-watt (US$ W⁻¹) module cost. For utility-scale systems, the lowest-LCOE architecture is the cadmium telluride single junction, the module with the lowest US$ W⁻¹ cost. The two-terminal tandem requires decreased subcell absorber costs to become competitive with the four-terminal one.
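The LCOE comparison follows the standard discounted ratio LCOE = (CAPEX + Σ_t OPEX/(1+r)^t) / (Σ_t E_t/(1+r)^t). A minimal sketch with invented illustrative numbers (not the paper's cost or yield inputs) showing how a pricier, higher-yield module can still win on LCOE:

```python
def lcoe(capex, opex_per_year, energy_per_year, rate, years,
         degradation=0.005):
    """Levelized cost of electricity ($/kWh): discounted lifetime cost
    divided by discounted lifetime energy. Annual energy degrades by
    `degradation` per year; all figures are illustrative."""
    cost = capex
    energy = 0.0
    for t in range(1, years + 1):
        disc = (1 + rate) ** t
        cost += opex_per_year / disc
        energy += energy_per_year * (1 - degradation) ** (t - 1) / disc
    return cost / energy

# Toy comparison: a pricier but higher-yield 'tandem' vs a 'single junction'
lcoe_sj = lcoe(capex=1000.0, opex_per_year=15.0, energy_per_year=450.0,
               rate=0.06, years=25)
lcoe_tandem = lcoe(capex=1250.0, opex_per_year=15.0, energy_per_year=580.0,
                   rate=0.06, years=25)
```

Because balance-of-system costs are per-area rather than per-watt, higher energy yield dilutes them, which is the mechanism behind the residential four-terminal result in the abstract.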
Molar Macrowear Reveals Neanderthal Eco-Geographic Dietary Variation
Fiorenza, Luca; Benazzi, Stefano; Tausch, Jeremy; Kullmer, Ottmar; Bromage, Timothy G.; Schrenk, Friedemann
2011-01-01
Neanderthal diets are reported to be based mainly on the consumption of large and medium sized herbivores, while the exploitation of other food types including plants has also been demonstrated. Though some studies conclude that early Homo sapiens were active hunters, the analyses of faunal assemblages, stone tool technologies and stable isotopic studies indicate that they exploited broader dietary resources than Neanderthals. Whereas previous studies assume taxon-specific dietary specializations, we suggest here that the diet of both Neanderthals and early Homo sapiens is determined by ecological conditions. We analyzed molar wear patterns using occlusal fingerprint analysis derived from optical 3D topometry. Molar macrowear accumulates during the lifespan of an individual and thus reflects diet over long periods. Neanderthal and early Homo sapiens maxillary molar macrowear indicates strong eco-geographic dietary variation independent of taxonomic affinities. Based on comparisons with modern hunter-gatherer populations with known diets, Neanderthals as well as early Homo sapiens show high dietary variability in Mediterranean evergreen habitats but a more restricted diet in upper latitude steppe/coniferous forest environments, suggesting a significant consumption of high protein meat resources. PMID:21445243
Selection of sporophytic and gametophytic self-incompatibility in the absence of a superlocus.
Schoen, Daniel J; Roda, Megan J
2016-06-01
Self-incompatibility (SI) is a complex trait that enforces outcrossing in plant populations. SI generally involves tight linkage of genes coding for the proteins that underlie self-pollen detection and pollen identity specification. Here, we develop two-locus genetic models to address the question of whether sporophytic SI (SSI) and gametophytic SI (GSI) can invade populations of self-compatible plants when there is no linkage or weak linkage of the underlying pollen detection and identity genes (i.e., no S-locus supergene). The models assume that SI evolves as a result of exaptation of genes formerly involved in functions other than SI. Model analysis reveals that SSI and GSI can invade populations even when the underlying genes are loosely linked, provided that inbreeding depression and selfing rate are sufficiently high. Reducing recombination between these genes makes conditions for invasion more lenient. These results can help account for multiple, independent evolution of SI systems as seems to have occurred in the angiosperms. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.
The remains of the day in dissociative amnesia.
Staniloiu, Angelica; Markowitsch, Hans J
2012-04-10
Memory is not a unity, but is divided along a content axis and a time axis, respectively. Along the content dimension, five long-term memory systems are described, according to their hierarchical ontogenetic and phylogenetic organization. These memory systems are assumed to be accompanied by different levels of consciousness. While encoding is based on a hierarchical arrangement of memory systems from procedural to episodic-autobiographical memory, retrieval allows independence in the sense that no matter how information is encoded, it can be retrieved in any memory system. Thus, we illustrate the relations between various long-term memory systems by reviewing the spectrum of abnormalities in mnemonic processing that may arise in dissociative amnesia, a condition that is usually characterized by a retrieval blockade of episodic-autobiographical memories and occurs in the context of psychological trauma, without evidence of brain damage on conventional structural imaging. Furthermore, we comment on the functions of implicit memories in guiding and even adaptively molding the behavior of patients with dissociative amnesia and preserving, in the absence of autonoetic consciousness, the so-called "internal coherence of life".
Statistics of initial density perturbations in heavy ion collisions and their fluid dynamic response
NASA Astrophysics Data System (ADS)
Floerchinger, Stefan; Wiedemann, Urs Achim
2014-08-01
An interesting opportunity to determine thermodynamic and transport properties in more detail is to identify generic statistical properties of initial density perturbations. Here we study event-by-event fluctuations in terms of correlation functions for two models that can be solved analytically. The first assumes Gaussian fluctuations around a distribution that is fixed by the collision geometry but leads to non-Gaussian features after averaging over the reaction plane orientation at non-zero impact parameter. In this context, we derive a three-parameter extension of the commonly used Bessel-Gaussian event-by-event distribution of harmonic flow coefficients. Secondly, we study a model of N independent point sources for which connected n-point correlation functions of initial perturbations scale like 1 /N n-1. This scaling is violated for non-central collisions in a way that can be characterized by its impact parameter dependence. We discuss to what extent these are generic properties that can be expected to hold for any model of initial conditions, and how this can improve the fluid dynamical analysis of heavy ion collisions.
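The 1/N^(n-1) scaling of connected correlations for independent sources can be checked with a toy Monte Carlo: for N independent, uniformly placed point sources, the mean squared second-harmonic eccentricity <|e_2|^2> equals 1/N exactly. The estimator and event counts below are assumptions for illustration, not the paper's model.

```python
import random, cmath

def mean_sq_eccentricity(n_sources, n_events, seed=0):
    """Monte Carlo estimate of <|e_2|^2> for n_sources independent,
    uniformly placed point sources; the exact value is 1/n_sources."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_events):
        # sum of n unit phasors at angle 2*theta, normalized by n
        z = sum(cmath.exp(2j * rng.uniform(0.0, 2.0 * cmath.pi)) for _ in range(n_sources))
        total += abs(z / n_sources) ** 2
    return total / n_events

e_small = mean_sq_eccentricity(50, 4000)
e_large = mean_sq_eccentricity(200, 4000)
# connected two-point fluctuations scale like 1/N: quadrupling N quarters <|e_2|^2>
```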
Aluminum nanoparticles burning - still a puzzle?
NASA Astrophysics Data System (ADS)
Gromov, A. A.; Popenko, E. M.
2009-09-01
The experimental data on the aluminum nanopowders (nAl) combustion in oxidizing media (air, propellants AP
Fan, Ruiping
1997-01-01
Most contemporary bioethicists believe that Western bioethical principles, such as the principle of autonomy, are universally binding wherever bioethics is found. According to these bioethicists, these principles may be subject to culturally-conditioned further interpretations for their application in different nations or regions, but an 'abstract content' of each principle remains unchanged, which provides 'an objective basis for moral judgment and international law'. This essay intends to demonstrate that this is not the case. Taking the principle of autonomy as an example, this essay argues that there is no such shared 'abstract content' between the Western bioethical principle of autonomy and the East Asian bioethical principle of autonomy. Other things being equal, the Western principle of autonomy demands self-determination, assumes a subjective conception of the good and promotes the value of individual independence, whilst the East Asian principle of autonomy requires family-determination, presupposes an objective conception of the good and upholds the value of harmonious dependence. They differ from each other in the most general sense and basic moral requirement.
A theory of rotating stall of multistage axial compressors
NASA Technical Reports Server (NTRS)
Moore, F. K.
1983-01-01
A theoretical analysis was made of rotating stall in axial compressors of many stages, finding conditions for a permanent, straight-through traveling disturbance, with the steady compressor characteristic assumed known, and with simple lag processes ascribed to the flows in the inlet, blade passages, and exit regions. For weak disturbances, predicted stall propagation speeds agree well with experimental results. For a locally-parabolic compressor characteristic, an exact nonlinear solution is found and discussed. For deep stall, the stall-zone boundary is most abrupt at the trailing edge, as expected. When a complete characteristic having unstalling and reverse-flow features is adopted, limit cycles governed by a Lienard's equation are found. Analysis of these cycles yields predictions of recovery from rotating stall; a relaxation oscillation is found at some limiting flow coefficient, above which no solution exists. Recovery is apparently independent of lag processes in the blade passages, but instead depends on the lags originating in the inlet and exit flows, and also on the shape of the given characteristic diagram. Small external lags and tall diagrams favor early recovery. Implications for future research are discussed.
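The limit-cycle behavior the analysis attributes to a Liénard equation can be illustrated numerically with the van der Pol oscillator, a standard Liénard-type example (not the compressor model itself): trajectories started from different initial states relax onto the same cycle, analogous to the relaxation oscillation described above.

```python
def van_der_pol_amplitude(x0, v0, mu=1.0, dt=0.01, steps=20000):
    """Integrate the van der Pol oscillator (a Lienard-type equation) with RK4
    and return the peak |x| over the final portion of the trajectory."""
    def deriv(state):
        x, v = state
        return (v, mu * (1.0 - x * x) * v - x)
    x, v = x0, v0
    history = []
    for k in range(steps):
        k1 = deriv((x, v))
        k2 = deriv((x + 0.5 * dt * k1[0], v + 0.5 * dt * k1[1]))
        k3 = deriv((x + 0.5 * dt * k2[0], v + 0.5 * dt * k2[1]))
        k4 = deriv((x + dt * k3[0], v + dt * k3[1]))
        x += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        v += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
        if k > steps - 1000:
            history.append(abs(x))
    return max(history)

# trajectories from inside and outside the cycle settle onto the same amplitude (~2)
a_inner = van_der_pol_amplitude(0.5, 0.0)
a_outer = van_der_pol_amplitude(3.0, 0.0)
```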
NASA Technical Reports Server (NTRS)
Hyer, M. W.; Cooper, D. E.; Cohen, D.
1985-01-01
The effects of a uniform temperature change on the stresses and deformations of composite tubes are investigated. The accuracy of an approximate solution based on the principle of complementary virtual work is determined. Interest centers on tube response away from the ends and so a planar elasticity approach is used. For the approximate solution a piecewise linear variation of stresses with the radial coordinate is assumed. The results from the approximate solution are compared with the elasticity solution. The stress predictions agree well, particularly peak interlaminar stresses. Surprisingly, the axial deformations also agree well. This, despite the fact that the deformations predicted by the approximate solution do not satisfy the interface displacement continuity conditions required by the elasticity solution. The study shows that the axial thermal expansion coefficient of tubes with a specific number of axial and circumferential layers depends on the stacking sequence. This is in contrast to classical lamination theory which predicts the expansion to be independent of the stacking arrangement. As expected, the sign and magnitude of the peak interlaminar stresses depends on stacking sequence.
Short-term memory for pictures seen once or twice.
Martini, Paolo; Maljkovic, Vera
2009-06-01
The present study is concerned with the effects of exposure time, repetition, spacing and lag on old/new recognition memory for generic visual scenes presented in an RSVP paradigm. Early memory studies with verbal material found that knowledge of total exposure time at study is sufficient to accurately predict memory performance at test (the Total Time Hypothesis), irrespective of number of repetitions, spacing or lag. However, other studies have disputed such simple dependence of memory strength on total study time, demonstrating super-additive facilitatory effects of spacing and lag, as well as inhibitory effects, such as the Ranschburg effect, Repetition Blindness and the Attentional Blink. In the experimental conditions of the present study we find no evidence of either facilitatory or inhibitory effects: recognition memory for pictures in RSVP supports the Total Time Hypothesis. The data are consistent with an Unequal-Variance Signal Detection Theory model of memory that assumes the average strength and the variance of the familiarity of pictures both increase with total study time. The main conclusion is that the growth of visual scene familiarity with temporal exposure and repetition is a stochastically independent process.
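A minimal sketch of an unequal-variance signal-detection account: old-item familiarity is Gaussian with larger mean and variance than new items, with both parameters assumed to grow with total study time. The parameter values below are illustrative, not the fitted values from the study.

```python
from math import erf, sqrt

def norm_sf(x, mu=0.0, sigma=1.0):
    """Survival function P(X > x) of a normal distribution."""
    return 0.5 * (1.0 - erf((x - mu) / (sigma * sqrt(2.0))))

def uvsdt_rates(d_prime, sigma_old, criterion):
    """Hit and false-alarm rates under an unequal-variance SDT model:
    new items ~ N(0, 1), old items ~ N(d_prime, sigma_old^2)."""
    hits = norm_sf(criterion, mu=d_prime, sigma=sigma_old)
    false_alarms = norm_sf(criterion, mu=0.0, sigma=1.0)
    return hits, false_alarms

# longer total study time -> larger mean AND variance of familiarity (assumed mapping)
short_hits, fas = uvsdt_rates(d_prime=1.0, sigma_old=1.1, criterion=0.8)
long_hits, _ = uvsdt_rates(d_prime=1.6, sigma_old=1.3, criterion=0.8)
```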
von Kármán–Howarth and Corrsin equations closure based on Lagrangian description of the fluid motion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Divitiis, Nicola de, E-mail: n.dedivitiis@gmail.com
A new approach to obtain the closure formulas for the von Kármán–Howarth and Corrsin equations is presented, which is based on the Lagrangian representation of the fluid motion, and on the Liouville theorem associated to the kinematics of a pair of fluid particles. This kinematics is characterized by the finite scale separation vector which is assumed to be statistically independent from the velocity field. Such assumption is justified by the hypothesis of fully developed turbulence and by the property that this vector varies much more rapidly than the velocity field. This formulation leads to the closure formulas of von Kármán–Howarth and Corrsin equations in terms of longitudinal velocity and temperature correlations following a demonstration completely different with respect to the previous works. Some of the properties and the limitations of the closed equations are discussed. In particular, we show that the times of evolution of the developed kinetic energy and temperature spectra are finite quantities which depend on the initial conditions.
Cooperation induced by random sequential exclusion
NASA Astrophysics Data System (ADS)
Li, Kun; Cong, Rui; Wang, Long
2016-06-01
Social exclusion is a common and powerful tool to penalize deviators in human societies, and thus to effectively elevate collaborative efforts. Current models on the evolution of exclusion behaviors mostly assume that each peer excluder independently makes the decision to expel the defectors, but has no idea what others in the group would do or what the actual punishment effect will be. Thus, a more realistic model, random sequential exclusion, is proposed. In this mechanism, each excluder has to pay an extra scheduling cost and then all the excluders are arranged in a random order to implement the exclusion actions. If one free rider has already been excluded by an excluder, the remaining excluders will not participate in expelling this defector. We find that this mechanism can help stabilize cooperation under more unfavorable conditions than normal peer exclusion can, both in well-mixed populations and on social networks. However, too large a scheduling cost may undermine the advantage of this mechanism. Our work confirms that collaborative practice among punishers plays an important role in further boosting cooperation.
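One possible reading of the mechanism can be sketched as follows. The sharing rule (each acting excluder expels one still-present defector) and the cost values are assumptions for illustration, not the paper's exact payoff structure.

```python
import random

def random_sequential_exclusion(excluders, defectors, scheduling_cost, exclusion_cost, rng):
    """Sketch of random sequential exclusion: every excluder pays the scheduling
    cost up front, excluders then act in random order, and each defector is
    expelled by at most one excluder (later excluders skip expelled defectors)."""
    order = list(range(excluders))
    rng.shuffle(order)
    remaining = set(range(defectors))
    costs = {i: scheduling_cost for i in range(excluders)}  # everyone schedules
    for i in order:
        if not remaining:
            break  # all defectors already expelled; no further action needed
        remaining.pop()               # expel one still-present defector
        costs[i] += exclusion_cost    # only the acting excluder pays this
    return costs, remaining

rng = random.Random(1)
costs, left = random_sequential_exclusion(
    excluders=5, defectors=3, scheduling_cost=0.2, exclusion_cost=1.0, rng=rng)
```

With five excluders and three defectors, every excluder pays the scheduling cost but only three also pay the exclusion cost, so the total group expense is bounded regardless of how many excluders volunteered.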
Do we spontaneously form stable trustworthiness impressions from facial appearance?
Klapper, André; Dotsch, Ron; van Rooij, Iris; Wigboldus, Daniël H J
2016-11-01
It is widely assumed among psychologists that people spontaneously form trustworthiness impressions of newly encountered people from their facial appearance. However, most existing studies directly or indirectly induced an impression formation goal, which means that the existing empirical support for spontaneous facial trustworthiness impressions remains insufficient. In particular, it remains an open question whether trustworthiness from facial appearance is encoded in memory. Using the 'who said what' paradigm, we indirectly measured to what extent people encoded the trustworthiness of observed faces. The results of 4 studies demonstrated a reliable tendency toward trustworthiness encoding. This was shown under conditions of varying context-relevance and salience of trustworthiness. Moreover, evidence for this tendency was obtained using both (experimentally controlled) artificial and (naturally varying) real faces. Taken together, these results suggest that there is a spontaneous tendency to form relatively stable trustworthiness impressions from facial appearance, which is relatively independent of the context. As such, our results further underline how widespread influences of facial trustworthiness may be in our everyday life. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Conditional uncertainty principle
NASA Astrophysics Data System (ADS)
Gour, Gilad; Grudka, Andrzej; Horodecki, Michał; Kłobus, Waldemar; Łodyga, Justyna; Narasimhachar, Varun
2018-04-01
We develop a general operational framework that formalizes the concept of conditional uncertainty in a measure-independent fashion. Our formalism is built upon a mathematical relation which we call conditional majorization. We define conditional majorization and, for the case of classical memory, we provide its thorough characterization in terms of monotones, i.e., functions that preserve the partial order under conditional majorization. We demonstrate the application of this framework by deriving two types of memory-assisted uncertainty relations, (1) a monotone-based conditional uncertainty relation and (2) a universal measure-independent conditional uncertainty relation, both of which set a lower bound on the minimal uncertainty that Bob has about Alice's pair of incompatible measurements, conditioned on arbitrary measurement that Bob makes on his own system. We next compare the obtained relations with their existing entropic counterparts and find that they are at least independent.
2014-12-26
additive value function, which assumes mutual preferential independence (Gregory S. Parnell, 2013). In other words, this method can be used if the... additive value function method to calculate the aggregate value of multiple objectives. Step 9 : Sensitivity Analysis Once the global values are...gravity metric, the additive method will be applied using equal weights for each axis value function. Pilot Satisfaction (Usability) As expressed
NASA Technical Reports Server (NTRS)
Bhansali, Vineer
1993-01-01
Assuming trivial action of Euclidean translations, the method of induced representations is used to derive a correspondence between massless field representations transforming under the full generalized even dimensional Lorentz group, and highest weight states of the relevant little group. This gives a connection between 'helicity' and 'chirality' in all dimensions. Restrictions on 'gauge independent' representations for physical particles that this induction imposes are also stated.
Dynamic Fracture in Brittle Materials
2006-02-01
Stress analysis in oxidation problems usually follows the approach of introducing a known eigenstrain in the constitutive equation for elastic stress...deformation behavior in the oxide. The eigenstrain is assumed to be independent of time and position; it is the strain that would be observed in an...imaginary stress-free phase transformation. The total strain of the oxide is the sum of elastic strain and this eigenstrain . As shown in [13], the principal
ERIC Educational Resources Information Center
Bakker, Steven
2012-01-01
A particular trait of the educational system under socialist reign was accountability at the input side--appropriate facilities, centrally decided curriculum, approved text-books, and uniformly trained teachers--but no control on the output. It was simply assumed that it met the agreed standards, which was, in turn, proven by the statistics…
Quantized expected returns in terms of dividend yield at the money
NASA Astrophysics Data System (ADS)
Dieng, Lamine
2011-03-01
We use the Bachelier (additive model) and the Black-Scholes (multiplicative model) as our models for the stock price movement for an investor who has entered into an American call option contract. We assume the investor pays a certain dividend yield on the expected rate of return from buying stocks. In this work, we also assume the stock price to be initially in the out-of-the-money state and to eventually move up through the at-the-money state to the deep in-the-money state, where the expected future payoffs and returns are positive for the stock holder. We call the at-the-money point a singularity because the expected payoff vanishes there. Then, using martingale, supermartingale and Markov theories, we obtain the Bachelier-type analog of the Black-Scholes equation and the Black-Scholes equation itself, which we hedge in the limit where the change of the expected payoff of the call option is extremely small. Hence, by comparison we obtain the time-independent Schroedinger equation of quantum mechanics. We solve the time-independent Schroedinger equation completely for both models to obtain the expected rate of returns and the expected payoffs for the stock holder at the money. We find the expected rate of returns to be quantized in terms of the dividend yield.
Working memory load impairs the evaluation of behavioral errors in the medial frontal cortex.
Maier, Martin E; Steinhauser, Marco
2017-10-01
Early error monitoring in the medial frontal cortex enables error detection and the evaluation of error significance, which helps prioritize adaptive control. This ability has been assumed to be independent from central capacity, a limited pool of resources assumed to be involved in cognitive control. The present study investigated whether error evaluation depends on central capacity by measuring the error-related negativity (Ne/ERN) in a flanker paradigm while working memory load was varied on two levels. We used a four-choice flanker paradigm in which participants had to classify targets while ignoring flankers. Errors could be due to responding either to the flankers (flanker errors) or to none of the stimulus elements (nonflanker errors). With low load, the Ne/ERN was larger for flanker errors than for nonflanker errors-an effect that has previously been interpreted as reflecting differential significance of these error types. With high load, no such effect of error type on the Ne/ERN was observable. Our findings suggest that working memory load does not impair the generation of an Ne/ERN per se but rather impairs the evaluation of error significance. They demonstrate that error monitoring is composed of capacity-dependent and capacity-independent mechanisms. © 2017 Society for Psychophysiological Research.
de la Colina, M A; Pompilio, L; Hauber, M E; Reboreda, J C; Mahler, B
2018-03-01
Obligate avian brood parasites lay their eggs in nests of other host species, which assume all the costs of parental care for the foreign eggs and chicks. The most common defensive response to parasitism is the rejection of foreign eggs by hosts. Different cognitive mechanisms and decision-making rules may guide both egg recognition and rejection behaviors. Classical optimization models generally assume that decisions are based on the absolute properties of the options (i.e., absolute valuation). Increasing evidence shows instead that hosts' rejection decisions also depend on the context in which options are presented (i.e., context-dependent valuation). Here we study whether the chalk-browed mockingbird's (Mimus saturninus) rejection of parasitic shiny cowbird (Molothrus bonariensis) eggs is a fixed behavior or varies with the context of the clutch. We tested three possible context-dependent mechanisms: (1) range effect, (2) habituation to variation, and (3) sensitization to variation. We found that mockingbird rejection of parasitic eggs does not change according to the characteristics of the other eggs in the nest. Thus, rejection decisions may exclusively depend on the objective characteristics of the eggs, meaning that the threshold of acceptance or rejection of a foreign egg is context-independent in this system.
Kistner, Emily O; Muller, Keith E
2004-09-01
Intraclass correlation and Cronbach's alpha are widely used to describe reliability of tests and measurements. Even with Gaussian data, exact distributions are known only for compound symmetric covariance (equal variances and equal correlations). Recently, large sample Gaussian approximations were derived for the distribution functions. New exact results allow calculating the exact distribution function and other properties of intraclass correlation and Cronbach's alpha, for Gaussian data with any covariance pattern, not just compound symmetry. Probabilities are computed in terms of the distribution function of a weighted sum of independent chi-square random variables. New F approximations for the distribution functions of intraclass correlation and Cronbach's alpha are much simpler and faster to compute than the exact forms. Assuming the covariance matrix is known, the approximations typically provide sufficient accuracy, even with as few as ten observations. Either the exact or approximate distributions may be used to create confidence intervals around an estimate of reliability. Monte Carlo simulations led to a number of conclusions. Correctly assuming that the covariance matrix is compound symmetric leads to accurate confidence intervals, as was expected from previously known results. However, assuming and estimating a general covariance matrix produces somewhat optimistically narrow confidence intervals with 10 observations. Increasing sample size to 100 gives essentially unbiased coverage. Incorrectly assuming compound symmetry leads to pessimistically large confidence intervals, with pessimism increasing with sample size. In contrast, incorrectly assuming general covariance introduces only a modest optimistic bias in small samples. Hence the new methods seem preferable for creating confidence intervals, except when compound symmetry definitely holds.
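Cronbach's alpha itself is straightforward to compute from a subjects-by-items score matrix; the toy data below are illustrative, not from the simulations.

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a subjects x items score matrix (list of rows)."""
    n_items = len(scores[0])
    def var(xs):  # sample variance with n-1 denominator
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[j] for row in scores]) for j in range(n_items)]
    total_var = var([sum(row) for row in scores])
    return (n_items / (n_items - 1)) * (1.0 - sum(item_vars) / total_var)

# toy data: five subjects rating three items that measure roughly the same construct
data = [
    [2, 3, 3],
    [4, 4, 5],
    [1, 2, 1],
    [3, 3, 4],
    [5, 4, 5],
]
alpha = cronbach_alpha(data)  # high because item scores covary strongly
```

Confidence intervals around such a point estimate are exactly what the exact and approximate distributions discussed above are for.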
Fitting and Testing Conditional Multinormal Partial Credit Models
ERIC Educational Resources Information Center
Hessen, David J.
2012-01-01
A multinormal partial credit model for factor analysis of polytomously scored items with ordered response categories is derived using an extension of the Dutch Identity (Holland in "Psychometrika" 55:5-18, 1990). In the model, latent variables are assumed to have a multivariate normal distribution conditional on unweighted sums of item…
14 CFR 27.479 - Level landing conditions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Level landing conditions. 27.479 Section 27... AIRWORTHINESS STANDARDS: NORMAL CATEGORY ROTORCRAFT Strength Requirements Ground Loads § 27.479 Level landing..., the rotorcraft is assumed to be in each of the following level landing attitudes: (1) An attitude in...
46 CFR 172.195 - Survival conditions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... assumed damage if it meets the following conditions in the final stage of flooding: (a) Final waterline... of an opening through which progressive flooding may take place, such as an air pipe, or an opening... least 3.94 inches (10 cm). (3) Each submerged opening must be weathertight. (d) Progressive flooding. If...
46 CFR 172.195 - Survival conditions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... assumed damage if it meets the following conditions in the final stage of flooding: (a) Final waterline... of an opening through which progressive flooding may take place, such as an air pipe, or an opening... least 3.94 inches (10 cm). (3) Each submerged opening must be weathertight. (d) Progressive flooding. If...
46 CFR 172.195 - Survival conditions.
Code of Federal Regulations, 2012 CFR
2012-10-01
... assumed damage if it meets the following conditions in the final stage of flooding: (a) Final waterline... of an opening through which progressive flooding may take place, such as an air pipe, or an opening... least 3.94 inches (10 cm). (3) Each submerged opening must be weathertight. (d) Progressive flooding. If...
Kim, Kyungil; Markman, Arthur B; Kim, Tae Hoon
2016-11-01
Research on causal reasoning has focused on the influence of covariation between candidate causes and effects on causal judgments. We suggest that the type of covariation information to which people attend is affected by the task being performed. For this, we manipulated the test questions for the evaluation of contingency information and observed its influence on both contingency learning and subsequent causal selections. When people select one cause related to an effect, they focus on conditional contingencies assuming the absence of alternative causes. When people select two causes related to an effect, they focus on conditional contingencies assuming the presence of alternative causes. We demonstrated this use of contingency information in four experiments.
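The conditional contingencies described here can be made concrete: ΔP = P(e|c) - P(e|~c), computed only over trials where an alternative cause is absent (or present). The trial encoding below is an assumption for illustration.

```python
def delta_p(events, cause, effect, condition=None):
    """Conditional contingency dP = P(e|c) - P(e|~c), optionally computed
    only over trials where `condition` holds (e.g. alternative cause absent)."""
    rows = [ev for ev in events if condition is None or condition(ev)]
    def p_effect(cause_value):
        sub = [ev for ev in rows if ev[cause] == cause_value]
        return sum(ev[effect] for ev in sub) / len(sub)
    return p_effect(1) - p_effect(0)

# toy trials: candidate cause c1, alternative cause c2, effect e
trials = [
    {"c1": 1, "c2": 0, "e": 1}, {"c1": 1, "c2": 0, "e": 1},
    {"c1": 0, "c2": 0, "e": 0}, {"c1": 0, "c2": 0, "e": 0},
    {"c1": 1, "c2": 1, "e": 1}, {"c1": 1, "c2": 1, "e": 1},
    {"c1": 0, "c2": 1, "e": 1}, {"c1": 0, "c2": 1, "e": 0},
]
dp_absent = delta_p(trials, "c1", "e", condition=lambda ev: ev["c2"] == 0)
dp_present = delta_p(trials, "c1", "e", condition=lambda ev: ev["c2"] == 1)
```

In this toy set the two conditionalizations disagree (1.0 versus 0.5), which is exactly the situation in which the task-dependent focus described above matters.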
Well-posedness for a class of doubly nonlinear stochastic PDEs of divergence type
NASA Astrophysics Data System (ADS)
Scarpa, Luca
2017-08-01
We prove well-posedness for doubly nonlinear parabolic stochastic partial differential equations of the form dX_t - div γ(∇X_t) dt + β(X_t) dt ∋ B(t, X_t) dW_t, where γ and β are the two nonlinearities, assumed to be multivalued maximal monotone operators everywhere defined on R^d and R respectively, and W is a cylindrical Wiener process. Using variational techniques, suitable uniform estimates (both pathwise and in expectation) and some compactness results, well-posedness is proved under the classical Leray-Lions conditions on γ and with no restrictive smoothness or growth assumptions on β. The operator B is assumed to be Hilbert-Schmidt and to satisfy some classical Lipschitz conditions in the second variable.
Color Imaging management in film processing
NASA Astrophysics Data System (ADS)
Tremeau, Alain; Konik, Hubert; Colantoni, Philippe
2003-12-01
The latest research projects in the laboratory LIGIV concern the capture, processing, archiving and display of color images, considering the trichromatic nature of the Human Visual System (HVS). One of these projects addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimize the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimizing consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses content-based adaptation of image display. The main focus is on Region of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display medium changes. This requires, firstly, the definition of a reference color space and of bidirectional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera color primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from the digital graphic arts. To control image pre-processing and image post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but must additionally consider mesopic viewing conditions.
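A common choice for a device-independent reference space of the kind described is CIE XYZ. As a minimal sketch of one direction of such a transformation, here is the standard sRGB-to-XYZ conversion (sRGB is an assumed example device characterization, not LIGIV's actual pipeline).

```python
def srgb_to_xyz(r, g, b):
    """Convert an sRGB value (0..1 per channel) to CIE XYZ (D65 white point),
    a common device-independent reference space; constants per IEC 61966-2-1."""
    def linearize(u):
        # undo the sRGB transfer function to get linear-light RGB
        return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4
    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    return x, y, z

white = srgb_to_xyz(1.0, 1.0, 1.0)  # should land on the D65 white point
```

A real pipeline would pair this with the inverse transform for each output device, which is exactly the bidirectional-transformation requirement stated above.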
Breakdown of the Frozen-in Condition and Plasma Acceleration: Dynamical Theory
NASA Astrophysics Data System (ADS)
Song, Y.; Lysak, R. L.
2007-12-01
The magnetic reconnection hypothesis emphasizes the importance of the breakdown of the frozen-in condition, explains the strong dependence of the geomagnetic activity on the IMF, and approximates an average qualitative description for many IMF controlled effects in magnetospheric physics. However, some important theoretical aspects of reconnection, including its definition, have not been carefully examined. The crucial components of such models, such as the largely-accepted X-line reconnection picture and the broadly-used explanations of the breakdown of the frozen-in condition, lack complete theoretical support. The important irreversible reactive interaction is intrinsically excluded and overlooked in most reconnection models. The generation of parallel electric fields must be the result of a reactive plasma interaction, which is associated with the temporal changes and spatial gradients of magnetic and velocity shears (Song and Lysak, 2006). Unlike previous descriptions of the magnetic reconnection process, which depend on dissipative-type coefficients or some passive terms in the generalized Ohm's law, the reactive interaction is a dynamical process, which favors localized high magnetic and/or mechanical stresses and a low plasma density. The reactive interaction is often closely associated with the radiation of shear Alfvén waves and is independent of any assumed dissipation coefficients. The generated parallel electric field makes an irreversible conversion between magnetic energy and the kinetic energy of the accelerated plasma and the bulk flow. We demonstrate how the reactive interaction, e.g., the nonlinear interaction of MHD mesoscale wave packets at current sheets and in the auroral acceleration region, can create and support parallel electric fields, causing the breakdown of the frozen-in condition and plasma acceleration.
NASA Astrophysics Data System (ADS)
Dimas, Athanassios A.; Kolokythas, Gerasimos A.
Numerical simulations of the free-surface flow developed by the propagation of nonlinear water waves over a rippled bottom are performed, assuming that the corresponding flow is two-dimensional, incompressible and viscous. The simulations are based on the numerical solution of the Navier-Stokes equations subject to the fully-nonlinear free-surface boundary conditions and appropriate bottom, inflow and outflow boundary conditions. The equations are properly transformed so that the computational domain becomes time-independent. For the spatial discretization, a hybrid scheme is used where central finite-differences, in the horizontal direction, and a pseudo-spectral approximation method with Chebyshev polynomials, in the vertical direction, are applied. A fractional time-step scheme is used for the temporal discretization. Over the rippled bed, the wave boundary layer thickness increases significantly, in comparison to that over a flat bed, due to flow separation at the ripple crests, which generates alternating circulation regions. The amplitude of the wall shear stress over the ripples increases with increasing ripple height or decreasing Reynolds number, while the corresponding friction force is insensitive to the ripple height change. The amplitudes of the form drag forces due to dynamic and hydrostatic pressures increase with increasing ripple height but are insensitive to changes in the Reynolds number; therefore, the percentage of friction in the total drag force decreases with increasing ripple height or increasing Reynolds number.
Sloan, Jamison; Sun, Yunwei; Carrigan, Charles
2016-05-01
Enforcement of the Comprehensive Nuclear Test Ban Treaty (CTBT) will involve monitoring for radiologic indicators of underground nuclear explosions (UNEs). A UNE produces a variety of radioisotopes which then decay through connected radionuclide chains. A particular species of interest is xenon, namely the four isotopes (131m)Xe, (133m)Xe, (133)Xe, and (135)Xe. Due to their half-lives, some of these isotopes can exist in the subsurface for more than 100 days. This convenient timescale, combined with modern detection capabilities, makes the xenon family a desirable candidate for UNE detection. Ratios of these isotopes as a function of time have been studied in the past for distinguishing nuclear explosions from civilian nuclear applications. However, the initial yields from UNEs have been treated as fixed values. In reality, these independent yields are uncertain to a large degree. This study quantifies the uncertainty in xenon ratios as a result of these uncertain initial conditions to better bound the values that xenon ratios can assume. We have successfully used a combination of analytical and sampling-based statistical methods to reliably bound xenon isotopic ratios. We have also conducted a sensitivity analysis and found that xenon isotopic ratios are primarily sensitive to only a few of many uncertain initial conditions. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
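The sampling-based bounding of isotopic ratios described above can be sketched in a few lines. This is a minimal Monte Carlo illustration, not the study's method: it assumes simple exponential decay of each isotope (ignoring in-growth from parent nuclides in the decay chains), approximate half-lives, and a hypothetical lognormal spread on the independent initial yields.

```python
import math
import random

# Approximate xenon half-lives in days (illustrative values; in-growth
# from parent nuclides in the connected chains is deliberately ignored)
HALF_LIFE_DAYS = {"Xe-131m": 11.84, "Xe-133m": 2.19,
                  "Xe-133": 5.24, "Xe-135": 0.38}

def decayed(n0, isotope, t_days):
    """Remaining atoms after t_days of simple exponential decay."""
    lam = math.log(2.0) / HALF_LIFE_DAYS[isotope]
    return n0 * math.exp(-lam * t_days)

def sample_ratio(t_days, rng, rel_sigma=0.2):
    """One draw of the Xe-133 / Xe-135 ratio with uncertain, independent
    initial yields; rel_sigma is a hypothetical relative uncertainty."""
    n133 = rng.lognormvariate(0.0, rel_sigma)
    n135 = rng.lognormvariate(0.0, rel_sigma)
    return decayed(n133, "Xe-133", t_days) / decayed(n135, "Xe-135", t_days)

rng = random.Random(42)
ratios = sorted(sample_ratio(3.0, rng) for _ in range(10_000))
lo, hi = ratios[250], ratios[9_750]  # empirical 95% bounds on the ratio
```

Because Xe-135 decays much faster than Xe-133, the sampled ratio grows rapidly with time, and the width of the empirical bounds reflects only the assumed initial-yield uncertainty.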
Nusselder, Wilma J.; Looman, Caspar W.; Mackenbach, Johan P.
2014-01-01
Objectives. We assessed the contributions of the prevalence and disabling impact of specific diseases to educational disparities in the prevalence of disability. Methods. We examined a large representative survey of the Dutch population, the Dutch Permanent Survey of Living Conditions (2001–2007; n = 24 883; ages 40–97 years). We attributed the prevalence of disability to chronic diseases by using their empirical associations and assuming independent competing causes of disability. We estimated contributions of prevalence and the disabling impact of diseases to disparities in disability using counterfactuals. Results. We found that the prevalence of disability in individuals with only an elementary education was 19 to 20 percentage points higher than that in individuals with tertiary education. Sixty-five percent of this difference could be attributed to specific chronic diseases, but more so to their disabling impact (49%–51%) than to their prevalence (20%–29%). Back pain, neck or arm conditions, and peripheral vascular disease contributed most to the disparity in men, and arthritis, back pain, and chronic nonspecific lung disease contributed most to the disparity in women. Conclusions. Educational disparities in the burden of disability were primarily caused by high disabling impacts of chronic diseases among low educated groups. Tackling disparities might require more effective treatment or rehabilitation of disability in lower socioeconomic groups. PMID:24922134
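The "independent competing causes" attribution used above can be illustrated with a toy calculation: a person remains free of disability only if every disease fails to disable them. The prevalences and disabling impacts below are hypothetical numbers for illustration, not the survey's estimates.

```python
def disability_prevalence(diseases):
    """Independent competing causes of disability.
    diseases: iterable of (prevalence, disabling_impact) pairs, where
    disabling_impact = P(disability | disease)."""
    p_escape = 1.0  # probability that no disease causes disability
    for prevalence, impact in diseases:
        p_escape *= 1.0 - prevalence * impact
    return 1.0 - p_escape

# Hypothetical figures for two education groups (three diseases each)
low_education  = [(0.30, 0.40), (0.20, 0.35), (0.15, 0.30)]
high_education = [(0.20, 0.25), (0.12, 0.20), (0.10, 0.15)]
gap = disability_prevalence(low_education) - disability_prevalence(high_education)
```

Within this model the counterfactual logic of the study can be mimicked: substituting the high-education prevalences while keeping the low-education impacts isolates the contribution of prevalence, and vice versa for disabling impact.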
NASA Astrophysics Data System (ADS)
Rodgers, Ku`ulei S.; Kido, Michael H.; Jokiel, Paul L.; Edmonds, Tim; Brown, Eric K.
2012-07-01
A linkage between the condition of watersheds and adjacent nearshore coral reef communities is an assumed paradigm in the concept of integrated coastal management. However, quantitative evidence for this "catchment to sea" or "ridge to reef" relationship on oceanic islands is lacking and would benefit from the use of appropriate marine and terrestrial landscape indicators to quantify and evaluate ecological status on a large spatial scale. To address this need, our study compared the Hawai`i Watershed Health Index (HI-WHI) and Reef Health Index (HI-RHI) derived independently of each other over the past decade. Comparisons were made across 170 coral reef stations at 52 reef sites adjacent to 42 watersheds throughout the main Hawaiian Islands. A significant positive relationship was shown between the health of watersheds and that of adjacent reef environments when all sites and depths were considered. This relationship was strongest for sites facing in a southerly direction, but diminished for north facing coasts exposed to persistent high surf. High surf conditions along the north shore increase local wave driven currents and flush watershed-derived materials away from nearshore waters. Consequently, reefs in these locales are less vulnerable to the deposition of land derived sediments, nutrients and pollutants transported from watersheds to ocean. Use of integrated landscape health indices can be applied to improve regional-scale conservation and resource management.
A general CFD framework for fault-resilient simulations based on multi-resolution information fusion
NASA Astrophysics Data System (ADS)
Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em
2017-10-01
We develop a general CFD framework for multi-resolution simulations, targeting multiscale problems as well as resilience in exascale simulations, where faulty processors may lead to simulated fields that are gappy in space-time. We combine approximation theory and domain decomposition together with statistical learning techniques, e.g. coKriging, to estimate boundary conditions and minimize communications by performing independent parallel runs. To demonstrate this new simulation approach, we consider two benchmark problems. First, we solve the heat equation (a) on a small number of spatial "patches" distributed across the domain, simulated by finite differences at fine resolution and (b) on the entire domain simulated at very low resolution, thus fusing multi-resolution models to obtain the final answer. Second, we simulate the flow in a lid-driven cavity in an analogous fashion, by fusing finite difference solutions obtained with fine and low resolution assuming gappy data sets. We investigate the influence of various parameters for this framework, including the correlation kernel, the size of a buffer employed in estimating boundary conditions, the coarseness of the resolution of auxiliary data, and the communication frequency across different patches in fusing the information at different resolution levels. In addition to its robustness and resilience, the new framework can be employed to generalize previous multiscale approaches involving heterogeneous discretizations or even fundamentally different flow descriptions, e.g. in continuum-atomistic simulations.
Running with rugby balls: bulk renormalization of codimension-2 branes
NASA Astrophysics Data System (ADS)
Williams, M.; Burgess, C. P.; van Nierop, L.; Salvio, A.
2013-01-01
We compute how one-loop bulk effects renormalize both bulk and brane effective interactions for geometries sourced by codimension-two branes. We do so by explicitly integrating out spin-zero, -half and -one particles in 6-dimensional Einstein-Maxwell-Scalar theories compactified to 4 dimensions on a flux-stabilized 2D geometry. (Our methods apply equally well for D dimensions compactified to D - 2 dimensions, although our explicit formulae do not capture all divergences when D > 6.) The renormalization of bulk interactions are independent of the boundary conditions assumed at the brane locations, and reproduce standard heat-kernel calculations. Boundary conditions at any particular brane do affect how bulk loops renormalize this brane's effective action, but not the renormalization of other distant branes. Although we explicitly compute our loops using a rugby ball geometry, because we follow only UV effects our results apply more generally to any geometry containing codimension-two sources with conical singularities. Our results have a variety of uses, including calculating the UV sensitivity of one-loop vacuum energy seen by observers localized on the brane. We show how these one-loop effects combine in a surprising way with bulk back-reaction to give the complete low-energy effective cosmological constant, and comment on the relevance of this calculation to proposed applications of codimension-two 6D models to solutions of the hierarchy and cosmological constant problems.
Quantifying Square Membrane Wrinkle Behavior Using MITC Shell Elements
NASA Technical Reports Server (NTRS)
Jacobson, Mindy B.; Iwasa, Takashi; Natori, M. C.
2004-01-01
For future membrane-based structures, quantified predictions of membrane wrinkling behavior in terms of amplitude, angle and wavelength are needed to optimize the efficiency and integrity of such structures, as well as their associated control systems. For numerical analyses performed in the past, limitations on the accuracy of membrane distortion simulations have often been related to the assumptions made while using finite elements. Specifically, this work demonstrates that critical assumptions include: effects of gravity, assumed initial or boundary conditions, and the type of element used to model the membrane. In this work, a 0.2 square meter membrane is treated as a structural material with non-negligible bending stiffness. Mixed Interpolation of Tensorial Components (MITC) shell elements are used to simulate wrinkling behavior due to a constant applied in-plane shear load. Membrane thickness, gravity effects, and initial imperfections with respect to flatness were varied in numerous nonlinear analysis cases. Significant findings include notable variations in wrinkle modes for thickness in the range of 50 microns to 1000 microns, which also depend on the presence of an applied gravity field. However, it is revealed that relationships between overall strain energy density for cases with differing initial conditions are independent of the assumed initial conditions. In addition, analysis results indicate that the relationship between amplitude scale (W/t) and structural scale (L/t) is linear in the presence of a gravity field.
Ramos-Jiliberto, Rodrigo; González-Olivares, Eduardo; Bozinovic, Francisco
2002-08-01
We present a predator-prey metaphysiological model, based on the available behavioral and physiological information of the sigmodontine rodent Phyllotis darwini. The model is focused on the population-level consequences of the antipredator behavior, performed by the rodent population, which is assumed to be an inducible response of predation avoidance. The decrease in vulnerability is explicitly considered to have two associated costs: a decreasing foraging success and an increasing metabolic loss. The model analysis was carried out on a reduced form of the system by means of numerical and analytical tools. We evaluated the stability properties of equilibrium points in the phase plane, and carried out bifurcation analyses of rodent equilibrium density under varying conditions of three relevant parameters. The bifurcation parameters chosen represent predator avoidance effectiveness (A), foraging cost of antipredator behavior (C(1)'), and activity-metabolism cost (C(4)'). Our analysis suggests that the trade-offs involved in antipredator behavior play a fundamental role in the stability properties of the system. Under conditions of high foraging cost, stability decreases as antipredator effectiveness increases. Under the complementary scenario (not considering the highest foraging costs), the equilibria are either stable when both costs are low, or unstable when both costs are higher, independent of antipredator effectiveness. No evidence of stabilizing effects of antipredator behavior was found. Copyright 2002 Elsevier Science (USA).
ERIC Educational Resources Information Center
Johnson, Amy M.; Azevedo, Roger; D'Mello, Sidney K.
2011-01-01
This study examined the temporal and dynamic nature of students' self-regulatory processes while learning about the circulatory system with hypermedia. A total of 74 undergraduate students were randomly assigned to 1 of 2 conditions: independent learning or externally assisted learning. Participants in the independent learning condition used a…
HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2011-01-01
The variance-covariance matrix plays a central role in the inferential theories of high-dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, and this enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
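The "factor structure plus thresholded error covariance" idea can be sketched roughly as follows. This is a simplified illustration, not the paper's estimator: it uses principal components for the common part and a single fixed soft threshold on the residual off-diagonals, rather than the entry-adaptive threshold of Cai and Liu (2011); all function and variable names are ours.

```python
import numpy as np

def factor_thresholded_cov(X, n_factors, thresh):
    """Sketch of a factor-based covariance estimator.
    X: (n_obs, n_assets) data matrix. The leading principal components
    give the common component; the residual (idiosyncratic) covariance
    keeps its diagonal and has its off-diagonals soft-thresholded."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                      # sample covariance
    vals, vecs = np.linalg.eigh(S)         # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_factors]
    common = (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T
    resid = S - common                     # idiosyncratic part
    off = resid - np.diag(np.diag(resid))
    off = np.sign(off) * np.maximum(np.abs(off) - thresh, 0.0)  # soft threshold
    return common + np.diag(np.diag(resid)) + off
```

With a very large threshold the estimator reduces to "common part plus residual diagonal", i.e. a strict factor model with independent idiosyncratic components; smaller thresholds admit sparse cross-sectional correlation, which is the relaxation the abstract describes.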
NASA Technical Reports Server (NTRS)
Wu, R. W.; Witmer, E. A.
1972-01-01
Assumed-displacement versions of the finite-element method are developed to predict large-deformation elastic-plastic transient deformations of structures. Both the conventional and a new improved finite-element variational formulation are derived. These formulations are then developed in detail for straight-beam and curved-beam elements undergoing (1) Bernoulli-Euler-Kirchhoff or (2) Timoshenko deformation behavior, in one plane. For each of these categories, several types of assumed-displacement finite elements are developed, and transient response predictions are compared with available exact solutions for small-deflection, linear-elastic transient responses. The present finite-element predictions for large-deflection elastic-plastic transient responses are evaluated via several beam and ring examples for which experimental measurements of transient strains and large transient deformations and independent finite-difference predictions are available.
NASA Technical Reports Server (NTRS)
Noor, A. K.; Burton, W. S.
1992-01-01
Analytic three-dimensional thermoelasticity solutions are presented for the thermal buckling of multilayered angle-ply composite plates with temperature-dependent thermoelastic properties. Both the critical temperatures and the sensitivity derivatives are computed. The sensitivity derivatives measure the sensitivity of the buckling response to variations in the different lamination and material parameters of the plate. The plates are assumed to have rectangular geometry and an antisymmetric lamination with respect to the middle plane. The temperature is assumed to be independent of the surface coordinates, but has an arbitrary symmetric variation through the thickness of the plate. The prebuckling deformations are accounted for. Numerical results are presented, for plates subjected to uniform temperature increase, showing the effects of temperature-dependent material properties on the prebuckling stresses, critical temperatures, and their sensitivity derivatives.
Ma, Ke; Hommel, Bernhard
2013-01-01
The rubber hand illusion refers to the observation that participants perceive “body ownership” for a rubber hand if it moves, or is stroked in synchrony with the participant's real (covered) hand. Research indicates that events targeting artificial body parts can trigger affective responses (affective resonance) only with perceived body ownership, while neuroscientific findings suggest affective resonance irrespective of ownership (e.g., when observing other individuals under threat). We hypothesized that this may depend on the severity of the event. We first replicated previous findings that the rubber hand illusion can be extended to virtual hands—the virtual-hand illusion. We then tested whether hand ownership and affective resonance (assessed by galvanic skin conductance) are modulated by the experience of an event that either “impacted” (a ball hitting the hand) or “threatened” (a knife cutting the hand) the virtual hand. Ownership was stronger if the virtual hand moved synchronously with the participant's own hand, but this effect was independent from whether the hand was impacted or threatened. Affective resonance was mediated by ownership however: In the face of mere impact, participants showed more resonance in the synchronous condition (i.e., with perceived ownership) than in the asynchronous condition. In the face of threat, in turn, affective resonance was independent of synchronicity—participants were emotionally involved even if a threat was targeting a hand that they did not perceive as their own. Our findings suggest that perceived body ownership and affective responses to body-related impact or threat can be dissociated and are thus unlikely to represent the same underlying process. We argue that affective reactions to impact are produced in a top-down fashion if the impacted effector is assumed to be part of one's own body, whereas threatening events trigger affective responses more directly in a bottom-up fashion—irrespective of body ownership. 
PMID:24046762
Chronic consequences of acute injuries: worse survival after discharge.
Shafi, Shahid; Renfro, Lindsay A; Barnes, Sunni; Rayan, Nadine; Gentilello, Larry M; Fleming, Neil; Ballard, David
2012-09-01
The Trauma Quality Improvement Program uses inhospital mortality to measure quality of care, which assumes patients who survive injury are not likely to suffer higher mortality after discharge. We hypothesized that survival rates in trauma patients who survive to discharge remain stable afterward. Patients treated at an urban Level I trauma center (2006-2008) were linked with the Social Security Administration Death Master File. Survival rates were measured at 30, 90, and 180 days and 1 and 2 years from injury among two groups of trauma patients who survived to discharge: major trauma (Abbreviated Injury Scale score ≥ 3 injuries, n = 2,238) and minor trauma (Abbreviated Injury Scale score ≤ 2 injuries, n = 1,171). Control groups matched to each trauma group by age and sex were simulated from the US general population using annual survival probabilities from census data. Kaplan-Meier and log-rank analyses conditional upon survival to each time point were used to determine changes in risk of mortality after discharge. Cox proportional hazards models with left truncation at the time of discharge were used to determine independent predictors of mortality after discharge. The survival rate in trauma patients with major injuries was 92% at 30 days posttrauma and declined to 84% by 3 years (p > 0.05 compared with general population). Minor trauma patients experienced a survival rate similar to the general population. Age and injury severity were the only independent predictors of long-term mortality given survival to discharge. Log-rank tests conditional on survival to each time point showed that mortality risk in patients with major injuries remained significantly higher than the general population for up to 6 months after injury. The survival rate of trauma patients with major injuries remains significantly lower than survival for minor trauma patients and the general population for several months postdischarge. 
Surveillance for early identification and treatment of complications may be needed for trauma patients with major injuries. Prognostic study, level III.
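The "survival conditional upon reaching each time point" analyses above can be illustrated with a bare-bones Kaplan-Meier estimator. This is a teaching sketch only: it assumes no tied event times and ignores left truncation, purely to show the conditional quantity S(t)/S(t0).

```python
import numpy as np

def km_curve(times, events):
    """Kaplan-Meier survival estimate (no tie handling).
    times: follow-up times; events: 1 = death observed, 0 = censored.
    Returns (event times, survival probabilities just after each)."""
    order = np.argsort(times)
    t, e = np.asarray(times)[order], np.asarray(events)[order]
    n_at_risk, surv = len(t), 1.0
    ts, s = [], []
    for ti, ei in zip(t, e):
        if ei:
            surv *= 1.0 - 1.0 / n_at_risk
            ts.append(ti)
            s.append(surv)
        n_at_risk -= 1
    return np.array(ts), np.array(s)

def surv_at(ts, s, t):
    """Evaluate the step function S(t)."""
    i = np.searchsorted(ts, t, side="right")
    return 1.0 if i == 0 else s[i - 1]

def conditional_surv(ts, s, t0, t):
    """P(T > t | T > t0) = S(t) / S(t0): survival conditional on being
    alive at t0, the quantity compared against the general population."""
    return surv_at(ts, s, t) / surv_at(ts, s, t0)
```

Conditioning on survival to t0 is what lets the comparison track whether the excess mortality of major-trauma patients persists after discharge rather than being driven by early deaths.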
Gonorazky, Sergio E
2008-01-01
The Administración Nacional de Medicamentos, Alimentos y Tecnología Médica de la República Argentina (ANMAT) requires that all new pharmacological research protocols carried out on human beings be previously evaluated and approved by an independent ethics committee. However, because the evaluation is lucrative and the selection of the independent ethics committee is carried out by the sponsors and/or researchers, the assumed autonomy of the committee can be reduced to a mere "service provider-customer" relationship. The Institutional Review Board of Mar del Plata's Community Hospital evaluated, between 2005 and 2006, thirty-three research protocols (with their corresponding information sheets for patients and informed consent forms) previously approved by a non-institutional independent ethics committee. The median number of objections made by the Institutional Review Board, which prompted the protocols to be modified in order to be approved, was three per protocol. In other words, the accreditation of an independent ethics committee requires a system that guarantees actual independence from the sponsors and/or researchers, as well as management control mechanisms that may lead to an eventual loss of accreditation. Several measures are proposed in order to correct the deficiencies of the present system.
On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis.
Li, Bing; Chun, Hyonho; Zhao, Hongyu
2014-09-01
We introduce a nonparametric method for estimating non-Gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use a one-dimensional kernel regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the Gaussian graphical model that replaces the precision matrix by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the Gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis.
A Description of Local Time Asymmetries in the Kronian Current Sheet
NASA Astrophysics Data System (ADS)
Nickerson, J. S.; Hansen, K. C.; Gombosi, T. I.
2012-12-01
Cassini observations imply that Saturn's magnetospheric current sheet is displaced northward above the rotational equator [C.S. Arridge et al., Warping of Saturn's magnetospheric and magnetotail current sheets, Journal of Geophysical Research, Vol. 113, August 2008]. Arridge et al. show that this hinging of the current sheet above the equator occurs over the noon, midnight, and dawn local time sectors. They present an azimuthally independent model to describe this paraboloid-like geometry. We have used our global MHD model, BATS-R-US/SWMF, to study Saturn's magnetospheric current sheet under various solar wind dynamic pressure and solar zenith angle conditions. We show that under reasonable conditions the current sheet does take on the basic shape of the Arridge model in the noon, midnight, and dawn sectors. However, the hinging distance parameter used in the Arridge model is not a constant and does in fact vary with Saturn local time. We recommend that the Arridge model should be adjusted to account for this azimuthal dependence. Arridge et al. do not discuss the shape of the current sheet in the dusk sector due to an absence of data but do presume that the current sheet will assume the same geometry in this region. On the contrary, our model shows that this is not the case. On the dusk side the current sheet hinges (aggressively) southward and cannot be accounted for by the Arridge model. We will present results from our simulations showing the deviation from axisymmetry and the general behavior of the current sheet under different conditions.
Massonnet, Catherine; Vile, Denis; Fabre, Juliette; Hannah, Matthew A.; Caldana, Camila; Lisec, Jan; Beemster, Gerrit T.S.; Meyer, Rhonda C.; Messerli, Gaëlle; Gronlund, Jesper T.; Perkovic, Josip; Wigmore, Emma; May, Sean; Bevan, Michael W.; Meyer, Christian; Rubio-Díaz, Silvia; Weigel, Detlef; Micol, José Luis; Buchanan-Wollaston, Vicky; Fiorani, Fabio; Walsh, Sean; Rinn, Bernd; Gruissem, Wilhelm; Hilson, Pierre; Hennig, Lars; Willmitzer, Lothar; Granier, Christine
2010-01-01
A major goal of the life sciences is to understand how molecular processes control phenotypes. Because understanding biological systems relies on the work of multiple laboratories, biologists implicitly assume that organisms with the same genotype will display similar phenotypes when grown in comparable conditions. We investigated to what extent this holds true for leaf growth variables and metabolite and transcriptome profiles of three Arabidopsis (Arabidopsis thaliana) genotypes grown in 10 laboratories using a standardized and detailed protocol. A core group of four laboratories generated similar leaf growth phenotypes, demonstrating that standardization is possible. But some laboratories presented significant differences in some leaf growth variables, sometimes changing the genotype ranking. Metabolite profiles derived from the same leaf displayed a strong genotype × environment (laboratory) component. Genotypes could be separated on the basis of their metabolic signature, but only when the analysis was limited to samples derived from one laboratory. Transcriptome data revealed considerable plant-to-plant variation, but the standardization ensured that interlaboratory variation was not considerably larger than intralaboratory variation. The different impacts of the standardization on phenotypes and molecular profiles could result from differences of temporal scale between processes involved at these organizational levels. Our findings underscore the challenge of describing, monitoring, and precisely controlling environmental conditions but also demonstrate that dedicated efforts can result in reproducible data across multiple laboratories. Finally, our comparative analysis revealed that small variations in growing conditions (light quality principally) and handling of plants can account for significant differences in phenotypes and molecular profiles obtained in independent laboratories. PMID:20200072
Settling behavior of unpurified Cryptosporidium oocysts in laboratory settling columns.
Young, Pamela L; Komisar, Simeon J
2005-04-15
The settling behavior of fresh and aged unpurified oocysts was examined in settling column suspensions with varied ionic strengths and concentrations of calcium and magnesium. Independent measurements of the size and density of unpurified oocysts were performed to determine a theoretical settling velocity for the test populations. Viability of the oocysts was assessed using a dye permeability assay. Latex microspheres were included to provide a standard by which to assess the settling conditions in the columns. Mean settling velocities for viable oocysts measured in this work were faster than predicted and faster than measured for purified oocysts in other work: 1.31 (+/-0.21) microm/s for viable oocysts from populations having a low percentage of viable oocysts and 1.05 (+/-0.20) microm/s for viable oocysts from populations with a high percentage of viable oocysts. Results were attributed to the higher than previously reported densities measured for oocysts in this study and the presence of fecal material, which allowed opportunity for particle agglomeration. Settling velocity of oocysts was significantly related to the viability of the population, particle concentration, ionic strength, and presence of calcium and magnesium in the suspending medium. Behavior of the latex microspheres was not entirely predictive of the behavior of the oocysts under the test conditions. Viable oocysts may have a greater probability of settling than previously assumed; however, nonviable, and especially nonintact, oocysts have the potential to be significantly transported in water. This work underscores the importance of assessing the viability of oocysts to predict their response to environmental and experimental conditions.
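The theoretical settling velocity referred to above is conventionally obtained from Stokes' law for a small sphere at low Reynolds number. The sketch below uses hypothetical oocyst-like parameters chosen for illustration, not the study's measured sizes and densities.

```python
def stokes_velocity(d, rho_p, rho_f, mu=1.0e-3, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere by Stokes' law,
    valid for Re << 1.
    d: particle diameter (m); rho_p, rho_f: particle and fluid densities
    (kg/m^3); mu: dynamic viscosity (Pa*s), default ~water at 20 C."""
    return g * d**2 * (rho_p - rho_f) / (18.0 * mu)

# Hypothetical oocyst-like parameters (not the paper's measured values):
# ~5 micron diameter, particle density 1095 kg/m^3 in fresh water
v = stokes_velocity(d=5.0e-6, rho_p=1095.0, rho_f=998.0)  # m/s
```

The quadratic dependence on diameter and linear dependence on density difference is why agglomeration with fecal material and the higher measured densities can push observed velocities above earlier predictions for purified oocysts.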
Modified Exponential Weighted Moving Average (EWMA) Control Chart on Autocorrelation Data
NASA Astrophysics Data System (ADS)
Herdiani, Erna Tri; Fandrilla, Geysa; Sunusi, Nurtiti
2018-03-01
In general, observations in statistical process control are assumed to be mutually independent. However, this assumption is often violated in practice. Consequently, statistical process controls were developed for autocorrelated processes, including the Shewhart, cumulative sum (CUSUM), and exponentially weighted moving average (EWMA) control charts. One researcher stated that these charts are not suitable if the same control limits are used as in the case of independent variables. For this reason, it is necessary to apply a time series model in building the control chart. A classical control chart for independent variables is usually applied to the residual process. This procedure is permitted provided that the residuals are independent. In 1978, a Shewhart modification for the autoregressive process was introduced by using the distance between the sample mean and the target value compared to the standard deviation of the autocorrelated process. In this paper we examine the mean of the EWMA for the autocorrelation process derived from Montgomery and Patel. Chart performance was investigated by examining the Average Run Length (ARL) based on the Markov chain method.
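The residual-based charting idea above can be sketched minimally: remove the autocorrelation with a time series model, then chart the residuals with an EWMA. This is an illustrative sketch, not the paper's modified chart; the AR(1) coefficient is assumed known, and λ and L are conventional textbook choices.

```python
import numpy as np

def ar1_residuals(x, phi):
    """Residuals e_t = x_t - phi * x_{t-1}: chart these instead of the
    raw series when observations are autocorrelated (AR(1) assumed)."""
    x = np.asarray(x, dtype=float)
    return x[1:] - phi * x[:-1]

def ewma_chart(x, lam=0.2, L=3.0):
    """EWMA statistic z_t = lam*x_t + (1-lam)*z_{t-1}, started at the
    sample mean, with asymptotic control limits
    mean +/- L*sigma*sqrt(lam/(2-lam)) for independent observations."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std(ddof=1)
    z = np.empty(len(x))
    prev = mu
    for i, xi in enumerate(x):
        prev = lam * xi + (1.0 - lam) * prev
        z[i] = prev
    half = L * sigma * np.sqrt(lam / (2.0 - lam))
    return z, mu - half, mu + half
```

Points of z crossing the limits signal a shift; the ARL analysis in the paper quantifies how quickly such crossings occur for in-control and shifted processes.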
Revisiting the bulge-halo conspiracy - II. Towards explaining its puzzling dependence on redshift
NASA Astrophysics Data System (ADS)
Shankar, Francesco; Sonnenfeld, Alessandro; Grylls, Philip; Zanisi, Lorenzo; Nipoti, Carlo; Chae, Kyu-Hyun; Bernardi, Mariangela; Petrillo, Carlo Enrico; Huertas-Company, Marc; Mamon, Gary A.; Buchan, Stewart
2018-04-01
We carry out a systematic investigation of the total mass density profile of massive (log Mstar/M⊙ ∼ 11.5) early-type galaxies and its dependence on redshift, specifically in the range 0 ≲ z ≲ 1. We start from a large sample of Sloan Digital Sky Survey early-type galaxies with stellar masses and effective radii measured assuming two different profiles, de Vaucouleurs and Sérsic. We assign dark matter haloes to galaxies via abundance matching relations with standard ΛCDM profiles and concentrations. We then compute the total, mass-weighted density slope at the effective radius γ′, and study its redshift dependence at fixed stellar mass. We find that a necessary condition to induce an increasingly flatter γ′ at higher redshifts, as suggested by current strong lensing data, is to allow the intrinsic stellar profile of massive galaxies to be Sérsic and the input Sérsic index n to vary with redshift as n(z) ∝ (1 + z)^δ, with δ ≲ -1. This conclusion holds irrespective of the input Mstar-Mhalo relation, the assumed stellar initial mass function (IMF), or even the chosen level of adiabatic contraction in the model. Secondary contributors to the observed redshift evolution of γ′ may come from an increased contribution at higher redshifts of adiabatic contraction and/or bottom-light stellar IMFs. The strong lensing selection effects we have simulated seem not to contribute to this effect. A steadily increasing Sérsic index with cosmic time is supported by independent observations, though it is not yet clear whether cosmological hierarchical models (e.g. mergers) are capable of reproducing such a fast and sharp evolution.
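The necessary condition the abstract identifies is a one-line power-law scaling of the Sérsic index with redshift. The normalization n0 below is a hypothetical local (z = 0) value chosen purely for illustration.

```python
def sersic_index(z, n0=6.0, delta=-1.0):
    """Toy redshift scaling n(z) = n0 * (1 + z)**delta.
    The abstract's necessary condition is delta <= -1, i.e. the intrinsic
    Sérsic index falls toward higher redshift (rises with cosmic time).
    n0 is a hypothetical z = 0 index, not a fitted value."""
    return n0 * (1.0 + z) ** delta
```

With delta = -1, the index halves between z = 0 and z = 1, which conveys the "fast and sharp evolution" the abstract questions hierarchical models' ability to reproduce.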
Bujkiewicz, Sylwia; Thompson, John R; Riley, Richard D; Abrams, Keith R
2016-03-30
A number of meta-analytical methods have been proposed that aim to evaluate surrogate endpoints. Bivariate meta-analytical methods can be used to predict the treatment effect for the final outcome from the treatment effect estimate measured on the surrogate endpoint while taking into account the uncertainty around the effect estimate for the surrogate endpoint. In this paper, extensions to multivariate models are developed aiming to include multiple surrogate endpoints with the potential benefit of reducing the uncertainty when making predictions. In this Bayesian multivariate meta-analytic framework, the between-study variability is modelled in a formulation of a product of normal univariate distributions. This formulation is particularly convenient for including multiple surrogate endpoints and flexible for modelling outcomes which can be surrogate endpoints to the final outcome and potentially to one another. Two models are proposed: first, using an unstructured between-study covariance matrix by assuming the treatment effects on all outcomes are correlated, and second, using a structured between-study covariance matrix by assuming treatment effects on some of the outcomes are conditionally independent. While the two models are developed for summary data at the study level, the individual-level association is taken into account by the use of Prentice's criteria (obtained from individual patient data) to inform the within-study correlations in the models. The modelling techniques are investigated using an example in relapsing remitting multiple sclerosis where disability worsening is the final outcome, while relapse rate and MRI lesions are potential surrogates for disability progression. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Modeling Thermal Contact Resistance
NASA Technical Reports Server (NTRS)
Kittel, Peter; Sperans, Joel (Technical Monitor)
1994-01-01
One difficulty in using cryocoolers is making good thermal contact between the cooler and the instrument being cooled. The connection is often made through a bolted joint. The temperature drop associated with this joint has been the subject of many experimental and theoretical studies. The low-temperature behavior of dry joints has shown some anomalous dependence on the surface condition of the mating parts. There are also some doubts about how well one can extrapolate from test samples to predict the performance of a real system. Both finite element and analytic models of a simple contact system have been developed. The model assumes (a) the contact is dry (contact is limited to a small portion of the total available area, and the spaces between the actual contact patches are perfect insulators), (b) contacts are clean (the conductivity of the actual contact is the same as the bulk), (c) temperature gradients are small (the bulk conductance may be assumed to be temperature independent), (d) the absolute temperature is low (thermal radiation effects are ignored), and (e) the dimensions of the nominal contact area are small compared to the thickness of the bulk material (the contact effects are localized near the contact). The models show that, in the limit of actual contact area much less than the nominal area (a much less than A), the excess temperature drop due to a single point of contact scales as a^(-1/2). This disturbance extends only a distance of approximately A^(1/2) into the bulk material. A group of identical contacts will result in an excess temperature drop that scales as n^(-1/2), where n is the number of contacts and n·a is constant. This implies that flat rough surfaces will have a lower excess temperature drop than flat polished surfaces.
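The scaling result quoted at the end can be expressed directly (relative scaling only; absolute drops depend on geometry and conductivity, and the function name is an illustrative assumption):

```python
def excess_drop(n, single_contact_drop):
    """Excess temperature drop of n identical contacts relative to one
    contact, holding the total actual contact area n*a fixed: ~ n**-0.5."""
    return single_contact_drop * n ** -0.5
```

Quadrupling the number of contact patches at fixed total contact area halves the excess temperature drop, which is why flat rough surfaces outperform flat polished ones in this model.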
NASA Astrophysics Data System (ADS)
Ibrahim, Adyda; Saaban, Azizan; Zaibidi, Nerda Zura
2017-11-01
This paper considers an n-firm oligopoly market where each firm produces a single homogeneous product under a constant unit cost. Nonlinearity is introduced into the model of this oligopoly market by assuming the market has an isoelastic demand function. Furthermore, instead of the usual assumption of perfectly rational firms, the firms are assumed to be boundedly rational in adjusting their outputs at each period. The equilibrium of this n-dimensional discrete system is obtained and its local stability is analyzed.
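A toy simulation of such adjustment dynamics can be sketched with gradient-style output updating under isoelastic demand p = 1/Q and unit cost c (the specific adjustment rule and all parameter values below are illustrative assumptions, not taken from the paper):

```python
def step(q, c, alpha):
    """One boundedly rational update: each firm moves its output in the
    direction of its marginal profit, scaled by its current output."""
    Q = sum(q)
    # marginal profit of firm i under p = 1/Q: (Q - q_i)/Q**2 - c
    return [qi + alpha * qi * ((Q - qi) / (Q * Q) - c) for qi in q]

def simulate(q0, c=0.5, alpha=0.3, steps=500):
    q = list(q0)
    for _ in range(steps):
        q = step(q, c, alpha)
    return q
```

For a symmetric n-firm equilibrium the marginal profit vanishes at q* = (n-1)/(n^2 c); with n = 2 and c = 0.5 the simulated outputs approach 0.5 when the adjustment speed alpha is small enough for local stability.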
Teacher Working Conditions in Charter Schools and Traditional Public Schools: A Comparative Study
ERIC Educational Resources Information Center
Ni, Yongmei
2012-01-01
Background/Context: Teachers affect student performance through their interaction with students in the context of the classrooms and schools where teaching and learning take place. Although it is widely assumed that supportive working conditions improve the quality of instruction and teachers' willingness to remain in a school, little is known…
ERIC Educational Resources Information Center
Zarate, Maria Estela
2007-01-01
The Latino community has been characterized by low high school graduation rates, low college completion rates and substandard schooling conditions. As schools and policymakers seek to improve the educational conditions of Latinos, parental influence in the form of school involvement is assumed to play some role in shaping students' educational…
On Complicated Expansions of Solutions to ODES
NASA Astrophysics Data System (ADS)
Bruno, A. D.
2018-03-01
Polynomial ordinary differential equations are studied by asymptotic methods. The truncated equation associated with a vertex or a nonhorizontal edge of the polygon of the initial equation is assumed to have a solution containing the logarithm of the independent variable. It is shown that, under very weak constraints, this nonpower asymptotic form of solutions to the original equation can be extended to an asymptotic expansion of these solutions. This is an expansion in powers of the independent variable with coefficients being Laurent series in decreasing powers of the logarithm. Such expansions are sometimes called psi-series. Algorithms for such computations are described. Six examples are given; four of them concern Painlevé equations. An unexpected property of these expansions is revealed.
Estimated Performance of Radial-Flow Exit Nozzles for Air in Chemical Equilibrium
NASA Technical Reports Server (NTRS)
Englert, Gerald W.; Kochendorfer, Fred D.
1959-01-01
The thrust, boundary-layer, and heat-transfer characteristics were computed for nozzles having radial flow in the divergent part. The working medium was air in chemical equilibrium, and the boundary layer was assumed to be fully turbulent. Stagnation pressure was varied from 1 to 32 atmospheres, stagnation temperature from 1000 to 6000 R, and wall temperature from 1000 to 3000 R. Design pressure ratio was varied from 5 to 320, and operating pressure ratio was varied from 0.25 to 8 times the design pressure ratio. Results were generalized to be independent of divergence angle and, in the temperature range of 1000 to 3000 R, also independent of stagnation pressure. A means of determining the aerodynamically optimum wall angle is provided.
Computer simulation of random variables and vectors with arbitrary probability distribution laws
NASA Technical Reports Server (NTRS)
Bogdan, V. M.
1981-01-01
Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n random variables if their joint probability distribution is known.
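In one dimension the construction reduces to ordinary inverse-transform sampling: x = F^(-1)(U). A sketch under that simplification (the exponential example and the function names are illustrative assumptions):

```python
import math
import random

def exponential_inverse_cdf(u, rate):
    """Inverse CDF of Exp(rate), whose CDF is F(x) = 1 - exp(-rate*x)."""
    return -math.log(1.0 - u) / rate

def simulate_exponential(n, rate, seed=0):
    """Map independent Uniform(0,1) draws through the inverse CDF."""
    rng = random.Random(seed)
    return [exponential_inverse_cdf(rng.random(), rate) for _ in range(n)]
```

In the n-dimensional case, f_1 inverts the CDF of the first marginal and each subsequent f_k inverts the conditional CDF of x_k given the earlier coordinates, which is the recursive construction the abstract describes.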
Modeling haplotype block variation using Markov chains.
Greenspan, G; Geiger, D
2006-04-01
Models of background variation in genomic regions form the basis of linkage disequilibrium mapping methods. In this work we analyze a background model that groups SNPs into haplotype blocks and represents the dependencies between blocks by a Markov chain. We develop an error measure to compare the performance of this model against the common model that assumes that blocks are independent. By examining data from the International Haplotype Mapping project, we show how the Markov model over haplotype blocks is most accurate when representing blocks in strong linkage disequilibrium. This contrasts with the independent model, which is rendered less accurate by linkage disequilibrium. We provide a theoretical explanation for this surprising property of the Markov model and relate its behavior to allele diversity.
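The contrast between the two background models can be sketched with toy log-likelihoods (the block haplotypes, marginal probabilities, and transition probabilities below are invented for illustration, not drawn from the HapMap data):

```python
import math

def independent_loglik(blocks, marginals):
    """Independent model: product of marginal block-haplotype probabilities."""
    return sum(math.log(marginals[i][h]) for i, h in enumerate(blocks))

def markov_loglik(blocks, marginals, transitions):
    """Markov model: condition each block's haplotype on the previous block."""
    ll = math.log(marginals[0][blocks[0]])
    for i in range(1, len(blocks)):
        ll += math.log(transitions[i - 1][blocks[i - 1]][blocks[i]])
    return ll
```

Under strong linkage disequilibrium the transition rows concentrate probability on the in-phase haplotype, so a consistent haplotype chain scores higher under the Markov model than under the independent model, in line with the abstract's finding.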
Experimental Measurement-Device-Independent Quantum Key Distribution
NASA Astrophysics Data System (ADS)
Liu, Yang; Chen, Teng-Yun; Wang, Liu-Jun; Liang, Hao; Shentu, Guo-Liang; Wang, Jian; Cui, Ke; Yin, Hua-Lei; Liu, Nai-Le; Li, Li; Ma, Xiongfeng; Pelc, Jason S.; Fejer, M. M.; Peng, Cheng-Zhi; Zhang, Qiang; Pan, Jian-Wei
2013-09-01
Quantum key distribution is proven to offer unconditional security in communication between two remote users with ideal source and detection. Unfortunately, ideal devices never exist in practice and device imperfections have become the targets of various attacks. By developing up-conversion single-photon detectors with high efficiency and low noise, we faithfully demonstrate the measurement-device-independent quantum-key-distribution protocol, which is immune to all hacking strategies on detection. Meanwhile, we employ the decoy-state method to defend attacks on a nonideal source. By assuming a trusted source scenario, our practical system, which generates more than a 25 kbit secure key over a 50 km fiber link, serves as a stepping stone in the quest for unconditionally secure communications with realistic devices.
NASA Technical Reports Server (NTRS)
Nelson, D. P.
1981-01-01
Tabulated data from wind tunnel tests conducted to evaluate the aerodynamic performance of an advanced coannular exhaust nozzle for a future supersonic propulsion system are presented. Tests were conducted with two test configurations: (1) a short flap mechanism for fan stream control with an isentropic contoured flow splitter, and (2) an iris fan nozzle with a conical flow splitter. Both designs feature a translating primary plug and an auxiliary inlet ejector. Tests were conducted at takeoff and simulated cruise conditions. Data were acquired at Mach numbers of 0, 0.36, 0.9, and 2.0 for a wide range of nozzle operating conditions. At simulated supersonic cruise, both configurations demonstrated good performance, comparable to levels assumed in earlier advanced supersonic propulsion studies. However, at subsonic cruise, both configurations exhibited performance that was 6 to 7.5 percent less than the study assumptions. At takeoff conditions, the iris configuration performance approached the assumed levels, while the short flap design was 4 to 6 percent less. Data are provided through test run 25.
NASA Technical Reports Server (NTRS)
Nelson, D. P.
1980-01-01
Wind tunnel tests were conducted to evaluate the aerodynamic performance of a coannular exhaust nozzle for a proposed variable stream control supersonic propulsion system. Tests were conducted with two simulated configurations differing primarily in the fan duct flowpaths: a short flap mechanism for fan stream control with an isentropic contoured flow splitter, and an iris fan nozzle with a conical flow splitter. Both designs feature a translating primary plug and an auxiliary inlet ejector. Tests were conducted at takeoff and simulated cruise conditions. Data were acquired at Mach numbers of 0, 0.36, 0.9, and 2.0 for a wide range of nozzle operating conditions. At simulated supersonic cruise, both configurations demonstrated good performance, comparable to levels assumed in earlier advanced supersonic propulsion studies. However, at subsonic cruise, both configurations exhibited performance that was 6 to 7.5 percent less than the study assumptions. At takeoff conditions, the iris configuration performance approached the assumed levels, while the short flap design was 4 to 6 percent less.