Basic research on design analysis methods for rotorcraft vibrations
NASA Technical Reports Server (NTRS)
Hanagud, S.
1991-01-01
The objective of the present work was to develop a method for identifying physically plausible finite element system models of airframe structures from test data. The assumed models were based on linear elastic behavior with general (nonproportional) damping. Physical plausibility of the identified system matrices was insured by restricting the identification process to designated physical parameters only and not simply to the elements of the system matrices themselves. For example, in a large finite element model the identified parameters might be restricted to the moduli for each of the different materials used in the structure. In the case of damping, a restricted set of damping values might be assigned to finite elements based on the material type and on the fabrication processes used. In this case, different damping values might be associated with riveted, bolted and bonded elements. The method itself is developed first, and several approaches are outlined for computing the identified parameter values. The method is applied first to a simple structure for which the 'measured' response is actually synthesized from an assumed model. Both stiffness and damping parameter values are accurately identified. The true test, however, is the application to a full-scale airframe structure. In this case, a NASTRAN model and actual measured modal parameters formed the basis for the identification of a restricted set of physically plausible stiffness and damping parameters.
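As a rough illustration of restricting identification to designated physical parameters (not the paper's NASTRAN workflow), the sketch below fits a single modulus-like scale factor of a toy two-degree-of-freedom model so that its natural frequencies match synthesized "measured" ones; the matrices, names, and values are assumptions made for the example.

```python
# Toy sketch: identify one modulus-like scale factor E (a designated physical
# parameter) so that the model's natural frequencies match "measured" ones,
# instead of adjusting raw stiffness-matrix entries. Matrices and values are
# illustrative, not the airframe NASTRAN model.
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import least_squares

M = np.diag([1.0, 1.5])                       # mass matrix (assumed known)
K0 = np.array([[2.0, -1.0],
               [-1.0, 1.0]])                  # unit-modulus stiffness pattern

def natural_freqs(E):
    w2, _ = eigh(E * K0, M)                   # generalized eigenproblem K x = w^2 M x
    return np.sqrt(w2)

f_measured = natural_freqs(3.2)               # synthesized "test data" (true E = 3.2)
fit = least_squares(lambda p: natural_freqs(p[0]) - f_measured, x0=[1.0])
print("identified modulus scale:", fit.x[0])  # recovers ~3.2
```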
Further Studies into Synthetic Image Generation using CameoSim
2011-08-01
In preparation of the validation effort, a study of BRDF models has been completed, which includes the physical plausibility of models, how measured data ... the visible to shortwave infrared.
Real-time physics-based 3D biped character animation using an inverted pendulum model.
Tsai, Yao-Yang; Lin, Wen-Chieh; Cheng, Kuangyou B; Lee, Jehee; Lee, Tong-Yee
2010-01-01
We present a physics-based approach to generate 3D biped character animation that can react to dynamical environments in real time. Our approach utilizes an inverted pendulum model to online adjust the desired motion trajectory from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method which computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.
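A minimal sketch of the velocity-driven tracking idea described above: joint torques are computed from the mismatch between desired and current joint angular velocities rather than from joint-angle errors. The gain values and joint count are illustrative assumptions, not the controller settings used in the paper.

```python
# Minimal sketch of velocity-driven tracking: torque per joint proportional to
# the error between desired and current angular velocity. Gains are illustrative.
import numpy as np

def velocity_driven_torques(omega_desired, omega_current, k_v):
    return k_v * (np.asarray(omega_desired) - np.asarray(omega_current))

# toy usage: three joints, desired velocities from the adjusted trajectory
tau = velocity_driven_torques([1.2, -0.4, 0.0], [0.9, -0.1, 0.05],
                              k_v=np.array([80.0, 60.0, 40.0]))
print(tau)   # joint torques handed to the dynamics simulation this step
```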
Dynamical simulation priors for human motion tracking.
Vondrak, Marek; Sigal, Leonid; Jenkins, Odest Chadwicke
2013-01-01
We propose a simulation-based dynamical motion prior for tracking human motion from video in presence of physical ground-person interactions. Most tracking approaches to date have focused on efficient inference algorithms and/or learning of prior kinematic motion models; however, few can explicitly account for the physical plausibility of recovered motion. Here, we aim to recover physically plausible motion of a single articulated human subject. Toward this end, we propose a full-body 3D physical simulation-based prior that explicitly incorporates a model of human dynamics into the Bayesian filtering framework. We consider the motion of the subject to be generated by a feedback “control loop” in which Newtonian physics approximates the rigid-body motion dynamics of the human and the environment through the application and integration of interaction forces, motor forces, and gravity. Interaction forces prevent physically impossible hypotheses, enable more appropriate reactions to the environment (e.g., ground contacts), and are produced from detected human-environment collisions. Motor forces actuate the body, ensure that proposed pose transitions are physically feasible, and are generated using a motion controller. For efficient inference in the resulting high-dimensional state space, we utilize an exemplar-based control strategy that reduces the effective search space of motor forces. As a result, we are able to recover physically plausible motion of human subjects from monocular and multiview video. We show, both quantitatively and qualitatively, that our approach performs favorably with respect to Bayesian filtering methods with standard motion priors.
Delta: a new web-based 3D genome visualization and analysis platform.
Tang, Bixia; Li, Feifei; Li, Jing; Zhao, Wenming; Zhang, Zhihua
2018-04-15
Delta is an integrative visualization and analysis platform to facilitate visually annotating and exploring the 3D physical architecture of genomes. Delta takes a Hi-C or ChIA-PET contact matrix as input and predicts the topologically associating domains and chromatin loops in the genome. It then generates a physical 3D model which represents the plausible consensus 3D structure of the genome. Delta features a highly interactive visualization tool which enhances the integration of genome topology/physical structure with extensive genome annotation by juxtaposing the 3D model with diverse genomic assay outputs. Finally, by visually comparing the 3D model of the β-globin gene locus and its annotation, we hypothesized a plausible transitory interaction pattern in the locus. Experimental evidence supporting this hypothesis was found through a literature survey. This served as an example of intuitive hypothesis testing with the help of Delta. Delta is freely accessible from http://delta.big.ac.cn, and the source code is available at https://github.com/zhangzhwlab/delta. Contact: zhangzhihua@big.ac.cn. Supplementary data are available at Bioinformatics online.
Analytic expressions for the black-sky and white-sky albedos of the cosine lobe model.
Goodin, Christopher
2013-05-01
The cosine lobe model is a bidirectional reflectance distribution function (BRDF) that is commonly used in computer graphics to model specular reflections. The model is both simple and physically plausible, but physical quantities such as albedo have not been related to the parameterization of the model. In this paper, analytic expressions for calculating the black-sky and white-sky albedos from the cosine lobe BRDF model with integer exponents will be derived, to the author's knowledge for the first time. These expressions for albedo can be used to place constraints on physics-based simulations of radiative transfer such as high-fidelity ray-tracing simulations.
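The paper derives these albedos analytically; as a hedged numerical cross-check, one can integrate a cosine-lobe BRDF of the assumed form f_r = k_s·max(0, cos α)^n over the outgoing hemisphere for the black-sky albedo and cosine-average over incidence for the white-sky albedo. The function names, k_s, and the lobe exponent below are illustrative assumptions.

```python
# Numerical cross-check sketch (not the paper's analytic derivation), assuming
# a cosine-lobe BRDF f_r = k_s * max(0, cos(alpha))^n, with alpha the angle
# between the outgoing direction and the mirror (specular) direction.
# Black-sky albedo: hemispherical integral of f_r * cos(theta_o) for one
# incidence angle; white-sky albedo: cosine-weighted average over incidence.
import numpy as np

def black_sky_albedo(theta_i, n, k_s=1.0, m=256):
    theta_o, phi_o = np.meshgrid(np.linspace(0.0, np.pi / 2, m),
                                 np.linspace(0.0, 2 * np.pi, m))
    wo = np.stack([np.sin(theta_o) * np.cos(phi_o),
                   np.sin(theta_o) * np.sin(phi_o),
                   np.cos(theta_o)], axis=-1)
    mirror = np.array([-np.sin(theta_i), 0.0, np.cos(theta_i)])  # specular direction
    cos_alpha = np.clip(wo @ mirror, 0.0, None)
    integrand = k_s * cos_alpha ** n * np.cos(theta_o) * np.sin(theta_o)
    return integrand.sum() * (np.pi / 2 / (m - 1)) * (2 * np.pi / (m - 1))

def white_sky_albedo(n, k_s=1.0, m=64):
    thetas = np.linspace(0.0, np.pi / 2, m)
    bsa = np.array([black_sky_albedo(t, n, k_s) for t in thetas])
    return 2.0 * np.sum(bsa * np.cos(thetas) * np.sin(thetas)) * (np.pi / 2 / (m - 1))

print(black_sky_albedo(np.radians(30.0), n=4))   # directional-hemispherical albedo
print(white_sky_albedo(n=4))                     # bi-hemispherical albedo
```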
Physical plausibility of cold star models satisfying Karmarkar conditions
NASA Astrophysics Data System (ADS)
Fuloria, Pratibha; Pant, Neeraj
2017-11-01
In the present article, we have obtained a new well-behaved solution to Einstein's field equations in the background of Karmarkar spacetime. The solution has been used for stellar modelling consistent with current observational evidence. All the physical parameters are well behaved inside the stellar interior and our model satisfies all the required conditions to be physically realizable. The obtained compactness parameter is within the Buchdahl limit, i.e. 2M/R ≤ 8/9. The TOV equation is satisfied throughout the fluid spheres. The stability of the models has been further confirmed by using Herrera's cracking method. The models proposed in the present work are compatible with observational data of the compact objects 4U 1608-52 and PSR J1903+327. The necessary graphs have been shown to authenticate the physical viability of our models.
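For reference, physical-plausibility checks of the kind cited in this abstract are commonly written as the Buchdahl compactness bound together with causality and positivity conditions (a generic statement, not necessarily the paper's exact list):

```latex
\frac{2M}{R} \le \frac{8}{9}, \qquad 0 \le v_s^2 = \frac{dp}{d\rho} \le 1, \qquad \rho \ge 0, \quad p \ge 0 \quad \text{for } 0 \le r \le R .
```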
A Biomass-based Model to Estimate the Plausibility of Exoplanet Biosignature Gases
NASA Astrophysics Data System (ADS)
Seager, S.; Bains, W.; Hu, R.
2013-10-01
Biosignature gas detection is one of the ultimate future goals for exoplanet atmosphere studies. We have created a framework for linking biosignature gas detectability to biomass estimates, including atmospheric photochemistry and biological thermodynamics. The new framework is intended to liberate predictive atmosphere models from requiring fixed, Earth-like biosignature gas source fluxes. New biosignature gases can be considered with a check that the biomass estimate is physically plausible. We have validated the models on terrestrial production of NO, H2S, CH4, CH3Cl, and DMS. We have applied the models to propose NH3 as a biosignature gas on a "cold Haber World," a planet with a N2-H2 atmosphere, and to demonstrate why gases such as CH3Cl must have too large of a biomass to be a plausible biosignature gas on planets with Earth or early-Earth-like atmospheres orbiting a Sun-like star. To construct the biomass models, we developed a functional classification of biosignature gases, and found that gases (such as CH4, H2S, and N2O) produced from life that extracts energy from chemical potential energy gradients will always have false positives because geochemistry has the same gases to work with as life does, and gases (such as DMS and CH3Cl) produced for secondary metabolic reasons are far less likely to have false positives but because of their highly specialized origin are more likely to be produced in small quantities. The biomass model estimates are valid to one or two orders of magnitude; the goal is an independent approach to testing whether a biosignature gas is plausible rather than a precise quantification of atmospheric biosignature gases and their corresponding biomasses.
A Physics-Based Vibrotactile Feedback Library for Collision Events.
Park, Gunhyuk; Choi, Seungmoon
2017-01-01
We present PhysVib: a software solution on the mobile platform extending an open-source physics engine in a multi-rate rendering architecture for automatic vibrotactile feedback upon collision events. PhysVib runs concurrently with a physics engine at a low update rate and generates vibrotactile feedback commands at a high update rate based on the simulation results of the physics engine using an exponentially-decaying sinusoidal model. We demonstrate through a user study that this vibration model is more appropriate to our purpose in terms of perceptual quality than more complex models based on sound synthesis. We also evaluated the perceptual performance of PhysVib by comparing eight vibrotactile rendering methods. Experimental results suggested that PhysVib enables more realistic vibrotactile feedback than the other methods as to perceived similarity to the visual events. PhysVib is an effective solution for providing physically plausible vibrotactile responses while reducing application development time to a great extent.
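A minimal sketch of the exponentially-decaying sinusoidal vibration model named above, assuming illustrative frequency, decay, and amplitude-scaling values rather than PhysVib's actual parameters:

```python
# Sketch of the decaying-sinusoid vibrotactile command for one collision event:
# a(t) = A * exp(-decay * t) * sin(2*pi*f*t), with A scaled by impact speed.
# Frequency, decay rate, and the amplitude scaling are illustrative choices.
import numpy as np

def collision_vibration(impact_speed, freq_hz=150.0, decay=30.0,
                        sample_rate=2000, duration=0.2):
    t = np.arange(0.0, duration, 1.0 / sample_rate)
    amplitude = min(1.0, 0.3 * impact_speed)   # clamp the actuator command
    return amplitude * np.exp(-decay * t) * np.sin(2 * np.pi * freq_hz * t)

samples = collision_vibration(impact_speed=2.5)   # impact speed from the physics engine
print(samples[:5])
```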
Phillips, Lawrence; Pearl, Lisa
2015-11-01
The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility. We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition model can aim to be cognitively plausible in multiple ways. We discuss these cognitive plausibility checkpoints generally and then apply them to a case study in word segmentation, investigating a promising Bayesian segmentation strategy. We incorporate cognitive plausibility by using an age-appropriate unit of perceptual representation, evaluating the model output in terms of its utility, and incorporating cognitive constraints into the inference process. Our more cognitively plausible model shows a beneficial effect of cognitive constraints on segmentation performance. One interpretation of this effect is as a synergy between the naive theories of language structure that infants may have and the cognitive constraints that limit the fidelity of their inference processes, where less accurate inference approximations are better when the underlying assumptions about how words are generated are less accurate. More generally, these results highlight the utility of incorporating cognitive plausibility more fully into computational models of language acquisition. Copyright © 2015 Cognitive Science Society, Inc.
What Does Quantum Physics Have to Do with Behavior Disorders?
ERIC Educational Resources Information Center
Center, David B.
This paper argues that human agency as a causal factor in behavior must be considered in any model of behavior and behavior disorders. Since human agency is historically tied to the issue of consciousness, to argue that consciousness plays a causal role in behavior requires a plausible explanation of consciousness. This paper proposes that…
Scenario planning: a tool for academic health sciences libraries.
Ludwig, Logan; Giesecke, Joan; Walton, Linda
2010-03-01
Review the International Campaign to Revitalise Academic Medicine (ICRAM) Future Scenarios as a potential starting point for developing scenarios to envisage plausible futures for health sciences libraries. At an educational workshop, 15 groups, each composed of four to seven Association of Academic Health Sciences Libraries (AAHSL) directors and AAHSL/NLM Fellows, created plausible stories using the five ICRAM scenarios. Participants created 15 plausible stories regarding roles played by health sciences librarians, how libraries are used and their physical properties in response to technology, scholarly communication, learning environments and health care economic changes. Libraries are affected by many forces, including economic pressures, curriculum and changes in technology, health care delivery and scholarly communications business models. The future is likely to contain ICRAM scenario elements, although not all, and each, if they come to pass, will impact health sciences libraries. The AAHSL groups identified common features in their scenarios to learn lessons for now. The hope is that other groups find the scenarios useful in thinking about academic health science library futures.
NASA Astrophysics Data System (ADS)
Mori, Kaya; Chonko, James C.; Hailey, Charles J.
2005-10-01
We have reanalyzed the 260 ks XMM-Newton observation of 1E 1207.4-5209. There are several significant improvements over previous work. First, a much broader range of physically plausible spectral models was used. Second, we have used a more rigorous statistical analysis. The standard F-distribution was not employed, but rather the exact finite statistics F-distribution was determined by Monte Carlo simulations. This approach was motivated by the recent work of Protassov and coworkers and Freeman and coworkers. They demonstrated that the standard F-distribution is not even asymptotically correct when applied to assess the significance of additional absorption features in a spectrum. With our improved analysis we do not find a third and fourth spectral feature in 1E 1207.4-5209 but only the two broad absorption features previously reported. Two additional statistical tests, one line model dependent and the other line model independent, confirmed our modified F-test analysis. For all physically plausible continuum models in which the weak residuals are strong enough to fit, the residuals occur at the instrument Au M edge. As a sanity check we confirmed that the residuals are consistent in strength and position with the instrument Au M residuals observed in 3C 273.
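A simplified sketch of the Monte Carlo calibration idea (in the spirit of Protassov et al., not the XMM-Newton analysis itself): simulate data under the null continuum-only model, fit both models to each fake data set, and compare the observed F-like statistic to the empirical null distribution. The toy power-law continuum, fixed line profile, noise level, and "observed" value are assumptions for illustration.

```python
# Simplified sketch (Gaussian statistics, toy spectrum) of calibrating the
# F-like statistic by Monte Carlo under the null continuum-only model, instead
# of using the analytic F-distribution.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
energy = np.linspace(0.5, 3.0, 120)            # keV, toy grid
sigma = 0.05                                   # per-bin Gaussian noise

def continuum(e, norm, slope):
    return norm * e ** (-slope)

def continuum_plus_line(e, norm, slope, depth):
    profile = np.exp(-0.5 * ((e - 1.5) / 0.1) ** 2)   # fixed candidate feature
    return continuum(e, norm, slope) * (1.0 - depth * profile)

def chi2(model, data, params):
    return np.sum(((data - model(energy, *params)) / sigma) ** 2)

def f_statistic(data):
    p_null, _ = curve_fit(continuum, energy, data, p0=[1.0, 1.0])
    p_alt, _ = curve_fit(continuum_plus_line, energy, data, p0=[*p_null, 0.0])
    c_null, c_alt = chi2(continuum, data, p_null), chi2(continuum_plus_line, data, p_alt)
    dof_alt = energy.size - len(p_alt)
    return (c_null - c_alt) / (c_alt / dof_alt)    # one extra parameter

# Empirical null distribution of F from spectra simulated WITHOUT a line.
clean = continuum(energy, 1.0, 1.0)
f_null = np.array([f_statistic(clean + rng.normal(0.0, sigma, energy.size))
                   for _ in range(500)])

f_observed = 4.2                                # illustrative observed value
print("Monte Carlo p-value:", np.mean(f_null >= f_observed))
```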
Diagnosis by integrating model-based reasoning with knowledge-based reasoning
NASA Technical Reports Server (NTRS)
Bylander, Tom
1988-01-01
Our research investigates how observations can be categorized by integrating a qualitative physical model with experiential knowledge. Our domain is diagnosis of pathologic gait in humans, in which the observations are the gait motions, muscle activity during gait, and physical exam data, and the diagnostic hypotheses are the potential muscle weaknesses, muscle mistimings, and joint restrictions. Patients with underlying neurological disorders typically have several malfunctions. Among the problems that need to be faced are: the ambiguity of the observations, the ambiguity of the qualitative physical model, correspondence of the observations and hypotheses to the qualitative physical model, the inherent uncertainty of experiential knowledge, and the combinatorics involved in forming composite hypotheses. Our system divides the work so that the knowledge-based reasoning suggests which hypotheses appear more likely than others, the qualitative physical model is used to determine which hypotheses explain which observations, and another process combines these functionalities to construct a composite hypothesis based on explanatory power and plausibility. We speculate that the reasoning architecture of our system is generally applicable to complex domains in which a less-than-perfect physical model and less-than-perfect experiential knowledge need to be combined to perform diagnosis.
NASA Astrophysics Data System (ADS)
Milner, K. R.; Shaw, B. E.; Gilchrist, J. J.; Jordan, T. H.
2017-12-01
Probabilistic seismic hazard analysis (PSHA) is typically performed by combining an earthquake rupture forecast (ERF) with a set of empirical ground motion prediction equations (GMPEs). ERFs have typically relied on observed fault slip rates and scaling relationships to estimate the rate of large earthquakes on pre-defined fault segments, either ignoring or relying on expert opinion to set the rates of multi-fault or multi-segment ruptures. Version 3 of the Uniform California Earthquake Rupture Forecast (UCERF3) is a significant step forward, replacing expert opinion and fault segmentation with an inversion approach that matches observations better than prior models while incorporating multi-fault ruptures. UCERF3 is a statistical model, however, and doesn't incorporate the physics of earthquake nucleation, rupture propagation, and stress transfer. We examine the feasibility of replacing UCERF3, or components therein, with physics-based rupture simulators such as the Rate-State Earthquake Simulator (RSQSim), developed by Dieterich & Richards-Dinger (2010). RSQSim simulations on the UCERF3 fault system produce catalogs of seismicity that match long term rates on major faults, and produce remarkable agreement with UCERF3 when carried through to PSHA calculations. Averaged over a representative set of sites, the RSQSim-UCERF3 hazard-curve differences are comparable to the small differences between UCERF3 and its predecessor, UCERF2. The hazard-curve agreement between the empirical and physics-based models provides substantial support for the PSHA methodology. RSQSim catalogs include many complex multi-fault ruptures, which we compare with the UCERF3 rupture-plausibility metrics as well as recent observations. Complications in generating physically plausible kinematic descriptions of multi-fault ruptures have thus far prevented us from using UCERF3 in the CyberShake physics-based PSHA platform, which replaces GMPEs with deterministic ground motion simulations. RSQSim produces full slip/time histories that can be directly implemented as sources in CyberShake, without relying on the conditional hypocenter and slip distributions needed for the UCERF models. We also compare RSQSim with time-dependent PSHA calculations based on multi-fault renewal models.
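As a toy illustration of the hazard-curve step mentioned above (not UCERF3, RSQSim, or CyberShake themselves), one can combine per-rupture annual rates with a lognormal GMPE-like ground-motion model and sum exceedance contributions; all rates, magnitudes, and coefficients below are made-up values.

```python
# Toy sketch of a hazard curve: combine per-rupture annual rates with a
# lognormal GMPE-like ground-motion model and sum exceedance contributions.
# The rates, magnitudes, GMPE coefficients, and sigma are made-up values.
import numpy as np
from scipy.stats import norm

events = [(0.02, 6.5), (0.005, 7.0), (0.001, 7.8)]   # (annual rate, magnitude)

def ln_median_pga(magnitude):
    return -5.0 + 0.6 * magnitude              # ln(PGA in g); toy "GMPE"

pga_levels = np.logspace(-2, 0, 50)            # 0.01 g to 1 g
annual_rate = np.zeros_like(pga_levels)
for rate, mag in events:
    p_exceed = norm.sf(np.log(pga_levels), loc=ln_median_pga(mag), scale=0.6)
    annual_rate += rate * p_exceed             # hazard curve: rate of exceedance

for x, h in zip(pga_levels[::10], annual_rate[::10]):
    print(f"PGA > {x:5.3f} g : {h:.2e} per year")
```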
NASA Astrophysics Data System (ADS)
Maldonado, Solvey; Findeisen, Rolf
2010-06-01
The modeling, analysis, and design of treatment therapies for bone disorders based on the paradigm of force-induced bone growth and adaptation is a challenging task. Mathematical models provide, in comparison to clinical, medical and biological approaches, a structured alternative framework to understand the concurrent effects of the multiple factors involved in bone remodeling. To date, there are few mathematical models describing these complex interactions. However, the resulting models are complex and difficult to analyze, due to the strong nonlinearities appearing in the equations, the wide range of variability of the states, and the uncertainties in parameters. In this work, we focus on analyzing the effects of changes in model structure and of parameter/input variations on the overall steady-state behavior using systems-theoretical methods. Based on a briefly reviewed existing model that describes force-induced bone adaptation, the main objective of this work is to analyze the stationary behavior and to identify plausible treatment targets for remodeling-related bone disorders. Identifying plausible targets can help in the development of optimal treatments combining both physical activity and drug medication. Such treatments help to improve, maintain, or restore bone strength, which deteriorates under bone disorder conditions such as estrogen deficiency.
Jouffre, Stéphane
2015-01-01
Individuals attempting to label their emotions look for a plausible source of their physiological arousal. Co-occurrence of plausible sources can lead to the misattribution of real (or bogus) physiological arousal, resulting in physically attractive individuals being perceived as more attractive than they actually are. In two experiments, female participants heard bogus heart rate feedback while viewing photos of attractive male models. Compared with low-power and control participants, high-power participants rated reinforced photos (increased heart rate) more attractive than non-reinforced photos (stable heart rate) to a greater extent when they heard their own bogus heart rate feedback (Experiments 1 and 2) and to a lesser extent when they heard a recording of another participant's heart rate (Experiment 2). These findings, which suggest that power increases the tendency to misattribute one's physiological arousal to physically attractive individuals, are discussed with reference to theories linking power and social perception. © 2014 by the Society for Personality and Social Psychology, Inc.
You, Sukkyung; Shin, Kyulee
2017-12-01
Physically active leisure plays a key role in successful aging. Exercise beliefs are one of the key predictors of exercise behavior. We used structural equation modeling to assess the plausibility of a conceptual model specifying hypothesized linkages among middle-aged adults' perceptions of (a) exercise beliefs, (b) physical exercise behavior, and (c) subjective well-being. Four hundred two adults in South Korea responded to survey questions designed to capture the above constructs. We found that physically active leisure participation leads to subjective well-being for both middle-aged men and women. However, men and women exercised for different reasons. Women exercised for the sake of their physical appearance and mental and emotional functioning, whereas men exercised for the sake of their social desirability and vulnerability to disease and aging. Based on our results, we suggest that men tend to show higher social face sensitivity, while women show more appearance management behavior. Based on these findings, we discussed the implications and future research directions.
Bayesian analysis of caustic-crossing microlensing events
NASA Astrophysics Data System (ADS)
Cassan, A.; Horne, K.; Kains, N.; Tsapras, Y.; Browne, P.
2010-06-01
Aims: Caustic-crossing binary-lens microlensing events are important anomalous events because they are capable of detecting an extrasolar planet companion orbiting the lens star. Fast and robust modelling methods are thus of prime interest in helping to decide whether a planet is detected by an event. Cassan introduced a new set of parameters to model binary-lens events, which are closely related to properties of the light curve. In this work, we explain how Bayesian priors can be added to this framework, and investigate several interesting options. Methods: We develop a mathematical formulation that allows us to compute analytically the priors on the new parameters, given some previous knowledge about other physical quantities. We explicitly compute the priors for a number of interesting cases, and show how this can be implemented in a fully Bayesian, Markov chain Monte Carlo algorithm. Results: Using Bayesian priors can accelerate microlens fitting codes by reducing the time spent considering physically implausible models, and can help us to discriminate between alternative models based on the physical plausibility of their parameters.
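A generic sketch of how such priors enter a Markov chain Monte Carlo fit: proposals are accepted using the log-likelihood plus a log-prior that is minus infinity in physically impossible regions and down-weights implausible values. The placeholder likelihood, prior, and parameter meanings are assumptions, not the paper's actual parameterization.

```python
# Generic sketch of adding a physical prior to a Metropolis-Hastings sampler:
# posterior = likelihood of the light-curve model x prior that down-weights
# physically implausible parameter values. Both functions are placeholders.
import numpy as np

rng = np.random.default_rng(1)

def log_likelihood(params):
    # placeholder: chi-square of a light-curve model against the data
    return -0.5 * np.sum((params - np.array([1.0, 0.3])) ** 2) / 0.1

def log_prior(params):
    # placeholder: log-normal prior on an Einstein-time-like parameter
    t_e = params[0]
    if t_e <= 0.0:
        return -np.inf                        # physically impossible region
    return -0.5 * np.log(t_e) ** 2 - np.log(t_e)

def metropolis(n_steps, step=0.05, start=(0.5, 0.5)):
    x = np.array(start, dtype=float)
    lp = log_likelihood(x) + log_prior(x)
    chain = []
    for _ in range(n_steps):
        proposal = x + rng.normal(0.0, step, size=x.size)
        lp_new = log_likelihood(proposal) + log_prior(proposal)
        if np.log(rng.uniform()) < lp_new - lp:   # accept/reject
            x, lp = proposal, lp_new
        chain.append(x.copy())
    return np.array(chain)

chain = metropolis(20000)
print(chain.mean(axis=0))
```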
NASA Astrophysics Data System (ADS)
Ellis, John; Garcia, Marcos A. G.; Nanopoulos, Dimitri V.; Olive, Keith A.
2016-05-01
Supersymmetry is the most natural framework for physics above the TeV scale, and the corresponding framework for early-Universe cosmology, including inflation, is supergravity. No-scale supergravity emerges from generic string compactifications and yields a non-negative potential, and is therefore a plausible framework for constructing models of inflation. No-scale inflation naturally yields predictions similar to those of the Starobinsky model based on R + R² gravity, with a tilted spectrum of scalar perturbations, n_s ∼ 0.96, and small values of the tensor-to-scalar perturbation ratio, r < 0.1, as favoured by Planck and other data on the cosmic microwave background (CMB). Detailed measurements of the CMB may provide insights into the embedding of inflation within string theory as well as its links to collider physics.
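For context, the Starobinsky-like predictions referred to above are often quoted in the slow-roll form below, where N_* is the number of e-folds (the value 55 is an illustrative choice):

```latex
n_s \simeq 1 - \frac{2}{N_*}, \qquad r \simeq \frac{12}{N_*^2}, \qquad N_* = 55 \;\Rightarrow\; n_s \approx 0.964, \; r \approx 0.004 .
```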
Comparison of Damping Mechanisms for Transverse Waves in Solar Coronal Loops
NASA Astrophysics Data System (ADS)
Montes-Solís, María; Arregui, Iñigo
2017-09-01
We present a method to assess the plausibility of alternative mechanisms to explain the damping of magnetohydrodynamic transverse waves in solar coronal loops. The considered mechanisms are resonant absorption of kink waves in the Alfvén continuum, phase mixing of Alfvén waves, and wave leakage. Our methods make use of Bayesian inference and model comparison techniques. We first infer the values for the physical parameters that control the wave damping, under the assumption of a particular mechanism, for typically observed damping timescales. Then, the computation of marginal likelihoods and Bayes factors enable us to quantify the relative plausibility between the alternative mechanisms. We find that, in general, the evidence is not large enough to support a single particular damping mechanism as the most plausible one. Resonant absorption and wave leakage offer the most probable explanations in strong damping regimes, while phase mixing is the best candidate for weak/moderate damping. When applied to a selection of 89 observed transverse loop oscillations, with their corresponding measurements of damping timescales and taking into account data uncertainties, we find that positive evidence for a given damping mechanism is only available in a few cases.
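A minimal sketch of the model-comparison step: approximate each mechanism's marginal likelihood by integrating likelihood times prior over its parameter and form the Bayes factor. The observed damping time, the priors, and the parameter-to-damping-time mappings below are toy placeholders, not the physical models compared in the paper.

```python
# Toy sketch of Bayes-factor model comparison: marginal likelihoods from a
# grid integration of likelihood x prior for two competing mechanisms.
import numpy as np

tau_obs, tau_err = 3.0, 0.5                    # observed damping time (in periods)

def likelihood(tau_pred):
    return np.exp(-0.5 * ((tau_obs - tau_pred) / tau_err) ** 2)

grid = np.linspace(0.01, 10.0, 2000)           # parameter grid for both mechanisms
dx = grid[1] - grid[0]
flat_prior = np.full_like(grid, 1.0 / (grid[-1] - grid[0]))

z_A = np.sum(likelihood(2.0 * grid) * flat_prior) * dx        # "mechanism A"
z_B = np.sum(likelihood(0.5 * grid ** 2) * flat_prior) * dx   # "mechanism B"

print("Bayes factor A vs B:", z_A / z_B)       # >1 favours A; magnitude gives strength
```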
The new AP Physics exams: Integrating qualitative and quantitative reasoning
NASA Astrophysics Data System (ADS)
Elby, Andrew
2015-04-01
When physics instructors and education researchers emphasize the importance of integrating qualitative and quantitative reasoning in problem solving, they usually mean using those types of reasoning serially and separately: first students should analyze the physical situation qualitatively/conceptually to figure out the relevant equations, then they should process those equations quantitatively to generate a solution, and finally they should use qualitative reasoning to check that answer for plausibility (Heller, Keith, & Anderson, 1992). The new AP Physics 1 and 2 exams will, of course, reward this approach to problem solving. But one kind of free response question will demand and reward a further integration of qualitative and quantitative reasoning, namely mathematical modeling and sense-making--inventing new equations to capture a physical situation and focusing on proportionalities, inverse proportionalities, and other functional relations to infer what the equation ``says'' about the physical world. In this talk, I discuss examples of these qualitative-quantitative translation questions, highlighting how they differ from both standard quantitative and standard qualitative questions. I then discuss the kinds of modeling activities that can help AP and college students develop these skills and habits of mind.
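An example of the kind of qualitative-quantitative translation described above (an illustrative case, not an item from the exams): reading a proportionality off an equation and inferring the physical consequence.

```latex
T = 2\pi\sqrt{\frac{L}{g}} \;\Rightarrow\; T \propto \sqrt{L}, \qquad L \to 4L \;\Rightarrow\; T \to 2T .
```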
The Universal Plausibility Metric (UPM) & Principle (UPP).
Abel, David L
2009-12-03
Mere possibility is not an adequate basis for asserting scientific plausibility. A precisely defined universal bound is needed beyond which the assertion of plausibility, particularly in life-origin models, can be considered operationally falsified. But can something so seemingly relative and subjective as plausibility ever be quantified? Amazingly, the answer is, "Yes." A method of objectively measuring the plausibility of any chance hypothesis (The Universal Plausibility Metric [UPM]) is presented. A numerical inequality is also provided whereby any chance hypothesis can be definitively falsified when its UPM metric of ξ is < 1 (The Universal Plausibility Principle [UPP]). Both UPM and UPP pre-exist and are independent of any experimental design and data set. No low-probability hypothetical plausibility assertion should survive peer-review without subjection to the UPP inequality standard of formal falsification (ξ < 1).
Physically-based in silico light sheet microscopy for visualizing fluorescent brain models
2015-01-01
Background We present a physically-based computational model of the light sheet fluorescence microscope (LSFM). Based on Monte Carlo ray tracing and geometric optics, our method simulates the operational aspects and image formation process of the LSFM. This simulated, in silico LSFM creates synthetic images of digital fluorescent specimens that can resemble those generated by a real LSFM, as opposed to established visualization methods producing visually-plausible images. We also propose an accurate fluorescence rendering model which takes into account the intrinsic characteristics of fluorescent dyes to simulate the light interaction with fluorescent biological specimens. Results We demonstrate first results of our visualization pipeline on a simplified brain tissue model reconstructed from the somatosensory cortex of a young rat. The modeling aspects of the LSFM units are qualitatively analysed, and the results of the fluorescence model were quantitatively validated against the fluorescence brightness equation and characteristic emission spectra of different fluorescent dyes. PMID:26329404
Radiation signatures from a locally energized flaring loop
NASA Technical Reports Server (NTRS)
Emslie, A. G.; Vlahos, L.
1980-01-01
The radiation signatures from a locally energized solar flare loop based on the physical properties of the energy release mechanisms were consistent with hard X-ray, microwave, and EUV observations for plausible source parameters. It was found that a suprathermal tail of high energy electrons is produced by the primary energy release, and that the number of energetic charged particles ejected into the interplanetary medium in the model is consistent with observations. The radiation signature model predicts that the intrinsic polarization of the hard X-ray burst should increase over the photon energy range of 20 to 100 keV.
Stringent and efficient assessment of boson-sampling devices.
Tichy, Malte C; Mayer, Klaus; Buchleitner, Andreas; Mølmer, Klaus
2014-07-11
Boson sampling holds the potential to experimentally falsify the extended Church-Turing thesis. The computational hardness of boson sampling, however, complicates the certification that an experimental device yields correct results in the regime in which it outmatches classical computers. To certify a boson sampler, one needs to verify quantum predictions and rule out models that yield these predictions without true many-boson interference. We show that a semiclassical model for many-boson propagation reproduces coarse-grained observables that are proposed as witnesses of boson sampling. A test based on Fourier matrices is demonstrated to falsify physically plausible alternatives to coherent many-boson propagation.
Upscaling pore pressure-dependent gas permeability in shales
NASA Astrophysics Data System (ADS)
Ghanbarian, Behzad; Javadpour, Farzam
2017-04-01
Upscaling pore pressure dependence of shale gas permeability is of great importance and interest in the investigation of gas production in unconventional reservoirs. In this study, we apply the Effective Medium Approximation, an upscaling technique from statistical physics, and modify the Doyen model for unconventional rocks. We develop an upscaling model to estimate the pore pressure-dependent gas permeability from pore throat size distribution, pore connectivity, tortuosity, porosity, and gas characteristics. We compare our adapted model with six data sets: three experiments, one pore-network model, and two lattice-Boltzmann simulations. Results showed that the proposed model estimated the gas permeability within a factor of 3 of the measurements/simulations in all data sets except the Eagle Ford experiment for which we discuss plausible sources of discrepancies.
ERIC Educational Resources Information Center
Leon, Arthur S.; Norstrom, Jane
1995-01-01
This paper presents epidemiologic evidence on the contributions of physical inactivity and reduced cardiorespiratory fitness to risk of coronary heart disease (CHD). The types and dose of physical activity to reduce risk of CHD and plausible biologic mechanisms for the partial protective effect are reviewed. (Author/SM)
How Physicists Made Stable Lévy Processes Physically Plausible
NASA Astrophysics Data System (ADS)
Schinckus, Christophe
2013-08-01
Stable Lévy processes have very interesting properties for describing the complex behaviour of non-equilibrium dissipative systems such as turbulence, anomalous diffusion or financial markets. However, although these processes better fit the empirical data, some of their statistical properties can raise several theoretical problems in empirical applications because they generate infinite variables. Econophysicists have developed statistical solutions to make these processes physically plausible. This paper presents a review of these analytical solutions (truncations) for stable Lévy processes and how econophysicists transformed them into data-driven processes. The evolution of these analytical solutions is presented as a progressive research programme provided by (econo)physicists for theoretical problems encountered in financial economics in the 1960s and the 1970s.
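A minimal sketch of one common truncation scheme discussed in this literature (a hard cutoff on jump sizes), assuming illustrative values for the stability index and cutoff:

```python
# Sketch of a hard-truncated Levy flight: draw symmetric alpha-stable increments
# and cap the rare large jumps at a cutoff, which restores a finite variance
# while keeping Levy-like behaviour in the bulk. alpha and the cutoff are
# illustrative choices.
import numpy as np
from scipy.stats import levy_stable

alpha, cutoff, n_steps = 1.5, 10.0, 50_000
jumps = levy_stable.rvs(alpha, 0.0, size=n_steps, random_state=2)
truncated = np.clip(jumps, -cutoff, cutoff)     # hard truncation of extreme jumps

walk = np.cumsum(truncated)
print("variance of truncated increments:", truncated.var())   # finite by construction
```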
A cosmic book. [of physics of early universe
NASA Technical Reports Server (NTRS)
Peebles, P. J. E.; Silk, Joseph
1988-01-01
A system of assigning odds to the basic elements of cosmological theories is proposed in order to evaluate the strengths and weaknesses of the theories. A figure of merit for the theories is obtained by counting and weighing the plausibility of each of the basic elements that is not substantially supported by observation or mature fundamental theory. The magnetized string model is found to be the most probable. In order of decreasing probability, the ranking for the rest of the models is: (1) the magnetized string model with no exotic matter and the baryon adiabatic model; (2) the hot dark matter model and the model of cosmic string loops; (3) the canonical cold dark matter model, the cosmic string loops model with hot dark matter, and the baryonic isocurvature model; and (4) the cosmic string loops model with no exotic matter.
Impaired associative learning in schizophrenia: behavioral and computational studies
Diwadkar, Vaibhav A.; Flaugher, Brad; Jones, Trevor; Zalányi, László; Ujfalussy, Balázs; Keshavan, Matcheri S.
2008-01-01
Associative learning is a central building block of human cognition and in large part depends on mechanisms of synaptic plasticity, memory capacity and fronto–hippocampal interactions. A disorder like schizophrenia is thought to be characterized by altered plasticity, and impaired frontal and hippocampal function. Understanding the expression of this dysfunction through appropriate experimental studies, and understanding the processes that may give rise to impaired behavior through biologically plausible computational models will help clarify the nature of these deficits. We present a preliminary computational model designed to capture learning dynamics in healthy control and schizophrenia subjects. Experimental data was collected on a spatial-object paired-associate learning task. The task evinces classic patterns of negatively accelerated learning in both healthy control subjects and patients, with patients demonstrating lower rates of learning than controls. Our rudimentary computational model of the task was based on biologically plausible assumptions, including the separation of dorsal/spatial and ventral/object visual streams, implementation of rules of learning, the explicit parameterization of learning rates (a plausible surrogate for synaptic plasticity), and learning capacity (a plausible surrogate for memory capacity). Reductions in learning dynamics in schizophrenia were well-modeled by reductions in learning rate and learning capacity. The synergy between experimental research and a detailed computational model of performance provides a framework within which to infer plausible biological bases of impaired learning dynamics in schizophrenia. PMID:19003486
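A rudimentary sketch of the learning-dynamics idea, with the learning rate and capacity made explicit as parameters and reduced values standing in for the patient group; the numbers are illustrative, not fitted to the study's data.

```python
# Negatively accelerated learning curve with explicit learning-rate and
# capacity parameters; reduced values stand in for the patient group.
import numpy as np

def learning_curve(trial, rate, capacity):
    return capacity * (1.0 - np.exp(-rate * trial))   # expected proportion correct

trials = np.arange(1, 9)
controls = learning_curve(trials, rate=0.6, capacity=0.95)
patients = learning_curve(trials, rate=0.3, capacity=0.75)   # slower, lower asymptote
print(np.round(controls, 2))
print(np.round(patients, 2))
```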
The fiber walk: a model of tip-driven growth with lateral expansion.
Bucksch, Alexander; Turk, Greg; Weitz, Joshua S
2014-01-01
Tip-driven growth processes underlie the development of many plants. To date, tip-driven growth processes have been modeled as an elongating path or series of segments, without taking into account lateral expansion during elongation. Instead, models of growth often introduce an explicit thickness by expanding the area around the completed elongated path. Modeling expansion in this way can lead to contradictions in the physical plausibility of the resulting surface and to uncertainty about how the object reached certain regions of space. Here, we introduce fiber walks as a self-avoiding random walk model for tip-driven growth processes that includes lateral expansion. In 2D, the fiber walk takes place on a square lattice and the space occupied by the fiber is modeled as a lateral contraction of the lattice. This contraction influences the possible subsequent steps of the fiber walk. The boundary of the area consumed by the contraction is derived as the dual of the lattice faces adjacent to the fiber. We show that fiber walks generate fibers that have well-defined curvatures, and thus enable the identification of the process underlying the occupancy of physical space. Hence, fiber walks provide a base from which to model both the extension and expansion of physical biological objects with finite thickness.
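A simplified sketch of the underlying random-walk ingredient (an ordinary self-avoiding walk on the square lattice); the lattice-contraction step that distinguishes the full fiber walk is omitted here.

```python
# Simplified sketch: a self-avoiding random walk on the 2D square lattice.
# The full fiber walk additionally contracts the lattice laterally around the
# occupied sites; that step is omitted here.
import random

def self_avoiding_walk(n_steps, seed=0):
    random.seed(seed)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    path, visited = [(0, 0)], {(0, 0)}
    for _ in range(n_steps):
        x, y = path[-1]
        candidates = [(x + dx, y + dy) for dx, dy in moves
                      if (x + dx, y + dy) not in visited]
        if not candidates:                     # an ordinary SAW can trap itself
            break
        step = random.choice(candidates)
        path.append(step)
        visited.add(step)
    return path

print(len(self_avoiding_walk(500)) - 1, "steps taken")
```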
Microstructure-based hyperelastic models for closed-cell solids.
Mihai, L Angela; Wyatt, Hayley; Goriely, Alain
2017-04-01
For cellular bodies involving large elastic deformations, mesoscopic continuum models that take into account the interplay between the geometry and the microstructural responses of the constituents are developed, analysed and compared with finite-element simulations of cellular structures with different architecture. For these models, constitutive restrictions for the physical plausibility of the material responses are established, and global descriptors such as nonlinear elastic and shear moduli and Poisson's ratio are obtained from the material characteristics of the constituents. Numerical results show that these models capture well the mechanical responses of finite-element simulations for three-dimensional periodic structures of neo-Hookean material with closed cells under large tension. In particular, the mesoscopic models predict the macroscopic stiffening of the structure when the stiffness of the cell-core increases.
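For reference, the neo-Hookean material mentioned above is commonly specified (in the incompressible case) by the strain-energy function below, with μ the shear modulus; this is the standard textbook form, not necessarily the paper's exact notation:

```latex
W(\mathbf{F}) = \frac{\mu}{2}\left(I_1 - 3\right), \qquad I_1 = \operatorname{tr}\!\left(\mathbf{F}^{\mathsf{T}}\mathbf{F}\right), \qquad \det\mathbf{F} = 1 .
```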
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gittens, Alex; Devarakonda, Aditya; Racah, Evan
We explore the trade-offs of performing linear algebra using Apache Spark, compared to traditional C and MPI implementations on HPC platforms. Spark is designed for data analytics on cluster computing platforms with access to local disks and is optimized for data-parallel tasks. We examine three widely-used and important matrix factorizations: NMF (for physical plausibility), PCA (for its ubiquity) and CX (for data interpretability). We apply these methods to 1.6TB particle physics, 2.2TB and 16TB climate modeling, and 1.1TB bioimaging data. The data matrices are tall-and-skinny, which enables the algorithms to map conveniently onto Spark's data-parallel model. We perform scaling experiments on up to 1600 Cray XC40 nodes, describe the sources of slowdowns, and provide tuning guidance to obtain high performance.
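A small-scale sketch of one of the factorizations named above (NMF), run with scikit-learn on a toy nonnegative matrix; the cited work instead runs such factorizations on terabyte-scale matrices with Spark or C+MPI, and the component count here is arbitrary.

```python
# Small-scale NMF sketch on a toy tall-and-skinny nonnegative matrix.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(3)
X = rng.random((1000, 50))                     # tall-and-skinny, nonnegative

model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)                     # 1000 x 5 nonnegative coefficients
H = model.components_                          # 5 x 50 nonnegative factors
print("reconstruction error:", model.reconstruction_err_)
```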
Natural Hazards Risk Reduction and the ARkStorm Scenario
NASA Astrophysics Data System (ADS)
Cox, D. A.; Dettinger, M. D.; Ralph, F. M.
2016-12-01
The ARkStorm Scenario project began in 2008, led by the USGS Multi-Hazards Demonstration Project (now Science Application for Risk Reduction) in an effort to innovate the application of science to reduce natural-hazard risk associated with large atmospheric-river (AR) storms on the West Coast of the US. The effort involved contributions from many federal, state and academic organizations including NOAA's Environmental Systems Laboratory. The ARkStorm project used new understanding of atmospheric river physics, combined with downscaled meteorological data from two recent ARs (in 1969 and 1986), to describe and model a prolonged sequence of back-to-back storms similar to those that bankrupted California in 1862. With this scientifically plausible (but not worst-case) scenario, the ARkStorm team engaged flood and levee experts to identify plausible flooding extents and durations, created a coastal-storm inundation model (CoSMoS), and California's first landslide susceptibility map, to better understand secondary meteorological and geophysical hazards (flood, wind, landslide, coastal erosion and inundation) across California. Physical damages to homes, infrastructure, agriculture, and the environment were then estimated to calculate the likely social and economic impact to California and the nation. Across California, property damage from the ARkStorm scenario was estimated to exceed $300 billion, mostly from flooding. Including damage and losses, lifeline damages and business interruptions, the total cost of an ARkStorm-sized series of storms came to nearly $725 billion, nearly three times the losses estimated from another SAFRR scenario describing a M7.8 earthquake in southern California. Thus, atmospheric rivers have the potential to be California's other "Big One." Since its creation, the ARkStorm scenario has been used in preparedness exercises by NASA, the US Navy, the State of California, the County of Ventura, and cities and counties in the Tahoe Basin and downstream into Nevada. These efforts have examined how large AR events could plausibly impact many aspects of society and environment, and how to avoid the worst of the disaster outcomes. The ARkStorm scenario will next be used in a climate extremes scenario for the U.S. Southwest.
Günther, Fritz; Marelli, Marco
2016-01-01
Noun compounds, consisting of two nouns (the head and the modifier) that are combined into a single concept, differ in terms of their plausibility: school bus is a more plausible compound than saddle olive. The present study investigates which factors influence the plausibility of attested and novel noun compounds. Distributional Semantic Models (DSMs) are used to obtain formal (vector) representations of word meanings, and compositional methods in DSMs are employed to obtain such representations for noun compounds. From these representations, different plausibility measures are computed. Three of those measures contribute in predicting the plausibility of noun compounds: The relatedness between the meaning of the head noun and the compound (Head Proximity), the relatedness between the meaning of modifier noun and the compound (Modifier Proximity), and the similarity between the head noun and the modifier noun (Constituent Similarity). We find non-linear interactions between Head Proximity and Modifier Proximity, as well as between Modifier Proximity and Constituent Similarity. Furthermore, Constituent Similarity interacts non-linearly with the familiarity with the compound. These results suggest that a compound is perceived as more plausible if it can be categorized as an instance of the category denoted by the head noun, if the contribution of the modifier to the compound meaning is clear but not redundant, and if the constituents are sufficiently similar in cases where this contribution is not clear. Furthermore, compounds are perceived to be more plausible if they are more familiar, but mostly for cases where the relation between the constituents is less clear. PMID:27732599
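An illustrative sketch of the three measures, assuming word vectors are available and that the compound vector is formed by simple additive composition (the study uses trained compositional DSMs; the random vectors below are stand-ins):

```python
# Head Proximity, Modifier Proximity, and Constituent Similarity as cosine
# similarities; random vectors stand in for trained DSM representations.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(4)
head, modifier = rng.normal(size=300), rng.normal(size=300)   # e.g. "bus", "school"
compound = head + modifier                                    # composed "school bus"

head_proximity = cosine(head, compound)
modifier_proximity = cosine(modifier, compound)
constituent_similarity = cosine(head, modifier)
print(head_proximity, modifier_proximity, constituent_similarity)
```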
NASA Astrophysics Data System (ADS)
Lombardi, D.
2011-12-01
Plausibility judgments-although well represented in conceptual change theories (see, for example, Chi, 2005; diSessa, 1993; Dole & Sinatra, 1998; Posner et al., 1982)-have received little empirical attention until our recent work investigating teachers' and students' understanding of and perceptions about human-induced climate change (Lombardi & Sinatra, 2010, 2011). In our first study with undergraduate students, we found that greater plausibility perceptions of human-induced climate accounted for significantly greater understanding of weather and climate distinctions after instruction, even after accounting for students' prior knowledge (Lombardi & Sinatra, 2010). In a follow-up study with inservice science and preservice elementary teachers, we showed that anger about the topic of climate change and teaching about climate change was significantly related to implausible perceptions about human-induced climate change (Lombardi & Sinatra, 2011). Results from our recent studies helped to inform our development of a model of the role of plausibility judgments in conceptual change situations. The model applies to situations involving cognitive dissonance, where background knowledge conflicts with an incoming message. In such situations, we define plausibility as a judgment on the relative potential truthfulness of incoming information compared to one's existing mental representations (Rescher, 1976). Students may not consciously think when making plausibility judgments, expending only minimal mental effort in what is referred to as an automatic cognitive process (Stanovich, 2009). However, well-designed instruction could facilitate students' reappraisal of plausibility judgments in more effortful and conscious cognitive processing. Critical evaluation specifically may be one effective method to promote plausibility reappraisal in a classroom setting (Lombardi & Sinatra, in progress). In science education, critical evaluation involves the analysis of how evidentiary data support a hypothesis and its alternatives. The presentation will focus on how instruction promoting critical evaluation can encourage individuals to reappraise their plausibility judgments and initiate knowledge reconstruction. In a recent pilot study, teachers experienced an instructional scaffold promoting critical evaluation of two competing climate change theories (i.e., human-induced and increasing solar irradiance) and significantly changed both their plausibility judgments and perceptions of correctness toward the scientifically-accepted model of human-induced climate change. A comparison group of teachers who did not experience the critical evaluation activity showed no significant change. The implications of these studies for future research and instruction will be discussed in the presentation, including effective ways to increase students' and teachers' ability to be critically evaluative and reappraise their plausibility judgments. With controversial science issues, such as climate change, such abilities may be necessary to facilitate conceptual change.
Lee, Juhun; Fingeret, Michelle C; Bovik, Alan C; Reece, Gregory P; Skoracki, Roman J; Hanasono, Matthew M; Markey, Mia K
2015-03-27
Patients with facial cancers can experience disfigurement as they may undergo considerable appearance changes from their illness and its treatment. Individuals with difficulties adjusting to facial cancer are concerned about how others perceive and evaluate their appearance. Therefore, it is important to understand how humans perceive disfigured faces. We describe a new strategy that allows simulation of surgically plausible facial disfigurement on a novel face for elucidating human perception of facial disfigurement. Longitudinal 3D facial images of patients (N = 17) with facial disfigurement due to cancer treatment were replicated using a facial mannequin model, by applying Thin-Plate Spline (TPS) warping and linear interpolation on the facial mannequin model in polar coordinates. Principal Component Analysis (PCA) was used to capture longitudinal structural and textural variations found within each patient with facial disfigurement arising from the treatment. We treated such variations as disfigurement. Each disfigurement was smoothly stitched on a healthy face by seeking a Poisson solution to guided interpolation using the gradient of the learned disfigurement as the guidance field vector. The modeling technique was quantitatively evaluated. In addition, panel ratings of experienced medical professionals on the plausibility of simulation were used to evaluate the proposed disfigurement model. The algorithm reproduced the given face effectively using a facial mannequin model with less than 4.4 mm maximum error for the validation fiducial points that were not used for the processing. Panel ratings of experienced medical professionals on the plausibility of simulation showed that the disfigurement model (especially for peripheral disfigurement) yielded predictions comparable to the real disfigurements. The modeling technique of this study is able to capture facial disfigurements and its simulation represents plausible outcomes of reconstructive surgery for facial cancers. Thus, our technique can be used to study human perception of facial disfigurement.
Mira variables: An informal review
NASA Technical Reports Server (NTRS)
Wing, R. F.
1980-01-01
The structure of the Mira variables is discussed with particular emphasis on the extent of their observable atmospheres, the various methods for measuring the sizes of these atmospheres, and the manner in which the size changes through the cycle. The results obtained by direct, photometric and spectroscopic methods are compared, and the problems of interpretation are addressed. Also, a simple model for the atmospheric structure and motions of Miras based on recent observations of the doubling of infrared molecular lines is described. This model, consisting of two atmospheric layers plus a circumstellar shell, provides a physically plausible picture of the atmosphere which is consistent with the photometrically measured magnitude and temperature variations as well as the spectroscopic data.
Spike train generation and current-to-frequency conversion in silicon diodes
NASA Technical Reports Server (NTRS)
Coon, D. D.; Perera, A. G. U.
1989-01-01
A device physics model is developed to analyze spontaneous neuron-like spike train generation in current-driven silicon p(+)-n-n(+) devices in cryogenic environments. The model is shown to explain the very high dynamic range (10^7) current-to-frequency conversion and experimental features of the spike train frequency as a function of input current. The devices are interesting components for implementation of parallel asynchronous processing adjacent to cryogenically cooled focal planes because of their extremely low current and power requirements, their electronic simplicity, and their pulse coding capability. They could be used to form the hardware basis for neural networks which employ biologically plausible means of information coding.
Studies of Martian polar regions [using CO2 flow]
NASA Technical Reports Server (NTRS)
Smith, C. I.; Clark, B. R.; Eschman, D. F.
1974-01-01
The flow law determined experimentally for solid CO2 establishes that an hypothesis of glacial flow of CO2 at the Martian poles is not physically unrealistic. Compression experiments carried out under 1 atmosphere pressure and constant strain rate conditions demonstrate that the strength of CO2 near its sublimation point is considerably less than the strength of water ice near its melting point. A plausible glacial model for the Martian polar caps was constructed. The CO2 deposited near the pole would have flowed outward laterally to relieve high internal shear stresses. The topography of the polar caps, and the uniform layering and general extent of the layered deposits were explained using this model.
Loucks, Eric B; Schuman-Olivier, Zev; Britton, Willoughby B; Fresco, David M; Desbordes, Gaelle; Brewer, Judson A; Fulwiler, Carl
2015-12-01
The purpose of this review is to provide (1) a synopsis on relations of mindfulness with cardiovascular disease (CVD) and major CVD risk factors, and (2) an initial consensus-based overview of mechanisms and theoretical framework by which mindfulness might influence CVD. Initial evidence, often of limited methodological quality, suggests possible impacts of mindfulness on CVD risk factors including physical activity, smoking, diet, obesity, blood pressure, and diabetes regulation. Plausible mechanisms include (1) improved attention control (e.g., ability to hold attention on experiences related to CVD risk, such as smoking, diet, physical activity, and medication adherence), (2) emotion regulation (e.g., improved stress response, self-efficacy, and skills to manage craving for cigarettes, palatable foods, and sedentary activities), and (3) self-awareness (e.g., self-referential processing and awareness of physical sensations due to CVD risk factors). Understanding mechanisms and theoretical framework should improve etiologic knowledge, providing customized mindfulness intervention targets that could enable greater mindfulness intervention efficacy.
Schuman-Olivier, Zev; Britton, Willoughby B.; Fresco, David M.; Desbordes, Gaelle; Brewer, Judson A.; Fulwiler, Carl
2016-01-01
The purpose of this review is to provide (1) a synopsis on relations of mindfulness with cardiovascular disease (CVD) and major CVD risk factors, and (2) an initial consensus-based overview of mechanisms and theoretical framework by which mindfulness might influence CVD. Initial evidence, often of limited methodological quality, suggests possible impacts of mindfulness on CVD risk factors including physical activity, smoking, diet, obesity, blood pressure, and diabetes regulation. Plausible mechanisms include (1) improved attention control (e.g., ability to hold attention on experiences related to CVD risk, such as smoking, diet, physical activity, and medication adherence), (2) emotion regulation (e.g., improved stress response, self-efficacy, and skills to manage craving for cigarettes, palatable foods, and sedentary activities), and (3) self-awareness (e.g., self-referential processing and awareness of physical sensations due to CVD risk factors). Understanding mechanisms and theoretical framework should improve etiologic knowledge, providing customized mindfulness intervention targets that could enable greater mindfulness intervention efficacy. PMID:26482755
NASA Astrophysics Data System (ADS)
Madi, Raneem; Huibert de Rooij, Gerrit; Mielenz, Henrike; Mai, Juliane
2018-02-01
Few parametric expressions for the soil water retention curve are suitable for dry conditions. Furthermore, expressions for the soil hydraulic conductivity curves associated with parametric retention functions can behave unrealistically near saturation. We developed a general criterion for water retention parameterizations that ensures physically plausible conductivity curves. Only 3 of the 18 tested parameterizations met this criterion without restrictions on the parameters of a popular conductivity curve parameterization. A fourth required one parameter to be fixed. We estimated parameters by shuffled complex evolution (SCE) with the objective function tailored to various observation methods used to obtain retention curve data. We fitted the four parameterizations with physically plausible conductivities as well as the most widely used parameterization. The performance of the resulting 12 combinations of retention and conductivity curves was assessed in a numerical study with 751 days of semiarid atmospheric forcing applied to unvegetated, uniform, 1 m freely draining columns for four textures. Choosing different parameterizations had a minor effect on evaporation, but cumulative bottom fluxes varied by up to an order of magnitude between them. This highlights the need for a careful selection of the soil hydraulic parameterization that ideally does not only rely on goodness of fit to static soil water retention data but also on hydraulic conductivity measurements. Parameter fits for 21 soils showed that extrapolations into the dry range of the retention curve often became physically more realistic when the parameterization had a logarithmic dry branch, particularly in fine-textured soils where high residual water contents would otherwise be fitted.
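The kind of retention-curve fitting described above can be sketched as follows, using the widely used van Genuchten parameterization and SciPy's differential evolution as a stand-in for the shuffled complex evolution (SCE) optimizer; the observations, bounds, and objective are invented for illustration and are not those of the study.

import numpy as np
from scipy.optimize import differential_evolution

def van_genuchten(h, theta_r, theta_s, alpha, n):
    # water retention theta(h); h is suction head (positive, in cm)
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# hypothetical retention observations (suction head in cm, volumetric water content)
h_obs = np.array([1.0, 10.0, 31.6, 100.0, 316.0, 1000.0, 15000.0])
theta_obs = np.array([0.42, 0.40, 0.35, 0.27, 0.18, 0.12, 0.05])

def sse(params):
    # sum of squared errors, the objective minimised by the global optimiser
    return float(np.sum((van_genuchten(h_obs, *params) - theta_obs) ** 2))

bounds = [(0.0, 0.15), (0.30, 0.50), (1e-3, 0.5), (1.1, 4.0)]   # theta_r, theta_s, alpha, n
result = differential_evolution(sse, bounds, seed=1)
print("fitted parameters:", result.x.round(4), " SSE:", round(result.fun, 5))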
ERIC Educational Resources Information Center
Lombardi, Doug; Bickel, Elliot S.; Bailey, Janelle M.; Burrell, Shondricka
2018-01-01
Evaluation is an important aspect of science and is receiving increasing attention in science education. The present study investigated (1) changes to plausibility judgments and knowledge as a result of a series of instructional scaffolds, called model-evidence link activities, that facilitated evaluation of scientific and alternative models in…
A one-dimensional model of solid-earth electrical resistivity beneath Florida
Blum, Cletus; Love, Jeffrey J.; Pedrie, Kolby; Bedrosian, Paul A.; Rigler, E. Joshua
2015-11-19
An estimated one-dimensional layered model of electrical resistivity beneath Florida was developed from published geological and geophysical information. The resistivity of each layer is represented by plausible upper and lower bounds as well as a geometric mean resistivity. Corresponding impedance transfer functions, Schmucker-Weidelt transfer functions, apparent resistivity, and phase responses are calculated for inducing geomagnetic frequencies ranging from 10⁻⁵ to 10⁰ hertz. The resulting one-dimensional model and response functions can be used to make general estimates of time-varying electric fields associated with geomagnetic storms such as might represent induction hazards for electric-power grid operation. The plausible upper- and lower-bound resistivity structures show the uncertainty, giving a wide range of plausible time-varying electric fields.
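A minimal sketch of how such layered-earth response functions can be computed follows, using the standard one-dimensional impedance recursion; the three-layer column shown is hypothetical and is not the published Florida resistivity model.

import numpy as np

MU0 = 4e-7 * np.pi

def surface_impedance(freq, resistivities, thicknesses):
    # 1-D layered-earth surface impedance by the usual bottom-up recursion;
    # resistivities in ohm-m (top layer first, last entry is the half-space),
    # thicknesses in m (one per layer except the bottom half-space)
    omega = 2.0 * np.pi * freq
    z = np.sqrt(1j * omega * MU0 * resistivities[-1])        # half-space impedance
    for rho, h in zip(resistivities[-2::-1], thicknesses[::-1]):
        k = np.sqrt(1j * omega * MU0 / rho)                  # propagation constant
        zi = 1j * omega * MU0 / k                            # intrinsic impedance
        z = zi * (z + zi * np.tanh(k * h)) / (zi + z * np.tanh(k * h))
    return z

# hypothetical three-layer column (illustrative values, not the Florida model)
rho = [100.0, 1000.0, 10.0]      # ohm-m
thk = [2000.0, 20000.0]          # m
for f in np.logspace(-5, 0, 6):
    Z = surface_impedance(f, rho, thk)
    rho_app = abs(Z) ** 2 / (2.0 * np.pi * f * MU0)          # apparent resistivity
    print(f"{f:9.1e} Hz   rho_a = {rho_app:8.1f} ohm-m   phase = {np.degrees(np.angle(Z)):5.1f} deg")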
Formulating physical processes in a full-range model of soil water retention
NASA Astrophysics Data System (ADS)
Nimmo, J. R.
2016-12-01
Currently-used water retention models vary in how much their formulas correspond to controlling physical processes such as capillarity, adsorption, and air-trapping. In model development, realistic correspondence to physical processes has often been a lower priority than ease of use and compatibility with other models. For example, the wettest range is normally represented simplistically, as by a straight line of zero slope, or by default using the same formulation as for the middle range. The new model presented here recognizes dominant processes within three segments of the range from oven-dryness to saturation. The adsorption-dominated dry range is represented by a logarithmic relation used in earlier models. The middle range of capillary advance/retreat and Haines jumps is represented by a new adaptation of the lognormal distribution function. In the wet range, the expansion of trapped air in response to matric pressure change is important because (1) it displaces water, and (2) it triggers additional volume-adjusting processes such as the collapse of liquid bridges between air pockets. For this range, the model incorporates the Boyle's law inverse proportionality of trapped air volume and pressure, amplified by an empirical factor to account for the additional processes. With their basis in processes, the model's parameters have a strong physical interpretation, and in many cases can be assigned values from knowledge of fundamental relationships or individual measurements. An advantage of the physically-plausible treatment of the wet range is that it avoids such problems as the blowing-up of derivatives on approach to saturation, enhancing the model's utility for important but challenging wet-range phenomena such as domain exchange between preferential flow paths and soil matrix. Further development might be able to accommodate hysteresis by a systematic adjustment of the relation between the wet and middle ranges.
Computer animation challenges for computational fluid dynamics
NASA Astrophysics Data System (ADS)
Vines, Mauricio; Lee, Won-Sook; Mavriplis, Catherine
2012-07-01
Computer animation requirements differ from those of traditional computational fluid dynamics (CFD) investigations in that visual plausibility and rapid frame update rates trump physical accuracy. We present an overview of the main techniques for fluid simulation in computer animation, starting with Eulerian grid approaches, the Lattice Boltzmann method, Fourier transform techniques and Lagrangian particle introduction. Adaptive grid methods, precomputation of results for model reduction, parallelisation and computation on graphical processing units (GPUs) are reviewed in the context of accelerating simulation computations for animation. A survey of current specific approaches for the application of these techniques to the simulation of smoke, fire, water, bubbles, mixing, phase change and solid-fluid coupling is also included. Adding plausibility to results through particle introduction, turbulence detail and concentration on regions of interest by level set techniques has elevated the degree of accuracy and realism of recent animations. Basic approaches are described here. Techniques to control the simulation to produce a desired visual effect are also discussed. Finally, some references to rendering techniques and haptic applications are mentioned to provide the reader with a complete picture of the challenges of simulating fluids in computer animation.
Brazier, John E.; Rowen, Donna; Barkham, Michael
2013-01-01
Background. The Clinical Outcomes in Routine Evaluation–Outcome Measure (CORE-OM) is used to evaluate the effectiveness of psychological therapies in people with common mental disorders. The objective of this study was to estimate a preference-based index for this population using CORE-6D, a health state classification system derived from the CORE-OM consisting of a 5-item emotional component and a physical item, and to demonstrate a novel method for generating states that are not orthogonal. Methods. Rasch analysis was used to identify 11 emotional health states from CORE-6D that were frequently observed in the study population and are, thus, plausible (in contrast, conventional statistical design might generate implausible states). Combined with the 3 response levels of the physical item of CORE-6D, they generate 33 plausible health states, 18 of which were selected for valuation. A valuation survey of 220 members of the public in South Yorkshire, United Kingdom, was undertaken using the time tradeoff (TTO) method. Regression analysis was subsequently used to predict values for all possible states described by CORE-6D. Results. A number of multivariate regression models were built to predict values for the 33 health states of CORE-6D, using the Rasch logit value of the emotional state and the response level of the physical item as independent variables. A cubic model with high predictive value (adjusted R2 = 0.990) was selected to predict TTO values for all 729 CORE-6D health states. Conclusion. The CORE-6D preference-based index will enable the assessment of cost-effectiveness of interventions for people with common mental disorders using existing and prospective CORE-OM data sets. The new method for generating states may be useful for other instruments with highly correlated dimensions. PMID:23178639
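The regression step described above can be sketched as follows, with a cubic polynomial in the Rasch logit of the emotional state plus the physical-item level as predictors of the TTO value; the synthetic data and the exact model specification are assumptions made only for illustration, not the published CORE-6D model.

import numpy as np

# hypothetical valuation data: Rasch logit of each emotional health state,
# physical-item response level (0-2), and the mean observed TTO value
rng = np.random.default_rng(0)
rasch = rng.uniform(-3.0, 3.0, 18)
phys = rng.integers(0, 3, 18)
tto = (0.9 - 0.05 * phys - 0.08 * rasch - 0.01 * rasch**2
       - 0.004 * rasch**3 + rng.normal(0.0, 0.02, 18))

# design matrix: cubic terms in the Rasch logit plus the physical-item level
X = np.column_stack([np.ones_like(rasch), rasch, rasch**2, rasch**3, phys])
coef, *_ = np.linalg.lstsq(X, tto, rcond=None)
pred = X @ coef
r2 = 1.0 - np.sum((tto - pred) ** 2) / np.sum((tto - tto.mean()) ** 2)
print("coefficients:", coef.round(3), " R2 =", round(r2, 3))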
Model-based recovery of histological parameters from multispectral images of the colon
NASA Astrophysics Data System (ADS)
Hidovic-Rowe, Dzena; Claridge, Ela
2005-04-01
Colon cancer alters the macroarchitecture of the colon tissue. Common changes include angiogenesis and the distortion of the tissue collagen matrix. Such changes affect the colon colouration. This paper presents the principles of a novel optical imaging method capable of extracting parameters depicting histological quantities of the colon. The method is based on a computational, physics-based model of light interaction with tissue. The colon structure is represented by three layers: mucosa, submucosa and muscle layer. Optical properties of the layers are defined by molar concentration and absorption coefficients of haemoglobins; the size and density of collagen fibres; the thickness of the layer and the refractive indexes of collagen and the medium. Using the entire histologically plausible ranges for these parameters, a cross-reference is created computationally between the histological quantities and the associated spectra. The output of the model was compared to experimental data acquired in vivo from 57 histologically confirmed normal and abnormal tissue samples and histological parameters were extracted. The model produced spectra which match well the measured data, with the corresponding spectral parameters being well within histologically plausible ranges. Parameters extracted for the abnormal spectra showed the increase in blood volume fraction and changes in collagen pattern characteristic of the colon cancer. The spectra extracted from multi-spectral images of ex-vivo colon including adenocarcinoma show the characteristic features associated with normal and abnormal colon tissue. These findings suggest that it should be possible to compute histological quantities for the colon from the multi-spectral images.
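The cross-reference between histological parameters and modelled spectra lends itself to a simple lookup-table inversion, sketched below with a toy two-parameter forward model standing in for the physics-based light-tissue model of the study; all functions and values are illustrative assumptions.

import numpy as np

def build_lookup(forward_model, param_grid):
    # precompute the cross-reference: parameter vectors -> modelled spectra
    params = np.array(list(param_grid))
    spectra = np.array([forward_model(p) for p in params])
    return params, spectra

def invert_spectrum(measured, params, spectra):
    # recover parameters by the closest match in spectral space
    err = np.linalg.norm(spectra - measured, axis=1)
    i = int(np.argmin(err))
    return params[i], float(err[i])

def toy_model(p):
    # toy two-parameter stand-in for the physics-based light-tissue model:
    # "reflectance" in 5 bands as a smooth function of blood volume fraction
    # and a collagen-related parameter (purely illustrative)
    blood, collagen = p
    bands = np.linspace(0.45, 0.65, 5)          # wavelength in micrometres
    return np.exp(-10.0 * blood * (bands - 0.55) ** 2) * (1.0 - 0.3 * collagen * bands)

grid = [(b, c) for b in np.linspace(0.01, 0.20, 20) for c in np.linspace(0.1, 1.0, 10)]
params, spectra = build_lookup(toy_model, grid)
measured = toy_model((0.12, 0.55)) + np.random.default_rng(2).normal(0.0, 1e-3, 5)
print(invert_spectrum(measured, params, spectra))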
Biologically Plausible, Human-scale Knowledge Representation
ERIC Educational Resources Information Center
Crawford, Eric; Gingerich, Matthew; Eliasmith, Chris
2016-01-01
Several approaches to implementing symbol-like representations in neurally plausible models have been proposed. These approaches include binding through synchrony (Shastri & Ajjanagadde, 1993), "mesh" binding (van der Velde & de Kamps, 2006), and conjunctive binding (Smolensky, 1990). Recent theoretical work has suggested that…
Evaluation of risk from acts of terrorism: the adversary/defender model using belief and fuzzy sets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darby, John L.
Risk from an act of terrorism is a combination of the likelihood of an attack, the likelihood of success of the attack, and the consequences of the attack. The considerable epistemic uncertainty in each of these three factors can be addressed using the belief/plausibility measure of uncertainty from the Dempster/Shafer theory of evidence. The adversary determines the likelihood of the attack. The success of the attack and the consequences of the attack are determined by the security system and mitigation measures put in place by the defender. This report documents a process for evaluating risk of terrorist acts using an adversary/defender model with belief/plausibility as the measure of uncertainty. Also, the adversary model is a linguistic model that applies belief/plausibility to fuzzy sets used in an approximate reasoning rule base.
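For readers unfamiliar with the belief/plausibility pair from Dempster-Shafer theory, the following sketch computes both quantities for a query set from a basic mass assignment; the focal elements and masses are invented for the example and are not taken from the report.

def belief_plausibility(mass, query):
    # mass: dict mapping frozenset focal elements to basic mass values (summing to 1)
    # belief       = total mass of focal elements wholly inside the query set
    # plausibility = total mass of focal elements that overlap the query set
    query = frozenset(query)
    bel = sum(m for a, m in mass.items() if a and a <= query)
    pl = sum(m for a, m in mass.items() if a & query)
    return bel, pl

# illustrative mass assignment over attack-likelihood categories
mass = {
    frozenset({"low"}): 0.2,
    frozenset({"medium"}): 0.3,
    frozenset({"medium", "high"}): 0.4,            # evidence that cannot discriminate
    frozenset({"low", "medium", "high"}): 0.1,     # residual ignorance
}
print(belief_plausibility(mass, {"medium", "high"}))   # -> (0.7, 0.8)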
Fermion number of twisted kinks in the NJL2 model revisited
NASA Astrophysics Data System (ADS)
Thies, Michael
2018-03-01
As a consequence of axial current conservation, fermions cannot be bound in localized lumps in the massless Nambu-Jona-Lasinio model. In the case of twisted kinks, this manifests itself in a cancellation between the valence fermion density and the fermion density induced in the Dirac sea. To attribute the correct fermion number to these bound states requires an infrared regularization. Recently, this has been achieved by introducing a bare fermion mass, at least in the nonrelativistic regime of small twist angles and fermion numbers. Here, we propose a simpler regularization using a finite box which preserves integrability and can be applied at any twist angle. A consistent and physically plausible assignment of fermion number to all twisted kinks emerges.
NASA Astrophysics Data System (ADS)
Li, Tianjun; Nanopoulos, Dimitri V.; Walker, Joel W.
2010-10-01
We consider proton decay in the testable flipped SU(5)×U(1)X models with TeV-scale vector-like particles which can be realized in free fermionic string constructions and F-theory model building. We significantly improve upon the determination of light threshold effects from prior studies, and perform a fresh calculation of the second loop for the process p→eπ from the heavy gauge boson exchange. The cumulative result is comparatively fast proton decay, with a majority of the most plausible parameter space within reach of the future Hyper-Kamiokande and DUSEL experiments. Because the TeV-scale vector-like particles can be produced at the LHC, we predict a strong correlation between the most exciting particle physics experiments of the coming decade.
Interacting steps with finite-range interactions: Analytical approximation and numerical results
NASA Astrophysics Data System (ADS)
Jaramillo, Diego Felipe; Téllez, Gabriel; González, Diego Luis; Einstein, T. L.
2013-05-01
We calculate an analytical expression for the terrace-width distribution P(s) for an interacting step system with nearest- and next-nearest-neighbor interactions. Our model is derived by mapping the step system onto a statistically equivalent one-dimensional system of classical particles. The validity of the model is tested with several numerical simulations and experimental results. We explore the effect of the range of interactions q on the functional form of the terrace-width distribution and pair correlation functions. For physically plausible interactions, we find modest changes when next-nearest neighbor interactions are included and generally negligible changes when more distant interactions are allowed. We discuss methods for extracting from simulated experimental data the characteristic scale-setting terms in assumed potential forms.
Combustion of hydrogen injected into a supersonic airstream (the SHIP computer program)
NASA Technical Reports Server (NTRS)
Markatos, N. C.; Spalding, D. B.; Tatchell, D. G.
1977-01-01
The mathematical and physical basis of the SHIP computer program which embodies a finite-difference, implicit numerical procedure for the computation of hydrogen injected into a supersonic airstream at an angle ranging from normal to parallel to the airstream main flow direction is described. The physical hypotheses built into the program include: a two-equation turbulence model, and a chemical equilibrium model for the hydrogen-oxygen reaction. Typical results for equilibrium combustion are presented and exhibit qualitatively plausible behavior. The computer time required for a given case is approximately 1 minute on a CDC 7600 machine. A discussion of the assumption of parabolic flow in the injection region is given which suggests that improvement in calculation in this region could be obtained by use of the partially parabolic procedure of Pratap and Spalding. It is concluded that the technique described herein provides the basis for an efficient and reliable means for predicting the effects of hydrogen injection into supersonic airstreams and of its subsequent combustion.
NASA Astrophysics Data System (ADS)
Phillips, Alfred, Jr.
Summ means the entirety of the multiverse. It seems clear, from the inflation theories of A. Guth and others, that the creation of many universes is plausible. We argue that Absolute cosmological ideas, not unlike those of I. Newton, may be consistent with dynamic multiverse creations. As suggested in W. Heisenberg's uncertainty principle, and with the Anthropic Principle defended by S. Hawking, et al., human consciousness, buttressed by findings of neuroscience, may have to be considered in our models. Predictability, as A. Einstein realized with Invariants and General Relativity, may be required for new ideas to be part of physics. We present here a two-postulate model geared to an Absolute Summ. The seedbed of this work is part of Akhnaton's philosophy (see S. Freud, Moses and Monotheism). Most important, however, is that the structure of human consciousness, manifest in Kenya's Rift Valley 200,000 years ago as Homo sapiens, who were the culmination of the six-million-year co-creation process of Hominins and Nature in Africa, allows us to do the physics that we do.
South, Susan C.; Hamdi, Nayla; Krueger, Robert F.
2015-01-01
For more than a decade, biometric moderation models have been used to examine whether genetic and environmental influences on individual differences might vary within the population. These quantitative gene × environment interaction (G×E) models not only have the potential to elucidate when genetic and environmental influences on a phenotype might differ, but why, as they provide an empirical test of several theoretical paradigms that serve as useful heuristics to explain etiology—diathesis-stress, bioecological, differential susceptibility, and social control. In the current manuscript, we review how these developmental theories align with different patterns of findings from statistical models of gene-environment interplay. We then describe the extant empirical evidence, using work by our own research group and others, to lay out genetically-informative plausible accounts of how phenotypes related to social inequality—physical health and cognition—might relate to these theoretical models. PMID:26426103
South, Susan C; Hamdi, Nayla R; Krueger, Robert F
2017-02-01
For more than a decade, biometric moderation models have been used to examine whether genetic and environmental influences on individual differences might vary within the population. These quantitative Gene × Environment interaction models have the potential to elucidate not only when genetic and environmental influences on a phenotype might differ, but also why, as they provide an empirical test of several theoretical paradigms that serve as useful heuristics to explain etiology-diathesis-stress, bioecological, differential susceptibility, and social control. In the current article, we review how these developmental theories align with different patterns of findings from statistical models of gene-environment interplay. We then describe the extant empirical evidence, using work by our own research group and others, to lay out genetically informative plausible accounts of how phenotypes related to social inequality-physical health and cognition-might relate to these theoretical models. © 2015 Wiley Periodicals, Inc.
A Tissue Propagation Model for Validating Close-Proximity Biomedical Radiometer Measurements
NASA Technical Reports Server (NTRS)
Bonds, Q.; Herzig, P.; Weller, T.
2016-01-01
The propagation of thermally-generated electromagnetic emissions through stratified human tissue is studied herein using a non-coherent mathematical model. The model is developed to complement subsurface body temperature measurements performed using a close proximity microwave radiometer. The model takes into account losses and reflections as thermal emissions propagate through the body, before being emitted at the skin surface. The derivation is presented in four stages and applied to the human core phantom, a physical representation of a stomach volume of skin, muscle, and blood-fatty tissue. A drop in core body temperature is simulated via the human core phantom and the response of the propagation model is correlated to the radiometric measurement. The results are comparable, with differences on the order of 1.5 - 3%. Hence the plausibility of core body temperature extraction via close proximity radiometry is demonstrated, given that the electromagnetic characteristics of the stratified tissue layers are known.
McNeer, Richard R; Bennett, Christopher L; Dudaryk, Roman
2016-02-01
Operating rooms are identified as one of the noisiest clinical environments, and intraoperative noise is associated with adverse effects on staff and patient safety. Simulation-based experiments would offer controllable and safe venues for investigating this noise problem. However, realistic simulation of the clinical auditory environment is rare in current simulators. Therefore, we retrofitted our operating room simulator to be able to produce immersive auditory simulations with the use of typical sound sources encountered during surgeries. Then, we tested the hypothesis that anesthesia residents would perceive greater task load and fatigue while being given simulated lunch breaks in noisy environments rather than in quiet ones. As a secondary objective, we proposed and tested the plausibility of a novel psychometric instrument for the assessment of stress. In this simulation-based, randomized, repeated-measures, crossover study, 2 validated psychometric survey instruments, the NASA Task Load Index (NASA-TLX), composed of 6 items, and the Swedish Occupational Fatigue Inventory (SOFI), composed of 5 items, were used to assess perceived task load and fatigue, respectively, in first-year anesthesia residents. Residents completed the psychometric instruments after being given lunch breaks in quiet and noisy intraoperative environments (soundscapes). The effects of soundscape grouping on the psychometric instruments and their comprising items were analyzed with a split-plot analysis. A model for a new psychometric instrument for measuring stress that combines the NASA-TLX and SOFI instruments was proposed, and a factor analysis was performed on the collected data to determine the model's plausibility. Twenty residents participated in this study. Multivariate analysis of variance showed an effect of soundscape grouping on the combined NASA-TLX and SOFI instrument items (P = 0.003), and univariate comparisons reached significance for the NASA Temporal Demand item (P = 0.0004) and the SOFI Lack of Energy item (P = 0.001). Factor analysis extracted 4 factors, which were assigned the following construct names for model development: Psychological Task Load, Psychological Fatigue, Acute Physical Load, and Performance-Chronic Physical Load. Six of the 7 fit tests used in the partial confirmatory factor analysis were positive when we fitted the data to the proposed model, suggesting that further validation is warranted. This study provides evidence that noise during surgery can increase feelings of stress, as measured by perceived task load and fatigue levels, in anesthesiologists and adds to the growing literature pointing to an overall adverse impact of clinical noise on caregivers and patient safety. The psychometric model proposed in this study for assessing perceived stress is plausible based on factor analysis and will be useful for characterizing the impact of the clinical environment on subject stress levels in future investigations.
Adaptive selection and validation of models of complex systems in the presence of uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrell-Maupin, Kathryn; Oden, J. T.
This study describes versions of OPAL, the Occam-Plausibility Algorithm in which the use of Bayesian model plausibilities is replaced with information theoretic methods, such as the Akaike Information Criterion and the Bayes Information Criterion. Applications to complex systems of coarse-grained molecular models approximating atomistic models of polyethylene materials are described. All of these model selection methods take into account uncertainties in the model, the observational data, the model parameters, and the predicted quantities of interest. A comparison of the models chosen by Bayesian model selection criteria and those chosen by the information-theoretic criteria is given.
Adaptive selection and validation of models of complex systems in the presence of uncertainty
Farrell-Maupin, Kathryn; Oden, J. T.
2017-08-01
This study describes versions of OPAL, the Occam-Plausibility Algorithm in which the use of Bayesian model plausibilities is replaced with information theoretic methods, such as the Akaike Information Criterion and the Bayes Information Criterion. Applications to complex systems of coarse-grained molecular models approximating atomistic models of polyethylene materials are described. All of these model selection methods take into account uncertainties in the model, the observational data, the model parameters, and the predicted quantities of interest. A comparison of the models chosen by Bayesian model selection criteria and those chosen by the information-theoretic criteria is given.
New Insights into Auroral Particle Acceleration via Coordinated Optical-Radar Networks
NASA Astrophysics Data System (ADS)
Hirsch, M.
2016-12-01
The efficacy of instruments synthesized from heterogeneous sensor networks is increasingly being realized in fielded science observation systems. New insights into the finest spatio-temporal scales of ground-observable ionospheric physics are realized by coupling low-level data from fixed legacy instruments with mobile and portable sensors. In particular, turbulent ionospheric events give enhanced radar returns more than three orders of magnitude larger than typical incoherent plasma observations. Radar integration times for the Poker Flat Incoherent Scatter Radar (PFISR) can thereby be shrunk from order 100 second integration time down to order 100 millisecond integration time for the ion line. Auroral optical observations with 20 millisecond cadence synchronized in absolute time with the radar help uncover plausible particle acceleration processes for the highly dynamic aurora often associated with Langmuir turbulence. Quantitative analysis of coherent radar returns combined with a physics-based model yielding optical volume emission rate profiles vs. differential number flux input of precipitating particles into the ionosphere yield plausibility estimates for a particular auroral acceleration process type. Tabulated results from a survey of auroral events where the Boston University High Speed Auroral Tomography system operated simultaneously with PFISR are presented. Context is given to the narrow-field HiST observations by the Poker Flat Digital All-Sky Camera and THEMIS GBO ASI network. Recent advances in high-rate (order 100 millisecond) plasma line ISR observations (100x improvement in temporal resolution) will contribute to future coordinated observations. ISR beam pattern and pulse parameter configurations favorable for future coordinated optical-ISR experiments are proposed in light of recent research uncovering the criticality of aspect angle to ISR-observable physics. High-rate scientist-developed GPS TEC receivers are expected to contribute additional high resolution observations to such experiments.
Barberio, Amanda; McLaren, Lindsay
2011-01-01
The behavioural and socio-cultural processes underlying the association between socio-economic position (SEP) and body mass index (BMI) remain unclear. Occupational physical activity (OPA) is one plausible explanatory variable that has not been previously considered. 1) To examine the association between OPA and BMI, and 2) to examine whether OPA mediates the SEP-BMI association, in a Canadian population-based sample. This cross-sectional study was based on secondary analysis of the 2008 Canadian Community Health Survey data, focusing on adults (age 25-64) working at a job or business (men, n = 1,036; women, n = 936). BMI was based on measured height and weight and we derived a novel indicator of OPA from the National Occupational Classification Career Handbook. Our analytic technique was ordinary least squares regression, adjusting for a range of socio-demographic, health and behavioural covariates. OPA was marginally associated with BMI in women, such that women with medium levels of OPA tended to be lighter than women with low levels of OPA, in adjusted models. No associations between OPA and BMI were detected for males. Baron and Kenny's (1986) three conditions for testing mediation were not satisfied, and thus we were unable to proceed with testing OPA as a mediator. Notwithstanding the small effects observed in women, overall the associations between OPA and BMI were neither clear nor strong, which could reflect conceptual and/or methodological reasons. Future research on this topic might incorporate other plausible explanatory variables (e.g., job-related psychosocial stress) and adopt a prospective design.
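The Baron and Kenny mediation steps mentioned above can be sketched as follows, using ordinary least squares as in the study; the variable construction and synthetic data are assumptions for illustration only, not the CCHS analysis itself.

import numpy as np
import statsmodels.api as sm

def baron_kenny(sep, opa, bmi):
    # Baron & Kenny (1986) conditions for testing OPA as a mediator of SEP -> BMI:
    # (1) SEP predicts BMI, (2) SEP predicts OPA, (3) OPA predicts BMI given SEP
    def fit(y, X):
        return sm.OLS(y, sm.add_constant(X)).fit()
    c_path = fit(bmi, sep)
    a_path = fit(opa, sep)
    b_path = fit(bmi, np.column_stack([sep, opa]))
    return {
        "SEP->BMI p": float(c_path.pvalues[1]),
        "SEP->OPA p": float(a_path.pvalues[1]),
        "OPA->BMI|SEP p": float(b_path.pvalues[2]),
    }

# synthetic illustration only (not the CCHS data)
rng = np.random.default_rng(3)
sep = rng.normal(size=500)
opa = 0.1 * sep + rng.normal(size=500)
bmi = 27.0 - 0.4 * sep + 0.05 * opa + rng.normal(size=500)
print(baron_kenny(sep, opa, bmi))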
Workshop Report on Virtual Worlds and Immersive Environments
NASA Technical Reports Server (NTRS)
Langhoff, Stephanie R.; Cowan-Sharp, Jessy; Dodson, Karen E.; Damer, Bruce; Ketner, Bob
2009-01-01
The workshop revolved around three framing ideas or scenarios about the evolution of virtual environments: 1. Remote exploration: The ability to create high fidelity environments rendered from external data or models such that exploration, design and analysis that is truly interoperable with the physical world can take place within them. 2. We all get to go: The ability to engage anyone in being a part of or contributing to an experience (such as a space mission), no matter their training or location. It is the creation of a new paradigm for education, outreach, and the conduct of science in society that is truly participatory. 3. Become the data: A vision of a future where boundaries between the physical and the virtual have ceased to be meaningful. What would this future look like? Is this plausible? Is it desirable? Why and why not?
Simulating human behavior for national security human interactions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernard, Michael Lewis; Hart, Dereck H.; Verzi, Stephen J.
2007-01-01
This 3-year research and development effort focused on what we believe is a significant technical gap in existing modeling and simulation capabilities: the representation of plausible human cognition and behaviors within a dynamic, simulated environment. Specifically, the intent of the "Simulating Human Behavior for National Security Human Interactions" project was to demonstrate initial simulated human modeling capability that realistically represents intra- and inter-group interaction behaviors between simulated humans and human-controlled avatars as they respond to their environment. Significant progress was made towards simulating human behaviors through the development of a framework that produces realistic characteristics and movement. The simulated humans were created from models designed to be psychologically plausible by being based on robust psychological research and theory. Progress was also made towards enhancing Sandia National Laboratories' existing cognitive models to support culturally plausible behaviors that are important in representing group interactions. These models were implemented in the modular, interoperable, and commercially supported Umbra® simulation framework.
Quantifying the physical, social and attitudinal environment of children with cerebral palsy.
Dickinson, Heather O; Colver, Allan
2011-01-01
To develop an instrument to represent the availability of needed environmental features (EFs) in the physical, social and attitudinal environment of home, school and community for children with cerebral palsy. Following a literature review and qualitative studies, the European Child Environment Questionnaire (ECEQ) was developed to capture whether EFs needed by children with cerebral palsy were available to them: 24, 24 and 12 items related to the physical, social and attitudinal environments, respectively. The ECEQ was administered to parents of 818 children with cerebral palsy aged 8-12 years, in seven European countries. A domain structure was developed using factor analysis. Parents responded to 98% of items. Seven items were omitted from statistical models as the EFs they referred to were available to most children who needed them; two items were omitted as they did not fit well into plausible domains. The final domains, based on 51 items, were: Transport, Physical - home, Physical - community, Physical - school, Social support - home, Social support - community, Attitudes - family and friends, Attitudes - teachers and therapists, Attitudes - classmates. ECEQ was acceptable to parents and can be used to assess both the access children with cerebral palsy have to the EFs that they need and how available individual EFs are.
Effects on the CMB from compactification before inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kontou, Eleni-Alexandra; Blanco-Pillado, Jose J.; Hertzberg, Mark P.
2017-04-01
Many theories beyond the Standard Model include extra dimensions, though these have yet to be directly observed. In this work we consider the possibility of a compactification mechanism which both allows extra dimensions and is compatible with current observations. This compactification is predicted to leave a signature on the CMB by altering the amplitude of the low l multipoles, dependent on the amount of inflation. Recently discovered CMB anomalies at low multipoles may be evidence for this. In our model we assume the spacetime is the product of a four-dimensional spacetime and flat extra dimensions. Before the compactification, both the four-dimensional spacetime and the extra dimensions can either be expanding or contracting independently. Taking into account physical constraints, we explore the observational consequences and the plausibility of these different models.
Walder, J.S.; O'Connor, J.E.; Costa, J.E.
1997-01-01
We analyze a simple, physically-based model of breach formation in natural and constructed earthen dams to elucidate the principal factors controlling the flood hydrograph at the breach. Formation of the breach, which is assumed trapezoidal in cross-section, is parameterized by the mean rate of downcutting, k, the value of which is constrained by observations. A dimensionless formulation of the model leads to the prediction that the breach hydrograph depends upon lake shape, the ratio r of breach width to depth, the side slope θ of the breach, and the parameter η = (V/D³)(k/√(gD)), where V = lake volume, D = lake depth, and g is the acceleration due to gravity. Calculations show that peak discharge Qp depends weakly on lake shape, r and θ, but strongly on η, which is the product of a dimensionless lake volume and a dimensionless erosion rate. Qp(η) takes asymptotically distinct forms depending on whether η ≪ 1 or η ≫ 1. Theoretical predictions agree well with data from dam failures for which k could be reasonably estimated. The analysis provides a rapid and in many cases graphical way to estimate plausible values of Qp at the breach.
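The dimensionless parameter defined above is straightforward to evaluate; the sketch below computes it for a hypothetical lake and breach erosion rate (values invented for illustration, and the asymptotic discharge formulas themselves are not reproduced here).

import numpy as np

def eta(volume, depth, k, g=9.81):
    # eta = (V / D^3) * (k / sqrt(g * D)): a dimensionless lake volume
    # multiplied by a dimensionless erosion (downcutting) rate
    return (volume / depth**3) * (k / np.sqrt(g * depth))

# hypothetical lake: 1e7 m^3 volume, 20 m deep, breach downcutting at 10 m/h
print(f"eta = {eta(volume=1.0e7, depth=20.0, k=10.0 / 3600.0):.3f}")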
Matott, L Shawn; Jiang, Zhengzheng; Rabideau, Alan J; Allen-King, Richelle M
2015-01-01
Numerous isotherm expressions have been developed for describing sorption of hydrophobic organic compounds (HOCs), including "dual-mode" approaches that combine nonlinear behavior with a linear partitioning component. Choosing among these alternative expressions for describing a given dataset is an important task that can significantly influence subsequent transport modeling and/or mechanistic interpretation. In this study, a series of numerical experiments were undertaken to identify "best-in-class" isotherms by refitting 10 alternative models to a suite of 13 previously published literature datasets. The corrected Akaike Information Criterion (AICc) was used for ranking these alternative fits and distinguishing between plausible and implausible isotherms for each dataset. The occurrence of multiple plausible isotherms was inversely correlated with dataset "richness", such that datasets with fewer observations and/or a narrow range of aqueous concentrations resulted in a greater number of plausible isotherms. Overall, only the Polanyi-partition dual-mode isotherm was classified as "plausible" across all 13 of the considered datasets, indicating substantial statistical support consistent with current advances in sorption theory. However, these findings are predicated on the use of the AICc measure as an unbiased ranking metric and the adoption of a subjective, but defensible, threshold for separating plausible and implausible isotherms. Copyright © 2015 Elsevier B.V. All rights reserved.
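The AICc ranking described above can be sketched as follows; the residual sums of squares and parameter counts are invented for illustration, and the Akaike weights shown are a generic way to compare fits rather than the paper's specific plausibility threshold.

import numpy as np

def aicc(rss, n, k):
    # corrected Akaike Information Criterion for a least-squares fit with
    # n observations, k fitted parameters, and residual sum of squares rss
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

def rank_isotherms(fits, n):
    # fits: dict of isotherm name -> (rss, number of parameters)
    # returns (AICc, name, Akaike weight) tuples sorted from best to worst
    scores = {name: aicc(rss, n=n, k=k) for name, (rss, k) in fits.items()}
    best = min(scores.values())
    raw = {name: np.exp(-0.5 * (s - best)) for name, s in scores.items()}
    total = sum(raw.values())
    return sorted((s, name, raw[name] / total) for name, s in scores.items())

# hypothetical residuals for three isotherms fitted to one 30-point dataset
fits = {"Freundlich": (0.80, 2), "Langmuir": (0.95, 2), "Polanyi-partition": (0.55, 4)}
for score, name, weight in rank_isotherms(fits, n=30):
    print(f"{name:18s} AICc = {score:7.2f}  weight = {weight:.2f}")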
Image-Based Reverse Engineering and Visual Prototyping of Woven Cloth.
Schroder, Kai; Zinke, Arno; Klein, Reinhard
2015-02-01
Realistic visualization of cloth has many applications in computer graphics. An ongoing research problem is how to best represent and capture cloth models, specifically when considering computer aided design of cloth. Previous methods produce highly realistic images, however, they are either difficult to edit or require the measurement of large databases to capture all variations of a cloth sample. We propose a pipeline to reverse engineer cloth and estimate a parametrized cloth model from a single image. We introduce a geometric yarn model, integrating state-of-the-art textile research. We present an automatic analysis approach to estimate yarn paths, yarn widths, their variation and a weave pattern. Several examples demonstrate that we are able to model the appearance of the original cloth sample. Properties derived from the input image give a physically plausible basis that is fully editable using a few intuitive parameters.
Heck, Daniel W; Hilbig, Benjamin E; Moshagen, Morten
2017-08-01
Decision strategies explain how people integrate multiple sources of information to make probabilistic inferences. In the past decade, increasingly sophisticated methods have been developed to determine which strategy explains decision behavior best. We extend these efforts to test psychologically more plausible models (i.e., strategies), including a new, probabilistic version of the take-the-best (TTB) heuristic that implements a rank order of error probabilities based on sequential processing. Within a coherent statistical framework, deterministic and probabilistic versions of TTB and other strategies can directly be compared using model selection by minimum description length or the Bayes factor. In an experiment with inferences from given information, only three of 104 participants were best described by the psychologically plausible, probabilistic version of TTB. Similar as in previous studies, most participants were classified as users of weighted-additive, a strategy that integrates all available information and approximates rational decisions. Copyright © 2017 Elsevier Inc. All rights reserved.
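A sketch of the take-the-best heuristic discussed above follows, together with a probabilistic variant in which the decision at the first discriminating cue is executed with a rank-ordered error probability; the latter is an illustrative assumption standing in for the paper's model, and the cue values are invented.

import random

def take_the_best(cues_a, cues_b, validity_order):
    # deterministic TTB: search cues from most to least valid and decide on the
    # first cue that discriminates between the two options; otherwise guess
    for cue in validity_order:
        if cues_a[cue] != cues_b[cue]:
            return "A" if cues_a[cue] > cues_b[cue] else "B"
    return random.choice(["A", "B"])

def probabilistic_ttb(cues_a, cues_b, validity_order, error_probs):
    # probabilistic variant: the decision at the first discriminating cue is
    # flipped with a rank-dependent error probability
    for cue, eps in zip(validity_order, error_probs):
        if cues_a[cue] != cues_b[cue]:
            choice = "A" if cues_a[cue] > cues_b[cue] else "B"
            return ("B" if choice == "A" else "A") if random.random() < eps else choice
    return random.choice(["A", "B"])

# which of two options scores higher on the criterion? (cue values invented)
a = {"capital": 1, "airport": 1, "university": 1}
b = {"capital": 0, "airport": 1, "university": 1}
order = ["capital", "airport", "university"]
print(take_the_best(a, b, order))                          # -> 'A'
print(probabilistic_ttb(a, b, order, [0.05, 0.10, 0.20]))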
Anticipatory Cognitive Systems: a Theoretical Model
NASA Astrophysics Data System (ADS)
Terenzi, Graziano
This paper deals with the problem of understanding anticipation in biological and cognitive systems. It is argued that a physical theory can be considered as biologically plausible only if it incorporates the ability to describe systems which exhibit anticipatory behaviors. The paper introduces a cognitive level description of anticipation and provides a simple theoretical characterization of anticipatory systems on this level. Specifically, a simple model of a formal anticipatory neuron and a model (i.e. the τ-mirror architecture) of an anticipatory neural network which is based on the former are introduced and discussed. The basic feature of this architecture is that a part of the network learns to represent the behavior of the other part over time, thus constructing an implicit model of its own functioning. As a consequence, the network is capable of self-representation; anticipation, on a macroscopic level, is nothing but a consequence of anticipation on a microscopic level. Some learning algorithms are also discussed together with related experimental tasks and possible integrations. The outcome of the paper is a formal characterization of anticipation in cognitive systems which aims at being incorporated in a comprehensive and more general physical theory.
Reconstruction of the static magnetic field of a magnetron
NASA Astrophysics Data System (ADS)
Krüger, Dennis; Köhn, Kevin; Gallian, Sara; Brinkmann, Ralf Peter
2018-06-01
The simulation of magnetron discharges requires a quantitatively correct mathematical model of the magnetic field structure. This study presents a method to construct such a model on the basis of a spatially restricted set of experimental data and a plausible a priori assumption on the magnetic field configuration. The example in focus is that of a planar circular magnetron. The experimental data are Hall probe measurements of the magnetic flux density in an accessible region above the magnetron plane [P. D. Machura et al., Plasma Sources Sci. Technol. 23, 065043 (2014)]. The a priori assumption reflects the actual design of the device, and it takes the magnetic field as emerging from a center magnet of strength m_C and vertical position d_C and a ring magnet of strength m_R, vertical position d_R, and radius R. An analytical representation of the assumed field configuration can be formulated in terms of generalized hypergeometric functions. Fitting the ansatz to the experimental data with a least-squares method results in a fully specified analytical field model that agrees well with the data inside the accessible region and, moreover, is physically plausible in the regions outside of it. The outcome proves superior to the result of an alternative approach which starts from a multimode solution of the vacuum field problem formulated in terms of polar Bessel functions and vertical exponentials. As a first application of the obtained field model, typical electron and ion Larmor radii and the gradient and curvature drift velocities of the electron guiding center are calculated.
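The fitting procedure described above can be sketched as follows; here the center and ring magnets are approximated by vertical point dipoles (an assumption standing in for the closed-form hypergeometric representation used in the paper), and the geometry, moments, and synthetic data are invented for illustration.

import numpy as np
from scipy.optimize import least_squares

MU0_4PI = 1e-7   # mu0 / (4*pi)

def dipole_bz(obs, src, moment):
    # vertical field component of a vertically oriented point dipole at src
    d = obs - src
    r = np.linalg.norm(d, axis=-1)
    return MU0_4PI * moment * (3.0 * (d[..., 2] / r) ** 2 - 1.0) / r**3

def model_bz(params, obs, n_ring=36):
    # center magnet plus ring magnet, the ring discretised into n_ring dipoles
    m_c, d_c, m_r, d_r, radius = params
    bz = dipole_bz(obs, np.array([0.0, 0.0, d_c]), m_c)
    for phi in np.linspace(0.0, 2.0 * np.pi, n_ring, endpoint=False):
        src = np.array([radius * np.cos(phi), radius * np.sin(phi), d_r])
        bz += dipole_bz(obs, src, m_r / n_ring)
    return bz

def fit_field(obs_points, bz_measured, p0):
    # least-squares fit of (m_C, d_C, m_R, d_R, R) to Hall-probe data
    return least_squares(lambda p: model_bz(p, obs_points) - bz_measured, p0)

# synthetic "measurements" along a line 30 mm above the target plane (illustrative)
x = np.linspace(-0.05, 0.05, 11)
obs = np.array([[xi, 0.0, 0.03] for xi in x])
truth = (0.2, -0.02, -0.3, -0.02, 0.04)          # m_C, d_C, m_R, d_R, R
data = model_bz(truth, obs)
print(fit_field(obs, data, p0=(0.1, -0.01, -0.1, -0.01, 0.03)).x.round(4))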
n-dimensional isotropic Finch-Skea stars
NASA Astrophysics Data System (ADS)
Chilambwe, Brian; Hansraj, Sudan
2015-02-01
We study the impact of dimension on the physical properties of the Finch-Skea astrophysical model. It is shown that a positive definite, monotonically decreasing pressure and density are evident. A decrease in stellar radius emerges as the order of the dimension increases. This is accompanied by a corresponding increase in energy density. The model continues to display the necessary qualitative features inherent in the 4-dimensional Finch-Skea star and the conformity to the Walecka theory is preserved under dimensional increase. The causality condition is always satisfied for all dimensions considered resulting in the proposed models demonstrating a subluminal sound speed throughout the interior of the distribution. Moreover, the pressure and density decrease monotonically outwards from the centre and a pressure-free hypersurface exists demarcating the boundary of the perfect-fluid sphere. Since the study of the physical conditions is performed graphically, it is necessary to specify certain constants in the model. Reasonable values for such constants are arrived at on examining the behaviour of the model at the centre and demanding the satisfaction of all elementary conditions for physical plausibility. Finally two constants of integration are settled on matching of our solutions with the appropriate Schwarzschild-Tangherlini exterior metrics. Furthermore, the solution admits a barotropic equation of state despite the higher dimension. The compactification parameter as well as the density variation parameter are also computed. The models satisfy the weak, strong and dominant energy conditions in the interior of the stellar configuration.
Triton's surface-atmosphere energy balance
Stansberry, J.A.; Yelle, R.V.; Lunine, J.I.; McEwen, A.S.
1992-01-01
We explore the energetics of Triton's surface-atmosphere system using a model that includes the turbulent transfer of sensible heat as well as insolation, reradiation, and latent heat transport. The model relies on a 1° by 1° resolution hemispheric bolometric albedo map of Triton for determining the atmospheric temperature, the N2 frost emissivity, and the temperatures of unfrosted portions of the surface consistent with a frost temperature of ≈38 K. For a physically plausible range of heat transfer coefficients, we find that the atmospheric temperature roughly 1 km above the surface is approximately 1 to 3 K hotter than the surface. Atmospheric temperatures of 48 K suggested by early analysis of radio occultation data cannot be obtained for plausible values of the heat transfer coefficients. Our calculations indicate that Triton's N2 frosts must have an emissivity well below unity in order to have a temperature of ≈38 K, consistent with previous results. We also find that convection over small hot spots does not significantly cool them off, so they may be able to act as continuous sources of buoyancy for convective plumes, but we have not explored whether the convection is vigorous enough to entrain particulate matter, thereby forming a dust devil. Our elevated atmospheric temperatures make geyser-driven plumes with initial upward velocities ≈10 m s⁻¹ stagnate in the lower atmosphere. These "wimpy" plumes provide a possible explanation for Triton's "wind streaks."
NASA Astrophysics Data System (ADS)
Koch, Jonas; Nowak, Wolfgang
2013-04-01
At many hazardous waste sites and accidental spills, dense non-aqueous phase liquids (DNAPLs) such as TCE, PCE, or TCA have been released into the subsurface. Once a DNAPL is released into the subsurface, it serves as a persistent source of dissolved-phase contamination. In chronological order, the DNAPL migrates through the porous medium and penetrates the aquifer, it forms a complex pattern of immobile DNAPL saturation, it dissolves into the groundwater and forms a contaminant plume, and it slowly depletes and bio-degrades in the long-term. In industrial countries the number of such contaminated sites is tremendously high, to the point that a ranking from most risky to least risky is advisable. Such a ranking helps to decide whether a site needs to be remediated or may be left to natural attenuation. Both the ranking and the designing of proper remediation or monitoring strategies require a good understanding of the relevant physical processes and their inherent uncertainty. To this end, we conceptualize a probabilistic simulation framework that estimates probability density functions of mass discharge, source depletion time, and critical concentration values at crucial target locations. Furthermore, it supports the inference of contaminant source architectures from arbitrary site data. As an essential novelty, the mutual dependencies of the key parameters and interacting physical processes are taken into account throughout the whole simulation. In an uncertain and heterogeneous subsurface setting, we identify three key parameter fields: the local velocities, the hydraulic permeabilities and the DNAPL phase saturations. Obviously, these parameters depend on each other during DNAPL infiltration, dissolution and depletion. In order to highlight the importance of these mutual dependencies and interactions, we present results of several model setups where we vary the physical and stochastic dependencies of the input parameters and simulated processes. Under these changes, the probability density functions demonstrate strong statistical shifts in their expected values and in their uncertainty. Considering the uncertainties of all key parameters but neglecting their interactions overestimates the output uncertainty. However, consistently using all available physical knowledge when assigning input parameters and simulating all relevant interactions of the involved processes reduces the output uncertainty significantly, back down to useful and plausible ranges. When using our framework in an inverse setting, omitting a parameter dependency within a crucial physical process would lead to physically meaningless identified parameters. Thus, we conclude that the additional complexity we propose is both necessary and adequate. Overall, our framework provides a tool for reliable and plausible prediction, risk assessment, and model based decision support for DNAPL contaminated sites.
NASA Astrophysics Data System (ADS)
Pyt'ev, Yu. P.
2018-01-01
A mathematical formalism for subjective modeling is presented, based on modeling of uncertainty that reflects both the unreliability of subjective information and the fuzziness common to its content. The model of subjective judgments on the values of an unknown parameter x ∈ X of the model M(x) of a research object is defined by the researcher-modeler as a space (X, P(X), Pl, Bel) with plausibility (Pl) and believability (Bel) measures, where x̃ is an uncertain element taking values in X that models the researcher-modeler's uncertain propositions about the unknown x ∈ X. The measures Pl and Bel model the modalities of the researcher-modeler's subjective judgments on the validity of each x ∈ X: the value of Pl(x̃ = x) determines how relatively plausible, in his opinion, the equality x̃ = x is, while the value of Bel(x̃ = x) determines how much the equality x̃ = x should be relatively believed in. Versions of plausibility and believability measures and of pl- and bel-integrals that inherit some traits of probabilities and psychophysics, and that take into account the interests of groups of researcher-modelers, are considered. It is shown that the mathematical formalism of subjective modeling, unlike "standard" mathematical modeling, enables a researcher-modeler to model both precise formalized knowledge and non-formalized unreliable knowledge, from complete ignorance to precise knowledge of the model of a research object, and to calculate the relative plausibilities and believabilities of any features of a research object specified by its subjective model M(x̃). If observation data on the research object are available, it further enables him to estimate the adequacy of the subjective model to the research objective, to correct it by combining subjective ideas with the observation data after testing their consistency, and, finally, to empirically recover the model of the research object.
NASA Astrophysics Data System (ADS)
Shi, Bingren
2010-10-01
The tokamak pedestal density structure is generally studied using a diffusion-dominant model. Recent investigations (Stacey and Groebner 2009 Phys. Plasmas 16 102504) from first-principle-based physics have shown the plausible existence of a large inward convection in the pedestal region. The diffusion-convection equation, with rapidly varying convection and diffusion coefficients in the near-edge region and a model of puffing-recycling neutral particles, is studied in this paper. A peculiar property of its solution in the large-convection case is that the pedestal width of the density profile, qualitatively different from the diffusion-dominant case, depends mainly on the width of the inward convection and only weakly on the neutral penetration length and its injection position.
A brief overview of compartmental modeling for intake of plutonium via wounds
Poudel, Deepesh; Klumpp, John Allan; Waters, Tom L.; ...
2017-06-07
Here, the aim of this study is to present several approaches that have been used to model the behavior of radioactive materials (specifically Pu) in contaminated wounds. We also review some attempts by the health physics community to validate and revise the National Council on Radiation Protection and Measurements (NCRP) 156 biokinetic model for wounds, and present some general recommendations based on the review. Modeling of intake via the wound pathway is complicated because of a large array of wound characteristics (e.g. solubility and chemistry of the material, type and depth of the tissue injury, anatomical location of injury). Moreover, because a majority of the documented wound cases in humans are medically treated (excised or treated with chelation), the data to develop biokinetic models for unperturbed wound exposures are limited. Since the NCRP wound model was largely developed from animal data, it is important to continue to validate and improve the model using human data whenever plausible.
Exploring the entanglement of personal epistemologies and emotions in students' thinking
NASA Astrophysics Data System (ADS)
Gupta, Ayush; Elby, Andrew; Danielak, Brian A.
2018-06-01
Evidence from psychology, cognitive science, and neuroscience suggests that cognition and emotions are coupled. Education researchers have also documented correlations between emotions (such as joy, anxiety, fear, curiosity, boredom) and academic performance. Nonetheless, most research on students' reasoning and conceptual change within the learning sciences and physics and science education research has not attended to the role of learners' emotions in describing or modeling the fine timescale dynamics of their conceptual reasoning. The few studies that integrate emotions into models of learners' cognition have mostly done so at a coarse grain size. In this study, toward the long-term goal of incorporating emotions into models of in-the-moment cognitive dynamics, we present a case study of Judy, an undergraduate electrical engineering and physics major. We show that shifts in the intensity of a fine-grained aspect of Judy's emotions, her annoyance at conceptual homework problems, co-occur with shifts in her epistemological stance toward differentiating knowledge about and the practical utility of real circuits and idealized circuit models. We then argue for the plausibility of a cognitive model in which Judy's emotions and epistemological stances mutually affect each other. We end with discussions on how models of learners' cognition that incorporate their emotions are generative for instructional purposes and research on learning.
NASA Astrophysics Data System (ADS)
Felder, Guido; Zischg, Andreas; Weingartner, Rolf
2015-04-01
Estimating peak discharges with very low probabilities is still accompanied by large uncertainties. Common estimation methods are usually based on extreme value statistics applied to observed time series or to hydrological model outputs. However, such methods assume the system to be stationary and do not specifically consider non-stationary effects. Observed time series may exclude events where peak discharge is damped by retention effects, as this process does not occur until specific thresholds, possibly beyond those of the highest measured event, are exceeded. Hydrological models can be complemented and parameterized with non-linear functions. However, in such cases calibration depends on observed data and non-stationary behaviour is not deterministically calculated. Our study discusses the option of considering retention effects on extreme peak discharges by coupling hydrological and hydraulic models. This possibility is tested by forcing the semi-distributed deterministic hydrological model PREVAH with randomly generated, physically plausible extreme precipitation patterns. The resulting hydrographs are then used to force the hydraulic model BASEMENT-ETH (riverbed in 1D, potential inundation areas in 2D). The procedure ensures that the estimated extreme peak discharge does not exceed the physical limit given by the riverbed capacity and that the dampening effect of inundation processes on peak discharge is considered.
Visell, Yon
2015-04-01
This paper proposes a fast, physically accurate method for synthesizing multimodal, acoustic and haptic, signatures of distributed fracture in quasi-brittle heterogeneous materials, such as wood, granular media, or other fiber composites. Fracture processes in these materials are challenging to simulate with existing methods, due to the prevalence of large numbers of disordered, quasi-random spatial degrees of freedom, representing the complex physical state of a sample over the geometric volume of interest. Here, I develop an algorithm for simulating such processes, building on a class of statistical lattice models of fracture that have been widely investigated in the physics literature. This algorithm is enabled through a recently published mathematical construction based on the inverse transform method of random number sampling. It yields a purely time domain stochastic jump process representing stress fluctuations in the medium. The latter can be readily extended by a mean field approximation that captures the averaged constitutive (stress-strain) behavior of the material. Numerical simulations and interactive examples demonstrate the ability of these algorithms to generate physically plausible acoustic and haptic signatures of fracture in complex, natural materials interactively at audio sampling rates.
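To make the sampling idea above concrete, the following is a minimal, illustrative Python sketch (not the author's published algorithm) of generating a piecewise-constant jump process by inverse-transform sampling: exponential waiting times between micro-fracture events and power-law distributed stress drops. The event rate, exponent, and size limits are assumed values chosen only for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_power_law(n, alpha=2.0, x_min=1e-3, x_max=1.0):
    """Inverse-transform sampling of a truncated power law p(x) ~ x**(-alpha)."""
    u = rng.random(n)
    a = 1.0 - alpha
    return (x_min**a + u * (x_max**a - x_min**a)) ** (1.0 / a)

# Piecewise-constant "jump process": exponential waiting times between
# fracture events, power-law distributed stress drops (all values assumed).
n_events = 500
waits = rng.exponential(scale=1.0 / 200.0, size=n_events)  # mean rate ~200 events/s
times = np.cumsum(waits)                                   # event times
drops = sample_power_law(n_events)                         # event magnitudes
stress = np.cumsum(drops)                                  # accumulated released stress

# (times, stress) is a time-domain signal; an audio-rate crackling waveform
# could be synthesized from it by differentiating and filtering.
print(f"{n_events} events over {times[-1]:.2f} s, "
      f"total released stress {stress[-1]:.1f} (arb. units)")
```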
Families of Plausible Solutions to the Puzzle of Boyajian’s Star
NASA Astrophysics Data System (ADS)
Wright, Jason T.; Sigurdsson, Steinn
2016-09-01
Good explanations for the unusual light curve of Boyajian's Star have been hard to find. Recent results by Montet & Simon lend strength and plausibility to the conclusion of Schaefer that, in addition to short-term dimmings, the star also experiences large, secular decreases in brightness on decadal timescales. This, combined with a lack of long-wavelength excess in the star's spectral energy distribution, strongly constrains scenarios involving circumstellar material, including hypotheses invoking a spherical cloud of artifacts. We show that the timings of the deepest dimmings appear consistent with being randomly distributed, and that the star's reddening and narrow sodium absorption are consistent with the total, long-term dimming observed. Following Montet & Simon's encouragement to generate alternative hypotheses, we attempt to circumscribe the space of possible explanations with a range of plausibilities, including: a cloud in the outer solar system, structure in the interstellar medium (ISM), natural and artificial material orbiting Boyajian's Star, an intervening object with a large disk, and variations in Boyajian's Star itself. We find the ISM and intervening disk models more plausible than the other natural models.
Esteghamati, Alireza; Zandieh, Ali; Khalilzadeh, Omid; Morteza, Afsaneh; Meysamie, Alipasha; Nakhjavani, Manouchehr; Gouya, Mohammad Mehdi
2010-10-01
Metabolic syndrome (MetS), manifested by insulin resistance, dyslipidemia, central obesity, and hypertension, is conceived to be associated with hyperleptinemia and physical activity. The aim of this study was to elucidate the factors underlying components of MetS and also to test the suitability of leptin and physical activity as additional components of this syndrome. Data of individuals without a history of diabetes mellitus, aged 25-64 years, from the third national surveillance of risk factors of non-communicable diseases (SuRFNCD-2007), were analyzed. Performing factor analysis on waist circumference, homeostasis model assessment of insulin resistance, systolic blood pressure, triglycerides (TG) and high-density lipoprotein cholesterol (HDL-C) led to extraction of two factors which explained around 59.0% of the total variance in both genders. When TG and HDL-C were replaced by the TG to HDL-C ratio, a single factor was obtained. In contrast to physical activity, addition of leptin was consistent with the one-factor structure of MetS and improved the ability of the suggested models to identify obesity (BMI ≥ 30 kg/m², P < 0.01), using receiver operating characteristic (ROC) curve analysis. In general, physical activity loaded on the first identified factor. Our study shows that a one-factor underlying structure of MetS is also plausible and that the inclusion of leptin does not interfere with this structure. Further, this study suggests that physical activity influences MetS components via modulation of the main underlying pathophysiologic pathway of this syndrome.
Understanding asteroid collisional history through experimental and numerical studies
NASA Technical Reports Server (NTRS)
Davis, Donald R.; Ryan, Eileen V.; Weidenschilling, S. J.
1991-01-01
Asteroids can lose angular momentum due to the so-called splash effect, the analog of the drain effect for cratering impacts. A numerical code incorporating the splash effect was applied to study the simultaneous evolution of asteroid sizes and spins. Results are presented on the spin changes of asteroids due to the various physical effects incorporated in the described model. The goal was to understand the interplay between the evolution of sizes and spins over a wide and plausible range of model parameters. A single starting population was used for both the size distribution and the spin distribution of asteroids, and the changes in the spins were calculated over solar system history for different model parameters. It is shown that there is a strong coupling between the size and spin evolution, and that the observed relative spindown of asteroids of approximately 100 km diameter is likely to be the result of the angular momentum splash effect.
Evolution of sparsity and modularity in a model of protein allostery
NASA Astrophysics Data System (ADS)
Hemery, Mathieu; Rivoire, Olivier
2015-04-01
The sequence of a protein is not only constrained by its physical and biochemical properties under current selection, but also by features of its past evolutionary history. Understanding the extent and the form that these evolutionary constraints may take is important to interpret the information in protein sequences. To study this problem, we introduce a simple but physical model of protein evolution where selection targets allostery, the functional coupling of distal sites on protein surfaces. This model shows how the geometrical organization of couplings between amino acids within a protein structure can depend crucially on its evolutionary history. In particular, two scenarios are found to generate a spatial concentration of functional constraints: high mutation rates and fluctuating selective pressures. This second scenario offers a plausible explanation for the high tolerance of natural proteins to mutations and for the spatial organization of their least tolerant amino acids, as revealed by sequence analysis and mutagenesis experiments. It also implies a faculty to adapt to new selective pressures that is consistent with observations. The model illustrates how several independent functional modules may emerge within the same protein structure, depending on the nature of past environmental fluctuations. Our model thus relates the evolutionary history of proteins to the geometry of their functional constraints, with implications for decoding and engineering protein sequences.
NASA Astrophysics Data System (ADS)
Cohen, Ariel; Galili, Igal
2001-02-01
This paper discusses the origin of the concept of sky. It is shown to emerge from one's experience of the visual perception of nonluminous objects observed in/through the atmosphere during the daytime. The physical concept of visibility in the atmospheric environment underpins the perception of a flattened sky dome constructed by our mind. In addition, the Moon illusion receives a plausible explanation, and both topics become appropriate for a conceptually oriented introductory course in physics and/or astronomy.
Black Hole Mergers as Probes of Structure Formation
NASA Technical Reports Server (NTRS)
Alicea-Munoz, E.; Miller, M. Coleman
2008-01-01
Intense structure formation and reionization occur at high redshift, yet there is currently little observational information about this very important epoch. Observations of gravitational waves from massive black hole (MBH) mergers can provide us with important clues about the formation of structures in the early universe. Past efforts have been limited to calculating merger rates using different models in which many assumptions are made about the specific values of physical parameters of the mergers, resulting in merger rate estimates that span a very wide range (0.1 to 10⁴ mergers/year). Here we develop a semi-analytical, phenomenological model of MBH mergers that includes plausible combinations of several physical parameters, which we then turn around to determine how well observations with the Laser Interferometer Space Antenna (LISA) will be able to enhance our understanding of the universe during the critical z ≈ 5-30 structure formation era. We do this by generating synthetic LISA observable data (total BH mass, BH mass ratio, redshift, merger rates), which are then analyzed using a Markov Chain Monte Carlo method. This allows us to constrain the physical parameters of the mergers. We find that our methodology works well at estimating merger parameters, consistently giving results within 1σ of the input parameter values. We also discover that the number of merger events is a key discriminant among models. This helps our method be robust against observational uncertainties. Our approach, which at this stage constitutes a proof of principle, can be readily extended to physical models and to more general problems in cosmology and gravitational wave astrophysics.
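As an illustration of the kind of analysis described above, here is a minimal Metropolis-Hastings sketch that recovers a single merger-rate parameter from a synthetic event count. It is not the authors' pipeline; the Poisson likelihood, flat prior, observation time, and "true" rate are all assumptions made only for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "observation": number of merger events over a mission lifetime,
# drawn from a Poisson model with an assumed true rate (illustrative numbers).
true_rate, t_obs = 30.0, 3.0            # mergers/yr, years
n_obs = rng.poisson(true_rate * t_obs)

def log_post(rate):
    """Log posterior: Poisson likelihood (up to a constant) with a flat prior."""
    if rate <= 0:
        return -np.inf
    return n_obs * np.log(rate * t_obs) - rate * t_obs

# Random-walk Metropolis sampler for the merger rate.
chain, rate = [], 10.0
for _ in range(20000):
    prop = rate + rng.normal(scale=2.0)
    if np.log(rng.random()) < log_post(prop) - log_post(rate):
        rate = prop
    chain.append(rate)

burned = np.array(chain[5000:])         # discard burn-in
print(f"posterior rate: {burned.mean():.1f} +/- {burned.std():.1f} per year")
```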
Space Science 2001: Some Problems with Artificial Gravity.
ERIC Educational Resources Information Center
Fisher, Nick
2001-01-01
Many pupils will be familiar with the ideas in "2001: A Space Odyssey" but few will have considered the physics involved. Simple calculations show that some of the effects depicted in the Space Station and on the Discovery are plausible but others would be impractical. (Author/ASK)
NASA Astrophysics Data System (ADS)
Lombardi, D.; Sinatra, G. M.
2013-12-01
Critical evaluation and plausibility reappraisal of scientific explanations have been underemphasized in many science classrooms (NRC, 2012). Deep science learning demands that students increase their ability to critically evaluate the quality of scientific knowledge, weigh alternative explanations, and explicitly reappraise their plausibility judgments. This lack of instruction about critical evaluation and plausibility reappraisal has, in part, contributed to diminished understanding about complex and controversial topics, such as global climate change. The Model-Evidence Link (MEL) diagram (originally developed by researchers at Rutgers University under an NSF-supported project; Chinn & Buckland, 2012) is an instructional scaffold that prompts students to critically evaluate alternative explanations. We recently developed a climate change MEL and found that the students who used the MEL experienced a significant shift in their plausibility judgments toward the scientifically accepted model of human-induced climate change. Using the MEL for instruction also resulted in conceptual change about the causes of global warming that reflected greater understanding of fundamental scientific principles. Furthermore, students sustained this conceptual change six months after MEL instruction (Lombardi, Sinatra, & Nussbaum, 2013). This presentation will discuss recent educational research that supports use of the MEL to promote critical evaluation, plausibility reappraisal, and conceptual change, and also how the MEL may be particularly effective for learning about global climate change and other socio-scientific topics. Such instruction to develop these fundamental thinking skills (e.g., critical evaluation and plausibility reappraisal) is demanded by both the Next Generation Science Standards (Achieve, 2013) and the Common Core State Standards for English Language Arts and Mathematics (CCSS Initiative-ELA, 2010; CCSS Initiative-Math, 2010), as well as a society that is equipped to deal with challenges in a way that is beneficial to our national and global community.
Physical basis for a thick ice shelf in the Arctic Basin during the penultimate glacial maximum
NASA Astrophysics Data System (ADS)
Gasson, E.; DeConto, R.; Pollard, D.; Clark, C.
2017-12-01
A thick ice shelf covering the Arctic Ocean during glacial stages was discussed in a number of publications in the 1970s. Although this hypothesis has received intermittent attention, the emergence of new geophysical evidence for ice grounding in water depths of up to 1 km in the central Arctic Basin has renewed interest into the physical plausibility and significance of an Arctic ice shelf. Various ice shelf configurations have been proposed, from an ice shelf restricted to the Amerasian Basin (the `minimum model') to a complete ice shelf cover in the Arctic. Attempts to simulate an Arctic ice shelf have been limited. Here we use a hybrid ice sheet / shelf model that has been widely applied to the Antarctic ice sheet to explore the potential for thick ice shelves forming in the Arctic Basin. We use a climate forcing appropriate for MIS6, the penultimate glacial maximum. We perform a number of experiments testing different ice sheet / shelf configurations and compare the model results with ice grounding locations and inferred flow directions. Finally, we comment on the potential significance of an Arctic ice shelf to the global glacial climate system.
Development of a mathematical model of the human cardiovascular system: An educational perspective
NASA Astrophysics Data System (ADS)
Johnson, Bruce Allen
A mathematical model of the human cardiovascular system will be a useful educational tool in biological sciences and bioengineering classrooms. The goal of this project is to develop a mathematical model of the human cardiovascular system that responds appropriately to variations of significant physical variables. Model development is based on standard fluid statics and dynamics principles, pressure-volume characteristics of the cardiac cycle, and compliant behavior of blood vessels. Cardiac cycle phases provide the physical and logical model structure, and Boolean algebra links model sections. The model is implemented using VisSim, a highly intuitive and easily learned block diagram modeling software package. Comparisons of model predictions of key variables to published values suggest that the model reasonably approximates expected behavior of those variables. The model responds plausibly to variations of independent variables. Projected usefulness of the model as an educational tool is threefold: independent variables which determine heart function may be easily varied to observe cause and effect; the model is used in an interactive setting; and the relationship of governing equations to model behavior is readily viewable and intuitive. Future use of this model in classrooms may give a more reasonable indication of its value as an educational tool.* *This dissertation includes a CD that is multimedia (contains text and other applications that are not available in a printed format). The CD requires the following applications: CorelPhotoHouse, CorelWordPerfect, VisSinViewer (included on CD), Internet access.
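The abstract does not give the governing equations, so the following is a generic two-element Windkessel sketch in Python, illustrating the kind of compliance-plus-resistance relationship that lumped cardiovascular models are built on. The compliance, resistance, heart period, and inflow waveform are assumed values, not those of the dissertation's VisSim model.

```python
import numpy as np

# Two-element Windkessel: arterial pressure P driven by pulsatile inflow Q(t),
# with compliance C and peripheral resistance R (illustrative values only).
C, R = 1.2, 1.0            # mL/mmHg, mmHg*s/mL
T, sys_frac = 0.8, 0.35    # cardiac period (s), fraction spent in systole

def inflow(t):
    """Half-sine ejection during systole, zero flow during diastole."""
    phase = t % T
    return 400.0 * np.sin(np.pi * phase / (sys_frac * T)) if phase < sys_frac * T else 0.0

dt, P = 1e-3, 80.0         # time step (s), initial pressure (mmHg)
pressures = []
for step in range(int(10 * T / dt)):       # simulate ten beats
    t = step * dt
    dP = (inflow(t) - P / R) / C           # volume balance in the compliant "chamber"
    P += dP * dt
    pressures.append(P)

last_beat = pressures[-int(T / dt):]
print(f"systolic ~ {max(last_beat):.0f} mmHg, diastolic ~ {min(last_beat):.0f} mmHg")
```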
A two component model for thermal emission from organic grains in Comet Halley
NASA Technical Reports Server (NTRS)
Chyba, Christopher; Sagan, Carl
1988-01-01
Observations of Comet Halley in the near infrared reveal a triple-peaked emission feature near 3.4 micrometers, characteristic of C-H stretching in hydrocarbons. A variety of plausible cometary materials exhibit these features, including the organic residue of irradiated candidate cometary ices (such as the residue of irradiated methane ice clathrate) and polycyclic aromatic hydrocarbons. Indeed, any molecule containing -CH3 and -CH2 groups will emit at 3.4 micrometers under suitable conditions. Therefore tentative identifications must rest on additional evidence, including a plausible account of the origins of the organic material, a plausible model for the infrared emission of this material, and a demonstration that this conjunction of material and model not only matches the 3 to 4 micrometer spectrum, but also does not yield additional emission features where none is observed. In the case of the residue of irradiated low-occupancy methane ice clathrate, it is argued that the laboratory synthesis of the organic residue well simulates the radiation processing experienced by Comet Halley.
Pairwise Force SPH Model for Real-Time Multi-Interaction Applications.
Yang, Tao; Martin, Ralph R; Lin, Ming C; Chang, Jian; Hu, Shi-Min
2017-10-01
In this paper, we present a novel pairwise-force smoothed particle hydrodynamics (PF-SPH) model to enable simulation of various interactions at interfaces in real time. Realistic capture of interactions at interfaces is a challenging problem for SPH-based simulations, especially for scenarios involving multiple interactions at different interfaces. Our PF-SPH model can readily handle multiple types of interactions simultaneously in a single simulation; its basis is to use a larger support radius than that used in standard SPH. We adopt a novel anisotropic filtering term to further improve the performance of interaction forces. The proposed model is stable; furthermore, it avoids the particle clustering problem which commonly occurs at the free surface. We show how our model can be used to capture various interactions. We also consider the close connection between droplets and bubbles, and show how to animate bubbles rising in liquid as well as bubbles in air. Our method is versatile, physically plausible and easy-to-implement. Examples are provided to demonstrate the capabilities and effectiveness of our approach.
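A minimal sketch of the pairwise-force idea, assuming a cosine-shaped interaction kernel of the kind used in the pairwise-force SPH literature (short-range repulsion, longer-range attraction) evaluated over an enlarged support radius. It omits the anisotropic filtering term and all standard SPH pressure and viscosity forces, and the strength and radius values are illustrative only.

```python
import numpy as np

def pairwise_force(positions, s=1.0, h=0.1):
    """Sum a cosine-shaped pairwise interaction force over all particle pairs
    within an (enlarged) support radius h.  Positive kernel values act as
    short-range repulsion, negative values as longer-range attraction."""
    n = len(positions)
    forces = np.zeros_like(positions)
    for i in range(n):
        for j in range(i + 1, n):
            d = positions[i] - positions[j]
            r = np.linalg.norm(d)
            if 0.0 < r < h:
                f = s * np.cos(3.0 * np.pi * r / (2.0 * h)) * d / r
                forces[i] += f     # equal and opposite forces on the pair
                forces[j] -= f
    return forces

# Toy usage: a small 2D blob of particles.
pts = np.random.default_rng(2).random((50, 2)) * 0.2
print(pairwise_force(pts).shape)   # (50, 2) per-particle interface forces
```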
Biologically Plausible, Human-Scale Knowledge Representation.
Crawford, Eric; Gingerich, Matthew; Eliasmith, Chris
2016-05-01
Several approaches to implementing symbol-like representations in neurally plausible models have been proposed. These approaches include binding through synchrony (Shastri & Ajjanagadde, ), "mesh" binding (van der Velde & de Kamps, ), and conjunctive binding (Smolensky, ). Recent theoretical work has suggested that most of these methods will not scale well, that is, that they cannot encode structured representations using any of the tens of thousands of terms in the adult lexicon without making implausible resource assumptions. Here, we empirically demonstrate that the biologically plausible structured representations employed in the Semantic Pointer Architecture (SPA) approach to modeling cognition (Eliasmith, ) do scale appropriately. Specifically, we construct a spiking neural network of about 2.5 million neurons that employs semantic pointers to successfully encode and decode the main lexical relations in WordNet, which has over 100,000 terms. In addition, we show that the same representations can be employed to construct recursively structured sentences consisting of arbitrary WordNet concepts, while preserving the original lexical structure. We argue that these results suggest that semantic pointers are uniquely well-suited to providing a biologically plausible account of the structured representations that underwrite human cognition. Copyright © 2015 Cognitive Science Society, Inc.
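Semantic pointers bind structures with circular convolution, as in holographic reduced representations. The sketch below shows binding and approximate unbinding with NumPy FFTs; the dimensionality and the random-unit-vector stand-ins for lexical semantic pointers are assumptions, and no spiking-neuron implementation is attempted.

```python
import numpy as np

rng = np.random.default_rng(3)
D = 512                                   # vector dimensionality (assumed)

def sp(dim=D):
    """Random unit vector standing in for a semantic pointer."""
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def bind(a, b):
    """Circular convolution, the binding operation for holographic
    reduced representations, computed via the FFT."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def unbind(c, a):
    """Approximate inverse: bind with the involution of a."""
    a_inv = np.concatenate(([a[0]], a[1:][::-1]))
    return bind(c, a_inv)

dog, subject = sp(), sp()
sentence = bind(subject, dog)             # encode "subject = dog"
recovered = unbind(sentence, subject)
print(np.dot(recovered, dog))             # similarity near 1: dog is recovered
```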
Physical Simulation for Probabilistic Motion Tracking
2008-01-01
learn a low-dimensional embedding of the high-dimensional kinematic data and then attempt to solve the problem in this more manageable low...rotations and foot skate). Such artifacts can be attributed to the general lack of physically plausible priors [2] (that can account for static and/or...temporal priors of the form p(x_{f+1} | x_f) = N(x_f + γ_f, Σ) (where γ_f is a scaled velocity, learned or inferred), have also been proposed [13] and shown to
Perceptual Real-Time 2D-to-3D Conversion Using Cue Fusion.
Leimkuhler, Thomas; Kellnhofer, Petr; Ritschel, Tobias; Myszkowski, Karol; Seidel, Hans-Peter
2018-06-01
We propose a system to infer binocular disparity from a monocular video stream in real time. Different from classic reconstruction of physical depth in computer vision, we compute perceptually plausible disparity that is numerically inaccurate but results in a very similar overall depth impression, with plausible overall layout, sharp edges, fine details, and agreement between luminance and disparity. We use several simple monocular cues to estimate disparity maps and confidence maps of low spatial and temporal resolution in real time. These are complemented by spatially-varying, appearance-dependent and class-specific disparity prior maps, learned from example stereo images. Scene classification selects this prior at runtime. Fusion of prior and cues is done by means of robust MAP inference on a dense spatio-temporal conditional random field with high spatial and temporal resolution. Using normal distributions allows this in constant-time, parallel per-pixel work. We compare our approach to previous 2D-to-3D conversion systems in terms of different metrics, as well as a user study, and validate our notion of perceptually plausible disparity.
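The full method fuses cues via MAP inference on a dense spatio-temporal CRF; the toy sketch below illustrates only the simpler per-pixel, constant-time building block implied by the use of normal distributions, namely precision-weighted (product-of-Gaussians) fusion of several disparity estimates. The cue names and confidence values are hypothetical.

```python
import numpy as np

def fuse(means, variances):
    """Precision-weighted (product-of-Gaussians) fusion of several per-pixel
    disparity estimates; low-variance (high-confidence) cues dominate."""
    w = 1.0 / np.asarray(variances)
    return (w * np.asarray(means)).sum(axis=0) / w.sum(axis=0)

h, w_ = 4, 6                                   # tiny image for illustration
defocus = np.full((h, w_), 2.0)                # disparity map from a defocus cue
motion  = np.full((h, w_), 3.0)                # disparity map from a motion cue
prior   = np.full((h, w_), 1.0)                # class-specific prior map
fused = fuse([defocus, motion, prior],
             [0.5 * np.ones((h, w_)),          # per-cue confidence expressed as variance
              2.0 * np.ones((h, w_)),
              4.0 * np.ones((h, w_))])
print(fused[0, 0])                             # lands closest to the most confident cue
```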
Wilsonian dark matter in string derived Z' model
NASA Astrophysics Data System (ADS)
Delle Rose, L.; Faraggi, A. E.; Marzo, C.; Rizos, J.
2017-09-01
The dark matter issue is among the most perplexing in contemporary physics. The problem is more enigmatic due to the wide range of possible solutions, ranging from the ultralight to the supermassive. String theory gives rise to plausible dark matter candidates due to the breaking of the non-Abelian grand unified theory (GUT) symmetries by Wilson lines. The physical spectrum then contains states that do not satisfy the quantization conditions of the unbroken GUT symmetry. Given that the Standard Model states are identified with broken GUT representations, and provided that any ensuing symmetry breakings are induced by components of GUT states, a remnant discrete symmetry remains that forbids the decay of the Wilsonian states. A class of such states are obtained in a heterotic-string-derived Z' model. The model exploits the spinor-vector duality symmetry, observed in the fermionic Z2 × Z2 heterotic-string orbifolds, to generate a Z' ∈ E6 symmetry that may remain unbroken down to low energies. The E6 symmetry is broken at the string level with discrete Wilson lines. The Wilsonian dark matter candidates in the string-derived model are SO(10), and hence Standard Model, singlets and possess non-E6 U(1)Z' charges. Depending on the U(1)Z' breaking scale and the reheating temperature they give rise to different scenarios for the relic abundance, and are in accordance with the cosmological constraints.
Surface Modeling to Support Small-Body Spacecraft Exploration and Proximity Operations
NASA Technical Reports Server (NTRS)
Riedel, Joseph E.; Mastrodemos, Nickolaos; Gaskell, Robert W.
2011-01-01
In order to simulate physically plausible surfaces that represent geologically evolved bodies, and to demonstrate demanding surface-relative guidance, navigation and control (GN&C) actions against them, such surfaces must be made to mimic the geological processes themselves. A report describes how, using software and algorithms to model body surfaces as a series of digital terrain maps, a series of processes was put in place that evolves the surface from some assumed nominal starting condition. The physical processes modeled in this algorithmic technique include fractal regolith substrate texturing, fractally textured rocks (of empirically derived size and distribution power laws), cratering, and regolith migration under potential energy gradients. Starting with a global model that may be determined observationally or created ad hoc, the surface evolution is begun. First, material of some assumed strength is layered on the global model in a fractally random pattern. Then, rocks are distributed according to power laws measured on the Moon. Cratering then takes place in a temporal fashion, including modeling of ejecta blankets and taking into account the gravity of the object (which determines how much of the ejecta blanket falls back to the surface), and reproducing the observed phenomenon of older craters being progressively buried by the ejecta of later impacts. Finally, regolith migration occurs, which stratifies finer materials from coarser as the fine material progressively migrates to regions of lower potential energy.
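One ingredient mentioned above, rock populations following power-law size and distribution laws, can be illustrated with a short inverse-transform sampler. The exponent, size limits, and terrain-patch dimensions below are placeholders, not the empirically derived lunar values used in the reported work.

```python
import numpy as np

rng = np.random.default_rng(4)

def rock_diameters(n, d_min=0.1, d_max=5.0, q=2.5):
    """Draw rock diameters from a truncated power-law size-frequency
    distribution (pdf ~ d**-(q+1)); exponent and limits are illustrative."""
    u = rng.random(n)
    a = -q
    return (d_min**a + u * (d_max**a - d_min**a)) ** (1.0 / a)

# Scatter the rocks uniformly over a 100 m x 100 m terrain patch.
n_rocks = 1000
d = rock_diameters(n_rocks)
xy = rng.random((n_rocks, 2)) * 100.0          # rock positions in metres
print(f"largest rock: {d.max():.2f} m, median: {np.median(d):.2f} m")
```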
AgMIP Climate Data and Scenarios for Integrated Assessment. Chapter 3
NASA Technical Reports Server (NTRS)
Ruane, Alexander C.; Winter, Jonathan M.; McDermid, Sonali P.; Hudson, Nicholas I.
2015-01-01
Climate change presents a great challenge to the agricultural sector as changes in precipitation, temperature, humidity, and circulation patterns alter the climatic conditions upon which many agricultural systems rely. Projections of future climate conditions are inherently uncertain owing to a lack of clarity on how society will develop, policies that may be implemented to reduce greenhouse-gas (GHG) emissions, and complexities in modeling the atmosphere, ocean, land, cryosphere, and biosphere components of the climate system. Global climate models (GCMs) are based on well-established physics of each climate component that enable the models to project climate responses to changing GHG concentration scenarios (Stocker et al., 2013). The most recent iteration of the Coupled Model Intercomparison Project (CMIP5; Taylor et al., 2012) utilized representative concentration pathways (RCPs) to cover the range of plausible GHG concentrations out past the year 2100, with RCP8.5 representing an extreme scenario and RCP4.5 representing a lower-concentration scenario (Moss et al., 2010).
NASA Astrophysics Data System (ADS)
Green, D. N.; Neuberg, J.; Cayol, V.
2006-05-01
Surface deformations recorded in close proximity to the active lava dome at Soufrière Hills volcano, Montserrat, can be used to infer stresses within the uppermost 1000 m of the conduit system. Most deformation source models consider only isotropic pressurisation of the conduit. We show that tilt recorded during rapid magma extrusion in 1997 could have also been generated by shear stresses sustained along the conduit wall; these stresses are a consequence of pressure gradients that develop along the conduit. Numerical modelling, incorporating realistic topography, can reproduce both the morphology and half the amplitude of the measured deformation field using a realistic shear stress amplitude, equivalent to a pressure gradient of 3.5 × 10⁴ Pa m⁻¹ along a 1000 m long conduit with a 15 m radius. This shear stress model has advantages over the isotropic pressure models because it does not require either physically unattainable overpressures or source radii larger than 200 m to explain the same deformation.
Inner Structure in the TW Hya Circumstellar Disk
NASA Astrophysics Data System (ADS)
Akeson, Rachel L.; Millan-Gabet, R.; Ciardi, D.; Boden, A.; Sargent, A.; Monnier, J.; McAlister, H.; ten Brummelaar, T.; Sturmann, J.; Sturmann, L.; Turner, N.
2011-05-01
TW Hya is a nearby (50 pc) young stellar object with an estimated age of 10 Myr and signs of active accretion. Previous modeling of the circumstellar disk has shown that the inner disk contains optically thin material, placing this object in the class of "transition disks". We present new near-infrared interferometric observations of the disk material and use these data, as well as previously published, spatially resolved data at 10 microns and 7 mm, to constrain disk models based on a standard flared disk structure. Our model demonstrates that the constraints imposed by the spatially resolved data can be met with a physically plausible disk but this requires a disk containing not only an inner gap in the optically thick disk as previously suggested, but also some optically thick material within this gap. Our model is consistent with the suggestion by previous authors of a planet with an orbital radius of a few AU. This work was conducted at the NASA Exoplanet Science Institute, California Institute of Technology.
NASA Astrophysics Data System (ADS)
Zhang, Jiaxin; Shields, Michael D.
2018-01-01
This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and the associated probability that each candidate is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied for uncertainty analysis of plate buckling strength, where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
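A compact sketch of the first stage of such a workflow, multimodel inference over candidate distributions fitted to scarce data, using AIC and Akaike weights as the information-theoretic model probabilities. The candidate families, sample size, and synthetic data are assumptions; the full method would additionally estimate joint parameter densities with Bayes' rule and propagate them via importance sampling.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
data = rng.lognormal(mean=0.3, sigma=0.4, size=15)   # scarce data (15 points, assumed)

candidates = {"lognormal": stats.lognorm,
              "gamma": stats.gamma,
              "weibull": stats.weibull_min}

aic = {}
for name, dist in candidates.items():
    params = dist.fit(data, floc=0)                  # ML fit with location pinned at 0
    k = len(params) - 1                              # free parameters (loc fixed)
    ll = np.sum(dist.logpdf(data, *params))
    aic[name] = 2 * k - 2 * ll

# Akaike weights: the probability that each candidate is the best model
# in the Kullback-Leibler sense, given the data.
best = min(aic.values())
weights = {m: np.exp(-0.5 * (a - best)) for m, a in aic.items()}
z = sum(weights.values())
for m in weights:
    print(f"{m:10s} weight = {weights[m] / z:.2f}")
```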
Laszlo, Sarah; Plaut, David C
2012-03-01
The Parallel Distributed Processing (PDP) framework has significant potential for producing models of cognitive tasks that approximate how the brain performs the same tasks. To date, however, there has been relatively little contact between PDP modeling and data from cognitive neuroscience. In an attempt to advance the relationship between explicit, computational models and physiological data collected during the performance of cognitive tasks, we developed a PDP model of visual word recognition which simulates key results from the ERP reading literature, while simultaneously being able to successfully perform lexical decision, a benchmark task for reading models. Simulations reveal that the model's success depends on the implementation of several neurally plausible features in its architecture which are sufficiently domain-general to be relevant to cognitive modeling more generally. Copyright © 2011 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Gard, Michael
2014-01-01
A grand convergence looms. It seems at least plausible that health and physical education may soon be lived by students in ways that are radically different from the past and sharply at odds with the imaginings of its founders and generations of academic aficionados. Perhaps in some respects, the differences will be superficial and less important…
Obayashi, Chihiro; Tamei, Tomoya; Shibata, Tomohiro
2014-05-01
This paper proposes a novel robotic trainer for motor skill learning. It is user-adaptive, inspired by the assist-as-needed principle well known in the field of physical therapy. Most previous studies in the field of robotic assistance of motor skill learning have used predetermined desired trajectories, and it has not been examined intensively whether these trajectories were optimal for each user. Furthermore, the guidance hypothesis states that humans tend to rely too much on external assistive feedback, resulting in interference with the internal feedback necessary for motor skill learning. A few studies have proposed a system that adjusts its assistive strength according to the user's performance in order to prevent the user from relying too much on the robotic assistance. There are, however, problems in these studies, in that a physical model of the user's motor system is required, which is inherently difficult to construct. In this paper, we propose a framework for a robotic trainer that is user-adaptive and that requires neither a specific desired trajectory nor a physical model of the user's motor system, and we achieve this using model-free reinforcement learning. We chose dart-throwing as an example motor-learning task as it is one of the simplest throwing tasks, and its performance can be easily and quantitatively measured. Training experiments with novices, aiming at maximizing the score with the darts and minimizing the physical robotic assistance, demonstrate the feasibility and plausibility of the proposed framework. Copyright © 2014 Elsevier Ltd. All rights reserved.
Using physical models to study the gliding performance of extinct animals.
Koehl, M A R; Evangelista, Dennis; Yang, Karen
2011-12-01
Aerodynamic studies using physical models of fossil organisms can provide quantitative information about how performance of defined activities, such as gliding, depends on specific morphological features. Such analyses allow us to rule out hypotheses about the function of extinct organisms that are not physically plausible and to determine if and how specific morphological features and postures affect performance. The purpose of this article is to provide a practical guide for the design of dynamically scaled physical models to study the gliding of extinct animals using examples from our research on the theropod dinosaur, †Microraptor gui, which had flight feathers on its hind limbs as well as on its forelimbs. Analysis of the aerodynamics of †M. gui can shed light on the design of gliders with large surfaces posterior to the center of mass and provide functional information to evolutionary biologists trying to unravel the origins of flight in the dinosaurian ancestors and sister groups to birds. Measurements of lift, drag, side force, and moments in pitch, roll, and yaw on models in a wind tunnel can be used to calculate indices of gliding and parachuting performance, aerodynamic static stability, and control effectiveness in maneuvering. These indices permit the aerodynamic performance of bodies of different shape, size, stiffness, texture, and posture to be compared and thus can provide insights about the design of gliders, both biological and man-made. Our measurements of maximum lift-to-drag ratios of 2.5-3.1 for physical models of †M. gui suggest that its gliding performance was similar to that of flying squirrels and that the various leg postures that might have been used by †M. gui make little difference to that aspect of aerodynamic performance. We found that body orientation relative to the movement of air past the animal determines whether it is difficult or easy to maneuver.
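The performance indices mentioned above can be computed directly from wind-tunnel force measurements. The sketch below derives the lift-to-drag ratio and equilibrium glide angle from hypothetical lift and drag readings; the numbers are placeholders, not the published †M. gui measurements.

```python
import numpy as np

def glide_indices(lift, drag):
    """Lift-to-drag ratio and equilibrium glide angle (degrees below horizontal)
    from force measurements at a given angle of attack."""
    ld = lift / drag
    glide_angle = np.degrees(np.arctan2(drag, lift))
    return ld, glide_angle

# Hypothetical force readings (newtons) across a sweep of angles of attack.
lift = np.array([0.8, 1.4, 1.9, 2.1])
drag = np.array([0.4, 0.5, 0.7, 0.9])
ld, angles = glide_indices(lift, drag)
print(f"max L/D = {ld.max():.1f} at glide angle {angles[ld.argmax()]:.0f} deg")
```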
New methods in hydrologic modeling and decision support for culvert flood risk under climate change
NASA Astrophysics Data System (ADS)
Rosner, A.; Letcher, B. H.; Vogel, R. M.; Rees, P. S.
2015-12-01
Assessing culvert flood vulnerability under climate change poses an unusual combination of challenges. We seek a robust method of planning for an uncertain future, and therefore must consider a wide range of plausible future conditions. Culverts in our case study area, northwestern Massachusetts, USA, are predominantly found in small, ungaged basins. The need to predict flows both at numerous sites and under numerous plausible climate conditions requires a statistical model with low data and computational requirements. We present a statistical streamflow model that is driven by precipitation and temperature, allowing us to predict flows without reliance on reference gages of observed flows. The hydrological analysis is used to determine each culvert's risk of failure under current conditions. We also explore the hydrological response to a range of plausible future climate conditions. These results are used to determine the tolerance of each culvert to future increases in precipitation. In a decision support context, current flood risk as well as tolerance to potential climate changes are used to provide a robust assessment and prioritization for culvert replacements.
(Tl, Sb) and (Tl, Bi) binary surface reconstructions on Ge(111) substrate
NASA Astrophysics Data System (ADS)
Gruznev, D. V.; Bondarenko, L. V.; Tupchaya, A. Y.; Yakovlev, A. A.; Mihalyuk, A. N.; Zotov, A. V.; Saranin, A. A.
2018-03-01
2D compounds made of Group-III and Group-V elements on the surface of silicon and germanium attract considerable attention due to prospects of creating III-V binary monolayers, which are predicted to hold advanced physical properties. In the present work, we have investigated two such systems, (Tl, Sb)/Ge(111) and (Tl, Bi)/Ge(111), using scanning tunneling microscopy, low energy electron diffraction observations and density-functional-theory calculations. In addition to the previously reported surface structures of 2D (Tl, Sb) and (Tl, Bi) compounds on Si(111), we found new ones, namely, √7 × √7 and 3 × 3. Formation processes and plausible models of their atomic arrangements are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hattis, D.; Lemerise, A.; Ratick, S.
1995-12-31
The authors used physical, toxicological, and system dynamic modeling tools to estimate the probable ecological effects caused by residual chlorine and nitrogen in sewage effluent discharged into Greenwich Cove, RI, USA. An energy systems model of the pelagic ecosystem in Narragansett Bay was developed and adapted for use in Greenwich Cove. This model allowed them to assess the indirect effects on organisms in the food web that result from a direct toxic effect on a given organism. Indirect food web mediated effects were the primary mode of loss for bluefish, but not for menhaden. The authors chose gross primary production, the flux of carbon to the benthos, fish out-migration, and fish harvest as outcome variables indicative of different valuable ecosystem functions. Organism responses were modeled using an assumption that lethal toxic responses occur as individual organism thresholds are exceeded, and that in general thresholds are lognormally distributed in a population of mixed individuals. They performed sensitivity analyses to assess the implications of different plausible values for the probit slopes used in the model. The putative toxic damage repair rate, combined with estimates of the exposure variability for each species, determined the averaging time that was likely to be most important in producing toxicity. Temperature was an important external factor in the physical, toxicological, and ecological models. These three models can be integrated into a single model applicable to other locations and stressors given the availability of appropriate data.
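The dose-response assumption described above (a lethal response once an individual, lognormally distributed tolerance threshold is exceeded, with a probit slope setting the spread) can be written down in a few lines. The median tolerance and slope below are illustrative values, not those calibrated for Greenwich Cove.

```python
import numpy as np
from scipy.stats import norm

def fraction_affected(conc, median_tolerance, probit_slope):
    """Fraction of individuals whose lognormally distributed tolerance
    threshold is exceeded at concentration `conc`.  The probit slope is the
    change in probits per factor-of-ten change in concentration, so the
    log10 standard deviation of thresholds is 1/slope."""
    z = probit_slope * (np.log10(conc) - np.log10(median_tolerance))
    return norm.cdf(z)

# Illustrative numbers only: median tolerance 0.05 mg/L, probit slope 2.
for c in (0.01, 0.05, 0.2):
    print(f"{c:4.2f} mg/L -> {fraction_affected(c, 0.05, 2.0):.1%} affected")
```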
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rim, Jung H.; Kuhn, Kevin J.; Tandon, Lav
Nuclear forensics techniques, including micro-XRF, gamma spectrometry, trace elemental analysis and isotopic/chronometric characterization were used to interrogate two, potentially related plutonium metal foils. These samples were submitted for analysis with only limited production information, and a comprehensive suite of forensic analyses were performed. Resulting analytical data was paired with available reactor model and historical information to provide insight into the materials' properties, origins, and likely intended uses. Both were super-grade plutonium, containing less than 3% 240Pu, and age-dating suggested that most recent chemical purification occurred in 1948 and 1955 for the respective metals. Additional consideration of reactor modelling feedback and trace elemental observables indicate plausible U.S. reactor origin associated with the Hanford site production efforts. In conclusion, based on this investigation, the most likely intended use for these plutonium foils was 239Pu fission foil targets for physics experiments, such as cross-section measurements, etc.
NASA Astrophysics Data System (ADS)
Hawkins, L. R.; Rupp, D. E.; Li, S.; Sarah, S.; McNeall, D. J.; Mote, P.; Betts, R. A.; Wallom, D.
2017-12-01
Changing regional patterns of surface temperature, precipitation, and humidity may cause ecosystem-scale changes in vegetation, altering the distribution of trees, shrubs, and grasses. A changing vegetation distribution, in turn, alters the albedo, latent heat flux, and carbon exchanged with the atmosphere with resulting feedbacks onto the regional climate. However, a wide range of earth-system processes that affect the carbon, energy, and hydrologic cycles occur at sub grid scales in climate models and must be parameterized. The appropriate parameter values in such parameterizations are often poorly constrained, leading to uncertainty in predictions of how the ecosystem will respond to changes in forcing. To better understand the sensitivity of regional climate to parameter selection and to improve regional climate and vegetation simulations, we used a large perturbed physics ensemble and a suite of statistical emulators. We dynamically downscaled a super-ensemble (multiple parameter sets and multiple initial conditions) of global climate simulations using a 25-km resolution regional climate model HadRM3p with the land-surface scheme MOSES2 and dynamic vegetation module TRIFFID. We simultaneously perturbed land surface parameters relating to the exchange of carbon, water, and energy between the land surface and atmosphere in a large super-ensemble of regional climate simulations over the western US. Statistical emulation was used as a computationally cost-effective tool to explore uncertainties in interactions. Regions of parameter space that did not satisfy observational constraints were eliminated and an ensemble of parameter sets that reduce regional biases and span a range of plausible interactions among earth system processes were selected. This study demonstrated that by combining super-ensemble simulations with statistical emulation, simulations of regional climate could be improved while simultaneously accounting for a range of plausible land-atmosphere feedback strengths.
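A schematic of the emulator-based constraint step described above: a space-filling design over two hypothetical land-surface parameters, a Gaussian-process emulator trained on a stand-in for the ensemble response, and rejection of parameter settings that violate an assumed observational constraint. The parameter names, response surface, and tolerance are all placeholders for what would come from the HadRM3p ensemble and observations.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(6)

# Latin hypercube design over two hypothetical land-surface parameters
# (e.g. a stomatal-conductance scaling and a root-depth scaling), both in [0, 1].
design = qmc.LatinHypercube(d=2, seed=0).random(n=40)

# Stand-in for the expensive regional-climate response (e.g. a summer mean
# temperature bias); in practice these values come from the ensemble runs.
response = 1.5 * design[:, 0] - 0.8 * design[:, 1] ** 2 + rng.normal(0, 0.05, 40)

emulator = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.3),
                                    normalize_y=True).fit(design, response)

# Dense sweep of parameter space: keep only settings whose emulated response
# satisfies an (assumed) observational constraint of |bias| < 0.25.
grid = qmc.LatinHypercube(d=2, seed=1).random(n=5000)
mean, sd = emulator.predict(grid, return_std=True)
plausible = grid[np.abs(mean) < 0.25]
print(f"{len(plausible)} of {len(grid)} candidate parameter sets retained")
```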
Flood hydrology for Dry Creek, Lake County, Northwestern Montana
Parrett, C.; Jarrett, R.D.
2004-01-01
Dry Creek drains about 22.6 square kilometers of rugged mountainous terrain upstream from Tabor Dam in the Mission Range near St. Ignatius, Montana. Because of uncertainty about plausible peak discharges and concerns regarding the ability of the Tabor Dam spillway to safely convey these discharges, the flood hydrology for Dry Creek was evaluated on the basis of three hydrologic and geologic methods. The first method involved determining an envelope line relating flood discharge to drainage area on the basis of regional historical data and calculating a 500-year flood for Dry Creek using a regression equation. The second method involved paleoflood methods to estimate the maximum plausible discharge for 35 sites in the study area. The third method involved rainfall-runoff modeling for the Dry Creek basin in conjunction with regional precipitation information to determine plausible peak discharges. All of these methods resulted in estimates of plausible peak discharges that are substantially less than those predicted by the more generally applied probable maximum flood technique. Copyright ASCE 2004.
Physical Orbit for λ Virginis and a Test of Stellar Evolution Models
NASA Astrophysics Data System (ADS)
Zhao, M.; Monnier, J. D.; Torres, G.; Boden, A. F.; Claret, A.; Millan-Gabet, R.; Pedretti, E.; Berger, J.-P.; Traub, W. A.; Schloerb, F. P.; Carleton, N. P.; Kern, P.; Lacasse, M. G.; Malbet, F.; Perraut, K.
2007-04-01
The star λ Virginis is a well-known double-lined spectroscopic Am binary with the interesting property that both stars are very similar in abundance but one is sharp-lined and the other is broad-lined. We present combined interferometric and spectroscopic studies of λ Vir. The small scale of the λ Vir orbit (~20 mas) is well resolved by the Infrared Optical Telescope Array (IOTA), allowing us to determine its elements, as well as the physical properties of the components, to high accuracy. The masses of the two stars are determined to be 1.897 and 1.721 Msolar, with 0.7% and 1.5% errors, respectively, and the two stars are found to have the same temperature of 8280+/-200 K. The accurately determined properties of λ Vir allow comparisons between observations and current stellar evolution models, and reasonable matches are found. The best-fit stellar model gives λ Vir a subsolar metallicity of Z=0.0097 and an age of 935 Myr. The orbital and physical parameters of λ Vir also allow us to study its tidal evolution timescales and status. Although atomic diffusion is currently considered to be the most plausible cause of the Am phenomenon, the issue is still being actively debated in the literature. With the present study of the properties and evolutionary status of λ Vir, this system is an ideal candidate for further detailed abundance analyses that might shed more light on the source of the chemical anomalies in these A stars.
NASA Astrophysics Data System (ADS)
Steger, Stefan; Brenning, Alexander; Bell, Rainer; Petschko, Helene; Glade, Thomas
2016-06-01
Empirical models are frequently applied to produce landslide susceptibility maps for large areas. Subsequent quantitative validation results are routinely used as the primary criteria to infer the validity and applicability of the final maps or to select one of several models. This study hypothesizes that such direct deductions can be misleading. The main objective was to explore discrepancies between the predictive performance of a landslide susceptibility model and the geomorphic plausibility of subsequent landslide susceptibility maps while a particular emphasis was placed on the influence of incomplete landslide inventories on modelling and validation results. The study was conducted within the Flysch Zone of Lower Austria (1,354 km2) which is known to be highly susceptible to landslides of the slide-type movement. Sixteen susceptibility models were generated by applying two statistical classifiers (logistic regression and generalized additive model) and two machine learning techniques (random forest and support vector machine) separately for two landslide inventories of differing completeness and two predictor sets. The results were validated quantitatively by estimating the area under the receiver operating characteristic curve (AUROC) with single holdout and spatial cross-validation technique. The heuristic evaluation of the geomorphic plausibility of the final results was supported by findings of an exploratory data analysis, an estimation of odds ratios and an evaluation of the spatial structure of the final maps. The results showed that maps generated by different inventories, classifiers and predictors appeared differently while holdout validation revealed similar high predictive performances. Spatial cross-validation proved useful to expose spatially varying inconsistencies of the modelling results while additionally providing evidence for slightly overfitted machine learning-based models. However, the highest predictive performances were obtained for maps that explicitly expressed geomorphically implausible relationships indicating that the predictive performance of a model might be misleading in the case a predictor systematically relates to a spatially consistent bias of the inventory. Furthermore, we observed that random forest-based maps displayed spatial artifacts. The most plausible susceptibility map of the study area showed smooth prediction surfaces while the underlying model revealed a high predictive capability and was generated with an accurate landslide inventory and predictors that did not directly describe a bias. However, none of the presented models was found to be completely unbiased. This study showed that high predictive performances cannot be equated with a high plausibility and applicability of subsequent landslide susceptibility maps. We suggest that greater emphasis should be placed on identifying confounding factors and biases in landslide inventories. A joint discussion between modelers and decision makers of the spatial pattern of the final susceptibility maps in the field might increase their acceptance and applicability.
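To illustrate the validation step discussed above, the sketch below runs a spatially blocked cross-validation of a logistic-regression susceptibility model, scoring each held-out block with AUROC. The synthetic predictors, labels, and the k-means-based blocking are stand-ins; the study itself used real inventories, additional classifiers, and dedicated spatial cross-validation schemes.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)

# Synthetic stand-in for a landslide dataset: coordinates, two predictors
# (e.g. slope angle and distance to roads) and a presence/absence label.
n = 2000
coords = rng.random((n, 2)) * 30.0                       # km
X = np.column_stack([rng.normal(20, 8, n), rng.exponential(500, n)])
y = (rng.random(n) < 1 / (1 + np.exp(-0.08 * (X[:, 0] - 20)))).astype(int)

# Spatial cross-validation: folds are contiguous blocks of space (k-means on
# coordinates) rather than random rows, so test cells are spatially separated
# from training cells.
folds = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(coords)
aucs = []
for k in range(5):
    train, test = folds != k, folds == k
    model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    aucs.append(roc_auc_score(y[test], model.predict_proba(X[test])[:, 1]))
print(f"spatial CV AUROC: {np.mean(aucs):.2f} +/- {np.std(aucs):.2f}")
```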
Miconi, Thomas
2017-01-01
Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible, and/or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior. DOI: http://dx.doi.org/10.7554/eLife.20899.001 PMID:28230528
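The sketch below illustrates, in heavily simplified form, the kind of reward-modulated plasticity described above: a chaotic rate network accumulates an eligibility trace during a trial and updates its weights only when a scalar reward arrives at the trial's end. The network size, noise scheme, learning rate, and task are illustrative assumptions, not the published rule or hyperparameters.

```python
# Simplified reward-modulated Hebbian learning in a chaotic rate RNN (toy task:
# drive a scalar readout toward a target using only end-of-trial reward).
import numpy as np

rng = np.random.default_rng(1)
N, T, g, lr = 100, 50, 1.5, 1e-3
J = g * rng.normal(0, 1 / np.sqrt(N), (N, N))   # recurrent weights (chaotic regime)
w_out = np.zeros(N)
target = 1.0                                    # desired end-of-trial readout
R_bar = 0.0                                     # running reward baseline

for trial in range(300):
    x = rng.normal(0, 0.5, N)
    elig = np.zeros((N, N))
    for t in range(T):
        r = np.tanh(x)
        noise = rng.normal(0, 0.1, N)           # exploratory perturbation
        x = x + 0.1 * (-x + J @ r + noise)
        elig += np.outer(noise, r)              # correlate perturbation with presynaptic activity
    z = w_out @ np.tanh(x)
    R = -(z - target) ** 2                      # delayed, scalar reward
    J += lr * (R - R_bar) * elig                # reward-modulated weight update
    w_out += lr * (R - R_bar) * np.tanh(x)      # readout nudged by the same reward signal
    R_bar = 0.95 * R_bar + 0.05 * R
```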
The effect of food portion sizes on the obesity prevention using system dynamics modelling
NASA Astrophysics Data System (ADS)
Abidin, Norhaslinda Zainal; Zulkepli, Jafri Hj; Zaibidi, Nerda Zura
2014-09-01
The rise in income and population growth have increased the demand for food and induced changes in food habits, food purchasing and consumption patterns in Malaysia. With this transition, one of the plausible causes of weight gain and obesity is the frequent consumption of outside food, which is synonymous with bigger portion sizes. Therefore, the aim of this paper is to develop a system dynamics model to analyse the effect of reducing food portion size on weight and obesity prevention. This study combines different strands of knowledge comprising nutrition, physical activity and body metabolism. These elements are synthesized into a system dynamics model called SIMULObese. Findings from this study suggest that changes in eating behavior should not emphasize only limiting the portion sizes consumed; guidelines to prevent obesity should also consider other eating events, such as controlling meal frequency and limiting intake of high-calorie food.
SchNet - A deep learning architecture for molecules and materials
NASA Astrophysics Data System (ADS)
Schütt, K. T.; Sauceda, H. E.; Kindermans, P.-J.; Tkatchenko, A.; Müller, K.-R.
2018-06-01
Deep learning has led to a paradigm shift in artificial intelligence, including web, text, and image search, speech recognition, as well as bioinformatics, with growing impact in chemical physics. Machine learning, in general, and deep learning, in particular, are ideally suited for representing quantum-mechanical interactions, enabling us to model nonlinear potential-energy surfaces or to enhance the exploration of chemical compound space. Here we present the deep learning architecture SchNet that is specifically designed to model atomistic systems by making use of continuous-filter convolutional layers. We demonstrate the capabilities of SchNet by accurately predicting a range of properties across chemical space for molecules and materials, where our model learns chemically plausible embeddings of atom types across the periodic table. Finally, we employ SchNet to predict potential-energy surfaces and energy-conserving force fields for molecular dynamics simulations of small molecules and perform an exemplary study of the quantum-mechanical properties of C20-fullerene that would have been infeasible with regular ab initio molecular dynamics.
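As a rough illustration of what a continuous-filter convolution does, the sketch below expands interatomic distances in Gaussian radial basis functions, passes them through a small filter-generating network, and uses the resulting filters to weight and aggregate neighbor features. Layer sizes, the basis width, and the two-layer filter network are assumptions for illustration, not the SchNet architecture itself.

```python
# Toy continuous-filter convolution in the spirit of SchNet.
import numpy as np

rng = np.random.default_rng(0)
n_atoms, n_feat, n_rbf = 6, 16, 32
positions = rng.normal(0, 1.0, (n_atoms, 3))
features = rng.normal(0, 1.0, (n_atoms, n_feat))              # current atom embeddings

# Gaussian radial basis expansion of pairwise distances
centers = np.linspace(0.0, 5.0, n_rbf)
d = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
rbf = np.exp(-10.0 * (d[..., None] - centers) ** 2)           # shape (n, n, n_rbf)

# Filter-generating network: maps the distance expansion to per-pair filters
W1 = rng.normal(0, 0.1, (n_rbf, n_feat))
W2 = rng.normal(0, 0.1, (n_feat, n_feat))
filters = np.tanh(rbf @ W1) @ W2                              # shape (n, n, n_feat)

# Continuous-filter convolution: weight neighbor features element-wise and sum over neighbors
mask = 1.0 - np.eye(n_atoms)                                  # exclude self-interaction
updated = np.einsum("jf,ijf,ij->if", features, filters, mask)
print(updated.shape)
```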
On Rosen's theory of gravity and cosmology
NASA Technical Reports Server (NTRS)
Barnes, R. C.
1980-01-01
Formal similarities between general relativity and Rosen's bimetric theory of gravity were used to analyze various bimetric cosmologies. The following results were found: (1) Physically plausible model universes which have a flat static background metric, have a Robertson-Walker fundamental metric, and which allow co-moving coordinates do not exist in bimetric cosmology. (2) It is difficult to use the Robertson-Walker metric for both the background metric (γ_μν) and the fundamental metric tensor of Riemannian geometry (g_μν) and require that g_μν and γ_μν have different time dependences. (3) A consistency relation for using co-moving coordinates in bimetric cosmology was derived. (4) Certain spatially flat bimetric cosmologies of Babala were tested for the presence of particle horizons. (5) An analytic solution for Rosen's k = +1 model was found. (6) Rosen's singularity-free k = +1 model arises from what appears to be an arbitrary choice for the time-dependent part of γ_μν.
Hu, Yipeng; Morgan, Dominic; Ahmed, Hashim Uddin; Pendsé, Doug; Sahu, Mahua; Allen, Clare; Emberton, Mark; Hawkes, David; Barratt, Dean
2008-01-01
A method is described for generating a patient-specific, statistical motion model (SMM) of the prostate gland. Finite element analysis (FEA) is used to simulate the motion of the gland using an ultrasound-based 3D FE model over a range of plausible boundary conditions and soft-tissue properties. By applying principal component analysis to the displacements of the FE mesh node points inside the gland, the simulated deformations are then used as training data to construct the SMM. The SMM is used to both predict the displacement field over the whole gland and constrain a deformable surface registration algorithm, given only a small number of target points on the surface of the deformed gland. Using 3D transrectal ultrasound images of the prostates of five patients, acquired before and after imposing a physical deformation, to evaluate the accuracy of predicted landmark displacements, the mean target registration error was found to be less than 1.9 mm.
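The sketch below illustrates the general idea of such a statistical motion model: a PCA basis is built from a set of simulated displacement fields, and the full field is then estimated by least-squares fitting of the PCA coefficients to a handful of observed surface-point displacements. The dimensions, number of modes, and noise levels are invented for illustration and do not reproduce the FE-based pipeline described above.

```python
# Toy PCA-based statistical motion model: train on simulated displacement fields,
# then predict the full field from sparse point observations.
import numpy as np

rng = np.random.default_rng(2)
n_nodes, n_sims = 500, 60
# Stand-in for FE training data: displacement fields (3*n_nodes) over varied conditions
basis_true = rng.normal(0, 1, (3 * n_nodes, 5))
train = basis_true @ rng.normal(0, 1, (5, n_sims))            # (3n, n_sims)

mean = train.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(train - mean, full_matrices=False)
modes = U[:, :5]                                              # retained PCA modes

# Prediction: only a few node displacements are "observed" on the surface
obs_idx = rng.choice(3 * n_nodes, size=30, replace=False)
true_field = mean[:, 0] + modes @ rng.normal(0, 3, 5)
observed = true_field[obs_idx] + rng.normal(0, 0.05, obs_idx.size)

coef, *_ = np.linalg.lstsq(modes[obs_idx], observed - mean[obs_idx, 0], rcond=None)
predicted = mean[:, 0] + modes @ coef
print("RMS error over all nodes:", np.sqrt(np.mean((predicted - true_field) ** 2)))
```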
NASA Astrophysics Data System (ADS)
Pohlman, Matthew Michael
The study of heat transfer and fluid flow in a vertical Bridgman device is motivated by current industrial difficulties in growing crystals with as few defects as possible. For example, Gallium Arsenide (GaAs) is of great interest to the semiconductor industry but remains an uneconomical alternative to silicon because of manufacturing problems. This dissertation is a two-dimensional study of the fluid in an idealized Bridgman device. The model nonlinear PDEs are discretized using second-order finite differencing. Newton's method solves the resulting nonlinear discrete equations. The large sparse linear systems involving the Jacobian are solved iteratively using the Generalized Minimum Residual method (GMRES). By adapting fast direct solvers for elliptic equations with simple boundary conditions, a good preconditioner is developed, which is essential for GMRES to converge quickly. Trends of the fluid flow and heat transfer for typical ranges of the physical parameters are determined. Also, the sizes of the terms in the mathematical model are determined by numerical investigation, in order to find which terms are in balance as the physical parameters vary. The results suggest the plausibility of simpler asymptotic solutions.
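The Newton/GMRES strategy described above can be sketched on a toy nonlinear system rather than the Bridgman equations: each Newton step solves a Jacobian system with preconditioned GMRES, where an exact factorization of a discrete Laplacian stands in for the fast elliptic solver mentioned in the text. The toy residual and tolerances are assumptions for illustration.

```python
# Newton-Krylov sketch: F(u) = A u + exp(u) - 1 = 0, Jacobian solves by preconditioned GMRES.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, splu, LinearOperator

n = 200
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)).tocsc()      # discrete Laplacian

def F(u):                                                     # toy nonlinear residual
    return A @ u + np.exp(u) - 1.0

def J(u):                                                     # Jacobian at u
    return (A + diags(np.exp(u))).tocsc()

M_lu = splu(A)                                                # preconditioner ~ A^{-1}
M = LinearOperator((n, n), matvec=M_lu.solve)

u = np.zeros(n)
for it in range(20):
    r = F(u)
    if np.linalg.norm(r) < 1e-10:
        break
    du, info = gmres(J(u), -r, M=M)                           # preconditioned GMRES solve
    u = u + du
print("Newton iterations:", it, " final residual:", np.linalg.norm(F(u)))
```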
Hu, Mingyang; de Jong, Djurre H; Marrink, Siewert J; Deserno, Markus
2013-01-01
We calculate the Gaussian curvature modulus κ̄ of a systematically coarse-grained (CG) one-component lipid membrane by applying the method recently proposed by Hu et al. [Biophys. J., 2012, 102, 1403] to the MARTINI representation of 1,2-dimyristoyl-sn-glycero-3-phosphocholine (DMPC). We find the value κ̄/κ = -1.04 +/- 0.03 for the elastic ratio between the Gaussian and the mean curvature modulus and deduce κ̄m/κm = -0.98 +/- 0.09 for the monolayer elastic ratio, where the latter is based on plausible assumptions for the distance z0 of the monolayer neutral surface from the bilayer midplane and the spontaneous lipid curvature K0m. By also analyzing the lateral stress profile σ0(z) of our system, two other lipid types and pertinent data from the literature, we show that determining K0m and κ̄ through the first and second moments of σ0(z) gives rise to physically implausible values for these observables. This discrepancy, which we previously observed for a much simpler CG model, suggests that the moment conditions derived from simple continuum assumptions miss the effect of physically important correlations in the lipid bilayer.
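For orientation, the moment conditions referred to above can be evaluated numerically from a stress profile. The sketch below computes the first and second moments of a toy monolayer stress profile; sign conventions and prefactors vary between formulations, and the profile itself is invented, so the mapping to κm K0m and κ̄m should be read as indicative only.

```python
# Numerical first and second moments of a toy lateral stress profile sigma0(z).
import numpy as np

z = np.linspace(0.0, 2.0, 400)                   # distance from bilayer midplane (nm)
# Toy profile: tension near the headgroups, compression in the tail region
sigma0 = 60 * np.exp(-((z - 1.5) ** 2) / 0.02) - 45 * np.exp(-((z - 0.8) ** 2) / 0.05)

dz = z[1] - z[0]
first_moment = np.sum(z * sigma0) * dz           # commonly related to kappa_m * K0m (convention-dependent)
second_moment = np.sum(z ** 2 * sigma0) * dz     # commonly related to the monolayer Gaussian modulus
print(first_moment, second_moment)
```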
Walder, J.S.
1997-01-01
We analyse a simple, physically-based model of breach formation in natural and constructed earthen dams to elucidate the principal factors controlling the flood hydrograph at the breach. Formation of the breach, which is assumed trapezoidal in cross-section, is parameterized by the mean rate of downcutting, k, the value of which is constrained by observations. A dimensionless formulation of the model leads to the prediction that the breach hydrograph depends upon lake shape, the ratio r of breach width to depth, the side slope θ of the breach, and the parameter η = (V/D³)(k/√(gD)), where V = lake volume, D = lake depth, and g is the acceleration due to gravity. Calculations show that peak discharge Qp depends weakly on lake shape, r and θ, but strongly on η, which is the product of a dimensionless lake volume and a dimensionless erosion rate. Qp(η) takes asymptotically distinct forms depending on whether η << 1 or η >> 1. Theoretical predictions agree well with data from dam failures for which k could be reasonably estimated. The analysis provides a rapid and in many cases graphical way to estimate plausible values of Qp at the breach.
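A hedged numerical sketch of this kind of breach-hydrograph calculation is given below: the breach floor lowers at a constant rate k, outflow follows a broad-crested-weir relation over a trapezoidal breach, and the lake drains accordingly. The weir coefficient, lake geometry, and parameter values are assumptions for illustration and are not taken from the paper.

```python
# Toy breach hydrograph: constant downcutting rate + trapezoidal weir outflow.
import numpy as np

g, k = 9.81, 1.0 / 3600          # erosion rate: 1 m of downcutting per hour
D, A_lake = 20.0, 2.0e6          # lake depth (m) and assumed constant surface area (m^2)
r, side = 2.0, 0.5               # breach width/depth ratio and breach side slope
c = 0.54                         # assumed weir discharge coefficient

dt, t = 1.0, 0.0
lake_level, breach_floor, Qp = D, D, 0.0
while lake_level > 0.05 and t < 5 * 86400:
    breach_floor = max(0.0, breach_floor - k * dt)
    h = max(0.0, lake_level - breach_floor)                     # head over the breach floor
    width = r * (D - breach_floor)                              # breach bottom width
    Q = c * np.sqrt(g) * (width * h ** 1.5 + side * h ** 2.5)   # approximate trapezoidal weir
    lake_level -= Q * dt / A_lake
    Qp = max(Qp, Q)
    t += dt
print("peak discharge (m^3/s):", round(Qp, 1))
```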
NASA Astrophysics Data System (ADS)
Prince, N. H. E.
2005-10-01
Meaning and purpose can be given to life, consciousness, the laws of physics, etc., if one assumes that the Universe is endowed with some form of (strong) anthropic principle. In particular, the final anthropic principle (FAP) of Barrow and Tipler postulates that intelligent life will continue in the Universe until the far future, when the computational power of descendent civilizations will be sufficient to run simulations of enormous scale and power. Tipler has claimed that it will be possible to create simulations with rendered environments and inhabitants, i.e. intelligent software constructs, which are effectively 'people'. Proponents of this FAP claim that if both substrate independence and the pattern identity postulate hold, then these simulations would be able to contain reanimated individuals that once lived. These claims have been heavily criticized, but the growing study of physical eschatology, initiated by Freeman Dyson in a seminal work, and developments in computational theory have made some progress in showing that simulations containing intelligent information-processing software constructs, which may be conscious, are not only feasible but may be a reality within the next few centuries. In this work, arguments and conservative calculations are given which concur with these latter, more minimal claims. FAP-type simulations inevitably depend on the type of cosmology, and current observations would seem to rule out the appropriate models. However, it is argued that dark energy, described in recent 'quintessence' cosmological models, may show the current conclusions from observations to be too presumptive. In this paper some relevant physical and cosmological aspects are reviewed in the light of the recent propositions regarding the plausibility of certain simulations given by Bostrom, and the longer-held postulate of finite nature due to Fredkin, which has grown in credibility following advances in quantum mechanics and the computational theory of cellular automata. This latter postulate supports the conclusions of Bostrom, which, under certain plausible assumptions, can imply that our Universe is itself already a simulated entity. It is demonstrated in this paper how atemporal memory connections could make efficient ancestor simulations possible, solving many of the objections faced by the FAP of Barrow and Tipler. Also, if finite nature is true, then it can offer a similar vindication to this FAP. Indeed the conclusions of this postulate can be realized more easily, but only if the existence of life within the simulation/Universe is not merely incidental to the (currently unknown) purpose it was generated to fulfil.
Selection, calibration, and validation of models of tumor growth.
Lima, E A B F; Oden, J T; Hormuth, D A; Yankeelov, T E; Almeida, R C
2016-11-01
This paper presents general approaches for addressing some of the most important issues in predictive computational oncology concerned with developing classes of predictive models of tumor growth: first, the process of developing mathematical models of vascular tumors evolving in the complex, heterogeneous macroenvironment of living tissue; second, the selection of the most plausible models among these classes, given relevant observational data; third, the statistical calibration and validation of models in these classes; and finally, the prediction of key Quantities of Interest (QOIs) relevant to patient survival and the effect of various therapies. The most challenging aspect of this endeavor is that all of these issues often involve confounding uncertainties: in observational data, in model parameters, in model selection, and in the features targeted in the prediction. Our approach can be referred to as "model agnostic" in that no single model is advocated; rather, a general approach is presented that explores powerful mixture-theory representations of tissue behavior while accounting for a range of relevant biological factors, which leads to many potentially predictive models. Representative classes are then identified which provide a starting point for the implementation of the Occam Plausibility Algorithm (OPAL), which enables the modeler to select the most plausible models (for given data) and to determine whether the model is a valid tool for predicting tumor growth and morphology (in vivo). All of these approaches account for uncertainties in the model, the observational data, the model parameters, and the target QOI. We demonstrate these processes by comparing a list of models for tumor growth, including reaction-diffusion models, phase-field models, and models with and without mechanical deformation effects, for glioma growth measured in murine experiments. Examples are provided that exhibit quite acceptable predictions of tumor growth in laboratory animals while demonstrating successful implementations of OPAL.
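The sketch below is a schematic illustration of the general idea of ranking candidate growth models by approximate plausibility; it is not the OPAL implementation. Two candidate models are fit to synthetic growth data, BIC values are converted to posterior model weights, and the most plausible model is retained. The models, data, and starting values are assumptions for illustration.

```python
# Approximate model plausibilities from BIC for two candidate tumor-growth models.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 25)
data = 50.0 / (1 + 49.0 * np.exp(-0.25 * t)) + rng.normal(0, 0.5, t.size)   # synthetic growth curve

models = {
    "exponential": (lambda t, a, r: a * np.exp(r * t), [1.0, 0.2]),
    "logistic": (lambda t, a, r, K: K / (1 + (K / a - 1) * np.exp(-r * t)), [1.0, 0.2, 30.0]),
}

bic = {}
for name, (f, p0) in models.items():
    popt, _ = curve_fit(f, t, data, p0=p0, maxfev=20000)
    rss = np.sum((data - f(t, *popt)) ** 2)
    bic[name] = t.size * np.log(rss / t.size) + len(popt) * np.log(t.size)

w = np.exp(-0.5 * (np.array(list(bic.values())) - min(bic.values())))
for name, weight in zip(bic, w / w.sum()):
    print(name, "approximate plausibility:", round(weight, 3))
```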
Apoptosis generates mechanical forces that close the lens vesicle in the chick embryo
NASA Astrophysics Data System (ADS)
Oltean, Alina; Taber, Larry A.
2018-03-01
During the initial stages of eye development, optic vesicles grow laterally outward from both sides of the forebrain and come into contact with the surrounding surface ectoderm (SE). Within the region of contact, these layers then thicken locally to create placodes and invaginate to form the optic cup (primitive retina) and lens vesicle (LV), respectively. This paper examines the biophysical mechanisms involved in LV formation, which consists of three phases: (1) lens placode formation; (2) invagination to create the lens pit (LP); and (3) closure to form a complete ellipsoidally shaped LV. Previous studies have suggested that extracellular matrix deposited between the SE and optic vesicle causes the lens placode to form by locally constraining expansion of the SE as it grows, while actomyosin contraction causes this structure to invaginate. Here, using computational modeling and experiments on chick embryos, we confirm that these mechanisms for Phases 1 and 2 are physically plausible. Our results also suggest, however, that they are not sufficient to close the LP during Phase 3. We postulate that apoptosis provides an additional mechanism by removing cells near the LP opening, thereby decreasing its circumference and generating tension that closes the LP. This hypothesis is supported by staining that shows a ring of cell death located around the LP opening during closure. Inhibiting apoptosis in cultured embryos using caspase inhibitors significantly reduced LP closure, and results from a finite-element model indicate that closure driven by cell death is plausible. Taken together, our results suggest an important mechanical role for apoptosis in lens development.
Tidally modulated eruptions on Enceladus: Cassini ISS observations and models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nimmo, Francis; Porco, Carolyn; Mitchell, Colin, E-mail: carolyn@ciclops.org
2014-09-01
We use images acquired by the Cassini Imaging Science Subsystem (ISS) to investigate the temporal variation of the brightness and height of the south polar plume of Enceladus. The plume's brightness peaks around the moon's apoapse, but with no systematic variation in scale height with either plume brightness or Enceladus' orbital position. We compare our results, both alone and supplemented with Cassini near-infrared observations, with predictions obtained from models in which tidal stresses are the principal control of the eruptive behavior. There are three main ways of explaining the observations: (1) the activity is controlled by right-lateral strike slip motion; (2) the activity is driven by eccentricity tides with an apparent time delay of about 5 hr; (3) the activity is driven by eccentricity tides plus a 1:1 physical libration with an amplitude of about 0.8° (3.5 km). The second hypothesis might imply either a delayed eruptive response, or a dissipative, viscoelastic interior. The third hypothesis requires a libration amplitude an order of magnitude larger than predicted for a solid Enceladus. While we cannot currently exclude any of these hypotheses, the third, which is plausible for an Enceladus with a subsurface ocean, is testable by using repeat imaging of the moon's surface. A dissipative interior suggests that a regional background heat source should be detectable. The lack of a systematic variation in plume scale height, despite the large variations in plume brightness, is plausibly the result of supersonic flow; the details of the eruption process are yet to be understood.
Marine ice sheet model performance depends on basal sliding physics and sub-shelf melting
NASA Astrophysics Data System (ADS)
Gladstone, Rupert Michael; Warner, Roland Charles; Galton-Fenzi, Benjamin Keith; Gagliardini, Olivier; Zwinger, Thomas; Greve, Ralf
2017-01-01
Computer models are necessary for understanding and predicting marine ice sheet behaviour. However, there is uncertainty over the implementation of physical processes at the ice base, both for grounded and floating glacial ice. Here we implement several sliding relations in a marine ice sheet flow-line model accounting for all stress components and demonstrate that model resolution requirements are strongly dependent on both the choice of basal sliding relation and the spatial distribution of ice shelf basal melting. Sliding relations that reduce the magnitude of the step change in basal drag from grounded ice to floating ice (where basal drag is set to zero) show reduced dependence on resolution compared to a commonly used relation, in which basal drag is purely a power law function of basal ice velocity. Sliding relations in which basal drag goes smoothly to zero as the grounding line is approached from inland (due to a physically motivated incorporation of effective pressure at the bed) provide a further reduction in resolution dependence. A similar issue is found with the imposition of basal melt under the floating part of the ice shelf: melt parameterisations that reduce the abruptness of the change in basal melting from grounded ice (where basal melt is set to zero) to floating ice provide improved convergence with resolution compared to parameterisations in which high melt occurs adjacent to the grounding line. Thus physical processes, such as sub-glacial outflow (which could cause high melt near the grounding line), impact on the capability to simulate marine ice sheets. If there exists an abrupt change across the grounding line in either basal drag or basal melting, then high resolution will be required to solve the problem. However, the plausible combination of a physical dependency of basal drag on effective pressure, and the possibility of low ice shelf basal melt rates next to the grounding line, may mean that some marine ice sheet systems can be reliably simulated at a coarser resolution than currently thought necessary.
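To illustrate why the choice of sliding relation matters near the grounding line, the toy comparison below evaluates a power-law (Weertman-type) drag, which stays finite up to flotation, against an effective-pressure-dependent (Coulomb-type) drag, which goes smoothly to zero as the ice approaches flotation. The coefficients, thicknesses, and bed depth are arbitrary illustrative values, not the model's parameterisation.

```python
# Basal drag from two sliding relations as ice thins toward flotation.
import numpy as np

rho_i, rho_w, g = 910.0, 1028.0, 9.81
ub = 100.0 / 3.15e7                          # basal speed: ~100 m/yr expressed in m/s
H = np.linspace(600.0, 300.0, 7)             # ice thickness decreasing toward the grounding line
bed = -280.0                                 # bed elevation (m below sea level)

# Effective pressure at the bed: ice overburden minus ocean water pressure
N = np.maximum(rho_i * g * H - rho_w * g * (-bed), 0.0)

m = 1.0 / 3.0
tau_weertman = 7.0e6 * ub ** m               # power-law drag (independent of flotation), Pa
tau_coulomb = 0.5 * N                        # effective-pressure-limited drag, Pa
print("Weertman-type drag (kPa):", round(tau_weertman / 1e3, 1))
print("Coulomb-type drag (kPa):", np.round(tau_coulomb / 1e3, 1))
```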
Causal discovery in the geosciences-Using synthetic data to learn how to interpret results
NASA Astrophysics Data System (ADS)
Ebert-Uphoff, Imme; Deng, Yi
2017-02-01
Causal discovery algorithms based on probabilistic graphical models have recently emerged in geoscience applications for the identification and visualization of dynamical processes. The key idea is to learn the structure of a graphical model from observed spatio-temporal data, thus finding pathways of interactions in the observed physical system. Studying those pathways allows geoscientists to learn subtle details about the underlying dynamical mechanisms governing our planet. Initial studies using this approach on real-world atmospheric data have shown great potential for scientific discovery. However, in these initial studies no ground truth was available, so that the resulting graphs have been evaluated only by whether a domain expert thinks they seemed physically plausible. The lack of ground truth is a typical problem when using causal discovery in the geosciences. Furthermore, while most of the connections found by this method match domain knowledge, we encountered one type of connection for which no explanation was found. To address both of these issues we developed a simulation framework that generates synthetic data of typical atmospheric processes (advection and diffusion). Applying the causal discovery algorithm to the synthetic data allowed us (1) to develop a better understanding of how these physical processes appear in the resulting connectivity graphs, and thus how to better interpret such connectivity graphs when obtained from real-world data; (2) to solve the mystery of the previously unexplained connections.
Self-gravity, self-consistency, and self-organization in geodynamics and geochemistry
NASA Astrophysics Data System (ADS)
Anderson, Don L.
The results of seismology and geochemistry for mantle structure are widely believed to be discordant, the former favoring whole-mantle convection and the latter favoring layered convection with a boundary near 650 km. However, a different view arises from recognizing effects usually ignored in the construction of these models, including physical plausibility and dimensionality. Self-compression and expansion affect material properties that are important in all aspects of mantle geochemistry and dynamics, including the interpretation of tomographic images. Pressure compresses a solid and changes physical properties that depend on volume and does so in a highly nonlinear way. Intrinsic, anelastic, compositional, and crystal structure effects control seismic velocities; temperature is not the only parameter, even though tomographic images are often treated as temperature maps. Shear velocity is not a good proxy for density, temperature, and composition or for other elastic constants. Scaling concepts are important in mantle dynamics, equations of state, and wherever it is necessary to extend laboratory experiments to the parameter range of the Earth's mantle. Simple volume-scaling relations that permit extrapolation of laboratory experiments, in a thermodynamically self-consistent way, to deep mantle conditions include the quasiharmonic approximation but not the Boussinesq formalisms. Whereas slabs, plates, and the upper thermal boundary layer of the mantle have characteristic thicknesses of hundreds of kilometers and lifetimes on the order of 100 million years, volume-scaling predicts values an order of magnitude higher for deep-mantle thermal boundary layers. This implies that deep-mantle features are sluggish and ancient. Irreversible chemical stratification is consistent with these results; plausible temperature variations in the deep mantle cause density variations that are smaller than the probable density contrasts across chemical interfaces created by accretional differentiation and magmatic processes. Deep-mantle features may be convectively isolated from upper-mantle processes. Plate tectonics and surface geochemical cycles appear to be entirely restricted to the upper ˜1,000 km. The 650-km discontinuity is mainly an isochemical phase change but major-element chemical boundaries may occur at other depths. Recycling laminates the upper mantle and also makes it statistically heterogeneous, in agreement with high-frequency scattering studies. In contrast to standard geochemical models and recent modifications, the deeper layers need not be accessible to surface volcanoes. There is no conflict between geophysical and geochemical data, but a physical basis for standard geochemical and geodynamic mantle models, including the two-layer and whole-mantle versions, and qualitative tomographic interpretations has been lacking.
Estimating Mass Properties of Dinosaurs Using Laser Imaging and 3D Computer Modelling
Bates, Karl T.; Manning, Phillip L.; Hodgetts, David; Sellers, William I.
2009-01-01
Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur; two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Struthiomimus sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a full mounted skeleton to be imaged, resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high-resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize that future biomechanical assessments of extinct taxa should be preceded by a detailed investigation of the plausible range of mass properties, in which sensitivity analyses are used to identify a suite of possible values to be tested as inputs in analytical models. PMID:19225569
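The kind of sensitivity sweep described above can be illustrated with a toy calculation: segment volumes and respiratory (air-space) volumes are varied over assumed ranges and the resulting whole-body mass and centre-of-mass position are tabulated. The segment dimensions, densities, and scaling factors below are invented for illustration and are unrelated to the specimens in the study.

```python
# Toy sensitivity analysis of body mass and centre of mass for a segmented model.
import numpy as np
from itertools import product

segments = {                      # name: (baseline volume m^3, x-position of segment centroid m)
    "torso": (4.0, 0.0),
    "tail": (1.5, -3.0),
    "head_neck": (0.8, 2.5),
    "legs": (1.2, -0.3),
}
density, lung_fraction = 1000.0, 0.10      # kg/m^3; air space as a fraction of torso volume

for vol_scale, lung_scale in product([0.9, 1.0, 1.1], [0.5, 1.0, 1.5]):
    masses, moments = [], []
    for name, (vol, x) in segments.items():
        v = vol * vol_scale
        if name == "torso":
            v -= lung_fraction * lung_scale * vol * vol_scale   # subtract air spaces
        masses.append(density * v)
        moments.append(density * v * x)
    com_x = sum(moments) / sum(masses)
    print("vol x%.1f lung x%.1f -> mass %.0f kg, COM x = %.2f m"
          % (vol_scale, lung_scale, sum(masses), com_x))
```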
NASA Astrophysics Data System (ADS)
Wiebe, K.; Lotze-Campen, H.; Bodirsky, B.; Kavallari, A.; Mason-d'Croz, D.; van der Mensbrugghe, D.; Robinson, S.; Sands, R.; Tabeau, A.; Willenbockel, D.; Islam, S.; van Meijl, H.; Mueller, C.; Robertson, R.
2014-12-01
Previous studies have combined climate, crop and economic models to examine the impact of climate change on agricultural production and food security, but results have varied widely due to differences in models, scenarios and data. Recent work has examined (and narrowed) these differences through systematic model intercomparison using a high-emissions pathway to highlight the differences. New work extends that analysis to cover a range of plausible socioeconomic scenarios and emission pathways. Results from three general circulation models are combined with one crop model and five global economic models to examine the global and regional impacts of climate change on yields, area, production, prices and trade for coarse grains, rice, wheat, oilseeds and sugar to 2050. Results show that yield impacts vary with changes in population, income and technology as well as emissions, but are reduced in all cases by endogenous changes in prices and other variables.
Comparing supply and demand models for future photovoltaic power generation in the USA
Basore, Paul A.; Cole, Wesley J.
2018-02-22
We explore the plausible range of future deployment of photovoltaic generation capacity in the USA using a supply-focused model based on supply-chain growth constraints and a demand-focused model based on minimizing the overall cost of the electricity system. Both approaches require assumptions based on previous experience and anticipated trends. For each of the models, we assign plausible ranges for the key assumptions and then compare the resulting PV deployment over time. Each model was applied to 2 different future scenarios: one in which PV market penetration is ultimately constrained by the uncontrolled variability of solar power and one in which low-cost energy storage or some equivalent measure largely alleviates this constraint. The supply-focused and demand-focused models are in substantial agreement, not just in the long term, where deployment is largely determined by the assumed market penetration constraints, but also in the interim years. For the future scenario without low-cost energy storage or equivalent measures, the 2 models give an average plausible range of PV generation capacity in the USA of 150 to 530 GWdc in 2030 and 260 to 810 GWdc in 2040. With low-cost energy storage or equivalent measures, the corresponding ranges are 160 to 630 GWdc in 2030 and 280 to 1200 GWdc in 2040. The latter range is enough to supply 10% to 40% of US electricity demand in 2040, based on current demand growth.
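The sketch below illustrates the general flavor of a supply-chain-constrained deployment curve: annual shipments grow at a bounded rate that is throttled as cumulative capacity approaches an assumed market-penetration ceiling. The growth rate, ceiling, and starting point are assumptions for illustration, not the paper's inputs or results.

```python
# Toy supply-constrained PV deployment trajectory with a market-penetration ceiling.
growth, ceiling = 0.15, 800.0        # maximum annual shipment growth; capacity ceiling (GWdc)
shipments, installed = 15.0, 60.0    # assumed starting shipments (GWdc/yr) and installed base (GWdc)

trajectory = {}
for year in range(2018, 2041):
    headroom = max(0.0, 1.0 - installed / ceiling)
    shipments *= 1.0 + growth * headroom     # growth throttled as the ceiling is approached
    installed += shipments
    trajectory[year] = installed

print({y: round(v) for y, v in trajectory.items() if y in (2030, 2040)})
```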
Parsons, T J; Thomas, C; Power, C
2009-08-01
To investigate patterns of, and associations between, physical activity at work and in leisure time, television viewing and computer use. 4531 men and 4594 women with complete plausible data, aged 44-45 years, participating in the 1958 British birth cohort study. Physical activity, television viewing and computer use (hours/week) were estimated using a self-complete questionnaire, and intensity (MET hours/week) was derived for physical activity. Relationships were investigated using linear regression and chi-squared tests. From a target sample of 11,971, 9223 provided information on physical activity, of whom 75% and 47% provided complete and plausible activity data on work and leisure-time activity, respectively. Men and women spent a median of 40.2 and 34.2 h/week, respectively, in work activity, and 8.3 and 5.8 h/week in leisure activity. Half of all participants watched television for ≥2 h/day, and half used a computer for <1 h/day. Longer work hours were not associated with a shorter duration of leisure activity, but were associated with a shorter duration of computer use (men only). In men, higher work MET hours were associated with higher leisure-time MET hours, and shorter durations of television viewing and computer use. Watching more television was related to fewer hours or MET hours of leisure activity, as was longer computer use in men. Longer computer use was related to more hours (or MET hours) in leisure activities in women. Physical activity levels at work and in leisure time in mid-adulthood are low. Television viewing (and computer use in men) may compete with leisure activity for time, whereas longer duration of work hours is less influential. To change active and sedentary behaviours, better understanding of barriers and motivators is needed.
Expanding the Space of Plausible Solutions in a Medical Tutoring System for Problem-Based Learning
ERIC Educational Resources Information Center
Kazi, Hameedullah; Haddawy, Peter; Suebnukarn, Siriwan
2009-01-01
In well-defined domains such as Physics, Mathematics, and Chemistry, solutions to a posed problem can objectively be classified as correct or incorrect. In ill-defined domains such as medicine, the classification of solutions to a patient problem as correct or incorrect is much more complex. Typical tutoring systems accept only a small set of…
ERIC Educational Resources Information Center
Loke, Swee-Kin; Golding, Clinton
2016-01-01
This article addresses learning in desktop virtual worlds where students role play for professional education. When students role play in such virtual worlds, they can learn some knowledge and skills that are useful in the physical world. However, existing learning theories do not provide a plausible explanation of how performing non-verbal…
Barry, Dwight; McDonald, Shea
2013-01-01
Climate change could significantly influence seasonal streamflow and water availability in the snowpack-fed watersheds of Washington, USA. Descriptions of snowpack decline often use linear ordinary least squares (OLS) models to quantify this change. However, the region's precipitation is known to be related to climate cycles. If snowpack decline is more closely related to these cycles, an OLS model cannot account for this effect, and thus both descriptions of trends and estimates of decline could be inaccurate. We used intervention analysis to determine whether snow water equivalent (SWE) in 25 long-term snow courses within the Olympic and Cascade Mountains are more accurately described by OLS (to represent gradual change), stationary (to represent no change), or step-stationary (to represent climate cycling) models. We used Bayesian information-theoretic methods to determine these models' relative likelihood, and we found 90 models that could plausibly describe the statistical structure of the 25 snow courses' time series. Posterior model probabilities of the 29 "most plausible" models ranged from 0.33 to 0.91 (mean = 0.58, s = 0.15). The majority of these time series (55%) were best represented as step-stationary models with a single breakpoint at 1976/77, coinciding with a major shift in the Pacific Decadal Oscillation. However, estimates of SWE decline differed by as much as 35% between statistically plausible models of a single time series. This ambiguity is a critical problem for water management policy. Approaches such as intervention analysis should become part of the basic analytical toolkit for snowpack or other climatic time series data.
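The model comparison described above can be sketched with a synthetic snow-water-equivalent series: an OLS trend model, a step-stationary model with a single 1976/77 breakpoint, and a stationary model are fit and compared with BIC-derived posterior weights. The synthetic data, noise level, and step size are illustrative assumptions.

```python
# Compare OLS trend, step-stationary (1976/77 breakpoint), and stationary models via BIC weights.
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1950, 2010)
swe = np.where(years < 1977, 100.0, 85.0) + rng.normal(0, 8, years.size)   # synthetic step change

def bic(rss, k, n):
    return n * np.log(rss / n) + k * np.log(n)

n = years.size
# OLS linear trend
X = np.column_stack([np.ones(n), years - years.mean()])
beta, *_ = np.linalg.lstsq(X, swe, rcond=None)
rss_ols = np.sum((swe - X @ beta) ** 2)
# Step-stationary: separate means before and after the breakpoint
pre, post = swe[years < 1977], swe[years >= 1977]
rss_step = np.sum((pre - pre.mean()) ** 2) + np.sum((post - post.mean()) ** 2)
# Stationary: a single mean
rss_stat = np.sum((swe - swe.mean()) ** 2)

bics = np.array([bic(rss_ols, 2, n), bic(rss_step, 2, n), bic(rss_stat, 1, n)])
w = np.exp(-0.5 * (bics - bics.min())); w /= w.sum()
print(dict(zip(["OLS trend", "step-stationary", "stationary"], np.round(w, 3))))
```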
A 3D model for rubber tyres contact, based on Kalker's methods through the STRIPES model
NASA Astrophysics Data System (ADS)
Chollet, Hugues
2012-01-01
A project on pavement-rutting evolution under the effect of a tram on tyres led the author to make a link between road and railway approaches to the problem of rolling contact. A simplified model is proposed with a fine description of the contact patch between a tyre and the road, and a more realistic pressure and shear-stress distribution than that of previously available basic models. Experimental measurements are used to identify some characteristics of the force description, while the geometric shape of the tyre-road section is used, as in traditional rail-wheel contact models, to build the 3D model. The last part validates a plausible contact pressure shape from self-aligning torque measurements and from Kalker's contact stress gradient applied to the real tyre used in the project. The final result is a brush model extended from the wheel-rail STRIPES one, applicable to dynamics or contact studies of real tyres, with a physical coupling between longitudinal, lateral and spin effects, and a relatively fine description of the contact stresses along each strip of each tyre of the vehicle on an uneven road.
Giant impactors - Plausible sizes and populations
NASA Technical Reports Server (NTRS)
Hartmann, William K.; Vail, S. M.
1986-01-01
The largest sizes of planetesimals required to explain the spin properties of planets are investigated in the context of the impact-trigger hypothesis of lunar origin. Solar system models with different large-impactor sources are constructed, and stochastic variations in obliquities and rotation periods resulting from each source are studied. The present study finds it highly plausible that the Earth was struck by a body of about 0.03-0.12 Earth masses with enough energy and angular momentum to dislodge mantle material and form the present Earth-Moon system.
Surma, Szymon; Pakhomov, Evgeny A.; Pitcher, Tony J.
2014-01-01
The aim of this study was to examine the ecological plausibility of the “krill surplus” hypothesis and the effects of whaling on the Southern Ocean food web using mass-balance ecosystem modelling. The depletion trajectory and unexploited biomass of each rorqual population in the Antarctic was reconstructed using yearly catch records and a set of species-specific surplus production models. The resulting estimates of the unexploited biomass of Antarctic rorquals were used to construct an Ecopath model of the Southern Ocean food web existing in 1900. The rorqual depletion trajectory was then used in an Ecosim scenario to drive rorqual biomasses and examine the “krill surplus” phenomenon and whaling effects on the food web in the years 1900–2008. An additional suite of Ecosim scenarios reflecting several hypothetical trends in Southern Ocean primary productivity were employed to examine the effect of bottom-up forcing on the documented krill biomass trend. The output of the Ecosim scenarios indicated that while the “krill surplus” hypothesis is a plausible explanation of the biomass trends observed in some penguin and pinniped species in the mid-20th century, the excess krill biomass was most likely eliminated by a rapid decline in primary productivity in the years 1975–1995. Our findings suggest that changes in physical conditions in the Southern Ocean during this time period could have eliminated the ecological effects of rorqual depletion, although the mechanism responsible is currently unknown. Furthermore, a decline in iron bioavailability due to rorqual depletion may have contributed to the rapid decline in overall Southern Ocean productivity during the last quarter of the 20th century. The results of this study underscore the need for further research on historical changes in the roles of top-down and bottom-up forcing in structuring the Southern Ocean food web. PMID:25517505
Some uses of models of quantitative genetic selection in social science.
Weight, Michael D; Harpending, Henry
2017-01-01
The theory of selection of quantitative traits is widely used in evolutionary biology, agriculture and other related fields. The fundamental model known as the breeder's equation is simple, robust over short time scales, and it is often possible to estimate plausible parameters. In this paper it is suggested that the results of this model provide useful yardsticks for the description of social traits and the evaluation of transmission models. The differences on a standard personality test between samples of Old Order Amish and Indiana rural young men from the same county and the decline of homicide in Medieval Europe are used as illustrative examples of the overall approach. It is shown that the decline of homicide is unremarkable under a threshold model while the differences between rural Amish and non-Amish young men are too large to be a plausible outcome of simple genetic selection in which assortative mating by affiliation is equivalent to truncation selection.
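A worked numerical example of the breeder's equation, R = h² S, is given below as the kind of yardstick the paper describes. The heritability, selection differential, and number of generations are illustrative values, not figures taken from the Amish or homicide analyses.

```python
# Breeder's equation as a yardstick: per-generation response to selection.
h2 = 0.4          # assumed narrow-sense heritability of the trait
S = 0.5           # assumed selection differential, in phenotypic standard deviations
R = h2 * S        # expected per-generation response (breeder's equation)

generations = 20
print("response per generation: %.2f SD" % R)
print("cumulative shift after %d generations: %.1f SD" % (generations, generations * R))
```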
Isolating the anthropogenic component of Arctic warming
Chylek, Petr; Hengartner, Nicholas; Lesins, Glen; ...
2014-05-28
Structural equation modeling is used in statistical applications as both confirmatory and exploratory modeling to test models and to suggest the most plausible explanation for a relationship between the independent and the dependent variables. Although structural analysis cannot prove causation, it can suggest the most plausible set of factors that influence the observed variable. Here, we apply structural model analysis to the annual mean Arctic surface air temperature from 1900 to 2012 to find the most effective set of predictors and to isolate the anthropogenic component of the recent Arctic warming by subtracting the effects of natural forcing and variability from the observed temperature. We also find that anthropogenic greenhouse gas and aerosol radiative forcing and the Atlantic Multidecadal Oscillation internal mode dominate Arctic temperature variability. Finally, our structural model analysis of observational data suggests that about half of the recent Arctic warming of 0.64 K/decade may have anthropogenic causes.
van den Berg, Ronald; Roerdink, Jos B T M; Cornelissen, Frans W
2010-01-22
An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called "crowding". Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, "compulsory averaging", and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality.
Monte Carlo Analysis of Reservoir Models Using Seismic Data and Geostatistical Models
NASA Astrophysics Data System (ADS)
Zunino, A.; Mosegaard, K.; Lange, K.; Melnikova, Y.; Hansen, T. M.
2013-12-01
We present a study on the analysis of petroleum reservoir models consistent with seismic data and geostatistical constraints, performed on a synthetic reservoir model. Our aim is to invert directly for the structure and rock bulk properties of the target reservoir zone. To infer rock facies, porosity and oil saturation, seismology alone is not sufficient; a rock physics model, which links the unknown properties to the elastic parameters, must also be taken into account. We then combine a rock physics model with a simple convolutional approach for seismic waves to invert the "measured" seismograms. To solve this inverse problem, we employ a Markov chain Monte Carlo (MCMC) method, because it offers the possibility to handle non-linearity and complex, multi-step forward models and provides realistic estimates of uncertainties. However, for large data sets the MCMC method may be impractical because of a very high computational demand. To face this challenge, one strategy is to feed the algorithm with realistic models, hence relying on proper prior information. To address this problem, we utilize an algorithm drawn from geostatistics to generate geologically plausible models which represent samples of the prior distribution. The geostatistical algorithm learns the multiple-point statistics from prototype models (in the form of training images), then generates thousands of different models which are accepted or rejected by a Metropolis sampler. To further reduce the computation time, we parallelize the software and run it on multi-core machines. The solution of the inverse problem is then represented by a collection of reservoir models in terms of facies, porosity and oil saturation, which constitute samples of the posterior distribution. We are finally able to produce probability maps of the properties of interest by performing statistical analysis on the collection of solutions.
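A schematic Metropolis sampler in the spirit of this workflow is sketched below: candidate models are drawn from a prior-model generator (a toy layered-porosity simulator standing in for the multiple-point geostatistical algorithm), pushed through a toy rock-physics-plus-convolution forward model, and accepted or rejected against synthetic "observed" seismograms. Every component here is an invented stand-in, not the study's actual rock physics model, wavelet, or sampler settings.

```python
# Independence Metropolis sampler over prior-generated reservoir models.
import numpy as np

rng = np.random.default_rng(5)
nz = 60
wavelet = np.exp(-0.5 * (np.arange(-10, 11) / 2.0) ** 2)      # simple seismic wavelet

def prior_sample():
    """Stand-in for a geostatistical simulation: a random layered porosity model."""
    boundaries = np.sort(rng.choice(np.arange(5, nz - 5), 3, replace=False))
    vals = rng.uniform(0.05, 0.3, 4)
    return np.repeat(vals, np.diff(np.r_[0, boundaries, nz]))

def forward(phi):
    """Toy rock physics (impedance falls with porosity) followed by convolution."""
    impedance = 9000.0 - 12000.0 * phi
    refl = np.diff(impedance) / (impedance[1:] + impedance[:-1])
    return np.convolve(refl, wavelet, mode="same")

true_phi = prior_sample()
data = forward(true_phi) + rng.normal(0, 0.05, nz - 1)         # synthetic observed seismogram
misfit = lambda m: np.sum((forward(m) - data) ** 2) / (2 * 0.05 ** 2)

current = prior_sample()
samples = []
for _ in range(2000):
    proposal = prior_sample()                                  # independence proposal from the prior
    if np.log(rng.uniform()) < misfit(current) - misfit(proposal):
        current = proposal
    samples.append(current)
print("posterior mean porosity, first 10 cells:", np.round(np.mean(samples, axis=0)[:10], 3))
```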
Morality Principles for Risk Modelling: Needs and Links with the Origins of Plausible Inference
NASA Astrophysics Data System (ADS)
Solana-Ortega, Alberto; Solana, Vicente
2009-12-01
In comparison with the foundations of probability calculus, the inescapable and controversial issue of how to assign probabilities has only recently become a matter of formal study. The introduction of information as a technical concept was a milestone, but the most promising entropic assignment methods still face unsolved difficulties, manifesting the incompleteness of plausible inference theory. In this paper we examine the situation faced by risk analysts in the critical field of extreme events modelling, where the former difficulties are especially visible, due to scarcity of observational data, the large impact of these phenomena and the obligation to assume professional responsibilities. To respond to the claim for a sound framework to deal with extremes, we propose a metafoundational approach to inference, based on a canon of extramathematical requirements. We highlight their strong moral content, and show how this emphasis in morality, far from being new, is connected with the historic origins of plausible inference. Special attention is paid to the contributions of Caramuel, a contemporary of Pascal, unfortunately ignored in the usual mathematical accounts of probability.
Sea ice thermohaline dynamics and biogeochemistry in the Arctic Ocean: Empirical and model results
NASA Astrophysics Data System (ADS)
Duarte, Pedro; Meyer, Amelie; Olsen, Lasse M.; Kauko, Hanna M.; Assmy, Philipp; Rösel, Anja; Itkin, Polona; Hudson, Stephen R.; Granskog, Mats A.; Gerland, Sebastian; Sundfjord, Arild; Steen, Harald; Hop, Haakon; Cohen, Lana; Peterson, Algot K.; Jeffery, Nicole; Elliott, Scott M.; Hunke, Elizabeth C.; Turner, Adrian K.
2017-07-01
Large changes in the sea ice regime of the Arctic Ocean have occurred over the last decades justifying the development of models to forecast sea ice physics and biogeochemistry. The main goal of this study is to evaluate the performance of the Los Alamos Sea Ice Model (CICE) to simulate physical and biogeochemical properties at time scales of a few weeks and to use the model to analyze ice algal bloom dynamics in different types of ice. Ocean and atmospheric forcing data and observations of the evolution of the sea ice properties collected from 18 April to 4 June 2015, during the Norwegian young sea ICE expedition, were used to test the CICE model. Our results show the following: (i) model performance is reasonable for sea ice thickness and bulk salinity; good for vertically resolved temperature, vertically averaged Chl a concentrations, and standing stocks; and poor for vertically resolved Chl a concentrations. (ii) Improving current knowledge about nutrient exchanges, ice algal recruitment, and motion is critical to improve sea ice biogeochemical modeling. (iii) Ice algae may bloom despite some degree of basal melting. (iv) Ice algal motility driven by gradients in limiting factors is a plausible mechanism to explain their vertical distribution. (v) Different ice algal bloom and net primary production (NPP) patterns were identified in the ice types studied, suggesting that ice algal maximal growth rates will increase, while sea ice vertically integrated NPP and biomass will decrease as a result of the predictable increase in the area covered by refrozen leads in the Arctic Ocean.
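The abstract reports only qualitative skill categories ("reasonable", "good", "poor"). As a rough illustration of how such ratings could be backed by numbers, the sketch below scores hypothetical modelled versus observed Chl a profiles with an RMSE and a correlation coefficient; all values and the choice of metrics are assumptions for illustration, not those used in the study.

```python
import numpy as np

# Hypothetical modelled vs. observed vertical Chl a profiles (mg m^-3);
# values are invented purely to illustrate one way of scoring model skill.
observed = np.array([0.2, 0.3, 0.5, 1.1, 2.4, 4.0])
modelled = np.array([0.3, 0.4, 0.4, 0.8, 1.5, 4.5])

rmse = np.sqrt(np.mean((modelled - observed) ** 2))   # root-mean-square error
corr = np.corrcoef(modelled, observed)[0, 1]          # linear correlation
print(f"RMSE = {rmse:.2f} mg m^-3, r = {corr:.2f}")
```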
Spectral properties of blast-wave models of gamma-ray burst sources
NASA Technical Reports Server (NTRS)
Meszaros, P.; Rees, M. J.; Papathanassiou, H.
1994-01-01
We calculate the spectrum of blast-wave models of gamma-ray burst sources, for various assumptions about the magnetic field density and the relativistic particle acceleration efficiency. For a range of physically plausible models we find that the radiation efficiency is high and leads to nonthermal spectra with breaks at various energies comparable to those observed in the gamma-ray range. Radiation is also predicted at other wavebands, in particular at X-ray, optical/UV, and GeV/TeV energies. We discuss the spectra as a function of duration for three basic types of models, and for cosmological, halo, and galactic disk distances. We also evaluate the gamma-ray fluences and the spectral characteristics for a range of external densities. Impulsive burst models at cosmological distances can satisfy the conventional X-ray paucity constraint S(sub x)/S(sub gamma)less than a few percent over a wide range of durations, but galactic models can do so only for bursts shorter than a few seconds, unless additional assumptions are made. The emissivity is generally larger for bursts in a denser external environment, with the efficiency increasing up to the point where all the energy input is radiated away.
Comparison of GEANT4 very low energy cross section models with experimental data in water.
Incerti, S; Ivanchenko, A; Karamitros, M; Mantero, A; Moretto, P; Tran, H N; Mascialino, B; Champion, C; Ivanchenko, V N; Bernal, M A; Francis, Z; Villagrasa, C; Baldacchin, G; Guèye, P; Capra, R; Nieminen, P; Zacharatou, C
2010-09-01
The GEANT4 general-purpose Monte Carlo simulation toolkit is able to simulate physical interaction processes of electrons, hydrogen and helium atoms with charge states (H0, H+) and (He0, He+, He2+), respectively, in liquid water, the main component of biological systems, down to the electron volt regime and the submicrometer scale, providing GEANT4 users with the so-called "GEANT4-DNA" physics models suitable for microdosimetry simulation applications. The corresponding software has been recently re-engineered in order to provide GEANT4 users with a coherent and unique approach to the simulation of electromagnetic interactions within the GEANT4 toolkit framework (since GEANT4 version 9.3 beta). This work presents a quantitative comparison of these physics models with a collection of experimental data in water collected from the literature. An evaluation of the closeness between the total and differential cross section models available in the GEANT4 toolkit for microdosimetry and experimental reference data is performed using a dedicated statistical toolkit that includes the Kolmogorov-Smirnov statistical test. The authors used experimental data acquired in water vapor as direct measurements in the liquid phase are not yet available in the literature. Comparisons with several recommendations are also presented. The authors have assessed the compatibility of experimental data with GEANT4 microdosimetry models by means of quantitative methods. The results show that microdosimetric measurements in liquid water are necessary to assess quantitatively the validity of the software implementation for the liquid water phase. Nevertheless, a comparison with existing experimental data in water vapor provides a qualitative appreciation of the plausibility of the simulation models. The existing reference data themselves should undergo a critical interpretation and selection, as some of the series exhibit significant deviations from each other. The GEANT4-DNA physics models available in the GEANT4 toolkit have been compared in this article to available experimental data in the water vapor phase as well as to several published recommendations on the mass stopping power. These models represent a first step in the extension of the GEANT4 Monte Carlo toolkit to the simulation of biological effects of ionizing radiation.
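A minimal sketch of a Kolmogorov-Smirnov-style comparison of the kind described above, assuming two cross-section curves sampled on a common energy grid; the data are synthetic and the function is not the dedicated statistical toolkit used in the GEANT4-DNA validation.

```python
import numpy as np

def ks_distance(sigma_model, sigma_exp):
    """KS-style distance between two cross-section curves.

    Each curve is normalised to unit area on the shared grid so the two can be
    compared as cumulative distributions; illustrative only.
    """
    cdf_model = np.cumsum(sigma_model) / np.sum(sigma_model)
    cdf_exp = np.cumsum(sigma_exp) / np.sum(sigma_exp)
    return float(np.max(np.abs(cdf_model - cdf_exp)))

# Hypothetical total cross sections (arbitrary units) over 10 eV - 1 keV.
energies = np.linspace(10.0, 1000.0, 50)
sigma_model = np.exp(-((np.log(energies) - 5.0) ** 2) / 2.0)
sigma_exp = sigma_model * (1.0 + 0.05 * np.random.default_rng(0).normal(size=50))

print(f"KS distance: {ks_distance(sigma_model, sigma_exp):.3f}")
```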
Piantadosi, Steven T.; Hayden, Benjamin Y.
2015-01-01
Economists often model choices as if decision-makers assign each option a scalar value variable, known as utility, and then select the option with the highest utility. It remains unclear whether as-if utility models describe real mental and neural steps in choice. Although choices alone cannot prove the existence of a utility stage, utility transformations are often taken to provide the most parsimonious or psychologically plausible explanation for choice data. Here, we show that it is possible to mathematically transform a large set of common utility-stage two-option choice models (specifically, ones in which dimensions can be decomposed into additive functions) into a heuristic model (specifically, a dimensional prioritization heuristic) that has no utility computation stage. We then show that under a range of plausible assumptions, both classes of model predict similar neural responses. These results highlight the difficulties in using neuroeconomic data to infer the existence of a value stage in choice. PMID:25914613
Long-period seismology on Europa: 1. Physically consistent interior models
NASA Astrophysics Data System (ADS)
Cammarano, F.; Lekic, V.; Manga, M.; Panning, M.; Romanowicz, B.
2006-12-01
In order to examine the potential of seismology to determine the interior structure and properties of Europa, it is essential to calculate seismic velocities and attenuation for the range of plausible interiors. We calculate a range of models for the physical structure of Europa, as constrained by the satellite's composition, mass, and moment of inertia. We assume a water-ice shell, a pyrolitic or a chondritic mantle, and a core composed of pure iron or iron plus 20 weight percent of sulfur. We consider two extreme mantle thermal states: hot and cold. Given a temperature and composition, we determine density, seismic velocities, and attenuation using thermodynamical models. While anelastic effects will be negligible in a cold mantle and the brittle part of the ice shell, strong dispersion and dissipation are expected in a hot convective mantle and the bulk of the ice shell. There is a strong relationship between different thermal structures and compositions. The "hot" mantle may maintain temperatures consistent with a liquid core made of iron plus light elements. For the "cold" scenarios, the possibility of a solid iron core cannot be excluded, and it may even be favored. The depths of the ocean and core-mantle boundary are determined with high precision, 10 km and 40 km, respectively, once we assume a composition and thermal structure. Furthermore, the depth of the ocean is relatively insensitive (4 km) to the core composition used.
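For intuition about how composition, mass, and moment of inertia constrain such layered models, here is a small sketch that computes the mass and normalized moment of inertia of an assumed three-layer Europa; the layer radii and densities are illustrative guesses, not the models of the paper.

```python
import numpy as np

def mass_and_moi_factor(radii_km, densities_kgm3):
    """Mass and normalized moment of inertia C/(M R^2) for concentric homogeneous
    shells, listed innermost first (each entry is the layer's outer radius)."""
    r = np.asarray(radii_km, dtype=float) * 1e3
    rho = np.asarray(densities_kgm3, dtype=float)
    r_in = np.concatenate(([0.0], r[:-1]))
    mass = np.sum(4.0 / 3.0 * np.pi * rho * (r ** 3 - r_in ** 3))
    moi = np.sum(8.0 / 15.0 * np.pi * rho * (r ** 5 - r_in ** 5))
    return mass, moi / (mass * r[-1] ** 2)

# Hypothetical three-layer Europa: iron core, rocky mantle, water/ice shell.
mass, c_over_mr2 = mass_and_moi_factor([600.0, 1430.0, 1560.0], [8000.0, 3300.0, 1000.0])
print(f"mass = {mass:.3e} kg, C/MR^2 = {c_over_mr2:.3f}")  # measured values are ~4.8e22 kg, ~0.346
```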
Shi, Yunfei; Yao, Jiang; Young, Jonathan M.; Fee, Judy A.; Perucchio, Renato; Taber, Larry A.
2014-01-01
The morphogenetic process of cardiac looping transforms the straight heart tube into a curved tube that resembles the shape of the future four-chambered heart. Although great progress has been made in identifying the molecular and genetic factors involved in looping, the physical mechanisms that drive this process have remained poorly understood. Recent work, however, has shed new light on this complicated problem. After briefly reviewing the current state of knowledge, we propose a relatively comprehensive hypothesis for the mechanics of the first phase of looping, termed c-looping, as the straight heart tube deforms into a c-shaped tube. According to this hypothesis, differential hypertrophic growth in the myocardium supplies the main forces that cause the heart tube to bend ventrally, while regional growth and cytoskeletal contraction in the omphalomesenteric veins (primitive atria) and compressive loads exerted by the splanchnopleuric membrane drive rightward torsion. A computational model based on realistic embryonic heart geometry is used to test the physical plausibility of this hypothesis. The behavior of the model is in reasonable agreement with available experimental data from control and perturbed embryos, offering support for our hypothesis. The results also suggest, however, that several other mechanisms contribute secondarily to normal looping, and we speculate that these mechanisms play backup roles when looping is perturbed. Finally, some outstanding questions are discussed for future study. PMID:25161623
Rapid variation in the circumstellar 10 micron emission of Alpha Orionis
NASA Technical Reports Server (NTRS)
Bloemhof, E. E.; Danchi, W. C.; Townes, C. H.
1985-01-01
The spatial distribution of 10 micron continuum flux around the supergiant star Alpha Orionis was measured on two occasions separated by an interval of 1 yr. A significant change in the infrared radiation pattern on the subarcsecond scale was observed. This change cannot be explained plausibly by macroscopic motion but may be due to a change in the physical properties of the circumstellar dust.
Possible relationships between solar activity and meteorological phenomena
NASA Technical Reports Server (NTRS)
Bandeen, W. R. (Editor); Maran, S. P. (Editor)
1975-01-01
A symposium was conducted in which the following questions were discussed: (1) the evidence concerning possible relationships between solar activity and meteorological phenomena; (2) plausible physical mechanisms to explain these relationships; and (3) kinds of critical measurements needed to determine the nature of solar/meteorological relationships and/or the mechanisms to explain them, and which of these measurements can be accomplished best from space.
Predicting Spectra of Accretion Disks Around Galactic Black Holes
NASA Technical Reports Server (NTRS)
Krolik, Julian H.
2004-01-01
The purpose of this grant was to construct detailed atmosphere solutions in order to predict the spectra of accretion disks around Galactic black holes. Our plan of action was to take an existing disk atmosphere code (TLUSTY, created by Ivan Hubeny) and introduce those additional physical processes necessary to make it applicable to disks of this variety. These modifications include: treating Comptonization; introducing continuous opacity due to heavy elements; incorporating line opacity due to heavy elements; adopting a disk structure that reflects readjustments due to radiation pressure effects; and injecting heat via a physically-plausible vertical distribution.
DOT National Transportation Integrated Search
2002-01-01
Business models and cost recovery are the critical factors for determining the sustainability of the 511 traveler information service. In March 2001 the Policy Committee directed the 511 Working Group to investigate plausible business models and...
Proof of age required--estimating age in adults without birth records.
Phillips, Christine; Narayanasamy, Shanti
2010-07-01
Many adults from refugee source countries do not have documents of birth, either because they have been lost in flight, or because the civil infrastructure is too fragile to support routine recording of birth. In Western countries, date of birth is used as a basic identifier, and access to services and support tends to be age regulated. Doctors are not infrequently asked to write formal reports estimating the true age of adult refugees; however, there are no existing guidelines to assist in this task. To provide an overview of methods to estimate age in living adults, and outline recommendations for best practice. Age should be estimated through physical examination; life history, matching local or national events with personal milestones; and existing nonformal documents. Accuracy of age estimation should be subject to three tests: biological plausibility, historical plausibility, and corroboration from reputable sources.
Phenomenology of pure-gauge hidden valleys at hadron colliders
NASA Astrophysics Data System (ADS)
Juknevich, Jose E.
Expectations for new physics at the LHC have been greatly influenced by the Hierarchy problem of electroweak symmetry breaking. However, there are reasons to believe that the LHC may still discover new physics, but not directly related to the resolution of the Hierarchy problem. Ensuring that such physics does not go undiscovered requires a precise understanding of how new phenomena will reveal themselves in the current and future generation of particle-physics experiments. Given this fact, it seems sensible to explore other approaches to this problem; we study three alternatives here. In this thesis I argue for the plausibility that the standard model is coupled, through new massive charged or colored particles, to a hidden sector whose low-energy dynamics is controlled by a pure Yang-Mills theory, with no light matter. Such a sector would have numerous metastable "hidden glueballs" built from the hidden gluons. These states would decay to particles of the standard model. I consider the phenomenology of this scenario, and find formulas for the lifetimes and branching ratios of the most important of these states. The dominant decays are to two standard model gauge bosons or to fermion-antifermion pairs, or by radiative decays with photon or Higgs emission, leading to jet- and photon-rich signals and occasional leptons. The presence of effective operators of different mass dimensions, often competing with each other, together with a great diversity of states, leads to a great variability in the lifetimes and decay modes of the hidden glueballs. I find that most of the operators considered in this work are not heavily constrained by precision electroweak physics, therefore leaving plenty of room in the parameter space to be explored by future experiments at the LHC. Finally, I discuss several issues in the phenomenology of the new massive particles as well as an outlook for experimental searches.
Rind, Esther; Jones, Andy; Southall, Humphrey
2014-03-01
In recent decades, the prevalence of physical activity has declined considerably in many developed countries, which has been related to rising levels of obesity and several weight-related medical conditions, such as coronary heart disease. There is evidence that areas exhibiting particularly low levels of physical activity have undergone a strong transition away from employment in physically demanding occupations. It is proposed that such processes of deindustrialisation may be causally linked to unexplained geographical disparities in physical activity. This study investigates how geographical variations in deindustrialisation are associated with current levels of physical activity across different activity domains and relevant macro-economic time periods in England. The analysis includes data on 27,414 adults from the Health Survey for England 2006 and 2008 who reported total, occupational, domestic, recreational and walking activity. Based on employment change in industries associated with heavy manual work, a local measurement of industrial decline was developed, covering the period 1841-2001. We applied a multilevel modelling approach to study associations between industrial decline and physical activity. Results indicate that the process of deindustrialisation appears to be associated with patterns of physical activity and that this is independent of household income. The effects observed were generally similar for men and women. However, the nature of the association differed across areas, time periods and employment types; in particular, residents of districts characterised by a history of manufacturing and mining employment had increased odds of reporting low activity levels. We conclude that post-industrial change may be a factor in explaining present-day variations in physical activity, emphasising the plausible impact of inherited cultures and regional identities on health related behaviours. Copyright © 2013 Elsevier Ltd. All rights reserved.
Understanding extreme quasar optical variability with CRTS - I. Major AGN flares
NASA Astrophysics Data System (ADS)
Graham, Matthew J.; Djorgovski, S. G.; Drake, Andrew J.; Stern, Daniel; Mahabal, Ashish A.; Glikman, Eilat; Larson, Steve; Christensen, Eric
2017-10-01
There is a large degree of variety in the optical variability of quasars and it is unclear whether this is all attributable to a single (set of) physical mechanism(s). We present the results of a systematic search for major flares in active galactic nuclei (AGN) in the Catalina Real-time Transient Survey as part of a broader study into extreme quasar variability. Such flares are defined in a quantitative manner as lying atop the normal, stochastic variability of quasars. We have identified 51 events from over 900 000 known quasars and high-probability quasar candidates, typically lasting 900 d and with a median peak amplitude of Δm = 1.25 mag. Characterizing the flare profile with a Weibull distribution, we find that nine of the sources are well described by a single-point single-lens model. This supports the proposal by Lawrence et al. that microlensing is a plausible physical mechanism for extreme variability. However, we attribute the majority of our events to explosive stellar-related activity in the accretion disc: superluminous supernovae, tidal disruption events and mergers of stellar mass black holes.
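A minimal sketch of characterizing a flare profile with a Weibull-shaped curve, assuming a synthetic single-flare light-curve excess; the profile parameterization, data, and fitting choices here are illustrative stand-ins, not the authors' pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_flare(t, t0, amp, lam, k):
    """Weibull-shaped flare profile (zero before onset t0); t, t0, lam in days."""
    dt = np.clip(t - t0, 1e-9, None)
    return amp * (k / lam) * (dt / lam) ** (k - 1.0) * np.exp(-(dt / lam) ** k)

# Synthetic magnitude excess above the quiescent level: one ~1.2 mag, ~900 d flare.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 2000.0, 200)
excess = weibull_flare(t, 400.0, 800.0, 500.0, 1.5) + rng.normal(0.0, 0.02, t.size)

popt, _ = curve_fit(weibull_flare, t, excess, p0=[350.0, 500.0, 400.0, 1.2],
                    bounds=([0.0, 0.0, 10.0, 1.0], [1000.0, 5000.0, 2000.0, 5.0]))
print("fitted (t0, amp, lam, k):", np.round(popt, 2))
```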
NASA Astrophysics Data System (ADS)
Urata, T.; Tanabe, Y.; Huynh, K. K.; Yamakawa, Y.; Kontani, H.; Tanigaki, K.
2016-01-01
In high superconducting transition temperature (Tc) iron-based superconductors, interband sign-reversal (s±) and sign-preserving (s++) s-wave superconducting states have been primarily discussed as the plausible superconducting mechanism. We study Co impurity scattering effects on the superconductivity to obtain an important clue about the pairing mechanism using single-crystal Fe1-xCoxSe and depict a phase diagram of a FeSe system. Both superconductivity and structural transition/orbital order are suppressed by the Co replacement on the Fe sites and disappear above x = 0.036. These correlated suppressions represent a common background physics behind these physical phenomena in the multiband Fermi surfaces of FeSe. By comparing experimental data and theories so far proposed, the suppression of Tc against the residual resistivity is shown to be much weaker than that predicted in the case of general sign-reversal and full-gap s± models. The origin of the superconducting pairing in FeSe is discussed in terms of its multiband electronic structure.
Armstrong-Brown, Janelle; Eng, Eugenia; Hammond, Wizdom Powell; Zimmer, Catherine; Bowling, J. Michael
2016-01-01
Physical inactivity is one of the factors contributing to disproportionate disease rates among older African Americans. Previous literature indicates that older African Americans are more likely to live in racially segregated neighborhoods and that racial residential segregation is associated with limited opportunities for physical activity. A cross-sectional mixed-methods study, guided by the concept of therapeutic landscapes, was conducted. Multilevel regression analyses demonstrated that racial residential segregation was associated with more minutes of physical activity and greater odds of meeting physical activity recommendations. Qualitative interviews revealed the following physical activity-related themes: aging of the neighborhood, knowing your neighbors, feeling of safety, and neighborhood racial identity. Perceptions of social cohesion enhanced participants' physical activity, offering a plausible explanation for the higher rates of physical activity found in this population. Understanding how social cohesion operates within racially segregated neighborhoods can help to inform the design of effective interventions for this population. PMID:24812201
Legrand, Fabien D
2014-08-01
We examined the possible mediating role of physical self-perceptions, physical self-esteem, and global self-esteem in the relationships between exercise and depression in a group of socioeconomically disadvantaged women with elevated symptoms of depression. Forty-four female residents of a low-income housing complex were randomized into a 7-week-long exercise-training group or a wait-list group. Depression, physical self-perceptions and self-esteem were measured repeatedly. Significant changes were found for depression, self-esteem, physical self-worth, and self-perceived physical condition in the exercise-training group. Intent-to-treat analyses did not alter the results. Most of the reduction in depression occurred between Week 2 and Week 4 while initial improvement in physical self-worth and self-perceived physical condition was observed between baseline and Week 2. These variables can be seen as plausible mechanisms for effects of exercise on depression.
Maffeis, Claudio; Schutz, Yves; Fornari, Elena; Marigliano, Marco; Tomasselli, Francesca; Tommasi, Mara; Chini, Veronica; Morandi, Anita
2017-05-01
An assessment of total daily energy intake is helpful in planning the overall treatment of children with type 1 diabetes (T1D). However, energy intake misreporting may hinder nutritional intervention. To assess the plausibility of energy intake reporting and the potential role of gender, body mass index (BMI) z-score (z-BMI), disease duration and insulin requirement in energy intake misreporting in a sample of children and adolescents with T1D. The study included 58 children and adolescents aged 8-16 yr with T1D. Anthropometry, blood pressure and glycated hemoglobin (HbA1c) were measured. Subjects were instructed to wear a SenseWear Pro Armband (SWA) for 3 consecutive days, including a weekend day, and to fill out with their parents a weighed dietary record for the same days. Predicted energy expenditure (pEE) was calculated by age- and gender-specific equations, including gender, age, weight, height and physical activity level (assessed by SWA). The percent reported energy intake (rEI)/pEE ratio was used as an estimate of the plausibility of dietary reporting. Misreporting of food intake, especially under-reporting, was common in children and adolescents with T1D: more than one-third of participants were classified as under-reporters and 10% as over-reporters. Age, z-BMI and male gender were associated with the risk of under-reporting (model R² = 0.5). Waist circumference was negatively associated with the risk of over-reporting (model R² = 0.25). Children and adolescents with T1D frequently under-report their food intake. Age, gender and z-BMI contribute to identify potential under-reporters. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
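A minimal sketch of the rEI/pEE plausibility check described above: classify a dietary record by the ratio of reported energy intake to predicted energy expenditure. The cut-off values below are illustrative placeholders, not the thresholds used in the study.

```python
def classify_reporting(reported_intake_kcal, predicted_ee_kcal,
                       lower=0.76, upper=1.24):
    """Classify a record by the reported-intake / predicted-EE ratio.

    lower/upper are hypothetical cut-offs for illustration only; the abstract
    states only that the percent rEI/pEE ratio was used to judge plausibility.
    """
    ratio = reported_intake_kcal / predicted_ee_kcal
    if ratio < lower:
        return ratio, "under-reporter"
    if ratio > upper:
        return ratio, "over-reporter"
    return ratio, "plausible reporter"

print(classify_reporting(1450.0, 2100.0))   # hypothetical child: ratio ~0.69 -> under-reporter
```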
Steps Toward Unveiling the True Population of AGN: Photometric Selection of Broad-Line AGN
NASA Astrophysics Data System (ADS)
Schneider, Evan; Impey, C.
2012-01-01
We present an AGN selection technique that enables identification of broad-line AGN using only photometric data. An extension of infrared selection techniques, our method involves fitting a given spectral energy distribution with a model consisting of three physically motivated components: infrared power law emission, optical accretion disk emission, and host galaxy emission. Each component can be varied in intensity, and a reduced chi-square minimization routine is used to determine the optimum parameters for each object. Using this model, both broad- and narrow-line AGN are seen to fall within discrete ranges of parameter space that have plausible bounds, allowing physical trends with luminosity and redshift to be determined. Based on a fiducial sample of AGN from the catalog of Trump et al. (2009), we find the region occupied by broad-line AGN to be distinct from that of quiescent or star-bursting galaxies. Because this technique relies only on photometry, it will allow us to find AGN at fainter magnitudes than are accessible in spectroscopic surveys, and thus probe a population of less luminous and/or higher redshift objects. With the vast availability of photometric data in large surveys, this technique should have broad applicability and result in large samples that will complement X-ray AGN catalogs.
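To illustrate the fitting mechanics only: the sketch below fits a photometric SED as a non-negative combination of three fixed template shapes and reports a reduced chi-square. The templates and fluxes are crude synthetic stand-ins for the paper's physically motivated components (infrared power law, accretion disk, host galaxy), and the bands are assumed.

```python
import numpy as np
from scipy.optimize import nnls

# Assumed photometric bands (microns) and crude template shapes.
wavelengths = np.array([0.35, 0.55, 0.9, 1.6, 3.6, 4.5, 5.8, 8.0])
ir_powerlaw = wavelengths ** 1.3
disc = wavelengths ** -1.0
galaxy = np.exp(-((np.log(wavelengths) - np.log(1.6)) ** 2) / 0.5)

templates = np.vstack([ir_powerlaw, disc, galaxy]).T
flux = 0.6 * ir_powerlaw + 1.2 * disc + 0.3 * galaxy   # synthetic "observed" SED
flux_err = 0.05 * flux

# Non-negative least squares on error-weighted templates = chi-square minimization
amp, _ = nnls(templates / flux_err[:, None], flux / flux_err)
resid = templates @ amp - flux
chi2_red = np.sum((resid / flux_err) ** 2) / (flux.size - 3)
print("amplitudes (IR, disc, galaxy):", np.round(amp, 2), " reduced chi^2:", round(chi2_red, 3))
```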
Monte Carlo simulation of quantum Zeno effect in the brain
NASA Astrophysics Data System (ADS)
Georgiev, Danko
2015-12-01
Environmental decoherence appears to be the biggest obstacle for successful construction of quantum mind theories. Nevertheless, the quantum physicist Henry Stapp promoted the view that the mind could utilize quantum Zeno effect to influence brain dynamics and that the efficacy of such mental efforts would not be undermined by environmental decoherence of the brain. To address the physical plausibility of Stapp's claim, we modeled the brain using quantum tunneling of an electron in a multiple-well structure such as the voltage sensor in neuronal ion channels and performed Monte Carlo simulations of quantum Zeno effect exerted by the mind upon the brain in the presence or absence of environmental decoherence. The simulations unambiguously showed that the quantum Zeno effect breaks down for timescales greater than the brain decoherence time. To generalize the Monte Carlo simulation results for any n-level quantum system, we further analyzed the change of brain entropy due to the mind probing actions and proved a theorem according to which local projections cannot decrease the von Neumann entropy of the unconditional brain density matrix. The latter theorem establishes that Stapp's model is physically implausible but leaves a door open for future development of quantum mind theories provided the brain has a decoherence-free subspace.
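A small numerical illustration of the entropy statement above, under the usual convention that a non-selective local projective measurement replaces the density matrix by the sum of its projected blocks: the von Neumann entropy of the unconditional post-measurement state is never smaller than before. The 2x2 density matrix is a made-up example, not a model of the brain.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log rho), computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

# Hypothetical two-level mixed state.
rho = np.array([[0.7, 0.25],
                [0.25, 0.3]])

# Non-selective projective measurement in the computational basis:
# off-diagonal coherences are erased in the unconditional state.
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
rho_after = P0 @ rho @ P0 + P1 @ rho @ P1

print("entropy before:", round(von_neumann_entropy(rho), 4))
print("entropy after :", round(von_neumann_entropy(rho_after), 4))  # never smaller
```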
Determination of origin and intended use of plutonium metal using nuclear forensic techniques.
Rim, Jung H; Kuhn, Kevin J; Tandon, Lav; Xu, Ning; Porterfield, Donivan R; Worley, Christopher G; Thomas, Mariam R; Spencer, Khalil J; Stanley, Floyd E; Lujan, Elmer J; Garduno, Katherine; Trellue, Holly R
2017-04-01
Nuclear forensics techniques, including micro-XRF, gamma spectrometry, trace elemental analysis and isotopic/chronometric characterization were used to interrogate two, potentially related plutonium metal foils. These samples were submitted for analysis with only limited production information, and a comprehensive suite of forensic analyses were performed. Resulting analytical data was paired with available reactor model and historical information to provide insight into the materials' properties, origins, and likely intended uses. Both were super-grade plutonium, containing less than 3% 240Pu, and age-dating suggested that the most recent chemical purification occurred in 1948 and 1955 for the respective metals. Additional consideration of reactor modeling feedback and trace elemental observables indicates plausible U.S. reactor origin associated with the Hanford site production efforts. Based on this investigation, the most likely intended use for these plutonium foils was 239Pu fission foil targets for physics experiments, such as cross-section measurements, etc. Copyright © 2017 Elsevier B.V. All rights reserved.
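A simplified sketch of parent-daughter chronometry of the kind used to "age date" plutonium: assuming the daughter was completely removed at the last purification and has accumulated undisturbed since, the elapsed time follows from the measured daughter/parent atom ratio. The ratio below is invented; real measurements use several chronometer pairs and additional corrections.

```python
import numpy as np

# 239Pu decays to 235U (effectively stable on this timescale).
half_life_pu239_yr = 24110.0
lam = np.log(2.0) / half_life_pu239_yr          # decay constant, 1/yr

ratio_u235_to_pu239 = 2.0e-3                    # hypothetical measured atom ratio
age_yr = np.log(1.0 + ratio_u235_to_pu239) / lam   # N_d/N_p = exp(lam*t) - 1
print(f"model purification age: {age_yr:.0f} years")   # ~70 yr before the measurement
```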
Ab initio-aided CALPHAD thermodynamic modeling of the Sn-Pb binary system under current stressing
Lin, Shih-kang; Yeh, Chao-kuei; Xie, Wei; Liu, Yu-chen; Yoshimura, Masahiro
2013-01-01
Soldering is an ancient process, having been developed 5000 years ago. It remains a crucial process with many modern applications. In electronic devices, electric currents pass through solder joints. A new physical phenomenon – the supersaturation of solders under high electric currents – has recently been observed. It involves (1) un-expected supersaturation of the solder matrix phase, and (2) the formation of unusual “ring-shaped” grains. However, the origin of these phenomena is not yet understood. Here we provide a plausible explanation of these phenomena based on the changes in the phase stability of Pb-Sn solders. Ab initio-aided CALPHAD modeling is utilized to translate the electric current-induced effect into the excess Gibbs free energies of the phases. Hence, the phase equilibrium can be shifted by current stressing. The Pb-Sn phase diagrams with and without current stressing clearly demonstrate the change in the phase stabilities of Pb-Sn solders under current stressing. PMID:24060995
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortiz, Gerardo, E-mail: ortizg@indiana.edu; Cobanera, Emilio
We investigate Majorana modes of number-conserving fermionic superfluids from both basic physics principles and concrete-model perspectives. After reviewing a criterion for establishing topological superfluidity in interacting systems, based on many-body fermionic parity switches, we reveal the emergence of zero-energy modes anticommuting with fermionic parity. Those many-body Majorana modes are constructed as coherent superpositions of states with different numbers of fermions. While realization of Majorana modes beyond mean field is plausible, we show that the challenge to quantum-control them is compounded by particle conservation, and more realistic protocols will have to balance engineering needs with stringent constraints coming from superselection rules. Majorana modes in number-conserving systems are the result of a peculiar interplay between quantum statistics, fermionic parity, and an unusual form of spontaneous symmetry breaking. We test these ideas on the Richardson–Gaudin–Kitaev chain, a number-conserving model solvable by way of the algebraic Bethe ansatz, and equivalent in mean field to a long-range Kitaev chain.
González-Rodríguez, Liliana G.; Perea Sánchez, José Miguel; Aranceta-Bartrina, Javier; Gil, Ángel; González-Gross, Marcela; Serra-Majem, Lluis; Varela-Moreiras, Gregorio; Ortega, Rosa M.
2017-01-01
The aim was to study the intake and food sources of fibre in a representative sample of Spanish adults and to analyse its association with excess body weight and abdominal obesity. A sample of 1655 adults (18–64 years) from the ANIBES (“Anthropometric data, macronutrients and micronutrients intake, practice of physical activity, socioeconomic data and lifestyles”) cross-sectional study was analysed. Fibre intake and dietary food sources were determined by using a three-day dietary record. Misreporters were identified using the protocol of the European Food Safety Authority. Mean (standard deviation) fibre intake was 12.59 (5.66) g/day in the whole sample and 15.88 (6.29) g/day in the plausible reporters. Mean fibre intake, both in the whole sample and the plausible reporters, was below the adequate intake established by European Food Safety Authority (EFSA) and the Institute of Medicine of the United States (IOM). Main fibre dietary food sources were grains, followed by vegetables, fruits, and pulses. In the whole sample, considering sex, and after adjusting for age and physical activity, mean (standard error) fibre intake (adjusted by energy intake) was higher in subjects who had normal weight (NW) 13.40 (0.184) g/day, without abdominal obesity 13.56 (0.192) g/day or without excess body weight and/or abdominal obesity 13.56 (0.207) g/day compared to those who were overweight (OW) 12.31 (0.195) g/day, p < 0.001 or obese (OB) 11.83 (0.266) g/day, p < 0.001, with abdominal obesity 12.09 (0.157) g/day, p < 0.001 or with excess body weight and/or abdominal obesity 12.22 (0.148) g/day, p < 0.001. There were no significant differences in relation with the fibre intake according to the body mass index (BMI), presence or absence of abdominal obesity or excess body weight and/or abdominal obesity in the plausible reporters. Fibre from afternoon snacks was higher in subjects with NW (6.92%) and without abdominal obesity (6.97%) or without excess body weight and/or abdominal obesity (7.20%), than those with OW (5.30%), p < 0.05 or OB (4.79%), p < 0.05, with abdominal obesity (5.18%), p < 0.01, or with excess body weight and/or abdominal obesity (5.21%), p < 0.01, in the whole sample. Conversely, these differences were not observed in the plausible reporters. The present study demonstrates an insufficient fibre intake both in the whole sample and in the plausible reporters and confirms its association with excess body weight and abdominal obesity only when the whole sample was considered. PMID:28346353
NASA Technical Reports Server (NTRS)
Lewis, J. S.
1974-01-01
The bulk composition and interior structure of Titan required to explain the presence of a substantial methane atmosphere are shown to imply the presence of solid CH4·7H2O in Titan's primitive material. Consideration of the possible composition and structure of the present atmosphere shows plausible grounds for considering models with total atmospheric pressures ranging from approximately 20 mb up to approximately 1 kb. Expectations regarding the physical state of the surface and its chemical composition are strongly conditioned by the mass of atmosphere believed to be present. A surface of solid CH4, liquid CH4, solid CH4 hydrate, H2O ice, aqueous NH3 solution, or even a non-surface of supercritical H2O-NH3-CH4 fluid could be rationalized.
Thermodynamically consistent model calibration in chemical kinetics
2011-01-01
Background: The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results: We introduce a thermodynamically consistent model calibration (TCMC) method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints) into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions: TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new models. Furthermore, TCMC can provide dimensionality reduction, better estimation performance, and lower computational complexity, and can help to alleviate the problem of data overfitting. PMID:21548948
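A minimal sketch of the underlying idea, not the TCMC algorithm itself: treat a thermodynamic relationship among rate constants as an equality constraint and find the nearest feasible parameter set. The three-reaction cycle, observed rate constants, and Wegscheider-type cycle condition below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical cycle A<->B<->C<->A: thermodynamic consistency requires the
# product of forward rate constants around the cycle to equal the product of
# the reverse ones. The "observed" values below violate that condition.
k_obs = np.array([2.0, 0.8, 1.5, 0.5, 1.1, 0.9])   # [k1f, k2f, k3f, k1r, k2r, k3r]

def objective(log_k):
    return np.sum((np.exp(log_k) - k_obs) ** 2)     # stay close to the observed values

def cycle_constraint(log_k):
    fwd, rev = log_k[:3], log_k[3:]
    return np.sum(fwd) - np.sum(rev)                # log form of prod(fwd) == prod(rev)

res = minimize(objective, x0=np.log(k_obs), method="SLSQP",
               constraints=[{"type": "eq", "fun": cycle_constraint}])
k_feasible = np.exp(res.x)
print("feasible rate constants:", np.round(k_feasible, 3))
print("cycle ratio:", np.prod(k_feasible[:3]) / np.prod(k_feasible[3:]))  # ~1 by construction
```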
Implications of global warming for the climate of African rainforests
James, Rachel; Washington, Richard; Rowell, David P.
2013-01-01
African rainforests are likely to be vulnerable to changes in temperature and precipitation, yet there has been relatively little research to suggest how the regional climate might respond to global warming. This study presents projections of temperature and precipitation indices of relevance to African rainforests, using global climate model experiments to identify local change as a function of global temperature increase. A multi-model ensemble and two perturbed physics ensembles are used, one with over 100 members. In the east of the Congo Basin, most models (92%) show a wet signal, whereas in west equatorial Africa, the majority (73%) project an increase in dry season water deficits. This drying is amplified as global temperature increases, and in over half of coupled models by greater than 3% per °C of global warming. Analysis of atmospheric dynamics in a subset of models suggests that this could be partly because of a rearrangement of zonal circulation, with enhanced convection in the Indian Ocean and anomalous subsidence over west equatorial Africa, the Atlantic Ocean and, in some seasons, the Amazon Basin. Further research to assess the plausibility of this and other mechanisms is important, given the potential implications of drying in these rainforest regions. PMID:23878329
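To make the "% per °C of global warming" scaling concrete, here is a sketch that regresses a local change on global-mean warming across ensemble members; the member values are synthetic placeholders, not model output from the study.

```python
import numpy as np

# Hypothetical ensemble: each member's global-mean warming (degC) and its
# change in dry-season water deficit over west equatorial Africa (%).
rng = np.random.default_rng(3)
global_dT = rng.uniform(1.0, 4.5, 40)
local_drying = 3.2 * global_dT + rng.normal(0.0, 2.0, 40)   # synthetic relationship

slope, intercept = np.polyfit(global_dT, local_drying, 1)
print(f"drying ~ {slope:.1f}% per degC of global warming")
```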
Is the GeV-TeV emission of PKS 0447-439 from the proton synchrotron radiation?
NASA Astrophysics Data System (ADS)
Gao, Quan-Gui; Lu, Fang-Wu; Ma, Ju; Ren, Ji-Yang; Li, Huai-Zhen
2018-06-01
We study the multi-wavelength emission features of PKS 0447-439 in the framework of the one-zone homogeneous lepto-hadronic model. In this model, we assume that steady power-law distributions with exponential cut-offs of protons and electrons are injected into the source. The non-linear time-dependent kinetic equations, describing the evolution of protons, electrons and photons, are defined; these equations self-consistently involve synchrotron radiation of protons, photon-photon interaction, synchrotron radiation of electron/positron pairs, inverse Compton scattering and synchrotron self-absorption. The model is applied to reproduce the multi-wavelength spectrum of PKS 0447-439. Our results indicate that the spectral energy distribution (SED) of PKS 0447-439 can be reproduced well by the model. In particular, the GeV-TeV emission is produced by the synchrotron radiation of relativistic protons. The physically plausible solutions require a magnetic field strength of 10 G ≲ B ≲ 100 G. We found that the observed spectrum of PKS 0447-439 can be reproduced well by the model for either z = 0.16 or z = 0.2, and the acceptable upper limit of the redshift is z = 0.343.
Yim, Sunghoon; Jeon, Seokhee; Choi, Seungmoon
2016-01-01
In this paper, we present an extended data-driven haptic rendering method capable of reproducing force responses during pushing and sliding interaction on a large surface area. The main part of the approach is a novel input variable set for the training of an interpolation model, which incorporates the position of a proxy - an imaginary contact point on the undeformed surface. This allows us to estimate friction in both sliding and sticking states in a unified framework. Estimating the proxy position is done in real-time based on simulation using a sliding yield surface - a surface defining a border between the sliding and sticking regions in the external force space. During modeling, the sliding yield surface is first identified via an automated palpation procedure. Then, through manual palpation on a target surface, input data and resultant force data are acquired. The data are used to build a radial basis interpolation model. During rendering, this input-output mapping interpolation model is used to estimate force responses in real-time in accordance with the interaction input. Physical performance evaluation demonstrates that our approach achieves reasonably high estimation accuracy. A user study also shows plausible perceptual realism under diverse and extensive exploration.
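A minimal sketch of data-driven force estimation via radial basis interpolation, as in the approach described above but heavily simplified: the input variable set here is a generic three-dimensional state (the real method includes the proxy position on the undeformed surface), and the recorded data are synthetic.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Synthetic "recorded" interaction data: inputs are (penetration depth [mm],
# tangential offset [mm], sliding speed [mm/s]); output is a scalar force [N].
rng = np.random.default_rng(0)
X = rng.uniform([0.0, -5.0, 0.0], [5.0, 5.0, 50.0], size=(200, 3))
f = 0.8 * X[:, 0] ** 1.5 + 0.1 * np.abs(X[:, 1]) + 0.002 * X[:, 2]

# Radial basis interpolation model built from the recorded samples.
model = RBFInterpolator(X, f, kernel="thin_plate_spline", smoothing=1e-3)

query = np.array([[2.0, 1.0, 10.0]])   # current interaction state during rendering
print("rendered force [N]:", model(query))
```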
Nodal liquids in extended t-J models and dynamical supersymmetry
NASA Astrophysics Data System (ADS)
Mavromatos, Nick E.; Sarkar, Sarben
2000-08-01
In the context of extended t-J models, with intersite Coulomb interactions of the form -V ∑ n_i n_j, with n_i denoting the electron number operator at site i, nodal liquids are discussed. We use the spin-charge separation ansatz as applied to the nodes of a d-wave superconducting gap. Such a situation may be of relevance to the physics of high-temperature superconductivity. We point out the possibility of existence of certain points in the parameter space of the model characterized by dynamical supersymmetries between the spinon and holon degrees of freedom, which are quite different from the symmetries in conventional supersymmetric t-J models. Such symmetries pertain to the continuum effective-field theory of the nodal liquid, and one's hope is that the ancestor lattice model may differ from the continuum theory only by renormalization-group irrelevant operators in the infrared. We give plausible arguments that nodal liquids at such supersymmetric points are characterized by superconductivity of Kosterlitz-Thouless type. The fact that quantum fluctuations around such points can be studied in a controlled way probably makes such systems of special importance for an eventual nonperturbative understanding of the complex phase diagram of the associated high-temperature superconducting materials.
Induced vibrations increase performance of a winged self-righting robot
NASA Astrophysics Data System (ADS)
Othayoth, Ratan; Xuan, Qihan; Li, Chen
When upside down, cockroaches can open their wings to dynamically self-right. In this process, an animal often has to perform multiple unsuccessful maneuvers to eventually right, and often flails its legs. Here, we developed a cockroach-inspired winged self-righting robot capable of controlled body vibrations to test the hypothesis that vibrations assist self-righting transitions. Robot body vibrations were induced by an oscillating mass (10% of body mass) and varied by changing oscillation frequency. We discovered that, as the robot's body vibrations increased, righting probability increased, and righting time decreased (P <0.0001, ANOVA), confirming our hypothesis. To begin to understand the underlying physics, we developed a locomotion energy landscape model. Our model revealed that the kinetic energy fluctuations due to vibrations were comparable to the potential energy barriers required to transition from a metastable overturned orientation to an upright orientation. Our study supports the plausibility of locomotion energy landscapes for understanding locomotor transitions, but highlights the need for further stochastic modeling to capture the uncertain nature of when righting maneuvers result in successful righting.
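As a rough back-of-the-envelope version of the energy-landscape argument above, the sketch below compares the kinetic energy fluctuation injected by an oscillating mass with a potential energy barrier between overturned and upright states. All numbers (amplitude, frequency, barrier height, total mass) are assumed for illustration; only the 10% mass ratio comes from the abstract.

```python
import numpy as np

body_mass = 0.1                 # kg, total robot mass (assumed)
osc_mass = 0.1 * body_mass      # oscillating mass is 10% of body mass (from the abstract)
amplitude = 0.01                # m, oscillation amplitude (assumed)
freq = 30.0                     # Hz, oscillation frequency (assumed)

v_peak = 2 * np.pi * freq * amplitude
ke_fluctuation = 0.5 * osc_mass * v_peak ** 2          # peak kinetic energy of the oscillator

g = 9.81
dh = 0.02                       # m, rise of the centre of mass needed to right (assumed)
pe_barrier = body_mass * g * dh # potential energy barrier to the upright basin

print(f"KE fluctuation ~ {ke_fluctuation * 1e3:.1f} mJ, PE barrier ~ {pe_barrier * 1e3:.1f} mJ")
```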
Maakip, Ismail; Keegel, Tessa; Oakman, Jodi
2016-03-01
Musculoskeletal disorders (MSDs) are a major occupational health issue for workers in developed and developing countries, including Malaysia. Most research related to MSDs has been undertaken in developed countries; given the different regulatory and cultural practices, it is plausible that contributions of hazard and risk factors may be different. A population of Malaysian public service office workers was surveyed (N = 417, 65.5% response rate) to determine prevalence and associated predictors of MSD discomfort. The 6-month period prevalence of MSD discomfort was 92.8% (95% CI = 90.2-95.2%). Akaike's Information Criterion (AIC) analysis was used to compare a range of models and determine a model of best fit. Contributions associated with MSD discomfort in the final model consisted of physical demands (61%), workload (14%), gender (13%), work-home balance (9%) and psychosocial factors (3%). Factors associated with MSD discomfort were similar in developed and developing countries but the relative contribution of factors was different, providing insight into future development of risk management strategies. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
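A minimal sketch of AIC-based model comparison of the kind mentioned above, for least-squares fits with increasing numbers of predictors; the residual sums of squares and parameter counts are invented, and the real study's models and data are not reproduced here.

```python
import numpy as np

def aic(n, rss, k):
    """Akaike Information Criterion for a least-squares fit with k parameters."""
    return n * np.log(rss / n) + 2 * k

# Hypothetical comparison of nested discomfort models (numbers are made up).
n = 417
candidates = {
    "physical demands only":            aic(n, rss=980.0, k=2),
    "+ workload":                       aic(n, rss=905.0, k=3),
    "+ workload + gender + work-home":  aic(n, rss=860.0, k=5),
}
best = min(candidates, key=candidates.get)
for name, value in candidates.items():
    print(f"{name:35s} AIC = {value:7.1f}")
print("lowest AIC ->", best)
```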
On the Importance of the Dynamics of Discretizations
NASA Technical Reports Server (NTRS)
Sweby, Peter K.; Yee, H. C.; Rai, ManMohan (Technical Monitor)
1995-01-01
It has been realized recently that the discrete maps resulting from numerical discretizations of differential equations can possess asymptotic dynamical behavior quite different from that of the original systems. This is the case not only for systems of Ordinary Differential Equations (ODEs) but in a more complicated manner for Partial Differential Equations (PDEs) used to model complex physics. The impact of the modified dynamics may be mild and even not observed for some numerical methods. For other classes of discretizations the impact may be pronounced, but not always obvious depending on the nonlinear model equations, the time steps, the grid spacings and the initial conditions. Non-convergence or convergence to periodic solutions might be easily recognizable but convergence to incorrect but plausible solutions may not be so obvious - even for discretized parameters within the linearized stability constraint. Based on our past four years of research, we will illustrate some of the pathology of the dynamics of discretizations, its possible impact and the usage of these schemes for model nonlinear ODEs, convection-diffusion equations and grid adaptations.
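A minimal concrete example of the phenomenon described above, chosen independently of the authors' test problems: explicit Euler applied to the logistic ODE du/dt = u(1 - u) yields a discrete map whose long-time behaviour depends on the time step, converging to the true steady state u = 1 for small steps but settling onto spurious periodic or chaotic states for larger ones, even though each iterate looks plausible.

```python
import numpy as np

def euler_logistic_asymptote(dt, u0=0.1, n_steps=2000, n_keep=8):
    """Iterate the explicit-Euler map u_{n+1} = u_n + dt*u_n*(1 - u_n) and
    return the last few iterates, revealing the asymptotic behaviour."""
    u, history = u0, []
    for _ in range(n_steps):
        u = u + dt * u * (1.0 - u)
        history.append(u)
    return np.round(history[-n_keep:], 4)

# The ODE converges to u = 1 from any positive start; the discrete map does so
# only for sufficiently small time steps.
for dt in (0.5, 2.2, 2.5, 2.7):
    print(f"dt = {dt}: {euler_logistic_asymptote(dt)}")
```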
Channelized subglacial drainage over a deformable bed
Walder, J.S.; Fowler, A.
1994-01-01
We develop theoretically a description of a possible subglacial drainage mechanism for glaciers and ice sheets moving over saturated, deformable till. The model is based on the plausible assumption that flow of water in a thin film at the ice-till interface is unstable to the formation of a channelized drainage system, and is restricted to the case in which meltwater cannot escape through the till to an underlying aquifer. In describing the physics of such channelized drainage, we have generalized and extended Rothlisberger's model of channels cut into basal ice to include "canals" cut into the till, paying particular attention to the role of sediment properties and the mechanics of sediment transport. We show that sediment-floored Rothlisberger (R) channels can exist for high effective pressures, and wide, shallow, ice-roofed canals cut into the till for low effective pressures. Canals should form a distributed, non-arborescent system, unlike R channels. Geologic evidence derived from land forms and deposits left by the Pleistocene ice sheets in North America and Europe is consistent with predictions of the model. -from Authors
Perceptions of domestic violence in lesbian relationships: stereotypes and gender role expectations.
Little, Betsi; Terrance, Cheryl
2010-01-01
In light of evidence suggesting that violence between lesbian couples is oftentimes dismissed as "mutually combative," expectations that support this perception were examined. Participants (N = 287) evaluated a domestic violence situation within the context of a lesbian partnership. As physical appearance may be used to support gender- and heterosexist-based stereotypes relating to lesbians, participants evaluated a domestic violence incident wherein the physical appearance of both the victim and perpetrator were systematically varied. Overall, women perceived the situation as more dangerous than did men. However, among women, the plausibility of the victim's claim, and blame assigned to the perpetrator and victim, varied as a function of the physical appearance of the couple. Implications of this research as well as future directions are discussed.
Of paradox and plausibility: the dynamic of change in medical law.
Harrington, John
2014-01-01
This article develops a model of change in medical law. Drawing on systems theory, it argues that medical law participates in a dynamic of 'deparadoxification' and 'reparadoxification' whereby the underlying contingency of the law is variously concealed through plausible argumentation, or revealed by critical challenge. Medical law is, thus, thoroughly rhetorical. An examination of the development of the law on abortion and on the sterilization of incompetent adults shows that plausibility is achieved through the deployment of substantive common sense and formal stylistic devices. It is undermined where these elements are shown to be arbitrary and constructed. In conclusion, it is argued that the politics of medical law are constituted by this antagonistic process of establishing and challenging provisionally stable normative regimes. © The Author [2014]. Published by Oxford University Press; all rights reserved. For Permissions, please email: journals.permissions@oup.com.
Utilization of Prosodic Information in Syntactic Ambiguity Resolution
2010-01-01
Two self-paced listening experiments examined the role of prosodic phrasing in syntactic ambiguity resolution. In Experiment 1, the stimuli consisted of early closure sentences (e.g., “While the parents watched, the child sang a song.”) containing transitive-biased subordinate verbs paired with plausible direct objects or intransitive-biased subordinate verbs paired with implausible direct objects. Experiment 2 also contained early closure sentences with transitive- and intransitive-biased subordinate verbs, but the subordinate verbs were always followed by plausible direct objects. In both experiments, there were two prosodic conditions. In the subject-biased prosodic condition, an intonational phrase boundary marked the clausal boundary following the subordinate verb. In the object-biased prosodic condition, the clause boundary was unmarked. The results indicate that lexical and prosodic cues interact at the subordinate verb and plausibility further affects processing at the ambiguous noun. Results are discussed with respect to models of the role of prosody in sentence comprehension. PMID:20033849
Theories and models on the biology of cells in space
NASA Technical Reports Server (NTRS)
Todd, P.; Klaus, D. M.
1996-01-01
A wide variety of observations on cells in space, admittedly made under constraining and unnatural conditions in many cases, have led to experimental results that were surprising or unexpected. Reproducibility, freedom from artifacts, and plausibility must be considered in all cases, even when results are not surprising. The papers in the symposium on 'Theories and Models on the Biology of Cells in Space' are dedicated to the subject of the plausibility of cellular responses to gravity -- inertial accelerations between 0 and 9.8 m/s^2 and higher. The mechanical phenomena inside the cell, the gravitactic locomotion of single eukaryotic and prokaryotic cells, and the effects of inertial unloading on cellular physiology are addressed in theoretical and experimental studies.
2011-11-01
fusion energy-production processes of the particular type of reactor using a lithium (Li) blanket or related alloys such as the Pb-17Li eutectic. As such, tritium breeding is intimately connected with energy production, thermal management, radioactivity management, materials properties, and mechanical structures of any plausible future large-scale fusion power reactor. JASON is asked to examine the current state of scientific knowledge and engineering practice on the physical and chemical bases for large-scale tritium
What Is Life? What Was Life? What Will Life Be?
NASA Astrophysics Data System (ADS)
Deamer, D.
Our laboratory is exploring self-assembly processes and polymerization reactions of organic compounds in natural geothermal environments and related laboratory simulations. Although the physical environment that fostered primitive cellular life is still largely unconstrained, we can be reasonably confident that liquid water was required, together with a source of organic compounds and energy to drive polymerization reactions. There must also have been a process by which the compounds were sufficiently concentrated to undergo physical and chemical interactions. In earlier work we observed that macromolecules such as nucleic acids and proteins are readily encapsulated in membranous boundaries during wet-dry cycles such as those that would occur at the edges of geothermal springs or tide pools. The resulting structures are referred to as protocells, in that they exhibit certain properties of living cells and are models of the kinds of encapsulated macromolecular systems that would have led toward the first forms of cellular life. However, the assembly of protocells is markedly inhibited by conditions associated with extreme environments: High temperature, high salt concentrations, and low pH ranges. From a biophysical perspective, it follows that the most plausible planetary environment for the origin of cellular life would be an aqueous phase at moderate temperature ranges and low ionic strength, having a pH value near neutrality and divalent cations at submillimolar concentrations. This suggestion is in marked contrast to the view that life most likely began in a geothermal or marine environment, perhaps even the extreme environment of a hydrothermal vent. A more plausible site for the origin of cellular life would be fresh water pools maintained by rain falling on volcanic land masses resembling present-day Hawaii and Iceland. After the first cellular life was able to establish itself in a relatively benign environment, it would rapidly begin to adapt through Darwinian selection to more rigorous environments, including the extreme temperatures, salt concentrations and pH ranges that we now associate with the limits of life on the Earth.
Ren, Jie; Song, Kai; Deng, Minghua; Reinert, Gesine; Cannon, Charles H; Sun, Fengzhu
2016-04-01
Next-generation sequencing (NGS) technologies generate large amounts of short read data for many different organisms. The fact that NGS reads are generally short makes it challenging to assemble the reads and reconstruct the original genome sequence. For clustering genomes using such NGS data, word-count based alignment-free sequence comparison is a promising approach, but for this approach, the underlying expected word counts are essential. A plausible model for this underlying distribution of word counts is given through modeling the DNA sequence as a Markov chain (MC). For single long sequences, efficient statistics are available to estimate the order of MCs and the transition probability matrix for the sequences. As NGS data do not provide a single long sequence, inference methods on Markovian properties of sequences based on single long sequences cannot be directly used for NGS short read data. Here we derive a normal approximation for such word counts. We also show that the traditional Chi-square statistic has an approximate gamma distribution, using the Lander-Waterman model for physical mapping. We propose several methods to estimate the order of the MC based on NGS reads and evaluate those using simulations. We illustrate the applications of our results by clustering genomic sequences of several vertebrate and tree species based on NGS reads using alignment-free sequence dissimilarity measures. We find that the estimated order of the MC has a considerable effect on the clustering results, and that the clustering results that use an MC of the estimated order give a plausible clustering of the species. Our implementation of the statistics developed here is available as R package 'NGS.MC' at http://www-rcf.usc.edu/∼fsun/Programs/NGS-MC/NGS-MC.html. Contact: fsun@usc.edu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
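The following minimal sketch (in Python rather than the authors' R package NGS.MC, and with toy reads rather than real NGS data) illustrates the basic ingredient of such word-count based inference: pooling adjacent-base transition counts across short reads to estimate a first-order Markov chain transition matrix.

```python
# Minimal sketch: estimate a first-order Markov chain transition matrix from
# short reads by pooling adjacent-base transition counts across all reads.
from collections import defaultdict

def transition_matrix(reads, alphabet="ACGT"):
    counts = defaultdict(float)
    for read in reads:
        for a, b in zip(read, read[1:]):          # adjacent-base transitions
            counts[(a, b)] += 1.0
    matrix = {}
    for a in alphabet:
        row_total = sum(counts[(a, b)] for b in alphabet)
        matrix[a] = {b: (counts[(a, b)] / row_total if row_total else 0.0)
                     for b in alphabet}
    return matrix

reads = ["ACGTACGGT", "TTACGCAT", "GGACGTTA"]     # toy reads, purely illustrative
for base, row in transition_matrix(reads).items():
    print(base, {b: round(p, 2) for b, p in row.items()})
```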
ERIC Educational Resources Information Center
Bacharach, Samuel; Bamberger, Peter
1992-01-01
Survey data from 215 nurses (10 male) and 430 civil engineers (10 female) supported the plausibility of occupation-specific models (positing direct paths between role stressors, antecedents, and consequences) compared to generic models. A weakness of generic models is the tendency to ignore differences in occupational structure and culture. (SK)
Rohrmeier, Martin A; Cross, Ian
2014-07-01
Humans rapidly learn complex structures in various domains. Findings of above-chance performance of some untrained control groups in artificial grammar learning studies raise questions about the extent to which learning can occur in an untrained, unsupervised testing situation with both correct and incorrect structures. The plausibility of unsupervised online-learning effects was modelled with n-gram, chunking and simple recurrent network models. A novel evaluation framework was applied, which alternates forced binary grammaticality judgments and subsequent learning of the same stimulus. Our results indicate a strong online learning effect for n-gram and chunking models and a weaker effect for simple recurrent network models. Such findings suggest that online learning is a plausible effect of statistical chunk learning that is possible when ungrammatical sequences contain a large proportion of grammatical chunks. Such common effects of continuous statistical learning may underlie statistical and implicit learning paradigms and raise implications for study design and testing methodologies. Copyright © 2014 Elsevier Inc. All rights reserved.
van den Berg, Ronald; Roerdink, Jos B. T. M.; Cornelissen, Frans W.
2010-01-01
An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called “crowding”. Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, “compulsory averaging”, and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality. PMID:20098499
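A hedged sketch of the core idea, using assumed Gaussian tuning curves and a standard population-vector decoder rather than the authors' specific model: when the population responses to a target and a nearby flanker are pooled, the decoded orientation falls between the two, the "compulsory averaging" signature mentioned above.

```python
# Population coding of orientation with assumed Gaussian tuning curves; pooling
# the responses to two stimuli yields a decoded orientation between them.
import numpy as np

prefs = np.linspace(0.0, 180.0, 64, endpoint=False)       # preferred orientations (deg)

def population_response(theta, width=20.0):
    d = np.minimum(np.abs(prefs - theta), 180.0 - np.abs(prefs - theta))
    return np.exp(-0.5 * (d / width) ** 2)

def decode(r):
    angles = np.deg2rad(2.0 * prefs)                       # orientation is 180-deg periodic
    return np.rad2deg(np.arctan2((r * np.sin(angles)).sum(),
                                 (r * np.cos(angles)).sum())) / 2.0 % 180.0

pooled = population_response(40.0) + population_response(70.0)
print("decoded orientation of pooled response (deg):", round(decode(pooled), 1))  # ~55
```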
Sulfidic Anion Concentrations on Early Earth for Surficial Origins-of-Life Chemistry.
Ranjan, Sukrit; Todd, Zoe R; Sutherland, John D; Sasselov, Dimitar D
2018-04-08
A key challenge in origin-of-life studies is understanding the environmental conditions on early Earth under which abiogenesis occurred. While some constraints do exist (e.g., zircon evidence for surface liquid water), relatively few constraints exist on the abundances of trace chemical species, which are relevant to assessing the plausibility and guiding the development of postulated prebiotic chemical pathways which depend on these species. In this work, we combine literature photochemistry models with simple equilibrium chemistry calculations to place constraints on the plausible range of concentrations of sulfidic anions (HS-, HSO3-, SO3^2-) available in surficial aquatic reservoirs on early Earth due to outgassing of SO2 and H2S and their dissolution into small shallow surface water reservoirs like lakes. We find that this mechanism could have supplied prebiotically relevant levels of SO2-derived anions, but not H2S-derived anions. Radiative transfer modeling suggests UV light would have remained abundant on the planet surface for all but the largest volcanic explosions. We apply our results to the case study of the proposed prebiotic reaction network of Patel et al. (2015) and discuss the implications for improving its prebiotic plausibility. In general, epochs of moderately high volcanism could have been especially conducive to cyanosulfidic prebiotic chemistry. Our work can be similarly applied to assess and improve the prebiotic plausibility of other postulated surficial prebiotic chemistries that are sensitive to sulfidic anions, and our methods adapted to study other atmospherically derived trace species. Key Words: Early Earth-Origin of life-Prebiotic chemistry-Volcanism-UV radiation-Planetary environments. Astrobiology 18, xxx-xxx.
The Variance Reaction Time Model
ERIC Educational Resources Information Center
Sikstrom, Sverker
2004-01-01
The variance reaction time model (VRTM) is proposed to account for various recognition data on reaction time, the mirror effect, receiver-operating-characteristic (ROC) curves, etc. The model is based on simple and plausible assumptions within a neural network: VRTM is a two layer neural network where one layer represents items and one layer…
ERIC Educational Resources Information Center
Cangelosi, Angelo; Riga, Thomas
2006-01-01
The grounding of symbols in computational models of linguistic abilities is one of the fundamental properties of psychologically plausible cognitive models. In this article, we present an embodied model for the grounding of language in action based on epigenetic robots. Epigenetic robotics is one of the new cognitive modeling approaches to…
Nakagawa, Fumiyo; van Sighem, Ard; Thiebaut, Rodolphe; Smith, Colette; Ratmann, Oliver; Cambiano, Valentina; Albert, Jan; Amato-Gauci, Andrew; Bezemer, Daniela; Campbell, Colin; Commenges, Daniel; Donoghoe, Martin; Ford, Deborah; Kouyos, Roger; Lodwick, Rebecca; Lundgren, Jens; Pantazis, Nikos; Pharris, Anastasia; Quinten, Chantal; Thorne, Claire; Touloumi, Giota; Delpech, Valerie; Phillips, Andrew
2016-03-01
It is important not only to collect epidemiologic data on HIV but to also fully utilize such information to understand the epidemic over time and to help inform and monitor the impact of policies and interventions. We describe and apply a novel method to estimate the size and characteristics of HIV-positive populations. The method was applied to data on men who have sex with men living in the UK and to a pseudo dataset to assess performance for different data availability. The individual-based simulation model was calibrated using an approximate Bayesian computation-based approach. In 2013, 48,310 (90% plausibility range: 39,900-45,560) men who have sex with men were estimated to be living with HIV in the UK, of whom 10,400 (6,160-17,350) were undiagnosed. There were an estimated 3,210 (1,730-5,350) infections per year on average between 2010 and 2013. Sixty-two percent of the total HIV-positive population are thought to have viral load <500 copies/ml. In the pseudo-epidemic example, HIV estimates have narrower plausibility ranges and are closer to the true number, the greater the data availability to calibrate the model. We demonstrate that our method can be applied to settings with less data; however, plausibility ranges for estimates will be wider to reflect greater uncertainty of the data used to fit the model.
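A minimal, hypothetical sketch of the calibration idea (rejection-sampling approximate Bayesian computation with a toy simulator standing in for the individual-based HIV model; all numbers are invented for illustration):

```python
# Rejection-sampling ABC: draw parameters from a prior, run a cheap toy
# simulator, and keep draws whose summary statistic lands near the observation.
import random

def simulate(rate):                          # stand-in for the epidemic model
    return sum(random.random() < rate for _ in range(10000))

observed = 320                               # hypothetical observed count
accepted = []
for _ in range(5000):
    theta = random.uniform(0.0, 0.1)         # prior draw for the rate
    if abs(simulate(theta) - observed) <= 20:
        accepted.append(theta)

accepted.sort()
lo, hi = accepted[int(0.05 * len(accepted))], accepted[int(0.95 * len(accepted))]
print("posterior median:", round(accepted[len(accepted) // 2], 4))
print("90% plausibility range:", (round(lo, 4), round(hi, 4)))
```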
Curating blood: how students' and researchers' drawings bring potential phenomena to light
NASA Astrophysics Data System (ADS)
Hay, D. B.; Pitchford, S.
2016-11-01
This paper explores students' and researchers' drawings of white blood cell recruitment. The data combine interviews with exhibits of review-type academic images and analyses of student model-drawings. The analysis focuses on the material aspects of bio-scientific data-making, and we use the literature of concrete bioscience modelling to differentiate the qualities of students' model-making choices: novelty versus reproduction; completeness versus simplicity; and the achievement of similarity towards selected model targets. We show that while drawing on already published images, some third-year undergraduates are able to curate novel, and yet plausible, causal channels in their graphic representations, implicating new phenomenal potentials as lead researchers do in their review-type academic publications. Our work links the virtues of drawing to learn to the disclosure of potential epistemic things, involving close attention to the contours of non-linguistic stuff and corresponding sensory perception of substance; space; time; shape and size; position; and force. The paper documents the authority and power students may achieve through making knowledge rather than repeating it. We show the ways in which drawing on the images elicited by others helps to develop physical, sensory, and sometimes affective relations towards the real and concrete world of scientific practice.
A Neural Circuit for Angular Velocity Computation
Snider, Samuel B.; Yuste, Rafael; Packer, Adam M.
2010-01-01
In one of the most remarkable feats of motor control in the animal world, some Diptera, such as the housefly, can accurately execute corrective flight maneuvers in tens of milliseconds. These reflexive movements are achieved by the halteres, gyroscopic force sensors, in conjunction with rapidly tunable wing steering muscles. Specifically, the mechanosensory campaniform sensilla located at the base of the halteres transduce and transform rotation-induced gyroscopic forces into information about the angular velocity of the fly's body. But how exactly does the fly's neural architecture generate the angular velocity from the lateral strain forces on the left and right halteres? To explore potential algorithms, we built a neuromechanical model of the rotation detection circuit. We propose a neurobiologically plausible method by which the fly could accurately separate and measure the three-dimensional components of an imposed angular velocity. Our model assumes a single sign-inverting synapse and formally resembles some models of directional selectivity by the retina. Using multidimensional error analysis, we demonstrate the robustness of our model under a variety of input conditions. Our analysis reveals the maximum information available to the fly given its physical architecture and the mathematics governing the rotation-induced forces at the haltere's end knob. PMID:21228902
Constraining MHD Disk-Winds with X-ray Absorbers
NASA Astrophysics Data System (ADS)
Fukumura, Keigo; Tombesi, F.; Shrader, C. R.; Kazanas, D.; Contopoulos, J.; Behar, E.
2014-01-01
From the state-of-the-art spectroscopic observations of active galactic nuclei (AGNs), the robust features of absorption lines (most notably by H/He-like ions), called warm absorbers (WAs), have often been detected in soft X-rays (< 2 keV). While the identified WAs are often mildly blueshifted to yield line-of-sight velocities up to ~100-3,000 km/sec in typical X-ray-bright Seyfert 1 AGNs, a fraction of Seyfert galaxies such as PG 1211+143 exhibits even faster absorbers (v/c ~ 0.1-0.2) called ultra-fast outflows (UFOs) whose physical condition is much more extreme compared with the WAs. Motivated by these recent X-ray data we show that the magnetically-driven accretion-disk wind model is a plausible scenario to explain the characteristic property of these X-ray absorbers. As a preliminary case study we demonstrate that the wind model parameters (e.g. viewing angle and wind density) can be constrained by data from PG 1211+143 at a statistically significant level with chi-squared spectral analysis. Our wind models can thus be implemented into the standard analysis package, XSPEC, as a table spectrum model for general analysis of X-ray absorbers.
Models for attenuation in marine sediments that incorporate structural relaxation processes
NASA Astrophysics Data System (ADS)
Pierce, Allan D.; Carey, William M.; Lynch, James F.
2005-04-01
Biot's model leads to an attenuation coefficient at low frequencies that is proportional to ω², and such is consistent with physical models of viscous attenuation of fluid flows through narrow constrictions driven by pressure differences between larger fluid pockets within the granular configuration. Much data suggests, however, that the attenuation coefficient is linear in ω for some sediments and over a wide range of frequencies. A common model that predicts such a dependence stems from theoretical work by Stoll and Bryan [J. Acoust. Soc. Am. 47, 1440 (1970)], in which the elastic constants of the solid frame are taken to be complex numbers, with small constant imaginary parts. Such invariably leads to a linear ω dependence at sufficiently low frequencies and this conflicts with common intuitive notions. The present paper incorporates structural relaxation, with a generalization of the formulations of Hall [Phys. Rev. 73, 775 (1948)] and Nachman, Smith, and Waag [J. Acoust. Soc. Am. 88, 1584 (1990)]. The mathematical form and plausibility of such is established, and it is shown that the dependence is as ω² at low frequencies, and that a likely realization is one where the dependence is linear in ω at intermediate frequency ranges.
Relating triggering processes in lab experiments with earthquakes.
NASA Astrophysics Data System (ADS)
Baro Urbea, J.; Davidsen, J.; Kwiatek, G.; Charalampidou, E. M.; Goebel, T.; Stanchits, S. A.; Vives, E.; Dresen, G.
2016-12-01
Statistical relations such as Gutenberg-Richter's, Omori-Utsu's and the productivity of aftershocks were first observed in seismology, but are also common to other physical phenomena exhibiting avalanche dynamics such as solar flares, rock fracture, structural phase transitions and even stock market transactions. All these examples exhibit spatio-temporal correlations that can be explained as triggering processes: instead of being activated as a response to external driving or fluctuations, some events are a consequence of previous activity. Although different plausible explanations have been suggested in each system, the ubiquity of such statistical laws remains unexplained. However, the case of rock fracture may exhibit a physical connection with seismology. It has been suggested that some features of seismology have a microscopic origin and are reproducible over a vast range of scales. This hypothesis has motivated mechanical experiments to generate artificial catalogues of earthquakes at a laboratory scale (so-called labquakes) and under controlled conditions. Microscopic fractures in lab tests release elastic waves that are recorded as ultrasonic (kHz-MHz) acoustic emission (AE) events by means of piezoelectric transducers. Here, we analyse the statistics of labquakes recorded during the failure of small samples of natural rocks and artificial porous materials under different controlled compression regimes. Temporal and spatio-temporal correlations are identified in certain cases. Specifically, we distinguish between the background and triggered events, revealing some differences in the statistical properties. We fit the data to statistical models of seismicity. As a particular case, we explore the branching process approach simplified in the Epidemic Type Aftershock Sequence (ETAS) model. We evaluate the empirical spatio-temporal kernel of the model and investigate the physical origins of triggering. Our analysis of the focal mechanisms implies that the occurrence of the empirical laws extends well beyond purely frictional sliding events, in contrast to what is often assumed.
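The statistical laws named above can be combined into a simple branching cascade. The sketch below is a generic, illustrative ETAS-like simulation with invented parameter values, not the calibrated model fitted in the study: background events carry Gutenberg-Richter magnitudes, and each event triggers offspring with Omori-law time delays and a magnitude-dependent productivity.

```python
# Illustrative ETAS-like branching cascade with invented, subcritical parameters.
import random, math

b, alpha, k0, c, p = 1.0, 0.8, 0.05, 0.01, 1.2    # toy Gutenberg-Richter/Omori values

def gr_magnitude(m_min=2.0):
    # Gutenberg-Richter: exceedance N(>m) ~ 10^(-b*(m - m_min))
    return m_min - math.log10(1.0 - random.random()) / b

def omori_delay():
    # inverse-CDF sample of the normalized Omori kernel ~ (c + t)^(-p), t > 0
    u = 1.0 - random.random()
    return c * (u ** (-1.0 / (p - 1.0)) - 1.0)

catalog = [(random.uniform(0.0, 100.0), gr_magnitude()) for _ in range(50)]  # background
queue = list(catalog)
while queue:
    t, m = queue.pop()
    expected = k0 * 10.0 ** (alpha * (m - 2.0))             # productivity law
    n_children = int(expected) + (random.random() < expected % 1.0)
    for _ in range(n_children):
        child = (t + omori_delay(), gr_magnitude())
        catalog.append(child)
        queue.append(child)

print(len(catalog), "events; largest magnitude:", round(max(m for _, m in catalog), 2))
```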
Jiang, Ping; Chiba, Ryosuke; Takakusaki, Kaoru; Ota, Jun
2016-01-01
The development of a physiologically plausible computational model of a neural controller that can realize a human-like biped stance is important for a large number of potential applications, such as assisting device development and designing robotic control systems. In this paper, we develop a computational model of a neural controller that can maintain a musculoskeletal model in a standing position, while incorporating a 120-ms neurological time delay. Unlike previous studies that have used an inverted pendulum model, a musculoskeletal model with seven joints and 70 muscular-tendon actuators is adopted to represent the human anatomy. Our proposed neural controller is composed of both feed-forward and feedback controls. The feed-forward control corresponds to the constant activation input necessary for the musculoskeletal model to maintain a standing posture. This compensates for gravity and regulates stiffness. The developed neural controller model can replicate two salient features of the human biped stance: (1) physiologically plausible muscle activations for quiet standing; and (2) selection of a low active stiffness for low energy consumption. PMID:27655271
Macroecological analyses support an overkill scenario for late Pleistocene extinctions.
Diniz-Filho, J A F
2004-08-01
The extinction of megafauna at the end of Pleistocene has been traditionally explained by environmental changes or overexploitation by human hunting (overkill). Despite difficulties in choosing between these alternative (and not mutually exclusive) scenarios, the plausibility of the overkill hypothesis can be established by ecological models of predator-prey interactions. In this paper, I have developed a macroecological model for the overkill hypothesis, in which prey population dynamic parameters, including abundance, geographic extent, and food supply for hunters, were derived from empirical allometric relationships with body mass. The last output correctly predicts the final destiny (survival or extinction) for 73% of the species considered, a value only slightly smaller than those obtained by more complex models based on detailed archaeological and ecological data for each species. This illustrates the high selectivity of Pleistocene extinction in relation to body mass and confers more plausibility on the overkill scenario.
Multilevel models for estimating incremental net benefits in multinational studies.
Grieve, Richard; Nixon, Richard; Thompson, Simon G; Cairns, John
2007-08-01
Multilevel models (MLMs) have been recommended for estimating incremental net benefits (INBs) in multicentre cost-effectiveness analysis (CEA). However, these models have assumed that the INBs are exchangeable and that there is a common variance across all centres. This paper examines the plausibility of these assumptions by comparing various MLMs for estimating the mean INB in a multinational CEA. The results showed that the MLMs that assumed the INBs were exchangeable and had a common variance led to incorrect inferences. The MLMs that included covariates to allow for systematic differences across the centres, and estimated different variances in each centre, made more plausible assumptions, fitted the data better and led to more appropriate inferences. We conclude that the validity of assumptions underlying MLMs used in CEA need to be critically evaluated before reliable conclusions can be drawn. Copyright 2006 John Wiley & Sons, Ltd.
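As a rough illustration of why the common-variance assumption matters (hypothetical data, not the trial analysed in the paper), the sketch below simulates centre-level incremental net benefits with unequal variances and compares an equal-weight pooled mean with an inverse-variance weighted mean:

```python
# Centre-level INB summaries with unequal variances: pooling that ignores the
# variances can differ from an inverse-variance weighted estimate.
import random, statistics

random.seed(1)
centres = []
for true_mean, sd, n in [(500.0, 2000.0, 40), (1500.0, 6000.0, 40), (800.0, 1000.0, 40)]:
    inbs = [random.gauss(true_mean, sd) for _ in range(n)]   # patient-level INBs
    centres.append((statistics.mean(inbs), statistics.variance(inbs) / n))

pooled = statistics.mean(m for m, _ in centres)              # treats centres as exchangeable
weights = [1.0 / v for _, v in centres]
weighted = sum(w * m for (m, _), w in zip(centres, weights)) / sum(weights)
print("equal-weight mean INB:", round(pooled, 1))
print("inverse-variance weighted mean INB:", round(weighted, 1))
```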
NASA Astrophysics Data System (ADS)
Jansen, Peter A.; Watter, Scott
2012-03-01
Connectionist language modelling typically has difficulty with syntactic systematicity, or the ability to generalise language learning to untrained sentences. This work develops an unsupervised connectionist model of infant grammar learning. Following the semantic bootstrapping hypothesis, the network distils word categories using a developmentally plausible infant-scale database of grounded sensorimotor conceptual representations, as well as a biologically plausible semantic co-occurrence activation function. The network then uses this knowledge to acquire an early benchmark clausal grammar using correlational learning, and further acquires separate conceptual and grammatical category representations. The network displays strongly systematic behaviour indicative of the general acquisition of the combinatorial systematicity present in the grounded infant-scale language stream, outperforms previous contemporary models that contain primarily noun and verb word categories, and successfully generalises broadly to novel untrained sensorimotor grounded sentences composed of unfamiliar nouns and verbs. Limitations as well as implications for later grammar learning are discussed.
Using Dirichlet Priors to Improve Model Parameter Plausibility
ERIC Educational Resources Information Center
Rai, Dovan; Gong, Yue; Beck, Joseph E.
2009-01-01
Student modeling is a widely used approach to make inference about a student's attributes like knowledge, learning, etc. If we wish to use these models to analyze and better understand student learning there are two problems. First, a model's ability to predict student performance is at best weakly related to the accuracy of any one of its…
The Solid Rocket Motor Slag Population: Results of a Radar-based Regressive Statistical Evaluation
NASA Technical Reports Server (NTRS)
Horstman, Matthew F.; Xu, Yu-Lin
2008-01-01
Solid rocket motor (SRM) slag has been identified as a significant source of man-made orbital debris. The propensity of SRMs to generate particles of 100 μm and larger has caused concern regarding their contribution to the debris environment. Radar observation, rather than in-situ gathered evidence, is currently the only measurable source for the NASA/ODPO model of the on-orbit slag population. This simulated model includes the time evolution of the resultant orbital populations using a historical database of SRM launches, propellant masses, and estimated locations and times of tail-off. However, due to the small amount of observational evidence, there can be no direct comparison to check the validity of this model. Rather than using the assumed population developed from purely historical and physical assumptions, a regression approach was used which utilized the populations observed by the Haystack radar from 1996 to present. The estimated trajectories from the historical model of slag sources, and the corresponding plausible detections by the Haystack radar, were identified. Comparisons with observational data from the ensuing years were made, and the SRM model was altered with respect to size and mass production of slag particles to reflect the historical data obtained. The result is a model SRM population that fits within the bounds of the observed environment.
NASA Astrophysics Data System (ADS)
Sathyanarayana Rao, Mayuri; Subrahmanyan, Ravi; Udaya Shankar, N.; Chluba, Jens
2017-05-01
Cosmic baryon evolution during the Cosmic Dawn and Reionization results in redshifted 21-cm spectral distortions in the cosmic microwave background (CMB). These encode information about the nature and timing of first sources over redshifts 30-6 and appear at meter wavelengths as a tiny CMB distortion along with the Galactic and extragalactic radio sky, which is orders of magnitude brighter. Therefore, detection requires precise methods to model foregrounds. We present a method of foreground fitting using maximally smooth (MS) functions. We demonstrate the usefulness of MS functions over traditionally used polynomials to separate foregrounds from the Epoch of Reionization (EoR) signal. We also examine the level of spectral complexity in plausible foregrounds using GMOSS, a physically motivated model of the radio sky, and find that they are indeed smooth and can be modeled by MS functions to levels sufficient to discern the vanilla model of the EoR signal. We show that MS functions are loss resistant and robustly preserve EoR signal strength and turning points in the residuals. Finally, we demonstrate that in using a well-calibrated spectral radiometer and modeling foregrounds with MS functions, the global EoR signal can be detected with a Bayesian approach with 90% confidence in 10 minutes’ integration.
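A hedged sketch of the foreground-fitting problem (a low-order log-log polynomial is used as a stand-in for a maximally smooth function, and the foreground and signal shapes are invented): a smooth fit to a synthetic power-law spectrum leaves a residual in which a weak, localized trough can survive.

```python
# Fit a smooth model to a synthetic power-law foreground plus a weak trough.
import numpy as np

nu = np.linspace(50.0, 100.0, 200)                        # frequency in MHz
foreground = 3000.0 * (nu / 75.0) ** -2.5                 # K, smooth power-law sky
signal = -0.1 * np.exp(-0.5 * ((nu - 78.0) / 5.0) ** 2)   # K, toy absorption trough
sky = foreground + signal

coeffs = np.polyfit(np.log(nu), np.log(sky), 3)           # smooth (log-log cubic) fit
model = np.exp(np.polyval(coeffs, np.log(nu)))
residual = sky - model
print("peak |residual| (K):", round(float(np.max(np.abs(residual))), 4))
# The fitted smooth model removes the power-law foreground; structure surviving
# in the residual near 78 MHz is what a signal-extraction step would work with.
```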
Einstein Observatory coronal temperatures of late-type stars
NASA Technical Reports Server (NTRS)
Schmitt, J. H. M. M.; Collura, A.; Sciortino, S.; Vaiana, G. S.; Harnden, F. R., Jr.
1990-01-01
The results are presented of a survey of the coronal temperatures of late-type stars using the Einstein Observatory IPC. The spectral analysis shows that the frequently found one- and two-temperature descriptions are mainly influenced by the SNR of the data and that models using continuous emission measure distributions can provide equally adequate and physically more meaningful and more plausible descriptions. Intrinsic differences in differential emission measure distributions are found for four groups of stars. M dwarfs generally show evidence for high-temperature gas in conjunction with lower-temperature material, while main-sequence stars of types F and G have the high-temperature component either absent or very weak. Very hot coronae without the lower-temperature component appearing in dwarf stars are evident in most of the giant stars studied. RS CVn systems show evidence for extremely hot coronae, sometimes with no accompanying lower-temperature material.
NASA Astrophysics Data System (ADS)
Zhang, Xufang; Okamoto, Dai; Hatakeyama, Tetsuo; Sometani, Mitsuru; Harada, Shinsuke; Iwamuro, Noriyuki; Yano, Hiroshi
2018-06-01
The impact of oxide thickness on the density distribution of near-interface traps (NITs) in SiO2/4H-SiC structure was investigated. We used the distributed circuit model that had successfully explained the frequency-dependent characteristics of both capacitance and conductance under strong accumulation conditions for SiO2/4H-SiC MOS capacitors with thick oxides by assuming an exponentially decaying distribution of NITs. In this work, it was found that the exponentially decaying distribution is the most plausible approximation of the true NIT distribution because it successfully explained the frequency dependences of capacitance and conductance under strong accumulation conditions for various oxide thicknesses. The thickness dependence of the NIT density distribution was also characterized. It was found that the NIT density increases with increasing oxide thickness, and a possible physical reason was discussed.
Did Martian Meteorites Come From These Sources?
NASA Astrophysics Data System (ADS)
Martel, L. M. V.
2007-01-01
Large rayed craters on Mars, not immediately obvious in visible light, have been identified in thermal infrared data obtained from the Thermal Emission Imaging System (THEMIS) onboard Mars Odyssey. Livio Tornabene (previously at the University of Tennessee, Knoxville and now at the University of Arizona, Tucson) and colleagues have mapped rayed craters primarily within young (Amazonian) volcanic plains in or near Elysium Planitia. They found that rays consist of numerous chains of secondary craters, their overlapping ejecta, and possibly primary ejecta from the source crater. Their work also suggests rayed craters may have formed preferentially in volatile-rich targets by oblique impacts. The physical details of the rayed craters and the target surfaces combined with current models of Martian meteorite delivery and cosmochemical analyses of Martian meteorites lead Tornabene and coauthors to conclude that these large rayed craters are plausible source regions for Martian meteorites.
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.
1991-01-01
Computations from two Navier-Stokes codes, NSS and F3D, are presented for a tangent-ogive-cylinder body at high angle of attack. Features of this steady flow include a pair of primary vortices on the leeward side of the body as well as secondary vortices. The topological and physical plausibility of this vortical structure is discussed. The accuracy of these codes is assessed by comparison of the numerical solutions with experimental data. The effects of turbulence model, numerical dissipation, and grid refinement are presented. The overall efficiency of these codes is also assessed by examining their convergence rates, computational time per time step, and maximum allowable time step for time-accurate computations. Overall, the numerical results from both codes compared equally well with experimental data; however, the NSS code was found to be significantly more efficient than the F3D code.
An automated approach to magnetic divertor configuration design
NASA Astrophysics Data System (ADS)
Blommaert, M.; Dekeyser, W.; Baelmans, M.; Gauger, N. R.; Reiter, D.
2015-01-01
Automated methods based on optimization can greatly assist computational engineering design in many areas. In this paper an optimization approach to the magnetic design of a nuclear fusion reactor divertor is proposed and applied to a tokamak edge magnetic configuration in a first feasibility study. The approach is based on reduced models for magnetic field and plasma edge, which are integrated with a grid generator into one sensitivity code. The design objective chosen here for demonstrative purposes is to spread the divertor target heat load as much as possible over the entire target area. Constraints on the separatrix position are introduced to eliminate physically irrelevant magnetic field configurations during the optimization cycle. A gradient projection method is used to ensure stable cost function evaluations during optimization. The concept is applied to a configuration with typical Joint European Torus (JET) parameters and it automatically provides plausible configurations with reduced heat load.
Thermal activation in Co/Sb nanoparticle-multilayer thin films
NASA Astrophysics Data System (ADS)
Madden, Michael R.
Multilayer Co/Sb thin films created via electron-beam physical vapor deposition are known to exhibit thermally activated dynamics. Scanning tunneling microscopy has indicated that the Co forms nanoparticles within an Sb matrix during deposition and subsequently forms nanowires by way of nanoparticle migration within the interstices of the confining layers. The electrical resistance of these systems decays during this irreversible aging process in a manner well-modeled by an Arrhenius law. Presently, this phenomenon is shown to possess some degree of tunability with respect to Co layer thickness tCo as well as deposition temperature Tdep, whereby characteristic timescales increase with either parameter. Furthermore, fluctuation timescales and activation energies seem to decrease and increase, respectively, with increasing tCo. An easily calibrated, one-time-use, time-temperature switch based on such systems lies within the realm of plausibility. The results presented here can be considered to be part of an ongoing development of the concept.
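An illustrative sketch (synthetic data, hypothetical parameter values) of the kind of analysis implied above: fitting an exponential resistance decay to extract a characteristic timescale, which could then be compared across temperatures through an Arrhenius relation.

```python
# Fit an exponential decay R(t) = R_inf + dR*exp(-t/tau) to synthetic resistance data.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, r_inf, dr, tau):
    return r_inf + dr * np.exp(-t / tau)

t = np.linspace(0.0, 5000.0, 120)                        # seconds
r_true = decay(t, 95.0, 12.0, 900.0)                     # ohms, invented values
r_meas = r_true + np.random.normal(0.0, 0.05, t.size)    # add measurement noise

popt, _ = curve_fit(decay, t, r_meas, p0=(90.0, 10.0, 500.0))
print("fitted R_inf, dR, tau:", [round(v, 1) for v in popt])
# The fitted tau from runs at different deposition temperatures could then be
# compared via an Arrhenius relation, tau ~ tau0 * exp(E_a / (k_B * T)).
```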
QUASI-PERIODICITIES AT YEAR-LIKE TIMESCALES IN BLAZARS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandrinelli, A.; Treves, A.; Covino, S.
2016-03-15
We searched for quasi-periodicities on year-like timescales in the light curves of six blazars in the optical and near-infrared bands and we made a comparison with the high-energy emission. We obtained optical/NIR light curves from Rapid Eye Mount photometry plus archival Small and Moderate Aperture Research Telescope System data and we accessed the Fermi light curves for the γ-ray data. The periodograms often show strong peaks in the optical and γ-ray bands, which in some cases may be inter-related. The significance of the revealed peaks is then discussed, taking into account that the noise is frequency dependent. Quasi-periodicities on a year-like timescale appear to occur often in blazars. No straightforward model describing these possible periodicities is yet available, but some plausible interpretations for the physical mechanisms causing periodic variabilities of these sources are examined.
Production of C-14 and neutrons in red giants
NASA Technical Reports Server (NTRS)
Cowan, J. J.; Rose, W. K.
1977-01-01
We have examined the effects of mixing various amounts of hydrogen-rich material into the intershell convective region of red giants undergoing helium shell flashes. We find that significant amounts of C-14 can be produced via the N-14(n, p)C-14 reaction. If substantial portions of this intershell region are mixed out into the envelopes of red giants, then C-14 may be detectable in evolved stars. We find a neutron flux many orders of magnitude above the flux required for the classical s-process, and thus an intermediate neutron process (i-process) may operate in evolved red giants. In all cases studied we find substantial enhancements of O-17. These mixing models offer a plausible explanation of the observations of enhanced O-17 in the carbon star IRC 10216. For certain physical conditions we find significant enhancements of N-15 in the intershell region.
Fundamental Limit of 1/f Frequency Noise in Semiconductor Lasers Due to Mechanical Thermal Noise
NASA Technical Reports Server (NTRS)
Numata, K.; Camp, J.
2011-01-01
So-called 1/f noise has a power spectral density inversely proportional to frequency and is observed in many physical processes. Single longitudinal-mode semiconductor lasers, used in a variety of interferometric sensing applications as well as coherent communications, exhibit 1/f frequency noise at low frequency (typically below 100 kHz). Here we evaluate mechanical thermal noise due to mechanical dissipation in semiconductor laser components and give a plausible explanation for the widely observed 1/f frequency noise, applying a methodology developed for fixed-spacer cavities for laser frequency stabilization. A semiconductor laser's short cavity, small beam radius, and lossy components are expected to emphasize thermal-noise-limited frequency noise. Our simple model largely explains the different 1/f noise levels observed in various semiconductor lasers, and provides a framework where the noise may be reduced with proper design.
The Higgs seesaw induced neutrino masses and dark matter
Cai, Yi; Chao, Wei
2015-08-12
In this study we propose a possible explanation of the active neutrino Majorana masses with TeV-scale new physics which also provides a dark matter candidate. We extend the Standard Model (SM) with a local U(1)' symmetry and introduce a seesaw relation for the vacuum expectation values (VEVs) of the exotic scalar singlets, which break the U(1)' spontaneously. The larger VEV is responsible for generating the Dirac mass term of the heavy neutrinos, while the smaller for the Majorana mass term. As a result, active neutrino masses are generated via the modified inverse seesaw mechanism. The lightest of the new fermion singlets, which are introduced to cancel the U(1)' anomalies, can be a stable particle with ultra flavor symmetry and thus a plausible dark matter candidate. We explore the parameter space with constraints from the dark matter relic abundance and dark matter direct detection.
Divertor target shape optimization in realistic edge plasma geometry
NASA Astrophysics Data System (ADS)
Dekeyser, W.; Reiter, D.; Baelmans, M.
2014-07-01
Tokamak divertor design for next-step fusion reactors heavily relies on numerical simulations of the plasma edge. Currently, the design process is mainly done in a forward approach, where the designer is strongly guided by experience and physical intuition in proposing divertor shapes, which are then thoroughly assessed by numerical computations. On the other hand, automated design methods based on optimization have proven very successful in the related field of aerodynamic design. By recasting design objectives and constraints into the framework of a mathematical optimization problem, efficient forward-adjoint based algorithms can be used to automatically compute the divertor shape which performs best with respect to the selected edge plasma model and design criteria. In the past years, we have extended these methods to automated divertor target shape design, using somewhat simplified edge plasma models and geometries. In this paper, we build on and extend previous work to apply these shape optimization methods for the first time in a more realistic, single-null edge plasma and divertor geometry, as commonly used in current divertor design studies. In a case study with JET-like parameters, we show that the so-called one-shot method is very effective in solving divertor target design problems. Furthermore, by detailed shape sensitivity analysis we demonstrate that the method, even at its present state of development, provides physically plausible trends, allowing a divertor design with an almost perfectly uniform power load to be achieved for our particular choice of edge plasma model and design criteria.
Fourth revolution in psychiatry – Addressing comorbidity with chronic physical disorders
Gautam, Shiv
2010-01-01
The moral treatment of mental patients, Electro Convulsive therapy (ECT), and Psychotropic medications constitute the first, second, and third revolution in psychiatry, respectively. Addressing comorbidities of mental illnesses with chronic physical illnesses will be the fourth revolution in psychiatry. Mind and body are inseparable; there is a bidirectional relationship between psyche and soma, each influencing the other. Plausible biochemical explanations are appearing at an astonishing rate. Psychiatric comorbidity with many chronic physical disorders has remained neglected. Such comorbidity with cardiac, respiratory, Gastrointestinal, endocrinal, and neurological disorders, trauma, and other conditions like HIV and so on, needs to be addressed too. Evidence base of prevalence and causal relationship of psychiatric comorbidities in these disorders has been highlighted and strategies to meet the challenge of comorbidity have been indicated. PMID:21180405
A physics-based model for maintenance of the pH gradient in the gastric mucus layer.
Lewis, Owen L; Keener, James P; Fogelson, Aaron L
2017-12-01
It is generally accepted that the gastric mucus layer provides a protective barrier between the lumen and the mucosa, shielding the mucosa from acid and digestive enzymes and preventing autodigestion of the stomach epithelium. However, the precise mechanisms that contribute to this protective function are still up for debate. In particular, it is not clear what physical processes are responsible for transporting hydrogen protons, secreted within the gastric pits, across the mucus layer to the lumen without acidifying the environment adjacent to the epithelium. One hypothesis is that hydrogen may be bound to the mucin polymers themselves as they are convected away from the mucosal surface and eventually degraded in the stomach lumen. It is also not clear what mechanisms prevent hydrogen from diffusing back toward the mucosal surface, thereby lowering the local pH. In this work we investigate a physics-based model of ion transport within the mucosal layer based on a Nernst-Planck-like equation. Analysis of this model shows that the mechanism of transporting protons bound to the mucus gel is capable of reproducing the trans-mucus pH gradients reported in the literature. Furthermore, when coupled with ion exchange at the epithelial surface, our analysis shows that bicarbonate secretion alone is capable of neutralizing the epithelial pH, even in the face of enormous diffusive gradients of hydrogen. Maintenance of the pH gradient is found to be robust to a wide array of perturbations in both physiological and phenomenological model parameters, suggesting a robust physiological control mechanism. NEW & NOTEWORTHY This work combines modeling techniques based on physical principles, as well as novel numerical simulations to test the plausibility of one hypothesized mechanism for proton transport across the gastric mucus layer. Results show that this mechanism is able to maintain the extreme pH gradient seen in in vivo experiments and suggests a highly robust regulation mechanism to maintain this gradient in the face of dynamic lumen composition. Copyright © 2017 the American Physiological Society.
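A minimal finite-difference sketch of one ingredient of such a transport model (hypothetical parameter values; the full Nernst-Planck coupling between ion species is omitted): a 1D advection-diffusion equation in which outward convection of the mucus opposes back-diffusion of protons from the acidic lumen toward the epithelium.

```python
# Explicit finite-difference solution of c_t = D*c_xx - v*c_x across the mucus layer,
# with the epithelium at x = 0 held neutral and the lumen at x = L held acidic.
import numpy as np

L, N = 1.0e-4, 100                       # mucus thickness (m), grid points
D, v = 1.0e-11, 5.0e-7                   # proton diffusivity (m^2/s), mucus velocity (m/s)
dx = L / (N - 1)
dt = 0.2 * dx ** 2 / D                   # explicit-scheme stability margin
c = np.zeros(N)
c[-1] = 1.0                              # normalized acidity held fixed at the lumen

for _ in range(60000):                   # march toward a near-steady profile
    diff = D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx ** 2
    adv = -v * (c[2:] - c[:-2]) / (2.0 * dx)
    c[1:-1] += dt * (diff + adv)
    c[0], c[-1] = 0.0, 1.0               # epithelium neutralized, lumen acidic

print("profile from epithelium to lumen:", np.round(c[::20], 3))
# With outward mucus velocity v > 0 the profile stays low near the epithelium,
# illustrating how convection can oppose back-diffusion of protons.
```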
Compact continuum brain model for human electroencephalogram
NASA Astrophysics Data System (ADS)
Kim, J. W.; Shin, H.-B.; Robinson, P. A.
2007-12-01
A low-dimensional, compact brain model has recently been developed based on a physiologically based mean-field continuum formulation of the electrical activity of the brain. The essential feature of the new compact model is a second-order time-delayed differential equation that has physiologically plausible terms, such as rapid corticocortical feedback and delayed feedback via extracortical pathways. Due to its compact form, the model facilitates insight into complex brain dynamics via standard linear and nonlinear techniques. The model successfully reproduces many features of previous models and experiments. For example, experimentally observed typical rhythms of electroencephalogram (EEG) signals are reproduced in a physiologically plausible parameter region. In the nonlinear regime, onsets of seizures, which often develop into limit cycles, are illustrated by modulating model parameters. It is also shown that hysteresis can occur when the system has multiple attractors. As a further illustration of this approach, power spectra of the model are fitted to those of sleep EEGs of two subjects (one with apnea, the other with narcolepsy). The model parameters obtained from the fittings show good agreement with previous literature. Our results suggest that the compact model can provide a theoretical basis for analyzing complex EEG signals.
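A hedged sketch of the general ingredient described above, a damped second-order equation with delayed feedback, integrated with explicit Euler stepping and a ring buffer for the delay. The coefficients are illustrative only and are not the published compact model's parameters.

```python
# Generic damped oscillator with delayed feedback and noise:
# x'' + 2*zeta*w0*x' + w0^2*x = b*x(t - tau) + noise
import math, random

w0 = 2.0 * math.pi * 10.0                 # ~10 Hz resonance (rad/s)
zeta, tau, dt = 0.3, 0.05, 5.0e-4         # damping ratio, delay (s), time step (s)
b = 0.4 * w0 ** 2                         # delayed-feedback gain, kept in a stable range
n_delay = int(round(tau / dt))
buf = [0.0] * n_delay
x, v, trace = 0.0, 0.0, []
for step in range(int(4.0 / dt)):         # 4 s of simulated activity
    x_delayed = buf[step % n_delay]       # state from tau seconds ago
    acc = -2.0 * zeta * w0 * v - w0 ** 2 * x + b * x_delayed + random.gauss(0.0, 50.0)
    x, v = x + dt * v, v + dt * acc       # explicit Euler update
    buf[step % n_delay] = x               # store for reuse after the delay
    trace.append(x)
rms = (sum(s * s for s in trace) / len(trace)) ** 0.5
print("RMS of simulated trace:", round(rms, 4))
```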
Robustness of predator-prey models for confinement regime transitions in fusion plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, H.; Chapman, S. C.; Department of Mathematics and Statistics, University of Tromso
2013-04-15
Energy transport and confinement in tokamak fusion plasmas is usually determined by the coupled nonlinear interactions of small-scale drift turbulence and larger scale coherent nonlinear structures, such as zonal flows, together with free energy sources such as temperature gradients. Zero-dimensional models, designed to embody plausible physical narratives for these interactions, can help to identify the origin of enhanced energy confinement and of transitions between confinement regimes. A prime zero-dimensional paradigm is predator-prey or Lotka-Volterra. Here, we extend a successful three-variable (temperature gradient; microturbulence level; one class of coherent structure) model in this genre [M. A. Malkov and P. H. Diamond, Phys. Plasmas 16, 012504 (2009)], by adding a fourth variable representing a second class of coherent structure. This requires a fourth coupled nonlinear ordinary differential equation. We investigate the degree of invariance of the phenomenology generated by the model of Malkov and Diamond, given this additional physics. We study and compare the long-time behaviour of the three-equation and four-equation systems, their evolution towards the final state, and their attractive fixed points and limit cycles. We explore the sensitivity of paths to attractors. It is found that, for example, an attractive fixed point of the three-equation system can become a limit cycle of the four-equation system. Addressing these questions, which we refer to collectively as 'robustness' for convenience, is particularly important for models which, as here, generate sharp transitions in the values of system variables which may replicate some key features of confinement transitions. Our results help to establish the robustness of the zero-dimensional model approach to capturing observed confinement phenomenology in tokamak fusion plasmas.
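An illustrative two-variable predator-prey sketch in the same spirit (not the exact equations of the cited three- or four-variable models): turbulence intensity acts as prey and zonal-flow energy as predator, with invented rate coefficients.

```python
# Two-variable predator-prey caricature of turbulence (E) and zonal flow (Z).
import numpy as np
from scipy.integrate import solve_ivp

gamma, alpha, mu, nu = 1.0, 0.5, 0.2, 0.1     # drive, coupling, saturation, flow damping

def rhs(t, y):
    E, Z = y                                  # turbulence intensity, zonal-flow energy
    dE = gamma * E - alpha * E * Z - mu * E * E
    dZ = alpha * E * Z - nu * Z
    return [dE, dZ]

sol = solve_ivp(rhs, (0.0, 200.0), [0.01, 0.01], max_step=0.05)
E, Z = sol.y
print("late-time E, Z:", round(float(E[-1]), 3), round(float(Z[-1]), 3))
# With prey self-saturation (mu > 0) this two-variable system spirals into a fixed
# point; the limit cycles discussed above require the extra variables of the
# three- and four-equation models.
```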
Bayraktar, Meriç; Männer, Jörg
2014-01-01
The transformation of the straight embryonic heart tube into a helically wound loop is named cardiac looping. Such looping is regarded as an essential process in cardiac morphogenesis since it brings the building blocks of the developing heart into an approximation of their definitive topographical relationships. During the past two decades, a large number of genes have been identified which play important roles in cardiac looping. However, how genetic information is physically translated into the dynamic form changes of the looping heart is still poorly understood. The oldest hypothesis of cardiac looping mechanics attributes the form changes of the heart loop (ventral bending → simple helical coiling → complex helical coiling) to compressive loads resulting from growth differences between the heart and the pericardial cavity. In the present study, we have tested the physical plausibility of this hypothesis, which we call the growth-induced buckling hypothesis, for the first time. Using a physical simulation model, we show that growth-induced buckling of a straight elastic rod within the confined space of a hemispherical cavity can generate the same sequence of form changes as observed in the looping embryonic heart. Our simulation experiments have furthermore shown that, under bilaterally symmetric conditions, growth-induced buckling generates left- and right-handed helices (D-/L-loops) in a 1:1 ratio, while even subtle left- or rightward displacements of the caudal end of the elastic rod at the pre-buckling state are sufficient to direct the buckling process toward the generation of only D- or L-loops, respectively. Our data are discussed with respect to observations made in biological “models.” We conclude that compressive loads resulting from unequal growth of the heart and pericardial cavity play important roles in cardiac looping. Asymmetric positioning of the venous heart pole may direct these forces toward a biased generation of D- or L-loops. PMID:24772086
Comparing Physics Scheme Performance for a Lake Effect Snowfall Event in Northern Lower Michigan
NASA Technical Reports Server (NTRS)
Molthan, Andrew; Arnott, Justin M.
2012-01-01
High-resolution forecast models, such as those used to predict severe convective storms, can also be applied to predictions of lake effect snowfall. A high-resolution WRF forecast model is provided to support operations at NWS WFO Gaylord, Michigan, using a 12-km and 4-km nested configuration. This is comparable to the simulations performed by other NWS WFOs adjacent to the Great Lakes, including offices in the NWS Eastern Region that participate in regional ensemble efforts. Ensemble efforts require diversity in initial conditions and physics configurations to emulate the plausible range of events in order to ascertain the likelihood of different forecast scenarios. In addition to providing probabilistic guidance, individual members can be evaluated to determine whether they appear to be biased in some way, or to better understand how certain physics configurations may impact the resulting forecast. On January 20-21, 2011, a lake effect snow event occurred in Northern Lower Michigan, with cooperative observing and CoCoRaHS stations reporting new snow accumulations between 2 and 8 inches and liquid equivalents of 0.1-0.25 in. The event of January 21, 2011 was particularly well observed, with numerous surface reports available. It was also well represented by the WRF configuration operated at NWS Gaylord. Given that the default configuration produced a reasonable prediction, it is used here to evaluate the impacts of other physics configurations on the resulting prediction of the primary lake effect band and resulting QPF. Emphasis here is on differences in planetary boundary layer and cloud microphysics parameterizations, given their likely role in determining the evolution of shallow convection and precipitation processes. Results from an ensemble of seven microphysics schemes and three planetary boundary layer schemes are presented to demonstrate variability in forecast evolution, with results used in an attempt to improve forecasts in the 2011-2012 lake effect season.
Expanding the Role of Connectionism in SLA Theory
ERIC Educational Resources Information Center
Language Learning, 2013
2013-01-01
In this article, I explore how connectionism might expand its role in second language acquisition (SLA) theory by showing how some symbolic models of bilingual and second language lexical memory can be reduced to a biologically realistic (i.e., neurally plausible) connectionist model. This integration or hybridization of the two models follows the…
ERIC Educational Resources Information Center
Laszlo, Sarah; Plaut, David C.
2012-01-01
The Parallel Distributed Processing (PDP) framework has significant potential for producing models of cognitive tasks that approximate how the brain performs the same tasks. To date, however, there has been relatively little contact between PDP modeling and data from cognitive neuroscience. In an attempt to advance the relationship between…
Occupational Factors, Fatigue, and Cardiovascular Disease
2009-01-01
Purpose: Briefly identify the epidemiological evidence, propose pertinent mechanisms, and discuss physical therapy practice as well as research implications of a causal association between occupational factors and cardiovascular disease. Summary of Key Points: There is evidence that occupational metabolic demands and work organizations characterized by reduced worker control are associated with increased risk of cardiovascular disease. It is biologically plausible that these two factors interact to create a preclinical, intermediate state of fatigue (burnout) that is a critical component in the causal path from occupational factors to CVD. Physical therapists are uniquely qualified to contribute to an understanding of these mechanisms and their resultant implications for work organization, rehabilitation, and health promotion. Statement of Recommendations: Physical therapists engaged in ergonomic job analysis should consider work related metabolic demands, worker control, and fatigue in their assessment of risk for injury and illness, in recommendations for return to work, and in the prescription of health promotion leisure time physical activity PMID:20467535
NASA Astrophysics Data System (ADS)
Karmalkar, A.; Sexton, D.; Murphy, J.
2017-12-01
We present exploratory work towards developing an efficient strategy to select variants of a state-of-the-art but expensive climate model suitable for climate projection studies. The strategy combines information from a set of idealized perturbed parameter ensemble (PPE) and CMIP5 multi-model ensemble (MME) experiments, and uses two criteria as the basis for selecting model variants for a PPE suitable for future projections: a) acceptable model performance at two different timescales, and b) maintaining diversity in model response to climate change. We demonstrate that there is a strong relationship between model errors at weather and climate timescales for a variety of key variables. This relationship is used to filter out parts of parameter space that do not give credible simulations of historical climate, while minimizing the impact on ranges in forcings and feedbacks that drive model responses to climate change. We use statistical emulation to explore the parameter space thoroughly, and demonstrate that about 90% of it can be filtered out without affecting diversity in global-scale climate change responses. This leads to identification of plausible parts of parameter space from which model variants can be selected for projection studies.
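A minimal sketch of the emulator-based filtering step, assuming a scalar "historical error" metric has already been computed for a small training ensemble. It uses a Gaussian-process emulator from scikit-learn; the training data, parameter count, and acceptability threshold are synthetic stand-ins rather than values from the actual PPE.

    # Sketch: emulate an error metric over parameter space, then filter implausible regions.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    rng = np.random.default_rng(0)
    n_train, n_dim = 60, 5                       # assumed ensemble size / parameter count
    X_train = rng.uniform(0.0, 1.0, size=(n_train, n_dim))
    # Synthetic stand-in for a model-vs-observations error metric.
    err_train = np.sum((X_train - 0.4) ** 2, axis=1) + 0.05 * rng.normal(size=n_train)

    emulator = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    emulator.fit(X_train, err_train)

    # Dense random exploration of parameter space via the cheap emulator.
    X_cand = rng.uniform(0.0, 1.0, size=(100_000, n_dim))
    err_pred = emulator.predict(X_cand)

    threshold = np.percentile(err_train, 25)     # assumed acceptability cut-off
    plausible = X_cand[err_pred < threshold]
    print(f"{100 * (1 - len(plausible) / len(X_cand)):.1f}% of parameter space filtered out")

In the real strategy the retained candidates would additionally be screened for diversity in forcings and feedbacks before new model variants are run.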
Brutality under cover of ambiguity: activating, perpetuating, and deactivating covert retributivism.
Fincher, Katrina M; Tetlock, Philip E
2015-05-01
Five studies tested four hypotheses on the drivers of punitive judgments. Study 1 showed that people imposed covertly retributivist physical punishments on extreme norm violators when they could plausibly deny that is what they were doing (attributional ambiguity). Studies 2 and 3 showed that covert retributivism could be suppressed by subtle accountability manipulations that cue people to the possibility that they might be under scrutiny. Studies 4 and 5 showed how covert retributivism can become self-sustaining by biasing the lessons people learn from experience. Covert retributivists did not scale back punitiveness in response to feedback that the justice system makes false-conviction errors but they did ramp up punitiveness in response to feedback that the system makes false-acquittal errors. Taken together, the results underscore the paradoxical nature of covert retributivism: It is easily activated by plausible deniability and persistent in the face of false-conviction feedback but also easily deactivated by minimalist forms of accountability. © 2015 by the Society for Personality and Social Psychology, Inc.
Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha
2007-01-01
Background: The estimation of health impacts often involves uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Methods: Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and parameter uncertainties (iii) exposure-response coefficients for different mortality outcomes, and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation on the health effect uncertainties. Results: The magnitude of the health effects costs depended mostly on discount rate, exposure-response coefficient, and plausibility of the cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in the fine particle impact assessment when compared with other uncertainties. Conclusion: When estimating life-expectancy, the estimates used for the cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without this having any major effect on the results. PMID:17714598
Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha
2007-08-23
The estimation of health impacts often involves uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and parameter uncertainties (iii) exposure-response coefficients for different mortality outcomes, and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation on the health effect uncertainties. The magnitude of the health effects costs depended mostly on discount rate, exposure-response coefficient, and plausibility of the cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in the fine particle impact assessment when compared with other uncertainties. When estimating life-expectancy, the estimates used for the cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without this having any major effect on the results.
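The rank-order-correlation sensitivity analysis described in the two records above can be illustrated with a toy Monte Carlo stand-in for the life-table output; the input distributions and the output formula below are assumptions for illustration only, not the Helsinki model.

    # Sketch: Spearman rank correlations between sampled inputs and a toy impact output.
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(1)
    n = 10_000
    inputs = {
        "exposure_response": rng.normal(0.0006, 0.0002, n),   # per ug/m3, assumed
        "pm25_exposure":     rng.normal(9.0, 2.0, n),         # ug/m3, assumed
        "discount_rate":     rng.uniform(0.0, 0.05, n),
        "plausibility":      rng.uniform(0.5, 1.0, n),        # cardiopulmonary outcome weight
    }
    # Toy output: discounted, plausibility-weighted loss of life expectancy.
    output = (inputs["exposure_response"] * inputs["pm25_exposure"]
              * inputs["plausibility"] / (1.0 + 20.0 * inputs["discount_rate"]))

    for name, values in inputs.items():
        rho, _ = spearmanr(values, output)
        print(f"{name:20s} rank correlation with output: {rho:+.2f}")

Inputs with rank correlations near zero are candidates for simplification, mirroring the paper's conclusion that complicated lag estimates can be omitted.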
Hebbian Plasticity in CPG Controllers Facilitates Self-Synchronization for Human-Robot Handshaking.
Jouaiti, Melanie; Caron, Lancelot; Hénaff, Patrick
2018-01-01
It is well-known that human social interactions generate synchrony phenomena which are often unconscious. If the interaction between individuals is based on rhythmic movements, synchronized and coordinated movements will emerge from the social synchrony. This paper proposes a plausible model of plastic neural controllers that allows the emergence of synchronized movements in physical and rhythmical interactions. The controller is designed with central pattern generators (CPG) based on rhythmic Rowat-Selverston neurons endowed with neuronal and synaptic Hebbian plasticity. To demonstrate the interest of the proposed model, the case of handshaking is considered because it is a very common act, both physically and socially, but also a very complex one from the point of view of robotics, neuroscience, and psychology. Plastic CPG controllers are implemented in the joints of a simulated robotic arm that has to learn the frequency and amplitude of an external force applied to its effector, thus reproducing the act of handshaking with a human. Results show that neuronal and synaptic Hebbian plasticity work together, leading to a natural and autonomous synchronization between the arm and the external force even if the frequency changes during the movement. Moreover, a power consumption analysis shows that, by enabling the emergence of synchronized and coordinated movements, the plasticity mechanisms lead to a significant decrease in the energy spent by the robot actuators, thus generating a more adaptive and natural human/robot handshake.
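The frequency-learning idea at the heart of this controller can be illustrated with a much simpler construct than the paper's Rowat-Selverston CPG: an adaptive-frequency Hopf oscillator that locks onto the frequency of an external "handshake" force. This is a stand-in for the general mechanism only; all constants below are assumptions chosen for numerical stability, not values from the paper.

    # Sketch: adaptive-frequency Hopf oscillator entraining to an external periodic force.
    import numpy as np

    dt, T = 1e-3, 60.0
    gamma, mu, eps = 10.0, 1.0, 2.0     # convergence rate, amplitude, coupling (assumed)
    omega_force = 2 * np.pi * 1.5       # assumed handshake frequency (1.5 Hz)

    x, y, omega = 1.0, 0.0, 2 * np.pi * 0.5   # oscillator starts at 0.5 Hz
    for step in range(int(T / dt)):
        t = step * dt
        F = np.sin(omega_force * t)             # external periodic forcing
        r2 = x * x + y * y
        r = np.sqrt(r2)
        dx = gamma * (mu - r2) * x - omega * y + eps * F
        dy = gamma * (mu - r2) * y + omega * x
        domega = -eps * F * y / max(r, 1e-9)    # Hebbian-like frequency adaptation
        x, y, omega = x + dt * dx, y + dt * dy, omega + dt * domega

    print(f"learned frequency: {omega / (2 * np.pi):.2f} Hz (forcing at 1.50 Hz)")

The oscillator's intrinsic frequency drifts toward the forcing frequency and stays there, which is the same self-synchronization property the plastic CPG controllers exploit.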
The Central Role of Recognition in Auditory Perception: A Neurobiological Model
ERIC Educational Resources Information Center
McLachlan, Neil; Wilson, Sarah
2010-01-01
The model presents neurobiologically plausible accounts of sound recognition (including absolute pitch), neural plasticity involved in pitch, loudness and location information integration, and streaming and auditory recall. It is proposed that a cortical mechanism for sound identification modulates the spectrotemporal response fields of inferior…
Computational Approaches to Simulation and Analysis of Large Conformational Transitions in Proteins
NASA Astrophysics Data System (ADS)
Seyler, Sean L.
In a typical living cell, millions to billions of proteins--nanomachines that fluctuate and cycle among many conformational states--convert available free energy into mechanochemical work. A fundamental goal of biophysics is to ascertain how 3D protein structures encode specific functions, such as catalyzing chemical reactions or transporting nutrients into a cell. Protein dynamics span femtosecond timescales (i.e., covalent bond oscillations) to large conformational transition timescales in, and beyond, the millisecond regime (e.g., glucose transport across a phospholipid bilayer). Actual transition events are fast but rare, occurring orders of magnitude faster than typical metastable equilibrium waiting times. Equilibrium molecular dynamics (EqMD) can capture atomistic detail and solute-solvent interactions, but even microseconds of sampling attainable nowadays still falls orders of magnitude short of transition timescales, especially for large systems, rendering observations of such "rare events" difficult or effectively impossible. Advanced path-sampling methods exploit reduced physical models or biasing to produce plausible transitions while balancing accuracy and efficiency, but quantifying their accuracy relative to other numerical and experimental data has been challenging. Indeed, new horizons in elucidating protein function necessitate that present methodologies be revised to more seamlessly and quantitatively integrate a spectrum of methods, both numerical and experimental. In this dissertation, experimental and computational methods are put into perspective using the enzyme adenylate kinase (AdK) as an illustrative example. We introduce Path Similarity Analysis (PSA)--an integrative computational framework developed to quantify transition path similarity. PSA not only reliably distinguished AdK transitions by the originating method, but also traced pathway differences between two methods back to charge-charge interactions (neglected by the stereochemical model, but not the all-atom force field) in several conserved salt bridges. Cryo-electron microscopy maps of the transporter Bor1p are directly incorporated into EqMD simulations using MD flexible fitting to produce viable structural models and infer a plausible transport mechanism. Conforming to the theme of integration, a short compendium of an exploratory project--developing a hybrid atomistic-continuum method--is presented, including initial results and a novel fluctuating hydrodynamics model and corresponding numerical code.
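The core distance computation behind a path-similarity analysis can be sketched as the symmetric Hausdorff distance between two transition paths, each represented as an array of conformations flattened to coordinate vectors. The paths below are random stand-ins; the full PSA framework (as implemented, for example, in MDAnalysis) adds trajectory alignment, bookkeeping, and clustering on top of this.

    # Sketch: symmetric Hausdorff distance between two transition paths.
    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    rng = np.random.default_rng(2)
    n_frames, n_coords = 200, 3 * 214            # e.g. a 214-residue CA-only model (assumed)
    path_a = np.cumsum(rng.normal(size=(n_frames, n_coords)), axis=0)
    path_b = np.cumsum(rng.normal(size=(n_frames, n_coords)), axis=0)

    def hausdorff(p, q):
        """Symmetric Hausdorff distance between two paths (frames x coordinates)."""
        return max(directed_hausdorff(p, q)[0], directed_hausdorff(q, p)[0])

    print(f"Hausdorff distance between paths: {hausdorff(path_a, path_b):.2f}")

Pairwise distances of this kind, computed over many paths from different methods, are what allow the transitions to be clustered by originating method.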
Models in Translational Oncology: A Public Resource Database for Preclinical Cancer Research.
Galuschka, Claudia; Proynova, Rumyana; Roth, Benjamin; Augustin, Hellmut G; Müller-Decker, Karin
2017-05-15
The devastating diseases of human cancer are mimicked in basic and translational cancer research by a steadily increasing number of tumor models, a situation requiring a platform with standardized reports to share model data. The Models in Translational Oncology (MiTO) database was developed as a unique Web platform aiming for a comprehensive overview of preclinical models covering genetically engineered organisms, models of transplantation, chemical/physical induction, or spontaneous development, reviewed here. MiTO serves data entry for metastasis profiles and interventions. Moreover, cell lines and animal lines including tool strains can be recorded. Hyperlinks for connection with other databases and file uploads as supplementary information are supported. Several communication tools are offered to facilitate exchange of information. Notably, intellectual property can be protected prior to publication by inventor-defined accessibility of any given model. Data recall is via a highly configurable keyword search. Genome editing is expected to result in changes of the spectrum of model organisms, a reason to open MiTO for species-independent data. Registered users may deposit their own model fact sheets (FS). MiTO experts check them for plausibility. Independently, manually curated FS are provided to principal investigators for revision and publication. Importantly, noneditable versions of reviewed FS can be cited in peer-reviewed journals. Cancer Res; 77(10); 2557-63. ©2017 American Association for Cancer Research.
Pedestrian evacuation modeling to reduce vehicle use for distant tsunami evacuations in Hawaiʻi
Wood, Nathan J.; Jones, Jamie; Peters, Jeff; Richards, Kevin
2018-01-01
Tsunami waves that arrive hours after generation elsewhere pose logistical challenges to emergency managers due to the perceived abundance of time and inclination of evacuees to use vehicles. We use coastal communities on the island of Oʻahu (Hawaiʻi, USA) to demonstrate regional evacuation modeling that can identify where successful pedestrian-based evacuations are plausible and where vehicle use could be discouraged. The island of Oʻahu has two tsunami-evacuation zones (standard and extreme), which provides the opportunity to examine if recommended travel modes vary based on zone. Geospatial path distance models are applied to estimate population exposure as a function of pedestrian travel time and speed out of evacuation zones. The use of the extreme zone triples the number of residents, employees, and facilities serving at-risk populations that would be encouraged to evacuate and slightly reduces the percentage of residents (98–76%) that could evacuate in less than 15 min at a plausible speed (with similar percentages for employees). Areas with lengthy evacuations are concentrated in the North Shore region for the standard zone but found all around the Oʻahu coastline for the extreme zone. The use of the extreme zone results in a 26% increase in the number of hotel visitors that would be encouraged to evacuate, and a 76% increase in the number of them that may require more than 15 min. Modeling can identify where pedestrian evacuations are plausible; however, there are logistical and behavioral issues that warrant attention before localized evacuation procedures may be realistic.
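A stripped-down version of the grid-based "path distance" calculation behind such evacuation modeling is sketched below: Dijkstra's algorithm is seeded from all cells outside the evacuation zone, so every cell inside the zone receives its shortest travel time to safety at an assumed walking speed. The grid, zone geometry, and speed are illustrative assumptions; the published analyses use terrain-dependent, anisotropic speed surfaces.

    # Sketch: shortest pedestrian travel time out of an evacuation zone on a uniform grid.
    import heapq
    import numpy as np

    cell_size = 10.0                 # metres per grid cell (assumed)
    speed = 1.1                      # slow-walk speed, m/s (assumed)
    hazard = np.ones((50, 50), dtype=bool)
    hazard[:, 40:] = False           # columns 40+ are outside the evacuation zone

    time_to_safety = np.full(hazard.shape, np.inf)
    heap = []
    for r, c in zip(*np.where(~hazard)):          # seed with all safe cells
        time_to_safety[r, c] = 0.0
        heapq.heappush(heap, (0.0, r, c))

    while heap:                                   # Dijkstra over 4-connected neighbours
        t, r, c = heapq.heappop(heap)
        if t > time_to_safety[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < hazard.shape[0] and 0 <= cc < hazard.shape[1]:
                tt = t + cell_size / speed
                if tt < time_to_safety[rr, cc]:
                    time_to_safety[rr, cc] = tt
                    heapq.heappush(heap, (tt, rr, cc))

    print(f"worst-case evacuation time: {time_to_safety[hazard].max() / 60:.1f} min")

Overlaying population counts on the resulting travel-time surface gives the exposure-versus-time curves used to judge where pedestrian evacuation is plausible.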
Nutrition Implications for Fetal Alcohol Spectrum Disorder
Young, Jennifer K.; Giesbrecht, Heather E.; Eskin, Michael N.; Aliani, Michel; Suh, Miyoung
2014-01-01
Prenatal alcohol exposure produces a multitude of detrimental alcohol-induced defects in children collectively known as fetal alcohol spectrum disorder (FASD). Children with FASD often exhibit delayed or abnormal mental, neural, and physical growth. Socioeconomic status, race, genetics, parity, gravidity, age, smoking, and alcohol consumption patterns are all factors that may influence FASD. Optimal maternal nutritional status is of utmost importance for proper fetal development, yet is often altered with alcohol consumption. It is critical to determine a means to resolve and reduce the physical and neurological malformations that develop in the fetus as a result of prenatal alcohol exposure. Because there is a lack of information on the role of nutrients and prenatal nutrition interventions for FASD, the focus of this review is to provide an overview of nutrients (vitamin A, docosahexaenoic acid, folic acid, zinc, choline, vitamin E, and selenium) that may prevent or alleviate the development of FASD. Results from various nutrient supplementation studies in animal models and FASD-related research conducted in humans provide insight into the plausibility of prenatal nutrition interventions for FASD. Further research is necessary to confirm positive results, to determine optimal amounts of nutrients needed in supplementation, and to investigate the collective effects of multiple-nutrient supplementation. PMID:25398731
Lindeman, Meghan I H; Zengel, Bettina; Skowronski, John J
2017-07-01
The affect associated with negative (or unpleasant) memories typically tends to fade faster than the affect associated with positive (or pleasant) memories, a phenomenon called the fading affect bias (FAB). We conducted a study to explore the mechanisms related to the FAB. A retrospective recall procedure was used to obtain three self-report measures (memory vividness, rehearsal frequency, affective fading) for both positive events and negative events. Affect for positive events faded less than affect for negative events, and positive events were recalled more vividly than negative events. The perceived vividness of an event (memory vividness) and the extent to which an event has been rehearsed (rehearsal frequency) were explored as possible mediators of the relation between event valence and affect fading. Additional models conceived of affect fading and rehearsal frequency as contributors to a memory's vividness. Results suggested that memory vividness was a plausible mediator of the relation between an event's valence and affect fading. Rehearsal frequency was also a plausible mediator of this relation, but only via its effects on memory vividness. Additional modelling results suggested that affect fading and rehearsal frequency were both plausible mediators of the relation between an event's valence and the event's rated memory vividness.
Vectorial Representations of Meaning for a Computational Model of Language Comprehension
ERIC Educational Resources Information Center
Wu, Stephen Tze-Inn
2010-01-01
This thesis aims to define and extend a line of computational models for text comprehension that are humanly plausible. Since natural language is human by nature, computational models of human language will always be just that--models. To the degree that they miss out on information that humans would tap into, they may be improved by considering…
Resolving Conflicts Between Syntax and Plausibility in Sentence Comprehension
Andrews, Glenda; Ogden, Jessica E.; Halford, Graeme S.
2017-01-01
Comprehension of plausible and implausible object- and subject-relative clause sentences with and without prepositional phrases was examined. Undergraduates read each sentence then evaluated a statement as consistent or inconsistent with the sentence. Higher acceptance of consistent than inconsistent statements indicated reliance on syntactic analysis. Higher acceptance of plausible than implausible statements reflected reliance on semantic plausibility. There was greater reliance on semantic plausibility and lesser reliance on syntactic analysis for more complex object-relatives and sentences with prepositional phrases than for less complex subject-relatives and sentences without prepositional phrases. Comprehension accuracy and confidence were lower when syntactic analysis and semantic plausibility yielded conflicting interpretations. The conflict effect on comprehension was significant for complex sentences but not for less complex sentences. Working memory capacity predicted resolution of the syntax-plausibility conflict in more and less complex items only when sentences and statements were presented sequentially. Fluid intelligence predicted resolution of the conflict in more and less complex items under sequential and simultaneous presentation. Domain-general processes appear to be involved in resolving syntax-plausibility conflicts in sentence comprehension. PMID:28458748
The role of building models in the evaluation of heat-related risks
NASA Astrophysics Data System (ADS)
Buchin, Oliver; Jänicke, Britta; Meier, Fred; Scherer, Dieter; Ziegler, Felix
2016-04-01
Hazard-risk relationships in epidemiological studies are generally based on the outdoor climate, despite the fact that most of humans' lifetime is spent indoors. By coupling indoor and outdoor climates with a building model, the risk concept developed can still be based on the outdoor conditions but also includes exposure to the indoor climate. The influence of non-linear building physics and the impact of air conditioning on heat-related risks can be assessed in a plausible manner using this risk concept. For proof of concept, the proposed risk concept is compared to a traditional risk analysis. As an example, daily and city-wide mortality data of the age group 65 and older in Berlin, Germany, for the years 2001-2010 are used. Four building models with differing complexity are applied in a time-series regression analysis. This study shows that indoor hazard better explains the variability in the risk data compared to outdoor hazard, depending on the kind of building model. Simplified parameter models include the main non-linear effects and are proposed for the time-series analysis. The concept shows that the definitions of heat events, lag days, and acclimatization in a traditional hazard-risk relationship are influenced by the characteristics of the prevailing building stock.
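The simplest kind of "parameter" building model referred to above can be sketched as a first-order lumped-capacitance (RC) model mapping an outdoor temperature series to an indoor one; the indoor series is damped and lagged, which is what changes the apparent hazard in the time-series regression. The time constant and gain term below are assumptions, not fitted Berlin values.

    # Sketch: first-order RC building model turning outdoor temperature into indoor temperature.
    import numpy as np

    hours = np.arange(24 * 10)                                 # ten days, hourly
    t_out = 25 + 8 * np.sin(2 * np.pi * (hours - 15) / 24)     # synthetic heat-wave weather

    tau = 36.0        # building time constant in hours (assumed)
    gain = 1.5        # constant internal/solar gain in kelvin (assumed)

    t_in = np.empty_like(t_out)
    t_in[0] = t_out[0]
    for k in range(1, len(hours)):
        # explicit Euler step of dT_in/dt = (T_out - T_in)/tau + gain/tau
        t_in[k] = t_in[k - 1] + ((t_out[k - 1] - t_in[k - 1]) + gain) / tau

    print(f"outdoor peak {t_out.max():.1f} °C, indoor peak {t_in.max():.1f} °C (damped and lagged)")

The indoor series (or a more detailed building simulation) then replaces the outdoor series as the hazard variable in the mortality regression.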
Deng; Zhang; Zhang; ...
2016-04-11
The jet composition and energy dissipation mechanism of gamma-ray bursts (GRBs) and blazars are fundamental questions that remain not fully understood. One plausible model is to interpret the γ-ray emission of GRBs and optical emission of blazars as synchrotron radiation of electrons accelerated from the collision-induced magnetic dissipation regions in Poynting-flux-dominated jets. Polarization observations provide important and independent information to test this model. Based on our recent 3D relativistic MHD simulations of collision-induced magnetic dissipation of magnetically dominated blobs, here we perform calculations of the polarization properties of the emission in the dissipation region and apply the results to model the polarization observational data of GRB prompt emission and blazar optical emission. In this article, we show that the same numerical model with different input parameters can reproduce well the observational data of both GRBs and blazars, especially the 90° polarization angle (PA) change in GRB 100826A and the 180° PA swing in blazar 3C279. This supports a unified model for GRB and blazar jets, suggesting that collision-induced magnetic reconnection is a common physical mechanism to power the relativistic jet emission from events with very different black hole masses.
Colard, Stéphane; O’Connell, Grant; Verron, Thomas; Cahours, Xavier; Pritchard, John D.
2014-01-01
There has been rapid growth in the use of electronic cigarettes (“vaping”) in Europe, North America and elsewhere. With such increased prevalence, there is currently a debate on whether the aerosol exhaled following the use of e-cigarettes has implications for the quality of air breathed by bystanders. Conducting chemical analysis of the indoor environment can be costly and resource intensive, limiting the number of studies which can be conducted. However, this can be modelled reasonably accurately based on empirical emissions data and using some basic assumptions. Here, we present a simplified model, based on physical principles, which considers aerosol propagation, dilution and extraction to determine the potential contribution of a single puff from an e-cigarette to indoor air. From this, it was then possible to simulate the cumulative effect of vaping over time. The model was applied to a virtual, but plausible, scenario considering an e-cigarette user and a non-user working in the same office space. The model was also used to reproduce published experimental studies and showed good agreement with the published values of indoor air nicotine concentration. With some additional refinements, such an approach may be a cost-effective and rapid way of assessing the potential exposure of bystanders to exhaled e-cigarette aerosol constituents. PMID:25547398
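The single-zone, well-mixed balance used in such models is easy to sketch: each puff releases an assumed mass into a ventilated room, and the indoor concentration decays with the air-exchange rate between puffs. The emission per puff, room volume, and ventilation rate below are illustrative assumptions, not the paper's measured inputs.

    # Sketch: well-mixed box model for the cumulative effect of repeated puffs in an office.
    import numpy as np

    volume = 40.0          # room volume, m^3 (assumed office)
    ach = 1.5              # air changes per hour (assumed ventilation)
    mass_per_puff = 0.005  # mg nicotine exhaled per puff (assumed)
    puff_times = np.arange(0, 8 * 3600, 300)     # one puff every 5 min over 8 h

    dt = 10.0                                    # time step, s
    t = np.arange(0, 8 * 3600, dt)
    conc = np.zeros_like(t, dtype=float)         # mg/m^3
    for k in range(1, len(t)):
        c = conc[k - 1]
        c *= np.exp(-ach / 3600.0 * dt)          # dilution/extraction over one step
        if np.any((puff_times >= t[k - 1]) & (puff_times < t[k])):
            c += mass_per_puff / volume          # instantaneous, well-mixed puff
        conc[k] = c

    print(f"quasi-steady nicotine concentration ~ {conc[-1] * 1000:.2f} ug/m^3")

The concentration approaches a quasi-steady value set by the emission rate divided by the ventilation flow, which is the quantity compared against bystander exposure guidelines.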
Jones, Michael L.; Shuter, Brian J.; Zhao, Yingming; Stockwell, Jason D.
2006-01-01
Future changes to climate in the Great Lakes may have important consequences for fisheries. Evidence suggests that Great Lakes air and water temperatures have risen and the duration of ice cover has lessened during the past century. Global circulation models (GCMs) suggest future warming and increases in precipitation in the region. We present new evidence that water temperatures have risen in Lake Erie, particularly during summer and winter in the period 1965-2000. GCM forecasts coupled with physical models suggest lower annual runoff, less ice cover, and lower lake levels in the future, but the certainty of these forecasts is low. Assessment of the likely effects of climate change on fish stocks will require an integrative approach that considers several components of habitat rather than water temperature alone. We recommend using mechanistic models that couple habitat conditions to population demographics to explore integrated effects of climate-caused habitat change and illustrate this approach with a model for Lake Erie walleye (Sander vitreum). We show that the combined effect on walleye populations of plausible changes in temperature, river hydrology, lake levels, and light penetration can be quite different from that which would be expected based on consideration of only a single factor.
Counterfactual Plausibility and Comparative Similarity.
Stanley, Matthew L; Stewart, Gregory W; Brigard, Felipe De
2017-05-01
Counterfactual thinking involves imagining hypothetical alternatives to reality. Philosopher David Lewis (1973, 1979) argued that people estimate the subjective plausibility that a counterfactual event might have occurred by comparing an imagined possible world in which the counterfactual statement is true against the current, actual world in which the counterfactual statement is false. Accordingly, counterfactuals considered to be true in possible worlds comparatively more similar to ours are judged as more plausible than counterfactuals deemed true in possible worlds comparatively less similar. Although Lewis did not originally develop his notion of comparative similarity to be investigated as a psychological construct, this study builds upon his idea to empirically investigate comparative similarity as a possible psychological strategy for evaluating the perceived plausibility of counterfactual events. More specifically, we evaluate judgments of comparative similarity between episodic memories and episodic counterfactual events as a factor influencing people's judgments of plausibility in counterfactual simulations, and we also compare it against other factors thought to influence judgments of counterfactual plausibility, such as ease of simulation and prior simulation. Our results suggest that the greater the perceived similarity between the original memory and the episodic counterfactual event, the greater the perceived plausibility that the counterfactual event might have occurred. While similarity between actual and counterfactual events, ease of imagining, and prior simulation of the counterfactual event were all significantly related to counterfactual plausibility, comparative similarity best captured the variance in ratings of counterfactual plausibility. Implications for existing theories on the determinants of counterfactual plausibility are discussed. Copyright © 2016 Cognitive Science Society, Inc.
Dynamic causal modelling: a critical review of the biophysical and statistical foundations.
Daunizeau, J; David, O; Stephan, K E
2011-09-15
The goal of dynamic causal modelling (DCM) of neuroimaging data is to study experimentally induced changes in functional integration among brain regions. This requires (i) biophysically plausible and physiologically interpretable models of neuronal network dynamics that can predict distributed brain responses to experimental stimuli and (ii) efficient statistical methods for parameter estimation and model comparison. These two key components of DCM have been the focus of more than thirty methodological articles since the seminal work of Friston and colleagues published in 2003. In this paper, we provide a critical review of the current state-of-the-art of DCM. We inspect the properties of DCM in relation to the most common neuroimaging modalities (fMRI and EEG/MEG) and the specificity of inference on neural systems that can be made from these data. We then discuss both the plausibility of the underlying biophysical models and the robustness of the statistical inversion techniques. Finally, we discuss potential extensions of the current DCM framework, such as stochastic DCMs, plastic DCMs and field DCMs. Copyright © 2009 Elsevier Inc. All rights reserved.
A Synchronization Account of False Recognition
ERIC Educational Resources Information Center
Johns, Brendan T.; Jones, Michael N.; Mewhort, Douglas J. K.
2012-01-01
We describe a computational model to explain a variety of results in both standard and false recognition. A key attribute of the model is that it uses plausible semantic representations for words, built through exposure to a linguistic corpus. A study list is encoded in the model as a gist trace, similar to the proposal of fuzzy trace theory…
NASA Astrophysics Data System (ADS)
Keane, J. T.; Johnson, B. C.; Matsuyama, I.; Siegler, M. A.
2018-04-01
New geophysical data and numerical models reveal that basin-scale impacts routinely caused the Moon to tumble (non-principal-axis rotation) early in its history — plausibly driving magnetic fields, erasing primordial volatiles, and more.
A combined radio and GeV γ-ray view of the 2012 and 2013 flares of Mrk 421
Hovatta, Talvikki; Petropoulou, M.; Richards, J. L.; ...
2015-03-09
In 2012 Markarian 421 underwent the largest flare ever observed in this blazar at radio frequencies. In the present study, we start exploring this unique event and compare it to a less extreme event in 2013. We use 15 GHz radio data obtained with the Owens Valley Radio Observatory 40-m telescope, 95 GHz millimetre data from the Combined Array for Research in Millimeter-Wave Astronomy, and GeV γ-ray data from the Fermi Gamma-ray Space Telescope. Here, the radio light curves during the flaring periods in 2012 and 2013 have very different appearances, in both shape and peak flux density. Assuming that the radio and γ-ray flares are physically connected, we attempt to model the most prominent sub-flares of the 2012 and 2013 activity periods by using the simplest possible theoretical framework. We first fit a one-zone synchrotron self-Compton (SSC) model to the less extreme 2013 flare and estimate parameters describing the emission region. We then model the major γ-ray and radio flares of 2012 using the same framework. The 2012 γ-ray flare shows two distinct spikes of similar amplitude, so we examine scenarios associating the radio flare with each spike in turn. In the first scenario, we cannot explain the sharp radio flare with a simple SSC model, but we can accommodate this by adding plausible time variations to the Doppler beaming factor. In the second scenario, a varying Doppler factor is not needed, but the SSC model parameters require fine-tuning. Both alternatives indicate that the sharp radio flare, if physically connected to the preceding γ-ray flares, can be reproduced only for a very specific choice of parameters.
NASA Astrophysics Data System (ADS)
Mauritsen, Thorsten; Stevens, Bjorn
2015-05-01
Equilibrium climate sensitivity to a doubling of CO2 falls between 2.0 and 4.6 K in current climate models, and they suggest a weak increase in global mean precipitation. Inferences from the observational record, however, place climate sensitivity near the lower end of this range and indicate that models underestimate some of the changes in the hydrological cycle. These discrepancies raise the possibility that important feedbacks are missing from the models. A controversial hypothesis suggests that the dry and clear regions of the tropical atmosphere expand in a warming climate and thereby allow more infrared radiation to escape to space. This so-called iris effect could constitute a negative feedback that is not included in climate models. We find that inclusion of such an effect in a climate model moves the simulated responses of both temperature and the hydrological cycle to rising atmospheric greenhouse gas concentrations closer to observations. Alternative suggestions for shortcomings of models -- such as aerosol cooling, volcanic eruptions or insufficient ocean heat uptake -- may explain a slow observed transient warming relative to models, but not the observed enhancement of the hydrological cycle. We propose that, if precipitating convective clouds are more likely to cluster into larger clouds as temperatures rise, this process could constitute a plausible physical mechanism for an iris effect.
Reconstructing Climate Change: The Model-Data Ping-Pong
NASA Astrophysics Data System (ADS)
Stocker, T. F.
2017-12-01
When Cesare Emiliani, the father of paleoceanography, made the first attempts at a quantitative reconstruction of Pleistocene climate change in the early 1950s, climate models were not yet conceived. The understanding of paleoceanographic records was therefore limited, and scientists had to resort to plausibility arguments to interpret their data. With the advent of coupled climate models in the early 1970s, for the first time hypotheses about climate processes and climate change could be tested in a dynamically consistent framework. However, only a model hierarchy can cope with the long time scales and the multi-component physical-biogeochemical Earth System. There are many examples how climate models have inspired the interpretation of paleoclimate data on the one hand, and conversely, how data have questioned long-held concepts and models. In this lecture I critically revisit a few examples of this model-data ping-pong, such as the bipolar seesaw, the mid-Holocene greenhouse gas increase, millennial and rapid CO2 changes reconstructed from polar ice cores, and the interpretation of novel paleoceanographic tracers. These examples also highlight many of the still unsolved questions and provide guidance for future research. The combination of high-resolution paleoceanographic data and modeling has never been more relevant than today. It will be the key for an appropriate risk assessment of impacts on the Earth System that are already underway in the Anthropocene.
NASA Astrophysics Data System (ADS)
Gallovič, F.
2017-09-01
Strong ground motion simulations require a physically plausible earthquake source model. Here, I present the application of such a kinematic model introduced originally by Ruiz et al. (Geophys J Int 186:226-244, 2011). The model is constructed to inherently provide synthetics with the desired omega-squared spectral decay in the full frequency range. The source is composed of randomly distributed overlapping subsources with fractal number-size distribution. The position of the subsources can be constrained by prior knowledge of major asperities (stemming, e.g., from slip inversions), or can be completely random. From an earthquake physics point of view, the model includes positive correlation between slip and rise time as found in dynamic source simulations. Rupture velocity and rise time follow the local S-wave velocity profile, so that the rupture slows down and rise times increase close to the surface, avoiding unrealistically strong ground motions. Rupture velocity can also have random variations, which result in an irregular rupture front while satisfying the causality principle. This advanced kinematic broadband source model is freely available and can be easily incorporated into any numerical wave propagation code, as the source is described by spatially distributed slip rate functions, not requiring any stochastic Green's functions. The source model has been previously validated against the observed data due to the very shallow unilateral 2014 Mw6 South Napa, California, earthquake; the model reproduces well the observed data including the near-fault directivity (Seism Res Lett 87:2-14, 2016). The performance of the source model is shown here on the scenario simulations for the same event. In particular, synthetics are compared with existing ground motion prediction equations (GMPEs), emphasizing the azimuthal dependence of the between-event ground motion variability. I propose a simple model reproducing the azimuthal variations of the between-event ground motion variability, providing insight into possible refinement of GMPEs' functional forms.
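The subsource-generation step of such composite/fractal kinematic models can be sketched as follows: subsource radii follow a power-law (fractal) number-size distribution and are placed at random on the fault plane, with slip scaling with radius. The fractal dimension, radius bounds, and fault dimensions below are generic assumptions, not the parameters of the Ruiz et al. model or the South Napa scenario.

    # Sketch: draw fractal (power-law) subsource sizes and place them on a fault plane.
    import numpy as np

    rng = np.random.default_rng(3)
    fault_length, fault_width = 30.0, 15.0     # km (assumed)
    r_min, r_max, fractal_dim = 0.5, 7.5, 2.0  # km, km, typical fractal dimension D = 2

    n_sub = 400
    # Inverse-transform sampling of a truncated power law N(>R) ~ R^-D.
    u = rng.uniform(size=n_sub)
    radii = (r_min**-fractal_dim
             + u * (r_max**-fractal_dim - r_min**-fractal_dim)) ** (-1.0 / fractal_dim)

    x = rng.uniform(0.0, fault_length, n_sub)   # subsource centres along strike
    z = rng.uniform(0.0, fault_width, n_sub)    # subsource centres down dip
    slip = radii / radii.max()                  # slip taken proportional to radius (crack-like)

    print(f"largest subsource radius: {radii.max():.2f} km, smallest: {radii.min():.2f} km")

Summing the slip-rate contributions of the overlapping subsources is what produces the omega-squared spectral decay of the composite source.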
A plausible and consistent model is developed to obtain a quantitative description of the gradual disappearance of hexavalent chromium (Cr(VI)) from groundwater in a small-scale field tracer test and in batch kinetic experiments using aquifer sediments under similar chemical cond...
MODELING WILDLIFE RESPONSE TO LANDSCAPE CHANGE IN OREGON'S WILLAMETTE RIVER BASIN
The PATCH simulation model was used to predict the response of 17 wildlife species to three plausible scenarios of habitat change in Oregon's Willamette River Basin. This 30-thousand-square-kilometer basin comprises about 12% of the state of Oregon, encompasses extensive f...
Hong, Cheng William; Mamidipalli, Adrija; Hooker, Jonathan C.; Hamilton, Gavin; Wolfson, Tanya; Chen, Dennis H.; Dehkordy, Soudabeh Fazeli; Middleton, Michael S.; Reeder, Scott B.; Loomba, Rohit; Sirlin, Claude B.
2017-01-01
Background: Proton density fat fraction (PDFF) estimation requires spectral modeling of the hepatic triglyceride (TG) signal. Deviations in the TG spectrum may occur, leading to bias in PDFF quantification. Purpose: To investigate the effects of varying six-peak TG spectral models on PDFF estimation bias. Study Type: Retrospective secondary analysis of prospectively acquired clinical research data. Population: Forty-four adults with biopsy-confirmed nonalcoholic steatohepatitis. Field Strength/Sequence: Confounder-corrected chemical-shift-encoded 3T MRI (using a 2D multiecho gradient-recalled echo technique with magnitude reconstruction) and MR spectroscopy. Assessment: In each patient, 61 pairs of colocalized MRI-PDFF and MRS-PDFF values were estimated: one pair used the standard six-peak spectral model, the other 60 were six-peak variants calculated by adjusting spectral model parameters over their biologically plausible ranges. MRI-PDFF values calculated using each variant model and the standard model were compared, and the agreement between MRI-PDFF and MRS-PDFF was assessed. Statistical Tests: MRS-PDFF and MRI-PDFF were summarized descriptively. Bland–Altman (BA) analyses were performed between PDFF values calculated using each variant model and the standard model. Linear regressions were performed between BA biases and mean PDFF values for each variant model, and between MRI-PDFF and MRS-PDFF. Results: Using the standard model, mean MRS-PDFF of the study population was 17.9±8.0% (range: 4.1–34.3%). The difference between the highest and lowest mean variant MRI-PDFF values was 1.5%. Relative to the standard model, the model with the greatest absolute BA bias overestimated PDFF by 1.2%. Bias increased with increasing PDFF (P < 0.0001 for 59 of the 60 variant models). MRI-PDFF and MRS-PDFF agreed closely for all variant models (R2=0.980, P < 0.0001). Data Conclusion: Over a wide range of hepatic fat content, PDFF estimation is robust across the biologically plausible range of TG spectra. Although absolute estimation bias increased with higher PDFF, its magnitude was small and unlikely to be clinically meaningful. Level of Evidence: 3. Technical Efficacy: Stage 2. PMID:28851124
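The Bland-Altman comparison used above is straightforward to sketch: given PDFF estimates from a variant spectral model and from the standard model, compute the bias and 95% limits of agreement, then regress the paired differences on the pair means to test whether bias grows with fat fraction. The PDFF values below are synthetic stand-ins, not patient data.

    # Sketch: Bland-Altman bias, limits of agreement, and bias-vs-PDFF trend.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    pdff_standard = rng.uniform(4.0, 34.0, 44)                    # % PDFF, 44 assumed cases
    pdff_variant = pdff_standard * 1.03 + rng.normal(0, 0.3, 44)  # variant-model estimates

    diff = pdff_variant - pdff_standard
    mean = 0.5 * (pdff_variant + pdff_standard)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    slope, intercept, r, p, _ = stats.linregress(mean, diff)

    print(f"bias = {bias:.2f}%, limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}]%")
    print(f"bias vs. PDFF slope = {slope:.3f} (p = {p:.2g})")

A positive, significant slope corresponds to the paper's finding that absolute bias increases with higher PDFF.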
A possible close supermassive black-hole binary in a quasar with optical periodicity.
Graham, Matthew J; Djorgovski, S G; Stern, Daniel; Glikman, Eilat; Drake, Andrew J; Mahabal, Ashish A; Donalek, Ciro; Larson, Steve; Christensen, Eric
2015-02-05
Quasars have long been known to be variable sources at all wavelengths. Their optical variability is stochastic and can be due to a variety of physical mechanisms; it is also well-described statistically in terms of a damped random walk model. The recent availability of large collections of astronomical time series of flux measurements (light curves) offers new data sets for a systematic exploration of quasar variability. Here we report the detection of a strong, smooth periodic signal in the optical variability of the quasar PG 1302-102 with a mean observed period of 1,884 ± 88 days. It was identified in a search for periodic variability in a data set of light curves for 247,000 known, spectroscopically confirmed quasars with a temporal baseline of about 9 years. Although the interpretation of this phenomenon is still uncertain, the most plausible mechanisms involve a binary system of two supermassive black holes with a subparsec separation. Such systems are an expected consequence of galaxy mergers and can provide important constraints on models of galaxy formation and evolution.
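A generic period search on an unevenly sampled optical light curve can be sketched with a Lomb-Scargle periodogram; the published analysis used its own period-finding techniques on the full quasar sample, so this is only the simplest illustration of the idea. The light curve below is synthetic: a roughly 1,884-day sinusoid plus noise over about 9 years.

    # Sketch: Lomb-Scargle period search on a synthetic, irregularly sampled light curve.
    import numpy as np
    from astropy.timeseries import LombScargle

    rng = np.random.default_rng(5)
    t = np.sort(rng.uniform(0, 9 * 365.25, 250))          # observation epochs, days
    period_true = 1884.0
    mag = 15.0 + 0.14 * np.sin(2 * np.pi * t / period_true) + rng.normal(0, 0.05, t.size)

    frequency, power = LombScargle(t, mag, 0.05).autopower(
        minimum_frequency=1 / 5000.0, maximum_frequency=1 / 300.0)
    best_period = 1 / frequency[np.argmax(power)]
    print(f"best period: {best_period:.0f} d (injected {period_true:.0f} d)")

With a baseline of only about 9 years, a ~1,900-day signal is sampled for fewer than two full cycles, which is why the significance of such detections against stochastic red-noise variability has to be assessed carefully.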
The thermochemical structure and evolution of Earth's mantle: constraints and numerical models.
Tackley, Paul J; Xie, Shunxing
2002-11-15
Geochemical observations place several constraints on geophysical processes in the mantle, including a requirement to maintain several distinct reservoirs. Geophysical constraints limit plausible physical locations of these reservoirs to a thin basal layer, isolated deep 'piles' of material under large-scale mantle upwellings, high-viscosity blobs/plumes or thin strips throughout the mantle, or some combination of these. A numerical model capable of simulating the thermochemical evolution of the mantle is introduced. Preliminary simulations are more differentiated than Earth but display some of the proposed thermochemical processes, including the generation of a high-μ mantle reservoir by recycling of crust, and the generation of a high-³He/⁴He reservoir by recycling of residuum, although the resulting high-³He/⁴He material tends to aggregate near the top, where mid-ocean-ridge melting should sample it. If primitive material exists as a dense basal layer, it must be much denser than subducted crust in order to retain its primitive (e.g. high-³He) signature. Much progress is expected in the near future.
Conceptual uncertainty in crystalline bedrock: Is simple evaluation the only practical approach?
Geier, J.; Voss, C.I.; Dverstorp, B.
2002-01-01
A simple evaluation can be used to characterize the capacity of crystalline bedrock to act as a barrier to release radionuclides from a nuclear waste repository. Physically plausible bounds on groundwater flow and an effective transport-resistance parameter are estimated based on fundamental principles and idealized models of pore geometry. Application to an intensively characterized site in Sweden shows that, due to high spatial variability and uncertainty regarding properties of transport paths, the uncertainty associated with the geological barrier is too high to allow meaningful discrimination between good and poor performance. Application of more complex (stochastic-continuum and discrete-fracture-network) models does not yield a significant improvement in the resolution of geological barrier performance. Comparison with seven other less intensively characterized crystalline study sites in Sweden leads to similar results, raising a question as to what extent the geological barrier function can be characterized by state-of-the art site investigation methods prior to repository construction. A simple evaluation provides a simple and robust practical approach for inclusion in performance assessment.
Conceptual uncertainty in crystalline bedrock: Is simple evaluation the only practical approach?
Geier, J.; Voss, C.I.; Dverstorp, B.
2002-01-01
A simple evaluation can be used to characterise the capacity of crystalline bedrock to act as a barrier to releases of radionuclides from a nuclear waste repository. Physically plausible bounds on groundwater flow and an effective transport-resistance parameter are estimated based on fundamental principles and idealised models of pore geometry. Application to an intensively characterised site in Sweden shows that, due to high spatial variability and uncertainty regarding properties of transport paths, the uncertainty associated with the geological barrier is too high to allow meaningful discrimination between good and poor performance. Application of more complex (stochastic-continuum and discrete-fracture-network) models does not yield a significant improvement in the resolution of geologic-barrier performance. Comparison with seven other less intensively characterised crystalline study sites in Sweden leads to similar results, raising a question as to what extent the geological barrier function can be characterised by state-of-the art site investigation methods prior to repository construction. A simple evaluation provides a simple and robust practical approach for inclusion in performance assessment.
Core formation in the shergottite parent body and comparison with the earth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Treiman, A.H.; Jones, J.H.; Drake, M.J.
1987-03-30
The mantle of the shergottite parent body (SPB) is depleted relative to the bulk SPB in siderophile and chalcophile elements; these elements are inferred to reside in the SPB's core. Our chemical model of these depletions rests on a physically plausible process of segregation of partially molten metal from partially molten silicates as the SPB grows and is heated above silicate and metallic solidi during accretion. Metallic and silicate phases equilibrate at low pressures as new material is accreted to the SPB surface. Later movement of the metallic phases to the planet's center is so rapid that high-pressure equilibration is insignificant. Partitioning of siderophile and chalcophile elements among solid and liquid metal and silicate determines their abundances in the SPB mantle. Using partition coefficients and the SPB mantle composition determined in earlier studies, we model the abundances of Ag, Au, Co, Ga, Mo, Ni, P, Re, S, and W with free parameters being oxygen fugacity, proportion of solid metal formed, proportion of metallic liquid formed, and proportion of silicate that is molten.
NASA Astrophysics Data System (ADS)
Hakkarainen, Elina; Sihvonen, Teemu; Lappalainen, Jari
2017-06-01
Supercritical carbon dioxide (sCO2) has recently gained a lot of interest as a working fluid in different power generation applications. For concentrated solar power (CSP) applications, sCO2 provides an especially interesting option if it can be used both as the heat transfer fluid (HTF) in the solar field and as the working fluid in the power conversion unit. This work presents the development of a dynamic model of a CSP plant concept in which sCO2 is used for extracting the solar heat in a Linear Fresnel collector field and directly applied as the working fluid in the recuperative Brayton cycle, both in a single flow loop. We consider the dynamic model capable of predicting the system behavior in typical operational transients in a physically plausible way. The novel concept was tested through simulation cases under different weather conditions. The results suggest that the concept can be successfully controlled and operated in the supercritical region to generate electric power during the daytime, and perform start-up and shut-down procedures in order to stay overnight in sub-critical conditions. Besides the normal daily operation, the control system was demonstrated to manage disturbances due to sudden irradiance changes.
On a Possible Unified Scaling Law for Volcanic Eruption Durations
Cannavò, Flavio; Nunnari, Giuseppe
2016-01-01
Volcanoes constitute dissipative systems with many degrees of freedom. Their eruptions are the result of complex processes that involve interacting chemical-physical systems. At present, due to the complexity of the phenomena involved and to the lack of precise measurements, both analytical and numerical models are unable to simultaneously include the main processes involved in eruptions, thus making forecasts of volcanic dynamics rather unreliable. On the other hand, accurate forecasts of some eruption parameters, such as the duration, could be a key factor in natural hazard estimation and mitigation. Analyzing a large database containing most of the known volcanic eruptions, we have determined that the duration of eruptions seems to be described by a universal distribution which characterizes eruption duration dynamics. In particular, this paper presents a plausible global power-law distribution of durations of volcanic eruptions that holds worldwide for different volcanic environments. We also introduce a new, simple and realistic pipe model that can reproduce the same empirical distribution. Since the proposed model belongs to the family of self-organized systems, it may support the hypothesis that simple mechanisms can lead naturally to the emergent complexity in volcanic behaviour. PMID:26926425
On a Possible Unified Scaling Law for Volcanic Eruption Durations.
Cannavò, Flavio; Nunnari, Giuseppe
2016-03-01
Volcanoes constitute dissipative systems with many degrees of freedom. Their eruptions are the result of complex processes that involve interacting chemical-physical systems. At present, due to the complexity of the phenomena involved and to the lack of precise measurements, both analytical and numerical models are unable to simultaneously include the main processes involved in eruptions, thus making forecasts of volcanic dynamics rather unreliable. On the other hand, accurate forecasts of some eruption parameters, such as the duration, could be a key factor in natural hazard estimation and mitigation. Analyzing a large database containing most of the known volcanic eruptions, we have determined that the duration of eruptions seems to be described by a universal distribution which characterizes eruption duration dynamics. In particular, this paper presents a plausible global power-law distribution of durations of volcanic eruptions that holds worldwide for different volcanic environments. We also introduce a new, simple and realistic pipe model that can reproduce the same empirical distribution. Since the proposed model belongs to the family of self-organized systems, it may support the hypothesis that simple mechanisms can lead naturally to the emergent complexity in volcanic behaviour.
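One standard way to check for the kind of universal power-law tail proposed in the two records above is a maximum-likelihood fit of the tail exponent (the continuous Hill estimator). The durations below are synthetic Pareto samples, not the actual eruption catalogue, and the lower cut-off is an assumption.

    # Sketch: maximum-likelihood power-law exponent for a heavy-tailed duration sample.
    import numpy as np

    rng = np.random.default_rng(6)
    x_min, alpha_true = 2.0, 1.8                 # days, assumed tail exponent
    u = rng.uniform(size=5000)
    durations = x_min * (1 - u) ** (-1.0 / (alpha_true - 1.0))   # Pareto samples

    tail = durations[durations >= x_min]
    alpha_hat = 1.0 + tail.size / np.sum(np.log(tail / x_min))   # continuous MLE
    se = (alpha_hat - 1.0) / np.sqrt(tail.size)
    print(f"estimated exponent: {alpha_hat:.2f} +/- {se:.2f} (true {alpha_true})")

In practice the cut-off itself must be estimated and the power law compared against alternatives (e.g. lognormal) with goodness-of-fit tests before claiming universality.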
Synek, Alexander; Pahr, Dieter H
2018-06-01
A micro-finite element-based method to estimate the bone loading history based on bone architecture was recently presented in the literature. However, a thorough investigation of the parameter sensitivity and plausibility of this method to predict joint loads is still missing. The goals of this study were (1) to analyse the parameter sensitivity of the joint load predictions at one proximal femur and (2) to assess the plausibility of the results by comparing load predictions of ten proximal femora to in vivo hip joint forces measured with instrumented prostheses (available from www.orthoload.com ). Joint loads were predicted by optimally scaling the magnitude of four unit loads (inclined [Formula: see text] to [Formula: see text] with respect to the vertical axis) applied to micro-finite element models created from high-resolution computed tomography scans ([Formula: see text]m voxel size). Parameter sensitivity analysis was performed by varying a total of nine parameters and showed that predictions of the peak load directions (range 10[Formula: see text]-[Formula: see text]) are more robust than the predicted peak load magnitudes (range 2344.8-4689.5 N). Comparing the results of all ten femora with the in vivo loading data of ten subjects showed that peak loads are plausible both in terms of the load direction (in vivo: [Formula: see text], predicted: [Formula: see text]) and magnitude (in vivo: [Formula: see text], predicted: [Formula: see text]). Overall, this study suggests that micro-finite element-based joint load predictions are both plausible and robust in terms of the predicted peak load direction, but predicted load magnitudes should be interpreted with caution.
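The load-estimation idea described above (optimally scaling the magnitudes of a few unit load cases) can be roughly illustrated as a non-negative least-squares problem in which the combined tissue-level stimulus from the unit loads is matched to a homeostatic target. This is a heavily simplified, purely illustrative stand-in: the element-wise stimulus vectors are random surrogates, and the actual published objective and micro-FE pipeline are not reproduced here.

    # Sketch: non-negative scaling of unit load cases to match a target tissue stimulus.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(7)
    n_elements, n_loads = 5000, 4
    unit_stimulus = np.abs(rng.normal(size=(n_elements, n_loads)))   # per-unit-load stimulus (surrogate)
    target = np.full(n_elements, 1.0)                                # homeostatic stimulus (assumed)

    scales, residual = nnls(unit_stimulus, target)
    peak_direction = np.argmax(scales)            # which unit-load direction dominates
    print(f"scale factors: {np.round(scales, 3)}, dominant load case: {peak_direction}")

The dominant scaled load case corresponds to the predicted peak load direction that the study compares against instrumented-prosthesis measurements.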
Minimum and Maximum Potential Contributions to Future Sea Level Rise from Polar Ice Sheets
NASA Astrophysics Data System (ADS)
Deconto, R. M.; Pollard, D.
2017-12-01
New climate and ice-sheet modeling, calibrated to past changes in sea-level, is painting a stark picture of the future fate of the great polar ice sheets if greenhouse gas emissions continue unabated. This is especially true for Antarctica, where a substantial fraction of the ice sheet rests on bedrock more than 500-meters below sea level. Here, we explore the sensitivity of the polar ice sheets to a warming atmosphere and ocean under a range of future greenhouse gas emissions scenarios. The ice sheet-climate-ocean model used here considers time-evolving changes in surface mass balance and sub-ice oceanic melting, ice deformation, grounding line retreat on reverse-sloped bedrock (Marine Ice Sheet Instability), and newly added processes including hydrofracturing of ice shelves in response to surface meltwater and rain, and structural collapse of thick, marine-terminating ice margins with tall ice-cliff faces (Marine Ice Cliff Instability). The simulations improve on previous work by using 1) improved atmospheric forcing from a Regional Climate Model and 2) a much wider range of model physical parameters within the bounds of modern observations of ice dynamical processes (particularly calving rates) and paleo constraints on past ice-sheet response to warming. Approaches to more precisely define the climatic thresholds capable of triggering rapid and potentially irreversible ice-sheet retreat are also discussed, as is the potential for aggressive mitigation strategies like those discussed at the 2015 Paris Climate Conference (COP21) to substantially reduce the risk of extreme sea-level rise. These results, including physics that consider both ice deformation (creep) and calving (mechanical failure of marine terminating ice) expand on previously estimated limits of maximum rates of future sea level rise based solely on kinematic constraints of glacier flow. At the high end, the new results show the potential for more than 2m of global mean sea level rise by 2100, implying that physically plausible upper limits on future sea-level rise might need to be reconsidered.
Newberry Volcano EGS Demonstration Stimulation Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trenton T. Cladouhos, Matthew Clyne, Maisie Nichols,; Susan Petty, William L. Osborn, Laura Nofziger
2011-10-23
As a part of Phase I of the Newberry Volcano EGS Demonstration project, several data sets were collected to characterize the rock volume around the well. Fracture, fault, stress, and seismicity data have been collected by borehole televiewer, LiDAR elevation maps, and microseismic monitoring. Well logs and cuttings from the target well (NWG 55-29) and core from a nearby core hole (USGS N-2) have been analyzed to develop geothermal, geochemical, mineralogical and strength models of the rock matrix, altered zones, and fracture fillings (see Osborn et al., this volume). These characterization data sets provide inputs to models used to plan and predict EGS reservoir creation and productivity. One model used is AltaStim, a stochastic fracture and flow software model developed by AltaRock. The software's purpose is to model and visualize EGS stimulation scenarios and provide guidance for final planning. The process of creating an AltaStim model requires synthesis of geologic observations at the well, the modeled stress conditions, and the stimulation plan. Any geomechanical model of an EGS stimulation will require many assumptions and unknowns; thus, the model developed here should not be considered a definitive prediction, but a plausible outcome given reasonable assumptions. AltaStim is a tool for understanding the effect of known constraints, assumptions, and conceptual models on plausible outcomes.
Hirsch, Philipp E; Adrian-Kalchhauser, Irene; Flämig, Sylvie; N'Guyen, Anouk; Defila, Rico; Di Giulio, Antonietta; Burkhardt-Holm, Patricia
2016-02-01
Non-native invasive species are a major threat to biodiversity, especially in freshwater ecosystems. Freshwater ecosystems are naturally rather isolated from one another. Nonetheless, invasive species often spread rapidly across watersheds. This spread is to a large extent realized by human activities that provide vectors. For example, recreational boats can carry invasive species propagules as "aquatic hitch-hikers" within and across watersheds. We used invasive gobies in Switzerland as a case study to test the plausibility that recreational boats can serve as vectors for invasive fish and that fish eggs can serve as propagules. We found that the peak season of boat movements across Switzerland and the goby spawning season overlap temporally. It is thus plausible that goby eggs attached to boats, anchors, or gear may be transported across watersheds. In experimental trials, we found that goby eggs show resistance to physical removal (90 mN attachment strength of individual eggs) and stay attached if exposed to rapid water flow (2.8 m·s(-1) for 1 h). When exposing the eggs to air, we found that hatching success remained high (>95%) even after eggs had been out of water for up to 24 h. It is thus plausible that eggs survive pick-up, within-water and overland transport by boats. We complemented the experimental plausibility tests with a survey on how decision makers from inside and outside academia rate the feasibility of managing recreational boats as vectors. We found consensus that installing preventive boat vector management is considered an effective and urgent measure. This study advances our understanding of the potential of recreational boats to serve as vectors for invasive vertebrate species and demonstrates that preventive management of recreational boats is considered feasible by relevant decision makers inside and outside academia.
Methods of testing parameterizations: Vertical ocean mixing
NASA Technical Reports Server (NTRS)
Tziperman, Eli
1992-01-01
The ocean's velocity field is characterized by an exceptional variety of scales. While the small-scale oceanic turbulence responsible for vertical mixing in the ocean occurs on scales of a few centimeters and smaller, the oceanic general circulation is characterized by horizontal scales of thousands of kilometers. In oceanic general circulation models that are typically run today, the vertical structure of the ocean is represented by a few tens of discrete grid points. Such models cannot explicitly model the small-scale mixing processes, and must, therefore, find ways to parameterize them in terms of the larger-scale fields. Finding a parameterization that is both reliable and plausible to use in ocean models is not a simple task. Vertical mixing in the ocean is the combined result of many complex processes, and, in fact, mixing is one of the less known and less understood aspects of the oceanic circulation. In present models of the oceanic circulation, the many complex processes responsible for vertical mixing are often parameterized in an oversimplified manner. Yet, finding an adequate parameterization of vertical ocean mixing is crucial to the successful application of ocean models to climate studies. The results of general circulation models for quantities that are of particular interest to climate studies, such as the meridional heat flux carried by the ocean, are quite sensitive to the strength of the vertical mixing. We examine the difficulties in choosing an appropriate vertical mixing parameterization, and the methods that are available for validating different parameterizations by comparing model results to oceanographic data. First, some of the physical processes responsible for vertically mixing the ocean are briefly mentioned, and some possible approaches to the parameterization of these processes in oceanographic general circulation models are described in the following section. We then discuss the role of the vertical mixing in the physics of the large-scale ocean circulation, and examine methods of validating mixing parameterizations using large-scale ocean models.
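To make the idea of a vertical mixing parameterization concrete, the sketch below (not taken from the report; the profile, diffusivities, and grid are illustrative assumptions) represents unresolved turbulence as an eddy diffusivity K_v acting on a resolved 1-D temperature column, which is the general form such parameterizations take in coarse ocean models.

```python
import numpy as np

# Minimal sketch: parameterized vertical mixing in a 1-D ocean column.
# Unresolved turbulence enters only through an eddy diffusivity K_v (m^2/s);
# a real GCM would compute K_v from a mixing scheme (e.g., as a function of
# the Richardson number) rather than prescribing it as done here.
nz, dz, dt = 50, 10.0, 3600.0                 # 50 levels, 10 m spacing, 1 h step
T = 20.0 - 0.01 * np.arange(nz) * dz          # idealized temperature profile (deg C)
K_v = np.full(nz - 1, 1e-4)                   # interior diffusivity at interfaces
K_v[:5] = 1e-2                                # stronger mixing near the surface

def mix_step(T, K_v, dz, dt):
    """One explicit step of dT/dt = d/dz(K_v dT/dz); total heat is conserved."""
    flux = -K_v * np.diff(T) / dz             # downgradient flux at interfaces
    dTdt = np.zeros_like(T)
    dTdt[:-1] -= flux / dz
    dTdt[1:] += flux / dz
    return T + dt * dTdt                      # stable while dt * K_v / dz**2 < 0.5

for _ in range(24):                           # integrate one day
    T = mix_step(T, K_v, dz, dt)
```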
NASA Astrophysics Data System (ADS)
Erfanian, A.; Fomenko, L.; Wang, G.
2016-12-01
The multi-model ensemble (MME) average is considered the most reliable approach for simulating both present-day and future climates. It has been a primary reference for drawing conclusions in major coordinated studies, e.g., the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes with tremendous computational cost, which is especially inhibiting for regional climate modeling as model uncertainties can originate from both RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling that achieves a similar level of bias reduction at a fraction of the cost compared with the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs to conduct a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to reducing computational cost. The ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions with the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling
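As a rough illustration of the ERF idea (a sketch under assumed array shapes and names, not AgMIP or RegCM code): the conventional MME route runs the RCM once per GCM and averages the outputs, whereas ERF averages the GCM-supplied initial and boundary conditions first and runs the RCM once.

```python
import numpy as np

# Sketch of Ensemble-based Reconstructed Forcings (ERF).
# Hypothetical inputs: ibc_per_gcm[g] holds one GCM's initial/boundary-condition
# fields with shape (time, level, lat, lon); shapes and names are assumptions.
def reconstruct_forcings(ibc_per_gcm):
    """Average the IBC fields of several GCMs into a single forcing set."""
    stacked = np.stack(ibc_per_gcm, axis=0)   # (n_gcm, time, level, lat, lon)
    return stacked.mean(axis=0)               # one IBC set for a single RCM run

gcm_ibcs = [np.random.rand(8, 18, 60, 90) for _ in range(6)]  # six toy "GCMs"
erf_ibc = reconstruct_forcings(gcm_ibcs)      # drive the RCM once with this
```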
Thermodynamics of weight loss diets.
Fine, Eugene J; Feinman, Richard D
2004-12-08
BACKGROUND: It is commonly held that "a calorie is a calorie", i.e. that diets of equal caloric content will result in identical weight change independent of macronutrient composition, and appeal is frequently made to the laws of thermodynamics. We have previously shown that thermodynamics does not support such a view and that diets of different macronutrient content may be expected to induce different changes in body mass. Low carbohydrate diets in particular have claimed a "metabolic advantage" meaning more weight loss than in isocaloric diets of higher carbohydrate content. In this review, for pedagogic clarity, we reframe the theoretical discussion to directly link thermodynamic inefficiency to weight change. The problem in outline: Is metabolic advantage theoretically possible? If so, what biochemical mechanisms might plausibly explain it? Finally, what experimental evidence exists to determine whether it does or does not occur? RESULTS: Reduced thermodynamic efficiency will result in increased weight loss. The laws of thermodynamics are silent on the existence of variable thermodynamic efficiency in metabolic processes. Therefore such variability is permitted and can be related to differences in weight lost. The existence of variable efficiency and metabolic advantage is therefore an empiric question rather than a theoretical one, confirmed by many experimental isocaloric studies, pending a properly performed meta-analysis. Mechanisms are as yet unknown, but plausible mechanisms at the metabolic level are proposed. CONCLUSIONS: Variable thermodynamic efficiency due to dietary manipulation is permitted by physical laws, is supported by much experimental data, and may be reasonably explained by plausible mechanisms.
COLLABORATION ON NHEERL EPIDEMIOLOGY STUDIES
This task will continue ORD's efforts to develop a biologically plausible, quantitative health risk model for particulate matter (PM) based on epidemiological, toxicological, and mechanistic studies using matched exposure assessments. The NERL, in collaboration with the NHEERL, ...
Bays, Rebecca B; Zabrucky, Karen M; Gagne, Phill
2012-01-01
In the current study we examined whether prevalence information and imagery encoding influence participants' general plausibility, personal plausibility, belief, and memory ratings for suggested childhood events. Results showed decreases in general and personal plausibility ratings for low prevalence events when encoding instructions were not elaborate; however, instructions to repeatedly imagine suggested events elicited personal plausibility increases for low-prevalence events, evidence that elaborate imagery negated the effect of our prevalence manipulation. We found no evidence of imagination inflation or false memory construction. We discuss critical differences in researchers' manipulations of plausibility and imagery that may influence results of false memory studies in the literature. In future research investigators should focus on the specific nature of encoding instructions when examining the development of false memories.
The Prospects of Whole Brain Emulation within the next Half-Century
NASA Astrophysics Data System (ADS)
Eth, Daniel; Foust, Juan-Carlos; Whale, Brandon
2013-12-01
Whole Brain Emulation (WBE), the theoretical technology of modeling a human brain in its entirety on a computer, with thoughts, feelings, memories, and skills intact, is a staple of science fiction. Recently, proponents of WBE have suggested that it will be realized in the next few decades. In this paper, we investigate the plausibility of WBE being developed in the next 50 years (by 2063). We identify four essential requisite technologies: scanning the brain, translating the scan into a model, running the model on a computer, and simulating an environment and body. Additionally, we consider the cultural and social effects of WBE. We find the two most uncertain factors for WBE's future to be the development of advanced minuscule probes that can amass neural data in vivo and the degree to which the culture surrounding WBE becomes cooperative or competitive. We identify four plausible scenarios from these uncertainties and suggest the most likely scenario to be one in which WBE is realized, and the technology is used for moderately cooperative ends.
Interactive wall turbulence control
NASA Technical Reports Server (NTRS)
Wilkinson, Stephen P.
1990-01-01
After presenting boundary layer turbulence physics in a manner that emphasizes the possible modification of structural surfaces in a way that locally alters the production of turbulent flows, an account is given of the hardware that could plausibly be employed to implement such a turbulence-control scheme. The essential system components are flow sensors, electronic processors, and actuators; at present, actuator technology presents the greatest problems and limitations. High frequency/efficiency actuators are required to handle three-dimensional turbulent motions whose frequency and intensity increase in approximate proportion to freestream speed.
Massive Black Holes and the Laser Interferometer Space Antenna (LISA)
NASA Technical Reports Server (NTRS)
Bender, Peter L.; Hils, Dieter; Stebbins, Robin T.
1998-01-01
The goals of the LISA mission include both astrophysical investigations and fundamental physics tests. The main astrophysical questions concern the space density, growth, mass function, and surroundings of massive black holes. Thus the crucial issue for the LISA mission is the likelihood of observing signals from such sources. Four possible sources of this kind are discussed briefly in this paper. It appears plausible, or even likely, that one or more of these types of sources can be detected and studied by LISA.
Maldacena, Juan; Shenker, Stephen H.; Stanford, Douglas
2016-08-17
We conjecture a sharp bound on the rate of growth of chaos in thermal quantum systems with a large number of degrees of freedom. Chaos can be diagnosed using an out-of-time-order correlation function closely related to the commutator of operators separated in time. We conjecture that the influence of chaos on this correlator can develop no faster than exponentially, with Lyapunov exponent λ_L ≤ 2π k_B T/ℏ. We give a precise mathematical argument, based on plausible physical assumptions, establishing this conjecture.
ERIC Educational Resources Information Center
Conley, Sharon; You, Sukkyung
2014-01-01
A previous study examined role stress in relation to work outcomes; in this study, we added job structuring antecedents to a model of role stress and examined the moderating effects of locus of control. Structural equation modeling was used to assess the plausibility of our conceptual model, which specified hypothesized linkages among teachers'…
ERIC Educational Resources Information Center
Dombrowski, Stefan C.; Golay, Philippe; McGill, Ryan J.; Canivez, Gary L.
2018-01-01
Bayesian structural equation modeling (BSEM) was used to investigate the latent structure of the Differential Ability Scales-Second Edition core battery using the standardization sample normative data for ages 7-17. Results revealed plausibility of a three-factor model, consistent with publisher theory, expressed as either a higher-order (HO) or a…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrell, Kathryn, E-mail: kfarrell@ices.utexas.edu; Oden, J. Tinsley, E-mail: oden@ices.utexas.edu; Faghihi, Danial, E-mail: danial@ices.utexas.edu
A general adaptive modeling algorithm for selection and validation of coarse-grained models of atomistic systems is presented. A Bayesian framework is developed to address uncertainties in parameters, data, and model selection. Algorithms for computing output sensitivities to parameter variances, model evidence and posterior model plausibilities for given data, and for computing what are referred to as Occam Categories in reference to a rough measure of model simplicity, make up components of the overall approach. Computational results are provided for representative applications.
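For readers unfamiliar with the term, a posterior model plausibility is the standard Bayesian posterior probability of a model given data; a minimal sketch (with made-up log-evidence values, not results from this work) is:

```python
import numpy as np

# Posterior model plausibilities from model evidences (standard Bayes rule over
# a discrete model set). The log-evidence values are illustrative placeholders.
log_evidence = np.array([-105.2, -103.8, -110.4])   # log p(D | M_j) for three models
log_prior = np.log(np.array([1/3, 1/3, 1/3]))       # equal prior plausibilities

log_post = log_evidence + log_prior
plausibility = np.exp(log_post - log_post.max())
plausibility /= plausibility.sum()                   # p(M_j | D), sums to 1
```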
NASA Astrophysics Data System (ADS)
Antle, J. M.; Valdivia, R. O.; Jones, J.; Rosenzweig, C.; Ruane, A. C.
2013-12-01
This presentation provides an overview of the new methods developed by researchers in the Agricultural Model Inter-comparison and Improvement Project (AgMIP) for regional climate impact assessment and analysis of adaptation in agricultural systems. This approach represents a departure from approaches in the literature in several dimensions. First, the approach is based on the analysis of agricultural systems (not individual crops) and is inherently trans-disciplinary: it is based on a deep collaboration among a team of climate scientists, agricultural scientists and economists to design and implement regional integrated assessments of agricultural systems. Second, in contrast to previous approaches that have imposed future climate on models based on current socio-economic conditions, this approach combines bio-physical and economic models with a new type of pathway analysis (Representative Agricultural Pathways) to parameterize models consistent with a plausible future world in which climate change would be occurring. Third, adaptation packages for the agricultural systems in a region are designed by the research team with a level of detail that is useful to decision makers, such as research administrators and donors, who are making agricultural R&D investment decisions. The approach is illustrated with examples from AgMIP's projects currently being carried out in Africa and South Asia.
A deformable surface model for real-time water drop animation.
Zhang, Yizhong; Wang, Huamin; Wang, Shuai; Tong, Yiying; Zhou, Kun
2012-08-01
A water drop behaves differently from a large water body because of its strong viscosity and surface tension at small scales. Surface tension causes the motion of a water drop to be largely determined by its boundary surface. Meanwhile, viscosity makes the interior of a water drop less relevant to its motion, as the smooth velocity field can be well approximated by an interpolation of the velocity on the boundary. Consequently, we propose a fast deformable surface model to realistically animate water drops and their flowing behaviors on solid surfaces. Our system efficiently simulates water drop motions in a Lagrangian fashion, by reducing 3D fluid dynamics over the whole liquid volume to a deformable surface model. In each time step, the model uses an implicit mean curvature flow operator to produce surface tension effects, a contact angle operator to change droplet shapes on solid surfaces, and a set of mesh connectivity updates to handle topological changes and improve mesh quality over time. Our numerical experiments demonstrate a variety of physically plausible water drop phenomena at a real-time rate, including capillary waves when water drops collide, pinch-off of water jets, and droplets flowing over solid materials. The whole system performs orders-of-magnitude faster than existing simulation approaches that generate comparable water drop effects.
Milk Intakes Are Not Associated with Percent Body Fat in Children from Ages 10 to 13 Years
Noel, Sabrina E.; Ness, Andrew R.; Northstone, Kate; Emmett, Pauline; Newby, P. K.
2011-01-01
Epidemiologic studies report conflicting results for the relationship between milk intake and adiposity in children. We examined prospective and cross-sectional associations between milk intake and percent body fat among 2245 children from the Avon Longitudinal Study of Parents and Children. Cross-sectional analyses were performed at age 13 y between total, full-fat, and reduced-fat milk intake assessed using 3-d dietary records and body fat from DXA. Prospective analyses were conducted between milk intakes at age 10 y and body fat at 11 and 13 y. Models were adjusted for age, sex, height, physical activity, pubertal status, maternal BMI, maternal education, and intakes of total fat, sugar-sweetened beverages, 100% fruit juice, and ready-to-eat cereals; baseline BMI was added to prospective models. Subset analyses were performed for those with plausible dietary intakes. Mean milk consumption at 10 and 13 y was (mean ± SD) 0.90 ± 0.73 and 0.85 ± 0.78 servings/d [1 serving = 8 oz of milk (244 g of plain and 250 g flavored milk)], respectively. Cross-sectional results indicated an inverse association between full-fat milk intake and body fat [β = −0.47 (95% CI = −0.76, −0.19); P = 0.001]. Milk intake at age 10 y was inversely associated with body fat at 11 y [β = −0.16 g/d (95% CI = −0.28, −0.04); P = 0.01], but not among those with plausible dietary intakes, suggesting that this association was influenced by dietary measurement errors. Milk intake was not associated with body fat at age 13 y after adjustment. Although our prospective results corroborate other findings of null associations between milk intake and adiposity, our inconsistent findings across analyses suggest further investigation is needed to clarify the relation, and accounting for dietary reporting errors is an important consideration. PMID:21940511
ERIC Educational Resources Information Center
Smangs, Mattias
2010-01-01
This article explores the plausibility of the conflicting theoretical assumptions underlying the main criminological perspectives on juvenile delinquents, their peer relations and social skills: the social ability model, represented by Sutherland's theory of differential associations, and the social disability model, represented by Hirschi's…
Exemplar-Based Clustering via Simulated Annealing
ERIC Educational Resources Information Center
Brusco, Michael J.; Kohn, Hans-Friedrich
2009-01-01
Several authors have touted the p-median model as a plausible alternative to within-cluster sums of squares (i.e., K-means) partitioning. Purported advantages of the p-median model include the provision of "exemplars" as cluster centers, robustness with respect to outliers, and the accommodation of a diverse range of similarity data. We developed…
ERIC Educational Resources Information Center
Paetkau, Mark
2007-01-01
One of my goals as an instructor is to teach students critical thinking skills. This paper presents an example of a student-led discussion of heat conduction at the first-year level. Heat loss from a human head is calculated using conduction and radiation models. The results of these plausible (but wrong) models of heat transfer contradict what…
A model of proto-object based saliency
Russell, Alexander F.; Mihalaş, Stefan; von der Heydt, Rudiger; Niebur, Ernst; Etienne-Cummings, Ralph
2013-01-01
Organisms use the process of selective attention to optimally allocate their computational resources to the instantaneously most relevant subsets of a visual scene, ensuring that they can parse the scene in real time. Many models of bottom-up attentional selection assume that elementary image features, like intensity, color and orientation, attract attention. Gestalt psychologists, however, argue that humans perceive whole objects before they analyze individual features. This is supported by recent psychophysical studies that show that objects predict eye-fixations better than features. In this report we present a neurally inspired algorithm of object-based, bottom-up attention. The model rivals the performance of state-of-the-art non-biologically plausible feature-based algorithms (and outperforms biologically plausible feature-based algorithms) in its ability to predict perceptual saliency (eye fixations and subjective interest points) in natural scenes. The model achieves this by computing saliency as a function of proto-objects that establish the perceptual organization of the scene. All computational mechanisms of the algorithm have direct neural correlates, and our results provide evidence for the interface theory of attention. PMID:24184601
NASA Astrophysics Data System (ADS)
Miki, K.; Panesi, M.; Prudencio, E. E.; Prudhomme, S.
2012-05-01
The objective in this paper is to analyze some stochastic models for estimating the ionization reaction rate constant of atomic Nitrogen (N + e- → N+ + 2e-). Parameters of the models are identified by means of Bayesian inference using spatially resolved absolute radiance data obtained from the Electric Arc Shock Tube (EAST) wind-tunnel. The proposed methodology accounts for uncertainties in the model parameters as well as physical model inadequacies, providing estimates of the rate constant that reflect both types of uncertainties. We present four different probabilistic models by varying the error structure (either additive or multiplicative) and by choosing different descriptions of the statistical correlation among data points. In order to assess the validity of our methodology, we first present some calibration results obtained with manufactured data and then proceed by using experimental data collected at the EAST experimental facility. In order to simulate the radiative signature emitted in the shock-heated air plasma, we use a one-dimensional flow solver with Park's two-temperature model that simulates non-equilibrium effects. We also discuss the implications of the choice of the stochastic model on the estimation of the reaction rate and its uncertainties. Our analysis shows that the stochastic models based on correlated multiplicative errors are the most plausible models among the four models proposed in this study. The rate of the atomic Nitrogen ionization is found to be (6.2 ± 3.3) × 10^11 cm^3 mol^-1 s^-1 at 10,000 K.
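The two error structures mentioned correspond, schematically, to the log-likelihoods below (an illustrative sketch with independent errors; the paper also treats correlated errors, which are not reproduced here).

```python
import numpy as np

# Schematic additive vs multiplicative error models for calibration data.
# d: observed radiances, m: model predictions at the same points, sigma: error scale.
def loglike_additive(d, m, sigma):
    # d_i = m_i + eps_i,  eps_i ~ N(0, sigma^2)
    r = d - m
    return -0.5 * np.sum(r**2 / sigma**2 + np.log(2 * np.pi * sigma**2))

def loglike_multiplicative(d, m, sigma):
    # d_i = m_i * exp(eps_i),  eps_i ~ N(0, sigma^2), with d, m > 0
    r = np.log(d) - np.log(m)
    return (-0.5 * np.sum(r**2 / sigma**2 + np.log(2 * np.pi * sigma**2))
            - np.sum(np.log(d)))   # Jacobian of the log transform
```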
Recchia, Gabriel; Sahlgren, Magnus; Kanerva, Pentti; Jones, Michael N.
2015-01-01
Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, “noisy” permutations in which units are mapped to other units arbitrarily (no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics. PMID:25954306
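Both binding operators are simple to state concretely; the numpy sketch below (assuming random d-dimensional environment vectors, not the authors' corpus setup) shows circular convolution and permutation-based encoding of a pair, with approximate decoding from the convolution trace.

```python
import numpy as np

# Two candidate binding operators for high-dimensional vector representations.
d = 2048
rng = np.random.default_rng(0)
a, b = rng.normal(0, 1 / np.sqrt(d), size=(2, d))   # random environment vectors

# Circular convolution: bind a and b into one trace of the same dimensionality.
conv_trace = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

# Random permutation: encode order by permuting one argument before superposition,
# e.g. "a precedes b" stored as a + perm(b).
perm = rng.permutation(d)
perm_trace = a + b[perm]

# Decoding is approximate: circular correlation with the probe a recovers a noisy
# copy of b from the convolution trace (the permutation code is decoded by
# applying the inverse permutation instead).
decoded = np.real(np.fft.ifft(np.fft.fft(conv_trace) * np.conj(np.fft.fft(a))))
cosine = decoded @ b / (np.linalg.norm(decoded) * np.linalg.norm(b))
```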
Multiple regimes of robust patterns between network structure and biodiversity
NASA Astrophysics Data System (ADS)
Jover, Luis F.; Flores, Cesar O.; Cortez, Michael H.; Weitz, Joshua S.
2015-12-01
Ecological networks such as plant-pollinator and host-parasite networks have structured interactions that define who interacts with whom. The structure of interactions also shapes ecological and evolutionary dynamics. Yet, there is significant ongoing debate as to whether certain structures, e.g., nestedness, contribute positively, negatively or not at all to biodiversity. We contend that examining variation in life history traits is key to disentangling the potential relationship between network structure and biodiversity. Here, we do so by analyzing a dynamic model of virus-bacteria interactions across a spectrum of network structures. Consistent with prior studies, we find plausible parameter domains exhibiting strong, positive relationships between nestedness and biodiversity. Yet, the same model can exhibit negative relationships between nestedness and biodiversity when examined in a distinct, plausible region of parameter space. We discuss steps towards identifying when network structure could, on its own, drive the resilience, sustainability, and even conservation of ecological communities.
Deep magmatism alters and erodes lithosphere and facilitates decoupling of Rwenzori crustal block
NASA Astrophysics Data System (ADS)
Wallner, Herbert; Schmeling, Harro
2013-04-01
The title is the answer to the initiating question "Why are the Rwenzori Mountains so high?" posed at the EGU 2008. Our motivation originates in the extreme topography of the Rwenzori Mountains. The strong, cold Proterozoic crustal horst is situated between rift segments of the western branch of the East African Rift System. Ideas of rift-induced delamination (RID) and melt-induced weakening (MIW) have been tested with one- and two-phase flow physics. Numerical model parameter variations and new observations lead to a favoured model with simple and plausible definitions. Results agree with different observations to the extent that they can be compared, and in turn reduce ambiguity and uncertainty in the model input. The principal laws of the thermo-mechanical physics are the equations of conservation of mass, momentum, energy and composition for a two-phase (matrix-melt) system with nonlinear rheology. A simple solid solution model determines melting and solidification under consideration of depletion and enrichment. The Finite Difference Method with markers is applied to visco-plastic flow using the streamfunction in an Eulerian formulation in 2D. The Compaction Boussinesq and the high Prandtl number approximation are employed. Lateral kinematic boundary conditions provide long-wavelength asthenospheric upwelling and extensional stress conditions. Partial melts are generated in the asthenosphere, extracted above a critical fraction, and emplaced into a given intrusion level. Temperature anomalies positioned beneath the future rifts, the sole specialization to the Rwenzori situation, localize melts which are very effective in weakening the lithosphere. Convection patterns tend to generate dripping instabilities at the lithospheric base; multiple slabs detach and distort the rising asthenosphere; plumes migrate, join and split. Despite the apparently chaotic flow behaviour, a characteristic recurrence time of high-velocity events (drips, plumes) emerges. Chimneys of increased enrichment develop above the anomalies and evolve into narrow, low-viscosity mechanical decoupling zones. Deep-rooted dynamic forces then affect the surface, showing a vigorous topography. A geodynamic model, linking magmatism, mantle dynamics and lithospheric extension, qualitatively explains most of the observed phenomena. Depending on physical model parameters we cover the whole spectrum from dripping lithospheric base instabilities to the full break-off of the mantle lithosphere block below the Rwenzoris.
Fluorescent Fe K Emission from High Density Accretion Disks
NASA Astrophysics Data System (ADS)
Bautista, Manuel; Mendoza, Claudio; Garcia, Javier; Kallman, Timothy R.; Palmeri, Patrick; Deprince, Jerome; Quinet, Pascal
2018-06-01
Iron K-shell lines emitted by gas closely orbiting black holes are observed to be grossly broadened and skewed by Doppler effects and gravitational redshift. Accordingly, models for line profiles are widely used to measure the spin (i.e., the angular momentum) of astrophysical black holes. The accuracy of these spin estimates is called into question because fitting the data requires very high iron abundances, several times the solar value. Meanwhile, no plausible physical explanation has been proffered for why these black hole systems should be so iron rich. The most likely explanation for the super-solar iron abundances is a deficiency in the models, and the leading candidate cause is that current models are inapplicable at densities above 10^18 cm^-3. We study the effects of high densities on the atomic parameters and on the spectral models for iron ions. At high densities, the Debye plasma can affect the effective atomic potential of the ions, leading to observable changes in energy levels and atomic rates with respect to the low-density case. High densities also have the effect of lowering the energy of the atomic continuum and reducing the recombination rate coefficients. On the spectral modeling side, high densities drive level populations toward a Boltzmann distribution and very large numbers of excited atomic levels, typically accounted for in theoretical spectral models, may contribute to the K-shell spectrum.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sathyanarayana Rao, Mayuri; Subrahmanyan, Ravi; Shankar, N Udaya
Cosmic baryon evolution during the Cosmic Dawn and Reionization results in redshifted 21-cm spectral distortions in the cosmic microwave background (CMB). These encode information about the nature and timing of first sources over redshifts 30–6 and appear at meter wavelengths as a tiny CMB distortion along with the Galactic and extragalactic radio sky, which is orders of magnitude brighter. Therefore, detection requires precise methods to model foregrounds. We present a method of foreground fitting using maximally smooth (MS) functions. We demonstrate the usefulness of MS functions over traditionally used polynomials to separate foregrounds from the Epoch of Reionization (EoR) signal. We also examine the level of spectral complexity in plausible foregrounds using GMOSS, a physically motivated model of the radio sky, and find that they are indeed smooth and can be modeled by MS functions to levels sufficient to discern the vanilla model of the EoR signal. We show that MS functions are loss resistant and robustly preserve EoR signal strength and turning points in the residuals. Finally, we demonstrate that in using a well-calibrated spectral radiometer and modeling foregrounds with MS functions, the global EoR signal can be detected with a Bayesian approach with 90% confidence in 10 minutes’ integration.
Modelling biochemical reaction systems by stochastic differential equations with reflection.
Niu, Yuanling; Burrage, Kevin; Chen, Luonan
2016-05-07
In this paper, we give a new framework for modelling and simulating biochemical reaction systems by stochastic differential equations with reflection, not in a heuristic way but in a mathematical way. The model is computationally efficient compared with the discrete-state Markov chain approach, and it ensures that both analytic and numerical solutions remain in a biologically plausible region. Specifically, our model mathematically ensures that species numbers lie in the domain D, which is a physical constraint for biochemical reactions, in contrast to the previous models. The domain D is actually obtained according to the structure of the corresponding chemical Langevin equations, i.e., the boundary is inherent in the biochemical reaction system. A variant of the projection method was employed to solve the reflected stochastic differential equation model, and it includes three simple steps: the Euler-Maruyama method is applied to the equations first; then we check whether or not the point lies within the domain D; and, if not, we perform an orthogonal projection. It is found that the projection onto the closure of D is the solution to a convex quadratic programming problem. Thus, existing methods for the convex quadratic programming problem can be employed for the orthogonal projection map. Numerical tests on several important problems in biological systems confirmed the efficiency and accuracy of this approach.
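A minimal sketch of the three-step scheme, for the simple case in which the domain D is a box of admissible species numbers (so the orthogonal projection reduces to componentwise clipping); the drift, diffusion, and rate constants below are placeholders rather than the paper's model, and a general polyhedral D would require solving the stated convex quadratic program instead.

```python
import numpy as np

# Reflected Euler-Maruyama step for a chemical-Langevin-type SDE:
# 1) take an Euler-Maruyama step, 2) check membership in D, 3) project onto D.
def reflected_em_step(x, drift, diffusion, dt, rng, lower=0.0, upper=np.inf):
    dw = rng.normal(0.0, np.sqrt(dt), size=x.shape)
    y = x + drift(x) * dt + diffusion(x) * dw      # 1) Euler-Maruyama proposal
    if np.all((y >= lower) & (y <= upper)):        # 2) already inside D?
        return y
    return np.clip(y, lower, upper)                # 3) orthogonal projection (box D)

# Toy usage: one species with constant production k1 and linear decay k2.
k1, k2 = 10.0, 0.1
drift = lambda x: k1 - k2 * x
diffusion = lambda x: np.sqrt(np.maximum(k1 + k2 * x, 0.0))
rng = np.random.default_rng(1)
x = np.array([50.0])
for _ in range(1000):
    x = reflected_em_step(x, drift, diffusion, dt=0.01, rng=rng)
```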
NASA Astrophysics Data System (ADS)
Farrell, Kathryn; Oden, J. Tinsley
2014-07-01
Coarse-grained models of atomic systems, created by aggregating groups of atoms into molecules to reduce the number of degrees of freedom, have been used for decades in important scientific and technological applications. In recent years, interest in developing a more rigorous theory for coarse graining and in assessing the predictivity of coarse-grained models has arisen. In this work, Bayesian methods for the calibration and validation of coarse-grained models of atomistic systems in thermodynamic equilibrium are developed. For specificity, only configurational models of systems in canonical ensembles are considered. Among major challenges in validating coarse-grained models are (1) the development of validation processes that lead to information essential in establishing confidence in the model's ability predict key quantities of interest and (2), above all, the determination of the coarse-grained model itself; that is, the characterization of the molecular architecture, the choice of interaction potentials and thus parameters, which best fit available data. The all-atom model is treated as the "ground truth," and it provides the basis with respect to which properties of the coarse-grained model are compared. This base all-atom model is characterized by an appropriate statistical mechanics framework in this work by canonical ensembles involving only configurational energies. The all-atom model thus supplies data for Bayesian calibration and validation methods for the molecular model. To address the first challenge, we develop priors based on the maximum entropy principle and likelihood functions based on Gaussian approximations of the uncertainties in the parameter-to-observation error. To address challenge (2), we introduce the notion of model plausibilities as a means for model selection. This methodology provides a powerful approach toward constructing coarse-grained models which are most plausible for given all-atom data. We demonstrate the theory and methods through applications to representative atomic structures and we discuss extensions to the validation process for molecular models of polymer structures encountered in certain semiconductor nanomanufacturing processes. The powerful method of model plausibility as a means for selecting interaction potentials for coarse-grained models is discussed in connection with a coarse-grained hexane molecule. Discussions of how all-atom information is used to construct priors are contained in an appendix.
Multipole models of four-image gravitational lenses with anomalous flux ratios
NASA Astrophysics Data System (ADS)
Congdon, Arthur B.; Keeton, Charles R.
2005-12-01
It has been known for over a decade that many four-image gravitational lenses exhibit anomalous radio flux ratios. These anomalies can be explained by adding a clumpy cold dark matter (CDM) component to the background galactic potential of the lens. As an alternative, Evans & Witt (2003) recently suggested that smooth multipole perturbations provide a reasonable alternative to CDM substructure in some but not all cases. We generalize their method in two ways so as to determine whether multipole models can explain highly anomalous systems. We carry the multipole expansion to higher order, and also include external tidal shear as a free parameter. Fitting for the shear proves crucial to finding a physical (positive-definite density) model. For B1422+231, working to order k_max = 5 (and including shear) yields a model that is physical but implausible. Going to higher order (k_max ≳ 9) reduces global departures from ellipticity, but at the cost of introducing small-scale wiggles in proximity to the bright images. These localized undulations are more pronounced in B2045+265, where k_max ≈ 17 multipoles are required to smooth out large-scale deviations from elliptical symmetry. Such modes surely cannot be taken at face value; they must indicate that the models are trying to reproduce some other sort of structure. Our formalism naturally finds models that fit the data exactly, but we use B0712+472 to show that measurement uncertainties have little effect on our results. Finally, we consider the system B1933+503, where two sources are lensed by the same foreground galaxy. The additional constraints provided by the images of the second source render the multipole model unphysical. We conclude that external shear must be taken into account to obtain plausible models, and that a purely smooth angular structure for the lens galaxy does not provide a viable alternative to the prevailing CDM clump hypothesis.
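Schematically, the angular structure being fitted is a Fourier multipole expansion of the lens potential with external shear as an additional quadrupole-like term; normalizations and sign conventions vary between papers, so the form below is generic rather than the authors' exact parameterization.

```latex
% Generic multipole expansion of the lens potential's angular structure,
% truncated at order k_max, plus an external shear term (schematic form only).
\psi(r,\varphi) \simeq r\,F(\varphi)
  - \frac{\gamma}{2}\, r^{2} \cos 2\,(\varphi - \varphi_{\gamma}),
\qquad
F(\varphi) = \frac{a_{0}}{2}
  + \sum_{k=1}^{k_{\max}} \bigl( a_{k} \cos k\varphi + b_{k} \sin k\varphi \bigr)
```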
Orr, Mark G; Kaplan, George A; Galea, Sandro
2016-09-01
Multiple approaches that can contribute to reducing obesity have been proposed. These policies may share overlapping pathways, and may have unanticipated consequences, creating considerable complexity. Aiming to illuminate the use of agent-based models to explore the consequences of key policies, this paper simulates the effects of increasing neighbourhood availability of good food stores, physical activity infrastructure and higher school quality on the reduction of black/white disparities in body mass index (BMI) in the USA. We used an agent-based model, with parameters derived from the empirical literature, which included individual and neighbourhood characteristics over the life course as determinants of behaviours thought to impact BMI. We systematically varied the strength of the three policy interventions, examining the impact of 125 different policy scenarios on black/white BMI disparities. In the absence of any of these policies, black/white BMI disparities generally increased over time. However, we found that some combinations of these policies resulted in reductions in BMI, yielding decreases in the black/white BMI disparity as large as 90%. Within the structure of relationships captured in this simulation model, there is support for the further use of agent-based simulation models to explore upstream policies as plausible candidates for the reduction of black/white disparities in BMI. These results highlight the potential insights into important public health problems, such as obesity, that can come from uniting the systems science approach with policy analysis.
A Simple Model of Global Aerosol Indirect Effects
NASA Technical Reports Server (NTRS)
Ghan, Steven J.; Smith, Steven J.; Wang, Minghuai; Zhang, Kai; Pringle, Kirsty; Carslaw, Kenneth; Pierce, Jeffrey; Bauer, Susanne; Adams, Peter
2013-01-01
Most estimates of the global mean indirect effect of anthropogenic aerosol on the Earth's energy balance are from simulations by global models of the aerosol lifecycle coupled with global models of clouds and the hydrologic cycle. Extremely simple models have been developed for integrated assessment models, but lack the flexibility to distinguish between primary and secondary sources of aerosol. Here a simple but more physically based model expresses the aerosol indirect effect (AIE) using analytic representations of cloud and aerosol distributions and processes. Although the simple model is able to produce estimates of AIEs that are comparable to those from some global aerosol models using the same global mean aerosol properties, the estimates by the simple model are sensitive to preindustrial cloud condensation nuclei concentration, preindustrial accumulation mode radius, width of the accumulation mode, size of primary particles, cloud thickness, primary and secondary anthropogenic emissions, the fraction of the secondary anthropogenic emissions that accumulates on the coarse mode, the fraction of the secondary mass that forms new particles, and the sensitivity of liquid water path to droplet number concentration. Estimates of present-day AIEs as low as -5 W/sq m and as high as -0.3 W/sq m are obtained for plausible sets of parameter values. Estimates are surprisingly linear in emissions. The estimates depend on parameter values in ways that are consistent with results from detailed global aerosol-climate simulation models, which adds to understanding of the dependence of AIE uncertainty on uncertainty in parameter values.
VISCOELASTIC MODELS OF TIDALLY HEATED EXOMOONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobos, Vera; Turner, Edwin L., E-mail: dobos@konkoly.hu
2015-05-01
Tidal heating of exomoons may play a key role in their habitability, since the elevated temperature can melt the ice on the body even without significant solar radiation. The possibility of life has been intensely studied on solar system moons such as Europa or Enceladus where the surface ice layer covers a tidally heated water ocean. Tidal forces may be even stronger in extrasolar systems, depending on the properties of the moon and its orbit. To study the tidally heated surface temperature of exomoons, we used a viscoelastic model for the first time. This model is more realistic than the widely used, so-called fixed Q models because it takes into account the temperature dependence of the tidal heat flux and the melting of the inner material. Using this model, we introduced the circumplanetary Tidal Temperate Zone (TTZ), which strongly depends on the orbital period of the moon and less on its radius. We compared the results with the fixed Q model and investigated the statistical volume of the TTZ using both models. We have found that the viscoelastic model predicts 2.8 times more exomoons in the TTZ with orbital periods between 0.1 and 3.5 days than the fixed Q model for plausible distributions of physical and orbital parameters. The viscoelastic model provides more promising results in terms of habitability because the inner melting of the body moderates the surface temperature, acting like a thermostat.
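For reference, the fixed-Q benchmark that the viscoelastic model is compared against is usually written, for a synchronously rotating moon on a low-eccentricity orbit, as below (k_2 is the Love number, Q the tidal quality factor, M_p the planet mass, R and a the moon's radius and semimajor axis, n its mean motion, and e its eccentricity); the viscoelastic model effectively replaces the constant k_2/Q with a temperature-dependent response.

```latex
% Standard fixed-Q tidal heating rate (synchronous rotation, small eccentricity).
\dot{E}_{\mathrm{tidal}} \simeq \frac{21}{2}\,\frac{k_{2}}{Q}\,
  \frac{G\,M_{p}^{2}\,n\,R^{5}\,e^{2}}{a^{6}}
```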
Making Stargates: The Physics of Traversable Absurdly Benign Wormholes
NASA Astrophysics Data System (ADS)
Woodward, J. F.
Extremely short throat "absurdly benign" wormholes enabling near instantaneous travel to arbitrarily remote locations in both space and time - stargates - have long been a staple of science fiction. The physical requirements for the production of such devices were worked out by Morris and Thorne in 1988. They approached the issue of rapid spacetime transport by asking the question: what constraints do the laws of physics as we know them place on an "arbitrarily advanced culture" (AAC)? Their answer - a Jupiter mass of negative rest-mass matter in a structure a few tens of meters in size - seems to have rendered such things beyond the realm of the believably achievable. This might be taken as justification for abandoning further serious exploration of the physics of stargates. If such an investigation is pursued, however, one way to do so is to invert Morris and Thorne's question and ask: if "arbitrarily advanced aliens" (AAAs) have actually made stargates, what must be true of the laws of physics for them to have done so? Elementary arithmetic reveals that stargates would have an "exotic" density on the order of 10^22 g/cm^3, that is, orders of magnitude higher than nuclear density. Not only does one have to achieve this stupendous density of negative mass matter, it must be done, presumably, only with the application of "low" energy electromagnetic fields. We examine this problem, finding that a plausible solution does not depend on the laws of quantum gravity, as some have proposed. Rather, the solution depends on understanding the nature of electrons in terms of a semi-classical extension of the exact, general relativistic electron model of Arnowitt, Deser, and Misner (ADM), and Mach's Principle.
Mass Conservation and Positivity Preservation with Ensemble-type Kalman Filter Algorithms
NASA Technical Reports Server (NTRS)
Janjic, Tijana; McLaughlin, Dennis B.; Cohn, Stephen E.; Verlaan, Martin
2013-01-01
Maintaining conservative physical laws numerically has long been recognized as being important in the development of numerical weather prediction (NWP) models. In the broader context of data assimilation, concerted efforts to maintain conservation laws numerically and to understand the significance of doing so have begun only recently. In order to enforce physically based conservation laws of total mass and positivity in the ensemble Kalman filter, we incorporate constraints to ensure that the filter ensemble members and the ensemble mean conserve mass and remain nonnegative through measurement updates. We show that the analysis steps of the ensemble transform Kalman filter (ETKF) algorithm and the ensemble Kalman filter (EnKF) algorithm can conserve the mass integral, but do not preserve positivity. Further, if localization is applied or if negative values are simply set to zero, then the total mass is not conserved either. In order to ensure mass conservation, a projection matrix that corrects for localization effects is constructed. In order to maintain both mass conservation and positivity preservation through the analysis step, we construct a data assimilation algorithm based on quadratic programming and ensemble Kalman filtering. Mass and positivity are both preserved by formulating the filter update as a set of quadratic programming problems that incorporate constraints. Some simple numerical experiments indicate that this approach can have a significant positive impact on the posterior ensemble distribution, giving results that are more physically plausible both for individual ensemble members and for the ensemble mean. The results show clear improvements in both analyses and forecasts, particularly in the presence of localized features. Behavior of the algorithm is also tested in the presence of model error.
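A minimal sketch of the constrained analysis step described above (a placeholder quadratic-programming formulation, not the authors' exact algorithm, and without the localization correction): each analysis member is projected back onto the set of nonnegative states whose components sum to the conserved total mass.

```python
import numpy as np
from scipy.optimize import minimize

# Project a raw analysis member xa onto { x : sum(x) = total_mass, x >= 0 }
# by solving the small QP  min ||x - xa||^2  subject to those constraints.
def constrained_analysis(xa, total_mass):
    cons = [{"type": "eq", "fun": lambda x: x.sum() - total_mass}]
    res = minimize(lambda x: np.sum((x - xa) ** 2),
                   x0=np.clip(xa, 0.0, None),
                   method="SLSQP",
                   bounds=[(0.0, None)] * xa.size,
                   constraints=cons)
    return res.x

xa = np.array([0.4, -0.1, 0.9, 0.3])                 # raw member with a negative value
x_const = constrained_analysis(xa, total_mass=1.5)   # nonnegative, mass-conserving
```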
Against Many-Worlds Interpretations
NASA Astrophysics Data System (ADS)
Kent, Adrian
This is a critical review of the literature on many-worlds interpretations, MWI, with arguments drawn partly from earlier critiques by Bell and Stein. The essential postulates involved in various MWI are extracted, and their consistency with the evident physical world is examined. Arguments are presented against MWI proposed by Everett, Graham and DeWitt. The relevance of frequency operators to MWI is examined; it is argued that frequency operator theorems of Hartle and Farhi-Goldstone-Gutmann do not in themselves provide a probability interpretation for quantum mechanics, and thus neither support existing MWI nor would be useful in constructing new MWI. Comments are made on papers by Geroch and Deutsch that advocate MWI. It is concluded that no plausible set of axioms exists for an MWI that describes known physics.
NASA Astrophysics Data System (ADS)
Klee, Robert
2017-10-01
Thomas Nagel in `The Absurd' (Nagel 1971) mentions the future expunction of the human species as a `metaphor' for our ability to see our lives from the outside, which he claims is one source of our sense of life's absurdity. I argue that the future expunction (not to be confused with extinction) of everything human - indeed of everything biological in a terran sense - is not a mere metaphor but a physical certainty under the laws of nature. The causal processes by which human expunction will take place are presented in some empirical detail, so that philosophers cannot dismiss it as merely speculative. I also argue that appeals to anthropic principles or to forms of mystical cosmology are of no plausible avail in the face of human expunction under the laws of physics.
Physical Mechanisms of Rapid Lake Warming
NASA Astrophysics Data System (ADS)
Lenters, J. D.
2016-12-01
Recent studies have shown significant warming of inland water bodies around the world. Many lakes are warming more rapidly than the ambient surface air temperature, and this is counter to what is often expected based on the lake surface energy balance. A host of reasons have been proposed to explain these discrepancies, including changes in the onset of summer stratification, significant loss of ice cover, and concomitant changes in winter air temperature and/or summer cloud cover. A review of the literature suggests that no single physical mechanism is primarily responsible for the majority of these changes, but rather that the large heterogeneity in regional climate trends and lake geomorphometry results in a host of potential physical drivers. In this study, we discuss the variety of mechanisms that have been proposed to explain rapid lake warming and offer an assessment of the physical plausibility of each potential contributor. Lake Superior is presented as a case study to illustrate the "perfect storm" of factors that can cause a deep, dimictic lake to warm at a rate that exceeds the rate of global air temperature warming by nearly an order of magnitude. In particular, we use a simple mixed-layer model to show that spatially variable trends in Lake Superior surface water temperature are determined, to first order, by variations in bathymetry and winter air temperature. Summer atmospheric conditions are often of less significance, and winter ice cover may simply be a correlate. The results highlight the importance of considering the full range of factors that can lead to trends in lake surface temperature, and that conventional wisdom may often not be the best guide.
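The "simple mixed-layer model" referred to can be reduced to a single heat-budget equation, dT/dt = Q_net/(rho c_p h); the sketch below (with an illustrative seasonal forcing and depths, not Lake Superior data) shows why shallow water columns, and by extension earlier stratification, translate into much faster summer surface warming.

```python
import numpy as np

# Mixed-layer heat budget: dT/dt = Q_net / (rho * c_p * h). For the same net
# surface heat flux, a shallower mixed layer (or water column) warms faster.
rho, c_p, dt = 1000.0, 4186.0, 86400.0            # density, heat capacity, 1-day step
days = np.arange(120)                             # an idealized warm season
Q_net = 150.0 * np.sin(np.pi * days / 120.0)      # illustrative net flux (W/m^2)

def seasonal_warming(h, T0=4.0):
    """Integrate mixed-layer temperature over the season for depth h (m)."""
    T = T0
    for q in Q_net:
        T += q * dt / (rho * c_p * h)
    return T - T0

print(seasonal_warming(h=20.0))    # shallow, nearshore-like column: large warming
print(seasonal_warming(h=150.0))   # deep, offshore-like column: much less warming
```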
Prospects for colliders and collider physics to the 1 PeV energy scale
NASA Astrophysics Data System (ADS)
King, Bruce J.
2000-08-01
A review is given of the prospects for future colliders and collider physics at the energy frontier. A proof-of-plausibility scenario is presented for maximizing our progress in elementary particle physics by extending the energy reach of hadron and lepton colliders as quickly and economically as might be technically and financially feasible. The scenario comprises five colliders beyond the LHC (one each of e+e- and hadron colliders and three μ+μ- colliders) and is able to hold to the historical rate of progress in the log-energy reach of hadron and lepton colliders, reaching the 1 PeV constituent mass scale by the early 2040s. The technical and fiscal requirements for the feasibility of the scenario are assessed and relevant long-term R&D projects are identified. Considerations of both cost and logistics seem to strongly favor housing most or all of the colliders in the scenario in a new world high energy physics laboratory.
Fessler, Daniel M T; Holbrook, Colin
2013-05-01
In situations of potential violent conflict, deciding whether to fight, flee, or try to negotiate entails assessing many attributes contributing to the relative formidability of oneself and one's opponent. Summary representations can usefully facilitate such assessments of multiple factors. Because physical size and strength are both phylogenetically ancient and ontogenetically recurrent contributors to the outcome of violent conflicts, these attributes provide plausible conceptual dimensions that may be used by the mind to summarize the relative formidability of opposing parties. Because the presence of allies is a vital factor in determining victory, we hypothesized that men accompanied by male companions would therefore envision a solitary foe as physically smaller and less muscular than would men who were alone. We document the predicted effect in two studies, one using naturally occurring variation in the presence of male companions and one employing experimental manipulation of this factor.
Hawking temperature: an elementary approach based on Newtonian mechanics and quantum theory
NASA Astrophysics Data System (ADS)
Pinochet, Jorge
2016-01-01
In 1974, the British physicist Stephen Hawking discovered that black holes have a characteristic temperature and are therefore capable of emitting radiation. Given the scientific importance of this discovery, there is a profuse literature on the subject. Nevertheless, the available literature ends up being either too simple, which does not convey the true physical significance of the issue, or too technical, which excludes an ample segment of the audience interested in science, such as physics teachers and their students. The present article seeks to remedy this shortcoming. It develops a simple and plausible argument that provides insight into the fundamental aspects of Hawking’s discovery, which leads to an approximate equation for the so-called Hawking temperature. The exposition is mainly intended for physics teachers and their students, and it only requires elementary algebra, as well as basic notions of Newtonian mechanics and quantum theory.
NASA Astrophysics Data System (ADS)
Ammann, C. M.; Holland, M. M.
2016-12-01
The Arctic is undergoing an exceptionally rapid transformation. Trying to predict or project the consequences of this change is pushing nearly every discipline in the physical, biogeochemical and social sciences towards the limits of their current understanding. Adequate data is missing to test and validate models for capturing a state of the Arctic system that we have not observed. But even more challenging is the systems-level evaluation, where impacts can quickly lead to unexpected outcomes with cascading repercussions throughout the different components and subcomponents of the environment. One approach to test our understanding, and to expose gaps in current observation strategies, modeling approaches as well as planning tools (e.g., forecast workflows, or decision frameworks) is to carefully design a small number of coordinated scenarios of plausible future states of the system, and then to study their diverse, potential impacts. A coordination of the scenarios is essential so that all disciplinary perspectives can be arranged around a common state, assumptions can be aligned, and a transdisciplinary conversation can be advanced from a common platform to form a comprehensive assessment of our knowledge. This presentation is a call to the community to join and assist the SEARCH program in designing effective scenarios that can be used for cross-cutting investigation of current limitations in our scientific understanding of how the Arctic environment might change, and what consequences these changes might bring to the physical, biological and social environments.
Simulation-based sensitivity analysis for non-ignorably missing data.
Yin, Peng; Shi, Jian Q
2017-01-01
Sensitivity analysis is popular for dealing with missing data problems, particularly for non-ignorable missingness, where full-likelihood methods cannot be adopted. It analyses how sensitively the conclusions (output) depend on assumptions or parameters (input) about the missing data, i.e. the missing data mechanism; we call models subject to this uncertainty sensitivity models. To make conventional sensitivity analysis more useful in practice, we need simple and interpretable statistical quantities for assessing sensitivity models and supporting evidence-based analysis. In this paper we propose a novel approach for investigating the plausibility of each assumed missing data mechanism, by comparing datasets simulated from various MNAR models with the observed data non-parametrically using K-nearest-neighbour distances. Some asymptotic theory is also provided. A key step of the method is a plausibility evaluation system for each sensitivity parameter, which selects plausible values and rejects unlikely ones, instead of considering all proposed values of the sensitivity parameters as in conventional sensitivity analysis. The method is generic and is applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data, and mean estimation with non-ignorable missing data.
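A minimal sketch of the comparison step just described, under simplifying assumptions: a toy MNAR simulator stands in for the fitted sensitivity models, and the acceptance threshold is arbitrary; only the use of K-nearest-neighbour distances as a non-parametric discrepancy follows the abstract.

```python
# Screen candidate MNAR sensitivity parameters by comparing data simulated
# under each candidate with the observed data, using K-nearest-neighbour
# distances as a non-parametric discrepancy. Simulator and threshold are
# illustrative assumptions, not the authors' implementation.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

def knn_discrepancy(observed, simulated, k=5):
    """Mean distance from each observed point to its k-th nearest simulated point."""
    dist, _ = cKDTree(simulated).query(observed, k=k)
    return dist[:, -1].mean()

def simulate_mnar(delta, n=500):
    """Toy MNAR simulator: y is observed with probability depending on y itself."""
    y = rng.normal(size=(n, 1))
    p_obs = 1.0 / (1.0 + np.exp(-(0.5 + delta * y)))
    return y[rng.uniform(size=(n, 1)) < p_obs].reshape(-1, 1)

observed = simulate_mnar(delta=1.0)            # pretend these are the real data
candidates = np.linspace(-2.0, 2.0, 9)         # proposed sensitivity parameters
scores = {d: knn_discrepancy(observed, simulate_mnar(d)) for d in candidates}

# Retain only the plausible candidates instead of the whole grid.
threshold = 1.5 * min(scores.values())
plausible = [d for d, s in scores.items() if s <= threshold]
print("plausible sensitivity parameters:", plausible)
```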
ERIC Educational Resources Information Center
Gunnoe, Marjorie Lindner; Mariner, Carrie Lea
Researchers who employ contextual models of parenting contend that it is not spanking per se, but rather the context in which spanking occurs and the meanings children ascribe to spanking, that predict child outcomes. This study proposed two plausible meanings that children may ascribe to spanking--a legitimate expression of parental authority or…
ERIC Educational Resources Information Center
Dougherty, Michael R.; Franco-Watkins, Ana M.; Thomas, Rick
2008-01-01
The theory of probabilistic mental models (PMM; G. Gigerenzer, U. Hoffrage, & H. Kleinbolting, 1991) has had a major influence on the field of judgment and decision making, with the most recent important modifications to PMM theory being the identification of several fast and frugal heuristics (G. Gigerenzer & D. G. Goldstein, 1996). These…
ERIC Educational Resources Information Center
Mavritsaki, Eirini; Heinke, Dietmar; Allen, Harriet; Deco, Gustavo; Humphreys, Glyn W.
2011-01-01
We present the case for a role of biologically plausible neural network modeling in bridging the gap between physiology and behavior. We argue that spiking-level networks can allow "vertical" translation between physiological properties of neural systems and emergent "whole-system" performance--enabling psychological results to be simulated from…
Interval Estimation of Revision Effect on Scale Reliability via Covariance Structure Modeling
ERIC Educational Resources Information Center
Raykov, Tenko
2009-01-01
A didactic discussion of a procedure for interval estimation of change in scale reliability due to revision is provided, which is developed within the framework of covariance structure modeling. The method yields ranges of plausible values for the population gain or loss in reliability of unidimensional composites, which results from deletion or…
NASA Astrophysics Data System (ADS)
Hemmings, J. C. P.; Challenor, P. G.
2012-04-01
A wide variety of different plankton system models have been coupled with ocean circulation models, with the aim of understanding and predicting aspects of environmental change. However, an ability to make reliable inferences about real-world processes from the model behaviour demands a quantitative understanding of model error that remains elusive. Assessment of coupled model output is inhibited by relatively limited observing system coverage of biogeochemical components. Any direct assessment of the plankton model is further inhibited by uncertainty in the physical state. Furthermore, comparative evaluation of plankton models on the basis of their design is inhibited by the sensitivity of their dynamics to many adjustable parameters. Parameter uncertainty has been widely addressed by calibrating models at data-rich ocean sites. However, relatively little attention has been given to quantifying uncertainty in the physical fields required by the plankton models at these sites, and tendencies in the biogeochemical properties due to the effects of horizontal processes are often neglected. Here we use model twin experiments, in which synthetic data are assimilated to estimate a system's known "true" parameters, to investigate the impact of error in a plankton model's environmental input data. The experiments are supported by a new software tool, the Marine Model Optimization Testbed, designed for rigorous analysis of plankton models in a multi-site 1-D framework. Simulated errors are derived from statistical characterizations of the mixed layer depth, the horizontal flux divergence tendencies of the biogeochemical tracers and the initial state. Plausible patterns of uncertainty in these data are shown to produce strong temporal and spatial variability in the expected simulation error variance over an annual cycle, indicating variation in the significance attributable to individual model-data differences. An inverse scheme using ensemble-based estimates of the simulation error variance to allow for this environment error performs well compared with weighting schemes used in previous calibration studies, giving improved estimates of the known parameters. The efficacy of the new scheme in real-world applications will depend on the quality of statistical characterizations of the input data. Practical approaches towards developing reliable characterizations are discussed.
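A minimal sketch of the ensemble-based weighting idea, assuming synthetic placeholder data; the testbed's actual cost function and statistical characterizations of the environmental inputs are not reproduced here.

```python
# An ensemble of simulations run with perturbed environmental inputs
# (mixed-layer depth, lateral flux tendencies, initial state) yields a
# time-varying simulation error variance, which down-weights model-data
# differences in the calibration cost function.
import numpy as np

def weighted_misfit(model_run, observations, ensemble_runs, obs_var):
    """Sum of squared model-data differences, each divided by the sum of
    observation error variance and ensemble-derived simulation error variance."""
    sim_var = ensemble_runs.var(axis=0, ddof=1)   # variance across ensemble members
    residual = model_run - observations
    return np.nansum(residual**2 / (obs_var + sim_var))

# Shapes: ensemble_runs is (n_members, n_times); the rest are (n_times,).
# Values below are placeholders.
rng = np.random.default_rng(1)
ensemble_runs = rng.normal(1.0, 0.2, size=(50, 365))
model_run = rng.normal(1.0, 0.1, size=365)
observations = rng.normal(1.0, 0.1, size=365)
print(weighted_misfit(model_run, observations, ensemble_runs, obs_var=0.05**2))
```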
Plausibility and the Theoreticians' Regress: Constructing the evolutionary fate of stars
NASA Astrophysics Data System (ADS)
Ipe, Alex Ike
2002-10-01
This project presents a case-study of a scientific controversy that occurred in theoretical astrophysics nearly seventy years ago following the conceptual discovery of a novel phenomenon relating to the evolution and structure of stellar matter, known as the limiting mass. The ensuing debate between the author of the finding, Subrahmanyan Chandrasekhar, and his primary critic, Arthur Stanley Eddington, witnessed both scientists trying to convince one another, as well as the astrophysical community, that their respective positions on the issue were the correct ones. Since there was no independent criterion—that is, no observational evidence—at the time of the dispute that could have been drawn upon to test the validity of the limiting mass concept, a logical, objective resolution to the controversy was not possible. In this respect, I argue that the dynamics of the Chandrasekhar-Eddington debate succinctly resonates with Kennefick's notion of the Theoreticians' Regress. However, whereas this model predicts that such a regress can be broken if both parties in a dispute come to agree on who was in error and collaborate on a calculation whose technical foundation can be agreed to, I argue that a more pragmatic path by which the Theoreticians' Regress is broken is when one side in a dispute is able to construct its argument as being more plausible than that of its opponent, and is so successful in doing so that its opposition is subsequently forced to withdraw from the debate. In order to adequately deal with the construction of plausibility in the context of scientific controversies, I draw upon Harvey's Plausibility Model as well as Pickering's work on the role socio-cultural factors play in the resolution of intellectual disputes. It is believed that the ideas embedded in these social-relativist-constructivist perspectives provide the most parsimonious explanation as to the reasons for the genesis and ultimate closure of this particular scientific controversy.
NASA Astrophysics Data System (ADS)
Perez, J. C.; Chandran, B. D. G.
2017-12-01
In this work we present recent results from high-resolution direct numerical simulations and a phenomenological model that describes the radial evolution of reflection-driven Alfvén wave turbulence in the solar atmosphere and the inner solar wind. The simulations are performed inside a narrow magnetic flux tube that models a coronal hole extending from the solar surface through the chromosphere and into the solar corona to approximately 21 solar radii. The simulations include prescribed empirical profiles that account for the inhomogeneities in density, background flow, and the background magnetic field present in coronal holes. Alfvén waves are injected into the solar corona by imposing random, time-dependent velocity and magnetic field fluctuations at the photosphere. The phenomenological model incorporates three important features observed in the simulations: dynamic alignment, weak/strong nonlinear AW-AW interactions, and that the outward-propagating AWs launched by the Sun split into two populations with different characteristic frequencies. Model and simulations are in good agreement and show that when the key physical parameters are chosen within observational constraints, reflection-driven Alfvén turbulence is a plausible mechanism for the heating and acceleration of the fast solar wind. By flying a virtual Parker Solar Probe (PSP) through the simulations, we will also establish comparisons between the model and simulations with the kind of single-point measurements that PSP will provide.
A minimalist feedback-regulated model for galaxy formation during the epoch of reionization
NASA Astrophysics Data System (ADS)
Furlanetto, Steven R.; Mirocha, Jordan; Mebane, Richard H.; Sun, Guochao
2017-12-01
Near-infrared surveys have now determined the luminosity functions of galaxies at 6 ≲ z ≲ 8 to impressive precision and identified a number of candidates at even earlier times. Here, we develop a simple analytic model to describe these populations that allows physically motivated extrapolation to earlier times and fainter luminosities. We assume that galaxies grow through accretion on to dark matter haloes, which we model by matching haloes at fixed number density across redshift, and that stellar feedback limits the star formation rate. We allow for a variety of feedback mechanisms, including regulation through supernova energy and momentum from radiation pressure. We show that reasonable choices for the feedback parameters can fit the available galaxy data, which in turn substantially limits the range of plausible extrapolations of the luminosity function to earlier times and fainter luminosities: for example, the global star formation rate declines rapidly (by a factor of ∼20 from z = 6 to 15 in our fiducial model), but the bright galaxies accessible to observations decline even faster (by a factor ≳ 400 over the same range). Our framework helps us develop intuition for the range of expectations permitted by simple models of high-z galaxies that build on our understanding of 'normal' galaxy evolution. We also provide predictions for galaxy measurements by future facilities, including James Webb Space Telescope and Wide-Field Infrared Survey Telescope.
Pilgrims sailing the Titanic: plausibility effects on memory for misinformation.
Hinze, Scott R; Slaten, Daniel G; Horton, William S; Jenkins, Ryan; Rapp, David N
2014-02-01
People rely on information they read even when it is inaccurate (Marsh, Meade, & Roediger, Journal of Memory and Language 49:519-536, 2003), but how ubiquitous is this phenomenon? In two experiments, we investigated whether this tendency to encode and rely on inaccuracies from text might be influenced by the plausibility of misinformation. In Experiment 1, we presented stories containing inaccurate plausible statements (e.g., "The Pilgrims' ship was the Godspeed"), inaccurate implausible statements (e.g., . . . the Titanic), or accurate statements (e.g., . . . the Mayflower). On a subsequent test of general knowledge, participants relied significantly less on implausible than on plausible inaccuracies from the texts but continued to rely on accurate information. In Experiment 2, we replicated these results with the addition of a think-aloud procedure to elicit information about readers' noticing and evaluative processes for plausible and implausible misinformation. Participants indicated more skepticism and less acceptance of implausible than of plausible inaccuracies. In contrast, they often failed to notice, completely ignored, and at times even explicitly accepted the misinformation provided by plausible lures. These results offer insight into the conditions under which reliance on inaccurate information occurs and suggest potential mechanisms that may underlie reported misinformation effects.
B- and A-Type Stars in the Taurus-Auriga Star-Forming Region
NASA Technical Reports Server (NTRS)
Mooley, Kunal; Hillenbrand, Lynne; Rebull, Luisa; Padgett, Deborah; Knapp, Gillian
2013-01-01
We describe the results of a search for early-type stars associated with the Taurus-Auriga molecular cloud complex, a diffuse nearby star-forming region noted as lacking young stars of intermediate and high mass. We investigate several sets of possible O, B, and early A spectral class members. The first is a group of stars for which mid-infrared images show bright nebulae, all of which can be associated with stars of spectral-type B. The second group consists of early-type stars compiled from (1) literature listings in SIMBAD, (2) B stars with infrared excesses selected from the Spitzer Space Telescope survey of the Taurus cloud, (3) magnitude- and color-selected point sources from the Two Micron All Sky Survey, and (4) spectroscopically identified early-type stars from the Sloan Digital Sky Survey coverage of the Taurus region. We evaluated stars for membership in the Taurus-Auriga star formation region based on criteria involving: spectroscopic and parallactic distances, proper motions and radial velocities, and infrared excesses or line emission indicative of stellar youth. For selected objects, we also model the scattered and emitted radiation from reflection nebulosity and compare the results with the observed spectral energy distributions to further test the plausibility of physical association of the B stars with the Taurus cloud. This investigation newly identifies as probable Taurus members three B-type stars: HR 1445 (HD 28929), τ Tau (HD 29763), 72 Tau (HD 28149), and two A-type stars: HD 31305 and HD 26212, thus doubling the number of stars A5 or earlier associated with the Taurus clouds. Several additional early-type sources including HD 29659 and HD 283815 meet some, but not all, of the membership criteria and therefore are plausible, though not secure, members.
Waythomas, C.F.
2001-01-01
The formation of lahars and a debris avalanche during Holocene eruptions of the Spurr volcanic complex in south-central Alaska has led to the development of volcanic debris dams in the Chakachatna River valley. Debris dams composed of lahar and debris-avalanche deposits formed at least five times in the last 8000-10,000 years and most recently during eruptions of Crater Peak vent in 1953 and 1992. Water impounded by a large debris avalanche of early Holocene (?) age may have destabilized an upstream glacier-dammed lake causing a catastrophic flood on the Chakachatna River. A large alluvial fan just downstream of the debris-avalanche deposit is strewn with boulders and blocks and is probably the deposit generated by this flood. Application of a physically based dam-break model yields estimates of peak discharge (Qp) attained during failure of the debris-avalanche dam in the range 10⁴ < Qp < 10⁶ m³ s⁻¹ for plausible breach erosion rates of 10-100 m h⁻¹. Smaller, short-lived lahar dams that formed during historical eruptions in 1953 and 1992 impounded smaller lakes in the upper Chakachatna River valley, and peak flows attained during failure of these volcanic debris dams were in the range 10³ < Qp < 10⁴ m³ s⁻¹ for plausible breach erosion rates. Volcanic debris dams have formed at other volcanoes in the Cook Inlet region, Aleutian arc, and Wrangell Mountains but apparently did not fail rapidly or result in large or catastrophic outflows. Steep valley topography and frequent eruptions at volcanoes in this region make for significant hazards associated with the formation and failure of volcanic debris dams. Published by Elsevier Science B.V.
Nojavan A, Farnaz; Qian, Song S; Paerl, Hans W; Reckhow, Kenneth H; Albright, Elizabeth A
2014-06-15
The present paper utilizes a Bayesian Belief Network (BBN) approach to intuitively present and quantify our current understanding of the complex physical, chemical, and biological processes that lead to eutrophication in an estuarine ecosystem (New River Estuary, North Carolina, USA). The model is further used to explore the effects of plausible future climatic and nutrient pollution management scenarios on water quality indicators. The BBN, through visualizing the structure of the network, facilitates knowledge communication with managers and stakeholders who might not be experts in the underlying scientific disciplines. Moreover, the developed structure of the BBN is transferable to other comparable estuaries. The BBN nodes are discretized using a new approach called the moment matching method. The conditional probability tables of the variables are driven by a large dataset (four years). Our results show interactions among the various predictors and their impact on water quality indicators; these synergistic effects warrant caution in future management actions. Copyright © 2014 Elsevier Ltd. All rights reserved.
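As a toy illustration of the kind of scenario query such a network supports (not the New River Estuary model itself), the sketch below marginalizes one invented conditional probability table over two discretized parent scenarios, nutrient loading and temperature.

```python
# Single water-quality node (e.g. "chlorophyll-a high") conditioned on
# discretized nutrient-load and temperature scenarios. Node names and
# probabilities are invented for illustration only.
import numpy as np

# P(chl_high | load, temp): rows = load (low, high), cols = temp (cool, warm)
cpt = np.array([[0.10, 0.25],
                [0.35, 0.60]])

def p_chl_high(p_load_high, p_temp_warm):
    """Marginalize the CPT over the two (independent) parent scenarios."""
    p_load = np.array([1 - p_load_high, p_load_high])
    p_temp = np.array([1 - p_temp_warm, p_temp_warm])
    return p_load @ cpt @ p_temp

# Compare a baseline with a "warmer climate, reduced nutrient loading" scenario.
print(p_chl_high(p_load_high=0.5, p_temp_warm=0.3))
print(p_chl_high(p_load_high=0.2, p_temp_warm=0.7))
```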
A machine learning approach to computer-aided molecular design
NASA Astrophysics Data System (ADS)
Bolis, Giorgio; Di Pace, Luigi; Fabrocini, Filippo
1991-12-01
Preliminary results of a machine learning application concerning computer-aided molecular design applied to drug discovery are presented. The artificial intelligence techniques of machine learning use a sample of active and inactive compounds, which is viewed as a set of positive and negative examples, to allow the induction of a molecular model characterizing the interaction between the compounds and a target molecule. The algorithm proceeds in two phases. In the first one — the specialization step — the program identifies a number of active/inactive pairs of compounds which appear to be the most useful in order to make the learning process as effective as possible and generates a dictionary of molecular fragments, deemed to be responsible for the activity of the compounds. In the second phase — the generalization step — the fragments thus generated are combined and generalized in order to select the most plausible hypothesis with respect to the sample of compounds. A knowledge base concerning physical and chemical properties is utilized during the inductive process.
The effect of suspended particles on Jeans' criterion for gravitational instability
NASA Technical Reports Server (NTRS)
Wollkind, David J.; Yates, Kemble R.
1990-01-01
The effect that the proper inclusion of suspended particles has on Jeans' criterion for the self-gravitational instability of an unbounded nonrotating adiabatic gas cloud is examined by formulating the appropriate model system, introducing particular physically plausible equations of state and constitutive relations, performing a linear stability analysis of a uniformly expanding exact solution to these governing equations, and exploiting the fact that there exists a natural small material parameter for this problem given by N₁/n₁, the ratio of the initial number density for the particles to that for the gas. The main result of this investigation is the derivation of an altered criterion which can substantially reduce Jeans' original critical wavelength for instability. It is then shown that the existing discrepancy between Jeans' theoretical prediction and actual observational data relevant to the Andromeda nebula M31 can be accounted for by this new criterion, assuming suspended particles of a reasonable grain size and distribution to be present.
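For background, the classical criterion being modified can be stated compactly; the altered two-phase criterion derived in the paper is not reproduced here. In the standard analysis, a perturbation is gravitationally unstable when its wavelength exceeds the Jeans length:

```latex
% Classical Jeans criterion (background only; the paper derives a reduced
% critical wavelength once suspended particles are included).
\lambda > \lambda_J = c_s \sqrt{\frac{\pi}{G \rho_0}},
\qquad c_s^2 = \frac{\gamma p_0}{\rho_0},
```

where c_s is the adiabatic sound speed, ρ_0 the unperturbed gas density, and G the gravitational constant.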
MRI Superresolution Using Self-Similarity and Image Priors
Manjón, José V.; Coupé, Pierrick; Buades, Antonio; Collins, D. Louis; Robles, Montserrat
2010-01-01
In typical clinical settings of Magnetic Resonance Imaging, both low- and high-resolution images of different types are routinely acquired. In some cases, the acquired low-resolution images have to be upsampled to match other high-resolution images for posterior analysis or postprocessing such as registration or multimodal segmentation. However, classical interpolation techniques are not able to recover the high-frequency information lost during the acquisition process. In the present paper, a new superresolution method is proposed to reconstruct high-resolution images from the low-resolution ones using information from coplanar high-resolution images acquired from the same subject. Furthermore, the reconstruction process is constrained to be physically plausible with the MR acquisition model, which allows a meaningful interpretation of the results. Experiments on synthetic and real data are supplied to show the effectiveness of the proposed approach. A comparison with classical state-of-the-art interpolation techniques is presented to demonstrate the improved performance of the proposed methodology. PMID:21197094
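A simplified sketch of the acquisition-model constraint mentioned above, assuming the degradation is plain block averaging and the data are random placeholders; the published method additionally exploits self-similarity and coplanar high-resolution images as priors.

```python
# The reconstructed high-resolution (HR) image, when degraded by the assumed
# MR acquisition model (here, simple block averaging), must reproduce the
# acquired low-resolution (LR) image.
import numpy as np

def block_mean(hr, f):
    """Acquisition model: average f x f blocks of the HR image."""
    h, w = hr.shape
    return hr.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def enforce_consistency(hr_estimate, lr, f):
    """Add, to every HR voxel, the residual between the LR data and the
    downsampled HR estimate, so that block_mean(result, f) equals lr."""
    residual = lr - block_mean(hr_estimate, f)
    return hr_estimate + np.kron(residual, np.ones((f, f)))

f = 2
lr = np.random.rand(8, 8)
hr0 = np.kron(lr, np.ones((f, f)))                   # naive initial upsampling
hr1 = enforce_consistency(hr0 + 0.1 * np.random.rand(16, 16), lr, f)
assert np.allclose(block_mean(hr1, f), lr)           # consistency restored
```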
NASA Astrophysics Data System (ADS)
Robinson, Alexandra R.
An updated global survey of radioisotope production and distribution was completed and subjected to a revised "down-selection methodology" to determine those radioisotopes that should be classified as potential national security risks based on availability and key physical characteristics that could be exploited in a hypothetical radiological dispersion device. The potential at-risk radioisotopes then were used in a modeling software suite known as Turbo FRMAC, developed by Sandia National Laboratories, to characterize plausible contamination maps known as Protective Action Guideline Zone Maps. This software also was used to calculate the whole body dose equivalent for exposed individuals based on various dispersion parameters and scenarios. Derived Response Levels then were determined for each radioisotope using: 1) target doses to members of the public provided by the U.S. EPA, and 2) occupational dose limits provided by the U.S. Nuclear Regulatory Commission. The limiting Derived Response Level for each radioisotope also was determined.
Potential sea-level rise from Antarctic ice-sheet instability constrained by observations.
Ritz, Catherine; Edwards, Tamsin L; Durand, Gaël; Payne, Antony J; Peyaud, Vincent; Hindmarsh, Richard C A
2015-12-03
Large parts of the Antarctic ice sheet lying on bedrock below sea level may be vulnerable to marine-ice-sheet instability (MISI), a self-sustaining retreat of the grounding line triggered by oceanic or atmospheric changes. There is growing evidence that MISI may be underway throughout the Amundsen Sea embayment (ASE), which contains ice equivalent to more than a metre of global sea-level rise. If triggered in other regions, the centennial to millennial contribution could be several metres. Physically plausible projections are challenging: numerical models with sufficient spatial resolution to simulate grounding-line processes have been too computationally expensive to generate large ensembles for uncertainty assessment, and lower-resolution model projections rely on parameterizations that are only loosely constrained by present day changes. Here we project that the Antarctic ice sheet will contribute up to 30 cm sea-level equivalent by 2100 and 72 cm by 2200 (95% quantiles) where the ASE dominates. Our process-based, statistical approach gives skewed and complex probability distributions (single mode, 10 cm, at 2100; two modes, 49 cm and 6 cm, at 2200). The dependence of sliding on basal friction is a key unknown: nonlinear relationships favour higher contributions. Results are conditional on assessments of MISI risk on the basis of projected triggers under the climate scenario A1B (ref. 9), although sensitivity to these is limited by theoretical and topographical constraints on the rate and extent of ice loss. We find that contributions are restricted by a combination of these constraints, calibration with success in simulating observed ASE losses, and low assessed risk in some basins. Our assessment suggests that upper-bound estimates from low-resolution models and physical arguments (up to a metre by 2100 and around one and a half by 2200) are implausible under current understanding of physical mechanisms and potential triggers.
Europa's Crust and Ocean: Origin, Composition, and the Prospects for Life
Kargel, J.S.; Kaye, J.Z.; Head, J. W.; Marion, G.M.; Sassen, R.; Crowley, J.K.; Ballesteros, O.P.; Grant, S.A.; Hogenboom, D.L.
2000-01-01
We have considered a wide array of scenarios for Europa's chemical evolution in an attempt to explain the presence of ice and hydrated materials on its surface and to understand the physical and chemical nature of any ocean that may lie below. We postulate that, following formation of the jovian system, the europan evolutionary sequence has as its major links: (a) initial carbonaceous chondrite rock, (b) global primordial aqueous differentiation and formation of an impure primordial hydrous crust, (c) brine evolution and intracrustal differentiation, (d) degassing of Europa's mantle and gas venting, (e) hydrothermal processes, and (f) chemical surface alteration. Our models were developed in the context of constraints provided by Galileo imaging, near infrared reflectance spectroscopy, and gravity and magnetometer data. Low-temperature aqueous differentiation from a carbonaceous CI or CM chondrite precursor, without further chemical processing, would result in a crust/ocean enriched in magnesium sulfate and sodium sulfate, consistent with Galileo spectroscopy. Within the bounds of this simple model, a wide range of possible layered structures may result; the final state depends on the details of intracrustal differentiation. Devolatilization of the rocky mantle and hydrothermal brine reactions could have produced very different ocean/crust compositions, e.g., an ocean/crust of sodium carbonate or sulfuric acid, or a crust containing abundant clathrate hydrates. Realistic chemical-physical evolution scenarios differ greatly in detailed predictions, but they generally call for a highly impure and chemically layered crust. Some of these models could lead also to lateral chemical heterogeneities by diapiric upwellings and/or cryovolcanism. We describe some plausible geological consequences of the physical-chemical structures predicted from these scenarios. These predicted consequences and observed aspects of Europa's geology may serve as a basis for further analysis and discrimination among several alternative scenarios. Most chemical pathways could support viable ecosystems based on analogy with the metabolic and physiological versatility of terrestrial microorganisms. © 2000 Academic Press.
Making the universe safe for historians: Time travel and the laws of physics
NASA Astrophysics Data System (ADS)
Woodward, James F.
1995-02-01
The study of the hypothetical activities of arbitrarily advanced cultures, particularly in the area of space and time travel, as a means of investigating fundamental issues in physics is briefly discussed. Hawking's chronology protection conjecture as it applies to wormhole spacetimes is considered. The nature of time, especially regarding the viability of time travel, as it appears in several “interpretations” of quantum mechanics is investigated. A conjecture on the plausibility of theories of reality that admit relativistically invariant interactions and irreducibly stochastic processes is advanced. A transient inertial reaction effect that makes it technically feasible, fleetingly, to induce large concentrations of negative mass-energy is presented and discussed in the context of macroscopic wormhole formation. Other candidates for chronology protection are examined. It is pointed out that if the strong version of Mach's principle (the gravitational induction of mass) is correct, then wormhole formation employing negative mass-energy is impossible. But if the bare masses of elementary particles are large, finite and negative, as is suggested by a heuristic general relativistic model of elementary particles, then, using the transient effect, it is technically feasible to trigger a non-linear process that may lead to macroscopic wormhole formation. Such wormholes need not be destroyed by the Hawking protection mechanism.
Quantum mechanical wavefunction: visualization at undergraduate level
NASA Astrophysics Data System (ADS)
Chhabra, Mahima; Das, Ritwick
2017-01-01
Quantum mechanics (QM) forms the most crucial ingredient of modern-era physical science curricula at undergraduate level. The abstract ideas involved in QM related concepts pose a challenge towards appropriate visualization as a consequence of their counter-intuitive nature and lack of experiment-assisted visualization tools. At the heart of the quantum mechanical formulation lies the concept of ‘wavefunction’, which forms the basis for understanding the behavior of physical systems. At undergraduate level, the concept of ‘wavefunction’ is introduced in an abstract framework using mathematical tools and therefore opens up an enormous scope for alternative conceptions and erroneous visualization. The present work is an attempt towards exploring the visualization models constructed by undergraduate students for appreciating the concept of ‘wavefunction’. We present a qualitative analysis of the data obtained from administering a questionnaire containing four visualization based questions on the topic of ‘wavefunction’ to a group of ten undergraduate-level students at an institute in India which excels in teaching and research of basic sciences. Based on the written responses, all ten students were interviewed in detail to unravel the exact areas of difficulty in visualization of ‘wavefunction’. The outcome of present study not only reveals the gray areas in students’ conceptualization, but also provides a plausible route to address the issues at the pedagogical level within the classroom.
Generalized gas-solid adsorption modeling: Single-component equilibria
Ladshaw, Austin; Yiacoumi, Sotira; Tsouris, Costas; ...
2015-01-07
Over the last several decades, modeling of gas–solid adsorption at equilibrium has generally been accomplished through the use of isotherms such as the Freundlich, Langmuir, Tóth, and other similar models. While these models are relatively easy to adapt for describing experimental data, their simplicity limits their generality when used with many different sets of data. This limitation forces engineers and scientists to test each different model in order to evaluate which one can best describe their data. Additionally, the parameters of these models all have a different physical interpretation, which may have an effect on how they can be further extended into kinetic, thermodynamic, and/or mass transfer models for engineering applications. Therefore, it is paramount to adopt not only a more general isotherm model, but also a concise methodology to reliably optimize for and obtain the parameters of that model. A model of particular interest is the Generalized Statistical Thermodynamic Adsorption (GSTA) isotherm. The GSTA isotherm has enormous flexibility, which could potentially be used to describe a variety of different adsorption systems, but utilizing this model can be fairly difficult due to that flexibility. To circumvent this complication, a comprehensive methodology and computer code has been developed that can perform a full equilibrium analysis of adsorption data for any gas-solid system using the GSTA model. The code has been developed in C/C++ and utilizes a Levenberg–Marquardt algorithm to handle the non-linear optimization of the model parameters. Since the GSTA model has an adjustable number of parameters, the code iteratively goes through all plausible numbers of parameters for each data set and then returns the best solution based on a set of scrutiny criteria. Data sets at different temperatures are analyzed serially and then linear correlations with temperature are made for the parameters of the model. The end result is a full set of optimal GSTA parameters, both dimensional and non-dimensional, as well as the corresponding thermodynamic parameters necessary to predict the behavior of the system at temperatures for which data were not available. It will be shown that this code, utilizing the GSTA model, was able to describe a wide variety of gas-solid adsorption systems at equilibrium. In addition, a physical interpretation of these results will be provided, as well as an alternate derivation of the GSTA model, which intends to reaffirm the physical meaning.
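The authors' code is written in C/C++ around the multi-parameter GSTA model; as a hedged stand-in, the sketch below fits a single-site Langmuir isotherm to synthetic data with SciPy's Levenberg-Marquardt least-squares solver, only to illustrate the optimization step.

```python
# Fit a simple isotherm by non-linear least squares (Levenberg-Marquardt).
# The Langmuir form and the data values are placeholders, not the GSTA model.
import numpy as np
from scipy.optimize import least_squares

def langmuir(params, p):
    q_max, K = params
    return q_max * K * p / (1.0 + K * p)

def residuals(params, p, q_obs):
    return langmuir(params, p) - q_obs

p = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0])            # pressure (bar)
q_obs = np.array([0.45, 0.82, 1.35, 2.10, 2.65, 3.05, 3.35])  # loading (mol/kg)

fit = least_squares(residuals, x0=[4.0, 1.0], args=(p, q_obs), method="lm")
q_max, K = fit.x
print(f"q_max = {q_max:.2f} mol/kg, K = {K:.2f} 1/bar")
```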
NASA Astrophysics Data System (ADS)
Stisen, S.; Højberg, A. L.; Troldborg, L.; Refsgaard, J. C.; Christensen, B. S. B.; Olsen, M.; Henriksen, H. J.
2012-11-01
Precipitation gauge catch correction is often given very little attention in hydrological modelling compared to model parameter calibration. This is critical because significant precipitation biases often make the calibration exercise pointless, especially when supposedly physically-based models are in play. This study addresses the general importance of appropriate precipitation catch correction through a detailed modelling exercise. An existing precipitation gauge catch correction method addressing solid and liquid precipitation is applied, both as national mean monthly correction factors based on a historic 30 yr record and as gridded daily correction factors based on local daily observations of wind speed and temperature. The two methods, named the historic mean monthly (HMM) and the time-space variable (TSV) correction, resulted in different winter precipitation rates for the period 1990-2010. The resulting precipitation datasets were evaluated through the comprehensive Danish National Water Resources model (DK-Model), revealing major differences in both model performance and optimised model parameter sets. Simulated stream discharge is improved significantly when introducing the TSV correction, whereas the simulated hydraulic heads and multi-annual water balances performed similarly due to recalibration adjusting model parameters to compensate for input biases. The resulting optimised model parameters are much more physically plausible for the model based on the TSV correction of precipitation. A proxy-basin test where calibrated DK-Model parameters were transferred to another region without site specific calibration showed better performance for parameter values based on the TSV correction. Similarly, the performances of the TSV correction method were superior when considering two single years with a much dryer and a much wetter winter, respectively, as compared to the winters in the calibration period (differential split-sample tests). We conclude that TSV precipitation correction should be carried out for studies requiring a sound dynamic description of hydrological processes, and it is of particular importance when using hydrological models to make predictions for future climates when the snow/rain composition will differ from the past climate. This conclusion is expected to be applicable for mid to high latitudes, especially in coastal climates where winter precipitation types (solid/liquid) fluctuate significantly, causing climatological mean correction factors to be inadequate.
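A schematic sketch of the time-space variable (TSV) idea, using an invented correction function; the functional form and coefficients below are placeholders, not the operational Danish catch-correction model.

```python
# Daily catch correction factor computed from co-located wind speed and air
# temperature, larger for solid than for liquid precipitation.
import numpy as np

def catch_correction_factor(wind_speed, air_temp):
    """Return a multiplicative correction for gauge undercatch (illustrative)."""
    solid = air_temp < 0.0                         # crude rain/snow split
    k_liquid = 1.02 + 0.01 * wind_speed            # modest wind-induced loss for rain
    k_solid = 1.10 + 0.10 * wind_speed             # much larger loss for snow
    return np.where(solid, k_solid, k_liquid)

# Daily gauge totals corrected with local meteorology (placeholder values).
precip_obs = np.array([2.0, 0.0, 5.0, 1.0])        # mm/day
wind = np.array([3.0, 6.0, 2.0, 8.0])              # m/s
temp = np.array([4.0, -2.0, -5.0, 1.0])            # deg C
print(precip_obs * catch_correction_factor(wind, temp))
```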
Eocene Paleoclimate: Incredible or Uncredible? Model data syntheses raise questions.
NASA Astrophysics Data System (ADS)
Huber, M.
2012-04-01
Reconstructions of Eocene paleoclimate have pushed on the boundaries of climate dynamics theory for generations. While significant improvements in theory and models have brought them closer to the proxy data, the data themselves have shifted considerably. Tropical temperatures and greenhouse gas concentrations are now reconstructed to be higher than once thought--in agreement with models--but many polar temperature reconstructions are even warmer than the eye-popping numbers from only a decade ago. These interpretations of subtropical-to-tropical polar conditions once again challenge models and theory. But the devil is, as always, in the details, and it is worthwhile to consider the range of potential uncertainties and biases in the paleoclimate record interpretations to evaluate the proposition that models and data may not materially disagree. It is necessary to ask whether current Eocene paleoclimate reconstructions are accurate enough to compellingly argue for a complete failure of climate models and theory. Careful consideration of Eocene model output and proxy data reveals that over most of the Earth the model agrees with the upper range of plausible tropical proxy data and the lower range of plausible high-latitude proxy reconstructions. Implications for the sensitivity of global climate to greenhouse gas forcing are drawn for a range of potential Eocene climate scenarios, ranging from a literal interpretation of one particular model to a literal interpretation of proxy data. Hope for a middle ground is found.
Bayesian learning and the psychology of rule induction
Endress, Ansgar D.
2014-01-01
In recent years, Bayesian learning models have been applied to an increasing variety of domains. While such models have been criticized on theoretical grounds, the underlying assumptions and predictions are rarely made concrete and tested experimentally. Here, I use Frank and Tenenbaum's (2011) Bayesian model of rule-learning as a case study to spell out the underlying assumptions, and to confront them with the empirical results Frank and Tenenbaum (2011) propose to simulate, as well as with novel experiments. While rule-learning is arguably well suited to rational Bayesian approaches, I show that their models are neither psychologically plausible nor ideal observer models. Further, I show that their central assumption is unfounded: humans do not always preferentially learn more specific rules, but, at least in some situations, those rules that happen to be more salient. Even when granting the unsupported assumptions, I show that all of the experiments modeled by Frank and Tenenbaum (2011) either contradict their models, or have a large number of more plausible interpretations. I provide an alternative account of the experimental data based on simple psychological mechanisms, and show that this account both describes the data better, and is easier to falsify. I conclude that, despite the recent surge in Bayesian models of cognitive phenomena, psychological phenomena are best understood by developing and testing psychological theories rather than models that can be fit to virtually any data. PMID:23454791
A Stochastic Model of Plausibility in Live Virtual Constructive Environments
2017-09-14
objective in virtual environment research and design is the maintenance of adequate consistency levels in the face of limited system resources such as...provides some commentary with regard to system design considerations and future research directions. II. SYSTEM MODEL DVEs are often designed as a...exceed the system’s requirements. Research into predictive models of virtual environment consistency is needed to provide designers the tools to
A One-System Theory Which is Not Propositional.
Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R
2009-04-01
We argue that the propositional and link-based approaches to human contingency learning represent different levels of analysis because propositional reasoning requires a basis, which is plausibly provided by a link-based architecture. Moreover, in their attempt to compare two general classes of models (link-based and propositional), Mitchell et al. have referred to only two generic models and ignore the large variety of different models within each class.
Cure models for the analysis of time-to-event data in cancer studies.
Jia, Xiaoyu; Sima, Camelia S; Brennan, Murray F; Panageas, Katherine S
2013-11-01
In settings when it is biologically plausible that some patients are cured after definitive treatment, cure models present an alternative to conventional survival analysis. Cure models can inform on the group of patients cured, by estimating the probability of cure, and identifying factors that influence it; while simultaneously focusing on time to recurrence and associated factors for the remaining patients. © 2013 Wiley Periodicals, Inc.
Entrainment to the CIECAM02 and CIELAB colour appearance models in the human cortex.
Thwaites, Andrew; Wingfield, Cai; Wieser, Eric; Soltan, Andrew; Marslen-Wilson, William D; Nimmo-Smith, Ian
2018-04-01
In human visual processing, information from the visual field passes through numerous transformations before perceptual attributes such as colour are derived. The sequence of transforms involved in constructing perceptions of colour can be approximated by colour appearance models such as the CIE (2002) colour appearance model, abbreviated as CIECAM02. In this study, we test the plausibility of CIECAM02 as a model of colour processing by looking for evidence of its cortical entrainment. The CIECAM02 model predicts that colour is split into two opposing chromatic components, red-green and cyan-yellow (termed CIECAM02-a and CIECAM02-b respectively), and an achromatic component (termed CIECAM02-A). Entrainment of cortical activity to the outputs of these components was estimated using measurements of electro- and magnetoencephalographic (EMEG) activity, recorded while healthy subjects watched videos of dots changing colour. We find entrainment to chromatic component CIECAM02-a at approximately 35 ms latency bilaterally in occipital lobe regions, and entrainment to achromatic component CIECAM02-A at approximately 75 ms latency, also bilaterally in occipital regions. For comparison, transforms from a less physiologically plausible model (CIELAB) were also tested, with no significant entrainment found. Copyright © 2018 Elsevier Ltd. All rights reserved.
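For orientation, the simpler comparison model mentioned above (CIELAB) also yields two opponent chromatic components plus lightness, via the standard CIE transform sketched below; the D65 white point is assumed, and the CIECAM02 pipeline actually tested in the study is considerably more elaborate.

```python
# Standard CIE XYZ -> CIELAB transform: lightness L* plus two opponent
# chromatic components (a*: red-green, b*: yellow-blue).
import numpy as np

def xyz_to_lab(xyz, white=(95.047, 100.0, 108.883)):   # D65 reference white
    x, y, z = (c / w for c, w in zip(xyz, white))
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x), f(y), f(z)
    L = 116 * fy - 16
    a = 500 * (fx - fy)       # red-green opponent component
    b = 200 * (fy - fz)       # yellow-blue opponent component
    return L, a, b

print(xyz_to_lab((41.24, 21.26, 1.93)))   # approximately sRGB red
```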
Shivkumar, Sabyasachi; Muralidharan, Vignesh; Chakravarthy, V Srinivasa
2017-01-01
The basal ganglia circuit is an important subcortical system of the brain thought to be responsible for reward-based learning. The striatum, the largest nucleus of the basal ganglia, serves as an input port that maps cortical information. Microanatomical studies show that the striatum is a mosaic of specialized input-output structures called striosomes and regions of the surrounding matrix called the matrisomes. We have developed a computational model of the striatum using layered self-organizing maps to capture the center-surround structure seen experimentally and explain its functional significance. We believe that these structural components could build representations of state and action spaces in different environments. The striatum model is then integrated with other components of the basal ganglia, making it capable of solving reinforcement learning tasks. We have proposed a biologically plausible mechanism of action-based learning where the striosome biases the matrisome activity toward a preferred action. Several studies indicate that the striatum is critical in solving context-dependent problems. We build on this hypothesis, and the proposed model exploits the modularity of the striatum to efficiently solve such tasks.
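A generic self-organizing map training step is sketched below to make the "layered self-organizing maps" ingredient concrete; the striosome/matrisome layering and the reinforcement-learning integration described above go well beyond this sketch.

```python
# Basic SOM update: move the best-matching unit and its grid neighbours
# toward each input sample.
import numpy as np

rng = np.random.default_rng(0)
grid = 10                                   # 10 x 10 map
weights = rng.random((grid, grid, 2))       # 2-D inputs for simplicity
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij"), axis=-1)

def som_step(x, weights, lr=0.1, sigma=1.5):
    """One online SOM update for input vector x."""
    dist = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(dist.argmin(), dist.shape)        # best-matching unit
    grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
    h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))         # neighbourhood function
    return weights + lr * h[..., None] * (x - weights)

for _ in range(2000):
    weights = som_step(rng.random(2), weights)
```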
Computational analyses in cognitive neuroscience: in defense of biological implausibility.
Dror, I E; Gallogly, D P
1999-06-01
Because cognitive neuroscience researchers attempt to understand the human mind by bridging behavior and brain, they expect computational analyses to be biologically plausible. In this paper, biologically implausible computational analyses are shown to have critical and essential roles in the various stages and domains of cognitive neuroscience research. Specifically, biologically implausible computational analyses can contribute to (1) understanding and characterizing the problem that is being studied, (2) examining the availability of information and its representation, and (3) evaluating and understanding the neuronal solution. In the context of the distinct types of contributions made by certain computational analyses, the biological plausibility of those analyses is altogether irrelevant. These biologically implausible models are nevertheless relevant and important for biologically driven research.
Do massive compact objects without event horizon exist in infinite derivative gravity?
NASA Astrophysics Data System (ADS)
Koshelev, Alexey S.; Mazumdar, Anupam
2017-10-01
Einstein's general theory of relativity is plagued by cosmological and black-hole type singularities. Recently, it has been shown that infinite derivative, ghost free, gravity can yield nonsingular cosmological and mini-black hole solutions. In particular, the theory possesses a mass-gap determined by the scale of new physics. This paper provides a plausible argument, not a no-go theorem, based on the Area-law of gravitational entropy that within infinite derivative, ghost free, gravity nonsingular compact objects in the static limit need not have horizons.
Functionality limit of classical simulated annealing
NASA Astrophysics Data System (ADS)
Hasegawa, M.
2015-09-01
By analyzing the system dynamics in the landscape paradigm, the optimization function of classical simulated annealing is reviewed on random traveling salesman problems. The properly functioning region of the algorithm is experimentally determined in the size-time plane, and the influence of its boundary on the scalability test is examined in the standard framework of this method. From both results, an empirical choice of temperature length is plausibly explained as a minimum requirement for the algorithm to maintain its scalability within its functionality limit. The study exemplifies the applicability of computational physics analysis to optimization algorithm research.
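A bare-bones simulated annealing loop for a random travelling salesman instance, showing the two knobs at issue, the cooling schedule and the temperature length (moves attempted per temperature); parameter values are arbitrary and not those used in the study.

```python
# Simulated annealing for a random TSP instance with geometric cooling.
import numpy as np

rng = np.random.default_rng(0)
cities = rng.random((50, 2))

def tour_length(order):
    pts = cities[order]
    return np.linalg.norm(pts - np.roll(pts, -1, axis=0), axis=1).sum()

order = rng.permutation(len(cities))
best, best_len = order.copy(), tour_length(order)
T, cooling, temp_length = 1.0, 0.95, 200

while T > 1e-3:
    for _ in range(temp_length):                 # moves per temperature ("temperature length")
        i, j = sorted(rng.integers(0, len(cities), size=2))
        cand = order.copy()
        cand[i:j + 1] = cand[i:j + 1][::-1]      # 2-opt style segment reversal
        delta = tour_length(cand) - tour_length(order)
        if delta < 0 or rng.random() < np.exp(-delta / T):  # Metropolis acceptance
            order = cand
            if tour_length(order) < best_len:
                best, best_len = order.copy(), tour_length(order)
    T *= cooling                                 # geometric cooling schedule
print(best_len)
```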
Plausibility Judgments in Conceptual Change and Epistemic Cognition
ERIC Educational Resources Information Center
Lombardi, Doug; Nussbaum, E. Michael; Sinatra, Gale M.
2016-01-01
Plausibility judgments rarely have been addressed empirically in conceptual change research. Recent research, however, suggests that these judgments may be pivotal to conceptual change about certain topics where a gap exists between what scientists and laypersons find plausible. Based on a philosophical and empirical foundation, this article…
Quantum computation with indefinite causal structures
NASA Astrophysics Data System (ADS)
Araújo, Mateus; Guérin, Philippe Allard; Baumeler, Ämin
2017-11-01
One way to study the physical plausibility of closed timelike curves (CTCs) is to examine their computational power. This has been done for Deutschian CTCs (D-CTCs) and postselection CTCs (P-CTCs), with the result that they allow for the efficient solution of problems in PSPACE and PP, respectively. Since these are extremely powerful complexity classes, which are not expected to be solvable in reality, this can be taken as evidence that these models for CTCs are pathological. This problem is closely related to the nonlinearity of these models, which also allows, for example, cloning quantum states, in the case of D-CTCs, or distinguishing nonorthogonal quantum states, in the case of P-CTCs. In contrast, the process matrix formalism allows one to model indefinite causal structures in a linear way, getting rid of these effects and raising the possibility that its computational power is rather tame. In this paper, we show that process matrices correspond to a linear particular case of P-CTCs, and therefore that their computational power is upper bounded by that of PP. We show, furthermore, a family of processes that can violate causal inequalities but nevertheless can be simulated by a causally ordered quantum circuit with only a constant overhead, showing that indefinite causality is not necessarily hard to simulate.
Profiling outcomes of ambulatory care: casemix affects perceived performance.
Berlowitz, D R; Ash, A S; Hickey, E C; Kader, B; Friedman, R; Moskowitz, M A
1998-06-01
The authors explored the role of casemix adjustment when profiling outcomes of ambulatory care. The authors reviewed the medical records of 656 patients with hypertension, diabetes, or chronic obstructive pulmonary disease (COPD) receiving care at one of three Department of Veterans Affairs medical centers. Outcomes included measures of physiological control for hypertension and diabetes, and of exacerbations for COPD. Predictors of poor outcomes, including physical examination findings, symptoms, and comorbidities, were identified and entered into regression models. Observed minus expected performance was described for each site, both before and after casemix adjustment. Risk-adjustment models were developed that were clinically plausible and had good performance properties. Differences existed among the three sites in the severity of the patients being cared for. For example, the percentage of patients expected to have poor blood pressure control were 35% at site 1, 37% at site 2, and 44% at site 3 (P < 0.01). Casemix-adjusted measures of performance were different from unadjusted measures. Sites that were outliers (P < 0.05) with one approach had observed performance no different from expected with another approach. Casemix adjustment models can be developed for outpatient medical conditions. Sites differ in the severity of patients they treat, and adjusting for these differences can alter judgments of site performance. Casemix adjustment is necessary when profiling outpatient medical conditions.
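A minimal sketch of the profiling logic described above, assuming a logistic risk-adjustment model and synthetic data; predictors, site labels, and outcome rates are placeholders rather than the study's chart-review variables.

```python
# Casemix-adjusted profiling: fit a risk model, then report each site's
# observed minus expected count of poor outcomes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 600
X = rng.normal(size=(n, 3))                      # severity predictors (placeholder)
site = rng.integers(0, 3, size=n)                # three medical centres
y = rng.binomial(1, 1 / (1 + np.exp(-(0.4 * X[:, 0] - 0.3 * X[:, 1]))))

model = LogisticRegression().fit(X, y)           # casemix model: outcome ~ severity
expected = model.predict_proba(X)[:, 1]

for s in range(3):
    mask = site == s
    print(f"site {s}: observed - expected = {y[mask].sum() - expected[mask].sum():+.1f}")
```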
Psikuta, Agnes; Koelblen, Barbara; Mert, Emel; Fontana, Piero; Annaheim, Simon
2017-12-07
Following the growing interest in the further development of manikins to simulate human thermal behaviour more adequately, thermo-physiological human simulators have been developed by coupling a thermal sweating manikin with a thermo-physiology model. Despite their availability and obvious advantages, the number of studies involving these devices is only marginal, which plausibly results from the high complexity of the development and evaluation process and the need for multi-disciplinary expertise. The aim of this paper is to present an integrated approach to develop, validate and operate such devices, including the technical challenges and limitations of thermo-physiological human simulators, their application and measurement protocol, the strategy for setting test scenarios, and the comparison to standard methods and human studies, including details which have not been published so far. A physical manikin controlled by a human thermoregulation model overcame the limitations of mathematical clothing models and provided a complementary method to investigate thermal interactions between the human body, protective clothing, and its environment. The opportunities of these devices include not only realistic assessment of protective clothing assemblies and equipment but also potential application in many research fields ranging from biometeorology, automotive industry, environmental engineering, and urban climate to clinical and safety applications.
Angular momentum transfer in primordial discs and the rotation of the first stars
NASA Astrophysics Data System (ADS)
Hirano, Shingo; Bromm, Volker
2018-05-01
We investigate the rotation velocity of the first stars by modelling the angular momentum transfer in the primordial accretion disc. Assessing the impact of magnetic braking, we consider the transition in angular momentum transport mode at the Alfvén radius, from the dynamically dominated free-fall accretion to the magnetically dominated solid-body one. The accreting protostar at the centre of the primordial star-forming cloud rotates with close to breakup speed in the case without magnetic fields. Considering a physically motivated model for small-scale turbulent dynamo amplification, we find that stellar rotation speed quickly declines if a large fraction of the initial turbulent energy is converted to magnetic energy (≳ 0.14). Alternatively, if the dynamo process were inefficient, for amplification due to flux freezing, stars would become slow rotators if the pre-galactic magnetic field strength is above a critical value, ≃10^-8.2 G, evaluated at a scale of n_H = 1 cm^-3, which is significantly higher than plausible cosmological seed values (~10^-15 G). Because of the rapid decline of the stellar rotational speed over a narrow range in model parameters, the first stars encounter a bimodal fate: rapid rotation at almost the breakup level, or the near absence of any rotation.
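For orientation, a hedged statement of the standard Alfvén-radius criterion, which is presumably the kind of condition the transition above refers to; the abstract does not give the authors' exact definition.

```latex
% Assumed standard criterion: the Alfvén radius r_A is where the magnetic energy
% density matches the kinetic energy density of the infalling gas; inside r_A the
% field can enforce near solid-body rotation.
\frac{B(r_{\mathrm{A}})^{2}}{8\pi} \;=\; \frac{1}{2}\,\rho(r_{\mathrm{A}})\,v_{\mathrm{ff}}(r_{\mathrm{A}})^{2},
\qquad
v_{\mathrm{ff}}(r) = \sqrt{\frac{2\,G\,M(r)}{r}}
```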
A model for intergalactic filaments and galaxy formation during the first gigayear
NASA Astrophysics Data System (ADS)
Harford, A. Gayler; Hamilton, Andrew J. S.
2017-11-01
We propose a physically based, analytic model for intergalactic filaments during the first gigayear of the universe. The structure of a filament is based upon a gravitationally bound, isothermal cylinder of gas. The model successfully predicts for a cosmological simulation the total mass per unit length of a filament (dark matter plus gas) based solely upon the sound speed of the gas component, contrary to the expectation for collisionless dark matter aggregation. In the model, the gas, through its hydrodynamic properties, plays a key role in filament structure rather than being a passive passenger in a preformed dark matter potential. The dark matter of a galaxy follows the classic equation of collapse of a spherically symmetric overdensity in an expanding universe. In contrast, the gas usually collapses more slowly. The relative rates of collapse of these two components for individual galaxies can explain the varying baryon deficits of the galaxies under the assumption that matter moves along a single filament passing through the galaxy centre, rather than by spherical accretion. The difference in behaviour of the dark matter and gas can be simply and plausibly related to the model. The range of galaxies studied includes that of the so-called too big to fail galaxies, which are thought to be problematic for the standard Λ cold dark matter model of the universe. The isothermal-cylinder model suggests a simple explanation for why these galaxies are, unaccountably, missing from the night sky.
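A hedged worked relation behind the claim that the filament line mass follows from the gas sound speed alone: for a self-gravitating isothermal cylinder, the classic Ostriker (1964) result gives a critical mass per unit length that depends only on the sound speed. The precise normalization adopted in the paper is not stated in the abstract.

```latex
% Critical line mass of an isothermal, self-gravitating cylinder (standard result)
\mu_{\mathrm{crit}} \;=\; \frac{2\,c_{s}^{2}}{G}
```

On this reading, a measurement of c_s alone fixes the total (gas plus dark matter) line mass the model predicts, which is the comparison the abstract reports against the simulation.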
Elastic Wave Imaging of in-Situ Bio-Alterations in a Contaminated Aquifer
NASA Astrophysics Data System (ADS)
Jaiswal, P.; Raj, R.; Atekwana, E. A.; Briand, B.; Alam, I.
2014-12-01
We present a pioneering report on the utility of seismic methods in imaging bio-induced elastic property changes within a contaminated aquifer. To understand the physical properties of contaminated soil, we acquired a 48-meter-long multichannel seismic profile over the Norman landfill leachate plume in Norman, Oklahoma, USA. We estimated the P- and S-wave velocities using full-waveform inversion of the transmission arrivals and the ground-roll coda, respectively. The resulting S-wave model showed a distinct velocity anomaly (~10% over background) within the water table fluctuation zone bounded by the historical minimum and maximum groundwater table. In comparison, the P-wave velocity anomaly within the same zone was negligible. Environmental Scanning Electron Microscope (ESEM) images of samples from a core located along the seismic profile clearly show the presence of biofilms in the water table fluctuation zone and their absence both above and below the fluctuation zone. Elemental chemistry further indicates that the sediment composition throughout the core is fairly constant. We conclude that the S-wave velocity anomaly is due to biofilms. As a next step, we develop mechanistic modeling to gain insights into the petro-physical behavior of biofilm-bearing sediments. Preliminary results suggest that a plausible model could be biofilms acting as contact cement between sediment grains. The biofilm cement can be placed in two ways: (i) superficial non-contact deposition on sediment grains, and (ii) deposition at grain contacts. Both models explain the P- and S-wave velocity structure at reasonable (~5-10%) biofilm saturation and are equivocally supported by the ESEM images. Ongoing attenuation modeling from full-waveform inversion and its mechanistic realization may be able to further discriminate between the two cement models. Our study strongly suggests that, as opposed to traditional P-wave seismic, S-wave acquisition and imaging can be a more powerful tool for in-situ imaging of biofilm formation in field settings, with significant implications for bioremediation and microbial enhanced oil recovery monitoring.
Identifying Asteroidal Parent Bodies of the Meteorites: The Last Lap
NASA Technical Reports Server (NTRS)
Gaffey, M. J.
2000-01-01
Spectral studies of asteroids and dynamical models have converged to yield, at last, a clear view of asteroid-meteorite linkages. Plausible parent bodies for most meteorite types have either been identified or it has become evident where to search for them.
Simulating direct shear tests with the Bullet physics library: A validation study.
Izadi, Ehsan; Bezuijen, Adam
2018-01-01
This study focuses on the possible uses of physics engines, and more specifically the Bullet physics library, to simulate granular systems. Physics engines are employed extensively in the video gaming, animation and movie industries to create physically plausible scenes. They are designed to deliver fast, stable and optimal simulation of systems such as rigid bodies, soft bodies and fluids. This study focuses exclusively on simulating granular media in the context of rigid-body dynamics with the Bullet physics library. The first step was to validate the results of simulations of direct shear tests on uniform-sized metal beads against laboratory experiments. The difference in the average angle of mobilized friction was found to be only 1.0°. In addition, a very close match was found between dilatancy in the laboratory samples and in the simulations. A comprehensive study was then conducted to determine the failure and post-failure mechanism. We conclude with the presentation of a simulation of a direct shear test on real soil, which demonstrated that Bullet has all the capabilities needed to be used as software for simulating granular systems.
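A minimal pybullet sketch of the rigid-body building block behind the kind of granular simulation described above: uniform spheres settling under gravity in the Bullet engine. The geometry, bead size, friction, and time stepping are illustrative assumptions, not the direct-shear setup used in the study.

```python
# Minimal pybullet sketch: settling a packing of uniform spheres under gravity.
# Geometry, friction values and time step are illustrative assumptions.
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)                      # headless physics server
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)
p.loadURDF("plane.urdf")                 # container floor

radius = 0.005                           # 5 mm beads (assumed)
sphere = p.createCollisionShape(p.GEOM_SPHERE, radius=radius)
for i in range(200):                     # small demonstration packing
    x, y = (i % 10) * 2.2 * radius, ((i // 10) % 10) * 2.2 * radius
    z = 0.05 + (i // 100) * 2.2 * radius
    body = p.createMultiBody(baseMass=0.001,
                             baseCollisionShapeIndex=sphere,
                             basePosition=[x, y, z])
    p.changeDynamics(body, -1, lateralFriction=0.5)   # assumed bead friction

for _ in range(2000):                    # let the packing settle
    p.stepSimulation()

p.disconnect()
```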
Pasanen, Tytti P; Tyrväinen, Liisa; Korpela, Kalevi M
2014-01-01
Background: A body of evidence shows that both physical activity and exposure to nature are connected to improved general and mental health. Experimental studies have consistently found short term positive effects of physical activity in nature compared with built environments. This study explores whether these benefits are also evident in everyday life, perceived over repeated contact with nature. The topic is important from the perspectives of city planning, individual well-being, and public health. Methods: National survey data (n = 2,070) from Finland was analysed using structural regression analyses. Perceived general health, emotional well-being, and sleep quality were regressed on the weekly frequency of physical activity indoors, outdoors in built environments, and in nature. Socioeconomic factors and other plausible confounders were controlled for. Results: Emotional well-being showed the most consistent positive connection to physical activity in nature, whereas general health was positively associated with physical activity in both built and natural outdoor settings. Better sleep quality was weakly connected to frequent physical activity in nature, but the connection was outweighed by other factors. Conclusion: The results indicate that nature provides an added value to the known benefits of physical activity. Repeated exercise in nature is, in particular, connected to better emotional well-being. PMID:25044598
Estimates of live-tree carbon stores in the Pacific Northwest are sensitive to model selection
Susanna L. Melson; Mark E. Harmon; Jeremy S. Fried; James B. Domingo
2011-01-01
Estimates of live-tree carbon stores are influenced by numerous uncertainties. One of them is model-selection uncertainty: one has to choose among multiple empirical equations and conversion factors that can be plausibly justified as locally applicable to calculate the carbon store from inventory measurements such as tree height and diameter at breast height (DBH)....
Alternative supply specifications and estimates of regional supply and demand for stumpage.
Kent P. Connaughton; David H. Jackson; Gerard A. Majerus
1988-01-01
Four plausible sets of stumpage supply and demand equations were developed and estimated; the demand equation was the same for each set, although the supply equation differed. The supply specifications varied from the model of regional excess demand in which National Forest harvest levels were assumed fixed to a more realistic model in which the harvest on the National...
Bowers, Jeffrey S
2009-01-01
A fundamental claim associated with parallel distributed processing (PDP) theories of cognition is that knowledge is coded in a distributed manner in mind and brain. This approach rejects the claim that knowledge is coded in a localist fashion, with words, objects, and simple concepts (e.g., "dog") coded with their own dedicated representations. One of the putative advantages of this approach is that the theories are biologically plausible. Indeed, advocates of the PDP approach often highlight the close parallels between distributed representations learned in connectionist models and neural coding in the brain, and often dismiss localist (grandmother cell) theories as biologically implausible. The author reviews a range of data that strongly challenge this claim and shows that localist models provide a better account of single-cell recording studies. The author also contrasts localist and alternative distributed coding schemes (sparse and coarse coding) and argues that the common rejection of grandmother cell theories in neuroscience is due to a misunderstanding about how localist models behave. The author concludes that the localist representations embedded in theories of perception and cognition are consistent with neuroscience; biology only calls into question the distributed representations often learned in PDP models.
Radiative heating of interstellar grains falling toward the solar nebula: 1-D diffusion calculations
NASA Technical Reports Server (NTRS)
Simonelli, D. P.; Pollack, J. B.; McKay, C. P.
1997-01-01
As the dense molecular cloud that was the precursor of our Solar System was collapsing to form a protosun and the surrounding solar-nebula accretion disk, infalling interstellar grains were heated much more effectively by radiation from the forming protosun than by radiation from the disk's accretion shock. Accordingly, we have estimated the temperatures experienced by these infalling grains using radiative diffusion calculations whose sole energy source is radiation from the protosun. Although the calculations are 1-dimensional, they make use of 2-D, cylindrically symmetric models of the density structure of a collapsing, rotating cloud. The temperature calculations also utilize recent models for the composition and radiative properties of interstellar grains (Pollack et al. 1994. Astrophys. J. 421, 615-639), thereby allowing us to estimate which grain species might have survived, intact, to the disk accretion shock and what accretion rates and molecular-cloud rotation rates aid that survival. Not surprisingly, we find that the large uncertainties in the free parameter values allow a wide range of grain-survival results: (1) For physically plausible high accretion rates or low rotation rates (which produce small accretion disks), all of the infalling grain species, even the refractory silicates and iron, will vaporize in the protosun's radiation field before reaching the disk accretion shock. (2) For equally plausible low accretion rates or high rotation rates (which produce large accretion disks), all non-ice species, even volatile organics, will survive intact to the disk accretion shock. These grain-survival conclusions are subject to several limitations which need to be addressed by future, more sophisticated radiative-transfer models. Nevertheless, our results can serve as useful inputs to models of the processing that interstellar grains undergo at the solar nebula's accretion shock, and thus help address the broader question of interstellar inheritance in the solar nebula and present Solar System. These results may also help constrain the size of the accretion disk: for example, if we require that the calculations produce partial survival of organic grains into the solar nebula, we infer that some material entered the disk intact at distances comparable to or greater than a few AU. Intriguingly, this is comparable to the heliocentric distance that separates the C-rich outer parts of the current Solar System from the C-poor inner regions.
Body shape helps legged robots climb and turn in complex 3-D terrains
NASA Astrophysics Data System (ADS)
Han, Yuanfeng; Wang, Zheliang; Li, Chen
Analogous to streamlined shapes that reduce drag in fluids, insects' ellipsoid-like rounded body shapes were recently discovered to be "terradynamically streamlined" and enhance locomotion in cluttered terrain by facilitating body rolling. Here, we hypothesize that there exist more terradynamic shapes that facilitate other modes of locomotion like climbing and turning in complex 3-D terrains by facilitating body pitching and yawing. To test our hypothesis, we modified the body shape of a legged robot by adding an elliptical and a rectangular shell and tested how it negotiated circular and square vertical pillars. With a rectangular shell the robot always pitched against square pillars in an attempt to climb, whereas with an elliptical shell it always yawed and turned away from circular pillars given a small initial lateral displacement. Square / circular pillars facilitated pitching / yawing, respectively. To begin to reveal the contact physics, we developed a locomotion energy landscape model. Our model revealed that potential energy barriers to transition from pitching to yawing are high for angular locomotor and obstacle shapes (rectangular / square) but vanish for rounded shapes (elliptical / circular). Our study supports the plausibility of locomotion energy landscapes for understanding the rich locomotor transitions in complex 3-D terrains.
Gamma-ray Burst Prompt Correlations: Selection and Instrumental Effects
NASA Astrophysics Data System (ADS)
Dainotti, M. G.; Amati, L.
2018-05-01
The prompt emission mechanism of gamma-ray bursts (GRB) even after several decades remains a mystery. However, it is believed that correlations between observable GRB properties, given their huge luminosity/radiated energy and redshift distribution extending up to at least z ≈ 9, are promising possible cosmological tools. They may also help to discriminate among the most plausible theoretical models. Nowadays, the objective is to make GRBs standard candles, similar to supernovae (SNe) Ia, through well-established and robust correlations. However, differently from SNe Ia, GRBs span several orders of magnitude in their energetics, hence they cannot yet be considered standard candles. Additionally, being observed at very large distances, their physical properties are affected by selection biases, the so-called Malmquist bias or Eddington effect. We describe the state of the art on how GRB prompt correlations are corrected for these selection biases to employ them as redshift estimators and cosmological tools. We stress that only after an appropriate evaluation and correction for these effects can GRB correlations be used to discriminate among the theoretical models of prompt emission, to estimate the cosmological parameters and to serve as distance indicators via redshift estimation.
Kentzoglanakis, Kyriakos; Poole, Matthew
2012-01-01
In this paper, we investigate the problem of reverse engineering the topology of gene regulatory networks from temporal gene expression data. We adopt a computational intelligence approach comprising swarm intelligence techniques, namely particle swarm optimization (PSO) and ant colony optimization (ACO). In addition, the recurrent neural network (RNN) formalism is employed for modeling the dynamical behavior of gene regulatory systems. More specifically, ACO is used for searching the discrete space of network architectures and PSO for searching the corresponding continuous space of RNN model parameters. We propose a novel solution construction process in the context of ACO for generating biologically plausible candidate architectures. The objective is to concentrate the search effort into areas of the structure space that contain architectures which are feasible in terms of their topological resemblance to real-world networks. The proposed framework is initially applied to the reconstruction of a small artificial network that has previously been studied in the context of gene network reverse engineering. Subsequently, we consider an artificial data set with added noise for reconstructing a subnetwork of the genetic interaction network of S. cerevisiae (yeast). Finally, the framework is applied to a real-world data set for reverse engineering the SOS response system of the bacterium Escherichia coli. Results demonstrate the relative advantage of utilizing problem-specific knowledge regarding biologically plausible structural properties of gene networks over conducting a problem-agnostic search in the vast space of network architectures.
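A hedged sketch of the RNN formalism commonly used for gene regulatory dynamics, of the kind the paper couples to swarm intelligence: each gene's expression evolves under a saturating activation of weighted inputs minus first-order decay, and a candidate parameter set is scored by its fit to the expression time series. The wiring, parameters, and data below are invented for illustration; in the paper, ACO proposes discrete architectures and PSO tunes the continuous RNN parameters.

```python
# Hedged sketch of an RNN model of gene regulatory dynamics and the fitness a
# PSO particle would be scored on. All numbers below are toy assumptions.
import numpy as np

def simulate_rnn(w, b, decay, x0, dt=0.1, steps=100):
    """Euler-integrate dx_i/dt = sigmoid(sum_j w_ij x_j + b_i) - decay_i * x_i."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        activation = 1.0 / (1.0 + np.exp(-(w @ x + b)))
        x = x + dt * (activation - decay * x)
        traj.append(x.copy())
    return np.array(traj)

def fitness(params, data, dt=0.1):
    """Mean squared error between simulated and observed trajectories."""
    n = data.shape[1]
    w = params[:n * n].reshape(n, n)
    b = params[n * n:n * n + n]
    decay = np.abs(params[n * n + n:])
    sim = simulate_rnn(w, b, decay, data[0], dt=dt, steps=data.shape[0] - 1)
    return np.mean((sim - data) ** 2)

# Toy three-gene example with an assumed wiring (not a real network)
w_true = np.array([[0.0, 2.0, 0.0], [-1.5, 0.0, 1.0], [0.0, -2.0, 0.0]])
b_true = np.array([0.1, -0.2, 0.3])
decay_true = np.array([0.5, 0.4, 0.6])
data = simulate_rnn(w_true, b_true, decay_true, x0=[0.1, 0.2, 0.3])

params_true = np.concatenate([w_true.ravel(), b_true, decay_true])
print("fitness at true parameters:", fitness(params_true, data))  # ~0 by construction
```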
Plausible carrier transport model in organic-inorganic hybrid perovskite resistive memory devices
NASA Astrophysics Data System (ADS)
Park, Nayoung; Kwon, Yongwoo; Choi, Jaeho; Jang, Ho Won; Cha, Pil-Ryung
2018-04-01
We demonstrate thermally assisted hopping (TAH) as an appropriate carrier transport model for CH3NH3PbI3 resistive memories. Organic semiconductors, including organic-inorganic hybrid perovskites, have previously been speculated to follow the space-charge-limited conduction (SCLC) model. However, the SCLC model cannot reproduce the temperature dependence of experimental current-voltage curves. Instead, the TAH model with temperature-dependent trap densities and a constant trap level is shown to reproduce the experimental results well.
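For orientation, a hedged contrast of the two functional forms at issue: the trap-free space-charge-limited current follows the Mott-Gurney law, whereas thermally assisted hopping gives an Arrhenius-like activation of the conductivity. The specific temperature-dependent trap-density parameterization used by the authors is not reproduced here.

```latex
% Trap-free SCLC (Mott-Gurney law) versus thermally assisted hopping
% (assumed textbook forms; E_a is the activation energy)
J_{\mathrm{SCLC}} \;=\; \frac{9}{8}\,\varepsilon\,\varepsilon_{0}\,\mu\,\frac{V^{2}}{L^{3}},
\qquad
\sigma_{\mathrm{TAH}}(T) \;\propto\; \exp\!\left(-\frac{E_{a}}{k_{B}\,T}\right)
```

The qualitative distinction is that the SCLC form carries its temperature dependence only implicitly (through the mobility), while the hopping form is explicitly activated, which is the kind of difference a temperature-dependent current-voltage series can discriminate.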
Computational modeling of peripheral pain: a commentary.
Argüello, Erick J; Silva, Ricardo J; Huerta, Mónica K; Avila, René S
2015-06-11
This commentary is intended to find possible explanations for the low impact of computational modeling on pain research. We discuss the main strategies that have been used in building computational models for the study of pain. The analysis suggests that traditional models lack biological plausibility at some levels, they do not provide clinically relevant results, and they cannot capture the stochastic character of neural dynamics. On this basis, we provide some suggestions that may be useful in building computational models of pain with a wider range of applications.
Source Effects and Plausibility Judgments When Reading about Climate Change
ERIC Educational Resources Information Center
Lombardi, Doug; Seyranian, Viviane; Sinatra, Gale M.
2014-01-01
Gaps between what scientists and laypeople find plausible may act as a barrier to learning complex and/or controversial socioscientific concepts. For example, individuals may consider scientific explanations that human activities are causing current climate change as implausible. This plausibility judgment may be due-in part-to individuals'…
Plausibility and Perspective Influence the Processing of Counterfactual Narratives
ERIC Educational Resources Information Center
Ferguson, Heather J.; Jayes, Lewis T.
2018-01-01
Previous research has established that readers' eye movements are sensitive to the difficulty with which a word is processed. One important factor that influences processing is the fit of a word within the wider context, including its plausibility. Here we explore the influence of plausibility in counterfactual language processing. Counterfactuals…
NASA Astrophysics Data System (ADS)
Kurosawa, Kosuke; Okamoto, Takaya; Genda, Hidenori
2018-02-01
Hypervelocity ejection of material by impact spallation is considered a plausible mechanism for material exchange between two planetary bodies. We have modeled the spallation process during vertical impacts over a range of impact velocities from 6 to 21 km/s using both grid- and particle-based hydrocode models. The Tillotson equations of state, which are able to treat the nonlinear dependence of density on pressure and thermal pressure in strongly shocked matter, were used to study the hydrodynamic-thermodynamic response after impacts. The effects of material strength and gravitational acceleration were not considered. A two-dimensional time-dependent pressure field within a 1.5-fold projectile radius from the impact point was investigated in cylindrical coordinates to address the generation of spalled material. A resolution test was also performed to reject ejected materials with peak pressures that were too low due to artificial viscosity. The relationship between ejection velocity v_eject and peak pressure P_peak was also derived. Our approach shows that "late-stage acceleration" in an ejecta curtain occurs due to the compressible nature of the ejecta, resulting in an ejection velocity that can be higher than the ideal maximum of the resultant particle velocity after passage of a shock wave. We also calculate the ejecta mass that can escape from a planet like Mars (i.e., v_eject > 5 km/s) and that matches the petrographic constraints from Martian meteorites, which occurs when P_peak = 30-50 GPa. Although the mass of such ejecta is limited to 0.1-1 wt% of the projectile mass in vertical impacts, this is sufficient for spallation to have been a plausible mechanism for the ejection of Martian meteorites. Finally, we propose that impact spallation is a plausible mechanism for the generation of tektites.
Empirical agreement in model validation.
Jebeile, Julie; Barberousse, Anouk
2016-04-01
Empirical agreement is often used as an important criterion when assessing the validity of scientific models. However, it is by no means a sufficient criterion as a model can be so adjusted as to fit available data even though it is based on hypotheses whose plausibility is known to be questionable. Our aim in this paper is to investigate the uses of empirical agreement within the process of model validation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Evolution of speech and evolution of language.
de Boer, Bart
2017-02-01
Speech is the physical signal used to convey spoken language. Because of its physical nature, speech is both easier to compare with other species' behaviors and easier to study in the fossil record than other aspects of language. Here I argue that convergent fossil evidence indicates adaptations for complex vocalizations at least as early as the common ancestor of Neanderthals and modern humans. Furthermore, I argue that it is unlikely that language evolved separately from speech, but rather that gesture, speech, and song coevolved to provide both a multimodal communication system and a musical system. Moreover, coevolution must also have played a role by allowing both cognitive and anatomical adaptations to language and speech to evolve in parallel. Although such a coevolutionary scenario is complex, it is entirely plausible from a biological point of view.
Emulation: A fast stochastic Bayesian method to eliminate model space
NASA Astrophysics Data System (ADS)
Roberts, Alan; Hobbs, Richard; Goldstein, Michael
2010-05-01
Joint inversion of large 3D datasets has been the goal of geophysicists ever since the datasets first started to be produced. There are two broad approaches to this kind of problem, traditional deterministic inversion schemes and more recently developed Bayesian search methods, such as MCMC (Markov Chain Monte Carlo). However, using both these kinds of schemes has proved prohibitively expensive, both in computing power and time cost, due to the normally very large model space which needs to be searched using forward model simulators which take considerable time to run. At the heart of strategies aimed at accomplishing this kind of inversion is the question of how to reliably and practicably reduce the size of the model space in which the inversion is to be carried out. Here we present a practical Bayesian method, known as emulation, which can address this issue. Emulation is a Bayesian technique used with considerable success in a number of technical fields, such as in astronomy, where the evolution of the universe has been modelled using this technique, and in the petroleum industry, where history matching of hydrocarbon reservoirs is carried out. The method of emulation involves building a fast-to-compute, uncertainty-calibrated approximation to a forward model simulator. We do this by modelling the output data from a number of forward simulator runs with a computationally cheap function, and then fitting the coefficients defining this function to the model parameters. By calibrating the error of the emulator output with respect to the full simulator output, we can use the emulator to screen out large areas of model space which contain only implausible models. For example, starting with what may be considered a geologically reasonable prior model space of 10000 models, using the emulator we can quickly show that only models which lie within 10% of that model space actually produce output data which is plausibly similar in character to an observed dataset. We can thus much more tightly constrain the input model space for a deterministic inversion or MCMC method. By using this technique jointly on several datasets (specifically seismic, gravity, and magnetotelluric (MT) data describing the same region), we can include in our modelling uncertainties in the data measurements, the relationships between the various physical parameters involved, and the model representation uncertainty, and at the same time further reduce the range of plausible models to several percent of the original model space. Being stochastic in nature, the output posterior parameter distributions also allow our understanding of, and beliefs about, a geological region to be objectively updated, with full assessment of uncertainties, and so the emulator is also an inversion-type tool in its own right, with the advantage (as with any Bayesian method) that uncertainties from all sources (both data and model) can be fully evaluated.
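A minimal sketch of emulation-based screening as described above, assuming a Gaussian-process surrogate and the conventional implausibility cutoff of 3: a cheap, uncertainty-calibrated approximation is fitted to a handful of expensive simulator runs and then used to discard regions of model space whose predictions cannot plausibly match an observation. The toy simulator, noise levels, and thresholds are assumptions of this sketch, not the authors' setup.

```python
# Hedged sketch of emulation / history matching: fit a cheap surrogate to a few
# expensive forward-simulator runs, calibrate its error, and screen model space.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def expensive_simulator(m):
    """Stand-in for a slow forward model (e.g. seismic/gravity/MT response)."""
    return np.sin(3.0 * m) + 0.5 * m

# A small design of simulator runs over the prior model space [0, 2]
design = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
runs = expensive_simulator(design).ravel()

# Cheap, uncertainty-calibrated approximation to the simulator
gp = GaussianProcessRegressor(kernel=RBF(0.5) + WhiteKernel(1e-4),
                              normalize_y=True).fit(design, runs)

# Candidate models densely covering the prior space
candidates = np.linspace(0.0, 2.0, 2001).reshape(-1, 1)
mean, std = gp.predict(candidates, return_std=True)

# Observed datum with assumed measurement error
z_obs, obs_sd = expensive_simulator(np.array([[0.7]])).item(), 0.05

# Implausibility: standardized mismatch combining emulator and data uncertainty;
# the cutoff of 3 is an assumption of this sketch
implausibility = np.abs(z_obs - mean) / np.sqrt(std**2 + obs_sd**2)
plausible = candidates[implausibility < 3.0]
print(f"{plausible.size / candidates.size:.1%} of the prior space remains plausible")
```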
Accounting for misreporting when comparing energy intake across time in Canada.
Garriguet, Didier
2018-05-16
Estimates of energy intake are lower in 2015 compared with 2004. The difference observed is too large to be explained by a change in energy requirements or physical activity at the population level. Self-reported dietary intake is subject to misreporting and may explain part of this difference. The objectives of this study are to assess how misreporting has changed from 2004 to 2015 and to demonstrate how these changes may affect the interpretation of the national intake data of Canadians. Data from the 2004 Canadian Community Health Survey - Nutrition (CCHS - Nutrition) and the 2015 CCHS - Nutrition were used to estimate energy intake and requirements for all participants aged 2 or older. The ratio of energy intake to total energy expenditure requirements (EI:TEE) was used to categorize respondents as under-reporters (EI:TEE < 0.70), over-reporters (EI:TEE > 1.42) or plausible reporters (EI:TEE = 0.70 to 1.42). Descriptive analyses by category of respondent were conducted for respondents aged 2 or older who participated in the measured height and weight component. The main caloric sources that contributed to the difference in estimated energy requirements were used to show the impact of misreporting on the analysis. The prevalence of under-reporters was 7.5% higher in 2015 compared with 2004, while the prevalence of over-reporters was 7.4% lower. There was no change in the prevalence of plausible reporters. Estimated energy intake from participants categorized as plausible reporters showed a difference of 84 kcal from 2004 to 2015, compared with a difference of 250 kcal for the entire sample. Estimated energy intake was lower in 2015 compared with 2004 across all categories of respondents for many foods, including sugar-sweetened beverages and milk, and was higher only for pastries and nuts. Misreporting changes will affect analysis and should, at a minimum, be acknowledged when comparing 2015 with 2004. Using a comparable category of plausible reporters or adjusting for reporting status are options that will allow a better comparison of these two datasets.
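A short sketch of the reporter classification described above, using the stated cutoffs (EI:TEE < 0.70 for under-reporting, > 1.42 for over-reporting): the ratio of reported energy intake to estimated total energy expenditure assigns each respondent to a category, and prevalences can then be compared across survey cycles. The respondent records below are invented for illustration.

```python
# Hedged sketch of EI:TEE reporter classification; the cutoffs come from the
# abstract, the respondent records are invented for illustration.
import pandas as pd

respondents = pd.DataFrame({
    "energy_intake_kcal": [1450, 2300, 3900, 1800, 2650],        # reported EI (assumed)
    "energy_requirement_kcal": [2500, 2400, 2450, 2700, 2600],   # estimated TEE (assumed)
})

def classify(ei, tee, low=0.70, high=1.42):
    ratio = ei / tee
    if ratio < low:
        return "under-reporter"
    if ratio > high:
        return "over-reporter"
    return "plausible reporter"

respondents["EI_TEE"] = (respondents.energy_intake_kcal /
                         respondents.energy_requirement_kcal)
respondents["category"] = [classify(ei, tee) for ei, tee in
                           zip(respondents.energy_intake_kcal,
                               respondents.energy_requirement_kcal)]
print(respondents)
print(respondents.category.value_counts(normalize=True))   # prevalence by category
```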
Maakip, Ismail; Keegel, Tessa; Oakman, Jodi
2017-04-01
Prevalence and predictors associated with musculoskeletal disorders (MSDs) vary considerably between countries. It is plausible that socio-cultural contexts may contribute to these differences. We conducted a cross-sectional survey with 1184 Malaysian and Australian office workers with the aim to examine predictors associated with MSD discomfort. The 6-month period prevalence of self-reported MSD discomfort was 92.8% for Malaysian office workers and 71.2% among Australian workers. In Malaysia, a model regressing level of musculoskeletal discomfort against possible risk factors was significant overall (F [6, 370] = 17.35; p < 0.001) and explained 22% (r = 0.46) of its variance. MSD discomfort was significantly associated with predictors that included gender (β = 14), physical (β = 0.38) and psychosocial hazards (β = -0.10), and work-life balance (β = -0.13). In Australia, the regression model was also significant (F [6, 539] = 16.47; p < 0.001), with the model explaining 15.5% (r = 0.39) of the variance in MSD discomfort. Predictors such as gender (β = 0.14), physical (β = 24) and psychosocial hazards (β = -0.17) were associated with MSD discomfort in Australian office workers. Predictors associated with MSD discomfort were similar, but their relative importance differed. Work-life balance was significantly associated with increased MSD discomfort for the Malaysian population only. Design and implementation of MSD risk management needs to take into account the work practices and culture of the target population. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Nassios, Jason; Giesecke, James A
2018-04-01
Economic consequence analysis is one of many inputs to terrorism contingency planning. Computable general equilibrium (CGE) models are being used more frequently in these analyses, in part because of their capacity to accommodate high levels of event-specific detail. In modeling the potential economic effects of a hypothetical terrorist event, two broad sets of shocks are required: (1) physical impacts on observable variables (e.g., asset damage); (2) behavioral impacts on unobservable variables (e.g., investor uncertainty). Assembling shocks describing the physical impacts of a terrorist incident is relatively straightforward, since estimates are either readily available or plausibly inferred. However, assembling shocks describing behavioral impacts is more difficult. Values for behavioral variables (e.g., required rates of return) are typically inferred or estimated by indirect means. Generally, this has been achieved via reference to extraneous literature or ex ante surveys. This article explores a new method. We elucidate the magnitude of CGE-relevant structural shifts implicit in econometric evidence on terrorist incidents, with a view to informing future ex ante event assessments. Ex post econometric studies of terrorism by Blomberg et al. yield macro econometric equations that describe the response of observable economic variables (e.g., GDP growth) to terrorist incidents. We use these equations to determine estimates for relevant (unobservable) structural and policy variables impacted by terrorist incidents, using a CGE model of the United States. This allows us to: (i) compare values for these shifts with input assumptions in earlier ex ante CGE studies; and (ii) discuss how future ex ante studies can be informed by our analysis. © 2017 Society for Risk Analysis.
Subsurface Scenarios: What are We Trying to Model?
In collaboration with the Lawrence Berkeley National Lab (George Moridis and team), and after a thorough review of the scientific literature and data and interviews with a selection of experts on the topic, a finite number of plausible scenarios were selected for more quantitative...
Embodied Design: Constructing Means for Constructing Meaning
ERIC Educational Resources Information Center
Abrahamson, Dor
2009-01-01
Design-based research studies are conducted as iterative implementation-analysis-modification cycles, in which emerging theoretical models and pedagogically plausible activities are reciprocally tuned toward each other as a means of investigating conjectures pertaining to mechanisms underlying content teaching and learning. Yet this approach, even…
NASA Astrophysics Data System (ADS)
Lee, Benjamin Seiyon; Haran, Murali; Keller, Klaus
2017-10-01
Storm surges are key drivers of coastal flooding, which generate considerable risks. Strategies to manage these risks can hinge on the ability to (i) project the return periods of extreme storm surges and (ii) detect potential changes in their statistical properties. There are several lines of evidence linking rising global average temperatures and increasingly frequent extreme storm surges. This conclusion is, however, subject to considerable structural uncertainty. This leads to two main questions: What are projections under various plausible statistical models? How long would it take to distinguish among these plausible statistical models? We address these questions by analyzing observed and simulated storm surge data. We find that (1) there is a positive correlation between global mean temperature rise and increasing frequencies of extreme storm surges; (2) there is considerable uncertainty underlying the strength of this relationship; and (3) if the frequency of storm surges is increasing, this increase can be detected within a multidecadal timescale (≈20 years from now).
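A hedged sketch of one "plausible statistical model" of the kind compared above: a Poisson regression in which the annual count of extreme storm-surge exceedances depends log-linearly on global mean temperature anomaly, contrasted with a stationary alternative. The synthetic series and the strength of the trend are assumptions for illustration, not the study's estimates.

```python
# Hedged sketch: nonstationary (temperature-dependent) vs stationary models of
# annual extreme storm-surge counts. All data and coefficients are assumed.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
years = np.arange(1950, 2021)
temp_anomaly = 0.012 * (years - 1950) + rng.normal(0, 0.08, years.size)  # °C, assumed

# Simulate exceedance counts with an assumed positive temperature dependence
true_rate = np.exp(0.2 + 0.9 * temp_anomaly)
counts = rng.poisson(true_rate)

# Fit the nonstationary rate model: log(lambda) = b0 + b1 * temperature anomaly
X = sm.add_constant(temp_anomaly)
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(fit.params)

# A stationary alternative for comparison; the deviance drop indicates how
# strongly the data prefer the temperature-dependent model
fit0 = sm.GLM(counts, np.ones((years.size, 1)), family=sm.families.Poisson()).fit()
print("Δdeviance (stationary - nonstationary):", fit0.deviance - fit.deviance)
```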
Lathrop, R H; Casale, M; Tobias, D J; Marsh, J L; Thompson, L M
1998-01-01
We describe a prototype system (Poly-X) for assisting an expert user in modeling protein repeats. Poly-X reduces the large number of degrees of freedom required to specify a protein motif in complete atomic detail. The result is a small number of parameters that are easily understood by, and under the direct control of, a domain expert. The system was applied to the polyglutamine (poly-Q) repeat in the first exon of huntingtin, the gene implicated in Huntington's disease. We present four poly-Q structural motifs: two poly-Q beta-sheet motifs (parallel and antiparallel) that constitute plausible alternatives to a similar previously published poly-Q beta-sheet motif, and two novel poly-Q helix motifs (alpha-helix and pi-helix). To our knowledge, helical forms of polyglutamine have not been proposed before. The motifs suggest that there may be several plausible aggregation structures for the intranuclear inclusion bodies which have been found in diseased neurons, and may help in the effort to understand the structural basis for Huntington's disease.
Semantic and Plausibility Preview Benefit Effects in English: Evidence from Eye Movements
Schotter, Elizabeth R.; Jia, Annie
2016-01-01
Theories of preview benefit in reading hinge on integration across saccades and the idea that preview benefit is greater the more similar the preview and target are. Schotter (2013) reported preview benefit from a synonymous preview, but it is unclear whether this effect occurs because of similarity between the preview and target (integration), or because of contextual fit of the preview—synonyms satisfy both accounts. Studies in Chinese have found evidence for preview benefit for words that are unrelated to the target, but are contextually plausible (Yang, Li, Wang, Slattery, & Rayner, 2014; Yang, Wang, Tong, & Rayner, 2012), which is incompatible with an integration account but supports a contextual fit account. Here, we used plausible and implausible unrelated previews in addition to plausible synonym, antonym, and identical previews to further investigate these accounts for readers of English. Early reading measures were shorter for all plausible preview conditions compared to the implausible preview condition. In later reading measures, a benefit for the plausible unrelated preview condition was not observed. In a second experiment, we asked questions that probed whether the reader encoded the preview or target. Readers were more likely to report the preview when they had skipped the word and not regressed to it, and when the preview was plausible. Thus, under certain circumstances, the preview word is processed to a high level of representation (i.e., semantic plausibility) regardless of its relationship to the target, but its influence on reading is relatively short-lived, being replaced by the target word, when fixated. PMID:27123754
NASA Astrophysics Data System (ADS)
Farrell, Kathryn; Oden, J. Tinsley; Faghihi, Danial
2015-08-01
A general adaptive modeling algorithm for selection and validation of coarse-grained models of atomistic systems is presented. A Bayesian framework is developed to address uncertainties in parameters, data, and model selection. Algorithms for computing output sensitivities to parameter variances, model evidence and posterior model plausibilities for given data, and for computing what are referred to as Occam Categories in reference to a rough measure of model simplicity, make up components of the overall approach. Computational results are provided for representative applications.
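A minimal sketch of posterior model plausibility for two competing models of the same data, in the spirit of the Bayesian framework described above: each model's evidence is obtained by integrating its likelihood over the parameter prior (here a simple one-dimensional quadrature), and the evidences are normalized into plausibilities. The models, priors, and data are toy assumptions, not the coarse-grained models of the paper.

```python
# Hedged sketch: model evidence by quadrature and posterior model plausibility.
# Models, priors and data are toy assumptions for illustration.
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(1.0, 0.5, size=20)          # synthetic observations
sigma = 0.5                                    # assumed known noise level

def log_likelihood(pred, data, sigma):
    return np.sum(-0.5 * ((data - pred) / sigma) ** 2
                  - 0.5 * np.log(2 * np.pi * sigma ** 2))

def log_evidence(predict, theta_grid, log_prior):
    """log p(data | model) by quadrature over a 1-D parameter grid."""
    log_joint = np.array([log_likelihood(predict(t), data, sigma) + lp
                          for t, lp in zip(theta_grid, log_prior)])
    dtheta = theta_grid[1] - theta_grid[0]
    m = log_joint.max()
    return m + np.log(np.sum(np.exp(log_joint - m)) * dtheta)

theta = np.linspace(-3, 3, 601)
flat_log_prior = np.full(theta.size, -np.log(6.0))     # uniform prior on [-3, 3]

# Model 1: data mean equals the parameter. Model 2: data mean fixed at zero
# (no free parameter, so its evidence is just its likelihood).
logZ1 = log_evidence(lambda t: t, theta, flat_log_prior)
logZ2 = log_likelihood(0.0, data, sigma)

# Posterior plausibilities with equal prior model probabilities
m = max(logZ1, logZ2)
w = np.exp(np.array([logZ1, logZ2]) - m)
print("posterior plausibilities:", w / w.sum())
```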
ERIC Educational Resources Information Center
Gauld, Colin
1998-01-01
Reports that many students do not believe Newton's law of action and reaction and suggests ways in which its plausibility might be enhanced. Reviews how this law has been made more plausible over time by Newton and those who succeeded him. Contains 25 references. (DDR)
Plausibility Reappraisals and Shifts in Middle School Students' Climate Change Conceptions
ERIC Educational Resources Information Center
Lombardi, Doug; Sinatra, Gale M.; Nussbaum, E. Michael
2013-01-01
Plausibility is a central but under-examined topic in conceptual change research. Climate change is an important socio-scientific topic; however, many view human-induced climate change as implausible. When learning about climate change, students need to make plausibility judgments but they may not be sufficiently critical or reflective. The…
Using an agent-based model to simulate children’s active travel to school
2013-01-01
Background Despite the multiple advantages of active travel to school, only a small percentage of US children and adolescents walk or bicycle to school. Intervention studies are in a relatively early stage and evidence of their effectiveness over long periods is limited. The purpose of this study was to illustrate the utility of agent-based models in exploring how various policies may influence children’s active travel to school. Methods An agent-based model was developed to simulate children’s school travel behavior within a hypothetical city. The model was used to explore the plausible implications of policies targeting two established barriers to active school travel: long distance to school and traffic safety. The percent of children who walk to school was compared for various scenarios. Results To maximize the percent of children who walk to school the school locations should be evenly distributed over space and children should be assigned to the closest school. In the case of interventions to improve traffic safety, targeting a smaller area around the school with greater intensity may be more effective than targeting a larger area with less intensity. Conclusions Despite the challenges they present, agent based models are a useful complement to other analytical strategies in studying the plausible impact of various policies on active travel to school. PMID:23705953
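A hedged sketch of the kind of agent-level rule such a model encodes: a child walks to school if the assigned school is within a walkable distance and the perceived traffic safety of the route is acceptable, so the walking share responds to school-assignment and safety policies. Grid size, thresholds, and the safety field are invented for illustration, not the published model's parameters.

```python
# Hedged sketch of an agent-based walking rule; all parameters are assumed.
import numpy as np

rng = np.random.default_rng(3)
n_children, n_schools = 2000, 4
city_km = 10.0

children = rng.uniform(0, city_km, size=(n_children, 2))
schools = rng.uniform(0, city_km, size=(n_schools, 2))

# Policy A: assign each child to the nearest school
dists = np.linalg.norm(children[:, None, :] - schools[None, :, :], axis=2)
nearest_dist = dists.min(axis=1)

# Perceived traffic safety of the route (0..1, higher is safer); a safety
# intervention could raise this near schools with varying radius and intensity
safety = rng.uniform(0.3, 1.0, size=n_children)

WALK_DISTANCE_KM = 1.5      # assumed walkable threshold
SAFETY_THRESHOLD = 0.5      # assumed minimum acceptable safety

walks = (nearest_dist <= WALK_DISTANCE_KM) & (safety >= SAFETY_THRESHOLD)
print(f"Share walking to school: {walks.mean():.1%}")
```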
Panter, Jenna; Ogilvie, David
2015-01-01
Objective Some studies have assessed the effectiveness of environmental interventions to promote physical activity, but few have examined how such interventions work. We investigated the environmental mechanisms linking an infrastructural intervention with behaviour change. Design Natural experimental study. Setting Three UK municipalities (Southampton, Cardiff and Kenilworth). Participants Adults living within 5 km of new walking and cycling infrastructure. Intervention Construction or improvement of walking and cycling routes. Exposure to the intervention was defined in terms of residential proximity. Outcome measures Questionnaires at baseline and 2-year follow-up assessed perceptions of the supportiveness of the environment, use of the new infrastructure, and walking and cycling behaviours. Analysis proceeded via factor analysis of perceptions of the physical environment (step 1) and regression analysis to identify plausible pathways involving physical and social environmental mediators and refine the intervention theory (step 2) to a final path analysis to test the model (step 3). Results Participants who lived near and used the new routes reported improvements in their perceptions of provision and safety. However, path analysis (step 3, n=967) showed that the effects of the intervention on changes in time spent walking and cycling were largely (90%) explained by a simple causal pathway involving use of the new routes, and other pathways involving changes in environmental cognitions explained only a small proportion of the effect. Conclusions Physical improvement of the environment itself was the key to the effectiveness of the intervention, and seeking to change people's perceptions may be of limited value. Studies of how interventions lead to population behaviour change should complement those concerned with estimating their effects in supporting valid causal inference. PMID:26338837
Effect of lecture instruction on student performance on qualitative questions
NASA Astrophysics Data System (ADS)
Heron, Paula R. L.
2015-06-01
The impact of lecture instruction on student conceptual understanding in physics has been the subject of research for several decades. Most studies have reported disappointingly small improvements in student performance on conceptual questions despite direct instruction on the relevant topics. These results have spurred a number of attempts to improve learning in physics courses through new curricula and instructional techniques. This paper contributes to the research base through a retrospective analysis of 20 randomly selected qualitative questions on topics in kinematics, dynamics, electrostatics, waves, and physical optics that have been given in introductory calculus-based physics at the University of Washington over a period of 15 years. In some classes, questions were administered after relevant lecture instruction had been completed; in others, it had yet to begin. Simple statistical tests indicate that the average performance of the "after lecture" classes was significantly better than that of the "before lecture" classes for 11 questions, significantly worse for two questions, and indistinguishable for the remaining seven. However, the classes had not been randomly assigned to be tested before or after lecture instruction. Multiple linear regression was therefore conducted with variables (such as class size) that could plausibly lead to systematic differences in performance and thus obscure (or artificially enhance) the effect of lecture instruction. The regression models support the results of the simple tests for all but four questions. In those cases, the effect of lecture instruction was reduced to a nonsignificant level, or increased to a significant, negative level when other variables were considered. Thus the results provide robust evidence that instruction in lecture can increase student ability to give correct answers to conceptual questions but does not necessarily do so; in some cases it can even lead to a decrease.
Framework for Uncertainty Assessment - Hanford Site-Wide Groundwater Flow and Transport Modeling
NASA Astrophysics Data System (ADS)
Bergeron, M. P.; Cole, C. R.; Murray, C. J.; Thorne, P. D.; Wurstner, S. K.
2002-05-01
Pacific Northwest National Laboratory is in the process of development and implementation of an uncertainty estimation methodology for use in future site assessments that addresses parameter uncertainty as well as uncertainties related to the groundwater conceptual model. The long-term goals of the effort are development and implementation of an uncertainty estimation methodology for use in future assessments and analyses being made with the Hanford site-wide groundwater model. The basic approach in the framework developed for uncertainty assessment consists of: 1) Alternate conceptual model (ACM) identification to identify and document the major features and assumptions of each conceptual model. The process must also include a periodic review of the existing and proposed new conceptual models as data or understanding become available. 2) ACM development of each identified conceptual model through inverse modeling with historical site data. 3) ACM evaluation to identify which of the conceptual models are plausible and should be included in any subsequent uncertainty assessments. 4) ACM uncertainty assessments will only be carried out for those ACMs determined to be plausible through comparison with historical observations and model structure identification measures. The parameter uncertainty assessment process generally involves: a) Model Complexity Optimization - to identify the important or relevant parameters for the uncertainty analysis; b) Characterization of Parameter Uncertainty - to develop the pdfs for the important uncertain parameters including identification of any correlations among parameters; c) Propagation of Uncertainty - to propagate parameter uncertainties (e.g., by first order second moment methods if applicable or by a Monte Carlo approach) through the model to determine the uncertainty in the model predictions of interest. 5) Estimation of combined ACM and scenario uncertainty by a double sum with each component of the inner sum (an individual CCDF) representing parameter uncertainty associated with a particular scenario and ACM and the outer sum enumerating the various plausible ACM and scenario combinations in order to represent the combined estimate of uncertainty (a family of CCDFs). A final important part of the framework includes identification, enumeration, and documentation of all the assumptions, which include those made during conceptual model development, required by the mathematical model, required by the numerical model, made during the spatial and temporal discretization process, needed to assign the statistical model and associated parameters that describe the uncertainty in the relevant input parameters, and finally those assumptions required by the propagation method. Pacific Northwest National Laboratory is operated for the U.S. Department of Energy under Contract DE-AC06-76RL01830.
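A minimal sketch of the propagation step (item c above), assuming a Monte Carlo approach and a trivial stand-in for the groundwater model: uncertain parameters are sampled from their pdfs, pushed through the model, and the prediction is summarized by exceedance probabilities of the kind a CCDF expresses. The parameter distributions, surrogate model, and threshold are illustrative assumptions.

```python
# Hedged sketch: Monte Carlo propagation of parameter uncertainty to a CCDF.
# Distributions, model and threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_samples = 10_000

# Assumed pdfs for two uncertain parameters (not site-calibrated values)
log10_conductivity = rng.normal(loc=-4.0, scale=0.5, size=n_samples)   # log10 K [m/s]
porosity = rng.uniform(0.1, 0.3, size=n_samples)

def travel_time_years(log10_k, porosity, gradient=1e-3, distance_m=1000.0):
    """Toy stand-in for the groundwater model: advective travel time."""
    velocity = (10.0 ** log10_k) * gradient / porosity        # seepage velocity, m/s
    return distance_m / velocity / 3.15e7                      # seconds -> years

samples = travel_time_years(log10_conductivity, porosity)

# Summaries of the propagated uncertainty in the prediction of interest
for p in (50, 90, 95):
    print(f"{p}th percentile travel time: {np.percentile(samples, p):.0f} years")

threshold_years = 100.0   # assumed performance threshold for one CCDF point
print("CCDF at threshold, P(T > 100 yr):", np.mean(samples > threshold_years))
```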
Identifying the theory of dark matter with direct detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gluscevic, Vera; Gresham, Moira I.; McDermott, Samuel D.
2015-12-01
Identifying the true theory of dark matter depends crucially on accurately characterizing interactions of dark matter (DM) with other species. In the context of DM direct detection, we present a study of the prospects for correctly identifying the low-energy effective DM-nucleus scattering operators connected to UV-complete models of DM-quark interactions. We take a census of plausible UV-complete interaction models with different low-energy leading-order DM-nuclear responses. For each model (corresponding to different spin-, momentum-, and velocity-dependent responses), we create a large number of realizations of recoil-energy spectra, and use Bayesian methods to investigate the probability that experiments will be able to select the correct scattering model within a broad set of competing scattering hypotheses. We conclude that agnostic analysis of a strong signal (such as Generation-2 would see if cross sections are just below the current limits) seen on xenon and germanium experiments is likely to correctly identify momentum dependence of the dominant response, ruling out models with either 'heavy' or 'light' mediators, and enabling downselection of allowed models. However, a unique determination of the correct UV completion will critically depend on the availability of measurements from a wider variety of nuclear targets, including iodine or fluorine. We investigate how model-selection prospects depend on the energy window available for the analysis. In addition, we discuss accuracy of the DM particle mass determination under a wide variety of scattering models, and investigate impact of the specific types of particle-physics uncertainties on prospects for model selection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Porter, T. A.; Moskalenko, I. V.; Jóhannesson, G., E-mail: tporter@stanford.edu
High-energy γ-rays of interstellar origin are produced by the interaction of cosmic-ray (CR) particles with the diffuse gas and radiation fields in the Galaxy. The main features of this emission are well understood and are reproduced by existing CR propagation models employing 2D galactocentric cylindrically symmetrical geometry. However, the high-quality data from instruments like the Fermi Large Area Telescope reveal significant deviations from the model predictions on few to tens of degrees scales, indicating the need to include the details of the Galactic spiral structure and thus requiring 3D spatial modeling. In this paper, the high-energy interstellar emissions from the Galaxy are calculated using the new release of the GALPROP code employing 3D spatial models for the CR source and interstellar radiation field (ISRF) densities. Three models for the spatial distribution of CR sources are used that are differentiated by their relative proportion of input luminosity attributed to the smooth disk or spiral arms. Two ISRF models are developed based on stellar and dust spatial density distributions taken from the literature that reproduce local near- to far-infrared observations. The interstellar emission models that include arms and bulges for the CR source and ISRF densities provide plausible physical interpretations for features found in the residual maps from high-energy γ-ray data analysis. The 3D models for CR and ISRF densities provide a more realistic basis that can be used for the interpretation of the nonthermal interstellar emissions from the Galaxy.
NASA Astrophysics Data System (ADS)
Plumlee, G. S.; Morman, S. A.; Alpers, C. N.; Hoefen, T. M.; Meeker, G. P.
2010-12-01
Disasters commonly pose immediate threats to human safety, but can also produce hazardous materials (HM) that pose short- and long-term environmental-health threats. The U.S. Geological Survey (USGS) has helped assess potential environmental health characteristics of HM produced by various natural and anthropogenic disasters, such as the 2001 World Trade Center collapse, 2005 hurricanes Katrina and Rita, 2007-2009 southern California wildfires, various volcanic eruptions, and others. Building upon experience gained from these responses, we are now developing methods to anticipate plausible environmental and health implications of the 2008 Great Southern California ShakeOut scenario (which modeled the impacts of a 7.8 magnitude earthquake on the southern San Andreas fault, http://urbanearth.gps.caltech.edu/scenario08/), and the recent ARkStorm scenario (modeling the impacts of a major, weeks-long winter storm hitting nearly all of California, http://urbanearth.gps.caltech.edu/winter-storm/). Environmental-health impacts of various past earthquakes and extreme storms are first used to identify plausible impacts that could be associated with the disaster scenarios. Substantial insights can then be gleaned using a Geographic Information Systems (GIS) approach to link ShakeOut and ARkStorm effects maps with data extracted from diverse database sources containing geologic, hazards, and environmental information. This type of analysis helps constrain where potential geogenic (natural) and anthropogenic sources of HM (and their likely types of contaminants or pathogens) fall within areas of predicted ShakeOut-related shaking, firestorms, and landslides, and predicted ARkStorm-related precipitation, flooding, and winds. Because of uncertainties in the event models and many uncertainties in the databases used (e.g., incorrect location information, lack of detailed information on specific facilities, etc.) this approach should only be considered as the first of multiple steps toward a more quantitative, predictive approach to understanding the potential sources, types, environmental behavior, and health implications of HM predicted to result from these disaster scenarios. Although only a first step, this qualitative approach will help enhance planning for, mitigation of, and resilience to environmental-health consequences of future disasters. This qualitative approach also requires careful communication to stakeholders that does not sensationalize or overstate potential problems, but rather conveys plausible impacts and next steps to improve understanding of potential risks and their mitigation.
ERIC Educational Resources Information Center
Maxwell, Jane Carlisle; Pullum, Thomas W.
2001-01-01
Applied the capture-recapture model, through Poisson regression, to a time series of data for admissions to treatment from 1987 to 1996 to estimate the number of heroin addicts in Texas who are "at-risk" for treatment. The entire data set produced estimates that were lower and more plausible than those produced by drawing samples,…
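As a rough illustration of the capture-recapture idea behind the study (not its exact Poisson regression on the admissions time series), the sketch below applies a zero-truncated Poisson correction to hypothetical counts of people observed with one, two, three, or four treatment admissions, inflating the observed population by the estimated probability of never being admitted.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical counts: number of people observed with k = 1, 2, 3, 4 treatment
# admissions during the study window (k = 0, never admitted, is unobserved).
k_values = np.array([1, 2, 3, 4])
n_k = np.array([5200, 1600, 450, 110])

n_obs = n_k.sum()
mean_k = (k_values * n_k).sum() / n_obs

# Zero-truncated Poisson MLE: solve lambda / (1 - exp(-lambda)) = observed mean count.
lam = brentq(lambda l: l / (1.0 - np.exp(-l)) - mean_k, 1e-6, 10.0)

# Correct for the unseen zero class to estimate the total at-risk population.
n_total = n_obs / (1.0 - np.exp(-lam))
print(f"lambda = {lam:.3f}, estimated at-risk population = {n_total:,.0f}")
```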
Modelling the time-dependent frequency content of low-frequency volcanic earthquakes
NASA Astrophysics Data System (ADS)
Jousset, Philippe; Neuberg, Jürgen; Sturton, Susan
2003-11-01
Low-frequency volcanic earthquakes and tremor have been observed on seismic networks at a number of volcanoes, including Soufrière Hills volcano on Montserrat. Single events have well known characteristics, including a long duration (several seconds) and harmonic spectral peaks (0.2-5 Hz). They are commonly observed in swarms, and can be highly repetitive both in waveforms and amplitude spectra. As the time delay between them decreases, they merge into tremor, often preceding critical volcanic events like dome collapses or explosions. Observed amplitude spectrograms of long-period volcanic earthquake swarms may display gliding lines which reflect a time dependence in the frequency content. Using a magma-filled dyke embedded in a solid homogeneous half-space as a simplified volcanic structure, we employ a 2D finite-difference method to compute the propagation of seismic waves in the conduit and its vicinity. We successfully replicate the seismic wave field of a single low-frequency event, as well as the occurrence of events in swarms, their highly repetitive characteristics, and the time dependence of their spectral content. We use our model to demonstrate that there are two modes of conduit resonance, leading to two types of interface waves which are recorded at the free surface as surface waves. We also demonstrate that reflections from the top and the bottom of a conduit act as secondary sources that are recorded at the surface as repetitive low-frequency events with similar waveforms. We further expand our modelling to account for gradients in physical properties across the magma-solid interface. We also expand it to account for time dependence of magma properties, which we implement by changing physical properties within the conduit during numerical computation of wave propagation. We use our expanded model to investigate the amplitude and time scales required for modelling gliding lines, and show that changes in magma properties, particularly changes in the bubble nucleation level, provide a plausible mechanism for the frequency variation in amplitude spectrograms.
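The conduit-resonance modelling described above rests on finite-difference simulation of wave propagation through a low-velocity body embedded in stiffer country rock. A heavily simplified 2D acoustic sketch is given below; the grid, velocities, source wavelet, and receiver position are illustrative assumptions, and the published model is elastic and far more detailed.

```python
import numpy as np

# Grid and medium: a slow, fluid-like vertical "conduit" inside a faster solid.
nx, nz, dx = 200, 200, 5.0              # grid size and spacing (m), illustrative
c = np.full((nz, nx), 2000.0)           # country-rock wave speed (m/s), illustrative
c[:, 95:105] = 800.0                    # low-velocity magma-filled dyke, illustrative
dt = 0.4 * dx / c.max()                 # satisfies the 2D CFL stability condition
nt = 800

p_prev = np.zeros((nz, nx))
p = np.zeros((nz, nx))
src_z, src_x = 150, 100                 # pressure source placed inside the conduit

def ricker(t, f0=2.0):
    a = (np.pi * f0 * (t - 1.0 / f0)) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

seismogram = []
for it in range(nt):
    lap = np.zeros_like(p)
    lap[1:-1, 1:-1] = (p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2]
                       - 4.0 * p[1:-1, 1:-1]) / dx**2
    p_next = 2.0 * p - p_prev + (c * dt) ** 2 * lap       # leapfrog time update
    p_next[src_z, src_x] += ricker(it * dt) * dt**2       # inject source wavelet
    p_prev, p = p, p_next
    seismogram.append(p[1, 60])          # synthetic record at a "surface" receiver

print("peak synthetic amplitude:", max(abs(s) for s in seismogram))
```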
A Multimodel Approach for Calculating Benchmark Dose
Ramon I. Garcia and R. Woodrow Setzer
In the assessment of dose response, a number of plausible dose- response models may give fits that are consistent with the data. If no dose response formulation had been speci...
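A minimal sketch of a multimodel benchmark-dose calculation in the spirit of the abstract above: fit two plausible dose-response forms, weight them by AIC, and report a model-averaged BMD. The data, the two model forms, the fixed control response of 0.02, and the benchmark response of 0.10 are illustrative assumptions rather than the authors' formulation.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

# Hypothetical dose-response data (fraction responding).
dose = np.array([0.0, 1.0, 3.0, 10.0, 30.0])
resp = np.array([0.02, 0.05, 0.10, 0.28, 0.62])

def exp_model(d, a, b):           # saturating exponential, control fixed at 0.02
    return 0.02 + a * (1.0 - np.exp(-b * d))

def power_model(d, a, b):         # power model, control fixed at 0.02
    return 0.02 + a * d ** b

fits = {}
for name, f, p0, bounds in [("exp", exp_model, [1.0, 0.05], ([0, 1e-3], [2, 1])),
                            ("power", power_model, [0.05, 1.0], ([0, 0.3], [2, 3]))]:
    popt, _ = curve_fit(f, dose, resp, p0=p0, bounds=bounds)
    fits[name] = lambda d, f=f, p=popt: f(d, *p)

def aic(pred):                    # least-squares AIC with k = 2 parameters
    rss = np.sum((resp - pred(dose)) ** 2)
    return len(dose) * np.log(rss / len(dose)) + 2 * 2

def bmd(pred, bmr=0.10):          # dose giving an extra response of BMR over control
    return brentq(lambda d: pred(d) - pred(0.0) - bmr, 1e-9, 1e3)

aics = np.array([aic(fits[m]) for m in fits])
weights = np.exp(-0.5 * (aics - aics.min()))
weights /= weights.sum()
bmd_avg = sum(w * bmd(fits[m]) for w, m in zip(weights, fits))
print(dict(zip(fits, np.round(weights, 3))), f"model-averaged BMD = {bmd_avg:.2f}")
```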
Theological misinterpretations of current physical cosmology
NASA Astrophysics Data System (ADS)
Grünbaum, Adolf
1996-04-01
In earlier writings, I argued that neither of the two major physical cosmologies of the 20th century supports divine creation, so that atheism has nothing to fear from the explanations required by these cosmologies. Yet theists ranging from Augustine, Aquinas, Descartes, and Leibniz to Richard Swinburne and Philip Quinn have maintained that, at every instant anew, the existence of the world requires divine creation ex nihilo as its cause. Indeed, according to some such theists, for any given moment t, God's volition that the-world-should-exist-at-t supposedly brings about its actual existence at t. In an effort to reestablish the current viability of this doctrine of perpetual divine conservation, Philip Quinn argued (1993) that it is entirely compatible with physical energy conservation in the Big Bang cosmology, as well as with the physics of the steady-state theories. But I now contend that instead, there is a logical incompatibility on both counts. Besides, the stated tenet of divine conservation has an additional defect: It speciously purchases plausibility by trading on the multiply disanalogous volitional explanations of human actions.
Explicit B-spline regularization in diffeomorphic image registration
Tustison, Nicholas J.; Avants, Brian B.
2013-01-01
Diffeomorphic mappings are central to image registration due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying and constant velocity fields, and symmetrical considerations. Prior information in the form of regularization is used to enforce transform plausibility taking the form of physics-based constraints or through some approximation thereof, e.g., Gaussian smoothing of the vector fields [a la Thirion's Demons (Thirion, 1998)]. In the context of the original Demons' framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline “flavored” diffeomorphic image registration solutions with several advantages. Implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools. PMID:24409140
A biologically plausible computational model for auditory object recognition.
Larson, Eric; Billimoria, Cyrus P; Sen, Kamal
2009-01-01
Object recognition is a task of fundamental importance for sensory systems. Although this problem has been intensively investigated in the visual system, relatively little is known about the recognition of complex auditory objects. Recent work has shown that spike trains from individual sensory neurons can be used to discriminate between and recognize stimuli. Multiple groups have developed spike similarity or dissimilarity metrics to quantify the differences between spike trains. Using a nearest-neighbor approach the spike similarity metrics can be used to classify the stimuli into groups used to evoke the spike trains. The nearest prototype spike train to the tested spike train can then be used to identify the stimulus. However, how biological circuits might perform such computations remains unclear. Elucidating this question would facilitate the experimental search for such circuits in biological systems, as well as the design of artificial circuits that can perform such computations. Here we present a biologically plausible model for discrimination inspired by a spike distance metric using a network of integrate-and-fire model neurons coupled to a decision network. We then apply this model to the birdsong system in the context of song discrimination and recognition. We show that the model circuit is effective at recognizing individual songs, based on experimental input data from field L, the avian primary auditory cortex analog. We also compare the performance and robustness of this model to two alternative models of song discrimination: a model based on coincidence detection and a model based on firing rate.
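A sketch of the spike-distance/nearest-prototype idea that inspired the model: spike trains are smoothed with a causal exponential kernel (a van Rossum-style distance) and a test response is assigned to the song whose prototype response is closest. The spike times, kernel time constant, and prototypes are hypothetical; the paper's biologically plausible circuit of integrate-and-fire neurons is not reproduced here.

```python
import numpy as np

def filtered_trace(spike_times, t_grid, tau=10.0):
    # Convolve a spike train with a causal exponential kernel (van Rossum-style).
    trace = np.zeros_like(t_grid)
    for s in spike_times:
        mask = t_grid >= s
        trace[mask] += np.exp(-(t_grid[mask] - s) / tau)
    return trace

def spike_distance(train_a, train_b, t_grid, tau=10.0):
    diff = filtered_trace(train_a, t_grid, tau) - filtered_trace(train_b, t_grid, tau)
    return np.sqrt(np.trapz(diff**2, t_grid) / tau)

def classify(test_train, prototypes, t_grid):
    # Nearest-prototype rule: assign the song whose prototype response is closest.
    dists = {label: spike_distance(test_train, proto, t_grid)
             for label, proto in prototypes.items()}
    return min(dists, key=dists.get)

# Hypothetical prototype responses (spike times in ms) for two songs.
t = np.linspace(0.0, 200.0, 2001)
prototypes = {"song A": np.array([12.0, 40.0, 90.0, 150.0]),
              "song B": np.array([25.0, 60.0, 110.0, 170.0])}
test = np.array([13.0, 43.0, 88.0, 152.0])   # jittered response to song A
print(classify(test, prototypes, t))         # expected: "song A"
```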
County-level job automation risk and health: Evidence from the United States.
Patel, Pankaj C; Devaraj, Srikant; Hicks, Michael J; Wornell, Emily J
2018-04-01
Previous studies have observed a positive association between automation risk and employment loss. Based on the job insecurity-health risk hypothesis, greater exposure to automation risk could also be negatively associated with health outcomes. The main objective of this paper is to investigate the county-level association between prevalence of workers in jobs exposed to automation risk and general, physical, and mental health outcomes. As a preliminary assessment of the job insecurity-health risk hypothesis (automation risk → job insecurity → poorer health), a structural equation model was used based on individual-level data in the two cross-sectional waves (2012 and 2014) of General Social Survey (GSS). Next, using county-level data from County Health Rankings 2017, American Community Survey (ACS) 2015, and Statistics of US Businesses 2014, Two Stage Least Squares (2SLS) regression models were fitted to predict county-level health outcomes. Using the 2012 and 2014 waves of the GSS, employees in occupational classes at higher risk of automation reported more job insecurity, that, in turn, was associated with poorer health. The 2SLS estimates show that a 10% increase in automation risk at county-level is associated with 2.38, 0.8, and 0.6 percentage point lower general, physical, and mental health, respectively. Evidence suggests that exposure to automation risk may be negatively associated with health outcomes, plausibly through perceptions of poorer job security. More research is needed on interventions aimed at mitigating negative influence of automation risk on health. Copyright © 2018 Elsevier Ltd. All rights reserved.
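A schematic of the 2SLS logic used above: an instrument shifts the endogenous exposure (automation risk) but affects the outcome only through it, so the second-stage slope recovers the effect that naive OLS misses when an unobserved confounder is present. All data and coefficients below are simulated for illustration and are unrelated to the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical county-level data: z = instrument, x = automation risk (endogenous),
# y = health outcome, u = unobserved confounder driving both x and y.
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.5 * u + rng.normal(size=n)
y = -0.6 * x + 0.7 * u + rng.normal(size=n)

def two_sls(y, x, z):
    Z = np.column_stack([np.ones_like(z), z])
    X = np.column_stack([np.ones_like(x), x])
    # First stage: project the endogenous regressor on the instrument.
    x_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    # Second stage: regress the outcome on the fitted values.
    return np.linalg.lstsq(x_hat, y, rcond=None)[0]

beta_ols = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), y, rcond=None)[0]
beta_2sls = two_sls(y, x, z)
print("OLS slope:", beta_ols[1], " 2SLS slope:", beta_2sls[1])  # 2SLS is close to -0.6
```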
Finite element modeling of melting and fluid flow in the laser-heated diamond-anvil cell
NASA Astrophysics Data System (ADS)
Gomez-Perez, N.; Rodriguez, J. F.; McWilliams, R. S.
2017-04-01
The laser-heated diamond anvil cell is widely used in the laboratory study of materials behavior at high-pressure and high-temperature, including melting curves and liquid properties at extreme conditions. Laser heating in the diamond cell has long been associated with fluid-like motion in samples, which is routinely used to determine melting points and is often described as convective in appearance. However, the flow behavior of this system is poorly understood. A quantitative treatment of melting and flow in the laser-heated diamond anvil cell is developed here to physically relate experimental motion to properties of interest, including melting points and viscosity. Numerical finite-element models are used to characterize the temperature distribution, melting, buoyancy, and resulting natural convection in samples. We find that continuous fluid motion in experiments can be explained most readily by natural convection. Fluid velocities, peaking near values of microns per second for plausible viscosities, are sufficiently fast to be detected experimentally, lending support to the use of convective motion as a criterion for melting. Convection depends on the physical properties of the melt and the sample geometry and is too sluggish to detect for viscosities significantly above that of water at ambient conditions, implying an upper bound on the melt viscosity of about 1 mPa s when convective motion is detected. A simple analytical relationship between melt viscosity and velocity suggests that direct viscosity measurements can be made from flow speeds, given the basic thermodynamic and geometric parameters of samples are known.
NASA Astrophysics Data System (ADS)
Jalalzadeh Fard, B.; Hassanzadeh, H.; Bhatia, U.; Ganguly, A. R.
2016-12-01
Studies on urban areas show a significant increase in the frequency and intensity of heatwaves over the past decades, and predict the same trend for the future. Since heatwaves have been responsible for a large number of life losses, urgent adaptation and mitigation strategies are required at the policy and decision-making level for sustainable urban planning. The Sustainability and Data Sciences Laboratory at Northeastern University, under the aegis of Thriving Earth Exchange of AGU, is working with the town of Brookline to understand the potential public health impacts of anticipated heatwaves. We consider the most important social and physical factors to obtain vulnerability and exposure parameters for each census block group of the town. Utilizing remote sensing data, we locate Urban Heat Islands (UHIs) during a recent heatwave event as the hazard parameter. We then create a priority risk map using the risk framework. Our analyses show spatial correlations between the UHIs and social factors such as poverty, and physical factors such as land cover variations. Furthermore, we investigate future increases in heatwave frequency and intensity by analyzing climate model predictions. For future changes of UHIs, land cover changes are investigated using available predictive data. Socioeconomic projections are also carried out to complete the future heatwave risk models. Considering plausible scenarios for Brookline, we develop different risk maps based on the vulnerability, exposure and hazard parameters. Eventually, we suggest guidelines for Heatwave Action Plans for prioritizing effective mitigation and adaptation strategies in urban planning for the town of Brookline.
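A minimal sketch of the risk-framework step: combine hazard (UHI intensity), exposure, and vulnerability indicators into a block-group risk index used to rank priorities. The indicators, weights, and multiplicative form are illustrative assumptions, not the Brookline study's actual specification.

```python
import pandas as pd

# Hypothetical block-group inputs on a common 0-1 scale.
blocks = pd.DataFrame({
    "block_group": ["BG-1", "BG-2", "BG-3"],
    "uhi_intensity": [0.9, 0.4, 0.6],       # hazard: remotely sensed heat island
    "pct_elderly": [0.3, 0.1, 0.2],         # vulnerability indicator
    "pct_poverty": [0.25, 0.05, 0.40],      # vulnerability indicator
    "pop_density": [0.8, 0.3, 0.6],         # exposure indicator
})

def minmax(s):
    return (s - s.min()) / (s.max() - s.min())

hazard = minmax(blocks["uhi_intensity"])
vulnerability = minmax(0.5 * blocks["pct_elderly"] + 0.5 * blocks["pct_poverty"])
exposure = minmax(blocks["pop_density"])

# Multiplicative risk framework: risk = hazard x exposure x vulnerability.
blocks["risk_index"] = hazard * exposure * vulnerability
print(blocks.sort_values("risk_index", ascending=False))
```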
Racial/ethnic differences in midlife women's attitudes toward physical activity.
Im, Eun-Ok; Ko, Young; Hwang, Hyenam; Chee, Wonshik; Stuifbergen, Alexa; Walker, Lorraine; Brown, Adama
2013-01-01
Women's racial/ethnic-specific attitudes toward physical activity have been pointed out as a plausible reason for their low participation rates in physical activity. However, very little is actually known about racial/ethnic commonalities and differences in midlife women's attitudes toward physical activity. The purpose of this study was to explore commonalities and differences in midlife women's attitudes toward physical activity among 4 major racial/ethnic groups in the United States (whites, Hispanics, African Americans, and Asians). This was a secondary analysis of the qualitative data from a larger study that explored midlife women's attitudes toward physical activity. Qualitative data from 4 racial/ethnic-specific online forums among 90 midlife women were used for this study. The data were analyzed using thematic analysis, and themes reflecting commonalties and differences in the women's attitudes toward physical activity across the racial/ethnic groups were extracted. The themes reflecting the commonalities were: 1) physical activity is good for health, 2) not as active as I could be, 3) physical activity was not encouraged, 4) inherited diseases motivated participation in physical activity, and 5) lack of accessibility to physical activity. The themes reflecting the differences were: 1) physical activity as necessity or luxury, 2) organized versus natural physical activity, 3) individual versus family-oriented physical activity, and 4) beauty ideal or culturally accepted physical appearance. Developing an intervention that could change the social influences and environmental factors and address the women's racial/ethnic-specific attitudes would be a priority in increasing physical activity of racial/ethnic minority midlife women. © 2013 by the American College of Nurse-Midwives.
Yragui, Nanette L; Demsky, Caitlin A; Hammer, Leslie B; Van Dyck, Sarah; Neradilek, Moni B
2017-04-01
The present study examined the moderating effects of family-supportive supervisor behaviors (FSSB) on the relationship between two types of workplace aggression (i.e., patient-initiated physical aggression and coworker-initiated psychological aggression) and employee well-being and work outcomes. Data were obtained from a field sample of 417 healthcare workers in two psychiatric hospitals. Hypotheses were tested using moderated multiple regression analyses. Psychiatric care providers' perceptions of FSSB moderated the relationship between patient-initiated physical aggression and physical symptoms, exhaustion and cynicism. In addition, FSSB moderated the relationship between coworker-initiated psychological aggression and physical symptoms and turnover intentions. Based on our findings, family-supportive supervision is a plausible boundary condition for the relationship between workplace aggression and well-being and work outcomes. This study suggests that, in addition to directly addressing aggression prevention and reduction, family-supportive supervision is a trainable resource that healthcare organizations should facilitate to improve employee work and well-being in settings with high workplace aggression. This is the first study to examine the role of FSSB in influencing the relationship between two forms of workplace aggression (patient-initiated physical and coworker-initiated psychological aggression) and employee outcomes.
NASA Astrophysics Data System (ADS)
Rodríguez-Rincón, J. P.; Pedrozo-Acuña, A.; Breña-Naranjo, J. A.
2015-07-01
This investigation aims to study the propagation of meteorological uncertainty within a cascade modelling approach to flood prediction. The methodology was comprised of a numerical weather prediction (NWP) model, a distributed rainfall-runoff model and a 2-D hydrodynamic model. The uncertainty evaluation was carried out at the meteorological and hydrological levels of the model chain, which enabled the investigation of how errors that originated in the rainfall prediction interact at a catchment level and propagate to an estimated inundation area and depth. For this, a hindcast scenario is utilised removing non-behavioural ensemble members at each stage, based on the fit with observed data. At the hydrodynamic level, an uncertainty assessment was not incorporated; instead, the model was setup following guidelines for the best possible representation of the case study. The selected extreme event corresponds to a flood that took place in the southeast of Mexico during November 2009, for which field data (e.g. rain gauges; discharge) and satellite imagery were available. Uncertainty in the meteorological model was estimated by means of a multi-physics ensemble technique, which is designed to represent errors from our limited knowledge of the processes generating precipitation. In the hydrological model, a multi-response validation was implemented through the definition of six sets of plausible parameters from past flood events. Precipitation fields from the meteorological model were employed as input in a distributed hydrological model, and resulting flood hydrographs were used as forcing conditions in the 2-D hydrodynamic model. The evolution of skill within the model cascade shows a complex aggregation of errors between models, suggesting that in valley-filling events hydro-meteorological uncertainty has a larger effect on inundation depths than that observed in estimated flood inundation extents.
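A minimal sketch of the "remove non-behavioural ensemble members" step described above: score each (hypothetical) ensemble hydrograph against observations with the Nash-Sutcliffe efficiency and keep only members above a chosen threshold. The synthetic hydrograph, the perturbation model, and the threshold of 0.7 are illustrative assumptions, not the study's actual criteria.

```python
import numpy as np

rng = np.random.default_rng(7)

def nash_sutcliffe(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical observed hydrograph and a multi-physics ensemble of simulations.
t = np.arange(72)
obs = 50.0 + 400.0 * np.exp(-0.5 * ((t - 36) / 8.0) ** 2)
ensemble = {f"member_{i}": obs * rng.uniform(0.6, 1.4)
            + rng.normal(0.0, 20.0, size=t.size) for i in range(10)}

# Keep only "behavioural" members whose fit exceeds the skill threshold; these
# are the members propagated to the next stage of the model cascade.
threshold = 0.7
behavioural = {name: sim for name, sim in ensemble.items()
               if nash_sutcliffe(sim, obs) >= threshold}
print(f"{len(behavioural)} of {len(ensemble)} members retained:", sorted(behavioural))
```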
Using field observations to inform thermal hydrology models of permafrost dynamics with ATS (v0.83)
Atchley, Adam L.; Painter, Scott L.; Harp, Dylan R.; ...
2015-09-01
Climate change is profoundly transforming the carbon-rich Arctic tundra landscape, potentially moving it from a carbon sink to a carbon source by increasing the thickness of soil that thaws on a seasonal basis. However, the modeling capability and precise parameterizations of the physical characteristics needed to estimate projected active layer thickness (ALT) are limited in Earth system models (ESMs). In particular, discrepancies in spatial scale between field measurements and Earth system models challenge validation and parameterization of hydrothermal models. A recently developed surface-subsurface model for permafrost thermal hydrology, the Advanced Terrestrial Simulator (ATS), is used in combination with field measurements to achieve the goals of constructing a process-rich model based on plausible parameters and to identify fine-scale controls of ALT in ice-wedge polygon tundra in Barrow, Alaska. An iterative model refinement procedure that cycles between borehole temperature and snow cover measurements and simulations functions to evaluate and parameterize different model processes necessary to simulate freeze-thaw processes and ALT formation. After model refinement and calibration, reasonable matches between simulated and measured soil temperatures are obtained, with the largest errors occurring during early summer above ice wedges (e.g., troughs). The results suggest that properly constructed and calibrated one-dimensional thermal hydrology models have the potential to provide reasonable representation of the subsurface thermal response and can be used to infer model input parameters and process representations. The models for soil thermal conductivity and snow distribution were found to be the most sensitive process representations. However, information on lateral flow and snowpack evolution might be needed to constrain model representations of surface hydrology and snow depth.
Cockatiel-induced hypersensitivity pneumonitis.
McCluskey, James D; Haight, Robert R; Brooks, Stuart M
2002-07-01
Diagnosing an environmental or occupationally related pulmonary disorder often involves a process of elimination. Unlike commonly diagnosed conditions in other specialties, a cause-and-effect relationship may be implied, yet other factors such as temporality and biologic plausibility are lacking. Our patient was referred with a suspected work-related pulmonary disorder. For several years, she had suffered with dyspnea on exertion and repeated flulike illnesses. She worked at an automobile repair garage that performed a large number of emission tests, and there was concern that her workplace exposures were the cause of her symptoms. After a careful review of her history, physical examination, and laboratory testing, we came to the conclusion that she had hypersensitivity pneumonitis related to pet cockatiels in her home. Clinical points of emphasis include the importance of a complete environmental history and careful auscultation of the chest when performing the physical examination. In addition, we encountered an interesting physical diagnostic clue, a respiratory sound that assisted with the eventual diagnosis.
The metaphysics of quantum mechanics: Modal interpretations
NASA Astrophysics Data System (ADS)
Gluck, Stuart Murray
2004-11-01
This dissertation begins with the argument that a preferred way of doing metaphysics is through philosophy of physics. An understanding of quantum physics is vital to answering questions such as: What counts as an individual object in physical ontology? Is the universe fundamentally indeterministic? Are indiscernibles identical? This study explores how the various modal interpretations of quantum mechanics answer these sorts of questions; modal accounts are one of the two classes of interpretations along with so-called collapse accounts. This study suggests a new alternative within the class of modal views that yields a more plausible ontology, one in which the Principle of the Identity of Indiscernibles is necessarily true. Next, it shows that modal interpretations can consistently deny that the universe must be fundamentally indeterministic so long as they accept certain other metaphysical commitments: either a perfect initial distribution of states in the universe or some form of primitive dispositional properties. Finally, the study sketches out a future research project for modal interpretations based on developing quantified quantum logic.
Xu, Kesheng; Maidana, Jean P.; Caviedes, Mauricio; Quero, Daniel; Aguirre, Pablo; Orio, Patricio
2017-01-01
In this article, we describe and analyze the chaotic behavior of a conductance-based neuronal bursting model. This is a model with a reduced number of variables, yet it retains biophysical plausibility. Inspired by the activity of cold thermoreceptors, the model contains a persistent Sodium current, a Calcium-activated Potassium current and a hyperpolarization-activated current (Ih) that drive a slow subthreshold oscillation. Driven by this oscillation, a fast subsystem (fast Sodium and Potassium currents) fires action potentials in a periodic fashion. Depending on the parameters, this model can generate a variety of firing patterns that includes bursting, regular tonic and polymodal firing. Here we show that the transitions between different firing patterns are often accompanied by a range of chaotic firing, as suggested by an irregular, non-periodic firing pattern. To confirm this, we measure the maximum Lyapunov exponent of the voltage trajectories, and the Lyapunov exponent and Lempel-Ziv's complexity of the ISI time series. The four-variable slow system (without spiking) also generates chaotic behavior, and bifurcation analysis shows that this is often originated by period doubling cascades. Either with or without spikes, chaos is no longer generated when the Ih is removed from the system. As the model is biologically plausible with biophysically meaningful parameters, we propose it as a useful tool to understand chaotic dynamics in neurons. PMID:28344550
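A sketch of the Lempel-Ziv complexity measurement mentioned above, applied to inter-spike-interval (ISI) sequences. The LZ78-style parsing, the median binarization, and the synthetic ISI series are assumptions for illustration rather than the paper's exact procedure; a regular (periodic) firing pattern should score much lower than an irregular, aperiodic one.

```python
import numpy as np

def lz_complexity(symbols):
    # Count phrases in an LZ78-style incremental parsing of the symbol string.
    phrases, i, n = set(), 0, len(symbols)
    while i < n:
        j = i + 1
        while symbols[i:j] in phrases and j <= n:
            j += 1
        phrases.add(symbols[i:j])
        i = j
    return len(phrases)

def normalized_isi_complexity(isi):
    # Binarize inter-spike intervals around their median before parsing
    # (an assumed preprocessing choice, not necessarily the paper's).
    med = np.median(isi)
    s = "".join("1" if x > med else "0" for x in isi)
    return lz_complexity(s) * np.log2(len(s)) / len(s)

rng = np.random.default_rng(3)
periodic_isi = np.tile([12.0, 48.0], 500)           # regular bursting-like pattern
irregular_isi = rng.uniform(5.0, 60.0, size=1000)   # aperiodic, chaos-like pattern
print(f"periodic: {normalized_isi_complexity(periodic_isi):.3f}, "
      f"irregular: {normalized_isi_complexity(irregular_isi):.3f}")
```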
On the distinguishability of HRF models in fMRI.
Rosa, Paulo N; Figueiredo, Patricia; Silvestre, Carlos J
2015-01-01
Modeling the Hemodynamic Response Function (HRF) is a critical step in fMRI studies of brain activity, and it is often desirable to estimate HRF parameters with physiological interpretability. A biophysically informed model of the HRF can be described by a non-linear time-invariant dynamic system. However, the identification of this dynamic system may leave much uncertainty on the exact values of the parameters. Moreover, the high noise levels in the data may hinder the model estimation task. In this context, the estimation of the HRF may be seen as a problem of model falsification or invalidation, where we are interested in distinguishing among a set of eligible models of dynamic systems. Here, we propose a systematic tool to determine the distinguishability among a set of physiologically plausible HRF models. The concept of absolutely input-distinguishable systems is introduced and applied to a biophysically informed HRF model, by exploiting the structure of the underlying non-linear dynamic system. A strategy to model uncertainty in the input time-delay and magnitude is developed and its impact on the distinguishability of two physiologically plausible HRF models is assessed, in terms of the maximum noise amplitude above which it is not possible to guarantee the falsification of one model in relation to another. Finally, a methodology is proposed for the choice of the input sequence, or experimental paradigm, that maximizes the distinguishability of the HRF models under investigation. The proposed approach may be used to evaluate the performance of HRF model estimation techniques from fMRI data.
Negotiating plausibility: intervening in the future of nanotechnology.
Selin, Cynthia
2011-12-01
The national-level scenarios project NanoFutures focuses on the social, political, economic, and ethical implications of nanotechnology, and is initiated by the Center for Nanotechnology in Society at Arizona State University (CNS-ASU). The project involves novel methods for the development of plausible visions of nanotechnology-enabled futures, elucidates public preferences for various alternatives, and, using such preferences, helps refine future visions for research and outreach. In doing so, the NanoFutures project aims to address a central question: how to deliberate the social implications of an emergent technology whose outcomes are not known. The solution pursued by the NanoFutures project is twofold. First, NanoFutures limits speculation about the technology to plausible visions. This ambition introduces a host of concerns about the limits of prediction, the nature of plausibility, and how to establish plausibility. Second, it subjects these visions to democratic assessment by a range of stakeholders, thus raising methodological questions as to who are relevant stakeholders and how to activate different communities so as to engage the far future. This article makes the dilemmas posed by decisions about such methodological issues transparent and therefore articulates the role of plausibility in anticipatory governance.
DOT National Transportation Integrated Search
2001-06-30
Freight movements within large metropolitan areas are much less studied and analyzed than personal travel. This casts doubt on the results of much conventional travel demand modeling and planning. With so much traffic overlooked, how plausible are th...
Testing Adaptive Toolbox Models: A Bayesian Hierarchical Approach
ERIC Educational Resources Information Center
Scheibehenne, Benjamin; Rieskamp, Jorg; Wagenmakers, Eric-Jan
2013-01-01
Many theories of human cognition postulate that people are equipped with a repertoire of strategies to solve the tasks they face. This theoretical framework of a cognitive toolbox provides a plausible account of intra- and interindividual differences in human behavior. Unfortunately, it is often unclear how to rigorously test the toolbox…
Metabolic Syndrome Risk Profiles Among African American Adolescents
Fitzpatrick, Stephanie L.; Lai, Betty S.; Brancati, Frederick L.; Golden, Sherita H.; Hill-Briggs, Felicia
2013-01-01
OBJECTIVE Although African American adolescents have the highest prevalence of obesity, they have the lowest prevalence of metabolic syndrome across all definitions used in previous research. To address this paradox, we sought to develop a model of the metabolic syndrome specific to African American adolescents. RESEARCH DESIGN AND METHODS Data from the National Health and Nutrition Examination Survey (2003–2010) of 822 nonpregnant, nondiabetic, African American adolescents (45% girls; aged 12 to 17 years) who underwent physical examinations and fasted at least 8 h were analyzed. We conducted a confirmatory factor analysis to model metabolic syndrome and then used latent profile analysis to identify metabolic syndrome risk groups among African American adolescents. We compared the risk groups on probability of prediabetes. RESULTS The best-fitting metabolic syndrome model consisted of waist circumference, fasting insulin, HDL, and systolic blood pressure. We identified three metabolic syndrome risk groups: low, moderate, and high risk (19% boys; 16% girls). Thirty-five percent of both boys and girls in the high-risk groups had prediabetes, a significantly higher prevalence compared with boys and girls in the low-risk groups. Among adolescents with BMI higher than the 85th percentile, 48 and 36% of boys and girls, respectively, were in the high-risk group. CONCLUSIONS Our findings provide a plausible model of the metabolic syndrome specific to African American adolescents. Based on this model, approximately 19 and 16% of African American boys and girls, respectively, are at high risk for having the metabolic syndrome. PMID:23093663
Stationary hydrodynamic models of Wolf-Rayet stars with optically thick winds.
NASA Astrophysics Data System (ADS)
Heger, A.; Langer, N.
1996-11-01
We investigate the influence of a grey, optically thick wind on the surface and internal structure of Wolf-Rayet (WR) stars. We calculate hydrodynamic models of chemically homogeneous helium stars with stationary outflows, solving the full set of stellar structure equations from the stellar center up to well beyond the sonic point of the wind, including the line force originating from absorption lines in a parameterized way. For specific assumptions about mass loss rate and wind opacity above our outer boundary, we find that the iron opacity peak may lead to local super-Eddington luminosities at the sonic point. By varying the stellar wind parameters over the whole physically plausible range, we show that the radius of the sonic point of the wind flow is always very close to the hydrostatic stellar radius obtained in WR star models which ignore the wind. However, our models confirm the possibility of large values for observable WR radii and correspondingly small effective temperatures found in earlier models. We show further that the energy which is contained in a typical WR wind cannot be neglected. The stellar luminosity may be reduced by several tens of percent, which has a pronounced effect on the mass-luminosity relation, i.e., the WR masses derived for a given luminosity may be considerably larger. Thereby, the momentum problem of WR winds is also considerably reduced, as well as the scatter in the Ṁ vs. M diagram for observed hydrogen-free WN stars.
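The super-Eddington condition at the sonic point can be illustrated by comparing a stellar luminosity with the Eddington luminosity L_Edd = 4πGMc/κ. In the sketch below the helium-star mass and luminosity, and the enhanced flux-mean opacity near the iron bump, are order-of-magnitude assumptions rather than values from the models in the paper.

```python
import numpy as np

G = 6.674e-8          # cgs
c = 2.998e10          # cm/s
M_sun = 1.989e33      # g
L_sun = 3.828e33      # erg/s

def eddington_luminosity(mass_msun, kappa):
    # L_Edd = 4*pi*G*M*c / kappa, for a flux-mean opacity kappa (cm^2/g).
    return 4.0 * np.pi * G * mass_msun * M_sun * c / kappa

mass = 15.0                       # assumed helium-star mass (M_sun)
L_star = 2.0e5 * L_sun            # assumed stellar luminosity
kappa_es = 0.20                   # electron scattering in hydrogen-free matter
kappa_fe_peak = 1.5               # assumed enhanced opacity near the iron bump

for label, kappa in [("electron scattering", kappa_es),
                     ("iron opacity peak", kappa_fe_peak)]:
    gamma = L_star / eddington_luminosity(mass, kappa)
    print(f"{label}: Gamma = L/L_Edd = {gamma:.2f}")   # Gamma > 1 is super-Eddington
```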
Smith, Jessica D; Hou, Tao; Hu, Frank B; Rimm, Eric B; Spiegelman, Donna; Willett, Walter C; Mozaffarian, Dariush
2015-01-01
Background: The insidious pace of long-term weight gain (∼1 lb/y or 0.45 kg/y) makes it difficult to study in trials; long-term prospective cohorts provide crucial evidence on its key contributors. Most previous studies have evaluated how prevalent lifestyle habits relate to future weight gain rather than to lifestyle changes, which may be more temporally and physiologically relevant. Objective: Our objective was to evaluate and compare different methodological approaches for investigating diet, physical activity (PA), and long-term weight gain. Methods: In 3 prospective cohorts (total n = 117,992), we assessed how lifestyle relates to long-term weight change (up to 24 y of follow-up) in 4-y periods by comparing 3 analytic approaches: 1) prevalent diet and PA and 4-y weight change (prevalent analysis); 2) 4-y changes in diet and PA with a 4-y weight change (change analysis); and 3) 4-y change in diet and PA with weight change in the subsequent 4 y (lagged-change analysis). We compared these approaches and evaluated the consistency across cohorts, magnitudes of associations, and biological plausibility of findings. Results: Across the 3 methods, consistent, robust, and biologically plausible associations were seen only for the change analysis. Results for prevalent or lagged-change analyses were less consistent across cohorts, smaller in magnitude, and biologically implausible. For example, for each serving of a sugar-sweetened beverage, the observed weight gain was 0.01 lb (95% CI: −0.08, 0.10) [0.005 kg (95% CI: −0.04, 0.05)] based on prevalent analysis; 0.99 lb (95% CI: 0.83, 1.16) [0.45 kg (95% CI: 0.38, 0.53)] based on change analysis; and 0.05 lb (95% CI: −0.10, 0.21) [0.02 kg (95% CI: −0.05, 0.10)] based on lagged-change analysis. Findings were similar for other foods and PA. Conclusions: Robust, consistent, and biologically plausible relations between lifestyle and long-term weight gain are seen when evaluating lifestyle changes and weight changes in discrete periods rather than in prevalent lifestyle or lagged changes. These findings inform the optimal methods for evaluating lifestyle and long-term weight gain and the potential for bias when other methods are used. PMID:26377763
Experimental verification of free-space singular boundary conditions in an invisibility cloak
NASA Astrophysics Data System (ADS)
Wu, Qiannan; Gao, Fei; Song, Zhengyong; Lin, Xiao; Zhang, Youming; Chen, Huanyang; Zhang, Baile
2016-04-01
A major issue in invisibility cloaking, which caused intense mathematical discussions in the past few years but still remains physically elusive, is the plausible singular boundary conditions associated with the singular metamaterials at the inner boundary of an invisibility cloak. The perfect cloaking phenomenon, as originally proposed by Pendry et al for electromagnetic waves, cannot be treated as physical before a realistic inner boundary of a cloak is demonstrated. Although a recent demonstration has been done in a waveguide environment, the exotic singular boundary conditions should apply to a general environment as in free space. Here we fabricate a metamaterial surface that exhibits the singular boundary conditions and demonstrate its performance in free space. Particularly, the phase information of waves reflected from this metamaterial surface is explicitly measured, confirming the singular responses of boundary conditions for an invisibility cloak.
Multi-scale kinetic description of granular clusters: invariance, balance, and temperature
NASA Astrophysics Data System (ADS)
Capriz, Gianfranco; Mariano, Paolo Maria
2017-12-01
We discuss a multi-scale continuum representation of bodies made of several mass particles flowing independently of each other. From an invariance procedure and a nonstandard balance of inertial actions, we derive the balance equations introduced in earlier work directly in pointwise form, essentially on the basis of physical plausibility. In this way, we analyze their foundations. Then, we propose a Boltzmann-type equation for the distribution of kinetic energies within control volumes in space and indicate how such a distribution allows us to propose a definition of (granular) temperature along processes far from equilibrium.
Pasanen, Tytti P; Tyrväinen, Liisa; Korpela, Kalevi M
2014-11-01
A body of evidence shows that both physical activity and exposure to nature are connected to improved general and mental health. Experimental studies have consistently found short term positive effects of physical activity in nature compared with built environments. This study explores whether these benefits are also evident in everyday life, perceived over repeated contact with nature. The topic is important from the perspectives of city planning, individual well-being, and public health. National survey data (n = 2,070) from Finland was analysed using structural regression analyses. Perceived general health, emotional well-being, and sleep quality were regressed on the weekly frequency of physical activity indoors, outdoors in built environments, and in nature. Socioeconomic factors and other plausible confounders were controlled for. Emotional well-being showed the most consistent positive connection to physical activity in nature, whereas general health was positively associated with physical activity in both built and natural outdoor settings. Better sleep quality was weakly connected to frequent physical activity in nature, but the connection was outweighed by other factors. The results indicate that nature provides an added value to the known benefits of physical activity. Repeated exercise in nature is, in particular, connected to better emotional well-being. © 2014 The Authors. Applied Psychology: Health and Well-Being published by John Wiley & Sons Ltd on behalf of The International Association of Applied Psychology.
Qamar, A; LeBlanc, K; Semeniuk, O; Reznik, A; Lin, J; Pan, Y; Moewes, A
2017-10-13
We investigated the electronic structure of lead oxide (PbO), one of the most promising photoconductor materials for direct-conversion x-ray imaging detectors, using soft x-ray emission and absorption spectroscopy. Two structural configurations of thin PbO layers, namely the polycrystalline and the amorphous phase, were studied and compared to the properties of powdered α-PbO and β-PbO samples. In addition, we performed calculations within the framework of density functional theory (DFT) and found an excellent agreement between the calculated and the measured absorption and emission spectra, which indicates high accuracy of our structural models. Our work provides strong evidence that the electronic structure of PbO layers, specifically the width of the band gap and the presence of additional interband and intraband states in both the conduction and valence bands, depends on the deposition conditions. We tested several model structures using DFT simulations to understand the origin of these states. The presence of O vacancies is the most plausible explanation for these additional electronic states; several other plausible models were ruled out, including interstitial O, dislocated O and the presence of significant lattice stress in PbO.
NASA Astrophysics Data System (ADS)
Kurosawa, K.; Uchiyama, Y.
2016-12-01
By optimally combining ocean models with observational data, numerical oceanic reanalysis and forecast systems allow us to predict the ocean more precisely. In general, data assimilation is exploited to prepare the initial condition for the forecast. This technique has widely been employed in atmospheric prediction, whereas oceanic prediction lags behind weather forecasting. Accurate oceanic prediction systems are in demand for operational purposes such as fisheries, vessel navigation, marine construction, offshore platform management, and marine monitoring. In particular, in crowded harbors and estuaries, including the Seto Inland Sea (SIS), Japan, data assimilation has seldom been adopted because the satellite and Argo float data essential to successful oceanic prediction are severely limited. In addition, static data assimilation, typically three-dimensional variational data assimilation (3DVAR), is computationally cheap and statistically optimal, but it is not necessarily physically balanced. For instance, 3DVAR is known to modify velocity and density fields in a purely mathematical way, without adequately accounting for quasi-geostrophic balance, which holds approximately in most oceanic flows. In the present study, we develop a 3DVAR system for the Regional Oceanic Modeling System (ROMS) and apply it to a high-resolution SIS model in a doubly nested configuration (Kosako et al., 2015). The SIS is the largest estuary in Japan, with a number of autonomous in-situ stations monitoring vertical profiles of temperature and salinity, tens of tidal gauges, and continuous surface current measurement using HF radars. We first present the theoretical framework of the 3DVAR algorithm, which incorporates geostrophic and thermal-wind balance to find plausible relationships among physical variables and to avoid undesirable modifications. Subsequently, the developed 3DVAR is coupled with the SIS ROMS model and the model outcomes are compared against observation data. The 3DVAR ROMS model for the SIS performs much better than the SIS model without assimilation and demonstrates good model skill, reproducing the quite complex flows in the SIS that arise from its complicated topography with more than 3,000 islands. Furthermore, we will share technical difficulties encountered during the experiments.
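A generic sketch of the 3DVAR analysis step: minimize the cost J(x) combining a background term and an observation term. The covariances here are diagonal and the sketch omits the geostrophic and thermal-wind balance constraints described above; the state size, observation operator, and error statistics are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n, m = 50, 12                                  # state size, number of observations

# Hypothetical background state, sparse observations, and error covariances.
x_b = rng.normal(size=n)                       # background (model first guess)
H = np.zeros((m, n)); H[np.arange(m), rng.choice(n, m, replace=False)] = 1.0
x_true = x_b + rng.normal(scale=0.5, size=n)
y = H @ x_true + rng.normal(scale=0.1, size=m) # sparse in-situ observations
B = 0.25 * np.eye(n)                           # background error covariance
R = 0.01 * np.eye(m)                           # observation error covariance
B_inv, R_inv = np.linalg.inv(B), np.linalg.inv(R)

def cost(x):
    # J(x) = 1/2 (x - x_b)^T B^-1 (x - x_b) + 1/2 (Hx - y)^T R^-1 (Hx - y)
    db = x - x_b
    do = H @ x - y
    return 0.5 * db @ B_inv @ db + 0.5 * do @ R_inv @ do

def grad(x):
    return B_inv @ (x - x_b) + H.T @ R_inv @ (H @ x - y)

x_a = minimize(cost, x_b, jac=grad, method="L-BFGS-B").x   # analysis state
print("background misfit:", np.linalg.norm(H @ x_b - y),
      " analysis misfit:", np.linalg.norm(H @ x_a - y))
```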
An improved swarm optimization for parameter estimation and biological model selection.
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by Chemical Reaction Optimization into the neighbourhood searching strategy of the Firefly Algorithm. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. This study is expected to provide new insight into developing more accurate and reliable biological models based on limited and low-quality experimental data.
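A minimal particle-swarm sketch of population-based parameter estimation by fitting a model to noisy data. This is a plain PSO, not the paper's hybrid Swarm-based Chemical Reaction Optimization, and the two-parameter decay model and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical two-parameter kinetic model and noisy "experimental" data.
t = np.linspace(0.0, 10.0, 25)
def model(params, t):
    k1, k2 = params
    return k1 * np.exp(-k2 * t)
true = np.array([2.0, 0.4])
data = model(true, t) + rng.normal(0.0, 0.05, size=t.size)

def sse(params):
    return np.sum((model(params, t) - data) ** 2)

# Each particle tracks its personal best; all are attracted to the global best.
n_particles, n_iter = 30, 200
lo, hi = np.array([0.1, 0.01]), np.array([5.0, 2.0])
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([sse(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([sse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("estimated parameters:", gbest, " true:", true)
```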
van Gestel, Aukje; Severens, Johan L; Webers, Carroll A B; Beckers, Henny J M; Jansonius, Nomdo M; Schouten, Jan S A G
2010-01-01
Discrete event simulation (DES) modeling has several advantages over simpler modeling techniques in health economics, such as increased flexibility and the ability to model complex systems. Nevertheless, these benefits may come at the cost of reduced transparency, which may compromise the model's face validity and credibility. We aimed to produce a transparent report on the construction and validation of a DES model using a recently developed model of ocular hypertension and glaucoma. Current evidence of associations between prognostic factors and disease progression in ocular hypertension and glaucoma was translated into DES model elements. The model was extended to simulate treatment decisions and effects. Utility and costs were linked to disease status and treatment, and clinical and health economic outcomes were defined. The model was validated at several levels. The soundness of design and the plausibility of input estimates were evaluated in interdisciplinary meetings (face validity). Individual patients were traced throughout the simulation under a multitude of model settings to debug the model, and the model was run with a variety of extreme scenarios to compare the outcomes with prior expectations (internal validity). Finally, several intermediate (clinical) outcomes of the model were compared with those observed in experimental or observational studies (external validity), and the feasibility of evaluating hypothetical treatment strategies was tested. The model performed well in all validity tests. Analyses of hypothetical treatment strategies took about 30 minutes per cohort and led to plausible health-economic outcomes. There is added value of DES models in complex treatment strategies such as glaucoma. Achieving transparency in model structure and outcomes may require some effort in reporting and validating the model, but it is feasible.
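A minimal sketch of the event-queue mechanics underlying a DES, not the published ocular hypertension and glaucoma model: each simulated patient carries conversion and examination events on a single priority queue. The conversion rate, exam interval, and test sensitivity are illustrative assumptions.

```python
import heapq
import random

random.seed(0)

def run_des(n_patients=1000, horizon_years=20.0, exam_interval=1.0):
    # Minimal discrete event simulation with a single time-ordered event queue.
    conversion_rate = 0.03            # ocular hypertension -> glaucoma, per year (assumed)
    events = []                       # heap of (time, patient_id, kind)
    converted = [False] * n_patients
    detected = [False] * n_patients

    for pid in range(n_patients):
        t_conv = random.expovariate(conversion_rate)
        heapq.heappush(events, (t_conv, pid, "conversion"))
        heapq.heappush(events, (exam_interval, pid, "exam"))

    while events:
        t, pid, kind = heapq.heappop(events)
        if t > horizon_years:
            break                     # events are popped in time order
        if kind == "conversion":
            converted[pid] = True
        elif kind == "exam":
            if converted[pid] and not detected[pid] and random.random() < 0.8:
                detected[pid] = True  # imperfect test sensitivity (assumed)
            heapq.heappush(events, (t + exam_interval, pid, "exam"))
    return sum(converted), sum(detected)

conv, det = run_des()
print(f"converted: {conv}, detected by scheduled exams: {det}")
```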
Cardiovascular reactivity, stress, and physical activity
Huang, Chun-Jung; Webb, Heather E.; Zourdos, Michael C.; Acevedo, Edmund O.
2013-01-01
Psychological stress has been proposed as a major contributor to the progression of cardiovascular disease (CVD). Acute mental stress can activate the sympathetic-adrenal-medullary (SAM) axis, eliciting the release of catecholamines (NE and EPI) resulting in the elevation of heart rate (HR) and blood pressure (BP). Combined stress (psychological and physical) can exacerbate these cardiovascular responses, which may partially contribute to the elevated risk of CVD and increased proportionate mortality risks experienced by some occupations (e.g., firefighting and law enforcement). Studies have supported the benefits of physical activity on physiological and psychological health, including the cardiovascular response to acute stress. Aerobically trained individuals exhibit lower sympathetic nervous system (e.g., HR) reactivity and enhanced cardiovascular efficiency (e.g., lower vascular reactivity and decreased recovery time) in response to physical and/or psychological stress. In addition, resistance training has been demonstrated to attenuate cardiovascular responses and improve mental health. This review will examine stress-induced cardiovascular reactivity and plausible explanations for how exercise training and physical fitness (aerobic and resistance exercise) can attenuate cardiovascular responses to stress. This enhanced functionality may facilitate a reduction in the incidence of stroke and myocardial infarction. Finally, this review will also address the interaction of obesity and physical activity on cardiovascular reactivity and CVD. PMID:24223557
ERIC Educational Resources Information Center
Staub, Adrian; Rayner, Keith; Pollatsek, Alexander; Hyona, Jukka; Majewski, Helen
2007-01-01
Readers' eye movements were monitored as they read sentences containing noun-noun compounds that varied in frequency (e.g., elevator mechanic, mountain lion). The left constituent of the compound was either plausible or implausible as a head noun at the point at which it appeared, whereas the compound as a whole was always plausible. When the head…
Modulation of channel activity and gadolinium block of MscL by static magnetic fields.
Petrov, Evgeny; Martinac, Boris
2007-02-01
The magnetic field of the Earth has long been known to influence the behaviour and orientation of a variety of living organisms. Experimental studies of the magnetic sense have, however, been impaired by the lack of a plausible cellular and/or molecular mechanism providing a meaningful explanation for the detection of magnetic fields by these organisms. Recently, mechanosensitive (MS) ion channels have been implicated in magnetoreception. In this study we have investigated the effect of static magnetic fields (SMFs) of moderate intensity on the activity and gadolinium block of MscL, the bacterial MS channel of large conductance, which has served as a model channel to study the basic physical principles of mechanosensory transduction in living cells. In addition to showing that direct application of the magnetic field decreased the activity of the MscL channel, our study demonstrates for the first time that SMFs can reverse the effect of gadolinium, a well-known blocker of MS channels. The results of our study are consistent with the notion that (1) the effects of SMFs on the MscL channels may result from changes in physical properties of the lipid bilayer due to diamagnetic anisotropy of phospholipid molecules and, consequently, (2) cooperative superdiamagnetism of phospholipid molecules under the influence of SMFs could cause displacement of Gd(3+) ions from the membrane bilayer and thus remove the MscL channel block.
Babu, Giridhara R.; Sudhir, Paulomi M.; Mahapatra, Tanmay; Das, Aritra; Rathnaiah, Mohanbabu; Anand, Indiresh; Detels, Roger
2016-01-01
Background: There is limited scientific evidence on the relationship of job stress with quality of life (QoL). Purpose: This study aims to explore different domains of job stress affecting IT/ITES professionals and estimate the levels of stress that these professionals endure to reach positive levels of QoL given that other determinants operating between these two variables are accounted for. Materials and Methods: We estimated levels of stress that software professionals would have endured to reach positive levels of QoL considering that other factors operating between these two variables are accounted for. The study participants comprised 1071 software professionals who were recruited using a mixed sampling method. Participants answered a self-administered questionnaire containing questions on job stress, QoL, and confounders. Results: All the domains (physical, psychological, social, and environmental) of QoL showed statistically significant positive associations with increasing stress domains of autonomy, physical infrastructure, work environment, and emotional factors. Conclusions: The respondents clearly found the trade-off of higher stress to be acceptable for the improved QoL they enjoyed. It is also possible that stress might actually be responsible for improvements in QoL either directly or through mediation of variables such as personal values and aspirations. Yerkes-Dodson law and stress appraisal models of Folkman and Lazarus may explain the plausible positive association. PMID:28194085
Concussions and the military: issues specific to service members.
Rigg, John L; Mooney, Scott R
2011-10-01
Since October 2001, more than 1.6 million American military service members have deployed to Iraq and Afghanistan in the Global War on Terrorism. It is estimated that between 5% and 35% of them have sustained a concussion, also called mild traumatic brain injury (mTBI), during their deployment. Up to 80% of the concussions experienced in theater are secondary to blast exposures. The unique circumstances and consequences of sustaining a concussion in combat demand a unique understanding and treatment plan. A review of the current literature revealed a paucity of pathophysiological explanations of the nature of the injury and of informed treatment plans. However, through observation and experience, a theoretical but scientifically plausible model has been developed for why and how blast injuries experienced in combat give rise to the symptoms that affect the day-to-day function of service members who have been concussed. We also offer treatment strategies, based on our evaluation of the current literature and on experience, to help palliate postconcussive symptoms. The purpose of this review is to elucidate common physical, cognitive, emotional, and situational challenges, and possible solutions, for this special population of patients who will be transitioning into the civilian sector and interfacing with health professionals. There is a need for further investigation and testing of these strategies. Copyright © 2011 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
Exposure Render: An Interactive Photo-Realistic Volume Rendering Framework
Kroes, Thomas; Post, Frits H.; Botha, Charl P.
2012-01-01
The field of volume visualization has undergone rapid development during the past years, both due to advances in suitable computing hardware and due to the increasing availability of large volume datasets. Recent work has focused on increasing the visual realism in Direct Volume Rendering (DVR) by integrating a number of visually plausible but often effect-specific rendering techniques, for instance modeling of light occlusion and depth of field. Besides yielding more attractive renderings, especially the more realistic lighting has a positive effect on perceptual tasks. Although these new rendering techniques yield impressive results, they exhibit limitations in terms of their flexibility and their performance. Monte Carlo ray tracing (MCRT), coupled with physically based light transport, is the de-facto standard for synthesizing highly realistic images in the graphics domain, although usually not from volumetric data. Due to the stochastic sampling of MCRT algorithms, numerous effects can be achieved in a relatively straightforward fashion. For this reason, we have developed a practical framework that applies MCRT techniques to direct volume rendering. With this work, we demonstrate that a host of realistic effects, including physically based lighting, can be simulated in a generic and flexible fashion, leading to interactive DVR with improved realism. In the hope that this improved approach to DVR will see more use in practice, we have made available our framework under a permissive open source license. PMID:22768292
Fast ionized X-ray absorbers in AGNs
NASA Astrophysics Data System (ADS)
Fukumura, K.; Tombesi, F.; Kazanas, D.; Shrader, C.; Behar, E.; Contopoulos, I.
2016-05-01
We investigate the physics of the X-ray ionized absorbers often identified as warm absorbers (WAs) and ultra-fast outflows (UFOs) in Seyfert AGNs from spectroscopic studies in the context of a magnetically driven accretion-disk wind scenario. Launched and accelerated by the action of a global magnetic field anchored to an underlying accretion disk around a black hole, the outflowing plasma is irradiated and ionized by an AGN radiation field characterized by its spectral energy density (SED). By numerically solving the Grad-Shafranov equation in the magnetohydrodynamic (MHD) framework, the physical properties of the magnetized disk wind are determined by a set of wind parameters, which is then incorporated into radiative transfer calculations with the xstar photoionization code under heating-cooling equilibrium to compute the absorber's properties such as column density N_H, line-of-sight (LoS) velocity v, and ionization parameter ξ, among others. Assuming that the wind density scales as n ∝ r-1, we calculate the theoretical absorption measure distribution (AMD) for various ions seen in AGNs as well as line spectra, especially for the Fe Kα absorption feature, focusing on the bright quasar PG 1211+143 as a case study, and show the model's plausibility. In this note we demonstrate that the proposed MHD-driven disk-wind scenario is not only consistent with the observed X-ray data, but also helps to better constrain the nature of the AGN environment in close proximity to the central engine.
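For orientation, the relations below sketch why the assumed n ∝ r⁻¹ density law matters for the absorption measure distribution; these are standard photoionized-wind expressions (with L_ion denoting the ionizing luminosity) rather than formulas quoted from the paper itself.

```latex
% Ionization parameter of a photoionized wind element at radius r,
% with L_ion the ionizing luminosity and n the local number density:
\xi = \frac{L_{\rm ion}}{n\, r^{2}} \;\propto\; r^{-1}
\quad \text{for } n(r) \propto r^{-1}.
% Absorption measure distribution along the line of sight:
\mathrm{AMD} \equiv \left| \frac{dN_{\rm H}}{d\log\xi} \right|,
\qquad dN_{\rm H} = n\, dr
\;\Rightarrow\; \mathrm{AMD} = \ln(10)\, n\, r \approx \mathrm{const},
% i.e. an n \propto r^{-1} wind predicts a roughly flat AMD across ionization states.
```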
Malm, Christer; Nyberg, Pernilla; Engström, Marianne; Sjödin, Bertil; Lenkei, Rodica; Ekblom, Björn; Lundberg, Ingrid
2000-01-01
A role of the immune system in muscular adaptation to physical exercise has been suggested, but data from controlled human studies are scarce. The present study investigated immunological events in human blood and skeletal muscle by immunohistochemistry and flow cytometry after eccentric cycling exercise and multiple biopsies. Immunohistochemical detection of neutrophil- (CD11b, CD15), macrophage- (CD163), satellite cell- (CD56) and IL-1β-specific antigens increased similarly in human skeletal muscle after eccentric cycling exercise together with multiple muscle biopsies, or multiple biopsies only. Changes in immunological variables in blood and muscle were related, and monocytes and natural killer (NK) cells appeared to have governing functions over immunological events in human skeletal muscle. Delayed onset muscle soreness, serum creatine kinase activity and C-reactive protein concentration were not related to leukocyte infiltration in human skeletal muscle. Eccentric cycling and/or muscle biopsies did not result in T cell infiltration in human skeletal muscle. Modes of stress other than eccentric cycling should therefore be evaluated as a myositis model in humans. Based on results from the present study, and in the light of previously published data, it appears plausible that muscular adaptation to physical exercise occurs without preceding muscle inflammation. Nevertheless, leukocytes seem important for repair, regeneration and adaptation of human skeletal muscle. PMID:11080266
NASA Astrophysics Data System (ADS)
Cheng, Yanyan; Ogden, Fred L.; Zhu, Jianting
2017-07-01
Preferential flow paths (PFPs) affect the hydrological response of humid tropical catchments but have not received sufficient attention. We consider PFPs created by tree roots and earthworms in a near-surface soil layer in steep, humid, tropical lowland catchments and hypothesize that observed hydrological behaviors can be better captured by reasonably considering PFPs in this layer. We test this hypothesis by evaluating the performance of four different physically based distributed model structures without and with PFPs in different configurations. Model structures are tested both quantitatively and qualitatively using hydrological, geophysical, and geochemical data both from the Smithsonian Tropical Research Institute Agua Salud Project experimental catchment(s) in Central Panama and other sources in the literature. The performance of different model structures is evaluated using runoff Volume Error and three Nash-Sutcliffe efficiency measures against observed total runoff, stormflows, and base flows along with visual comparison of simulated and observed hydrographs. Two of the four proposed model structures which include both lateral and vertical PFPs are plausible, but the one with explicit simulation of PFPs performs the best. A small number of vertical PFPs that fully extend below the root zone allow the model to reasonably simulate deep groundwater recharge, which plays a crucial role in base flow generation. Results also show that the shallow lateral PFPs are the main contributor to the observed high flow characteristics. Their number and size distribution are found to be more important than the depth distribution. Our model results are corroborated by geochemical and geophysical observations.
NASA Astrophysics Data System (ADS)
José Gómez-Navarro, Juan; Raible, Christoph C.; Blumer, Sandro; Martius, Olivia; Felder, Guido
2016-04-01
Extreme precipitation episodes, although rare, are natural phenomena that can threaten human activities, especially in densely populated areas such as Switzerland. Their relevance demands the design of public policies that protect public assets and private property. Therefore, the current understanding of such exceptional situations needs to be improved, i.e. the climatic characterisation of their triggering circumstances, severity, frequency, and spatial distribution. Such increased knowledge should eventually lead to more reliable projections of the behaviour of these events under ongoing climate change. Unfortunately, the study of extreme situations is hampered by the short instrumental record, which precludes a proper characterisation of events with return periods exceeding a few decades. This study proposes a new approach that allows storms to be studied on the basis of a synthetic, but physically consistent, database of weather situations obtained from a long climate simulation. Our starting point is a 500-yr control simulation carried out with the Community Earth System Model (CESM). In a second step, this dataset is dynamically downscaled with the Weather Research and Forecasting model (WRF) to a final resolution of 2 km over the Alpine area. However, downscaling the full CESM simulation at such high resolution is infeasible nowadays. Hence, a number of case studies are selected first. This selection is carried out by examining the precipitation averaged over an area encompassing Switzerland in the ESM. Using a hydrological criterion, precipitation is accumulated over several temporal windows: 1 day, 2 days, 3 days, 5 days and 10 days. The 4 most extreme events in each category and season are selected, leading to a total of 336 days to be simulated. The simulated events are affected by systematic biases that have to be accounted for before this data set can be used as input to hydrological models. Thus, quantile mapping is used to remove such biases. For this task, a 20-yr high-resolution control simulation is carried out. The extreme events belong to this distribution and can be mapped onto the distribution of precipitation obtained from a gridded precipitation product provided by MeteoSwiss. This procedure yields bias-free extreme precipitation events which serve as input to hydrological models that eventually produce simulated, yet physically consistent, flooding events. Thereby, the proposed methodology guarantees consistency with the underlying physics of extreme events, and reproduces plausible impacts of events with return periods of up to five centuries.
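A minimal sketch of the empirical quantile-mapping step described above, assuming simple 1-D arrays of daily precipitation; the function name, toy gamma-distributed data, and interpolation choices are illustrative assumptions, not the paper's actual processing chain.

```python
import numpy as np

def quantile_map(model_event, model_control, obs_control):
    """Empirical quantile mapping: map each simulated value to the observed
    value occupying the same quantile in the control-period distributions.
    Inputs are 1-D arrays of daily precipitation (illustrative only)."""
    model_sorted = np.sort(model_control)
    obs_sorted = np.sort(obs_control)
    # non-exceedance probability of each event value within the model climate
    probs = np.searchsorted(model_sorted, model_event, side="right") / model_sorted.size
    probs = np.clip(probs, 0.0, 1.0)
    # read off the observed value at the same probability
    return np.interp(probs, np.linspace(0.0, 1.0, obs_sorted.size), obs_sorted)

# toy example: the model's control climate is drier than the observations
rng = np.random.default_rng(0)
model_ctrl = rng.gamma(2.0, 2.0, 7300)   # ~20 yr of daily values
obs_ctrl = rng.gamma(2.0, 3.0, 7300)
print(quantile_map(np.array([25.0, 40.0, 60.0]), model_ctrl, obs_ctrl))
```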
Park, Jong Suk; Kang, Ung Gu
2016-02-01
Traditionally, delusions have been considered to be the products of misinterpretation and irrationality. However, some theorists have argued that delusions are normal or rational cognitive responses to abnormal experiences. That is, when a recently experienced peculiar event is more plausibly explained by an extraordinary hypothesis, confidence in the veracity of this extraordinary explanation is reinforced. As the number of such experiences, driven by the primary disease process in the perceptual domain, increases, this confidence builds and solidifies, forming a delusion. We tried to understand the formation of delusions using a simulation based on Bayesian inference. We found that (1) even if a delusional explanation is only marginally more plausible than a non-delusional one, the repetition of the same experience results in a firm belief in the delusion. (2) The same process explains the systematization of delusions. (3) If the perceived plausibility of the explanation is not consistent but varies over time, the development of a delusion is delayed. Additionally, this model may explain why delusions are not corrected by persuasion or rational explanation. This Bayesian inference perspective can be considered a way to understand delusions in terms of rational human heuristics. However, such experiences of "rationality" can lead to irrational conclusions, depending on the characteristics of the subject. Copyright © 2015 Elsevier Ltd. All rights reserved.
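A minimal numerical sketch of the repeated Bayesian updating described above; the prior and per-event likelihoods are invented illustrative numbers, not the parameters of the paper's simulation.

```python
def repeated_updates(prior_delusion, lik_delusion, lik_normal, n_events):
    """Posterior probability of the delusional hypothesis after n_events
    identical experiences, each only marginally better explained by it."""
    p = prior_delusion
    history = []
    for _ in range(n_events):
        numerator = lik_delusion * p
        p = numerator / (numerator + lik_normal * (1.0 - p))
        history.append(p)
    return history

# illustrative numbers: the delusional explanation is only slightly better per
# event (likelihood 0.6 vs 0.5), yet repetition drives the belief to near-certainty
trace = repeated_updates(prior_delusion=0.05, lik_delusion=0.6, lik_normal=0.5, n_events=30)
print([round(p, 3) for p in trace[::5]])
```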
Surface Magnetic Field Strengths: New Tests of Magnetoconvective Models of M Dwarfs
NASA Astrophysics Data System (ADS)
MacDonald, James; Mullan, D. J.
2014-05-01
Precision modeling of M dwarfs has become worthwhile in recent years due to the increasingly precise values of masses and radii which can be obtained from eclipsing binary studies. In a recent paper, Torres has identified four prime M dwarf pairs with the most precise empirical determinations of masses and radii. The measured radii are consistently larger than standard stellar models predict by several percent. These four systems potentially provide the most challenging tests of precision evolutionary models of cool dwarfs at the present time. We have previously modeled M dwarfs in the context of a criterion due to Gough & Tayler in which magnetic fields inhibit the onset of convection according to a physics-based prescription. In the present paper, we apply our magnetoconvective approach to the four prime systems in the Torres list. Going a step beyond what we have already modeled in CM Dra (one of the four Torres systems), we note that new constraints on magnetoconvective models of M dwarfs are now available from empirical estimates of magnetic field strengths on the surfaces of these stars. In the present paper, we consider how well our magnetoconvective models succeed when confronted with this new test of surface magnetic field strengths. Among the systems listed by Torres, we find that plausible magnetic models work well for CM Dra, YY Gem, and CU Cnc. (The fourth system in Torres's list does not yet have enough information to warrant magnetic modeling.) Our magnetoconvection models of CM Dra, YY Gem, and CU Cnc yield predictions of the magnetic fluxes on the stellar surface which are consistent with the observed correlation between magnetic flux and X-ray luminosity.
A quantitative dynamic systems model of health-related quality of life among older adults
Roppolo, Mattia; Kunnen, E Saskia; van Geert, Paul L; Mulasso, Anna; Rabaglietti, Emanuela
2015-01-01
Health-related quality of life (HRQOL) is a person-centered concept. The analysis of HRQOL is highly relevant in the aged population, which is generally suffering from health decline. Starting from a conceptual dynamic systems model that describes the development of HRQOL in individuals over time, this study aims to develop and test a quantitative dynamic systems model, in order to reveal the possible dynamic trends of HRQOL among older adults. The model is tested in different ways: first, with a calibration procedure to test whether the model produces theoretically plausible results, and second, with a preliminary validation procedure using empirical data of 194 older adults. This first validation tested the prediction that given a particular starting point (first empirical data point), the model will generate dynamic trajectories that lead to the observed endpoint (second empirical data point). The analyses reveal that the quantitative model produces theoretically plausible trajectories, thus providing support for the calibration procedure. Furthermore, the analyses of validation show a good fit between empirical and simulated data. In fact, no differences were found in the comparison between empirical and simulated final data for the same subgroup of participants, whereas the comparison between different subgroups of people resulted in significant differences. These data provide an initial basis of evidence for the dynamic nature of HRQOL during the aging process. Therefore, these data may give new theoretical and applied insights into the study of HRQOL and its development with time in the aging population. PMID:26604722
Effects of plausibility on structural priming.
Christianson, Kiel; Luke, Steven G; Ferreira, Fernanda
2010-03-01
We report a replication and extension of Ferreira (2003), in which it was observed that native adult English speakers misinterpret passive sentences that relate implausible but not impossible semantic relationships (e.g., The angler was caught by the fish) significantly more often than they do plausible passives or plausible or implausible active sentences. In the experiment reported here, participants listened to the same plausible and implausible passive and active sentences as in Ferreira (2003), answered comprehension questions, and then orally described line drawings of simple transitive actions. The descriptions were analyzed as a measure of structural priming (Bock, 1986). Question accuracy data replicated Ferreira (2003). Production data yielded an interaction: Passive descriptions were produced more often after plausible passives and implausible actives. We interpret these results as indicative of a language processor that proceeds along differentiated morphosyntactic and semantic routes. The processor may end up adjudicating between conflicting outputs from these routes by settling on a "good enough" representation that is not completely faithful to the input.
Mavritsaki, Eirini; Heinke, Dietmar; Humphreys, Glyn W; Deco, Gustavo
2006-01-01
In the real world, visual information is selected over time as well as space, when we prioritise new stimuli for attention. Watson and Humphreys [Watson, D., Humphreys, G.W., 1997. Visual marking: prioritizing selection for new objects by top-down attentional inhibition of old objects. Psychological Review 104, 90-122] presented evidence that new information in search tasks is prioritised by (amongst other processes) active ignoring of old items - a process they termed visual marking. In this paper we present, for the first time, an explicit computational model of visual marking using biologically plausible activation functions. The "spiking search over time and space" model (sSoTS) incorporates different synaptic components (NMDA, AMPA, GABA) and a frequency adaptation mechanism based on [Ca(2+)] sensitive K(+) current. This frequency adaptation current can act as a mechanism that suppresses the previously attended items. We show that, when coupled with a process of active inhibition applied to old items, frequency adaptation leads to old items being de-prioritised (and new items prioritised) across time in search. Furthermore, the time course of these processes mimics the time course of the preview effect in human search. The results indicate that the sSoTS model can provide a biologically plausible account of human search over time as well as space.
Modelling Trial-by-Trial Changes in the Mismatch Negativity
Lieder, Falk; Daunizeau, Jean; Garrido, Marta I.; Friston, Karl J.; Stephan, Klaas E.
2013-01-01
The mismatch negativity (MMN) is a differential brain response to violations of learned regularities. It has been used to demonstrate that the brain learns the statistical structure of its environment and predicts future sensory inputs. However, the algorithmic nature of these computations and the underlying neurobiological implementation remain controversial. This article introduces a mathematical framework with which competing ideas about the computational quantities indexed by MMN responses can be formalized and tested against single-trial EEG data. This framework was applied to five major theories of the MMN, comparing their ability to explain trial-by-trial changes in MMN amplitude. Three of these theories (predictive coding, model adjustment, and novelty detection) were formalized by linking the MMN to different manifestations of the same computational mechanism: approximate Bayesian inference according to the free-energy principle. We thereby propose a unifying view on three distinct theories of the MMN. The relative plausibility of each theory was assessed against empirical single-trial MMN amplitudes acquired from eight healthy volunteers in a roving oddball experiment. Models based on the free-energy principle provided more plausible explanations of trial-by-trial changes in MMN amplitude than models representing the two more traditional theories (change detection and adaptation). Our results suggest that the MMN reflects approximate Bayesian learning of sensory regularities, and that the MMN-generating process adjusts a probabilistic model of the environment according to prediction errors. PMID:23436989
Flow in the Deep Mantle from Seismic Anisotropy: Progress and Prospects
NASA Astrophysics Data System (ADS)
Long, M. D.
2017-12-01
Observations of seismic anisotropy, or the directional dependence of seismic wavespeeds, provide some of the most direct constraints on the pattern of flow in the Earth's mantle. In particular, as our understanding of crystallographic preferred orientation (CPO) of olivine aggregates under a range of deformation conditions has improved, our ability to exploit observations of upper mantle anisotropy has led to fundamental discoveries about the patterns of flow in the upper mantle and the drivers of that flow. It has been a challenge, however, to develop a similar framework for understanding flow in the deep mantle (transition zone, uppermost lower mantle, and lowermost mantle), even though there is convincing observational evidence for seismic anisotropy at these depths. Recent progress on the observational front has allowed for an increasingly detailed view of mid-mantle anisotropy (transition zone and uppermost lower mantle), particularly in subduction systems, which may eventually lead to a better understanding of mid-mantle deformation and the dynamics of slab interaction with the surrounding mid-mantle. New approaches to the observation and modeling of lowermost mantle anisotropy, in combination with constraints from mineral physics, are progressing towards interpretive frameworks that allow for the discrimination of different mantle flow geometries in different regions of D". In particular, observational strategies that involve the use of multiple types of body wave phases sampled over a range of propagation azimuths enable detailed forward modeling approaches that can discriminate between different mechanisms for D" anisotropy (e.g., CPO of post-perovskite, bridgmanite, or ferropericlase, or shape preferred orientation of partial melt) and identify plausible anisotropic orientations. We have recently begun to move towards a full waveform modeling approach in this work, which allows for a more accurate simulation of seismic wave propagation. Ongoing improvements in seismic observational strategies, experimental and computational mineral physics, and geodynamic modeling approaches are leading to new avenues for understanding flow in the deep mantle through the study of seismic anisotropy.
ZFIRE: using Hα equivalent widths to investigate the in situ initial mass function at z ˜ 2
NASA Astrophysics Data System (ADS)
Nanayakkara, Themiya; Glazebrook, Karl; Kacprzak, Glenn G.; Yuan, Tiantian; Fisher, David; Tran, Kim-Vy; Kewley, Lisa J.; Spitler, Lee; Alcorn, Leo; Cowley, Michael; Labbe, Ivo; Straatman, Caroline; Tomczak, Adam
2017-07-01
We use the ZFIRE (http://zfire.swinburne.edu.au) survey to investigate the high-mass slope of the initial mass function (IMF) for a mass-complete (log10(M*/M⊙) ~ 9.3) sample of 102 star-forming galaxies at z ~ 2 using their Hα equivalent widths (Hα EWs) and rest-frame optical colours. We compare dust-corrected Hα EW distributions with predictions of star formation histories (SFHs) from pegase.2 and starburst synthetic stellar population models. We find an excess of high Hα EW galaxies that are up to 0.3-0.5 dex above the model-predicted Salpeter IMF locus, and the Hα EW distribution is much broader (10-500 Å) than can easily be explained by a simple monotonic SFH with a standard Salpeter-slope IMF. Though this discrepancy is somewhat alleviated when it is assumed that there is no relative attenuation difference between stars and nebular lines, the result is robust against observational biases, and no single IMF (i.e. non-Salpeter slope) can reproduce the data. We show using both spectral stacking and Monte Carlo simulations that starbursts cannot explain the EW distribution. We investigate other physical mechanisms including models with variations in stellar rotation, binary star evolution, metallicity and the IMF upper-mass cut-off. IMF variations and/or highly rotating extremely metal-poor stars (Z ~ 0.1 Z⊙) with binary interactions are the most plausible explanations for our data. If the IMF varies, then the highest Hα EWs would require very shallow slopes (Γ > -1.0), with no one slope able to reproduce the data. Thus, the IMF would have to vary stochastically. We conclude that the stellar populations at z ≳ 2 show distinct differences from local populations and there is no simple physical model to explain the large variation in Hα EWs at z ~ 2.
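For readers less familiar with the diagnostic, the equivalent width used here has the standard textbook definition below; the formula is not quoted from the paper itself.

```latex
% Standard rest-frame equivalent width of an emission line (here H\alpha):
\mathrm{EW} = \int \frac{F_{\lambda}^{\rm line}}{F_{\lambda}^{\rm continuum}}\, d\lambda ,
% so EW(H\alpha) compares the current ionizing output of massive stars with the
% accumulated stellar continuum, which is why it is sensitive to the high-mass IMF slope.
```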
NASA Technical Reports Server (NTRS)
Suess, Steven T.; Wang, A. H.; Wu, Shi T.; Nerney, S.
1998-01-01
Evaporation is the consequence of slow plasma heating near the tops of streamers where the plasma is only weakly contained by the magnetic field. The form it takes is the slow opening of field lines at the top of the streamer and transient formation of new solar wind. It was discovered in polytropic model calculations, where due to the absence of other energy loss mechanisms in magnetostatic streamers, its ultimate endpoint is the complete evaporation of the streamer. This takes, for plausible heating rates, weeks to months in these models. Of course streamers do not behave this way, for more than one reason. One is that there are losses due to thermal conduction to the base of the streamer and radiation from the transition region. Another is that streamer heating must have a characteristic time constant and depend on the ambient physical conditions. We use our global Magnetohydrodynamics (MHD) model with thermal conduction to examine a few examples of the effect of changing the heating scale height and of making ad hoc choices for how the heating depends on ambient conditions. At the same time, we apply and extend the analytic model of streamers, which showed that streamers will be unable to contain plasma for temperatures near the cusp greater than about 2×10^6 K. Slow solar wind is observed to come from streamers through transient releases. A scenario for this that is consistent with the above physical process is that heating increases the near-cusp temperature until field lines there are forced open. The subsequent evacuation of the flux tubes by the newly forming slow wind decreases the temperature and heating until the flux tubes are able to reclose. Then, over a longer time scale, heating begins to again refill the flux tubes with plasma and increase the temperature until the cycle repeats itself. The calculations we report here are first steps towards quantitative evaluation of this scenario.
Structural organization of G-protein-coupled receptors
NASA Astrophysics Data System (ADS)
Lomize, Andrei L.; Pogozheva, Irina D.; Mosberg, Henry I.
1999-07-01
Atomic-resolution structures of the transmembrane 7-α-helical domains of 26 G-protein-coupled receptors (GPCRs) (including opsins, cationic amine, melatonin, purine, chemokine, opioid, and glycoprotein hormone receptors and two related proteins, retinochrome and Duffy erythrocyte antigen) were calculated by distance geometry using interhelical hydrogen bonds formed by various proteins from the family and collectively applied as distance constraints, as described previously [Pogozheva et al., Biophys. J., 70 (1997) 1963]. The main structural features of the calculated GPCR models are described and illustrated by examples. Some of the features reflect physical interactions that are responsible for the structural stability of the transmembrane α-bundle: the formation of extensive networks of interhelical H-bonds and sulfur-aromatic clusters that are spatially organized as 'polarity gradients'; the close packing of side-chains throughout the transmembrane domain; and the formation of interhelical disulfide bonds in some receptors and a plausible Zn2+ binding center in retinochrome. Other features of the models are related to biological function and evolution of GPCRs: the formation of a common 'minicore' of 43 evolutionarily conserved residues; a multitude of correlated replacements throughout the transmembrane domain; an Na+-binding site in some receptors; and excellent complementarity of receptor binding pockets to many structurally dissimilar, conformationally constrained ligands, such as retinal, cyclic opioid peptides, and cationic amine ligands. The calculated models are in good agreement with numerous experimental data.
Simulated Martian pressure cycle based on the sublimation and deposition of polar CO2
NASA Astrophysics Data System (ADS)
Kemppinen, Osku; Paton, Mark; Savijärvi, Hannu; Harri, Ari-Matti
2014-05-01
The Martian atmospheric pressure cycle is driven by sublimation and deposition of CO2 at the polar caps. In the thin atmosphere of Mars the surface energy balance, and thus the phase changes of CO2, are dominated by radiation. Additionally, because the atmosphere is so thin, the annual polar cap cycle can have a large relative effect on the pressure. In this work we utilize radiative transfer models to calculate the amount of radiation incoming to Martian polar latitudes over each sol of the year, as well as the amount of energy lost from the surface due to thermal radiation. The energy budget calculated in this way allows us to estimate the amount of CO2 sublimating and depositing at each hour of the Martian year. Since virtually all of the sublimated CO2 is believed to enter and stay in the atmosphere until depositing, this estimate allows us to calculate the annual pressure cycle, assuming that the CO2 is distributed approximately evenly over the planet. The model runs with physically plausible parameters and produces encouragingly good fits to in situ measurements made by, for example, the Viking landers. In the next phase we will validate the simulation runs against polar ice cap thickness measurements, as well as compare the calculated CO2 source and sink strengths to the sources and sinks of global atmospheric models.
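The surface energy budget logic can be sketched as follows; the frost-point temperature, latent heat, albedo, and emissivity are rough assumed values for illustration and are not the parameters of the model described above.

```python
SIGMA = 5.67e-8      # Stefan-Boltzmann constant [W m^-2 K^-4]
L_CO2 = 5.9e5        # latent heat of CO2 sublimation [J kg^-1] (approximate)
T_FROST = 148.0      # CO2 frost-point temperature near 6 mbar [K] (approximate)
EMISSIVITY = 0.9     # assumed cap emissivity
ALBEDO = 0.6         # assumed cap albedo

def co2_mass_flux(solar_flux, downward_ir):
    """Net CO2 deposition (+) or sublimation (-) rate in kg m^-2 s^-1, assuming
    the surface sits at the frost point so any energy imbalance goes into the
    phase change rather than into changing the surface temperature."""
    absorbed = (1.0 - ALBEDO) * solar_flux + EMISSIVITY * downward_ir
    emitted = EMISSIVITY * SIGMA * T_FROST**4
    return (emitted - absorbed) / L_CO2   # positive net loss -> frost deposition

# polar night: no sunlight, weak atmospheric IR -> CO2 frost accumulates
print(co2_mass_flux(solar_flux=0.0, downward_ir=10.0))
# polar spring/summer: strong insolation -> frost sublimates (negative value)
print(co2_mass_flux(solar_flux=300.0, downward_ir=20.0))
```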
An object-oriented software for fate and exposure assessments.
Scheil, S; Baumgarten, G; Reiter, B; Schwartz, S; Wagner, J O; Trapp, S; Matthies, M
1995-07-01
The model system CemoS (Chemical Exposure Model System) was developed for the exposure prediction of hazardous chemicals released to the environment. Eight different models were implemented, covering simulation of chemical fate in air, water, soil and plants after continuous or single emissions from point and diffuse sources. Scenario studies are supported by a substance database and an environmental database. All input data are checked for plausibility. Estimation functions for substance properties and environmental processes facilitate generic model calculations. CemoS is implemented in a modular structure using object-oriented programming.
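A toy sketch of what range-based input plausibility checking can look like; the property names and bounds are hypothetical and are not taken from CemoS.

```python
# Hypothetical range-based plausibility checks on substance input data;
# the property names and bounds below are illustrative, not from CemoS.
PLAUSIBLE_RANGES = {
    "molar_mass_g_mol": (10.0, 2000.0),
    "log_kow": (-5.0, 10.0),
    "vapour_pressure_pa": (1e-12, 1e6),
}

def check_substance(data):
    """Return a list of warnings for missing values or values outside the ranges."""
    warnings = []
    for key, (lo, hi) in PLAUSIBLE_RANGES.items():
        value = data.get(key)
        if value is None:
            warnings.append(f"{key}: missing value")
        elif not lo <= value <= hi:
            warnings.append(f"{key}={value} outside plausible range [{lo}, {hi}]")
    return warnings

print(check_substance({"molar_mass_g_mol": 180.2, "log_kow": 14.0}))
```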
The Big Bang and Cosmic Inflation
NASA Astrophysics Data System (ADS)
Guth, Alan H.
2014-03-01
A summary is given of the key developments of cosmology in the 20th century, from the work of Albert Einstein to the emergence of the generally accepted hot big bang model. The successes of this model are reviewed, but emphasis is placed on the questions that the model leaves unanswered. The remainder of the paper describes the inflationary universe model, which provides plausible answers to a number of these questions. It also offers a possible explanation for the origin of essentially all the matter and energy in the observed universe.
The Plausibility of a String Quartet Performance in Virtual Reality.
Bergstrom, Ilias; Azevedo, Sergio; Papiotis, Panos; Saldanha, Nuno; Slater, Mel
2017-04-01
We describe an experiment that explores the contribution of auditory and other features to the illusion of plausibility in a virtual environment that depicts the performance of a string quartet. 'Plausibility' refers to the component of presence that is the illusion that the perceived events in the virtual environment are really happening. The features studied were: Gaze (the musicians either ignored the participant or sometimes looked towards and followed the participant's movements), Sound Spatialization (Mono, Stereo, Spatial), Auralization (no sound reflections, reflections corresponding to a room larger than the one perceived, reflections that exactly matched the virtual room), and Environment (no sound from outside of the room, birdsong and wind corresponding to the outside scene). We adopted a methodology based on color matching theory, where 20 participants were first able to assess their feeling of plausibility in the environment with each of the four features at their highest setting. Then, five times, participants started from a low setting on all features and were able to make transitions from one system configuration to another until they matched their original feeling of plausibility. From these transitions a Markov transition matrix was constructed, along with probabilities of a match conditional on feature configuration. The results show that Environment and Gaze were individually the most important factors influencing the level of plausibility. The highest probability transitions were to improve Environment and Gaze, and then Auralization and Spatialization. We present this work both as a contribution to the methodology of assessing presence without questionnaires and as a demonstration of how various aspects of a musical performance can influence plausibility.
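A simplified sketch of estimating a transition matrix from observed configuration-to-configuration changes; here each of the four features is reduced to a binary low/high level and the example transitions are made up, so this does not reproduce the study's actual feature levels or data.

```python
import numpy as np
from itertools import product

# Each configuration is a tuple of four binary feature levels
# (Gaze, Spatialization, Auralization, Environment); transitions are made up.
configs = list(product([0, 1], repeat=4))
index = {c: i for i, c in enumerate(configs)}

observed_transitions = [
    ((0, 0, 0, 0), (1, 0, 0, 0)),   # a participant improves Gaze first
    ((1, 0, 0, 0), (1, 0, 0, 1)),   # then Environment
    ((0, 0, 0, 0), (0, 0, 0, 1)),
    ((0, 0, 0, 1), (1, 0, 0, 1)),
]

counts = np.zeros((len(configs), len(configs)))
for src, dst in observed_transitions:
    counts[index[src], index[dst]] += 1

# row-normalise counts into a Markov transition matrix (rows with no data stay zero)
row_sums = counts.sum(axis=1, keepdims=True)
transition_matrix = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
print(transition_matrix[index[(0, 0, 0, 0)]])
```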
What if? Neural activity underlying semantic and episodic counterfactual thinking.
Parikh, Natasha; Ruzic, Luka; Stewart, Gregory W; Spreng, R Nathan; De Brigard, Felipe
2018-05-25
Counterfactual thinking (CFT) is the process of mentally simulating alternative versions of known facts. In the past decade, cognitive neuroscientists have begun to uncover the neural underpinnings of CFT, particularly episodic CFT (eCFT), which activates regions in the default network (DN) also activated by episodic memory (eM) recall. However, the engagement of DN regions is different for distinct kinds of eCFT. More plausible counterfactuals and counterfactuals about oneself show stronger activity in DN regions compared to implausible and other- or object-focused counterfactuals. The current study sought to identify a source for this difference in DN activity. Specifically, self-focused counterfactuals may also be more plausible, suggesting that DN core regions are sensitive to the plausibility of a simulation. On the other hand, plausible and self-focused counterfactuals may involve more episodic information than implausible and other-focused counterfactuals, which would imply DN sensitivity to episodic information. In the current study, we compared episodic and semantic counterfactuals generated to be plausible or implausible against episodic and semantic memory reactivation using fMRI. Taking multivariate and univariate approaches, we found that the DN is engaged more during episodic simulations, including eM and all eCFT, than during semantic simulations. Semantic simulations engaged more inferior temporal and lateral occipital regions. The only region that showed strong plausibility effects was the hippocampus, which was significantly engaged for implausible CFT but not for plausible CFT, suggestive of binding more disparate information. Consequences of these findings for the cognitive neuroscience of mental simulation are discussed. Published by Elsevier Inc.
Schmid, Annina B; Coppieters, Michel W
2011-12-01
A high prevalence of dual nerve disorders is frequently reported. How a secondary nerve disorder may develop following a primary nerve disorder remains largely unknown. Although still frequently cited, most explanatory theories were formulated many years ago. Considering recent advances in neuroscience, it is uncertain whether these theories still reflect current expert opinion. A Delphi study was conducted to update views on potential mechanisms underlying dual nerve disorders. In three rounds, seventeen international experts in the field of peripheral nerve disorders were asked to list possible mechanisms and rate their plausibility. Mechanisms with a median plausibility rating of ≥7 out of 10 were considered highly plausible. The experts identified fourteen mechanisms associated with a first nerve disorder that may predispose to the development of another nerve disorder. Of these fourteen mechanisms, nine have not previously been linked to double crush. Four mechanisms were considered highly plausible (impaired axonal transport, ion channel up or downregulation, inflammation in the dorsal root ganglia and neuroma-in-continuity). Eight additional mechanisms were listed which are not triggered by a primary nerve disorder, but may render the nervous system more vulnerable to multiple nerve disorders, such as systemic diseases and neurotoxic medication. Even though many mechanisms were classified as plausible or highly plausible, overall plausibility ratings varied widely. Experts indicated that a wide range of mechanisms has to be considered to better understand dual nerve disorders. Previously listed theories cannot be discarded, but may be insufficient to explain the high prevalence of dual nerve disorders. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Lamb, Frederick K.; Dorris, D.; Clare, A.; Van Wassenhove, S.; Yu, W.; Miller, M. C.
2006-09-01
The spin-frequency behavior of accretion-powered millisecond pulsars is usually inferred by power spectral analysis of their X-ray waveforms. The reported behavior of the spin frequencies of several accretion-powered millisecond pulsars is puzzling in two respects. First, analysis of the waveforms of these pulsars indicates that their spin frequencies are changing faster than predicted by the standard model of accretion torques. Second, there are wild swings of both signs in their apparent spin frequencies that are not correlated with the mass accretion rates inferred from their X-ray fluxes. We have computed the expected X-ray waveforms of pulsars like these, including special and general relativistic effects, and find that the changes in their waveforms produced by physically plausible changes in the flow of accreting matter onto their surfaces can explain their apparently anomalous spin-frequency behavior. This research was supported in part by NASA grant NAG 5-12030, NSF grant AST 0098399, and funds of the Fortner Endowed Chair at Illinois, and NSF grant AST 0098436 at Maryland.
NASA Astrophysics Data System (ADS)
Langousis, Andreas; Kaleris, Vassilios; Xeygeni, Vagia; Magkou, Foteini
2017-04-01
Assessing the availability of groundwater reserves at a regional level requires accurate and robust hydraulic head estimation at multiple locations of an aquifer. To that end, one needs groundwater observation networks that can provide sufficient information to estimate the hydraulic head at unobserved locations. The density of such networks is largely influenced by the spatial distribution of the hydraulic conductivity in the aquifer, and it is usually determined through trial-and-error, by solving the groundwater flow based on a properly selected set of alternative but physically plausible geologic structures. In this work, we use: a) dimensional analysis, and b) a pulse-based stochastic model for simulation of synthetic aquifer structures, to calculate the distribution of the absolute error in hydraulic head estimation as a function of the standardized distance from the nearest measuring locations. The resulting distributions are shown to encompass all possible small-scale structural dependencies, exhibiting characteristics (bounds, multi-modal features etc.) that can be explained using simple geometric arguments. The obtained results are promising, pointing towards the establishment of design criteria based on large-scale geologic maps.
Conroy, M.J.; Runge, M.C.; Nichols, J.D.; Stodola, K.W.; Cooper, R.J.
2011-01-01
The broad physical and biological principles behind climate change and its potential large-scale ecological impacts on biota are fairly well understood, although likely responses of biotic communities at fine spatio-temporal scales are not, limiting the ability of conservation programs to respond effectively to climate change outside the range of human experience. Much of the climate debate has focused on attempts to resolve key uncertainties in a hypothesis-testing framework. However, conservation decisions cannot await resolution of these scientific issues and instead must proceed in the face of uncertainty. We suggest that conservation should proceed in an adaptive management framework, in which decisions are guided by predictions under multiple, plausible hypotheses about climate impacts. Under this plan, monitoring is used to evaluate the response of the system to climate drivers, and management actions (perhaps experimental) are used to confront testable predictions with data, in turn providing feedback for future decision making. We illustrate these principles with the problem of mitigating the effects of climate change on terrestrial bird communities in the southern Appalachian Mountains, USA. © 2010 Elsevier Ltd.
Hundred Thousand Degree Gas in the Virgo Cluster of Galaxies
NASA Astrophysics Data System (ADS)
Sparks, W. B.; Pringle, J. E.; Carswell, R. F.; Donahue, M.; Martin, R.; Voit, M.; Cracraft, M.; Manset, N.; Hough, J. H.
2012-05-01
The physical relationship between low-excitation gas filaments at ~10^4 K, seen in optical line emission, and diffuse X-ray emitting coronal gas at ~10^7 K in the centers of many galaxy clusters is not understood. It is unclear whether the ~10^4 K filaments have cooled and condensed from the ambient hot (~10^7 K) medium or have some other origin such as the infall of cold gas in a merger, or the disturbance of an internal cool reservoir of gas by nuclear activity. Observations of gas at intermediate temperatures (~10^5-10^6 K) can potentially reveal whether the central massive galaxies are gaining cool gas through condensation or losing it through conductive evaporation and hence identify plausible scenarios for transport processes in galaxy cluster gas. Here we present spectroscopic detection of ~10^5 K gas spatially associated with the Hα filaments in a central cluster galaxy, M87, in the Virgo Cluster. The measured emission-line fluxes from triply ionized carbon (C IV 1549 Å) and singly ionized helium (He II 1640 Å) are consistent with a model in which thermal conduction determines the interaction between hot and cold phases.
Observations reveal external driver for Arctic sea-ice retreat
NASA Astrophysics Data System (ADS)
Notz, Dirk; Marotzke, Jochem
2012-04-01
The very low summer extent of Arctic sea ice that has been observed in recent years is often casually interpreted as an early-warning sign of anthropogenic global warming. To examine the validity of this claim, IPCC model simulations have previously been used. Here, we focus on the available observational record to examine whether this record allows us to identify either internal variability, self-acceleration, or a specific external forcing as the main driver for the observed sea-ice retreat. We find that the available observations are sufficient to virtually exclude internal variability and self-acceleration as an explanation for the observed long-term trend, clustering, and magnitude of recent sea-ice minima. Instead, the recent retreat is well described by the superposition of an externally forced linear trend and internal variability. For the externally forced trend, we find a physically plausible strong correlation only with increasing atmospheric CO2 concentration. Our results hence show that the observed evolution of Arctic sea-ice extent is consistent with the claim that the impact of anthropogenic climate change is virtually certainly already observable in Arctic sea ice today.
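A minimal sketch of the kind of decomposition described, regressing extent on CO2 and treating residuals as internal variability; the synthetic numbers below are stand-ins, not the observational record used in the paper.

```python
import numpy as np

# Synthetic stand-in data: regress September sea-ice extent on atmospheric CO2
# and treat the regression residuals as internal variability.
rng = np.random.default_rng(1)
years = np.arange(1979, 2012)
co2 = 337.0 + 1.8 * (years - 1979)                                      # ppm, rough trend
extent = 7.5 - 0.02 * (co2 - 337.0) + rng.normal(0.0, 0.4, years.size)  # 10^6 km^2

slope, intercept = np.polyfit(co2, extent, 1)
forced = intercept + slope * co2          # externally forced component
internal = extent - forced                # residual internal variability
print(f"sensitivity: {slope:.3f} x 10^6 km^2 per ppm CO2")
print(f"internal variability std: {internal.std():.2f} x 10^6 km^2")
```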
Equifinality and process-based modelling
NASA Astrophysics Data System (ADS)
Khatami, S.; Peel, M. C.; Peterson, T. J.; Western, A. W.
2017-12-01
Equifinality is understood as one of the fundamental difficulties in the study of open complex systems, including catchment hydrology. A review of the hydrologic literature reveals that the term equifinality has been widely used, but in many cases inconsistently and without coherent recognition of the various facets of equifinality, which can lead to ambiguity and also to methodological fallacies. Therefore, in this study we first characterise the term equifinality within the context of hydrological modelling by reviewing the genesis of the concept of equifinality and then presenting a theoretical framework. During past decades, equifinality has mainly been studied as a subset of aleatory (arising due to randomness) uncertainty and for the assessment of model parameter uncertainty. Although the connection between parameter uncertainty and equifinality is undeniable, we argue there is more to equifinality than just aleatory parameter uncertainty. That is, the importance of equifinality and epistemic uncertainty (arising due to lack of knowledge) and their implications are overlooked in our current practice of model evaluation. Equifinality and epistemic uncertainty in studying, modelling, and evaluating hydrologic processes are treated as if they can be simply discussed in (or often reduced to) probabilistic terms (as for aleatory uncertainty). The deficiencies of this approach to conceptual rainfall-runoff modelling are demonstrated for selected Australian catchments by examination of parameter and internal flux distributions and interactions within SIMHYD. On this basis, we present a new approach that expands the equifinality concept beyond model parameters to inform epistemic uncertainty. The new approach potentially facilitates the identification and development of more physically plausible models and model evaluation schemes, particularly within the multiple working hypotheses framework, and is generalisable to other fields of environmental modelling as well.
Computational approaches to cognition: the bottom-up view.
Koch, C
1993-04-01
How can higher-level aspects of cognition, such as figure-ground segregation, object recognition, selective focal attention and ultimately even awareness, be implemented at the level of synapses and neurons? A number of theoretical studies emerging out of the connectionist and the computational neuroscience communities are starting to address these issues using neurally plausible models.
Memory colours affect colour appearance.
Witzel, Christoph; Olkkonen, Maria; Gegenfurtner, Karl R
2016-01-01
Memory colour effects show that colour perception is affected by memory and prior knowledge and hence by cognition. None of Firestone & Scholl's (F&S's) potential pitfalls apply to our work on memory colours. We present a Bayesian model of colour appearance to illustrate that an interaction between perception and memory is plausible from the perspective of vision science.
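A one-dimensional Gaussian cue-combination sketch illustrating how a memory-colour prior can shift appearance; this is the textbook Bayesian combination with invented numbers, not the authors' actual model.

```python
def combine_gaussian(prior_mean, prior_var, like_mean, like_var):
    """Posterior of two Gaussian cues: a precision-weighted average."""
    w_prior = (1.0 / prior_var) / (1.0 / prior_var + 1.0 / like_var)
    post_mean = w_prior * prior_mean + (1.0 - w_prior) * like_mean
    post_var = 1.0 / (1.0 / prior_var + 1.0 / like_var)
    return post_mean, post_var

# Arbitrary hue axis: the memory colour of a familiar object (at 0.0) pulls the
# appearance of a physically neutral patch (sensed at 1.0) towards it, and the
# pull is stronger when the sensory evidence is noisier.
print(combine_gaussian(prior_mean=0.0, prior_var=4.0, like_mean=1.0, like_var=0.5))
print(combine_gaussian(prior_mean=0.0, prior_var=4.0, like_mean=1.0, like_var=2.0))
```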
Topic Models in Information Retrieval
2007-08-01
Some Additional Lessons from the Wechsler Scales: A Rejoinder to Kaufman and Keith.
ERIC Educational Resources Information Center
Macmann, Gregg M.; Barnett, David W.
1994-01-01
Reacts to previous arguments regarding verbal and performance constructs of Wechsler Scales. Contends that general factor model is more plausible representation of data for these scales. Suggests issue is moot when considered in regards to practical applications. Supports analysis of needed skills and instructional environments in educational…
Enrollment Simulation and Planning. Strategies & Solutions Series, No. 3.
ERIC Educational Resources Information Center
McIntyre, Chuck
Enrollment simulation and planning (ESP) is centered on the use of statistical models to describe how and why college enrollments fluctuate. College planners may use this approach with confidence to simulate any number of plausible future scenarios. Planners can then set a variety of possible college actions against these scenarios, and examine…
Diagnosis of Cognitive Errors by Statistical Pattern Recognition Methods.
ERIC Educational Resources Information Center
Tatsuoka, Kikumi K.; Tatsuoka, Maurice M.
The rule space model permits measurement of cognitive skill acquisition, diagnosis of cognitive errors, and detection of the strengths and weaknesses of knowledge possessed by individuals. Two ways to classify an individual into his or her most plausible latent state of knowledge include: (1) hypothesis testing--Bayes' decision rules for minimum…
'Where's the flux' star: Exocomets, or Giant Impact?
NASA Astrophysics Data System (ADS)
Meng, Huan; Boyajian, Tabetha; Kennedy, Grant; Lisse, Carey; Marengo, Massimo; Wright, Jason; Wyatt, Mark
2015-12-01
The discovery of an unusual stellar light curve in the Kepler data of KIC 8462852 has sparked a media frenzy about 'alien megastructures' orbiting that star. Behind the public's excitement about 'aliens,' there is however a true science story: KIC 8462852 offers us a unique window to observe, in real time, the rare cataclysmic events happening in a mature extrasolar planetary system. After analysis of the existing constraints of the system, two possible models stand out as the plausible explanations for the light curve anomaly: immediate aftermath of a large planetary or planetesimal impact, or apparitions of a family of comets or comet fragments. The two plausible models predict very different IR evolution over the years following the transit events, providing a good diagnostic to distinguish them. With shallow mapping of the Kepler field in January 2015, Spitzer/IRAC has found KIC 8462852 with a marginal excess at 4.5 micron. Here, we propose to monitor KIC 8462852 on a regular basis to identify and track its IR excess evolution with deeper images and more accurate photometry.
Controls on the Archean climate system investigated with a global climate model.
Wolf, E T; Toon, O B
2014-03-01
The most obvious means of resolving the faint young Sun paradox is to invoke large quantities of greenhouse gases, namely, CO2 and CH4. However, numerous changes to the Archean climate system have been suggested that may have yielded additional warming, thus easing the required greenhouse gas burden. Here, we use a three-dimensional climate model to examine some of the factors that controlled Archean climate. We examine changes to Earth's rotation rate, surface albedo, cloud properties, and total atmospheric pressure following proposals from the recent literature. While the effects of increased planetary rotation rate on surface temperature are insignificant, plausible changes to the surface albedo, cloud droplet number concentrations, and atmospheric nitrogen inventory may each impart global mean warming of 3-7 K. While none of these changes present a singular solution to the faint young Sun paradox, a combination can have a large impact on climate. Global mean surface temperatures at or above 288 K could easily have been maintained throughout the entirety of the Archean if plausible changes to clouds, surface albedo, and nitrogen content occurred.
NASA Astrophysics Data System (ADS)
Groves, David G.; Yates, David; Tebaldi, Claudia
2008-12-01
Climate change may impact water resources management conditions in difficult-to-predict ways. A key challenge for water managers is how to incorporate highly uncertain information about potential climate change from global models into local- and regional-scale water management models and tools to support local planning. This paper presents a new method for developing large ensembles of local daily weather that reflect a wide range of plausible future climate change scenarios while preserving many statistical properties of local historical weather patterns. This method is demonstrated by evaluating the possible impact of climate change on the Inland Empire Utilities Agency service area in southern California. The analysis shows that climate change could impact the region, increasing outdoor water demand by up to 10% by 2040, decreasing local water supply by up to 40% by 2040, and decreasing sustainable groundwater yields by up to 15% by 2040. The range of plausible climate projections suggests the need for the region to augment its long-range water management plans to reduce its vulnerability to climate change.
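The paper's weather-generation method is not detailed in the abstract; the sketch below only illustrates the general delta-change idea of resampling blocks of historical daily weather and shifting them toward a projected climate. The record, block length, and perturbation ranges are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical historical record: 30 years x 365 days of (tmax degC, precip mm).
hist_tmax = 20 + 8 * np.sin(2 * np.pi * np.arange(365) / 365) + rng.normal(0, 2, (30, 365))
hist_prcp = rng.gamma(0.4, 5.0, (30, 365))

def generate_member(d_temp, prcp_factor, block=30):
    """Resample historical blocks, then apply scenario deltas (delta-change method)."""
    tmax, prcp = [], []
    for start in range(0, 365, block):
        yr = rng.integers(0, hist_tmax.shape[0])       # pick a historical year per block
        sl = slice(start, min(start + block, 365))
        tmax.append(hist_tmax[yr, sl] + d_temp)         # shift temperature
        prcp.append(hist_prcp[yr, sl] * prcp_factor)    # scale precipitation
    return np.concatenate(tmax), np.concatenate(prcp)

# Large ensemble spanning a range of plausible climate changes.
ensemble = [generate_member(d_temp=rng.uniform(0.5, 3.0),
                            prcp_factor=rng.uniform(0.7, 1.1)) for _ in range(500)]
print("mean annual precip across ensemble (mm):",
      round(float(np.mean([p.sum() for _, p in ensemble])), 1))
```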
Velocity Resolved - Scalar Modeled Simulations of High Schmidt Number Turbulent Transport
NASA Astrophysics Data System (ADS)
Verma, Siddhartha
The objective of this thesis is to develop a framework to conduct velocity resolved - scalar modeled (VR-SM) simulations, which will enable accurate simulations at higher Reynolds and Schmidt (Sc) numbers than are currently feasible. The framework established will serve as a first step to enable future simulation studies for practical applications. To achieve this goal, in-depth analyses of the physical, numerical, and modeling aspects related to Sc ≫ 1 are presented, specifically when modeling in the viscous-convective subrange. Transport characteristics are scrutinized by examining scalar-velocity Fourier mode interactions in Direct Numerical Simulation (DNS) datasets; the results suggest that scalar modes in the viscous-convective subrange do not directly affect large-scale transport for high Sc. Further observations confirm that discretization errors inherent in numerical schemes can be sufficiently large to wipe out any meaningful contribution from subfilter models. This provides strong incentive to develop more effective numerical schemes to support high Sc simulations. To lower numerical dissipation while maintaining physically and mathematically appropriate scalar bounds during the convection step, a novel method of enforcing bounds is formulated, specifically for use with cubic Hermite polynomials. Boundedness of the scalar being transported is enforced by applying derivative-limiting techniques, and physically plausible single sub-cell extrema are allowed to exist to help minimize numerical dissipation. The proposed bounding algorithm results in significant performance gains in DNS of turbulent mixing layers and of homogeneous isotropic turbulence. Next, the combined physical/mathematical behavior of the subfilter scalar-flux vector is analyzed in homogeneous isotropic turbulence by examining vector orientation in the strain-rate eigenframe. The results indicate no discernible dependence on the modeled scalar field and lead to the identification of the tensor-diffusivity model as a good representation of the subfilter flux. Velocity resolved - scalar modeled simulations of homogeneous isotropic turbulence are conducted to confirm the behavior theorized in these a priori analyses, and suggest that the tensor-diffusivity model is ideal for use in the viscous-convective subrange. Simulations of a turbulent mixing layer are also discussed, with the partial objective of analyzing the Schmidt number dependence of a variety of scalar statistics. Large-scale statistics are confirmed to be relatively independent of the Schmidt number for Sc ≫ 1, which is explained by the dominance of subfilter dissipation over resolved molecular dissipation in the simulations. Overall, the VR-SM framework presented is quite effective in predicting large-scale transport characteristics of high Schmidt number scalars; however, it is determined that prediction of subfilter quantities would entail additional modeling intended specifically for this purpose. The VR-SM simulations presented in this thesis provide us with the opportunity to overlap with experimental studies, while at the same time creating an assortment of baseline datasets for future validation of LES models, thereby satisfying the objectives outlined for this work.
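As a concrete, simplified illustration of bounding a Hermite reconstruction by limiting nodal derivatives, the sketch below applies a standard monotone-slope limiter so interpolated values stay within the data bounds. The thesis's actual algorithm is more permissive (it deliberately admits single sub-cell extrema to reduce dissipation), so treat this as a stand-in, not the method itself.

```python
import numpy as np

def limited_hermite(x, y, xq):
    """Cubic Hermite interpolation with node slopes limited to avoid overshoot."""
    h = np.diff(x)
    d = np.diff(y) / h                        # secant slopes
    m = np.zeros_like(y)                      # node derivatives
    m[1:-1] = np.where(d[:-1] * d[1:] > 0,    # keep slope only inside monotone runs
                       2 * d[:-1] * d[1:] / (d[:-1] + d[1:]), 0.0)
    m[0], m[-1] = d[0], d[-1]
    for i in range(len(h)):                   # clamp slopes relative to the secant
        if d[i] != 0:
            a, b = m[i] / d[i], m[i + 1] / d[i]
            r = np.hypot(a, b)
            if r > 3:
                m[i], m[i + 1] = 3 * a / r * d[i], 3 * b / r * d[i]
    idx = np.clip(np.searchsorted(x, xq) - 1, 0, len(h) - 1)
    t = (xq - x[idx]) / h[idx]
    h00, h10 = 2*t**3 - 3*t**2 + 1, t**3 - 2*t**2 + t
    h01, h11 = -2*t**3 + 3*t**2, t**3 - t**2
    return h00*y[idx] + h10*h[idx]*m[idx] + h01*y[idx+1] + h11*h[idx]*m[idx+1]

x = np.linspace(0, 1, 6)
y = np.array([0.0, 0.0, 0.1, 0.9, 1.0, 1.0])   # sharp scalar front
vals = limited_hermite(x, y, np.linspace(0, 1, 101))
print(vals.min() >= 0.0 and vals.max() <= 1.0)  # stays within physical bounds
```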
Dog walking is associated with more outdoor play and independent mobility for children.
Christian, Hayley; Trapp, Georgina; Villanueva, Karen; Zubrick, Stephen R; Koekemoer, Rachelle; Giles-Corti, Billie
2014-10-01
Dog ownership is positively associated with children's physical activity. It is plausible that dog-facilitated activity rather than dog ownership per se encourages children's physical activity behaviors. We examined relationships between dog walking and children's physical activity, and outdoor play and independent mobility. Cross-sectional survey data from the 2007 Perth (Western Australia) TRavel, Environment, and Kids (TREK) project were analyzed for 727 10-12 year olds with a family dog. Weekly minutes of overall physical activity and walking, local walking and outdoor play were collected from children and parents. Children's weekly pedometer steps were measured. Independent mobility was determined by active independent travel to 15 local destinations. Overall, 55% of children walked their dog. After adjustment, more dog walkers than non-dog walkers walked in the neighborhood (75% vs. 47%), played in the street (60% vs. 45%) and played in the yard (91% vs. 84%) (all p ≤ 0.05). Dog walkers were more independently mobile than non-dog walkers (p ≤ 0.001). Dog walking status was not associated with overall physical activity, walking, or pedometer steps (p>0.05). Dog-facilitated play and physical activity can be an effective strategy for increasing children's physical activity. Dog walking may provide a readily accessible and safe option for improving levels of independent mobility. Copyright © 2014 Elsevier Inc. All rights reserved.
Preservation of physical properties with Ensemble-type Kalman Filter Algorithms
NASA Astrophysics Data System (ADS)
Janjic, T.
2017-12-01
We show the behavior of the localized Ensemble Kalman filter (EnKF) with respect to preservation of positivity and conservation of mass, energy, and enstrophy in toy models that conserve these properties. In order to preserve physical properties in the analysis as well as to deal with non-Gaussianity in an EnKF framework, Janjic et al. (2014) proposed the use of physically based constraints in the analysis step to constrain the solution. In particular, constraints were used to ensure that the ensemble members and the ensemble mean conserve mass and remain nonnegative through measurement updates. In that study, mass and positivity were both preserved by formulating the filter update as a set of quadratic programming problems that incorporate nonnegativity constraints. Simple numerical experiments indicated that this approach can have a significant positive impact on the posterior ensemble distribution, giving results that were more physically plausible both for individual ensemble members and for the ensemble mean. Moreover, in experiments designed to mimic the most important characteristics of convective motion, it is shown that the mass-conservation- and positivity-constrained rain field significantly suppresses the noise seen in localized EnKF results. This is highly desirable in order to prevent spurious storms from appearing in forecasts started from this initial condition (Lange and Craig 2014). In addition, the root mean square error is reduced for all fields and the total mass of rain is correctly simulated. Similarly, the enstrophy, divergence, and energy spectra can be strongly affected by the localization radius, thinning interval, and inflation, and depend on which variable is observed (Zeng and Janjic, 2016). We constructed an ensemble data assimilation algorithm that conserves mass, total energy, and enstrophy (Zeng et al., 2017). In 2D shallow water model experiments, it is found that conserving enstrophy within the data assimilation effectively avoids a spurious energy cascade in the rotational part of the flow and thereby successfully suppresses the noise generated by the data assimilation algorithm. The 14-day deterministic and ensemble free forecasts, starting from the initial condition enforced by both total energy and enstrophy constraints, produce the best prediction.
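A bare-bones illustration of the constrained-analysis idea: each ensemble member's update is posed as a small quadratic program that stays close to a standard Kalman update (here in a plain Euclidean sense) while enforcing nonnegativity and conserving the member's total mass. The observation operator, error statistics, and dimensions are invented for the sketch and are not from Janjic et al.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n, m, N = 8, 4, 20                                  # state dim, obs dim, ensemble size
H = np.zeros((m, n))
H[np.arange(m), np.arange(0, n, 2)] = 1.0           # observe every other cell
R = 0.05 * np.eye(m)                                # assumed observation error covariance

ens = np.abs(rng.normal(1.0, 0.5, (N, n)))          # prior ensemble (e.g., rain mass per cell)
y = H @ np.abs(rng.normal(1.0, 0.5, n)) + rng.multivariate_normal(np.zeros(m), R)

Pf = np.cov(ens.T)                                  # sample forecast covariance
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)      # Kalman gain

def constrained_update(xf):
    """Stay close to the Kalman update while enforcing positivity and mass conservation."""
    x_kf = xf + K @ (y + rng.multivariate_normal(np.zeros(m), R) - H @ xf)  # perturbed obs
    cost = lambda x: 0.5 * np.sum((x - x_kf) ** 2)
    cons = [{"type": "eq", "fun": lambda x: np.sum(x) - np.sum(xf)}]        # conserve mass
    res = minimize(cost, np.clip(x_kf, 0, None), bounds=[(0, None)] * n,
                   constraints=cons, method="SLSQP")
    return res.x

analysis = np.array([constrained_update(x) for x in ens])
print("min analysis value:", analysis.min(),
      " total mass change:", abs(analysis.sum() - ens.sum()))
```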
Models, Data, and War: a Critique of the Foundation for Defense Analyses.
1980-03-12
...inextricably tied to those judgments. Different analysts, with apparently identical knowledge of a real world problem, may develop plausible formulations... The formulation of a computer model--conceiving a mathematical representation of the real world...
Racial/Ethnic Differences in Midlife Women’s Attitudes toward Physical Activity
Im, Eun-Ok; Ko, Young; Hwang, Hyenam; Chee, Wonshik; Stuifbergen, Alexa; Walker, Lorraine; Brown, Adama
2012-01-01
Introduction Women’s racial/ethnic-specific attitudes toward physical activity have been pointed out as a plausible reason for their low participation rates in physical activity. However, very little is actually known about racial/ethnic commonalties and differences in midlife women’s attitudes toward physical activity. The purpose of this study was to explore commonalities and differences in midlife women’s attitudes toward physical activity among four major racial/ethnic groups in the United States (whites, Hispanics, African Americans, and Asians). Methods This was a secondary analysis of the qualitative data from a larger study that explored midlife women’s attitudes toward physical activity. Qualitative data from four racial/ethnic-specific online forums among 90 midlife women were used for this study. The data were analyzed using thematic analysis, and themes reflecting commonalties and differences in the women’s attitudes toward physical activity across the racial/ethnic groups were extracted. Results The themes reflecting the commonalities were: (a) “physical activity is good for health”; (b) “not as active as I could be”; (c) “physical activity was not encouraged”; (d) “inherited diseases motivated participation in physical activity”; and (e) “lack of accessibility to physical activity.” The themes reflecting the differences were: (a) “physical activity as necessity or luxury”; (b) “organized versus natural physical activity”; (c) “individual versus family-oriented physical activity”; and (d) “beauty ideal or culturally accepted physical appearance.” Discussion Developing an intervention that could change the social influences and environmental factors and that could incorporate the women’s racial/ethnic-specific attitudes would be a priority in increasing physical activity of racial/ethnic minority midlife women. PMID:23931661
NASA Astrophysics Data System (ADS)
Penenko, Alexey; Penenko, Vladimir; Tsvetova, Elena; Antokhin, Pavel
2016-04-01
The work is devoted to a data assimilation algorithm for atmospheric chemistry transport and transformation models. In this work, a control function is introduced into the model source term (emission rate) to provide flexibility to adjust to data. This function is evaluated as the constrained minimum of a target functional combining a control-function norm with a norm of the misfit between measured data and its model-simulated analog. The transport and transformation model acts as a constraint. The constrained minimization problem is solved with the Euler-Lagrange variational principle [1], which reduces it to a system of direct, adjoint, and control-function estimate relations. This provides a physically plausible structure for the resulting analysis without the model error covariance matrices that are sought within conventional approaches to data assimilation. The high dimensionality of atmospheric chemistry models and the real-time mode of operation demand computational efficiency from the data assimilation algorithms. Computational issues with complicated models can be addressed with a splitting technique. Within this approach, a complex model is split into a set of relatively independent simpler models equipped with a coupling procedure. In a fine-grained approach, data assimilation is carried out quasi-independently on the separate splitting stages with shared measurement data [2]. In integrated schemes, data assimilation is carried out with respect to the split model as a whole. We compare the two approaches both theoretically and numerically. Data assimilation on the transport stage is carried out with a direct algorithm without iterations. Different algorithms for assimilating data on the nonlinear transformation stage are compared. We compare data assimilation results for both artificial and real measurement data, and with these data we study the impact of transformation processes and data assimilation on the performance of the modeling system [3]. The work has been partially supported by RFBR grant 14-01-00125 and RAS Presidium II.4P. References: [1] Penenko V.V., Tsvetova E.A., Penenko A.V. Development of variational approach for direct and inverse problems of atmospheric hydrodynamics and chemistry // Izvestiya, Atmospheric and Oceanic Physics, 2015, v. 51, p. 311-319. [2] A.V. Penenko and V.V. Penenko. Direct data assimilation method for convection-diffusion models based on splitting scheme. Computational Technologies, 19(4):69-83, 2014. [3] A. Penenko, V. Penenko, R. Nuterman, A. Baklanov and A. Mahura. Direct variational data assimilation algorithm for atmospheric chemistry data with transport and transformation model, Proc. SPIE 9680, 21st International Symposium Atmospheric and Ocean Optics: Atmospheric Physics, 968076 (November 19, 2015); doi:10.1117/12.2206008; http://dx.doi.org/10.1117/12.2206008
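A toy analogue of source-term estimation by constrained minimization, on a 1D advection-diffusion model: because the model is linear in the emission rate, the sensitivity matrix can be built by direct forward runs (standing in for the adjoint relations of the variational formulation), and the regularized misfit is minimized in one step. Grid, coefficients, and sensor locations are assumptions for illustration only.

```python
import numpy as np

# Toy 1D advection-diffusion transport on a periodic grid; an unknown source
# strength per cell (the "control function") is estimated from sparse data by
# Tikhonov-regularized least squares. All numbers here are illustrative.
nx, nt, dx, dt, u, kappa = 50, 200, 1.0, 0.2, 1.0, 0.5

def forward(src):
    c = np.zeros(nx)
    for _ in range(nt):
        adv = -u * (c - np.roll(c, 1)) / dx                         # upwind advection
        dif = kappa * (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
        c = c + dt * (adv + dif + src)
    return c

obs_idx = np.array([5, 15, 25, 35, 45])                             # assumed sensor cells
true_src = np.zeros(nx); true_src[10] = 1.0                         # hidden emission
y = forward(true_src)[obs_idx] + np.random.default_rng(3).normal(0, 0.01, obs_idx.size)

# The model is linear in the source, so build its sensitivity matrix column by
# column (a stand-in for the adjoint machinery of the variational method).
G = np.column_stack([forward(e)[obs_idx] for e in np.eye(nx)])
alpha = 1e-2                                                        # control-norm weight
src_hat = np.linalg.solve(G.T @ G + alpha * np.eye(nx), G.T @ y)
print("estimated peak source cell:", int(np.argmax(src_hat)))
```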
NASA Astrophysics Data System (ADS)
Mauritsen, T.; Stevens, B. B.
2015-12-01
Current climate models exhibit equilibrium climate sensitivities to a doubling of CO2 of 2.0-4.6 K and a weak increase of global mean precipitation. But inferences from the observational record place climate sensitivity near the lower end of the range, and indicate that models underestimate changes in certain aspects of the hydrological cycle under warming. Here we show that both these discrepancies can be explained by a controversial hypothesis of missing negative tropical feedbacks in climate models, known as the iris-effect: Expanding dry and clear regions in a warming climate yield a negative feedback as more infrared radiation can escape to space through this metaphorical opening iris. At the same time the additional infrared cooling of the atmosphere must be balanced by latent heat release, thereby accelerating the hydrological cycle. Alternative suggestions of too little aerosol cooling, missing volcanic eruptions, or insufficient ocean heat uptake in models may explain a slow observed transient warming, but are not able to explain the observed enhanced hydrological cycle. We propose that a temperature-dependency of the extent to which precipitating convective clouds cluster or aggregate into larger clouds constitutes a plausible physical mechanism for the iris-effect. On a large scale, organized convective states are drier than disorganized convection and therefore radiate more in the longwave to space. Thus, if a warmer atmosphere can host more organized convection, then this represents one possible mechanism for an iris-effect. The challenges in modeling, understanding and possibly quantifying a temperature-dependency of convection are, however, substantial.
Lenzenweger, Mark F
2015-01-01
During World War II, the Office of Strategic Services (OSS), the forerunner of the Central Intelligence Agency, sought the assistance of clinical psychologists and psychiatrists to establish an assessment program for evaluating candidates for the OSS. The assessment team developed a novel and rigorous program to evaluate OSS candidates. It is described in Assessment of Men: Selection of Personnel for the Office of Strategic Services (OSS Assessment Staff, 1948). This study examines the sole remaining multivariate data matrix that includes all final ratings for a group of candidates (n = 133) assessed near the end of the assessment program. It applies the modern statistical methods of both exploratory and confirmatory factor analysis to this rich and highly unique data set. An exploratory factor analysis solution suggested 3 factors underlie the OSS assessment staff ratings. Confirmatory factor analysis results of multiple plausible substantive models reveal that a 3-factor model provides the best fit to these data. The 3 factors are emotional/interpersonal factors (social relations, emotional stability, security), intelligence processing (effective IQ, propaganda skills, observing and reporting), and agency/surgency (motivation, energy and initiative, leadership, physical ability). These factors are discussed in terms of their potential utility for personnel selection within the intelligence community.
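For readers unfamiliar with the workflow, a minimal exploratory three-factor extraction on a rating matrix of the same shape might look like the following; the data here are random placeholders, not the OSS ratings, and the variable list is taken from the abstract.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(4)

# Placeholder matrix: 133 candidates x 10 staff ratings (random, not the OSS data).
variables = ["social relations", "emotional stability", "security", "effective IQ",
             "propaganda skills", "observing and reporting", "motivation",
             "energy and initiative", "leadership", "physical ability"]
ratings = rng.normal(size=(133, len(variables)))

fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
fa.fit(ratings)

# Loadings: one row per rating variable, one column per extracted factor.
for var, load in zip(variables, fa.components_.T):
    print(f"{var:25s}" + "  ".join(f"{l:+.2f}" for l in load))
```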
Paleophysical oceanography with an emphasis on transport rates.
Huybers, Peter; Wunsch, Carl
2010-01-01
Paleophysical oceanography is the study of the behavior of the fluid ocean of the past, with a specific emphasis on its climate implications, leading to a focus on the general circulation. Even if the circulation is not of primary concern, heavy reliance on deep-sea cores for past climate information means that knowledge of the oceanic state when the sediments were laid down is a necessity. Like the modern problem, paleoceanography depends heavily on observations, and central difficulties lie with the very limited data types and coverage that are, and perhaps ever will be, available. An approximate separation can be made into static descriptors of the circulation (e.g., its water-mass properties and volumes) and the more difficult problem of determining transport rates of mass and other properties. Determination of the circulation of the Last Glacial Maximum is used to outline some of the main challenges to progress. Apart from sampling issues, major difficulties lie with physical interpretation of the proxies, transferring core depths to an accurate timescale (the "age-model problem"), and understanding the accuracy of time-stepping oceanic or coupled-climate models when run unconstrained by observations. Despite the existence of many plausible explanatory scenarios, few features of the paleocirculation in any period are yet known with certainty.
Harnessing graphical structure in Markov chain Monte Carlo learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stolorz, P.E.; Chew, P.C.
1996-12-31
The Monte Carlo method is recognized as a useful tool in learning and probabilistic inference methods common to many datamining problems. Generalized Hidden Markov Models and Bayes nets are especially popular applications. However, the presence of multiple modes in many relevant integrands and summands often renders the method slow and cumbersome. Recent mean field alternatives designed to speed things up have been inspired by experience gleaned from physics. The current work adopts an approach very similar to this in spirit, but focusses instead upon dynamic programming notions as a basis for producing systematic Monte Carlo improvements. The idea is to approximate a given model by a dynamic programming-style decomposition, which then forms a scaffold upon which to build successively more accurate Monte Carlo approximations. Dynamic programming ideas alone fail to account for non-local structure, while standard Monte Carlo methods essentially ignore all structure. However, suitably-crafted hybrids can successfully exploit the strengths of each method, resulting in algorithms that combine speed with accuracy. The approach relies on the presence of significant "local" information in the problem at hand. This turns out to be a plausible assumption for many important applications. Example calculations are presented, and the overall strengths and weaknesses of the approach are discussed.
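A loose, simplified analogue of the hybrid described: a chain-structured ("local") part of the model is handled exactly by dynamic programming and used as an independence proposal, while a Metropolis-Hastings correction accounts for a non-local term that the decomposition ignores. The model, potentials, and coupling below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(5)
L, gamma = 8, 1.5                                   # chain length, non-local coupling
phi = rng.uniform(0.5, 1.5, (L, 2))                 # unary potentials (illustrative)
psi = np.array([[1.8, 0.6], [0.6, 1.8]])            # pairwise potential favoring agreement

def chain_sample():
    """Exact sample from the chain-only model via a backward DP pass + forward sampling."""
    beta = np.ones((L, 2))
    for i in range(L - 2, -1, -1):                  # backward messages (dynamic programming)
        beta[i] = (psi * (phi[i + 1] * beta[i + 1])).sum(axis=1)
    x = np.empty(L, dtype=int)
    p = phi[0] * beta[0]
    x[0] = rng.choice(2, p=p / p.sum())
    for i in range(1, L):
        p = psi[x[i - 1]] * phi[i] * beta[i]
        x[i] = rng.choice(2, p=p / p.sum())
    return x

def nonlocal_logw(x):                                # the term the DP scaffold ignores
    return gamma * float(x[0] == x[-1])

x, accept = chain_sample(), 0
for _ in range(5000):                                # independence Metropolis-Hastings
    xp = chain_sample()
    if np.log(rng.uniform()) < nonlocal_logw(xp) - nonlocal_logw(x):
        x, accept = xp, accept + 1
print("acceptance rate:", accept / 5000)
```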
Simulation of minimally invasive vascular interventions for training purposes.
Alderliesten, Tanja; Konings, Maurits K; Niessen, Wiro J
2004-01-01
To master the skills required to perform minimally invasive vascular interventions, proper training is essential. A computer simulation environment has been developed to provide such training. The simulation is based on an algorithm specifically developed to simulate the motion of a guide wire--the main instrument used during these interventions--in the human vasculature. In this paper, the design and model of the computer simulation environment is described and first results obtained with phantom and patient data are presented. To simulate minimally invasive vascular interventions, a discrete representation of a guide wire is used which allows modeling of guide wires with different physical properties. An algorithm for simulating the propagation of a guide wire within a vascular system, on the basis of the principle of minimization of energy, has been developed. Both longitudinal translation and rotation are incorporated as possibilities for manipulating the guide wire. The simulation is based on quasi-static mechanics. Two types of energy are introduced: internal energy related to the bending of the guide wire, and external energy resulting from the elastic deformation of the vessel wall. A series of experiments were performed on phantom and patient data. Simulation results are qualitatively compared with 3D rotational angiography data. The results indicate plausible behavior of the simulation.
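The quasi-static energy terms described can be illustrated with a toy 2D version: a discrete wire whose bending, stretching, and wall-penetration energies are summed and minimized. The vessel geometry, stiffness, and penalty weights are invented; the actual simulator's guide-wire model and propagation algorithm are more elaborate.

```python
import numpy as np
from scipy.optimize import minimize

# Vessel centerline: a gentle 2D bend; the wire is penalized for straying more
# than `radius` from it (a crude stand-in for elastic vessel-wall deformation).
s = np.linspace(0.0, 1.0, 200)
centerline = np.column_stack([10.0 * s, 2.0 * s**2])
radius, k_bend, k_stretch, k_wall = 0.8, 1.0, 10.0, 50.0     # illustrative weights

n_pts = 15
rest_len = 10.0 / (n_pts - 1)
init = np.column_stack([np.linspace(0, 10, n_pts), np.zeros(n_pts)])  # straight guess

def energies(p):
    seg = np.diff(p, axis=0)
    lens = np.linalg.norm(seg, axis=1) + 1e-12
    cosang = np.sum(seg[:-1] * seg[1:], axis=1) / (lens[:-1] * lens[1:])
    e_bend = k_bend * np.sum(1.0 - np.clip(cosang, -1.0, 1.0))        # resists curvature
    e_stretch = k_stretch * np.sum((lens - rest_len) ** 2)            # near-inextensible wire
    d = np.linalg.norm(centerline[None, :, :] - p[:, None, :], axis=2).min(axis=1)
    e_wall = k_wall * np.sum(np.clip(d - radius, 0.0, None) ** 2)     # wall deformation
    return e_bend + e_stretch + e_wall

def total(flat):
    p = np.vstack([init[:1], flat.reshape(-1, 2)])                    # proximal end held fixed
    return energies(p)

res = minimize(total, init[1:].ravel(), method="L-BFGS-B")
wire = np.vstack([init[:1], res.x.reshape(-1, 2)])
print("final energy:", round(float(res.fun), 3), "tip:", np.round(wire[-1], 2))
```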
Linking river management to species conservation using dynamic landscape scale models
Freeman, Mary C.; Buell, Gary R.; Hay, Lauren E.; Hughes, W. Brian; Jacobson, Robert B.; Jones, John W.; Jones, S.A.; LaFontaine, Jacob H.; Odom, Kenneth R.; Peterson, James T.; Riley, Jeffrey W.; Schindler, J. Stephen; Shea, C.; Weaver, J.D.
2013-01-01
Efforts to conserve stream and river biota could benefit from tools that allow managers to evaluate landscape-scale changes in species distributions in response to water management decisions. We present a framework and methods for integrating hydrology, geographic context and metapopulation processes to simulate effects of changes in streamflow on fish occupancy dynamics across a landscape of interconnected stream segments. We illustrate this approach using a 482 km2 catchment in the southeastern US supporting 50 or more stream fish species. A spatially distributed, deterministic and physically based hydrologic model is used to simulate daily streamflow for sub-basins composing the catchment. We use geographic data to characterize stream segments with respect to channel size, confinement, position and connectedness within the stream network. Simulated streamflow dynamics are then applied to model fish metapopulation dynamics in stream segments, using hypothesized effects of streamflow magnitude and variability on population processes, conditioned by channel characteristics. The resulting time series simulate spatially explicit, annual changes in species occurrences or assemblage metrics (e.g. species richness) across the catchment as outcomes of management scenarios. Sensitivity analyses using alternative, plausible links between streamflow components and metapopulation processes, or allowing for alternative modes of fish dispersal, demonstrate large effects of ecological uncertainty on model outcomes and highlight needed research and monitoring. Nonetheless, with uncertainties explicitly acknowledged, dynamic, landscape-scale simulations may prove useful for quantitatively comparing river management alternatives with respect to species conservation.
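A compact sketch of the occupancy bookkeeping described: each stream segment is occupied or not, and annual persistence and colonization probabilities are modulated by a simulated flow metric and by connectivity to occupied neighbors. The network, flow series, and probability functions are placeholders, not the study's calibrated relationships.

```python
import numpy as np

rng = np.random.default_rng(6)
n_seg, years = 30, 25
neighbors = {i: [j for j in (i - 1, i + 1) if 0 <= j < n_seg] for i in range(n_seg)}
occupied = rng.random(n_seg) < 0.5                    # initial occupancy of each segment
low_flow = rng.gamma(2.0, 1.0, (years, n_seg))        # annual low-flow magnitude (placeholder)

def persistence(flow):                                # higher low flow -> easier persistence
    return 1.0 / (1.0 + np.exp(-(flow - 1.5)))

def colonization(flow, n_occ_neighbors):              # needs occupied neighbors to colonize
    return (1.0 - 0.5 ** n_occ_neighbors) / (1.0 + np.exp(-(flow - 2.0)))

occupied_per_year = []
for t in range(years):
    new = np.zeros(n_seg, dtype=bool)
    for i in range(n_seg):
        n_occ = sum(occupied[j] for j in neighbors[i])
        p = persistence(low_flow[t, i]) if occupied[i] else colonization(low_flow[t, i], n_occ)
        new[i] = rng.random() < p
    occupied = new
    occupied_per_year.append(int(occupied.sum()))
print("occupied segments per year:", occupied_per_year)
```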
Postseismic deformation following the 2015 Gorkha earthquake and implications for rheology
NASA Astrophysics Data System (ADS)
Rollins, C.; Gualandi, A.; Avouac, J. P.; Liu, J.; Zhang, Z.
2017-12-01
The 2015 Mw 7.9 Gorkha earthquake ruptured the lower, northern edge of the interseismically locked section of the Main Himalayan Thrust (MHT). Independent Component Analysis of location timeseries at GPS stations in Nepal and Tibet reveals significant transient postseismic motion following the mainshock. In order to probe the frictional properties of the MHT and the viscoelastic properties of the crust and upper mantle, we compare the extracted postseismic motions to those predicted by forward models of afterslip and viscoelastic relaxation. Postseismic displacements are minimal south of the coseismic rupture, suggesting that minimal afterslip occurred there and that the upper MHT remains mostly locked. North of the rupture, postseismic displacements feature south-southwest horizontal motion and uplift, each on the order of a few cm in the first postseismic year. A model of stress-driven afterslip extending 100 km north of the coseismic rupture reproduces the horizontal postseismic timeseries and the general pattern of uplift and subsidence; however, this model significantly overpredicts the uplift at stations overlying the rupture, and the down-dip extent of afterslip may be unrealistic. Viscoelastic relaxation in the high-temperature Tibetan crust reproduces the observed SSW motion without overpredicting the uplift; viscoelastic relaxation in the downgoing Indian mantle, however, produces northward motion and subsidence north of the rupture, i.e. opposite to the observed motions. We argue that models of coupled afterslip (confined close to the rupture) and viscoelastic relaxation can reproduce the postseismic timeseries with physically plausible parameters.
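To make the model comparison concrete, a minimal curve-fitting sketch: postseismic displacement at a single hypothetical station is represented as a logarithmic afterslip term plus an exponential relaxation term and fit to a synthetic timeseries. The functional forms are standard simplifications; the amplitudes, timescales, and data are not from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def postseismic(t, a_slip, tau_slip, a_ve, tau_ve):
    """Afterslip (logarithmic) + viscoelastic relaxation (exponential), in cm."""
    return a_slip * np.log1p(t / tau_slip) + a_ve * (1.0 - np.exp(-t / tau_ve))

rng = np.random.default_rng(7)
t = np.linspace(1, 365, 120)                                  # days after the mainshock
synthetic = postseismic(t, 1.2, 20.0, 2.5, 250.0) + rng.normal(0, 0.1, t.size)

p0 = [1.0, 10.0, 1.0, 100.0]                                  # initial guesses
popt, pcov = curve_fit(postseismic, t, synthetic, p0=p0,
                       bounds=(0, [10, 500, 10, 2000]))
print("afterslip amp/tau:", popt[0].round(2), popt[1].round(1),
      " viscoelastic amp/tau:", popt[2].round(2), popt[3].round(1))
```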
Projections of Rapidly Rising Temperatures over Africa Under Low Mitigation
NASA Technical Reports Server (NTRS)
Engelbrecht, Francois; Adegoke, Jimmy; Bopape, Mary-Jane; Naidoo, Mogesh; Garland, Rebecca; Thatcher, Marcus; McGregor, John; Katzfe, Jack; Werner, Micha; Ichoku, Charles;
2015-01-01
An analysis of observed trends in African annual-average near-surface temperatures over the last five decades reveals drastic increases, particularly over parts of the subtropics and central tropical Africa. Over these regions, temperatures have been rising at more than twice the global rate of temperature increase. An ensemble of high-resolution downscalings, obtained using a single regional climate model forced with the sea-surface temperatures and sea-ice fields of an ensemble of global circulation model (GCM) simulations, is shown to realistically represent the relatively strong temperature increases observed in subtropical southern and northern Africa. The amplitudes of warming are generally underestimated, however. Further warming is projected to occur during the 21st century, with plausible increases of 4-6 °C over the subtropics and 3-5 °C over the tropics by the end of the century relative to present-day climate under the A2 (a low mitigation) scenario of the Special Report on Emission Scenarios. High-impact climate events such as heat-wave days and high fire-danger days are consistently projected to increase drastically in their frequency of occurrence. General decreases in soil-moisture availability are projected, even for regions where increases in rainfall are plausible, due to enhanced levels of evaporation. The regional downscalings presented here, and recent GCM projections obtained for Africa, indicate that African annual-averaged temperatures may plausibly rise at about 1.5 times the global rate of temperature increase in the subtropics, and at a somewhat lower rate in the tropics. These projected increases, although drastic, may be conservative given the model underestimations of observed temperature trends. The relatively strong rate of warming over Africa, in combination with the associated increases in extreme temperature events, may be key factors to consider when interpreting the suitability of global mitigation targets in terms of African climate change and climate change adaptation in Africa.
NASA Astrophysics Data System (ADS)
Anderson, Christian Carl
This Dissertation explores the physics underlying the propagation of ultrasonic waves in bone and in heart tissue through the use of Bayesian probability theory. Quantitative ultrasound is a noninvasive modality used for clinical detection, characterization, and evaluation of bone quality and cardiovascular disease. Approaches that extend the state of knowledge of the physics underpinning the interaction of ultrasound with inherently inhomogeneous and isotropic tissue have the potential to enhance its clinical utility. Simulations of fast and slow compressional wave propagation in cancellous bone were carried out to demonstrate the plausibility of a proposed explanation for the widely reported anomalous negative dispersion in cancellous bone. The results showed that negative dispersion could arise from analysis that proceeded under the assumption that the data consist of only a single ultrasonic wave, when in fact two overlapping and interfering waves are present. The confounding effect of overlapping fast and slow waves was addressed by applying Bayesian parameter estimation to simulated data, to experimental data acquired on bone-mimicking phantoms, and to data acquired in vitro on cancellous bone. The Bayesian approach successfully estimated the properties of the individual fast and slow waves even when they strongly overlapped in the acquired data. The Bayesian parameter estimation technique was further applied to an investigation of the anisotropy of ultrasonic properties in cancellous bone. The degree to which fast and slow waves overlap is partially determined by the angle of insonation of ultrasound relative to the predominant direction of trabecular orientation. In the past, studies of anisotropy have been limited by interference between fast and slow waves over a portion of the range of insonation angles. Bayesian analysis estimated attenuation, velocity, and amplitude parameters over the entire range of insonation angles, allowing a more complete characterization of anisotropy. A novel piecewise linear model for the cyclic variation of ultrasonic backscatter from myocardium was proposed. Models of cyclic variation for 100 type 2 diabetes patients and 43 normal control subjects were constructed using Bayesian parameter estimation. Parameters determined from the model, specifically rise time and slew rate, were found to be more reliable in differentiating between subject groups than the previously employed magnitude parameter.
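A simplified numerical analogue of the overlapping-wave problem: a "measured" signal is synthesized from two interfering Gaussian-enveloped pulses, and nonlinear least squares (standing in for the full Bayesian parameter estimation) recovers the individual pulse parameters that a single-wave analysis would conflate. Frequencies, delays, and amplitudes are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

fs = 50e6                                             # sampling rate, Hz (assumed)
t = np.arange(0, 20e-6, 1 / fs)

def pulse(t, amp, t0, f0, sigma):
    """Gaussian-enveloped tone burst centered at time t0."""
    return amp * np.exp(-((t - t0) / sigma) ** 2) * np.cos(2 * np.pi * f0 * (t - t0))

def two_waves(t, a1, t1, f1, s1, a2, t2, f2, s2):
    return pulse(t, a1, t1, f1, s1) + pulse(t, a2, t2, f2, s2)

rng = np.random.default_rng(8)
truth = (1.0, 6.0e-6, 0.55e6, 1.5e-6,                 # earlier "fast" wave
         0.7, 8.5e-6, 0.50e6, 2.0e-6)                 # later, overlapping "slow" wave
signal = two_waves(t, *truth) + rng.normal(0, 0.02, t.size)

p0 = (0.8, 5.5e-6, 0.5e6, 1.0e-6, 0.8, 9.0e-6, 0.5e6, 1.5e-6)
popt, _ = curve_fit(two_waves, t, signal, p0=p0, maxfev=20000)
print("recovered arrival times (us):", popt[1] * 1e6, popt[5] * 1e6)
```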