Brase, Gary L.; Hill, W. Trey
2015-01-01
Bayesian reasoning, defined here as the updating of a posterior probability following new information, has historically been problematic for humans. Classic psychology experiments have tested human Bayesian reasoning through the use of word problems and have evaluated each participant's performance against the normatively correct answer provided by Bayes' theorem. The standard finding is of generally poor performance. Over the past two decades, though, progress has been made on how to improve Bayesian reasoning. Most notably, research has demonstrated that the use of frequencies in a natural sampling framework—as opposed to single-event probabilities—can improve participants' Bayesian estimates. Furthermore, pictorial aids and certain individual difference factors also can play significant roles in Bayesian reasoning success. The mechanics of how to build tasks that show these improvements are not under much debate. The explanations for why naturally sampled frequencies and pictures help Bayesian reasoning remain hotly contested, however, with many researchers falling into ingrained "camps" organized around two dominant theoretical perspectives. The present paper evaluates the merits of these theoretical perspectives, including the weight of empirical evidence, theoretical coherence, and predictive power. By these criteria, the ecological rationality approach is clearly better than the heuristics and biases view. Progress in the study of Bayesian reasoning will depend on continued research that honestly, vigorously, and consistently engages across these different theoretical accounts rather than staying "siloed" within one particular perspective. The process of science requires an understanding of competing points of view, with the ultimate goal being integration. PMID:25873904
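As an editorial aside, the format manipulation at issue is easy to state computationally. The sketch below, in Python, works the same screening-style inference both ways; the base rate, sensitivity, and false-positive rate are illustrative placeholders, not numbers from the paper.

```python
# Hypothetical screening problem: 1% base rate, 80% sensitivity, 9.6%
# false-positive rate (illustrative numbers only, not from the paper).

# Single-event probability format: apply Bayes' theorem directly.
base_rate, sensitivity, false_pos = 0.01, 0.80, 0.096
posterior = (sensitivity * base_rate) / (
    sensitivity * base_rate + false_pos * (1 - base_rate))
print(f"P(disease | positive test) = {posterior:.3f}")   # ~0.078

# Natural-frequency format: the same information as counts from 1000 people.
n = 1000
sick_pos = n * base_rate * sensitivity          # 8 of the 10 sick people test positive
healthy_pos = n * (1 - base_rate) * false_pos   # ~95 of the 990 healthy people do too
print(f"8 / (8 + 95) = {sick_pos / (sick_pos + healthy_pos):.3f}")  # same answer
```

The probability version requires explicitly applying Bayes' theorem, while the natural-frequency version reduces to counting two subsets of cases, which is the intuition behind the facilitation effect the abstract describes.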
NASA Technical Reports Server (NTRS)
Dittrich, Ralph T
1957-01-01
An experimental investigation of combustor total-pressure loss was undertaken to confirm previous theoretical analyses of effects of geometric and flow variables and of heat addition. The results indicate that a reasonable estimate of cold-flow total-pressure-loss coefficient may be obtained from the theoretical analyses. Calculated total-pressure loss due to heat addition agreed with experimental data only when there was no flame ejection from the liner at the upstream air-entry holes.
Reasoning and memory: People make varied use of the information available in working memory.
Hardman, Kyle O; Cowan, Nelson
2016-05-01
Working memory (WM) is used for storing information in a highly accessible state so that other mental processes, such as reasoning, can use that information. Some WM tasks require that participants not only store information, but also reason about that information to perform optimally on the task. In this study, we used visual WM tasks that had both storage and reasoning components to determine both how ideally people are able to reason about information in WM and if there is a relationship between information storage and reasoning. We developed novel psychological process models of the tasks that allowed us to estimate for each participant both how much information they had in WM and how efficiently they reasoned about that information. Our estimates of information use showed that participants are not all ideal information users or minimal information users, but rather that there are individual differences in the thoroughness of information use in our WM tasks. However, we found that our participants tended to be more ideal than minimal. One implication of this work is that to accurately estimate the amount of information in WM, it is important to also estimate how efficiently that information is used. This new analysis contributes to the theoretical premise that human rationality may be bounded by the complexity of task demands.
Estimating effects of alcohol tax increases on highway fatalities
DOT National Transportation Integrated Search
1989-12-01
There can be no doubt that tax increases which raise the price of all alcoholic beverages will reduce the overall consumption of alcohol, which in turn will reduce highway accidents and fatalities. Both theoretical reasoning about the effects of t...
NASA Astrophysics Data System (ADS)
Shevyrin, A. A.; Pogosov, A. G.; Budantsev, M. V.; Bakarov, A. K.; Toropov, A. I.; Ishutkin, S. V.; Shesterikov, E. V.; Kozhukhov, A. S.; Kosolobov, S. S.; Gavrilova, T. A.
2012-12-01
Mechanical stresses are investigated in suspended nanowires made on the basis of GaAs/AlGaAs heterostructures. Though there are no intentionally introduced stressor layers in the heterostructure, the nanowires are subject to Euler buckling instability. In the wide nanowires, out-of-plane buckling is observed at lengths significantly (3 times) smaller than the theoretically estimated critical value, while in the narrow nanowires, the experimentally measured critical length of the in-plane buckling coincides with the theoretical estimation. The possible reasons for this discrepancy are considered. The observed peculiarities should be taken into account in the fabrication of nanomechanical and nanoelectromechanical systems.
The detectability of brown dwarfs - Predictions and uncertainties
NASA Technical Reports Server (NTRS)
Nelson, L. A.; Rappaport, S.; Joss, P. C.
1993-01-01
In order to determine the likelihood for the detection of isolated brown dwarfs in ground-based observations as well as in future space-based astronomy missions, and in order to evaluate the significance of any detections that might be made, we must first know the expected surface density of brown dwarfs on the celestial sphere as a function of limiting magnitude, wavelength band, and Galactic latitude. It is the purpose of this paper to provide theoretical estimates of this surface density, as well as the range of uncertainty in these estimates resulting from various theoretical uncertainties. We first present theoretical cooling curves for low-mass stars that we have computed with the latest version of our stellar evolution code. We use our evolutionary results to compute theoretical brown-dwarf luminosity functions for a wide range of assumed initial mass functions and stellar birth rate functions. The luminosity functions, in turn, are utilized to compute theoretical surface density functions for brown dwarfs on the celestial sphere. We find, in particular, that for reasonable theoretical assumptions, the currently available upper bounds on the brown-dwarf surface density are consistent with the possibility that brown dwarfs contribute a substantial fraction of the mass of the Galactic disk.
Conceptual Challenges in Coordinating Theoretical and Data-Centered Estimates of Probability
ERIC Educational Resources Information Center
Konold, Cliff; Madden, Sandra; Pollatsek, Alexander; Pfannkuch, Maxine; Wild, Chris; Ziedins, Ilze; Finzer, William; Horton, Nicholas J.; Kazak, Sibel
2011-01-01
A core component of informal statistical inference is the recognition that judgments based on sample data are inherently uncertain. This implies that instruction aimed at developing informal inference needs to foster basic probabilistic reasoning. In this article, we analyze and critique the now-common practice of introducing students to both…
Some Simple Solutions to the Problem of Predicting Boundary-Layer Self-Induced Pressures
NASA Technical Reports Server (NTRS)
Bertram, Mitchel H.; Blackstock, Thomas A.
1961-01-01
Simplified theoretical approaches are shown, based on hypersonic similarity boundary-layer theory, which allow reasonably accurate estimates to be made of the surface pressures on plates on which viscous effects are important. The consideration of viscous effects includes the cases where curved surfaces, stream pressure gradients, and leading-edge bluntness are important factors.
Information theoretic quantification of diagnostic uncertainty.
Westover, M Brandon; Eiseman, Nathaniel A; Cash, Sydney S; Bianchi, Matt T
2012-01-01
Diagnostic test interpretation remains a challenge in clinical practice. Most physicians receive training in the use of Bayes' rule, which specifies how the sensitivity and specificity of a test for a given disease combine with the pre-test probability to quantify the change in disease probability incurred by a new test result. However, multiple studies demonstrate physicians' deficiencies in probabilistic reasoning, especially with unexpected test results. Information theory, a branch of probability theory dealing explicitly with the quantification of uncertainty, has been proposed as an alternative framework for diagnostic test interpretation, but is even less familiar to physicians. We have previously addressed one key challenge in the practical application of Bayes' theorem: the handling of uncertainty in the critical first step of estimating the pre-test probability of disease. This essay aims to present the essential concepts of information theory to physicians in an accessible manner, and to extend previous work regarding uncertainty in pre-test probability estimation by placing this type of uncertainty within a principled information theoretic framework. We address several obstacles hindering physicians' application of information theoretic concepts to diagnostic test interpretation. These include issues of terminology (mathematical meanings of certain information theoretic terms differ from clinical or common parlance) as well as the underlying mathematical assumptions. Finally, we illustrate how, in information theoretic terms, one can understand the effect on diagnostic uncertainty of considering ranges instead of simple point estimates of pre-test probability.
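The information-theoretic framing the authors propose can be illustrated with a short, hypothetical calculation: the binary entropy of the disease probability before testing, and the expected entropy after the test, whose difference is the expected information gain. All numbers below are invented for illustration.

```python
import math

def entropy(p):
    """Binary (Shannon) entropy in bits: uncertainty about a yes/no diagnosis."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def post_test(pretest, sens, spec, positive):
    """Bayes' rule for a dichotomous test result."""
    if positive:
        return sens * pretest / (sens * pretest + (1 - spec) * (1 - pretest))
    return (1 - sens) * pretest / ((1 - sens) * pretest + spec * (1 - pretest))

pre = 0.30                 # hypothetical pre-test probability
sens, spec = 0.90, 0.85
p_pos = sens * pre + (1 - spec) * (1 - pre)   # marginal P(positive result)

# Expected residual uncertainty after testing = conditional entropy H(D|T).
h_after = p_pos * entropy(post_test(pre, sens, spec, True)) + \
          (1 - p_pos) * entropy(post_test(pre, sens, spec, False))
print(f"H before: {entropy(pre):.3f} bits, expected H after: {h_after:.3f} bits")
print(f"expected information gain: {entropy(pre) - h_after:.3f} bits")
```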
Collisions involving antiprotons and antihydrogen: an overview
NASA Astrophysics Data System (ADS)
Jonsell, S.
2018-03-01
I give an overview of experimental and theoretical results for antiproton and antihydrogen scattering with atoms and molecules (in particular H, He). At low energies (<1 keV) there are practically no experimental data available. Instead I compare the results from different theoretical calculations, of various degrees of sophistication. At energies up to a few tens of eV, I focus on simple approximations that give reasonably accurate results, as these allow quick estimates of collision rates without embarking on a research project. This article is part of the Theo Murphy meeting issue 'Antiproton physics in the ELENA era'.
Deductive Derivation and Turing-Computerization of Semiparametric Efficient Estimation
Frangakis, Constantine E.; Qian, Tianchen; Wu, Zhenke; Diaz, Ivan
2015-01-01
Researchers often seek robust inference for a parameter through semiparametric estimation. Efficient semiparametric estimation currently requires theoretical derivation of the efficient influence function (EIF), which can be a challenging and time-consuming task. If this task can be computerized, it can save dramatic human effort, which can be transferred, for example, to the design of new studies. Although the EIF is, in principle, a derivative, simple numerical differentiation to calculate the EIF by a computer masks the EIF's functional dependence on the parameter of interest. For this reason, the standard approach to obtaining the EIF relies on the theoretical construction of the space of scores under all possible parametric submodels. This process currently depends on the correctness of conjectures about these spaces, and the correct verification of such conjectures. The correct guessing of such conjectures, though successful in some problems, is a nondeductive process, i.e., is not guaranteed to succeed (e.g., is not computerizable), and the verification of conjectures is generally susceptible to mistakes. We propose a method that can deductively produce semiparametric locally efficient estimators. The proposed method is computerizable, meaning that it does not need either conjecturing, or otherwise theoretically deriving the functional form of the EIF, and is guaranteed to produce the desired estimates even for complex parameters. The method is demonstrated through an example. PMID:26237182
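Their observation that the EIF is, in principle, a derivative can be made concrete for a toy parameter where the answer is known in closed form. The sketch below numerically takes the Gateaux derivative of the mean functional at the empirical distribution; for the mean, theory gives EIF(x) = x − mean, and the numerical derivative reproduces it. This is only an illustration of the derivative view, not the authors' deductive procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=500)

def T(weights, x):
    """Target parameter as a functional of a (discrete) distribution: the mean."""
    return np.sum(weights * x)

def eif_at(x0, x, eps=1e-6):
    """Numerical Gateaux derivative of T at the empirical distribution P_n,
    in the direction of a point mass at x0: d/d(eps) T((1-eps)P_n + eps*delta_x0)."""
    n = len(x)
    base = T(np.full(n, 1.0 / n), x)
    xs = np.append(x, x0)
    w = np.append(np.full(n, (1.0 - eps) / n), eps)
    return (T(w, xs) - base) / eps

# For the mean, theory gives EIF(x) = x - mean; the numerical derivative matches.
for x0 in (0.0, 2.0, 5.0):
    print(x0, eif_at(x0, data), x0 - data.mean())
```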
NASA Technical Reports Server (NTRS)
Goldman, L. J.; Seasholtz, R. G.
1982-01-01
Experimental measurements of the velocity components in the blade-to-blade (axial-tangential) plane were obtained in an axial-flow turbine stator passage and were compared with calculations from three turbomachinery computer programs. The theoretical results were calculated from a quasi-three-dimensional inviscid code, a three-dimensional inviscid code, and a three-dimensional viscous code. Parameter estimation techniques and a particle dynamics calculation were used to assess the accuracy of the laser measurements, which allows a rational basis for comparison of the experimental and theoretical results. The general agreement of the experimental data with the results from the two inviscid computer codes indicates the usefulness of these calculation procedures for turbomachinery blading. The comparison with the viscous code, while generally reasonable, was not as good as for the inviscid codes.
Parallel and Distributed Systems for Probabilistic Reasoning
2012-12-01
work at CMU I had the opportunity to work with Andreas Krause on Gaussian process models for signal quality estimation in wireless sensor networks ...we reviewed the natural parallelization of the belief propagation algorithm using the synchronous schedule and demonstrated both theoretically and...problem is that the power-law sparsity structure, commonly found in graphs derived from natural phenomena (e.g., social networks and the web
NASA Astrophysics Data System (ADS)
Tomita, Shota; Yanagitani, Takahiko; Takayanagi, Shinji; Ichihashi, Hayato; Shibagaki, Yoshiaki; Hayashi, Hiromichi; Matsukawa, Mami
2017-06-01
Longitudinal wave velocity dispersion in ZnO single crystals, owing to the acoustoelectric effect, has been investigated by Brillouin scattering. The resistivity dependence of the longitudinal wave velocity in a c-plane ZnO single crystal was theoretically estimated and experimentally investigated. Velocity dispersion owing to the acoustoelectric effect was observed in the range 0.007-10 Ωm. The observed velocity dispersion shows a similar tendency to the theoretical estimation and gives the piezoelectrically stiffened and unstiffened wave velocities. However, the measured dispersion curve shows a characteristic shift from the theoretical curve. One possible reason is the carrier mobility in the sample, which could be lower than the reported value. From the measured stiffened and unstiffened longitudinal wave velocities, the electromechanical coupling coefficient k33 was determined; its value is in good agreement with reported values. This method is promising for noncontact evaluation of electromechanical coupling. In particular, it could be used for evaluating the unknown piezoelectricity in the thickness direction of semiconductive materials and film resonators.
Topics in global convergence of density estimates
NASA Technical Reports Server (NTRS)
Devroye, L.
1982-01-01
The problem of estimating a density f on R^d from a sample X(1), ..., X(n) of independent identically distributed random vectors is critically examined, and some recent results in the field are reviewed. The following statements are qualified: (1) For any sequence of density estimates f_n, any arbitrarily slow rate of convergence to 0 is possible for E(∫|f_n − f|); (2) In theoretical comparisons of density estimates, ∫|f_n − f| should be used and not ∫|f_n − f|^p, p > 1; and (3) For most reasonable nonparametric density estimates, either there is convergence of ∫|f_n − f| (and then the convergence is in the strongest possible sense for all f), or there is no convergence (even in the weakest possible sense for a single f). There is no intermediate situation.
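For readers who want to see the quantity in statement (2) in practice, a minimal sketch of the L1 error of a kernel density estimate follows; the Gaussian target and the sample size are arbitrary choices.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(1)
sample = rng.normal(0.0, 1.0, size=200)
f_n = gaussian_kde(sample)              # kernel density estimate f_n

# Numerical approximation of the L1 distance: integral of |f_n - f|.
grid = np.linspace(-6.0, 6.0, 2001)
dx = grid[1] - grid[0]
l1 = np.sum(np.abs(f_n(grid) - norm.pdf(grid))) * dx
print(f"integral |f_n - f| ~= {l1:.3f}")
```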
On Short-Time Estimation of Vocal Tract Length from Formant Frequencies
Lammert, Adam C.; Narayanan, Shrikanth S.
2015-01-01
Vocal tract length is highly variable across speakers and determines many aspects of the acoustic speech signal, making it an essential parameter to consider for explaining behavioral variability. A method for accurate estimation of vocal tract length from formant frequencies would afford normalization of interspeaker variability and facilitate acoustic comparisons across speakers. A framework for considering estimation methods is developed from the basic principles of vocal tract acoustics, and an estimation method is proposed that follows naturally from this framework. The proposed method is evaluated using acoustic characteristics of simulated vocal tracts ranging from 14 to 19 cm in length, as well as real-time magnetic resonance imaging data with synchronous audio from five speakers whose vocal tracts range from 14.5 to 18.0 cm in length. Evaluations show improvements in accuracy over previously proposed methods, with 0.631 and 1.277 cm root mean square error on simulated and human speech data, respectively. Empirical results show that the effectiveness of the proposed method is based on emphasizing higher formant frequencies, which seem less affected by speech articulation. Theoretical predictions of formant sensitivity reinforce this empirical finding. Moreover, theoretical insights are explained regarding the reason for differences in formant sensitivity. PMID:26177102
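The acoustic principle behind such estimators can be sketched from the closed-open uniform-tube model, under which the k-th resonance is F_k = (2k − 1)c / (4L), so each formant yields its own length estimate. The formant values and speed of sound below are assumed placeholders, and the simple upper-formant average is only a crude stand-in for the paper's weighting of higher formants.

```python
# Closed-open uniform tube: F_k = (2k - 1) * c / (4 * L), so each formant
# yields a length estimate L_k = (2k - 1) * c / (4 * F_k).
C = 35000.0  # speed of sound in warm, moist air, cm/s (approximate)

def vtl_estimates(formants_hz):
    return [(2 * k - 1) * C / (4 * f) for k, f in enumerate(formants_hz, start=1)]

# Hypothetical formants (Hz) roughly consistent with a ~17.5 cm neutral tract.
formants = [500.0, 1500.0, 2500.0, 3500.0]
est = vtl_estimates(formants)
print([f"{L:.1f} cm" for L in est])       # each ~17.5 cm for this idealized input

# Crude version of emphasizing higher formants: average only the upper two.
print(f"upper-formant average: {sum(est[2:]) / 2:.1f} cm")
```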
Pursuing Improvement in Clinical Reasoning: The Integrated Clinical Education Theory.
Jessee, Mary Ann
2018-01-01
The link between clinical education and development of clinical reasoning is not well supported by one theoretical perspective. Learning to reason during clinical education may be best achieved in a supportive sociocultural context of nursing practice that maximizes reasoning opportunities and facilitates discourse and meaningful feedback. Prelicensure clinical education seldom incorporates these critical components and thus may fail to directly promote clinical reasoning skill. Theoretical frameworks supporting the development of clinical reasoning during clinical education were evaluated. Analysis of strengths and gaps in each framework's support of clinical reasoning development was conducted. Commensurability of philosophical underpinnings was confirmed, and complex relationships among key concepts were elucidated. Six key concepts and three tenets comprise an explanatory predictive theory: the integrated clinical education theory (ICET). ICET provides critical theoretical support for inquiry and action to promote clinical education that improves development of clinical reasoning skill.
Teaching for clinical reasoning - helping students make the conceptual links.
McMillan, Wendy Jayne
2010-01-01
Dental educators complain that students struggle to apply what they have learnt theoretically in the clinical context. This paper is premised on the assumption that there is a relationship between conceptual thinking and clinical reasoning. The paper provides a theoretical framework for understanding the relationship between conceptual learning and clinical reasoning. A review of current literature is used to explain the way in which conceptual understanding influences clinical reasoning and the transfer of theoretical understandings to the clinical context. The paper argues that the connections made between concepts are what is significant about conceptual understanding. From this point of departure the paper describes teaching strategies that facilitate the kinds of learning opportunities that students need in order to develop conceptual understanding and to be able to transfer knowledge from theoretical to clinical contexts. Along with a variety of teaching strategies, the value of concept maps is discussed. The paper provides a framework for understanding the difficulties that students have in developing conceptual networks appropriate for later clinical reasoning. In explaining how students learn for clinical application, the paper provides a theoretical framework that can inform how dental educators facilitate the conceptual learning, and later clinical reasoning, of their students.
Differential Game Theory Application to Intelligent Missile Guidance
2013-06-01
guidance (OG) and game theoretic guidance (GTG). One reason for this development is the fact that the implementation hardware for the guidance system has...state estimation techniques such as the Kalman Filter and others, it is now feasible to implement the OG, GTG and GTG+AI 'intelligent' guidance on...both PN and APN as special cases of OG and GTG; this connection is further explored in this report. The desire to reduce weapon life-cycle cost
Type-curve estimation of statistical heterogeneity
NASA Astrophysics Data System (ADS)
Neuman, Shlomo P.; Guadagnini, Alberto; Riva, Monica
2004-04-01
The analysis of pumping tests has traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. We explore numerically the feasibility of using a simple graphical approach (without numerical inversion) to estimate the geometric mean, integral scale, and variance of local log transmissivity on the basis of quasi steady state head data when a randomly heterogeneous confined aquifer is pumped at a constant rate. By local log transmissivity we mean a function varying randomly over horizontal distances that are small in comparison with a characteristic spacing between pumping and observation wells during a test. Experimental evidence and hydrogeologic scaling theory suggest that such a function would tend to exhibit an integral scale well below the maximum well spacing. This is in contrast to equivalent transmissivities derived from pumping tests by treating the aquifer as being locally uniform (on the scale of each test), which tend to exhibit regional-scale spatial correlations. We show that whereas the mean and integral scale of local log transmissivity can be estimated reasonably well based on theoretical ensemble mean variations of head and drawdown with radial distance from a pumping well, estimating the log transmissivity variance is more difficult. We obtain reasonable estimates of the latter based on theoretical variation of the standard deviation of circumferentially averaged drawdown about its mean.
Ocean subsurface particulate backscatter estimation from CALIPSO spaceborne lidar measurements
NASA Astrophysics Data System (ADS)
Chen, Peng; Pan, Delu; Wang, Tianyu; Mao, Zhihua
2017-10-01
A method for ocean subsurface particulate backscatter estimation from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) on the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) satellite was demonstrated. The effects of the CALIOP receiver's transient response on the attenuated backscatter profile were first removed. The two-way transmittance of the overlying atmosphere was then estimated as the ratio of the measured ocean surface attenuated backscatter to the theoretical value computed from wind-driven wave slope variance. Finally, particulate backscatter was estimated from the depolarization ratio, i.e., the ratio of the column-integrated cross-polarized and co-polarized channels. Statistical results show that the particulate backscatter derived from CALIOP data by this method agrees reasonably well with chlorophyll-a concentrations from MODIS data. This indicates the potential of spaceborne lidar for estimating global primary productivity and particulate carbon stock.
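A hypothetical sketch of two of the steps described above follows: the theoretical surface backscatter from a Cox-Munk-style wave-slope variance, used to back out the two-way atmospheric transmittance, and the column depolarization ratio. All numerical values are invented placeholders, and the formulas are simplified textbook forms rather than the authors' exact processing.

```python
import math

def slope_variance(wind_speed_ms):
    """Cox-Munk mean-square wave slope as a function of wind speed (m/s)."""
    return 0.003 + 0.00512 * wind_speed_ms

def surface_backscatter_theory(wind_speed_ms, fresnel=0.0209):
    """Theoretical specular surface backscatter (sr^-1); Fresnel reflectance
    at normal incidence for water is ~0.0209."""
    return fresnel / (4 * math.pi * slope_variance(wind_speed_ms))

measured_surface = 0.025    # hypothetical column-integrated surface return, sr^-1
wind = 7.0                  # m/s, e.g., from a collocated wind product
T2 = measured_surface / surface_backscatter_theory(wind)  # two-way transmittance

# Depolarization ratio of the column-integrated subsurface return.
cross, co = 0.004, 0.060    # hypothetical integrated channel signals, sr^-1
print(f"two-way atmospheric transmittance ~ {T2:.2f}")
print(f"subsurface depolarization ratio ~ {cross / co:.3f}")
```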
[Theory, method and application of method R on estimation of (co)variance components].
Liu, Wen-Zhong
2004-07-01
Theory, method and application of Method R for estimating (co)variance components were reviewed, so that the method can be used appropriately. Estimation requires R values, which are regressions of predicted random effects calculated from the complete dataset on predicted random effects calculated from random subsets of the same data. By using a multivariate iteration algorithm based on a transformation matrix, combined with a preconditioned conjugate gradient solver for the mixed-model equations, the computational efficiency of Method R is much improved. Method R is computationally inexpensive, and the sampling errors and approximate credible intervals of estimates can be obtained. Disadvantages of Method R include a larger sampling variance than other methods for the same data, and biased estimates in small datasets. As an alternative method, Method R can be used on larger datasets. It is necessary to study its theoretical properties and broaden its application range further.
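The R statistic itself is simple to compute once the two sets of predicted random effects are in hand, as the following toy sketch shows; the simulated "complete data" and "subset" predictions are placeholders for actual BLUP solutions, chosen so that R is near 1 in expectation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical predicted random effects (e.g., breeding values) for the same
# animals: u_full from the complete dataset, u_sub from a random half.
u_full = rng.normal(0.0, 1.0, size=1000)
u_sub = 0.8 * u_full + rng.normal(0.0, 0.4, size=1000)  # noisier subset predictions

# R = regression of the complete-data predictions on the subset predictions.
r_value = np.cov(u_full, u_sub)[0, 1] / np.var(u_sub, ddof=1)
print(f"R ~= {r_value:.3f}")   # ~1 when the assumed variance components are correct
```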
Satellite Power Systems (SPS) space transportation cost analysis and evaluation
NASA Technical Reports Server (NTRS)
1980-01-01
A picture of Satellite Power Systems (SPS) space transportation costs at the present time is given with respect to accuracy as stated, reasonableness of the methods used, assumptions made, and uncertainty associated with the estimates. The approach used consists of examining space transportation costs from several perspectives to perform a variety of sensitivity analyses or reviews and to examine the findings in terms of internal consistency and external comparison with analogous systems. These approaches are summarized as a theoretical and historical review, including a review of stated and unstated assumptions used to derive the costs, and a performance or technical review. These reviews cover the overall transportation program as well as the individual vehicles proposed. The review of overall cost assumptions is the principal means used for estimating the cost uncertainty derived. The cost estimates used as the best current estimate are included.
A Theory of Utility Conditionals: Paralogical Reasoning from Decision-Theoretic Leakage
ERIC Educational Resources Information Center
Bonnefon, Jean-Francois
2009-01-01
Many "if p, then q" conditionals have decision-theoretic features, such as antecedents or consequents that relate to the utility functions of various agents. These decision-theoretic features leak into reasoning processes, resulting in various paralogical conclusions. The theory of utility conditionals offers a unified account of the various forms…
Is "No-Threshold" a "Non-Concept"?
NASA Astrophysics Data System (ADS)
Schaeffer, David J.
1981-11-01
A controversy prominent in scientific literature that has carried over to newspapers, magazines, and popular books is having serious social and political expressions today: “Is there, or is there not, a threshold below which exposure to a carcinogen will not induce cancer?” The distinction between establishing the existence of this threshold (which is a theoretical question) and its value (which is an experimental one) gets lost in the scientific arguments. Establishing the existence of this threshold has now become a philosophical question (and an emotional one). In this paper I qualitatively outline theoretical reasons why a threshold must exist, discuss experiments which measure thresholds on two chemicals, and describe and apply a statistical method for estimating the threshold value from exposure-response data.
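The experimental half of the question, estimating a threshold value from exposure-response data, can be sketched as a piecewise-linear ("hockey-stick") least-squares fit with a grid search over the candidate threshold. The data below are simulated under an assumed true threshold; this is an illustration, not the paper's statistical method.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical exposure-response data with a true threshold at dose 2.0.
dose = np.linspace(0.0, 10.0, 60)
response = np.maximum(0.0, dose - 2.0) * 0.7 + rng.normal(0.0, 0.3, dose.size)

def sse_for_threshold(t):
    """Least-squares slope above a candidate threshold t (zero response below)."""
    x = np.maximum(0.0, dose - t)
    slope = (x @ response) / (x @ x)
    return np.sum((response - slope * x) ** 2)

candidates = np.linspace(0.0, 8.0, 401)
best = min(candidates, key=sse_for_threshold)
print(f"estimated threshold ~= {best:.2f}")
```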
On Deviations between Observed and Theoretically Estimated Values on Additivity-Law Failures
NASA Astrophysics Data System (ADS)
Nayatani, Yoshinobu; Sobagaki, Hiroaki
The authors have reported in previous studies that the average observed results are about half of the corresponding predictions in experiments with large additivity-law failures. One of the reasons for the deviations is studied and clarified by using the original observed data on additivity-law failures in the Nakano experiment. The analyses show that it is essentially difficult to obtain good agreement between the average observed results and the corresponding theoretical predictions in experiments with large additivity-law failures. This is caused by a kind of unavoidable psychological pressure on the subjects who participated in the experiments. We should be satisfied with agreement in trend between them.
A theory of utility conditionals: Paralogical reasoning from decision-theoretic leakage.
Bonnefon, Jean-François
2009-10-01
Many "if p, then q" conditionals have decision-theoretic features, such as antecedents or consequents that relate to the utility functions of various agents. These decision-theoretic features leak into reasoning processes, resulting in various paralogical conclusions. The theory of utility conditionals offers a unified account of the various forms that this phenomenon can take. The theory is built on 2 main components: (1) a representational tool (the utility grid), which summarizes in compact form the decision-theoretic features of a conditional, and (2) a set of folk axioms of decision, which reflect reasoners' beliefs about the way most agents make their decisions. Applying the folk axioms to the utility grid of a conditional allows for the systematic prediction of the paralogical conclusions invited by the utility grid's decision-theoretic features. The theory of utility conditionals significantly extends the scope of current theories of conditional inference and moves reasoning research toward a greater integration with decision-making research.
1993-06-04
In a paper entitled "Understanding and Developing Combat Power," by Colonel Huba Wass de Czege, a method identifying analytical techniques for...reiterates several important doctrinal and theoretical requirements for the development of an optimal evaluation criteria model. Although..."Methode de Raisonnement Tactique" (The Tactical Reasoning Method) is a version of concurrent COA analysis under conditions of uncertainty.
An Upgrade of the Aeroheating Software ''MINIVER''
NASA Technical Reports Server (NTRS)
Louderback, Pierce
2013-01-01
Detailed computational modeling: CFD is often used to create and execute computational domains. Complexity increases when moving from 2D to 3D geometries. Computational time increases as finer grids are used (accuracy). A strong tool, but it takes time to set up and run. MINIVER: uses theoretical and empirical correlations. Orders of magnitude faster to set up and run. Not as accurate as CFD, but gives reasonable estimations. MINIVER's drawbacks: rigid command-line interface; lackluster, unorganized documentation; no central control; multiple versions exist and have diverged.
Meta-analysis of the effect of natural frequencies on Bayesian reasoning.
McDowell, Michelle; Jacobs, Perke
2017-12-01
The natural frequency facilitation effect describes the finding that people are better able to solve descriptive Bayesian inference tasks when represented as joint frequencies obtained through natural sampling, known as natural frequencies, than as conditional probabilities. The present meta-analysis reviews 20 years of research seeking to address when, why, and for whom natural frequency formats are most effective. We review contributions from research associated with the 2 dominant theoretical perspectives, the ecological rationality framework and nested-sets theory, and test potential moderators of the effect. A systematic review of relevant literature yielded 35 articles representing 226 performance estimates. These estimates were statistically integrated using a bivariate mixed-effects model that yields summary estimates of average performances across the 2 formats and estimates of the effects of different study characteristics on performance. These study characteristics range from moderators representing individual characteristics (e.g., numeracy, expertise), to methodological differences (e.g., use of incentives, scoring criteria) and features of problem representation (e.g., short menu format, visual aid). Short menu formats (less computationally complex representations showing joint-events) and visual aids demonstrated some of the strongest moderation effects, improving performance for both conditional probability and natural frequency formats. A number of methodological factors (e.g., exposure to both problem formats) were also found to affect performance rates, emphasizing the importance of a systematic approach. We suggest how research on Bayesian reasoning can be strengthened by broadening the definition of successful Bayesian reasoning to incorporate choice and process and by applying different research methodologies.
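For readers unfamiliar with the machinery, a univariate simplification of such pooling is sketched below: DerSimonian-Laird random-effects aggregation of per-study correct-response proportions on the logit scale. The study counts are invented, and the paper's actual model is bivariate, handling both formats jointly.

```python
import numpy as np

def dersimonian_laird(events, totals):
    """Random-effects pooling of proportions on the logit scale (DL estimator).
    A univariate simplification of the bivariate model used in the meta-analysis."""
    p = (events + 0.5) / (totals + 1.0)                  # continuity-corrected
    y = np.log(p / (1 - p))                              # study logits
    v = 1 / (events + 0.5) + 1 / (totals - events + 0.5) # logit variances
    w = 1 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)
    tau2 = max(0.0, (q - (len(y) - 1)) /
               (np.sum(w) - np.sum(w**2) / np.sum(w)))   # between-study variance
    w_re = 1 / (v + tau2)
    logit = np.sum(w_re * y) / np.sum(w_re)
    return 1 / (1 + np.exp(-logit))                      # back to a proportion

# Hypothetical per-study counts of correct Bayesian answers (natural frequencies).
correct = np.array([24, 40, 15, 55, 30])
n = np.array([60, 90, 50, 120, 75])
print(f"pooled proportion correct ~= {dersimonian_laird(correct, n):.2f}")
```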
NASA Astrophysics Data System (ADS)
Zhang, Wei
2011-07-01
The longitudinal dispersion coefficient, DL, is a fundamental parameter of longitudinal solute transport models: the advection-dispersion (AD) model and various dead-zone models. Since DL cannot be measured directly, and since its calibration using tracer test data is quite expensive and not always available, researchers have developed various methods, theoretical or empirical, for estimating DL from more easily available cross-sectional hydraulic measurements (i.e., the transverse velocity profile, etc.). However, for known and unknown reasons, DL cannot be satisfactorily predicted using these theoretical/empirical formulae. Either there is very large prediction error for the theoretical methods, or there is a lack of generality for the empirical formulae. Here, numerical experiments using MIKE 21, a software package that solves one of the most rigorous two-dimensional hydrodynamic and solute transport equation sets, are presented for longitudinal solute transport in hypothetical streams. An analysis of the evolution of simulated solute clouds indicates that the two fundamental assumptions in Fischer's longitudinal transport analysis may not be reasonable. The transverse solute concentration distribution, and hence the longitudinal transport, appears to be controlled by a dimensionless number ε = Q/(Dt·W), where Q is the average volumetric flowrate, Dt is a cross-sectional average transverse dispersion coefficient, and W is channel flow width. A simple empirical relationship between DL and ε may be established. Analysis and a revision of Fischer's theoretical formula suggest that ε influences the efficiency of transverse mixing and hence has a restraining effect on longitudinal spreading. The findings presented here would improve and expand our understanding of longitudinal solute transport in open channel flow.
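The grouping ε = Q/(Dt·W) above is reconstructed by dimensional analysis from the three quantities the abstract defines (the symbol's definition was garbled in the source record); the check below simply confirms that the units cancel for plausible channel values, which are assumed.

```python
# Dimensional check of the reconstructed dimensionless number eps = Q / (Dt * W):
# (m^3/s) / ((m^2/s) * m) is dimensionless.
Q = 12.0     # average volumetric flow rate, m^3/s (hypothetical)
Dt = 0.05    # cross-sectional average transverse dispersion coefficient, m^2/s
W = 30.0     # channel flow width, m
eps = Q / (Dt * W)
print(f"eps = {eps:.1f}")   # -> 8.0, a pure number
```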
Real-time value-driven diagnosis
NASA Technical Reports Server (NTRS)
Dambrosio, Bruce
1995-01-01
Diagnosis is often thought of as an isolated task in theoretical reasoning (reasoning with the goal of updating our beliefs about the world). We present a decision-theoretic interpretation of diagnosis as a task in practical reasoning (reasoning with the goal of acting in the world), and sketch components of our approach to this task. These components include an abstract problem description, a decision-theoretic model of the basic task, a set of inference methods suitable for evaluating the decision representation in real-time, and a control architecture to provide the needed continuing coordination between the agent and its environment. A principal contribution of this work is the representation and inference methods we have developed, which extend previously available probabilistic inference methods and narrow, somewhat, the gap between probabilistic and logical models of diagnosis.
Atomistic determination of flexoelectric properties of crystalline dielectrics
NASA Astrophysics Data System (ADS)
Maranganti, R.; Sharma, P.
2009-08-01
Upon application of a uniform strain, internal sublattice shifts within the unit cell of a noncentrosymmetric dielectric crystal result in the appearance of a net dipole moment: a phenomenon well known as piezoelectricity. A macroscopic strain gradient on the other hand can induce polarization in dielectrics of any crystal structure, even those which possess a centrosymmetric lattice. This phenomenon, called flexoelectricity, has both bulk and surface contributions: the strength of the bulk contribution can be characterized by means of a material property tensor called the bulk flexoelectric tensor. Several recent studies suggest that strain-gradient induced polarization may be responsible for a variety of interesting and anomalous electromechanical phenomena in materials including electromechanical coupling effects in nonuniformly strained nanostructures, “dead layer” effects in nanocapacitor systems, and “giant” piezoelectricity in perovskite nanostructures among others. In this work, adopting a lattice dynamics based microscopic approach we provide estimates of the flexoelectric tensor for certain cubic crystalline ionic salts, perovskite dielectrics, III-V and II-VI semiconductors. We compare our estimates with experimental/theoretical values wherever available and also revisit the validity of an existing empirical scaling relationship for the magnitude of flexoelectric coefficients in terms of material parameters. It is interesting to note that two independent groups report values of flexoelectric properties for perovskite dielectrics that are orders of magnitude apart: Cross and co-workers from Penn State have carried out experimental studies on a variety of materials including barium titanate while Catalan and co-workers from Cambridge used theoretical ab initio techniques as well as experimental techniques to study paraelectric strontium titanate as well as ferroelectric barium titanate and lead titanate. We find that, in the case of perovskite dielectrics, our estimates agree to an order of magnitude with the experimental and theoretical estimates for strontium titanate. For barium titanate however, while our estimates agree to an order of magnitude with existing ab initio calculations, there exists a large discrepancy with experimental estimates. The possible reasons for the observed deviations are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hess, Peter
An improved microscopic cleavage model, based on a Morse-type and Lennard-Jones-type interaction instead of the previously employed half-sine function, is used to determine the maximum cleavage strength for the brittle materials diamond, tungsten, molybdenum, silicon, GaAs, silica, and graphite. The results of both interaction potentials are in much better agreement with the theoretical strength values obtained by ab initio calculations for diamond, tungsten, molybdenum, and silicon than the previous model. Reasonable estimates of the intrinsic strength are presented for GaAs, silica, and graphite, where first principles values are not available.
Measuring the Reliability of Picture Story Exercises like the TAT
Gruber, Nicole; Kreuzpointner, Ludwig
2013-01-01
As frequently reported, psychometric assessments of Picture Story Exercises, especially variations of the Thematic Apperception Test, mostly reveal inadequate scores for internal consistency. We demonstrate that the reason for this apparent shortcoming is not the coding system itself but the incorrect use of internal consistency coefficients, especially Cronbach's α. This problem can be eliminated by using the category-scores as items instead of the picture-scores. In addition to a theoretical explanation, we prove mathematically why the use of category-scores produces an adequate internal consistency estimate, and we examine our idea empirically with the original data set of the Thematic Apperception Test by Heckhausen and two additional data sets. We found generally higher values when using the category-scores as items instead of picture-scores. From an empirical and theoretical point of view, the estimated reliability is also superior to that obtained by treating each category within a picture as an item. When comparing our suggestion with a multifaceted Rasch model, we provide evidence that our procedure better fits the underlying principles of PSE. PMID:24348902
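The claimed contrast is easy to reproduce in a toy simulation: when category scores within a picture share picture-specific variance, Cronbach's α computed over category-scores as items exceeds α computed over picture-scores as items. The generative model below is an assumption for illustration, not the authors' data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(4)
n, n_pic, n_cat = 200, 6, 3
motive = rng.normal(size=(n, 1, 1))                   # latent motive
pic = rng.normal(size=(n, n_pic, 1))                  # picture-specific variance
noise = rng.normal(scale=1.5, size=(n, n_pic, n_cat))
scores = motive + pic + noise                         # hypothetical PSE codings

print(f"alpha, category-scores as items: "
      f"{cronbach_alpha(scores.reshape(n, n_pic * n_cat)):.2f}")
print(f"alpha, picture-scores as items:  "
      f"{cronbach_alpha(scores.sum(axis=2)):.2f}")
```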
Delgado, J; Liao, J C
1992-01-01
The methodology previously developed for determining the Flux Control Coefficients [Delgado & Liao (1992) Biochem. J. 282, 919-927] is extended to the calculation of metabolite Concentration Control Coefficients. It is shown that the transient metabolite concentrations are related by a few algebraic equations, attributed to mass balance, stoichiometric constraints, quasi-equilibrium or quasi-steady states, and kinetic regulations. The coefficients in these relations can be estimated using linear regression, and can be used to calculate the Control Coefficients. The theoretical basis and two examples are discussed. Although the methodology is derived based on the linear approximation of enzyme kinetics, it yields reasonably good estimates of the Control Coefficients for systems with non-linear kinetics. PMID:1497632
The mean intensity of radiation at 2 microns in the solar neighborhood
NASA Technical Reports Server (NTRS)
Jura, M.
1979-01-01
Consideration is given to the value of the mean intensity at 2 microns in the solar neighborhood, and it is found that it is likely to be a factor of four greater than previously estimated on theoretical grounds. It is noted, however, that the estimate does agree with a reasonable extrapolation of the results of the survey of the Galactic plane by the Japanese group. It is concluded that the mean intensity in the solar neighborhood therefore probably peaks somewhat longward of 1 micron, and that this result is important for understanding the temperature of interstellar dust and the intensity of the far-infrared background. This means specifically that dark clouds probably emit significantly more far-infrared radiation than previously predicted.
The value of body weight measurement to assess dehydration in children.
Pruvost, Isabelle; Dubos, François; Chazard, Emmanuel; Hue, Valérie; Duhamel, Alain; Martinot, Alain
2013-01-01
Dehydration secondary to gastroenteritis is one of the most common reasons for office visits and hospital admissions. The indicator most commonly used to estimate dehydration status is acute weight loss. Post-illness weight gain is considered as the gold standard to determine the true level of dehydration and is widely used to estimate weight loss in research. To determine the value of post-illness weight gain as a gold standard for acute dehydration, we conducted a prospective cohort study in which 293 children, aged 1 month to 2 years, with acute diarrhea were followed for 7 days during a 3-year period. The main outcome measures were an accurate pre-illness weight (if available within 8 days before the diarrhea), post-illness weight, and theoretical weight (predicted from the child's individual growth chart). Post-illness weight was measured for 231 (79%) and both theoretical and post-illness weights were obtained for 111 (39%). Only 62 (21%) had an accurate pre-illness weight. The correlation between post-illness and theoretical weight was excellent (0.978), but bootstrapped linear regression analysis showed that post-illness weight underestimated theoretical weight by 0.48 kg (95% CI: 0.06-0.79, p<0.02). The mean difference in the fluid deficit calculated was 4.0% of body weight (95% CI: 3.2-4.7, p<0.0001). Theoretical weight overestimated accurate pre-illness weight by 0.21 kg (95% CI: 0.08-0.34, p = 0.002). Post-illness weight underestimated pre-illness weight by 0.19 kg (95% CI: 0.03-0.36, p = 0.02). The prevalence of 5% dehydration according to post-illness weight (21%) was significantly lower than the prevalence estimated by either theoretical weight (60%) or clinical assessment (66%, p<0.0001). These data suggest that post-illness weight is of little value as a gold standard to determine the true level of dehydration. The performance of dehydration signs or scales determined by using post-illness weight as a gold standard has to be reconsidered.
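The arithmetic at stake is simple, which makes the size of the reference-weight effect easy to see. The sketch below computes the fluid deficit as a percentage of body weight against three hypothetical reference weights for the same ill-weight measurement; all values are invented.

```python
# Fluid deficit as a percentage of body weight, for the three reference weights
# discussed above (all values hypothetical, in kg).
pre_illness = 10.20      # accurate weight measured shortly before the illness
theoretical = 10.40      # predicted from the child's growth chart
post_illness = 9.90      # weight regained after the illness

def deficit_pct(reference, ill_weight=9.60):
    return 100.0 * (reference - ill_weight) / reference

for name, ref in [("pre-illness", pre_illness),
                  ("theoretical", theoretical),
                  ("post-illness", post_illness)]:
    print(f"{name:>12}: {deficit_pct(ref):.1f}% dehydration")
# The post-illness reference yields the smallest deficit, mirroring the
# abstract's finding that it underestimates the true level of dehydration.
```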
Prototypes Reflect Normative Perceptions: Implications for the Development of Reasoned Action Theory
Hennessy, Michael; Bleakley, Amy; Ellithorpe, Morgan
2017-01-01
The reasoned action approach is one of the most successful behavioral theories in the history of social psychology. This study outlines the theoretical principles of reasoned action and considers when it is appropriate to augment it with a new variable. To demonstrate, we use survey data collected from 4- to 17-year-old U.S. adolescents to test how the "prototype" variables fit into the reasoned action approach. Through confirmatory factor analysis, we find that the prototype measures are normative pressure measures, and when treated as a separate theoretical construct, prototype identity is not completely mediated by the proximal predictors of behavioral intention. We discuss the assumptions of the two theories and finally consider the distinction between augmenting a specific theory versus combining measures derived from different theoretical perspectives. PMID:28612624
Variational Bayesian Parameter Estimation Techniques for the General Linear Model
Starke, Ludger; Ostwald, Dirk
2017-01-01
Variational Bayes (VB), variational maximum likelihood (VML), restricted maximum likelihood (ReML), and maximum likelihood (ML) are cornerstone parametric statistical estimation techniques in the analysis of functional neuroimaging data. However, the theoretical underpinnings of these model parameter estimation techniques are rarely covered in introductory statistical texts. Because of the widespread practical use of VB, VML, ReML, and ML in the neuroimaging community, we reasoned that a theoretical treatment of their relationships and their application in a basic modeling scenario may be helpful for both neuroimaging novices and practitioners alike. In this technical study, we thus revisit the conceptual and formal underpinnings of VB, VML, ReML, and ML and provide a detailed account of their mathematical relationships and implementational details. We further apply VB, VML, ReML, and ML to the general linear model (GLM) with non-spherical error covariance as commonly encountered in the first-level analysis of fMRI data. To this end, we explicitly derive the corresponding free energy objective functions and ensuing iterative algorithms. Finally, in the applied part of our study, we evaluate the parameter and model recovery properties of VB, VML, ReML, and ML, first in an exemplary setting and then in the analysis of experimental fMRI data acquired from a single participant under visual stimulation. PMID:28966572
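One concrete, well-known point of contact between two of these estimators can be shown in a few lines: for the GLM with spherical errors, the ML variance estimate divides the residual sum of squares by n, whereas ReML divides by n − p, correcting for the p estimated regression coefficients. The toy design below is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy GLM with spherical errors: y = X beta + e, e ~ N(0, s2 * I).
n, p = 40, 5
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true, s2_true = rng.normal(size=p), 2.0
y = X @ beta_true + rng.normal(scale=np.sqrt(s2_true), size=n)

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_hat

s2_ml = resid @ resid / n           # ML: biased downward (ignores estimated beta)
s2_reml = resid @ resid / (n - p)   # ReML: accounts for the p fitted coefficients
print(f"ML: {s2_ml:.2f}, ReML: {s2_reml:.2f} (true {s2_true})")
```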
NASA Technical Reports Server (NTRS)
Didwall, E. M.
1981-01-01
Low-latitude magnetic field variations (magnetic storms) caused by large fluctuations in the equatorial ring current were derived from magnetic field magnitude data obtained by the OGO 2, 4, and 6 satellites over an almost 5 year period. Analysis procedures consisted of (1) separating the disturbance field into internal and external parts relative to the surface of the Earth; (2) estimating the response function which relates the internally generated magnetic field variations to the external variations due to the ring current; and (3) interpreting the estimated response function using theoretical response functions for known conductivity profiles. Special consideration is given to possible ocean effects. A temperature profile is proposed using conductivity-temperature data for single-crystal olivine. The resulting temperature profile is reasonable for depths below 150-200 km, but is too high for shallower depths. Apparently, conductivity is not controlled solely by olivine at shallow depths.
NASA Astrophysics Data System (ADS)
Zhang, Chuan-Xin; Yuan, Yuan; Zhang, Hao-Wei; Shuai, Yong; Tan, He-Ping
2016-09-01
Considering features of stellar spectral radiation and sky surveys, we established a computational model for stellar effective temperatures, detected angular parameters and gray rates. Using known stellar flux data in some bands, we estimated stellar effective temperatures and detected angular parameters using stochastic particle swarm optimization (SPSO). We first verified the reliability of SPSO, and then determined reasonable parameters that produced highly accurate estimates under certain gray deviation levels. Finally, we calculated 177,860 stellar effective temperatures and detected angular parameters using data from the Midcourse Space Experiment (MSX) catalog. The derived stellar effective temperatures were accurate when compared with known values from the literature. This research makes full use of catalog data and presents an original technique for studying stellar characteristics. It proposes a novel method for calculating stellar effective temperatures and detecting angular parameters, and provides theoretical and practical data for finding information about radiation in any band.
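A minimal version of this estimation idea, fitting an effective temperature and a scale factor (standing in for the detected angular parameter) to band fluxes by particle swarm optimization, is sketched below. The band wavelengths are MSX-like values, the "observed" fluxes are synthesized from a hypothetical star, and the swarm settings are generic rather than the authors'.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    """Spectral radiance B_lambda(T), W m^-3 sr^-1."""
    return 2 * H * C**2 / lam**5 / np.expm1(H * C / (lam * KB * T))

bands = np.array([4.29e-6, 8.28e-6, 12.13e-6, 14.65e-6])  # MSX-like bands, m
T_true, scale_true = 3500.0, 1e-16                        # hypothetical star
obs = scale_true * planck(bands, T_true)                  # synthetic "observations"

def misfit(params):
    T, log_scale = params
    return float(np.sum((np.log(10.0**log_scale * planck(bands, T))
                         - np.log(obs))**2))

# Minimal global-best particle swarm over (T_eff, log10 scale).
rng = np.random.default_rng(6)
lo, hi = np.array([2000.0, -20.0]), np.array([20000.0, -12.0])
pos = rng.uniform(lo, hi, size=(30, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([misfit(p) for p in pos])
for _ in range(300):
    gbest = pbest[pbest_f.argmin()]
    r1, r2 = rng.random((2, 30, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([misfit(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]

print(f"estimated T_eff ~= {pbest[pbest_f.argmin()][0]:.0f} K (true {T_true:.0f} K)")
```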
Clinical Reasoning in the Assessment and Intervention Planning for a Reading Disability
ERIC Educational Resources Information Center
Sotelo-Dynega, Marlene
2017-01-01
The purpose of this article is to provide the reader with insight into the clinical reasoning process involved in the assessment and intervention planning for a child with a reading disability. A Cattell-Horn-Carroll (CHC) theoretical/neuropsychological approach shall serve as the foundational theoretical framework for this case study, and…
Clarifying assumptions to enhance our understanding and assessment of clinical reasoning.
Durning, Steven J; Artino, Anthony R; Schuwirth, Lambert; van der Vleuten, Cees
2013-04-01
Deciding on a diagnosis and treatment is essential to the practice of medicine. Developing competence in these clinical reasoning processes, commonly referred to as diagnostic and therapeutic reasoning, respectively, is required for physician success. Clinical reasoning has been a topic of research for several decades, and much has been learned. However, there still exists no clear consensus regarding what clinical reasoning entails, let alone how it might best be taught, how it should be assessed, and the research and practice implications therein. In this article, the authors first discuss two contrasting epistemological views of clinical reasoning and related conceptual frameworks. They then outline four different theoretical frameworks held by medical educators that the authors believe guide educators' views on the topic, knowingly or not. Within each theoretical framework, the authors begin with a definition of clinical reasoning (from that viewpoint) and then discuss learning, assessment, and research implications. The authors believe these epistemologies and four theoretical frameworks also apply to other concepts (or "competencies") in medical education. The authors also maintain that clinical reasoning encompasses the mental processes and behaviors that are shared (or evolve) between the patient, physician, and the environment (i.e., practice setting). Clinical reasoning thus incorporates components of all three factors (patient, physician, environment). The authors conclude by outlining practical implications and potential future areas for research.
Phan, Hoang Vu; Park, Hoon Cheol
2018-04-18
Studies on wing kinematics indicate that flapping insect wings operate at higher angles of attack (AoAs) than conventional rotary wings. Thus, effectively flying an insect-like flapping-wing micro air vehicle (FW-MAV) requires appropriate wing design for achieving low power consumption and high force generation. Even though theoretical studies can be performed to identify appropriate geometric AoAs for a wing to achieve efficient hovering flight, designing an actual wing by implementing these angles into a real flying robot is challenging. In this work, we investigated the wing morphology of an insect-like tailless FW-MAV, named KUBeetle, for obtaining a high vertical force/power ratio or power loading. Several deformable wing configurations with various vein structures were designed, and their characteristics of vertical force generation and power requirement were theoretically and experimentally investigated. The results of the theoretical study based on the unsteady blade element theory (UBET) were validated with reference data to prove the accuracy of power estimation. A good agreement between estimated and measured results indicated that the proposed UBET model can be used to effectively estimate the power requirement and force generation of an FW-MAV. Among the investigated wing configurations operating at flapping frequencies of 23 Hz to 29 Hz, estimated results showed that the wing with a suitable vein placed outboard exhibited an increase of approximately 23.7% ± 0.5% in vertical force and approximately 10.2% ± 1.0% in force/power ratio. The estimation was supported by experimental results, which showed that the suggested wing enhanced vertical force by approximately 21.8% ± 3.6% and force/power ratio by 6.8% ± 1.6%. In addition, wing kinematics during the flapping motion was analyzed to determine the reason for the observed improvement.
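For orientation, a quasi-steady blade-element sketch, far simpler than the unsteady UBET model referenced, is given below: it integrates element forces along the span over one flapping cycle to produce a stroke-averaged vertical force and aerodynamic power. The geometry, kinematics, and force-coefficient fits are illustrative assumptions, not KUBeetle's values.

```python
import numpy as np

# Hedged quasi-steady blade-element sketch (a simplification of the UBET model).
rho = 1.225
R, c = 0.08, 0.025                    # wing length and mean chord (m), assumed
f, phi_amp = 26.0, np.radians(60)     # flapping frequency (Hz) and amplitude
aoa = np.radians(30)                  # mid-stroke geometric angle of attack
CL = 1.8 * np.sin(2*aoa)              # simplified translational lift coefficient
CD = 1.9 - 1.5 * np.cos(2*aoa)        # matching drag-coefficient model

t = np.linspace(0.0, 1.0/f, 200)
phidot = phi_amp * 2*np.pi*f * np.cos(2*np.pi*f*t)   # flapping angular rate
r = np.linspace(0.1*R, R, 40)                        # blade elements
Fz, Pw = [], []
for w in phidot:                       # crudely symmetric half-strokes
    U = np.abs(w) * r                  # element speed
    Fz.append(np.trapz(0.5*rho*U**2*c*CL, r))        # vertical force
    Pw.append(np.trapz(0.5*rho*U**2*c*CD * U, r))    # aerodynamic power
print(f"mean force {np.mean(Fz)*1e3:.1f} mN, mean power {np.mean(Pw)*1e3:.0f} mW,"
      f" force/power {np.mean(Fz)/np.mean(Pw)*1e3:.0f} mN/W")
```

Note that this counts only aerodynamic power, so the force/power ratio it reports is an upper bound relative to measurements that include inertial and transmission losses.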
Li, Luyang; Liu, Yun-Hui; Jiang, Tianjiao; Wang, Kai; Fang, Mu
2018-02-01
Despite tremendous efforts made over the years, trajectory tracking control (TC) of a nonholonomic mobile robot (NMR) without a global positioning system remains an open problem. The major reason is the difficulty of localizing the robot using its onboard sensors only. In this paper, a newly designed adaptive trajectory TC method is proposed for the NMR without its position, orientation, and velocity measurements. The controller is designed on the basis of a novel algorithm that estimates the position and velocity of the robot online from the visual feedback of an omnidirectional camera. It is theoretically proved that the proposed algorithm drives the TC errors to asymptotically converge to zero. Real-world experiments are conducted on a wheeled NMR to validate the feasibility of the control system.
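The kinematic layer underlying such controllers can be sketched with the classic posture-tracking law for a unicycle robot. This is not the paper's adaptive visual estimator; the gains and reference trajectory are arbitrary illustrative choices.

```python
import numpy as np

# Hedged sketch of the kinematic layer only: Kanayama-style posture tracking
# for a nonholonomic unicycle following a circular reference.
kx, ky, kth = 1.0, 4.0, 2.0             # illustrative gains
dt, steps = 0.01, 2000
vr, wr = 0.5, 0.2                       # reference forward/angular velocity
xr = np.array([0.0, 0.0, 0.0])          # reference pose (x, y, theta)
x = np.array([0.5, -0.5, 0.5])          # actual pose, initially off the path

for _ in range(steps):
    dx, dy = xr[0] - x[0], xr[1] - x[1]
    cth, sth = np.cos(x[2]), np.sin(x[2])
    xe, ye = cth*dx + sth*dy, -sth*dx + cth*dy   # error in the robot frame
    the = xr[2] - x[2]
    v = vr*np.cos(the) + kx*xe                   # tracking control law
    w = wr + vr*(ky*ye + kth*np.sin(the))
    xr += dt * np.array([vr*np.cos(xr[2]), vr*np.sin(xr[2]), wr])
    x += dt * np.array([v*np.cos(x[2]), v*np.sin(x[2]), w])

print("final tracking error (xe, ye, theta_e):", np.round([xe, ye, the], 4))
```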
Teaching Children Real-World Knowledge and Reasoning.
ERIC Educational Resources Information Center
Williams, Wendy M.
2002-01-01
Introduces this special issue topic by asserting that empirically powerful and theoretically guided educational research needs to be designed with the teacher in mind. Provides rationale for research focus on real-world knowledge and reasoning, and reasons for selecting research projects on inductive reasoning, mathematical reasoning, map skills,…
Damping of lower hybrid waves by low-frequency drift waves
NASA Astrophysics Data System (ADS)
Krall, Nicholas A.
1989-11-01
The conditions under which a spectrum of lower hybrid drift waves will decay into low-frequency drift (LFD) waves are calculated. The purpose is to help understand why lower hybrid drift waves are not seen in all field-reversed configuration (FRC) experiments in which they are predicted. It is concluded that if there is in the plasma an LFD wave amplitude above a critical level, lower hybrid waves will decay into low-frequency drift waves. The critical level required to stabilize TRX-2 [Phys. Fluids 30, 1497 (1987)] is calculated and found to be reasonably consistent with theoretical estimates.
A useful observable for estimating keff in fast subcritical systems
NASA Astrophysics Data System (ADS)
Saracco, Paolo; Borreani, Walter; Chersola, Davide; Lomonaco, Guglielmo; Ricco, Gianni; Ripani, Marco
2017-09-01
The neutron multiplication factor keff is a key quantity for characterizing subcritical neutron multiplying devices and for understanding their physical behaviour, being related to the fundamental eigenvalue of the Boltzmann transport equation. Both the maximum available power - and all quantities related to it, like, e.g., the effectiveness in burning nuclear wastes - as well as reactor kinetics and dynamics depend on keff. Nevertheless, keff is not directly measurable and its determination results from the solution of an inverse problem: minimizing the model dependence of the solution for keff then becomes a critical issue, relevant for both practical and theoretical reasons.
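The inverse nature of the problem is visible even in the crudest point model. The sketch below inverts a measured count rate for keff under the simple source-multiplication relation R = eps * S / (1 - keff); the detector efficiency and source strength are assumed known, which is precisely the kind of model dependence the paper seeks to minimize.

```python
# Hedged illustration of the inverse problem: infer k_eff from a measured count
# rate in a source-driven subcritical assembly, under the crude point-model
# relation R = eps * S / (1 - k_eff). All numbers are illustrative.
eps, S = 1e-3, 1e8        # detector efficiency, source neutrons per second
R_measured = 2.5e6        # observed count rate (counts per second)
k_eff = 1.0 - eps * S / R_measured
print(f"k_eff = {k_eff:.3f}")   # 0.960 for these numbers
```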
Channel effect of the modified powdery mixture of ammonium nitrate and fuel oil
NASA Astrophysics Data System (ADS)
Wu, Chun-Ping; Liu, Lian-Sheng; Wang, Xu-Guang; Liu, Yong; Wang, Yin-Jun
2010-10-01
The modified powdery mixture of ammonium nitrate and fuel oil (MPANFO) is a new breed of industrial explosives developed years ago in China. As one of the important properties of an industrial explosive, the channel effect of MPANFO was reported in this paper. A series of experiments were conducted to determine the channel effect of MPANFO. The blasthole diameter range was estimated to avoid the channel effect of MPANFO. Three empirical formulae for predicting the detonation length of MPANFO were provided in terms of the channel effect. Experiments and theoretical analysis indicate that the channel effect of MPANFO is very serious. The reason why the channel effect of MPANFO is worse than that of other industrial explosives is explained at a theoretical level. In addition, some properties of MPANFO, such as sympathetic distance, detonation velocity and brisance, are determined.
Effects of internal friction on contact formation dynamics of polymer chain
NASA Astrophysics Data System (ADS)
Bian, Yukun; Li, Peng; Zhao, Nanrong
2018-04-01
A theoretical framework is presented to study the contact formation dynamics of polymer chains, accompanied by electron-transfer quenching. Based on a non-Markovian Smoluchowski equation supplemented with an exponential sink term, we derive the mean time of contact formation under the Wilemski-Fixman approximation. Particular attention is paid to the effect of internal friction. We find that internal friction induces a novel fractional viscosity dependence, which becomes more remarkable as internal friction increases. Furthermore, we clarify that internal friction inevitably promotes a diffusion-controlled mechanism by slowing the chain relaxation. Finally, we apply our theory to rationalise the experimental investigation of contact formation in a single-stranded DNA. The theoretical results reproduce the experimental data very well with quite reasonable estimates for the intrinsic parameters. Such good agreement clearly demonstrates the validity of our theory, which appropriately addresses the role of internal friction in the relevant dynamics.
NASA Astrophysics Data System (ADS)
Borisov, S. P.; Bountin, D. A.; Gromyko, Yu. V.; Khotyanovsky, D. V.; Kudryavtsev, A. N.
2016-10-01
Development of disturbances in the supersonic boundary layer on sharp and blunted cones is studied both experimentally and theoretically. The experiments were conducted at the Transit-M hypersonic wind tunnel of the Institute of Theoretical and Applied Mechanics. Linear stability calculations use the basic flow profiles provided by the numerical simulations performed by solving the Navier-Stokes equations with the ANSYS Fluent and the in-house CFS3D code. Both the global pseudospectral Chebyshev method and the local iteration procedure are employed to solve the eigenvalue problem and determine linear stability characteristics. The calculated amplification factors for disturbances of various frequencies are compared with the experimentally measured pressure fluctuation spectra at different streamwise positions. It is shown that the linear stability calculations predict quite accurately the frequency of the most amplified disturbances and enable us to estimate reasonably well their relative amplitudes.
Calculating the social cost of illegal drugs: a theoretical approach.
Diomidous, Marianna; Zimeras, Stelios; Mechili, Aggelos
2013-01-01
The use of illegal drugs generates a wide range of social harms, in ways that depend on how policy defines the problem. The challenge is how to model the impact of illegal drug use over a long time period, considering the factors that affect the process. Based on such models, estimates can be computed and predictions can be made. Illegal drug use might affect the economic and social structure of the public system, calling for direct and effective decisions to overcome the problem. For that reason, the calculation of the social cost related to the use of illegal drugs can be introduced over time (t) as a proposed social measure to quantify the variability of this social indicator in society. In this work, a theoretical approach for the calculation of the social cost of illegal drugs is proposed and models over time are defined.
Amthor, Jeffrey S
2010-12-01
The relationship between solar radiation capture and potential plant growth is of theoretical and practical importance. The key processes constraining the transduction of solar radiation into phyto-energy (i.e. free energy in phytomass) were reviewed to estimate potential solar-energy-use efficiency. Specifically, the output:input stoichiometries of photosynthesis and photorespiration in C3 and C4 systems, mobilization and translocation of photosynthate, and biosynthesis of major plant biochemical constituents were evaluated. The maintenance requirement, an area of important uncertainty, was also considered. For a hypothetical C3 grain crop with a full canopy at 30°C and 350 ppm atmospheric [CO2], theoretically potential efficiencies (based on extant plant metabolic reactions and pathways) were estimated at c. 0.041 J J^-1 incident total solar radiation, and c. 0.092 J J^-1 absorbed photosynthetically active radiation (PAR). At 20°C, the calculated potential efficiencies increased to 0.053 and 0.118 J J^-1 (incident total radiation and absorbed PAR, respectively). Estimates for a hypothetical C4 cereal were c. 0.051 and c. 0.114 J J^-1, respectively. These values, which cannot be considered as precise, are less than some previous estimates, and the reasons for the differences are considered. Field-based data indicate that exceptional crops may attain a significant fraction of potential efficiency. © The Author (2010). Journal compilation © New Phytologist Trust (2010).
NASA Astrophysics Data System (ADS)
Wang, Kaicun; Dickinson, Robert E.
2012-06-01
This review surveys the basic theories, observational methods, satellite algorithms, and land surface models for terrestrial evapotranspiration, E (or λE, i.e., latent heat flux), including a long-term variability and trends perspective. The basic theories used to estimate E are the Monin-Obukhov similarity theory (MOST), the Bowen ratio method, and the Penman-Monteith equation. The latter two theoretical expressions combine MOST with the surface energy balance. Estimates of E can differ substantially between these three approaches because of their use of different input data. Surface and satellite-based measurement systems can provide accurate estimates of the diurnal, daily, and annual variability of E, but their estimation of longer-term variability is largely unestablished. A reasonable estimate of E as a global mean can be obtained from a surface water budget method, but its regional distribution is still rather uncertain. Current land surface models provide widely different ratios of the transpiration by vegetation to total E. This source of uncertainty therefore limits the capability of models to provide the sensitivities of E to precipitation deficits and land cover change.
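The Penman-Monteith combination equation mentioned above can be written down directly; the sketch below evaluates it for a single set of illustrative meteorological inputs, not data from the review.

```python
import numpy as np

# Hedged sketch of the Penman-Monteith combination equation; inputs are
# illustrative single values.
def penman_monteith(Rn, G, Ta, ea, ra, rs, P=101.3):
    """Return latent heat flux lambda*E in W m-2.
    Rn, G: net radiation / ground heat flux (W m-2); Ta: air temperature (degC);
    ea: actual vapour pressure (kPa); ra, rs: aerodynamic / surface resistance (s m-1)."""
    rho_a, cp = 1.2, 1013.0                        # air density (kg m-3), cp (J kg-1 K-1)
    es = 0.6108 * np.exp(17.27*Ta / (Ta + 237.3))  # saturation vapour pressure (kPa)
    delta = 4098.0 * es / (Ta + 237.3)**2          # slope of the es(T) curve (kPa K-1)
    gamma = 0.000665 * P                           # psychrometric constant (kPa K-1)
    return (delta*(Rn - G) + rho_a*cp*(es - ea)/ra) / (delta + gamma*(1 + rs/ra))

lam = 2.45e6  # latent heat of vaporization (J kg-1)
LE = penman_monteith(Rn=400.0, G=40.0, Ta=25.0, ea=1.8, ra=50.0, rs=70.0)
print(f"lambda*E = {LE:.0f} W m-2  ->  E = {LE/lam*3600:.2f} mm per hour")
```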
Improvements in geothermometry. Final technical report. Rev
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potter, J.; Dibble, W.; Parks, G.
1982-08-01
Alkali and alkaline earth geothermometers are useful for estimating geothermal reservoir temperatures, though a general theoretical basis has yet to be established and experimental calibration needs improvement. Equilibrium cation exchange between feldspars provided the original basis for the Na-K and Na-K-Ca geothermometers (Fournier and Truesdell, 1973), but theoretical, field and experimental evidence prove that neither equilibrium nor feldspars are necessary. Here, evidence is summarized in support of these observations, concluding that these geothermometers can be expected to have a surprisingly wide range of applicability, but that the reasons behind such broad applicability are not yet understood. Early experimental work proved that water-rock interactions are slow at low temperatures, so experimental calibration at temperatures below 150°C is impractical. Theoretical methods and field data were used instead for all work at low temperatures. Experimental methods were emphasized for temperatures above 150°C, and the simplest possible solid and solution compositions were used to permit investigation of one process or question at a time. Unexpected results in experimental work prevented complete integration of the various portions of the investigation.
Theoretical methods for estimating moments of inertia of trees and boles.
John A. Sturos
1973-01-01
Presents a theoretical method for estimating the mass moments of inertia of full trees and boles about a transverse axis. Estimates from the theoretical model compared closely with experimental data on aspen and red pine trees obtained in the field by the pendulum method. The theoretical method presented may be used to estimate the mass moments of inertia and other...
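The pendulum method referred to rests on two standard relations, the physical-pendulum period formula and the parallel-axis theorem. A minimal sketch with assumed, illustrative tree parameters:

```python
import numpy as np

# Hedged sketch of the pendulum method used to check the theoretical estimates:
# the small-amplitude period of a suspended tree gives its moment of inertia
# about the pivot; the parallel-axis theorem moves it to the center of mass.
g = 9.81
m = 250.0        # tree mass (kg), assumed
d = 3.2          # distance from pivot to center of mass (m), assumed
T = 4.1          # measured small-amplitude period (s), assumed

I_pivot = m*g*d*T**2 / (4*np.pi**2)   # physical-pendulum relation
I_cm = I_pivot - m*d**2               # parallel-axis theorem
print(f"I_pivot = {I_pivot:.0f} kg m^2, I_cm = {I_cm:.0f} kg m^2")
```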
Polynomial Phase Estimation Based on Adaptive Short-Time Fourier Transform
Jing, Fulong; Zhang, Chunjie; Si, Weijian; Wang, Yu; Jiao, Shuhong
2018-01-01
Polynomial phase signals (PPSs) have numerous applications in many fields including radar, sonar, geophysics, and radio communication systems. Therefore, estimation of PPS coefficients is very important. In this paper, a novel approach for PPS parameters estimation based on adaptive short-time Fourier transform (ASTFT), called the PPS-ASTFT estimator, is proposed. Using the PPS-ASTFT estimator, both one-dimensional and multi-dimensional searches and error propagation problems, which widely exist in PPSs field, are avoided. In the proposed algorithm, the instantaneous frequency (IF) is estimated by S-transform (ST), which can preserve information on signal phase and provide a variable resolution similar to the wavelet transform (WT). The width of the ASTFT analysis window is equal to the local stationary length, which is measured by the instantaneous frequency gradient (IFG). The IFG is calculated by the principal component analysis (PCA), which is robust to the noise. Moreover, to improve estimation accuracy, a refinement strategy is presented to estimate signal parameters. Since the PPS-ASTFT avoids parameter search, the proposed algorithm can be computed in a reasonable amount of time. The estimation performance, computational cost, and implementation of the PPS-ASTFT are also analyzed. The conducted numerical simulations support our theoretical results and demonstrate an excellent statistical performance of the proposed algorithm. PMID:29438317
Polynomial Phase Estimation Based on Adaptive Short-Time Fourier Transform.
Jing, Fulong; Zhang, Chunjie; Si, Weijian; Wang, Yu; Jiao, Shuhong
2018-02-13
Polynomial phase signals (PPSs) have numerous applications in many fields including radar, sonar, geophysics, and radio communication systems. Therefore, estimation of PPS coefficients is very important. In this paper, a novel approach for PPS parameters estimation based on adaptive short-time Fourier transform (ASTFT), called the PPS-ASTFT estimator, is proposed. Using the PPS-ASTFT estimator, both one-dimensional and multi-dimensional searches and error propagation problems, which widely exist in PPSs field, are avoided. In the proposed algorithm, the instantaneous frequency (IF) is estimated by S-transform (ST), which can preserve information on signal phase and provide a variable resolution similar to the wavelet transform (WT). The width of the ASTFT analysis window is equal to the local stationary length, which is measured by the instantaneous frequency gradient (IFG). The IFG is calculated by the principal component analysis (PCA), which is robust to the noise. Moreover, to improve estimation accuracy, a refinement strategy is presented to estimate signal parameters. Since the PPS-ASTFT avoids parameter search, the proposed algorithm can be computed in a reasonable amount of time. The estimation performance, computational cost, and implementation of the PPS-ASTFT are also analyzed. The conducted numerical simulations support our theoretical results and demonstrate an excellent statistical performance of the proposed algorithm.
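A rough sketch of the starting point that the adaptive-window method refines: track the STFT ridge as an instantaneous-frequency estimate and fit a polynomial to it. This is not the PPS-ASTFT algorithm itself (no S-transform, no PCA-based window adaptation), and the signal parameters are illustrative.

```python
import numpy as np
from scipy.signal import stft

# Hedged sketch: a cubic-phase PPS has a quadratic IF, so fitting a degree-2
# polynomial to the STFT ridge recovers the phase-derivative coefficients.
fs, N = 1000.0, 4096
t = np.arange(N) / fs
a1, a2, a3 = 50.0, 30.0, 5.0            # illustrative phase-polynomial parameters
phase = 2*np.pi*(a1*t + a2*t**2/2 + a3*t**3/3)   # IF = a1 + a2*t + a3*t^2
x = np.exp(1j*phase) + 0.1*np.random.default_rng(2).standard_normal(N)

f, tau, Z = stft(x, fs=fs, nperseg=256, noverlap=192, return_onesided=False)
ridge = f[np.abs(Z).argmax(axis=0)]              # IF estimate at each frame
coef = np.polyfit(tau[3:-3], ridge[3:-3], 2)     # drop edge frames
print("estimated (a3, a2, a1):", coef.round(2))  # expect ~ (5, 30, 50)
```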
[Clinical reasoning in undergraduate nursing education: a scoping review].
Menezes, Sáskia Sampaio Cipriano de; Corrêa, Consuelo Garcia; Silva, Rita de Cássia Gengo E; Cruz, Diná de Almeida Monteiro Lopes da
2015-12-01
This study aimed at analyzing the current state of knowledge on clinical reasoning in undergraduate nursing education. A systematic scoping review was conducted using a search strategy applied to the MEDLINE database, with data from the recovered material extracted by two independent reviewers. The extracted data were analyzed and synthesized in a narrative manner. From the 1380 citations retrieved in the search, 23 were kept for review and their contents were summarized into five categories: 1) the experience of developing critical thinking/clinical reasoning/decision-making process; 2) teaching strategies related to the development of critical thinking/clinical reasoning/decision-making process; 3) measurement of variables related to the critical thinking/clinical reasoning/decision-making process; 4) relationship of variables involved in the critical thinking/clinical reasoning/decision-making process; and 5) theoretical development models of critical thinking/clinical reasoning/decision-making process for students. The biggest challenge for developing knowledge on teaching clinical reasoning seems to be finding consistency between theoretical perspectives on the development of clinical reasoning and methodologies, methods, and procedures in research initiatives in this field.
El Hussein, Mohamed; Hirst, Sandra; Osuji, Joseph
2017-08-01
Delirium is an acute disorder of attention and cognition. It affects half of older adults in acute care settings and is a cause of increasing mortality and costs. Registered nurses (RNs) and licensed practical nurses (LPNs) frequently fail to recognize delirium. The goals of this research were to identify the reasoning processes that RNs and LPNs use to recognize delirium, to compare their reasoning processes, and to generate a theory that explains their clinical reasoning processes. Theoretical sampling was employed to elicit data from 28 participants using grounded theory methodology. Theoretical coding culminated in the emergence of Professional Socialization as the substantive theory. Professional Socialization emerged from participants' responses and was based on two social processes, specifically reasoning to uncover and reasoning to report. Professional Socialization makes explicit the similarities and variations in the clinical reasoning processes between RNs and LPNs and highlights their main concerns when interacting with delirious patients.
Exploring students' patterns of reasoning
NASA Astrophysics Data System (ADS)
Matloob Haghanikar, Mojgan
As part of a collaborative study of the science preparation of elementary school teachers, we investigated the quality of students' reasoning and explored the relationship between sophistication of reasoning and the degree to which the courses were considered inquiry oriented. To probe students' reasoning, we developed open-ended written content questions with the distinguishing feature of applying recently learned concepts in a new context. We devised a protocol for developing written content questions that provided a common structure for probing and classifying students' sophistication level of reasoning. In designing our protocol, we considered several distinct criteria, and classified students' responses based on their performance for each criterion. First, we classified concepts into three types: Descriptive, Hypothetical, and Theoretical, and categorized the abstraction levels of the responses in terms of the types of concepts and the inter-relationship between the concepts. Second, we devised a rubric based on Bloom's revised taxonomy with seven traits (both knowledge types and cognitive processes) and a defined set of criteria to evaluate each trait. Along with analyzing students' reasoning, we visited universities and observed the courses in which the students were enrolled. We used the Reformed Teaching Observation Protocol (RTOP) to rank the courses with respect to characteristics that are valued for the inquiry courses. We conducted logistic regression for a sample of 18 courses with about 900 students and reported the results of logistic regression estimating the relationship between traits of reasoning and RTOP score. In addition, we analyzed the conceptual structure of students' responses, based on conceptual classification schemes, and clustered students' responses into six categories. We derived a regression model to estimate the relationship between the sophistication of the categories of conceptual structure and RTOP scores. However, the outcome variable with six categories required a more complicated regression model, known as multinomial logistic regression, generalized from binary logistic regression. With the large amount of collected data, we found that higher cognitive processes were more likely in classes with higher measures on inquiry. However, the usage of more abstract concepts with higher order conceptual structures was less prevalent in higher RTOP courses.
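A hedged sketch of the analysis style described, on synthetic placeholder data rather than the study's: a multinomial logistic regression relating a six-category outcome to a course-level RTOP score.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hedged sketch: multinomial logistic regression of a six-category
# conceptual-structure outcome on RTOP score. Data are synthetic placeholders.
rng = np.random.default_rng(3)
n = 900
rtop = rng.uniform(20, 80, n)                        # synthetic course RTOP scores
# assumption: higher RTOP shifts responses toward higher categories
logits = np.outer(rtop - 50, np.linspace(-0.05, 0.05, 6))
p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
y = np.array([rng.choice(6, p=pi) for pi in p])

model = LogisticRegression(max_iter=1000)            # lbfgs handles multinomial
model.fit(rtop.reshape(-1, 1), y)
print("per-category RTOP coefficients:", model.coef_.ravel().round(3))
```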
A comprehensive test of clinical reasoning for medical students: An olympiad experience in Iran.
Monajemi, Alireza; Arabshahi, Kamran Soltani; Soltani, Akbar; Arbabi, Farshid; Akbari, Roghieh; Custers, Eugene; Hadadgar, Arash; Hadizadeh, Fatemeh; Changiz, Tahereh; Adibi, Peyman
2012-01-01
Although some tests for clinical reasoning assessment are now available, the theories of medical expertise have not played a major role in this field. In this paper, illness script theory was chosen as a theoretical framework and contemporary clinical reasoning tests were put together based on this theoretical model. This paper is a qualitative study performed with an action research approach. This style of research is performed in a context where authorities focus on promoting their organizations' performance and is carried out in the form of teamwork called participatory research. Results are presented in four parts as basic concepts, clinical reasoning assessment, test framework, and scoring. We concluded that no single test could thoroughly assess clinical reasoning competency, and therefore a battery of clinical reasoning tests is needed. This battery should cover all three parts of the clinical reasoning process: script activation, selection and verification. In addition, both analytical and non-analytical reasoning, as well as both diagnostic and management reasoning, should be evenly taken into consideration in this battery. This paper explains the process of designing and implementing the battery of clinical reasoning in the Olympiad for medical sciences students through action research.
Hadronic production of the P-wave excited B_c states (B*_{cJ,L=1})
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, C.-H.; Institute of Theoretical Physics, Chinese Academy of Sciences, P.O. Box 2735, Beijing 100080; Wang, J.-X.
2004-12-01
Adopting the complete α_s^4 approach of the perturbative QCD and the updated parton distribution functions, we have estimated the hadronic production of the P-wave excited B_c states (B*_{cJ,L=1}). In the estimate, special care has been paid to the dependence of the production amplitude on the derivative of the wave function at the origin, which is obtained by the potential model. For experimental references, main theoretical uncertainties are discussed, and the total cross section as well as the distributions of the production with reasonable cuts at the energies of Tevatron and CERN LHC are computed and presented properly. The results show that the P-wave production may contribute to the B_c-meson production indirectly by a factor of about 0.5 of the direct production, and according to the estimated cross section, it is further worthwhile to study the possibility of observing the P-wave production itself experimentally.
Rogers, Paul; Fisk, John E; Lowrie, Emma
2017-11-01
The present study examines the extent to which stronger belief in either extrasensory perception, psychokinesis or life-after-death is associated with a proneness to making conjunction errors (CEs). One hundred and sixty members of the UK public read eight hypothetical scenarios and for each estimated the likelihood that two constituent events alone, plus their conjunction, would occur. The impact of paranormal belief, constituents' conditional relatedness type, and estimates of the subjectively less likely and more likely constituents, plus relevant interaction terms, was tested via three generalized linear mixed models. General qualification levels were controlled for. As expected, stronger PK beliefs and depiction of positively conditionally related (versus conditionally unrelated) constituent pairs predicted higher CE generation. ESP and LAD beliefs had no impact, with, surprisingly, higher estimates of the less likely constituent predicting fewer, not more, CEs. Theoretical implications, methodological issues and ideas for future research are discussed. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Gasperini, Paolo; Lolli, Barbara
2014-01-01
Wason et al. argued that the conversion of magnitudes from one scale (e.g. Ms or mb) to another (e.g. Mw) using the coefficients computed by the general orthogonal regression method (Fuller) is biased if the observed values of the predictor (independent) variable are used in the conversion equation, and they proposed a methodology to estimate the supposedly true values of the predictor variable. We show that both their argument and the methodology they suggest are wrong for a number of theoretical and empirical reasons. Hence, we advise against the use of such methodology for magnitude conversions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Suvam; Naghma, Rahla; Kaur, Jaspreet
The total and ionization cross sections for electron scattering by benzene, halobenzenes, toluene, aniline, and phenol are reported over a wide energy domain. The multi-scattering centre spherical complex optical potential method has been employed to find the total elastic and inelastic cross sections. The total ionization cross section is estimated from the total inelastic cross section using the complex scattering potential-ionization contribution method. In the present article, the first theoretical calculations for electron impact total and ionization cross section have been performed for most of the targets having numerous practical applications. A reasonable agreement is obtained compared to existing experimental observations for all the targets reported here, especially for the total cross section.
NASA Technical Reports Server (NTRS)
Bogdanoff, David W.; Berschauer, Andrew; Parker, Timothy W.; Vickers, Jesse E.
1989-01-01
A vortex gas lens concept is presented. Such a lens has a potential power density capability of 10^9-10^10 W/cm^2. An experimental prototype was constructed, and the divergence half angle of the exiting beam was measured as a function of the lens operating parameters. Reasonably good agreement is found between the experimental results and theoretical calculations. The expanded beam was observed to be steady, and no strong, potentially beam-degrading jets were found to issue from the ends of the lens. Estimates of random beam deflection angles to be expected due to boundary layer noise are presented; these angles are very small.
ERIC Educational Resources Information Center
Schulz, Andreas
2018-01-01
Theoretical analysis of whole number-based calculation strategies and digit-based algorithms for multi-digit multiplication and division reveals that strategy use includes two kinds of reasoning: reasoning about the relations between numbers and reasoning about the relations between operations. In contrast, algorithms aim to reduce the necessary…
Race, Reason and Reasonableness: Toward an "Unreasonable" Pedagogy
ERIC Educational Resources Information Center
De Lissovoy, Noah
2016-01-01
Starting from the contemporary critical-theoretical notion of an "objective violence" that organizes social reality in capitalism, including processes of systemic racism, as well as from phenomenological inquiries into processes of race and identity, this article explores the relationship between racism and reasonableness in education…
Properties of iron under core conditions
NASA Astrophysics Data System (ADS)
Brown, J. M.
2003-04-01
Underlying an understanding of the geodynamo and evolution of the core is knowledge of the physical and chemical properties of iron and iron mixtures under high pressure and temperature conditions. Key properties include the viscosity of the fluid outer core, thermal diffusivity, equations-of-state, elastic properties of solid phases, and phase equilibria for iron and iron-dominated mixtures. As is expected for work that continues to tax technological and intellectual limits, controversy has followed both experimental and theoretical progress in this field. However, estimates for the melting temperature of the inner core show convergence and the equation-of-state for iron as determined in independent experiments and theories are in remarkable accord. Furthermore, although the structure and elastic properties of the solid inner-core phase remains uncertain, theoretical and experimental underpinnings are better understood and substantial progress is likely in the near future. This talk will focus on an identification of properties that are reasonably well known and those that merit further detailed study. In particular, both theoretical and experimental (static and shock wave) determinations of the density of iron under extreme conditions are in agreement at the 1% or better level. The behavior of the Gruneisen parameter (which determines the geothermal gradient and controls much of the outer core heat flux) is constrained by experiment and theory under core conditions for both solid and liquid phases. Recent experiments and theory are suggestive of structure or structures other than the high-pressure hexagonal close-packed (HCP) phase. Various theories and experiments for the elasticity of HCP iron remain in poor accord. Uncontroversial constraints on core chemistry will likely never be possible. However, reasonable bounds are possible on the basis of seismic profiles, geochemical arguments, and determinations of sound velocities and densities at high pressure and temperature.
[The maximum heart rate in the exercise test: the 220-age formula or Sheffield's table?].
Mesquita, A; Trabulo, M; Mendes, M; Viana, J F; Seabra-Gomes, R
1996-02-01
To determine whether the maximum heart rate in the exercise test of apparently healthy individuals is more properly estimated by the 220-age formula (Astrand) or by the Sheffield table. Retrospective analysis of the clinical history and exercise tests of apparently healthy individuals undergoing a cardiac check-up. Sequential sample of 170 healthy individuals submitted to cardiac check-up between April 1988 and September 1992. Comparison of the maximum heart rate of individuals studied under the Bruce and modified Bruce protocols, in exercise tests interrupted by fatigue, with the values estimated by the two methods: the 220-age formula versus the Sheffield table. The maximum heart rate was similar with both protocols. In normal individuals this parameter is better predicted by the 220-age formula. The theoretical maximum heart rate determined by the 220-age formula should therefore be recommended for healthy individuals, and for this reason the Sheffield table has been excluded from our clinical practice.
Practical characterization of quantum devices without tomography
NASA Astrophysics Data System (ADS)
Landon-Cardinal, Olivier; Flammia, Steven; Silva, Marcus; Liu, Yi-Kai; Poulin, David
2012-02-01
Quantum tomography is the main method used to assess the quality of quantum information processing devices, but its complexity presents a major obstacle for the characterization of even moderately large systems. Part of the reason for this complexity is that tomography generates much more information than is usually sought. Taking a more targeted approach, we develop schemes that enable (i) estimating the fidelity of an experiment to a theoretical ideal description, (ii) learning which description within a reduced subset best matches the experimental data. Both these approaches yield a significant reduction in resources compared to tomography. In particular, we show how to estimate the fidelity between a predicted pure state and an arbitrary experimental state using only a constant number of Pauli expectation values selected at random according to an importance-weighting rule. In addition, we propose methods for certifying quantum circuits and learning continuous-time quantum dynamics that are described by local Hamiltonians or Lindbladians.
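The fidelity-estimation scheme described admits a compact simulation. The sketch below implements importance-weighted Pauli sampling for a two-qubit example: operators are drawn according to the ideal state's weights and the ratio of experimental to ideal expectation values is averaged. The noise model and sample size are illustrative.

```python
import numpy as np
from itertools import product

# Hedged two-qubit simulation of direct fidelity estimation via importance-
# weighted Pauli sampling. Noise model and sample size are illustrative.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])
d = 4
P_ops = [np.kron(A, B) for A, B in product([I2, X, Y, Z], repeat=2)]

psi = np.array([1, 0, 0, 1]) / np.sqrt(2)            # ideal Bell state
rho_ideal = np.outer(psi, psi.conj())
rho_exp = 0.9*rho_ideal + 0.1*np.eye(d)/d            # simulated noisy state

chi_ideal = np.array([np.trace(P @ rho_ideal).real / np.sqrt(d) for P in P_ops])
chi_exp = np.array([np.trace(P @ rho_exp).real / np.sqrt(d) for P in P_ops])

rng = np.random.default_rng(5)
prob = chi_ideal**2                                  # sums to 1 for a pure state
sample = rng.choice(len(P_ops), size=50, p=prob)     # constant-size sample
F_est = np.mean(chi_exp[sample] / chi_ideal[sample])
F_true = np.trace(rho_ideal @ rho_exp).real
print(f"estimated fidelity {F_est:.3f} vs true {F_true:.3f}")
```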
Development of the reasons for living inventory for young adults.
Gutierrez, Peter M; Osman, Augustine; Barrios, Francisco X; Kopper, Beverly A; Baker, Monty T; Haraburda, Cheryl M
2002-04-01
Assessment of the reliability, validity, and predictive power of a new measure, the Reasons for Living Inventory for Young Adults (RFL-YA), is described. A series of three studies was conducted at two Midwestern universities to develop initial items for this new measure, refine item selection, and demonstrate the psychometric properties of the RFL-YA. The theoretical differences between the RFL-YA and the College Student Reasons for Living Inventory (CS-RFL) are discussed. Although the two measures were not directly compared, it appears that the RFL-YA has greater specificity for exploring aspects of the protective construct and may be more parsimonious than the CS-RFL. Principal-axis factor analysis yielded a five-factor solution for the RFL-YA accounting for 61.5% of the variance. This five-factor oblique model was confirmed in the final phase of investigation. Alpha estimates for the five subscales ranged from .89 to .94. Concurrent, convergent-discriminant, and criterion validity also were demonstrated. The importance of assessing protective factors in addition to negative risk factors for suicidality is discussed. Directions for future research with the RFL-YA also are discussed. Copyright 2002 Wiley Periodicals, Inc.
Ford, Jason A; Ong, Julianne
2014-11-01
The current research examines whether measures associated with Akers' social learning theory are related to non-medical use of prescription stimulants for academic reasons among college students. We examine data from a sample of 549 undergraduate students at one public university in the Southeastern United States. We estimate several logistic regression models to test our hypotheses. The findings indicated that roughly 17% of students reported non-medical use of prescription stimulants for academic reasons during the past year. In separate models, all four of the social learning measures were significantly correlated to non-medical use. In the complete model, the risk of non-medical prescription stimulant use for academic reasons was increased for respondents who reported more of their friends used and also for respondents who believed that prescription stimulants were an effective study aid. The current research fills an important gap in the literature regarding theoretical explanations for non-medical prescription stimulant use. Given the high prevalence of non-medical prescription stimulant use and the known risks associated with non-medical use this research can help inform intervention strategies for college populations. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Intuitive Interference in Probabilistic Reasoning
ERIC Educational Resources Information Center
Babai, Reuven; Brecher, Tali; Stavy, Ruth; Tirosh, Dina
2006-01-01
One theoretical framework which addresses students' conceptions and reasoning processes in mathematics and science education is the intuitive rules theory. According to this theory, students' reasoning is affected by intuitive rules when they solve a wide variety of conceptually non-related mathematical and scientific tasks that share some common…
Identifying Kinds of Reasoning in Collective Argumentation
ERIC Educational Resources Information Center
Conner, AnnaMarie; Singletary, Laura M.; Smith, Ryan C.; Wagner, Patty Anne; Francisco, Richard T.
2014-01-01
We combine Peirce's rule, case, and result with Toulmin's data, claim, and warrant to differentiate between deductive, inductive, abductive, and analogical reasoning within collective argumentation. In this theoretical article, we illustrate these kinds of reasoning in episodes of collective argumentation using examples from one…
A comprehensive test of clinical reasoning for medical students: An olympiad experience in Iran
Monajemi, Alireza; Arabshahi, Kamran Soltani; Soltani, Akbar; Arbabi, Farshid; Akbari, Roghieh; Custers, Eugene; Hadadgar, Arash; Hadizadeh, Fatemeh; Changiz, Tahereh; Adibi, Peyman
2012-01-01
Background: Although some tests for clinical reasoning assessment are now available, the theories of medical expertise have not played a major role in this field. In this paper, illness script theory was chosen as a theoretical framework and contemporary clinical reasoning tests were put together based on this theoretical model. Materials and Methods: This paper is a qualitative study performed with an action research approach. This style of research is performed in a context where authorities focus on promoting their organizations’ performance and is carried out in the form of teamwork called participatory research. Results: Results are presented in four parts as basic concepts, clinical reasoning assessment, test framework, and scoring. Conclusion: We concluded that no single test could thoroughly assess clinical reasoning competency, and therefore a battery of clinical reasoning tests is needed. This battery should cover all three parts of the clinical reasoning process: script activation, selection and verification. In addition, both analytical and non-analytical reasoning, as well as both diagnostic and management reasoning, should be evenly taken into consideration in this battery. This paper explains the process of designing and implementing the battery of clinical reasoning in the Olympiad for medical sciences students through action research. PMID:23555113
Isomer ratios for products of photonuclear reactions on 121Sb
NASA Astrophysics Data System (ADS)
Bezshyyko, Oleg; Dovbnya, Anatoliy; Golinka-Bezshyyko, Larisa; Kadenko, Igor; Vodin, Oleksandr; Olejnik, Stanislav; Tuller, Gleb; Kushnir, Volodymyr; Mitrochenko, Viktor
2017-09-01
Over the past several years, various preequilibrium model approaches for nuclear reactions have been developed. Diversified, detailed experimental data in the medium excitation energy region of the nucleus are needed for a reasonable selection among these theoretical models. The lack of experimental data in this energy region essentially limits the possibilities for analysis and comparison of different preequilibrium theoretical models. For photonuclear reactions this energy region extends over bremsstrahlung end-point energies of roughly 30-100 MeV. Experimental measurements and estimations of isomer ratios for products of photonuclear reactions with multiple particle escape on antimony have been performed using bremsstrahlung with end-point energies of 38, 43 and 53 MeV. The induced activity measurement method was applied. For acquisition of gamma spectra we used an HPGe spectrometer with 20% efficiency and an energy resolution of 1.9 keV for the 1332 keV gamma line of 60Co. The LU-40 electron linear accelerator served as the bremsstrahlung source. The energy resolution of the electron beam was about 1% and the mean current was within (3.8-5.3) μA.
Epidemiology of radiation-induced cancer.
Radford, E P
1983-01-01
The epidemiology of radiation-induced cancer is important for theoretical and practical insights that these studies give to human cancer in general and because we have more evidence from radiation-exposed populations than for any other environmental carcinogen. On theoretical and experimental grounds, the linear no-threshold dose-response relationship is a reasonable basis for extrapolating effects to low doses. Leukemia is frequently the earliest observed radiogenic cancer but is now considered to be of minor importance, because the radiation effect dies out after 25 or 30 years, whereas solid tumors induced by radiation develop later and the increased cancer risk evidently persists for the remaining lifetime. Current estimates of the risk of particular cancers from radiation exposure cannot be fully evaluated until the population under study have been followed at least 40 or 50 years after exposure. Recent evidence indicates that for lung cancer induction, combination of cigarette smoking and radiation exposure leads to risks that are not multiplicative but rather nearly additive. PMID:6653538
Cornelisse, C J; Hermens, W T; Joe, M T; Duijndam, W A; van Duijn, P
1976-11-01
A numerical method was developed for computing the steady-state concentration gradient of a diffusible enzyme reaction product in a membrane-limited compartment of a simplified theoretical cell model. In cytochemical enzyme reactions proceeding according to the metal-capture principle, the local concentration of the primary reaction product is an important factor in the onset of the precipitation process and in the distribution of the final reaction product. The following variables were incorporated into the model: enzyme activity, substrate concentration, Km, diffusion coefficient of substrate and product, particle radius and cell radius. The method was applied to lysosomal acid phosphatase. Numerical values for the variables were estimated from experimental data in the literature. The results show that the calculated phosphate concentrations inside lysosomes are several orders of magnitude lower than the critical concentrations for efficient phosphate capture found in a previous experimental model study. Reasons for this apparent discrepancy are discussed.
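For intuition about the magnitudes involved, a hedged analytic sketch (not the paper's numerical scheme): for a product generated uniformly at volumetric rate q inside a sphere of radius R whose product diffuses away freely, the steady-state interior profile is c(r) = q(3R^2 - r^2)/(6D). All parameter values below are illustrative, not those of the paper.

```python
import numpy as np

# Hedged analytic sketch: steady-state concentration of a diffusible product
# generated uniformly inside a sphere (a lysosome-sized compartment) when the
# product escapes freely (c -> 0 far away). Illustrative parameters only.
NA = 6.022e23
R = 0.25e-6                      # compartment radius (m)
D = 5e-10                        # product diffusion coefficient (m^2 s-1)
rate = 1e4                       # product molecules generated per second (assumed)
Vol = 4/3 * np.pi * R**3
q = rate / Vol / NA              # volumetric production (mol m-3 s-1)

c_center = q * R**2 / (2*D)      # peak concentration, at the sphere center
print(f"c(0) = {c_center:.2e} mol m-3 = {c_center*1e6:.1f} nM")
```

Even with a generous production rate, the resulting nanomolar-scale concentration sits far below the millimolar-scale capture thresholds mentioned, which is the kind of discrepancy the abstract discusses.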
NASA Astrophysics Data System (ADS)
Chowdhury, D. P.; Guin, R.; Saha, S. K.; Sudersanan, M.
2003-11-01
Experimental cross sections of a number of reaction channels of 16O ion induced reactions on a natural copper target have been determined at different energies in the range of 50-110 MeV of the 16O projectile by the stacked foil activation technique. The cross sections have been compared with theoretical calculations using the computer code ALICE-91. The experimental values compared reasonably well with the corresponding theoretical estimates. The results indicate no significant role of the incomplete fusion process in the 16O induced reactions on natural copper in the energy range of ⩽7 MeV/nucleon. As a heavy-ion beam produces an extremely narrow layer of activity in the surface of a material, these reactions could be useful for thin layer activation (TLA) studies. The purpose of this work is to apply heavy ion activation in the TLA technique for the study of surface wear with increased sensitivity.
NASA Technical Reports Server (NTRS)
Haglund, G. T.; Kane, E. J.
1974-01-01
Analysis of the 14 low-altitude transonic flights, together with consideration of the prevailing meteorological conditions governing the acoustic disturbances below the cutoff altitude during threshold Mach number flight, has shown that a theoretical safe altitude appears to be valid over a wide range of meteorological conditions and provides a reasonable estimate of the airplane ground speed reduction needed to avoid sonic boom noise during threshold Mach number flight. Recent theoretical results for the acoustic pressure waves below the threshold Mach number caustic showed excellent agreement with observations near the caustic, but the predicted overpressure levels were significantly lower than those observed far from the caustic. The analysis of caustics produced by inadvertent low-magnitude accelerations during flight at Mach numbers slightly greater than the threshold Mach number showed that folds and associated caustics were produced by slight changes in the airplane ground speed. These caustic intensities ranged from 1 to 3 times the nominal steady, level flight intensity.
The environmental factors as reason for emotional tension
NASA Astrophysics Data System (ADS)
Prisniakova, L.
Information from the environment activates the organism: it triggers abrupt changes in nervous processes and gives rise to emotions. Some emotions organize and support activity; others disorganize it. In perception, decision making, execution of operations, and learning, emotional excitation raises performance on easier problems and reduces it on more difficult ones. The report presents the outcomes of a quantitative determination of the effect of the level of emotional tension on successful activity. A reversal of the sign of the influence on the efficiency of human activity is detected. The action of emotional tension on the efficiency of professional work is shown to resemble the influence of motivation, in accordance with the Yerkes-Dodson law. The report introduces a mathematical model connecting successful activity with motivation or emotional tension. The outcomes presented can serve as a theoretical, idealized basis for the quantitative characteristics of an estimation of astronauts' activity under emotional factors at the selection phase.
Do Students Need to Be Taught How to Reason?
ERIC Educational Resources Information Center
Kuhn, Deanna
2009-01-01
In this theoretical essay, the author addresses the existence of divergent evidence, portraying both competence and lack of competence in a fundamental realm of higher order thinking--causal and scientific reasoning--and explores the educational implications. Evidence indicates that these higher order reasoning skills are not ones that can be…
What Physicians Reason about during Admission Case Review
ERIC Educational Resources Information Center
Juma, Salina; Goldszmidt, Mark
2017-01-01
Research suggests that physicians perform multiple reasoning tasks beyond diagnosis during patient review. However, these remain largely theoretical. The purpose of this study was to explore reasoning tasks in clinical practice during patient admission review. The authors used a constant comparative approach--an iterative and inductive process of…
Guidelines for a graph-theoretic implementation of structural equation modeling
Grace, James B.; Schoolmaster, Donald R.; Guntenspergen, Glenn R.; Little, Amanda M.; Mitchell, Brian R.; Miller, Kathryn M.; Schweiger, E. William
2012-01-01
Structural equation modeling (SEM) is increasingly being chosen by researchers as a framework for gaining scientific insights from the quantitative analyses of data. New ideas and methods emerging from the study of causality, influences from the field of graphical modeling, and advances in statistics are expanding the rigor, capability, and even purpose of SEM. Guidelines for implementing the expanded capabilities of SEM are currently lacking. In this paper we describe new developments in SEM that we believe constitute a third-generation of the methodology. Most characteristic of this new approach is the generalization of the structural equation model as a causal graph. In this generalization, analyses are based on graph theoretic principles rather than analyses of matrices. Also, new devices such as metamodels and causal diagrams, as well as an increased emphasis on queries and probabilistic reasoning, are now included. Estimation under a graph theory framework permits the use of Bayesian or likelihood methods. The guidelines presented start from a declaration of the goals of the analysis. We then discuss how theory frames the modeling process, requirements for causal interpretation, model specification choices, selection of estimation method, model evaluation options, and use of queries, both to summarize retrospective results and for prospective analyses. The illustrative example presented involves monitoring data from wetlands on Mount Desert Island, home of Acadia National Park. Our presentation walks through the decision process involved in developing and evaluating models, as well as drawing inferences from the resulting prediction equations. In addition to evaluating hypotheses about the connections between human activities and biotic responses, we illustrate how the structural equation (SE) model can be queried to understand how interventions might take advantage of an environmental threshold to limit Typha invasions. The guidelines presented provide for an updated definition of the SEM process that subsumes the historical matrix approach under a graph-theory implementation. The implementation is also designed to permit complex specifications and to be compatible with various estimation methods. Finally, they are meant to foster the use of probabilistic reasoning in both retrospective and prospective considerations of the quantitative implications of the results.
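A minimal sketch of the graph-based view described, on synthetic data rather than the wetland model: a causal chain X -> M -> Y specified as one equation per node, each estimated by OLS, then queried for the implied total effect.

```python
import numpy as np

# Hedged sketch of graph-based SEM: one structural equation per node of a
# causal chain, estimated separately, then queried for the total effect.
rng = np.random.default_rng(4)
n = 500
X = rng.standard_normal(n)
M = 0.8*X + rng.standard_normal(n)        # structural equation for M
Y = 0.5*M + rng.standard_normal(n)        # structural equation for Y

a = np.polyfit(X, M, 1)[0]                # path coefficient X -> M
b = np.polyfit(M, Y, 1)[0]                # path coefficient M -> Y
print(f"a = {a:.2f}, b = {b:.2f}, implied total effect of X on Y = {a*b:.2f}")
```

The design choice this illustrates is exactly the one the guidelines emphasize: because estimation is organized around the causal graph rather than a single covariance matrix, each equation can be fit and queried independently.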
Walder, J.S.; O'Connor, J. E.; Costa, J.E.
1997-01-01
We analyze a simple, physically-based model of breach formation in natural and constructed earthen dams to elucidate the principal factors controlling the flood hydrograph at the breach. Formation of the breach, which is assumed trapezoidal in cross-section, is parameterized by the mean rate of downcutting, k, the value of which is constrained by observations. A dimensionless formulation of the model leads to the prediction that the breach hydrograph depends upon lake shape, the ratio r of breach width to depth, the side slope θ of the breach, and the parameter η = (V/D³)(k/√(gD)), where V = lake volume, D = lake depth, and g is the acceleration due to gravity. Calculations show that peak discharge Qp depends weakly on lake shape, r and θ, but strongly on η, which is the product of a dimensionless lake volume and a dimensionless erosion rate. Qp(η) takes asymptotically distinct forms depending on whether η ≪ 1 or η ≫ 1. Theoretical predictions agree well with data from dam failures for which k could be reasonably estimated. The analysis provides a rapid and in many cases graphical way to estimate plausible values of Qp at the breach.
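The model lends itself to a compact numerical sketch: lake drawdown through a trapezoidal breach that deepens at constant rate k, with outflow given by a broad-crested weir law. The lake geometry, weir coefficients, and value of k below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hedged numerical sketch of the breach model described; all numbers illustrative.
V0, D = 5e6, 10.0                 # lake volume (m^3) and depth (m)
A_lake = V0 / D                   # constant surface area (prismatic lake shape)
k = 10.0 / 3600.0                 # breach downcutting rate (m/s), ~10 m/h
r = 2.0                           # breach width-to-depth ratio
tan_side = 1.0                    # side slope (45 degrees from vertical)

dt, t = 1.0, 0.0
h_lake, Qp = D, 0.0
while h_lake > 0.01 and t < 48*3600:
    d_breach = min(k*t, D)                    # current breach depth
    head = max(h_lake - (D - d_breach), 0.0)  # water depth over the breach floor
    b = r * d_breach                          # breach bottom width
    # rectangular + triangular weir contributions for a trapezoidal section
    Q = 1.7*b*head**1.5 + 1.35*tan_side*head**2.5
    h_lake -= Q / A_lake * dt
    Qp = max(Qp, Q)
    t += dt
print(f"peak discharge ~ {Qp:.0f} m^3/s")
```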
Determination of the matrix element V(ub) from inclusive B meson decays
NASA Astrophysics Data System (ADS)
Low, Ian
For years the extraction of |Vub| was tainted by large errors due to theoretical uncertainties. Because of our inability to calculate hadronic dynamics, we are forced to resort to ad hoc models when making theoretical predictions, and hence introduce errors which are very hard to quantify. However, an accurate measurement of |Vub| is very important for testing the Cabibbo-Kobayashi-Maskawa picture of CP violation in the minimal standard model. It is highly desirable to be able to extract |Vub| with well-defined and reasonable theoretical uncertainties. In this dissertation, a strategy to extract |Vub| from the electron energy spectrum of the inclusive semi-leptonic B decays is proposed, without having to model the hadronic dynamics. It is based on the observation that the long distance physics involving hadronization, of which we are ignorant, is insensitive to the short distance interactions. Therefore, the uncalculable part in B → X_u ℓν is the same as that in the radiative B decays B → X_s γ. We are able to write down an analytic expression for |Vub|^2/|V*_ts V_tb|^2 in terms of known functions. The theoretical uncertainty in this method is well-defined and estimated to be less than 10% in |Vub|. We also apply our method to the case of the hadronic mass spectrum of the inclusive semi-leptonic decays, which has the virtue that the quark-hadron duality is expected to work better.
The case for probabilistic forecasting in hydrology
NASA Astrophysics Data System (ADS)
Krzysztofowicz, Roman
2001-08-01
That forecasts should be stated in probabilistic, rather than deterministic, terms has been argued from common sense and decision-theoretic perspectives for almost a century. Yet most operational hydrological forecasting systems produce deterministic forecasts and most research in operational hydrology has been devoted to finding the 'best' estimates rather than quantifying the predictive uncertainty. This essay presents a compendium of reasons for probabilistic forecasting of hydrological variates. Probabilistic forecasts are scientifically more honest, enable risk-based warnings of floods, enable rational decision making, and offer additional economic benefits. The growing demand for information about risk and the rising capability to quantify predictive uncertainties create an unparalleled opportunity for the hydrological profession to dramatically enhance the forecasting paradigm.
Dynamics of complete and incomplete fusion in heavy ion collisions
NASA Astrophysics Data System (ADS)
Bao, Xiao Jun; Guo, Shu Qing; Zhang, Hong Fei; Li, Jun Qing
2018-02-01
In order to study the influence of the strong Coulomb and nuclear interactions on the dynamics of complete and incomplete fusion, we construct a new four-variable master equation (ME) so that the deformations as well as the nucleon transfer are viewed as consistently governed by MEs in the potential energy surface of the system. The calculated yields of quasifission fragments and evaporation residue cross section (ERCS) are in agreement with experimental data of hot fusion reactions. Comparing theoretical cross sections with experimental data, we find that the improved dinuclear system model also describes the transfer cross sections reasonably well. The production cross sections of new neutron-rich isotopes are estimated by the multinucleon transfer reactions.
Phylogenetic Analyses: A Toolbox Expanding towards Bayesian Methods
Aris-Brosou, Stéphane; Xia, Xuhua
2008-01-01
The reconstruction of phylogenies is becoming an increasingly simple activity. This is mainly due to two reasons: the democratization of computing power and the increased availability of sophisticated yet user-friendly software. This review describes some of the latest additions to the phylogenetic toolbox, along with some of their theoretical and practical limitations. It is shown that Bayesian methods are under heavy development, as they offer the possibility to solve a number of long-standing issues and to integrate several steps of the phylogenetic analyses into a single framework. Specific topics include not only phylogenetic reconstruction, but also the comparison of phylogenies, the detection of adaptive evolution, and the estimation of divergence times between species. PMID:18483574
Ab initio Eliashberg Theory: Making Genuine Predictions of Superconducting Features
NASA Astrophysics Data System (ADS)
Sanna, Antonio; Flores-Livas, José A.; Davydov, Arkadiy; Profeta, Gianni; Dewhurst, Kay; Sharma, Sangeeta; Gross, E. K. U.
2018-04-01
We present an application of Eliashberg theory of superconductivity to study a set of novel superconducting systems with a wide range of structural and chemical properties. The set includes three intercalated group-IV honeycomb layered structures, SH3 at 200 GPa (the superconductor with the highest measured critical temperature), the similar system SeH3 at 150 GPa, and a lithium-doped monolayer of black phosphorus. The theoretical approach we adopt is a recently developed, fully ab initio Eliashberg approach that takes the Coulomb interaction into account in a fully energy-resolved fashion, avoiding any free parameters like μ*. This method provides reasonable estimates of superconducting properties, including Tc and the excitation spectra of superconductors.
Cosmic strings and superconducting cosmic strings
NASA Technical Reports Server (NTRS)
Copeland, Edmund
1988-01-01
The possible consequences of forming cosmic strings and superconducting cosmic strings in the early universe are discussed. Lecture 1 describes the group-theoretic and field-theoretic reasons why cosmic strings can form in spontaneously broken gauge theories. Lecture 2 discusses the accretion of matter onto string loops, emphasizing the scenario with a cold dark matter dominated universe. In lecture 3 superconducting cosmic strings are discussed, as is a mechanism which leads to the formation of structure from such strings.
Theoretical kinetics study of the F(²P) + NH3 hydrogen abstraction reaction.
Espinosa-Garcia, J; Fernandez-Ramos, A; Suleimanov, Y V; Corchado, J C
2014-01-23
The hydrogen abstraction reaction of fluorine with ammonia represents a true chemical challenge because it is very fast, is followed by secondary abstraction reactions, which are also extremely fast, and presents an experimental/theoretical controversy about rate coefficients. Using a previously developed full-dimensional analytical potential energy surface, we found that the F + NH3 → HF + NH2 system is a barrierless reaction with intermediate complexes in the entry and exit channels. In order to understand the reactivity of the title reaction, thermal rate coefficients were calculated using two approaches, ring polymer molecular dynamics and quasi-classical trajectory calculations, and these were compared with available experimental data for the common temperature range 276-327 K. The theoretical results obtained show behavior practically independent of temperature, reproducing Walther-Wagner's experiment, but in contrast with Persky's more recent experiment. Quantitatively, however, our results are one order of magnitude larger than those of Walther-Wagner and agree reasonably with Persky's at the lowest temperature, thus calling Walther-Wagner's older data into question. At present, the reason for this discrepancy is not clear, although we point out some possible reasons in the light of current theoretical calculations.
Consequences of Contextual Factors on Clinical Reasoning in Resident Physicians
ERIC Educational Resources Information Center
McBee, Elexis; Ratcliffe, Temple; Picho, Katherine; Artino, Anthony R., Jr.; Schuwirth, Lambert; Kelly, William; Masel, Jennifer; van der Vleuten, Cees; Durning, Steven J.
2015-01-01
Context specificity and the impact that contextual factors have on the complex process of clinical reasoning is poorly understood. Using situated cognition as the theoretical framework, our aim was to evaluate the verbalized clinical reasoning processes of resident physicians in order to describe what impact the presence of contextual factors have…
Geometric Reasoning in an Active-Engagement Upper-Division E&M Classroom
ERIC Educational Resources Information Center
Cerny, Leonard Thomas
2012-01-01
A combination of theoretical perspectives is used to create a rich description of student reasoning when facing a highly-geometric electricity and magnetism problem in an upper-division active-engagement physics classroom at Oregon State University. Geometric reasoning as students encounter problem situations ranging from familiar to novel is…
The Importance of Directly Asking Students Their Reasons for Attending Higher Education
ERIC Educational Resources Information Center
Kennett, Deborah J.; Reed, Maureen J.; Lam, Dianne
2011-01-01
Few studies have directly asked undergraduate students their reasons for coming to institutions for higher learning and, instead, have been developed based on theoretical rationale. We asked undergraduate students to list all of their reasons for attending university and to indicate those most important. Overall, students reported more than five…
An evaluation of bias in propensity score-adjusted non-linear regression models.
Wan, Fei; Mitra, Nandita
2018-03-01
Propensity score methods are commonly used to adjust for observed confounding when estimating the conditional treatment effect in observational studies. One popular method, covariate adjustment of the propensity score in a regression model, has been empirically shown to be biased in non-linear models. However, no compelling underlying theoretical reason has been presented. We propose a new framework to investigate bias and consistency of propensity score-adjusted treatment effects in non-linear models, using a simple geometric approach to forge a link between the consistency of the propensity score estimator and the collapsibility of non-linear models. Under this framework, we demonstrate that adjustment of the propensity score in an outcome model results in the decomposition of observed covariates into the propensity score and a remainder term. Omission of this remainder term from a non-collapsible regression model leads to biased estimates of the conditional odds ratio and conditional hazard ratio, but not of the conditional rate ratio. We further show, via simulation studies, that the bias in these propensity score-adjusted estimators increases with larger treatment effect size, larger covariate effects, and increasing dissimilarity between the coefficients of the covariates in the treatment model versus the outcome model.
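The non-collapsibility at the heart of this abstract can be illustrated with a short simulation. This is a generic sketch (not the authors' framework or code): even with a randomized treatment and no confounding, a logistic model that omits a strong prognostic covariate yields a treatment odds ratio that differs from the conditional one. All parameter values are arbitrary.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200_000
t = rng.integers(0, 2, n)             # randomized binary treatment
x = rng.normal(size=n)                # strong prognostic covariate
logit = -0.5 + 1.0 * t + 2.0 * x      # true conditional log-OR for t is 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

full = sm.Logit(y, sm.add_constant(np.column_stack([t, x]))).fit(disp=0)
marg = sm.Logit(y, sm.add_constant(t)).fit(disp=0)
print("conditional log-OR:", full.params[1])  # close to 1.0
print("marginal log-OR:   ", marg.params[1])  # attenuated toward 0 (non-collapsibility)
```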
NASA Astrophysics Data System (ADS)
Lawson, Anton E.; Baker, William P.; Didonato, Lisa; Verdi, Michael P.; Johnson, Margaret A.
Two hypotheses about theoretical concept acquisition, application, and change were tested. College biology students classified as intuitive, transitional, or reflective (hypothetico-deductive) reasoners were first taught two theoretical concepts (molecular polarity and bonding) to explain the mixing of dye with water, but not with oil, when all three were shaken in a container. The students were then tested in a context in which they misapplied the concepts in an attempt to explain the gradual spread of blue dye in standing water. Next students were taught another theoretical concept (diffusion), with and without the use of physical analogues. They were retested to see which students acquired the concept of diffusion and which students changed from use of the incorrect polarity and bonding concepts (i.e., the misconceptions) to use of the diffusion concept to correctly explain the dye's gradual spread. As predicted, the experimental/analogy group scored significantly higher than the control group on a posttest question that required the definition of diffusion. Also as predicted, hypothetico-deductive reasoning skill was significantly related to correct application of the diffusion concept and to a change from the misapplication of the polarity and bonding concepts to the correct application of the diffusion concept to explain the gradual spread of the blue dye. Thus, the results support the hypotheses that physical analogues are helpful in theoretical concept acquisition and that hypothetico-deductive reasoning is needed for successful concept application and change. Educational implications are drawn.
Design of Supersonic Transport Flap Systems for Thrust Recovery at Subsonic Speeds
NASA Technical Reports Server (NTRS)
Mann, Michael J.; Carlson, Harry W.; Domack, Christopher S.
1999-01-01
A study of the subsonic aerodynamics of hinged flap systems for supersonic cruise commercial aircraft has been conducted using linear attached-flow theory that has been modified to include an estimate of attainable leading-edge thrust and an approximate representation of vortex forces. Comparisons of theoretical predictions with experimental results show that the theory gives a reasonably good and generally conservative estimate of the performance of an efficient flap system and provides a good estimate of the leading- and trailing-edge deflection angles necessary for optimum performance. A substantial reduction in the area of the inboard region of the leading-edge flap has only a minor effect on the performance and the optimum deflection angles. Changes in the size of the outboard leading-edge flap show that performance is greatest when this flap has a chord equal to approximately 30 percent of the wing chord. A study was also made of the performance of various combinations of individual leading- and trailing-edge flaps, and the results show that aerodynamic efficiencies as high as 85 percent of full suction are predicted.
Birds and insects as radar targets - A review
NASA Technical Reports Server (NTRS)
Vaughn, C. R.
1985-01-01
A review of radar cross-section measurements of birds and insects is presented. A brief discussion of some possible theoretical models is also given and comparisons made with the measurements. The comparisons suggest that most targets are, at present, better modeled by a prolate spheroid having a length-to-width ratio between 3 and 10 than by the often used equivalent weight water sphere. In addition, many targets observed with linear horizontal polarization have maximum cross sections much better estimated by a resonant half-wave dipole than by a water sphere. Also considered are birds and insects in the aggregate as a local radar 'clutter' source. Order-of-magnitude estimates are given for many reasonable target number densities. These estimates are then used to predict X-band volume reflectivities. Other topics that are of interest to the radar engineer are discussed, including the doppler bandwidth due to the internal motions of a single bird, the radar cross-section probability densities of single birds and insects, the variability of the functional form of the probability density functions, and the Fourier spectra of single birds and insects.
Passive tracking scheme for a single stationary observer
NASA Astrophysics Data System (ADS)
Chan, Y. T.; Rea, Terry
2001-08-01
While there are many techniques for Bearings-Only Tracking (BOT) in the ocean environment, they do not apply directly to the land situation. Generally, for tactical reasons, the land observer platform is stationary; but it has two sensors, visual and infrared, for measuring bearings, and a laser range finder (LRF) for measuring range. There is a requirement to develop a new BOT data fusion scheme that fuses the two sets of bearing readings and, together with a single LRF measurement, produces a unique track. This paper first develops a parameterized solution for the target speeds prior to the occurrence of the LRF measurement, when the problem is unobservable. At, and after, the LRF measurement, a BOT formulated as a least squares (LS) estimator then produces a unique LS estimate of the target states. Bearing readings from the other sensor serve as instrumental variables in a data fusion setting to eliminate the bias in the BOT estimator. The result is a recursive, unbiased, and decentralized data fusion scheme. Results from two simulation experiments have corroborated the theoretical development and show that the scheme is optimal.
The role of language in learning physics
NASA Astrophysics Data System (ADS)
Brookes, David T.
Many studies in PER suggest that language poses a serious difficulty for students learning physics. These difficulties are mostly attributed to misunderstanding of specialized terminology. This terminology often assigns new meanings to everyday terms used to describe physical models and phenomena. In this dissertation I present a novel approach to analyzing the role of language in learning physics. This approach is based on the analysis of the historical development of physics ideas, the language of modern physicists, and students' difficulties in the areas of quantum mechanics, classical mechanics, and thermodynamics. These data are analyzed using linguistic tools borrowed from cognitive linguistics and systemic functional grammar. Specifically, I combine the idea of conceptual metaphor and grammar to build a theoretical framework that accounts for: (1) the role and function that language serves for physicists when they speak and reason about physical ideas and phenomena, (2) specific features of students' reasoning and difficulties that may be related to or derived from language that students read or hear. The theoretical framework is developed using the methodology of a grounded theoretical approach. The theoretical framework allows us to make predictions about the relationship between student discourse and their conceptual and problem solving difficulties. Tests of the theoretical framework are presented in the context of "heat" in thermodynamics and "force" in dynamics. In each case the language that students use to reason about the concepts of "heat" and "force" is analyzed using the theoretical framework. The results of this analysis show that language is very important in students' learning. In particular, students are (1) using features of physicists' conceptual metaphors to reason about physical phenomena, often overextending and misapplying these features, (2) drawing cues from the grammar of physicists' speech and writing to categorize physics concepts; this categorization of physics concepts plays a key role in students' ability to solve physics problems. In summary, I present a theoretical framework that provides a possible explanation of the role that language plays in learning physics. The framework also attempts to account for how and why physicists' language influences students in the way that it does.
Implementation science: a role for parallel dual processing models of reasoning?
Sladek, Ruth M; Phillips, Paddy A; Bond, Malcolm J
2006-01-01
Background A better theoretical base for understanding professional behaviour change is needed to support evidence-based changes in medical practice. Traditionally strategies to encourage changes in clinical practices have been guided empirically, without explicit consideration of underlying theoretical rationales for such strategies. This paper considers a theoretical framework for reasoning from within psychology for identifying individual differences in cognitive processing between doctors that could moderate the decision to incorporate new evidence into their clinical decision-making. Discussion Parallel dual processing models of reasoning posit two cognitive modes of information processing that are in constant operation as humans reason. One mode has been described as experiential, fast and heuristic; the other as rational, conscious and rule based. Within such models, the uptake of new research evidence can be represented by the latter mode; it is reflective, explicit and intentional. On the other hand, well practiced clinical judgments can be positioned in the experiential mode, being automatic, reflexive and swift. Research suggests that individual differences between people in both cognitive capacity (e.g., intelligence) and cognitive processing (e.g., thinking styles) influence how both reasoning modes interact. This being so, it is proposed that these same differences between doctors may moderate the uptake of new research evidence. Such dispositional characteristics have largely been ignored in research investigating effective strategies in implementing research evidence. Whilst medical decision-making occurs in a complex social environment with multiple influences and decision makers, it remains true that an individual doctor's judgment still retains a key position in terms of diagnostic and treatment decisions for individual patients. This paper argues therefore, that individual differences between doctors in terms of reasoning are important considerations in any discussion relating to changing clinical practice. Summary It is imperative that change strategies in healthcare consider relevant theoretical frameworks from other disciplines such as psychology. Generic dual processing models of reasoning are proposed as potentially useful in identifying factors within doctors that may moderate their individual uptake of evidence into clinical decision-making. Such factors can then inform strategies to change practice. PMID:16725023
Believers' estimates of God's beliefs are more egocentric than estimates of other people's beliefs
Epley, Nicholas; Converse, Benjamin A.; Delbosc, Alexa; Monteleone, George A.; Cacioppo, John T.
2009-01-01
People often reason egocentrically about others' beliefs, using their own beliefs as an inductive guide. Correlational, experimental, and neuroimaging evidence suggests that people may be even more egocentric when reasoning about a religious agent's beliefs (e.g., God). In both nationally representative and more local samples, people's own beliefs on important social and ethical issues were consistently correlated more strongly with estimates of God's beliefs than with estimates of other people's beliefs (Studies 1–4). Manipulating people's beliefs similarly influenced estimates of God's beliefs but did not as consistently influence estimates of other people's beliefs (Studies 5 and 6). A final neuroimaging study demonstrated a clear convergence in neural activity when reasoning about one's own beliefs and God's beliefs, but clear divergences when reasoning about another person's beliefs (Study 7). In particular, reasoning about God's beliefs activated areas associated with self-referential thinking more so than did reasoning about another person's beliefs. Believers commonly use inferences about God's beliefs as a moral compass, but that compass appears especially dependent on one's own existing beliefs. PMID:19955414
Charting the future course of rural health and remote health in Australia: Why we need theory.
Bourke, Lisa; Humphreys, John S; Wakerman, John; Taylor, Judy
2010-04-01
This paper argues that rural and remote health is in need of theoretical development. Based on the authors' discussions, reflections and critical analyses of literature, this paper proposes key reasons why rural and remote health warrants the development of theoretical frameworks. The paper cites five reasons why theory is needed: (i) theory provides an approach for how a topic is studied; (ii) theory articulates key assumptions in knowledge development; (iii) theory systematises knowledge, enabling it to be transferable; (iv) theory provides predictability; and (v) theory enables comprehensive understanding. This paper concludes with a call for theoretical development in both rural and remote health to expand its knowledge and be more relevant to improving health care for rural Australians.
Nash Equilibria in Theory of Reasoned Action
NASA Astrophysics Data System (ADS)
Almeida, Leando; Cruz, José; Ferreira, Helena; Pinto, Alberto Adrego
2009-08-01
Game theory and Decision Theory have been applied to many different areas such as Physics, Economics, Biology, etc. In its application to Psychology, we introduce in the literature a Game Theoretical Model of Planned Behavior or Reasoned Action by establishing an analogy between two specific theories. In this study we take into account that individual decision-making is an outcome of a process where group decisions can determine individual probabilistic behavior. Using Game Theory concepts, we describe how intentions can be transformed into behavior and, according to the Nash Equilibrium, this process will correspond to the best individual decision/response taking into account the collective response. This analysis can be extended to several examples based on the Game Theoretical Model of Planned Behavior or Reasoned Action.
Memory Activation and the Availability of Explanations in Sequential Diagnostic Reasoning
ERIC Educational Resources Information Center
Mehlhorn, Katja; Taatgen, Niels A.; Lebiere, Christian; Krems, Josef F.
2011-01-01
In the field of diagnostic reasoning, it has been argued that memory activation can provide the reasoner with a subset of possible explanations from memory that are highly adaptive for the task at hand. However, few studies have experimentally tested this assumption. Even less empirical and theoretical work has investigated how newly incoming…
Considerations in Phase Estimation and Event Location Using Small-aperture Regional Seismic Arrays
NASA Astrophysics Data System (ADS)
Gibbons, Steven J.; Kværna, Tormod; Ringdal, Frode
2010-05-01
The global monitoring of earthquakes and explosions at decreasing magnitudes necessitates the fully automatic detection, location and classification of an ever increasing number of seismic events. Many seismic stations of the International Monitoring System are small-aperture arrays designed to optimize the detection and measurement of regional phases. Collaboration with operators of mines within regional distances of the ARCES array, together with waveform correlation techniques, has provided an unparalleled opportunity to assess the ability of a small-aperture array to provide robust and accurate direction and slowness estimates for phase arrivals resulting from well-constrained events at sites of repeating seismicity. A significant reason for the inaccuracy of current fully-automatic event location estimates is the use of f-k slowness estimates measured in variable frequency bands. The variability of slowness and azimuth measurements for a given phase from a given source region is reduced by the application of almost any constant frequency band. However, the frequency band resulting in the most stable estimates varies greatly from site to site. Situations are observed in which regional P-arrivals from two sites, far closer than the theoretical resolution of the array, result in highly distinct populations in slowness space. This means that the f-k estimates, even at relatively low frequencies, can be sensitive to source and path-specific characteristics of the wavefield and should be treated with caution when inferring a geographical backazimuth under the assumption of a planar wavefront arriving along the great-circle path. Moreover, different frequency bands are associated with different biases, meaning that slowness and azimuth station corrections (commonly denoted SASCs) cannot be calibrated, and should not be used, without reference to the frequency band employed. We demonstrate an example where fully-automatic locations based on a source-region specific fixed-parameter template are more stable than the corresponding analyst-reviewed estimates. The reason is that the analyst selects a frequency band and analysis window which appears optimal for each event. In this case, the frequency band which produces the most consistent direction estimates has neither the best SNR nor the greatest beam-gain, and is therefore unlikely to be chosen by an analyst without calibration data.
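The f-k slowness estimates discussed here come from beamforming array traces over a chosen frequency band. Below is a minimal frequency-domain sketch of that grid search; the function name, slowness grid limits, and band handling are illustrative assumptions, not the IMS or ARCES processing chain.

```python
import numpy as np

def fk_slowness(traces, coords, fs, fband, s_max=0.4, ns=81):
    """traces: (n_sensors, n_samples); coords: (n_sensors, 2) east/north in km;
    fs: sampling rate (Hz); fband: (f_lo, f_hi) fixed band -- the choice of
    band matters, as the abstract stresses. Returns (slowness s/km, backazimuth deg)."""
    n, m = traces.shape
    freqs = np.fft.rfftfreq(m, d=1.0 / fs)
    X = np.fft.rfft(traces, axis=1)
    sel = (freqs >= fband[0]) & (freqs <= fband[1])
    X, f = X[:, sel], freqs[sel]
    sx = sy = np.linspace(-s_max, s_max, ns)          # slowness grid, s/km
    power = np.zeros((ns, ns))
    for i, sxi in enumerate(sx):
        for j, syj in enumerate(sy):
            delays = coords[:, 0] * sxi + coords[:, 1] * syj   # plane-wave delays, s
            steer = np.exp(2j * np.pi * np.outer(delays, f))   # align phases
            power[j, i] = np.abs((X * steer).sum(axis=0)).sum()
    j, i = np.unravel_index(power.argmax(), power.shape)
    baz = (np.degrees(np.arctan2(sx[i], sy[j])) + 180.0) % 360.0
    return np.hypot(sx[i], sy[j]), baz
```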
Estimated abundance of wild burros surveyed on Bureau of Land Management Lands in 2014
Griffin, Paul C.
2015-01-01
The Bureau of Land Management (BLM) requires accurate estimates of the numbers of wild horses (Equus ferus caballus) and burros (Equus asinus) living on the lands it manages. For over ten years, BLM in Arizona has used the simultaneous double-observer method of recording wild burros during aerial surveys and has reported population estimates for those surveys that come from two formulations of a Lincoln-Petersen type of analysis (Graham and Bell, 1989). In this report, I provide those same two types of burro population analysis for 2014 aerial survey data from six herd management areas (HMAs) in Arizona, California, Nevada, and Utah. I also provide burro population estimates based on a different form of simultaneous double-observer analysis, now in widespread use for wild horse surveys that takes into account the potential effects on detection probability of sighting covariates including group size, distance, vegetative cover, and other factors (Huggins, 1989, 1991). The true number of burros present in the six areas surveyed was not known, so population estimates made with these three types of analyses cannot be directly tested for accuracy in this report. I discuss theoretical reasons why the Huggins (1989, 1991) type of analysis should provide less biased estimates of population size than the Lincoln-Petersen analyses and why estimates from all forms of double-observer analyses are likely to be lower than the true number of animals present in the surveyed areas. I note reasons why I suggest using burro observations made at all available distances in analyses, not only those within 200 meters of the flight path. For all analytical methods, small sample sizes of observed groups can be problematic, but that sample size can be increased over time for Huggins (1989, 1991) analyses by pooling observations. I note ways by which burro population estimates could be tested for accuracy when there are radio-collared animals in the population or when there are simultaneous double-observer surveys before and after a burro gather and removal.
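A minimal sketch of the Lincoln-Petersen type simultaneous double-observer calculation referred to above (Graham and Bell, 1989), with the Chapman small-sample correction added for comparison; the counts below are hypothetical, not the 2014 survey data.

```python
# Hypothetical double-observer counts for one HMA survey.
n1 = 52   # burro groups detected by observer 1
n2 = 47   # burro groups detected by observer 2
m = 38    # groups detected by both observers

N_lp = n1 * n2 / m                            # Lincoln-Petersen estimate
N_chapman = (n1 + 1) * (n2 + 1) / (m + 1) - 1 # Chapman small-sample correction
p1, p2 = m / n2, m / n1                       # per-observer detection probabilities
print(f"N_LP = {N_lp:.1f}, N_Chapman = {N_chapman:.1f}, p1 = {p1:.2f}, p2 = {p2:.2f}")
```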
Gupta, Manoj; Gupta, T C
2017-10-01
The present study aims to accurately estimate inertial, physical, and dynamic parameters of a human body vibratory model consistent with the physical structure of the human body that also replicates its dynamic response. A 13 degree-of-freedom (DOF) lumped parameter model for a standing person subjected to support excitation is established. Model parameters are determined from anthropometric measurements, uniform mass density, elastic modulus of individual body segments, and modal damping ratios. Elastic moduli of ellipsoidal body segments are initially estimated by comparing the stiffness of spring elements, calculated from a detailed scheme, with values available in the literature. These values are further optimized by minimizing the difference between the theoretically calculated platform-to-head transmissibility ratio (TR) and experimental measurements. Modal damping ratios are estimated from the experimental transmissibility response using two dominant peaks in the frequency range of 0-25 Hz. From comparison between the dynamic response determined from modal analysis and experimental results, a set of elastic moduli for different segments of the human body and a novel scheme to determine modal damping ratios from TR plots are established. Acceptable match between transmissibility values calculated from the vibratory model and experimental measurements for the 50th percentile U.S. male, except at very low frequencies, validates the human body model developed. Also, the reasonable agreement obtained between the theoretical response curve and the experimental response envelope for the average Indian male affirms the technique used for constructing the vibratory model of a standing person. The present work thus provides an effective technique for constructing a subject-specific damped vibratory model based on physical measurements.
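As a simplified illustration of the transmissibility ratio (TR) being fitted, here is the classical single-DOF base-excitation transmissibility; the paper's 13-DOF model generalizes this idea, and the values below are arbitrary.

```python
import numpy as np

def transmissibility(r, zeta):
    """Absolute transmissibility of a 1-DOF mass-spring-damper under base
    excitation. r = forcing/natural frequency ratio, zeta = damping ratio."""
    num = 1 + (2 * zeta * r) ** 2
    den = (1 - r ** 2) ** 2 + (2 * zeta * r) ** 2
    return np.sqrt(num / den)

r = np.linspace(0.1, 4.0, 400)
tr = transmissibility(r, zeta=0.3)
print(f"peak TR = {tr.max():.2f} near r = {r[tr.argmax()]:.2f}")  # peak near resonance
```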
NASA Astrophysics Data System (ADS)
Scharnagl, Benedikt; Durner, Wolfgang
2013-04-01
Models are inherently imperfect because they simplify processes that are themselves imperfectly known and understood. Moreover, the input variables and parameters needed to run a model are typically subject to various sources of error. As a consequence of these imperfections, model predictions will always deviate from corresponding observations. In most applications in soil hydrology, these deviations are clearly not random but rather show a systematic structure. From a statistical point of view, this systematic mismatch may be a reason for concern because it violates one of the basic assumptions made in inverse parameter estimation: the assumption of independence of the residuals. But what are the consequences of simply ignoring the autocorrelation in the residuals, as is current practice in soil hydrology? Are the parameter estimates still valid even though the statistical foundation they are based on is partially collapsed? Theory and practical experience from other fields of science have shown that violation of the independence assumption will result in overconfident uncertainty bounds and that in some cases it may lead to significantly different optimal parameter values. In our contribution, we present three soil hydrological case studies, in which the effect of autocorrelated residuals on the estimated parameters was investigated in detail. We explicitly accounted for autocorrelated residuals using a formal likelihood function that incorporates an autoregressive model. The inverse problem was posed in a Bayesian framework, and the posterior probability density function of the parameters was estimated using Markov chain Monte Carlo simulation. In contrast to many other studies in related fields of science, and quite surprisingly, we found that the first-order autoregressive model, often abbreviated as AR(1), did not work well in the soil hydrological setting. We showed that a second-order autoregressive, or AR(2), model performs much better in these applications, leading to parameter and uncertainty estimates that satisfy all the underlying statistical assumptions. For theoretical reasons, these estimates are deemed more reliable than those based on the neglect of autocorrelation in the residuals. In compliance with theory and results reported in the literature, our results showed that parameter uncertainty bounds were substantially wider if autocorrelation in the residuals was explicitly accounted for, and also the optimal parameter values were slightly different in this case. We argue that the autoregressive model presented here should be used as a matter of routine in inverse modeling of soil hydrological processes.
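A minimal sketch of the formal-likelihood idea described here: whiten the model residuals with an AR(2) filter and score the innovations as iid Gaussian. The conditional form below (discarding the first two residuals) is a simplifying assumption for brevity, not the authors' implementation; phi1, phi2, and sigma would be estimated jointly with the soil hydraulic parameters, e.g. by MCMC.

```python
import numpy as np

def ar2_loglik(resid, phi1, phi2, sigma):
    """Conditional Gaussian log-likelihood of residuals under an AR(2)
    error model: e_t = phi1*e_{t-1} + phi2*e_{t-2} + innovation."""
    innov = resid[2:] - phi1 * resid[1:-1] - phi2 * resid[:-2]
    n = innov.size
    return (-0.5 * n * np.log(2 * np.pi * sigma**2)
            - 0.5 * np.sum(innov**2) / sigma**2)

# Usage: add ar2_loglik(obs - model(params), phi1, phi2, sigma) to the
# log-posterior inside the MCMC loop instead of an iid Gaussian term.
```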
NASA Astrophysics Data System (ADS)
Singh, Suvam; Naghma, Rahla; Kaur, Jaspreet; Antony, Bobby
2016-07-01
The total and ionization cross sections for electron scattering by benzene, halobenzenes, toluene, aniline, and phenol are reported over a wide energy domain. The multi-scattering centre spherical complex optical potential method has been employed to find the total elastic and inelastic cross sections. The total ionization cross section is estimated from total inelastic cross section using the complex scattering potential-ionization contribution method. In the present article, the first theoretical calculations for electron impact total and ionization cross section have been performed for most of the targets having numerous practical applications. A reasonable agreement is obtained compared to existing experimental observations for all the targets reported here, especially for the total cross section.
NASA Astrophysics Data System (ADS)
Veneziano, D.; Langousis, A.; Lepore, C.
2009-12-01
The annual maximum of the average rainfall intensity in a period of duration d, Iyear(d), is typically assumed to have a generalized extreme value (GEV) distribution. The shape parameter k of that distribution is especially difficult to estimate from either at-site or regional data, making it important to constrain k using theoretical arguments. In the context of multifractal representations of rainfall, we observe that standard theoretical estimates of k from extreme value (EV) and extreme excess (EE) theories do not apply, while estimates from large deviation (LD) theory hold only for very small d. We then propose a new theoretical estimator based on fitting GEV models to the numerically calculated distribution of Iyear(d). A standard result from EV and EE theories is that k depends on the tail behavior of the average rainfall in d, I(d). This result holds if Iyear(d) is the maximum of a sufficiently large number n of variables, all distributed like I(d); therefore its applicability hinges on whether n = 1yr/d is large enough and the tail of I(d) is sufficiently well known. One typically assumes that at least for small d the former condition is met, but poor knowledge of the upper tail of I(d) remains an obstacle for all d. In fact, in the case of multifractal rainfall, also the first condition is not met because, irrespective of d, 1yr/d is too small (Veneziano et al., 2009, WRR, in press). Applying large deviation (LD) theory to this multifractal case, we find that, as d → 0, Iyear(d) approaches a GEV distribution whose shape parameter kLD depends on a region of the distribution of I(d) well below the upper tail, is always positive (in the EV2 range), is much larger than the value predicted by EV and EE theories, and can be readily found from the scaling properties of I(d). The scaling properties of rainfall can be inferred also from short records, but the limitation remains that the result holds only as d → 0, not for finite d. Therefore, for different reasons, none of the above asymptotic theories applies to Iyear(d). In practice, one is interested in the distribution of Iyear(d) over a finite range of averaging durations d and return periods T. Using multifractal representations of rainfall, we have numerically calculated the distribution of Iyear(d) and found that, although not GEV, the distribution can be accurately approximated by a GEV model. The best-fitting parameter k depends on d, but is insensitive to the scaling properties of rainfall and the range of return periods T used for fitting. We have obtained a default expression for k(d) and compared it with estimates from historical rainfall records. The theoretical function tracks well the empirical dependence on d, although it generally overestimates the empirical k values, possibly due to deviations of rainfall from perfect scaling. This issue is under investigation.
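In practice, fitting a GEV to annual maxima for a fixed duration d can be sketched with scipy; note that scipy's shape parameter c is of opposite sign to the hydrological k convention above (k > 0, the heavy-tailed EV2 range, corresponds to c < 0 — a convention worth double-checking against any given text). The sample values are hypothetical.

```python
import numpy as np
from scipy.stats import genextreme

# Hypothetical annual-maximum intensities Iyear(d) for one duration d, mm/h.
annual_maxima = np.array([22.1, 35.4, 28.9, 41.0, 25.3, 30.8, 55.2,
                          27.5, 33.0, 24.6, 47.8, 29.4])

c, loc, scale = genextreme.fit(annual_maxima)  # maximum-likelihood GEV fit
k = -c                                         # hydrological shape convention
print(f"k = {k:.3f}, loc = {loc:.2f}, scale = {scale:.2f}")
print("100-yr intensity:", genextreme.ppf(1 - 1 / 100, c, loc, scale))
```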
David, Allan E.; Cole, Adam J.; Chertok, Beata; Park, Yoon Shin; Yang, Victor C.
2011-01-01
Magnetic nanoparticles (MNP) continue to draw considerable attention as potential diagnostic and therapeutic tools in the fight against cancer. Although many interacting forces present themselves during magnetic targeting of MNP to tumors, most theoretical considerations of this process ignore all except for the magnetic and drag forces. Our validation of a simple in vitro model against in vivo data, and subsequent reproduction of the in vitro results with a theoretical model indicated that these two forces do indeed dominate the magnetic capture of MNP. However, because nanoparticles can be subject to aggregation, and large MNP experience an increased magnetic force, the effects of surface forces on MNP stability cannot be ignored. We accounted for the aggregating surface forces simply by measuring the size of MNP retained from flow by magnetic fields, and utilized this size in the mathematical model. This presumably accounted for all particle-particle interactions, including those between magnetic dipoles. Thus, our “corrected” mathematical model provided a reasonable estimate of not only fractional MNP retention, but also predicted the regions of accumulation in a simulated capillary. Furthermore, the model was also utilized to calculate the effects of MNP size and spatial location, relative to the magnet, on targeting of MNPs to tumors. This combination of an in vitro model with a theoretical model could potentially assist with parametric evaluations of magnetic targeting, and enable rapid enhancement and optimization of magnetic targeting methodologies. PMID:21295085
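A back-of-the-envelope sketch of the two forces the abstract says dominate magnetic capture, magnetophoretic attraction versus Stokes drag; the linear-magnetization force form and every numerical value below are illustrative assumptions, not the authors' in vitro parameters.

```python
import math

mu0 = 4e-7 * math.pi    # vacuum permeability, T*m/A
R = 250e-9              # effective (aggregated) MNP radius, m -- assumed
chi = 20.0              # effective volume susceptibility -- assumed
B, gradB = 0.5, 100.0   # field (T) and field gradient (T/m) near magnet -- assumed
eta = 3.5e-3            # blood-like viscosity, Pa*s
v = 1e-3                # local flow speed, m/s

Vp = 4.0 / 3.0 * math.pi * R**3
F_mag = Vp * chi * B * gradB / mu0   # magnetophoretic force, linear regime
F_drag = 6 * math.pi * eta * R * v   # Stokes drag on a sphere
print(f"F_mag = {F_mag:.2e} N, F_drag = {F_drag:.2e} N, ratio = {F_mag / F_drag:.2f}")
# Note R enters F_mag as R^3 but F_drag only as R: aggregation strongly
# favors capture, which is why the measured (aggregated) size matters.
```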
NASA Astrophysics Data System (ADS)
Lhomme, J. P.; Monteny, B.
1982-03-01
This paper begins by recalling new concepts concerning evapotranspiration as specified by the round-table conference of Budapest in May 1977. The potential evaporation (EP) is now defined as the evaporation of a crop all of whose exchange surfaces (leaves, stalks, ...) are saturated, i.e., covered with a thin film of water. It can be calculated by a theoretical formula of Penman type. We give the reasons why it is interesting to use grass potential evaporation (EPg) as reference. The empirical relationships to estimate in this case the net radiation and the aerodynamic component of the formula have been derived from measurements made in Ivory Coast (West Africa). Relationship (8) has been obtained; it gives the daily value of EPg in millimeters of water per day (mm/d). The values calculated by this formula are compared to measurements of grass maximal evapotranspiration (ETMg).
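A hedged sketch of a Penman-type formula for a fully wet (saturated) canopy, i.e. with no surface resistance; the coefficients and inputs below are generic textbook values, not the Ivory Coast calibration behind relationship (8).

```python
import math

def ep_penman(Rn, T, vpd, ra):
    """Penman-type potential evaporation for a wet canopy.
    Rn: net radiation (W/m2); T: air temperature (C);
    vpd: vapour pressure deficit es - ea (kPa); ra: aerodynamic resistance (s/m).
    Returns evaporation in mm/day."""
    gamma = 0.066                                   # psychrometric constant, kPa/C
    es = 0.6108 * math.exp(17.27 * T / (T + 237.3)) # saturation vapour pressure, kPa
    delta = 4098 * es / (T + 237.3) ** 2            # slope of es(T), kPa/C
    rho_cp = 1.2 * 1013.0                           # air density * heat capacity, J/(m3*C)
    lam = 2.45e6                                    # latent heat of vaporization, J/kg
    LE = (delta * Rn + rho_cp * vpd / ra) / (delta + gamma)  # latent heat flux, W/m2
    return LE / lam * 86400.0                       # 1 kg/m2 per day = 1 mm/day

print(ep_penman(Rn=150.0, T=27.0, vpd=1.5, ra=60.0))  # ~8 mm/day, tropical grass
```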
Stokes waves revisited: Exact solutions in the asymptotic limit
NASA Astrophysics Data System (ADS)
Davies, Megan; Chattopadhyay, Amit K.
2016-03-01
The Stokes perturbative solution of the nonlinear (boundary value dependent) surface gravity wave problem is known to provide results of reasonable accuracy to engineers in estimating the phase speed and amplitudes of such nonlinear waves. The weak link in this structure, though, is the presence of aperiodic "secular variation" in the solution that does not agree with the known periodic propagation of surface waves. This has historically necessitated increasingly higher-order (perturbative) approximations in the representation of the velocity profile. The present article ameliorates this long-standing theoretical insufficiency by invoking a compact exact n-ordered solution in the asymptotic infinite depth limit, primarily based on a representation structured around the third-order perturbative solution, that leads to a seamless extension to higher-order (e.g., fifth-order) forms existing in the literature. The result from this study is expected to improve phenomenological engineering estimates, now that any desired higher-order expansion may be compacted within the same representation, but without any aperiodicity in the spectral pattern of the wave guides.
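For reference, the standard third-order deep-water Stokes dispersion relation that any such asymptotic solution must reproduce (with a the first-order amplitude and k the wavenumber); this is the textbook result, not the paper's n-ordered form:

```latex
% Third-order Stokes dispersion in the infinite-depth limit:
\omega^{2} = g k \bigl( 1 + (ka)^{2} \bigr),
\qquad
c^{2} = \frac{\omega^{2}}{k^{2}} = \frac{g}{k}\bigl( 1 + (ka)^{2} \bigr).
```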
Optical properties of extended-chain polymers under stress
NASA Astrophysics Data System (ADS)
Ramirez, Rafael G.; Eby, R. K.
1995-09-01
Birefringence and x-ray diffraction experiments have been carried out on Kevlar 49® fibers under tensile stress to monitor structure changes under the stress field. The origin of the observed birefringence is discussed in some detail. Results from theoretical calculations using semi-empirical molecular orbital techniques are presented and contrasted with the experimental observations. The calculations involved the estimation of chain polarizability and were performed under simulated stress conditions using the AM1 Hamiltonian in MOPAC. Polarizability is then used to calculate the birefringence as a function of tensile stress, using existing internal field theory. This theoretical approach is applied to predict the optical properties of highly oriented extended-chain polyethylene, as well as those of poly(p-phenylene terephthalamide); the latter is the base polymer in Kevlar fibers. Results reveal reasonable birefringence predictions when compared with available experimental results in the literature. Also, it is found that the contribution from crystallites orienting under the stress field to the measured birefringence in Kevlar fibers is only a small fraction of the total. However, the calculations predict a significant contribution from deformation (extension) at the molecular level.
Magnetic helicity balance at Taylor relaxed states sustained by AC helicity injection
NASA Astrophysics Data System (ADS)
Hirota, Makoto; Morrison, Philip J.; Horton, Wendell; Hattori, Yuji
2017-10-01
Magnitudes of Taylor relaxed states that are sustained by AC magnetic helicity injection (also known as oscillating field current drive, OFCD) are investigated numerically in a cylindrical geometry. Compared with the amplitude of the oscillating magnetic field at the skin layer (which is normalized to 1), the strength of the axial guide field Bz0 is shown to be an important parameter. The relaxation process appears to be active only when Bz0 < 1. Moreover, in the case of a weak guide field Bz0 < 0.2, a helically-symmetric relaxed state is self-generated instead of the axisymmetric reversed-field pinch. As a theoretical model, the helicity balance is considered in a similar way to R. G. O'Neill et al., where the helicity injection rate is directly equated with the dissipation rate at the Taylor states. The bifurcation to the helical Taylor state is then predicted theoretically, and the estimated magnitudes of the relaxed states agree reasonably with numerical results as long as Bz0 < 1. This work was supported by JSPS KAKENHI Grant Number 16K05627.
Modeling for IFOG Vibration Error Based on the Strain Distribution of Quadrupolar Fiber Coil
Gao, Zhongxing; Zhang, Yonggang; Zhang, Yunhao
2016-01-01
Improving the performance of the interferometric fiber optic gyroscope (IFOG) in harsh environments, especially vibrational environments, is necessary for its practical applications. This paper presents a mathematical model for IFOG to theoretically compute the short-term rate errors caused by mechanical vibration. The computational procedures are mainly based on the strain distribution of the quadrupolar fiber coil measured by a stress analyzer. The definition of asymmetry of strain distribution (ASD) is given in the paper to evaluate the winding quality of the coil. The established model reveals that high ASD and the variable fiber elastic modulus in large-strain situations are two dominant reasons that give rise to a nonreciprocal phase shift in IFOG under vibration. Furthermore, theoretical analysis and computational results indicate that vibration errors of both open-loop and closed-loop IFOG increase with increasing vibrational amplitude, vibrational frequency, and ASD. Finally, an estimation of vibration-induced IFOG errors in aircraft is made according to the proposed model. Our work is meaningful in designing IFOG coils to achieve better anti-vibration performance. PMID:27455257
Granbom, Marianne; Himmelsbach, Ines; Haak, Maria; Löfqvist, Charlotte; Oswald, Frank; Iwarsson, Susanne
2014-04-01
The decision to relocate in old age is intricately linked to thoughts and desires to stay put. However, most research focuses either on strategies that allow people to age in place or on their reasons for relocation. There is a need for more knowledge on very old people's residential reasoning, including thoughts about aging in place and thoughts about relocation as one intertwined process evolving in everyday life. The aim of this study was to explore what we refer to as the process of residential reasoning and how it changes over time among very old people, and to contribute to the theoretical development regarding aging in place and relocation. Taking a longitudinal perspective, data stem from the ENABLE-AGE In-depth Study, with interviews conducted in 2003 and followed up in interviews in 2011. The 16 participants of the present study were 80-89 years old at the time of the first interview. During analysis the Theoretical Model of Residential Normalcy by Golant and the Life Course Model of Environmental Experience by Rowles & Watkins were used as sensitizing concepts. The findings revealed changes in the process of residential reasoning that related to a wide variety of issues. Such issues included the way very old people use their environmental experience, their striving to build upon or dismiss attachment to place, and their attempts to maintain or regain residential normalcy during years of declining health and loss of independence. In addition, the changes in reasoning were related to end-of-life issues. The findings contribute to the theoretical discussion on aging in place, relocation as a coping strategy, and reattachment after moving in very old age.
Zvolensky, Michael J; Vujanovic, Anka A; Miller, Marcel O Bonn; Bernstein, Amit; Yartz, Andrew R; Gregor, Kristin L; McLeish, Alison C; Marshall, Erin C; Gibson, Laura E
2007-09-01
The present investigation examined the relationships between anxiety sensitivity and motivation to quit smoking, barriers to smoking cessation, and reasons for quitting smoking among 329 adult daily smokers (160 females; mean age = 26.08 years, SD = 10.92). As expected, after covarying for the theoretically relevant variables of negative affectivity, gender, Axis I psychopathology, nonclinical panic attack history, number of cigarettes smoked per day, and current levels of alcohol consumption, we found that anxiety sensitivity was significantly incrementally related to level of motivation to quit smoking as well as current barriers to quitting smoking. Partially consistent with the hypotheses, after accounting for the variance explained by other theoretically relevant variables, we found that anxiety sensitivity was significantly associated with self-control reasons for quitting smoking (intrinsic factors) as well as immediate reinforcement and social influence reasons for quitting (extrinsic factors). Results are discussed in relation to better understanding the role of anxiety sensitivity in psychological processes associated with smoking cessation.
Prike, Toby; Arnold, Michelle M; Williamson, Paul
2017-08-01
A growing body of research has shown people who hold anomalistic (e.g., paranormal) beliefs may differ from nonbelievers in their propensity to make probabilistic reasoning errors. The current study explored the relationship between these beliefs and performance through the development of a new measure of anomalistic belief, called the Anomalistic Belief Scale (ABS). One key feature of the ABS is that it includes a balance of both experiential and theoretical belief items. Another aim of the study was to use the ABS to investigate the relationship between belief and probabilistic reasoning errors on conjunction fallacy tasks. As expected, results showed there was a relationship between anomalistic belief and propensity to commit the conjunction fallacy. Importantly, regression analyses on the factors that make up the ABS showed that the relationship between anomalistic belief and probabilistic reasoning occurred only for beliefs about having experienced anomalistic phenomena, and not for theoretical anomalistic beliefs.
Developing a Network of and for Geometric Reasoning
ERIC Educational Resources Information Center
Mamolo, Ami; Ruttenberg-Rozen, Robyn; Whiteley, Walter
2015-01-01
In this article, we develop a theoretical model for restructuring mathematical tasks, usually considered advanced, with a network of spatial visual representations designed to support geometric reasoning for learners of disparate ages, stages, strengths, and preparation. Through our geometric reworking of the well-known "open box…
Logics of Business Education for Sustainability
ERIC Educational Resources Information Center
Andersson, Pernilla; Öhman, Johan
2016-01-01
This paper explores various kinds of logics of "business education for sustainability" and how these "logics" position the subject business person, based on eight teachers' reasoning of their own practices. The concept of logics developed within a discourse theoretical framework is employed to analyse the teachers' reasoning.…
Scientific reasoning abilities of nonscience majors in physics-based courses
NASA Astrophysics Data System (ADS)
Moore, J. Christopher; Rubbo, Louis J.
2012-06-01
We have found that non-STEM (science, technology, engineering, and mathematics) majors taking either a conceptual physics or astronomy course at two regional comprehensive institutions score significantly lower preinstruction on the Lawson’s Classroom Test of Scientific Reasoning (LCTSR) in comparison to national average STEM majors. Based on LCTSR score, the majority of non-STEM students can be classified as either concrete operational or transitional reasoners in Piaget’s theory of cognitive development, whereas in the STEM population formal operational reasoners are far more prevalent. In particular, non-STEM students demonstrate significant difficulty with proportional and hypothetico-deductive reasoning. Prescores on the LCTSR are correlated with normalized learning gains on various concept inventories. The correlation is strongest for content that can be categorized as mostly theoretical, meaning a lack of directly observable exemplars, and weakest for content categorized as mostly descriptive, where directly observable exemplars are abundant. Although the implementation of research-verified, interactive engagement pedagogy can lead to gains in content knowledge, significant gains in theoretical content (such as force and energy) are more difficult with non-STEM students. We also observe no significant gains on the LCTSR without explicit instruction in scientific reasoning patterns. These results further demonstrate that differences in student populations are important when comparing normalized gains on concept inventories, and the achievement of significant gains in scientific reasoning requires a reevaluation of the traditional approach to physics for non-STEM students.
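The "normalized learning gains" correlated with LCTSR prescores here are presumably Hake's normalized gain; a one-line sketch of that standard definition (an assumption, since the abstract does not define the metric):

```python
def normalized_gain(pre, post, maximum=100.0):
    """Hake's normalized gain g = (post - pre) / (max - pre),
    the fraction of the available improvement actually achieved."""
    return (post - pre) / (maximum - pre)

print(normalized_gain(pre=35.0, post=55.0))  # 0.31: a typical "medium" gain
```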
Linear estimation of coherent structures in wall-bounded turbulence at Re τ = 2000
NASA Astrophysics Data System (ADS)
Oehler, S.; Garcia–Gutiérrez, A.; Illingworth, S.
2018-04-01
The estimation problem for a fully-developed turbulent channel flow at Re τ = 2000 is considered. Specifically, a Kalman filter is designed using a Navier–Stokes-based linear model. The estimator uses time-resolved velocity measurements at a single wall-normal location (provided by DNS) to estimate the time-resolved velocity field at other wall-normal locations. The estimator is able to reproduce the largest scales with reasonable accuracy for a range of wavenumber pairs, measurement locations and estimation locations. Importantly, the linear model is also able to predict with reasonable accuracy the performance that will be achieved by the estimator when applied to the DNS. A more practical estimation scheme using the shear stress at the wall as measurement is also considered. The estimator is still able to estimate the largest scales with reasonable accuracy, although the estimator’s performance is reduced.
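A generic discrete-time Kalman filter step of the kind the estimator uses; the matrices A, C, Q, R stand in for the paper's linearized Navier-Stokes model at Re_τ = 2000 and are placeholders here, not the actual operators.

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One predict/update cycle. x: state estimate (e.g., velocity-field
    coefficients at estimation planes); y: measurement (e.g., a measured
    plane or wall shear stress); A, C: model and observation operators;
    Q, R: process and measurement noise covariances."""
    x_pred = A @ x                        # a priori state
    P_pred = A @ P @ A.T + Q              # a priori covariance
    S = C @ P_pred @ C.T + R              # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred) # corrected state
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```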
Comparing geophysical measurements to theoretical estimates for soil mixtures at low pressures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wildenschild, D; Berge, P A; Berryman, K G
1999-01-15
The authors obtained good estimates of measured velocities of sand-peat samples at low pressures by using a theoretical method, the self-consistent theory of Berryman (1980), using sand and porous peat to represent the microstructure of the mixture. They were unable to obtain useful estimates with several other theoretical approaches, because the properties of the quartz, air, and peat components of the samples vary over several orders of magnitude. Methods that are useful for consolidated rock cannot be applied directly to unconsolidated materials. Instead, careful consideration of microstructure is necessary to adapt the methods successfully. Future work includes comparison of the measured velocity values to additional theoretical estimates, investigation of Vp/Vs ratios and wave amplitudes, as well as modeling of dry and saturated sand-clay mixtures (e.g., Bonner et al., 1997, 1998). The results suggest that field data can be interpreted by comparing laboratory measurements of soil velocities to theoretical estimates of velocities in order to establish a systematic method for predicting velocities for a full range of sand-organic material mixtures at various pressures. Once the theoretical relationship is obtained, it can be used to estimate the soil composition at various depths from field measurements of seismic velocities. Additional refining of the method for relating velocities to soil characteristics is useful for developing inversion algorithms.
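Once effective moduli are in hand (e.g., from Berryman's self-consistent scheme), converting them to the velocities and Vp/Vs ratios discussed above is direct; the moduli and density below are loose, sand-like guesses, not the report's measurements.

```python
import numpy as np

def velocities(K, G, rho):
    """K, G: effective bulk and shear moduli (Pa); rho: bulk density (kg/m3).
    Returns (Vp, Vs, Vp/Vs) for an isotropic effective medium."""
    vp = np.sqrt((K + 4.0 * G / 3.0) / rho)
    vs = np.sqrt(G / rho)
    return vp, vs, vp / vs

vp, vs, ratio = velocities(K=2.0e8, G=1.0e8, rho=1600.0)
print(f"Vp = {vp:.0f} m/s, Vs = {vs:.0f} m/s, Vp/Vs = {ratio:.2f}")
```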
Energy budgets and resistances to energy transport in sparsely vegetated rangeland
Nichols, W.D.
1992-01-01
Partitioning available energy between plants and bare soil in sparsely vegetated rangelands will allow hydrologists and others to gain a greater understanding of water use by native vegetation, especially phreatophytes. Standard methods of conducting energy budget studies result in measurements of latent and sensible heat fluxes above the plant canopy which therefore include the energy fluxes from both the canopy and the soil. One-dimensional theoretical numerical models have been proposed recently for the partitioning of energy in sparse crops. Bowen ratio and other micrometeorological data collected over phreatophytes growing in areas of shallow ground water in central Nevada were used to evaluate the feasibility of using these models, which are based on surface and within-canopy aerodynamic resistances, to determine heat and water vapor transport in sparsely vegetated rangelands. The models appear to provide reasonably good estimates of sensible heat flux from the soil and latent heat flux from the canopy. Estimates of latent heat flux from the soil were less satisfactory. Sensible heat flux from the canopy was not well predicted by the present resistance formulations. Also, estimates of total above-canopy fluxes were not satisfactory when using a single value for above-canopy bulk aerodynamic resistance. © 1992.
Semiotic and Theoretic Control in Argumentation and Proof Activities
ERIC Educational Resources Information Center
Arzarello, Ferdinando; Sabena, Cristina
2011-01-01
We present a model to analyze the students' activities of argumentation and proof in the graphical context of Elementary Calculus. The theoretical background is provided by the integration of Toulmin's structural description of arguments, Peirce's notions of sign, diagrammatic reasoning and abduction, and Habermas' model for rational behavior.…
Changing Concepts in Forensics.
ERIC Educational Resources Information Center
Zarefsky, David
This paper discusses five theoretical concepts in general and two theoretical models in particular that are involved in forensics. The five concepts are: (1) causation, an inquiry into the reasons for ongoing processes or problems; (2) inherency, the division of a universe into its necessary features and its accidental features; (3) presumption, a…
Integrating Relational Reasoning and Knowledge Revision during Reading
ERIC Educational Resources Information Center
Kendeou, Panayiota; Butterfuss, Reese; Van Boekel, Martin; O'Brien, Edward J.
2017-01-01
Our goal in this theoretical contribution is to connect research on knowledge revision and relational reasoning. To achieve this goal, first, we review the "knowledge revision components framework" (KReC) that provides an account of knowledge revision processes, specifically as they unfold during reading of texts. Second, we review a…
Ethical Reasoning: A Heuristic Approach for Business Educators.
ERIC Educational Resources Information Center
Molberg, Diane R.
For the teaching of business report writing, ethical reasoning can be used as a heuristic for thinking that will encourage a more effective communication pattern for business students. Writing processes can be applied to thinking processes to help students approach theoretical concepts, make decisions, and write more effective business reports. A…
Internal Medicine residents use heuristics to estimate disease probability.
Phang, Sen Han; Ravani, Pietro; Schaefer, Jeffrey; Wright, Bruce; McLaughlin, Kevin
2015-01-01
Training in Bayesian reasoning may have limited impact on accuracy of probability estimates. In this study, our goal was to explore whether residents previously exposed to Bayesian reasoning use heuristics rather than Bayesian reasoning to estimate disease probabilities. We predicted that if residents use heuristics then post-test probability estimates would be increased by non-discriminating clinical features or a high anchor for a target condition. We randomized 55 Internal Medicine residents to different versions of four clinical vignettes and asked them to estimate probabilities of target conditions. We manipulated the clinical data for each vignette to be consistent with either 1) using a representative heuristic, by adding non-discriminating prototypical clinical features of the target condition, or 2) using anchoring with adjustment heuristic, by providing a high or low anchor for the target condition. When presented with additional non-discriminating data the odds of diagnosing the target condition were increased (odds ratio (OR) 2.83, 95% confidence interval [1.30, 6.15], p = 0.009). Similarly, the odds of diagnosing the target condition were increased when a high anchor preceded the vignette (OR 2.04, [1.09, 3.81], p = 0.025). Our findings suggest that despite previous exposure to the use of Bayesian reasoning, residents use heuristics, such as the representative heuristic and anchoring with adjustment, to estimate probabilities. Potential reasons for attribute substitution include the relative cognitive ease of heuristics vs. Bayesian reasoning or perhaps residents in their clinical practice use gist traces rather than precise probability estimates when diagnosing.
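To make the reported effect sizes concrete, an odds ratio can be translated into probabilities; the 40% baseline below is an assumed illustrative figure, not a value from the study:

    # Applying the reported OR of 2.83 to an assumed 40% baseline probability
    # of diagnosing the target condition.
    baseline_p = 0.40                        # assumed, for illustration only
    odds = baseline_p / (1 - baseline_p)     # 0.667
    odds_after = 2.83 * odds                 # effect of non-discriminating data
    p_after = odds_after / (1 + odds_after)
    print(round(p_after, 2))                 # ~0.65: the diagnosis becomes more likely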
2010-01-01
Background Abnormal results of diagnostic laboratory tests can be difficult to interpret when disease probability is very low. Although most physicians generally do not use Bayesian calculations to interpret abnormal results, their estimates of pretest disease probability and reasons for ordering diagnostic tests may - in a more implicit manner - influence test interpretation and further management. A better understanding of this influence may help to improve test interpretation and management. Therefore, the objective of this study was to examine the influence of physicians' pretest disease probability estimates, and their reasons for ordering diagnostic tests, on test result interpretation, posttest probability estimates and further management. Methods Prospective study among 87 primary care physicians in the Netherlands who each ordered laboratory tests for 25 patients. They recorded their reasons for ordering the tests (to exclude or confirm disease or to reassure patients) and their pretest disease probability estimates. Upon receiving the results they recorded how they interpreted the tests, their posttest probability estimates and further management. Logistic regression was used to analyse whether the pretest probability and the reasons for ordering tests influenced the interpretation, the posttest probability estimates and the decisions on further management. Results The physicians ordered tests for diagnostic purposes for 1253 patients; 742 patients had an abnormal result (64%). Physicians' pretest probability estimates and their reasons for ordering diagnostic tests influenced test interpretation, posttest probability estimates and further management. Abnormal results of tests ordered for reasons of reassurance were significantly more likely to be interpreted as normal (65.8%) compared to tests ordered to confirm a diagnosis or exclude a disease (27.7% and 50.9%, respectively). The odds for abnormal results to be interpreted as normal were much lower when the physician estimated a high pretest disease probability, compared to a low pretest probability estimate (OR = 0.18, 95% CI = 0.07-0.52, p < 0.001). Conclusions Interpretation and management of abnormal test results were strongly influenced by physicians' estimation of pretest disease probability and by the reason for ordering the test. By relating abnormal laboratory results to their pretest expectations, physicians may seek a balance between over- and under-reacting to laboratory test results. PMID:20158908
Houben, Paul H H; van der Weijden, Trudy; Winkens, Bjorn; Winkens, Ron A G; Grol, Richard P T M
2010-02-16
Abnormal results of diagnostic laboratory tests can be difficult to interpret when disease probability is very low. Although most physicians generally do not use Bayesian calculations to interpret abnormal results, their estimates of pretest disease probability and reasons for ordering diagnostic tests may, in a more implicit manner, influence test interpretation and further management. A better understanding of this influence may help to improve test interpretation and management. Therefore, the objective of this study was to examine the influence of physicians' pretest disease probability estimates, and their reasons for ordering diagnostic tests, on test result interpretation, posttest probability estimates and further management. Prospective study among 87 primary care physicians in the Netherlands who each ordered laboratory tests for 25 patients. They recorded their reasons for ordering the tests (to exclude or confirm disease or to reassure patients) and their pretest disease probability estimates. Upon receiving the results they recorded how they interpreted the tests, their posttest probability estimates and further management. Logistic regression was used to analyse whether the pretest probability and the reasons for ordering tests influenced the interpretation, the posttest probability estimates and the decisions on further management. The physicians ordered tests for diagnostic purposes for 1253 patients; 742 patients had an abnormal result (64%). Physicians' pretest probability estimates and their reasons for ordering diagnostic tests influenced test interpretation, posttest probability estimates and further management. Abnormal results of tests ordered for reasons of reassurance were significantly more likely to be interpreted as normal (65.8%) compared to tests ordered to confirm a diagnosis or exclude a disease (27.7% and 50.9%, respectively). The odds for abnormal results to be interpreted as normal were much lower when the physician estimated a high pretest disease probability, compared to a low pretest probability estimate (OR = 0.18, 95% CI = 0.07-0.52, p < 0.001). Interpretation and management of abnormal test results were strongly influenced by physicians' estimation of pretest disease probability and by the reason for ordering the test. By relating abnormal laboratory results to their pretest expectations, physicians may seek a balance between over- and under-reacting to laboratory test results.
Tearing mode velocity braking due to resonant magnetic perturbations
NASA Astrophysics Data System (ADS)
Frassinetti, L.; Menmuir, S.; Olofsson, K. E. J.; Brunsell, P. R.; Drake, J. R.
2012-10-01
The effect of resonant magnetic perturbations (RMPs) on the tearing mode (TM) velocity is studied in EXTRAP T2R. Experimental results show that the RMP produces TM braking until a new steady velocity or wall locking is reached. The braking is initially localized at the TM resonance and then spreads to the other TMs and to the rest of the plasma producing a global velocity reduction via the viscous torque. The process has been used to experimentally estimate the kinematic viscosity profile, in the range 2–40 m² s⁻¹, and the electromagnetic torque produced by the RMP, which is strongly localized at the TM resonance. Experimental results are then compared with a theoretical model which gives a reasonable qualitative explanation of the entire process.
Comment on "Propionaldehyde infrared cross-sections and band strengths" by B. Köroğlu et al. [1
NASA Astrophysics Data System (ADS)
Richter, Wagner Eduardo; Bruns, Roy Edward
2016-08-01
The propionaldehyde infrared regional integrated areas reported by Köroğlu et al. were re-examined. Even though the spectrum seems to have been recorded correctly, the comparison of their values with the data obtained by integrating the propionaldehyde spectrum available in the PNNL database suggests that a scaling factor of 2.3025 (the ratio between ln and log bases) is the reason their results are lower than expected from other literature values and quantum chemical estimates. Revised values are then reported for the four spectral regions evaluated by these authors, resulting in much better agreement between theoretical and experimental results, not only for this molecule but also for others such as acetone and acetaldehyde.
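The cited scaling factor is simply ln(10); a one-line check, with a hypothetical corrected-area expression:

    import math

    scale = math.log(10)    # 2.302585..., the ratio of natural-log to log10 bases
    print(round(scale, 4))  # 2.3026, matching the 2.3025 factor cited above
    # corrected_area = reported_area * scale   (hypothetical variable names)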
Salim, Agus; Mackinnon, Andrew; Christensen, Helen; Griffiths, Kathleen
2008-09-30
The pre-test-post-test design (PPD) is predominant in trials of psychotherapeutic treatments. Missing data due to withdrawals present an even bigger challenge in assessing treatment effectiveness under the PPD than under designs with more observations since dropout implies an absence of information about response to treatment. When confronted with missing data, often it is reasonable to assume that the mechanism underlying missingness is related to observed but not to unobserved outcomes (missing at random, MAR). Previous simulation and theoretical studies have shown that, under MAR, modern techniques such as maximum-likelihood (ML) based methods and multiple imputation (MI) can be used to produce unbiased estimates of treatment effects. In practice, however, ad hoc methods such as last observation carried forward (LOCF) imputation and complete-case (CC) analysis continue to be used. In order to better understand the behaviour of these methods in the PPD, we compare the performance of traditional approaches (LOCF, CC) and theoretically sound techniques (MI, ML), under various MAR mechanisms. We show that the LOCF method is seriously biased and conclude that its use should be abandoned. Complete-case analysis produces unbiased estimates only when the dropout mechanism does not depend on pre-test values even when dropout is related to fixed covariates including treatment group (covariate-dependent: CD). However, CC analysis is generally biased under MAR. The magnitude of the bias is largest when the correlation of post- and pre-test is relatively low.
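A compact simulation sketch of the qualitative point, under an assumed MAR dropout rule and assumed effect sizes (none of these numbers are from the study): LOCF pulls the estimated mean change toward zero, and complete-case analysis is biased when dropout depends on pre-test scores.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    pre = rng.normal(0.0, 1.0, n)
    post = 0.5 * pre + 1.0 + rng.normal(0.0, 1.0, n)   # true mean change = 1.0
    drop = rng.random(n) < 1.0 / (1.0 + np.exp(pre))   # MAR: depends on pre only
    post_obs = np.where(drop, np.nan, post)

    locf_change = np.where(drop, 0.0, post_obs - pre)  # dropouts carry pre forward
    print("LOCF estimate:", np.mean(locf_change))              # well below 1.0
    print("Complete-case estimate:", np.nanmean(post_obs - pre))  # also biased under MAR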
A Nonlinear Least Squares Approach to Time of Death Estimation Via Body Cooling.
Rodrigo, Marianito R
2016-01-01
The problem of time of death (TOD) estimation by body cooling is revisited by proposing a nonlinear least squares approach that takes as input a series of temperature readings only. Using a reformulation of the Marshall-Hoare double exponential formula and a technique for reducing the dimension of the state space, an error function that depends on the two cooling rates is constructed, with the aim of minimizing this function. Standard nonlinear optimization methods that are used to minimize the bivariate error function require an initial guess for these unknown rates. Hence, a systematic procedure based on the given temperature data is also proposed to determine an initial estimate for the rates. Then, an explicit formula for the TOD is given. Results of numerical simulations using both theoretical and experimental data are presented, both yielding reasonable estimates. The proposed procedure does not require knowledge of the temperature at death nor the body mass. In fact, the method allows the estimation of the temperature at death once the cooling rates and the TOD have been calculated. The procedure requires at least three temperature readings, although more measured readings could improve the estimates. With the aid of computerized recording and thermocouple detectors, temperature readings spaced 10-15 min apart, for example, can be taken. The formulas can be straightforwardly programmed and installed on a hand-held device for field use. © 2015 American Academy of Forensic Sciences.
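A sketch of the nonlinear least-squares idea with SciPy, using a generic double-exponential cooling law T(t) = T_a + B1·e^(−k1·t) + B2·e^(−k2·t); the times, temperatures, ambient value, and starting guesses below are all hypothetical, and the paper's reformulation reduces the number of unknowns further.

    import numpy as np
    from scipy.optimize import least_squares

    t_obs = np.array([0.0, 0.5, 1.0, 1.5, 2.0])        # hours since first reading
    T_obs = np.array([33.1, 31.8, 30.7, 29.8, 29.0])   # hypothetical readings, deg C
    T_amb = 21.0                                       # assumed ambient temperature

    def residuals(params):
        B1, k1, B2, k2 = params
        model = T_amb + B1 * np.exp(-k1 * t_obs) + B2 * np.exp(-k2 * t_obs)
        return model - T_obs

    fit = least_squares(residuals, x0=[10.0, 0.1, 2.0, 1.0])
    print(fit.x)   # estimated (B1, k1, B2, k2); the TOD follows by extrapolating back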
Analysis of gas membrane ultra-high purification of small quantities of mono-isotopic silane
de Almeida, Valmor F.; Hart, Kevin J.
2017-01-03
A small quantity of high-value, crude, mono-isotopic silane is a prospective gas for a small-scale, high-recovery, ultra-high membrane purification process. This is an unusual application of gas membrane separation for which we provide a comprehensive analysis of a simple purification model. The goal is to develop direct analytic expressions for estimating the feasibility and efficiency of the method, and guide process design; this is only possible for binary mixtures of silane in the dilute limit, which is a somewhat realistic case. In addition, analytic solutions are invaluable to verify numerical solutions obtained from computer-aided methods. Hence, in this paper we provide new analytic solutions for the purification loops proposed. Among the common impurities in crude silane, methane poses a special membrane separation challenge since it is chemically similar to silane. Other potentially problematic compounds are: ethylene, diborane and ethane (in this order). Nevertheless, we demonstrate, theoretically, that a carefully designed membrane system may be able to purify mono-isotopic, crude silane to electronics-grade level in a reasonable amount of time and at reasonable expense. We advocate a combination of membrane materials that preferentially reject heavy impurities based on mobility selectivity, and light impurities based on solubility selectivity. We provide estimates for the purification of significant contaminants of interest. In this study, we suggest cellulose acetate and polydimethylsiloxane as examples of membrane materials on the basis of limited permeability data found in the open literature. We provide estimates of the membrane area needed and the priming volume of the cell enclosure for fabrication purposes when using the suggested membrane materials. These estimates are largely theoretical in view of the absence of reliable experimental data for the permeability of silane. Finally, future extension of this work to the non-dilute limit may apply to the recovery of silane from rejected streams of natural silicon semiconductor processes.
Analysis of Gas Membrane Ultra-High Purification of Small Quantities of Mono-Isotopic Silane
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Almeida, Valmor F.; Hart, Kevin J.
A small quantity of high-value, crude, mono-isotopic silane is a prospective gas for a small-scale, high-recovery, ultra-high membrane purification process. This is an unusual application of gas membrane separation for which we provide a comprehensive analysis of a simple purification model. The goal is to develop direct analytic expressions for estimating the feasibility and efficiency of the method, and guide process design; this is only possible for binary mixtures of silane in the dilute limit, which is a somewhat realistic case. Among the common impurities in crude silane, methane poses a special membrane separation challenge since it is chemically similar to silane. Other potentially problematic compounds are: ethylene, diborane and ethane (in this order). Nevertheless, we demonstrate, theoretically, that a carefully designed membrane system may be able to purify mono-isotopic, crude silane to electronics-grade level in a reasonable amount of time and at reasonable expense. We advocate a combination of membrane materials that preferentially reject heavy impurities based on mobility selectivity, and light impurities based on solubility selectivity. We provide estimates for the purification of significant contaminants of interest. To improve the separation selectivity, it is advantageous to use a permeate chamber under vacuum; however, this also requires greater control of in-leakage of impurities into the system. In this study, we suggest cellulose acetate and polydimethylsiloxane as examples of membrane materials on the basis of limited permeability data found in the open literature. We provide estimates of the membrane area needed and the priming volume of the cell enclosure for fabrication purposes when using the suggested membrane materials. These estimates are largely theoretical in view of the absence of reliable experimental data for the permeability of silane. Last but not least, future extension of this work to the non-dilute limit may apply to the recovery of silane from rejected streams of natural silicon semiconductor processes.
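As a back-of-envelope companion to the analysis (not the authors' model), the solution-diffusion relation Q = P·A·Δp/l links throughput to membrane area; every number below is an assumption for illustration.

    barrer = 3.35e-16       # mol*m/(m^2*s*Pa): 1 Barrer in SI units
    perm = 5.0 * barrer     # assumed permeability of the impurity in the membrane
    thickness = 1e-6        # m, assumed selective-layer thickness
    dp = 1e5                # Pa, assumed partial-pressure difference
    target_rate = 1e-6      # mol/s of impurity permeation required
    area = target_rate * thickness / (perm * dp)
    print(f"required membrane area ~ {area:.3f} m^2")   # ~0.006 m^2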
The need for international nursing diagnosis research and a theoretical framework.
Lunney, Margaret
2008-01-01
To describe the need for nursing diagnosis research and a theoretical framework for such research. A linguistics theory served as the foundation for the theoretical framework. Reasons for additional nursing diagnosis research are: (a) file names are needed for implementation of electronic health records, (b) international consensus is needed for an international classification, and (c) continuous changes occur in clinical practice. A theoretical framework used by the author is explained. Theoretical frameworks provide support for nursing diagnosis research. Linguistics theory served as an appropriate exemplar theory to support nursing research. Additional nursing diagnosis studies based upon a theoretical framework are needed and linguistics theory can provide an appropriate structure for this research.
A theoretical model to describe progressions and regressions for exercise rehabilitation.
Blanchard, Sam; Glasgow, Phil
2014-08-01
This article aims to describe a new theoretical model to simplify and aid visualisation of the clinical reasoning process involved in progressing a single exercise. Exercise prescription is a core skill for physiotherapists but is an area that is lacking in theoretical models to assist clinicians when designing exercise programs to aid rehabilitation from injury. Historical models of periodization and motor learning theories lack any visual aids to assist clinicians. The concept of the proposed model is that new stimuli can be added or exchanged with other stimuli, either intrinsic or extrinsic to the participant, in order to gradually progress an exercise whilst remaining safe and effective. The proposed model maintains the core skills of physiotherapists by assisting clinical reasoning skills, exercise prescription and goal setting. It is not limited to any one pathology or rehabilitation setting and can be adapted by any level of skilled clinician. Copyright © 2014 Elsevier Ltd. All rights reserved.
Integrated Media: Toward a Theoretical Framework for Utilizing Their Potential.
ERIC Educational Resources Information Center
Journal of Special Education Technology, 1993
1993-01-01
This article discusses how current theories of learning and memory can guide the application of integrated media (IM) to embellish a standard curriculum; considers theoretical reasons for "breaking the mold"; and offers examples of IM-based alternatives to curricula in the areas of adult literacy, language arts, social studies, language skills,…
Adolescent Egocentrism and Formal Operations: Tests of a Theoretical Assumption.
ERIC Educational Resources Information Center
Lapsley, David K.; And Others
1986-01-01
Describes two studies of the theoretical relation between adolescent egocentrism and formal operations. Study 1 used the Adolescent Egocentrism Scale (AES) and Lunzer's battery of formal reasoning tasks to assess 183 adolescents. Study 2 administered the AES, the Imaginary Audience Scale (IAS), and the Test of Logical Thinking to 138 adolescents.…
NASA Astrophysics Data System (ADS)
Sieroka, Norman
2018-02-01
This paper aims at closing a gap in recent Weyl research by investigating the role played by Leibniz for the development and consolidation of Weyl's notion of theoretical (symbolic) construction. For Weyl, just as for Leibniz, mathematics was not simply an accompanying tool when doing physics; for him it meant the ability to engage in well-guided speculations about a general framework of reality and experience. The paper first introduces some of the background of Weyl's notion of theoretical construction and then discusses particular Leibnizian inheritances in Weyl's 'Philosophie der Mathematik und Naturwissenschaft', such as the general appreciation of the principles of sufficient reason and of continuity. Afterwards the paper focuses on three themes: first, Leibniz's primary quality phenomenalism, which according to Weyl marked the decisive step in realizing that physical qualities are never apprehended directly; second, the conceptual relation between continuity and freedom; and third, Leibniz's notion of 'expression', which allows for a certain type of (surrogative) reasoning by structural analogy and which gave rise to Weyl's optimism regarding the scope of theoretical construction.
Deontic Reasoning with Emotional Content: Evolutionary Psychology or Decision Theory?
ERIC Educational Resources Information Center
Perham, Nick; Oaksford, Mike
2005-01-01
Three experiments investigated the contrasting predictions of the evolutionary and decision-theoretic approaches to deontic reasoning. Two experiments embedded a hazard management (HM) rule in a social contract scenario that should lead to competition between innate modules. A 3rd experiment used a pure HM task. Threatening material was also…
Integrating Turnover Reasons and Shocks with Turnover Decision Processes
ERIC Educational Resources Information Center
Maertz, Carl P., Jr.; Kmitta, Kayla R.
2012-01-01
We interviewed and classified 186 quitters from many jobs and organizations via a theoretically-based protocol into five decision process types. We then tested exploratory hypotheses comparing users of these types on their propensity to report certain turnover reasons and turnover shocks. "Impulsive-type quitters," with neither a job offer in hand…
Reasons and Methods to Learn the Management
ERIC Educational Resources Information Center
Li, Hongxin; Ding, Mengchun
2010-01-01
Reasons for learning the management include (1) perfecting the knowledge structure, (2) the management is the base of all organizations, (3) one person may be the manager or the managed person, (4) the management is absolutely not simple knowledge, and (5) the learning of the theoretical knowledge of the management can not be replaced by the…
ERIC Educational Resources Information Center
Mason, Lucia; Boldrin, Angela; Zurlo, Giovanna
2006-01-01
This article reports a theoretically based study on the model of development of epistemological understanding proposed by Kuhn (2000) [Kuhn, D. (2000). Theory of mind, metacognition, and reasoning: A life-span perspective. In P. Mitchell & K. J. Riggs (Eds.), "Children's reasoning and the mind" (pp. 301-326). Hove, UK: Psychology…
The Development of the Motivation for Critical Reasoning in Online Discussions Inventory (MCRODI)
ERIC Educational Resources Information Center
Zhang, Tianyi; Koehler, Matthew J.; Spatariu, Alexandru
2009-01-01
This study was conducted to develop an inventory that measures students' motivation to engage in critical reasoning in online discussions. Inventory items were developed based on theoretical frameworks and then tested on 168 participants. Using exploratory factor analysis, test-retest reliability, and internal consistency, twenty-two items were…
Direct observation limits on antimatter gravitation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fischler, Mark; Lykken, Joe; Roberts, Tom
2008-06-01
The proposed Antihydrogen Gravity experiment at Fermilab (P981) will directly measure the gravitational attraction ḡ between antihydrogen and the Earth, with an accuracy of 1% or better. The following key question has been asked by the PAC: Is a possible 1% difference between ḡ and g already ruled out by other evidence? This memo presents the key points of existing evidence, to answer whether such a difference is ruled out (a) on the basis of direct observational evidence; and/or (b) on the basis of indirect evidence, combined with reasoning based on strongly held theoretical assumptions. The bottom line is that there are no direct observations or measurements of gravitational asymmetry which address the antimatter sector. There is evidence which by indirect reasoning can be taken to rule out such a difference, but the analysis needed to draw that conclusion rests on models and assumptions which are in question for other reasons and are thus worth testing. There is no compelling evidence or theoretical reason to rule out such a difference at the 1% level.
Internal Medicine residents use heuristics to estimate disease probability
Phang, Sen Han; Ravani, Pietro; Schaefer, Jeffrey; Wright, Bruce; McLaughlin, Kevin
2015-01-01
Background Training in Bayesian reasoning may have limited impact on accuracy of probability estimates. In this study, our goal was to explore whether residents previously exposed to Bayesian reasoning use heuristics rather than Bayesian reasoning to estimate disease probabilities. We predicted that if residents use heuristics then post-test probability estimates would be increased by non-discriminating clinical features or a high anchor for a target condition. Method We randomized 55 Internal Medicine residents to different versions of four clinical vignettes and asked them to estimate probabilities of target conditions. We manipulated the clinical data for each vignette to be consistent with either 1) using a representative heuristic, by adding non-discriminating prototypical clinical features of the target condition, or 2) using anchoring with adjustment heuristic, by providing a high or low anchor for the target condition. Results When presented with additional non-discriminating data the odds of diagnosing the target condition were increased (odds ratio (OR) 2.83, 95% confidence interval [1.30, 6.15], p = 0.009). Similarly, the odds of diagnosing the target condition were increased when a high anchor preceded the vignette (OR 2.04, [1.09, 3.81], p = 0.025). Conclusions Our findings suggest that despite previous exposure to the use of Bayesian reasoning, residents use heuristics, such as the representative heuristic and anchoring with adjustment, to estimate probabilities. Potential reasons for attribute substitution include the relative cognitive ease of heuristics vs. Bayesian reasoning or perhaps residents in their clinical practice use gist traces rather than precise probability estimates when diagnosing. PMID:27004080
The construct-behavior gap in behavioral decision research: A challenge beyond replicability.
Regenwetter, Michel; Robinson, Maria M
2017-10-01
Behavioral decision research compares theoretical constructs like preferences to behavior such as observed choices. Three fairly common links from constructs to behavior are (1) to tally, across participants and decision problems, the number of choices consistent with one predicted pattern of pairwise preferences; (2) to compare what most people choose in each decision problem against a predicted preference pattern; or (3) to enumerate the decision problems in which two experimental conditions generate a 1-sided significant difference in choice frequency 'consistent' with the theory. Although simple, these theoretical links are heuristics. They are subject to well-known reasoning fallacies, most notably the fallacy of sweeping generalization and the fallacy of composition. No amount of replication can alleviate these fallacies. On the contrary, reiterating logically inconsistent theoretical reasoning over and again across studies obfuscates science. As a case in point, we consider pairwise choices among simple lotteries and the hypotheses of overweighting or underweighting of small probabilities, as well as the description-experience gap. We discuss ways to avoid reasoning fallacies in bridging the conceptual gap between hypothetical constructs, such as, for example, "overweighting" to observable pairwise choice data. Although replication is invaluable, successful replication of hard-to-interpret results is not. Behavioral decision research stands to gain much theoretical and empirical clarity by spelling out precise and formally explicit theories of how hypothetical constructs translate into observable behavior. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Autism: a transdiagnostic, dimensional, construct of reasoning?
Aggernaes, Bodil
2018-03-01
The concept of autism has changed across time, from the Bleulerian concept, which defined it as one of several symptoms of dementia praecox, to the present-day concept representing a pervasive development disorder. The present theoretical contribution to this special issue of EJN on autism introduces new theoretical ideas and discusses them in light of selected prior theories, clinical examples, and recent empirical evidence. The overall aim is to identify some present challenges of diagnostic practice and autism research and to suggest new pathways that may help direct future research. Future research must agree on the definitions of core concepts such as autism and psychosis. A possible redefinition of the concept of autism may be a condition in which the rationale of an individual's behaviour differs qualitatively from that of the social environment due to characteristic cognitive impairments affecting reasoning. A broad concept of psychosis could focus on deviances in the experience of reality resulting from impairments of reasoning. In this light and consistent with recent empirical evidence, it may be appropriate to redefine dementia praecox as a developmental disorder of reasoning. A future challenge of autism research may be to develop theoretical models that can account for the impact of complex processes acting at the social level in addition to complex neurobiological and psychological processes. Such models could profit from a distinction among processes related to (i) basic susceptibility, (ii) adaptive processes and (iii) decompensating factors involved in the development of manifest illness. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Context and clinical reasoning: Understanding the medical student perspective.
McBee, Elexis; Ratcliffe, Temple; Schuwirth, Lambert; O'Neill, Daniel; Meyer, Holly; Madden, Shelby J; Durning, Steven J
2018-04-27
Studies have shown that a physician's clinical reasoning performance can be influenced by contextual factors. We explored how the clinical reasoning performance of medical students was impacted by contextual factors in order to expand upon previous findings in resident and board certified physicians. Using situated cognition as the theoretical framework, our aim was to evaluate the verbalized clinical reasoning processes of medical students in order to describe what impact the presence of contextual factors has on their reasoning performance. Seventeen medical student participants viewed three video recordings of clinical encounters portraying straightforward diagnostic cases in internal medicine with explicit contextual factors inserted. Participants completed a computerized post-encounter form as well as a think-aloud protocol. Three authors analyzed verbatim transcripts from the think-aloud protocols using a constant comparative approach. After iterative coding, utterances were analyzed and grouped into categories and themes. Six categories and ten associated themes emerged, which demonstrated overlap with findings from previous studies in resident and attending physicians. Four overlapping categories included emotional disturbances, behavioural inferences about the patient, doctor-patient relationship, and difficulty with closure. Two new categories emerged to include anchoring and misinterpretation of data. The presence of contextual factors appeared to impact clinical reasoning performance in medical students. The data suggest that a contextual factor can be innate to the clinical scenario, consistent with situated cognition theory. These findings build upon our understanding of clinical reasoning performance from both a theoretical and practical perspective.
Estimation of post-test probabilities by residents: Bayesian reasoning versus heuristics?
Hall, Stacey; Phang, Sen Han; Schaefer, Jeffrey P; Ghali, William; Wright, Bruce; McLaughlin, Kevin
2014-08-01
Although the process of diagnosing invariably begins with a heuristic, we encourage our learners to support their diagnoses by analytical cognitive processes, such as Bayesian reasoning, in an attempt to mitigate the effects of heuristics on diagnosing. There are, however, limited data on the use and impact of Bayesian reasoning on the accuracy of disease probability estimates. In this study, our objective was to explore whether Internal Medicine residents use a Bayesian process to estimate disease probabilities by comparing their disease probability estimates to literature-derived Bayesian post-test probabilities. We gave 35 Internal Medicine residents four clinical vignettes in the form of a referral letter and asked them to estimate the post-test probability of the target condition in each case. We then compared these to literature-derived probabilities. For each vignette the estimated probability was significantly different from the literature-derived probability. For the two cases with low literature-derived probability our participants significantly overestimated the probability of these target conditions being the correct diagnosis, whereas for the two cases with high literature-derived probability the estimated probability was significantly lower than the calculated value. Our results suggest that residents generate inaccurate post-test probability estimates. Possible explanations for this include ineffective application of Bayesian reasoning, attribute substitution whereby a complex cognitive task is replaced by an easier one (e.g., a heuristic), or systematic rater bias, such as central tendency bias. Further studies are needed to identify the reasons for inaccuracy of disease probability estimates and to explore ways of improving accuracy.
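The normative calculation the residents were being compared against is a one-line Bayesian update: post-test odds equal pre-test odds times the likelihood ratio. The numbers below are assumed for illustration and are not from the vignettes.

    def posttest_prob(pretest_p, likelihood_ratio):
        odds = pretest_p / (1 - pretest_p)
        post_odds = odds * likelihood_ratio
        return post_odds / (1 + post_odds)

    # Assumed example: 10% pretest probability; test sensitivity 0.9, specificity 0.8.
    lr_positive = 0.9 / (1 - 0.8)                      # = 4.5
    print(round(posttest_prob(0.10, lr_positive), 2))  # ~0.33 after a positive result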
ERIC Educational Resources Information Center
He, Wu
2014-01-01
Currently, a work breakdown structure (WBS) approach is used as the most common cost estimation approach for online course production projects. To improve the practice of cost estimation, this paper proposes a novel framework to estimate the cost for online course production projects using a case-based reasoning (CBR) technique and a WBS. A…
Complex Impedance of Fast Optical Transition Edge Sensors up to 30 MHz
NASA Astrophysics Data System (ADS)
Hattori, K.; Kobayashi, R.; Numata, T.; Inoue, S.; Fukuda, D.
2018-03-01
Optical transition edge sensors (TESs) are characterized by a very fast response, of the order of μs, which is 10^3 times faster than TESs for X-rays and gamma-rays. To extract important parameters associated with the optical TES, complex impedances at high frequencies (> 1 MHz) need to be measured, where the parasitic impedance in the circuit and reflections of electrical signals due to discontinuities in the characteristic impedance of the readout circuits become significant. This prevents the measurement of the current sensitivity β, which can be extracted from the complex impedance. In usual setups, it is hard to build a circuit model taking into account the parasitic impedances and reflections. In this study, we present an alternative method to estimate a transfer function without investigating the details of the entire circuit. Based on this method, the complex impedance up to 30 MHz was measured. The parameters were extracted from the impedance and were compared with other measurements. Using these parameters, we calculated the theoretical limit on the energy resolution and compared it with the measured energy resolution. In this paper, the reasons for the deviation of the measured value from theoretically predicted values will be discussed.
NASA Astrophysics Data System (ADS)
Ling, C. C.; Shek, Y. F.; Huang, A. P.; Fung, S.; Beling, C. D.
1999-02-01
Positron-lifetime spectroscopy has been used to investigate the electric-field distribution occurring at the Au-semi-insulating GaAs interface. Positrons implanted from a ²²Na source and drifted back to the interface are detected through their characteristic lifetime at interface traps. The relative intensity of this fraction of interface-trapped positrons reveals that the field strength in the depletion region saturates at applied biases above 50 V, an observation that cannot be reconciled with a simple depletion approximation model. The data are, however, shown to be fully consistent with recent direct electric-field measurements and the theoretical model proposed by McGregor et al. [J. Appl. Phys. 75, 7910 (1994)] of an enhanced EL2+ electron-capture cross section above a critical electric field that causes a dramatic reduction of the depletion region's net charge density. Two theoretically derived electric-field profiles, together with an experimentally based profile, are used to estimate a positron mobility of ~95 ± 35 cm² V⁻¹ s⁻¹ under the saturation field. This value is higher than previous experiments would suggest, and reasons for this effect are discussed.
Ultrasonic Investigations on Polonides of Ba, Ca, and Pb
NASA Astrophysics Data System (ADS)
Singh, Devraj; Bhalla, Vyoma; Bala, Jyoti; Wadhwa, Shikha
2017-10-01
The temperature-dependent mechanical and ultrasonic properties of barium, calcium, and lead polonides (BaPo, CaPo, and PbPo) were investigated in the temperature range 100-300 K. The second- and third-order elastic constants (SOECs and TOECs) were computed using Coulomb and Born-Mayer potentials, and these in turn were used to estimate other secondary elastic properties such as strength, anisotropy, microhardness, etc. The theoretical approach followed predicts that BaPo, CaPo, and PbPo are brittle in nature. PbPo is found to be the hardest amongst the chosen compounds. Further, the SOECs and TOECs are applied to determine ultrasonic velocities, Debye temperature, and acoustic coupling constants along the <100>, <110>, and <111> orientations at room temperature. Additionally, thermal conductivity has been computed using Morelli and Slack's approach along different crystallographic directions at room temperature. Finally, ultrasonic attenuation due to phonon-phonon interaction and thermoelastic relaxation mechanisms has been computed for BaPo, CaPo, and PbPo. The behaviour of these compounds is similar to that of semi-metals, with a thermal relaxation time of the order of 10⁻¹¹ s. The present computational study is in reasonable agreement with the available theoretical data for similar types of materials.
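The step from SOECs to ultrasonic velocities along <100> in a cubic crystal is standard; the elastic constants and density below are placeholders, not the computed values for BaPo, CaPo, or PbPo.

    import math

    C11, C44, rho = 80e9, 25e9, 7000.0   # Pa, Pa, kg/m^3 (assumed placeholders)
    vL = math.sqrt(C11 / rho)            # longitudinal velocity along <100>
    vT = math.sqrt(C44 / rho)            # shear velocity along <100>
    print(f"vL = {vL:.0f} m/s, vT = {vT:.0f} m/s")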
Koay, Cheng Guan; Chang, Lin-Ching; Carew, John D; Pierpaoli, Carlo; Basser, Peter J
2006-09-01
A unifying theoretical and algorithmic framework for diffusion tensor estimation is presented. Theoretical connections among the least squares (LS) methods (linear least squares (LLS), weighted linear least squares (WLLS), nonlinear least squares (NLS) and their constrained counterparts) are established through their respective objective functions, and higher order derivatives of these objective functions, i.e., Hessian matrices. These theoretical connections provide new insights in designing efficient algorithms for NLS and constrained NLS (CNLS) estimation. Here, we propose novel algorithms of full Newton-type for the NLS and CNLS estimations, which are evaluated with Monte Carlo simulations and compared with the commonly used Levenberg-Marquardt method. The proposed methods have a lower percent of relative error in estimating the trace and lower reduced χ² value than those of the Levenberg-Marquardt method. These results also demonstrate that the accuracy of an estimate, particularly in a nonlinear estimation problem, is greatly affected by the Hessian matrix. In other words, the accuracy of a nonlinear estimation is algorithm-dependent. Further, this study shows that the noise variance in diffusion weighted signals is orientation dependent when signal-to-noise ratio (SNR) is low (
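As a concrete instance of the LLS member of this framework (a sketch, not the authors' code): taking logs of the diffusion-weighted signal model S_i = S_0 exp(−b_i g_iᵀ D g_i) makes ln S_0 and the six tensor entries linear unknowns.

    import numpy as np

    def lls_tensor(signals, bvals, bvecs):
        g = np.asarray(bvecs, dtype=float)   # unit gradient directions, shape (n, 3)
        b = np.asarray(bvals, dtype=float)   # b-values, shape (n,)
        # Rows: [1, -b*gx^2, -b*gy^2, -b*gz^2, -2b*gx*gy, -2b*gx*gz, -2b*gy*gz]
        X = np.column_stack([
            np.ones_like(b),
            -b * g[:, 0]**2, -b * g[:, 1]**2, -b * g[:, 2]**2,
            -2 * b * g[:, 0] * g[:, 1],
            -2 * b * g[:, 0] * g[:, 2],
            -2 * b * g[:, 1] * g[:, 2],
        ])
        coef, *_ = np.linalg.lstsq(X, np.log(signals), rcond=None)
        lnS0, Dxx, Dyy, Dzz, Dxy, Dxz, Dyz = coef
        D = np.array([[Dxx, Dxy, Dxz], [Dxy, Dyy, Dyz], [Dxz, Dyz, Dzz]])
        return np.exp(lnS0), D

The WLLS and NLS variants in the paper refine this solution by reweighting or by iterating on the nonlinear objective.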
Leaper, Campbell
2011-01-01
Many contemporary theories of social development are similar and/or share complementary constructs. Yet, there have been relatively few efforts toward theoretical integration. The present chapter represents a call for increased theory bridging. The problem of theoretical fragmentation in psychology is reviewed. Seven highlighted reasons for this predicament include differences between behavioral sciences and other sciences, theoretical paradigms as social identities, the uniqueness assumption, information overload, field fixation, linguistic fragmentation, and few incentives for theoretical integration. Afterward, the feasibility of theoretical synthesis is considered. Finally, some possible directions are proposed for theoretical integration among five contemporary theories of social and gender development: social cognitive theory, expectancy-value theory, cognitive-developmental theory, gender schema theory, and self-categorization theory.
Experimental Determination of the Permeability in the Lacunar-Canalicular Porosity of Bone
Gailani, Gaffar; Benalla, Mohammed; Mahamud, Rashal; Cowin, Stephen C.; Cardoso, Luis
2010-01-01
Permeability of the mineralized bone tissue is a critical element in understanding fluid flow occurring in the lacunar-canalicular porosity (PLC) compartment of bone and its role in bone nutrition and mechanotransduction. However, the estimation of bone permeability at the tissue level is affected by the influence of the vascular porosity (PV) in macroscopic samples containing several osteons. In this communication, both analytical and experimental approaches are proposed to estimate the lacunar-canalicular permeability in a single osteon. Data from an experimental stress-relaxation test in a single osteon is used to derive the PLC permeability by curve fitting to theoretical results from a compressible transverse isotropic poroelastic model of a porous annular disk under a ramp loading history (Cowin and Mehrabadi 2007; Gailani and Cowin 2008). The PLC tissue intrinsic permeability in the radial direction of the osteon was found to be dependent on the strain rate used and within the range of O(10⁻²⁴)–O(10⁻²⁵). The reported values of PLC permeability are in reasonable agreement with previously reported values derived using FEA and nanoindentation approaches. PMID:19831477
Predictive Model and Software for Inbreeding-Purging Analysis of Pedigreed Populations
García-Dorado, Aurora; Wang, Jinliang; López-Cortegano, Eugenio
2016-01-01
The inbreeding depression of fitness traits can be a major threat to the survival of populations experiencing inbreeding. However, its accurate prediction requires taking into account the genetic purging induced by inbreeding, which can be achieved using a “purged inbreeding coefficient”. We have developed a method to compute purged inbreeding at the individual level in pedigreed populations with overlapping generations. Furthermore, we derive the inbreeding depression slope for individual logarithmic fitness, which is larger than that for the logarithm of the population fitness average. In addition, we provide a new software, PURGd, based on these theoretical results that allows analyzing pedigree data to detect purging, and to estimate the purging coefficient, which is the parameter necessary to predict the joint consequences of inbreeding and purging. The software also calculates the purged inbreeding coefficient for each individual, as well as standard and ancestral inbreeding. Analysis of simulation data show that this software produces reasonably accurate estimates for the inbreeding depression rate and for the purging coefficient that are useful for predictive purposes. PMID:27605515
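For orientation, standard (unpurged) pedigree inbreeding can be computed with the tabular numerator-relationship method sketched below; PURGd's contribution is to extend such machinery with a purging correction, which this sketch does not implement.

    import numpy as np

    def inbreeding(sires, dams):
        """Parents must precede offspring in the ordering; -1 marks an unknown parent."""
        n = len(sires)
        A = np.zeros((n, n))
        for i in range(n):
            s, d = sires[i], dams[i]
            A[i, i] = 1.0 + (0.5 * A[s, d] if s >= 0 and d >= 0 else 0.0)
            for j in range(i):
                a = 0.0
                if s >= 0:
                    a += 0.5 * A[j, s]
                if d >= 0:
                    a += 0.5 * A[j, d]
                A[i, j] = A[j, i] = a
        return np.diag(A) - 1.0   # F_i = A_ii - 1

    # Two founders, two full sibs, and a full-sib mating: last individual has F = 0.25.
    print(inbreeding([-1, -1, 0, 0, 2], [-1, -1, 1, 1, 3]))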
Potential benefits of remote sensing: Theoretical framework and empirical estimate
NASA Technical Reports Server (NTRS)
Eisgruber, L. M.
1972-01-01
A theoretical framework is outlined for estimating social returns from research and application of remote sensing. The approximate dollar magnitude of one particular application of remote sensing, namely estimates of corn, soybean, and wheat production, is given. Finally, some comments are made on the limitations of this procedure and on the implications of the results.
Anderson, Katherine H.; Bartlein, Patrick J.; Strickland, Laura E.; Pelltier, Richard T.; Thompson, Robert S.; Shafer, Sarah L.
2012-01-01
The mutual climatic range (MCR) technique is perhaps the most widely used method for estimating past climatic parameters from fossil assemblages, largely because it can be conducted on a simple list of the taxa present in an assemblage. When applied to plant macrofossil data, this unweighted approach (MCRun) will frequently identify a large range for a given climatic parameter where the species in an assemblage can theoretically live together. To narrow this range, we devised a new weighted approach (MCRwt) that employs information from the modern relations between climatic parameters and plant distributions to lessen the influence of the "tails" of the distributions of the climatic data associated with the taxa in an assemblage. To assess the performance of the MCR approaches, we applied them to a set of modern climatic data and plant distributions on a 25-km grid for North America, and compared observed and estimated climatic values for each grid point. In general, MCRwt was superior to MCRun in providing smaller anomalies, less bias, and better correlations between observed and estimated values. However, by the same measures, the results of Modern Analog Technique (MAT) approaches were superior to MCRwt. Although this might be reason to favor MAT approaches, they are based on assumptions that may not be valid for paleoclimatic reconstructions, including that: 1) the absence of a taxon from a fossil sample is meaningful, 2) plant associations were largely unaffected by past changes in either levels of atmospheric carbon dioxide or in the seasonal distributions of solar radiation, and 3) plant associations of the past are adequately represented on the modern landscape. To illustrate the application of these MCR and MAT approaches to paleoclimatic reconstructions, we applied them to a Pleistocene paleobotanical assemblage from the western United States. From our examinations of the estimates of modern and past climates from vegetation assemblages, we conclude that the MCRun technique provides reliable and unbiased estimates of the ranges of possible climatic conditions that can reasonably be associated with these assemblages. The application of MCRwt and MAT approaches can further constrain these estimates and may provide a systematic way to assess uncertainty. The data sets required for MCR analyses in North America are provided in a parallel publication.
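The unweighted MCR step reduces to intersecting the observed climatic ranges of the taxa present in an assemblage; the taxa and tolerances below are hypothetical.

    # Hypothetical mean-January-temperature tolerances (deg C) for three taxa.
    ranges = {"taxonA": (-15.0, 5.0), "taxonB": (-8.0, 12.0), "taxonC": (-10.0, 2.0)}
    lo = max(r[0] for r in ranges.values())   # warmest lower limit among the taxa
    hi = min(r[1] for r in ranges.values())   # coldest upper limit among the taxa
    print((lo, hi) if lo <= hi else "no mutual climatic range")   # (-8.0, 2.0)

The weighted variant (MCRwt) then downweights the tails of each taxon's climatic distribution instead of treating the full range as equally likely.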
Ling, Gilbert N.
1970-01-01
A theoretical equation is presented for the control of cooperative adsorption on proteins and other linear macromolecules by hormones, drugs, ATP, and other "cardinal adsorbents." With reasonable accuracy, this equation describes quantitatively the control of oxygen binding to hemoglobin by 2,3-diphosphoglycerate and by inosine hexaphosphate. PMID:5272319
ERIC Educational Resources Information Center
Arsenio, William F.; Gold, Jason
2006-01-01
Our goal in this paper is to examine the potential origins of children's understanding of morally relevant transgressions, with a particular focus on how children's perceptions of both proximal and distal unfairness might influence their social reasoning and behavior. A preliminary theoretical model is presented that addresses connections among…
ERIC Educational Resources Information Center
Roh, Young-Ran
2000-01-01
Explores theoretical foundation for integrated approach to moral education; discusses rational choice and moral action within human reflective structure; investigates moral values required for integrative approach to moral education; discusses content of moral motivation, including role of emotion and reason. (Contains 15 references.) (PKP)
ERIC Educational Resources Information Center
Rooney, Pauline
2012-01-01
It is widely acknowledged that digital games can provide an engaging, motivating and "fun" experience for students. However an entertaining game does not necessarily constitute a meaningful, valuable learning experience. For this reason, experts espouse the importance of underpinning serious games with a sound theoretical framework which…
Brain Imaging, Forward Inference, and Theories of Reasoning
Heit, Evan
2015-01-01
This review focuses on the issue of how neuroimaging studies address theoretical accounts of reasoning, through the lens of the method of forward inference (Henson, 2005, 2006). After theories of deductive and inductive reasoning are briefly presented, the method of forward inference for distinguishing between psychological theories based on brain imaging evidence is critically reviewed. Brain imaging studies of reasoning, comparing deductive and inductive arguments, comparing meaningful versus non-meaningful material, investigating hemispheric localization, and comparing conditional and relational arguments, are assessed in light of the method of forward inference. Finally, conclusions are drawn with regard to future research opportunities. PMID:25620926
Brain imaging, forward inference, and theories of reasoning.
Heit, Evan
2014-01-01
This review focuses on the issue of how neuroimaging studies address theoretical accounts of reasoning, through the lens of the method of forward inference (Henson, 2005, 2006). After theories of deductive and inductive reasoning are briefly presented, the method of forward inference for distinguishing between psychological theories based on brain imaging evidence is critically reviewed. Brain imaging studies of reasoning, comparing deductive and inductive arguments, comparing meaningful versus non-meaningful material, investigating hemispheric localization, and comparing conditional and relational arguments, are assessed in light of the method of forward inference. Finally, conclusions are drawn with regard to future research opportunities.
ERIC Educational Resources Information Center
Simonneaux, Laurence; Simonneaux, Jean
2009-01-01
In this article, we study third-year university students' reasoning about three controversial socio-scientific issues from the viewpoint of education for sustainable development: local issues (the reintroduction of bears in the Pyrenees in France, wolves in the Mercantour) and a global one (global warming). We used the theoretical frameworks of…
ERIC Educational Resources Information Center
Silverman, David
2007-01-01
In this book, the author shows how good research can be methodologically inventive, empirically rigorous, theoretically-alive and practically relevant. Using materials ranging from photographs to novels and newspaper stories this book demonstrates that getting to grips with these issues means asking fundamental questions about how we are…
Reasons Given by High School Students for Refusing Sexually Transmitted Disease Screening
ERIC Educational Resources Information Center
Sanders, Ladatra S.; Nsuami, Malanda; Cropley, Lorelei D.; Taylor, Stephanie N.
2007-01-01
Objective: To determine reasons given by high school students for refusing to participate in a school-based noninvasive chlamydia and gonorrhea screening that was offered at no cost to students, using the health belief model as theoretical framework. Design: Cross-sectional survey. Setting: Public high schools in a southern urban United States…
Impact of Geological Changes on Regional and Global Economies
NASA Astrophysics Data System (ADS)
Tatiana, Skufina; Peter, Skuf'in; Vera, Samarina; Taisiya, Shatalova; Baranov, Sergey
2017-04-01
Periods of geological change, such as the supercontinent cycle (300-500 million years), Wilson cycles (300-900 million years), the magmatic-tectonic cycle (150-200 million years), and cycles with smaller periods (22, 100, 1000 years), lead to a basic contradiction that prevents forming a methodology for studying the impact of geological changes on the global and regional economies. The reason for this contradiction is the difference between the theoretical and methodological frameworks of the Earth sciences and economics, such as different time scales and the accuracy with which geological changes can be located in time. At present, geological models cannot provide accurate estimates of the time and place where geological changes (strong earthquakes, volcanoes) are expected; only the places of future (though not the next) catastrophic events are known. Thus, it is impossible to use periodicity to estimate both geological changes and their consequences. Taking these factors into account, we suggest a collection of concepts for estimating the impact of possible geological changes on regional and global economies. We illustrate our approach with an example estimating the impact of the Tohoku earthquake and tsunami of March 2011 on regional and global economies. Based on this example, we conclude that globalization processes increase the impact of geological changes at both regional and global levels. The research is supported by the Russian Foundation for Basic Research (Projects No. 16-06-00056, 16-32-00019, 16-05-00263A).
Interplanetary double-shock ensembles with anomalous electrical conductivity
NASA Technical Reports Server (NTRS)
Dryer, M.
1972-01-01
Similarity theory is applied to the case of constant velocity, piston-driven, shock waves. This family of solutions, incorporating the interplanetary magnetic field for the case of infinite electric conductivity, represents one class of experimentally observed, flare-generated shock waves. This paper discusses the theoretical extension to flows with finite conductivity (presumably caused by unspecified modes of wave-particle interactions). Solutions, including reverse shocks, are found for a wide range of magnetic Reynolds numbers from one to infinity. Consideration of a zero and nonzero ambient flowing solar wind (together with removal of magnetic considerations) enables the recovery of earlier similarity solutions as well as numerical simulations. A limited comparison with observations suggests that flare energetics can be reasonably estimated once the shock velocity, ambient solar wind velocity and density, and ambient azimuthal Alfven Mach number are known.
Further study of inversion layer MIS solar cells
NASA Technical Reports Server (NTRS)
Ho, Fat Duen
1992-01-01
Many inversion layer metal-insulator-semiconductor (IL/MIS) solar cells have been fabricated. As of today, the best cell fabricated by us has a 9.138 percent AM0 efficiency, with FF = 0.641, V_OC = 0.557 V, and I_SC = 26.9 µA. Efforts made toward fabricating an IL/MIS solar cell with reasonable efficiencies are reported. More accurate control of the thickness of the thin oxide layer between aluminum and silicon in the MIS contacts has been achieved by using two different process methods. A comparison of these two thin-oxide processes is reported. The effects of the annealing time of the sample are discussed. The range of the resistivity of the substrates used in the IL cell fabrication is experimentally estimated. A theoretical study of the MIS contacts under dark conditions is addressed.
Grubb, Anders; Horio, Masaru; Hansson, Lars-Olof; Björk, Jonas; Nyman, Ulf; Flodin, Mats; Larsson, Anders; Bökenkamp, Arend; Yasuda, Yoshinari; Blufpand, Hester; Lindström, Veronica; Zegers, Ingrid; Althaus, Harald; Blirup-Jensen, Søren; Itoh, Yoshi; Sjöström, Per; Nordin, Gunnar; Christensson, Anders; Klima, Horst; Sunde, Kathrin; Hjort-Christensen, Per; Armbruster, David; Ferrero, Carlo
2014-07-01
Many different cystatin C-based equations exist for estimating glomerular filtration rate. Major reasons for this are the previous lack of an international cystatin C calibrator and the nonequivalence of results from different cystatin C assays. Use of the recently introduced certified reference material, ERM-DA471/IFCC, and further work to achieve high agreement and equivalence of 7 commercially available cystatin C assays allowed a substantial decrease of the CV of the assays, as defined by their performance in an external quality assessment for clinical laboratory investigations. By use of 2 of these assays and a population of 4690 subjects, with large subpopulations of children and Asian and Caucasian adults, with their GFR determined by either renal or plasma inulin clearance or plasma iohexol clearance, we attempted to produce a virtually assay-independent simple cystatin C-based equation for estimation of GFR. We developed a simple cystatin C-based equation for estimation of GFR comprising only 2 variables, cystatin C concentration and age. No terms for race and sex are required for optimal diagnostic performance. The equation, [Formula: see text] is also biologically oriented, with 1 term for the theoretical renal clearance of small molecules and 1 constant for extrarenal clearance of cystatin C. A virtually assay-independent simple cystatin C-based and biologically oriented equation for estimation of GFR, without terms for sex and race, was produced. © 2014 The American Association for Clinical Chemistry.
NASA Astrophysics Data System (ADS)
Campbell, E. E.; Oliveira, J. D. C.; Lamparelli, R.; Soares, J.; Monteiro, L. A.; Jaiswal, D.; Sheehan, J. J.; Figueiredo, G. K. D. A.; Lynd, L. R.
2017-12-01
Assessing changes in net primary production (NPP) of grasslands across the globe has important applications; for example, it can identify where land has been degraded or, conversely, where production has been intensified. The aim of this study is to identify changes in grassland production due to management practices and climate change. A comparison between a theoretical model of aboveground NPP and satellite data will be performed for the years 2000 to 2003. The theoretical model links NPP to climate, defined as total annual rainfall. The satellite data will use total annual NPP from the MODIS sensor (MOD17A3), in which each pixel (spatial resolution of 1 km) incorporates biome type information, daily meteorological data, and the fraction of absorbed photosynthetically active radiation (FPAR) and leaf area index (LAI). Both NPP estimates were restricted to pastureland occupied by ruminants as of the year 2000. The correlation between the total NPP values for the year 2000 was 0.77. Therefore, changes in the differences between these models can reflect the impacts of management practices and climate change on grassland biomass production, as well as the reasonableness of using both databases for predicting yield gaps. The differences between the two NPP estimates will then be classified into three groups: no significant difference, significant increase, and significant decrease. The resulting maps will show the fluctuations in grassland biomass worldwide. Regions with ongoing pasture degradation will be identified, suggesting a need for improvement; on the other hand, pastureland with a significant increase in biomass will offer an example of intensification potential. The tendency of the pastureland in each region can support policy makers in achieving sustainable use of the land. Financial Support: FAPESP process 2017/06037-4, 2016/08741-8, 2017/08970-0, 2016/08742-4 and 2014/26767-9
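As a rough illustration of the classification step described in this abstract, here is a minimal sketch; the relative-difference threshold and all variable names are our assumptions, not the study's:

```python
import numpy as np

def classify_npp_change(npp_model, npp_modis, tol=0.10):
    """Classify per-pixel differences between a rainfall-driven NPP model
    and MODIS (MOD17A3) NPP into three groups; tol is an assumed
    relative-difference threshold."""
    base = np.where(npp_model == 0, np.nan, npp_model)
    rel = (npp_modis - npp_model) / base
    classes = np.full(rel.shape, "no significant difference", dtype=object)
    classes[rel > tol] = "significant increase"
    classes[rel < -tol] = "significant decrease"
    return classes

# Toy 2x2 grid of annual NPP values (gC m^-2 yr^-1):
model = np.array([[400.0, 500.0], [300.0, 600.0]])
modis = np.array([[480.0, 490.0], [240.0, 600.0]])
print(classify_npp_change(model, modis))
```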
A case-based assistant for clinical psychiatry expertise.
Bichindaritz, I
1994-01-01
Case-based reasoning is an artificial intelligence methodology for the processing of empirical knowledge. Recent case-based reasoning systems also use theoretic knowledge about the domain to constrain the case-based reasoning. The organization of the memory is the key issue in case-based reasoning. The case-based assistant presented here has two structures in memory: cases and concepts. These memory structures permit it to be as skilled in problem-solving tasks, such as diagnosis and treatment planning, as in interpretive tasks, such as clinical research. A prototype applied to clinical work about eating disorders in psychiatry, reasoning from the alimentary questionnaires of these patients, is presented as an example of the system abilities.
Doubly robust nonparametric inference on the average treatment effect.
Benkeser, D; Carone, M; Laan, M J Van Der; Gilbert, P B
2017-12-01
Doubly robust estimators are widely used to draw inference about the average effect of a treatment. Such estimators are consistent for the effect of interest if either one of two nuisance parameters is consistently estimated. However, if flexible, data-adaptive estimators of these nuisance parameters are used, double robustness does not readily extend to inference. We present a general theoretical study of the behaviour of doubly robust estimators of an average treatment effect when one of the nuisance parameters is inconsistently estimated. We contrast different methods for constructing such estimators and investigate the extent to which they may be modified to also allow doubly robust inference. We find that while targeted minimum loss-based estimation can be used to solve this problem very naturally, common alternative frameworks appear to be inappropriate for this purpose. We provide a theoretical study and a numerical evaluation of the alternatives considered. Our simulations highlight the need for and usefulness of these approaches in practice, while our theoretical developments have broad implications for the construction of estimators that permit doubly robust inference in other problems.
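For concreteness, the standard augmented inverse-probability-weighted (AIPW) form of a doubly robust estimator of the average treatment effect can be sketched as below. This is a generic textbook construction, not the targeted minimum loss-based estimator the paper recommends, and the variable names are ours:

```python
import numpy as np

def aipw_ate(y, a, q1, q0, g):
    """Doubly robust (AIPW) estimate of the average treatment effect.

    y      : observed outcomes
    a      : binary treatment indicator (0/1)
    q1, q0 : outcome-regression predictions E[Y | A=1, X], E[Y | A=0, X]
    g      : propensity scores P(A=1 | X)

    The point estimate is consistent if either (q1, q0) or g is consistent.
    """
    psi = (a / g) * (y - q1) - ((1 - a) / (1 - g)) * (y - q0) + (q1 - q0)
    est = psi.mean()
    se = psi.std(ddof=1) / np.sqrt(len(y))  # naive plug-in standard error
    return est, se
```

The paper's point is precisely that this naive standard error can fail when the nuisance parameters are fit data-adaptively and one of them is inconsistent.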
Orban, Kristina; Ekelin, Maria; Edgren, Gudrun; Sandgren, Olof; Hovbrandt, Pia; Persson, Eva K
2017-09-11
Outcome- or competency-based education is well established in medical and health sciences education. Curricula are based on courses where students develop their competences and assessment is also usually course-based. Clinical reasoning is an important competence, and the aim of this study was to monitor and describe students' progression in professional clinical reasoning skills during health sciences education using observations of group discussions following the case method. In this qualitative study students from three different health education programmes were observed while discussing clinical cases in a modified Harvard case method session. A rubric with four dimensions - problem-solving process, disciplinary knowledge, character of discussion and communication - was used as an observational tool to identify clinical reasoning. A deductive content analysis was performed. The results revealed the students' transition over time from reasoning based strictly on theoretical knowledge to reasoning ability characterized by clinical considerations and experiences. Students who were approaching the end of their education immediately identified the most important problem and then focused on this in their discussion. Practice knowledge increased over time, which was seen as progression in the use of professional language, concepts, terms and the use of prior clinical experience. The character of the discussion evolved from theoretical considerations early in the education to clinical reasoning in later years. Communication within the groups was supportive and conducted with a professional tone. Our observations revealed progression in several aspects of students' clinical reasoning skills on a group level in their discussions of clinical cases. We suggest that the case method can be a useful tool in assessing quality in health sciences education.
Connor, Kevin; Magee, Brian
2014-10-01
This paper presents a risk assessment of exposure to metal residues in laundered shop towels by workers. The concentrations of 27 metals measured in a synthetic sweat leachate were used to estimate the releasable quantity of metals which could be transferred to workers' skin. Worker exposure was evaluated quantitatively with an exposure model that focused on towel-to-hand transfer and subsequent hand-to-food or -mouth transfers. The exposure model was based on conservative, but reasonable assumptions regarding towel use and default exposure factor values from the published literature or regulatory guidance. Transfer coefficients were derived from studies representative of the exposures to towel users. Contact frequencies were based on assumed high-end use of shop towels, but constrained by a theoretical maximum dermal loading. The risk estimates for workers developed for all metals were below applicable regulatory risk benchmarks. The risk assessment for lead utilized the Adult Lead Model and concluded that predicted lead intakes do not constitute a significant health hazard based on potential worker exposures. Uncertainties are discussed in relation to the overall confidence in the exposure estimates developed for each exposure pathway and the likelihood that the exposure model is under- or overestimating worker exposures and risk. Copyright © 2014 Elsevier Inc. All rights reserved.
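The exposure model described here follows the general structure of the standard intake equation used in health risk assessment; the toy numbers below are entirely hypothetical and serve only to illustrate the arithmetic:

```python
# Hypothetical illustration of an average-daily-dose calculation;
# none of these values are the paper's inputs.
metal_on_towel = 0.5      # releasable metal loading (ug/cm^2), assumed
transfer_fraction = 0.1   # towel-to-hand transfer efficiency, assumed
contact_area = 100.0      # skin area contacted per event (cm^2), assumed
events_per_day = 10       # high-end towel uses per workday, assumed
ef_days_per_year = 250    # exposure frequency (workdays/yr), assumed
ed_years = 25             # exposure duration (yr), assumed
bw_kg = 70.0              # body weight (kg)
at_days = ed_years * 365  # averaging time for noncancer effects (d)

daily_intake_ug = (metal_on_towel * transfer_fraction
                   * contact_area * events_per_day)
add_ug_per_kg_day = (daily_intake_ug * ef_days_per_year * ed_years
                     / (bw_kg * at_days))
print(round(add_ug_per_kg_day, 3))  # ~0.489 ug/kg-day under these assumptions
```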
Estimation of Melting Points of Organics.
Yalkowsky, Samuel H; Alantary, Doaa
2018-05-01
Unified physicochemical property estimation relationships (UPPER) is a system of empirical and theoretical relationships that relate 20 physicochemical properties of organic molecules to each other and to chemical structure. Melting point is a key parameter in the UPPER scheme because it is a determinant of several other properties, including vapor pressure and solubility. This review describes the first-principles calculation of the melting points of organic compounds from structure. The calculation is based on the fact that the melting point, T_m, is equal to the ratio of the heat of melting, ΔH_m, to the entropy of melting, ΔS_m. The heat of melting is shown to be an additive constitutive property. However, the entropy of melting is not entirely group additive. It is primarily dependent on molecular geometry, including parameters which reflect the degree of restriction of molecular motion in the crystal relative to that in the liquid. Symmetry, eccentricity, chirality, flexibility, and hydrogen bonding each affect molecular freedom in different ways and thus make different contributions to the total entropy of fusion. The relationships of these entropy-determining parameters to chemical structure are used to develop a reasonably accurate means of predicting the melting points of over 2000 compounds. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
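A minimal sketch of the central calculation, T_m = ΔH_m/ΔS_m. The entropy expression below is a Yalkowsky-type approximation quoted from memory (roughly ΔS_m ≈ 56.5 - 19.2 log10(σ) + 9.2 τ, in J mol⁻¹ K⁻¹) and should be checked against the review before use:

```python
import math

def entropy_of_melting(sigma, tau):
    """Approximate total entropy of melting (J mol^-1 K^-1).
    sigma: rotational symmetry number; tau: effective torsional units.
    Coefficients are an assumed Yalkowsky-type parameterization."""
    return 56.5 - 19.2 * math.log10(sigma) + 9.2 * tau

def melting_point_K(dH_m_J_mol, sigma, tau):
    """T_m = dH_m / dS_m."""
    return dH_m_J_mol / entropy_of_melting(sigma, tau)

# Benzene-like illustration: high symmetry (sigma = 12), rigid (tau = 0),
# dH_m ~ 9,870 J/mol  ->  ~276 K vs. the observed 278.7 K.
print(melting_point_K(9870.0, 12, 0))
```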
Brumback, Babette A; Cai, Zhuangyu; Dailey, Amy B
2014-05-15
Reasons for health disparities may include neighborhood-level factors, such as availability of health services, social norms, and environmental determinants, as well as individual-level factors. Investigating health inequalities using nationally or locally representative data often requires an approach that can accommodate a complex sampling design, in which individuals have unequal probabilities of selection into the study. The goal of the present article is to review and compare methods of estimating or accounting for neighborhood influences with complex survey data. We considered 3 types of methods, each generalized for use with complex survey data: ordinary regression, conditional likelihood regression, and generalized linear mixed-model regression. The relative strengths and weaknesses of each method differ from one study to another; we provide an overview of the advantages and disadvantages of each method theoretically, in terms of the nature of the estimable associations and the plausibility of the assumptions required for validity, and also practically, via a simulation study and 2 epidemiologic data analyses. The first analysis addresses determinants of repeat mammography screening use using data from the 2005 National Health Interview Survey. The second analysis addresses disparities in preventive oral health care using data from the 2008 Florida Behavioral Risk Factor Surveillance System Survey.
Verifying reddening and extinction for Gaia DR1 TGAS main sequence stars
NASA Astrophysics Data System (ADS)
Gontcharov, George A.; Mosenkov, Aleksandr V.
2017-12-01
We compare eight sources of reddening and extinction estimates for approximately 60 000 Gaia DR1 Tycho-Gaia Astrometric Solution (TGAS) main sequence stars younger than 3 Gyr with a relative error of the Gaia parallax less than 0.1. For the majority of the stars, the best 2D dust emission-based reddening maps show considerable differences between the reddening to infinity and the one calculated to the stellar distance using the barometric law of the dust distribution. This proves that the majority of the TGAS stars are embedded in the Galactic dust layer and a proper 3D treatment of the reddening/extinction is required to calculate their dereddened colours and absolute magnitudes reliably. Sources with 3D estimates of reddening are tested in their ability to put the stars among the PARSEC and MIST theoretical isochrones in the Hertzsprung-Russell diagram based on the precise Gaia, Tycho-2, 2MASS and WISE photometry. Only the reddening/extinction estimates by Arenou et al. and Gontcharov, being appropriate for nearby stars within 280 pc, provide both the minimal number of outliers bluer than any reasonable isochrone and the correct number of stars younger than 3 Gyr in agreement with the Besançon Galaxy model.
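The "barometric law" comparison mentioned here has a simple closed form: for an exponential dust layer of scale height h, the reddening accumulated to a star at distance d and Galactic latitude b is the total column scaled by the fraction of dust in front of the star. A sketch, with the scale height as an assumed round value:

```python
import numpy as np

def reddening_at_distance(ebv_infinity, d_pc, b_deg, h_pc=100.0):
    """E(B-V) to distance d for an exponential (barometric) dust layer.

    ebv_infinity : reddening to infinity from a 2D emission-based map
    d_pc         : stellar distance (pc)
    b_deg        : Galactic latitude (deg); formula assumes |b| not tiny
    h_pc         : dust scale height (pc), an assumed value
    """
    sin_b = np.abs(np.sin(np.radians(b_deg)))
    return ebv_infinity * (1.0 - np.exp(-d_pc * sin_b / h_pc))

# A star at 200 pc and b = 30 deg sees ~63% of the infinite dust column:
print(reddening_at_distance(0.05, 200.0, 30.0))  # ~0.032 mag
```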
Assessment and Mapping of the Riverine Hydrokinetic Resource in the Continental United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacobson, Paul T.; Ravens, Thomas M.; Cunningham, Keith W.
2012-12-14
The U.S. Department of Energy (DOE) funded the Electric Power Research Institute and its collaborative partners, University of Alaska Anchorage, University of Alaska Fairbanks, and the National Renewable Energy Laboratory, to provide an assessment of the riverine hydrokinetic resource in the continental United States. The assessment benefited from input obtained during two workshops attended by individuals with relevant expertise and from a National Research Council panel commissioned by DOE to provide guidance to this and other concurrent, DOE-funded assessments of water-based renewable energy. These sources of expertise provided valuable advice regarding data sources and assessment methodology. The assessment of the hydrokinetic resource in the 48 contiguous states is derived from spatially explicit data contained in NHDPlus, a GIS-based database containing river segment-specific information on discharge characteristics and channel slope. 71,398 river segments with mean annual discharge greater than 1,000 cubic feet per second (cfs) were included in the assessment. Segments with discharge less than 1,000 cfs were dropped from the assessment, as were river segments with hydroelectric dams. The results for the theoretical and technical resource in the 48 contiguous states were found to be relatively insensitive to the cutoff chosen: raising the cutoff to 1,500 cfs had no effect on the estimate of the technically recoverable resource, and the theoretical resource was reduced by 5.3%. The segment-specific theoretical resource was estimated from these data using the standard hydrological engineering equation that relates theoretical hydraulic power (P_th, W) to discharge (Q, m^3 s^-1) and hydraulic head or change in elevation (Δh, m) over the length of the segment, where γ is the specific weight of water (9800 N m^-3): P_th = γ Q Δh. For Alaska, which is not encompassed by NHDPlus, hydraulic head and discharge data were manually obtained from Idaho National Laboratory's Virtual Hydropower Prospector, Google Earth, and U.S. Geological Survey gages. Data were manually obtained for the eleven largest rivers with average flow rates greater than 10,000 cfs, and the resulting estimate of the theoretical resource was expanded to include rivers with discharge between 1,000 cfs and 10,000 cfs based upon the contribution of rivers in the latter flow class to the total estimate in the contiguous 48 states. The segment-specific theoretical resource was aggregated by major hydrologic region in the contiguous, lower 48 states and totaled 1,146 TWh/yr. The aggregate estimate of the Alaska theoretical resource is 235 TWh/yr, yielding a total theoretical resource estimate of 1,381 TWh/yr for the continental US. The technically recoverable resource in the contiguous 48 states was estimated by applying a recovery factor to the segment-specific theoretical resource estimates. The recovery factor scales the theoretical resource for a given segment to take into account assumptions such as minimum required water velocity and depth during low-flow conditions, maximum device packing density, device efficiency, and flow statistics (e.g., the 5th-percentile flow relative to the average flow rate). The recovery factor also takes account of "back effects" - feedback effects of turbine presence on hydraulic head and velocity. The recovery factor was determined over a range of flow rates and slopes using the hydraulic model HEC-RAS.
In the hydraulic modeling, the presence of turbines was accounted for by adjusting the Manning coefficient. This analysis, which included 32 scenarios, led to an empirical function relating recovery factor to slope and discharge. Sixty-nine percent of NHDPlus segments included in the theoretical resource estimate for the contiguous 48 states had an estimated recovery factor of zero. For Alaska, data on river slope were not readily available; hence, the recovery factor was estimated based on the flow rate alone. Segment-specific estimates of the theoretical resource were multiplied by the corresponding recovery factor to estimate the technically recoverable resource. The resulting technically recoverable resource estimate for the continental United States is 120 TWh/yr.
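The segment-level equation quoted in the abstract, P_th = γQΔh, is easy to state as code; a minimal sketch, with the example values being ours:

```python
def theoretical_hydro_power_W(q_m3s, delta_h_m, gamma=9800.0):
    """P_th = gamma * Q * dh, with gamma the specific weight of
    water (N m^-3), Q the discharge (m^3 s^-1), dh the head (m)."""
    return gamma * q_m3s * delta_h_m

# Example: a segment at the 1,500 cfs cutoff (~42.5 m^3/s) dropping 2 m
# over its length yields ~0.83 MW of theoretical power.
print(theoretical_hydro_power_W(42.5, 2.0))  # 833000.0 W
```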
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pogorelov, A. A.; Suslov, I. M.
2008-06-15
New estimates of the critical exponents have been obtained from the field-theoretical renormalization group using a new method for summing divergent series. The results almost coincide with the central values obtained by Le Guillou and Zinn-Justin (the so-called standard values), but have lower uncertainty. It has been shown that usual field-theoretical estimates implicitly imply the smoothness of the coefficient functions. The last assumption is open for discussion in view of the existence of the oscillating contribution to the coefficient functions. The appropriate interpretation of the last contribution is necessary both for the estimation of the systematic errors of the standard values and for a further increase in accuracy.
Problem-based learning: effects on student’s scientific reasoning skills in science
NASA Astrophysics Data System (ADS)
Wulandari, F. E.; Shofiyah, N.
2018-04-01
This research aimed to develop an instructional package for problem-based learning to advance students' scientific reasoning from the concrete to the formal level. The instructional package was developed using the Dick and Carey model and consisted of a lesson plan, a handout, a student worksheet, and a scientific reasoning test. It was tried out on 4th-semester science education students of Universitas Muhammadiyah Sidoarjo using a one-group pre-test post-test design. Data on scientific reasoning skills were collected with the test. The findings showed that the developed instructional package reflecting problem-based learning was feasible to implement in the classroom. Furthermore, through problem-based learning, students came to master formal scientific reasoning skills in terms of functional and proportional reasoning, control of variables, and theoretical reasoning.
On the methods for determining the transverse dispersion coefficient in river mixing
NASA Astrophysics Data System (ADS)
Baek, Kyong Oh; Seo, Il Won
2016-04-01
In this study, the strengths and weaknesses of existing methods for determining the dispersion coefficient in the two-dimensional river mixing model were assessed based on hydraulic and tracer data sets acquired from experiments conducted on either laboratory channels or natural rivers. From the results of this study, it can be concluded that, when the longitudinal dispersion coefficient as well as the transverse dispersion coefficient must be determined in the transient concentration situation, the two-dimensional routing procedures, 2D RP and 2D STRP, can be employed to calculate dispersion coefficients among the observation methods. For the steady concentration situation, the STRP can be applied to calculate the transverse dispersion coefficient. When tracer data are not available, either theoretical or empirical equations from the estimation method can be used to calculate the dispersion coefficient using geometric and hydraulic data sets. Application of the theoretical and empirical equations to the laboratory channel showed that the equations by Baek and Seo [3] predicted reasonable values, while the equations by Fischer [23] and Boxall and Guymer (2003) overestimated by factors of ten to one hundred. Among existing empirical equations, those by Jeon et al. [28] and Baek and Seo [6] gave agreeable values of the transverse dispersion coefficient for most cases of natural rivers. Further, the theoretical equation by Baek and Seo [5] has the potential to be broadly applied to both laboratory and natural channels.
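For orientation, the simplest member of the family of estimation equations compared here is Fischer-type scaling, D_t = θ h u*, with θ a rule-of-thumb coefficient (about 0.15 for straight laboratory channels, about 0.6 for natural streams); the equations evaluated in the paper refine θ using channel geometry. A sketch:

```python
import math

def shear_velocity(h_m, slope, g=9.81):
    """u_* = sqrt(g h S) for a wide channel (hydraulic radius ~ depth)."""
    return math.sqrt(g * h_m * slope)

def transverse_dispersion(h_m, u_star, theta=0.15):
    """Fischer-type first estimate D_t = theta * h * u_*; theta ~ 0.15
    (straight lab channels) to ~ 0.6 (natural streams)."""
    return theta * h_m * u_star

# Example: a 2 m deep natural river with slope 0.0005:
u_s = shear_velocity(2.0, 5e-4)                     # ~0.099 m/s
print(transverse_dispersion(2.0, u_s, theta=0.6))   # ~0.12 m^2/s
```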
InChIKey collision resistance: an experimental testing
2012-01-01
InChIKey is a 27-character compacted (hashed) version of InChI which is intended for Internet and database searching/indexing and is based on an SHA-256 hash of the InChI character string. The first block of InChIKey encodes molecular skeleton while the second block represents various kinds of isomerism (stereo, tautomeric, etc.). InChIKey is designed to be a nearly unique substitute for the parent InChI. However, a single InChIKey may occasionally map to two or more InChI strings (collision). The appearance of collision itself does not compromise the signature as collision-free hashing is impossible; the only viable approach is to set and keep a reasonable level of collision resistance which is sufficient for typical applications. We tested, in computational experiments, how well the real-life InChIKey collision resistance corresponds to the theoretical estimates expected by design. For this purpose, we analyzed the statistical characteristics of InChIKey for datasets of variable size in comparison to the theoretical statistical frequencies. For the relatively short second block, an exhaustive direct testing was performed. We computed and compared to theory the numbers of collisions for the stereoisomers of Spongistatin I (using the whole set of 67,108,864 isomers and its subsets). For the longer first block, we generated, using custom-made software, InChIKeys for more than 3 × 10^10 chemical structures. The statistical behavior of this block was tested by comparison of experimental and theoretical frequencies for the various four-letter sequences which may appear in the first block body. From the results of our computational experiments we conclude that the observed characteristics of InChIKey collision resistance are in good agreement with theoretical expectations. PMID:23256896
InChIKey collision resistance: an experimental testing.
Pletnev, Igor; Erin, Andrey; McNaught, Alan; Blinov, Kirill; Tchekhovskoi, Dmitrii; Heller, Steve
2012-12-20
InChIKey is a 27-character compacted (hashed) version of InChI which is intended for Internet and database searching/indexing and is based on an SHA-256 hash of the InChI character string. The first block of InChIKey encodes molecular skeleton while the second block represents various kinds of isomerism (stereo, tautomeric, etc.). InChIKey is designed to be a nearly unique substitute for the parent InChI. However, a single InChIKey may occasionally map to two or more InChI strings (collision). The appearance of collision itself does not compromise the signature as collision-free hashing is impossible; the only viable approach is to set and keep a reasonable level of collision resistance which is sufficient for typical applications. We tested, in computational experiments, how well the real-life InChIKey collision resistance corresponds to the theoretical estimates expected by design. For this purpose, we analyzed the statistical characteristics of InChIKey for datasets of variable size in comparison to the theoretical statistical frequencies. For the relatively short second block, an exhaustive direct testing was performed. We computed and compared to theory the numbers of collisions for the stereoisomers of Spongistatin I (using the whole set of 67,108,864 isomers and its subsets). For the longer first block, we generated, using custom-made software, InChIKeys for more than 3 × 10^10 chemical structures. The statistical behavior of this block was tested by comparison of experimental and theoretical frequencies for the various four-letter sequences which may appear in the first block body. From the results of our computational experiments we conclude that the observed characteristics of InChIKey collision resistance are in good agreement with theoretical expectations.
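The theoretical collision frequencies referred to in both versions of this abstract follow from the birthday approximation. A quick estimate, treating 26^14 as an upper bound on the number of distinct first blocks (the real encoding carries fewer effective bits, so the true expectation is somewhat higher):

```python
def expected_collision_pairs(n_items, n_buckets):
    """Birthday approximation: hashing n uniformly random items into m
    buckets yields about C(n, 2) / m colliding pairs."""
    return n_items * (n_items - 1) / 2.0 / n_buckets

# ~3e10 structures hashed into at most 26**14 first-block values:
print(expected_collision_pairs(3e10, 26**14))  # ~7 expected colliding pairs
```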
Understanding the amplitudes of noise correlation measurements
Tsai, Victor C.
2011-01-01
Cross correlation of ambient seismic noise is known to result in time series from which station-station travel-time measurements can be made. Part of the reason that these cross-correlation travel-time measurements are reliable is that there exists a theoretical framework that quantifies how these travel times depend on the features of the ambient noise. However, corresponding theoretical results do not currently exist to describe how the amplitudes of the cross correlation depend on such features. For example, currently it is not possible to take a given distribution of noise sources and calculate the cross correlation amplitudes one would expect from such a distribution. Here, we provide a ray-theoretical framework for calculating cross correlations. This framework differs from previous work in that it explicitly accounts for attenuation as well as the spatial distribution of sources and therefore can address the issue of quantifying amplitudes in noise correlation measurements. After introducing the general framework, we apply it to two specific problems. First, we show that we can quantify the amplitudes of coherency measurements, and find that the decay of coherency with station-station spacing depends crucially on the distribution of noise sources. We suggest that researchers interested in performing attenuation measurements from noise coherency should first determine how the dominant sources of noise are distributed. Second, we show that we can quantify the signal-to-noise ratio of noise correlations more precisely than previous work, and that these signal-to-noise ratios can be estimated for given situations prior to the deployment of seismometers. It is expected that there are applications of the theoretical framework beyond the two specific cases considered, but these applications await future work.
ERIC Educational Resources Information Center
Kohli, Nidhi; Koran, Jennifer; Henn, Lisa
2015-01-01
There are well-defined theoretical differences between the classical test theory (CTT) and item response theory (IRT) frameworks. It is understood that in the CTT framework, person and item statistics are test- and sample-dependent. This is not the perception with IRT. For this reason, the IRT framework is considered to be theoretically superior…
ERIC Educational Resources Information Center
Luckett, Kathy
2016-01-01
This is a theoretical paper that addresses the challenge of educational access to the Humanities and Social Sciences. It plots a theoretical quest to develop an explicit pedagogy to give "disadvantaged" students in the Humanities ways of working successfully with texts. In doing so it draws on Bernstein, Moore and Maton's work to…
Syntactic levels, lexicalism, and ellipsis: The jury is still out.
Hartsuiker, Robert J; Bernolet, Sarah
2017-01-01
Structural priming data are sometimes compatible with several theoretical views, as shown here for three key theoretical claims. One reason is that prime sentences affect multiple representational levels driving syntactic choice. Additionally, priming is affected by further cognitive functions (e.g., memory). We therefore see priming as a useful tool for the investigation of linguistic representation but not the only tool.
NASA Astrophysics Data System (ADS)
Jupri, Al
2017-04-01
In this article we address how Realistic Mathematics Education (RME) principles, including the intertwinement and the reality principles, are used to analyze geometry tasks. To do so, we carried out a small-scale study in three phases. First, we analyzed four geometry problems - considered as tasks inviting the use of problem solving and reasoning skills - theoretically in the light of the RME principles. Second, we administered two of the problems to 31 undergraduate students of a mathematics education program and the other two problems to 16 master's students of a primary mathematics education program. Finally, we analyzed the students' written work and compared the empirical results to the theoretical ones. We found that there are discrepancies between what we expected theoretically and what occurred empirically in terms of mathematization and of the intertwinement of mathematical concepts from geometry to algebra and vice versa. We conclude that the RME principles provide a fruitful framework for analyzing geometry tasks that, for instance, are intended for assessing student problem solving and reasoning skills.
RPA Field Simulations:Dilemma Training for Legal and Ethical Decision Making
2015-11-07
Two phases in the RPA field simulation: a classroom phase and a field phase. Purpose: link theoretical understanding and moral reasoning with rapid, informed decision-making and moral behavior. Irregular warfare: the U.S. dominates conventional warfare, but irregular warfare falls under… Aspects: mental simulation of action (modify, implement, will it work?). Moral reasoning/behavior: military-leader responsibility requires…
CARA: Cognitive Architecture for Reasoning About Adversaries
2012-01-20
In the synthesis approach taken here the KIDS principle (Keep It Descriptive, Stupid) applies, and agents and organizations are profiled in great detail… We developed two algorithms to make forecasts about adversarial behavior; developed game-theoretical approaches to reason about group behavior; and developed methods to automatically make forecasts about group behavior together with methods to quantify the uncertainty inherent in such forecasts…
An Analysis of Categorical and Quantitative Methods for Planning Under Uncertainty
Langlotz, Curtis P.; Shortliffe, Edward H.
1988-01-01
Decision theory and logical reasoning are both methods for representing and solving medical decision problems. We analyze the usefulness of these two approaches to medical therapy planning by establishing a simple correspondence between decision theory and non-monotonic logic, a formalization of categorical logical reasoning. The analysis indicates that categorical approaches to planning can be viewed as comprising two decision-theoretic concepts: probabilities (degrees of belief in planning hypotheses) and utilities (degrees of desirability of planning outcomes). We present and discuss examples of the following lessons from this decision-theoretic view of categorical (nonmonotonic) reasoning: (1) Decision theory and artificial intelligence techniques are intended to solve different components of the planning problem. (2) When considered in the context of planning under uncertainty, nonmonotonic logics do not retain the domain-independent characteristics of classical logical reasoning for planning under certainty. (3) Because certain nonmonotonic programming paradigms (e.g., frame-based inheritance, rule-based planning, protocol-based reminders) are inherently problem-specific, they may be inappropriate to employ in the solution of certain types of planning problems. We discuss how these conclusions affect several current medical informatics research issues, including the construction of “very large” medical knowledge bases.
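A toy contrast of the two decision-theoretic ingredients the analysis identifies inside categorical planning, probabilities and utilities, using entirely hypothetical numbers:

```python
# Each plan maps to (probability, utility) pairs over its outcomes;
# the plans and values are hypothetical and purely illustrative.
plans = {
    "treat_now":  [(0.8, 10.0), (0.2, -5.0)],
    "watch_wait": [(0.5, 8.0), (0.5, 0.0)],
}
expected_utility = {name: sum(p * u for p, u in outcomes)
                    for name, outcomes in plans.items()}
print(max(expected_utility, key=expected_utility.get))  # "treat_now" (EU 7 vs 4)
```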
ERIC Educational Resources Information Center
Hedeker, Donald; And Others
1996-01-01
Methods are proposed and described for estimating the degree to which relations among variables vary at the individual level. As an example, M. Fishbein and I. Ajzen's theory of reasoned action is examined. This article illustrates the use of empirical Bayes methods based on a random-effects regression model to estimate individual influences…
Balzan, Ryan; Delfabbro, Paul; Galletly, Cherrie; Woodward, Todd
2012-01-01
Hypersalience of evidence-hypothesis matches has recently been proposed as the cognitive mechanism responsible for the cognitive biases which, in turn, may contribute to the formation and maintenance of delusions. However, the construct lacks empirical support. The current paper investigates the possibility that individuals with delusions are hypersalient to evidence-hypothesis matches using a series of cognitive tasks designed to elicit the representativeness and availability reasoning heuristics. It was hypothesised that hypersalience of evidence-hypothesis matches may increase a person's propensity to rely on judgements of representativeness (i.e., when the probability of an outcome is based on its similarity with its parent population) and availability (i.e., estimates of frequency based on the ease with which relevant events come to mind). A total of 75 participants (25 diagnosed with schizophrenia with a history of delusions; 25 nonclinical delusion-prone; 25 nondelusion-prone controls) completed four heuristics tasks based on the original Tversky and Kahnemann experiments. These included two representativeness tasks ("coin-toss" random sequence task; "lawyer-engineer" base-rates task) and two availability tasks ("famous-names" and "letter-frequency" tasks). The results across these four heuristics tasks showed that participants with schizophrenia were more susceptible than nonclinical groups to both the representativeness and availability reasoning heuristics. These results suggest that delusional ideation is linked to a hypersalience of evidence-hypothesis matches. The theoretical implications of this cognitive mechanism on the formation and maintenance of delusions are discussed.
Silk, Kami J; Weiner, Judith; Parrott, Roxanne L
2005-12-01
Genetically modified (GM) foods are currently a controversial topic about which the lay public in the United States knows little. Formative research has demonstrated that the lay public is uncertain and concerned about GM foods. This study (N = 858) extends focus group research by using the Theory of Reasoned Action (TRA) to examine attitudes and subjective norms related to GM foods as a theoretical strategy for audience segmentation. A hierarchical cluster analysis revealed four unique audiences based on their attitude and subjective norm toward GM foods (ambivalent-biotech, antibiotech, biotech-normer, and biotech individual). Results are discussed in terms of the theoretical and practical significance for audience segmentation.
Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.
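The theoretical mean-squared error minimized by the tuner search is a steady-state Kalman filter quantity. A generic sketch (not the paper's tuner-selection routine) of obtaining the steady-state a priori error covariance by iterating the discrete Riccati recursion:

```python
import numpy as np

def steady_state_prior_covariance(A, C, Q, R, iters=1000):
    """Iterate the Kalman filter Riccati recursion for
    x_{k+1} = A x_k + w_k, y_k = C x_k + v_k, w ~ N(0, Q), v ~ N(0, R),
    until the a priori error covariance P settles."""
    P = Q.copy()
    for _ in range(iters):
        S = C @ P @ C.T + R                  # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)       # Kalman gain
        P = A @ (P - K @ C @ P) @ A.T + Q    # measurement + time update
    return P

# Scalar example:
A, C = np.array([[0.95]]), np.array([[1.0]])
Q, R = np.array([[0.01]]), np.array([[0.1]])
print(steady_state_prior_covariance(A, C, Q, R))
```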
NASA Astrophysics Data System (ADS)
Simonneaux, Laurence; Simonneaux, Jean
2009-09-01
In this article, we study third-year university students' reasoning about three controversial socio-scientific issues from the viewpoint of education for sustainable development: two local issues (the reintroduction of bears in the Pyrenees in France and of wolves in the Mercantour) and a global one (global warming). We used the theoretical frameworks of social representations and of socio-scientific reasoning. Students' reasoning varies according to the issues, in particular because of their emotional proximity to the issues and their socio-cultural origin. For this kind of issue, it seems pertinent to integrate into the operations of socio-scientific reasoning not only the consideration of values but also the analysis of the modes of governance and the place given to politics.
Assessment of Energy Production Potential from Ocean Currents along the United States Coastline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haas, Kevin
Increasing energy consumption and depleting reserves of fossil fuels have resulted in growing interest in alternative renewable energy from the ocean. Ocean currents are an alternative source of clean energy due to their inherent reliability, persistence and sustainability. General ocean circulations exist in the form of large rotating ocean gyres, and feature extremely rapid current flow in the western boundaries due to the Coriolis Effect. The Gulf Stream system is formed by the western boundary current of the North Atlantic Ocean that flows along the east coastline of the United States, and therefore is of particular interest as a potential energy resource for the United States. This project created a national database of ocean current energy resources to help advance awareness and market penetration in ocean current energy resource assessment. The database, consisting of joint velocity magnitude and direction probability histograms, was created from data generated by seven years of numerical model simulations. The accuracy of the database was evaluated by ORNL's independent validation effort, documented in a separate report. Estimates of the total theoretical power resource contained in the ocean currents were calculated utilizing two separate approaches. Firstly, the theoretical energy balance in the Gulf Stream system was examined using the two-dimensional ocean circulation equations based on the assumptions of the Stommel model for subtropical gyres, with the quasi-geostrophic balance between pressure gradient, Coriolis force, wind stress and friction driving the circulation. Parameters including water depth, natural dissipation rate and wind stress were calibrated so that the model could reproduce reasonable flow properties, including volume flux and energy flux. To represent flow dissipation due to turbines, an additional turbine drag coefficient is formulated and included in the model. Secondly, to determine the reasonableness of the total power estimates from the Stommel model and to help determine the size and capacity of arrays necessary to extract the maximum theoretical power, further estimates of the available power were made based on the distribution of the kinetic power density in the undisturbed flow. This used estimates of device spacing and scaling to sum up the total power that the devices would produce. The analysis has shown that, considering extraction over a region comprised of the Florida Current portion of the Gulf Stream system, the average power dissipated ranges between 4-6 GW with a mean around 5.1 GW. This corresponds to an average of approximately 45 TWh/yr. However, if the extraction area comprises the entire portion of the Gulf Stream within 200 miles of the US coastline from Florida to North Carolina, the average power dissipated becomes 18.6 GW or 163 TWh/yr. A web-based GIS interface, http://www.oceancurrentpower.gatech.edu/, was developed for dissemination of the data. The website includes GIS layers of monthly and yearly mean ocean current velocity and power density for ocean currents along the entire coastline of the United States, as well as joint and marginal probability histograms for current velocities at a horizontal resolution of 4-7 km with 10-25 bins over depth. Various tools are provided for viewing, identifying, filtering and downloading the data.
ERIC Educational Resources Information Center
Wartman, Katherine Lynk, Ed.; Savage, Marjorie, Ed.
2008-01-01
This monograph is divided into three main sections: theoretical grounding, student identity, and implications. The first section, theoretical grounding of parental involvement, looks at the reasons parents today are more likely to be involved in their students' lives and then reviews the literature of K-12 education and compares that information…
Phase stabilization of multidimensional amplification architectures for ultrashort pulses
NASA Astrophysics Data System (ADS)
Müller, M.; Kienel, M.; Klenke, A.; Eidam, T.; Limpert, J.; Tünnermann, A.
2015-03-01
The active phase stabilization of spatially and temporally combined ultrashort pulses is investigated theoretically and experimentally. In particular, for a combining scheme with 2 amplifier channels and 4 divided-pulse replicas, a bistable behavior is observed. The reason is a mutual influence of the optical error signals that is intrinsic to temporal polarization beam combining. A successful mitigation strategy is proposed and analyzed theoretically and experimentally.
National Strategic Planning: Linking DIMEFIL/PMESII to a Theory of Victory
2009-04-01
The paper identifies two problems, one theoretical and one practical, and both are interlinked. The theoretical problem is the lack of a mental framework tying the desired end state (usually broadly stated) to the activities undertaken with the instruments of national power…
Using AberOWL for fast and scalable reasoning over BioPortal ontologies.
Slater, Luke; Gkoutos, Georgios V; Schofield, Paul N; Hoehndorf, Robert
2016-08-08
Reasoning over biomedical ontologies using their OWL semantics has traditionally been a challenging task due to the high theoretical complexity of OWL-based automated reasoning. As a consequence, ontology repositories, as well as most other tools utilizing ontologies, either provide access to ontologies without use of automated reasoning, or limit the number of ontologies for which automated reasoning-based access is provided. We apply the AberOWL infrastructure to provide automated reasoning-based access to all accessible and consistent ontologies in BioPortal (368 ontologies). We perform an extensive performance evaluation to determine query times, both for queries of different complexity and for queries that are performed in parallel over the ontologies. We demonstrate that, with the exception of a few ontologies, even complex and parallel queries can now be answered in milliseconds, therefore allowing automated reasoning to be used on a large scale, to run in parallel, and with rapid response times.
Dropping Out of High School: An Application of the Theory of Reasoned Action.
ERIC Educational Resources Information Center
Prestholdt, Perry H.; Fisher, Jack L.
To develop and test a theoretical model, based on the Theory of Reasoned Action (Fishbein and Ajzen, 1975), for understanding and predicting the decision to stay in or drop out of school, to identify the specific beliefs that are the basis of that decision, and to evaluate the use of moderator variables (sex, race) to individualize the model,…
ERIC Educational Resources Information Center
Carr, David
2014-01-01
If we reject sentimentalist accounts of the nature of moral motivation and education, then we may regard some form of reason as intrinsic to any genuine moral response. The large question for moral education is therefore that of the nature of such reason--perhaps more especially of its status as knowledge. In this regard, there is evidence of some…
Gravity-darkening exponents in semi-detached binary systems from their photometric observations. II.
NASA Astrophysics Data System (ADS)
Djurašević, G.; Rovithis-Livaniou, H.; Rovithis, P.; Georgiades, N.; Erkapić, S.; Pavlović, R.
2006-01-01
This second part of our study concerning gravity-darkening presents the results for 8 semi-detached close binary systems. From the light-curve analysis of these systems the exponent of the gravity-darkening (GDE) for the Roche lobe filling components has been empirically derived. The method used for the light-curve analysis is based on Roche geometry, and enables simultaneous estimation of the systems' parameters and the gravity-darkening exponents. Our analysis is restricted to the black-body approximation which can influence in some degree the parameter estimation. The results of our analysis are: 1) For four of the systems, namely: TX UMa, β Per, AW Cam and TW Cas, there is a very good agreement between empirically estimated and theoretically predicted values for purely convective envelopes. 2) For the AI Dra system, the estimated value of gravity-darkening exponent is greater, and for UX Her, TW And and XZ Pup lesser than corresponding theoretical predictions, but for all mentioned systems the obtained values of the gravity-darkening exponent are quite close to the theoretically expected values. 3) Our analysis has proved generally that with the correction of the previously estimated mass ratios of the components within some of the analysed systems, the theoretical predictions of the gravity-darkening exponents for stars with convective envelopes are highly reliable. The anomalous values of the GDE found in some earlier studies of these systems can be considered as the consequence of the inappropriate method used to estimate the GDE. 4) The empirical estimations of GDE given in Paper I and in the present study indicate that in the light-curve analysis one can apply the recent theoretical predictions of GDE with high confidence for stars with both convective and radiative envelopes.
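For reference, the standard theoretical gravity-darkening relations against which the empirical exponents are compared are, in the usual notation,

```latex
T_{\mathrm{eff}} \propto g^{\beta}, \qquad
\beta \simeq 0.25 \ \text{(radiative envelopes, von Zeipel)}, \qquad
\beta \simeq 0.08 \ \text{(convective envelopes, Lucy)}.
```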
Modelling Continuing Load at Disaggregated Levels
ERIC Educational Resources Information Center
Seidel, Ewa
2014-01-01
The current methodology of estimating load in the following year at Flinders University has achieved reasonable accuracy in the previous capped funding environment, particularly at the university level, due largely to our university having stable intakes and student profiles. While historically within reasonable limits, variation in estimates at…
Kalman Filter Constraint Tuning for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2005-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints are often neglected because they do not fit easily into the structure of the Kalman filter. Recently published work has shown a new method for incorporating state variable inequality constraints in the Kalman filter, which has been shown to generally improve the filter's estimation accuracy. However, the incorporation of inequality constraints poses some risk to the estimation accuracy as the Kalman filter is theoretically optimal. This paper proposes a way to tune the filter constraints so that the state estimates follow the unconstrained (theoretically optimal) filter when the confidence in the unconstrained filter is high. When confidence in the unconstrained filter is not so high, then we use our heuristic knowledge to constrain the state estimates. The confidence measure is based on the agreement of measurement residuals with their theoretical values. The algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate engine health.
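A generic sketch, not the paper's algorithm, of the confidence-weighted switching described: follow the unconstrained estimate while the normalized innovation agrees with its theoretical covariance, and lean on the constrained estimate otherwise (the threshold and blending rule are our assumptions):

```python
import numpy as np

def blend_estimates(x_unconstrained, x_constrained, innovation, S, nis_ok=2.0):
    """Confidence-weighted mix of unconstrained and constrained Kalman
    state estimates. The confidence measure follows the abstract: the
    normalized innovation squared should match its theoretical value
    when the unconstrained filter is trustworthy."""
    nis = float(innovation.T @ np.linalg.inv(S) @ innovation)
    w = min(1.0, nis_ok / max(nis, 1e-12))   # large nis -> low confidence
    return w * x_unconstrained + (1.0 - w) * x_constrained
```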
The development of scientific reasoning in medical education: a psychological perspective.
Barz, Daniela Luminita; Achimaş-Cadariu, Andrei
2016-01-01
Scientific reasoning has been studied from a variety of theoretical perspectives, which have tried to identify the underlying mechanisms responsible for the development of this particular cognitive process. Scientific reasoning has been defined as a problem-solving process that involves critical thinking in relation to content, procedural, and epistemic knowledge. The development of scientific reasoning in medical education has been influenced by current paradigmatic trends; it can be traced along the educational curriculum and follows cognitive processes. The purpose of the present review is to discuss the role of scientific reasoning in medical education and outline educational methods for its development. Current evidence suggests that medical education should foster new ways of developing scientific reasoning, which include exploration of the complexity of scientific inquiry and also take into consideration the heterogeneity of clinical cases found in practice.
Quantum Structure in Cognition and the Foundations of Human Reasoning
NASA Astrophysics Data System (ADS)
Aerts, Diederik; Sozzo, Sandro; Veloz, Tomas
2015-12-01
Traditional cognitive science rests on a foundation of classical logic and probability theory. This foundation has been seriously challenged by several findings in experimental psychology on human decision making. Meanwhile, the formalism of quantum theory has provided an efficient resource for modeling these classically problematical situations. In this paper, we start from our successful quantum-theoretic approach to the modeling of concept combinations to formulate a unifying explanatory hypothesis. In it, human reasoning is the superposition of two processes - a conceptual reasoning, whose nature is emergence of new conceptuality, and a logical reasoning, founded on an algebraic calculus of the logical type. In most cognitive processes however, the former reasoning prevails over the latter. In this perspective, the observed deviations from classical logical reasoning should not be interpreted as biases but, rather, as natural expressions of emergence in its deepest form.
Markovits, Henry
2014-12-01
Understanding the development of conditional (if-then) reasoning is critical for theoretical and educational reasons. Here we examined the hypothesis that there is a developmental transition between reasoning with true and contrary-to-fact (CF) causal conditionals. A total of 535 students between 11 and 14 years of age received priming conditions designed to encourage use of either a true or CF alternatives generation strategy and reasoning problems with true causal and CF causal premises (with counterbalanced order). Results show that priming had no effect on reasoning with true causal premises. By contrast, priming with CF alternatives significantly improved logical reasoning with CF premises. Analysis of the effect of order showed that reasoning with CF premises reduced logical responding among younger students but had no effect among older students. Results support the idea that there is a transition in the reasoning processes in this age range associated with the nature of the alternatives generation process required for logical reasoning with true and CF causal conditionals. Copyright © 2014 Elsevier Inc. All rights reserved.
A new scenario-based approach to damage detection using operational modal parameter estimates
NASA Astrophysics Data System (ADS)
Hansen, J. B.; Brincker, R.; López-Aenlle, M.; Overgaard, C. F.; Kloborg, K.
2017-09-01
In this paper a vibration-based damage localization and quantification method, based on natural frequencies and mode shapes, is presented. The proposed technique is inspired by a damage assessment methodology based solely on the sensitivity of mass-normalized, experimentally determined mode shapes. The present method differs by being based on modal data extracted by means of Operational Modal Analysis (OMA), combined with a reasonable Finite Element (FE) representation of the test structure, and implemented in a scenario-based framework. Besides a review of the basic methodology, this paper addresses fundamental theoretical as well as practical considerations which are crucial to the applicability of a given vibration-based damage assessment configuration. Lastly, the technique is demonstrated on an experimental test case using automated OMA. Both the numerical study and the experimental test case presented in this paper are restricted to perturbations concerning mass change.
NASA Technical Reports Server (NTRS)
Seidel, A. D.
1974-01-01
The economic value of information produced by an assumed operational version of an earth resources survey satellite of the ERTS class is assessed. The theoretical capability of an ERTS system to provide improved agricultural forecasts is analyzed, and this analysis is used as a reasonable input to the econometric methods derived by ECON. An econometric investigation into the markets for agricultural commodities is summarized. An overview of the effort, including the objectives, scope, and architecture of the analysis and the estimation strategy employed, is presented. The results and conclusions focus on the economic importance of improved crop forecasts, U.S. exports, and government policy operations. Several promising avenues of further investigation are suggested.
Modelling the growth of triglycine sulphate crystals in Spacelab 3
NASA Technical Reports Server (NTRS)
Yoo, Hak-Do; Wilcox, William R.; Lal, Ravindra; Trolinger, James D.
1988-01-01
Two triglycine sulphate crystals were grown from an aqueous solution in Spacelab 3 aboard a Space Shuttle. Using a diffusion coefficient of 0.00002 sq cm/s, a computerized simulation gave reasonable agreement between experimental and theoretical crystal sizes and interferometric lines in the solution near the growing crystal. This diffusion coefficient is larger than most measured values, possibly due to fluctuating accelerations on the order of 0.001 g (Earth's gravity). The average acceleration was estimated to be less than 0.000001 g. At this level, buoyancy-driven convection is predicted to add approximately 20 percent to the steady-state growth rate. Only very slight distortion of the interferometric lines was observed at the end of a 33 hr run. It is suggested that the time to reach steady-state convective transport may be inversely proportional to g at low g, so that the full effect of convection was not realized in these experiments.
Simple and universal model for electron-impact ionization of complex biomolecules
NASA Astrophysics Data System (ADS)
Tan, Hong Qi; Mi, Zhaohong; Bettiol, Andrew A.
2018-03-01
We present a simple and universal approach to calculate the total ionization cross section (TICS) for electron impact ionization in DNA bases and other biomaterials in the condensed phase. Evaluating the electron impact TICS plays a vital role in ion-beam radiobiology simulation at the cellular level, as secondary electrons are the main cause of DNA damage in particle cancer therapy. Our method is based on extending the dielectric formalism. The calculated results agree well with experimental data and show a good comparison with other theoretical calculations. This method only requires information of the chemical composition and density and an estimate of the mean binding energy to produce reasonably accurate TICS of complex biomolecules. Because of its simplicity and great predictive effectiveness, this method could be helpful in situations where the experimental TICS data are absent or scarce, such as in particle cancer therapy.
Kernodle, John Michael
1981-01-01
A two-dimensional ground-water flow model of the Eutaw-McShan and Gordo aquifers in the area of Lee County, Miss., was successfully calibrated and verified using data from six long-term observation wells and two intensive studies of areal water levels. The water levels computed by the model were found to be most sensitive to changes in simulated aquifer hydraulic conductivity and to changes in head in the overlying Coffee Sand aquifer. The two-dimensional model performed reasonably well in simulating the aquifer system except possibly in southern Lee County and southward where a clay bed at the top of the Gordo Formation partially isolated the Gordo from the overlying Eutaw-McShan aquifer. The verified model was used to determine theoretical aquifer response to increased ground-water withdrawal to the year 2000. Two estimated rates of increase and five possible well field locations were examined. (USGS)
NASA Astrophysics Data System (ADS)
Dorofeeva, Olga V.; Suchkova, Taisiya A.
2018-04-01
The gas-phase enthalpies of formation of four molecules with high flexibility, which leads to the existence of a large number of low-energy conformers, were calculated with the G4 method to see whether the lowest energy conformer is sufficient to achieve high accuracy in the computed values. The calculated values were in good agreement with the experiment, whereas adding the correction for conformer distribution makes the agreement worse. The reason for this effect is a large anharmonicity of low-frequency torsional motions, which is ignored in the calculation of ZPVE and thermal enthalpy. It was shown that the approximate correction for anharmonicity estimated using a free rotor model is of very similar magnitude compared with the conformer correction but has the opposite sign, and thus almost fully compensates for it. Therefore, the common practice of adding only the conformer correction is not without problems.
Byron, O
1997-01-01
Computer software such as HYDRO, based upon a comprehensive body of theoretical work, permits the hydrodynamic modeling of macromolecules in solution, which are represented to the computer interface as an assembly of spheres. The uniqueness of any satisfactory resultant model is optimized by incorporating into the modeling procedure the maximal possible number of criteria to which the bead model must conform. An algorithm (AtoB, for atoms to beads) that permits the direct construction of bead models from high resolution x-ray crystallographic or nuclear magnetic resonance data has now been formulated and tested. Models so generated then act as informed starting estimates for the subsequent iterative modeling procedure, thereby hastening the convergence to reasonable representations of solution conformation. Successful application of this algorithm to several proteins shows that predictions of hydrodynamic parameters, including those concerning solvation, can be confirmed. PMID:8994627
Application of a CO2 dial system for infrared detection of forest fire and reduction of false alarm
NASA Astrophysics Data System (ADS)
Bellecci, C.; Francucci, M.; Gaudio, P.; Gelfusa, M.; Martellucci, S.; Richetta, M.; Lo Feudo, T.
2007-04-01
Forest fires can be the cause of serious environmental and economic damage. For this reason, considerable effort has been directed toward forest protection and fire fighting. The means traditionally used for early fire detection mainly consist of human observers dispersed over forest regions. A significant improvement in early-warning capabilities could be obtained by using automatic detection apparatus. In order to detect small forest fires early and minimize false alarms, the use of a lidar system and the DIAL (differential absorption lidar) technique will be considered. A first estimate of the lowest detectable concentration will be obtained by numerical simulation. The theoretical model will also be used to assess the capability of the DIAL system to monitor wooded areas. Fixing the burning rate for several fuels, the maximum range of detection will be evaluated. Finally, results of the simulations will be reported.
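For reference, the standard two-wavelength DIAL retrieval underlying such concentration estimates (the textbook form, not an equation taken from this paper) gives the mean gas number density n over a range cell ΔR from the returned powers P at the on- and off-resonance wavelengths and the differential absorption cross section Δσ:

```latex
n(R) = \frac{1}{2\,\Delta\sigma\,\Delta R}\,
       \ln\!\left[\frac{P_{\mathrm{off}}(R+\Delta R)\;P_{\mathrm{on}}(R)}
                       {P_{\mathrm{on}}(R+\Delta R)\;P_{\mathrm{off}}(R)}\right]
```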
Discovering Parameters for Ancient Mars Atmospheric Profiles by Modeling Volcanic Eruptions
NASA Astrophysics Data System (ADS)
Meyer, A.; Clarke, A. B.; Van Eaton, A. R.; Mastin, L. G.
2017-12-01
Evidence of explosive volcanic deposits on Mars motivates questions about the behavior of eruption plumes in the ancient and current Martian atmosphere. Early modeling studies suggested that Martian plumes may rise significantly higher than their terrestrial equivalents (Wilson and Head, 1994, Rev. Geophys., 32, 221-263). We revisit the issue using a steady-state 1-D model of volcanic plumes (Plumeria: Mastin, 2014, JGR, doi:10.1002/2013JD020604) along with a range of reasonable temperatures and pressures. The model assumes perfect coupling of particles with the gas phase in the plume, and Stokes number analysis indicates that this is a reasonable assumption for particle diameters from 1 micron up to 5 mm. Our estimates of Knudsen numbers support the continuum assumption. The tested atmospheric profiles include an estimate of the current Martian atmosphere based on data from the Viking mission (Seiff, A., Kirk, D.B., 1977, J. Geophys. Res., 82, 4364-4378), a modern Earth-like atmosphere, and several other scenarios based on variable tropopause heights and near-surface atmospheric density estimates from the literature. We simulated plume heights using mass eruption rates (MER) ranging from 1 × 10³ to 1 × 10¹⁰ kg s⁻¹ to create a series of new theoretical MER-plume height scaling relationships that may be useful for considering plume injection heights, climate impacts, and global-scale ash dispersal patterns in Mars' recent and ancient geological past. Our results show that volcanic plumes in a modern Martian atmosphere may rise up to three times higher than those on Earth. We also find that the modern Mars atmosphere does not allow eruption columns to collapse, and thus does not allow for the formation of column-collapse pyroclastic density currents, a phenomenon thought to have occurred in Mars' past based on geological observations. The atmospheric density at the surface, and especially the height of the tropopause, affect the slope of the MER-plume height curve and control whether or not column collapse is possible.
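A minimal sketch of the Stokes-number coupling check mentioned in the abstract, using the common definition St = τ_p U / L with a Stokes-drag particle response time; all property values below are illustrative placeholders, not the study's inputs:

```python
def stokes_number(rho_p, d_p, mu_gas, u_char, l_char):
    """St = tau_p * U / L; St << 1 means the particle tracks the gas."""
    tau_p = rho_p * d_p**2 / (18.0 * mu_gas)   # Stokes response time (s)
    return tau_p * u_char / l_char

# e.g. a 1 mm ash particle in a thin CO2 atmosphere (placeholder values)
print(stokes_number(rho_p=1000.0, d_p=1e-3, mu_gas=1.1e-5,
                    u_char=100.0, l_char=1e4))   # ~0.05, well below 1
```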
Caravita, Simona C. S.; Giardino, Simona; Lenzi, Leonardo; Salvaterra, Mariaelena; Antonietti, Alessandro
2012-01-01
Neuroscientific and psychological research on moral development has until now developed independently, referring to distinct theoretical models, contents, and methods. In particular, the influence of socio-economic and cultural factors on morality has been broadly investigated by psychologists but as yet has not been investigated by neuroscientists. The value of bridging these two areas both theoretically and methodologically has, however, been suggested. This study aims at providing a first connection between the neuroscientific and psychological literature on morality by investigating whether socio-economic dimensions, i.e., living socio-geographic/economic area, immigrant status, and socio-economic status (SES), affect moral reasoning as operationalized in moral domain theory (a seminal approach in psychological studies of morality) and in Greene et al.'s (2001) perspective (one of the main approaches in neuroethics research). Participants were 81 primary school (M = 8.98 years; SD = 0.39), 72 middle school (M = 12.14 years; SD = 0.61), and 73 high school (M = 15.10 years; SD = 0.38) students from rural and urban areas. Participants' immigrant status (native vs. immigrant) and family SES level were recorded. Moral reasoning was assessed by means of a series of personal and impersonal dilemmas based on Greene et al.'s (2001) neuroimaging experiment and a series of moral and socio-conventional rule dilemmas based on moral domain theory. Living socio-geographic/economic area, immigrant status, and SES mainly affected evaluations of moral and, to a greater extent, socio-conventional dilemmas, but had no impact on judgments of personal and impersonal dilemmas. Results are mainly discussed from the angle of possible theoretical links and suggestions emerging for studies on moral reasoning in the frameworks of neuroscience and psychology. PMID:23015787
Estimation of wing nonlinear aerodynamic characteristics at supersonic speeds
NASA Technical Reports Server (NTRS)
Carlson, H. W.; Mack, R. J.
1980-01-01
A computational system for estimation of nonlinear aerodynamic characteristics of wings at supersonic speeds was developed and was incorporated in a computer program. This corrected linearized theory method accounts for nonlinearities in the variation of basic pressure loadings with local surface slopes, predicts the degree of attainment of theoretical leading edge thrust, and provides an estimate of detached leading edge vortex loadings that result when the theoretical thrust forces are not fully realized.
NASA Astrophysics Data System (ADS)
da Silva, Jorge Alberto Valle; Modesto-Costa, Lucas; de Koning, Martijn C.; Borges, Itamar; França, Tanos Celmar Costa
2018-01-01
In this work, quaternary and non-quaternary oximes designed to bind at the peripheral site of acetylcholinesterase previously inhibited by organophosphates were investigated theoretically. Some of those oximes have a large number of degrees of freedom, thus requiring an accurate method to obtain molecular geometries. For this reason, density functional theory (DFT) was employed to refine their molecular geometries after conformational analysis and to compare their theoretical ¹H and ¹³C nuclear magnetic resonance (NMR) signals in the gas phase and in solvent. A good agreement with experimental data was achieved, and the same theoretical approach was employed to obtain the geometries in a water environment for further studies.
Reasons Why Young Women Accept or Decline Fertility Preservation After Cancer Diagnosis.
Hershberger, Patricia E; Sipsma, Heather; Finnegan, Lorna; Hirshfeld-Cytron, Jennifer
2016-01-01
Objective: To understand young women's reasons for accepting or declining fertility preservation after cancer diagnosis to aid in the development of theory regarding decision making in this context. Design: Qualitative descriptive. Setting: Participants' homes or other private location. Participants: Twenty-seven young women (mean age, 29 years) diagnosed with cancer and eligible for fertility preservation. Methods: Recruitment was conducted via the Internet and in fertility centers. Participants completed demographic questionnaires and in-depth semi-structured interviews. Tenets of grounded theory guided an inductive and deductive analysis. Results: Young women's reasons for deciding whether to undergo fertility preservation were linked to four theoretical dimensions: Cognitive Appraisals, Emotional Responses, Moral Judgments, and Decision Partners. Women who declined fertility preservation described more reasons in the Cognitive Appraisals dimension, including financial cost and human risks, than women who accepted. In the Emotional Responses dimension, most women who accepted fertility preservation reported a strong desire for biological motherhood, whereas women who declined tended to report a strong desire for surviving cancer. Three participants who declined reported reasons linked to the Moral Judgments dimension, and most participants were influenced by Decision Partners, including husbands, boyfriends, parents, and clinicians. Conclusion: The primary reason on which many but not all participants based decisions related to fertility preservation was whether the immediate emphasis of care should be placed on surviving cancer or securing options for future biological motherhood. Nurses and other clinicians should base education and counseling on the four theoretical dimensions to effectively support young women with cancer. Copyright © 2016 AWHONN, the Association of Women's Health, Obstetric and Neonatal Nurses. Published by Elsevier Inc. All rights reserved.
Reasons Why Young Women Accept or Decline Fertility Preservation Following Cancer Diagnosis
Hershberger, Patricia E.; Sipsma, Heather; Finnegan, Lorna; Hirshfeld-Cytron, Jennifer
2015-01-01
Objective To understand young women’s reasons for accepting or declining fertility preservation following cancer diagnosis to aid in the development of theory regarding decision making in this context. Design Qualitative descriptive. Setting Participants’ homes or other private location. Participants Twenty-seven young women (mean age = 29 years) diagnosed with cancer and eligible for fertility preservation. Methods Recruitment was conducted via the Internet and in fertility centers. Participants completed demographic questionnaires and in-depth semi-structured interviews. Tenets of grounded theory guided an inductive and deductive analysis. Results Young women’s reasons for deciding whether to undergo fertility preservation were linked to four theoretical dimensions: Cognitive Appraisals, Emotional Responses, Moral Judgments, and Decision Partners. Women who declined fertility preservation described more reasons in the Cognitive Appraisals dimension, including financial cost and human risks, than women who accepted. In the Emotional Responses dimension, most women who accepted fertility preservation reported a strong desire for biological motherhood, whereas women who declined tended to report a strong desire for surviving cancer. Three participants who declined reported reasons linked to the Moral Judgments dimension, and the majority were influenced by Decision Partners, including husbands, boyfriends, parents, and clinicians. Conclusion The primary reason upon which many but not all participants based decisions related to fertility preservation was whether the immediate emphasis of care should be placed on surviving cancer or securing options for future biological motherhood. Nurses and other clinicians should base education and counseling on the four theoretical dimensions to effectively support young women with cancer. PMID:26815806
Empirical resistive-force theory for slender biological filaments in shear-thinning fluids
NASA Astrophysics Data System (ADS)
Riley, Emily E.; Lauga, Eric
2017-06-01
Many cells exploit the bending or rotation of flagellar filaments in order to self-propel in viscous fluids. While appropriate theoretical modeling is available to capture flagellar locomotion in simple, Newtonian fluids, formidable computations are required to address theoretically their locomotion in complex, nonlinear fluids, e.g., mucus. Based on experimental measurements for the motion of rigid rods in non-Newtonian fluids and on the classical Carreau fluid model, we propose empirical extensions of the classical Newtonian resistive-force theory to model the waving of slender filaments in non-Newtonian fluids. By assuming the flow near the flagellum to be locally Newtonian, we propose a self-consistent way to estimate the typical shear rate in the fluid, which we then use to construct correction factors to the Newtonian local drag coefficients. The resulting non-Newtonian resistive-force theory, while empirical, is consistent with the Newtonian limit, and with the experiments. We then use our models to address waving locomotion in non-Newtonian fluids and show that the resulting swimming speeds are systematically lowered, a result which we are able to capture asymptotically and to interpret physically. An application of the models to recent experimental results on the locomotion of Caenorhabditis elegans in polymeric solutions shows reasonable agreement and thus captures the main physics of swimming in shear-thinning fluids.
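A minimal sketch of the kind of shear-thinning correction the abstract describes, assuming the Carreau viscosity model in its standard form; the choice of characteristic shear rate and the simple viscosity-ratio scaling of the drag coefficients are illustrative, not the authors' exact correction factors:

```python
def carreau_viscosity(gamma_dot, eta0, eta_inf, lam, n):
    """Carreau model: eta(g) = eta_inf + (eta0 - eta_inf) [1 + (lam g)^2]^((n-1)/2)."""
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * gamma_dot) ** 2) ** ((n - 1.0) / 2.0)

def corrected_drag(c_newtonian, gamma_dot, eta0, eta_inf, lam, n):
    """Scale a Newtonian local drag coefficient by the local-viscosity ratio,
    with gamma_dot a self-consistent estimate of the typical shear rate."""
    return c_newtonian * carreau_viscosity(gamma_dot, eta0, eta_inf, lam, n) / eta0

print(corrected_drag(1.0, gamma_dot=10.0, eta0=1.0, eta_inf=1e-3, lam=1.0, n=0.5))
```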
The Sonic Altimeter for Aircraft
NASA Technical Reports Server (NTRS)
Draper, C S
1937-01-01
Discussed here are results already achieved with sonic altimeters in light of the theoretical possibilities of such instruments. From the information gained in this investigation, a procedure is outlined to determine whether or not a further development program is justified by the value of the sonic altimeter as an aircraft instrument. The information available in the literature is reviewed and condensed into a summary of sonic altimeter developments. Various methods of receiving the echo and timing the interval between the signal and the echo are considered. A theoretical discussion is given of sonic altimeter errors due to uncertainties in timing, variations in sound velocity, aircraft speed, location of the sending and receiving units, and inclinations of the flight path with respect to the ground surface. Plots are included which summarize the results in each case. An analysis is given of the effect of an inclined flight path on the frequency of the echo. A brief study of the acoustical phases of the sonic altimeter problem is carried through. The results of this analysis are used to predict approximately the maximum operating altitudes of a reasonably designed sonic altimeter under very good and very bad conditions. A final comparison is made between the estimated and experimental maximum operating altitudes which shows good agreement where quantitative information is available.
Experimental Evidence on Iterated Reasoning in Games
Grehl, Sascha; Tutić, Andreas
2015-01-01
We present experimental evidence on two forms of iterated reasoning in games, i.e. backward induction and interactive knowledge. Besides reliable estimates of the cognitive skills of the subjects, our design allows us to disentangle two possible explanations for the observed limits in performed iterated reasoning: Restrictions in subjects’ cognitive abilities and their beliefs concerning the rationality of co-players. In comparison to previous literature, our estimates regarding subjects’ skills in iterated reasoning are quite pessimistic. Also, we find that beliefs concerning the rationality of co-players are completely irrelevant in explaining the observed limited amount of iterated reasoning in the dirty faces game. In addition, it is demonstrated that skills in backward induction are a solid predictor for skills in iterated knowledge, which points to some generalized ability of the subjects in iterated reasoning. PMID:26312486
Soft sensor for real-time cement fineness estimation.
Stanišić, Darko; Jorgovanović, Nikola; Popov, Nikola; Čongradac, Velimir
2015-03-01
This paper describes the design and implementation of soft sensors to estimate cement fineness. Soft sensors are mathematical models that use available data to provide real-time information on process variables when that information, for whatever reason, is not available by direct measurement. In this application, soft sensors are used to provide information on a process variable normally provided by off-line laboratory tests performed at large time intervals. Cement fineness is one of the crucial parameters that define the quality of the produced cement. Providing real-time information on cement fineness using soft sensors can overcome the limitations and problems that originate from the lack of information between two laboratory tests. The model inputs were selected from candidate process variables using an information-theoretic approach. Models based on multi-layer perceptrons were developed, and their ability to estimate the cement fineness of laboratory samples was analyzed. The models that had the best performance and the best capacity to adapt to changes in the cement grinding circuit were selected to implement the soft sensors. The soft sensors were tested using data from continuous cement production to demonstrate their use in real-time fineness estimation. Their performance was highly satisfactory, and the sensors proved capable of providing valuable information on cement grinding circuit performance. After successful off-line tests, the soft sensors were implemented and installed in the control room of a cement factory. Results on site confirm the results obtained by tests conducted during soft sensor development. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
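A minimal sketch of such a soft sensor, assuming scikit-learn: mutual information stands in for the paper's information-theoretic input selection, and a small multi-layer perceptron maps the selected process variables to fineness. The synthetic data, the top-k selection rule, and the network size are placeholders:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                       # candidate process variables
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.1, size=500)  # lab fineness stand-in

mi = mutual_info_regression(X, y)                    # information-theoretic ranking
top = np.argsort(mi)[-4:]                            # keep the 4 most informative inputs

soft_sensor = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
soft_sensor.fit(X[:, top], y)
print(soft_sensor.predict(X[:5, top]))               # real-time fineness estimates
```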
NASA Astrophysics Data System (ADS)
Babaie Mahani, A.; Eaton, D. W.
2013-12-01
Ground Motion Prediction Equations (GMPEs) are widely used in Probabilistic Seismic Hazard Assessment (PSHA) to estimate ground-motion amplitudes at Earth's surface as a function of magnitude and distance. Certain applications, such as hazard assessment for caprock integrity in the case of underground storage of CO2, waste disposal sites, and underground pipelines, require subsurface estimates of ground motion; at present, such estimates depend upon theoretical modeling and simulations. The objective of this study is to derive correction factors for GMPEs to enable estimation of amplitudes in the subsurface. We use a semi-analytic approach along with finite-difference simulations of ground-motion amplitudes for surface and underground motions. Spectral ratios of underground to surface motions are used to calculate the correction factors. Two predictive methods are used. The first is a semi-analytic approach based on a quarter-wavelength method that is widely used for earthquake site-response investigations; the second is a numerical approach based on elastic finite-difference simulations of wave propagation. Both methods are evaluated using recordings of regional earthquakes by broadband seismometers installed at the surface and at depths of 1400 m and 2100 m in the Sudbury Neutrino Observatory, Canada. Overall, both methods provide a reasonable fit to the peaks and troughs observed in the ratios of real data. The finite-difference method, however, has the capability to simulate ground motion ratios more accurately than the semi-analytic approach.
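A minimal sketch of the empirical spectral-ratio computation the abstract describes (underground over surface recordings of the same event); the Hann window and the floor on the denominator are illustrative choices:

```python
import numpy as np

def spectral_ratio(underground, surface, dt):
    """Amplitude-spectrum ratio of co-registered underground/surface records."""
    w = np.hanning(len(underground))
    f = np.fft.rfftfreq(len(underground), dt)
    num = np.abs(np.fft.rfft(underground * w))
    den = np.abs(np.fft.rfft(surface * w))
    return f, num / np.maximum(den, 1e-12)   # correction factor vs. frequency
```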
NASA Astrophysics Data System (ADS)
Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede
2017-10-01
Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.
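Schematically, for K datasets with dataset-level parameter vectors θ_k, the three formulations differ only in the assumed prior structure (generic notation, not the paper's):

```latex
\begin{aligned}
\text{global:}       &\quad \theta_1 = \cdots = \theta_K = \theta, \qquad \theta \sim p(\theta) \\
\text{separate:}     &\quad \theta_k \sim p_k(\theta_k) \quad \text{independently for each } k \\
\text{hierarchical:} &\quad \theta_k \mid \phi \sim p(\theta \mid \phi), \qquad \phi \sim p(\phi)
\end{aligned}
```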
Confirmatory factor analysis of the Child Oral Health Impact Profile (Korean version).
Cho, Young Il; Lee, Soonmook; Patton, Lauren L; Kim, Hae-Young
2016-04-01
Empirical support for the factor structure of the Child Oral Health Impact Profile (COHIP) has not been fully established. The purposes of this study were to evaluate the factor structure of the Korean version of the COHIP (COHIP-K) empirically using confirmatory factor analysis (CFA) based on the theoretical framework and then to assess whether any of the factors in the structure could be grouped into a simpler single second-order factor. Data were collected through self-reported COHIP-K responses from a representative community sample of 2,236 Korean children, 8-15 yr of age. Because a large inter-factor correlation of 0.92 was estimated in the original five-factor structure, the two strongly correlated factors were combined into one factor, resulting in a four-factor structure. The revised four-factor model showed a reasonable fit with appropriate inter-factor correlations. Additionally, the second-order model with four sub-factors was reasonable with sufficient fit and showed equal fit to the revised four-factor model. A cross-validation procedure confirmed the appropriateness of the findings. Our analysis empirically supported a four-factor structure of COHIP-K, a summarized second-order model, and the use of an integrated summary COHIP score. © 2016 Eur J Oral Sci.
Hayes, Brett K; Heit, Evan
2018-05-01
Inductive reasoning entails using existing knowledge to make predictions about novel cases. The first part of this review summarizes key inductive phenomena and critically evaluates theories of induction. We highlight recent theoretical advances, with a special emphasis on the structured statistical approach, the importance of sampling assumptions in Bayesian models, and connectionist modeling. A number of new research directions in this field are identified, including comparisons of inductive and deductive reasoning, the identification of common core processes in induction and memory tasks, and induction involving category uncertainty. The implications of induction research for areas as diverse as complex decision-making and fear generalization are discussed. This article is categorized under: Psychology > Reasoning and Decision Making; Psychology > Learning. © 2017 Wiley Periodicals, Inc.
On the integration of reinforcement learning and approximate reasoning for control
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1991-01-01
The author discusses the importance of strengthening the knowledge representation characteristic of reinforcement learning techniques using methods such as approximate reasoning. The ARIC (approximate reasoning-based intelligent control) architecture is an example of such a hybrid approach in which the fuzzy control rules are modified (fine-tuned) using reinforcement learning. ARIC also demonstrates that it is possible to start with an approximately correct control knowledge base and learn to refine this knowledge through further experience. On the other hand, techniques such as the TD (temporal difference) algorithm and Q-learning establish stronger theoretical foundations for their use in adaptive control and also in stability analysis of hybrid reinforcement learning and approximate reasoning-based controllers.
Thermal refraction focusing in planar index-antiguided lasers.
Casperson, Lee W; Dittli, Adam; Her, Tsing-Hua
2013-03-15
Thermal refraction focusing in planar index-antiguided lasers is investigated both theoretically and experimentally. An analytical model based on zero-field approximation is presented for treating the combined effects of index antiguiding and thermal focusing. At very low pumping power, the mode is antiguided by the amplifier boundary, whereas at high pumping power it narrows due to thermal focusing. Theoretical results are in reasonable agreement with experimental data.
Theoretical foundations for information representation and constraint specification
NASA Technical Reports Server (NTRS)
Menzel, Christopher P.; Mayer, Richard J.
1991-01-01
Research accomplished at the Knowledge Based Systems Laboratory of the Department of Industrial Engineering at Texas A&M University is described. Outlined here are the theoretical foundations necessary to construct a Neutral Information Representation Scheme (NIRS), which will allow for automated data transfer and translation between model languages, procedural programming languages, database languages, transaction and process languages, and knowledge representation and reasoning control languages for information system specification.
NASA Astrophysics Data System (ADS)
Belkić, Dževad; Mančev, Ivan; Milojević, Nenad
2013-09-01
The total cross sections for the various processes for Li3+-He collisions at intermediate-to-high impact energies are compared with the corresponding theories. The possible reasons for the discrepancies among various theoretical predictions are thoroughly discussed. Special attention has been paid to single and double electron capture, simultaneous transfer and ionization, as well as to single and double ionization.
Theoretical study on the mechanism of the gas-phase elimination kinetics of alkyl chloroformates
NASA Astrophysics Data System (ADS)
Alcázar, Jackson J.; Marquez, Edgar; Mora, José R.; Cordova-Sintjago, Tania; Chuchani, Gabriel
2016-03-01
The theoretical calculations on the mechanism of the homogeneous and unimolecular gas-phase elimination kinetics of alkyl chloroformates - ethyl chloroformate (ECF), isopropyl chloroformate (ICF), and sec-butyl chloroformate (SCF) - have been carried out by using the CBS-QB3 level of theory and the density functional theory (DFT) functionals CAM-B3LYP, M06, MPW1PW91, and PBE1PBE with the basis sets 6-311++G(d,p) and 6-311++G(2d,2p). The chloroformate compounds with an alkyl ester Cβ-H bond undergo thermal decomposition, producing the corresponding olefin, HCl, and CO2. These homogeneous eliminations are proposed to undergo two different types of mechanisms: a concerted process, or the formation of an unstable intermediate, chloroformic acid (ClCOOH), which rapidly decomposes to HCl and CO2 gas. Since both elimination mechanisms may occur through a six-membered cyclic transition state structure, it is difficult to elucidate experimentally which is the more reasonable reaction mechanism. Theoretical calculations show that the stepwise mechanism with the formation of the unstable intermediate chloroformic acid from ECF, ICF, and SCF is favoured over one-step elimination. Reasonable agreement was found between theoretical and experimental values at the CAM-B3LYP/6-311++G(d,p) level.
Differentiating between precursor and control variables when analyzing reasoned action theories.
Hennessy, Michael; Bleakley, Amy; Fishbein, Martin; Brown, Larry; DiClemente, Ralph; Romer, Daniel; Valois, Robert; Vanable, Peter A; Carey, Michael P; Salazar, Laura
2010-02-01
This paper highlights the distinction between precursor and control variables in the context of reasoned action theory. Here the theory is combined with structural equation modeling to demonstrate how age and past sexual behavior should be situated in a reasoned action analysis. A two wave longitudinal survey sample of African-American adolescents is analyzed where the target behavior is having vaginal sex. Results differ when age and past behavior are used as control variables and when they are correctly used as precursors. Because control variables do not appear in any form of reasoned action theory, this approach to including background variables is not correct when analyzing data sets based on the theoretical axioms of the Theory of Reasoned Action, the Theory of Planned Behavior, or the Integrative Model.
Differentiating Between Precursor and Control Variables When Analyzing Reasoned Action Theories
Hennessy, Michael; Bleakley, Amy; Fishbein, Martin; Brown, Larry; DiClemente, Ralph; Romer, Daniel; Valois, Robert; Vanable, Peter A.; Carey, Michael P.; Salazar, Laura
2010-01-01
This paper highlights the distinction between precursor and control variables in the context of reasoned action theory. Here the theory is combined with structural equation modeling to demonstrate how age and past sexual behavior should be situated in a reasoned action analysis. A two wave longitudinal survey sample of African-American adolescents is analyzed where the target behavior is having vaginal sex. Results differ when age and past behavior are used as control variables and when they are correctly used as precursors. Because control variables do not appear in any form of reasoned action theory, this approach to including background variables is not correct when analyzing data sets based on the theoretical axioms of the Theory of Reasoned Action, the Theory of Planned Behavior, or the Integrative Model. PMID:19370408
Individual differences in conflict detection during reasoning.
Frey, Darren; Johnson, Eric D; De Neys, Wim
2018-05-01
Decades of reasoning and decision-making research have established that human judgment is often biased by intuitive heuristics. Recent "error" or bias detection studies have focused on reasoners' abilities to detect whether their heuristic answer conflicts with logical or probabilistic principles. A key open question is whether there are individual differences in this bias detection efficiency. Here we present three studies in which co-registration of different error detection measures (confidence, response time and confidence response time) allowed us to assess bias detection sensitivity at the individual participant level in a range of reasoning tasks. The results indicate that although most individuals show robust bias detection, as indexed by increased latencies and decreased confidence, there is a subgroup of reasoners who consistently fail to do so. We discuss theoretical and practical implications for the field.
Estimation of relative permeability and capillary pressure from mass imbibition experiments
NASA Astrophysics Data System (ADS)
Alyafei, Nayef; Blunt, Martin J.
2018-05-01
We perform spontaneous imbibition experiments on three carbonates - Estaillades, Ketton, and Portland - quarry limestones that have very different pore structures and span a wide range of permeability. We measure the mass of water imbibed into air-saturated cores as a function of time under strongly water-wet conditions. Specifically, we perform co-current spontaneous imbibition experiments using a highly sensitive balance to measure the mass imbibed as a function of time for the three rocks. We use cores measuring 37 mm in diameter and three lengths of approximately 76 mm, 204 mm, and 290 mm. We show that the amount imbibed scales as the square root of time and find the parameter C, where the volume imbibed per unit cross-sectional area at time t is Ct^(1/2). We find higher C values for higher permeability rocks. Employing semi-analytical solutions for one-dimensional flow and using reasonable estimates of relative permeability and capillary pressure, we can match the experimental data. We finally discuss how, in combination with conventional measurements, we can use theoretical solutions and imbibition measurements to find or constrain relative permeability and capillary pressure.
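A minimal sketch of recovering the imbibition constant C from mass-time data via the Ct^(1/2) scaling stated above; variable names and the synthetic numbers are illustrative:

```python
import numpy as np

def fit_C(t, mass, rho, area):
    """Least-squares slope of (mass / (rho * area)) against sqrt(t), through 0."""
    vol_per_area = mass / (rho * area)      # volume imbibed per unit area
    s = np.sqrt(t)
    return float(np.sum(s * vol_per_area) / np.sum(s * s))

t = np.array([60.0, 240.0, 540.0, 960.0])   # s
m = np.array([0.8, 1.6, 2.4, 3.2])          # g; synthetic data obeying sqrt(t)
print(fit_C(t, m, rho=1.0, area=10.75))     # g/cm^3 and cm^2 give C in cm/s^0.5
```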
Powdthavee, Nattavudh; Lekfuangfu, Warn N.; Wooden, Mark
2017-01-01
Many economists and educators favour public support for education on the premise that education improves the overall quality of life of citizens. However, little is known about the different pathways through which education shapes people’s satisfaction with life overall. One reason for this is because previous studies have traditionally analysed the effect of education on life satisfaction using single-equation models that ignore interrelationships between different theoretical explanatory variables. In order to advance our understanding of how education may be related to overall quality of life, the current study estimates a structural equation model using nationally representative data for Australia to obtain the direct and indirect associations between education and life satisfaction through five different adult outcomes: income, employment, marriage, children, and health. Although we find the estimated direct (or net) effect of education on life satisfaction to be negative and statistically significant in Australia, the total indirect effect is positive, sizeable and statistically significant for both men and women. This implies that misleading conclusions regarding the influence of education on life satisfaction might be obtained if only single-equation models were used in the analysis. PMID:28713668
Walder, J.S.
1997-01-01
We analyse a simple, physically-based model of breach formation in natural and constructed earthen dams to elucidate the principal factors controlling the flood hydrograph at the breach. Formation of the breach, which is assumed trapezoidal in cross-section, is parameterized by the mean rate of downcutting, k, the value of which is constrained by observations. A dimensionless formulation of the model leads to the prediction that the breach hydrograph depends upon lake shape, the ratio r of breach width to depth, the side slope θ of the breach, and the parameter η = (V/D³)(k/√(gD)), where V = lake volume, D = lake depth, and g is the acceleration due to gravity. Calculations show that peak discharge Qp depends weakly on lake shape, r, and θ, but strongly on η, which is the product of a dimensionless lake volume and a dimensionless erosion rate. Qp(η) takes asymptotically distinct forms depending on whether η ≪ 1 or η ≫ 1. Theoretical predictions agree well with data from dam failures for which k could be reasonably estimated. The analysis provides a rapid and in many cases graphical way to estimate plausible values of Qp at the breach.
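The dimensionless breach parameter can be computed directly from the definition in the abstract; the lake and erosion-rate numbers below are illustrative, not a case from the paper:

```python
import math

def eta(V, D, k, g=9.81):
    """eta = (V / D^3) * (k / sqrt(g * D)): dimensionless lake volume
    times dimensionless erosion (downcutting) rate."""
    return (V / D**3) * (k / math.sqrt(g * D))

# e.g. a 10^7 m^3 lake, 20 m deep, breach downcutting at ~1 m/h
print(eta(V=1e7, D=20.0, k=1.0 / 3600.0))   # ~0.02, i.e. the small-eta regime
```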
NASA Technical Reports Server (NTRS)
Andrews, E. H., Jr.; Mackley, E. A.
1976-01-01
An aerodynamic engine inlet analysis was performed on the experimental results obtained at nominal Mach numbers of 5, 6, and 7 from the NASA Hypersonic Research Engine (HRE) Aerothermodynamic Integration Model (AIM). Incorporation of the mixed-compression inlet design on the AIM represented the final phase of an inlet development program of the HRE Project. The purpose of this analysis was to compare the AIM inlet experimental results with theoretical results. Experimental performance was based on measured surface pressures used in a one-dimensional force-momentum theorem. Results of the analysis indicate that surface static-pressure measurements agree reasonably well with theoretical predictions except in the regions where the theory predicts large pressure discontinuities. Experimental and theoretical results, both based on the one-dimensional force-momentum theorem, yielded inlet performance parameters as functions of Mach number that exhibited reasonable agreement. Previous predictions of inlet unstart resulting from pressure disturbances created by fuel injection and combustion appeared to be pessimistic.
Osiurak, François
2014-06-01
Our understanding of human tool use comes mainly from neuropsychology, particularly from patients with apraxia or action disorganization syndrome. However, there is no integrative, theoretical framework explaining what these neuropsychological syndromes tell us about the cognitive/neural bases of human tool use. The goal of the present article is to fill this gap by providing a theoretical framework for the study of human tool use: the Four Constraints Theory (4CT). This theory rests on two basic assumptions. First, everyday tool use activities can be formalized as multiple problem situations consisting of four distinct constraints (mechanics, space, time, and effort). Second, each of these constraints can be solved by means of a specific process (technical reasoning, semantic reasoning, working memory, and simulation-based decision-making, respectively). Besides presenting neuropsychological evidence for 4CT, this article addresses epistemological, theoretical, and methodological issues that I will attempt to resolve. The article also discusses how 4CT diverges from current cognitive models on several widespread hypotheses (e.g., the notion of routine, direct and automatic activation of tool knowledge, and simulation-based tool knowledge).
Optimized tuner selection for engine performance estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)
2013-01-01
A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
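A minimal sketch of the selection idea, using a linear least-squares surrogate for the Kalman filter analysis: enumerate candidate tuner subsets and keep the one with the smallest theoretical mean-squared error over the full parameter vector. The patent describes an iterative multi-variable search over tuning vectors; the exhaustive subset search and the matrices below are simplified stand-ins:

```python
import numpy as np
from itertools import combinations

def select_tuners(H, R, q):
    """Pick q columns of influence matrix H (sensors x parameters) minimizing
    theoretical MSE = bias from neglected parameters + measurement-noise variance."""
    n = H.shape[1]
    best, best_mse = None, np.inf
    for idx in combinations(range(n), q):
        cols = list(idx)
        Hs = H[:, cols]
        # least-squares estimator using only the selected tuners
        G = np.linalg.pinv(Hs.T @ np.linalg.solve(R, Hs)) @ Hs.T @ np.linalg.inv(R)
        T = np.zeros((n, q))
        T[cols, np.arange(q)] = 1.0              # embed estimate in full vector
        A = T @ G @ H - np.eye(n)                # error sensitivity to true params
        mse = np.trace(A @ A.T) + np.trace(T @ G @ R @ G.T @ T.T)
        if mse < best_mse:
            best, best_mse = idx, mse
    return best, best_mse

rng = np.random.default_rng(1)
H = rng.normal(size=(5, 8))                      # 5 sensors, 8 unknown parameters
print(select_tuners(H, R=np.eye(5), q=4))
```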
Wixted, John T; Mickes, Laura
2018-01-01
Receiver operating characteristic (ROC) analysis was introduced to the field of eyewitness identification 5 years ago. Since that time, it has been both influential and controversial, and the debate has raised an issue about measuring discriminability that is rarely considered. The issue concerns the distinction between empirical discriminability (measured by area under the ROC curve) vs. underlying/theoretical discriminability (measured by d' or variants of it). Under most circumstances, the two measures will agree about a difference between two conditions in terms of discriminability. However, it is possible for them to disagree, and that fact can lead to confusion about which condition actually yields higher discriminability. For example, if the two conditions have implications for real-world practice (e.g., a comparison of competing lineup formats), should a policymaker rely on the area-under-the-curve measure or the theory-based measure? Here, we illustrate the fact that a given empirical ROC yields as many underlying discriminability measures as there are theories that one is willing to take seriously. No matter which theory is correct, for practical purposes, the singular area-under-the-curve measure best identifies the diagnostically superior procedure. For that reason, area under the ROC curve informs policy in a way that underlying theoretical discriminability never can. At the same time, theoretical measures of discriminability are equally important, but for a different reason. Without an adequate theoretical understanding of the relevant task, the field will be in no position to enhance empirical discriminability.
An Embodied and Intersubjective Practice of Occupational Therapy.
Arntzen, Cathrine
2017-08-01
The literature on clinical reasoning tends to ignore the context and the interaction between patient and therapist. This article outlines a theoretical foundation for an extended mode of clinical reasoning in occupational therapy. Cognitive theories of human action, as well as narrative and instrumental approaches, provide an insufficient picture of the nature of clinical reasoning in occupational therapy practice. An embodied intersubjective clinical reasoning can function as an adjunct to traditional clinical reasoning in occupational therapy practice and is discussed through the concepts of the ambiguous body, incorporation of things, and the process of shared meaning-making. This mode of reasoning can help occupational therapy practitioners to be aware of how they influence the patient's perception of body, self, and world. It can promote a better understanding of details in embodied performances and in the co-construction of meaning, positively influencing occupation, participation, and health.
Focal role of tolerability and reasonableness in the radiological protection system.
Schneider, T; Lochard, J; Vaillant, L
2016-06-01
The concepts of tolerability and reasonableness are at the core of the International Commission on Radiological Protection (ICRP) system of radiological protection. Tolerability allows the definition of boundaries for implementing ICRP principles, while reasonableness contributes to decisions regarding adequate levels of protection, taking into account the prevailing circumstances. In the 1970s and 1980s, attempts to find theoretical foundations in risk comparisons for tolerability and cost-benefit analysis for reasonableness failed. In practice, the search for a rational basis for these concepts will never end. Making a wise decision will always remain a matter of judgement and will depend on the circumstances as well as the current knowledge and past experience. This paper discusses the constituents of tolerability and reasonableness at the heart of the radiological protection system. It also emphasises the increasing role of stakeholder engagement in the quest for tolerability and reasonableness since Publication 103.
Unsteady Thick Airfoil Aerodynamics: Experiments, Computation, and Theory
NASA Technical Reports Server (NTRS)
Strangfeld, C.; Rumsey, C. L.; Mueller-Vahl, H.; Greenblatt, D.; Nayeri, C. N.; Paschereit, C. O.
2015-01-01
An experimental, computational, and theoretical investigation was carried out to study the aerodynamic loads acting on a relatively thick NACA 0018 airfoil when subjected to pitching and surging, individually and synchronously. Both pre-stall and post-stall angles of attack were considered. Experiments were carried out in a dedicated unsteady wind tunnel, with large surge amplitudes, and airfoil loads were estimated by means of unsteady surface-mounted pressure measurements. Theoretical predictions were based on Theodorsen's and Isaacs' results as well as on the relatively recent generalizations of van der Wall. Both two- and three-dimensional computations were performed on structured grids employing unsteady Reynolds-averaged Navier-Stokes (URANS). For pure surging at pre-stall angles of attack, the correspondence between experiments and theory was satisfactory; this served as a validation of Isaacs' theory. Discrepancies were traced to dynamic trailing-edge separation, even at low angles of attack. Excellent correspondence was found between experiments and theory for airfoil pitching as well as combined pitching and surging; the latter appears to be the first clear validation of van der Wall's theoretical results. Although qualitatively similar to experiment at low angles of attack, two-dimensional URANS computations yielded notable errors in the unsteady load effects of pitching, surging, and their synchronous combination. The main reason is believed to be that the URANS equations do not resolve wake vorticity (explicitly modeled in the theory) or the resulting rolled-up unsteady flow structures, because high values of eddy viscosity tend to "smear" the wake. At post-stall angles, three-dimensional computations illustrated the importance of modeling the tunnel side walls.
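Of the theoretical ingredients named above, Theodorsen's lift-deficiency function has a compact closed form in terms of Hankel functions (the standard result, not an expression reproduced from the paper); a quick evaluation with SciPy:

```python
from scipy.special import hankel2

def theodorsen(k):
    """C(k) = H1(k) / (H1(k) + i H0(k)), Hankel functions of the second kind."""
    return hankel2(1, k) / (hankel2(1, k) + 1j * hankel2(0, k))

print(theodorsen(0.1))   # approx (0.832 - 0.172j) at reduced frequency k = 0.1
```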
Measured, modeled, and causal conceptions of fitness
Abrams, Marshall
2012-01-01
This paper proposes partial answers to the following questions: in what senses can fitness differences plausibly be considered causes of evolution? What relationships are there between fitness concepts used in empirical research, modeling, and abstract theoretical proposals? How does the relevance of different fitness concepts depend on research questions and methodological constraints? The paper develops a novel taxonomy of fitness concepts, beginning with type fitness (a property of a genotype or phenotype), token fitness (a property of a particular individual), and purely mathematical fitness. Type fitness includes statistical type fitness, which can be measured from population data, and parametric type fitness, which is an underlying property estimated by statistical type fitnesses. Token fitness includes measurable token fitness, which can be measured on an individual, and tendential token fitness, which is assumed to be an underlying property of the individual in its environmental circumstances. Some of the paper's conclusions can be outlined as follows: claims that fitness differences do not cause evolution are reasonable when fitness is treated as statistical type fitness, measurable token fitness, or purely mathematical fitness. Some of the ways in which statistical methods are used in population genetics suggest that what natural selection involves are differences in parametric type fitnesses. Further, it's reasonable to think that differences in parametric type fitness can cause evolution. Tendential token fitnesses, however, are not themselves sufficient for natural selection. Though parametric type fitnesses are typically not directly measurable, they can be modeled with purely mathematical fitnesses and estimated by statistical type fitnesses, which in turn are defined in terms of measurable token fitnesses. The paper clarifies the ways in which fitnesses depend on pragmatic choices made by researchers. PMID:23112804
Jeong, Yoo-Seong; Yim, Chang-Soon; Ryu, Heon-Min; Noh, Chi-Kyoung; Song, Yoo-Kyung; Chung, Suk-Jae
2017-06-01
The objective of the current study was to determine the minimum permeability coefficient, P, needed for perfusion-limited distribution in PBPK. Two expanded kinetic models, containing both permeability and perfusion terms for the rate of tissue distribution, were considered; the resulting equations could be simplified to perfusion-limited distribution depending on tissue permeability. Integration plot analyses were carried out with theophylline in 11 typical tissues to determine their apparent distributional clearances and the model-dependent permeabilities of the tissues. Effective surface areas were calculated for the 11 tissues from the tissue permeabilities of theophylline and its PAMPA P. Tissue permeabilities of other drugs were then estimated from their PAMPA P and the effective surface area of the tissues. The differences between the observed and predicted concentrations with the present models, as expressed by the sum of squared log differences, were at least comparable to or less than the values obtained using the traditional perfusion-limited distribution model for 24 compounds with diverse PAMPA P values. These observations suggest that the combination of the proposed models, PAMPA P, and the effective surface area can be used to reasonably predict the pharmacokinetics of 22 out of 24 model compounds, and is potentially applicable to calculating the kinetics of other drugs. Assuming that a fractional distribution parameter of 80% of the perfusion rate is a reasonable threshold for perfusion-limited distribution in PBPK, our theoretical prediction indicates that the pharmacokinetics of drugs having an apparent PAMPA P of 1×10⁻⁶ cm/s or more will follow the traditional perfusion-limited distribution in PBPK for major tissues in the body. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Soulis, K. X.; Valiantzas, J. D.
2011-10-01
The Soil Conservation Service Curve Number (SCS-CN) approach is widely used as a simple method for predicting direct runoff volume for a given rainfall event. CN values can be selected from tables; however, it is more accurate to estimate the CN value from measured rainfall-runoff data (when available) in a watershed. Previous researchers indicated that CN values calculated from measured rainfall-runoff data vary systematically with rainfall depth, and they suggested determining a single asymptotic CN value, observed for very high rainfall depths, to characterize a watershed's runoff response. In this paper, the novel hypothesis that the observed correlation between the calculated CN value and the rainfall depth in a watershed reflects the effect of the inevitable presence of soil-cover complex spatial variability along watersheds is tested. Based on this hypothesis, the simplified concept of a two-CN heterogeneous system is introduced to model the observed CN-rainfall variation by reducing the CN spatial variability into two classes. The behavior of the CN-rainfall function produced by the proposed two-CN system concept is approached theoretically and analyzed systematically, and it is found to be similar to the variation observed in natural watersheds. Synthetic data tests, natural watershed examples, and a detailed study of two natural experimental watersheds with known spatial heterogeneity characteristics were used to evaluate the method. The results indicate that the determination of CN values from rainfall-runoff data using the proposed two-CN system approach provides reasonable accuracy and outperforms the previous method based on the determination of a single asymptotic CN value. Although the suggested method increases the number of unknown parameters to three (instead of one), a clear physical reasoning for them is presented.
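A minimal sketch of back-calculating CN from a measured rainfall-runoff pair, using the standard SCS-CN relations (with the usual initial abstraction ratio of 0.2 and depths in mm); the event depths below are illustrative:

```python
import math

def cn_from_event(P, Q):
    """Invert Q = (P - 0.2 S)^2 / (P + 0.8 S) for S, then CN = 25400 / (S + 254)."""
    S = 5.0 * (P + 2.0 * Q - math.sqrt(4.0 * Q**2 + 5.0 * P * Q))
    return 25400.0 / (S + 254.0)

# event-by-event CN typically drifts with rainfall depth P, which is the
# variation the two-CN heterogeneous-system concept is meant to explain
for P, Q in [(30.0, 2.0), (60.0, 12.0), (120.0, 50.0)]:
    print(P, round(cn_from_event(P, Q), 1))
```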
A TRMM Rainfall Estimation Method Applicable to Land Areas
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, R.; Weinman, J.; Dalu, G.
1999-01-01
Methods developed to estimate rain rate on a footprint scale over land with the satellite-borne multispectral dual-polarization Special Sensor Microwave Imager (SSM/I) radiometer have met with limited success. Variability of surface emissivity on land and beam filling are commonly cited as the weaknesses of these methods. On the contrary, we contend that a more significant reason for this lack of success is that the information content of the spectral and polarization measurements of the SSM/I is limited because of significant redundancy. As a result, the complex nature and vertical distribution of frozen and melting ice particles of different densities, sizes, and shapes cannot be resolved satisfactorily. Extinction in the microwave region due to these complex particles can mask the extinction due to rain drops. For these reasons, theoretical models that attempt to retrieve rain rate do not succeed on a footprint scale. To illustrate the weakness of these models, consider as an example the brightness temperature measurement made by the radiometer in the 85 GHz channel (T85). Models indicate that T85 should be inversely related to the rain rate because of scattering. However, rain rates derived from 15-minute rain gauges on land indicate that this is not true in a majority of footprints. This is also supported by the ship-borne radar observations of rain in the Tropical Oceans and Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA-COARE) region over the ocean. We therefore do not follow the above path of rain retrieval on a footprint scale. Instead, we depend on the limited ability of the microwave radiometer to detect the presence of rain. This capability is useful for determining the rain area in a mesoscale region. We find in a given rain event that this rain area is closely related to the mesoscale-average rain rate. Based on this observation, in this study we have developed a method to estimate the mesoscale-average rain rate over land utilizing microwave radiometer data. Because of the high degree of geographic and seasonal variability in the nature and intensity of rain, this method requires some tuning with 15-minute rain gauge data on land. After tuning, the method can be applied to an independent set of rain events that are close in time and space. We find that the mesoscale rain rates retrieved over the period of a month on land with this method show a correlation of about 0.85 with respect to the surface rain-gauge observations. This mesoscale-average rain-rate estimation method can be useful for extending the spatial and temporal coverage of the rainfall data provided by the Precipitation Radar on board the Tropical Rainfall Measuring Mission (TRMM) satellite.
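The area-based idea reduces to two steps: flag raining footprints, then scale the raining-area fraction by a gauge-tuned coefficient. The sketch below is a toy illustration only; the threshold, coefficient, and synthetic T85 field are placeholders, not the paper's tuned values.

```python
import numpy as np

rng = np.random.default_rng(0)
t85 = rng.normal(275.0, 12.0, size=(40, 40))   # synthetic footprint T85 field (K)

T_RAIN_THRESHOLD = 255.0                       # assumed depression threshold (K)
rain_fraction = np.mean(t85 < T_RAIN_THRESHOLD)

C_GAUGE = 12.0                                 # mm/h per unit area fraction (would be gauge-tuned)
mesoscale_rain_rate = C_GAUGE * rain_fraction
print(f"raining area fraction {rain_fraction:.3f} -> {mesoscale_rain_rate:.2f} mm/h")
```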
An Application of the Reasoned Action Approach to Bystander Intervention for Sexual Assault.
Lukacena, Kaylee M; Reynolds-Tylus, Tobias; Quick, Brian L
2017-10-25
The high prevalence of sexual assault on US college campuses has led to the widespread implementation of bystander intervention programs aimed at preventing sexual assault. The current study examines predictors of college students' intentions to engage in bystander intervention through the theoretical lens of the reasoned action approach. An online survey with college students (N = 186) was conducted at a large Midwestern university. Our results indicated that experiential attitudes, instrumental attitudes, descriptive norms, autonomy, and capacity were each positively associated with participants' intentions to intervene to stop a sexual assault. Against expectations, injunctive norms were unrelated to bystander intervention intentions. Finally, in addition to these main effects, an experiential attitude by autonomy interaction was also observed. The results are discussed with a focus on the theoretical and practical implications of our findings.
Logic Programming as an Inference Engine for Non-Monotonic Reasoning
1991-11-11
Mathematical Sciences. University of Texas at El Paso, El Paso, TX 79968-0514 (teodor@math.ep.utexas.edu). November 11, 1991. ... Przymusinska, L. Pereira and D.S. Warren. Significant progress has been made towards both the theoretical and algorithmic foundations of a non-monotonic reasoning system based on logic programming. An implementation of such a system, limited to circumscriptive theories, has also been completed.
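For flavor, the non-monotonicity such systems formalize can be illustrated with negation as failure, where a conclusion is withdrawn when new facts arrive. This is a generic illustration, not the report's circumscription-based system:

```python
# Tiny negation-as-failure sketch of non-monotonic inference: conclusions
# can be retracted when new facts arrive, unlike in classical logic.
def flies(x, facts):
    # Default rule: bird(X) and not abnormal(X) => flies(X).
    return f"bird({x})" in facts and f"abnormal({x})" not in facts

facts = {"bird(tweety)"}
print(flies("tweety", facts))     # True: absence of abnormality is assumed

facts.add("abnormal(tweety)")     # e.g., we learn tweety is a penguin
print(flies("tweety", facts))     # False: the earlier conclusion is withdrawn
```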
Hoggart, Lesley
2018-05-21
This paper scrutinises the concepts of moral reasoning and personal reasoning, problematising the binary model by looking at young women's pregnancy decision-making. Data from two UK empirical studies are subjected to theoretically driven qualitative secondary analysis, and illustrative cases show how complex decision-making is characterised by an intertwining of the personal and the moral, and is thus best understood by drawing on moral relativism.
Patterns of informal reasoning in the context of socioscientific decision making
NASA Astrophysics Data System (ADS)
Sadler, Troy D.; Zeidler, Dana L.
2005-01-01
The purpose of this study is to contribute to a theoretical knowledge base through research by examining factors salient to science education reform and practice in the context of socioscientific issues. The study explores how individuals negotiate and resolve genetic engineering dilemmas. A qualitative approach was used to examine patterns of informal reasoning and the role of morality in these processes. Thirty college students participated individually in two semistructured interviews designed to explore their informal reasoning in response to six genetic engineering scenarios. Students demonstrated evidence of rationalistic, emotive, and intuitive forms of informal reasoning. Rationalistic informal reasoning described reason-based considerations; emotive informal reasoning described care-based considerations; and intuitive reasoning described considerations based on immediate reactions to the context of a scenario. Participants frequently relied on combinations of these reasoning patterns as they worked to resolve individual socioscientific scenarios. Most of the participants appreciated at least some of the moral implications of their decisions, and these considerations were typically interwoven within an overall pattern of informal reasoning. These results highlight the need to ensure that science classrooms are environments in which intuition and emotion in addition to reason are valued. Implications and recommendations for future research are discussed.
Geometric Reasoning in an Active-Engagement Upper-Division E&M Classroom
NASA Astrophysics Data System (ADS)
Cerny, Leonard Thomas
A combination of theoretical perspectives is used to create a rich description of student reasoning when facing a highly-geometric electricity and magnetism problem in an upper-division active-engagement physics classroom at Oregon State University. Geometric reasoning as students encounter problem situations ranging from familiar to novel is described using van Zee and Manogue's (2010) ethnography of communication. Bing's (2008) epistemic framing model is used to illuminate how students are framing what they are doing and whether or not they see the problem as geometric. Kuo, Hull, Gupta, and Elby's (2010) blending model and Krutetskii's (1976) model of harmonic reasoning are used to illuminate ways students show problem-solving expertise. Sayer and Wittmann's (2008) model is used to show how resource plasticity impacts students' geometric reasoning and the degree to which students accept incorrect results.
Diagnostic Reasoning and Cognitive Biases of Nurse Practitioners.
Lawson, Thomas N
2018-04-01
Diagnostic reasoning is often used colloquially to describe the process by which nurse practitioners and physicians come to the correct diagnosis, but a rich definition and description of this process has been lacking in the nursing literature. A literature review was conducted with theoretical sampling seeking conceptual insight into diagnostic reasoning. Four common themes emerged: Cognitive Biases and Debiasing Strategies, the Dual Process Theory, Diagnostic Error, and Patient Harm. Relevant cognitive biases are discussed, followed by debiasing strategies and application of the dual process theory to reduce diagnostic error and harm. The accuracy of diagnostic reasoning of nurse practitioners may be improved by incorporating these items into nurse practitioner education and practice. [J Nurs Educ. 2018;57(4):203-208.]. Copyright 2018, SLACK Incorporated.
Using Propensity Scores for Estimating Causal Effects: A Study in the Development of Moral Reasoning
ERIC Educational Resources Information Center
Grunwald, Heidi E.; Mayhew, Matthew J.
2008-01-01
The purpose of this study was to illustrate the use of propensity scores for creating comparison groups, partially controlling for pretreatment course selection bias, and estimating the treatment effects of selected courses on the development of moral reasoning in undergraduate students. Specifically, we used a sample of convenience for comparing…
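The propensity-score workflow the abstract describes has a standard shape: model treatment assignment from pretreatment covariates, match on the estimated scores, then compare outcomes. A minimal sketch on synthetic data follows; the study's variables and estimator details are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 400
X = rng.standard_normal((n, 3))                      # pretreatment covariates
t = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))      # course selection depends on X
y = 50 + 2.0 * t + 3.0 * X[:, 0] + rng.standard_normal(n)  # outcome score

# 1. Estimate propensity scores e(X) = P(T = 1 | X) with logistic regression.
ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]

# 2. Match each treated student to the control with the nearest propensity score.
treated, control = np.where(t == 1)[0], np.where(t == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched = control[idx.ravel()]

# 3. The mean treated-minus-matched-control difference estimates the effect (ATT).
print(f"ATT estimate: {np.mean(y[treated] - y[matched]):.2f} (true effect 2.0)")
```

Because selection here depends on a covariate that also drives the outcome, a naive treated-versus-control mean difference would be biased upward; matching on the estimated scores removes most of that bias.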
Optimal Measurements for Simultaneous Quantum Estimation of Multiple Phases
NASA Astrophysics Data System (ADS)
Pezzè, Luca; Ciampini, Mario A.; Spagnolo, Nicolò; Humphreys, Peter C.; Datta, Animesh; Walmsley, Ian A.; Barbieri, Marco; Sciarrino, Fabio; Smerzi, Augusto
2017-09-01
A quantum theory of multiphase estimation is crucial for quantum-enhanced sensing and imaging and may link quantum metrology to more complex quantum computation and communication protocols. In this Letter, we tackle one of the key difficulties of multiphase estimation: obtaining a measurement which saturates the fundamental sensitivity bounds. We derive necessary and sufficient conditions for projective measurements acting on pure states to saturate the ultimate theoretical bound on precision given by the quantum Fisher information matrix. We apply our theory to the specific example of interferometric phase estimation using photon number measurements, a convenient choice in the laboratory. Our results thus introduce concepts and methods relevant to the future theoretical and experimental development of multiparameter estimation.
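For orientation, the bound in question is the matrix quantum Cramér-Rao inequality; in standard notation (ours, not necessarily the Letter's), with ν independent repetitions and quantum Fisher information matrix F_Q:

```latex
% Quantum Cramer-Rao bound: the covariance of any unbiased estimator of the
% phase vector theta is bounded below by the inverse quantum Fisher information.
\mathrm{Cov}(\hat{\boldsymbol{\theta}}) \;\succeq\; \frac{1}{\nu}\, F_Q^{-1}
```

A measurement "saturates" the bound when equality holds; the Letter's contribution is a necessary and sufficient characterization of which projective measurements on pure states achieve this.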
NASA Astrophysics Data System (ADS)
Gao, Shengguo; Zhu, Zhongli; Liu, Shaomin; Jin, Rui; Yang, Guangchao; Tan, Lei
2014-10-01
Soil moisture (SM) plays a fundamental role in the land-atmosphere exchange process. Spatial estimation based on multiple in situ (network) data is a critical way to understand the spatial structure and variation of land surface soil moisture. Theoretically, integrating densely sampled auxiliary data spatially correlated with soil moisture into the procedure of spatial estimation can improve its accuracy. In this study, we present a novel approach to estimate the spatial pattern of soil moisture by using the Bayesian maximum entropy (BME) method based on wireless sensor network data and auxiliary information from ASTER (Terra) land surface temperature (LST) measurements. For comparison, three traditional geostatistical methods were also applied: ordinary kriging (OK), which used the wireless sensor network data only, and regression kriging (RK) and ordinary co-kriging (Co-OK), which both integrated the ASTER land surface temperature as a covariate. In Co-OK, the LST was included linearly in the estimator; in RK, the estimator is expressed as the sum of the regression estimate and the kriged estimate of the spatially correlated residual; in BME, the ASTER land surface temperature was first converted to soil moisture via linear regression, and the t-distributed prediction interval (PI) of soil moisture was then estimated and used as soft data in probability form. The results indicate that all methods provide reasonable estimations. Co-OK, RK, and BME can provide a more accurate spatial estimation than OK by integrating the auxiliary information. RK and BME show more obvious improvement compared to Co-OK, and BME can even perform slightly better than RK. The inherent issue of spatial estimation (overestimation in the range of low values and underestimation in the range of high values) can also be further improved in both RK and BME. We can conclude that integrating auxiliary data into spatial estimation can indeed improve the accuracy, that BME and RK take better advantage of the auxiliary information compared to Co-OK, and that BME outperforms RK by integrating the auxiliary data in probability form.
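The RK decomposition described here (a trend on the covariate plus kriging of the residuals) is compact enough to sketch. The toy example below uses synthetic node data, an assumed exponential semivariogram, and assumed parameter values; it is not the study's calibration.

```python
import numpy as np

def gamma(h, sill=0.02, rang=200.0):
    # Exponential semivariogram (parameters would normally be fit to residuals).
    return sill * (1.0 - np.exp(-h / rang))

def ok_predict(xy, z, xy0):
    """Ordinary kriging of values z at points xy to one target point xy0."""
    n = len(z)
    D = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
    A = np.ones((n + 1, n + 1))          # kriging system with Lagrange multiplier
    A[:n, :n] = gamma(D)
    A[n, n] = 0.0
    b = np.append(gamma(np.linalg.norm(xy - xy0, axis=-1)), 1.0)
    w = np.linalg.solve(A, b)[:n]
    return w @ z

# Toy data: soil moisture at sensor nodes plus collocated LST (the covariate).
rng = np.random.default_rng(1)
xy = rng.uniform(0, 1000, size=(30, 2))                    # node coordinates (m)
lst = rng.uniform(290, 310, size=30)                       # LST at the nodes (K)
sm = 1.5 - 0.004 * lst + 0.01 * rng.standard_normal(30)    # synthetic SM

# Regression kriging: linear trend on LST + ordinary kriging of the residuals.
a, b0 = np.polyfit(lst, sm, 1)
resid = sm - (a * lst + b0)
xy0, lst0 = np.array([500.0, 500.0]), 300.0    # prediction location and its LST
sm_hat = (a * lst0 + b0) + ok_predict(xy, resid, xy0)
print(f"RK estimate of soil moisture: {sm_hat:.3f}")
```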
Osman, Magda; Stavy, Ruth
2006-12-01
Theories of adult reasoning propose that reasoning consists of two functionally distinct systems that operate under entirely different mechanisms. This theoretical framework has been used to account for a wide range of phenomena, which now encompasses developmental research on reasoning and problem solving. We begin this review by contrasting three main dual-system theories of adult reasoning (Evans & Over, 1996; Sloman, 1996; Stanovich & West, 2000) with a well-established developmental account that also incorporates a dual-system framework (Brainerd & Reyna, 2001). We use developmental studies of the formation and application of intuitive rules in science and mathematics to evaluate the claims that these theories make. Overall, the evidence reviewed suggests that what is crucial to understanding how children reason is the saliency of the features that are presented within a task. By highlighting the importance of saliency as a way of understanding reasoning, we aim to provide clarity concerning the benefits and limitations of adopting a dual-system framework to account for evidence from developmental studies of intuitive reasoning.
NASA Astrophysics Data System (ADS)
Kaushik, M.; Kumawat, M.; Singh, U. K.; Saxena, G.
2018-05-01
A theoretical investigation has been made of the structure of the high-spin states of 72-74Kr within the framework of cranked Hartree-Fock-Bogoliubov (CHFB) theory, employing a pairing + quadrupole + hexadecapole model interaction. The dependence of shape on spin, the excitation energy, and the alignment of the proton as well as neutron 0g9/2 orbitals, along with the backbending phenomenon, are discussed up to a high spin of J = 26. We found reasonable agreement with the experimental values and other theoretical calculations.
ERIC Educational Resources Information Center
Morren, Mattijn; Muris, Peter; Kindt, Merel; Schouten, Erik; van den Hout, Marcel
2008-01-01
Emotional and parent-based reasoning refer to the tendency to rely on personal or parental anxiety response information rather than on objective danger information when estimating the dangerousness of a situation. This study investigated the prospective relationships of emotional and parent-based reasoning with anxiety symptoms in a sample of…
A theoretical model for smoking prevention studies in preteen children.
McGahee, T W; Kemp, V; Tingen, M
2000-01-01
The age of onset of smoking is in continual decline, with the prime age of tobacco use initiation being 12-14 years. A weakness of the limited research conducted on smoking prevention programs designed for preteen children (ages 10-12) is the lack of a well-defined theoretical basis. A theoretical perspective is needed in order to make a meaningful transition from empirical analysis to application of knowledge. Bandura's Social Cognitive Theory (1977, 1986), the Theory of Reasoned Action (Ajzen & Fishbein, 1980), and other literature linking various concepts to smoking behaviors in preteens were used to develop a model that may be useful for smoking prevention studies in preteen children.
Theoretical model for plasmonic photothermal response of gold nanostructures solutions
NASA Astrophysics Data System (ADS)
Phan, Anh D.; Nga, Do T.; Viet, Nguyen A.
2018-03-01
Photothermal effects of gold core-shell nanoparticles and nanorods dispersed in water are theoretically investigated using the transient bioheat equation and the extended Mie theory. Properly calculating the absorption cross section is a crucial step in determining the elevation of solution temperature. The nanostructures are assumed to be randomly and uniformly distributed in the solution. Compared to previous experiments, our theoretical temperature increases during laser illumination show reasonable qualitative and quantitative agreement across various systems. This approach can be a highly reliable tool to predict photothermal effects in experimentally unexplored structures. We also validate our approach and discuss its limitations.
Experimental and theoretical rotordynamic stiffness coefficients for a three-stage brush seal
NASA Astrophysics Data System (ADS)
Pugachev, A. O.; Deckner, M.
2012-08-01
Experimental and theoretical results are presented for a multistage brush seal. Experimental stiffness is obtained from integrating circumferential pressure distribution measured in seal cavities. A CFD analysis is used to predict seal performance. Bristle packs are modeled by the porous medium approach. Leakage is predicted well by the CFD method. Theoretical stiffness coefficients are in reasonable agreement with the measurements. Experimental results are also compared with a three-teeth-on-stator labyrinth seal. The multistage brush seal gives about 60% leakage reduction over the labyrinth seal. Rotordynamic stiffness coefficients are also improved: the brush seal has positive direct stiffness and smaller cross-coupled stiffness.
Script-theory virtual case: A novel tool for education and research.
Hayward, Jake; Cheung, Amandy; Velji, Alkarim; Altarejos, Jenny; Gill, Peter; Scarfe, Andrew; Lewis, Melanie
2016-11-01
Context/Setting: The script theory of diagnostic reasoning proposes that clinicians evaluate cases in the context of an "illness script," iteratively testing internal hypotheses against new information eventually reaching a diagnosis. We present a novel tool for teaching diagnostic reasoning to undergraduate medical students based on an adaptation of script theory. We developed a virtual patient case that used clinically authentic audio and video, interactive three-dimensional (3D) body images, and a simulated electronic medical record. Next, we used interactive slide bars to record respondents' likelihood estimates of diagnostic possibilities at various stages of the case. Responses were dynamically compared to data from expert clinicians and peers. Comparative frequency distributions were presented to the learner and final diagnostic likelihood estimates were analyzed. Detailed student feedback was collected. Over two academic years, 322 students participated. Student diagnostic likelihood estimates were similar year to year, but were consistently different from expert clinician estimates. Student feedback was overwhelmingly positive: students found the case was novel, innovative, clinically authentic, and a valuable learning experience. We demonstrate the successful implementation of a novel approach to teaching diagnostic reasoning. Future study may delineate reasoning processes associated with differences between novice and expert responses.
Dual Processes in Decision Making and Developmental Neuroscience: A Fuzzy-Trace Model.
Reyna, Valerie F; Brainerd, Charles J
2011-09-01
From Piaget to the present, traditional and dual-process theories have predicted improvement in reasoning from childhood to adulthood, and improvement has been observed. However, developmental reversals-that reasoning biases emerge with development-have also been observed in a growing list of paradigms. We explain how fuzzy-trace theory predicts both improvement and developmental reversals in reasoning and decision making. Drawing on research on logical and quantitative reasoning, as well as on risky decision making in the laboratory and in life, we illustrate how the same small set of theoretical principles apply to typical neurodevelopment, encompassing childhood, adolescence, and adulthood, and to neurological conditions such as autism and Alzheimer's disease. For example, framing effects-that risk preferences shift when the same decisions are phrased in terms of gains versus losses-emerge in early adolescence as gist-based intuition develops. In autistic individuals, who rely less on gist-based intuition and more on verbatim-based analysis, framing biases are attenuated (i.e., they outperform typically developing control subjects). In adults, simple manipulations based on fuzzy-trace theory can make framing effects appear and disappear depending on whether gist-based intuition or verbatim-based analysis is induced. These theoretical principles are summarized and integrated in a new mathematical model that specifies how dual modes of reasoning combine to produce predictable variability in performance. In particular, we show how the most popular and extensively studied model of decision making-prospect theory-can be derived from fuzzy-trace theory by combining analytical (verbatim-based) and intuitive (gist-based) processes.
What Eye Movements Can Tell about Theory of Mind in a Strategic Game
Meijering, Ben; van Rijn, Hedderik; Taatgen, Niels A.; Verbrugge, Rineke
2012-01-01
This study investigates strategies in reasoning about mental states of others, a process that requires theory of mind. It is a first step in studying the cognitive basis of such reasoning, as strategies affect tradeoffs between cognitive resources. Participants were presented with a two-player game that required reasoning about the mental states of the opponent. Game theory literature discerns two candidate strategies that participants could use in this game: either forward reasoning or backward reasoning. Forward reasoning proceeds from the first decision point to the last, whereas backward reasoning proceeds in the opposite direction. Backward reasoning is the only optimal strategy, because the optimal outcome is known at each decision point. Nevertheless, we argue that participants prefer forward reasoning because it is similar to causal reasoning. Causal reasoning, in turn, is prevalent in human reasoning. Eye movements were measured to discern between forward and backward progressions of fixations. The observed fixation sequences corresponded best with forward reasoning. Early in games, the probability of observing a forward progression of fixations is higher than the probability of observing a backward progression. Later in games, the probabilities of forward and backward progressions are similar, which seems to imply that participants were either applying backward reasoning or jumping back to previous decision points while applying forward reasoning. Thus, the game-theoretical favorite strategy, backward reasoning, does seem to exist in human reasoning. However, participants preferred the more familiar, practiced, and prevalent strategy: forward reasoning. PMID:23029341
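The forward/backward distinction maps onto a classic computation. Below is a toy backward-induction sketch over a hypothetical two-player sequential game (not the study's actual game): at each node, the player to move picks the action maximizing their own payoff given optimal continuation.

```python
# Leaves hold (p1, p2) payoff tuples; internal nodes are (player_index, {action: subtree}).
game = (0, {                                   # player 1 (index 0) moves first
    "left": (1, {"a": (3, 1), "b": (0, 0)}),   # then player 2 (index 1)
    "right": (1, {"a": (1, 2), "b": (2, 4)}),
})

def backward_induction(node):
    """Return the payoff pair reached when both players play optimally."""
    if not isinstance(node[1], dict):
        return node                            # leaf: the payoff tuple itself
    player, children = node
    # Solve every subgame first (backward), then pick the best action.
    outcomes = {a: backward_induction(c) for a, c in children.items()}
    best = max(outcomes, key=lambda a: outcomes[a][player])
    return outcomes[best]

print(backward_induction(game))                # -> (3, 1)
```

Forward reasoning, by contrast, would walk the tree from the first decision point onward, provisionally evaluating continuations; the eye-movement evidence in the abstract suggests participants favor that direction despite backward induction being optimal.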
Theoretical and simulated performance for a novel frequency estimation technique
NASA Technical Reports Server (NTRS)
Crozier, Stewart N.
1993-01-01
A low complexity, open-loop, discrete-time, delay-multiply-average (DMA) technique for estimating the frequency offset for digitally modulated MPSK signals is investigated. A nonlinearity is used to remove the MPSK modulation and generate the carrier component to be extracted. Theoretical and simulated performance results are presented and compared to the Cramer-Rao lower bound (CRLB) for the variance of the frequency estimation error. For all signal-to-noise ratios (SNR's) above threshold, it is shown that the CRLB can essentially be achieved with linear complexity.
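A one-delay variant of the idea is easy to simulate: an M-th power nonlinearity strips the MPSK modulation, and the angle of the averaged delayed product yields the offset. This is a generic sketch of the delay-multiply-average principle, not the paper's exact algorithm; all signal parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
M, fs, f_off, n = 4, 1.0e4, 123.0, 4096        # QPSK, sample rate, true offset (Hz)
symbols = np.exp(1j * 2 * np.pi * rng.integers(0, M, n) / M)   # MPSK symbols
t = np.arange(n) / fs
x = symbols * np.exp(2j * np.pi * f_off * t)
x += 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))  # noise

# 1. M-th power nonlinearity removes the MPSK modulation (symbols**M == 1),
#    leaving a tone at M times the frequency offset.
z = x ** M
# 2. Delay-multiply-average: the angle of the averaged one-sample
#    autocorrelation equals 2*pi*M*f_off/fs.
r = np.mean(z[1:] * np.conj(z[:-1]))
f_hat = np.angle(r) * fs / (2 * np.pi * M)
print(f"estimated offset: {f_hat:.1f} Hz (true {f_off} Hz)")
```

The estimate is unambiguous only while M·f_off stays below half the sample rate, one reason such open-loop estimators are characterized by an SNR threshold as in the abstract.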
Kennedy, Reese D; Cheavegatti-Gianotto, Adriana; de Oliveira, Wladecir S; Lirette, Ronald P; Hjelle, Jerry J
2018-01-01
Insect-protected sugarcane that expresses Cry1Ab has been developed in Brazil. Analysis of trade information has shown that effectively all the sugarcane-derived Brazilian exports are raw or refined sugar and ethanol. The fact that raw and refined sugar are highly purified food ingredients, with no detectable transgenic protein, provides an interesting case study of a generalized safety assessment approach. In this study, both the theoretical protein intakes and safety assessments of the Cry1Ab, Cry1Ac, NPTII, and Bar proteins used in insect-protected biotechnology crops were examined. The potential consumption of these proteins was examined using local market research data on average added sugar intakes in eight diverse and representative Brazilian raw and refined sugar export markets (Brazil, Canada, China, Indonesia, India, Japan, Russia, and the USA). The average sugar intakes, which ranged from 5.1 g of added sugar/person/day (India) to 126 g sugar/p/day (USA), were used to calculate possible human exposure. The theoretical protein intake estimates were carried out under two scenarios: the "Worst-case" scenario assumed that 1 μg of newly-expressed protein is detected per g of raw or refined sugar, and the "Reasonable-case" scenario assumed 1 ng protein/g sugar. The "Worst-case" scenario was based on the results of detailed studies of sugarcane processing in Brazil showing that refined sugar contains less than 1 μg of total plant protein/g refined sugar. The "Reasonable-case" scenario was based on the assumption that the expression levels of newly-expressed proteins in stalk were less than 0.1% of total stalk protein. Using these calculated protein intake values from the consumption of sugar, along with the accepted NOAEL levels of the four representative proteins, we concluded that safety margins ranged from 6.9 × 10⁵ to 5.9 × 10⁷ for the "Worst-case" scenario and from 6.9 × 10⁸ to 5.9 × 10¹⁰ for the "Reasonable-case" scenario. These safety margins are very high due to the extremely low possible exposures and the high NOAELs for these non-toxic proteins. This generalized approach to the safety assessment of highly purified food ingredients like sugar illustrates that sugar processed from Brazilian GM varieties is safe for consumption in representative markets globally.
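The margin arithmetic is simple enough to show. In the sketch below, the body weight and NOAEL are illustrative assumptions (NOAELs are protein-specific), while the sugar intake and protein concentrations come from the scenarios above.

```python
# Hypothetical numbers for illustration: a 60 kg consumer, the highest market
# sugar intake from the abstract (126 g/day), and an assumed NOAEL of
# 1000 mg/kg bw/day. Margin = NOAEL / estimated daily dose.
sugar_intake_g = 126.0
body_weight_kg = 60.0
noael_mg_per_kg = 1000.0

for label, protein_ug_per_g in [("Worst-case", 1.0), ("Reasonable-case", 1e-3)]:
    dose_mg_per_kg = protein_ug_per_g * sugar_intake_g / body_weight_kg / 1000.0
    print(f"{label}: margin = {noael_mg_per_kg / dose_mg_per_kg:.1e}")
```

With these assumed inputs the worst-case margin comes out near 5 × 10⁵, the same order as the lower end of the range reported in the abstract.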
Revised theory of tachyons in general relativity
NASA Astrophysics Data System (ADS)
Schwartz, Charles
2017-08-01
A minus sign is inserted, for good reason, into the formula for the energy-momentum tensor for tachyons. This leads to remarkable theoretical consequences and a plausible explanation for the phenomenon called dark energy in the cosmos.
Jemmott, John B; Jemmott, Loretta Sweet; O'Leary, Ann; Icard, Larry D; Rutledge, Scott E; Stevens, Robin; Hsu, Janet; Stephens, Alisa J
2015-07-01
We examined the efficacy and mediation of Being Responsible for Ourselves (BRO), an HIV/STI risk-reduction intervention for African American men who have sex with men (MSM), the population with the highest HIV-diagnosis rate in the US. We randomized African American MSM to one of two interventions: BRO HIV/STI risk-reduction, targeting condom use; or attention-matched control, targeting physical activity and healthy diet. The interventions were based on social cognitive theory, the reasoned-action approach, and qualitative research. Men reporting anal intercourse with other men in the past 90 days were eligible and completed pre-intervention, immediately post-intervention, and 6 and 12 months post-intervention surveys. Of 595 participants, 503 (85 %) completed the 12-month follow-up. Generalized-estimating-equations analysis indicated that, compared with the attention-matched control intervention, the BRO intervention did not increase consistent condom use averaged over the 6- and 12-month follow-ups, which was the primary outcome. Although BRO did not affect the proportion of condom-protected intercourse acts, unprotected sexual intercourse, multiple partners, or insertive anal intercourse, it did reduce receptive anal intercourse compared with the control, a behavior linked to incident HIV infection. Mediation analysis using the product-of-coefficients approach revealed that although BRO increased seven of nine theoretical constructs it was designed to affect, it increased only one of three theoretical constructs that predicted consistent condom use: condom-use impulse-control self-efficacy. Thus, BRO indirectly increased consistent condom use through condom-use impulse-control self-efficacy. In conclusion, although BRO increased several theoretical constructs, most of those constructs did not predict consistent condom use; hence, the intervention did not increase it. Theoretical constructs that interventions should target to increase African American MSM's condom use are discussed.
Clinical reasoning-embodied meaning-making in physiotherapy.
Chowdhury, Anoop; Bjorbækmo, Wenche Schrøder
2017-07-01
This article examines physiotherapists' lived experience of practicing physiotherapy in primary care, focusing on clinical reasoning and decision-making in the case of a patient we call Eva. The material presented derives from a larger study involving two women participants, both with a protracted history of neck and shoulder pain. A total of eight sessions, all of them conducted by the first author, a professional physiotherapist, in his own practice room, were videotaped, after which the first author transcribed the sessions and added reflective notes. One session emerged as particularly stressful for both parties and is explored in detail in this article. In our analysis, we seek to be attentive to the experiences of physiotherapy displayed and to explore their meaning, significance and uniqueness from a phenomenological perspective. Our research reveals the complexity of integrating multiple theoretical perspectives of practice in clinical decision-making and suggests that a phenomenological perspective can provide insights into clinical encounters through its recognition of embodied knowledge. We argue that good physiotherapy practice demands tactfulness, sensitivity, and the desire to build a cooperative patient-therapist relationship. Informed by theoretical and practical knowledge from multiple disciplines, patient management can evolve and unfold beyond rehearsed routines and theoretical principles.
Vista goes online: Decision-analytic systems for real-time decision-making in mission control
NASA Technical Reports Server (NTRS)
Barry, Matthew; Horvitz, Eric; Ruokangas, Corinne; Srinivas, Sampath
1994-01-01
The Vista project has centered on the use of decision-theoretic approaches for managing the display of critical information relevant to real-time operations decisions. The Vista-I project originally developed a prototype of these approaches for managing flight control displays in the Space Shuttle Mission Control Center (MCC). The follow-on Vista-II project integrated these approaches in a workstation program which currently is being certified for use in the MCC. To our knowledge, this will be the first application of automated decision-theoretic reasoning techniques for real-time spacecraft operations. We shall describe the development and capabilities of the Vista-II system, and provide an overview of the use of decision-theoretic reasoning techniques to the problems of managing the complexity of flight controller displays. We discuss the relevance of the Vista techniques within the MCC decision-making environment, focusing on the problems of detecting and diagnosing spacecraft electromechanical subsystems component failures with limited information, and the problem of determining what control actions should be taken in high-stakes, time-critical situations in response to a diagnosis performed under uncertainty. Finally, we shall outline our current research directions for follow-on projects.
Kalman filter estimation of human pilot-model parameters
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Roland, V. R.
1975-01-01
The parameters of a human pilot-model transfer function are estimated by applying the extended Kalman filter to the corresponding retarded differential-difference equations in the time domain. Use of computer-generated data indicates that most of the parameters, including the implicit time delay, may be reasonably estimated in this way. When applied to two sets of experimental data obtained from a closed-loop tracking task performed by a human, the Kalman filter generated diverging residuals for one of the measurement types, apparently because of model assumption errors. Application of a modified adaptive technique was found to overcome the divergence and to produce reasonable estimates of most of the parameters.
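The general recipe (augment the state with the unknown parameters, then run an extended Kalman filter) can be sketched compactly. The toy below estimates the gain and time constant of a first-order response; unlike the paper's pilot model it omits the time delay, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps = 0.02, 1500
K_true, tau_true = 2.0, 0.5            # "pilot" gain and time constant (assumed)
u = np.sin(2 * np.pi * 0.2 * dt * np.arange(n_steps))   # tracking-task input

# Simulate the true first-order response with measurement noise.
y = np.zeros(n_steps)
for k in range(n_steps - 1):
    y[k + 1] = y[k] + dt * (K_true * u[k] - y[k]) / tau_true
z = y + 0.05 * rng.standard_normal(n_steps)

# EKF with augmented state x = [y, K, tau]; the parameters are constant states.
x = np.array([0.0, 1.0, 1.0])          # deliberately wrong initial K and tau
P = np.eye(3)
Qn = np.diag([1e-6, 1e-8, 1e-8])       # small process noise keeps the filter adaptive
R = 0.05 ** 2
H = np.array([[1.0, 0.0, 0.0]])        # only y is measured

for k in range(n_steps):
    # Measurement update.
    S = H @ P @ H.T + R
    Kg = P @ H.T / S
    x = x + (Kg * (z[k] - x[0])).ravel()
    P = (np.eye(3) - Kg @ H) @ P
    # Time update: Euler-discretized dynamics and their Jacobian.
    yk, Kp, tau = x
    x = np.array([yk + dt * (Kp * u[k] - yk) / tau, Kp, tau])
    F = np.array([[1 - dt / tau, dt * u[k] / tau, -dt * (Kp * u[k] - yk) / tau ** 2],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
    P = F @ P @ F.T + Qn

print(f"K ≈ {x[1]:.2f} (true 2.0), tau ≈ {x[2]:.2f} (true 0.5)")
```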
NASA Astrophysics Data System (ADS)
Hoehn, Jessica R.; Finkelstein, Noah D.
2018-06-01
As part of a research study on student reasoning in quantum mechanics, we examine students' use of ontologies, or the way students' categorically organize entities they are reasoning about. In analyzing three episodes of focus group discussions with modern physics students, we present evidence of the dynamic nature of ontologies, and refine prior theoretical frameworks for thinking about dynamic ontologies. We find that in a given reasoning episode ontologies can be dynamic in construction (referring to when the reasoner constructs the ontologies) or application (referring to which ontologies are applied in a given reasoning episode). In our data, we see instances of students flexibly switching back and forth between parallel stable structures as well as constructing and negotiating new ontologies in the moment. Methodologically, we use a collective conceptual blending framework as an analytic tool for capturing student reasoning in groups. In this research, we value the messiness of student reasoning and argue that reasoning in a tentative manner can be productive for students learning quantum mechanics. As such, we shift away from a binary view of student learning which sees students as either having the correct answer or not.
Sirota, Miroslav; Kostovičová, Lenka; Juanchich, Marie
2014-08-01
Knowing which properties of visual displays facilitate statistical reasoning bears practical and theoretical implications. Therefore, we studied the effect of one property of visual displays - iconicity (i.e., the resemblance of a visual sign to its referent) - on Bayesian reasoning. Two main accounts of statistical reasoning predict different effects of iconicity on Bayesian reasoning. The ecological-rationality account predicts a positive iconicity effect, because more highly iconic signs resemble more individuated objects, which tap better into an evolutionary-designed frequency-coding mechanism that, in turn, facilitates Bayesian reasoning. The nested-sets account predicts a null iconicity effect, because iconicity does not affect the salience of a nested-sets structure-the factor facilitating Bayesian reasoning processed by a general reasoning mechanism. In two well-powered experiments (N = 577), we found no support for a positive iconicity effect across different iconicity levels that were manipulated in different visual displays (meta-analytical overall effect: log OR = -0.13, 95% CI [-0.53, 0.28]). A Bayes factor analysis provided strong evidence in favor of the null hypothesis-the null iconicity effect. Thus, these findings corroborate the nested-sets rather than the ecological-rationality account of statistical reasoning.
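For context, the Bayesian tasks at issue can be computed in either a probability or a natural-frequency format; the two are arithmetically identical, as the sketch below shows (the task numbers are illustrative, not from the experiments).

```python
# A standard textbook Bayesian task, computed two ways.
prevalence, sensitivity, false_pos = 0.01, 0.80, 0.096

# Probability format: Bayes' theorem directly.
p_pos = sensitivity * prevalence + false_pos * (1 - prevalence)
posterior = sensitivity * prevalence / p_pos

# Natural-frequency format: the same arithmetic over 1000 cases.
n = 1000
true_pos = sensitivity * (prevalence * n)          # 8 of the 10 sick test positive
false_pos_n = false_pos * ((1 - prevalence) * n)   # ~95 of the 990 healthy test positive
posterior_freq = true_pos / (true_pos + false_pos_n)

print(f"{posterior:.3f} vs {posterior_freq:.3f}")  # identical answers
```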
Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas
2014-01-01
Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students’ understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference. PMID:25419016
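The conditional-probability question quoted above reduces to the standard conditional distribution of a bivariate normal. A small sketch, with illustrative parameters (not the dataset's actual estimates):

```python
from scipy.stats import norm

# Illustrative parameters: adolescent height (in) and weight (lb), correlation rho.
mu_h, sd_h = 65.0, 3.0
mu_w, sd_w = 130.0, 15.0
rho = 0.6

h = mu_h   # condition on average height
# Conditional distribution of W given H = h for a bivariate normal:
cond_mean = mu_w + rho * sd_w * (h - mu_h) / sd_h
cond_sd = sd_w * (1 - rho ** 2) ** 0.5

p = norm.cdf(140, cond_mean, cond_sd) - norm.cdf(120, cond_mean, cond_sd)
print(f"P(120 < W < 140 | H = {h}) = {p:.3f}")
```

Conditioning on average height leaves the conditional mean at 130 lb but shrinks the spread by the factor sqrt(1 - rho²), which is exactly the kind of relationship such interactive tools are built to make visible.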
The collective and quantum nature of proton transfer in the cyclic water tetramer on NaCl(001)
NASA Astrophysics Data System (ADS)
Feng, Yexin; Wang, Zhichang; Guo, Jing; Chen, Ji; Wang, En-Ge; Jiang, Ying; Li, Xin-Zheng
2018-03-01
Proton tunneling is an elementary process in the dynamics of hydrogen-bonded systems. Collective tunneling has long been known to exist; atomistic investigations of this mechanism in realistic systems, however, are scarce. Using a combination of ab initio theoretical and high-resolution experimental methods, we investigate the role played by the protons in the chirality switching of a water tetramer on NaCl(001). Our scanning tunneling spectroscopies show that partial deuteration of the H2O tetramer with only one D2O leads to a significant suppression of the chirality switching rate at a cryogenic temperature (T), indicating that the chirality switches by tunneling in a concerted manner. Theoretical simulations, in the meantime, support this picture by presenting a much smaller free-energy barrier for the translational collective proton tunneling mode than for other chirality switching modes at low T. During this analysis, the virial energy provides a reasonable estimator for the description of the nuclear quantum effects when a traditional thermodynamic integration method cannot be used, which could be employed in future studies of similar problems. Given the high-dimensional nature of realistic systems and the topology of the hydrogen-bonded network, collective proton tunneling may exist more ubiquitously than expected. Systems of this kind can serve as ideal platforms for studies of this mechanism, easily accessible to high-resolution experimental measurements.
Explosive magnetic reconnection caused by an X-shaped current-vortex layer in a collisionless plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirota, M.; Hattori, Y.; Morrison, P. J.
2015-05-15
A mechanism for explosive magnetic reconnection is investigated by analyzing the nonlinear evolution of a collisionless tearing mode in a two-fluid model that includes the effects of electron inertia and temperature. These effects cooperatively enable a fast reconnection by forming an X-shaped current-vortex layer centered at the reconnection point. A high-resolution simulation of this model for an unprecedentedly small electron skin depth d_e and ion-sound gyroradius ρ_s, satisfying d_e = ρ_s, shows an explosive tendency for nonlinear growth of the tearing mode, where it is newly found that the explosive widening of the X-shaped layer occurs locally around the reconnection point, with the length of the X shape being shorter than the domain length and the wavelength of the linear tearing mode. The reason for the onset of this locally enhanced reconnection is explained theoretically by developing a novel nonlinear and nonequilibrium inner solution that models the local X-shaped layer, and then matching it to an outer solution that is approximated by a linear tearing eigenmode with a shorter wavelength than the domain length. This theoretical model proves that the local reconnection can release the magnetic energy more efficiently than the global one, and the estimated scaling of the explosive growth rate agrees well with the simulation results.
On theoretical and experimental modeling of metabolism forming in prebiotic systems
NASA Astrophysics Data System (ADS)
Bartsev, S. I.; Mezhevikin, V. V.
Recently, the search for extraterrestrial life has attracted more and more attention. However, the search can hardly be effective without a sufficiently universal concept of the origin of life, which incidentally tackles the problem of the origin of life on Earth. A concept of the initial stages of the origin of life, including the origin of prebiotic metabolism, is stated in this paper. The suggested concept eliminates key difficulties in the problem of the origin of life and allows experimental verification. According to the concept, the predecessor of living beings has to be sufficiently simple to provide a non-zero probability of self-assembly during a short (on geological or cosmic scales) time. In addition, the predecessor has to be capable of autocatalysis and further complication (evolution). A possible scenario of the initial stage of the origin of life, which could be realized both on other planets and inside an experimental facility, is considered. Within the scope of the scenario, a theoretical model of a multivariate oligomeric autocatalyst coupled with a phase-separated particle is presented. Results of computer simulation of a possible initial stage of chemical evolution are shown. The estimations conducted show that the origin of an autocatalytic oligomeric phase-separated system is possible at reasonable values of the kinetic parameters of the involved chemical reactions in a small-scale flow reactor. The accepted statements, which eliminate key problems of the origin of life, imply an important consequence: organisms emerging outside the Earth or inside a reactor would have to be based on a biochemistry different from the terrestrial one.
Vegetation Cover based on Eagleson's Ecohydrological Optimality in Northeast China Transect (NECT)
NASA Astrophysics Data System (ADS)
Cong, Z.; Mo, K.; Qinshu, L.; Zhang, L.
2016-12-01
Vegetation is considered an indicator of climate; thus the study of vegetation growth and distribution is of great importance for understanding ecosystem structure and function. Vegetation cover is used as an important index to describe vegetation conditions. In Eagleson's ecohydrological optimality, the theoretical optimal vegetation cover M* can be estimated by solving water balance equations. In this study, the theory is applied to the Northeast China Transect (NECT), one of the International Geosphere-Biosphere Programme (IGBP) terrestrial transects. The spatial distribution of actual vegetation cover M, derived from the Normalized Difference Vegetation Index (NDVI) from the Moderate Resolution Imaging Spectroradiometer (MODIS), shows a significant gradient ranging from 1 in the eastern forests to 0 in the western desert. The result indicates that the theoretical M* fits the actual M well (for forest, M* = 0.822 while M = 0.826; for grassland, M* = 0.353 while M = 0.352; the correlation coefficient between M and M* is 0.81). The reasonable calculated proportions of the water balance components further demonstrate the applicability of the ecohydrological optimality theory. M* increases with increasing LAI, leaf angle, stem fraction, and temperature, and decreases with increasing precipitation amount. This method offers the possibility of analyzing the impacts of climate change on vegetation cover quantitatively, thus providing advice for eco-restoration projects.
Shriver, K A
1986-01-01
Realistic estimates of economic depreciation are required for analyses of tax policy, economic growth and production, and national income and wealth. The purpose of this paper is to examine the stability assumption underlying the econometric derivation of empirical estimates of economic depreciation for industrial machinery and equipment. The results suggest that economic depreciation rates of decline may be reasonably stable over time. Thus, the assumption of a constant rate of economic depreciation may be a reasonable approximation for further empirical economic analyses.
ERIC Educational Resources Information Center
General Accounting Office, Washington, DC. Accounting and Information Management Div.
This report finds problems in the ability of the five major federal credit agencies to reasonably estimate subsidy costs related to the $216.6 billion in direct loans and $712.4 billion in loan guarantees issued by the federal government. The five agencies are the Small Business Administration (SBA) and the departments of Education, Housing and…
1947-04-01
EXPERIMENTAL RESULTS By Garry C. Myers, Jr. SUMMARY: In order to provide basic data on helicopter rotor-blade motion, photographic records of ... ABOUT THE AXIS OF NO FEATHERING. Reason for conversion: At the time that the basic theoretical treatments, such as that of reference 1, were made ... of the mechanical means used for achieving it. This fact may be confirmed by inspection but has also been demonstrated mathematically in reference ...
Perera, Harsha N
2016-01-01
Considerable debate still exists among scholars over the role of trait emotional intelligence (TEI) in academic performance. The dominant theoretical position is that TEI should be orthogonal or only weakly related to achievement; yet, there are strong theoretical reasons to believe that TEI plays a key role in performance. The purpose of the current article is to provide (a) an overview of the possible theoretical mechanisms linking TEI with achievement and (b) an update on empirical research examining this relationship. To elucidate these theoretical mechanisms, the overview draws on multiple theories of emotion and regulation, including TEI theory, social-functional accounts of emotion, and expectancy-value and psychobiological model of emotion and regulation. Although these theoretical accounts variously emphasize different variables as focal constructs, when taken together, they provide a comprehensive picture of the possible mechanisms linking TEI with achievement. In this regard, the article redresses the problem of vaguely specified theoretical links currently hampering progress in the field. The article closes with a consideration of directions for future research.
Optimally weighted least-squares steganalysis
NASA Astrophysics Data System (ADS)
Ker, Andrew D.
2007-02-01
Quantitative steganalysis aims to estimate the amount of payload in a stego object, and such estimators seem to arise naturally in steganalysis of Least Significant Bit (LSB) replacement in digital images. However, as with all steganalysis, the estimators are subject to errors, and their magnitude seems heavily dependent on properties of the cover. In very recent work we have given the first derivation of estimation error, for a certain method of steganalysis (the Least-Squares variant of Sample Pairs Analysis) of LSB replacement steganography in digital images. In this paper we make use of our theoretical results to find an improved estimator and detector. We also extend the theoretical analysis to another (more accurate) steganalysis estimator (Triples Analysis) and hence derive an improved version of that estimator too. Experimental results show that the new steganalyzers have improved accuracy, particularly in the difficult case of never-compressed covers.
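The machinery underlying such estimators is ordinary weighted least squares. A generic numpy sketch follows; the paper's contribution is the weighting and error analysis specific to the Sample Pairs and Triples trace equations, which is not reproduced here.

```python
import numpy as np

# Generic weighted least squares: observations with known (or estimated)
# heteroscedastic variances get weights 1/var, yielding the estimator
# beta_hat = (X' W X)^-1 X' W y with lower variance than unweighted OLS.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.uniform(0, 1, n)])
beta_true = np.array([0.1, 0.5])
var = rng.uniform(0.01, 1.0, n)                       # per-observation variance
y = X @ beta_true + rng.standard_normal(n) * np.sqrt(var)

W = np.diag(1.0 / var)
beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(beta_hat)   # close to beta_true
```

In the steganalysis setting the "observations" are the trace equations derived from pixel-pair statistics, and the optimal weights come from the cover-dependent error model the paper derives.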
Karogodina, Tatiana Y; Dranov, Igor G; Sergeeva, Svetlana V; Stass, Dmitry V; Steiner, Ulrich E
2011-06-20
Oxidation of dihydrorhodamine 123 (DHR) to rhodamine 123 (RH) by oxoperoxonitrite (ONOO(-)), formed through recombination of NO and O(2)(·-) radicals resulting from thermal decomposition of 3-morpholinosydnonimine (SIN-1) in buffered aerated aqueous solution at pH 7.6, represents a kinetic model system of the reactivity of NO and O(2)(·-) in biochemical systems. A magnetic-field effect (MFE) on the yield of RH detected in this system is explored in the full range of fields between 0 and 18 T. It is found to increase in a nearly linear fashion up to a value of 5.5±1.6 % at 18 T and 23 °C (3.1±0.7 % at 40 °C). A theoretical framework to analyze the MFE in terms of the magnetic-field-enhanced recombination rate constant k(rec) of NO and O(2)(·-) due to magnetic mixing of T(0) and S spin states of the radical pair by the Δg mechanism is developed, including estimation of magnetic properties (g tensor and spin relaxation times) of NO and O(2)(·-) in aqueous solution, and calculation of the MFE on k(rec) using the theoretical formalism of Gorelik et al. The factor with which the MFE on k(rec) is translated to the MFE on the yield of ONOO(-) and RH is derived for various kinetic scenarios representing possible sink channels for NO and O(2)(·-). With reasonable assumptions for the values of some unknown kinetic parameters, the theoretical predictions account well for the observed MFE. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
El Hussein, Mohamed; Hirst, Sandra
2016-02-01
The aim of this study was to construct a grounded theory that explains the clinical reasoning processes that registered nurses use to recognize delirium in older adults in acute care hospitals. Delirium is under-recognized in acute hospital settings, which may stem from underdeveloped clinical reasoning processes. Little is known about registered nurses' (RNs) clinical reasoning processes in complex situations such as delirium recognition. A grounded theory approach was used to analyse interview data about the clinical reasoning processes of RNs in acute hospital settings. Seventeen RNs were recruited. Concurrent data collection, comparative analysis, and theoretical sampling were conducted in 2013-2014. The core category to emerge from the data was 'chasing the mirage', which describes RNs' clinical reasoning processes to recognize delirium during their interactions with older adults. Understanding the reasoning that contributes to delirium under-recognition provides a strategy by which this problem can be brought to the forefront of RNs' awareness and intervention. Delirium recognition will contribute to quality care for older adults. © 2015 John Wiley & Sons Ltd.
Numerical modeling of solar irradiance on earth's surface
NASA Astrophysics Data System (ADS)
Mera, E.; Gutierez, L.; Da Silva, L.; Miranda, E.
2016-05-01
Modeling and estimating solar radiation at ground level involves the problems of estimating the equation of time, the Sun-Earth distance equation, the solar declination, and the surface irradiance. Considering that many studies have reported the inability of these theoretical equations to produce accurate estimates of radiation, many authors have proceeded to make corrections through calibration against field pyranometers (solarimeters) or through the use of satellites, the latter being a poor technique because it does not differentiate between radiation and radiant kinetic effects. Because of the above, and taking advantage of a properly calibrated ground weather station in the Susques Salar in Jujuy Province, Republic of Argentina, we modeled the variable in question through the following process: 1. theoretical modeling; 2. graphical study of the theoretical and actual data; 3. primary calibration of the data through hourly segmentation, horizontal shifts, and the addition of an asymptotic constant; 4. analysis of the scatter plot and contrast of the series. Based on the above steps, the modeling proceeded as follows. Step one: theoretical data were generated. Step two: the theoretical data were shifted by 5 hours. Step three: an asymptote was applied to all negative emissivity values, and a least-squares minimization between actual and modeled values was carried out with Excel's Solver, obtaining new asymptote values and the corresponding reformulation of the theoretical data; a constant value was also added by month over the set time range (4:00 pm to 6:00 pm). Step four: the coefficients of the modeling equation had monthly correlations between actual and theoretical data ranging from 0.7 to 0.9.
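The quantities listed in the first step are standard solar-geometry formulas. A sketch using the common Cooper declination and a polynomial equation-of-time approximation follows; these are textbook approximations, not necessarily the exact forms used by the authors.

```python
import numpy as np

def solar_geometry(day_of_year, lat_deg, solar_hour):
    """Textbook clear-sky solar geometry (Cooper declination, polynomial EoT)."""
    n = day_of_year
    decl = np.radians(23.45) * np.sin(2 * np.pi * (284 + n) / 365)   # declination
    b = 2 * np.pi * (n - 81) / 364
    eot_min = 9.87 * np.sin(2 * b) - 7.53 * np.cos(b) - 1.5 * np.sin(b)  # equation of time
    omega = np.radians(15.0 * (solar_hour - 12.0))                   # hour angle
    lat = np.radians(lat_deg)
    cos_zenith = (np.sin(lat) * np.sin(decl)
                  + np.cos(lat) * np.cos(decl) * np.cos(omega))
    # Extraterrestrial irradiance on a horizontal plane, with the Sun-Earth
    # distance correction (W/m^2); ground values additionally need an
    # atmospheric model or the empirical calibration described above.
    g0 = 1367.0 * (1 + 0.033 * np.cos(2 * np.pi * n / 365)) * max(cos_zenith, 0.0)
    return eot_min, np.degrees(decl), g0

print(solar_geometry(day_of_year=172, lat_deg=-23.4, solar_hour=12.0))
```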
Emotional Reasoning and Parent-Based Reasoning in Normal Children
ERIC Educational Resources Information Center
Morren, Mattijn; Muris, Peter; Kindt, Merel
2004-01-01
A previous study by Muris, Merckelbach, and Van Spauwen [1] demonstrated that children display emotional reasoning irrespective of their anxiety levels. That is, when estimating whether a situation is dangerous, children not only rely on objective danger information but also on their "own" anxiety-response. The present study further examined…
NASA Technical Reports Server (NTRS)
Bugbee, B.; Monje, O.
1992-01-01
Plant scientists have sought to maximize the yield of food crops since the beginning of agriculture. There are numerous reports of record food and biomass yields (per unit area) in all major crop plants, but many of the record yield reports are in error because they exceed the maximal theoretical rates of the component processes. In this article, we review the component processes that govern yield limits and describe how each process can be individually measured. This procedure has helped us validate theoretical estimates and determine what factors limit yields in optimal environments.
Pazesh, Samaneh; Lazorova, Lucia; Berggren, Jonas; Alderborn, Göran; Gråsjö, Johan
2016-09-10
The main purpose of the study was to evaluate various pre-processing and quantification approaches for Raman spectra in order to quantify low levels of amorphous content in milled lactose powder. To improve the quantification analysis, several spectral pre-processing methods were used to adjust for background effects. The effects of spectral noise on the variation of the determined amorphous content were also investigated theoretically by propagation-of-error analysis and were compared to the experimentally obtained values. Additionally, the applicability of a calibration method with crystalline or amorphous domains for the estimation of amorphous content in milled lactose powder was discussed. Two straight-baseline pre-processing methods gave the best and almost equal performance. Among the succeeding quantification methods, PCA performed best, although classical least squares analysis (CLS) gave comparable results, while peak parameter analysis proved inferior. The standard deviations of the experimentally determined percentage amorphous content were 0.94% and 0.25% for pure crystalline and pure amorphous samples, respectively, which was very close to the standard deviation values from propagated spectral noise. The reasonable conformity between the milled-sample spectra and synthesized spectra indicated the representativeness of physical mixtures with crystalline or amorphous domains in the estimation of apparent amorphous content in milled lactose. Copyright © 2016 The Author(s). Published by Elsevier B.V. All rights reserved.
DETECTION OF A STELLAR STREAM BEHIND OPEN CLUSTER NGC 188: ANOTHER PART OF THE MONOCEROS STREAM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casetti-Dinescu, Dana I.; Girard, Terrence M.; Van Altena, William F.
2010-05-15
We present results from a WIYN/Orthogonal Parallel Transfer Imaging Camera photometric and astrometric survey of the field of the open cluster NGC 188 ((l, b) = (122.°8, 22.°5)). We combine these results with the proper-motion and photometry catalog of Platais et al. and demonstrate the existence of a stellar overdensity in the background of NGC 188. The theoretical isochrone fits to the color-magnitude diagram of the overdensity are consistent with an age between 6 and 10 Gyr and an intermediately metal-poor population ([Fe/H] = -0.5 to -1.0). The distance to the overdensity is estimated to be between 10.0 and 12.6 kpc. The proper motions indicate that the stellar population of the overdensity is kinematically cold. The distance estimate and the absolute proper motion of the overdensity agree reasonably well with the predictions of the Penarrubia et al. model of the formation of the Monoceros stream. Orbits for this material, constructed with plausible radial-velocity values, indicate that dynamically this material is unlikely to belong to the thick disk. Taken together, this evidence suggests that the newly found overdensity is part of the Monoceros stream.
Modeling Spatial Dependence of Rainfall Extremes Across Multiple Durations
NASA Astrophysics Data System (ADS)
Le, Phuong Dong; Leonard, Michael; Westra, Seth
2018-03-01
Determining the probability of a flood event in a catchment given that another flood has occurred in a nearby catchment is useful in the design of infrastructure such as road networks that have multiple river crossings. These conditional flood probabilities can be estimated by calculating conditional probabilities of extreme rainfall and then transforming rainfall to runoff through a hydrologic model. Each catchment's hydrological response times are unlikely to be the same, so in order to estimate these conditional probabilities one must consider the dependence of extreme rainfall both across space and across critical storm durations. To represent these types of dependence, this study proposes a new approach for combining extreme rainfall across different durations within a spatial extreme value model using max-stable process theory. This is achieved in a stepwise manner. The first step defines a set of common parameters for the marginal distributions across multiple durations. The parameters are then spatially interpolated to develop a spatial field. Storm-level dependence is represented through the max-stable process for rainfall extremes across different durations. The dependence model shows a reasonable fit between the observed pairwise extremal coefficients and the theoretical pairwise extremal coefficient function across all durations. The study demonstrates how the approach can be applied to develop conditional maps of the return period and return level across different durations.
Ultrasonically triggered ignition at liquid surfaces.
Simon, Lars Hendrik; Meyer, Lennart; Wilkens, Volker; Beyer, Michael
2015-01-01
Ultrasound is considered to be an ignition source according to international standards, which set a threshold value of 1 mW/mm² [1] that is based on theoretical estimations but lacks experimental verification. It is therefore assumed that this threshold includes a large safety margin. At the same time, ultrasound is used in a variety of industrial applications where it can come into contact with explosive atmospheres. However, until now, no explosion accidents have been reported in connection with ultrasound, so it has been unclear whether the current threshold value is reasonable. In this paper, it is shown that focused ultrasound coupled into a liquid can in fact ignite explosive atmospheres if a specific target positioned at the liquid's surface converts the acoustic energy into a hot spot. Based on ignition tests, conditions could be derived that are necessary for an ultrasonically triggered explosion. These conditions show that the current threshold value can be raised significantly. Copyright © 2014 Elsevier B.V. All rights reserved.
Pivel, María Alejandra Gómez; Dal Sasso Freitas, Carla Maria
2010-08-01
Numerical models that predict the fate of drilling discharges at sea constitute a valuable tool for both the oil industry and regulatory agencies. In order to provide reliable estimates, models must be validated through the comparison of predictions with field or laboratory observations. In this paper, we used the Offshore Operators Committee Model to simulate the discharges from two wells drilled at Campos Basin, offshore SE Brazil, and compared the results with field observations obtained 3 months after drilling. The comparison showed that the model provided reasonable predictions, considering that data about currents were reconstructed and theoretical data were used to characterize the classes of solids. The model proved to be a valuable tool to determine the degree of potential impact associated with drilling activities. However, since the accuracy of the model is directly dependent on the quality of input data, different possible scenarios should be considered when it is used for forecast modeling.
NASA Astrophysics Data System (ADS)
Jiménez, Pilar; Roux, María Victoria; Dávalos, Juan Z.; Temprado, Manuel; Ribeiro da Silva, Manuel A. V.; Ribeiro da Silva, Maria Das Dores M. C.; Amaral, Luísa M. P. F.; Cabildo, Pilar; Claramunt, Rosa M.; Mó, Otilia; Yáñez, Manuel; Elguero, José
The enthalpies of combustion, heat capacities, enthalpies of sublimation and enthalpies of formation of 2-methylbenzimidazole (2MeBIM) and 2-ethylbenzimidazole (2EtBIM) are reported and the results compared with those of benzimidazole itself (BIM). Theoretical estimates of the enthalpies of formation were obtained through the use of atom equivalent schemes. The necessary energies were obtained in single-point calculations at the B3LYP/6-311+G(d,p) level on B3LYP/6-31G* optimized geometries. The comparison of experimental and calculated values of benzenes, imidazoles and benzimidazoles bearing H (unsubstituted), methyl and ethyl groups shows remarkable homogeneity. Strict transferability of energetic group contributions does not hold, but by assuming it, or by adding an empirical interaction term, it is possible to generate an enormous collection of reasonably accurate data for different substituted heterocycles (pyrazole derivatives, pyridine derivatives, etc.) from the large amount of values available for substituted benzenes and those of the parent (pyrazole, pyridine) heterocycles.
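For readers unfamiliar with atom-equivalent schemes, the idea can be stated in one line; the form below is the generic scheme from the quantum-chemistry literature, with the epsilon_i standing for fitted per-atom parameters rather than the authors' actual values:

```latex
% Atom-equivalent estimate of an enthalpy of formation:
% E_elec is the single-point electronic energy (here B3LYP/6-311+G(d,p)),
% n_i counts atoms of type i, and the atom equivalents eps_i are fitted
% to reproduce known experimental enthalpies of formation.
\Delta_f H^{\circ}(\mathrm{M}) \approx E_{\mathrm{elec}}(\mathrm{M}) - \sum_i n_i\,\varepsilon_i
```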
Impact of Distance Determinations on Galactic Structure. I. Young and Intermediate-Age Tracers
NASA Astrophysics Data System (ADS)
Matsunaga, Noriyuki; Bono, Giuseppe; Chen, Xiaodian; de Grijs, Richard; Inno, Laura; Nishiyama, Shogo
2018-06-01
Here we discuss impacts of distance determinations on the Galactic disk traced by relatively young objects. The Galactic disk, ˜40 kpc in diameter, is a crossroads of studies on methods of measuring distances, interstellar extinction, the evolution of galaxies, and other subjects of interest in astronomy. A proper treatment of interstellar extinction is, for example, crucial for estimating distances to stars in the disk outside the small range of the solar neighborhood. We review the current status of relevant studies and discuss some new approaches to the extinction law. When the extinction law is reasonably constrained, distance indicators found in today's and future surveys reveal the stellar distribution and more throughout the Galactic disk. Among several useful distance indicators, the focus of this review is Cepheids and open clusters (especially contact binaries in clusters). These tracers are particularly useful for addressing the metallicity gradient of the Galactic disk, an important feature for which comparison between observations and theoretical models can reveal the evolution of the disk.
Enhanced air pollution via aerosol-boundary layer feedback in China.
Petäjä, T; Järvi, L; Kerminen, V-M; Ding, A J; Sun, J N; Nie, W; Kujansuu, J; Virkkula, A; Yang, X-Q; Fu, C B; Zilitinkevich, S; Kulmala, M
2016-01-12
Severe air pollution episodes have been frequent in China during recent years. While high emissions are the primary reason for increasing pollutant concentrations, the ultimate cause for the most severe pollution episodes has remained unclear. Here we show that a high concentration of particulate matter (PM) will enhance the stability of an urban boundary layer, which in turn decreases the boundary layer height and consequently causes further increases in PM concentrations. We estimate the strength of this positive feedback mechanism by combining a new theoretical framework with ambient observations. We show that the feedback remains moderate at fine-PM concentrations lower than about 200 μg m(-3), but that it becomes increasingly effective at higher PM loadings resulting from the combined effect of high surface PM emissions and massive secondary PM production within the boundary layer. Our analysis explains why air pollution episodes are particularly serious and severe in megacities and during days when synoptic weather conditions stay constant.
Statistical distributions of extreme dry spell in Peninsular Malaysia
NASA Astrophysics Data System (ADS)
Zin, Wan Zawiah Wan; Jemain, Abdul Aziz
2010-11-01
Statistical distributions of annual extreme (AE) series and partial duration (PD) series for dry-spell events are analyzed for a database of daily rainfall records of 50 rain-gauge stations in Peninsular Malaysia, with recording periods extending from 1975 to 2004. The three-parameter generalized extreme value (GEV) and generalized Pareto (GP) distributions are considered to model both series. In both cases, the parameters of these two distributions are fitted by means of the L-moments method, which provides robust parameter estimates. The goodness-of-fit (GOF) between empirical data and theoretical distributions is then evaluated by means of the L-moment ratio diagram and several goodness-of-fit tests for each of the 50 stations. It is found that for the majority of stations, the AE and PD series are well fitted by the GEV and GP models, respectively. Based on the models that have been identified, we can reasonably predict the risks associated with extreme dry spells for various return periods.
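As a rough illustration of the L-moments fitting step for the GEV, the sketch below computes sample L-moments from probability-weighted moments and converts them to GEV parameters with Hosking's classical approximation. The rainfall series is synthetic, and the formulas come from the standard L-moments literature, not from this paper.

```python
import numpy as np
from math import gamma, log

def sample_lmoments(x):
    """First three sample L-moments via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2  # location, scale, L-skewness (tau3)

def gev_from_lmoments(l1, l2, tau3):
    """Hosking's approximation for GEV parameters (shape k, scale a, location xi)."""
    c = 2.0 / (3.0 + tau3) - log(2) / log(3)
    k = 7.8590 * c + 2.9554 * c ** 2
    a = l2 * k / ((1 - 2.0 ** (-k)) * gamma(1 + k))
    xi = l1 - a * (1 - gamma(1 + k)) / k
    return k, a, xi

rng = np.random.default_rng(0)
annual_maxima = rng.gumbel(loc=50.0, scale=12.0, size=30)  # synthetic AE series
k, a, xi = gev_from_lmoments(*sample_lmoments(annual_maxima))
print(f"GEV fit: shape={k:.3f}, scale={a:.2f}, location={xi:.2f}")
# For a Gumbel parent the fitted shape k should be close to 0.
```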
Yuan, Tiezhu; Wang, Hongqiang; Cheng, Yongqiang; Qin, Yuliang
2017-01-01
Radar imaging based on electromagnetic vortex can achieve azimuth resolution without relative motion. The present paper investigates this imaging technique with the use of a single receiving antenna through theoretical analysis and experimental results. Compared with the use of multiple receiving antennas, the echoes from a single receiver cannot be used directly for image reconstruction by the Fourier method; the reason is revealed by using the point spread function. An additional phase, determined by the array parameters and the elevation of the targets, is therefore compensated for each mode before the imaging process. A proof-of-concept imaging system based on a circular phased array is created, and imaging experiments on corner-reflector targets are performed in an anechoic chamber. The azimuthal image is reconstructed by the use of Fourier transform and spectral estimation methods. The azimuth resolution of the two methods is analyzed and compared through experimental data. The experimental results verify the principle of azimuth resolution and the proposed phase compensation method. PMID:28335487
Yang, Szu-Chi; Lin, Huan-Chun; Liu, Tzu-Ming; Lu, Jen-Tang; Hung, Wan-Ting; Huang, Yu-Ru; Tsai, Yi-Chun; Kao, Chuan-Liang; Chen, Shih-Yuan; Sun, Chi-Kuang
2015-01-01
Viruses are known to resonate in the confined-acoustic dipolar mode with microwaves of the same frequency. However, this effect was not considered in previous virus-microwave interaction studies or in microwave-based virus epidemic prevention. Here we show that this structure-resonant energy transfer effect from microwaves to viruses can be efficient enough that airborne virus was inactivated with a microwave power density reasonable and safe for the open public. We demonstrate this effect by measuring the residual viral infectivity of influenza A virus after illumination with microwaves of different frequencies and powers. We also established a theoretical model to estimate the microwave power threshold for virus inactivation, and good agreement with experiments was obtained. Such structure-resonant energy transfer induced inactivation works mainly by physically fracturing the virus structure, which was confirmed by real-time reverse transcription polymerase chain reaction. These results provide a pathway toward establishing a new epidemic prevention strategy for airborne viruses in public spaces. PMID:26647655
Multiple ³H-oxytocin binding sites in rat myometrial plasma membranes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crankshaw, D.; Gaspar, V.; Pliska, V.
1990-01-01
The affinity spectrum method has been used to analyse binding isotherms for ³H-oxytocin to rat myometrial plasma membranes. Three populations of binding sites with dissociation constants (Kd) of 0.6-1.5 × 10⁻⁹, 0.4-1.0 × 10⁻⁷ and 7 × 10⁻⁶ mol/l were identified, and their existence was verified by cluster analysis based on similarities between Kd, binding capacity and Hill coefficient. When experimental values were compared to theoretical curves constructed using the estimated binding parameters, good fits were obtained. Binding parameters obtained by this method were not influenced by the presence of GTP gamma S (guanosine-5'-O-3-thiotriphosphate) in the incubation medium. The binding parameters agree reasonably well with those found in uterine cells; they support the existence of a medium-affinity site and may allow for an explanation of some of the discrepancies between binding and response in this system.
NASA Astrophysics Data System (ADS)
Wong, Meng Fei; Heng, Xiangxin; Zeng, Kaiyang
2008-10-01
Domain structures of [001]T and [011]T-cut Pb(Zn1/3Nb2/3)O3-(6%-7%)PbTiO3 (PZN-PT) single crystals are studied using the scanning electron acoustic microscope (SEAM) technique. The observed orientations of domain walls agree reasonably well with the trigonometric projection of rhombohedral and orthorhombic dipoles on the (001) and (011) surfaces, respectively. After mechanical loading with microindentation, domain switching is also observed to form a hyperbolic butterfly shape and to extend preferentially along four diagonal directions, i.e., ⟨110⟩ on the (001) surface and ⟨111¯⟩ on the (011) surface. The critical shear stress to cause domain switching for the PZN-PT crystal is estimated to be approximately 49 MPa for both {110} and {111¯} planes based on theoretical analysis. Overall, the SEAM technique has been successfully demonstrated to be valid for the observation of domain structures in single-crystal PZN-PTs.
Adsorption and Exchange Kinetics of Hydrophilic and Hydrophobic Phosphorus Ligands on Gold Surface
NASA Astrophysics Data System (ADS)
Zhuge, X. Q.; Bian, Z. C.; Luo, Z. H.; Mu, Y. Y.; Luo, K.
2017-02-01
The adsorption kinetics of a hydrophobic ligand (triphenylphosphine, PPh3) and a hydrophilic ligand (tris(hydroxymethyl)phosphine oxide, THPO) on the surface of a gold electrode were estimated by using the electrical double layer capacitance (EDLC). Results showed that the adsorption of both ligands included fast and slow processes, and that the fast process fits the first-order kinetic equation of the Langmuir adsorption isotherm. During the slow adsorption process, the surface coverage (θ) of PPh3 was higher than that of THPO, owing to the larger adsorption kinetic constant of PPh3, which implied that PPh3 could replace THPO on the gold electrode. Tracking the variation of the EDLC during the exchange process confirmed that PPh3 takes the place of THPO on the gold surface, which facilitates the preparation of Janus gold, and theoretical simulation explained the reason for the ligand exchange from an energetic perspective.
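As context for the first-order Langmuir kinetics mentioned above, here is a minimal curve-fitting sketch; the coverage data, time base, and parameter values are all invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir_first_order(t, theta_eq, k_obs):
    """First-order Langmuir adsorption kinetics: theta(t) = theta_eq * (1 - exp(-k*t))."""
    return theta_eq * (1.0 - np.exp(-k_obs * t))

t = np.linspace(0, 300, 31)                        # s, hypothetical sampling times
rng = np.random.default_rng(5)
theta_obs = langmuir_first_order(t, 0.85, 0.03) + rng.normal(0, 0.02, t.size)

(theta_eq, k_obs), _ = curve_fit(langmuir_first_order, t, theta_obs, p0=(0.5, 0.01))
print(f"theta_eq ~ {theta_eq:.2f}, k_obs ~ {k_obs:.3f} 1/s")
```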
Standard deviation and standard error of the mean.
Lee, Dong Kyu; In, Junyong; Lee, Sangseok
2015-06-01
In most clinical and experimental studies, the standard deviation (SD) and the estimated standard error of the mean (SEM) are used to present the characteristics of sample data and to explain statistical analysis results. However, some authors occasionally muddle the distinct usages of the SD and SEM in the medical literature. Because the processes of calculating the SD and SEM involve different statistical inferences, each has its own meaning. SD is the dispersion of data in a normal distribution; in other words, SD indicates how accurately the mean represents the sample data. The meaning of SEM, however, involves statistical inference based on the sampling distribution: SEM is the SD of the theoretical distribution of the sample means (the sampling distribution). While either SD or SEM can be applied to describe data and statistical results, one should be aware of reasonable methods with which to use SD and SEM. We aim to elucidate the distinctions between SD and SEM and to provide proper usage guidelines for both, which summarize data and describe statistical results.
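A minimal numerical illustration of the distinction, on synthetic data (ddof=1 gives the usual sample SD):

```python
import numpy as np

rng = np.random.default_rng(42)
sample = rng.normal(loc=120.0, scale=15.0, size=25)   # e.g., 25 systolic readings

sd = sample.std(ddof=1)             # dispersion of the observations themselves
sem = sd / np.sqrt(sample.size)     # SD of the sampling distribution of the mean

print(f"mean = {sample.mean():.2f}, SD = {sd:.2f}, SEM = {sem:.2f}")
# Quadrupling n roughly halves the SEM but leaves the SD essentially unchanged.
```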
Standard deviation and standard error of the mean
In, Junyong; Lee, Sangseok
2015-01-01
In most clinical and experimental studies, the standard deviation (SD) and the estimated standard error of the mean (SEM) are used to present the characteristics of sample data and to explain statistical analysis results. However, some authors occasionally muddle the distinct usages of the SD and SEM in the medical literature. Because the processes of calculating the SD and SEM involve different statistical inferences, each has its own meaning. SD is the dispersion of data in a normal distribution; in other words, SD indicates how accurately the mean represents the sample data. The meaning of SEM, however, involves statistical inference based on the sampling distribution: SEM is the SD of the theoretical distribution of the sample means (the sampling distribution). While either SD or SEM can be applied to describe data and statistical results, one should be aware of reasonable methods with which to use SD and SEM. We aim to elucidate the distinctions between SD and SEM and to provide proper usage guidelines for both, which summarize data and describe statistical results. PMID:26045923
Predicting Stability Constants for Uranyl Complexes Using Density Functional Theory
Vukovic, Sinisa; Hay, Benjamin P.; Bryantsev, Vyacheslav S.
2015-04-02
The ability to predict the equilibrium constants for the formation of 1:1 uranyl:ligand complexes (log K1 values) provides the essential foundation for the rational design of ligands with enhanced uranyl affinity and selectivity. We use density functional theory (B3LYP) and the IEFPCM continuum solvation model to compute aqueous stability constants for UO2(2+) complexes with 18 donor ligands. Theoretical calculations permit reasonably good estimates of relative binding strengths, while the absolute log K1 values are significantly overestimated. Accurate predictions of the absolute log K1 values (root mean square deviation from experiment < 1.0 for log K1 values ranging from 0 to 16.8) can be obtained by fitting the experimental data for two groups of mono and divalent negative oxygen donor ligands. The utility of the correlations is demonstrated for amidoxime and imide dioxime ligands, providing a useful means of screening for new ligands with strong chelating capability toward uranyl.
Zhi, Ruicong; Zhao, Lei; Xie, Nan; Wang, Houyin; Shi, Bolin; Shi, Jingye
2016-01-13
A framework for establishing a standard reference scale for texture is proposed, based on multivariate statistical analysis of instrumental measurements and sensory evaluation. Multivariate statistical analysis is conducted to rapidly select typical reference samples with the characteristics of universality, representativeness, stability, substitutability, and traceability. The reasonableness of the framework is verified by establishing a standard reference scale for the texture attribute of hardness with well-known Chinese foods. More than 100 food products in 16 categories were tested using instrumental measurement (the TPA test), and the results were analyzed with clustering analysis, principal component analysis, relative standard deviation, and analysis of variance. As a result, nine kinds of foods were chosen to construct the hardness standard reference scale. The results indicate that the regression between the estimated sensory value and the instrumentally measured value is significant (R² = 0.9765), which fits well with Stevens's theory. The research provides a reliable theoretical basis and a practical guide for establishing quantitative standard reference scales for food texture characteristics.
Experimental Clocking of Nanomagnets with Strain for Ultralow Power Boolean Logic.
D'Souza, Noel; Salehi Fashami, Mohammad; Bandyopadhyay, Supriyo; Atulasimha, Jayasimha
2016-02-10
Nanomagnetic implementations of Boolean logic have attracted attention because of their nonvolatility and the potential for unprecedented overall energy-efficiency. Unfortunately, the large dissipative losses that occur when nanomagnets are switched with a magnetic field or spin-transfer-torque severely compromise the energy-efficiency. Recently, there have been experimental reports of utilizing the Spin Hall effect for switching magnets, and theoretical proposals for strain-induced switching of single-domain magnetostrictive nanomagnets, that might reduce the dissipative losses significantly. Here, we experimentally demonstrate, for the first time, that strain-induced switching of single-domain magnetostrictive nanomagnets of lateral dimensions ∼200 nm fabricated on a piezoelectric substrate can implement a nanomagnetic Boolean NOT gate and steer bit information unidirectionally in dipole-coupled nanomagnet chains. On the basis of the experimental results with bulk PMN-PT substrates, we estimate that the energy dissipation for logic operations in a reasonably scaled system using thin films will be a mere ∼1 aJ/bit.
Yang, Szu-Chi; Lin, Huan-Chun; Liu, Tzu-Ming; Lu, Jen-Tang; Hung, Wan-Ting; Huang, Yu-Ru; Tsai, Yi-Chun; Kao, Chuan-Liang; Chen, Shih-Yuan; Sun, Chi-Kuang
2015-12-09
Viruses are known to resonate in the confined-acoustic dipolar mode with microwaves of the same frequency. However, this effect was not considered in previous virus-microwave interaction studies or in microwave-based virus epidemic prevention. Here we show that this structure-resonant energy transfer effect from microwaves to viruses can be efficient enough that airborne virus was inactivated with a microwave power density reasonable and safe for the open public. We demonstrate this effect by measuring the residual viral infectivity of influenza A virus after illumination with microwaves of different frequencies and powers. We also established a theoretical model to estimate the microwave power threshold for virus inactivation, and good agreement with experiments was obtained. Such structure-resonant energy transfer induced inactivation works mainly by physically fracturing the virus structure, which was confirmed by real-time reverse transcription polymerase chain reaction. These results provide a pathway toward establishing a new epidemic prevention strategy for airborne viruses in public spaces.
NASA Technical Reports Server (NTRS)
Gallo, Christopher A.; Agui, Juan H.; Creager, Colin M.; Oravec, Heather A.
2012-01-01
An Excavation System Model has been written to simulate the collection and transportation of regolith on the moon. The calculations in this model include an estimation of the forces on the digging tool as a result of excavation into the regolith. Verification testing has been performed, and the forces recorded from this testing were compared to the calculated theoretical data. The Northern Centre for Advanced Technology Inc. rovers were tested at the NASA Glenn Research Center Simulated Lunar Operations facility. This testing was in support of the In-Situ Resource Utilization program's Innovative Partnership Program. Testing occurred in soils developed at the Glenn Research Center, which are a mixture of different types of sands and whose soil properties have been well characterized. This testing is part of an ongoing correlation of actual field test data to the blade forces calculated by the Excavation System Model. The results from this series of tests compared reasonably well with the predicted values from the code.
Sabra, Karim G
2010-06-01
It has been demonstrated theoretically and experimentally that an estimate of the Green's function between two receivers can be obtained by cross-correlating acoustic (or elastic) ambient noise recorded at these two receivers. Coherent wavefronts emerge from the noise cross-correlation time function due to the accumulated contributions over time from noise sources whose propagation paths pass through both receivers. Previous theoretical studies of the performance of this passive imaging technique have assumed that no relative motion between noise sources and receivers occurs. In this article, the influence of noise source motion (e.g., aircraft or ship) on this passive imaging technique was investigated theoretically in free space, using a stationary phase approximation, for stationary receivers. The theoretical results were extended to more complex environments, in the high-frequency regime, using first-order expansions of the Green's function. Although source motion typically degrades the performance of wideband coherent processing schemes, such as time-delay beamforming, it was found that the Green's function estimated from ambient noise cross-correlations is not expected to be significantly affected by the Doppler effect, even for supersonic sources. Numerical Monte-Carlo simulations were conducted to confirm these theoretical predictions for both cases of subsonic and supersonic moving sources.
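A toy sketch of the cross-correlation idea, assuming a single broadband noise field recorded at two receivers with a fixed travel-time offset; real applications stack many noise windows and distributed sources, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur, delay_s = 1000, 600, 0.05          # sample rate (Hz), duration (s), offset (s)
src = rng.normal(size=fs * dur)             # broadband ambient noise
d = int(delay_s * fs)
rec1, rec2 = src[:-d], src[d:]              # the same wavefield, offset by 50 ms
m = len(rec1)

# Cross-correlate via FFT; the peak lag recovers the inter-receiver travel time,
# playing the role of the arrival in the Green's function estimate.
cc = np.fft.irfft(np.fft.rfft(rec1) * np.conj(np.fft.rfft(rec2)), n=m)
lag = int(np.argmax(cc))
lag = lag - m if lag > m // 2 else lag      # unwrap negative lags
print(f"recovered delay ~ {abs(lag) / fs * 1000:.1f} ms")
```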
NASA Astrophysics Data System (ADS)
Wang, Chao; Xiao, Jun; Luo, Xiaobing
2016-10-01
The neutron inelastic scattering cross section of 115In has been measured by the activation technique at neutron energies of 2.95, 3.94, and 5.24 MeV, with the neutron capture cross section of 197Au as an internal standard. The effects of multiple scattering and flux attenuation were corrected using the Monte Carlo code GEANT4. Based on the experimental values, the 115In neutron inelastic scattering cross-section data were theoretically calculated between 1 and 15 MeV with the TALYS software code; the theoretical results of this study are in reasonable agreement with the available experimental results.
True or false: do 5-year-olds understand belief?
Fabricius, William V; Boyer, Ty W; Weimer, Amy A; Carroll, Kathleen
2010-11-01
In 3 studies (N = 188) we tested the hypothesis that children use a perceptual access approach to reason about mental states before they understand beliefs. The perceptual access hypothesis predicts a U-shaped developmental pattern of performance in true belief tasks, in which 3-year-olds who reason about reality should succeed, 4- to 5-year-olds who use perceptual access reasoning should fail, and older children who use belief reasoning should succeed. The results of Study 1 revealed the predicted pattern in 2 different true belief tasks. The results of Study 2 disconfirmed several alternate explanations based on possible pragmatic and inhibitory demands of the true belief tasks. In Study 3, we compared 2 methods of classifying individuals according to which 1 of the 3 reasoning strategies (reality reasoning, perceptual access reasoning, belief reasoning) they used. The 2 methods gave converging results. Both methods indicated that the majority of children used the same approach across tasks and that it was not until after 6 years of age that most children reasoned about beliefs. We conclude that because most prior studies have failed to detect young children's use of perceptual access reasoning, they have overestimated their understanding of false beliefs. We outline several theoretical implications that follow from the perceptual access hypothesis.
Earthquake prediction analysis based on empirical seismic rate: the M8 algorithm
NASA Astrophysics Data System (ADS)
Molchan, G.; Romashkova, L.
2010-12-01
The quality of space-time earthquake prediction is usually characterized by a 2-D error diagram (n, τ), where n is the fraction of failures-to-predict and τ is the local rate of alarm averaged in space. The most reasonable averaging measure for analysis of a prediction strategy is the normalized rate of target events λ(dg) in a subarea dg. In that case the quantity H = 1 - (n + τ) determines the prediction capability of the strategy. The uncertainty of λ(dg) causes difficulties in estimating H and the statistical significance, α, of prediction results. We investigate this problem theoretically and show how the uncertainty of the measure can be taken into account in two situations, viz., the estimation of α and the construction of a confidence zone for the (n, τ)-parameters of the random strategies. We use our approach to analyse the results from prediction of M >= 8.0 events by the M8 method for the period 1985-2009 (the M8.0+ test). The model of λ(dg) based on the events Mw >= 5.5, 1977-2004, and the magnitude range of target events 8.0 <= M < 8.5 are considered as basic to this M8 analysis. We find the point and upper estimates of α and show that they are still unstable because the number of target events in the experiment is small. However, our results argue in favour of non-triviality of the M8 prediction algorithm.
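For concreteness, the prediction-capability score defined above is a one-line computation; the numbers below are illustrative only, not taken from the M8.0+ test.

```python
def prediction_capability(n_miss, tau_alarm):
    """H = 1 - (n + tau); H = 1 for a perfect strategy, H = 0 for a trivial one."""
    return 1.0 - (n_miss + tau_alarm)

# e.g., missing 25% of target events while alarms cover 30% of space-time:
print(prediction_capability(0.25, 0.30))  # 0.45
```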
NASA Astrophysics Data System (ADS)
Vickers, H.; Baddeley, L.
2011-11-01
RF heating of the F region plasma at high latitudes has long been known to produce electron temperature increases that can vary from tens to hundreds of percent above the background, unperturbed level. In contrast, artificial ionospheric modification experiments conducted using the Space Plasma Exploration by Active Radar (SPEAR) heating facility on Svalbard have often failed to produce obvious enhancements in the electron temperatures when measured using the European Incoherent Scatter Svalbard radar (ESR), colocated with the heater. Contamination of the ESR ion line spectra by the zero-frequency purely growing mode (PGM) feature is known to persist at varying amplitudes throughout SPEAR heating, and such spectral features can lead to significant temperature underestimations when the incoherent scatter spectra are analyzed using conventional methods. In this study, we present the first results of applying a recently developed technique for correcting PGM-contaminated spectra to SPEAR-enhanced ESR spectra, deriving an alternative estimate of the SPEAR-heated electron temperature. We discuss how the effectiveness of the spectrum corrections can be affected by the data variance, estimated over the integration period. The subsequent electron temperatures, inferred from corrected spectra, range from a few tens to a few hundred Kelvin above the average background temperature. These temperatures are found to be in reasonable agreement with the theoretical “enhanced” temperature, calculated for the peak of the stationary temperature perturbation profile, when realistic absorption effects are accounted for.
Earthquake Loss Scenarios: Warnings about the Extent of Disasters
NASA Astrophysics Data System (ADS)
Wyss, M.; Tolis, S.; Rosset, P.
2016-12-01
It is imperative that losses expected due to future earthquakes be estimated. Officials and the public need to be aware of what disaster is likely in store for them in order to reduce the fatalities and efficiently help the injured. Scenarios for earthquake parameters can be constructed to a reasonable accuracy in highly active earthquake belts, based on knowledge of seismotectonics and history. Because of the inherent uncertainties of loss estimates, however, it would be desirable that more than one group calculate an estimate for the same area. By discussing these estimates, one may find a consensus on the range of the potential disasters and persuade officials and residents of the reality of the earthquake threat. To model a scenario and estimate earthquake losses requires data sets that are sufficiently accurate on the number of people present, the built environment, and, if possible, the transmission of seismic waves. As examples we use loss estimates for possible repeats of historic earthquakes in Greece that occurred between -464 and 700. We model future large Greek earthquakes as having M6.8 and rupture lengths of 60 km. In four locations where historic earthquakes with serious losses have occurred, we estimate that 1,000 to 1,500 people might perish, with roughly four times as many injured. Defining the area of influence of these earthquakes as that with shaking intensities greater than or equal to V, we estimate that 1.0 to 2.2 million people in about 2,000 settlements may be affected. We calibrate the QLARM tool for calculating intensities and losses in Greece using the M6, 1999 Athens earthquake and matching the isoseismal information for six earthquakes which occurred in Greece during the last 140 years. Comparing fatality numbers that would occur theoretically today with the numbers reported, and correcting for the increase in population, we estimate that the improvement of the building stock has reduced the mortality and injury rates in Greek earthquakes by average factors of 3.0 and 1.9, respectively. In addition, it would be desirable to estimate the expected monetary losses by adding a data layer for the values of the various building types present.
A Method to Estimate the Masses of Asymptotic Giant Branch Variable Stars
NASA Astrophysics Data System (ADS)
Takeuti, Mine; Nakagawa, Akiharu; Kurayama, Tomoharu; Honma, Mareki
2013-06-01
AGB variable stars are at the transient phase between low and high mass-loss rates; estimating the masses of these stars is necessary to study the evolutionary and mass-loss processes during the AGB stage. We applied the pulsation constant theoretically derived by Xiong and Deng (2007, MNRAS, 378, 1270) to 15 galactic AGB stars in order to estimate their masses. We found that the pulsation constant is effective for estimating the mass of a star pulsating in two different pulsation modes, such as S Crt and RX Boo, providing mass estimates comparable to theoretical results of AGB star evolution. We also extended the use of the pulsation constant to single-mode variables, and analyzed the properties of AGB stars related to their masses.
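A back-of-envelope sketch of how a pulsation constant converts an observed period into a mass, using the standard period-density relation Q = P·sqrt(M/R³) in solar units; the Q value and stellar parameters below are placeholders, not the values of Xiong and Deng.

```python
def mass_from_pulsation(period_days, radius_rsun, q_days):
    """Invert Q = P * sqrt(M / R^3) (solar units; P and Q in days) for the mass M."""
    return (q_days / period_days) ** 2 * radius_rsun ** 3

# Hypothetical AGB variable: P = 300 d, R = 250 R_sun, Q = 0.09 d (placeholder values)
print(f"M ~ {mass_from_pulsation(300.0, 250.0, 0.09):.2f} M_sun")
```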
The Effect of Major Organizational Policy on Employee Attitudes Toward Graduate Degrees
2006-03-01
on the type of intention being assessed - measure of intention and measure of estimate (Fishbein & Ajzen, 1975). The former is used to predict...motivated to pursue graduate degrees. Therefore, the Model of Reasoned Action's measurement of estimate for goal achievement (Fishbein & Ajzen, 1975)...Five Years The measurement of intention from the Model of Reasoned Action for predicting the performance of a behavior (Fishbein & Ajzen, 1975) was
The stratosphere: Present and future
NASA Technical Reports Server (NTRS)
Hudson, R. D. (Editor); Reed, E. I. (Editor)
1979-01-01
The present status of stratospheric science is discussed. The three basic elements of stratospheric science (laboratory measurements, atmospheric observations, and theoretical studies) are presented, along with an attempt to predict, with reasonable confidence, the effect on ozone of particular anthropogenic sources of pollution.
NASA Astrophysics Data System (ADS)
Jinyan, Liu
2014-03-01
The Institute of Theoretical Physics (ITP), Chinese Academy of Sciences (CAS), founded in June 1978, is a specialized institute studying major issues in the fundamental research of theoretical physics. ITP has played an important role in the development of theoretical physics in China, especially in organizing and undertaking major national projects, expanding international exchanges and cooperation, and nurturing advanced researchers. My presentation will examine the reasons why ITP was founded in 1978 and why Peng Huanwu and Zhou Guangzhao, two prominent Chinese theorists, were chosen as the first and second directors of ITP. Moreover, I will summarize ITP's scientific activities and achievements over the past 35 years. Last but not least, I will compare ITP with university physics departments and explore its unique characteristics (both strengths and weaknesses).
Bai, Feng-Yang; Ma, Yuan; Lv, Shuang; Pan, Xiu-Mei; Jia, Xiu-Juan
2017-01-01
In this study, the mechanistic and kinetic analysis for reactions of CF3OCH(CF3)2 and CF3OCF2CF2H with OH radicals and Cl atoms has been performed at the CCSD(T)//B3LYP/6-311++G(d,p) level. Kinetic isotope effects for the reactions CF3OCH(CF3)2/CF3OCD(CF3)2 and CF3OCF2CF2H/CF3OCF2CF2D with OH and Cl were estimated so as to provide a theoretical reference for future laboratory investigation. All rate constants, computed by canonical variational transition state theory (CVT) with the small-curvature tunneling correction (SCT), are in reasonable agreement with the limited experimental data. Standard enthalpies of formation for the species were also calculated. The atmospheric lifetimes and global warming potentials (GWPs) of the reaction species were estimated; the large lifetimes and GWPs show that their environmental impact cannot be ignored. Organic nitrates can be produced by the further oxidation of CF3OC(•)(CF3)2 and CF3OCF2CF2• in the presence of O2 and NO. The subsequent decomposition pathways of CF3OC(O•)(CF3)2 and CF3OCF2CF2O• radicals were studied in detail. The derived Arrhenius expressions for the rate coefficients over 230-350 K are: kT(1) = 5.00 × 10⁻²⁴ T^3.57 exp(−849.73/T), kT(2) = 1.79 × 10⁻²⁴ T^4.84 exp(−4262.65/T), kT(3) = 1.94 × 10⁻²⁴ T^4.18 exp(−884.26/T), and kT(4) = 9.44 × 10⁻²⁸ T^5.25 exp(−913.45/T) cm³ molecule⁻¹ s⁻¹. PMID:28067283
NASA Astrophysics Data System (ADS)
Bai, Feng-Yang; Ma, Yuan; Lv, Shuang; Pan, Xiu-Mei; Jia, Xiu-Juan
2017-01-01
In this study, the mechanistic and kinetic analysis for reactions of CF3OCH(CF3)2 and CF3OCF2CF2H with OH radicals and Cl atoms has been performed at the CCSD(T)//B3LYP/6-311++G(d,p) level. Kinetic isotope effects for the reactions CF3OCH(CF3)2/CF3OCD(CF3)2 and CF3OCF2CF2H/CF3OCF2CF2D with OH and Cl were estimated so as to provide a theoretical reference for future laboratory investigation. All rate constants, computed by canonical variational transition state theory (CVT) with the small-curvature tunneling correction (SCT), are in reasonable agreement with the limited experimental data. Standard enthalpies of formation for the species were also calculated. The atmospheric lifetimes and global warming potentials (GWPs) of the reaction species were estimated; the large lifetimes and GWPs show that their environmental impact cannot be ignored. Organic nitrates can be produced by the further oxidation of CF3OC(•)(CF3)2 and CF3OCF2CF2• in the presence of O2 and NO. The subsequent decomposition pathways of CF3OC(O•)(CF3)2 and CF3OCF2CF2O• radicals were studied in detail. The derived Arrhenius expressions for the rate coefficients over 230-350 K are: kT(1) = 5.00 × 10⁻²⁴ T^3.57 exp(−849.73/T), kT(2) = 1.79 × 10⁻²⁴ T^4.84 exp(−4262.65/T), kT(3) = 1.94 × 10⁻²⁴ T^4.18 exp(−884.26/T), and kT(4) = 9.44 × 10⁻²⁸ T^5.25 exp(−913.45/T) cm³ molecule⁻¹ s⁻¹.
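The fitted expressions above are modified Arrhenius forms, k(T) = A·T^n·exp(−B/T), so evaluating them at any temperature in the fitted range is mechanical; the sketch below plugs the abstract's own coefficients into that form.

```python
import math

def k_mod_arrhenius(A, n, B, T):
    """Modified Arrhenius form k(T) = A * T**n * exp(-B/T) (cm^3 molecule^-1 s^-1)."""
    return A * T ** n * math.exp(-B / T)

# (A, n, B) triples taken directly from the fitted expressions; valid over 230-350 K.
fits = {
    "kT(1)": (5.00e-24, 3.57, 849.73),
    "kT(2)": (1.79e-24, 4.84, 4262.65),
    "kT(3)": (1.94e-24, 4.18, 884.26),
    "kT(4)": (9.44e-28, 5.25, 913.45),
}
for name, (A, n, B) in fits.items():
    print(f"{name} at 298 K: {k_mod_arrhenius(A, n, B, 298.0):.3e}")
```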
NASA Astrophysics Data System (ADS)
Hyer, E. J.; Reid, J. S.; Schmidt, C. C.; Giglio, L.; Prins, E.
2009-12-01
The diurnal cycle of fire activity is crucial for accurate simulation of atmospheric effects of fire emissions, especially at finer spatial and temporal scales. Estimating diurnal variability in emissions is also a critical problem for construction of emissions estimates from multiple sensors with variable coverage patterns. An optimal diurnal emissions estimate will use as much information as possible from satellite fire observations, compensate for known biases in those observations, and use detailed theoretical models of the diurnal cycle to fill in missing information. As part of ongoing improvements to the Fire Location and Monitoring of Burning Emissions (FLAMBE) fire monitoring system, we evaluated several different methods of integrating observations with different temporal sampling. We used geostationary fire detections from WF_ABBA, fire detection data from MODIS, empirical diurnal cycles from TRMM, and simple theoretical diurnal curves based on surface heating. Our experiments integrated these data in different combinations to estimate the diurnal cycles of emissions for each location and time. Hourly emissions estimates derived using these methods were tested using an aerosol transport model. We present results of this comparison, and discuss the implications of our results for the broader problem of multi-sensor data fusion in fire emissions modeling.
Diagnosing and dealing with multicollinearity.
Schroeder, M A
1990-04-01
The purpose of this article was to increase nurse researchers' awareness of the effects of collinear data in developing theoretical models for nursing practice. Collinear data distort the true value of the estimates generated from ordinary least-squares analysis. Theoretical models developed to provide the underpinnings of nursing practice need not be abandoned, however, because they fail to produce consistent estimates over repeated applications. It is also important to realize that multicollinearity is a data problem, not a problem associated with misspecification of a theoretical model. An investigator must first be aware of the problem; then it is possible to develop an educated solution based on the degree of multicollinearity, theoretical considerations, and the sources of error associated with alternative, biased least-squares regression techniques. Decisions based on theoretical and statistical considerations will further the development of theory-based nursing practice.
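One standard diagnostic in this spirit is the variance inflation factor, VIF_j = 1/(1 − R_j²), where R_j² comes from regressing predictor j on the remaining predictors; a minimal sketch on synthetic data follows (VIF is not named in the abstract, and the rule-of-thumb threshold of about 10 is conventional).

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of the predictor matrix X."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = x1 + 0.05 * rng.normal(size=200)   # nearly collinear with x1
x3 = rng.normal(size=200)
print([f"{v:.1f}" for v in vif(np.column_stack([x1, x2, x3]))])
# x1 and x2 show huge VIFs (far above 10); x3 stays near 1.
```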
[Theoretical model study of the application risk of high-risk medical equipment].
Shang, Changhao; Yang, Fenghui
2014-11-01
This research establishes a theoretical model for monitoring the risk of high-risk medical equipment at the application site. The application site is regarded as a system containing several sub-systems, and every sub-system consists of several risk-estimating indicators. After each indicator is quantized, the quantized values are multiplied by their corresponding weights and the products are summed, giving the risk-estimating value of each sub-system. Following the same calculation, the risk-estimating values of the sub-systems are multiplied by their corresponding weights and summed; this cumulative sum is the status indicator of the high-risk medical equipment at the application site, and it reflects the equipment's application risk. The resulting model can monitor the application risk of high-risk medical equipment at the application site dynamically and specifically.
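A minimal sketch of the two-level weighted-sum aggregation described above; the sub-system names, quantized indicator scores, and weights are all invented for illustration.

```python
# Two-level weighted sum: indicators -> sub-system scores -> site status indicator.
# All names, scores (0-1), and weights below are hypothetical.
subsystems = {
    "maintenance": {"weight": 0.4, "indicators": [(0.8, 0.5), (0.6, 0.5)]},
    "operators":   {"weight": 0.3, "indicators": [(0.9, 0.7), (0.4, 0.3)]},
    "environment": {"weight": 0.3, "indicators": [(0.7, 1.0)]},
}

status = 0.0
for name, sub in subsystems.items():
    # (quantized indicator value, indicator weight) pairs
    sub_score = sum(score * w for score, w in sub["indicators"])
    status += sub["weight"] * sub_score
print(f"site status indicator = {status:.3f}")
```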
Recurrence plots of discrete-time Gaussian stochastic processes
NASA Astrophysics Data System (ADS)
Ramdani, Sofiane; Bouchara, Frédéric; Lagarde, Julien; Lesne, Annick
2016-09-01
We investigate the statistical properties of recurrence plots (RPs) of data generated by discrete-time stationary Gaussian random processes. We analytically derive the theoretical values of the probabilities of occurrence of recurrence points and of consecutive recurrence points forming diagonals in the RP, with an embedding dimension equal to 1. These results allow us to obtain theoretical values of three measures: (i) the recurrence rate (REC), (ii) the percent determinism (DET), and (iii) an RP-based estimation of the ε-entropy κ(ε) in the sense of correlation entropy. We apply these results to two Gaussian processes, namely first-order autoregressive processes and fractional Gaussian noise. For these processes, we simulate a number of realizations and compare the RP-based estimations of the three selected measures to their theoretical values. These comparisons provide useful information on the quality of the estimations, such as the minimum required data length and the threshold radius used to construct the RP.
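For readers who want to experiment, here is a small sketch of the first two RP measures for embedding dimension 1; the AR(1) test signal, threshold radius, and minimum diagonal length are arbitrary choices, and the line of identity is included for simplicity.

```python
import numpy as np

def rqa_rec_det(x, eps, lmin=2):
    """Recurrence rate and percent determinism of a 1-D series (embedding dim 1)."""
    x = np.asarray(x, dtype=float)
    R = (np.abs(x[:, None] - x[None, :]) <= eps).astype(int)
    n = len(x)
    rec = R.sum() / n ** 2
    det_points = 0
    for k in range(-(n - 1), n):                 # every diagonal (LOI included)
        diag = np.diagonal(R, offset=k)
        padded = np.concatenate([[0], diag, [0]])
        starts = np.flatnonzero(np.diff(padded) == 1)
        ends = np.flatnonzero(np.diff(padded) == -1)
        det_points += sum(e - s for s, e in zip(starts, ends) if e - s >= lmin)
    return rec, det_points / R.sum()

rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(1, 500):                          # AR(1) process with phi = 0.8
    x[t] = 0.8 * x[t - 1] + rng.normal()
print(rqa_rec_det(x, eps=0.5 * x.std()))
```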
Principles of reasoning in historical epidemiology.
Tulodziecki, Dana
2012-10-01
The case of John Snow has long been important to epidemiologists and public health officials. However, despite the fact that there have been many discussions about the various aspects of Snow's case, there has been virtually no discussion about what guided Snow's reasoning in his coming to believe his various conclusions about cholera. Here, I want to take up this question in some detail and show that there are a number of specific principles of reasoning that played a crucial role for Snow. Moreover, these principles were epistemologically important to Snow, a fact about which Snow is explicit in many places. An analysis of Snow's case suggests that, because of the epistemic role such principles of reasoning can play, health care practitioners ought to understand their practices to be theoretically informed in these ways, and not just data driven. © 2012 Blackwell Publishing Ltd.
Theoretical and Experimental Estimations of Volumetric Inductive Phase Shift in Breast Cancer Tissue
NASA Astrophysics Data System (ADS)
González, C. A.; Lozano, L. M.; Uscanga, M. C.; Silva, J. G.; Polo, S. M.
2013-04-01
Impedance measurements based on magnetic induction for breast cancer detection have been proposed in some studies. This study evaluates, theoretically and experimentally, the use of a non-invasive technique based on magnetic induction for the detection of patho-physiological conditions in breast cancer tissue associated with its volumetric electrical conductivity changes, through inductive phase shift measurements. An induction coils-breast 3D pixel model was designed and tested. The model involves two circular coils coaxially centered and a human breast volume centrally placed with respect to the coils. A time-harmonic numerical simulation study addressed the effects of frequency-dependent electrical properties of tumoral tissue on the volumetric inductive phase shift of the breast model, measured with the circular coils as inductor and sensor elements. Experimentally, five female volunteer patients with infiltrating ductal carcinoma, previously diagnosed by the radiology and oncology departments of the Specialty Clinic for Women of the Mexican Army, were measured with an experimental inductive spectrometer and an ergonomic inductor-sensor coil designed to estimate the volumetric inductive phase shift in human breast tissue. Theoretical and experimental inductive phase shift estimations were developed at four frequencies: 0.01, 0.1, 1 and 10 MHz. The theoretical estimations were qualitatively in agreement with the experimental findings. Important increments in volumetric inductive phase shift measurements were evident at 0.01 MHz in both theoretical and experimental observations. The results suggest that the tested technique has the potential to detect pathological conditions in breast tissue associated with cancer by non-invasive monitoring. Further complementary studies are warranted to confirm the observations.
Poisson sampling - The adjusted and unadjusted estimator revisited
Michael S. Williams; Hans T. Schreuder; Gerardo H. Terrazas
1998-01-01
The prevailing assumption, that for Poisson sampling the adjusted estimator Ŷa is always substantially more efficient than the unadjusted estimator Ŷu, is shown to be incorrect. Some well known theoretical results are applicable since Ŷa is a ratio-of-means estimator and Ŷu a simple unbiased estimator...
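A simulation sketch contrasting the two estimators under one common reading: Ŷu as the Horvitz-Thompson total and Ŷa as the same total rescaled by expected over realized sample size. The population and design are synthetic, and this particular rescaling is an assumption about which adjustment the paper means.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 1000
y = rng.lognormal(mean=3.0, sigma=0.6, size=N)       # synthetic unit values
x = y * np.exp(rng.normal(0.0, 0.4, size=N))         # auxiliary size variable
pi = np.clip(x / x.sum() * 80.0, 1e-4, 1.0)          # inclusion probs, E[n] ~ 80
Y_true, n_expected = y.sum(), pi.sum()

est_u, est_a = [], []
for _ in range(5000):
    s = rng.random(N) < pi                           # draw one Poisson sample
    if not s.any():
        continue
    ht = np.sum(y[s] / pi[s])                        # unadjusted (Horvitz-Thompson)
    est_u.append(ht)
    est_a.append(ht * n_expected / s.sum())          # adjusted by E[n] / realized n
print(f"true total {Y_true:.0f}")
print(f"unadjusted: mean {np.mean(est_u):.0f}, sd {np.std(est_u):.0f}")
print(f"adjusted:   mean {np.mean(est_a):.0f}, sd {np.std(est_a):.0f}")
```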
Cutler, Timothy D; Wang, Chong; Hoff, Steven J; Kittawornrat, Apisit; Zimmerman, Jeffrey J
2011-08-05
The median infectious dose (ID(50)) of porcine reproductive and respiratory syndrome (PRRS) virus isolate MN-184 was determined for aerosol exposure. In 7 replicates, 3-week-old pigs (n=58) respired 10 l of airborne PRRS virus from a dynamic aerosol toroid (DAT) maintained at -4°C. Thereafter, pigs were housed in isolation and monitored for evidence of infection. Infection occurred at virus concentrations too low to quantify by microinfectivity assays. Therefore, exposure dose was determined using two indirect methods ("calculated" and "theoretical"). "Calculated" virus dose was derived from the concentration of rhodamine B monitored over the exposure sequence. "Theoretical" virus dose was based on the continuous stirred-tank reactor model. The ID(50) estimate was modeled on the proportion of pigs that became infected using the probit and logit link functions for both "calculated" and "theoretical" exposure doses. Based on "calculated" doses, the probit and logit ID(50) estimates were 1 × 10(-0.13)TCID(50) and 1 × 10(-0.14)TCID(50), respectively. Based on "theoretical" doses, the probit and logit ID(50) were 1 × 10(0.26)TCID(50) and 1 × 10(0.24)TCID(50), respectively. For each point estimate, the 95% confidence interval included the other three point estimates. The results indicated that MN-184 was far more infectious than PRRS virus isolate VR-2332, the only other PRRS virus isolate for which ID(50) has been estimated for airborne exposure. Since aerosol ID(50) estimates are available for only these two isolates, it is uncertain whether one or both of these isolates represent the normal range of PRRS virus infectivity by this route. Copyright © 2011 Elsevier B.V. All rights reserved.
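A sketch of the dose-response step with a logit link, using statsmodels; the dose-response data below are invented, not the study's. On the log10-dose scale the ID50 is the dose at which the fitted infection probability is 0.5, i.e. −intercept/slope.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical exposure doses (log10 TCID50) and per-animal infection outcomes
log_dose = np.repeat([-1.0, -0.5, 0.0, 0.5, 1.0], 8)
rng = np.random.default_rng(3)
p_true = 1 / (1 + np.exp(-2.0 * (log_dose - 0.1)))   # "true" dose-response curve
infected = (rng.random(log_dose.size) < p_true).astype(float)

X = sm.add_constant(log_dose)
fit = sm.GLM(infected, X, family=sm.families.Binomial()).fit()  # logit link default
b0, b1 = fit.params
print(f"ID50 ~ 10^{-b0 / b1:.2f} TCID50")
```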
The Construction of Moral Rationality.
ERIC Educational Resources Information Center
Moshman, D.
1995-01-01
Offers a theoretical account of moral rationality within a rational constructivist paradigm examining the nature and relationship of rationality and reasoning. Suggests progressive changes through developmental levels of moral rationality. Proposes a developmental moral epistemology that accommodates moral pluralism to a greater degree than does…
CTTRANSIT Operates New England's First Fuel Cell Hybrid Bus
DOT National Transportation Integrated Search
2018-02-01
The purpose of the Impact Assessment Plan is to take the results of the test track or field tests of the prototype, make reasonable extrapolations of those results to a theoretical full scale implementation, and answer the following 7 questions relat...
NASA Astrophysics Data System (ADS)
Athy, Jeremy; Friedrich, Jeff; Delany, Eileen
2008-05-01
Egon Brunswik (1903-1955) first made an interesting distinction between perception and explicit reasoning, arguing that perception included quick estimates of an object’s size, nearly always resulting in good approximations in uncertain environments, whereas explicit reasoning, while better at achieving exact estimates, could often fail by wide margins. An experiment conducted by Brunswik to investigate these ideas was never published, and the only available information is a figure of the results presented in a posthumous book in 1956. We replicated and extended his study to gain insight into the procedures Brunswik used in obtaining his results. Explicit reasoning resulted in fewer errors, yet more extreme ones, than perception. Brunswik’s graphical analysis of the results led to different conclusions, however, than did a modern statistically based analysis.
Self-estimates of intelligence: a study in two African countries.
Furnham, Adrian; Callahan, Ines; Akande, Debo
2004-05-01
Black and White South Africans (n = 181) and Nigerians (n = 135) completed a questionnaire concerning the estimations of their own and their relatives' (father, mother, sister, brother) multiple intelligences, as well as beliefs about the IQ concept. In contrast to previous results (A. Furnham, 2001), there were few gender differences in self-estimates. In a comparison of Black and White South Africans, it was clear that Whites gave higher estimates for self, parents, and brothers. However, overall IQ estimates for self and all relatives hovered around the mean of 100. When Black South Africans and Nigerians were compared, there were both gender and nationality differences in the self-estimates, with men giving higher self-estimates than women and Nigerians giving higher self-estimates than South Africans. There were also gender and nationality differences in the answers to questions about IQ. The authors discuss possible reasons for the relatively few gender differences in this study compared with other studies, as well as possible reasons for the cross-cultural difference.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devaraj, Arun; Prabhakaran, Ramprashad; Joshi, Vineet V.
2016-04-12
The purpose of this document is to provide a theoretical framework for (1) estimating the uranium carbide (UC) volume fraction in a final alloy of uranium with 10 weight percent molybdenum (U-10Mo) as a function of the final alloy carbon concentration, and (2) estimating the effective 235U enrichment in the U-10Mo matrix after accounting for the loss of 235U in forming UC. This report will also serve as a theoretical baseline for the effective density of as-cast low-enriched U-10Mo alloy, and thus as the baseline for quality control of the final alloy carbon content.
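A sketch of the mass-balance logic such a framework rests on, assuming all carbon is tied up as stoichiometric UC; the densities are approximate handbook values standing in for the report's actual baseline numbers.

```python
M_U, M_C = 238.03, 12.011            # g/mol
RHO_UC, RHO_U10MO = 13.63, 17.1      # g/cm^3, approximate handbook values

def uc_volume_fraction(w_carbon_ppm):
    """Volume fraction of UC in the alloy, assuming all carbon forms UC."""
    w_c = w_carbon_ppm * 1e-6                      # mass fraction of carbon
    w_uc = w_c * (M_U + M_C) / M_C                 # mass fraction of UC
    v_uc = w_uc / RHO_UC                           # cm^3 of UC per g of alloy
    v_matrix = (1.0 - w_uc) / RHO_U10MO            # cm^3 of matrix per g of alloy
    return v_uc / (v_uc + v_matrix)

for ppm in (100, 500, 1000):
    print(f"{ppm} ppm C -> UC volume fraction ~ {uc_volume_fraction(ppm):.4f}")
```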
New robust statistical procedures for the polytomous logistic regression models.
Castilla, Elena; Ghosh, Abhik; Martin, Nirian; Pardo, Leandro
2018-05-17
This article derives a new family of estimators, namely the minimum density power divergence estimators, as a robust generalization of the maximum likelihood estimator for the polytomous logistic regression model. Based on these estimators, a family of Wald-type test statistics for linear hypotheses is introduced. Robustness properties of both the proposed estimators and the test statistics are theoretically studied through the classical influence function analysis. Appropriate real-life examples are presented to justify the need for suitable robust statistical procedures in place of likelihood-based inference for the polytomous logistic regression model. The validity of the theoretical results established in the article is further confirmed empirically through suitable simulation studies. Finally, an approach for the data-driven selection of the robustness tuning parameter is proposed with empirical justifications. © 2018, The International Biometric Society.
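To make the estimator concrete: for a discrete response, the density power divergence objective with tuning parameter α > 0 averages Σ_j π_j^(1+α) − (1 + 1/α)·π_y^α over observations and recovers the MLE as α → 0. Below is a sketch for a three-class baseline-category logit on synthetic data; the optimizer and model layout are illustrative, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

def softmax_probs(beta, X, k):
    """Class probabilities for a baseline-category (class k-1) polytomous logit."""
    B = beta.reshape(k - 1, X.shape[1])
    eta = np.column_stack([X @ b for b in B] + [np.zeros(len(X))])
    eta -= eta.max(axis=1, keepdims=True)           # numerical stability
    p = np.exp(eta)
    return p / p.sum(axis=1, keepdims=True)

def dpd_loss(beta, X, y, k, alpha):
    """Density power divergence objective; alpha -> 0 approaches the MLE."""
    p = softmax_probs(beta, X, k)
    term1 = (p ** (1 + alpha)).sum(axis=1)
    term2 = (1 + 1 / alpha) * p[np.arange(len(y)), y] ** alpha
    return np.mean(term1 - term2)

rng = np.random.default_rng(0)
n, k = 300, 3
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([[1.0, 2.0], [-1.0, 1.0]])     # classes 0 and 1 vs baseline 2
p_true = softmax_probs(beta_true.ravel(), X, k)
y = (rng.random((n, 1)) > p_true.cumsum(axis=1)).sum(axis=1)
y[:10] = 2                                          # contaminate a few labels

res = minimize(dpd_loss, np.zeros((k - 1) * X.shape[1]),
               args=(X, y, k, 0.5), method="BFGS")
print(res.x.reshape(k - 1, X.shape[1]))             # robust coefficient estimates
```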
Exercise-based treatments for substance use disorders: evidence, theory, and practicality
Linke, Sarah E.; Ussher, Michael
2016-01-01
Background Epidemiological studies reveal that individuals who report risky substance use are generally less likely to meet physical activity guidelines (with the exception of certain population segments, such as adolescents and athletes). A growing body of evidence suggests that individuals with substance use disorders (SUDs) are interested in exercising and that they may derive benefits from regular exercise, in terms of both general health/fitness and SUD recovery. Objectives The aims of this paper were to: (i) summarize the research examining the effects of exercise-based treatments for SUDs; (ii) discuss the theoretical mechanisms and practical reasons for investigating this topic; (iii) identify the outstanding relevant research questions that warrant further inquiry; and (iv) describe potential implications for practice. Methods The following databases were searched for peer-reviewed original and review papers on the topic of substance use and exercise: PubMed Central, MEDLINE, EMBASE, PsycINFO, and CINAHL Plus. Reference lists of these publications were subsequently searched for any missed but relevant manuscripts. Identified papers were reviewed and summarized by both authors. Results The limited research conducted suggests that exercise may be an effective adjunctive treatment for SUDs. In contrast to the scarce intervention trials to date, a relative abundance of literature on the theoretical and practical reasons supporting the investigation of this topic has been published. Conclusions Definitive conclusions are difficult to draw due to diverse study protocols and low adherence to exercise programs, among other problems. Despite the currently limited and inconsistent evidence, numerous theoretical and practical reasons support exercise-based treatments for SUDs, including psychological, behavioral, neurobiological, nearly universal safety profile, and overall positive health effects. PMID:25397661
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik YA.; Quintana, Chris; Lea, Robert
1991-01-01
Fuzzy control has been successfully applied in industrial systems. However, there is some caution in using it. The reason is that fuzzy control is based on quite reasonable ideas, but each of these ideas can be implemented in several different ways, and depending on which implementation is chosen, different results are achieved. Some implementations lead to high-quality control; some do not. Since there are no theoretical methods for choosing an implementation, the basic way to choose one at present is experimental. But if one chooses a method that is good for several examples, there is no guarantee that it will work well in all cases; hence the caution. A theoretical basis for choosing fuzzy control procedures is provided. In order to choose a procedure that transforms fuzzy knowledge into a control, one needs, first, to choose a membership function for each of the fuzzy terms that the experts use; second, to choose operations on uncertainty values that correspond to 'and' and 'or'; and third, once a membership function for the control is obtained, to defuzzify it, that is, to generate the value of the control u that will actually be used. A general approach that helps to make all these choices is described: namely, it is proved that under reasonable assumptions membership functions should be linear or fractionally linear, defuzzification must be described by a centroid rule, and all possible 'and' and 'or' operations are characterized. Thus, a theoretical explanation of the existing semi-heuristic choices is given and the basis for further research on optimal fuzzy control is formulated.
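A minimal sketch of two of the choices the abstract names: a linear (triangular) membership function and the centroid defuzzification rule. The rule shapes and numbers are illustrative only.

```python
import numpy as np

def triangular(x, a, b, c):
    """Piecewise-linear (triangular) membership function: 0 outside [a, c],
    rising to 1 at the peak b — an example of the 'linear' class."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def centroid_defuzzify(x, mu):
    """Centroid rule: the crisp control value is the center of mass of mu(x)."""
    return np.trapz(mu * x, x) / np.trapz(mu, x)

x = np.linspace(0.0, 10.0, 1001)
# Aggregate of two clipped rule outputs (illustrative):
mu = np.maximum(triangular(x, 1, 3, 5), 0.6 * triangular(x, 4, 6, 8))
print(centroid_defuzzify(x, mu))   # the crisp control value u
```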
Development of theory-based health messages: three-phase programme of formative research
Epton, Tracy; Norman, Paul; Harris, Peter; Webb, Thomas; Snowsill, F. Alexandra; Sheeran, Paschal
2015-01-01
Online health behaviour interventions have great potential but their effectiveness may be hindered by a lack of formative and theoretical work. This paper describes the process of formative research to develop theoretically and empirically based health messages that are culturally relevant and can be used in an online intervention to promote healthy lifestyle behaviours among new university students. Drawing on the Theory of Planned Behaviour, a three-phase programme of formative research was conducted with prospective and current undergraduate students to identify (i) modal salient beliefs (the most commonly held beliefs) about fruit and vegetable intake, physical activity, binge drinking and smoking, (ii) which beliefs predicted intentions/behaviour and (iii) reasons underlying each of the beliefs that could be targeted in health messages. Phase 1, conducted with 96 pre-university college students, elicited 56 beliefs about the behaviours. Phase 2, conducted with 3026 incoming university students, identified 32 of these beliefs that predicted intentions/behaviour. Phase 3, conducted with 627 current university students, elicited 102 reasons underlying the 32 beliefs to be used to construct health messages to bolster or challenge these beliefs. The three-phase programme of formative research provides researchers with an example of how to develop health messages with a strong theoretical- and empirical base for use in health behaviour change interventions. PMID:24504361
Head, Katharine J; Noar, Seth M
2014-01-01
This paper explores the question: what are barriers to health behaviour theory development and modification, and what potential solutions can be proposed? Using the reasoned action approach (RAA) as a case study, four areas of theory development were examined: (1) the theoretical domain of a theory; (2) tension between generalisability and utility, (3) criteria for adding/removing variables in a theory, and (4) organisational tracking of theoretical developments and formal changes to theory. Based on a discussion of these four issues, recommendations for theory development are presented, including: (1) the theoretical domain for theories such as RAA should be clarified; (2) when there is tension between generalisability and utility, utility should be given preference given the applied nature of the health behaviour field; (3) variables should be formally removed/amended/added to a theory based on their performance across multiple studies and (4) organisations and researchers with a stake in particular health areas may be best suited for tracking the literature on behaviour-specific theories and making refinements to theory, based on a consensus approach. Overall, enhancing research in this area can provide important insights for more accurately understanding health behaviours and thus producing work that leads to more effective health behaviour change interventions.
Doing what's right: A grounded theory of ethical decision-making in occupational therapy.
VanderKaay, Sandra; Letts, Lori; Jung, Bonny; Moll, Sandra E
2018-04-20
Ethical decision-making is an important aspect of reasoning in occupational therapy practice. However, the process of ethical decision-making within the broader context of reasoning is yet to be clearly explicated. The purpose of this study was to advance a theoretical understanding of the process by which occupational therapists make ethical decisions in day-to-day practice. A constructivist grounded theory approach was adopted, incorporating in-depth semi-structured interviews with 18 occupational therapists from a range of practice settings and years of experience. Initially, participants nominated as key informants who were able to reflect on their decision-making processes were recruited. Theoretical sampling informed subsequent stages of data collection. Participants were asked to describe their process of ethical decision-making using scenarios from clinical practice. Interview transcripts were analyzed using a systematic process of initial then focused coding, and theoretical categorization to construct a theory regarding the process of ethical decision-making. An ethical decision-making prism was developed to capture three main processes: Considering the Fundamental Checklist, Consulting Others, and Doing What's Right. Ethical decision-making appeared to be an inductive and dialectical process with the occupational therapist at its core. Study findings advance our understanding of ethical decision-making in day-to-day clinical practice.
NASA Astrophysics Data System (ADS)
Yonata, B.; Nasrudin, H.
2018-01-01
A worksheet should provide activities that help students design their own experiments. For this reason, this research focused on how to train students’ higher-order thinking skills in laboratory activity by developing a laboratory activity worksheet for a surface chemistry lecture. To ensure that the worksheet contains aspects of higher-order thinking skills, it requires theoretical and empirical validation. The data analysis shows that the developed worksheet is worth using: it is both theoretically and empirically feasible. This conclusion is based on the findings: 1) validators’ assessments of the theoretical feasibility aspects fall in the ‘very feasible’ category, ranging from 95.24% to 97.92%; 2) students’ higher-order thinking skills, as measured by N-gain values, range from 0.50 (moderate) to 1.00 (high), so the laboratory activity worksheet on the surface chemistry lecture is also empirically feasible. The empirical feasibility is further supported by student responses in the ‘very reasonable’ category. It is expected that the laboratory activity worksheet can train higher-order thinking skills for students who take the surface chemistry lecture.
Automatic and controlled components of judgment and decision making.
Ferreira, Mario B; Garcia-Marques, Leonel; Sherman, Steven J; Sherman, Jeffrey W
2006-11-01
The categorization of inductive reasoning into largely automatic processes (heuristic reasoning) and controlled analytical processes (rule-based reasoning) put forward by dual-process approaches to judgment under uncertainty (e.g., K. E. Stanovich & R. F. West, 2000) has been primarily a matter of assumption, with a scarcity of direct empirical findings supporting it. The present authors use the process dissociation procedure (L. L. Jacoby, 1991) to provide convergent evidence validating a dual-process perspective on judgment under uncertainty based on the independent contributions of heuristic and rule-based reasoning. Process dissociations based on experimental manipulation of variables were derived from the most relevant theoretical properties typically used to contrast the two forms of reasoning. These include processing goals (Experiment 1), cognitive resources (Experiment 2), priming (Experiment 3), and formal training (Experiment 4); the results consistently support the authors' perspective. They conclude that judgment under uncertainty is neither an automatic nor a controlled process but that it reflects both processes, with each making independent contributions.
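A worked sketch of the generic process dissociation equations from Jacoby (1991), mapped here onto controlled (rule-based) and automatic (heuristic) contributions; the response proportions are illustrative, not the paper's data.

```python
def process_dissociation(p_inclusion, p_exclusion):
    """Jacoby's (1991) equations in generic form:
       inclusion = C + (1 - C) * A,   exclusion = (1 - C) * A,
    where C is the controlled (rule-based) and A the automatic (heuristic)
    contribution. Solving: C = inclusion - exclusion, A = exclusion / (1 - C)."""
    C = p_inclusion - p_exclusion
    A = p_exclusion / (1.0 - C) if C < 1.0 else float("nan")
    return C, A

# e.g. 80% heuristic-consistent answers when rules and heuristic agree,
# 40% when they conflict (illustrative numbers):
print(process_dissociation(0.80, 0.40))   # C = 0.40, A ≈ 0.67
```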
A concise guide to clinical reasoning.
Daly, Patrick
2018-04-30
What constitutes clinical reasoning is a disputed subject regarding the processes underlying accurate diagnosis, the importance of patient-specific versus population-based data, and the relation between virtue and expertise in clinical practice. In this paper, I present a model of clinical reasoning that identifies and integrates the processes of diagnosis, prognosis, and therapeutic decision making. The model is based on the generalized empirical method of Bernard Lonergan, which approaches inquiry with equal attention to the subject who investigates and the object under investigation. After identifying the structured operations of knowing and doing and relating these to a self-correcting cycle of learning, I correlate levels of inquiry regarding what-is-going-on and what-to-do to the practical and theoretical elements of clinical reasoning. I conclude that this model provides a methodical way to study questions regarding the operations of clinical reasoning as well as what constitute significant clinical data, clinical expertise, and virtuous health care practice. © 2018 John Wiley & Sons, Ltd.
Experimental and theoretical characterization of an AC electroosmotic micromixer.
Sasaki, Naoki; Kitamori, Takehiko; Kim, Haeng-Boo
2010-01-01
We have reported on a novel microfluidic mixer based on AC electroosmosis. To elucidate the mixer characteristics, we performed detailed measurements of mixing under various experimental conditions including applied voltage, frequency and solution viscosity. The results are discussed through comparison with results obtained from a theoretical model of AC electroosmosis. As predicted from the theoretical model, we found that a larger voltage (approximately 20 V(p-p)) led to more rapid mixing, while the dependence of the mixing on frequency (1-5 kHz) was insignificant under the present experimental conditions. Furthermore, the dependence of the mixing on viscosity was successfully explained by the theoretical model, and the applicability of the mixer in viscous solution (2.83 mPa s) was confirmed experimentally. By using these results, it is possible to estimate the mixing performance under given conditions. These estimations can provide guidelines for using the mixer in microfluidic chemical analysis.
STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.
Fan, Jianqing; Xue, Lingzhou; Zou, Hui
2014-06-01
Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue still remains that it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap in over a decade, we provide a unified theory to show explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression.
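A sketch of the one-step local linear approximation the abstract refers to: starting from an initial estimate, the folded concave penalty is linearized into an l1 penalty with weights given by the penalty derivative (SCAD shown), and the resulting weighted lasso is solved by column rescaling. This is a generic illustration of the LLA idea under stated assumptions, not the paper's code.

```python
import numpy as np
from sklearn.linear_model import Lasso

def scad_derivative(beta, lam, a=3.7):
    """Derivative p'_lam(|beta|) of the SCAD penalty (Fan & Li, 2001)."""
    ab = np.abs(beta)
    return np.where(ab <= lam, lam, np.maximum(a * lam - ab, 0.0) / (a - 1.0))

def one_step_lla(X, y, beta_init, lam):
    """One-step LLA: weighted lasso with weights w_j = p'_lam(|beta_init_j|),
    solved via the substitution beta_j = gamma_j / w_j so that a unit-penalty
    lasso applies the per-coordinate weights."""
    w = np.maximum(scad_derivative(beta_init, lam), 1e-10)  # guard zeros
    Xs = X / w                       # column j of X scaled by 1/w_j
    # sklearn's Lasso minimizes (1/2n)||y - Xs g||^2 + alpha*||g||_1;
    # alpha = 1.0 reproduces the weighted penalty sum_j w_j |beta_j|.
    fit = Lasso(alpha=1.0, fit_intercept=False).fit(Xs, y)
    return fit.coef_ / w             # undo the rescaling
```

Iterating this map once more from the returned estimate illustrates the paper's fixed-point claim: once the oracle estimator is reached, the next iteration reproduces it.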
Westgate, Philip M
2013-07-20
Generalized estimating equations (GEEs) are routinely used for the marginal analysis of correlated data. The efficiency of GEE depends on how closely the working covariance structure resembles the true structure, and therefore accurate modeling of the working correlation of the data is important. A popular approach is the use of an unstructured working correlation matrix, as it is not as restrictive as simpler structures such as exchangeable and AR-1 and thus can theoretically improve efficiency. However, because of the potential for having to estimate a large number of correlation parameters, variances of regression parameter estimates can be larger than theoretically expected when utilizing the unstructured working correlation matrix. Therefore, standard error estimates can be negatively biased. To account for this additional finite-sample variability, we derive a bias correction that can be applied to typical estimators of the covariance matrix of parameter estimates. Via simulation and in application to a longitudinal study, we show that our proposed correction improves standard error estimation and statistical inference. Copyright © 2012 John Wiley & Sons, Ltd.
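For orientation, a sketch of the standard robust (sandwich) covariance estimator whose negative bias the paper corrects; the correction itself is specific to the paper and is not reproduced here. Inputs are the usual per-cluster GEE components.

```python
import numpy as np

def gee_sandwich(D_list, V_list, resid_list):
    """Uncorrected sandwich covariance of GEE regression estimates:
        B^{-1} M B^{-1},
    with B = sum_i D_i' V_i^{-1} D_i and
         M = sum_i (D_i' V_i^{-1} r_i)(D_i' V_i^{-1} r_i)'.
    D_i: (n_i, p) mean-model derivatives; V_i: working covariance;
    r_i: residual vector for cluster i. Westgate's (2013) finite-sample
    correction inflates this when an unstructured working correlation is
    estimated (see the paper for the correction)."""
    p = D_list[0].shape[1]
    B = np.zeros((p, p)); M = np.zeros((p, p))
    for D, V, r in zip(D_list, V_list, resid_list):
        Vi = np.linalg.inv(V)
        B += D.T @ Vi @ D
        u = D.T @ Vi @ r
        M += np.outer(u, u)
    Binv = np.linalg.inv(B)
    return Binv @ M @ Binv
```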
STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION
Fan, Jianqing; Xue, Lingzhou; Zou, Hui
2014-01-01
Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue still remains that it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap in over a decade, we provide a unified theory to show explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression. PMID:25598560
Estimation of Post-Test Probabilities by Residents: Bayesian Reasoning versus Heuristics?
ERIC Educational Resources Information Center
Hall, Stacey; Phang, Sen Han; Schaefer, Jeffrey P.; Ghali, William; Wright, Bruce; McLaughlin, Kevin
2014-01-01
Although the process of diagnosing invariably begins with a heuristic, we encourage our learners to support their diagnoses by analytical cognitive processes, such as Bayesian reasoning, in an attempt to mitigate the effects of heuristics on diagnosing. There are, however, limited data on the use and/or impact of Bayesian reasoning on the accuracy of…
ERIC Educational Resources Information Center
Kikas, Eve; Peets, Katlin; Tropp, Kristiina; Hinn, Maris
2009-01-01
The purpose of the present study was to examine the impact of sex, verbal reasoning, and normative beliefs on direct and indirect forms of aggression. Three scales from the Peer Estimated Conflict Behavior Questionnaire, Verbal Reasoning tests, and an extended version of Normative Beliefs About Aggression Scale were administered to 663 Estonian…
NASA Astrophysics Data System (ADS)
Bonetto, P.; Qi, Jinyi; Leahy, R. M.
2000-08-01
Describes a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, the authors derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. The theoretical analysis models both the Poisson statistics of PET data and the inhomogeneity of tracer uptake. The authors show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow the authors to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
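A minimal sketch of the CHO statistic itself: images are projected onto a small set of channels, and the Hotelling template is built from the channel-space mean difference and covariance. In the paper's setting those moments come from theoretical approximations for MAP reconstructions; here they are generic inputs.

```python
import numpy as np

def cho_statistic(images, U, mean0, mean1, cov):
    """Channelized Hotelling observer (sketch).
    images: (n_images, n_pixels); U: (n_pixels, n_channels) channel matrix;
    mean0/mean1: channel-space class means; cov: channel-space covariance.
    Returns one scalar statistic per image; thresholding it decides
    signal-present vs. signal-absent."""
    v = images @ U                            # channel outputs
    w = np.linalg.solve(cov, mean1 - mean0)   # Hotelling template
    return v @ w
```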
Analysis of the Cape Cod tracer data
Ezzedine, Souheil; Rubin, Yoram
1997-01-01
An analysis of the Cape Cod test was performed using several first- and higher-order theoretical models. We compare conditional and unconditional solutions of the transport equation and employ them for analysis of the experimental data. We consider spatial moments, mass breakthrough curves, and the distribution of the solute mass in space. The concentration measurements were also analyzed using theoretical models for the expected value and variance of concentration. The theoretical models we employed are based on the spatial correlation structure of the conductivity field, without any fitting of parameters to the tracer data; hence we can test the predictive power of the theories. The effects of recharge on macrodispersion are investigated, and it is shown that recharge provides a reasonable explanation for the enhanced lateral spread of the Cape Cod plume. The compendium of the experimental results presented here is useful for testing of theoretical and numerical models.
Integrating Formal and Grounded Representations in Combinatorics Learning
ERIC Educational Resources Information Center
Braithwaite, David W.; Goldstone, Robert L.
2013-01-01
The terms "concreteness fading" and "progressive formalization" have been used to describe instructional approaches to science and mathematics that use grounded representations to introduce concepts and later transition to more formal representations of the same concepts. There are both theoretical and empirical reasons to…
THE USE OF ELECTRONIC DATA PROCESSING IN CORRECTIONS AND LAW ENFORCEMENT,
Reviews the reasons, methods, accomplishments, and goals of the use of electronic data processing in the fields of correction and law enforcement. Suggests ... statistical and case history data in building a sounder theoretical base in the field of law enforcement. (Author)
Shabbir, Javid
2018-01-01
In the present paper we propose an improved class of estimators in the presence of measurement error and non-response under stratified random sampling for estimating the finite population mean. The theoretical and numerical studies reveal that the proposed class of estimators performs better than other existing estimators. PMID:29401519
A Novel Methodology for Measurements of an LED's Heat Dissipation Factor
NASA Astrophysics Data System (ADS)
Jou, R.-Y.; Haung, J.-H.
2015-12-01
Heat generation is an inevitable byproduct with high-power light-emitting diode (LED) lighting. The increase in junction temperature that accompanies the heat generation sharply degrades the optical output of the LED and has a significant negative influence on the reliability and durability of the LED. For these reasons, the heat dissipation factor, Kh, is an important factor in modeling and thermal design of LED installations. In this study, a methodology is proposed and experiments are conducted to determine LED heat dissipation factors. Experiments are conducted for two different brands of LED. The average heat dissipation factor of the Edixeon LED is 0.69, and is 0.60 for the OSRAM LED. By using the developed test method and comparing the results to the calculated luminous fluxes using theoretical equations, the interdependence of optical, electrical, and thermal powers can be predicted with a reasonable accuracy. The difference between the theoretical and experimental values is less than 9 %.
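The arithmetic implied by a heat dissipation factor, assuming Kh is the fraction of electrical input power converted to heat (the remainder leaving as optical power); the Kh values are the abstract's averages, the 10 W input is illustrative.

```python
def led_powers(p_electrical_w, k_h):
    """Split LED input power into heat and optical power, assuming Kh is
    the heat fraction of the electrical input."""
    p_heat = k_h * p_electrical_w
    p_optical = p_electrical_w - p_heat
    return p_heat, p_optical

print(led_powers(10.0, 0.69))   # Edixeon average: (6.9 W heat, 3.1 W optical)
print(led_powers(10.0, 0.60))   # OSRAM average:  (6.0 W heat, 4.0 W optical)
```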
NASA Astrophysics Data System (ADS)
Shen, Ji; Sung, Shannon; Zhang, Dongmei
2015-11-01
Students need to think and work across disciplinary boundaries in the twenty-first century. However, it is unclear what interdisciplinary thinking means and how to analyze interdisciplinary interactions in teamwork. In this paper, drawing on multiple theoretical perspectives and empirical analysis of discourse contents, we formulate a theoretical framework that helps analyze interdisciplinary reasoning and communication (IRC) processes in interdisciplinary collaboration. Specifically, we propose four interrelated IRC processes-integration, translation, transfer, and transformation, and develop a corresponding analytic framework. We apply the framework to analyze two meetings of a project that aims to develop interdisciplinary science assessment items. The results illustrate that the framework can help interpret the interdisciplinary meeting dynamics and patterns. Our coding process and results also suggest that these IRC processes can be further examined in terms of interconnected sub-processes. We also discuss the implications of using the framework in conceptualizing, practicing, and researching interdisciplinary learning and teaching in science education.
Memory, reasoning, and categorization: parallels and common mechanisms
Hayes, Brett K.; Heit, Evan; Rotello, Caren M.
2014-01-01
Traditionally, memory, reasoning, and categorization have been treated as separate components of human cognition. We challenge this distinction, arguing that there is broad scope for crossover between the methods and theories developed for each task. The links between memory and reasoning are illustrated in a review of two lines of research. The first takes theoretical ideas (two-process accounts) and methodological tools (signal detection analysis, receiver operating characteristic curves) from memory research and applies them to important issues in reasoning research: relations between induction and deduction, and the belief bias effect. The second line of research introduces a task in which subjects make either memory or reasoning judgments for the same set of stimuli. Other than broader generalization for reasoning than memory, the results were similar for the two tasks, across a variety of experimental stimuli and manipulations. It was possible to simultaneously explain performance on both tasks within a single cognitive architecture, based on exemplar-based comparisons of similarity. The final sections explore evidence for empirical and processing links between inductive reasoning and categorization and between categorization and recognition. An important implication is that progress in all three of these fields will be expedited by further investigation of the many commonalities between these tasks. PMID:24987380
Memory, reasoning, and categorization: parallels and common mechanisms.
Hayes, Brett K; Heit, Evan; Rotello, Caren M
2014-01-01
Traditionally, memory, reasoning, and categorization have been treated as separate components of human cognition. We challenge this distinction, arguing that there is broad scope for crossover between the methods and theories developed for each task. The links between memory and reasoning are illustrated in a review of two lines of research. The first takes theoretical ideas (two-process accounts) and methodological tools (signal detection analysis, receiver operating characteristic curves) from memory research and applies them to important issues in reasoning research: relations between induction and deduction, and the belief bias effect. The second line of research introduces a task in which subjects make either memory or reasoning judgments for the same set of stimuli. Other than broader generalization for reasoning than memory, the results were similar for the two tasks, across a variety of experimental stimuli and manipulations. It was possible to simultaneously explain performance on both tasks within a single cognitive architecture, based on exemplar-based comparisons of similarity. The final sections explore evidence for empirical and processing links between inductive reasoning and categorization and between categorization and recognition. An important implication is that progress in all three of these fields will be expedited by further investigation of the many commonalities between these tasks.
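A sketch of the signal detection tool the review borrows from memory research, applied to an argument-endorsement task: sensitivity d' from hit and false-alarm rates under the equal-variance Gaussian model. The endorsement rates are illustrative, not data from the paper.

```python
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    """Equal-variance signal detection sensitivity: d' = z(H) - z(F)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# e.g. endorsing 85% of valid and 30% of invalid arguments:
print(d_prime(0.85, 0.30))   # ≈ 1.56
```

Plotting hit rate against false-alarm rate across confidence criteria gives the receiver operating characteristic curves the review uses to compare induction, deduction, and recognition.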
Teaching and Assessing Clinical Reasoning Skills.
Modi, Jyoti Nath; Anshu; Gupta, Piyush; Singh, Tejinder
2015-09-01
Clinical reasoning is a core competency expected to be acquired by all clinicians. It is the ability to integrate and apply different types of knowledge, weigh evidence critically and reflect upon the process used to arrive at a diagnosis. Problems with clinical reasoning often occur because of inadequate knowledge, flaws in data gathering and improper approach to information processing. Some of the educational strategies which can be used to encourage acquisition of clinical reasoning skills are: exposure to a wide variety of clinical cases, activation of previous knowledge, development of illness scripts, sharing expert strategies to arrive at a diagnosis, forcing students to prioritize differential diagnoses; and encouraging reflection, metacognition, deliberate practice and availability of formative feedback. Assessment of clinical reasoning abilities should be done throughout the training course in diverse settings. Use of scenario based multiple choice questions, key feature test and script concordance test are some ways of theoretically assessing clinical reasoning ability. In the clinical setting, these skills can be tested in most forms of workplace based assessment. We recommend that clinical reasoning must be taught at all levels of medical training as it improves clinician performance and reduces cognitive errors.
NASA Astrophysics Data System (ADS)
Rosita, N. T.
2018-03-01
The purpose of this study is to analyse algebraic reasoning ability using the SOLO model as a theoretical framework to assess students’ algebraic reasoning abilities of Field Dependent cognitive (FD), Field Independent (FI) and Gender perspectives. The method of this study is a qualitative research. The instrument of this study is the researcher himself assisted with algebraic reasoning tests, the problems have been designed based on NCTM indicators and algebraic reasoning according to SOLO model. While the cognitive style of students is determined using Group Embedded Figure Test (GEFT), as well as interviews on the subject as triangulation. The subjects are 15 female and 15 males of the sixth semester students of mathematics education, STKIP Sebelas April. The results of the qualitative data analysis is that most subjects are at the level of unistructural and multi-structural, subjects at the relational level have difficulty in forming a new linear pattern. While the subjects at the extended abstract level are able to meet all the indicators of algebraic reasoning ability even though some of the answers are not perfect yet. Subjects of FI tend to have higher algebraic reasoning abilities than of the subject of FD.
The role of electronic health records in clinical reasoning.
Berndt, Markus; Fischer, Martin R
2018-05-16
Electronic health records (eHRs) play an increasingly important role in documentation and exchange of information in multi-and interdisciplinary patient care. Although eHRs are associated with mixed evidence in terms of effectiveness, they are undeniably the health record form of the future. This poses several learning opportunities and challenges for medical education. This review aims to connect the concept of eHRs to key competencies of physicians and elaborates current learning science perspectives on diagnostic and clinical reasoning based on a theoretical framework of scientific reasoning and argumentation. It concludes with an integrative vision of the use of eHRs, and the special role of the patient, for teaching and learning in medicine. © 2018 New York Academy of Sciences.
Constraint-based Attribute and Interval Planning
NASA Technical Reports Server (NTRS)
Jonsson, Ari; Frank, Jeremy
2013-01-01
In this paper we describe Constraint-based Attribute and Interval Planning (CAIP), a paradigm for representing and reasoning about plans. The paradigm enables the description of planning domains with time, resources, concurrent activities, mutual exclusions among sets of activities, disjunctive preconditions and conditional effects. We provide a theoretical foundation for the paradigm, based on temporal intervals and attributes. We then show how the plans are naturally expressed by networks of constraints, and show that the process of planning maps directly to dynamic constraint reasoning. In addition, we de ne compatibilities, a compact mechanism for describing planning domains. We describe how this framework can incorporate the use of constraint reasoning technology to improve planning. Finally, we describe EUROPA, an implementation of the CAIP framework.
The nonstationary strain filter in elastography: Part I. Frequency dependent attenuation.
Varghese, T; Ophir, J
1997-01-01
The accuracy and precision of the strain estimates in elastography depend on a myriad of factors. A clear understanding of the various factors (noise sources) that plague strain estimation is essential to obtain quality elastograms. The nonstationary variation in the performance of the strain filter due to frequency-dependent attenuation and lateral and elevational signal decorrelation is analyzed in this and the companion paper for the cross-correlation-based strain estimator. In this paper, we focus on the role of frequency-dependent attenuation in the performance of the strain estimator. The reduction in the signal-to-noise ratio (SNR_s) in the RF signal, and the center frequency and bandwidth downshift with frequency-dependent attenuation, are incorporated into the strain filter formulation. Both linear and nonlinear frequency dependence of attenuation are theoretically analyzed. Monte-Carlo simulations are used to corroborate the theoretically predicted results. Experimental results illustrate the deterioration in the precision of the strain estimates with depth in a uniformly elastic phantom. Theoretical, simulation and experimental results indicate the importance of high SNR_s values in the RF signals, because the strain estimation sensitivity, elastographic SNR_e and dynamic range deteriorate rapidly with a decrease in the SNR_s. In addition, a shift in the strain filter toward higher strains is observed at large depths in tissue due to the center frequency downshift.
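A bare-bones sketch of the cross-correlation strain estimator the paper analyzes: per-window time delays between pre- and post-compression RF lines, with strain taken as the gradient of the displacement profile. Window and hop sizes are illustrative.

```python
import numpy as np

def window_delay(pre, post):
    """Lag (in samples) maximizing the cross-correlation of two RF windows."""
    xc = np.correlate(post, pre, mode="full")
    return np.argmax(xc) - (len(pre) - 1)

def strain_profile(rf_pre, rf_post, win=64, hop=32):
    """Displacement from per-window delays; strain is its depth gradient."""
    delays = [window_delay(rf_pre[i:i + win], rf_post[i:i + win])
              for i in range(0, len(rf_pre) - win, hop)]
    return np.diff(delays) / hop    # dimensionless strain per window step
```

Attenuation enters by degrading SNR_s and downshifting the spectrum with depth, which is why the estimator's precision in this sketch would deteriorate for deeper windows.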
Sex differences in moral reasoning: response to Walker's (1984) conclusion that there are none.
Baumrind, D
1986-04-01
Data from the Family Socialization and Developmental Competence Project are used to probe Walker's conclusion that there are no sex differences in moral reasoning. Ordinal and nominal nonparametric statistics result in a complex but theoretically meaningful network of relationships among sex, educational level, and Kohlberg stage score level, with the presence and direction of sex differences in stage score level dependent on educational level. The effects on stage score level of educational level and working status are also shown to differ for men and women. Reasons are considered for not accepting Walker's dismissal of studies that use (a) a pre-1983 scoring manual, or (b) fail to control for education. The problems presented to Kohlberg's theory by the significant relationship between educational and stage score levels in the general population are discussed, particularly as these apply to the postconventional level of moral reasoning.
Early executive function predicts reasoning development.
Richland, Lindsey E; Burchinal, Margaret R
2013-01-01
Analogical reasoning is a core cognitive skill that distinguishes humans from all other species and contributes to general fluid intelligence, creativity, and adaptive learning capacities. Yet its origins are not well understood. In the study reported here, we analyzed large-scale longitudinal data from the Study of Early Child Care and Youth Development to test predictors of growth in analogical-reasoning skill from third grade to adolescence. Our results suggest an integrative resolution to the theoretical debate regarding contributory factors arising from smaller-scale, cross-sectional experiments on analogy development. Children with greater executive-function skills (both composite and inhibitory control) and vocabulary knowledge in early elementary school displayed higher scores on a verbal analogies task at age 15 years, even after adjusting for key covariates. We posit that knowledge is a prerequisite to analogy performance, but strong executive-functioning resources during early childhood are related to long-term gains in fundamental reasoning skills.
Evaluation of Uncertainty in Runoff Analysis Incorporating Theory of Stochastic Process
NASA Astrophysics Data System (ADS)
Yoshimi, Kazuhiro; Wang, Chao-Wen; Yamada, Tadashi
2015-04-01
The aim of this paper is to provide a theoretical framework for uncertainty estimation in rainfall-runoff analysis based on the theory of stochastic processes. SDEs (stochastic differential equations) based on this theory have been widely used in mathematical finance to predict stock price movements, and some researchers in civil engineering have investigated runoff with this knowledge of SDEs (e.g. Kurino et al., 1999; Higashino and Kanda, 2001). However, there have been no studies evaluating uncertainty in runoff phenomena based on the correspondence between SDEs and the Fokker-Planck equation. The Fokker-Planck equation is a partial differential equation that describes the temporal evolution of a PDF (probability density function), and the SDE and Fokker-Planck descriptions are mathematically equivalent. In this paper, therefore, the uncertainty of discharge arising from the uncertainty of rainfall is explained theoretically and mathematically by introducing the theory of stochastic processes. The lumped rainfall-runoff model is represented by an SDE, written in difference form, because the temporal variation of rainfall is expressed as its average plus a deviation approximated by a Gaussian distribution; this is based on rainfall observed by rain-gauge stations and a radar rain-gauge system. As a result, this paper shows that it is possible to evaluate the uncertainty of discharge by using the relationship between the SDE and the Fokker-Planck equation. Moreover, the results of this study show that the uncertainty of discharge increases as rainfall intensity rises and as the nonlinearity of the resistance grows strong. These results are clarified by PDFs of the discharge that satisfy the Fokker-Planck equation. This means a reasonable discharge can be estimated based on the theory of stochastic processes, and the method can be applied to the probabilistic risk of flood management.
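A sketch of the SDE-to-Fokker-Planck correspondence the paper relies on, together with an Euler-Maruyama simulation of a lumped linear-reservoir runoff model driven by mean-plus-Gaussian rainfall. The reservoir form and all parameter values are assumptions for illustration, not the paper's model.

```python
import numpy as np

# For dX = a(X,t) dt + b(X,t) dW, the density p(x,t) obeys the Fokker-Planck
# equation  dp/dt = -d(a p)/dx + (1/2) d^2(b^2 p)/dx^2.
# Illustrative lumped model: storage S with linear release S/k, rainfall
# r_mean plus Gaussian deviation of scale sigma.

def euler_maruyama(S0, r_mean, sigma, k, dt, n_steps, seed=0):
    rng = np.random.default_rng(seed)
    S = np.empty(n_steps + 1); S[0] = S0
    for i in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        S[i + 1] = max(S[i] + (r_mean - S[i] / k) * dt + sigma * dW, 0.0)
    return S / k    # discharge time series

# The ensemble spread approximates the discharge PDF's evolution:
runs = np.array([euler_maruyama(1.0, 2.0, 0.5, 3.0, 0.01, 1000, s)
                 for s in range(200)])
print(runs[:, -1].mean(), runs[:, -1].std())   # predictive mean and spread
```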
SU-E-T-503: IMRT Optimization Using Monte Carlo Dose Engine: The Effect of Statistical Uncertainty.
Tian, Z; Jia, X; Graves, Y; Uribe-Sanchez, A; Jiang, S
2012-06-01
With the development of ultra-fast GPU-based Monte Carlo (MC) dose engines, it becomes clinically realistic to compute the dose-deposition coefficients (DDC) for IMRT optimization using MC simulation. However, it is still time-consuming if we want to compute the DDC with small statistical uncertainty. This work studies the effects of the statistical error in the DDC matrix on IMRT optimization. The MC-computed DDC matrices are simulated here by adding statistical uncertainties at a desired level to the ones generated with a finite-size pencil beam algorithm. A statistical uncertainty model for MC dose calculation is employed. We adopt a penalty-based quadratic optimization model and a gradient descent method to optimize the fluence map and then recalculate the corresponding actual dose distribution using the noise-free DDC matrix. The impacts of DDC noise are assessed in terms of the deviation of the resulting dose distributions. We have also used a stochastic perturbation theory to theoretically estimate the statistical errors of dose distributions on a simplified optimization model. A head-and-neck case is used to investigate the perturbation to the IMRT plan due to MC statistical uncertainty. The relative errors of the final dose distributions of the optimized IMRT are found to be much smaller than those in the DDC matrix, which is consistent with our theoretical estimation. When the history number is decreased from 10^8 to 10^6, the dose-volume histograms are still very similar to the error-free DVHs, while the error in the DDC is about 3.8%. The results illustrate that the statistical errors in the DDC matrix have a relatively small effect on IMRT optimization in the dose domain. This indicates we can use a relatively small number of histories to obtain the DDC matrix with MC simulation within a reasonable amount of time, without considerably compromising the accuracy of the optimized treatment plan. This work is supported by Varian Medical Systems through a Master Research Agreement. © 2012 American Association of Physicists in Medicine.
Bayesian flood forecasting methods: A review
NASA Astrophysics Data System (ADS)
Han, Shasha; Coulibaly, Paulin
2017-08-01
Over the past few decades, floods have been among the most common and widely distributed natural disasters in the world. If floods could be accurately forecasted in advance, their negative impacts could be greatly minimized. It is widely recognized that quantification and reduction of the uncertainty associated with a hydrologic forecast is of great importance for flood estimation and rational decision making. The Bayesian forecasting system (BFS) offers an ideal theoretical framework for uncertainty quantification that can be developed for probabilistic flood forecasting via any deterministic hydrologic model. It provides a suitable theoretical structure, empirically validated models and reasonable analytic-numerical computation methods, and can be developed into various Bayesian forecasting approaches. This paper presents a comprehensive review of Bayesian forecasting approaches applied to flood forecasting from 1999 to the present. The review starts with an overview of the fundamentals of BFS and recent advances in BFS, followed by BFS applications in river stage forecasting and real-time flood forecasting; it then moves to a critical analysis evaluating the advantages and limitations of Bayesian forecasting methods and other predictive uncertainty assessment approaches in flood forecasting, and finally discusses future research directions in Bayesian flood forecasting. Results show that the Bayesian flood forecasting approach is an effective and advanced way to perform flood estimation; it considers all sources of uncertainty and produces a predictive distribution of the river stage, river discharge or runoff, thus giving more accurate and reliable flood forecasts. Some emerging Bayesian forecasting methods (e.g. the ensemble Bayesian forecasting system and Bayesian multi-model combination) were shown to overcome the limitations of a single model or fixed model weights and effectively reduce predictive uncertainty. In recent years, various Bayesian flood forecasting approaches have been developed and widely applied, but there is still room for improvement. Future research in the context of Bayesian flood forecasting should focus on the assimilation of various sources of newly available information and the improvement of predictive performance assessment methods.
Spatio-temporal Granger causality: a new framework
Luo, Qiang; Lu, Wenlian; Cheng, Wei; Valdes-Sosa, Pedro A.; Wen, Xiaotong; Ding, Mingzhou; Feng, Jianfeng
2015-01-01
That physiological oscillations of various frequencies are present in fMRI signals is the rule, not the exception. Herein, we propose a novel theoretical framework, spatio-temporal Granger causality, which allows us to more reliably and precisely estimate the Granger causality from experimental datasets possessing time-varying properties caused by physiological oscillations. Within this framework, Granger causality is redefined as a global index measuring the directed information flow between two time series with time-varying properties. Both theoretical analyses and numerical examples demonstrate that Granger causality is a monotonically increasing function of the temporal resolution used in the estimation. This is consistent with the general principle of coarse graining, which causes information loss by smoothing out very fine-scale details in time and space. Our results confirm that the Granger causality at the finer spatio-temporal scales considerably outperforms the traditional approach in terms of an improved consistency between two resting-state scans of the same subject. To optimally estimate the Granger causality, the proposed theoretical framework is implemented through a combination of several approaches, such as dividing the optimal time window and estimating the parameters at the fine temporal and spatial scales. Taken together, our approach provides a novel and robust framework for estimating the Granger causality from fMRI, EEG, and other related data. PMID:23643924
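For reference, a sketch of the standard pairwise Granger causality the framework generalizes: compare the residual variance of an autoregression of y on its own past with that of a regression that also includes the past of x. Lag order and variable names are illustrative.

```python
import numpy as np

def granger_xy(x, y, p=2):
    """Log residual-variance ratio: does x's past help predict y beyond
    y's own past? A value > 0 suggests an x -> y influence."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(y)
    Y = y[p:]
    past_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    past_xy = np.column_stack([past_y] +
                              [x[p - k:n - k][:, None] for k in range(1, p + 1)])
    def rss(Z):
        A = np.column_stack([np.ones(len(Y)), Z])
        beta, *_ = np.linalg.lstsq(A, Y, rcond=None)
        r = Y - A @ beta
        return r @ r
    return np.log(rss(past_y) / rss(past_xy))
```

The paper's spatio-temporal version estimates such quantities at fine temporal and spatial scales within optimally chosen time windows, then aggregates them into a global index.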
Hou, Tian-Xing; Yang, Xing-Guo; Xing, Hui-Ge; Huang, Kang-Xin; Zhou, Jia-Wen
2016-01-01
Estimating groundwater inflow into a tunnel before and during the excavation process is an important task to ensure safety and schedule during underground construction. Here we report a case of the forecasting and prevention of water inrush at the Jinping II Hydropower Station diversion tunnel groups during the excavation process. The diversion tunnel groups are located in mountains and valleys and under a high water pressure head. Three forecasting methods are used to predict the total water inflow of the #2 diversion tunnel. Furthermore, based on the accurate estimation of the water inrush around the tunnel working area, a theoretical method is presented to forecast the water inflow at the working area during the excavation process. The simulated results show that the total water inflow is 1586.9, 1309.4 and 2070.2 m^3/h using the Qshima method, Kostyakov method and Ochiai method, respectively. The Qshima method is the best one because it most closely matches the monitoring result. Given the huge water inflow into the #2 diversion tunnel, reasonable drainage measures are arranged to prevent the potential disaster of water inrush. The groundwater pressure head can be determined using the water flow velocity from the advancing holes; the groundwater pressure head can then be used to predict the possible water inflow. The simulated results show that the groundwater pressure head and water inflow are stable and relatively small around the region of intact rock mass, but there is a sudden change around the fault region, with a large water inflow and groundwater pressure head. Different countermeasures are adopted to prevent water inrush disasters during the tunnel excavation process. Reasonable forecasting of the characteristic parameters of water inrush is very useful for forming prevention and mitigation schemes during the tunnel excavation process.
Interpolation Inequalities and Spectral Estimates for Magnetic Operators
NASA Astrophysics Data System (ADS)
Dolbeault, Jean; Esteban, Maria J.; Laptev, Ari; Loss, Michael
2018-05-01
We prove magnetic interpolation inequalities and Keller-Lieb-Thirring estimates for the principal eigenvalue of magnetic Schrödinger operators. We establish explicit upper and lower bounds for the best constants and show by numerical methods that our theoretical estimates are accurate.
NASA Astrophysics Data System (ADS)
Trifonova, Tatiana; Arakelian, Sergei; Trifonov, Dmitriy; Abrakhin, Sergei
2017-04-01
1. The principal goal of the present talk is to discuss the existing uncertainty and discrepancy between water balance estimations for an area under heavy rain flood: on the one hand, the theoretical approach and reasonable database for rainfall coming from the atmosphere, and on the other hand, the actual surface water flow parameters measured by various methods and/or reported by eyewitnesses (cf. [1]). The vital item for our discussion is that the latter characteristics may sometimes be noticeably greater than the former. Our estimations show a greater water mass discharge during the events than could be expected from the rainfall estimation alone [2]. This fact gives us grounds to take into account a possible groundwater contribution to the event. 2. We carried out such an analysis for at least two catastrophic water events in 2015: (1) torrential rain and catastrophic floods in Louisiana (USA), June 16-20; (2) the Assam flood (India), Aug. 22 - Sept. 8. 3. Groundwater flooding of a river terrace is discussed e.g. in [3], but there in the case when the rise of the water table above the land surface coincides with intense rainfall, as a relatively rare phenomenon. In our hypothesis, the principal part of the possible groundwater exit to the surface is connected with the state of the crack-net system in the earth's crust (including deep layers) acting as a water transportation system: first, being in a varying pressure field for the groundwater basin and, second, modified for different reasons, either suddenly (the Krimsk-city flash flood event, July 2012, Russia) or smoothly (the Amur river flood event, Aug.-Sept. 2013, Russia). Such reconstruction of the 3D crack-net under external causes (resulting even in local variation of pressure in any crack section) is a principal item of the presented approach. 4. We believe that in some cases an interconnection between floods and preceding earthquakes may occur. We discuss the problem for certain events (e.g., in addition to the events above, for the 2013 Colorado flood (USA)). 5. Thus, we believe that now is the time for a transition from a «surface view» - i.e., observable results reported by eyewitnesses and the consequences of the water events - to a «fundamental approach» - i.e., physical parameters measured during continuous monitoring and possible mechanisms of their variation. References 1. Trifonova T.A., Akimov V.A., Abrakhin S.I., Arakelian S.M., Prokoshev V.G. Basic principles of modeling and forecasting of extreme natural and man-made disasters. Monograph, Russian Emercom Publ., Moscow, 2014, 436 p. 2. Trifonova T., Trifonov D., Arakelian S. The 2015 disastrous floods in Assam, India, and Louisiana, USA: water balance estimation. Hydrology 2016, 3(4), 41; doi:10.3390/hydrology3040041. 3. Madeline B. Cotkowitz, John W. Attig, Thomas McDermott. Groundwater flood of a river terrace in southwest Wisconsin, USA. Hydrogeology Journal, 2014. DOI 10.1007/s10040-014-1129-x.
NASA Technical Reports Server (NTRS)
Liu, G.
1985-01-01
One of the major concerns in the design of an active control system is obtaining the information needed for effective feedback, which involves the combination of sensing and estimation. A sensor location index is defined as the weighted sum of the mean square estimation errors, in which the sensor locations can be regarded as estimator design parameters. The design goal is to choose these locations to minimize the sensor location index. The choice of the number of sensors is a tradeoff between the estimation quality, based upon the same performance index, and the total costs of installing and maintaining extra sensors. An experimental study for choosing the sensor location was conducted on an aeroelastic system. The system model, which includes the unsteady aerodynamics model developed by Stephen Rock, was improved. Experimental results verify the trend of the theoretical predictions of the sensor location index for different sensor locations at various wind speeds.
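One way to compute such an index, sketched under the assumption of a linear Gaussian state-space model: for each candidate sensor set (a measurement matrix C), solve the steady-state Kalman filter Riccati equation and take the weighted trace of the error covariance. The matrices here are generic placeholders, not the aeroelastic model from the study.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def sensor_location_index(A, C, Q, R, W):
    """Weighted sum of steady-state mean-square estimation errors for a
    Kalman filter using measurement matrix C (rows = chosen sensors)."""
    P = solve_discrete_are(A.T, C.T, Q, R)   # filter Riccati equation (duality)
    return np.trace(W @ P)

def best_sensor_set(A, Q, W, candidate_Cs, r_sensor=1.0):
    """Pick the candidate C minimizing the index; R assumed diagonal."""
    scores = [sensor_location_index(A, C, Q,
                                    r_sensor * np.eye(C.shape[0]), W)
              for C in candidate_Cs]
    return int(np.argmin(scores)), scores
```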
Bibliography for aircraft parameter estimation
NASA Technical Reports Server (NTRS)
Iliff, Kenneth W.; Maine, Richard E.
1986-01-01
An extensive bibliography in the field of aircraft parameter estimation has been compiled. This list contains definitive works related to most aircraft parameter estimation approaches. Theoretical studies as well as practical applications are included. Many of these publications are pertinent to subjects peripherally related to parameter estimation, such as aircraft maneuver design or instrumentation considerations.
Nonparametric Item Response Curve Estimation with Correction for Measurement Error
ERIC Educational Resources Information Center
Guo, Hongwen; Sinharay, Sandip
2011-01-01
Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…
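A sketch of the uncorrected estimator at issue: a Nadaraya-Watson kernel regression of item correctness on observed score. Using observed scores as the regressor is exactly what introduces the bias the paper addresses; the measurement-error correction itself is specific to the paper and is not reproduced here.

```python
import numpy as np

def kernel_irc(theta_grid, scores, item_correct, h=1.0):
    """Nadaraya-Watson estimate of P(item correct | score) on a grid,
    Gaussian kernel with bandwidth h (illustrative choice)."""
    K = np.exp(-0.5 * ((theta_grid[:, None] - scores[None, :]) / h) ** 2)
    return (K @ item_correct) / K.sum(axis=1)
```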
The gravitational properties of antimatter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldman, T.; Hughes, R.J.; Nieto, M.M.
1986-09-01
It is argued that a determination of the gravitational acceleration of antimatter towards the earth is capable of imposing powerful constraints on modern quantum gravity theories. Theoretical reasons to expect non-Newtonian non-Einsteinian effects of gravitational strength and experimental suggestions of such effects are reviewed. 41 refs. (LEW)
ERIC Educational Resources Information Center
Lenartowicz, Marta
2015-01-01
Higher education research frequently refers to the complex external conditions that give our old-fashioned universities a good reason to change. The underlying theoretical assumption of such framing is that organizations are open systems. This paper presents an alternative view, derived from the theory of social systems autopoiesis. It proposes…
Metaphorical Language: Seeing and Hearing with the Heart.
ERIC Educational Resources Information Center
Wilkins, Lois E.
2002-01-01
Focuses on a multidisciplinary metatheory relevant for the poetry therapist that speaks to holographic reasoning and metaphorical poetic language. Shows how metaphorical poetic language supports both the therapist and the client. Concludes that the poetry therapist will benefit from this theoretical framework that encourages communications across…
Techtalk: Mobile Apps and College Mathematics
ERIC Educational Resources Information Center
Hoang, Theresa V.; Caverly, David C.
2013-01-01
In this column, the authors discuss apps useful in developing mathematical reasoning. They place these into a theoretical framework, suggesting how they could be used in an instructional model such as the Algorithmic Instructional Technique (AIT) developed by Vasquez (2003). This model includes four stages: modeling, practice, transition, and…
Active Learning in the Digital Age Classroom.
ERIC Educational Resources Information Center
Heide, Ann; Henderson, Dale
This book examines the theoretical and practical issues surrounding today's technology-integrated classroom. The chapters cover the following topics: (1) reasons to integrate technology into the classroom, including the changing world, enriched learning and increased productivity, the learner, the workplace, past experience, and future trends; (2)…
The Dearth of Mental Health Research in Occupational Therapy.
ERIC Educational Resources Information Center
Gibson, Diane
1984-01-01
Reasons for the lack of research in occupational therapy include small numbers of doctoral level occupational therapists, the psychobehavioral/biochemical dichotomy, the lack of a theoretical framework, the level of research instruction, the impact of a predominantly female profession, and the attitudes of institutions. (SK)
NASA Astrophysics Data System (ADS)
Garel, F.; Kaminski, E.; Tait, S.; Limare, A.
2010-12-01
A quantitative monitoring of lava flow is required to manage a volcanic crisis, in order to assess where the flow will go and when it will stop. As the spreading of lava flows is mainly controlled by their rheology and the eruptive mass flux, the key question is how to evaluate these during the eruption (rather than afterwards). A relationship between the lava flow temperature and the eruption rate is likely to exist, based on the first-order argument that higher eruption rates should correspond to larger energy radiated by a lava flow. The semi-empirical formula developed by Harris and co-workers (e.g. Harris et al., 2007) is used to estimate lava flow rate from satellite observations. However, the complete theoretical basis of this technique, especially its domain of validity, remains to be firmly established. Here we propose a theoretical study of the cooling of a viscous axisymmetric gravity current fed at a constant flux rate, to investigate whether this approach can or should be refined or modified to better assess flow rates. Our study focuses on the influence of boundary conditions at the surface of the flow, where cooling can occur both by radiation and by convection, and at the base of the flow. Dimensionless numbers are introduced to quantify the relative interplay between the model parameters, such as the lava flow rate and the efficiency of the various cooling processes (conduction, convection, radiation). We obtain that the thermal evolution of the flow can be described as a two-stage evolution. After a transient phase of dynamic cooling, the flow reaches a steady state, characterized by a balance between surface and basal cooling and heat advection in the flow, in which the surface temperature structure is constant. The duration of the transient phase and the radiated energy in the steady regime are shown to be functions of the dimensionless numbers. In the case of lava flows, we obtain that the steady-state regime is reached after a few days. In this regime, a thermal image provides a consistent estimate of the flow rate if the external cooling conditions are reasonably well constrained.
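A first-order sketch of the surface energy budget behind such flow-rate estimates: radiative plus convective heat flux from the flow surface, which in steady state balances the heat advected by the flow. The emissivity, convective coefficient, and temperatures below are illustrative assumptions, not values from the study.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_heat_loss(T_surf, T_amb=300.0, eps=0.95, h_conv=10.0):
    """Radiative + convective heat flux (W/m^2) from a lava flow surface."""
    q_rad = eps * SIGMA * (T_surf**4 - T_amb**4)
    q_conv = h_conv * (T_surf - T_amb)
    return q_rad + q_conv

# Harris-style reasoning: total surface loss (flux x flow area) scales with
# the advected heat and hence with the eruption rate, once steady state holds.
print(surface_heat_loss(450.0 + 273.15))   # crusted surface, ~1.8e4 W/m^2
```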
Agent Reasoning Transparency: The Influence of Information Level on Automation Induced Complacency
2017-06-01
ARL-TR-8044 ● JUNE 2017 ● US Army Research Laboratory. Agent Reasoning Transparency: The Influence of Information Level on Automation-Induced Complacency, by Julia…
NASA Astrophysics Data System (ADS)
Zheng, Sifa; Liu, Haitao; Dan, Jiabi; Lian, Xiaomin
2015-05-01
The linear time-invariant assumption for the determination of acoustic source characteristics (the source strength and the source impedance) in the frequency domain has been proved reasonable in the design of an exhaust system. Different methods have been proposed for their identification, and the multi-load method is widely used for its convenience, varying the load number and impedance. Theoretical error analysis has rarely been addressed, although previous results have shown that an overdetermined set of open pipes can reduce the identification error. This paper contributes a theoretical error analysis for the load selection. The relationships between the error in the identification of source characteristics and the load selection were analysed. A general linear time-invariant model was built based on the four-load method. To analyse the error of the source impedance, an error estimation function was proposed. The dispersion of the source pressure was obtained by an inverse calculation as an indicator of the accuracy of the results. It was found that, for a given load length, the load resistance peaks at frequencies where the length is an odd multiple of one-quarter wavelength, producing the maximum error in source impedance identification. Therefore, load impedances in the frequency ranges around these odd quarter-wavelength frequencies should not be used for source impedance identification. If the selected loads have more similar resistance values (i.e., of the same order of magnitude), the identification error of the source impedance can be effectively reduced.
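The quarter-wavelength condition flagged in this abstract is easy to tabulate. The sketch below, with an assumed speed of sound and illustrative load pipe lengths, lists the frequencies f_n = (2n-1)c/(4L) near which source impedance identification should be avoided.

```python
import numpy as np

# For an open load pipe of length L, the input resistance peaks near
# frequencies where L is an odd multiple of a quarter wavelength:
#     f_n = (2n - 1) * c / (4 * L).
# The abstract recommends excluding these bands from identification.
c = 343.0                      # speed of sound, m/s (assumption)
loads = [0.30, 0.45, 0.60]     # candidate load pipe lengths, m (illustrative)

def quarter_wave_freqs(L, f_max=2000.0):
    """Frequencies (Hz) below f_max at which L is an odd multiple of lambda/4."""
    freqs = []
    n = 1
    while True:
        f = (2 * n - 1) * c / (4 * L)
        if f > f_max:
            return np.array(freqs)
        freqs.append(f)
        n += 1

for L in loads:
    bad = quarter_wave_freqs(L)
    print(f"L = {L:.2f} m: avoid bands near {np.round(bad, 1)} Hz")
```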
Critical Nucleation Length for Accelerating Frictional Slip
NASA Astrophysics Data System (ADS)
Aldam, Michael; Weikamp, Marc; Spatschek, Robert; Brener, Efim A.; Bouchbinder, Eran
2017-11-01
The spontaneous nucleation of accelerating slip along slowly driven frictional interfaces is central to a broad range of geophysical, physical, and engineering systems, with particularly far-reaching implications for earthquake physics. A common approach to this problem associates nucleation with an instability of an expanding creep patch upon surpassing a critical length Lc. The critical nucleation length Lc is conventionally obtained from a spring-block linear stability analysis extended to interfaces separating elastically deformable bodies using model-dependent fracture mechanics estimates. We propose an alternative approach in which the critical nucleation length is obtained from a related linear stability analysis of homogeneous sliding along interfaces separating elastically deformable bodies. For elastically identical half-spaces and rate-and-state friction, the two approaches are shown to yield Lc that features the same scaling structure, but with substantially different numerical prefactors, resulting in a significantly larger Lc in our approach. The proposed approach is also shown to be naturally applicable to finite-size systems and bimaterial interfaces, for which various analytic results are derived. To quantitatively test the proposed approach, we performed inertial Finite-Element-Method calculations for a finite-size two-dimensional elastically deformable body in rate-and-state frictional contact with a rigid body under sideway loading. We show that the theoretically predicted Lc and its finite-size dependence are in reasonably good quantitative agreement with the full numerical solutions, lending support to the proposed approach. These results offer a theoretical framework for predicting rapid slip nucleation along frictional interfaces.
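For readers wanting the conventional spring-block estimate the abstract refers to, the following sketch chains the rate-and-state critical stiffness k_c = sigma*(b - a)/D_c with a patch stiffness k(L) ~ eta*G/L to obtain L_c. The order-one prefactor eta and the parameter values are assumptions (lab-like numbers); the paper's point is precisely that such prefactors are model dependent.

```python
# Back-of-envelope nucleation length from the spring-block stability
# criterion of rate-and-state friction: steady sliding is unstable when the
# loading stiffness falls below k_c = sigma*(b - a)/D_c, and a slipping
# patch of size L has effective stiffness k(L) ~ eta*G/L, giving
#     L_c ~ eta * G * D_c / (sigma * (b - a)).
G = 30e9              # shear modulus, Pa
sigma = 10e6          # normal stress, Pa
a, b = 0.010, 0.015   # rate-and-state parameters (lab-like values)
D_c = 10e-6           # characteristic slip distance, m
eta = 1.0             # order-one geometric prefactor (model-dependent assumption)

k_c = sigma * (b - a) / D_c
L_c = eta * G * D_c / (sigma * (b - a))
print(f"critical stiffness k_c = {k_c:.3e} Pa/m")
print(f"nucleation length  L_c = {L_c:.2f} m")
```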
The mass dependence of dark matter halo alignments with large-scale structure
NASA Astrophysics Data System (ADS)
Piras, Davide; Joachimi, Benjamin; Schäfer, Björn Malte; Bonamigo, Mario; Hilbert, Stefan; van Uitert, Edo
2018-02-01
Tidal gravitational forces can modify the shape of galaxies and clusters of galaxies, thus correlating their orientation with the surrounding matter density field. We study the dependence of this phenomenon, known as intrinsic alignment (IA), on the mass of the dark matter haloes that host these bright structures, analysing the Millennium and Millennium-XXL N-body simulations. We closely follow the observational approach, measuring the halo position-halo shape alignment and subsequently dividing out the dependence on halo bias. We derive a theoretical scaling of the IA amplitude with mass in a dark matter universe, and predict a power law with slope βM in the range 1/3 to 1/2, depending on mass scale. We find that the simulation data agree with each other and with the theoretical prediction remarkably well over three orders of magnitude in mass, with the joint analysis yielding an estimate of β M = 0.36^{+0.01}_{-0.01}. This result does not depend on redshift or on the details of the halo shape measurement. The analysis is repeated on observational data, obtaining a significantly higher value, β M = 0.56^{+0.05}_{-0.05}. There are also small but significant deviations from our simple model in the simulation signals at both the high- and low-mass end. We discuss possible reasons for these discrepancies, and argue that they can be attributed to physical processes not captured in the model or in the dark matter-only simulations.
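A slope such as beta_M is typically estimated by linear regression in log-log space. The sketch below does this on synthetic stand-in data (not the Millennium measurements), with a simple bootstrap for the quoted uncertainty.

```python
import numpy as np

# Fit a power law A_IA ~ M^beta by linear regression in log-log space.
# The data below are synthetic stand-ins, not the simulation measurements.
rng = np.random.default_rng(0)
log_M = np.linspace(12.0, 15.0, 30)            # log10 halo mass (illustrative)
beta_true = 0.36
log_A = beta_true * log_M - 5.0 + rng.normal(0, 0.05, log_M.size)

beta_hat, intercept = np.polyfit(log_M, log_A, 1)

# Simple bootstrap for the uncertainty on the slope
boot = []
for _ in range(2000):
    idx = rng.integers(0, log_M.size, log_M.size)
    boot.append(np.polyfit(log_M[idx], log_A[idx], 1)[0])
lo, hi = np.percentile(boot, [16, 84])
print(f"beta_M = {beta_hat:.3f} (+{hi - beta_hat:.3f} / -{beta_hat - lo:.3f})")
```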
Model-based optical coherence elastography using acoustic radiation force
NASA Astrophysics Data System (ADS)
Aglyamov, Salavat; Wang, Shang; Karpiouk, Andrei; Li, Jiasong; Emelianov, Stanislav; Larin, Kirill V.
2014-02-01
Acoustic Radiation Force (ARF) stimulation is actively used in ultrasound elastography to estimate mechanical properties of tissue. Compared with ultrasound imaging, OCT provides advantage in both spatial resolution and signal-to-noise ratio. Therefore, a combination of ARF and OCT technologies can provide a unique opportunity to measure viscoelastic properties of tissue, especially when the use of high intensity radiation pressure is limited for safety reasons. In this presentation we discuss a newly developed theoretical model of the deformation of a layered viscoelastic medium in response to an acoustic radiation force of short duration. An acoustic impulse was considered as an axisymmetric force generated on the upper surface of the medium. An analytical solution of this problem was obtained using the Hankel transform in frequency domain. It was demonstrated that layers at different depths introduce different frequency responses. To verify the developed model, experiments were performed using tissue-simulating, inhomogeneous phantoms of varying mechanical properties. The Young's modulus of the phantoms was varied from 5 to 50 kPa. A single-element focused ultrasound transducer (3.5 MHz) was used to apply the radiation force with various durations on the surface of phantoms. Displacements on the phantom surface were measured using a phase-sensitive OCT at 25 kHz repetition frequency. The experimental results were in good agreement with the modeling results. Therefore, the proposed theoretical model can be used to reconstruct the mechanical properties of tissue based on ARF/OCT measurements.
Negligent exposures to hand-transmitted vibration.
Griffin, Michael J
2008-04-01
If the negligence of an employer results in a disability in an employee, the employer is responsible, in whole or in part, for the disability. The employer is wholly responsible when the worker would not have developed the disability if the employer had taken all reasonable preventative measures. The employer is only partly responsible if the worker would probably have developed some disability even if the employer had taken all reasonable precautions. The employer's responsibility may be estimated from the difference between the actual disability of the worker and the disability that the worker would have suffered if the employer had taken all reasonable preventative measures. This paper considers alternative ways of apportioning negligent and non-negligent exposures to hand-transmitted vibration. The equivalent daily vibration exposure, A(8), used in current EU Directives is shown to be unsuitable for distinguishing between the consequences of negligent and non-negligent exposures, because the risks of developing a disorder from hand-transmitted vibration also depend on the years of exposure. Furthermore, daily exposures take no account of individual susceptibility or the practicality of reducing exposure. The consequences of employer negligence may be estimated from the delay in the onset and progression of disorder that would have been achieved if the employer had acted reasonably, such as by reducing vibration magnitude and exposure duration to the minimum that was reasonably achievable in the circumstances. This seems fair and reasonable for both employers and employees, and it indicates the consequences of negligence: the period of the worker's life with disease as a result of negligence, and the period for which their employment opportunities may be restricted as a result of the onset of the disorder due to negligence. This definition of negligence encourages employers to reduce risks to the lowest reasonably practical level, consistent with EU Directives.
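For reference, the daily exposure A(8) criticized here is an energy-equivalent 8-hour average, computed roughly as below (tool magnitudes and durations are illustrative).

```python
import math

# Energy-equivalent daily vibration exposure A(8) as used in the EU Physical
# Agents (Vibration) Directive: A(8) = sqrt(sum_i a_i^2 * t_i / T0), T0 = 8 h.
T0 = 8.0  # reference duration, hours

def a8(exposures):
    """exposures: list of (vibration magnitude in m/s^2, duration in hours)."""
    return math.sqrt(sum(a * a * t for a, t in exposures) / T0)

daily = [(4.0, 2.5),   # grinder: 4 m/s^2 for 2.5 h (illustrative)
         (6.0, 0.5)]   # impact drill: 6 m/s^2 for 0.5 h (illustrative)
exposure = a8(daily)
print(f"A(8) = {exposure:.2f} m/s^2")
print("above EU action value (2.5 m/s^2):", exposure > 2.5)
print("above EU limit value  (5.0 m/s^2):", exposure > 5.0)
```

Note that two workers with identical A(8) values but different career lengths face different risks, which is the abstract's core objection to using A(8) alone to apportion negligence.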
Estimating equations estimates of trends
Link, W.A.; Sauer, J.R.
1994-01-01
The North American Breeding Bird Survey monitors changes in bird populations through time using annual counts at fixed survey sites. The usual method of estimating trends has been to use the logarithm of the counts in a regression analysis. It is contended that this procedure is reasonably satisfactory for more abundant species, but produces biased estimates for less abundant species. An alternative estimation procedure based on estimating equations is presented.
Green, Adam E; Kenworthy, Lauren; Gallagher, Natalie M; Antezana, Ligia; Mosner, Maya G; Krieg, Samantha; Dudley, Katherina; Ratto, Allison; Yerys, Benjamin E
2017-05-01
Analogical reasoning is an important mechanism for social cognition in typically developing children, and recent evidence suggests that some forms of analogical reasoning may be preserved in autism spectrum disorder. An unanswered question is whether children with autism spectrum disorder can apply analogical reasoning to social information. In all, 92 children with autism spectrum disorder completed a social content analogical reasoning task presented via photographs of real-world social interactions. Autism spectrum disorder participants exhibited performance that was well above chance and was not significantly worse than age- and intelligence quotient-matched typically developing children. Investigating the relationship of social content analogical reasoning performance to age in this cross-sectional dataset indicated similar developmental trajectories in the autism spectrum disorder and typically developing children groups. These findings provide new support for intact analogical reasoning in autism spectrum disorder and have theoretical implications for analogy as a metacognitive skill that may be at least partially dissociable from general deficits in processing social content. As an initial study of social analogical reasoning in children with autism spectrum disorder, this study focused on a basic research question with limited ecological validity. Evidence that children with autism spectrum disorder can apply analogical reasoning ability to social content may have long-range applied implications for exploring how this capacity might be channeled to improve social cognition in daily life.
NASA Technical Reports Server (NTRS)
Yeh, Pat J.-F.; Swenson, S. C.; Famiglietti, J. S.; Rodell, M.
2007-01-01
Regional groundwater storage changes in Illinois are estimated from monthly GRACE total water storage change (TWSC) data and in situ measurements of soil moisture for the period 2002-2005. Groundwater storage change estimates are compared to those derived from the soil moisture and available well level data. The seasonal pattern and amplitude of GRACE-estimated groundwater storage changes track those of the in situ measurements reasonably well, although substantial differences exist in month-to-month variations. The seasonal cycle of GRACE TWSC agrees well with observations (correlation coefficient = 0.83), while the seasonal cycle of GRACE-based estimates of groundwater storage changes beneath 2 m depth agrees with observations with a correlation coefficient of 0.63. We conclude that the GRACE-based method of estimating monthly to seasonal groundwater storage changes performs reasonably well at the 200,000 sq km scale of Illinois.
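The residual method described here amounts to subtracting the soil-moisture change from the GRACE total water storage change, with other storage components neglected in this sketch. A toy version with synthetic seasonal signals (illustrative numbers, not the Illinois data):

```python
import numpy as np

# Groundwater storage change as the residual of GRACE total water storage
# change minus in situ soil-moisture change. All series are synthetic.
rng = np.random.default_rng(1)
months = np.arange(12)
seasonal = np.sin(2 * np.pi * (months - 3) / 12)
dTWS = 40 * seasonal + rng.normal(0, 8, 12)   # GRACE-like TWSC, mm
dSM = 25 * seasonal + rng.normal(0, 4, 12)    # in situ soil moisture change, mm
dGW_grace = dTWS - dSM                        # residual groundwater estimate
dGW_wells = 15 * seasonal                     # well-based "truth", mm

r = np.corrcoef(dGW_grace, dGW_wells)[0, 1]
print(f"seasonal correlation with well data: r = {r:.2f}")
```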
A subagging regression method for estimating the qualitative and quantitative state of groundwater
NASA Astrophysics Data System (ADS)
Jeong, J.; Park, E.; Choi, J.; Han, W. S.; Yun, S. T.
2016-12-01
A subagging regression (SBR) method for the analysis of groundwater data, pertaining to the estimation of trends and the associated uncertainty, is proposed. The SBR method is validated against synthetic data in comparison with other conventional robust and non-robust methods. The results verify that the estimation accuracies of the SBR method are consistent and superior to those of the other methods, and that the uncertainties are reasonably estimated, whereas the other methods offer no uncertainty analysis. For further validation, real quantitative and qualitative data are analysed comparatively with Gaussian process regression (GPR). In all cases, the trend and the associated uncertainties are reasonably estimated by SBR, whereas GPR has limitations in representing the variability of non-Gaussian, skewed data. These results indicate that the SBR method has potential to be further developed as an effective tool for anomaly detection or outlier identification in groundwater state data.
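A generic subagging trend estimator, in the spirit of (but not identical to) the authors' SBR, can be sketched as follows: fit the trend on many random subsamples drawn without replacement, then aggregate, using the spread of subsample estimates as the uncertainty band.

```python
import numpy as np

# Generic subagging (subsample aggregating) trend estimate. The subsample
# size, number of replicates, and noise model are all illustrative choices.
rng = np.random.default_rng(42)
t = np.arange(120)                                      # months
y = 0.05 * t + 3.0 + rng.standard_t(df=3, size=t.size)  # heavy-tailed noise

slopes = []
m = t.size // 2                                      # subsample size (choice)
for _ in range(500):
    idx = rng.choice(t.size, size=m, replace=False)  # subsample, no replacement
    slopes.append(np.polyfit(t[idx], y[idx], 1)[0])

slopes = np.array(slopes)
est = np.median(slopes)                              # aggregated trend
lo, hi = np.percentile(slopes, [2.5, 97.5])          # uncertainty band
print(f"trend = {est:.4f} per month, 95% band [{lo:.4f}, {hi:.4f}]")
```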
Comparison of Estimates in the 1996 National Household Education Survey. Working Paper Series.
ERIC Educational Resources Information Center
Nolin, Mary Jo; Collins, Mary A.; Vaden-Kiernan, Nancy; Davies, Elizabeth
This report compares selected estimates from the 1996 National Household Education Survey (NHES:96) with estimates from previous NHES collections, the Current Population Survey (CPS), and other relevant data sources. The comparisons provide an indication of the reasonableness of selected NHES:96 estimates. Where discrepancies were found between…
NASA Astrophysics Data System (ADS)
Paradis, Charles J.; McKay, Larry D.; Perfect, Edmund; Istok, Jonathan D.; Hazen, Terry C.
2018-03-01
The analytical solution describing the one-dimensional displacement of the center of mass of a tracer during an injection, drift, and extraction test (push-pull test) was expanded to account for displacement during the injection phase. The solution was expanded to improve the in situ estimation of effective porosity. The truncated equation assumed that displacement during the injection phase was negligible, which may theoretically lead to an underestimation of the true value of effective porosity. To experimentally compare the expanded and truncated equations, single-well push-pull tests were conducted across six test wells located in a shallow, unconfined aquifer composed of unconsolidated, heterogeneous silty and clayey fill materials. The push-pull tests were conducted by injection of bromide tracer, followed by a non-pumping period, and subsequent extraction of groundwater. The values of effective porosity from the expanded equation (0.6-5.0%) were substantially greater than from the truncated equation (0.1-1.3%). The expanded and truncated equations were compared to data from previous push-pull studies in the literature, which demonstrated that displacement during the injection phase may or may not be negligible, depending on the aquifer properties and the push-pull test parameters. The results presented here also demonstrated that the spatial variability of effective porosity within a relatively small study site can be substantial, and that the error-propagated uncertainty of effective porosity can be mitigated to a reasonable level (< ± 0.5%). The tests presented here are also the first that the authors are aware of to estimate, in situ, the effective porosity of fine-grained fill material.
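A much-simplified one-dimensional illustration of the bias the authors correct: the tracer's center of mass moves at the seepage velocity v = K*i/n_e, so attributing the observed displacement to the drift phase alone (the truncated view) underestimates n_e. The relation and parameter values below are generic hydrogeology, not the authors' expanded equation.

```python
# Simplified 1-D illustration (not the authors' equations): the tracer's
# center of mass drifts at the seepage velocity v = K*i/n_e (Darcy velocity
# divided by effective porosity). All values illustrative.
K = 1e-5               # hydraulic conductivity, m/s
i = 0.01               # ambient hydraulic gradient
t_inj = 2 * 3600.0     # injection duration, s
t_drift = 24 * 3600.0  # drift duration, s

n_e_true = 0.03
v = K * i / n_e_true
# Suppose drift-driven displacement accrues (roughly) during injection too:
x_obs = v * (t_drift + t_inj)

n_e_truncated = K * i * t_drift / x_obs            # injection phase ignored
n_e_expanded = K * i * (t_drift + t_inj) / x_obs   # injection phase included
print(f"truncated n_e = {n_e_truncated:.4f}, expanded n_e = {n_e_expanded:.4f}")
```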
Brownian motion with adaptive drift for remaining useful life prediction: Revisited
NASA Astrophysics Data System (ADS)
Wang, Dong; Tsui, Kwok-Leung
2018-01-01
Linear Brownian motion with constant drift is widely used in remaining useful life predictions because its first hitting time follows the inverse Gaussian distribution. State space modelling of linear Brownian motion was proposed to make the drift coefficient adaptive and to incorporate on-line measurements into the first hitting time distribution. Here, the drift coefficient followed the Gaussian distribution, and it was iteratively estimated by Kalman filtering each time a new measurement became available. Then, to model nonlinear degradation, linear Brownian motion with adaptive drift was extended to nonlinear Brownian motion with adaptive drift. However, previous studies made an underlying assumption in the state space modelling that, in the update phase of Kalman filtering, the predicted drift coefficient at the current time exactly equals the posterior drift coefficient estimated at the previous time, which contradicts a predicted drift coefficient evolution driven by additive Gaussian process noise. In this paper, to remove this assumption, a new state space model is constructed. As a result, in the update phase of Kalman filtering, the predicted drift coefficient at the current time evolves from the posterior drift coefficient at the previous time. Moreover, the optimal Kalman filtering gain for iteratively estimating the posterior drift coefficient at any time is mathematically derived. A discussion is provided that theoretically explains why the constructed state space model achieves high remaining useful life prediction accuracy. Finally, the proposed state space model and its associated Kalman filtering gain are applied to battery prognostics.
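The idea can be sketched with a scalar Kalman filter tracking the drift of a Wiener degradation process, followed by the inverse Gaussian mean first-hitting time. This is a generic version of the adaptive-drift scheme, not the authors' exact state space model; all numbers are illustrative.

```python
import numpy as np

# Wiener degradation x_k = x_{k-1} + eta*dt + sigma_B*sqrt(dt)*noise, with
# the drift eta tracked by a scalar Kalman filter from the increments.
rng = np.random.default_rng(7)
dt, sigma_B = 1.0, 0.05        # time step and diffusion coefficient
q, r = 1e-6, sigma_B**2 * dt   # drift process noise / increment noise variances
eta_true, x = 0.02, 0.0        # true drift and degradation level

eta_hat, P = 0.0, 1.0          # filter state: drift estimate and its variance
threshold = 5.0                # failure threshold D

for k in range(200):
    x_prev = x
    x += eta_true * dt + sigma_B * np.sqrt(dt) * rng.standard_normal()
    # Predict: drift follows a random walk, eta_k = eta_{k-1} + w_k
    P += q
    # Update from the increment observation z = x_k - x_{k-1} = eta*dt + noise
    z = x - x_prev
    K_gain = P * dt / (dt * P * dt + r)
    eta_hat += K_gain * (z - eta_hat * dt)
    P = (1 - K_gain * dt) * P

# First hitting time of level D for a Wiener process with drift eta is
# inverse Gaussian with mean (D - x)/eta, giving a mean RUL estimate.
rul_mean = (threshold - x) / eta_hat
print(f"estimated drift {eta_hat:.4f} (true {eta_true}), mean RUL {rul_mean:.1f}")
```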
Accurate paleointensities - the multi-method approach
NASA Astrophysics Data System (ADS)
de Groot, Lennart
2016-04-01
The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic time) have seen significant improvements, and various alternative techniques have been proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al., 2014). The Multispecimen approach has been validated, although the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed, the pseudo-Thellier protocol, which shows great potential in both accuracy and efficiency but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units, an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.
ERIC Educational Resources Information Center
Malcolm, Peter
2013-01-01
The ability to make good estimates is essential, as is the ability to assess the reasonableness of estimates. These abilities are becoming increasingly important as digital technologies transform the ways in which people work. To estimate is to provide an approximation to a problem that is mathematical in nature, and the ability to estimate is…
Theory-Guided Technology in Computer Science.
ERIC Educational Resources Information Center
Ben-Ari, Mordechai
2001-01-01
Examines the history of major achievements in computer science as portrayed by winners of the prestigious Turing award and identifies a possibly unique activity called Theory-Guided Technology (TGT). Researchers develop TGT by using theoretical results to create practical technology. Discusses reasons why TGT is practical in computer science and…
Handling Questions and Objections Affects Audience Judgments of Speakers
ERIC Educational Resources Information Center
Daly, John A.; Redlick, Madeleine H.
2016-01-01
Listeners evaluate well-delivered presentations more positively than those that are poorly delivered. In today's world, presenters often face challenging questions and objections from listeners during or after their presentations. Surprisingly, while there are a number of theoretical reasons to anticipate that how presenters respond to objections…
Urban development applications project. Urban technology transfer study
NASA Technical Reports Server (NTRS)
1975-01-01
Technology transfer is defined along with reasons for attempting to transfer technology. Topics discussed include theoretical models, stages of the innovation model, communication process model, behavior of industrial organizations, problem identification, technology search and match, establishment of a market mechanism, applications engineering, commercialization, and management of technology transfer.
ERIC Educational Resources Information Center
Ulijn, Jan M.; O'Duill, Micheal; Robertson, Stephen A.
2004-01-01
From personal relationships to complex business dealings, negotiations are essential forms of communication. But negotiation skills are often neglected in university courses. One reason for this neglect is the difficulty of teaching negotiations effectively. Such teaching requires both an underlying theoretical base and activities that provide…
Perry and Piaget: Theoretical Framework for Effective College Course Development.
ERIC Educational Resources Information Center
Mellon, Constance A.; Sass, Edmund
1981-01-01
Discusses the relationship between Piaget's theory of cognitive development and Perry's theory of intellectual and ethical development, and recommends a framework for their application in course design. Involving students in examining not only course content, but also their beliefs and reasoning patterns, is recommended as a route for improving…
ERIC Educational Resources Information Center
Ulrici, Donna; And Others
1981-01-01
Provides a model for categorizing marital and family skill training programs according to their theoretical orientation. Describes emotional, reasoning, and action approaches to intervention which allow counselors to examine the relationship between client characteristics and intervention approaches. (JAC)
Sexual Resourcefulness and the Impact of Family, Sex Education, Media and Peers
ERIC Educational Resources Information Center
Kennett, Deborah J.; Humphreys, Terry P.; Schultz, Kristen E.
2012-01-01
Building on a recently developed theoretical model of sexual self-control, 178 undergraduate women completed measures of learned resourcefulness, reasons for consenting to unwanted advances, and sexual self-efficacy--variables consistently shown to be unique predictors of sexual resourcefulness. Additional measures assessed in this investigation…
Working Memory Underpins Cognitive Development, Learning, and Education
ERIC Educational Resources Information Center
Cowan, Nelson
2014-01-01
Working memory is the retention of a small amount of information in a readily accessible form. It facilitates planning, comprehension, reasoning, and problem solving. I examine the historical roots and conceptual development of the concept and the theoretical and practical implications of current debates about working memory mechanisms. Then, I…
ERIC Educational Resources Information Center
Mabovula, Nonceba
2010-01-01
I apply as theoretical framework the Habermassian principles of "communicative action" and "consensus" through deliberation and reasoning. In particular, I focus on "rational" and "argumentative" communication through which school governance stakeholders could advance arguments and counter-arguments. I…
Teacher Educators and the Production of Bricoleurs: An Ethnographic Study.
ERIC Educational Resources Information Center
Hatton, Elizabeth
1997-01-01
Reports and discusses data from an ethnographic study of teacher educators in which a metaphor for teachers' work as bricolage generates a hypothesis about teacher education as a conservative determinant of teachers' work. Discusses the results and reasons why the theoretical adequacy of the bricolage explanation needs improvement. (DSK)
Fostering Cultural Diversity: Problems of Access and Ethnic Boundary Maintenance
Maria T. Allison
1992-01-01
This presentation explores theoretical reasons for the underutilization of services, discusses types and problems of access which may be both inadvertent and institutionalized, and discusses policy implications of this work. Data suggest that individuals from distinct ethnic populations, particularly Hispanic, African-American, and Native American, tend to underutilize...
The Antieconomy Hypothesis (Part 2): Theoretical Roots
ERIC Educational Resources Information Center
Vanderburg, Willem H.
2009-01-01
The hypothesis of an antieconomy developed in part 1 is incommensurate with mainstream economics. This article explores three reasons for this situation: the limits of discipline-based scholarship in general and of mainstream economics in particular, the status of economists in contemporary societies, and the failure of economists to accept any…
Values and Norms of Proof for Mathematicians and Students
ERIC Educational Resources Information Center
Dawkins, Paul Christian; Weber, Keith
2017-01-01
In this theoretical paper, we present a framework for conceptualizing proof in terms of mathematical values, as well as the norms that uphold those values. In particular, proofs adhere to the values of establishing a priori truth, employing decontextualized reasoning, increasing mathematical understanding, and maintaining consistent standards for…
Olson's "Cognitive Development": A Commentary.
ERIC Educational Resources Information Center
Follettie, Joseph F.
This report is a review of Olson's "Cognitive Development." Unlike a typical book review it does not compare and contrast the author's theoretical framework and methodological practices with those of others in the field, but rather it extensively describes and critiques the reported empirical work. The reasons given for this approach are that…
The Development of Multiplicative Reasoning in the Learning of Mathematics.
ERIC Educational Resources Information Center
Harel, Guershon, Ed.; Confrey, Jere, Ed.
This book is a compilation of recent research on the development of multiplicative concepts. The sections and chapters are: (1) Theoretical Approaches: "Children's Multiplying Schemes" (L. Steffe), "Multiplicative Conceptual Field: What and Why?" (G. Vergnaud), "Extending the Meaning of Multiplication and Division" (B. Greer); (2) The Role of the…
DOT National Transportation Integrated Search
2015-02-01
The purpose of the Impact Assessment Plan is to take the results of the test track or field tests of the prototype, make reasonable extrapolations of those results to a theoretical full scale implementation, and answer the following 7 questions relat...
Theoretical and Experimental Particle Velocity in Cold Spray
NASA Astrophysics Data System (ADS)
Champagne, Victor K.; Helfritch, Dennis J.; Dinavahi, Surya P. G.; Leyman, Phillip F.
2011-03-01
In an effort to corroborate theoretical and experimental techniques used for cold spray particle velocity analysis, two theoretical and one experimental methods were used to analyze the operation of a nozzle accelerating aluminum particles in nitrogen gas. Two-dimensional (2D) axi-symmetric computations of the flow through the nozzle were performed using the Reynolds averaged Navier-Stokes code in a computational fluid dynamics platform. 1D, isentropic, gas-dynamic equations were solved for the same nozzle geometry and initial conditions. Finally, the velocities of particles exiting a nozzle of the same geometry and operated at the same initial conditions were measured by a dual-slit velocimeter. Exit plume particle velocities as determined by the three methods compared reasonably well, and differences could be attributed to frictional and particle distribution effects.
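The 1D gas-dynamic and drag calculation can be sketched as follows: isentropic relations give the gas state at an assumed exit Mach number, and an explicit time march integrates the drag equation for a single particle. Conditions (nitrogen at 773 K stagnation temperature, a 20 micron aluminum particle, representative gas density and drag coefficient) are generic cold-spray values, not the paper's exact setup.

```python
import numpy as np

# Gas state from isentropic relations, then particle acceleration by drag.
gamma, R = 1.4, 296.8          # nitrogen: heat capacity ratio, gas constant
T0, M_exit = 773.0, 3.0        # stagnation temperature (K), exit Mach (choice)

T = T0 / (1 + 0.5 * (gamma - 1) * M_exit**2)   # static temperature, K
v_gas = M_exit * np.sqrt(gamma * R * T)        # gas velocity, m/s

rho_g = 1.0        # representative gas density in the nozzle, kg/m^3 (rough)
d_p = 20e-6        # particle diameter, m
rho_p = 2700.0     # aluminum density, kg/m^3
Cd = 1.0           # representative drag coefficient (rough)
m_p = rho_p * np.pi * d_p**3 / 6
A_p = np.pi * d_p**2 / 4

# Time-march the particle until it traverses a nozzle of length L
dt, x, v_p, L = 1e-7, 0.0, 0.0, 0.15
while x < L:
    F = 0.5 * rho_g * Cd * A_p * (v_gas - v_p) * abs(v_gas - v_p)
    v_p += F / m_p * dt
    x += v_p * dt
print(f"gas velocity {v_gas:.0f} m/s, particle exit velocity {v_p:.0f} m/s")
```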
Scaling Up Decision Theoretic Planning to Planetary Rover Problems
NASA Technical Reports Server (NTRS)
Meuleau, Nicolas; Dearden, Richard; Washington, Rich
2004-01-01
Because of communication limits, planetary rovers must operate autonomously for extended periods. The ability to plan under uncertainty is one of the main components of autonomy. Previous approaches to planning under uncertainty in NASA applications cannot address the challenges of future missions because of several apparent limits. Decision theory, on the other hand, provides a solid, principled framework for reasoning about uncertainty and rewards. Unfortunately, there are several obstacles to a direct application of decision-theoretic techniques to the rover domain. This paper focuses on the issues of structure, concurrency, and continuous state variables. We describe two techniques currently under development that specifically address these issues and allow scaling up decision-theoretic solution techniques to planetary rover planning problems involving a small number of goals.
Framing curriculum discursively: theoretical perspectives on the experience of VCE physics
NASA Astrophysics Data System (ADS)
Hart, Christina
2002-10-01
The process of developing prescribed curricula has been subject to little empirical investigation, and there have been few attempts to develop theoretical frameworks for understanding the shape and content of particular subjects. This paper presents an account of the author's experience of developing a new course for school physics in the State of Victoria, Australia, at the end of the 1980s. The course was to represent a significant departure from traditional physics courses, and was intended to broaden participation and improve the quality of student learning. In the event the new course turned out to be very similar to traditional courses in Physics. The paper explores the reasons for this outcome. Some powerful discursive mechanisms are identified and some implications of post-structuralism for the theoretical understanding of curriculum are discussed.
Building Protection Against External Ionizing Fallout Radiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dillon, Michael B.; Homann, Steven G.
A nuclear explosion has the potential to injure or kill tens to hundreds of thousands of people through exposure to fallout (external gamma) radiation. Existing buildings can protect their occupants (reducing external radiation exposures) by placing material and distance between fallout particles and indoor individuals. This protection is not well captured in current fallout risk assessment models, and so the US Department of Defense is implementing the Regional Shelter Analysis methodology to improve the ability of the Hazard Prediction and Assessment Capability (HPAC) model to account for building protection. This report supports the HPAC improvement effort by identifying a set of building attributes that, when collectively specified, are sufficient to calculate reasonably accurate (i.e., within a factor of 2) fallout shelter quality estimates for many individual buildings. The set of building attributes was determined by first identifying the key physics controlling building protection from fallout radiation and then assessing which building attributes are relevant to the identified physics. This approach was evaluated by developing a screening model (PFscreen) based on the identified physics and comparing the screening model results against a set of existing independent experimental, theoretical, and modeled building protection estimates. In the interests of transparency, we have developed a benchmark dataset containing (a) most of the relevant primary experimental data published by prior generations of fallout protection scientists, as well as (b) the screening model results.
Bondi Accretion and the Problem of the Missing Isolated Neutron Stars
NASA Technical Reports Server (NTRS)
Perna, Rosalba; Narayan, Ramesh; Rybicki, George; Stella, Luigi; Treves, Aldo
2003-01-01
A large number of neutron stars (NSs), approximately 10^9, populate the Galaxy, but only a tiny fraction of them is observable during the short radio pulsar lifetime. The majority of these isolated NSs, too cold to be detectable by their own thermal emission, should be visible in X-rays as a result of accretion from the interstellar medium. The ROSAT All-Sky Survey has, however, shown that such accreting isolated NSs are very elusive: only a few tentative candidates have been identified, contrary to theoretical predictions that up to several thousand should be seen. We suggest that the fundamental reason for this discrepancy lies in the use of the standard Bondi formula to estimate the accretion rates. We compute the expected source counts using updated estimates of the pulsar velocity distribution, realistic hydrogen atmosphere spectra, and a modified expression for the Bondi accretion rate, as suggested by recent MHD simulations and supported by direct observations in the case of accretion around supermassive black holes in nearby galaxies and in our own. We find that, whereas the inclusion of atmospheric spectra partly compensates for the reduction in the counts due to the higher mean velocities of the new distribution, the modified Bondi formula dramatically suppresses the source counts. The new predictions are consistent with a null detection at the ROSAT sensitivity.
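The standard Bondi(-Hoyle) rate at the center of the argument is Mdot = 4*pi*G^2*M^2*rho/(v^2 + c_s^2)^(3/2). The sketch below evaluates it for an old neutron star in the warm interstellar medium at two space velocities, showing how strongly the rate, and hence the accretion luminosity, falls with velocity. The ISM density, sound speed, and neutron star radius are illustrative assumptions.

```python
import numpy as np

# Classic Bondi-Hoyle accretion rate and the implied accretion luminosity
# L = G*M*Mdot/R for a neutron star. All environmental values illustrative.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M = 1.4 * 1.989e30       # neutron star mass, kg
R_ns = 10e3              # neutron star radius, m
n_H = 1e6                # 1 hydrogen atom per cm^3, expressed in m^-3
rho = n_H * 1.67e-27     # ISM mass density, kg/m^3
cs = 10e3                # ISM sound speed, m/s

for v_kms in (40.0, 200.0):
    v = v_kms * 1e3
    mdot = 4 * np.pi * G**2 * M**2 * rho / (v**2 + cs**2) ** 1.5
    L = G * M * mdot / R_ns
    print(f"v = {v_kms:5.0f} km/s: Mdot = {mdot:.2e} kg/s, L = {L:.2e} W")

# The steep (v^2 + cs^2)^(-3/2) dependence is why a higher-velocity pulsar
# population, plus any further suppression of the Bondi rate, decimates the
# predicted ROSAT source counts.
```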
Mathematical estimation of melt depth in conduction mode of laser spot remelting process
NASA Astrophysics Data System (ADS)
Hadi, Iraj
2012-12-01
A one-dimensional mathematical model based on the front tracking method was developed to predict the melt depth as a function of internal and external parameters of laser spot remelting process in conduction mode. Power density, pulse duration, and thermophysical properties of material including thermal diffusivity, melting point, latent heat, and absorption coefficient have been taken into account in the model of this article. By comparing the theoretical results and experimental welding data of commercial pure nickel and titanium plates, the validity of the developed model was examined. Comparison shows a reasonably good agreement between the theory and experiment. For the sake of simplicity, a graphical technique was presented to obtain the melt depth of various materials at any arbitrary amount of power density and pulse duration. In the graphical technique, two dimensionless constants including the Stefan number (Ste) and an introduced constant named laser power factor (LPF) are used. Indeed, all of the internal and external parameters have been gathered in LPF. The effect of power density and pulse duration on the variation of melt depth for different materials such as aluminum, copper, and stainless steel were investigated. Additionally, appropriate expressions were extracted to describe the minimum power density and time to reach melting point in terms of process parameters. A simple expression is also extracted to estimate the thickness of mushy zone for alloys.
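Of the two dimensionless groups, the Stefan number is standard, Ste = c_p*(T_m - T_0)/L_f, and can be tabulated from handbook-level property values as below; the laser power factor (LPF) is specific to the paper, so it is not reproduced here.

```python
# Stefan number Ste = c_p * (T_m - T_0) / L_f for a few of the materials the
# abstract mentions. Property values are approximate handbook numbers.
materials = {
    #            c_p (J/kg/K)  T_m (K)   L_f (J/kg)
    "aluminum": (900.0,        933.0,    3.97e5),
    "copper":   (385.0,        1358.0,   2.05e5),
    "nickel":   (444.0,        1728.0,   2.98e5),
}
T0 = 293.0  # initial temperature, K

for name, (c_p, T_m, L_f) in materials.items():
    ste = c_p * (T_m - T0) / L_f
    print(f"{name:9s} Ste = {ste:.2f}")
```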
THE SUCCESSIVE LINEAR ESTIMATOR: A REVISIT. (R827114)
This paper examines the theoretical basis of the successive linear estimator (SLE) that has been developed for the inverse problem in subsurface hydrology. We show that the SLE algorithm is a non-linear iterative estimator to the inverse problem. The weights used in the SLE al...
Assessing Alternative Endpoints for Groundwater Remediation at Contaminated Sites
2011-05-01
(HRC), SVE, in-well aeration, phytoremediation, excavation, and pump-and-treat (Appendix A, sites 2, 7, 21, 42, 43, 48, 55, 69, 72, and 77). Three… phytoremediation; 2001 FS, TI evaluation, and ROD. Reason(s) for TI approval (primary reasons): DNAPL is present in the surficial aquifer. …given. Cost estimate: not given. Final remedy: free-phase DNAPL recovery in a localized area, continued phytoremediation, monitored biodegradation.
Safety belt promotion: theory and practice.
Nelson, G D; Moffit, P B
1988-02-01
The purpose of this paper is to provide practitioners a rationale and description of selected theoretically based approaches to safety belt promotion. Theory failure is a threat to the integrity and effectiveness of safety belt promotion. The absence of theory driven programs designed to promote safety belt use is a concern of this paper. Six theoretical models from the social and behavioral sciences are reviewed with suggestions for application to promoting safety belt use and include Theory of Reasoned Action, the Health Belief Model, Fear Arousal, Operant Learning, Social Learning Theory, and Diffusion of Innovations. Guidelines for the selection and utilization of theory are discussed.
Ostrovsky, Lev A; Sutin, Alexander M; Soustova, Irina A; Matveyev, Alexander L; Potapov, Andrey I; Kluzek, Zigmund
2003-02-01
The paper describes nonlinear effects due to a biharmonic acoustic signal scattering from air bubbles in the sea. The results of field experiments in a shallow sea are presented. Two waves radiated at frequencies 30 and 31-37 kHz generated backscattered signals at sum and difference frequencies in a bubble layer. A motorboat propeller was used to generate bubbles with different concentrations at different times, up to the return to the natural subsurface layer. Theoretical consideration is given for these effects. The experimental data are in a reasonably good agreement with theoretical predictions.
Midwives׳ clinical reasoning during second stage labour: Report on an interpretive study.
Jefford, Elaine; Fahy, Kathleen
2015-05-01
Clinical reasoning was once thought to be the exclusive domain of medicine, setting it apart from 'non-scientific' occupations like midwifery. Poor assessment, clinical reasoning, and decision-making skills are well-known contributors to adverse outcomes in maternity care. Midwifery decision-making models share a common deficit: they are insufficiently detailed to guide reasoning processes for midwives in practice. For these reasons we wanted to explore whether, and to what extent, midwives actively engage in clinical reasoning processes within their clinical practice. The study was conducted using post-structural, feminist methodology. The research question was: to what extent do midwives engage in clinical reasoning processes when making decisions in second stage labour? Twenty-six practising midwives were interviewed. Feminist interpretive analysis was conducted by two researchers, guided by the steps of a model of clinical reasoning process. Six narratives were excluded from analysis because they did not sufficiently address the research question. The midwives' narratives were prepared via data reduction, and a theoretically informed analysis and interpretation was conducted. Using a feminist, interpretive approach, we created a model of midwifery clinical reasoning grounded in the literature and consistent with the data. Thirteen of the 20 participant narratives demonstrated analytical clinical reasoning abilities, but only nine completed the process and implemented the decision. Seven midwives used non-analytical decision-making without adequately checking against assessment data. Over half of the participants demonstrated the ability to use clinical reasoning skills; less than half demonstrated clinical reasoning as their way of making decisions. The new model of Midwifery Clinical Reasoning includes 'intuition' as a valued way of knowing. Using intuition, however, should not replace clinical reasoning, through which decision-making can be made transparent and consensually validated. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Lowrie, Tom; Jorgensen, Robyn
2018-03-01
Since the early 70s, there has been recognition that there are specific differences in achievement based on variables, such as gender and socio-economic background, in terms of mathematics performance. However, these differences are not unilateral but rather quite specific and relate strongly to spatial reasoning. This early work has paved the way for thinking critically about who achieves in mathematics and why. This project innovatively combines the strengths of the two Chief Investigators—Lowrie's work in spatial reasoning and Jorgensen's work in equity. The assumptions, the approach and theoretical framing used in the study unite quite disparate areas of mathematics education into a cogent research program that seeks to challenge some of the long-held views in the field of mathematics education.
The effects of group stereotypes on adolescents' reasoning about peer retribution.
Pitner, Ronald O; Astor, Ron Avi; Benbenishty, Rami; Haj-Yahia, Muhammad M; Zeira, Anat
2003-01-01
This study examined the effects of negative group stereotypes on adolescents' reasoning about peer retribution. The sample of adolescents was drawn from central and northern Israel and consisted of 2,604 Arab and Jewish students (ages 13-17; grades 7-11). A quasi-experimental, between-subject design was used, in which the students in each grade were assigned randomly to 1 of 4 peer retribution scenarios. The findings provide evidence that Arab and Jewish students have stereotypes about one another and that in-group bias affected their approval and reasoning about peer retribution only in specific situations. This inquiry provides evidence that it was the number of justifications endorsed within a specific domain that distinguished Arab and Jewish respondents. Theoretical and practical implications are discussed.
Playing spades: The rich resources of African American young men
NASA Astrophysics Data System (ADS)
Schademan, Alfred R.
Research has shown that African American young men as a demographic group occupy the lowest levels of academic performance in both science and mathematics. In spite of this educational problem, little research has been conducted on the knowledge related to these disciplines that these young men learn and develop through everyday cultural practices. Such knowledge is needed in order to: (1) combat the deficit views that many teachers currently hold of African American young men, and (2) inform teachers interested in implementing pedagogies in their classrooms that draw upon the knowledge of African American young men. To add to our knowledge in this field, this study examines the resources that African American young men learn, use, and develop through a card game called Spades. Specifically, the study identifies and analyzes the models and model-based reasoning that the players use in order to win games. The study focuses upon modeling as it is central to both science and mathematics. To imbed player models and reasoning in context, the study employs a syncretic theoretical framework that examines how Spades has changed over time and how it is currently played in a high school setting. The qualitative study uses ethnographic methods combined with play-by-play analyses to reconstruct games and examine player strategies and reasoning that guide their decisions. The study found that the players operate from a number of different models while playing the game. Specifically, the players consider multiple variables and factors, as well as their mathematical relationships, to predict future occurrences and then play cards accordingly. Further, the players use a number of resources to win games including changing the game to maintain a competitive edge, counting cards, selectively memorizing cards played, assessing risk, bluffing, reading partners as well as opponents, reneging, estimating probabilities, and predicting outcomes. The player models and resources bear striking resemblance to what scientists and mathematicians do when modeling. Lastly, the study identifies eight features of Spades that make it a rich context for the learning and development of significant forms of reasoning. Most importantly, Spades is an empowering context through which the players both learn and display their resources and abilities in order to deal with complex situations. Consequently, the study provides evidence that many African American young men routinely employ types of reasoning in everyday practices that are robust and relevant to science and mathematics.
Varadarajan, Divya; Haldar, Justin P
2017-11-01
The data measured in diffusion MRI can be modeled as the Fourier transform of the Ensemble Average Propagator (EAP), a probability distribution that summarizes the molecular diffusion behavior of the spins within each voxel. This Fourier relationship is potentially advantageous because of the extensive theory that has been developed to characterize the sampling requirements, accuracy, and stability of linear Fourier reconstruction methods. However, existing diffusion MRI data sampling and signal estimation methods have largely been developed and tuned without the benefit of such theory, instead relying on approximations, intuition, and extensive empirical evaluation. This paper aims to address this discrepancy by introducing a novel theoretical signal processing framework for diffusion MRI. The new framework can be used to characterize arbitrary linear diffusion estimation methods with arbitrary q-space sampling, and can be used to theoretically evaluate and compare the accuracy, resolution, and noise-resilience of different data acquisition and parameter estimation techniques. The framework is based on the EAP, and makes very limited modeling assumptions. As a result, the approach can even provide new insight into the behavior of model-based linear diffusion estimation methods in contexts where the modeling assumptions are inaccurate. The practical usefulness of the proposed framework is illustrated using both simulated and real diffusion MRI data in applications such as choosing between different parameter estimation methods and choosing between different q-space sampling schemes. Copyright © 2017 Elsevier Inc. All rights reserved.
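The Fourier relationship at the heart of this framework can be checked in one dimension: for free diffusion the q-space signal is Gaussian, and its inverse FFT should recover the analytic Gaussian EAP. The grid sizes and diffusion parameters below are illustrative.

```python
import numpy as np

# For free (Gaussian) diffusion, E(q) = exp(-4*pi^2*q^2*D*Delta) and the EAP
# is a Gaussian of variance 2*D*Delta, so a linear FFT reconstruction can be
# checked against the analytic answer. 1-D sketch with illustrative values.
D, Delta = 2.0e-3, 0.05        # diffusivity (mm^2/s), diffusion time (s)
n, dq = 256, 5.0               # number of samples and q-spacing (1/mm)
q = (np.arange(n) - n // 2) * dq
E = np.exp(-4 * np.pi**2 * q**2 * D * Delta)

# Discrete approximation of P(r) = integral E(q) exp(2*pi*i*q*r) dq
P = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(E))).real * n * dq
r = np.fft.fftshift(np.fft.fftfreq(n, d=dq))
P_analytic = np.exp(-r**2 / (4 * D * Delta)) / np.sqrt(4 * np.pi * D * Delta)

print("max abs error:", np.max(np.abs(P - P_analytic)))
```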
Integration in psychotherapy: Reasons and challenges.
Fernández-Álvarez, Héctor; Consoli, Andrés J; Gómez, Beatriz
2016-11-01
Although integration has been formally influencing the field of psychotherapy since the 1930s, its impact gained significant momentum during the 1980s. Practical, theoretical, and scientific reasons help to explain the growing influence of integration in psychotherapy. The field of psychotherapy is characterized by many challenges which integration may change into meaningful opportunities. Nonetheless, many obstacles remain when seeking to advance integration. To appreciate the strength of integration in psychotherapy we describe an integrative, comprehensive approach to service delivery, research, and training. We then discuss the role of integration in the future of psychotherapy. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Pitner, Ronald O; Astor, Ron Avi; Benbenishty, Rami; Haj-Yahia, Muhammad M; Zeira, Anat
2011-05-01
In this study, we examined what contextual factors influence adolescents' judgments and reasoning about spousal retribution. Adolescents were drawn from Central and Northern Israel and consisted of 2,324 Arab and Jewish students (Grades 7-11). The study was set up in a 2 (Arab/Jewish respondent) × 2 (spousal retribution scenarios) factorial design. Our findings suggest that societal and cultural norms may be more powerful contextual variables than group stereotypes in influencing Arab and Jewish adolescents' evaluations of spousal retribution. Theoretical and practical implications are discussed.
Revisiting Organisational Learning in Integrated Care.
Nuño-Solinís, Roberto
2017-08-11
Progress in health care integration is largely linked to changes in processes and ways of doing. These changes have knowledge management and learning implications. For this reason, the use of the concept of organisational learning is explored in the field of integrated care. There are very limited contributions that have connected the fields of organisational learning and care integration in a systematic way, both at the theoretical and empirical level. For this reason, hybridization of both perspectives still provides opportunities for understanding care integration initiatives from a research perspective as well as potential applications in health care management and planning.
Revisiting Organisational Learning in Integrated Care
2017-01-01
Progress in health care integration is largely linked to changes in processes and ways of doing. These changes have knowledge management and learning implications. For this reason, the use of the concept of organisational learning is explored in the field of integrated care. There are very limited contributions that have connected the fields of organisational learning and care integration in a systematic way, both at the theoretical and empirical level. For this reason, hybridization of both perspectives still provides opportunities for understanding care integration initiatives from a research perspective as well as potential applications in health care management and planning. PMID:28970762
NASA Technical Reports Server (NTRS)
Goshchitskii, B. N.; Davydov, S. A.; Karkin, A. E.; Mirmelstein, A. V.; Sadovskii, M. V.
1990-01-01
Theoretical interpretation of recent experiments on radiationally disordered high-temperature superconductors is presented, based on the concept of a mutual interplay between Anderson localization and superconductivity. A microscopic derivation of the Ginzburg-Landau coefficients for a quasi-two-dimensional system in the vicinity of the localization transition is given in the framework of the self-consistent theory of localization. The 'minimal metallic conductivity' for the quasi-two-dimensional case is enhanced due to the small overlap of electronic states on nearest-neighbor conducting planes. This leads to a stronger influence of localization effects than in ordinary (three-dimensional) superconductors. From this point of view, even the initial samples of high-temperature superconductors are already very close to the Anderson transition. Anomalies of H(c2) are also analyzed, explaining the upward curvature of H(c2)(T) and the apparent independence of dH(c2)/dT at T = T(sub c) from the degree of disorder as due to localization effects. The possible reasons for the fast T(sub c) degradation, due to enhanced Coulomb effects caused by the disorder-induced decrease of the localization length, are discussed. The appearance and growth of localized magnetic moments is also discussed. The disorder dependence of the localization length calculated from the experimental data on conductivity correlates reasonably with the theoretical criterion for suppression of superconductivity in a system with localized electronic states.
Nuclear half-lives for alpha-radioactivity of elements with 100 <= Z <= 130
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chowdhury, P. Roy; Samanta, C.; Physics Department, Gottwald Science Center, University of Richmond, Richmond, VA 23173
2008-11-15
Theoretical estimates for the half-lives of about 1700 isotopes of heavy elements with 100 <= Z <= 130 are tabulated using theoretical Q-values. The quantum mechanical tunneling probabilities are calculated within a WKB framework using microscopic nuclear potentials. The microscopic nucleus-nucleus potentials are obtained by folding the densities of interacting nuclei with a density-dependent M3Y effective nucleon-nucleon interaction. The alpha-decay half-lives calculated in this formalism using the experimental Q-values were found to be in good agreement over a wide range of experimental data spanning about 20 orders of magnitude. The theoretical Q-values used for the present calculations are extracted from three different mass estimates, viz. Myers-Swiatecki, Muntian-Hofmann-Patyk-Sobiczewski, and Koura-Tachibana-Uno-Yamada.
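A drastically simplified cousin of this calculation, the textbook one-dimensional WKB (Gamow) estimate through a bare Coulomb barrier, is sketched below for 238U (Q = 4.27 MeV, measured half-life ~4.5e9 yr). The radius formula and assault-frequency prescription are standard textbook choices, not the paper's microscopic folded potentials.

```python
import numpy as np

# Textbook WKB (Gamow) estimate of an alpha-decay half-life through a bare
# Coulomb barrier; typically lands within an order of magnitude or so.
hbar_c = 197.327     # MeV fm
e2 = 1.43996         # e^2/(4*pi*eps0), MeV fm
Z_d, A_d = 90, 234   # daughter nucleus of 238U
Q = 4.27             # decay energy, MeV

m_alpha = 3727.38                     # alpha rest energy, MeV
m_d = A_d * 931.494                   # daughter rest energy, MeV (rough)
mu = m_alpha * m_d / (m_alpha + m_d)  # reduced mass, MeV

R = 1.2 * (4 ** (1 / 3) + A_d ** (1 / 3))  # contact radius, fm (assumption)
b = 2 * Z_d * e2 / Q                       # outer turning point, fm

# Gamow factor G = (2/hbar) * integral_R^b sqrt(2*mu*(V(r) - Q)) dr
r = np.linspace(R, b, 20001)
V = 2 * Z_d * e2 / r
f = np.sqrt(2 * mu * np.clip(V - Q, 0.0, None))
G = 2.0 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)) / hbar_c

v = np.sqrt(2 * Q / mu) * 2.998e23   # alpha speed inside the well, fm/s
assault_rate = v / (2 * R)           # assault frequency, 1/s
T_half = np.log(2) / (assault_rate * np.exp(-G))
print(f"G = {G:.1f}, T1/2 ~ {T_half / 3.156e7:.1e} yr")
```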
A Framework for Dynamic Constraint Reasoning Using Procedural Constraints
NASA Technical Reports Server (NTRS)
Jonsson, Ari K.; Frank, Jeremy D.
1999-01-01
Many complex real-world decision and control problems contain an underlying constraint reasoning problem. This is particularly evident in a recently developed approach to planning, where almost all planning decisions are represented by constrained variables. This translates a significant part of the planning problem into a constraint network whose consistency determines the validity of the plan candidate. Since higher-level choices about control actions can add or remove variables and constraints, the underlying constraint network is invariably highly dynamic. Arbitrary domain-dependent constraints may be added to the constraint network and the constraint reasoning mechanism must be able to handle such constraints effectively. Additionally, real problems often require handling constraints over continuous variables. These requirements present a number of significant challenges for a constraint reasoning mechanism. In this paper, we introduce a general framework for handling dynamic constraint networks with real-valued variables, by using procedures to represent and effectively reason about general constraints. The framework is based on a sound theoretical foundation, and can be proven to be sound and complete under well-defined conditions. Furthermore, the framework provides hybrid reasoning capabilities, as alternative solution methods like mathematical programming can be incorporated into the framework, in the form of procedures.
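A toy version of the idea, constraints as procedures that narrow real-valued interval domains, run to a fixpoint, is sketched below. It illustrates only the flavor of procedural constraint propagation, not the planner framework described in the paper; the variable names are hypothetical.

```python
def intersect(dom, lo, hi):
    """Intersect an interval domain with [lo, hi]."""
    return (max(dom[0], lo), min(dom[1], hi))

def sum_constraint(doms, x, y, z):
    """Procedural constraint x + y = z: narrow each interval from the others."""
    (xl, xu), (yl, yu), (zl, zu) = doms[x], doms[y], doms[z]
    new = {
        z: intersect(doms[z], xl + yl, xu + yu),
        x: intersect(doms[x], zl - yu, zu - yl),
        y: intersect(doms[y], zl - xu, zu - xl),
    }
    changed = any(new[v] != doms[v] for v in new)
    doms.update(new)
    return changed

# Hypothetical temporal variables for a plan step (names are illustrative)
doms = {"start": (0.0, 100.0), "duration": (5.0, 10.0), "end": (0.0, 20.0)}
procedures = [lambda d: sum_constraint(d, "start", "duration", "end")]

# Run procedures to a fixpoint; an empty interval would signal inconsistency
while any(proc(doms) for proc in procedures):
    pass
print(doms)   # start narrows to (0, 15), end to (5, 20)
```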
Strong Stackelberg reasoning in symmetric games: An experimental replication and extension
Colman, Andrew M.; Lawrence, Catherine L.
2014-01-01
In common interest games in which players are motivated to coordinate their strategies to achieve a jointly optimal outcome, orthodox game theory provides no general reason or justification for choosing the required strategies. In the simplest cases, where the optimal strategies are intuitively obvious, human decision makers generally coordinate without difficulty, but how they achieve this is poorly understood. Most theories seeking to explain strategic coordination have limited applicability, or require changes to the game specification, or introduce implausible assumptions or radical departures from fundamental game-theoretic assumptions. The theory of strong Stackelberg reasoning, according to which players choose strategies that would maximize their own payoffs if their co-players could invariably anticipate any strategy and respond with a best reply to it, avoids these problems and explains strategic coordination in all dyadic common interest games. Previous experimental evidence has provided evidence for strong Stackelberg reasoning in asymmetric games. Here we report evidence from two experiments consistent with players being influenced by strong Stackelberg reasoning in a wide variety of symmetric 3 × 3 games but tending to revert to other choice criteria when strong Stackelberg reasoning generates small payoffs. PMID:24688846
Safety model assessment and two-lane urban crash model
DOT National Transportation Integrated Search
2008-10-01
There are many reasons to be concerned with estimating the frequency and social costs of highway accidents, but most reasons are motivated by a desire to minimize these costs to the extent feasible. Competition for scarce resources is a practical nec...
Constructionism and the space of reasons
NASA Astrophysics Data System (ADS)
Mackrell, Kate; Pratt, Dave
2017-12-01
Constructionism, best known as the framework for action underpinning Seymour Papert's work with Logo, has stressed the importance of engaging students in creating their own products. Noss and Hoyles have argued that such activity enables students to participate increasingly in a web of connections to further their activity. Ainley and Pratt have elaborated that learning is best facilitated when the student is engaged in a purposeful activity that leads to appreciation of the power of mathematical ideas. Constructionism gives prominence to how the learner's logical reasoning and emotion-driven reasons for engagement are inseparable. We argue that the dependence of constructionism upon the orienting framework of constructivism fails to provide sufficient theoretical underpinning for these ideas. We therefore propose an alternative orienting framework, in which learning takes place through initiation into the space of reasons, such that a person's thoughts, actions and feelings are increasingly open to critique and justification. We argue that knowing as responsiveness to reasons encompasses not only the powerful ideas of mathematics and disciplinary knowledge of modes of enquiry but also the extralogical, such as in feelings of the aesthetic, control, excitement, elegance and efficiency. We discuss the implication that mathematics educators deeply consider the learner's reasons for purposeful activity and design settings in which these reasons can be made public and open to critique.
A review of research on formal reasoning and science teaching
NASA Astrophysics Data System (ADS)
Lawson, Anton E.
A central purpose of education is to improve students' reasoning abilities. The present review examines research in developmental psychology and science education that has attempted to assess the validity of Piaget's theory of formal thought and its relation to educational practice. Should a central objective of schools be to help students become formal thinkers? To answer this question research has focused on the following subordinate questions: (1) What role does biological maturation play in the development of formal reasoning? (2) Are Piaget's formal tasks reliable and valid? (3) Does formal reasoning constitute a unified and general mode of intellectual functioning? (4) How does the presence or absence of formal reasoning affect school achievement? (5) Can formal reasoning be taught? (6) What is the structural or functional nature of advanced reasoning? The general conclusion drawn is that although Piaget's work and that which has sprung from it leaves a number of unresolved theoretical and methodological problems, it provides an important background from which to make substantial progress toward a most significant educational objective. "All our dignity lies in thought. By thought we must elevate ourselves, not by space and time which we can not fill. Let us endeavor then to think well; therein lies the principle of morality." (Blaise Pascal, 1623-1662)
Emotional see-saw affects rationality of decision-making: Evidence for metacognitive impairments.
Folwarczny, Michał; Kaczmarek, Magdalena C; Doliński, Dariusz; Szczepanowski, Remigiusz
2018-05-01
This research investigated the cognitive mechanisms that underlie impairments in human reasoning triggered by the emotional see-saw technique. It has previously been stated that such manipulation is effective as it presumably induces a mindless state and cognitive deficits in compliant individuals. Based on the dual-system architecture of reasoning (system 2) and affective decision-making (system 1), we challenged the previous theoretical account by indicating that the main source of compliance is impairment of the meta-reasoning system when rapid affective changes occur. To examine this hypothesis, we manipulated affective feelings (system 1 processing) by violating participants' expectations regarding reward and performance in a go/no-go task in which individuals were to inhibit their responses to earn money. Aside from the go/no-go performance, we measured rationality (meta-reasoning system 2) in decision-making by asking participants to comply with a nonsensical request. We found that participants who were exposed to meta-reasoning impairments due to the emotional see-saw phenomenon exhibited mindless behavior. Copyright © 2018 Elsevier B.V. All rights reserved.
Cohen, Emma; Burdett, Emily; Knight, Nicola; Barrett, Justin
2011-01-01
We report the results of a cross-cultural investigation of person-body reasoning in the United Kingdom and northern Brazilian Amazon (Marajó Island). The study provides evidence that directly bears upon divergent theoretical claims in cognitive psychology and anthropology, respectively, on the cognitive origins and cross-cultural incidence of mind-body dualism. In a novel reasoning task, we found that participants across the two sample populations parsed a wide range of capacities similarly in terms of the capacities' perceived anchoring to bodily function. Patterns of reasoning concerning the respective roles of physical and biological properties in sustaining various capacities did vary between sample populations, however. Further, the data challenge prior ad-hoc categorizations in the empirical literature on the developmental origins of and cognitive constraints on psycho-physical reasoning (e.g., in afterlife concepts). We suggest cross-culturally validated categories of "Body Dependent" and "Body Independent" items for future developmental and cross-cultural research in this emerging area. Copyright © 2011 Cognitive Science Society, Inc.
Model-Based Reasoning in Humans Becomes Automatic with Training.
Economides, Marcos; Kurth-Nelson, Zeb; Lübbert, Annika; Guitart-Masip, Marc; Dolan, Raymond J
2015-09-01
Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load, a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varandas, A. J. C., E-mail: varandas@uc.pt; Departamento de Física, Universidade Federal do Espírito Santo, 29075-910 Vitória; Pansini, F. N. N.
2014-12-14
A method previously suggested to calculate the correlation energy at the complete one-electron basis set limit by reassignment of the basis hierarchical numbers and use of the unified singlet- and triplet-pair extrapolation scheme is applied to a test set of 106 systems, some with up to 48 electrons. The approach is utilized to obtain extrapolated correlation energies from raw values calculated with second-order Møller-Plesset perturbation theory and the coupled-cluster singles and doubles excitations method, some of the latter also with the perturbative triples corrections. The calculated correlation energies have also been used to predict atomization energies within an additive scheme. Good agreement is obtained with the best available estimates even when the (d, t) pair of hierarchical numbers is utilized to perform the extrapolations. This conceivably justifies that there is no strong reason to exclude double-zeta energies in extrapolations, especially if the basis is calibrated to comply with the theoretical model.
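For readers unfamiliar with basis-set extrapolation, the sketch below shows the generic two-point X⁻³ extrapolation of the correlation energy that schemes of this kind refine; the paper's unified singlet- and triplet-pair method additionally reassigns the hierarchical numbers, which this toy version does not attempt. The energies are invented placeholders.

```python
# Generic two-point extrapolation of the correlation energy to the
# complete-basis-set (CBS) limit, assuming E(X) = E_inf + A / X**3.

def cbs_two_point(e_x, x, e_y, y):
    """Extrapolate correlation energies e_x, e_y at cardinal numbers x < y."""
    return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)

# e.g. made-up MP2 correlation energies (hartree) for the (d, t) = (2, 3) pair
print(cbs_two_point(-0.2202, 2, -0.2618, 3))   # ~ -0.279 hartree
```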
Stability of Local Quantum Dissipative Systems
NASA Astrophysics Data System (ADS)
Cubitt, Toby S.; Lucia, Angelo; Michalakis, Spyridon; Perez-Garcia, David
2015-08-01
Open quantum systems weakly coupled to the environment are modeled by completely positive, trace preserving semigroups of linear maps. The generators of such evolutions are called Lindbladians. In the setting of quantum many-body systems on a lattice it is natural to consider Lindbladians that decompose into a sum of local interactions with decreasing strength with respect to the size of their support. For both practical and theoretical reasons, it is crucial to estimate the impact that perturbations in the generating Lindbladian, arising as noise or errors, can have on the evolution. These local perturbations are potentially unbounded, but constrained to respect the underlying lattice structure. We show that even for polynomially decaying errors in the Lindbladian, local observables and correlation functions are stable if the unperturbed Lindbladian has a unique fixed point and a mixing time that scales logarithmically with the system size. The proof relies on Lieb-Robinson bounds, which describe a finite group velocity for propagation of information in local systems. As a main example, we prove that classical Glauber dynamics is stable under local perturbations, including perturbations in the transition rates, which may not preserve detailed balance.
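As a toy instance of the objects discussed here, the following sketch evolves a single qubit under a Lindbladian with one amplitude-damping jump operator using crude Euler steps; the Hamiltonian, damping rate, and step size are arbitrary illustrative choices unrelated to the paper's many-body lattice setting.

```python
import numpy as np

# Minimal Lindblad evolution for one qubit with amplitude damping:
# d(rho)/dt = -i[H, rho] + L rho L^+ - {L^+ L, rho}/2.
# A toy instance of the completely positive, trace-preserving semigroups
# the paper studies; all parameters are arbitrary.

H = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)           # sigma_z / 2
L = np.sqrt(0.1) * np.array([[0, 1], [0, 0]], dtype=complex)   # decay operator

def lindblad_rhs(rho):
    comm = -1j * (H @ rho - rho @ H)
    Ld = L.conj().T
    diss = L @ rho @ Ld - 0.5 * (Ld @ L @ rho + rho @ Ld @ L)
    return comm + diss

rho = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)  # start in excited state
dt = 0.01
for _ in range(1000):                # crude Euler integration to t = 10
    rho += dt * lindblad_rhs(rho)
print(np.real(np.diag(rho)))         # populations relax toward the ground state
```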
NASA Astrophysics Data System (ADS)
Jiang, Tao; Wang, Xiaolong; Zhang, Li; Zhou, Jun; Zhao, Ziqi
2016-08-01
This study reported the improved Raman enhancement of silver nanoparticles (Ag NPs) decorated on mesoporous silica microspheres (MSiO2@Ag) relative to Ag NPs on solid silica microspheres (SSiO2@Ag). The two hybrid structures were prepared by a facile single-step hydrothermal reaction with polyvinylpyrrolidone (PVP) serving as both reductant and stabilizer. The as-synthesized MSiO2@Ag microspheres show significantly stronger surface-enhanced Raman scattering (SERS) activity for 4-mercaptobenzoic acid (4MBA) than SSiO2@Ag microspheres, with enhancement factors of 9.20 × 10⁶ and 4.39 × 10⁶, respectively. The higher SERS activity is attributed to additional Raman probe molecules residing in the mesoporous channels, where an enhanced electromagnetic field exists; this field was confirmed by theoretical calculation. The MSiO2@Ag microspheres were finally demonstrated for the SERS detection of a typical chemical toxin, methyl parathion, with a detection limit as low as 1 × 10⁻³ ppm, showing promising potential for biosensor applications.
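The enhancement factors quoted above come from the standard SERS formula EF = (I_SERS/N_SERS)/(I_ref/N_ref); the snippet below just shows that arithmetic, with invented intensities and molecule counts chosen to reproduce the reported 9.20 × 10⁶.

```python
# Standard SERS enhancement factor: the Raman signal per molecule on the
# substrate divided by the signal per molecule in the reference. The
# intensities and molecule counts below are placeholders, not measured data.

def enhancement_factor(i_sers, n_sers, i_ref, n_ref):
    return (i_sers / n_sers) / (i_ref / n_ref)

print(f"EF = {enhancement_factor(4.6e4, 1.0e6, 1.0e3, 2.0e11):.2e}")  # 9.20e+06
```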
Study on the Reduced Traffic Congestion Method Based on Dynamic Guidance Information
NASA Astrophysics Data System (ADS)
Li, Shu-Bin; Wang, Guang-Min; Wang, Tao; Ren, Hua-Ling; Zhang, Lin
2018-05-01
This paper studies how to generate reasonable guidance information for travelers' decisions in a real network. The problem is complex because travelers' decisions are constrained by diverse human behaviors. Network conditions can be predicted using advanced dynamic OD (Origin-Destination) estimation techniques. Based on an improved mesoscopic traffic model, predictive dynamic traffic guidance information can be obtained accurately. A consistency algorithm is designed to investigate travelers' decisions by simulating their dynamic response to guidance information. The simulation results show that the proposed method can provide the best guidance information. Further, a case study is conducted to verify the theoretical results and to draw managerial insights into the potential of dynamic guidance strategies for improving traffic performance. Supported by National Natural Science Foundation of China under Grant Nos. 71471104, 71771019, 71571109, and 71471167; The University Science and Technology Program Funding Projects of Shandong Province under Grant No. J17KA211; The Project of Public Security Department of Shandong Province under Grant No. GATHT2015-236; The Major Social and Livelihood Special Project of Jinan under Grant No. 20150905
On the origin independence of the Verdet tensor
NASA Astrophysics Data System (ADS)
Caputo, M. C.; Coriani, S.; Pelloni, S.; Lazzeretti, P.
2013-07-01
The condition for invariance under a translation of the coordinate system of the Verdet tensor and the Verdet constant, calculated via quantum chemical methods using gaugeless basis sets, is expressed by a vanishing sum rule involving a third-rank polar tensor. The sum rule is, in principle, satisfied only in the ideal case of optimal variational electronic wavefunctions. In general, it is not fulfilled in non-variational calculations and variational calculations allowing for the algebraic approximation, but it can be satisfied for reasons of molecular symmetry. Group-theoretical procedures have been used to determine (i) the total number of non-vanishing components and (ii) the unique components of both the polar tensor appearing in the sum rule and the axial Verdet tensor, for a series of symmetry groups. Test calculations at the random-phase approximation level of accuracy for water, hydrogen peroxide and ammonia molecules, using basis sets of increasing quality, show a smooth convergence to zero of the sum rule. Verdet tensor components calculated for the same molecules converge to limit values, estimated via large basis sets of gaugeless Gaussian functions and London orbitals.
Fluctuating bottleneck model studies on kinetics of DNA escape from α-hemolysin nanopores
NASA Astrophysics Data System (ADS)
Bian, Yukun; Wang, Zilin; Chen, Anpu; Zhao, Nanrong
2015-11-01
We have proposed a fluctuating bottleneck (FB) model to investigate the non-exponential kinetics of DNA escape from nanometer-scale pores. The basic idea is that the escape rate is proportional to the fluctuating cross-sectional area of the DNA escape channel, the radius r of which undergoes subdiffusive dynamics subject to fractional Gaussian noise with a power-law memory kernel. The FB model allows us to obtain an analytical result for the averaged survival probability as a function of time, which can be directly compared to experimental results. In particular, we have applied our theory to the escape kinetics of DNA through α-hemolysin nanopores. We find that our theoretical framework reproduces the experimental results very well over the whole time range, with reasonable estimates for the intrinsic parameters of the kinetic processes. We believe that the FB model captures the key features of the long-time kinetics of DNA escape through a nanopore and provides a sound starting point for studying much wider problems involving anomalous dynamics in confined fluctuating channels.
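A minimal Monte Carlo sketch of the fluctuating-bottleneck idea follows: the survival probability is the average of exp(-∫k(r(s))ds) with escape rate k ∝ r². For brevity the radius here follows memoryless Ornstein-Uhlenbeck dynamics rather than the paper's fractional Gaussian noise with power-law memory, which is what actually produces the non-exponential kinetics; all parameters are arbitrary.

```python
import numpy as np

# Monte Carlo estimate of the averaged survival probability
# S(t) = < exp(-integral_0^t k(r(s)) ds) > with k proportional to r**2.
# The radius is driven by ordinary white noise (Ornstein-Uhlenbeck) here,
# a deliberate simplification of the paper's fractional Gaussian noise.

rng = np.random.default_rng(0)
n_traj, n_steps, dt = 2000, 1000, 0.01
gamma, sigma, kappa = 1.0, 0.5, 1.0          # relaxation, noise, rate constant

r = np.ones(n_traj)                          # initial bottleneck radius
log_surv = np.zeros(n_traj)                  # accumulated -integral of k dt
S = np.empty(n_steps)
for i in range(n_steps):
    log_surv -= kappa * r**2 * dt            # escape rate proportional to area
    r += -gamma * r * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_traj)
    S[i] = np.exp(log_surv).mean()           # trajectory-averaged survival

print(S[::200])   # decay is non-exponential once rate fluctuations matter
```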
Bryantsev, Vyacheslav S.; Hay, Benjamin P.
2015-03-20
Selective extraction of minor actinides from lanthanides is a critical step in the reduction of radiotoxicity of spent nuclear fuels. However, the design of suitable ligands for separating chemically similar 4f- and 5f-block trivalent metal ions poses a significant challenge. Furthermore, first-principles calculations should play an important role in the design of new separation agents, but their ability to predict metal ion selectivity has not been systematically evaluated. We examine the ability of several density functional theory methods to predict selectivity of Am(III) and Eu(III) with oxygen, mixed oxygen–nitrogen, and sulfur donor ligands. The results establish a computational method capable of predicting the correct order of selectivities obtained from liquid–liquid extraction and aqueous phase complexation studies. To allow reasonably accurate predictions, it was critical to employ sufficiently flexible basis sets and provide proper account of solvation effects. The approach is utilized to estimate the selectivity of novel amide-functionalized diazine and 1,2,3-triazole ligands.
Search for very high energy γ radiation from the radio bright region DR4 of the SNR G78.2+2.1.
NASA Astrophysics Data System (ADS)
Prosch, C.; Feigl, E.; Plaga, R.; Arqueros, F.; Cortina, J.; Fernandez, J.; Fernandez, P.; Fonseca, V.; Funk, B.; Gonzalez, J. C.; Haustein, V.; Heinzelmann, G.; Karle, A.; Krawczynski, H.; Krennrich, F.; Kuehn, M.; Lindner, A.; Lorenz, E.; Magnussen, N.; Martinez, S.; Matheis, V.; Merck, M.; Meyer, H.; Mirzoyan, R.; Moeller, H.; Moralejo, A.; Mueller, N.; Padilla, L.; Prahl, J.; Rhode, W.; Samorski, M.; Sanchez, J. A.; Sander, H.; Schmele, D.; Stamm, W.; Wahl, H.; Westerhoff, S.; Wiebel-Sooth, B.; Willmer, M.
1996-10-01
Data from the HEGRA air shower array are used to set an upper limit on the emission of γ-radiation above 25 (18) TeV from the direction of the radio bright region DR4 within the SNR G78.2+2.1 of 2.5 (7.1) × 10⁻¹³ cm⁻² s⁻¹. The shock front of SNR G78.2+2.1 probably recently overtook the molecular cloud Cong 8, which then acts as a target for the cosmic rays produced within the SNR, thus leading to the expectation of enhanced γ-radiation. Using a model of Drury, Aharonian and Voelk, which assumes that SNRs are the sources of galactic cosmic rays via first order Fermi acceleration, we calculated a theoretical prediction for the γ-ray flux from the DR4 region and compared it with our experimental flux limit. Our 'best estimate' value for the predicted flux lies a factor of about 18 above the upper limit for γ-ray energies above 25 TeV. Possible reasons for this discrepancy are discussed.
Eddy Current Influences on the Dynamic Behaviour of Magnetic Suspension Systems
NASA Technical Reports Server (NTRS)
Britcher, Colin P.; Bloodgood, Dale V.
1998-01-01
This report will summarize some results from a multi-year research effort at NASA Langley Research Center aimed at the development of an improved capability for practical modelling of eddy current effects in magnetic suspension systems. Particular attention is paid to large-gap systems, although generic results applicable to both large-gap and small-gap systems are presented. It is shown that eddy currents can significantly affect the dynamic behavior of magnetic suspension systems, but that these effects can be amenable to modelling and measurement. Theoretical frameworks are presented, together with comparisons of computed and experimental data particularly related to the Large Angle Magnetic Suspension Test Fixture at NASA Langley Research Center, and the Annular Suspension and Pointing System at Old Dominion University. In both cases, practical computations are capable of providing reasonable estimates of important performance-related parameters. The most difficult case is seen to be that of eddy currents in highly permeable material, due to the low skin depths. Problems associated with specification of material properties and areas for future research are discussed.
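The remark about highly permeable material follows from the skin-depth formula δ = √(2/(ωμσ)): raising the permeability μ shrinks δ, concentrating eddy currents in a thin surface layer that is hard to resolve. A quick check with rough textbook material constants (not values from the report):

```python
import numpy as np

# Skin depth delta = sqrt(2 / (omega * mu * sigma)); material figures below
# are rough textbook values chosen only to illustrate the trend.

def skin_depth(freq_hz, mu_r, sigma):        # sigma in S/m
    omega = 2 * np.pi * freq_hz
    mu = mu_r * 4e-7 * np.pi                 # absolute permeability, H/m
    return np.sqrt(2.0 / (omega * mu * sigma))

# copper (mu_r ~ 1) versus a soft iron (mu_r ~ 4000) at 100 Hz
print(f"Cu: {skin_depth(100, 1, 5.8e7) * 1e3:.2f} mm")     # ~6.6 mm
print(f"Fe: {skin_depth(100, 4000, 1.0e7) * 1e3:.3f} mm")  # ~0.25 mm
```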
Marson, D C; Cody, H A; Ingram, K K; Harrell, L E
1995-10-01
To identify neuropsychologic predictors of competency performance and status in Alzheimer's disease (AD) using a specific legal standard (LS). This study is a follow-up to the competency assessment research reported in this issue of the archives. Univariate and multivariate analyses of independent neuropsychologic test measures with a dependent measure of competency to consent to treatment. University medical center. Fifteen normal older control subjects and 29 patients with probable AD. Subjects were administered a battery of neuropsychologic measures theoretically linked to competency function, as well as two clinical vignettes testing their capacity to consent to medical treatment under five different LSs. The present study focused on one specific LS: the capacity to provide "rational reasons" for a treatment choice (LS4). Neuropsychologic test scores were correlated with scores on LS4 for the normal control group and the AD group. The resulting univariate predictors were then analyzed using stepwise regression and discriminant function to identify the key multivariate predictors of competency performance and status under LS4. Measures of word fluency predicted the LS4 scores of controls (R2 = .33) and the AD group (R2 = .36). A word fluency measure also emerged as the best single predictor of competency status for the full subject sample (n = 44), correctly classifying 82% of cases. Dementia severity (Mini-Mental State Examination score) did not emerge as a multivariate predictor of competency performance or status. Interestingly, measures of verbal reasoning and memory were not strongly associated with LS4. Word fluency measures predicted the normative performance and intact competency status of older control subjects and the declining performance and compromised competency status of patients with AD on a "rational reasons" standard of competency to consent to treatment. Cognitive capacities related to frontal lobe function appear to underlie the capacity to formulate rational reasons for a treatment choice. Neuropsychologic studies of competency function have important theoretical and clinical value.
Koivisto, Jaana-Maija; Multisilta, Jari; Niemi, Hannele; Katajisto, Jouko; Eriksson, Elina
2016-10-01
Clinical reasoning is viewed as a problem-solving activity; in games, players solve problems. To provide excellent patient care, nursing students must gain competence in clinical reasoning. Utilising gaming elements and virtual simulations may enhance learning of clinical reasoning. To investigate nursing students' experiences of learning clinical reasoning process by playing a 3D simulation game. Cross-sectional descriptive study. Thirteen gaming sessions at two universities of applied sciences in Finland. The prototype of the simulation game used in this study was single-player in format. The game mechanics were built around the clinical reasoning process. Nursing students from the surgical nursing course of autumn 2014 (N=166). Data were collected by means of an online questionnaire. In terms of the clinical reasoning process, students learned how to take action and collect information but were less successful in learning to establish goals for patient care or to evaluate the effectiveness of interventions. Learning of the different phases of clinical reasoning process was strongly positively correlated. The students described that they learned mainly to apply theoretical knowledge while playing. The results show that those who played digital games daily or occasionally felt that they learned clinical reasoning by playing the game more than those who did not play at all. Nursing students' experiences of learning the clinical reasoning process by playing a 3D simulation game showed that such games can be used successfully for learning. To ensure that students follow a systematic approach, the game mechanics need to be built around the clinical reasoning process. Copyright © 2016 Elsevier Ltd. All rights reserved.
Denison, Stephanie; Trikutam, Pallavi; Xu, Fei
2014-08-01
A rich tradition in developmental psychology explores physical reasoning in infancy. However, no research to date has investigated whether infants can reason about physical objects that behave probabilistically, rather than deterministically. Physical events are often quite variable, in that similar-looking objects can be placed in similar contexts with different outcomes. Can infants rapidly acquire probabilistic physical knowledge, such as some leaves fall and some glasses break by simply observing the statistical regularity with which objects behave and apply that knowledge in subsequent reasoning? We taught 11-month-old infants physical constraints on objects and asked them to reason about the probability of different outcomes when objects were drawn from a large distribution. Infants could have reasoned either by using the perceptual similarity between the samples and larger distributions or by applying physical rules to adjust base rates and estimate the probabilities. Infants learned the physical constraints quickly and used them to estimate probabilities, rather than relying on similarity, a version of the representativeness heuristic. These results indicate that infants can rapidly and flexibly acquire physical knowledge about objects following very brief exposure and apply it in subsequent reasoning. PsycINFO Database Record (c) 2014 APA, all rights reserved.
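The base-rate-adjustment strategy the infants appear to use amounts to conditioning the draw probabilities on a learned physical rule; the toy calculation below shows the arithmetic. The colors, counts, and constraint are invented, not the actual stimuli.

```python
# Toy base-rate adjustment under a physical constraint: base rates favor
# green, but a learned rule says green objects are fixed in place and
# cannot be drawn. Conditioning on the rule overrides the base rate.

counts = {"green": 70, "red": 30}           # base rates favor green
mobile = {"green": False, "red": True}      # physical rule: green are fixed

eligible = {c: n for c, n in counts.items() if mobile[c]}
total = sum(eligible.values())
probs = {c: n / total for c, n in eligible.items()}
print(probs)   # {'red': 1.0}: the constraint, not similarity, sets the estimate
```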
Theoretical nuclear database for high-energy, heavy-ion (HZE) transport
NASA Technical Reports Server (NTRS)
Townsend, L. W.; Cucinotta, F. A.; Wilson, J. W.
1995-01-01
Theoretical methods for estimating high-energy, heavy-ion (HZE) particle absorption and fragmentation cross-sections are described and compared with available experimental data. Differences between theory and experiment range from several percent for absorption cross-sections up to about 25%-50% for fragmentation cross-sections.
NASA Technical Reports Server (NTRS)
Wood, R. M.; Miller, D. S.; Brentner, K. S.
1983-01-01
A theoretical and experimental investigation has been conducted to evaluate the fundamental supersonic aerodynamic characteristics of a generic twin-body model at a Mach number of 2.70. Results show that existing aerodynamic prediction methods are adequate for making preliminary aerodynamic estimates.
Case-based medical informatics
Pantazi, Stefan V; Arocha, José F; Moehr, Jochen R
2004-01-01
Background The "applied" nature distinguishes applied sciences from theoretical sciences. To emphasize this distinction, we begin with a general, meta-level overview of the scientific endeavor. We introduce the knowledge spectrum and four interconnected modalities of knowledge. In addition to the traditional differentiation between implicit and explicit knowledge we outline the concepts of general and individual knowledge. We connect general knowledge with the "frame problem," a fundamental issue of artificial intelligence, and individual knowledge with another important paradigm of artificial intelligence, case-based reasoning, a method of individual knowledge processing that aims at solving new problems based on the solutions to similar past problems. We outline the fundamental differences between Medical Informatics and theoretical sciences and propose that Medical Informatics research should advance individual knowledge processing (case-based reasoning) and that natural language processing research is an important step towards this goal that may have ethical implications for patient-centered health medicine. Discussion We focus on fundamental aspects of decision-making, which connect human expertise with individual knowledge processing. We continue with a knowledge spectrum perspective on biomedical knowledge and conclude that case-based reasoning is the paradigm that can advance towards personalized healthcare and that can enable the education of patients and providers. We center the discussion on formal methods of knowledge representation around the frame problem. We propose a context-dependent view on the notion of "meaning" and advocate the need for case-based reasoning research and natural language processing. In the context of memory based knowledge processing, pattern recognition, comparison and analogy-making, we conclude that while humans seem to naturally support the case-based reasoning paradigm (memory of past experiences of problem-solving and powerful case matching mechanisms), technical solutions are challenging. Finally, we discuss the major challenges for a technical solution: case record comprehensiveness, organization of information on similarity principles, development of pattern recognition and solving ethical issues. Summary Medical Informatics is an applied science that should be committed to advancing patient-centered medicine through individual knowledge processing. Case-based reasoning is the technical solution that enables a continuous individual knowledge processing and could be applied providing that challenges and ethical issues arising are addressed appropriately. PMID:15533257
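The core retrieve-and-reuse step of case-based reasoning described above can be stated in a few lines; the sketch below is a deliberately naive nearest-case lookup with an invented case base and similarity metric, nothing like a real clinical record system.

```python
# Minimal case-based reasoning (CBR) retrieval: solve a new problem by
# reusing the solution of the most similar stored case. Features here are
# fabricated numeric triples (e.g. temperature, heart rate, flag).

case_base = [
    {"features": (38.5, 120, 1), "solution": "treatment A"},
    {"features": (36.8, 72, 0), "solution": "no treatment"},
    {"features": (39.2, 110, 1), "solution": "treatment B"},
]

def similarity(a, b):
    """Negative Euclidean distance: larger means more similar."""
    return -sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def retrieve(query):
    return max(case_base, key=lambda c: similarity(c["features"], query))

print(retrieve((39.0, 112, 1))["solution"])   # reuses nearest case: treatment B
```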
Convex Banding of the Covariance Matrix
Bien, Jacob; Bunea, Florentina; Xiao, Luo
2016-01-01
We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings. PMID:28042189
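To illustrate the tapering view of banded covariance estimation mentioned above, the sketch below multiplies a sample covariance elementwise by a fixed linear Toeplitz taper; the paper's convex banding estimator instead chooses the taper adaptively by solving a convex program, so this is only the classical non-adaptive baseline.

```python
import numpy as np

# Classical banding/tapering of a sample covariance: multiply it elementwise
# by a Toeplitz weight matrix that is 1 near the diagonal and 0 far away.
# The bandwidth is fixed here; the convex banding estimator adapts it.

def tapered_covariance(X, bandwidth):
    """X: (n, p) data with ordered variables; returns the tapered covariance."""
    S = np.cov(X, rowvar=False)                          # sample covariance
    p = S.shape[0]
    lag = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    weights = np.clip(1.0 - lag / bandwidth, 0.0, 1.0)   # Toeplitz taper
    return S * weights

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 30))
Sigma_hat = tapered_covariance(X, bandwidth=5)
print(Sigma_hat[0, :8].round(3))   # entries at lag >= 5 are exactly 0
```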
48 CFR 13.106-3 - Award and documentation.
Code of Federal Regulations, 2010 CFR
2010-10-01
...) Supporting the award decision if other than price-related factors were considered in selecting the supplier...) Comparison of the proposed price with prices found reasonable on previous purchases; (iii) Current price... being purchased; (vi) Comparison to an independent Government estimate; or (vii) Any other reasonable...
28 CFR 100.16 - Cost estimate submission.
Code of Federal Regulations, 2013 CFR
2013-07-01
... evaluation of the estimated costs. The FBI reserves the right to request additional cost data from carriers... if, as determined by the FBI, all cost data reasonably available to the carrier are either submitted... explain the estimating process are required by the FBI and the carrier refuses to provide necessary data...
28 CFR 100.16 - Cost estimate submission.
Code of Federal Regulations, 2011 CFR
2011-07-01
... evaluation of the estimated costs. The FBI reserves the right to request additional cost data from carriers... if, as determined by the FBI, all cost data reasonably available to the carrier are either submitted... explain the estimating process are required by the FBI and the carrier refuses to provide necessary data...
28 CFR 100.16 - Cost estimate submission.
Code of Federal Regulations, 2012 CFR
2012-07-01
... evaluation of the estimated costs. The FBI reserves the right to request additional cost data from carriers... if, as determined by the FBI, all cost data reasonably available to the carrier are either submitted... explain the estimating process are required by the FBI and the carrier refuses to provide necessary data...
28 CFR 100.16 - Cost estimate submission.
Code of Federal Regulations, 2014 CFR
2014-07-01
... evaluation of the estimated costs. The FBI reserves the right to request additional cost data from carriers... if, as determined by the FBI, all cost data reasonably available to the carrier are either submitted... explain the estimating process are required by the FBI and the carrier refuses to provide necessary data...
The assumption of equilibrium in models of migration.
Schachter, J; Althaus, P G
1993-02-01
In recent articles, Evans (1990) and Harrigan and McGregor (1993) (hereafter HM) scrutinized the equilibrium model of migration presented in a 1989 paper by Schachter and Althaus. This model used standard microeconomics to analyze gross interregional migration flows, based on the assumption that gross flows are in approximate equilibrium. HM criticized the model as theoretically untenable, while Evans raised empirical as well as theoretical objections. HM claimed that equilibrium of gross migration flows could be ruled out on theoretical grounds: they argued that the absence of net migration requires either that all regions have equal populations or that unsustainable regional migration propensities obtain. In fact, some moves are interregional and others are intraregional, and it does not follow that the number of interregional migrants will be larger for the more populous region. Alternatively, a country can be divided into a large number of small analytical regions of equal population; with uniform propensities to move, each of these regions would experience zero net migration in equilibrium. Hence, the condition that net migration equal zero is entirely consistent with unequal distributions of population across regions. Evans's criticisms were based both on flawed reasoning and on misinterpretation of the results of a number of econometric studies. His reasoning assumed that the existence of demand shifts, as found by Goldfarb and Yezer (1987) and Topel (1986), invalidated the equilibrium model. Equilibrium never obtains exactly, but economic modeling of migration properly begins with a simple equilibrium model of the system. A careful reading of the papers Evans cited in support of his position shows that they in fact affirm rather than deny the appropriateness of equilibrium modeling. Zero net migration together with nonzero gross migration is not theoretically incompatible with regional heterogeneity of population, wages, or amenities.
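The equal-population-units argument can be checked numerically. In the sketch below, six equal units share a uniform 5% move propensity with movers spread evenly over the other units; grouping the units into two regions of unequal population still yields equal gross flows in both directions and hence zero net migration. The populations and propensity are invented for illustration.

```python
# Zero net migration with unequal regional populations: divide the country
# into equal-population analytical units with a uniform move propensity,
# movers spreading evenly over the other units, then group the units into
# regions of unequal size and compare the gross interregional flows.

n, pop, rate = 6, 100_000, 0.05
out_per_unit = pop * rate                  # leavers from every unit: 5000

region_a = range(0, 2)                     # small region: 2 units, 200k people
region_b = range(2, 6)                     # large region: 4 units, 400k people
gross_a_to_b = sum(out_per_unit * len(region_b) / (n - 1) for _ in region_a)
gross_b_to_a = sum(out_per_unit * len(region_a) / (n - 1) for _ in region_b)
print(gross_a_to_b, gross_b_to_a, gross_a_to_b - gross_b_to_a)  # 8000 8000 0
```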
An Analysis of Machine- and Human-Analytics in Classification.
Tam, Gary K L; Kothari, Vivek; Chen, Min
2017-01-01
In this work, we present a study that traces the technical and cognitive processes in two visual analytics applications to a common theoretic model of soft knowledge that may be added into a visual analytics process for constructing a decision-tree model. Both case studies involved the development of classification models based on the "bag of features" approach. Both compared a visual analytics approach using parallel coordinates with a machine-learning approach using information theory. Both found that the visual analytics approach had some advantages over the machine learning approach, especially when sparse datasets were used as the ground truth. We examine various possible factors that may have contributed to such advantages, and collect empirical evidence for supporting the observation and reasoning of these factors. We propose an information-theoretic model as a common theoretic basis to explain the phenomena exhibited in these two case studies. Together we provide interconnected empirical and theoretical evidence to support the usefulness of visual analytics.
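The "machine-learning approach using information theory" referenced above typically means choosing decision-tree splits by information gain; a self-contained sketch of that criterion follows, with a fabricated labeled split purely to show the computation.

```python
from collections import Counter
import math

# Information gain for decision-tree induction:
# IG = H(parent) - sum over splits of |split|/|parent| * H(split).

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent, splits):
    n = len(parent)
    return entropy(parent) - sum(len(s) / n * entropy(s) for s in splits)

parent = ["a", "a", "a", "b", "b", "b"]
print(information_gain(parent, [["a", "a", "a"], ["b", "b", "b"]]))  # 1.0 (pure)
print(information_gain(parent, [["a", "b", "a"], ["b", "a", "b"]]))  # ~0.08
```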
ERIC Educational Resources Information Center
Bialik, Gadi; Kalfri, Adv. Yael; Livneh, Idit
2013-01-01
The theoretical grounds underlying this paper are the variety of governance perspectives, which represent different political and economic ideologies (Green, 2005; Manzer, 2003). The coexistence of these often clashing attitudes is one of the reasons for policy ambiguity and policy implementation gaps (Malen, 2006). It can also expose disputing…
General System Theory: Toward a Conceptual Framework for Science and Technology Education for All.
ERIC Educational Resources Information Center
Chen, David; Stroup, Walter
1993-01-01
Suggests using general system theory as a unifying theoretical framework for science and technology education for all. Five reasons are articulated: the multidisciplinary nature of systems theory, the ability to engage complexity, the capacity to describe system dynamics, the ability to represent the relationship between microlevel and…
Classical and Contemporary Approaches for Moral Development
ERIC Educational Resources Information Center
Cam, Zekeriya; Seydoogullari, Sedef; Cavdar, Duygu; Cok, Figen
2012-01-01
Most of the information in the moral development literature depends on the theories of Piaget and Kohlberg. The theoretical contributions of Gilligan and Turiel are not widely known, and few resources are available in Turkish. For this reason, introducing and discussing the theories of Gilligan and Turiel and a more comprehensive perspective for moral…