Sample records for approximate reasoning model

  1. Approximate spatial reasoning

    NASA Technical Reports Server (NTRS)

    Dutta, Soumitra

    1988-01-01

    A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning.

  2. Artificial neural networks and approximate reasoning for intelligent control in space

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.

    1991-01-01

    A method is introduced for learning to refine the control rules of approximate reasoning-based controllers. A reinforcement-learning technique is used in conjunction with a multi-layer neural network model of an approximate reasoning-based controller. The model learns by updating its prediction of the physical system's behavior. The model can use the control knowledge of an experienced operator and fine-tune it through the process of learning. Some of the space domains suitable for applications of the model such as rendezvous and docking, camera tracking, and tethered systems control are discussed.

  3. Approximate reasoning using terminological models

    NASA Technical Reports Server (NTRS)

    Yen, John; Vaidya, Nitin

    1992-01-01

    Term Subsumption Systems (TSS) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics and incorporates a reasoning mechanism that can deduce whether one concept subsumes another. However, TSSs have very limited ability to deal with the issue of uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. Finally, the issue of inconsistency in truth values due to inheritance is addressed using justifications of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in TSS. Also, as definitional knowledge is explicit and separate from the heuristic knowledge used for plausible inferences, the maintainability of expert systems could be improved.
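
    The abstract does not give the paper's actual uncertainty calculus; as a generic illustration of plausible rule-based inference over uncertain propositions, the sketch below uses MYCIN-style certainty factors. This is an assumption for illustration, not CLASP's mechanism.

```python
# Hypothetical sketch of combining certainty factors during plausible
# rule-based inference. The combination rules are the classic MYCIN ones;
# they are NOT taken from the CLASP/LOOM extension described above.

def combine_cf(cf1: float, cf2: float) -> float:
    """Combine two certainty factors supporting the same proposition."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 <= 0 and cf2 <= 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

def apply_rule(premise_cf: float, rule_cf: float) -> float:
    """Attenuate a rule's conclusion by the certainty of its premise."""
    return rule_cf * max(0.0, premise_cf)

# Two independent rules each lend partial support to the same conclusion.
cf = combine_cf(apply_rule(0.9, 0.7), apply_rule(0.8, 0.5))
print(round(cf, 3))   # 0.778
```

    Note how the combined certainty (0.778) exceeds either rule's individual contribution (0.63 and 0.4): independent evidence accumulates.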

  4. Approximate spatial reasoning

    NASA Technical Reports Server (NTRS)

    Dutta, Soumitra

    1988-01-01

    Much of human reasoning is approximate in nature. Formal models of reasoning traditionally try to be precise, rejecting the fuzziness of concepts in natural use and replacing them with non-fuzzy scientific explicata by a process of precisiation. As an alternative to this approach, it has been suggested that rather than regard human reasoning processes as themselves approximating to some more refined and exact logical process that can be carried out with mathematical precision, the essence and power of human reasoning lies in its capability to grasp and use inexact concepts directly. This view is supported by the widespread fuzziness of simple everyday terms (e.g., near, tall) and the complexity of ordinary tasks (e.g., cleaning a room). Spatial reasoning is an area where humans consistently reason approximately with demonstrably good results. Consider the case of crossing a traffic intersection. We have only an approximate idea of the locations and speeds of various obstacles (e.g., persons and vehicles), but we nevertheless manage to cross such traffic intersections without any harm. The details of the mental processes which enable us to carry out such intricate tasks in an apparently simple manner are not well understood. It is nevertheless desirable to try to incorporate such approximate reasoning techniques into our computer systems. Approximate spatial reasoning is very important for intelligent mobile agents (e.g., robots), especially for those operating in uncertain, unknown, or dynamic domains.
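
    As a concrete illustration of representing an inexact everyday term, a fuzzy membership function can grade distances by how well they fit "near". The breakpoints below are invented for illustration, not taken from the paper.

```python
# Illustrative fuzzy membership function for the linguistic term "near":
# a distance in meters maps to a degree of membership in [0, 1].
# The 10 m / 50 m breakpoints are arbitrary choices, not from the paper.

def mu_near(distance_m: float) -> float:
    """Degree to which a distance counts as 'near' (trapezoidal shape)."""
    if distance_m <= 10.0:
        return 1.0                        # definitely near
    if distance_m >= 50.0:
        return 0.0                        # definitely not near
    return (50.0 - distance_m) / 40.0     # linear falloff in between

for d in (5, 20, 45, 60):
    print(d, round(mu_near(d), 3))
```

    Crisply thresholding "near" at one distance would discard exactly the graded information such a membership function keeps.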

  5. Bifurcations in models of a society of reasonable contrarians and conformists

    NASA Astrophysics Data System (ADS)

    Bagnoli, Franco; Rechtman, Raúl

    2015-10-01

    We study models of a society composed of a mixture of conformist and reasonable contrarian agents that at any instant hold one of two opinions. Conformists tend to agree with the average opinion of their neighbors and reasonable contrarians tend to disagree, but revert to a conformist behavior in the presence of an overwhelming majority, in line with psychological experiments. The model is studied in the mean-field approximation and on small-world and scale-free networks. In the mean-field approximation, a large fraction of conformists triggers a polarization of the opinions, a pitchfork bifurcation, while a majority of reasonable contrarians leads to coherent oscillations, with an alternation of period-doubling and pitchfork bifurcations up to chaos. Similar scenarios are obtained by changing the fraction of long-range rewiring and the parameter of scale-free networks related to the average connectivity.
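
    The mean-field scenario can be illustrated with a one-dimensional map on the average opinion m. The response functions and parameters below are a hedged sketch in the spirit of the model, not the paper's exact equations.

```python
import math

# Hedged sketch of a mean-field opinion map: m is the average opinion in
# [-1, 1], p the fraction of reasonable contrarians, beta a coupling
# strength, and q the majority threshold beyond which contrarians revert
# to conformism. All functional forms and values are illustrative.

def step(m, p=0.8, beta=3.0, q=0.7):
    conformist = math.tanh(beta * m)
    if abs(m) > q:                        # overwhelming majority: revert
        contrarian = math.tanh(beta * m)
    else:
        contrarian = -math.tanh(beta * m)
    return (1 - p) * conformist + p * contrarian

m = 0.1
for _ in range(200):    # discard the transient
    m = step(m)
orbit = []
for _ in range(8):
    m = step(m)
    orbit.append(round(m, 3))
print(orbit)            # alternating signs: a period-2 oscillation
```

    With a large contrarian fraction the fixed point at m = 0 is unstable and the average opinion settles into the coherent oscillation the abstract describes; lowering the contrarian fraction instead yields polarization (a pitchfork bifurcation).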

  6. Bifurcations in models of a society of reasonable contrarians and conformists.

    PubMed

    Bagnoli, Franco; Rechtman, Raúl

    2015-10-01

    We study models of a society composed of a mixture of conformist and reasonable contrarian agents that at any instant hold one of two opinions. Conformists tend to agree with the average opinion of their neighbors and reasonable contrarians tend to disagree, but revert to a conformist behavior in the presence of an overwhelming majority, in line with psychological experiments. The model is studied in the mean-field approximation and on small-world and scale-free networks. In the mean-field approximation, a large fraction of conformists triggers a polarization of the opinions, a pitchfork bifurcation, while a majority of reasonable contrarians leads to coherent oscillations, with an alternation of period-doubling and pitchfork bifurcations up to chaos. Similar scenarios are obtained by changing the fraction of long-range rewiring and the parameter of scale-free networks related to the average connectivity.

  7. Stable same-sex friendships with higher achieving partners promote mathematical reasoning in lower achieving primary school children.

    PubMed

    DeLay, Dawn; Laursen, Brett; Kiuru, Noona; Poikkeus, Anna-Maija; Aunola, Kaisa; Nurmi, Jari-Erik

    2015-11-01

    This study was designed to investigate friend influence over mathematical reasoning in a sample of 374 children in 187 same-sex friend dyads (184 girls in 92 friendships; 190 boys in 95 friendships). Participants completed surveys that measured mathematical reasoning in the 3rd grade (approximately 9 years old) and 1 year later in the 4th grade (approximately 10 years old). Analyses designed for dyadic data (i.e., longitudinal actor-partner interdependence model) indicated that higher achieving friends influenced the mathematical reasoning of lower achieving friends, but not the reverse. Specifically, greater initial levels of mathematical reasoning among higher achieving partners in the 3rd grade predicted greater increases in mathematical reasoning from 3rd grade to 4th grade among lower achieving partners. These effects held after controlling for peer acceptance and rejection, task avoidance, interest in mathematics, maternal support for homework, parental education, length of the friendship, and friendship group norms on mathematical reasoning. © 2015 The British Psychological Society.

  8. Stable Same-Sex Friendships with Higher Achieving Partners Promote Mathematical Reasoning in Lower Achieving Primary School Children

    PubMed Central

    DeLay, Dawn; Laursen, Brett; Kiuru, Noona; Poikkeus, Anna-Maija; Aunola, Kaisa; Nurmi, Jari-Erik

    2015-01-01

    This study is designed to investigate friend influence over mathematical reasoning in a sample of 374 children in 187 same-sex friend dyads (184 girls in 92 friendships; 190 boys in 95 friendships). Participants completed surveys that measured mathematical reasoning in the 3rd grade (approximately 9 years old) and one year later in the 4th grade (approximately 10 years old). Analyses designed for dyadic data (i.e., longitudinal Actor-Partner Interdependence Models) indicated that higher achieving friends influenced the mathematical reasoning of lower achieving friends, but not the reverse. Specifically, greater initial levels of mathematical reasoning among higher achieving partners in the 3rd grade predicted greater increases in mathematical reasoning from 3rd grade to 4th grade among lower achieving partners. These effects held after controlling for peer acceptance and rejection, task avoidance, interest in mathematics, maternal support for homework, parental education, length of the friendship, and friendship group norms on mathematical reasoning. PMID:26402901

  9. Approximate Model of Zone Sedimentation

    NASA Astrophysics Data System (ADS)

    Dzianik, František

    2011-12-01

    The process of zone sedimentation is affected by many factors that cannot be expressed analytically. For this reason, zone settling is evaluated in practice experimentally or by applying an empirical mathematical description of the process. The paper presents the development of an approximate model of zone settling, i.e., a general function which should properly approximate the behaviour of the settling process within its entire range and at various conditions. Furthermore, the specification of the model parameters by regression analysis of settling-test results is shown. The suitability of the model is assessed by graphical dependencies and by statistical correlation coefficients. The approximate model could also be useful in simplifying the process design of continuous settling tanks and thickeners.

  10. Modelling default and likelihood reasoning as probabilistic reasoning

    NASA Technical Reports Server (NTRS)

    Buntine, Wray

    1990-01-01

    A probabilistic analysis of plausible reasoning about defaults and about likelihood is presented. 'Likely' and 'by default' are in fact treated as duals, in the same sense as possibility and necessity. To model these four forms probabilistically, a qualitative default probabilistic (QDP) logic and its quantitative counterpart DP are derived that allow qualitative and corresponding quantitative reasoning. Consistency and consequence results for subsets of the logics are given that require at most a quadratic number of satisfiability tests in the underlying propositional logic. The quantitative logic shows how to track the propagation error inherent in these reasoning forms. The methodology and sound framework of the system highlight their approximate nature, the dualities, and the need for complementary reasoning about relevance.

  11. Modelling default and likelihood reasoning as probabilistic reasoning

    NASA Technical Reports Server (NTRS)

    Buntine, Wray

    1990-01-01

    A probabilistic analysis of plausible reasoning about defaults and about likelihood is presented. 'Likely' and 'by default' are in fact treated as duals in the same sense as 'possibility' and 'necessity'. To model these four forms probabilistically, a logic QDP and its quantitative counterpart DP are derived that allow qualitative and corresponding quantitative reasoning. Consistency and consequence results for subsets of the logics are given that require at most a quadratic number of satisfiability tests in the underlying propositional logic. The quantitative logic shows how to track the propagation error inherent in these reasoning forms. The methodology and sound framework of the system highlight their approximate nature, the dualities, and the need for complementary reasoning about relevance.

  12. Approximate reasoning-based learning and control for proximity operations and docking in space

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.; Jani, Yashvant; Lea, Robert N.

    1991-01-01

    A recently proposed hybrid neural-network and fuzzy-logic-control architecture is applied to a fuzzy logic controller developed for attitude control of the Space Shuttle. A model using reinforcement learning and learning from past experience for fine-tuning its knowledge base is proposed. The two main components of this approximate reasoning-based intelligent control (ARIC) model, an action-state evaluation network and an action selection network, are described, as is the Space Shuttle attitude controller. An ARIC model for the controller is presented, and it is noted that the input layer in each network includes three nodes representing the angle error, the angle error rate, and a bias. Preliminary results indicate that the controller can hold the pitch rate within its desired deadband and starts to use the jets at about 500 sec into the run.

  13. An experiment-based comparative study of fuzzy logic control

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.; Chen, Yung-Yaw; Lee, Chuen-Chein; Murugesan, S.; Jang, Jyh-Shing

    1989-01-01

    An approach is presented to the control of a dynamic physical system through the use of approximate reasoning. The approach has been implemented in a program named POLE, and the authors have successfully built a prototype hardware system to solve the cartpole balancing problem in real-time. The approach provides a complementary alternative to the conventional analytical control methodology and is of substantial use when a precise mathematical model of the process being controlled is not available. A set of criteria for comparing controllers based on approximate reasoning and those based on conventional control schemes is furnished.
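
    The flavor of such an approximate reasoning controller can be sketched with one minimal fuzzy inference step. The rule base, membership functions, and singleton outputs below are invented for illustration; they are not those of the POLE program.

```python
# Minimal fuzzy controller sketch: pole angle error and error rate in,
# crisp force out. Linguistic sets, rules, and outputs are illustrative.

def tri(x, a, b, c):
    """Triangular membership with peak at b on support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

SETS = {  # shared linguistic sets for both inputs, on [-1, 1]
    "NEG": (-2.0, -1.0, 0.0),
    "ZERO": (-1.0, 0.0, 1.0),
    "POS": (0.0, 1.0, 2.0),
}
RULES = {  # (angle_set, rate_set) -> crisp output force (singleton)
    ("NEG", "NEG"): -1.0, ("NEG", "ZERO"): -0.5, ("NEG", "POS"): 0.0,
    ("ZERO", "NEG"): -0.5, ("ZERO", "ZERO"): 0.0, ("ZERO", "POS"): 0.5,
    ("POS", "NEG"): 0.0,  ("POS", "ZERO"): 0.5,  ("POS", "POS"): 1.0,
}

def fuzzy_force(angle, rate):
    """Weighted-average (singleton) defuzzification over the 9 rules."""
    num = den = 0.0
    for (a_set, r_set), out in RULES.items():
        w = min(tri(angle, *SETS[a_set]), tri(rate, *SETS[r_set]))
        num += w * out
        den += w
    return num / den if den else 0.0

print(round(fuzzy_force(0.0, 0.0), 3))   # balanced: no force
print(round(fuzzy_force(0.6, 0.2), 3))   # tilted right: push right
```

    The point of the comparison in the paper is that such a rule base needs no differential-equation model of the cart and pole, only operator-style rules of thumb.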

  14. "Glue" approximation for the pairing interaction in the Hubbard model with next nearest neighbor hopping

    NASA Astrophysics Data System (ADS)

    Khatami, Ehsan; Macridin, Alexandru; Jarrell, Mark

    2008-03-01

    Recently, several authors have employed the "glue" approximation for the cuprates, in which the full pairing vertex is approximated by the spin susceptibility. We study this approximation using Quantum Monte Carlo Dynamical Cluster Approximation methods on a 2D Hubbard model. By considering a reasonable finite value for the next nearest neighbor hopping, we find that this "glue" approximation, in its current form, does not capture the correct pairing symmetry. Here, d-wave is not the leading pairing symmetry, while it is the dominant symmetry in the "exact" QMC results. We argue that the sensitivity of this approximation to band structure changes leads to this inconsistency and that this form of interaction may not be the appropriate description of the pairing mechanism in cuprates. We suggest improvements to this approximation which help to capture the essential features of the QMC data.

  15. Testing approximate theories of first-order phase transitions on the two-dimensional Potts model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dasgupta, C.; Pandit, R.

    The two-dimensional, q-state (q > 4) Potts model is used as a testing ground for approximate theories of first-order phase transitions. In particular, the predictions of a theory analogous to the Ramakrishnan-Yussouff theory of freezing are compared with those of ordinary mean-field (Curie-Weiss) theory. It is found that the Curie-Weiss theory is a better approximation than the Ramakrishnan-Yussouff theory, even though the former neglects all fluctuations. It is shown that the Ramakrishnan-Yussouff theory overestimates the effects of fluctuations in this system. The reasons behind the failure of the Ramakrishnan-Yussouff approximation and the suitability of using the two-dimensional Potts model as a testing ground for these theories are discussed.

  16. First-order shock acceleration in solar flares

    NASA Technical Reports Server (NTRS)

    Ellison, D. C.; Ramaty, R.

    1985-01-01

    The first-order Fermi shock acceleration model is compared with specific observations for which electron, proton, and alpha particle spectra are available. In all events, it is found that a single shock with a compression ratio inferred from the low-energy proton spectra can reasonably produce the full proton, electron, and alpha particle spectra. The model predicts that the acceleration time to a given energy will be approximately equal for electrons and protons and, for reasonable solar parameters, can be less than 1 sec to 100 MeV.
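
    The key point, that a single compression ratio fixes the spectra of all species, follows from the standard test-particle result of first-order Fermi acceleration, sketched below. The index formula is the textbook result, not a calculation from the paper.

```python
# Test-particle result of first-order Fermi (diffusive shock)
# acceleration: the phase-space distribution is a power law f(p) ~ p^-q
# whose index depends only on the shock compression ratio r, not on the
# particle's mass or charge - hence one shock fits electrons, protons,
# and alpha particles alike.

def spectral_index(r: float) -> float:
    """Power-law index q = 3r / (r - 1) of f(p) for compression ratio r."""
    return 3.0 * r / (r - 1.0)

for r in (2.5, 3.0, 4.0):   # r = 4 is the strong-shock limit for a gas
    print(r, spectral_index(r))
```

    In the strong-shock limit r = 4 this gives q = 4, i.e. the familiar E^-2 differential spectrum for relativistic particles.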

  17. Can we use the equivalent sphere model to approximate organ doses in space radiation environments?

    NASA Astrophysics Data System (ADS)

    Lin, Zi-Wei

    For space radiation protection one often calculates the dose or dose equivalent in blood forming organs (BFO). It has been customary to use a 5 cm equivalent sphere to approximate the BFO dose. However, previous studies have concluded that a 5 cm sphere gives a very different dose from the exact BFO dose. One study concludes that a 9 cm sphere is a reasonable approximation for the BFO dose in solar particle event (SPE) environments. In this study we investigate the reason behind these observations and extend earlier studies by examining whether BFO, eyes or the skin can be approximated by the equivalent sphere model in different space radiation environments such as solar particle events and galactic cosmic ray (GCR) environments. We take the thickness distribution functions of the organs from the CAM (Computerized Anatomical Man) model, then use a deterministic radiation transport code to calculate organ doses in different space radiation environments. The organ doses have been evaluated with a water or aluminum shielding from 0 to 20 g/cm2. We then compare these exact doses with results from the equivalent sphere model and determine in which cases and at what radius parameters the equivalent sphere model is a reasonable approximation. Furthermore, we propose to use a modified equivalent sphere model with two radius parameters to represent the skin or eyes. For solar particle events, we find that the radius parameters for the organ dose equivalent increase significantly with the shielding thickness, and the model works marginally for BFO but is unacceptable for eyes or the skin. For galactic cosmic ray environments, the equivalent sphere model with one organ-specific radius parameter works well for the BFO dose equivalent, marginally well for the BFO dose and the dose equivalent of eyes or the skin, but is unacceptable for the dose of eyes or the skin.
The BFO radius parameters are found to be significantly larger than 5 cm in all cases, consistent with the conclusion of an earlier study. The radius parameters for the dose equivalent in GCR environments are approximately between 10 and 11 cm for the BFO, 3.7 to 4.8 cm for eyes, and 3.5 to 5.6 cm for the skin; while the radius parameters are between 10 and 13 cm for the BFO dose. In the proposed modified equivalent sphere model, the range of each of the two radius parameters for the skin (or eyes) is much tighter than that in the equivalent sphere model with one radius parameter. Our results thus show that the equivalent sphere model works better in galactic cosmic rays environments than in solar particle events. The model works well or marginally well for BFO but usually does not work for eyes or the skin. A modified model with two radius parameters works much better in approximating the dose and dose equivalent in eyes or the skin.
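
    The comparison the study makes can be sketched with toy numbers: the "exact" organ dose averages a depth-dose curve over the organ's self-shielding thickness distribution, while the equivalent sphere model evaluates the curve at a single radius. The exponential depth-dose curve and two-point thickness distribution below are illustrative stand-ins for the transport calculation and CAM data.

```python
import math

# Illustrative comparison of an exact organ dose (thickness-distribution
# average) with its single-radius equivalent. D(t) and the distribution
# are invented; the study uses transport-code results and CAM data.

def depth_dose(t_cm: float) -> float:
    """Assumed depth-dose curve, attenuation length 10 cm (illustrative)."""
    return math.exp(-t_cm / 10.0)

# (self-shielding thickness in cm, fraction of organ at that thickness)
thickness_dist = [(3.0, 0.5), (15.0, 0.5)]

exact = sum(w * depth_dose(t) for t, w in thickness_dist)

# Equivalent radius: the single depth whose dose matches the average.
r_equiv = -10.0 * math.log(exact)
print(round(exact, 4), round(r_equiv, 2))
```

    Because the depth-dose curve is nonlinear, the dose-matching radius (about 7.3 cm here) differs from the mean thickness (9 cm), and it shifts whenever the shape of D(t) shifts; that is the mechanism behind the radius parameters varying with environment and shielding.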

  18. Incorporation of varying types of temporal data in a neural network

    NASA Technical Reports Server (NTRS)

    Cohen, M. E.; Hudson, D. L.

    1992-01-01

    Most neural network models do not specifically deal with temporal data. Handling of these variables is complicated by the different uses to which temporal data are put, depending on the application. Even within the same application, temporal variables are often used in a number of different ways. In this paper, types of temporal data are discussed, along with their implications for approximate reasoning. Methods for integrating approximate temporal reasoning into existing neural network structures are presented. These methods are illustrated in a medical application for diagnosis of graft-versus-host disease which requires the use of several types of temporal data.

  19. Can the Equivalent Sphere Model Approximate Organ Doses in Space?

    NASA Technical Reports Server (NTRS)

    Lin, Zi-Wei

    2007-01-01

    For space radiation protection it is often useful to calculate dose or dose equivalent in blood forming organs (BFO). It has been customary to use a 5 cm equivalent sphere to simulate the BFO dose. However, many previous studies have concluded that a 5 cm sphere gives very different dose values from the exact BFO values. One study [1] concludes that a 9 cm sphere is a reasonable approximation for BFO doses in solar particle event environments. In this study we use a deterministic radiation transport code [2] to investigate the reason behind these observations and to extend earlier studies. We take different space radiation environments, including seven galactic cosmic ray environments and six large solar particle events, and calculate the dose and dose equivalent in the skin, eyes and BFO using their thickness distribution functions from the CAM (Computerized Anatomical Man) model [3]. The organ doses have been evaluated with a water or aluminum shielding of an areal density from 0 to 20 g/sq cm. We then compare with results from the equivalent sphere model and determine in which cases and at what radius parameters the equivalent sphere model is a reasonable approximation. Furthermore, we address why the equivalent sphere model is not a good approximation in some cases. For solar particle events, we find that the radius parameters for the organ dose equivalent increase significantly with the shielding thickness, and the model works marginally for BFO but is unacceptable for the eye or the skin. For galactic cosmic ray environments, the equivalent sphere model with an organ-specific constant radius parameter works well for the BFO dose equivalent, marginally well for the BFO dose and the dose equivalent of the eye or the skin, but is unacceptable for the dose of the eye or the skin. 
    The ranges of the radius parameters are also investigated, and the BFO radius parameters are found to be significantly larger than 5 cm in all cases, consistent with the conclusion of an earlier study [1]. The radius parameters for the dose equivalent in GCR environments are approximately between 10 and 11 cm for the BFO, 3.7 to 4.8 cm for the eye, and 3.5 to 5.6 cm for the skin; while the radius parameters are between 10 and 13 cm for the BFO dose.

  20. Invariant patterns in crystal lattices: Implications for protein folding algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HART,WILLIAM E.; ISTRAIL,SORIN

    2000-06-01

    Crystal lattices are infinite periodic graphs that occur naturally in a variety of geometries and which are of fundamental importance in polymer science. Discrete models of protein folding use crystal lattices to define the space of protein conformations. Because various crystal lattices provide discretizations of the same physical phenomenon, it is reasonable to expect that there will exist invariants across lattices related to fundamental properties of the protein folding process. This paper considers whether performance-guaranteed approximability is such an invariant for HP lattice models. The authors define a master approximation algorithm that has provable performance guarantees provided that a specific sublattice exists within a given lattice. They describe a broad class of crystal lattices that are approximable, which further suggests that approximability is a general property of HP lattice models.
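
    The objective those approximation guarantees concern is simple to state: on the 2D square lattice, the HP model scores a self-avoiding walk by its non-bonded hydrophobic contacts. The sequence and fold below are invented for illustration.

```python
# HP-model energy on the 2D square lattice: each pair of H residues that
# are lattice neighbors but not chain neighbors contributes -1.
# Sequence and coordinates are an illustrative toy example.

def hp_energy(seq, coords):
    """Energy = -(number of adjacent, non-consecutive H-H pairs)."""
    assert len(seq) == len(coords) and len(set(coords)) == len(coords)
    energy = 0
    for i in range(len(seq)):
        for j in range(i + 2, len(seq)):        # skip chain neighbors
            if seq[i] == seq[j] == "H":
                dx = abs(coords[i][0] - coords[j][0])
                dy = abs(coords[i][1] - coords[j][1])
                if dx + dy == 1:                # lattice contact
                    energy -= 1
    return energy

# A square fold of HPPH: residues 0 and 3 end up adjacent.
print(hp_energy("HPPH", [(0, 0), (1, 0), (1, 1), (0, 1)]))   # -1
```

    An approximation algorithm with a performance guarantee must achieve a provable fraction of the optimal (most negative) value of this energy on every sequence.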

  1. On the integration of reinforcement learning and approximate reasoning for control

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.

    1991-01-01

    The author discusses the importance of strengthening the knowledge representation characteristic of reinforcement learning techniques using methods such as approximate reasoning. The ARIC (approximate reasoning-based intelligent control) architecture is an example of such a hybrid approach in which the fuzzy control rules are modified (fine-tuned) using reinforcement learning. ARIC also demonstrates that it is possible to start with an approximately correct control knowledge base and learn to refine this knowledge through further experience. On the other hand, techniques such as the TD (temporal difference) algorithm and Q-learning establish stronger theoretical foundations for their use in adaptive control and also in stability analysis of hybrid reinforcement learning and approximate reasoning-based controllers.
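
    The TD/Q-learning machinery the abstract refers to can be illustrated with the tabular Q-learning update; this is a generic sketch, not ARIC's neural-network implementation, and the states, actions, and rewards are invented.

```python
# Tabular Q-learning update: move Q(s, a) toward the observed reward
# plus the discounted best value of the next state. Generic sketch,
# not the ARIC architecture itself.

def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """One step of Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[s_next].values()) if Q[s_next] else 0.0
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# Tiny two-state example: acting from state 0 yields reward 1.0 and
# lands in state 1, whose best known action value is 2.0.
Q = {0: {"go": 0.0}, 1: {"go": 2.0}}
q_update(Q, 0, "go", 1.0, 1)
print(Q[0]["go"])   # 0.5 * (1.0 + 0.9 * 2.0) = 1.4
```

    In a hybrid controller of the kind discussed, the table is replaced by fuzzy rules whose parameters this same temporal-difference error fine-tunes.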

  2. Coherent Anomaly Method Calculation on the Cluster Variation Method. II.

    NASA Astrophysics Data System (ADS)

    Wada, Koh; Watanabe, Naotosi; Uchida, Tetsuya

    The critical exponents of the bond percolation model are calculated in the D(= 2,3,…)-dimensional simple cubic lattice on the basis of Suzuki's coherent anomaly method (CAM) by making use of a series of the pair, the square-cactus and the square approximations of the cluster variation method (CVM) in the s-state Potts model. These simple approximations give reasonable values of critical exponents α, β, γ and ν in comparison with ones estimated by other methods. It is also shown that the results of the pair and the square-cactus approximations can be derived as exact results of the bond percolation model on the Bethe and the square-cactus lattice, respectively, in the presence of ghost field without recourse to the s→1 limit of the s-state Potts model.

  3. Coherent Anomaly Method Calculation on the Cluster Variation Method. II. Critical Exponents of Bond Percolation Model

    NASA Astrophysics Data System (ADS)

    Wada, Koh; Watanabe, Naotosi; Uchida, Tetsuya

    1991-10-01

    The critical exponents of the bond percolation model are calculated in the D(=2, 3, …)-dimensional simple cubic lattice on the basis of Suzuki’s coherent anomaly method (CAM) by making use of a series of the pair, the square-cactus and the square approximations of the cluster variation method (CVM) in the s-state Potts model. These simple approximations give reasonable values of critical exponents α, β, γ and ν in comparison with ones estimated by other methods. It is also shown that the results of the pair and the square-cactus approximations can be derived as exact results of the bond percolation model on the Bethe and the square-cactus lattice, respectively, in the presence of ghost field without recourse to the s→1 limit of the s-state Potts model.
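
    That the pair approximation is exact on the Bethe lattice invites a quick check against the standard closed-form results for bond percolation there, sketched below (textbook results, not the CAM calculation itself).

```python
# Standard bond-percolation results on the Bethe lattice of coordination
# number z: threshold p_c = 1/(z - 1), and below threshold the mean
# cluster size S(p) = (1 + p) / (1 - (z - 1) p), which diverges as
# (p_c - p)^-1, i.e. the mean-field exponent gamma = 1.

def bethe_pc(z: int) -> float:
    """Bond percolation threshold on the Bethe lattice."""
    return 1.0 / (z - 1)

def mean_cluster_size(p: float, z: int) -> float:
    """Mean cluster size below threshold."""
    assert p < bethe_pc(z)
    return (1 + p) / (1 - (z - 1) * p)

print(bethe_pc(4))                          # 1/3
print(round(mean_cluster_size(0.3, 4), 2))  # large: p is near threshold
```

    Exponents obtained this way are the mean-field ones; the point of the CAM analysis is to extract the non-classical exponents from a series of such approximations.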

  4. Multispecies lottery competition: a diffusion analysis

    USGS Publications Warehouse

    Hatfield, J.S.; Chesson, P.L.; Tuljapurkar, S.; Caswell, H.

    1997-01-01

    The lottery model is a stochastic competition model designed for space-limited communities of sedentary organisms. Examples of such communities include coral reef fishes, aquatic sessile organisms, and many plant communities. Explicit conditions for the coexistence of two species and the stationary distribution of the two-species model were determined previously using an approximation with a diffusion process. In this chapter, a diffusion approximation is presented for the multispecies model for communities of two or more species, and a stage-structured model is investigated. The stage-structured model would be more reasonable for communities of long-lived species such as trees in a forest in which recruitment and death rates depend on the age or stage of the individuals.
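
    A minimal two-species lottery simulation shows the mechanism the diffusion analysis approximates: space freed by deaths is re-won in proportion to stochastic reproductive output. Parameters and the noise model below are illustrative choices, not the chapter's.

```python
import math
import random

# Minimal two-species lottery model. x holds the fraction of space each
# species occupies; vacated space is allocated by a "lottery" weighted
# by each species' fluctuating reproductive output. Illustrative only.

def lottery_step(x, death, fecundity, rng):
    # lognormal environmental fluctuation in per-capita reproduction
    b = [f * xi * math.exp(0.1 * rng.gauss(0.0, 1.0))
         for f, xi in zip(fecundity, x)]
    freed = sum(d * xi for d, xi in zip(death, x))
    total = sum(b)
    return [xi * (1.0 - d) + freed * bi / total
            for xi, d, bi in zip(x, death, b)]

rng = random.Random(1)
x = [0.5, 0.5]
for _ in range(1000):
    x = lottery_step(x, death=[0.1, 0.1], fecundity=[1.0, 1.0], rng=rng)
print(round(sum(x), 6))   # occupied space is conserved at 1.0
```

    With identical parameters the species drift neutrally; the diffusion approximation replaces many such sample paths with an equation for the distribution of x, which is what makes explicit coexistence conditions tractable.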

  5. Parallel implementation of approximate atomistic models of the AMOEBA polarizable model

    NASA Astrophysics Data System (ADS)

    Demerdash, Omar; Head-Gordon, Teresa

    2016-11-01

    In this work we present a replicated data hybrid OpenMP/MPI implementation of a hierarchical progression of approximate classical polarizable models that yields speedups of up to ∼10 compared to the standard OpenMP implementation of the exact parent AMOEBA polarizable model. In addition, our parallel implementation exhibits reasonable weak and strong scaling. The resulting parallel software will prove useful for those who are interested in how molecular properties converge in the condensed phase with respect to the many-body expansion (MBE); it provides a fruitful test bed for exploring different electrostatic embedding schemes and offers an interesting possibility for future exascale computing paradigms.

  6. Angular momentum projection for a Nilsson mean-field plus pairing model

    NASA Astrophysics Data System (ADS)

    Wang, Yin; Pan, Feng; Launey, Kristina D.; Luo, Yan-An; Draayer, J. P.

    2016-06-01

    The angular momentum projection for the axially deformed Nilsson mean-field plus a modified standard pairing (MSP) or the nearest-level pairing (NLP) model is proposed. Both the exact projection, in which all intrinsic states are taken into consideration, and the approximate projection, in which only intrinsic states with K = 0 are taken in the projection, are considered. The analysis shows that the approximate projection with only K = 0 intrinsic states seems reasonable, while greatly reducing the configuration subspace considered. As simple examples of the model's application, low-lying spectra and electromagnetic properties of 18O and 18Ne are described using both the exact and the approximate angular momentum projection of the MSP or the NLP, while those of 20Ne and 24Mg are described using the approximate angular momentum projection of the MSP or NLP.

  7. The best-fit universe. [cosmological models

    NASA Technical Reports Server (NTRS)

    Turner, Michael S.

    1991-01-01

    Inflation provides very strong motivation for a flat Universe, Harrison-Zel'dovich (constant-curvature) perturbations, and cold dark matter. However, there are a number of cosmological observations that conflict with the predictions of the simplest such model: one with zero cosmological constant. They include the age of the Universe, dynamical determinations of Omega, galaxy-number counts, and the apparent abundance of large-scale structure in the Universe. While the discrepancies are not yet serious enough to rule out the simplest and most well motivated model, the current data point to a best-fit model with the following parameters: Omega(sub B) approximately equal to 0.03, Omega(sub CDM) approximately equal to 0.17, Omega(sub Lambda) approximately equal to 0.8, and H(sub 0) approximately equal to 70 km/(sec x Mpc), which significantly improves the concordance with observations. While there is no good reason to expect such a value for the cosmological constant, there is no physical principle that would rule it out.
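
    The age argument behind these parameters can be checked with the closed-form age of a flat universe, t0 = (2 / (3 H0 sqrt(OL))) * asinh(sqrt(OL / Om)); this is the standard flat-universe result, applied here to the abstract's numbers.

```python
import math

# Age of a flat universe with matter fraction Om and cosmological
# constant fraction OL = 1 - Om, for H0 = 70 km/s/Mpc. The closed form
# is standard; the parameter values are the abstract's best fit
# (Omega_B + Omega_CDM = 0.2, Omega_Lambda = 0.8).

H0_PER_GYR = 70.0 / 978.0   # H0 in 1/Gyr (1/H0 is about 14 Gyr)

def age_gyr(omega_m: float) -> float:
    ol = 1.0 - omega_m
    return (2.0 / (3.0 * H0_PER_GYR * math.sqrt(ol))) \
        * math.asinh(math.sqrt(ol / omega_m))

print(round(age_gyr(0.20), 1))          # ~15.0 Gyr with Lambda
print(round(age_gyr(1.0 - 1e-9), 1))    # ~9.3 Gyr without (2 / (3 H0))
```

    A cosmological constant raises the age at fixed H0 from about 9.3 Gyr to about 15 Gyr, which is how the best-fit model eases the age problem the abstract lists.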

  8. SPARC GENERATED CHEMICAL PROPERTIES DATABASE FOR USE IN NATIONAL RISK ASSESSMENTS

    EPA Science Inventory

    The SPARC (Sparc Performs Automated Reasoning in Chemistry) Model was used to provide temperature dependent algorithms used to estimate chemical properties for approximately 200 chemicals of interest to the promulgation of the Hazardous Waste Identification Rule (HWIR) . Proper...

  9. Information Uncertainty to Compare Qualitative Reasoning Security Risk Assessment Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavez, Gregory M; Key, Brian P; Zerkle, David K

    2009-01-01

    The security risk associated with malevolent acts such as those of terrorism is often void of the historical data required for a traditional PRA. Most information available to conduct security risk assessments for these malevolent acts is obtained from subject matter experts as subjective judgements. Qualitative reasoning approaches such as approximate reasoning and evidential reasoning are useful for modeling the predicted risk from information provided by subject matter experts. Absent from these approaches is a consistent means to compare the security risk assessment results. Associated with each predicted risk reasoning result is a quantifiable amount of information uncertainty which can be measured and used to compare the results. This paper explores using entropy measures to quantify the information uncertainty associated with conflict and non-specificity in the predicted reasoning results. The measured quantities of conflict and non-specificity can ultimately be used to compare qualitative reasoning results, which is important in triage studies and ultimately resource allocation. Straightforward extensions of previous entropy measures are presented here to quantify the non-specificity and conflict associated with security risk assessment results obtained from qualitative reasoning models.
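    The entropy measures mentioned above can be illustrated with a small sketch. This is a hypothetical toy example, not the paper's actual formulation: a generalized Hartley measure for non-specificity, and a Shannon-style measure as a simplified stand-in for conflict, both computed over a Dempster-Shafer-style body of evidence (the risk levels and mass assignments are invented for illustration).

```python
from math import log2

def nonspecificity(m):
    """Generalized Hartley measure N(m) = sum_A m(A) * log2(|A|):
    larger focal sets carry less specific evidence."""
    return sum(mass * log2(len(A)) for A, mass in m.items() if len(A) > 0)

def conflict(m):
    """Shannon-style measure over the mass assignment: high when mass
    is split across disjoint alternatives (a simplified stand-in for
    the conflict measures used in the risk-assessment literature)."""
    return -sum(mass * log2(mass) for mass in m.values() if mass > 0)

# Three hypothetical expert assessments over risk levels {low, med, high}
precise = {frozenset({"high"}): 1.0}                         # specific
vague = {frozenset({"low", "med", "high"}): 1.0}             # non-specific
split = {frozenset({"low"}): 0.5, frozenset({"high"}): 0.5}  # conflicting

assert nonspecificity(precise) == 0.0            # fully specific evidence
assert abs(nonspecificity(vague) - log2(3)) < 1e-12
assert conflict(split) == 1.0                    # mass split across disjoint sets
```

Two assessments predicting the same risk level can then be ranked by how much non-specificity and conflict each carries, which is the comparison the abstract describes.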

  10. Sound scattering by several zooplankton groups. II. Scattering models.

    PubMed

    Stanton, T K; Chu, D; Wiebe, P H

    1998-01-01

    Mathematical scattering models are derived and compared with data from zooplankton from several gross anatomical groups--fluidlike, elastic shelled, and gas bearing. The models are based upon the acoustically inferred boundary conditions determined from laboratory backscattering data presented in part I of this series [Stanton et al., J. Acoust. Soc. Am. 103, 225-235 (1998)]. The models use a combination of ray theory, modal-series solution, and distorted wave Born approximation (DWBA). The formulations, which are inherently approximate, are designed to include only the dominant scattering mechanisms as determined from the experiments. The models for the fluidlike animals (euphausiids in this case) ranged from the simplest case involving two rays, which could qualitatively describe the structure of target strength versus frequency for single pings, to the most complex case involving a rough inhomogeneous asymmetrically tapered bent cylinder using the DWBA-based formulation which could predict echo levels over all angles of incidence (including the difficult region of end-on incidence). The model for the elastic shelled body (gastropods in this case) involved development of an analytical model which takes into account irregularities and discontinuities of the shell. The model for gas-bearing animals (siphonophores) is a hybrid model which is composed of the summation of the exact solution to the gas sphere and the approximate DWBA-based formulation for arbitrarily shaped fluidlike bodies. There is also a simplified ray-based model for the siphonophore. The models are applied to data involving single pings, ping-to-ping variability, and echoes averaged over many pings. There is reasonable qualitative agreement between the predictions and single ping data, and reasonable quantitative agreement between the predictions and variability and averages of echo data.

  11. Performance Monitoring of Diabetic Patient Systems

    DTIC Science & Technology

    2001-10-25

    a process delay that is due to the dynamics of the glucose sensor. A. Bergman Model The Bergman and AIDA models both utilize a "minimal model"...approximation of the process must be made to achieve reasonable performance. A first order approximation, ~g(s), of both the Bergman and AIDA models is...Within the IMC framework, both the Bergman and AIDA models can be controlled within acceptable tolerances. The simulated faults are stochastic

  12. Why the Particle-in-a-Box Model Works Well for Cyanine Dyes but Not for Conjugated Polyenes

    ERIC Educational Resources Information Center

    Autschbach, Jochen

    2007-01-01

    We investigate why the particle-in-a-box (PB) model works well for calculating the absorption wavelengths of cyanine dyes and why it does not work for conjugated polyenes. The PB model is immensely useful in the classroom, but owing to its highly approximate character there is little reason to expect that it can yield quantitative agreement with…
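    The PB estimate the article examines is straightforward to reproduce. A minimal sketch, with illustrative parameter values not taken from the article: for N pi electrons filling the box levels pairwise, the HOMO quantum number is n = N/2, the HOMO-LUMO gap is dE = h^2(2n+1)/(8mL^2), and the absorption wavelength is lambda = hc/dE.

```python
H = 6.62607015e-34      # Planck constant (J s)
M_E = 9.1093837015e-31  # electron mass (kg)
C = 2.99792458e8        # speed of light (m/s)

def pb_wavelength(n_electrons, box_length_m):
    """HOMO->LUMO absorption wavelength for n_electrons free electrons
    in a 1-D box. Levels fill pairwise, so the HOMO quantum number is
    n = N/2 and dE = H^2 * (2n + 1) / (8 * M_E * L^2)."""
    n_homo = n_electrons // 2
    de = H**2 * (2 * n_homo + 1) / (8 * M_E * box_length_m**2)
    return H * C / de

# Illustrative cyanine-like chain: 6 pi electrons in an assumed ~0.83 nm box
lam = pb_wavelength(6, 0.83e-9)
assert 3.0e-7 < lam < 3.5e-7  # ~320 nm for these assumed parameters
```

Because lambda scales as L^2, lengthening the box (adding vinyl units to the chain) shifts the predicted absorption to longer wavelengths, which is the trend the PB model captures well for cyanines.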

  13. The microwave propagation and backscattering characteristics of vegetation. [wheat, sorghum, soybeans and corn fields in Kansas]

    NASA Technical Reports Server (NTRS)

    Ulaby, F. T. (Principal Investigator); Wilson, E. A.

    1984-01-01

    A semi-empirical model for microwave backscatter from vegetation was developed and a complete set of canopy attenuation measurements as a function of frequency, incidence angle and polarization was acquired. The semi-empirical model was tested on corn and sorghum data over the 8 to 35 GHz range. The model generally provided an excellent fit to the data as measured by the correlation and rms error between observed and predicted data. The model also predicted reasonable values of canopy attenuation. The attenuation data was acquired over the 1.6 to 10.2 GHz range for the linear polarizations at approximately 20 deg and 50 deg incidence angles for wheat and soybeans. An attenuation model is proposed which provides reasonable agreement with the measured data.

  14. Dose-Dependent Model of Caffeine Effects on Human Vigilance during Total Sleep Deprivation

    DTIC Science & Technology

    2014-05-20

    does not consider the absorption of caffeine. This is a reasonable approximation for caffeine when ingested via coffee, tea, energy drinks, and most...Dose-dependent model of caffeine effects on human vigilance during total sleep deprivation Sridhar Ramakrishnan a, Srinivas Laxminarayan a, Nancy J...We modeled the dose-dependent effects of caffeine on human vigilance. The model predicted the effects of both single and repeated caffeine doses

  15. Mean-field approximation for the Sznajd model in complex networks

    NASA Astrophysics Data System (ADS)

    Araújo, Maycon S.; Vannucchi, Fabio S.; Timpanaro, André M.; Prado, Carmen P. C.

    2015-02-01

    This paper studies the Sznajd model for opinion formation in a population connected through a general network. A master equation describing the time evolution of opinions is presented and solved in a mean-field approximation. Although quite simple, this approximation allows us to capture the most important features regarding the steady states of the model. When spontaneous opinion changes are included, a discontinuous transition from consensus to polarization can be found as the rate of spontaneous change is increased. In this case we show that a hybrid mean-field approach including interactions between second nearest neighbors is necessary to estimate correctly the critical point of the transition. The analytical prediction of the critical point is also compared with numerical simulations in a wide variety of networks, in particular Barabási-Albert networks, finding reasonable agreement despite the strong approximations involved. The same hybrid approach that made it possible to deal with second-order neighbors could just as well be adapted to treat other problems such as epidemic spreading or predator-prey systems.
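    The qualitative behavior of such a mean-field treatment can be sketched with a toy flow equation. The following is a schematic illustration of majority-wins opinion dynamics, dm/dt = m(1-m)(2m-1), not the paper's actual master equation: consensus states m = 0 and m = 1 are stable fixed points and the balanced state m = 1/2 is unstable.

```python
def mean_field_step(m, dt=0.01):
    """One Euler step of the schematic mean-field flow
    dm/dt = m(1-m)(2m-1) for the fraction m of +1 opinions.
    Fixed points: m = 0 and m = 1 (consensus, stable), m = 1/2 (unstable)."""
    return m + dt * m * (1 - m) * (2 * m - 1)

def relax(m0, steps=20000):
    """Iterate the flow from an initial opinion fraction m0."""
    m = m0
    for _ in range(steps):
        m = mean_field_step(m)
    return m

assert relax(0.51) > 0.99              # slight majority -> consensus at m = 1
assert relax(0.49) < 0.01              # slight minority -> consensus at m = 0
assert abs(relax(0.5) - 0.5) < 1e-12   # exactly balanced: unstable fixed point
```

The spontaneous-opinion-change term the paper adds would contribute an extra noise-rate term to this flow, which is what shifts the fixed-point structure and produces the discontinuous consensus-to-polarization transition.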

  16. Unification of Gauge Couplings in the E{sub 6}SSM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athron, P.; King, S. F.; Luo, R.

    2010-02-10

    We argue that in the two-loop approximation gauge coupling unification in the exceptional supersymmetric standard model (E{sub 6}SSM) can be achieved for any phenomenologically reasonable value of alpha{sub 3}(M{sub Z}) consistent with the experimentally measured central value.

  17. 10 CFR 431.17 - Determination of efficiency.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... different horsepowers without duplication; (C) The basic models should be of different frame number series... be produced over a reasonable period of time (approximately 180 days), then each unit shall be tested... design may be substituted without requiring additional testing if the represented measures of energy...

  18. Combining the modified Skyrme-like model and the local density approximation to determine the symmetry energy of nuclear matter

    NASA Astrophysics Data System (ADS)

    Liu, Jian; Ren, Zhongzhou; Xu, Chang

    2018-07-01

    Combining the modified Skyrme-like model and the local density approximation model, the slope parameter L of the symmetry energy is extracted from the properties of finite nuclei with an improved iterative method. The calculations of the iterative method are performed within the framework of spherical symmetry. By choosing 200 neutron-rich nuclei on 25 isotopic chains as candidates, the slope parameter is constrained to 50 MeV < L < 62 MeV. The validity of this method is examined against the properties of finite nuclei. Results show that reasonable descriptions of the properties of finite nuclei and nuclear matter can be obtained together.

  19. Approximate Model for Turbulent Stagnation Point Flow.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dechant, Lawrence

    2016-01-01

    Here we derive an approximate turbulent self-similar model for a class of favorable pressure gradient wedge-like flows, focusing on the stagnation point limit. While the self-similar model provides a useful gross flow field estimate, this approach must be combined with a near-wall model to determine skin friction and, by Reynolds analogy, the heat transfer coefficient. The combined approach is developed in detail for the stagnation point flow problem, where turbulent skin friction and Nusselt number results are obtained. Comparison to the classical Van Driest (1958) result suggests overall reasonable agreement. Though the model is only valid near the stagnation region of cylinders and spheres, it nonetheless provides a reasonable model for overall cylinder and sphere heat transfer. The enhancement effect of free stream turbulence upon the laminar flow is used to derive a similar expression which is valid for turbulent flow. Examination of free stream enhanced laminar flow suggests that, rather than enhancement of laminar flow behavior, free stream disturbance results in early transition to turbulent stagnation point behavior. Excellent agreement is shown between enhanced laminar flow and turbulent flow behavior for high levels (e.g., 5%) of free stream turbulence. Finally, the blunt body turbulent stagnation results are shown to provide realistic heat transfer results for turbulent jet impingement problems.

  20. On the derivation of approximations to cellular automata models and the assumption of independence.

    PubMed

    Davies, K J; Green, J E F; Bean, N G; Binder, B J; Ross, J V

    2014-07-01

    Cellular automata are discrete agent-based models, generally used in cell-based applications. There is much interest in obtaining continuum models that describe the mean behaviour of the agents in these models. Previously, continuum models have been derived for agents undergoing motility and proliferation processes; however, these models only hold under restricted conditions. In order to narrow down the reason for these restrictions, we explore three possible sources of error in deriving the model. These sources are the choice of limiting arguments, the use of a discrete-time model as opposed to a continuous-time model and the assumption of independence between the state of sites. We present a rigorous analysis in order to gain a greater understanding of the significance of these three issues. By finding a limiting regime that accurately approximates the conservation equation for the cellular automata, we are able to conclude that the inaccuracy between our approximation and the cellular automata is completely based on the assumption of independence. Copyright © 2014 Elsevier Inc. All rights reserved.
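    The role of the independence assumption can be demonstrated in a few lines. In the hypothetical sketch below (parameter values are arbitrary, and the rule is a deliberately well-mixed proliferation process, not the paper's lattice model), daughters are placed on uniformly random sites so that no spatial correlations build up; the occupancy then tracks its discrete-time logistic mean-field closely.

```python
import random

def simulate(n_sites=10000, p=0.2, steps=30, c0=0.05, seed=0):
    """Well-mixed proliferation CA: each occupied agent attempts, with
    probability p, to place a daughter on a uniformly random site
    (a no-op if the site is occupied). Uniformly random targets keep
    site states effectively independent."""
    rng = random.Random(seed)
    occupied = set(rng.sample(range(n_sites), int(c0 * n_sites)))
    history = [len(occupied) / n_sites]
    for _ in range(steps):
        for _agent in list(occupied):
            if rng.random() < p:
                occupied.add(rng.randrange(n_sites))
        history.append(len(occupied) / n_sites)
    return history

def logistic(c0=0.05, p=0.2, steps=30):
    """Discrete-time mean-field counterpart: c' = c + p*c*(1-c)."""
    c, out = c0, [c0]
    for _ in range(steps):
        c = c + p * c * (1 - c)
        out.append(c)
    return out

sim, mf = simulate(), logistic()
assert max(abs(s - m) for s, m in zip(sim, mf)) < 0.1
```

On a lattice with nearest-neighbour placement the daughters cluster, site states become correlated, and the same logistic curve overestimates growth, which is the failure of independence the paper pins down.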

  1. A geometric modeler based on a dual-geometry representation polyhedra and rational b-splines

    NASA Technical Reports Server (NTRS)

    Klosterman, A. L.

    1984-01-01

    For speed and data base reasons, solid geometric modeling of large complex practical systems is usually approximated by a polyhedra representation. Precise parametric surface and implicit algebraic modelers are available but it is not yet practical to model the same level of system complexity with these precise modelers. In response to this contrast the GEOMOD geometric modeling system was built so that a polyhedra abstraction of the geometry would be available for interactive modeling without losing the precise definition of the geometry. Part of the reason that polyhedra modelers are effective is that all bounded surfaces can be represented in a single canonical format (i.e., sets of planar polygons). This permits a very simple and compact data structure. Nonuniform rational B-splines are currently the best representation to describe a very large class of geometry precisely with one canonical format. The specific capabilities of the modeler are described.

  2. A three-dimensional semianalytical model of hydraulic fracture growth through weak barriers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luiskutty, C.T.; Tomutes, L.; Palmer, I.D.

    1989-08-01

    The goal of this research was to develop a fracture model for length/height ratio {le}4 that includes 2D flow (and a line source corresponding to the perforated interval) but makes approximations that allow a semianalytical solution, with large computer-time savings over the fully numerical model. The height, maximum width, and pressure at the wellbore in this semianalytical model are calculated and compared with the results of the fully three-dimensional (3D) model. There is reasonable agreement in all parameters, the maximum discrepancy being 24%. Comparisons of fracture volume and leakoff volume also show reasonable agreement in volume and fluid efficiencies. The values of length/height ratio, in the four cases in which agreement is found, vary from 1.5 to 3.7. The model offers a useful first-order (or screening) calculation of fracture-height growth through weak barriers (e.g., low stress contrasts). When coupled with the model developed for highly elongated fractures of length/height ratio {ge}4, which are also found to be in basic agreement with the fully numerical model, this new model provides the capability for approximating fracture-height growth through barriers for vertical fracture shapes that vary from penny to highly elongated. The computer time required is estimated to be less than the time required for the fully numerical model by a factor of 10 or more.

  3. Gas-phase geometry optimization of biological molecules as a reasonable alternative to a continuum environment description: fact, myth, or fiction?

    PubMed

    Sousa, Sérgio Filipe; Fernandes, Pedro Alexandrino; Ramos, Maria João

    2009-12-31

    Gas-phase optimization of single biological molecules and of small active-site biological models has become a standard approach in first principles computational enzymology. The important role played by the surrounding environment (solvent, enzyme, both) is normally only accounted for through higher-level single point energy calculations performed using a polarizable continuum model (PCM) and an appropriate dielectric constant with the gas-phase-optimized geometries. In this study we analyze this widely used approximation, by comparing gas-phase-optimized geometries with geometries optimized with different PCM approaches (and considering different dielectric constants) for a representative data set of 20 very important biological molecules--the 20 natural amino acids. A total of 323 chemical bonds and 469 angles present in standard amino acid residues were evaluated. The results show that the use of gas-phase-optimized geometries can in fact be quite a reasonable alternative to the use of the more computationally intensive continuum optimizations, providing a good description of bond lengths and angles for typical biological molecules, even for charged amino acids, such as Asp, Glu, Lys, and Arg. This approximation is particularly successful if the protonation state of the biological molecule could be reasonably described in vacuum, a requirement that was already necessary in first principles computational enzymology.

  4. Simulations of sooting turbulent jet flames using a hybrid flamelet/stochastic Eulerian field method

    NASA Astrophysics Data System (ADS)

    Consalvi, Jean-Louis; Nmira, Fatiha; Burot, Daria

    2016-03-01

    The stochastic Eulerian field method is applied to simulate 12 turbulent C1-C3 hydrocarbon jet diffusion flames covering a wide range of Reynolds numbers and fuel sooting propensities. The joint scalar probability density function (PDF) is a function of the mixture fraction, enthalpy defect, scalar dissipation rate and representative soot properties. Soot production is modelled by a semi-empirical acetylene/benzene-based soot model. Spectral gas and soot radiation is modelled using a wide-band correlated-k model. Emission turbulent radiation interactions (TRIs) are taken into account by means of the PDF method, whereas absorption TRIs are modelled using the optically thin fluctuation approximation. Model predictions are found to be in reasonable agreement with experimental data in terms of flame structure, soot quantities and radiative loss. Mean soot volume fractions are predicted within a factor of two of the experiments whereas radiant fractions and peaks of wall radiative fluxes are within 20%. The study also aims to assess approximate radiative models, namely the optically thin approximation (OTA) and grey medium approximation. These approximations affect significantly the radiative loss and should be avoided if accurate predictions of the radiative flux are desired. At atmospheric pressure, the relative errors that they produced on the peaks of temperature and soot volume fraction are within both experimental and model uncertainties. However, these discrepancies are found to increase with pressure, suggesting that spectral models describing properly the self-absorption should be considered at over-atmospheric pressure.

  5. How Uncertain is Uncertainty?

    NASA Astrophysics Data System (ADS)

    Vámos, Tibor

    The gist of the paper is the fundamentally uncertain nature of all kinds of uncertainties and, consequently, a critical epistemic review of historical and recent approaches, computational methods, and algorithms. The review follows the development of the notion from the beginnings of thinking, via the Aristotelian and Skeptic views, medieval nominalism, and the influential pioneering metaphors of ancient India and Persia, to the birth of modern mathematical disciplinary reasoning. Discussing the models of uncertainty, e.g. their statistical, other physical, and psychological backgrounds, we reach a pragmatic, model-related estimation perspective, a balanced application orientation for different problem areas. Data mining, game theories and recent advances in approximation algorithms are discussed in this spirit of modest reasoning.

  6. Linear model analysis of the influencing factors of boar longevity in Southern China.

    PubMed

    Wang, Chao; Li, Jia-Lian; Wei, Hong-Kui; Zhou, Yuan-Fei; Jiang, Si-Wen; Peng, Jian

    2017-04-15

    This study aimed to investigate the factors influencing the boar herd life month (BHLM) in Southern China. A total of 1630 records of culling boars from nine artificial insemination centers were collected from January 2013 to May 2016. A logistic regression model and two linear models were used to analyze the effects of breed, housing type, age at herd entry, and seed stock herd on boar removal reason and BHLM, respectively. Boar breed and the age at herd entry had significant effects on the removal reasons (P < 0.001). Results of the two linear models (with or without removal reason included) showed boars raised individually in stalls exhibited shorter BHLM than those raised in pens (P < 0.001). Boars aged 5 and 6 months at herd entry (44.6%) showed shorter BHLM than those aged 8 and 9 months at herd entry (P < 0.05). Approximately 95% of boars were culled for reasons other than old age, and the BHLM of boars culled for old age was at least 12.3 months longer than that of boars culled for other reasons (P < 0.001). In conclusion, abnormal elimination in boars is serious and has a negative effect on boar BHLM. Boar removal reason and BHLM can be affected by breed, housing type, and seed stock herd. Importantly, 8 months is suggested as the most suitable age for boar introduction. Copyright © 2017. Published by Elsevier Inc.

  7. A diffusion approximation for ocean wave scatterings by randomly distributed ice floes

    NASA Astrophysics Data System (ADS)

    Zhao, Xin; Shen, Hayley

    2016-11-01

    This study presents a continuum approach using a diffusion approximation method to solve the scattering of ocean waves by randomly distributed ice floes. In order to model both strong and weak scattering, the proposed method decomposes the wave action density function into two parts: the transmitted part and the scattered part. For a given wave direction, the transmitted part of the wave action density is defined as the part of wave action density in the same direction before the scattering; and the scattered part is a first order Fourier series approximation for the directional spreading caused by scattering. An additional approximation is also adopted for simplification, in which the net directional redistribution of wave action by a single scatterer is assumed to be the reflected wave action of a normally incident wave into a semi-infinite ice cover. Other required input includes the mean shear modulus, diameter and thickness of ice floes, and the ice concentration. The directional spreading of wave energy from the diffusion approximation is found to be in reasonable agreement with the previous solution using the Boltzmann equation. The diffusion model provides an alternative method to implement wave scattering into an operational wave model.

  8. Detection of Natural Fractures from Observed Surface Seismic Data Based on a Linear-Slip Model

    NASA Astrophysics Data System (ADS)

    Chen, Huaizhen; Zhang, Guangzhi

    2018-03-01

    Natural fractures play an important role in migration of hydrocarbon fluids. Based on a rock physics effective model, the linear-slip model, which defines fracture parameters (fracture compliances) for quantitatively characterizing the effects of fractures on rock total compliance, we propose a method to detect natural fractures from observed seismic data via inversion for the fracture compliances. We first derive an approximate PP-wave reflection coefficient in terms of fracture compliances. Using the approximate reflection coefficient, we derive azimuthal elastic impedance as a function of fracture compliances. An inversion method to estimate fracture compliances from seismic data is presented based on a Bayesian framework and azimuthal elastic impedance, which is implemented in a two-step procedure: a least-squares inversion for azimuthal elastic impedance and an iterative inversion for fracture compliances. We apply the inversion method to synthetic and real data to verify its stability and reasonability. Synthetic tests confirm that the method can make a stable estimation of fracture compliances in the case of seismic data containing a moderate signal-to-noise ratio for Gaussian noise, and the test on real data reveals that reasonable fracture compliances are obtained using the proposed method.

  9. Major Accidents (Gray Swans) Likelihood Modeling Using Accident Precursors and Approximate Reasoning.

    PubMed

    Khakzad, Nima; Khan, Faisal; Amyotte, Paul

    2015-07-01

    Compared to the remarkable progress in risk analysis of normal accidents, the risk analysis of major accidents has not been so well-established, partly due to the complexity of such accidents and partly due to low probabilities involved. The issue of low probabilities normally arises from the scarcity of major accidents' relevant data since such accidents are few and far between. In this work, knowing that major accidents are frequently preceded by accident precursors, a novel precursor-based methodology has been developed for likelihood modeling of major accidents in critical infrastructures based on a unique combination of accident precursor data, information theory, and approximate reasoning. For this purpose, we have introduced an innovative application of information analysis to identify the most informative near accident of a major accident. The observed data of the near accident were then used to establish predictive scenarios to foresee the occurrence of the major accident. We verified the methodology using offshore blowouts in the Gulf of Mexico, and then demonstrated its application to dam breaches in the United States. © 2015 Society for Risk Analysis.

  10. Preliminary characterization of a one-axis acoustic system. [acoustic levitation for space processing

    NASA Technical Reports Server (NTRS)

    Oran, W. A.; Reiss, D. A.; Berge, L. H.; Parker, H. W.

    1979-01-01

    The acoustic fields and levitation forces produced along the axis of a single-axis resonance system were measured. The system consisted of a St. Clair generator and a planar reflector. The levitation force was measured for bodies of various sizes and geometries (i.e., spheres, cylinders, and discs). The force was found to be roughly proportional to the volume of the body until the characteristic body radius reaches approximately 2/k (k = wave number). The acoustic pressures along the axis were modeled using Huygens principle and a method of imaging to approximate multiple reflections. The modeled pressures were found to be in reasonable agreement with those measured with a calibrated microphone.

  11. Comment on “On the quantum theory of molecules” [J. Chem. Phys. 137, 22A544 (2012)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sutcliffe, Brian T., E-mail: bsutclif@ulb.ac.be; Woolley, R. Guy

    2014-01-21

    In our previous paper [B. T. Sutcliffe and R. G. Woolley, J. Chem. Phys. 137, 22A544 (2012)] we argued that the Born-Oppenheimer approximation could not be based on an exact transformation of the molecular Schrödinger equation. In this Comment we suggest that the fundamental reason for the approximate nature of the Born-Oppenheimer model is the lack of a complete set of functions for the electronic space, and the need to describe the continuous spectrum using spectral projection.

  12. Reasoning with Vectors: A Continuous Model for Fast Robust Inference.

    PubMed

    Widdows, Dominic; Cohen, Trevor

    2015-10-01

    This paper describes the use of continuous vector space models for reasoning with a formal knowledge base. The practical significance of these models is that they support fast, approximate but robust inference and hypothesis generation, which is complementary to the slow, exact, but sometimes brittle behavior of more traditional deduction engines such as theorem provers. The paper explains the way logical connectives can be used in semantic vector models, and summarizes the development of Predication-based Semantic Indexing, which involves the use of Vector Symbolic Architectures to represent the concepts and relationships from a knowledge base of subject-predicate-object triples. Experiments show that the use of continuous models for formal reasoning is not only possible, but already demonstrably effective for some recognized informatics tasks, and showing promise in other traditional problem areas. Examples described in this paper include: predicting new uses for existing drugs in biomedical informatics; removing unwanted meanings from search results in information retrieval and concept navigation; type-inference from attributes; comparing words based on their orthography; and representing tabular data, including modelling numerical values. The algorithms and techniques described in this paper are all publicly released and freely available in the Semantic Vectors open-source software package.
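    The flavor of Vector Symbolic Architectures can be conveyed with a toy sketch. This is an illustrative bipolar-vector construction, not the actual Semantic Vectors implementation, and the triples are invented: subject-predicate-object facts are bound by elementwise multiplication, superposed into a single knowledge-base vector, and queried by unbinding.

```python
import random

DIM = 2048
rng = random.Random(7)

def rand_vec():
    """Random bipolar vector: the atomic concept representation here."""
    return [rng.choice((-1, 1)) for _ in range(DIM)]

def bind(a, b):
    """Elementwise multiplication: an invertible, self-inverse binding."""
    return [x * y for x, y in zip(a, b)]

def bundle(*vs):
    """Superposition: elementwise sum of several bound triples."""
    return [sum(t) for t in zip(*vs)]

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return num / den

# Encode two hypothetical triples: (aspirin TREATS headache), (insulin TREATS diabetes)
treats = rand_vec()
aspirin, headache = rand_vec(), rand_vec()
insulin, diabetes = rand_vec(), rand_vec()
kb = bundle(bind(aspirin, bind(treats, headache)),
            bind(insulin, bind(treats, diabetes)))

# Query: unbinding TREATS*headache from the knowledge base should
# leave something close to `aspirin` plus noise.
probe = bind(kb, bind(treats, headache))
assert cosine(probe, aspirin) > 0.4
assert cosine(probe, insulin) < 0.2
```

The noisy-but-robust recovery is the point: inference degrades gracefully rather than failing outright, which is the contrast with brittle exact deduction drawn in the abstract.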

  14. A model for the formation of the Local Group

    NASA Technical Reports Server (NTRS)

    Peebles, P. J. E.; Melott, A. L.; Holmes, M. R.; Jiang, L. R.

    1989-01-01

    Observational tests of a model for the formation of the Local Group are presented and analyzed in which the mass concentration grows by gravitational accretion of zero-pressure matter onto two seed masses in an otherwise homogeneous initial mass distribution. The evolution of the mass distribution is studied in an analytic approximation and a numerical computation. The initial seed mass and separation are adjusted to produce the observed present separation and relative velocity of the Andromeda Nebula and the Galaxy. If H(0) is adjusted to about 80 km/s/Mpc with density parameter Omega = 1, then the model gives a good fit to the motions of the outer members of the Local Group. The same model gives particle orbits at a radius of about 100 kpc that reasonably approximate the observed distribution of redshifts of the Galactic satellites.

  15. Prediction of destabilizing blade tip forces for shrouded and unshrouded turbines

    NASA Technical Reports Server (NTRS)

    Qiu, Y. J.; Martinezsanchez, M.

    1985-01-01

    The effect of a nonuniform flow field on the Alford force calculation is investigated. The ideas used here are based on those developed by Horlock and Greitzer. It is shown that the nonuniformity of the flow field does contribute to the Alford force calculation. An attempt is also made to include the effect of whirl speed. The values predicted by the model are compared with those obtained experimentally by Urlicks and Wohlrab. The possibility of using existing turbine tip loss correlations to predict beta is also explored. The nonuniform flow field induced by the tip clearance variation tends to increase the resultant destabilizing force over and above what would be predicted on the basis of the local variation of efficiency. On the other hand, the pressure force due to the nonuniform inlet and exit pressure also plays a part even for unshrouded blades, and this counteracts the flow field effects, so that the simple Alford prediction remains a reasonable approximation. Once the efficiency variation with clearance is known, the presented model gives a slightly overpredicted, but reasonably accurate, destabilizing force. In the absence of efficiency vs. clearance data, an empirical tip loss coefficient can be used to give a reasonable prediction of the destabilizing force. To a first approximation, the whirl does have a damping effect, but only of small magnitude, and thus it can be ignored for some purposes.

  16. Evaluating significance in linear mixed-effects models in R.

    PubMed

    Luke, Steven G

    2017-08-01

    Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
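    The anti-conservatism of the t-as-z method can be demonstrated without any mixed-model machinery (this is a generic sketch of the statistical principle, not lme4 itself): with few observations, the t statistic under the null exceeds the z critical value of 1.96 far more than 5% of the time.

```python
import math
import random

# Monte Carlo Type 1 error rate when a small-sample t statistic is compared
# against the z critical value (the "t-as-z" shortcut).  With 5 observations
# per experiment the true rejection rate is roughly 0.12, not 0.05.

random.seed(0)

def t_statistic(xs):
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / (n - 1)   # unbiased variance
    return m / math.sqrt(s2 / n)

n_reps, n_obs, rejections = 20000, 5, 0
for _ in range(n_reps):
    xs = [random.gauss(0.0, 1.0) for _ in range(n_obs)]   # null is true
    if abs(t_statistic(xs)) > 1.96:                       # z-based criterion
        rejections += 1

type1_rate = rejections / n_reps    # well above the nominal 0.05
```

    The Kenward-Roger and Satterthwaite approximations favored by the paper address exactly this problem by estimating appropriate degrees of freedom for the t reference distribution.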

  17. Lognormal Approximations of Fault Tree Uncertainty Distributions.

    PubMed

    El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P

    2018-01-26

    Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of the basic events that are input to the model. Typically, the basic event probabilities are not known exactly but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and the Wilks method appear to be attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
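    The flavor of such a closed-form approximation can be sketched with a generic moment-matching construction (the Fenton-Wilkinson approximation, not necessarily the authors' exact method): for an OR gate over rare basic events the top event probability is roughly the sum of the basic event probabilities, and a sum of lognormals can itself be approximated by a lognormal that matches its mean and variance.

```python
import math
import random
from statistics import median

# Basic events as lognormal (mu, sigma) pairs on the probability scale.
basic = [(math.log(1e-3), 0.8), (math.log(5e-4), 1.0), (math.log(2e-3), 0.6)]

# Fenton-Wilkinson: match the mean and variance of the sum with a lognormal.
mean = sum(math.exp(mu + s * s / 2) for mu, s in basic)
var = sum((math.exp(s * s) - 1) * math.exp(2 * mu + s * s) for mu, s in basic)
s2 = math.log(1 + var / mean ** 2)
mu_top = math.log(mean) - s2 / 2
approx_median = math.exp(mu_top)        # closed-form median of the top event

# Monte Carlo reference for comparison.
random.seed(1)
draws = [sum(random.lognormvariate(mu, s) for mu, s in basic)
         for _ in range(50000)]
mc_median = median(draws)
```

    The closed-form median agrees with the sampled median to within a few percent here, at a small fraction of the cost, which is the trade-off the abstract describes.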

  18. Discovering relevance knowledge in data: a growing cell structures approach.

    PubMed

    Azuaje, F; Dubitzky, W; Black, N; Adamson, K

    2000-01-01

    Both information retrieval and case-based reasoning systems rely on effective and efficient selection of relevant data. Typically, relevance in such systems is approximated by similarity or indexing models. However, the definition of what makes data items similar, or how they should be indexed, is often nontrivial and time-consuming. Based on growing cell structure artificial neural networks, this paper presents a method that automatically constructs a case retrieval model from existing data. Within the case-based reasoning (CBR) framework, the method is evaluated for two medical prognosis tasks, namely, colorectal cancer survival and coronary heart disease risk prognosis. The results of the experiments suggest that the proposed method is effective and robust. To gain a deeper insight into the underlying mechanisms of the proposed model, a detailed empirical analysis of the model's structural and behavioral properties is also provided.

  19. A Variational Bayes Genomic-Enabled Prediction Model with Genotype × Environment Interaction

    PubMed Central

    Montesinos-López, Osval A.; Montesinos-López, Abelardo; Crossa, José; Montesinos-López, José Cricelio; Luna-Vázquez, Francisco Javier; Salinas-Ruiz, Josafhat; Herrera-Morales, José R.; Buenrostro-Mariscal, Raymundo

    2017-01-01

    There are Bayesian and non-Bayesian genomic models that take into account G×E interactions. However, the computational cost of implementing Bayesian models is high, and becomes almost impossible when the number of genotypes, environments, and traits is very large, while, in non-Bayesian models, there are often important and unsolved convergence problems. The variational Bayes method is popular in machine learning, and, by approximating the probability distributions through optimization, it tends to be faster than Markov Chain Monte Carlo methods. For this reason, in this paper, we propose a new genomic variational Bayes version of the Bayesian genomic model with G×E using half-t priors on each standard deviation (SD) term to guarantee highly noninformative priors and posterior inferences that are not sensitive to the choice of hyper-parameters. We show the complete theoretical derivation of the full conditional and the variational posterior distributions, and their implementations. We used eight experimental genomic maize and wheat data sets to illustrate the new proposed variational Bayes approximation, and compared its predictions and implementation time with a standard Bayesian genomic model with G×E. Results indicated that prediction accuracies are slightly higher in the standard Bayesian model with G×E than in its variational counterpart, but, in terms of computation time, the variational Bayes genomic model with G×E is, in general, 10 times faster than the conventional Bayesian genomic model with G×E. For this reason, the proposed model may be a useful tool for researchers who need to predict and select genotypes in several environments. PMID:28391241

  20. A Variational Bayes Genomic-Enabled Prediction Model with Genotype × Environment Interaction.

    PubMed

    Montesinos-López, Osval A; Montesinos-López, Abelardo; Crossa, José; Montesinos-López, José Cricelio; Luna-Vázquez, Francisco Javier; Salinas-Ruiz, Josafhat; Herrera-Morales, José R; Buenrostro-Mariscal, Raymundo

    2017-06-07

    There are Bayesian and non-Bayesian genomic models that take into account G×E interactions. However, the computational cost of implementing Bayesian models is high, and becomes almost impossible when the number of genotypes, environments, and traits is very large, while, in non-Bayesian models, there are often important and unsolved convergence problems. The variational Bayes method is popular in machine learning, and, by approximating the probability distributions through optimization, it tends to be faster than Markov Chain Monte Carlo methods. For this reason, in this paper, we propose a new genomic variational Bayes version of the Bayesian genomic model with G×E using half-t priors on each standard deviation (SD) term to guarantee highly noninformative priors and posterior inferences that are not sensitive to the choice of hyper-parameters. We show the complete theoretical derivation of the full conditional and the variational posterior distributions, and their implementations. We used eight experimental genomic maize and wheat data sets to illustrate the new proposed variational Bayes approximation, and compared its predictions and implementation time with a standard Bayesian genomic model with G×E. Results indicated that prediction accuracies are slightly higher in the standard Bayesian model with G×E than in its variational counterpart, but, in terms of computation time, the variational Bayes genomic model with G×E is, in general, 10 times faster than the conventional Bayesian genomic model with G×E. For this reason, the proposed model may be a useful tool for researchers who need to predict and select genotypes in several environments. Copyright © 2017 Montesinos-López et al.
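    The core idea of variational Bayes, replacing MCMC sampling with deterministic coordinate-ascent optimization of an approximate posterior, can be shown on a standard textbook example (a Gaussian with unknown mean and precision; this is an illustration of the general approach, not the genomic G×E model itself).

```python
import random

# Coordinate-ascent variational Bayes for x_i ~ N(mu, 1/tau) with priors
# mu ~ N(mu0, 1/(lam0*tau)) and tau ~ Gamma(a0, b0).  The posterior is
# approximated as q(mu) q(tau); each factor is updated in turn until the
# moments stop changing.

random.seed(2)
data = [random.gauss(3.0, 1.5) for _ in range(200)]
n = len(data)
xbar = sum(data) / n
ss = sum((x - xbar) ** 2 for x in data)

mu0, lam0, a0, b0 = 0.0, 1e-3, 1e-3, 1e-3   # vague priors

e_tau = 1.0                                  # initial guess for E[tau]
for _ in range(100):
    # q(mu) = N(mu_n, 1/lam_n)
    mu_n = (lam0 * mu0 + n * xbar) / (lam0 + n)
    lam_n = (lam0 + n) * e_tau
    # q(tau) = Gamma(a_n, b_n); uses E[(x_i - mu)^2] = (x_i - mu_n)^2 + 1/lam_n
    a_n = a0 + (n + 1) / 2
    e_sq = ss + n * (xbar - mu_n) ** 2 + n / lam_n
    b_n = b0 + 0.5 * (e_sq + lam0 * ((mu_n - mu0) ** 2 + 1 / lam_n))
    e_tau = a_n / b_n

# mu_n ends up near the sample mean and 1/e_tau near the sample variance.
```

    Each sweep costs a handful of arithmetic operations, which is why variational approximations scale to problem sizes where MCMC becomes impractical, at the cost of some approximation error, mirroring the accuracy/speed trade-off reported in the abstract.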

  1. Low Luminosity States of the Black Hole Candidate GX 339-4. 1; ASCA and Simultaneous Radio/RXTE Observations

    NASA Technical Reports Server (NTRS)

    Wilms, Joern; Nowak, Michael A.; Dove, James B.; Fender, Robert P.; DiMatteo, Tiziana

    1998-01-01

    We discuss a series of observations of the black hole candidate GX 339-4 in low luminosity, spectrally hard states. We present spectral analysis of three separate archival Advanced Satellite for Cosmology and Astrophysics (ASCA) data sets and eight separate Rossi X-ray Timing Explorer (RXTE) data sets. Three of the RXTE observations were strictly simultaneous with 843 MHz and 8.3-9.1 GHz radio observations. All of these observations have 3-9 keV flux less than approximately 10^-9 erg s^-1 cm^-2. The ASCA data show evidence for an approximately 6.4 keV Fe line with equivalent width of approximately 40 eV, as well as evidence for a soft excess that is well modeled by a power law plus a multicolor blackbody spectrum with peak temperature of approximately 150-200 eV. The RXTE data sets also show evidence of an Fe line with equivalent widths of approximately 20-100 eV. Reflection models show a hardening of the RXTE spectra with decreasing X-ray flux; however, these models do not exhibit evidence of a correlation between the photon index of the incident power law flux and the solid angle subtended by the reflector. 'Sphere+disk' Comptonization models and advection-dominated accretion flow (ADAF) models also provide reasonable descriptions of the RXTE data. The former models yield coronal temperatures in the range 20-50 keV and optical depths of tau approximately equal to 3. The model fits to the X-ray data, however, do not simultaneously explain the observed radio properties. The most likely source of the radio flux is synchrotron emission from an extended outflow of extent greater than about 10^7 GM/c^2.

  2. LinguisticBelief: a java application for linguistic evaluation using belief, fuzzy sets, and approximate reasoning.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darby, John L.

    LinguisticBelief is a Java computer code that evaluates combinations of linguistic variables using an approximate reasoning rule base. Each variable is composed of fuzzy sets, and a rule base describes the reasoning on combinations of the variables' fuzzy sets. Uncertainty is considered and propagated through the rule base using the belief/plausibility measure. The mathematics of fuzzy sets, approximate reasoning, and belief/plausibility are complex. Without an automated tool, this complexity precludes their application to all but the simplest of problems. LinguisticBelief automates the use of these techniques, allowing complex problems to be evaluated easily. LinguisticBelief can be used free of charge on any Windows XP machine. This report documents the use and structure of the LinguisticBelief code, and the deployment package for installation on client machines.

  3. Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2014-02-01

    Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. 
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
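    The idea of a parametric, simulation-based likelihood inside a conventional Metropolis sampler can be sketched on a deliberately trivial "model" (data are N(theta, 1), the summary statistic is the sample mean; this is a generic illustration of the synthetic-likelihood construction, not the FORMIND pipeline).

```python
import math
import random

# Synthetic likelihood: at each proposed theta, simulate replicate data sets,
# fit a Gaussian to their summary statistics, and evaluate the density of the
# observed summary under that Gaussian.  A random-walk Metropolis sampler
# then uses this simulation-based likelihood in place of an analytic one.

random.seed(3)
N = 30
theta_true = 1.0
obs = [random.gauss(theta_true, 1.0) for _ in range(N)]
obs_summary = sum(obs) / N

def synthetic_loglik(theta, n_rep=80):
    sims = [sum(random.gauss(theta, 1.0) for _ in range(N)) / N
            for _ in range(n_rep)]
    m = sum(sims) / n_rep
    v = sum((s - m) ** 2 for s in sims) / (n_rep - 1)
    return -0.5 * math.log(2 * math.pi * v) - (obs_summary - m) ** 2 / (2 * v)

theta, ll = 0.0, synthetic_loglik(0.0)
samples = []
for i in range(1000):
    prop = theta + random.gauss(0.0, 0.3)        # random-walk proposal
    ll_prop = synthetic_loglik(prop)
    if math.log(random.random()) < ll_prop - ll:  # flat prior: likelihood ratio
        theta, ll = prop, ll_prop
    if i >= 300:                                  # discard burn-in
        samples.append(theta)

posterior_mean = sum(samples) / len(samples)
```

    The posterior concentrates around the observed summary, as it should for this toy problem; the cost structure (many forward simulations per MCMC step) is exactly what makes the approach demanding for parameter-rich simulators like FORMIND.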

  4. Technical Note: Approximate Bayesian parameterization of a complex tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2013-08-01

    Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. 
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.

  5. A Bayesian Framework for False Belief Reasoning in Children: A Rational Integration of Theory-Theory and Simulation Theory

    PubMed Central

    Asakura, Nobuhiko; Inui, Toshio

    2016-01-01

    Two apparently contrasting theories have been proposed to account for the development of children's theory of mind (ToM): theory-theory and simulation theory. We present a Bayesian framework that rationally integrates both theories for false belief reasoning. This framework exploits two internal models for predicting the belief states of others: one of self and one of others. These internal models are responsible for simulation-based and theory-based reasoning, respectively. The framework further takes into account empirical studies of a developmental ToM scale (e.g., Wellman and Liu, 2004): developmental progressions of various mental state understandings leading up to false belief understanding. By representing the internal models and their interactions as a causal Bayesian network, we formalize the model of children's false belief reasoning as probabilistic computations on the Bayesian network. This model probabilistically weighs and combines the two internal models and predicts children's false belief ability as a multiplicative effect of their early-developed abilities to understand the mental concepts of diverse beliefs and knowledge access. Specifically, the model predicts that children's proportion of correct responses on a false belief task can be closely approximated as the product of their proportions correct on the diverse belief and knowledge access tasks. To validate this prediction, we illustrate that our model provides good fits to a variety of ToM scale data for preschool children. We discuss the implications and extensions of our model for a deeper understanding of developmental progressions of children's ToM abilities. PMID:28082941

  6. A Bayesian Framework for False Belief Reasoning in Children: A Rational Integration of Theory-Theory and Simulation Theory.

    PubMed

    Asakura, Nobuhiko; Inui, Toshio

    2016-01-01

    Two apparently contrasting theories have been proposed to account for the development of children's theory of mind (ToM): theory-theory and simulation theory. We present a Bayesian framework that rationally integrates both theories for false belief reasoning. This framework exploits two internal models for predicting the belief states of others: one of self and one of others. These internal models are responsible for simulation-based and theory-based reasoning, respectively. The framework further takes into account empirical studies of a developmental ToM scale (e.g., Wellman and Liu, 2004): developmental progressions of various mental state understandings leading up to false belief understanding. By representing the internal models and their interactions as a causal Bayesian network, we formalize the model of children's false belief reasoning as probabilistic computations on the Bayesian network. This model probabilistically weighs and combines the two internal models and predicts children's false belief ability as a multiplicative effect of their early-developed abilities to understand the mental concepts of diverse beliefs and knowledge access. Specifically, the model predicts that children's proportion of correct responses on a false belief task can be closely approximated as the product of their proportions correct on the diverse belief and knowledge access tasks. To validate this prediction, we illustrate that our model provides good fits to a variety of ToM scale data for preschool children. We discuss the implications and extensions of our model for a deeper understanding of developmental progressions of children's ToM abilities.
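    The model's key quantitative prediction can be stated in miniature: the predicted proportion correct on the false belief (FB) task is the product of the proportions correct on the diverse belief (DB) and knowledge access (KA) tasks. The numbers below are illustrative, not data from the paper.

```python
# Multiplicative prediction of false belief performance from the two
# earlier-developing abilities, as described in the abstract.

def predicted_false_belief(p_diverse_belief, p_knowledge_access):
    return p_diverse_belief * p_knowledge_access

# A cohort scoring 90% on DB and 80% on KA is predicted to score about
# 72% on the false belief task.
p_fb = predicted_false_belief(0.90, 0.80)
```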

  7. Differential cross sections for ionizations of H and H2 by 75 keV proton impact

    NASA Astrophysics Data System (ADS)

    Igarashi, A.; Gulyás, L.

    2018-02-01

    We have calculated total, partial and fully differential cross sections (FDCSs) for ionizations of H and H2 by 75 keV proton impact within the framework of the continuum-distorted-wave-eikonal-initial-state (CDW-EIS) approximation. Applying the single active electron model, the interaction between the projectile and the target ion is taken into account in the impact parameter picture. Extension of the CDW-EIS model to the molecular target is performed using the two-effective center approximation. The obtained results are compared with those of experimental and other theoretical data when available. The agreements between the theories and the experimental data are generally reasonable except for some cases of the FDCSs.

  8. Validity of the two-level approximation in the interaction of few-cycle light pulses with atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng Jing; Zhou Jianying

    2003-04-01

    The validity of the two-level approximation (TLA) in the interaction of atoms with few-cycle light pulses is studied by investigating a simple V-type three-level atom model. Even when the transition frequency between the ground state and the third level is far from the spectrum of the pulse, this additional transition can render the TLA inaccurate. For a sufficiently large transition frequency or a weak coupling between the ground state and the third level, the TLA is a reasonable approximation and can be used safely. When decreasing the pulse width or increasing the pulse area, the TLA will give rise to non-negligible errors compared with the precise results.

  9. Validity of the two-level approximation in the interaction of few-cycle light pulses with atoms

    NASA Astrophysics Data System (ADS)

    Cheng, Jing; Zhou, Jianying

    2003-04-01

    The validity of the two-level approximation (TLA) in the interaction of atoms with few-cycle light pulses is studied by investigating a simple V-type three-level atom model. Even when the transition frequency between the ground state and the third level is far from the spectrum of the pulse, this additional transition can render the TLA inaccurate. For a sufficiently large transition frequency or a weak coupling between the ground state and the third level, the TLA is a reasonable approximation and can be used safely. When decreasing the pulse width or increasing the pulse area, the TLA will give rise to non-negligible errors compared with the precise results.

  10. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    NASA Astrophysics Data System (ADS)

    Bonetto, P.; Qi, Jinyi; Leahy, R. M.

    2000-08-01

    Describes a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, the authors derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. The theoretical analysis models both the Poisson statistics of PET data and the inhomogeneity of tracer uptake. The authors show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow the authors to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
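    For context, the generic sample-based CHO that the paper's closed-form approximation is designed to avoid looks like this: channel outputs reduce each image to a few numbers, and the Hotelling template and detectability follow from the first and second moments of those outputs under the two hypotheses. The data below are synthetic stand-ins for channelized images.

```python
import numpy as np

# Channelized Hotelling observer from samples: estimate the mean difference
# and pooled covariance of the channel outputs, solve for the template, and
# form the detectability SNR^2.

rng = np.random.default_rng(4)
n_ch, n_img = 3, 500

# Synthetic channel outputs for signal-absent (v0) and signal-present (v1).
v0 = rng.normal(0.0, 1.0, size=(n_img, n_ch))
v1 = rng.normal([0.5, 0.3, 0.1], 1.0, size=(n_img, n_ch))

dv = v1.mean(axis=0) - v0.mean(axis=0)      # mean channel-output difference
S = 0.5 * (np.cov(v0.T) + np.cov(v1.T))     # pooled channel covariance
w = np.linalg.solve(S, dv)                  # Hotelling template
snr2 = float(dv @ w)                        # CHO detectability, SNR^2
```

    Estimating S and dv by sampling many reconstructions is exactly the Monte Carlo expense that a theoretical mean/covariance approximation sidesteps.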

  11. Microwave Passive Ground-Based Retrievals of Cloud and Rain Liquid Water Path in Drizzling Clouds: Challenges and Possibilities

    DOE PAGES

    Cadeddu, Maria P.; Marchand, Roger; Orlandi, Emiliano; ...

    2017-08-11

    Satellite and ground-based microwave radiometers are routinely used for the retrieval of liquid water path (LWP) under all atmospheric conditions. The retrieval of water vapor and LWP from ground-based radiometers during rain has proved to be a difficult challenge for two principal reasons: the inadequacy of the nonscattering approximation in precipitating clouds and the deposition of rain drops on the instrument's radome. In this paper, we combine model computations and real ground-based, zenith-viewing passive microwave radiometer brightness temperature measurements to investigate how total, cloud, and rain LWP retrievals are affected by assumptions about the cloud drop size distribution (DSD) and under which conditions a nonscattering approximation can be considered reasonably accurate. Results show that until the drop effective diameter exceeds approximately 200 μm, a nonscattering approximation yields results that are still accurate at frequencies below 90 GHz. For larger drop sizes, it is shown that higher microwave frequencies contain useful information that can be used to separate cloud and rain LWP, provided that the vertical distribution of hydrometeors, as well as the DSD, is reasonably well known. The choice of the DSD parameters becomes important to ensure retrievals that are consistent with the measurements. A physical retrieval is tested on a synthetic data set and is then used to retrieve total, cloud, and rain LWP from radiometric measurements during two drizzling cases at the Atmospheric Radiation Measurement (ARM) Eastern North Atlantic site.
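    Physical retrievals of this kind are commonly posed as optimal estimation; a minimal linear sketch is shown below. The state vector, forward-model matrix, and covariances are all illustrative assumptions, not the paper's actual channels or priors.

```python
import numpy as np

# Linear optimal-estimation retrieval: x holds [cloud LWP, rain LWP] in
# kg/m^2, y holds brightness-temperature perturbations in three channels,
# and K is an assumed linearized forward model.  The retrieval blends the
# prior state x_a with the measurements according to their covariances.

K = np.array([[1.0, 1.2],     # low frequency: similar response to both
              [1.5, 3.0],     # high frequency: more sensitive to rain
              [0.8, 2.2]])
x_true = np.array([0.10, 0.05])
x_a = np.array([0.05, 0.02])             # a priori state
S_a = np.diag([0.05 ** 2, 0.05 ** 2])    # prior covariance
S_e = np.diag([0.05 ** 2] * 3)           # measurement-noise covariance

rng = np.random.default_rng(5)
y = K @ x_true + rng.normal(0.0, 0.05, size=3)   # synthetic measurement

# x_hat = x_a + S_a K^T (K S_a K^T + S_e)^-1 (y - K x_a)
gain = S_a @ K.T @ np.linalg.inv(K @ S_a @ K.T + S_e)
x_hat = x_a + gain @ (y - K @ x_a)
```

    Separating the cloud and rain components hinges on the two columns of K being distinguishable, which is the role the higher-frequency channels play in the abstract's argument.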

  12. Microwave Passive Ground-Based Retrievals of Cloud and Rain Liquid Water Path in Drizzling Clouds: Challenges and Possibilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cadeddu, Maria P.; Marchand, Roger; Orlandi, Emiliano

    Satellite and ground-based microwave radiometers are routinely used for the retrieval of liquid water path (LWP) under all atmospheric conditions. The retrieval of water vapor and LWP from ground-based radiometers during rain has proved to be a difficult challenge for two principal reasons: the inadequacy of the nonscattering approximation in precipitating clouds and the deposition of rain drops on the instrument's radome. In this paper, we combine model computations and real ground-based, zenith-viewing passive microwave radiometer brightness temperature measurements to investigate how total, cloud, and rain LWP retrievals are affected by assumptions about the cloud drop size distribution (DSD) and under which conditions a nonscattering approximation can be considered reasonably accurate. Results show that until the drop effective diameter exceeds approximately 200 μm, a nonscattering approximation yields results that are still accurate at frequencies below 90 GHz. For larger drop sizes, it is shown that higher microwave frequencies contain useful information that can be used to separate cloud and rain LWP, provided that the vertical distribution of hydrometeors, as well as the DSD, is reasonably well known. The choice of the DSD parameters becomes important to ensure retrievals that are consistent with the measurements. A physical retrieval is tested on a synthetic data set and is then used to retrieve total, cloud, and rain LWP from radiometric measurements during two drizzling cases at the Atmospheric Radiation Measurement (ARM) Eastern North Atlantic site.

  13. Determination of transverse elastic constants of wood using a cylindrically orthotropic model

    Treesearch

    John C. Hermanson

    2003-01-01

    The arrangement of anatomical elements in the cross section of a tree can be characterized, at least to a first approximation, with a cylindrical coordinate system. It seems reasonable that the physical properties of wood in the transverse plane, therefore, would exhibit behaviour that is associated with this anatomical alignment. Most of the transverse properties of...

  14. Perspective: Sloppiness and emergent theories in physics, biology, and beyond.

    PubMed

    Transtrum, Mark K; Machta, Benjamin B; Brown, Kevin S; Daniels, Bryan C; Myers, Christopher R; Sethna, James P

    2015-07-07

    Large scale models of physical phenomena demand the development of new statistical and computational tools in order to be effective. Many such models are "sloppy," i.e., exhibit behavior controlled by a relatively small number of parameter combinations. We review an information theoretic framework for analyzing sloppy models. This formalism is based on the Fisher information matrix, which is interpreted as a Riemannian metric on a parameterized space of models. Distance in this space is a measure of how distinguishable two models are based on their predictions. Sloppy model manifolds are bounded with a hierarchy of widths and extrinsic curvatures. The manifold boundary approximation can extract the simple, hidden theory from complicated sloppy models. We attribute the success of simple effective models in physics to the same phenomenon: they, too, emerge from complicated processes exhibiting a low effective dimensionality. We discuss the ramifications of sloppy models for biochemistry and science more generally. We suggest that our complex world is understandable for the same fundamental reason: simple theories of macroscopic behavior are hidden inside complicated microscopic processes.
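    Sloppiness is easy to exhibit numerically on the classic example of a sum of exponentials: the Fisher information matrix (here J^T J for least squares with unit noise, a small sketch in the spirit of the framework above) has eigenvalues spanning many orders of magnitude, a few stiff directions and many sloppy ones.

```python
import numpy as np

# Model y(t) = sum_k exp(-theta_k t), sampled at 40 time points.  The
# Jacobian of the predictions with respect to the rates gives the FIM.

t = np.linspace(0.0, 5.0, 40)
theta = np.array([0.3, 0.7, 1.1, 1.9, 3.2])   # illustrative decay rates

# d y(t) / d theta_k = -t * exp(-theta_k t), one column per parameter.
J = np.stack([-t * np.exp(-th * t) for th in theta], axis=1)
fim = J.T @ J
eigs = np.sort(np.linalg.eigvalsh(fim))[::-1]

spread = eigs[0] / eigs[-1]    # eigenvalue spread over orders of magnitude
```

    The enormous ratio between the largest and smallest eigenvalues is the "hierarchy of widths" of the abstract: predictions constrain only a few stiff parameter combinations, while the sloppy combinations are nearly invisible in the data.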

  15. Reaction-Diffusion-Delay Model for EPO/TNF-α Interaction in articular cartilage lesion abatement

    PubMed Central

    2012-01-01

    Background Injuries to articular cartilage result in the development of lesions that form on the surface of the cartilage. Such lesions are associated with articular cartilage degeneration and osteoarthritis. The typical injury response often causes collateral damage, primarily an effect of inflammation, which results in the spread of lesions beyond the region where the initial injury occurs. Results and discussion We present a minimal mathematical model based on known mechanisms to investigate the spread and abatement of such lesions. In particular we represent the "balancing act" between pro-inflammatory and anti-inflammatory cytokines that is hypothesized to be a principal mechanism in the expansion properties of cartilage damage during the typical injury response. We present preliminary results of in vitro studies that confirm the anti-inflammatory activities of the cytokine erythropoietin (EPO). We assume that the diffusion of cytokines determines the spatial behavior of the injury response and lesion expansion, so that a reaction-diffusion system involving chemical species and chondrocyte cell state population densities is a natural way to represent cartilage injury response. We present computational results using the mathematical model showing that our representation is successful in capturing much of the interesting spatial behavior of injury-associated lesion development and abatement in articular cartilage. Two cases are simulated: the first corresponds to the parameter values listed in Table 1, while the second uses the parameter values in Table 2. Further, we discuss the use of this model to study the possibility of using EPO as a therapy for reducing the amount of inflammation-induced collateral damage to cartilage during the typical injury response.
    Table 1. Model parameter values corresponding to the simulations in Figure 5 (case with no anti-inflammatory response).

    Parameter | Value   | Units                       | Reason
    D_R       | 0.1     | cm^2/day                    | Determined from [13]
    D_M       | 0.05    | cm^2/day                    | Determined from [13]
    D_F       | 0.05    | cm^2/day                    | Determined from [13]
    D_P       | 0.005   | cm^2/day                    | Determined from [13]
    δ_R       | 0.01    | 1/day                       | Approximated
    δ_M       | 0.6     | 1/day                       | Approximated
    δ_F       | 0.6     | 1/day                       | Approximated
    δ_P       | 0.0087  | 1/day                       | Approximated
    δ_U       | 0.0001  | 1/day                       | Approximated
    σ_R       | 0.0001  | micromolar·cm^2/(day·cells) | Approximated
    σ_M       | 0.00001 | micromolar·cm^2/(day·cells) | Approximated
    σ_F       | 0.0001  | micromolar·cm^2/(day·cells) | Approximated
    σ_P       | 0       | micromolar·cm^2/(day·cells) | Case with no anti-inflammatory response
    Λ         | 10      | micromolar                  | Approximated
    λ_R       | 10      | micromolar                  | Approximated
    λ_M       | 10      | micromolar                  | Approximated
    λ_F       | 10      | micromolar                  | Approximated
    λ_P       | 10      | micromolar                  | Approximated
    α         | 0       | 1/day                       | Case with no anti-inflammatory response
    β_1       | 100     | 1/day                       | Approximated
    β_2       | 50      | 1/day                       | Approximated
    γ         | 10      | 1/day                       | Approximated
    ν         | 0.5     | 1/day                       | Approximated
    μ_SA      | 1       | 1/day                       | Approximated
    μ_DN      | 0.5     | 1/day                       | Approximated
    τ_1       | 0.5     | days                        | Taken from [5]
    τ_2       | 1       | days                        | Taken from [5]

    Table 2. Model parameter values corresponding to the simulations in Figure 6; identical to Table 1 except that the anti-inflammatory response is active:

    Parameter | Value   | Units                       | Reason
    σ_P       | 0.001   | micromolar·cm^2/(day·cells) | Approximated
    α         | 10      | 1/day                       | Approximated
    (all other parameters as in Table 1)

    Conclusions The mathematical model presented herein suggests that anti-inflammatory cytokines such as EPO are not only necessary to prevent chondrocytes signaled by pro-inflammatory cytokines from entering apoptosis; they may also influence how chondrocytes respond to signaling by pro-inflammatory cytokines. Reviewers This paper has been reviewed by Yang Kuang, James Faeder and Anna Marciniak-Czochra. PMID:22353555

  16. A variable vertical resolution weather model with an explicitly resolved planetary boundary layer

    NASA Technical Reports Server (NTRS)

    Helfand, H. M.

    1981-01-01

    A version of the fourth-order weather model incorporating surface wind stress data from SEASAT A scatterometer observations is presented. The Monin-Obukhov similarity theory is used to relate winds at the top of the surface layer to surface wind stress. A reasonable approximation of the surface fluxes of heat, moisture, and momentum is obtainable with this method. A Richardson number adjustment scheme based on the ideas of Chang is used to allow for turbulence effects.

  17. Predictive Behavior of a Computational Foot/Ankle Model through Artificial Neural Networks.

    PubMed

    Chande, Ruchi D; Hargraves, Rosalyn Hobson; Ortiz-Robinson, Norma; Wayne, Jennifer S

    2017-01-01

    Computational models are useful tools to study the biomechanics of human joints. Their predictive performance is heavily dependent on bony anatomy and soft tissue properties. Imaging data provides anatomical requirements while approximate tissue properties are implemented from literature data, when available. We sought to improve the predictive capability of a computational foot/ankle model by optimizing its ligament stiffness inputs using feedforward and radial basis function neural networks. While the former demonstrated better performance than the latter per mean square error, both networks provided reasonable stiffness predictions for implementation into the computational model.

  18. The possible modifications of the HISSE model for pure LANDSAT agricultural data

    NASA Technical Reports Server (NTRS)

    Peters, C.

    1982-01-01

    An idea, due to A. Feiveson, is presented for relaxing the assumption of class conditional independence of LANDSAT spectral measurements within the same patch (field). Theoretical arguments are given which show that any significant refinement of the model beyond Feiveson's proposal will not allow the reduction, essential to HISSE, of the pure data to patch summary statistics. A slight alteration of the new model is shown to be a reasonable approximation to the model which describes pure data elements from the same patch as jointly Gaussian with a covariance function which exhibits exponential decay with respect to spatial separation.

  19. Modeling of confined turbulent fluid-particle flows using Eulerian and Lagrangian schemes

    NASA Technical Reports Server (NTRS)

    Adeniji-Fashola, A.; Chen, C. P.

    1990-01-01

    Two important aspects of fluid-particulate interaction in dilute gas-particle turbulent flows (the turbulent particle dispersion and the turbulence modulation effects) are addressed, using the Eulerian and Lagrangian modeling approaches to describe the particulate phase. Gradient-diffusion approximations are employed in the Eulerian formulation, while a stochastic procedure is utilized to simulate turbulent dispersion in the Lagrangian formulation. The k-epsilon turbulence model is used to characterize the time and length scales of the continuous phase turbulence. Models proposed for both schemes are used to predict turbulent fully-developed gas-solid vertical pipe flow with reasonable accuracy.

  20. The possible modifications of the HISSE model for pure LANDSAT agricultural data

    NASA Technical Reports Server (NTRS)

    Peters, C.

    1981-01-01

    A method for relaxing the assumption of class conditional independence of LANDSAT spectral measurements within the same patch (field) is discussed. Theoretical arguments are given which show that any significant refinement of the model beyond this proposal will not allow the reduction, essential to HISSE, of the pure data to patch summary statistics. A slight alteration of the new model is shown to be a reasonable approximation to the model which describes pure data elements from the same patch as jointly Gaussian with a covariance function which exhibits exponential decay with respect to spatial separation.

  1. Multi-state trajectory approach to non-adiabatic dynamics: General formalism and the active state trajectory approximation

    NASA Astrophysics Data System (ADS)

    Tao, Guohua

    2017-07-01

    A general theoretical framework is derived for the recently developed multi-state trajectory (MST) approach from the time dependent Schrödinger equation, resulting in equations of motion for coupled nuclear-electronic dynamics equivalent to Hamilton dynamics or the Heisenberg equation based on a new multistate Meyer-Miller (MM) model. The derived MST formalism incorporates both diabatic and adiabatic representations as limiting cases and reduces to Ehrenfest or Born-Oppenheimer dynamics in the mean-field or the single-state limits, respectively. In the general multistate formalism, nuclear dynamics is represented in terms of a set of individual state-specific trajectories, while in the active state trajectory (AST) approximation, only one single nuclear trajectory on the active state is propagated with its augmented images running on all other states. The AST approximation combines the advantages of consistent nuclear-coupled electronic dynamics in the MM model and the single nuclear trajectory in the trajectory surface hopping (TSH) treatment and therefore may provide a potential alternative to both the Ehrenfest and TSH methods. The resulting algorithm features a consistent description of coupled electronic-nuclear dynamics and excellent numerical stability. The implementation of the MST approach to several benchmark systems involving multiple nonadiabatic transitions and conical intersections shows reasonably good agreement with exact quantum calculations, and the results in both representations are similar in accuracy. The AST treatment also reproduces the exact results reasonably, sometimes even quantitatively well, with a better performance in the adiabatic representation.

  2. Air pollution dispersion models for human exposure predictions in London.

    PubMed

    Beevers, Sean D; Kitwiroon, Nutthida; Williams, Martin L; Kelly, Frank J; Ross Anderson, H; Carslaw, David C

    2013-01-01

    The London household survey has shown that people travel and are exposed to air pollutants differently. This argues for human exposure to be based upon space-time-activity data and spatio-temporal air quality predictions. For the latter, we have demonstrated the role that dispersion models can play by using two complementary models: KCLurban, which gives source apportionment information, and Community Multi-scale Air Quality Model (CMAQ)-urban, which predicts hourly air quality. The KCLurban model is in close agreement with observations of NO(X), NO(2) and particulate matter (PM)(10/2.5), having a small normalised mean bias (-6% to 4%) and a large Index of Agreement (0.71-0.88). The temporal trends of NO(X) from the CMAQ-urban model are also in reasonable agreement with observations. Spatially, NO(2) predictions show that within tens of metres of major roads, concentrations can range from approximately 10-20 p.p.b. up to 70 p.p.b., and that for PM(10/2.5) central London roadside concentrations are approximately double the suburban background concentrations. Exposure to different PM sources is important, and we predict that brake wear-related PM(10) concentrations are approximately eight times greater near major roads than at suburban background locations. Temporally, we have shown that average NO(X) concentrations close to roads can range by a factor of approximately six between the early morning minimum and morning rush hour maximum periods. These results present strong arguments for the hybrid exposure model under development at King's and, in future, for in-building models and a model for the London Underground.
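
The two evaluation statistics quoted above, normalised mean bias and the index of agreement, are standard model-evaluation metrics and easy to compute. A minimal sketch follows; the concentration values are hypothetical illustrations, not data from the study:

```python
import numpy as np

def normalised_mean_bias(model, obs):
    """NMB = sum(model - obs) / sum(obs); negative means underprediction."""
    return (model - obs).sum() / obs.sum()

def index_of_agreement(model, obs):
    """Willmott's index of agreement; 1 indicates perfect agreement."""
    obar = obs.mean()
    num = ((model - obs) ** 2).sum()
    den = ((np.abs(model - obar) + np.abs(obs - obar)) ** 2).sum()
    return 1 - num / den

obs = np.array([30.0, 45.0, 60.0, 40.0, 55.0])     # hypothetical NOx observations (ppb)
model = np.array([28.0, 48.0, 57.0, 43.0, 52.0])   # hypothetical model predictions (ppb)
nmb = normalised_mean_bias(model, obs)             # slight underprediction, about -0.9%
ioa = index_of_agreement(model, obs)               # close to 1 for these values
```

Both metrics are computed over paired model-observation series at monitoring sites, which is how ranges such as -6% to 4% (NMB) and 0.71-0.88 (IoA) arise across sites.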

  3. A numerical and experimental study on the nonlinear evolution of long-crested irregular waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goullet, Arnaud; Choi, Wooyoung; Division of Ocean Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 305-701

    2011-01-15

    The spatial evolution of nonlinear long-crested irregular waves characterized by the JONSWAP spectrum is studied numerically using a nonlinear wave model based on a pseudospectral (PS) method and the modified nonlinear Schrödinger (MNLS) equation. In addition, new laboratory experiments with two different spectral bandwidths are carried out and a number of wave probe measurements are made to validate these two wave models. Strongly nonlinear wave groups are observed experimentally and their propagation and interaction are studied in detail. For the comparison with experimental measurements, the two models need to be initialized with care, and the initialization procedures are described. The MNLS equation is found to approximate wave fields with a relatively small Benjamin-Feir index reasonably well, but the phase error increases as the propagation distance increases. The PS model with different orders of nonlinear approximation is solved numerically, and it is shown that the fifth-order model agrees well with our measurements prior to wave breaking for both spectral bandwidths.

  4. Approximation methods for stochastic petri nets

    NASA Technical Reports Server (NTRS)

    Jungnitz, Hauke Joerg

    1992-01-01

    Stochastic Marked Graphs are a concurrent decision free formalism provided with a powerful synchronization mechanism generalizing conventional Fork Join Queueing Networks. In some particular cases the analysis of the throughput can be done analytically. Otherwise the analysis suffers from the classical state explosion problem. Embedded in the divide and conquer paradigm, approximation techniques are introduced for the analysis of stochastic marked graphs and Macroplace/Macrotransition-nets (MPMT-nets), a new subclass introduced herein. MPMT-nets are a subclass of Petri nets that allow limited choice, concurrency and sharing of resources. The modeling power of MPMT is much larger than that of marked graphs, e.g., MPMT-nets can model manufacturing flow lines with unreliable machines and dataflow graphs where choice and synchronization occur. The basic idea leads to the notion of a cut to split the original net system into two subnets. The cuts lead to two aggregated net systems where one of the subnets is reduced to a single transition. A further reduction leads to a basic skeleton. The generalization of the idea leads to multiple cuts, where single cuts can be applied recursively leading to a hierarchical decomposition. Based on the decomposition, a response time approximation technique for the performance analysis is introduced. Also, delay equivalence, which has previously been introduced in the context of marked graphs by Woodside et al., Marie's method, and flow equivalent aggregation are applied to the aggregated net systems. The experimental results show that response time approximation converges quickly and shows reasonable accuracy in most cases. The convergence of Marie's method is slower, but the accuracy is generally better. Delay equivalence often fails to converge, while flow equivalent aggregation can lead to potentially bad results if a strong dependence of the mean completion time on the interarrival process exists.

  5. Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam

    2009-01-01

    This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
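
The matrix measure (logarithmic norm) used in such time-delay bounds has a simple closed form in the 2-norm. A minimal sketch follows; the example matrix is my own illustrative choice, not taken from the paper:

```python
import numpy as np

def matrix_measure_2(A):
    """2-norm matrix measure (logarithmic norm): mu_2(A) = lambda_max((A + A^T)/2)."""
    return float(np.linalg.eigvalsh((A + A.T) / 2.0).max())

# A negative measure certifies ||x(t)|| <= exp(mu*t) * ||x(0)|| for xdot = A x,
# the delay-free stability prerequisite in bounds of the kind described above.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
mu = matrix_measure_2(A)   # approximately -1.793, so the delay-free system decays
```

The appeal noted in the abstract is exactly this: a single symmetric eigenvalue computation yields an analytical, if conservative, stability measure.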

  6. Approximate Model Checking of PCTL Involving Unbounded Path Properties

    NASA Astrophysics Data System (ADS)

    Basu, Samik; Ghosh, Arka P.; He, Ru

    We study the problem of applying statistical methods for approximate model checking of probabilistic systems against properties encoded as PCTL formulas. Such approximate methods have been proposed primarily to deal with the state-space explosion that makes exact model checking by numerical methods practically infeasible for large systems. However, the existing statistical methods either consider a restricted subset of PCTL, specifically, the subset that can only express bounded until properties; or rely on a user-specified finite bound on the sample path length. We propose a new method that does not have such restrictions and can be effectively used to reason about unbounded until properties. We approximate the probabilistic characteristics of an unbounded until property by those of a bounded until property for a suitably chosen value of the bound. In essence, our method is a two-phase process: (a) the first phase is concerned with identifying the bound k_0; (b) the second phase computes the probability of satisfying the k_0-bounded until property as an estimate for the probability of satisfying the corresponding unbounded until property. In both phases, it is sufficient to verify bounded until properties, which can be effectively done using existing statistical techniques. We prove the correctness of our technique and present its prototype implementations. We empirically show the practical applicability of our method by considering different case studies including a simple infinite-state model, and large finite-state models such as the IPv4 zeroconf protocol and the dining philosophers protocol modeled as Discrete Time Markov Chains.
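
The two-phase scheme can be illustrated on a toy discrete-time Markov chain; the chain, tolerance, and sample sizes below are invented for illustration. State 0 satisfies a, state 1 is the goal b, and state 2 is an absorbing failure state, so the exact probability of a U b from state 0 is 0.3/(0.3+0.2) = 0.6:

```python
import random

# transition lists: state -> [(probability, successor), ...]
P = {0: [(0.5, 0), (0.3, 1), (0.2, 2)],
     1: [(1.0, 1)],           # goal state (b); absorbing
     2: [(1.0, 2)]}           # failure state (neither a nor b); absorbing

def estimate_bounded_until(k, n_samples, seed=0):
    """Monte Carlo estimate of P(a U<=k b) from state 0."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        s = 0
        for _ in range(k):
            if s != 0:                 # left the a-states: outcome is decided
                break
            r, acc = rng.random(), 0.0
            for prob, nxt in P[s]:
                acc += prob
                if r < acc:
                    s = nxt
                    break
        hits += (s == 1)
    return hits / n_samples

# Phase (a): grow the bound k until the estimate stops changing appreciably.
k, prev = 1, -1.0
while True:
    est = estimate_bounded_until(k, 20000)
    if abs(est - prev) < 0.005:
        break
    prev, k = est, k + 1
# Phase (b): est for this k_0 now approximates the unbounded-until probability (~0.6).
```

The stabilization test used here is a crude stand-in for the statistically rigorous bound-selection procedure the paper proves correct.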

  7. Spallation studies in Estane

    NASA Astrophysics Data System (ADS)

    Johnson, J. N.; Dick, J. J.

    2000-04-01

    Data are presented for the spall fracture of Estane. Estane has been studied previously to determine its low-pressure Hugoniot properties and high-rate viscoelastic response [J.N. Johnson, J.J. Dick and R.S. Hixson, J. Appl. Phys. 84, 2520-2529, 1998]. These results are used in the current analysis of spall fracture data for this material. Calculations are carried out with the characteristics code CHARADE and the finite-difference code FIDO. Comparison of model calculations with experimental data shows the onset of spall failure to occur when the longitudinal stress reaches approximately 130 MPa in tension. At this point complete material separation does not occur; rather, the tensile strength of the material falls to approximately one-half its value at onset, as determined by CHARADE calculations. Finite-difference calculations indicate that the standard void-growth model (used previously to describe spall in metals) gives a reasonable approximation to the dynamic failure process in Estane. [Research supported by the USDOE under contract W-7405-ENG-36.]

  8. Fast computation of the electrolyte-concentration transfer function of a lithium-ion cell model

    NASA Astrophysics Data System (ADS)

    Rodríguez, Albert; Plett, Gregory L.; Trimboli, M. Scott

    2017-08-01

    One approach to creating physics-based reduced-order models (ROMs) of battery-cell dynamics requires first generating linearized Laplace-domain transfer functions of all cell internal electrochemical variables of interest. Then, the resulting infinite-dimensional transfer functions can be reduced by various means in order to find an approximate low-dimensional model. These methods include Padé approximation and the Discrete-Time Realization algorithm. In a previous article, Lee and colleagues developed a transfer function of the electrolyte concentration for a porous-electrode pseudo-two-dimensional lithium-ion cell model. Their approach used separation of variables and Sturm-Liouville theory to compute an infinite-series solution to the transfer function, which they then truncated to a finite number of terms for reasons of practicality. Here, we instead use a variation-of-parameters approach to arrive at a different representation of the identical solution that does not require a series expansion. The primary benefits of the new approach are the speed of computation of the transfer function and the removal of the requirement to approximate the transfer function by truncating the number of terms evaluated. Results show that the speedup of the new method can exceed a factor of 3800.
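
As background for the model-reduction step mentioned above, an [L/M] Padé approximant can be built from Taylor coefficients by solving a small linear system. The sketch below is generic (it approximates exp(x), not the cell-model transfer function):

```python
import numpy as np

def pade(c, L, M):
    """[L/M] Pade coefficients (p, q), ascending order, from Taylor coefficients c."""
    c = np.asarray(c, dtype=float)
    # Solve sum_{j=0}^{M} q_j * c_{k-j} = 0 for k = L+1..L+M, with q_0 = 1.
    A = np.array([[c[k - j] if k - j >= 0 else 0.0 for j in range(1, M + 1)]
                  for k in range(L + 1, L + M + 1)])
    q = np.concatenate(([1.0], np.linalg.solve(A, -c[L + 1: L + M + 1])))
    # Numerator follows by matching the low-order Taylor terms.
    p = np.array([sum(q[j] * c[k - j] for j in range(min(k, M) + 1))
                  for k in range(L + 1)])
    return p, q

c = [1, 1, 1/2, 1/6, 1/24]           # Taylor coefficients of exp(x)
p, q = pade(c, 2, 2)                 # q = [1, -1/2, 1/12]
approx = np.polyval(p[::-1], 0.5) / np.polyval(q[::-1], 0.5)
```

At x = 0.5 the [2/2] approximant matches exp(0.5) to roughly 1e-4, which is why low-order Padé fits are a common route from an infinite-dimensional transfer function to a small rational model.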

  9. Accurate and Efficient Approximation to the Optimized Effective Potential for Exchange

    NASA Astrophysics Data System (ADS)

    Ryabinkin, Ilya G.; Kananenka, Alexei A.; Staroverov, Viktor N.

    2013-07-01

    We devise an efficient practical method for computing the Kohn-Sham exchange-correlation potential corresponding to a Hartree-Fock electron density. This potential is almost indistinguishable from the exact-exchange optimized effective potential (OEP) and, when used as an approximation to the OEP, is vastly better than all existing models. Using our method one can obtain unambiguous, nearly exact OEPs for any reasonable finite one-electron basis set at the same low cost as the Krieger-Li-Iafrate and Becke-Johnson potentials. For all practical purposes, this solves the long-standing problem of black-box construction of OEPs in exact-exchange calculations.

  10. Accelerating cross-validation with total variation and its application to super-resolution imaging

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Ikeda, Shiro; Akiyama, Kazunori; Kabashima, Yoshiyuki

    2017-12-01

    We develop an approximation formula for the cross-validation error (CVE) of a sparse linear regression penalized by ℓ_1-norm and total variation terms, which is based on a perturbative expansion utilizing the large dimensionality of both the data and the model. The developed formula allows us to reduce the computational cost of the CVE evaluation significantly. The practicality of the formula is tested through application to simulated black-hole image reconstruction on the event-horizon scale with super-resolution. The results demonstrate that our approximation reproduces the CVE values obtained via explicitly conducted cross-validation with reasonably good precision.
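
The paper's perturbative formula is specific to the ℓ_1 + total-variation estimator, but the underlying idea, replacing literal refits with a closed-form correction, can be shown with the classical exact leave-one-out shortcut for ridge regression (synthetic data and penalty of my own choosing):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 40, 5, 1.0
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Fast path: one fit plus a per-sample correction via the hat matrix.
H = X @ np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)
resid = y - H @ y
cve_fast = np.mean((resid / (1 - np.diag(H))) ** 2)

# Slow path: literal leave-one-out cross-validation (n refits).
errs = []
for i in range(n):
    mask = np.arange(n) != i
    w = np.linalg.solve(X[mask].T @ X[mask] + lam * np.eye(d), X[mask].T @ y[mask])
    errs.append((y[i] - X[i] @ w) ** 2)
cve_slow = np.mean(errs)
```

For ridge the shortcut is exact; for the ℓ_1 + TV case of the abstract, the analogous correction is only approximate, hence the paper's validation against literally conducted cross-validation.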

  11. Propagating Qualitative Values Through Quantitative Equations

    NASA Technical Reports Server (NTRS)

    Kulkarni, Deepak

    1992-01-01

    In most practical problems where traditional numeric simulation is not adequate, one needs to reason about a system with both qualitative and quantitative equations. In this paper, we address the problem of propagating qualitative values, represented as interval values, through quantitative equations. Previous research has produced exponential-time algorithms for approximate solution of the problem. These may not meet the stringent requirements of many real-time applications. This paper advances the state of the art by producing a linear-time algorithm that can propagate a qualitative value through a class of complex quantitative equations exactly, and through arbitrary algebraic expressions approximately. The algorithm was found applicable to the Space Shuttle Reaction Control System model.
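
Interval propagation through a quantitative expression takes constant work per operator, hence linear time in the expression size. A minimal sketch with made-up quantities (not the Shuttle RCS model):

```python
# Closed interval arithmetic: each value is a (lo, hi) pair.
def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
def isub(a, b): return (a[0] - b[1], a[1] - b[0])
def imul(a, b):
    prods = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(prods), max(prods))

# Propagate qualitative values through F = m*acc + d: one O(1) step per operator.
m = (2.0, 3.0)       # mass known only to an interval
acc = (-1.0, 4.0)    # acceleration of unknown sign (a "qualitative" value)
d = (0.0, 0.5)
F = iadd(imul(m, acc), d)   # yields the interval (-3.0, 12.5)
```

Because each operator only inspects its operands' endpoints, a single left-to-right pass over the expression suffices, which is the essence of the linear-time claim; the approximation for arbitrary expressions arises because interval arithmetic ignores correlations between repeated variables.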

  12. SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Z; Folkert, M; Wang, J

    2016-06-15

    Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set with many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method used the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules. The solution with the highest utility was chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: A total of 126 solution sets were generated by adjusting predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for a multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.

  13. Thermally Driven One-Fluid Electron-Proton Solar Wind: Eight-Moment Approximation

    NASA Astrophysics Data System (ADS)

    Olsen, Espen Lyngdal; Leer, Egil

    1996-05-01

    In an effort to improve the "classical" solar wind model, we study an eight-moment approximation hydrodynamic solar wind model, in which the full conservation equation for the heat conductive flux is solved together with the conservation equations for mass, momentum, and energy. We consider two different cases: In one model the energy flux needed to drive the solar wind is supplied as heat flux from a hot coronal base, where both the density and temperature are specified. In the other model, the corona is heated. In that model, the coronal base density and temperature are also specified, but the temperature increases outward from the coronal base due to a specified energy flux that is dissipated in the corona. The eight-moment approximation solutions are compared with the results from a "classical" solar wind model in which the collision-dominated gas expression for the heat conductive flux is used. It is shown that the "classical" expression for the heat conductive flux is generally not valid in the solar wind. In collisionless regions of the flow, the eight-moment approximation gives a larger thermalization of the heat conductive flux than the models using the collision-dominated gas approximation for the heat flux, but the heat flux is still larger than the "saturation heat flux." This leads to a breakdown of the electron distribution function, which turns negative in the collisionless region of the flow. By increasing the interaction between the electrons, the heat flux is reduced, and a reasonable shape is obtained on the distribution function. By solving the full set of equations consistent with the eight-moment distribution function for the electrons, we are thus able to draw inferences about the validity of the eight-moment description of the solar wind as well as the validity of the very commonly used collision-dominated gas approximation for the heat conductive flux in the solar wind.

  14. Venus nightside ionosphere - A model with KeV electron impact ionization

    NASA Technical Reports Server (NTRS)

    Kumar, S.

    1982-01-01

    The impact of keV electrons is proposed as the strongest source of ionization in a full-up Venus nightside ionosphere model for the equatorial midnight region. The electron impacts lead to a peak ion density of 100,000/cu cm, which was observed by the PV-OIMS experiment on several occasions. In addition, the observed altitude profiles of CO2(+), O(+), O2(+), H(+), and H2(+) can be reproduced by the model on condition that the available keV electron flux is approximated by a reasonable extrapolation from fluxes observed at lower energies.

  15. COBE DMR-normalized open inflation cold dark matter cosmogony

    NASA Technical Reports Server (NTRS)

    Gorski, Krzysztof M.; Ratra, Bharat; Sugiyama, Naoshi; Banday, Anthony J.

    1995-01-01

    A cut-sky orthogonal mode analysis of the 2 year COBE DMR 53 and 90 GHz sky maps (in Galactic coordinates) is used to determine the normalization of an open inflation model based on the cold dark matter (CDM) scenario. The normalized model is compared to measures of large-scale structure in the universe. Although the DMR data alone do not provide sufficient discriminative power to prefer a particular value of the mass density parameter, the open model appears to be reasonably consistent with observations when Ω_0 is approximately 0.3-0.4 and merits further study.

  16. Fuzzy Logic for Incidence Geometry

    PubMed Central

    2016-01-01

    The paper presents a mathematical framework for approximate geometric reasoning with extended objects in the context of Geography, in which all entities and their relationships are described by human language. These entities could be labelled by commonly used names of landmarks, water areas, and so forth. Unlike single points that are given in Cartesian coordinates, these geographic entities are extended in space and often loosely defined, but people easily perform spatial reasoning with extended geographic objects “as if they were points.” To date, however, geographic information systems (GIS) lack the capability of geometric reasoning with extended objects. The aim of the paper is to present a mathematical apparatus for approximate geometric reasoning with extended objects that is usable in GIS. In the paper we discuss fuzzy logic (Aliev and Tserkovny, 2011) as a reasoning system for the geometry of extended objects, as well as a basis for fuzzification of the axioms of incidence geometry. The same fuzzy logic was used for fuzzification of Euclid's first postulate. A fuzzy equivalence relation, “extended lines sameness,” is introduced. For its approximation we also utilize a fuzzy conditional inference, which is based on the proposed fuzzy “degree of indiscernibility” and “discernibility measure” of extended points. PMID:27689133

  17. Thermal refraction focusing in planar index-antiguided lasers.

    PubMed

    Casperson, Lee W; Dittli, Adam; Her, Tsing-Hua

    2013-03-15

    Thermal refraction focusing in planar index-antiguided lasers is investigated both theoretically and experimentally. An analytical model based on zero-field approximation is presented for treating the combined effects of index antiguiding and thermal focusing. At very low pumping power, the mode is antiguided by the amplifier boundary, whereas at high pumping power it narrows due to thermal focusing. Theoretical results are in reasonable agreement with experimental data.

  18. California's population geography: lessons for a fourth grade class.

    PubMed

    Rushdoony, H A

    1978-11-01

    The purpose of this paper is to present a model for teaching fourth grade children some aspects of the population geography of California from a nontextual approach. The objective is to interest and instruct children in the mobility of the people, and in the reasons why so many families have moved to California from other states. Students should be alerted not only to internal migration problems, but also to the excess of births over deaths. Materials necessary for the lessons are transparencies, an overhead projector, marking pencils, chalk, and a chalkboard. After showing the students that California's population has approximately doubled every 20 years, the students should be encouraged to find reasons explaining why people have moved to the state, should be able to categorize those reasons under the terms industrial/manufacturing, agricultural, urban, or recreational, should learn how to plot population distribution on a California regional outline map, and should attempt to explain why certain parts of California are more popular than others. The teaching model described in this paper may be replicated with modifications for any grade level and area of study.

  19. Efficient Posterior Probability Mapping Using Savage-Dickey Ratios

    PubMed Central

    Penny, William D.; Ridgway, Gerard R.

    2013-01-01

    Statistical Parametric Mapping (SPM) is the dominant paradigm for mass-univariate analysis of neuroimaging data. More recently, a Bayesian approach termed Posterior Probability Mapping (PPM) has been proposed as an alternative. PPM offers two advantages: (i) inferences can be made about effect size thus lending a precise physiological meaning to activated regions, (ii) regions can be declared inactive. This latter facility is most parsimoniously provided by PPMs based on Bayesian model comparisons. To date these comparisons have been implemented by an Independent Model Optimization (IMO) procedure which separately fits null and alternative models. This paper proposes a more computationally efficient procedure based on Savage-Dickey approximations to the Bayes factor, and Taylor-series approximations to the voxel-wise posterior covariance matrices. Simulations show the accuracy of this Savage-Dickey-Taylor (SDT) method to be comparable to that of IMO. Results on fMRI data show excellent agreement between SDT and IMO for second-level models, and reasonable agreement for first-level models. This Savage-Dickey test is a Bayesian analogue of the classical SPM-F and allows users to implement model comparison in a truly interactive manner. PMID:23533640
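
For nested models, the Savage-Dickey density ratio expresses the Bayes factor as the posterior density at the null parameter value divided by the prior density there. A minimal conjugate-normal sketch (toy numbers, not the paper's fMRI models):

```python
import numpy as np
from math import sqrt, pi, exp

def normal_pdf(x, mean, var):
    return exp(-(x - mean) ** 2 / (2 * var)) / sqrt(2 * pi * var)

y = np.array([0.1, -0.2, 0.15, 0.05, -0.1])  # toy effect estimates, close to zero
sigma2, tau2 = 1.0, 1.0                      # known noise variance; N(0, tau2) prior on theta
v = 1.0 / (len(y) / sigma2 + 1.0 / tau2)     # conjugate posterior variance
m = v * y.sum() / sigma2                     # conjugate posterior mean

# Savage-Dickey: BF(null vs alternative) = p(theta=0 | y) / p(theta=0)
bf01 = normal_pdf(0.0, m, v) / normal_pdf(0.0, 0.0, tau2)   # > 1 favours the null here
```

The computational appeal mirrors the abstract's point: one fit of the alternative model yields the Bayes factor directly, with no separate fit of the null model as in IMO.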

  20. Feed system design and experimental results in the uhf model study for the proposed Urbana phased array

    NASA Technical Reports Server (NTRS)

    Loane, J. T.; Bowhill, S. A.; Mayes, P. E.

    1982-01-01

    The effects of atmospheric turbulence and the basis for the coherent scatter radar techniques are discussed. The reasons are given for upgrading the radar system to a larger steerable array. Phased-array theory pertinent to the system design is reviewed, along with approximations for maximum directive gain and blind angles due to mutual coupling. The methods and construction techniques employed in the UHF model study are explained. The antenna range is described, with a block diagram for the mode of operation used.

  1. Population genetics inference for longitudinally-sampled mutants under strong selection.

    PubMed

    Lacerda, Miguel; Seoighe, Cathal

    2014-11-01

    Longitudinal allele frequency data are becoming increasingly prevalent. Such samples permit statistical inference of the population genetics parameters that influence the fate of mutant variants. To infer these parameters by maximum likelihood, the mutant frequency is often assumed to evolve according to the Wright-Fisher model. For computational reasons, this discrete model is commonly approximated by a diffusion process that requires the assumption that the forces of natural selection and mutation are weak. This assumption is not always appropriate. For example, mutations that impart drug resistance in pathogens may evolve under strong selective pressure. Here, we present an alternative approximation to the mutant-frequency distribution that does not make any assumptions about the magnitude of selection or mutation and is much more computationally efficient than the standard diffusion approximation. Simulation studies are used to compare the performance of our method to that of the Wright-Fisher and Gaussian diffusion approximations. For large populations, our method is found to provide a much better approximation to the mutant-frequency distribution when selection is strong, while all three methods perform comparably when selection is weak. Importantly, maximum-likelihood estimates of the selection coefficient are severely attenuated when selection is strong under the two diffusion models, but not when our method is used. This is further demonstrated with an application to mutant-frequency data from an experimental study of bacteriophage evolution. We therefore recommend our method for estimating the selection coefficient when the effective population size is too large to utilize the discrete Wright-Fisher model. Copyright © 2014 by the Genetics Society of America.
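
    The discrete Wright-Fisher dynamics that the diffusion approximates can be simulated directly, with no weak-selection assumption; a minimal sketch with an illustrative haploid-style fitness parameterization (not the authors' inference code):

```python
import random

def wright_fisher(n_individuals, p0, s, generations, rng):
    """Discrete Wright-Fisher trajectory of a mutant allele under selection.

    Selection rescales the expected frequency deterministically, then
    binomial resampling of the 2N gametes supplies the genetic drift.
    The haploid-style fitness parameterization is illustrative only.
    """
    p = p0
    trajectory = [p]
    for _ in range(generations):
        p_sel = p * (1.0 + s) / (1.0 + s * p)            # selection step
        count = sum(rng.random() < p_sel for _ in range(2 * n_individuals))
        p = count / (2.0 * n_individuals)                 # drift step
        trajectory.append(p)
    return trajectory
```

    With a strong coefficient such as s = 0.5, the mutant sweeps toward fixation within tens of generations; it is exactly this regime in which the Gaussian and standard diffusion approximations break down.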

  2. Topological bifurcations in a model society of reasonable contrarians

    NASA Astrophysics Data System (ADS)

    Bagnoli, Franco; Rechtman, Raúl

    2013-12-01

    People are often divided into conformists and contrarians, the former tending to align to the majority opinion in their neighborhood and the latter tending to disagree with that majority. In practice, however, the contrarian tendency is rarely followed when there is an overwhelming majority with a given opinion, which denotes a social norm. Such reasonable contrarian behavior is often considered a mark of independent thought and can be a useful strategy in financial markets. We present the opinion dynamics of a society of reasonable contrarian agents. The model is a cellular automaton of Ising type, with antiferromagnetic pair interactions modeling contrarianism and plaquette terms modeling social norms. We introduce the entropy of the collective variable as a way of comparing deterministic (mean-field) and probabilistic (simulations) bifurcation diagrams. In the mean-field approximation the model exhibits bifurcations and a chaotic phase, interpreted as coherent oscillations of the whole society. However, in a one-dimensional spatial arrangement one observes incoherent oscillations and a constant average. In simulations on Watts-Strogatz networks with a small-world effect the mean-field behavior is recovered, with a bifurcation diagram that resembles the mean-field one but where the rewiring probability is used as the control parameter. Similar bifurcation diagrams are found for scale-free networks, and we are able to compute an effective connectivity for such networks.
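
    The bifurcation to coherent oscillations can be illustrated with a generic contrarian mean-field map; the update rule below is a stand-in chosen for simplicity, not the paper's exact cellular-automaton rule:

```python
import math

def iterate_contrarian(beta, m0, steps):
    """Iterate a generic contrarian mean-field map m -> -tanh(beta * m).

    The minus sign encodes contrarianism and beta plays the role of the
    interaction strength. For beta < 1 the consensus m = 0 is stable;
    for beta > 1 the map undergoes a flip bifurcation and the collective
    opinion oscillates with period 2, the analogue of coherent
    society-wide opinion swings.
    """
    m = m0
    for _ in range(steps):
        m = -math.tanh(beta * m)
    return m
```

    For beta = 0.5 the iterates decay to the m = 0 consensus, while for beta = 2 they settle onto a 2-cycle of large, alternating magnetization.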

  3. Topological bifurcations in a model society of reasonable contrarians.

    PubMed

    Bagnoli, Franco; Rechtman, Raúl

    2013-12-01

    People are often divided into conformists and contrarians, the former tending to align to the majority opinion in their neighborhood and the latter tending to disagree with that majority. In practice, however, the contrarian tendency is rarely followed when there is an overwhelming majority with a given opinion, which denotes a social norm. Such reasonable contrarian behavior is often considered a mark of independent thought and can be a useful strategy in financial markets. We present the opinion dynamics of a society of reasonable contrarian agents. The model is a cellular automaton of Ising type, with antiferromagnetic pair interactions modeling contrarianism and plaquette terms modeling social norms. We introduce the entropy of the collective variable as a way of comparing deterministic (mean-field) and probabilistic (simulations) bifurcation diagrams. In the mean-field approximation the model exhibits bifurcations and a chaotic phase, interpreted as coherent oscillations of the whole society. However, in a one-dimensional spatial arrangement one observes incoherent oscillations and a constant average. In simulations on Watts-Strogatz networks with a small-world effect the mean-field behavior is recovered, with a bifurcation diagram that resembles the mean-field one but where the rewiring probability is used as the control parameter. Similar bifurcation diagrams are found for scale-free networks, and we are able to compute an effective connectivity for such networks.

  4. Proportional Reasoning and the Visually Impaired

    ERIC Educational Resources Information Center

    Hilton, Geoff; Hilton, Annette; Dole, Shelley L.; Goos, Merrilyn; O'Brien, Mia

    2012-01-01

    Proportional reasoning is an important aspect of formal thinking that is acquired during the developmental years that approximate the middle years of schooling. Students who fail to acquire sound proportional reasoning often experience difficulties in subjects that require quantitative thinking, such as science, technology, engineering, and…

  5. Influence of proportional number relationships on item accessibility and students' strategies

    NASA Astrophysics Data System (ADS)

    Carney, Michele B.; Smith, Everett; Hughes, Gwyneth R.; Brendefur, Jonathan L.; Crawford, Angela

    2016-12-01

    Proportional reasoning is important to students' future success in mathematics and science endeavors. More specifically, students' fluent and flexible use of scalar and functional relationships to solve problems is critical to their ability to reason proportionally. The purpose of this study is to investigate the influence of systematically manipulating the location of an integer multiplier—to press the scalar or functional relationship—on item difficulty and student solution strategies. We administered short-answer assessment forms to 473 students in grades 6-8 (approximate ages 11-14) and analyzed the data quantitatively with the Rasch model to examine item accessibility and qualitatively to examine student solution strategies. We found that manipulating the location of the integer multiplier encouraged students to make use of different aspects of proportional relationships without decreasing item accessibility. Implications for proportional reasoning curricular materials, instruction, and assessment are addressed.
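
    The Rasch model used for the quantitative analysis relates item difficulty and person ability through a logistic function; a minimal sketch with hypothetical parameter values:

```python
import math

def rasch_probability(ability, difficulty):
    """Rasch model: probability that a person with the given ability
    answers an item of the given difficulty correctly,
    P = 1 / (1 + exp(-(ability - difficulty)))."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))
```

    When ability equals difficulty the probability is exactly 0.5; this is the sense in which the analysis can ask whether moving the integer multiplier changed item accessibility (difficulty) independently of student ability.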

  6. Heat generation in aircraft tires under braked rolling conditions

    NASA Technical Reports Server (NTRS)

    Clark, S. K.; Dodge, R. N.

    1984-01-01

    An analytical model was developed to approximate the internal temperature distribution in an aircraft tire operating under conditions of unyawed braked rolling. The model employs an array of elements to represent the tire cross section and considers the heat generated within the tire to be caused by the change in strain energy associated with cyclic tire deflection. The additional heating due to tire slip and stresses induced by braking are superimposed on the previously developed free rolling model. An extensive experimental program was conducted to verify temperatures predicted from the analytical model. Data from these tests were compared with calculations over a range of operating conditions. The model results were in reasonably good agreement with measured values.

  7. Optimization of a Monte Carlo Model of the Transient Reactor Test Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Kristin; DeHart, Mark; Goluoglu, Sedat

    2017-03-01

    The ultimate goal of modeling and simulation is to obtain reasonable answers to problems that lack representations which can be easily evaluated, while minimizing the amount of computational resources. With the advances during the last twenty years of large-scale computing centers, researchers have been able to create a multitude of tools to minimize the number of approximations necessary when modeling a system. The tremendous power of these centers requires the user to possess an immense amount of knowledge to optimize the models for accuracy and efficiency. This paper seeks to evaluate the KENO model of TREAT to optimize calculational efforts.

  8. The Reasoning Methods and Reasoning Ability in Normal and Mentally Retarded Girls and the Reasoning Ability of Normal and Mentally Retarded Boys and Girls.

    ERIC Educational Resources Information Center

    CAPOBIANCO, RUDOLPH J.; AND OTHERS

    A study was made to establish and analyze the methods of solving inductive reasoning problems by mentally retarded children. The major objectives were--(1) to explore and describe reasoning in mentally retarded children, (2) to compare their methods with those utilized by normal children of approximately the same mental age, (3) to explore the…

  9. Effectiveness of the surgical safety checklist in correcting errors: a literature review applying Reason's Swiss cheese model.

    PubMed

    Collins, Susan J; Newhouse, Robin; Porter, Jody; Talsma, AkkeNeel

    2014-07-01

    Approximately 2,700 patients are harmed by wrong-site surgery each year. The World Health Organization created the surgical safety checklist to reduce the incidence of wrong-site surgery. A project team conducted a narrative review of the literature to determine the effectiveness of the surgical safety checklist in correcting and preventing errors in the OR. Team members used Reason's Swiss cheese model of error to analyze the findings. Analysis of results indicated the effectiveness of the surgical checklist in reducing the incidence of wrong-site surgeries and other medical errors; however, checklists alone will not prevent all errors. Successful implementation requires perioperative stakeholders to understand the nature of errors, recognize the complex dynamic between systems and individuals, and create a just culture that encourages a shared vision of patient safety. Copyright © 2014 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  10. Validation of the SimSET simulation package for modeling the Siemens Biograph mCT PET scanner

    NASA Astrophysics Data System (ADS)

    Poon, Jonathan K.; Dahlbom, Magnus L.; Casey, Michael E.; Qi, Jinyi; Cherry, Simon R.; Badawi, Ramsey D.

    2015-02-01

    Monte Carlo simulation provides a valuable tool in performance assessment and optimization of system design parameters for PET scanners. SimSET is a popular Monte Carlo simulation toolkit that features fast simulation time, as well as variance reduction tools to further enhance computational efficiency. However, SimSET has lacked the ability to simulate block detectors until its most recent release. Our goal is to validate new features of SimSET by developing a simulation model of the Siemens Biograph mCT PET scanner and comparing the results to a simulation model developed in the GATE simulation suite and to experimental results. We used the NEMA NU-2 2007 scatter fraction, count rates, and spatial resolution protocols to validate the SimSET simulation model and its new features. The SimSET model overestimated the experimental results of the count rate tests by 11-23% and the spatial resolution test by 13-28%, which is comparable to previous validation studies of other PET scanners in the literature. The difference between the SimSET and GATE simulation was approximately 4-8% for the count rate test and approximately 3-11% for the spatial resolution test. In terms of computational time, SimSET performed simulations approximately 11 times faster than GATE simulations. The new block detector model in SimSET offers a fast and reasonably accurate simulation toolkit for PET imaging applications.

  11. A full-wave Helmholtz model for continuous-wave ultrasound transmission.

    PubMed

    Huttunen, Tomi; Malinen, Matti; Kaipio, Jari P; White, Phillip Jason; Hynynen, Kullervo

    2005-03-01

    A full-wave Helmholtz model of continuous-wave (CW) ultrasound fields may offer several attractive features over widely used partial-wave approximations. For example, many full-wave techniques can be easily adjusted for complex geometries, and multiple reflections of sound are automatically taken into account in the model. To date, however, the full-wave modeling of CW fields in general 3D geometries has been avoided due to the large computational cost associated with the numerical approximation of the Helmholtz equation. Recent developments in computing capacity together with improvements in finite element type modeling techniques are making possible wave simulations in 3D geometries which reach over tens of wavelengths. The aim of this study is to investigate the feasibility of a full-wave solution of the 3D Helmholtz equation for modeling of continuous-wave ultrasound fields in an inhomogeneous medium. The numerical approximation of the Helmholtz equation is computed using the ultraweak variational formulation (UWVF) method. In addition, an inverse problem technique is utilized to reconstruct the velocity distribution on the transducer which is used to model the sound source in the UWVF scheme. The modeling method is verified by comparing simulated and measured fields in the case of transmission of 531 kHz CW fields through layered plastic plates. The comparison shows a reasonable agreement between simulations and measurements at low angles of incidence but, due to mode conversion, the Helmholtz model becomes insufficient for simulating ultrasound fields in plates at large angles of incidence.
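
    The computational burden comes from discretizing the Helmholtz equation over many wavelengths. A minimal 1-D second-order finite-difference sketch (not the UWVF method used in the paper) illustrates the kind of approximation involved, checked against the manufactured solution u(x) = sin(kx):

```python
import math

def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm for a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def helmholtz_1d(k, length, n):
    """Interior-point FD discretization of u'' + k^2 u = 0 on [0, length]
    with Dirichlet data u(0) = 0, u(length) = sin(k * length)."""
    h = length / (n + 1)
    diag = [-2.0 + (k * h) ** 2] * n       # row stencil: 1, -2 + (kh)^2, 1
    off = [1.0] * n
    rhs = [0.0] * n
    rhs[-1] = -math.sin(k * length)        # known boundary value moved to RHS
    return solve_tridiagonal(off, diag, off, rhs)
```

    With k = 2 on a unit interval the exact solution is sin(2x), and refining the grid reduces the error at the expected second-order rate; the cost of such grids in 3-D is what motivates methods like the UWVF.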

  12. Two-dimensional character of internal rotation of furfural and other five-member heterocyclic aromatic aldehydes

    NASA Astrophysics Data System (ADS)

    Bataev, Vadim A.; Pupyshev, Vladimir I.; Godunov, Igor A.

    2016-05-01

    The features of nuclear motion corresponding to the rotation of the formyl group (CHO) are studied for the molecules of furfural and some other five-member heterocyclic aromatic aldehydes by the use of MP2/6-311G** quantum chemical approximation. It is demonstrated that the traditional one-dimensional models of internal rotation for the molecules studied have only limited applicability. The reason is the strong kinematic interaction of the rotation of the CHO group and out-of-plane CHO deformation that is realized for the molecules under consideration. The computational procedure based on the two-dimensional approximation is considered for low lying vibrational states as more adequate to the problem.

  13. Toward Webscale, Rule-Based Inference on the Semantic Web Via Data Parallelism

    DTIC Science & Technology

    2013-02-01

    Another work distinct from its peers is the work on approximate reasoning by Rudolph et al. [34] in which multiple inference systems were combined not...Workshop Scalable Semantic Web Knowledge Base Systems, 2010, pp. 17–31. [34] S. Rudolph, T. Tserendorj, and P. Hitzler, “What is approximate reasoning...2013] [55] M. Duerst and M. Suignard. (2005, Jan.). RFC 3987 – internationalized resource identifiers (IRIs). IETF. [Online]. Available: http

  14. Comparison of techniques that use the single scattering model to compute the quality factor Q from coda waves

    USGS Publications Warehouse

    Novelo-Casanova, D. A.; Lee, W.H.K.

    1991-01-01

    Using simulated coda waves, the resolution of the single-scattering model to extract coda Q (Qc) and its power-law frequency dependence was tested. The back-scattering model of Aki and Chouet (1975) and the single isotropic-scattering model of Sato (1977) were examined. The results indicate that: (1) the input Qc models are reasonably well approximated by the two methods; (2) almost equal Qc values are recovered when the techniques sample the same coda windows; (3) low Qc models are well estimated in the frequency domain from the early and late part of the coda; and (4) models with high Qc values are more accurately extracted from late coda measurements. © 1991 Birkhäuser Verlag.
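
    In the back-scattering model, coda amplitude at frequency f decays as A(t) = S(f) t^-1 exp(-pi f t / Qc), so Qc can be recovered from the slope of ln(A t) versus lapse time t. A sketch on noise-free synthetic data with illustrative values:

```python
import math

def fit_coda_q(times, amps, freq):
    """Estimate coda Q from the Aki-Chouet back-scattering decay model
    A(t) = S * t**-1 * exp(-pi * f * t / Qc): a least-squares line through
    ln(A * t) versus t has slope -pi * f / Qc."""
    ys = [math.log(a * t) for a, t in zip(amps, times)]
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    return -math.pi * freq / slope

# Synthetic coda with known Qc = 200 at 6 Hz (illustrative values, S = 1).
qc_true, f = 200.0, 6.0
times = [20.0 + i for i in range(41)]        # 20-60 s lapse times
amps = [math.exp(-math.pi * f * t / qc_true) / t for t in times]
```

    On noise-free data the fit recovers the input Qc exactly, which is the kind of resolution check the paper performs over different coda windows and Qc levels.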

  15. A new approach to estimate parameters of speciation models with application to apes.

    PubMed

    Becquet, Celine; Przeworski, Molly

    2007-10-01

    How populations diverge and give rise to distinct species remains a fundamental question in evolutionary biology, with important implications for a wide range of fields, from conservation genetics to human evolution. A promising approach is to estimate parameters of simple speciation models using polymorphism data from multiple loci. Existing methods, however, make a number of assumptions that severely limit their applicability, notably, no gene flow after the populations split and no intralocus recombination. To overcome these limitations, we developed a new Markov chain Monte Carlo method to estimate parameters of an isolation-migration model. The approach uses summaries of polymorphism data at multiple loci surveyed in a pair of diverging populations or closely related species and, importantly, allows for intralocus recombination. To illustrate its potential, we applied it to extensive polymorphism data from populations and species of apes, whose demographic histories are largely unknown. The isolation-migration model appears to provide a reasonable fit to the data. It suggests that the two chimpanzee species became reproductively isolated in allopatry approximately 850 Kya, while Western and Central chimpanzee populations split approximately 440 Kya but continued to exchange migrants. Similarly, Eastern and Western gorillas and Sumatran and Bornean orangutans appear to have experienced gene flow since their splits approximately 90 and over 250 Kya, respectively.

  16. Stabilized FE simulation of prototype thermal-hydraulics problems with integrated adjoint-based capabilities

    NASA Astrophysics Data System (ADS)

    Shadid, J. N.; Smith, T. M.; Cyr, E. C.; Wildey, T. M.; Pawlowski, R. P.

    2016-09-01

    A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling is the assessment of the predictive capability of specific proposed mathematical models. In this respect the understanding of numerical error, the sensitivity of the solution to parameters associated with input data, boundary condition uncertainty, and mathematical models is critical. Additionally, the ability to evaluate and/or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In this study we report on initial efforts to apply integrated adjoint-based computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds-averaged Navier-Stokes approximation to turbulent fluid flow and heat transfer using a particular spatial discretization based on implicit fully-coupled stabilized FE methods. Initial results are presented that show the promise of these computational techniques in the context of nuclear reactor relevant prototype thermal-hydraulics problems.
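
    The automatic-differentiation idea mentioned above can be sketched with forward-mode dual numbers. The toy model function below is hypothetical (this is not Sandia's adjoint tooling); it shows how a parameter sensitivity is propagated exactly through a scalar model response:

```python
import math

class Dual:
    """Forward-mode AD value: carries f and df/dp together."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (fg)' = f'g + fg'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def dual_sin(x):
    # chain rule for sin
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

def model_output(p):
    # A toy scalar model response; seeding p with derivative 1 yields d(out)/dp.
    return p * p + dual_sin(p)

p = Dual(1.0, 1.0)
out = model_output(p)
```

    At p = 1 the value is 1 + sin(1) and the sensitivity is 2 + cos(1), to machine precision; adjoint (reverse) mode delivers the same derivatives at a cost independent of the number of outputs rather than inputs.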

  17. Stabilized FE simulation of prototype thermal-hydraulics problems with integrated adjoint-based capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shadid, J.N., E-mail: jnshadi@sandia.gov; Department of Mathematics and Statistics, University of New Mexico; Smith, T.M.

    A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling is the assessment of the predictive capability of specific proposed mathematical models. In this respect the understanding of numerical error, the sensitivity of the solution to parameters associated with input data, boundary condition uncertainty, and mathematical models is critical. Additionally, the ability to evaluate and/or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In this study we report on initial efforts to apply integrated adjoint-based computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds averaged Navier–Stokes approximation to turbulent fluid flow and heat transfer using a particular spatial discretization based on implicit fully-coupled stabilized FE methods. Initial results are presented that show the promise of these computational techniques in the context of nuclear reactor relevant prototype thermal-hydraulics problems.

  18. Stabilized FE simulation of prototype thermal-hydraulics problems with integrated adjoint-based capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shadid, J. N.; Smith, T. M.; Cyr, E. C.

    A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling is the assessment of the predictive capability of specific proposed mathematical models. The understanding of numerical error, the sensitivity of the solution to parameters associated with input data, boundary condition uncertainty, and mathematical models is critical. Additionally, the ability to evaluate and/or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In our study we report on initial efforts to apply integrated adjoint-based computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds averaged Navier–Stokes approximation to turbulent fluid flow and heat transfer using a particular spatial discretization based on implicit fully-coupled stabilized FE methods. We present the initial results that show the promise of these computational techniques in the context of nuclear reactor relevant prototype thermal-hydraulics problems.

  19. Stabilized FE simulation of prototype thermal-hydraulics problems with integrated adjoint-based capabilities

    DOE PAGES

    Shadid, J. N.; Smith, T. M.; Cyr, E. C.; ...

    2016-05-20

    A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling is the assessment of the predictive capability of specific proposed mathematical models. The understanding of numerical error, the sensitivity of the solution to parameters associated with input data, boundary condition uncertainty, and mathematical models is critical. Additionally, the ability to evaluate and/or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In our study we report on initial efforts to apply integrated adjoint-based computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds averaged Navier–Stokes approximation to turbulent fluid flow and heat transfer using a particular spatial discretization based on implicit fully-coupled stabilized FE methods. We present the initial results that show the promise of these computational techniques in the context of nuclear reactor relevant prototype thermal-hydraulics problems.

  20. Model Based Reasoning by Introductory Students When Analyzing Earth Systems and Societal Challenges

    NASA Astrophysics Data System (ADS)

    Holder, L. N.; Herbert, B. E.

    2014-12-01

    Understanding how students use their conceptual models to reason about societal challenges involving societal issues such as natural hazard risk assessment, environmental policy and management, and energy resources can improve instructional activity design that directly impacts student motivation and literacy. To address this question, we created four laboratory exercises for an introductory physical geology course at Texas A&M University that engages students in authentic scientific practices by using real world problems and issues that affect societies based on the theory of situated cognition. Our case-study design allows us to investigate the various ways that students utilize model based reasoning to identify and propose solutions to societally relevant issues. In each of the four interventions, approximately 60 students in three sections of introductory physical geology were expected to represent and evaluate scientific data, make evidence-based claims about the data trends, use those claims to express conceptual models, and use their models to analyze societal challenges. Throughout each step of the laboratory exercise students were asked to justify their claims, models, and data representations using evidence and through the use of argumentation with peers. Cognitive apprenticeship was the foundation for instruction used to scaffold students so that in the first exercise they are given a partially completed model and in the last exercise students are asked to generate a conceptual model on their own. Student artifacts, including representation of earth systems, representation of scientific data, verbal and written explanations of models and scientific arguments, and written solutions to specific societal issues or environmental problems surrounding earth systems, were analyzed through the use of a rubric that modeled authentic expertise and students were sorted into three categories. 
Written artifacts were examined to identify student argumentation and justifications of solutions through the use of evidence and reasoning. Higher scoring students justified their solutions through evidence-based claims, while lower scoring students typically justified their solutions using anecdotal evidence, emotional ideologies, and naive and incomplete conceptions of earth systems.

  1. Influence of collision on the flow through in-vitro rigid models of the vocal folds

    NASA Astrophysics Data System (ADS)

    Deverge, M.; Pelorson, X.; Vilain, C.; Lagrée, P.-Y.; Chentouf, F.; Willems, J.; Hirschberg, A.

    2003-12-01

    Measurements of pressure in oscillating rigid replicas of vocal folds are presented. The pressure upstream of the replica is used as input to various theoretical approximations to predict the pressure within the glottis. As the vocal folds collide, the classical quasisteady boundary layer theory fails. It appears, however, that for physiologically reasonable shapes of the replicas, viscous effects are more important than the influence of the flow unsteadiness due to the wall movement. A simple model based on a quasisteady Bernoulli equation corrected for viscous effects, combined with a simple boundary layer separation model, does globally predict the observed pressure behavior.
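
    The corrected-Bernoulli idea can be sketched for a 2-D channel of slowly varying height: an inviscid Bernoulli term plus a Poiseuille-type viscous loss accumulated along the channel. The geometry and fluid values below are hypothetical, and boundary-layer separation is omitted:

```python
def glottal_pressure(h, dx, q, rho, mu, p_up):
    """Pressure along a 2-D channel of heights h[i] carrying per-unit-width
    flow q: quasisteady Bernoulli plus a Poiseuille-type viscous correction,
        p(x) = p_up + 0.5*rho*(u0**2 - u(x)**2) - 12*mu*q * integral dx/h**3.
    Illustrative sketch only; separation and wall motion are not modeled."""
    u0 = q / h[0]
    pressures = []
    visc = 0.0
    for height in h:
        u = q / height                                 # local mean velocity
        visc += 12.0 * mu * q * dx / height ** 3       # accumulated viscous drop
        pressures.append(p_up + 0.5 * rho * (u0 ** 2 - u ** 2) - visc)
    return pressures
```

    In a uniform channel the Bernoulli term vanishes and the pressure falls linearly with distance, purely through the viscous term; this is the regime that dominates near collision, when the gap is smallest.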

  2. Probability distribution of haplotype frequencies under the two-locus Wright-Fisher model by diffusion approximation.

    PubMed

    Boitard, Simon; Loisel, Patrice

    2007-05-01

    The probability distribution of haplotype frequencies in a population, and the way it is influenced by genetical forces such as recombination, selection, and random drift, is a question of fundamental interest in population genetics. For large populations, the distribution of haplotype frequencies for two linked loci under the classical Wright-Fisher model is almost impossible to compute for numerical reasons. However, the Wright-Fisher process can in such cases be approximated by a diffusion process, and the transition density can then be deduced from the Kolmogorov equations. As no exact solution has been found for these equations, we developed a numerical method based on finite differences to solve them. It applies to transient states and to models including selection or mutations. We show by several tests that this method is accurate for computing the conditional joint density of haplotype frequencies given that no haplotype has been lost. We also prove that it is far less time consuming than other methods such as Monte Carlo simulations.

  3. Unified connected theory of few-body reaction mechanisms in N-body scattering theory

    NASA Technical Reports Server (NTRS)

    Polyzou, W. N.; Redish, E. F.

    1978-01-01

    A unified treatment of different reaction mechanisms in nonrelativistic N-body scattering is presented. The theory is based on connected kernel integral equations that are expected to become compact for reasonable constraints on the potentials. The operators T±^ab(A) are approximate transition operators that describe the scattering proceeding through an arbitrary reaction mechanism A. These operators are uniquely determined by a connected kernel equation and satisfy an optical theorem consistent with the choice of reaction mechanism. Connected kernel equations relating T±^ab(A) to the full T±^ab allow correction of the approximate solutions for any ignored process to any order. This theory gives a unified treatment of all few-body reaction mechanisms with the same dynamic simplicity of a model calculation, but can include complicated reaction mechanisms involving overlapping configurations where it is difficult to formulate models.

  4. A model teaching session for the hypothesis-driven physical examination.

    PubMed

    Nishigori, Hiroshi; Masuda, Kozo; Kikukawa, Makoto; Kawashima, Atsushi; Yudkowsky, Rachel; Bordage, Georges; Otaki, Junji

    2011-01-01

    The physical examination is an essential clinical competence for all physicians. At most medical schools, students learn the physical examination maneuvers using a head-to-toe approach. However, this promotes a rote approach to the physical exam, and it is not uncommon for students later on to fail to appreciate the meaning of abnormal findings and their contribution to the diagnostic reasoning process. The purpose of the project was to develop a model teaching session for the hypothesis-driven physical examination (HDPE) approach in which students could practice the physical examination in the context of diagnostic reasoning. We used an action research methodology to create this HDPE model by developing a teaching session, implementing it over 100 times with approximately 700 students, conducting internal reflection and external evaluations, and making adjustments as needed. A model nine-step HDPE teaching session was developed, including: (1) orientation, (2) anticipation, (3) preparation, (4) role play, (5) discussion-1, (6) answers, (7) discussion-2, (8) demonstration and (9) reflection. A structured model HDPE teaching session and tutor guide were developed into a workable instructional intervention. Faculty members are invited to teach the physical examination using this model.

  5. Laser induced heat source distribution in bio-tissues

    NASA Astrophysics Data System (ADS)

    Li, Xiaoxia; Fan, Shifu; Zhao, Youquan

    2006-09-01

    During numerical simulation of laser-tissue thermal interaction, the light fluence rate distribution must be formulated and incorporated as the source term in the heat transfer equation. Usually the solution of the radiative transport equation is given for limiting conditions such as full absorption (Lambert-Beer law), full scattering (Kubelka-Munk theory), or scattering-dominated transport (diffusion approximation). Outside these conditions, the solutions introduce errors of varying size. The widely used Monte Carlo simulation (MCS) is more universal and exact but has difficulty dealing with dynamic parameters and fast simulation, and its area partition pattern has limits when applying FEM (finite element method) to solve the bio-heat transfer partial differential equation. Laser heat source plots from the above methods differ considerably from MCS. To solve this problem, by analyzing the effects of different optical processes such as reflection, scattering, and absorption on laser-induced heat generation in bio-tissue, a new approach was developed that combines a modified beam-broadening model with the diffusion approximation model. First, the scattering coefficient was replaced by the reduced scattering coefficient in the beam-broadening model, which is more reasonable when scattering is treated as anisotropic. Second, the attenuation coefficient was replaced by the effective attenuation coefficient in scattering-dominated turbid bio-tissue. The computational results of the modified method were compared with Monte Carlo simulation and showed that the model provides more reasonable predictions of the heat source term distribution than past methods. Such research is useful for explaining the physical characteristics of the heat source in the heat transfer equation, establishing an effective photo-thermal model, and providing a theoretical benchmark for related laser medicine experiments.
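
    The coefficient substitution described in the abstract can be sketched in 1-D: a Lambert-Beer-type heat source in which the diffusion-theory effective attenuation coefficient, mu_eff = sqrt(3 * mu_a * (mu_a + mu_s')), replaces the plain attenuation coefficient. Parameter values below are hypothetical:

```python
import math

def heat_source_depth_profile(phi0, mu_a, mu_s_reduced, depths):
    """Volumetric heat source S(z) = mu_a * phi0 * exp(-mu_eff * z), using the
    diffusion-theory effective attenuation mu_eff = sqrt(3*mu_a*(mu_a + mu_s')).
    A 1-D sketch of the idea in the abstract; real geometries need MCS or FEM."""
    mu_eff = math.sqrt(3.0 * mu_a * (mu_a + mu_s_reduced))
    return [mu_a * phi0 * math.exp(-mu_eff * z) for z in depths]
```

    Because mu_s' greatly exceeds mu_a in soft tissue, mu_eff is much larger than mu_a alone, so the heat source decays far faster with depth than a pure-absorption Lambert-Beer profile would predict.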

  6. Temporal Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. D.; Thomas, B. C.

    2004-01-01

    In 1999, Stolz and Adams unveiled a subgrid-scale model for LES based upon approximately inverting (defiltering) the spatial grid-filter operator, termed the approximate deconvolution model (ADM). Subsequently, the utility and accuracy of the ADM were demonstrated in a posteriori analyses of flows as diverse as incompressible plane-channel flow and supersonic compression-ramp flow. In a prelude to the current paper, a parameterized temporal ADM (TADM) was developed and demonstrated in both a priori and a posteriori analyses for forced, viscous Burgers flow. The development of a time-filtered variant of the ADM was motivated primarily by the desire for a unifying theoretical and computational context to encompass direct numerical simulation (DNS), large-eddy simulation (LES), and Reynolds-averaged Navier-Stokes (RANS) simulation. The resultant methodology was termed temporal LES (TLES). To permit exploration of the parameter space, however, previous analyses of the TADM were restricted to Burgers flow, and it has remained to demonstrate the TADM and TLES methodology for three-dimensional flow. For several reasons, plane-channel flow presents an ideal test case for the TADM. Among these reasons, channel flow is anisotropic, yet it lends itself to highly efficient and accurate spectral numerical methods. Moreover, channel flow has been investigated extensively by DNS, and the highly accurate database of Moser et al. exists. In the present paper, we develop a fully anisotropic TADM model and demonstrate its utility in simulating incompressible plane-channel flow at nominal values of Re_τ = 180 and Re_τ = 590 by the TLES method. The TADM is shown to perform nearly as well as the ADM at equivalent resolution, thereby establishing TLES as a viable alternative to LES. Moreover, as the current model is suboptimal in some respects, there is considerable room to improve TLES.
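The core idea the TADM inherits from the ADM is the van Cittert approximate inverse of the filter: u* ≈ Σ_{k=0}^{N} (I − G)^k applied to the filtered field. A minimal 1-D sketch, using an assumed three-point top-hat filter rather than the discrete filters of Stolz and Adams:

```python
import math

def box_filter(u):
    """Periodic three-point top-hat filter G (an assumed stand-in for
    the grid filter; Stolz & Adams use smoother discrete filters)."""
    n = len(u)
    return [0.25 * (u[i - 1] + 2.0 * u[i] + u[(i + 1) % n]) for i in range(n)]

def approx_deconvolve(u_bar, n_terms=5):
    """van Cittert approximate inverse Q_N = sum_{k=0}^{N} (I - G)^k,
    so u_star = Q_N(u_bar) approximately restores the unfiltered field."""
    u_star = [0.0] * len(u_bar)
    term = list(u_bar)
    for _ in range(n_terms + 1):
        u_star = [a + b for a, b in zip(u_star, term)]
        term = [a - b for a, b in zip(term, box_filter(term))]  # one more (I - G)
    return u_star

n = 64
x = [2.0 * math.pi * i / n for i in range(n)]
u = [math.sin(xi) + 0.3 * math.sin(4.0 * xi) for xi in x]
u_bar = box_filter(u)
u_star = approx_deconvolve(u_bar)
err_filtered = max(abs(a - b) for a, b in zip(u_bar, u))
err_deconv = max(abs(a - b) for a, b in zip(u_star, u))
print(f"filtering error {err_filtered:.2e}, after deconvolution {err_deconv:.2e}")
```

For each resolved Fourier mode the deconvolved error shrinks to (1 − Ĝ)^{N+1} of the mode amplitude, which is why a handful of terms suffices away from the grid cutoff; the temporal variant applies the same construction to a causal time filter.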

  7. Variational models for discontinuity detection

    NASA Astrophysics Data System (ADS)

    Vitti, Alfonso; Battista Benciolini, G.

    2010-05-01

    The Mumford-Shah variational model produces a smooth approximation of the data and detects data discontinuities by solving a minimum problem involving an energy functional. The Blake-Zisserman model also permits the detection of discontinuities in the first derivative of the approximation. It can yield a quasi piecewise-linear approximation, whereas the Mumford-Shah model yields a quasi piecewise-constant one. The two models are well known in the mathematical literature and are widely adopted in computer vision for image segmentation. In geodesy the Blake-Zisserman model has been applied successfully to the detection of cycle-slips in linear combinations of GPS measurements. Few attempts to apply the model to time series of coordinates have been made so far. The problem of detecting discontinuities in time series of GNSS coordinates is well known, and its relevance increases as the quality of geodetic measurements, analysis techniques, models and products improves. The application of the Blake-Zisserman model appears reasonable and promising because it can detect both position and velocity discontinuities in the same time series. The detection of position and velocity changes is of great interest in geophysics, where the discontinuity itself can be the object of primary interest. In work on the realization of reference frames, detecting position and velocity discontinuities may help to define models that can handle non-linear motions. In this work the Mumford-Shah and Blake-Zisserman models are briefly presented; the treatment is carried out from a practical viewpoint rather than a theoretical one. A set of time series of GNSS coordinates has been processed, and the results are presented in order to highlight the capabilities and weaknesses of the variational approach. A first attempt to derive some indications for the automatic setup of the model parameters has been made, and the underlying relation that could link the parameter values to the statistical properties of the data has been investigated.
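For intuition, the piecewise-constant limit of this family of functionals (a discrete Potts/Mumford-Shah-type energy) can be minimized exactly in 1-D by dynamic programming. A sketch for jump detection in a short synthetic time series, illustrative only and not the authors' implementation:

```python
def potts_segmentation(f, gamma):
    """Exact DP minimizer of the 1-D piecewise-constant (Potts) energy
    sum_i (u_i - f_i)^2 + gamma * (#jumps), a discrete cousin of the
    Mumford-Shah model used here for discontinuity detection."""
    n = len(f)
    # prefix sums give O(1) cost for fitting f[l..r] by its mean
    s1 = [0.0] * (n + 1)
    s2 = [0.0] * (n + 1)
    for i, v in enumerate(f):
        s1[i + 1] = s1[i] + v
        s2[i + 1] = s2[i] + v * v
    def seg_cost(l, r):  # squared error of the best constant on f[l..r]
        m = r - l + 1
        s = s1[r + 1] - s1[l]
        return (s2[r + 1] - s2[l]) - s * s / m
    best = [0.0] * (n + 1)   # best[i]: optimal energy for f[0..i-1]
    jump = [0] * (n + 1)     # start index of the last segment
    for i in range(1, n + 1):
        best[i], jump[i] = min(
            (best[l] + seg_cost(l, i - 1) + (gamma if l > 0 else 0.0), l)
            for l in range(i)
        )
    # backtrack to recover the piecewise-constant approximation
    u, i = [0.0] * n, n
    while i > 0:
        l = jump[i]
        mean = (s1[i] - s1[l]) / (i - l)
        for k in range(l, i):
            u[k] = mean
        i = l
    return u

# noisy step signal: the position discontinuity at index 10 is recovered
f = [0.0] * 10 + [1.0] * 10
f = [v + 0.05 * ((-1) ** k) for k, v in enumerate(f)]
u = potts_segmentation(f, gamma=0.5)
print(u)
```

The full Blake-Zisserman functional additionally penalizes second differences, so that slope (velocity) discontinuities are detected as well; the parameter γ here plays the same role as the discontinuity-penalty parameters discussed in the abstract.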

  8. Metacognition and reasoning

    PubMed Central

    Fletcher, Logan; Carruthers, Peter

    2012-01-01

    This article considers the cognitive architecture of human meta-reasoning: that is, metacognition concerning one's own reasoning and decision-making. The view we defend is that meta-reasoning is a cobbled-together skill comprising diverse self-management strategies acquired through individual and cultural learning. These approximate the monitoring-and-control functions of a postulated adaptive system for metacognition by recruiting mechanisms that were designed for quite other purposes. PMID:22492753

  9. 38 CFR 3.102 - Reasonable doubt.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... degree of disability, or any other point, such doubt will be resolved in favor of the claimant. By reasonable doubt is meant one which exists because of an approximate balance of positive and negative...

  10. Photoionization of disk galaxies: An explanation of the sharp edges in the H I distribution

    NASA Technical Reports Server (NTRS)

    Dove, James B.; Shull, J. Michael

    1994-01-01

    We have reproduced the observed radial truncation of the H I distribution in isolated spiral galaxies with a model in which extragalactic radiation photoionizes the gaseous disk. For a galactic mass distribution model that reproduces the observed rotation curves, including dark matter in the disk and halo, the vertical structure of the gas is determined self-consistently. The ionization structure and column densities of H and He ions are computed by solving the radiation transfer equation for both continuum and lines. Our model is similar to that of Maloney, and the H I structure differs by less than 10%. The radial structure of the column density of H I is found to be more sensitive to the extragalactic radiation field than to the distribution of mass. For this reason, considerable progress can be made in determining the extragalactic flux of ionizing photons, φ_ex, with more 21 cm observations of isolated galaxies. However, owing to the uncertainty of the radial distribution of total hydrogen at large radii, inferring the extragalactic flux by comparing the observed edges to photoionization models is somewhat subjective. We find 1 × 10⁴ cm⁻² s⁻¹ ≲ φ_ex ≲ 5 × 10⁴ cm⁻² s⁻¹, corresponding to 2.1 × 10⁻²³ ≲ ι₀ ≲ 10.5 × 10⁻²³ ergs cm⁻² s⁻¹ Hz⁻¹ sr⁻¹ for a 1/ν spectrum. Although somewhat higher, our inferred range of ι₀ is consistent with the large range of values obtained by Kulkarni & Fall from the 'proximity effect' toward quasi-stellar objects (QSOs) at z ≈ 0.5.
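As a quick consistency check on the quoted numbers: for a specific intensity I_ν = ι₀ (ν₀/ν) above the Lyman limit ν₀, the one-sided photon flux through the disk surface from an isotropic background reduces to φ_ex = π ι₀ / h, independent of ν₀. A sketch, assuming this one-sided isotropic convention (the paper's exact geometric convention is not stated here):

```python
import math

H_PLANCK = 6.626e-27  # Planck constant in erg s (CGS)

def photon_flux(i0_cgs):
    """One-sided ionizing photon flux (photons cm^-2 s^-1) for an
    isotropic background with I_nu = i0 * (nu0 / nu) above the Lyman
    limit: Phi = pi * i0 / h, independent of nu0 for a 1/nu spectrum."""
    return math.pi * i0_cgs / H_PLANCK

for i0 in (2.1e-23, 10.5e-23):
    print(f"i0 = {i0:.2e} erg/cm^2/s/Hz/sr -> Phi = {photon_flux(i0):.2e} photons/cm^2/s")
```

Under this convention the two quoted ranges map onto each other to better than 1%, which supports reading the pairs (1 × 10⁴, 2.1 × 10⁻²³) and (5 × 10⁴, 10.5 × 10⁻²³) as equivalent statements.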

  11. Testing hadronic interaction models using a highly granular silicon-tungsten calorimeter

    NASA Astrophysics Data System (ADS)

    Bilki, B.; Repond, J.; Schlereth, J.; Xia, L.; Deng, Z.; Li, Y.; Wang, Y.; Yue, Q.; Yang, Z.; Eigen, G.; Mikami, Y.; Price, T.; Watson, N. K.; Thomson, M. A.; Ward, D. R.; Benchekroun, D.; Hoummada, A.; Khoulaki, Y.; Cârloganu, C.; Chang, S.; Khan, A.; Kim, D. H.; Kong, D. J.; Oh, Y. D.; Blazey, G. C.; Dyshkant, A.; Francis, K.; Lima, J. G. R.; Salcido, P.; Zutshi, V.; Boisvert, V.; Green, B.; Misiejuk, A.; Salvatore, F.; Kawagoe, K.; Miyazaki, Y.; Sudo, Y.; Suehara, T.; Tomita, T.; Ueno, H.; Yoshioka, T.; Apostolakis, J.; Folger, G.; Ivantchenko, V.; Ribon, A.; Uzhinskiy, V.; Cauwenbergh, S.; Tytgat, M.; Zaganidis, N.; Hostachy, J.-Y.; Morin, L.; Gadow, K.; Göttlicher, P.; Günter, C.; Krüger, K.; Lutz, B.; Reinecke, M.; Sefkow, F.; Feege, N.; Garutti, E.; Laurien, S.; Lu, S.; Marchesini, I.; Matysek, M.; Ramilli, M.; Kaplan, A.; Norbeck, E.; Northacker, D.; Onel, Y.; Kim, E. J.; van Doren, B.; Wilson, G. W.; Wing, M.; Bobchenko, B.; Chadeeva, M.; Chistov, R.; Danilov, M.; Drutskoy, A.; Epifantsev, A.; Markin, O.; Mizuk, R.; Novikov, E.; Popov, V.; Rusinov, V.; Tarkovsky, E.; Besson, D.; Popova, E.; Gabriel, M.; Kiesling, C.; Simon, F.; Soldner, C.; Szalay, M.; Tesar, M.; Weuste, L.; Amjad, M. S.; Bonis, J.; Callier, S.; Conforti di Lorenzo, S.; Cornebise, P.; Doublet, Ph.; Dulucq, F.; Faucci-Giannelli, M.; Fleury, J.; Frisson, T.; Kégl, B.; van der Kolk, N.; Li, H.; Martin-Chassard, G.; Richard, F.; de La Taille, Ch.; Pöschl, R.; Raux, L.; Rouëné, J.; Seguin-Moreau, N.; Anduze, M.; Balagura, V.; Becheva, E.; Boudry, V.; Brient, J.-C.; Cornat, R.; Frotin, M.; Gastaldi, F.; Magniette, F.; Matthieu, A.; Mora de Freitas, P.; Videau, H.; Augustin, J.-E.; David, J.; Ghislain, P.; Lacour, D.; Lavergne, L.; Zacek, J.; Cvach, J.; Gallus, P.; Havranek, M.; Janata, M.; Kvasnicka, J.; Lednicky, D.; Marcisovsky, M.; Polak, I.; Popule, J.; Tomasek, L.; Tomasek, M.; Ruzicka, P.; Sicho, P.; Smolik, J.; Vrba, V.; Zalesak, J.; Jeans, D.; Götze, M.; Calice Collaboration

    2015-09-01

    A detailed study of hadronic interactions is presented using data recorded with the highly granular CALICE silicon-tungsten electromagnetic calorimeter. Approximately 350,000 selected π⁻ events at energies between 2 and 10 GeV have been studied. The predictions of several physics models available within the GEANT4 simulation tool kit are compared to these data. A reasonable overall description of the data is observed; the Monte Carlo predictions are within 20% of the data, and for many observables much closer. The largest quantitative discrepancies are found in the longitudinal and transverse distributions of reconstructed energy.

  12. Nonthermal steady states after an interaction quench in the Falicov-Kimball model.

    PubMed

    Eckstein, Martin; Kollar, Marcus

    2008-03-28

    We present the exact solution of the Falicov-Kimball model after a sudden change of its interaction parameter using nonequilibrium dynamical mean-field theory. For different interaction quenches between the homogeneous metallic and insulating phases the system relaxes to a nonthermal steady state on time scales of the order of ℏ/bandwidth, showing collapse and revival with an approximate period of h/interaction if the interaction is large. We discuss the reasons for this behavior and provide a statistical description of the final steady state by means of generalized Gibbs ensembles.
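The collapse-and-revival period h/U can be illustrated with a toy dephasing model (not the Falicov-Kimball solution itself): a superposition over states with energies nU loses and regains phase coherence with period 2π/U in units where ℏ = 1.

```python
import cmath
import math

def overlap(t, U, nbar=8.0, nmax=60):
    """|sum_n p_n exp(-i n U t)| for assumed Poissonian weights p_n:
    a toy illustration of collapse and revival with period 2*pi/U
    (hbar = 1), i.e. h/U in ordinary units."""
    p = [math.exp(-nbar) * nbar**n / math.factorial(n) for n in range(nmax)]
    return abs(sum(pn * cmath.exp(-1j * n * U * t) for n, pn in enumerate(p)))

U = 2.0
T_revival = 2.0 * math.pi / U
print(overlap(0.0, U))               # fully coherent
print(overlap(0.5 * T_revival, U))   # collapsed: phases spread out
print(overlap(T_revival, U))         # revived: all phases realign
```

Because every energy gap is an integer multiple of U, all phase factors realign exactly at t = 2π/U, which is the mechanism behind the approximate period quoted in the abstract for large interaction.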

  13. Approximate Dispersion Relations for Waves on Arbitrary Shear Flows

    NASA Astrophysics Data System (ADS)

    Ellingsen, S. À.; Li, Y.

    2017-12-01

    An approximate dispersion relation is derived and presented for linear surface waves atop a shear current whose magnitude and direction can vary arbitrarily with depth. The approximation, derived to first order in the deviation from potential flow, is shown to produce good results at all wavelengths for a wide range of naturally occurring shear flows as well as widely used model flows. The relation reduces in many cases to a 3-D generalization of the much-used approximation of Skop (1987), developed further by Kirby and Chen (1989), but is shown to be more robust, succeeding in situations where the Kirby and Chen model fails. The two approximations incur the same numerical cost and difficulty. While the Kirby and Chen approximation is excellent for a wide range of currents, the exact criteria for its applicability have not been known. We explain the apparently serendipitous success of the latter and derive proper conditions of applicability for both approximate dispersion relations. Our new model has a greater range of applicability. A second-order approximation is also derived. It greatly improves accuracy, which is shown to be important in difficult cases. It has the advantage over the corresponding second-order expression proposed by Kirby and Chen that its criterion of accuracy is explicitly known, which is not currently the case for the latter to our knowledge. Our second-order term is also arguably simpler to implement, and more physically transparent, than its sibling due to Kirby and Chen.

    Plain Language Summary: In order to answer key questions such as how the ocean surface affects the climate, erodes the coastline and transports nutrients, we must understand how waves move. This is not so easy when depth-varying currents are present, as they often are in coastal waters. We have developed a modeling tool for accurately predicting wave properties in such situations, ready for use, for example, in complex oceanographic computer models.
Our method is robust and works well in situations where the tool currently used will fail. In addition to predicting the speed of waves of different lengths and directions, it is important to know something about how accurate the prediction is, and as a worst case, whether it is reasonable at all. This has not been possible before, but we provide a way to answer both questions in a straightforward manner.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013AIPC.1522...86A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013AIPC.1522...86A"><span>Comparison of heaving buoy and oscillating flap wave energy converters</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Abu Bakar, Mohd Aftar; Green, David A.; Metcalfe, Andrew V.; Najafian, G.</p> <p>2013-04-01</p> <p>Waves offer an attractive source of renewable energy, with relatively low environmental impact, for communities reasonably close to the sea. Two types of simple wave energy converters (WEC), the heaving buoy WEC and the oscillating flap WEC, are studied. Both WECs are considered as simple energy converters because they can be modelled, to a first approximation, as single degree of freedom linear dynamic systems. In this study, we estimate the response of both WECs to typical wave inputs; wave height for the buoy and corresponding wave surge for the flap, using spectral methods. A nonlinear model of the oscillating flap WEC that includes the drag force, modelled by the Morison equation is also considered. The response to a surge input is estimated by discrete time simulation (DTS), using central difference approximations to derivatives. This is compared with the response of the linear model obtained by DTS and also validated using the spectral method. 
Bendat's nonlinear system identification (BNLSI) technique was used to analyze the nonlinear dynamic system since the spectral analysis was only suitable for linear dynamic system. The effects of including the nonlinear term are quantified.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25370008','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25370008"><span>Xenobiotic-metabolizing enzymes in the skin of rat, mouse, pig, guinea pig, man, and in human skin models.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Oesch, F; Fabian, E; Guth, K; Landsiedel, R</p> <p>2014-12-01</p> <p>The exposure of the skin to medical drugs, skin care products, cosmetics, and other chemicals renders information on xenobiotic-metabolizing enzymes (XME) in the skin highly interesting. Since the use of freshly excised human skin for experimental investigations meets with ethical and practical limitations, information on XME in models comes in the focus including non-human mammalian species and in vitro skin models. This review attempts to summarize the information available in the open scientific literature on XME in the skin of human, rat, mouse, guinea pig, and pig as well as human primary skin cells, human cell lines, and reconstructed human skin models. The most salient outcome is that much more research on cutaneous XME is needed for solid metabolism-dependent efficacy and safety predictions, and the cutaneous metabolism comparisons have to be viewed with caution. Keeping this fully in mind at least with respect to some cutaneous XME, some models may tentatively be considered to approximate reasonable closeness to human skin. 
For dermal absorption and for skin irritation among many contributing XME, esterase activity is of special importance, which in pig skin, some human cell lines, and reconstructed skin models appears reasonably close to human skin. With respect to genotoxicity and sensitization, activating XME are not yet judgeable, but reactive metabolite-reducing XME in primary human keratinocytes and several reconstructed human skin models appear reasonably close to human skin. For a more detailed delineation and discussion of the severe limitations see the "Overview and Conclusions" section in the end of this review.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26971025','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26971025"><span>Two-dimensional character of internal rotation of furfural and other five-member heterocyclic aromatic aldehydes.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bataev, Vadim A; Pupyshev, Vladimir I; Godunov, Igor A</p> <p>2016-05-15</p> <p>The features of nuclear motion corresponding to the rotation of the formyl group (CHO) are studied for the molecules of furfural and some other five-member heterocyclic aromatic aldehydes by the use of MP2/6-311G** quantum chemical approximation. It is demonstrated that the traditional one-dimensional models of internal rotation for the molecules studied have only limited applicability. The reason is the strong kinematic interaction of the rotation of the CHO group and out-of-plane CHO deformation that is realized for the molecules under consideration. The computational procedure based on the two-dimensional approximation is considered for low lying vibrational states as more adequate to the problem. Copyright © 2016 Elsevier B.V. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1998WRR....34.3595L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1998WRR....34.3595L"><span>Using partial site aggregation to reduce bias in random utility travel cost models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lupi, Frank; Feather, Peter M.</p> <p>1998-12-01</p> <p>We propose a "partial aggregation" strategy for defining the recreation sites that enter choice sets in random utility models. Under the proposal, the most popular sites and sites that will be the subject of policy analysis enter choice sets as individual sites while remaining sites are aggregated into groups of similar sites. The scheme balances the desire to include all potential substitute sites in the choice sets with practical data and modeling constraints. Unlike fully aggregate models, our analysis and empirical applications suggest that the partial aggregation approach reasonably approximates the results of a disaggregate model. The partial aggregation approach offers all of the data and computational advantages of models with aggregate sites but does not suffer from the same degree of bias as fully aggregate models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22518736-steady-state-model-solar-wind-electrons-revisited','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22518736-steady-state-model-solar-wind-electrons-revisited"><span>STEADY-STATE MODEL OF SOLAR WIND ELECTRONS REVISITED</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Yoon, Peter H.; Kim, Sunjung; Choe, G. 
S., E-mail: yoonp@umd.edu</p> <p>2015-10-20</p> <p>In a recent paper, Kim et al. put forth a steady-state model for the solar wind electrons. The model assumed local equilibrium between the halo electrons, characterized by an intermediate energy range, and the whistler-range fluctuations. The basic wave–particle interaction is assumed to be the cyclotron resonance. Similarly, it was assumed that a dynamical steady state is established between the highly energetic superhalo electrons and high-frequency Langmuir fluctuations. Comparisons with the measured solar wind electron velocity distribution function (VDF) during quiet times were also made, and reasonable agreements were obtained. In such a model, however, only the steady-state solution for themore » Fokker–Planck type of electron particle kinetic equation was considered. The present paper complements the previous analysis by considering both the steady-state particle and wave kinetic equations. It is shown that the model halo and superhalo electron VDFs, as well as the assumed wave intensity spectra for the whistler and Langmuir fluctuations, approximately satisfy the quasi-linear wave kinetic equations in an approximate sense, thus further validating the local equilibrium model constructed in the paper by Kim et al.« less</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70016206','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70016206"><span>Theory and application of an approximate model of saltwater upconing in aquifers</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>McElwee, C.; Kemblowski, M.</p> <p>1990-01-01</p> <p>Motion and mixing of salt water and fresh water are vitally important for water-resource development throughout the world. 
An approximate model of saltwater upconing in aquifers is developed, which results in three non-linear coupled equations for the freshwater zone, the saltwater zone, and the transition zone. The description of the transition zone uses the concept of a boundary layer. This model invokes some assumptions to give a reasonably tractable model, considerably better than the sharp interface approximation but considerably simpler than a fully three-dimensional model with variable density. We assume the validity of the Dupuit-Forchheimer approximation of horizontal flow in each layer. Vertical hydrodynamic dispersion into the base of the transition zone is assumed and concentration of the saltwater zone is assumed constant. Solute in the transition zone is assumed to be moved by advection only. Velocity and concentration are allowed to vary vertically in the transition zone by using shape functions. Several numerical techniques can be used to solve the model equations, and simple analytical solutions can be useful in validating the numerical solution procedures. We find that the model equations can be solved with adequate accuracy using the procedures presented. The approximate model is applied to the Smoky Hill River valley in central Kansas. This model can reproduce earlier sharp interface results as well as evaluate the importance of hydrodynamic dispersion for feeding salt water to the river. We use a wide range of dispersivity values and find that unstable upconing always occurs. Therefore, in this case, hydrodynamic dispersion is not the only mechanism feeding salt water to the river. Calculations imply that unstable upconing and hydrodynamic dispersion could be equally important in transporting salt water. For example, if groundwater flux to the Smoky Hill River were only about 40% of its expected value, stable upconing could exist where hydrodynamic dispersion into a transition zone is the primary mechanism for moving salt water to the river. 
The current model could be useful in situations involving dense saltwater layers. ?? 1990.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/6642106-survey-hepa-filter-experience','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/6642106-survey-hepa-filter-experience"><span>Survey of HEPA filter experience</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Carbaugh, E.H.</p> <p>1982-07-01</p> <p>A survey of high efficiency particulate air (HEPA) filter applications and experience at Department of Energy (DOE) sites was conducted to provide an overview of the reasons and magnitude of HEPA filter changeouts and failures. Results indicated that approximately 58% of the filters surveyed were changed out in the three year study period, and some 18% of all filters were changed out more than once. Most changeouts (63%) were due to the existence of a high pressure drop across the filter, indicative of filter plugging. Other reasons for changeout included leak-test failure (15%), preventive maintenance service life limit (13%), suspectedmore » damage (5%) and radiation buildup (4%). Filter failures occurred with approximately 12% of all installed filters. Of these failures, most (64%) occurred for unknown or unreported reasons. Handling or installation damage accounted for an additional 19% of reported failures. 
Media ruptures, filter-frame failures and seal failures each accounted for approximately 5 to 6% of the reported failures.« less</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_5");'>5</a></li> <li><a href="#" onclick='return showDiv("page_6");'>6</a></li> <li class="active"><span>7</span></li> <li><a href="#" onclick='return showDiv("page_8");'>8</a></li> <li><a href="#" onclick='return showDiv("page_9");'>9</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_7 --> <div id="page_8" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_6");'>6</a></li> <li><a href="#" onclick='return showDiv("page_7");'>7</a></li> <li class="active"><span>8</span></li> <li><a href="#" onclick='return showDiv("page_9");'>9</a></li> <li><a href="#" onclick='return showDiv("page_10");'>10</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="141"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19800065000&hterms=gans&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dgans','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19800065000&hterms=gans&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dgans"><span>Energy conservation - A test for scattering approximations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Acquista, C.; Holland, A. 
C.</p> <p>1980-01-01</p> <p>The roles of the extinction theorem and energy conservation in obtaining the scattering and absorption cross sections for several light scattering approximations are explored. It is shown that the Rayleigh, Rayleigh-Gans, anomalous diffraction, geometrical optics, and Shifrin approximations all lead to reasonable values of the cross sections, while the modified Mie approximation does not. Further examination of the modified Mie approximation for the ensembles of nonspherical particles reveals additional problems with that method.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.gpo.gov/fdsys/pkg/FR-2012-07-16/pdf/2012-17236.pdf','FEDREG'); return false;" href="https://www.gpo.gov/fdsys/pkg/FR-2012-07-16/pdf/2012-17236.pdf"><span>77 FR 41742 - In the Matter of: Humane Restraint, Inc., 912 Bethel Circle, Waunakee, WI 53597, Respondent...</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.gpo.gov/fdsys/browse/collection.action?collectionCode=FR">Federal Register 2010, 2011, 2012, 2013, 2014</a></p> <p></p> <p>2012-07-16</p> <p>... under Export Control Classification Number (``ECCN'') 0A982, controlled for Crime Control reasons, and..., classified under ECCN 0A982, controlled for Crime Control reasons, and valued at approximately $112, from the... 
kit, items classified under ECCN 0A982, controlled for Crime Control reasons, and valued at...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27528701','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27528701"><span>Barriers to becoming a female surgeon and the influence of female surgical role models.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kerr, Hui-Ling; Armstrong, Lesley Ann; Cade, Jennifer Ellen</p> <p>2016-10-01</p> <p>We aim to investigate the reasons that medical students and junior doctors who are women are less likely to pursue a career in surgery compared with their male counterparts. An anonymous questionnaire was distributed to female final year medical students and female junior doctors in two UK hospitals between August and September 2012. Topics included career choice, attitudes to surgery, recognition of female surgical role models and perceived sexual discrimination. 50 medical students and 50 junior doctors were given our survey. We received a 96% response rate; 46 medical students and 50 junior doctors. 6/50 (12%) junior doctors planned a career in surgery compared with 14/46 (30%) medical students. 'Work-life balance' was the main reason cited for not wishing to pursue surgery (29/46 (63%) medical students and 25/50 (50%) junior doctors). 28/46 (61%) medical students and 28/50 (56%) junior doctors had encountered a female surgical role model; only five students and two junior doctors felt that these were influential in their career decision. Of those who had not, approximately 40% in each group felt that if they had, they may have considered surgery. Approximately 30% in each group had encountered female surgeons that had dissuaded them from a surgical career. 
Work-life balance is still cited by female junior doctors as being the main deterrent to a surgical career. The paucity of female role models and some perceived sexual discrimination may cause female doctors to discount surgery as a career. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/12452573','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/12452573"><span>Scale model experimentation: using terahertz pulses to study light scattering.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Pearce, Jeremy; Mittleman, Daniel M</p> <p>2002-11-07</p> <p>We describe a new class of experiments involving applications of terahertz radiation to problems in biomedical imaging and diagnosis. These involve scale model measurements, in which information can be gained about pulse propagation in scattering media. Because of the scale invariance of Maxwell's equations, these experiments can provide insight for researchers working on similar problems at shorter wavelengths. As a first demonstration, we measure the propagation constants for pulses in a dense collection of spherical scatterers, and compare with the predictions of the quasi-crystalline approximation. 
Even though the fractional volume in our measurements exceeds the limit of validity of this model, we find that it still predicts certain features of the propagation with reasonable accuracy.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015JAP...118d4502M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015JAP...118d4502M"><span>Analytic drain current model for III-V cylindrical nanowire transistors</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Marin, E. G.; Ruiz, F. G.; Schmidt, V.; Godoy, A.; Riel, H.; Gámiz, F.</p> <p>2015-07-01</p> <p>An analytical model is proposed to determine the drain current of III-V cylindrical nanowires (NWs). The model uses the gradual channel approximation and takes into account the complete analytical solution of the Poisson and Schrödinger equations for the Γ-valley and for an arbitrary number of subbands. Fermi-Dirac statistics are considered to describe the 1D electron gas in the NWs, and the resulting recursive Fermi-Dirac integral of order -1/2 is successfully integrated under reasonable assumptions. The model has been validated against numerical simulations, showing excellent agreement for different semiconductor materials, diameters up to 40 nm, gate overdrive biases up to 0.7 V, and densities of interface states up to 10^13 eV^-1 cm^-2.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19830026137','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19830026137"><span>Field measurements, simulation modeling and development of analysis for moisture stressed corn and soybeans, 1982 studies</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Blad, B. 
L.; Norman, J. M.; Gardner, B. R.</p> <p>1983-01-01</p> <p>The experimental design, data acquisition and analysis procedures for agronomic and reflectance data acquired over corn and soybeans at the Sandhills Agricultural Laboratory of the University of Nebraska are described. The following conclusions were reached: (1) predictive leaf area estimation models can be defined which appear valid over a wide range of soils; (2) relative grain yield estimates over moisture stressed corn were improved by combining reflectance and thermal data; (3) corn phenology estimates using the model of Badhwar and Henderson (1981) exhibited systematic bias but were reasonably accurate; (4) canopy reflectance can be modelled to within approximately 10% of measured values; and (5) soybean pubescence significantly affects canopy reflectance, energy balance and water use relationships.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19720013217&hterms=penny&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3Dpenny','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19720013217&hterms=penny&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3Dpenny"><span>An analysis of the booster plume impingement environment during the space shuttle nominal staging maneuver</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Wojciechowski, C. J.; Penny, M. M.; Greenwood, T. F.; Fossler, I. H.</p> <p>1972-01-01</p> <p>An experimental study of the plume impingement heating on the space shuttle booster afterbody resulting from the space shuttle orbiter engine plumes was conducted. The 1/100-scale model tests consisted of one and two orbiter engine firings on a flat plate, a flat plate with a fin, and a cylinder model. 
The plume impingement heating rates on these surfaces were measured using thin film heat transfer gages. Results indicate the engine simulation is a reasonable approximation to the two engine configuration, but more tests are needed to verify the plume model of the main engine configuration. For impingement, results show the models experienced laminar boundary layer convective heating; therefore, tests at higher Reynolds numbers are needed to characterize impingement heating.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19790037257&hterms=process+costing&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dprocess%2Bcosting','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19790037257&hterms=process+costing&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dprocess%2Bcosting"><span>Costing the satellite power system</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Hazelrigg, G. A., Jr.</p> <p>1978-01-01</p> <p>The paper presents a methodology for satellite power system costing, places approximate limits on the accuracy possible in cost estimates made at this time, and outlines the use of probabilistic cost information in support of the decision-making process. Reasons for using probabilistic costing or risk analysis procedures instead of standard deterministic costing procedures are considered. Components of cost, cost estimating relationships, grass roots costing, and risk analysis are discussed. 
Risk analysis using a Monte Carlo simulation model is used to estimate future costs.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20060044259&hterms=bricks&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dbricks','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20060044259&hterms=bricks&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dbricks"><span>The 'Brick Wall' radio loss approximation and the performance of strong channel codes for deep space applications at high data rates</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Shambayati, Shervin</p> <p>2001-01-01</p> <p>To evaluate the performance of strong channel codes in the presence of imperfect carrier phase tracking for residual-carrier BPSK modulation, this paper develops an approximate 'brick wall' model that, at high data rates, is independent of the channel code type. It is shown that this approximation is reasonably accurate (less than 0.7 dB at low FERs for the (1784,1/6) code and less than 0.35 dB at low FERs for the (5920,1/6) code). Based on the approximation's accuracy, it is concluded that the effects of imperfect carrier tracking are largely independent of the channel code type for strong channel codes. Therefore, the advantage that one strong channel code has over another with perfect carrier tracking translates to nearly the same advantage under imperfect carrier tracking conditions. 
This will allow the link designers to incorporate projected channel code performance of strong channel codes into their design tables without worrying about their behavior in the face of imperfect carrier phase tracking.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/17910701','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/17910701"><span>Orientational analysis of planar fibre systems observed as a Poisson shot-noise process.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kärkkäinen, Salme; Lantuéjoul, Christian</p> <p>2007-10-01</p> <p>We consider two-dimensional fibrous materials observed as a digital greyscale image. The problem addressed is to estimate the orientation distribution of unobservable thin fibres from a greyscale image modelled by a planar Poisson shot-noise process. The classical stereological approach is not straightforward, because the point intensities of thin fibres along sampling lines may not be observable. For such cases, Kärkkäinen et al. (2001) suggested the use of scaled variograms determined from grey values along sampling lines in several directions. Their method is based on the assumption that the proportion between the scaled variograms and point intensities in all directions of sampling lines is constant. This assumption is proved to be valid asymptotically for Boolean models and dead leaves models, under some regularity conditions. In this work, we derive the scaled variogram and its approximations for a planar Poisson shot-noise process using the modified Bessel function. In the case of reasonably high resolution of the observed image, the scaled variogram has an approximate functional relation to the point intensity, and in the case of high resolution the relation is proportional. 
As the obtained relations are approximate, they are tested in simulations. The existing orientation analysis method based on the proportional relation is further evaluated on images with different resolutions. The new result, the asymptotic proportionality between the scaled variograms and the point intensities for a Poisson shot-noise process, completes the earlier results for the Boolean models and for the dead leaves models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19950034746&hterms=Ocean+Stratification&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3DOcean%2BStratification','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19950034746&hterms=Ocean+Stratification&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3DOcean%2BStratification"><span>Bio-optical and physical variability in the subarctic North Atlantic Ocean during the spring of 1989</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Dickey, T.; Marra, J.; Stramska, M.; Langdon, C.; Granata, T.; Plueddemann, A.; Weller, R.; Yoder, J.</p> <p>1994-01-01</p> <p>A unique set of physical, bio-optical, and meteorological observations was made from a mooring located in the open ocean south of Iceland (59 deg 29.5 min N, 20 deg 49.8 min W) from April 13 to June 12, 1989. The present measurements are apparently the first to resolve the rapid transition to springtime physical and biological conditions at such a high latitude site. Our data were collected with bio-optical and physical moored systems every few minutes. The abrupt onset of springtime stratification was observed with the mixed layer shoaling from approximately 550 m to approximately 50 m in approximately 5 days. 
During this period a major phytoplankton bloom occurred with a tenfold increase in near-surface chlorophyll concentration in less than 3 weeks. Our statistical analysis indicates that the velocity shear in the upper layer is driven primarily by local wind stress. Mesoscale variability is also apparent from these and concurrent airborne oceanographic lidar observations. Our complementary modeling results suggest that the near-surface layer may be reasonably well described by a one-dimensional model and that the spring bloom was initiated during incipient near-surface restratification.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013PhDT.......494S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013PhDT.......494S"><span>Studies of porous anodic alumina using spin echo scattering angle measurement</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Stonaha, Paul</p> <p></p> <p>The properties of a neutron make it a useful tool for scattering experiments. We have developed a method, dubbed SESAME, in which specially designed magnetic fields encode the scattering signal of a neutron beam into the beam's average Larmor phase. A geometry is presented that delivers the correct Larmor phase (to first order), and it is shown that reasonable variations of the geometry do not significantly affect the net Larmor phase. The solenoids are designed using an analytic approximation. Comparison of this approximate function with finite element calculations and Hall probe measurements confirms its validity, allowing for fast computation of the magnetic fields. The coils were built and tested in-house on the NBL-4 instrument, a polarized neutron reflectometer whose construction is another major portion of this work. 
Neutron scattering experiments using the solenoids are presented, and the scattering signal from porous anodic alumina is investigated in detail. A model using the Born Approximation is developed and compared against the scattering measurements. Using the model, we define the necessary degree of alignment of such samples in a SESAME measurement, and we show how the signal retrieved using SESAME is sensitive to the range of detectable momentum transfer.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MPLB...3250073C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MPLB...3250073C"><span>Numerical treatment for solving two-dimensional space-fractional advection-dispersion equation using meshless method</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cheng, Rongjun; Sun, Fengxin; Wei, Qi; Wang, Jufeng</p> <p>2018-02-01</p> <p>The space-fractional advection-dispersion equation (SFADE) can describe particle transport in a variety of fields more accurately than the classical integer-order models. Because of the nonlocal property of the integro-differential operator of the space-fractional derivative, fractional models are very challenging to treat numerically, and few treatments have been reported in the literature. In this paper, a numerical analysis of the two-dimensional SFADE is carried out by the element-free Galerkin (EFG) method. The trial functions for the SFADE are constructed by the moving least-square (MLS) approximation. By the Galerkin weak form, the energy functional is formulated. Employing the energy functional minimization procedure, the final system of algebraic equations is obtained. The Riemann-Liouville operator is discretized by the Grünwald formula. 
With the central difference method, the EFG method, and the Grünwald formula, fully discrete approximation schemes for the SFADE are established. The computed approximate solutions are presented in tables and graphs and compared with exact results and with results obtained by other well-known methods. The presented results demonstrate the validity, efficiency and accuracy of the proposed techniques. Furthermore, the error is computed, and the proposed method shows reasonable convergence rates in the spatial and temporal discretizations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014EGUGA..16.1788W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014EGUGA..16.1788W"><span>Predicting herbicide and biocide concentrations in rivers across Switzerland</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wemyss, Devon; Honti, Mark; Stamm, Christian</p> <p>2014-05-01</p> <p>Pesticide concentrations vary strongly in space and time. Accordingly, intensive sampling is required to achieve a reliable quantification of pesticide pollution. As this requires substantial resources, loads and concentration ranges in many small and medium streams remain unknown. Here, we propose partially filling the information gap for herbicides and biocides by using a modelling approach that predicts stream concentrations without site-specific calibration simply based on generally available data like land use, discharge and nation-wide consumption data. The simple, conceptual model distinguishes herbicide losses from agricultural fields, private gardens and biocide losses from buildings (facades, roofs). The herbicide model is driven by river discharge and the applied herbicide mass; the biocide model requires precipitation and the footprint area of urban areas containing the biocide. 
The model approach allows for modelling concentrations across multiple catchments at the daily, or shorter, time scale and for small to medium-sized catchments (1-100 km²). Four high resolution sampling campaigns in the Swiss Plateau were used to calibrate the model parameters for six model compounds: atrazine, metolachlor, terbuthylazine, terbutryn, diuron and mecoprop. Five additional sampled catchments across Switzerland were used to directly compare the predicted to the measured concentrations. Analysis of the first results reveals a reasonable simulation of the concentration dynamics for specific rainfall events and across the seasons. Predicted concentration ranges are reasonable even without site-specific calibration. This indicates the transferability of the calibrated model directly to other areas. However, the results also demonstrate systematic biases in that the highest measured peaks were not attained by the model. Probable causes for these deviations are conceptual model limitations and input uncertainty (pesticide use intensity, local precipitation, etc.). Accordingly, the model will be conceptually improved. This presentation shows the model simulations and compares the performance of the original and the modified model versions. 
Finally, the model will be applied across approximately 50% of the catchments in the Swiss Plateau, where necessary input data is available and where the model concept can be reasonably applied.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20050182919','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20050182919"><span>Partially-Averaged Navier Stokes Model for Turbulence: Implementation and Validation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Girimaji, Sharath S.; Abdol-Hamid, Khaled S.</p> <p>2005-01-01</p> <p>Partially-averaged Navier Stokes (PANS) is a suite of turbulence closure models of various modeled-to-resolved scale ratios ranging from Reynolds-averaged Navier Stokes (RANS) to Navier-Stokes (direct numerical simulations). The objective of PANS, like hybrid models, is to resolve large scale structures at reasonable computational expense. The modeled-to-resolved scale ratio or the level of physical resolution in PANS is quantified by two parameters: the unresolved-to-total ratios of kinetic energy (f(sub k)) and dissipation (f(sub epsilon)). The unresolved-scale stress is modeled with the Boussinesq approximation and modeled transport equations are solved for the unresolved kinetic energy and dissipation. 
In this paper, we first present a brief discussion of the PANS philosophy followed by a description of the implementation procedure and finally perform preliminary evaluation in benchmark problems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5358896','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5358896"><span>Climate variability, animal reservoir and transmission of scrub typhus in Southern China</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Li, Xiaoning; Ma, Yu; Tao, Xia; Wu, Xinwei</p> <p>2017-01-01</p> <p>Objectives We aimed to evaluate the relationships between climate variability, animal reservoirs and scrub typhus incidence in Southern China. Methods We obtained data on scrub typhus cases in Guangzhou every month from 2006 to 2014 from the Chinese communicable disease network. Time-series Poisson regression models and distributed lag nonlinear models (DLNM) were used to evaluate the relationship between risk factors and scrub typhus. Results Wavelet analysis found the incidence of scrub typhus cycled with a period of approximately 8–12 months and long-term trends with a period of approximately 24–36 months. The DLNM model shows that relative humidity, rainfall, DTR, MEI and rodent density were associated with the incidence of scrub typhus. Conclusions Our findings suggest that the incidence of scrub typhus has two main temporal cycles. Determining the reason for this trend and how it can be used for disease control and prevention requires additional research. The transmission of scrub typhus is highly dependent on climate factors and rodent density, both of which should be considered in prevention and control strategies for scrub typhus. 
PMID:28273079</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28273079','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28273079"><span>Climate variability, animal reservoir and transmission of scrub typhus in Southern China.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wei, Yuehong; Huang, Yong; Li, Xiaoning; Ma, Yu; Tao, Xia; Wu, Xinwei; Yang, Zhicong</p> <p>2017-03-01</p> <p>We aimed to evaluate the relationships between climate variability, animal reservoirs and scrub typhus incidence in Southern China. We obtained data on scrub typhus cases in Guangzhou every month from 2006 to 2014 from the Chinese communicable disease network. Time-series Poisson regression models and distributed lag nonlinear models (DLNM) were used to evaluate the relationship between risk factors and scrub typhus. Wavelet analysis found the incidence of scrub typhus cycled with a period of approximately 8-12 months and long-term trends with a period of approximately 24-36 months. The DLNM model shows that relative humidity, rainfall, DTR, MEI and rodent density were associated with the incidence of scrub typhus. Our findings suggest that the incidence of scrub typhus has two main temporal cycles. Determining the reason for this trend and how it can be used for disease control and prevention requires additional research. 
The transmission of scrub typhus is highly dependent on climate factors and rodent density, both of which should be considered in prevention and control strategies for scrub typhus.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JPhCS.783a2012B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JPhCS.783a2012B"><span>Leak Isolation in Pressurized Pipelines using an Interpolation Function to approximate the Fitting Losses</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Badillo-Olvera, A.; Begovich, O.; Peréz-González, A.</p> <p>2017-01-01</p> <p>The present paper addresses the detection and isolation of a single leak using the Fault Model Approach (FMA), focusing on pipelines with changes in their geometry. These changes generate a pressure drop different from that produced by friction, a common scenario in real pipeline systems. The problem arises because the dynamical model of the fluid in a pipeline considers only straight geometries without fittings. To address this situation, several papers work with a virtual model of the pipeline that generates an equivalent straight length, so that the friction produced by the fittings is taken into account. However, when this method is applied, the leak is isolated at a virtual position, which for practical purposes is not a complete solution. This research proposes, as a solution to the problem of leak isolation in a virtual length, a polynomial interpolation function that approximates the conversion of the virtual position to a real-coordinate value. 
Experimental results on a real prototype are shown, concluding that the proposed methodology performs well.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20080013199&hterms=reasoning&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3Dreasoning','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20080013199&hterms=reasoning&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3Dreasoning"><span>Probabilistic Reasoning for Plan Robustness</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Schaffer, Steve R.; Clement, Bradley J.; Chien, Steve A.</p> <p>2005-01-01</p> <p>A planning system must reason about the uncertainty of continuous variables in order to accurately project the possible system state over time. A method is devised for directly reasoning about the uncertainty in continuous activity duration and resource usage for planning problems. By representing random variables as parametric distributions, computing projected system state can be simplified in some cases. Several common approximation methods and novel methods are compared for over-constrained and lightly constrained domains within an iterative repair planner. Results show improvements in robustness over the conventional non-probabilistic representation by reducing the number of constraint violations witnessed by execution. 
The improvement is more significant for larger problems and problems with higher resource subscription levels but diminishes as the system is allowed to accept higher risk levels.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2000PhDT........90J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2000PhDT........90J"><span>Development of a mathematical model of the human cardiovascular system: An educational perspective</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Johnson, Bruce Allen</p> <p></p> <p>A mathematical model of the human cardiovascular system will be a useful educational tool in biological sciences and bioengineering classrooms. The goal of this project is to develop a mathematical model of the human cardiovascular system that responds appropriately to variations of significant physical variables. Model development is based on standard fluid statics and dynamics principles, pressure-volume characteristics of the cardiac cycle, and compliant behavior of blood vessels. Cardiac cycle phases provide the physical and logical model structure, and Boolean algebra links model sections. The model is implemented using VisSim, a highly intuitive and easily learned block diagram modeling software package. Comparisons of model predictions of key variables to published values suggest that the model reasonably approximates expected behavior of those variables. The model responds plausibly to variations of independent variables. Projected usefulness of the model as an educational tool is threefold: independent variables which determine heart function may be easily varied to observe cause and effect; the model is used in an interactive setting; and the relationship of governing equations to model behavior is readily viewable and intuitive. 
Future use of this model in classrooms may give a more reasonable indication of its value as an educational tool.* *This dissertation includes a CD that is multimedia (contains text and other applications that are not available in a printed format). The CD requires the following applications: CorelPhotoHouse, CorelWordPerfect, VisSim Viewer (included on CD), Internet access.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_8 --> <div id="page_9" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="161"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011PhRvE..83d6128I','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011PhRvE..83d6128I"><span>Spread of information and infection on finite random networks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" 
href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Isham, Valerie; Kaczmarska, Joanna; Nekovee, Maziar</p> <p>2011-04-01</p> <p>The modeling of epidemic-like processes on random networks has received considerable attention in recent years. While these processes are inherently stochastic, most previous work has been focused on deterministic models that ignore important fluctuations that may persist even in the infinite network size limit. In a previous paper, for a class of epidemic and rumor processes, we derived approximate models for the full probability distribution of the final size of the epidemic, as opposed to only mean values. In this paper we examine via direct simulations the adequacy of the approximate model to describe stochastic epidemics and rumors on several random network topologies: homogeneous networks, Erdös-Rényi (ER) random graphs, Barabasi-Albert scale-free networks, and random geometric graphs. We find that the approximate model is reasonably accurate in predicting the probability of spread. However, the position of the threshold and the conditional mean of the final size for processes near the threshold are not well described by the approximate model even in the case of homogeneous networks. We attribute this failure to the presence of other structural properties beyond degree-degree correlations, and in particular clustering, which are present in any finite network but are not incorporated in the approximate model. In order to test this “hypothesis” we perform additional simulations on a set of ER random graphs where degree-degree correlations and clustering are separately and independently introduced using recently proposed algorithms from the literature. Our results show that even strong degree-degree correlations have only weak effects on the position of the threshold and the conditional mean of the final size. 
On the other hand, the introduction of clustering greatly affects both the position of the threshold and the conditional mean. Similar analysis for the Barabási-Albert scale-free network confirms the significance of clustering on the dynamics of rumor spread. For this network, though, with its highly skewed degree distribution, the addition of positive correlation had a much stronger effect on the final size distribution than was found for the simple random graph.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA221945','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA221945"><span>Development of Probabilistic and Possibilistic Approaches to Approximate Reasoning and Its Applications</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>1989-10-31</p> <p>... AI (circumscription, non-monotonic reasoning, and default reasoning), our approach is based on fuzzy logic and, more specifically, on the theory of ...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010JChPh.132b4505C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010JChPh.132b4505C"><span>Two-dimensional electronic spectra from the hierarchical equations of motion method: Application to model dimers</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chen, Liping; Zheng, Renhui; Shi, Qiang; Yan, YiJing</p> <p>2010-01-01</p> <p>We extend our previous study of absorption line shapes of molecular aggregates using the Liouville space hierarchical equations of motion (HEOM) method [L. P. 
Chen, R. H. Zheng, Q. Shi, and Y. J. Yan, J. Chem. Phys. 131, 094502 (2009)] to calculate third order optical response functions and two-dimensional electronic spectra of model dimers. As in our previous work, we have focused on the applicability of several approximate methods related to the HEOM method. We show that while the second order perturbative quantum master equations are generally inaccurate in describing the peak shapes and solvation dynamics, they can give reasonable peak amplitude evolution even in the intermediate coupling regime. The stochastic Liouville equation results in good peak shapes, but does not properly describe the excited state dynamics due to the lack of detailed balance. A modified version of the high temperature approximation to the HEOM gives the best agreement with the exact result.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21230032','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21230032"><span>Observation uncertainty in reversible Markov chains.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Metzner, Philipp; Weber, Marcus; Schütte, Christof</p> <p>2010-09-01</p> <p>In many applications one is interested in finding a simplified model which captures the essential dynamical behavior of a real life process. If the essential dynamics can be assumed to be (approximately) memoryless then a reasonable choice for a model is a Markov model whose parameters are estimated by means of Bayesian inference from an observed time series. We propose an efficient Monte Carlo Markov chain framework to assess the uncertainty of the Markov model and related observables. The derived Gibbs sampler allows for sampling distributions of transition matrices subject to reversibility and/or sparsity constraints. 
The performance of the suggested sampling scheme is demonstrated and discussed for a variety of model examples. The uncertainty analysis of functions of the Markov model under investigation is discussed in application to the identification of conformations of the trialanine molecule via Robust Perron Cluster Analysis (PCCA+).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19950048231&hterms=baryonic+matter&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3Dbaryonic%2Bmatter','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19950048231&hterms=baryonic+matter&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3Dbaryonic%2Bmatter"><span>Massive black holes and light-element nucleosynthesis in a baryonic universe</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Gnedin, Nickolay Y.; Ostriker, Jeremiah P.; Rees, Martin J.</p> <p>1995-01-01</p> <p>We reexamine the model proposed by Gnedin & Ostriker (1992) in which Jeans mass black holes (M(sub BH) approximately = 10(exp 6) solar mass) form shortly after decoupling. There is no nonbaryonic dark matter in this model, but we examine the possibility that Omega(sub b) is considerably larger than given by normal nucleosynthesis. Here we allow for the fact that much of the high baryon-to-photon ratio material will collapse, leaving the universe of remaining material with light-element abundances more in accord with the residual baryonic density (approximately = 10(exp -2)) than with Omega(sub 0) and the initial baryonic density (approximately = 10(exp -1)). We find that no reasonable model can be made with random-phase density fluctuations, if the power on scales smaller than 10(exp 6) solar mass is as large as expected. 
However, phase-correlated models of the type that might occur in connection with topological singularities can be made with Omega(sub b) h(exp 2) = 0.013 +/- 0.001, 0.15 approximately less than Omega(sub 0) approximately less than 0.4, which are either flat (Omega(sub lambda) = 1 - Omega(sub 0)) or open (Omega(sub lambda) = 0) and which satisfy all the observational constraints which we apply, including the large baryon-to-total mass ratio found in the X-ray clusters. The remnant baryon density is thus close to that obtained in the standard picture (Omega(sub b) h(exp 2) = 0.0125 +/- 0.0025; Walker et al. 1991). The spectral index implied for fluctuations in the baryonic isocurvature scenario, -1 less than m less than 0, is in the range expected by other arguments based on large-scale structure and microwave fluctuation constraints. The dark matter in this picture is in the form of massive black holes. Accretion onto them at early epochs releases high-energy photons which significantly heat and reionize the universe. But photodissociation does not materially change light-element abundances. A typical model gives bar-y approximately = 1 x 10(exp -5), n(sub e)/n(sub H)(z = 30) approximately = 0.1, and a diffuse gamma-ray background at 100 keV near the Cosmic Background Explorer Satellite (COBE) limit of the order of 10% of that observed which originates from high-redshift quasars. 
Reionization in this model occurs at redshift 600 and reaches H II/H(sub tot) approximately = 0.1-0.2.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19930022943','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19930022943"><span>Combining qualitative and quantitative spatial and temporal information in a hierarchical structure: Approximate reasoning for plan execution monitoring</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Hoebel, Louis J.</p> <p>1993-01-01</p> <p>The problem of plan generation (PG) and the problem of plan execution monitoring (PEM), including updating, queries, and resource-bounded replanning, have different reasoning and representation requirements. PEM requires the integration of qualitative and quantitative information. PEM involves receiving data about the world in which a plan or agent is executing. The problem is to quickly determine the relevance of the data, the consistency of the data with respect to the expected effects, and if execution should continue. Only spatial and temporal aspects of the plan are addressed for relevance in this work. Current temporal reasoning systems are deficient in computational aspects or expressiveness. This work presents a hybrid qualitative and quantitative system that is fully expressive in its assertion language while offering certain computational efficiencies. 
In order to proceed, methods incorporating approximate reasoning using hierarchies, notions of locality, constraint expansion, and absolute parameters need be used and are shown to be useful for the anytime nature of PEM.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018ApJ...855...64L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018ApJ...855...64L"><span>The Discrepancy between Einstein Mass and Dynamical Mass for SIS and Power-law Mass Models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, Rui; Wang, Jiancheng; Shu, Yiping; Xu, Zhaoyi</p> <p>2018-03-01</p> <p>We investigate the discrepancy between the two-dimensional projected lensing mass and the dynamical mass for an ensemble of 97 strong gravitational lensing systems discovered by the Sloan Lens ACS Survey, the BOSS Emission-Line Lens Survey (BELLS), and the BELLS for GALaxy-Lyα EmitteR sYstems Survey. We fit the lensing data to obtain the Einstein mass and use the velocity dispersion of the lensing galaxies provided by the Sloan Digital Sky Survey to get the projected dynamical mass within the Einstein radius by assuming the power-law mass approximation. The discrepancy is found to be obvious and quantified by Bayesian analysis. For the singular isothermal sphere mass model, we obtain that the Einstein mass is 20.7% more than the dynamical mass, and the discrepancy increases with the redshift of the lensing galaxies. For the more general power-law mass model, the discrepancy still exists within a 1σ credible region. We suspect the main reason for this discrepancy is mass contamination, including all invisible masses along the line of sight. 
In addition, the measurement errors and the approximation of the mass models could also contribute to the discrepancy.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19950034518&hterms=local+linear&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dlocal%2Blinear','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19950034518&hterms=local+linear&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dlocal%2Blinear"><span>The Local Supercluster as a test of cosmological models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Cen, Renyue</p> <p>1994-01-01</p> <p>The Local Supercluster kinematic properties (the Local Group infall toward the Virgo Cluster and galaxy density distribution about the Virgo Cluster) in various cosmological models are examined utilizing large-scale N-body (PM) simulations 500(exp 3) cells, 250(exp 3) particles, and box size of 400 h(exp -1) Mpc) and are compared to observations. Five models are investigated: (1) the standard, Cosmic Background Explorer Satellite (COBE)-normalized cold dark matter (CDM) model with omega = 1, h = 0.5, and sigma(sub 8) = 1.05; (2) the standard Hot Dark Matter (HDM) model with omega = 1, h = 0.75, and sigma(sub 8) = 1; (3) the tilted CDM model with omega = 1, h = 0.5, n = 0.7, and sigma(sub 8) = 0.5; (4) a CDM + lambda model with omega = 0.3, lambda = 0.7, h = 2/3, and sigma(sub 8) = 2/3; (5) the PBI model with omega = 0.2, h = 0.8, x = 0.1, m = -0.5, and sigma(sub 8) = 0.9. 
Comparison of the five models with the presently available observational measurements (v(sub LG) = 85 - 305 km/s, with mean of 250 km/s; delta(n(sub g))/(n(sub g)-bar) = 1.40 + or - 0.35) suggests that an open universe with omega approximately 0.5 (with or without lambda) and sigma(sub 8) approximately 0.8 is preferred, with omega = 0.3-1.0 (with or without lambda) and sigma(sub 8) = 0.7-1.0 being the acceptable range. At variance with some previous claims based on either direct N-body or spherical nonlinear approaches, we find that a flat model with sigma(sub 8) approximately 0.7-1.0 seems to be reasonably consistent with observations. However, if one favors the low limit of v(sub LG) = 85 km/s, then an omega approximately 0.2-0.3 universe seems to provide a better fit, and flat (omega = 1) models are ruled out at approximately 95% confidence level. On the other hand, if the high limit of v(sub LG) = 350 km/s is closer to the truth, then it appears that omega approximately 0.7-0.8 is more consistent. This test is insensitive to the shape of the power spectrum, but rather sensitive to the normalization of the perturbation amplitude on the relevant scale (e.g., sigma(sub 8)) and omega. We find that neither linear nor nonlinear relations (with spherical symmetry) are good approximations for the relation between radial peculiar velocity and density perturbation, i.e., nonspherical effects and gravitational tidal field are important. The derived omega using either of the two relations is underestimated. 
In some cases, this error is as large as a factor of 2-4.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016LTP....42...85C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016LTP....42...85C"><span>Heat capacity of xenon adsorbed on nanobundle grooves</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chishko, K. A.; Sokolova, E. S.</p> <p>2016-02-01</p> <p>A model of a one-dimensional nonideal gas in an external transverse force field is used to interpret the experimentally observed thermodynamic properties of xenon deposited in grooves on the surface of carbon nanobundles. A nonideal gas model with pairwise interactions is not entirely adequate for describing dense adsorbates (at low temperatures), but makes it easy to account for the exchange of particles between the 1D adsorbate and the 3D atmosphere, which is an important factor at intermediate (on the order of 35 K for xenon) and, especially, high (˜100 K) temperatures. In this paper, we examine a 1D real gas taking only the one-dimensional Lennard-Jones interaction into account, but under exact equilibrium with respect to the number of particles between the 1D adsorbate and the 3D atmosphere of the measurement cell. The low-temperature branch of the specific heat is fitted independently by an elastic chain model so as to obtain the best agreement between theory and experiment over the widest possible region, beginning at zero temperature. The gas approximation sets in after temperatures for which the phonon specific heat of the chain essentially transforms to a one-dimensional equipartition law. Here the basic parameters of both models can be chosen so that the heat capacity C(T) of the chain transforms essentially continuously into the corresponding curve for the gas approximation. 
Thus, it can be expected that an adequate interpretation of the real temperature dependences of the specific heat of low-dimensionality atomic adsorbates can be obtained through a reasonable combination of the phonon and gas approximations. The main parameters of the gas approximation (such as the desorption energy) obtained by fitting the theory to experiments on the specific heat of xenon correlate well with published data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19890015825','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19890015825"><span>An approximation function for frequency constrained structural optimization</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Canfield, R. A.</p> <p>1989-01-01</p> <p>The purpose is to examine a function for approximating natural frequency constraints during structural optimization. The nonlinearity of frequencies has posed a barrier to constructing approximations for frequency constraints of high enough quality to facilitate efficient solutions. A new function to represent frequency constraints, called the Rayleigh Quotient Approximation (RQA), is presented. Its ability to represent the actual frequency constraint results in stable convergence with effectively no move limits. The objective of the optimization problem is to minimize structural weight subject to some minimum (or maximum) allowable frequency and perhaps subject to other constraints such as stress, displacement, and gage size, as well. A reason for constraining natural frequencies during design might be to avoid potential resonant frequencies due to machinery or actuators on the structure. Another reason might be to satisfy requirements of an aircraft or spacecraft's control law. Whatever the structure supports may be sensitive to a frequency band that must be avoided. 
Any of these situations or others may require the designer to ensure the satisfaction of frequency constraints. A further motivation for considering accurate approximations of natural frequencies is that they are fundamental to dynamic response constraints.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013JPhD...46S5202M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013JPhD...46S5202M"><span>Development of a positive corona from a long grounded wire in a growing thunderstorm field</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mokrov, M. S.; Raizer, Yu P.; Bazelyan, E. M.</p> <p>2013-11-01</p> <p>The properties of a non-stationary corona initiated from a long grounded wire suspended horizontally above the ground and coronating in a slowly varying thundercloud electric field are studied. A two-dimensional (2D) model of the corona is developed. On the basis of this model, characteristics of the corona produced by a lightning protection wire are calculated under thunderstorm conditions. The corona characteristics are also found by using approximate analytical and quasi-one-dimensional numerical models. The results of these models agree reasonably well with those obtained from the 2D simulation. This allows one to estimate the corona parameters without recourse to the cumbersome simulation. 
This work was performed with a view to studying the efficiency of lightning protection wires later on.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19970014674','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19970014674"><span>Evaluation of a vortex-based subgrid stress model using DNS databases</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Misra, Ashish; Lund, Thomas S.</p> <p>1996-01-01</p> <p>The performance of a SubGrid Stress (SGS) model for Large-Eddy Simulation (LES) developed by Misra & Pullin (1996) is studied for forced and decaying isotropic turbulence on a 32(exp 3) grid. The physical viability of the model assumptions is tested using DNS databases. The results from LES of forced turbulence at Taylor Reynolds number R(sub (lambda)) approximately equals 90 are compared with filtered DNS fields. Probability density functions (pdfs) of the subgrid energy transfer, total dissipation, and the stretch of the subgrid vorticity by the resolved velocity-gradient tensor show reasonable agreement with the DNS data. 
The model is also tested in LES of decaying isotropic turbulence where it correctly predicts the decay rate and energy spectra measured by Comte-Bellot & Corrsin (1971).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/12487999','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/12487999"><span>Hydration entropy change from the hard sphere model.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Graziano, Giuseppe; Lee, Byungkook</p> <p>2002-12-10</p> <p>The gas to liquid transfer entropy change for a pure non-polar liquid can be calculated quite accurately using a hard sphere model that obeys the Carnahan-Starling equation of state. The same procedure fails to produce a reasonable value for hydrogen bonding liquids such as water, methanol and ethanol. However, the size of the molecules increases when the hydrogen bonds are turned off to produce the hard sphere system and the volume packing density rises. We show here that the hard sphere system that has this increased packing density reproduces the experimental transfer entropy values rather well. The gas to water transfer entropy values for small non-polar hydrocarbons are also not reproduced by a hard sphere model, whether one uses the normal (2.8 Å diameter) or the increased (3.2 Å) size for water. At least part of the reason that the hard sphere model with 2.8 Å size water produces too small an entropy change is that the size of water is too small for a system without hydrogen bonds. The reason that the 3.2 Å model also produces entropy values that are too small is that this is an overly crowded system and that the free volume introduced in the system by the addition of a solute molecule produces too much of a relief to this crowding. 
A hard sphere model, in which the free volume increase is limited by requiring that the average surface-to-surface distance between the solute and water molecules is the same as that between the increased-size water molecules, does approximately reproduce the experimental hydration entropy values. Copyright 2002 Elsevier Science B.V.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5875993','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5875993"><span>Constraining Genome-Scale Models to Represent the Bow Tie Structure of Metabolism for 13C Metabolic Flux Analysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Ando, David; Singh, Jahnavi; Keasling, Jay D.; García Martín, Héctor</p> <p>2018-01-01</p> <p>Determination of internal metabolic fluxes is crucial for fundamental and applied biology because they map how carbon and electrons flow through metabolism to enable cell function. 13C Metabolic Flux Analysis (13C MFA) and Two-Scale 13C Metabolic Flux Analysis (2S-13C MFA) are two techniques used to determine such fluxes. Both operate on the simplifying approximation that metabolic flux from peripheral metabolism into central “core” carbon metabolism is minimal, and can be omitted when modeling isotopic labeling in core metabolism. The validity of this “two-scale” or “bow tie” approximation is supported both by the ability to accurately model experimental isotopic labeling data, and by experimentally verified metabolic engineering predictions using these methods. However, the boundaries of core metabolism that satisfy this approximation can vary across species, and across cell culture conditions. 
Here, we present a set of algorithms that (1) systematically calculate flux bounds for any specified “core” of a genome-scale model so as to satisfy the bow tie approximation and (2) automatically identify an updated set of core reactions that can satisfy this approximation more efficiently. First, we leverage linear programming to simultaneously identify the lowest fluxes from peripheral metabolism into core metabolism compatible with the observed growth rate and extracellular metabolite exchange fluxes. Second, we use Simulated Annealing to identify an updated set of core reactions that allow for a minimum of fluxes into core metabolism to satisfy these experimental constraints. Together, these methods accelerate and automate the identification of a biologically reasonable set of core reactions for use with 13C MFA or 2S-13C MFA, as well as provide for a substantially lower set of flux bounds for fluxes into the core as compared with previous methods. We provide an open source Python implementation of these algorithms at https://github.com/JBEI/limitfluxtocore. PMID:29300340</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MNRAS.474.1886S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MNRAS.474.1886S"><span>Polarization light curve modelling of corotating interaction regions in the wind of the Wolf-Rayet star WR 6</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>St-Louis, N.; Tremblay, Patrick; Ignace, Richard</p> <p>2018-02-01</p> <p>The intriguing WN4b star WR 6 has been known to display epoch-dependent spectroscopic, photometric and polarimetric variability for several decades. 
In this paper, we set out to verify if a simplified analytical model in which corotating interaction regions (CIRs) threading an otherwise spherical wind is able to reproduce the many broad-band continuum light curves from the literature with a reasonable set of parameters. We modified the optically thin model developed by Ignace, St-Louis & Proulx-Giraldeau to approximately account for multiple scattering and used it to fit 13 separate data sets of this star. By including two CIRs in the wind, we obtained reasonable fits for all data sets with coherent values for the inclination of the rotation axis (i0 = 166°) and for its orientation in the plane of the sky, although in the latter case we obtained two equally acceptable values (ψ = 63° and 152°) from the polarimetry. Additional line profile variation simulations using the Sobolev approximation for the line transfer allowed us to eliminate the ψ = 152° solution. With the adopted configuration (i0 = 166° and ψ = 63°), we were able to reproduce all data sets relatively well with two CIRs located near the stellar equator and always separated by ˜90° in longitude. The epoch dependence comes from the fact that these CIRs migrate along the surface of the star. 
Density contrasts smaller than a factor of 2 and large opening angles for the CIR (β ⪆ 35°) were found to best reproduce the type of spectroscopic variability reported in the literature.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19870016170','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19870016170"><span>Heat generation in Aircraft tires under yawed rolling conditions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Dodge, Richard N.; Clark, Samuel K.</p> <p>1987-01-01</p> <p>An analytical model was developed for approximating the internal temperature distribution in an aircraft tire operating under conditions of yawed rolling. The model employs an assembly of elements to represent the tire cross section and treats the heat generated within the tire as a function of the change in strain energy associated with predicted tire flexure. Special contact scrubbing terms are superimposed on the symmetrical free rolling model to account for the slip during yawed rolling. An extensive experimental program was conducted to verify temperatures predicted from the analytical model. Data from this program were compared with calculation over a range of operating conditions, namely, vertical deflection, inflation pressure, yaw angle, and direction of yaw. 
Generally the analytical model predicted overall trends well and correlated reasonably well with individual measurements at locations throughout the cross section.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5521849','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5521849"><span>Inherent limitations of probabilistic models for protein-DNA binding specificity</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Ruan, Shuxiang</p> <p>2017-01-01</p> <p>The specificities of transcription factors are most commonly represented with probabilistic models. These models provide a probability for each base occurring at each position within the binding site and the positions are assumed to contribute independently. The model is simple and intuitive and is the basis for many motif discovery algorithms. However, the model also has inherent limitations that prevent it from accurately representing true binding probabilities, especially for the highest affinity sites under conditions of high protein concentration. The limitations are not due to the assumption of independence between positions but rather are caused by the non-linear relationship between binding affinity and binding probability and the fact that independent normalization at each position skews the site probabilities. Generally probabilistic models are reasonably good approximations, but new high-throughput methods allow for biophysical models with increased accuracy that should be used whenever possible. 
PMID:28686588</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19850022685','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19850022685"><span>Magnetic probing of the solar interior</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Benton, E. R.; Estes, R. H.</p> <p>1985-01-01</p> <p>The magnetic field patterns in the region beneath the solar photosphere are determined. An approximate method for downward extrapolation of line of sight magnetic field measurements taken at the solar photosphere was developed. It utilizes the mean field theory of electromagnetism in a form thought to be appropriate for the solar convection zone. A way to test that theory is proposed. The straightforward application of the lowest order theory with the complete model fit to these data does not indicate the existence of any reasonable depth at which flux conservation is achieved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015APS..APRH12003O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015APS..APRH12003O"><span>Computing the universe: how large-scale simulations illuminate galaxies and dark energy</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>O'Shea, Brian</p> <p>2015-04-01</p> <p>High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these are structures that operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and whose complexity and nonlinearity often defy analytic modeling. 
In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20040085986','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20040085986"><span>Risk-Based Prioritization of Research for Aviation Security Using Logic-Evolved Decision Analysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Eisenhawer, S. W.; Bott, T. F.; Sorokach, M. R.; Jones, F. P.; Foggia, J. R.</p> <p>2004-01-01</p> <p>The National Aeronautics and Space Administration is developing advanced technologies to reduce terrorist risk for the air transportation system. Decision support tools are needed to help allocate assets to the most promising research. An approach to rank ordering technologies (using logic-evolved decision analysis), with risk reduction as the metric, is presented. The development of a spanning set of scenarios using a logic-gate tree is described. Baseline risk for these scenarios is evaluated with an approximate reasoning model. 
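The kind of approximate reasoning model used for the baseline risk evaluation above can be sketched generically (a minimal Mamdani-style fuzzy inference example with invented membership functions and rule weights, not the actual NASA/logic-evolved decision analysis implementation): linguistic inputs such as threat likelihood and consequence severity are combined by min (AND) over rule antecedents and a weighted average over rule outputs.

```python
def low(x):
    """Membership of x (0-10 scale) in the linguistic term 'low'."""
    return max(0.0, min(1.0, (5.0 - x) / 5.0))

def high(x):
    """Membership of x (0-10 scale) in the linguistic term 'high'."""
    return max(0.0, min(1.0, (x - 5.0) / 5.0))

def risk_rating(likelihood, severity):
    """Combine fuzzy rule strengths (min = AND) into a weighted risk score.
    Rule outputs are singleton risk levels on the same 0-10 scale."""
    rules = [
        (min(high(likelihood), high(severity)), 9.0),  # high & high -> very high risk
        (min(high(likelihood), low(severity)), 6.0),   # high & low  -> moderate risk
        (min(low(likelihood), high(severity)), 6.0),   # low & high  -> moderate risk
        (min(low(likelihood), low(severity)), 2.0),    # low & low   -> low risk
    ]
    total = sum(strength for strength, _ in rules)
    if total == 0.0:
        return 0.0  # no rule fires (inputs exactly at the crossover point)
    return sum(strength * out for strength, out in rules) / total
```

A scenario with likelihood 8 and severity 9 rates close to the "very high" singleton, while a 2/2 scenario rates "low"; intermediate inputs blend the applicable rules.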
Illustrative risk and risk reduction results are presented.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_9 --> <div id="page_10" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="181"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhRvB..97a4520G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhRvB..97a4520G"><span>Proximity effect in superconducting-ferromagnetic granular structures</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Greener, Hadar; Shelukhin, Victor; Karpovski, Michael; Goldstein, Moshe; Palevski, Alexander</p> <p>2018-01-01</p> <p>We examined the proximity effect in granular films made of Pb, a superconductor, and Ni, a ferromagnet, with various compositions.
Slow decay of the critical temperature as a function of the relative volume concentration of Ni per sample was demonstrated by our measurements, followed by a saturation of Tc. Using an approximate theoretical description of our granular system in terms of a layered one, we show that our data can only be reasonably fitted by a trilayer model. This indicates the importance of the interplay between different ferromagnetic grains, which should lead to triplet Cooper pairing.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19940023763','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19940023763"><span>Analysis and control of hourglass instabilities in underintegrated linear and nonlinear elasticity</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Jacquotte, Olivier P.; Oden, J. Tinsley</p> <p>1994-01-01</p> <p>Methods are described to identify and correct a bad finite element approximation of the governing operator obtained when under-integration is used in numerical code for several model problems: the Poisson problem, the linear elasticity problem, and for problems in the nonlinear theory of elasticity. For each of these problems, the reason for the occurrence of instabilities is given, a way to control or eliminate them is presented, and theorems of existence, uniqueness, and convergence for the given methods are established. 
Finally, numerical results are included which illustrate the theory.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28258813','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28258813"><span>An Approach for a Mathematical Description of Human Root Canals by Means of Elementary Parameters.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Dannemann, Martin; Kucher, Michael; Kirsch, Jasmin; Binkowski, Alexander; Modler, Niels; Hannig, Christian; Weber, Marie-Theres</p> <p>2017-04-01</p> <p>Root canal geometry is an important factor for instrumentation and preparation of the canals. Curvature, length, shape, and ramifications need to be evaluated in advance to enhance the success of the treatment. Therefore, the present study aimed to design and realize a method for analyzing the geometric characteristics of human root canals. Two extracted human lower molars were radiographed in the occlusal direction using micro-computed tomographic imaging. The 3-dimensional geometry of the root canals, calculated by a self-implemented image evaluation algorithm, was described by 3 different mathematical models: the elliptical model, the 1-circle model, and the 3-circle model. The different applied mathematical models obtained similar geometric properties depending on the parametric model used. Considering more complex root canals, the differences of the results increase because of the different adaptability and the better approximation of the geometry. With the presented approach, it is possible to estimate and compare the geometry of natural root canals. Therefore, the deviation of the canal can be assessed, which is important for the choice of taper of root canal instruments. 
Root canals with a nearly elliptical cross section are reasonably approximated by the elliptical model, whereas the 3-circle model achieves good agreement for curved shapes. Copyright © 2017 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1426749-accurate-efficient-laser-envelope-solver-modeling-laser-plasma-accelerators','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1426749-accurate-efficient-laser-envelope-solver-modeling-laser-plasma-accelerators"><span>An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; ...</p> <p>2017-10-17</p> <p>Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry, which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver.
In this paper we present the accurate and efficient envelope solver used in the code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF&RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1426749-accurate-efficient-laser-envelope-solver-modeling-laser-plasma-accelerators','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1426749-accurate-efficient-laser-envelope-solver-modeling-laser-plasma-accelerators"><span>An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.</p> <p></p> <p>Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry, which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver.
In this paper we present the accurate and efficient envelope solver used in the code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF&RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PPCF...60a4002B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PPCF...60a4002B"><span>An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; Esarey, E.; Leemans, W. P.</p> <p>2018-01-01</p> <p>Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry, which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver.
In this paper we present the accurate and efficient envelope solver used in the code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF&RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5298800','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5298800"><span>Fluid reasoning predicts future mathematics among children and adolescents</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Green, Chloe T.; Bunge, Silvia A.; Chiongbian, Victoria Briones; Barrow, Maia; Ferrer, Emilio</p> <p>2017-01-01</p> <p>The aim of this longitudinal study was to determine whether fluid reasoning (FR) plays a significant role in the acquisition of mathematics skills, above and beyond the effects of other cognitive and numerical abilities. Using a longitudinal cohort sequential design, we examined how FR measured at three assessment occasions, spaced approximately 1.5 years apart, predicted math outcomes for a group of 69 participants between ages 6 and 21 across all three assessment occasions. We used structural equation modeling (SEM) to examine the direct and indirect relations between children's prior cognitive abilities and their future math achievement. A model including age, FR, vocabulary, and spatial skills accounted for 90% of the variance in future math achievement. In this model, FR was the only significant predictor of future math achievement; neither age, vocabulary, nor spatial skills were significant predictors. 
Thus, FR was the only predictor of future math achievement across a wide age range that spanned primary and secondary school. These findings build on Cattell's conceptualization of FR (Cattell, 1987) as a scaffold for learning, showing that this domain-general ability supports the acquisition of rudimentary math skills as well as the ability to solve more complex mathematical problems. PMID:28152390</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28217869','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28217869"><span>Reasons for Not Participating in Scleroderma Patient Support Groups: A Cross-Sectional Study.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Gumuchian, Stephanie T; Delisle, Vanessa C; Peláez, Sandra; Malcarne, Vanessa L; El-Baalbaki, Ghassan; Kwakkenbos, Linda; Jewett, Lisa R; Carrier, Marie-Eve; Pépin, Mia; Thombs, Brett D</p> <p>2018-02-01</p> <p>Peer-led support groups are an important resource for many people with scleroderma (systemic sclerosis; SSc). Little is known, however, about barriers to participation. The objective of this study was to identify reasons why some people with SSc do not participate in SSc support groups. A 21-item survey was used to assess reasons for nonattendance among SSc patients in Canada and the US. Exploratory factor analysis (EFA) was conducted, using the software MPlus 7, to group reasons for nonattendance into themes. A total of 242 people (202 women) with SSc completed the survey. EFA results indicated that a 3-factor model best described the data (χ 2 [150] = 302.7; P < 0.001; Comparative Fit Index = 0.91, Tucker-Lewis Index = 0.88, root mean square error of approximation = 0.07, factor intercorrelations 0.02-0.43). 
The 3 identified themes, reflecting reasons for not attending SSc support groups were personal reasons (9 items; e.g., already having enough support), practical reasons (7 items; e.g., no local support groups available), and beliefs about support groups (5 items; e.g., support groups are too negative). On average, respondents rated 4.9 items as important or very important reasons for nonattendance. The 2 items most commonly rated as important or very important were 1) already having enough support from family, friends, or others, and 2) not knowing of any SSc support groups offered in my area. SSc organizations may be able to address limitations in accessibility and concerns about SSc support groups by implementing online support groups, better informing patients about support group activities, and training support group facilitators. © 2017, American College of Rheumatology.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/21499526-shear-bulk-viscosities-pure-glue-matter','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/21499526-shear-bulk-viscosities-pure-glue-matter"><span></span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Khvorostukhin, A. S.; Joint Institute for Nuclear Research, 141980 Dubna; Institute of Applied Physics, Moldova Academy of Science, MD-2028 Kishineu</p> <p></p> <p>Shear {eta} and bulk {zeta} viscosities are calculated in a quasiparticle model within a relaxation-time approximation for pure gluon matter. Below T{sub c}, the confined sector is described within a quasiparticle glueball model. The constructed equation of state reproduces the first-order phase transition for the glue matter. 
It is shown that with this equation of state, it is possible to describe the temperature dependence of the shear viscosity to entropy ratio {eta}/s and the bulk viscosity to entropy ratio {zeta}/s in reasonable agreement with available lattice data, but absolute values of the {zeta}/s ratio underestimate the upper limits of this ratio in the lattice measurements typically by an order of magnitude.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70018649','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70018649"><span>Observations and analysis of self-similar branching topology in glacier networks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Bahr, D.B.; Peckham, S.D.</p> <p>1996-01-01</p> <p>Glaciers, like rivers, have a branching structure which can be characterized by topological trees or networks. Probability distributions of various topological quantities in the networks are shown to satisfy the criterion for self-similarity, a symmetry structure which might be used to simplify future models of glacier dynamics. Two analytical methods of describing river networks, Shreve's random topology model and deterministic self-similar trees, are applied to the six glaciers of south central Alaska studied in this analysis. Self-similar trees capture the topological behavior observed for all of the glaciers, and most of the networks are also reasonably approximated by Shreve's theory.
Copyright 1996 by the American Geophysical Union.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017SPIE10411E..0XO','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017SPIE10411E..0XO"><span>Oxygenation level and hemoglobin concentration in experimental tumor estimated by diffuse optical spectroscopy</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Orlova, A. G.; Kirillin, M. Yu.; Volovetsky, A. B.; Shilyagina, N. Yu.; Sergeeva, E. A.; Golubiatnikov, G. Yu.; Turchin, I. V.</p> <p>2017-07-01</p> <p>Using diffuse optical spectroscopy the level of oxygenation and hemoglobin concentration in experimental tumor in comparison with normal muscle tissue of mice have been studied. Subcutaneously growing SKBR-3 was used as a tumor model. Continuous wave fiber probe diffuse optical spectroscopy system was employed. Optical properties extraction approach was based on diffusion approximation. Decreased blood oxygen saturation level and increased total hemoglobin content were demonstrated in the neoplasm. The main reason of such differences between tumor and norm was significant elevation of deoxyhemoglobin concentration in SKBR-3. 
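The two quantities reported in the diffuse optical spectroscopy study above follow directly from the recovered chromophore concentrations; as a generic illustration (invented example values, not the paper's measurements): total hemoglobin is the sum of the oxy- and deoxyhemoglobin concentrations, and oxygen saturation is the oxygenated fraction, so an elevated deoxyhemoglobin level raises total hemoglobin while lowering saturation.

```python
def hemoglobin_indices(hbo2, hhb):
    """Total hemoglobin (same units as inputs) and oxygen saturation (%)
    from oxy- (HbO2) and deoxyhemoglobin (HHb) concentrations."""
    thb = hbo2 + hhb
    sto2 = 100.0 * hbo2 / thb
    return thb, sto2

# Invented example values (micromolar), not data from the study:
muscle = hemoglobin_indices(60.0, 40.0)  # normal tissue
tumor = hemoglobin_indices(60.0, 90.0)   # elevated deoxyhemoglobin
```

With the same oxyhemoglobin level, the tumor case shows higher total hemoglobin and lower saturation, matching the qualitative pattern the abstract describes.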
The method can be useful for the diagnosis of tumors as well as for studying the blood flow parameters of tumor models with different angiogenic properties.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/20075726-el-nino-southern-oscillation-second-hadley-centre-coupled-model-its-response-greenhouse-warming','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/20075726-el-nino-southern-oscillation-second-hadley-centre-coupled-model-its-response-greenhouse-warming"><span>The El Nino-Southern Oscillation in the second Hadley Centre coupled model and its response to greenhouse warming</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Collins, M.</p> <p></p> <p>This paper describes El Nino-Southern Oscillation (ENSO) interannual variability simulated in the second Hadley Centre coupled model under control and greenhouse warming scenarios. The model produces a very reasonable simulation of ENSO in the control experiment--reproducing the amplitude, spectral characteristics, and phase locking to the annual cycle that are observed in nature. The mechanism for the model ENSO is shown to be a mixed SST-ocean dynamics mode that can be interpreted in terms of the ocean recharge paradigm of Jin. In experiments with increased levels of greenhouse gases, no statistically significant changes in ENSO are seen until these levels approach four times preindustrial values. In these experiments, the model ENSO has an approximately 20% larger amplitude, a frequency that is approximately double that of the current ENSO (implying more frequent El Ninos and La Ninas), and phase locks to the annual cycle at a different time of year.
It is shown that the increase in the vertical gradient of temperature in the thermocline region, associated with the model's response to increased greenhouse gases, is responsible for the increase in the amplitude of ENSO, while the increase in meridional temperature gradients on either side of the equator, again associated with the model's response to increasing greenhouse gases, is responsible for the increased frequency of ENSO events.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=applied+AND+mathematics+AND+computation&pg=4&id=EJ758560','ERIC'); return false;" href="https://eric.ed.gov/?q=applied+AND+mathematics+AND+computation&pg=4&id=EJ758560"><span>Polynomial Approximation of Functions: Historical Perspective and New Tools</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Kidron, Ivy</p> <p>2003-01-01</p> <p>This paper examines the effect of applying symbolic computation and graphics to enhance students' ability to move from a visual interpretation of mathematical concepts to formal reasoning.
The mathematics topics involved, Approximation and Interpolation, were taught according to their historical development, and the students tried to follow the…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4736732','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4736732"><span>Module Extraction for Efficient Object Queries over Ontologies with Large ABoxes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Xu, Jia; Shironoshita, Patrick; Visser, Ubbo; John, Nigel; Kabuka, Mansur</p> <p>2015-01-01</p> <p>The extraction of logically-independent fragments out of an ontology ABox can be useful for solving the tractability problem of querying ontologies with large ABoxes. In this paper, we propose a formal definition of an ABox module, such that it guarantees complete preservation of facts about a given set of individuals, and thus can be reasoned independently w.r.t. the ontology TBox. With ABox modules of this type, isolated or distributed (parallel) ABox reasoning becomes feasible, and more efficient data retrieval from ontology ABoxes can be attained. To compute such an ABox module, we present a theoretical approach and also an approximation for SHIQ ontologies. Evaluation of the module approximation on different types of ontologies shows that, on average, extracted ABox modules are significantly smaller than the entire ABox, and the time for ontology reasoning based on ABox modules can be improved significantly. 
PMID:26848490</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20000082012','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20000082012"><span>Metrics for Labeled Markov Systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Desharnais, Josee; Jagadeesan, Radha; Gupta, Vineet; Panangaden, Prakash</p> <p>1999-01-01</p> <p>Partial Labeled Markov Chains are simultaneously generalizations of process algebra and of traditional Markov chains. They provide a foundation for interacting discrete probabilistic systems, the interaction being synchronization on labels as in process algebra. Existing notions of process equivalence are too sensitive to the exact probabilities of various transitions. This paper addresses contextual reasoning principles for reasoning about more robust notions of "approximate" equivalence between concurrent interacting probabilistic systems. The present results are as follows: We develop a family of metrics between partial labeled Markov chains to formalize the notion of distance between processes. We show that processes at distance zero are bisimilar. We describe a decision procedure to compute the distance between two processes. We show that reasoning about approximate equivalence can be done compositionally by showing that process combinators do not increase distance.
We introduce an asymptotic metric to capture asymptotic properties of Markov chains, and show that parallel composition does not increase asymptotic distance.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27863879','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27863879"><span>Settling velocity of microplastic particles of regular shapes.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Khatmullina, Liliya; Isachenko, Igor</p> <p>2017-01-30</p> <p>Terminal settling velocity of around 600 microplastic particles, ranging from 0.5 to 5 mm, of three regular shapes was measured in a series of sink experiments: Polycaprolactone (material density 1131 kg m⁻³) spheres and short cylinders with equal dimensions, and long cylinders cut from fishing lines (1130-1168 kg m⁻³) of different diameters (0.15-0.71 mm). Settling velocities ranging from 5 to 127 mm s⁻¹ were compared with several semi-empirical predictions developed for natural sediments, showing reasonable consistency with observations except for the case of long cylinders, for which a new approximation is proposed. The effect of a particle's shape on its settling velocity is highlighted, indicating the need for further experiments with real marine microplastics of different shapes and the necessity of developing a reasonable parameterization of microplastics settling for proper modeling of their transport in the water column. Copyright © 2016 Elsevier Ltd.
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19870038987&hterms=higgs+cosmology&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dhiggs%2Bcosmology','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19870038987&hterms=higgs+cosmology&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dhiggs%2Bcosmology"><span>The behavior of the Higgs field in the new inflationary universe</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Guth, Alan H.; Pi, So-Young</p> <p>1986-01-01</p> <p>Answers are provided to questions about the standard model of the new inflationary universe (NIU) which have raised concerns about the model's validity. A baby toy problem which consists of the study of a single particle moving in one dimension under the influence of a potential with the form of an upside-down harmonic oscillator is studied, showing that the quantum mechanical wave function at large times is accurately described by classical physics. Then, an exactly soluble toy model for the behavior of the Higgs field in the NIU is described which should provide a reasonable approximation to the behavior of the Higgs field in the NIU. 
The dynamics of the toy model is described, and calculative results are reviewed which, the authors claim, provide strong evidence that the basic features of the standard picture are correct.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018ApJ...856..159E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018ApJ...856..159E"><span>A Tractable Estimate for the Dissipation Range Onset Wavenumber Throughout the Heliosphere</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Engelbrecht, N. Eugene; Strauss, R. Du Toit</p> <p>2018-04-01</p> <p>The modulation of low-energy electrons in the heliosphere is extremely sensitive to the behavior of the dissipation range slab turbulence. The present study derives approximate expressions for the wavenumber at which the dissipation range on the slab turbulence power spectrum commences, by assuming that this onset occurs when dispersive waves propagating parallel to the background magnetic field gyroresonate with thermal plasma particles. This assumption yields results in reasonable agreement with existing spacecraft observations. These expressions are functions of the solar wind proton and electron temperatures, which are here modeled throughout the region where the solar wind is supersonic using a two-component turbulence transport model. 
The results so acquired are compared with extrapolations of existing models for the dissipation range onset wavenumber, and conclusions are drawn therefrom.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhyA..485...61S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhyA..485...61S"><span>Low-traffic limit and first-passage times for a simple model of the continuous double auction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Scalas, Enrico; Rapallo, Fabio; Radivojević, Tijana</p> <p>2017-11-01</p> <p>We consider a simplified model of the continuous double auction where prices are integers varying from 1 to N with limit orders and market orders, but quantity per order limited to a single share. For this model, the order process is equivalent to two M / M / 1 queues. We study the behavior of the auction in the low-traffic limit where limit orders are immediately matched by market orders. In this limit, the distribution of prices can be computed exactly and gives a reasonable approximation of the price distribution when the ratio between the rate of order arrivals and the rate of order executions is below 1 / 2. 
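Since the abstract above reduces the order process to two M/M/1 queues, the textbook stationary queue-length distribution applies: P(n) = (1 − ρ)ρⁿ with utilization ρ = λ/μ. A quick sketch of that standard result (generic M/M/1 arithmetic, not the paper's exact price distribution):

```python
def mm1_stationary(lam, mu, n_max=200):
    """Stationary queue-length probabilities P(n) = (1 - rho) * rho**n
    for an M/M/1 queue with arrival rate lam < service rate mu."""
    rho = lam / mu
    return [(1 - rho) * rho ** n for n in range(n_max + 1)]

# Arrival-to-execution rate ratio below the 1/2 regime discussed above:
p = mm1_stationary(lam=0.4, mu=1.0)
mean_queue = sum(n * pn for n, pn in enumerate(p))  # ~ rho / (1 - rho)
```

The geometric decay of P(n) is what keeps the queues short in the low-traffic limit, where limit orders are matched almost immediately by market orders.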
This is further confirmed by the analysis of the first-passage time at 1 or N.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/10706812','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/10706812"><span>Working Together but in Opposition: An Examination of the "Good-Cop/Bad-Cop" Negotiating Team Tactic.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Brodt; Tuchinsky</p> <p>2000-03-01</p> <p>Unlike solo negotiators, members of negotiating teams may for strategic reasons choose to play different roles; the familiar "good cop/bad cop" distributive bargaining tactic is one example of role differentiation designed to enhance a team's success at the bargaining table. In two empirical studies about a hypothetical three-person work group, we examined the cognitive processes underlying this tactic using a social-cognitive decision model (Brodt & Duncan, 1998) that conceptualizes the negotiators' decision tasks and persuasion processes. Results generally supported the model except for an intriguing asymmetry depending on a person's initial inclination (accepting, rejecting). This research extends findings on the tactic and on contrast effects (Cialdini, 1984) and supports the model's usefulness as an approximate representation of negotiator cognition. 
Copyright 2000 Academic Press.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_10 --> <div id="page_11" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="201"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19840010134','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19840010134"><span>Secondary flow spanwise deviation model for the stators of NASA middle compressor stages</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Roberts, W. B.; Sandercock, D. M.</p> <p>1984-01-01</p> <p>A model of the spanwise variation of deviation for stator blades is presented. Deviation is defined as the difference between the passage mean flow angle and the metal angle at the outlet of a blade element of an axial compressor stage. 
The variation of deviation is taken as the difference above or below that predicted by blade element (i.e., two-dimensional) theory at any spanwise location. The variation of deviation is dependent upon the blade camber, solidity, and inlet boundary layer thickness at the hub or tip end-wall, and the blade channel aspect ratio. If these parameters are known or can be calculated, the model provides a reasonable approximation of the spanwise variation of deviation for most compressor middle stage stators operating at subsonic inlet Mach numbers.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008SPIE.6859E..1HG','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008SPIE.6859E..1HG"><span>Data fitting and image fine-tuning approach to solve the inverse problem in fluorescence molecular imaging</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gorpas, Dimitris; Politopoulos, Kostas; Yova, Dido; Andersson-Engels, Stefan</p> <p>2008-02-01</p> <p>One of the most challenging problems in medical imaging is to "see" a tumour embedded into tissue, which is a turbid medium, by using fluorescent probes for tumour labeling. This problem, despite the efforts made in recent years, has not yet been fully solved, due to the non-linear nature of the inverse problem and the convergence failures of many optimization techniques. This paper describes a robust solution of the inverse problem, based on data fitting and image fine-tuning techniques. As a forward solver, the coupled radiative transfer equation and diffusion approximation model is proposed and solved via a finite element method, enhanced with adaptive multi-grids for faster and more accurate convergence. 
A database is constructed by application of the forward model on virtual tumours with known geometry, and thus fluorophore distribution, embedded into simulated tissues. The fitting procedure produces the best match between the real and virtual data, and thus provides the initial estimation of the fluorophore distribution. Using this information, the coupled radiative transfer equation and diffusion approximation model has the required initial values for computationally reasonable and successful convergence during the image fine-tuning application.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19880040181&hterms=berenji&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dberenji','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19880040181&hterms=berenji&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dberenji"><span>Application of plausible reasoning to AI-based control systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Berenji, Hamid; Lum, Henry, Jr.</p> <p>1987-01-01</p> <p>Some current approaches to plausible reasoning in artificial intelligence are reviewed and discussed. Some of the most significant recent advances in plausible and approximate reasoning are examined. A synergism among the techniques of uncertainty management is advocated, and brief discussions on the certainty factor approach, probabilistic approach, Dempster-Shafer theory of evidence, possibility theory, linguistic variables, and fuzzy control are presented. 
Some extensions to these methods are described, and the applications of the methods are considered.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4053422','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4053422"><span>Future Declines of Coronary Heart Disease Mortality in England and Wales Could Counter the Burden of Population Ageing</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Guzman Castillo, Maria; Gillespie, Duncan O. S.; Allen, Kirk; Bandosz, Piotr; Schmid, Volker; Capewell, Simon; O’Flaherty, Martin</p> <p>2014-01-01</p> <p>Background Coronary Heart Disease (CHD) remains a major cause of mortality in the United Kingdom. Yet predictions of future CHD mortality are potentially problematic due to population ageing and an increase in obesity and diabetes. Here we explore future projections of CHD mortality in England & Wales under two contrasting future trend assumptions. Methods In scenario A, we used the conventional counterfactual scenario that the last-observed CHD mortality rates from 2011 would persist unchanged to 2030. The future number of deaths was calculated by applying those rates to the 2012–2030 population estimates. In scenario B, we assumed that the recent falling trend in CHD mortality rates would continue. Using Lee-Carter and Bayesian Age Period Cohort (BAPC) models, we projected the linear trends up to 2030. We validated our methods by using past data to predict mortality from 2002–2011, then computed the error between observed and projected values. Results In scenario A, assuming that 2011 mortality rates stayed constant to 2030, the number of CHD deaths would increase by 62%, or approximately 39,600 additional deaths. 
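The scenario-A calculation is a simple sum of age-specific death rates times projected population counts; a sketch with invented illustrative numbers (not the paper's data):

```python
# Scenario A sketch: hold 2011 age-specific CHD mortality rates fixed and
# apply them to projected 2030 population counts. All rates and populations
# below are hypothetical placeholders, not the study's data.
rates_2011 = {"55-64": 0.0009, "65-74": 0.0030, "75+": 0.0120}  # deaths per person-year
pop_2011   = {"55-64": 6.0e6, "65-74": 4.5e6, "75+": 4.0e6}
pop_2030   = {"55-64": 6.5e6, "65-74": 5.5e6, "75+": 6.0e6}

deaths_2011 = sum(rates_2011[a] * pop_2011[a] for a in rates_2011)
deaths_2030 = sum(rates_2011[a] * pop_2030[a] for a in rates_2011)
increase = deaths_2030 / deaths_2011 - 1.0  # growth driven purely by ageing
```

With constant rates, any growth in projected deaths comes entirely from the shifting age structure, which is exactly the counterfactual the paper contrasts with its trend-following scenario B.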
In scenario B, assuming recent declines continued, the BAPC model (the model with the lowest error) suggests the number of deaths will decrease by 56%, representing approximately 36,200 fewer deaths by 2030. Conclusions The decline in CHD mortality has been reasonably continuous since 1979, and there is little reason to believe it will soon halt. The commonly used assumption that mortality will remain constant from 2011 therefore appears slightly dubious. By contrast, using the BAPC model and assuming continuing mortality falls offers a more plausible prediction of future trends. Thus, despite population ageing, the number of CHD deaths might halve again between 2011 and 2030. This has implications for how the potential benefits of future cardiovascular strategies might best be calculated and presented. PMID:24918442</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018ApJ...853...66N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018ApJ...853...66N"><span>Dressing the Coronal Magnetic Extrapolations of Active Regions with a Parameterized Thermal Structure</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nita, Gelu M.; Viall, Nicholeen M.; Klimchuk, James A.; Loukitcheva, Maria A.; Gary, Dale E.; Kuznetsov, Alexey A.; Fleishman, Gregory D.</p> <p>2018-01-01</p> <p>The study of time-dependent solar active region (AR) morphology and its relation to eruptive events requires analysis of imaging data obtained in multiple wavelength domains with differing spatial and time resolution, ideally in combination with 3D physical models. 
To facilitate this goal, we have undertaken a major enhancement of our IDL-based simulation tool, GX_Simulator, previously developed for modeling microwave and X-ray emission from flaring loops, to allow it to simulate quiescent emission from solar ARs. The framework includes new tools for building the atmospheric model and enhanced routines for calculating emission that include new wavelengths. In this paper, we use our upgraded tool to model and analyze an AR and compare the synthetic emission maps with observations. We conclude that the modeled magneto-thermal structure is a reasonably good approximation of the real one.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JNuM..488..191V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JNuM..488..191V"><span>Modelling of pore coarsening in the high burn-up structure of UO2 fuel</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Veshchunov, M. S.; Tarasov, V. I.</p> <p>2017-05-01</p> <p>The model for coalescence of randomly distributed immobile pores owing to their growth and impingement, applied by the authors earlier to consideration of the porosity evolution in the high burn-up structure (HBS) at the UO2 fuel pellet periphery (rim zone), was further developed and validated. Predictions of the original model, taking into consideration only binary impingements of growing immobile pores, qualitatively correctly describe the decrease of the pore number density with the increase of the fractional porosity, but notably underestimate the coalescence rate at high burn-ups attained in the outermost region of the rim zone. In order to overcome this discrepancy, the next approximation of the model, taking into consideration triple impingements of growing pores, was developed. 
The advanced model provides reasonable agreement with experimental data, thus demonstrating the validity of the proposed pore coarsening mechanism in the HBS.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018HydJ...26..923H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018HydJ...26..923H"><span>Comparative study of surrogate models for groundwater contamination source identification at DNAPL-contaminated sites</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hou, Zeyu; Lu, Wenxi</p> <p>2018-05-01</p> <p>Knowledge of groundwater contamination sources is critical for effectively protecting groundwater resources, estimating risks, mitigating disaster, and designing remediation strategies. Many methods for groundwater contamination source identification (GCSI) have been developed in recent years, including the simulation-optimization technique. This study proposes utilizing a support vector regression (SVR) model and a kernel extreme learning machine (KELM) model to enrich the content of the surrogate model. The surrogate model was itself key in replacing the simulation model, reducing the huge computational burden of iterations in the simulation-optimization technique to solve GCSI problems, especially in GCSI problems of aquifers contaminated by dense nonaqueous phase liquids (DNAPLs). A comparative study between the Kriging, SVR, and KELM models is reported. Additionally, there is analysis of the influence of parameter optimization and the structure of the training sample dataset on the approximation accuracy of the surrogate model. It was found that the KELM model was the most accurate surrogate model, and its performance was significantly improved after parameter optimization. 
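The surrogate idea itself can be sketched in a few lines: train a cheap interpolator on a handful of "expensive" model runs, then query the interpolator instead of the model. Kernel ridge regression is used below purely as a stand-in for the paper's SVR/KELM surrogates, with a hypothetical one-dimensional simulation model:

```python
import math

def rbf(x, y, gamma=20.0):
    """Gaussian (RBF) kernel."""
    return math.exp(-gamma * (x - y) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_surrogate(xs, ys, lam=1e-6):
    """Kernel ridge surrogate: alpha = (K + lam*I)^-1 y."""
    K = [[rbf(a, b) + (lam if i == j else 0.0) for j, b in enumerate(xs)]
         for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    return lambda x: sum(a * rbf(x, xi) for a, xi in zip(alpha, xs))

expensive_model = lambda x: math.sin(3 * x)   # placeholder "simulation model"
xs = [i / 10 for i in range(11)]              # a few training runs
surrogate = fit_surrogate(xs, [expensive_model(x) for x in xs])
err = max(abs(surrogate(x) - expensive_model(x))
          for x in [0.05 + i / 10 for i in range(10)])
```

In a simulation-optimization loop, the optimizer would then call `surrogate` thousands of times while the expensive model is run only for the training set, which is the computational saving the abstract describes.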
The approximation accuracy of the surrogate model to the simulation model did not always improve with increasing numbers of training samples. Using the appropriate number of training samples was critical for improving the performance of the surrogate model and avoiding unnecessary computational workload. It was concluded that the KELM model developed in this work could reasonably predict system responses under the given operating conditions. Replacing the simulation model with a KELM model considerably reduced the computational burden of the simulation-optimization process and also maintained high computational accuracy.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AIPC.1798b0035C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AIPC.1798b0035C"><span>Numerical methods on European option second order asymptotic expansions for multiscale stochastic volatility</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Canhanga, Betuel; Ni, Ying; Rančić, Milica; Malyarenko, Anatoliy; Silvestrov, Sergei</p> <p>2017-01-01</p> <p>After Black-Scholes proposed a model for pricing European options in 1973, Cox, Ross and Rubinstein (1979) and Heston (1993) showed that the constant volatility assumption made by Black-Scholes was one of the main reasons for the model's inability to capture some market details. Instead of constant volatilities, they introduced stochastic volatilities to the asset dynamic modeling. In 2009, Christoffersen empirically showed "why multifactor stochastic volatility models work so well". Four years later, Chiarella and Ziveyi solved the model proposed by Christoffersen. They considered an underlying asset whose price is governed by two-factor stochastic volatilities of mean-reversion type. 
Applying Fourier transforms, Laplace transforms, and the method of characteristics, they presented a semi-analytical formula to compute an approximate price for American options. The heavy computation involved in the Chiarella and Ziveyi approach motivated the authors of this paper in 2014 to investigate another methodology for computing European option prices on a Christoffersen-type model. Using the first- and second-order asymptotic expansion method, we presented a closed-form solution for European options, and provided experimental and numerical studies investigating the accuracy of the approximation formulae given by the first-order asymptotic expansion. In the present paper we will perform experimental and numerical studies for the second-order asymptotic expansion and compare the obtained results with results presented by Chiarella and Ziveyi.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3087400','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3087400"><span>Modeling subharmonic response from contrast microbubbles as a function of ambient static pressure</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Katiyar, Amit; Sarkar, Kausik; Forsberg, Flemming</p> <p>2011-01-01</p> <p>Variation of subharmonic response from contrast microbubbles with ambient pressure is numerically investigated for non-invasive monitoring of organ-level blood pressure. Previously, several contrast microbubbles both in vitro and in vivo registered approximately linear (5–15 dB) subharmonic response reduction with 188 mm Hg change in ambient pressure. In contrast, simulated subharmonic response from a single microbubble is seen here to either increase or decrease with ambient pressure. 
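One driver of this pressure dependence is that a bubble's natural frequency rises with ambient pressure, shifting the excitation-to-natural-frequency ratio. A rough sketch using the standard Minnaert estimate for a free (unencapsulated) gas bubble, an illustration only and not the paper's BUBBLESIM model, with hypothetical parameter values:

```python
import math

def minnaert_f0(radius_m, p0_pa, gamma=1.4, rho=1000.0):
    """Natural frequency (Hz) of a free gas bubble in water (Minnaert estimate):
    f0 = sqrt(3*gamma*p0/rho) / (2*pi*R0)."""
    return math.sqrt(3 * gamma * p0_pa / rho) / (2 * math.pi * radius_m)

R = 2e-6                   # 2-micron bubble radius (contrast-agent scale, assumed)
p_atm = 101325.0           # ambient pressure, Pa
dp = 188 * 133.322         # a 188 mmHg overpressure, converted to Pa
f_lo = minnaert_f0(R, p_atm)
f_hi = minnaert_f0(R, p_atm + dp)
ratio_shift = f_hi / f_lo  # equals sqrt((p_atm + dp) / p_atm)
```

For these assumed values the natural frequency comes out near 1.6 MHz and rises by roughly 12% under the overpressure, so a fixed excitation frequency moves relative to resonance, consistent with the frequency-ratio mechanism the abstract identifies.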
This is shown using the code BUBBLESIM for encapsulated microbubbles, and then the underlying dynamics is investigated using a free bubble model. The ratio of the excitation frequency to the natural frequency of the bubble is the determining parameter: increasing ambient pressure increases the natural frequency, thereby changing this ratio. For frequency ratio below a lower critical value, increasing ambient pressure monotonically decreases subharmonic response. Above an upper critical value of the same ratio, increasing ambient pressure increases subharmonic response; in between, the subharmonic variation is non-monotonic. The precise values of frequency ratio for these three different trends depend on bubble radius and excitation amplitude. The modeled increase or decrease of subharmonic with ambient pressure, when one happens, is approximately linear only for a certain range of excitation levels. Possible reasons for discrepancies between model and previous experiments are discussed. PMID:21476688</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017OptEn..56a1026S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017OptEn..56a1026S"><span>Modeling of ablation threshold dependence on pulse duration for dielectrics with ultrashort pulsed laser</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sun, Mingying; Zhu, Jianqiang; Lin, Zunqi</p> <p>2017-01-01</p> <p>We present a numerical model of plasma formation in ultrafast laser ablation on dielectric surfaces. Ablation threshold dependence on pulse duration is predicted with the model, and the numerical results for water agree well with the experimental data for pulse durations from 140 fs to 10 ps. 
Influences of parameters and approximations of photo- and avalanche-ionization on the ablation threshold prediction are analyzed in detail for various pulse lengths. The calculated ablation threshold is strongly dependent on electron collision time for all the pulse durations. The complete photoionization model is preferred for pulses shorter than 1 ps rather than the multiphoton ionization approximations. The transition time of inverse bremsstrahlung absorption needs to be considered when pulses are shorter than 5 ps, and it also keeps the avalanche ionization (AI) coefficient consistent with that in multiple rate equations (MREs) for pulses shorter than 300 fs. The threshold electron density for AI is only crucial for longer pulses. It is reasonable to ignore the recombination loss for pulses shorter than 100 fs. In addition to thermal transport and hydrodynamics, neglecting the threshold density for AI and recombination could also contribute to the disagreements between the numerical and the experimental results for longer pulses.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28979912','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28979912"><span>Models of clinical reasoning with a focus on general practice: A critical review.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yazdani, Shahram; Hosseinzadeh, Mohammad; Hosseini, Fakhrolsadat</p> <p>2017-10-01</p> <p>Diagnosis lies at the heart of general practice. Every day general practitioners (GPs) visit patients with a wide variety of complaints and concerns, with often minor but sometimes serious symptoms. General practice has many features which differentiate it from the specialty care setting, but during the last four decades little attention was paid to clinical reasoning in general practice. 
Therefore, we aimed to critically review models of clinical reasoning, focusing on clinical reasoning in general practice or by general practitioners, to find out to what extent the existing models explain clinical reasoning in primary care, and to identify the models' gaps for use in primary care settings. A systematic search for models of clinical reasoning was performed. For greater precision, we excluded studies that focused on neurobiological aspects of reasoning, reasoning in disciplines other than medicine, or decision making and decision analysis for treatment or management plans. All articles and documents were first scanned to see whether they included relevant content or any models. The selected studies, which described a model of clinical reasoning in general practitioners or with a focus on general practice, were then reviewed, and other authors' appraisals or critiques of these models were included. The reviewed documents on each model were synthesized. Six models of clinical reasoning were identified: the hypothetico-deductive model, pattern recognition, a dual-process diagnostic reasoning model, a pathway for clinical reasoning, an integrative model of clinical reasoning, and a model of diagnostic reasoning strategies in primary care. Only one model specifically focused on general practitioners' reasoning. 
A model of clinical reasoning that includes the specific features of general practice is needed to better help general practitioners with the difficulties of clinical reasoning in this setting.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/20643911-role-electron-heat-flux-guide-field-magnetic-reconnection','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/20643911-role-electron-heat-flux-guide-field-magnetic-reconnection"><span>The role of electron heat flux in guide-field magnetic reconnection</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Hesse, Michael; Kuznetsova, Masha; Birn, Joachim</p> <p>2004-12-01</p> <p>A combination of analytical theory and particle-in-cell simulations is employed in order to investigate the electron dynamics near and at the site of guide-field magnetic reconnection. A detailed analysis of the contributions to the reconnection electric field shows that both bulk inertia and pressure-based quasiviscous processes are important for the electrons. Analytic scaling demonstrates that conventional approximations for the electron pressure tensor behavior in the dissipation region fail, and that heat flux contributions need to be accounted for. Based on the evolution equation of the heat flux three-tensor, which is derived in this paper, an approximate form of the relevant heat flux contributions to the pressure tensor is developed, which reproduces the numerical modeling result reasonably well. Based on this approximation, it is possible to develop a scaling of the electron current layer in the central dissipation region. 
It is shown that the pressure tensor contributions become important at the scale length defined by the electron Larmor radius in the guide magnetic field.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22415600-semiclassical-wigner-theory-photodissociation-three-dimensions-shedding-light-its-basis','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22415600-semiclassical-wigner-theory-photodissociation-three-dimensions-shedding-light-its-basis"><span>Semiclassical Wigner theory of photodissociation in three dimensions: Shedding light on its basis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Arbelo-González, W.; CNRS, Institut des Sciences Moléculaires, UMR 5255, 33405 Talence; Université Bordeaux, Institut des Sciences Moléculaires, UMR 5255, 33405 Talence</p> <p>2015-04-07</p> <p>The semiclassical Wigner theory (SCWT) of photodissociation dynamics, initially proposed by Brown and Heller [J. Chem. Phys. 75, 186 (1981)] in order to describe state distributions in the products of direct collinear photodissociations, was recently extended to realistic three-dimensional triatomic processes of the same type [Arbelo-González et al., Phys. Chem. Chem. Phys. 15, 9994 (2013)]. The resulting approach, which takes into account rotational motions in addition to vibrational and translational ones, was applied to a triatomic-like model of methyl iodide photodissociation and its predictions were found to be in nearly quantitative agreement with rigorous quantum results, but at a much lower computational cost, thereby making SCWT a potential tool for the study of polyatomic reaction dynamics. Here, we analyse the main reasons for this agreement by means of an elementary model of fragmentation explicitly dealing with the rotational motion only. 
We show that our formulation of SCWT makes it a semiclassical approximation to an approximate planar quantum treatment of the dynamics, both of sufficient quality for the whole treatment to be satisfying.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2003JAP....93.7083L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2003JAP....93.7083L"><span>Heisenberg model of a {Cr8}-cubane magnetic molecule</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Luban, Marshall; Kögerler, Paul; Miller, Lance L.; Winpenny, Richard E. P.</p> <p>2003-05-01</p> <p>A Heisenberg model of eight CrIII paramagnetic centers (spins s=3/2) at the vertices of a cube with four distinct exchange interactions is found to provide a reasonably accurate description of the magnetic susceptibility of the cubane-type magnetic molecule [Cr8O4(O2CC6H5)16]={Cr8} from 2-290 K for an external field of 0.5 T. We find that two exchange bonds are antiferromagnetic (13, 24 K) and two are ferromagnetic (5, 13.5 K), with an accuracy of approximately 1 K. The determination of the four exchange constants is greatly facilitated using the exact high-temperature expansion of the weak-field susceptibility, effectively reducing the number of unknown parameters to two. We have calculated the thermodynamic properties of the system and these can be compared with the results of future experiments. At temperatures below 0.5 K sharp increases are expected in the magnetization versus external magnetic field at approximately 6 and 12 T and higher fields due to level crossings. 
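Written out, the model described above is a spin-3/2 Heisenberg Hamiltonian on the cube with four distinct exchange constants; a sketch of its generic form (the sign convention, with positive J antiferromagnetic, is an assumption here):

```latex
H \;=\; \sum_{\langle i,j\rangle} J_{ij}\, \vec{S}_i \cdot \vec{S}_j ,
\qquad s_i = \tfrac{3}{2}, \qquad
J_{ij} \in \{J_1, J_2, J_3, J_4\}
```

where the sum runs over the exchange-coupled bonds of the cube, and the fitted values reported in the abstract correspond to two antiferromagnetic and two ferromagnetic bonds.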
Inelastic neutron scattering could check our predictions for the low-lying magnetic energy levels.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18252580','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18252580"><span>Design of fuzzy systems using neurofuzzy networks.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Figueiredo, M; Gomide, F</p> <p>1999-01-01</p> <p>This paper introduces a systematic approach for fuzzy system design based on a class of neural fuzzy networks built upon a general neuron model. The network structure is such that it encodes the knowledge learned in the form of if-then fuzzy rules and processes data following fuzzy reasoning principles. The technique provides a mechanism to obtain rules covering the whole input/output space as well as the membership functions (including their shapes) for each input variable. Such characteristics are of utmost importance in fuzzy systems design and application. In addition, after learning, it is very simple to extract fuzzy rules in the linguistic form. The network has universal approximation capability, a property very useful in, e.g., modeling and control applications. Here we focus on function approximation problems as a vehicle to illustrate its usefulness and to evaluate its performance. Comparisons with alternative approaches are also included. Both nonnoisy and noisy data have been studied and considered in the computational experiments. 
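The if-then machinery such networks encode can be illustrated with a minimal fuzzy inference step. The two rules and triangular membership functions below are hypothetical, a sketch of fuzzy reasoning in general rather than of the paper's network:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer(x):
    """Two-rule sketch with weighted-average defuzzification."""
    # Rule 1: IF x is LOW  THEN y = 1.0
    # Rule 2: IF x is HIGH THEN y = 3.0
    w_low = tri(x, -1.0, 0.0, 1.0)   # membership of x in LOW
    w_high = tri(x, 0.0, 1.0, 2.0)   # membership of x in HIGH
    if w_low + w_high == 0.0:
        return None                  # no rule fires
    return (w_low * 1.0 + w_high * 3.0) / (w_low + w_high)

# Midway between LOW and HIGH, the output blends the two rule consequents:
y = infer(0.5)   # w_low = w_high = 0.5, so y = 2.0
```

A neurofuzzy network of the kind described learns the membership shapes and rule consequents from data instead of fixing them by hand, which is what gives it its approximation capability.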
The neural fuzzy network developed here and, consequently, the underlying approach, has been shown to provide good results from the accuracy, complexity, and system design points of view.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/9107373','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/9107373"><span>Lower molar and incisor displacement associated with mandibular remodeling.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Baumrind, S; Bravo, L A; Ben-Bassat, Y; Curry, S; Korn, E L</p> <p>1997-01-01</p> <p>The purpose of this study was to quantify the amount of alveolar modeling at the apices of the mandibular incisor and first molar specifically associated with appositional and resorptive changes on the lower border of the mandible during growth and treatment. Cephalometric data from superimpositions on anterior cranial base, mandibular implants of the Björk type, and anatomical "best fit" of mandibular border structures were integrated using a recently developed strategy, which is described. Data were available at annual intervals between 8.5 and 15.5 years for a previously described sample of approximately 30 children with implants. The average magnitudes of the changes at the root apices of the mandibular first molar and central incisor associated with modeling/remodeling of the mandibular border and symphysis were unexpectedly small. At the molar apex, mean values approximated zero in both anteroposterior and vertical directions. At the incisor apex, mean values approximated zero in the anteroposterior direction and averaged less than 0.15 mm/year in the vertical direction. Standard deviations were roughly equal for the molar and the incisor in both the anteroposterior and vertical directions. 
Dental displacement associated with surface modeling plays a smaller role in final tooth position in the mandible than in the maxilla. It may also be reasonably inferred that anatomical best-fit superimpositions made in the absence of implants give a more complete picture of hard tissue turnover in the mandible than they do in the maxilla.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016EGUGA..1814856P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016EGUGA..1814856P"><span>Utilization of advanced calibration techniques in stochastic rock fall analysis of quarry slopes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Preh, Alexander; Ahmadabadi, Morteza; Kolenprat, Bernd</p> <p>2016-04-01</p> <p>In order to study rock fall dynamics, a research project was conducted by the Vienna University of Technology and the Austrian Central Labour Inspectorate (Federal Ministry of Labour, Social Affairs and Consumer Protection). A part of this project comprised 277 full-scale drop tests at three different quarries in Austria, with key parameters of the rock fall trajectories recorded. The tests involved a total of 277 boulders ranging from 0.18 to 1.8 m in diameter and from 0.009 to 8.1 Mg in mass. The geology of these sites included strong rock belonging to igneous, metamorphic and volcanic types. In this paper the results of the tests are used for calibration and validation of a new stochastic computer model. It is demonstrated that the error of the model (i.e. the difference between observed and simulated results) has a lognormal distribution. With two selected parameters, advanced calibration techniques, including Markov chain Monte Carlo, maximum likelihood, and root mean square error (RMSE) minimization, are applied to minimize the error. 
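The calibration loop described for the rock fall model, a lognormal error measure minimized over a small set of candidate parameters, can be sketched roughly as follows. The `simulate` callable, the parameter `grid`, and the data are hypothetical stand-ins for illustration, not the study's actual model:

```python
import numpy as np

def rmse_log(observed, simulated):
    """RMSE in log space, consistent with a lognormal error model:
    the residuals log(simulated/observed) are treated as normal."""
    r = np.log(np.asarray(simulated, dtype=float) / np.asarray(observed, dtype=float))
    return float(np.sqrt(np.mean(r ** 2)))

def calibrate(observed, simulate, grid):
    """Brute-force calibration: return the parameter set from `grid`
    that minimizes the log-space RMSE against the observations."""
    best, best_err = None, np.inf
    for params in grid:
        err = rmse_log(observed, simulate(params))
        if err < best_err:
            best, best_err = params, err
    return best, best_err
```

In practice the grid search would be replaced by the MCMC or maximum-likelihood machinery the abstract mentions; the sketch only shows the error measure being minimized.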
Validation of the model based on the cross-validation technique reveals that in general, reasonable stochastic approximations of the rock fall trajectories are obtained in all dimensions, including runout, bounce heights and velocities. The approximations are compared to the measured data in terms of median, 95% and maximum values. The results of the comparisons indicate that approximate first-order predictions, using a single set of input parameters, are possible and can be used to aid practical hazard and risk assessment.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20120011919&hterms=ionosphere&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dionosphere','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20120011919&hterms=ionosphere&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dionosphere"><span>Variability of Thermosphere and Ionosphere Responses to Solar Flares</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Qian, Liying; Burns, Alan G.; Chamberlin, Philip C.; Solomon, Stanley C.</p> <p>2011-01-01</p> <p>We investigated how the rise rate and decay rate of solar flares affect the thermosphere and ionosphere responses to them. Model simulations and data analysis were conducted for two flares of similar magnitude (X6.2 and X5.4) that had the same location on the solar limb, but the X6.2 flare had longer rise and decay times. Simulated total electron content (TEC) enhancements from the X6.2 and X5.4 flares were 6 total electron content units (TECU) and approximately 2 TECU, and the simulated neutral density enhancements were approximately 15%-20% and approximately 5%, respectively, in reasonable agreement with observations. 
Additional model simulations showed that for idealized flares with the same magnitude and location, the thermosphere and ionosphere responses changed significantly as a function of rise and decay rates. The Neupert Effect, which predicts that a faster flare rise rate leads to a larger EUV enhancement during the impulsive phase, caused a larger maximum ion production enhancement. In addition, model simulations showed that increased E x B plasma transport due to conductivity increases during the flares caused a significant equatorial anomaly feature in the electron density enhancement in the F region but a relatively weaker equatorial anomaly feature in TEC enhancement, owing to dominant contributions by photochemical production and loss processes. The latitude dependence of the thermosphere response correlated well with the solar zenith angle effect, whereas the latitude dependence of the ionosphere response was more complex, owing to plasma transport and the winter anomaly.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018CNSNS..54..267N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018CNSNS..54..267N"><span>Spike solutions in Gierer–Meinhardt model with a time dependent anomaly exponent</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nec, Yana</p> <p>2018-01-01</p> <p>Experimental evidence of complex dispersion regimes in natural systems, where the growth of the mean square displacement in time cannot be characterised by a single power, has been accruing for the past two decades. In such processes the exponent γ(t) in ⟨r²⟩ ∼ t^γ(t) at times might be approximated by a piecewise constant function, or it can be a continuous function. Variable order differential equations are an emerging mathematical tool with a strong potential to model these systems. 
However, variable order differential equations are not tractable by the classic differential equations theory. This contribution illustrates how a classic method can be adapted to gain insight into a system of this type. Herein a variable order Gierer-Meinhardt model is posed, a generic reaction-diffusion system of a chemical origin. With a fixed order this system possesses a solution in the form of a constellation of arbitrarily situated localised pulses, when the components' diffusivity ratio is asymptotically small. The pattern was shown to exist subject to multiple step-like transitions between normal diffusion and sub-diffusion, as well as between distinct sub-diffusive regimes. The analytical approximation obtained permits qualitative analysis of the impact thereof. Numerical solution for typical cross-over scenarios revealed such features as earlier equilibration and non-monotonic excursions before attainment of equilibrium. The method is general and allows for an approximate numerical solution with any reasonably behaved γ(t).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19870046015&hterms=solar+water+heating&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dsolar%2Bwater%2Bheating','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19870046015&hterms=solar+water+heating&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dsolar%2Bwater%2Bheating"><span>Atmospheric solar heating rate in the water vapor bands</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Chou, Ming-Dah</p> <p>1986-01-01</p> <p>The total absorption of solar radiation by water vapor in clear atmospheres is parameterized as a simple function of the scaled water vapor amount. 
For applications to cloudy and hazy atmospheres, the flux-weighted k-distribution functions are computed for individual absorption bands and for the total near-infrared region. The parameterization is based upon monochromatic calculations and follows essentially the scaling approximation of Chou and Arking, but the effect of temperature variation with height is taken into account in order to enhance the accuracy. Furthermore, the spectral range is extended to cover the two weak bands centered at 0.72 and 0.82 micron. Comparisons with monochromatic calculations show that the atmospheric heating rate and the surface radiation can be accurately computed from the parameterization. Comparisons are also made with other parameterizations. It is found that the absorption of solar radiation can be computed reasonably well using the Goody band model and the Curtis-Godson approximation.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_11 --> <div id="page_12" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="221"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19960022295','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19960022295"><span>On why dynamic subgrid-scale models work</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Jimenez, J.</p> <p>1995-01-01</p> <p>Dynamic subgrid models have proved to be remarkably successful in predicting the behavior of turbulent flows. Part of the reason for their success is well understood. Since they are constructed to generate an effective viscosity which is proportional to some measure of the turbulent energy at the high-wavenumber end of the spectrum, their eddy viscosity vanishes as the flow becomes laminar. This alone would justify their use over simpler models. But beyond this obvious advantage, which is confined to inhomogeneous and evolving flows, the reason why they also work better in simpler homogeneous cases, and how they do it without any obvious adjustable parameter, is not clear. This lack of understanding of the internal mechanisms of a useful tool is disturbing, not only as an intellectual challenge, but because it raises the doubt of whether it will work in all cases. This note is an attempt to clarify those mechanisms. We will see why dynamic models are robust and how they can get away with even comparatively gross errors in their formulations. This will suggest that they are only particular cases of a larger family of robust models, all of which would be relatively insensitive to large simplifications in the physics of the flow. We will also construct some such models, although mostly as research tools. 
It will turn out, however, that the standard dynamic formulation is not only robust to errors, but also behaves as if it were substantially well formulated. The details of why this is so will still not be clear at the end of this note, especially since it will be shown that the 'a priori' testing of the stresses gives, as is usual in most subgrid models, very poor results. But it will be argued that the basic reason is that the dynamic formulation mimics the condition that the total dissipation is approximately equal to the production measured at the test filter level.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20180000644&hterms=Lte&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3DLte','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20180000644&hterms=Lte&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3DLte"><span>Hydrogen Balmer Line Broadening in Solar and Stellar Flares</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Kowalski, Adam F.; Allred, Joel C.; Uitenbroek, Han; Tremblay, Pier-Emmanuel; Brown, Stephen; Carlsson, Mats; Osten, Rachel A.; Wisniewski, John P.; Hawley, Suzanne L.</p> <p>2017-01-01</p> <p>The broadening of the hydrogen lines during flares is thought to result from increased charge (electron, proton) density in the flare chromosphere. However, disagreements between theory and modeling prescriptions have precluded an accurate diagnostic of the degree of ionization and compression resulting from flare heating in the chromosphere. To resolve this issue, we have incorporated the unified theory of electric pressure broadening of the hydrogen lines into the non-LTE radiative-transfer code RH. 
This broadening prescription produces a much more realistic spectrum of the quiescent, A0 star Vega compared to the analytic approximations used as a damping parameter in the Voigt profiles. We test recent radiative-hydrodynamic (RHD) simulations of the atmospheric response to high nonthermal electron beam fluxes with the new broadening prescription and find that the Balmer lines are overbroadened at the densest times in the simulations. Adding many simultaneously heated and cooling model loops as a 'multithread' model improves the agreement with the observations. We revisit the three component phenomenological flare model of the YZ CMi Megaflare using recent and new RHD models. The evolution of the broadening, line flux ratios, and continuum flux ratios is well reproduced by a multithread model with high-flux nonthermal electron beam heating, an extended decay phase model, and a 'hot spot' atmosphere heated by an ultra-relativistic electron beam with reasonable filling factors: approximately 0.1%, 1%, and 0.1% of the visible stellar hemisphere, respectively. The new modeling motivates future work to understand the origin of the extended gradual phase emission.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19890008191','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19890008191"><span>BRYNTRN: A baryon transport model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Wilson, John W.; Townsend, Lawrence W.; Nealy, John E.; Chun, Sang Y.; Hong, B. S.; Buck, Warren W.; Lamkin, S. L.; Ganapol, Barry D.; Khan, Ferdous; Cucinotta, Francis A.</p> <p>1989-01-01</p> <p>The development of an interaction data base and a numerical solution to the transport of baryons through an arbitrary shield material based on a straight ahead approximation of the Boltzmann equation are described. 
The code is most accurate for continuous energy boundary values, but gives reasonable results for discrete spectra at the boundary using even a relatively coarse energy grid (30 points) and large spatial increments (1 cm in H2O). The resulting computer code is self-contained, efficient and ready to use. The code requires only a very small fraction of the computer resources required for Monte Carlo codes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1990PhRvC..42..778L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1990PhRvC..42..778L"><span>Continuum analyzing power for 4He(p-->,p') at 100 MeV</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lawrie, J. J.; Whittal, D. M.; Cowley, A. A.</p> <p>1990-08-01</p> <p>Distorted-wave impulse approximation calculations of the continuum analyzing power for the inclusive reaction 4He(p-->,p') at an incident energy of 100 MeV are presented. In addition to the quasifree knockout of nucleons, contributions from the knockout of deuteron, triton, and helion clusters are taken into account, together with a breakup component. Whereas nucleon knockout by itself does not account for the experimentally observed analyzing power, the inclusion of clusters has a large effect. Thus a simple knockout model is able to provide a reasonable description of the experimental continuum analyzing power.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1367088','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1367088"><span>VISAR Analysis in the Frequency Domain</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Dolan, D. 
H.; Specht, P.</p> <p>2017-05-18</p> <p>VISAR measurements are typically analyzed in the time domain, where velocity is approximately proportional to fringe shift. Moving to the frequency domain clarifies the limitations of this approximation and suggests several improvements. For example, optical dispersion preserves high-frequency information, so a zero-dispersion (air delay) interferometer does not provide optimal time resolution. Combined VISAR measurements can also improve time resolution. With adequate bandwidth and reasonable noise levels, it is quite possible to achieve better resolution than the VISAR approximation allows.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5611427','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5611427"><span>Models of clinical reasoning with a focus on general practice: A critical review</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>YAZDANI, SHAHRAM; HOSSEINZADEH, MOHAMMAD; HOSSEINI, FAKHROLSADAT</p> <p>2017-01-01</p> <p>Introduction: Diagnosis lies at the heart of general practice. Every day general practitioners (GPs) visit patients with a wide variety of complaints and concerns, with often minor but sometimes serious symptoms. General practice has many features which differentiate it from the specialty care setting, but during the last four decades little attention has been paid to clinical reasoning in general practice. 
Therefore, we aimed to critically review the clinical reasoning models with a focus on the clinical reasoning in general practice or the clinical reasoning of general practitioners, to find out to what extent the existing models explain clinical reasoning, especially in primary care, and also to identify the gaps in these models for use in primary care settings. Methods: A systematic search to find models of clinical reasoning was performed. For more precision, we excluded studies that focused on neurobiological aspects of reasoning, reasoning in disciplines other than medicine, and decision making or decision analysis on treatment or management plans. All the articles and documents were first scanned to see whether they included important relevant content or any models. The selected studies which described a model of clinical reasoning in general practitioners or with a focus on general practice were then reviewed, and appraisals or critiques of other authors on these models were included. The reviewed documents on the models were synthesized. Results: Six models of clinical reasoning were identified, including the hypothetico-deductive model, pattern recognition, a dual-process diagnostic reasoning model, a pathway for clinical reasoning, an integrative model of clinical reasoning, and a model of diagnostic reasoning strategies in primary care. Only one model had specifically focused on general practitioners' reasoning. Conclusion: A model of clinical reasoning that includes specific features of general practice is needed to better help general practitioners with the difficulties of clinical reasoning in this setting. 
PMID:28979912</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2000PhDT.......149H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2000PhDT.......149H"><span>Recycle polymer characterization and adhesion modeling</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Holbery, James David</p> <p></p> <p>Contaminants from paper product producers that adversely affect fiber yield have been collected from mills located in three North American geographic regions. Samples have been fractionated using a modified solvent extraction process and subsequently quantitatively characterized; it was found that agglomerates were composed of the following: approximately 30% extractable polymeric material, 25--35% fiber, 12--15% inorganic material, 15% non-extractable high molecular-weight polyethylene or cross-linked polymers, and 2--4% starch residue. Three representative polymers, paraffin, low-molecular weight polyethylene, and a commercial hot-melt adhesive were selected for further analysis to model the attractive and repulsive behavior using Scanning Probe Microscopy in an aqueous cell. Scanning force probes were characterized using an original technique utilizing a nano-indentation apparatus that is non-destructive and is accurate to within 10% for probes with force constants as low as 1 N/m. Surface force measurements were performed between a Poly (Styrene/30% Butyl Methacrylate) sphere and substrates produced from paraffin, polyethylene, and a commercial hot-melt adhesive in solutions ranging in NaF ionic concentrations from 0.001M to 1M. 
Reasonable agreement between theory and experiment has been shown for a combined model that applies van der Waals force contributions via the Derjaguin approximation and electrostatic contributions predicted by a Debye-Huckel linearization of the Poisson-Boltzmann equation, using Hamaker constants derived from critical surface energies determined from the Zisman and Lifshitz-van der Waals energy approaches. This model has been applied to measured data and indicates the strength of adhesion for the hot-melt to be 0.14 nN, while that of paraffin is 1.9 nN and polyethylene 2.8 nN; paraffin and polyethylene are thus 13.5 and 20 times greater in attraction than the hot-melt adhesive. Hot-melt adhesive repulsion is predicted to be 220 pN, while for paraffin it is 9.1 nN and polyethylene 12.2 nN, a factor of 41 and 55 greater for paraffin and polyethylene, respectively. The repulsion decay length is fitted at 2.3 nm for the hot-melt, approximately one-third that of paraffin and polyethylene. Johnson-Kendall-Roberts contact mechanics theory for viscoelastic materials has been applied with reasonable accuracy, particularly in experiments performed in solutions, to model the approach snap-in magnitude and detachment forces between sphere and substrate. 
Two representative commercial agglomeration formulations have been analyzed to determine the impact on adhesion and detachment forces, although at room temperature no measurable effect was identified.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhRvD..97k6006R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhRvD..97k6006R"><span>Quasielastic charged-current neutrino scattering in the scaling model with relativistic effective mass</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ruiz Simo, I.; Martinez-Consentino, V. L.; Amaro, J. E.; Ruiz Arriola, E.</p> <p>2018-06-01</p> <p>We use a recent scaling analysis of the quasielastic electron scattering data from 12C to predict the quasielastic charge-changing neutrino scattering cross sections within an uncertainty band. We use a scaling function extracted from a selection of the (e ,e') cross section data, and an effective nucleon mass inspired by the relativistic mean-field model of nuclear matter. The corresponding superscaling analysis with relativistic effective mass (SuSAM*) describes a large amount of the electron data lying inside a phenomenological quasielastic band. The effective mass incorporates the enhancement of the transverse current produced by the relativistic mean field. The scaling function incorporates nuclear effects beyond the impulse approximation, in particular meson-exchange currents and short-range correlations producing tails in the scaling function. 
Besides its simplicity, this model describes the neutrino data about as well as other, more sophisticated nuclear models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/893554','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/893554"><span>Evaluation of risk from acts of terrorism :the adversary/defender model using belief and fuzzy sets.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Darby, John L.</p> <p></p> <p>Risk from an act of terrorism is a combination of the likelihood of an attack, the likelihood of success of the attack, and the consequences of the attack. The considerable epistemic uncertainty in each of these three factors can be addressed using the belief/plausibility measure of uncertainty from the Dempster/Shafer theory of evidence. The adversary determines the likelihood of the attack. The success of the attack and the consequences of the attack are determined by the security system and mitigation measures put in place by the defender. This report documents a process for evaluating risk of terrorist acts using an adversary/defender model with belief/plausibility as the measure of uncertainty. 
Also, the adversary model is a linguistic model that applies belief/plausibility to fuzzy sets used in an approximate reasoning rule base.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20090023547','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20090023547"><span>Molecular Modeling for Calculation of Mechanical Properties of Epoxies with Moisture Ingress</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Clancy, Thomas C.; Frankland, Sarah J.; Hinkley, J. A.; Gates, T. S.</p> <p>2009-01-01</p> <p>Atomistic models of epoxy structures were built in order to assess the effect of crosslink degree, moisture content and temperature on the calculated properties of a typical representative generic epoxy. Each atomistic model had approximately 7000 atoms and was contained within a periodic boundary condition cell with edge lengths of about 4 nm. Four atomistic models were built with a range of crosslink degree and moisture content. Each of these structures was simulated at three temperatures: 300 K, 350 K, and 400 K. Elastic constants were calculated for these structures by monitoring the stress tensor as a function of applied strain deformations to the periodic boundary conditions. The mechanical properties showed reasonably consistent behavior with respect to these parameters. The moduli decreased with decreasing crosslink degree and with increasing temperature. 
The moduli generally decreased with increasing moisture content, although this effect was not as consistent as that seen for temperature and crosslink degree.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19960007620','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19960007620"><span>A k-Omega Turbulence Model for Quasi-Three-Dimensional Turbomachinery Flows</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Chima, Rodrick V.</p> <p>1995-01-01</p> <p>A two-equation k-omega turbulence model has been developed and applied to a quasi-three-dimensional viscous analysis code for blade-to-blade flows in turbomachinery. The code includes the effects of rotation, radius change, and variable stream sheet thickness. The flow equations are given, and the explicit Runge-Kutta solution scheme is described. The k-omega model equations are also given, and the upwind implicit approximate-factorization solution scheme is described. Three cases were calculated: transitional flow over a flat plate, a transonic compressor rotor, and a transonic turbine vane with heat transfer. Results were compared to theory, experimental data, and to results using the Baldwin-Lomax turbulence model. The two models compared reasonably well with the data and surprisingly well with each other. 
Although the k-omega model behaves well numerically and simulates effects of transition, freestream turbulence, and wall roughness, it was not decisively better than the Baldwin-Lomax model for the cases considered here.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20030032419&hterms=corona&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D90%26Ntt%3Dcorona','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20030032419&hterms=corona&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D90%26Ntt%3Dcorona"><span>Three-Dimensional MHD Modeling of The Solar Corona and Solar Wind: Comparison with The Wang-Sheeley Model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Usmanov, A. V.; Goldstein, M. L.</p> <p>2003-01-01</p> <p>We present simulation results from a tilted-dipole steady-state MHD model of the solar corona and solar wind and compare the output from our model with the Wang-Sheeley model, which relates the divergence rate of magnetic flux tubes near the Sun (inferred from solar magnetograms) to the solar wind speed observed near Earth and at Ulysses. The boundary conditions in our model are specified at the coronal base, and our simulation region extends out to 10 AU. We assumed that a flux of Alfven waves with an amplitude of 35 km/s emanates from the Sun and provides additional heating and acceleration for the coronal outflow in the open field regions. The waves are treated in the WKB approximation. The incorporation of wave acceleration allows us to reproduce the fast wind measurements obtained by Ulysses, while preserving reasonable agreement with plasma densities typically found at the coronal base. 
We find that our simulation results agree well with Wang and Sheeley's empirical model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19930013481','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19930013481"><span>Improving the chi-squared approximation for bivariate normal tolerance regions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Feiveson, Alan H.</p> <p>1993-01-01</p> <p>Let X be a two-dimensional random variable distributed according to N₂(μ, Σ) and let X̄ and S be the respective sample mean and covariance matrix calculated from N observations of X. Given a containment probability β and a level of confidence γ, we seek a number c, depending only on N, β, and γ, such that the ellipsoid R = {x : (x − X̄)′ S⁻¹ (x − X̄) ≤ c} is a tolerance region of content β and level γ; i.e., R has probability γ of containing at least 100β percent of the distribution of X. Various approximations for c exist in the literature, but one of the simplest to compute, a multiple of the ratio of certain chi-squared percentage points, is badly biased for small N. For the bivariate normal case, most of the bias can be removed by simple adjustment using a factor A which depends on β and γ. This paper provides values of A for various β and γ so that the simple approximation for c can be made viable for any reasonable sample size. 
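The kind of Monte Carlo check implied here, estimating for a candidate constant c the confidence level γ that the sample ellipsoid covers at least a fraction β of the underlying bivariate normal, can be sketched as follows. This is a minimal illustration assuming a standard normal N₂(0, I); the function and parameter names are invented:

```python
import numpy as np

def coverage_level(c, N, beta=0.90, trials=2000, test_pts=4000, seed=0):
    """Estimate gamma: the probability that the sample ellipsoid
    (x - xbar)' S^{-1} (x - xbar) <= c contains at least a fraction
    beta of a standard bivariate normal distribution."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        sample = rng.standard_normal((N, 2))       # N observations of X
        xbar = sample.mean(axis=0)
        S = np.cov(sample, rowvar=False)
        Sinv = np.linalg.inv(S)
        # estimate the ellipsoid's content by a second Monte Carlo pass
        pts = rng.standard_normal((test_pts, 2))
        d = pts - xbar
        q = np.einsum('ij,jk,ik->i', d, Sinv, d)   # Mahalanobis-type form
        content = np.mean(q <= c)
        hits += content >= beta
    return hits / trials
```

A table of the bias-removing factor A could then be fitted by regressing the exact c (found by root-solving on this estimate) against the simple chi-squared-ratio approximation, in the spirit of the methodology described.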
The methodology provides an illustrative example of how a combination of Monte Carlo simulation and simple regression modelling can be used to improve an existing approximation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017IJSS...48..150S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017IJSS...48..150S"><span>Power law-based local search in spider monkey optimisation for lower order system modelling</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sharma, Ajay; Sharma, Harish; Bhargava, Annapurna; Sharma, Nirmala</p> <p>2017-01-01</p> <p>Nature-inspired algorithms (NIAs) have proved efficient at solving many complex real-world optimisation problems. The efficiency of NIAs is measured by their ability to find adequate results within a reasonable amount of time, rather than an ability to guarantee the optimal solution. This paper presents a solution for lower order system modelling using the spider monkey optimisation (SMO) algorithm, obtaining a lower order approximation that retains almost all of the original higher order system's characteristics. Further, a local search strategy, namely power law-based local search, is incorporated with SMO. The proposed strategy is named power law-based local search in SMO (PLSMO). The efficiency, accuracy and reliability of the proposed algorithm are tested over 20 well-known benchmark functions. 
Then, the PLSMO algorithm is applied to solve the lower order system modelling problem.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19730023914&hterms=IOTA&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3DIOTA','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19730023914&hterms=IOTA&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3DIOTA"><span>Cosmic Ray Hysteresis as Evidence for Time-dependent Diffusive Processes in the Long Term Solar Modulation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Ogallagher, J. J.</p> <p>1973-01-01</p> <p>A simple one-dimensional time-dependent diffusion-convection model for the modulation of cosmic rays is presented. This model predicts that the observed intensity at a given time is approximately equal to the intensity given by the time-independent diffusion-convection solution under the interplanetary conditions which existed a time tau in the past, U(t_0) = U_s(t_0 - tau), where tau is the average time spent by a particle inside the modulating cavity. Delay times in excess of several hundred days are possible with reasonable modulation parameters. Interpretation of phase lags observed during the 1969 to 1970 solar maximum in terms of this model suggests that the modulating region is probably not less than 10 a.u. and may be as much as 35 a.u. 
in extent.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19830034339&hterms=role+stress&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3Drole%2Bstress','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19830034339&hterms=role+stress&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3Drole%2Bstress"><span>The role of lithospheric stress in the support of the Tharsis rise</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Willemann, R. J.; Turcotte, D. L.</p> <p>1982-01-01</p> <p>It is hypothesized that the Tharsis rise can be approximated as an axisymmetrical igneous construct. Linear theory for the deflection of planetary lithospheres is used to demonstrate that the lithospheric stresses required to partially support the construct are reasonable and consistent with the observed radial grabens around Tharsis. The computed thickness of the elastic lithosphere is between 110 and 260 km, depending on the values assumed for crustal thickness and crustal density. The computed thickness of the Tharsis load ranges from 40 to 70 km. Since in this model the height of the geoid is not specified a priori, the agreement between the observed and computed geoid is evidence for the validity of the model. 
The tectonics of the Tharsis region are briefly reviewed, and it is contended that all observations are consistent with the loading model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006PhyA..370....1G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006PhyA..370....1G"><span>Worrying trends in econophysics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gallegati, Mauro; Keen, Steve; Lux, Thomas; Ormerod, Paul</p> <p>2006-10-01</p> <p>Econophysics has already made a number of important empirical contributions to our understanding of the social and economic world. These fall mainly into the areas of finance and industrial economics, where in each case there is a large amount of reasonably well-defined data. More recently, Econophysics has also begun to tackle other areas of economics where data is much more sparse and much less reliable. In addition, econophysicists have attempted to apply the theoretical approach of statistical physics to try to understand empirical findings. Our concerns are fourfold. First, a lack of awareness of work that has been done within economics itself. Second, resistance to more rigorous and robust statistical methodology. Third, the belief that universal empirical regularities can be found in many areas of economic activity. Fourth, the theoretical models which are being used to explain empirical phenomena. The latter point is of particular concern. Essentially, the models are based upon models of statistical physics in which energy is conserved in exchange processes. There are examples in economics where the principle of conservation may be a reasonable approximation to reality, such as primitive hunter-gatherer societies. But in the industrialised capitalist economies, income is most definitely not conserved. 
It is the process of production, not exchange, that is responsible for this. Models which focus purely on exchange and not on production cannot by definition offer a realistic description of the generation of income in the capitalist, industrialised economies.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28519517','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28519517"><span>SU-E-I-58: Objective Models of Breast Shape Undergoing Mammography and Tomosynthesis Using Principal Component Analysis.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Feng, Ssj; Sechopoulos, I</p> <p>2012-06-01</p> <p>To develop an objective model of the shape of the compressed breast undergoing mammographic or tomosynthesis acquisition. Automated thresholding and edge detection were performed on 984 anonymized digital mammograms (492 craniocaudal (CC) view mammograms and 492 medial lateral oblique (MLO) view mammograms), to extract the edge of each breast. Principal Component Analysis (PCA) was performed on these edge vectors to identify a limited set of parameters and eigenvectors that capture the observed variation in breast shape. These parameters and eigenvectors comprise a model that can be used to describe the breast shapes present in acquired mammograms and to generate realistic models of breasts undergoing acquisition. Sample breast shapes were then generated from this model and evaluated. The mammograms in the database were previously acquired for a separate study and authorized for use in further research. The PCA successfully identified two principal components and their corresponding eigenvectors, forming the basis for the breast shape model. The simulated breast shapes generated from the model are reasonable approximations of clinically acquired mammograms. 
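The PCA step described here can be sketched as follows. The contour data below are synthetic stand-ins (the 492-image edge vectors are not available), and all names and the two invented modes of variation are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, np.pi, 64)
# Synthetic stand-in for the edge vectors: each row is one breast contour
# sampled at 64 points, with two invented modes of shape variation.
edges = (np.sin(t) * (1.0 + 0.10 * rng.standard_normal((492, 1)))
         + 0.05 * np.cos(t) * rng.standard_normal((492, 1)))

mean_edge = edges.mean(axis=0)
centered = edges - mean_edge
U, s, Vt = np.linalg.svd(centered, full_matrices=False)  # PCA via SVD
explained = s**2 / np.sum(s**2)                          # variance fractions

k = 2                                  # the abstract reports two components
scores = centered @ Vt[:k].T           # per-image shape parameters
# Generate a new, plausible contour by choosing scores in the observed range
new_edge = mean_edge + (scores.mean(axis=0) + scores.std(axis=0)) @ Vt[:k]
```

Sampling scores within the observed range, as in the last line, is what allows the model to generate realistic new breast shapes rather than only describe the ones in the database.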
Using PCA, we have obtained models of the compressed breast undergoing mammographic or tomosynthesis acquisition based on objective analysis of a large image database. Up to now, the breast in the CC view has been approximated as a semi-circular tube, while there has been no objectively-obtained model for the MLO view breast shape. Such models can be used for various breast imaging research applications, such as x-ray scatter estimation and correction, dosimetry estimates, and computer-aided detection and diagnosis. © 2012 American Association of Physicists in Medicine.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006PhDT........30O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006PhDT........30O"><span>Modeling and analysis of solar distributed generation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ortiz Rivera, Eduardo Ivan</p> <p></p> <p>Recent changes in the global economy are having a significant impact on our daily life. The price of oil is increasing and reserves are shrinking every day. Also, dramatic demographic changes are impacting the viability of the electric infrastructure and ultimately the economic future of the industry. These are some of the reasons that many countries are looking for alternative sources of electric energy. The most common form of green energy in our daily life is solar energy. Converting solar energy into electrical energy requires solar panels, dc-dc converters, power control, sensors, and inverters. In this work, a photovoltaic module (PVM) model using the electrical characteristics provided by the manufacturer data sheet is presented for power system applications. Experimental results from testing are shown, verifying the proposed PVM model. 
Also in this work, three maximum power point tracker (MPPT) algorithms are presented to obtain the maximum power from a PVM. The first MPPT algorithm is a method based on Rolle's and Lagrange's Theorems and can provide at least an approximate answer to a family of transcendental functions that cannot be solved using differential calculus. The second MPPT algorithm is based on the approximation of the proposed PVM model using fractional polynomials where the shape, boundary conditions and performance of the proposed PVM model are satisfied. The third MPPT algorithm is based on the determination of the optimal duty cycle for a dc-dc converter and the previous knowledge of the load or load matching conditions. Also, four algorithms to calculate the effective irradiance level and temperature over a photovoltaic module are presented in this work. The main reasons to develop these algorithms are for monitoring climate conditions, the elimination of temperature and solar irradiance sensors, reductions in cost for a photovoltaic inverter system, and development of new algorithms to be integrated with maximum power point tracking algorithms. 
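The three MPPT algorithms above are not specified in enough detail in this abstract to reproduce. As a generic illustration of the underlying task only (not the proposed methods), the sketch below sweeps a toy exponential I-V curve, with invented parameter values, and locates the maximum power point numerically:

```python
import numpy as np

def pv_current(v, isc=5.0, voc=21.0, vt=1.2):
    """Toy PV I-V curve I = Isc * (1 - exp((V - Voc)/Vt)). The parameter
    values are invented for illustration and are NOT the paper's PVM model."""
    return isc * (1.0 - np.exp((v - voc) / vt))

def find_mpp(n=2001):
    """Locate the maximum power point by a brute-force voltage sweep."""
    v = np.linspace(0.0, 21.0, n)
    p = v * pv_current(v)          # electrical power at each operating voltage
    i = int(np.argmax(p))
    return v[i], p[i]

v_mpp, p_mpp = find_mpp()
```

A practical tracker would instead adjust the converter duty cycle online (for example, by perturb-and-observe), but the quantity being sought is the same maximum of the P-V curve.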
Finally, several PV power applications are presented, such as circuit analysis for a load connected to two different PV arrays, speed control for a dc motor connected to a PVM, and a novel single phase photovoltaic inverter system using the Z-source converter.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/17120637','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/17120637"><span>One-dimensional GIS-based model compared with a two-dimensional model in urban floods simulation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lhomme, J; Bouvier, C; Mignot, E; Paquier, A</p> <p>2006-01-01</p> <p>A GIS-based one-dimensional flood simulation model is presented and applied to the centre of the city of Nîmes (Gard, France), for mapping flow depths or velocities in the street network. The geometry of the one-dimensional elements is derived from the Digital Elevation Model (DEM). The flow is routed from one element to the next using the kinematic wave approximation. At the crossroads, the flows in the downstream branches are computed using a conceptual scheme. This scheme was previously designed to fit Y-shaped pipe junctions, and has been modified here to fit X-shaped crossroads. The results were compared with the results of a two-dimensional hydrodynamic model based on the full shallow water equations. The comparison shows that good agreement can be found in the steepest streets of the study zone, but differences may be important in the other streets. 
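Kinematic wave routing of the kind used on each street element can be illustrated with a minimal explicit upwind scheme for dA/dt + dQ/dx = 0 with a power-law rating Q = alpha * A^m (Manning-type). All parameter values are illustrative and not taken from the Nîmes study:

```python
import numpy as np

def kinematic_wave(n_x=50, n_t=1500, dx=10.0, dt=1.0,
                   alpha=1.5, m=5.0 / 3.0, inflow=2.0):
    """Route a constant upstream inflow along one street element with the
    kinematic wave approximation dA/dt + dQ/dx = 0 and a power-law rating
    Q = alpha * A**m. Explicit upwind scheme; dx, dt, alpha and the inflow
    are illustrative values only."""
    A = np.full(n_x, 1e-6)                          # near-dry initial state
    for _ in range(n_t):
        Q = alpha * A**m
        Q_up = np.concatenate(([inflow], Q[:-1]))   # boundary inflow + upwind flux
        A = np.maximum(A + dt / dx * (Q_up - Q), 0.0)
    return alpha * A**m                             # final discharge profile

Q_final = kinematic_wave()
```

At steady state the discharge equals the inflow everywhere; stability of the explicit scheme requires the CFL number alpha * m * A**(m-1) * dt / dx to stay below 1, which these values satisfy.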
Some reasons that can explain the differences between the two models are given and some research possibilities are proposed.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_12 --> <div id="page_13" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="241"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4877274','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4877274"><span>Molecular dynamics simulations of biological membranes and membrane proteins using enhanced conformational sampling algorithms☆</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Mori, Takaharu; Miyashita, Naoyuki; Im, Wonpil; Feig, Michael; Sugita, Yuji</p> 
<p>2016-01-01</p> <p>This paper reviews various enhanced conformational sampling methods and explicit/implicit solvent/membrane models, as well as their recent applications to the exploration of the structure and dynamics of membranes and membrane proteins. Molecular dynamics simulations have become an essential tool to investigate biological problems, and their success relies on proper molecular models together with efficient conformational sampling methods. The implicit representation of solvent/membrane environments is a reasonable approximation to explicit all-atom models, considering the balance between computational cost and simulation accuracy. Implicit models can be easily combined with replica-exchange molecular dynamics methods to explore a wider conformational space of a protein. Other molecular models and enhanced conformational sampling methods are also briefly discussed. As application examples, we introduce recent simulation studies of glycophorin A, phospholamban, amyloid precursor protein, and mixed lipid bilayers and discuss the accuracy and efficiency of each simulation model and method. This article is part of a Special Issue entitled: Membrane Proteins. Guest Editors: J.C. Gumbart and Sergei Noskov. PMID:26766517</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25360109','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25360109"><span>Bayesian networks in neuroscience: a survey.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bielza, Concha; Larrañaga, Pedro</p> <p>2014-01-01</p> <p>Bayesian networks are a type of probabilistic graphical model that lies at the intersection between statistics and machine learning. They have been shown to be powerful tools to encode dependence relationships among the variables of a domain under uncertainty. 
Thanks to their generality, Bayesian networks can accommodate continuous and discrete variables, as well as temporal processes. In this paper we review Bayesian networks and how they can be learned automatically from data by means of structure learning algorithms. Also, we examine how a user can take advantage of these networks for reasoning by exact or approximate inference algorithms that propagate the given evidence through the graphical structure. Despite their applicability in many fields, they have been little used in neuroscience, where applications have focused on specific problems, such as functional connectivity analysis from neuroimaging data. Here we survey key research in neuroscience where Bayesian networks have been used with different aims: to discover associations between variables, to perform probabilistic reasoning over the model, and to classify new observations with and without supervision. The networks are learned from data of any kind (morphological, electrophysiological, -omics and neuroimaging), thereby broadening the scope (molecular, cellular, structural, functional, cognitive and medical) of the brain aspects to be studied.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4199264','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4199264"><span>Bayesian networks in neuroscience: a survey</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Bielza, Concha; Larrañaga, Pedro</p> <p>2014-01-01</p> <p>Bayesian networks are a type of probabilistic graphical model that lies at the intersection between statistics and machine learning. They have been shown to be powerful tools to encode dependence relationships among the variables of a domain under uncertainty. 
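Evidence propagation of the kind described in this survey can be shown on a toy example: exact inference by enumeration over a three-node chain network. The variables and all conditional probability tables below are invented for illustration:

```python
from itertools import product

# A toy three-node chain, Flu -> Fever -> HotReading, with invented CPTs.
p_flu = {True: 0.1, False: 0.9}
p_fever = {True: {True: 0.8, False: 0.2},    # P(fever | flu)
           False: {True: 0.1, False: 0.9}}
p_hot = {True: {True: 0.9, False: 0.1},      # P(hot reading | fever)
         False: {True: 0.05, False: 0.95}}

def joint(flu, fever, hot):
    """Factorised joint distribution implied by the chain structure."""
    return p_flu[flu] * p_fever[flu][fever] * p_hot[fever][hot]

def posterior_flu_given_hot():
    """P(Flu = true | hot reading) by exact enumeration of the joint."""
    num = sum(joint(True, fever, True) for fever in (True, False))
    den = sum(joint(flu, fever, True)
              for flu, fever in product((True, False), repeat=2))
    return num / den
```

Enumeration is exponential in the number of variables; the structure-exploiting exact algorithms mentioned in the survey (for example, variable elimination) do the same computation while avoiding the full joint.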
Thanks to their generality, Bayesian networks can accommodate continuous and discrete variables, as well as temporal processes. In this paper we review Bayesian networks and how they can be learned automatically from data by means of structure learning algorithms. Also, we examine how a user can take advantage of these networks for reasoning by exact or approximate inference algorithms that propagate the given evidence through the graphical structure. Despite their applicability in many fields, they have been little used in neuroscience, where applications have focused on specific problems, such as functional connectivity analysis from neuroimaging data. Here we survey key research in neuroscience where Bayesian networks have been used with different aims: to discover associations between variables, to perform probabilistic reasoning over the model, and to classify new observations with and without supervision. The networks are learned from data of any kind (morphological, electrophysiological, -omics and neuroimaging), thereby broadening the scope (molecular, cellular, structural, functional, cognitive and medical) of the brain aspects to be studied. PMID:25360109</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012PhRvE..85a1151G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012PhRvE..85a1151G"><span>Mean-field approximation for spacing distribution functions in classical systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.</p> <p>2012-01-01</p> <p>We propose a mean-field method to calculate approximately the spacing distribution functions p(n)(s) in one-dimensional classical many-particle systems. 
We compare our method with two other commonly used methods, the independent interval approximation and the extended Wigner surmise. In our mean-field approach, p(n)(s) is calculated from a set of Langevin equations, which are decoupled by using a mean-field approximation. We find that in spite of its simplicity, the mean-field approximation provides good results in several systems. We offer many examples illustrating that the three previously mentioned methods give a reasonable description of the statistical behavior of the system. The physical interpretation of each method is also discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018Chaos..28c3101O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018Chaos..28c3101O"><span>Complex contagions with timers</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Oh, Se-Wook; Porter, Mason A.</p> <p>2018-03-01</p> <p>There has been a great deal of effort to try to model social influence—including the spread of behavior, norms, and ideas—on networks. Most models of social influence tend to assume that individuals react to changes in the states of their neighbors without any time delay, but this is often not true in social contexts, where (for various reasons) different agents can have different response times. To examine such situations, we introduce the idea of a timer into threshold models of social influence. The presence of timers on nodes delays adoptions—i.e., changes of state—by the agents, which in turn delays the adoptions of their neighbors. With a homogeneously-distributed timer, in which all nodes have the same amount of delay, the adoption order of nodes remains the same. 
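A threshold contagion with per-node timers, as described in the following entry on complex contagions, can be sketched in a few lines. The network, thresholds, timers, and synchronous update rule below are all invented for illustration:

```python
def simulate(adj, thresholds, timers, seeds, t_max=50):
    """Watts-style threshold contagion with per-node timers: a node whose
    adopted-neighbour fraction reaches its threshold adopts only after its
    timer elapses. Returns the adoption time of each node (None = never)."""
    adopted = {v: v in seeds for v in adj}
    times = {v: (0 if v in seeds else None) for v in adj}
    due = {}                                  # node -> step at which adoption fires
    for t in range(1, t_max + 1):
        snapshot = dict(adopted)              # synchronous update
        for v in adj:
            if snapshot[v] or v in due:
                continue
            frac = sum(snapshot[u] for u in adj[v]) / len(adj[v])
            if frac >= thresholds[v]:
                due[v] = t + timers[v]        # the timer delays the state change
        for v in [w for w, d in due.items() if d <= t]:
            adopted[v], times[v] = True, t
            del due[v]
    return times
```

On a 4-node path with threshold 0.5 and node 0 seeded, zero timers give adoption times 0, 1, 2, 3; giving node 1 a timer of 2 delays its adoption and every downstream adoption by two steps, illustrating how heterogeneous timers reshape adoption paths.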
However, heterogeneously-distributed timers can change the adoption order of nodes and hence the "adoption paths" through which state changes spread in a network. Using a threshold model of social contagions, we illustrate that heterogeneous timers can either accelerate or decelerate the spread of adoptions compared to an analogous situation with homogeneous timers, and we investigate the relationship of such acceleration or deceleration with respect to the timer distribution and network structure. We derive an analytical approximation for the temporal evolution of the fraction of adopters by modifying a pair approximation for the Watts threshold model, and we find good agreement with numerical simulations. We also examine our new timer model on networks constructed from empirical data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JCAP...02..042K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JCAP...02..042K"><span>Accelerating Approximate Bayesian Computation with Quantile Regression: application to cosmological redshift distributions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kacprzak, T.; Herbel, J.; Amara, A.; Réfrégier, A.</p> <p>2018-02-01</p> <p>Approximate Bayesian Computation (ABC) is a method to obtain a posterior distribution without a likelihood function, using simulations and a set of distance metrics. For that reason, it has recently been gaining popularity as an analysis tool in cosmology and astrophysics. Its drawback, however, is a slow convergence rate. We propose a novel method, which we call qABC, to accelerate ABC with Quantile Regression. In this method, we create a model of quantiles of distance measure as a function of input parameters. 
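The quantile-regression layer of qABC is beyond a short example, but the basic rejection-ABC loop it accelerates can be sketched on a toy Gaussian-mean problem. The prior, distance, and tolerance below are illustrative choices, not those of the paper:

```python
import numpy as np

def rejection_abc(data, n_draws=20000, eps=0.1, rng=None):
    """Basic rejection ABC for the mean of a N(mu, 1) model under a flat
    prior on [-5, 5]; the distance is |simulated mean - observed mean|.
    (qABC's contribution -- learning which prior regions can be skipped
    before simulating -- is not reproduced here.)"""
    rng = np.random.default_rng(rng)
    obs = data.mean()
    mu = rng.uniform(-5.0, 5.0, n_draws)             # parameter draws from the prior
    sims = rng.standard_normal((n_draws, data.size)) + mu[:, None]
    keep = np.abs(sims.mean(axis=1) - obs) < eps     # accept close simulations
    return mu[keep]                                  # approximate posterior sample

data = np.random.default_rng(1).standard_normal(100) + 2.0
post = rejection_abc(data, rng=0)
```

Most draws are rejected, which is exactly the waste that qABC's quantile model of the distance measure is designed to avoid.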
This model is trained on a small number of simulations and estimates which regions of the prior space are likely to be accepted into the posterior. Other regions are then immediately rejected. This procedure is then repeated as more simulations are available. We apply it to the practical problem of estimating the redshift distribution of cosmological samples, using forward modelling developed in previous work. The qABC method converges to nearly the same posterior as basic ABC. It uses, however, only 20% of the number of simulations compared to basic ABC, achieving a fivefold gain in execution time for our problem. For other problems the acceleration rate may vary; it depends on how close the prior is to the final posterior. We discuss possible improvements and extensions to this method.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/16854132','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/16854132"><span>A molecular dynamics study of the atomic structure of (CaO)x(SiO2)1-x glasses.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Mead, Robert N; Mountjoy, Gavin</p> <p>2006-07-27</p> <p>The local atomic environment of Ca in (CaO)x(SiO2)1-x glasses is of interest because of the role of Ca in soda-lime glass, the application of calcium silicate glasses as biomaterials, and the previous experimental measurement of the Ca-Ca correlation in CaSiO(3) glass. Molecular dynamics has been used to obtain models of (CaO)x(SiO2)1-x glasses with x = 0, 0.1, 0.2, 0.3, 0.4, and 0.5, and with approximately 1000 atoms and size approximately 25 A. As expected, the models contain a tetrahedral silica network, the connectivity of which decreases as x increases. In the glass-forming region, i.e., x = 0.4 and 0.5, Ca has a mixture of 6- and 7-fold coordination. 
Bridging oxygen makes an important contribution to the coordination of Ca, with most bridging oxygens coordinated to 2 Si plus 1 Ca. The x = 0.5 model is in reasonable agreement with previous experimental studies, and does not substantiate the previous theory of cation ordering, which predicted Ca arranged in sheets. In the phase-separated region, i.e., x = 0.1 and 0.2, there is marked clustering of Ca.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/20082340-effect-char-structure-burnout-during-pulverized-coal-combustion-pressure','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/20082340-effect-char-structure-burnout-during-pulverized-coal-combustion-pressure"><span>The effect of char structure on burnout during pulverized coal combustion at pressure</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Liu, G.; Wu, H.; Benfell, K.E.</p> <p></p> <p>An Australian bituminous coal sample was burnt in a drop tube furnace (DTF) at 1 atm and a pressurized drop tube furnace (PDTF) at 15 atm. The char samples were collected at different burnout levels, and a scanning electron microscope was used to examine the structures of chars. A model was developed to predict the burnout of char particles with different structures. The model accounts for combustion of the thin-walled structure of cenospheric char and its fragmentation during burnout. The effect of pressure on reaction rate was also considered in the model. As a result, approximately 40% and 70% cenospheric char particles were observed in the char samples collected after coal pyrolysis in the DTF and PDTF, respectively. 
A large number of fine particles (< 30 µm) were observed in the 1 atm char samples at burnout levels between 30% and 50%, which suggests that significant fragmentation occurred during early combustion. Ash particle size distributions show that a large number of small ash particles formed during burnout at high pressure. The time needed for 70% char burnout at 15 atm is approximately 1.6 times that at 1 atm under the same temperature and gas environment conditions, which is attributed to the different pressures as well as char structures. The overall reaction rate for cenospheric char was predicted to be approximately 2 times that of the dense chars, which is consistent with previous experimental results. The predicted char burnout, including char structure effects, agrees reasonably well with the experimental measurements that were obtained at 1 atm and 15 atm pressures.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29900572','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29900572"><span>Hazard ratio estimation and inference in clinical trials with many tied event times.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Mehrotra, Devan V; Zhang, Yiwei</p> <p>2018-06-13</p> <p>The medical literature contains numerous examples of randomized clinical trials with time-to-event endpoints in which large numbers of events accrued over relatively short follow-up periods, resulting in many tied event times. A generally common feature across such examples was that the logrank test was used for hypothesis testing and the Cox proportional hazards model was used for hazard ratio estimation. We caution that this common practice is particularly risky in the setting of many tied event times for two reasons. 
First, the estimator of the hazard ratio can be severely biased if the Breslow tie-handling approximation for the Cox model (the default in SAS and Stata software) is used. Second, the 95% confidence interval for the hazard ratio can include one even when the corresponding logrank test p-value is less than 0.05. To help establish a better practice, with applicability for both superiority and noninferiority trials, we use theory and simulations to contrast Wald and score tests based on well-known tie-handling approximations for the Cox model. Our recommendation is to report the Wald test p-value and corresponding confidence interval based on the Efron approximation. The recommended test is essentially as powerful as the logrank test, the accompanying point and interval estimates of the hazard ratio have excellent statistical properties even in settings with many tied event times, inferential alignment between the p-value and confidence interval is guaranteed, and implementation is straightforward using commonly used software. Copyright © 2018 John Wiley & Sons, Ltd.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=depression+AND+problem+AND+student&pg=6&id=EJ963360','ERIC'); return false;" href="https://eric.ed.gov/?q=depression+AND+problem+AND+student&pg=6&id=EJ963360"><span>Depressive Symptoms in a Sample of Social Work Students and Reasons Preventing Students from Using Mental Health Services: An Exploratory Study</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Ting, Laura</p> <p>2011-01-01</p> <p>Limited research exists on social work students' level of depression and help-seeking beliefs. This study empirically examined the rates of depression among 215 BSW students and explored students' reasons for not using mental health services. 
Approximately 50% scored at or above the Center for Epidemiologic Studies Depression Scale cutoff;…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/104761','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/104761"><span>Description of waste pretreatment and interfacing systems dynamic simulation model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Garbrick, D.J.; Zimmerman, B.D.</p> <p>1995-05-01</p> <p>The Waste Pretreatment and Interfacing Systems Dynamic Simulation Model was created to investigate the required pretreatment facility processing rates for both high level and low level waste so that the vitrification of tank waste can be completed according to the milestones defined in the Tri-Party Agreement (TPA). In order to achieve this objective, the processes upstream and downstream of the pretreatment facilities must also be included. The simulation model starts with retrieval of tank waste and ends with vitrification for both low level and high level wastes. This report describes the results of three simulation cases: one based on suggested average facility processing rates, one with facility rates determined so that approximately 6 new DSTs are required, and one with facility rates determined so that approximately no new DSTs are required. It appears, based on the simulation results, that reasonable facility processing rates can be selected so that no new DSTs are required by the TWRS program. However, this conclusion must be viewed with respect to the modeling assumptions, described in detail in the report.
Also included in the report, in an appendix, are results of two sensitivity cases: one with glass plant water recycle streams recycled versus not recycled, and one employing the TPA SST retrieval schedule versus a more uniform SST retrieval schedule. Both recycling and the retrieval schedule appear to have a significant impact on overall tank usage.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24327067','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24327067"><span>Empirical Correction to the Likelihood Ratio Statistic for Structural Equation Modeling with Many Variables.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yuan, Ke-Hai; Tian, Yubin; Yanagihara, Hirokazu</p> <p>2015-06-01</p> <p>Survey data typically contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. The most widely used statistic for evaluating the adequacy of a SEM model is T_ML, a slight modification to the likelihood ratio statistic. Under the normality assumption, T_ML approximately follows a chi-square distribution when the number of observations (N) is large and the number of items or variables (p) is small. However, in practice, p can be rather large while N is always limited due to not having enough participants. Even with a relatively large N, empirical results show that T_ML rejects the correct model too often when p is not too small. Various corrections to T_ML have been proposed, but they are mostly heuristic. Following the principle of the Bartlett correction, this paper proposes an empirical approach to correct T_ML so that the mean of the resulting statistic approximately equals the degrees of freedom of the nominal chi-square distribution.
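The mean-matching idea just described can be sketched in a few lines: rescale a sample of simulated statistics so that its mean equals the nominal degrees of freedom (the function name and numbers below are illustrative, not from the paper):

```python
def bartlett_type_correction(stats, df):
    """Empirical mean-matching in the spirit of the Bartlett
    correction: scale the statistics so their sample mean equals
    the nominal chi-square degrees of freedom."""
    c = df / (sum(stats) / len(stats))  # empirical correction factor
    return [c * t for t in stats]

corrected = bartlett_type_correction([12.0, 8.0, 10.0], df=5)
# the corrected sample now has mean exactly 5
```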
Results show that empirically corrected statistics follow the nominal chi-square distribution much more closely than previously proposed corrections to T_ML, and they control type I errors reasonably well whenever N ≥ max(50, 2p). The formulations of the empirically corrected statistics are further used to predict type I errors of T_ML as reported in the literature, and they perform well.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/2331975','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/2331975"><span>Encouraging the practice of testicular self-examination: a field application of the theory of reasoned action.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Brubaker, R G; Wickersham, D</p> <p>1990-01-01</p> <p>Identified factors associated with testicular self-examination (TSE) within the context of the theory of reasoned action. Subjects (232 male college students) received instruction in TSE and completed a questionnaire operationalizing the components of the theoretical model. During the following 6 weeks, a field intervention was conducted in which approximately half the subjects were exposed to posters reminding them to perform the exam. Multiple-regression analyses revealed that intention to perform TSE correlated significantly with attitude and subjective norm and that consideration of self-efficacy and TSE knowledge improved the prediction of intention. Significant differences in outcome expectancies and normative beliefs were found between subjects who intended to perform the exam and those who did not.
Intention was moderately (r = .30, p < .001) correlated with behavior; the intention-behavior correlation, however, was stronger among subjects who intended to perform the exam and were exposed to the posters (r = .55, p < .001).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018CNSNS..57...47D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018CNSNS..57...47D"><span>Estimating the boundaries of a limit cycle in a 2D dynamical system using renormalization group</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dutta, Ayan; Das, Debapriya; Banerjee, Dhruba; Bhattacharjee, Jayanta K.</p> <p>2018-04-01</p> <p>While the plausibility of formation of a limit cycle has been a well-studied topic in the context of the Poincare-Bendixson theorem, studies on estimates of the possible size and shape of the limit cycle seem to be scanty in the literature. In this paper we present a pedagogical study of some aspects of the size of this limit cycle using perturbative renormalization group by doing detailed and explicit calculations up to second order for the Selkov model for glycolytic oscillations. This famous model is well known to lead to a limit cycle for certain ranges of values of the parameters involved in the problem.
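The Selkov model referred to above is commonly written as ẋ = −x + ay + x²y, ẏ = b − ay − x²y. A short fixed-step forward-Euler integration shows a bounded oscillation; the parameter values a = 0.08, b = 0.6 are a standard limit-cycle choice from textbook treatments, not taken from this abstract:

```python
import math

def selkov_step(x, y, a, b, dt):
    """One forward-Euler step of the Selkov glycolysis model."""
    dx = -x + a * y + x * x * y
    dy = b - a * y - x * x * y
    return x + dt * dx, y + dt * dy

def integrate(a=0.08, b=0.6, x=1.0, y=1.0, dt=0.01, steps=20000):
    """Integrate and return the trajectory; for these parameters the
    orbit stays bounded and settles onto a limit cycle."""
    traj = []
    for _ in range(steps):
        x, y = selkov_step(x, y, a, b, dt)
        traj.append((x, y))
    return traj

trajectory = integrate()
```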
Within the tenets of the approximations made, reasonable agreement with the numerical plots can be achieved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018EPJWC.17702005Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018EPJWC.17702005Z"><span>Topology-based description of the NCA cathode configurational space and an approach of its effective reduction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zolotarev, Pavel; Eremin, Roman</p> <p>2018-04-01</p> <p>Modification of existing solid electrolyte and cathode materials is a topic of interest for theoreticians and experimentalists. In particular, it requires elucidation of the influence of dopants on the characteristics of the studied materials. Because of the high complexity of the configurational space of doped/deintercalated systems, application of computer modeling approaches is hindered, despite significant advances in computational facilities in recent decades. In this study, we propose a scheme which allows one to reduce the set of structures of a modeled configurational space for subsequent study by means of time-consuming quantum chemistry methods.
Application of the proposed approach is exemplified through the study of the configurational space of the commercial LiNi0.8Co0.15Al0.05O2 (NCA) cathode material approximant.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20010068934','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20010068934"><span>Cloud Modeling</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Tao, Wei-Kuo; Moncrieff, Mitchell; Einaud, Franco (Technical Monitor)</p> <p>2001-01-01</p> <p>Numerical cloud models have been developed and applied extensively to study cloud-scale and mesoscale processes during the past four decades. The distinctive aspect of these cloud models is their ability to treat explicitly (or resolve) cloud-scale dynamics. This requires the cloud models to be formulated from the non-hydrostatic equations of motion that explicitly include the vertical acceleration terms, since the vertical and horizontal scales of convection are similar. Such models are also necessary in order to allow gravity waves, such as those triggered by clouds, to be resolved explicitly. In contrast, the hydrostatic approximation, usually applied in global or regional models, does not allow the presence of gravity waves. In addition, the availability of exponentially increasing computer capabilities has resulted in time integrations increasing from hours to days, domain grid boxes (points) increasing from less than 2000 to more than 2,500,000 grid points with 500 to 1000 m resolution, and 3-D models becoming increasingly prevalent.
The cloud resolving model is now at a stage where it can provide reasonably accurate statistical information on the sub-grid, cloud-resolving processes that are poorly parameterized in climate models and numerical prediction models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMNG21A0132A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMNG21A0132A"><span>When is the Anelastic Approximation a Valid Model for Compressible Convection?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Alboussiere, T.; Curbelo, J.; Labrosse, S.; Ricard, Y. R.; Dubuffet, F.</p> <p>2017-12-01</p> <p>Compressible convection is ubiquitous in large natural systems such as planetary atmospheres and stellar and planetary interiors. Its modelling is notoriously more difficult than the case when the Boussinesq approximation applies. One reason for that difficulty has been put forward by Ogura and Phillips (1961): the compressible equations generate sound waves with very short time scales which need to be resolved. This is why they introduced an anelastic model, based on an expansion of the solution around an isentropic hydrostatic profile. How accurate is that anelastic model? What are the conditions for its validity? To answer these questions, we have developed a numerical model for the full set of compressible equations and compared its solutions with those of the corresponding anelastic model. We considered a simple rectangular 2D Rayleigh-Bénard configuration and decided to restrict the analysis to infinite Prandtl numbers. This choice is valid for convection in the mantles of rocky planets, but more importantly leads to a zero Mach number, so we are freed from the question of the interference of acoustic waves with convection.
In that simplified context, we used the entropy balances (that of the full set of equations and that of the anelastic model) to investigate the differences between exact and anelastic solutions. We found that the validity of the anelastic model is dictated by two conditions: first, as expected, the superadiabatic temperature difference must be small compared with the adiabatic temperature difference, ε = ΔT_SA / ΔT_a ≪ 1; and second, the product of ε and the Nusselt number must also be small.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013PhFl...25f2002G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013PhFl...25f2002G"><span>A Fokker-Planck based kinetic model for diatomic rarefied gas flows</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gorji, M. Hossein; Jenny, Patrick</p> <p>2013-06-01</p> <p>A Fokker-Planck based kinetic model is presented here, which also accounts for the internal energy modes characteristic of diatomic gas molecules. The model is based on a Fokker-Planck approximation of the Boltzmann equation for monatomic molecules, whereas phenomenological principles were employed for the derivation. It is shown that the model honors the equipartition theorem in equilibrium and fulfills the Landau-Teller relaxation equations for internal degrees of freedom. The objective behind this approximate kinetic model is accuracy at reasonably low computational cost. This can be achieved due to the fact that the resulting stochastic differential equations are continuous in time; therefore, no collisions between the simulated particles have to be calculated. Besides, because of the devised energy-conserving time integration scheme, it is not required to resolve the collisional scales, i.e., the mean collision time and the mean free path of molecules.
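The continuous-in-time stochastic equations underlying such Fokker-Planck solvers are of Langevin type. A minimal Euler-Maruyama sketch of an Ornstein-Uhlenbeck velocity update (a generic stand-in, not the published model's exact drift and diffusion; all names are ours) illustrates why no particle-particle collisions need to be computed:

```python
import math
import random

def ou_velocity_step(v, tau, sigma, dt, rng=random):
    """Euler-Maruyama update of dv = -(v / tau) dt + sigma dW:
    the velocity relaxes on the timescale tau while the noise term
    maintains the equilibrium spread -- no binary collisions are
    evaluated anywhere in the update."""
    return v - (v / tau) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)

def relax(v0=5.0, tau=0.5, sigma=0.0, dt=0.01, steps=1000):
    """With sigma = 0 the update reduces to exponential relaxation,
    v(t) ~ v0 * exp(-t / tau), which is easy to check."""
    v = v0
    for _ in range(steps):
        v = ou_velocity_step(v, tau, sigma, dt)
    return v
```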
This, of course, gives rise to much more efficient simulations compared with other particle methods, especially conventional direct simulation Monte Carlo (DSMC), for small and moderate Knudsen numbers. To examine the new approach, first the computational cost of the model was compared with that of DSMC, where significant speed-up could be obtained for small Knudsen numbers. Second, the structure of a high Mach number shock (in nitrogen) was studied, and the good performance of the model for such out-of-equilibrium conditions could be demonstrated. Finally, a hypersonic flow of nitrogen over a wedge was studied, where good agreement with DSMC (with a level-to-level transition model) for vibrational and translational temperatures is shown.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22978639','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22978639"><span>ReactionPredictor: prediction of complex chemical reactions at the mechanistic level using machine learning.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kayala, Matthew A; Baldi, Pierre</p> <p>2012-10-22</p> <p>Proposing reasonable mechanisms and predicting the course of chemical reactions is important to the practice of organic chemistry. Approaches to reaction prediction have historically used obfuscating representations and manually encoded patterns or rules. Here we present ReactionPredictor, a machine learning approach to reaction prediction that models elementary, mechanistic reactions as interactions between approximate molecular orbitals (MOs).
A training data set of productive reactions known to occur at reasonable rates and yields and verified by inclusion in the literature or textbooks is derived from an existing rule-based system and expanded upon with manual curation from graduate level textbooks. Using this training data set of complex polar, hypervalent, radical, and pericyclic reactions, a two-stage machine learning prediction framework is trained and validated. In the first stage, filtering models trained at the level of individual MOs are used to reduce the space of possible reactions to consider. In the second stage, ranking models over the filtered space of possible reactions are used to order the reactions such that the productive reactions are the top ranked. The resulting model, ReactionPredictor, perfectly ranks polar reactions 78.1% of the time and recovers all productive reactions 95.7% of the time when allowing for small numbers of errors. Pericyclic and radical reactions are perfectly ranked 85.8% and 77.0% of the time, respectively, rising to >93% recovery for both reaction types with a small number of allowed errors. Decisions about which of the polar, pericyclic, or radical reaction type ranking models to use can be made with >99% accuracy. Finally, for multistep reaction pathways, we implement the first mechanistic pathway predictor using constrained tree-search to discover a set of reasonable mechanistic steps from given reactants to given products. 
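The constrained tree-search over mechanistic steps can be sketched as a breadth-first search with a depth bound. Here the step-proposal function is a hypothetical stub (in the actual system, candidate steps come from the trained filtering and ranking models):

```python
from collections import deque

def find_pathway(start, goal, propose_steps, max_depth=4):
    """Bounded breadth-first search over single mechanistic steps.
    Returns the shortest sequence of states from start to goal,
    or None if no pathway exists within max_depth steps."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        if len(path) > max_depth:
            continue  # do not expand beyond the depth bound
        for nxt in propose_steps(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

# Toy reaction network: species are labels, steps are a lookup table.
steps = {"reactants": ["intermediate1", "side-product"],
         "intermediate1": ["products"]}
path = find_pathway("reactants", "products", lambda s: steps.get(s, []))
# path == ["reactants", "intermediate1", "products"]
```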
Webserver implementations of both the single step and pathway versions of ReactionPredictor are available via the chemoinformatics portal http://cdb.ics.uci.edu/.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19850014182','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19850014182"><span>Surface interactions and high-voltage current collection</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Mandell, M. J.; Katz, I.</p> <p>1985-01-01</p> <p>Spacecraft of the future will be larger and have higher power requirements than any flown to date. For several reasons, it is desirable to operate a high power system at high voltage. While the optimal voltages for many future missions are in the range 500 to 5000 volts, the highest voltage yet flown is approximately 100 volts. The NASCAP/LEO code is being developed to embody the phenomenology needed to model the environmental interactions of high voltage spacecraft. Some plasma environments are discussed.
The treatment of the surface conductivity associated with emitted electrons and some simulations by NASCAP/LEO of ground based high voltage interaction experiments are described.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_13 --> <div id="page_14" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="261"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhLA..382..787F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhLA..382..787F"><span>On binding energy of trions in bulk materials</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Filikhin, Igor; Kezerashvili, Roman Ya.; Vlahovic, Branislav</p> <p>2018-03-01</p> <p>We study the negatively T- and positively T+ charged trions in bulk
materials in the effective mass approximation within the framework of a potential model. The binding energies of trions in various semiconductors are calculated by employing Faddeev equation in configuration space. Results of calculations of the binding energies for T- are consistent with previous computational studies and are in reasonable agreement with experimental measurements, while the T+ is unbound for all considered cases. The mechanism of formation of the binding energy of trions is analyzed by comparing contributions of a mass-polarization term related to kinetic energy operators and a term related to the Coulomb repulsion of identical particles.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19860030266&hterms=energia&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3Denergia','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19860030266&hterms=energia&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3Denergia"><span>Studies of electron-polyatomic-molecule collisions Applications to e-CH4</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Lima, M. A. P.; Gibson, T. L.; Mckoy, V.; Huo, W. M.</p> <p>1985-01-01</p> <p>The first application of the Schwinger multichannel formulation to low-energy electron collisions with a nonlinear polyatomic target is reported. Integral and differential cross sections are obtained for e-CH4 collisions from 3 to 20 eV at the static-plus-exchange interaction level. In these studies, the exchange potential is directly evaluated and not approximated by local models. An interesting feature of the small-angle differential cross section is ascribed to polarization effects and not reproduced at the static-plus-exchange level. 
These differential cross sections are found to be in reasonable agreement with existing measurements at 7.5 eV and higher energies.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014JPhCS.500c2001A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014JPhCS.500c2001A"><span>An equation of state for polyurea aerogel based on multi-shock response</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Aslam, T. D.; Gustavsen, R. L.; Bartram, B. D.</p> <p>2014-05-01</p> <p>The equation of state (EOS) of polyurea aerogel (PUA) is examined through both single shock Hugoniot data as well as more recent multi-shock compression experiments performed on the LANL 2-stage gas gun. A simple conservative Lagrangian numerical scheme, utilizing total variation diminishing (TVD) interpolation and an approximate Riemann solver, will be presented as well as the methodology of calibration. It will be demonstrated that a p-α model based on a Mie-Gruneisen fitting form for the solid material can reasonably replicate the multi-shock compression response at a variety of initial densities; such a methodology will be presented for a commercially available polyurea aerogel.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20010038421','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20010038421"><span>Dynamic Modeling and Testing of MSRR-1 for Use in Microgravity Environments Analysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Gattis, Christy; LaVerde, Bruce; Howell, Mike; Phelps, Lisa H.
(Technical Monitor)</p> <p>2001-01-01</p> <p>Delicate microgravity science is unlikely to succeed on the International Space Station if vibratory and transient disturbers corrupt the environment. An analytical approach to compute the on-orbit acceleration environment at science experiment locations within a standard payload rack resulting from these disturbers is presented. This approach has been grounded by correlation and comparison to test verified transfer functions. The method combines the results of finite element and statistical energy analysis using tested damping and modal characteristics to provide a reasonable approximation of the total root-mean-square (RMS) acceleration spectra at the interface to microgravity science experiment hardware.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1995SPIE.2527...32H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1995SPIE.2527...32H"><span>Molecular μβ figure-of-merit studies of solid solutions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Healy, David; Thomas, Philip R.; Szablewski, Marek; Cross, Graham H.</p> <p>1995-10-01</p> <p>The dipole moments (μ) of a series of zwitterionic nonlinear optical chromophores doped into poly(methyl methacrylate) have been determined. Values of between 34 D and 38 D have been measured through the fitting of an uncurtailed Langevin function to the incidence angle dependence of the p-p second harmonic intensity generated from corona poled films. It is shown that accurate values of dipole moment can only be determined when the poling fields are lower than approximately 100 MV/m, above which existing electric field poling models appear to be inadequate.
The reasons for this are as yet unknown; possible mechanisms of the effect are presented.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19850014177','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19850014177"><span>Wakes and differential charging of large bodies in low Earth orbit</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Parker, L. W.</p> <p>1985-01-01</p> <p>Highlights of earlier results using the Inside-Out WAKE code on wake structures of LEO spacecraft are reviewed. For conducting bodies of radius large compared with the Debye length, a high Mach number wake develops a negative potential well. Quasineutrality is violated in the very near wake region, and the wake is relatively empty for a distance downstream of about one half of a Mach number of radii. There is also a suggestion of a core of high density along the axis. A comparison of rigorous numerical solutions with in situ wake data from the AE-C satellite suggests that the so-called neutral approximation for ions (straight-line trajectories, independent of fields) may be a reasonable approximation except near the center of the near wake. This approximation is adopted for very large bodies. Work concerned with the wake point potential of very large nonconducting bodies such as the shuttle orbiter is described. Using a cylindrical model for bodies of this size or larger in LEO (body radius up to 10^5 Debye lengths), approximate solutions are presented based on the neutral approximation (but with rigorous trajectory calculations for surface current balance). There is a negative potential well if the body is conducting, and no well if the body is nonconducting. In the latter case the wake surface itself becomes highly negative.
The wake point potential is governed by the ion drift energy.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24697424','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24697424"><span>Including screening in van der Waals corrected density functional theory calculations: the case of atoms and small molecules physisorbed on graphene.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Silvestrelli, Pier Luigi; Ambrosetti, Alberto</p> <p>2014-03-28</p> <p>The Density Functional Theory (DFT)/van der Waals-Quantum Harmonic Oscillator-Wannier function (vdW-QHO-WF) method, recently developed to include the vdW interactions in approximated DFT by combining the quantum harmonic oscillator model with the maximally localized Wannier function technique, is applied to the cases of atoms and small molecules (X=Ar, CO, H2, H2O) weakly interacting with benzene and with the ideal planar graphene surface. Comparison is also presented with the results obtained by other DFT vdW-corrected schemes, including PBE+D, vdW-DF, vdW-DF2, rVV10, and by the simpler Local Density Approximation (LDA) and semilocal generalized gradient approximation approaches. While for the X-benzene systems all the considered vdW-corrected schemes perform reasonably well, it turns out that an accurate description of the X-graphene interaction requires a proper treatment of many-body contributions and of short-range screening effects, as demonstrated by adopting an improved version of the DFT/vdW-QHO-WF method. 
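For comparison, the pairwise "PBE+D"-style corrections mentioned above add a damped −C6/r⁶ term over atom pairs. A minimal Grimme-D2-style sketch (parameter values illustrative; the paper's own DFT/vdW-QHO-WF scheme is more elaborate and includes screening):

```python
import math

def d2_dispersion_energy(coords, c6, r0, s6=0.75, d=20.0):
    """Pairwise dispersion energy in the D2 style:
    E = -s6 * sum_{i<j} C6_ij / r_ij**6 * f_damp(r_ij), with
    C6_ij a geometric mean, r0_ij a sum of vdW radii, and a
    Fermi-type damping that switches the term off at short range."""
    e = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            r = math.dist(coords[i], coords[j])
            c6ij = math.sqrt(c6[i] * c6[j])
            r0ij = r0[i] + r0[j]
            f_damp = 1.0 / (1.0 + math.exp(-d * (r / r0ij - 1.0)))
            e -= s6 * c6ij / r ** 6 * f_damp
    return e

# Two atoms far apart: damping ~ 1, so E ~ -s6 * C6 / r**6.
e = d2_dispersion_energy([(0.0, 0.0, 0.0), (0.0, 0.0, 10.0)],
                         c6=[10.0, 10.0], r0=[1.5, 1.5])
```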
We also comment on the widespread attitude of relying on LDA to get a rough description of weakly interacting systems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18426236','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18426236"><span>Pyroelectricity of water ice.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, Hanfu; Bell, Richard C; Iedema, Martin J; Schenter, Gregory K; Wu, Kai; Cowin, James P</p> <p>2008-05-22</p> <p>Water ice usually is thought to have zero pyroelectricity by symmetry. However, biasing it with ions breaks the symmetry because of the induced partial dipole alignment. This unmasks a large pyroelectricity. Ions were soft-landed upon 1 μm films of water ice at temperatures greater than 160 K. When cooled below 140-150 K, the dipole alignment locks in. Work function measurements of these films then show high and reversible pyroelectric activity from 30 to 150 K. For an initial approximately 10 V induced by the deposited ions at 160 K, the observed bias below 150 K varies approximately as 10 V × (T/150 K)². This implies that water has pyroelectric coefficients as large as that of many commercial pyroelectrics, such as lead zirconate titanate (PZT). The pyroelectricity of water ice, not previously reported, is in reasonable agreement with that predicted using harmonic analysis of a model system of SPC ice. The pyroelectricity is observed in crystalline and compact amorphous ice, deuterated or not.
This implies that for water ice between 0 and 150 K (such as astrophysical ices), temperature changes can induce strong electric fields (approximately 10 MV/m) that can influence their chemistry, ion trajectories, or binding.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4505828','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4505828"><span>A Mechanistic Pharmacokinetic Model for Liver Transporter Substrates Under Liver Cirrhosis Conditions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Li, R; Barton, HA; Maurer, TS</p> <p>2015-01-01</p> <p>Liver cirrhosis is a disease characterized by the loss of functional liver mass. Physiologically based pharmacokinetic (PBPK) modeling was applied to interpret and predict how the interplay among physiological changes in cirrhosis affects pharmacokinetics. However, previous PBPK models under cirrhotic conditions were developed for permeable cytochrome P450 substrates and do not directly apply to substrates of liver transporters. This study characterizes a PBPK model for liver transporter substrates in relation to the severity of liver cirrhosis. A published PBPK model structure for liver transporter substrates under healthy conditions and the physiological changes for cirrhosis are combined to simulate pharmacokinetics of liver transporter substrates in patients with mild and moderate cirrhosis. The simulated pharmacokinetics under liver cirrhosis reasonably approximate observations. This analysis includes meta-analysis to obtain system-dependent parameters in cirrhosis patients and a top-down approach to improve understanding of the effect of cirrhosis on transporter-mediated drug disposition under cirrhotic conditions. 
PMID:26225262</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFM.S21B2688T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFM.S21B2688T"><span>A Bayesian approach to modeling 2D gravity data using polygon states</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Titus, W. J.; Titus, S.; Davis, J. R.</p> <p>2015-12-01</p> <p>We present a Bayesian Markov chain Monte Carlo (MCMC) method for the 2D gravity inversion of a localized subsurface object with constant density contrast. Our models have four parameters: the density contrast, the number of vertices in a polygonal approximation of the object, an upper bound on the ratio of the perimeter squared to the area, and the vertices of a polygon container that bounds the object. Reasonable parameter values can be estimated prior to inversion using a forward model and geologic information. In addition, we assume that the field data have a common random uncertainty that lies between two bounds but that it has no systematic uncertainty. Finally, we assume that there is no uncertainty in the spatial locations of the measurement stations. For any set of model parameters, we use MCMC methods to generate an approximate probability distribution of polygons for the object. We then compute various probability distributions for the object, including the variance between the observed and predicted fields (an important quantity in the MCMC method), the area, the center of area, and the occupancy probability (the probability that a spatial point lies within the object). In addition, we compare probabilities of different models using parallel tempering, a technique which also mitigates trapping in local optima that can occur in certain model geometries. 
We apply our method to several synthetic data sets generated from objects of varying shape and location. We also analyze a natural data set collected across the Rio Grande Gorge Bridge in New Mexico, where the object (i.e. the air below the bridge) is known and the canyon is approximately 2D. Although there are many ways to view results, the occupancy probability proves quite powerful. We also find that the choice of the container is important. In particular, large containers should be avoided, because the more closely a container confines the object, the better the predictions match the properties of the object.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19970023970&hterms=1601&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3D%2526%25231601','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19970023970&hterms=1601&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3D%2526%25231601"><span>Decay of Far-Flowfield in Trailing Vortices</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Baldwin, B. S.; Chigier, N. A.; Sheaffer, Y. S.</p> <p>1973-01-01</p> <p>Methods for reduction of velocities in trailing vortices of large aircraft are of current interest for the purpose of shortening the waiting time between landings at central airports. We have made finite-difference calculations of the flow in turbulent wake vortices as an aid to interpretation of wind-tunnel and flight experiments directed toward that end. Finite-difference solutions are capable of adding flexibility to such investigations if they are based on an adequate model of turbulence. Interesting developments have been taking place in the knowledge of turbulence that may lead to a complete theory in the future. 
In the meantime, approximate methods that yield reasonable agreement with experiment are appropriate. The simplified turbulence model we have selected contains features that account for the major effects disclosed by more sophisticated models in which the parameters are not yet established. Several puzzles are thereby resolved that arose in previous theoretical investigations of wake vortices.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1411609-stochastic-gain-degradation-iii-heterojunction-bipolar-transistors-due-single-particle-displacement-damage','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1411609-stochastic-gain-degradation-iii-heterojunction-bipolar-transistors-due-single-particle-displacement-damage"><span>Stochastic Gain Degradation in III-V Heterojunction Bipolar Transistors due to Single Particle Displacement Damage</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Vizkelethy, Gyorgy; Bielejec, Edward S.; Aguirre, Brandon A.</p> <p>2017-11-13</p> <p>As device dimensions decrease, single-particle displacement effects are becoming more important. We measured the gain degradation in III-V Heterojunction Bipolar Transistors due to single particles using a heavy ion microbeam. Two devices with different sizes were irradiated with various ion species ranging from oxygen to gold to study the effect of the irradiation ion mass on the gain change. From the single steps in the inverse gain (which is proportional to the number of defects) we calculated cumulative distribution functions to help determine design margins. The displacement process was modeled using the Marlowe Binary Collision Approximation (BCA) code. The entire structure of the device was modeled and the defects in the base-emitter junction were counted to be compared to the experimental results. 
While we found good agreement for the large device, we had to modify our model to reach reasonable agreement for the small device.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26872234','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26872234"><span>Physician-patient argumentation and communication, comparing Toulmin's model, pragma-dialectics, and American sociolinguistics.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Rivera, Francisco Javier Uribe; Artmann, Elizabeth</p> <p>2015-12-01</p> <p>This article discusses the application of theories of argumentation and communication to the field of medicine. Based on a literature review, the authors compare Toulmin's model, pragma-dialectics, and the work of Todd and Fisher, derived from American sociolinguistics. These approaches were selected because they belong to the pragmatic field of language. 
The main results were: pragma-dialectics characterizes medical reasoning more comprehensively, highlighting specific elements of the three disciplines of argumentation: dialectics, rhetoric, and logic; Toulmin's model helps substantiate the declaration of diagnostic and therapeutic hypotheses, and as part of an interpretive medicine, approximates the pragma-dialectical approach by including dialectical elements in the process of formulating arguments; Fisher and Todd's approach allows characterizing, from a pragmatic analysis of speech acts, the degree of symmetry/asymmetry in the doctor-patient relationship, while arguing the possibility of negotiating treatment alternatives.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19900054790&hterms=poe&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dpoe','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19900054790&hterms=poe&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dpoe"><span>The steady state solutions of radiatively driven stellar winds for a non-Sobolev, pure absorption model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Poe, C. H.; Owocki, S. P.; Castor, J. I.</p> <p>1990-01-01</p> <p>The steady state solution topology for absorption line-driven flows is investigated for the condition that the Sobolev approximation is not used to compute the line force. The solution topology near the sonic point is of the nodal type with two positive slope solutions. The shallower of these slopes applies to reasonable lower boundary conditions and realistic ion thermal speed v(th) and to the Sobolev limit of zero v(th) of the usual Castor, Abbott, and Klein model. At finite v(th), this solution consists of a family of very similar solutions converging on the sonic point. It is concluded that a non-Sobolev, absorption line-driven flow with realistic values of v(th) has no uniquely defined steady state. 
To the extent that a pure absorption model of the outflow of stellar winds is applicable, radiatively driven winds should be intrinsically variable.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/10721498','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/10721498"><span>Substance use disorders in schizophrenia: review, integration, and a proposed model.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Blanchard, J J; Brown, S A; Horan, W P; Sherwood, A R</p> <p>2000-03-01</p> <p>Substance use disorders occur in approximately 40 to 50% of individuals with schizophrenia. Clinically, substance use disorders are associated with a variety of negative outcomes in schizophrenia, including incarceration, homelessness, violence, and suicide. An understanding of the reasons for such high rates of substance use disorders may yield insights into the treatment of this comorbidity in schizophrenia. This review summarizes methodological and conceptual issues concerning the study of substance use disorders in schizophrenia and provides a review of the prevalence of this co-occurrence. Prevailing theories regarding the co-occurrence of schizophrenia and substance use disorders are reviewed. Little empirical support is found for models suggesting that schizophrenic symptoms lead to substance use (self-medication), that substance use leads to schizophrenia, or that there is a genetic relationship between schizophrenia and substance use. 
An integrative affect-regulation model incorporating individual differences in traits and responses to stress is proposed for future study.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AIPA....8a5003L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AIPA....8a5003L"><span>Simulation of a large size inductively coupled plasma generator and comparison with experimental data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lei, Fan; Li, Xiaoping; Liu, Yanming; Liu, Donglin; Yang, Min; Yu, Yuanyuan</p> <p>2018-01-01</p> <p>A two-dimensional axisymmetric inductively coupled plasma (ICP) model with its implementation in the COMSOL (Multi-physics simulation software) platform is described. Specifically, a large size ICP generator filled with argon is simulated in this study. Distributions of the number density and temperature of electrons are obtained for various input power and pressure settings and compared. In addition, the electron trajectory distribution is obtained in simulation. Finally, the simulation results are compared with experimental data to assess the validity of the two-dimensional fluid model. Approximate agreement was found (the variation trends are the same). 
The main reasons for the numerical magnitude discrepancies are the assumption of Maxwellian and Druyvesteyn distributions for the electron energy and the lack of cross-section, collision-frequency, and reaction-rate data for argon plasma.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25958976','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25958976"><span>Dimensionality of the 9-item Utrecht Work Engagement Scale revisited: A Bayesian structural equation modeling approach.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Fong, Ted C T; Ho, Rainbow T H</p> <p>2015-01-01</p> <p>The aim of this study was to reexamine the dimensionality of the widely used 9-item Utrecht Work Engagement Scale using the maximum likelihood (ML) approach and Bayesian structural equation modeling (BSEM) approach. Three measurement models (1-factor, 3-factor, and bi-factor models) were evaluated in two split samples of 1,112 health-care workers using confirmatory factor analysis and BSEM, which specified small-variance informative priors for cross-loadings and residual covariances. Model fit and comparisons were evaluated by posterior predictive p-value (PPP), deviance information criterion, and Bayesian information criterion (BIC). None of the three ML-based models showed an adequate fit to the data. The use of informative priors for cross-loadings did not improve the PPP for the models. The 1-factor BSEM model with approximately zero residual covariances displayed a good fit (PPP>0.10) to both samples and a substantially lower BIC than its 3-factor and bi-factor counterparts. 
The BSEM results demonstrate empirical support for the 1-factor model as a parsimonious and reasonable representation of work engagement.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017ApGeo..14..463Q','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017ApGeo..14..463Q"><span>Prediction of brittleness based on anisotropic rock physics model for kerogen-rich shale</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Qian, Ke-Ran; He, Zhi-Liang; Chen, Ye-Quan; Liu, Xi-Wu; Li, Xiang-Yang</p> <p>2017-12-01</p> <p>The construction of a shale rock physics model and the selection of an appropriate brittleness index (BI) are two significant steps that can influence the accuracy of brittleness prediction. On one hand, the existing models of kerogen-rich shale are controversial, so a reasonable rock physics model needs to be built. On the other hand, several types of equations already exist for predicting the BI whose feasibility needs to be carefully considered. This study constructed a kerogen-rich rock physics model by performing the self-consistent approximation and the differential effective medium theory to model intercoupled clay and kerogen mixtures. The feasibility of our model was confirmed by comparison with classical models, showing better accuracy. Templates were constructed based on our model to link physical properties and the BI. Different equations for the BI had different sensitivities, making them suitable for different types of formations. Equations based on Young's modulus were sensitive to variations in lithology, while those using Lamé coefficients were sensitive to porosity and pore fluids. 
Physical information must be considered to improve brittleness prediction.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1929043','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1929043"><span>Static Light Scattering from Concentrated Protein Solutions, I: General Theory for Protein Mixtures and Application to Self-Associating Proteins</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Minton, Allen P.</p> <p>2007-01-01</p> <p>Exact expressions for the static light scattering of a solution containing up to three species of point-scattering solutes in highly nonideal solutions at arbitrary concentration are obtained from multicomponent scattering theory. Explicit expressions for thermodynamic interaction between solute molecules, required to evaluate the scattering relations, are obtained using an equivalent hard particle approximation similar to that employed earlier to interpret scattering of a single protein species at high concentration. The dependence of scattering intensity upon total protein concentration is calculated for mixtures of nonassociating proteins and for a single self-associating protein over a range of concentrations up to 200 g/l. An approximate semiempirical analysis of the concentration dependence of scattering intensity is proposed, according to which the contribution of thermodynamic interaction to scattering intensity is modeled as that of a single average hard spherical species. Simulated data containing pseudo-noise comparable in magnitude to actual experimental uncertainty are modeled using relations obtained from the proposed semiempirical analysis. 
It is shown that by using these relations one can extract from the data reasonably reliable information about underlying weak associations that are manifested only at very high total protein concentration. PMID:17526566</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_14 --> <div id="page_15" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="281"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20050109882','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20050109882"><span>Transient Approximation of SAFE-100 Heat Pipe Operation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Bragg-Sitton, Shannon M.; Reid, Robert S.</p> <p>2005-01-01</p> <p>Engineers at Los Alamos National Laboratory (LANL) have designed 
several heat pipe cooled reactor concepts, ranging in power from 15 kWt to 800 kWt, for both surface power systems and nuclear electric propulsion systems. The Safe, Affordable Fission Engine (SAFE) is now being developed in a collaborative effort between LANL and NASA Marshall Space Flight Center (NASA/MSFC). NASA is responsible for fabrication and testing of non-nuclear, electrically heated modules in the Early Flight Fission Test Facility (EFF-TF) at MSFC. In-core heat pipes must be properly thawed as the reactor power starts. Computational models have been developed to assess the expected operation of a specific heat pipe design during start-up, steady state operation, and shutdown. While computationally intensive codes provide complete, detailed analyses of heat pipe thaw, a relatively simple, concise routine can also be applied to approximate the response of a heat pipe to changes in the evaporator heat transfer rate during start-up and power transients (e.g., modification of reactor power level) with reasonably accurate results. This paper describes a simplified model of heat pipe start-up that extends previous work and compares the results to experimental measurements for a SAFE-100 type heat pipe design.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA232140','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA232140"><span>Advanced Methods of Approximate Reasoning</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>1990-11-30</p> <p>about Knowledge and Action. Technical Note 191, Menlo Park, California: SRI International, 1980. [26] N.J. Nilsson. Probabilistic logic. Artificial...reasoning. Artificial Intelligence, 13:81-132, 1980. [30] R. Reiter. On closed world data bases. In H. Gallaire and J. Minker, editors, Logic and Data...especially grateful to Dr. 
Abraham Waksman of the Air Force Office of Scientific Research and Dr. David Hislop of the Army Research Office for their</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22400556','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22400556"><span>Mean-field approximation for spacing distribution functions in classical systems.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>González, Diego Luis; Pimpinelli, Alberto; Einstein, T L</p> <p>2012-01-01</p> <p>We propose a mean-field method to calculate approximately the spacing distribution functions p(n)(s) in one-dimensional classical many-particle systems. We compare our method with two other commonly used methods, the independent interval approximation and the extended Wigner surmise. In our mean-field approach, p(n)(s) is calculated from a set of Langevin equations, which are decoupled by using a mean-field approximation. We find that in spite of its simplicity, the mean-field approximation provides good results in several systems. We offer many examples illustrating that the three previously mentioned methods give a reasonable description of the statistical behavior of the system. The physical interpretation of each method is also discussed. © 2012 American Physical Society</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20140011861','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20140011861"><span>Long-term Ozone Changes and Associated Climate Impacts in CMIP5 Simulations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Eyring, V.; Arblaster, J. M.; Cionni, I.; Sedlacek, J.; Perlwitz, J.; Young, P. 
J.; Bekki, S.; Bergmann, D.; Cameron-Smith, P.; Collins, W. J.; et al.</p> <p>2013-01-01</p> <p>Ozone changes and associated climate impacts in the Coupled Model Intercomparison Project Phase 5 (CMIP5) simulations are analyzed over the historical (1960-2005) and future (2006-2100) period under four Representative Concentration Pathways (RCP). In contrast to CMIP3, where half of the models prescribed constant stratospheric ozone, CMIP5 models all consider past ozone depletion and future ozone recovery. Multimodel mean climatologies and long-term changes in total and tropospheric column ozone calculated from CMIP5 models with either interactive or prescribed ozone are in reasonable agreement with observations. However, some large deviations from observations exist for individual models with interactive chemistry, and these models are excluded in the projections. Stratospheric ozone projections forced with a single halogen, but four greenhouse gas (GHG) scenarios show largest differences in the northern midlatitudes and in the Arctic in spring (approximately 20 and 40 Dobson units (DU) by 2100, respectively). By 2050, these differences are much smaller and negligible over Antarctica in austral spring. Differences in future tropospheric column ozone are mainly caused by differences in methane concentrations and stratospheric input, leading to approximately 10 DU increases compared to 2000 in RCP 8.5. 
Large variations in stratospheric ozone particularly in CMIP5 models with interactive chemistry drive correspondingly large variations in lower stratospheric temperature trends. The results also illustrate that future Southern Hemisphere summertime circulation changes are controlled by both the ozone recovery rate and the rate of GHG increases, emphasizing the importance of simulating and taking into account ozone forcings when examining future climate projections.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20030014276&hterms=vertical+height&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3Dvertical%2Bheight','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20030014276&hterms=vertical+height&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3Dvertical%2Bheight"><span>Volcanic Plume Heights on Mars: Limits of Validity for Convective Models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Glaze, Lori S.; Baloga, Stephen M.</p> <p>2002-01-01</p> <p>Previous studies have overestimated volcanic plume heights on Mars. In this work, we demonstrate that volcanic plume rise models, as currently formulated, have only limited validity in any environment. These limits are easily violated in the current Mars environment and may also be violated for terrestrial and early Mars conditions. We indicate some of the shortcomings of the model with emphasis on the limited applicability to current Mars conditions. Specifically, basic model assumptions are violated when (1) vertical velocities exceed the speed of sound, (2) radial expansion rates exceed the speed of sound, (3) radial expansion rates approach or exceed the vertical velocity, or (4) plume radius grossly exceeds plume height. All of these criteria are violated for the typical Mars example given here. 
Solutions imply that the convective rise model is only valid to a height of approximately 10 kilometers. The reason for the model breakdown is that the current Mars atmosphere is not of sufficient density to satisfy the conservation equations. It is likely that diffusion and other effects governed by higher-order differential equations are important within the first few kilometers of rise. When the same criteria are applied to eruptions into a higher-density early Mars atmosphere, we find that eruption rates higher than 1.4 × 10^9 kilograms per second also violate model assumptions. This implies a maximum extent of approximately 65 kilometers for convective plumes on early Mars. The estimated plume heights for both current and early Mars are significantly lower than those previously predicted in the literature. Therefore, global-scale distribution of ash seems implausible.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2003PhDT.......215C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2003PhDT.......215C"><span>Optical properties of carbon nanotubes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chen, Gugang</p> <p></p> <p>This thesis addresses the optical properties of novel carbon filamentary nanomaterials: single-walled carbon nanotubes (SWNTs), double-walled carbon nanotubes (DWNTs), and SWNTs with interior C60 molecules ("peapods"). Optical reflectance spectra of bundled SWNTs are discussed in terms of their electronic energy band structure. An Effective Medium Model for a composite material was found to provide a reasonable description of the spectra. 
Furthermore, we have learned from optical absorption studies of DWNTs and C60-peapods that the host tube and the encapsulant interact weakly; small shifts in interband absorption structure were observed. Resonant Raman scattering studies on SWNTs synthesized via the HiPCO process show that the "zone-folding" approximation for phonons and electrons works reasonably well, even for small diameter (d < 1 nm) tubes. The energies of optical transitions between van Hove singularities in the electronic density of states computed from the "zone-folding" model (with gamma0 = 2.9 eV) agree well with the resonant conditions for Raman scattering. Small diameter tubes were found to exhibit additional sharp Raman bands in the frequency range 500-1200 cm-1 with an as-yet-undetermined origin. The Raman spectrum of a DWNT was found to be well described by a superposition of the Raman spectra expected for inner and outer tubes, i.e., no charge transfer occurs and the weak van der Waals (vdW) interaction between tubes does not have significant impact on the phonons. A ~7 cm-1 downshift of the small diameter, inner-tube tangential mode frequency was observed, however, and attributed to a tube wall curvature effect rather than the vdW interaction. Finally, we studied the chemical doping of DWNTs, where the dopant (Br anions) is chemically bound to the outside of the outer tube. The doped DWNT system is a model for a cylindrical molecular capacitor. We found experimentally that 90% of the positive charge resides on the outer tube, so that most of the electric field on the inner tube is screened, i.e., we have observed a molecular Faraday cage effect. 
A self-consistent theoretical model in the tight-binding approximation with a classical electrostatic energy term is in good agreement with our experimental results.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27004116','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27004116"><span>Prediction of meat spectral patterns based on optical properties and concentrations of the major constituents.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>ElMasry, Gamal; Nakauchi, Shigeki</p> <p>2016-03-01</p> <p>A simulation method for approximating spectral signatures of minced meat samples was developed depending on the concentrations and optical properties of the major chemical constituents. Minced beef samples of different compositions, scanned with a near-infrared spectroscopy system and with a hyperspectral imaging system, were examined. Chemical composition determined heuristically and optical properties collected from authenticated references were used to approximate the samples' spectral signatures. In the short-wave infrared range, the resulting spectrum equals the sum of the absorption of three individual absorbers, that is, water, protein, and fat. By assuming homogeneous distributions of the main chromophores in the mince samples, the obtained absorption spectra are found to be a linear combination of the absorption spectra of the major chromophores present in the sample. 
Results revealed that the developed models were robust enough to derive spectral signatures of minced meat samples, with an agreement index above 0.90 and a ratio of performance to deviation above 1.4.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012GeoRL..3924305M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012GeoRL..3924305M"><span>Evolution of microstructure and elastic wave velocities in dehydrated gypsum samples</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Milsch, Harald; Priegnitz, Mike</p> <p>2012-12-01</p> <p>We report on changes in P and S-wave velocities and rock microstructure induced by devolatilization reactions using gypsum as a reference analog material. Cylindrical samples of natural alabaster were dehydrated in air, at ambient pressure, and at temperatures between 378 and 423 K. Dehydration did not proceed homogeneously but via a reaction front moving inward through the sample, separating an outer, highly porous rim from the remaining gypsum which, above approximately 393 (±5) K, concurrently decomposed into hemihydrate. Overall porosity was observed to continuously increase with reaction progress from approximately 2% for fully hydrated samples to 30% for completely dehydrated ones. Concurrently, P and S-wave velocities linearly decreased with porosity from 5.2 and 2.7 km/s to 1.0 and 0.7 km/s, respectively. 
It is concluded that a linearized empirical Raymer-type model extended by a critical porosity term and based on the respective time dependent mineral and pore volumes reasonably replicates the P and S-wave data in relation to reaction progress and porosity.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26605467','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26605467"><span>Stability of Hydrocarbons of the Polyhedrane Family: Convergence of ab Initio Calculations and Corresponding Assessment of DFT Main Approximations.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Sancho-García, J C</p> <p>2011-09-13</p> <p>Highly accurate coupled-cluster (CC) calculations with large basis sets have been performed to study the binding energy of the (CH)12, (CH)16, (CH)20, and (CH)24 polyhedral hydrocarbons in two, cage-like and planar, forms. We also considered the effect of other minor contributions: core-correlation, relativistic corrections, and extrapolations to the limit of the full CC expansion. Thus, chemically accurate values could be obtained for these complicated systems. These nearly exact results are used to evaluate next the performance of main approximations (i.e., pure, hybrid, and double-hybrid methods) within density functional theory (DFT) in a systematic fashion. 
Some commonly used functionals, including the B3LYP model, are affected by large errors; only those with reduced self-interaction error (SIE), including the most recent family of double-hybrid expressions, achieve reasonably low deviations of 1-2 kcal/mol, especially when an estimate of dispersion interactions is also added.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JPhCS1015e2002B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JPhCS1015e2002B"><span>Speed Approach for UAV Collision Avoidance</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Berdonosov, V. D.; Zivotova, A. A.; Htet Naing, Zaw; Zhuravlev, D. O.</p> <p>2018-05-01</p> <p>The article presents a new approach for detecting potential collisions of two or more UAVs in a common aviation area. UAV trajectories are approximated from two or three trajectory points obtained from the ADS-B system. In determining the meeting points of the trajectories, two cutoff values of the critical speed range, at which a UAV collision is possible, are calculated. Because the expressions for the meeting points and the critical-speed cutoffs are given in analytical form, the computation time will be far less than the ADS-B data-reception interval, even on an on-board computer with limited capacity. For this reason, the calculations can be updated at each new data-reception cycle, and the trajectory approximation can be bounded by straight lines. This approach allows a compact collision-avoidance algorithm to be developed, even for a significant number of UAVs (more than several dozen). 
To demonstrate the adequacy of the approach, modeling was performed using a software system developed specifically for this purpose.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015JPhCS.574a2041T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015JPhCS.574a2041T"><span>Combining a reactive potential with a harmonic approximation for molecular dynamics simulation of failure: construction of a reduced potential</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Tejada, I. G.; Brochard, L.; Stoltz, G.; Legoll, F.; Lelièvre, T.; Cancès, E.</p> <p>2015-01-01</p> <p>Molecular dynamics is a simulation technique that can be used to study failure in solids, provided the inter-atomic potential energy is able to account for the complex mechanisms at failure. Reactive potentials fitted on ab initio results or on experimental values have the ability to adapt to any complex atomic arrangement and, therefore, are suited to simulating failure. But the complexity of these potentials, together with the size of the systems considered, makes simulations computationally expensive. In order to improve the efficiency of numerical simulations, simpler harmonic potentials can be used instead of complex reactive potentials in the regions where the system is close to its ground state and a harmonic approximation reasonably fits the actual reactive potential. However, the validity and precision of such an approach have not yet been investigated in detail. We present here a methodology for constructing a reduced potential and combining it with the reactive one. We also report some important features of crack propagation that may be affected by the coupling of reactive and reduced potentials. 
As an illustrative case, we model a crystalline two-dimensional material (graphene) with a reactive empirical bond-order potential (REBO) or with harmonic potentials made of bond and angle springs that are designed to reproduce the second order approximation of REBO in the ground state. We analyze the consistency of this approximation by comparing the mechanical behavior and the phonon spectra of systems modeled with these potentials. These tests reveal when the anharmonicity effects appear. As anharmonic effects originate from strain, stress or temperature, the latter quantities are the basis for establishing coupling criteria for on the fly substitution in large simulations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20170002677&hterms=black&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3Dblack','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20170002677&hterms=black&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3Dblack"><span>The Accretion Disk Wind in the Black Hole GRS 1915 + 105</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Miller, J.M.; Raymond, J.; Fabian, A. C.; Gallo, E.; Kaastra, J.; Kallman, T.; King, A. L.; Proga, D.; Reynolds, C. S.; Zoghbi, A.</p> <p>2016-01-01</p> <p>We report on a 120 kiloseconds Chandra/HETG spectrum of the black hole GRS 1915+105. The observation was made during an extended and bright soft state in 2015 June. An extremely rich disk wind absorption spectrum is detected, similar to that observed at lower sensitivity in 2007. The very high resolution of the third-order spectrum reveals four components to the disk wind in the Fe K band alone; the fastest has a blueshift of v = 0.03 c (velocity equals 0.03 the speed of light). 
Broadened re-emission from the wind is also detected in the first-order spectrum, giving rise to clear accretion disk P Cygni profiles. Dynamical modeling of the re-emission spectrum gives wind launching radii of r approximately equal to 10 (sup 2-4) GM (Gravitational constant times Mass) divided by c (sup 2) (the speed of light squared). Wind density values of n approximately equal to 10 (sup 13-16) per cubic centimeter are then required by the ionization parameter formalism. The small launching radii, high density values, and inferred high mass outflow rates signal a role for magnetic driving. With simple, reasonable assumptions, the wind properties constrain the magnitude of the emergent magnetic field to be B approximately equal to 10 (sup 3-4) gauss if the wind is driven via magnetohydrodynamic (MHD) pressure from within the disk and B approximately equal to 10 (sup 4-5) gauss if the wind is driven by magnetocentrifugal acceleration. The MHD estimates are below upper limits predicted by the canonical alpha-disk model. We discuss these results in terms of fundamental disk physics and black hole accretion modes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28888929','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28888929"><span>Longitudinal studies of botulinum toxin in cervical dystonia: Why do patients discontinue therapy?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Jinnah, H A; Comella, Cynthia L; Perlmutter, Joel; Lungu, Codrin; Hallett, Mark</p> <p>2018-06-01</p> <p>Numerous studies have established botulinum toxin (BoNT) to be safe and effective for the treatment of cervical dystonia (CD). 
Despite its well-documented efficacy, there has been growing awareness that a significant proportion of CD patients discontinue therapy. The reasons for discontinuation are only partly understood. This summary describes longitudinal studies that provided information regarding the proportions of patients discontinuing BoNT therapy, and the reasons for discontinuing therapy. The data come predominantly from un-blinded long-term follow-up studies, registry studies, and patient-based surveys. All types of longitudinal studies provide strong evidence that BoNT is both safe and effective in the treatment of CD for many years. Overall, approximately one third of CD patients discontinue BoNT. The most common reason for discontinuing therapy is lack of benefit, often described as primary or secondary non-response. The apparent lack of response is only rarely related to true immune-mediated resistance to BoNT. Other reasons for discontinuing include side effects, inconvenience, and cost. Although BoNT is safe and effective in the treatment of the majority of patients with CD, approximately one third discontinue. The increasing awareness of a significant proportion of patients who discontinue should encourage further efforts to optimize the administration of BoNT, to improve BoNT preparations to extend duration or reduce side effects, to develop add-on therapies that may mitigate swings in symptom severity, or to develop entirely novel treatment approaches. Copyright © 2017 Elsevier Ltd. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013JChPh.139c4505V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013JChPh.139c4505V"><span>The isotropic-nematic phase transition of tangent hard-sphere chain fluids—Pure components</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>van Westen, Thijs; Oyarzún, Bernardo; Vlugt, Thijs J. H.; Gross, Joachim</p> <p>2013-07-01</p> <p>An extension of Onsager's second virial theory is developed to describe the isotropic-nematic phase transition of tangent hard-sphere chain fluids. Flexibility is introduced by the rod-coil model. The effect of chain-flexibility on the second virial coefficient is described using an accurate, analytical approximation for the orientation-dependent pair-excluded volume. The use of this approximation allows for an analytical treatment of intramolecular flexibility by using a single pure-component parameter. Two approaches to approximate the effect of the higher virial coefficients are considered, i.e., the Vega-Lago rescaling and Scaled Particle Theory (SPT). The Onsager trial function is employed to describe the orientational distribution function. Theoretical predictions for the equation of state and orientational order parameter are tested against the results from Monte Carlo (MC) simulations. For linear chains of length 9 and longer, theoretical results are in excellent agreement with MC data. For smaller chain lengths, small errors introduced by the approximation of the higher virial coefficients become apparent, leading to a small under- and overestimation of the pressure and density difference at the phase transition, respectively. For rod-coil fluids of reasonable rigidity, a quantitative comparison between theory and MC simulations is obtained. 
For more flexible chains, however, both the Vega-Lago rescaling and SPT lead to a small underestimation of the location of the phase transition.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/12286169','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/12286169"><span>The assumption of equilibrium in models of migration.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Schachter, J; Althaus, P G</p> <p>1993-02-01</p> <p>In recent articles Evans (1990) and Harrigan and McGregor (1993) (hereafter HM) scrutinized the equilibrium model of migration presented in a 1989 paper by Schachter and Althaus. This model used standard microeconomics to analyze gross interregional migration flows based on the assumption that gross flows are in approximate equilibrium. HM criticized the model as theoretically untenable, while Evans summoned empirical as well as theoretical objections. HM claimed that equilibrium of gross migration flows could be ruled out on theoretical grounds. They argued that the absence of net migration requires either that all regions have equal populations or that unsustainable regional migration propensities must obtain. In fact, some moves are interregional and others are intraregional. It does not follow, however, that the number of interregional migrants will be larger for the more populous region. Alternatively, a country could be divided into a large number of small regions that have equal populations. With uniform propensities to move, each of these analytical regions would experience zero net migration in equilibrium. Hence, the condition that net migration equal zero is entirely consistent with unequal distributions of population across regions. 
The criticisms of Evans were based both on flawed reasoning and on misinterpretation of the results of a number of econometric studies. His reasoning assumed that the existence of demand shifts as found by Goldfarb and Yezer (1987) and Topel (1986) invalidated the equilibrium model. The equilibrium never really obtains exactly, but economic modeling of migration properly begins with a simple equilibrium model of the system. A careful reading of the papers Evans cited in support of his position showed that in fact they affirmed rather than denied the appropriateness of equilibrium modeling. Zero net migration together with nonzero gross migration are not theoretically incompatible with regional heterogeneity of population, wages, or amenities.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFM.H41G1440C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFM.H41G1440C"><span>Understanding and predicting changing use of groundwater with climate and other uncertainties: a Bayesian approach</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Costa, F. A. F.; Keir, G.; McIntyre, N.; Bulovic, N.</p> <p>2015-12-01</p> <p>Most groundwater supply bores in Australia do not have flow metering equipment and so regional groundwater abstraction rates are not well known. Past estimates of unmetered abstraction for regional numerical groundwater modelling typically have not attempted to quantify the uncertainty inherent in the estimation process in detail. In particular, the spatial properties of errors in the estimates are almost always neglected. 
Here, we apply Bayesian spatial models to estimate these abstractions at a regional scale, using the state-of-the-art computationally inexpensive approaches of integrated nested Laplace approximation (INLA) and stochastic partial differential equations (SPDE). We examine a case study in the Condamine Alluvium aquifer in southern Queensland, Australia; even in this comparatively data-rich area with extensive groundwater abstraction for agricultural irrigation, approximately 80% of bores do not have reliable metered flow records. Additionally, the metering data in this area are characterised by complicated statistical features, such as zero-valued observations, non-normality, and non-stationarity. While this precludes the use of many classical spatial estimation techniques, such as kriging, our model (using the R-INLA package) is able to accommodate these features. We use a joint model to predict both probability and magnitude of abstraction from bores in space and time, and examine the effect of a range of high-resolution gridded meteorological covariates upon the predictive ability of the model. Deviance Information Criterion (DIC) scores are used to assess a range of potential models, which reward good model fit while penalising excessive model complexity. We conclude that maximum air temperature (as a reasonably effective surrogate for evapotranspiration) is the most significant single predictor of abstraction rate; and that a significant spatial effect exists (represented by the SPDE approximation of a Gaussian random field with a Matérn covariance function). 
Our final model adopts air temperature, solar exposure, and normalized difference vegetation index (NDVI) as covariates, shows good agreement with previous estimates at a regional scale, and additionally offers rigorous quantification of uncertainty in the estimate.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19930013034','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19930013034"><span>Truth-Valued-Flow Inference (TVFI) and its applications in approximate reasoning</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Wang, Pei-Zhuang; Zhang, Hongmin; Xu, Wei</p> <p>1993-01-01</p> <p>The framework of the theory of Truth-valued-flow Inference (TVFI) is introduced. Even though there are dozens of papers presented on fuzzy reasoning, we think it is still needed to explore a rather unified fuzzy reasoning theory which has the following two features: (1) it is simplified enough to be executed feasibly and easily; and (2) it is well structural and well consistent enough that it can be built into a strict mathematical theory and is consistent with the theory proposed by L.A. Zadeh. TVFI is one of the fuzzy reasoning theories that satisfies the above two features. 
It presents inference in the form of networks and naturally views inference as a process of truth values flowing among propositions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5743441','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5743441"><span>In defence of model-based inference in phylogeography</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Beaumont, Mark A.; Nielsen, Rasmus; Robert, Christian; Hey, Jody; Gaggiotti, Oscar; Knowles, Lacey; Estoup, Arnaud; Panchal, Mahesh; Corander, Jukka; Hickerson, Mike; Sisson, Scott A.; Fagundes, Nelson; Chikhi, Lounès; Beerli, Peter; Vitalis, Renaud; Cornuet, Jean-Marie; Huelsenbeck, John; Foll, Matthieu; Yang, Ziheng; Rousset, Francois; Balding, David; Excoffier, Laurent</p> <p>2017-01-01</p> <p>Recent papers have promoted the view that model-based methods in general, and those based on Approximate Bayesian Computation (ABC) in particular, are flawed in a number of ways, and are therefore inappropriate for the analysis of phylogeographic data. These papers further argue that Nested Clade Phylogeographic Analysis (NCPA) offers the best approach in statistical phylogeography. In order to remove the confusion and misconceptions introduced by these papers, we justify and explain the reasoning behind model-based inference. We argue that ABC is a statistically valid approach, alongside other computational statistical techniques that have been successfully used to infer parameters and compare models in population genetics. We also examine the NCPA method and highlight numerous deficiencies, whether it is used with a single locus or with multiple loci. 
We further show that the ages of clades are carelessly used to infer the ages of demographic events, and that these ages are estimated under a simple model of panmixia and population stationarity but are then used under different and unspecified models to test hypotheses, a usage that invalidates these testing procedures. We conclude by encouraging researchers to study and use model-based inference in population genetics. PMID:29284924</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/14663849','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/14663849"><span>Using multi-class queuing network to solve performance models of e-business sites.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zheng, Xiao-ying; Chen, De-ren</p> <p>2004-01-01</p> <p>Because e-business serves a variety of customers with different navigational patterns and demands, a multi-class queuing network is a natural performance model for it. The open multi-class queuing network (QN) models are based on the assumption that no service center is saturated as a result of the combined loads of all the classes. Several formulas are used to calculate performance measures, including throughput, residence time, queue length, response time, and the average number of requests. The solution technique for closed multi-class QN models is an approximate mean value analysis (MVA) algorithm based on three key equations, because the exact algorithm has enormous time and space requirements. As mixed multi-class QN models include some open and some closed classes, the open classes should be eliminated to create a closed multi-class QN so that the closed-model algorithm can be applied. Some corresponding examples are given to show how to apply the algorithms mentioned in this article. 
These examples indicate that a multi-class QN is a reasonably accurate model of e-business and can be solved efficiently.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1357929-stochastic-flow-capturing-model-optimize-location-fast-charging-stations-uncertain-electric-vehicle-flows','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1357929-stochastic-flow-capturing-model-optimize-location-fast-charging-stations-uncertain-electric-vehicle-flows"><span>A stochastic flow-capturing model to optimize the location of fast-charging stations with uncertain electric vehicle flows</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Wu, Fei; Sioshansi, Ramteen</p> <p>2017-05-04</p> <p>Here, we develop a model to optimize the location of public fast charging stations for electric vehicles (EVs). A difficulty in planning the placement of charging stations is uncertainty in where EV charging demands appear. For this reason, we use a stochastic flow-capturing location model (SFCLM). A sample-average approximation method and an averaged two-replication procedure are used to solve the problem and estimate the solution quality. We demonstrate the use of the SFCLM using a Central-Ohio based case study. We find that most of the stations built are concentrated around the urban core of the region. As the number of stations built increases, some appear on the outskirts of the region to provide an extended charging network. We find that the sets of optimal charging station locations as a function of the number of stations built are approximately nested. 
We demonstrate the benefits of the charging-station network in terms of how many EVs are able to complete their daily trips by charging midday: six public charging stations allow at least 60% of EVs that could not otherwise complete their daily tours to do so. We finally compare the SFCLM to a deterministic model, in which EV flows are set equal to their expected values. We show that if a limited number of charging stations are to be built, the SFCLM outperforms the deterministic model. As the number of stations to be built increases, the SFCLM and deterministic model select very similar station locations.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_15 --> <div id="page_16" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="301"> <li> <p><a target="_blank" rel="noopener noreferrer" 
onclick="trackOutboundLink('https://eric.ed.gov/?q=aristotelian+AND+theory&pg=6&id=EJ560145','ERIC'); return false;" href="https://eric.ed.gov/?q=aristotelian+AND+theory&pg=6&id=EJ560145"><span>Toward an Aristotelian Model of Teacher Reasoning.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Orton, Robert E.</p> <p>1997-01-01</p> <p>Utilizes Aristotle's three-way distinctions between theory, practice, and production to describe a balanced model of teacher reasoning. Reviews differing models of teacher reasoning that emphasize the role of contemplation and subject-matter representations. Uses the Aristotelian model to point toward a normative vision of teacher reasoning. (MJP)</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.gpo.gov/fdsys/pkg/CFR-2010-title29-vol3/pdf/CFR-2010-title29-vol3-sec778-217.pdf','CFR'); return false;" href="https://www.gpo.gov/fdsys/pkg/CFR-2010-title29-vol3/pdf/CFR-2010-title29-vol3-sec778-217.pdf"><span>29 CFR 778.217 - Reimbursement for expenses.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.gpo.gov/fdsys/browse/collectionCfr.action?selectedYearFrom=2010&page.go=Go">Code of Federal Regulations, 2010 CFR</a></p> <p></p> <p>2010-07-01</p> <p>.... (2) The actual or reasonably approximate amount expended by an employee in purchasing, laundering or... expenses, such as taxicab fares, incurred while traveling on the employer's business. 
(4) “Supper money”, a...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1261553-low-order-modeling-internal-heat-transfer-biomass-particle-pyrolysis','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1261553-low-order-modeling-internal-heat-transfer-biomass-particle-pyrolysis"><span>Low-order modeling of internal heat transfer in biomass particle pyrolysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Wiggins, Gavin M.; Daw, C. Stuart; Ciesielski, Peter N.</p> <p>2016-05-11</p> <p>We present a computationally efficient, one-dimensional simulation methodology for biomass particle heating under conditions typical of fast pyrolysis. Our methodology is based on identifying the rate limiting geometric and structural factors for conductive heat transport in biomass particle models with realistic morphology to develop low-order approximations that behave appropriately. Comparisons of transient temperature trends predicted by our one-dimensional method with three-dimensional simulations of woody biomass particles reveal good agreement, if the appropriate equivalent spherical diameter and bulk thermal properties are used. 
Here, we conclude that, for particle sizes and heating regimes typical of fast pyrolysis, it is possible to simulate biomass particle heating with reasonable accuracy and minimal computational overhead, even when variable size, aspherical shape, anisotropic conductivity, and complex, species-specific internal pore geometry are incorporated.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1352756','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1352756"><span></span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Wereszczak, Andrew A.; Emily Cousineau, J.; Bennion, Kevin</p> <p></p> <p>The apparent thermal conductivity of packed copper wire test specimens was measured parallel and perpendicular to the axis of the wire using laser flash, transient plane source, and transmittance test methods. Approximately 50% wire packing efficiency was produced in the specimens using either 670- or 925-μm-diameter copper wires that both had an insulation coating thickness of 37 μm. The interstices were filled with a conventional varnish material and also contained some remnant porosity. The apparent thermal conductivity perpendicular to the wire axis was about 0.5–1 W/mK, whereas it was over 200 W/mK in the parallel direction.
The Kanzaki model and a finite element analysis (FEA) model were found to reasonably predict the apparent thermal conductivity perpendicular to the wires, although thermal conductivity percolation from nonideal wire packing may cause both models to underestimate it.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1980ccmd.book.....T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1980ccmd.book.....T"><span>Core/corona modeling of diode-imploded annular loads</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Terry, R. E.; Guillory, J. U.</p> <p>1980-11-01</p> <p>The effects of a tenuous exterior plasma corona with anomalous resistivity on the compression and heating of a hollow, collisional aluminum z-pinch plasma are predicted by a one-dimensional code. As the interior ("core") plasma is imploded by its axial current, the energy exchange between core and corona determines the current partition. Under the conditions of rapid core heating and compression, the increase in coronal current provides a trade-off between radial acceleration and compression, which reduces the implosion forces and softens the pitch. Combined with a heuristic account of energy and momentum transport in the strongly coupled core plasma and an approximate radiative loss calculation including Al line, recombination and Bremsstrahlung emission, the current model can provide a reasonably accurate description of imploding annular plasma loads that remain azimuthally symmetric.
The implications for optimization of generator load coupling are examined.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1270780-low-order-modeling-internal-heat-transfer-biomass-particle-pyrolysis','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1270780-low-order-modeling-internal-heat-transfer-biomass-particle-pyrolysis"><span>Low-Order Modeling of Internal Heat Transfer in Biomass Particle Pyrolysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Wiggins, Gavin M.; Ciesielski, Peter N.; Daw, C. Stuart</p> <p>2016-06-16</p> <p>We present a computationally efficient, one-dimensional simulation methodology for biomass particle heating under conditions typical of fast pyrolysis. Our methodology is based on identifying the rate limiting geometric and structural factors for conductive heat transport in biomass particle models with realistic morphology to develop low-order approximations that behave appropriately. Comparisons of transient temperature trends predicted by our one-dimensional method with three-dimensional simulations of woody biomass particles reveal good agreement, if the appropriate equivalent spherical diameter and bulk thermal properties are used. 
We conclude that, for particle sizes and heating regimes typical of fast pyrolysis, it is possible to simulate biomass particle heating with reasonable accuracy and minimal computational overhead, even when variable size, aspherical shape, anisotropic conductivity, and complex, species-specific internal pore geometry are incorporated.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1419477-geant4-evaluation-hornyak-button-two-candidate-detectors-treat-hodoscope','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1419477-geant4-evaluation-hornyak-button-two-candidate-detectors-treat-hodoscope"><span></span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Fu, Wenkai; Ghosh, Priyarshini; Harrison, Mark</p> <p></p> <p>The performance of traditional Hornyak buttons and two proposed variants for fast-neutron hodoscope applications was evaluated using Geant4. The Hornyak button is a ZnS(Ag)-based device previously deployed at the Idaho National Laboratory's TRansient REActor Test Facility (better known as TREAT) for monitoring fast neutrons emitted during pulsing of fissile fuel samples. Past use of these devices relied on pulse-shape discrimination to reduce the significant levels of background Cherenkov radiation. Proposed are two simple designs that reduce the overall light guide mass (here, polymethyl methacrylate or PMMA), employ silicon photomultipliers (SiPMs), and can be operated using pulse-height discrimination alone to suppress background noise to acceptable levels. Geant4 was first used to model a traditional Hornyak button, and for assumed, hodoscope-like conditions, an intrinsic efficiency of 0.35% for mono-directional fission neutrons was predicted.
The predicted efficiency is in reasonably good agreement with experimental data from the literature and hence served to validate the physics models and approximations employed. Geant4 models were then developed to optimize the materials and geometries of two alternatives to the Hornyak button, one based on a homogeneous mixture of ZnS(Ag) and PMMA, and one based on alternating layers of ZnS(Ag) and PMMA oriented perpendicular to the incident neutron beam. For the same radiation environment, optimized, 5-cm long (along the beam path) devices of the homogeneous and layered designs were predicted to have efficiencies of approximately 1.3% and 3.3%, respectively. For longer devices, i.e., lengths larger than 25 cm, these efficiencies were shown to peak at approximately 2.2% and 5.9%, respectively. Furthermore, both designs were shown to discriminate Cherenkov noise intrinsically by using an appropriate pulse-height discriminator level, i.e., pulse-shape discrimination is not needed for these devices.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018InPhT..88...23X','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018InPhT..88...23X"><span>Emissivity model of steel 430 during the growth of oxide layer at 800-1100 K and 1.5 μm</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Xing, Wei; Shi, Deheng; Sun, Jinfeng; Zhu, Zunlue</p> <p>2018-01-01</p> <p>This work studied the variation in spectral emissivity with the growth of the oxide layer at different temperatures. To this end, we measured the normal spectral emissivity during the growth of the oxide layer on the sample surface at a wavelength of 1.5 μm over the temperature range 800-1100 K.
In the experiment, the temperature was measured by two thermocouples welded symmetrically onto the front surface of the specimens, and the average of their readings was taken as the true temperature. The detector was aligned as nearly perpendicular to the specimen surface as possible. The variation in spectral emissivity with the growth of the oxide layer was evaluated at each fixed temperature. Altogether, 11 emissivity models were evaluated; in general, models with more parameters fit the data more accurately. On the whole, all the PEE models, the four-parameter LEE model, and the five-parameter PFE, PLE, and LEE models fit this kind of variation well. The variation in spectral emissivity with temperature was also determined at fixed thicknesses of the oxide film, and almost all the models studied in this paper could evaluate this variation accurately. Approximate models of spectral emissivity as a function of temperature and oxide-layer thickness were proposed. Strong oscillations of the spectral emissivity were observed and were attributed to interference between the radiation emitted by the oxide layer and that emitted by the substrate.
The uncertainties in the temperature of steel 430 caused by surface oxidation alone were approximately 4.1-10.7 K in this experiment.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/644231-monte-carlo-simulations-medical-imaging-modalities','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/644231-monte-carlo-simulations-medical-imaging-modalities"><span>Monte Carlo simulations of medical imaging modalities</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Estes, G.P.</p> <p></p> <p>Because continuous-energy Monte Carlo radiation transport calculations can be nearly exact simulations of physical reality (within data limitations, geometric approximations, transport algorithms, etc.), it follows that one should be able to closely approximate the results of many experiments from first-principles computations. This line of reasoning has led to various MCNP studies that involve simulations of medical imaging modalities and other visualization methods such as radiography, Anger camera, computerized tomography (CT) scans, and SABRINA particle track visualization. It is the intent of this paper to summarize some of these imaging simulations in the hope of stimulating further work, especially as computer power increases. Improved interpretation and prediction of medical images should ultimately lead to enhanced medical treatments.
It is also reasonable to assume that such computations could be used to design new or more effective imaging instruments.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19920016856','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19920016856"><span>In defense of compilation: A response to Davis' form and content in model-based reasoning</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Keller, Richard</p> <p>1990-01-01</p> <p>In a recent paper entitled 'Form and Content in Model Based Reasoning', Randy Davis argues that model based reasoning research aimed at compiling task specific rules from underlying device models is mislabeled, misguided, and diversionary. Some of Davis' claims are examined, and his basic conclusions about the value of compilation research to the model based reasoning community are challenged. In particular, Davis' claim that model based reasoning is exempt from the efficiency benefits provided by knowledge compilation techniques is refuted. In addition, several misconceptions about the role of representational form in compilation are clarified.
It is concluded that compilation techniques have the potential to make a substantial contribution to solving tractability problems in model based reasoning.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/6501028','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/6501028"><span>Inspiratory flow pattern in humans.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lafortuna, C L; Minetti, A E; Mognoni, P</p> <p>1984-10-01</p> <p>The theoretical estimation of the mechanical work of breathing during inspiration at rest is based on the common assumption that the inspiratory airflow wave is a sine function of time. Different analytical studies have pointed out that from an energetic point of view a rectangular wave is more economical than a sine wave. Visual inspection of inspiratory flow waves recorded during exercise in humans and various animals suggests that a trend toward a rectangular flow wave may be a possible systematic response of the respiratory system. To test this hypothesis, the harmonic content of inspiratory flow waves recorded in six healthy subjects at rest, during exercise hyperventilation, and during a maximum voluntary ventilation (MVV) maneuver was evaluated by Fourier analysis, and the results were compared with those obtained on sinusoidal and rectangular models. The dynamic work inherent in the experimental waves and in the sine-wave model was practically the same at rest; during exercise hyperventilation and MVV, the experimental wave was approximately 16-20% more economical than the sinusoidal one. It was concluded that even though at rest the sinusoidal model is a reasonably good approximation of inspiratory flow, during exercise and MVV a physiological controller is probably operating in humans that can select a more economical inspiratory pattern.
Other peculiarities of the airflow wave during hyperventilation and some optimization criteria are also discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1991SPIE.1569..474S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1991SPIE.1569..474S"><span>Fuzzy logic and neural networks in artificial intelligence and pattern recognition</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sanchez, Elie</p> <p>1991-10-01</p> <p>With the use of fuzzy logic techniques, neural computing can be integrated into symbolic reasoning to solve complex real world problems. In fact, artificial neural networks, expert systems, and fuzzy logic systems, in the context of approximate reasoning, share common features and techniques. A Fuzzy Connectionist Expert System model is introduced, in which an artificial neural network is designed to construct the knowledge base of an expert system from training examples (this model can also be used for specification of rules in fuzzy logic control). Two types of weights are associated with the synaptic connections in an AND-OR structure: primary linguistic weights, interpreted as labels of fuzzy sets, and secondary numerical weights. Cell activation is computed through min-max fuzzy equations of the weights. Learning consists in finding the (numerical) weights and the network topology. This feedforward network is described and first illustrated in a biomedical application (medical diagnosis assistance from inflammatory-syndromes/proteins profiles).
Then, it is shown how this methodology can be utilized for handwritten pattern recognition (characters play the role of diagnoses): in a fuzzy neuron describing a number, for example, the linguistic weights represent fuzzy sets on cross-detecting lines and the numerical weights reflect the importance (or weakness) of connections between cross-detecting lines and characters.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018OcMod.121...90B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018OcMod.121...90B"><span>Linear shoaling of free-surface waves in multi-layer non-hydrostatic models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bai, Yefei; Cheung, Kwok Fai</p> <p>2018-01-01</p> <p>The capability to describe shoaling over a sloping bottom is fundamental to modeling of coastal wave transformation. The linear shoaling gradient provides a metric to measure this property in non-hydrostatic models with layer-integrated formulations. The governing equations in Boussinesq form facilitate derivation of the linear shoaling gradient, which is in the form of a [2P+2, 2P] expansion of the water depth parameter kd, with P equal to 1 for a one-layer model and (4N-4) for an N-layer model. The expansion reproduces the analytical solution from Airy wave theory at the shallow water limit and maintains a reasonable approximation up to kd = 1.2 and 2 for the one and two-layer models, respectively. Additional layers provide rapid and monotonic convergence of the shoaling gradient into deep water. Numerical experiments of wave propagation over a plane slope illustrate manifestation of the shoaling errors through the transformation processes from deep to shallow water.
Although they arise outside the zone of active wave transformation, shoaling errors accumulated from deep to intermediate water produce an appreciable impact on the wave amplitude in shallow water.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26766517','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26766517"><span>Molecular dynamics simulations of biological membranes and membrane proteins using enhanced conformational sampling algorithms.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Mori, Takaharu; Miyashita, Naoyuki; Im, Wonpil; Feig, Michael; Sugita, Yuji</p> <p>2016-07-01</p> <p>This paper reviews various enhanced conformational sampling methods and explicit/implicit solvent/membrane models, as well as their recent applications to the exploration of the structure and dynamics of membranes and membrane proteins. Molecular dynamics simulations have become an essential tool to investigate biological problems, and their success relies on proper molecular models together with efficient conformational sampling methods. The implicit representation of solvent/membrane environments is a reasonable approximation to the explicit all-atom models, considering the balance between computational cost and simulation accuracy. Implicit models can be easily combined with replica-exchange molecular dynamics methods to explore a wider conformational space of a protein. Other molecular models and enhanced conformational sampling methods are also briefly discussed. As application examples, we introduce recent simulation studies of glycophorin A, phospholamban, amyloid precursor protein, and mixed lipid bilayers and discuss the accuracy and efficiency of each simulation model and method. This article is part of a Special Issue entitled: Membrane Proteins edited by J.C.
Gumbart and Sergei Noskov. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19990103607&hterms=Reasons+Motivation&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3DReasons%2BMotivation','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19990103607&hterms=Reasons+Motivation&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3DReasons%2BMotivation"><span>Physically-Derived Dynamical Cores in Atmospheric General Circulation Models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Rood, Richard B.; Lin, Shian-Jiann</p> <p>1999-01-01</p> <p>The algorithm chosen to represent the advection in atmospheric models is often used as the primary attribute to classify the model. Meteorological models are generally classified as spectral or grid point, with the term grid point implying discretization using finite differences. These traditional approaches have a number of shortcomings that render them non-physical. That is, they provide approximate solutions to the conservation equations that do not obey the fundamental laws of physics. The most commonly discussed shortcomings are overshoots and undershoots which manifest themselves most overtly in the constituent continuity equation. For this reason many climate models have special algorithms to model water vapor advection. This talk focuses on the development of an atmospheric general circulation model which uses a consistent physically-based advection algorithm in all aspects of the model formulation. The shallow-water model is generalized to three dimensions and combined with the physics parameterizations of NCAR's Community Climate Model. 
The scientific motivation for the development is to increase the integrity of the underlying fluid dynamics so that the physics terms can be more effectively isolated, examined, and improved. The expected benefits of the new model are discussed and results from the initial integrations will be presented.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19990115820&hterms=Reasons+Motivation&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3DReasons%2BMotivation','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19990115820&hterms=Reasons+Motivation&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3DReasons%2BMotivation"><span>Physically-Derived Dynamical Cores in Atmospheric General Circulation Models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Rood, Richard B.; Lin, Shian-Jiann</p> <p>1999-01-01</p> <p>The algorithm chosen to represent the advection in atmospheric models is often used as the primary attribute to classify the model. Meteorological models are generally classified as spectral or grid point, with the term grid point implying discretization using finite differences. These traditional approaches have a number of shortcomings that render them non-physical. That is, they provide approximate solutions to the conservation equations that do not obey the fundamental laws of physics. The most commonly discussed shortcomings are overshoots and undershoots which manifest themselves most overtly in the constituent continuity equation. For this reason many climate models have special algorithms to model water vapor advection. This talk focuses on the development of an atmospheric general circulation model which uses a consistent physically-based advection algorithm in all aspects of the model formulation.
The shallow-water model of Lin and Rood (QJRMS, 1997) is generalized to three dimensions and combined with the physics parameterizations of NCAR's Community Climate Model. The scientific motivation for the development is to increase the integrity of the underlying fluid dynamics so that the physics terms can be more effectively isolated, examined, and improved. The expected benefits of the new model are discussed and results from the initial integrations will be presented.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2003PhDT........41A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2003PhDT........41A"><span>The future of nuclear power: A world-wide perspective</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Aktar, Ismail</p> <p></p> <p>This study analyzes the future of commercial nuclear electric generation worldwide using the Environmental Kuznets Curve (EKC) concept. The Tobit panel data estimation technique is applied to analyze the data between 1980 and 1998 for 105 countries. The EKC hypothesis implies that low-income countries increase their nuclear reliance in total electric production whereas high-income countries decrease it. Hence, we expect high-income countries to shut down existing nuclear reactors and/or not build new ones. We consider two reasons for shutdowns: economic concerns or political/environmental concerns. To distinguish these two effects, the reasons for shutdown are also investigated using the Hazard Model technique. The load factor of a reactor is used as a proxy for an economic reason to shut it down: if a shut-down reactor had a high load factor, the closure can be attributed to political/environmental concerns rather than economic ones.
Only countries with nuclear power are considered in this model. Two data sets are created: in the first, a single entry is created for each reactor as of 1998, whereas in the second, multiple entries are created for each reactor from 1980 to 1998. The dependent variable takes the value 1 if the reactor is operational and 0 if it has been shut down. The empirical findings provide strong evidence for an EKC relationship in commercial nuclear electric generation. Furthermore, greater natural-resource endowments favor alternative electric generation methods over nuclear power. An economic-freedom index used as an institutional variable suggests that, as expected, the higher the economic freedom, the lower the nuclear electric generation. The model does not support the idea of cutting carbon dioxide emissions by increasing the nuclear share. The Hazard Model findings suggest that the higher the load factor, the less likely a reactor is to shut down; if a high-load-factor reactor is nevertheless permanently closed, the closure can be attributed to political hostility toward nuclear power. The logit model also yields projections of which reactors are most and least likely to be shut down, and the residuals of the EKC model project which countries are most likely to increase or decrease their nuclear reliance.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1994JPP....10..198B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1994JPP....10..198B"><span>Approximate similarity principle for a full-scale STOVL ejector</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Barankiewicz, Wendy S.; Perusek, Gail P.; Ibrahim, Mounir B.</p> <p>1994-03-01</p> <p>Full-scale ejector experiments are expensive and difficult to implement at engine exhaust temperatures.
For this reason the utility of using similarity principles, in particular the Munk and Prim principle for isentropic flow, was explored. Static performance test data for a full-scale thrust augmenting ejector were analyzed for primary flow temperatures up to 1560 R. At different primary temperatures, exit pressure contours were compared for similarity. A nondimensional flow parameter was then used to eliminate primary nozzle temperature dependence and verify similarity between the hot and cold flow experiments. Under the assumption that an appropriate similarity principle can be established, properly chosen performance parameters were found to be similar for both hot flow and cold flow model tests.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20224980','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20224980"><span>The link between rapid enigmatic amphibian decline and the globally emerging chytrid fungus.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lötters, Stefan; Kielgast, Jos; Bielby, Jon; Schmidtlein, Sebastian; Bosch, Jaime; Veith, Michael; Walker, Susan F; Fisher, Matthew C; Rödder, Dennis</p> <p>2009-09-01</p> <p>Amphibians are globally declining and approximately one-third of all species are threatened with extinction. Some of the most severe declines have occurred suddenly and for unknown reasons in apparently pristine habitats. It has been hypothesized that these "rapid enigmatic declines" are the result of a panzootic of the disease chytridiomycosis caused by the globally emerging amphibian chytrid fungus. Using a Species Distribution Model, we identified the potential distribution of this pathogen. Areas and species from which rapid enigmatic declines are known significantly overlap with those of highest environmental suitability for the chytrid fungus.
We confirm the plausibility of a link between rapid enigmatic decline in worldwide amphibian species and epizootic chytridiomycosis.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19362037','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19362037"><span>Reason, emotion and decision-making: risk and reward computation with feeling.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Quartz, Steven R</p> <p>2009-05-01</p> <p>Many models of judgment and decision-making posit distinct cognitive and emotional contributions to decision-making under uncertainty. Cognitive processes typically involve exact computations according to a cost-benefit calculus, whereas emotional processes typically involve approximate, heuristic processes that deliver rapid evaluations without mental effort. However, it remains largely unknown what specific parameters of uncertain decision the brain encodes, the extent to which these parameters correspond to various decision-making frameworks, and their correspondence to emotional and rational processes. 
Here, I review research suggesting that emotional processes encode in a precise quantitative manner the basic parameters of financial decision theory, indicating a reorientation of emotional and cognitive contributions to risky choice.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_16 --> <div id="page_17" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="321"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19750027116&hterms=planets+orbit+sun&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D60%26Ntt%3DWhy%2Bplanets%2Borbit%2Bsun','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19750027116&hterms=planets+orbit+sun&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D60%26Ntt%3DWhy%2Bplanets%2Borbit%2Bsun"><span>A model for accretion of the terrestrial planets</span></a></p> <p><a
target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Weidenschilling, S. J.</p> <p>1974-01-01</p> <p>One possible origin of the terrestrial planets involves their formation by gravitational accretion of particles originally in Keplerian orbits about the sun. Some implications of this theory are considered. A formal expression for the rate of mass accretion by a planet is developed. The formal singularity of the gravitational collision cross section for low relative velocities is shown to be without physical significance when the accreting bodies are in heliocentric orbits. The distribution of particle velocities relative to an accreting planet is considered; the mean velocity increases with time. The internal temperature of an accreting planet is shown to depend simply on the accretion rate. A simple and physically reasonable approximate expression for a planetary accretion rate is proposed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014ChPhB..23f3402Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014ChPhB..23f3402Z"><span>Second-order Born calculation of coplanar symmetric (e, 2e) process on Mg</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhang, Yong-Zhi; Wang, Yang; Zhou, Ya-Jun</p> <p>2014-06-01</p> <p>The second-order distorted wave Born approximation (DWBA) method is employed to investigate the triple differential cross sections (TDCS) of coplanar doubly symmetric (e, 2e) collisions for magnesium at excess energies of 6 eV-20 eV. Compared with the standard first-order DWBA calculations, the inclusion of the second-order Born term in the scattering amplitude improves the degree of agreement with experiments, especially in the backward-scattering region of the TDCS.
This indicates that the present second-order Born term is capable of giving a reasonable correction to the DWBA model in studying coplanar symmetric (e, 2e) problems of two-valence-electron targets in the low-energy range.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26964112','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26964112"><span>Control of a Robot Dancer for Enhancing Haptic Human-Robot Interaction in Waltz.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hongbo Wang; Kosuge, K</p> <p>2012-01-01</p> <p>Haptic interaction between a human leader and a robot follower in waltz is studied in this paper. An inverted pendulum model is used to approximate the human's body dynamics. With feedback from the force sensor and laser range finders, the robot is able to estimate the human leader's state by using an extended Kalman filter (EKF). To reduce the interaction force, two robot controllers, namely, an admittance-with-virtual-force controller and an inverted pendulum controller, are proposed and evaluated in experiments. The former controller failed the experiment; reasons for the failure are explained.
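The predict/correct loop behind this kind of state estimation can be sketched in its simplest scalar form (a hypothetical 1-D Kalman filter with made-up noise values, not the paper's EKF over pendulum dynamics):

```python
# Scalar Kalman-filter sketch of the predict/correct estimation loop:
# a random-walk state model with process noise q and measurement noise r.
# All numbers are hypothetical, not the paper's pendulum-model EKF.

def kalman_step(x, p, z, q=0.01, r=0.25):
    """One predict/update cycle for scalar state x with variance p."""
    p = p + q                  # predict: uncertainty grows by q
    k = p / (p + r)            # Kalman gain: trust in the measurement
    x = x + k * (z - x)        # update: blend prediction and reading z
    p = (1.0 - k) * p          # posterior variance shrinks
    return x, p

x, p = 0.0, 1.0                          # initial guess and variance
for z in [0.9, 1.1, 1.0, 0.95, 1.05]:    # noisy readings near 1.0
    x, p = kalman_step(x, p, z)
# The estimate x is drawn toward 1.0 while the variance p contracts.
```

The extended filter in the paper replaces the identity state model with linearized pendulum dynamics, but the gain computation follows the same pattern.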
Meanwhile, the latter controller is validated by the experimental results.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19920055502&hterms=berenji&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3Dberenji','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19920055502&hterms=berenji&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3Dberenji"><span>A reinforcement learning-based architecture for fuzzy logic control</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Berenji, Hamid R.</p> <p>1992-01-01</p> <p>This paper introduces a new method for learning to refine a rule-based fuzzy logic controller. A reinforcement learning technique is used in conjunction with a multilayer neural network model of a fuzzy controller. The approximate reasoning based intelligent control (ARIC) architecture proposed here learns by updating its prediction of the physical system's behavior and fine tunes a control knowledge base. Its theory is related to Sutton's temporal difference (TD) method. Because ARIC has the advantage of using the control knowledge of an experienced operator and fine tuning it through the process of learning, it learns faster than systems that train networks from scratch.
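The TD-style prediction update that underlies this kind of learning can be sketched as follows (a minimal TD(0) illustration on a hypothetical three-state chain, not the ARIC architecture itself):

```python
# Minimal TD(0) sketch of the prediction-refinement idea: move each
# state's value estimate toward the one-step bootstrapped target.
# The three-state chain and all constants are hypothetical.

def td0_update(v, state, reward, next_state, alpha=0.1, gamma=0.9):
    """One temporal-difference update of the value table v."""
    target = reward + gamma * v[next_state]
    v[state] += alpha * (target - v[state])
    return v

# Toy episode, repeated: 0 -> 1 (reward 0), then 1 -> 2 (reward 1).
v = [0.0, 0.0, 0.0]
for _ in range(200):
    v = td0_update(v, 0, 0.0, 1)
    v = td0_update(v, 1, 1.0, 2)
# v[1] approaches 1.0 and v[0] approaches gamma * v[1] = 0.9.
```

ARIC applies the same bootstrapped-prediction idea to the weights of a neural model of a fuzzy controller rather than to a tabular value function.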
The approach is applied to a cart-pole balancing system.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA285331','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA285331"><span>Nonlinear Ocean Waves</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>1994-09-30</p> <p>equation due to Kadomtsev & Petviashvili (1970), ∂_x(∂_t u + 6u ∂_x u + ∂_x^3 u) + 3 ∂_y^2 u = 0, (KP) is known to describe approximately the evolution of...to be stable to perturbations, and their amplitudes need not be small. The Kadomtsev-Petviashvili (KP) equation is known to describe approximately the...predicted with reasonable accuracy by a family of exact solutions of an equation due to Kadomtsev and Petviashvili (1970): (f_t + 6 f f_x + f_xxx)_x + 3 f_yy = 0</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA573684','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA573684"><span>Development of Parameters for the Collection and Analysis of Lidar at Military Munitions Sites</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2010-01-01</p> <p>and inertial measurement unit (IMU) equipment is used to locate the sensor in the air.
The time of return of the laser signal allows for the...approximately 15 centimeters (cm) on soft ground surfaces and a horizontal accuracy of approximately 60 cm, both compared to surveyed control points...provide more accurate topographic data than other sources, at a reasonable cost compared to alternatives such as ground survey or photogrammetry</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1987SPIE..851..141D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1987SPIE..851..141D"><span>Fuzziness In Approximate And Common-Sense Reasoning In Knowledge-Based Robotics Systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dodds, David R.</p> <p>1987-10-01</p> <p>Fuzzy functions, a major key to inexact reasoning, are described as they are applied to the fuzzification of robot co-ordinate systems. Linguistic-variables, a means of labelling ranges in fuzzy sets, are used as computationally pragmatic means of representing spatialization metaphors, themselves an extraordinarily rich basis for understanding concepts in orientational terms. Complex plans may be abstracted and simplified in a system which promotes conceptual planning by means of the orientational representation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MSMSE..26a5010S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MSMSE..26a5010S"><span>The influence of anisotropy on the core structure of Shockley partial dislocations within FCC materials</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Szajewski, B. A.; Hunter, A.; Luscher, D. J.; Beyerlein, I. 
J.</p> <p>2018-01-01</p> <p>Both theoretical and numerical models of dislocations often necessitate the assumption of elastic isotropy to retain analytical tractability in addition to reducing computational load. As dislocation-based models evolve towards physically realistic material descriptions, the assumption of elastic isotropy becomes increasingly worthy of examination. We present an analytical dislocation model for calculating the full dissociated core structure of dislocations within anisotropic face centered cubic (FCC) crystals as a function of the degree of material elastic anisotropy, two misfit energy densities on the γ-surface (γ_isf, γ_usf) and the remaining elastic constants. Our solution is independent of any additional features of the γ-surface. Towards this pursuit, we first demonstrate that the dependence of the anisotropic elasticity tensor on the orientation of the dislocation line within the FCC crystalline lattice is small and may be reasonably neglected for typical materials. With this approximation, explicit analytic solutions for the anisotropic elasticity tensor B for both nominally edge and screw dislocations within an FCC crystalline lattice are devised, and employed towards defining a set of effective isotropic elastic constants which reproduce fully anisotropic results but do not retain the bulk modulus. Conversely, Hill averaged elastic constants which both retain the bulk modulus and reasonably approximate the dislocation core structure are employed within subsequent numerical calculations. We examine a wide range of materials within this study, and the features of each partial dislocation core are sufficiently localized that application of discrete linear elasticity accurately describes the separation of each partial dislocation core. In addition, the local features (the partial dislocation core distribution) are well described by a Peierls-Nabarro dislocation model.
We develop a model for the displacement profile which depends upon two disparate dislocation length scales which describe the core structure: (i) the equilibrium stacking fault width between two Shockley partial dislocations, R_eq, and (ii) the maximum slip gradient, χ, of each Shockley partial dislocation. We demonstrate excellent agreement between our own analytic predictions, numerical calculations, and R_eq computed directly by both ab-initio and molecular statics methods found elsewhere within the literature. The results suggest that understanding of various plastic mechanisms, e.g., cross-slip and nucleation, may be augmented with the inclusion of elastic anisotropy.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017Sc%26Ed..26.1001D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017Sc%26Ed..26.1001D"><span>Using Computer Simulations for Promoting Model-based Reasoning. Epistemological and Educational Dimensions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Develaki, Maria</p> <p>2017-11-01</p> <p>Scientific reasoning is particularly pertinent to science education since it is closely related to the content and methodologies of science and contributes to scientific literacy. Much of the research in science education investigates the appropriate framework and teaching methods and tools needed to promote students' ability to reason and evaluate in a scientific way. This paper aims (a) to contribute to an extended understanding of the nature and pedagogical importance of model-based reasoning and (b) to exemplify how using computer simulations can support students' model-based reasoning. We first provide a background for both scientific reasoning and computer simulations, based on the relevant philosophical views and the related educational discussion.
This background suggests that the model-based framework provides an epistemologically valid and pedagogically appropriate basis for teaching scientific reasoning and for helping students develop sounder reasoning and decision-taking abilities and explains how using computer simulations can foster these abilities. We then provide some examples illustrating the use of computer simulations to support model-based reasoning and evaluation activities in the classroom. The examples reflect the procedure and criteria for evaluating models in science and demonstrate the educational advantages of their application in classroom reasoning activities.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19690000299','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19690000299"><span>Buckling Of Shells Of Revolution /BOSOR/ with various wall constructions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Almroth, B. O.; Bushnell, D.; Sobel, L. H.</p> <p>1969-01-01</p> <p>Computer program, using numerical integration and finite difference techniques, solves almost any buckling problem for shells exhibiting orthotropic behavior. 
Stability analyses can be performed with reasonable accuracy and without unduly restrictive approximations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.H33E1733E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.H33E1733E"><span>Considering the reversibility of passive and reactive transport problems: Are forward-in-time and backward-in-time models ever equivalent?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Engdahl, N.</p> <p>2017-12-01</p> <p>Backward in time (BIT) simulations of passive tracers are often used for capture zone analysis, source area identification, and generation of travel time and age distributions. The BIT approach has the potential to become an immensely powerful tool for direct inverse modeling but the necessary relationships between the processes modeled in the forward and backward models have yet to be formally established. This study explores the time reversibility of passive and reactive transport models in a variety of 2D heterogeneous domains using particle-based random walk methods for the transport and nonlinear reaction steps. Distributed forward models are used to generate synthetic observations that form the initial conditions for the backward in time models and we consider both linear-flood and point injections. The results for passive travel time distributions show that forward and backward models are not exactly equivalent but that the linear-flood BIT models are reasonable approximations. Point based BIT models fall within the travel time range of the forward models, though their distributions can be distinctive in some cases. 
The BIT approximation is not as robust when nonlinear reactive transport is considered, and we find that this reaction system is only exactly reversible under uniform flow conditions. We use a series of simplified, longitudinally symmetric, but heterogeneous, domains to illustrate the causes of these discrepancies between the two model types. Many of the discrepancies arise because diffusion is a "self-adjoint" operator, which causes mass to spread in the forward and backward models. This allows particles to enter low-velocity regions in both models, which has opposite effects in the forward and reverse models. It may be possible to circumvent some of these limitations using an anti-diffusion model to undo mixing when time is reversed, but this is beyond the capabilities of the existing Lagrangian methods.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=coastal+AND+zone&pg=4&id=ED173078','ERIC'); return false;" href="https://eric.ed.gov/?q=coastal+AND+zone&pg=4&id=ED173078"><span>The Coastal Zone: Man and Nature. An Application of the Socio-Scientific Reasoning Model.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Maul, June Paradise; And Others</p> <p></p> <p>The curriculum model described here has been designed by incorporating the socio-scientific reasoning model with a simulation design in an attempt to have students investigate the onshore impacts of Outer Continental Shelf (OCS) gas and oil development.
The socio-scientific reasoning model incorporates a logical/physical reasoning component as…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3766254','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3766254"><span>Analyzing the impact of social factors on homelessness: a Fuzzy Cognitive Map approach</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2013-01-01</p> <p>Background The forces which affect homelessness are complex and often interactive in nature. Social forces such as addictions, family breakdown, and mental illness are compounded by structural forces such as lack of available low-cost housing, poor economic conditions, and insufficient mental health services. Together these factors impact levels of homelessness through their dynamic relations. Historic models, which are static in nature, have only been marginally successful in capturing these relationships. Methods Fuzzy Logic (FL) and fuzzy cognitive maps (FCMs) are particularly suited to the modeling of complex social problems, such as homelessness, due to their inherent ability to model intricate, interactive systems often described in vague conceptual terms and then organize them into a specific, concrete form (i.e., the FCM) which can be readily understood by social scientists and others. Using FL we converted information, taken from recently published, peer reviewed articles, for a select group of factors related to homelessness and then calculated the strength of influence (weights) for pairs of factors. We then used these weighted relationships in a FCM to test the effects of increasing or decreasing individual or groups of factors. 
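The core FCM computation described here can be sketched as a sigmoid-squashed state update (a minimal illustration; the three-concept map, its weights, and the update rule A_j ← f(A_j + Σ_i A_i·w_ij) are common conventions, not the homelessness model itself):

```python
# Sketch of one fuzzy-cognitive-map inference step, using the common
# sigmoid-squashed update A_j <- f(A_j + sum_i A_i * w_ij). The
# three-concept map and its weights are hypothetical, not the
# homelessness model from the paper.
import math

def fcm_step(activations, weights):
    """One synchronous FCM update over all concepts."""
    n = len(activations)
    new = []
    for j in range(n):
        s = activations[j] + sum(activations[i] * weights[i][j] for i in range(n))
        new.append(1.0 / (1.0 + math.exp(-s)))  # squash into (0, 1)
    return new

# Hypothetical map: concept 0 drives concept 1, which drives concept 2.
W = [[0.0, 0.7, 0.0],
     [0.0, 0.0, 0.5],
     [0.0, 0.0, 0.0]]
state = [0.8, 0.1, 0.1]
for _ in range(30):              # iterate until the map settles
    state = fcm_step(state, W)
```

Scenario testing of the kind the paper describes amounts to clamping one concept high or low and re-running the iteration to see where the map settles.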
Results of these trials were explainable according to current empirical knowledge related to homelessness. Results Prior graphic maps of homelessness have been of limited use due to the dynamic nature of the concepts related to homelessness. The FCM technique captures greater degrees of dynamism and complexity than static models, allowing relevant concepts to be manipulated and allowed to interact. This, in turn, allows for a much more realistic picture of homelessness. Through network analysis of the FCM, we determined that Education exerts the greatest force in the model and hence impacts the dynamism and complexity of a social problem such as homelessness. Conclusions The FCM built to model the complex social system of homelessness reasonably represented reality for the sample scenarios created. This confirmed that the model worked and that a search of peer-reviewed academic literature is a reasonable foundation upon which to build the model. Further, it was determined that the direction and strengths of relationships between concepts included in this map are a reasonable approximation of their action in reality. However, dynamic models are not without their limitations and must be acknowledged as inherently exploratory.
PMID:23971944</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013JPhD...46U5206S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013JPhD...46U5206S"><span>Characterization and global modelling of low-pressure hydrogen-based RF plasmas suitable for surface cleaning processes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Škoro, Nikola; Puač, Nevena; Lazović, Saša; Cvelbar, Uroš; Kokkoris, George; Gogolides, Evangelos</p> <p>2013-11-01</p> <p>In this paper we present results of measurements and global modelling of low-pressure inductively coupled H2 plasma which is suitable for surface cleaning applications. The plasma is ignited at 1 Pa in a helicon-type reactor and is characterized using optical emission measurements (optical actinometry) and electrical measurements, namely Langmuir and catalytic probes. By comparing catalytic probe data obtained at the centre of the chamber with optical actinometry results, an approximate calibration of the actinometry method as a semi-quantitative measure of H density was achieved. Coefficients for conversion of actinometric ratios to H densities are tabulated and provided. The approximate validity region of the simple actinometry formula for low-pressure H2 plasma is discussed in the online supplementary data (stacks.iop.org/JPhysD/46/475206/mmedia). Best agreement with catalytic probe results was obtained for (Hβ, Ar750) and (Hβ, Ar811) actinometric line pairs. Additionally, concentrations of electrons and ions as well as plasma potential, electron temperature and ion fluxes were measured in the chamber centre at different plasma powers using a Langmuir probe. Moreover, a global model of an inductively coupled plasma was formulated using a compiled reaction set for an H2/Ar gas mixture.
The model results compared reasonably well with the measured H-atom and charged-particle densities, and a sensitivity analysis of important input parameters was conducted. The influence of the surface recombination, ionization, and dissociation coefficients, and the ion-neutral collision cross-section on model results was demonstrated.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3650913','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3650913"><span>VAPOR-PHASE TRANSPORT OF TRICHLOROETHENE IN AN INTERMEDIATE-SCALE VADOSE-ZONE SYSTEM: RETENTION PROCESSES AND TRACER-BASED PREDICTION</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Costanza-Robinson, Molly S.; Carlson, Tyson D.; Brusseau, Mark L.</p> <p>2013-01-01</p> <p>Gas-phase miscible-displacement experiments were conducted using a large weighing lysimeter to evaluate retention processes for volatile organic compounds (VOCs) in water-unsaturated (vadose-zone) systems, and to test the utility of gas-phase tracers for predicting VOC retardation. Trichloroethene (TCE) served as a model VOC, while trichlorofluoromethane (CFM) and heptane were used as partitioning tracers to independently characterize retention by water and the air-water interface, respectively. Retardation factors for TCE ranged between 1.9 and 3.5, depending on water content. The results indicate that dissolution into the bulk water was the primary retention mechanism for TCE under all conditions studied, contributing approximately two thirds of the total measured retention. Accumulation at the air-water interface comprised a significant fraction of the observed retention for all experiments, with an average contribution of approximately 24%.
Sorption to the solid phase contributed approximately 10% to retention. Water contents and air-water interfacial areas estimated based on the CFM and heptane tracer data, respectively, were similar to independently measured values. Retardation factors for TCE predicted using the partitioning-tracer data were in reasonable agreement with the measured values. These results suggest that gas-phase tracer tests hold promise for characterizing the retention and transport of VOCs in the vadose zone. PMID:23333418</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JPhCS.824a2032S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JPhCS.824a2032S"><span>Models for Train Passenger Forecasting of Java and Sumatra</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sartono</p> <p>2017-04-01</p> <p>People tend to take public transportation to avoid heavy traffic, especially in Java. In Jakarta, the number of railway passengers exceeds the capacity of the trains at peak times. This is an opportunity as well as a challenge: if managed well, the company can profit; otherwise, it may lead to disaster. This article discusses models of train passenger numbers, with the aim of finding reasonable models for prediction over time. The Box-Jenkins method is employed to develop a basic model. This model is then compared with models obtained using exponential smoothing and regression methods. The results show that the Holt-Winters model gives better one-month, three-month, and six-month-ahead predictions for passengers in Java, while SARIMA(1,1,0)(2,0,0) is more accurate for nine-month and twelve-month forecasts.
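The kind of exponential-smoothing forecaster compared here can be sketched as a minimal additive Holt-Winters routine (the smoothing constants and the synthetic passenger series are illustrative assumptions, not the paper's fitted models):

```python
# Minimal additive Holt-Winters sketch: level, trend, and seasonal
# components updated by exponential smoothing. The smoothing constants
# and the synthetic series are hypothetical, for illustration only.

def holt_winters_additive(y, period, alpha=0.3, beta=0.1, gamma=0.2, horizon=3):
    """Return `horizon` step-ahead forecasts for series y."""
    # Initialize level and trend from the first two seasonal cycles,
    # and seasonal indices from deviations within the first cycle.
    level = sum(y[:period]) / period
    trend = (sum(y[period:2 * period]) - sum(y[:period])) / period ** 2
    season = [y[i] - level for i in range(period)]

    for t in range(len(y)):
        s = season[t % period]
        last_level = level
        level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % period] = gamma * (y[t] - level) + (1 - gamma) * s

    # h-step forecasts extrapolate the trend and reuse the last cycle.
    return [level + (h + 1) * trend + season[(len(y) + h) % period]
            for h in range(horizon)]

# Synthetic monthly passenger counts: linear growth plus a 12-month cycle.
series = [100.0 + 2.0 * t + 10.0 * ((t % 12) - 6) for t in range(48)]
forecast = holt_winters_additive(series, period=12)
```

Where statsmodels is available, `statsmodels.tsa.holtwinters.ExponentialSmoothing` provides a fitted version of the same idea, with the SARIMA alternatives in `statsmodels.tsa`.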
For Sumatra passenger forecasting, on the other hand, SARIMA(1,1,1)(0,0,2) gives a better one-month-ahead approximation, and an ARIMA model is best for three-month-ahead prediction. For the rest, a trend-seasonal linear model has the lowest RMSE for six-month, nine-month, and twelve-month-ahead forecasts.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19880015862','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19880015862"><span>Galerkin approximation for inverse problems for nonautonomous nonlinear distributed systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Banks, H. T.; Reich, Simeon; Rosen, I. G.</p> <p>1988-01-01</p> <p>An abstract framework and convergence theory is developed for Galerkin approximation for inverse problems involving the identification of nonautonomous nonlinear distributed parameter systems. A set of relatively easily verified conditions is provided which are sufficient to guarantee the existence of optimal solutions and their approximation by a sequence of solutions to a sequence of approximating finite dimensional identification problems. The approach is based on the theory of monotone operators in Banach spaces and is applicable to a reasonably broad class of nonlinear distributed systems. Operator theoretic and variational techniques are used to establish a fundamental convergence result.
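The flavor of Galerkin approximation discussed here can be illustrated on a linear model problem (a sketch only: the sine-basis discretization of -u'' = 1 below is a standard textbook example, far simpler than the nonautonomous nonlinear systems treated in the paper):

```python
# Galerkin sketch on a linear model problem: -u'' = 1 on (0, 1) with
# u(0) = u(1) = 0, discretized in the orthogonal basis sin(k*pi*x).
# A textbook illustration only, far simpler than the nonautonomous
# nonlinear systems treated in the paper.
import math

def galerkin_poisson(n_modes=15):
    """Coefficients c_k solving (stiffness) * c_k = (load) mode by mode."""
    coeffs = []
    for k in range(1, n_modes + 1):
        load = (1.0 - math.cos(k * math.pi)) / (k * math.pi)  # integral of sin(k*pi*x)
        stiffness = (k * math.pi) ** 2 / 2.0                  # integral of (d/dx sin(k*pi*x))**2
        coeffs.append(load / stiffness)
    return coeffs

def evaluate(coeffs, x):
    return sum(c * math.sin((k + 1) * math.pi * x) for k, c in enumerate(coeffs))

# The exact solution is u(x) = x(1 - x)/2, so u(0.5) = 0.125.
u_mid = evaluate(galerkin_poisson(), 0.5)
```

The convergence theory in the paper addresses exactly this passage from an infinite-dimensional problem to a sequence of finite-dimensional ones, but for nonlinear, time-dependent operators and parameter identification.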
An example involving evolution systems with dynamics described by nonstationary quasilinear elliptic operators is presented and discussed, along with some applications.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/17286472','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/17286472"><span>Simple systematization of vibrational excitation cross-section calculations for resonant electron-molecule scattering in the boomerang and impulse models.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Sarma, Manabendra; Adhikari, S; Mishra, Manoj K</p> <p>2007-01-28</p> <p>Vibrational excitation (ν_f ← ν_i) cross-sections σ_{ν_f←ν_i}(E) in resonant e-N₂ and e-H₂ scattering are calculated from transition matrix elements T_{ν_f,ν_i}(E) obtained using a Fourier transform of the cross-correlation function ⟨φ_{ν_f}(R)|ψ_{ν_i}(R,t)⟩, where ψ_{ν_i}(R,t) ≈ e^(-iH_{A₂⁻}(R)t/ℏ) φ_{ν_i}(R), with time evolution under the influence of the resonance anionic Hamiltonian H_{A₂⁻} (A₂⁻ = N₂⁻/H₂⁻) implemented using Lanczos and fast Fourier transforms. The target (A₂) vibrational eigenfunctions φ_{ν_i}(R) and φ_{ν_f}(R) are calculated using the Fourier grid Hamiltonian method applied to potential energy (PE) curves of the neutral target. Application of this simple systematization to the calculation of vibrational structure in e-N₂ and e-H₂ scattering cross-sections provides mechanistic insight into the features underlying the presence or absence of structure in those cross-sections.
The results obtained with approximate PE curves are in reasonable agreement with experimental/calculated cross-section profiles, and cross correlation functions provide a simple demarcation between the boomerang and impulse models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20140000500','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20140000500"><span>A CFD Database for Airfoils and Wings at Post-Stall Angles of Attack</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Petrilli, Justin; Paul, Ryan; Gopalarathnam, Ashok; Frink, Neal T.</p> <p>2013-01-01</p> <p>This paper presents selected results from an ongoing effort to develop an aerodynamic database from Reynolds-Averaged Navier-Stokes (RANS) computational analysis of airfoils and wings at stall and post-stall angles of attack. The data obtained from this effort will be used for validation and refinement of a low-order post-stall prediction method developed at NCSU, and to fill existing gaps in high angle of attack data in the literature. Such data could have potential applications in post-stall flight dynamics, helicopter aerodynamics and wind turbine aerodynamics. An overview of the NASA TetrUSS CFD package used for the RANS computational approach is presented. Detailed results for three airfoils are presented to compare their stall and post-stall behavior. The results for finite wings at stall and post-stall conditions focus on the effects of taper-ratio and sweep angle, with particular attention to whether the sectional flows can be approximated using two-dimensional flow over a stalled airfoil. While this approximation seems reasonable for unswept wings even at post-stall conditions, significant spanwise flow on stalled swept wings precludes the use of two-dimensional data to model sectional flows on swept wings.
Thus, further effort is needed in low-order aerodynamic modeling of swept wings at stalled conditions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26140293','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26140293"><span>Genetic evolution, plasticity, and bet-hedging as adaptive responses to temporally autocorrelated fluctuating selection: A quantitative genetic model.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Tufto, Jarle</p> <p>2015-08-01</p> <p>Adaptive responses to autocorrelated environmental fluctuations through evolution in mean reaction norm elevation and slope and an independent component of the phenotypic variance are analyzed using a quantitative genetic model. Analytic approximations expressing the mutual dependencies between all three response modes are derived and solved for the joint evolutionary outcome. Both genetic evolution in reaction norm elevation and plasticity are favored by slow temporal fluctuations, with plasticity, in the absence of microenvironmental variability, being the dominant evolutionary outcome for reasonable parameter values. For fast fluctuations, tracking of the optimal phenotype through genetic evolution and plasticity is limited. If residual fluctuations in the optimal phenotype are large and stabilizing selection is strong, selection then acts to increase the phenotypic variance (adaptive bet-hedging). Otherwise, canalizing selection occurs. If the phenotypic variance increases with plasticity through the effect of microenvironmental variability, this shifts the joint evolutionary balance away from plasticity in favor of genetic evolution. If microenvironmental deviations experienced by each individual at the time of development and selection are correlated, however, more plasticity evolves.
The adaptive significance of evolutionary fluctuations in plasticity and the phenotypic variance, transient evolution, and the validity of the analytic approximations are investigated using simulations. © 2015 The Author(s). Evolution © 2015 The Society for the Study of Evolution.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_17 --> <div id="page_18" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="341"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19920004903','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19920004903"><span>Hierarchic models for laminated plates. Ph.D. 
Thesis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Actis, Ricardo Luis</p> <p>1991-01-01</p> <p>Structural plates and shells are three-dimensional bodies, one dimension of which happens to be much smaller than the other two. Thus, the quality of a plate or shell model must be judged on the basis of how well its exact solution approximates the corresponding three-dimensional problem. Of course, the exact solution depends not only on the choice of the model but also on the topology, material properties, loading and constraints. The desired degree of approximation depends on the analyst's goals in performing the analysis. For these reasons, models have to be chosen adaptively. Hierarchic sequences of models make possible the adaptive selection of the model best suited to the purposes of a particular analysis. The principles governing the formulation of hierarchic models for laminated plates are presented. The essential features of the hierarchic models described are: (1) the exact solutions corresponding to the hierarchic sequence of models converge to the exact solution of the corresponding problem of elasticity for a fixed laminate thickness; and (2) the exact solution of each model converges to the same limit as the exact solution of the corresponding problem of elasticity as the laminate thickness approaches zero. The formulation is based on one parameter (beta) which characterizes the hierarchic sequence of models, and a set of constants whose influence was assessed by a numerical sensitivity study. The recommended selection of these constants results in the number of fields increasing by three for each increment in the power of beta. Numerical examples analyzed with the proposed sequence of models are included and good correlation with the reference solutions was found. 
Results were obtained for laminated strips (plates in cylindrical bending) and for square and rectangular plates with uniform loading and with homogeneous boundary conditions. Cross-ply and angle-ply laminates were evaluated and the results compared with those of MSC/PROBE. Hierarchic models make it possible to compute any engineering quantity to an arbitrary level of precision within the framework of the theory of elasticity.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014JPhCS.513e2030C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014JPhCS.513e2030C"><span>Evaluating Predictive Models of Software Quality</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ciaschini, V.; Canaparo, M.; Ronchieri, E.; Salomoni, D.</p> <p>2014-06-01</p> <p>Applications from the High Energy Physics scientific community are constantly growing and are implemented by a large number of developers. This implies a strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this. Stability and predictability are of paramount importance; in addition, a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to obtain an approximation of the risk before releasing a program, so as to deliver only software with a risk lower than an agreed threshold. In this article we evaluated two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent of discovering the risk factor of each product and comparing it with its real history. 
We attempted to determine whether the models reasonably map reality for the applications under evaluation, and we concluded by suggesting directions for further studies.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.gpo.gov/fdsys/pkg/FR-2011-04-22/pdf/2011-9863.pdf','FEDREG'); return false;" href="https://www.gpo.gov/fdsys/pkg/FR-2011-04-22/pdf/2011-9863.pdf"><span>76 FR 22724 - Notice of Public Meeting of the Carrizo Plain National Monument Advisory Council</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.gpo.gov/fdsys/browse/collection.action?collectionCode=FR">Federal Register 2010, 2011, 2012, 2013, 2014</a></p> <p></p> <p>2011-04-22</p> <p>... School, located approximately 2 miles northwest of Soda Lake Road on Highway 58. The meeting will begin... special assistance such as sign language interpretation or other reasonable accommodations should contact...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19930091404','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19930091404"><span>The Torsion of Members Having Sections Common in Aircraft Construction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Trayer, George W; March, H W</p> <p>1930-01-01</p> <p>Within recent years a great variety of approximate torsion formulas and drafting-room processes have been advocated. In some of these, especially where mathematical considerations are involved, the results are extremely complex and are not generally intelligible to engineers. 
The principal object of this investigation was to determine by experiment and theoretical investigation how accurate the more common of these formulas are and on what assumptions they are founded and, if none of the proposed methods proved to be reasonably accurate in practice, to produce simple, practical formulas from reasonably correct assumptions, backed by experiment. A second object was to collect in readily accessible form the most useful of known results for the more common sections. Formulas for all the important solid sections that have yielded to mathematical treatment are listed. Then follows a discussion of the torsion of tubular rods with formulas both rigorous and approximate.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19940029702','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19940029702"><span>An HP Adaptive Discontinuous Galerkin Method for Hyperbolic Conservation Laws. Ph.D. Thesis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Bey, Kim S.</p> <p>1994-01-01</p> <p>This dissertation addresses various issues for model classes of hyperbolic conservation laws. The basic approach developed in this work employs a new family of adaptive, hp-version, finite element methods based on a special discontinuous Galerkin formulation for hyperbolic problems. The discontinuous Galerkin formulation admits high-order local approximations on domains of quite general geometry, while providing a natural framework for finite element approximations and for theoretical developments. 
The use of hp-versions of the finite element method makes possible exponentially convergent schemes with very high accuracies in certain cases; the use of adaptive hp-schemes allows h-refinement in regions of low regularity and p-enrichment to deliver high accuracy, while keeping problem sizes manageable and dramatically smaller than many conventional approaches. The use of discontinuous Galerkin methods is uncommon in applications, but the methods rest on a reasonable mathematical basis for low-order cases and have local approximation features that can be exploited to produce very efficient schemes, especially in a parallel, multiprocessor environment. This work focuses first and primarily on a model class of linear hyperbolic conservation laws for which concrete mathematical results, methodologies, error estimates, convergence criteria, and parallel adaptive strategies can be developed, and then briefly explores some extensions to more general cases. Next, we provide preliminaries to the study and a review of some aspects of the theory of hyperbolic conservation laws. We also provide a review of relevant literature on this subject and on the numerical analysis of these types of problems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=educacion+AND+especial&pg=2&id=ED329661','ERIC'); return false;" href="https://eric.ed.gov/?q=educacion+AND+especial&pg=2&id=ED329661"><span>Hispanic Youth--Dropout Prevention. Report of the Task Force on the Participation of Hispanic Students in Vocational Education Programs = La Joventud Hispana. Reporte del Grupo Especial. La Investigacion de la Participacion de los Estudiantes Hispanos en la Educacion Relativa a la Vocacion.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Idaho State Dept. 
of Education, Boise. Div. of Vocational Education.</p> <p></p> <p>An Idaho task force of Hispanic Americans, industry representatives, and education leaders studied the reasons Hispanic students were not enrolling in and completing vocational education programs. The task force sponsored a series of community meetings to identify reasons and solutions. Approximately 40-60 parents, students, and other interested…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1023204','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1023204"><span>100-NR-2 Apatite Treatability Test: Fall 2010 Tracer Infiltration Test (White Paper)</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Vermeul, Vincent R.; Fritz, Brad G.; Fruchter, Jonathan S.</p> <p></p> <p>The primary objectives of the tracer infiltration test were to 1) determine whether field-scale hydraulic properties for the compacted roadbed materials and underlying Hanford fm. sediments comprising the zone of water table fluctuation beneath the site are consistent with estimates based on laboratory-scale measurements on core samples and 2) characterize wetting front advancement and distribution of soil moisture achieved for the selected application rate. These primary objectives were met. 
The test successfully demonstrated that 1) the remaining 2 to 3 ft of compacted roadbed material below the infiltration gallery does not limit infiltration rates to levels that would be expected to eliminate near surface application as a viable amendment delivery approach and 2) the combined aqueous and geophysical monitoring approaches employed at this site, with some operational adjustments based on lessons learned, provide an effective means of assessing wetting front advancement and the distribution of soil moisture achieved for a given solution application. Reasonably good agreement between predicted and observed tracer and moisture front advancement rates was found. During the first tracer infiltration test, which used a solution application rate of 0.7 cm/hr, tracer arrivals were observed at the water table (10 to 12 ft below the bottom of the infiltration gallery) after approximately 5 days, for an advancement rate of approximately 2 ft/day. This advancement rate is generally consistent with pre-test modeling results that predicted tracer arrival at the water table after approximately 5 days (see Figure 8, bottom left panel). This agreement indicates that hydraulic property values specified in the model for the compacted roadbed materials and underlying Hanford formation sediments, which were based on laboratory-scale measurements, are reasonable estimates of actual field-scale conditions. Additional work is needed to develop a working relationship between resistivity change and the associated change in moisture content so that 4D images of moisture content change can be generated. 
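As a sanity check, the advancement rate quoted in this abstract follows directly from the reported depth and arrival time; a minimal sketch (the 10-12 ft depth to the water table and the ~5 day first arrival are taken from the abstract, and the arithmetic itself is purely illustrative):

```python
# Wetting-front advancement rate implied by the reported tracer arrivals:
# water table 10-12 ft below the infiltration gallery, first arrival ~5 days.
depths_ft = (10.0, 12.0)   # depth range to the water table (from the abstract)
arrival_days = 5.0         # approximate first-arrival time (from the abstract)

rates = tuple(d / arrival_days for d in depths_ft)  # ft/day
print(rates)  # (2.0, 2.4) -- consistent with "approximately 2 ft/day"
```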
Results from this field test will be available for any future Ca-citrate-PO4 amendment infiltration tests, which would be designed to evaluate the efficacy of using near surface application of amendments to form apatite mineral phases in the upper portion of the zone of water table fluctuation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2004AGUFM.S31B1049T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2004AGUFM.S31B1049T"><span>Accuracy & Computational Considerations for Wide-Angle One-way Seismic Propagators and Multiple Scattering by Invariant Embedding</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Thomson, C. J.</p> <p>2004-12-01</p> <p>Pseudodifferential operators (PSDOs) yield in principle exact one-way seismic wave equations, which are attractive both conceptually and for their promise of computational efficiency. The one-way operators can be extended to include multiple-scattering effects, again in principle exactly. In practice approximations must be made and, as an example, the variable-wavespeed Helmholtz equation for scalar waves in two space dimensions is here factorized to give the one-way wave equation. This simple case permits clear identification of a sequence of physically reasonable approximations to be used when the mathematically exact PSDO one-way equation is implemented on a computer. As intuition suggests, these approximations hinge on the medium gradients in the direction transverse to the main propagation direction. A key point is that narrow-angle approximations are to be avoided in the interests of accuracy. 
Another key consideration stems from the fact that the so-called "standard-ordering" PSDO indicates how lateral interpolation of the velocity structure can significantly reduce computational costs associated with the Fourier or plane-wave synthesis lying at the heart of the calculations. The decision on whether a slow or a fast Fourier transform code should be used rests upon how many lateral model parameters are truly distinct. A third important point is that the PSDO theory shows what approximations are necessary in order to generate an exponential one-way propagator for the laterally varying case, representing the intuitive extension of classical integral-transform solutions for a laterally homogeneous medium. This exponential propagator suggests the use of larger discrete step sizes, and it can also be used to approach phase-screen-like approximations (though the latter are not the main interest here). Numerical comparisons with finite-difference solutions will be presented in order to assess the approximations being made and to gain an understanding of computation time differences. The ideas described extend to the three-dimensional, generally anisotropic case and to multiple scattering by invariant embedding.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JGRC..123.1354C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JGRC..123.1354C"><span>Breakpoint Forcing Revisited: Phase Between Forcing and Response</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Contardo, S.; Symonds, G.; Dufois, F.</p> <p>2018-02-01</p> <p>Using the breakpoint forcing model for long wave generation in the surf zone, expressions for the phase difference between the breakpoint-forced long waves and the incident short wave groups are obtained. 
Contrary to assumptions made in previous studies, the breakpoint-forced long waves and incident wave groups are not in phase, and outgoing breakpoint-forced long waves and incident wave groups are not π out of phase. The phase between the breakpoint-forced long wave and the incident wave group is shown to depend on beach geometry and wave group parameters. The breakpoint-forced incoming long wave lags behind the wave group by a phase smaller than π/2. The phase lag decreases as the beach slope decreases and the group frequency increases, approaching approximately π/16 within reasonable limits of the parameter space. The phase between the breakpoint-forced outgoing long wave and the wave group is between π/2 and π, and it increases as the beach slope decreases and the group frequency increases, approaching 15π/16 within reasonable limits of the parameter space. The phase between the standing long wave (composed of the incoming long wave and its reflection) and the incident wave group tends to zero when the wave group is long compared to the surf zone width. 
These results clarify the phase relationships in the breakpoint forcing model and provide a new basis for identifying the breakpoint forcing signal in observations, laboratory experiments and numerical modeling.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27144490','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27144490"><span>Missed doses of oral antihyperglycemic medications in US adults with type 2 diabetes mellitus: prevalence and self-reported reasons.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Vietri, Jeffrey T; Wlodarczyk, Catherine S; Lorenzo, Rose; Rajpathak, Swapnil</p> <p>2016-09-01</p> <p>Adherence to antihyperglycemic medication is thought to be suboptimal, but the proportion of patients missing doses, the number of doses missed, and reasons for missing are not well described. This survey was conducted to estimate the prevalence of and reasons for missed doses of oral antihyperglycemic medications among US adults with type 2 diabetes mellitus, and to explore associations between missed doses and health outcomes. The study was a cross-sectional patient survey. Respondents were contacted via a commercial survey panel and completed an on-line questionnaire via the Internet. Respondents provided information about their use of oral antihyperglycemic medications including doses missed in the prior 4 weeks, personal characteristics, and health outcomes. Weights were calculated to project the prevalence to the US adult population with type 2 diabetes mellitus. Outcomes were compared according to number of doses missed in the past 4 weeks using bivariate statistics and generalized linear models. 
Approximately 30% of adult patients with type 2 diabetes mellitus reported missing or reducing ≥1 dose of oral antihyperglycemic medication in the prior 4 weeks. Accidental missing was more commonly reported than purposeful skipping, with forgetting the most commonly reported reason. The timing of missed doses suggested respondents had also forgotten about doses missed, so the prevalence of missed doses is likely higher than reported. Outcomes were poorer among those who reported missing three or more doses in the prior 4 weeks. A substantial number of US adults with type 2 diabetes mellitus miss doses of their oral antihyperglycemic medications.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19770006284','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19770006284"><span>Environmental Effects of Space Shuttle Solid Rocket Motor Exhaust Plumes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Hwang, B.; Pergament, H. S.</p> <p>1976-01-01</p> <p>The deposition of NOx and HCl in the stratosphere from the space shuttle solid rocket motors (SRM) and exhaust plume is discussed. A detailed comparison between stratospheric deposition rates using the baseline SRM propellant and an alternate propellant, which replaces ammonium perchlorate by ammonium nitrate, shows the total NOx deposition rate to be approximately the same for each propellant. For both propellants the ratio of the deposition rates of NOx to total chlorine-containing species is negligibly small. Rocket exhaust ground cloud transport processes in the troposphere are also examined. A brief critique of the multilayer diffusion models (presently used for predicting pollutant deposition in the troposphere) is presented, and some detailed cloud rise calculations are compared with data for Titan 3C launches. 
The results show that, when launch time meteorological data are used as input, the model can reasonably predict measured cloud stabilization heights.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1352756-anisotropic-thermal-response-packed-copper-wire','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1352756-anisotropic-thermal-response-packed-copper-wire"><span>Anisotropic Thermal Response of Packed Copper Wire</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Wereszczak, Andrew A.; Emily Cousineau, J.; Bennion, Kevin; ...</p> <p>2017-04-19</p> <p>The apparent thermal conductivity of packed copper wire test specimens was measured parallel and perpendicular to the axis of the wire using laser flash, transient plane source, and transmittance test methods. Approximately 50% wire packing efficiency was produced in the specimens using either 670- or 925-μm-diameter copper wires that both had an insulation coating thickness of 37 μm. The interstices were filled with a conventional varnish material and also contained some remnant porosity. The apparent thermal conductivity perpendicular to the wire axis was about 0.5–1 W/mK, whereas it was over 200 W/mK in the parallel direction. 
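The roughly 400:1 anisotropy reported in this abstract is close to what simple parallel/series (rule-of-mixtures) bounds predict at 50% packing. A minimal sketch, assuming nominal conductivities for copper and for the varnish/insulation fill that are not stated in the abstract:

```python
# Parallel (Voigt-type) and series (Reuss-type) conductivity bounds for a
# two-phase composite at 50% wire packing fraction.
# The conductivity values below are assumed for illustration, not from the paper.
f_cu = 0.50     # copper packing fraction (from the abstract)
k_cu = 400.0    # W/mK, nominal copper conductivity (assumed)
k_fill = 0.25   # W/mK, nominal varnish/insulation conductivity (assumed)

# Along the wire axis the phases conduct in parallel (arithmetic mean).
k_parallel = f_cu * k_cu + (1 - f_cu) * k_fill

# Perpendicular to the wires a series (harmonic-mean) bound applies.
k_perp_lower = 1.0 / (f_cu / k_cu + (1 - f_cu) / k_fill)

print(k_parallel)    # ~200 W/mK, matching the reported parallel value
print(k_perp_lower)  # ~0.5 W/mK, at the low end of the reported 0.5-1 W/mK
```

The near-agreement with the measured values suggests the measured anisotropy is dominated by the series bottleneck of the low-conductivity insulation and varnish between wires.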
The Kanzaki model and a finite element analysis (FEA) model were found to reasonably predict the apparent thermal conductivity perpendicular to the wires, but thermal conductivity percolation from nonideal wire packing may cause them to underestimate it.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011JPhD...44e5201Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011JPhD...44e5201Y"><span>Numerical simulation of an oxygen-fed wire-to-cylinder negative corona discharge in the glow regime</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yanallah, K.; Pontiga, F.; Castellanos, A.</p> <p>2011-02-01</p> <p>Negative glow corona discharge in flowing oxygen has been numerically simulated for a wire-to-cylinder electrode geometry. The corona discharge is modelled using a fluid approximation. The radial and axial distributions of charged and neutral species are obtained by solving the corresponding continuity equations, which include the relevant plasma-chemical kinetics. Continuity equations are coupled with Poisson's equation and the energy conservation equation, since the reaction rate constants may depend on the electric field and temperature. The experimental values of the current-voltage characteristic are used as input data into the numerical calculations. The role played by different reactions and chemical species is analysed, and the effect of electrical and geometrical parameters on ozone generation is investigated. 
The reliability of the numerical model is verified by the reasonable agreement between the numerical predictions of ozone concentration and the experimental measurements.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4399987','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4399987"><span>Exploring the Factor Structure of Neurocognitive Measures in Older Individuals</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Santos, Nadine Correia; Costa, Patrício Soares; Amorim, Liliana; Moreira, Pedro Silva; Cunha, Pedro; Cotter, Jorge; Sousa, Nuno</p> <p>2015-01-01</p> <p>Here we focus on factor analysis from a best practices point of view, by investigating the factor structure of neuropsychological tests and using the results obtained to illustrate how to choose a reasonable solution. The sample (n=1051 individuals) was randomly divided into two groups: one for exploratory factor analysis (EFA) and principal component analysis (PCA), to investigate the number of factors underlying the neurocognitive variables; the second to test the “best fit” model via confirmatory factor analysis (CFA). For the exploratory step, three extraction (maximum likelihood, principal axis factoring and principal components) and two rotation (orthogonal and oblique) methods were used. The analysis methodology allowed us to explore how different cognitive/psychological tests correlated/discriminated between dimensions, indicating that to capture latent structures in similar sample sizes and measures, with approximately normal data distribution, reflective models with oblimin rotation might prove the most adequate. 
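The split-sample workflow this abstract describes (choose a factor structure on one half of the sample, then check it on the other) can be sketched with numpy alone. The synthetic two-factor data and the Kaiser eigenvalue-greater-than-one rule below are illustrative assumptions; the study itself used dedicated EFA extraction and rotation methods and a proper CFA, which would require an SEM package:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # of the same order as the study's n = 1051

# Synthetic stand-in for a neurocognitive battery: 8 "tests" driven by two
# latent dimensions (loadings and noise level are illustrative assumptions).
loadings = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.0], [0.9, 0.2],
                     [0.0, 0.8], [0.1, 0.9], [0.0, 0.7], [0.2, 0.8]])
latent = rng.normal(size=(n, 2))
X = latent @ loadings.T + 0.5 * rng.normal(size=(n, loadings.shape[0]))

# Randomly split the sample into an exploratory and a confirmatory half.
idx = rng.permutation(n)
X_explore, X_confirm = X[idx[: n // 2]], X[idx[n // 2 :]]

def kaiser_n_factors(data):
    """Count factors whose correlation-matrix eigenvalues exceed 1 (Kaiser rule)."""
    eigvals = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))
    return int((eigvals > 1.0).sum())

n_explore = kaiser_n_factors(X_explore)  # choose the structure on one half...
n_confirm = kaiser_n_factors(X_confirm)  # ...and check it replicates on the other
print(n_explore, n_confirm)  # both halves should recover the two-factor structure
```

Holding out half the sample guards against the exploratory step capitalizing on chance: a factor count that replicates on unseen data is more credible than one tuned and evaluated on the same individuals.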
PMID:25880732</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20150016545','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20150016545"><span>Simulation of Cold Flow in a Truncated Ideal Nozzle with Film Cooling</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Braman, K. E.; Ruf, J. H.</p> <p>2015-01-01</p> <p>Flow transients during rocket start-up and shut-down can lead to significant side loads on rocket nozzles. The capability to estimate these side loads computationally can streamline the nozzle design process. Towards this goal, the flow in a truncated ideal contour (TIC) nozzle has been simulated using RANS and URANS for a range of nozzle pressure ratios (NPRs) aimed to match a series of cold flow experiments performed at the NASA MSFC Nozzle Test Facility. These simulations were performed with varying turbulence model choices and for four approximations of the supersonic film injection geometry, each of which was created with a different simplification of the test article geometry. The results show that although a reasonable match to experiment can be obtained with varying levels of geometric fidelity, the modeling choices made do not fully represent the physics of flow separation in a TIC nozzle with film cooling.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017ARep...61..560I','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017ARep...61..560I"><span>Features of the accretion in the EX Hydrae system: Results of numerical simulation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Isakova, P. B.; Zhilkin, A. G.; Bisikalo, D. V.; Semena, A. N.; Revnivtsev, M. 
G.</p> <p>2017-07-01</p> <p>A two-dimensional numerical model in the axisymmetric approximation that describes the flow structure in the magnetosphere of the white dwarf in the EX Hya system has been developed. Results of simulations show that the accretion in EX Hya proceeds via accretion columns, which are not closed and have curtain-like shapes. The thickness of the accretion curtains depends only weakly on the thickness of the accretion disk. The curtain thickness that develops in the simulations does not agree with observations. It is concluded that the main reason for the formation of thick accretion curtains in the model is the assumption that the magnetic field penetrates fully into the plasma of the disk. An analysis based on simple estimates shows that a diamagnetic disk that fully or partially shields the magnetic field of the star may be a more attractive explanation for the observed features of the accretion in EX Hya.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19760011126','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19760011126"><span>Spacecraft self-contamination due to back-scattering of outgas products</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Robertson, S. J.</p> <p>1976-01-01</p> <p>The back-scattering of outgas contamination near an orbiting spacecraft due to intermolecular collisions was analyzed. Analytical tools were developed for making reasonably accurate quantitative estimates of the outgas contamination return flux, given a knowledge of the pertinent spacecraft and orbit conditions. Two basic collision mechanisms were considered: (1) collisions involving only outgas molecules (self-scattering) and (2) collisions between outgas molecules and molecules in the ambient atmosphere (ambient-scattering). 
For simplicity, the geometry was idealized to a uniformly outgassing sphere and to a disk oriented normal to the freestream. The method of solution involved an integration of an approximation of the Boltzmann kinetic equation known as the BGK (or Krook) model equation. Results were obtained in the form of simple equations relating outgas return flux to spacecraft and orbit parameters. Results were compared with previous analyses based on more simplistic models of the collision processes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/914166-do-reuss-voigt-bounds-really-bound-high-pressure-rheology-experiments','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/914166-do-reuss-voigt-bounds-really-bound-high-pressure-rheology-experiments"><span>Do Reuss and Voigt Bounds Really Bound in High-Pressure Rheology Experiments?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Chen, J.; Li, L.; Yu, T.</p> <p>2006-01-01</p> <p>Energy dispersive synchrotron x-ray diffraction is carried out to measure differential lattice strains in polycrystalline Fe₂SiO₄ (fayalite) and MgO samples using a multi-element solid state detector during high-pressure deformation. The theory of elastic modeling with Reuss (iso-stress) and Voigt (iso-strain) bounds is used to evaluate the aggregate stress and weight parameter, α (0 ≤ α ≤ 1), of the two bounds. Results under the elastic assumption quantitatively demonstrate that a highly stressed sample in high-pressure experiments reasonably approximates an iso-stress state. However, when the sample is plastically deformed, the Reuss and Voigt bounds are no longer valid (α exceeds 1). Instead, if plastic slip systems of the sample are known (e.g. 
in the case of MgO), the aggregate property can be modeled using a visco-plastic self-consistent theory.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27592412','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27592412"><span>Inferring mass in complex scenes by mental simulation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hamrick, Jessica B; Battaglia, Peter W; Griffiths, Thomas L; Tenenbaum, Joshua B</p> <p>2016-12-01</p> <p>After observing a collision between two boxes, you can immediately tell which is empty and which is full of books based on how the boxes moved. People form rich perceptions about the physical properties of objects from their interactions, an ability that plays a crucial role in learning about the physical world through our experiences. Here, we present three experiments that demonstrate people's capacity to reason about the relative masses of objects in naturalistic 3D scenes. We find that people make accurate inferences, and that they continue to fine-tune their beliefs over time. To explain our results, we propose a cognitive model that combines Bayesian inference with approximate knowledge of Newtonian physics by estimating probabilities from noisy physical simulations. We find that this model accurately predicts judgments from our experiments, suggesting that the same simulation mechanism underlies both people's predictions and inferences about the physical world around them. Copyright © 2016 Elsevier B.V.
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19950045576&hterms=recycling&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D60%26Ntt%3Drecycling','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19950045576&hterms=recycling&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D60%26Ntt%3Drecycling"><span>A mechanism for crustal recycling on Venus</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Lenardic, A.; Kaula, W. M.; Bindschadler, D. L.</p> <p>1993-01-01</p> <p>Entrainment of lower crust by convective mantle downflows is proposed as a crustal recycling mechanism on Venus. The mechanism is characterized by thin sheets of crust being pulled into the mantle by viscous flow stresses. Finite element models of crust/mantle interaction are used to explore tectonic conditions under which crustal entrainment may occur. The recycling scenarios suggested by the numerical models are analogous to previously studied problems for which analytic and experimental relationships assessing entrainment rates have been derived. We use these relationships to estimate crustal recycling rates on Venus. Estimated rates are largely determined by (1) strain rate at the crust/mantle interface (higher strain rate leads to greater entrainment); and (2) effective viscosity of the lower crust (viscosity closer to that of mantle lithosphere leads to greater entrainment). 
Reasonable geologic strain rates and available crustal flow laws suggest entrainment can recycle approximately 1 cu km of crust per year under favorable conditions.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_16");'>16</a></li> <li><a href="#" onclick='return showDiv("page_17");'>17</a></li> <li class="active"><span>18</span></li> <li><a href="#" onclick='return showDiv("page_19");'>19</a></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_18 --> <div id="page_19" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_17");'>17</a></li> <li><a href="#" onclick='return showDiv("page_18");'>18</a></li> <li class="active"><span>19</span></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="361"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19930013040','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19930013040"><span>Hybrid neural network and fuzzy logic approaches for rendezvous and capture in space</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Berenji, Hamid R.; Castellano, Timothy</p> <p>1991-01-01</p> <p>The nonlinear behavior of many practical systems and unavailability of quantitative data
regarding the input-output relations make the analytical modeling of these systems very difficult. On the other hand, approximate reasoning-based controllers, which do not require analytical models, have demonstrated a number of successful applications, such as the subway system in the city of Sendai. These applications have mainly concentrated on emulating the performance of a skilled human operator in the form of linguistic rules. However, the process of learning and tuning the control rules to achieve the desired performance remains a difficult task. Fuzzy Logic Control is based on fuzzy set theory. A fuzzy set is an extension of a crisp set. Crisp sets only allow full membership or no membership at all, whereas fuzzy sets allow partial membership. In other words, an element may partially belong to a set.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AcAau.145...83L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AcAau.145...83L"><span>A simple orbit-attitude coupled modelling method for large solar power satellites</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, Qingjun; Wang, Bo; Deng, Zichen; Ouyang, Huajiang; Wei, Yi</p> <p>2018-04-01</p> <p>A simple modelling method is proposed to study the orbit-attitude coupled dynamics of large solar power satellites based on natural coordinate formulation. The generalized coordinates are composed of Cartesian coordinates of two points and Cartesian components of two unitary vectors instead of Euler angles and angular velocities, which is the reason for its simplicity. First, to extend the natural coordinate formulation to account for the gravitational force and gravity-gradient torque on a rigid body, a Taylor series expansion is adopted to approximate the gravitational potential energy.
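For reference, the standard second-order truncation of such an expansion of the gravitational potential of a rigid body (McCullagh's approximation; the paper's exact truncation order may differ) is:

```latex
U(\mathbf{r}) \approx -\frac{\mu m}{r}
  - \frac{\mu}{2 r^{3}}\left[\operatorname{tr}\mathbf{J}
  - 3\,\hat{\mathbf{r}}^{\mathsf{T}}\mathbf{J}\,\hat{\mathbf{r}}\right],
\qquad
\mathbf{T}_{gg} = \frac{3\mu}{r^{3}}\,\hat{\mathbf{r}}\times\left(\mathbf{J}\,\hat{\mathbf{r}}\right),
```

where μ is the gravitational parameter of the primary, m and J are the satellite's mass and inertia tensor about its center of mass, r is the position vector of the center of mass with unit vector r̂, and the gravity-gradient torque T<sub>gg</sub> follows from the same second-order term.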
The equations of motion are constructed through constrained Hamilton's equations. Then, an energy- and constraint-conserving algorithm is presented to solve the differential-algebraic equations. Finally, the proposed method is applied to simulate the orbit-attitude coupled dynamics and control of a large solar power satellite considering gravity gradient torque and solar radiation pressure. This method is also applicable to dynamic modelling of other rigid multibody aerospace systems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18774136','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18774136"><span>A modified PMMA cement (Sub-cement) for accelerated fatigue testing of cemented implant constructs using cadaveric bone.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Race, Amos; Miller, Mark A; Mann, Kenneth A</p> <p>2008-10-20</p> <p>Pre-clinical screening of cemented implant systems could be improved by modeling the longer-term response of the implant/cement/bone construct to cyclic loading. We formulated bone cement with degraded fatigue fracture properties (Sub-cement) such that long-term fatigue could be simulated in short-term cadaver tests. Sub-cement was made by adding a chain-transfer agent to standard polymethylmethacrylate (PMMA) cement. This reduced the molecular weight of the inter-bead matrix without changing reaction rate or handling characteristics. Static mechanical properties were approximately equivalent to those of normal cement. Over a physiologically reasonable range of stress-intensity factor, fatigue crack propagation rates for Sub-cement were higher by a factor of 25 ± 19. When tested in a simplified 2 1/2-D physical model of a stem-cement-bone system, crack growth from the stem was accelerated by a factor of 100.
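The factor-of-25 acceleration in crack-growth rate translates directly into test time under the Paris law, the standard fatigue crack-growth model (the constants C, m, and delta_K below are arbitrary illustrations, not the paper's measured values):

```python
# Sketch of why degrading the cement's fatigue properties shortens a test.
# Paris law: da/dN = C * (delta_K)**m.  Under constant-amplitude loading
# (constant delta_K), cycles to grow a crack a fixed amount scale as 1/C,
# so multiplying C by ~25 (the paper's factor, which carried large scatter)
# cuts test duration by the same factor.

def cycles_to_grow(C, m, delta_K, a0, af):
    """Cycles for a crack to grow from length a0 to af at constant delta_K."""
    return (af - a0) / (C * delta_K ** m)

N_normal = cycles_to_grow(C=1e-11, m=3.0, delta_K=1.5, a0=1e-4, af=5e-3)
N_sub = cycles_to_grow(C=25e-11, m=3.0, delta_K=1.5, a0=1e-4, af=5e-3)
print(N_normal / N_sub)  # degraded cement reaches the same crack length ~25x sooner
```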
Sub-cement accelerated both crack initiation and growth rate. Sub-cement is now being evaluated in full stem/cement/femur models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19730012205','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19730012205"><span>Approximate thermochemical tables for some C-H and C-H-O species</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Bahn, G. S.</p> <p>1973-01-01</p> <p>Approximate thermochemical tables are presented for some C-H and C-H-O species and for some ionized species, supplementing the JANAF Thermochemical Tables for application to finite-chemical-kinetics calculations. The approximate tables were prepared by interpolation and extrapolation of limited available data, especially by interpolations over chemical families of species. Original estimations have been smoothed by use of a modification for the CDC-6600 computer of the Lewis Research Center PACl Program which was originally prepared for the IBM-7094 computer Summary graphs for various families show reasonably consistent curvefit values, anchored by properties of existing species in the JANAF tables.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=Plot+AND+analysis&pg=6&id=EJ1162369','ERIC'); return false;" href="https://eric.ed.gov/?q=Plot+AND+analysis&pg=6&id=EJ1162369"><span>The Co-Emergence of Aggregate and Modelling Reasoning</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Aridor, Keren; Ben-Zvi, Dani</p> <p>2017-01-01</p> <p>This article examines how two processes--reasoning with statistical modelling of a real phenomenon and aggregate reasoning--can co-emerge. 
We focus in this case study on the emergent reasoning of two fifth graders (aged 10) involved in statistical data analysis, informal inference, and modelling activities using TinkerPlots™. We describe nine…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/ED364556.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/ED364556.pdf"><span>Proportional Reasoning of Preservice Elementary Education Majors: An Epistemic Model of the Proportional Reasoning Construct.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Fleener, M. Jayne</p> <p></p> <p>Current research and learning theory suggest that a hierarchy of proportional reasoning exists that can be tested. Using G. Vergnaud's four complexity variables (structure, content, numerical characteristics, and presentation) and T. E. Kieren's model of rational number knowledge building, an epistemic model of proportional reasoning was…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/945757','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/945757"><span>Experimental Validation of Lightning-Induced Electromagnetic (Indirect) Coupling to Short Monopole Antennas</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Crull, E W; Brown Jr., C G; Perkins, M P</p> <p>2008-07-30</p> <p>For short monopoles in this low-power case, it has been shown that a simple circuit model is capable of accurate predictions for the shape and magnitude of the antenna response to lightning-generated electric field coupling effects, provided that the elements of the circuit model have accurate values. 
Numerical EM simulation can be used to provide more accurate values for the circuit elements than the simple analytical formulas, since the analytical formulas are used outside of their region of validity. However, even with the approximate analytical formulas the simple circuit model produces reasonable results, which would improve if more accurate analytical models were used. This report discusses the coupling analysis approaches taken to understand the interaction between a time-varying EM field and a short monopole antenna, within the context of lightning safety for nuclear weapons at DOE facilities. It describes the validation of a simple circuit model using a laboratory study in order to understand the indirect coupling of energy into a part, and the resulting voltage. Results show that in this low-power case, the circuit model predicts peak voltages within approximately 32% using circuit component values obtained from analytical formulas and about 13% using circuit component values obtained from numerical EM simulation. We note that the analytical formulas are used outside of their region of validity. First, the antenna is insulated rather than a bare wire, and there may be fringing-field effects near the termination of the outer conductor that the formula does not take into account.
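The kind of circuit model described above can be sketched as follows. The formulas are textbook short-monopole approximations (effective height h/2 and a quasi-static antenna capacitance), not the report's exact expressions, and all numeric values are hypothetical:

```python
import math

# Sketch of a short-monopole "circuit model": the antenna acts as a voltage
# source V_oc = h_eff * E behind its own capacitance C_a, loaded here by a
# cable/instrument capacitance C_load (capacitive divider).  Textbook
# approximations only; NOT the report's exact component values.

EPS0 = 8.854e-12  # permittivity of free space, F/m

def monopole_response(E_peak, h, a, C_load):
    """Peak voltage across C_load for a short monopole of height h and
    wire radius a in a quasi-static field of peak amplitude E_peak (V/m)."""
    h_eff = h / 2.0                                       # effective height
    C_a = 2 * math.pi * EPS0 * h / (math.log(h / a) - 1)  # antenna capacitance
    V_oc = h_eff * E_peak                                 # open-circuit voltage
    return V_oc * C_a / (C_a + C_load)                    # capacitive divider

# Hypothetical numbers: 10 kV/m induced field, 10 cm monopole, 1 mm radius.
print(monopole_response(1e4, 0.10, 1e-3, 50e-12))
```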
Also, the effective height formula is for a monopole directly over a ground plane, while in the time-domain measurement setup the monopole is elevated above the ground plane by about 1.5 inches (refer to Figure 5).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012ChJME..25.1210L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012ChJME..25.1210L"><span>Analysis of collapse in flattening a micro-grooved heat pipe by lateral compression</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, Yong; He, Ting; Zeng, Zhixin</p> <p>2012-11-01</p> <p>The collapse of thin-walled micro-grooved heat pipes is a common phenomenon in the tube flattening process, which seriously degrades the heat transfer performance and appearance of the heat pipe. At present, no better method exists to solve this problem. A new method, in which the heat pipe is heated during flattening, is proposed to eliminate the collapse. The effectiveness of the proposed method is investigated through a theoretical model, a finite element (FE) analysis, and experiments. First, a theoretical model based on a deformation model of six plastic hinges and the Antoine equation of the working fluid is established to analyze the collapse of thin walls at different temperatures. Then, FE simulations and experiments of the flattening process at different temperatures are carried out and compared with the theoretical model. Finally, the FE model is used to study the loads on the plates at different temperatures and heights of flattened heat pipes. The results of the theoretical model conform to those of the FE simulation and experiments in the flattened zone. The collapse occurs at room temperature.
As the temperature increases, the collapse decreases and finally disappears at approximately 130 °C for various heights of flattened heat pipes. The loads on the moving plate increase as the temperature increases. Thus, a reasonable temperature for eliminating the collapse while keeping the load low is approximately 130 °C. The advantage of the proposed method is that the collapse is reduced or eliminated by means of the thermal deformation characteristics of the heat pipe itself instead of by external support. As a result, the heat transfer efficiency of the heat pipe is raised.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20110023926','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20110023926"><span>Detecting Edges in Images by Use of Fuzzy Reasoning</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Dominguez, Jesus A.; Klinko, Steve</p> <p>2003-01-01</p> <p>A method of processing digital image data to detect edges includes the use of fuzzy reasoning. The method is completely adaptive and does not require any advance knowledge of an image. During initial processing of image data at a low level of abstraction, the nature of the data is indeterminate. Fuzzy reasoning is used in the present method because it affords an ability to construct useful abstractions from approximate, incomplete, and otherwise imperfect sets of data. Humans are able to make some sense of even unfamiliar objects that have imperfect high-level representations.
It appears that to perceive unfamiliar objects or to perceive familiar objects in imperfect images, humans apply heuristic algorithms to understand the images.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26548535','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26548535"><span>[Venous thromboembolic risk during repatriation for medical reasons].</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Stansal, A; Perrier, E; Coste, S; Bisconte, S; Manen, O; Lazareth, I; Conard, J; Priollet, P</p> <p>2015-12-01</p> <p>In France, approximately 3000 people are repatriated every year, in civilian settings by insurers; repatriation also concerns French army soldiers. The literature is scarce on the topic of venous thromboembolic risk and its prevention during repatriation for medical reasons, a common situation. Most studies have focused on the association between venous thrombosis and travel, a relationship recognized more than 60 years ago but still subject to debate. Examining the degree of venous thromboembolic risk during repatriation for medical reasons must take into account several parameters, related to the patient, to comorbid conditions and to repatriation modalities. Appropriate prevention must be determined on an individual basis. Copyright © 2015 Elsevier Masson SAS.
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/5588204','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/5588204"><span>Survey of HEPA filter applications and experience at Department of Energy sites</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Carbaugh, E.H.</p> <p>1981-11-01</p> <p>Results indicated that approximately 58% of the filters surveyed were changed out in the 1977 to 1979 study period and some 18% of all filters were changed out more than once. Most changeouts (60%) were due to a high pressure drop across the filter, indicating filter plugging. The next most common reasons for changeout were leak-test failure (15%) and reaching the preventive-maintenance service-life limit (12%). The average filter service life was calculated to be 3.0 years with a 2.0-year standard deviation. The labor required for filter changeout was calculated as 1.5 man-hours per filter changed. Filter failures occurred with approximately 12% of all installed filters. Most failures (60%) occurred for unknown reasons, and handling or installation damage accounted for an additional 20% of all failures. Media ruptures, filter frame failures and seal failures occurred with approximately equal frequency at 5 to 6% each.
Subjective responses to the questionnaire indicate that the main problems are: the need for improved acid- and moisture-resistant filters; filters more readily disposable as radioactive waste; improved personnel training in filter handling and installation; and the need for pretreatment of air prior to HEPA filtration.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2001PhDT........78T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2001PhDT........78T"><span>Three-dimensional inversion of multisource array electromagnetic data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Tartaras, Efthimios</p> <p></p> <p>Three-dimensional (3-D) inversion is increasingly important for the correct interpretation of geophysical data sets in complex environments. To this end, several approximate solutions have been developed that allow the construction of relatively fast inversion schemes. One such method that is fast and provides satisfactory accuracy is the quasi-linear (QL) approximation. It has, however, the drawback that it is source-dependent and, therefore, impractical in situations where multiple transmitters in different positions are employed. I have, therefore, developed a localized form of the QL approximation that is source-independent. This so-called localized quasi-linear (LQL) approximation can have a scalar, a diagonal, or a full tensor form. Numerical examples of its comparison with the full integral equation solution, the Born approximation, and the original QL approximation are given. The objective behind developing this approximation is to use it in a fast 3-D inversion scheme appropriate for multisource array data such as those collected in airborne surveys, cross-well logging, and other similar geophysical applications.
I have developed such an inversion scheme using the scalar and diagonal LQL approximation. It reduces the original nonlinear inverse electromagnetic (EM) problem to three linear inverse problems. The first of these problems is solved using a weighted regularized linear conjugate gradient method, whereas the last two are solved in the least squares sense. The algorithm I developed provides the option of obtaining either smooth or focused inversion images. I have applied the 3-D LQL inversion to synthetic 3-D EM data that simulate a helicopter-borne survey over different earth models. The results demonstrate the stability and efficiency of the method and show that the LQL approximation can be a practical solution to the problem of 3-D inversion of multisource array frequency-domain EM data. I have also applied the method to helicopter-borne EM data collected by INCO Exploration over the Voisey's Bay area in Labrador, Canada. The results of the 3-D inversion successfully delineate the shallow massive sulfides and show that the method can produce reasonable results even in areas of complex geology and large resistivity contrasts.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008Sc%26Ed..17..537A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008Sc%26Ed..17..537A"><span>Replication and Pedagogy in the History of Psychology VI: Egon Brunswik on Perception and Explicit Reasoning</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Athy, Jeremy; Friedrich, Jeff; Delany, Eileen</p> <p>2008-05-01</p> <p>Egon Brunswik (1903-1955) first made an interesting distinction between perception and explicit reasoning, arguing that perception included quick estimates of an object’s size, nearly always resulting in good approximations in uncertain environments, whereas explicit
reasoning, while better at achieving exact estimates, could often fail by wide margins. An experiment conducted by Brunswik to investigate these ideas was never published and the only available information is a figure of the results presented in a posthumous book in 1956. We replicated and extended his study to gain insight into the procedures Brunswik used in obtaining his results. Explicit reasoning resulted in fewer errors, yet more extreme ones than perception. Brunswik’s graphical analysis of the results led to different conclusions, however, than did a modern statistically-based analysis.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=capacitors&pg=5&id=EJ488648','ERIC'); return false;" href="https://eric.ed.gov/?q=capacitors&pg=5&id=EJ488648"><span>Equal Plate Charges on Series Capacitors?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Illman, B. L.; Carlson, G. T.</p> <p>1994-01-01</p> <p>Provides a line of reasoning in support of the contention that the equal charge proposition is at best an approximation. Shows how the assumption of equal plate charge on capacitors in series contradicts the conservative nature of the electric field. 
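The idealized bookkeeping that the article questions is easy to state. A minimal sketch of the standard series-capacitor calculation it criticizes, with arbitrary example values:

```python
# Textbook "equal plate charge" idealization for capacitors in series:
# the equivalent capacitance is the reciprocal sum, and the SAME charge
# Q = C_eq * V is assigned to every capacitor.  The abstract above argues
# this proposition is at best an approximation; this sketch only shows
# the idealized calculation itself.

def series_equivalent(caps):
    """Equivalent capacitance (farads) of capacitors in series."""
    return 1.0 / sum(1.0 / c for c in caps)

caps = [1e-6, 2e-6, 4e-6]             # 1 uF, 2 uF, 4 uF in series
V = 12.0                              # total applied voltage
C_eq = series_equivalent(caps)
Q = C_eq * V                          # idealized common charge
voltages = [Q / c for c in caps]      # per-capacitor voltage drops
assert abs(sum(voltages) - V) < 1e-9  # drops sum to the applied voltage
```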
(ZWH)</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://rosap.ntl.bts.gov/view/dot/19793','DOTNTL'); return false;" href="https://rosap.ntl.bts.gov/view/dot/19793"><span>An empirical relationship between mesoscale carbon monoxide concentrations and vehicular emission rates : final report.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntlsearch.bts.gov/tris/index.do">DOT National Transportation Integrated Search</a></p> <p></p> <p>1979-01-01</p> <p>Presented is a relatively simple empirical equation that reasonably approximates the relationship between mesoscale carbon monoxide (CO) concentrations, areal vehicular CO emission rates, and the meteorological factors of wind speed and mixing height...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA461069','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA461069"><span>Approximate Reasoning: Past, Present, Future</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>1990-06-27</p> <p>This note presents a personal view of the state of the art in the representation and manipulation of imprecise and uncertain information by automated ... processing systems. 
To contrast their objectives and characteristics with the sound deductive procedures of classical logic, methodologies developed</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA216474','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA216474"><span>Advanced Concepts and Methods of Approximate Reasoning</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>1989-12-01</p> <p>immeasurably by numerous conversations and discussions with Nadal Battle, Hamid Berenji, Piero Bonissone, Bernadette Bouchon-Meunier, Miguel Delgado, Di...comments of Claudi Alsina, Hamid Berenji, Piero Bonissone, Didier Dubois, Francesc Esteva, Oscar Firschein, Marty Fischler, Pascal Fua, Maria Angeles</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20170006105&hterms=hydrogen&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3Dhydrogen','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20170006105&hterms=hydrogen&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3Dhydrogen"><span>On the Early-Time Excess Emission in Hydrogen-Poor Superluminous Supernovae</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Vreeswijk, Paul M.; Leloudas, Giorgos; Gal-Yam, Avishay; De Cia, Annalisa; Perley, Daniel A.; Quimby, Robert M.; Waldman, Roni; Sullivan, Mark; Yan, Lin; Ofek, Eran O.; et al.</p> <p>2017-01-01</p> <p>We present the light curves of the hydrogen-poor super-luminous supernovae (SLSNe I) PTF 12dam and iPTF 13dcc, discovered by the (intermediate) Palomar Transient Factory. Both show excess emission at early times and a slowly declining light curve at late times. The early bump in PTF 12dam is very similar in duration (approximately 10 days) and brightness relative to the main peak (2-3 mag fainter) compared to that observed in other SLSNe I. In contrast, the long-duration (greater than 30 days) early excess emission in iPTF 13dcc, whose brightness competes with that of the main peak, appears to be of a different nature. We construct bolometric light curves for both targets, and fit a variety of light-curve models to both the early bump and main peak in an attempt to understand the nature of these explosions. Even though the slope of the late-time decline in the light curves of both SLSNe is suggestively close to that expected from the radioactive decay of <sup>56</sup>Ni and <sup>56</sup>Co, the amount of nickel required to power the full light curves is too large considering the estimated ejecta mass. The magnetar model including an increasing escape fraction provides a reasonable description of the PTF 12dam observations. However, neither the basic nor the double-peaked magnetar model is capable of reproducing the light curve of iPTF 13dcc. A model combining a shock breakout in an extended envelope with late-time magnetar energy injection provides a reasonable fit to the iPTF 13dcc observations.
Finally, we find that the light curves of both PTF 12dam and iPTF 13dcc can be adequately fit with the model involving interaction with the circumstellar medium.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24038305','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24038305"><span>Increasing the open-circuit voltage in high-performance organic photovoltaic devices through conformational twisting of an indacenodithiophene-based conjugated polymer.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chen, Chih-Ping; Hsu, Hsiang-Lin</p> <p>2013-10-01</p> <p>A fused-ladder indacenodithiophene (IDT)-based donor-acceptor (D-A)-type alternating conjugated polymer, PIDTHT-BT, presenting n-hexylthiophene conjugated side chains is prepared. By extending the degree of intramolecular repulsion through the conjugated side chain moieties, an energy level for the highest occupied molecular orbital (HOMO) of -5.46 eV (approximately 0.27 eV lower than that of its counterpart PIDTDT-BT) is obtained, subsequently providing a fabricated solar cell with a high open-circuit voltage of approximately 0.947 V. The hole mobility (determined using the space charge-limited current model) in a blend film containing 20 wt% PIDTHT-BT and 80 wt% [6,6]-phenyl-C<sub>71</sub>-butyric acid methyl ester (PC<sub>71</sub>BM) is 2.2 × 10<sup>-9</sup> m<sup>2</sup> V<sup>-1</sup> s<sup>-1</sup>, which is within the range of reasonable values for applications in organic photovoltaics. The power conversion efficiency is 4.5% under simulated solar illumination (AM 1.5G, 100 mW cm<sup>-2</sup>). © 2013 WILEY-VCH Verlag GmbH & Co.
KGaA, Weinheim.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1986JAP....60.3576B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1986JAP....60.3576B"><span>Bounds on the conductivity of a suspension of random impenetrable spheres</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Beasley, J. D.; Torquato, S.</p> <p>1986-11-01</p> <p>We compare the general Beran bounds on the effective electrical conductivity of a two-phase composite to the bounds derived by Torquato for the specific model of spheres distributed throughout a matrix phase. For the case of impenetrable spheres, these bounds are shown to be identical and to depend on the microstructure through the sphere volume fraction φ2 and a three-point parameter ζ2, which is an integral over a three-point correlation function. We evaluate ζ2 exactly through third order in φ2 for distributions of impenetrable spheres. This expansion is compared to the analogous results of Felderhof and of Torquato and Lado, all of whom employed the superposition approximation for the three-particle distribution function involved in ζ2. The results indicate that the exact ζ2 will be greater than the value calculated under the superposition approximation. 
For reasons of mathematical analogy, the results obtained here apply as well to the determination of the thermal conductivity, dielectric constant, and magnetic permeability of composite media and the diffusion coefficient of porous media.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_19 --> <div id="page_20" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="381"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19910015015','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19910015015"><span>An approximate methods approach to probabilistic structural analysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O.
H.</p> <p>1989-01-01</p> <p>A major research and technology program in Probabilistic Structural Analysis Methods (PSAM) is currently being sponsored by the NASA Lewis Research Center with Southwest Research Institute as the prime contractor. This program is motivated by the need to accurately predict structural response in an environment where the loadings, the material properties, and even the structure may be considered random. The heart of PSAM is a software package which combines advanced structural analysis codes with a fast probability integration (FPI) algorithm for the efficient calculation of stochastic structural response. The basic idea of PSAM is simple: make an approximate calculation of system response, including calculation of the associated probabilities, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The resulting deterministic solution should give a reasonable and realistic description of performance-limiting system responses, although some error will be inevitable. If the simple model has correctly captured the basic mechanics of the system, however, including the proper functional dependence of stress, frequency, etc.
on design parameters, then the response sensitivities calculated may be of significantly higher accuracy.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/943394-glass-viscosity-function-temperature-composition-model-based-adam-gibbs-equation','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/943394-glass-viscosity-function-temperature-composition-model-based-adam-gibbs-equation"><span>GLASS VISCOSITY AS A FUNCTION OF TEMPERATURE AND COMPOSITION: A MODEL BASED ON ADAM-GIBBS EQUATION</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Hrma, Pavel R.</p> <p>2008-07-01</p> <p>Within the temperature range and composition region of processing and product forming, the viscosity of commercial and waste glasses spans over 12 orders of magnitude. This paper shows that a generalized Adam-Gibbs relationship reasonably approximates the real behavior of glasses with four temperature-independent parameters, of which two are linear functions of the composition vector. The equation is subjected to two constraints, one requiring that the viscosity-temperature relationship approaches the Arrhenius function at high temperatures with a composition-independent pre-exponential factor and the other that the viscosity value is independent of composition at the glass-transition temperature. Several sets of constant coefficients were obtained by fitting the generalized Adam-Gibbs equation to data of two glass families: float glass and Hanford waste glass.
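The Adam-Gibbs relationship referred to above ties viscosity to configurational entropy; in a common parameterization, log10 η = A + B/(T·Sc) with Sc(T) = Δcp ln(T/T0). A minimal numerical sketch under that parameterization, with hypothetical coefficients rather than the fitted float-glass or Hanford values:

```python
import math

# Adam-Gibbs viscosity model: log10(eta) = A + B / (T * Sc(T)),
# with configurational entropy Sc(T) = dcp * ln(T / T0).
# A, B, dcp, T0 below are hypothetical illustration values, not fitted ones.

def log10_viscosity(T, A=-3.0, B=6.0e4, dcp=30.0, T0=500.0):
    """Base-10 log of viscosity at absolute temperature T (K), valid for T > T0."""
    Sc = dcp * math.log(T / T0)  # configurational entropy grows with T
    return A + B / (T * Sc)

# Viscosity drops steeply on heating, spanning many orders of magnitude
# across a processing-like temperature range, as the abstract notes.
span = log10_viscosity(700.0) - log10_viscosity(1500.0)
```

As T grows, Sc saturates slowly and the B/(T·Sc) term tends toward a 1/T dependence, which is the Arrhenius-like high-temperature limit the paper imposes as a constraint.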
Other equations (the Vogel-Fulcher-Tammann equation, original and modified, the Avramov equation, and the Douglass-Doremus equation) were fitted to the float glass data series and compared with the Adam-Gibbs equation, showing that the Adam-Gibbs equation appears to be an excellent approximation for real glasses even when compared with the other candidate constitutive relations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20040030497','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20040030497"><span>System Modeling and Diagnostics for Liquefying-Fuel Hybrid Rockets</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Poll, Scott; Iverson, David; Ou, Jeremy; Sanderfer, Dwight; Patterson-Hine, Ann</p> <p>2003-01-01</p> <p>A Hybrid Combustion Facility (HCF) was recently built at NASA Ames Research Center to study the combustion properties of a new fuel formulation that burns approximately three times faster than conventional hybrid fuels. Researchers at Ames working in the area of Integrated Vehicle Health Management recognized a good opportunity to apply IVHM techniques to a candidate technology for next generation launch systems. Five tools were selected to examine various IVHM techniques for the HCF. Three of the tools, TEAMS (Testability Engineering and Maintenance System), L2 (Livingstone2), and RODON, are model-based reasoning (or diagnostic) systems. Two other tools in this study, ICS (Interval Constraint Simulator) and IMS (Inductive Monitoring System), do not attempt to isolate the cause of a failure but may be used for fault detection. Models of varying scope and completeness were created, both qualitative and quantitative. In each of the models, the structure and behavior of the physical system are captured.
In the qualitative models, the temporal aspects of the system behavior and the abstraction of sensor data are handled outside of the model and require the development of additional code. In the quantitative model, less extensive processing code is also necessary. Examples of fault diagnoses are given.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3460979','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3460979"><span>Models of cylindrical bubble pulsation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Ilinskii, Yurii A.; Zabolotskaya, Evgenia A.; Hay, Todd A.; Hamilton, Mark F.</p> <p>2012-01-01</p> <p>Three models are considered for describing the dynamics of a pulsating cylindrical bubble. A linear solution is derived for a cylindrical bubble in an infinite compressible liquid. The solution accounts for losses due to viscosity, heat conduction, and acoustic radiation. It reveals that radiation is the dominant loss mechanism, and that it is 22 times greater than for a spherical bubble of the same radius. The predicted resonance frequency provides a basis of comparison for limiting forms of other models. The second model considered is a commonly used equation in Rayleigh-Plesset form that requires an incompressible liquid to be finite in extent in order for bubble pulsation to occur. The radial extent of the liquid becomes a fitting parameter, and it is found that considerably different values of the parameter are required for modeling inertial motion versus acoustical oscillations. The third model was developed by V. K. Kedrinskii [Hydrodynamics of Explosion (Springer, New York, 2005), pp. 23–26] in the form of the Gilmore equation for compressible liquids of infinite extent. 
While the correct resonance frequency and loss factor are not recovered from this model in the linear approximation, it provides reasonable agreement with observations of inertial motion. PMID:22978863</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21913760','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21913760"><span>"Adiabatic-hindered-rotor" treatment of the parahydrogen-water complex.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zeng, Tao; Li, Hui; Le Roy, Robert J; Roy, Pierre-Nicholas</p> <p>2011-09-07</p> <p>Inspired by a recent successful adiabatic-hindered-rotor treatment for parahydrogen pH(2) in CO(2)-H(2) complexes [H. Li, P.-N. Roy, and R. J. Le Roy, J. Chem. Phys. 133, 104305 (2010); H. Li, R. J. Le Roy, P.-N. Roy, and A. R. W. McKellar, Phys. Rev. Lett. 105, 133401 (2010)], we apply the same approximation to the more challenging H(2)O-H(2) system. This approximation reduces the dimension of the H(2)O-H(2) potential from 5D to 3D and greatly enhances the computational efficiency. The global minimum of the original 5D potential is missing from the adiabatic 3D potential for reasons based on solution of the hindered-rotor Schrödinger equation of the pH(2). Energies and wave functions of the discrete rovibrational levels of H(2)O-pH(2) complexes obtained from the adiabatic 3D potential are in good agreement with the results from calculations with the full 5D potential. This comparison validates our approximation, although it is a relatively cruder treatment for pH(2)-H(2)O than it is for pH(2)-CO(2). This adiabatic approximation makes large-scale simulations of H(2)O-pH(2) systems possible via a pairwise additive interaction model in which pH(2) is treated as a point-like particle. 
The poor performance of the diabatically spherical treatment of pH(2) rotation excludes the possibility of approximating pH(2) as a simple sphere in its interaction with H(2)O. © 2011 American Institute of Physics</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4412494','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4412494"><span>Approximate Joint Diagonalization and Geometric Mean of Symmetric Positive Definite Matrices</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Congedo, Marco; Afsari, Bijan; Barachant, Alexandre; Moakher, Maher</p> <p>2015-01-01</p> <p>We explore the connection between two problems that have arisen independently in the signal processing and related fields: the estimation of the geometric mean of a set of symmetric positive definite (SPD) matrices and their approximate joint diagonalization (AJD). Today there is a considerable interest in estimating the geometric mean of a SPD matrix set in the manifold of SPD matrices endowed with the Fisher information metric. The resulting mean has several important invariance properties and has proven very useful in diverse engineering applications such as biomedical and image data processing. While for two SPD matrices the mean has an algebraic closed form solution, for a set of more than two SPD matrices it can only be estimated by iterative algorithms. However, none of the existing iterative algorithms feature at the same time fast convergence, low computational complexity per iteration and guarantee of convergence. For this reason, recently other definitions of geometric mean based on symmetric divergence measures, such as the Bhattacharyya divergence, have been considered. 
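The algebraic closed form for two SPD matrices mentioned above is the geodesic midpoint A^{1/2}(A^{-1/2} B A^{-1/2})^{1/2} A^{1/2} under the Fisher information metric; a small NumPy sketch of that two-matrix case (illustrative only — the paper's AJD-based estimator for larger sets is not reproduced here):

```python
import numpy as np

def _spd_sqrt(M):
    # Symmetric square root via eigendecomposition (M assumed SPD).
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

def _spd_inv_sqrt(M):
    w, V = np.linalg.eigh(M)
    return (V / np.sqrt(w)) @ V.T

def geometric_mean(A, B):
    """Geodesic midpoint of SPD matrices A and B under the Fisher metric."""
    As, Ais = _spd_sqrt(A), _spd_inv_sqrt(A)
    return As @ _spd_sqrt(Ais @ B @ Ais) @ As

# For commuting (here diagonal) matrices the result reduces to the
# entrywise geometric mean: diag(sqrt(4*9), sqrt(1*16)) = diag(6, 4).
G = geometric_mean(np.diag([4.0, 1.0]), np.diag([9.0, 16.0]))
```

The mean is symmetric in its arguments, one of the invariance properties that the divergence-based alternatives discussed in the abstract do not all satisfy.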
The resulting means, although possibly useful in practice, do not satisfy all desirable invariance properties. In this paper we consider geometric means of covariance matrices estimated on high-dimensional time-series, assuming that the data is generated according to an instantaneous mixing model, which is very common in signal processing. We show that in these circumstances we can approximate the Fisher information geometric mean by employing an efficient AJD algorithm. Our approximation is in general much closer to the Fisher information geometric mean as compared to its competitors and verifies many invariance properties. Furthermore, convergence is guaranteed, the computational complexity is low and the convergence rate is quadratic. The accuracy of this new geometric mean approximation is demonstrated by means of simulations. PMID:25919667</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA573150','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA573150"><span>Numerical Boundary Conditions for Specular Reflection in a Level-Sets-Based Wavefront Propagation Method</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2012-12-01</p> <p>acoustics One begins with Eikonal equation for the acoustic phase function S(t,x) as derived from the geometric acoustics (high frequency) approximation to...zb(x) is smooth and reasonably approximated as piecewise linear. 
The time domain ray (characteristic) equations for the Eikonal equation are ẋ(t)= c...travel time is affected, which is more physically relevant than global error in φ since it provides the phase information for the Eikonal equation (2.1</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013ApPhL.102f1912P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013ApPhL.102f1912P"><span>Quantitative scanning thermal microscopy of ErAs/GaAs superlattice structures grown by molecular beam epitaxy</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Park, K. W.; Nair, H. P.; Crook, A. M.; Bank, S. R.; Yu, E. T.</p> <p>2013-02-01</p> <p>A proximal probe-based quantitative measurement of thermal conductivity with ˜100-150 nm lateral and vertical spatial resolution has been implemented. Measurements on an ErAs/GaAs superlattice structure grown by molecular beam epitaxy with 3% volumetric ErAs content yielded thermal conductivity at room temperature of 9 ± 2 W/m K, approximately five times lower than that for GaAs. Numerical modeling of phonon scattering by ErAs nanoparticles yielded thermal conductivities in reasonable agreement with those measured experimentally and provides insight into the potential influence of nanoparticle shape on phonon scattering. 
Measurements of wedge-shaped samples created by focused ion beam milling provide direct confirmation of the depth resolution achieved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhRvD..97k3001L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhRvD..97k3001L"><span>Improved perturbative QCD formalism for Bc meson decays</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Liu, Xin; Li, Hsiang-nan; Xiao, Zhen-Jun</p> <p>2018-06-01</p> <p>We derive the kT resummation for doubly heavy-flavored Bc meson decays by including the charm quark mass effect into the known formula for a heavy-light system. The resultant Sudakov factor is employed in the perturbative QCD study of the "golden channel" Bc+→J/ψ π+. With a reasonable model for the Bc meson distribution amplitude, which maintains approximate on-shell conditions of both the partonic bottom and charm quarks, it is observed that the imaginary piece of the Bc→J/ψ transition form factor appears to be power suppressed, and the Bc+→J/ψ π+ branching ratio is not lower than 10^-3.
The above improved perturbative QCD formalism is applicable to Bc meson decays to other charmonia and charmed mesons.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016PhRvA..93f2707O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016PhRvA..93f2707O"><span>Comparison of experimental and theoretical triple differential cross sections for the single ionization of C O2 (1 πg ) by electron impact</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ozer, Zehra N.; Ali, Esam; Dogan, Mevlut; Yavuz, Murat; Alwan, Osman; Naja, Adnan; Chuluunbaatar, Ochbadrakh; Joulakian, Boghos B.; Ning, Chuan-Gang; Colgan, James; Madison, Don</p> <p>2016-06-01</p> <p>Experimental and theoretical triple differential cross sections for intermediate-energy (250 eV) electron-impact single ionization of the CO2 are presented for three fixed projectile scattering angles. Results are presented for ionization of the outermost 1 πg molecular orbital of C O2 in a coplanar asymmetric geometry. The experimental data are compared to predictions from the three-center Coulomb continuum approximation for triatomic targets, and the molecular three-body distorted wave (M3DW) model. 
It is observed that while both theories are in reasonable qualitative agreement with experiment, the M3DW model gives the best overall agreement.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006JHEP...10..048L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006JHEP...10..048L"><span>A paradox on quantum field theory of neutrino mixing and oscillations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, Yu-Feng; Liu, Qiu-Yu</p> <p>2006-10-01</p> <p>Neutrino mixing and oscillations in the quantum field theory framework have been studied before, showing that the Fock space of flavor states is unitarily inequivalent to that of mass states (the inequivalent vacua model). A paradox emerges when we use these neutrino weak states to calculate the amplitude of W boson decay. The branching ratio of W+→e++νμ to W+→e++νe is approximately of order O(m_i^2/k^2). The existence of flavor-changing currents contradicts the Hamiltonian we started from, as well as the usual understanding of weak processes. Also, negative energy neutrinos (or, equivalently, violation of the principle of energy conservation) appear in this framework. We discuss possible reasons for the appearance of this paradox.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008AGUFM.U43A0040A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008AGUFM.U43A0040A"><span>Measurements of Unexpected Ozone Loss in a Nighttime Space Shuttle Exhaust Plume: Implications for Geo-Engineering Projects</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Avallone, L. M.; Kalnajs, L. E.; Toohey, D. W.; Ross, M.
N.</p> <p>2008-12-01</p> <p>Measurements of ozone, carbon dioxide and particulate water were made in the nighttime exhaust plume of the Space Shuttle (STS-116) on 9 December 2006 as part of the PUMA/WAVE campaign (Plume Ultrafast Measurements Acquisition/WB-57F Ascent Video Experiment). The launch took place from Kennedy Space Center at 8:47 pm (local time) on a moonless night and the WB-57F aircraft penetrated the shuttle plume approximately 25 minutes after launch in the lowermost stratosphere. Ozone loss is not predicted to occur in a nighttime Space Shuttle plume since it has long been assumed that the main ozone loss mechanism associated with rocket emissions requires solar photolysis to drive several chlorine-based catalytic cycles. However, the nighttime in situ observations show an unexpected loss of ozone of approximately 250 ppb in the evolving exhaust plume, inconsistent with model predictions. We will present the observations of the shuttle exhaust plume composition and the results of photochemical models of the Space Shuttle plume. We will show that models constrained by known rocket emission kinetics, including afterburning, and reasonable plume dispersion rates, based on the CO2 observations, cannot explain the observed ozone loss. We will propose potential explanations for the lack of agreement between models and the observations, and will discuss the implications of these explanations for our understanding of the composition of rocket emissions. 
We will describe the potential consequences of the observed ozone loss for long-term damage to the stratospheric ozone layer should geo-engineering projects based on rocket launches be employed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014SPIE.9034E..0EB','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014SPIE.9034E..0EB"><span>Registration of organs with sliding interfaces and changing topologies</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Berendsen, Floris F.; Kotte, Alexis N. T. J.; Viergever, Max A.; Pluim, Josien P. W.</p> <p>2014-03-01</p> <p>Smoothness and continuity assumptions on the deformation field in deformable image registration do not hold for applications where the imaged objects have sliding interfaces. Recent extensions to deformable image registration that accommodate for sliding motion of organs are limited to sliding motion along approximately planar surfaces or cannot model sliding that changes the topological configuration in case of multiple organs. We propose a new extension to free-form image registration that is not limited in this way. Our method uses a transformation model that consists of uniform B-spline transformations for each organ region separately, which is based on segmentation of one image. Since this model can create overlapping regions or gaps between regions, we introduce a penalty term that minimizes this undesired effect. The penalty term acts on the surfaces of the organ regions and is optimized simultaneously with the image similarity. To evaluate our method registrations were performed on publicly available inhale-exhale CT scans for which performances of other methods are known. Target registration errors are computed on dense landmark sets that are available with these datasets. 
On these data our method outperforms the other methods in terms of target registration error and, where applicable, also in terms of overlap and gap volumes. The other methods' approximation of sliding motion as occurring along planar surfaces is reasonably well suited to the motion present in the lung data. The ability of our method to handle sliding along curved boundaries and changing region topology configurations was demonstrated on synthetic images.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20140008611','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20140008611"><span>Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Barth, Timothy J.</p> <p>2014-01-01</p> <p>This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates.
Just as CFD calculations that include error bounds but omit uncertainty modeling are of limited value, CFD calculations that include uncertainty modeling but omit error bounds are also of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1995PhRvB..5213636B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1995PhRvB..5213636B"><span>Superconductivity in the two-dimensional Hubbard model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Beenen, J.; Edwards, D. M.</p> <p>1995-11-01</p> <p>Quasiparticle bands of the two-dimensional Hubbard model are calculated using the Roth two-pole approximation to the one-particle Green's function. Excellent agreement is obtained with recent Monte Carlo calculations, including an anomalous volume of the Fermi surface near half-filling, which can possibly be explained in terms of a breakdown of Fermi liquid theory. The calculated bands are very flat around the (π,0) points of the Brillouin zone, in agreement with photoemission measurements of cuprate superconductors. With doping there is a shift in spectral weight from the upper band to the lower band. The Roth method is extended to deal with superconductivity within a four-pole approximation allowing electron-hole mixing.
It is shown that triplet p-wave pairing never occurs. A self-consistent solution with singlet dx2-y2-wave pairing is found and optimal doping occurs when the van Hove singularity, corresponding to the flat band part, lies at the Fermi level. Nearest-neighbor antiferromagnetic correlations play an important role in flattening the bands near the Fermi level and in favoring superconductivity. However, the mechanism for superconductivity is a local one, in contrast to spin-fluctuation exchange models. For reasonable values of the hopping parameter the transition temperature Tc is in the range 10-100 K. The optimum doping δc lies between 0.14 and 0.25, depending on the ratio U/t. The gap equation has a BCS-like form and 2Δmax/kTc~=4.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70094176','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70094176"><span>A screening tool for delineating subregions of steady recharge within groundwater models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Dickinson, Jesse; Ferré, T.P.A.; Bakker, Mark; Crompton, Becky</p> <p>2014-01-01</p> <p>We have developed a screening method for simplifying groundwater models by delineating areas within the domain that can be represented using steady-state groundwater recharge. The screening method is based on an analytical solution for the damping of sinusoidal infiltration variations in homogeneous soils in the vadose zone. The damping depth is defined as the depth at which the flux variation damps to 5% of the variation at the land surface. Groundwater recharge may be considered steady where the damping depth is above the depth of the water table. The analytical solution approximates the vadose zone diffusivity as constant, and we evaluated when this approximation is reasonable. 
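If the vadose zone is treated as a linear diffusion problem with constant diffusivity D (the approximation discussed above), a sinusoidal flux of angular frequency ω damps with depth as exp(-z·sqrt(ω/(2D))), so the 5% damping depth has a closed form. A brief sketch under that constant-diffusivity assumption (the diffusivity value below is hypothetical, not taken from the paper):

```python
import math

# Sinusoidal flux variation in a constant-diffusivity (linear-diffusion)
# approximation of the vadose zone damps with depth as
# exp(-z * sqrt(omega / (2 D))).  D (m^2/d) is a hypothetical value.

def damping_factor(z, period_days, D=1e-2):
    """Ratio of flux amplitude at depth z (m) to the surface amplitude."""
    omega = 2.0 * math.pi / period_days
    return math.exp(-z * math.sqrt(omega / (2.0 * D)))

def damping_depth(period_days, D=1e-2, threshold=0.05):
    """Depth at which the variation damps to `threshold` of its surface value."""
    omega = 2.0 * math.pi / period_days
    return -math.log(threshold) * math.sqrt(2.0 * D / omega)
```

Because the damping depth scales with the square root of the period, annual cycles penetrate much deeper than daily ones before damping out — which is why recharge can often be treated as steady below a modest depth.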
We evaluated the analytical solution through comparison of the damping depth computed by the analytic solution with the damping depth simulated by a numerical model that allows variable diffusivity. This comparison showed that the screening method conservatively identifies areas of steady recharge and is more accurate when water content and diffusivity are nearly constant. Nomograms of the damping factor (the ratio of the flux amplitude at any depth to the amplitude at the land surface) and the damping depth were constructed for clay and sand for periodic variations between 1 and 365 d and flux means and amplitudes from nearly 0 to 1 × 10−3 m d−1. We applied the screening tool to Central Valley, California, to identify areas of steady recharge. A MATLAB script was developed to compute the damping factor for any soil and any sinusoidal flux variation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/7517070','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/7517070"><span>Unanticipated benefits of automotive emission control: reduction in fatalities by motor vehicle exhaust gas.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Shelef, M</p> <p>1994-05-23</p> <p>In 1970, before the implementation of strict controls on emissions in motor vehicle exhaust gas (MVEG), the annual USA incidence of fatal accidents by carbon monoxide in the MVEG was approximately 800 and that of suicides approximately 2000 (somewhat less than 10% of total suicides). In 1987, there were approximately 400 fatal accidents and approximately 2700 suicides by MVEG. Accounting for the growth in population and vehicle registration, the yearly lives saved in accidents by MVEG were approximately 1200 in 1987 and avoided suicides approximately 1400. 
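The "accounting for growth" arithmetic in the abstract above can be reconstructed from its quoted figures alone; the exposure growth factor of about two used below is inferred from those figures, not stated in the source:

```python
# Back-of-envelope reconstruction of the lives-saved estimate.
# The growth factor ~2 (population/vehicle registration, 1970 -> 1987)
# is inferred from the abstract's numbers, not stated explicitly.

growth = 2.0

accidents_1970, accidents_1987 = 800, 400
suicides_1970, suicides_1987 = 2000, 2700

expected_accidents = accidents_1970 * growth  # ~1600 expected without controls
expected_suicides = suicides_1970 * growth    # ~4000 expected without controls

accident_lives_saved = expected_accidents - accidents_1987  # ~1200, as quoted
suicides_avoided = expected_suicides - suicides_1987        # ~1300, near the quoted ~1400
```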
The decrease in accidents continues unabated while the decrease in expected suicides by MVEG reached a plateau in 1981-1983. The reasons for this disparity are discussed. Juxtaposition of these results with the projected cancer risk avoidance of less than 500 annually in 2005 (as compared with 1986) plainly shows that, in terms of mortality, the unanticipated benefits of emission control far overshadow the intended benefits. With the spread of MVEG control these benefits will accrue worldwide.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.gpo.gov/fdsys/pkg/FR-2012-10-19/pdf/2012-25738.pdf','FEDREG'); return false;" href="https://www.gpo.gov/fdsys/pkg/FR-2012-10-19/pdf/2012-25738.pdf"><span>77 FR 64367 - Submission for OMB Review; Comment Request</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.gpo.gov/fdsys/browse/collection.action?collectionCode=FR">Federal Register 2010, 2011, 2012, 2013, 2014</a></p> <p></p> <p>2012-10-19</p> <p>... burden associated with money market funds' adoption of certain policies and procedures aimed at ensuring that these funds meet reasonably foreseeable shareholder redemptions (the ``general liquidity... complying with the general liquidity requirement. Approximately 10 money market funds were newly registered...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/ED269220.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/ED269220.pdf"><span>A Summary of Research in Science Education--1984.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Lawson, Anton E.; And Others</p> <p></p> <p>This review covers approximately 300 studies, including journal articles, dissertations, and papers presented at conferences. 
The studies are organized under these major headings: status surveys; scientific reasoning; elementary school science (student achievement, student conceptions/misconceptions, student curiosity/attitudes, teaching methods,…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=euthanasia&pg=4&id=EJ308332','ERIC'); return false;" href="https://eric.ed.gov/?q=euthanasia&pg=4&id=EJ308332"><span>Holocaust II?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Wolfensberger, Wolf</p> <p>1984-01-01</p> <p>The author estimates that approximately 200,000 lives of devalued disabled people (including infants and older adults) are taken or abbreviated annually through euthanasia and termination of life-supporting measures. He cites possible reasons for limited public outcry against what he compares with the holocaust. 
(CL)</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_20 --> <div id="page_21" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="401"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/17306751','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/17306751"><span>Predicting motor vehicle collisions using Bayesian neural network models: an empirical analysis.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Xie, Yuanchang; Lord, Dominique; Zhang, Yunlong</p> <p>2007-09-01</p> <p>Statistical models have frequently been used in highway safety studies. They can be utilized for various purposes, including establishing relationships between variables, screening covariates and predicting values. 
Generalized linear models (GLM) and hierarchical Bayes models (HBM) have been the most common types of model favored by transportation safety analysts. Over the last few years, researchers have proposed the back-propagation neural network (BPNN) model for modeling the phenomenon under study. Compared to GLMs and HBMs, BPNNs have received much less attention in highway safety modeling. The reasons are attributed to the complexity of estimating this kind of model as well as the problem related to "over-fitting" the data. To circumvent the latter problem, some statisticians have proposed the use of Bayesian neural network (BNN) models. These models have been shown to perform better than BPNN models while at the same time reducing the difficulty associated with over-fitting the data. The objective of this study is to evaluate the application of BNN models for predicting motor vehicle crashes. To accomplish this objective, a series of models was estimated using data collected on rural frontage roads in Texas. Three types of models were compared: BPNN, BNN and the negative binomial (NB) regression models. The results of this study show that in general both types of neural network models perform better than the NB regression model in terms of data prediction. Although the BPNN model can occasionally provide better or approximately equivalent prediction performance compared to the BNN model, in most cases its prediction performance is worse than that of the BNN model. In addition, the data fitting performance of the BPNN model is consistently worse than that of the BNN model, which suggests that the BNN model has better generalization abilities than the BPNN model and can effectively alleviate the over-fitting problem without significantly compromising the nonlinear approximation ability. 
The results also show that BNNs could be used for other useful analyses in highway safety, including the development of accident modification factors and the improvement of prediction capabilities for evaluating different highway design alternatives.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014Ocgy...54..557K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014Ocgy...54..557K"><span>Application of interleaving models for the description of intrusive layering at the fronts of deep polar water in the Eurasian Basin (Arctic)</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kuzmina, N. P.; Zhurbas, N. V.; Emelianov, M. V.; Pyzhevich, M. L.</p> <p>2014-09-01</p> <p>Interleaving models of pure thermohaline and baroclinic frontal zones are applied to describe intrusions at the fronts found in the upper part of the Deep Polar Water (DPW) when the stratification was absolutely stable. It is assumed that differential mixing is the main mechanism of the intrusion formation. Important parameters of the interleaving such as the growth rate, vertical scale, and slope of the most unstable modes relative to the horizontal plane are calculated. It was found that the interleaving model for a pure thermohaline front satisfactorily describes the important intrusion parameters observed at the frontal zone. In the case of a baroclinic front, satisfactory agreement over all the interleaving parameters is observed between the model calculations and observations provided that the vertical momentum diffusivity significantly exceeds the corresponding coefficient of mass diffusivity. 
Under specific (reasonable) constraints of the vertical momentum diffusivity, the most unstable mode has a vertical scale approximately two to three times smaller than the vertical scale of the observed intrusions. A thorough discussion of the results is presented.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19890007219&hterms=Lte&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3DLte','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19890007219&hterms=Lte&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3DLte"><span>Accretion disk modeling of AGN continuum using non-LTE stellar atmospheres. [active galactic nuclei (AGN)</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Sun, Wei-Hsin; Malkan, Matthew A.</p> <p>1988-01-01</p> <p>Active galactic nuclei (AGN) accretion disk spectra were calculated using non-LTE stellar atmosphere models for Kerr and Schwarzschild geometries. It is found that the Lyman limit absorption edge, probably the most conclusive observational evidence for the accretion disk, would be drastically distorted and displaced by the relativistic effects from the large gravitational field of the central black hole and strong Doppler motion of emitting material on the disk surface. These effects are especially pronounced in the Kerr geometry. The strength of the Lyman limit absorption is very sensitive to the surface gravity in the stellar atmosphere models used. For models at the same temperature but different surface gravities, the strength of the Lyman edge exhibits an almost exponential decrease as the surface gravity approaches the Eddington limit, which should approximate the thin disk atmosphere. 
The relativistic effects as well as the vanishing of the Lyman edge at the Eddington gravity may be the reasons that few Lyman edges are found in the rest frames of AGNs and quasars.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/16350367','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/16350367"><span>Preliminary evaluation of the Community Multiscale Air Quality model for 2002 over the Southeastern United States.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Morris, Ralph E; McNally, Dennis E; Tesche, Thomas W; Tonnesen, Gail; Boylan, James W; Brewer, Patricia</p> <p>2005-11-01</p> <p>The Visibility Improvement State and Tribal Association of the Southeast (VISTAS) is one of five Regional Planning Organizations that is charged with the management of haze, visibility, and other regional air quality issues in the United States. The VISTAS Phase I work effort modeled three episodes (January 2002, July 1999, and July 2001) to identify the optimal model configuration(s) to be used for the 2002 annual modeling in Phase II. Using model configurations recommended in the Phase I analysis, 2002 annual meteorological (Mesoscale Meteorological Model [MM5]), emissions (Sparse Matrix Operator Kernel Emissions [SMOKE]), and air quality (Community Multiscale Air Quality [CMAQ]) simulations were performed on a 36-km grid covering the continental United States and a 12-km grid covering the Eastern United States. Model estimates were then compared against observations. This paper presents the results of the preliminary CMAQ model performance evaluation for the initial 2002 annual base case simulation. Model performance is presented for the Eastern United States using speciated fine particle concentration and wet deposition measurements from several monitoring networks. 
Initial results indicate fairly good performance for sulfate with fractional bias values generally within +/-20%. Nitrate is overestimated in the winter by approximately +50% and underestimated in the summer by more than -100%. Organic carbon exhibits a large summer underestimation bias of approximately -100% with much improved performance seen in the winter with a bias near zero. Performance for elemental carbon is reasonable with fractional bias values within +/-40%. Other fine particulate (soil) and coarse particulate matter exhibit large (80-150%) overestimation in the winter but improved performance in the summer. The preliminary 2002 CMAQ runs identified several areas of enhancement to improve model performance, including revised temporal allocation factors for ammonia emissions to improve nitrate performance and addressing missing processes in the secondary organic aerosol module to improve OC performance.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/11699120','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/11699120"><span>The emotional dog and its rational tail: a social intuitionist approach to moral judgment.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Haidt, J</p> <p>2001-10-01</p> <p>Research on moral judgment has been dominated by rationalist models, in which moral judgment is thought to be caused by moral reasoning. The author gives 4 reasons for considering the hypothesis that moral reasoning does not cause moral judgment; rather, moral reasoning is usually a post hoc construction, generated after a judgment has been reached. The social intuitionist model is presented as an alternative to rationalist models. 
The model is a social model in that it deemphasizes the private reasoning done by individuals and emphasizes instead the importance of social and cultural influences. The model is an intuitionist model in that it states that moral judgment is generally the result of quick, automatic evaluations (intuitions). The model is more consistent than rationalist models with recent findings in social, cultural, evolutionary, and biological psychology, as well as in anthropology and primatology.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20020044431','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20020044431"><span>Modeling Grade IV Gas Emboli using a Limited Failure Population Model with Random Effects</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Thompson, Laura A.; Conkin, Johnny; Chhikara, Raj S.; Powell, Michael R.</p> <p>2002-01-01</p> <p>Venous gas emboli (VGE) (gas bubbles in venous blood) are associated with an increased risk of decompression sickness (DCS) in hypobaric environments. A high grade of VGE can be a precursor to serious DCS. In this paper, we model time to Grade IV VGE considering a subset of individuals assumed to be immune from experiencing VGE. Our data contain monitoring test results from subjects undergoing up to 13 denitrogenation test procedures prior to exposure to a hypobaric environment. The onset time of Grade IV VGE is recorded as contained within certain time intervals. We fit a parametric (lognormal) mixture survival model to the interval- and right-censored data to account for the possibility of a subset of "cured" individuals who are immune to the event. Our model contains random subject effects to account for correlations between repeated measurements on a single individual. 
Model assessments and cross-validation indicate that this limited failure population mixture model is an improvement over a model that does not account for the potential of a fraction of cured individuals. We also evaluated some alternative mixture models. Predictions from the best-fitting mixture model indicate that the actual process is reasonably approximated by a limited failure population model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1266397-new-test-statistic-climate-models-includes-field-spatial-dependencies-using-gaussian-markov-random-fields','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1266397-new-test-statistic-climate-models-includes-field-spatial-dependencies-using-gaussian-markov-random-fields"><span>A new test statistic for climate models that includes field and spatial dependencies using Gaussian Markov random fields</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Nosedal-Sanchez, Alvaro; Jackson, Charles S.; Huerta, Gabriel</p> <p>2016-07-20</p> <p>A new test statistic for climate model evaluation has been developed that potentially mitigates some of the limitations that exist for observing and representing field and space dependencies of climate phenomena. Traditionally such dependencies have been ignored when climate models have been evaluated against observational data, which makes it difficult to assess whether any given model is simulating observed climate for the right reasons. The new statistic uses Gaussian Markov random fields for estimating field and space dependencies within a first-order grid point neighborhood structure. We illustrate the ability of Gaussian Markov random fields to represent empirical estimates of field and space covariances using "witch hat" graphs. 
We further use the new statistic to evaluate the tropical response of a climate model (CAM3.1) to changes in two parameters important to its representation of cloud and precipitation physics. Overall, the inclusion of dependency information did not significantly alter the recognition of those regions of parameter space that best approximated observations. However, there were some qualitative differences in the shape of the response surface that suggest how such a measure could affect estimates of model uncertainty.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013OcMod..72..104T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013OcMod..72..104T"><span>Inference of turbulence parameters from a ROMS simulation using the k-ε closure scheme</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Thyng, Kristen M.; Riley, James J.; Thomson, Jim</p> <p>2013-12-01</p> <p>Comparisons between high resolution turbulence data from Admiralty Inlet, WA (USA), and a 65-meter horizontal grid resolution simulation using the hydrostatic ocean modelling code, Regional Ocean Modeling System (ROMS), show that the model's k-ε turbulence closure scheme performs reasonably well. Turbulent dissipation rates and Reynolds stresses agree within a factor of two, on average. Turbulent kinetic energy (TKE) also agrees within a factor of two, but only for motions within the observed inertial sub-range of frequencies (i.e., classic approximately isotropic turbulence). TKE spectra from the observations indicate that there is significant energy at lower frequencies than the inertial sub-range; these scales are captured by neither the model closure scheme nor the model grid resolution. 
To account for scales not present in the model, the inertial sub-range is extrapolated to lower frequencies and then integrated to obtain an inferred, diagnostic total TKE, improving agreement with the observed total TKE. The realistic behavior of the dissipation rate and Reynolds stress, combined with the adjusted total TKE, implies that ROMS simulations can be used to understand and predict spatial and temporal variations in turbulence. The results are suggested for application to the siting of tidal current turbines.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70035362','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70035362"><span>Effect of tidal fluctuations on transient dispersion of simulated contaminant concentrations in coastal aquifers</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>La Licata, Ivana; Langevin, Christian D.; Dausman, Alyssa M.; Alberti, Luca</p> <p>2011-01-01</p> <p>Variable-density groundwater models require extensive computational resources, particularly for simulations representing short-term hydrologic variability such as tidal fluctuations. Saltwater-intrusion models usually neglect tidal fluctuations and this may introduce errors in simulated concentrations. The effects of tides on simulated concentrations in a coastal aquifer were assessed. Three analyses are reported: in the first, simulations with and without tides were compared for three different dispersivity values. Tides do not significantly affect the transfer of a hypothetical contaminant into the ocean; however, the concentration difference between tidal and non-tidal simulations could be as much as 15%. In the second analysis, the dispersivity value for the model without tides was increased in a zone near the ocean boundary. 
By slightly increasing dispersivity in this zone, the maximum concentration difference between the simulations with and without tides was reduced to as low as 7%. In the last analysis, an apparent dispersivity value was calculated for each model cell using the simulated velocity variations from the model with tides. Use of apparent dispersivity values in models with a constant ocean boundary seems to provide a reasonable approach for approximating tidal effects in simulations where explicit representation of tidal fluctuations is not feasible.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018ClDy...50.1719L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018ClDy...50.1719L"><span>The epistemological status of general circulation models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Loehle, Craig</p> <p>2018-03-01</p> <p>Forecasts of both likely anthropogenic effects on climate and consequent effects on nature and society are based on large, complex software tools called general circulation models (GCMs). Forecasts generated by GCMs have been used extensively in policy decisions related to climate change. However, the relation between underlying physical theories and results produced by GCMs is unclear. In the case of GCMs, many discretizations and approximations are made, and simulating Earth system processes is far from simple and currently leads to some results with unknown energy balance implications. Statistical testing of GCM forecasts for degree of agreement with data would facilitate assessment of fitness for use. If model results need to be put on an anomaly basis due to model bias, then both visual and quantitative measures of model fit depend strongly on the reference period used for normalization, making testing problematic. 
Epistemology is here applied to problems of statistical inference during testing, the relationship between the underlying physics and the models, the epistemic meaning of ensemble statistics, problems of spatial and temporal scale, the existence or not of an unforced null for climate fluctuations, the meaning of existing uncertainty estimates, and other issues. Rigorous reasoning entails carefully quantifying levels of uncertainty.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70056198','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70056198"><span>Effect of tidal fluctuations on transient dispersion of simulated contaminant concentrations in coastal aquifers</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>La Licata, Ivana; Langevin, Christian D.; Dausman, Alyssa M.; Alberti, Luca</p> <p>2013-01-01</p> <p>Variable-density groundwater models require extensive computational resources, particularly for simulations representing short-term hydrologic variability such as tidal fluctuations. Saltwater-intrusion models usually neglect tidal fluctuations and this may introduce errors in simulated concentrations. The effects of tides on simulated concentrations in a coastal aquifer were assessed. Three analyses are reported: in the first, simulations with and without tides were compared for three different dispersivity values. Tides do not significantly affect the transfer of a hypothetical contaminant into the ocean; however, the concentration difference between tidal and non-tidal simulations could be as much as 15%. In the second analysis, the dispersivity value for the model without tides was increased in a zone near the ocean boundary. 
By slightly increasing dispersivity in this zone, the maximum concentration difference between the simulations with and without tides was reduced to as low as 7%. In the last analysis, an apparent dispersivity value was calculated for each model cell using the simulated velocity variations from the model with tides. Use of apparent dispersivity values in models with a constant ocean boundary seems to provide a reasonable approach for approximating tidal effects in simulations where explicit representation of tidal fluctuations is not feasible.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27083088','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27083088"><span>Computer models and the evidence of anthropogenic climate change: An epistemology of variety-of-evidence inferences and robustness analysis.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Vezér, Martin A</p> <p>2016-04-01</p> <p>To study climate change, scientists employ computer models, which approximate target systems with various levels of skill. Given the imperfection of climate models, how do scientists use simulations to generate knowledge about the causes of observed climate change? Addressing a similar question in the context of biological modelling, Levins (1966) proposed an account grounded in robustness analysis. Recent philosophical discussions dispute the confirmatory power of robustness, raising the question of how the results of computer modelling studies contribute to the body of evidence supporting hypotheses about climate change. 
Expanding on Staley's (2004) distinction between evidential strength and security, and Lloyd's (2015) argument connecting variety-of-evidence inferences and robustness analysis, I address this question with respect to recent challenges to the epistemology of robustness analysis. Applying this epistemology to case studies of climate change, I argue that, despite imperfections in climate models, and epistemic constraints on variety-of-evidence reasoning and robustness analysis, this framework accounts for the strength and security of evidence supporting climatological inferences, including the finding that global warming is occurring and its primary causes are anthropogenic. Copyright © 2016 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22493830-monte-carlo-simulations-ionization-potential-depression-dense-plasmas','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22493830-monte-carlo-simulations-ionization-potential-depression-dense-plasmas"><span>Monte Carlo simulations of ionization potential depression in dense plasmas</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Stransky, M., E-mail: stransky@fzu.cz</p> <p></p> <p>A particle-particle grand canonical Monte Carlo model with Coulomb pair potential interaction was used to simulate modification of ionization potentials by electrostatic microfields. The Barnes-Hut tree algorithm [J. Barnes and P. Hut, Nature 324, 446 (1986)] was used to speed up calculations of electric potential. Atomic levels were approximated to be independent of the microfields as was assumed in the original paper by Ecker and Kröll [Phys. Fluids 6, 62 (1963)]; however, the available levels were limited by the corresponding mean inter-particle distance. 
The code was tested on hydrogen and dense aluminum plasmas. The amount of depression was up to 50% higher in the Debye-Hückel regime for hydrogen plasmas; in the high density limit, reasonable agreement was found with the Ecker-Kröll model for hydrogen plasmas and with the Stewart-Pyatt model [J. Stewart and K. Pyatt, Jr., Astrophys. J. 144, 1203 (1966)] for aluminum plasmas. Our 3D code is an improvement over the spherically symmetric simplifications of the Ecker-Kröll and Stewart-Pyatt models and is also not limited to high atomic numbers as is the underlying Thomas-Fermi model used in the Stewart-Pyatt model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19830026140','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19830026140"><span>The continuous similarity model of bulk soil-water evaporation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Clapp, R. B.</p> <p>1983-01-01</p> <p>The continuous similarity model of evaporation is described. In it, evaporation is conceptualized as a two-stage process. For an initially moist soil, evaporation is first climate limited, but later it becomes soil limited. During the latter stage, the evaporation rate is termed evaporability, and mathematically it is inversely proportional to the evaporation deficit. A functional approximation of the moisture distribution within the soil column is also included in the model. The model was tested using data from four experiments conducted near Phoenix, Arizona, and there was excellent agreement between the simulated and observed evaporation. The model also predicted the time of transition to the soil limited stage reasonably well. For one of the experiments, a third stage of evaporation, when vapor diffusion predominates, was observed. 
The occurrence of this stage was related to the decrease in moisture at the surface of the soil. The continuous similarity model does not account for vapor flow. The results show that climate, through the potential evaporation rate, has a strong influence on the time of transition to the soil limited stage. After this transition, however, bulk evaporation is independent of climate until the effects of vapor flow within the soil predominate.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1266397-new-test-statistic-climate-models-includes-field-spatial-dependencies-using-gaussian-markov-random-fields','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1266397-new-test-statistic-climate-models-includes-field-spatial-dependencies-using-gaussian-markov-random-fields"><span>A new test statistic for climate models that includes field and spatial dependencies using Gaussian Markov random fields</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Nosedal-Sanchez, Alvaro; Jackson, Charles S.; Huerta, Gabriel</p> <p></p> <p>A new test statistic for climate model evaluation has been developed that potentially mitigates some of the limitations that exist for observing and representing field and space dependencies of climate phenomena. Traditionally such dependencies have been ignored when climate models have been evaluated against observational data, which makes it difficult to assess whether any given model is simulating observed climate for the right reasons. The new statistic uses Gaussian Markov random fields for estimating field and space dependencies within a first-order grid point neighborhood structure. We illustrate the ability of Gaussian Markov random fields to represent empirical estimates of field and space covariances using "witch hat" graphs. 
We further use the new statistic to evaluate the tropical response of a climate model (CAM3.1) to changes in two parameters important to its representation of cloud and precipitation physics. Overall, the inclusion of dependency information did not significantly alter the recognition of those regions of parameter space that best approximated observations. However, there were some qualitative differences in the shape of the response surface that suggest how such a measure could affect estimates of model uncertainty.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20110013488&hterms=Mass+standards&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3DMass%2Bstandards','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20110013488&hterms=Mass+standards&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3DMass%2Bstandards"><span>Automatic Determination of the Conic Coronal Mass Ejection Model Parameters</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Pulkkinen, A.; Oates, T.; Taktakishvili, A.</p> <p>2009-01-01</p> <p>Characterization of the three-dimensional structure of solar transients using incomplete plane of sky data is a difficult problem whose solutions have potential for societal benefit in terms of space weather applications. In this paper transients are characterized in three dimensions by means of conic coronal mass ejection (CME) approximation. A novel method for the automatic determination of cone model parameters from observed halo CMEs is introduced. The method uses both standard image processing techniques to extract the CME mass from white-light coronagraph images and a novel inversion routine providing the final cone parameters. A bootstrap technique is used to provide model parameter distributions. 
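The bootstrap step can be illustrated with a generic resampling sketch; the names, the fake angle data, and the mean estimator are ours for illustration, and the actual cone-parameter inversion routine is not reproduced here:

```python
import random
import statistics

def bootstrap_distribution(data, estimator, n_resamples=1000, seed=0):
    """Resample the data with replacement and re-run the estimator each time,
    yielding an empirical distribution for the estimated parameter."""
    rng = random.Random(seed)
    return [estimator([rng.choice(data) for _ in data])
            for _ in range(n_resamples)]

# Illustrative use: distribution of a mean "cone half-angle" (made-up values).
angles = [42.0, 45.5, 39.8, 47.1, 44.2, 41.6]
dist = bootstrap_distribution(angles, statistics.mean)
```

The spread of `dist` then serves directly as the parameter uncertainty fed into ensemble heliospheric runs.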
When combined with heliospheric modeling, the cone model parameter distributions will provide direct means for ensemble predictions of transient propagation in the heliosphere. An initial validation of the automatic method is carried out by comparison to manually determined cone model parameters. It is shown using 14 halo CME events that there is reasonable agreement, especially between the heliocentric locations of the cones derived with the two methods. It is argued that both the heliocentric locations and the opening half-angles of the automatically determined cones may be more realistic than those obtained from the manual analysis.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015MNRAS.449.1505S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015MNRAS.449.1505S"><span>CFHTLenS: a Gaussian likelihood is a sufficient approximation for a cosmological analysis of third-order cosmic shear statistics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Simon, P.; Semboloni, E.; van Waerbeke, L.; Hoekstra, H.; Erben, T.; Fu, L.; Harnois-Déraps, J.; Heymans, C.; Hildebrandt, H.; Kilbinger, M.; Kitching, T. D.; Miller, L.; Schrabback, T.</p> <p>2015-05-01</p> <p>We study the correlations of the shear signal between triplets of sources in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) to probe cosmological parameters via the matter bispectrum. In contrast to previous studies, we adopt a non-Gaussian model of the data likelihood which is supported by our simulations of the survey. We find that for state-of-the-art surveys, similar to CFHTLenS, a Gaussian likelihood analysis is a reasonable approximation, although small differences in the parameter constraints are already visible. For future surveys we expect that a Gaussian model becomes inaccurate. 
Our algorithm for a refined non-Gaussian analysis and data compression is then of great utility, especially because it is not much more elaborate if simulated data are available. Applying this algorithm to the third-order correlations of shear alone in a blind analysis, we find a good agreement with the standard cosmological model: Σ_8 = σ_8(Ω_m/0.27)^{0.64} = 0.79^{+0.08}_{-0.11} for a flat Λ cold dark matter cosmology with h = 0.7 ± 0.04 (68 per cent credible interval). Nevertheless our models provide only moderately good fits as indicated by χ²/dof = 2.9, including a 20 per cent rms uncertainty in the predicted signal amplitude. The models cannot explain a signal drop on scales around 15 arcmin, which may be caused by systematics. It is unclear whether the discrepancy can be fully explained by residual point spread function systematics of which we find evidence at least on scales of a few arcmin. Therefore we need a better understanding of higher order correlations of cosmic shear and their systematics to confidently apply them as cosmological probes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24681649','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24681649"><span>Groundwater pumping effects on contaminant loading management in agricultural regions.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Park, Dong Kyu; Bae, Gwang-Ok; Kim, Seong-Kyun; Lee, Kang-Kun</p> <p>2014-06-15</p> <p>Groundwater pumping changes the behavior of subsurface water, including the location of the water table and characteristics of the flow system, and eventually affects the fate of contaminants, such as nitrate from agricultural fertilizers. 
The objectives of this study were to demonstrate the importance of considering the existing pumping conditions for contaminant loading management and to develop a management model to obtain a contaminant loading design more appropriate and practical for agricultural regions where groundwater pumping is common. Results from this study found that optimal designs for contaminant loading could be determined differently when the existing pumping conditions were considered. This study also showed that prediction of contamination and contaminant loading management without considering pumping activities might be unrealistic. Motivated by these results, a management model optimizing the permissible on-ground contaminant loading mass together with pumping rates was developed and applied to field investigation and monitoring data from Icheon, Korea. The analytical solution for 1-D unsaturated solute transport was integrated with the 3-D saturated solute transport model in order to approximate the fate of contaminants loaded periodically from on-ground sources. This model was further expanded to manage agricultural contaminant loading in regions where groundwater extraction tends to be concentrated in a specific period of time, such as during the rice-growing season, using a method that approximates contaminant leaching to a fluctuating water table. The results illustrated that the simultaneous management of groundwater quantity and quality is effective and appropriate for agricultural contaminant loading management, and that the model developed in this study, which can account for time-variant pumping, could be used to accurately estimate and reasonably manage contaminant loading in agricultural areas. Copyright © 2014 Elsevier Ltd. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1352544-testing-lognormality-galaxy-weak-lensing-convergence-distributions-from-dark-energy-survey-maps','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1352544-testing-lognormality-galaxy-weak-lensing-convergence-distributions-from-dark-energy-survey-maps"><span>Testing the lognormality of the galaxy and weak lensing convergence distributions from Dark Energy Survey maps</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Clerkin, L.; Kirk, D.; Manera, M.; ...</p> <p>2016-08-30</p> <p>It is well known that the probability distribution function (PDF) of galaxy density contrast is approximately lognormal; whether the PDF of mass fluctuations derived from weak lensing convergence (kappa_WL) is lognormal is less well established. We derive PDFs of the galaxy and projected matter density distributions via the Counts in Cells (CiC) method. We use maps of galaxies and weak lensing convergence produced from the Dark Energy Survey (DES) Science Verification data over 139 deg^2. We test whether the underlying density contrast is well described by a lognormal distribution for the galaxies, the convergence and their joint PDF. We confirm that the galaxy density contrast distribution is well modeled by a lognormal PDF convolved with Poisson noise at angular scales from 10-40 arcmin (corresponding to physical scales of 3-10 Mpc). We note that as kappa_WL is a weighted sum of the mass fluctuations along the line of sight, its PDF is expected to be only approximately lognormal. 
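The lognormal-convolved-with-Poisson counts model can be sketched generically; the mean count `nbar` and lognormal width `sigma_ln` below are illustrative values, not DES numbers:

```python
import math
import random

def sample_counts(nbar, sigma_ln, n_cells, seed=1):
    """Counts-in-cells sketch: draw a lognormal density contrast per cell
    (mean-preserving, E[rho] = 1), then apply Poisson sampling noise."""
    rng = random.Random(seed)
    mu = -0.5 * sigma_ln ** 2              # ensures E[exp(g)] = 1
    counts = []
    for _ in range(n_cells):
        rho = math.exp(rng.gauss(mu, sigma_ln))   # lognormal density
        lam = nbar * rho
        # Poisson draw via Knuth's method (adequate for small means)
        L, k, p = math.exp(-lam), 0, 1.0
        while p > L:
            k += 1
            p *= rng.random()
        counts.append(k - 1)
    return counts
```

The resulting count histogram is over-dispersed relative to pure Poisson (variance exceeds the mean), and fitting it is how the width of the underlying density-contrast distribution is recovered.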
We find that the kappa_WL distribution is well modeled by a lognormal PDF convolved with Gaussian shape noise at scales between 10 and 20 arcmin, with a best-fit chi^2/DOF of 1.11 compared to 1.84 for a Gaussian model, corresponding to p-values 0.35 and 0.07 respectively, at a scale of 10 arcmin. Above 20 arcmin a simple Gaussian model is sufficient. The joint PDF is also reasonably fitted by a bivariate lognormal. As a consistency check we compare the variances derived from the lognormal modelling with those directly measured via CiC. Our methods are validated against maps from the MICE Grand Challenge N-body simulation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22386785','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22386785"><span>Metamodeling and the Critic-based approach to multi-level optimization.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Werbos, Ludmilla; Kozma, Robert; Silva-Lugo, Rodrigo; Pazienza, Giovanni E; Werbos, Paul J</p> <p>2012-08-01</p> <p>Large-scale networks with hundreds of thousands of variables and constraints are becoming more and more common in logistics, communications, and distribution domains. Traditionally, the utility functions defined on such networks are optimized using some variation of Linear Programming, such as Mixed Integer Programming (MIP). Despite enormous progress both in hardware (multiprocessor systems and specialized processors) and software (Gurobi) we are reaching the limits of what these tools can handle in real time. Modern logistic problems, for example, call for expanding the problem both vertically (from one day up to several days) and horizontally (combining separate solution stages into an integrated model). 
The complexity of such integrated models calls for alternative methods of solution, such as Approximate Dynamic Programming (ADP), which provide the further performance increase necessary for daily operation. In this paper, we present the theoretical basis and related experiments for solving the multistage decision problems based on the results obtained for shorter periods, as building blocks for the models and the solution, via Critic-Model-Action cycles, where various types of neural networks are combined with traditional MIP models in a unified optimization system. In this system architecture, fast and simple feed-forward networks are trained to reasonably initialize more complicated recurrent networks, which serve as approximators of the value function (Critic). The combination of interrelated neural networks and optimization modules allows for multiple queries for the same system, providing flexibility and optimizing performance for large-scale real-life problems. A MATLAB implementation of our solution procedure for a realistic set of data and constraints shows promising results, compared to the iterative MIP approach. Copyright © 2012 Elsevier Ltd. 
All rights reserved.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_21 --> <div id="page_22" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="421"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MNRAS.466.1444C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MNRAS.466.1444C"><span>Testing the lognormality of the galaxy and weak lensing convergence distributions from Dark Energy Survey maps</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Clerkin, L.; Kirk, D.; Manera, M.; Lahav, O.; Abdalla, F.; Amara, A.; Bacon, D.; Chang, C.; Gaztañaga, E.; Hawken, A.; Jain, B.; Joachimi, B.; Vikram, V.; Abbott, T.; Allam, S.; Armstrong, R.; Benoit-Lévy, A.; Bernstein, G. M.; Bernstein, R. 
A.; Bertin, E.; Brooks, D.; Burke, D. L.; Rosell, A. Carnero; Carrasco Kind, M.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Dietrich, J. P.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; James, D. J.; Kent, S.; Kuehn, K.; Kuropatkin, N.; Lima, M.; Melchior, P.; Miquel, R.; Nord, B.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Sanchez, E.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Walker, A. R.</p> <p>2017-04-01</p> <p>It is well known that the probability distribution function (PDF) of galaxy density contrast is approximately lognormal; whether the PDF of mass fluctuations derived from weak lensing convergence (κWL) is lognormal is less well established. We derive PDFs of the galaxy and projected matter density distributions via the counts-in-cells (CiC) method. We use maps of galaxies and weak lensing convergence produced from the Dark Energy Survey Science Verification data over 139 deg2. We test whether the underlying density contrast is well described by a lognormal distribution for the galaxies, the convergence and their joint PDF. We confirm that the galaxy density contrast distribution is well modelled by a lognormal PDF convolved with Poisson noise at angular scales from 10 to 40 arcmin (corresponding to physical scales of 3-10 Mpc). We note that as κWL is a weighted sum of the mass fluctuations along the line of sight, its PDF is expected to be only approximately lognormal. We find that the κWL distribution is well modelled by a lognormal PDF convolved with Gaussian shape noise at scales between 10 and 20 arcmin, with a best-fitting χ2/dof of 1.11 compared to 1.84 for a Gaussian model, corresponding to p-values 0.35 and 0.07, respectively, at a scale of 10 arcmin. Above 20 arcmin a simple Gaussian model is sufficient. 
The joint PDF is also reasonably fitted by a bivariate lognormal. As a consistency check, we compare the variances derived from the lognormal modelling with those directly measured via CiC. Our methods are validated against maps from the MICE Grand Challenge N-body simulation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19960009062','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19960009062"><span>Verification of a three-dimensional resin transfer molding process simulation model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Fingerson, John C.; Loos, Alfred C.; Dexter, H. Benson</p> <p>1995-01-01</p> <p>Experimental evidence was obtained to complete the verification of the parameters needed for input to a three-dimensional finite element model simulating the resin flow and cure through an orthotropic fabric preform. The material characterizations completed include resin kinetics and viscosity models, as well as preform permeability and compaction models. The steady-state and advancing front permeability measurement methods are compared. The results indicate that both methods yield similar permeabilities for a plain weave, bi-axial fiberglass fabric. Also, a method to determine principal directions and permeabilities is discussed and results are shown for a multi-axial warp knit preform. The flow of resin through a blade-stiffened preform was modeled and experiments were completed to verify the results. The predicted inlet pressure was approximately 65% of the measured value. A parametric study was performed to explain differences in measured and predicted flow front advancement and inlet pressures. Furthermore, PR-500 epoxy resin/IM7 8HS carbon fabric flat panels were fabricated by the Resin Transfer Molding process. 
Tests were completed utilizing both perimeter injection and center-port injection as resin inlet boundary conditions. The mold was instrumented with FDEMS sensors, pressure transducers, and thermocouples to monitor the process conditions. Results include a comparison of predicted and measured inlet pressures and flow front position. For the perimeter injection case, the measured inlet pressure and flow front results compared well to the predicted results. The results of the center-port injection case showed that the predicted inlet pressure was approximately 50% of the measured inlet pressure. Also, measured flow front position data did not agree well with the predicted results. Possible reasons for error include fiber deformation at the resin inlet and a lag in FDEMS sensor wet-out due to low mold pressures.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19900023573&hterms=fashion+models&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dfashion%2Bmodels','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19900023573&hterms=fashion+models&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dfashion%2Bmodels"><span>Overcoming limitations of model-based diagnostic reasoning systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Holtzblatt, Lester J.; Marcotte, Richard A.; Piazza, Richard L.</p> <p>1989-01-01</p> <p>The development of a model-based diagnostic system to overcome the limitations of model-based reasoning systems is discussed. It is noted that model-based reasoning techniques can be used to analyze the failure behavior and diagnosability of system and circuit designs as part of the system process itself. 
One goal of current research is the development of a diagnostic algorithm which can reason efficiently about large numbers of diagnostic suspects and can handle both combinational and sequential circuits. A second goal is to address the model-creation problem by developing an approach for using design models to construct the GMODS model in an automated fashion.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=statistics+AND+levels&pg=5&id=EJ851278','ERIC'); return false;" href="https://eric.ed.gov/?q=statistics+AND+levels&pg=5&id=EJ851278"><span>Helping Students Develop Statistical Reasoning: Implementing a Statistical Reasoning Learning Environment</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Garfield, Joan; Ben-Zvi, Dani</p> <p>2009-01-01</p> <p>This article describes a model for an interactive, introductory secondary- or tertiary-level statistics course that is designed to develop students' statistical reasoning. This model is called a "Statistical Reasoning Learning Environment" and is built on the constructivist theory of learning.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19888432','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19888432"><span>Promoting the self-regulation of clinical reasoning skills in nursing students.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kuiper, R; Pesut, D; Kautz, D</p> <p>2009-10-02</p> <p>The purpose of this paper is to describe the research surrounding the theories and models the authors united to describe the essential components of clinical reasoning in nursing practice education. 
The research was conducted with nursing students in health care settings through the application of teaching and learning strategies with the Self-Regulated Learning Model (SRL) and the Outcome-Present-State-Test (OPT) Model of Reflective Clinical Reasoning. Standardized nursing languages provided the content and clinical vocabulary for the clinical reasoning task. This descriptive study described the application of the OPT model of clinical reasoning, use of nursing language content, and reflective journals based on the SRL model with 66 undergraduate nursing students over an 8-month period. The study tested the idea that self-regulation of clinical reasoning skills can be developed using self-regulation theory and the OPT model. This research supports a framework for effective teaching and learning methods to promote and document learner progress in mastering clinical reasoning skills. Self-Regulated Learning strategies coupled with the OPT model suggest benefits of self-observation and self-monitoring during clinical reasoning activities, and pinpoint where guidance is needed for the development of cognitive and metacognitive awareness. Thinking and reasoning about the complexities of patient care needs requires attention to the content, processes and outcomes that make a nursing care difference. 
These principles and concepts are valuable to clinical decision making for nurses globally as they deal with local, regional, national and international health care issues.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/17612880','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/17612880"><span>Overview of psychiatric ethics IV: the method of casuistry.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Robertson, Michael; Ryan, Christopher; Walter, Garry</p> <p>2007-08-01</p> <p>The aim of this paper is to describe the method of ethical analysis known as casuistry and consider its merits as a basis of ethical deliberation in psychiatry. Casuistry approximates the legal arguments of common law. It examines ethical dilemmas by adopting a taxonomic approach to 'paradigm' cases, using a technique akin to that of normative analogical reasoning. Casuistry offers a useful method in ethical reasoning through providing a practical means of evaluating the merits of a particular course of action in a particular clinical situation. 
As a method of ethical reasoning in psychiatry, casuistry suffers from a paucity of paradigm cases and its failure to fully contextualize ethical dilemmas by relying on common morality theory as its basis.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/10279817','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/10279817"><span>A statistical test of the stability assumption inherent in empirical estimates of economic depreciation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Shriver, K A</p> <p>1986-01-01</p> <p>Realistic estimates of economic depreciation are required for analyses of tax policy, economic growth and production, and national income and wealth. The purpose of this paper is to examine the stability assumption underlying the econometric derivation of empirical estimates of economic depreciation for industrial machinery and equipment. The results suggest that a reasonable stability of economic depreciation rates of decline may exist over time. Thus, the assumption of a constant rate of economic depreciation may be a reasonable approximation for further empirical economic analyses.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20100033799','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20100033799"><span>Proposal for a Joint NASA/KSAT Ka-band RF Propagation Terminal at Svalbard, Norway</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Volosin, Jeffrey; Acosta, Roberto; Nessel, James; McCarthy, Kevin; Caroglanian, Armen</p> <p>2010-01-01</p> <p>This slide presentation discusses the placement of a Ka-band RF Propagation Terminal at Svalbard, Norway. 
The Near Earth Network (NEN) station would be managed by Kongsberg Satellite Services (KSAT) and would benefit NASA and KSAT. There are details of the proposed NASA/KSAT campaign, and the responsibilities each would agree to. There are several reasons for the placement; a primary reason is comparison with the Alaska site. Based on climatological similarities and differences with Alaska, the Svalbard site is expected to have good radiometer/beacon agreement approximately 99% of the time.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19920028832&hterms=berenji&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dberenji','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19920028832&hterms=berenji&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dberenji"><span>Using new aggregation operators in rule-based intelligent control</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Berenji, Hamid R.; Chen, Yung-Yaw; Yager, Ronald R.</p> <p>1990-01-01</p> <p>A new aggregation operator is applied in the design of an approximate reasoning-based controller. The ordered weighted averaging (OWA) operator has the property of lying between the And function and the Or function used in previous fuzzy set reasoning systems. It is shown here that, by applying OWA operators, more generalized types of control rules, which may include linguistic quantifiers such as Many and Most, can be developed. 
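The defining property of the OWA operator is easy to verify in code: the weight vector (1, 0, …, 0) reduces it to Or (max), (0, …, 0, 1) to And (min), and intermediate weight vectors interpolate between the two. A minimal sketch (the function name is ours):

```python
def owa(values, weights):
    """Ordered weighted averaging: sort the arguments in descending order,
    then take the weighted sum by position (weights should sum to 1)."""
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

memberships = [0.2, 0.9, 0.5]
owa(memberships, [1, 0, 0])        # Or  -> max -> 0.9
owa(memberships, [0, 0, 1])        # And -> min -> 0.2
owa(memberships, [1/3, 1/3, 1/3])  # arithmetic mean, between the two
```

Linguistic quantifiers such as Many and Most correspond to intermediate weight vectors derived from the quantifier's membership function.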
The new aggregation operators, as tested in a cart-pole balancing control problem, illustrate improved performance when compared with existing fuzzy control aggregation schemes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=learning+AND+inference&pg=4&id=EJ1124804','ERIC'); return false;" href="https://eric.ed.gov/?q=learning+AND+inference&pg=4&id=EJ1124804"><span>Logical Reasoning versus Information Processing in the Dual-Strategy Model of Reasoning</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Markovits, Henry; Brisson, Janie; de Chantal, Pier-Luc</p> <p>2017-01-01</p> <p>One of the major debates concerning the nature of inferential reasoning is between counterexample-based strategies such as mental model theory and statistical strategies underlying probabilistic models. The dual-strategy model, proposed by Verschueren, Schaeken, & d'Ydewalle (2005a, 2005b), which suggests that people might have access to both…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/6131105-common-sense-reasoning-about-petroleum-flow','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/6131105-common-sense-reasoning-about-petroleum-flow"><span>Common sense reasoning about petroleum flow</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Rosenberg, S.</p> <p>1981-02-01</p> <p>This paper describes an expert system for understanding and reasoning in a petroleum resources domain. A basic model is implemented in FRL (Frame Representation Language). Expertise is encoded as rule frames. 
The model consists of a set of episodic contexts which are sequentially generated over time. Reasoning occurs in separate reasoning contexts consisting of a buffer frame and packets of rules. These function similarly to small production systems. Reasoning is linked to the model through an interface of Sentinels (instance-driven demons) which notice anomalous conditions. Heuristics and metaknowledge are used through the creation of further reasoning contexts which overlay the simpler ones.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23333418','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23333418"><span>Vapor-phase transport of trichloroethene in an intermediate-scale vadose-zone system: retention processes and tracer-based prediction.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Costanza-Robinson, Molly S; Carlson, Tyson D; Brusseau, Mark L</p> <p>2013-02-01</p> <p>Gas-phase transport experiments were conducted using a large weighing lysimeter to evaluate retention processes for volatile organic compounds (VOCs) in water-unsaturated (vadose-zone) systems, and to test the utility of gas-phase tracers for predicting VOC retardation. Trichloroethene (TCE) served as a model VOC, while trichlorofluoromethane (CFM) and heptane were used as partitioning tracers to independently characterize retention by water and the air-water interface, respectively. Retardation factors for TCE ranged between 1.9 and 3.5, depending on water content. The results indicate that dissolution into the bulk water was the primary retention mechanism for TCE under all conditions studied, contributing approximately two-thirds of the total measured retention. 
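The split of retention among mechanisms can be expressed as fractions of R − 1. The sketch below uses a common linear-partitioning form for vapor-phase retardation (retention by bulk water, the solid phase, and the air-water interface, each relative to the mobile gas phase); the symbols, their conventions, and the example values are our assumptions, not the paper's data, and coefficient definitions vary across the literature:

```python
def retardation_terms(theta_w, theta_a, K_h, rho_b, K_d, A_ia, K_ia):
    """Additive retention terms for gas-phase transport under linear
    partitioning (one common convention): theta_w/theta_a are water/air
    contents, K_h a dimensionless Henry's constant, rho_b*K_d solid-phase
    sorption, A_ia*K_ia air-water interfacial accumulation."""
    water = theta_w / (K_h * theta_a)          # dissolution into bulk water
    solid = rho_b * K_d / (K_h * theta_a)      # sorption to the solid phase
    interface = A_ia * K_ia / theta_a          # air-water interface accumulation
    R = 1.0 + water + solid + interface
    fractions = {name: term / (R - 1.0) for name, term in
                 (("water", water), ("solid", solid), ("interface", interface))}
    return R, fractions
```

With illustrative inputs, the three fractions sum to one by construction, mirroring how the abstract apportions roughly two-thirds, one-quarter, and one-tenth of the retention.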
Accumulation at the air-water interface comprised a significant fraction of the observed retention for all experiments, with an average contribution of approximately 24%. Sorption to the solid phase contributed approximately 10% to retention. Water contents and air-water interfacial areas estimated based on the CFM and heptane tracer data, respectively, were similar to independently measured values. Retardation factors for TCE predicted using the partitioning-tracer data were in reasonable agreement with the measured values. These results suggest that gas-phase tracer tests hold promise for characterizing the retention and transport of VOCs in the vadose zone. Copyright © 2012 Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20100003059','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20100003059"><span>The Chandra M101 Megasecond: Diffuse Emission</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Kuntz, K. D.; Snowden, S. L.</p> <p>2009-01-01</p> <p>Because M101 is nearly face-on, it provides an excellent laboratory in which to study the distribution of X-ray emitting gas in a typical late-type spiral galaxy. We obtained a Chandra observation with a cumulative exposure of roughly 1 Ms to study the diffuse X-ray emission in M101. The bulk of the X-ray emission is correlated with the star formation traced by the FUV emission. The global FUV/X-ray correlation is non-linear (the X-ray surface brightness is roughly proportional to the square root of the FUV surface brightness) and the small-scale correlation is poor, probably due to the delay between the FUV emission and the X-ray production in star-forming regions.
The X-ray emission contains only minor contributions from unresolved stars (≲3%), unresolved X-ray point sources (≲4%), and individual supernova remnants (≈3%). The global spectrum of the diffuse emission can be reasonably well fitted with a three-component thermal model, but the fitted temperatures are not unique; many distributions of emission measure can produce the same temperatures when observed with the current CCD energy resolution. The spectrum of the diffuse emission depends on the environment; regions with higher X-ray surface brightnesses have relatively stronger hard components, but there is no significant evidence that the temperatures of the emitting components increase with surface brightness.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27176044','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27176044"><span>Logical reasoning versus information processing in the dual-strategy model of reasoning.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Markovits, Henry; Brisson, Janie; de Chantal, Pier-Luc</p> <p>2017-01-01</p> <p>One of the major debates concerning the nature of inferential reasoning is between counterexample-based strategies such as mental model theory and statistical strategies underlying probabilistic models. The dual-strategy model, proposed by Verschueren, Schaeken, & d'Ydewalle (2005a, 2005b), which suggests that people might have access to both kinds of strategy, has been supported by several recent studies. These have shown that statistical reasoners make inferences based on using information about premises in order to generate a likelihood estimate of conclusion probability.
However, while results concerning counterexample reasoners are consistent with a counterexample detection model, these results could equally be interpreted as indicating a greater sensitivity to logical form. In order to distinguish these 2 interpretations, in Studies 1 and 2, we presented reasoners with Modus ponens (MP) inferences with statistical information about premise strength and in Studies 3 and 4, naturalistic MP inferences with premises having many disabling conditions. Statistical reasoners accepted the MP inference more often than counterexample reasoners in Studies 1 and 2, while the opposite pattern was observed in Studies 3 and 4. Results show that these strategies must be defined in terms of information processing, with no clear relations to "logical" reasoning. These results have additional implications for the underlying debate about the nature of human reasoning. (PsycINFO Database Record (c) 2017 APA, all rights reserved).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=atmospheric&pg=6&id=EJ1011137','ERIC'); return false;" href="https://eric.ed.gov/?q=atmospheric&pg=6&id=EJ1011137"><span>Investigating College and Graduate Students' Multivariable Reasoning in Computational Modeling</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Wu, Hsin-Kai; Wu, Pai-Hsing; Zhang, Wen-Xin; Hsu, Ying-Shao</p> <p>2013-01-01</p> <p>Drawing upon the literature in computational modeling, multivariable reasoning, and causal attribution, this study aims at characterizing multivariable reasoning practices in computational modeling and revealing the nature of understanding about multivariable causality. 
We recruited two freshmen, two sophomores, two juniors, two seniors, four…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011PhPl...18c4702T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011PhPl...18c4702T"><span>Response to ``Comment on `Scalings for radiation from plasma bubbles' '' [Phys. Plasmas 18, 034701 (2011)]</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Thomas, A. G. R.</p> <p>2011-03-01</p> <p>In the preceding Comment, Corde, Stordeur, and Malka claim that the trapping threshold derived in my recent paper is incorrect. Their principal argument is that the elliptical orbits I used are not exact solutions of the equation of motion in the fields of the bubble. The original paper never claimed this—rather I claimed that the use of elliptical orbits was a reasonable approximation, which I based on observations from particle-in-cell simulations. Integration of the equation of motion for analytical expressions for idealized bubble fields (either analytically [I. Kostyukov, E. Nerush, A. Pukhov, and V. Seredov, Phys. Rev. Lett. 103, 175003 (2009)] or numerically [S. Corde, A. Stordeur, and V. Malka, "Comment on `Scalings for radiation from plasma bubbles,' " Phys. Plasmas 18, 034701 (2011)]) produces a trapping threshold wholly inconsistent with experiments and full particle-in-cell (PIC) simulations (e.g., requiring an estimated laser intensity of a0 ~ 30 for ne ~ 10^19 cm^-3). The inconsistency in the particle trajectories between PIC and the numeric model used by the comment authors arises due to the fact that the analytical fields are only approximately true for "real" plasma bubbles, and lack certain key features of the field structure.
Two possible methods of resolution to this inconsistency are either to find ever more complicated but accurate models for the bubble fields or to find approximate solutions to the equations of motion that capture the essential features of the self-consistent electron trajectories. The latter, heuristic approach used in my recent paper produced a threshold that is better matched to experimental observations. In this reply, I will also revisit the problem and examine the relationship between bubble radius and electron momentum at the point of trapping without reference to a particular trajectory.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2001MNRAS.327..557M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2001MNRAS.327..557M"><span>Gravitational lensing in modified Newtonian dynamics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mortlock, Daniel J.; Turner, Edwin L.</p> <p>2001-10-01</p> <p>Modified Newtonian dynamics (MOND) is an alternative theory of gravity that aims to explain large-scale dynamics without recourse to any form of dark matter. However, the theory is incomplete, lacking a relativistic counterpart, and so makes no definite predictions about gravitational lensing. The most obvious form that MONDian lensing might take is that photons experience twice the deflection of massive particles moving at the speed of light, as in general relativity (GR). In such a theory there is no general thin-lens approximation (although one can be made for spherically symmetric deflectors), but the three-dimensional acceleration of photons is in the same direction as the relativistic acceleration would be. 
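Under this naive formulation, the point-mass case can be made concrete by integrating the transverse acceleration along an undeflected straight path and doubling the result for photons. The sketch below is illustrative only: the lens mass is hypothetical, and the deep-MOND law g = sqrt(G·M·a0)/r is assumed to hold along the whole path, which is the simplest possible caricature of the two theories.

```python
import numpy as np

# Illustrative only: deflection of a light ray past a point mass, obtained by
# integrating the transverse acceleration along the undeflected straight path
# and doubling the result for photons (the naive MONDian prescription above).
# M is a hypothetical lens mass; the deep-MOND law is assumed everywhere.

G, c, a0 = 6.674e-11, 2.998e8, 1.2e-10       # SI units; a0 is the MOND scale
M = 1.0e41                                    # roughly 5e10 solar masses

def deflection(b, accel, z_max=1e24, n=400001):
    """Deflection angle (radians) for impact parameter b (metres)."""
    z = np.linspace(-z_max, z_max, n)
    r = np.hypot(b, z)
    g_perp = accel(r) * b / r                 # transverse acceleration component
    integral = np.sum(0.5 * (g_perp[1:] + g_perp[:-1]) * np.diff(z))
    return 2.0 * integral / c**2              # photons: twice the massive-particle value

newtonian = lambda r: G * M / r**2            # weak-field GR limit
deep_mond = lambda r: np.sqrt(G * M * a0) / r

b1, b2 = 3.0e20, 6.0e20                       # ~10 kpc and ~20 kpc
```

For the Newtonian case this reproduces the familiar α = 4GM/(c²b); in the deep-MOND case the deflection is nearly independent of b (it tends to 2π·sqrt(G·M·a0)/c²), which is why point-mass regimes such as microlensing could in principle distinguish the two pictures.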
In regimes where the deflector can reasonably be approximated as a single point-mass (specifically low-optical depth microlensing and weak galaxy-galaxy lensing), this naive formulation is consistent with observations. Forthcoming galaxy-galaxy lensing data and the possibility of cosmological microlensing have the potential to distinguish unambiguously between GR and MOND. Some tests can also be performed with extended deflectors, for example by using surface brightness measurements of lens galaxies to model quasar lenses, although the breakdown of the thin-lens approximation allows an extra degree of freedom. None the less, it seems unlikely that simple ellipsoidal galaxies can satisfy both constraints. Furthermore, the low-density universe implied by MOND must be completely dominated by the cosmological constant (to fit microwave background observations), and such models are at odds with the low frequency of quasar lenses. These conflicts might be resolved by a fully consistent relativistic extension to MOND; the alternative is that MOND is not an accurate description of the Universe.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhRvB..95s5158B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhRvB..95s5158B"><span>Convergence behavior of the random phase approximation renormalized correlation energy</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bates, Jefferson E.; Sensenig, Jonathon; Ruzsinszky, Adrienn</p> <p>2017-05-01</p> <p>Based on the random phase approximation (RPA), RPA renormalization [J. E. Bates and F. Furche, J. Chem. Phys. 139, 171103 (2013), 10.1063/1.4827254] is a robust many-body perturbation theory that works for molecules and materials because it does not diverge as the Kohn-Sham gap approaches zero. 
Additionally, RPA renormalization enables the simultaneous calculation of RPA and beyond-RPA correlation energies since the total correlation energy is the sum of a series of independent contributions. The first-order approximation (RPAr1) yields the dominant beyond-RPA contribution to the correlation energy for a given exchange-correlation kernel, but systematically underestimates the total beyond-RPA correction. For both the homogeneous electron gas model and real systems, we demonstrate numerically that RPA renormalization beyond first order converges monotonically to the infinite-order beyond-RPA correlation energy for several model exchange-correlation kernels and that the rate of convergence is principally determined by the choice of the kernel and spin polarization of the ground state. The monotonic convergence is rationalized from an analysis of the RPA renormalized correlation energy corrections, assuming the exchange-correlation kernel and response functions satisfy some reasonable conditions. For spin-unpolarized atoms, molecules, and bulk solids, we find that RPA renormalization is typically converged to 1 meV error or less by fourth order regardless of the band gap or dimensionality. Most spin-polarized systems converge at a slightly slower rate, with errors on the order of 10 meV at fourth order and typically requiring up to sixth order to reach 1 meV error or less. 
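The order-by-order convergence described here can be caricatured with a toy series in which each successive renormalized correction supplies a fixed fraction of the remaining beyond-RPA energy. The values of r and E_inf below are invented solely to reproduce the qualitative fourth-order versus sixth-order contrast; they are not from the paper.

```python
# Toy model (invented numbers): if order n contributes the fraction
# (1 - r) * r**(n - 1) of the total beyond-RPA energy E_inf, the partial
# sums converge monotonically and the rate is set entirely by r, which
# here stands in for the choice of kernel and spin polarization.

def partial_sums(E_inf, r, orders):
    """Beyond-RPA energy summed through each order n = 1..orders."""
    sums, total = [], 0.0
    for n in range(1, orders + 1):
        total += E_inf * (1 - r) * r ** (n - 1)   # order-n contribution
        sums.append(total)
    return sums

E_inf = -50.0                                          # hypothetical, in meV
closed_shell = partial_sums(E_inf, r=0.15, orders=8)   # fast convergence
open_shell = partial_sums(E_inf, r=0.45, orders=8)     # slow convergence
err_cs = [abs(E_inf - s) for s in closed_shell]
err_os = [abs(E_inf - s) for s in open_shell]
```

With these invented rates the "closed-shell" series is under 1 meV error by fourth order while the "open-shell" series needs sixth order, mirroring the behavior reported above.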
Open-shell atoms, however, converge slowest and present the most challenging case, requiring many more orders.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=value+AND+chain+AND+model&pg=7&id=ED304994','ERIC'); return false;" href="https://eric.ed.gov/?q=value+AND+chain+AND+model&pg=7&id=ED304994"><span>The Emergence of Metaethical Reasoning.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Langford, Peter E.</p> <p></p> <p>A multidimensional model of the growth of moral reasoning is described that is significantly different from those proposed by Kohlberg and Piaget. A study that tests several aspects of the model on university students is reported. The suggestion that well-developed chains of reasons are a prerequisite for the emergence of metaethical reasoning was…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=memory+AND+extraction&pg=3&id=EJ738629','ERIC'); return false;" href="https://eric.ed.gov/?q=memory+AND+extraction&pg=3&id=EJ738629"><span>Cognitive Trait Modelling: The Case of Inductive Reasoning Ability</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Kinshuk; Lin, Taiyu; McNab, Paul</p> <p>2006-01-01</p> <p>Researchers have regarded inductive reasoning as one of the seven primary mental abilities that account for human intelligent behaviours. Researchers have also shown that inductive reasoning ability is one of the best predictors for academic performance.
Modelling of inductive reasoning is therefore an important issue for providing adaptivity in…</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li class="active"><span>22</span></li> <li><a href="#" onclick='return showDiv("page_23");'>23</a></li> <li><a href="#" onclick='return showDiv("page_24");'>24</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_22 --> <div id="page_23" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_22");'>22</a></li> <li class="active"><span>23</span></li> <li><a href="#" onclick='return showDiv("page_24");'>24</a></li> <li><a href="#" onclick='return showDiv("page_25");'>25</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="441"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19980046640','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19980046640"><span>Comparison of Response Surface and Kriging Models in the Multidisciplinary Design of an Aerospike Nozzle</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Simpson, Timothy W.</p> <p>1998-01-01</p> <p>The use of response surface models and kriging models are compared for approximating non-random, deterministic computer analyses. 
After discussing the traditional response surface approach for constructing polynomial models for approximation, kriging is presented as an alternative statistical-based approximation method for the design and analysis of computer experiments. Both approximation methods are applied to the multidisciplinary design and analysis of an aerospike nozzle which consists of a computational fluid dynamics model and a finite element analysis model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations. Four optimization problems are formulated and solved using both approximation models. While neither approximation technique consistently outperforms the other in this example, the kriging models using only a constant for the underlying global model and a Gaussian correlation function perform as well as the second-order polynomial response surface models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1997rain.rept.....M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1997rain.rept.....M"><span>Differential Ablation of Cosmic Dust and Implications for the Relative Abundances of Atmospheric Metals</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>McNeil, W. J.</p> <p>1997-09-01</p> <p>Metals in the Earth's atmosphere are of interest and importance for several reasons. Emission lines from the sodium layer are used for wave front corrections in imaging space objects. The ionospheric metals present background contamination for remote sensing and tracking of space-borne objects. Ionization during meteor showers may also interfere with communications.
Although it is generally accepted that extraterrestrial material is the source of metals in the atmosphere, the relative abundances of mesospheric metals and ions present us with a conundrum. Lidar observations have consistently shown that the abundances of neutral metals in the atmosphere and the abundances of these metals in the meteoric material that falls to Earth are significantly disproportionate. For example, the column density of neutral sodium is perhaps two orders of magnitude larger than that of calcium, while the abundances in meteorites are approximately equal. To complicate matters further, ion mass spectroscopy has shown that the abundances of the meteoric ions match reasonably well those in the meteorites. We present here a model that attempts to address these discrepancies. At the heart of the model is the concept of differential ablation, which suggests that more volatile metals sublimate earlier in the descent of a cosmic dust particle than do the less volatile components. The modeling is carried out comprehensively, beginning with the heating and vaporization of the dust particles.
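The differential-ablation idea can be caricatured in a few lines: assume a particle heats as it descends and that each metal is released once its sublimation temperature is reached, so volatile sodium comes off higher up than refractory calcium. The heating profile and sublimation temperatures below are hypothetical round numbers, not the model's actual kinetics.

```python
# Caricature of differential ablation; all numbers are assumed, not McNeil's.

sublimation_T = {"Na": 1100.0, "Fe": 1500.0, "Ca": 1800.0}  # K, hypothetical

def particle_T(altitude_km):
    """Assumed frictional-heating profile: hotter at lower altitude."""
    return 300.0 + 40.0 * max(0.0, 110.0 - altitude_km)

def ablation_altitude(metal):
    """Highest altitude (km, scanning down in 1-km steps) where the metal
    first exceeds its sublimation temperature and begins to ablate."""
    for h in range(110, 59, -1):
        if particle_T(h) >= sublimation_T[metal]:
            return h
    return None
```

Under these assumptions sodium ablates at a higher altitude than iron, which in turn ablates higher than calcium, which is the qualitative mechanism invoked to explain the Na/Ca discrepancy.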
The vaporization rate is computed as a function of altitude for an ensemble of particles to give a deposition function, which is then injected into a fully time-dependent kinetic code that allows for vertical diffusion and includes diurnal dependence, both through models of the major atmospheric components and through transport of the ions by electric fields.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015IJNAO...7..750L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015IJNAO...7..750L"><span>An optimal design of wind turbine and ship structure based on neuro-response surface method</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lee, Jae-Chul; Shin, Sung-Chul; Kim, Soo-Young</p> <p>2015-07-01</p> <p>The geometry of engineering systems affects their performance. For this reason, the shape of engineering systems needs to be optimized in the initial design stage. However, engineering system design problems consist of multi-objective optimization, and the performance analysis using commercial code or numerical analysis is generally time-consuming. To solve these problems, many engineers perform the optimization using an approximation model (response surface). The Response Surface Method (RSM) is generally used to predict the system performance in the engineering research field, but RSM presents some prediction errors for highly nonlinear systems. The major objective of this research is to establish an optimal design method for multi-objective problems and confirm its applicability. The proposed process is composed of three parts: definition of geometry, generation of response surface, and optimization process.
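The "generation of response surface" step can be sketched in its simplest, polynomial form: fit a quadratic surrogate to a handful of expensive evaluations by least squares. The toy objective below stands in for a costly simulation; the paper itself replaces this polynomial stage with a backpropagation neural network (BPANN).

```python
import numpy as np

# Minimal response-surface sketch: a quadratic surrogate fitted by least
# squares.  expensive_analysis is a hypothetical stand-in for CFD/FEA.

def expensive_analysis(x1, x2):
    return 1.0 + 2.0 * x1 - 3.0 * x2 + 0.5 * x1 * x2 + x1 ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))        # 30 sampled design points
y = expensive_analysis(X[:, 0], X[:, 1])

# Design matrix for a full quadratic model in two variables.
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] * X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def surrogate(x1, x2):
    """Cheap polynomial replacement for expensive_analysis."""
    return coef @ np.array([1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
```

Once fitted, the surrogate is what an optimizer (NSGA-II in the paper) queries thousands of times, instead of the expensive analysis itself.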
To reduce the time for performance analysis and minimize the prediction errors, the approximation model is generated using a Backpropagation Artificial Neural Network (BPANN), an approach referred to as the Neuro-Response Surface Method (NRSM). Optimization is then performed on the generated response surface using the non-dominated sorting genetic algorithm II (NSGA-II). Through case studies of a marine system and a ship structure (the substructure of a floating offshore wind turbine, considering hydrodynamic performance, and bulk-carrier bottom stiffened panels, considering structural performance), we have confirmed the applicability of the proposed method for multi-objective, side-constraint optimization problems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26379239','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26379239"><span>Model-Based Reasoning in Humans Becomes Automatic with Training.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Economides, Marcos; Kurth-Nelson, Zeb; Lübbert, Annika; Guitart-Masip, Marc; Dolan, Raymond J</p> <p>2015-09-01</p> <p>Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load--a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task.
This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018NIMPA.889...39F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018NIMPA.889...39F"><span>A Geant4 evaluation of the Hornyak button and two candidate detectors for the TREAT hodoscope</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Fu, Wenkai; Ghosh, Priyarshini; Harrison, Mark J.; McGregor, Douglas S.; Roberts, Jeremy A.</p> <p>2018-05-01</p> <p>The performance of traditional Hornyak buttons and two proposed variants for fast-neutron hodoscope applications was evaluated using Geant4. The Hornyak button is a ZnS(Ag)-based device previously deployed at the Idaho National Laboratory's TRansient REActor Test Facility (better known as TREAT) for monitoring fast neutrons emitted during pulsing of fissile fuel samples. Past use of these devices relied on pulse-shape discrimination to reduce the significant levels of background Cherenkov radiation. Proposed are two simple designs that reduce the overall light guide mass (here, polymethyl methacrylate or PMMA), employ silicon photomultipliers (SiPMs), and can be operated using pulse-height discrimination alone to eliminate background noise to acceptable levels. 
Geant4 was first used to model a traditional Hornyak button, and for assumed, hodoscope-like conditions, an intrinsic efficiency of 0.35% for mono-directional fission neutrons was predicted. The predicted efficiency is in reasonably good agreement with experimental data from the literature and, hence, served to validate the physics models and approximations employed. Geant4 models were then developed to optimize the materials and geometries of two alternatives to the Hornyak button, one based on a homogeneous mixture of ZnS(Ag) and PMMA, and one based on alternating layers of ZnS(Ag) and PMMA oriented perpendicular to the incident neutron beam. For the same radiation environment, optimized, 5-cm long (along the beam path) devices of the homogeneous and layered designs were predicted to have efficiencies of approximately 1.3% and 3.3%, respectively. For longer devices, i.e., lengths larger than 25 cm, these efficiencies were shown to peak at approximately 2.2% and 5.9%, respectively. Moreover, both designs were shown to discriminate Cherenkov noise intrinsically by using an appropriate pulse-height discriminator level, i.e., pulse-shape discrimination is not needed for these devices.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20090025457&hterms=influence+Function&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3D%2Binfluence%2BFunction','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20090025457&hterms=influence+Function&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3D%2Binfluence%2BFunction"><span>Application of Reduced Order Transonic Aerodynamic Influence Coefficient Matrix for Design Optimization</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Pak, Chan-gi; Li, Wesley W.</p> <p>2009-01-01</p> <p>Supporting the Aeronautics Research 
Mission Directorate guidelines, the National Aeronautics and Space Administration [NASA] Dryden Flight Research Center is developing a multidisciplinary design, analysis, and optimization [MDAO] tool. This tool will leverage existing tools and practices, and allow the easy integration and adoption of new state-of-the-art software. Modern aircraft design at transonic speeds is a challenging task because of the computation time required for unsteady aeroelastic analysis with a Computational Fluid Dynamics [CFD] code. Design approaches in this speed regime are mainly based on manual trial and error, and because the time-domain unsteady CFD analyses are usually performed repeatedly to optimize the final design, they considerably slow down the whole design process. As a result, there is considerable motivation to perform aeroelastic calculations more quickly and inexpensively. This paper describes the development of an unsteady transonic aeroelastic design methodology for design optimization using a reduced-order modeling method and an unsteady aerodynamic approximation. The method requires that the unsteady transonic aerodynamics be represented in the frequency or Laplace domain. A dynamically linear assumption is used for creating Aerodynamic Influence Coefficient [AIC] matrices in the transonic speed regime. Unsteady CFD computations are needed only for the important columns of an AIC matrix, those corresponding to the primary flutter modes. Order-reduction techniques, such as Guyan reduction and the improved reduction system, are used to reduce the size of the problem, and transonic flutter can then be found by classic methods such as rational function approximation, p-k, p, and root-locus. Such a methodology could be incorporated into an MDAO tool for design optimization at a reasonable computational cost.
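Guyan reduction, one of the order-reduction techniques named above, can be sketched directly: slave degrees of freedom are condensed out of the stiffness matrix through the static transformation T = [I; -Kss⁻¹Ksm]. The three-spring chain below is a toy stand-in, not the paper's wing model.

```python
import numpy as np

# Guyan (static) reduction sketch on a toy 3-DOF spring chain.

def guyan_reduce(K, masters):
    """Condense out all non-master DOFs of stiffness matrix K."""
    slaves = [i for i in range(K.shape[0]) if i not in masters]
    Kmm = K[np.ix_(masters, masters)]
    Kms = K[np.ix_(masters, slaves)]
    Ksm = K[np.ix_(slaves, masters)]
    Kss = K[np.ix_(slaves, slaves)]
    # Static condensation transformation, masters stacked above slaves.
    T = np.vstack([np.eye(len(masters)), -np.linalg.solve(Kss, Ksm)])
    K_reordered = np.vstack([np.hstack([Kmm, Kms]),
                             np.hstack([Ksm, Kss])])
    return T.T @ K_reordered @ T

k = 1000.0                                   # spring stiffness, N/m (assumed)
K = k * np.array([[ 2.0, -1.0,  0.0],        # wall - m0 - m1 - m2 chain
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])
K_red = guyan_reduce(K, masters=[0, 2])      # condense out the middle DOF
```

For static loads on the master DOFs this condensation is exact (the reduced model reproduces the full model's displacements); for dynamics it is an approximation, which is why it is paired with flutter methods rather than replacing them.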
The proposed technique is verified using the Aerostructures Test Wing 2, which was designed, built, and tested at the NASA Dryden Flight Research Center. The results from the full-order model and the approximate reduced-order model are analyzed and compared.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..19.5100L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..19.5100L"><span>Geological constraints for muon tomography: The world beyond standard rock</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lechmann, Alessandro; Mair, David; Ariga, Akitaka; Ariga, Tomoko; Ereditato, Antonio; Käser, Samuel; Nishiyama, Ryuichi; Scampoli, Paola; Vladymyrov, Mykhailo; Schlunegger, Fritz</p> <p>2017-04-01</p> <p>In present-day muon tomography practice, one often encounters an experimental setup in which muons propagate several tens to a few hundreds of meters through a material to the detector. The goal of such an undertaking is usually centred on an attempt to make inferences from the measured muon flux to an anticipated subsurface structure. This can either be an underground interface geometry or a spatial material distribution. Inferences in this direction have until now mostly been made using the so-called "standard rock" approximation. This includes a set of empirically determined parameters from several rocks found in the vicinity of physicists' laboratories. While this approach is reasonable to account for the effects of the tens of meters of soil/rock around a particle accelerator, we show that, for material thicknesses beyond that dimension, the elementary composition of the material (average atomic weight and atomic number) has a noticeable effect on the measured muon flux.
Accordingly, indiscriminate use of this approximation could introduce a serious model bias, which in turn might invalidate any tomographic inference based on it. The parameters for standard rock are naturally close to a granitic (SiO2-rich) composition and thus can be safely used in such environments. As geophysical surveys are not restricted to any particular lithology, we investigated the effect of alternative rock compositions (carbonatic, basaltic and even ultramafic) and consequently prefer to replace the standard rock approach with a dedicated geological investigation. Structural field data and laboratory measurements of density (He-pycnometer) and composition (XRD) can be merged into an integrative geological model that can be used as an a priori constraint for the rock parameters of interest (density & composition) in the geophysical inversion. Modelling results show that, when facing a non-granitic lithology, the measured muon flux can vary by up to 20-30% in the case of carbonates, and by up to 100% for peridotites, compared to standard rock data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28649338','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28649338"><span>Estimating and interpreting migration of Amazonian forests using spatially implicit and semi-explicit neutral models.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Pos, Edwin; Guevara Andino, Juan Ernesto; Sabatier, Daniel; Molino, Jean-François; Pitman, Nigel; Mogollón, Hugo; Neill, David; Cerón, Carlos; Rivas-Torres, Gonzalo; Di Fiore, Anthony; Thomas, Raquel; Tirado, Milton; Young, Kenneth R; Wang, Ophelia; Sierra, Rodrigo; García-Villacorta, Roosevelt; Zagt, Roderick; Palacios Cuenca, Walter; Aulestia, Milton; Ter Steege, Hans</p> <p>2017-06-01</p>
<p>With many sophisticated methods available for estimating migration, ecologists face the difficult decision of choosing one for their specific line of work. Here we test and compare several methods, performing sanity and robustness tests, applying them to large-scale data, and discussing the results and their interpretation. Five methods were selected and compared for their ability to estimate migration from spatially implicit and semi-explicit simulations based on three large-scale field datasets from South America (Guyana, Suriname, French Guiana and Ecuador). Space was incorporated semi-explicitly by a discrete probability mass function for local recruitment, migration from adjacent plots, or migration from a metacommunity. Most methods were able to accurately estimate migration from spatially implicit simulations. For spatially semi-explicit simulations, the estimate was shown to be the additive effect of migration from adjacent plots and from the metacommunity. Estimation was accurate only when migration from the metacommunity outweighed that from adjacent plots; discriminating between the two, however, proved impossible. We show that migration should be considered more an approximation of the resemblance between communities and the summed regional species pool. Applying the migration estimates to simulate the field datasets did show reasonably good fits and indicated consistent differences between sets in comparison with earlier studies. We conclude that estimates of migration using these methods are more an approximation of the homogenization among local communities over time than a direct measurement of migration, and hence have a direct relationship with beta diversity. As beta diversity is the result of many neutral and non-neutral processes, migration as estimated in a spatially explicit world encompasses not only direct migration but is an ecological aggregate of these processes. 
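The semi-explicit recruitment scheme described above (a discrete probability mass function over local, adjacent-plot, and metacommunity parents) can be sketched as a toy sampler. This is a hypothetical illustration, not the study's simulation code; the argument names are invented.

```python
import random

def recruit(local, adjacent, metacommunity, m_adj, m_meta, rng=random):
    """One recruitment event in a toy semi-explicit neutral model:
    the vacancy is filled from the metacommunity with probability m_meta,
    from an adjacent plot with probability m_adj, and by local
    recruitment otherwise. Each pool is a list of species labels."""
    u = rng.random()
    if u < m_meta:
        return rng.choice(metacommunity)
    if u < m_meta + m_adj:
        return rng.choice(adjacent)
    return rng.choice(local)
```

Because a single plot's composition responds mainly to the combined inflow m_adj + m_meta, an estimator fitted to one community tends to recover that additive quantity, which mirrors the discrimination problem the authors report.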
The parameter m of neutral models then appears more as an emergent property revealed by neutral theory than as an effective mechanistic parameter, and spatially implicit models should be rejected as an approximation of forest dynamics.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1419477-geant4-evaluation-hornyak-button-two-candidate-detectors-treat-hodoscope','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1419477-geant4-evaluation-hornyak-button-two-candidate-detectors-treat-hodoscope"><span>A Geant4 evaluation of the Hornyak button and two candidate detectors for the TREAT hodoscope</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Fu, Wenkai; Ghosh, Priyarshini; Harrison, Mark; ...</p> <p>2018-02-05</p> <p>The performance of traditional Hornyak buttons and two proposed variants for fast-neutron hodoscope applications was evaluated using Geant4. The Hornyak button is a ZnS(Ag)-based device previously deployed at the Idaho National Laboratory's TRansient REActor Test Facility (better known as TREAT) for monitoring fast neutrons emitted during pulsing of fissile fuel samples. Past use of these devices relied on pulse-shape discrimination to reduce the significant levels of background Cherenkov radiation. Proposed are two simple designs that reduce the overall light guide mass (here, polymethyl methacrylate or PMMA), employ silicon photomultipliers (SiPMs), and can be operated using pulse-height discrimination alone to reduce background noise to acceptable levels. Geant4 was first used to model a traditional Hornyak button, and for assumed, hodoscope-like conditions, an intrinsic efficiency of 0.35% for mono-directional fission neutrons was predicted. 
The predicted efficiency is in reasonably good agreement with experimental data from the literature and, hence, served to validate the physics models and approximations employed. Geant4 models were then developed to optimize the materials and geometries of two alternatives to the Hornyak button, one based on a homogeneous mixture of ZnS(Ag) and PMMA, and one based on alternating layers of ZnS(Ag) and PMMA oriented perpendicular to the incident neutron beam. For the same radiation environment, optimized, 5-cm long (along the beam path) devices of the homogeneous and layered designs were predicted to have efficiencies of approximately 1.3% and 3.3%, respectively. For longer devices, i.e., lengths larger than 25 cm, these efficiencies were shown to peak at approximately 2.2% and 5.9%, respectively. Furthermore, both designs were shown to discriminate Cherenkov noise intrinsically by using an appropriate pulse-height discriminator level, i.e., pulse-shape discrimination is not needed for these devices.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=biochemistry&pg=6&id=EJ1084615','ERIC'); return false;" href="https://eric.ed.gov/?q=biochemistry&pg=6&id=EJ1084615"><span>Using Order of Magnitude Calculations to Extend Student Comprehension of Laboratory Data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Dean, Rob L.</p> <p>2015-01-01</p> <p>Author Rob Dean previously published an Illuminations article concerning "challenge" questions that encourage students to think imaginatively with approximate quantities, reasonable assumptions, and uncertain information. This article has promoted some interesting discussion, which has prompted him to present further examples. 
Examples…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=Persson%2c+AND+2000&id=EJ351736','ERIC'); return false;" href="https://eric.ed.gov/?q=Persson%2c+AND+2000&id=EJ351736"><span>Mathematically Talented Males and Females and Achievement in the High School Sciences.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Benbow, Camilla Persson; Minor, Lola L.</p> <p>1986-01-01</p> <p>Using data on approximately 2,000 students drawn from three talent searches conducted by the Study of Mathematically Precocious Youth, this study investigated the relationship of possible sex differences in science achievement to sex differences in mathematical reasoning ability. (BS)</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=arab+AND+sex&pg=6&id=ED170206','ERIC'); return false;" href="https://eric.ed.gov/?q=arab+AND+sex&pg=6&id=ED170206"><span>Students' Moral Reasoning as Related to Cultural Background and Educational Experience.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Bar-Yam, Miriam; And Others</p> <p></p> <p>The relationship between moral development and cultural and educational background is examined. Approximately 120 Israeli youth representing different social classes, sex, religious affiliation, and educational experience were interviewed. 
The youth interviewed included urban middle and lower class students, Kibbutz-born, Youth Aliyah…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=child+AND+care+AND+option+AND+employee&pg=2&id=ED269149','ERIC'); return false;" href="https://eric.ed.gov/?q=child+AND+care+AND+option+AND+employee&pg=2&id=ED269149"><span>Employer Sponsored Child Care: Issues and Options.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Conroyd, S. Danielle</p> <p></p> <p>This presentation describes the child care center at Detroit's Mount Carmel Hospital, a division of the Sisters of Mercy Health Corporation employing approximately 1,550 women. Discussion focuses on reasons for establishing the center, facility acquisition, program details, program management, developmental philosophy, parent involvement, policy…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19680000226','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19680000226"><span>Computer program analyzes Buckling Of Shells Of Revolution with various wall construction, BOSOR</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Almroth, B. O.; Bushnell, D.; Sobel, L. H.</p> <p>1968-01-01</p> <p>Computer program performs stability analyses for a wide class of shells without unduly restrictive approximations. 
The program uses numerical integration, finite difference, or finite element techniques to solve with reasonable accuracy almost any buckling problem for shells exhibiting orthotropic behavior.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://rosap.ntl.bts.gov/view/dot/3286','DOTNTL'); return false;" href="https://rosap.ntl.bts.gov/view/dot/3286"><span>Quantitative Assessment of Factors Related to Customer Satisfaction with MoDOT in the Kansas City Area.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntlsearch.bts.gov/tris/index.do">DOT National Transportation Integrated Search</a></p> <p></p> <p>2008-01-01</p> <p>A mailed survey was sent to approximately twenty thousand District Four (Kansas City area) residents in order to gather statistical evidence for supporting or eliminating reasons for the satisfaction discrepancy between Kansas City Ar...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/ED138697.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/ED138697.pdf"><span>Program for Institutionalized Children, 1974-75.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Ramsay, James G.</p> <p></p> <p>This program for institutionalized children, funded under the Elementary Secondary Education Act of 1965, involved approximately 2,181 children in 35 institutions in the New York City metropolitan area. 
Children were institutionalized for a variety of reasons: they were orphaned, neglected, dependent, in need of supervision, or emotionally…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19910012475','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19910012475"><span>Using fuzzy logic to integrate neural networks and knowledge-based systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Yen, John</p> <p>1991-01-01</p> <p>Outlined here is a novel hybrid architecture that uses fuzzy logic to integrate neural networks and knowledge-based systems. The author's approach offers important synergistic benefits to neural nets, approximate reasoning, and symbolic processing. Fuzzy inference rules extend symbolic systems with approximate reasoning capabilities, which are used for integrating and interpreting the outputs of neural networks. The symbolic system captures meta-level information about neural networks and defines its interaction with neural networks through a set of control tasks. Fuzzy action rules provide a robust mechanism for recognizing the situations in which neural networks require certain control actions. The neural nets, on the other hand, offer flexible classification and adaptive learning capabilities, which are crucial for dynamic and noisy environments. 
By combining neural nets and symbolic systems at their system levels through the use of fuzzy logic, the author's approach alleviates current difficulties in reconciling differences between low-level data processing mechanisms of neural nets and artificial intelligence systems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2771264','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2771264"><span>Promoting the Self-Regulation of Clinical Reasoning Skills in Nursing Students</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Kuiper, R; Pesut, D; Kautz, D</p> <p>2009-01-01</p> <p>Aim: The purpose of this paper is to describe the research surrounding the theories and models the authors united to describe the essential components of clinical reasoning in nursing practice education. The research was conducted with nursing students in health care settings through the application of teaching and learning strategies with the Self-Regulated Learning Model (SRL) and the Outcome-Present-State-Test (OPT) Model of Reflective Clinical Reasoning. Standardized nursing languages provided the content and clinical vocabulary for the clinical reasoning task. Materials and Methods: This descriptive study described the application of the OPT model of clinical reasoning, use of nursing language content, and reflective journals based on the SRL model with 66 undergraduate nursing students over an 8 month period of time. The study tested the idea that self-regulation of clinical reasoning skills can be developed using self-regulation theory and the OPT model. Results: This research supports a framework for effective teaching and learning methods to promote and document learner progress in mastering clinical reasoning skills. 
Self-regulated learning strategies coupled with the OPT model suggest benefits of self-observation and self-monitoring during clinical reasoning activities and pinpoint where guidance is needed for the development of cognitive and metacognitive awareness. Recommendations and Conclusions: Thinking and reasoning about the complexities of patient care requires attention to the content, processes, and outcomes that make a difference in nursing care. These principles and concepts are valuable to clinical decision making for nurses globally as they deal with local, regional, national and international health care issues. PMID:19888432</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/15551724','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/15551724"><span>Do South African former detainees experience post-traumatic stress? Circumventing the demand characteristics of psychological assessment.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kagee, Ashraf</p> <p>2004-09-01</p> <p>Most research on persons subjected to physical or psychological torture for political reasons has framed this experience as traumatic, with the sequelae approximating the diagnostic criteria of post-traumatic stress disorder (PTSD). Yet, critiques of the trauma model have called attention to the fact that PTSD represents a Western conceptualization of the concerns of persons who have survived stressful experiences. In order to determine whether symptoms of traumatization are salient psychiatric phenomena for South African former detainees, semi-structured qualitative interviews were conducted with 20 respondents who were detained and tortured for political reasons during the apartheid era. Interviews were transcribed and analysed for thematic content using a grounded theory approach. 
Results showed that although the main concerns expressed were unrelated to traumatization, participants also indicated that they experienced symptoms of post-traumatic stress. These data suggest that although too great a focus on traumatic responses may be misplaced, it remains important to consider the possibility that former detainees may exhibit symptoms of this nature. Consequently, critiques of the trauma discourse as a Western phenomenon need to be tempered with evidence of the lived reality of psychological sequelae experienced by this population.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017APS..SHK.V4001R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017APS..SHK.V4001R"><span>An experimental study of an explosively driven flat plate launcher</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rae, Philip; Haroz, Erik; Armstrong, Chris; Perry, Lee; M Division Team</p> <p>2017-06-01</p> <p>For some upcoming experiments it is desired to impact a large explosive assembly with one or more moderate-diameter flat metal plates traveling at high velocity (2-3 km/s). The times of arrival of these plates will need to be carefully controlled and delayed (i.e., known to approximately a microsecond). For this reason, producing a flyer plate from more traditional gun assemblies is not possible. Previous researchers have demonstrated the ability to throw reasonably flat metal flyers from the so-called Forest flyer geometry. 
The defining characteristics of this design are a carefully controlled reduction in explosive area, from a larger plane-wave lens and booster pad down to a smaller flyer plate, which improves the planarity of the drive, and an air gap between the explosive booster and the plate, which reduces the peak tensile stresses generated in the plate and thereby suppresses spalling. This experimental series comprised a number of design variants and plate and explosive-drive materials. The aim was to calibrate a predictive computational modeling capability for this kind of system in preparation for later, more radical design ideas, best tested in a computer before undertaking the expensive business of construction.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_23 --> <div id="page_24" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="461"> <li> <p><a target="_blank" rel="noopener noreferrer" 
onclick="trackOutboundLink('http://hdl.handle.net/2060/20090040760','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20090040760"><span>Probabilistic Reasoning for Robustness in Automated Planning</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Schaffer, Steven; Clement, Bradley; Chien, Steve</p> <p>2007-01-01</p> <p>A general-purpose computer program for planning the actions of a spacecraft or other complex system has been augmented by incorporating a subprogram that reasons about uncertainties in such continuous variables as times taken to perform tasks and amounts of resources to be consumed. This subprogram computes parametric probability distributions for time and resource variables on the basis of user-supplied models of actions and resources that they consume. The current system accepts bounded Gaussian distributions over action duration and resource use. The distributions are then combined during planning to determine the net probability distribution of each resource at any time point. In addition to a full combinatoric approach, several approximations for arriving at these combined distributions are available, including maximum-likelihood and pessimistic algorithms. Each such probability distribution can then be integrated to obtain a probability that execution of the plan under consideration would violate any constraints on the resource. The key idea is to use these probabilities of conflict to score potential plans and drive a search toward planning low-risk actions. 
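The pipeline in this abstract (combine per-action Gaussian resource deltas, then integrate the tail of the combined distribution beyond a resource limit to get a conflict probability) can be sketched as below. This is a minimal stand-in using unbounded Gaussians and independence; the actual system uses bounded Gaussians and several approximation schemes.

```python
import math

def conflict_probability(deltas, capacity):
    """P(total resource use exceeds capacity), treating each action's
    use as an independent Gaussian given as a (mean, sigma) pair."""
    mean = sum(m for m, _ in deltas)
    var = sum(s * s for _, s in deltas)
    if var == 0.0:
        return 1.0 if mean > capacity else 0.0
    z = (capacity - mean) / math.sqrt(var)
    # P(total > capacity) = 1 - Phi(z), via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Score a candidate plan: e.g. two actions each drawing ~10 +/- 1 units
p = conflict_probability([(10.0, 1.0), (10.0, 1.0)], capacity=25.0)
```

A planner can then score candidate plans by such probabilities and steer its search toward plans whose risk stays below a user-chosen threshold.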
An output plan provides a balance between the user's specified aversion to risk and other measures of optimality.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=inductive+AND+reasoning&pg=2&id=EJ1008301','ERIC'); return false;" href="https://eric.ed.gov/?q=inductive+AND+reasoning&pg=2&id=EJ1008301"><span>Model-Based Reasoning</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Ifenthaler, Dirk; Seel, Norbert M.</p> <p>2013-01-01</p> <p>In this paper, there will be a particular focus on mental models and their application to inductive reasoning within the realm of instruction. A basic assumption of this study is the observation that the construction of mental models and related reasoning is a slowly developing capability of cognitive systems that emerges effectively with proper…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3540110','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3540110"><span>Medicare Part D Claims Rejections for Nursing Home Residents, 2006 to 2010</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Stevenson, David G.; Keohane, Laura M.; Mitchell, Susan L.; Zarowitz, Barbara J.; Huskamp, Haiden A.</p> <p>2013-01-01</p> <p>Objectives Much has been written about trends in Medicare Part D formulary design and consumers’ choice of plans, but little is known about the magnitude of claims rejections or their clinical and administrative implications. 
Our objective was to study the overall rate at which Part D claims are rejected, whether these rates differ across plans, drugs, and medication classes, and how these rejection rates and reasons have evolved over time. Study Design and Methods We performed descriptive analyses of data on paid and rejected Part D claims submitted by 1 large national long-term care pharmacy from 2006 to 2010. In each of the 5 study years, data included approximately 450,000 Medicare beneficiaries living in long-term care settings with approximately 4 million Part D drug claims. Claims rejection rates and reasons for rejection are tabulated for each study year at the plan, drug, and class levels. Results Nearly 1 in 6 drug claims was rejected during the first 5 years of the Medicare Part D program, and this rate has increased over time. Rejection rates and reasons for rejection varied substantially across drug products and Part D plans. Moreover, the reasons for denials evolved over our study period. Coverage has become less of a factor in claims rejections than it was initially and other formulary tools such as drug utilization review, quantity-related coverage limits, and prior authorization are increasingly used to deny claims. Conclusions Examining claims rejection rates can provide important supplemental information to assess plans’ generosity of coverage and to identify potential areas of concern. 
PMID:23145808</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JGRC..123.2461W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JGRC..123.2461W"><span>Mapping Dependence Between Extreme Rainfall and Storm Surge</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wu, Wenyan; McInnes, Kathleen; O'Grady, Julian; Hoeke, Ron; Leonard, Michael; Westra, Seth</p> <p>2018-04-01</p> <p>Dependence between extreme storm surge and rainfall can have significant implications for flood risk in coastal and estuarine regions. To supplement limited observational records, we use reanalysis surge data from a hydrodynamic model as the basis for dependence mapping, providing information at a resolution of approximately 30 km along the Australian coastline. We evaluated this approach by comparing the dependence estimates from modeled surge to that calculated using historical surge records from 79 tide gauges around Australia. The results show reasonable agreement between the two sets of dependence values, with the exception of lower seasonal variation in the modeled dependence values compared to the observed data, especially at locations where there are multiple processes driving extreme storm surge. This is due to the combined impact of local bathymetry as well as the resolution of the hydrodynamic model and its meteorological inputs. Meteorological drivers were also investigated for different combinations of extreme rainfall and surge—namely rain-only, surge-only, and coincident extremes—finding that different synoptic patterns are responsible for each combination. 
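A simple empirical version of the rainfall-surge dependence discussed here is the conditional probability that both variables exceed a high quantile. The estimator below is a generic textbook illustration of such a tail-dependence measure, not the study's actual method.

```python
def tail_dependence(x, y, q=0.95):
    """Empirical chi(q): P(y above its q-quantile | x above its q-quantile).
    Values near (1 - q) suggest near-independence of the extremes;
    values near 1 suggest strong dependence."""
    n = len(x)
    tx = sorted(x)[int(q * n)]  # empirical q-quantile thresholds
    ty = sorted(y)[int(q * n)]
    exceed_x = [i for i in range(n) if x[i] > tx]
    if not exceed_x:
        return 0.0
    both = sum(1 for i in exceed_x if y[i] > ty)
    return both / len(exceed_x)
```

Applied on a grid of coastal locations (surge from the hydrodynamic reanalysis, rainfall from gauges), such an estimator yields exactly the kind of along-coast dependence map the abstract describes.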
The ability to supplement observational records with high-resolution modeled surge data enables a much more precise quantification of dependence along the coastline, strengthening the physical basis for assessments of flood risk in coastal regions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19894850','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19894850"><span>Shock-induced bubble jetting into a viscous fluid with application to tissue injury in shock-wave lithotripsy.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Freund, J B; Shukla, R K; Evan, A P</p> <p>2009-11-01</p> <p>Shock waves in liquids are known to cause spherical gas bubbles to rapidly collapse and form strong re-entrant jets in the direction of the propagating shock. The interaction of these jets with an adjacent viscous liquid is investigated using finite-volume simulation methods. This configuration serves as a model for tissue injury during shock-wave lithotripsy, a medical procedure to remove kidney stones. In this case, the viscous fluid provides a crude model for the tissue. It is found that for viscosities comparable to what might be expected in tissue, the jet that forms upon collapse of a small bubble fails to penetrate deeply into the viscous fluid "tissue." A simple model reproduces the penetration distance versus viscosity observed in the simulations and leads to a phenomenological model for the spreading of injury with multiple shocks. 
For a reasonable selection of a single efficiency parameter, this model is able to reproduce in vivo observations of an apparent 1000-shock threshold before wide-spread tissue injury occurs in targeted kidneys and the approximate extent of this injury after a typical clinical dose of 2000 shock waves.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2787081','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2787081"><span>Shock-induced bubble jetting into a viscous fluid with application to tissue injury in shock-wave lithotripsy</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Freund, J. B.; Shukla, R. K.; Evan, A. P.</p> <p>2009-01-01</p> <p>Shock waves in liquids are known to cause spherical gas bubbles to rapidly collapse and form strong re-entrant jets in the direction of the propagating shock. The interaction of these jets with an adjacent viscous liquid is investigated using finite-volume simulation methods. This configuration serves as a model for tissue injury during shock-wave lithotripsy, a medical procedure to remove kidney stones. In this case, the viscous fluid provides a crude model for the tissue. It is found that for viscosities comparable to what might be expected in tissue, the jet that forms upon collapse of a small bubble fails to penetrate deeply into the viscous fluid “tissue.” A simple model reproduces the penetration distance versus viscosity observed in the simulations and leads to a phenomenological model for the spreading of injury with multiple shocks. 
For a reasonable selection of a single efficiency parameter, this model is able to reproduce in vivo observations of an apparent 1000-shock threshold before wide-spread tissue injury occurs in targeted kidneys and the approximate extent of this injury after a typical clinical dose of 2000 shock waves. PMID:19894850</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012EPJWC..2604005L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012EPJWC..2604005L"><span>Development of EOS data for granular material like sand by using micromodels</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Larcher, M.; Gebbeken, N.</p> <p>2012-08-01</p> <p>Detonations in soil can occur due to several reasons: e.g. land mines or bombs from the Second World War. Soil is also often used as a protective barrier. In all cases the behaviour of soil loaded by shock waves is important. The simulation of shock wave loaded soil using hydro-codes like AUTODYN needs a failure model as well as an equation of state (EOS). The parameters for these models are often not known. The popular material law for sand from Laine and Sandvik [1], e.g., is a first approximation, but it can only be used for dry sand with a certain grain grading. The parameters porosity, grain grading, and humidity have a big influence on the material behaviour of cohesive soils. Micro-mechanic models can be used to develop the material behaviour of granular materials. EOS data can be obtained by numerically loading micro-mechanically modelled grains and measuring the density under a certain pressure in the finite element model. The influence of porosity, grain grading, and humidity can be easily investigated. 
EOS data are determined in this work for cohesive soils depending on these parameters.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28041892','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28041892"><span>An alternative derivation of the stationary distribution of the multivariate neutral Wright-Fisher model for low mutation rates with a view to mutation rate estimation from site frequency data.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Schrempf, Dominik; Hobolth, Asger</p> <p>2017-04-01</p> <p>Recently, Burden and Tang (2016) provided an analytical expression for the stationary distribution of the multivariate neutral Wright-Fisher model with low mutation rates. In this paper we present a simple, alternative derivation that illustrates the approximation. Our proof is based on the discrete multivariate boundary mutation model which has three key ingredients. First, the decoupled Moran model is used to describe genetic drift. Second, low mutation rates are assumed by limiting mutations to monomorphic states. Third, the mutation rate matrix is separated into a time-reversible part and a flux part, as suggested by Burden and Tang (2016). An application of our result to data from several great apes reveals that the assumption of stationarity may be inadequate or that other evolutionary forces like selection or biased gene conversion are acting. Furthermore we find that the model with a reversible mutation rate matrix provides a reasonably good fit to the data compared to the one with a non-reversible mutation rate matrix. Copyright © 2016 The Author(s). Published by Elsevier Inc. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26771896','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26771896"><span>Predicting future protection of respirator users: Statistical approaches and practical implications.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hu, Chengcheng; Harber, Philip; Su, Jing</p> <p>2016-01-01</p> <p>The purpose of this article is to describe a statistical approach for predicting a respirator user's fit factor in the future based upon results from initial tests. A statistical prediction model was developed based upon joint distribution of multiple fit factor measurements over time obtained from linear mixed effect models. The model accounts for within-subject correlation as well as short-term (within one day) and longer-term variability. As an example of applying this approach, model parameters were estimated from a research study in which volunteers were trained by three different modalities to use one of two types of respirators. They underwent two quantitative fit tests at the initial session and two on the same day approximately six months later. The fitted models demonstrated correlation and gave the estimated distribution of future fit test results conditional on past results for an individual worker. 
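The prediction step described here can be illustrated by conditioning a joint-normal model of repeated log fit factors on a past result. A minimal sketch under assumed variance components (MU, SIGMA_B2, and SIGMA_W2 are illustrative values, not parameters from the study):

```python
import math

# Illustrative variance components (assumptions, not values from the study):
# SIGMA_B2 = between-worker variance, SIGMA_W2 = within-worker (day-to-day)
# variance of log10 fit factors, MU = population mean log10 fit factor.
MU, SIGMA_B2, SIGMA_W2 = 2.2, 0.09, 0.04

def predict_future(log_ff_past):
    """Conditional distribution of a future log10 fit factor given one past
    test, under a joint-normal mixed-effects model."""
    var = SIGMA_B2 + SIGMA_W2
    rho = SIGMA_B2 / var                  # intraclass correlation across sessions
    mean = MU + rho * (log_ff_past - MU)  # shrink the past result toward the mean
    cond_var = var * (1.0 - rho ** 2)     # conditioning reduces the variance
    return mean, cond_var

def prob_pass(log_ff_past, threshold=2.0):
    """P(future log10 fit factor > threshold | past result), via the normal CDF."""
    mean, cond_var = predict_future(log_ff_past)
    z = (threshold - mean) / math.sqrt(cond_var)
    return 0.5 * math.erfc(z / math.sqrt(2.0))
```

Because the between-worker component induces correlation across sessions, a criterion value could then be chosen as the smallest initial result for which prob_pass exceeds a target likelihood of future protection.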
This approach can be applied to establishing a criterion value for passing an initial fit test to provide a reasonable likelihood that a worker will be adequately protected in the future, and to optimizing the repeat fit-factor test intervals individually for each user for cost-effective testing.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://cfpub.epa.gov/si/si_public_record_report.cfm?direntryid=311750&keyword=water&subject=water%20research&showcriteria=2&fed_org_id=111&datebeginpublishedpresented=12/22/2011&dateendpublishedpresented=12/22/2016&sortby=pubdateyear','PESTICIDES'); return false;" href="https://cfpub.epa.gov/si/si_public_record_report.cfm?direntryid=311750&keyword=water&subject=water%20research&showcriteria=2&fed_org_id=111&datebeginpublishedpresented=12/22/2011&dateendpublishedpresented=12/22/2016&sortby=pubdateyear"><span>A modeling study examining the impact of nutrient boundaries ...</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.epa.gov/pesticides/search.htm">EPA Pesticide Factsheets</a></p> <p></p> <p></p> <p>A mass balance eutrophication model, Gulf of Mexico Dissolved Oxygen Model (GoMDOM), has been developed and applied to describe nitrogen, phosphorus and primary production on the Louisiana shelf of the Gulf of Mexico. Features of this model include bi-directional boundary exchanges, an empirical site-specific light attenuation equation, and estimates of 56 river loads and atmospheric loads. The model was calibrated for 2006 by comparing model output to observations in zones that represent different locations in the Gulf. The model exhibited reasonable skill in simulating the phosphorus and nitrogen field data and primary production observations.
The model was applied to generate a nitrogen mass balance estimate, to perform sensitivity analysis comparing the importance of the nutrient boundary concentrations versus the river loads on nutrient concentrations and primary production within the shelf, and to provide insight into the relative importance of different limitation factors on primary production. The mass budget showed the importance of the rivers as the major external nitrogen source, while the atmospheric load contributed approximately 2% of the total external load. Sensitivity analysis showed the importance of accurate estimates of boundary nitrogen concentrations on the nitrogen levels on the shelf, especially in regions farther from the river influences. The boundary nitrogen concentrations impacted primary production less than nitrogen concentrations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/21452850-double-ring-algorithm-modeling-solar-active-regions-unifying-kinematic-dynamo-models-surface-flux-transport-simulations','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/21452850-double-ring-algorithm-modeling-solar-active-regions-unifying-kinematic-dynamo-models-surface-flux-transport-simulations"><span>A DOUBLE-RING ALGORITHM FOR MODELING SOLAR ACTIVE REGIONS: UNIFYING KINEMATIC DYNAMO MODELS AND SURFACE FLUX-TRANSPORT SIMULATIONS</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Munoz-Jaramillo, Andres; Martens, Petrus C. H.; Nandy, Dibyendu</p> <p></p> <p>The emergence of tilted bipolar active regions (ARs) and the dispersal of their flux, mediated via processes such as diffusion, differential rotation, and meridional circulation, is believed to be responsible for the reversal of the Sun's polar field.
This process (commonly known as the Babcock-Leighton mechanism) is usually modeled as a near-surface, spatially distributed α-effect in kinematic mean-field dynamo models. However, this formulation leads to a relationship between polar field strength and meridional flow speed that is opposite to that suggested by physical insight and predicted by surface flux-transport simulations. With this in mind, we present an improved double-ring algorithm for modeling the Babcock-Leighton mechanism based on AR eruption, within the framework of an axisymmetric dynamo model. Using surface flux-transport simulations, we first show that an axisymmetric formulation, which is usually invoked in kinematic dynamo models, can reasonably approximate the surface flux dynamics. Finally, we demonstrate that our treatment of the Babcock-Leighton mechanism through double-ring eruption leads to an inverse relationship between polar field strength and meridional flow speed as expected, reconciling the discrepancy between surface flux-transport simulations and kinematic dynamo models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/15348304','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/15348304"><span>Replacing the nucleus pulposus of the intervertebral disk: prediction of suitable properties of a replacement material using finite element analysis.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Meakin, J R</p> <p>2001-03-01</p> <p>An axisymmetric finite element model of a human lumbar disk was developed to investigate the properties required of an implant to replace the nucleus pulposus. In the intact disk, the nucleus was modeled as a fluid, and the annulus as an elastic solid.
The Young's modulus of the annulus was determined empirically by matching model predictions to experimental results. The model was checked for sensitivity to the input parameter values and found to give reasonable behavior. The model predicted that removal of the nucleus would change the response of the annulus to compression. This prediction was consistent with experimental results, thus validating the model. Implants to fill the cavity produced by nucleus removal were modeled as elastic solids. The Poisson's ratio was fixed at 0.49, and the Young's modulus was varied from 0.5 to 100 MPa. Two sizes of implant were considered: full size (filling the cavity) and small size (smaller than the cavity). The model predicted that a full size implant would reverse the changes to annulus behavior, but a smaller implant would not. By comparing the stress distribution in the annulus, the ideal Young's modulus was predicted to be approximately 3 MPa. These predictions have implications for current nucleus implant designs. Copyright 2001 Kluwer Academic Publishers</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19800041776&hterms=economics&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DTitle%26N%3D0%26No%3D50%26Ntt%3Deconomics','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19800041776&hterms=economics&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DTitle%26N%3D0%26No%3D50%26Ntt%3Deconomics"><span>Architectures and economics for pervasive broadband satellite networks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Staelin, D. H.; Harvey, R. L.</p> <p>1979-01-01</p> <p>The size of a satellite network necessary to provide pervasive high-data-rate business communications is estimated, and one possible configuration is described which could interconnect most organizations in the United States. 
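The network scale quoted in this abstract (10,000 simultaneous 3-Mbps channels shared by a cluster of 3-5 satellites, at roughly $2000 per month plus 70 cents per one-way minute) can be sanity-checked with back-of-envelope arithmetic. A sketch; the 4-satellite split and the 2-hours-a-day, 22-day usage pattern are assumptions for illustration:

```python
channels = 10_000    # simultaneous channels (from the abstract)
rate_mbps = 3        # per-channel data rate (from the abstract)
satellites = 4       # midpoint of the quoted 3-5 satellite cluster (assumed)

aggregate_gbps = channels * rate_mbps / 1000      # network-wide throughput
per_satellite_gbps = aggregate_gbps / satellites  # load per satellite

# Monthly cost of one leased channel under an assumed usage pattern:
# $2000/month lease plus $0.70 per one-way minute, 2 h/day for 22 days.
lease_usd, per_min_usd = 2000.0, 0.70
monthly_usd = lease_usd + per_min_usd * (2 * 60) * 22
```

The aggregate works out to 30 Gbps, i.e., a few Gbps per satellite in the cluster.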
Within an order of magnitude, such a network might reasonably have a capacity equivalent to 10,000 simultaneous 3-Mbps channels, and rely primarily upon a cluster of approximately 3-5 satellites in a single orbital slot. Nominal prices for 3-6 Mbps video conference services might then be a monthly lease charge of approximately $2000 plus perhaps 70 cents per minute one way.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014JChPh.141u4503F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014JChPh.141u4503F"><span>Simple heuristic for the viscosity of polydisperse hard spheres</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Farr, Robert S.</p> <p>2014-12-01</p> <p>We build on the work of Mooney [Colloids Sci. 6, 162 (1951)] to obtain a heuristic analytic approximation to the viscosity of a suspension of any size distribution of hard spheres in a Newtonian solvent. The result agrees reasonably well with rheological data on monodisperse and bidisperse hard spheres, and also provides an approximation to the random close packing fraction of polydisperse spheres. The implied packing fraction is less accurate than that obtained by Farr and Groot [J. Chem. Phys.
131(24), 244104 (2009)], but has the advantage of being quick and simple to evaluate.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23005886','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23005886"><span>Asymptotic response of observables from divergent weak-coupling expansions: a fractional-calculus-assisted Padé technique.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Dhatt, Sharmistha; Bhattacharyya, Kamal</p> <p>2012-08-01</p> <p>Appropriate constructions of Padé approximants are believed to provide reasonable estimates of the asymptotic (large-coupling) amplitude and exponent of an observable, given its weak-coupling expansion to some desired order. In many instances, however, sequences of such approximants are seen to converge very poorly. We outline here a strategy that exploits the idea of fractional calculus to considerably improve the convergence behavior. Pilot calculations on the ground-state perturbative energy series of quartic, sextic, and octic anharmonic oscillators reveal clearly the worth of our endeavor.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3928969','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3928969"><span>Interaction function of oscillating coupled neurons</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Dodla, Ramana; Wilson, Charles J.</p> <p>2013-01-01</p> <p>Large scale simulations of electrically coupled neuronal oscillators often employ the phase coupled oscillator paradigm to understand and predict network behavior. 
We study the nature of the interaction between such coupled oscillators using weakly coupled oscillator theory. By employing piecewise linear approximations for phase response curves and voltage time courses, and parameterizing their shapes, we compute the interaction function for all such possible shapes and express it in terms of discrete Fourier modes. We find that a reasonably good approximation is achieved with four Fourier modes that comprise both sine and cosine terms. PMID:24229210</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19630011444','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19630011444"><span>Fusion Propulsion System Requirements for an Interstellar Probe</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Spencer, D. F.</p> <p>1963-01-01</p> <p>An examination of the engine constraints for a fusion-propelled vehicle indicates that minimum flight times for a probe to a star 5 light-years away will be approximately 50 years. The principal constraint on the vehicle is the radiator weight and size necessary to dissipate the heat which enters the chamber walls from the fusion plasma.
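The headline numbers in this abstract imply an average cruise speed of a tenth of light speed; a quick arithmetic check (ignoring acceleration and deceleration phases):

```python
distance_ly = 5.0      # distance to the target star (from the abstract)
flight_time_yr = 50.0  # minimum flight time estimate (from the abstract)

# One light-year per year is the speed of light, so this ratio is the
# average speed expressed as a fraction of c.
avg_speed_c = distance_ly / flight_time_yr

# The confining field strength quoted in the abstract, converted to SI for scale.
b_field_gauss = 2.5e5                  # midpoint of the 2-3 x 10^5 gauss range
b_field_tesla = b_field_gauss * 1e-4   # 1 gauss = 1e-4 tesla
```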
However, it is interesting, at least theoretically, that the confining magnetic field strength is of reasonable magnitude, 2 to 3 × 10^5 gauss, and the confinement time is approximately 0.1 sec.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JSEdT..27...45H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JSEdT..27...45H"><span>Stimulating Scientific Reasoning with Drawing-Based Modeling</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Heijnes, Dewi; van Joolingen, Wouter; Leenaars, Frank</p> <p>2018-02-01</p> <p>We investigate the way students' reasoning about evolution can be supported by drawing-based modeling. We modified the drawing-based modeling tool SimSketch to allow for modeling evolutionary processes. In three iterations of development and testing, students in lower secondary education worked on creating an evolutionary model. After each iteration, the user interface and instructions were adjusted based on students' remarks and the teacher's observations. Students' conversations were analyzed for reasoning complexity as a measure of the efficacy of the modeling tool and the instructions. These findings were also used to compose a set of recommendations for teachers and curriculum designers for using and constructing models in the classroom. Our findings suggest that to stimulate scientific reasoning in students working with a drawing-based modeling tool, instruction about the tool and the domain should be integrated. In creating models, a sufficient level of scaffolding is necessary. Without appropriate scaffolds, students are not able to create the model.
With scaffolding that is too high, students may show reasoning that incorrectly assigns external causes to behavior in the model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27872883','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27872883"><span>Model fitting data from syllogistic reasoning experiments.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hattori, Masasi</p> <p>2016-12-01</p> <p>The data presented in this article are related to the research article entitled "Probabilistic representation in syllogistic reasoning: A theory to integrate mental models and heuristics" (M. Hattori, 2016) [1]. This article presents predicted data by three signature probabilistic models of syllogistic reasoning and model fitting results for each of a total of 12 experiments ( N =404) in the literature. Models are implemented in R, and their source code is also provided.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29357112','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29357112"><span>Icon arrays help younger children's proportional reasoning.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ruggeri, Azzurra; Vagharchakian, Laurianne; Xu, Fei</p> <p>2018-06-01</p> <p>We investigated the effects of two context variables, presentation format (icon arrays or numerical frequencies) and time limitation (limited or unlimited time), on the proportional reasoning abilities of children aged 7 and 10 years, as well as adults. 
Participants had to select, between two sets of tokens, the one that offered the highest likelihood of drawing a gold token, that is, the set of elements with the greater proportion of gold tokens. Results show that participants performed better in the unlimited time condition. Moreover, besides a general developmental improvement in accuracy, our results show that younger children performed better when proportions were presented as icon arrays, whereas older children and adults were similarly accurate in the two presentation format conditions. Statement of contribution What is already known on this subject? There is a developmental improvement in proportional reasoning accuracy. Icon arrays facilitate reasoning in adults with low numeracy. What does this study add? Participants were more accurate when they were given more time to make the proportional judgement. Younger children's proportional reasoning was more accurate when they were presented with icon arrays. Proportional reasoning abilities correlate with working memory, approximate number system, and subitizing skills. 
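The selection task given to participants reduces to comparing two proportions; a minimal sketch (the function name and token counts are illustrative, not from the study):

```python
from fractions import Fraction

def better_set(gold_a, total_a, gold_b, total_b):
    """Pick the token set with the higher chance of drawing a gold token.

    Exact rational arithmetic sidesteps floating-point ties, e.g. when
    comparing 1/3 against 33/99."""
    p_a = Fraction(gold_a, total_a)
    p_b = Fraction(gold_b, total_b)
    if p_a == p_b:
        return "tie"
    return "A" if p_a > p_b else "B"
```

For example, 3 gold out of 9 versus 2 gold out of 4 favors the second set even though it holds fewer gold tokens, which is exactly the proportion-versus-count conflict that icon arrays are meant to make visible.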
© 2018 The British Psychological Society.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_24 --> <div id="page_25" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="481"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3081210','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3081210"><span>Reasons for low influenza vaccination coverage – a cross-sectional survey in Poland</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Kardas, Przemyslaw; Zasowska, Anna; Dec, Joanna; Stachurska, Magdalena</p> <p>2011-01-01</p> <p>Aim To assess the reasons for low influenza vaccination coverage in Poland, including knowledge of influenza and
attitudes toward influenza vaccination. Methods This was a cross-sectional, anonymous, self-administered survey of primary care patients in Lodzkie voivodship (central Poland). The study participants were adults who visited their primary care physicians for various reasons from January 1 to April 30, 2007. Results Six hundred and forty participants completed the survey. In the 12 months before the study, 20.8% of participants had received influenza vaccination. The most common reasons listed by those who had not been vaccinated were good health (27.6%), lack of trust in vaccination effectiveness (16.8%), and the cost of vaccination (9.7%). The most common source of information about influenza vaccination was primary care physicians (46.6%). Despite reasonably good knowledge of influenza, approximately 20% of participants could not identify any differences between influenza and other viral respiratory tract infections. Conclusions The main reasons for low influenza vaccination coverage in Poland were patients’ misconceptions and the cost of vaccination. Therefore, free-of-charge vaccination and more effective informational campaigns are needed, with special focus on high-risk groups.
PMID:21495194</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006IJSEd..28.1347P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006IJSEd..28.1347P"><span>A Comparison of Reasoning Processes in a Collaborative Modelling Environment: Learning about genetics problems using virtual chat</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Pata, Kai; Sarapuu, Tago</p> <p>2006-09-01</p> <p>This study investigated the possible activation of different types of model-based reasoning processes in two learning settings, and the influence of various types of reasoning on the learners’ problem representation development. Changes in 53 students’ problem representations about a genetics issue were analysed while they worked with different modelling tools in a synchronous network-based environment. The discussion log-files were used for the “microgenetic” analysis of reasoning types. For studying the stages of students’ problem representation development, individual pre-essays and post-essays and their utterances during two reasoning phases were used. An approach for mapping problem representations was developed. Characterizing the elements of mental models and their reasoning level enabled the description of five hierarchical categories of problem representations. Learning in exploratory and experimental settings was registered as a shift towards more complex stages of problem representations in genetics.
The effect of different types of reasoning could be observed as the divergent development of problem representations within hierarchical categories.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25487420','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25487420"><span>Taking stock of medication wastage: Unused medications in US households.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Law, Anandi V; Sakharkar, Prashant; Zargarzadeh, Amir; Tai, Bik Wai Bilvick; Hess, Karl; Hata, Micah; Mireles, Rudolph; Ha, Carolyn; Park, Tony J</p> <p>2015-01-01</p> <p>Despite the potential deleterious impact on patient safety, environmental safety and health care expenditures, the extent of unused prescription medications in US households and reasons for nonuse remain unknown. To estimate the extent, type and cost of unused medications and the reasons for their nonuse among US households. A cross-sectional, observational, two-phase study was conducted using a convenience sample in Southern California. A web-based survey (Phase I, n = 238) at one health sciences institution and a paper-based survey (Phase II, n = 68) at planned drug take-back events at three community pharmacies were conducted. The extent, type, and cost of unused medications and the reasons for their nonuse were collected. Approximately 2 of 3 prescription medications were reported unused; disease/condition improved (42.4%), forgetfulness (5.8%) and side effects (6.5%) were reasons cited for their nonuse. "Throwing medications in the trash" was the most common method of disposal (63%).
In phase I, pain medications (23.3%) and antibiotics (18%) were most commonly reported as unused, whereas in Phase II, 17% of medications for chronic conditions (hypertension, diabetes, cholesterol, heart disease) and 8.3% for mental health problems were commonly reported as unused. Phase II participants indicated pharmacy as a preferred location for drug disposal. The total estimated cost for unused medications was approximately $59,264.20 (average retail Rx price) to $152,014.89 (AWP) from both phases, borne largely by private health insurance. When extrapolated to a national level, it was approximately $2.4B for elderly taking five prescription medications to $5.4B for the 52% of US adults who take one prescription medication daily. Two out of three dispensed medications were unused, with national projected costs ranging from $2.4B to $5.4B. This wastage raises concerns about adherence, cost and safety; additionally, it points to the need for public awareness and policy to reduce wastage. Pharmacists can play an important role by educating patients both on appropriate medication use and disposal. Copyright © 2015 Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018ApJ...859...29N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018ApJ...859...29N"><span>Magnetohydrodynamic Simulations of a Plunging Black Hole into a Molecular Cloud</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nomura, Mariko; Oka, Tomoharu; Yamada, Masaya; Takekawa, Shunya; Ohsuga, Ken; Takahashi, Hiroyuki R.; Asahina, Yuta</p> <p>2018-05-01</p> <p>Using two-dimensional magnetohydrodynamic simulations, we investigated the gas dynamics around a black hole (BH) plunging into a molecular cloud. 
In these calculations, we assumed a parallel-magnetic-field layer in the cloud. The size of the accelerated region is far larger than the Bondi–Hoyle–Lyttleton radius, being approximately inversely proportional to the Alfvén Mach number for the plunging BH. Our results successfully reproduce the “Y” shape in position–velocity maps of the “Bullet” in the W44 molecular cloud. The size of the Bullet is also reproduced within an order of magnitude using a reasonable parameter set. This consistency supports the shooting model of the Bullet, according to which an isolated BH plunged into a molecular cloud to form a compact broad-velocity-width feature.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4870817','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4870817"><span>Factors Associated With Smoking Behavior Among Operating Engineers</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Choi, Seung Hee; Pohl, Joanne M.; Terrell, Jeffrey E.; Redman, Richard W.</p> <p>2016-01-01</p> <p>Although disparities in smoking prevalence between white collar workers and blue collar workers have been documented, reasons for these disparities have not been well studied. The objective of this study was to determine variables associated with smoking among Operating Engineers, using the Health Promotion Model as a guide. With cross-sectional data from a convenience sample of 498 Operating Engineers, logistic regression was used to determine personal and health behaviors associated with smoking. Approximately 29% of Operating Engineers currently smoked cigarettes. Multivariate analyses showed that younger age, unmarried, problem drinking, physical inactivity, and a lower body mass index were associated with smoking. 
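The logistic-regression analysis behind these associations can be sketched on synthetic data (the cohort, the coefficient values, and the single binary predictor below are illustrative assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort: one binary predictor (say, problem drinking) and a
# smoking outcome generated from assumed log-odds coefficients.
n = 2000
x = rng.integers(0, 2, n).astype(float)
beta0_true, beta1_true = -1.2, 0.8  # assumed log-odds; odds ratio exp(0.8) ~ 2.2
p = 1.0 / (1.0 + np.exp(-(beta0_true + beta1_true * x)))
y = (rng.random(n) < p).astype(float)

# Fit by Newton-Raphson (IRLS), the standard way to maximize the
# logistic log-likelihood.
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (y - mu)                        # score vector
    hess = (X * (mu * (1 - mu))[:, None]).T @ X  # observed information
    beta += np.linalg.solve(hess, grad)

odds_ratio = float(np.exp(beta[1]))  # estimated strength of the association
```

In a multivariate analysis like the one reported, the other predictors (age, marital status, physical activity, body mass index) would simply enter as additional columns of X.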
Operating Engineers were at high risk of smoking, and smokers were more likely to engage in other risky health behaviors, which supports bundled health behavior interventions. PMID:23957830</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017IJMPE..2650012Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017IJMPE..2650012Y"><span>RPA treatment of a motivated QCD Hamiltonian in the SO(4) (2 + 1)-flavor limit: Light and strange mesons</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yepez-Martinez, Tochtli; Civitarese, Osvaldo; Hess, Peter O.</p> <p></p> <p>The SO(4) symmetry of a sector of the quantum chromodynamics (QCD) Hamiltonian was analyzed in a previous work. The numerical calculations were then restricted to a particle-hole (ph) space, and the comparison with experimental data was reasonable in spite of the complexity of the QCD spectrum at low energy. Here, we continue along this line of research and present new results from the treatment of the QCD Hamiltonian in the SO(4) representation, including ground state correlations by means of the Random Phase Approximation (RPA).
We are able to identify, within this model, states that may be associated with physical pseudo-scalar and vector mesons, like η, η′, K, ρ, ω, and ϕ, as well as the pion (π).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23957830','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23957830"><span>Factors associated with smoking among operating engineers.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Choi, Seung Hee; Pohl, Joanne M; Terrell, Jeffrey E; Redman, Richard W; Duffy, Sonia A</p> <p>2013-09-01</p> <p>Although disparities in smoking prevalence between white collar workers and blue collar workers have been documented, reasons for these disparities have not been well studied. The objective of this study was to determine variables associated with smoking among Operating Engineers, using the Health Promotion Model as a guide. With cross-sectional data from a convenience sample of 498 Operating Engineers, logistic regression was used to determine personal and health behaviors associated with smoking. Approximately 29% of Operating Engineers currently smoked cigarettes. Multivariate analyses showed that younger age, unmarried, problem drinking, physical inactivity, and a lower body mass index were associated with smoking. Operating Engineers were at high risk of smoking, and smokers were more likely to engage in other risky health behaviors, which supports bundled health behavior interventions.
Copyright 2013, SLACK Incorporated.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28739991','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28739991"><span>[The Prevalence and Risk Factors of Dementia in Centenarians].</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Arai, Yasumichi</p> <p>2017-07-01</p> <p>Centenarians are less susceptible to the diseases, functional losses and dependencies related to old age than the general public, and are therefore regarded as model cases of successful aging. For this reason, an important focus of the study of centenarians is their relative resilience to age-related cognitive decline or dementia. In the Tokyo Centenarian Study, we found approximately 60% of centenarians to have dementia; however, supercentenarians (those people living at least 110 years) maintained normal cognitive function at 100 years of age. Our preliminary data also demonstrated extremely low frequencies of the apolipoprotein E4 allele in supercentenarians. Moreover, postmortem brain samples from supercentenarians demonstrated relatively mild age-related neuropathological findings. 
Therefore, a more extensive investigation of supercentenarian populations might provide insight into successful brain aging.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1982CoPhC..26..377M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1982CoPhC..26..377M"><span>Vector processing efficiency of plasma MHD codes by use of the FACOM 230-75 APU</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Matsuura, T.; Tanaka, Y.; Naraoka, K.; Takizuka, T.; Tsunematsu, T.; Tokuda, S.; Azumi, M.; Kurita, G.; Takeda, T.</p> <p>1982-06-01</p> <p>In the framework of pipelined vector architecture, the efficiency of vector processing is assessed with respect to plasma MHD codes in nuclear fusion research. By using a vector processor, the FACOM 230-75 APU, the limit of the enhancement factor due to parallelism of current vector machines is examined for three numerical codes based on a fluid model. Reasonable speed-up factors of approximately 6, 6, and 4 times faster than the highly optimized scalar version are obtained for ERATO (linear stability code), AEOLUS-R1 (nonlinear stability code) and APOLLO (1-1/2D transport code), respectively. Problems of the pipelined vector processors are discussed from the viewpoint of restructuring, optimization and choice of algorithms. 
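Speed-up figures like these can be related to the vectorized fraction of run time via Amdahl's law. A minimal sketch, where the 10x vector/scalar rate ratio is purely an illustrative assumption rather than a figure from the paper:

```python
# Invert Amdahl's law for vectorization: S = 1 / ((1 - f) + f / r),
# where f is the vectorizable fraction of scalar run time and r the
# vector/scalar rate ratio. Solve for f given an observed speed-up S.
def vector_fraction(speedup, rate_ratio):
    return (1 - 1 / speedup) / (1 - 1 / rate_ratio)

# A ~6x speed-up, under an assumed 10x vector/scalar rate ratio,
# would imply that roughly 93% of the scalar run time was vectorized.
f = vector_fraction(6.0, 10.0)
print(round(f, 3))  # 0.926
```

The same inversion with a 4x speed-up gives a smaller vectorized fraction, which is one way to compare how amenable each code's algorithms were to restructuring.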
In conclusion, the important concept of "concurrency within pipelined parallelism" is emphasized.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20170011280&hterms=low&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dlow','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20170011280&hterms=low&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dlow"><span>Possible Lack of Low-Mass Meteoroids in the Earth's Meteoroid Flux Due to Space Erosion?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Rubincam, David Parry</p> <p>2017-01-01</p> <p>The Earth's cumulative meteoroid flux, as found by Halliday et al. (1996), may have a shallower slope for meteoroid masses in the range 0.1-2.5 kg compared to those with masses greater than 2.5 kg when plotted on a log flux vs. log mass graph. This would indicate a lack of low-mass objects. While others such as Ceplecha (1992) find no shallow slope, there may be a reason for a lack of 0.1-2.5 kg meteoroids which supports Halliday et al.'s finding. Simple models show that a few centimeters of space erosion in stony meteoroids can reproduce the bend in Halliday et al.'s curve at approximately 2.5 kg and give the shallower slope.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22994991','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22994991"><span>An integrated model of clinical reasoning: dual-process theory of cognition and metacognition.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Marcum, James A</p> <p>2012-10-01</p> <p>Clinical reasoning is an important component for providing quality medical care. 
The aim of the present paper is to develop a model of clinical reasoning that integrates both the non-analytic and analytic processes of cognition, along with metacognition. The dual-process theory of cognition (system 1 non-analytic and system 2 analytic processes) and the metacognition theory are used to develop an integrated model of clinical reasoning. In the proposed model, clinical reasoning begins with system 1 processes in which the clinician assesses a patient's presenting symptoms, as well as other clinical evidence, to arrive at a differential diagnosis. Additional clinical evidence, if necessary, is acquired and analysed utilizing system 2 processes to assess the differential diagnosis, until a clinical decision is made diagnosing the patient's illness and determining how best to proceed therapeutically. Importantly, the outcome of these processes feeds back, in terms of metacognition's monitoring function, either to reinforce or to alter cognitive processes, which, in turn, synergistically enhances the clinician's ability to reason quickly and accurately in future consultations. The proposed integrated model has distinct advantages over other models proposed in the literature for explicating clinical reasoning. Moreover, it has important implications for addressing the paradoxical relationship between experience and expertise, as well as for designing a curriculum to teach clinical reasoning skills. 
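As a rough illustration only, the system 1 / system 2 portion of such a loop might be rendered as below. The symptom patterns, diagnoses, and evidence are invented for the sketch, and the metacognitive feedback step (updating the patterns from outcomes) is omitted:

```python
# Toy rendering of dual-process diagnosis (illustrative, not from the paper).
def system1_differential(symptoms, patterns):
    """Fast, non-analytic: rank candidate diagnoses by symptom overlap."""
    scores = {dx: len(set(symptoms) & set(sx)) for dx, sx in patterns.items()}
    return sorted(scores, key=scores.get, reverse=True)

def system2_analyse(differential, evidence):
    """Slow, analytic: keep only candidates consistent with test results."""
    return [dx for dx in differential if evidence.get(dx, True)]

patterns = {"flu": {"fever", "cough", "aches"},
            "strep": {"fever", "sore throat"},
            "cold": {"cough", "sneezing"}}

# System 1 proposes a ranked differential from presenting symptoms;
# system 2 prunes it with further evidence (a negative strep test here).
differential = system1_differential({"fever", "cough"}, patterns)
diagnosis = system2_analyse(differential, {"strep": False})[0]
print(diagnosis)
```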
© 2012 Blackwell Publishing Ltd.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://rosap.ntl.bts.gov/view/dot/12528','DOTNTL'); return false;" href="https://rosap.ntl.bts.gov/view/dot/12528"><span>On-Orbit Collision Hazard Analysis in Low Earth Orbit Using the Poisson Probability Distribution (Version 1.0)</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntlsearch.bts.gov/tris/index.do">DOT National Transportation Integrated Search</a></p> <p></p> <p>1992-08-26</p> <p>This document provides the basic information needed to estimate a general probability of collision in Low Earth Orbit (LEO). Although the method described in this primer is a first order approximation, its results are reasonable. Furthermore, t...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.ars.usda.gov/research/publications/publication/?seqNo115=283857','TEKTRAN'); return false;" href="http://www.ars.usda.gov/research/publications/publication/?seqNo115=283857"><span>Genome-wide association with delayed puberty in swine</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ars.usda.gov/research/publications/find-a-publication/">USDA-ARS's Scientific Manuscript database</a></p> <p></p> <p></p> <p>An improvement in the proportion of gilts entering the herd that farrow a litter would increase overall herd performance and profitability. 
A significant proportion (10-30%) of gilts that enter the herd never farrow a litter; reproductive reasons account for approximately a third of gilt removals, w...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=Schlechty&pg=2&id=ED301113','ERIC'); return false;" href="https://eric.ed.gov/?q=Schlechty&pg=2&id=ED301113"><span>School-University Partnerships in Action: Concepts, Cases,</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Sirotnik, Kenneth A., Ed.; Goodlad, John I., Ed.</p> <p></p> <p>A general paradigm for ideal collaboration between schools and universities is proposed. It is based on a mutually collaborative arrangement between equal partners working together to meet self-interests while solving common problems. It is suggested that reasonable approximations to this ideal have great potential to effect significant…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3796771','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3796771"><span>Numerical approximation abilities correlate with and predict informal but not formal mathematics abilities</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Libertus, Melissa E.; Feigenson, Lisa; Halberda, Justin</p> <p>2013-01-01</p> <p>Previous research has found a relationship between individual differences in children’s precision when nonverbally approximating quantities and their school mathematics performance. 
School mathematics performance emerges from both informal (e.g., counting) and formal (e.g., knowledge of mathematics facts) abilities. It remains unknown whether approximation precision relates to both of these types of mathematics abilities. In the present study we assessed the precision of numerical approximation in 85 3- to 7-year-old children four times over a span of two years. Additionally, at the last time point, we tested children’s informal and formal mathematics abilities using the Test of Early Mathematics Ability (TEMA-3; Ginsburg & Baroody, 2003). We found that children’s numerical approximation precision correlated with and predicted their informal, but not formal, mathematics abilities when controlling for age and IQ. These results add to our growing understanding of the relationship between an unlearned, non-symbolic system of quantity representation and the system of mathematical reasoning that children come to master through instruction. PMID:24076381</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19900019726','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19900019726"><span>Approximation algorithms for planning and control</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Boddy, Mark; Dean, Thomas</p> <p>1989-01-01</p> <p>A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. 
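A minimal sketch of such a time-bounded approximation, one that returns its best answer so far whenever its deadline arrives, improving the answer while time remains. The Leibniz series for pi is just a stand-in workload:

```python
import time

# Anytime approximation: refine the answer until the time budget expires,
# then return whatever estimate is current. More time -> a better answer.
def anytime_pi(budget_seconds):
    deadline = time.monotonic() + budget_seconds
    total, k = 0.0, 0
    while time.monotonic() < deadline:
        total += (-1) ** k / (2 * k + 1)   # Leibniz series for pi/4
        k += 1
    return 4 * total, k  # current estimate and how many terms were summed

short, _ = anytime_pi(0.001)
long_, _ = anytime_pi(0.1)
print(short, long_)
```

A scheduler of the kind the abstract describes would allocate such budgets across several anytime algorithms according to the expected value of each one's improving answer.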
Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24076381','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24076381"><span>Numerical approximation abilities correlate with and predict informal but not formal mathematics abilities.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Libertus, Melissa E; Feigenson, Lisa; Halberda, Justin</p> <p>2013-12-01</p> <p>Previous research has found a relationship between individual differences in children's precision when nonverbally approximating quantities and their school mathematics performance. School mathematics performance emerges from both informal (e.g., counting) and formal (e.g., knowledge of mathematics facts) abilities. It remains unknown whether approximation precision relates to both of these types of mathematics abilities. In the current study, we assessed the precision of numerical approximation in 85 3- to 7-year-old children four times over a span of 2 years. In addition, at the final time point, we tested children's informal and formal mathematics abilities using the Test of Early Mathematics Ability (TEMA-3). 
We found that children's numerical approximation precision correlated with and predicted their informal, but not formal, mathematics abilities when controlling for age and IQ. These results add to our growing understanding of the relationship between an unlearned nonsymbolic system of quantity representation and the system of mathematics reasoning that children come to master through instruction. Copyright © 2013 Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22391374-double-parton-effects-jets-large-rapidity-separation','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22391374-double-parton-effects-jets-large-rapidity-separation"><span></span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Szczurek, Antoni; University of Rzeszów; Cisek, Anna</p> <p></p> <p>We discuss production of four jets pp → jjjjX with at least two jets with large rapidity separation in proton-proton collisions at the LHC through the mechanism of double-parton scattering (DPS). The cross section is calculated in a factorized approximation. Each hard subprocess is calculated in LO collinear approximation. The LO pQCD calculations are shown to give a reasonably good description of CMS and ATLAS data on inclusive jet production. It is shown that the relative contribution of DPS grows with increasing rapidity distance between the most remote jets, with center-of-mass energy, and with decreasing (mini)jet transverse momenta. 
We also show results for angular azimuthal dijet correlations calculated in the framework of the k{sub t}-factorization approximation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22490459-electronic-properties-excess-cr-fe-site-fecr-sub-se-alloy','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22490459-electronic-properties-excess-cr-fe-site-fecr-sub-se-alloy"><span>Electronic properties of excess Cr at Fe site in FeCr{sub 0.02}Se alloy</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Kumar, Sandeep, E-mail: sandeepk.iitb@gmail.com; Singh, Prabhakar P.</p> <p>2015-06-24</p> <p>We have studied the effect of substituting excess transition-metal chromium (Cr) on the Fe sub-lattice on the electronic structure of the iron-selenide alloy FeCr{sub 0.02}Se. In our calculations, we used the Korringa-Kohn-Rostoker coherent potential approximation method in the atomic sphere approximation (KKR-ASA-CPA). We obtained a different band structure for this alloy with respect to the parent FeSe, which may be the reason for the change in its superconducting properties. We performed unpolarized calculations for the FeCr{sub 0.02}Se alloy in terms of the density of states (DOS) and Fermi surfaces. The local density approximation (LDA) is used for the exchange-correlation potential.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JPhCS.948a2001M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JPhCS.948a2001M"><span>The effect of creative problem solving on students’ mathematical adaptive reasoning</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Muin, A.; Hanifah, S. 
H.; Diwidian, F.</p> <p>2018-01-01</p> <p>This research was conducted to analyse the effect of the creative problem solving (CPS) learning model on students’ mathematical adaptive reasoning. The method used in this study was quasi-experimental with a randomized post-test-only control group design. Two classes were sampled by a cluster random sampling technique: an experimental class (CPS) of 40 students and a control class (conventional) of 40 students. Based on hypothesis testing with the t-test at the 5% significance level, a significance value of 0.0000 was obtained, which is less than α = 0.05. This shows that the mathematical adaptive reasoning skills of students taught with the CPS model were higher than those of students taught with the conventional model. The results also showed that the most prominent aspect of adaptive reasoning developed through CPS was inductive intuitive. The two aspects of adaptive reasoning, inductive intuitive and deductive intuitive, were mostly balanced; the difference between them was not large. The CPS model can develop students’ mathematical adaptive reasoning skills. 
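The comparison this abstract reports, an independent-samples t-test on two classes of 40 students at the 5% level, can be sketched on synthetic scores. The score distributions below are invented for illustration and are not the study's data:

```python
import math, random, statistics

random.seed(1)

# Illustrative synthetic adaptive-reasoning scores for a CPS class and a
# conventional class, 40 students each (not the study's data).
cps = [random.gauss(80, 10) for _ in range(40)]
conv = [random.gauss(65, 10) for _ in range(40)]

# Pooled two-sample t statistic, as in a classic independent-samples t-test.
def t_statistic(a, b):
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(sp2 * (1/na + 1/nb))

t = t_statistic(cps, conv)
# With df = 78, the two-sided 5% critical value is about 1.99; a t statistic
# far beyond it corresponds to a p-value near zero, as the abstract reports.
print(t > 1.99)
```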
The CPS model can thoroughly facilitate the development of mathematical adaptive reasoning skills.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_25 --> </div><!-- container --> <script type="text/javascript"><!-- // var lastDiv = ""; function showDiv(divName) { // hide last div if (lastDiv) { document.getElementById(lastDiv).className = "hiddenDiv"; } //if value of the box is not nothing and an object with that name exists, then change the class if (divName && document.getElementById(divName)) { document.getElementById(divName).className = "visibleDiv"; lastDiv = divName; } } //--> </script> <script> /** * Function that tracks a click on an outbound link in Google Analytics. * This function takes a valid URL string as an argument, and uses that URL string * as the event label. */ var trackOutboundLink = function(url,collectionCode) { try { h = window.open(url); setTimeout(function() { ga('send', 'event', 'topic-page-click-through', collectionCode, url); }, 1000); } catch(err){} }; </script> <!-- Google Analytics --> <script> (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) })(window,document,'script','//www.google-analytics.com/analytics.js','ga'); ga('create', 'UA-1122789-34', 'auto'); ga('send', 'pageview'); </script> <!-- End Google Analytics --> <script> showDiv('page_1') </script> </body> </html>